Driving an Ornstein–Uhlenbeck Process to Desired First-Passage Time Statistics

Khem Raj Ghusinga, Vaibhav Srivastava, Abhyudai Singh

===================================================================================================

First-passage time (FPT) of an Ornstein-Uhlenbeck (OU) process is of immense interest in a variety of contexts. This paper considers an OU process with two boundaries, one of which is absorbing while the other one could be either reflecting or absorbing, and studies the control strategies that can lead to desired FPT moments. Our analysis shows that the FPT distribution of an OU process is scale invariant with respect to the drift parameter, i.e., the drift parameter only controls the mean FPT and does not affect the shape of the distribution. This allows one to independently control the mean and the coefficient of variation (CV) of the FPT. We show that increasing the threshold may increase or decrease the CV of the FPT, depending upon whether or not one of the thresholds is reflecting. We also explore the effect of the control parameters on the FPT distribution, and find parameters that minimize the distance between the FPT distribution and a desired distribution.

§ INTRODUCTION

The first passage time (FPT) is the earliest time at which a trajectory of a stochastic process initially inside a bounded region leaves the region. FPTs are extensively used across disciplines, including neuroscience <cit.>, biology <cit.>, finance <cit.>, ecology <cit.>, engineering <cit.>, statistical physics <cit.>, and health science <cit.>, to model several interesting phenomena. For example, the FPT of diffusion processes is used to model human decision-making <cit.>, animal foraging <cit.>, financial markets <cit.>, and clock synchronization <cit.>. In this paper, we study control of the FPT statistics of an Ornstein-Uhlenbeck (OU) process between two fixed boundaries, one of which is absorbing while the other can be absorbing or reflecting. An OU process belongs to the class of diffusion processes and is a generalization of the drift-diffusion process. The OU process is also a continuum approximation to several discrete-time Markov models. For biological phenomena modeled by FPTs, the analysis in this paper can provide insights into the mechanisms these systems employ to cope with uncertainty and ensure resilient performance; for example, how attention and memory are modulated in human decision-making, how a gene's expression is regulated to control the timing of its response, or how animals regulate their foraging activity. For engineered systems, these analyses may provide insights into optimal control laws that delay an undesired event such as an epidemic outbreak, or optimal control laws that achieve a desired distribution for the time to a certain event, such as the adoption of a product by a certain fraction of a population. The problem of steering a linear stochastic system to a desired final distribution has been studied <cit.>.
However, computing and controlling the FPT distribution is significantly more complicated than controlling the evolution of trajectories without boundaries. Indeed, the Fokker-Planck equation for the OU process is nonlinear and has limited tractability <cit.>. Control of the FPT distribution for an OU process has been studied in <cit.>, wherein the boundary of the region is controlled to steer the FPT distribution to a Gamma distribution. Loosely speaking, this problem can be thought of as boundary control of a PDE <cit.>, where the underlying PDE is the Fokker-Planck equation. In our recent work <cit.>, similar problems were explored in the context of gene expression. Therein the stochastic process is a continuous-time, discrete-state process defined on the positive integers, with a reflecting boundary at 0 and a fixed absorbing boundary. The results showed that the best strategy to minimize the coefficient of variation (CV) of the FPT for a fixed mean FPT is a constant rate of production (forward hopping) and no decay (backward hopping). Interpreted in the continuum limit, these results mean that the optimal stochastic process (within OU processes) for minimizing the CV of the FPT for a given mean is the drift-diffusion process. In other words, the optimal control is a feedforward controller and requires no state feedback. In this paper, we explore this control problem in more detail. Although the FPT properties of OU processes have been extensively studied in the literature <cit.>, a control theoretic analysis of how the process can be steered to some desired FPT statistics is lacking. This paper provides a unified approach based on characteristic functions to find control parameters that lead to desired FPT distributions. The approach is analytically and numerically tractable and provides important insights into the FPT behavior of the OU process. The major contributions of this paper are threefold. First, we show that the FPT distribution for the OU process is scale-invariant with respect to the drift parameter, which facilitates independent tuning of the mean and CV of the FPT. Second, using the characteristic function of the FPT, we explore the space of control parameters to understand the variation of the FPT statistics with these parameters. Third, we determine optimal control parameters to achieve desired FPT statistics. The paper is organized as follows. Section <ref> introduces the problem. Section <ref> presents background results on the characteristic function for the FPT of an OU process. Section <ref> uses the characteristic function to find properties of the moments of the FPT, and optimal parameters that lead to desired moments. A more general control problem that explores the parameter space to reach a desired FPT distribution is studied in Section <ref>. Finally, conclusions and future work are discussed in Section <ref>.

§ PROBLEM DESCRIPTION

Consider an OU process defined by the following stochastic differential equation

dx = -θ x dt + σ√(θ) dw_t.

Here x is the state, θ∈ℝ_≥0 and σ∈ℝ_≥0 are parameters, and dw_t are i.i.d. Wiener increments. We will refer to θ as the drift and σ as the relative noise strength. Let a and b denote two thresholds such that a<b. The FPT, τ, for x(t) to cross either of these thresholds is mathematically defined as

τ = inf{t : x(t) ∉ (a,b) | x(0) = x_0 ∈ (a,b)}.

Our aim is to investigate the optimal drift θ and relative noise strength σ that lead to desired FPT moments. Such problems could be of relevance in many contexts wherein a desired mean FPT and a tolerable CV are required. A minimal simulation sketch of this setup is given below.
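The following Python sketch (ours, not from the paper; the function name fpt_samples and all parameter values are illustrative assumptions) estimates the mean and CV of τ by Euler–Maruyama simulation of the SDE above with both thresholds absorbing.

import numpy as np

def fpt_samples(theta, sigma, a, b, x0, n_paths=20000, dt=1e-3, t_max=200.0):
    # Monte Carlo FPT samples for dx = -theta*x dt + sigma*sqrt(theta) dw,
    # with absorbing thresholds at a and b.
    rng = np.random.default_rng(0)
    x = np.full(n_paths, float(x0))
    t = np.zeros(n_paths)
    fpt = np.full(n_paths, np.nan)
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(int(t_max / dt)):
        if not alive.any():
            break
        dw = rng.normal(0.0, np.sqrt(dt), alive.sum())
        x[alive] += -theta * x[alive] * dt + sigma * np.sqrt(theta) * dw
        t[alive] += dt
        hit = alive & ((x <= a) | (x >= b))   # paths absorbed in this step
        fpt[hit] = t[hit]
        alive &= ~hit
    return fpt[~np.isnan(fpt)]

tau = fpt_samples(theta=1.0, sigma=1.0, a=-1.0, b=1.0, x0=0.0)
print("mean FPT:", tau.mean(), "  CV:", tau.std() / tau.mean())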
This problem can be generalized further by demanding an FPT distribution that is as close as possible to a desired distribution. The thresholds a and b could both be absorbing, or one absorbing and the other reflecting. To handle these problems in a unified manner, we propose to use the characteristic function of the FPT of the OU process. Not only can the moments be easily computed from the characteristic function, but it also provides a useful way to characterize the distance between two probability density functions. To be more specific, the characteristic function, ψ_τ(α), of the FPT, τ, is defined as

ψ_τ(α) = E[e^{iατ}], α ∈ ℝ.

The m-th order moment E[τ^m] can be computed as

E[τ^m] = i^{-m} [d^m/dα^m ψ_τ(α)]_{α=0}.

Furthermore, the following result due to Parseval–Plancherel provides a metric to quantify the difference between two probability density functions in terms of their characteristic functions: Let f_τ(t) denote the probability density function of the FPT, and let f_d(t) be a desired probability density function. The distance between these functions can be quantified in terms of their characteristic functions ψ_τ(α) and ψ_d(α) as

∫_0^∞ [f_τ(t) - f_d(t)]^2 dt = (1/2π) ∫_{-∞}^{∞} |ψ_τ(α) - ψ_d(α)|^2 dα,

provided that the integrals exist <cit.>.

§ BACKGROUND RESULTS ON FPT OF AN OU PROCESS

In this section, we provide background results on the FPT of the OU process (<ref>). For completeness, we provide a detailed computation of the characteristic function using standard tools from the theory of stochastic processes (see <cit.>). We consider two thresholds at a and b, both of which could be absorbing, or one of them could be reflecting.

§.§ When both thresholds are absorbing

To derive the characteristic function, ψ_τ(α), for the OU process in (<ref>), we define g(y) as

g(y) = E[e^{iατ(y)}],

where y represents an initial condition, and τ(y) denotes the FPT starting from the initial condition y. Note that the characteristic function is related to g(y) via ψ_τ = g(x_0). The computation of g(y) from first principles is discussed below. Consider the evolution of the OU process starting from y over an infinitesimal time interval h. Denote x_h = x(h) = y - θ y h + σ√(θ) w_h. It follows that

g(y) = E_{x_h} E_{τ(x_h)}[e^{iα(h+τ(x_h))}] = e^{iαh} E_{x_h}[g(x_h)] = e^{iαh}(g(y) - θ y h dg/dy + (1/2)σ^2 θ h d^2g/dy^2) + O(h^2).

Taking the limit h → 0 results in

(1/2)σ^2 θ d^2g(y)/dy^2 - θ y dg(y)/dy + iα g(y) = 0.

We are interested in the solution to the above differential equation, which can be obtained using the series method. Let

g(y) = ∑_{n=0}^{∞} c_n y^n.

Plugging this in (<ref>) results in

(1/2)σ^2 θ ∑_{n=0}^{∞}(n+2)(n+1)c_{n+2}y^n - θ ∑_{n=0}^{∞} n c_n y^n + iα ∑_{n=0}^{∞} c_n y^n = 0.

It is straightforward to see that (<ref>) results in the following recursion for the coefficients

c_{n+2} = 2(-iα + nθ)/(σ^2 θ (n+2)(n+1)) c_n.

The above recursion yields the following solution

c_n = 2^n Γ[(n - iα/θ)/2] c_0 / (Γ[-iα/(2θ)] σ^n n!),  n = 0, 2, 4, …
c_n = 2^{n-1} Γ[(n - iα/θ)/2] c_1 / (Γ[(-iα + θ)/(2θ)] σ^{n-1} n!),  n = 1, 3, 5, …,

where Γ is the Gamma function. A general solution to (<ref>) is given by (<ref>), with the coefficients given by (<ref>). Simplifying the series in (<ref>) via symbolic manipulation in Mathematica yields

g(y) = c_0 _1F_1(-iα/(2θ), 1/2, y^2/σ^2) + c_1 y _1F_1((θ-iα)/(2θ), 3/2, y^2/σ^2),

where _1F_1 represents Kummer's confluent hypergeometric function. The solution in (<ref>) contains two unknown coefficients c_0 and c_1, which can be computed using the boundary conditions. When both thresholds a and b are absorbing, the boundary conditions are given by g(a)=1 and g(b)=1. A numerical sanity check of this general solution is sketched below.
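As a quick numerical sanity check (ours, not part of the paper; parameter values are arbitrary), one can verify with the mpmath library that the even solution above satisfies the ODE:

import mpmath as mp

# Check that g(y) = 1F1(-i*alpha/(2*theta), 1/2, (y/sigma)^2) satisfies
# (1/2) sigma^2 theta g'' - theta*y g' + i*alpha g = 0.
theta, sigma, alpha = mp.mpf("1.3"), mp.mpf("0.7"), mp.mpf("0.4")
a_param = -1j * alpha / (2 * theta)

def g(y):
    return mp.hyp1f1(a_param, mp.mpf(1) / 2, (y / sigma) ** 2)

y0 = mp.mpf("0.35")
residual = (0.5 * sigma**2 * theta * mp.diff(g, y0, 2)
            - theta * y0 * mp.diff(g, y0, 1)
            + 1j * alpha * g(y0))
print(abs(residual))  # ~1e-15, i.e., zero to numerical precision

The odd solution y·_1F_1((θ-iα)/(2θ), 3/2, y^2/σ^2) can be checked in exactly the same way.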
Using these boundary values, c_0 and c_1 can be determined by solving

c_0 _1F_1(-iα/(2θ), 1/2, a^2/σ^2) + c_1 a _1F_1((θ-iα)/(2θ), 3/2, a^2/σ^2) = 1,
c_0 _1F_1(-iα/(2θ), 1/2, b^2/σ^2) + c_1 b _1F_1((θ-iα)/(2θ), 3/2, b^2/σ^2) = 1.

Using these coefficients in (<ref>) and evaluating g(x_0) results in the following expression for the characteristic function

ψ_τ(α) = N_ψ/D_ψ,

where

N_ψ = -x_0 _1F_1(-iα/(2θ), 1/2, a^2/σ^2) _1F_1((θ-iα)/(2θ), 3/2, x_0^2/σ^2) + a _1F_1((θ-iα)/(2θ), 3/2, a^2/σ^2) _1F_1(-iα/(2θ), 1/2, x_0^2/σ^2) - b _1F_1((θ-iα)/(2θ), 3/2, b^2/σ^2) _1F_1(-iα/(2θ), 1/2, x_0^2/σ^2) + x_0 _1F_1(-iα/(2θ), 1/2, b^2/σ^2) _1F_1((θ-iα)/(2θ), 3/2, x_0^2/σ^2),

D_ψ = a _1F_1((θ-iα)/(2θ), 3/2, a^2/σ^2) _1F_1(-iα/(2θ), 1/2, b^2/σ^2) - b _1F_1(-iα/(2θ), 1/2, a^2/σ^2) _1F_1((θ-iα)/(2θ), 3/2, b^2/σ^2).

The hypergeometric functions _1F_1 can be converted to other special functions, such as Hermite functions, parabolic cylinder functions, etc. <cit.>. Results on the FPT of the OU process are presented in some of these forms in standard texts <cit.>. If the FPT characteristic function is desired for a single threshold, it can be computed as a special case of the two-threshold case analyzed here. There are two possibilities: either the initial condition is above the threshold, or below it. If the initial condition is above the threshold, then we may analyze this case as a two-threshold case by letting b → +∞ and considering a as our threshold of interest. In the other case, when the initial condition x_0 is below the threshold, we let a → -∞ and take the threshold of interest at b. Recall our definition of the FPT for the two-threshold case given in (<ref>). The initial condition x_0 there is assumed to lie between the thresholds a and b. If that were not the case, then the two-threshold problem also reduces to a single-threshold problem. More specifically, if x_0<a<b, then the process will always reach a before b. Therefore, the FPT is the same as that for a single threshold at a. Analogously, if x_0>b>a, then the process will hit the threshold b before the threshold a, and the FPT is the same as that for reaching a single threshold at b.

§.§ When one of the thresholds is reflecting

Another possible situation of interest arises when one of the thresholds is reflecting. For example, we could assume that the threshold at a is not absorbing and the process is reflected back as soon as it hits a. We are interested in computing the characteristic function of the first time at which the process reaches the threshold b. The computation follows the same principles as those for the two-threshold case, and therefore reduces to solving the differential equation (<ref>) for g(y). The general form of the solution in (<ref>) can be used in this case, with appropriate boundary conditions given by g(b)=1 and g'(a)=0 (see <cit.> for more details). Note that if the absorbing threshold is at a and b is the reflecting threshold, then we would have the boundary conditions g(a)=1 and g'(b)=0. We do not analyze this case here. Let us denote the resulting FPT characteristic function by ψ_τr.
Using the boundary conditions to compute c_0 and c_1 in (<ref>) and then evaluating g(x_0) results in the following expression for the characteristic function

ψ_τr(α) = N_ψr/D_ψr,

where

N_ψr = 6(iα/θ) b x_0 _1F_1(1-iα/(2θ), 3/2, a^2/σ^2) _1F_1((θ-iα)/(2θ), 3/2, x_0^2/σ^2) + 2a^2(1-iα/θ) _1F_1(3/2-iα/(2θ), 5/2, a^2/σ^2) _1F_1(-iα/(2θ), 1/2, x_0^2/σ^2) + 3σ^2 _1F_1((θ-iα)/(2θ), 3/2, a^2/σ^2) _1F_1(-iα/(2θ), 1/2, x_0^2/σ^2),

D_ψr = 2a^2(1-iα/θ) _1F_1(3/2-iα/(2θ), 5/2, a^2/σ^2) _1F_1(-iα/(2θ), 1/2, b^2/σ^2) + 6(iα/θ) b^2 _1F_1(1-iα/(2θ), 3/2, a^2/σ^2) _1F_1((θ-iα)/(2θ), 3/2, b^2/σ^2) + 3σ^2 _1F_1((θ-iα)/(2θ), 3/2, a^2/σ^2) _1F_1(-iα/(2θ), 1/2, b^2/σ^2).

So far we have computed the characteristic functions for the FPT of the OU process in various scenarios. The characteristic function can now be used to explore how the various parameters affect the FPT statistics, and how they could be tuned to achieve a desired FPT behavior.

§ OPTIMAL PARAMETERS FOR DESIRED FPT MOMENTS

In this section, we investigate the effect of the various parameters of the OU process on the FPT moments. Then, we examine how the parameters could be tuned so as to obtain desired FPT moments.

§.§ Scale invariance of the FPT distribution

In the previous section, we derived the characteristic functions of the FPT distribution of the OU process under different scenarios (both thresholds absorbing, or one of them reflecting). More generally, the characteristic function for other scenarios can also be derived from the generalized form in (<ref>), with appropriate boundary conditions. An important point to note is that in both (<ref>) and (<ref>), the drift parameter θ always appears together with α (as in α/θ). Therefore, if we consider the rescaled variable τ̃ = θτ and find a general form similar to (<ref>), it would be given by

g̃(y) = E[e^{iατ̃(y)}] = E[e^{iαθτ(y)}] = c_0 _1F_1(-iα/2, 1/2, y^2/σ^2) + c_1 y _1F_1((1-iα)/2, 3/2, y^2/σ^2).

Thus, the general solution g̃(y) for the rescaled variable τ̃ = θτ does not depend on θ. As the coefficients c_0 and c_1 above are obtained from the boundary conditions, they are also independent of θ. An alternate way to infer this feature is to look at (<ref>). As dw is of the order of √(dt), we can rescale time by θ (as in t̃ = θt) and rewrite (<ref>) as

dx = -x dt̃ + σ dw_t̃.

Because θ does not appear in the new time scale, the characteristic function of the FPT with this rescaling is independent of θ as well. To understand the implications of this property, consider the characteristic function of the rescaled variable τ̃:

ψ_τ̃(α) = E[e^{iατ̃}] = 1 + iα E[τ̃] + (i^2 α^2/2!) E[τ̃^2] + ….

Since θ does not appear in the above characteristic function, all moments of τ̃ are independent of θ. Furthermore, because τ̃ = θτ, we have

E[τ̃^m] = θ^m E[τ^m], m ≥ 1.

Since E[τ̃^m] does not depend upon θ, this implies that

E[τ^m] ∝ θ^{-m},

and appropriately scaled moments of the FPT, E[τ^m]/(E[τ])^m, are independent of θ. It follows that if we operate with normalized higher statistical moments such as the coefficient of variation (CV), skewness, kurtosis, etc., then changing the drift parameter θ only changes the mean FPT E[τ]. Scale invariance of this kind has been observed in distributions of other quantities <cit.>, and also of FPTs in other contexts <cit.>. In terms of the characteristic function, we illustrate the scale invariance property in Fig. <ref> for the case when both thresholds are absorbing, where the real and imaginary parts of the characteristic function of the FPT are plotted. As the drift parameter θ is varied, the characteristic function ψ_τ(α) does not change in shape; it merely rescales along the α axis. A simulation sketch illustrating this invariance is given below.
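As a hedged numerical illustration (ours, not from the paper), reusing the fpt_samples sketch given earlier: the estimated CV remains statistically constant as θ varies, while the mean FPT scales like 1/θ.

# Scale invariance check: mean FPT ~ 1/theta, CV essentially unchanged.
for theta in (0.5, 1.0, 2.0, 4.0):
    tau = fpt_samples(theta=theta, sigma=1.0, a=-1.0, b=1.0, x0=0.0)
    print(f"theta={theta}: mean={tau.mean():.3f}, CV={tau.std()/tau.mean():.3f}")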
Changing the relative noise strength σ, by contrast, affects the shape of the characteristic function. Similar behavior is also seen in the case when one of the thresholds is reflecting, though the results are not shown in order to avoid repetition.

§.§ Tuning FPT moments

Recall the form of (<ref>). Suppose that we are interested in tuning the two parameters (θ and σ) of the process so as to obtain desired moments of the FPT. Since θ only changes the mean, and the other quantities of interest (such as the coefficient of variation (CV), skewness, etc.) are independent of it, one can independently tune the mean FPT and one other quantity. Typically, the CV is the other quantity of interest because it represents the noise in the FPT. What remains to be seen is how the relative noise strength σ affects the mean and CV of the FPT. One could then choose an appropriate σ such that the CV is at a desired level, and then tune θ to get the desired mean. It turns out that both the mean and the CV are decreasing functions of σ in the case of two absorbing thresholds. The CV eventually approaches a limiting value

√(((x_0-a)^2+(b-x_0)^2)/(3(x_0-a)(b-x_0))),

which corresponds to the CV of the FPT for a diffusion with zero drift. In the case when the threshold at a is reflecting, the mean still decreases with increasing σ. The CV, on the other hand, shows a slight dip before increasing to a limiting value

√(2((x_0-a)^2+(b-a)^2)/(3(b-x_0)((x_0-a)+(b-a))))

that corresponds to the CV of the FPT for a diffusion with zero drift (see the worked specialization below). Collectively, these results show that if one were to tune θ and σ, then any desired mean FPT could be achieved, but there is a limit to the achievable CV. Achieving a low CV in the case of two absorbing thresholds requires a high value of σ, whereas in the case when one of the barriers is reflecting, there is an optimal σ that minimizes the CV.

§.§ Effect of thresholds and initial condition

Our analysis thus far has assumed fixed thresholds and a given initial condition. In Fig. <ref>, we examine how the results change when one of these parameters is changed. As a first case, we consider symmetric thresholds, i.e., a=-b, and the initial condition at x_0=0. In this case, increasing b leads to an increase in the CV of the FPT if both thresholds are absorbing. In contrast, if a is taken to be reflecting, then there is an optimal threshold b at which the CV attains a minimum. Increasing the threshold beyond a certain point does not affect the CV anymore. This corresponds to the situation when the absorbing threshold(s) is far from the initial condition and crossing it is dominated purely by noise (see Fig. <ref>, top). Next, we consider the case when the initial condition x_0 is not symmetric. Assuming the thresholds to be a=-b, we take two cases: x_0=-b/2 and x_0=b/2. When both a and b are absorbing, increasing the threshold decreases the CV of the FPT, and the CV seems to approach the limit of the symmetric initial condition (Fig. <ref>, middle and bottom). However, when the threshold a is taken as reflecting, the CV properties change depending upon x_0. More specifically, when x_0 is near the reflecting threshold, increasing the threshold increases the CV. In contrast, when x_0 is near the absorbing threshold, increasing the threshold leads to a reduction in the CV of the FPT. To sum up, the FPT distribution for an OU process is scale invariant with respect to the drift parameter, and thereby allows independent tuning of the mean FPT and one other statistical quantity that consists of appropriately scaled moments of the FPT.
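For concreteness, specializing the two limiting values above to symmetric thresholds a = -b with x_0 = 0 gives (our computation, following directly from the displayed formulas):

\[
\mathrm{CV}_{\text{abs}} \to \sqrt{\frac{b^2+b^2}{3\,b\cdot b}}=\sqrt{\frac{2}{3}}\approx 0.816,
\qquad
\mathrm{CV}_{\text{refl}} \to \sqrt{\frac{2\,\bigl(b^2+(2b)^2\bigr)}{3\,b\,(b+2b)}}=\frac{\sqrt{10}}{3}\approx 1.054,
\]

so the zero-drift CV ceiling is independent of b and is strictly lower when both ends are absorbing.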
Obtaining an FPT distribution that matches more than two statistical quantities of interest is, however, not possible in this setting. A question of interest at this point is how close the FPT distribution can get to a given distribution.

§ OPTIMAL PARAMETERS FOR DESIRED FPT DISTRIBUTION

Suppose that instead of tuning the moments, we are interested in tuning the distribution of the FPT itself. More specifically, we are interested in choosing the parameters such that the FPT distribution is as close to a desired distribution as possible. In this section, we discuss the tuning of the OU process to achieve such behavior. To this end, we consider the relation between the probability density function and the characteristic function stated in (<ref>). Although the desired distribution could be specified as any distribution of interest, we consider the Gamma distribution here. The rationale is that the Gamma distribution is the distribution of a sum of exponential random variables and, in a limiting case, it can even represent a degenerate (deterministic) distribution. In Fig. <ref>, we assume both thresholds to be absorbing and plot the distance metric between the desired distribution and the FPT distribution as a function of θ, with the value of σ taken to be fixed. It can be seen that there is an optimal value of θ that minimizes the distance metric. Furthermore, if σ is increased and this process is iterated, we see that the optimal value of θ decreases, and the minimum of the distance metric decreases as well. Referring to Fig. <ref>, it can be seen that fixing the value of σ basically fixes the shape of the characteristic function; then, by changing θ, an appropriate scale is chosen so that the characteristic function matches the desired one. By iterating over σ, we change the shape of the characteristic function and find the optimal shape that matches the desired characteristic function with an appropriate scaling θ. Even in the case when one of the boundaries is reflecting, we obtain optimal parameters that minimize the distance metric. The results are not presented here to avoid redundancy.

§ CONCLUSION

The OU process is used to model stochastic phenomena in a variety of contexts. In particular, the FPT of the OU process has been used to study decision-making <cit.>, animal foraging <cit.>, financial markets <cit.>, and synchronization of clocks <cit.>. In this paper, we consider an OU process with two thresholds. Both of the thresholds could be absorbing, or one of them could be reflecting while the other one is absorbing. We analyze the effect of the OU process parameters on its FPT statistics, and how the parameters could be chosen to obtain desired FPT statistics. Future work will focus on analyzing the optimal parameters without restricting θ to be positive, thereby allowing the OU process to not just be mean-reverting. Given the numerical tractability of characteristic functions, it would also be interesting to apply the approach presented here to other stochastic processes and to explore optimal/sub-optimal control strategies that result in desired FPT statistics.
http://arxiv.org/abs/1703.08846v1
{ "authors": [ "Khem Raj Ghusinga", "Vaibhav Srivastava", "Abhyudai Singh" ], "categories": [ "math.OC" ], "primary_category": "math.OC", "published": "20170326165233", "title": "Driving an Ornstein--Uhlenbeck Process to Desired First-Passage Time Statistics" }
Proper holomorphic immersions into Stein manifolds with the density property

Franc Forstnerič

Abstract In this paper we prove that every Stein manifold S admits a proper holomorphic immersion into any Stein manifold of dimension 2 dim S enjoying the density property or the volume density property.

Keywords Stein manifold, density property, proper holomorphic immersion

MSC (2010): 32E10, 32E20, 32E30, 32H02; 32Q99

§ INTRODUCTION

A complex manifold X is said to enjoy the density property if the Lie algebra generated by all the ℂ-complete holomorphic vector fields is dense in the Lie algebra of all holomorphic vector fields on X (see Varolin <cit.> or <cit.>). Similarly one defines the volume density property of a complex manifold endowed with a holomorphic volume form, as well as the algebraic versions of these properties (see Kaliman and Kutzschebauch <cit.>). This important class has been the focus of intensive recent research for affine algebraic and Stein manifolds; see the papers <cit.>, among others. These manifolds are highly symmetric and enjoy the Andersén-Lempert property <cit.> on approximation of isotopies of injective holomorphic maps between Runge domains by holomorphic automorphisms; see <cit.> for ℂ^n and <cit.>. Furthermore, every complex manifold with the density property is an Oka manifold; see <cit.>. The following general embedding theorem has been discovered recently.

(Andrist et al. <cit.>.) Let X be a Stein manifold with the (volume) density property. If S is a Stein manifold and 2 dim S + 1 ≤ dim X, then any continuous map S → X is homotopic to a proper holomorphic embedding S ↪ X.

In this paper we complete the picture by proving the corresponding result on the existence of proper holomorphic immersions into manifolds of double dimension.

Let X be a Stein manifold enjoying the density property or the volume density property. If S is a Stein manifold with 2 dim S = dim X, then any continuous map S → X is homotopic to a proper holomorphic immersion with simple double points.

An immersion f : S → X is said to have simple double points if for any pair of distinct points p, q ∈ S with f(p) = f(q) the tangent planes df_p(T_pS) and df_q(T_qS) intersect trivially within T_{f(p)}X, and f has no triple points. Clearly, any such pair (p,q) is isolated, and if f is proper then the sequence (p_j,q_j) ∈ S × S of pairs of double points is such that each of the sequences p_j and q_j is discrete in S. The special case of Theorem <ref> with X = ℂ^{2 dim S} is a classical theorem of Bishop <cit.> and Narasimhan <cit.>. When S is an open Riemann surface and dim X = 2, Theorem <ref> was proved beforehand by Andrist and Wold <cit.>. Let us mention a few related open problems. Assume that S and X are Stein manifolds and X has the (volume) density property. Can one always find a proper holomorphic map S → X when dim S < dim X? This is possible for X = ℂ^n according to the papers of Bishop and Narasimhan cited above. The second question is whether the immersion and the embedding dimensions in the above theorems can be lowered. Recall that every Stein manifold of dimension d ≥ 1 admits a proper holomorphic immersion into the complex Euclidean space of dimension [(3d+1)/2], and if d > 1 then it admits a proper holomorphic embedding into ℂ^{[3d/2]+1} (see Eliashberg and Gromov <cit.> and Schürmann <cit.>).
Examples of Forster <cit.> show that this embedding dimension is minimal for every d > 1, and the immersion dimension is minimal for even values of d and is off by at most one for odd values of d. The optimal non-proper immersion dimension is [3d/2]. Their proofs rely on the product structure of ℂ^n and do not generalize to more general target manifolds. The problem whether every open Riemann surface embeds properly holomorphically into ℂ^2 is still open, although considerable progress has been made in the last decade. A discussion of these topics can be found in <cit.>. The paper is organized as follows. In Sect. <ref> we collect some preliminaries and develop the notion of a very special Cartan pair, which plays an important role in the proof. In Sect. <ref> we prove the main technical result, Lemma <ref>. Although it is similar to <cit.>, its proof strongly relies on the fact that the attaching set of the convex bump in a very special Cartan pair can be an arbitrarily thin convex slab. This allows us to ensure that the immersion into X, given in an induction step of the proof, is an embedding on the attaching set of the bump. This condition is important in the rest of the argument, where the (volume) density property of X is used to approximate the given immersion on the attaching set by an immersion of the bump. The rest of the procedure, gluing these two maps etc., is the same as in <cit.>. With Lemma <ref> in hand, Theorem <ref> is proved in exactly the same way as <cit.>. By using the technique of this paper, it is possible to obtain a more precise version of Theorem <ref> in the spirit of <cit.> and <cit.>, i.e., with interpolation and control of the image of the set S ∖ K for a given compact 𝒪(S)-convex subset K of the source Stein manifold S. More precisely, assume that S and X are as in Theorem <ref>, K ⊂ S is a compact 𝒪(S)-convex set, U ⊂ S is an open set containing K, S' ⊂ S is a closed complex subvariety of S, L ⊂ X is a compact 𝒪(X)-convex set, and f : U ∪ S' → X is a holomorphic map such that f : S' → X is an immersion with simple double points satisfying f(bK ∪ (S' ∖ K)) ∩ L = ∅. Then we can approximate f uniformly on K by a holomorphic immersion F : S → X with simple double points satisfying F(S ∖ K) ⊂ X ∖ L and F|_{S'} = f|_{S'}. If in addition the map f|_{S'} : S' → X is proper (in particular, if S' = ∅), then F can also be chosen proper. Lemma <ref> also serves to complete the details in the proof of <cit.>, due to T. Ritter and the author, in the second case when the compact set L ⊂ ℂ^n in the statement of that theorem is polynomially convex and n = 2 dim X. (In <cit.>, X denotes the source Stein manifold, which corresponds to S in the present paper, while the target manifold is ℂ^n.) The proof of <cit.> is complete when L is convex or holomorphically contractible, while the case when L is polynomially convex and n = 2 dim X can be seen by supplementing the proof in <cit.> with Lemma <ref> of the present paper.

§ PRELIMINARIES

Let 𝒪(S) denote the algebra of all holomorphic functions on a complex manifold S, endowed with the compact-open topology. A compact set K in S is said to be 𝒪(S)-convex if for every point p ∈ S ∖ K there exists a function g ∈ 𝒪(S) with |g(p)| > sup_K |g|. If S is a closed complex subvariety of a Stein manifold X, then a compact set K ⊂ S is 𝒪(S)-convex if and only if it is 𝒪(X)-convex. We shall need a version of this result for immersed submanifolds with simple double points.
Assume that S and X are Stein manifolds and f : S → X is a proper holomorphic immersion with only simple double points. Then a compact subset K ⊂ S is 𝒪(S)-convex if and only if its image f(K) ⊂ X is 𝒪(X)-convex.

Let Δ_S denote the diagonal of S × S. The hypothesis on f implies that there are at most countably many pairs (a_j,b_j) ∈ S × S ∖ Δ_S such that the sequences a_j and b_j are discrete in S, f(a_j) = f(b_j) for all j, and any (a,b) ∈ S × S ∖ Δ_S satisfying f(a) = f(b) is one of the pairs (a_j,b_j). The image Σ = f(S) ⊂ X is a closed complex subvariety of X whose only singularities are simple normal crossings at the points c_j = f(a_j) = f(b_j). Since f is proper, the sequence c_j ∈ Σ is discrete. Assume that the compact set K ⊂ S is 𝒪(S)-convex. Let p ∈ Σ ∖ f(K). If p ≠ c_j for all j, then p = f(q) for a unique point q ∈ S ∖ K. Since the sequences a_j, b_j are discrete in S, the Cartan-Oka-Weil theorem gives a function g ∈ 𝒪(S) such that g(q) = 1, |g| < 1/2 on K, and g(a_j) = g(b_j) for all j. Hence there is a unique holomorphic function h ∈ 𝒪(Σ) such that h∘f = g. Then h(p) = 1 and |h| < 1/2 on f(K), so p does not belong to the 𝒪(Σ)-convex hull of f(K). If p = c_j for some j, we choose g ∈ 𝒪(S) such that g(a_j) = g(b_j) = 1, |g| < 1/2 on K, and g(a_i) = g(b_i) for all i ≠ j; the conclusion is the same as above. This proves that f(K) is 𝒪(Σ)-convex, and hence also 𝒪(X)-convex. Conversely, if K ⊂ S is a compact set such that f(K) ⊂ X is 𝒪(X)-convex, then K̃ = f^{-1}(f(K)) is 𝒪(S)-convex. The condition on f implies that K̃ is the union of K with at most finitely many points p_1, …, p_m ∈ S ∖ K. By the Oka-Weil theorem there exists g ∈ 𝒪(S) such that g(p_i) is close to 1 for i = 1, …, m and |g| < 1/2 on K. Hence the points p_i do not belong to the hull of K, so K is 𝒪(S)-convex.

A compact set K in a topological space S is said to be regular if K is the closure of its interior K̊. A pair K ⊂ L of compact convex sets in ℝ^N is a simple convex pair if there are a linear function λ : ℝ^N → ℝ and a constant a ∈ ℝ such that

K = {z ∈ L : λ(z) ≤ a}.

Given regular compact convex sets C ⊂ B in ℝ^N and an open set U ⊂ ℝ^N containing C, there is a finite sequence of regular compact convex sets K_1 ⊂ K_2 ⊂ ⋯ ⊂ K_{m+1} = B such that C ⊂ K_1 ⊂ U and (K_i, K_{i+1}) is a simple convex pair for every i = 1, …, m.

Given a linear function λ : ℝ^N → ℝ and a number a ∈ ℝ we let

H(λ,a) = {x ∈ ℝ^N : λ(x) ≤ a}.

Since C is compact and convex, it is the intersection of closed half-spaces. Hence there exist finitely many linear functions λ_1, …, λ_m : ℝ^N → ℝ and numbers a_1, …, a_m ∈ ℝ such that

C ⊂ ⋂_{i=1}^m H(λ_i, a_i) ⊂ U.

The sets K_i = ⋂_{j=i}^m H(λ_j, a_j) ∩ B for i = 1, …, m and K_{m+1} = B then satisfy the lemma. (If K_i = K_{i+1} for some i, then K_i may be removed from the sequence.)

A compact set K in a complex manifold S is said to be a Stein compact if K admits a basis of open Stein neighborhoods in S. If K ⊂ A are compacts in S, we say that K is 𝒪(A)-convex if there is an open set W ⊂ S containing A such that K is 𝒪(W)-convex. By 𝒪(A) we denote the algebra of functions that are holomorphic in open neighborhoods of A (depending on the function). The following notion of a special Cartan pair is a small variation of <cit.>. A slightly more restrictive notion was used in <cit.>, where the sets C ⊂ B were assumed to be smoothly bounded strongly convex, while A and D = A ∪ B were strongly pseudoconvex (a strongly pseudoconvex Cartan pair).
The main novelty is the notion of a very special Cartan pair, in which the attaching set of the convex bump is a thin convex slab; this plays an important role in the proof of Lemma <ref> in the following section.

A pair of compact sets (A,B) in a complex manifold S is a special Cartan pair if

(i) the sets A, B, D = A ∪ B and C = A ∩ B are Stein compacts,
(ii) A and B are separated in the sense that the closures of A ∖ B and B ∖ A are disjoint, and
(iii) there is a holomorphic coordinate system on a neighborhood of B in S in which B and C = A ∩ B are regular convex sets.

A special Cartan pair (A,B) with C = A ∩ B is very special if

(iv) there is a holomorphic coordinate system on a neighborhood of B in S in which (C,B) is a simple convex pair (see Definition <ref>).

If K is a compact convex set in ℝ^N, then a slice of K is the intersection of K with a real affine hyperplane, and a slab of K is a subset of the form

K_{a,b} = {x ∈ K : a ≤ λ(x) ≤ b},

where a < b are real numbers and λ : ℝ^N → ℝ is a linear function. The number b−a is called the thickness of the slab K_{a,b}. If K is a compact subset of a manifold S that is contained in a local chart and is convex in that chart, then a slice or a slab of K will be understood as a subset of the respective type in the given chart.

Assume that (A,B) is a special Cartan pair in a complex manifold S. Given an open set W ⊂ S containing A, there is a finite sequence of compact sets

A ⊂ A_1 ⊂ A_2 ⊂ ⋯ ⊂ A_{m+1} = A ∪ B

such that A_1 ⊂ W, for every i = 1, …, m we have A_{i+1} = A_i ∪ B_i, where (A_i, B_i) is a very special Cartan pair, and C_i = A_i ∩ B_i is an arbitrarily thin slab of B_i. If the set B is 𝒪(D)-convex, where D = A ∪ B, then the pairs (A_i, B_i) can be chosen such that B_i is 𝒪(A_{i+1})-convex for every i = 1, …, m.

Let C = A ∩ B. By assumption, there are an open neighborhood V_0 ⊂ S of B and a biholomorphic map θ : V_0 → Ṽ_0 ⊂ ℂ^d onto an open convex subset of ℂ^d such that θ(C) ⊂ θ(B) are regular compact convex sets in ℂ^d. We use the chart θ to define the notions of convexity, slices, slabs, and simple convex pairs in V_0. Pick an open neighborhood U of C with U ⊂ W, and choose a compact convex set C̃ ⊂ U which contains C in its interior. Lemma <ref> furnishes a sequence K_1 ⊂ K_2 ⊂ ⋯ ⊂ K_{m+1} = B of regular compact convex sets (with respect to the chart θ : V_0 → Ṽ_0) such that B ∩ C̃ ⊂ K_1 ⊂ U and (K_i, K_{i+1}) is a simple convex pair for every i = 1, …, m (see Def. <ref>). This means that for every i there are an ℝ-linear function λ_i : ℂ^d → ℝ and a number b_i ∈ ℝ such that

K_i = {x ∈ K_{i+1} : λ_i(θ(x)) ≤ b_i}.

Choose a number a_i ∈ ℝ with a_i < b_i and close to b_i. For every i = 1, …, m let

A_i = A ∪ K_i,  B_i = {x ∈ K_{i+1} : a_i ≤ λ_i(θ(x))}.

Assuming that a_i is chosen sufficiently close to b_i for each i, condition (<ref>) implies that

A ∩ B_i = ∅ for i = 1, …, m.

Then A_i ∪ B_i = A ∪ K_{i+1} = A_{i+1} ⊂ A ∪ B and

C_i = A_i ∩ B_i = {x ∈ K_{i+1} : a_i ≤ λ_i(θ(x)) ≤ b_i}.

Thus, C_i is a slab of the compact convex set K_{i+1}. Note also that D = A ∪ B = A_{m+1}, which is a Stein compact. It is easily verified by downward induction on i that A_i is a Stein compact and (A_i, B_i) is a very special Cartan pair for every i = 1, …, m. Note that every B_i is a convex subset of B (in the θ-coordinates). If B is 𝒪(D)-convex, then it clearly follows that B_i is 𝒪(A_{i+1})-convex for every i = 1, …, m.

We have a lot of freedom in the choices of the slabs C_i (<ref>).
In particular, we can replace a_i and b_i by any pair of numbers a'_i, b'_i with a_i < a'_i < b'_i < b_i and redefine the sets K_i (<ref>), A_i, B_i (<ref>) and C_i (<ref>) accordingly. In particular, the attaching slabs C_i of the convex bumps B_i can be chosen arbitrarily thin, a fact that will be important in the proof of Lemma <ref> in the following section.

§ THE MAIN LEMMA

In this section we prove the following main lemma, which implies Theorem <ref> in exactly the same way as <cit.> implies <cit.>.

Assume that S is a complex manifold of dimension d, and X is a Stein manifold of dimension 2d with the density property or the volume density property. Let (A,B) be a special Cartan pair in S (see Def. <ref>). Set C = A ∩ B and D = A ∪ B. Assume that

(a) L is a compact 𝒪(X)-convex set in X,
(b) K is a compact set contained in A ∖ C such that K ∪ B is 𝒪(D)-convex,
(c) W ⊂ S is an open set containing A, and
(d) f : W → X is a holomorphic map such that f^{-1}(L) ⊂ K; equivalently, f(W ∖ K) ⊂ X ∖ L.

Then it is possible to approximate f as closely as desired, uniformly on A, by a holomorphic immersion f̃ : W̃ → X on a neighborhood W̃ of D = A ∪ B such that

f̃(W̃ ∖ K) ⊂ X ∖ L.

By the definition of a very special Cartan pair (see Def. <ref>) there are an open neighborhood V_0 of B and a biholomorphic map θ : V_0 → Ṽ_0 ⊂ ℂ^d onto an open convex subset of ℂ^d such that θ(C) ⊂ θ(B) are regular compact convex sets in ℂ^d. In the sequel, when speaking of convex subsets of V_0, we mean sets whose θ-images in ℂ^d are convex. Replacing S by a Stein neighborhood of the compact strongly pseudoconvex domain D = A ∪ B, we may assume that D is 𝒪(S)-convex. Hence, any subset of D which is 𝒪(D)-convex is also 𝒪(S)-convex. In particular, this holds for the sets B and K ∪ B by assumption (b). The same is true for the set C, which is convex in B. Furthermore, we claim that A is 𝒪(S)-convex. Indeed, given a point p ∈ D ∖ A = B ∖ A, there is a function g ∈ 𝒪(B) such that g(p) = 1 and |g| < 1/2 on C. Since B is 𝒪(S)-convex, we can approximate g uniformly on B by a function h ∈ 𝒪(S) satisfying the same conditions. In particular, |h| < 1/2 on bA ∩ B, which is the relative boundary of the set B ∖ A in D. Since B ∖ A is a relative neighborhood of p in D, Rossi's local maximum modulus principle implies that p does not belong to the 𝒪(D)-convex hull of A, and the claim follows. By Lemma <ref> it suffices to consider the case when (A,B) is a very special Cartan pair. Indeed, the cited lemma allows us to replace a special Cartan pair by a finite sequence of very special Cartan pairs, so we obtain a map f̃ satisfying the conclusion of Lemma <ref> by a finite number of applications of the same lemma for a very special Cartan pair. Hence we shall assume from now on that (A,B) is a very special Cartan pair. Let W ⊂ S be a neighborhood of A as in conditions (c) and (d). Pick a smoothly bounded strongly pseudoconvex Runge domain W_0 ⋐ W such that A ⊂ W_0. We claim that f can be approximated as closely as desired uniformly on A by a proper holomorphic immersion g : W_0 → X such that

g^{-1}(L) ⊂ K.

To see this, pick a strongly plurisubharmonic exhaustion function σ : X → ℝ such that σ < 0 on L and σ > 0 on f(W_0 ∖ K); such a σ exists because L is 𝒪(X)-convex and f(W_0 ∖ K) ∩ L = ∅. Given ϵ > 0, we can apply <cit.> in order to approximate f uniformly on A by a proper holomorphic map g : W_0 → X satisfying

σ(g(z)) > σ(f(z)) − ϵ for all z ∈ W_0.

Choosing ϵ > 0 small enough ensures that g^{-1}(L) ⊂ K; equivalently, g(W_0 ∖ K) ⊂ X ∖ L. Since n = dim X = 2d, a general position argument shows that g can be chosen to be an immersion with simple double points.
Replacing f by g, we may assume that f satisfies these conditions. The image Σ = f(W_0) ⊂ X is a closed immersed complex submanifold of X with simple double points. By Lemma <ref> it follows that a compact subset M ⊂ W_0 is 𝒪(W_0)-convex if and only if its image f(M) ⊂ X is 𝒪(X)-convex. By what has been said above, the sets f(A), f(C), and f(K ∪ C) are 𝒪(X)-convex. By (<ref>) we also have that L ∩ Σ ⊂ f(K), and hence the sets L' = L ∪ f(K) and L' ∪ f(C) are 𝒪(X)-convex in view of <cit.>. At this point we arrive at the main difference with respect to <cit.>. In that lemma, the map f : W_0 → X can be chosen to be an embedding since n > 2d. In the present case, with n = 2d, it is an immersion with simple double points. However, the attaching set C = A ∩ B of the bump B can be chosen to be a thin slab of the convex set B (see Lemma <ref> and Remark <ref>). A suitable choice of C ensures that f is an embedding on a neighborhood of C. Indeed, most slices of B (which are convex sets of real dimension 2d−1) do not contain any of the finitely many double points of f; it then suffices to let C be a sufficiently thin slab around such a slice and to adjust the sets A and B accordingly (see Remark <ref>). We assume from now on that f(C) is embedded in X. Pick a compact set P ⊂ X ∖ L' containing f(C) in its interior such that L' ∪ P is also 𝒪(X)-convex. Choose small open convex neighborhoods U ⊂ V of the sets C and B, respectively, such that U ⋐ V ∩ W_0 and V ⋐ V_0. (The notation V ⋐ V_0 means that the closure of V is compact and contained in V_0.) We choose U small enough that f|_U is an embedding (recall that f|_C is an embedding). The normal bundle of the immersion f : W_0 → X is holomorphically trivial over the convex set U by the Oka-Grauert principle (see <cit.>). Let 𝔻^{n−d} denote the unit polydisc in ℂ^{n−d}. It follows that there are a neighborhood W_1 ⊂ W_0 of A, a convex neighborhood U_1 ⊂ U of C, and a holomorphic map F : W_1 × 𝔻^{n−d} → X such that F is injective on U_1 × 𝔻^{n−d} (hence biholomorphic onto the open subset F(U_1 × 𝔻^{n−d}) of X) and F(z,0) = f(z) holds for all z ∈ W_1. By a further shrinking of the neighborhood U_1 ⊃ C and rescaling of the variable w ∈ 𝔻^{n−d}, we may also assume that the Stein domain

Ω := F(U_1 × 𝔻^{n−d}) ⊂ P ⊂ X ∖ L'

is Runge in P and its closure Ω̄ is 𝒪(P)-convex. Since L' ∪ P is 𝒪(X)-convex, it follows that L' ∪ Ω̄ is also 𝒪(X)-convex. Hence there is a Stein neighborhood Ω' ⊂ X of L' such that Ω ∩ Ω' = ∅ and the union Ω_0 := Ω ∪ Ω' is a Stein Runge domain in X. Since the sets U_1 ⊂ V are convex, we can find an isotopy r_t : V → V of injective holomorphic self-maps, depending smoothly on the parameter t ∈ [0,1], such that

* r_0 is the identity map on V,
* r_t(U_1) ⊂ U_1 for all t ∈ [0,1], and
* r_1(V) ⊂ U_1.

In fact, in the coordinates on V_0 provided by the coordinate map θ : V_0 → Ṽ_0 ⊂ ℂ^d, we can choose r_t to be a family of linear contractions towards a point in U_1. Consider the isotopy of biholomorphic maps ϕ_t : V × 𝔻^{n−d} → V × 𝔻^{n−d} defined by

ϕ_t(z,w) = (r_t(z), w),  z ∈ V, w ∈ 𝔻^{n−d}, t ∈ [0,1].

Since r_1(V) ⊂ U_1 by condition (3) above, we have that

ϕ_1(V × 𝔻^{n−d}) ⊂ U_1 × 𝔻^{n−d}.

Recall that Ω is given by (<ref>) and Ω_0 = Ω ∪ Ω'. We define a smooth isotopy of injective holomorphic maps ψ_t : Ω_0 → X (t ∈ [0,1]) by

ψ_t = F ∘ ϕ_t ∘ F^{-1} on Ω;  ψ_t = Id on Ω'.

The map ψ_t is defined on Ω since r_t(U_1) ⊂ U_1 (and hence ϕ_t(U_1 × 𝔻^{n−d}) ⊂ U_1 × 𝔻^{n−d}). Note that ψ_0 is the identity on Ω_0 and the domain ψ_t(Ω_0) is Runge in X for all t ∈ [0,1].
Assuming that X enjoys the density property, the Andersén-Lempert-Forstnerič-Rosay theorem <cit.> allows us to approximate the map ψ_1 : Ω_0 → X uniformly on compacts in Ω_0 by holomorphic automorphisms Ψ ∈ Aut(X). Consider the injective holomorphic map

G = Ψ^{-1} ∘ F ∘ ϕ_1 : V × 𝔻^{n−d} → X.

Note that G is well-defined and injective on V × 𝔻^{n−d} in view of (<ref>) and because F is injective on U_1 × 𝔻^{n−d}. By (<ref>) and (<ref>) we have that

(F ∘ ϕ_1)(V × 𝔻^{n−d}) ∩ L' ⊂ F(U_1 × 𝔻^{n−d}) ∩ L' = Ω ∩ L' = ∅.

Since ψ_1 equals the identity map on Ω' ⊃ L' by (<ref>), Ψ can be chosen to approximate the identity as closely as desired on a neighborhood of L', so we may assume that

G(V × 𝔻^{n−d}) ⊂ X ∖ L'.

From (<ref>) and the first equation in (<ref>) we see that

G = Ψ^{-1} ∘ F ∘ ϕ_1 = Ψ^{-1} ∘ ψ_1 ∘ F on U_1 × 𝔻^{n−d}.

Since Ψ^{-1} ∘ ψ_1 is close to the identity map on Ω = F(U_1 × 𝔻^{n−d}), G is arbitrarily close to F uniformly on compacts in U_1 × 𝔻^{n−d}. Hence we can apply <cit.> (see also <cit.>) to glue F and G into a holomorphic map F̃ : (A ∪ B) × (1/2)𝔻^{n−d} → X such that F̃ is close to F on A × (1/2)𝔻^{n−d} and close to G on B × (1/2)𝔻^{n−d}. The holomorphic map f̃ := F̃(·, 0) : A ∪ B = D → X then satisfies the conclusion of Lemma <ref>, except that it need not be an immersion with simple double points; this can be achieved by a small perturbation since 2d = n. If the approximations are close enough, then f̃(B) ∩ L = ∅ and f̃^{-1}(L) ⊂ K, so condition (<ref>) holds. This proves Lemma <ref> in the case when X enjoys the density property. A similar argument applies if X enjoys the volume density property with respect to a holomorphic volume form; the details are the same as in <cit.>.

Theorem <ref> follows from Lemma <ref> in exactly the same way as <cit.> follows from <cit.>; the proof goes as follows. One distinguishes the noncritical case and the critical case. In the noncritical case we have a pair of compact strongly pseudoconvex domains A ⊂ A' in S, given by two sublevel sets of a strongly plurisubharmonic function without critical points on A' ∖ A. We are also given a holomorphic immersion f : A → X on a neighborhood of A, a compact 𝒪(A)-convex set K ⊂ A, and a compact 𝒪(X)-convex set L ⊂ X such that f(A ∖ K) ⊂ X ∖ L (see (<ref>)). We wish to approximate f as closely as desired uniformly on A by a holomorphic immersion f̃ : A' → X satisfying f̃(A' ∖ K) ⊂ X ∖ L (see (<ref>)). As explained in <cit.>, we can obtain A' from A by attaching finitely many special convex bumps of the type used in Lemma <ref>, so that we have a special Cartan pair at every stage. (Note that strongly pseudoconvex Cartan pairs, used in <cit.>, are also special Cartan pairs in the sense of Definition <ref>.) Hence, an immersion f̃ : A' → X with the stated properties is obtained by successively applying Lemma <ref> finitely many times. The critical case, which amounts to the change of topology at a critical point of a strongly plurisubharmonic exhaustion function on S, is handled in exactly the same way as in <cit.>.

§.§ Acknowledgements

F. Forstnerič is partially supported by the research program P1-0291 and the research grant J1-7256 from ARRS, Republic of Slovenia.

Franc Forstnerič
Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19, SI–1000 Ljubljana, Slovenia.
Institute of Mathematics, Physics and Mechanics, Jadranska 19, SI–1000 Ljubljana.
e-mail: franc.forstneric@fmf.uni-lj.si
http://arxiv.org/abs/1703.08594v2
{ "authors": [ "Franc Forstneric" ], "categories": [ "math.CV", "32E10, 32E20, 32E30, 32H02, 32Q99" ], "primary_category": "math.CV", "published": "20170324205338", "title": "Proper holomorphic immersions into Stein manifolds with the density property" }
The local distinguishability of any three generalized Bell states

Yan-Ling Wang^1, Mao-Sheng Li^2, Shao-Ming Fei^{3,4}, Zhu-Jun Zheng^1

^1 Department of Mathematics, South China University of Technology, Guangzhou 510640, P.R. China
^2 Department of Mathematical Sciences, Tsinghua University, Beijing 100084, P.R. China
^3 School of Mathematical Sciences, Capital Normal University, Beijing 100048, China
^4 Max-Planck-Institute for Mathematics in the Sciences, 04103 Leipzig, Germany

December 30, 2023

==================================================

We study the problem of distinguishing maximally entangled quantum states by using local operations and classical communication (LOCC). A question of fundamental interest is whether any three maximally entangled states in ℂ^d⊗ℂ^d (d ≥ 4) are distinguishable by LOCC. In this paper, we restrict ourselves to the generalized Bell states, and we prove that any three generalized Bell states in ℂ^d⊗ℂ^d (d ≥ 4) are locally distinguishable.

§ INTRODUCTION

Global operations cannot, in general, be implemented by using only local operations and classical communication (LOCC) in compound quantum systems. Hence it is interesting to understand the limitations of the quantum operations that can be implemented by LOCC. The local distinguishability of quantum states plays an important role in exploring the abilities of LOCC <cit.>. Suppose Alice and Bob share an unknown bipartite quantum state chosen from a given set of mutually orthogonal states. Their task is to identify the shared state by using LOCC. Throughout the paper, the phrases "locally distinguishable", "distinguished with LOCC" and "locally distinguished" have the same meaning. Local distinguishability also has practical applications in quantum cryptographic primitives such as data hiding <cit.>. According to the properties of the mutually orthogonal quantum states to be distinguished, the local distinguishability problem can be classified into three cases: maximally entangled states, product states, and general states. In 2000, Walgate et al. showed that any two orthogonal pure states can be locally distinguished <cit.>.
It has been observed in <cit.> that, for large d, there exist sets of maximally entangled states in ℂ^d⊗ℂ^d that cannot be locally distinguished. The lower bound on the number of maximally entangled states that are not locally distinguishable has been extensively investigated <cit.>. Locally indistinguishable sets of d maximally entangled states in ℂ^d⊗ℂ^d systems have been constructed for all d ⩾ 4 <cit.>. Smaller sets of locally indistinguishable maximally entangled states can be found in <cit.>. Due to the difficulty of the problem, some researchers have studied an easier problem: the one-way local distinguishability of maximally entangled states <cit.>. For the case of ℂ^d⊗ℂ^d with d ≥ 4, a set of 3⌈√(d)⌉−1 one-way LOCC indistinguishable maximally entangled states, which are generalized Bell states, has been constructed <cit.>. On the other hand, one can consider the upper bound on the number of maximally entangled states that are locally distinguishable. In 2004, Fan <cit.> showed that if d is prime, then any k mutually orthogonal generalized Bell states can be locally distinguished if k(k−1) < 2d. For d = 3, any three generalized Bell states can be locally distinguished. In <cit.>, it has been shown that in ℂ^3⊗ℂ^3 any three mutually orthogonal maximally entangled states can be distinguished by LOCC. However, these approaches cannot be extended to the higher-dimensional case. Since then, it has been an open question whether any three mutually orthogonal maximally entangled states in high dimensions can be distinguished with LOCC. In 2013, Nathanson presented examples of triples of maximally entangled states that cannot be distinguished with one-way LOCC but can be distinguished with two-way LOCC <cit.>. Moreover, Nathanson proved that any three mutually orthogonal maximally entangled states in ℂ^d⊗ℂ^d, d ≥ 3, can be distinguished with a PPT measurement. In 2015, Tian et al. extended Fan's result to quantum systems of prime power dimension by considering the mutually commuting qudit lattice states <cit.>, and Singal et al. gave a complete analysis of the perfect local distinguishability of four generalized Bell states in ℂ^4⊗ℂ^4 <cit.>. It therefore remains an interesting open question whether any three mutually orthogonal generalized Bell states can be locally distinguished in arbitrary dimension d. In this paper, we restrict ourselves to the local distinguishability of generalized Bell states. We first establish some properties of the generalized Bell states, proving a key equation by employing the method in <cit.>. Using this equation and a detailed case-by-case analysis, we then prove the local distinguishability of any three generalized Bell states. We also settle some exceptional cases by exhibiting the explicit strategies that Alice and Bob employ in order to distinguish the given three states.

§ PROPERTIES OF GENERALIZED BELL STATES

Throughout the paper, we use the following notation. In the bipartite system ℂ^d⊗ℂ^d, under the computational basis {|i⟩}_{i=0}^{d−1}, |ψ_0⟩ = (1/√(d))∑_{i=0}^{d−1}|ii⟩ is the canonical maximally entangled state. In general, a maximally entangled state can be written in the form |ψ⟩ = (U⊗I)|ψ_0⟩ with a unitary matrix U. The following d^2 maximally entangled states are well known as the generalized Bell states:

{|ψ_{m,n}⟩ = (U_{m,n}⊗I)|ψ_0⟩ | U_{m,n} = X^m Z^n, m,n = 0,1,⋯,d−1},

where X = ∑_{l=0}^{d−1}|l+1 mod d⟩⟨l|, and Z = ∑_{i=0}^{d−1}ω^i|i⟩⟨i| with ω = e^{2π√(−1)/d}. We define d operators H_α, α = 0,1,...,d−1, with the entries of H_α given by

(H_α)_{jk} = ω^{−jk−αs_k}, j,k = 0,1,⋯,d−1,

where s_k = k+(k+1)+⋯+(d−1), k = 0,1,⋯,d−1. In particular, we set s_d = s_0 = d(d−1)/2.
Then (1/√(d))H_α is unitary for every α. Motivated by the method in <cit.> for prime dimensions, we first prove a generalized equation.

Lemma 1. The following equation is satisfied up to an overall phase for all α when d is odd, and for even α when d is even:

H_α X^m Z^n H_α^† = X^{αm+n} Z^{−m}.

Proof: Since s_q − s_{q+1} = q for q = 0,1,...,d−2, and since ω^{αs_d} = ω^{αs_0} = ω^{αd(d−1)/2} = 1 under the stated assumptions on α and d, we have ω^{α(s_{d−1}−s_d)} = ω^{αs_{d−1}} = ω^{α(d−1)}. Hence ω^{α(s_q−s_{q+1})} = ω^{αq} for every q, and

H_α X H_α^† = ∑_{j,k=0}^{d−1} ω^{−jk−αs_k}|j⟩⟨k| · ∑_{i=0}^{d−1}|i+1⟩⟨i| · ∑_{p,q=0}^{d−1} ω^{pq+αs_q}|q⟩⟨p|
= ∑_{j,k,p,q=0}^{d−1} ω^{−jk−αs_k+pq+αs_q}|j⟩⟨k−1|q⟩⟨p|
= ∑_{j,p,q=0}^{d−1} ω^{−j(q+1)−αs_{q+1}+pq+αs_q}|j⟩⟨p|
= ∑_{j,p,q=0}^{d−1} ω^{−j+(−j+α+p)q}|j⟩⟨p|
= ∑_{p=0}^{d−1} ω^{−α−p}|p+α⟩⟨p| = Z^{−1}X^α,

up to the overall factor d coming from the unnormalized H_α. Similarly,

H_α Z H_α^† = ∑_{j,k=0}^{d−1} ω^{−jk−αs_k}|j⟩⟨k| · ∑_{i=0}^{d−1} ω^i|i⟩⟨i| · ∑_{p,q=0}^{d−1} ω^{pq+αs_q}|q⟩⟨p|
= ∑_{j,k,p,q=0}^{d−1} ω^{−(j−1)k−αs_k+pq+αs_q}|j⟩⟨k|q⟩⟨p|
= ∑_{j,p,q=0}^{d−1} ω^{(p+1−j)q}|j⟩⟨p|
= ∑_{p=0}^{d−1}|p+1⟩⟨p| = X.

By using equations (<ref>) and (<ref>), it is easy to derive H_α X^m Z^n H_α^† = X^{αm+n} Z^{−m} up to an overall phase.

Since the local distinguishability of a set of quantum states is unchanged under arbitrary local unitary operations, in order to locally distinguish a set of generalized Bell states we may first let Alice and Bob apply the unitary operations (1/√(d))H_α and ((1/√(d))H_α^†)^t, respectively, where t stands for transposition. This is equivalent to the transformation (1/d)H_α X^{m_i} Z^{n_i} H_α^† on Alice's side; that is,

((1/√(d))H_α ⊗ ((1/√(d))H_α^†)^t)(X^{m_i} Z^{n_i} ⊗ I)|ψ_0⟩ = (1/d)H_α X^{m_i} Z^{n_i} H_α^† ⊗ I|ψ_0⟩.

Here the normalization factor 1/√(d) in (1/√(d))H_α does not affect the local distinguishability of the quantum states; we will therefore ignore the factor 1/√(d) and simply treat H_α as a unitary matrix. From Lemma <ref>, we know that the transformations (1/√(d))H_α ⊗ ((1/√(d))H_α^†)^t map the set of generalized Bell states into itself, provided that α satisfies the conditions in Lemma <ref>.

Lemma 2. A set of generalized Bell states {|ψ_{m_i n_i}⟩ = (U_{m_i n_i}⊗I)|ψ_0⟩}_{i=1}^N can be distinguished under LOCC if m_i ≠ m_j for all i ≠ j, or n_i ≠ n_j for all i ≠ j.

Proof: If n_i ≠ n_j for all i ≠ j, then we apply the transformation H_α ⊗ (H_α^†)^t to the given states:

(H_α ⊗ (H_α^†)^t)(X^{m_i} Z^{n_i} ⊗ I)|ψ_0⟩ = H_α X^{m_i} Z^{n_i} H_α^† ⊗ I|ψ_0⟩.

By Lemma <ref>, we have

H_α X^{m_i} Z^{n_i} H_α^† = X^{αm_i+n_i} Z^{−m_i}.

Taking α = 0 in equations (<ref>) and (<ref>), we see that the transformation H_0 ⊗ (H_0^†)^t transforms U_{m_i n_i} ⊗ I|ψ_0⟩ into U_{m_i' n_i'} ⊗ I|ψ_0⟩ with m_i' ≠ m_j' for all i ≠ j, since m_i' = n_i and m_j' = n_j. Hence, we only need to consider the former case. Suppose m_i ≠ m_j for all i ≠ j. Alice starts by performing a rank-one projective measurement corresponding to the orthonormal basis {|i⟩}_{i=0}^{d−1}. For each outcome of Alice's measurement, the post-measurement set will be of the following form, up to an irrelevant phase:

{|ψ_{m_1 n_1}⟩, |ψ_{m_2 n_2}⟩, ..., |ψ_{m_N n_N}⟩} ⟶ {|k⟩|k+m_1⟩, |k⟩|k+m_2⟩, ..., |k⟩|k+m_N⟩}.

The states on Bob's side are then mutually orthogonal. Thus, once Alice tells Bob her measurement outcome k, Bob performs a measurement in the basis {|j⟩}_{j=0}^{d−1}. If the outcome of Bob's measurement is k+m_i, then the state they shared is |ψ_{m_i,n_i}⟩.

Remark: Since local unitary transformations do not change the local distinguishability of quantum states, any set of states that can be transformed into a set satisfying the conditions of Lemma <ref> is locally distinguishable.
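Lemma 1 lends itself to a direct numerical check. The following small numpy script (our illustrative sketch, not part of the paper) verifies the relation H_α X^m Z^n H_α^† = X^{αm+n} Z^{−m}, up to an overall phase, for an odd dimension d = 5 and all α, m, n.

import numpy as np

d = 5                                                # odd d: all alpha allowed
w = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)                    # X|l> = |l+1 mod d>
Z = np.diag(w ** np.arange(d))                       # Z|i> = w^i |i>
s = np.array([sum(range(k, d)) for k in range(d)])   # s_k = k + ... + (d-1)

def H(alpha):
    j, k = np.indices((d, d))
    return w ** (-(j * k) - alpha * s[k])            # (H_alpha)_{jk}

mpow = np.linalg.matrix_power
for alpha in range(d):
    for m in range(d):
        for n in range(d):
            # 1/d normalizes the two unnormalized copies of H_alpha
            lhs = H(alpha) @ mpow(X, m) @ mpow(Z, n) @ H(alpha).conj().T / d
            rhs = mpow(X, (alpha * m + n) % d) @ mpow(Z, (-m) % d)
            idx = np.unravel_index(np.argmax(np.abs(rhs)), rhs.shape)
            phase = lhs[idx] / rhs[idx]              # overall phase
            assert abs(abs(phase) - 1) < 1e-9
            assert np.allclose(lhs, phase * rhs)
print("Lemma 1 verified numerically for d =", d)

For even d, the same script with alpha restricted to even values passes, in agreement with the hypothesis of the lemma.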
§ LOCAL DISTINGUISHABILITY OF THREE GENERALIZED BELL STATES In this section, we use the unitary matrix X^m_iZ^n_i to represent the maximally entangled state |ψ_m_i,n_i⟩= X^m_iZ^n_i⊗ I |ψ_0⟩, and call |ψ_m_i,n_i⟩ the state corresponding to X^m_iZ^n_i. Theorem. In ℂ^d⊗ℂ^d with d≥4, any three states in S={|ψ_m,n⟩|m,n=0,1,⋯,d-1} are locally distinguishable. Proof: The proof is based on the remark after Lemma <ref>. We first make two observations to simplify the problem. Observation 1: We only need to consider the case {X^m_1Z^n_1, X^m_1Z^n_2, X^m_2Z^n_2}. Indeed, by Lemma <ref>, the states corresponding to {X^m_iZ^n_i}_i=1^3 are locally distinguishable if the m_i (or the n_i) are pairwise distinct. Hence we may assume m_1=m_2 and n_1≠ n_2; if n_1,n_2,n_3 were all distinct, the three states would again be locally distinguishable by Lemma <ref>, so we may assume n_3=n_2 (or, equivalently, n_3=n_1). Observation 2: In view of the following transformations, which do not change local distinguishability,

(X^-m_1⊗ (Z^-n_2)^t)(X^m_1Z^n_1⊗ I)|ψ_0⟩ = (Z^n_1-n_2⊗ I)|ψ_0⟩,
(X^-m_1⊗ (Z^-n_2)^t)(X^m_1Z^n_2⊗ I)|ψ_0⟩ = (I⊗ I)|ψ_0⟩,
(X^-m_1⊗ (Z^-n_2)^t)(X^m_2Z^n_2⊗ I)|ψ_0⟩ = (X^m_2-m_1⊗ I)|ψ_0⟩,

we only need to consider the case S_0={I, X^m, Z^n} (0<m,n<d). By the above observations, it suffices to prove the local distinguishability of the states {I, X^m, Z^n}. We write X^m_i Z^n_i ⟶^H_α H_αX^m_i Z^n_i H_α^† for the transformation (<ref>): (H_α⊗ (H_α^†)^t)(X^m_i Z^n_i⊗ I)|ψ_0⟩ = H_α X^m_i Z^n_iH_α^†⊗ I|ψ_0⟩. Hence, acting by H_α on the states {I, X^m, Z^n}, we obtain {I, H_αX^mH_α^†, H_αZ^nH_α^†}. We separate the proof into two cases by the parity of d, see Fig. <ref>. Case I: If d is odd, then for any α we have H_αX^mH_α^†=X^α mZ^-m and H_αZ^nH_α^†=X^n, so under H_α the states {I, X^m, Z^n} are transferred to {I, X^α mZ^-m, X^n}. i) m≠ n: {I, X^m, Z^n} ⟶^H_1 S_1={I, X^mZ^-m, X^n}. The residues 0, m, n are pairwise distinct modulo d, so by Lemma <ref> the set S_1 is locally distinguishable. ii) m=n: {I, X^n, Z^n} ⟶^H_2 S_2={I, X^2nZ^-n, X^n}. Again, 0, 2n, n are pairwise distinct modulo d for odd d, so by Lemma <ref> the set S_2 is locally distinguishable. Case II: If d is even, then H_αX^mH_α^†=X^α mZ^-m and H_αZ^nH_α^†=X^n hold for even α. Consider the transformation {I, X^m, Z^n} ⟶^H_2 S_2={I, X^2mZ^-m, X^n}. i) If 2m≢0 mod d and 2m≢n mod d, then 0, 2m, n are pairwise distinct modulo d and, by Lemma <ref>, the set S_2 is locally distinguishable. ii) If 2m≡0 mod d, then d=2m, and we have the transformations {I, X^m, Z^n} ⟶^H_2 {I, Z^m, X^n} ⟶^H_2 S_2,2={I, X^m, X^2nZ^-n}. This splits into three cases: * 2n≢0 mod d and 2n≢m mod d: then, by Lemma <ref>, the set S_2,2 is locally distinguishable. * 2n≡0 mod d: then d=2n and m=n, so we only need to check the set {I, X^n, Z^n} with d=2n, solved below as exceptional case 1. * 2n≡m mod d with d=2m: then 2n=3m, so n=3k and m=2k for some integer k, and we are left with the set {I, X^2k, Z^3k}, where d=4k; applying H_0 (which maps X^mZ^n to X^nZ^-m) turns it into {I, X^3k, Z^2k}, solved below as exceptional case 2. iii) If 2m≡n mod d, then under the transformation {I, X^m, Z^n} ⟶^H_4 S_4={I, X^4mZ^-m, X^n} we clearly have 4m≢n mod d. Hence, if 4m≢0 mod d, Lemma <ref> yields the conclusion, while 4m≡0 mod d implies 2n≡0 mod d, hence d=2n.
Hence we have 2m≡n mod 2n, whence 2m=3n, so n=2k and m=3k for some integer k. Then we only need to consider the set {I, X^3k, Z^2k} with d=4k, solved below as exceptional case 2. We now give explicit strategies for Alice and Bob to distinguish the two exceptional sets. Exceptional case 1: the set {I, X^n, Z^n} with d=2n, n≥2. The corresponding (unnormalized) states are

|ψ_1⟩ = |0,0⟩+|1,1⟩+|2,2⟩+|3,3⟩+...+|2n-2,2n-2⟩+|2n-1,2n-1⟩,
|ψ_2⟩ = |n,0⟩+|n+1,1⟩+...+|2n-1,n-1⟩+|0,n⟩+...+|n-1,2n-1⟩,
|ψ_3⟩ = |0,0⟩-|1,1⟩+|2,2⟩-|3,3⟩+...+|2n-2,2n-2⟩-|2n-1,2n-1⟩.

Alice employs the projective measurement with operators M_k^±=(|2k-2⟩±|2k-1⟩)(⟨2k-2|±⟨2k-1|), k=1,2,...,n. The corresponding (unnormalized) post-measurement states are, respectively,

|ψ_1⟩ → (|2k-2⟩±|2k-1⟩)(|2k-2⟩±|2k-1⟩),
|ψ_2⟩ → (|2k-2⟩±|2k-1⟩)(|2k-2+n⟩±|2k-1+n⟩),
|ψ_3⟩ → (|2k-2⟩±|2k-1⟩)(|2k-2⟩∓|2k-1⟩).

Hence the states of Bob's system are mutually orthogonal, and Bob can distinguish the three states {|ψ_1⟩,|ψ_2⟩,|ψ_3⟩} exactly. Exceptional case 2: the set {I, X^3k, Z^2k} with d=4k. The corresponding states are

|ψ_1⟩ = |0,0⟩+|1,1⟩+|2,2⟩+|3,3⟩+...+|4k-2,4k-2⟩+|4k-1,4k-1⟩,
|ψ_2⟩ = |3k,0⟩+|3k+1,1⟩+...+|4k-1,k-1⟩+|0,k⟩+...+|3k-1,4k-1⟩,
|ψ_3⟩ = |0,0⟩-|1,1⟩+|2,2⟩-|3,3⟩+...+|4k-2,4k-2⟩-|4k-1,4k-1⟩.

Alice applies the projective measurement M_l^±=(|2l-2⟩±|2l-1⟩)(⟨2l-2|±⟨2l-1|), l=1,2,3,...,2k, and correspondingly one gets

|ψ_1⟩ → (|2l-2⟩±|2l-1⟩)(|2l-2⟩±|2l-1⟩),
|ψ_2⟩ → (|2l-2⟩±|2l-1⟩)(|2l-2+k⟩±|2l-1+k⟩),
|ψ_3⟩ → (|2l-2⟩±|2l-1⟩)(|2l-2⟩∓|2l-1⟩).

If k≥2, the states of Bob's system are mutually orthogonal, and the states {|ψ_1⟩,|ψ_2⟩,|ψ_3⟩} can be distinguished exactly. The case k=1 was considered in Theorem 2 of <cit.> and proved to be locally distinguishable. The results of the above theorem can be understood as a small step towards generalizing H. Fan's results <cit.> to the arbitrary-dimensional case. Unlike the prime-dimensional case, one may need to apply several transformations before Lemma <ref> can be used; moreover, one may encounter exceptional cases that cannot be dealt with by applying Lemma <ref> at all. The above results can also be viewed as partial progress on the problem of local distinguishability of any three orthogonal maximally entangled states: our results, together with those of <cit.>, give evidence for a positive answer. In <cit.>, the authors presented triples of maximally entangled states shown to be two-way distinguishable via explicit strategies. In our paper, we mainly focus on sets of generalized Bell states satisfying certain conditions, under which complicated cases can be transformed into simple ones; for the remaining exceptional cases, explicit constructions of strategies are needed. § CONCLUSION AND DISCUSSION In this paper, we have studied the problem of local distinguishability of maximally entangled states, in particular the generalized Bell states. Firstly, we generalized some equations, considered by H. Fan for the prime-dimensional case, to arbitrary dimensions.
Since the local distinguishability of a set of quantum states is unchanged under local unitary operations, we apply suitable local unitary operations to simplify the distinguishing strategies. By using the generalized equations and giving explicit strategies for some exceptional cases, we have shown that any three generalized Bell states in ℂ^d⊗ℂ^d (d≥4) are locally distinguishable. However, the local distinguishability of any three maximally entangled states in ℂ^d⊗ℂ^d (d≥4) remains open. It is natural to ask whether four, five or more generalized Bell states can always be locally distinguished for large dimension d. It seems that, along the lines of this paper, those cases could be dealt with case by case, but the analysis becomes considerably more complicated; it would therefore also be interesting to develop other methods for these problems. Acknowledgments The authors thank the referees for many helpful suggestions. This work is supported by the NSFC 11475178, NSFC 11571119 and NSFC 11675113. Bennett99 Bennett C.H., DiVincenzo D.P., Fuchs C.A., Mor T., Rains E., Shor P.W., Smolin J.A. and Wootters W.K.: Quantum nonlocality without entanglement. Phys. Rev. A 59, 1070-1091 (1999) Walgate02 Walgate J. and Hardy L.: Nonlocality, asymmetry, and distinguishing bipartite states. Phys. Rev. Lett. 89, 147901 (2002) DiVincenzo02 DiVincenzo D.P., Leung D.W. and Terhal B.M.: Quantum data hiding. IEEE Trans. Inf. Theory 48, 580 (2002) Walgate00 Walgate J., Short A.J., Hardy L. and Vedral V.: Local distinguishability of multipartite orthogonal quantum states. Phys. Rev. Lett. 85, 4972 (2000) Ghosh01 Ghosh S., Kar G., Roy A., Sen(De) A. and Sen U.: Distinguishability of Bell states. Phys. Rev. Lett. 87, 277902 (2001) Ghosh04 Ghosh S., Kar G., Roy A. and Sarkar D.: Distinguishability of maximally entangled states. Phys. Rev. A 70, 022304 (2004) Fan04 Fan H.: Distinguishability and indistinguishability by local operations and classical communication. Phys. Rev. Lett. 92, 177905 (2004) Nathanson05 Nathanson M.: Distinguishing bipartite orthogonal states using LOCC: best and worst cases. J. Math. Phys. 46, 062103 (2005) Duan12 Yu N., Duan R. and Ying M.: Four locally indistinguishable ququad-ququad orthogonal maximally entangled states. Phys. Rev. Lett. 109, 020506 (2012) Cosentino13 Cosentino A.: Positive-partial-transpose-indistinguishable states via semidefinite programming. Phys. Rev. A 87, 012321 (2013) CosentinoR14 Cosentino A. and Russo V.: Small sets of locally indistinguishable orthogonal maximally entangled states. Quantum Information & Computation 14, 1098-1106 (2014) Li15 Li M.-S., Wang Y.-L., Fei S.-M. and Zheng Z.-J.: d locally indistinguishable maximally entangled states in ℂ^d⊗ℂ^d. Phys. Rev. A 91, 042318 (2015) Yu15 Yu S.-X. and Oh C.H.: Detecting the local indistinguishability of maximally entangled states. arXiv:1502.01274 (2015) Bandyopadhyay11 Bandyopadhyay S., Ghosh S. and Kar G.: LOCC distinguishability of unilaterally transformable quantum states. New J. Phys. 13, 123013 (2011) Nathanson2013 Nathanson M.: Three maximally entangled states can require two-way local operations and classical communication for local discrimination. Phys. Rev. A 88, 062316 (2013) Zhang14 Zhang Z.-C., Wen Q.-Y., Gao F., Tian G.-J. and Cao T.-Q.: One-way LOCC indistinguishability of maximally entangled states. Quantum Inf. Proc. 13, 795 (2014) Zhang15 Zhang Z.-C., Feng K.-Q., Gao F.
and Wen Q.-Y.: Distinguishing maximally entangled states by one-way local operations and classical communication. Phys. Rev. A 91, 012329 (2015) Wang16 Wang Y.-L., Li M.-S., Zheng Z.-J. and Fei S.-M.: On small set of one-way LOCC indistinguishability of maximally entangled states. Quantum Inf. Proc. 15, 1661 (2016) Tian15 Tian G.-J., Yu S.-X., Gao F., Wen Q.-Y. and Oh C.H.: Local discrimination of qudit lattice states via commutativity. Phys. Rev. A 92, 042320 (2015) Singal15 Singal T., Rahman R., Ghosh S. and Kar G.: Complete analysis of perfect local distinguishability of ensemble of four generalized Bell states in ℂ^4⊗ℂ^4. arXiv:1506.03667 (2015)
Querying Log Data with Metric Temporal Logic

Sebastian Brandt (sebastian-philipp.brandt@siemens.com), Siemens CT, München, Germany
Elem Güzel Kalaycı (kalayci@inf.unibz.it), KRDB Research Centre, Faculty of Computer Science, Free University of Bozen-Bolzano, Italy
Vladislav Ryzhikov (vlad@dcs.bbk.ac.uk), Department of Computer Science and Information Systems, Birkbeck, University of London, UK
Guohui Xiao (xiao@inf.unibz.it), KRDB Research Centre, Faculty of Computer Science, Free University of Bozen-Bolzano, Italy
Michael Zakharyaschev (michael@dcs.bbk.ac.uk), Department of Computer Science and Information Systems, Birkbeck, University of London, UK

==========================================================================================

We propose a novel framework for ontology-based access to temporal log data using a datalog extension, datalogMTL, of the Horn fragment of the metric temporal logic MTL. We show that datalogMTL is ExpSpace-complete even with punctual intervals, in which case full MTL is known to be undecidable. We also prove that nonrecursive datalogMTL is PSpace-complete for combined complexity and in AC^0 for data complexity. We demonstrate by two real-world use cases that nonrecursive datalogMTL programs can express complex temporal concepts from typical user queries and thereby facilitate access to temporal log data. Our experiments with Siemens turbine data and MesoWest weather data show that datalogMTL ontology-mediated queries are efficient and scale on large datasets. § INTRODUCTION In this paper, we present a new ontology-based framework for querying temporal log data. We begin by outlining this framework in the context of data gathering and analysis at Siemens, a leading manufacturer and supplier of systems for power generation, power transmission, medical diagnosis, and industry automation. §.§ Data gathering at Siemens For the Siemens equipment, analytics services are usually delivered by remote diagnostic centres that store data from the relevant industrial sites or individual equipment around the globe. The analytics provided at these centres falls into three categories: descriptive, predictive, and prescriptive. Descriptive analytics describes or quantifies in detail what has happened after an event. Predictive analytics aims to anticipate events before they occur and provide a window of opportunity for countermeasures.
Prescriptive analytics aims to automate the process of suggesting underlying reasons for the predicted events and carrying out appropriate countermeasures. All these types of analytics heavily rely on the ability to recognise interesting events using sensor measurements or other machine data such as the power output of a gas turbine, its maximum rotor speed, average exhaust temperature, etc. For example, a service engineer at a Siemens remote diagnostic centre could be interested in active power trips of the turbine, that is, events when (ActivePowerTrip) the active power was above 1.5MW for a period of at least 10 seconds, maximum 3 seconds after which there was a period of at least one minute where the active power was below 0.15MW. Under the standard workflow, when facing the task of finding the active power trips of the turbine, the engineer would call an IT expert who would then produce a specific script (in a proprietary signal processing language developed by Siemens) such as

    message("active power trip") =
        $t1: eval( >, #activePower, 1.5 ) : for( >= 10s )
        && eval( <, #activePower, 0.15 )
               : start( after[ 0s, 3s ] $t1:end ) : for( >= 1m );

for the turbine aggregated data stored in a table, which looks as follows:

    turbineId  dateTime             activePower  rotorSpeed  mainFlame  ...
    tb0        2015-04-04 12:20:48  2            1550        0          ...
    tb0        2015-04-04 12:20:49  1.8          1400        null       ...
    tb0        2015-04-04 12:20:52  1.7          1350        1          ...

The result of running the script is a log with records such as "2015-04-04 12:22:17 active power trip tb0", where information about all the events is accumulated. When facing the same task but for a different turbine, the engineer may have to call the IT expert once again because different models of turbines and sensors may have different log/database formats. Moreover, the storage platform for the sensor data often changes (thus, Siemens is currently pondering migrating certain data to a cloud-based storage). Maintaining a set of scripts, one for each data source, does not provide an efficient solution since a query such as 'find all the turbines that had an active power trip in May 2017' would require an intermediate database with integrated data on active power trips. Another difficulty is that the definitions of events the engineer is interested in can also change. Some changes are minor, say the pressure threshold or the number of seconds in the active power trip definition, but some could be more substantial, such as 'find the active power trips that were followed by a high pressure within 3 minutes that lasted for 30 seconds'. This modification would require rewriting the script above into a much longer one rather than using it as a module in the new definition. The permanent involvement of an IT expert familiar with database technology incurs high costs for Siemens, and data gathering accounts for a major part of the time the service engineers spend at Siemens remote diagnostic centres, most of it due to the indirect access to data. §.§ Ontology-based data access Ontology-based data access (OBDA for short) offers a different workflow that excludes the IT middleman from data gathering <cit.>; consult also the recent survey presented at IJCAI-18. In a nutshell, the OBDA workflow in the Siemens context looks as follows. Domain experts develop and maintain an ontology that contains terms for the events the engineers may be interested in. IT experts develop and maintain mappings that relate these terms to the database schemas.
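To give a flavour of what such a mapping does, here is a toy Python sketch (ours, not a real Siemens mapping, which would be written in SQL): it turns raw rows of the measurement table into interval-stamped facts, under the simplifying assumption that a reading holds until the next one arrives.

    # Toy rows of the measurement table: (turbineId, dateTime, activePower)
    rows = [
        ("tb0", "2015-04-04 12:20:48", 2.0),
        ("tb0", "2015-04-04 12:20:49", 1.8),
        ("tb0", "2015-04-04 12:20:52", 1.7),
        ("tb0", "2015-04-04 12:20:53", 0.1),
    ]

    def to_facts(rows, threshold=1.5):
        """Map raw readings to facts ActivePowerAbove1.5(tb)@[t, t'),
        assuming a reading holds until the next one arrives."""
        facts = []
        for (tb, t, p), nxt in zip(rows, rows[1:]):
            if p > threshold:
                facts.append(("ActivePowerAbove1.5", tb, (t, nxt[1])))
        return facts

    for f in to_facts(rows):
        print(f)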
The engineer can now use familiar terms from the ontology and a graphical tool such as OptiqueVQS <cit.> to construct and run queries such as ActivePowerTrip(tb0)@x. The task of an OBDA system such as Ontop <cit.> will be, using the mappings, to rewrite the engineer's ontology-mediated query into an SQL query over the database and then execute it, returning the time intervals x where the turbine with the ID tb0 had active power trips. Unfortunately, the ontology and query languages designed for OBDA and standardised by the W3C, the OWL 2 QL profile of OWL 2 and SPARQL, are not suitable for the Siemens case because they were not meant to deal with essentially temporal data, concepts and properties. There have been several attempts to develop temporal OBDA. One approach is to keep OWL 2 QL as the ontology language, assuming that ontology axioms hold at all times, and extend the query language with various temporal operators <cit.>. Unfortunately, OWL 2 QL is not able to define the temporal feature of 'active power trip', and so the engineer would have to capture it in a complex temporal query (or call an expert in temporal logic). Another known approach is to allow the temporal operators of the linear-time temporal logic LTL in both queries and ontologies <cit.>. For more details and further references, consult the recent survey by Artale et al. (surveys of early developments in temporal deductive databases are given by Baudinet, Chomicki and Wolper, and by Chomicki and Toman). However, standard LTL over a discrete timeline such as (ℕ,≤) or (ℤ,≤) is not able to adequately represent the temporal data and knowledge in the Siemens use case because measurements are taken and sent asynchronously by multiple sensors at irregular time intervals, which can depend on the turbine model, sensor type, etc. To model measurements and events using discrete time, one could take a sufficiently small time unit (quantum), say 1 second, and encode 'active power was below 0.15MW for a period of one minute' by an LTL-formula of the form ◯^-1 p ∧ ◯^-2 p ∧ ⋯ ∧ ◯^-60 p, where ◯^-1 is the previous-time operator and ◯^-k its k-fold application. One problem with this encoding is that it is clearly awkward, not succinct, and only works under the assumption that the active power is measured each and every second. If, for some reason, a measurement is missing, as in the table above, the formula becomes inadequate. This problem can be solved by using the (more succinct) metric temporal logic MTL with operators like ⊟_[1,60] interpreted as 'at every time instant within the previous minute when a measurement was taken'. The satisfiability problem for the description logic 𝒜ℒ𝒞 extended with such operators over (ℕ,≤) was investigated by Gutiérrez-Basulto, Jung, and Ozaki. A more fundamental issue with modelling turbine events using discrete time is that it only applies to data complying with the chosen quantum and requires amendments every time the quantum has to be set to a different value because of new equipment or because asynchronous sensor measurements start to happen more frequently. Thus, a better way of modelling the temporal data and events under consideration is by means of a suitable fragment of MTL interpreted over dense time such as the rationals (ℚ,≤) or reals (ℝ,≤).
This would allow us to capture, for example, that one event, say a sharp temperature rise, happened just before (maybe a fraction of a quantum), and so possibly caused, another event, say an emergency shutdown, which is a typical feature of the asynchronous behaviour of real-time systems where the actual time of event occurrences cannot be predicted at the modelling stage. §.§ Metric Temporal Logic The metric temporal logic MTL was originally designed for modelling and reasoning about real-time systems <cit.>. MTL is equipped with two alternative semantics, pointwise and continuous (aka interval-based). In both semantics, the timestamps are taken from a dense timeline (𝕋,≤) such as (ℚ,≤) or (ℝ,≤). Under the pointwise semantics, an interpretation is a timed word, that is, a finite or infinite sequence of pairs (Σ_i, t_i), where Σ_i is a subset of propositional variables that are assumed to hold at t_i ∈𝕋 and t_i < t_j for i < j. Under the continuous semantics, an interpretation is an assignment of a set of propositional variables to each t ∈𝕋. MTL allows formulas such as ⊞_[1.5,3]φ (or ◇^-_[1.5,3]φ with ◇^+ in place of ◇^-) that hold at a moment t if and only if φ holds at every (respectively, some) moment in the interval [t+1.5,t+3]. However, under the pointwise semantics, t must be a timestamp of the timed word and φ must only hold at every (respectively, some) t_i with 1.5 ≤ t_i - t ≤ 3. Thus, ⊞_[1,1]⊥ is satisfiable under the pointwise semantics, for example, by a timed word with t_{i+1} - t_i > 1, but not under the continuous semantics. In the Siemens case, we assume that the real-time system is being continuously monitored, the result of the next measurement of a sensor is only recorded when it exceeds the previous one by some fixed margin, and events such as active power trip can happen between measurements. This makes the continuous semantics a natural choice for temporal modelling. The satisfiability problem for MTL under this semantics turns out to be undecidable <cit.> and ExpSpace-complete if the punctual operators such as ◇^+_[1,1] are disallowed <cit.>; see also the work by Ouaknine and Worrell. Note that, under the pointwise semantics, MTL is decidable over finite timed words, though not primitive recursive <cit.>. §.§ Our contribution Having analysed two real-world scenarios of querying asynchronous real-time systems (to be discussed in Section <ref>), we came to the conclusion that a basic ontology language for temporal OBDA should contain datalog rules with MTL operators in their bodies. In this language, for example, the event of active power trip can be defined by the rule

ActivePowerTrip(v) ← Turbine(v) ∧ ⊟_[0,1m] ActivePowerBelow0.15(v) ∧ ◇^-_[60s,63s] ⊟_[0,10s] ActivePowerAbove1.5(v).

The variables of the predicates in such rules range over a (non-temporal) object domain. Thus, the intended domain for v in (<ref>) comprises turbines, their parts, sensors, etc.
The underlying (dense) timeline is implicit: we understand (<ref>) as saying that ActivePowerTrip(v) holds at any given time instant t if the pattern shown below has occurred before t:

[Figure: a timeline on which ActivePowerBelow0.15(v) holds throughout the minute before t and, ending between 60s and 63s before t, ActivePowerAbove1.5(v) holds for a 10s stretch; ActivePowerTrip(v) then holds at t.]

Unlike model-checking liveness properties (that some events eventually happen) in transition systems, our task is to query historical data for events that have already happened and are actually implicitly recorded in the data. As a consequence, we do not need ontology axioms with eventuality operators in the head such as ◇^+_[0,3s] ShutDown(v) ← ActivePowerTrip(v), saying that an active power trip must be followed by a shutdown within 3 seconds. One could also allow existential quantification in the head of rules, as in ∃u hasRotor(v,u) ← Turbine(v), stating that every turbine has a rotor. Although axioms of this sort are present in the Siemens turbine configuration ontology <cit.>, we opted not to include ∃ in the head of rules in our language. On the one hand, we have not found meaningful queries in the use cases for which such axioms would provide more answers. On the other hand, it is known that existential axioms may considerably increase the combined complexity of both atemporal <cit.> and temporal <cit.> ontology-mediated query answering. For these reasons, we do not allow existential rules in our ontology language and leave their investigation for future work.
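To make the interval reasoning behind rule (<ref>) concrete, here is a minimal Python sketch (ours, not part of the framework): it evaluates the rule over interval-stamped facts, with times in seconds and all intervals treated as half-open [a, b) to keep the endpoint bookkeeping short.

    def box_past(fact, r1, r2):
        # points t such that the fact holds throughout [t - r2, t - r1]
        a, b = fact
        s, e = a + r2, b + r1
        return (s, e) if s < e else None

    def dia_past(pts, r1, r2):
        # points t such that some s with t - s in [r1, r2] lies in pts
        a, b = pts
        return (a + r1, b + r2)

    def intersect(i, j):
        s, e = max(i[0], j[0]), min(i[1], j[1])
        return (s, e) if s < e else None

    above = (0.0, 15.0)     # ActivePowerAbove1.5(tb0) @ [0s, 15s)
    below = (17.0, 85.0)    # ActivePowerBelow0.15(tb0) @ [17s, 85s)

    held_10s = box_past(above, 0, 10)      # ⊟_[0,10s] Above   : [10, 15)
    lagged   = dia_past(held_10s, 60, 63)  # ◇^-_[60s,63s] ... : [70, 78)
    low_1m   = box_past(below, 0, 60)      # ⊟_[0,1m] Below    : [77, 85)
    print("ActivePowerTrip(tb0) holds on", intersect(lagged, low_1m))  # (77.0, 78.0)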
We also present in Section <ref> a framework for practical OBDA with nonrecursivequeries and temporal log data stored in databases as shown above. Finally, in Section <ref>, we evaluate our framework on two use cases. We develop aontology for temporal concepts used in typicalqueries at Siemens (e.g., 𝖭𝗈𝗋𝗆𝖺𝗅𝖲𝗍𝗈𝗉 that takes place if events 𝖠𝖼𝗍𝗂𝗏𝖾𝖯𝗈𝗐𝖾𝗋𝖮𝖿𝖿, 𝖬𝖺𝗂𝗇𝖥𝗅𝖺𝗆𝖾𝖮𝖿𝖿, 𝖢𝗈𝖺𝗌𝗍𝖣𝗈𝗐𝗇6600𝗍𝗈1500, and 𝖢𝗈𝖺𝗌𝗍𝖣𝗈𝗐𝗇1500𝗍𝗈200 happen in a certain temporal pattern). We also create a weather ontology defining standard meteorological concepts such as 𝖧𝗎𝗋𝗋𝗂𝖼𝖺𝗇𝖾 (𝖧𝗎𝗋𝗋𝗂𝖼𝖺𝗇𝖾𝖥𝗈𝗋𝖼𝖾𝖶𝗂𝗇𝖽, wind with the speed above 118 km/h, lasting at least 1 hour).Using Siemens sensor databases and MesoWest historical records of the weather stations across the US, we experimentally demonstratethat our algorithm is efficient in practice and scales on large datasets of up to 8.3GB. We used two systems, PostgreSQL and Apache Spark, to evaluate our SQL programs. To our surprise, Apache Spark achieved tenfold better performance on the weather data than PostgreSQL. This effect can be attributed to the capacity of Spark to parallelise query execution as well as to the natural `modularity' of weather data by location.An extended abstract of this paper was presented at AAAI-17 <cit.>.§ DATALOG In the standard metric temporal logic  <cit.>, the temporal domain is the real numbers ℝ, while the intervalsin the constrained temporal operators such as _ (sometime in the future within the intervalfrom now) have natural numbers or ∞ as their endpoints. In the context of the applications ofwe deal with in this paper, itis more natural to assume that the endpoints ofare non-negative dyadic rational numbers—finite binary fractions[In other words, a dyadic rational is anumber of the form n/2^m, where n ∈ℤ and m ∈ℕ.] such as 101.011—or ∞. We denote the set of dyadic rationals by ℚ_2 and remind the reader that ℚ_2 is dense in ℝ and, by Cantor's theorem, (ℚ_2,<) is isomorphic to (ℚ,<).By an interval, ι, we mean any nonempty subset of ℚ_2 of the form[t_1, t_2], [t_1, t_2), (t_1, t_2] or (t_1, t_2), where t_1,t_2∈ℚ_2∪{-∞, ∞} and t_1 ≤ t_2. We identify (t,∞] with (t,∞), [-∞,t] with (-∞,t], etc.A range, , is an interval with non-negative endpoints.The temporal operators oftake the form ⊞_, _ and _, which refer to the future, and ⊟_, _ and _, which refer to the past.The end-points of intervals and ranges are assumed to be represented in binary.An individual term, τ, is an individual variable, v, or aconstant, c. As usual, we assume that there is a countably-infinite list of predicate symbols, P, with assigned arities. Aprogram, Π, is a finite set of rules of the form A^+ ← A_1… A_k or ← A_1… A_k, where k ≥ 1, each A_i (1 ≤ i ≤ k) is either an inequality (ττ')or defined by the grammar A::=P(τ_1,…,τ_m)|⊤ |⊞_ A |⊟_ A |_ A |_ A|A _ A' |A _ A' and A^+ is given by the same grammar but without any `non-deterministic' operators _, _, _, _. The atoms A_1,…,A_k constitute the body of the rule, while A^+ orits head. As usual, we assume that every variable in the head of a rule also occurs in its body.A data instance, , is a finite set of facts of the form P(c)@ι, where P(c) is a ground atom (with a tuple c of individual constants) and ι an interval. The fact P(c)@ι states that P(c) holds throughout the interval ι. We denote by () the set of numbers (excluding ±∞) that occur in , and by (Π, ) the set of number occurring in Π or .An interpretation, , is based on a domain Δ∅ for the individual variables and constants. 
For any m-ary predicate P, any m-tuple a of elements of Δ, and any moment of time t ∈ ℝ, the interpretation 𝔐 specifies whether P is true on a at t, in which case we write 𝔐,t ⊨ P(a). Let ν be an assignment of elements of Δ to the individual terms. To simplify notation, we adopt the standard name assumption, according to which ν(c) = c for every individual constant c. We then set inductively:

𝔐,t ⊨^ν P(τ) iff 𝔐,t ⊨ P(ν(τ)),
𝔐,t ⊨^ν (τ ≠ τ') iff ν(τ) ≠ ν(τ'),
𝔐,t ⊨^ν ⊞_ϱ A iff 𝔐,s ⊨^ν A for all s with s-t ∈ ϱ,
𝔐,t ⊨^ν ⊟_ϱ A iff 𝔐,s ⊨^ν A for all s with t-s ∈ ϱ,
𝔐,t ⊨^ν ◇^+_ϱ A iff 𝔐,s ⊨^ν A for some s with s-t ∈ ϱ,
𝔐,t ⊨^ν ◇^-_ϱ A iff 𝔐,s ⊨^ν A for some s with t-s ∈ ϱ,
𝔐,t ⊨^ν A 𝒰_ϱ A' iff 𝔐,t' ⊨^ν A' for some t' with t'-t ∈ ϱ, and 𝔐,s ⊨^ν A for all s ∈ (t, t'),
𝔐,t ⊨^ν A 𝒮_ϱ A' iff 𝔐,t' ⊨^ν A' for some t' with t-t' ∈ ϱ, and 𝔐,s ⊨^ν A for all s ∈ (t', t),
𝔐,t ⊨^ν ⊤ and 𝔐,t ⊭^ν ⊥.

[Figure: timelines illustrating the 'future' operators for ϱ = [d,e]: ⊞_[d,e]A holds at t if A holds throughout [t+d, t+e]; ◇^+_[d,e]A holds at t if A holds at some s ∈ [t+d, t+e]; A 𝒰_[d,e]A' holds at t if A' holds at some s ∈ [t+d, t+e] and A holds throughout (t, s).]

We say that 𝔐 satisfies a datalogMTL program Π under an assignment ν if, for all t ∈ ℝ and all rules A ← A_1 ∧ ⋯ ∧ A_k in Π, we have 𝔐,t ⊨^ν A whenever 𝔐,t ⊨^ν A_i for 1 ≤ i ≤ k. We call 𝔐 a model of Π and 𝒟 and write 𝔐 ⊨ (Π,𝒟) if 𝔐 satisfies Π under every assignment, and 𝔐,t ⊨ P(c) for any P(c)@ι in 𝒟 and any t ∈ ι. Π and 𝒟 are consistent if they have a model. Note that ranges ϱ in the temporal operators can be punctual [r,r], in which case ⊞_[r,r]A is equivalent to ◇^+_[r,r]A, and ⊟_[r,r]A to ◇^-_[r,r]A. We also observe that ⊤ 𝒰_ϱ A is equivalent to ◇^+_ϱ A (that is, 𝔐,t ⊨^ν ⊤ 𝒰_ϱ A iff 𝔐,t ⊨^ν ◇^+_ϱ A for all 𝔐, t and ν), and ⊤ 𝒮_ϱ A is equivalent to ◇^-_ϱ A. A datalogMTL query takes the form (Π, 𝐪(v,x)), where Π is a datalogMTL program and 𝐪(v,x) = Q(τ)@x, for some predicate Q, v is the tuple of all individual variables occurring in the terms τ, and x an interval variable.
A certain answer to (Π, 𝐪(v,x)) over a data instance 𝒟 is a pair (c,ι) such that c is a tuple of constants from 𝒟 of the same length as v, ι an interval and, for any t ∈ ι, any model 𝔐 of Π and 𝒟, and any assignment ν mapping v to c, we have 𝔐,t ⊨^ν Q(τ). In this case, we write 𝔐,t ⊨ 𝐪(c). If the tuple v is empty (that is, Q(τ) contains no individual variables), then we say that ι is a certain answer to (Π, 𝐪(x)) over 𝒟. Example. Suppose that Π consists of the single rule (<ref>) and 𝒟 of the facts

Turbine(tb0)@(-∞, ∞),
ActivePowerAbove1.5(tb0)@[13:00:00, 13:00:15),
ActivePowerBelow0.15(tb0)@[13:00:17, 13:01:25).

Then any subinterval of the interval [13:01:17, 13:01:18) is a certain answer to the datalogMTL query (Π, ActivePowerTrip(tb0)@x). We illustrate the importance of the operators 𝒮 (since) and 𝒰 (until) using an example inspired by the ballet moves ontology <cit.>. Suppose we want to say that SupportBending is a move spanning from the beginning to the end of RightAndLeftSupportLowPlace, provided that it is preceded by RightAndLeftSupportMiddlePlace, which ends within 3s of the beginning of RightAndLeftSupportLowPlace, as shown below:

[Figure: RightAndLeftSupportMiddlePlace is followed, after a gap of at most 3s, by RightAndLeftSupportLowPlace; SupportBending spans exactly the latter; ◇^-_[0,3s] RightAndLeftSupportMiddlePlace holds from the start of the middle-place move until 3s after its end.]

We can define the SupportBending move using the rule

SupportBending ← RightAndLeftSupportLowPlace 𝒮_[0,∞) (◇^-_[0,3s] RightAndLeftSupportMiddlePlace)

(note that a definition of SupportBending in MTL would be problematic if only the box and diamond operators were available). By answering datalogMTL queries we understand the problem of checking whether a given pair (c,ι) is a certain answer to a given datalogMTL query (Π, 𝐪(v,x)) over a given data instance 𝒟. The consistency (or satisfiability) problem is to check whether a given datalogMTL program Π is consistent with a given data instance 𝒟. As usual in database theory <cit.> and ontology-mediated query answering, we distinguish between the combined complexity and the data complexity of these problems: the former regards all the ingredients (Π, (c,ι) and 𝒟) as input, while the latter assumes that Π and 𝐪 are fixed and only 𝒟 and (c,ι) are the input. Proposition. Answering datalogMTL queries and consistency checking are polynomially reducible to the complement of each other. Proof. Suppose first that we want to check whether (c,ι) is a certain answer to (Π, 𝐪(v,x)) over 𝒟, where 𝐪(v,x) = Q(τ)@x and ι = [-t_1, t_2) with t_1, t_2 ∈ ℚ_2^≥0; other types of ι are considered analogously. Consider the following program Π' and data instance 𝒟':

Π' = Π ∪ {⊥ ← P(v) ∧ ⊟_[0,t_1] Q(v) ∧ ⊞_(0,t_2) Q(v)},
𝒟' = 𝒟 ∪ {P(c)@[0,0]},

where P is a fresh predicate. It is readily seen that (c,ι) is a certain answer to (Π, 𝐪(v,x)) over 𝒟 iff Π' is not consistent with 𝒟'. Conversely, Π and 𝒟 are consistent iff [0,0] is not a certain answer to (Π, P@x) over 𝒟, where P is a fresh 0-ary predicate, that is, a propositional variable. We conclude this section by reminding the reader that, over the integer numbers (ℤ,<), MTL is as expressive as the linear temporal logic LTL with the operators ◯ (at the next moment), 𝒰 (until), □ (always in the future), ◇ (some time in the future) and their past counterparts ◯^-1, 𝒮, ■ and ◆.
For example, the LTL-formula ◯A is equivalent to ◇^+_[1,1]A, and A 𝒰 B under the irreflexive semantics is equivalent to A 𝒰_(0,∞) B; conversely, ◇^+_[2,3]A is clearly equivalent to the LTL-formula ◯◯A ∨ ◯◯◯A. However, the metric operators are more succinct, which explains why MTL-satisfiability over (ℤ,<) is ExpSpace-complete <cit.> whereas LTL-satisfiability is PSpace-complete <cit.>. In the next section, we show that consistency checking for datalogMTL programs is ExpSpace-complete for combined complexity. It follows from Proposition <ref> that answering datalogMTL queries is ExpSpace-complete as well. On the other hand, we also prove that answering propositional datalogMTL queries is P-hard for data complexity, and that the extension of datalogMTL with diamond operators in the head of rules leads to undecidability. § COMPLEXITY OF ANSWERING datalogMTL QUERIES Observe first that every datalogMTL program Π can be transformed (using polynomially many fresh predicates) into a datalogMTL program in normal form that only contains rules of the form

P(τ) ← ⋀_i∈I P_i(τ_i),  ⊥ ← ⋀_i∈I P_i(τ_i),
P(τ) ← P_1(τ_1) 𝒮_ϱ P_2(τ_2),  P(τ) ← P_1(τ_1) 𝒰_ϱ P_2(τ_2),
P(τ) ← ⊟_ϱ P_1(τ_1),  P(τ) ← ⊞_ϱ P_1(τ_1),

and gives the same certain answers as Π over any data instance. (In particular, datalogMTL programs in normal form contain no occurrences of the diamond operators.) For example, we can replace the rule ⊞_ϱ' P(τ) ← P_1(τ_1) ∧ ⊟_ϱ P_2(τ_2) in Π with the three rules

P'(τ) ← P_1(τ_1) ∧ P'_2(τ_2),  P'_2(τ_2) ← ⊟_ϱ P_2(τ_2),  P(τ) ← ⊤ 𝒮_ϱ' P'(τ),

where P' is a fresh predicate of the same arity as P and P'_2 a fresh predicate of the same arity as P_2. Moreover, we can restrict attention to programs and data instances whose intervals take one of the following two forms:
– (t_1, t_2) with t_1,t_2 ∈ ℚ_2 ∪ {-∞,∞},
– [t,t] with t ∈ ℚ_2; such intervals are called punctual.
For example, a data instance 𝒟 = 𝒟_0 ∪ {P(c)@(t_1, t_2]} is equivalent to the data instance 𝒟' = 𝒟_0 ∪ {P(c)@(t_1, t_2), P(c)@[t_2, t_2]} in the sense that it gives the same certain answers as 𝒟; the rule P(v) ← ⊟_(r_1, r_2] P'(v) is equivalent to P(v) ← ⊟_(r_1, r_2) P'(v) ∧ ⊟_[r_2, r_2] P'(v), whereas the rule P(v) ← P_1(v) 𝒮_(r_1, r_2] P_2(v) is equivalent to the pair of rules P(v) ← P_1(v) 𝒮_(r_1, r_2) P_2(v) and P(v) ← P_1(v) 𝒮_[r_2, r_2] P_2(v). We use the following notation. We assume that ⟨ is one of ( and [, while ⟩ is one of ) and ]. Given an interval ι = ⟨ι_b, ι_e⟩ and a range ϱ, we set

ι ⊕ ϱ = ⟨ι_b + r, ι_e + r⟩, if ϱ = [r,r], and (ι_b + r_1, ι_e + r_2), if ϱ = (r_1, r_2);
ι ⊖ ϱ = ⟨ι_b - r, ι_e - r⟩, if ϱ = [r,r], and (ι_b - r_2, ι_e - r_1), if ϱ = (r_1, r_2).

In other words, ι ⊕ ϱ = {t + k | t ∈ ι and k ∈ ϱ} and ι ⊖ ϱ = {t - k | t ∈ ι and k ∈ ϱ}. We also set

ι ⊖^∀ ϱ = ⟨ι_b - r, ι_e - r⟩, if ϱ = [r,r]; [ι_b - r_1, ι_e - r_2], if ϱ = (r_1, r_2) and r_2, ι_e ∈ ℚ_2; [ι_b - r_1, ∞), if ϱ = (r_1, r_2) and r_2 = ∞ or ι_e = ∞;
ι ⊕^∀ ϱ = ⟨ι_b + r, ι_e + r⟩, if ϱ = [r,r]; [ι_b + r_2, ι_e + r_1], if ϱ = (r_1, r_2) and r_2, ι_b ∈ ℚ_2; (-∞, ι_e + r_1], if ϱ = (r_1, r_2) and r_2 = ∞ or ι_b = -∞.

We assume that ι ⊖^∀ ϱ and ι ⊕^∀ ϱ are only defined if r_2 - r_1 ≤ ι_e - ι_b, in which case we write ϱ ⊑ ι. Thus, ι ⊖^∀ ϱ is defined just in case there is t' such that t' + k ∈ ι for all k ∈ ϱ; symmetrically, ι ⊕^∀ ϱ is defined just in case there is t' such that t' - k ∈ ι for all k ∈ ϱ.
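A minimal Python sketch (ours) of the four shift operations on intervals, tracking endpoints only and ignoring the open/closed bookkeeping and the punctual case:

    def plus(iv, rho):
        """Weak shift ι ⊕ ϱ = {t + k : t ∈ ι, k ∈ ϱ} on endpoint pairs."""
        (a, b), (r1, r2) = iv, rho
        return (a + r1, b + r2)

    def minus(iv, rho):
        """Weak shift ι ⊖ ϱ = {t - k : t ∈ ι, k ∈ ϱ}."""
        (a, b), (r1, r2) = iv, rho
        return (a - r2, b - r1)

    def minus_all(iv, rho):
        """Strong shift ι ⊖^∀ ϱ = {t : t + k ∈ ι for all k ∈ ϱ};
        defined only if ϱ ⊑ ι, i.e. r2 - r1 ≤ ι_e - ι_b."""
        (a, b), (r1, r2) = iv, rho
        return (a - r1, b - r2) if r2 - r1 <= b - a else None

    def plus_all(iv, rho):
        """Strong shift ι ⊕^∀ ϱ = {t : t - k ∈ ι for all k ∈ ϱ}."""
        (a, b), (r1, r2) = iv, rho
        return (a + r2, b + r1) if r2 - r1 <= b - a else None

    iv, rho = (4.0, 12.0), (1.0, 3.0)
    print(plus(iv, rho), plus_all(iv, rho))   # (5.0, 15.0) (7.0, 13.0)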
The picture below illustrates the intuition behind ι ⊖ ϱ and ι ⊖^∀ ϱ, for non-punctual ϱ, and the difference between them:

[Figure: timelines contrasting the weak shift ι ⊖ ϱ with the strong shift ι ⊖^∀ ϱ for a non-punctual range ϱ = (r_1, r_2): the weak shift moves the left endpoint of ι by r_2 and the right one by r_1 and opens both ends, while the strong shift yields the closed interval [ι_b - r_1, ι_e - r_2].]

Furthermore, we write
– ⋂_i∈I ι_i ≠ ∅ to say that the intersection of the intervals ι_i, for i ∈ I, is non-empty;
– ⋂_i∈I ι_i for the intersection of the intervals ι_i, provided that ⋂_i∈I ι_i ≠ ∅; otherwise ⋂_i∈I ι_i is undefined;
– ⋃_i∈I ι_i for the union of the intervals ι_i, provided that ⋃_i∈I ι_i is a single interval; otherwise ⋃_i∈I ι_i is undefined;
– ι^c for the closure of an interval ι, that is,
ι^c = [ι_b, ι_e] for any ι = ⟨ι_b, ι_e⟩. Suppose now that we are given a datalogMTL program Π (in normal form) and a data instance 𝒟. We define a (possibly infinite) set ℭ_Π,𝒟 of atoms of the form P(c)@ι or ⊥@ι that contains all answers to datalogMTL queries with Π over 𝒟. The construction is essentially the standard chase procedure from database theory <cit.> adapted to time intervals and the MTL operators by mimicking their semantics. The only new chase rule is coalescing (coal), which merges (possibly infinitely many) smaller intervals into the larger one they cover. Because of this rule, our chase construction requires transfinite recursion; see also the work by Bresolin et al. and by Artale et al. Let 𝔄 be a set of atoms of the form P(c)@ι or ⊥@ι built from the predicates and constants of Π and 𝒟. Denote by cl(𝔄) the result of applying exhaustively and non-recursively the following rules to 𝔄:

(coal) if P(c)@ι_i ∈ 𝔄, for all i ∈ I with a possibly infinite set I, and ⋃_i∈I ι_i is defined, then we add P(c)@⋃_i∈I ι_i to 𝔄;

(horn) if P(c) ← ⋀_i∈I P_i(c_i) is an instance of a rule in Π with all P_i(c_i)@ι_i in 𝔄 and ⋂_i∈I ι_i ≠ ∅, then we add P(c)@⋂_i∈I ι_i to 𝔄; if ⊥ ← ⋀_i∈I P_i(c_i) is an instance of a rule in Π with all P_i(c_i)@ι_i in 𝔄 and ⋂_i∈I ι_i ≠ ∅, then we add ⊥@⋂_i∈I ι_i to 𝔄;

(𝒮_ϱ) if P(c) ← P_1(c_1) 𝒮_ϱ P_2(c_2) is an instance of a rule in Π with P_i(c_i)@ι_i ∈ 𝔄 for i ∈ {1,2}, ι_1^c ∩ ι_2 ≠ ∅, and ((ι_1^c ∩ ι_2) ⊕ ϱ) ∩ ι_1^c ≠ ∅, then we add P(c)@((ι_1^c ∩ ι_2) ⊕ ϱ) ∩ ι_1^c to 𝔄;

[Figure: for ϱ = (r_1, r_2), given P_2 on ι_2 and P_1 on ι_1, the rule (𝒮_ϱ) derives P on the interval ((ι_1^c ∩ ι_2) ⊕ ϱ) ∩ ι_1^c.]

(⊞_ϱ) if P(c) ← ⊞_ϱ P_1(c_1) is an instance of a rule in Π with P_1(c_1)@ι ∈ 𝔄 and ϱ ⊑ ι, then we add P(c)@(ι ⊖^∀ ϱ) to 𝔄;

(𝒰_ϱ) if P(c) ← P_1(c_1) 𝒰_ϱ P_2(c_2) is an instance of a rule in Π with P_i(c_i)@ι_i ∈ 𝔄, ι_1^c ∩ ι_2 ≠ ∅ and ((ι_1^c ∩ ι_2) ⊖ ϱ) ∩ ι_1^c ≠ ∅, then we add P(c)@((ι_1^c ∩ ι_2) ⊖ ϱ) ∩ ι_1^c to 𝔄;

(⊟_ϱ) if P(c) ← ⊟_ϱ P_1(c_1) is an instance of a rule in Π with P_1(c_1)@ι ∈ 𝔄 and ϱ ⊑ ι, then we add P(c)@(ι ⊕^∀ ϱ) to 𝔄.
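For illustration, a small Python sketch (ours) of one application of the (𝒮_ϱ) chase rule, with intervals represented by their endpoints only:

    def closure(iv):                 # ι^c: with endpoints only, the identity
        return iv

    def intersect(i, j):
        lo, hi = max(i[0], j[0]), min(i[1], j[1])
        return (lo, hi) if lo <= hi else None

    def plus(iv, rho):               # weak shift ι ⊕ ϱ on endpoint pairs
        return (iv[0] + rho[0], iv[1] + rho[1])

    def since_step(i1, i2, rho):
        """One application of (S_ϱ): from P1@ι1 and P2@ι2 derive
        P on ((ι1^c ∩ ι2) ⊕ ϱ) ∩ ι1^c."""
        base = intersect(closure(i1), i2)
        return intersect(plus(base, rho), closure(i1)) if base else None

    # P1 @ (2, 5), P2 @ (1, 3), ϱ = (1, 2): the rule derives P @ (3, 5)
    print(since_step((2.0, 5.0), (1.0, 3.0), (1.0, 2.0)))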
We set cl^0(𝒟) = 𝒟 ∪ {⊤@(-∞,∞)} and, for any successor ordinal ξ+1 and limit ordinal ζ,

cl^ξ+1(𝒟) = cl(cl^ξ(𝒟)),  cl^ζ(𝒟) = ⋃_ξ<ζ cl^ξ(𝒟),  and  ℭ_Π,𝒟 = cl^ω_1(𝒟),

where ω_1 is the first uncountable ordinal (as ℭ_Π,𝒟 is countable, there is an ordinal α < ω_1 such that cl^α(𝒟) = cl^β(𝒟) for all β ≥ α). We regard ℭ_Π,𝒟 both as a set of atoms of the form P(c)@ι or ⊥@ι and as an interpretation where, for any t ∈ ℝ, any P (different from ⊥), and any tuple c of individual constants, we have ℭ_Π,𝒟,t ⊨ P(c) iff P(c)@ι ∈ ℭ_Π,𝒟 and t ∈ ι, for some interval ι. The domain of ℭ_Π,𝒟 is the set ind(𝒟) ∪ ind(Π) comprising the individual constants occurring in 𝒟 and Π. We illustrate the definition above by a simple example: let Π have two rules P ← ⊟_[1,1] P and Q ← ⊞_(0,∞) P, and let 𝒟 = {P@(0,1]}. The first ω steps of the construction of ℭ_Π,𝒟 will produce, using the rules (⊟_ϱ) and (coal), the atoms P@(n,n+1] and P@(0,n+1], for n < ω. In step ω+1, (coal) will give P@(0,∞), and then (⊞_ϱ) will return Q@[0,∞). Lemma. Let Π be a datalogMTL program and 𝒟 a data instance. Then, for any predicate symbol P from Π and 𝒟, any tuple c of constants from 𝒟 and Π, and any interval ι, (i) P(c)@ι ∈ ℭ_Π,𝒟 implies 𝔐,t ⊨ P(c), for all t ∈ ι and all models 𝔐 of Π and 𝒟; (ii) if ⊥@ι ∉ ℭ_Π,𝒟 for any ι, then ℭ_Π,𝒟 ⊨ (Π,𝒟); otherwise, Π and 𝒟 are inconsistent. Proof. (i) Suppose that 𝔐 is a model of Π and 𝒟, and that P(c)@ι ∈ ℭ_Π,𝒟. Let ξ be the smallest ordinal such that P(c)@ι ∈ cl^ξ(𝒟). We show that 𝔐,t ⊨ P(c) for all t ∈ ι by induction on ξ. If ξ = 0, then P(c)@ι ∈ 𝒟, and since 𝔐 satisfies every assertion in 𝒟, we are done. If ξ = ξ'+1, then P(c)@ι was obtained from cl^ξ'(𝒟) by applying one of the construction rules for ℭ_Π,𝒟. Suppose P(c)@ι is P(c)@⋃_i∈I ι_i obtained by (coal). By the induction hypothesis, 𝔐,t ⊨ P(c) for all t ∈ ι_i and i ∈ I. Clearly, 𝔐,t ⊨ P(c) for all t ∈ ⋃_i∈I ι_i, and so for all t ∈ ι. The case of (horn) is similar (with intersection in place of union). Suppose P(c)@ι is obtained by (𝒮_ϱ) from P_i(c_i)@ι_i, i ∈ {1,2}. By the induction hypothesis, 𝔐,t ⊨ P_i(c_i) for every t ∈ ι_i. Take an arbitrary t ∈ ((ι_1^c ∩ ι_2) ⊕ ϱ) ∩ ι_1^c. Then there exists t' ∈ ι_1^c ∩ ι_2 such that t - t' ∈ ϱ and 𝔐,t' ⊨ P_2(c_2). Moreover, we have 𝔐,s ⊨ P_1(c_1) for all s ∈ (t', t). Therefore, 𝔐,t ⊨ P_1(c_1) 𝒮_ϱ P_2(c_2). If P(c)@ι is obtained by (⊞_ϱ) from P_1(c_1)@ι', the proof is analogous, by considering t ∈ ι. The remaining rules are treated similarly. (ii) Suppose ⊥@ι ∉ ℭ_Π,𝒟 for any ι. By definition, 𝒟 ⊆ ℭ_Π,𝒟, and so ℭ_Π,𝒟 ⊨ P(c)@ι for every P(c)@ι ∈ 𝒟. To show that all rules in Π are satisfied by ℭ_Π,𝒟, take an assignment ν and a rule P(τ) ← ⋀_i∈I P_i(τ_i) from Π, and suppose that ℭ_Π,𝒟,t ⊨^ν P_i(τ_i) for all i ∈ I. By the definition of ℭ_Π,𝒟, it follows that ℭ_Π,𝒟,t ⊨ P_i(ν(τ_i)) and P_i(ν(τ_i))@ι_i ∈ ℭ_Π,𝒟, for some ι_i ∋ t. Moreover, there are ordinals ξ_i, i ∈ I, such that P_i(ν(τ_i))@ι_i ∈ cl^ξ_i(𝒟). By the rule (horn), we then have P(ν(τ))@⋂_i∈I ι_i ∈ cl^max{ξ_i | i∈I}+1(𝒟), whence P(ν(τ))@⋂_i∈I ι_i ∈ ℭ_Π,𝒟, and so ℭ_Π,𝒟,t ⊨ P(ν(τ)). Now, consider a rule ⊥ ← ⋀_i∈I P_i(τ_i) and suppose that ℭ_Π,𝒟,t ⊨^ν P_i(τ_i) for all i ∈ I. By the argument above, we would then have ⊥@⋂_i∈I ι_i ∈ ℭ_Π,𝒟, which is a contradiction. For a rule P(τ) ← P_1(τ_1) 𝒮_ϱ P_2(τ_2), take an arbitrary t and suppose that ℭ_Π,𝒟,t_2 ⊨^ν P_2(τ_2) for some t_2 with t - t_2 ∈ ϱ, and ℭ_Π,𝒟,s ⊨^ν P_1(τ_1) for all s ∈ (t_2, t). By the construction of ℭ_Π,𝒟, it follows that P_2(ν(τ_2))@ι_2 ∈ ℭ_Π,𝒟 for some ι_2 ∋ t_2. Moreover, there are finitely many intervals ι'_i, i ∈ I, such that (t_2, t) ⊆ ⋃_i∈I ι'_i and P_1(ν(τ_1))@ι'_i ∈ ℭ_Π,𝒟. By the rule (coal), P_1(ν(τ_1))@ι_1 ∈ ℭ_Π,𝒟 for ι_1 = ⋃_i∈I ι'_i. It follows that t_2, t ∈ ι_1^c, and so ι_1^c ∩ ι_2 ≠ ∅ and t ∈ ((ι_1^c ∩ ι_2) ⊕ ϱ) ∩ ι_1^c.
Thus, by the rule (𝒮_ϱ), we have P(ν(τ))@((ι_1^c ∩ ι_2) ⊕ ϱ) ∩ ι_1^c ∈ ℭ_Π,𝒟. Therefore, ℭ_Π,𝒟,t ⊨^ν P(τ). The remaining rules are considered in the same manner. That ⊥@ι ∈ ℭ_Π,𝒟, for some ι, implies inconsistency of 𝒟 and Π follows from (i). If ⊥@ι ∉ ℭ_Π,𝒟 for any ι, we call ℭ_Π,𝒟 the canonical (or minimal) model of Π and 𝒟. We now establish an important property of ℭ_Π,𝒟 that will allow us to reduce consistency checking for datalogMTL programs and data to the satisfiability problem for formulas of the linear temporal logic LTL over (ℤ,<). Recall that the greatest common divisor gcd(N) of a finite set N ⊆ ℚ (not all of whose elements are 0) is the largest number gcd(N) > 0 such that every n ∈ N is divisible by gcd(N) (in the sense that n/gcd(N) ∈ ℤ). It is known that gcd(N) always exists and gcd(N) ≤ ∏_n∈N |n|. It is easy to see that, for any finite set N ⊆ ℚ_2 (not all of whose elements are 0), we have gcd(N) = c/2^m for an odd c ∈ ℕ, where m is the maximal natural number such that some n ∈ N, written as an irreducible fraction, has denominator 2^m. Thus, gcd(N) can be computed and stored using space polynomial in |N| (the size of the binary encoding of N). To make further definitions simpler, it will be convenient to set gcd(N) = 1 if N = {0}. Given a datalogMTL program Π and a data instance 𝒟, we take d = gcd(num(Π,𝒟)). Denote by sec_Π,𝒟 the set of all intervals of the form [kd, kd] and (kd, (k+1)d), for k ∈ ℤ. Clearly, sec_Π,𝒟 is a partition of ℚ_2. We represent sec_Π,𝒟 as

sec_Π,𝒟 = {…, σ_-3, σ_-2, σ_-1, σ_0, σ_1, σ_2, σ_3, …},

where σ_0 = [0,0], σ_1 = (0, d), σ_2 = [d,d], σ_3 = (d,2d), σ_-1 = (-d, 0), etc. Thus, σ_i is punctual if i is even and non-punctual if i is odd. We refer to the σ_i as the sections of sec_Π,𝒟. Lemma. For every atom P(c) and every σ ∈ sec_Π,𝒟, we either have ℭ_Π,𝒟,t ⊨ P(c) for all t ∈ σ, or ℭ_Π,𝒟,t ⊭ P(c) for all t ∈ σ. Proof. It suffices to show that every interval ι such that P(c)@ι ∈ ℭ_Π,𝒟 takes one of the following forms: (-∞,∞), ⟨dk, ∞), (-∞, dk⟩, ⟨dk, dk'⟩, where k,k' ∈ ℤ. This can readily be done by induction on the construction of ℭ_Π,𝒟: indeed, applied to a set of atoms of this form, the operator cl again yields a set of such atoms. Our aim now is to encode the structure of ℭ_Π,𝒟 given by Lemma <ref> by means of an LTL-formula φ_Π,𝒟 that is satisfiable over (ℤ,<) iff Π and 𝒟 are consistent. The LTL-formula φ_Π,𝒟 contains propositional variables of the form P^c, where P is a predicate symbol from Π and 𝒟 of arity m and c an m-tuple of individual constants from 𝒟 and Π, as well as two additional propositional variables odd and even.
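Before spelling out φ_Π,𝒟, here is a small Python sketch (ours) of the arithmetic it relies on: the gcd d of the dyadic numbers in Π and 𝒟, and the index of the section σ_i containing a given timestamp (the clauses below shift by 2r/d sections).

    from fractions import Fraction
    from functools import reduce
    from math import gcd

    def dyadic_gcd(nums):
        """gcd of a finite set of dyadic rationals: over the common
        denominator 2^m it equals gcd(numerators) / 2^m."""
        fs = [Fraction(n) for n in nums]       # exact for finite binary fractions
        m = max(f.denominator for f in fs)     # a power of two for dyadics
        return Fraction(reduce(gcd, (int(f * m) for f in fs)), m)

    def section(t, d):
        """Index i of the section σ_i containing t ≥ 0: even i for the
        punctual [kd, kd], odd i for (kd, (k+1)d)."""
        q = Fraction(t) / d
        k = q.numerator // q.denominator
        return 2 * k if q == k else 2 * k + 1

    d = dyadic_gcd(["0.75", "1.5", "10"])
    print(d, section("1.5", d), section("1.625", d))   # 1/4 12 13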
We define φ_Π,𝒟 as a conjunction of the following clauses, where ν ranges over the assignments of individual constants from 𝒟 and Π to the terms in Π, and ⊡φ is a shorthand for φ ∧ □φ ∧ ■φ:

– even ∧ ⊡(even ↔ ◯odd) ∧ ⊡(odd ↔ ◯even);

– ⊡(P^ν(τ) ← ⋀_i∈I P_i^ν(τ_i)), for every rule P(τ) ← ⋀_i∈I P_i(τ_i) in Π;

– ⊡(⊥ ← ⋀_i∈I P_i^ν(τ_i)), for every rule ⊥ ← ⋀_i∈I P_i(τ_i) in Π;

– for every rule P(τ) ← P_1(τ_1) 𝒮_ϱ P_2(τ_2) in Π with ϱ = [r,r], we require two clauses:

⊡(P^ν(τ) ← even ∧ ◯^-2r/d P_2^ν(τ_2) ∧ ⋀_-2r/d<j<0 ◯^j P_1^ν(τ_1)),
⊡(P^ν(τ) ← odd ∧ ◯^-2r/d P_2^ν(τ_2) ∧ ⋀_-2r/d≤j≤0 ◯^j P_1^ν(τ_1)),

where ◯^n φ = ◯…◯φ (n-fold) if n > 0, ◯^0 φ = φ, and ◯^n φ = ◯^-1…◯^-1 φ (|n|-fold) if n < 0, with ◯^-1 the previous-time operator;

– for every rule P(τ) ← P_1(τ_1) 𝒮_ϱ P_2(τ_2) in Π with ϱ = (r_1,r_2), we require four clauses:

⊡(P^ν(τ) ← even ∧ ⋁_-2r_2/d<k<-2r_1/d (◯^k P_2^ν(τ_2) ∧ ◯^k even ∧ ⋀_k<j<0 ◯^j P_1^ν(τ_1))),
⊡(P^ν(τ) ← even ∧ ⋁_-2r_2/d<k<-2r_1/d (◯^k P_2^ν(τ_2) ∧ ◯^k odd ∧ ⋀_k≤j<0 ◯^j P_1^ν(τ_1))),
⊡(P^ν(τ) ← odd ∧ ⋁_-2r_2/d≤k≤-2r_1/d (◯^k P_2^ν(τ_2) ∧ ◯^k even ∧ ⋀_k<j≤0 ◯^j P_1^ν(τ_1))),
⊡(P^ν(τ) ← odd ∧ ⋁_-2r_2/d≤k≤-2r_1/d (◯^k P_2^ν(τ_2) ∧ ◯^k odd ∧ ⋀_k≤j≤0 ◯^j P_1^ν(τ_1)));

– for every rule P(τ) ← P_1(τ_1) 𝒮_ϱ P_2(τ_2) in Π with ϱ = (r_1,∞), we require two clauses:

⊡(P^ν(τ) ← even ∧ ⋀_-2r_1/d≤j<0 ◯^j P_1^ν(τ_1) ∧ ◯^-2r_1/d (P_1^ν(τ_1) 𝒮 (even ∧ P_2^ν(τ_2)) ∨ P_1^ν(τ_1) 𝒮 (odd ∧ P_1^ν(τ_1) ∧ P_2^ν(τ_2)))),
⊡(P^ν(τ) ← odd ∧ ⋀_-2r_1/d≤j≤0 ◯^j P_1^ν(τ_1) ∧ ◯^-2r_1/d (P_2^ν(τ_2) ∨ P_1^ν(τ_1) 𝒮 (even ∧ P_2^ν(τ_2)) ∨ P_1^ν(τ_1) 𝒮 (odd ∧ P_1^ν(τ_1) ∧ P_2^ν(τ_2))))

(recall that P 𝒮 Q holds at i iff there exists k < i such that Q holds at k and P holds at all j with k < j < i);

– similar clauses for the rules of the form P(τ) ← P_1(τ_1) 𝒰_ϱ P_2(τ_2) (here we need the 'until' operator 𝒰), P(τ) ← ⊟_ϱ P_1(τ_1) and P(τ) ← ⊞_ϱ P_1(τ_1) in Π;

– for every fact P(c)@ι in 𝒟, we need the clauses:

◯^2r/d P^c, if ι = [r,r],
⋀_2r_1/d<i<2r_2/d ◯^i P^c, if ι = (r_1, r_2) and r_1, r_2 ∈ ℚ_2,
◯^2r_1/d □ P^c, if ι = (r_1, r_2), r_1 ∈ ℚ_2 and r_2 = ∞,
◯^2r_2/d ■ P^c, if ι = (r_1, r_2), r_1 = -∞ and r_2 ∈ ℚ_2,
⊡ P^c, if ι = (r_1, r_2), r_1 = -∞ and r_2 = ∞.

Lemma. (Π,𝒟) is consistent iff φ_Π,𝒟 is satisfiable.
By the two lemmas above, aprogram Π is consistent with a data instanceiff theformula φ_Π, is satisfiable. Thus, a consistency checkingExpSpace algorithm can first construct φ_Π,, which requires exponential time in the size of Π and .Indeed, the greatest common divisor of the set (Π, ) can be computed in polynomial time. Theformula φ_Π, contains exponentially many clauses (as there are exponentially many assignments ν) of at most exponential size (as they contain 2t/d conjuncts or disjuncts, where t is a number from Π or ).After that we can run a standard PSpace satisfiability checking algorithm for ; see, e.g., the work by Sistla Clarke85. We establish the matching lower boundby reduction of the non-halting problem for deterministic Turing machines with an exponential tape.Let M a deterministic Turing machinethat requires 2^f(m) cells of the tape given an input of length m, for some polynomial f. Let n = f(m). Without loss of generality, we can assume that M never runs outside the first 2^n cells.Suppose M= (Q, Γ, #, Σ, δ, q_0, q_h), where Q is a finite set of states, Γ a tape alphabet, #∈Γ the blank symbol, Σ⊆Γ a set of input symbols, δ (Q∖{q_h}) ×Γ→ Q ×Γ×{L,R} a transition function, and q_0,q_h ∈ Q are the initial and halting states, respectively. Let a⃗=a_1… a_m be an input for M. We construct a propositionalprogram Π and a data instancesuch that they are not consistent iff M accepts a⃗.In our encoding, we employ the following propositional variables, where a ∈Γ, q ∈ Q:– H_q,a indicating that a cell is read by the head, the current state of the machine is q, and the cell contains a; – N_a indicating that a cell is not read by the head and contains a, – 𝖿𝗂𝗋𝗌𝗍 and 𝗅𝖺𝗌𝗍 marking the first and last cells of a configuration, respectively. The program Π consists of the following rules, for a,a', a”∈Γ, q,q'∈ Q: ⊞_2^n+1 H_q', a”← H_q,a⊞_1 N_a”,⊞_2^n N_a'← H_q,a , if δ(q,a) = (q',a',R), ⊞_2^n-1 H_q', a”← H_q,a⊟_1 N_a”,⊞_2^n N_a'← H_q,a , if δ(q,a) = (q',a',L), ⊞_2^n N_a←⊟_1 N_a'N_a⊞_1 N_a”,⊞_2^n N_a←⊟_1 H_q, a'N_a⊞_1 N_a”,if δ(q,a') ≠ (r,b,R)for all r,b,⊞_2^n N_a←⊟_1 N_a' N_a⊞_1 H_q, a”,if δ(q,a”) ≠ (r,b,L)for all r,b,⊞_2^n N_a← N_a𝖿𝗂𝗋𝗌𝗍⊞_1N_a',⊞_2^n N_a← N_a𝖿𝗂𝗋𝗌𝗍⊞_1 H_q, a',if δ(q,a') ≠ (r,b,L)for all r,b,⊞_2^n N_a←⊟_1 N_a'N_a 𝗅𝖺𝗌𝗍,⊞_2^n N_a←⊟_1 N_q, a'N_a 𝗅𝖺𝗌𝗍,if δ(q,a') ≠ (r,b,R)for all r,b,⊞_2^n𝖿𝗂𝗋𝗌𝗍←𝖿𝗂𝗋𝗌𝗍, ⊞_2^n𝗅𝖺𝗌𝗍←𝗅𝖺𝗌𝗍,← H_q_h, a, ⊞_1 N_#← N_#_(0, ∞) N_#^<, where ⊞_r is an abbreviation for ⊞_[r,r] and similarly for ⊟_r. Letcontain the following facts: N_a_i@[i,i],for1 < i ≤ m, N_#@[m+1,m+1],N_#^<@[2^n,2^n], H_q_0, a_1@[1,1], 𝖿𝗂𝗋𝗌𝗍@[1,1], 𝗅𝖺𝗌𝗍@[2^n, 2^n]. The program represents the computation of M on a⃗ as a sequence of configurations. The initial one is spread over the time instants1, …, 2^n, from which the first m instants represent a⃗ and the remaining ones are #. The second configuration uses the next 2^n instants (i.e., 2^n+1, …, 2^n + 2^n), etc. It is routine to check that M halts on a⃗ iff Π andare inconsistent. Note thatallows punctual intervals of the form [r,r] as ranges of temporal operators, and that full propositionalwith such intervals is undecidable <cit.>.Now we turn to the data complexity ofand show the following result: Consistency checking and answering propositionalqueries is P-hard for data complexity under LogSpace reductions.We establish this lower bound by reduction of the monotone circuit value problem, which is known to be P-complete <cit.>. Let C be a monotone circuit with input gates having fan-in 1 and all other gates fan-in 2. 
We assume that the gates are enumerated by consecutive positive integers, so that if there is an edge from n to m then n < m. Let N = 2^k, for some k ∈ ℕ, be the minimal number that is greater than or equal to the maximal gate number. We encode the computation of C on an input α by a data instance _C with the following punctual facts, where [n] stands for [n,n]:– V[2n + n/N], if n is an input gate and α(n) = V ∈ {T,F};– D[2n + n/N], if n is an OR gate;– C[2n + n/N], if n is an AND gate;– I_0[2n + m/N], I_1[2n + k/N], if n is a gate with input gates m and k. Let Π_C be a program with the rules T ← _[2,2] T, F ← _[2,2] F, T ← _[0,1](I_0 T) D, F ← _[0,1](I_0 F) C, T ← _[0,1](I_1 T) D, F ← _[0,1](I_1 F) C, F ← _[0,1](I_0 F) _[0,1](I_1 F) D, T ← _[0,1](I_0 T) _[0,1](I_1 T) C. Suppose n is the output gate. Then it is straightforward to check that the value of C on α is T iff (Π_C, _C) entails T@[2n + n/N]. This immediately implies the required hardness for the query answering problem. An example of a circuit C with an assignment α, and an initial part of the canonical model of (Π_C, _C), are shown below, with the black symbols above the timestamps indicating what is given in _C and the grey ones what is implied by Π_C: [Figure omitted: the circuit consists of input gates 0 (T), 1 (F) and 2 (T), an OR gate 3 with inputs 0 and 1, an AND gate 4 with inputs 1 and 2, and an AND gate 5 with inputs 3 and 4; the timeline shows the facts of _C and the atoms derived by Π_C at the timestamps 0, 1/8, …, 53/8.] To show P-hardness of the consistency problem, it suffices to add the fact P[2n + n/N] to _C, for a fresh P, and the axiom ← P T to Π_C. The exact data complexity of answering propositional queries remains open.
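To make the encoding concrete, here is a small Python sketch (an illustration of ours, not part of the proof) that generates the punctual facts of _C for the example circuit above; the circuit representation and the fact format are our own choices:

```python
from fractions import Fraction

def circuit_to_data(circuit, alpha):
    """Punctual facts of D_C for a monotone circuit.

    `circuit` maps a gate number n to ('in',) for an input gate, or to
    ('or', m, k) / ('and', m, k) for a gate with input gates m and k;
    `alpha` assigns 'T'/'F' to the input gates.  N is the minimal power
    of 2 greater than or equal to the maximal gate number.
    """
    N = 1
    while N < max(circuit):
        N *= 2
    facts = []                              # pairs (variable, timestamp)
    for n, gate in circuit.items():
        t = 2 * n + Fraction(n, N)
        if gate[0] == 'in':
            facts.append((alpha[n], t))     # V[2n + n/N] with V in {T, F}
        else:
            facts.append(('D' if gate[0] == 'or' else 'C', t))
            m, k = gate[1], gate[2]
            facts.append(('I_0', 2 * n + Fraction(m, N)))
            facts.append(('I_1', 2 * n + Fraction(k, N)))
    return facts

# The example circuit: inputs 0 (T), 1 (F), 2 (T); gate 3 = OR(0,1);
# gate 4 = AND(1,2); gate 5 = AND(3,4); hence N = 8.
circuit = {0: ('in',), 1: ('in',), 2: ('in',),
           3: ('or', 0, 1), 4: ('and', 1, 2), 5: ('and', 3, 4)}
alpha = {0: 'T', 1: 'F', 2: 'T'}
for var, t in sorted(circuit_to_data(circuit, alpha), key=lambda f: f[1]):
    print(f'{var}[{t}]')    # e.g. T[0], F[17/8], T[17/4], I_0[6], ...
```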
It is worth noting that answering ontology-mediated queries with propositional ontologies is NC^1-complete for data complexity <cit.>, while answering propositional datalog queries with the Halpern-Shoham operators is P-complete for data complexity <cit.>. The diamond operators _ and _ are disallowed in the head of rules. Denote by ^ the extension of the language that allows both box and diamond operators in the head of rules. We show now that this language has much more expressive power and can encode 2-counter Minsky machines, which gives the following theorem; cf. the work by DBLP:journals/corr/abs-1305-6137: Consistency checking for propositional ^ programs is undecidable. We use some ideas of DBLP:journals/corr/abs-1305-6137, where a non-Horn fragment was shown to be undecidable. The proof is by reduction of the undecidable non-halting problem for Minsky machines: given a 2-counter Minsky machine, decide whether it does not halt starting from 0 in both counters. Suppose we are given a Minsky machine with counters C_1 and C_2 that has n-1 instructions of the form i: Increment(C_k), goto j, i: Decrement(C_k), goto j, i: If C_k = 0 then j_1 else j_2, where i, j, j_1 and j_2 are instruction indexes, k = 1, 2, and the n-th instruction is n: Halt. We encode successive configurations of the machine using the sequence [0,4), [4,8), [8,12), … of time intervals. The current instruction index is represented by a propositional variable P_i, for 1 ≤ i ≤ n, that holds at the first point, say 4m, of the interval [4m, 4m+4). The current value, say k_1, of the counter C_1 is encoded by exactly k_1 moments of time in the interval (4m+1, 4m+2) where the propositional variable C holds true. Similarly, the value k_2 of C_2 is encoded by exactly k_2 moments in the interval (4m+3, 4m+4) where the propositional variable C holds true. The initial configuration is encoded by the following data instance, where the variable Z indicates that both counters are 0: P_1@[0,0], Z@(1,2), Z@(3,4). For every i (1 ≤ i ≤ n) we require the rules ⊞_[0,1] Z ← P_i, ⊞_[2,3] Z ← P_i, ← Z C, ← Z N saying, in particular, that C cannot hold true outside the intended intervals (here N is an auxiliary variable). To simplify notation, we use the following abbreviations: = ⊞_[1,1], = ⊞_[3,3], and = ⊞_[4,4]. The machine instructions are encoded as follows (the instructions for C_2 are obtained by replacing with ): P_j_1 ← P_i ⊞_(0,1) Z, P_j_2 ← P_i _(0,1) C, i: if C_1 = 0 then j_1 ⊞_(0,1) CP ← P_i, ⊞_(0,1) CP ← P_i, else j_2 P_j ← P_i, ⊞_(0,1) IC ← P_i, ⊞_(0,1) CP ← P_i, i: Inc(C_1), goto j P_j ← P_i, ⊞_(0,1) DC ← P_i, ⊞_(0,1) CP ← P_i, i: Dec(C_1), goto j. Here the variable CP means copying of the counter value, DC means decrementing it by 1, and IC incrementing it by 1. To achieve this, we require the following rules: C ← CP C, Z ← CP Z, C ← DC C _(0,1) C, Z ← DC Z _(0,1) C, ⊞_[0,1] Z ← DC C ⊞_(0,1) Z, _(0,1) N ← ⊞_(0,1) IC ⊞_(0,1) Z, _(0,1) N ← C IC ⊞_(0,1) Z, C ← IC C, C ← IC N, Z ← IC Z _(0,1) N, ⊞_(0,1) Z ← IC N ⊞_(0,1) Z. We explain the intuition behind the most complex rules (<ref>)–(<ref>) that are used to model the increment of the counters. The rules (<ref>) mark a new time-point with the variable N in a block located after the last C-time-point in this block (or, according to the first axiom, N is placed anywhere in the block if the current value of a counter is 0). The rules (<ref>) insert C in the next block, where in the current block we have either C or N. The rules (<ref>) transfer Z from the current block to the next one, excluding the time-point where N holds. Finally, we add the rule ← P_n, n: Halt.
It is not hard to check that the program and data instance above are consistent iff the given 2-counter Minsky machine does not halt. The diamond operators in the head of rules can encode disjunction and thereby ruin `Horness'. Thus, the temporalised description logic ℰℒ with such rules is undecidable <cit.>; cf.also the work by GJK-IJCAI16. The addition of diamonds in the heads to the Horn fragment of the propositional Halpern-Shoham logic ℋ𝒮 can make a P-complete logicundecidable <cit.>. A distinctive feature of these formalisms is their two-dimensionality <cit.>, while propositionalis one-dimensional.Diamonds in the head of rules also ruin FO-rewritabilityof answering ontology-mediated queries with temporalised DL-Lite ontologies by increasing their data complexity to coNP <cit.>. The same construction actually shows that nonrecursivewith binary predicates and diamonds in the heads is coNP-hard.§ NONRECURSIVEAs none of theprograms required in our use cases is recursive, we now consider the classof nonrecursiveprograms. We first show that consistency checking (and so query answering) forprograms is PSpace-complete for combined complexity. Then we regard a givenprogram as fixed and reduce these problems to evaluating a (data-independent) FO(<)-formula over any given data, thereby establishing thatis in AC^0 for data complexity.More precisely, for a program Π, let ⋖ be the dependence relation on the predicate symbols in Π: we have P ⋖ Q iff Π contains a clause with P in the head and Q in the body. Π is called nonrecursive if P ⋖^+ P does not hold for any predicate symbol P in Π, where ⋖^+ is the transitive closure of ⋖. We denote by 𝖽𝖾𝗉𝗍𝗁_Π(P) the maximal number l such thatP_0 ⋖ P_1 ⋖…⋖ P_l = P. (Note that 𝖽𝖾𝗉𝗍𝗁_Π(P) = 0 iff either P does not occur in Π or P occurs only in the body of some rules.) The maximal 𝖽𝖾𝗉𝗍𝗁_Π(P) over all predicates P is denoted by𝖽𝖾𝗉𝗍𝗁(Π).It should be clear that, for any nonrecursive Π and any data instance , there exists some n ∈ℕ such that ^n+1() = ^n() = ℭ_Π,. Therefore, ℭ_Π, is finite.Denote by min and max the minimal and, respectively, maximal finite numbers that occur in the intervals from .Let K be the largest number occurring in Π. We then set M_l = min -K ×𝖽𝖾𝗉𝗍𝗁(Π) and M_r = max +K ×𝖽𝖾𝗉𝗍𝗁(Π). Let d = ((Π, )). The next lemma will be required for our PSpace algorithm checking consistency ofprograms. Let Π be aprogram. Then every interval ι such thatP(c)@ι∈ℭ_Π, or (c)@ι∈ℭ_Π, takes one of the following forms: (- ∞, ∞), ⟨ dk, ∞), (- ∞, dk ⟩, ⟨ dk, dk' ⟩, where k, k' ∈ℤ andM_l ≤ dk ≤ dk' ≤ M_r.That every interval in ℭ_Π, is of the form (- ∞, ∞), ⟨ dk, ∞), (- ∞, dk ⟩, ⟨ dk, dk' ⟩, where k, k' ∈ℤ, was observed in the proof of Lemma <ref>. Thus, we only need to establishthe bounds on dk and dk'. For each P, let 𝗁𝗂(P) and 𝗅𝗈(P) be the maximal and, respectively, minimal number dk ∈ℚ such that P(c)@ι∈ℭ_Π, and dk is an end-point of ι. Note that 𝗁𝗂(P) and 𝗅𝗈(P) can be undefined.We are going to show that 𝗁𝗂(P) is either undefined or 𝗁𝗂(P) ≤max + 𝖽𝖾𝗉𝗍𝗁_Π(P) K. (That 𝗅𝗈(P) is either undefined or 𝗅𝗈(P) ≥min - 𝖽𝖾𝗉𝗍𝗁_Π(P) K is left to the reader.) Clearly, this fact implies the required bounds on dk and dk'.The proof is by induction on the construction of ℭ_Π,. Let 𝗁𝗂^n(P) be the maximal dk ∈ℚ_2 such that P(c)@ι∈^n() and dk is an end-point of ι. We show by induction on n that either 𝗁𝗂^n(P) isundefined or 𝗁𝗂^n(P) ≤max + K 𝖽𝖾𝗉𝗍𝗁_Π(P).For the basis of induction, if 𝗁𝗂^0(P) is defined andP(c)@ι∈^0() is an atom mentioning 𝗁𝗂^0(P), then P(c)@ι∈ and 𝗁𝗂^0(P) ≤max. 
Assume next that n = n' + 1. Suppose 𝗁𝗂^n(P) is defined and let P(c)@ι ∈ ^n() be an atom mentioning 𝗁𝗂^n(P). If P(c)@ι ∈ ^n'(), we are done by the induction hypothesis. Otherwise, we consider how P(c)@ι was obtained. Suppose it was obtained by (coal) with ι = ⋃_i ∈ I ι_i. By the induction hypothesis, 𝗁𝗂^n'(P) ≤ max + K𝖽𝖾𝗉𝗍𝗁_Π(P), and so every number mentioned in {ι_i | i ∈ I} does not exceed max + K𝖽𝖾𝗉𝗍𝗁_Π(P). Thus, we have 𝗁𝗂^n(P) ≤ max + K𝖽𝖾𝗉𝗍𝗁_Π(P). Now suppose that P(c)@ι was obtained by (horn) from P_i(c_i)@ι_i, i ∈ I. Observe that 𝖽𝖾𝗉𝗍𝗁_Π(P_i) < 𝖽𝖾𝗉𝗍𝗁_Π(P) and, by the induction hypothesis, 𝗁𝗂^n'(P_i) ≤ max + K𝖽𝖾𝗉𝗍𝗁_Π(P_i). Since ι = ⋂_i ∈ I ι_i, the maximal number mentioned in ι cannot exceed max + K𝖽𝖾𝗉𝗍𝗁_Π(P). Thus, 𝗁𝗂^n(P) ≤ max + K𝖽𝖾𝗉𝗍𝗁_Π(P). Consider now the case when P(c)@ι was obtained by applying (_) to P_i(c_i)@ι_i, i ∈ {1,2}. By the induction hypothesis, the largest number mentioned in ι_i does not exceed max + K𝖽𝖾𝗉𝗍𝗁_Π(P_i). On the other hand, 𝖽𝖾𝗉𝗍𝗁_Π(P_i) < 𝖽𝖾𝗉𝗍𝗁_Π(P) and the maximal number in ι cannot be larger than the maximal number in {ι_i | i ∈ {1,2}} plus K. Thus, the maximal number in ι does not exceed max + K𝖽𝖾𝗉𝗍𝗁_Π(P_i) + K ≤ max + K𝖽𝖾𝗉𝗍𝗁_Π(P), and so 𝗁𝗂^n(P) ≤ max + K𝖽𝖾𝗉𝗍𝗁_Π(P). The remaining temporal rules are similar and left to the reader. Suppose we are given a program Π and a data instance . If Π and the data are inconsistent then, by Lemmas <ref> and <ref>, we have ⊥@ι ∈ ℭ_Π,, for some ι of the form (-∞, ∞), ⟨dk, ∞), (-∞, dk⟩, ⟨dk, dk'⟩, where k, k' ∈ ℤ and M_l ≤ dk ≤ dk' ≤ M_r. Thus, there is a derivation of ⊥@ι from Π and the data, that is, a tree whose root is ⊥@ι, whose leaves are some atoms from the data, and whose every non-leaf vertex results from applying one of the rules (coal), (horn), (_), (⊞_), (_), (⊟_) to the immediate predecessors of this vertex. If ⊥@ι ∈ ℭ_Π, then there is a derivation of ⊥@ι from Π and the data such that (i) the length of any branch in the derivation does not exceed 2|Π|; (ii) for some polynomial p, every non-leaf vertex, corresponding to the application of coal in the derivation, has at most 2^p(|Π|,||) immediate predecessors. To show (i), it suffices to recall that Π is non-recursive (and so none of the rules in Π can be applied twice in the same branch of the derivation) and observe that we can always replace multiple successive applications of the rule (coal) with a single application. (ii) follows from Lemma <ref>. Consistency checking for nonrecursive programs is PSpace-complete for combined complexity. The lower bound holds even in the propositional case. The upper bound is established by a standard algorithm <cit.> using Lemma <ref> and Savitch's theorem, according to which NPSpace = PSpace. In essence, the NPSpace algorithm guesses branches of the derivation one by one and keeps only the last two branches in memory. By Lemma <ref> (i), each branch contains ≤ 2|Π| atoms of the form P(c)@ι, where ι is as in Lemma <ref>, and so is stored in polynomial space. In addition, we store the axioms in Π that created these atoms, or (coal) if the atom was obtained by coalescing. In the latter case, we also need to guess a number k indicating how many distinct intervals are coalesced to obtain ι. By Lemma <ref> (ii), k ≤ 2^p(|Π|,||), and so it can be stored in polynomial space. The lower bound is proved by reduction of the satisfiability problem for quantified Boolean formulas (QBFs), which is known to be PSpace-complete.
Let φ = Q_n p_n … Q_0 p_0 φ_0 be a QBF, where each Q_i is either ∀ or ∃, and φ_0 = c_0 … c_m is a propositional formula in CNFwith c_i = l_0 … l_k, with each l_i being either a variable p_j or its negation p_j, for 0 ≤ j ≤ n. In ourprogram, we use the following propositional variables: – P_0, …, P_n (to represent p_0, …, p_n from φ);– P̅_0, …, P̅_n (to represent p_0, …,p_n);– P_0^0, …, P_0^n for p_0; P_1^1, …, P_1^n for p_1, etc.; P_n^n for p_n, and similarly for p_i;– F_0, …, F_n+1;– C_0, …, C_m (to represent c_0,…,c_m).We first take a data instancewith the following facts: P_i^i@[0, 2^i),P̅_i^i@[2^i, 2^i+1), for 0 ≤ i ≤ n.Starting from this data, we can generate all the truth-assignments for the variables p_0, …, p_n using the following rules, where 0 ≤ i ≤ n: P_i ← P_i^n, P̅_i ←P̅_i^n, P_i^j+1← P_i^j, ⊞_2^j+1 P_i^j+1← P_i^j, P̅_i^j+1←P̅_i^j, ⊞_2^j+1P̅_i^j+1←P̅_i^j,i ≤ j < n.The canonical model forand the rules above for the variables p_0, p_1, p_2 (thus, n=2) is shown in Fig. <ref>. We then need the rules: C_i ←P_j,p_joccurs inc_i,C_i ← P̅_̅j̅,p_joccurs inc_i,F_0 ← ⋀_0 ≤ i ≤ mC_i, for 0 ≤ i ≤ m, 0 ≤ j ≤ n. Note that F_0 will hold at the moments of time corresponding to the assignments that make φ_0 true. Further, we consider the formula φ_i = Q_i-1 p_i-1… Q_0 p_0 φ_0, for 1 ≤ i ≤ n+1 (note that φ_n+1 = φ), and provide rules that make F_i true precisely at the moments of time corresponding to the assignments that make φ_i true. We take ⊞_[0,2^i]F_i+1← F_iP_i, ⊟_[0,2^i]F_i+1← F_i P̅_i,ifQ_i = ∃, ⊞_[0, 2^i+1) F_i+1←⊞_[0,2^i)P_i ⊞_[0, 2^i+1) F_i,ifQ_i = ∀, for 0 ≤ i ≤ n, and, finally, ←⊞_[0, 2^n+1) F_n+1. All the rules above form the requiredprogram Π. We now prove that Π is consistent withiff φ is not satisfiable. By Lemma <ref>, itsuffices to show that F_n+1@[0, 2^n+1) ∈ℭ_Π, iff φ is satisfiable. For (⇒), suppose F_n+1@[0, 2^n+1) ∈ℭ_Π,. If Q_n = ∃ then, in view of (<ref>), either F_n@[0, 2^n), P_n@[0, 2^n) ∈ℭ_Π,or F_n@[2^n, 2^n+1), P̅_n@[2^n, 2^n+1) ∈ℭ_Π,. If the first option holds, we show that φ_n is satisfiable when p_n is true; if the second option holds, we show that φ_n is satisfiable when p_n is false. Similarly, if Q_n = ∀, then by (<ref>), we have F_n@[0, 2^n), P_n@[0, 2^n) ∈ℭ_Π, andF_n@[2^n, 2^n+1), P̅_n@[2^n, 2^n+1) ∈ℭ_Π,. In this case, we show that φ_n is satisfiable when p_n can be both false and true. To show that F_n@[0, 2^n), P_n@[0, 2^n) ∈ℭ_Π, implies that φ_n is satisfiable when p_n is true (the other case is analogous and left to the reader), suppose Q_n-1 = ∃. By (<ref>), either F_n-1@[0, 2^n-1), P_n-1@[0, 2^n-1) ∈ℭ_Π,or F_n@[2^n-1, 2^n), P̅_n-1@[2^n-1, 2^n) ∈ℭ_Π,. (If Q_n-1 = ∀, by (<ref>) both of these options hold.) Therefore, to show that φ is satisfiable, it now suffices to show that (i) F_n-1@[0, 2^n-1), P_n-1@[0, 2^n-1) ∈ℭ_Π, implies that φ_n-1 is satisfiable when p_n is true and p_n-1 is true; (ii) F_n-1@[2^n-1, 2^n), P̅_n-1@[2^n-1, 2^n) ∈ℭ_Π, implies that φ_n-1 is satisfiable when p_n is true and p_n-1 is false. We only consider (i), leaving (ii) to the reader, and after applying the argument above n times, will need to show that (i) F_0@[0, 1), P_0@[0, 1) ∈ℭ_Π, implies that φ_0 is satisfiable when p_n, …, p_1 and p_0 are all true; (ii) F_0@[1, 2), P̅_0@[1, 2) ∈ℭ_Π, implies that φ_0 is satisfiable when p_n, …, p_1 are true while p_0 is false. That (i) holds follows from (<ref>)–(<ref>), and similarly for (ii). This concludes the proof of (⇒); the other direction is proved analogously. 
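As a sanity check on the reduction, the following Python sketch (ours, purely illustrative) unfolds the copy rules over the integer slots of [0, 2^n+1) and reproduces, for n = 2, the assignment pattern shown in Fig. <ref>:

```python
def variable_slots(i, n):
    """Integer slots t (standing for [t, t+1)) of [0, 2**(n+1)) on which
    P_i holds, obtained by unfolding P_i^(j+1) <- P_i^j together with its
    copy shifted by 2^(j+1), starting from P_i^i on [0, 2^i)."""
    slots = set(range(2 ** i))
    for j in range(i, n):
        slots |= {t + 2 ** (j + 1) for t in slots}
    return sorted(slots)

# For n = 2 (variables p_0, p_1, p_2), slot t encodes the assignment in
# which p_i is true iff the i-th bit of t is 0:
for i in range(3):
    print(i, variable_slots(i, 2))
# 0 [0, 2, 4, 6]
# 1 [0, 1, 4, 5]
# 2 [0, 1, 2, 3]
```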
Using the techniques of DBLP:journals/tocl/ArtaleKRZ14, it can be shown that nonrecursive Horn fragment ofis P-complete. The same complexity can be derived from the work by DBLP:journals/tocl/BresolinKMRSZ17 for the nonrecursive Horn fragment of the Halpern-Shoham logic ℋ𝒮.As we have just seen, the combined complexity of query answeringdrops from ExpSpace forto PSpace for . We now show that the data complexity drops to AC^0, which is important for practical query answering using standard database systems. Note that this result is non-trivial in view of Theorem <ref>. The crux of the proof is encoding coalescing by FO-formulas with ∀ (which is typically not needed for rewriting atemporal ontology-mediated queries).Consistency checking and answeringqueries is in AC^0 for data complexity.We only consider a propositionalprogram Π. The proof can be straightforwardly adapted to the case of arity ≥ 1 by adding more (object) variables to the predicates used below. Let N be a set of comprising numbers or ∞, -∞. We use N + r as a shorthand for { t + r | t∈ N} and similarly for N - r (we assume that t + ∞ = ∞ and t - ∞ = -∞).For a propositional variable P in Π, we define two sets 𝗅𝖾(P) and 𝗋𝗂(P) as follows:– 𝗅𝖾(P) = 𝗋𝗂(P) = {0} if there is no P' such that P ⋖ P'; – otherwise, 𝗅𝖾(P) is the union of:* ⋃_i ∈ I𝗅𝖾(P_i), for each P ←⋀_i ∈ I P_i in Π, * 𝗅𝖾(P_2) + r_1 ∪𝗋𝗂(P_1), for each P ← P_1 _⟨ r_1, r_2 ⟩ P_2 in Π, * 𝗅𝖾(P_2) - r_2 ∪𝗅𝖾(P_1), for each P ← P_1 _⟨ r_1, r_2 ⟩ P_2 in Π, * 𝗅𝖾(P_1) + r_2, for each P ←⊟_⟨ r_1, r_2 ⟩ P_1 in Π, * 𝗅𝖾(P_1) - r_1, for each P ←⊞_⟨ r_1, r_2 ⟩ P_1 in Π, and 𝗋𝗂(P) is the union of:* ⋃_i ∈ I𝗋𝗂(P_i), for each P(τ) ←⋀_i ∈ I P_i in Π, * 𝗋𝗂(P_2) + r_2 ∪𝗋𝗂(P_1), for each P ← P_1 _⟨ r_1, r_2 ⟩ P_2 in Π, * 𝗋𝗂(P_2) - r_1 ∪𝗅𝖾(P_1), for each P ← P_1 _⟨ r_1, r_2 ⟩ P_2 in Π, * 𝗋𝗂(P_1) + r_1, for each P ←⊟_⟨ r_1, r_2 ⟩ P_1 in Π, * 𝗋𝗂(P_1) - r_2, for each P ←⊞_⟨ r_1, r_2 ⟩ P_1 in Π.Using an argument that is similar to the proof of Lemma <ref>, one can show the following: For anyprogram Π, any data instance , and anyP@⟨ t_1, t_2 ⟩∈ℭ_Π,,– t_1 = t_1' + n_1, for some n_1 ∈𝗅𝖾(P) and some t_1' such that P'[t_1', t_1'] ∈ or P'(t_1', s_2) ∈, – t_2 = t_2' + n_2, for some n_2 ∈𝗋𝗂(P) and some t_2' such that P'[t_2', t_2'] ∈ or P'(s_1, t_2') ∈.In view of Lemma <ref>, we can prove Theorem <ref> by constructing FO-formulas φ_P^⟨ m, n ⟩(x,y) with m ∈𝗅𝖾(P) and n ∈𝗋𝗂(P) such that, for any data instance , P@⟨ t_1+m, t_2+n ⟩∈ℭ_Π, iff𝔄_φ_P^⟨ m, n ⟩(t_1, t_2), where 𝔄_ is the FO-structure defined below.To slightly simplify presentation (and without much loss of generality), we assume that all numbers in () are positive, and set 𝔄_ = ( Δ, <, P_1^[], P_1^(), …, P_l^[], P_l^(),𝖻𝗂𝗍^ in, 𝖻𝗂𝗍^ fr), where– Δ is a set of (ℓ+1)-many elements strictly linearly ordered by <, ℓ is the maximum of the number of distinct timestamps inand the number of bits in the longest binary fraction in(excluding the binary point); for simplicity, we assume that Δ = {0,…,ℓ}, < is the natural order, and denote by n̅ the nth fraction in ((),<), counting from 0;– P_i^[] (n,n) holds in 𝔄_ iff P_i@[n̅,n̅] ∈ and P_i^() (n,m) holds in 𝔄_ iff P_i@(n̅,m̅) ∈, for any P_i occurring in ; – for n̅∞, 𝖻𝗂𝗍^ in(n,i,0) (𝖻𝗂𝗍^ fr(n,i,0)) holds in 𝔄_ iff the ith bit of the integer (respectively, fractional) part of n̅ is 0, and 𝖻𝗂𝗍^ in(n,i,1) (𝖻𝗂𝗍^ fr(n,i,1)), for i ∈Δ, holds in 𝔄_ iff the ith bit of the integer (respectively, fractional) part of n̅ is 1 (as usual, we start counting bits from the least significant one); – for n̅ = ∞, 𝖻𝗂𝗍^ in(n,i,1) and 𝖻𝗂𝗍^ 
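The sets 𝗅𝖾(P) and 𝗋𝗂(P) are easily computed by recursion over the (acyclic) dependence relation ⋖. Below is a minimal Python sketch of this computation; the rule representation is our own, and only the conjunction, `since' and box cases are shown (the `until' cases are symmetric):

```python
from fractions import Fraction

def le_ri(program):
    """Compute le(P) and ri(P) for a nonrecursive program, given as a map
    from a head predicate to a list of rule bodies of the forms
      ('and',  [P_1, ..., P_k]),
      ('since', (P_1, P_2, r_1, r_2)),   # P <- P_1 S_<r1,r2> P_2
      ('boxm',  (P_1, r_1, r_2)),        # P <- boxminus_<r1,r2> P_1
      ('boxp',  (P_1, r_1, r_2)).        # P <- boxplus_<r1,r2> P_1
    Predicates that are not heads of any rule get le = ri = {0}."""
    memo = {}
    def rec(P):
        if P in memo:
            return memo[P]
        if P not in program:                     # extensional predicate
            memo[P] = ({Fraction(0)}, {Fraction(0)})
            return memo[P]
        le, ri = set(), set()
        for kind, args in program[P]:
            if kind == 'and':
                for Q in args:
                    l, r = rec(Q); le |= l; ri |= r
            elif kind == 'since':
                P1, P2, r1, r2 = args
                _, ri1 = rec(P1); le2, ri2 = rec(P2)
                le |= {x + r1 for x in le2} | ri1
                ri |= {x + r2 for x in ri2} | ri1
            elif kind == 'boxm':
                P1, r1, r2 = args
                le1, ri1 = rec(P1)
                le |= {x + r2 for x in le1}
                ri |= {x + r1 for x in ri1}
            elif kind == 'boxp':
                P1, r1, r2 = args
                le1, ri1 = rec(P1)
                le |= {x - r1 for x in le1}
                ri |= {x - r2 for x in ri1}
        memo[P] = (le, ri)
        return memo[P]
    return rec

# Example: Y <- boxminus_[0,24] T24 and X <- Y /\ T41 give
# le(X) = {0, 24} and ri(X) = {0}.
rec = le_ri({'Y': [('boxm', ('T24', Fraction(0), Fraction(24)))],
             'X': [('and', ['Y', 'T41'])]})
print(rec('X'))
```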
fr(n,i,1) for all i ∈Δ. For example, the data instance = {P[110.001, 110.001],P(10000, ∞)} is given as the FO structure 𝔄_ = ( Δ, <, P^[],P^(), 𝖻𝗂𝗍^ in, 𝖻𝗂𝗍^ fr), where Δ = {0,…,6 }, P^[] = {(0, 0)}, P^() = { (1,2) }, and 𝖻𝗂𝗍^ in ={(0, 0, 0), (0, 1, 1), (0, 2, 1)}∪{(0, i, 0) | 3 ≤ i ≤ 6}∪{(1, i, 0) | 0 ≤ i ≤ 3}∪{(1,4,1)}∪{(1,5,0)}∪{(1,6,0)}∪{(2, i, 1) | 0 ≤ i ≤ 6}. 𝖻𝗂𝗍^ fr ={(0, 4, 1) }∪{(0, i, 0) | 0 ≤ i ≤ 6,i ≠ 4}∪{(1, i, 0) | 0 ≤ i ≤ 6 }∪{(2, i, 1) | 0 ≤ i ≤ 6 }. To construct the required φ_P^⟨ m, n ⟩(x,y),suppose that we have FO-formulas– 𝖼𝗈𝖺𝗅_P^⟨ m, n ⟩ (x, y) saying that P@⟨ x+m, y+n ⟩ is added to ℭ_Π, by an application of the rule (coal); – ψ_P^⟨ m, n ⟩(x,y) saying that either P@⟨ x+m, y+n ⟩ is added to ℭ_Π, because it belongs to the given data instance (in which case we can assume that m=n=0, and ⟨⟩ is either () or []),or P@⟨ x+m, y+n ⟩ is added to ℭ_Π, as a result of an application of one of the `logical' rules.In this case we can set φ_P^⟨ m, n ⟩(x,y)  = ψ_P^⟨ m, n ⟩(x,y) 𝖼𝗈𝖺𝗅_P^⟨ m, n ⟩ (x, y). Using the predicate 𝗂𝗌_a,b, which is ⊤ if a = b andotherwise, we can define ψ_P^⟨ m, n ⟩(x,y) as a disjunction of the following formulas:– 𝗂𝗌_⟨, [𝗂𝗌_⟩, ]𝗂𝗌_m, 0𝗂𝗌_n, 0 P^[](x,y); – 𝗂𝗌_⟨, (𝗂𝗌_⟩, )𝗂𝗌_m, 0𝗂𝗌_n, 0 P^()(x,y);– for every P ←⋀_1 ≤ i ≤ kP_i in Π,∃ x_1, y_1, …, x_k, y_k ⋁_cm_1 ∈𝗅𝖾(P_1) n_1 ∈𝗋𝗂(P_1) ⌈_1 ∈{ [, ( },⌉_1 ∈{ ], ) }(φ_P_1^⌈_1 m_1, n_1 ⌉_1 (x_1, y_1) …⋁_cm_k ∈𝗅𝖾(P_k) n_k ∈𝗋𝗂(P_k) ⌈_k∈{ [, ( },⌉_k ∈{ ], ) }(φ_P_k^⌈_k m_k, n_k ⌉_k (x_k, y_k)𝗂𝗇𝗍𝖾𝗋^⟨ m, n ⟩_⌈_1 m_1, n_1 ⌉_1, …, ⌈_k m_k, n_k ⌉_k(x, y, x_1, y_1, …, x_k, y_k) ) …),where 𝗂𝗇𝗍𝖾𝗋^⟨ m, n ⟩_⌈_1 m_1, n_1 ⌉_1, …, ⌈_k m_k, n_k ⌉_k(x, y, x_1, y_1, …, x_k, y_k) says that ⟨ x+m, y+n ⟩ is an intersection of ⌈_1 x_1 + m_1, y_1 + n_1 ⌉_1 ,…, ⌈_k x_k + m_k, y_k + n_k ⌉_k (this formula can easily be defined in terms of the predicates x+m = y+n and x+m < y+n given below); – for every P ← P_1 _ P_2 in Π, the formula σ_, P,P_1, P_2^⟨ m, n ⟩(x,y) saying that ⟨ x+m, y+n ⟩ is((ι^c_1 ∩ι_2) ) ∩ι^c_1 for some ι_1 and ι_2, where P_1 and P_2 hold, respectively(we give a definition of σ_, P,P_1, P_2^⟨ m, n ⟩(x,y) in the appendix); – analogous formulas encoding the relevant operations on intervals for the other temporal operators. The formula 𝖼𝗈𝖺𝗅_P^⟨ m, n ⟩ (x, y) is defined as follows: 𝖼𝗈𝖺𝗅_P^⟨ m, n ⟩ (x, y)  = ∀ z ⋀_cl ∈𝗅𝖾(P) ∪𝗋𝗂(P) ( (x + m ≤ z + l)(z + l ≤ y + m) →𝗇𝗈𝗀𝖺𝗉^l_P,⟨ m, n ⟩ (z, x, y) ), where 𝗇𝗈𝗀𝖺𝗉^l_P,⟨ m, n ⟩ (z, x, y) is the formula ∃ x_1, y_1, x_2, y_2, x_3, y_3⋁_cm_1 ∈𝗅𝖾(P) n_1 ∈𝗋𝗂(P) ⌈_1 ∈{ [, ( },⌉_1 ∈{ ], ) }( ψ_P^⌈_1 m_1, n_1 ⌉_1 (x_1, y_1) 𝗌𝗎𝖻^⌈_1 m_1, n_1 ⌉_1_⟨ m, n ⟩ (x_1, y_1, x, y) ⋁_cm_2 ∈𝗅𝖾(P) n_2 ∈𝗋𝗂(P) ⌈_2 ∈{ [, ( },⌉_2 ∈{ ], ) }( ψ_P^⌈_2 m_2, n_2 ⌉_2 (x_2, y_2) 𝗌𝗎𝖻^⌈_2 m_2, n_2 ⌉_2_⟨ m, n ⟩ (x_2, y_2, x, y) ((x_1 + m_1 < z + l < y_1 + n_1)(x_1 + m_1 < y_1 + l_1 = z + l = x_2 + m_2 < y_2 + n_2) 𝗂𝗌_⌉_1, ]𝗂𝗌_⌈_2, [) ⋁_cm_3 ∈𝗅𝖾(P) n_3 ∈𝗋𝗂(P) ( ψ_P^[ m_3, n_3 ] (x_3, y_3) 𝗌𝗎𝖻^[ m_3, n_3 ]_⟨ m, n ⟩ (x_3, y_3, x, y) [(x_3 + m_3= y_3+n_3 = z + l = x+m = x_1 + m_1 < y_1 + n_1) (x_1 + m_1 < y_1 + n_1 = z+l = y+ n = x_3 + m_3= y_3+n_3) (x_1 + m_1 < y_1 + l_1 = z + l = x_3 + m_3= y_3+n_3 = x_2 + m_2 < y_2 + n_2)] ) )) and 𝗌𝗎𝖻^⌈ m', n' ⌉_⟨ m, n ⟩(x', y', x, y) says that ⌈ x' + m', y' + n' ⌉ is a subinterval of ⟨ x+m, y+n ⟩. Intuitively, 𝗇𝗈𝗀𝖺𝗉^l_P,⟨ m, n ⟩ (z, x, y) says that around the time instant z+l (that is, to the left and right of it as well as at z+l itself),their is no subinterval of ⟨ x+m, y+n ⟩ that is not covered by P. The five cases considered in the formula 𝗇𝗈𝗀𝖺𝗉^l_P,⟨ m, n ⟩ (z, x, y) are illustrated in Fig. 
<ref>. When evaluating φ^⟨ m, n ⟩(x,y) over 𝔄_, we need to compute the truth-values of x+m = y+n and x+m < y+n (for fixed m and n). We regard the former as a formula with the predicates 𝖻𝗂𝗍^in, 𝖻𝗂𝗍^fr and < that is truejust in case x = y + (n-m) if n ≥ m, and y = x+(m-n) otherwise. We provide a definition of x = y + c, for a positive c, in the appendix. A formula expressing x+m < y+n is constructed similarly and left to the reader.Finally, we show how the formulas φ_P^⟨ m, n ⟩(x,y) defined above can be used to check whether an interval ι = ⟨ι_b, ι_e ⟩ is a certain answer to (Π, P@x) over . As follows from Lemma <ref>, if @ ⌈ t_1, t_2 ⌉∈ℭ_Π, then, for some m ∈𝗅𝖾(), n ∈𝗋𝗂() and some numbers t_1', t_2' ∈𝗇𝗎𝗆() such that t_1' (t_2') occurs as the left (right) end of some interval, we have t_1 = t_1' + m and t_2 = t_2' + n.Take the structure 𝔄_^ι that extends 𝔄_ with the numbers ι_b and ι_e. By (<ref>), ι is a certain answer to (Π, P@x) overiff the formula ∃ x, y ⋁_cm ∈𝗅𝖾() n ∈𝗋𝗂() φ_^⌈ m, n ⌉(x,y)∃ x, y, x_1, y_1 ⋁_cm_1 ∈𝗅𝖾() n_1 ∈𝗋𝗂() ⌈_1 ∈{ [, ( },⌉_1 ∈{ ], )}(φ_P^⌈_1 m, n ⌉_1(x_1, y_1) 𝗌𝗎𝖻^⟨ 0, 0 ⟩_⌈_1 m, n ⌉_1(x,y, x_1, y_1)(x = ι_b)(y = ι_e) ) holds true in 𝔄_^ι. § IMPLEMENTINGUnfortunately, the (data independent) FO-rewriting (<ref>) turns out to be impractical because of the universal quantifier used for coalescing in (<ref>).It is well known that ∀ is implemented in SQL as ∃ resulting in suboptimal performance in general.Having experimented with a few different approaches, we decided to use a materialisation (bottom-up) technique. In this section, we first present a bottom-up algorithm whose worst-case running time is linear in the number of intervals of an input data instance , under a practically motivated assumption that the order of occurrence of the intervals incoincides with the natural temporal order on those intervals. Then we describe how our algorithm can be implemented in SQL (with views). In particular, we consider two alternative implementations of coalescing in SQL. §.§ Bottom-up algorithm We first introduce some notation and obtain a few results about temporal tables T with column names 𝖺𝗍𝗍𝗋_1, …, 𝖺𝗍𝗍𝗋_m, 𝗅𝗉𝖺𝗋, 𝗅𝖾𝖽𝗀𝖾, 𝗋𝖾𝖽𝗀𝖾, 𝗋𝗉𝖺𝗋. A temporal table with m=0 will be called purely temporal. We refer to the i-th row of T as T[i], to the value of the column 𝖺𝗍𝗍𝗋_j in the i-th row as T[i, 𝖺𝗍𝗍𝗋_j], and set T[i, 𝖺𝗍𝗍𝗋_j,…, 𝖺𝗍𝗍𝗋_k] = (T[i, 𝖺𝗍𝗍𝗋_j],…, T[i, 𝖺𝗍𝗍𝗋_k]). We assume that the columns 𝗅𝖾𝖽𝗀𝖾 and 𝗋𝖾𝖽𝗀𝖾 store timestamps or special values for ∞, -∞, 𝗅𝗉𝖺𝗋 stores [ or (, and 𝗋𝗉𝖺𝗋 stores ] or ). Define an order ≺ on intervals by taking ⟨ t_1, t_2 ⟩≺⌈ s_1, s_2 ⌉ iff one of the following conditions holds:– t_1 < s_1; – t_1 = s_1, ⟨ is [, and ⌈ is (; – t_1 = s_1, ⟨ and ⌈ are the same, and t_2 < s_2; – t_1 = s_1, ⟨ and ⌈ are the same, t_2 = s_2, ⟩ is ), and ⌈ is ]. It should be clear that ≺ is a strict linear order on the set of all intervals. For example, we have [3, 8) ≺ [4, 7) ≺ (4,6) ≺ (4,7) ≺ (4,7]. (In fact, the results of this section will work with any other linear order over intervals.) We write T[i, 𝗅𝗉𝖺𝗋, 𝗅𝖾𝖽𝗀𝖾, 𝗋𝖾𝖽𝗀𝖾, 𝗋𝗉𝖺𝗋] ≺ T'[j, 𝗅𝗉𝖺𝗋, 𝗅𝖾𝖽𝗀𝖾, 𝗋𝖾𝖽𝗀𝖾, 𝗋𝗉𝖺𝗋] to say that the interval defined by the ith row of a temporal table T ≺-precedes the interval given by the jth row of a temporal table T'.We make the following temporal ordering assumption (or TOA), for any temporal table T with m attributes: if T[i, 𝖺𝗍𝗍𝗋_1, …, 𝖺𝗍𝗍𝗋_m] = T[j, 𝖺𝗍𝗍𝗋_1, …, 𝖺𝗍𝗍𝗋_m] and i < j, then T[i, 𝗅𝗉𝖺𝗋, 𝗅𝖾𝖽𝗀𝖾, 𝗋𝖾𝖽𝗀𝖾, 𝗋𝗉𝖺𝗋] ≼ T[j, 𝗅𝗉𝖺𝗋, 𝗅𝖾𝖽𝗀𝖾, 𝗋𝖾𝖽𝗀𝖾, 𝗋𝗉𝖺𝗋]. 
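The order ≺ can be realised as a lexicographic sort key; the following Python sketch (with a tuple representation of intervals of our own choosing) reproduces the example ordering given above:

```python
def prec_key(iv):
    """Sort key realising the order ≺ on intervals (lpar, ledge, redge, rpar):
    smaller left end first; '[' before '(' on equal left ends; then smaller
    right end; then ')' before ']'."""
    lpar, ledge, redge, rpar = iv
    return (ledge, lpar == '(', redge, rpar == ']')

ivs = [('(', 4, 7, ']'), ('[', 3, 8, ')'), ('(', 4, 6, ')'),
       ('[', 4, 7, ')'), ('(', 4, 7, ')')]
print(sorted(ivs, key=prec_key))
# [3,8) ≺ [4,7) ≺ (4,6) ≺ (4,7) ≺ (4,7]
```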
For a purely temporal table T, this assumption means that the rows of T respect ≼.Let T[𝖺𝗍𝗍𝗋_j, …, 𝖺𝗍𝗍𝗋_k] be theprojection of T on the columns 𝖺𝗍𝗍𝗋_j, …, 𝖺𝗍𝗍𝗋_k that keeps only distinct tuples. We define |T|_o to be the cardinality of T[𝖺𝗍𝗍𝗋_1,…, 𝖺𝗍𝗍𝗋_m] and |T|_t to be the cardinality of T[𝗅𝗉𝖺𝗋, 𝗅𝖾𝖽𝗀𝖾,𝗋𝖾𝖽𝗀𝖾, 𝗋𝗉𝖺𝗋]. The first measure estimates how large the table is in terms of individual constants, while the second measure concerns the number of timepoints. For the tables of extensional predicates in our use-cases, |T|_o is much smaller than |T|_t.We say that a table T is coalesced if it does not contain distinct tuples (c_1, …, c_m, ⟨, t_1, t_2, ⟩) and (c_1, …, c_m, ⌈, t_1', t_2', ⌉) such that ⟨ t_1, t_2 ⟩∩⌈ t_1', t_2' ⌉≠∅. For a tuple of individual constants (c_1, …, c_m), let T_c_1, …, c_m be the set of all intervals ⟨ t_1, t_2 ⟩ such that (c_1, …, c_m, ⟨, t_1, t_2, ⟩) occurs in T. For a set ℐ of intervals, we then denote by 𝖼𝗈𝖺𝗅𝖾𝗌𝖼𝖾(ℐ) the (minimal) set of intervals that results from coalescing ℐ. Finally, a coalescing of T is a minimal table, T^*, with the same columns as T such that the following condition holds:(coalesce) for any (c_1, …, c_m) in T[𝖺𝗍𝗍𝗋_1,…, 𝖺𝗍𝗍𝗋_m] and ⟨ t_1, t_2 ⟩ in 𝖼𝗈𝖺𝗅𝖾𝗌𝖼𝖾(T_c_1, …, c_m), there exists (c_1, …, c_m, ⟨ t_1, t_2 ⟩) in T^*. Clearly, T^* is a coalesced table. Suppose a table T satisfies TOA. Then its coalescing T^* satisfying TOA and such that |T^*|_o = |T|_o and |T^*|_t ≤ |T|_t can be computed in time O(|T|_o^2 × |T|_t).Consider first a purely temporal table S that satisfies temporal ordering. There is a simple linear-time algorithm to produce a coalesced table S^* that also satisfies temporal ordering. Indeed, initially we set ⟨ b, e ⟩ = S[0, 𝗅𝗉𝖺𝗋, 𝗅𝖾𝖽𝗀𝖾, 𝗋𝖾𝖽𝗀𝖾, 𝗋𝗉𝖺𝗋]. In a loop, we take each ⌈ t_1, t_2 ⌉ = S[i, 𝗅𝗉𝖺𝗋, 𝗅𝖾𝖽𝗀𝖾, 𝗋𝖾𝖽𝗀𝖾, 𝗋𝗉𝖺𝗋] (clearly, ⟨ b, e ⟩≺⌈ t_1, t_2 ⌉). If ⌈ t_1, t_2 ⌉ and ⟨ b, e ⟩ are disjoint, we add ⟨ b, e ⟩ to S^* and set ⟨ b, e ⟩ = ⌈ t_1, t_2 ⌉. If they are not disjoint, we set ⟨ b, e ⟩=⌈ t_1, t_2 ⌉∪⟨ b, e ⟩ and move on. It is easily checked that the resulting table S^* is as required. Below, we refer to this algorithm as an imperative coalescing algorithm.It only remains to explain how the algorithm above can be applied to T in order to obtain the required complexity. Note that |T| ≤ |T|_o× |T|_t and we can construct |T|_o-many separate tables T_c_1, …, c_m, for each (c_1, …, c_m), in time |T| × |T|_o. Then, we can apply the algorithm described above to each T_c_1, …, c_m in time |T|_t and merge the results. Therefore, the overall running time is |T| × |T|_o + |T|_t× |T|_o = O(|T|_o^2 × |T|_t). Before presenting our query answering algorithm, we determine the complexity of computing temporal joins. Let T be a table with attributes 𝖺𝗍𝗍𝗋_1, …, 𝖺𝗍𝗍𝗋_m, 𝗅𝗉𝖺𝗋, 𝗅𝖾𝖽𝗀𝖾, 𝗋𝖾𝖽𝗀𝖾, 𝗋𝗉𝖺𝗋 and let T' be a table with attributes 𝖺𝗍𝗍𝗋_1', …, 𝖺𝗍𝗍𝗋_n', 𝗅𝗉𝖺𝗋, 𝗅𝖾𝖽𝗀𝖾, 𝗋𝖾𝖽𝗀𝖾, 𝗋𝗉𝖺𝗋. A temporal join of T and T' is a table T” with attributes 𝖺𝗍𝗍𝗋_1”, …, 𝖺𝗍𝗍𝗋_k”, 𝗅𝖾𝖽𝗀𝖾, 𝗋𝖾𝖽𝗀𝖾, 𝗋𝗉𝖺𝗋 such that {𝖺𝗍𝗍𝗋_1”, …, 𝖺𝗍𝗍𝗋_k”} = {𝖺𝗍𝗍𝗋_1, …, 𝖺𝗍𝗍𝗋_m}∪{𝖺𝗍𝗍𝗋_1', …, 𝖺𝗍𝗍𝗋_n' } and (c_1”, …, c_k”, ⟨, t_1”, t_2”, ⟩) is in T” iff there exist two tuples (c_1, …, c_m, ⌈, t_1, t_2, ⌉) from T and (c_1', …, c_n', ⌊, t_1', t_2', ⌋) from T' satisfying the following conditions:– c_i” = c_j, for all i,j such that 𝖺𝗍𝗍𝗋_i” =𝖺𝗍𝗍𝗋_j; – c_i” = c_j', for all i,j such that 𝖺𝗍𝗍𝗋_i” =𝖺𝗍𝗍𝗋_j'; – ⌈ t_1, t_2 ⌉∩⌊ t_1', t_2' ⌋≠∅ and ⟨ t_1”, t_2”⟩ = ⌈ t_1, t_2 ⌉∩⌊ t_1', t_2' ⌋. 
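Returning to coalescing for a moment, here is a Python sketch of the imperative coalescing algorithm from the proof of the lemma above; the bracket handling is ours (we take two intervals to be disjoint iff their union is not an interval, so touching endpoints merge unless both are open), and infinite endpoints are ignored for simplicity:

```python
def coalesce(rows):
    """Single pass over a ≺-ordered purely temporal table; each row is
    (lpar, ledge, redge, rpar) with lpar in '[(' and rpar in '])'."""
    out, cur = [], None
    for lpar, t1, t2, rpar in rows:
        if cur is None:
            cur = [lpar, t1, t2, rpar]
            continue
        # the next interval is disjoint from cur iff cur ends strictly
        # before it starts (touching endpoints merge unless both are open)
        disjoint = cur[2] < t1 or (cur[2] == t1 and cur[3] == ')' and lpar == '(')
        if disjoint:
            out.append(tuple(cur))
            cur = [lpar, t1, t2, rpar]
        elif (t2, rpar == ']') > (cur[2], cur[3] == ']'):
            cur[2], cur[3] = t2, rpar      # extend the right end of cur
        # the left end of cur never changes, because the rows respect ≺
    if cur is not None:
        out.append(tuple(cur))
    return out

print(coalesce([('[', 1, 3, ')'), ('(', 2, 5, ']'),
                ('(', 5, 7, ')'), ('[', 8, 9, ']')]))
# [('[', 1, 7, ')'), ('[', 8, 9, ']')]
```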
If T, T' satisfy TOA, then a temporal join T” of T and T' satisfying TOA and such that |T”|_o ≤ |T|_o × |T'|_o, |T”|_t ≤ |T|_t + |T'|_t can be computed in time O(|T|_o^2 × |T'|_o^2 × (|T|_t + |T'|_t)).We first give an algorithm for computing the temporal join of purely temporal tables S and S'. We assume that these tables are coalesced (which can be done in time O(|S|) and O(|S'|)). The algorithm works starting from the first tuples S[i] and S'[i'] of the tables. If S[i] ∩ S'[i'] ≠∅, we write S[i] ∩ S'[i'] to the output table S”. Then, if S[i+1] ≽ S'[i'+1], we set i' := i' + 1 (without changing i); otherwise, i := i+1. We iterate until we have considered all the tuples in both tables. Clearly, computing the full S” requires time O(|S|+|S'|).The complete algorithm for the tables T and T' will first, similarly to the argument of Lemma <ref>, produce |T|_o-many purely temporal tables T_c_1, …, c_m, for each (c_1, …, c_m) occurring in T. Note that |T_c_1, …, c_m| ≤ |T|_t for each of those tables. In the same way, we produce |T'|_o purely temporal tables T'_c_1', …, c_n', for each (c_1', …, c'_n) occurring in T'. It remains to apply the temporal join algorithm described above to all pairs of tables T_c_1, …, c_m and T'_c_1', …, c_n', which can be done in the required time. Another operation on temporal tables we need is projection. Let T be a table with column names as above and let {𝖺𝗍𝗍𝗋_1', …, 𝖺𝗍𝗍𝗋_n'}⊆{𝖺𝗍𝗍𝗋_1, …, 𝖺𝗍𝗍𝗋_m}. A projection of T on 𝖺𝗍𝗍𝗋_1', …, 𝖺𝗍𝗍𝗋_n' is a table with columns 𝖺𝗍𝗍𝗋_1', …, 𝖺𝗍𝗍𝗋_n', 𝗅𝗉𝖺𝗋, 𝗅𝖾𝖽𝗀𝖾, 𝗋𝖾𝖽𝗀𝖾, 𝗋𝗉𝖺𝗋 containing all (c_1', …, c_n', ⟨ t_1, t_2 ⟩) such that some (c_1, …, c_m, ⟨ t_1, t_2 ⟩) is in T and c_i'=c_j whenever 𝖺𝗍𝗍𝗋_i' = 𝖺𝗍𝗍𝗋_j.As we have to preserve the temporal order, our algorithm for computing projections requires some attention. To show that a naïve projection does not preserve the temporal order, consider a table T with two tuples (a, [,1,1,]) and (b, [,0,0,]), which satisfies our temporal order assumption. The projection of T that removes the first column results is the table with two tuples ([,1,1,]) and ([,0,0,]), which is not ordered. If T satisfies TOA, then a projection of T satisfying TOA can be computed in time O(|T|_o^2 × |T|_t).Now, consider the union operation on pairs of tables T and T' with the same columns that returns a table with all the tuples from the set T ∪ T'. For any pair of tables T and T' satisfying TOA, their union table also satisfying TOA can be computed in time O((|T|_o^2 + |T'|_o^2) × (|T|_t + |T'|_t)).The proofs of Lemmas <ref> and <ref> can be found in theappendix. We are now in a position to describe the bottom-up query answering algorithm. Suppose we are given a program Π in normal form. Suppose also that each extensional predicate P is given by a table T_P satisfying TOA. (This assumption can be made in all of our use-cases. Indeed, both tablesand𝖶𝖾𝖺𝗍𝗁𝖾𝗋 are naturally ordered by the timestamp, and our mappings (see Section <ref>) can be easily written in a way to take advantage of this order and produce tables T satisfying TOA.) Thus, we can assume that the given data instanceis represented by a set of T_P, where each T_P contains all the tuples (c_1, …, c_m, ⟨, t_1, t_2, ⟩) such that P(c_1, …, c_m)@ ⟨ t_1, t_2 ⟩∈.Consider a predicate P and suppose that we have computed temporal tables T_P_i satisfying TOA, for each P_i with P ⋖ P_i (see Section <ref>). We assume that the T_P_i have (non-temporal) columns (P_i, 1),…, (P_i, m). For each rule α in Π with P in the head, we compute a table T_P^α satisfying TOA. 
If α is of the form (<ref>), we first compute the temporal join T of T_P_1, …, T_P_I (we change the names so that T_P_i has columns (P_i,τ_1, 1), …, (P_i,τ_m, m), where τ_i = (τ_1, …, τ_m), and so all the tables T_P_i have distinct column names). Then we select from T only those tuples (c_1, …, c_n, ⟨, t_1, t_2, ⟩) for which c_i = c_j in case the column names for c_i and c_j mention the same variable x, and the tuples for which c_i = a in case the column name for c_i mentions the constant a. These two steps can be done in time O(∏_i|T_P_i|_o^2 ×∑_i |T_P_i|_t), and the size of the resulting table does not exceed ∏_i|T_P_i|_o ×∑_i |T_P_i|_t. It remains to perform projection in the following way. SupposeP(τ) with τ = (x_1, … x_m) is the head of α (if τ also contains constants, the procedure below can be easily modified). Then we keep only one column among all the columns named (P_i,x_j, k), for each variable x_j. It remains to rename the remaining (P_i,x_j, k) to (P,j), for each j. The total time required to compute T_P^α is O(∏_i|T_P_i|_o^2 ×∑_i |T_P_i|_t).If α is of the form (<ref>), provided that T_P_1 is coalesced, computing T_P^α reduces to using arithmetic operations for ι, ι, and ⊑ι as in the rules (⊞_)/(⊟_), and projection. Therefore, T_P^α satisfying TOA can be computed in time |T_P_1|_o^2 × |T_P_1|_t. Computing T_P^α for rules of the form (<ref>) can be done in time O(|T_P_1|_o × |T_P_2|_o × (|T_P_1|_t + T_P_2|_t)). Indeed, to construct T_P^α for a rule α of the form P(τ) ← P_1 (τ_1)_ P_2(τ_2)), we follow the rule (_) and first produce a table T_P_1^c with the same columns as T_P_1, where for each tuple of T_P_1, we apply the operation ·^c to its interval. We then compute the temporal join T of T_P_1^c and T_P_2 after applying the renaming described above. Then we compute T^+^o ϱ by applying the operation +^o ϱ to the interval columns of each tuple in T, after which we compute the temporal join of T^+^o ϱ and T_P_1^c (with renaming applied to the columns of T_P_1^c). To produce T_P^α, it remains to perform projection and renaming as described above. Finally, to compute T_P, it is sufficient to compute the union of all T_P^α satisfying TOA. Thus, we obtain the following, where the degree of therule (<ref>) is |I|, of (<ref>) is 2, and of (<ref>) is 1: Let Π be a program and P a predicate in it such that K-manyrules have P in the head, with R being the maximal degree of those rules, m the maximum of |T_P'|_t among P' such that P ⋖ P', and n the maximum of |T_P'|_o among those P'. Then T_P is of size at most n^RmRK and can be computed in time O(n^2RmRK). To compute the table for the goal Q, we iterate the described procedure as many times as the length of the longest chain of predicates in the dependence relation ⋖ for Π. Thus, we obtain: Let m be the maximum of |T_P|_t among the extensional predicates P, and n the maximum of |T_P|_o among those P. The overall time required to compute the goal predicate Q of Π is exponential in the size of Π, polynomial in n, and linear in m. Note that if all T_P are extracted from one table ℛ, as in our use-cases, then n corresponds to the number of individual tuples in ℛ, whereas m to the number of temporal intervals. It is to be emphasised that, in practice, programs tend to be small, and the number of individual constants is also small compared to the number of temporal intervals. The theorem above explains the linear patterns in our experiments below, where the size of individual tuples is fixed. 
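To illustrate the merge step underlying these bounds, the following Python sketch joins two coalesced, ≺-ordered purely temporal tables; it uses a minor variant of the advancing rule from the proof of Lemma <ref> above, namely advancing the table whose current interval ends first (the representation and helper names are ours):

```python
def left_key(iv):  return (iv[1], iv[0] == '(')
def right_key(iv): return (iv[2], iv[3] == ']')

def intersect(a, b):
    l = max(a, b, key=left_key)     # interval with the later left end
    r = min(a, b, key=right_key)    # interval with the earlier right end
    nonempty = l[1] < r[2] or (l[1] == r[2] and l[0] == '[' and r[3] == ']')
    return (l[0], l[1], r[2], r[3]) if nonempty else None

def temporal_join(S1, S2):
    """Merge join of two coalesced, ≺-ordered purely temporal tables."""
    i = j = 0
    out = []
    while i < len(S1) and j < len(S2):
        x = intersect(S1[i], S2[j])
        if x is not None:
            out.append(x)
        if right_key(S1[i]) <= right_key(S2[j]):   # advance the table whose
            i += 1                                 # current interval ends first
        else:
            j += 1
    return out

print(temporal_join([('[', 0, 4, ')'), ('[', 6, 9, ']')],
                    [('(', 2, 7, ']')]))
# [('(', 2, 4, ')'), ('[', 6, 7, ']')]
```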
§.§ Implementation in SQLNow, we show how to rewrite a givenquery (Π, Q(τ)@x) with Π in normal form (<ref>)–(<ref>) to an SQL query computing the certain answers (c,ι) to the query with maximal intervals ι. We illustrate the idea by a (relatively) simple example.Consider thequery (Π, 𝖧𝖾𝖺𝗍𝖠𝖿𝖿𝖾𝖼𝗍𝖾𝖽𝖢𝗈𝗎𝗇𝗍𝗒(𝖼𝗈𝗎𝗇𝗍𝗒)@x), where Π = { ⊟_[0,24h]𝖤𝗑𝖼𝖾𝗌𝗌𝗂𝗏𝖾𝖧𝖾𝖺𝗍(v) ←⊟_[0,24h]𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24(v) _[0,24h]𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾41(v), 𝖧𝖾𝖺𝗍𝖠𝖿𝖿𝖾𝖼𝗍𝖾𝖽𝖢𝗈𝗎𝗇𝗍𝗒(v) ←𝖫𝗈𝖼𝖺𝗍𝖾𝖽𝖨𝗇𝖢𝗈𝗎𝗇𝗍𝗒(u,v) 𝖤𝗑𝖼𝖾𝗌𝗌𝗂𝗏𝖾𝖧𝖾𝖺𝗍(u) } is part of the meteorological ontology from Section <ref>. First, we transform Π to normal form: Π = { 𝖤𝗑𝖼𝖾𝗌𝗌𝗂𝗏𝖾𝖧𝖾𝖺𝗍(v) ←_[0,24h]𝖷 (v), 𝖷(v) ←𝖸(v) 𝖹(v),𝖸(v) ←⊟_[0,24h]𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24(v),𝖹(v) ←_[0,24h]𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾41(v), 𝖧𝖾𝖺𝗍𝖠𝖿𝖿𝖾𝖼𝗍𝖾𝖽𝖢𝗈𝗎𝗇𝗍𝗒(v) ←𝖫𝗈𝖼𝖺𝗍𝖾𝖽𝖨𝗇𝖢𝗈𝗎𝗇𝗍𝗒(u,v) 𝖤𝗑𝖼𝖾𝗌𝗌𝗂𝗏𝖾𝖧𝖾𝖺𝗍(u) }. We regard 𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24, 𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾41, 𝖫𝗈𝖼𝖺𝗍𝖾𝖽𝖨𝗇𝖢𝗈𝗎𝗇𝗍𝗒 as extensional predicates given by the tables T_𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24, T_𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾41, T_𝖫𝗈𝖼𝖺𝗍𝖾𝖽𝖨𝗇𝖢𝗈𝗎𝗇𝗍𝗒. The first two of these tables havecolumns 𝗌𝗍𝖺𝗍𝗂𝗈𝗇_𝗂𝖽, 𝗅𝖾𝖽𝗀𝖾, 𝗋𝖾𝖽𝗀𝖾, and the third one 𝗌𝗍𝖺𝗍𝗂𝗈𝗇_𝗂𝖽, 𝖼𝗈𝗎𝗇𝗍𝗒, 𝗅𝖾𝖽𝗀𝖾, 𝗋𝖾𝖽𝗀𝖾. To simplify presentation, we omit the columns 𝗅𝗉𝖺𝗋 and 𝗋𝗉𝖺𝗋 used in the previous section and assume that all the temporal intervals take the form (t,t']; see Section <ref>.For each predicate P in Π, we also create a view (temporary table) V_P^* with the same columns as T_P. We set V_P^* = 𝖼𝗈𝖺𝗅𝖾𝗌𝖼𝖾(T_P), where 𝖼𝗈𝖺𝗅𝖾𝗌𝖼𝖾 is a query that implements coalescing in SQL[It should not be confused with the standard coalesce function in SQL that returns the first of its arguments that is not null, or null if all of the arguments are null.] We explain the idea behind this query for a temporal table T (as mentioned above, we omit columns 𝚕𝚙𝚊𝚛, 𝚛𝚙𝚊𝚛). For a moment of time t occurring in T, we denote by b^≥(T,t) the number of i such that T[i, 𝚕𝚎𝚍𝚐𝚎] ≥ t, and by e^≥(T,t) the number of i such that T[i, 𝚛𝚎𝚍𝚐𝚎] ≥ t; the numbers b^≤(T,t) and e^≤(T,t) are defined analogously. It can be readily seen that every t in T[𝚕𝚎𝚍𝚐𝚎] such that b^≥(T,t) = e^≥(T,t) is the beginning of some interval in the coalesced table T^*. Similarly, every t' in T[𝚛𝚎𝚍𝚐𝚎] such that b^≤(T,t') = e^≤(T,t') is the end of some interval in T^*. The coalesced intervals of T^* can be then obtained as pairs (t,t”], where t is as above and t” is the minimum over those t' defined above that are ≥ t. Thus, to coalesce T_𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24 we first use the query V_l =   𝚂𝙴𝙻𝙴𝙲𝚃 T.𝗌𝗍𝖺𝗍𝗂𝗈𝗇_𝗂𝖽 𝙰𝚂 𝗌𝗍𝖺𝗍𝗂𝗈𝗇_𝗂𝖽, T.𝗅𝖾𝖽𝗀𝖾 𝙰𝚂 𝗅𝖾𝖽𝗀𝖾 𝙵𝚁𝙾𝙼 T_𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24 T 𝚆𝙷𝙴𝚁𝙴 (𝚂𝙴𝙻𝙴𝙲𝚃 𝙲𝙾𝚄𝙽𝚃(*) 𝚏𝚛𝚘𝚖 T_𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24 S𝚆𝙷𝙴𝚁𝙴 S.𝗅𝖾𝖽𝗀𝖾≥ T.𝗅𝖾𝖽𝗀𝖾 𝙰𝙽𝙳 S.𝗌𝗍𝖺𝗍𝗂𝗈𝗇_𝗂𝖽 = T.𝗌𝗍𝖺𝗍𝗂𝗈𝗇_𝗂𝖽) = (𝚂𝙴𝙻𝙴𝙲𝚃 𝙲𝙾𝚄𝙽𝚃(*) 𝚏𝚛𝚘𝚖 T_𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24 S𝚆𝙷𝙴𝚁𝙴 S.𝗋𝖾𝖽𝗀𝖾≥ T.𝗅𝖾𝖽𝗀𝖾 𝙰𝙽𝙳 S.𝗌𝗍𝖺𝗍𝗂𝗈𝗇_𝗂𝖽 = T.𝗌𝗍𝖺𝗍𝗂𝗈𝗇_𝗂𝖽), which extracts the pairs (n,t), where t is as described above and 𝗌𝗍𝖺𝗍𝗂𝗈𝗇_𝗂𝖽 = n. An analogous query can be used to produce V_r, a table of pairs (n,t'), where t' is as described above and 𝗌𝗍𝖺𝗍𝗂𝗈𝗇_𝗂𝖽 = n. Finally, we set V_𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24^* =  𝚂𝙴𝙻𝙴𝙲𝚃 V_l.𝗌𝗍𝖺𝗍𝗂𝗈𝗇_𝗂𝖽 𝙰𝚂 𝗌𝗍𝖺𝗍𝗂𝗈𝗇_𝗂𝖽, V_l.𝗅𝖾𝖽𝗀𝖾 𝙰𝚂 𝗅𝖾𝖽𝗀𝖾,(𝚂𝙴𝙻𝙴𝙲𝚃 𝙼𝙸𝙽 (V_r.𝗋𝖾𝖽𝗀𝖾)𝙵𝚁𝙾𝙼V_r𝚆𝙷𝙴𝚁𝙴 V_r.𝗋𝖾𝖽𝗀𝖾≥ V_l.𝗅𝖾𝖽𝗀𝖾 𝙰𝙽𝙳 V_l.𝗌𝗍𝖺𝗍𝗂𝗈𝗇_𝗂𝖽 = V_r.𝗌𝗍𝖺𝗍𝗂𝗈𝗇_𝗂𝖽) 𝙰𝚂 𝗋𝖾𝖽𝗀𝖾𝙵𝚁𝙾𝙼 V_l. A more efficient variant of this algorithm that uses window functions with sorting and partitioning allows us to avoid joins used, e.g., in the query V_l <cit.>. We will refer to this algorithm in Section <ref> as a standard SQL algorithm. In contrast to the imperative algorithm described in the proof of Lemma <ref>, this algorithm can be implemented using standard SQL operators.In addition, for each intensional predicate P of Π, we create a view V_P defined by an SQL query that reflects the definitions of Pin Π. 
For example, we set V_𝖸 = 𝖲𝖤𝖫𝖤𝖢𝖳 V_𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24^*.𝗌𝗍𝖺𝗍𝗂𝗈𝗇_𝗂𝖽𝖠𝖲 𝗌𝗍𝖺𝗍𝗂𝗈𝗇_𝗂𝖽 ,V_𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24^*.𝗅𝖾𝖽𝗀𝖾+24h𝖠𝖲 𝗅𝖾𝖽𝗀𝖾, V_𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24^*.𝗋𝖾𝖽𝗀𝖾 𝖠𝖲 𝗋𝖾𝖽𝗀𝖾, 𝖥𝖱𝖮𝖬 V_𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24^*𝖶𝖧𝖤𝖱𝖤V_𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24^*.𝗋𝖾𝖽𝗀𝖾 - V_𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24^*.𝗅𝖾𝖽𝗀𝖾≥ 24h. This query implements the ι +^c operation for = [0, 24h], andthe 𝖶𝖧𝖤𝖱𝖤 clause checks whether ⊑ι holds, where ι = ( V_𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24^*.𝗅𝖾𝖽𝗀𝖾,V_𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24^*.𝗋𝖾𝖽𝗀𝖾]. We then setV_𝖸^* = 𝖼𝗈𝖺𝗅𝖾𝗌𝖼𝖾(V_𝖸) and note that the query 𝖲𝖤𝖫𝖤𝖢𝖳𝗌𝗍𝖺𝗍𝗂𝗈𝗇_𝗂𝖽 , 𝗅𝖾𝖽𝗀𝖾,𝗋𝖾𝖽𝗀𝖾𝖥𝖱𝖮𝖬 V_𝖸^*, when evaluated over the tables T_𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24, T_𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾41andT_𝖫𝗈𝖼𝖺𝗍𝖾𝖽𝖨𝗇𝖢𝗈𝗎𝗇𝗍𝗒, would produce the answers to the query (Π, 𝖸(𝗌𝗍𝖺𝗍𝗂𝗈𝗇_𝗂𝖽, 𝖼𝗈𝗎𝗇𝗍𝗒)@ x) with maximal intervals ι = (ι_b, ι_e], where ι_b corresponds to 𝗅𝖾𝖽𝗀𝖾, and ι_e to 𝗋𝖾𝖽𝗀𝖾. We now explain how to construct queries for the concepts whosedefinitions involveusing the example of 𝖧𝖾𝖺𝗍𝖠𝖿𝖿𝖾𝖼𝗍𝖾𝖽𝖢𝗈𝗎𝗇𝗍𝗒: V_𝙷𝚎𝚊𝚝𝙰𝚏𝚏𝚎𝚌𝚝𝚎𝚍𝙲𝚘𝚞𝚗𝚝𝚢 = 𝚂𝙴𝙻𝙴𝙲𝚃V_𝙻𝚘𝚌𝚊𝚝𝚎𝚍𝙸𝚗𝙲𝚘𝚞𝚗𝚝𝚢^*.𝚌𝚘𝚞𝚗𝚝𝚢 𝙰𝚂 𝚌𝚘𝚞𝚗𝚝𝚢, 𝙼𝚇(V_𝙻𝚘𝚌𝚊𝚝𝚎𝚍𝙸𝚗𝙲𝚘𝚞𝚗𝚝𝚢^*.𝚕𝚎𝚍𝚐𝚎,V_𝙴𝚡𝚌𝚎𝚜𝚜𝚒𝚟𝚎𝙷𝚎𝚊𝚝^*.𝚕𝚎𝚍𝚐𝚎)𝙰𝚂 𝚕𝚎𝚍𝚐𝚎,𝙼𝙽(V_𝙻𝚘𝚌𝚊𝚝𝚎𝚍𝙸𝚗𝙲𝚘𝚞𝚗𝚝𝚢^*.𝚛𝚎𝚍𝚐𝚎,V_𝙴𝚡𝚌𝚎𝚜𝚜𝚒𝚟𝚎𝙷𝚎𝚊𝚝^*.𝚛𝚎𝚍𝚐𝚎) 𝙰𝚂 𝚛𝚎𝚍𝚐𝚎𝙵𝚁𝙾𝙼 V_𝙻𝚘𝚌𝚊𝚝𝚎𝚍𝙸𝚗𝙲𝚘𝚞𝚗𝚝𝚢^*, V_𝙴𝚡𝚌𝚎𝚜𝚜𝚒𝚟𝚎𝙷𝚎𝚊𝚝^* 𝚆𝙷𝙴𝚁𝙴 𝙼𝚇(V_𝙻𝚘𝚌𝚊𝚝𝚎𝚍𝙸𝚗𝙲𝚘𝚞𝚗𝚝𝚢^*.𝚕𝚎𝚍𝚐𝚎,V_𝙴𝚡𝚌𝚎𝚜𝚜𝚒𝚟𝚎𝙷𝚎𝚊𝚝^*.𝚕𝚎𝚍𝚐𝚎) < 𝙼𝙽(V_𝙻𝚘𝚌𝚊𝚝𝚎𝚍𝙸𝚗𝙲𝚘𝚞𝚗𝚝𝚢^*.𝚛𝚎𝚍𝚐𝚎,V_𝙴𝚡𝚌𝚎𝚜𝚜𝚒𝚟𝚎𝙷𝚎𝚊𝚝^*.𝚛𝚎𝚍𝚐𝚎) 𝙰𝙽𝙳V_𝙻𝚘𝚌𝚊𝚝𝚎𝚍𝙸𝚗𝙲𝚘𝚞𝚗𝚝𝚢^*.𝚌𝚘𝚞𝚗𝚝𝚢 = V_𝙴𝚡𝚌𝚎𝚜𝚜𝚒𝚟𝚎𝙷𝚎𝚊𝚝^*.𝚌𝚘𝚞𝚗𝚝𝚢, where 𝙼𝙽 (𝙼𝚇) is the function that returns the earliest (latest) of any two given date/time values (it can be implemented in SQL as a user-defined function, or using the 𝙲𝙰𝚂𝙴 operator). Finally, we use a query similar to (<ref>) over V^*_𝖧𝖾𝖺𝗍𝖠𝖿𝖿𝖾𝖼𝗍𝖾𝖽𝖢𝗈𝗎𝗇𝗍𝗒 to produce the answers to (Π, q(𝖼𝗈𝗎𝗇𝗍𝗒, x)).We are mostly interested in the scenario where the tables T_P are not available immediately, but extracted from raw timestamped data tables R by means of mappings. In this case, we use views V_P instead of T_P defined over R. For example, if the raw data is stored in the table 𝖶𝖾𝖺𝗍𝗁𝖾𝗋, we define the view: V_𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24 = 𝚂𝙴𝙻𝙴𝙲𝚃 𝚜𝚒𝚍, 𝚕𝚎𝚍𝚐𝚎, 𝚛𝚎𝚍𝚐𝚎𝙵𝚁𝙾𝙼( 𝚂𝙴𝙻𝙴𝙲𝚃 𝚜𝚝𝚊𝚝𝚒𝚘𝚗_𝚒𝚍 𝙰𝚂 𝚜𝚒𝚍,𝙻𝙰𝙶(𝚍𝚊𝚝𝚎_𝚝𝚒𝚖𝚎, 1)𝙾𝚅𝙴𝚁 (𝚠)𝙰𝚂𝚕𝚎𝚍𝚐𝚎,𝚍𝚊𝚝𝚎_𝚝𝚒𝚖𝚎𝙰𝚂𝚛𝚎𝚍𝚐𝚎𝙵𝚁𝙾𝙼𝖶𝖾𝖺𝗍𝗁𝖾𝗋𝚆𝙸𝙽𝙳𝙾𝚆 𝚠 𝙰𝚂(𝙿𝙰𝚁𝚃𝙸𝚃𝙸𝙾𝙽 𝙱𝚈𝚜𝚝𝚊𝚝𝚒𝚘𝚗_𝚒𝚍 𝙾𝚁𝙳𝙴𝚁𝙱𝚈𝚍𝚊𝚝𝚎_𝚝𝚒𝚖𝚎))𝚝𝚖𝚙𝚆𝙷𝙴𝚁𝙴𝚊𝚒𝚛_𝚝𝚎𝚖𝚙_𝚜𝚎𝚝_1>=24. Our general rewriting algorithm is outlined in Fig. <ref>, where the function 𝚊𝚗𝚜 produces an SQL query that computes the certain answers to (Π, Q(τ)@x) (with maximal intervals) by evaluating the query over the input database . The algorithm is a variation of the standard translation of non-recursive Datalog to relational algebra—see, e.g., the work by Ullman88-dbkb-v1—extended with the operations on temporal intervals described above (they are underlined in Fig. <ref>).It is to be noted that the `views' introduced by the algorithm do not require modifying the underlying database. They can be implemented in different ways: for example, by using subqueries, common table expressions (CTEs), or temporary tables. For the experiments in Section <ref>, we use the last approach, where temporary tables are generated on the fly and exist only within a transaction. § USE CASESWe test the feasibility of OBDA withby querying Siemens turbine log data and Meso­West weather data. In this section, we briefly describe these use cases;detailed results of our experiments will presented in Section <ref>. §.§ Siemens Siemens service centres store aggregated turbine sensor datain tables such as . 
The data comes with (not necessarily regular) timestamps t_1,t_2,…, and it is deemed that the values remain constant in every interval [t_i,t_i+1).Using a set of mappings, we extract from these tables a data instance containing ground facts such as 𝖠𝖼𝗍𝗂𝗏𝖾𝖯𝗈𝗐𝖾𝗋𝖠𝖻𝗈𝗏𝖾1.5(𝗍𝖻0)@[12:20:48,12:20:49),𝖠𝖼𝗍𝗂𝗏𝖾𝖯𝗈𝗐𝖾𝗋𝖠𝖻𝗈𝗏𝖾1.5(𝗍𝖻0)@[12:20:49,12:20:52),𝖱𝗈𝗍𝗈𝗋𝖲𝗉𝖾𝖾𝖽𝖠𝖻𝗈𝗏𝖾1500(𝗍𝖻0)@[12:20:48,12:20:49),𝖬𝖺𝗂𝗇𝖥𝗅𝖺𝗆𝖾𝖡𝖾𝗅𝗈𝗐0.1(𝗍𝖻0)@[12:20:48,12:20:52). For example, the first two of them are obtained from the tableusing the following SQL mapping ℳ: 𝖠𝖼𝗍𝗂𝗏𝖾𝖯𝗈𝗐𝖾𝗋𝖠𝖻𝗈𝗏𝖾1.5(𝗍𝖻𝗂𝖽)@[ 𝚕𝚎𝚍𝚐𝚎,𝚛𝚎𝚍𝚐𝚎)←𝚂𝙴𝙻𝙴𝙲𝚃 𝚝𝚋𝚒𝚍, 𝚕𝚎𝚍𝚐𝚎, 𝚛𝚎𝚍𝚐𝚎𝙵𝚁𝙾𝙼(𝚂𝙴𝙻𝙴𝙲𝚃 𝚝𝚞𝚛𝚋𝚒𝚗𝚎𝙸𝚍 𝙰𝚂 𝚝𝚋𝚒𝚍,𝙻𝙰𝙶(𝚍𝚊𝚝𝚎𝚃𝚒𝚖𝚎, 1)𝙾𝚅𝙴𝚁 (𝚠)𝙰𝚂𝚕𝚎𝚍𝚐𝚎,𝙻𝙰𝙶(𝚊𝚌𝚝𝚒𝚟𝚎𝙿𝚘𝚠𝚎𝚛, 1)𝙾𝚅𝙴𝚁 (𝚠)𝙰𝚂 𝚕𝚊𝚐_𝚊𝚌𝚝𝚒𝚟𝚎𝙿𝚘𝚠𝚎𝚛,𝚍𝚊𝚝𝚎𝚃𝚒𝚖𝚎𝙰𝚂𝚛𝚎𝚍𝚐𝚎𝙵𝚁𝙾𝙼𝚆𝙸𝙽𝙳𝙾𝚆 𝚠 𝙰𝚂(𝙿𝙰𝚁𝚃𝙸𝚃𝙸𝙾𝙽 𝙱𝚈𝚝𝚞𝚛𝚋𝚒𝚗𝚎𝙸𝚍 𝙾𝚁𝙳𝙴𝚁𝙱𝚈𝚍𝚊𝚝𝚎𝚃𝚒𝚖𝚎))𝚝𝚖𝚙 𝚆𝙷𝙴𝚁𝙴𝚕𝚊𝚐_𝚊𝚌𝚝𝚒𝚟𝚎𝙿𝚘𝚠𝚎𝚛>1.5 In terms of the basic predicates above, we define more complex ones that are used in queries posed by the Siemens engineers:𝖭𝗈𝗋𝗆𝖺𝗅𝖱𝖾𝗌𝗍𝖺𝗋𝗍(v) ←𝖭𝗈𝗋𝗆𝖺𝗅𝖲𝗍𝖺𝗋𝗍(v) _(0,1h]𝖭𝗈𝗋𝗆𝖺𝗅𝖲𝗍𝗈𝗉(v),𝖭𝗈𝗋𝗆𝖺𝗅𝖲𝗍𝗈𝗉(v) ←𝖢𝗈𝖺𝗌𝗍𝖣𝗈𝗐𝗇1500𝗍𝗈200(v) _(0,9m][𝖢𝗈𝖺𝗌𝗍𝖣𝗈𝗐𝗇6600𝗍𝗈1500(v) _(0,2m](𝖬𝖺𝗂𝗇𝖥𝗅𝖺𝗆𝖾𝖮𝖿𝖿(v) _(0,2m]𝖠𝖼𝗍𝗂𝗏𝖾𝖯𝗈𝗐𝖾𝗋𝖮𝖿𝖿(v) )],𝖬𝖺𝗂𝗇𝖥𝗅𝖺𝗆𝖾𝖮𝖿𝖿(v) ←⊟_[0s,10s]𝖬𝖺𝗂𝗇𝖥𝗅𝖺𝗆𝖾𝖡𝖾𝗅𝗈𝗐0.1(v),𝖠𝖼𝗍𝗂𝗏𝖾𝖯𝗈𝗐𝖾𝗋𝖮𝖿𝖿(v) ←⊟_[0s,10s]𝖬𝖺𝗂𝗇𝖯𝗈𝗐𝖾𝗋𝖡𝖾𝗅𝗈𝗐0.15(v),𝖢𝗈𝖺𝗌𝗍𝖣𝗈𝗐𝗇6600𝗍𝗈1500(v) ←⊟_[0s,30s]𝖱𝗈𝗍𝗈𝗋𝖲𝗉𝖾𝖾𝖽𝖡𝖾𝗅𝗈𝗐1500(v) _(0, 2m]⊟_(0,30s]𝖱𝗈𝗍𝗈𝗋𝖲𝗉𝖾𝖾𝖽𝖠𝖻𝗈𝗏𝖾6600(v) ,𝖢𝗈𝖺𝗌𝗍𝖣𝗈𝗐𝗇1500𝗍𝗈200(v) ←⊟_[0s,30s]𝖱𝗈𝗍𝗈𝗋𝖲𝗉𝖾𝖾𝖽𝖡𝖾𝗅𝗈𝗐200(v)_(0, 9m]⊟_(0,30s]𝖱𝗈𝗍𝗈𝗋𝖲𝗉𝖾𝖾𝖽𝖠𝖻𝗈𝗏𝖾1500(v),𝖭𝗈𝗋𝗆𝖺𝗅𝖲𝗍𝖺𝗋𝗍(v) ←𝖲𝖳𝖢𝗍𝗈𝖱𝖴𝖢𝖱𝖾𝖺𝖼𝗁𝖾𝖽(v) _(0,30s][𝖱𝖺𝗆𝗉𝖢𝗁𝖺𝗇𝗀𝖾1-2𝖱𝖾𝖺𝖼𝗁𝖾𝖽(v) _(0,5m](𝖯𝗎𝗋𝗀𝗂𝗇𝗀𝖨𝗌𝖮𝗏𝖾𝗋(v) _(0,11m](𝖯𝗎𝗋𝗀𝖾𝖠𝗇𝖽𝖨𝗀𝗇𝗂𝗍𝗂𝗈𝗇𝖲𝗉𝖾𝖾𝖽𝖱𝖾𝖺𝖼𝗁𝖾𝖽(v) _(0,15s]𝖥𝗋𝗈𝗆𝖲𝗍𝖺𝗇𝖽𝖲𝗍𝗂𝗅𝗅𝖳𝗈180(v) ))],𝖲𝖳𝖢𝗍𝗈𝖱𝖴𝖢𝖱𝖾𝖺𝖼𝗁𝖾𝖽(v) ←⊟_(0,30s]𝖱𝗈𝗍𝗈𝗋𝖲𝗉𝖾𝖾𝖽𝖠𝖻𝗈𝗏𝖾4800(v) _(0,2m]⊟_(0,30s]𝖱𝗈𝗍𝗈𝗋𝖲𝗉𝖾𝖾𝖽𝖡𝖾𝗅𝗈𝗐4400(𝗏),𝖱𝖺𝗆𝗉𝖢𝗁𝖺𝗇𝗀𝖾1-2𝖱𝖾𝖺𝖼𝗁𝖾𝖽(v) ←⊟_(0s,30s]𝖱𝗈𝗍𝗈𝗋𝖲𝗉𝖾𝖾𝖽𝖠𝖻𝗈𝗏𝖾4400(v) _(0,6.5m]⊟_(0,30s]𝖱𝗈𝗍𝗈𝗋𝖲𝗉𝖾𝖾𝖽𝖡𝖾𝗅𝗈𝗐1500(v),𝖯𝗎𝗋𝗀𝗂𝗇𝗀𝖨𝗌𝖮𝗏𝖾𝗋(v) ←⊟_[0s,10s]𝖬𝖺𝗂𝗇𝖥𝗅𝖺𝗆𝖾𝖮𝗇(v) _(0, 10m][ ⊟_(0,30s]𝖱𝗈𝗍𝗈𝗋𝖲𝗉𝖾𝖾𝖽𝖠𝖻𝗈𝗏𝖾1260(v) _(0,2m]⊟_(0,1m]𝖱𝗈𝗍𝗈𝗋𝖲𝗉𝖾𝖾𝖽𝖡𝖾𝗅𝗈𝗐1000(v) ],𝖯𝗎𝗋𝗀𝖾𝖠𝗇𝖽𝖨𝗀𝗇𝗂𝗍𝗂𝗈𝗇𝖲𝗉𝖾𝖾𝖽𝖱𝖾𝖺𝖼𝗁𝖾𝖽(v) ←⊟_[0s,30s]𝖱𝗈𝗍𝗈𝗋𝖲𝗉𝖾𝖾𝖽𝖠𝖻𝗈𝗏𝖾1260(v)_(0, 2m]⊟_(0,30s]𝖱𝗈𝗍𝗈𝗋𝖲𝗉𝖾𝖾𝖽𝖡𝖾𝗅𝗈𝗐200(v),𝖥𝗋𝗈𝗆𝖲𝗍𝖺𝗇𝖽𝖲𝗍𝗂𝗅𝗅𝖳𝗈180(v) ←⊟_[0s,1m]𝖱𝗈𝗍𝗈𝗋𝖲𝗉𝖾𝖾𝖽𝖠𝖻𝗈𝗏𝖾180(v)_(0, 1.5m]⊟_(0,1m]𝖱𝗈𝗍𝗈𝗋𝖲𝗉𝖾𝖾𝖽𝖡𝖾𝗅𝗈𝗐60(v).§.§ MesoWestThe Meso­West (<http://mesowest.utah.edu/>) project makes publicly available historical records of the weather stations across the US showing such parameters of meteorological conditions as temperature, wind speed and direction, amount of precipitation, etc. Each station outputs its measurements with some periodicity, with the output at time t_i+1 containing the accumulative (e.g., for precipitation) or averaged (e.g., for wind speed) value over the interval (t_i,t_i+1].The data comes in a table 𝖶𝖾𝖺𝗍𝗁𝖾𝗋, which looks as follows: -3ptstationId dateTime airTemp windSpeed windDir hourPrecip… … KBVY 2013-02-15;15:14 8 45 10 0.05 KMNI 2013-02-15;15:21 6 123 240 0KBVY 2013-02-15;15:24 8 47 10 0.08 KMNI 2013-02-15;15:31 6.7 119 220 0… One more table, 𝖬𝖾𝗍𝖺𝖽𝖺𝗍𝖺, provides some atemporal meta information about the stations: -3ptstationId county state latitude longitude … … KBVY Essex Massachusetts 42.58361 -70.91639KMNI Essex Massachusetts 33.58333-80.21667 … The monitoring and historical analysis of the weather involves answeringqueries such as `find showery counties, where one station observes precipitation at the moment, while another one does not, but observed precipitation 30 minutes ago'. 
We use SQL mappings over the 𝖶𝖾𝖺𝗍𝗁𝖾𝗋 table similar to those in the Siemens case to obtain ground atoms such as 𝖭𝗈𝗋𝗍𝗁𝖶𝗂𝗇𝖽(𝖪𝖡𝖵𝖸)@(15:14,15:24],𝖧𝗎𝗋𝗋𝗂𝖼𝖺𝗇𝖾𝖥𝗈𝗋𝖼𝖾𝖶𝗂𝗇𝖽(𝖪𝖬𝖭𝖨)@(15:21,15:31],𝖯𝗋𝖾𝖼𝗂𝗉𝗂𝗍𝖺𝗍𝗂𝗈𝗇(𝖪𝖡𝖵𝖸)@(15:14,15:24],𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾0(𝖪𝖡𝖵𝖸)@(15:14,15:24],𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾0(𝖪𝖬𝖭𝖨)@(15:21,15:31](according to the standard definition, the hurricane force wind is above 118 km/h). On the other hand, mappings to the 𝖬𝖾𝗍𝖺𝖽𝖺𝗍𝖺 table provide atoms such as 𝖫𝗈𝖼𝖺𝗍𝖾𝖽𝖨𝗇𝖢𝗈𝗎𝗇𝗍𝗒(𝖪𝖡𝖵𝖸,𝖤𝗌𝗌𝖾𝗑)@(-∞, ∞),𝖫𝗈𝖼𝖺𝗍𝖾𝖽𝖨𝗇𝖲𝗍𝖺𝗍𝖾(𝖪𝖡𝖵𝖸,𝖬𝖺𝗌𝗌𝖺𝖼𝗁𝗎𝗌𝖾𝗍𝗍𝗌)@(-∞, ∞). Our ontology contains definitions of various meteorological terms:𝖲𝗁𝗈𝗐𝖾𝗋𝗒𝖢𝗈𝗎𝗇𝗍𝗒(v) ←𝖫𝗈𝖼𝖺𝗍𝖾𝖽𝖨𝗇𝖢𝗈𝗎𝗇𝗍𝗒(u_1, v) 𝖫𝗈𝖼𝖺𝗍𝖾𝖽𝖨𝗇𝖢𝗈𝗎𝗇𝗍𝗒(u_2, v) 𝖯𝗋𝖾𝖼𝗂𝗉𝗂𝗍𝖺𝗍𝗂𝗈𝗇(u_1) 𝖭𝗈𝖯𝗋𝖾𝖼𝗂𝗉𝗂𝗍𝖺𝗍𝗂𝗈𝗇(u_2) _(0,30m]𝖯𝗋𝖾𝖼𝗂𝗉𝗂𝗍𝖺𝗍𝗂𝗈𝗇(u_2),⊟_[0,1h]𝖧𝗎𝗋𝗋𝗂𝖼𝖺𝗇𝖾(v) ←⊟_[0,1h]𝖧𝗎𝗋𝗋𝗂𝖼𝖺𝗇𝖾𝖥𝗈𝗋𝖼𝖾𝖶𝗂𝗇𝖽(v),𝖧𝗎𝗋𝗋𝗂𝖼𝖺𝗇𝖾𝖠𝖿𝖿𝖾𝖼𝗍𝖾𝖽𝖲𝗍𝖺𝗍𝖾(v) ←𝖫𝗈𝖼𝖺𝗍𝖾𝖽𝖨𝗇𝖲𝗍𝖺𝗍𝖾(u,v) 𝖧𝗎𝗋𝗋𝗂𝖼𝖺𝗇𝖾(u), ⊟_[0,24h]𝖤𝗑𝖼𝖾𝗌𝗌𝗂𝗏𝖾𝖧𝖾𝖺𝗍(v) ←⊟_[0,24h]𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾24(v) _[0,24h]𝖳𝖾𝗆𝗉𝖠𝖻𝗈𝗏𝖾41(v),𝖧𝖾𝖺𝗍𝖠𝖿𝖿𝖾𝖼𝗍𝖾𝖽𝖢𝗈𝗎𝗇𝗍𝗒(v) ←𝖫𝗈𝖼𝖺𝗍𝖾𝖽𝖨𝗇𝖢𝗈𝗎𝗇𝗍𝗒(u,v) 𝖤𝗑𝖼𝖾𝗌𝗌𝗂𝗏𝖾𝖧𝖾𝖺𝗍(u),𝖢𝗒𝖼𝗅𝗈𝗇𝖾𝖯𝖺𝗍𝗍𝖾𝗋𝗇𝖲𝗍𝖺𝗍𝖾(v) ←𝖫𝗈𝖼𝖺𝗍𝖾𝖽𝖨𝗇𝖲𝗍𝖺𝗍𝖾(u_1, v) 𝖫𝗈𝖼𝖺𝗍𝖾𝖽𝖨𝗇𝖲𝗍𝖺𝗍𝖾(u_2, v) 𝖫𝗈𝖼𝖺𝗍𝖾𝖽𝖨𝗇𝖲𝗍𝖺𝗍𝖾(u_3, v) 𝖫𝗈𝖼𝖺𝗍𝖾𝖽𝖨𝗇𝖲𝗍𝖺𝗍𝖾(u_4, v) 𝖤𝖺𝗌𝗍𝖶𝗂𝗇𝖽(u_1) 𝖭𝗈𝗋𝗍𝗁𝖶𝗂𝗇𝖽(u_2) 𝖶𝖾𝗌𝗍𝖶𝗂𝗇𝖽(u_3) 𝖲𝗈𝗎𝗍𝗁𝖶𝗂𝗇𝖽(u_4).§ EXPERIMENTSTo evaluate the performance of the SQL queries produced by therewriting algorithm outlined in Section <ref>, we developed two benchmarks for our use cases.We ran the experiments on an HP Proliant server with 2 Intel Xeon X5690 Processors (with 12 logical cores at 3.47GHz each), 106GB of RAM and five 1TB 15K RPM HD. We used both PostgreSQL 9.6 and the SQL interface <cit.> of Apache Spark 2.1.0. Apache Spark is a cluster-computing framework that provides distributed task dispatching, scheduling and data parallelisation. For each of these two systems, we provided two different implementations, imperative and standard SQL, which diverge in the computation of maximal intervals; see Section <ref>. We ran all the querieswith a timeout of 30 minutes. §.§ Siemens Siemens provided us with a sample of data for one running turbine, which we denote by 𝗍𝖻0, over 4 days in the form of the table .The data table was rather sparse, containing a lot ofnulls, because different sensors recorded data at different frequencies. For example, 𝖠𝖼𝗍𝗂𝗏𝖾𝖯𝗈𝗐𝖾𝗋 arrivedmost frequently with average periodicity of 7 seconds, whereas the values for the field 𝖬𝖺𝗂𝗇𝖥𝗅𝖺𝗆𝖾 arrived most rarely, every 1 minute on average. We replicated this sample to imitate the data for one turbine over 10 different periods ranging from 32 to 320 months. The statistics of the data sets are given in Tables <ref> and <ref>. We evaluated four queries 𝖠𝖼𝗍𝗂𝗏𝖾𝖯𝗈𝗐𝖾𝗋𝖳𝗋𝗂𝗉(𝗍𝖻0)@x, 𝖭𝗈𝗋𝗆𝖺𝗅𝖲𝗍𝖺𝗋𝗍(𝗍𝖻0)@x, 𝖭𝗈𝗋𝗆𝖺𝗅𝖲𝗍𝗈𝗉(𝗍𝖻0)@x and 𝖭𝗈𝗋𝗆𝖺𝗅𝖱𝖾𝗌𝗍𝖺𝗋𝗍(𝗍𝖻0)@x. The statistics of returned answersis given in Table <ref>. The execution times for the Siemens use case are given in Fig. <ref>.Although Apache Spark was designed to perform efficient parallel computations, it failed to take advantage of this feature due to the fact that the Siemens data could not be partitioned by mapping each part to a separate core. PostgreSQL 9.6 also supports parallel query execution in some cases. However, as many operators (e.g., scans of temporary tables) in our queries are classified either `parallel unsafe' or `parallel restricted' in the parallel safety documentation <cit.>, the query planner failed to produce any parallel execution strategy in our case.The reason why PostgreSQL outperformed Apache Spark is that the latter does not provide a convenient way to define proper indexes over temporary tables, which leads to quadratically growing running times. 
On the other hand, PostgreSQL shows linear growth in the size of data (confirming the theoretical results, since we deal with a single turbine). Note that the normal restart (start) query times out on the data for more than 18 (respectively, 21) years, which is more than enough for the monitoring and diagnostics tasks at Siemens, where the two most common application scenarios for sensor data analytics are daily monitoring (that is, analytics of high-frequency data of the previous 24 hours) and fleet-level analytics of key-performance indicators over one year. In both cases, the computation time of the results is a far less crucial cost factor than the lead-time for data preparation.§.§ MesoWest In contrast to the Siemens case, the weather tables contain very few nulls. Normally, the data values arrive with periodicity from 1 to 20 minutes. We tested the performance of our algorithm by increasing (i) the temporal span (with some necessary increase of the spatial spread) and (ii) the geographical spread of data. For (i), we took the New York state data for the 10 continuous periods between 2005 and 2014; see Tables <ref> and <ref>. As each year around 70 new weather stations were added, our 10 data samples increase more than linearly in size. For (ii), we fixed the time period of one year (2012) and linearly increased the data from 1 to 19 states (NY, NJ, MD, DE, GA, RI, MA, CT, LA, VT, ME, WV, NH, NC, MS, SC, ND, KY, SD); see Tables <ref> and <ref>. In both cases, we executed four queries 𝖲𝗁𝗈𝗐𝖾𝗋𝗒𝖢𝗈𝗎𝗇𝗍𝗒(v)@x, 𝖧𝗎𝗋𝗋𝗂𝖼𝖺𝗇𝖾𝖠𝖿𝖿𝖾𝖼𝗍𝖾𝖽𝖲𝗍𝖺𝗍𝖾(NY)@x, 𝖧𝖾𝖺𝗍𝖠𝖿𝖿𝖾𝖼𝗍𝖾𝖽𝖢𝗈𝗎𝗇𝗍𝗒(v)@x, 𝖢𝗒𝖼𝗅𝗈𝗇𝖾𝖯𝖺𝗍𝗍𝖾𝗋𝗇𝖲𝗍𝖺𝗍𝖾(NY)@x. The statistics of the returned answers are shown in Tables <ref> and <ref>. The execution times are shown in Figures <ref> and <ref>. All four queries can be answered within the time limit. The most expensive one is the cyclone pattern state query because its definition includes a join of four atoms for winds in four directions, each with a large volume of instances. All the graphs in Figures <ref> and <ref> exhibit linear behaviour with respect to the size of data. The nearly tenfold better performance of Spark over PostgreSQL can be explained by the fact that, unlike the data in the Siemens case, the MesoWest data is highly parallelisable. Since it was collected from hundreds of different weather stations, it can be partitioned by station id, state, county, etc. to perfectly fit the MapReduce programming model extended with resilient distributed datasets (RDDs) <cit.>. In this case, Apache Spark is able to take advantage of the multi-core and large-memory hardware infrastructure to compute mappings and coalescing in parallel, making it 10 times faster than PostgreSQL; see Figures <ref> and <ref>. Overall, the results of the experiments look very encouraging: our query rewriting algorithm produces SQL queries that are executable by the standard database engine PostgreSQL in acceptable time, and by the cluster-computing framework Apache Spark in better than acceptable time (in case the data can be properly partitioned), over large sets of real-world temporal data of up to 8.3GB in CSV format. The relatively challenging queries, such as 𝖭𝗈𝗋𝗆𝖺𝗅𝖱𝖾𝗌𝗍𝖺𝗋𝗍 and 𝖢𝗒𝖼𝗅𝗈𝗇𝖾𝖯𝖺𝗍𝗍𝖾𝗋𝗇𝖲𝗍𝖺𝗍𝖾, require a large number of temporal joins, which turn out to be rather expensive. § CONCLUSIONS AND FUTURE WORK To facilitate access to temporal sensor data with the aim of monitoring and diagnostics, we suggested an ontology language extending datalog with the Horn fragment of the metric temporal logic (under the continuous semantics).
We showed that answering queries in this language is ExpSpace-complete for combined complexity, but becomes undecidable if the diamond operators are allowed in the head of rules. We also proved that answering nonrecursive queries is PSpace-complete for combined complexity and in AC^0 for data complexity. We tested the feasibility and efficiency of OBDA with this language on two real-world use cases by querying Siemens turbine data and MesoWest weather data. Namely, we designed ontologies defining typical concepts used by Siemens engineers and various meteorological terms, developed and implemented an algorithm rewriting queries into SQL queries, and then executed the SQL queries obtained by this algorithm from our ontologies over the Siemens and MesoWest data, showing their acceptable efficiency and scalability. (To the best of our knowledge, this is the first work on practical OBDA with temporal ontologies, and so no other systems with similar functionalities are available for comparison.) Based on these encouraging results, we plan to include our temporal OBDA framework into the Ontop platform <cit.>; visit <http://ontop.inf.unibz.it/> for more information on Ontop. Note also that the language presented here has been recently used to develop an ontology of ballet moves (see Example <ref>) that underlies a search engine for annotated sequences in ballet videos <cit.>. This is a third use case for our framework (and we are aware of a few more emerging use cases), which makes an efficient and user-friendly implementation of the framework a top priority. We are also working on the streaming data setting, where the challenge is to continuously evaluate queries over the incoming data. A rule-based language with window operators for analysing streaming data has been suggested by DBLP:conf/aaai/BeckDEF15. This language is very expressive as it uses an abstract semantics for window operators (which does not have to guarantee decidability) and allows negation and disjunction in the rules. It would be interesting to identify and adapt a suitable fragment of this language in our temporal OBDA framework. §.§ Acknowledgements This work was supported by the UK EPSRC grant EP/M012670 `iTract: Islands of Tractability in Ontology-Based Data Access' and by the OBATS project at the Free University of Bozen-Bolzano. Guohui Xiao is the corresponding author of this article. § APPENDIX §.§ Proof of Theorem <ref> The formula σ^⟨ m, n ⟩_, P, P_1, P_2(x,y) is defined as follows: ∃ x_1, y_1, …, x_5, y_5 ⋁_cm_1 ∈ 𝗅𝖾(P_1) n_1 ∈ 𝗋𝗂(P_1) ⌈_1 ∈ { [, ( }, ⌉_1 ∈ { ], ) } ( φ_P_1^⌈_1 m_1, n_1 ⌉_1 (x_1, y_1) ⋁_cm_2 ∈ 𝗅𝖾(P_2) n_2 ∈ 𝗋𝗂(P_2) ⌈_2 ∈ { [, ( }, ⌉_2 ∈ { ], ) } ( φ_P_2^⌈_2 m_2, n_2 ⌉_2 (x_2, y_2) ⋁_cm_3 = m_1 n_3 = n_1 ⌈_3 ∈ { [, ( }, ⌉_3 ∈ { ], ) } ((x_3 = x_1)(y_3 = y_1) 𝗂𝗌_⌈_3,[ 𝗂𝗌_⌉_3,] ⋁_cm_4 ∈ 𝗅𝖾(P_1) ∪ 𝗅𝖾(P_2) n_4 ∈ 𝗋𝗂(P_1) ∪ 𝗋𝗂(P_2) ⌈_4 ∈ { [, ( }, ⌉_4 ∈ { ], ) } ( 𝗂𝗇𝗍𝖾𝗋_⌈_2 m_2, n_2 ⌉_2, ⌈_3 m_3, n_3 ⌉_3^⌈_4 m_4, n_4 ⌉_4(x_4, y_4, x_2, y_2, x_3, y_3) ⋁_cm_5 ∈ 𝗅𝖾(P) n_5 ∈ 𝗋𝗂(P) ⌈_5 ∈ { [, ( }, ⌉_5 ∈ { ], ) } (𝗉𝗅𝗎𝗌𝗈_, ⌈_4 m_4, n_4 ⌉_4^⌈_5 m_5, n_5 ⌉_5(x_5, y_5, x_4, y_4) 𝗂𝗇𝗍𝖾𝗋_⌈_5 m_5, n_5 ⌉_5, ⌈_3 m_3, n_3 ⌉_3^⟨ m, n ⟩(x, y, x_5, y_5, x_3, y_3) ))))), where 𝗉𝗅𝗎𝗌𝗈_, ⌈_4 m_4, n_4 ⌉_4^⌈_5 m_5, n_5 ⌉_5(x_5, y_5, x_4, y_4) is an (obvious) formula saying that ⌈_5 x_5 + m_5, y_5 + n_5 ⌉_5 is the interval ⌈_4 x_4 + m_4, y_4 + n_4 ⌉_4 +^o. The formula x = y + c, for a non-negative c, is defined as follows.
For c = ∞, we take the formula ∀ j(𝖻𝗂𝗍^ in(x, j, 1) 𝖻𝗂𝗍^ fr(x, j, 1)), whereas for a constant c= h/2^k, we can use ∀ j ((𝖻𝗂𝗍^ in(x, j, 0) 𝖻𝗂𝗍^ in_+ h/2^k(y, j, 0)) (𝖻𝗂𝗍^ in(x, j, 1) 𝖻𝗂𝗍^ in_+ h/2^k(y, j, 1)))∀ j ((𝖻𝗂𝗍^ fr(x, j, 0) 𝖻𝗂𝗍^ fr_+ h/2^k(y, j, 0)) (𝖻𝗂𝗍^ fr(x, j, 1) 𝖻𝗂𝗍^ fr_+ h/2^k(y, j, 1))), where predicates 𝖻𝗂𝗍^ in_+ h/2^k(y, j, v), saying that v is the j-th bit of the integer part of y + h/2^k, and 𝖻𝗂𝗍^ fr_+ h/2^k(y, j, v), saying that v is the j-th bit of the fractional part of y + h/2^k, are defined inductively as follows: 𝖻𝗂𝗍^ fr_+0/2^k(y, j, v) = 𝖻𝗂𝗍^ fr (y, j, v),𝖻𝗂𝗍^ fr_+(d+1/2^k)(y, j, v) = ∃ u ( (u = ℓ - k) ( ((j ≤ u) 𝖻𝗂𝗍^ fr_+d(y, j, v) ) ( (v = 0) 𝖻𝗂𝗍^ fr_+d(y, j, 0) ∃ j' ((u < j' < j) 𝖻𝗂𝗍^ fr_+d(y, j', 0)) ) ( (v = 0) 𝖻𝗂𝗍^ fr_+d(y, j, 1) ∀ j' ((u < j' < j) →𝖻𝗂𝗍^ fr_+d(y, j', 1))) ( (v = 1) 𝖻𝗂𝗍^ fr_+d(y, j, 1) ∃ j' ((u < j' < j) 𝖻𝗂𝗍^ fr_+d(y, j', 0)) ) ( (v = 1) 𝖻𝗂𝗍^ fr_+d(y, j, 0) ∀ j' ((u < j' < j) →𝖻𝗂𝗍^ fr_+d(y, j', 1)) ) ) ),𝖻𝗂𝗍^ in_+0/2^k(y, j, v) = 𝖻𝗂𝗍^ in (y, j, v),𝖻𝗂𝗍^ in_+(d+1/2^k)(y, j, v) = ∃ u ( (u = ℓ - k) (( (v = 0) 𝖻𝗂𝗍^ in_+d(y, j, 0) ∃ j' ( ((j' < j) 𝖻𝗂𝗍^ in_+d(y, j', 0))((u < j' < j) 𝖻𝗂𝗍^ fr_+d(y, j', 0))) ) ( (v = 0) 𝖻𝗂𝗍^ in_+d(y, j, 1) ∀ j' (((j' < j) →𝖻𝗂𝗍^ in_+d(y, j', 1))(u < j' < j) →𝖻𝗂𝗍^ fr_+d(y, j', 1)) ) ( (v = 1) 𝖻𝗂𝗍^ in_+d(y, j, 0) ∃ j' ( ((j' < j) 𝖻𝗂𝗍^ in_+d(y, j', 0))((u < j' < j) 𝖻𝗂𝗍^ fr_+d(y, j', 0))) ) ( (v = 1) 𝖻𝗂𝗍^ in_+d(y, j, 1) ∀ j' (((j' < j) →𝖻𝗂𝗍^ in_+d(y, j', 1)) ((u < j' < j) →𝖻𝗂𝗍^ fr_+d(y, j', 1))) ) ) ). Here, u = ℓ - k can be easily defined using < and k.§.§ Proofs of Lemmas <ref> and <ref>If T satisfies TOA, then a projection of T satisfying TOA can be computed in time O(|T|_o^2 × |T|_t).We first partition T into a set of purely temporal tables T_c_1, …, c_m and compute the set of all individual tuples (c_1', …, c_n') that will appear in the projection T'. Let (c_1', …, c_n') be one such tuple, and consider the tables T_c_1^1, …, c_m^1, …, T_c_1^k, …, c_m^k such that the projection of each (c_1^i, …, c_m^i) is precisely (c_1', …, c_n'). Clearly, we have at most |T|_o such tables. It is well-known that, for a pair of ordered tables S and S', we can construct an ordered table that contains all the tuples S ∪ S' in time |S|+|S'|. We use this algorithm k times to obtain an ordered table containing all the tuples of T_c_1^1, …, c_m^1∪…∪ T_c_1^k, …, c_m^k in time O(k|T|_o). We then write the tuples of the form (c_1', …, c_n', ⟨, t_1, t_2, ⟩), where (⟨, t_1, t_2, ⟩) is a tuple from the united table, into the output table. It can be readily checked that the complete output table can be produced in the required time. For any pair of tables T and T' satisfying TOA, their union table also satisfying TOA can be computed in time O((|T|_o^2 + |T'|_o^2) × (|T|_t + |T'|_t)).We first partition T and T' into sets of purely temporal tables T_c_1, …, c_m and, respectively, T'_c_1, …, c_m. While doing this partition, we make sure that the tables T_c_1, …, c_m are stored sequentially with respect to some order on the tuples (c_1, …, c_m) (it can be done in time |T|_o^2 × |T|_t). We do the same for the tables T_c_1, …, c_m'. 
It remains to go through all the tuples (⟨, t_1, t_2, ⟩) and (⌈, t_1', t_2', ⌉) in all the tables T_c_1, …, c_m and T'_c_1, …, c_m to produce the union table, by an algorithm similar to the one applied to the tables S and S' in the proof of Lemma <ref> (a sketch of this standard merge is given at the end of this appendix).§.§ Experimental Results Here, CSV is the size of the data in CSV format; PostgreSQL (raw size) is the size of the data itself stored in PostgreSQL, as reported by the corresponding PostgreSQL function; PostgreSQL (total size) is the size of the total data (including the index) stored in PostgreSQL, as reported by the corresponding PostgreSQL function; and Parquet is the size of the data in the Apache Parquet format, used by Apache Spark.
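The linear-time union of two temporally ordered tables invoked in the proofs of the lemmas above is the standard two-pointer merge of ordered sequences. The following minimal Python sketch illustrates it; the tuple layout and all names are illustrative choices of ours, not part of the implementation described in the paper.

```python
from typing import List, Tuple

# An interval tuple as stored in a purely temporal table: a left bracket,
# the two endpoints, and a right bracket (layout is illustrative).
Interval = Tuple[str, float, float, str]

def union_ordered(s: List[Interval], s_prime: List[Interval]) -> List[Interval]:
    """Linear-time union of two temporally ordered tables: a single
    two-pointer pass, using at most |S| + |S'| comparisons."""
    out: List[Interval] = []
    i = j = 0
    while i < len(s) and j < len(s_prime):
        if s[i][1:3] <= s_prime[j][1:3]:      # order by (t_1, t_2)
            out.append(s[i]); i += 1
        else:
            out.append(s_prime[j]); j += 1
    out.extend(s[i:])
    out.extend(s_prime[j:])
    return out
```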
TUM-HEP 1079/17, KIAS-P17060

Optimized velocity distributions for direct dark matter detection

Alejandro Ibarra^1,2, Andreas Rappelt^1

^1 Physik-Department T30d, Technische Universität München, James-Franck-Straße, 85748 Garching, Germany
^2 School of Physics, Korea Institute for Advanced Study, Seoul 02455, South Korea

We present a method to calculate, without making assumptions about the local dark matter velocity distribution, the maximal and minimal number of signal events in a direct detection experiment given a set of constraints from other direct detection experiments and/or neutrino telescopes. The method also allows one to determine the velocity distribution that optimizes the signal rates. We illustrate our method with three concrete applications: i) to derive a halo-independent upper limit on the cross section from a set of null results, ii) to confront in a halo-independent way a detection claim to a set of null results and iii) to assess, in a halo-independent manner, the prospects for detection in a future experiment given a set of current null results. § INTRODUCTION Numerous observations point towards the existence of a non-luminous matter component in galaxies, clusters of galaxies and the Universe at large scales, dubbed dark matter (for reviews, see <cit.>). A plausible hypothesis for the nature of the dark matter is that it consists of new particles not contained in the Standard Model. If correct, dark matter particles would scatter off nuclei, leading to potential tests of the particle dark matter hypothesis and to the eventual determination of its characteristics, such as its mass and the strength of its interactions with ordinary matter. Various search strategies have been proposed based on the possibility that dark matter particles could scatter off nuclei. Direct detection experiments aim to detect the nuclear recoil induced by the elastic scattering of dark matter particles traversing a detector at the Earth <cit.>. This search strategy requires an exquisite suppression of the rate of recoils induced by electromagnetic interactions of α-particles, electrons, and photons produced by the radioactive isotopes in the surrounding material, as well as by nuclear interactions of neutrons produced by natural radioactivity (for a review, see e.g. <cit.>). Alternatively, one may search for the characteristic annual modulation of the rate of dark matter induced scatterings against the mostly time-independent rate of background events <cit.>. A complementary strategy consists in searching, with neutrino telescopes, for a flux of high energy neutrinos correlated with the direction of the Sun, hypothetically produced in the annihilation of dark matter particles previously captured in the Sun via a series of scatterings with the matter in the solar interior <cit.>. The interpretation of the experimental results in a concrete model of particle dark matter suffers from various nuclear and astrophysical uncertainties. Concretely, the rate of dark matter-nucleus scatterings crucially depends on the flux of dark matter particles impinging on the target material, which in turn depends on the dark matter number density and velocity distribution inside the Solar System.
It is common in the literature to adopt a local dark matter density ρ_loc ≈ 0.3 GeV/cm^3 and a velocity distribution in the galactic rest frame with a Maxwell-Boltzmann form. While the adopted value of the local dark matter density is well motivated by astronomical observations (see e.g. <cit.>), the form of the velocity distribution is totally unknown and relies purely on theoretical considerations. The Maxwell-Boltzmann form arises as the solution to the collisionless Boltzmann equation for a dark matter distribution consisting of an isotropic, isothermal sphere with density distribution ρ(r) ∼ r^-2 <cit.>. On the other hand, N-body simulations indicate that a Maxwellian distribution might not provide a good description of the smooth halo component <cit.>. Furthermore, it has been argued that the dark matter halo of our Galaxy might contain tidal streams or a dark disk component <cit.>, which may induce significant deviations of the velocity distribution from the Maxwell-Boltzmann form and in turn affect the interpretation of dark matter search experiments <cit.> (however, the existence of a dark disk in the Milky Way has been questioned in <cit.> and seems to be disfavored by observations <cit.>). More recently, hydrodynamical simulations have suggested that the average velocity distribution at the position of the Solar System may be well described by a Maxwell-Boltzmann form <cit.>. However, this conclusion is based on a sample of particles enclosed in a fairly large volume, and deviations from the Maxwellian form at the scales of the Solar System cannot be precluded. Our current ignorance of the dark matter velocity distribution inside the Solar System represents an important source of uncertainty in the analysis of direct detection experiments and motivates the development of halo-independent methods. In <cit.>, a method was proposed to map experimental signals from one detector to another by introducing a variable that includes both the information about the dark matter scattering cross section and the integrated velocity distribution, often denoted as η(v_min). This method has been applied to quantify the compatibility of a positive claim with a null result in a halo-independent way, by comparing measurements and upper limits of this variable from different direct detection experiments <cit.>. Other works focus on dark matter parameter estimation in case a positive signal is detected in a direct detection experiment, either using general parametrizations of the velocity distribution <cit.>, or by decomposing the velocity distribution into a number of streams <cit.>. Other methods exploit the complementarity of direct search experiments and neutrino telescopes in probing different parts of the dark matter velocity distribution, and have been used to derive halo-independent constraints on the dark matter properties from combining two null search experiments <cit.>, to investigate the implications of a positive signal in a direct detection experiment for the forecast neutrino flux from the Sun <cit.> or, in the event that a signal is also detected at a neutrino telescope, for the reconstruction of the dark matter parameters <cit.>. In this paper we develop a new method to compare, in a halo independent manner, the outcome of two or more experiments probing the dark matter distribution inside the Solar System.
Utilizing the fact that the flux of dark matter particles, and therefore the rate of scatterings with nuclei, is linear in the velocity distribution, we apply techniques of linear programming to determine the velocity distribution that minimizes/maximizes the outcome of one experiment subject to the constraints from other experiments. We also illustrate this approach with three concrete applications:i) to derive a halo-independent upper limit on the cross section from a set of null results, ii) to confront in a halo-independent way a detection claim to a set of null results and iii) to assess, in a halo-independent manner, the prospects for detection in a future experiment given a set of current null results.This work is organized as follows. In Section <ref> we review the formalism to calculate the rate and the modulation signal of dark matter induced scatterings at a direct detection experiment, as well as the dark matter capture rate in the Sun. In Section <ref> we present our method to optimize the outcome of an experiment given the constraints from other experiments, and in Section <ref> we discuss some concrete applications of our method. Finally, in Section <ref> we present our conclusions and an outlook. § APPROACHES TO DARK MATTER DETECTION IN THE SOLAR SYSTEMWe postulate that the dark matter distribution inside the Solar System is spatially homogeneous and has density ρ_ loc, is constant over time, and has a velocity distribution relative to the solar frame f(v⃗) normalized such that∫_v ≤ v_max dv^3 f(v⃗)=1,where v≡ |v⃗| and v_max is the maximal velocity of a dark matter particle that is gravitationally bound to the galaxy, expressed in the solar frame. Throughout the paper, we will adopt v_max≃ 777 km/s, which is the sum of the galactic escape velocity ≃ 533 km/s <cit.> and the local velocity of the Sun with respect to the halo ≃ 244 km/s <cit.>. Three different methods have been proposed to test the particle nature of the putative dark matter population inside the solar system:the search for dark matter induced nuclear recoils in a direct detection experiment <cit.>, the search for the characteristic annual modulation signal in the rate of nuclear recoils <cit.> and the search for a high energy neutrino flux from the Sun from the annihilation of dark matter particles captured in the solar interior via scatterings <cit.>. The rate of nuclear recoils induced by scatterings of dark matter particles traversing a detector at the Earth can be calculated from:R= ∑_i ∫_0^∞dE_Rϵ_i (E_R) ξ_i ρ_loc/m_A_i m_DM∫_v^ (D)≥ v_min,i^ (D)(E_R)d^3 v^ (D)v^ (D) f (v⃗^(D)+v⃗_ obs(t))dσ_i/dE_R .Here, v⃗^(D) denotes the dark matter velocity in the frame of the detector D, hence the velocity distribution of dark matter particles is f (v⃗^(D)+v⃗_ obs(t)), with v⃗_ obs(t) the (time-dependent) velocity of the observer relative to the Sun, given by <cit.>:v⃗_obs=v_⊕·{e⃗_1·sin(ω·(t-t_phase))-e⃗_2·cos(ω·(t-t_phase))},where v_⊕=29.8 km/s is the absolute value of the Earth's velocity with respect to the Sun, ω=2π/year, t_phase=0.218 is the time of Spring equinox in units of years, and e⃗_1 and e⃗_2 are unit vectors in the direction of the Sun during Spring equinox and Summer solstice, which in galactic coordinates read e⃗_1 = (-0.0670,0.4927,-0.8676), e⃗_2 = (-0.9931,-0.1170,0.01032) <cit.>. 
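For later numerical use, the following minimal Python sketch collects the parametrization of v⃗_obs(t) quoted above, together with the kinematic thresholds v_min,i^(D)(E_R) and v_max,i^(Sun)(r) whose explicit expressions appear in the text immediately below. The numerical constants are those given in the text; the function names and unit conventions are our own.

```python
import numpy as np

# Parameters quoted in the text (velocities in km/s, times in years).
V_EARTH, OMEGA, T_PHASE = 29.8, 2.0 * np.pi, 0.218
E1 = np.array([-0.0670,  0.4927, -0.8676])   # Sun's direction at Spring equinox
E2 = np.array([-0.9931, -0.1170,  0.01032])  # Sun's direction at Summer solstice

def v_obs(t):
    """Earth's velocity relative to the Sun, in galactic coordinates.
    Its modulus is constant (~29.8 km/s); only the direction changes,
    which is what drives the annual modulation discussed below."""
    phase = OMEGA * (t - T_PHASE)
    return V_EARTH * (E1 * np.sin(phase) - E2 * np.cos(phase))

def v_min_recoil(E_R, m_A, m_dm):
    """Minimal DM speed producing a recoil of energy E_R off a nucleus of
    mass m_A: v_min = sqrt(m_A E_R / (2 mu_A^2)). Natural units: masses
    and E_R in GeV, speed in units of c."""
    mu = m_A * m_dm / (m_A + m_dm)            # DM-nucleus reduced mass
    return np.sqrt(m_A * E_R / (2.0 * mu**2))

def v_max_capture(v_esc_r, m_A, m_dm):
    """Maximal asymptotic DM speed for which capture is kinematically
    allowed at a point with escape velocity v_esc(r):
    v_max = 2 v_esc sqrt(m_dm m_A) / |m_dm - m_A|.
    For heavy DM this scales as 2 v_esc sqrt(m_A/m_dm) -> 0, the
    high-mass blind spot exploited in Section 4.1."""
    return 2.0 * v_esc_r * np.sqrt(m_dm * m_A) / abs(m_dm - m_A)
```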
Besides, dσ_i/dE_R is the differential cross section for the elastic scattering of a dark matter particle off a nuclear isotope i with mass m_A_i and mass fraction ξ_i in the detector, producing in the scattering a nucleus with energy E_R. Furthermore, v_min,i^(D)(E_R) = √(m_A_i E_R/(2 μ_A_i^2)) is the minimal velocity necessary for a dark matter particle to induce a recoil with energy E_R, with μ_A_i being the reduced mass of the dark matter-nucleus system, and ϵ_i(E_R) is the probability to detect a nuclear recoil off the target nucleus i with energy E_R. Finally, the number of expected recoil events at a direct detection experiment reads N = R·ℰ, with ℰ the exposure of the experiment. Due to the changing direction of the Earth's velocity relative to the Sun's, the flux of dark matter particles at Earth is expected to change with a one-year periodicity, leading to an annual modulation in the rate of nuclear recoils which could be detected in an experiment with a sufficiently long exposure <cit.>. The modulation signal is defined as the difference of the recoil rates on June 1st and December 1st, averaged over the energy interval [E_-, E_+], namely: S_[E_-,E_+] = 1/(E_+ - E_-) · 1/2 · ( R_[E_-,E_+]|_June 1st - R_[E_-,E_+]|_Dec 1st ), where R_[E_-,E_+](t) is the total event rate in that bin at the time t, which can be calculated from Eq. (<ref>). This definition is motivated by the fact that in the Standard Halo Model the maximum rate of high energetic recoils is expected around June 1st and the minimum around December 1st; in this case, S_[E_-,E_+] can be identified with the amplitude of the annual modulation signal. Finally, dark matter particles traversing the Sun could scatter off nuclei in the solar interior, lose energy and eventually sink to the center. This process leads to an overdensity of dark matter particles in the solar interior, where annihilations can occur at a rate which can be sufficiently large to allow the observation at Earth of the high energy neutrinos produced in the annihilation <cit.>. Assuming that dark matter capture and annihilation occur at the same rate in the solar interior, the neutrino flux from annihilations in the Sun is completely determined by the capture rate, which is given by <cit.> C = ∑_i ∫_0^R_⊙ 4π r^2 dr η_i(r) ρ_loc/m_DM ∫_v ≤ v_max,i^(Sun)(r) d^3v f(v⃗)/v (v^2 + [v_esc(r)]^2) × ∫_m_DM v^2/2^2μ_A_i^2 (v^2 + [v_esc(r)]^2)/m_A_i dE_R dσ_i/dE_R, where η_i(r) is the number density of the element i at a distance r from the solar center (for which we adopt the solar model AGSS09 <cit.>), R_⊙ is the solar radius, v_esc(r) is the escape velocity from the Sun at the distance r from the center, and v_max,i^(Sun)(r) = 2 v_esc(r) √(m_DM m_A_i)/|m_DM - m_A_i| is the maximum velocity of a dark matter particle such that capture in the Sun at r remains kinematically possible after scattering off the element i. The theoretical interpretation of the outcome of any of the three search strategies described above is subject to uncertainties from our ignorance of the nature of the dark matter particle and its interactions with nuclei, as well as of the density and velocity distribution inside the Solar System. It is common in the literature to cast the differential cross section as (see e.g.
<cit.>) dσ_i/dE_R = m_A_i/(2 μ_A_i^2 [v^(D)]^2) (σ_SI F_SI,i^2(E_R) + σ_SD F_SD,i^2(E_R)), where σ_SI and σ_SD are, respectively, the spin-independent (SI) and spin-dependent (SD) cross sections at zero momentum transfer, which can be calculated in a concrete dark matter model in terms of its fundamental parameters, while F_SI,i(E_R) and F_SD,i(E_R) are form factors that depend on the nucleus. In our work, we will assume that the scattering cross sections off protons and off neutrons are identical, and we will adopt the form factors reported in <cit.> for SI scattering, in <cit.> for SD scattering off the nuclei relevant for direct detection experiments, and in <cit.> for SD scattering off the nuclei relevant for dark matter capture in the Sun. Therefore, we will treat as free parameters the dark matter mass m_DM and the SI and SD cross sections at zero momentum transfer, σ_SI and σ_SD. Besides, it is common to assume a local dark matter density ρ_loc = 0.3 GeV/cm^3 and a velocity distribution in the galactic rest frame following a Maxwell-Boltzmann distribution. Under these assumptions, the results from experiments can be cast as limits on the SI or SD cross sections as a function of the dark matter mass (or allowed regions, for experiments reporting a positive signal). While the Maxwell-Boltzmann form for the velocity distribution can be justified theoretically for a dark matter population in thermal equilibrium with density distribution ρ(r) ∼ r^-2, significant deviations from this simple structure cannot be precluded, especially at small scales as is the case of the Solar System. In the next section we will present a method to derive limits on the cross section from combining information of various direct detection experiments and/or neutrino telescopes, which does not rely on the choice of the velocity distribution.§ SIGNAL OPTIMIZATION Our goal is to optimize the outcome of an experiment A, N^(A) (where N can be the recoil rate, R, the modulation signal, S, or the capture rate, C), given the upper limits on the outcome of p experiments B_α, N^(B_α) ≤ N^(B_α)_max with α = 1, ..., p, and the lower limits on the outcome of q experiments B_α, N^(B_α) ≥ N^(B_α)_min with α = p+1, ..., p+q, with the requirement that the velocity distribution is normalized to unity. To this end, we use the identity f(v⃗) = ∫_|v⃗_0| ≤ v_max d^3v_0 f(v⃗_0) δ(v⃗ - v⃗_0), which physically can be interpreted as a decomposition of the dark matter velocity distribution into a superposition of streams with fixed velocity v⃗_0 (and velocity distribution f_v⃗_0 = δ(v⃗ - v⃗_0)) and with weight f(v⃗_0). Each stream produces in a given experiment the outcome N_v⃗_0, and the outcome produced by the true velocity distribution f(v⃗) can then be obtained by multiplying the contribution from each stream by its weight, f(v⃗_0), and summing over all the streams conforming the velocity distribution. Namely, N = ∫_|v⃗_0| ≤ v_max d^3v_0 f(v⃗_0) N_v⃗_0. The optimization problem can then be written as: optimize F[f] ≡ ∫ d^3v_0 f(v⃗_0) N^(A)_v⃗_0, subject to ∫ d^3v_0 f(v⃗_0) = 1, and ∫ d^3v_0 f(v⃗_0) N^(B_α)_v⃗_0 ≤ N^(B_α)_max, α = 1, ..., p, and ∫ d^3v_0 f(v⃗_0) N^(B_α)_v⃗_0 ≥ N^(B_α)_min, α = p+1, ..., p+q, and f(v⃗_0) ≥ 0, where F[f] is a functional of the velocity distribution. It is important to note that the objective functional F[f] and all the constraints are linear in the velocity distribution f, therefore the optimization using calculus of variations does not provide a solution.
[This can be checked explicitly by deriving the Euler-Lagrange equations for the functional F[f] including Karush-Kuhn-Tucker multipliers, in order to incorporate the inequality constraints. A posteriori, the failure of the calculus of variations to find the solution to this optimization problem can be attributed to the fact that the optimized solution turns out to be not continuously differentiable, as we will show below.] We will then apply linear programming techniques to find the velocity distribution that optimizes the signal at the experiment A with the constraints listed above. To this end, we first discretize the velocity distribution into a finite sum of n streams with velocities v⃗_i, i = 1, ..., n: f(v⃗) = ∑_i=1^n c_v⃗_i δ(v⃗ - v⃗_i), with c_v⃗_i the weight of the stream f_v⃗_i = δ(v⃗ - v⃗_i) in the velocity distribution. Then, the discretized optimization problem can be written as: optimize F(c_v⃗_1, ..., c_v⃗_n) = ∑_i=1^n c_v⃗_i N^(A)_v⃗_i, subject to ∑_i=1^n c_v⃗_i = 1, and ∑_i=1^n c_v⃗_i N^(B_α)_v⃗_i ≤ N^(B_α)_max, α = 1, ..., p, and ∑_i=1^n c_v⃗_i N^(B_α)_v⃗_i ≥ N^(B_α)_min, α = p+1, ..., p+q, and c_v⃗_i ≥ 0, i = 1, ..., n, where F(c_v⃗_1, ..., c_v⃗_n) is a function of the weights. Finally, after straightforward algebra, the discretized optimization problem can be cast in the standard form of linear programming problems: optimize F(c_v⃗_1, ..., c_v⃗_n) = ∑_i=1^n c_v⃗_i N^(A)_i, subject to ∑_j=1^p+q+n M_α j d_j = b_α, α = 1, ..., p+q+1, and d_i ≥ 0, i = 1, ..., n+p+q. Here F(c_v⃗_1, ..., c_v⃗_n) is identified with the objective function, which depends on the "decision variables", c_v⃗_i, i = 1, ..., n, which correspond in this case to the weights of the streams. Besides, d_i are the components of the (p+q+n)-dimensional vector (c_v⃗_1, ..., c_v⃗_n, s_1, ..., s_p, s'_1, ..., s'_q), which contains, in addition to the decision variables, a set of p "slack variables", s_α, and q "surplus variables", s'_α, which are introduced to cast the inequality constraints Eqs. (<ref>,<ref>) in the form of an equality constraint, as in Eq. (<ref>). In the latter equation, M is a (p+q+1) × (p+q+n) matrix, which in block form explicitly reads

M = ( [ 1 ⋯ 1 | 0 | 0 ];
      [ N_1^(1) ⋯ N_n^(1); ⋮ ⋱ ⋮; N_1^(p) ⋯ N_n^(p) | I_p | 0 ];
      [ N_1^(p+1) ⋯ N_n^(p+1); ⋮ ⋱ ⋮; N_1^(p+q) ⋯ N_n^(p+q) | 0 | -I_q ] ),

where I_p and I_q denote the p × p and q × q identity matrices, and b_α are the components of the (p+q+1)-dimensional vector (1, N_max^(1), ..., N_max^(p), N_min^(p+1), ..., N_min^(p+q)). As is well known, the values of d_i that optimize a linear program lie at the vertices of the feasible region defined by Eqs. (<ref>,<ref>) (see e.g. <cit.>). If the rows of the matrix M are linearly independent, then there are at most p+q+1 variables taking non-zero values (so-called "basic variables"), while the remaining n-1 variables must vanish. One should note that slack and surplus variables can also be basic, therefore the solution does not necessarily contain p+q+1 non-vanishing decision variables. In fact, when r of the constraints are not saturated (namely, when r constraints are "not active"), then there are r corresponding slack or surplus variables which are basic, and hence only p+q+1-r non-vanishing decision variables. We then conclude that the optimized velocity distribution consists of a superposition of p+q+1-r streams, with r the number of inequality constraints that are not saturated, and therefore contains a maximum of p+q+1 streams and a minimum of 1 stream.
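As an illustration, the discretized problem above can be handed directly to an off-the-shelf solver. The following minimal sketch uses scipy.optimize.linprog; the per-stream responses N^(A)_i and N^(B_α)_i are assumed to have been precomputed from the rate, modulation and capture formulas of Section 2, and the function name and interface are our own.

```python
import numpy as np
from scipy.optimize import linprog

def optimize_signal(N_A, N_B_upper, b_upper, N_B_lower=None, b_lower=None,
                    maximize=False):
    """Optimize sum_i c_i * N_A[i] over stream weights c_i >= 0 with
    sum_i c_i = 1, subject to N_B_upper @ c <= b_upper and, optionally,
    N_B_lower @ c >= b_lower.  Returns (optimal value, optimal weights)."""
    N_A = np.asarray(N_A, dtype=float)
    n = N_A.size
    cost = -N_A if maximize else N_A          # linprog always minimizes
    A_ub = [np.atleast_2d(N_B_upper)]
    b_ub = [np.atleast_1d(b_upper)]
    if N_B_lower is not None:                 # N >= b  <=>  -N <= -b
        A_ub.append(-np.atleast_2d(N_B_lower))
        b_ub.append(-np.atleast_1d(b_lower))
    res = linprog(cost, A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
                  A_eq=np.ones((1, n)), b_eq=np.array([1.0]),
                  bounds=(0.0, None), method="highs")
    if not res.success:                       # constraints infeasible
        return (np.inf if not maximize else -np.inf), None
    return (-res.fun if maximize else res.fun), res.x
```

Consistent with the vertex structure discussed above, the returned weight vector typically has at most p+q+1 non-zero entries, i.e. the optimized "halo" is a superposition of a few streams.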
This approach has a number of applications: i. Derive a halo-independent upper limit on the cross section from a set of null results. We consider the null search experiments A and B_α, α = 1, ..., p, with outcomes N^(A) ≤ N^(A)_max, N^(B_α) ≤ N^(B_α)_max. We calculate, for a fixed dark matter mass and interaction cross section, the minimal outcome at the experiment A from varying the velocity distribution, min{N^(A)}(σ, m_DM), subject to the constraints from the set of p null results from the experiments B_α. The set of parameters m_DM and σ is ruled out by the combination of the upper limits from the p experiments B_α and the upper limit from the experiment A if min{N^(A)}(σ, m_DM) ≥ N^(A)_max, from which one obtains a halo-independent upper limit on the scattering cross section as a function of the dark matter mass from combining the outcome of p+1 null results. We have for this case p upper limit constraints, plus the equality constraint from normalization, therefore the optimized velocity distribution will consist of a superposition of streams, with a number that varies between p+1 (when all upper limits are saturated) and 1 (when no upper limit is saturated). ii. Confront a detection claim with a set of null results in a halo independent manner. We consider the experiment A, with outcome N^(A) ≤ N^(A)_max, and the experiments B_α, α = 1, ..., p, with outcome N^(B_α)_min ≤ N^(B_α) ≤ N^(B_α)_max for α = 1, ..., q, and N^(B_α) ≤ N^(B_α)_max for α = q+1, ..., p. Namely, the experiments B_α, α = 1, ..., q, report the detection of a signal, while the experiments A and B_α, α = q+1, ..., p, report upper limits. We now minimize, with respect to the velocity distribution, the outcome of the null search experiment A for a given value of the dark matter mass and cross section, min{N^(A)}(σ, m_DM), subject to the constraints N^(B_α) ≤ N^(B_α)_max, α = 1, ..., p, and N^(B_α) ≥ N^(B_α)_min, α = 1, ..., q. The set of parameters m_DM and σ is incompatible with the upper limits from the experiment A and the p experiments B_α, α = 1, ..., p, and with the detection claim of the q experiments B_α, α = 1, ..., q, if min{N^(A)}(σ, m_DM) ≥ N^(A)_max. We have for this case p upper limit and q lower limit constraints, plus the equality constraint from normalization. On general grounds, the optimized velocity distribution contains p+q+1 streams; however, for each of the q experiments reporting a signal, the slack and the surplus variables corresponding to the upper and the lower bound cannot vanish simultaneously (this would require saturating the lower and the upper bound at the same time), so at least q of the basic variables are slack or surplus variables. Therefore, also for this case the velocity distribution must be a superposition of streams, with a number that varies between p+1 (when all upper limits are saturated) and 1 (when no upper limit is saturated). iii. Assess, in a halo-independent manner, the prospects for detection in a projected experiment given a set of upper limits from current experiments. We consider the experiments B_α, α = 1, ..., p, providing the null results N^(B_α) ≤ N^(B_α)_max, and the projected experiment A, which can claim detection if the outcome of the experiment is larger than N_det^(A). We maximize the outcome of the experiment A for a given value of the dark matter mass and interaction cross section, max{N^(A)}(σ, m_DM), subject to the constraints N^(B_α) ≤ N^(B_α)_max, α = 1, ..., p.
The regions of the parameter space where dark matter will escape detection at the experiment A are defined by the condition: max{N^(A)}(σ, m_DM) ≤ N_det^(A). As for application i, also in this case the optimized velocity distribution will consist of a superposition of streams, with a number that varies between p+1 and 1. Furthermore, the regions of the parameter space that will be ruled out in a halo independent manner, in case of no detection, can be derived along the lines of application i: min{N^(A)}(σ, m_DM) ≥ N_det^(A). In the next section we will illustrate these three applications with concrete examples. § APPLICATIONS §.§ Upper limit on the cross section from combining two null results We illustrate this application of our method by calculating a halo independent upper limit on the scattering cross section from combining the null results from a direct detection experiment and the null results from a neutrino telescope. Concretely, we will use the upper limit on the number of recoil events at PandaX Run 8 <cit.> and 9 <cit.> for the SI interaction, or from PICO-60 <cit.> for the SD case, and the upper limit on the capture rate from IceCube, using 532 days of data <cit.>, and Super-Kamiokande, using 3903 days of data <cit.>, assuming for concreteness annihilations into W^+W^- (τ^+τ^- for m_DM < M_W). We note that the process of capture in the Sun does not depend on the characteristics of the neutrino telescope; therefore, for a given dark matter mass, it suffices to consider only the most stringent bound between the two. In our analysis, we will fix the time-dependent velocity of the Earth to the value giving the smallest recoil rate, such that our constraints can be regarded as conservative. Furthermore, with this prescription the initial problem of calculating the three-dimensional dark matter velocity distribution simplifies to a one-dimensional problem; for our implementation of the linear programming methods, we discretize the one-dimensional velocity space with 775 streams.[We have checked that increasing the resolution of the velocity space does not affect our conclusions.] We show the corresponding halo independent upper limits on the spin independent and spin dependent cross sections in Fig. <ref>. We also show, as a dashed line, the halo independent upper limit on the scattering cross section from considering the null results from neutrino telescopes only, which follows from the fact that neutrino telescopes probe the whole velocity space. More specifically, it corresponds to the requirement min{C}(σ, m_DM) ≥ C_max. Details on how to calculate R_max and C_max are provided in Appendix <ref>. The Figure also shows, for comparison, the upper limits published by the corresponding direct detection experiment or neutrino telescope, assuming the Standard Halo Model (SHM). The limits we obtain are remarkably strong and reach σ_SI^p ≲ 3 × 10^-44 cm^2 for the SI and σ_SD^p ≲ 10^-39 cm^2 for the SD scattering cross sections for m_DM ∼ 1 TeV, assuming a local dark matter density ρ_DM = 0.3 GeV/cm^3. We note that for very large dark matter masses it is not possible to derive a halo independent upper limit on the cross section. The reason is that, in this regime, capture in the Sun is possible only when the velocity in the Solar frame is v ≤ max_r,i{v_max,i(r)} ≲ max_i{2 v_esc(0) √(m_A_i/m_DM)}, where the maximum is taken over all possible distances to the center of the Sun and over all nuclei.
On the other hand, nuclear recoils can be detected in a direct search experiment only when the velocity in the detector frame is v^(D) ≳ min_i{√(E_R/(2 m_A_i))}, which corresponds to a velocity in the Solar frame v ≳ min_i{|v_⊕ - √(E_R/(2 m_A_i))|}; in this case the minimum must be taken over all nuclear species in the detector. Clearly, for sufficiently large dark matter masses, it is always possible to construct a velocity distribution consisting of streams with velocities which are too large to allow capture in the Sun and too small to produce a detectable nuclear recoil. These velocity distributions produce no signal in the direct detection experiment nor in the neutrino telescope and are therefore unconstrained. Concretely, for our concrete example the maximum mass that can be probed with our halo independent approach assuming scattering via the SI interaction only is m_DM ∼ 165 TeV from combining PandaX and IceCube,[Such large dark matter masses are, on the other hand, in tension with the unitarity limit for thermally produced dark matter <cit.> and are contrived theoretically.] and assuming the SD interaction only, m_DM ∼ 4.5 TeV from combining PICO-60 and IceCube. For this specific application it is possible to determine analytically, for a given dark matter mass and cross section, the smallest possible scattering rate at a direct detection experiment compatible with the null results from a neutrino telescope, as well as with the requirement that the velocity distribution is normalized to unity (for details, see Appendix <ref>). For a given dark matter mass, if the cross section is sufficiently small, the predicted capture rate for any stream configuration will be smaller than the upper limit from neutrino telescopes. Therefore, as the upper limit constraint cannot be saturated, there is only one non-vanishing decision variable, hence the optimized velocity distribution consists of just one stream. Clearly, the stream that produces the smallest recoil rate has zero velocity, and accordingly the minimum recoil rate is zero. As the cross section increases, the upper limit constraint is saturated, the corresponding slack variable vanishes, and therefore there are two non-vanishing decision variables and the optimized velocity distribution consists of two streams.[We are grateful to Bradley Kavanagh for discussions about this point.] In this part of the parameter space we find two possible solutions to the minimization problem:

I: f_I = 1/2 δ(v - (v̂ - ϵ)) + 1/2 δ(v - (v̂ + ϵ)), with ϵ → 0 and v̂ defined by C_v̂ = C_max, giving a rate R_I = R_v̂.

II: f_II = (C_max/C_v_1) δ(v - v_1) + (1 - C_max/C_v_1) δ(v - v_max), with v_1 defined by d/dv[ (C_max/C_v) R_v + (1 - C_max/C_v) R_v_max ]|_v=v_1 = 0, giving a rate R_II = (C_max/C_v_1) R_v_1 + (1 - C_max/C_v_1) R_v_max.

The optimized velocity distribution depends on the point in the parameter space and corresponds to the one giving the minimum between R_I and R_II. Finally, for a given dark matter mass, the regions of the parameter space which are incompatible with the null search results from the given direct detection experiment and neutrino telescope are determined from min{R_I, R_II}(σ, m_DM) ≥ R_max.
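Numerically, the minimum of R_I and R_II is straightforward to evaluate once single-stream models R_v and C_v are specified. Below is a minimal sketch of this case distinction; the function names and interfaces are our own, the single-stream models are user-supplied, and C_v is assumed to decrease monotonically with v, as argued in the text.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

def min_rate_given_capture(R_v, C_v, C_max, v_max=777.0):
    """Minimal recoil rate compatible with the capture bound C <= C_max,
    comparing the two candidate solutions I and II given above."""
    if C_v(0.0) <= C_max:
        return 0.0          # bound never active: a stream at rest gives R = 0
    if C_v(v_max) > C_max:
        return np.inf       # even the fastest stream violates the bound
    # v_hat: the speed at which the capture bound is exactly saturated.
    v_hat = brentq(lambda v: C_v(v) - C_max, 0.0, v_max)
    R_I = R_v(v_hat)        # solution I: two merging streams at v_hat
    # Solution II: a stream at v_1 <= v_hat (weight C_max/C_{v_1}) plus a
    # stream at v_max carrying the remaining weight.
    g = lambda v: (C_max / C_v(v)) * R_v(v) + (1.0 - C_max / C_v(v)) * R_v(v_max)
    res = minimize_scalar(g, bounds=(0.0, v_hat), method="bounded")
    return min(R_I, res.fun)
```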
The region of the parameter space which is excluded using null results from neutrino telescopes only (bounded in Fig. <ref> with a dashed line) can also be determined analytically. The velocity distribution that minimizes the capture rate in the Sun, with the only constraint that the velocity distribution is normalized to unity, consists of a single stream with velocity v_0. This velocity can be calculated from ∂C_v/∂v|_v=v_0 = 0 which, due to the fact that the capture rate for streams decreases monotonically with the velocity, is achieved for v_0 = v_max. The excluded region is then defined by min{C}(σ, m_DM) = C_v_max(σ, m_DM) ≥ C_max.§.§ DAMA confronted to null results The DAMA collaboration has reported a non-zero annual modulation of the scintillation light in sodium iodide detectors. The modulation has been consistently observed over 14 annual cycles, with a combined significance of 9.3σ <cit.>. Concretely, the modulation signals, as defined in Eq. (<ref>), measured in the energy bins [2.0, 2.5], [2.5, 3.0] and [3.0, 3.5] keV are, respectively, (1.75 ± 0.37) × 10^-2, (2.51 ± 0.40) × 10^-2 and (2.16 ± 0.40) × 10^-2 day^-1 kg^-1 keV^-1. It has been claimed that the measured annual modulation could be explained by the time-dependent rate of scatterings of dark matter particles with the nuclei in the detector, which results from the changing alignment of the Earth's velocity with respect to the Sun's over the course of the year <cit.>. However, assuming the Standard Halo Model, the values of the dark matter parameters necessary to explain the measured modulation signal are in conflict with various null results, for instance, with the upper limits on the SI cross section from PandaX <cit.>, IceCube <cit.> and Super-Kamiokande <cit.>, and with the upper limits on the SD cross section from PICO-60 <cit.>, IceCube <cit.> and Super-Kamiokande <cit.>, among other experiments (a similar conclusion holds when considering the most general effective interaction compatible with the Galilean symmetry <cit.>). Here, we apply our method to investigate, in a halo independent manner, the compatibility of the modulation signal reported by DAMA with the null results from direct detection experiments and/or from neutrino telescopes; in our analysis we adopt the quenching factors Q_Na = 0.30 and Q_I = 0.09, as used by the DAMA collaboration in <cit.>, and we include the effects of channeling (and dechanneling) as described in <cit.>, adopting the largest values for the channeling fractions. For a fixed dark matter mass and cross section, we discretize the three-dimensional velocity space, expressed in galactic coordinates, into 100000 streams, and we calculate for each of the streams the modulation signal at DAMA, as defined in Eq. (<ref>), in the energy bins [2.0, 2.5], [2.5, 3.0] and [3.0, 3.5] keV, the scattering rate at PandaX (PICO-60) assuming the SI (SD) interaction only, and the capture rate inside the Sun. For the scattering rates at PandaX and PICO-60, we properly take into account the motion of the Earth around the Sun during the period of data taking, discretizing the orbit of the Earth in time intervals of 2 weeks. Finally, we implement the linear programming method, as described in Section <ref>. In Fig. <ref>, upper panels, we show as a white region the values of the SI (left plot) and SD (right plot) cross sections for which the measured values of S^(DAMA)_[2.0,2.5], S^(DAMA)_[2.5,3.0] and S^(DAMA)_[3.0,3.5] are incompatible with the null results from PandaX or from PICO-60, respectively. The regions shown in pink, on the other hand, are compatible with the direct detection experiments, given our set-up.
Some parts of this region, on the other hand, may also be excluded when taking into account other constraints in the analysis. For instance, the DAMA experiment has observed an annual modulation following approximately a cosine function, with a maximum in late May and a minimum in late November. However, our maximized solutions consist of a small number of streams, hence the time dependence of the recoil rate is expected to be more complex than a cosine function (see e.g. <cit.>), with a maximal rate not necessarily expected on June 1st, nor a minimal rate on December 1st, and with an amplitude different from the one given by the definition of the modulation signal in Eq. (<ref>). Therefore, we expect that, if one adds to the set of constraints the requirement of reproducing the measured modulation rate in different time bins (or, alternatively, the requirement of reproducing the different coefficients in the Fourier series), then the excluded region will become larger. In this paper, and in order to illustrate our method, we just apply the simple test of reproducing the modulation signal, as defined in Eq. (<ref>), in the three energy bins under consideration, deferring more detailed analyses to future work. Finally, we also show in the plots, for comparison, the upper bounds on the cross section reported by the corresponding direct detection experiments, as well as the regions favored by DAMA for the SI and SD interactions, taken respectively from <cit.> and from <cit.>, all assuming the Standard Halo Model. It is notable that there exist velocity distributions for which the DAMA signal is compatible with the null results from PandaX and PICO-60.[This conclusion, on the other hand, heavily relies on the inclusion of channeling effects in the calculation of the event rate, as the streams could be aligned with the axes of the DAMA crystals, hence enhancing the signal rate at this experiment. When these effects are neglected, we found no region in the parameter space compatible with the PandaX and PICO-60 constraints.] Following the general discussion in Section <ref>, the velocity distribution that minimizes the rate at a direct detection experiment, subject to the constraint of reproducing the modulation signal in the three energy bins as reported by DAMA, and subject to the normalization constraint, consists of a superposition of at most four streams. In Table <ref> we show the characteristics of the streams conforming the velocity distribution that minimizes the rates at PandaX and PICO-60, while reproducing the DAMA modulation signal, taking as benchmark values for the dark matter parameters m_DM = 10 GeV and σ_SI = 10^-37 cm^2 for the former, and m_DM = 3000 GeV and σ_SD = 10^-34 cm^2 for the latter. The velocity distribution that minimizes the rate at PandaX consists of three streams with maximal velocities during the period of data taking, relative to the detector frame, equal to 257.1, 264.3 and 255.1 km/s. Streams #1 and #3 have maximal velocities smaller than the minimum velocity required to induce observable nuclear recoils for this DM mass, v^(PandaX)_min = 259.6 km/s; therefore, these streams produce no signal at PandaX. However, stream #2 is above the threshold and can produce observable recoils. In this case, we obtain 0.0036 expected events, which is below the upper limit reported by PandaX.
On the other hand, the three streams have velocities, both in June 1st and in December 1st, which are large enough to produce recoils in the DAMA experiment with energy between 3.0 and 3.5 keV, as the velocity threshold in this bin is v^(DAMA[3.0,3.5])_ min= 143.9 km/ s. Therefore, these streams can produce a modulation signal, as defined in Eq. (<ref>), in the three energy bins, [2.0, 2.5], [2.5, 3.0] and [3.0, 3.5] keV; the weights and directions of the streams are such that the modulation signal in each of these three bins is reproduced. Similarly, the velocity distribution that minimizes the rate at PICO-60 in our benchmark point consists of four streams with maximal velocities in the detector frame equal to 112.4, 117.5, 107.3 and 94.1 km/s, which lie below the velocity threshold of PICO-60, v^(PICO-60)_ min=120.0km/ s, and hence do not produce observable recoils in this experiment. However, these streams have velocities in the DAMA detector frame above its threshold in the highest energetic bin, v^(DAMA[3.0,3.5])_ min= 20.3 km/ s, both in June and in December, and induce an annual modulation signal as observed by the experiment. We note that in both cases the optimized solutions contain streams with velocities very close to the threshold of the experiment. Namely, for the SI interaction, stream #2 has a maximum velocity during the period of data taking slightly above the threshold and produces a non-zero number of events at PandaX, although below the measured upper limit, while for the SD interaction, stream #2 has a maximum velocity slightly below the threshold and produces no event. Therefore, even small perturbations of these configurations may induce a number of recoil events in excess of the upper limit reported by the experiments. More concretely if these two streams are smeared by a Gaussian distribution with width Δ v=1.7 km/ s and Δ v=1.6 km/ s, respectively, these velocity distributions would be excluded.As apparent from the figure, there exists a minimum value of the cross-section necessary to produce the DAMA modulation signal while circumventing the null results from PandaX and PICO-60. Our analysis also shows that there exist solutions for arbitrarily large cross sections. To elucidate this point, let us consider the optimized velocity distribution for our benchmark dark matter parameters m_ DM=10GeV and σ_ SI=10^-37cm^2, assuming the SI interaction only, and which consists of three streams. Clearly, an increase in the cross section can be compensated with a decrease of the weights of the three streams, such that the rates at PandaX and DAMA remain the same. In doing this, however, the velocity distribution is no longer normalized to unity. On the other hand, this can be amended by introducing a fourth stream, with velocity below the thresholds of both experiments, and with a weight chosen to fulfill the normalization condition. Therefore, it is always possible to construct a velocity distribution that compensates the increase in the cross section and which produces the same number of recoil events. One should also note that as the cross section increases, a smaller and smaller spread in the streams is required in order not to produce a number of recoil events in PandaX in excess with observations, and the solution becomes accordingly more and more tuned. The bottom panels of Fig. 
<ref> show, on the other hand, the values of the SI and SD cross-sections where the DAMA claim is compatible with the null results from IceCube and Super-Kamiokande, assuming annihilations into W^+W^-. As before, we find velocity distributions for which the modulation signal reported by DAMA is compatible with the null results from the neutrino telescopes Super-Kamiokande and IceCube. The regions of the parameter space where such velocity distributions exist are shown in pink. Both for the SI and SD interactions, there exists an allowed region at high dark matter masses, concretely m_ DM≳ 4TeV for the SI interaction and m_ DM≳ 250GeV for the SD interaction. Moreover, assuming only the SI interaction, we find an allowed region for masses∼ 20-40 GeV and cross sections 10^-42-10^-41cm^2.In this case the optimized velocity distribution also consists of a maximum of four streams; the characteristics of these streams are shown in table <ref>, taking the same benchmark dark matter parameters as in table <ref>: m_ DM=10GeV and σ_ SI=10^-37cm^2 for the SI interaction, m_ DM=3000GeV and σ_ SD=10^-34cm^2for the SD interaction. The optimized velocity distribution, assuming only the SI interaction, consists of two streams with fairly high velocities, 728.0 and 734.2 km/s in the rest frame of the Sun, and one stream with lower velocity, 362.8 km/s, but with a very small weight. Dark matter particles in these three streams can be captured inside the Sun at a sufficiently large rate to achieve equilibration, and generate in annihilations a neutrino flux in excess with observations. For the benchmark point for the SD interaction, on the other hand, the dark matter particles in the three streams have a large kinetic energy, so that after scattering with one nucleus in the solar interior and transferring part of its energy, the DM particle still has a velocity which is larger than the escape velocity. Capture in this case is very inefficient and as a result these streams are untestable using neutrino telescopes.For very large values of the cross section we find no allowed solutions, contrary to the behavior found when comparing DAMA with the null results from direct detection experiments (the excluded regions in the bottom panels of Fig. <ref> lie, however, outside of the Figure). Following a similar rationale as for the combination of DAMA with other direct detection experiments, an increase in the cross section can be compensated with a decrease in the weights of the streams reproducing the DAMA signal, whereas the normalization condition can be fulfilled by postulating an additional stream with velocity below the threshold of the experiment. Now, the dark matter particles in this low-velocity stream can be efficiently captured inside the Sun, due to the large cross section, and therefore produce a neutrino flux. As a result, reproducing the DAMA modulation signal with the constraints on the capture rate from neutrino telescopes, translates into a lower and an upper bound on the scattering cross section; the former, from requiring a large enough modulation signal, and the latter, from requiring a small enough capture rate. 
One should note that the velocity distribution that minimizes the recoil rate at a direct detection experiment and the one that minimizes the capture rate in the Sun are qualitatively different: in the former case, the velocity distribution should not contain streams with very high velocities (or they should have a small weight), in order to prevent a too large scattering rate at PandaX or PICO-60, and in the latter, the velocity distribution should not contain streams with very low velocities (or they should have a small weight), in order to prevent efficient capture inside the Sun. As the requirements for the velocity distributions are opposite, it is interesting to investigate whether there exist velocity distributions for which the DAMA modulation signal can be made compatible with the null results from other direct detection experiments and with the null results from neutrino telescopes. We have applied our method to calculate the minimum number of recoil events at PandaX and PICO-60, with the constraints of correctly reproducing the DAMA modulation signal in the three relevant energy bins, the upper limit on the capture rate from neutrino telescopes, and the normalization condition (in this case, the optimized velocity distribution consists of a superposition of at most five streams). The result is shown in Fig. <ref>. The plot shows that no dark matter particle interacting with nucleons via the SI interaction only can produce the observed signal at DAMA, while being compatible with the null searches from PandaX and from current neutrino telescopes, if the DM mass is in the range 5 GeV - 10 TeV, as the number of expected events at PandaX is in this mass range larger than ∼ 3000. Only for very large dark matter masses, where capture in the Sun becomes very ineffective, is it possible to find velocity distributions where the DAMA signal is compatible with the null searches, as neutrino telescopes do not constrain this region of the parameter space. However, as discussed in Subsection <ref>, these regions are in tension with the unitarity limit for thermally produced dark matter and are contrived theoretically. On the other hand, we find that if the DM scatters via the SD interaction only, there are velocity distributions where the experiments considered are compatible, if the DM mass is larger than ∼ 4.5 TeV. However, and as discussed above, the velocity distributions for which DAMA is compatible with the null results from PandaX or PICO-60 require a very small velocity dispersion in the dark matter streams and can be regarded as fine tuned. §.§ Prospects for LZ from current null results Finally, we apply our halo independent method to assess the prospects to observe a dark matter signal in a future experiment, in view of null results from current experiments. Concretely, we will identify the regions of the parameter space where a signal is expected at the projected LUX-ZEPLIN (LZ) experiment <cit.>, regardless of the velocity distribution, the regions which will remain untested, and the regions which may produce signals, depending on the velocity distribution. To identify all these regions we will use the null search results from SuperCDMS[The halo independent comparison is only meaningful among experiments with different characteristics, e.g., different energy thresholds or different energy resolutions.
Therefore, to illustrate our method and assess the reach of LZ, which is based on a xenon target, we will employ the null results from SuperCDMS, which is based on a germanium target and has very different characteristics from LZ.] <cit.> and/or from the neutrino telescopes IceCube <cit.> and Super-Kamiokande <cit.>, when analyzing scattering induced by the SI interaction only, and from PICO-60 <cit.> and/or from IceCube and Super-Kamiokande, when considering the SD interaction only. As in the rest of this work, we will assume equilibration between dark matter capture and annihilation in the solar interior, and that dark matter annihilates only into W^+W^- (or τ^+τ^- for m_DM < M_W). Similarly to the numerical implementation of the method presented in Subsection <ref>, we will fix the time-dependent velocity of the Earth to the value giving the smallest (largest) recoil rate over the year, in order to conservatively define the regions which will be tested (remain untestable) at LZ, and in order to reduce the three-dimensional problem of calculating the optimized velocity distribution to a simpler one-dimensional problem; in our calculation we discretized the one-dimensional velocity distribution with 775 streams. In Fig. <ref> we show the values of the SI cross section (left panels) or SD cross section (right panels) where the maximum number of events expected at LZ is smaller than 1 (white regions), as well as the regions where the minimum number of events is larger than 1 (red regions), for all possible choices of the velocity distribution, in view of the current null search results from the direct detection experiments SuperCDMS or PICO-60 (upper panels), from neutrino telescopes (middle panels), or from considering both search strategies simultaneously (lower panels). The regions in white will remain untested, while the regions in red will be fully tested at LZ, regardless of the true velocity distribution. The regions in pink, on the other hand, produce a number of signal events larger than 1, but only for certain choices of the velocity distribution, and hence can only be partially tested. Finally, the regions in gray in the middle and bottom plots show the parameter space excluded by current experiments following our analysis in Section <ref>. We also show in the Figure, for comparison, the expected reach of the LZ experiment assuming the SHM. Our analysis shows that LZ will be sensitive to σ_SI ≳ 3 × 10^-45 cm^2 and to σ_SD ≳ 3 × 10^-40 cm^2 for m_DM = 1 TeV, regardless of the velocity distribution of dark matter particles in the Solar System, for a local density ρ_loc = 0.3 GeV/cm^3. Furthermore, it shows that cross sections below σ_SI ≲ 2 × 10^-47 cm^2 and σ_SD ≲ 2 × 10^-40 cm^2 will escape detection, even for the most favorable velocity distribution, if the dark matter mass is in the range 5-10^4 GeV. It is possible to understand analytically the contours shown in the Figure. The region where max{R^(LZ)}(σ, m_DM) ≤ 1 (shown in white in the Figure) corresponds to very low cross sections, and hence none of the current upper limit constraints is saturated. As a result, the optimized velocity distribution giving the maximal recoil rate at LZ consists of a single dark matter stream f(v) = δ(v - v_0), where v_0 is determined from ∂R_v/∂v|_v=v_0 = 0, if v_0 < v_max, and v_0 = v_max otherwise, giving max{R^(LZ)} = R^(LZ)_v_0. The white region is then defined by the condition R^(LZ)_v_0(σ, m_DM) ≤ 1.
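This single-stream maximization is a one-dimensional search over [0, v_max] that can be carried out numerically when ∂R_v/∂v = 0 has no closed-form solution. A minimal sketch (function names and interface ours; R_v is a user-supplied single-stream rate model):

```python
from scipy.optimize import minimize_scalar

V_MAX = 777.0   # km/s, maximal speed in the solar frame (see Section 2)

def max_single_stream_rate(R_v, v_max=V_MAX):
    """When no upper-limit constraint is saturated, the maximizing velocity
    distribution is a single stream f(v) = delta(v - v0), with v0 either an
    interior stationary point of R_v or the boundary v_max."""
    res = minimize_scalar(lambda v: -R_v(v), bounds=(0.0, v_max),
                          method="bounded")
    v0 = res.x if -res.fun >= R_v(v_max) else v_max
    return R_v(v0), v0
```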
The red region, where min{R^(LZ)} ≥ 1, does not exist when imposing in the minimization only the constraints from SuperCDMS: as both SuperCDMS and LZ have a non-zero velocity threshold, it is clear that the minimal rate at LZ compatible with the null search results from SuperCDMS will be equal to zero, corresponding to a single stream at zero velocity. In contrast, neutrino telescopes are sensitive to streams with arbitrarily small velocities, and therefore one expects an impact of the upper limit on the capture rate from the neutrino telescopes on the minimal rate expected at LZ. The minimal rate occurs for the velocity configurations where the upper limit on the capture rate is saturated, and therefore corresponds to two streams, which can be calculated following the lines of Subsection <ref>, under the assumption that LZ will observe less than 1 event. Finally, the gray region in the middle plots is excluded in a halo independent manner by the null searches at neutrino telescopes only, while in the bottom plots, by the combination of the null searches at neutrino telescopes and at PandaX (left panel) or PICO-60 (right panel), as discussed in Subsection <ref>.§ CONCLUSIONS AND OUTLOOK The interpretation of any experiment probing the dark matter distribution inside the Solar System (either through the rate of nuclear recoils, the annual modulation signal, or the neutrino flux from the Sun produced by dark matter particles captured in the solar interior via scatterings) is subject to our ignorance of the local dark matter density and velocity distribution. It is then important to develop methods to extract from the experimental data information about the dark matter properties, without relying on assumptions about the unknown astrophysics. In this paper we have developed a new method to calculate the minimum/maximum number of signal events in an experiment probing the dark matter distribution inside the Solar System, in view of a number of constraints from direct detection experiments and/or neutrino telescopes. The method is based on a decomposition of the velocity distribution into a linear combination of an arbitrarily large number of dark matter streams. Then, using the fact that the rate of signal events in a direct detection experiment or in a neutrino telescope is linear in the velocity distribution, we have applied methods of linear programming to optimize the rate in one experiment given a number of constraints from other experiments, imposing that the velocity distribution must be normalized to unity. For p upper limit constraints, we show that the optimized velocity distribution is composed of p+1-r streams, where r is the number of upper limit constraints which are not saturated. The velocities of the streams can be calculated numerically using linear programming methods, although in some simple cases the optimized values can also be derived analytically or semi-analytically. We have illustrated our method with three concrete applications. First, we have derived a halo-independent upper limit on the spin-independent and spin-dependent cross sections from combining the null search results from the neutrino telescopes IceCube and Super-Kamiokande, assuming annihilations into W^+W^- (τ^+τ^- for m_DM < M_W), and from the direct detection experiments PandaX and PICO-60, respectively. The limits we obtain are remarkably strong and reach, for m_DM = 1 TeV, σ_SI^p ≲ 3 × 10^-44 cm^2 for the SI and σ_SD^p ≲ 10^-39 cm^2 for the SD scattering cross section, assuming ρ_loc = 0.3 GeV/cm^3.
Second, we have confronted the dark matter interpretation of the DAMA annual modulation signal, assuming spin-independent scattering only or spin-dependent scattering only, with the null search results from neutrino telescopes and from PandaX and PICO-60, respectively. We have found velocity distributions where the modulation signal reported by DAMA in the [2.0, 2.5], [2.5, 3.0] and [3.0, 3.5] keV energy bins is compatible with the null search experiments, provided m_ DM≳ 165TeV for the spin-independent interaction and m_ DM≳ 4.5 TeV for the spin-dependent interaction. These solutions only arise when channeling effects in the NaI crystal are included in the analysis of the experimental results. Further tests of these solutions, such as the requirement of reproducing not only the difference in rates between June 1st and December 1st, but also the time dependence of the modulation signal, may rule out some of them. Moreover, these solutions are very fine tuned, and require the streams to be oriented in very concrete directions and with very little dispersion. Even small deviations from these configurations produce a number of events in PandaX or PICO-60 in excess of observations.

Finally, we have assessed the prospects to observe dark matter induced recoils at the projected LZ experiment in view of the current null search results from IceCube, Super-Kamiokande, SuperCDMS (for the spin-independent interaction) and PICO-60 (for the spin-dependent interaction). We find that LZ will observe, assuming m_ DM=1TeV, dark matter induced recoils if σ_ SI≳ 3× 10^-45cm^2 or σ_ SD≳ 3× 10^-40cm^2, regardless of the velocity distribution of dark matter particles in the Solar System (assuming ρ_ loc=0.3GeV/ cm^3). On the other hand, values for the cross sections σ_ SI≲ 2× 10^-47cm^2 and σ_ SD≲ 2× 10^-40cm^2 will escape detection, even for the most favorable velocity distribution, if the dark matter mass is in the range 5-10^4GeV.

This method can be extended to include in the analysis other dark matter interactions, or can be generalized to account for more realistic velocity configurations, e.g. including a smooth halo component in addition to the dark matter streams. The results of these analyses will be presented elsewhere <cit.>.

§ ACKNOWLEDGMENTS
This work has been partially supported by the DFG cluster of excellence EXC 153 "Origin and Structure of the Universe" and by the Collaborative Research Center SFB1258. We are grateful to Riccardo Catena, Paolo Gondolo, Bradley Kavanagh and Sebastian Wild for useful discussions.

§ NOTE ADDED
After the completion of this paper, we learned about the work <cit.>, where a new halo-independent method is developed and used to estimate the unmodulated DAMA signal, based on the profiling of the likelihood function over velocity distributions.

§ ANALYTICAL DERIVATION OF THE OPTIMIZED VELOCITY DISTRIBUTION
In this Appendix we present an alternative method to calculate the optimized scattering rate and velocity distribution, which can be solved analytically when the number of constraints is small. Concretely, we illustrate the method by calculating the velocity distribution that minimizes the scattering rate at a direct detection experiment, given an upper limit on the capture rate from a neutrino telescope, and given the normalization constraint on the velocity distribution. To this end, we first fix the time-dependent velocity of the Earth to the value giving the smallest recoil rate.
This prescription also simplifies the initial problem of calculating a three-dimensional dark matter velocity distribution, as the optimized solution depends only on the modulus of the velocity. The minimization problem can be formulated as:

minimize    R({a_v_i}) = ∑_i=1^n a_v_i^2 R_v_i ,
subject to  ∑_i=1^n a_v_i^2 = 1 ,
and         C({a_v_i}) = ∑_i=1^n a_v_i^2 C_v_i ≤ C_max ,

where we have cast the decision variables as a_v_i^2 to ensure that they are non-negative. To minimize the objective function we introduce the Lagrangian

L({a_v_i}, {v_i}, s, λ_1, λ_2) = ∑_i=1^n a_v_i^2 R_v_i - λ_1 (∑_i=1^n a_v_i^2 - 1) - λ_2 (∑_i=1^n a_v_i^2 C_v_i + s^2 - C_max) ,

with λ_1 and λ_2 Lagrange multipliers, and where s^2 is a (non-negative) slack variable, introduced to recast the upper inequality constraint Eq. (<ref>) into an equality constraint. The minimization conditions are:

∂L/∂a_v_p = 2 a_v_p [ R_v_p - λ_1 - λ_2 C_v_p ] = 0 ,    p = 1, ..., n ,
∂L/∂v_p = a_v_p^2 [ ∂R_v_p/∂v_p - λ_2 ∂C_v_p/∂v_p ] = 0 ,    p = 1, ..., n ,
∂L/∂s = 2 λ_2 s = 0 ,
∂L/∂λ_1 = ∑_i=1^n a_v_i^2 - 1 = 0 ,
∂L/∂λ_2 = ∑_i=1^n a_v_i^2 C_v_i + s^2 - C_max = 0 .

Eq. (<ref>) is satisfied either when λ_2 = 0 or when s = 0. Note that the latter case corresponds, following Eq. (<ref>), to saturating the inequality constraint, C = C_max, while the former corresponds to a strict inequality C < C_max. Let us discuss each case separately.

§.§ C < C_max (or λ_2 = 0)
In this case, Eq. (<ref>) reads

a_v_p [ R_v_p - λ_1 ] = 0 ,    p = 1, ..., n ,

which implies that only one among the n decision variables, which we label a_v_1, can be non-vanishing. Furthermore, from Eq. (<ref>) one obtains that the velocity v_1 is determined by the condition

∂R_v/∂v |_v=v_1 = 0 .

The velocity v which satisfies the previous equation corresponds to a maximum of R_v. However, extrema of the function R_v may also occur at the boundaries of the region where v is defined. Indeed, it is clear that a minimum arises for any stream velocity below the threshold required to induce an observable recoil, in particular for v = 0. We then conclude that when the cross section is such that C < C_max for all streams, one possible choice of the optimized velocity distribution corresponds to a single stream with zero velocity, giving a scattering rate equal to zero:

f_opt(v) = δ(v) ,    R_min = 0 .

§.§ C = C_max (or s = 0)
In this case, the upper limit inequality Eq. (<ref>) is saturated. Then, Eq. (<ref>) reads

a_v_p [ R_v_p - λ_1 - λ_2 C_v_p ] = 0 ,    p = 1, ..., n ,

which implies that two decision variables, a_v_1 and a_v_2, are non-vanishing, while the remaining n-2 decision variables vanish. The Lagrange multipliers are easily obtained from this equation, the result being:

λ_1 = (C_v_1 R_v_2 - C_v_2 R_v_1)/(C_v_1 - C_v_2) ,    λ_2 = (R_v_1 - R_v_2)/(C_v_1 - C_v_2) ,

whereas the weights of the corresponding two streams can be found from Eqs. (<ref>) and (<ref>):

a_v_1^2 = (C_max - C_v_2)/(C_v_1 - C_v_2) ,    a_v_2^2 = (C_v_1 - C_max)/(C_v_1 - C_v_2) .

Since C(v) is a monotonically decreasing function, it follows that a solution exists only when 0 ≤ v_1 ≤ v̂ and v̂ ≤ v_2 ≤ v_max, where v̂ is defined by the condition C(v̂) = C_max. Finally, one can determine the velocities of the two streams using Eq. (<ref>):

∂R_v_1/∂v_1 = ((R_v_1 - R_v_2)/(C_v_1 - C_v_2)) ∂C_v_1/∂v_1 ,
∂R_v_2/∂v_2 = ((R_v_1 - R_v_2)/(C_v_1 - C_v_2)) ∂C_v_2/∂v_2 .

These two equations are simultaneously fulfilled if v_1 = v̅ - ϵ and v_2 = v̅ + ϵ with ϵ → 0, which implies, following Eq. (<ref>), that v̅ = v̂. Let us denote this solution as I.
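As a quick numerical sanity check of the two-stream weights derived above, the following sketch (with arbitrary illustrative values for C_v_1, C_v_2 and C_max, chosen so that C_v_1 > C_max > C_v_2) verifies that the weights are normalized and saturate the constraint:

def two_stream_weights(C_v1, C_v2, C_max):
    # a_{v_1}^2 = (C_max - C_{v_2}) / (C_{v_1} - C_{v_2})
    # a_{v_2}^2 = (C_{v_1} - C_max) / (C_{v_1} - C_{v_2})
    a1_sq = (C_max - C_v2) / (C_v1 - C_v2)
    a2_sq = (C_v1 - C_max) / (C_v1 - C_v2)
    return a1_sq, a2_sq

C_v1, C_v2, C_max = 5.0, 1.0, 2.0        # illustrative values only
a1_sq, a2_sq = two_stream_weights(C_v1, C_v2, C_max)
assert abs(a1_sq + a2_sq - 1.0) < 1e-12                   # normalization constraint
assert abs(a1_sq * C_v1 + a2_sq * C_v2 - C_max) < 1e-12   # capture constraint saturated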
On the other hand, and as already mentioned for the case C < C_max, the extrema may not lie in the interior of the domain where the functions are defined, in this case 0 ≤ v_1 ≤ v̂ and v̂ ≤ v_2 ≤ v_max, but may also lie at the boundary. We then find four more possible solutions, which we denote II, III, IV and V:

solution II:  v_2 = v_max, with v_1 defined by d/dv[ (C_max/C_v) R_v + (1 - C_max/C_v) R_v_max ]|_v=v_1 = 0 ,
solution III:  v_1 = 0, with v_2 defined by d/dv[ ((C_0 - C_max)/(C_0 - C_v)) R_v ]|_v=v_2 = 0 ,
solution IV:  v_1 = v̂ ,
solution V:  v_2 = v̂ .

We note that the function ((C_0 - C_max)/(C_0 - C_v)) R_v entering in Eq. (<ref>) for solution III is the product of a monotonically decreasing function and a function that contains a maximum. Therefore, the minimum can only exist at the boundaries, either at v_2 = v̂ or at v_2 = v_max. To summarize, the minimum of the scattering rate occurs for one of the six following velocity distributions:

I. f_I = (1/2) δ(v-(v̂-ϵ)) + (1/2) δ(v-(v̂+ϵ)), with ϵ → 0 and v̂ defined by C_v̂ = C_max, giving a rate R_I = R_v̂ ,
II. f_II = (C_max/C_v_1) δ(v-v_1) + (1 - C_max/C_v_1) δ(v-v_max), with v_1 defined by d/dv[ (C_max/C_v) R_v + (1 - C_max/C_v) R_v_max ]|_v=v_1 = 0, giving a rate R_II = (C_max/C_v_1) R_v_1 + (1 - C_max/C_v_1) R_v_max ,
IIIa. f_IIIa = δ(v-v̂), with v̂ defined by C_v̂ = C_max, giving a rate R_IIIa = R_v̂ ,
IIIb. f_IIIb = (C_max/C_0) δ(v-0) + (1 - C_max/C_0) δ(v-v_max), giving a rate R_IIIb = (1 - C_max/C_0) R_v_max ,
IV. f_IV = δ(v-v̂), with v̂ defined by C_v̂ = C_max, giving a rate R_IV = R_v̂ ,
V. f_V = δ(v-v̂), with v̂ defined by C_v̂ = C_max, giving a rate R_V = R_v̂ .

Namely,

R_min = min{R_I, R_II, R_IIIa, R_IIIb, R_IV, R_V} .

It is clear that solutions I, IIIa, IV and V are identical. Furthermore, taking into account that the capture rate for streams decreases monotonically with velocity, C_v_1 ≤ C_0, and that the scattering rate vanishes for zero velocity, R_0 = 0, one obtains

R_II = min_v{ (C_max/C_v) R_v + (1 - C_max/C_v) R_v_max } ≤ (1 - C_max/C_0) R_v_max = R_IIIb .

Hence, solutions IIIa, IIIb, IV and V turn out to be included in solutions I and II, and the minimal rate can simply be calculated from

R_min = min{R_I, R_II} ,

with the optimized velocity distribution being the one corresponding to the minimal rate.

§ DARK MATTER SEARCH EXPERIMENTS
In this Appendix, we provide details about the characteristics of the experiments which are relevant to reproduce our results.

DAMA. The DAMA experiment <cit.> searches for dark matter via the distinctive feature of an annual modulation of the event rate. The energy resolution is depicted in figure 20b of <cit.> and can be parametrized as follows:

σ(E_ee) = (0.448 keVee)·√(E_ee/keVee) + 0.0091·E_ee ,

where E_ee denotes the recoil energy in keVee (electron equivalent energy). Besides, we take for the efficiency in the energy bin [E_-, E_+], ϵ_i(E_R) = Φ(Q_i E_R, E_-, E_+), where Q_i is the quenching factor for the isotope i, and Φ(Q_i E_R, E_-, E_+) is the probability that an event with a nuclear recoil energy E_R, and hence with a quenched energy of Q_i E_R, is detected in the energy bin [E_-, E_+]. Following <cit.>, we assume this probability to be Gaussian. Finally, we adopt as quenching factors Q_Na=0.30 and Q_I=0.09, as used by the DAMA collaboration <cit.>. In our analysis we also take into account the channeling effect, which was studied in detail in <cit.>.

PandaX. We calculate the limits from PandaX Run 8 <cit.> and Run 9 <cit.> by incorporating the detection efficiency from figure 2 of <cit.> and assuming an energy threshold of E_th=1.1 keVnr.
With a combined exposure of 3.3× 10^4 kg×days, PandaX observed three events below the median of the NR calibration band, which leads to N_max=6.7. Furthermore, we multiply the event rate by an additional factor of 0.5 to account for the fact that only half of the nuclear recoil band is used.

PICO-60. To obtain limits from the PICO-60 experiment, we follow <cit.> and incorporate the bubble efficiency, which is given by the dashed lines in figure 4 of <cit.>. Furthermore, we use an exposure of 1167 kg×days as reported in <cit.>. Since PICO-60 observed no signal events, we conservatively assume a 90% C.L. upper limit on the number of recoils of N_max=2.3.

SuperCDMS. The SuperCDMS experiment <cit.> recorded data between October 2012 and June 2013, resulting in a total exposure of 577 kg×days. The energy-dependent efficiency is given in figure 1 of <cit.> and the dark matter search window is defined between 1.6 keVnr and 10 keVnr. After unblinding, SuperCDMS observed 11 candidate events, which leads to an upper limit of N_max=16.6 at 90% confidence level.

LUX-ZEPLIN. LUX-ZEPLIN (LZ) <cit.> is the planned successor of LUX <cit.> and ZEPLIN <cit.>. LZ further refines their successful detection technique and is expected to increase the fiducial mass from 145 kg to 5.6 tonnes. After running for 1000 days, this will result in an unprecedented exposure of 5600 tonnes×days. Due to the detector technique being similar to that of LUX and PandaX, we adopt the same efficiency as in LUX, taken from figure 2 of <cit.>, and a nuclear recoil acceptance of 0.5.

Neutrino telescopes. In order to calculate the capture rate of dark matter particles inside the Sun, we use the density profile from the solar model AGSS09 <cit.>. For the SI interactions, we include the 29 most abundant elements inside the Sun and assume the Helm form factor <cit.>. We take into account the scattering off hydrogen and ^14 N when calculating the capture rate induced by SD interactions, and we use the SD form factors provided in <cit.>. For this study, we adopt the latest results of IceCube <cit.> and Super-Kamiokande <cit.>. For Super-Kamiokande, we use <cit.> to convert limits on the neutrino flux induced by dark matter annihilations into limits on the capture rate.
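As an illustration of how the DAMA response described above enters our rate calculation, the following sketch evaluates the Gaussian probability Φ that a quenched recoil falls in a given keVee bin, using the resolution parametrization and quenching factors quoted above (the example recoil energy is arbitrary and scipy is assumed to be available):

import numpy as np
from scipy.stats import norm

def sigma_dama(E_ee):
    """Energy resolution in keVee: sigma = 0.448*sqrt(E_ee) + 0.0091*E_ee."""
    return 0.448 * np.sqrt(E_ee) + 0.0091 * E_ee

def phi(E_R, Q, E_lo, E_hi):
    """Gaussian probability that a recoil of energy E_R (keVnr), quenched by
    factor Q to Q*E_R (keVee), is detected in the bin [E_lo, E_hi] (keVee)."""
    E_ee = Q * E_R
    s = sigma_dama(E_ee)
    return norm.cdf(E_hi, loc=E_ee, scale=s) - norm.cdf(E_lo, loc=E_ee, scale=s)

# e.g. a 10 keVnr sodium recoil (Q_Na = 0.30) in the [2.0, 2.5] keVee bin:
print(phi(10.0, 0.30, 2.0, 2.5))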
Deep Architectures for Modulation Recognition
Nathan E West, Timothy J. O'Shea
=========================================================================

We survey the latest advances in machine learning with deep neural networks by applying them to the task of radio modulation recognition. Results show that radio modulation recognition is not limited by network depth and further work should focus on improving learned synchronization and equalization. Advances in these areas will likely come from novel architectures designed for these tasks or through novel training methods.

§ INTRODUCTION
Deep neural networks have been pushing recent performance boundaries for a variety of machine learning tasks in fields such as computer vision, natural language processing, and speaker recognition. Recently, researchers in the wireless communications field have started to apply deep neural networks to cognitive radio tasks with some success <cit.>. In particular, it has been shown that relatively simple convolutional neural networks outperform algorithms built on decades of expert feature searches for radio modulation recognition <cit.>. This paper provides an introduction to deep neural networks for the cognitive radio task of modulation recognition, compares several state of the art methods from other domains, and experiments with learning techniques.

Deep neural networks are large function approximators composed of a series of layers, where each layer represents a transform from input to output activations based on a parametric transfer function with some set of learned weights. Each layer is typically a known linear function with adjustable parameters followed by a non-linear activation function, so that the resulting function composition can be highly non-linear <cit.>.

Function parameters in deep neural networks are typically trained with a gradient descent optimizer from some loss function computed on the output of the network. For a multi-class classification task such as modulation recognition the objective function is often categorical cross-entropy (eq. <ref>). Categorical cross-entropy is a measure of the difference between two probability distributions. For deep learning classification tasks the probability distribution is usually a softmax (eq. <ref>) of the output of the classifier network, which is then converted to a one-hot encoding for classification purposes <cit.>. The error is calculated in what is known as the forward pass, and weights are adjusted using the chain rule to find the error contribution of each parameter in what is known as the backward pass. This kind of network output layer, optimization and loss function has been used very successfully for multi-class vision tasks such as object recognition on the ImageNet dataset <cit.>.

H(p,q) = -∑_x p(x) log q(x)

σ(z)_j = e^z_j / ∑_k=1^K e^z_k ,    for j = 1, ..., K
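As a minimal numerical illustration of these two equations (a plain numpy sketch, not the network code used in our experiments):

import numpy as np

def softmax(z):
    """Softmax over the last axis; subtracting the max is a standard
    numerical-stability trick and does not change the result."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def categorical_cross_entropy(p, q, eps=1e-12):
    """H(p, q) = -sum_x p(x) log q(x); p is the one-hot label distribution."""
    return -np.sum(p * np.log(q + eps), axis=-1)

logits = np.array([2.0, 0.5, -1.0])   # e.g. scores for 3 modulation classes
label = np.array([1.0, 0.0, 0.0])     # one-hot ground truth
print(categorical_cross_entropy(label, softmax(logits)))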
Applying deep neural networks to solve well-known problem types, such as classification, is a matter of
* selecting a network architecture and hyper-parameters,
* training the network to select weights which minimize the loss, and
* applying it to the problem at hand.
There are several well established network architectures including multi-layer perceptrons, many variations of convolutional networks, and recurrent networks. Although the goal of machine learning is to develop generalized techniques, the current state of the art network types still seem to be application specific. For example, Google views Convolutional Long Short-Term Memory Deep Neural Networks (CLDNNs) to be worthy of patenting even though they are only used in their voice processing research. The state of the art in image recognition uses variants of the inception architecture, residual networks, and other architectures that enable many combinations of convolutional layers while managing the combinatorial complexity of weights and activations. We discuss these methods in greater detail in the next section.

Before applying deep neural networks to wireless communications signals it is worth reviewing the state of the art for other application areas. The next section will review deep neural network architecture and learning advances that are likely to be valid and useful for wireless communications applications. Following the review of interesting deep architectures and training methods, results are in Section <ref> and a discussion in Section <ref>.

§.§ Neural Network Architectures
The common element in all state of the art deep neural networks is the use of convolutional layers. A convolutional layer consists of N_f convolutional filters. The use of convolutional layers started for image and hand-writing recognition to provide feature translation invariance <cit.>. The use of convolutional filters in neural networks may be slightly different than expected for someone already familiar with FIR filters and DSP, at least partially due to the use of activation functions in neural networks. Convolutions in neural networks are typically very small (1x1 through 5x5 are common sizes in image processing). In typical DSP applications filters are very wide (many taps/high order) rather than deep (small taps, but cascaded). Modern methods of implementing these filters, such as polyphase filterbanks, typically provide ways to reduce the width of filters for computational or latency reasons.

The transfer function for a standard convolutional layer <cit.> is given below in equation <ref>, where y_j is the output feature map for the jth filter, b and k represent learned bias and filter weight parameters, x_i represents the input activations, ∗ denotes the convolution operation, and f(..) denotes a (typically non-linear) activation function such as a rectified linear unit (ReLU) or sigmoid.

y_j = f( b_j + ∑_i k_ij ∗ x_i )
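A direct numpy transcription of this transfer function for the 1-D case may make the indexing concrete; the filter shapes and the ReLU choice for f(..) here are illustrative assumptions:

import numpy as np

def conv_layer(x, k, b, f=lambda z: np.maximum(z, 0.0)):
    """Transfer function y_j = f(b_j + sum_i k_ij * x_i) for a 1-D
    convolutional layer; x is (n_in, length), k is (n_in, n_out, taps),
    b is (n_out,), and f defaults to a ReLU."""
    n_in, n_out, taps = k.shape
    length = x.shape[1] - taps + 1        # 'valid' convolution output length
    y = np.zeros((n_out, length))
    for j in range(n_out):
        acc = b[j]
        for i in range(n_in):
            # np.convolve flips the kernel; 'valid' keeps fully-overlapping taps
            acc = acc + np.convolve(x[i], k[i, j], mode="valid")
        y[j] = f(acc)
    return y

# A 2-channel (I/Q) input of 128 samples through 4 filters of 8 taps:
y = conv_layer(np.random.randn(2, 128), np.random.randn(2, 4, 8), np.zeros(4))
print(y.shape)  # (4, 121)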
A visible trend in neural networks for image processing is building deeper networks to learn more complex functions and hierarchical feature relationships <cit.>. Deep networks enable more complex functions to be learned more readily from raw data than shallower networks with the same number of parameters <cit.>; however, depth in neural networks is widely believed to be limited by unstable gradients that either explode or vanish in the earlier or later layers of the network. This problem has been mitigated in recent years by the use of gradient normalization in optimizers as well as non-linearities which do not exacerbate the vanishing gradient problem, such as rectified linear units (ReLUs). As a result, several important architectures that increase depth have been used to win competitions such as ImageNet, and we will look at these for improving radio modulation recognition.

The inception architecture used in GoogLeNet <cit.> is one successful approach to increasing network depth and the ability to generalize to features of differing scales while still managing complexity. This network consists of repeated inception modules. Each inception module (shown in figure <ref>) contains four parallel paths, with the output being the concatenation of the four parallel outputs. The first path is a bank of 1x1 convolutions that forward along selected information. The 1x1 convolutions are a form of selective highway networks that simply pass information forward with no transformation. The second and third paths are 1x1 convolutions followed by a bank of 3x3 and 5x5 convolutions to provide multiple scales of feature detection. Finally, the last parallel path is a 3x3 pooling layer followed by 1x1 convolutions. Intermediate inception modules in the network are connected to softmax classifiers that contribute to the network's global loss for training. These classifiers are believed to help in preventing vanishing gradients.

Another approach to increasing depth uses architectures that forward information untouched across layers. The best approach so far, which won ImageNet 2015, is residual networks <cit.>. A residual network adds one layer's output to the output of the layer two layers deeper (as shown in figure <ref>). This is known as a residual network because the forwarded information forces the network to learn a residual function as part of feature extraction. The residual network authors suggest that vanishing gradients are resolved by normalization techniques that have been widely adopted, and that network depth is instead limited by the training complexity of deep networks, which can be simplified with residual functions.

CLDNNs are an approach for voice processing that operates on raw time-domain waveforms rather than expert voice features such as log-mel cepstrums <cit.>. The architecture uses two convolutional layers followed by two recurrent layers made up of Long Short-Term Memory (LSTM) cells. LSTMs are a common recurrent network architecture consisting of several gates that control how long a history is maintained <cit.>. CLDNNs can also have connections that bypass layers, intended to provide a longer time context for the extracted features. For example, the original CLDNN forwards raw samples with the output of the convolutional layers before the LSTM layers <cit.>.

Inspired by the use of expert knowledge to guide network architectures such as convolutional networks and CLDNNs, we experiment with a convolutional network that we will refer to as a convolutional matched filter. The rather simple idea is to take the general architecture of a typical communications receiver and build a neural network architecture that has similar parts. Communications receivers have a filter (typically matched to the transmitted pulse or wave shape), a synchronizer, and a sampler. Often the filter up front decimates to a small number of samples per symbol for the synchronizer, which performs phase shifts to find the optimal sampling point. The sampler then slices to bits or emits audio for analog modulations. The neural network architecture analog to this is a convolutional layer with pooling followed by an LSTM, as sketched below.
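A minimal Keras sketch of this convolutional matched filter follows; the layer widths, pooling factor and input layout (128 time steps by 2 I/Q channels) are assumptions for illustration rather than the exact configuration used in our experiments:

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, LSTM, Dense

# Receiver analogy: matched filter -> decimation -> synchronizer -> slicer.
model = Sequential([
    # "matched filter": learned 8-tap filters over the I/Q time series
    Conv1D(50, 8, activation='relu', input_shape=(128, 2)),
    # decimation to a few samples per symbol
    MaxPooling1D(pool_size=2),
    # "synchronizer": recurrent layer tracking timing/phase context
    LSTM(50),
    # "slicer": classify into the 11 modulation classes
    Dense(11, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])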
§.§ Neural Network Training
Hyper-parameters of a network such as learning rate, number of filters/feature maps per layer, filter size, and to some extent number of layers all affect network size and are hard to optimize. Recent research has attempted to optimize hyper-parameters as regular parameters that can be trained with backpropagation and gradient descent, like network weights and biases. For this study we ignore training hyper-parameters and use the Adam optimizer <cit.>, which provides gradient normalization and momentum and thereby reduces the importance of hyper-parameters like the learning rate.

Guided by work that shows depth is more important than the number of feature maps <cit.>, we will establish a baseline convolutional network similar to that used in Radio Convolutional Modulation Networks <cit.>. Our first step is to tune the number of filters and the number of taps per filter, and then view those as unimportant hyper-parameters for the remainder of the experiments testing the suitability of different architectures for RF data.

§.§ Test Setup
§ TECHNICAL APPROACH
We use the RadioML2016.10a dataset <cit.> as a basis for evaluating the modulation recognition task. The goal is to use a 128-sample complex (baseband I/Q) time-domain vector to identify the modulation scheme out of 11 possible classes. The 128 samples are fed in to the network in a 2x128 vector where the real and imaginary parts of the complex time samples are separated. The dataset uses a power delay profile, frequency selective fading, local oscillator offset, and additive white Gaussian noise, with details of these effects in <cit.>. The dataset is labeled with both modulation type and SNR ground truth. We use the all-SNR top-1 classification accuracy as a single-number benchmark and show top-1 accuracy over SNR to compare techniques. All models and training are done with the Keras deep learning library using the Theano backend on an Nvidia GTX 1070 GPU.

We start with a network similar to the CNN2 network from <cit.>; a sketch of this baseline is given after the list below. This is the chosen baseline because results from <cit.> show significant improvement upon expert methods; any further improvements should be considered state of the art. The primary difference is that we will use nfilt filters of size 1xtaps on each layer. We will do a simple hyper-parameter optimization to
* find the best number of filters and filter size for RF modulation recognition, and
* test assumptions gained from other fields on network depth and filter size.
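For concreteness, a minimal Keras sketch of this CNN2-style baseline is shown below; the filter count and size correspond to the values selected by the search reported in the next section, while the 256-unit dense layer width is an assumption of this sketch:

from keras.models import Sequential
from keras.layers import Reshape, Conv2D, Dropout, Flatten, Dense

# 2x128 I/Q input, two convolutional layers with `nfilt` filters of size
# 1 x `taps`, one hidden dense layer, then an 11-way softmax classifier.
nfilt, taps = 50, 8
model = Sequential([
    Reshape((2, 128, 1), input_shape=(2, 128)),
    Conv2D(nfilt, (1, taps), activation='relu'),
    Dropout(0.5),
    Conv2D(nfilt, (1, taps), activation='relu'),
    Dropout(0.5),
    Flatten(),
    Dense(256, activation='relu'),
    Dropout(0.5),
    Dense(11, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])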
§ RESULTS
§.§ Baseline Convolution Network
The baseline convolutional network has two convolution layers and a single dense layer before the softmax classifier. Each hidden layer has a rectified linear unit (ReLU) activation function and dropout of 50%. The first hyper-parameter optimization is the size of our convolutional layers. Each layer will have 1x3 filters and we will vary the number of filters to find how many are needed. From <cit.> we expect that a large range in the number of filters will give similar performance before any overfitting happens. As expected, there is a rather large window, from about 30 to 70 filters per layer, where performance is very similar. The top-1 classification accuracy for 20-90 filters in 10-filter increments is shown in figure <ref>. For the remaining experiments we will use 50 filters per layer.

Next, we optimize the size of each filter. <cit.> suggests that the size of filters also has minimal impact, but based on expert knowledge of the radio domain and the dataset we expect 8-tap filters to be optimal. For this experiment we use a two-convolution-layer network with a single hidden dense layer followed by the softmax classifier. The convolution layers each have 50 filters with a filter size of 1xntaps, where ntaps varies from 3 to 12. Results from varying filter sizes for each convolution layer suggest that smaller filters are not as good as larger filters. We hypothesized based on expert knowledge of the dataset that 8-tap filters would be the best. It is difficult to distinguish a clear winner from the results-per-SNR graph in figure <ref>; however, the whole-dataset classification accuracy shows that 7-12 taps all have similar performance around 61%, with differences being statistically insignificant.

Finally, for purely convolutional networks we experiment with increasing network depth. For this experiment we use convolutional layers with 50 filters of size 1x8. After the convolution layers we use a single hidden dense layer followed by a final dense softmax classifier. We start with a 2-convolutional-layer network and add convolution layers. Trends from deep learning suggest that adding more layers should improve classification performance until the gradient becomes unstable. Varying the number of convolutional layers shows little to no improvement in classification accuracy. Accuracy over SNR for this task is shown in figure <ref>. This shows that there is no more feature depth for our network to learn. The data is not highly hierarchical to start with, since the modulated data generally changes only the amplitude, frequency, or phase of a complex sinusoid; however, it is somewhat surprising that adding more convolutional layers does not appear to help reduce the effects of noise at lower SNRs.

§.§ Residual Networks
Although it is not surprising that adding more convolutional layers does not improve classification accuracy, it is surprising that the classification and loss improvements plateau at as few as 2 or 3 convolutional layers. The original resnet insight is that deeper networks result in higher training loss, which suggests higher training difficulty rather than overfitting. Figure <ref> shows that our hyper-parameter optimized CNN and a 9-layer residual network reach similar loss and validation loss, as well as similar accuracy (not shown); however, the residual network learns in fewer epochs. We also experimented with residual networks with 5-9 layers, which all had similar performance and training times. This, combined with our hyper-parameter search for ordinary CNN depth, suggests we are not limited by network depth for radio learning tasks as much as we are limited by the features purely convolutional architectures can learn.

§.§ Inception Modules
Inception modules also do not improve radio modulation classification in our experiments here, using inception modules tuned for our dataset. The three branches used in each module are 50 1x1 filters, 50 1x3 filters, and 50 1x8 filters. The 1x3 and 1x8 filter branches also have 50 1x1 filters in front of them, as shown in figure <ref>. The results for 1-4 inception modules in a network do not show any improvement over our hyper-parameter optimized CNN. Again, this suggests that we are not limited by depth or, apparently, by the scale of filters.

§.§ LSTM Networks
As the final architecture we test adding recurrent network layers, namely those comprised of LSTM units, for modeling temporal features. This approach is widely used in time-series applications and we expected that modulated baseband time-series may be similarly applicable. We tested two- and three-layer convolutions followed by recurrent layers in a CLDNN-type architecture, with and without the forward/bypass connection before the recurrent layer.
We found that the forward connection, as a concatenation of the raw waveform and the convolutional output shown in figure <ref>, results in better classification accuracy and more stable gradient descent than the other architectures. Using a pooling layer that would create an architecture like the previously described convolutional matched filter detector does not help classification.

To further understand what limits classification accuracy, we look at the confusion matrix for a CLDNN shown in figure <ref>. There are two primary areas of confusion. One is between the analog modulations and the other is between higher order QAMs. The analog modulations will be hard to address, but the QAMs can likely be improved on with better synchronization and by reducing channel impairments.

Gaining intuition on what the CLDNN is learning in each layer is important for guiding future work. To do this we plot the time and frequency representations of some filter taps. For the frequency response the filter taps are zero-padded with 100 zeros to get a 128-point FFT. Figs. <ref> and <ref> show two select filters from the first layer. The time-domain representations do not look particularly familiar to an expert eye; however, the frequency responses do show shaped low-pass filters. Other filters that are not shown have frequency selective components, DC blockers, and sinc-like spectral shapes.

Another way to visualize these filters is to apply random data to them and perform a gradient ascent on the output of a particular filter, which will converge on data that most activates a convolutional neuron <cit.>; a sketch of this procedure is given below. Results for the selected two filters are shown in figs. <ref> and <ref>. The resulting vectors look somewhat like crude PSK and FM/FSK modulations to an expert eye. The vectors also display some constant phase rotation that is present in our dataset due to the simulated channel model. It is important to note that these two filter visualizations were selected and not all filters appear meaningful to an expert.
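A sketch of this gradient-ascent visualization in Keras is shown below, assuming the first convolutional layer was built with name='conv1' and a 2x128 input; the step count and learning rate are arbitrary choices of this sketch:

import numpy as np
from keras import backend as K

def visualize_filter(model, filter_index, layer_name='conv1', steps=40, lr=1.0):
    layer_output = model.get_layer(layer_name).output
    # ascend the mean activation of the chosen filter (filters on the last axis)
    loss = K.mean(layer_output[..., filter_index])
    grads = K.gradients(loss, model.input)[0]
    grads = grads / (K.sqrt(K.mean(K.square(grads))) + 1e-8)  # normalize step size
    step = K.function([model.input], [loss, grads])

    x = np.random.uniform(-0.1, 0.1, size=(1, 2, 128))        # random I/Q seed
    for _ in range(steps):
        loss_value, grads_value = step([x])
        x = x + lr * grads_value                              # gradient ascent
    return x[0]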
§ DISCUSSION
Performance of deep neural networks in the radio domain does not seem to be limited by network depth in the same way as in the image, natural language processing, and acoustic domains. Although our experiments focused on modulation recognition as a benchmark task, we expect other radio machine learning tasks to be able to use similar network architectures. Further advances in deep learning for radio tasks will likely come from improved training methods and network architectures that can learn to transform RF data to remove the effects of wireless channels, which current neural network architectures are not designed for. One example that is currently being explored is the use of spatial transforms to equalize and synchronize incoming waveforms <cit.>.

These experiments also focused on a dataset that is nominally bandwidth-normalized, which is a poor assumption for signals captured from real radio transmissions. Future networks used in real-world applications will need to learn to either resample signals to be bandwidth normalized, or learn features for many bandwidths. Networks that can resample, synchronize, and remove non-linear channel distortions are all exciting future work for the field.

We believe that as radio environments become increasingly complex, combining varying temporal behaviors of modulations, multi-modulation protocols and many radio emitters interoperating within a single band, many of these notions of hierarchy within deep networks will become increasingly important in allowing our networks to scale to cope with the complexity effectively, as has been similarly shown in the vision domain within complex multi-object scenes.
Open Vocabulary Scene Parsing
Hang Zhao^1, Xavier Puig^1, Bolei Zhou^1, Sanja Fidler^2, Antonio Torralba^1
^1Massachusetts Institute of Technology, USA
^2University of Toronto, Canada
============================================================================================================================================================================

Recognizing arbitrary objects in the wild has been a challenging problem due to the limitations of existing classification models and datasets. In this paper, we propose a new task that aims at parsing scenes with a large and open vocabulary, and several evaluation metrics are explored for this problem. Our proposed approach to this problem is a joint image pixel and word concept embedding framework, where word concepts are connected by semantic relations. We validate the open vocabulary prediction ability of our framework on the ADE20K dataset, which covers a wide variety of scenes and objects. We further explore the trained joint embedding space to show its interpretability.

§ INTRODUCTION
One of the grand goals in computer vision is to recognize and segment arbitrary objects in the wild. Recent efforts in image classification/detection/segmentation have shown this trend: emerging image datasets enable recognition at a large scale <cit.>, while image captioning can be seen as a special instance of this task <cit.>. However, nowadays most recognition models are still not capable of classifying objects at the level of a human, in particular, taking into account the taxonomy of object categories. Ordinary people or laymen classify things at the entry level, and experts give more specific labels: there is no object with a single correct label, so the prediction vocabulary is inherently open-ended. Furthermore, there is no widely-accepted way to evaluate open-ended recognition tasks, which is also a main reason this direction is not pursued more often.

In this work, we are pushing towards open vocabulary scene parsing: model predictions are not limited to a fixed set of categories, but extend to words in a larger dictionary, or even a knowledge graph. Considering that existing image parsing datasets only contain a small number of categories (~100 classes), there is much more a model can learn from those images given extra semantic knowledge, like the WordNet dictionary (~100,000 synsets) or Word2Vec embeddings from external corpora.

To solve this new problem, we propose a framework that is able to segment all objects in an image using open vocabulary labels. In particular, while the method strives to label each pixel with the same word as the one used by the human annotator, it resorts to a taxonomy when it is not sure about its prediction. As a result, our model aims to make plausible predictions even for categories that have not been shown during training, e.g. if the model has never seen tricycle, it may still give a confident guess on vehicle, performing more like a human.

Our framework incorporates hypernym/hyponym relations from WordNet <cit.> to help parsing. More concretely, word concepts and image pixel features are embedded into a joint high-dimensional vector space so that (1) hypernym/hyponym relations are preserved for the concepts, and (2) image pixel embeddings are close to the concepts related to their annotations according to some distance measures.
This framework offers three major advantages: (1) predictions are made in a structured way, i.e., they can be intermediate nodes in WordNet, thus yielding more reasonable mistakes; (2) it is an end-to-end trainable system, whose vocabulary can be huge and is easily extensible; (3) the framework leaves more freedom to the annotations: inconsistent annotations from workers with different domain knowledge have less of an effect on the performance of the model. We additionally explored several evaluation metrics, which are useful measures not only for our open vocabulary parsing tasks, but also for any large-scale recognition tasks where confusions often exist. The open vocabulary parsing ability of the proposed framework is evaluated on the recent ADE20K dataset <cit.>. We further explore the properties of the embedding space by concept retrieval, classification boundary loosening, and concept synthesis with arithmetics.

§.§ Related work
Our work is related to different topics in the literature, which we briefly review below.

Semantic segmentation and scene parsing. Due to the astonishing performance of deep learning, in particular CNNs <cit.>, pixel-wise dense labeling has received a significant amount of attention. Existing works include the fully convolutional neural network (FCN) <cit.>, the deconvolutional neural network <cit.>, the encoder-decoder SegNet <cit.>, the dilated neural network <cit.>, etc. These networks perform well on datasets like PASCAL VOC <cit.> with 20 object categories, Cityscapes <cit.> with 30 classes, and the recently released benchmark SceneParse150 <cit.> covering the 150 most frequent daily objects. However, these models are not easily adaptable to new objects. In this paper we aim at going beyond this limit and making predictions in the wild.

Zero-shot learning. Zero-shot learning addresses knowledge transfer and generalization <cit.>. Models are often evaluated on unseen categories, and predictions are made based on the knowledge extracted from the training categories. Rohrbach <cit.> introduced the idea of transferring large-scale linguistic knowledge into vision tasks. Socher et al. <cit.> and Frome et al. <cit.> directly embedded visual features into the word vector space so that visual similarities are connected to semantic similarities. Norouzi et al. <cit.> used a convex combination of visual features of training classes to represent new categories. Attribute-based methods are another major direction in zero-shot learning that maps object attribute labels or language descriptions to visual classifiers <cit.>.

Hierarchical classification. Hierarchical classification addresses the common circumstance in which candidate categories share hierarchical semantic relations. Zweig et al. <cit.> combined classifiers on different levels to help improve classification. Deng et al. <cit.> achieved hierarchical image-level classification by trading off accuracy and gain as an optimization problem. Ordonez et al. <cit.>, on the other hand, proposed to make entry-level predictions when dealing with a large number of categories. More recently, Deng et al. <cit.> formulated a label relation graph that could be directly integrated with deep neural networks. Our approach to hierarchical parsing is inspired by the order-embeddings work <cit.>: we attempt to construct an asymmetric embedding space, so that both image features and the hierarchical information in the knowledge graph are effectively and implicitly encoded by the deep neural networks.
While most previous approaches combine deep neural networks with optimization methods like conditional random fields so that the semantic relatedness is incorporated into the framework, the advantage of our approach is that it makes an end-to-end trainable network, which is easily scalable when dealing with larger datasets in practical applications.

§ LEARNING JOINT EMBEDDINGS FOR PIXEL FEATURES AND WORD CONCEPTS
We treat open-ended scene parsing as a retrieval problem for each pixel, following the ideas of image-caption retrieval work <cit.>. Our goal is to embed image pixel features and word concepts into a joint high-dimensional positive vector space R_+^N, as illustrated in Figure <ref>. The guiding principle while constructing the joint embedding space is that image features should be close to their concept labels, and word concepts should preserve their semantic hypernym/hyponym relations. In this embedding space, (1) vectors close to the origin are general concepts, and vectors with larger norms represent higher specificity; (2) the hypernym/hyponym relation is defined by whether one vector is smaller/greater than another vector in all the N dimensions. A hypernym scoring function is crucial in building this embedding space, which will be detailed in Section <ref>.

Figure <ref> gives an overview of our proposed framework. It is composed of two streams: a concept stream and an image stream. The concept stream tries to encode the pre-defined semantics: it learns an embedding function f(·) that maps the words into R_+^N so that they preserve the hypernym/hyponym relationship between word concepts. The image stream g(·) embeds image pixels into the same space by pushing them close to their labels (word concepts). We describe these two streams in more detail in Sections <ref> and <ref>.

§.§ Scoring functions
In this embedding problem, training is performed on pairs: image-label pairs and concept-concept pairs. For either of the streams, the goal is to maximize the scores of matching pairs and minimize the scores of non-matching pairs, so the choice of scoring function S(x,y) becomes important. There are symmetric scoring functions like the L_p distance and cosine similarity, widely used in embedding tasks:

S_L_p(x,y) = -‖x - y‖^p ,    S_cos(x,y) = x·y / (‖x‖ ‖y‖) .

In order to reveal the asymmetric hypernym/hyponym relations between word concepts, a hypernym scoring function <cit.> is indispensable:

S_hyper(x,y) = -‖max(0, x-y)‖^p .

If x is a hypernym of y (x ≽ y), then ideally all the coordinates of x are smaller than those of y (⋀_i (x_i ≤ y_i)), so S_hyper(x,y) = S_hyper,max = 0. Note that due to the asymmetry, swapping x and y will result in totally different scores.

§.§ Concept stream
The objective of the concept stream is to build up semantic relations in the embedding space. In our case, the semantic structure is obtained from WordNet hypernym/hyponym relations. All the vocabulary concepts form a directed acyclic graph (DAG) H=(V,E) sharing a common root v̂ ∈ V, "entity"; each node v ∈ V in the graph can be an abstract concept, i.e. the union of its children nodes, or a specific class as a leaf. A visualization of part of the DAG we built based on WordNet and ADE20K labels can be found in the Supplementary Materials. Internally, the concept stream includes parallel layers of a shared trainable lookup table, mapping the word concepts u,v to f(u), f(v), which are then evaluated with the hypernym score S_concept(f(u), f(v)) = S_hyper(f(u), f(v)), which tells how confident we are that u is a hypernym of v.
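A minimal numpy sketch of this hypernym score, together with the max-margin pair loss that the next paragraph defines formally (p=2 and the toy vectors here are assumptions):

import numpy as np

def s_hyper(x, y, p=2):
    """Hypernym score S_hyper(x, y) = -||max(0, x - y)||^p; zero iff
    every coordinate of x is <= the corresponding coordinate of y."""
    return -np.linalg.norm(np.maximum(0.0, x - y)) ** p

def concept_loss(fu, fv, is_hypernym_pair, alpha=1.0):
    """Max-margin loss on a concept pair (u, v) in the embedding space."""
    s = s_hyper(fu, fv)
    if is_hypernym_pair:          # u is an ancestor of v in the DAG
        return -s
    return max(0.0, alpha + s)

# Toy check in R_+^2: "entity" at the origin is a hypernym of everything.
entity, tie = np.zeros(2), np.array([0.5, 0.2])
print(concept_loss(entity, tie, True))    # 0.0: relation already satisfied
print(concept_loss(tie, entity, False))   # 0.71: inverted pair is penalized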
A max-margin loss is used to learn the embedding function f(·):

ℒ_concept(u,v) = -S_concept(f(u), f(v))   if u ≽ v ,
ℒ_concept(u,v) = max{0, α + S_concept(f(u), f(v))}   otherwise.

Note that the positive samples u ≽ v are the cases where u is an ancestor of v in the graph, so all the coordinates of f(v) are pushed towards values larger than those of f(u); negative samples can be inverted pairs or random pairs, and the loss function pushes them apart in the embedding space. In our training, we fix the root of the DAG, "entity", as an anchor at the origin, so the embedding space stays in R_+^N.

§.§ Image stream
The image stream is composed of a fully convolutional network, which is commonly used in image segmentation tasks, and a lookup layer shared with the word concept stream. Consider an image pixel at position (i,j) with label x_i,j; its feature y_i,j is the top-layer output of the convolutional network. Our mapping function g(y_i,j) embeds the pixel features into the same space as their label f(x_i,j), and we then evaluate them with a scoring function S_image(f(x_i,j), g(y_i,j)).

As label retrieval is inherently a ranking problem, negative labels x'_i,j are introduced in training. A max-margin ranking loss is commonly used <cit.> to encourage the scores of true labels to be larger than those of negative labels by a margin:

ℒ_image(y_i,j) = ∑_x'_i,j max{0, β - S_image(f(x_i,j), g(y_i,j)) + S_image(f(x'_i,j), g(y_i,j))} .

In the experiments, we use a softmax loss for all our models and empirically find better performance:

ℒ_image(y_i,j) = -log [ e^S_image(f(x_i,j), g(y_i,j)) / ( e^S_image(f(x_i,j), g(y_i,j)) + ∑_x'_i,j e^S_image(f(x'_i,j), g(y_i,j)) ) ] .

This loss function is a variation of the triplet ranking loss proposed in <cit.>. The choice of scoring function here is flexible: we can either (1) simply make image pixel features "close" to the embeddings of their labels by using the symmetric scores S_L_p(f(x_i,j), g(y_i,j)) or S_cos(f(x_i,j), g(y_i,j)); or (2) use the asymmetric hypernym score S_hyper(f(x_i,j), g(y_i,j)). In the latter case, we treat images as specific instances or specializations of their label concepts, and labels as general abstractions of the images.

§.§ Joint model
Our joint model combines the two streams via a joint loss function to preserve the concept hierarchy as well as visual feature similarities. In particular, we simply take a weighted sum of the losses of the two streams, ℒ = ℒ_image + λℒ_concept (λ=5), during training. We set the embedding space dimension to N=300, which is commonly used in word embeddings. Training and model details are described in Section <ref>.

§ EVALUATION CRITERIA
To better evaluate our models, metrics for different parsing tasks are explored in this section.

§.§ Baseline flat metrics
While working on a limited number of classes, four traditional criteria are good measures of scene parsing model performance: (1) pixel-wise accuracy: the proportion of correctly classified pixels; (2) mean accuracy: the proportion of correctly classified pixels averaged over all the classes; (3) mean IoU: the intersection-over-union between the predictions and ground-truth, averaged over all the classes; (4) weighted IoU: the IoU weighted by the pixel ratio of each class.

§.§ Open vocabulary metrics
Given the nature of open vocabulary recognition, selecting a good evaluation criterion is non-trivial. Firstly, it should leverage the graph structure of the concepts to tell the distance of the predicted class from the ground truth.
Secondly, the evaluation should correctly represent the highly unbalanced distribution of the dataset classes, which is also common for the objects seen in nature. To do so, for each sample/pixel, a score s(l,p) is used to measure the similarity between the ground truth label l and the prediction p. The total accuracy is the mean score over all the samples/pixels.

§.§.§ Hierarchical precision, recall and F-score
Hierarchical precision, recall and F-score are also known as Wu-Palmer similarity, which was originally used for lexical selection <cit.>. For two given concepts l and p, we define the lowest common ancestor (𝙻𝙲𝙰) as the most specific concept (i.e. furthest from the root Entity) that is a hypernym of both. Then hierarchical precision and recall are defined by the number of common hypernyms that prediction and label have over the vocabulary hierarchy H; formally:

s_HP(l,p) = d_𝙻𝙲𝙰 / d_p ,    s_HR(l,p) = d_𝙻𝙲𝙰 / d_l ,

where the depth of the lowest common ancestor node, d_𝙻𝙲𝙰, is the number of hypernyms in common. Combining hierarchical precision and hierarchical recall, we get the hierarchical F-score s_HF(l,p), which is defined as the depth of the 𝙻𝙲𝙰 node over the sum of the depths of the label and prediction nodes:

s_HF(l,p) = 2 s_HP(l,p) · s_HR(l,p) / (s_HP(l,p) + s_HR(l,p)) = 2 · d_𝙻𝙲𝙰 / (d_l + d_p) .

One prominent advantage of these hierarchical metrics is that they penalize predictions that are too specific. For example, "guitar" (d_l=10) and "piano" (d_p=10) are both "musical instrument" (d_𝙻𝙲𝙰=8). When "guitar" is predicted as "piano", s_HF = 2·8/(10+10) = 0.8; when "guitar" is predicted as "musical instrument", s_HF = 2·8/(10+8) = 0.89. This agrees with the human judgment that the prediction "musical instrument" is more accurate than "piano".

§.§.§ Information content ratio
As mentioned before, the unbalanced distribution of data points could make performance dominated by frequent classes. Information content ratio, which was also used in lexical search, addresses these problems effectively. According to information theory and statistics, the information content of a message is the inverse logarithm of its frequency, I(c) = -log P(c). We inherit this idea and pre-process our image data to get the pixel frequency of each concept v ∈ H. Specifically, the frequency of a concept is the sum of its own frequency and all its descendants' frequencies in the image dataset. It is expected that the root "entity" has frequency 1.0 and information content 0. During evaluations, we measure, for each testing sample, how much information our prediction gets out of the total amount of information in the label. So the final score is determined by the information content of the 𝙻𝙲𝙰 and that of the ground truth and predicted concepts:

s_I(l,p) = 2 · I_𝙻𝙲𝙰 / (I_l + I_p) = 2 · log P(𝙻𝙲𝙰) / (log P(l) + log P(p)) .

As the information content ratio requires the statistics of the image dataset and the semantic hierarchy, it rewards both inference difficulty and hierarchical accuracy.
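Both metrics reduce to a few lines given the node depths and concept frequencies; a sketch reproducing the "guitar"/"piano" example above:

import math

def hierarchical_f(d_lca, d_label, d_pred):
    """s_HF = 2 * d_LCA / (d_l + d_p), from the depths in the concept DAG."""
    return 2.0 * d_lca / (d_label + d_pred)

def information_content_ratio(p_lca, p_label, p_pred):
    """s_I = 2 * log P(LCA) / (log P(l) + log P(p)); P(c) is the pixel
    frequency of concept c (descendants included), so P(entity) = 1."""
    return 2.0 * math.log(p_lca) / (math.log(p_label) + math.log(p_pred))

# "guitar" (depth 10) predicted as "piano" (10) vs "musical instrument" (8):
print(hierarchical_f(8, 10, 10))  # 0.80
print(hierarchical_f(8, 10, 8))   # ~0.89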
§ EXPERIMENTS
§.§ Image label and concept association
To learn the joint embedding, we associate each class in the ADE20K dataset with a synset in WordNet, representing a unique concept. The data association process requires semantic understanding, so we resort to Amazon Mechanical Turk (AMT). We develop a rigorous annotation protocol, which is detailed in the Supplementary Materials. After association, we end up with 3019 classes in the dataset having synset matches. Out of these there are 2019 unique synsets forming a DAG. All the matched synsets have entity.n.01 as the top hypernym, and there are on average 8.2 synsets in between. The depths of the ADE20K dataset annotations range from 4 to 19.

§.§ Network implementations
§.§.§ Concept stream
The data layer of the concept stream feeds the network with positive and negative vocabulary concept pairs. The positive training pairs are found by traversing the graph H and finding all the transitive-closure hypernym pairs, e.g. "neckwear" and "tie", "clothing" and "tie", "entity" and "tie"; negative samples are randomly generated before each training epoch by excluding these positive samples. Using transitive-closure pairs greatly improves the performance of the embedding by providing more training data.

§.§.§ Image stream
Our core CNN in the image stream is adapted from VGG-16 by taking away pool4 and pool5 and then making all the following convolution layers dilated (or atrous) <cit.>. Considering the features of an image pixel from the last layer fc7 of the fully convolutional network to be y_i,j with dimension 4096, we add a 1×1 convolution layer g(·) with weight dimension 4096×300 to embed the pixel features. To ensure positivity, we further add a ReLU layer. A technique we use to improve the training is to fix the norms of the embeddings of image pixels to be 30, though a wide range of values works. This technique stabilizes training numerically and speeds up convergence. Intuitively, fixing image embeddings to have a large norm makes sense in the hierarchical embedding space: image pixels are the most specific descriptions of concepts, while words are more general and closer to the origin.

§.§.§ Training and inference
In all the experiments, we first train the concept stream to get the word embeddings, and then use them as initializations in the joint training. Pre-trained weights from VGG-ImageNet <cit.> are used as initializations for the image stream. The Adam optimizer <cit.> with learning rate 1e-3 is used to update weights across the model. The margin of the loss functions defaults to α=1.0. In the inference stage, there are two cases: (1) while testing on the 150 training classes, the pixel embeddings are compared with the embeddings of all the 150 candidate labels based on the scoring function, and the class with the highest score is taken as the prediction; (2) while doing zero-shot predictions, on the other hand, we use a threshold on the scores as a cutoff, and concepts with scores above the cutoff are taken as predictions. The best threshold is found on a set of 100 validation images before testing.
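A sketch of this inference step for one pixel, using the hypernym score variant (the function name, the candidate-list format and the use of a single validated threshold value are assumptions of this sketch):

import numpy as np

def predict_pixel(pixel_emb, label_embs, labels, threshold=None):
    # hypernym score of each candidate concept w.r.t. the pixel embedding:
    # the pixel is treated as a specific instance of the concept
    scores = -np.linalg.norm(np.maximum(0.0, label_embs - pixel_emb), axis=1)**2
    if threshold is None:               # closed set: best of the 150 classes
        return [labels[int(np.argmax(scores))]]
    # zero-shot: every concept scoring above the validated cutoff
    return [l for l, s in zip(labels, scores) if s > threshold]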
§.§ Results on SceneParse150
In this section, we report the performance of our model on the scene parsing task. The training is performed on the 150 most frequent classes of stuff and objects in the ADE20K dataset, SceneParse150, where each class has at least 0.02% of the total pixels in the dataset. We have trained some models from the references and several variants of our proposed model, all of which share the same architecture of convolutional networks to make fair comparisons.

Softmax is the baseline model that does classical multi-class classification. Conditional Softmax is a hierarchical classification model proposed in <cit.>. It builds a tree based on the label relations, and softmax is performed only between nodes with a common parent, so only conditional probabilities for each node are computed. To get absolute probabilities during testing, the conditional probabilities are multiplied along the paths to the root. Word2Vec is a model that simply regresses the image pixel features to pre-trained word embeddings, where we use the GoogleNews vectors. Since the dimensionality of the GoogleNews vectors is 300, the weight dimension of the last convolution layer is 1×1×4096×300. Cosine similarity and a max-margin ranking loss with negative samples are used during training. This model is a direct counterpart of DeViSE <cit.> in our scene parsing setting. Word2Vec+ is our improved version of the Word2Vec model. There are two major modifications: (1) we replace the max-margin loss with a softmax loss, as mentioned in Section <ref>; (2) we augment the GoogleNews vectors by finetuning them on a domain-specific corpus. Concretely, from AMT we collect 3 to 5 scene-descriptive sentences for each image in the ADE20K training set (20,210 images). We then finetune the pre-trained word vectors with the skip-gram model <cit.> for 5 epochs, and these word vectors are finally fixed for regression, as in Word2Vec.

There are 6 variants of our proposed model. Model names of the form Image-* refer to the cases where only the image stream is trained, with the concept embeddings fixed. In the models Joint-* we train the two streams together to learn a joint embedding space. The three aforementioned scoring functions are used for the image stream; their corresponding models are marked as *-L2, *-Cosine and *-Hyper.

§.§.§ On the training classes
Evaluating on the 150 training classes, our proposed models offer competitive results. Baseline flat metrics are used to compare the performance, as shown in Table <ref>. Without surprise, the best performance is achieved by the Softmax baseline, which agrees with the observation from <cit.> that classification formulations usually achieve higher accuracy than regression formulations. At the same time, our proposed model Joint-Cosine and Word2Vec+ fall short of Softmax by only around 1%, which is an affordable sacrifice given the zero-shot prediction capability and interpretability that will be discussed later. Visual results of the best proposed model, Joint-Cosine, are shown in Figure <ref>.

§.§.§ Zero-shot predictions
We then move to the zero-shot prediction tasks to fully leverage the hierarchical prediction ability of our models. The models are evaluated on 500 less frequent object classes in the ADE20K dataset. Predictions can be among the 500 classes or their hypernyms, which can be compared based on our open vocabulary metrics. Softmax and Conditional Softmax models are not able to make inferences outside the training classes, so we take their predictions within the 150 classes for evaluation. Convex Combination <cit.> is another baseline model for comparison (sketched below): we take the probability output from Softmax within the 150 classes to form a new embedding in the word vector space, and then find the nearest neighbors in that vector space. This approach does not require re-training, but still offers reasonable performance.
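A sketch of this baseline follows; the array names are assumptions, with train_vecs holding the word vectors of the 150 training classes and open_vecs those of the full open vocabulary:

import numpy as np

def convex_combination_predict(probs, train_vecs, open_vocab, open_vecs):
    """Convex Combination baseline: mix the word vectors of the training
    classes with the Softmax probabilities, then retrieve the nearest
    concept in the open vocabulary by cosine similarity."""
    emb = probs @ train_vecs                      # (300,) synthetic embedding
    emb = emb / (np.linalg.norm(emb) + 1e-12)
    sims = open_vecs @ emb / (np.linalg.norm(open_vecs, axis=1) + 1e-12)
    return open_vocab[int(np.argmax(sims))]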
Though the model does not always get the ground truth labels exactly correct, it gives reasonable predictions. Another observation is that the predictions are sometimes noisy: we get 2-3 predictions on a single object. Some of the inconsistencies are plausible though, e.g. in the first row, the upper part of the “rocking chair" is predicted as “chair” while the lower part is predicted as “furniture”. The pixels in the upper segment resemble those of ordinary chairs while the pixels in the lower segment do not, so in the latter case the model gives a more general prediction.

§.§ Diversity test

The open vocabulary recognition problem naturally raises a question: how many training classes do we need to generalize well on zero-shot tasks? To answer this question, we conduct a diversity test in this section. Unlike the previous experiments, we do not take the most frequent classes for training; instead we uniformly sample training and testing classes from the histogram of pixel counts. For better comparison, we fix the number of zero-shot test set classes to be 500, while the number of training classes ranges from 50 to 1500. In the training process, we offset the imbalance in pixel counts by weighting each training class loss with its corresponding information content, so the less frequent classes contribute higher loss.

We only experiment with our best model Joint-Hyper for this diversity test. Results in Figure <ref> suggest that performance saturates after training with more than 500 classes. We conjecture that training with many classes that have few instances could introduce sample noise, so to further improve performance, more high-quality data is required.

§ INTERPRETING THE EMBEDDING SPACE

The joint embedding space we trained earlier features different properties from known spaces like Word2Vec. In this section, we conduct three tests to explore them.

Concept search. In our framework, the joint training does not require all the concepts to have corresponding image data; the semantics can be propagated. This enables us to train with all the WordNet synsets and search with concepts that are not trained with images. In Figure <ref>, we show some pixel-level concept search results. The heatmaps are the scores in the corresponding embedding spaces. As the search concepts become increasingly abstract, our model far outperforms Word2Vec+, showing the effective encoding of hierarchical information in our embedding space.

Implicit attribute encoding. One intriguing property of feature embeddings is that they form a continuous space, so classification boundaries are flexible. We therefore explore the vicinity of some concepts. In Figure <ref>, we show score maps when searching for the concept “chair". Interestingly, it is a common phenomenon that objects like “bench” and “ottoman”, which are not hyponyms of “chair" in WordNet, get a reasonable response. We conjecture that the embedding space implicitly encodes some abstract attributes by clustering them, e.g. sittable is an affordance attribute. So by simply loosening the classification threshold of “chair”, one can detect regions where one can sit.

Synthesized concepts with arithmetic. Similar to Word2Vec, in our joint embedding space new concepts or image detectors can be synthesized with arithmetic. As shown in Figure <ref>, we take elementwise min and max operations on the word concepts, and then search for these synthesized concepts in the images. It can be found that the max operation takes the intersection of the concepts, e.g. the pool table is the common hyponym of “table" and “game equipment"; and the min operation takes the union, e.g. the cart is composed of attributes of “bicycle" and “canopy".

§ CONCLUSION

We introduced a new and challenging task: open vocabulary scene parsing, which aims at parsing images in the wild. We proposed a framework to solve it by embedding concepts from a knowledge graph and image pixel features into a joint vector space where the semantic hierarchy is preserved. We showed that our model performs well on open vocabulary parsing, and further explored the semantics learned in the embedding space.

Acknowledgement: This work was supported by Samsung and NSF grant No. 1524817 to AT. SF acknowledges the support from NSERC. BZ is supported by a Facebook Fellowship.

§ SUPPLEMENTARY MATERIALS

§ 1. DATA ASSOCIATION PROTOCOL

To learn the joint embeddings of images and word concepts, we need to augment the ADE20K dataset by adding information about how the label classes (>3000) are semantically related. We associate each class in the ADE20K dataset with a synset in WordNet, representing a unique concept. The data association process requires semantic understanding, so we resort to Amazon Mechanical Turk (AMT). The annotation protocol is detailed as follows, and screenshots of our AMT interface are shown in Figure <ref>.

For each class in the dataset, we search for all the synsets having the same name. We find 3 different cases: (1) a single synset is found for the given class; (2) multiple synsets are found due to polysemy; (3) no synsets are found, either because the correct synset has a different name or because that concept is not in WordNet. In the first case, we automatically match classes in the dataset with the obtained synsets, and then ask workers on AMT to verify by looking at the image labels and the definitions of the synsets in WordNet. In the second case, where multiple synsets were found, we show an image displaying such a concept and ask workers to select the synset whose definition matches the given class. In the last case, where no synset candidate was found, we show an image with the concept and ask workers to find the best matching synset by looking over the WordNet online API. They also have the option to indicate when no synset can match.

§ 2. CONCEPT GRAPH

After data association, we end up with 3019 classes in the dataset having synset matches. Out of these there are 2019 unique synsets, forming a DAG. All the matched synsets have entity.n.01 as the top hypernym, with on average 8.2 synsets in between. The depths of the ADE20K dataset annotations range from 4 to 19. A detailed visualization of the concept graph is shown in Figure <ref>. The node radii indicate the class frequencies in the ADE20K dataset. The figure only shows part of the full graph; nodes with 5 descendants or fewer have been hidden.

§ 3. FULL ZERO-SHOT PREDICTION LISTS

Our model gives each sample a list of predictions in hierarchical order. Due to page limitations, full prediction lists are not shown in the main paper. In Figure <ref>, we give details of the zero-shot predictions; both ground truth and prediction lists are shown in the text beneath the images. Correct predictions are marked in green, inconsistent items are marked in orange. It can be seen that for hard examples, e.g. “dome" (row 1, column 3), a general and conservative prediction is made; when the test sample is easy and similar to training samples, e.g. “wagon" (row 1, column 1), our model gives specific and aggressive predictions.
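To make the concept-arithmetic and search experiments of the main paper concrete, the sketch below shows elementwise min/max synthesis followed by a pixelwise score map. It is an illustrative reconstruction under the order-embedding reading of the space; the helper names and the exact penalty form are our assumptions, not the released code.

```python
import numpy as np

def intersect(a, b):
    """Common hyponym of two concepts: must dominate both
    parent embeddings coordinatewise."""
    return np.maximum(a, b)

def union(a, b):
    """Common hypernym / union of two concepts: keep only the
    coordinates shared by both embeddings."""
    return np.minimum(a, b)

def score_map(pixel_embs, concept):
    """pixel_embs: (H, W, 300) non-negative pixel embeddings.
    Returns an (H, W) heatmap of order-embedding scores."""
    violation = np.maximum(0.0, concept - pixel_embs)
    return -np.sum(violation ** 2, axis=-1)

# e.g. search for "table AND game equipment" (~ pool table):
# heat = score_map(pixel_embs, intersect(v_table, v_game_equipment))
```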
DART: Noise Injection for Robust Imitation Learning

Michael Laskey, EECS Department, UC Berkeley; Jonathan Lee, EECS Department, UC Berkeley; Roy Fox, EECS Department, UC Berkeley; Anca Dragan, EECS Department, UC Berkeley; Ken Goldberg, IEOR and EECS Departments, UC Berkeley

March 27, 2017

One approach to Imitation Learning is Behavior Cloning, in which a robot observes a supervisor and infers a control policy. A known problem with this “off-policy" approach is that the robot's errors compound when drifting away from the supervisor's demonstrations. On-policy techniques alleviate this by iteratively collecting corrective actions for the current robot policy. However, these techniques can be tedious for human supervisors, add a significant computational burden, and may visit dangerous states during training. We propose an off-policy approach that injects noise into the supervisor's policy while demonstrating. This forces the supervisor to demonstrate how to recover from errors. We propose a new algorithm, DART (Disturbances for Augmenting Robot Trajectories), that collects demonstrations with injected noise and optimizes the noise level to approximate the error of the robot's trained policy during data collection. We compare DART with DAgger and Behavior Cloning in two domains: in simulation with an algorithmic supervisor on the MuJoCo tasks (Walker, Humanoid, Hopper, Half-Cheetah) and in physical experiments with human supervisors training a Toyota HSR robot to perform grasping in clutter. For high dimensional tasks like Humanoid, DART can be up to 3x faster in computation time and only decreases the supervisor's cumulative reward by 5% during training, whereas DAgger executes policies that have 80% less cumulative reward than the supervisor. On the grasping in clutter task, DART obtains on average a 62% performance increase over Behavior Cloning.

§ INTRODUCTION

Robotic manipulation tasks are challenging to learn due to noncontinuous dynamics that are difficult to model, high dimensional state representations, and potentially delayed reward. Deep reinforcement learning has the potential to train such control policies, however in practice it may require a very large number of samples <cit.>. Rather than pure ab initio learning, an alternative is to leverage supervision to guide the robot's policy. A common approach to this problem is Behavior Cloning, in which a robot observes a supervisor's policy and learns a mapping from state to control via regression <cit.>. This is an off-policy method that suffers from compounding error when the robot executes its policy, leading it to drift to new and possibly dangerous states <cit.>. Theoretically, the drifting occurs because of covariate shift, where execution of the robot's policy causes it to move to a different distribution from the one on which it was trained.

Ross and Bagnell proposed DAgger <cit.> to correct for covariate shift by sampling states from the robot's policy.
DAgger is an on-policy method, in which the robot iteratively rolls out its current policy and asks for supervisor labels (or corrections) at the states it visits. Empirically, DAgger has been shown to reduce covariate shift and lead to robust control policies <cit.>. However, DAgger suffers from three key limitations: 1) it can be tedious for human supervisors to provide these corrections <cit.> 2) it can be potentially dangerous for a physical robot to visit highly sub-optimal states  <cit.>, and 3) repeatedly updating the robot's policy is computationally expensive.In this paper, we introduce an alternative approach to address covariate shift and enable robust policies. One way to expose the robot to corrective examples and reduce drift is to inject noise into the supervisor's demonstrations and let the supervisor provide corrections as needed.Our insight is that by injecting small levels of noise, we will focus on the states that the robot needs to recover from – the states at the boundary of the ideal policy. This has the potential to obtain the advantages of on-policy methods by recovering from errors as the robot starts making them, without the disadvantage of aggregating these errors at training time, and getting the robot to dangerous states with low reward. We propose to still perform off-policy learning, but to inject an optimized level of noise into the supervisor's control stream as we acquire demonstrations.Injecting noise requires selecting the magnitude and direction of the noise appropriately. Intuitively, the noise should approximate the error of the trained robot's policy, so that the demonstrations have state distributions that are close to the one that the robot will encounter at test time. This may be challenging in robotic control tasks where the control signals are high-dimensional. We thus propose to approximate the optimal noise distribution by first selecting a parameterized noise model, and then iteratively optimizing the likelihood of the supervisor's noise-injected policy applying the robot's control. We propose DART( Disturbances for Augmenting Robot Trajectories): a noise injection variant of Behavior Cloning. We evaluate DART on the MuJoCo locomotion environments and a grasping in clutter task on a Toyota HSR robot. In the locomotion environments, where a continuous control robot is trained to walk forward, DART achieves parity with state of the art on-policy methods, such as DAgger.For high dimensional tasks like Humanoid, DART is 3x faster in computation time and during training only decreases the supervisor's cumulative reward by 5%, whereas DAgger executes policies that have80% less cumulative reward than the supervisor, thus suggesting only a small amount of noise is sufficient to reduce covariate shift.We then evaluate DART with four human supervisors, who trained a robot to perform a grasping in clutter task. In the task, a robot must reason from an eye-in-hand camera perspective how to push occluding objects away to reach a goal object. We observe that on average DART leads to a 62% performance increase over traditional Behavior Cloning. 
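Concretely, the proposed data collection amounts to executing a perturbed control while recording the supervisor's intended control as the label. A minimal sketch under a Gaussian noise model follows; the `env`/`supervisor` interfaces and helper name are assumed for illustration and are not part of the paper's code.

```python
import numpy as np

def collect_noisy_demo(env, supervisor, Sigma, T):
    """Roll out one demonstration with Gaussian noise injected into
    the supervisor's control stream: the state sequence is generated
    by the *noisy* controls, but each visited state is labeled with
    the supervisor's noise-free control."""
    dim_u = Sigma.shape[0]
    L = np.linalg.cholesky(Sigma + 1e-9 * np.eye(dim_u))  # jitter for safety
    x = env.reset()
    demo = []
    for _ in range(T):
        u_star = supervisor(x)                        # intended control
        u_noisy = u_star + L @ np.random.randn(dim_u)  # perturbed control
        demo.append((x, u_star))                       # label = clean control
        x = env.step(u_noisy)                          # drift off the nominal path
    return demo
```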
This paper contributes: 1) a new algorithm, DART, that sets the appropriate level of noise to inject into the supervisor's control stream 2) a theoretical analysis of noise injection, which illustrates when DART can improve performance over Behavior Cloning3) experimentation with algorithmic and human supervisors characterizing noise injection can reduce covariate shift.§ RELATED WORK§.§ Imitation LearningImitation Learning schemes are either off-policy or on-policy. In off-policy Imitation Learning, the robot passively observes the supervisor, andlearns a policy mapping states to controls by approximating the supervisor's policy. This technique has been successful, for instance, in learning visuomotor control policies for self-driving cars <cit.>. However, Pomerleau et al. observed that the self-driving car would steer towards the edge of the road during execution and not be able to recover <cit.>. Later, Ross and Bangell theoretically showed that this was due to the robot's distribution being different than the supervisor's, a property known as covariate shift, which caused errors to compound during execution <cit.>. Ross et al. <cit.> proposed DAgger, an on-policy method in which the supervisor iteratively provides corrective feedback on the robot's behavior. This alleviates the problem of compounding errors, since the robot is trained to identify and fix small errors after they occur. But despite their ability to address covariate shift, on-policy methods have been shown to be challenging for human supervisors <cit.> and require the robot to visit potentially dangerous regions of the state space during training <cit.>. Further, on-policy methods require retraining the policy from scratch after each round of corrections. Techniques have been proposed to address some of these limitations. Recently, Sun et al. proposed a gradient update for on-policy methods to alleviate the computation burden  <cit.>. However, it has been shown local gradient updates suffer in performance compared to full retraining of the policy <cit.>. Zhang et al. trained a classifier to predict when the robot is likely to error during execution and then proposed to transfer control over to the supervisor <cit.>. However, this approach inherits the other limitations of on-policy methods, the challenge for human supervisors and the computational burden.§.§ Noise Injection to Increase RobustnessIn robust control theory, Andereson et al. noted a phenomenon similar to the covariate shift when performing model identification <cit.>. They observed that once the model has been estimated and a control law has been optimized, the robotvisited regions of the state space in which it was unstable. A condition was proposed to correct for this known as persistance excitation, which requires the training data to be informative enough to learn a robust model  <cit.>.One way to achieve persistance excitation is to inject isotropic Gaussian noise into the control signal <cit.>. Due to its full-rank covariance matrix, such white noise can expose the system to any disturbance with positive probability <cit.>. In light of this, we propose injecting noise into the supervisor's policy in order to collect demonstrations that simulate the errors that the robot would make at test time. § PROBLEM STATEMENTThe objective of Imitation Learning is to learn a policy that matchesthe supervisor's policy.Modeling Choices and Assumptions:We model the system dynamics as Markovian and stochastic. 
We model the initial state as sampled from a distribution over the state space. We assume a known state space and set of actions. We also assume access to a robot, such that we can sample from the state sequences induced by a policy. Lastly, we assume access to a supervisor who can provide a demonstration of the task. Policies and State Densities. We denote by 𝒳 the set consisting of observablestates for a robot, and by 𝒰 the set of actions. We model dynamics as Markovian, such that the probability of visiting state _t+1∈𝒳 can be determined from the previous state _t∈𝒳 and action _t∈𝒰: p(_t+1| _0,_0, …_t, _t )=p(_t+1| _t, _t).We assume a probability density over initial states p(_0). An environment is thus defined as a specific instance of action and state spaces, initial state distribution, and dynamics. Given a time horizon T∈ℕ, a trajectory ξ is a finite sequence of T pairs of states visited and corresponding control inputs at these states, ξ = (_0,_0,_1,_1,…, _T), where _t∈𝒳 and _t∈𝒰 for each t. A policy is a measurable function π: 𝒳→𝒰 from states to controls.We consider policies π_θ:𝒳→𝒰 parameterized by some θ∈Θ. Under our assumptions, any such policy π_θ induces a probability density over the set oftrajectories of length T:p(ξ | π_θ)= p(_0)∏_t=0^T-1π_θ(_t|_t)p(_t+1|_t,_t) The term π_θ(_t|_t) indicates stochasticity in the applied policy and we consider this to be a user-defined distribution in which the deterministic output of the policy is a parameter in the distribution. An example distribution is ϵ-greedy, in which with probability ϵ a random control is applied instead of π_θ(_t).While we do not assume knowledge of the distributions corresponding to p(_t+1|_t,_t) or p(_0), we assume that we have a real robot or a simulator. Therefore, when `rolling out' trajectories under a policy π_θ, we utilize the robot or the simulator to sample from the resulting distribution over trajectories rather than estimating p(ξ|π_θ) itself. Objective. In Imitation Learning, we do not assume access to a reward function, like we do in Reinforcement Learning <cit.>, but instead a supervisor, π_θ^*, where θ^* may not be contained in Θ. We assume the supervisor achieves a desired level of performance on the task, although it may not be optimal. We measure the difference between controls using a surrogate loss l : 𝒰×𝒰→ℝ <cit.>. A commonly considered surrogate loss is the squared L2-norm on the control vectors l(_1,_2) = ||_1- _2||^2_2 <cit.>. We measure total loss along a trajectory with respect to two policies π_θ_1 and π_θ_2 by J(θ_1, θ_2 | ξ) = ∑_t=0^T-1 l(π_θ_1(_t),π_θ_2(_t)). The objective of Imitation Learning is to minimize the expected surrogate loss along the distribution induced by the robot's policy: θ E_p(ξ|π_θ) J(θ,θ^* | ξ). In Eq. <ref>, the distribution on trajectories and the cumulative surrogate loss are coupled, which makes this a challenging optimization problem. The field of Imitation Learning has considered two types of algorithmic solutions to this objective, off-policy learning and on-policy learning <cit.>. A common off-policy technique is Behavior Cloning, which samples from the supervisor's distribution and performs expected risk minimization on the demonstrations: θ^R = θ E_p(ξ|π_θ^*) J(θ,θ^* | ξ).The performance of the policy π_θ^R can be written in terms of the following decomposition: E_p(ξ |π_θ^R) J(θ^R,θ^*|ξ)= E_p(ξ |π_θ^R) J(θ^R,θ^*| ξ) -E_p(ξ |π_θ^*) J(θ^R,θ^*|ξ)_Shift + E_p(ξ |π_θ^*) J(θ^R,θ^*| ξ) _Loss,which corresponds to the covariate shift and the standard loss. 
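The Behavior Cloning estimator θ^R above is an ordinary empirical-risk regression under the squared L2 surrogate. A minimal sketch follows, with a ridge-regularized linear policy class chosen purely for illustration (the paper itself uses neural network policies).

```python
import numpy as np

def behavior_clone(states, controls, reg=1e-6):
    """Least-squares fit of a linear policy u = theta^T x to
    supervisor demonstrations, i.e. the minimizer of
    sum_t ||pi_theta(x_t) - u_t||_2^2 (plus a small ridge term)."""
    X = np.asarray(states)     # (N, dim_x)
    U = np.asarray(controls)   # (N, dim_u)
    theta = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ U)
    return lambda x: x @ theta
```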
In this work, we focus on minimizing the covariate shift. For references on how to minimize the standard loss see  <cit.>. In the next section, we will show how noise injection can be used to reduce covariate shift.§ OFF-POLICY IMITATION LEARNING WITH NOISE INJECTIONr0.5 < g r a p h i c s > Robot Learning to reach a goal state _G. The grey denotes the distribution over trajectories.Left: Off-Policy learning in which the supervisor, the orange arrows, provides demonstrations. The robot, the teal arrows, deviates from the distributions and incurs high error. Middle: On-Policy which samples from the current robot's policy, the light teal arrows, to receive corrective examples from the supervisor.Right: DART, which injects noise to widen the supervisor's distribution and provides corrective examples. DART is off-policy but robust. Noise injected into the supervisor's policy simulates error occurring during testing. Under this noise, the supervisor is forced to take corrective actions in order to successfully perform the task. The corrective examples allow the robot to learn how to recover when it deviates from the supervisor's distribution. However, because the demonstrations are still concentrated around the supervisor's policy, it can have less cognitive load on the supervisor than on-policy methods, which are concentrated around the robot's policy.The intuition of providing corrective examples during data collection is shown in Fig. 1.Denote by p(ξ|π_θ^*,ψ) a distribution over trajectories with noise injected into the supervisor's distribution π_θ^*(|,ψ). The parameter ψ represents the sufficient statistics that define the noise distribution. For example, if Gaussian noise is injected parameterized by ψ, π_θ^*(|,ψ) = 𝒩(π_θ^*(), Σ).Similar to Behavior Cloning,we can sample demonstrations from the noise-injected supervisor and minimize the expected loss via standard supervised learning techniques. This can be written as follows: θ^R = θ E_p(ξ|π_θ^*,ψ) J(θ,θ^* | ξ) Eq. <ref>, though, does not explicitly minimize the covariate shift for arbitrary choice ofψ.In order to reduce covariate shift, we introduce the following Lemma to bound the shift. We can then attempt to optimize this bound with respect to ψ. This bound holds when the surrogate loss ∀_1, _2 ∈𝒰 l(_1,_2) ∈ [0,1]. Examples of when this occurs is if the robot has a discrete set of actions orbounded continuous controls and they are normalized during learning. If ∀_1, _2 ∈𝒰,0 ≤ l(_1,_2) ≤ 1 the following is true for a time horizon of T|E_p(ξ|π_θ^*,ψ) J(θ^R,θ^* | ξ) - E_p(ξ|π_θ^R) J(θ^R,θ^* | ξ)| ≤ T √(1/2𝒟_𝒦ℒ(p(ξ|π_θ^R),p(ξ|π_θ^*,ψ)))[See Appendix for Proof]To minimize the covariate shift,we can optimize the KL-divergence, 𝒟_𝒦ℒ, with respect to the sufficient statistics, ψ, of the noise distribution.This can be reduced to the following optimization problem:ψE_p(ξ|π_θ^R) -∑^T-1_t=0[π_θ^*(π_θ^R(_t)|_t,ψ)] This above optimization problem decreases the negative log-likelihood of therobot's control during data collection. Thus, we want to choose a noise parameter that makes the supervisor's distribution closer to the final robot's distribution. A clear limitation of this optimization problem is that it requires knowing the final robot's distribution p(ξ|π_θ^R), which is determined only after the data is collected.The dependency on this term creates achicken and egg problem. In the next section, we present DART, which applies an iterative approach to the optimization. 
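For Gaussian noise, the maximum-likelihood problem above reduces to an empirical covariance of the robot-supervisor control residuals, anticipating the closed form derived in the next section. A sketch (the trajectory containers and function name are assumed for illustration):

```python
import numpy as np

def fit_noise_covariance(trajs, robot_policy, supervisor):
    """MLE of Sigma: average outer product of the disagreement
    between the current robot policy and the supervisor along
    states sampled from the noisy supervisor distribution."""
    residuals = []
    for traj in trajs:            # traj: list of visited states x_t
        for x in traj:
            residuals.append(robot_policy(x) - supervisor(x))
    R = np.asarray(residuals)     # (N, dim_u)
    return R.T @ R / len(R)
```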
§.§ DART: Disturbances for Augmenting Robot TrajectoriesThe above objective cannot be solved because p(ξ|π_θ^R) is not known until after the robot has been trained.We can instead iteratively sample from the supervisor's distribution with the current noise parameter, ψ_k, and minimize the negative log-likelihood of the noise-injected supervisor taking the current robot's, π_θ̂, control.ψ̂_k+1 = ψE_p(ξ|π_θ^*, ψ_k) -∑^T-1_t=0[π_θ^*(π_θ̂(_t)|_t,ψ)] However, the above iterative process can be slow to converge becauseit is optimizing the noise with respect to the current robot's policy and are optimizing on samples from the supervisor's distribution. We can obtain a better estimate by observing that the supervisor should simulate as much expected error as the final robot policy, E_p(ξ|π_θ^R) J(θ^R,θ^*|ξ). It is possible that we have some knowledge of this quantity from previously training on similar domains.Inspired by shrinkage estimation <cit.>, we can obtain a better estimate of ψ̂ by a target value based on the anticipated final robot's error and subsequently scaling the current simulated error to this level. DenoteE_p(ξ|π_θ^*, ψ̂)∑^T-1_t=0 l(_t, π_θ^*(_t)), the expected deviation from the supervisor's control for a given surrogate loss. The scaling can be denoted ψ^α_k+1 = ψ̂_k+1 * β≥ 0|α- E_p(ξ|π_θ^*, β*ψ̂_k+1)∑^T-1_t=0 l(_t, π_θ^*(_t))|where α corresponds to an estimated guess of the robot's final error and ψ^α_k+1 corresponds to the sufficient statistic after the scaling has been applied. For Gaussian noise with covariance matrix Σ, the expected deviation has a closed form solution for the squared Euclidean loss: E_p(ξ|π_θ^*, ψ̂)∑^T-1_t=0 l(_t, π_θ^*(_t))) = T(Σ).Thus, a knownclosedform solution can be derived for both Eq. 3 and Eq. 4:Σ^α_k+1 = α/T(Σ̂_k+1)Σ̂_k+1, Σ̂_k+1= 1/TE_p(ξ|π_θ^*, Σ_k^α)∑^T-1_t=0 ( π_θ̂(_t) - π_θ^*(_t))( π_θ̂(_t) - π_θ^*(_t)) ^T. r0.5< g r a p h i c s >In Algorithm 1, we present DART, which iteratively solvesfor ψ^α_k+1 and to collect data and train the final robot policy. First N demonstrations are collected from a supervisor with an initial noise parameter set. Then a policy, π_θ̂, is learned via empirical risk minimization. The learned policy is then used to optimize Eq. <ref> based on sample estimates and the outcome is then scaled based on Eq. <ref>. Once the noise term is found N demonstrations are collected with the noise-injected supervisor and the robot is trained on the aggregate dataset. The algorithm is repeated for K iterations. §.§ Theoretical Analysis of DARTCompared to Behavior Cloning, DART reduces covariate shift by simulating errors the robot is likely to make. We show the following Proposition to provide better intuition for when this improvement occurs.Given a deterministic supervisor's policy with Gaussian noise injected, π^*(|,ψ) = 𝒩(π^*(),Σ), when the robot policy has error E_p(ξ|π_θ^R) J(θ^R,θ^*|ξ) > 0. The following holds: 𝒟_𝒦ℒ(p(ξ|π_θ^R),p(ξ|π_θ^*,Σ)) < 𝒟_𝒦ℒ(p(ξ|π_θ^R),p(ξ|π_θ^*)) where the right hand side corresponds to Behavior Cloning.[See Appendix for Proof] Proposition <ref> shows that DART reduces an upper bound on covariate shift, or the distance between the distributions, more so than Behavior Cloning, when the robot has non-zero error with respect to the supervisor. We note this assumes the supervisor is not affected by the noise injection, which for human supervisor's this might not always be true.Thus, noise should be injected if the robot is expected to make some errors after training. 
However, noise injection will offer no improvement if the robot can represent the supervisor perfectly and collect sufficient data.A similar result was shown in Laskey et al. <cit.> for when DAgger is expected to improve over Behavior Cloning. In practical applications, it is unlikely to obtain sufficient data to perfectly represent the supervisor, which highlights the need for DART.§ EXPERIMENTSOur experiments are designed to explore: 1) Does DART reduce covariate shift as effectively as on-policy methods? 2) How much does DART reduce the computational cost and how much does it decay the supervisor's performance during data collection? 3) Are human supervisors able to provide better demonstrations with DART?§.§ MuJoCo Locomotion EnvironmentsTo test how well DART performs compared with an on-policy methods such as DAgger and off-policy methods like Behavior Cloning, we use MuJoCo locomotion environments <cit.>.The challenge for these environments is that the learner does not have access to the dynamics model and must learn a control policy that operates in a high-dimensional continuous control space and moves the robot in a forward direction without falling. We use an algorithmic supervisor in these domains: a policy which is trained with TRPO <cit.> and is represented as a neural network with two hidden layers of size 64. Note, while TRPO uses a stochastic policy, the supervisor is the deterministic mean of the learner. This is the same supervisor used in  <cit.>. For all 4 domains, we used the same neural network as the supervisor and trained the policies in Tensorflow. At each iteration we collected one demonstration. In order to make the task challenging for Imitation Learning, we used the same technique as in <cit.>, which is to sub-sample 50 state and control pairs from the demonstrated trajectories, making the learners receive less data at each iteration.We compare the following five techniques: Behavior Cloning, where data is collected from the supervisor's distribution without noise injected; DAgger, which is an on-policy method that stochastically samples from the robot's distribution and then fully retrains the robot's policy after every demonstration is collected; DAgger-B, which is DAgger but the retraining is done intermittently to reduce computation time; Isotropic-Gaussian which injects non-optimized isotropic noise; and DART. See supplement material for how the hyper-parameters for each method are set. We used the following MuJoCo environments:Walker, Hopper, Humanoid, and Half-Cheetah. The size of the state and control space for each task is { |𝒳| = 17, |𝒰| = 6 }, { |𝒳| = 11, |𝒰| = 3 }, { |𝒳| = 376, |𝒰| = 17 }, { |𝒳| = 117, |𝒰| = 6 }, respectively. All experiments were run on a MacBook Pro with an 2.5 GHz Intel Core i7 CPU.We measured the cumulative reward of each learned policy by rolling it for 500 timesteps, the total computation time for each method and the cumulative reward obtained during learning.Fig. <ref> shows the results. In all domains, DART achieves parity with DAgger, whereas Behavior Cloning and DAgger-B are below this performance level in Walker and Humanoid.For Humanoid, DART is 3x faster in computation time and during training only decreases the supervisor's cumulative reward by 5%, whereas DAgger executes policies that have over 80% less cumulative reward than the supervisor. 
The reason for this is that DAgger requires constantly updating the current robot policy and forces the robot into sub-optimal states. While one way to reduce this computation is to decrease the number of times the policy is updated, DAgger-B illustrates that doing so can significantly deteriorate performance. Lastly, naively applying isotropic noise does not perform well, and leads to unsafe policies during execution, which suggests the need for optimizing the level of noise.

§.§ Robotic Grasping in Clutter

We evaluate DART with human supervisors in a grasping in clutter task on a Toyota HSR robot. We consider a task inspired by the Amazon Picking Challenge <cit.>. The goal of the task is for the robot to retrieve a goal object from a cupboard. The task is challenging because the goal object is occluded by obstacle objects and the robot must reason about how to clear a path based on observations taken from an eye-in-hand perspective. The objects are 6 common household food items, which consist of boxes and bottles with varying textures and mass distributions. The target object is fixed to always be a mustard bottle. The robot, task and image viewpoint are shown in Fig. <ref>. See the supplement material for additional information on the task.

We use 4 supervisors who have robotics experience, but not specifically in the field of Imitation Learning, and compare Behavior Cloning and DART. When performing the study, we first collect N=10 demonstrations with Behavior Cloning (i.e. no noise) and then, in a counter-balanced ordering, collect N=30 more demonstrations with each technique. Our experiment was within-subject, with every supervisor performing all three methods. We only updated the noise parameter after the first 10 demonstrations (i.e. K=2). The final robot policy was then trained on the total of 40 demonstrations. We consider two different choices of α: α = 3 T tr(Σ̂_1) and α = 6 T tr(Σ̂_1). These choices correspond to the intuition that with small datasets in high dimensional image space the robot will have significant shift from the current loss on the supervisor's distribution. Through the rest of the paper, we will write our choices as α = 3 and α = 6 for brevity in notation.

During policy evaluation, we measured success as 1) the robot is able to identify the goal object and 2) there being a clear path between its gripper and the object. Once these conditions are identified, as detailed in the supplement material, an open-loop motion plan is generated to execute a grasp around the target object. In Fig. <ref>, we report the average performance of the three techniques, each evaluated over 20 trials with different sampled initial states. DART with α = 3 performs the best out of the three techniques, with a 79% success rate over traditional Behavior Cloning's 49% success rate. Interestingly, DART with α = 6 performs better than Behavior Cloning, but only has a 72% success rate. This may suggest that this level of noise was too high for the human supervisors.

§ CONCLUSION

This paper considers injecting noise into the supervisor's demonstrations to minimize covariate shift. Our algorithm, DART, provides the robot with corrective examples at the boundary of the supervisor's policy. By being concentrated around the supervisor's policy, it collects demonstrations without visiting highly sub-optimal states and places less burden on human supervisors tele-operating. We demonstrate it achieves parity with on-policy methods in simulated MuJoCo domains and is significantly better than traditional Behavior Cloning on a real robot.
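For readers implementing the method, the full training loop of Algorithm 1 can be summarized by composing the pieces sketched earlier; `collect_noisy_demo`, `behavior_clone`, and `fit_noise_covariance` are the illustrative helpers introduced above, and the final rescaling follows the trace rule Σ^α = (α / (T tr Σ̂)) Σ̂.

```python
import numpy as np

def dart(env, supervisor, T, N, K, alpha, dim_u):
    """Sketch of the DART loop: collect noisy demonstrations, fit the
    policy, re-estimate the noise covariance, and rescale it so the
    simulated error matches the prior alpha."""
    Sigma = 1e-6 * np.eye(dim_u)        # (near) noise-free first batch
    demos = []
    for _ in range(K):
        demos += [collect_noisy_demo(env, supervisor, Sigma, T)
                  for _ in range(N)]
        states = [x for d in demos for (x, _) in d]
        controls = [u for d in demos for (_, u) in d]
        policy = behavior_clone(states, controls)
        # the paper estimates Sigma on held-out demos; we reuse all demos here
        Sigma = fit_noise_covariance([[x for (x, _) in d] for d in demos],
                                     policy, supervisor)
        Sigma *= alpha / (T * np.trace(Sigma))   # trace rescaling
    return policy
```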
For code and more information see <http://berkeleyautomation.github.io/DART/>§ ACKNOWLEDGEMENTSThis research was performed at the AUTOLAB at UC Berkeley in affiliation with the Berkeley AI Research (BAIR) Lab, the Berkeley Deep Drive (BDD) Initiative, the Real-Time Intelligent Secure Execution (RISE) Lab, and the CITRIS "People and Robots" (CPAR) Initiative and by the Scalable Collaborative Human-Robot Learning (SCHooL) Project, NSF National Robotics Initiative Award 1734633. Additional donations were from donations from Toyota, Siemens, Google, Cisco, Samsung, Autodesk, Intuitive Surgical, Amazon,and IBM. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Sponsors. We thank our colleagues who provided helpful feedback and suggestions, in particular Kevin Jamieson, Sanjay Krishnan, Jeff Mahler and Zoey McCarthy.plainnat§ APPENDIX §.§ Proofs for Theoretical Analysis If ∀_1, _2 ∈𝒰,0 ≤ l(_1,_2) ≤ 1, the following is true: |E_p(ξ|π_θ^*,ψ) J(θ,θ^* | ξ) - E_p(ξ|π_θ) J(θ,θ^* | ξ)| ≤ T √(1/2𝒟_𝒦ℒ(p(ξ|π_θ),p(ξ|π_θ^*,ψ))) Proof:|E_p(ξ|π_θ^*,ψ) J(θ,θ^* | ξ) - E_p(ξ|π_θ) J(θ,θ^* | ξ)| ≤ T||p(ξ|π_θ^*,ψ) - p(ξ|π_θ)||_TV≤ T √(1/2𝒟_𝒦ℒ(p(ξ|π_θ),p(ξ|π_θ^*,ψ))) The first line of the proof follos from Lemma 4.2, which is stated below and the fact ∀_1,_20 ≤ l(_1,_2) ≤ 1. The second line follows from Pinsker's Inequality.▪Let P and Q be any distribution on 𝒳. Let f:𝒳→ [0,B]. Then|E_P[f(x)] - E_Q[f(x)]| ≤ B ||P-Q||_TVProof:|E_P[f(x)] - E_Q[f(x)]| = |∫_x p(x)f(x)dx - ∫_x q(x)f(x)dx| = |∫_x (p(x)-q(x))f(x)dx|=|∫_x (p(x)-q(x))(f(x) - B/2)dx + B/2∫_x (p(x) - q(x))dx| ≤∫_x |p(x)-q(x)||f(x) - B/2|dx ≤B/2∫_x |p(x)-q(x)|dx≤ B ||P-Q||_TV The last line applies the definition of total variational distance, which is ||P-Q||_TV = 1/2∫_x |p(x)-q(x)|. ▪Given a supervisor's policy with Gaussian noise injected, π^*(|,ψ) = 𝒩(π^*(),Σ), when the robot policy has error E_p(τ|π_θ) J(θ,θ^*|ξ) > 0. The following is true: 𝒟_𝒦ℒ(p(ξ|π_θ),p(ξ|π_θ^*,Σ)) < 𝒟_𝒦ℒ(p(ξ|π_θ),p(ξ|π_θ^*)) where the right hand side corresponds to Behavior Cloning.Proof: We begin the proof with the definition of the KL-divergence: 𝒟_𝒦ℒ(p(ξ|π_θ),p(ξ|π_θ^*))=E_p(ξ|π_θ) ∏^T-1_t=0π_θ(_t|_t)/∏^T-1_t=0π_θ^*(_t|_t)=E_p(ξ|π_θ) ∑^T-1_t=π_θ(_t|_t)/π_θ^*(_t|_t) We know that if E_p(τ|π_θ) J(θ,θ^*|ψ) = 0, then the KL divergence would be zero because at all states under the distribution the two policies would perfectly agree. However, if they do not agree then the KL-divergence becomes the there ∃ξ such that p(ξ|π_θ*) = 0 when p(ξ|π_θ) > 0, which implies <cit.>. 𝒟_𝒦ℒ(p(ξ|π_θ),p(ξ|π_θ^*)) = ∞ One technique to correct for this is to ensure that the supervisor has finite probability of applying any control (i.e. ∀ξ p(ξ|π_θ*) > 0 when p(ξ|π_θ) > 0.Injecting Gaussian noise ensures non-zero probability for all controls at a given trajectory and subesequently:𝒟_𝒦ℒ(p(ξ|π_θ),p(ξ|π_θ^*,Σ)) < ∞▪ §.§ Derivation for Optimizing NoiseWhen optimizing noise, we need to first optimize Eq. <ref>, which finds the noise parameter that maximizes the likelihood of the current robot policy and then we can scale the parameter to our prior of the error in the final robot's policy. In this section, we re-derive the closed from solutions for the Gaussian and ϵ-greedy scenario for the MLE objective of the current robot's control. We then derive a solution for Eq. <ref> for the Gaussian case. 
Gaussian MLE We begin with the continuous case where the supervisor's policy is stochastic with distribution π_θ^*(_t | _t, Σ) defined by 𝒩 (π_θ^*(_t, Σ) and the learner's policy is given by π_θ̂. Then the optimization problem isΣ̂= _Σ - E_p(ξ | π_θ^*)∑_t = 0^T - 1log[ π_θ^*( π_θ̂ (_t) | _t, Σ )]= _Σ E_p(ξ | π_θ^*) - T/2logΣ^-1 + 1/2∑_t = 0^T - 1(π_θ̂(_t) - π_θ^*(_t) )^T Σ^-1( π_θ̂(_t) - π_θ^*(_t) ).We take the derivative with respect to Σ and set it equal to zero:- TΣ + E_p(ξ | π_θ^*)∑_t = 0^T - 1(π_θ̂(_t) - π_θ^*(_t) ) ( π_θ̂(_t) - π_θ^*(_t) )^T.Then the optimal covariance matrix is Σ̂= 1/T E_p(ξ | π_θ^*)∑_t = 0^T - 1(π_θ̂(_t) - π_θ^*(_t) ) ( π_θ̂(_t) - π_θ^*(_t) )^T.ϵ-greedy MLE For the ϵ-greedy case in the discrete action domain with a finite number of controls K, the probability of the supervisor choosing a control at a particular state is given byπ_θ^*(_t | _t, ϵ) =1 - ϵ π_θ^* (_t) = _tϵ/K - 1 otherwise. For any _1, _2 ∈𝒰, let the loss function l(_1, _2) be defined as the indicator function so that l(_1, _2) = 0 for _1 = _2 and 1 otherwise. Then the optimization problem in Eq. <ref> becomes ϵ̂= _ϵ - E_p(ξ | π_θ^*)∑_t = 0^T - 1log[ π_θ^*( π_θ̂ (_t) | _t, ϵ )]= _ϵ - E_p(ξ | π_θ^*) J(θ̂, θ^* | ξ) log[ ϵ/K - 1] + (T - J(θ̂, θ^* | ξ))log[ 1 - ϵ]The previous line follows from the fact that along any given trajectory ξ from π_θ^*, the probability of ξ can be written using only J(θ̂, θ^* | ξ), which is the number times that π_θ̂ and π_θ^* disagree along ξ. Note that this is convex in ϵ. By taking the derivative with respect to ϵ and setting it equal to zero, we get E_p(ξ | π_θ^*) J(θ̂, θ^* | ξ)/ϵ - T - E_p(ξ | π_θ^*) J(θ̂, θ^* | ξ)/1 - ϵ= 0.Therefore, we haveϵ̂= 1/TE_p(ξ | π_θ^*) J(θ̂, θ^* | ξ). Gaussian Scaling Given an optimized covaraince matrix Σ̂, we wish to scale it to some prior over what we expect the robot's final error to be.β≥ 0|α- E_p(ξ|π_θ^*, β*Σ̂)∑^T-1_t=0 l(_t, π_θ^*(_t)))|= β≥ 0|α- E_p(ξ|π_θ^*, β*Σ̂)∑^T-1_t=0 ||_t- π_θ^*(_t)))||^2_2= β≥ 0|α- β T(Σ̂)| Where the last line followed from known properties of the multi-variate Gaussian and the fact that the level of noise is independent of the current state _t. We can derive a solution for β. = α/T(Σ̂) §.§ Additional Information for MuJoCo ExperimentsThe following hyperparameters were used for all domains. For DAgger and DAgger-B, we swept several values for β, the stochastic mixing hyperparameter, and found that setting β = 0.5 yields the best results. For noise injection algorithms, we chose α= T(Σ̂_k), which corresponds to no additional knowledge being used. For Isotropic-Gaussian noise injection, we set Σ_k^α = I for all k. Additionally, the covariance matrix for DART was always estimated using held-out demonstrations collected in prior iterations.In the Walker, Hopper and HalfCheetah domains, we ran these algorithms for 20 iterations, evaluating the learners at 5, 10, 15, and 20 iterations. One initial supervisor demonstration was always collected in the first iteration. To remain consistent, we updated DAgger-B and DART at the same iterations: iteration 2 and iteration 8.In the Humanoid domain, we ran the algorithms for 200 iterations and evaluated the learners at iterations 50, 100, 150, and 200. Again, we collected one initial supervisor demonstration for DAgger and Isotropic-Gaussian. 
For DAgger-B and DART, we collected 50 initial supervisor demonstrations before retraining every 25 iterations.§.§ Additional Experiments in MuJoCo DomainsTo better understand how DART was able to achieve the reported performance, we asked the additional questions: 1) Does DART reduce the covariate shift? 2) How well do other noise injection methods perform against DART?Reducing Covariate Shift To test how effective DART was at reducing the shift, we evaluate the surrogate loss on both the supervisor's distribution p(ξ | π_θ^*, ψ̂) and the robot's distribution p(ξ | π_θ^*). In Fig. <ref>, we report the losses for each algorithm. Both DAgger and DART are capable of reducing the covariate shift, which is reflected by the smaller disparity between the losses on the supervisor distributions and those on the robot distributions.Random Covariance Matrices We also compared DART against learners that obtained data from supervisors with random covariance matrices. We define the simulated error of a noisy supervisor to be the expected error of the supervisor on its noisy distribution. In the Gaussian case, note that the expected squared L2 error that the noisy supervisor simulates is equivalent to the trace of the covariance matrix. That is, E_p(ξ | π_θ^*, ψ̂)[ 1/T∑_t = 0^T - 1_t- π_θ^*(_t) _2^2]= tr E_p(ξ | π_θ^*, ψ̂)[ 1/T∑_t = 0^T - 1 (_t- π_θ^*(_t))(_t- π_θ^*(_t))^T]= tr (Σ),where ψ̂ corresponds to the parameters of the multivariate Gaussian distribution with zero mean and covariance matrix Σ. Thus, the amount of simulated error can be tuned by tuning the trace of the covariance matrix. We used this fact generate random covariance matrices drawn from the Wishart distribution and scaled the traces of the covariance matrices such that they simulated losses of 0.005, 0.5, and 5.0 for Walker, Hopper, and HalfCheetah and 0.005, 0.5, and 10.0 for Humanoid. The results, reported in Fig. <ref>, suggest that, as long as the simulated error is carefully chosen, randomly sampled covariance matrices may perform as well as DART; however it is not often known in practice what the simulated error should be before training the learner and evaluating it.§.§ Additional Information on Robot SetupDuring policy evaluation, we measured success as 1) the robot is able to identify the goal object and 2) there being a clear path between its gripper and the object. Once these conditions are identified an open-loop motion plan is generated to execute a grasp around the target object. In order for the robot to identify the target object, we use the Faster R-CNN trained on the PASCAL VOC 2012 dataset, which has a bottle object category <cit.>. Once the mustard bottle has been identified and the a path is cleared a human supervisor tells the robot to execute the open-loop motion plan towards a fix position that the goal object is at. The human supervisor is used to ensure that the success criteria is not prone to error in the system, but the supervisor is blinded to which policy is being evaluated to prevent biases. We evaluate the policy 20 times on different sampled initial states. A video of the learned policy and the robot setup can be found here: <https://youtu.be/LfMD69llesg>.
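To make the closed forms in this appendix concrete, the discrete ϵ-greedy estimate ϵ̂ = (1/T) E[J] and the Gaussian trace scaling β = α / (T tr Σ̂) are each one short function. This is an illustrative sketch, not the experimental code; the 0-1 disagreement loss is assumed for the discrete case.

```python
import numpy as np

def epsilon_mle(trajs, robot_policy, supervisor):
    """eps_hat = (1/T) E[J]: mean per-step disagreement between the
    learner and the supervisor under the 0-1 surrogate loss."""
    disagreements, steps = 0, 0
    for traj in trajs:
        for x in traj:
            disagreements += int(robot_policy(x) != supervisor(x))
            steps += 1
    return disagreements / steps

def scale_covariance(Sigma_hat, alpha, T):
    """beta = alpha / (T tr(Sigma_hat)), so the scaled noise simulates
    an expected squared-L2 deviation of alpha over the horizon."""
    return (alpha / (T * np.trace(Sigma_hat))) * Sigma_hat
```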
On generalization of D'Aurizio-Sándor trigonometric inequalities with a parameter

Li-Chang Hung, Department of Mathematics, National Taiwan University, Taipei, Taiwan (lichang.hung@gmail.com)
Pei-Ying Li, Department of Finance, National Taiwan University, Taipei, Taiwan

Mathematics Subject Classification: 26D15, 26D99

In this work, we generalize the D'Aurizio-Sándor inequalities (<cit.>) using an elementary approach. In particular, our approach provides an alternative proof of the D'Aurizio-Sándor inequalities. Moreover, as an immediate consequence of the generalized D'Aurizio-Sándor inequalities, we establish the D'Aurizio-Sándor-type inequalities for hyperbolic functions.

§ INTRODUCTION

Based on infinite product expansions and inequalities on series and the Riemann zeta function, D'Aurizio (<cit.>) proved the following inequality:
\[
\frac{1-\frac{\cos x}{\cos\frac{x}{2}}}{x^2}<\frac{4}{\pi^2},
\]
where $x\in(0,\pi/2)$. Using an elementary approach, Sándor (<cit.>) offered an alternative proof of (<ref>) by employing trigonometric inequalities and an auxiliary function. In the same paper, Sándor also provided the converse to (<ref>):
\[
\frac{1-\frac{\cos x}{\cos\frac{x}{2}}}{x^2}>\frac{3}{8},
\]
where $x\in(0,\pi/2)$. In addition, Sándor found that the following analogous inequality holds true for the case of sine functions:

Theorem 1.1. The two double inequalities
\[
\frac{3}{8}<\frac{1-\frac{\cos x}{\cos\frac{x}{2}}}{x^2}<\frac{4}{\pi^2}
\quad\text{and}\quad
\frac{4}{\pi^2}\left(2-\sqrt{2}\right)<\frac{2-\frac{\sin x}{\sin\frac{x}{2}}}{x^2}<\frac{1}{4}
\]
hold for any $x\in(0,\pi/2)$.

Throughout this paper, we denote $\frac{1-\cos x/\cos\frac{x}{p}}{x^2}$ and $\frac{p-\sin x/\sin\frac{x}{p}}{x^2}$ by $f_p^c(x)$ and $f_p^s(x)$, respectively:
\[
f_p^c(x)=\frac{1-\frac{\cos x}{\cos\frac{x}{p}}}{x^2},
\qquad
f_p^s(x)=\frac{p-\frac{\sin x}{\sin\frac{x}{p}}}{x^2}.
\]
Our aim is to generalize the D'Aurizio-Sándor inequalities for the case of $f_p^c(x)$ and $f_p^s(x)$ as follows:

Theorem 1.2. Let $0<x<\pi/2$. Then the two double inequalities
\[
\frac{4}{\pi^2}<\frac{1-\frac{\cos x}{\cos\frac{x}{p}}}{x^2}<\frac{p^2-1}{2p^2}
\quad\text{and}\quad
\frac{4}{\pi^2}\left(p-\csc\frac{\pi}{2p}\right)<\frac{p-\frac{\sin x}{\sin\frac{x}{p}}}{x^2}<\frac{p^2-1}{6p}
\]
hold for $p=3,4,5,\dots$. In particular, the double inequality (<ref>) for $f_p^s$ remains true when $p=2$, while the double inequality (<ref>) for $f_p^c$ is reversed when $p=2$.

The remainder of this paper is organized as follows. Section <ref> is devoted to the proof of Theorem <ref> and an alternative proof of Theorem <ref>. In Section <ref>, we establish an analogue of Theorem <ref> for hyperbolic functions. As an application of Theorem <ref>, in Section <ref> we apply inequality (<ref>) to the Chebyshev polynomials of the second kind and establish a trigonometric inequality.

§ PROOF OF THE MAIN RESULTS

At first we will prove the following lemma. The lemma provides expressions of the higher-order derivative $\frac{d^2}{dx^2}\left(x^3\frac{d}{dx}f_p^{\ast}(x)\right)$ involving $f_p^{\ast}(x)$ ($\ast=c,s$), which are helpful in proving Theorem <ref>. We note that the sign of $\frac{d^2}{dx^2}\left(x^3\frac{d}{dx}f_p^{\ast}(x)\right)$ plays a crucial role in proving Theorem <ref>.

Lemma 2.1. Let $0<x<\pi/2$ and $k=1,2,3,\dots$. Then when $p\in\mathbb{R}$ and $p\neq 0$, we have

(i)
\[
\frac{d^2}{dx^2}\left(x^3\frac{d}{dx}f_p^{c}(x)\right)
=-\frac{x\csc^4\left(\frac{x}{p}\right)}{8p^3}\Big((p+1)^3\sin\left(x-\tfrac{3x}{p}\right)+(p-1)^3\sin\left(x+\tfrac{3x}{p}\right)
+(3p^3+3p^2-15p-23)\sin\left(x-\tfrac{x}{p}\right)+(3p^3-3p^2-15p+23)\sin\left(x+\tfrac{x}{p}\right)\Big);
\]

(ii)
\[
\frac{d^2}{dx^2}\left(x^3\frac{d}{dx}f_p^{s}(x)\right)
=\frac{x\csc^4\left(\frac{x}{p}\right)}{8p^3}\Big((p+1)^3\sin\left(x-\tfrac{3x}{p}\right)-(p-1)^3\sin\left(x+\tfrac{3x}{p}\right)
+(-3p^3-3p^2+15p+23)\sin\left(x-\tfrac{x}{p}\right)+(3p^3-3p^2-15p+23)\sin\left(x+\tfrac{x}{p}\right)\Big).
\]

In particular,

(iii) when $p=2k$,
\[
\frac{d^2}{dx^2}\left(x^3\frac{d}{dx}f_p^{s}(x)\right)=-\frac{x}{4k^3}\sum_{j=0}^{k-1}(2j+1)^3\sin\left(\frac{2j+1}{2k}\,x\right);
\]

(iv) when $p=2k+1$,
\[
\frac{d^2}{dx^2}\left(x^3\frac{d}{dx}f_p^{c}(x)\right)=-\frac{16x}{(2k+1)^3}\sum_{j=1}^{k}j^3\sin\left(\frac{2j}{2k+1}\,x\right)(-1)^{j-1},
\qquad
\frac{d^2}{dx^2}\left(x^3\frac{d}{dx}f_p^{s}(x)\right)=-\frac{16x}{(2k+1)^3}\sum_{j=1}^{k}j^3\sin\left(\frac{2j}{2k+1}\,x\right).
\]

(v) For $\ast=c,s$ and $p\in\mathbb{R}\setminus\{0\}$,
\[
\lim_{x\to 0}\frac{d}{dx}\left(x^3\frac{d}{dx}f_p^{\ast}(x)\right)=\lim_{x\to 0}x^3\frac{d}{dx}f_p^{\ast}(x)=0.
\]

Proof. (i), (ii) and (v) follow directly from calculations using elementary calculus. In particular, trigonometric addition formulas are used in proving (i) and (ii). To prove (iii), we claim
\[
-\frac{x}{4k^3}\sum_{j=0}^{k-1}(2j+1)^3\sin\left(\frac{2j+1}{2k}\,x\right)=-x\,\frac{d^3}{dx^3}\left(\frac{\sin x}{\sin\frac{x}{2k}}\right).
\]
Indeed, we rewrite
\[
\frac{1}{4k^3}\sum_{j=0}^{k-1}(2j+1)^3\sin\left(\frac{2j+1}{2k}\,x\right)=2\,\frac{d^3}{dx^3}\left(\sum_{j=0}^{k-1}\cos\left(\frac{2j+1}{2k}\,x\right)\right).
\]
On the other hand, making use of Euler's formula $e^{iz}=\cos z+i\sin z$ leads to an alternative expression of the left-hand side of (<ref>):
\[
\sum_{j=0}^{k-1}\cos\left(\frac{2j+1}{2k}\,x\right)
=\sum_{j=0}^{k-1}\Re\left\{e^{i\left(\frac{x}{2k}+\frac{x}{k}j\right)}\right\}
=\Re\left\{e^{i\frac{x}{2k}}\sum_{j=0}^{k-1}\left(e^{i\frac{x}{k}}\right)^j\right\}
=\Re\left\{e^{i\frac{x}{2k}}\,\frac{1-e^{ix}}{1-e^{i\frac{x}{k}}}\right\}
=\Re\left\{e^{i\frac{x}{2k}}\,\frac{e^{i\frac{x}{2}}\left(e^{-i\frac{x}{2}}-e^{i\frac{x}{2}}\right)}{e^{i\frac{x}{2k}}\left(e^{-i\frac{x}{2k}}-e^{i\frac{x}{2k}}\right)}\right\}
=\Re\left\{e^{i\frac{x}{2}}\,\frac{\sin\frac{x}{2}}{\sin\frac{x}{2k}}\right\}
=\cos\frac{x}{2}\,\frac{\sin\frac{x}{2}}{\sin\frac{x}{2k}}
=\frac{\sin x}{2\sin\frac{x}{2k}},
\]
where $\Re\{z\}$ denotes the real part of $z$ and $i=\sqrt{-1}$. Now it suffices to show
\[
\frac{d^2}{dx^2}\left(x^3\frac{d}{dx}f_p^{s}(x)\right)=-x\,\frac{d^3}{dx^3}\left(\frac{\sin x}{\sin\frac{x}{2k}}\right).
\]
Using (<ref>) in (ii), this can be achieved by straightforward calculations. Thus (iii) is true. The proof of (iv) is similar, and we omit the details. We complete the proof of Lemma <ref>.

We provide here an alternative proof of the two double inequalities in Theorem <ref>. To this end, we show that for $x\in(0,\pi/2)$, $f_2^c(x)=\frac{1-\cos x/\cos\frac{x}{2}}{x^2}$ is strictly increasing while $f_2^s(x)=\frac{2-\sin x/\sin\frac{x}{2}}{x^2}$ is strictly decreasing. These lead to the desired inequalities since it is easy to see that
\[
\lim_{x\to 0}f_2^c(x)=\frac{3}{8},\quad \lim_{x\to\pi/2}f_2^c(x)=\frac{4}{\pi^2},\quad
\lim_{x\to 0}f_2^s(x)=\frac{1}{4},\quad \lim_{x\to\pi/2}f_2^s(x)=\frac{4}{\pi^2}\left(2-\sqrt{2}\right).
\]
To see that $f_2^c(x)$ is strictly increasing, we employ (<ref>) in Lemma <ref> to obtain
\[
\frac{d^2}{dx^2}\left(x^3\frac{d}{dx}f_2^c(x)\right)
=-\frac{x}{64}\csc^4\left(\frac{x}{2}\right)\left(-44\sin\frac{x}{2}+5\sin\frac{3x}{2}+\sin\frac{5x}{2}\right)
=-\frac{x}{16}\csc^4\left(\frac{x}{2}\right)\sin\frac{x}{2}\,(\cos x-2)(\cos x+5)>0.
\]
As $\lim_{x\to 0}\frac{d}{dx}\left(x^3\frac{d}{dx}f_2^c(x)\right)=0$, it follows that $\frac{d}{dx}\left(x^3\frac{d}{dx}f_2^c(x)\right)>0$. We are led to $x^3\frac{d}{dx}f_2^c(x)>0$, i.e. $\frac{d}{dx}f_2^c(x)>0$, since $\lim_{x\to 0}x^3\frac{d}{dx}f_2^c(x)=0$. This shows that $f_2^c(x)$ is strictly increasing.

By using (<ref>) in Lemma <ref>, we have
\[
\frac{d^2}{dx^2}\left(x^3\frac{d}{dx}f_2^s(x)\right)=-\frac{x}{4}\sin\frac{x}{2}<0,
\]
from which we infer that $\frac{d}{dx}\left(x^3\frac{d}{dx}f_2^s(x)\right)<0$ since $\lim_{x\to 0}\frac{d}{dx}\left(x^3\frac{d}{dx}f_2^s(x)\right)=0$ by (v) of Lemma <ref>. Then
\[
\frac{d}{dx}\left(x^3\frac{d}{dx}f_2^s(x)\right)<0,
\]
together with the fact $\lim_{x\to 0}x^3\frac{d}{dx}f_2^s(x)=0$ from (v) of Lemma <ref>, yields $x^3\frac{d}{dx}f_2^s(x)<0$, i.e. $\frac{d}{dx}f_2^s(x)<0$. Thus we have shown that $f_2^s(x)$ is strictly decreasing. This completes the proof of the theorem.

We are now in a position to give the proof of Theorem <ref>.

Proof. The proof of the case $p=2$ has been given in Theorem <ref>. For $p\geq 3$, we prove the desired inequalities by showing that $\frac{d}{dx}f_p^{\ast}(x)<0$ for $\ast=c,s$. By (i) of Lemma <ref>, we see that $\frac{d^2}{dx^2}\left(x^3\frac{d}{dx}f_p^c(x)\right)<0$ for $p\geq 3$. Instead of employing (ii) of Lemma <ref>, we use (<ref>) and (<ref>) in Lemma <ref> to conclude that $\frac{d^2}{dx^2}\left(x^3\frac{d}{dx}f_p^s(x)\right)<0$. Thus we have, for $\ast=c,s$,
\[
\frac{d^2}{dx^2}\left(x^3\frac{d}{dx}f_p^{\ast}(x)\right)<0.
\]
Because of the first vanishing limit in (v) of Lemma <ref>, it follows that
\[
\frac{d}{dx}\left(x^3\frac{d}{dx}f_p^{\ast}(x)\right)<0,
\]
which, together with the fact that the second limit in (v) of Lemma <ref> vanishes, implies that $x^3\frac{d}{dx}f_p^{\ast}(x)<0$, i.e. $\frac{d}{dx}f_p^{\ast}(x)<0$, for $\ast=c,s$. It remains to find the following limits:
\[
\lim_{x\to 0}f_p^c(x)=\frac{p^2-1}{2p^2},\quad
\lim_{x\to\pi/2}f_p^c(x)=\frac{4}{\pi^2},\quad
\lim_{x\to 0}f_p^s(x)=\frac{p^2-1}{6p},\quad
\lim_{x\to\pi/2}f_p^s(x)=\frac{4}{\pi^2}\left(p-\csc\frac{\pi}{2p}\right).
\]
We immediately have
\[
\frac{4}{\pi^2}=\lim_{x\to\pi/2}f_p^c(x)<\frac{1-\frac{\cos x}{\cos\frac{x}{p}}}{x^2}<\lim_{x\to 0}f_p^c(x)=\frac{p^2-1}{2p^2}
\]
and
\[
\frac{4}{\pi^2}\left(p-\csc\frac{\pi}{2p}\right)=\lim_{x\to\pi/2}f_p^s(x)<\frac{p-\frac{\sin x}{\sin\frac{x}{p}}}{x^2}<\lim_{x\to 0}f_p^s(x)=\frac{p^2-1}{6p}.
\]
The proof is completed.
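Before turning to the hyperbolic analogue, the bounds of Theorem 1.2 are easy to sanity-check numerically. The short script below is an illustration only and plays no role in the proof.

```python
import numpy as np

def f_c(x, p):  # (1 - cos x / cos(x/p)) / x^2
    return (1 - np.cos(x) / np.cos(x / p)) / x**2

def f_s(x, p):  # (p - sin x / sin(x/p)) / x^2
    return (p - np.sin(x) / np.sin(x / p)) / x**2

x = np.linspace(0.01, np.pi / 2 - 0.01, 10000)
for p in (3, 4, 5, 6, 7):
    assert np.all(4 / np.pi**2 < f_c(x, p))
    assert np.all(f_c(x, p) < (p**2 - 1) / (2 * p**2))
    lower = 4 / np.pi**2 * (p - 1 / np.sin(np.pi / (2 * p)))
    assert np.all(lower < f_s(x, p))
    assert np.all(f_s(x, p) < (p**2 - 1) / (6 * p))
print("Theorem 1.2 bounds verified on the test grid.")
```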
§ GENERALIZED D'AURIZIO-SÁNDOR INEQUALITIES FOR HYPERBOLIC FUNCTIONS

In this section, we show that an analogue of Theorem <ref> for the case of hyperbolic functions holds true. Let
\[
h_p^c(x)=\frac{1-\frac{\cosh x}{\cosh\frac{x}{p}}}{x^2},
\qquad
h_p^s(x)=\frac{p-\frac{\sinh x}{\sinh\frac{x}{p}}}{x^2}.
\]
Following the same arguments used for proving Lemma <ref>, it can be shown that Lemma <ref> with $\cos x$, $\sin x$ and $f_p^{\ast}(x)$ ($\ast=c,s$) replaced by $\cosh x$, $\sinh x$ and $h_p^{\ast}(x)$ ($\ast=c,s$), respectively, remains true. It follows that we can prove $\frac{d}{dx}h_p^{\ast}(x)<0$ for $\ast=c,s$ as in the proof of Theorem <ref>. It remains to calculate the following limits:
\[
\lim_{x\to 0}h_p^c(x)=\frac{1-p^2}{2p^2},\quad
\lim_{x\to\pi/2}h_p^c(x)=\frac{4}{\pi^2}\left(1-\cosh\frac{\pi}{2}\,\operatorname{sech}\frac{\pi}{2p}\right),
\]
\[
\lim_{x\to 0}h_p^s(x)=\frac{1-p^2}{6p},\quad
\lim_{x\to\pi/2}h_p^s(x)=\frac{4}{\pi^2}\left(p-\sinh\frac{\pi}{2}\,\operatorname{csch}\frac{\pi}{2p}\right).
\]
Thus, we have the following analogue of Theorem <ref> for $\cosh x$ and $\sinh x$.

Theorem 3.1. Let $0<x<\pi/2$. Then the two double inequalities
\[
\frac{4}{\pi^2}\left(1-\cosh\frac{\pi}{2}\,\operatorname{sech}\frac{\pi}{2p}\right)<\frac{1-\frac{\cosh x}{\cosh\frac{x}{p}}}{x^2}<\frac{1-p^2}{2p^2}
\]
and
\[
\frac{4}{\pi^2}\left(p-\sinh\frac{\pi}{2}\,\operatorname{csch}\frac{\pi}{2p}\right)<\frac{p-\frac{\sinh x}{\sinh\frac{x}{p}}}{x^2}<\frac{1-p^2}{6p}
\]
hold for $p=3,4,5,\dots$. In particular, the double inequality (<ref>) for $h_p^c$ is reversed when $p=2$, while the double inequality (<ref>) for $h_p^s$ remains true when $p=2$.

§ APPLICATION OF THE GENERALIZED D'AURIZIO-SÁNDOR INEQUALITIES TO THE CHEBYSHEV POLYNOMIALS OF THE SECOND KIND

The first few Chebyshev polynomials of the second kind $U_n(x)$ ($n=0,1,2,\dots$) are (<cit.>)
\[
U_0(x)=1,\quad U_1(x)=2x,\quad U_2(x)=4x^2-1,\quad U_3(x)=8x^3-4x,
\]
\[
U_4(x)=16x^4-12x^2+1,\quad U_5(x)=32x^5-32x^3+6x,\quad U_6(x)=64x^6-80x^4+24x^2-1.
\]
In this section, we apply Theorem <ref> to $U_n(x)$ with $x=\cos\theta$. By means of the formula $U_n(\cos\theta)=\frac{\sin((n+1)\theta)}{\sin\theta}$, we obtain the following corollary.

Corollary 4.1. Let $y\in(0,\pi/(2p))$. The double inequality
\[
\frac{p}{6}\left((1-p^2)y^2+6\right)<U_{p-1}(\cos y)<p-\frac{4}{\pi^2}\left(p-\csc\frac{\pi}{2p}\right)p^2y^2
\]
holds for $p=2,3,4,5,\dots$.

Proof. The double inequality (<ref>) in Theorem <ref> can be written as
\[
p-\frac{p^2-1}{6p}\,x^2<\frac{\sin x}{\sin\frac{x}{p}}<p-\frac{4}{\pi^2}\left(p-\csc\frac{\pi}{2p}\right)x^2,\qquad x\in(0,\pi/2).
\]
Letting $x/p=y$, we have
\[
\frac{p}{6}\left((1-p^2)y^2+6\right)<\frac{\sin(py)}{\sin y}<p-\frac{4}{\pi^2}\left(p-\csc\frac{\pi}{2p}\right)p^2y^2,\qquad y\in(0,\pi/(2p)).
\]
Since $\frac{\sin(py)}{\sin y}=U_{p-1}(\cos y)$, the proof is completed.

Example. Letting $p=7$ in Corollary <ref> results in the following inequality:
\[
7-56y^2<64\cos^6 y-80\cos^4 y+24\cos^2 y-1<7-\frac{196\left(7-\csc\frac{\pi}{14}\right)}{\pi^2}\,y^2,
\]
where $y\in(0,\pi/14)\approx(0,0.2244)$ and $\frac{196\left(7-\csc\frac{\pi}{14}\right)}{\pi^2}\approx 49.7673$.

Acknowledgements. The authors wish to express sincere gratitude to Tom Mollee for his careful reading of the manuscript and valuable suggestions to improve the readability of the paper. Thanks are also due to Chiun-Chuan Chen and Mach Nguyet Minh for the fruitful discussions. The authors are grateful to the anonymous referee for many helpful comments and valuable suggestions on this paper.

[AS2] M. Abramowitz and I. A. Stegun (Eds.), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, National Bureau of Standards, Applied Mathematics Series 55, 9th printing, Washington, 1970.
[D'Aurizio] J. D'Aurizio, Refinements of the Shafer-Fink inequality of arbitrary uniform precision, Math. Inequal. Appl. 17 (2014), no. 4, 1487-1498.
[Chebyshev] T. J. Rivlin, Chebyshev Polynomials, Wiley, New York, 1990.
[Sandor] J. Sándor, On D'Aurizio's trigonometric inequality, J. Math. Inequal. 10 (2016), no. 3, 885-888.
Department of Mechanical Engineering, McGill University, Montreal, Quebec, Canada [Corresponding author: ]andrew.higgins@mcgill.ca Department of Mechanical Engineering, McGill University, Montreal, Quebec, Canada Department of Mechanical and Industrial Engineering, Concordia University, Montreal, Quebec, Canada Department of Mechanical and Industrial Engineering, Concordia University, Montreal, Quebec, Canada Cavendish Laboratory, Department of Physics, University of Cambridge, Cambridge, United Kingdom Detonation propagation in a compressible medium wherein the energy release has been made spatially inhomogeneous is examined via numerical simulation.The inhomogeneity is introduced via step functions in the reaction progress variable, with the local value of energy release correspondingly increased so as to maintain the same average energy density in the medium, and thus a constant Chapman Jouguet (CJ) detonation velocity. A one-step Arrhenius rate governs the rate of energy release in the reactive zones. The resulting dynamics of a detonation propagating in such systems with one-dimensional layers and two-dimensional squares are simulated using a Godunov-type finite-volume scheme.The resulting wave dynamics are analyzed by computing the average wave velocity and one-dimensional averaged wave structure.In the case of sufficiently inhomogeneous media wherein the spacing between reactive zones is greater than the inherent reaction zone length, average wave speeds significantly greater than the corresponding CJ speed of the homogenized medium are obtained.If the shock transit time between reactive zones is less than the reaction time scale, then the classical CJ detonation velocity is recovered.The spatio-temporal averaged structure of the waves in these systems is analyzed via a Favre averaging technique, with terms associated with the thermal and mechanical fluctuations being explicitly computed.The analysis of the averaged wave structure identifies the super-CJ detonations as weak detonations owing to the existence of mechanical non-equilibrium at the effective sonic point embedded within the wave structure.The correspondence of the super-CJ behavior identified in this study with real detonation phenomena that may be observed in experiments is discussed.Propagation of gaseous detonation waves in a spatially inhomogeneous reactive medium Nikolaos Nikiforakis December 30, 2023 ====================================================================================§ INTRODUCTION A substantial collection of experimental evidence revealing that all detonation waves in gaseous mixtures possess a transient, multi-dimensional structure has been uncovered over the past 60 years.<cit.>The structure consists of triple-point interactions between the leading shock and transverse shock waves that result in a cellular wave front. 
As a result of this spatially and temporally varying shock front compressing the reactive mixture at different strengths, the distribution of post-shock temperature varies greatly over the detonation cell cycle. Since the exothermic reactions in gaseous combustible mixtures governed by Arrhenius kinetics are very temperature-sensitive, the reaction rates in different regions behind the leading shock may differ by several orders of magnitude. Although zones of prompt reaction triggered by adiabatic compression may exist in regions of the front consisting of strong Mach stems, weakly-shocked pockets of reactant may not be able to undergo significant exothermic reaction due to their thermal history on the time scale of a detonation.<cit.> Particularly in hydrocarbon fuel mixtures, pockets of compressed, unreacted mixture are observed to be separated from the shock front by the shear layer emanating from the triple points propagating transversely across the front. These pockets eventually burn out during the cell cycle, likely due to a turbulent flame-like mechanism, releasing their energy in compression waves that still help to support the leading front. Successive generations of computational simulation of this phenomenon since the late 1970s have revealed greater and greater intricacy of the details, many of which have also been observed in experiments, making this a notoriously challenging problem for theoretical description.<cit.> While detonation waves in pre-mixed, homogeneous media exhibit localized spatio-temporal reaction zone structures (e.g., unburned pockets, etc.), greater complications might arise from spatially inhomogeneous reactive media. Practical applications of detonations rarely occur under conditions of perfect homogeneity; consider, for example, accident scenarios involving the unintentional release of detonable fuel. In propulsive applications of detonative combustion, such as the rotating detonation engine (RDE), the detonable mixture is often created by the dynamic injection of fuel and oxidizer immediately ahead of the propagating detonation that may not have time to completely mix, resulting in large spatial variations in chemical reactivity.<cit.> Inhomogeneity in density, temperature, and particle velocity might also be present, under certain conditions, due to pre-existing turbulence (again, likely to be encountered in RDEs, for example). Examination of the effect of spatial inhomogeneities on the propagation behavior of gaseous detonation waves is of importance for treating these scenarios. Given such challenging problems for detonation research, the available theoretical tools deriving from first principles are surprisingly simple, perhaps even oversimplified for the task of describing detonation propagation. Most theoretical models of detonation are developed from the assumption of the steady, quasi-one-dimensional Zel'dovich-von Neumann-Döring (ZND) solution satisfying a generalized Chapman-Jouguet (CJ) criterion, i.e., a vanishing thermicity at the point in the reaction zone where the flow moves away from the leading shock at the local speed of sound.<cit.> In these models, the reactive medium in which the detonation wave propagates is always considered to be spatially homogeneous based on the averaged thermodynamic, flow, and chemical properties.
The question then arises as to whether the propagation behavior of detonation waves that are influenced by the spatial inhomogeneities of the reactive medium (or the inherent spatio-temporal variations that exist in cellular detonations in homogeneous media) can be accurately predicted by these simple models based on an averaging treatment. Answering the above-mentioned question was recently attempted by Mi et al. <cit.> In their study, the propagation speed of a detonation, resulting from a medium that consists of spatially discretized energy sources separated by regions of inert material, was examined via one-dimensional, direct numerical simulations. In the cases of highly discretized energy sources, the resulting detonation velocity was observed to be greater than the predicted CJ velocity of a homogenized medium with the same amount of energy release (for the case of a ratio of specific heats γ=1.1, the average wave speed was nearly 15% greater than the CJ speed). These significant deviations from the CJ prediction were hypothesized to be indicative of weak detonations with a non-equilibrium state at the effective sonic surface. In their study, an artificial mechanism of energy deposition, i.e., a discrete source that is instantaneously triggered by the passage of the leading shock, independent of the shock strength, after a prescribed delay time, was implemented due to its simplicity. Hence, a more realistic mechanism of heat release, wherein the energy release evolves from the reactive medium itself, depending upon the local thermodynamic state, must be incorporated in further investigations of this problem. In the present study, the effect of both one- and two-dimensional spatial inhomogeneities on the propagation speed of gaseous detonation waves without losses is computationally examined. Since a typical detonable mixture of gases is governed by activated chemical reactions, single-step Arrhenius kinetics, as the simplest candidate reaction model, is incorporated into the system. The spatial discretization of energy can be realized, as illustrated in Fig. <ref>(b), via concentrating the reactant into layers (or sheets), standing perpendicular to the direction of detonation wave propagation, separated by regions of inert gas. This arrangement of discrete sources is similar to that used in <cit.> and can be implemented in both one- and two-dimensional simulations. Another way to discretize the reactive medium is by concentrating the reactant into infinitely long square-based prisms lying along an axis that is perpendicular to the direction of detonation wave propagation, as shown in Fig. <ref>(c). This arrangement can be implemented in two-dimensional simulations as an array of square sources. These two arrangements of spatial inhomogeneities are referred to as reactive layers and squares, respectively, in this paper. The first objective of this study is to examine whether the super-CJ wave propagation, which was identified in <cit.> for the cases with highly discrete sources, still occurs in a one- or two-dimensional gaseous detonation system with state-dependent Arrhenius kinetics. The simulation results are then analyzed via a spatio-temporal averaging procedure to further elucidate the physical mechanism that is responsible for this super-CJ wave speed.
By performing parametric studies, we systematically explore and analyze a continuous transition from the continuum CJ propagation to the super-CJ waves in extremely discretized reactive media, i.e., a sequence of point-source blasts, each of which in turn triggers the next source.

This paper is organized as follows. In Sec. <ref>, the problem statement and the governing equations of the proposed system are introduced. Section <ref> describes the numerical methodology used to solve the governing equations. The results of sample one- and two-dimensional wave structures, the history of instantaneous propagation speed, and the averaged propagation speed as a function of the model parameters are presented in Sec. <ref>. In Sec. <ref>, the procedures of data analysis are described. The findings based upon the simulation results and the analysis are discussed in Sec. <ref> and summarized in the Conclusions (Sec. <ref>). The detailed derivation of the governing equations based on the averaged properties can be found in the Appendix.

§ PROBLEM STATEMENT The detonable mixtures are modelled to be calorically perfect (i.e., with a fixed ratio of specific heats γ) and have the potential to release chemical energy with a specific heating value Q̃ (J/kg). The tilde "∼" denotes a dimensional quantity. The flow variables, density, pressure, temperature, and particle velocity (x- and y-components), are non-dimensionalized with reference to the initial state ahead of the leading shock, i.e., ρ = ρ̃/ρ̃_0, p = p̃/p̃_0, T = T̃/T̃_0, u = ũ/√(p̃_0/ρ̃_0), and v = ṽ/√(p̃_0/ρ̃_0), respectively. The subscript "0" indicates the pre-shock, initial state of the reactive medium. The properties of a thermodynamic state are related via the ideal gas law, i.e., p̃ = ρ̃RT̃, where R is the gas constant, or p = ρT in dimensionless form. The heat release Q̃ is non-dimensionalized as Q = Q̃/(p̃_0/ρ̃_0). Applying the CJ criterion, the velocity of a detonation wave propagating in a uniform reactive medium with the heat release Q can be calculated via the following relation, V_CJ = M_CJ c_0 = √( ((γ^2-1)/γ)Q + √( ( ((γ^2-1)/γ)Q + 1 )^2 - 1 ) + 1 ) · √(γ), where M_CJ is the CJ Mach number and c_0 = √(γ) is the non-dimensionalized initial speed of sound. The average propagation speed resulting from each inhomogeneous scenario simulated in this study will be compared with the V_CJ corresponding to a homogeneous reactive system with the same average energy release Q. The non-linear, unsteady gasdynamics of the system is described by the two-dimensional (or one-dimensional) reactive Euler equations in the laboratory-fixed reference frame: ∂U/∂t + ∂F(U)/∂x + ∂G(U)/∂y = S(U), where the conserved variable U, the convective fluxes F and G, and the reactive source term S are, respectively, U = [ρ; ρu; ρv; ρe; ρZ], F = [ρu; ρu^2+p; ρuv; (ρe+p)u; ρZu], G = [ρv; ρuv; ρv^2+p; (ρe+p)v; ρZv], S = [0; 0; 0; 0; ρΩ]. In the above equations, e is the non-dimensional specific total energy, and Z is the reaction progress variable, or the normalized concentration of reactant, which varies between 1 (unreacted) and 0 (fully reacted). For a homogeneous reactive system, the specific total energy is defined as e = p/((γ-1)ρ) + (u^2+v^2)/2 + ZQ.
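For concreteness, the relation above for V_CJ is straightforward to evaluate numerically; the following short Python sketch (our own illustration with hypothetical function names, not part of the simulation codes described below) computes the CJ velocity for the nondimensional parameters used in this study:

import math

def v_cj(Q, gamma):
    # CJ detonation velocity for nondimensional heat release Q and ratio of
    # specific heats gamma; a is the shorthand (gamma^2 - 1) Q / gamma
    a = (gamma**2 - 1.0) / gamma * Q
    m_cj = math.sqrt(a + math.sqrt((a + 1.0)**2 - 1.0) + 1.0)   # CJ Mach number
    return m_cj * math.sqrt(gamma)    # V_CJ = M_CJ c_0, with c_0 = sqrt(gamma)

print(v_cj(50.0, 1.2))                # approximately 6.81 for Q = 50, gamma = 1.2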
In this study, the reaction rate Ω = ∂Z/∂t is governed by single-step Arrhenius chemical kinetics as follows, Ω = -kZ exp(-E_a/T), where k and E_a are the dimensionless pre-exponential factor and activation energy, respectively. The reactive domain that contains discrete sources is initialized with uniform pressure, density, and particle velocity as p=1, ρ=1, and u=0, respectively. An initiation zone, where pressure and density equal twice the corresponding CJ value, i.e., p=2p_CJ and ρ=2ρ_CJ, is placed on the left of the reactive domain. A rightward propagating shock wave generated from this initiation zone thus triggers the discrete sources and supports a reaction wave to propagate to the right. The spatial inhomogeneities are introduced into the system as spatially discrete reactive layers or squares separated by inert regions. This spatial discretization is realized by initializing Z as 1 in the reactive sources and 0 in the inert regions. As shown in Fig. 2(a), the reactive layers with a width W are distributed in the domain with a regular spacing L between each two consecutive layers. Thus, the initial distribution of Z in space can be described as a summation of regularly spaced, one-dimensional Heaviside (boxcar) functions, Z(x,y,t=0) = ∑_i H(x-iL) H(iL+W-x), where i is the index of each discrete reactive layer. The scenario with discrete reactive squares is shown in Fig. 2(b). The side length of each square source is W and the spacing between each two neighboring sources is L. In this case, the initial distribution of Z can be described as a summation of regularly spaced, two-dimensional Heaviside functions, Z(x,y,t=0) = ∑_i∑_j H(x-iL) H(iL+W-x) H(y-jL) H(jL+W-y), where i and j are the indices of each discrete reactive square in the x- and y-directions, respectively. The spatial discreteness parameter Γ is defined as Γ=W/L for the cases with reactive layers and Γ=W^2/L^2 for the cases with reactive squares. In the limit of Γ→ 1, the initial distribution of Z becomes uniform in the reactive medium; in the limit of Γ→ 0, the spatially discrete source approaches a δ-function in space. In order to maintain the overall amount of energy release Q the same as in the homogeneous case (Γ=1), the actual heat release associated with each discrete source must be increased according to the prescribed spatial discreteness Γ. Hence, for the cases with discrete reactive sources, the specific total energy is formulated as e = p/((γ-1)ρ) + (u^2+v^2)/2 + ZQ/Γ. The current study focuses on exploring the effect of spatial discreteness Γ and source spacing L on the wave propagation behavior in an inhomogeneous reactive medium. The values of Q and γ are chosen to be Q=50 and γ=1.2 to represent a typical gaseous detonable mixture. The dimensionless activation energy is chosen to be E_a=20 to ensure stable detonation propagation in the one-dimensional, homogeneous system, which can be used as a control case to more clearly identify the effect of the spatial inhomogeneities and intrinsic multi-dimensional instabilities on the resulting propagation behavior.<cit.> The pre-exponential factor k=16.45 is chosen so that the half-reaction-zone length for the homogeneous case is unity.

§ NUMERICAL METHODOLOGY Two independently-written simulation codes were used to solve the one- and two-dimensional reactive Euler equations. Both of them were based upon a uniform Cartesian grid.
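As a minimal illustration of how the discretized initial condition of the Heaviside sums above maps onto such a grid (a schematic Python sketch with our own array layout and naming, not taken from the actual codes), the reaction progress variable can be initialized as follows:

import numpy as np

def init_layers(nx, dx, W, L):
    # Z(x, t=0) for discrete reactive layers: Z = 1 inside each layer of
    # width W, repeating with spacing L, and 0 in the inert gaps
    x = (np.arange(nx) + 0.5) * dx          # cell-center coordinates
    return ((x % L) < W).astype(float)      # boxcar train of sources

def init_squares(nx, ny, dx, W, L):
    # two-dimensional analogue with reactive squares of side W and spacing L
    x = (np.arange(nx) + 0.5) * dx
    y = (np.arange(ny) + 0.5) * dx
    X, Y = np.meshgrid(x, y, indexing="ij")
    return (((X % L) < W) & ((Y % L) < W)).astype(float)

Z = init_layers(nx=4000, dx=0.05, W=0.4, L=10.0)   # Gamma = W/L = 0.04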
The one-dimensional simulation code used the MUSCL-Hancock TVD Godunov-type finite-volume scheme <cit.> with an exact Riemann solver and the van Leer non-smooth slope limiter. The reaction process in the one-dimensional simulations was solved using a second-order, two-stage explicit Runge-Kutta method. The Strang splitting method was used in order to maintain second-order accuracy.<cit.> The two-dimensional simulation code was also based upon the MUSCL-Hancock scheme but with an HLLC approximate solver for the Riemann problem.<cit.> This code was implemented in Nvidia's CUDA programming language and performed on an Nvidia Tesla K40M GPU computing processor to accelerate the simulation runs. In each case simulated in this study, the length (i.e., size in the x-direction) of the entire domain was at least 50 times the source spacing L. For the cases with reactive squares, the transverse width (i.e., size in the y-direction) was at least 5L. To achieve better algorithmic efficiency, the simulations were performed not over the entire domain but only in a window that enclosed the leading wave complex at each time step. A window size (in the x-direction) of 10L (i.e., containing 10 discrete reactive layers or 10 vertical arrays of reactive squares) was used and was verified to be sufficient to capture all of the dynamics contributing to the propagation of the leading shock. Once the leading shock front reached the end (right boundary) of the computational window, the window frame (i.e., left and right boundaries) was advanced by half of the window size, i.e., 5L. A transmissive boundary condition was applied on both left and right boundaries of the computational window. On the top and bottom boundaries, a periodic boundary condition was applied to simulate a detonation wave propagating in an infinitely wide domain without experiencing any losses due to lateral expansion. The minimum numerical resolution used in this study was 20 computational cells per half-reaction-zone (unity) length of the homogeneous case, i.e., Δx=0.05. For cases with very small source spacing, e.g., L=1, a high numerical resolution of 100 computational cells per half-reaction-zone length was used to ensure a sufficient number (∼ 10) of computational cells within a discrete source.

§ RESULTS Results from three different cases are presented here: reactive layers in one dimension, reactive layers in two dimensions, and reactive squares in two dimensions. For each case, a snapshot of the flow field will be shown, followed by the velocity history. The measured average velocity, V_avg, will be presented as a function of spatial discreteness Γ and source spacing L. Since Q=50, γ = 1.2, and E_a = 20 are fixed in this current study, only the values of L and Γ are mentioned for each specific case of simulation presented in the remainder of this paper. In Figs. <ref> and <ref>, where the results of V_avg are presented, the data points for the cases with one-dimensional reactive layers, two-dimensional reactive layers, and two-dimensional reactive squares are plotted as blue circles, green diamonds, and red squares, respectively; solid symbols are for the cases with a fixed L and various Γ, while open symbols are for the cases with a fixed Γ and various L. §.§ One-dimensional reactive layers The sample result plotted in Fig.
<ref> shows the time evolution (from (a) to (c)) of the pressure (top row) and reaction progress variable (bottom row) profiles of the computational domain for a simulation with Γ=0.04 and L=10 (spacing between two sources). The leading wave front propagates rightward in this figure. The δ-function-like, vertical spikes in the profile of Z, as indicated in the figure, are the discrete reactive layers where chemical energy is highly concentrated. As shown in Fig. <ref>(a), the peak in the pressure profile is associated with a strong exothermic reaction upon the leading shock encountering one of these reactive layers. The shorter spike in the Z profile in Fig. <ref>(a) corresponds to this partially reacted discrete source shortly after being shocked. As shown in Fig. <ref>(b), forward- and backward-running blast waves that are generated by this strong exothermic reaction can be identified in the pressure profile. Downstream from the leading shock, the pressure profile consists of a large number of decaying and interacting blast waves generated by the earlier sources. The history of the instantaneous propagation speed V normalized by V_CJ for the same case is plotted in Fig. <ref>(a) as a function of the leading shock position x_s. The trajectory of the leading shock x_s(t) can be obtained from the simulation by finding the location where pressure increases to p=1.5 from its initial, pre-shock state p_0=1 at each time step. The instantaneous propagation speed V can then be calculated by numerically differentiating x_s(t) over time. After a short process of initiation (over approximately 5 sources), the wave propagation becomes periodic as shown in the inset in Fig. <ref>(a). A cycle of pulsation in wave velocity, V, occurs over a length that is the same as the spacing between two reactive layers, L. An averaged propagation speed can be measured over a long distance (about 40 sources). The average wave speeds V_avg, normalized by V_CJ, resulting from the one-dimensional simulations are plotted as functions of Γ and L (as solid and open blue circles) in Fig. <ref>(a) and (b), respectively. As Γ decreases from 1 to 0.01 or L increases from 1 to 200, V_avg increases from V_CJ and asymptotically approaches a plateau value that is approximately 9-10% greater than V_CJ. §.§ Two-dimensional reactive layers The sample results plotted in Fig. <ref>(a) and (b) are the two-dimensional contours of the pressure (left column) and reaction progress variable (right column) at early ((a) t=30.2) and later ((b) t=140.5) times for a simulation of discrete reactive layers with Γ=0.04 and L=10. The leading wave front propagates rightward in this figure. The red, vertical lines in the contours of Z are the highly concentrated, reactive layers. At early times, as shown in Fig. <ref>(a), the resulting wave structure remains transversely planar (uniform in the y-direction). Forward- and backward-running blast waves associated with high pressure (yellow-red) regions can be clearly identified in Fig. <ref>(a). At later times, as shown in Fig. <ref>(b), significant instabilities have developed, resulting in a no longer planar but highly irregular wave structure. The history of V/V_CJ as a function of leading shock position x_s is plotted in Fig. <ref>(b). At each time step, the leading shock position is found along the middle line in the y-direction of the two-dimensional domain (at y=25) using the same technique described in Sec. <ref>. As shown in Inset I of Fig.
<ref>(b), V varies in a regularly periodic fashion over a length scale of L. After the wave propagates over more than 700 sources, the variations in V become irregular and exhibit much larger amplitudes. As shown in Inset II, the spacing between two consecutive peaks in V is no longer a constant distance L. Note that, before the onset of instabilities, the propagation dynamics resulting from this two-dimensional case with reactive layers are identical to those of the one-dimensional case with the same parameter values (shown in Fig. <ref>(a)). The average propagation velocities reported in this paper were measured over a sufficiently long distance (more than 40 sources) after the instabilities had fully developed. The V_avg values resulting from the two-dimensional simulations with reactive layers are plotted as functions of Γ and L (as solid and open green diamonds) in Fig. <ref>(a) and (b), respectively. As Γ decreases or L increases, V_avg increases from V_CJ to super-CJ values. In Fig. <ref>(b), as L increases from 5 to 200, V_avg asymptotically approaches a plateau value that is nearly 10% greater than V_CJ, which is approximately the same as that resulting from the one-dimensional cases. The V_avg values of the two-dimensional simulations are less than the corresponding values of the one-dimensional simulations for the same values of Γ and L. §.§ Two-dimensional reactive squares The sample results plotted in Fig. <ref>(c) are the two-dimensional contours of the pressure (left figure) and reaction progress variable (right figure) at early and later times for a simulation of discrete reactive squares with Γ=0.04 and L=25. The leading wave front propagates rightward in this figure. The red squares in the contour of Z are the highly concentrated sources of energy. As shown in the contour of pressure in Fig. <ref>(c), the transversely regular, wavy leading wave front, which consists of blast waves generated by the energy release of regularly spaced square sources, can be identified. Downstream (to the left) from the leading shock, the wave structure becomes increasingly irregular. In the plot of V/V_CJ as a function of leading shock position (Fig. <ref>(c)), a regularly periodic variation in V over a length of L, as shown in the inset of Fig. <ref>(c), persists throughout the simulation containing 120 vertical arrays of square sources. The leading shock position is again defined as that along the middle line in the y-direction of the two-dimensional domain (at y=62.5). The values of V_avg resulting from the two-dimensional simulations with reactive squares are plotted as functions of Γ and L (as solid and open red squares) in Fig. <ref>(a) and (b), respectively. As Γ decreases or L increases, V_avg increases from V_CJ to super-CJ values. In Fig. <ref>(b), at L=50, V_avg reaches the same plateau value (i.e., nearly 10% greater than V_CJ) as that resulting from both the one- and two-dimensional cases with reactive layers. The V_avg of the two-dimensional, reactive square cases is fairly close to that of the two-dimensional, reactive layer cases, but lower than that of the one-dimensional cases for the same values of Γ and L.

§ ANALYSIS As shown in Sec. <ref> (Fig. <ref>), a full spectrum of average wave propagation speeds that are significantly greater than V_CJ is obtained in both one- and two-dimensional systems with discretized energy sources governed by finite-rate, state-dependent Arrhenius kinetics.
In order to understand the physical mechanism underlying these super-CJ waves, the simulation results are analyzed in two steps. First, the results of select cases are analyzed via a density-weighted (Favre), spatio-temporal averaging method. Using this analysis, which was introduced to the field of detonation by Lee and Radulescu <cit.>, Radulescu et al. <cit.>, and Sow et al. <cit.>, Mi et al. interpreted the super-CJ propagation, resulting from a system with highly concentrated sources that instantaneously deposit energy after a fixed delay time, as weak detonations owing to the non-equilibrium condition at the average sonic surface.<cit.> The motivation of performing this analysis in the present study is to verify that this mechanism of weak detonation is also responsible for the super-CJ propagation with more realistic reaction kinetics and a higher dimension. Second, with the assistance of an x-t diagram constructed from the numerical flow field, a physical parameter, τ_c, which compares the reaction time of a source t_r and the shock transit time from one source to the next t_s, i.e., τ_c = t_r/t_s, can be determined. This parameter is used to explain the continuous transition of the propagation speed from V_CJ to the plateau super-CJ value. §.§ Averaged steady, one-dimensional wave structure One- and two-dimensional systems with discrete reactive layers are analyzed using a Favre-averaging approach. The two representative cases selected for further analysis are those with Γ=0.04 and L=10. The resulting average wave speed V_avg in both these one- and two-dimensional cases is approximately 10% greater than the CJ speed. Since the simulations are performed in a lab-fixed reference frame, the data are first transformed into a wave-attached reference frame moving at a constant value of V_avg. For the one-dimensional case, temporal averaging is performed on the transient wave structure as the leading shock propagates over 20 sources. For the two-dimensional case, the resulting flow field at each time step is first spatially averaged over the transverse (y-) direction. The temporal averaging is then performed on the time history of the spatially averaged one-dimensional wave profiles. The two-dimensional results are averaged over the time span required for the leading shock to propagate over 20 sources. The detailed derivation of the Favre-averaged equations can be found in the Appendix. In Fig. <ref>(a), the averaged pressure p̅ for the one-dimensional case is plotted with respect to the wave-attached coordinates x', where x' = x - V_avg t. The sonic point marked as the black circle on the profile of p̅ is where the slope of the averaged u+c characteristics equals 0, i.e., u^∗+c^∗=0. The average pressure at this averaged sonic point, p̅_sonic = 25.1, is significantly greater than the pressure of the equilibrium CJ state, as indicated by the horizontal dashed line, p_CJ=21.5. This deviation of p̅_sonic from p_CJ suggests that equilibrium is not reached as the flow passes through the effective sonic surface. In order to further verify this finding, the thermicity due to the mechanical fluctuation in momentum ϕ_M (blue dash-dot curve), the thermicity due to the thermal fluctuation in total energy ϕ_T (green dash curve), and the exothermicity associated with chemical reaction ϕ_R (red dotted curve) are evaluated and plotted near the average sonic point in Fig. <ref>(b).
Thermicity is defined as the terms in the momentum equation that result in a change in the average flow velocity or, equivalently, a change in the average pressure of the flow in the reaction zone of a detonation. As shown in this inset, although ϕ_M, ϕ_T, and ϕ_R are still finite, the total thermicity, i.e., ϕ=ϕ_M+ϕ_T+ϕ_R (thick black line), reaches zero in the vicinity of the sonic point. These significant fluctuations in momentum and total energy render a non-equilibrium state of the flow upon reaching the effective sonic surface. The derivation of ϕ_M, ϕ_T, and ϕ_R, and the master equation (Eq. <ref>) that relates the acceleration/deceleration of the averaged flow with the total thermicity and sonic condition are shown in the Appendix. A similar profile of p̅ is obtained for the two-dimensional case as shown in Fig. <ref>(c). The jump in pressure associated with the averaged leading shock front is, however, less sharp (smeared out) than that for the one-dimensional case. As indicated by the black circle in the inset, p̅_sonic=21.8 is close to but still greater than p_CJ (dashed line). As shown in Fig. <ref>(d), while the exothermic reaction rate still remains significantly positive and ϕ_M and ϕ_T persist with significantly large amplitudes, the total thermicity ϕ vanishes in the immediate vicinity of the average sonic point. Thus, in the two-dimensional case, the non-equilibrium state associated with significant mechanical and thermal fluctuations is identified at the location where the averaged flow encounters the effective sonic surface. §.§ Evaluation of τ_c As shown in Sec. <ref> (Fig. <ref>), a continuous transition of the propagation speed from V_CJ to the plateau super-CJ value is found as Γ decreases from 1 to 0 or L increases. An analogous spectrum of propagation regimes is identified in flame propagation in reactive media with spatially discrete or point-like sources.<cit.> A physical parameter, τ_c, which is the ratio between the heat release time of each source and the characteristic time of heat diffusion between neighboring sources, is used to characterize the corresponding flame propagation regime. Similarly, in this system of discrete source detonations, single-step Arrhenius kinetics with a finite reaction rate permit us to measure the time over which a discrete source (layer or square) releases its chemical energy, t_r. Knowing the trajectory of the leading shock wave, the time required for the wave front to travel from one discrete source to the next, t_s, can also be measured. Thus, the ratio between t_r and t_s, i.e., τ_c=t_r/t_s, can be evaluated. Since the physical significance of τ_c for the wave propagation regimes in discretized reactive media is discussed in Sec. <ref>, this subsection focuses only on presenting an approach to post-processing the simulation data in order to evaluate τ_c. The time evolution of the reaction progress variable Z can be plotted in an x'-t diagram where x' is the spatial coordinate in a wave-attached reference frame. This x'-t diagram of Z can be directly constructed from the simulation results for the one-dimensional cases. For the two-dimensional simulations, the flow field of Z at each time step first needs to be spatially averaged along the y-axis to obtain a one-dimensional profile. Then, the x'-t diagram can be constructed based on these averaged profiles of Z from the two-dimensional simulation results. The x'-t diagrams of Z for cases with various model parameters are shown in Fig.
<ref>.Figure <ref>(a), the case with one-dimensional reactive layers (Γ=0.04 and L=10), is taken in this subsection as an example to explain how τ_c is determined. The color contour of Z is scaled from bright to dark as Z=1 to 0. Thus, the bright stripes on the right of this figure are the loci of unreacted discrete layers moving (leftwards) towards the leading shock whose trajectory is plotted as the blue curve. The shock transit time between discrete sources t_s can be obtained by measuring the vertical spacing between two bright stripes. The dark zones separating the discrete layers are the inert regions. The areas of gradual color change emanating from where the leading shock encounters the reactive layers indicate the energy release. The areas of energy release are bounded by a thin red outline, which is the iso-contour of Z=0.05, indicating that 95% of the chemical energy initially stored in each source is released within the bounded zone. The reaction time t_r of each discrete source can be measured as the vertical spacing between the locus of shock-source intersection and the upper bound (in time) of the 95% heat release zone. Although this technique of determining t_r is somewhat arbitrary, it should be sufficient to characterize the wave propagation regimes as long as this measurement is consistently performed in this study. The results of average propagation velocity normalized by the CJ value for the scenarios of one-dimensional reactive layers (blue circles), two-dimensional reactive layers (green diamonds), and two-dimensional reactive squares (red squares) with various Γ (solid symbols) and L (open symbols) can thus be plotted as a function of τ_c as shown in Fig. <ref>.§ DISCUSSION The results presented in this paper show that, in an adiabatic system of discretized energy sources governed by single-step Arrhenius kinetics, waves can propagate, in a self-sustained manner, at a speed that is significantly greater than the CJ value of a homogeneous system with the same amount of overall heat release and without the support of a piston. Based on the analysis presented in Sec. <ref> for selected one- and two-dimensional cases, this nearly 10% super-CJ wave propagation can be interpreted as a weak detonation where the flow remains in a non-equilibrium state upon reaching the effective sonic surface. Note that, of all detonation solutions satisfying the conservation laws, the CJ solution with a complete equilibrium state at the sonic surface corresponds to the slowest possible wave speed. By evaluating the terms which comprise the total thermicity in the master equation (Eq. <ref>) based on the Favre-averaged properties, the non-equilibrium condition at the sonic point is attributed to the intense fluctuations in momentum and total energy of the flow. The generalized-CJ condition, i.e., a vanishing thermicity (ϕ=0) at the average sonic point (u^∗+c^∗ = 0), is satisfied owing to the balance between the exothermic chemical reaction and the mechanical and thermal fluctuations. The finding of this study incorporating a more realistic, state-dependent reaction model complements the previous study by Mi et al.<cit.>, verifying that the classic CJ criterion assuming a homogeneous medium based on averaged properties is not always applicable to predict the wave propagation speed in a spatially inhomogeneous system, and further suggesting that the resulting super-CJ propagation is independent of the particular energy deposition mechanism. In the previous work of Mi et al. 
<cit.>, where an instantaneous, state- and shock strength-independent mechanism of energy deposition was considered, the spatial coordinate can be normalized by the regular spacing between two consecutive sources. In other words, source spacing L does not affect the wave propagation behavior. In that study, the ratio of specific heats γ and the spatial discreteness parameter Γ are the only two factors determining the deviation of average wave speed away from the CJ solution. In the current study, however, as a finite-rate, state-dependent reaction rate is incorporated, an additional length scale, i.e., the reaction zone length of a detonation in the homogenized system, comes into play. This physical length scale is a function of E_a, Q, and γ, but independent of source spacing. The source spacing relative to the intrinsic reaction zone length therefore affects the resulting wave propagation behavior. The effect of L on the average wave speed can be identified in Fig. <ref>(b). For a fixed spatial discreteness Γ=0.04, V_avg increases from V_CJ to a plateau value that is 10% greater than V_CJ as L increases from 1 (i.e., source spacing equals the half-reaction-zone length) to 200. As Γ decreases from 1 to the limit of Γ→ 0, a similar trend of V_avg increasing from the CJ speed to the same plateau value is shown in Fig. <ref>(a). These two asymptotic limits of Γ can be understood as follows: When Γ=1, the source size equals the source spacing; the system is thus continuous, resulting in a CJ propagation speed. As Γ→ 0, the discrete sources tend to be spatial δ-functions and release energy nearly instantaneously. In this limit, each source generates forward- and backward-running blast waves. The forward-running blast triggers the next source, so the wave propagates via a mechanism of blast waves sequentially initiated by the point sources, which can be qualitatively captured by the heuristic model based on the construction of point-source blast solutions in the Appendix of Ref. <cit.>. Since the variation of V_avg as a function of L is between the same asymptotic limits as those of Γ, the underlying mechanisms at these limits must have an equivalent effect on the wave propagation. When L is small, i.e., on the order of the intrinsic half-reaction-zone length, these spatial inhomogeneities are so fine that the reactive medium is effectively homogenized. In the other limit, where L is hundreds of times larger than the half-reaction-zone length, the time of a discrete source being processed by the leading shock and releasing energy is much shorter than the time required for the leading shock to travel from one source to the next. Hence, given the large time scale of wave propagation, the energy of one source is released effectively instantaneously, and the overall picture of this wave propagation reverts to the case of sequentially triggered point blasts. Note that, since neither losses nor a chemical kinetic cutoff are considered in this system, further increasing L will not qualitatively alter the resulting wave dynamics or lead to quenching. The continuous spectrum of the wave solutions from the effectively homogeneous CJ propagation to a sequence of point-source blasts can be rationalized with the assistance of τ_c evaluated via the method presented in Sec. <ref>. In other words, the effect of L and Γ on the wave propagation speed can be reconciled by considering the τ_c parameter. The x'-t diagram of Z-contour shown in Fig.
<ref>(b) is for the case of one-dimensional reactive layers with Γ=0.04 and L=1, where the reaction time t_r of a source is much longer than the shock transit time t_s, i.e., τ_c is significantly greater than unity. This case corresponds to the scenario wherein the very small scale discrete sources are effectively homogenized, and results in a CJ wave speed. When Γ is kept fixed at 0.04 and L is increased to 10 (for the one-dimensional case), as shown in Fig. <ref>(a), t_r is still finite but smaller than t_s. In this case, where τ_c=0.21, V_avg reaches an intermediate value that is approximately 8.5% greater than V_CJ, but still less than the 10% super-CJ plateau value. For the one-dimensional case with Γ=0.04 and L increased to 50, as shown in Fig. <ref>(c), t_r is significantly smaller than t_s, i.e., τ_c=0.05. The wave propagation in this case is thus via the mechanism of sequentially triggered point-source blasts, and a plateau super-CJ speed is observed. As shown in Fig. <ref>, the resulting V_avg values in the one- and two-dimensional cases with reactive layers coincide at the CJ and plateau super-CJ limits, but differ over the transitional range of Γ and L. Over this range, the V_avg values of the two-dimensional cases are smaller than those of the one-dimensional cases. This difference is due to the fact that, while the detonation in the one-dimensional homogeneous system is stable for the selected parameters (Q=50, γ=1.2, and E_a=20) <cit.>, it is intrinsically unstable in a homogeneous two-dimensional system.<cit.> In addition, the stability analysis indicating that this system should be stable in one dimension applies only to homogeneous media. As the source energy is concentrated into reactive layers or squares, the local heat release increases by a factor of 1/Γ, likely promoting the development of instability. For the cases with large Γ and small L, which are not severely inhomogeneous, the intrinsic detonation instabilities are likely to develop in a two-dimensional system. As shown in the two-dimensional sample result in Fig. <ref>(b), after the instabilities have fully developed, the leading shock front becomes transversely wavy, and thus processes different parts of the discrete reactive layer at different times and with different strength. The spatially smeared shock front in the p̅ profile for the two-dimensional case shown in Fig. <ref>(c) is a result of these developed instabilities. Hence, the heat release of a discrete layer is also temporally and spatially smeared out, having a homogenizing effect on the energy deposition. This effect can be verified in the x'-t diagram of the Z-contour for the two-dimensional case with Γ=0.04 and L=10 based on the spatially averaged one-dimensional wave profiles (Fig. <ref>(d)). The τ_c for this case is determined to be 0.33, which is greater than that for the one-dimensional case with the same Γ and L, i.e., τ_c=0.21, as shown in Fig. <ref>(a) and (d). Correspondingly, the V_avg resulting from the above-mentioned two-dimensional case is 6.1% greater than V_CJ while that for the one-dimensional case is 8.5% greater than V_CJ. Therefore, as an alternative to Γ and L, τ_c can be used as a general parameter that quantifies the effect of energy discretization on the wave propagation speed. As demonstrated in Fig. <ref>, the results of V_avg for both one- and two-dimensional cases with various Γ and L follow qualitatively the same trend when plotted as a function of τ_c.
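The extraction of t_r and t_s described above can be summarized in a short post-processing routine (a schematic Python illustration under our own naming conventions; it assumes that the history Z_i(t) of a given source and the shock arrival times at consecutive sources are available from the simulation output):

import numpy as np

def tau_c_of_source(t, Z_source, t_arrive, t_arrive_next):
    # t             : 1D array of output times
    # Z_source      : reaction progress of this source at those times
    # t_arrive(_next): shock arrival times at this source and at the next one
    t_s = t_arrive_next - t_arrive              # shock transit time between sources
    burned = np.flatnonzero(Z_source <= 0.05)   # 95% of the stored energy released
    t_r = t[burned[0]] - t_arrive               # reaction time of the source
    return t_r / t_s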
In this study, the super-CJ wave propagation is identified in the cases with a two-dimensional arrangement of reactive squares. The super-CJ plateau value and the dependence of the deviation from the CJ propagation on the spatial discreteness Γ and spacing between reactive squares L are qualitatively the same as those for the cases with reactive layers. This result suggests that the super-CJ wave propagation and its underlying mechanism due to the spatial inhomogeneities are unlikely to be an artifact arising only from a one-dimensional system or a system with a one-dimensional, laminar-like arrangement of discrete sources (i.e., reactive layers), but rather a fundamental consequence of multi-dimensionally distributed inhomogeneities on the propagation of reaction waves. Although this study considers simplified scenarios of spatial inhomogeneities, it may capture some details of a detonation propagating in the combustion chamber of a rotating detonation engine with discretely located fuel/oxidizer injection. The scenario with reactive layers resembles the RDE design where detonable gases are axially injected into the annular combustion chamber, such as those studied in Refs. <cit.>; the RDEs with impinging injection of non-premixed fuel and oxidizer <cit.> can be conceptualized as the scenario with discrete reactive squares. The key finding of the current work may explain the 5% super-CJ detonation velocity recently reported by Fujii et al.<cit.> for the numerical simulations of a detonation wave propagating in a RDE combustion chamber with relatively widely spaced, premixed gas injection. Drawing inspiration from Vasil'ev and Nikolaev's heuristic model <cit.>, which utilized interacting point-source blast waves to mimic detonation cell structure, the two-dimensional arrangement of highly concentrated, reactive squares considered in this study can be used to investigate the wave dynamics of cellular detonations in future efforts. By selecting a source spacing L that is similar to typical detonation cell sizes, the wave structure induced by imposing spatial inhomogeneities can be hypothesized to have a similar effect on the overall propagation behavior and critical limits as that resulting from the intrinsic cellular structure. Spatially regular and random distributions of inhomogeneities can potentially be used to induce wave structures similar to those in weakly and strongly unstable mixtures, respectively. Further development of this detonation system with spatial inhomogeneities will also be carried out by incorporating a multi-step, chain-branching reaction scheme that provides a kinetic quenching mechanism <cit.>, i.e., a critical temperature below which the exothermic reaction rate is decreased significantly (or quenches).
With such a system, it would be possible to examine critical detonation phenomena, for example, a propagation limit in source spacing L beyond which the blast wave generated by a discrete source decays to a shock that is too weak to trigger the exothermic reaction of the subsequent sources.

§ CONCLUSION The effect of spatial inhomogeneity in the reaction progress variable upon detonation propagation, while maintaining the overall energy release of the medium as constant, has been studied via numerical simulations in one-dimensional systems and in two-dimensional systems of reactive layers and squares governed by activated, Arrhenius kinetics. The average wave speeds are observed to agree with the predictions of the classical Chapman-Jouguet criterion provided that the time scale of the energy release is greater than the time required for the leading shock to propagate between sources. This regime is observed if the medium is nearly homogeneous (i.e., with the gaps of inert media being smaller than the reactive areas) or when the spacing between the reactive layers is small in comparison to the half-reaction-zone length of a detonation in the equivalent homogeneous media. In sufficiently inhomogeneous media, wherein the spacing between reactive regions is greater than the inherent reaction zone length, average wave propagation speeds significantly greater than the CJ velocity of the equivalent homogeneous medium are observed (up to 10%). Based on spatial and temporal averaging of the numerical results, the super-CJ waves can be interpreted as weak detonations wherein the generalized CJ condition applies at a state of non-equilibrium existing at an effective sonic point inside the wave structure, rather than at an equilibrium point located at the end of the reaction zone as in the classical CJ detonation criterion. The non-equilibrium condition in the flow is attributed to persistent fluctuations in momentum and total energy resulting from the intense shock waves generated by the concentrated pockets of energy release.

§ APPENDIX The complete derivations of the Favre-averaged (i.e., density-weighted, spatio-temporally averaged) equations, the master equation, and the thermicity terms (ϕ, ϕ_M, ϕ_T, and ϕ_R) are presented in this Appendix. The averaging is performed in a reference frame moving at the averaged wave propagation velocity V_avg. In this moving reference frame, the spatial coordinate and the x-component of particle velocity are transformed as x' = x - V_avg t and u' = u - V_avg, respectively. For convenience, u denotes the x-component of particle velocity with respect to the moving frame in this Appendix. A simple spatio-temporal averaging (or only temporal for one-dimensional cases), i.e., a Reynolds averaging procedure, is then applied to density and pressure as follows: ρ̅(x') = (1/(t_2-t_1)) ∫_{t_1}^{t_2} (1/(y_2-y_1)) ∫_{y_1}^{y_2} ρ(x',y,t) dy dt, with ρ = ρ̅+ρ^∘, and p̅(x') = (1/(t_2-t_1)) ∫_{t_1}^{t_2} (1/(y_2-y_1)) ∫_{y_1}^{y_2} p(x',y,t) dy dt, with p = p̅+p^∘, where t_1 and t_2 indicate the starting and ending time of the period, and y_1 and y_2 indicate the lower and upper boundaries of the computational domain in the y-direction, over which ρ and p are averaged. The overbar (as in ρ̅) and the superscript "∘" indicate spatio-temporally averaged variables and their corresponding fluctuating quantities, respectively.
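As a schematic numerical illustration of these averages (our own sketch; it assumes the flow fields have been interpolated onto the wave-attached coordinate x' and stored as arrays indexed by (x', y, t)), the Reynolds average above and the Favre averages introduced next amount to:

import numpy as np

# rho, p, u: arrays of shape (nx, ny, nt) sampled in the wave-attached frame;
# the averaging windows t_1..t_2 and y_1..y_2 span the full arrays here

def reynolds_avg(f):
    # spatio-temporal average over y and t; the result depends on x' only
    return f.mean(axis=(1, 2))

def favre_avg(rho, f):
    # density-weighted (Favre) average, e.g. u* = avg(rho u) / avg(rho)
    return reynolds_avg(rho * f) / reynolds_avg(rho)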
Favre averaging (i.e., density-weighted averaging) is applied to the particle velocity and reaction progress variable as follows: u^∗ = \overline{ρu}/ρ̅ with u = u^∗+u″, and Z^∗ = \overline{ρZ}/ρ̅ with Z = Z^∗+Z″, where the superscripts "∗" and "″" indicate Favre-averaged variables and their corresponding fluctuating quantities, respectively. The average structure of the wave is therefore governed by the one-dimensional, stationary Favre-averaged Euler equations as follows: ∂/∂x' (ρ̅ u^∗) = 0, ∂/∂x' (ρ̅ (u^∗)^2 + p̅ + \overline{ρ(u″)^2}) = 0, ∂/∂x' (ρ̅ e^∗ u^∗ + ρ̅ (e″u″)^∗ + \overline{pu}) = 0, where the averaged specific total energy e^∗ can be expressed as follows: e^∗ = p̅/(ρ̅(γ-1)) + (u^∗)^2/2 + Z^∗Q/Γ. Knowing the upstream boundary condition, i.e., the initial state of the region ahead of the leading shock, Eqs. <ref>-<ref> can be integrated to obtain the following equations: ρ̅ u^∗ = V_avg, V_avg^2/ρ̅ + p̅ + f = V_avg^2 + 1, γp̅/((γ-1)ρ̅) + (u^∗)^2/2 + Z^∗Q/Γ + g/V_avg = γ/(γ-1) + Q + V_avg^2/2, where f = \overline{ρ(u″)^2} and g = \overline{ρe″u″} + \overline{p^∘u″} are the intensities of mechanical and thermal fluctuations, respectively. With the averaged quantities V_avg, p̅, ρ̅, u^∗, and Z^∗ calculated, f and g can then be evaluated using Eqs. <ref> and <ref>. The average sound speed, which is assumed to be independent of the intensity of fluctuation, can be calculated as c^∗ = √(γp̅/ρ̅). The effective sonic point in the one-dimensional averaged wave structure is located at the position where u^∗ + c^∗ = 0. Considering Eqs. <ref> and <ref> and taking the expression for e^∗ (Eq. <ref>) into Eq. <ref>, after some algebraic manipulation, one obtains the so-called master equation as follows: du^∗/dx' = [γ u^∗ df/dx' - (γ-1) dg/dx' - ((γ-1)QV_avg/Γ) dZ^∗/dx'] / [ρ̅((c^∗)^2 - (u^∗)^2)] = ϕ/[ρ̅((c^∗)^2 - (u^∗)^2)], where ϕ_M = γ u^∗ df/dx', ϕ_T = -(γ-1) dg/dx', ϕ_R = -((γ-1)QV_avg/Γ) dZ^∗/dx', and ϕ = ϕ_M+ϕ_T+ϕ_R. The master equation describes how the particle velocity of a fluid element traversing through a one-dimensional, steady Favre-averaged wave structure is influenced by the thermicity (ϕ) due to mechanical fluctuations (ϕ_M), thermal fluctuations (ϕ_T), and chemical reaction progress (ϕ_R). Upon the flow passing through the averaged sonic point where the denominator of the master equation (Eq. <ref>) equals zero, i.e., (c^∗)^2 - (u^∗)^2 = 0, the thermicity ϕ must vanish, i.e., ϕ = 0. Otherwise, a singularity would be encountered at the averaged sonic point. Thus, the condition ϕ = 0 at the sonic surface that permits a singularity-free wave structure is known as the generalized CJ condition.
http://arxiv.org/abs/1703.09321v1
{ "authors": [ "XiaoCheng Mi", "Andrew J. Higgins", "Hoi Dick Ng", "Charles B. Kiyanda", "Nikolaos Nikiforakis" ], "categories": [ "physics.flu-dyn" ], "primary_category": "physics.flu-dyn", "published": "20170327220006", "title": "Propagation of gaseous detonation waves in a spatially inhomogeneous reactive medium" }
These two authors contributed equally to this work. Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Str. 3, D-57072 Siegen These two authors contributed equally to this work. ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona), Spain National Quantum Information Centre of Gdańsk, 81-824 Sopot, Poland Faculty of Mathematics, Physics and Informatics, Institute of Theoretical Physics and Astrophysics, University of Gdańsk, 80-952 Gdańsk, Poland Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Str. 3, D-57072 Siegen

We develop a general theory to estimate magnetic field gradients in quantum metrology. We consider a system of N particles distributed on a line whose internal degrees of freedom interact with a magnetic field. Usually gradient estimation is based on precise measurements of the magnetic field at two different locations, performed with two independent groups of particles. This approach, however, is sensitive to fluctuations of the offset field determining the level splitting of the particles and results in collective dephasing. In this work we use the framework of quantum metrology to assess the maximal accuracy for gradient estimation. For arbitrary positioning of particles, we identify optimal entangled and separable states allowing the estimation of gradients with the maximal accuracy, quantified by the quantum Fisher information. We also analyze the performance of states from the decoherence-free subspace (DFS), which are insensitive to the fluctuations of the magnetic offset field. We find that these states allow one to measure a gradient directly, without the necessity of estimating the magnetic offset field. Moreover, we show that DFS states attain a precision for gradient estimation comparable to the optimal entangled states. Finally, for the above classes of states we find simple and feasible measurements saturating the quantum Cramér-Rao bound.

Estimation of gradients in quantum metrology Otfried Gühne ============================================

§ INTRODUCTION Quantum metrology holds the promise to enhance the measurement of physical quantities with the help of quantum effects <cit.>. In practice, ideas from quantum metrology may improve gravitational wave detectors <cit.>, imaging in biology <cit.> or sensors for protein molecules <cit.>. In a typical metrological scenario one aims to estimate a certain phase φ, e.g., generated by a magnetic field, with quantum probe systems. By using entanglement between the probes, the uncertainty Δ^2φ̃ of the estimate can be reduced <cit.>. In this way, quantum metrology offers an advantage in theory, but for practical implementations noise and decoherence have to be taken into account. Here, it has been shown that noise often has a negative effect and the improved scaling gets lost <cit.>. Nevertheless, concepts such as differential metrology, where some probe systems are used to monitor the noise, can be used to maintain a quantum advantage <cit.>. A different problem is the estimation of the gradient of a spatially distributed magnetic field <cit.>. Of course, one may just measure the field at different positions <cit.> or move a single probe through the field <cit.>, and then compute the gradient. But these are not necessarily the optimal strategies, especially in cases where one aims to measure small fluctuations of a large offset field.
Then, a detection of magnetic fields with high precision and spatial resolution is often not possible <cit.>. Furthermore, techniques to measure spatially varying fields by probes with well-known positions can be reversed in order to measure the spatial distribution of probes <cit.> or the spatial distribution of entanglement <cit.> by a well-known spatial field distribution. For example, in magnetic resonance imaging (MRI) the spatial resolution of such an image depends on the strength of the applied magnetic field gradient and the calibration of this gradient. The larger the magnetic field gradient, the better the resolution. In practice, however, the resolution is limited by the patient, e.g., for patients with medical implants. An old cardiac pacemaker or a cochlear implant would make the application of a large magnetic field gradient, and therefore a high-spatial-resolution MRI, impossible. A precise calibration of the applied spatial field distribution is necessary. Here, quantum metrology could offer a solution for a precise calibration. With the findings presented here it will be possible to calibrate a gradient for MRI with high precision. In this paper we discuss the estimation of magnetic gradients using the language of quantum metrology. We consider N particles distributed in an arbitrary but fixed manner along a line, and ask in which quantum state they have to be prepared and which measurements have to be carried out in order to estimate the magnetic field gradient with the optimal precision. We also consider the case of collective dephasing noise, as it occurs in realistic set-ups with trapped ions <cit.> or neutral atoms in optical microtraps. We arrive at a general scheme with optimal states and measurements, depending on the knowledge of the offset field, or on the presence of noise. The paper is organized as follows: In Section II we explain basic facts about quantum metrology and the Fisher information, being the central figure of merit in estimation scenarios. In Section III we introduce the scenario of gradient estimation. Section IV deals with the case that the offset field B_0 at a certain position is known and the gradient should be estimated. In this part we also consider the effect of collective phase noise on the performance of gradient estimation. Section V considers the situation where the offset field B_0 is not known. Section VI briefly discusses the measurement of more general notions than the gradient of the field. Finally, we conclude and discuss the optimal strategies.

§ QUANTUM METROLOGY We first review the basics of quantum metrology <cit.> and introduce the main technical tools and notation that will be used in our work. Then we briefly present the canonical phase estimation scheme and compare it with the gradient estimation that is studied in this work. The task in a typical quantum metrology scheme is to determine an unknown parameter φ which is encoded in a quantum state ϱ_φ by a quantum channel described via the map Λ_φ. After passing through the quantum channel, the state is subsequently measured and the whole process repeated ν times to gather sufficient statistics. Let p_j(φ) be the probability for the measurement outcome j, given that the initial state was ϱ and the unknown parameter was φ. Then a result in classical statistics states that the variance Δ^2φ̃ of any unbiased and consistent estimator φ̃ of φ is lower-bounded by the Cramér-Rao Bound (CRB) <cit.>: Δ^2φ̃ ≥ 1/(ν F({p_j(φ)})), where F({p_j(φ)}) := ∑_j [∂_φ p_j(φ)]^2/p_j(φ) is the classical Fisher information (FI).
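As a simple numerical illustration of this definition (a sketch of our own, with a finite-difference derivative standing in for ∂_φ p_j), the FI of a given measurement statistics can be evaluated as follows; the Ramsey-type example at the end assumes a single qubit with h_0 = σ_z/2:

import numpy as np

def classical_fisher(p_of_phi, phi, eps=1e-6):
    # F({p_j(phi)}) = sum_j [d p_j / d phi]^2 / p_j, with a central
    # finite-difference estimate of the derivative
    p = p_of_phi(phi)
    dp = (p_of_phi(phi + eps) - p_of_phi(phi - eps)) / (2.0 * eps)
    mask = p > 0.0                     # omit outcomes with vanishing probability
    return np.sum(dp[mask]**2 / p[mask])

# Example: outcome probabilities p = (cos^2(phi/2), sin^2(phi/2)) of a Ramsey
# measurement; the FI equals 1 for all phi, the best value attainable with a
# single qubit for this generator.
p = lambda phi: np.array([np.cos(phi / 2)**2, np.sin(phi / 2)**2])
print(classical_fisher(p, 0.7))        # approximately 1.0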
A single estimator saturating Eq. (<ref>) may not always exist for the whole parameter range. When estimating small fluctuations of a parameter around a given value, the CRB in Eq. (<ref>) is guaranteed to be tight in the limit of a large number of repetitions ν <cit.>. If the probabilities p_j(φ) come from a quantum mechanical experiment, the classical Fisher information (FI) depends on the initial state ϱ, the map Λ_φ encoding the phase, Λ_φ: ϱ ↦ ϱ_φ = Λ_φ(ϱ), and the performed measurement. In quantum mechanics the measurement process is described by a Positive Operator-Valued Measure (POVM), i.e., a collection M = {M_j} of positive semidefinite operators satisfying the normalization condition ∑_j M_j = 𝕀. The probability of measuring the outcome j on the state ϱ_φ is given by p_j(φ) = Tr[M_j Λ_φ(ϱ)]. In the following we will denote the classical Fisher information for the measurement statistics obtained from ϱ_φ by the POVM M by F(ϱ_φ, M). The optimization of F(ϱ_φ, M) over all possible POVMs is called the Quantum Fisher Information F_Q[ϱ, Λ_φ] (QFI). The QFI depends solely on the quantum state ϱ and the map Λ_φ, whereas the FI depends on the state ϱ, Λ_φ and the POVM. The QFI operationally quantifies the metrological usefulness of the initial state ϱ under the map Λ_φ for the estimation of φ. The precision limitations for estimating φ are usually put in the form of the quantum Cramér-Rao Bound Δ^2φ̃ ≥ 1/(ν F_Q[ϱ, Λ_φ]). The QFI can be computed explicitly via the formula <cit.> F_Q[ϱ, Λ_φ] = 2 ∑_{α,β: λ_α+λ_β ≠ 0} |⟨α|∂_φ Λ_φ(ϱ)|β⟩|^2/(λ_α + λ_β), where {λ_α} are the eigenvalues and {|α⟩} the eigenvectors of Λ_φ(ϱ). If the parameter φ is encoded via a unitary evolution, i.e., when Λ_φ(ϱ) = U_φ ϱ U_φ^†, where U_φ = exp(-iφH) for some Hermitian operator H, then the QFI depends only on the initial state ϱ and the operator H and will be denoted by F_Q[ϱ, H]. The QFI for pure states ψ = |ψ⟩⟨ψ| in unitary time evolutions is related to the variance of the operator H, F_Q[ψ, H] = 4Δ^2_ψ H := 4[Tr(H^2 ψ) - Tr(Hψ)^2]. Let us also recall that F_Q[ϱ, H] is a convex function of ϱ. For this reason the maximal value of the QFI is always attained for pure states. In fact, for a fixed Hermitian operator H, the maximal QFI can be computed explicitly by <cit.> max_{ϱ∈D(ℋ)} F_Q[ϱ, H] = (λ_max - λ_min)^2, where λ_max and λ_min are the maximal and minimal eigenvalues of H respectively, and D(ℋ) denotes the set of (mixed and pure) quantum states supported on the Hilbert space ℋ. The pure state for which the QFI attains Eq. (<ref>) is given by |ψ_opt⟩ = (1/√2)(|max⟩ + |min⟩), where |max⟩ and |min⟩ are eigenvectors of H corresponding to the eigenvalues λ_max and respectively λ_min. In the experimental context it is common to infer the value of the parameter solely from the expectation value ⟨M⟩_ϱ_φ = Tr(ϱ_φ M) of some observable M. This is done by using the Taylor expansion φ̃_M - φ_0 ≈ (⟨M⟩_φ - ⟨M⟩_φ_0)/(∂_φ⟨M⟩|_φ_0) to construct the estimator φ̃_M [More specifically, in order to construct the estimator φ̃_M, one has to assume that the statistical fluctuations of the phase φ around the known value φ_0 are small (so that Eq. (<ref>) makes sense) and that the expectation value ⟨M⟩_φ_0 is known.] of the value of φ. This strategy is in general only optimal for a specific choice [Note, however, that locally (in the neighborhood of the specific value φ_0) the precision attainable by this method saturates the quantum Cramér-Rao bound given in Eq. (<ref>), for a suitable choice of the observable. The optimal observable in general depends on the value of the phase. In particular, it is known that the error propagation formula in Eq. (<ref>) for the so-called symmetric logarithmic derivative <cit.> saturates Eq. (<ref>).
However, in general there is no guarantee that the symmetric logarithmic derivative is an observable easily accessible in an experiment.] of the operator M. The precision of this estimator, Δ²φ̃_M, after the experiment is repeated ν times, is given by the error-propagation formula <cit.> Δ²φ̃_M = (Δ_{ϱ_{φ_0}} M)² / (ν [∂_φ⟨M⟩|_{φ_0}]²). In what follows, we will drop the number of repetitions ν in order to simplify the notation and discussion.

Standard metrological scenario: In the standard scenario a quantum device (e.g., an interferometer) acts on a single particle (photon, atom, etc.) with the Hamiltonian h_0 (often taken to be equal to σ_z/2). The device encodes the unknown parameter φ on the system of N particles by performing the parallel unitary transformation U_φ = u_φ^{⊗N}, where u_φ = e^{−iφh_0} [see Figure <ref>(a)]. This unitary evolution is generated by the global Hamiltonian H = ∑_{i=1}^N h_0^{(i)}, where h_0^{(i)} denotes the Hamiltonian on the ith particle. In classical measurement strategies (corresponding to separable input states), the particles are only classically correlated and the variance Δ²φ̃ for measuring φ is limited by the number of particles N via the standard quantum limit (SQL), Δ²φ̃ ∝ 1/N. However, in quantum mechanics we have the freedom to apply the device in parallel to an entangled state of N particles [see Figure <ref>(a)]. This allows one to obtain the accuracy Δ²φ̃ ∝ 1/N², which is usually referred to as the Heisenberg limit (HL). The concepts of SQL and HL are tailored to schemes where every particle is affected by the same unitary u_φ. As we will see later, in the context of the estimation of gradients of electric or magnetic fields it is natural to consider again parallel encoding, but this time allowing single-particle unitaries U_φ^i acting differently on different particles - see Figure <ref>(b). The standard HL and the SQL are no longer valid and new bounds in precision have to be derived. This is one of the main aims of the present paper. At this point it is important to remark that in the standard metrological scenario the Heisenberg scaling is typically destroyed by local noise, and asymptotically only an enhancement by a constant factor can be achieved <cit.>. However, in the case of global noise such as collective phase noise the scaling Δ²φ̃ ∝ 1/N² can be restored, e.g., by differential interferometry <cit.>. In this work we will also discuss the impact of collective phase noise on the performance of gradient estimation.

§ SET-UP FOR GRADIENT ESTIMATION
Throughout this paper we consider a string of N particles whose internal, qubit-like degrees of freedom are coupled to the z component of the spatially varying magnetic field B⃗(x) := B(x) e⃗_z with B(x) = B_0 + (x − x_0)G, with B_0 := B(x_0) being the field at position x_0, called the offset field, and G being the strength of the gradient. Usually in experiments the offset field B_0 is set to split the degenerate energy levels. The direction of the offset field B_0 defines the quantization axis, which is called the z axis. Furthermore, the offset field B_0 is strong compared to fields that point in other directions. Therefore, we neglect all other components. We assume without loss of generality a spatial gradient in the x direction. The particles are arranged along the x axis and labeled in such a way that B(x_i) ≤ B(x_{i+1}), where x_i with i ∈ {1,…,N} denotes the position of the ith qubit - see Fig. <ref>. The magnetic field B(x) depends on the positions x_i of the qubits. In our analysis we will focus on experiments where these positions can be measured with high precision.
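Before specializing to gradients, the SQL/HL contrast and the maximal-QFI formula of Sec. II can be illustrated with a minimal numerical sketch; the choice N = 4 and the generator H = J_z are assumptions for illustration only.

```python
import numpy as np

def qfi_pure(psi, H):
    # F_Q[psi, H] = 4 (<H^2> - <H>^2) for a pure state under U = exp(-i*phi*H)
    Hpsi = H @ psi
    return 4.0 * ((psi.conj() @ H @ Hpsi).real - (psi.conj() @ Hpsi).real ** 2)

def embed(op, i, N):
    # operator op acting on qubit i of an N-qubit register
    out = np.array([[1.0]])
    for j in range(N):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

N = 4
H = sum(embed(np.diag([0.5, -0.5]), i, N) for i in range(N))   # H = J_z

plus = np.ones(2 ** N) / 2 ** (N / 2)                 # |+>^{⊗N} (separable)
ghz = np.zeros(2 ** N); ghz[0] = ghz[-1] = 1 / np.sqrt(2)

lam = np.linalg.eigvalsh(H)
print(qfi_pure(plus, H))          # N    -> SQL, Δ²φ ≥ 1/N
print(qfi_pure(ghz, H))           # N^2  -> HL,  Δ²φ ≥ 1/N^2
print((lam[-1] - lam[0]) ** 2)    # (λ_max - λ_min)^2 = N^2, the maximal QFI
```

The GHZ state reproduces the maximal QFI because it is exactly of the form (1/√2)(|max⟩ + |min⟩) for H = J_z.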
Such precise position measurements are possible, e.g., in experiments with trapped ions <cit.> or neutral atoms in optical microtraps <cit.>. In both kinds of experiments the positions of the qubits can be measured up to ∼nm, whereas the distance between the qubits scales with ∼μm. Generally, position-dependent fields such as magnetic gradients lead to a coupling between the internal and external degrees of freedom <cit.>. Within this paper we will assume that this coupling is negligibly small, such that the position x_i of the ith qubit can be treated classically. This can always be achieved by, e.g., trapping the particles strongly enough. The Hamiltonian describing the interaction of the internal degrees of freedom with the magnetic field is given by H = ħγ H_0 + ħγ G H_G, with H_0 := B_0 J_z, H_G := (1/2)∑_{i=1}^N (x_i − x_0) σ_z^{(i)}, where σ_z^{(i)} denotes the Pauli matrix acting on the ith qubit, J_z = (1/2)∑_i σ_z^{(i)} is the collective spin operator, and γ is the coupling strength. The map Λ_G describing the unitary time evolution due to H for time t is given by Λ_G(ϱ) = U_G ϱ U_G^†, where U_G := exp[−iγ B_0 t J_z − iγ G t ∑_{i=1}^N (x_i − x_0) σ_z^{(i)}/2].

In the following we will use tools of quantum metrology to derive limits in precision for a classical and a quantum-enhanced estimation of the gradient G, analogous to the SQL and HL known from the standard phase estimation scheme depicted in Fig. <ref>(a). The maximal achievable precision of G depends on the knowledge about the magnetic offset field B_0. In general, an experimenter could first measure the offset field with some uncertainty Δ²B̃_0 and would have some a priori knowledge about the offset field before estimating the gradient G. However, throughout this paper we focus on two extremal situations: full and no a priori knowledge about B_0. The first scenario is covered in Section <ref>; it allows us to study the clean situation where the only parameter to be estimated is the gradient of the field, and it leads to the ultimate bounds in precision. The second scenario is described in Section <ref> and applies to two interconnected cases where either (i) an experimenter has no access to the reference frame associated with the rotations generated by the offset field or (ii) the system is strongly affected by the action of collective phase noise. As we discuss in Section <ref>, collective phase noise is the main source of noise in setups of trapped ions or trapped atoms in an optical microtrap. It leads to an effective erasure of the information about the offset field B_0. In Section <ref> we will argue that in these experimental scenarios the precision in gradient estimation does not gain much from measurements of the offset field or from having partial knowledge about it. Hence, we a fortiori justify why we focus on the two extreme cases of full and no a priori knowledge about B_0. Other experimental scenarios may require a more refined analysis of the problem, such as multiparameter estimation <cit.> and systematically taking into account the lack of knowledge about B_0 <cit.>. Please note that in the main text of the paper we make two implicit assumptions on the system that we are considering. First, we assume that x_i ≥ x_0 for all particles. This affects the precise form of the optimal states and the formula for the maximal QFI. For the sake of simplicity of the presentation we decided to discuss this in detail in Appendix <ref>. Second, we assume that the number of particles N is even. This has only a slight effect on the form of the results for the case of no a priori knowledge about B_0.
The change for odd N is that the summation range has to be changed from N/2 to ⌊N/2⌋. These cases are discussed in full generality in Appendix <ref>.

§ GRADIENT ESTIMATION WITH FULL A PRIORI KNOWLEDGE ABOUT B_0
Having full a priori knowledge about the offset field B_0 amounts to treating it as a fixed constant. Using the commutation relations [J_z, H_G] = [U_G, H_G] = 0 and Eq. (<ref>), we obtain the following relation: F_Q[ϱ, Λ_G] = (γt)² F_Q[ϱ, H_G]. In what follows we will reserve the notation F_Q(ϱ) := F_Q[ϱ, Λ_G] to avoid ambiguity and to simplify the notation. The physical meaning of Eq. (<ref>) is that the QFI for gradient estimation is reduced to the QFI for the standard Hamiltonian, F_Q[ϱ, H_G], and that F_Q[ϱ, Λ_G] does not depend on the value of the magnetic offset field B_0 at x_0. However, the QFI does depend on x_0, via the dependence of H_G on this parameter - see Eq. (<ref>). Notice that the unitary transformation generated by the field has a product structure, U_G = ⊗_{i=1}^N u_G^i, where u_G^i = exp{−iγt [B_0 + G(x_i − x_0)] σ_z^{(i)}/2} is a single-qubit unitary. Therefore, the problem of deriving the maximal QFI and the optimal state becomes mathematically equivalent to the case of parallel encoding of the phase given in Figure <ref>(b). The rest of this section is organized as follows. First, in Part A we identify the bounds in precision for the estimation of the gradient G with separable and entangled states. Then, in Part B we give simple, physically accessible measurements saturating these bounds. Finally, in Part C we discuss the influence of collective phase noise on the proposed gradient estimation scheme.

§.§ Bounds on precision for gradient estimation
Here we first derive precision bounds for fixed positions {x_i} and identify optimal probe states. Then we discuss the case of linear spacing of the particles. Finally, we identify the optimal positioning of the qubits and give the ultimate bounds for gradient estimation.

Separable states: Our first result concerns the maximal QFI for estimating G using separable states. We start with the observation that from the decomposition H_G = ∑_{i=1}^N h_G^{(i)}, with h_G^{(i)} = (x_i − x_0) σ_z^{(i)}/2, we can simplify the QFI for product states ϱ = ⊗_{i=1}^N ϱ_i via F_Q[⊗_{i=1}^N ϱ_i, H_G] = ∑_{i=1}^N F_Q[ϱ_i, h_G^{(i)}], by using the additivity of the QFI <cit.>. Using this relation together with the convexity of the QFI and Eq. (<ref>), we find that the maximum of the QFI on the set SEP_N of separable states of N qubits is obtained for the product state ⊗_{i=1}^N ϱ_i such that each ϱ_i maximizes F_Q[ϱ_i, h_G^{(i)}]. Therefore, we have max_{ϱ∈SEP_N} F_Q(ϱ) = (γt)² ∑_{i=1}^N (x_i − x_0)², and the maximum is obtained for the state |P⟩ := |+⟩^{⊗N}, with |+⟩ = (1/√2)(|0⟩ + |1⟩). From Eq. (<ref>) we get the bound in precision for separable input states: Δ²G̃ ≥ 1/[(γt)² ∑_{i=1}^N (x_i − x_0)²].

Entangled states: The second result concerns the maximal QFI over all states from the N-qubit Hilbert space ℋ_N. To compute this maximum we use Eq. (<ref>), Eq. (<ref>), and the fact that H_G can be explicitly diagonalized in the computational basis of the N-qubit Hilbert space ℋ_N. We obtain max_{ϱ∈S(ℋ_N)} F_Q(ϱ) = (γt)² [∑_{i=1}^N (x_i − x_0)]², with the optimal state being the N-qubit Greenberger-Horne-Zeilinger (GHZ) state <cit.> |GHZ⟩ := (1/√2)(|0⟩^{⊗N} + |1⟩^{⊗N}). Analogously to the case of separable states in Eq. (<ref>), we use the quantum Cramér-Rao bound to get the limitations on the precision for estimating G with entangled states: Δ²G̃ ≥ 1/{(γt)² [∑_{i=1}^N (x_i − x_0)]²}. Let us remark that both the maximal QFI in Eq. (<ref>) and the maximal QFI for separable states in Eq.
(<ref>) strongly depend on the positioning of the particles and on the coordinate x_0 at which the value of the magnetic offset field B_0 is assumed to be known. Notice, however, that the quantum states for which the optimal values are attained do not depend on the spacing of the particles. Moreover, the optimal states derived by us are invariant under the relabeling of the qubits according to B(x_i) ≥ B(x_{i+1}), which proves that our scheme works also for a negative value of the gradient G. Let us finally remark that in our analysis we have assumed, according to the note at the end of Section <ref>, that x_i ≥ x_0. The structure of the optimal states and the precise formula for the maximal QFI change if this assumption is dropped. We discuss this in detail in Appendix <ref>.

Equidistant spacing: Neutral atoms in an optical microtrap are equidistantly spaced. We consider an equidistant spacing in the interval [x_0, L+x_0], i.e., x_i − x_0 = (i−1)L/(N−1), for measuring the gradient G with N qubits. For this positioning the QFI for separable states is given by max_{ϱ∈SEP_N} F_Q(ϱ) = [(γtL)²/6] N(2N−1)/(N−1), which (for fixed length L) scales proportionally to N for a large number of particles. On the other hand, the QFI for entangled states becomes max_{ϱ∈S(ℋ_N)} F_Q(ϱ) = [(γtL)²/4] N², and scales with N² (for fixed L).

Optimal positioning and the ultimate bounds: We can optimize the QFI in Eq. (<ref>) and Eq. (<ref>) over the positioning. Again, we assume that the particles are located in the interval [x_0, L+x_0] and we fix both x_0 and L. In order to maximize the right-hand sides of both Eq. (<ref>) and Eq. (<ref>), an experimenter should put all qubits at the position x_i = x_0 + L. This means that the particles are as far away as possible from the point x_0. If this is the case, we get for separable states max_{ϱ∈SEP_N} F_Q(ϱ) = (γt)² N L². Similarly, the maximal QFI over all states becomes max_{ϱ∈S(ℋ_N)} F_Q(ϱ) = (γt)² N² L². An experimental realization of this positioning involves a second dimension of the system. Atoms in an optical microtrap can be arranged in a two-dimensional lattice. Then one dimension can be defined as the x dimension, and all qubits can be placed at the single position x = x_0 + L by using the second dimension. In state-of-the-art ion traps this arrangement is hard to realize. However, in future on-chip ion traps as proposed, e.g., in Refs. <cit.>, this arrangement is possible. In both types of experiments the extension of the qubit chain at x_0 + L must be much smaller than L in order to exclude effects due to field gradients in the second dimension. Using Eq. (<ref>) and Eq. (<ref>) we can now give the ultimate bounds for the precision of estimating G with N qubits placed in the fixed interval [x_0, L+x_0], when we perfectly know the value of the field at x_0. For separable probe states, the best achievable precision for the determination of G is given by Δ²G̃ ≥ 1/[(γt)² N L²], similar to the SQL. Likewise, for entangled probe states, we get a Heisenberg-like scaling given by Δ²G̃ ≥ 1/[(γt)² N² L²]. In both cases the scaling behavior in N (for fixed length L) is identical to the case of the estimation of global parameters. This is not surprising, since we assumed that we perfectly know the value of the offset field B_0 at the position x_0, and the optimal strategy is to use all particles for the estimation of the field at the position x_0 + L.

§.§ Optimal measurements for experimental realizations
The optimal states derived by us can be prepared in experimental settings such as trapped ions and neutral atoms in optical microtraps.
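(As a quick numerical comparison of the bounds derived in the previous subsection, the following sketch evaluates them for the two positionings discussed there; the values γ = t = L = 1 and N = 10 are illustrative assumptions.)

```python
import numpy as np

# Separable bound F_sep = (gamma*t)^2 sum_i (x_i - x0)^2 and entangled bound
# F_ent = (gamma*t)^2 [sum_i (x_i - x0)]^2, for equidistant and end-stacked chains.
gamma, t, L, x0, N = 1.0, 1.0, 1.0, 0.0, 10

for label, x in [("equidistant", x0 + np.arange(N) * L / (N - 1)),
                 ("all at x0+L", np.full(N, x0 + L))]:
    F_sep = (gamma * t) ** 2 * np.sum((x - x0) ** 2)
    F_ent = (gamma * t) ** 2 * np.sum(x - x0) ** 2
    print(label, F_sep, F_ent)

# equidistant: F_sep = L^2 N(2N-1)/(6(N-1)),  F_ent = N^2 L^2 / 4
# all at x0+L: F_sep = N L^2 (SQL-like),      F_ent = N^2 L^2 (HL-like)
```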
In experiments with trapped ions the preparation of GHZ states <cit.> with up to N=14 qubits with high fidelity is possible <cit.>. In experiments with neutral atoms in optical microtraps the preparation of a Bell or GHZ state as defined in Eq. (<ref>) with N=2 qubits has been achieved <cit.>. However, as explained in Section <ref>, the bounds involving the QFI implicitly assume the application of the optimal measurement. In general, the optimal measurement saturating the quantum Cramér-Rao bound is the projective measurement in the eigenbasis of the so-called symmetric logarithmic derivative <cit.>. This measurement can be difficult to perform in practice. Fortunately, the optimal measurement is not necessarily unique. In what follows we show that parity measurements in the x basis are sufficient in order to reach the maximal possible precision in gradient estimation with GHZ states. Parity measurements can easily be performed in experiments with trapped ions <cit.>, as first proposed by Bollinger et al. in 1996 <cit.>, and with neutral atoms in optical microtraps <cit.>. A parity measurement is basically a detection of the number of qubits in either the spin-up or the spin-down state and can be realized with almost 100% efficiency <cit.>. Interestingly, the parity measurement does not depend on the spacing of the particles and is thus the same for any configuration {x_i}.

Classical Fisher information: A parity measurement in the x basis is a projective measurement 𝒫 = {P_+, P_-} with the projection operators P_± = (1/2)(𝕀 ± σ_x^{⊗N}). After the time t the initial N-qubit state ϱ evolves into ϱ_G = U_G ϱ U_G^†. Upon measuring 𝒫 on ϱ_G, the output probabilities are given by p_±(G) = Tr(U_G ϱ U_G^† P_±). In Appendix <ref> we show that Tr(U_G ψ_GHZ U_G^† σ_x^{⊗N}) = cos[NγB_0 t + γGt ∑_{i=1}^N (x_i − x_0)], where ψ_GHZ := |GHZ⟩⟨GHZ|. Using this expression together with Eq. (<ref>) and the definition of the FI in Eq. (<ref>), we find that the classical Fisher information associated with the statistics of parity measurements with GHZ states is given by F(U_G ψ_GHZ U_G^†, 𝒫) = (γt)² [∑_{i=1}^N (x_i − x_0)]², and equals the QFI for estimating G with GHZ states [see Eq. (<ref>)]. Therefore, parity measurements in the x basis are optimal for gradient estimation with GHZ states. The choice of an optimal measurement is not unique; also measurements of the collective spin operator in the x direction, J_x, are optimal, as shown in Appendix <ref>.

Error propagation formula: It turns out that measurements of the expectation value of the parity M̂ = P_+ − P_- = σ_x^{⊗N} with GHZ states also saturate the ultimate limitations given in Eq. (<ref>) for the accuracy of the measurement of G. In usual experiments this expectation value ⟨M̂⟩ is measured for different probing times t, and if the initial state is ψ_GHZ the theoretical time dependence is given in Eq. (<ref>). In this measurement scheme the gradient G is deduced from the value of the frequency, which can be estimated by a fit to the data. This procedure, however, requires the value of the offset field B_0 to be known. If one has no a priori knowledge about B_0, one has to average <cit.> over all possible values of B_0 and one cannot infer the value of G. It is possible to avoid this problem by measuring this expectation value for different positionings {x_i} at a fixed probing time t. This strategy has been realized with a single ion moving through a gradient field in Ref. <cit.>.
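The equality between the parity FI and the GHZ QFI can also be verified directly from the two-outcome statistics; in the minimal sketch below all parameter values are illustrative assumptions.

```python
import numpy as np

# FI of the parity statistics p_± = (1 ± <sigma_x^{⊗N}>)/2 on a GHZ probe,
# compared with the QFI (gamma*t)^2 [sum_i (x_i - x0)]^2.
gamma, t, B0, G, x0, N, L = 1.0, 1.0, 0.3, 0.2, 0.0, 6, 1.0
x = x0 + np.arange(N) * L / (N - 1)          # assumed equidistant positions

def p_parity(G):
    alpha = N * gamma * B0 * t + gamma * G * t * np.sum(x - x0)
    return np.array([(1 + np.cos(alpha)) / 2, (1 - np.cos(alpha)) / 2])

eps = 1e-6
dp = (p_parity(G + eps) - p_parity(G - eps)) / (2 * eps)
F_parity = float(np.sum(dp ** 2 / p_parity(G)))
F_Q = (gamma * t) ** 2 * np.sum(x - x0) ** 2
print(F_parity, F_Q)          # the two numbers agree for any G with sin(alpha) != 0
```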
This positioning-variation scheme is, however, less practical, as one has to make sure that the initial quantum states are the same despite the change in the configuration of the chain. The case of no a priori knowledge about the offset field will be considered systematically in Section <ref>. With the error propagation formula in Eq. (<ref>), using Eq. (<ref>) together with the fact that ⟨M̂²⟩ = 1, we can show that both measurement strategies (varying the probing time at a fixed positioning and varying the positioning at a fixed probing time) saturate the Cramér-Rao bound, and therefore also the analogue of the HL for gradient estimation, assuming that the positioning {x_i} and the measurement time t can be determined with high precision; hence we have Δ²G̃_M̂ = 1/{(γt)² [∑_{i=1}^N (x_i − x_0)]²}.

§.§ Gradient estimation in presence of collective phase noise
In realistic experiments noise affects the time evolution of a quantum system, reducing the entanglement of the probe states. This can diminish the enhancement in precision for gradient estimation obtained with entangled states. In both types of experiments considered in this work, collective phase noise is the main source of decoherence. In experiments with trapped ions this noise is caused by temporal magnetic field fluctuations <cit.>, whereas in experiments with neutral atoms in optical microtraps collective phase noise is caused by temporal fluctuations of the trapping potential <cit.>. In what follows we describe the influence of collective phase noise on the proposed scheme for the estimation of the gradient of the magnetic field.

Collective phase noise: We focus our attention on trapped ions. We follow the steps and assumptions for describing the noise source in this system given in Ref. <cit.>. The total Hamiltonian of the system including the noise is given by H' = ħγ H_0 + ħγ G H_G + ħγ' ΔE(t) J_z, where the operators H_0 and H_G are defined in Eq. (<ref>), γ' is the coupling constant, and ΔE(t) is the temporally fluctuating random field. We will use ⟨·⟩ to denote the average over the stochastic fluctuations of this field. Following Ref. <cit.> we assume (i) no systematic time-dependent bias due to phase fluctuations, ⟨δφ⟩ = 0, where δφ := ∫_0^t dτ ΔE(τ); (ii) Gaussian character of the fluctuations δφ; (iii) stationarity of the noise process, ⟨ΔE(t+τ)ΔE(t)⟩ = ⟨ΔE(τ)ΔE(0)⟩; and finally (iv) that the time correlation ⟨ΔE(t)ΔE(0)⟩ = (ΔE)² exp[−t/τ_c] decays exponentially, with correlation time τ_c and fluctuation strength ΔE. Now, for a fixed realization of the stochastic process, the output state at a given time t is given by ϱ(t) = U'_t ϱ (U'_t)^†, with U'_t = U_G U_noise, where U_G is given in Eq. (<ref>) and U_noise = exp[−iγ' ∫_0^t ΔE(τ)dτ J_z] describes the noise acting on the system. By averaging over the realizations of the stochastic process ΔE(t) (for fixed time t) we get that the initial state ϱ is mapped onto Λ'_G(ϱ), where Λ'_G(ϱ) = U_G ϱ̄(t) U_G^†, with ϱ̄(t) := ⟨U_noise ϱ U_noise^†⟩. Since the encoding of the value of the gradient commutes with the map describing the noise, we have F_Q(ϱ, Λ'_G) = (γt)² F_Q[ϱ̄(t), H_G]. That is, in order to compute the QFI in the presence of collective phase noise it suffices to calculate the "standard" QFI of the noisy initial state. In ion trap experiments, for which this paper is relevant, the repetition rate (i.e., the rate with which a single experiment can be repeated) is typically fixed for the cancellation of another noise source, and t ≈ μs-ms <cit.>. Moreover, the correlation time of the field fluctuations ΔE(t) is of the order of τ_c ≈ s.
Therefore, we can assume τ_c ≫ t <cit.>.

Noisy gradient estimation with GHZ states: For a GHZ state in the presence of collective phase noise, the QFI can be calculated analytically (see Appendix <ref> for details) and takes the closed form F_Q(ψ_GHZ, Λ'_G) = d(t)² γ² t² [∑_{i=1}^N (x_i − x_0)]², with d(t) = exp{−(Nγ'ΔE τ_c)² [exp(−t/τ_c) + t/τ_c − 1]}. We see that the QFI first increases with t² and then, in the limit of large times, decreases double exponentially to zero. Therefore, there exists a global maximum [Because the repetition rate is fixed, the relevant figure of merit to optimize is F_Q(t) rather than F_Q(t)/t, which appears naturally when a variation of the repetition rate is possible and the total time of the experimental procedure is fixed <cit.>.] and an optimal measurement time, as shown in Fig. <ref>. Under the condition τ_c ≫ t [see the discussion below Eq. (<ref>)] we get exp(−t/τ_c) + t/τ_c − 1 ≈ (1/2)(t/τ_c)², which gives the optimal measurement time t_opt = 1/(Nγ'ΔE) and the maximal QFI F_Q = γ² [∑_{i=1}^N (x_i − x_0)]² / [e (Nγ'ΔE)²], which can be maximized over the positioning x_i. The positioning maximizing the QFI in Eq. (<ref>) again amounts to placing all qubits as far away as possible from x_0, that is, at x_0 + L. Then the maximal QFI is given by F_Q = γ² L² / [e (γ'ΔE)²], which scales neither with N nor with N². Now we discuss the saturation of the quantum Cramér-Rao bound for the estimation of the gradient G, with the QFI as in Eq. (<ref>), with parity measurements. Generically we can achieve such a saturation by a suitable choice of the global phase θ for the probe state |GHZ_θ⟩ := (1/√2)(|0⟩^{⊗N} + exp(iθ)|1⟩^{⊗N}), and performing a parity measurement M̂ = σ_x^{⊗N}, as shown in Appendices <ref> and <ref>. Then the Cramér-Rao bound can be saturated if the condition cot[NγB_0 t + γGt ∑_{i=1}^N (x_i − x_0) + θ] = 0 holds. However, due to this condition an experimenter would need full knowledge about B_0 and G at all measurement times t in order to prepare the state in Eq. (<ref>), which is not feasible.

Noisy gradient estimation with the state |P⟩: In general it is difficult to evaluate the QFI in Eq. (<ref>) analytically for arbitrary probe states. The initial pure product state ψ_P := |P⟩⟨P|, given in Eq. (<ref>), evolves into a mixed state ψ̄_P(t) due to the noise. We will focus on the regime of large probing times, t → ∞. In this limit the state does not change any more under collective phase noise, [ψ̄_P(∞), J_z] = 0 <cit.>, and therefore we call this regime the steady-state regime. We find that for this state the QFI does not vanish in the limit of large times (see Appendix <ref> for details): F_Q(ψ_P, Λ'_G) → (γt)² [∑_{i=1}^N x_i² − (1/N)(∑_{i=1}^N x_i)²] for t → ∞. Thus, in the steady-state regime, the product state |P⟩ performs better than the GHZ state in the presence of noise. Interestingly, the QFI in Eq. (<ref>) is independent of x_0. This is due to the fact that in the steady-state regime the probe state is not only invariant under collective phase noise but also under the offset field B_0, since [ψ̄_P(∞), J_z] = 0.

Optimal positioning: For the GHZ state in the steady-state regime the QFI vanishes, independently of the positioning of the qubits. However, for the separable state |P⟩ in the steady-state regime the optimal positioning within the interval x_i ∈ [x̃_0, x̃_0+L] (see Lemma 1 in Appendix <ref> for the proof) is to place one half of the qubits at the position x_i = x̃_0 and the other half as far away as possible, at x_i = L + x̃_0 (note that the position x̃_0 is some fixed reference coordinate and can have any value, including x_0).
For this case the QFI is given by F_Q(ψ_P, Λ'_G) → (γt)² L² N/4, which is linear in N and a constant factor of 4 smaller than in the case of having no noise and placing all qubits at x_i = x_0 + L [see Eq. (<ref>)]. This can be realized by an arrangement similar to the one described in Section <ref> A. The optimal spacing of the particles leads to a situation in which the particles are located at two different positions. As a result, two different unitaries u_G^I and u_G^II act on one half of the particles each. This is a local estimation strategy similar to differential interferometry <cit.>. In these works phase and frequency estimation in the presence of correlated phase noise were investigated. It was shown that the quadratic scaling in N can be preserved by the usage of differential interferometry in the presence of correlated noise. Furthermore, it was shown that for the product state |P⟩ a linear scaling in N, up to a constant factor, can be preserved. In Eq. (<ref>) we find a similar result. Similar results were also found in Ref. <cit.>. There, the two unitaries u_G^I and u_G^II = u_{−G}^I act on one half of the particles each. For this estimation scenario it was shown that the HL can be preserved in the presence of correlated dephasing.

§ GRADIENT ESTIMATION WITHOUT A PRIORI KNOWLEDGE ABOUT B_0
In Section <ref> we derived bounds in precision for the estimation of the gradient G, assuming complete knowledge of the offset field B_0. The question arises whether it is possible to measure G without knowing anything about B_0. We can already answer this question: collective phase noise can be interpreted as an erasure of information about B_0 in time. Therefore, waiting long enough leads to the case of having no knowledge about B_0. In Eq. (<ref>) we saw that in the steady-state regime the QFI for the GHZ state |GHZ⟩ vanishes, such that the GHZ state is useless for estimating G in the case of having no knowledge about B_0. However, in Eq. (<ref>) we saw that in the steady-state regime the QFI for the product state |P⟩ does not vanish. Therefore, it is possible to measure G without knowing B_0. In this section we systematically study the limits on the accuracy of estimating the gradient G when no knowledge about B_0 is available. We first (A) identify optimal probe states and the corresponding bounds in precision for estimating G when no a priori knowledge of B_0 is available. Interestingly, we find that these bounds asymptotically behave in a similar way as in the noiseless case. Then (B) we prove that, similarly to the noiseless case, parity measurements in the x basis saturate the derived bounds in precision. Finally, (C) we compare these results with the other measurement strategies considered in this paper and with earlier works on the subject.

§.§ Bounds in precision for gradient estimation
In what follows we derive precision bounds for estimating a gradient with a fixed positioning {x_i}. Then we discuss the case of equidistant spacing. Finally, we derive the optimal positioning for particles located in a fixed interval of length L. When assuming no a priori knowledge about the offset field B_0, the Hamiltonian of the system does not change compared to Eq. (<ref>).
However, now the offset field B_0 is unknown and must therefore be treated as a random variable, and all states, operations, and measurements performed on the system have to be averaged over all realizations of this random variable [Because of the specific form of the measurements and evolutions used in our analysis, we can limit ourselves to averaging only the initial quantum states.]. Phrased in a different language, we erase the reference frame <cit.> associated with the knowledge of the offset field [or, formally speaking, the one-parameter group of transformations formed by the operators exp(−i2θJ_z), where θ ∈ [0, 2π)]. Complete erasure of the knowledge about B_0 is modeled by averaging the initial state ϱ over all possible rotations around the z axis: ϱ̄ := ∫_0^{2π} (dθ/2π) e^{−i2θJ_z} ϱ e^{i2θJ_z}. States of the above form are called decoherence-free states, since they are stationary states with respect to collective phase noise: [ϱ̄, J_z] = 0. Conversely, every state τ satisfying [τ, J_z] = 0 can be written as τ = τ̄ for a suitable τ <cit.>. Decoherence-free states are insensitive to the offset field B_0 but can in general be affected by gradients, i.e., [ϱ̄, H_G] ≠ 0. This suggests that they can be used for gradient estimation. In what follows we will use DFS_N to denote the set of decoherence-free states of N qubits in the considered scenario. It is easy to see from the definition that every ϱ̄ ∈ DFS_N can be written as a convex combination ϱ̄ = ∑_{k=0}^N p_k ϱ_k of decoherence-free states ϱ_k ∈ S(𝒱_k), each supported on the subspace 𝒱_k spanned by the computational basis vectors |i_1⟩|i_2⟩…|i_N⟩ containing exactly k excitations ("1") and N−k qubits in the ground state "0" [Alternatively, 𝒱_k can be characterized as the eigenspace of J_z corresponding to the eigenvalue λ_k = (N−2k)/2.].

Optimal decoherence-free states: In order to compute how useful decoherence-free states are for the estimation of the gradient G, we use directly Eq. (<ref>), which reduces the problem of computing the QFI for the proposed metrological scheme to the computation of F_Q[ϱ, H_G], where H_G = ∑_{i=1}^N (x_i − x_0) σ_z^{(i)}/2. Using the fact that H_G preserves the subspaces 𝒱_k and the properties of F_Q[ϱ, H_G] (see Appendix <ref> for details), we prove that in order to find optimal decoherence-free states it suffices to look only at optimal (and thus necessarily pure) states in each subspace 𝒱_k separately: max_{ϱ̄∈DFS_N} F_Q(ϱ̄) = (γt)² max_{k=0,…,N} max_{ϱ∈S(𝒱_k)} F_Q[ϱ, H_G]. The maximal attainable QFI for states in the subspace 𝒱_k with k excitations is given by (see Appendix <ref>) max_{ϱ∈S(𝒱_k)} F_Q(ϱ) = (γt)² [∑_{i=1}^l (x_i − x_{N−i+1})]², where l = min{k, N−k}. We observe that the above result is independent of x_0 and that it does not change under a simultaneous translation x_i → x_i + δ of each x_i by the same distance δ. This is a consequence of the relation [ϱ̄, J_z] = 0, valid for ϱ̄ ∈ DFS_N. The optimal decoherence-free (ODF) state for a given number of excitations k, yielding Eq. (<ref>), is given by |ODF_k⟩ = (1/√2)(|1⟩^{⊗k} ⊗ |0⟩^{⊗N−k} + |0⟩^{⊗N−k} ⊗ |1⟩^{⊗k}). The detailed derivation of Eq. (<ref>) and Eq. (<ref>) is given in Appendix <ref>. A remarkable fact is that for decoherence-free states the QFI for the estimation of the gradient G does not decrease in time due to the collective phase noise. In Fig. <ref> we show the QFI from Eq. (<ref>) for different numbers of excitations k and for different positionings {x_i}. We observe that the maximal QFI is attained exactly for k = N/2, for an arbitrary positioning of the qubits [For simplicity we assumed that N is even. In general the maximal QFI is attained for k = ⌊N/2⌋ (see Appendix <ref> for details).].
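This behavior can be checked quickly by evaluating the bound of Eq. (<ref>) in a numerical scan over k; the random positioning below is an assumption for illustration.

```python
import numpy as np

# Decoherence-free bound F_k = (gamma*t)^2 [sum_{i=1}^{l} (x_i - x_{N-i+1})]^2,
# l = min(k, N-k), scanned over the number of excitations k.
rng = np.random.default_rng(1)
gamma, t, N = 1.0, 1.0, 8
x = np.sort(rng.uniform(0.0, 1.0, N))     # ordered positions x_1 <= ... <= x_N

def F_k(k):
    l = min(k, N - k)
    return (gamma * t) ** 2 * np.sum(x[:l] - x[::-1][:l]) ** 2

vals = [F_k(k) for k in range(N + 1)]
print(np.argmax(vals), vals)              # the maximum sits at k = N/2
```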
This observation can be proven analytically for any positioning of the particles (see Appendix <ref> for details), and we find max_{ϱ̄∈DFS_N} F_Q(ϱ̄) = (γt)² [∑_{i=1}^{N/2} (x_i − x_{N−i+1})]², with |ODF_{N/2}⟩ being the optimal state. It is important to note that, just like in the noiseless case [see Eq. (<ref>)], the optimal state does not depend on the spacing of the particles. From the quantum Cramér-Rao bound in Eq. (<ref>) we get the ultimate bound on the precision of the estimation of the gradient G with decoherence-free states: Δ²G̃ ≥ 1/{(γt)² [∑_{i=1}^{N/2} (x_i − x_{N−i+1})]²}.

Separable states: In the case of no a priori knowledge about the offset field it is hard to derive precision bounds for separable states. This follows from the difficulty of characterizing the convex set DFS_N ∩ SEP_N, consisting of states that are both decoherence-free and separable. In particular, the extremal points of DFS_N ∩ SEP_N generally do not have the form of pure states. We leave the problem of finding the optimal decoherence-free separable state open. However, let us remark that the decoherence-free separable state [Recall that the averaging operation ϱ ↦ ϱ̄(t) preserves the separability of quantum states.] ψ̄_P(∞) asymptotically exhibits the same (linear in N) scaling of the QFI as the optimal product state ψ_P = |P⟩⟨P|, at least for the case of equal and optimal spacing (that is, placing half of the qubits at each of the positions x̃_0 and x̃_0 + L for ψ̄_P(∞), and placing all qubits at the position x_0 + L for ψ_P) - see Eq. (<ref>) and Eq. (<ref>).

Equidistant spacing: Just like in the case of complete knowledge about B_0 (described in Sec. <ref>), we consider a measurement scheme in which N particles are equally spaced in the interval [x̃_0, x̃_0+L], i.e., x_i = x̃_0 + (i−1)L/(N−1) (recall that the position x̃_0 is some fixed reference coordinate and can have any value, including x_0). Then for the optimal decoherence-free state ψ_ODF^{N/2} := |ODF_{N/2}⟩⟨ODF_{N/2}| we have F_Q(ψ_ODF^{N/2}) = [(γtL)²/16] N⁴/(N−1)², which scales ∝ N² for large numbers of particles. With the optimal separable state from the noiseless case, |P⟩, the QFI for equidistant spacing in the steady-state regime becomes F_Q(ψ̄_P(∞)) = [(γtL)²/12] N(N+1)/(N−1), which scales ∝ N for large numbers of particles.

Optimal positioning: Optimizing the right-hand side of Eq. (<ref>) over the positions x_i ∈ [x̃_0, x̃_0+L], we find the maximal QFI over all decoherence-free states, max_{ϱ̄∈DFS_N} F_Q(ϱ̄) = [(γtL)²/4] N², which is independent of x̃_0 and scales ∝ N². The optimal positioning leading to Eq. (<ref>) is x_i = x̃_0 for i ≤ N/2 and x_i = x̃_0 + L for i > N/2 (see Appendix <ref> for the proof). This corresponds to locating the particles at two positions with the maximal possible distance L. Recall that the same positioning was found to be optimal for estimating the gradient G with the state |P⟩ in the steady-state regime [ψ̄_P(∞), as discussed above Eq. (<ref>)].

§.§ Optimal measurements for the experimental realization
The optimal decoherence-free states |ODF_k⟩ are equivalent under local unitaries to GHZ states; that means that ODF states can be transformed into GHZ states, and vice versa, by local unitaries. ODF states can be prepared with high fidelity by a global Sørensen-Mølmer gate <cit.> in experiments with trapped ions. This has been performed for N=14 qubits in Ref. <cit.>. In experiments with neutral atoms in a lattice, the preparation of the ODF state with k = N/2 for N=2 qubits has been realized <cit.>. We therefore conclude that optimal probe states for gradient estimation can be realized in the experiments considered in this work.
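The invariance properties used above can also be made explicit in a small simulation: twirling over the rotations generated by J_z, Eq. (<ref>), destroys the GHZ coherences and with them the QFI, while |ODF_{N/2}⟩ is left untouched. In the sketch below, N = 4, x_0 = 0, γt = 1, and the equidistant positions are assumptions for illustration.

```python
import numpy as np

def qfi(rho, H, tol=1e-12):
    # General QFI for unitary encoding:
    # F_Q[rho,H] = 2 sum_{a,b: l_a+l_b>0} (l_a-l_b)^2/(l_a+l_b) |<a|H|b>|^2
    lam, U = np.linalg.eigh(rho)
    Hab = U.conj().T @ H @ U
    la, lb = lam[:, None], lam[None, :]
    W = np.where(la + lb > tol, (la - lb) ** 2 / np.maximum(la + lb, tol), 0.0)
    return float(2 * np.sum(W * np.abs(Hab) ** 2))

N = 4
x = np.arange(N) / (N - 1)                        # assumed positions in [0, 1]
bits = [(i >> np.arange(N)[::-1]) & 1 for i in range(2 ** N)]
sz = np.array([1 - 2 * b for b in bits], float)   # sigma_z eigenvalues per qubit
Jz = np.diag(sz.sum(axis=1) / 2)
HG = np.diag(sz @ x / 2)                          # H_G with x0 = 0

def twirl(rho, n=200):                            # average over exp(-2i*theta*Jz)
    out = np.zeros_like(rho, dtype=complex)
    for th in np.linspace(0, 2 * np.pi, n, endpoint=False):
        U = np.diag(np.exp(-2j * th * np.diag(Jz)))
        out += U @ rho @ U.conj().T
    return out / n

ghz = np.zeros(2 ** N); ghz[0] = ghz[-1] = 1 / np.sqrt(2)
odf = np.zeros(2 ** N)                            # |ODF_{N/2}> = (|1100>+|0011>)/sqrt(2)
odf[int('1100', 2)] = odf[int('0011', 2)] = 1 / np.sqrt(2)

for name, psi in [("GHZ", ghz), ("ODF", odf)]:
    rho = np.outer(psi, psi.conj())
    print(name, qfi(rho, HG), qfi(twirl(rho), HG))
# GHZ: QFI drops from (sum_i x_i)^2 = 4 to ~0; ODF: 16/9 before and after twirling
```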
As in the case of full knowledge about B_0 and the absence of noise, the question remains which measurement should be performed in order to attain the maximal precision. In what follows we show that for parity measurements in the x basis both (i) the classical Fisher information [see Eq. (<ref>)] and (ii) the error propagation formula [see Eq. (<ref>)] saturate the quantum Cramér-Rao bound in Eq. (<ref>) for the optimal decoherence-free states |ODF_k⟩.

Classical Fisher information: As described in Sec. <ref>, the projective measurement 𝒫 of the parity in the x basis is described by the projectors P_± = (1/2)(𝕀 ± σ_x^{⊗N}) [see Eq. (<ref>)]. In Appendix <ref> we show that the expectation value of the parity on the state ψ_ODF^k evolves according to Tr(U_G ψ_ODF^k U_G^† σ_x^{⊗N}) = cos[γtG ∑_{i=1}^l (x_i − x_{N−i+1})], where l = min{k, N−k}. Using this result and performing computations analogous to the ones given in Sec. <ref>, we get F(U_G ψ_ODF^k U_G^†, 𝒫) = (γt)² [∑_{i=1}^l (x_i − x_{N−i+1})]², with l = min{k, N−k}. Comparing Eq. (<ref>) and Eq. (<ref>) we see that parity measurements in the x basis saturate the quantum Cramér-Rao bound for the estimation of G with the optimal decoherence-free states |ODF_k⟩. In particular, the quantum Cramér-Rao bound is saturated for the optimal state |ODF_{N/2}⟩, and therefore also the bound in Eq. (<ref>) is saturated.

Error propagation formula: Just like in the noiseless case (discussed in Sec. <ref>), one can try to estimate the gradient G from measurements of the expectation value of M̂ = σ_x^{⊗N} given in Eq. (<ref>). Using the error propagation formula in Eq. (<ref>) and the formula for the expectation value in Eq. (<ref>), with ⟨M̂²⟩ = 1, we obtain that this measurement strategy again leads (for small fluctuations of the gradient G) to the maximal achievable precision, provided by the optimal state |ODF_{N/2}⟩: Δ²G̃_M̂ = 1/{(γt)² [∑_{i=1}^{N/2} (x_i − x_{N−i+1})]²}. From the above discussion we see that parity measurements in the x basis are optimal in both extremal scenarios considered in this paper, i.e., under the conditions of full and of no a priori knowledge about the offset field B_0. As discussed in Sec. <ref>, these measurements can be routinely realized in experiments with trapped ions and neutral atoms in optical lattices. It is also important that the ultimate accuracy given in Eq. (<ref>) is saturated independently of the value of the gradient G and does not deteriorate with time due to collective phase noise.

§.§ Comparison of the performance of decoherence-free states with other strategies
In this part we compare the performance of ODF states (i) with GHZ states in the case of full a priori knowledge of B_0 but in the presence of collective phase noise, and (ii) with Dicke states, for the estimation of gradients.

Comparison with GHZ states: We compare the performance of GHZ states with that of optimal decoherence-free states when we have complete information about B_0 and collective phase noise is present. As mentioned before, collective phase noise can be interpreted as an erasure of information about B_0. Therefore, the QFI for GHZ states vanishes for long measurement times t, and the ODF states |ODF_{N/2}⟩ with k = N/2 excitations perform better. In contrast, for short probing times t, GHZ states perform better. We can calculate the critical time t_crit for which the QFI for GHZ states under noise is equal to the QFI for |ODF_{N/2}⟩. In the experiments for which this paper is relevant, typically t ≈ μs-ms and the correlation time of the field fluctuations ΔE(t) is τ_c ≈ s.
Therefore, we can assume τ_c ≫ t, from which we get (see Appendix <ref>) t_crit = {log[(∑_{i=1}^N (x_i − x_0))² / (∑_{i=1}^{N/2} (x_i − x_{N−i+1}))²]}^{1/2} / (Nγ'ΔE). In the case of complete a priori knowledge about B_0 at the beginning, and in the presence of collective phase noise, GHZ states perform well for t < t_crit. For t > t_crit, the ODF states |ODF_{N/2}⟩ outperform GHZ states. Neutral atoms in an optical microtrap are arranged equidistantly, x_i − x_0 = (i−1)L/(N−1). For this positioning we find the critical time t_crit = √(2 log[2(N−1)/N]) / (Nγ'ΔE). This is independent of the total length L of the string. For N=50 qubits and γ'ΔE = 2π·50 Hz (as used for Fig. <ref>) we find t_crit ≈ 74 μs, and for N=8 we find t_crit ≈ 421 μs. Both are within typical coherence times of such experiments.

In Sec. <ref> we discussed the case of full a priori knowledge about B_0. There we found that in the absence of noise GHZ states are optimal. However, this holds only under the assumption that x_i ≥ x_0, which means that the whole string of qubits is on the right-hand side of the position x_0 where B_0 is known. If x_0 is defined to be located within the range of the qubit string, GHZ states are not optimal anymore (as shown in Appendix <ref>). The experimentally relevant case is the one where an experimenter has full a priori knowledge about the offset field B_0 right in the middle of the qubit string, x_{N/2} ≤ x_0 < x_{N/2+1}, e.g., by estimating the average field. Interestingly, in this case the optimal states are ODF states (as shown in Appendix <ref>), which are decoherence free. Therefore, when x_0 is defined to be in the middle of the string, it does not matter whether an experimenter has knowledge about the offset field. This fact implies that only if an experimenter is able to measure the offset field at a position that is not right in the middle of the string could she gain from having information about the offset field. In principle, a priori knowledge about the offset field could enhance the precision for gradient estimation, since the maximal QFI in the case of full a priori knowledge about the offset field, Eq. (<ref>), is a constant factor of 4 greater than the one in the case of no a priori knowledge, Eq. (<ref>). However, this comparison is unfair, because this enhanced precision is gained by an unknown amount of resources that was previously used to determine the offset field B_0. Furthermore, as discussed before, a gradient measurement does not always gain from a priori knowledge about the offset field. In fact, only for t < t_crit does knowledge about the offset field enhance the precision for gradient estimation, since collective phase noise immediately erases the information about the offset field. However, even for t < t_crit the gain from measuring the offset field is only a constant factor (up to 4). This factor can also be recovered by using longer measurement times, since F_Q ∝ t² for ODF states (which are insensitive to the offset field). In the experiments considered here t_crit ∝ μs, whereas typical measurement times are t ∝ ms, such that t > t_crit, as discussed above. Then ODF states perform better than GHZ states. Therefore, in the experiments considered here it is not worth spending any resources on a measurement of the offset field for the estimation of gradients. One possible objection to the above reasoning is that for any specified probing time t there exist in principle optimal states and measurements that would give a precision for gradient estimation higher than the one for optimal DFS states [given in Eq. (<ref>)].
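The time dependence underlying this comparison can be reproduced with a short numerical sketch. The magnitudes below follow the examples above (N = 8, γ'ΔE = 2π·50 Hz); the correlation time τ_c = 1 s is an assumption consistent with τ_c ≈ s.

```python
import numpy as np

N, L, gamma = 8, 1.0, 1.0
gdE = 2 * np.pi * 50.0      # assumed gamma' * DeltaE  [rad/s]
tau_c = 1.0                 # assumed correlation time [s]

t = np.linspace(1e-6, 2e-3, 20000)
d = np.exp(-(N * gdE * tau_c) ** 2 * (np.exp(-t / tau_c) + t / tau_c - 1))
S_ghz = N * L / 2                    # sum_i (x_i - x0), equidistant spacing
S_odf = N ** 2 * L / (4 * (N - 1))   # |sum_{i<=N/2} (x_i - x_{N-i+1})|
F_ghz = d ** 2 * (gamma * t) ** 2 * S_ghz ** 2   # noisy GHZ QFI
F_odf = (gamma * t) ** 2 * S_odf ** 2            # noise-free ODF QFI

t_opt = t[np.argmax(F_ghz)]
t_crit = t[np.argmin(np.abs(d ** 2 * S_ghz ** 2 - S_odf ** 2))]
print(t_opt, 1 / (N * gdE))                                      # ~4.0e-4 s
print(t_crit, np.sqrt(2 * np.log(2 * (N - 1) / N)) / (N * gdE))  # ~4.2e-4 s
```

Both the numerically found optimal probing time of the noisy GHZ state and the crossing point with the ODF curve agree with the closed-form expressions for t_opt and t_crit quoted above.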
Such time-adapted schemes are, however, technically limited: the optimal states and measurements depend on the probing time, which results in experimental difficulties. On the other hand, the optimal DFS states and the corresponding measurements have already been implemented in experiments <cit.>.

Gradient estimation with Dicke states: In Ref. <cit.> it was claimed that for gradient estimation with a W state a good scaling of the QFI in N is possible. Recall that W states are decoherence-free states and belong to the set of symmetric Dicke states <cit.>, |D_N^k⟩ = (1/√𝒩) ∑_j 𝒫_j{|0⟩^{⊗N−k} ⊗ |1⟩^{⊗k}}, where 𝒩 is a normalization constant and ∑_j 𝒫_j{·} denotes the sum over all possible permutations. Symmetric Dicke states with k=1 excitations are exactly W states. We use Eq. (<ref>) to compute the QFI for Dicke states ϱ_N^k := |D_N^k⟩⟨D_N^k| [see Appendix <ref>, Eq. (<ref>), for details], and the final result is F_Q(ϱ_N^k)/(γt)² = ∑_{i=1}^N x_i² − (∑_{i=1}^N x_i)² (2k−N)²/N² + ∑_{i≠j=1}^N x_i x_j [(2k−N)² − N] / [N(N−1)]. For equidistant positioning, x_i = (i−1)L/(N−1), we have F_Q(ϱ_N^k) = (γt)² [L/(N−1)]² (N+1)k(N−k)/3, which is maximal for k = N/2: F_Q(ϱ_N^{N/2}) = (γt)² [L/(N−1)]² N²(N+1)/12. For W states (k=1) we have F_Q(ϱ_N^1) = (γt)² [L/(N−1)]² (N²−1)/3. This is exactly the result from Ref. <cit.> with a = L/(N−1). In Ref. <cit.>, a is defined as a fixed distance between the qubits, with x_i = (i−1)a, such that adding a qubit leads to an extension of the total length L of the string. Using this convention, the QFI for W states in Eq. (<ref>) scales as ∝ a²N² for large N. At first sight this seems to be a good scaling, since it is quadratic in N. However, when fixing the distance a between the qubits, the HL from Eq. (<ref>) for gradient estimation is Δ²G̃ ∝ 1/N⁴ and the SQL from Eq. (<ref>) is Δ²G̃ ∝ 1/N³ for large N and with L = (N−1)a. Therefore, a quadratic scaling in N is not a good scaling for a fixed distance between the qubits. Furthermore, when fixing the total length L, the QFI for W states decreases with N to a constant, F_Q(ϱ_N^1) → (γt)² L²/3. The product state |P⟩ in the steady-state regime, Eq. (<ref>), performs better than a W state, Eq. (<ref>), and for large N equals the maximal attainable QFI with symmetric Dicke states (k = N/2). We conclude the section with a graphical comparison of the performance of different families of states for gradient estimation in Fig. <ref>, under the assumption of (a) full or (b) no a priori knowledge about the offset field B_0.

§ GENERALIZATION
The model described in Section <ref> can be generalized to an arbitrary known spatial distribution f(x) of the z component of the magnetic field. We can consider an experiment for the estimation of the strength G of a spatial magnetic field distribution given by B(x) = B_0 + G f(x − x_0), where the function f is known and f(0) = 0 holds, so that B(x_0) = B_0. For example, due to the quadratic Zeeman effect it may be known that the field has to be quadratic in x, such that f(x − x_0) = (x − x_0)² − a(x − x_0), as depicted in Fig. <ref>. The Hamiltonian in Eq. (<ref>) is then generalized by replacing x_i − x_0 by f(x_i − x_0), for i = 1,…,N. The labeling of the particles is then imposed by the ordering of the values of the magnetic field, i.e., B(x_i) ≤ B(x_{i+1}). Under these slight modifications essentially all the results presented in this paper carry over. In particular, the bounds on the precision given by the QFI in Eq. (<ref>) and Eq. (<ref>) for the case of full a priori knowledge of B_0, and in Eq. (<ref>) for the case of no a priori knowledge, are valid in this generalized model. Also the optimal states and the optimal measurements attaining these bounds do not change.
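As an illustration of the generalized scheme, the sketch below evaluates the bounds for the quadratic profile mentioned above, with x_i − x_0 replaced by f(x_i − x_0); the coefficient a = 0.4 and all other magnitudes are assumptions, and here f attains its largest absolute value at the maximum of f, so placing all qubits there is optimal in the full-knowledge case.

```python
import numpy as np

# Assumed quadratic profile f(u) = u^2 - a*u, with f(0) = 0.
gamma, t, N, a = 1.0, 1.0, 6, 0.4
u = np.linspace(0.0, 1.0, 201)        # admissible offsets u = x - x0
f = u ** 2 - a * u

u_max = u[np.argmax(f)]               # put all qubits here if B0 is known
F_ent = (gamma * t) ** 2 * (N * f.max()) ** 2

# No knowledge of B0: half of the qubits at argmin f, half at argmax f,
# following the decoherence-free bound with f_i replacing x_i - x0.
F_dfs = (gamma * t) ** 2 * ((N / 2) * (f.max() - f.min())) ** 2
print(u_max, F_ent, F_dfs)
```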
Furthermore, the optimal positioning for the case of full a priori knowledge about B_0 is to put all qubits at the position x_max that maximizes f(x − x_0). For the case of no a priori knowledge about B_0 it is optimal to put half of the qubits at the position x_min that minimizes f(x − x_0) and the other half at the position x_max that maximizes f(x − x_0). Note that one has to keep in mind that the above analysis is valid under the assumption f(x − x_0) ≥ 0 (which corresponds to the condition x_i ≥ x_0 from the note given at the end of Sec. <ref>). As depicted in Fig. <ref>, the assumption f(x − x_0) ≥ 0 does not always hold. If this assumption is dropped, all the results for the case of no a priori knowledge about B_0 (decoherence-free states) still carry over. For the case of full a priori knowledge about B_0, however, the precise form of the optimal states and the formulas for the maximal QFI change. One can still recover the results by substituting x_i − x_0 by f(x_i − x_0) in the appropriate formulas given in Appendix <ref>.

§ CONCLUSIONS
We presented a systematic analysis of the ultimate limits in precision for the estimation of a gradient of a spatially varying magnetic field in systems of cold atoms and trapped ions. The position degrees of freedom were treated classically and taken as fixed. We used the framework of quantum metrology to study two extreme scenarios: (i) the case where the magnetic offset field is known and (ii) the case where the magnetic offset field is not a priori known. For the first case (i) we have introduced bounds in precision for gradient estimation analogous to the standard quantum limit (the maximal possible accuracy with separable states) and the Heisenberg limit (the maximal possible accuracy with entangled states) known from the usual phase estimation scenario. Moreover, we have identified the optimal probe state, which is a GHZ state [see Eq. (<ref>)]. It is then optimal to put all qubits as far away as possible from the point x_0 where the magnetic offset field is known. This leads to a magnetic field measurement, similar to a magnetic offset field measurement, but at a different place. For the second case (ii) we found that GHZ states are completely useless (F_Q = 0) for the estimation of a magnetic field gradient. In the absence of knowledge about B_0, effective superselection rules restrict the class of allowed states to decoherence-free states. We proved that the decoherence-free state given in Eq. (<ref>) with k = N/2 excitations is optimal and does not depend on the positions of the qubits. Here the optimal positioning is to put half of the qubits at one place and the other half as far away as possible. We also showed that the performance of optimal decoherence-free states is generically comparable to that of optimal GHZ states in the case of complete knowledge about B_0 - both scale with N². Both optimal states can be prepared with high fidelity in experiments with trapped ions for up to N=14 qubits and with cold atoms for up to N=2 qubits. For both scenarios we identified the parity measurement in the x basis as the optimal measurement saturating the quantum Cramér-Rao bounds for gradient estimation. This measurement is feasible in the experiments considered in this work, as the positions of the particles can be considered fixed and local measurements of σ_x can easily be performed. Finally, we investigated the effect of collective phase noise.
Collective phase noise can be interpreted as an erasure of knowledge about the magnetic offset field, and it continuously interpolates between scenarios (i) and (ii) for strong noise or long probing times. We found a critical time t_crit at which the GHZ state performs as well as the ODF state with k = N/2 excitations. For t < t_crit GHZ states perform better than ODF states, and for t > t_crit ODF states outperform GHZ states. These results are summarized in the decision diagram in Fig. <ref>. Values of the QFI for different positionings and different states in the two cases (i) and (ii) are summarized in Table <ref>. We derived Cramér-Rao bounds for gradient estimation from the QFI and discussed their saturation with the FI. Such a saturation presupposes an unlimited amount of statistics and therefore many repetitions of a measurement. However, realistic experiments are limited in measurement time and therefore limited in the number of possible repetitions. In such a scenario, a proper analysis of bounds in precision can be performed in a Bayesian estimation approach. For the standard scheme [as depicted in Fig. <ref>(a)] it was shown in Ref. <cit.> that only the bound Δ²φ ≥ π²/N² can be saturated with limited statistics, in contrast to the bound Δ²φ ≥ 1/N² from the QFI. An investigation of bounds from a Bayesian approach for the estimation of gradients would be interesting for further work. Moreover, it would be interesting to take the uncertainty in the positioning of the qubits into account. In fact, independently from our work, an article on gradient estimation with systems of atoms with probability distributions in position has appeared <cit.>. For such a system also weak-value measurements could offer an enhancement in precision for the estimation of gradients <cit.>. Furthermore, in certain setups there is a coupling between the internal and external degrees of freedom, i.e., the spin and the position <cit.>. This requires an adaptation of our ideas. Also, the investigation of precision limits and optimal strategies for the simultaneous estimation of several parameters describing a field (i.e., the offset field B_0, the gradient G, and higher derivatives) could be interesting, especially in the presence of collective dephasing noise <cit.>. A first step in this direction, however without considering the effect of noise, has been taken in Ref. <cit.>. Finally, another interesting topic for further studies is the performance of random multiparticle states <cit.> for gradient estimation.

§ ACKNOWLEDGEMENTS
We are grateful to Antonio Acín, Iagoba Apellaniz, Michael Johanning, Janek Kołodyński, Morgan Mitchell, Christina Ritz, and Gael Sentís for interesting and fruitful discussions. This work has been supported by the European Research Council (Consolidator Grants TempoQ/683107 and QITBOX), Spanish MINECO (QIBEQI FIS2016-80773-P and Severo Ochoa Grant No. SEV-2015-0522), Fundació Privada Cellex, Generalitat de Catalunya (Grants No. SGR 874 and 875, and CERCA Programme), the Friedrich-Ebert-Stiftung, the FQXi Fund (Silicon Valley Community Foundation), and the DFG. M.O. acknowledges the support of the Homing programme of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund.

§ MAXIMAL QFI AND OPTIMAL STATES FOR ARBITRARY POSITION X_0
In this part we derive the maximal QFI for gradient estimation for an arbitrary position x_0 at which the offset field is assumed to be perfectly known.
Let us first introduce the auxiliary notation f_i := x_i − x_0. Moreover, just like in the main text, let us label the qubits in such a way that f_{i+1} ≥ f_i. By virtue of Eq. (<ref>) we can focus on maximizing F_Q[ϱ, H_G], where H_G = ∑_{i=1}^N f_i σ_z^{(i)}/2. Let us note that this Hamiltonian can be diagonalized in the computational basis |I⟩ = |i_1⟩|i_2⟩…|i_N⟩, with i_k ∈ {0,1}. The symbol I labels the set of positions of the particles which are in the state |0⟩ and is formally defined by I = {l | i_l = 0}. Note that I can be arbitrary (in particular, it can also be the empty set). The eigenvalue corresponding to the eigenvector |I⟩ is given by λ_I = (1/2)∑_{i∈I} f_i − (1/2)∑_{i∈Ī} f_i, where Ī denotes the complement of the set I in the set {1,2,…,N}. The maximal eigenvalue λ_max is given by λ_max = (1/2)∑_{i=1}^N |f_i|, with the corresponding eigenvector |I_max⟩, where I_max = {l | f_l ≥ 0}. Let m be the largest number with f_m ≤ 0, that is, the number of particles on the left side of x_0, so that x_m ≤ x_0 < x_{m+1}. Now, using the ordering f_{i+1} ≥ f_i, we find |I_max⟩ = |1⟩^{⊗m}|0⟩^{⊗N−m}. Because of λ_Ī = −λ_I we get λ_min = −λ_max and |I_min⟩ = |0⟩^{⊗m}|1⟩^{⊗N−m}. Finally, by virtue of Eq. (<ref>), Eq. (<ref>), and Eq. (<ref>), we obtain max_{ϱ∈S(ℋ_N)} F_Q(ϱ) = (γt)² [∑_{i=1}^N |f_i|]², with the optimal state given by |Ψ_m⟩ = (1/√2)(|1⟩^{⊗m}|0⟩^{⊗N−m} + |0⟩^{⊗m}|1⟩^{⊗N−m}). Note that for m = N/2 this state happens to be the optimal decoherence-free state |ODF_{N/2}⟩.

§ PARITY MEASUREMENTS
In this part of the Appendix we compute expectation values of the parity operator M̂ = σ_x^{⊗N} for the families of quantum states investigated in this paper. These computations are relevant for the calculations involving the classical Fisher information and the error propagation formula given in the main text.

§.§ Parity expectation value for GHZ states
Recall that |GHZ⟩ = (1/√2)(|0⟩^{⊗N} + |1⟩^{⊗N}). Using the identities U_G |0⟩^{⊗N} = exp[−(i/2)(NγB_0 t + γGt ∑_{i=1}^N (x_i − x_0))] |0⟩^{⊗N} and U_G |1⟩^{⊗N} = exp[(i/2)(NγB_0 t + γGt ∑_{i=1}^N (x_i − x_0))] |1⟩^{⊗N}, together with the property M̂|0⟩^{⊗N} = |1⟩^{⊗N}, we obtain Tr(U_G ψ_GHZ U_G^† M̂) = cos[NγB_0 t + γtG ∑_{i=1}^N (x_i − x_0)].

§.§ Parity expectation value for noisy GHZ states
Let ψ_GHZ,θ := |GHZ_θ⟩⟨GHZ_θ|, where |GHZ_θ⟩ = (1/√2)(|0⟩^{⊗N} + exp(iθ)|1⟩^{⊗N}). In this part we compute the expectation value of the parity σ_x^{⊗N} on the noisy state ρ := U_G ψ̄_GHZ,θ(t) U_G^†. Using Eq. (<ref>) we find ρ = d(t) U_G ψ_GHZ,θ U_G^† + [1 − d(t)] (1/2)[(|0⟩⟨0|)^{⊗N} + (|1⟩⟨1|)^{⊗N}], where d(t) is given below Eq. (<ref>) in Appendix <ref>. From this expression, using M̂|0⟩^{⊗N} = |1⟩^{⊗N}, we get Tr(ρM̂) = d(t) Tr(U_G ψ_GHZ,θ U_G^† M̂). Finally, repeating essentially the same computations as in Section <ref>, we obtain Tr(ρM̂) = d(t) cos[NγB_0 t + γtG ∑_{i=1}^N (x_i − x_0) + θ].

§.§ Parity expectation value for optimal decoherence-free states
Repeating computations analogous to those given in Appendix <ref> for |ODF_{N/2}⟩ = (1/√2)(|1⟩^{⊗N/2} ⊗ |0⟩^{⊗N/2} + |0⟩^{⊗N/2} ⊗ |1⟩^{⊗N/2}), we obtain U_G |a⟩ = exp[−(i/2) γtG ∑_{i=1}^{N/2} (x_i − x_{N−i+1})] |a⟩ and U_G |b⟩ = exp[(i/2) γtG ∑_{i=1}^{N/2} (x_i − x_{N−i+1})] |b⟩, where we denoted |a⟩ := |1⟩^{⊗N/2} ⊗ |0⟩^{⊗N/2} and |b⟩ := |0⟩^{⊗N/2} ⊗ |1⟩^{⊗N/2}. Using these expressions together with the identity M̂|b⟩ = |a⟩, we obtain Tr(U_G ψ_ODF^{N/2} U_G^† M̂) = cos[γtG ∑_{i=1}^{N/2} (x_i − x_{N−i+1})], where ψ_ODF^{N/2} = |ODF_{N/2}⟩⟨ODF_{N/2}|.

§ CLASSICAL FISHER INFORMATION FOR J_X MEASUREMENT
Computations analogous to the ones performed in Section <ref> show that the classical Fisher information for the projective POVM 𝒫_{J_x}, associated with the eigenspaces of J_x, also attains the QFI for the optimal states. More precisely, F(ϱ_G, 𝒫_{J_x}) = F(ϱ_G, 𝒫), for ϱ = ψ_GHZ and ϱ = ψ_ODF^{N/2}.
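A direct numerical check of this equality is possible for small systems. In the sketch below (N = 3; the positions and all parameter values are assumptions), the FI of the 𝒫_{J_x} statistics, the FI of the parity statistics, and the QFI should all coincide.

```python
import numpy as np

gamma, t, B0, G, N = 1.0, 1.0, 0.3, 0.2, 3
x = np.array([0.1, 0.5, 0.9])                 # assumed positions, x0 = 0

sx, I2 = np.array([[0, 1], [1, 0]], complex), np.eye(2, dtype=complex)
def embed(op, i):
    out = np.array([[1.0 + 0j]])
    for j in range(N):
        out = np.kron(out, op if j == i else I2)
    return out

Jx = sum(embed(sx, i) for i in range(N)) / 2
X = embed(sx, 0) @ embed(sx, 1) @ embed(sx, 2)        # sigma_x^{⊗N}

bits = np.array([[(i >> k) & 1 for k in range(N - 1, -1, -1)]
                 for i in range(2 ** N)])
def rho_G(G):
    # U_G is diagonal: phase -(1/2) sum_i (B0 + G x_i) s_i, s_i = +1 for |0>
    phase = np.exp(-0.5j * gamma * t * (B0 + G * x) @ (1 - 2 * bits).T)
    psi = np.zeros(2 ** N, complex)
    psi[0], psi[-1] = 1 / np.sqrt(2), 1 / np.sqrt(2)  # GHZ probe
    psi = phase * psi
    return np.outer(psi, psi.conj())

def fisher(povm, G, eps=1e-6):
    p = lambda g: np.array([np.trace(P @ rho_G(g)).real for P in povm])
    dp = (p(G + eps) - p(G - eps)) / (2 * eps)
    pj = p(G)
    return float(np.sum(dp[pj > 1e-12] ** 2 / pj[pj > 1e-12]))

lam, V = np.linalg.eigh(Jx)
povm_Jx = [V[:, np.isclose(lam, v)] @ V[:, np.isclose(lam, v)].conj().T
           for v in np.unique(np.round(lam, 8))]
povm_par = [(np.eye(2 ** N) + X) / 2, (np.eye(2 ** N) - X) / 2]

print(fisher(povm_Jx, G), fisher(povm_par, G),
      (gamma * t) ** 2 * np.sum(x) ** 2)
```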
The equality in Eq. (<ref>) can also be derived from the monotonicity of the QFI under coarse-graining, i.e., F(ϱ_G, {M_i}) ≥ F(ϱ_G, {N_i}), where {N_i} is a POVM obtained by coarse-graining of a POVM {M_i}, i.e., for every outcome i we have N_i = ∑_j q(i|j) M_j for some stochastic matrix q(i|j) (we call a matrix q(i|j) stochastic if and only if q(i|j) ≥ 0 and for every j we have ∑_i q(i|j) = 1). The POVM 𝒫, describing the measurement of the parity in the x basis, σ_x^{⊗N}, can be obtained as follows: first, measure the projective measurement 𝒫_{J_x}; second, output "+1" or "−1" depending on the number of excitations contributing to the observed eigenvalue of J_x. Therefore 𝒫 is a coarse-graining of 𝒫_{J_x}, and thus, by virtue of Eq. (<ref>), we obtain F(ϱ_G, 𝒫_{J_x}) ≥ F(ϱ_G, 𝒫) for arbitrary states ϱ.

§ COMPUTATIONS OF ERROR-PROPAGATION FORMULA FOR GHZ STATES IN PRESENCE OF NOISE
The aim of this section is to show that, for a suitably chosen value of the initial relative phase in the state |GHZ_θ⟩, it is possible to saturate the quantum Cramér-Rao bound with the measurement of M̂ = σ_x^{⊗N}, even in the presence of collective phase noise. Our reasoning essentially mimics the one given in the previous section. Setting ψ_G,noise := U_G ψ̄_GHZ,θ U_G^† and using Eq. (<ref>), we obtain Δ²_{ψ_G,noise} M̂ = 1 − d(t)² cos²[α(t)], with α(t) := NγB_0 t + γGt ∑_{i=1}^N (x_i − x_0) + θ. Using this formula in the error propagation formula given in Eq. (<ref>), we get Δ²G̃_M̂ = {1 − d(t)² cos²[α(t)]} / {[d(t) γt ∑_{i=1}^N (x_i − x_0)]² sin²[α(t)]} = {1 + [1 − d(t)²] cot²[α(t)]} / {[d(t) γt ∑_{i=1}^N (x_i − x_0)]²}. From this we see that the quantum Cramér-Rao bound [in this case given by the inverse of the QFI in Eq. (<ref>)] is saturated for cot[α(t)] = 0 (for a suitable choice of the initial phase θ).

§ OPTIMAL STATES IN PRESENCE OF COLLECTIVE PHASE NOISE
In this section we assume full a priori knowledge about B_0. We calculate the QFI first for GHZ states in the presence of noise and then for the optimal states for an arbitrary position x_0 in the presence of noise.

§.§ GHZ states in presence of collective phase noise
Due to the noise, the probe state ϱ evolves into a mixed state ϱ̄(t) := ⟨U_noise ϱ U_noise^†⟩, where U_noise = exp[−iγ' ∫_0^t ΔE(τ)dτ J_z] describes the noise acting on the system. The diagonal entries of the probe state do not change, ϱ̄(t)_{ii} = ϱ_{ii}. However, the off-diagonal ones do. The GHZ state has only two non-zero off-diagonal entries, ϱ_{0,q} = (ϱ_{q,0})^†, where q = 2^N − 1 for a state of dimension 2^N. For these entries we find <cit.> [U_noise ϱ U_noise^†]_{0,q} = exp[−iγ'N ∫_0^t ΔE(τ)dτ] ϱ_{0,q}. Now we use the fact that ⟨exp[±iδφ]⟩ = exp[−(1/2)Δ²δφ] for an unbiased Gaussian distribution of δφ, that is, one with ⟨δφ⟩ = 0. With the variance Δ²δφ = ⟨δφ²⟩ we can calculate ⟨exp[±iγ'N ∫_0^t ΔE(τ)dτ]⟩ = exp[−(1/2)(γ'N)² C(t)], with C(t) = ⟨∫_0^t dτ_1 ∫_0^t dτ_2 ΔE(τ_1) ΔE(τ_2)⟩. Substituting t_1 = τ_1 − τ_2 and t_2 = τ_1 + τ_2, and using ⟨ΔE(t+τ)ΔE(t)⟩ = ⟨ΔE(τ)ΔE(0)⟩ and ⟨ΔE(t)ΔE(0)⟩ = (ΔE)² exp[−t/τ_c], we find C(t) = 2(ΔE τ_c)² [exp(−t/τ_c) + t/τ_c − 1]. Together, we find that the N-particle GHZ state evolves in the presence of collective phase noise into the state ϱ̄(t) = (1/2)(|0⟩⟨0|)^{⊗N} + (1/2)(|1⟩⟨1|)^{⊗N} + [d(t)/2] |0⟩^{⊗N}⟨1|^{⊗N} + [d(t)/2] |1⟩^{⊗N}⟨0|^{⊗N}, with d(t) = exp[−(Nγ'ΔE τ_c)² (exp(−t/τ_c) + t/τ_c − 1)]. This state in its eigendecomposition is given by the non-zero eigenvalues λ_± = (1/2)[1 ± d(t)] and the corresponding eigenvectors |v_±⟩ = (1/√2)(|0⋯0⟩ ± |1⋯1⟩). In order to compute the QFI for a noisy GHZ state for estimating G we use Eq. (<ref>) and Eq.
(<ref>), with the final result

F_Q(ϱ̄(t)) = ((λ_+ - λ_-)^2/(λ_+ + λ_-)) (γ t)^2 |⟨v_+| ∑_i=1^N (x_i - x_0) σ_z^(i) |v_-⟩|^2 = d(t)^2 (γ t)^2 [∑_i (x_i - x_0)]^2 ,

where all other terms in Eq. (<ref>) vanish since

∑_i=1^N (x_i - x_0) σ_z^(i) |v_+⟩ = ∑_i=1^N (x_i - x_0) |v_-⟩ .

§.§ Optimal states for arbitrary position x_0 in presence of noise

Let m be the minimal number with f_m ≤ 0, that is, the number of particles on the left side of x_0. Then, in Appendix <ref> we showed that the optimal state is given by

|Ψ_m⟩ = (1/√2)(|1⟩^⊗m |0⟩^⊗(N-m) + |0⟩^⊗m |1⟩^⊗(N-m)) .

Following the calculations from the previous section, we find the averaged state for a given time t,

ϱ̄(t) = (1/2)|1^⊗m, 0^⊗(N-m)⟩⟨1^⊗m, 0^⊗(N-m)| + (1/2)|0^⊗m, 1^⊗(N-m)⟩⟨0^⊗m, 1^⊗(N-m)|
+ (d_m(t)/2)|1^⊗m, 0^⊗(N-m)⟩⟨0^⊗m, 1^⊗(N-m)| + (d_m(t)/2)|0^⊗m, 1^⊗(N-m)⟩⟨1^⊗m, 0^⊗(N-m)| ,

with

d_m(t) := exp[-(N-2m)^2 (γ' ΔE τ_c)^2 (e^-t/τ_c + t/τ_c - 1)] .

In the cases of m = 0 and m = N, the optimal state is a GHZ state, |Ψ_0⟩ = |Ψ_N⟩ = |GHZ⟩, and we find d_0(t) = d_N(t) = d(t). The non-zero eigenvalues of ϱ̄(t) are given by λ^m_±(t) = (1 ± d_m(t))/2 with the corresponding eigenvectors

|v_±^m⟩ = (1/√2)(|1⟩^⊗m |0⟩^⊗(N-m) ± |0⟩^⊗m |1⟩^⊗(N-m)) .

Now we can use the fact that

∑_i=1^N (x_i - x_0) σ_z^(i) |v^m_+⟩ = (∑_i=1^N |x_i - x_0|) |v^m_-⟩

to evaluate the QFI for estimating G, which is given by

F_Q = ((λ^m_+ - λ^m_-)^2/(λ^m_+ + λ^m_-)) (γ t)^2 |⟨v_+^m| ∑_i=1^N (x_i - x_0) σ_z^(i) |v_-^m⟩|^2 = (γ t)^2 d_m(t)^2 (∑_i=1^N |x_i - x_0|)^2 .

§ OPTIMAL PRODUCT STATE IN THE STEADY STATE REGIME

In the noiseless case, the product state |P⟩ is the best classical probe state. We now want to understand what the noise (losing information about B_0) does to the scaling for this state. The state ψ̄_P(t→∞) = ∑_k=0^N p_k |D_N^k⟩⟨D_N^k| is a mixture of symmetric Dicke states |D_N^k⟩ <cit.> with probabilities p_k = 2^-N \binom{N}{k}, where ∑_k=0^N p_k = 1. Recall first that

H_G = ∑_i=1^N f_i σ_z^(i)/2 ,

where we set for convenience f_i ≡ x_i - x_0. We perform calculations analogous to the ones given in Ref. <cit.>. Because H_G preserves the subspaces 𝒱_k and |D_N^k⟩ ∈ 𝒱_k, we have ⟨D_N^s| H_G |D_N^l⟩ ∝ δ_l,s, and therefore the QFI reduces to

F_Q = 4 (γ t)^2 ∑_k=0^N p_k Δ_k^2 H_G ,

with Δ_k^2 H_G being the variance of H_G in the state |D_N^k⟩. The second term of the variance is the squared expectation value ⟨H_G⟩, which is given by

⟨D_N^k| ∑_i=1^N f_i σ_z^(i)/2 |D_N^k⟩ = ∑_i=1^N f_i ⟨D_N^k| σ_z^(i)/2 |D_N^k⟩ = ∑_i=1^N f_i (1/N) ⟨D_N^k| J_z |D_N^k⟩ = ∑_i=1^N f_i (2k-N)/(2N) ,

using the symmetry of the state. The expectation value of the squared operator, ⟨H_G^2⟩, is

⟨D_N^k| [∑_i=1^N f_i σ_z^(i)/2]^2 |D_N^k⟩ = ∑_i=1^N f_i^2 ⟨D_N^k| (σ_z^(i)/2)^2 |D_N^k⟩ + ∑_i≠j f_i f_j ⟨D_N^k| (σ_z^(i)/2)(σ_z^(j)/2) |D_N^k⟩ ,

where each term ⟨D_N^k| (σ_z^(i)/2)^2 |D_N^k⟩ = 1/4. Using the symmetry of the state, ⟨D_N^k| σ_z^(i) σ_z^(j) |D_N^k⟩ = ⟨D_N^k| σ_z^(a) σ_z^(b) |D_N^k⟩ for arbitrary a and b, we can rewrite the second term as

⟨D_N^k| (σ_z^(i)/2)(σ_z^(j)/2) |D_N^k⟩ = (1/(N(N-1))) ∑_a≠b ⟨D_N^k| (σ_z^(a)/2)(σ_z^(b)/2) |D_N^k⟩ = (1/(N(N-1))) (⟨D_N^k| J_z^2 |D_N^k⟩ - N/4) ,

with ⟨D_N^k| J_z^2 |D_N^k⟩ = (2k-N)^2/4. Together, the variance is given by

4 Δ_k^2 H_G = ∑_i=1^N f_i^2 - (∑_i=1^N f_i)^2 (2k-N)^2/N^2 + ∑_i≠j f_i f_j [((2k-N)^2 - N)/(N(N-1))] .

Here, all terms with x_0 vanish. Therefore, 4 Δ_k^2 H_G is independent of x_0. Using ∑_k=0^N 2^-N \binom{N}{k} (2k-N)^2 = N, we can calculate the QFI

F_Q = 4 (γ t)^2 ∑_k=0^N p_k Δ_k^2 H_G = (γ t)^2 [∑_i x_i^2 - (1/N)(∑_i x_i)^2] .

For the maximization of Eq. (<ref>) over the positioning {x_i} we can state the following lemma. Let N be an even natural number and let

f(x_1, x_2, …, x_N) ≡ ∑_i=1^N x_i^2 - (1/N)(∑_i=1^N x_i)^2 .
Then, the restriction of f to the domain x_i ∈ [x_0, x_0 + L] attains its maximum value for x_i = x_0, for i = 1,…,N/2, and x_i = x_0 + L for i = N/2+1,…,N. A direct computation shows that for any δ ∈ ℝ we have

f(x_1+δ, x_2+δ, …, x_N+δ) = f(x_1, x_2, …, x_N) .

Therefore, the problem of maximizing f in the domain specified by the restrictions x_i ∈ [x_0, x_0 + L] can be reduced to the problem of maximizing this function for its arguments satisfying x_i ∈ [-L/2, L/2]. For such restrictions, setting half of the x_i equal to -L/2 and the other half equal to L/2 maximizes ∑_i=1^N x_i^2 while at the same time minimizing (∑_i=1^N x_i)^2. Thus, such a configuration maximizes f for x_i ∈ [-L/2, L/2]. Coming back to the original interval, we get the thesis of the lemma.

§ BOUNDS IN PRECISION FOR GRADIENT ESTIMATION WITH NO A PRIORI KNOWLEDGE ABOUT B_0

In this section we derive the bounds in precision for gradient estimation under the assumption of no a priori knowledge about B_0. In particular, we will prove Eq. (<ref>), Eq. (<ref>), Eq. (<ref>), and Eq. (<ref>) given in Section <ref>. Let us start with proving Eq. (<ref>), which reads

max_ϱ∈𝒟_N F_Q(ϱ) = (γ t)^2 max_k=0,…,N max_ϱ∈S(𝒱_k) F_Q[ϱ, H_G] ,

where 𝒟_N denotes the set of decoherence-free states. In order to prove this equation we first recall the connection F_Q(ϱ) = (γ t)^2 F_Q[ϱ, H_G]. Then, for the "Hamiltonian" quantum Fisher information we have

F_Q[∑_k=0^N p_k ϱ_k, H_G] = ∑_k=0^N p_k F_Q[ϱ_k, H_G] ,

where { p_k } is a probability distribution and the states ϱ_k are supported on 𝒱_k. Eq. (<ref>) follows from the fact that H_G preserves the decoherence-free subspaces 𝒱_k and from the "additivity" of the QFI under convex combinations of states supported on orthogonal subspaces <cit.>. The identity in Eq. (<ref>) now follows from the linearity of the right-hand side of Eq. (<ref>) in { p_k } and the fact that decoherence-free states are precisely of the form ∑_k=0^N p_k ϱ_k for ϱ_k supported on 𝒱_k. The optimal value of the QFI on S(𝒱_k) can be found by using Eq. (<ref>) and Eq. (<ref>). Let λ_max^(k) and λ_min^(k) denote the maximal and, respectively, minimal eigenvalues of H_G|_𝒱_k [the formula for H_G is given for example in Eq. (<ref>)]. Using the monotonicity of the coefficients, f_i+1 ≥ f_i with f_i = (x_i - x_0), we get

λ_max^(k) = ∑_i=k+1^N f_i - ∑_i=1^k f_i , λ_min^(k) = ∑_i=1^N-k f_i - ∑_i=N-k+1^N f_i .

The corresponding eigenvectors are given by

|I_max^(k)⟩ = (|1⟩^⊗k) ⊗ (|0⟩^⊗(N-k)) , |I_min^(k)⟩ = (|0⟩^⊗(N-k)) ⊗ (|1⟩^⊗k) .

Using Eq. (<ref>) and Eq. (<ref>) we get

λ_max^(k) - λ_min^(k) = ∑_i=1^l (f_i - f_N-i+1) , l = min{k, N-k} .

From Eq. (<ref>) and Eq. (<ref>) we obtain the explicit formula for the maximal QFI on 𝒱_k,

max_ϱ∈S(𝒱_k) F_Q(ϱ) = (γ t)^2 [∑_i=1^l (f_i - f_N-i+1)]^2 ,

where l = min{k, N-k}. From Eq. (<ref>) we find that the above value is attained for the state

|Ψ_k⟩ = (1/√2)(|I_max^(k)⟩ + |I_min^(k)⟩) .

We have therefore proved Eq. (<ref>) and Eq. (<ref>). We conclude by noting that the right-hand side of Eq. (<ref>) is a monotonic function of k for 2k ≤ N, and that max_ϱ∈S(𝒱_k) F_Q(ϱ) = max_ϱ∈S(𝒱_N-k) F_Q(ϱ) — see Fig. <ref> for a graphical explanation of this fact. Therefore, the QFI is maximal for k = ⌊N/2⌋, where ⌊n⌋ is the largest integer smaller than or equal to n, called the floor of n. Using Eq. (<ref>) we get

max_ϱ∈𝒟_N F_Q(ϱ) = (γ t)^2 [∑_i=1^⌊N/2⌋ (x_i - x_N-i+1)]^2 .

This maximum is obtained for the state

|Ψ_⌊N/2⌋⟩ = (1/√2)[|1⟩^⊗⌊N/2⌋ |0⟩^⊗⌈N/2⌉ + |0⟩^⊗⌈N/2⌉ |1⟩^⊗⌊N/2⌋] ,

where ⌈n⌉ is the smallest integer greater than or equal to n, called the ceiling of n. Let us note that, if N is not even, the state

|Ψ_⌈N/2⌉⟩ = (1/√2)(|I_max^⌈N/2⌉⟩ + |I_min^⌈N/2⌉⟩)

also attains the maximal QFI given in Eq.
(<ref>).

§.§ Optimal positioning of the qubits

Consider N particles that are set to be located in the interval x_i ∈ [x̃_0, L + x̃_0]. Here, x̃_0 is an arbitrary point. We are interested in how to optimally locate the qubits in order to get the best possible accuracy for the estimation of G (for fixed N and L). According to Eq. (<ref>), the maximal QFI attainable with a state from 𝒟_N is given by

F_Q(ϱ) = (γ t)^2 [∑_i=1^⌊N/2⌋ (x_i - x_N-i+1)]^2 .

The maximum of (<ref>) over the locations of all particles {x_i}_i=1^N is attained for x_i = x̃_0 for i ≤ ⌊N/2⌋ and x_i = L + x̃_0 for i > ⌊N/2⌋, or vice versa. Then, the maximal QFI is given by

F_Q = (γ t)^2 ⌊N/2⌋^2 L^2 .

Let us note that, in the case when N is odd, the position of the "middle" particle can be arbitrary. We want to emphasize that the scaling behavior (with respect to N and L) of the QFI, optimized over the choice of the x_i, is preserved if one picks the optimal state that is invariant under the considered noise model.

§.§ Crosspoint GHZ in presence of noise and optimal state from the DFS

In the noiseless case, the GHZ state is optimal for gradient estimation when B_0 is known. However, collective phase noise causes an erasure of knowledge about the offset field B_0. In the limit of no knowledge about B_0 we found an optimal state from 𝒟_N given in Eq. (<ref>) with k = ⌊N/2⌋. In total, the maximal attainable QFI for this state is smaller than for the GHZ state in the noiseless case. Therefore, in this section we calculate the measurement time t_crit at which both perform similarly. The QFI for GHZ states in the presence of collective phase noise is given by

F_Q = d(t)^2 (γ t)^2 [∑_i=1^N (x_i - x_0)]^2 ,

with d(t) = exp[-(N γ' ΔE τ_c)^2 (exp(-t/τ_c) + t/τ_c - 1)]. The QFI for the optimal state from 𝒟_N with k = ⌊N/2⌋ is given by

F_Q = (γ t)^2 [∑_i=1^⌊N/2⌋ (x_N-i+1 - x_i)]^2 .

Then, we can calculate the critical time t_crit by setting both equal and solving for t. In realistic experiments, the correlation time τ_c is on the order of seconds and the measurement time t on the order of milliseconds, so that we can assume t/τ_c ≪ 1, which leads to [exp(-t/τ_c) + t/τ_c - 1] ≈ (1/2)(t/τ_c)^2, and we find

t_crit = {2 log[(∑_i=1^N (x_i - x_0))^2 / (∑_i=1^N/2 (x_i - x_N-i+1))^2]}^1/2 / (N γ' ΔE) .

§ SPATIAL DISTRIBUTIONS USED IN FIG. <REF>

In Fig. <ref> we illustrated the QFI with a state from 𝒟_N with k excitations for different kinds of spatial distributions of the qubits. For these, we used the following functions. The optimal spatial distribution for the positioning of the qubits is marked in black in Fig. <ref> and is given by x_i = 0 for i ≤ ⌊N/2⌋ and x_i = L for i > ⌊N/2⌋. The spatial distribution marked in darker grey (purple) is given by

x_i = (L/2){1 + tanh[(2i/L - 1)π]} .

The equidistant spatial distribution marked in grey (red) is given by

x_i = (i-1) L/(N-1) ,

and the spatial distribution marked in lighter grey (yellow) is given by

x_i = (L/2){1 + tan[(2i/L - 1)π/4]} .

Giovannetti2006 V. Giovannetti, S. Lloyd, and L. Maccone, Phys. Rev. Lett. 96, 010401 (2006).
Toth2014 G. Tóth and I. Apellaniz, J. Phys. A: Math. Theor. 47, 424006 (2014).
schnabel R. Schnabel, N. Mavalvala, D. E. McClelland, and P. K. Lam, Nat. Comm. 1, 121 (2010).
biology M. A. Taylor and W. P. Bowen, arXiv:1409.0950.
jelezko A. Ermakova, G. Pramanik, J.-M. Cai, G. Algara-Siller, U. Kaiser, T. Weil, Y.-K. Tzeng, H. C. Chang, L. P. McGuinness, M. B. Plenio, B. Naydenov, and F. Jelezko, Nano Lett. 13, 3305 (2013).
Demkowicz-Dobrzanski2012 R. Demkowicz-Dobrzański, J. Kołodyński, and M. Guţă, Nat. Comm. 3, 1063 (2012).
Escher2011 B. M. Escher, R. L. de Matos Filho, and L. Davidovich, Nat. Phys. 7, 406 (2011).
Landini2014 M. Landini, M. Fattori, L. Pezzè, and A. Smerzi, New J. Phys. 16, 113074 (2014).
Altenburg2016 S. Altenburg, S. Wölk, G. Tóth, and O. Gühne, Phys. Rev. A 94, 052306 (2016).
othergradientpapers S. Wildermuth, S. Hofferberth, I. Lesanovsky, S. Groth, P. Krüger, and J. Schmiedmayer, Appl. Phys. Lett. 88, 264103 (2006); H. Cable and G. A. Durkin, Phys. Rev. Lett. 105, 013603 (2010); M.-K. Zhou, Z.-K. Hu, X.-C. Duan, B.-L. Sun, J.-B. Zhao, and J. Luo, Phys. Rev. A 82, 061602(R) (2010).
inigo I. Urizar-Lanz, P. Hyllus, I. L. Egusquiza, M. W. Mitchell, and G. Tóth, Phys. Rev. A 88, 013626 (2013).
Schmidt-Kaler2012a F. Schmidt-Kaler and R. Gerritsma, EPL 99, 53001 (2012).
Snadden1998 M. J. Snadden, J. M. McGuirk, P. Bouyer, K. G. Haritos, and M. A. Kasevich, Phys. Rev. Lett. 81, 971 (1998).
Walther2011 A. Walther, U. Poschinger, F. Ziesel, M. Hettrich, A. Wiens, J. Welzel, and F. Schmidt-Kaler, Phys. Rev. A 83, 062329 (2011).
Keenan2012 S. T. Keenan and E. J. Romans, NDT & E Int. 47, 1 (2012).
Wildermuth2005 S. Wildermuth, S. Hofferberth, I. Lesanovsky, E. Haller, L. M. Andersson, S. Groth, I. Bar-Joseph, P. Krüger, and J. Schmiedmayer, Nature 435, 440 (2005).
MRIpapers G. Balasubramanian, I. Y. Chan, R. Kolesov, M. Al-Hmoud, J. Tisler, C. Shin, C. Kim, A. Wojcik, P. R. Hemmer, A. Krueger, T. Hanke, A. Leitenstorfer, R. Bratschitsch, F. Jelezko, and J. Wrachtrup, Nature 455, 648 (2008); S. Y. Huang, A. Nummenmaa, T. Witzel, T. Duval, J. Cohen-Adad, L. L. Wald, and J. A. McNab, Neuroimage 106, 464 (2015); M. S. Grinolds, M. Warner, K. De Greve, Y. Dovzhenko, L. Thiel, R. L. Walsworth, S. Hong, P. Maletinsky, and A. Yacoby, Nat. Nanotechnol. 9, 279 (2014).
Woelk2016a S. Wölk and O. Gühne, New J. Phys. 18, 123024 (2016).
ClassStat E. L. Lehmann and G. Casella, Theory of Point Estimation (Springer, New York, 1998).
Caves1994 S. L. Braunstein and C. M. Caves, Phys. Rev. Lett. 72, 3439 (1994).
RDD2015 R. Demkowicz-Dobrzański, M. Jarzyna, and J. Kołodyński, Progress in Optics 60, 345 (2015).
Monz2011 T. Monz, P. Schindler, J. T. Barreiro, M. Chwalla, D. Nigg, W. A. Coish, M. Harlander, W. Hänsel, M. Hennrich, and R. Blatt, Phys. Rev. Lett. 106, 130506 (2011); see also T. Monz, Quantum information processing beyond ten ion-qubits (PhD thesis, University of Innsbruck), available at www.quantumoptics.at/images/publications/dissertation/monz_diss.pdf.
Leibfried2004 D. Leibfried, M. D. Barrett, T. Schätz, J. Britton, J. Chiaverini, W. M. Itano, J. D. Jost, C. Langer, and D. J. Wineland, Science 304, 1476 (2004).
Schlosser2011 M. Schlosser, S. Tichelmann, J. Kruse, and G. Birkl, Quantum Information Processing 10, 907 (2011).
Bloch2008 I. Bloch, Nature 453, 1016 (2008).
Mintert2001 F. Mintert and C. Wunderlich, Phys. Rev. Lett. 87, 257904 (2001).
Woelk2016 S. Wölk and Ch. Wunderlich, arXiv:1606.04821 (2016).
Ragy2016 S. Ragy, M. Jarzyna, and R. Demkowicz-Dobrzański, Phys. Rev. A 94, 052108 (2016).
Proctor2017 T. J. Proctor, P. A. Knott, and J. A. Dunningham, arXiv:1702.04271 (2017).
OpDeph2013 S. I. Knysh and G. A. Durkin, arXiv:1307.0470 (2013).
Greenberger1989 D. M. Greenberger, M. A. Horne, and A. Zeilinger, Going beyond Bell's theorem, in Bell's theorem, quantum theory and conceptions of the universe (Springer, 1989), p. 69.
Kielpinski2002 D. Kielpinski, C. Monroe, and D. J. Wineland, Nature 417, 709-711 (2002).
Lekitsch2017 B. Lekitsch, S. Weidt, A. G. Fowler, K. Mølmer, S. J. Devitt, C. Wunderlich, and W. K. Hensinger, Science Advances 3, e1601540 (2017).
Wilk2010 T. Wilk, A. Gaëtan, C. Evellin, J. Wolters, Y. Miroshnychenko, P. Grangier, and A. Browaeys, Phys. Rev.
Lett. 104, 010502 (2010).
Bollinger1996 J. J. Bollinger, W. M. Itano, and D. J. Wineland, Phys. Rev. A 54, R4649 (1996).
Isenhower2010 L. Isenhower, E. Urban, X. L. Zhang, A. T. Gill, T. Henage, T. A. Johnson, T. G. Walker, and M. Saffman, Phys. Rev. Lett. 104, 010503 (2010).
Bartlett2007 S. D. Bartlett, T. Rudolph, and R. W. Spekkens, Rev. Mod. Phys. 79, 555 (2007).
Baumgart2016 I. Baumgart, J.-M. Cai, A. Retzker, M. B. Plenio, and Ch. Wunderlich, Phys. Rev. Lett. 116, 240801 (2016).
Kuhr2005 S. Kuhr, W. Alt, D. Schrader, I. Dotsenko, Y. Miroshnychenko, A. Rauschenbeutel, and D. Meschede, Phys. Rev. A 72, 023406 (2005).
Lidar1998 D. A. Lidar, I. L. Chuang, and K. B. Whaley, Phys. Rev. Lett. 81, 2594 (1998).
Dorner2012 U. Dorner, New J. Phys. 14, 043011 (2012).
Dorner2013 U. Dorner, Phys. Rev. A 88, 062113 (2013).
Sorensen1999 A. Sørensen and K. Mølmer, Phys. Rev. Lett. 82, 1971 (1999).
Dicke1954 R. H. Dicke, Phys. Rev. 93, 99 (1954).
Zhang2014 Y.-L. Zhang, H. Wang, L. Jing, L.-Z. Mu, and H. Fan, Scientific Reports 4, 7390 (2014).
Berry2000 D. W. Berry and H. M. Wiseman, Phys. Rev. Lett. 85, 5098 (2000); M. Jarzyna and R. Demkowicz-Dobrzański, New J. Phys. 17, 013010 (2015).
Apellaniz2017 I. Apellaniz, I. Urizar-Lanz, Z. Zimboras, P. Hyllus, and G. Tóth, arXiv:1703.09056 (2017).
Aharonov1988 Y. Aharonov, D. Z. Albert, and L. Vaidman, Phys. Rev. Lett. 60, 1351 (1988).
RandSTAT M. Oszmaniec, R. Augusiak, C. Gogolin, J. Kołodyński, A. Acín, and M. Lewenstein, Phys. Rev. X 6, 041044 (2016).
http://arxiv.org/abs/1703.09123v2
{ "authors": [ "Sanah Altenburg", "Michał Oszmaniec", "Sabine Wölk", "Otfried Gühne" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170327144811", "title": "Estimation of gradients in quantum metrology" }
A mesoscopic, mixed particle- and field-based Brownian Dynamics methodology for the simulation of entangled polymer melts has been developed. Polymeric beads consist of several Kuhn segments, and their motion is dictated by the Helmholtz energy of the sample, which is a sum of the entropic elasticity of chain strands between beads; slip-springs; and non-bonded interactions. Following earlier works in the field (Phys. Rev. Lett. 2012, 109, 148302), the entanglement effect is introduced by the slip-springs, which are springs connecting either non-successive beads on the same chain or beads on different polymer chains. The terminal positions of slip-springs are altered during the simulation through a kinetic Monte Carlo hopping scheme, with rate-controlled creation/destruction processes for the slip-springs at chain ends. The rate constants are consistent with the free energy function employed and satisfy microscopic reversibility at equilibrium. The free energy of nonbonded interactions is derived from an appropriate equation of state and is computed as a functional of the local density by passing an orthogonal grid through the simulation box; accounting for it is necessary for reproducing the correct compressibility of the polymeric material. Parameters invoked by the mesoscopic model are derived from experimental volumetric and viscosity data or from atomistic Molecular Dynamics simulations, establishing a "bottom-up" predictive framework for conducting slip-spring simulations of polymeric systems of specific chemistry. Initial configurations for the mesoscopic simulations are obtained by further coarse-graining of well-equilibrated structures represented at a greater level of detail. The mesoscopic simulation methodology is implemented for the case of cis-1,4 polyisoprene, whose structure, dynamics, thermodynamics and linear rheology in the melt state are quantitatively predicted and validated without a posteriori fitting of the results to experimental measurements.

§ INTRODUCTION

One of the fundamental concepts in the molecular description of structure-property relations of polymer melts is chain entanglement. When macromolecules interpenetrate, the term entanglement is intended to describe the topological interactions resulting from the uncrossability of chains. The fact that two polymer chains cannot go through each other in the course of their motion changes their dynamical behavior dramatically, without altering their equilibrium properties. Anogiannakis et al.<cit.> have examined microscopically at what level topological constraints can be described as a collective entanglement effect, as in tube model theories, or as certain pairwise uncrossability interactions, as in slip-link models. They employed a novel methodology which analyzes entanglement constraints into a complete set of pairwise interactions (links), characterized by a spectrum of confinement strengths. As a measure of the entanglement strength, these authors used the fraction of time for which the links are active. The confinement was found to be mainly imposed by the strongest links. The weak, trapped, uncrossability interactions cannot contribute to the low-frequency modulus of an elastomer, or to the plateau modulus of a melt. In tube model theories,<cit.> it is postulated that the entanglements generate a confining mean-field potential, which restricts the lateral monomer motion to a tube-like region surrounding each chain.
In polymer melts the confinement is not permanent, but leads to a one-dimensional diffusion of the chain along its tube, called reptation.<cit.> An alternative, discrete, localized version of the tube constraint is utilized in models employing slip-links.<cit.> The tube is replaced by a set of slip-links along the chain, which restrict lateral motion but permit chain sliding through them. The real chain is represented by its primitive path, which is a series of strands of average molar mass M_e connecting the links. Hua and Schieber<cit.> considered that the molecular details on the monomer or Kuhn-length level are smeared out in the slip-link model, while the segmental network of generic polymers is directly modeled, by introducing links between chains which constrain the motion of segments of each chain into a tubular region. The motion of segments is updated stochastically, and the positions of slip-links can either be fixed in space or mobile. When either of the constrained segments slithers out of a slip-link constraint, the segments are considered to be disentangled, and the slip-link is destroyed. Conversely, the end of one segment can hop towards another segment and create a new entanglement or slip-link. The governing equations in the slip-link model can be split into two parts: the chain motion inside its tube is governed by Langevin equations, and the tube motion is governed by deterministic convection and stochastic constraint release processes. Based on the tube model,<cit.> it is assumed that the motion of the primitive path makes the primary contribution to the rheological properties of entangled polymer melts. Therefore, from the microscopic information given by the slip-link model, these authors could precisely access the longest polymer chain relaxation time. Moreover, by employing an elegant formulation, the macroscopic properties of polymer melts, i.e., stress and dielectric relaxation, can be extracted from the ordering, spatial location and aging of the entanglements or slip-links in the simulations. Later, Schieber and co-workers studied the fluctuation effect on the chain entanglement and viscosity using a mean-field model.<cit.> Shanbhag et al.<cit.> developed a dual slip-link model with chain-end fluctuations for entangled star polymers, which explained the observed deviations from the "dynamic dilution" equation in dielectric and stress relaxation data. Doi and Takimoto<cit.> adopted the dual slip-link model to study the nonlinear rheology of linear and star polymers with arbitrary molecular weight distribution. Likhtman<cit.> has shown that the standard tube model cannot be applied to neutron spin-echo measurements, because the statistics of a one-dimensional chain in a three-dimensional random-walk tube become wrong on the length scale of the tube diameter.
He then introduced a new single-chain dynamic slip-link model to describe the experimental results for neutron spin echo, linear rheology and diffusion of monodisperse polymer melts. All the parameters in this model were obtained from one type of experiment and were applied to predict other experimental results. The model was formulated in terms of stochastic differential equations, suitable for Brownian Dynamics (BD) simulations. The results were characterized by some systematic discrepancies, suggesting a possible inadequacy of the Gaussian chain model for some of the polymers considered, and a possible inadequacy of the time-temperature superposition (TTS) principle. Del Biondo et al.<cit.> extended the slip-link model of Likhtman<cit.> in order to study an inhomogeneous system. These authors studied the dependence of the relaxation modulus on the slip-link density and stiffness, and explored the nonlinear rheological properties of the model for a set of its parameters. The crucial part of their work was the introduction of excluded volume interactions in a mean-field manner in order to describe inhomogeneous systems, and the application of this description to a simple nanocomposite model. They concentrated on an idealized situation where the fillers were well dispersed, with a simple hard-core interaction between the fillers and the polymer matrix. However, addressing real situations, where the fillers are poorly dispersed and partially aggregated, was clearly possible within their framework. Masubuchi et al.<cit.> performed several multichain simulations for entangled polymer melts by utilizing slip-links to model the entanglements. These authors proposed a primitive chain network model, in which the polymer chain is coarse-grained into a set of segments (strands) going through entanglement points. Different segments are coupled together through the force balance at the entanglement node. The Langevin equation is applied to update the positions of these entanglement nodes, by incorporating the tension force from chain segments and an osmotic force caused by density fluctuations. The entanglement points, modeled as slip-links, can also fluctuate spatially (or three-dimensionally). The creation and annihilation of entanglements are controlled by the number of monomers at chain ends. The longest relaxation time and the self-diffusion coefficient scaling, as predicted from the model, were found to be in good agreement with experimental results. Later on, the primitive chain network model was extended to study the relationship between entanglement length and plateau modulus.<cit.> It was also extended to study star and branched polymers,<cit.> nonlinear rheology,<cit.> phase separation in polymer blends,<cit.> block copolymers,<cit.> and the dynamics of confined polymers.<cit.> Recently, Masubuchi has compiled an excellent review of simulations of entangled polymers.<cit.> Coarse-graining from an atomistic Molecular Dynamics (MD) simulation to a mesoscopic one results in soft repulsive potentials between the coarse-grained blobs, causing unphysical bond crossings influencing the dynamics of the system. The Twentanglement uncrossability scheme proposed by Padding and Briels<cit.> was introduced in order to prevent these bond crossings. Their method is based on considering the bonds as elastic bands between the bonded blobs. As soon as two of these elastic bands make contact, an "entanglement" is created which prevents the elastic bands from crossing.
In their method, entanglements were defined as the objects which prevent the crossing of chains. Only a few of them were expected to be entanglements in the usual sense of long-lasting topological constraints slowing down the chain movement. Simulations of a polyethylene melt<cit.> showed that the rheology can be described correctly, deducing the blob-blob interactions and the friction parameters from MD simulations. Chappa et al.<cit.> proposed a model in which the topological effect of noncrossability of long flexible macromolecules is effectively taken into account by slip-springs, which are local, pairwise, translationally and rotationally invariant interactions between polymer beads that do not affect the equilibrium properties of the melt. The slip-springs introduce an effective attractive potential between the beads; in order to eliminate this effect, the authors introduced an analytically derived repulsive compensating potential. The conformations of polymers and slip-springs are updated by a hybrid Monte Carlo (MC) scheme. At every step, either the positions of the beads are evolved via a short Dissipative Particle Dynamics (DPD) run, or the configuration of the slip-springs is modified by MC moves, involving discrete jumps of the slip-springs along the chain contour and creation or deletion at the chain ends. The number of slip-springs can vary during the simulation, obeying a prescribed chemical potential. That model can correctly describe many aspects of the dynamical and rheological behavior of entangled polymer liquids in a computationally efficient manner, since everything is cast into only pairwise-additive interactions between beads. The mean-square displacement of the beads evolving according to this model was found to be in favorable agreement with the tube model predictions. Moreover, the model exhibited realistic shear thinning, deformation of conformations, and a decrease of the number of entanglements at high shear rates. Müller and Daoulas were the first to introduce the hopping of slip-springs by means of discrete MC moves.<cit.> These authors thoroughly investigated the ability of MC algorithms to describe the single-chain dynamics in a dense homogeneous melt and in a lamellar phase of a symmetric diblock copolymer. Three different types of single-chain dynamics were considered: local, unconstrained dynamics; slithering-snake dynamics; and slip-link dynamics. Ramírez-Hernández et al.<cit.> replaced the discrete hopping of a slip-spring along the chain contour by a one-dimensional continuous Langevin equation. In their method, slip-springs consist of two rings that slip along different chain contours and are connected by a harmonic spring. The rings move in straight lines between beads belonging to different chains, scanning the whole contour of the chains in a continuous way. Comparison with experimental results was made possible by rescaling the frequency and the modulus, in order to match the intersection point of the storage and loss moduli curves of polystyrene melts. More recently, Ramírez-Hernández et al.<cit.> presented a more general formalism in order to qualitatively capture the linear rheology of pure homopolymers and their blends, as well as the nonlinear rheology of highly entangled polymers and the dynamics of diblock copolymers. The number of slip-springs in their approach remained constant throughout the simulation, albeit their connectivity changes.
These authors have later presented a theoretically informed entangled polymer simulation approach wherein the total number of slip-springs is not preserved but, instead, is controlled through a chemical potential that determines the average molecular weight between entanglements.<cit.> The idea of slip-springs was used in parallel by Uneyama and Masubuchi,<cit.> who proposed a multi-chain slip-spring model inspired by the single-chain slip-spring model of Likhtman.<cit.> Differently from the primitive chain network model of the same authors, they defined the total free energy of the new model, and employed a time evolution equation and stochastic processes for describing its dynamical evolution. All dynamic ingredients satisfy the detailed balance condition, and are thus capable of reproducing the thermal equilibrium which is characterized by the free energy. The number of slip-springs varies. Later, Langeloth et al.<cit.> presented a simplified version of the slip-spring model of Uneyama and Masubuchi,<cit.> where the number of slip-springs remains constant throughout the simulation. Working on a different problem than melt rheology, Terzis et al.<cit.> have invoked the microscopic description of entanglements and the associated processes envisioned in slip-link models, in order to generate entanglement network specimens of interfacial polymeric systems and study their deformation to fracture. The specimens were created by sampling the configurational distribution functions derived from a Self-Consistent Field (SCF) lattice model. The specimens generated were not in detailed mechanical equilibrium. To this end, these authors developed a method for relaxing the network with respect to its density distribution, thereby imposing the condition of mechanical equilibrium without changing the network topology.<cit.> The free energy function of the network was minimized with respect to the coordinates of all entanglement points and chain ends. Contributions to the free energy included (a) the elastic energy due to stretching of the chain strands and (b) the free energy due to the repulsive and attractive (cohesive) interactions between segments. The latter was calculated by superimposing a simple cubic grid on the network and taking into account contributions between cells and within each cell. A density functional was used for non-bonded interactions, depending on a segment density field derived from coarse-grained particle positions tracked in a simulation. Laradji et al.<cit.> originally introduced the concept of conducting particle-based simulations with interactions described via collective variables, and also two ways of calculating these collective variables: grid-based and continuum weighting-function-based. Along the same lines, Soga et al.<cit.> investigated the structure of an end-grafted polymer layer immersed in poor solvent through MC simulations based on Edwards' Hamiltonian, incorporating a third-order density functional. Pagonabarraga and Frenkel<cit.> carried out Dissipative Particle Dynamics (DPD) simulations of nonideal fluids. In their work, the conservative forces needed for the DPD scheme were derived from a free energy density obtained from an equation of state (EOS) (e.g.,
the van der Waals EOS). Later on, Daoulas and Müller<cit.> have also employed the density functional representation of nonbonded interactions in order to study the single-chain structure and intermolecular correlations in polymer melts, and fluctuation effects on the order-disorder transition of symmetric diblock copolymers. In the present work we introduce a grid-based, density field-based scheme for dealing with nonbonded interactions in coarse-grained simulations which takes into account a specific equation of state. The purpose of this work is twofold. First, our main goal is to develop a consistent and rigorous methodology capable of simulating polymeric melts over realistic (i.e., 10^-2 s) time scales. In our previous works,<cit.> we have developed a methodology in order to generate and equilibrate (nanocomposite) polymer melts at large length scales (on the order of 100 nm). This coarse-grained representation is based on the idea that the polymer chains can be described as random flights at length scales larger than that defined by the Kuhn length of the polymer. Now, we develop a methodology to track the dynamics of the system at a coarse-grained level, by invoking a free energy functional motivated by an EOS capable of describing real polymeric materials, where both conformational and nonbonded contributions are taken into account. The EOS-based nonbonded free energy allows us to simulate a compressible model which, in principle, can deal with equation-of-state effects, phase transitions (e.g., cavitation) and interfaces. It is computed as a functional of the local density by passing an orthogonal grid through the simulation box. The voxels of the grid employed are large enough that their local thermodynamic properties can be described well by the chosen macroscopic EOS. The validity of this assumption is tested. At our level of description, i.e., beads consisting of 50 chemical monomers, forces among beads are not pairwise-additive and many-body effects are likely to be important.<cit.> Pairwise-additive effective potentials between beads<cit.> may not work well under conditions of high stress, where phase separation phenomena such as cavitation take place, or near interfaces, where long-range attractions are important in shaping structure. Thus, resorting to drastically coarse-grained models based on collective variables is a promising route.<cit.> Introducing a nonbonded compensating potential, as Chappa et al.<cit.> did, eliminates the problem with chain conformations caused by the artificial attraction of the slip-springs. Here, we refrain from explicitly including a term of this kind, in order to check whether the use of our non-bonded energy functional can provide an alternative way of avoiding unphysical agglomerations. Once the free energy is known, BD simulations driven by the free energy functional can be used in order to obtain thermodynamic averages and correlation functions. When macromolecules interpenetrate, the term entanglement is intended to describe the interactions resulting from the uncrossability of chains. At this level of description, we introduce the entanglements as slip-springs connecting beads belonging to different chains.
In the course of a simulation, the topology of the entanglement network changes through the introduction of elementary kinetic Monte Carlo (kMC) events, governed by rate expressions which are based on the reptation picture of polymer dynamics and on the free energy defined. Our second objective concerns parameterizing this methodology in a bottom-up approach, avoiding the introduction of non-meaningful parameters and ad hoc fitting of the results. All ingredients of our model are based on either more detailed (atomistic) simulations or experimental findings. For every parameter we introduce, we thoroughly explain its physical meaning and provide rigorous guidelines for estimating its exact value or its range of variation. Our work differs from the extensive current literature on the subject in a number of ways. All observables stem from an explicit Helmholtz energy functional. The stress tensor of the model, a necessary ingredient for rheology calculations, is rigorously derived from the free energy functional, including all contributions (bonded and nonbonded). Its special characteristics are presented and discussed. In contrast to previous relevant studies, a rigorous kinetic MC scheme (with rate constants defined in terms of the free energy) is used for tracking slip-spring hopping dynamics in the system. A hybrid integration scheme allows for rate-controlled discrete slip-spring processes to be tracked in parallel with the integration of the equations of motion of the polymeric beads. A new density field-based scheme for dealing with nonbonded interactions in coarse-grained simulations, reproducing a specific equation of state, is presented and its numerical application is thoroughly reviewed. All necessary links to more detailed levels of simulation and to experimental observables are established, without introducing a posteriori tuning of the parameters. Finally, extensive quantitative validation against existing experimental findings is performed.

§ MODEL AND METHOD

§.§ Polymer Description

Our melt consists of linear chains, represented by specific points or beads (i.e., internal nodal points, end points, or permanent crosslink points) along their contour, connected by entropy springs. Each coarse-grained bead represents several Kuhn segments of the polymer under consideration. Our construction results in a set of nodes for each chain, where each node i has a specific contour position along the chain, a positional vector in three-dimensional space, 𝐫_i, and pairing with other nodes, as shown in Figure <ref>. The piece of chain between two nodes (blue spring in Figure <ref>) is referred to as a strand. This paper is concerned with melts of linear chains, so permanent crosslink points will not be considered further. We introduce the effect of entanglement by dispersing slip-springs,<cit.> which are designed to bring about reptational motion of chains along their contours, as envisioned in theories and simulations of dynamics in entangled polymer melts and as observed by topological analysis of molecular configurations evolving in the course of MD simulations.<cit.> In more detail, a slip-spring connects two internal nodal points or chain ends on two different polymer chains and is stochastically destroyed when one of the nodes it connects is a chain end. To compensate for slip-spring destruction, new slip-springs are created stochastically by free chain ends in the polymer network.
Along with the BD simulation of the bead motion in the periodic simulation box, a parallel simulation of the evolution of the slip-springs present is undertaken. The rates used for the kMC procedure are described in detail in the following paragraphs. The initial contour length between consecutive entanglement points (slip-spring anchoring beads) on the same chain is commensurate with the entanglement molecular weight, M_e, of the simulated polymer. With the above mesoscopic model, the relative importance of the reptation, constraint release (CR) and contour-length fluctuation (CLF) mechanisms depends on the specific melt of interest. As expected, in a monodisperse sample of long molecules, reptation plays a dominant role. In contrast, in a bidisperse sample composed of short and long macromolecules, CR and CLF may dominate the relaxation process.<cit.>

§.§ Model Free Energy

We postulate that the entangled melt specimen of given spatial extent, defined by the edge vectors of our periodic simulation box (L_x, L_y, L_z), under temperature T, is governed by a Helmholtz energy function, A, which has a direct dependence on the set of local densities {ρ(𝐫)}, the temperature, T, and the separation vectors between pairs of connected polymer beads, 𝐫_ij ≡ 𝐫_j - 𝐫_i:

A({𝐫_ij}, {ρ(𝐫)}, T) = A_b({𝐫_ij}, T) + A_nb({ρ(𝐫)}, T)

The first term on the right-hand side of eq <ref>, A_b, is the contribution of bonded interactions, whereas the second one, A_nb, is the contribution of the nonbonded interactions to the Helmholtz energy. It should be noted that we employ the full free energy (including the ideal gas contribution), and not the excess free energy, in our formulation. In the discussion of the nonbonded contribution to the Helmholtz energy, A_nb, we will elaborate further on our choice of employing the full Helmholtz energy, rather than the excess one (with reference to an ideal gas of beads). We start by considering the bonded contribution to the Helmholtz energy, which can be written as a sum over all bonded pairs i,j, where i is connected to j with either intramolecular springs or slip-springs:

A_b({𝐫_ij}, T) = A_b({r_ij}, T) = ∑_i,j A_pair(r_ij, T)

The sum runs over all pairs, where each pair is thought to interact via a distance-dependent Helmholtz energy, A_pair(r_ij, T). The elastic force depends on the coordinates of the nodes, but it does not depend on the local network density. The elastic force between connected beads arises due to the retractive force acting to resist the stretching of a strand. This force originates in the decrease in entropy of a stretched polymer strand. In this approximation, the force on bead i due to its connection to bead j is:

𝐅_ij^b = -∇_𝐫_ji A_pair(r_ij, T)

The Gaussian approximation to A_pair(r_ij) can be used for most extensions.<cit.> The conformational entropy of strands is taken into account via a simple harmonic expression:

A_pair^intra(r_ij, T) = (3/2) k_B T r_ij^2/(N_ij b_K^2)

where N_ij is the number of Kuhn segments assigned to the strand, b_K the Kuhn length of the polymer, and k_B the Boltzmann constant. The summation is carried over all pairs i,j which lie along the contour of the chains.
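To make the bonded contribution concrete, the following minimal sketch (assuming SI units; the function name and the numerical example are ours) evaluates the Gaussian strand free energy and the corresponding force. The same functional form applies to the slip-springs introduced in the next paragraph, with N_ij b_K^2 replaced by l_ss^2.

```python
import numpy as np

kB = 1.380649e-23  # J/K

def strand_energy_force(r_i, r_j, N_kuhn, b_K, T):
    """Entropic (Gaussian) strand free energy and force on bead j.

    A_pair^intra = (3/2) kB T |r_ij|^2 / (N b_K^2), with r_ij = r_j - r_i.
    The force on bead j is -dA/dr_j = -(3 kB T / (N b_K^2)) r_ij;
    bead i feels the opposite force.
    """
    r_ij = r_j - r_i
    k = 3.0 * kB * T / (N_kuhn * b_K**2)   # effective entropic spring constant
    energy = 0.5 * k * np.dot(r_ij, r_ij)
    force_j = -k * r_ij
    return energy, force_j

# Example: a strand of 10 Kuhn segments of cis-1,4-PI (b_K ~ 0.958 nm) at 400 K
e, f = strand_energy_force(np.zeros(3), np.array([0.0, 0.0, 3e-9]),
                           N_kuhn=10, b_K=0.958e-9, T=400.0)
```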
The Helmholtz energy of slip-springs, which are included to account for the entanglement effect, is described by the following equation:

A_pair^sl(r_ij, T) = (3/2) k_B T r_ij^2/l_ss^2

where l_ss is an adjustable parameter (i.e., the root-mean-square equilibrium slip-spring length), which should be larger than the Kuhn length, b_K, and smaller than or equal to the tube diameter of the polymer under consideration, a_pp. In order to account for nonbonded (excluded volume and van der Waals attractive) interactions in the network representation, we introduce a Helmholtz energy functional:

A_nb({ρ(𝐫)}, T) = ∫_box d^3r a_vol[ρ(𝐫), T]

where a_vol is a free energy density (free energy per unit volume) and ρ(𝐫) is the local mass density at position 𝐫. Expressions for a_vol(ρ, T) may be extracted from equations of state. The local density, ρ(𝐫), will be resolved only at the level of entire cells, defined by passing an orthogonal grid through the simulation domain. Thus, eq <ref> will be approximated by a discrete sum over all cells of the orthogonal grid:

A_nb({ρ(𝐫)}, T) = ∑_k=1^N_cells V_cell,k^acc a_vol(ρ_cell,k, T)

where V_cell,k^acc is the accessible volume of cell k. The cell density, ρ_cell,k, must be defined based on the beads in and around cell k. The spatial discretization scheme employed for nonbonded interactions is thoroughly presented in Appendix A. In deriving nonbonded forces (see eq <ref> below), it would have been more correct to use the excess Helmholtz energy density (relative to an ideal gas of beads), rather than the total Helmholtz energy density, and to rely on the particle-based simulation for effectively including the contribution of translational entropy. Our formulation uses the total, rather than the excess, Helmholtz energy density, thereby missing the translational entropy associated with allowing the beads which reside in one voxel to visit other voxels. However, we have validated our results against those obtained by employing the excess Helmholtz energy. They are practically identical, because, in our implementation, the cells of the discretization grid are so large that they behave almost as macroscopic domains. The translational entropy associated with the exchange of a bead between voxels is negligible in comparison with the entropy associated with roaming of the bead within a voxel. In this work we invoke the Sanchez-Lacombe equation of state,<cit.> which gives for the Helmholtz energy:

A^SL(ρ, T) = -n r k_B T^* ρ̃ + n r k_B T [(1/ρ̃ - 1) ln(1 - ρ̃) + (1/r) ln ρ̃] - n k_B T ln w

where ρ̃ = ρ/ρ^* is the reduced density, with ρ being the mass density of the melt and ρ^* the close-packed mass density. The temperature of the melt is denoted by T, and T^* = ε/k_B corresponds to an equivalent temperature defined in terms of ε, which is the nonbonded, mer-mer interaction energy (k_B being the Boltzmann constant). The n chains of the melt, whose molecular weight per chain is M, are thought of as being composed of r Sanchez-Lacombe segments each ("r-mers"). Finally, w is a parameter quantifying the number of configurations available to the system. The last term in eq <ref> does not depend on the density of the system, being an ideal gas contribution (please see the discussion in Appendix B). All Sanchez-Lacombe parameters are presented in Table B1 of Appendix B. The Helmholtz energy density, a_vol(ρ, T), can be calculated as:

a_vol(ρ, T) = A(ρ, T)/V = A(ρ, T)/(nM/(ρ N_A)) = ρ N_A A(ρ, T)/(nM) = ρ a_mass(ρ, T)

where a_mass(ρ, T) denotes the Helmholtz energy per unit mass of the system.
Based on eq <ref>, we can calculate the above:

a_mass(ρ, T) = -r R T^* ρ̃/M + (r R T/M)[(ṽ - 1) ln(1 - ρ̃) + (1/r) ln ρ̃] - (R T/M) ln w

and by using M = r ρ^* v^* and r = (M p^*)/(ρ^* R T^*), the above expression becomes:

a_mass(ρ, T) = -(p^*/ρ^*2) ρ + (p^* T̃/ρ^*)[(ρ^*/ρ - 1) ln(1 - ρ/ρ^*) + (ρ^* R T^*/(M p^*)) ln(ρ/(ρ^* w))]

The Helmholtz energy density (free energy per unit volume), a_vol(ρ, T), is:

a_vol(ρ, T) = ρ a_mass(ρ, T) = -p^* ρ̃^2 + p^* T̃ ρ̃ [(1/ρ̃ - 1) ln(1 - ρ̃) + (ρ^* R T^*/(M p^*)) ln ρ̃ - (ρ^* R T^*/(M p^*)) ln w]

where everything is cast in terms of the reduced variables T^*, p^*, ρ^* and the molecular weight of a chain, M. Upon integration over the domain of a system with given mass, the last term within the brackets of eq <ref>, which is strictly proportional to density, yields a density-independent contribution. Thus, it does not affect the equations of motion or the thermodynamic properties. All needed parameters (i.e., T^*, p^*, ρ^*) can be obtained from experimental studies.<cit.> The force due to non-bonded interactions on a bead i is given by:

𝐅_i^nb = -∇_𝐫_i A_nb({ρ_cell}, T) = -∂/∂𝐫_i [∑_k∈cells V_cell,k^acc a_vol(ρ_cell,k, T)]
= -∑_k∈cells V_cell,k^acc (∂a_vol(ρ, T)/∂ρ)|_ρ=ρ_cell,k ∂ρ_cell,k/∂𝐫_i

with the derivative of a_vol(ρ, T) with respect to density being:

∂a_vol(ρ, T)/∂ρ = -(2p^*/ρ^*2) ρ - (p^* T/(ρ^* T^*))[1 + ln(1 - ρ/ρ^*)] + (R T/M)[1 + ln(ρ/(ρ^* w))]

and the derivative ∂ρ_cell,k/∂𝐫_i given by eq <ref> in Appendix A. The terms of the Helmholtz energy density containing the parameter w (or the thermal wavelength, as derived in Appendix B), which are linear in the density, do not contribute to the forces. Upon integration over the entire domain of the primary simulation box, these terms give a constant contribution to the Helmholtz energy, and their gradient with respect to any bead position is zero.

§.§ Generation of Initial Configurations

Initial configurations for the linear melt are obtained by field theory-inspired Monte Carlo (FT-i MC) equilibration of a coarse-grained melt, wherein chains are represented as freely jointed sequences of Kuhn segments subject to a coarse-grained Helfand Hamiltonian which prevents the density from departing from its mean value anywhere in the system.<cit.> The coarse-graining from the freely jointed chain model to the bead-spring model involves the placement of beads at regular intervals along the contour of the chains obtained after the MC equilibration. As already discussed in the previous section, at the new (coarser, mesoscopic) level of description, the polymer is envisioned as a network of strands connecting internal nodal points and end points. Starting from a well-equilibrated configuration ℛ, obtained from a FT-i MC simulation, we determine the box size and shape for which the Gibbs energy function:<cit.>

G(T, τ) = A({𝐫_ij}, {ρ(𝐫)}, T) - V_ℛ (1/3) Tr(τ) - V_ℛ ∑_αβ τ_αβ ε_αβ = G({𝐫_i}, τ)

becomes minimal under the given, externally imposed stress tensor τ. By prescribing the stress tensor, τ, we minimize the free energy with respect to the positions of the nodal points, 𝐫_i, which we assume to follow the macroscopic strain in an affine way. The presence of any symmetry element in the stress tensor reduces the number of minimization parameters and, in the special case where only hydrostatic pressure is applied on the system, the sum of the last two terms in eq <ref> is equivalent to -pV, letting the volume V of the deformed configuration be the only parameter for the Gibbs energy minimization. In that case, the volume is expressed as V = V_ℛ(1 + ε_xx + ε_yy + ε_zz).
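The hydrostatic special case lends itself to a compact numerical illustration. The sketch below (our naming; the Sanchez-Lacombe parameter values are placeholders for illustration only — the actual values are tabulated in Appendix B) evaluates a_vol(ρ, T) from the working equation above, with the w-terms omitted since they produce no net force for a closed system, and scans the box volume for the minimum of G(V) = A(V) + pV, treating the melt as homogeneous and keeping only the nonbonded part of A.

```python
import numpy as np

R = 8.31446  # J/(mol K)

def sl_a_vol(rho, T, Tstar, pstar, rhostar, M):
    """Sanchez-Lacombe Helmholtz energy density a_vol(rho, T) in J/m^3 (w-terms dropped)."""
    rt = rho / rhostar                         # reduced density, must stay below 1
    return (-pstar * rt**2
            + pstar * (T / Tstar) * rt * ((1.0 / rt - 1.0) * np.log(1.0 - rt)
            + rhostar * R * Tstar / (M * pstar) * np.log(rt)))

def gibbs_vs_volume(V, M_tot, T, p, sl_params):
    """G(V) = V a_vol(M_tot/V, T) + p V for a homogeneous melt of total mass M_tot."""
    rho = M_tot / V
    return V * sl_a_vol(rho, T, *sl_params) + p * V

# Placeholder parameters (Tstar in K, pstar in Pa, rhostar in kg/m^3, M in kg/mol):
sl_params = (700.0, 400e6, 1000.0, 21.0)
V0 = (50e-9)**3                 # initial (50 nm)^3 box
M_tot = 900.0 * V0              # mass corresponding to an initial density of 900 kg/m^3
V = np.linspace(0.93, 1.30, 4001) * V0
G = gibbs_vs_volume(V, M_tot, 400.0, 101325.0, sl_params)
print(V[np.argmin(G)] / V0)     # relaxed volume relative to the initial one
```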
The Gibbs energy function, eq <ref>, is, of course, valid only for small deformations away from the reference state. In our case, we let the system relax under atmospheric hydrostatic pressure, starting from an equilibrated configuration at the average density for the temperature under consideration. Slip-springs have not been introduced yet at this point. Slip-springs can either be placed randomly in the initial configuration, or they can be allowed to be created following the fluctuating kMC scheme described in Appendix E. In the former way, the number of slip-springs is chosen so as to be consistent with the molar mass between entanglements, M_e, of the polymer under consideration. Every possible pair of beads in the melt can be coupled by a slip-spring, given that their separation distance is less than l_ss (eq <ref>). Otherwise, the initially slip-spring-free configuration is used as an input and slip-springs are generated on the go by utilizing the fluctuating kMC scheme (Appendix E) with a hopping pre-exponential factor, ν_hop, at the upper limit of the allowed range (Appendix D). In this way, slip-springs are introduced in an unbiased and rigorous way while the melt is equilibrated in parallel. The procedure is interrupted at the point where the number of entanglements (slip-springs) reaches the desired value, and the frequency factor ν_hop is fixed at the value that ensures conservation of the number of slip-springs. Once the slip-springs have been introduced, the Gibbs energy of the system (eq <ref>) can be minimized again in their presence, at fixed topology of the entanglement network, in order to ensure the proper simulation box dimensions. However, slip-springs contribute little (on the order of 1%) to the stress tensor of the system.

§.§ Brownian Dynamics

When simulating a system of coarse-grained particles, some degrees of freedom are treated explicitly, whereas others are represented only by their stochastic influence on the former ones. In our model, the effect of the surrounding melt on the motion of the coarse-grained beads is mimicked by introducing a stochastic force plus a frictional force into the equations of motion of the beads. When the stochastic force contains no correlations in space or time, one obtains the simplest form of stochastic dynamics, called Brownian Dynamics (BD).<cit.> The theory of Brownian motion was developed to describe the dynamic behavior of particles whose mass and size are much larger than those of the host medium particles. In this case, the position Langevin equation becomes:

𝐯_i(t) = (1/ζ_i) 𝐅_i({𝐫_i(t)}) + (1/ζ_i) ℱ_i(t)

The systematic force 𝐅_i(t) is the explicit mutual force between the N particles, and ℱ_i(t) represents the effect of the medium on the particles. Each particle is characterized by its mass m_i and the friction coefficient ζ_i, measured in kg/s. The systematic force, 𝐅_i, is derived from the free energy following eqs <ref> and <ref>:

𝐅_i({𝐫_i(t)}) = -∇_𝐫_i(t) A({𝐫_ij(t)}, {ρ(𝐫,t)}, T)

with the expression for the Helmholtz energy as given in eq <ref>. The stochastic force ℱ_i is assumed to be stationary, Markovian and Gaussian with zero mean, and to have no correlation with prior velocities nor with the systematic force.
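The simplest discretization of the position Langevin equation above is a first-order Euler-Maruyama step; the following is a minimal sketch with our naming (a refined update including the force derivative is given next), in which the random displacement per Cartesian component has zero mean and variance 2 k_B T Δt/ζ, consistent with the fluctuation-dissipation theorem.

```python
import numpy as np

def euler_maruyama_step(r, F, zeta, kBT, dt, rng):
    """First-order discretization of dr/dt = F/zeta + random force/zeta.

    r, F: (N, 3) arrays of positions (m) and systematic forces (N);
    zeta: friction coefficient (kg/s); kBT in J; dt in s.
    """
    noise = rng.normal(0.0, np.sqrt(2.0 * kBT * dt / zeta), size=r.shape)
    return r + F * dt / zeta + noise
```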
For large values of (ζ_i/m_i) Δt, in the diffusive regime, when the friction is so strong that the velocities relax within Δt, the BD algorithm of van Gunsteren and Berendsen,<cit.> eq <ref>, can be used:

r_i,α(t_n + Δt) = r_i,α(t_n) + (1/ζ_i) F_i,α(t_n) Δt + (1/(2ζ_i)) Ḟ_i,α(t_n) Δt^2 + ℛ_i,α(Δt)

where the random variable ℛ_i,α(Δt) is sampled from a Gaussian distribution with zero mean and width:

⟨ℛ_i,α^2(Δt)⟩ = (2 k_B T_ref/ζ_i) Δt

The time derivatives of the forces, Ḟ_i,α, are estimated by finite differences. However, more sophisticated schemes, like Smart Monte Carlo, have been developed.<cit.>

§.§ Slip-spring Kinetic Monte Carlo

Chain slippage through entanglements is modeled as hopping of slip-springs along the contours of the chains they connect, without actual displacements of their beads. Let the ends of a slip-spring connect beads a_0 and b_0 along chains a and b (see Figure <ref>). In order to track hopping events in the system, we use a discretized version of the kinetic Monte Carlo (kMC) algorithm, which runs in parallel with the BD integration of the beads' equations of motion. The dynamics of slip-spring jumps is envisioned as consisting of infrequent transitions from one state to another, with long periods of relative inactivity between these transitions. We envision that each state corresponds to a single free energy basin, and the long time between transitions arises because the system must surmount a free energy barrier, A^‡, to get from one basin to another. Then, for each possible escape pathway from an original to a destination basin (e.g., 𝒪 → 𝒩), there is a rate constant k_𝒪→𝒩 that characterizes the probability, per unit time, that an escape from state 𝒪 to a new state, 𝒩, will occur. These rate constants are independent of what state preceded state 𝒪. We sample slip-spring transitions at regular pre-defined time intervals Δt_kMC, which are multiples of the timestep used for the BD integration, Δt_kMC = n_kMC Δt. Every n_kMC steps of the BD integration along the trajectory of the system, we freeze the coordinates of all beads and calculate the transition probabilities, k_hop Δt_kMC, for all possible transitions of the slip-springs. The kMC time interval, Δt_kMC, is chosen such that the number of transitions performed at every kMC step is much smaller than the number of slip-springs present in the system, and preferably on the order of unity (i.e., infrequent events governed by Poisson process statistics). The kMC step consists in visiting all slip-springs of the system and computing the probabilities for them to perform a jump, considering all possible states accessible from their current one. A probability is assigned to every possible transition, p_𝒪→𝒩 = k_𝒪→𝒩 Δt_kMC. The transition is undertaken only if p_𝒪→𝒩 exceeds a pseudo-random number sampled from a uniform distribution in the interval [0,1). The use of small kMC timesteps, Δt_kMC, ensures that ∑_𝒩 p_𝒪→𝒩 ≪ 1 at every step at which kMC is attempted. In the following we will present a microscopically reversible and physically meaningful formulation for calculating all rates connected with slip-spring jumps, slip-spring destruction and formation. Our formulation is further elaborated in Appendices D, E and F, where detailed considerations are presented. In order to develop a formalism for elementary events of slip-spring hopping, creation or destruction, we need expressions for the rate of slippage along the chain backbone. We concentrate on the slip-spring connecting beads a_0 and b_0.
The Helmholtz energy of the system is denoted as A^N_ss_a_0∧b_0, where in our notation we have incorporated both the number of slip-springs present and the connectivity in which the particular slip-spring finds itself. An individual jump of one end of a slip-spring along the chain backbone, e.g., from bead b_0 to bead b_+, takes place with rate:

k_hop = ν_0 exp[-(A^‡,N_ss - A^N_ss_a_0∧b_0)/(k_B T)]

conforming to a transition state theory (TST) picture of the slippage along the backbone as an infrequent event, which involves a transition from state 𝒪 ≡ a_0∧b_0 to state 𝒩 ≡ a_0∧b_+ over a free energy barrier, A^‡,N_ss - A^N_ss_a_0∧b_0 (cf. Figure <ref>). Since the connectivity of the N_ss - 1 slip-springs does not change during the transition, we can define A^‡_𝒪→𝒩 as the difference between the total Helmholtz energy of the configuration with a slip-spring at the transition state, A^‡,N_ss, and the total Helmholtz energy of a configuration that is missing the particular slip-spring but is otherwise identical, A^N_ss-1, i.e., A^‡,N_ss = A^N_ss-1 + A^‡_𝒪→𝒩. Accordingly, the free energy of the starting configuration can be split as A^N_ss_a_0∧b_0 = A^N_ss-1 + A_a_0∧b_0. All configurations depicted in Figure <ref> are identical but for the presence and end-point positions of the N_ss-th slip-spring. We use the logical conjunction operator "∧" to denote the connectivity of the system. As depicted in Figure <ref>, in the case considered, none of the nearest-neighbor beads of the ends of the slip-spring is a chain end. Double jumps (e.g., a_0 → a_+ and b_0 → b_+) are disallowed, which is a reasonable approximation for small timesteps, Δt_kMC. Based on the proper selection of the pre-exponential factor ν_hop (following Appendix D), the number of steps between two successive kinetic Monte Carlo steps, n_kMC, should be chosen such that the overall probability, p_hop = k_hop Δt_kMC = k_hop n_kMC Δt, is less than 1 for every slip-spring in the system. Based on the selected ν_hop and Δt, we set n_kMC = 100. Any number smaller than that can be used; any number larger than that renders hopping probabilities per kMC step of the order of unity and thus has to be avoided. Our results are insensitive to n_kMC within the aforementioned range. Assuming that all slip-springs attempting to jump face the same free energy barrier, A^‡_𝒪→𝒩, on top of A^N_ss-1, eq <ref> can be more conveniently rewritten as:

k_hop = ν_0 exp[-(A^‡_𝒪→𝒩 - A_a_0∧b_0)/(k_B T)] = ν_hop exp[A_a_0∧b_0/(k_B T)]

where the rate of hopping, k_hop, depends directly on the free energy stored in the slip-spring in its initial state (not on that of the entire configuration), A_a_0∧b_0, while the dependence on the height of the free energy barrier (i.e., A^‡_𝒪→𝒩) has been absorbed into the pre-exponential factor ν_hop = ν_0 exp(-β A^‡_𝒪→𝒩), with β = 1/(k_B T). Eq <ref> leaves us with only one adjustable parameter to be determined, the pre-exponential frequency factor ν_hop. In Appendix D, a simple theoretical guideline for arriving at an estimate of ν_hop is presented, given the fact that the average hopping rate, ⟨k_hop⟩, can be accessed. The representation of slip-springs in terms of entropic Gaussian springs allows for a direct estimation of the expected hopping rate, ⟨k_hop⟩. The distribution of the length of the slip-springs, P_e(r), and the corresponding hopping rate, k_hop(r), are presented in Figure <ref>. Given the probability distribution, the expected average rate of hopping per slip-spring can be estimated as ⟨k_hop⟩ = ∫_0^∞ P_e(r) k_hop(r) dr / ∫_0^∞ P_e(r) dr; a short numerical evaluation of this ratio is sketched below.
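The following few lines (an illustrative check, working in units of l_ss) evaluate ⟨k_hop⟩/ν_hop by straightforward quadrature. For a Gaussian slip-spring, P_e(r) ∝ r^2 exp[-3r^2/(2l_ss^2)], while k_hop(r) = ν_hop exp[3r^2/(2l_ss^2)] from the rate expression above, so the Boltzmann factors cancel in the numerator of the average.

```python
import numpy as np

l_ss = 1.0                                   # work in units of l_ss
r = np.linspace(1e-6, 3.0 * l_ss, 200001)    # truncate at 3 l_ss, as in the text
w = r**2 * np.exp(-1.5 * (r / l_ss)**2)      # unnormalized P_e(r)

num = np.trapz(w * np.exp(1.5 * (r / l_ss)**2), r)   # reduces to the integral of r^2 dr
den = np.trapz(w, r)
print(num / den)    # ~ 38, the value quoted in the text
```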
By assuming a maximum extension of the slip-springs up to 3 l_ss, a distance excluding less than 10^-4 of the cumulative probability distribution, the expected hopping rate is roughly equal to ⟨k_hop⟩/ν_hop ≃ 38.

Let us now assume that the ends of the slip-spring connect beads a_0 and b_0 as depicted in Figure <ref>, where one of the beads connected by the slip-spring (e.g. b_0) is a chain end. One end of the slip-spring can hop along the chain, while the second can either hop away from the chain end or be destroyed. The rate of destruction of a slip-spring is set equal to the rate of hopping along the chain, k_hop. This assumption keeps the necessary adjustable parameters of our model to an absolute minimum. Through the intrinsic dynamics of the system, it is thus possible for a slip-spring end to leave its chain. This process mimics the disentanglement at the chain ends and the process of constraint release (CR). The process of slip-spring destruction is introduced in the model in order to represent chain disentanglement, as envisioned by the polymer tube theories. In the case that both ends of a slip-spring are attached to chain ends, within the scheme we have introduced there are equal probabilities for either of them sliding across its chain end and getting destroyed.

As far as the creation of a slip-spring is concerned, two schemes have been developed: the former preserves the number of slip-springs, while the latter allows a fluctuating number of slip-springs obeying microscopic reversibility. For the sake of clarity, we present the constant scheme here, while the fluctuating scheme is presented in Appendix E. Since both schemes yield identical trajectories for the systems studied, we will refrain from further distinction between them in the following.

In order to keep the number of slip-springs constant throughout the simulation, we assume that a new slip-spring is created whenever an existing slip-spring is destroyed through slippage past a chain end. In creating a new slip-spring, a free end of a chain can be paired to another bead if they are separated by a distance less than α_attempt. Thus, in order to ensure microscopic reversibility, a slip-spring can be destroyed only if its extension is less than α_attempt at the time the destruction is decided to take place. If a slip-spring is considered for destruction, all possible connections of a_0 (i.e., the bead that is not a candidate for slippage along its chain contour, see Figure <ref>) with other beads b lying inside a sphere of radius α_attempt centered at a_0 are identified. If no such beads can be found, the destruction/creation move is rejected, due to violation of microscopic reversibility, since the system would not be able to return to its present state through the kMC moves. The Rosenbluth weight of the old configuration is accumulated by iterating over all neighboring beads b, W_𝒪 = ∑_b exp(-β A_{a_0b}). A chain end a′_0 in the system is then randomly selected with probability 1/(2n), with n being the total number of chains. All possible connections of a′_0 with beads b′ lying inside a sphere of radius α_attempt centered at it are identified, and the Rosenbluth weight W_𝒩 = ∑_{b′} exp(-β A_{a′_0b′}) is accumulated. From all candidate anchoring points, b′, for the slip-spring emanating from chain end a′_0, one is chosen to serve as the end of the new slip-spring, b′_0, with probability P_{a′_0b′_0} = exp(-β A_{a′_0b′_0})/W_𝒩; a sketch of this selection step is given below.
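The candidate enumeration and Rosenbluth-weighted selection just described might look as follows in outline. The helpers `neighbors_within` and `spring_free_energy`, and the data layout, are hypothetical stand-ins; the sketch covers only the selection step, not the full destruction/creation move.

```python
import numpy as np

rng = np.random.default_rng()

def select_new_anchor(chain_end, beta, alpha_attempt,
                      neighbors_within, spring_free_energy):
    """Pick the anchoring bead b' for a new slip-spring emanating from
    chain_end, with probability exp(-beta * A(a0', b')) / W_N.
    Returns (chosen bead index, Rosenbluth weight W_N), or (None, 0.0)
    if no candidate lies within alpha_attempt."""
    candidates = neighbors_within(chain_end, alpha_attempt)  # bead indices
    if len(candidates) == 0:
        return None, 0.0
    weights = np.array([np.exp(-beta * spring_free_energy(chain_end, b))
                        for b in candidates])
    W_N = weights.sum()                    # Rosenbluth weight of the new state
    chosen = rng.choice(candidates, p=weights / W_N)
    return chosen, W_N
```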
The new connectivity of the system is finally accepted with probability:

P^accept_{𝒪→𝒩} = min{1, exp[-(A_{a_0b_0} - A_{a′_0b′_0})/(k_B T)] W_𝒩/W_𝒪}

If the creation of the new slip-spring a′_0b′_0 is rejected, based on the criterion of eq <ref>, the slip-spring a_0b_0 remains as is, i.e. anchored to beads a_0 and b_0. Otherwise, the actual probability for a specific slip-spring destruction/creation event to take place, at any point in the simulation where changes in slip-spring ends are considered, is:

P_{a_0b_0 → a′_0b′_0} = k_hop,a_0b_0 Δt_kMC P^accept_{𝒪→𝒩}

where k_hop,a_0b_0 is defined in eq <ref> and Δt_kMC is the elapsed time since the previous kMC step. By design, the overall destruction/creation scheme satisfies microscopic reversibility; the corresponding proof can be found in Appendix F.

§.§ Simulation Details

In this section we discuss the parameterization of our methodology in a bottom-up approach. During the development of the model we have tried to keep its adjustable parameters to an absolute minimum, without introducing new parameters that cannot be mapped to physically relevant observables. We elaborate on how the values of the parameters are obtained with respect to structure, thermodynamics (e.g., the equation of state), entanglement density, and friction/hopping rates, either from experimental evidence or from more detailed simulation levels.

The systems considered were cis-1,4 polyisoprene (natural rubber) melts in the range of molar masses M = 21 kg/mol to M = 120 kg/mol. All simulations were carried out in the canonical statistical ensemble, at a temperature of T = 400 K, where rheological measurements are available.<cit.> The simulation box was cubic, with edge length varying from 30 nm to 100 nm.

The mean squared end-to-end distance of a PI chain can be expressed either in terms of the number of isoprene units or in terms of Kuhn lengths:

⟨R_e,0²⟩ = C_∞ 4N l² = N_Kuhn b²

where C_∞ is the characteristic ratio of PI, N is the length of the chain measured in monomers, l is the root mean-squared carbon-carbon bond length averaged over an isoprene monomer, and b is the Kuhn length of polyisoprene. The factor of 4 is required because each isoprene unit contains four backbone bonds. Using Mark's<cit.> values for C_∞ and l of 4.7 and 0.1485 nm, respectively, we obtain b = 0.958 nm and 2.21 isoprene units per Kuhn length.<cit.> The effective length of an isoprene unit has been shown to have a slight temperature dependence;<cit.> however, we ignore this effect in our model. Each bead in our representation consists of n_Kuhns/bead = 10 PI Kuhn segments. Thus, its mass is m_bead = 1506 g/mol and its characteristic mean-square end-to-end distance (if considered as a random walk) should be n_Kuhns/bead b² = 9.1776 × 10^-18 m².

As far as the parameterization of the slip-springs is concerned, Fetters et al.<cit.> have estimated the entanglement molecular weight of PI, M_e, to be 5430 g/mol, thus allowing us to define the average required number of slip-springs present in the system as:

N_ss = n (Z - 1)/2

with n being the number of chains present and Z = M/M_e. Based on the chain discretization introduced above, the average distance between slip-springs is roughly four coarse-grained beads; the sketch below collects these conversions.
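For concreteness, the bookkeeping behind these numbers can be reproduced in a few lines. This is a sketch using only the values quoted above (Mark's C_∞ and l, the 2.21 monomers per Kuhn length, and M_e from Fetters et al.); the variable names are our own.

```python
# Bottom-up parameterization of the coarse-grained PI model (values from the text).
C_inf        = 4.7         # characteristic ratio of PI (Mark)
l_cc         = 0.1485e-9   # mean C-C bond length over an isoprene unit (m)
M_mon        = 68.12       # molar mass of an isoprene unit (g/mol)
M_e          = 5430.0      # entanglement molar mass of PI (g/mol, Fetters et al.)
mon_per_kuhn = 2.21        # isoprene units per Kuhn segment

# Kuhn length consistent with <R^2> = C_inf * 4N * l^2 = N_Kuhn * b^2
b = (C_inf * 4 * mon_per_kuhn) ** 0.5 * l_cc         # -> ~0.958 nm

n_kuhns_per_bead = 10
m_bead  = n_kuhns_per_bead * mon_per_kuhn * M_mon    # -> ~1506 g/mol
r2_bead = n_kuhns_per_bead * b**2                    # -> ~9.18e-18 m^2

def n_slip_springs(n_chains, M):
    """Average number of slip-springs, N_ss = n (Z - 1) / 2 with Z = M / M_e."""
    return n_chains * (M / M_e - 1.0) / 2.0

print(b, m_bead, r2_bead, n_slip_springs(100, 100_000.0))
```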
It is not at all clear or well defined what exactly a slip-spring represents. In this work we try to relate the parameters of the slip-springs to quantities obtainable from atomistic simulations or experiments, so as to reduce the number of adjustable parameters; however, one slip-spring may not correspond to exactly one entanglement. Other interpretations can be invoked, e.g. thinking of the slip-spring as a soft constraint on the motion of the chain perpendicular to its contour, and not as a “kink” in the primitive path.

The stiffness of the slip-springs can be tuned by the parameter l_ss, introduced in eq <ref>. We set this equal to a reasonable estimate of the entanglement tube diameter of PI, l_ss = α_pp ≃ 6.2 nm.<cit.> At this point, we should stress an important design rule for the degree of coarse-graining to be chosen. In order for the chains to preserve their unperturbed fractal nature in the melt, the entropic springs along them should be stiffer than the slip-springs, i.e., l_ss > √(n_Kuhns/bead) b. If this is not the case, n_Kuhns/bead should be tuned (i.e., the degree of coarse-graining should be adjusted) in order to ensure that the entropic attraction between beads belonging to the same chain is at least an order of magnitude stronger than the attraction due to slip-springs (which connect mostly beads belonging to different chains).

We have used the parameters of the Sanchez-Lacombe equation of state given by Rudolf et al.,<cit.> who conducted pVT measurements on several polymers under isothermal conditions above their glass transition temperature. These authors suggest p* = 383.0 MPa, T* = 631.2 K and ρ* = 0.961 g/cm³ for molten polyisoprene of M = 2594 g/mol. Based on these values, the density of polyisoprene at T = 300 K is estimated as ρ(300 K) = 0.908 g/cm³.

Klopffer et al.<cit.> have characterized the rheological behavior of a series of polybutadienes and polyisoprenes over a wide range of temperatures. The viscoelastic coefficients resulting from the time-temperature superposition principle were determined, and a Rouse theory modified for undiluted polymers was used to calculate the monomeric friction coefficient, ζ_0, which characterizes the resistance encountered by a monomer unit moving through its surroundings. It was concluded that, within experimental error, a single set of Williams-Landel-Ferry (WLF) parameters<cit.> at T_g was adequate to characterize the relaxation dynamics, irrespective of the vinyl content of the polybutadienes and polyisoprenes. These authors proposed that the variation of the monomeric friction coefficient with temperature can be given by:

log ζ_0(T) = log ζ_∞ + C_1^g C_2^g/(T - T_g + C_2^g)

with the parameters C_1^g = 13.5 ± 0.2, C_2^g = 45 ± 3 K, log ζ_∞ = -10.4 (ζ_∞ in dyn s cm^-1) and T_g = 211.15 K. At a temperature of 298 K, ζ_0(298 K) = 1.61 × 10^-6 dyn s cm^-1, while at a temperature of 400 K, ζ_0(400 K) = 1.1508 × 10^-11 kg/s. If we think of the friction coefficient as being proportional to the mass of the entity it refers to, we can estimate the friction coefficient of our coarse-grained beads as:

ζ_bead = (m_bead/m_monomer) ζ_0

where m_monomer = 68.12 g/mol is the molar mass of a PI monomer. The friction coefficient of a coarse-grained bead is then:

ζ_bead = 2.54 × 10^-10 kg/s

Moreover, Doxastakis et al.<cit.> have estimated the self-diffusion coefficient of unentangled PI chains consisting of 115 carbon atoms, at T = 413 K, to be:

D_C115 = 4.4 × 10^-11 m²/s

The length of these chains corresponds to 23 monomers (or 11 PI Kuhn segments). If we use the Rouse model<cit.> to predict the diffusivity of these chains, based on the parameters chosen above, we obtain:

D_Rouse,C115 = k_B T/(N_monomers ζ_0) = 2.15 × 10^-11 m²/s

where N_monomers is the chain length measured in monomers and ζ_0 the monomeric friction coefficient.
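These last numbers can be cross-checked directly; the following sketch simply re-evaluates the WLF expression and the Rouse estimate with the constants quoted above (no new input enters).

```python
import math

kB = 1.380649e-23   # J/K

# WLF form for the monomeric friction coefficient (Klopffer et al.)
def zeta0_dyn_s_per_cm(T, log_zeta_inf=-10.4, C1=13.5, C2=45.0, Tg=211.15):
    return 10.0 ** (log_zeta_inf + C1 * C2 / (T - Tg + C2))

print(zeta0_dyn_s_per_cm(298.0))        # ~1.6e-6 dyn s/cm, as quoted

# Rouse diffusivity of C115 PI chains (23 monomers), zeta_0 in kg/s
zeta0 = 1.1508e-11                       # kg/s, value used in the text
T, N_monomers = 413.0, 23
print(kB * T / (N_monomers * zeta0))     # ~2.15e-11 m^2/s

# Bead friction scaled by mass: zeta_bead = (m_bead / m_monomer) * zeta_0
print((1506.0 / 68.12) * zeta0)          # ~2.54e-10 kg/s
```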
Our estimate of the self-diffusivity of Rouse chains, based on the model parameters we have introduced, thus coheres with what Doxastakis et al. have measured both experimentally and by all-atom MD simulations.

Finally, as far as the implementation of the model is concerned, we integrate the equations of motion, eq <ref>, employing a timestep Δt = 10^-11 s, for at least 10 t_run with t_run = 10^-2 s. The kMC scheme is carried out every n_kMC = 100 steps, resulting in Δt_kMC = 10^-9 s. The frequency factor needed for the hopping rates of the kMC scheme is allowed to vary in the range 10^-1 ≤ ν_hop ≤ 10 s^-1 and is kept constant over the whole range of molecular weights studied. Its optimal value will be thoroughly discussed in the following.

§ RESULTS AND DISCUSSION

§.§ Hopping Dynamics

At first we study the distribution of residence times between successive slip-spring jumps in the course of a BD/kMC simulation. The probability density functions presented in Figure <ref> exhibit a number of salient points which should be discussed. Based on our picture of slip-spring hops as infrequent transitions, the hopping process should follow Poisson statistics; this is evident from the inset to the figure, where the distributions of residence times are clearly straight lines on semi-logarithmic axes. There are minor deviations at long time scales, which are expected, since the population of slip-springs remaining stationary up to such times is extremely small. We have studied two molecular weights, namely 50 kg/mol and 100 kg/mol. It is evident that the distribution of residence times is independent of the molecular weight of the chains. The use of the same pre-exponential frequency factor, ν_hop, yields indistinguishable results as far as the individual motion of the slip-springs is concerned. This is crucial for the methodology we have developed, since we employ the same ν_hop across different molecular weights, and we expect the differences in chain dynamics to arise from the difference in chain sizes (as they should) and not from tuning the slip-spring dynamics.

In order to evaluate the effective hopping rate of the slip-springs, k_hop, we invoke a hazard-plot analysis<cit.> of the hopping events which have taken place along the BD trajectory. The hazard rate, h(t), is defined such that h(t) dt is the probability that a slip-spring which has survived for a time t in a certain state since its last transition will undergo a transition (i.e., jump) in the time interval between t and t + dt. The cumulative hazard is defined as H(t) = ∫_0^t h(t′) dt′. Assuming a Poisson process consisting of elementary transitions with first-order kinetics from one state to the other, the Poisson rate can be extracted as the slope of a (linear) plot of the cumulative hazard versus the residence time. Thus, the effective rate constant of slip-spring hopping can be obtained from the hazard plot of the residence time of the slip-springs in a specific topology (i.e., the elapsed time between consecutive slip-spring jumps), which is presented in Figure <ref>. At first, we can confirm that the hopping processes follow Poisson statistics, since the main parts of the plots are straight lines. However, there is a significant error related to fitting the hazard plots, due to the fact that at short time scales many short-lived events are monitored.
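A minimal implementation of such a hazard-plot estimate is sketched below, using the standard empirical (Nelson-Aalen type) accumulation of the cumulative hazard; the synthetic check at the end is illustrative only.

```python
import numpy as np

def hazard_rate(residence_times):
    """Estimate a Poisson rate from residence times via a hazard plot.
    H(t) is accumulated as a sum over ordered events of 1/(number still
    surviving); for first-order kinetics H(t) = k*t, so the slope of the
    cumulative hazard versus residence time gives the rate."""
    t = np.sort(np.asarray(residence_times))
    n = len(t)
    H = np.cumsum(1.0 / (n - np.arange(n)))   # Nelson-Aalen cumulative hazard
    k, _ = np.polyfit(t, H, 1)                # slope of H vs t
    return k

# synthetic check: exponential residence times with rate k = 5
rng = np.random.default_rng(0)
print(hazard_rate(rng.exponential(1 / 5, 10_000)))   # ~5
```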
Given eq <ref>, knowledge of ν_hop, which is a controlled parameter, and of the effective rate of slip-spring jumps from the hazard plots (i.e., k_hop) allows us to estimate the average free energy of the slip-springs, A_{a_0b_0}, entering eq <ref>. The effective hopping rate, k_hop, is barely sensitive to the frequency factor, ν_hop, as can be observed in the inset to Figure <ref>. The dependence of the former on the latter appears logarithmic, a rule of thumb that can be used if fine tuning of the dynamics of the model is needed. This observation seems to be in apparent contradiction with the discussion connected with Figure <ref>. However, it is fully justified, since a number of assumptions were made in deriving the simplified theoretical estimate of the average hopping rate as a function of the pre-exponential frequency factor. The entanglement creation and destruction processes were not taken into account: if, for example, a slip-spring about to be destroyed does not find proper candidates for its new anchoring points, it remains as is, perturbing the statistics. Moreover, slip-spring jumps are not fully independent events: the jump of a slip-spring at a specific timestep may help (or hinder) the relaxation of the remaining ones in the course of the simulation, and vice versa. Finally, the slope of the hazard plots used to estimate the hopping rate is sensitive to recrossing events taking place at short time scales (e.g., a specific slip-spring jumping back and forth). Summarizing, the effective hopping rate presented in Figure <ref> incorporates cooperative effects between different slip-springs and the reduction of slip-spring jumps due to the slip-spring creation rules employed. We expect, however, that the linear dependence may be recovered for higher frequency factors ν_hop.

§.§ Structural Features

We start by examining the structure of the melt chains at the level of individual strands, where we refer to a strand as the connector between successive beads along the contour of a chain. The distribution of the length of the strands (entropic springs) that connect the beads along the contour of the chains is depicted in Figure <ref>. The entropic springs along the chain are considered Gaussian, with an unperturbed length equal to the total length of the Kuhn segments represented by a coarse-grained bead. The conformational features, at least at the strand level, continue to be respected during the BD simulation. Moreover, the distribution of strand lengths during the simulation coincides with a Gaussian distribution centered at the unperturbed strand length, as theoretically expected. Our simulation scheme thus seems to produce trajectories of the system consistent with the imposed Helmholtz energy function, eq <ref>.

In the inset to Figure <ref> we examine the distribution of slip-spring lengths. Slip-springs represent entanglements of a chain with its surrounding chains. The polymer tube model considers a tube formed around the primitive path of the chain, which fluctuates in time. Recent simulations have shown that the probability of finding segments of the neighboring chains inside the tube of the chain under consideration is Gaussian.<cit.> This is also the case in our simulations. The use of Gaussian entropic springs for describing the free energy of the slip-springs results in a Gaussian distribution of slip-spring lengths, conforming to the picture obtained from more detailed simulations and theoretical arguments. The distributions obtained from the simulations are slightly shifted to larger distances.
However, they are independent of the molecular weight of the chains under consideration.

Finally, we examine the chain dimensions, as quantified by the end-to-end distance, R_e. In Figure <ref> we examine the distribution of end-to-end distances for two molecular weights lying at the extremes of our range of study. Shorter chains, of M = 21 kg/mol, behave extremely close to their unperturbed, Gaussian statistics. This is not the case for larger chains, which become conformationally stiffer than the theoretical prediction. Departures of the end-to-end distribution occur at distances commensurate with, or larger than, the average distance between slip-spring anchoring beads. At this length scale, the slip-springs bring about an extension of the chains. However, even this extension does not affect the ∼ N^{1/2} scaling law of the end-to-end distance of the chains, as can be observed in the inset to Figure <ref>, where the end-to-end distance is calculated as a function of the number of beads of sub-chains. Polymer chains, in both cases, behave as random walks, albeit with a larger effective “Kuhn” step. Our minimalistic approach has significantly reduced, but not eliminated, the perturbations of chain conformations caused by the unphysical attraction introduced by the slip-springs. This attraction does not cause obvious problems under the conditions considered, but it is always present in our simulations. The only way of fully restoring chain conformations is to introduce a compensating potential such as that of Chappa et al.,<cit.> which fully compensates the effect of slip-springs on equilibrium properties.

§.§ Thermodynamics

Thermodynamics is naturally introduced in our model via the nonbonded energy, which is dictated by a suitable equation of state (e.g., the Sanchez-Lacombe equation of state), and several thermodynamic properties can be calculated. As a proof of concept, we calculate the compressibility of our PI melts. We can follow the time evolution of the local densities in the cells of our computational grid, as presented in Appendix A. By treating each cell of the grid as a system capable of exchanging mass with its surroundings, the compressibility can be estimated from the following fluctuation formula:

κ_T/V = ⟨δn_Kuhns/cell²⟩/(k_B T ⟨n_Kuhns/cell⟩²)

where the averages ⟨...⟩ are taken over all cells of the grid and along the trajectory of the simulation, and V is the cell volume. The averages appearing in the numerator and denominator of eq <ref> are the variance and the mean of the number of Kuhn segments assigned to each cell, respectively. It should be noted that the local densities of the grid cells are obtained via the smearing scheme introduced in Appendix A (a minimal sketch of this estimate is given below). The isothermal compressibility estimated from our simulations varies from 1.09 × 10^-4 bar^-1 to 1.04 × 10^-4 bar^-1 for samples from 21 kg/mol to 120 kg/mol, respectively. On the other hand, starting from the macroscopic equation of state, the compressibility is:

κ_T ≡ -(1/V)(∂V/∂p)_T = (∂ln ρ̃/∂p)_T

which equals 1.27 × 10^-4 bar^-1. The small difference (lower compressibility in our simulations) is probably attributable to the additional contributions to the Helmholtz energy, i.e., the entropic springs and the slip-springs. Our model thus reproduces the compressibility of polymeric melts with reasonable accuracy; realistic compressibility is missing from the majority of coarse-grained models and methods.
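A sketch of this fluctuation estimate follows, assuming the per-cell Kuhn-segment counts have already been accumulated by the smearing scheme of Appendix A; the array layout and names are ours.

```python
import numpy as np

kB = 1.380649e-23  # J/K

def isothermal_compressibility(n_cell, V_cell, T):
    """kappa_T from density fluctuations: kappa_T / V = <dn^2> / (kB T <n>^2),
    with n_cell the Kuhn-segment counts per cell (samples x cells) and
    V_cell the cell volume in m^3. Returns kappa_T in 1/Pa."""
    n = np.asarray(n_cell, dtype=float).ravel()
    return V_cell * n.var() / (kB * T * n.mean() ** 2)

# illustrative use with Poisson-like counts (for which the formula reduces
# to the ideal-gas result kappa_T = 1/p, a convenient sanity check)
rng = np.random.default_rng(1)
print(isothermal_compressibility(rng.poisson(500.0, 10_000), (5e-9) ** 3, 400.0))
```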
§.§ Polymer Dynamics

A rigorous way of studying the mobility of a polymeric melt is to calculate the mean-squared displacement (MSD) of the structural units of the chains. In order to avoid chain-end effects,<cit.> only the innermost beads along the chain contribute to the calculation:

g_1(t) = 1/(2 n_inner + 1) ∑_{i=N/2-n_inner}^{N/2+n_inner} ⟨[𝐫_i(t_0+t) - 𝐫_i(t_0)]²⟩

with the parameter n_inner quantifying the number of innermost beads, on each side of the middle segment of each chain, that are monitored. In our case, n_inner is set such that we track more than half of the chain, excluding one-fifth of the chain close to one end and one-fifth close to the other. Moreover, the motion of the chains can be described in terms of the mean-squared displacement of their centers of mass, g_3(t):

g_3(t) = ⟨[𝐫_cm(t_0+t) - 𝐫_cm(t_0)]²⟩

Figure <ref> contains results for the time dependence of the mean-square displacements g_1(t) and g_3(t) for two PI melts of different molecular weights. To a first approximation, the dynamics of short chains in a melt can be described by the Rouse model.<cit.> As far as the lower molecular weight chains are concerned, it can be seen in Figure <ref> that the mean-square center-of-mass displacement, g_3(t), remains almost linear at all times; this means that the intermolecular forces between polymers are too weak to affect diffusive behavior and play a minor role compared to the bonded interactions. Small departures are fully justified by the few slip-springs present even in the short-chain system. The bead mean-squared displacement, g_1(t), exhibits a subdiffusive behavior that arises from chain connectivity and is characterized by a power law of the form g_1(t) ∼ t^{1/2}. After an initial relaxation time where a change in g_1(t) occurs, a regular diffusive regime is entered, where g_1(t) ∼ t. This sequence of scaling regimes is predicted by the Rouse theory. The limiting behavior of the chains' center-of-mass displacement yields an estimate of the diffusivity of the chains, lim_{t→∞} g_3(t) = 6 D_cm t, which has been found in excellent agreement with the diffusivity predicted by the all-atom MD simulations of PI chains of the same molecular weight by Doxastakis et al.<cit.> The introduction of slip-springs in these short-chain systems does not seem to affect the scaling laws of the unentangled melt significantly.

Moreover, Figure <ref> includes results for the mean-square displacements g_1(t) and g_3(t) as functions of time, obtained from the simulation of a 100 kg/mol PI melt with slip-springs present. At short time scales, the segmental MSD curves, g_1(t), for both melts coincide. At longer time scales, however, the characteristic signature of an entangled polymer system can be observed in the longer-chain system. At short times the bead mean-square displacement, g_1(t), shows a scaling regime with a power law t^{1/2}; at intermediate times, a regime with a power law t^{1/4} appears; eventually we observe a crossover to regular diffusion at long times. The mean-square displacement of the chain center of mass, g_3(t), also exhibits subdiffusive behavior at intermediate times, with a scaling t^{1/2}, as predicted by the tube model; at long times, regular diffusion is reached. From the long-time behavior of g_3(t) we can estimate the longest relaxation time, τ_d. It should be noted that all transitions between scaling regimes are very smooth in our model. Although the crossovers between the scaling regions are not sharply discerned in our results, the scaling expected for polymer melts from tube theory is observed.
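The displacement observables above are straightforward to evaluate from stored bead trajectories. A single-time-origin sketch follows (a production analysis would additionally average over time origins t_0); the array layout is our own choice.

```python
import numpy as np

def msd_inner_beads(traj, n_inner):
    """g1(t): mean-square displacement of the innermost beads.
    traj has shape (n_frames, n_chains, N_beads, 3); only beads within
    n_inner of the chain middle contribute, to avoid chain-end effects."""
    N = traj.shape[2]
    mid = N // 2
    inner = traj[:, :, mid - n_inner : mid + n_inner + 1, :]
    disp = inner - inner[0]                      # displacement from t0 = frame 0
    return (disp ** 2).sum(axis=-1).mean(axis=(1, 2))

def msd_com(traj):
    """g3(t): mean-square displacement of the chain centres of mass
    (equal bead masses assumed)."""
    com = traj.mean(axis=2)                      # (n_frames, n_chains, 3)
    disp = com - com[0]
    return (disp ** 2).sum(axis=-1).mean(axis=1)
```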
From the long-time behavior of the mean-square displacement of the chain centers of mass, g_3(t), we compute the diffusion coefficient and the longest relaxation (disengagement) time, τ_d, as functions of molecular weight. The results are presented in Figure <ref>. As expected, D is a monotonically decreasing function of M (or N), with a power-law exponent of -2 (tube model) or -2.3 (experimental observations<cit.>) for well-entangled melts. Our slip-spring simulations recover the aforementioned scaling behavior of the diffusion coefficient, and their predictions lie in the correct order of magnitude, cohering with the results of the detailed atomistic simulations of Doxastakis et al.<cit.> In the inset to Figure <ref>, the molecular weight dependence of the disengagement time, τ_d, is presented. For large M (or N), the longest relaxation time should grow as N³ (as predicted by the tube model) or with slightly higher exponents (around 3.5<cit.>). Our results fit well to a scaling law with exponent 3.3, which is reasonable. Experimental results at a significantly lower temperature<cit.> are also included for comparison; the exponent of the experimental measurements is slightly larger (around 3.5) for PI at a temperature of 300 K.

§.§ Rheology

Linear rheological properties can be characterized through the shear relaxation modulus, G(t) = τ_xy(t)/γ, with γ being a small shear deformation and x, y two orthogonal axes. In computer simulations, the most convenient way of evaluating G(t) is to use the fluctuation-dissipation theorem:<cit.>

G(t) = V/(k_B T) ⟨τ_αβ(t_0 + t) τ_αβ(t_0)⟩

where α and β stand for any two orthogonal directions. One can show that the stress relaxation after any small deformation is proportional to G(t); thus G(t) fully characterizes the linear rheology of a polymeric system at given external parameters. However, the stress autocorrelation function (acf) is notoriously difficult to calculate due to huge fluctuations at long times. In order to improve the accuracy of our calculations, we average ⟨τ_αβ(t_0 + t) τ_αβ(t_0)⟩ over all possible ways of selecting a pair of perpendicular axes α and β.<cit.> We compute the time correlation functions in eq <ref> using the multiple-tau correlator algorithm of Ramírez et al.<cit.> (a simplified, single-pass variant is sketched at the end of this discussion).

Let us first consider Figure <ref>, which displays the time evolution of the shear relaxation modulus, as defined in eq <ref>, for PI melts of different chain lengths. The initial value of the modulus is ρ k_B T, with ρ being the mean density of the polymer melt (slightly higher for longer chains). The relaxation of the shortest chains considered (21 kg/mol) is typical of unentangled polymer melts and is close to the shear relaxation modulus predicted by the Rouse model. In particular, the intermediate-time behavior of the stress acf is consistent with the Rouse model scaling, G(t) ∼ t^{-1/2}, while the long-time decay is exponential, with a time constant characterizing the longest stress relaxation time in the system. It is clear that the longest relaxation time from the stress autocorrelation function follows the same power-law scaling with N as the end-to-end vector relaxation time. Upon increasing the chain length, a plateau starts to appear, indicating that the viscoelastic character of polymer melts is captured by our model. However, we should note the log-log scale of Figure <ref>, which implies that the rubbery plateau still involves significant relaxation, even for M = 120 kg/mol.
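For illustration, a naive single-pass estimator of eq <ref> is sketched below; it averages only over the three off-diagonal stress components rather than over all rotations of the axis pair, and omits the multiple-tau compression used in production runs.

```python
import numpy as np

def shear_relaxation_modulus(stress, V, kT):
    """G(t) = V/(kB*T) <tau_ab(t0+t) tau_ab(t0)>, estimated from a time
    series of instantaneous stress tensors of shape (n_frames, 3, 3),
    averaging over the off-diagonal pairs xy, yz, xz."""
    pairs = [(0, 1), (1, 2), (0, 2)]
    n = stress.shape[0]
    acf = np.zeros(n)
    for a, b in pairs:
        s = stress[:, a, b]
        c = np.correlate(s, s, mode="full")[n - 1:]   # sum over time origins
        acf += c / np.arange(n, 0, -1)                # normalize by counts
    return V / kT * acf / len(pairs)
```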
The shear viscosity, η, can be computed from the stress relaxation function through a Green-Kubo relation:

η = ∫_0^∞ G(t) dt

In the inset to Figure <ref> we present the viscosity as a function of molecular weight. The viscosity, η, is a monotonically increasing function of N, exhibiting a scaling behavior N^{3.4}, as experimental data suggest.<cit.> It should be noted that the tube model predicts an N³ dependence of the viscosity. Our simulation results cohere with the N^{3.4} scaling. Moreover, there is very good quantitative agreement with the available experimental data for the same system; the experimental results were obtained at a slightly lower temperature, which is the reason for the higher viscosity values at the same molecular weight. It is very promising that our methodology, parameterized in a bottom-up approach, captures the correct values of viscosity without any adjustment or rescaling of the shear relaxation function.

The precise scaling of the viscosity of polymer melts is still elusive and remains an open question for theoretical, simulation and experimental studies.<cit.> The results presented by Auhl et al.<cit.> imply an M^{3.6} scaling law for polyisoprene melts, in contrast to the established M^{3.4} scaling of the early experimental studies. A slip-spring simulation scheme can capture modes of entangled motion beyond pure reptation (M³ scaling). In linear response, contour length fluctuation (CLF), the Brownian fluctuation of the length of the entanglement path through the melt, modifies early-time relaxation. Similarly, the process of constraint release (CR), by which the reptation of surrounding chains endows the tube constraints on a probe chain with finite lifetimes, contributes to the conformational relaxation of chains at longer times. Both CLF and CR are present in slip-spring models, and their relative importance contributes to the quantitative prediction of linear rheology. In our case, the M^{3.4} scaling law can be considered the outcome of a balance between all relaxation mechanisms involved in the rheology of linear melts.<cit.> Thus, compared to the recent experimental results, we may reasonably underestimate the exponent (the slope in the log-log plot of η versus N), given that the slope in the log-log plot of D versus N is reasonable. Furthermore, our simulation results are in good agreement with previous slip-spring simulations of other research groups,<cit.> which exhibit the same or even lower scaling exponents.

In experimental practice, the cosine and sine Fourier transforms of G(t) can be measured:

G′(ω) = ω ∫_0^∞ sin(ω t) G(t) dt
G″(ω) = ω ∫_0^∞ cos(ω t) G(t) dt

by applying a small oscillatory deformation and recording the in-phase and out-of-phase responses, G′(ω) and G″(ω), which are called the storage and loss moduli, respectively. A straightforward way to obtain the Fourier transform of G(t) is to fit it with a sum of exponentials:

G(t) = ∑_{i=1}^{m} G_i exp(-t/τ_i)

where G_i and τ_i are the amplitude and the relaxation time of mode number i (often called the Maxwell modes); the moduli then follow in closed form, as sketched below.
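With a Maxwell-mode fit in hand, the sine and cosine transforms above reduce to the standard closed-form expressions, which can be evaluated directly:

```python
import numpy as np

def maxwell_moduli(omega, G_i, tau_i):
    """Storage and loss moduli from a Maxwell-mode fit of G(t).
    For G(t) = sum_i G_i exp(-t/tau_i), the one-sided Fourier transforms give
      G'(w)  = sum_i G_i (w tau_i)^2 / (1 + (w tau_i)^2)
      G''(w) = sum_i G_i  (w tau_i)  / (1 + (w tau_i)^2)."""
    wt = np.outer(omega, tau_i)                     # (n_freq, n_modes)
    Gp  = (np.asarray(G_i) * wt**2 / (1 + wt**2)).sum(axis=1)
    Gpp = (np.asarray(G_i) * wt    / (1 + wt**2)).sum(axis=1)
    return Gp, Gpp
```

As a by-product of the same fit, the Green-Kubo viscosity is simply η = ∑_i G_i τ_i.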
Our results for the storage and loss moduli of the PI melts under study are presented in Figure <ref>. The forms of the G′(ω) and G″(ω) curves for PI are typical of terminal and plateau responses for monodisperse linear polymers in the entangled region.<cit.> For short chains of M = 21 kg/mol, relaxation is rather fast, the longest relaxation time being on the order of 10^-4 s. For longer chains, several important features arise. At low frequencies, G′ ∼ ω² and G″ ∼ ω, providing values of the zero-shear viscosity, η, and of the steady-state recoverable compliance, J_e^0:<cit.>

η = lim_{ω→0} G″(ω)/ω
J_e^0 = (1/η²) lim_{ω→0} G′(ω)/ω²

Moving to higher frequencies, G″(ω) passes through a shallow maximum, G″_max, at ω_max. The terminal loss peak becomes progressively better defined (especially for 100 and 120 kg/mol), i.e., further separated from the transition response, with increasing chain length. The plateau modulus, G_N^{0,sim}, can be determined by integration over a fully resolved terminal loss peak:<cit.>

G_N^{0,sim} = (2/π) ∫_{-∞}^{∞} G″(ω) d ln ω = 0.39 MPa

which is in excellent agreement with the measured plateau modulus, G_N^0 = 0.42 MPa. The majority of the plateau modulus values reported in the literature have been obtained by this method.<cit.> Even for the longest chains, the loss modulus (open symbols in Figure <ref>) shows that some relaxation is nevertheless present, revealing a power law close to ω^{-1/4}.

It is often observed that the rheological response measured at different temperatures can be reduced to a single master curve at a reference temperature T_0 if one shifts the time (or frequency) appropriately. According to the time-temperature superposition (TTS) principle, the complex relaxation modulus of rheologically simple polymers, defined as G*(ω) = G′(ω) + i G″(ω), measured at different temperatures obeys:

G*(ω, T) = ℬ(T) G*(𝒜(T) ω, T_0)

where T_0 is the reference temperature, and 𝒜(T) and ℬ(T) are the horizontal and vertical shift factors for this particular reference temperature.<cit.> The range of validity of TTS includes only those processes whose rates are proportional to the rate of one particular basic process (e.g., monomer diffusion), which in turn depends on temperature. In polymer melts, the shift factors are usually given by empirical equations:

log_10 𝒜(T) = C_1 (T - T_0 + C_2/M)/(T - T_∞ + C_2/M)
ℬ(T) = ρ(T) T/[ρ(T_0) T_0]

with the former being the Williams-Landel-Ferry (WLF)<cit.> equation (with two material constants, C_1 and C_2), and the latter incorporating the dependence of the polymer density on temperature. In our case, we use the experimentally derived curves of Auhl et al.,<cit.> horizontally and vertically shifted to our temperature of interest (400 K). The values of the relevant parameters are T_0 = 25 °C, T_∞ = -114.03 °C, C_1 = 4.986 and C_2 = 14.65, with M measured in kg/mol.<cit.> The relation for the temperature dependence of the PI density was obtained from the Sanchez-Lacombe equation of state used for describing the nonbonded interactions of our coarse-grained model.

In Figure <ref>, the predicted moduli of the polyisoprene sample of 100 kg/mol are compared against the experimental measurements of Auhl et al.,<cit.> shifted to the temperature of the simulations following the TTS principle described in the previous paragraph. Despite the fact that the experimental measurements were conducted for a slightly lower molecular weight, M = 94.9 kg/mol, the predicted moduli are in quantitative agreement with the experimental observations. The minor discrepancies observed can be attributed to several reasons.
In the case of our simulations, the polyisoprene melts are strictly monodisperse, which does not hold for the experimental studies, where a finite polydispersity exists. The plateau value obtained from the simulation curve around 1/τ_e is the same as the one measured by rheological experiments, as is the shape of the whole storage modulus curve in the frequency range of 10² to 10⁶ s^-1. The use of the kMC scheme for the slip-spring hopping introduces a relaxation time scale on the order of 10^-6 s, which accounts for the different slope of the simulated storage modulus curve with respect to the experimental one in the range ω > 10⁶ s^-1. As far as the loss modulus is concerned, discrepancies start to appear at high frequencies (close to the shortest relaxation time, τ_0 ≃ ζ b_K²/(3π² k_B T)), which correspond to glassy behavior that cannot be captured by a coarse-grained model omitting truly microscopic, atomistic processes. Experimental loss modulus curves increase monotonically with a power law ω^{1/2} at high frequencies, which is not the case for our soft, coarse-grained model.

§ SUMMARY AND CONCLUSIONS

The first steps towards a consistent coarse-grained model capable of reproducing the rheological properties of polymer melts have been presented. The methodology and the corresponding computer code have been developed for the case of a pure polymer melt. Chains are modeled as sequences of beads, each bead encompassing approximately 10 Kuhn segments. The Helmholtz energy of the system is written as a sum of three contributions: entropic springs, representing the entropic elasticity of chain strands between beads; slip-springs, representing entanglements; and nonbonded interactions. The Helmholtz energy of the nonbonded interactions is estimated by invoking an arbitrary equation of state and is computed as a functional of the local density by passing an orthogonal grid through the simulation box. Slip-springs are envisioned as connecting nodes on different polymer chains or on the same chain. Equations for a stochastic description of the dynamics are derived from the coarse-grained Helmholtz energy function. All beads execute Brownian motion in the high-friction limit. The ends of the slip-springs execute thermally activated hops between adjacent beads along the chain backbones, these hops being tracked by kMC simulation. In addition, creation/destruction processes are included for the slip-springs: a slip-spring is destroyed when one of its ends slips past the free end of a polymer chain, and a new slip-spring is formed when a chain end captures a bead of another chain lying within a certain radius from it, according to a prescribed rate constant. The parameters needed in the model are derived from experimental volumetric and melt viscosity data or from atomistic molecular dynamics simulations. Initial configurations for the network are obtained from MC simulations of linear melts. Tests of the simulation code on molten linear (non-crosslinked) cis-1,4 polyisoprene of high molar mass at equilibrium have given satisfactory results for the mean square displacement of beads and for the shear relaxation modulus. By following the rigorous procedures described above, a consistent methodology has been set in place.
To implement this methodology we have developed a general-purpose in-house software program to simulate polymer melts of arbitrary geometry. Predicting the rheology of binary blends of chains of different geometries poses formidable challenges for tube and slip-link models.<cit.> Nonlinear rheology can also be studied with the formulation developed in this work, and strain-rate experiments can be conducted.<cit.> The methodology has already been implemented to study tensile deformations of crosslinked cis-1,4 polyisoprene, with very encouraging results.<cit.> Moreover, it can be readily extended to systems containing solid surfaces, particles or cavities.<cit.> Therefore, when all relevant relaxation processes are correctly treated, the viscoelastic response of the melt should be fully obtainable from the simulation.

§ APPENDIX A: DISCRETE NONBONDED INTERACTIONS SCHEME FOR SLIP-SPRING SIMULATIONS

The simplest option for relating the positions and masses of the beads to ρ_cell is to envision each bead i as a cube containing mass m_i, of edge length h_i, centered at position 𝐫_i = (x_i, y_i, z_i), as shown in Figure <ref>. Our scheme was designed to permit fast analytical calculation of the nonbonded forces. It ensures continuity of the nonbonded forces as functions of the bead positions, but not of the derivatives of these forces (which are not needed in our simulation setup). It has proven satisfactory for the Brownian dynamics simulations conducted in this work and can be readily extended to systems containing solid surfaces, particles or cavities. Other schemes for determining the instantaneous density field from the particle positions,<cit.> including gridless ones,<cit.> seem not to be straightforwardly usable in the aforementioned cases.

The cell dimensions along the x, y and z directions will be denoted as l_x, l_y and l_z, respectively. We will focus on a cell extending between x_cell - l_x and x_cell along the x-direction, between y_cell - l_y and y_cell along the y-direction, and between z_cell - l_z and z_cell along the z-direction. In the regular grid considered, if (0,0,0) is taken as one of the grid points, then x_cell, y_cell and z_cell are integer multiples of l_x, l_y and l_z, respectively. In the following, we assume that h_i < min(l_x, l_y, l_z). The mass contributed by bead i to the cell is:

m_i,cell = m_i V_{cube i ∩ cell}/V_{cube i}    (A1)

with V_{cube i ∩ cell} being the volume of the intersection of cube i, associated with bead i, and the considered cell, while V_{cube i} = h_i³ is the volume of cube i. Under the condition h_i < min(l_x, l_y, l_z), V_{cube i ∩ cell} is obtained as:

V_{cube i ∩ cell} = max{[min(x_i + h_i/2, x_cell) - max(x_i - h_i/2, x_cell - l_x)], 0}
× max{[min(y_i + h_i/2, y_cell) - max(y_i - h_i/2, y_cell - l_y)], 0}
× max{[min(z_i + h_i/2, z_cell) - max(z_i - h_i/2, z_cell - l_z)], 0}    (A2)

As defined by eq <ref>, V_{cube i ∩ cell} is a piecewise linear function of the bead coordinates. Clearly, if cube i lies entirely within the cell, V_{cube i ∩ cell} = V_{cube i} and, consequently, m_i,cell = m_i. If, however, the borders of cube i intersect the borders of the considered cell, then bead i contributes a mass m_i,cell < m_i to the cell. The total mass contributed by bead i to all cells in which it participates is always m_i. The density ρ_cell in the considered cell is estimated as:

ρ_cell = (1/V_cell^acc) ∑_i m_i,cell    (A3)

Clearly, only beads i whose cubes have a nonzero overlap with the considered cell contribute to the summation of eq <ref>; a sketch of this assignment is given below.
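A compact sketch of eqs (A1)-(A3) for an orthorhombic grid follows (no periodic wrapping; the function and variable names are ours):

```python
def overlap_1d(x_bead, h, x_hi, l):
    """Length of overlap of [x_bead - h/2, x_bead + h/2] with the cell slab
    [x_hi - l, x_hi]; one Cartesian factor of eq (A2)."""
    return max(min(x_bead + h / 2, x_hi) - max(x_bead - h / 2, x_hi - l), 0.0)

def mass_in_cell(r, h, m, cell_hi, cell_len):
    """m_i,cell = m_i * V_overlap / h^3 for a bead at r = (x, y, z), eqs (A1)-(A2);
    cell_hi = (x_cell, y_cell, z_cell), cell_len = (l_x, l_y, l_z)."""
    V = 1.0
    for x, x_hi, l in zip(r, cell_hi, cell_len):
        V *= overlap_1d(x, h, x_hi, l)
    return m * V / h ** 3

def cell_density(beads, cell_hi, cell_len, V_acc):
    """rho_cell of eq (A3): sum the contributions of nearby beads, each given
    as a tuple (r, h, m), and divide by the accessible cell volume."""
    return sum(mass_in_cell(r, h, m, cell_hi, cell_len) for r, h, m in beads) / V_acc
```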
The position vectors 𝐫_i of the contributing beads necessarily lie within the considered cell or its immediate neighbors. The precise conditions for cube i to have common points with the considered cell are:

x_cell - l_x < x_i + h_i/2 < x_cell + h_i
y_cell - l_y < y_i + h_i/2 < y_cell + h_i
z_cell - l_z < z_i + h_i/2 < z_cell + h_i    (A4)

According to the above approach, the force on bead i due to nonbonded interactions is:

𝐅_i = -∇_{𝐫_i} A_nb = -∑_{cells ∩ cube i} V_cell^acc [∂a_vol(ρ, T)/∂ρ]_{ρ=ρ_cell} ∇_{𝐫_i} ρ_cell    (A5)

From eqs <ref>, <ref> and <ref> one can obtain the components of ∇_{𝐫_i} ρ_cell. Along the x direction:

∂ρ_cell/∂x_i =
0, for x_i ≤ x_cell - l_x - h_i/2
+(m_i/V_cell^acc)(V_{cube i ∩ cell}/V_{cube i}) 1/(x_i + h_i/2 - x_cell + l_x), for x_cell - l_x - h_i/2 < x_i < x_cell - l_x + h_i/2
0, for x_cell - l_x + h_i/2 ≤ x_i ≤ x_cell - h_i/2
-(m_i/V_cell^acc)(V_{cube i ∩ cell}/V_{cube i}) 1/(x_cell - x_i + h_i/2), for x_cell - h_i/2 < x_i < x_cell + h_i/2
0, for x_i ≥ x_cell + h_i/2    (A6)

and similarly for ∂ρ_cell/∂y_i and ∂ρ_cell/∂z_i. The derivatives are bounded but not continuous. To make them continuous, an extension of the “smearing scheme” for the beads, using a continuous density distribution for the contribution of each bead, would be required. Two beads whose cubes lie entirely within a cell experience the same (zero) nonbonded force; this is not true, however, of beads whose cubes intersect cell borders.

The edge length of the cube assigned to bead i, h_i, can be set approximately equal to the root mean square end-to-end distance of the strands assigned to a bead:

h_i = [(m_i/m_K) b_K²]^{1/2}    (A7)

where m_K and b_K are the mass and the length of a Kuhn segment of the polymer under consideration.

Figure <ref> shows the variation of the total and intermolecular pair distribution functions with the distance between polymeric beads. On very short length scales, the total distribution function diverges as 1/r because of the intramolecular contribution dictated by the Gaussian chain model. The spike formed is an artificial effect, caused by the fact that chain self-intersection is not prevented in the Gaussian chains used in our calculations. If an atomistic description were used, this spike would be replaced by a series of intramolecular peaks reflecting the bonded geometry and conformational preferences of polyisoprene molecules. In the regime of small distances, segments of other molecules are expelled from the volume of a reference chain, and this gives rise to a slight “correlation hole” effect in the intermolecular g(r). At large length scales, the intermolecular distribution function approaches unity, as the intramolecular distribution function approaches zero. The use of the nonbonded scheme employed in this work leads to a uniform density profile throughout the volume of the simulation box (the nonbonded discretization length employed for Figure <ref> was 5 nm).

In a dense polymer melt, density fluctuations are strongly suppressed on large length scales, resulting in the screening of excluded volume interactions among the polymer beads. The correlations of the local segment density in homopolymer melts are characterized by the structure factor S_tot(q), i.e., the Fourier transform with respect to the distance r - r′ of the density correlation function ⟨ρ(𝐫)ρ(𝐫′)⟩, where the brackets denote averaging in the canonical ensemble. Since our simulations deal with explicit particle coordinates, S_tot(q) can be calculated according to:

S_tot(q) = (1/nN) ⟨|∑_{j=1}^{nN} e^{-i𝐪·𝐫_j}|²⟩    (A8)

where 𝐫_j stands for the position vector of bead j; a direct implementation is sketched below.
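Eq (A8) translates directly into code. The sketch below evaluates it for a set of q-vectors commensurate with a cubic box, from a single configuration (a production analysis would also average over frames):

```python
import numpy as np

def structure_factor(r, box_L, n_max=8):
    """S_tot(q) = |sum_j exp(-i q . r_j)|^2 / (n*N) for q = 2*pi*(nx,ny,nz)/L.
    r has shape (n_beads, 3); returns (|q| values, S(q)) sorted by |q|."""
    n_beads = r.shape[0]
    ns = np.array([(i, j, k)
                   for i in range(n_max + 1)
                   for j in range(n_max + 1)
                   for k in range(n_max + 1)
                   if (i, j, k) != (0, 0, 0)])
    q = 2.0 * np.pi * ns / box_L                  # (n_q, 3)
    phases = np.exp(-1j * (r @ q.T))              # (n_beads, n_q)
    S = np.abs(phases.sum(axis=0)) ** 2 / n_beads
    qn = np.linalg.norm(q, axis=1)
    order = np.argsort(qn)
    return qn[order], S[order]
```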
The limiting value of the total structure factor at large length scales, q → 0, is directly connected to the compressibility of the system:

lim_{|𝐪|→0} S_tot(𝐪) = k_B T ρ_0 κ_T    (A9)

where ρ_0 is the mean segment density and κ_T the isothermal compressibility. By calculating S_tot(q) for a range of q-vectors, the compressibility of the system can be estimated as a function of the observation length scale. In the inset to Figure <ref>, the compressibility of a polyisoprene system is plotted as a function of the magnitude of the scattering vector 𝐪 used for the calculation of the total structure factor. Coarse-grained simulations with soft interactions are performed with potential expressions and parameters which yield more compressible systems than their atomistically simulated counterparts; this applies to our simulations, too. It is evident that the compressibility of the system deviates from the macroscopic one (∼10^-4 bar^-1) as q increases (i.e., as one moves to smaller length scales). However, at large length scales the compressibility values lie in the range expected for experimental systems.

§ APPENDIX B: EQUATION OF STATE CONSIDERATIONS

One of the most widely used equations of state (EoS) for polymer fluids is the one derived by Sanchez and Lacombe.<cit.> These authors employed a lattice formulation, wherein the polymer chains occupy discrete lattice sites, while there also exist vacant lattice sites (holes). The Gibbs energy based on the Sanchez-Lacombe equation of state can be expressed in dimensionless variables as:

G̃ ≡ G/(n r ε*) = -ρ̃ + p̃ṽ + T̃[(ṽ - 1) ln(1 - ρ̃) + (1/r) ln(ρ̃/w)]    (B1)

where T̃, p̃, ṽ and ρ̃ are the reduced temperature, pressure, volume and density. The parameter w is connected to the number of different configurations available to a system of n r-mers; it will be examined in detail later. The Sanchez-Lacombe parameters are presented in Table B1. The corresponding equation of state can be extracted by minimizing G̃ with respect to ṽ:

(∂G̃/∂ṽ)_{T̃,p̃} = 0    (B2)

which yields the equation of state of the system:

ρ̃² + p̃ + T̃[ln(1 - ρ̃) + (1 - 1/r)ρ̃] = 0    (B3)

It should be noted that, in the isothermal-isobaric ensemble, p̃ and T̃ are the independent variables, while ρ̃ is the dependent one. Therefore, eq <ref> defines the value of ρ̃ at given T̃, p̃ that minimizes the free energy. Equations <ref> and <ref> contain the complete thermodynamic description of the model fluid. Since the Sanchez-Lacombe theory is not a corresponding-states theory, the state parameters are predicted to vary with chain length. It has been shown that thermal expansion coefficients, compressibilities and free volumes are predicted by the Sanchez-Lacombe EoS to decrease with increasing degree of polymerization, in agreement with experiment. The decrease of the free volume with increasing molecular weight is supported by the observation that the glass temperature increases with increasing molecular weight.<cit.>

If we wish to obtain some physical insight into the parameter w, we can compare the Sanchez-Lacombe EoS with the ideal gas EoS. The former should reduce to the latter in the limit of zero molecular density, ρ_mol = n/V:

lim_{ρ_mol→0+} A^SL = lim_{ρ_mol→0+} A^id.g.    (B4)

where A^SL and A^id.g. denote the Helmholtz energies obtained from the Sanchez-Lacombe and the ideal gas EoS, respectively.
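Before proceeding with this limit, we note that eq (B3) is easily solved numerically for the reduced density at given reduced temperature and pressure. A sketch using bisection follows; the bracket covering (0, 1) and the illustrative chain length r = 1000 are our choices, not fitted values.

```python
import math
from scipy.optimize import brentq

def sl_reduced_density(T_red, p_red, r):
    """Solve the Sanchez-Lacombe EoS (eq B3) for the reduced density rho~:
    rho^2 + p + T*[ln(1 - rho) + (1 - 1/r)*rho] = 0."""
    f = lambda rho: (rho ** 2 + p_red
                     + T_red * (math.log(1.0 - rho) + (1.0 - 1.0 / r) * rho))
    # f > 0 near rho = 0 (f ~ p) and f -> -inf as rho -> 1, so the bracket
    # below contains the liquid-branch root at melt-like conditions.
    return brentq(f, 1e-12, 1.0 - 1e-12)

# PI characteristic parameters quoted above: T* = 631.2 K, p* = 383.0 MPa
print(sl_reduced_density(400.0 / 631.2, 101325.0 / 383.0e6, r=1000))
```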
The Helmholtz energy can be obtained from eq <ref>:

A^SL(ρ, T) = G - pV = G - p̃ n r ε*/ρ̃ = -n r k_B T* ρ̃ + n r k_B T [(ṽ - 1) ln(1 - ρ̃) + (1/r) ln(ρ̃/w)]    (B5)

hence the limit appearing on the left-hand side of eq <ref> reduces to:

lim_{ρ_mol→0+} A^SL = n k_B T lim_{ρ_mol→0+} ln[ρ_mol M/(N_A ρ* w e^r)]    (B6)

On the other hand, the Helmholtz energy of an ideal gas can be written as:

A^id.g.(ρ, T) = G^id.g.(ρ, T) - p^id.g. V = n μ^id.g.(ρ, T) - n k_B T = n k_B T {ln[ρ N_A ∏_i Λ_i³/(M 𝒵_0)] - 1}    (B7)

with Λ_i being the thermal wavelength of atom i of a molecule of the fluid and 𝒵_0 the configurational integral of a single molecule with respect to all but three translational degrees of freedom. The molar mass of the molecule is denoted by M. At this point we can introduce the density of molecules, ρ_mol, into eq <ref> and consider its limit as ρ_mol → 0+:

lim_{ρ_mol→0+} A^id.g. = n k_B T lim_{ρ_mol→0+} ln[ρ_mol ∏_i Λ_i³/(𝒵_0 e)]    (B8)

Finally, setting the right-hand side of eq <ref> equal to the right-hand side of eq <ref>, we obtain:

lim_{ρ_mol→0+} ln[ρ_mol M/(N_A ρ* w e^r)] = lim_{ρ_mol→0+} ln[ρ_mol ∏_i Λ_i³/(𝒵_0 e)]    (B9)

which results in:

1/w = [N_A ρ* ∏_i Λ_i³/(M 𝒵_0)] e^{r-1}    (B10)

which connects the parameter w with the thermal wavelengths of the molecules and their intramolecular configurational integral, 𝒵_0. The molar mass, M, enters eq <ref> because ρ* refers to the characteristic mass density of the equation of state.

§ APPENDIX C: DERIVATION OF THE MODEL STRESS TENSOR

We consider the free energy per unit mass, A(T, V)/m, of the system. The thermodynamic stress tensor, τ, is:<cit.>

τ = ρ 𝔽 · [∂(A/m)/∂𝔽]^T    (C1)

where A is the Helmholtz energy, m is the total mass in the system and 𝔽 denotes the deformation gradient tensor. The same equation can be written in component form as:

τ_αβ = ρ ∑_{γ=1}^{3} 𝔽_αγ ∂(A/m)/∂𝔽_βγ    (C2)

If we assume that our simulation box does not exchange mass with its environment, the stress tensor can be further simplified:

τ_αβ = (1/V_ℛ) ∑_{γ=1}^{3} 𝔽_αγ ∂A/∂𝔽_βγ    (C3)

where V_ℛ is the volume of the simulation box in the reference state ℛ. In general the prefactor should be 1/V, with V being the current volume of the system; the density, ρ, generally changes with 𝔽, and V = V_ℛ det𝔽.

In our model, the free energy per unit mass, A({𝐫_ij}, {ρ_cell}, T)/m, is a function of the separation vectors between all connected beads, 𝐫_ij, the local densities, ρ_cell, and the temperature, T. The thermodynamic stress tensor is then given by eq <ref>:

τ = ρ_ℛ 𝔽 · [∂(A({𝐫_ij}, {ρ_cell}, T)/m)/∂𝔽]^T    (C4)

where A is the total Helmholtz energy (eq <ref>) and m is the total mass in the system. It should be noted that the density ρ_ℛ refers to the reference configuration ℛ of the system. 𝔽 denotes the deformation gradient tensor, defined through the mapping of an infinitesimal vector 𝐱 of the initial configuration onto the infinitesimal vector 𝐱′ after the deformation.
Equation <ref> can be written in component form as:

τ_αβ = ρ_ℛ ∑_{γ=1}^{3} 𝔽_αγ ∂[A({𝐫_ij}, {ρ_cell}, T)/m]/∂𝔽_βγ    (C5)

Invoking the functional dependence of our Helmholtz energy (eq <ref>):

τ_αβ = (ρ_ℛ/m) ∑_{γ=1}^{3} 𝔽_αγ [∂A_b({r_ij}, T)/∂𝔽_βγ + ∂A_nb({ρ_cell}, T)/∂𝔽_βγ] = τ_αβ^b({r_ij}, T) + τ_αβ^nb({ρ_cell}, T)    (C6)

where

τ_αβ^b({r_ij}, T) = (1/V_ℛ) ∑_{γ=1}^{3} 𝔽_αγ {∂/∂𝔽_βγ [∑_{(i,j)} A_pair(r_ij, T)]}    (C7)

is the bonded contribution to the stress tensor, and

τ_αβ^nb({ρ_cell}, T) = (1/V_ℛ) ∑_{γ=1}^{3} 𝔽_αγ {∂/∂𝔽_βγ [∑_{k∈cells} V_cell,k^acc a_vol(ρ_cell,k, T)]}    (C8)

is the nonbonded contribution to the stress tensor.

We start by calculating the bonded contribution, which depends only on the distances between connected pairs of beads, r_ij, and the temperature, T:

τ_αβ^b({r_ij}, T) = (1/V_ℛ) ∑_{γ=1}^{3} 𝔽_αγ ∑_{(i,j)} ∂A_pair(r_ij, T)/∂𝔽_βγ
= (1/V_ℛ) ∑_{(i,j)} [∂A_pair(r_ij, T)/∂r_ij] ∑_{γ=1}^{3} (∂r_ij/∂r_ij,β)(∂r_ij,β/∂𝔽_βγ) 𝔽_αγ
= (1/V_ℛ) ∑_{(i,j)} [∂A_pair(r_ij, T)/∂r_ij] (r_ij,β/r_ij) ∑_{γ=1}^{3} (∂r_ij,β/∂𝔽_βγ) 𝔽_αγ    (C9)

where we have made use of the fact that the partial derivative of the Euclidean norm of a vector 𝐚 = (a_1, a_2, ..., a_n) with respect to one of its components, a_j, is:

∂‖𝐚‖/∂a_j = ∂/∂a_j (∑_{i=1}^{n} a_i²)^{1/2} = (1/(2‖𝐚‖)) ∑_{i=1}^{n} ∂(a_i²)/∂a_j = a_j/‖𝐚‖    (C10)

It should be noted that eq <ref> is valid for any kind of deformation (both affine and nonaffine). We envision that, at a certain time, a homogeneous deformation is applied on the polymer that displaces bond ends affinely, in the sense that their positions are changed in the same way as material points in a macroscopic continuum description: straight parallel lines in the reference configuration map onto straight parallel lines in the deformed configuration. Let 𝐫_i and 𝐫_j be the positions of the start and the end of a bonded pair of beads before the deformation, and 𝐫′_i and 𝐫′_j the positions of the same beads after the deformation; then:

𝐫′_i = 𝔽 𝐫_i    (C11)
𝐫′_j = 𝔽 𝐫_j    (C12)

By subtracting eq <ref> from eq <ref>, we get:

𝐫′_ij = 𝔽 𝐫_ij    (C13)

Thus, one component of the deformed vector is connected to the components of the undeformed one through:

r′_ij,β = ∑_{γ=1}^{3} 𝔽_βγ r_ij,γ    (C14)

and the derivative appearing in eq <ref> can now be calculated:

∂r_ij,β/∂𝔽_βγ = r_ij,γ    (C15)

Eq <ref> then takes the form:

τ_αβ^b({r_ij}, T) = (1/V_ℛ) ∑_{(i,j)} [∂A_pair(r_ij, T)/∂r_ij] (r_ij,β/r_ij) ∑_{γ=1}^{3} r_ij,γ 𝔽_αγ = (1/V_ℛ) ∑_{(i,j)} [∂A_pair(r_ij, T)/∂r_ij] (r_ij,β r_ij,α/r_ij)    (C16)

where the virial theorem is recovered.<cit.>

We now move to the estimation of the nonbonded contribution to the stress tensor, τ_αβ^nb. We calculate the stress by taking the derivative of the equation-of-state free energy density, a_vol(ρ_cell,k, T), with respect to the deformation gradient tensor, 𝔽. One option would be to consider the set of space-discretization voxels (introduced in Appendix A) as a permanent scaffold into which beads are reassigned upon deformation; in that approach, a tensile deformation of a periodic system would amount to extending the system such that it occupies an additional layer of voxels, which would indeed be infinitesimal for a very large model system. There are practical problems with this, however, so we instead consider our voxels as a means of spatial discretization in integrating the free energy density over three-dimensional space, allowing them to deform affinely following the macroscopically applied deformation.
This may not be too bad if the voxel size exceeds the correlation length of density fluctuations in the polymer. Following our assumptions, the nonbonded contribution to the stress tensor can be cast as:

τ_αβ^nb({ρ_cell}, T) = (1/V_ℛ) ∑_{γ=1}^{3} 𝔽_αγ ∂/∂𝔽_βγ [∑_{k∈cells} V_cell,k^acc a_vol(ρ_cell,k, T)]
= (1/V_ℛ) ∑_{k∈cells} a_vol(ρ_cell,k, T) ∑_{γ=1}^{3} 𝔽_αγ ∂V_cell,k^acc/∂𝔽_βγ + (1/V_ℛ) ∑_{k∈cells} V_cell,k^acc ∑_{γ=1}^{3} 𝔽_αγ [∂a_vol(ρ_cell,k, T)/∂ρ_cell,k][∂ρ_cell,k/∂𝔽_βγ]
= (1/V_ℛ) ∑_{k∈cells} [a_vol(ρ_cell,k, T) - ρ_cell,k ∂a_vol(ρ_cell,k, T)/∂ρ_cell,k] ∑_{γ=1}^{3} 𝔽_αγ ∂V_cell,k^acc/∂𝔽_βγ    (C17)

where, in the last step, we have used the fact that the mass contained in a cell is conserved under the affine deformation of the voxels, so that ∂ρ_cell,k/∂𝔽_βγ = -(ρ_cell,k/V_cell,k^acc) ∂V_cell,k^acc/∂𝔽_βγ. Here a_vol(ρ_cell,k, T) is the nonbonded Helmholtz energy density in cell k. The term in brackets is the negative of the pressure, -p(ρ_cell,k, T), as predicted by the equation of state (eq <ref>) at the given density ρ_cell,k and temperature T. At this point we have to calculate the derivatives expressing the variation of the volume of a cell with respect to an element of the deformation gradient tensor, ∂V_cell,k^acc/∂𝔽_βγ. The determinant of the deformation gradient tensor is the ratio of the volumes (or, inversely, of the densities) of the deformed and initial configurations:

det𝔽 = V′/V_ℛ = V_cell,k^acc′/V_cell,k^acc = ρ/ρ′ = ρ_cell,k/ρ′_cell,k    (C18)

where primes denote the deformed configuration. The derivative of the determinant of 𝔽 with respect to the tensor 𝔽 itself is given by:<cit.>

∂(det𝔽)/∂𝔽 = (det𝔽)(𝔽^{-1})^T    (C19)

Finally, substituting the above terms in eq <ref>, the nonbonded contribution to the stress tensor is:

τ_αβ^nb({ρ_cell}, T) = -(1/V_ℛ) ∑_{k∈cells} p(ρ_cell,k, T) ∑_{γ=1}^{3} 𝔽_αγ V_cell,k^acc (det𝔽)(𝔽^{-1})_γβ = -δ_αβ (1/V_ℛ) ∑_{k∈cells} p(ρ_cell,k, T) V_cell,k^acc    (C20)

where all volumes, V_ℛ and V_cell,k^acc, refer to the undeformed (reference) state of the system. Eq <ref> could be fully anticipated: the contribution of the equation of state to the stress tensor of the system is limited to the diagonal components, and its magnitude is the negative of the weighted average pressure over the cells of the grid, the volumes V_cell,k^acc being the weights multiplying the individual contributions.

§ APPENDIX D: INITIAL ESTIMATION OF THE HOPPING FREQUENCY FACTOR

The mean square displacement of a slip-spring in time Δt is ⟨k_hop⟩ Δt n_Kuhns/bead b², whence the mean square displacement of the chain along its primitive path due to the slip-springs is:

⟨Δr_cm²⟩ = (mean square displacement of slip-springs)/n_ss/chain = ⟨k_hop⟩ Δt n_Kuhns/bead b²/(2N_ss/n) = ⟨k_hop⟩ Δt n_Kuhns/bead b²/[2 (N_ss/N_beads) N] = 2 D_cm,along contour Δt    (D1)

with N_beads and N_ss being the total numbers of beads and slip-springs in the system, respectively, and n the number of chains. The factor of 2 is included because a slip-spring is attached to two chains. N is the chain length measured in number of Kuhn segments.
The chain diffusivity due to slip-spring jumps is:

D_cm,along contour = k_hop n_Kuhns/bead b^2 N_beads / (4 N_ss N)   (D2)

while the corresponding diffusivity according to the Rouse model is:

D_cm,Rouse = k_B T / (N ζ)   (D3)

For the above two to be equal, it must hold that:

k_hop = (4 k_B T / ζ) (N_ss / N_beads) [1 / (n_Kuhns/bead b^2)]   (D4)

The last expression provides an upper limit to the hopping rate, in the case where the diffusion of the polymer chains is based solely on their motion along their primitive paths. Following the discussion of the main text, an estimate of the average hopping rate in the case of Gaussian slip-springs is:

k_hop ≃ 38 ν_hop   (D5)

which, in view of eq (D4), allows us to place an upper limit on ν_hop:

ν_hop < (2/19) (k_B T / ζ) (N_ss / N_beads) [1 / (n_Kuhns/bead b^2)]   (D6)

The exact value of ν_hop to be used during the simulation is the one ensuring conservation of the average number of slip-springs when implementing the fluctuating number of slip-springs scheme (Appendix E). The bound (D6) is evaluated numerically in the short sketch below.
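For orientation, eq (D6) is straightforward to evaluate. The Python sketch below implements the bound exactly as written; all numerical inputs are illustrative placeholders in reduced units, not parameters of the simulations reported here.

```python
# Upper bound on the attempt frequency nu_hop from eq (D6): matching
# slip-spring hopping to Rouse centre-of-mass diffusion along the
# primitive path. All input values below are illustrative placeholders.

def nu_hop_upper_bound(kBT, zeta, N_ss, N_beads, n_kuhns_per_bead, b):
    """Right-hand side of eq (D6)."""
    return (2.0 / 19.0) * (kBT / zeta) * (N_ss / N_beads) / (n_kuhns_per_bead * b**2)

# Example in reduced (Lennard-Jones-like) units -- purely illustrative:
print(nu_hop_upper_bound(kBT=1.0, zeta=25.0, N_ss=500, N_beads=50_000,
                         n_kuhns_per_bead=10, b=1.0))
```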
§ APPENDIX E: MICROSCOPIC REVERSIBILITY IN SLIP-SPRING FORMATION AND DESTRUCTION (FLUCTUATING NUMBER OF SLIP-SPRINGS SCHEME)

Consider a specific configuration bearing a slip-spring connecting an end b_0 of a chain contained in it with a bead a_0 of the same or another chain (which may be either an interior or an end bead). Let us call this configuration a_0b_0, for simplicity. Consider also a configuration which is identical to a_0b_0, the only difference being that the slip-spring connecting a_0 and b_0 is missing. Let us call that configuration a_0b_× (see Figure <ref>). If a_0 is not a chain end, the only way in which one can go from a_0b_0 to a_0b_× is slippage of the slip-spring past the chain end b_0. The probability per unit time of observing the transition is:

𝒬_{a_0b_0 → a_0b_×} = k_hop,a_0b_0 P_{a_0b_0}   (E1)

with P_{a_0b_0} being the a priori probability of state a_0b_0 and k_hop,a_0b_0 being the rate constant for hopping of the slip-spring along the chain in one of the two directions away from chain end b_0 in configuration a_0b_0. In the reverse transition, the only way in which one can go from a_0b_× to a_0b_0 is formation of a slip-spring off of the chain end b_0. The probability per unit time of observing a transition from a_0b_× to a_0b_0 is:

𝒬_{a_0b_× → a_0b_0} = k_form,a_0b_×→a_0b_0 P_{a_0b_×}   (E2)

At equilibrium, we demand detailed balance, 𝒬_{a_0b_0 → a_0b_×} = 𝒬_{a_0b_× → a_0b_0}, which enables us to define k_form in terms of k_hop:

k_form,a_0b_×→a_0b_0 = k_hop,a_0b_0 P_{a_0b_0}/P_{a_0b_×}   (E3)

In the fluctuating number of slip-springs scheme, the probability distribution of configurations is dictated by an ensemble which is canonical with respect to the chains and grand canonical with respect to the slip-springs. Multiple slip-springs connecting the same pair of beads are, in principle, possible and are considered indistinguishable.<cit.> The probabilities P_{a_0b_0} and P_{a_0b_×} satisfy the following proportionalities with the same proportionality constant:

P_{a_0b_0} ∝ exp(−A^{N_ss}_{a_0b_0}/k_B T) z^{N_ss} [1/n_{ss,a_0b_0}!]   (E4)

P_{a_0b_×} ∝ exp(−A^{N_ss−1}/k_B T) z^{N_ss−1} [1/(n_{ss,a_0b_0}−1)!]   (E5)

where N_ss is the total number of slip-springs in the starting configuration a_0b_0, and n_{ss,a_0b_0} is the total number of slip-springs (usually 1) connecting beads a_0 and b_0 in that configuration. A^{N_ss}_{a_0b_0} and A^{N_ss−1} are the total Helmholtz energies of the configuration including and not including the slip-spring, respectively (cf. Figure <ref>). z is the fugacity of the slip-springs, connected to the chemical potential of the slip-springs by:

z = exp(μ/k_B T)   (E6)

From eqs (E3) to (E6) one obtains:

k_form,a_0b_×→a_0b_0 = k_hop,a_0b_0 exp[−(A^{N_ss}_{a_0b_0} − A^{N_ss−1})/k_B T] z/n_{ss,a_0b_0}   (E7)

According to the approach we have introduced in the main text, all hopping moves take place with the same frequency factor, ν_0, surmounting a constant free energy barrier, A^‡_{𝒪→𝒩}, for all slip-springs. This ensures microscopic reversibility during slip-spring moves. In other words, we assume:

k_hop,a_0b_0 = ν_0 exp[−(A^‡_{𝒪→𝒩} − A_{a_0b_0})/k_B T]   (E8)

with A_{a_0b_0} being the free energy stored in the considered slip-spring connecting a_0 and b_0 (not that of the entire configuration). With this approach, the imposition of detailed balance, eq (E3), gives:

k_form,a_0b_×→a_0b_0 = ν_0 exp[−(A^‡_{𝒪→𝒩} − A_{a_0b_0})/k_B T] exp[−(A^{N_ss}_{a_0b_0} − A^{N_ss−1})/k_B T] z/n_{ss,a_0b_0}   (E9)

or

k_form,a_0b_×→a_0b_0 = ν_0 exp(−A^‡_{𝒪→𝒩}/k_B T) z/n_{ss,a_0b_0} = ν_hop z/n_{ss,a_0b_0}   (E10)

since

A^{N_ss}_{a_0b_0} = A^{N_ss−1} + A_{a_0b_0}   (E11)

For the slip-spring chemical potentials considered here, multiple connections between the same two beads are extremely improbable; the quantity n_{ss,a_0b_0} is practically equal to 1 in all cases. Thus, k_form,a_0b_×→a_0b_0 amounts to a configuration-independent constant. The total rate of slip-spring formation off of the chain end b_0 is:

k_form,a_0b_×→ = ν_hop n_cands(a_0b_×) z/n_{ss,a_0b_0}   (E12)

and is proportional to the number of candidate segments, n_cands(a_0b_×), with which end b_0 can be bridged through a new slip-spring. In our scheme, since the candidate bridging sites are selected among all sites within a radius α_attempt from b_0 (Figure <ref>), no slip-springs longer than α_attempt should be allowed to slip past an end in the reverse, slip-spring destruction, move.

The slip-spring creation move can proceed as follows. For each chain end, we maintain a list of all bridgeable beads (belonging either to the same or to different chains) lying within distance α_attempt from it, including other chain ends. For each end b_0 and candidate partner a_0, we attempt construction of a slip-spring between them at a constant rate ν_hop z/n_{ss,a_0b_0}. This could be implemented by considering each chain end and deciding whether a new slip-spring will be constructed off of it with probability:

P_form,a_0b_0 = ν_hop (z/n_{ss,a_0b_0}) n_cands(a_0b_0) Δt_kMC   (E13)

where n_cands(a_0b_0) is the number of bridgeable beads around b_0 and Δt_kMC is the time span between successive implementations of kMC events. For each picked end b_0, we pick one of its n_cands(a_0b_0) bridgeable beads at random and construct a slip-spring. Clearly, Δt_kMC should be small enough for the probability P_form,a_0b_0 to be considerably smaller than 1. This would also make the construction of double slip-spring bridges between chain ends very unlikely.

§ APPENDIX F: MICROSCOPIC REVERSIBILITY IN SLIP-SPRING FORMATION AND DESTRUCTION (CONSTANT NUMBER OF SLIP-SPRINGS SCHEME)

Following the main text and the preceding Appendix E, we consider a specific configuration bearing a slip-spring connecting an end b_0 of a chain contained in it with a bead a_0 of the same or another chain (which may be either an interior or an end bead). We have denoted this configuration with 𝒪 ≡ a_0b_0, for simplicity. Moreover, we also consider a configuration which is identical to 𝒪, the only difference being that the slip-spring connecting a_0 and b_0 has been replaced by a slip-spring connecting beads a_0^' and b_0^'.
Let us call that configuration𝒩≡ a_0^' b_0^'. The process driving us from the former to the latter configuration is by destroying the slip-spring a_0b_0 and subsequently creating the slip-spring a_0^' b_0^'. The coupled destruction/formation process ensures the conservation of the number of slip-springs during thesimulation. In the following we will consider only the case where the one end of the slip-spring is a chain end, for clarity. However, the same procedure applies to the case where both ends of the slip-spring are chain ends. If a_0 is not a chain end, the only way in which one can go from a_0b_0 to a^'_0b^'_0 isslippage of the slip-spring past the chain end b_0. The probability per unit time for observing the transition is:𝒬_a_0b_0 → a^'_0b^'_0 = P_a_0b_0 k_ hop,a_0 b_0 P_a^'_0b^'_0^ sel P_𝒪→𝒩^ acceptF1with P_a_0b_0 = exp-β A^ N_ ss_a_0b_0 being the a prioriprobability of state a_0b_0 and k_ hop, a_0b_0 = ν_ hopexpβ A_a_0b_0 being the rate constant for hopping of the slip-spring along the chain in one of the two directions away from chain end b_0 in configuration a_0b_0. At this point we should recall the distinction between A^ N_ ss_a_0b_0 which isthe total free energy of a configuration whose N_ ss-th slip-spring finds itself connecting beads a_0, and b_0 and A_a_0b_0 that is the free energy stored in the considered slip-spring connecting a_0 and b_0 (not of the entire configuration). Following the constant number of slip-springs scheme introduced in the main text, we should also multiply by theprobability of selecting the pair a^'_0b^'_0 as the new configuration,P_a^'_0b^'_0^ sel, once the slip-spring has passed through the chain end and isconsidered for destruction.The final term in eq <ref>, P_𝒪→𝒩^ accept, is the acceptance probability of the combined destruction/creation move. Equivalently, the probability of destroying a slip-spring anchored at a^'_0 and b^'_0 and subsequently creating a slip-springanchored at a_0 and b_0 is:𝒬_a^'_0b^'_0 → a_0b_0 = P_a^'_0b^'_0 k_ hop,a^'_0 b^'_0P_a_0b_0^ selP_𝒩→𝒪^ acceptF1 In order to create a new slip-spring, we randomly select one of the 2n end beads (with n being the number of chains) available in the system. We then try to create a new slip-spring emanating from the selected chain end, e.g.a_0^'. This is accomplished with probability:P_ sela^'_0 = 1/2nF2After searching for candidates b^' inside a sphere of radius α_ attempt centered at a^'_0,one of them, e.g. 
b_0^', is selected with probability:P_ selb^'_0 = exp-β A_a^'_0b^'_0/W_𝒩F3with W_𝒩 being the corresponding Rosenbluth weight:W_𝒩 = ∑_b^' = 1^n_ cands a^'_0exp-β A_a^'_0 b^'F4with b^' running over all possible candidates lying in the vicinity of a^'_0, n_ cands a^'_0.The overall probability of choosing to create a new slip-spring connecting a^'_0 with b^'_0 is:P_a^'_0b^'_0^ sel = 1/2nexp-β A_a^'_0b^'_0/∑_b^' = 1^n_ cands a^'_0exp-β A_a^'_0b^'F5In a completely analogous way, the probability of creating a slip-spring anchored at a_0 and b_0, once a slip-spring anchored at a^'_0 and b^'_0 is considered for destruction, is:P_a_0b_0^ sel = 1/2nexp-β A_a_0b_0/W_𝒪 =1/2nexp-β A_a_0b_0/∑_b = 1^n_ cands a_0exp-β A_a_0bF6with b running over all anchoring candidates of a_0.At equilibrium, we demand detailed balance:𝒬_a_0b_0 → a^'_0b^'_0 = 𝒬_a^'_0b^'_0 → a_0b_0F7which, after replacing all factors yields:exp-β A^ N_ ss_a_0b_0ν_ hopexpβ A_a_0b_01/2nexp-β A_a^'_0b^'_0/W_𝒩 P_𝒪→𝒩^ accept =exp-β A^N_ ss_a^'_0b^'_0ν_ hopexpβ A_a^'_0b^'_01/2nexp-β A_a_0b_0/W_𝒪P_𝒩→𝒪^ acceptF8At this point, by recalling the definitions of the free energy levels introduced in Figure <ref>, we can replaceA^N_ ss_a_0b_0 = A^N_ ss-1 + A_a_0b_0 and A^N_ ss_a^'_0b^'_0 = A^N_ ss-1 + A_a^'_0b^'_0. Thus, eq <ref> can be simplified to:exp-β A_a^'_0b^'_0/W_𝒩P_𝒪→𝒩^ accept = exp-β A_a_0b_0/W_𝒪P_𝒩→𝒪^ acceptF9 which is the necessary requirement for microscopic reversibility. For the detailed balance condition to hold, eq<ref> implies that a combined destuction/creation move leading from configuration𝒪 = a_0b_0 to a configuration 𝒩 = a^'_0b^'_0, should be acceptedwith probability:P^ accept_𝒪→𝒩 = min1,exp-A_a_0b_0 - A_a^'_0b^'_0/k_ BTW_𝒩/W_𝒪F10which is eq <ref> of the main text.This work was funded by the European Union through the project COMPNANOCOMP under grant number 295355. G.G.V. thanks the Alexander S. Onassis Public Benefit Foundation for a doctoral scholarship. During the course of this research G.M. and D.N.T. have been funded by the Volkswagen Foundationin the context of the project “Mesoscopic Simulations of Viscoelastic Properties of Networks”.These authors also thank the Limmat Foundation for giving them the opportunity to extend the present researchto polymer networks (under the project entitled “Multiscale Simulations of Complex Polymer Systems”). The authors thank Mr. Aris Sgouros (National Technical University of Athens) for his help with improvingparts of the methodology and the associated computer code. Fruitful and stimulating discussions with Prof. Dr. Marcus Müller (Georg-August-Universität Göttingen), and Mr. Ludwig Schneider (Georg-August-Universität Göttingen) are gratefully acknowledged.
N. R. Tanvir nrt3@leicester.ac.uk0000-0003-3274-6336]N. R. Tanvir Department of Physics and Astronomy, University of Leicester, University Road, Leicester, LE1 7RH, UK National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903, USA Department of Astronomy, University of California, 501 Campbell Hall, Berkeley, CA 94720-3411, USA Department of Physics, University of Warwick, Coventry, CV4 7AL, UK Astrophysics Research Institute, Liverpool John Moores University, IC2, Liverpool Science Park, 146 Brownlow Hill,Liverpool, L3 5RF, UK Dark Cosmology Centre, Niels Bohr Institute, Copenhagen University, Juliane Maries Vej 30, 2100 Copenhagen Ø, Denmark Dark Cosmology Centre, Niels Bohr Institute, Copenhagen University, Juliane Maries Vej 30, 2100 Copenhagen Ø, Denmark School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287, USA Astrophysics Science Division, NASA Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771 Joint Space-Science Institute, University of Maryland, College Park, MD 20742, USA Max-Planck-Institut für Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching, Germany Excellence Cluster Universe, Technische Universität München, Boltzmannstrasse 2, D-85748, Garching, Germany Department of Physics and Astronomy, University of Leicester, University Road, Leicester, LE1 7RH, UK Dark Cosmology Centre, Niels Bohr Institute, Copenhagen University, Juliane Maries Vej 30, 2100 Copenhagen Ø, Denmark University of the Virgin Islands, #2 John Brewers Bay, 00802 St Thomas, VI, USA Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA H.H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol, BS8 1TL, UK. Instituto de Astrofísica de Andalucía (IAA-CSIC), Glorieta de la Astronomía s/n, E-18008, Granada, Spain Department of Physics, The George Washington University, Washington, DC 20052, USA INAF, Osservatorio Astronomico di Brera, Via E. Bianchi 46, I-23807 Merate (LC), Italy INAF-Osservatorio Astronomico di Roma, Via Frascati 33, I-00040 Monteporzio Catone, Italy ASI-Science Data Centre, Via del Politecnico snc, I-00133 Rome, Italy Steward Observatory, University of Arizona, 933 N. 
Cherry Avenue, Tucson, AZ 85721 USA Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USAAPC, Astroparticule et Cosmologie, Universite Paris Diderot, CNRS/IN2P3, CEA/Irfu, Observatoire de Paris, Sorbonne Paris Cit, 10, Rue Alice Domon et Lonie Duquet, 75205, Paris Cedex 13, France GEPI, Observatoire de Paris, CNRS, 5 Place Jules Janssen, Meudon F-92195, France Centre for Astrophysics and Cosmology, Science Institute, University of Iceland, Dunhagi 5, 107 Reykjavk, Iceland Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, 2100 Copenhagen, Denmark Centre for Astrophysics and Cosmology, Science Institute, University of Iceland, Dunhagi 5, 107, Reykjavik, Iceland Instituto de Astrofísica de Andalucía (IAA-CSIC), Glorieta de la Astronomía s/n, E-18008, Granada, SpainAnton Pannekoek Institute for Astronomy, University of Amsterdam, Postbus 94249, NL-1090 GE Amsterdam, the Netherlands Thüringer Landessternwarte Tautenburg, Sternwarte 5, 07778 Tautenburg, Germany Max-Planck-Institut für Extraterrestrische Physik, Giessenbachstrasse, 85748, Garching, Germany Max-Planck-Institut für Extraterrestrische Physik, Giessenbachstrasse, 85748, Garching, Germany Dark Cosmology Centre, Niels Bohr Institute, Copenhagen University, Juliane Maries Vej 30, 2100 Copenhagen Ø, Denmark DTU Space, National Space Institute, Technical University of Denmark, Elektrovej 327, DK-2800 Lyngby, Denmark Aryabhatta Research Institute of Observational Sciences (ARIES), Manora Peak, Nainital 263 002, India Thüringer Landessternwarte Tautenburg, Sternwarte 5, 07778 Tautenburg, Germany Anton Pannekoek Institute for Astronomy, University of Amsterdam, Postbus 94249, NL-1090 GE Amsterdam, the Netherlands Instituto de Astrofísica de Andalucía (IAA-CSIC), Glorieta de la Astronomía s/n, E-18008, Granada, Spain Department of Particle Physics and Astrophysics, Weizmann Institute of Science, Rehovot 7610001, IsraelDepartment of Physics, Gibbet Hill Road, Coventry, CV4 7AL, UK Instituto de Astrofísica de Andalucía (IAA-CSIC), Glorieta de la Astronomía s/n, E-18008, Granada, Spain Dark Cosmology Centre, Niels Bohr Institute, Copenhagen University, Juliane Maries Vej 30, 2100 Copenhagen Ø, DenmarkDark Cosmology Centre, Niels Bohr Institute, Copenhagen University, Juliane Maries Vej 30, 2100 Copenhagen Ø, Denmark Anton Pannekoek Institute for Astronomy, University of Amsterdam, Postbus 94249, NL-1090 GE Amsterdam, the Netherlands CAS Key Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012, China Gamma-ray bursts (GRBs) are powerful probes of early stars and galaxies, during and potentially even before the era of reionization. Although the number of GRBs identified at z≳6remains small, they provide a unique window on typical star-forming galaxies at that time, andthus are complementary to deep field observations. We report the identification of the opticaldrop-out afterglow of Swift GRB 120923A in near-infrared Gemini-North imaging, and derive a redshift ofz=7.84_-0.12^+0.06 from VLT/X-shooter spectroscopy.At this redshift the peak 15-150 keV luminosity of the burst was 3.2×10^52 erg s^-1, and in fact the burst was close to the Swift/BAT detection threshold. The X-ray and near-infrared afterglow were also faint, and in this sense it was a rather typical long-duration GRB in terms of rest-frame luminosity. 
We present ground- and space-based follow-upobservations spanning from X-ray to radio, and find that a standard external shockmodelwith a constant-density circumburst environment with density,n≈4×10^-2 cm^-3 gives a good fit to the data.The near-infrared light curve exhibits a sharp break at t ≈ 3.4 days in the observer frame, which if interpreted as being due to a jet corresponds to an opening angle of≈5 degrees.The beaming correctedγ-ray energy is then ≈2×10^50 erg, while the beaming-corrected kinetic energyis lower, ≈10^49 erg, suggesting that GRB 120923A was a comparatively low kinetic energy event. We discuss the implications of this event for our understanding of thehigh-redshift population of GRBsand their identification.§ INTRODUCTIONThe earlygalaxies in the universe, born in the first few hundred million years after the Big Bang,have been the focus of extensive observational searches in recent years. The interest is not only in the nature of these primordial collapsed objects, but also in whether the UV light they emitted was sufficient to have brought about the reionization of theintergalactic medium (IGM) <cit.>. Since the recent Planck results suggest a peak of the reionization era at z∼8–9<cit.>, the focus on galaxies in the rangez=7–10 has become even more intense.Directly detecting galaxies at such redshifts, however, is highly challenging due to theirintrinsic faintness and high luminosity distance; the samples of z>8 galaxies in the HubbleUltra-Deep Field (HUDF) are almost entirely candidates based on photometric redshifts. Furthermore, although Lyman-α emission has now been detected in one galaxy atz=8.7 <cit.>, the rising neutral hydrogen in the IGM itself increasingly absorbs thisemission, which likely contributes to the declining Lyman-α detection rates at z>7 <cit.>. The highest spectroscopic redshifts for galaxies based on the Lyman-α break are z≈7.5, in the case of a galaxy benefiting from significant amplification bygravitational lensing of a comparatively bright galaxy <cit.>, and a surprisingly luminous galaxy recently claimed to be at z≈11.1 <cit.>. Long-duration gamma-ray bursts (GRBs) are the most luminous transients known <cit.>,and are unambiguously linked to the core-collapse of massive stars <cit.>. Thus, they provide analternative tracer of galaxies in the early universe, and indeed are currently the only signature we have of individual stars at such distances. Redshifts can often be measured from afterglow spectroscopy, a method that benefits from their simple underlying power-law continua against which the Lyman-α break imprints an unmistakable signature at high-z.Afterglow spectroscopy also gives information on the metal enrichment in the host galaxies, complementary to measurements of abundances in ancient stars locally <cit.>, and the neutral fraction in the surrounding IGM <cit.>.The hosts of high redshift GRBs provide a census of primordial star-forming galaxies. It is likely that a large fraction of all star formation at z>7 was occurring in small galaxies too faint to be seen in the HUDF<cit.>, and similarly challenging even for the James Webb Space Telescope (JWST) by z∼10. 
Deep searches for high-z GRB hosts can in principle directly constrain this fraction, which is crucial for quantifying the total contribution of galaxies to the reionization budget <cit.>. Of course, fully exploiting GRBs as high-redshift probes also depends on understanding the extent of any evolution of the GRB population as a whole over cosmic time, and whether they preferentially select certain host galaxies or modes of star formation. Recent studies have found evidence that GRBs follow star formation in a fairly unbiased way below a threshold of roughly a third Solar to Solar metallicity <cit.>, which bodes well for using them as tracers of star formation at high redshift. Other studies have found hints of possible evolution towards, for example, shorter rest-frame duration <cit.> and narrower jet opening angle <cit.> with increasing redshift, although samples remain small and selection effects hard to assess. To date, the most distant GRBs found have been GRB 090423, with a spectroscopic redshift of z=8.2 <cit.>, and GRB 090429B, with a photometric redshift of z≈9.4 <cit.>, although the latter result could be as low as z≈7 if there is significant dust obscuration in the host. Here we report the discovery of GRB 120923A at a spectroscopic redshift of z≈7.8, corresponding to an age of the universe of ≈670 Myr, and present our modelling of its afterglow. Throughout the paper, we adopt the following values for the cosmological parameters: H_0=71 km s^-1 Mpc^-1, Ω_M=0.27 and Ω_Λ=0.73. All times are in the observer frame, uncertainties are at the 68% confidence level (1σ) unless otherwise noted, and magnitudes are in the AB system.

§ OBSERVATIONS

§.§ Swift observations

GRB 120923A triggered the Burst Alert Telescope <cit.> on 2012 Sep 23 at 05:16:06 UT <cit.>. The observed burst duration was T_90=27.2±3.0 s, with a fluence of (3.2±0.8)×10^-7 erg cm^-2 <cit.>. The 1-s peak flux was F_15-150 keV=4.1×10^-8 erg cm^-2 s^-1 <cit.>, close to the effective detection threshold of BAT. The time-averaged γ-ray spectrum is well fit by a power law with an exponential cut-off, with a photon index of Γ = -0.29±1.66 and peak energy E_peak=44.4±10.6 keV (both at 90% confidence). Integrating the BAT (15–150 keV) spectral model taking z≈8 (the evidence for a redshift of this order is presented in Section <ref>), and including the effect of statistical uncertainties in all measured quantities using a Monte Carlo analysis, we find that the isotropic-equivalent γ-ray energy is E_γ,iso = (4.8^+6.1_-1.6)×10^52 erg (1–10^4 keV, rest frame).
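The essence of this Monte Carlo calculation can be sketched compactly. The Python/NumPy snippet below draws the BAT spectral parameters and fluence from Gaussians, k-corrects the observed 15–150 keV fluence to the rest-frame 1–10^4 keV band using the cut-off power-law model, and scales by the luminosity distance. The helper names are ours; the conversion of the 90% confidence errors to ≈1σ assumes Gaussianity, and the clipping of the photon index is a purely practical assumption to keep the exponential cut-off well behaved.

```python
import numpy as np

def lum_dist_cm(z, H0=71.0, Om=0.27, n=4096):
    """Luminosity distance in flat LambdaCDM (paper's cosmology), in cm."""
    zs = np.linspace(0.0, z, n)
    Ez = np.sqrt(Om * (1.0 + zs)**3 + (1.0 - Om))
    d_c = (299792.458 / H0) * np.trapz(1.0 / Ez, zs)   # comoving distance, Mpc
    return (1.0 + z) * d_c * 3.0857e24                 # Mpc -> cm

def energy_flux(gamma, epeak, lo, hi, n=2048):
    """Unnormalized band-integrated energy flux for N(E) ~ E^gamma exp(-E(2+gamma)/Epeak)."""
    E = np.logspace(np.log10(lo), np.log10(hi), n)     # keV
    return np.trapz(E * E**gamma * np.exp(-E * (2.0 + gamma) / epeak), E)

rng = np.random.default_rng(42)
z, ntrial = 8.0, 5000
gamma = np.clip(rng.normal(-0.29, 1.66 / 1.645, ntrial), -1.9, 1.5)
epeak = np.abs(rng.normal(44.4, 10.6 / 1.645, ntrial))      # keV
fluence = rng.normal(3.2e-7, 0.8e-7, ntrial)                # erg/cm^2, 15-150 keV

# k-correction: observed 15-150 keV fluence -> rest-frame 1-1e4 keV
k = np.array([energy_flux(g, ep, 1.0/(1 + z), 1e4/(1 + z)) /
              energy_flux(g, ep, 15.0, 150.0) for g, ep in zip(gamma, epeak)])
E_iso = 4.0 * np.pi * lum_dist_cm(z)**2 * fluence * k / (1.0 + z)
print(np.percentile(E_iso, [16, 50, 84]))   # erg; cf. (4.8 +6.1/-1.6) x 10^52
```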
The X-ray Telescope <cit.> began observing the field at 05:18:26.0 UT, 140 s after the BAT trigger, leading to a detection of the X-ray afterglow. The source was localized to RA = 20h 15m 10.73s, Dec = +06° 13′ 16.9″ (J2000), with an uncertainty radius of 1.9″ (90% containment). XRT continued observing the afterglow for 3.6 days in photon-counting (PC) mode, with the last detection at ≈0.6 days.

We extracted XRT PC-mode spectra using the on-line tool on the Swift website <cit.>[<http://www.swift.ac.uk/xrt_spectra/00534402>]. We used Xspec (v12.8.2) to fit the PC-mode spectrum between 1.7×10^-3 and 0.67 days, assuming a photoelectrically absorbed power-law model at the redshift of the GRB, and a Galactic neutral hydrogen column density of N_H,MW = 1.5×10^21 cm^-2 <cit.>. Our best-fit model has a photon index of Γ=1.77±0.14 (68% confidence intervals, estimated using Markov Chain Monte Carlo in Xspec; C-stat = 84 for 105 degrees of freedom). The data do not constrain intrinsic absorption within the host galaxy <cit.>. In the following analysis, we assume N_H,int = 0 and use the 0.3–10 keV count rate light curve from the Swift website, together with Γ = 1.77, to compute the 1 keV flux density (Table <ref>).

GRB 120923A: Log of X-ray observations

  Δt_start (hr)   Δt_end (hr)   Flux (10^-12 erg cm^-2 s^-1)   Flux density at 1 keV (μJy)
  0.042           0.079         7.2±1.6                        0.77±0.16
  0.079           0.118         6.9±1.5                        0.67±0.15
  0.118           0.165         6.0±1.3                        0.59±0.14
  0.165           0.224         4.5±1.0                        0.45±0.10
  0.224           0.275         4.1±1.0                        0.41±0.10
  0.275           0.431         2.2±0.4                        0.22±0.04
  9.78            90.2          0.018±0.006                    (1.8±0.7)×10^-3

XRT 0.3–10 keV flux measurements obtained in photon-counting mode. The start and end times of each observation are relative to the BAT trigger time of 2012 Sep 23 05:16:06 (UT). The count rate light curve has been converted to a flux density at 1 keV using a photon index of Γ = 1.77.

§.§ Ground-based imaging

We obtained optical and near-infrared (NIR) imaging from Gemini-North using the Near Infrared Imager and Spectrometer (NIRI) and Gemini Multi-Object Spectrograph (GMOS), and the United Kingdom Infra-Red Telescope (UKIRT) using the Wide-Field Camera (WFCAM), beginning 80 min after the trigger. Conditions in Hawaii were excellent, with ≈0.5″ full-width-half-maximum (FWHM) seeing, and the target was at low airmass for several hours. We detected a point source in the JHK bands at RA = 20h 15m 10.78s, Dec = +06° 13′ 16.3″ (J2000), accurate to ±0.3″ in each dimension, which is consistent with the X-ray position. The source was absent in the rizY bands (Figure <ref>), and its blue colour of H-K≈0.1 mag, together with being a Y-band drop-out (Y-J≳1 mag), suggested a very high redshift of z≳7. The NIR counterpart faded in subsequent photometry obtained with the Very Large Telescope (VLT) Infrared Spectrometer and Array Camera (ISAAC), and the European Southern Observatory/Max Planck Gesellschaft (ESO/MPG) 2.2m GRB Optical and Near Infrared Detector <cit.>, in addition to UKIRT and Gemini-North over the next several nights, confirming it to be the GRB afterglow.
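In the table below, flux densities are as measured, while the AB magnitudes additionally include the Galactic foreground-extinction correction. For reference, the conversion is the standard AB zero-point identity; the J-band extinction value in the example is an illustrative number consistent with A_V,gal≈0.4 mag (Section <ref>), not the paper's exact dust-map lookup.

```python
import math

def ab_mag(f_uJy, A_lambda=0.0):
    """AB magnitude from a flux density in microJansky, minus extinction."""
    return -2.5 * math.log10(f_uJy * 1e-6 / 3631.0) - A_lambda

# First J-band row: 2.74 uJy with an assumed A_J ~ 0.11 mag gives ~22.7
print(round(ab_mag(2.74, A_lambda=0.11), 2))
```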
GRB 120923A: Log of optical and NIR imaging observations and afterglow photometry

  Δt_start (hr)  Δt_end (hr)  Telescope/Camera  Filter  Exp. (min)  Measured flux (μJy)  AB_0
  1.372   1.701   Gemini-N/NIRI   J      17     2.74±0.14      22.69±0.05
  1.898   2.050   Gemini-N/NIRI   Y      8      0.21±0.42      >23.69
  3.241   3.396   Gemini-N/NIRI   H      8      2.75±0.23      22.73±0.09
  3.442   3.597   Gemini-N/NIRI   K      8      2.80±0.28      22.73±0.10
  3.705   4.199   Gemini-N/NIRI   Y      26     0.17±0.23      >24.26
  4.311   4.464   Gemini-N/NIRI   J      8      2.09±0.33      22.99±0.16
  23.79   24.14   Gemini-N/NIRI   J      17     1.27±0.39      23.52±0.29
  2.164   2.461   Gemini-N/GMOS   r      15     0.014±0.028    >26.43
  2.473   2.769   Gemini-N/GMOS   i      15     0.032±0.031    >26.20
  2.781   3.078   Gemini-N/GMOS   z      15     -0.029±0.150   >25.05
  1.461   1.958   UKIRT/WFCAM     K      24     4.81±0.88      22.15±0.18
  2.036   2.538   UKIRT/WFCAM     H      24     3.79±0.51      22.38±0.14
  2.565   3.062   UKIRT/WFCAM     J      24     2.38±0.62      22.85±0.25
  4.331   4.831   UKIRT/WFCAM     K      24     3.08±1.06      22.63±0.32
  4.852   5.225   UKIRT/WFCAM     J      18     2.40±0.83      22.84±0.32
  23.66   24.15   UKIRT/WFCAM     H      24     1.16±1.10      >22.51
  18.52   18.87   VLT/ISAAC       K_s    15     3.55±1.01      22.48±0.27
  18.91   19.26   VLT/ISAAC       H      15     2.92±0.88      22.66±0.29
  19.31   19.66   VLT/ISAAC       J      15     1.57±0.56      23.30±0.33
  67.82   69.25   VLT/ISAAC       J      60     0.71±0.26      24.16±0.34
  18.58   20.35   VLT/FORS2       z      80     0.020±0.088    >25.53
  102.2   103.9   HST/WFC3-IR     F140W  10     0.21±0.03      25.49±0.12
  120.0   127.8   HST/WFC3-IR     F140W  25     0.13±0.02      26.02±0.14
  156.5   159.8   HST/WFC3-IR     F140W  15     0.11±0.02      26.20±0.19
  172.5   183.7   HST/WFC3-IR     F140W  10     0.072±0.015    26.66±0.21
  477.3   479.1   HST/WFC3-IR     F140W  43.5   0.014±0.010    >27.46

The start and end times of each observation are relative to the BAT trigger time of 2012 Sep 23 05:16:06 (UT). The fluxes are as measured at the location of the afterglow, whereas the AB magnitudes are corrected for Galactic foreground extinction <cit.>, and in cases of no significant detection are reported as 2σ upper limits.

We performed photometry using [<http://astro.dur.ac.uk/∼pdraper/gaia/gaia.html>], with the target aperture placed at the location of the afterglow as determined from the high-S/N J-band image, and the aperture size set according to the seeing (≈1.3×FWHM). We tied the calibration of the WFCAM JHK-band images to the 2MASS photometric system[<http://www.ipac.caltech.edu/2mass/releases/allsky/doc/sec6_4a.html>] using many stars on each frame, and tied the smaller-field NIRI and ISAAC images to this using a secondary sequence of fainter stars close to the burst location. We obtained optical riz-band calibration using the wide-field GROND observations. The Y-band calibration was achieved by interpolating the sequence star magnitudes between z and J according to Y = J + 0.534 (z - J) - 0.058 (derived from GROND observations of photometric standard stars). Uncertainties introduced by these calibration steps are included in the error budget, but are small compared to the random errors on the afterglow photometry. We summarise our NIR and optical observations and photometry in Table <ref> (the GROND limits at the afterglow location are not reported, since they are shallower than the corresponding VLT observations obtained at almost the same time), and present the resulting light curves in Figure <ref>. Constraints on the photometric redshift are outlined in Section <ref>.

§.§ Ground-based spectroscopy

Our first spectrum of the afterglow was obtained with the VLT/X-shooter spectrograph <cit.>, together with the K-band blocking filter, beginning 0.78 days post-burst (Table <ref>). The slit width was fixed at 0.9″, which is reasonably matched to the ≈1.0–1.1″ seeing.
The target was acquired by offsetting from a nearby bright star. We nodded the target between two positions (A and B, separated by 5″) on the slit and took exposures in an `ABBA' sequence, as is usual for X-shooter. This was repeated at two different position angles of the slit, specifically 157.6° and -161.8° (defined from N through E), which maintained an approximately parallactic position. The data were reduced using the X-shooter pipeline <cit.>. The spectra were first rectified and re-sampled to produce linear spectra on a uniform 0.6 Å pix^-1 wavelength scale. Preliminary sky subtraction was done by differencing neighbouring frames. No continuum trace was visible at this stage. We refined the sky subtraction by masking out the brightest sky lines, and subtracting any residual sky signal channel by channel. Atmospheric throughput variations were calibrated by reference to observations of two telluric standard stars (Hip094250, Hip094986) obtained close in time to the science data. Channels with the highest telluric absorption (>50%), bad pixels, and other image artefacts were all masked out. Finally, we co-added all the frames, weighted by their respective signal-to-noise ratios, and optimally binned the data in wavelength into wide, 30 Å, channels. This revealed a weak but clear trace at the expected position on the slit in the NIR arm (which covers the wavelength range ∼1.02–1.8 μm). We normalized the absolute flux scale of the spectrum to match the J-band photometry at the same epoch. No signal was detected in either the UVB (∼0.35–0.56 μm) or VIS (∼0.56–1.02 μm) arms. The spectroscopic redshift we deduce is presented in Section <ref>.

GRB 120923A: Log of spectroscopic observations

  Δt_start (hr)  Δt_end (hr)  Telescope/Camera  Spectral element  Exp. (min)
  18.69   21.69   VLT/X-shooter   –      160
  102.3   104.6   HST/WFC3-IR     G141   80
  158.1   160.4   HST/WFC3-IR     G141   80
  172.5   173.2   HST/WFC3-IR     G141   40
  183.7   184.4   HST/WFC3-IR     G141   40

The start and end times of each observation are relative to the BAT trigger time of 2012 Sep 23 05:16:06 (UT).

§.§ Hubble Space Telescope observations

We triggered our cycle 19 Hubble Space Telescope (HST) program to acquire slit-less grism spectroscopy of the afterglow with the Wide-Field Camera 3-IR (WFC3-IR), in addition to further imaging in the NIR using the F140W filter (approximately a wide JH band). We photometered the afterglow in the F140W images using a ≈0.32″ radius aperture, adopting the standard zero-point calibration and aperture correction for this filter. This sequence of observations, beginning at 4.3 days post-burst, revealed a marked steepening of the light curve compared to the previous J-band decline rate of α_J≈-0.25 (F_ν∝ t^α) between ∼2 hr and ∼1 days (see Figure <ref> and Section <ref>). As a consequence of this unexpectedly rapid fading of the afterglow, combined with challenges due to overlapping traces from faint sources in the crowded field, no usable grism spectrum could be extracted. We report our F140W photometry in Table <ref> and do not consider the grism data further in our analysis.
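The steepening evident in the F140W photometry is quantified in Section <ref> by a broken power-law fit to the composite NIR light curve (α_1≈-0.25 steepening to α_2≈-1.9 at ≈35 hr). A minimal sketch of such a fit follows; the time and flux arrays are rough placeholders patterned on the tabulated J-band and F140W photometry, not the actual band-scaled composite used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_pl(t, f_b, t_b, a1, a2):
    """Sharply broken power law: flux f_b at break time t_b (days)."""
    return np.where(t < t_b, f_b * (t / t_b)**a1, f_b * (t / t_b)**a2)

# Placeholder data loosely following the photometry tables (days, uJy):
t_d   = np.array([0.06, 0.14, 0.18, 1.0, 4.3, 5.2, 6.6, 7.4])
f_uJy = np.array([2.74, 2.75, 2.09, 1.27, 0.21, 0.13, 0.11, 0.072])
f_err = np.array([0.14, 0.23, 0.33, 0.39, 0.03, 0.02, 0.02, 0.015])

popt, pcov = curve_fit(broken_pl, t_d, f_uJy, sigma=f_err,
                       p0=[1.0, 1.5, -0.25, -1.9], absolute_sigma=True)
print(popt)  # [f_b (uJy), t_b (days), alpha_1, alpha_2]
```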
§.§ Radio observations

We observed GRB 120923A with the Combined Array for Research in Millimeter Astronomy (CARMA) beginning on 2012 Sep 23.99 UT (0.77 days after the burst) at a mean frequency of 85 GHz. We found no significant millimetre emission at the position of the NIR counterpart, or within the enhanced Swift/XRT error circle, to a 3σ limit of 0.39 mJy.

We observed the afterglow in the C (4–7 GHz; mean frequency 6.05 GHz) and K (18–25 GHz; mean frequency 21.8 GHz) radio bands using the Karl G. Jansky Very Large Array (VLA), starting 0.82 days after the burst. Depending on the start time of the observations, we used either 3C286 or 3C48 as the flux and bandpass calibrator. We used J1950+0807 as the gain calibrator, and carried out data reduction using the Common Astronomy Software Applications package ([<https://casa.nrao.edu/>]). A possible weak source was seen at 7.9 days in the C band, with flux density 25±8 μJy. We followed up this putative radio afterglow through a VLA Director's Discretionary Time proposal (12B-387, PI: Zauderer) over a period of 44 days (Figure <ref>). The C band observations at 11.8 days also show a marginally significant peak in the flux density map, but the position is offset from that of the previous epoch by ≈2σ. No significant source is detected in the subsequent C band epochs, nor in the K band data, within the Gemini-North error circle. Detailed examination and stacking analyses of the images suggest that the two possible detections in the C band are likely due to noise. We therefore consider the VLA observations to yield a non-detection of the radio afterglow, and report the upper limits and formal photometric point-source fits derived from the maps and stacks in Table <ref>.

Millimeter and Radio Observations of GRB 120923A

  Epoch   t-t_0 (days)  Observatory  Band   Frequency (GHz)  Integration (min)  3σ Upper Limit (μJy)
  1       0.77          CARMA        3 mm   85.0             …                  <390
  1       0.824         VLA          K      21.8             17.7               <69.1
  1       0.853         VLA          C      6.05             19.9               <29.7
  2       3.90          VLA          K      21.8             18.1               <64.4
  2       3.91          VLA          C      6.05             17.4               <31.0
  3       7.91          VLA          C      6.05             27.8               <22.5
  4       11.8          VLA          C      6.05             35.5               <20.1
  5       23.9          VLA          C      6.05             37.9               <17.1
  6       40.8          VLA          C      6.05             36.3               <18.4
  7       43.8          VLA          C      6.05             49.5               <15.8
  3&4^*   10.1^†        VLA          C      6.05             63.4               <15.5
  6&7^*   42.5^†        VLA          C      6.05             85.8               <14.8

^* Stacks. ^† The reported value of t-t_0 for stacks is weighted by the integration time of the individual observations used in the stack.

§ REDSHIFT DETERMINATION

§.§ Photometric redshift constraints

We first investigate the redshift constraints from the optical-NIR spectral energy distribution (SED) of the afterglow, using techniques similar to those described in <cit.>. For uniformity, we selected observations from a single telescope for this analysis, specifically the Gemini-North/GMOS riz measurements and the Gemini-North/NIRI YJHK measurements obtained within 5 hr post-burst. We corrected these data for Galactic extinction, using A_V,gal=0.4 mag <cit.>, and the Milky Way extinction model of <cit.>. The photometry was interpolated to a common time corresponding to the NIRI H-band observation, using a power-law fit to the J-band light curve between 0.06 days and 1.0 days, the latter yielding α_J = -0.252±0.022. We added the interpolation uncertainty in quadrature with the photometric uncertainty to determine the total uncertainty at each point on the SED. We assumed the intrinsic spectrum of the afterglow is a power law, F_ν∝ν^β, and used the sight-line-averaged model for the optical depth of the IGM from <cit.>, accounting for Lyα absorption by neutral hydrogen and photoelectric absorption by intervening systems.
We also included Lyα absorption by the hostgalaxy interstellar medium (ISM), for which we assumed a column density of log(N_ H/ cm^-2) = 21.1, the mean value forGRBs at z∼2–3 <cit.>, although within the errors the photometric redshift is insensitive to the exact value chosen.The free parameters in our model are the redshift of the GRB,the extinction along the line of sight within the host galaxy (A_V), and the spectral index(β) of the afterglow SED.The SMC dust extinction law of<cit.> was assumed to model the extinction in the host galaxy, <cit.>. We took a flat prior for the redshift and the extinction, and employed the distribution of extinction-corrected optical-NIR spectral slopes, β_ o from<cit.> as a prior on β. Fitting was performed using a Markov Chain Monte Carlo (MCMC) algorithm to explorethe parameter space, integrating the model over the filter bandpasses, and computing the likelihoodof the model by comparing the resulting fluxes with the observed values using a implementation of the ensemble MCMC sampler<cit.>. The resulting 68% confidence intervals about the median values for the fitted parameters arez = 8.1±0.4, β = -0.17^+0.34_-0.25, and = 0.07^+0.09_-0.05 mag, where the large errors on β reflect the limited lever arm obtained from the three JHK detections. The highest-likelihood model is z ≈ 7.79, β≈ -0.39, and ≲0.1 mag, and this is shown, along witha model with the median parameters, inFigure <ref>. The full posterior density function for the redshift is shown inFigure <ref>, and allows us to rule out a redshift of z≲7.3 at 99.7% confidence. §.§ Spectroscopic redshift The X-shooter spectrum (Figure <ref>) exhibits significant flux redward of 1.2 μm (below 2.5×10^14 Hz), with a spectral slope of β=-0.6±0.5, and a steep cut-off bluewardof≈1.1 μm. We model the spectrum as a power law with index β, and interpret the breakas due to Lyα absorption by neutral hydrogen in the host galaxy followed by a Gunn-Petersontrough blueward of the host absorption. We proceed with the remainder of the analysis as for thephotometric redshift, assuming a flat prior for the redshiftand the extinction, and again using thedistribution of β_ o from <cit.> as a prior on β.We also fix the neutral hydrogen column density of the host galaxy tolog(N_ HI, host/ cm^-2)=21.1 (Section <ref>); a significantly higher column thanthis is unlikely given the evidence for low extinction and the suggestion of a trend toward somewhat lower columns seen in GRBs at z≳6 <cit.>, whilst assuming a lower value for N_ HI, host does not change the derived redshift within the errors.However, instead of integrating over filter bandpasses, we fitted the model directly to the observed X-shooter NIR spectrum.We find z=7.84^+0.06_-0.12, β = -0.54±0.40, and=0.17^+0.09_-0.12, where the uncertainties reflect 68% credible intervals about themedian. We plot our best-fit model in Figure <ref>. The best fit parameters arez≈7.8, β≈-0.54, and ≈0.17, all consistent with the median values, andwith the photometric redshift of z=8.1±0.4 (Section <ref>). § BURST PROPERTIES AND COMPARISON TO THE LONG-GRB POPULATION §.§ High energy behaviourAt z≈7.8, the BAT peak flux corresponds to a luminosity, L_ iso≈3.2×10^52 erg s^-1. In Figure <ref> we show the peak luminosity for all theGRBs with measured redshifts to March 2015. 
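The quoted peak luminosity follows directly from the BAT peak flux and the luminosity distance, with no k-correction since it is defined in the 15–150 keV band. A quick check, reusing the lum_dist_cm helper from the earlier sketch in Section <ref>:

```python
# L_iso = 4*pi*d_L^2 * F_peak for the 15-150 keV band (no k-correction),
# with lum_dist_cm() as defined in the earlier Monte Carlo sketch.
import numpy as np
F_peak = 4.1e-8                                             # erg cm^-2 s^-1
print(4.0 * np.pi * lum_dist_cm(7.8)**2 * F_peak)           # ~3e52 erg s^-1
```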
The low energy cut-off imposed by the BAT selection function indicates that only GRBs atthe bright end of the luminosity function can be detected at z>6, despiteutilizing a varietyof algorithms to try to recover even time-dilated bursts <cit.>.It is clear that GRB 120923A was close to this detection limit, and the intrinsically faintest event found at z>6.5 to-date. In Figure <ref>, we compare the X-ray light curve of GRB 120923A to those of a largesample ofbursts <cit.>. We find that GRB 120923A was amongst the faintestlong-duration GRB afterglows seen by XRT. Another view of these data is shown in Figure <ref>, in which each burst hasbeen shifted to show how it would have appeared if it were at z=8 (we also show the corresponding rest-frame axes).In this case we restrict the low redshift sample to the eventsincluded in The Optically-Unbiased GRB Host survey<cit.>. The high redshiftcompleteness of that sample minimises any optical selection biases. This shows that GRB 120923A wasrather typical in terms of its intrinsic X-ray behaviour. §.§ Infrared behaviourWe present the GRB 120923A composite NIR light curve formed by the JHK and F140W photometry inFigure <ref>, scaling the JHK bands by small factors to match the F140W band. A fitto this overall light curve of a broken power-law model yields a shallow initial slope, α_1≈-0.25, breaking at≈35 hours to a steep decay with α_2≈-1.9 (χ^2/dof =12.1/13). We compare the light curve to other high-redshift GRBs in Figure <ref>. Theafterglow of GRB 120923A is comparatively faint, and could easily have escaped detection in othercircumstances; i.e. we were lucky in being able to observe the afterglow with an 8-m telescope inexcellent seeing within 2 hours, and to continue observations for several hours before the sourceset. Since the peak flux density of the afterglow SED is directly proportional to the blastwavekinetic energy <cit.>, the relative faintness may result at least in part from a comparatively low value of. We investigate this further via multi-wavelength modelling in Section <ref>.It is interesting to note that the SED of this afterglow is comparatively blue (β=-0.17^+0.34_-0.25 at≈0.14 d; Figure <ref> and Section <ref>), consistent with little line-of-sight dust extinction inthe host, as has generally been found for other afterglows of the high-z GRBs<cit.>. This may reflect the limited time to build up dust, particularlyin the small galaxies that are likely dominating the total star formation budget at z>6<cit.>.Of course, there is also an observational bias against discovering dusty afterglows at high redshift. We consider the quantitative constraints on dust extinction to GRB 120923A in Section <ref>. § MULTI-WAVELENGTH MODELLING§.§ Synchrotron ModelWe now interpret the multi-wavelength observations of GRB 120923A in the context of the standard synchrotronmodel, in which the afterglow radiation arises from the blastwave shock set up by the expandingrelativistic GRB ejecta interacting with their circumburst environment.This model assumes an idealised jet and ambient medium, but is appropriate given our limited sampling of the evolving SED. The resulting radiationis expected to exhibit characteristic power law spectral segments connected at `break frequencies', namelythe synchrotron cooling frequency (), the characteristic synchrotron frequency (), andthe self-absorption frequency (). 
The location of these frequencies depend on the physicalparameters: the isotropic-equivalent kinetic energy (), the circumburst density (, or thenormalised mass-loss rate in a wind-like environment, ), the fraction of the blastwave energyimparted to non-thermal electrons (), and the fraction converted into post-shock magneticenergy density ().The different possible orderings of the break frequencies (e.g., <: `slow cooling',and <: `fast cooling') then give rise to five possible afterglow SED shapes<cit.>. As the radius and Lorentz factor of the blastwave change with time, these breakfrequencies evolve and the afterglow SED may transition between these different spectral shapes. Topreserve smooth light curves when break frequencies cross, we use the weighting schemes describedin <cit.> to compute the afterglow SED as a function of time. As in Section <ref>, we adopt the SMCextinction curve <cit.> to model the extinction in the host galaxy, , and include thepossibility of an achromatic `jet-break' in the afterglow light curves due to spreading andedge-effects expected for a collimated outflow. To efficiently sample the available parameterspace, we carry out an MCMC analysis using . The details of our modelling scheme and MCMCimplementation are described in <cit.>. §.§ Basic Considerations The lack of any flattening in the late-time NIR light curve (Fig. <ref>) indicates that any host contamination of the afterglow photometry is small, and we assume it tobe negligible in what follows. The J-band light curve declines as α_ J = -0.23±0.04 between 0.06 days and0.2 days. In the synchrotron model, such a shallow decline in the NIR is only possible if ≲ <, where the lightcurves decline as t^-1/4 regardless of the density profile ofthe circumburst environment. This suggests that the afterglow radiation is in the fast coolingregime, and that the NIR bands are on segment F of <cit.>.In this scenario, we would expect the light curve to steepen to α = (2-3p)/4 when∝ t^-3/2 passes through the NIR band, where p is the power-law index of the electron energy distribution. Alternatively, in the particular case of awind environment, ∝ t^1/2 may pass through the NIR band first, and the NIR decayrate would steepen to α = -2/3.The steepness of the late decay (α_ J≈ -1.8±0.3 between 4.3 days and 20 days), and marked change from the early behaviour, provides evidence that a jet break occurred at ≲4.3 days, and also suggests the passage ofthrough the NIR band between0.2 and 4.3 days, and indicates a uniform(ISM-like) environment. If we take the power law within this window, α_ J = -1.2±0.2, as indicative of the slope afterpassage, but before the jet break, then using α_ J = (2-3p)/4 yields p=2.3±0.3. The XRT PC-mode light curve is well-fit with a broken power law, with an initial flat segment, α_ X, 1 = 0.0±0.2, breakinginto[The smoothness (y in ; equivalent to sin ) of the break is poorlyconstrained, and we fix it to y=3.]α_ X, 2 = -1.32±0.05 at t_ b = (6.3±1.2)×10^-3 days.The initial flat portion of the X-ray light curve may be due to theX-rays being on the same segment (F) of the synchrotron SED as the NIR data. In this case, theX-ray decline rate is also expected to be t^-1/4. The break in the X-ray light curve would thencorrespond to the passage ofthrough the X-ray band. Since the X-ray band spans an orderof magnitude in energy, whileevolves as t^-3/2, we would expect the break to besmoothed out over a factor of ≈10^-2/3≈5 in time. 
If we assumed ≈ν_J≈ 2.2×10^14 Hz at ≈0.2 days, we would expect ≈ 1 keV at≈2×10^-3 days. This is consistent with the observed steepening in the X-ray lightcurveat ≈6×10^-3 days. In this model, the post-break decline rate of α_ X =-1.32±0.05 yields p=2.43±0.07. The different decline rates in the X-ray and NIR bandsbetween 0.06 days and 0.2 days suggests that these bands are on different segments of the afterglowsynchrotron spectrum, consistent with the spectral ordering, ≲ << duringthis period.The spectral index in segment F is expected to be β = -0.5, independent of p. When theX-rays are on this segment, we would expect Γ_ X = 1-β_ X = 1.5, which isconsistent with the value of Γ_ X = 1.61±0.14 derived in Section <ref>.We would also expect spectral evolution from Γ_ X = 1.5 to Γ_ X = 1+p/2aftercrosses the X-ray band. Unfortunately, paucity of data following the orbital gapin the X-ray light curve precludes confirmation of this behaviour.Interpolating the X-ray lightcurve using the best fit broken power law model, we finda flux density of (3.5±0.4)×10^-2 μJy at the time of the Gemini/NIRI J-band observation at0.064 days. The spectral index between the NIR and X-ray band is then β_NIR-X≈-0.65±0.01. This is significantly different from -0.5, suggesting that at leastone spectral break frequency lies between the NIR and X-ray band.Assuming a spectral slope ofβ=-0.5 and β=-p/2=-1.22 below and above this break, respectively, and using themeasured J-band flux density and extrapolated X-ray flux density, we can locate the break to beat ≈5.9×10^16 Hz at this epoch. However, extrapolating∝ t^-3/2 from ≈2.2×10^14 Hz at ≈0.2 days to 0.064 days yields≈1.2×10^15 Hz, which is more than an order ofmagnitude lower.Here we have neglected the possibility of dust extinction in the host galaxy. A small amount of extinction would lead us to overestimate β_ NIR-X and hence overestimate , and so in principle alleviate this discrepancy. However, we estimate that requiring β_ NIR-X≈β_ Xwould necessitate A_ V≳0.2, which is disfavored from our analysis of the NIR photometry in Section <ref>. We return to the question of host extinction in Section <ref>.In the synchrotron model, we can use the observed X-ray flux density at 1 keV at a time that isdominated by afterglow radiation to estimate the burst kinetic energy <cit.>. This requires the X-ray bandto be located above the peak and cooling frequencies. Since ,< after the break inthe X-ray light curve, this condition is satisfied. We use the last point preceding the orbital gap with f_ X≈0.22 μJy at≈0.36 hr,together with fiducial values of p=2.2 <cit.> and ==1/3 to estimate ≈3×10^51 erg.We verify this result in the next section.To summarise, the X-ray light curve, X-ray spectrum, and NIR J-band light curve suggest that theobserved synchrotron radiation from the blastwave shock is in the fast cooling regime with ≲ << at ≈ 0.2 days;passes through the X-ray band at about a few× 10^-3 days, and through the J-band between ≈0.2 days and 4.3 days, while the steep declinein theF140W data indicates a jet break before ≈ 4.3 days. 
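The timing consistency argued in this section is simple to verify: anchoring the cooling frequency ν_c ≈ 2.2×10^14 Hz (the J band) at 0.2 days and scaling as ν_c ∝ t^{-3/2}, the cooling break should have crossed 1 keV at ≈2×10^-3 days, consistent with the observed X-ray steepening.

```python
# When did nu_c (anchored at the J band at t0 = 0.2 d) cross 1 keV,
# assuming the constant-density scaling nu_c ~ t^(-3/2)?
nu_c0, t0, nu_keV = 2.2e14, 0.2, 2.418e17     # Hz, days, Hz (1 keV)
print(t0 * (nu_keV / nu_c0)**(-2.0 / 3.0))    # ~1.9e-3 d, cf. the X-ray break
```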
§.§ Multi-wavelength Model for GRB 120923A

We now describe the full multi-wavelength modelling of all available afterglow data for GRB 120923A using the techniques presented in <cit.>, which include accommodating upper limits. We adopted weak, flat priors based on plausible ranges: 2.01<p<3.45, ε_e, ε_B<1/3, -10<log(n/cm^-3)<10, -4<log(E_K,iso/10^52 erg)<2.7, A_V<20 mag, and -5<log(t_jet/d)<5. For the sake of generality, we did not fix the redshift, but based on our analysis of the NIR spectral energy distribution (Section <ref>) restricted the redshift range to 7.0<z<8.5.

We find that an ISM-like model with a jet break adequately explains the full set of afterglow observations. Our highest-likelihood model (Figure <ref>) has the parameters p≈2.5, z≈8.1, ε_e≈0.33, ε_B≈0.32, n≈4.0×10^-2 cm^-3, and E_K,iso≈2.9×10^51 erg, with a jet break at t_jet≈3.0 days (χ^2/dof = 1.1). We note that the derived value of E_K,iso confirms the estimate made using the X-ray data (Section <ref>). An implication of this is a very high value for the radiative efficiency, η=E_γ,iso/(E_γ,iso+E_K,iso)≈0.92. Such high values for η have also been inferred for the prompt emission of some other GRBs <cit.>, although they remain a challenge to explain theoretically. The redshift derived by this approach is completely consistent with the photometric redshift found in Section <ref>. This model requires a small amount of extinction in the host galaxy, A_V≈0.06 mag (Figure <ref>). Using the relation

θ_jet = 0.1 [n/(E_K,iso/10^52 erg)]^{1/8} [t_jet/(6.2(1+z) hr)]^{3/8} rad

for the jet opening angle <cit.>, we find θ_jet≈4.9 degrees and a beaming-corrected kinetic energy, E_K≈1.1×10^49 erg. The break frequencies are located at ≈3×10^7 Hz and ≈7×10^7 Hz (the self-absorption-related breaks), ν_c≈6×10^14 Hz, and ν_m≈3×10^15 Hz at 0.1 days, and the peak flux density is ≈13 μJy (at ν_c) for the highest-likelihood model. In this model, ν_c passes through 1 keV at ≈5×10^-3 days, which is precisely the time of the observed break in the X-ray light curve, t_b=(6.3±1.2)×10^-3 days (Section <ref>). The shallow-to-steep transition in the X-ray light curve is therefore consistent with the passage of ν_c. We note that the spectrum peaks at ν_c in the fast-cooling regime, and the proximity of the cooling break to the NIR J band at ≈0.1 days results in a spectrum near the J band that is flatter than ν^-0.5. This explains the lower value of ν_c (which lies between the NIR and X-ray bands) inferred from the NIR and X-ray light curves, compared with the value required in a broken power-law fit to match the NIR J-band and interpolated X-ray flux density at this time. The location of ν_c≳ν_J at 0.1 d rules out the wind model, since in that case we would expect the NIR light curve to decline as t^-2/3 after ≈0.1 d (Section <ref>).

The passage of ν_m through the NIR J band occurs at ≈0.5 days, at which point both ν_m and ν_c are below ν_J, and the light curve steepens to α≈-1.4, consistent with the limited observations at this time. Finally, the best-fit model requires a jet break at ≈3 days. In broad agreement with the basic analysis presented in Section <ref>, the model afterglow SED remains in fast cooling until ≈0.34 days (approximately 1 hr in the rest frame). This conclusion is driven in the fit by the apparent change in NIR spectral slope between the early data, particularly the Gemini photometry at ∼0.14 days, and the VLT epoch at ∼0.8 days, as illustrated in Figure <ref>.

From our MCMC simulations, we constrain the fitting parameters to p=2.7^+0.3_-0.2, z=8.1^+0.2_-0.3, ε_e=0.31^+0.02_-0.04, ε_B=0.23^+0.07_-0.11, n=(4.1^+2.2_-1.4)×10^-2 cm^-3, E_K,iso=(3.2^+0.8_-0.5)×10^51 erg, and t_jet=3.4^+1.1_-0.5 days (68% credible intervals), with A_V≲0.08 mag (90% confidence upper limit).
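The collimation correction implied by these numbers is compact enough to evaluate directly. The sketch below applies the jet-angle relation quoted above at the best-fit parameters, and propagates the result to the beaming-corrected energies; agreement with the quoted values is a consistency check, not a re-derivation of the MCMC credible intervals.

```python
import numpy as np

def theta_jet(n, E_Kiso_52, t_jet_days, z):
    """Jet opening angle (radians) from the relation quoted above."""
    t_hr = t_jet_days * 24.0
    return 0.1 * (n / E_Kiso_52)**(1.0 / 8.0) * (t_hr / (6.2 * (1.0 + z)))**(3.0 / 8.0)

th = theta_jet(n=0.040, E_Kiso_52=0.29, t_jet_days=3.0, z=8.1)
f_beam = 1.0 - np.cos(th)           # beaming correction factor, 1 - cos(theta)
print(np.degrees(th))               # ~4.9 degrees
print(f_beam * 2.9e51)              # E_K     ~ 1.1e49 erg
print(f_beam * 4.8e52)              # E_gamma ~ 1.8e50 erg
```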
Thus, although the best-fit model has a small amount of extinction (Figure <ref>), our MCMC results indicate that the evidence for dust along the line of sight is statistically marginal. Applying the expression for θ_jet above to our MCMC chains, with their individual values of n, E_K,iso, z, and t_jet, we find θ_jet=5.0^+1.3_-0.8 degrees and E_K=(1.2^+0.5_-0.2)×10^49 erg. Correcting E_γ,iso for beaming using this measurement of θ_jet, we find E_γ=(1.8±0.8)×10^50 erg. We present histograms of the marginalized posterior density for each parameter in Figure <ref>, and correlation contours between the physical parameters, together with the relations expected between the parameters in the absence of constraints on one of the spectral break frequencies, in Figure <ref>. We summarise the results of our MCMC analysis in Table <ref>.

GRB 120923A: Parameters from multi-wavelength modelling

  Parameter               Best-fit value   MCMC result
  z                       8.1              8.1^+0.2_-0.3
  p                       2.5              2.7^+0.3_-0.2
  ε_e                     0.33             0.31^+0.02_-0.04
  ε_B                     0.32             0.23^+0.07_-0.11
  n (cm^-3)               4.0×10^-2        (4.1^+2.2_-1.4)×10^-2
  E_K,iso (10^51 erg)     2.9              3.2^+0.8_-0.5
  t_jet (d)               3.0              3.4^+1.1_-0.5
  θ_jet (degrees)         4.9              5.0^+1.3_-0.8
  A_V (mag)               0.06             ≲0.08^†
  E_γ,iso (10^52 erg)^‡   —                4.8^+6.1_-1.6
  E_γ (10^50 erg)         1.8              1.8±0.8^*
  E_K (10^49 erg)         1.1              1.2^+0.5_-0.2

^† 90% confidence upper limit. The median value of the host extinction in our MCMC analysis is A_V=3.8×10^-5 mag, with a 68% credible interval A_V∈(0.00, 0.06). ^‡ 1–10^4 keV, rest frame. ^* Using symmetrized uncertainties (one-half of the positive minus the negative error) for both E_γ,iso and θ_jet, followed by a Monte-Carlo calculation.

§ DISCUSSION

The photometric redshifts derived from SED-fitting (z=8.1±0.4) and from multi-wavelength modelling (z=8.1^+0.2_-0.3) agree with the redshift derived from the X-shooter spectrum (z=7.84^+0.06_-0.12). As expected, the multi-wavelength modelling produces a narrower posterior density function than SED-fitting alone, with the spectral analysis providing the strongest constraint of the three methods. In principle, it is possible to use the posterior density function of z derived from the spectrum as a prior on the redshift for the multi-wavelength analysis. However, a perusal of the correlation contours between the redshift and the other parameters in the MCMC results of the multi-wavelength analysis suggests that the redshift is not strongly coupled to the other parameters and that, therefore, imposing such a prior is of limited utility. In confirmation, we find that selecting the multi-wavelength analysis MCMC samples within the redshift range 7.72 < z < 7.90 (the 68% credible interval from the spectral analysis) results in posteriors for the other parameters identical to those for the full distribution. The lack of a strong correlation between z and the other parameters suggests that the measurement of z is driven by a small subset of the data, essentially in a model-independent fashion.

To place our measured value of the circumburst density for GRB 120923A in context, we compute summary statistics for the circumburst density of GRBs with ISM-like environments reported and aggregated in <cit.> and <cit.>. Since the density spans several orders of magnitude and therefore acts as a scale parameter, we use ϱ≡log_10(n/cm^-3) for this analysis. We find a mean ϱ̅ = -0.19, a standard deviation σ_ϱ=2.0, and a median ϱ̂=-0.17. In comparison, we have ϱ=-1.4±0.2 for GRB 120923A, such that |ϱ-ϱ̅|/σ_ϱ≈0.6.
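This comparison, and the analogous one for the beaming-corrected kinetic energy made in the next paragraph, amounts to a z-score computed in log space:

```python
# Distance of GRB 120923A from the comparison sample, in units of the
# sample standard deviation, computed in log space (scale parameters).
rho, rho_mean, rho_sig = -1.4, -0.19, 2.0   # log10(n / cm^-3)
print(abs(rho - rho_mean) / rho_sig)        # ~0.6: consistent with the sample

# Analogous check for the kinetic energy (statistics quoted below):
eps, eps_mean, eps_sig = -0.9, 0.75, 0.70   # log10(E_K / 1e50 erg)
print(abs(eps - eps_mean) / eps_sig)        # ~2.4: E_K unusually low
```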
Whereas the measured density is lower than both the mean and median reported for GRB afterglows thus far, it is consistent with being drawn from the same distribution.

For the beaming-corrected kinetic energy, we once again work with the logarithm: ε≡log_10(E_K/10^50 erg), and have ε̅=0.75, σ_ε=0.70, and ε̂=0.58 for the comparison sample. For GRB 120923A, we find ε=-0.9^+0.16_-0.10, such that |ε-ε̅|/σ_ε≈2.4. Thus, although we cannot rule out that GRB 120923A is drawn from the same sample as the comparison events based on n_0, the measured value of the beaming-corrected kinetic energy in the case of this event is one of the lowest observed for GRB afterglows so far (Figure <ref>). A caveat here is that the comparison sample for which these parameters have been derived consists of well-studied and generally bright events, and so could itself be biased compared to the wider population.

Our HST observations enabled us to measure the steep light-curve decay which we have attributed to a jet break. Although the identification of jet breaks in GRB afterglow light curves has sometimes proven controversial in the Swift era <cit.>, the sharp and marked break in this case is rather hard to interpret otherwise. Through multi-wavelength modelling, we have derived a jet opening angle of θ_jet=5.0^+1.3_-0.8 degrees, the fourth such measurement at z≳6. Interestingly, this value is comparable to the values obtained for other z≳6 events, but is smaller than the median value for z∼1 events reported in the literature (Figure <ref>; although we caution that limited data for many older bursts in this compilation means that interpretation of temporal breaks as being due to beaming is less secure). This supports the hypothesis of <cit.> that observed high-redshift GRBs may be more tightly beamed on average than their more local counterparts, which may be a consequence of narrower jets leading to more intrinsically luminous, and hence easier to observe, afterglows. Multi-wavelength analysis for more high-redshift events coupled with a uniform statistical study of the z∼1 events would further clarify this inference.

§ CONCLUSIONS

We have presented X-ray, NIR, and radio observations of GRB 120923A. The faintness of the afterglow made the initial identification as an optical drop-out and subsequent spectroscopy challenging. Nonetheless, we were able to derive a redshift for the event of z=7.84^+0.06_-0.12 from a low signal-to-noise VLT/X-shooter spectrum, which is consistent with that obtained from the photometric redshift analysis. The absence of significant flux at the afterglow location in our final HST image suggests the host galaxy is likely fainter than M_F140W,AB≳27.5, consistent with the deep limits on other z∼8 GRB hosts <cit.>. Our multi-wavelength modelling of all available afterglow observations shows that a standard external shock in a constant-density circumburst environment with n_0≈0.04 cm^-3 explains the data well. Using deep HST observations, we find evidence for a jet break at t_jet=3.4^+1.1_-0.5 days, from which we computed a jet opening angle of θ_jet=5.0^+1.3_-0.8 degrees. Our results support the apparent trend of smaller opening angles for z≳6 GRBs compared to z∼1 events. This may reflect the fact that at high redshift we can only detect events with the highest isotropic luminosities, which would therefore favour selection of more narrowly beamed jets assuming a fixed range of intrinsic energy reservoirs.
The blastwave kinetic energy,E_ K=1.2^+0.5_-0.2×10^49 erg, is one of the lowest seen so far for both nearby and high-z well-studied events. Otherwise the properties of GRB 120923A, like those of the other z≳6 bursts discovered to date <cit.>, show no signatures that would suggest they could be produced by Pop III stars, such as very long duration or extremely large energy <cit.>.In the case of GRB 120923A, NIR observations within the first hours post-burst alerted us to the high-redshiftnature of this event. In addition, they were essential to catch the peak of the afterglow SED at atime that the radiation was in the fast cooling regime, allowing us to constrain the circumburstdensity even in the absence of a radio detection and the resulting freedom in locating thesynchrotron self-absorption frequency. Rapid-response NIR observations at large telescopes aretherefore crucial not only for their ability to help us identify GRBs at z≳6, butalso for studying the progenitors and environments of these energetic phenomena, establishing themas unique probes of star-formation at the highest redshifts.In the JWST era, NIR spectroscopy, even several days post-burst, of similar events will provide much higher signal-to-noise data, allowing meaningful constraints to be placed on abundances and neutral hydrogen in the host galaxy. We thank the anonymous referee for their constructive comments.This work is based on observations made with the NASA/ESA Hubble Space Telescope, obtained at theSpace Telescope Science Institute, which is operated by the Association of Universities for Researchin Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programGO12558. Support for Program number GO12558 was provided by NASA through a grant from the SpaceTelescope Science Institute, which is operated by the Association of Universities for Research inAstronomy, Incorporated, under NASA contract NAS5-26555.This work is based on observations collected at the European Organisation for AstronomicalResearch in the Southern Hemisphere (ESO), Chile under programme 089.A-0067, and on observations obtained at the Gemini Observatory (acquired through the Gemini Science Archive andprocessed using the Gemini IRAF package), which is operated by theAssociation of Universities forResearch in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Geminipartnership: the National Science Foundation (United States), the National Research Council(Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério daCiência, Tecnologia e Inovação (Brazil) and Ministerio de Ciencia, Tecnología eInnovación Productiva (Argentina). The National Radio Astronomy Observatory is a facility of theNational Science Foundation operated under cooperative agreement by Associated Universities, Inc.When the datareported here were acquired, UKIRT was operated by the Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the U.K. We thank Tim Carroll for his assistance in making these observations.Based on data obtained with the VLA under program 12A-394.The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.The Dark Cosmology Centre was funded by the DNRF. 
The research leading to these results has receivedfunding from the European Research Council under the European Union's Seventh Framework Program(FP7/2007-2013)/ERC Grant agreement no.EGGS-278202.DAK is grateful to TLS for financial support.ERS acknowledges support from UK STFC consolidated grantST/L000733/1DM thanks the Instrument Center for Danish Astrophysics (IDA) for support.AdUP acknowledges support from a Ramón y Cajal fellowship.RSR and AdUP acknowledge support from a 2016 BBVA Foundation Grant for Researchers and Cultural Creators. DAK, ZC, RSR and AdUP acknowledge support from the Spanish research project AYA 2014-58381-P.DX acknowledges the support by the One-Hundred-Talent Program of the Chinese Academy of Sciences (CAS), by the Strategic Priority Research Program Multi-wavelength Gravitational Wave Universe of the CAS (No. XDB23000000), and by the National Natural Science Foundation of China under grant 11533003.TK acknowledges support through the Sofja Kovalevskaja Award to Patricia Schady from the Alexander von Humboldt Foundation of Germany.NRT and KW acknowledge support from the UK STFC under consolidated grantST/N000757/1.Gemini-North (GMOS, NIRI),HST (WFC3-IR), VLT (X-shooter, ISAAC, FORS2), UKIRT (WFCAM),VLA 99 [Barkana & Loeb(2004)]Barkana04 Barkana, R., & Loeb, A. 2004, , 601, 64[Barthelmy et al. (2005)]bbc+05 Barthelmy, S. D., Barbier, L. M., Cummings, J. R., et al. 2005, , 120, 143 [Boër et al.(2006)]Boer06 Boër, M., Atteia, J. L., Damerdji, Y., et al. 2006, , 638, L71[Bolton & Haehnelt(2013)]Bolton13 Bolton, J. S., & Haehnelt, M. G. 2013, , 429, 1695[Bouwens et al.(2015)]Bouwens15 Bouwens, R. J., Illingworth, G. D., Oesch, P. A., et al. 2015, , 811, 140[Bunker et al.(2013)]Bunker13 Bunker, A. J., Caruana, J., Wilkins, S. M., et al. 2013, , 430, 3314 [Burrows et al. (2005)]bhn+05 Burrows, D. N., Hill, J. E., Nousek, J. A., et al. 2005, , 120, 165 [Cenko et al.(2010)]cfh+10 Cenko, S. B., Frail, D. A., Harrison, F. A., et al. 2010, , 711, 641 [Cenko et al.(2011)]cfh+11 Cenko, S. B.; Frail, D. A.; Harrison, F. A., et al. 2011, , 732, 29 [Chandra et al.(2008)]ccf+08 Chandra, P., Cenko, S. B., Frail, D. A., et al. 2008, , 683, 924 [Chornock et al.(2014)]Chornock15 Chornock R., Berger E., Fox D. B., Fong W., Laskar T., Roth K. C., 2014,arXiv:1405.7400[Cucchiara et al.(2011)]Cucchiara11 Cucchiara, A., Levan,A. J., Fox, D. B., et al. 2011, , 736, 7[Curran et al.(2008)]curran08 Curran, P. A., van der Horst, A. J., & Wijers, R. A. M. J. 2008, , 386, 859[Curran et al.(2010)]curran10 Curran, P. A., Evans, P. A., de Pasquale, M., Page, M. J., & van der Horst, A. J. 2010, , 716, L135[Evans et al.(2007)]Evans07 Evans, P. A., Beardmore, A. P., Page, K. L., et al. 2007, , 469, 379[Evans et al.(2009)]Evans09 Evans, P. A., Beardmore,A. P., Page, K. L., et al. 2009, , 397, 1177[Foreman-Mackey et al.(2013)]fhlg13 Foreman-Mackey, D., Hogg, D. W., Lang, D. & Goodman, J. 2013, , 125, 306 [Frebel & Norris(2015)]frebel15 Frebel, A., & Norris, J. E. 2015, , 53, 631[Friedman & Bloom(2005)]fb05 Friedman, A. S. & Bloom, J. S. 2005, , 627, 1 [Fynbo et al.(2009)]fjp+09 Fynbo, J. P. U., Jakobsson, P.; Prochaska, J. X., et al. 2009, , 185, 526 [Ghirlanda at al.(2007)]gngf07 Ghirlanda, G., Nava, L., Ghisellini, G., et al. 2007, , 466, 127 [Goldoni(2011)]Goldoni11 Goldoni, P. 2011, Astronomische Nachrichten, 332, 227[Graham & Fruchter(2017)]graham2017 Graham, J. F., & Fruchter, A. S. 2017, , 834, 170[Granot & Sari(2002)]gs02 Granot, J., & Sari, R. 
2002, , 568, 820 [Granot et al.(2006)]Granot06 Granot, J., Königl, A., & Piran, T. 2006, , 370, 1946[Greiner et al.(2008)]Greiner08 Greiner, J., Bornemann, W., Clemens, C., et al. 2008, , 120, 405 [Greiner et al.(2009)]Greiner09 Greiner, J., Krühler, T., Fynbo, J. P. U., et al. 2009, , 693, 1610[Greiner et al.(2011)]gkk+11 Greiner, J., Krühler, T., Klose, S., et al. 2011, , 526, A30 [Haislip et al.(2006)]Haislip06 Haislip, J. B., Nysewander, M. C., Reichart, D. E., et al. 2006, , 440, 181[Hartoog et al.(2015)]Hartoog15 Hartoog O. E., et al., 2015, , 580, 139 [Hjorth et al.(2003)]hjorth03 Hjorth, J., Sollerman, J., Møller, P., et al. 2003, , 423, 847[Hjorth et al.(2012)]Hjorth12 Hjorth, J., Malesani, D., Jakobsson, P., et al. 2012, , 756, 187[Jakobsson et al.(2012)]Jakobsson12 Jakobsson, P.,Hjorth, J., Malesani, D., et al. 2012, , 752, 62[Kann et al.(2010)]Kann10 Kann, D. A., Klose, S.,Zhang, B., et al. 2010, , 720, 1513[Kawai et al.(2006)]Kawai06 Kawai, N., Kosugi, G., Aoki, K., et al. 2006, , 440, 184[Krühler et al.(2012)]Kruehler12 Krühler, T., Malesani, D., Milvang-Jensen, B., et al. 2012, , 758, 46[Krühler et al.(2015)]Kruehler15 Krühler, T., Malesani, D., Fynbo, J. P. U., et al. 2015, , 581, A125[Laskar et al.(2014)]Laskar14 Laskar, T., Berger, E.,Tanvir, N., et al. 2014, , 781, 1[Laskar et al.(2015)]lbm+15 Laskar, T., Berger, E., Margutti, R., et al. 2015, , 814, 1 [Levan et al.(2012)]Levan12 Levan, A. J., Perley, D. A., Tanvir, N. R., & Cucchiara, A. 2012, GRB Coordinates Network, 13802 [Lien et al.(2014)]lien14 Lien, A., Sakamoto, T., Gehrels, N., et al. 2014, , 783, 24[Lien et al.(2016)]lien16 Lien, A., Sakamoto, T., Barthelmy, S. D., et al. 2016, , 829, 7[Littlejohns et al.(2013)]Littlejohns13 Littlejohns, O. M., Tanvir, N. R., Willingale, R., et al. 2013, , 436, 3640[Madau (1995)]mad95 Madau, P.1995, , 441, 18 [Markwardt et al.(2012)]Markwardt12 Markwardt, C. B., Barthelmy, S. D., Baumgartner, W. H., et al. 2012, GRB Coordinates Network, 13807 [Maselli et al.(2014)]maselli14 Maselli, A., Melandri, A., Nava, L., et al. 2014, Science, 343, 48[McGuire et al.(2016)]mcguire16 McGuire, J. T. W., Tanvir, N. R., Levan, A. J., et al. 2016, , 825, 135[Melandri et al.(2015)]Melandri15 Melandri, A., Bernardini, M. G., D'Avanzo, P., et al. 2015, , 581, A86[Mészáros & Rees(2010)]Meszaros2010 Mészáros P., Rees M. J., 2010, ApJ, 715, 967[Miralda-Escudé(1998)]Miralda98 Miralda-Escudé, J. 1998, , 501, 15[Oesch et al.(2016)]oesch16 Oesch, P. A., Brammer, G., van Dokkum, P. G., et al. 2016, , 819, 129[Panaitescu & Kumar(2002)]pk02 Panaitescu, A., & Kumar, P., 2002, , 571, 779 [Pei (1992)]pei92 Pei, Y. C. 1992, , 395, 130[Perley et al.(2013)]perley13 Perley, D. A., Levan, A. J., Tanvir, N. R., et al. 2013, , 778, 128[Perley et al.(2016)]perley2016 Perley, D. A., Tanvir, N. R., Hjorth, J., et al. 2016, , 817, 8[Planck Collaboration (2016)]Planck2016 Planck Collaboration, 2016, , 596, A108[Racusin et al.(2008)]Racusin08 Racusin, J. L., Karpov, S. V., Sokolowski, M., et al. 2008, , 455, 183bibitem[Robertson et al.(2015)]Robertson15 Robertson, B. E., Ellis, R. S., Furlanetto, S. R., & Dunlop, J. S. 2015, , 802, L19[Sakamoto et al.(2011)]Sakamoto11 Sakamoto, T., Barthelmy, S. D., Baumgartner, W. H., et al. 2011, , 195, 2[Salvaterra et al.(2009)]Salvaterra09 Salvaterra, R.,Della Valle, M., Campana, S., et al. 2009, , 461, 1258[Sari et al.(1998)]Sari98 Sari, R., Piran, T., & Narayan, R. 1998, , 497, L17[Sari et al.(1999)]sph99 Sari, R., Piran, T., & Halpern, J. P. 
1999 , 519, L17 [Schady et al.(2012)]schady12 Schady, P., Dwelly, T., Page, M. J., et al. 2012, , 537, A15 [Schlafly & Finkbeiner(2011)]Schlafly11 Schlafly, E. F., & Finkbeiner, D. P. 2011, , 737, 103[Starling et al.(2013)]Starling13 Starling, R. L. C., Willingale, R., Tanvir, N. R., et al. 2013, , 431, 3159 [Tanvir & Jakobsson(2007)]Tanvir07 Tanvir, N. R., & Jakobsson, P. 2007, Royal Society of London Philosophical Transactions Series A, 365, 1377[Tanvir et al.(2009)]Tanvir09 Tanvir, N. R., Fox,D. B., Levan, A. J., et al. 2009, , 461, 1254[Tanvir et al.(2012)]Tanvir12 Tanvir, N. R., Levan, A. J., Fruchter, A. S., et al. 2012, , 754, 46[Thöne et al.(2013)]Thoene13 Thöne, C. C., Fynbo, J. P. U., Goldoni, P., et al. 2013, , 428, 3590[Totani et al.(2006)]Totani06 Totani, T., Kawai, N., Kosugi, G., et al. 2006, , 58, 485[Trenti et al.(2012)]Trenti12 Trenti, M., Perna, R., Levesque, E. M., Shull, J. M., & Stocke, J. T. 2012, , 749, L38[Vergani et al.(2015)]vergani15 Vergani, S. D., Salvaterra, R., Japelj, J., et al. 2015, , 581, A102[Vernet et al.(2011)]Vernet11 Vernet, J., Dekker, H., D'Odorico, S., et al. 2011, , 536, A105[Watson et al.(2015)]Watson15 Watson, D., Christensen, L., Knudsen, K. K., et al. 2015, , 519, 327[Willingale et al.(2013)]Willingale13 Willingale, R., Starling, R. L. C., Beardmore, A. P., Tanvir, N. R., & O'Brien, P. T. 2013, , 431, 394[Xu et al.(2013)]xu13 Xu D., et al., 2013, ApJ, 776, 98[Yershov et al.(2012)]Yershov12 Yershov, V. N.,Barthelmy, S. D., Krimm, H. A., et al.2012, GRB Coordinates Network, 13796 [Yost et al.(2003)]yhsf03 Yost, S. A., Harrison, F. A., Sari, R., et al. 2003, , 597, 459 [Zafar et al.(2011)]Zafar11 Zafar, T., Watson, D. J.,Tanvir, N. R., et al. 2011, , 735, 2[Zhang et al.(2007)]Zhang07 Zhang B., et al., 2007, ApJ, 655, 989[Zhang et al.(2009)]Zhang09 Zhang, B., Zhang, B.-B., Virgili, F. J., et al. 2009, , 703, 1696[Zitrin et al.(2015)]Zitrin15 Zitrin, A., Labbé, I., Belli, S., et al. 2015, , 810, L12
http://arxiv.org/abs/1703.09052v1
{ "authors": [ "N. R. Tanvir", "T. Laskar", "A. J. Levan", "D. A. Perley", "J. Zabl", "J. P. U. Fynbo", "J. Rhoads", "S. B. Cenko", "J. Greiner", "K. Wiersema", "J. Hjorth", "A. Cucchiara", "E. Berger", "M. N. Bremer", "Z. Cano", "B. E. Cobb", "S. Covino", "V. D'Elia", "W. Fong", "A. S. Fruchter", "P. Goldoni", "F. Hammer", "K. E. Heintz", "P. Jakobsson", "D. A. Kann", "L. Kaper", "S. Klose", "F. Knust", "T. Kruehler", "D. Malesani", "K. Misra", "A. Nicuesa Guelbenzu", "G. Pugliese", "R. Sanchez-Ramirez", "S. Schulze", "E. R. Stanway", "A. de Ugarte Postigo", "D. Watson", "R. A. M. J. Wijers", "D. Xu" ], "categories": [ "astro-ph.HE", "astro-ph.CO" ], "primary_category": "astro-ph.HE", "published": "20170327132622", "title": "The properties of GRB 120923A at a spectroscopic redshift of z=7.8" }
We develop some consequences of the connection between Calabi-Yau structures and torsion-free G_2 structures on compact and asymptotically cylindrical six- and seven-dimensional manifolds. Firstly, we improve the known proof that matching asymptotically cylindrical Calabi-Yau threefolds can be glued. Secondly, we give an alternative proof that the moduli space of Calabi-Yau structures on a six-dimensional real manifold is smooth, and extend it to the asymptotically cylindrical case. Finally, we prove that the gluing map of Calabi-Yau threefolds, extended between these moduli spaces, is a local diffeomorphism: that is, that every deformation of a glued Calabi-Yau threefold arises from an essentially unique deformation of the asymptotically cylindrical pieces.

§ INTRODUCTION

This paper is about Calabi-Yau threefolds, which we define as Riemannian manifolds with holonomy contained in SU(3). These have been extensively studied: this condition immediately implies that the manifold is Kähler and Ricci-flat. Conversely, if we were interested in Ricci-flat Kähler manifolds, the Calabi conjecture, proved by Yau <cit.>, states that in the compact case it is essentially sufficient to consider compact Kählerian manifolds whose canonical bundle has torsion first Chern class; it states that each such manifold has a unique Ricci-flat metric in each Kähler class. If, more strongly, we assume the canonical bundle is holomorphically trivial, so that there is a holomorphic volume form, then we have a parallel Kähler form and a parallel holomorphic volume form, so that the holonomy is contained in SU(3). Such manifolds are interesting because of their restricted holonomy, because manifolds with restricted curvature are interesting in general, and because they are conjectured to be useful in physics: in certain forms of supersymmetric string theory, spacetime is conjectured to take the form M × K where K is compact, Ricci-flat and Kähler. See <cit.>.

As an auxiliary tool, we will use G_2 manifolds, that is those seven-dimensional manifolds with holonomy contained in the exceptional Lie group G_2. Since the subgroup of G_2 fixing a nonzero vector is isomorphic to SU(3), the Riemannian product of a Calabi-Yau threefold and a general one-manifold is a G_2 manifold. The purpose of this paper is to use this correspondence to obtain results for Calabi-Yau threefolds from corresponding results for G_2 manifolds: because a small perturbation of a G_2 structure as a 3-form is again a G_2 structure, the G_2 analysis is often easier than trying to work with Calabi-Yau structures directly.

Specifically, we shall study a gluing construction of compact Calabi-Yau threefolds given by taking asymptotically cylindrical Calabi-Yau threefolds and joining them to form a manifold with a neck. Combining geometric objects via manifolds with necks can also be done if the original objects were not asymptotically cylindrical. An important early example was the construction of self-dual conformal structures on four-manifolds, initiated by Floer <cit.>. In turn, such combinations lead to questions about which deformations of the glued structure arise from compatible deformations of the structures being glued and which, if any, arise from choices in the gluing.
For instance, after showing that anti-self-dual connections on compact four-manifolds could be glued on a suitable connected sum, Donaldson and Kronheimer showed <cit.> that every deformation of an anti-self-dual connection on a four-manifold obtained by gluing can be obtained by either deformation of the connections being glued, variation of the length of the neck in gluing, or a change in the identification of the pieces, in an essentially unique way.

Later, Kovalev <cit.> gave a gluing construction of G_2 manifolds from asymptotically cylindrical G_2 manifolds, in such a way that the holonomy was exactly G_2. The idea is that we can easily construct a G_2 structure with torsion, and that this torsion is small and can be removed by a perturbation argument. Removing small torsion was previously studied by Joyce <cit.> with a detailed result designed for use in desingularising conical singularities. In the case of gluing asymptotically cylindrical G_2 structures, Nordström <cit.> proved the analogous result on deformations: that every deformation of a G_2 structure obtained by gluing two matching asymptotically cylindrical G_2 structures is obtained, uniquely, by one of perturbing the glued structures, perturbing the identification of the two manifolds, or perturbing the length of the neck. We therefore seek a similar result in the Calabi-Yau case.

We note that by work beginning with Tian–Yau and continued by Kovalev and Haskins–Hein–Nordström <cit.>, asymptotically cylindrical Calabi-Yau manifolds do indeed exist. It is known that they can be glued by deformation in the sense of complex algebraic geometry. Using the smoothing results of Friedman <cit.> and Kawamata–Namikawa <cit.>, Lee <cit.> showed that we can deform the singular space given by identifying the manifolds “at infinity" to give a smooth manifold. This work leaves open the question of what the topology of the deformed space is and so whether it can be regarded as a gluing given by cutting off the structures to their asymptotically cylindrical limits, joining them to form a manifold with a long neck, and perturbing to maintain a Calabi-Yau structure. In a similar vein, it remains open whether all the different possible ways of deforming the singular space yield structures equivalent up to diffeomorphism, and so it is not clear that using these deformations gives a well-defined map on moduli spaces. We therefore will not consider this approach.

We concentrate on the case n=3 because then we only have to introduce one extra dimension to reach seven-dimensional G_2 manifolds. Similar arguments should work for n<3. If n=2, the obvious thing to do is to consider the Riemannian and holomorphic product of a Calabi-Yau twofold with a complex torus to obtain SU(2) results from the SU(3) results we prove in this paper. Hence, it would probably be necessary to consider the moduli space of Calabi-Yau structures on such a torus. In the same way, it should be possible given a “Calabi-Yau curve" to construct a twofold by taking a product with a complex torus; however, as we need the moduli space of Calabi-Yau structures on a complex torus to apply this reduction, care would be required to avoid circular reasoning.

By a Calabi-Yau structure, we mean a torsion-free SU(n) structure. We regard both a Calabi-Yau and a torsion-free G_2 structure on a manifold as a set of differential forms satisfying appropriate algebraic and differential conditions; an asymptotically cylindrical such structure satisfies additional asymptotic conditions.
We may then define moduli spaces ℳ_SU(3) and ℳ_G_2 to be the quotient of structures on a given compact or asymptotically cylindrical manifold by the natural pullback action by (asymptotically cylindrical) diffeomorphisms. Similarly, by making everything invariant by a circle action, we may define a moduli space ℳ^S^1_G_2 of S^1-invariant torsion-free G_2 structures on a product M × S^1. When n=3, we obtain

The Calabi-Yau moduli space ℳ_SU(3) on the compact or asymptotically cylindrical manifold M is a smooth manifold, and we have a diffeomorphism

ℳ_SU(3) × ℝ_>0 × H^1(M, ℝ) ≅ ℳ^S^1_G_2

Also, ℳ^S^1_G_2 is locally diffeomorphic to open subsets of the G_2 moduli space (again, compact or asymptotically cylindrical) ℳ_G_2 on M × S^1.

Theorem A is proved as Theorem <ref> and Theorem <ref>. The ℝ_>0 and H^1(M, ℝ) components give the length of the S^1 factor and in some sense its angle to the M factor respectively. Note that it follows that ℳ_SU(3) is finite-dimensional. Its dimension is easy to compute from the corresponding G_2 results: for instance, in the compact case, we get

dim ℳ_SU(3)(M) = b^3(M × S^1) - 1 - b^1(M) = b^3(M) + b^2(M) - b^1(M) - 1

(a worked example is given below).

Given that we can glue Calabi-Yau structures (which we prove as Theorem <ref>), there is evidently a map from pairs of Calabi-Yau structures on a pair of manifolds to the glued structures. This map need not immediately induce a well-defined map of moduli spaces. However, once Theorem A is established, we can show

The gluing map from the “moduli space of asymptotically cylindrical SU(3) gluing data" to the moduli space of SU(3) structures on the glued manifold is well-defined, and is a local diffeomorphism.

The technical statement of this theorem is Theorem <ref>, which follows a considerable amount of necessary preliminary work. The theorem says roughly that if a Calabi-Yau structure is given by gluing, then any sufficiently close deformation of it is given by a gluing of small deformations of the original asymptotically cylindrical Calabi-Yau structures and gluing parameters, and moreover these structures and parameters are unique for any given deformation. The proof is fairly straightforward given Theorem A: using the moduli spaces from Theorem A, we define a moduli space of gluing data, by which we mean structures and necessary parameters, and an analysis of the structure of this moduli space and the gluing map shows that Theorem B follows from the corresponding result for G_2. Our approach to both of these thus rests on the result analogous to Theorem A at the level of structures.

Suppose M is six-dimensional and is compact or has an end. There is a homeomorphism between (asymptotically cylindrical) G_2 structures on M × S^1 which are S^1-invariant and triples consisting of an SU(3) structure on M, a positive number, and an (asymptotically translation invariant with appropriate limit) one-form on M, with respect to the smooth topologies on the spaces of differential forms and hence structures.

The precise statement of this theorem is Theorem <ref> below. We should say immediately that the compact case is essentially already known, following from work of Chan <cit.> by allowing slightly more variation.
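For example (a standard illustration, not taken from elsewhere in this paper): for the quintic threefold M ⊂ ℂP^4 one has b^1(M) = 0, b^2(M) = 1, and b^3(M) = 2(1 + h^2,1) = 2(1 + 101) = 204, so the dimension formula above gives dim ℳ_SU(3)(M) = 204 + 1 - 0 - 1 = 204. This is consistent with the naive parameter count: 2 × 101 = 202 real parameters for the complex structure, one for the Kähler class, and one for the phase of Ω, the scale of Ω being fixed relative to ω by the normalisation condition on SU(3) structures.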
A slightly more subtle version of Theorem C would also show that given an SU(3) structure with small torsion, we can construct a G_2 structure with small torsion, perturb to remove the torsion there, and then return to a Calabi-Yau structure. This idea was used by Chan <cit.> in the context of desingularising conical Calabi-Yau orbifolds. He showed that a “nearly Calabi-Yau structure" can be deformed to a Calabi-Yau structure by passing through a G_2 structure on M × S^1. We do not use Chan's concept of a nearly Calabi-Yau structure, but a similar analysis would apply to a nearly Calabi-Yau structure obtained by cutting off and gluing the SU(3) structures (Ω_1, ω_1), (Ω_2, ω_2) directly, proving our Theorem <ref>.

The idea was also used by Doi and Yotsutani <cit.> for gluing asymptotically cylindrical SU(3) structures, in the simply connected case. In distinction to Chan's approach of gluing SU(3) structures approximately first and then passing to G_2, they crossed with S^1 first and then glued the resulting G_2 structures using Kovalev's result, with no guarantee that the answer is well-adapted to this product. Then, using simple connectedness and the Cheeger-Gromoll splitting theorem, as well as classification theory for six-manifolds, they argued that the (simply connected) glued six-manifold is diffeomorphic to a six-dimensional factor of the universal cover of the glued seven-manifold; since the universal cover is a Riemannian product and also admits a G_2 structure, this factor must admit an SU(3) structure. Our analysis simply removes the need for the classification theory and splitting (and consequently the necessary assumption of simple connectedness): in the simply connected case we can identify the universal cover concretely.

The relation between SU(3) and G_2 structures is also used in physics, for instance by de la Ossa, Larfors, and Svanes <cit.>. By putting a G_2 structure on the product of their SU(3) manifold and a half-line, they constrain the evolution of an SU(3) structure with torsion (according to various physical hypotheses), and so study paths in the “moduli space" of SU(3) structures with torsion. They showed for instance that if the G_2 structure does not satisfy specific torsion conditions, then various components of the torsion of the SU(3) structures may become nonzero along these paths even if they were zero to start with. None of these authors consider, however, whether similar arguments and Nordström's theorem in the G_2 case <cit.> enable us to say anything about the action of this gluing on the moduli space.

The first step towards such a result is to set up the Calabi-Yau moduli space, and establish Theorem A. In the compact case, a moduli space of Calabi-Yau structures can be constructed using Hitchin's work <cit.> on three-forms on six-dimensional compact manifolds. He showed that if Ω was the real part of a holomorphic volume form, then a small perturbation of Ω was also the real part of a holomorphic volume form, and showed that locally each cohomology class of three-forms only contained one such real part. To get the full moduli space result, this has to be combined with some work on deformations of the Kähler form: this was carried out by Nordström in <cit.> to provide boundary values for deformations of asymptotically cylindrical G_2 manifolds. We do not use this approach to the Calabi-Yau moduli space directly, except when considering how this deformation theory of G_2 manifolds passes to the S^1-invariant setting.
If we set up the Calabi-Yau moduli space in this way, it is relatively straightforward to show the remaining part of Theorem A. On the other hand, the asymptotically cylindrical Calabi-Yau moduli space has not been studied in the sense we will use. Various similar objects have been studied. Most of these studies have used the notion of a logarithmic deformation of a complex structure. By a result of Haskins–Hein–Nordström <cit.>, any asymptotically cylindrical Calabi-Yau manifold of dimension greater than 2 is given by removing a divisor from a suitable orbifold. Kawamata <cit.> studied deformations of complex manifolds compactifiable in a similar sense; he called those corresponding to a fixed compactification logarithmic deformations. Kovalev <cit.> studied deformations of Ricci-flat Kähler metrics. Given an asymptotically cylindrical Calabi-Yau manifold, he showed that we can define an orbifold of Ricci-flat metrics around its metric, and that locally all such metrics with the same limit are Kähler for some logarithmic deformation of the complex structure. However, <cit.> did not consider how the complex structure varied in general: in particular, it did not consider variations of complex structure not leading to a change in the metric.

More recently, Conlon, Mazzeo and Rochon <cit.> combined <cit.> with <cit.> to show (their Theorem C) that the complex deformation theory of asymptotically cylindrical Calabi-Yau manifolds is unobstructed. Conversely to Kovalev, they did not explicitly consider deformations of the Kähler form, but observed (Lemma 9.1) that if the first Betti number of the compactifying orbifold is zero, then Kähler classes remain Kähler under such deformations of complex structure. Combining this with <cit.>, which says in particular that for any Kähler class we can find an asymptotically cylindrical Calabi-Yau metric, we essentially expect that every Kähler class on the orbifold gives a deformation of the metric corresponding to this complex deformation. To get a result comparable to the one we use, we would also have to combine this with <cit.> so that we can vary the Kähler class without varying the complex structure, and both the Kähler class and the complex structure simultaneously. We would also need to extend <cit.> to the case where the first Betti number of the compactifying orbifold is nonzero.

Nevertheless, the G_2 moduli space has already been studied in the asymptotically cylindrical case. The result we use is Theorem <ref>: in the compact case it is due to Bryant and Harvey, but the first published proof was provided by Joyce; in adapting it for the results we need, we follow the simplified proof of Hitchin <cit.> and its elaboration for the asymptotically cylindrical case by Nordström <cit.>. Ebin <cit.> also constructed a moduli space of general metrics, and some ideas from his paper have become standard.

This paper is organised as follows. In section <ref>, we recall various preliminary definitions. In subsection <ref>, we define what it means for a manifold to be asymptotically cylindrical and make various suitably adapted definitions (of diffeomorphisms, isotopy, and so forth). This material is all review from various sources ranging from Lockhart–McOwen <cit.> and to some extent Maz'ja–Plamenevskiĭ <cit.> to Nordström <cit.>, except the discussion of asymptotically cylindrical isotopy, which is perhaps implied in some of these sources. Subsection <ref> gives a definition of the notion of an SU(n) structure (for general n), mostly following Hitchin <cit.>.
Finally in subsection <ref>, we introduce the notion of a G_2 structure following Joyce <cit.>. In section <ref>, we then explain Chan's work <cit.> on the connection between S^1-invariant G_2 and SU(3) structures, and how to extend this work to the asymptotically cylindrical case. This proves Theorem C. In section <ref>, we begin to consider moduli spaces. We define a moduli space of S^1-invariant G_2 structures in such a way that Chan's arguments extend to give a bijection between it and the product required by Theorem A. By analysing the arguments of Hitchin <cit.> and Nordström <cit.> showing that the G_2 moduli space is smooth, we then show that this moduli space is smooth; in fact, it is locally diffeomorphic to an open subset of the full G_2 moduli space. Using these together, we then argue that the SU(3) moduli space can be assumed to be smooth with a suitably natural smooth structure.

Finally, section <ref> falls into five subsections. Firstly, in subsection <ref>, we show that there are sensible (indeed, potentially multiple) gluing maps defined on matching SU(3) structures (essentially using the G_2 gluing map of <cit.>). In subsection <ref> we consider how they differ, or not, as maps into moduli space. In subsection <ref> we define moduli spaces of gluing data, following <cit.>, and show the analogous result to Theorem A on the relation between these in the G_2 and Calabi-Yau setting; in subsection <ref> we define the gluing map and check it is well-defined. Finally in subsection <ref>, we combine improved versions of the statements from the first subsection on how our family of gluing maps behaves with the local diffeomorphism of G_2 moduli spaces proved in <cit.> to prove that each gluing map defines a local diffeomorphism of SU(3) moduli spaces also.

Notation. All cohomology groups are to be understood with real coefficients as de Rham cohomology groups.

Acknowledgement. I am indebted to Alexei Kovalev for much helpful advice and guidance.

§ PRELIMINARIES

In this section, we set up preliminary definitions. In subsection <ref>, we define various necessary asymptotically cylindrical objects, which will be used throughout. We then turn to SU(n) and G_2 structures. We will follow the definitions and in large part the approach of Hitchin <cit.> when dealing with SU(n) in subsection <ref>, and <cit.> when dealing with G_2 in subsection <ref>.

§.§ Asymptotically cylindrical manifolds

We first define asymptotically cylindrical structures of various kinds.

[cf. <cit.>] An oriented Riemannian manifold (M, g) is said to have an end if it can be decomposed as a smooth manifold into a compact manifold M^cpt, with compact (oriented) boundary N, and the product manifold N × [0, ∞), with the obvious identification of N = ∂M^cpt with N × {0}. Given such a manifold we can find a global function t that agrees with the [0, ∞) coordinate where that coordinate is greater than 1 and is zero on M^cpt; throughout, we shall use t for this function. M is then said to be asymptotically cylindrical with rate δ>0 if there is some metric g_N on N and constants C_r such that

|D^r (g|_N × [0, ∞) - g_N - dt^2)| < C_r e^-δ t

for all r = 0, 1, …, where D is the Levi-Civita connection induced by g and |·| is the metric induced by g on the appropriate space of tensors. (M, g) is said to be asymptotically cylindrical if there is some δ>0 such that it is asymptotically cylindrical with rate δ.

Given a bundle E associated to the tangent bundle over M, a section α̃ of E|_N extends to a section of E|_N × [0, ∞) by extending parallel in t.
A section α of E is said to be asymptotically translation-invariant with rate 0<δ'<δ if there is a section α̃ of E|_N such that (<ref>) holds (with δ') for |D^r (α - α̃)| for t > T. Throughout, given a section α of such a bundle, α̃ will be its limit. Note that dt^2 + g|_N is not asymptotically translation invariant for any rate greater than δ, which is why we restrict to δ'<δ.

If (M, g_1) is asymptotically cylindrical with rate δ and α is asymptotically translation invariant with rate δ'<δ and g_2 is another asymptotically cylindrical Riemannian metric on M with rate ϵ>δ', then α is also asymptotically translation invariant on (M, g_2) with rate at least δ'. Consequently, we may refer to “asymptotically translation-invariant" tensors without specifying an asymptotically cylindrical metric. To simplify statements, all fields on asymptotically cylindrical manifolds will be taken to be asymptotically translation invariant.

Given our smooth function t on a manifold M with an end, we can define cutoff functions. Let ψ_T: ℝ → ℝ be a smooth function with

ψ_T(t) = 1 for t ≥ T-1,  0 for t ≤ T-2

Then ψ_T evidently extends to M. We let ψ = ψ_2.

We have a C^k topology on asymptotically translation invariant forms with a given rate δ'. This is called the extended weighted topology in <cit.>. See also the extended L^2 spaces of <cit.>, although both of these work with extensions of Sobolev spaces whereas we work with Hölder spaces. Specifically, we define a C^k_δ-norm on a subset of asymptotically translation invariant fields α by

‖α‖_C^k_δ = ‖(1-ψ)α + ψ e^δ t(α - α̃)‖_C^k(E, g) + ‖α̃‖_C^k(E|_N, g̃)

The topology induced by (<ref>) is called the topology with weight δ. Note that if, for some given α, there is some fixed δ so that these norms are finite for all k=0, 1, …, then α must be asymptotically translation invariant with any rate less than δ. On the other hand, if α is asymptotically translation invariant with rate greater than δ, then all these norms are finite. In practice, the major results are proved for each individual δ, and if necessary we then combine the different δs (a simple example of these norms is worked at the end of this discussion below). In the same way we have a Hölder C^k,α_δ topology, and by taking the inverse limit we also have a C^∞_δ topology. These topologies (as opposed to the norms) depend only on the decay rate of the metric, since by compactness of N and M^cpt all metrics with the same decay rate are Lipschitz equivalent.

In section <ref> (e.g. Definition <ref>), we will also need the notion of an asymptotically cylindrical diffeomorphism: general diffeomorphisms obviously do not have to preserve the cylindrical asymptotic, and so do not act by pullback on asymptotically cylindrical metrics. Following Nordström <cit.>, we make

[<cit.>] A diffeomorphism Φ of the asymptotically cylindrical manifold (M, g) is asymptotically cylindrical if there is a diffeomorphism Φ̃ of N and a parameter L ∈ ℝ such that

Φ(n, t) → (Φ̃(n), t+L)

exponentially, meaning that on restriction to N × (T, ∞) for some large T, we have Φ = exp V ∘ (Φ̃(n), t+L) for some vector field V on M decaying exponentially with all derivatives.

The pullback by an asymptotically cylindrical diffeomorphism of an asymptotically cylindrical metric is asymptotically cylindrical. Also, whether a diffeomorphism Φ of M is asymptotically cylindrical does not depend essentially on the asymptotically cylindrical metric g (compare Proposition 6.22 of <cit.>). Finally we note that the asymptotically cylindrical diffeomorphisms form a group.
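Here is the simple example of the weighted norms promised above (our illustration, on an exact cylinder): take M = N × [0, ∞) with the product metric, E the trivial line bundle, and α the function f(n, t) = c + e^-δ' t for constants c ∈ ℝ and δ' > 0. Then f̃ = c, and ψ e^δ t(f - f̃) = ψ e^(δ - δ')t, which is bounded together with all its derivatives precisely when δ ≤ δ'. Hence ‖f‖_C^k_δ is finite for every k exactly when δ ≤ δ', matching the fact that f is asymptotically translation invariant with rate δ' and with no faster rate.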
For section <ref>, we would like to restrict to the identity component of asymptotically cylindrical diffeomorphisms. We thus need to define a topology. The topology we shall define is essentially used, although not in so many words, by Kovalev <cit.> in his choice of norm on the generating vector fields, and by Nordström implicitly (some such assumption is required to get the bottom of <cit.>); we have stated it explicitly to avoid confusion about what “isotopic to the identity" means.

Let (M, g) be an asymptotically cylindrical manifold. Fix δ > 0, smaller than the decay rate of g, and consider the subset 𝒟^g_δ of asymptotically cylindrical diffeomorphisms that decay at rate at least δ with respect to this metric. We define a topology on 𝒟^g_δ by giving neighbourhoods of the identity.

Let Φ be an asymptotically cylindrical diffeomorphism. By definition, Φ = exp V ∘ (Φ̃(n), t+L) far enough along the end. If (Φ̃, L) is not close to the identity, then we do not include Φ in the neighbourhood. Consequently, we may suppose that Φ̃ = exp W, where this exponential map is taken with respect to g̃, so that Φ = exp_g V ∘ exp_g̃(W + L ∂/∂ t). Then V + W + L ∂/∂ t is an asymptotically translation invariant vector field. That is, we have identified a subset of 𝒟^g_δ containing the identity all of whose elements define an asymptotically translation invariant vector field, and where the identity defines the zero vector field. To define our neighbourhoods of the identity, we then take the neighbourhoods of zero with respect to the extended weighted topologies described in Definition <ref> on the corresponding asymptotically translation invariant vector fields.

We may now define a topology on the set 𝒟 of all asymptotically cylindrical diffeomorphisms. U ⊂ 𝒟 is open if and only if U ∩ 𝒟^g_δ is open in the topology of Definition <ref> for every δ>0.

This topology is also independent of our choice of metric g. As the map Φ ↦ Φ̃ is continuous, this definition also automatically gives us a well-defined map from a quotient by 𝒟_0 on M to a quotient by 𝒟_0 on N. We now make

The asymptotically cylindrical diffeomorphism Φ is asymptotically cylindrically isotopic to the identity if it lies in the identity component 𝒟_0 of 𝒟.

For simplicity, if M is an asymptotically cylindrical manifold, then if we say “Φ is a diffeomorphism of M isotopic to the identity", we shall mean that Φ is an asymptotically cylindrical diffeomorphism asymptotically cylindrically isotopic to the identity. Furthermore, each potential limit (Φ̃, L) of an asymptotically cylindrical diffeomorphism defines a closed subspace of 𝒟, as the map to the limit is continuous; we shall say that diffeomorphisms are isotopic with fixed limit if they can be joined by a continuous path in such a subspace (in particular, of course, this implies that they have the same limits). We use isotopy with fixed limit in Definition <ref>.

§.§ SU(n) structures

We now define SU(n) structures, following Hitchin <cit.>.

Let M be a 2n-dimensional manifold. An SU(n) structure on M is induced by a pair (Ω, ω) where Ω is a smooth complex n-form on M and ω is a smooth real 2-form on M such that at every point p of M:

i) Ω_p = β_1 ∧ ⋯ ∧ β_n for some β_i ∈ T^*_p M ⊗ ℂ
ii) Ω_p ∧ Ω̅_p ≠ 0
iii) Ω ∧ Ω̅ = ((-2)^n i^n^2/n!) ω^n
iv) ω ∧ Ω = 0
v) ω_p(v, Iv) > 0 for every nonzero v ∈ T_pM, where I is as in the following proposition.

(We give a concrete example below.) The standard definition is actually that an SU(n) structure is a principal SU(n)-subbundle of the frame bundle of M (for instance, see <cit.>).
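For a concrete example (a standard computation, included here for illustration): on ℝ^6 with coordinates x_2, …, x_7, set Ω_0 = (dx_2 + i dx_3) ∧ (dx_4 + i dx_5) ∧ (dx_6 + i dx_7) and ω_0 = dx_2 ∧ dx_3 + dx_4 ∧ dx_5 + dx_6 ∧ dx_7. Writing dx_ijk for dx_i ∧ dx_j ∧ dx_k, we have Re Ω_0 = dx_246 - dx_257 - dx_347 - dx_356 and Im Ω_0 = dx_247 + dx_256 + dx_346 - dx_357, so Ω_0 ∧ Ω̅_0 = -2i Re Ω_0 ∧ Im Ω_0 = -8i dx_234567, while for n = 3 the right hand side of iii) is ((-2)^3 i^9/3!) ω_0^3 = (-8i/6)(6 dx_234567) = -8i dx_234567; so iii) holds, and conditions i), ii), iv) and v) are immediate for the standard complex structure. This (Ω_0, ω_0) is the flat SU(3) structure on ℂ^3, and we will return to it when relating SU(3) and G_2 structures below.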
A pair of forms as in Definition <ref> determines such a subbundle by taking the stabiliser at each point, but evidently the same subbundle could be obtained from different pairs of forms. Nevertheless, we shall abuse notation and say that (Ω, ω) is an SU(n) structure. If the two forms have stabiliser SU(n), they determine an almost complex structure and a hermitian metric.

Suppose that M is a 2n-dimensional manifold and (Ω, ω) is an SU(n) structure on it. Then there is a unique almost complex structure I on M with respect to which Ω is an (n, 0)-form and, with respect to I, ω is the fundamental form of an hermitian metric g.

The scaling condition iii) is not used in the proof of this result: it is used for torsion considerations below. In fact, in <cit.>, Joyce calls a structure in which iii) may fail an almost Calabi-Yau structure. Our analysis of the relation between SU(3) structures and G_2 structures extends to a relation between almost Calabi-Yau structures and G_2 structures without much additional work; we give more details in Remarks <ref> and <ref>.

However, we would like I to be a complex structure and ω a Kähler form. To achieve this, we add further conditions, making

Suppose that (Ω, ω) is an SU(n) structure. It is said to be torsion-free, or a Calabi-Yau structure, if dΩ and dω are both zero.

Note that Definition <ref> is equivalent to the vanishing of the intrinsic torsion of the principal SU(n)-subbundle. We indeed have

Suppose that (Ω, ω) is a torsion-free SU(n) structure. The induced almost complex structure I is a complex structure, and the hermitian metric induced from ω is Kähler and Ricci-flat.

We now have to combine Definition <ref> with Definition <ref> to define asymptotically cylindrical SU(n) structures.

An SU(n) structure on a manifold M with an end is said to be asymptotically cylindrical if the induced metric g is asymptotically cylindrical and, with respect to g, Ω and ω are asymptotically translation invariant.

Note that if (Ω, ω) are asymptotically translation invariant forms for some cylindrical metric, it need not be the case that ∂/∂ t is orthogonal to N. Conversely, any almost complex structure admitting a non-vanishing (n, 0)-form and such that a cylindrical metric is hermitian can be combined with that cylindrical metric to yield an SU(n) structure inducing a cylindrical metric. This almost complex structure need not be asymptotically translation invariant, so that the fact that the induced metric g is asymptotically cylindrical does not imply that Ω and ω are asymptotically translation invariant. That is, the two conditions of Definition <ref> are independent of each other.

If we have a torsion-free asymptotically cylindrical SU(n) structure on M, then it induces a Ricci-flat metric. If M has disconnected cross-section, it admits a line between different components of the cross-section; by the Cheeger–Gromoll splitting theorem <cit.> it is then a product cylinder N × ℝ, and so not especially interesting. We may thus assume N is connected wherever required. It is only explicitly required in Lemma <ref>, but that lemma is often used after its proof.

We will also find it useful for Proposition <ref> to know that diffeomorphisms of a Calabi-Yau manifold isotopic to the identity are isometries if and only if they are automorphisms of the underlying Calabi-Yau structures.
The infinitesimal version of this (that Killing fields are holomorphic vector fields and vice versa) is a special case of <cit.>.

Suppose M is compact or has an end, that (Ω, ω) is an (asymptotically cylindrical) torsion-free SU(n) structure, and that Φ ∈ 𝒟_0(M). Then Φ^* g_(Ω, ω) = g_(Ω, ω), i.e. Φ is an isometry, if and only if Φ^* Ω = Ω and Φ^* ω = ω, i.e. Φ is an automorphism of the SU(n) structure.

If Φ is an automorphism, it is clearly an isometry, because the metric is obtained from (Ω, ω) in a natural fashion, so commuting with the pullback. Conversely, if Φ is an isometry, it pulls back Ω and ω to parallel forms with respect to the induced metric. If M is compact, it also preserves the cohomology classes [Ω] and [ω]; since an exact parallel form is zero, it preserves Ω and ω and so is an automorphism. If M has an end, then the limit cohomology classes are also preserved, and the differences Φ^* Ω - Ω and Φ^* ω - ω are exponentially decaying parallel forms trivial in cohomology, and hence zero.

§.§ G₂ structures

As a technical tool, we will also use G_2 structures on 7-dimensional manifolds. We define

A G_2 structure on a seven-dimensional manifold M is a smooth three-form ϕ such that at every point p there is a basis e_1, …, e_7 of T_p^*M such that

ϕ_p = e_1 ∧ e_2 ∧ e_3 + e_1 ∧ e_4 ∧ e_5 + e_1 ∧ e_6 ∧ e_7 + e_2 ∧ e_4 ∧ e_6 - e_2 ∧ e_5 ∧ e_7 - e_3 ∧ e_4 ∧ e_7 - e_3 ∧ e_5 ∧ e_6

Every G_2 structure induces a metric, by taking the corresponding basis to be orthonormal. Thus, ϕ induces a 4-form *_ϕϕ. We use this 4-form to define torsion-freeness of a G_2 structure.

A G_2 structure ϕ on M is torsion-free if the forms ϕ and *_ϕϕ are both closed.

As in the SU(n) case, this is equivalent to torsion-freeness of the principal G_2-subbundle given by the stabiliser. Torsion-freeness would more naturally be described by ϕ being parallel with respect to the metric it induces; that Definition <ref> implies this is a result of Fernández and Gray <cit.>. Further as in the SU(n) case, Definition <ref> implies that the induced metric is Ricci-flat.

Also as in the SU(n) case, we require a notion of asymptotically cylindrical G_2 structure. As in Definition <ref>, we make

A G_2 structure on a manifold M with an end is said to be asymptotically cylindrical if the induced metric is asymptotically cylindrical and, with respect to this metric, ϕ is asymptotically translation invariant.

The fact that torsion-free G_2 structures induce Ricci-flat metrics gives the analogue of Remark <ref> in the asymptotically cylindrical G_2 setting; the fact they are parallel yields the analogue of Lemma <ref> in the compact and asymptotically cylindrical G_2 settings, with the same proof.

§ SU(3) STRUCTURES AS S¹-INVARIANT G₂ STRUCTURES

We now proceed to the relationship between SU(3) and G_2 structures induced by the inclusion SU(3) ⊂ G_2, and prove Theorem C. We closely follow ideas in Chan's analysis of the three-dimensional conical gluing problem, <cit.>. Only the details of the asymptotically cylindrical case are original. The beginnings of these ideas can be found in <cit.>, which corresponds to the easier directions of Propositions <ref> and <ref>: that if we have a Calabi-Yau structure (Ω, ω) on a six-manifold M (without any global conditions), we can induce a torsion-free G_2 structure on M × S^1 by ϕ = Re Ω + dθ ∧ ω, with corresponding metric g_M + dθ^2. Our setup, following Chan, is more general.
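To see this concretely (our computation, continuing the flat example on ℝ^6 above): take e_1 = dθ and e_i = dx_i for i = 2, …, 7. Then

Re Ω_0 + dθ ∧ ω_0 = e_1 ∧ e_2 ∧ e_3 + e_1 ∧ e_4 ∧ e_5 + e_1 ∧ e_6 ∧ e_7 + e_2 ∧ e_4 ∧ e_6 - e_2 ∧ e_5 ∧ e_7 - e_3 ∧ e_4 ∧ e_7 - e_3 ∧ e_5 ∧ e_6

which is precisely the standard form of Definition <ref>, so ϕ = Re Ω + dθ ∧ ω is indeed pointwise a G_2 structure whenever (Ω, ω) is an SU(3) structure.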
Because in gluing we introduce a perturbation to the G_2 structure which is uncontrolled except for being small, it is not at all clear that the G_2 structure will remain in the proper subspace {Re Ω + dθ ∧ ω: (Ω, ω) an SU(3) structure}. For instance, and more geometrically, it is not clear that the perturbed G_2 structure will either have ∂/∂θ orthogonal to M or have ∂/∂θ of length one. We introduce z, therefore, as a generalisation of dθ: a 1-form with nonzero coefficient of dθ. In practice, we shall soon assume it has positive coefficient of dθ, and by the end of this section we will assume that z = L dθ + v for a constant L and a closed 1-form v on M. In particular, if b^1(M) = 0, we could reduce further when discussing moduli spaces, and we may as well assume in section <ref> and the relevant parts of section <ref> that z = L dθ. However, we work in full generality to start with. We begin with the vector space case, where z is just a covector.

Suppose given a six-dimensional vector space V, and suppose that z ∈ (V ⊕ ℝ)^* ≅ V^* ⊕ ℝ is complementary to V^*. Suppose further that (Ω, ω) is an SU(3) structure on V (that is, (Ω, ω) ∈ ⋀^3 V^* ⊗ ℂ ⊕ ⋀^2 V^* satisfying the conditions of Definition <ref>) with associated metric g_Ω, ω. Then the three-form

ϕ = Re Ω + z ∧ ω

is a G_2 structure on V ⊕ ℝ (that is, ϕ ∈ ⋀^3 (V ⊕ ℝ)^* satisfying the condition in Definition <ref>) with associated metric z ⊗ z + g_Ω, ω. Moreover, given a G_2 structure ϕ on V ⊕ ℝ, there exist exactly two possible triples (z, Ω, ω) and (z', Ω', ω'), with (Ω, ω) and (Ω', ω') SU(3) structures on V, such that ϕ was obtained from this triple as in (<ref>). They satisfy z' = -z, Ω' = Ω̅, and ω' = -ω.

First choose a basis e_2, e_3, …, e_7 of V^* so that (Ω, ω) is the standard SU(3) structure (this can be done similarly to the proof of Proposition <ref>):

Ω = (e_2 + i e_3) ∧ (e_4 + i e_5) ∧ (e_6 + i e_7)    ω = e_2 ∧ e_3 + e_4 ∧ e_5 + e_6 ∧ e_7

Then e_1=z, e_2, …, e_7 is a basis of (V ⊕ ℝ)^* and Re Ω + z ∧ ω is the G_2 structure given by (<ref>) with respect to this basis; hence, this construction always yields a G_2 structure. The metric follows by this construction: the dual basis vector to z is orthogonal to V and length 1, and V has its original metric.

Conversely, suppose given a G_2 structure ϕ on V ⊕ ℝ. We want to choose a basis so that ϕ is the G_2 structure given by (<ref>) and V^* is spanned by e_2, …, e_7. First choose any basis such that ϕ is given by (<ref>). We need an element of G_2 mapping {e_2, …, e_7} to V^*. This is equivalent to taking e_1 to V^*'s unit normal, and this is possible because G_2 is transitive on S^6. In this basis, we then have the structure of the previous paragraph.

To show that any ϕ is given precisely by these two triples, we begin by noting that if Re Ω + z ∧ ω is a G_2 structure then z and V^* are orthogonal with respect to the induced metric and z has length 1. Therefore, z is a unit normal vector to V^* with respect to the metric of ϕ and is determined up to sign. Fix a possible z, and then consider SU(3) structures (Ω, ω) on V such that ϕ = z ∧ ω + Re Ω; the fact that z is complementary means ϕ and z uniquely determine ω and so Re Ω. It is easy to check that given any such z, ω and Ω,

*ϕ = (1/2) ω ∧ ω - z ∧ Im Ω

and so as ϕ uniquely determines *ϕ it also uniquely determines Im Ω, and hence Ω.

If we reverse the sign of z, this merely reverses the signs of Im Ω and ω, and so gives the second triple.

At this point, we have two options. We can either hold on to this 2:1 correspondence throughout, or we can make a uniform choice.
We will do the latter, partly for notational simplicity and partly to guarantee that the set of z's is connected (a similar result will be technically useful later). Since (Ω, ω) ↔ (Ω̅, -ω) is an isomorphism of SU(3) structures, it has no serious effect on the results.

We shall express this choice as an orientation on ℝ (and later the corresponding manifold S^1); this fixes the sign of z by demanding that its relevant component should be positive with respect to a standard form dθ on ℝ; equivalently, this is defining which orientation V has as a subspace of V ⊕ ℝ.

If (Ω, ω) is only the restriction to a point of an almost Calabi-Yau structure as in Joyce <cit.> (that is, we drop the normalisation condition on the relative sizes of Ω and ω), the forward direction clearly gives a G_2 structure, as we may imagine rescaling z. However, for this reason there are many choices for the backward direction: we may freely scale ω and z. Thus in this case we may assume that the dθ component of z has coefficient ±1, to retain our two options.

It is clear that the correspondence of Proposition <ref> extends to global structures. We require the notion of a structure being S^1-invariant to ensure that each G_2 structure arises from a single SU(3) structure.

Let M be a six-dimensional manifold. Consider the product M × S^1, and let ∂/∂θ be the vector field corresponding to a global function θ giving a coordinate on the circle. The diffeomorphism Θ is given by the flow of ∂/∂θ for some time. A differential form α on M × S^1 is said to be S^1-invariant if its Lie derivative in the ∂/∂θ direction is zero, or equivalently it is preserved by pullback by Θ. Any other tensor is said to be S^1-invariant if the same conditions hold (since Θ is a diffeomorphism we can consider pushforward by its inverse). A map of tensors is S^1-equivariant if it commutes with the appropriate pullback and pushforward maps induced by Θ.

We shall use ∂/∂θ, Θ, and the notion of S^1-invariance throughout. S^1-equivariance is primarily used in subsection <ref>. We may now state

Let M be a six-dimensional manifold admitting an SU(3) structure. Let z be an S^1-invariant covector field on M × S^1 that is always complementary to the subbundle T^*M and has positive orientation with respect to the circle, ∫_{p}× S^1 z > 0 everywhere. Then the construction of Proposition <ref> yields a G_2 structure on M × S^1. Conversely, if M × S^1 admits a G_2 structure, the structure is constructed as in Proposition <ref> from some unique section of the bundle of SU(3) structures on TM over M × S^1 (that is, a structure on T_pM at each point (p, θ), but potentially varying with θ) and some unique complementary covector field on M × S^1 with suitable orientation at each point.

In particular, if the G_2 structure is S^1-invariant, then both of these sections are S^1-invariant, so reduce to an SU(3) structure on M and an S^1-invariant complementary covector field with suitable orientation as in the previous paragraph.

Moreover, the maps (z, Ω, ω) ↦ ϕ and ϕ ↦ (z, Ω, ω) are smooth maps of Fréchet spaces.

Proposition <ref> proves the first two paragraphs pointwise; we have to show that the resulting sections are smooth, and the final paragraph. We prove the final paragraph in proving smoothness in the first and second. For the first, if z, ω and Ω are smooth sections of the relevant bundles then so is ϕ = Re Ω + z ∧ ω, and as this is multilinear it is clearly a smooth function of (z, Ω, ω).
For the second paragraph, we show directly that z, Ω, and ω are smooth functions of ϕ, and so in particular are themselves smooth if ϕ is. We begin by showing that z is smooth. We observe that z = (∂/∂θ)^♭/|∂/∂θ| at all points of M× S^1, where θ is a positively oriented coordinate (meaning dθ is positive with respect to our choice of orientation). Indeed, if u ∈ T^*_pM ⊂ T^*_(p, θ)(M × S^1) we have

⟨u, (∂/∂θ)^♭⟩ = u(∂/∂θ) = 0

so (∂/∂θ)^♭ is indeed orthogonal to T_p^*M, and (∂/∂θ)^♭/|∂/∂θ| is clearly a unit covector. Since (∂/∂θ)^♭(∂/∂θ) = |∂/∂θ|^2 > 0, the orientation is positive, and so z is indeed (∂/∂θ)^♭/|∂/∂θ|. Note that although ∂/∂θ is independent of the coordinates on M, the metric need not be, so z depends on our position on M. We have (∂/∂θ)^♭ = ι_∂/∂θ g. The interior product is linear and continuous; the map g ↦ |∂/∂θ| is clearly smooth, as the square root of the smooth function g ↦ g(∂/∂θ, ∂/∂θ). It is therefore enough, to prove z is smooth, to prove that

ϕ↦ g_ϕ

is smooth. This essentially follows by the computation in Hitchin <cit.>: both

B_ϕ: (u, v) ↦ι_u ϕ∧ι_v ϕ∧ϕ

and K_ϕ, the reinterpretation of the bilinear form B_ϕ as an endomorphism, are smooth functions of ϕ. Similarly, so are the determinant of K_ϕ and

g_ϕ = (det K_ϕ)^-1/9 B_ϕ

using compactness and asymptotic cylindricality to ensure that det K_ϕ is bounded away from zero. Then the division of ϕ and *ϕ into “the z part” and “the other part” is smooth, because it is linear and continuous; it follows that Ω and ω are smooth.

We also have to check that the correspondence of Propositions <ref> and <ref> respects the notions of asymptotically cylindrical structure that we have defined. Suppose that M is a six-dimensional smooth manifold with an end. Let (Ω, ω) be an SU(3) structure on M and z a complementary covector field in the sense of Proposition <ref> (appropriately oriented). Suppose that ϕ is the corresponding S^1-invariant G_2 structure on M × S^1. Then ϕ is asymptotically cylindrical if and only if (Ω, ω) is asymptotically cylindrical, z is asymptotically translation invariant, and z(∂/∂ t) → 0 exponentially, uniformly on N.

It is clear that if Ω, ω, and z are asymptotically translation invariant, then so too is ϕ. By Proposition <ref>, the induced asymptotically translation invariant metric has limit g̃_Ω, ω + z̃⊗z̃ = g_N + dt ⊗ dt + z̃⊗z̃ (again we use ·̃ to denote the limit). We thus have to show that z̃⊗z̃ can be taken as a form on N × S^1 only. As z(∂/∂ t) → 0, z̃ has no dt component and this is indeed the case.

For the converse, we observe that both the asymptotically cylindrical G_2 structure ϕ and its limit ϕ̃ must split in the usual way, and therefore we have

ϕ = Ω + z∧ω→Ω̃ + z̃∧ω̃ = ϕ̃

with respect to the metric induced by either. Since ϕ→ϕ̃ exponentially with all derivatives, and the map ϕ↦ (z, Ω, ω) is a continuous map of Fréchet spaces with implicit constants (in continuity arguments) bounded since ϕ is asymptotically translation invariant, we must have z →z̃ and so on too. So z, Ω, and ω are asymptotically translation invariant. Furthermore,

z(∂/∂ t) = g(∂/∂θ, ∂/∂ t)/|∂/∂θ|

and since ϕ is asymptotically cylindrical the right hand side of (<ref>) tends to zero uniformly in N and exponentially in t. Thus z̃⊗z̃ has no dt ⊗ dt component, and as in the first paragraph (Ω, ω) must be asymptotically cylindrical.

We now find a condition on z which, combined with (Ω, ω) being torsion-free, implies the G_2 structure Ω + z ∧ω is torsion-free. Because torsion-freeness, as a differential equation, is a global condition, we restrict to M compact or asymptotically cylindrical. Suppose that M is a compact six-dimensional smooth manifold.
Let (Ω, ω) be an SU(3) structure on M and z be a covector field on the product M× S^1 complementary to T^* M with positive orientation. Suppose that ϕ is the corresponding S^1-invariant G_2 structure on M× S^1. Then ϕ is torsion-free if and only if z is closed and (Ω, ω) is torsion-free. The same holds if (Ω, ω) is an asymptotically cylindrical SU(3) structure and z an asymptotically translation invariant complementary covector field with z̃(∂/∂ t) = 0, so that ϕ is an asymptotically cylindrical G_2 structure.

Firstly, given a torsion-free SU(3) structure and z closed, we have

ϕ = Ω + z ∧ω  *ϕ = 1/2ω∧ω + z ∧Ω

Since d is a real operator, Ω̅ is closed; as z is closed and d is an antiderivation, both ϕ and *ϕ are then closed.

Conversely, if ϕ is a torsion-free S^1-invariant G_2 structure on M × S^1, we begin by considering the covector field z. Let ∂/∂θ be the vector field on S^1 induced by a standard coordinate θ (positively oriented and inducing the rotation we use for “S^1-invariance”). Since ϕ is S^1-invariant, ∂/∂θ is a Killing field. A Bochner argument (<cit.>) shows that Killing fields on compact Ricci-flat manifolds are parallel, and we know that the metric associated to the torsion-free G_2 structure ϕ is Ricci-flat. Consequently, if M is compact, ∂/∂θ is parallel.

We want to show that ∂/∂θ is also parallel in the asymptotically cylindrical case. ϕ̃ defines a translation-invariant G_2 structure on the limit N ×ℝ× S^1. ∂/∂θ is a translation-invariant Killing vector field on N ×ℝ× S^1, and thus we may imagine we work on N × S^1 × S^1 to deduce that ∂/∂θ is parallel with respect to the limit metric. Now that we know ∇(∂/∂θ) decays, we may do the integration by parts required by the Bochner argument and deduce that ∂/∂θ is parallel. Hence, z = (∂/∂θ)^♭/|∂/∂θ| is parallel and in particular closed.

Then the torsion-freeness of ϕ yields, from the formulae for ϕ and *_ϕϕ in terms of ω and Ω,

0 = z ∧ dω + dΩ  0 = ω∧ dω + z ∧ dΩ

Since dω, dΩ and ω∧dω are all forms on M, and z is complementary to the subbundle of such forms, they are all zero, and so (Ω, ω) is torsion-free.

In the almost Calabi-Yau case of Joyce <cit.> the torsion-freeness conditions are much weaker: it is only required that dω = 0. Using the S^1-invariance of ϕ and Cartan's magic formula, this is equivalent to ι_∂/∂θ dϕ = 0. Thus almost Calabi-Yau structures on M are identified, up to a choice of covector field z, with S^1-invariant G_2 structures ϕ on M × S^1 satisfying ι_∂/∂θ dϕ = 0.

We now introduce some terminology to simplify the rest of the paper. Let M be a six-dimensional manifold. A closed S^1-invariant covector field z = Ldθ + v for which L > 0 is called a twisting. If M is an asymptotically cylindrical manifold, we require also that the limit z̃ satisfies z̃(∂/∂ t) = 0.

Combining all of this section, and restricting to the torsion-free case, we have the critical Theorem C: If M is a compact six-manifold, there is a homeomorphism

{torsion-free S^1-invariant G_2 structures on M × S^1} ↔ {torsion-free SU(3) structures on M} ×ℝ_>0×{closed 1-forms on M}

If M is a six-manifold with an end, there is a homeomorphism

{torsion-free asymptotically cylindrical S^1-invariant G_2 structures on M × S^1} ↔ {torsion-free asymptotically cylindrical SU(3) structures on M} ×ℝ_>0×{closed asymptotically translation invariant 1-forms v on M with ṽ(∂/∂ t) = 0}

In both cases, this homeomorphism is defined by the map

((Ω, ω), L, v) ↔Ω + (L dθ + v) ∧ω

The whole section applies equally if the one-dimensional factor is a line instead of a circle, because by invariance we can join the ends to form a circle.
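For reference, the inverse of the map in Theorem C can be written out explicitly (a sketch, combining the formula for z from the proof of Theorem <ref> with complementarity): given an S^1-invariant torsion-free G_2 structure ϕ on M × S^1,

z = (∂/∂θ)^♭/|∂/∂θ|,  L = z(∂/∂θ),  v = z - Ldθ,  ω = (1/L) ι_∂/∂θϕ,  Ω = ϕ - z∧ω,

since Ω and ω, being pulled back from M, are annihilated by ι_∂/∂θ, so that ι_∂/∂θϕ = z(∂/∂θ)ω = Lω.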
In particular, taking the line factor on the end of an asymptotically cylindrical G_2 manifold, the same argument applies; there, however, asymptotic cylindricality means that L must be one and v must be zero, or equivalently that the closed 1-form z must be dt.

§ MODULI SPACES

For the remainder of the paper, we shall assume that all SU(n) and G_2 structures are torsion-free unless specifically stated otherwise. We now want to push the relationship between torsion-free SU(3) structures and torsion-free G_2 structures discussed in section <ref>, and culminating there in Theorem <ref> (Theorem C), slightly further. In this section, we define moduli spaces and prove Theorem A (Theorem <ref>) on how the SU(3) moduli space relates to the G_2 moduli space.

The section falls into four parts. In subsection <ref>, we set up a moduli space of S^1-invariant torsion-free G_2 structures (Definition <ref>). We choose this S^1-invariant G_2 moduli space so that we have a homeomorphism between it and the product of the Calabi-Yau moduli space with the “moduli space” Z of potential twistings z, using the relationship between Calabi-Yau structures and G_2 structures. We then have to use this bijection to show the Calabi-Yau moduli space is a manifold. In subsection <ref>, we prove that the S^1-invariant G_2 moduli space is locally homeomorphic to the moduli space of G_2 structures and so is a manifold (Theorem <ref>), by closely following the proof that the G_2 moduli space itself is a manifold. We also give an idea for an alternative proof of Theorem <ref> and a discussion of where it runs into difficulty. We then discuss the space Z of classes of twistings in subsection <ref>, identifying it as the open subset of the cohomology space H^1(M × S^1) corresponding to positive H^1(S^1) component. Finally, in subsection <ref>, we return to the relationship between Calabi-Yau structures and S^1-invariant G_2 structures. We show that the projection map from the S^1-invariant G_2 moduli space to the “moduli space” Z of potential twistings z is a smooth surjective submersion, so that each SU(3) moduli-space fibre is a smooth manifold. To prove Theorem A (Theorem <ref>), it only then remains to show that all these fibres are diffeomorphic and that the product structure obtained in subsection <ref> is compatible with the manifold structures.

Suppose M is a (smooth) compact 6-manifold. We henceforth restrict attention to 6-manifolds which admit Calabi-Yau structures, to avoid having to consider the possibility of empty moduli spaces. On manifolds with ends we will further restrict to manifolds for which these structures can be chosen asymptotically cylindrical. Quotienting by the pullback action of the identity component of the diffeomorphism group, we make

If M is a compact 6-manifold,

ℳ_SU(3)(M) = {Calabi-Yau structures on M}/Diff_0 equivalence

We make a similar definition in the asymptotically cylindrical case. Recall the definition of Diff_0 on such a manifold from Definition <ref>. If M is a 6-manifold with an end,

ℳ_SU(3)(M) = {asymptotically cylindrical Calabi-Yau structures on M}/Diff_0 equivalence

There is also a natural action by the rescaling (Ω, ω) ↦ (a^3/2Ω, a ω) for a fixed constant a. We will not quotient by this, as it makes the setup of the moduli spaces slightly more complex: for details of the results we would get, and an example of the resulting complexity, see Remark <ref> below. In the G_2 case, similarly, we restrict to 7-manifolds that admit (asymptotically cylindrical) torsion-free G_2 structures.
We correspondingly make

If M is a 6-manifold, either compact or with an end,

ℳ_G_2(M × S^1) = {(asymptotically cylindrical) torsion-free G_2 structures on M × S^1}/Diff_0 equivalence

Note that M × S^1 has an end if and only if M does.

§.§ Setup of the S¹-invariant G₂ moduli space

By Theorem <ref>, we have in both the compact and asymptotically cylindrical cases a bijection roughly given by

{Calabi-Yau structures}×ℝ_>0×{closed 1-forms}↔{S^1-invariant torsion-free G_2 structures}

In this subsection, we show that this bijection induces a homeomorphism of moduli spaces. Note that since we have not proved that the asymptotically cylindrical SU(3) moduli space is a manifold, we cannot yet ask for a diffeomorphism. Therefore, we first define the moduli space of S^1-invariant G_2 structures and a “moduli space” of twistings. We do so precisely so that the map induced from Theorem <ref> is a well-defined bijection.

We shall use the following set of diffeomorphisms. Suppose that M is a compact or asymptotically cylindrical manifold with an S^1-invariant Ricci-flat metric g. Let the space Diff_0^S^1 be the identity path-component of

{Φ∈Diff_0: Φ_* ∂/∂θ = ∂/∂θ}

Elements of (<ref>) shall be called S^1-invariant diffeomorphisms. In Proposition <ref>, we shall show that Diff_0^S^1 is equal to the identity path-component of diffeomorphisms satisfying the weaker condition that Φ_* ∂/∂θ is a Killing field for g.

The diffeomorphisms of M extended by the identity certainly define S^1-invariant diffeomorphisms, and so we have: If Φ is an (asymptotically cylindrical) diffeomorphism of M^6 isotopic to the identity then the diffeomorphism Φ̂: (x, θ) ↦ (Φ(x), θ) of M^6 × S^1 lies in Diff_0^S^1(M × S^1). If (Ω, ω) is an (asymptotically cylindrical) Calabi-Yau structure and Ldθ + v a twisting, the torsion-free G_2 structures Φ^*(Ω) + (Ldθ + Φ^*v) ∧Φ^*(ω) and Ω + (Ldθ + v) ∧ω are identified by Φ̂.

We would like Φ^*(Ω) + (Ldθ + v) ∧Φ^*(ω) and Ω + (Ldθ + v) ∧ω to be identified in our S^1-invariant G_2 moduli space, as they correspond to the same element of the SU(3) moduli space with the same twisting. Lemma <ref> says that it is sufficient to choose some more diffeomorphisms so that Ω + (Ldθ + Φ^*v) ∧ω and Ω + (Ldθ + v) ∧ω are identified. More concretely, we shall identify S^1-invariant G_2 structures where the twisting differs by df for some (asymptotically translation invariant) f; v - Φ^*v is exact, and it is clear that the resulting f can be chosen to be asymptotically translation invariant if necessary, by its explicit form as the integral of an asymptotically translation invariant integrand.

If (Ω, ω) is a Calabi-Yau structure on M, Ldθ + v is a twisting, and f is a bounded function on M, there is a diffeomorphism Φ∈Diff_0^S^1(M × S^1) such that

Φ^*(Ω + (Ldθ + v)∧ω) = Ω + (L dθ + v + df) ∧ω

Consider the curve of diffeomorphisms

Φ_s: (x, θ) ↦ (x, θ + sf(x)/L)

Each Φ_s is clearly smooth and smoothly invertible by (x, θ) ↦ (x, θ - sf(x)/L). Moreover, it is easy to see that Φ_s*∂/∂θ = ∂/∂θ for all s: hence Φ = Φ_1 is in Diff_0^S^1(M × S^1). As all the M^6 coordinates are left unchanged, and the structure is S^1-invariant (and so Ω, ω and v are), Φ acts as the identity on them. However, by definition

Φ^*(dθ) = d(θ∘Φ) = d(θ + f(x)/L) = dθ + df/L

We obtain (<ref>).

Lemmas <ref> and <ref> show that if we quotient by Diff_0^S^1(M × S^1) we have a well-defined map from the Calabi-Yau moduli space. We shall thus make

ℳ_G_2^S^1(M × S^1) := {S^1-invariant torsion-free G_2 structures}/Diff^S^1_0(M× S^1)

We shall call ℳ_G_2^S^1 the S^1-invariant G_2 moduli space.
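It is worth recording why no information will be lost at the level of twistings (a sketch using only homotopy invariance): if Φ∈Diff_0(M) and Φ̂ is its extension as in Lemma <ref>, then Φ̂^*(Ldθ + v) = Ldθ + Φ^*v, and since Φ is isotopic to the identity and v is closed,

Φ^*v - v = d ∫_0^1 Φ_s^*(ι_X_s v) ds

is exact, where Φ_s is an isotopy with velocity field X_s. Combined with Lemma <ref>, the action of our diffeomorphisms therefore changes a twisting only by the differential of a function; this motivates the quotient taken in the definition of Z below.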
It remains to choose the “moduli space” Z of twistings z. Given Definition <ref>, Lemma <ref> implies that we have to quotient by the differentials of (asymptotically translation invariant) functions in order to make the induced map an injection. Consequently, we make

Let the set of twisting-classes Z be the quotient of the twistings of Definition <ref>:

{closed S^1-invariant 1-forms z = Ldθ + v: L>0 (with z̃(∂/∂ t) = 0) on M × S^1}/{differentials of S^1-invariant (asymptotically translation invariant) functions}

Z has a relatively simple description, which is discussed in subsection <ref> leading to Lemma <ref> below.

We now verify that the map

ℳ_SU(3)× Z →ℳ_G_2^S^1

induced from Theorem <ref> is a well-defined bijection. Well-definition follows from Lemmas <ref> and <ref>; injectivity is

Suppose that Φ∈Diff_0^S^1(M × S^1) and that (Ω, ω) is a Calabi-Yau structure on M with z = Ldθ + v a closed complementary covector field with positive orientation (L>0). Then, choosing a point on S^1 and so an identification of S^1 with ℝ/ℤ, Φ is of the form (p, θ) ↦ (f(p), θ + g(p)) for some smooth functions f and g. Consequently, there exist Φ_1 ∈Diff_0(M) and Φ_2, the time-1 flow of a vector field h∂/∂θ as in Lemma <ref> (with h asymptotically translation invariant, if necessary), such that

Φ^* Ω = Φ_2^* Φ_1^* Ω  Φ^* ω = Φ_2^* Φ_1^* ω  Φ^* z = Φ_2^* Φ_1^* z

In Lemma <ref>, we do not claim that Φ = Φ_1 Φ_2, though this will of course be true up to an isometry.

Given 0 ∈ S^1, write Φ(p, 0) = (f(p), g(p)) for smooth functions f and g. Now suppose (p, θ') ∈ M × S^1, and consider the curve (p, sθ') between (p, 0) and (p, θ') in M × S^1. At all points of this curve its derivative is θ'∂/∂θ. Consequently, the derivative of its image is θ'∂/∂θ, and so its image is (f(p), g(p) + sθ'). Hence Φ(p, θ') = (f(p), g(p) + θ'), as required. In particular, f is a diffeomorphism.

Now let Φ_1 be the extension of the map f as in Lemma <ref>. It is then clear that we have Φ_1^* Ω = Φ^* Ω and Φ_1^* ω = Φ^* ω. The θ component is preserved by Φ_1, and so Φ_1^* (Ldθ + v) = Ldθ + Φ_1^* v. As before, Φ^* v - Φ_1^* v is exact, and in the asymptotically cylindrical case is the differential of an asymptotically translation invariant function. Thus there exists such a Φ_2, by Lemma <ref>.

The map induced by the bijection of Theorem <ref> is a well-defined homeomorphism

ℳ_SU(3)(M) × Z →ℳ_G_2^S^1(M × S^1)

The fact that (<ref>) is a homeomorphism follows immediately, as the maps between structures are continuous and we just take the quotient topology. However, we know very little about what the topology on the right hand side looks like.

§.§ Smoothness of the S¹-invariant G₂ moduli space

In order to use the homeomorphism of Proposition <ref> to show the moduli space of Calabi-Yau structures ℳ_SU(3) is a manifold, we first have to show that ℳ_G_2^S^1 is a manifold. The objective of this subsection is to prove Theorem <ref>, which says that ℳ_G_2^S^1 is a manifold and in fact is locally diffeomorphic to ℳ_G_2. The idea of the proof is that the constructions of the G_2 moduli space due to Hitchin <cit.> and Nordström <cit.> work by, given a G_2 structure, constructing a geometrically natural premoduli space of structures and arguing that this premoduli space is locally homeomorphic to the moduli space. Since the construction is geometrically natural, if the G_2 structure concerned is S^1-invariant, the premoduli space also consists of S^1-invariant structures, and the result then follows in the same way.
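Schematically, the strategy can be summarised as follows (a sketch, in the notation of the theorem below): locally near ϕ one has a homeomorphism

{(asymptotically cylindrical) torsion-free G_2 structures} ≅ D × U

where U is a geometrically natural slice of structures and D is a neighbourhood in the quotient of Diff_0 by the automorphism group; U then projects homeomorphically to a neighbourhood in the moduli space. The point of this subsection is that U can be chosen to consist of S^1-invariant structures.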
In the G_2 case, the standard result is: Let M be a six-dimensional manifold.

* If M is compact, the moduli space of torsion-free G_2 structures on M × S^1 is a smooth manifold. It is locally diffeomorphic to H^3(M× S^1) ≅ H^3(M) ⊕ H^2(M) by the well-defined map that takes a representative of a moduli class to its cohomology class.

* If M has an end, the moduli space of asymptotically cylindrical torsion-free G_2 structures on M × S^1 is a smooth manifold. It is locally diffeomorphic to a submanifold of H^3(M × S^1) ⊕ H^2(N × S^1) ≅ H^3(M) ⊕ H^2(M) ⊕ H^2(N) ⊕ H^1(N), by the well-defined map taking a representative ϕ of a moduli class with limit ϕ̃ = ϕ̃_1 + dt ∧ϕ̃_2 to the pair ([ϕ], [ϕ̃_2]). This submanifold is wholly determined by the requirement that there be a Calabi-Yau structure (Ω, ω) on N × S^1 with (Ω, ω) = (ϕ̃_1, ϕ̃_2), which follows from section <ref>.

* In either case, given an (asymptotically cylindrical) torsion-free G_2 structure ϕ on M × S^1, we may find a set U of such structures containing ϕ and with the following properties.

* All the structures in the chart have the same group of (asymptotically cylindrical) automorphisms isotopic to the identity.

* There are neighbourhoods D of the identity class in the quotient of Diff_0 by this automorphism group, and V of ϕ in the set of all (asymptotically cylindrical) torsion-free G_2 structures, such that the pullback map defines a bijection between the product D × U and V (in fact, a homeomorphism). The set U is called a slice neighbourhood (or just a slice).

We concentrate on the proofs of (a) and (b); (c)(i) follows in exactly the same way in the S^1-invariant case as in the general case, and then (c)(ii) also follows. We begin with (a), as it is simpler. From the work of Hitchin, we extract Propositions <ref> and <ref>, which together prove (a). Of course, since we rely on the implicit function theorem, these should properly be stated in terms of suitable Banach spaces. However, the choice of Banach spaces is straightforward and of no relevance to the introduction of S^1-invariance; consequently we omit it.

We first need a slice for the Diff_0 action at some G_2 structure ϕ, that is, essentially, a local cross-section of the quotient. To choose a slice, we need the following standard fact about two-forms on a manifold with a torsion-free G_2 structure.

Let the seven-manifold X admit a G_2 structure ϕ. We then have an isomorphism of bundles

⋀^2 T^*(X) = ⋀^2_7 ⊕⋀^2_14

where ⋀^2_7 is a rank-seven bundle given by contractions of ϕ with tangent vectors, ⋀^2_14 is a rank-fourteen bundle, and the fibres of these sub-bundles are orthogonal with respect to the inner product induced on two-forms by ϕ.

We apply Lemma <ref> in the case where X = M × S^1. It is virtually sufficient to prove the following proposition. Note that here we make no torsion-freeness assumption on the three-forms other than ϕ.

Let ϕ be a torsion-free G_2 structure on the compact seven-manifold M × S^1. Let

E = {α∈Ω^3(M × S^1): dα = 0, d^*α∈Ω^2_14}

where Ω^2_14 is the subspace of 2-forms which at every point are in the subbundle ⋀^2_14. Then E is L^2-orthogonal to the space of Lie derivatives Ł_X ϕ of ϕ, with respect to the metric at ϕ, and the sum of these spaces is the set of all closed three-forms. (We choose the Banach spaces so that the projection onto E is continuous.) Consequently, locally E is transverse to the orbits of the identity component of the diffeomorphism group.
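For computations it is sometimes useful to have the standard pointwise description of the splitting in Lemma <ref> (not needed for the proofs below): on 2-forms the operator β↦ *(ϕ∧β) is self-adjoint with eigenvalues 2 and -1, and

⋀^2_7 = {β: *(ϕ∧β) = 2β} = {ι_v ϕ: v a tangent vector},  ⋀^2_14 = {β: *(ϕ∧β) = -β},

the fibre of ⋀^2_14 being a copy of the Lie algebra 𝔤_2 at each point. In particular, the condition d^*α∈Ω^2_14 defining E can be checked pointwise as *(ϕ∧ d^*α) = -d^*α.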
Now we have to pass to the torsion-free G_2 structures in the slice E.

Let ϕ_0 be a torsion-free G_2 structure on M × S^1 as in the previous proposition, and let E be as stated there. Define a map F from a neighbourhood of ϕ_0 ∈ E to exact forms by F(ϕ) = P(*_0 * ϕ), where *_0 is the Hodge star induced by ϕ_0, * is the Hodge star induced by ϕ, and P is the orthogonal projection onto exact forms induced by ϕ_0. If F(ϕ) = 0 for ϕ sufficiently close to ϕ_0, then ϕ is itself a torsion-free G_2 structure. The derivative DF has kernel consisting of harmonic forms and is surjective onto the exact forms, proving (a) of Theorem <ref>.

We have to transfer Propositions <ref> and <ref> and their asymptotically cylindrical analogues from the work of Nordström <cit.> to the S^1-invariant setting. We begin with the slice, in both the compact and asymptotically cylindrical settings. We first need to show that Diff_0^S^1 has a well-defined tangent space, so that we can still work with the space of Lie derivatives. In other words, we must prove Diff_0^S^1 is a manifold. Its tangent space is given by the S^1-invariant vector fields.

Since Diff_0^S^1 is a group, it suffices to show it is locally a manifold around the identity. A neighbourhood of the identity in Diff_0 is given, using the Riemannian exponential map for some metric g, by a neighbourhood of zero in the vector fields on M × S^1. We choose g to be S^1-invariant. From Lemma <ref> and a trivial calculation, the S^1-invariant diffeomorphisms are the diffeomorphisms of the form Φ(p, θ) = (f(p), g(p) + θ); equivalently, they are precisely the diffeomorphisms that commute with the rotations Θ. We must show that the vector fields v in this neighbourhood such that exp_v commutes with Θ are a submanifold with the specified tangent space. In fact we shall show that they are precisely the S^1-invariant vector fields small enough to be in the neighbourhood; the intersection of a vector subspace with the neighbourhood is clearly a submanifold.

Suppose given a sufficiently small vector field v. We have to compare Θexp_v(p, θ) and exp_v Θ(p, θ), for some (p, θ) ∈ M × S^1. Suppose that Θ is rotation by θ'. Θexp_v(p, θ) is the endpoint of the image under Θ of the geodesic with initial velocity v_(p, θ); since the metric is S^1-invariant, Θ is an isometry, and so Θexp_v(p, θ) is the endpoint of the geodesic with initial velocity (Θ_* v)_(p, θ + θ'). On the other hand, exp_vΘ(p, θ) is the endpoint of the geodesic with initial velocity v_(p, θ + θ'). By assumption, v and hence Θ_* v are sufficiently small that the exponential map is injective, and consequently we have exp_v Θ = Θexp_v if and only if Θ_* v = v.

That is, the tangent space to the orbit of a G_2 structure ϕ under Diff_0^S^1 is given by the Lie derivatives Ł_X ϕ with X S^1-invariant. We shall prove that for ϕ S^1-invariant, this space is precisely the space of S^1-invariant Lie derivatives Ł_X ϕ. We begin with the simplest case: an S^1-invariant G_2 structure on a compact manifold.

Let M be a compact six-dimensional manifold and let ϕ be an S^1-invariant torsion-free G_2 structure on M × S^1. Suppose that X is a vector field on M × S^1 and Ł_X ϕ is S^1-invariant. Then X is S^1-invariant.

We begin by showing that any Killing field X is S^1-invariant. As M × S^1 is Ricci-flat, a Killing field is parallel; hence Ł_∂/∂θ X = [∂/∂θ, X] = 0, and this is equivalent to S^1-invariance. Now suppose that Ł_X ϕ is S^1-invariant, but not necessarily zero. Then we have Ł_∂/∂θŁ_X ϕ = 0 = Ł_X Ł_∂/∂θϕ, and so Ł_[X, ∂/∂θ]ϕ = 0. Since ϕ determines the metric, [X, ∂/∂θ] is therefore a Killing field, and by the previous paragraph we find that [X, ∂/∂θ] is S^1-invariant.
We may now work locally on M. Pick some open subset of M on which we have coordinates x_1, …, x_n, and on this subset write

X = a_0 ∂/∂θ + ∑_i=1^n a_i ∂/∂ x_i

We see by elementary computation that

0 = [[X, ∂/∂θ], ∂/∂θ] = ∂^2 a_0/∂θ^2 ∂/∂θ + ∑_i=1^n ∂^2 a_i/∂θ^2 ∂/∂ x_i

It follows that each of the a_i (i possibly zero) is of the form A_i θ + B_i, where A_i and B_i are functions on M. But as there is no globally defined function θ, A_i must be identically zero. It follows that X is independent of θ, as required.

In the asymptotically cylindrical case, we will need a couple of statements very similar to Lemma <ref>.

* Let N be a compact five-dimensional manifold and let (Ω, ω) be an S^1-invariant Calabi-Yau structure on N × S^1. Suppose that X is a vector field on N × S^1 and Ł_X Ω is S^1-invariant. Then X is S^1-invariant, and in particular any Killing field is S^1-invariant.

* Let M be a six-dimensional manifold with an end and let ϕ be an asymptotically cylindrical torsion-free G_2 structure on M × S^1. Suppose that X is an asymptotically translation invariant vector field on M × S^1 such that Ł_X ϕ is also S^1-invariant. Then X is S^1-invariant; in particular, again, any Killing field is S^1-invariant.

(ii) is essentially identical. The only part of Lemma <ref> that required compactness of M was showing that Killing fields are parallel, and this was briefly explained in Proposition <ref>. (i) is slightly more involved, as it is not immediately clear that Ł_X Ω = 0 implies that X is a Killing field and so parallel. However, it follows from Hitchin <cit.>, as follows. Hitchin proves that Ω̂ can be determined at each point from Ω. Moreover, the map Ω↦Ω̂ is smooth. Hence, if Ł_X Ω = 0, we have a curve of Calabi-Yau structures corresponding to a curve of diffeomorphisms; at each point Ω̂ depends smoothly on Ω, and so Ł_X Ω̂ depends linearly on Ł_X Ω, and must in turn be zero. Hence, if Ł_X Ω = 0, then Ł_X Ω̂ = 0, and X is holomorphic. But then it follows by the argument from Kobayashi <cit.> mentioned before Lemma <ref> that Ł_X ω = 0. Then X is Killing, and we can apply the previous argument.

A similar argument proves a non-infinitesimal version. It does not quite imply Lemma <ref>, as we would need to integrate up: we would need to show that a vector field X with Ł_X ϕ S^1-invariant induces a curve Φ_s of diffeomorphisms so that Φ_s^* ϕ is S^1-invariant for all s sufficiently small.

Let M × S^1 have an S^1-invariant Ricci-flat metric g. The space Diff_0^S^1 defined in Definition <ref> is also the identity path-component of the set

{Φ∈Diff_0(M × S^1): Φ_* ∂/∂θ is Killing}

Because ∂/∂θ is certainly a Killing field for the metric g, it is clear that (<ref>) is a subspace of (<ref>), and consequently Diff_0^S^1(M × S^1) is contained in the identity path-component of (<ref>). It suffices to show that (<ref>) is open and closed in (<ref>), for then connectedness of the identity component of (<ref>) implies it is all of Diff_0^S^1(M × S^1). Closedness follows immediately from continuity of the pushforward. For openness, we suppose that Φ_0 is in Diff_0^S^1; we need to show that there is an open neighbourhood U of Φ_0 in Diff_0 such that if Φ∈ U and Φ_* ∂/∂θ is Killing, then Φ_* ∂/∂θ is ∂/∂θ.

We work around some p ∈ M. We note first that, by the argument of Lemma <ref>, Φ_0 carries {p}× S^1 onto {q}× S^1 for some point q of M depending on p. Let V be a small chart around p and let W be a small chart around q such that Φ_0(V × S^1) ⊂ W × S^1. Choose a smaller neighbourhood V' whose closure is compact and contained in V; then for Φ sufficiently close to Φ_0, Φ(V' × S^1) ⊂ W × S^1.
We may consequently analyse Φ in terms of the coordinates on V' × S^1 and W × S^1; that is, we shall write

Φ(x_1, …, x_n, θ) = (y_1(x_1, …, x_n, θ), …, y_n(x_1, …, x_n, θ), θ'(x_1, …, x_n, θ))

Now Φ_* ∂/∂θ is Killing, and so parallel as in the proof of Proposition <ref>, hence S^1-invariant. It follows that for each i, ∂^2 y_i/∂θ^2 = 0. We deduce that for fixed x_1, …, x_n, y_i = A_i θ + B_i, where A_i and B_i depend on x_1, …, x_n. It follows immediately that A_i = 0, as otherwise we do not have a well-defined map from the circle. Hence, we find that

Φ_* ∂/∂θ = A ∂/∂θ

for some function A(x_1, …, x_n, θ). Taking the second derivatives of θ' in the same way, A must be independent of θ; since we still need a well-defined map from the circle, we also know that A ∈ℤ at all points of M. Since Φ is close to Φ_0 in the C^1 topology and Φ_0* ∂/∂θ = ∂/∂θ, we obtain A = 1 identically, as required.

It follows immediately from Proposition <ref> that the identity component of the isometry group of a manifold M × S^1 with an S^1-invariant Ricci-flat metric g is contained in Diff_0^S^1. Hence, when we quotient by the automorphism group as in Theorem <ref>(c)(ii), it does not matter whether we work with Diff_0 or Diff_0^S^1, as in a local neighbourhood of the identity the subgroups of isometries are the same. In practice, of course, the fact that Killing fields are S^1-invariant would also prove that the subgroups of isometries are the same locally around the identity.

We can now show that slices such as that in Proposition <ref> restrict to slices in the S^1-invariant setting. For the asymptotically cylindrical case, we will need analogous results to Proposition <ref> for Calabi-Yau structures as well, and so Proposition <ref> contains three cases, corresponding to Lemma <ref> and the two cases of Lemma <ref>.

Suppose that W × S^1 is a six- or seven-dimensional manifold, where W is one of a compact or asymptotically cylindrical six-manifold, or a compact five-manifold, and has either an S^1-invariant torsion-free G_2 structure ϕ or an S^1-invariant Calabi-Yau structure (Ω, ω) respectively. Let α be ϕ or Ω respectively. Suppose that

Y = E ⊕{Ł_X α: X a vector field}

is an orthogonal splitting for some vector spaces of three-forms E and Y. Let E' and Y' be the intersections of E and Y respectively with the set of S^1-invariant three-forms. Then we have

Y' = E' ⊕{Ł_X α: X an S^1-invariant vector field}

again as an orthogonal splitting.

Lemmas <ref> and <ref> say that the space of Lie derivatives on the right hand side of (<ref>) is precisely

{Ł_X α: X a vector field}∩{S^1-invariant forms}

and thus we just need to show that the orthogonal splitting is preserved when we intersect throughout with S^1-invariant forms. The two components obviously remain orthogonal, so it suffices to show that the projection to the Lie derivative part is S^1-equivariant, that is, commutes with the pullback by any rotation Θ. We observe that Θ^* Ł_X α = Ł_Θ^-1_* XΘ^* α for all X, again using the S^1-invariance of α. Now, given a 3-form β, its Lie derivative part is the unique 3-form γ such that

⟨β - γ, Ł_X α⟩ = 0 for all vector fields X

As Θ is an isometry, we have for any X

⟨Θ^* β - Θ^* γ, Ł_X α⟩ = ⟨β - γ, (Θ^-1)^* Ł_X α⟩ = ⟨β - γ, Ł_Θ_* Xα⟩ = 0

and thus the Lie derivative part of Θ^* β is Θ^* γ. In particular, if β is S^1-invariant its Lie derivative part is S^1-invariant. Thus its E part is also S^1-invariant, and lies in E'. We obtain the orthogonal splitting

{closed S^1-invariant 3-forms} = E' ⊕{Ł_X ϕ: [X, ∂/∂θ] = 0}
With respect to ϕ, let

E' = {α∈Ω^3(M × S^1): α S^1-invariant, dα = 0, d^*α∈Ω^2_14}

where Ω^2_14 is the subbundle of 2-forms which at every point are in the subbundle corresponding to the 14-dimensional subrepresentation of G_2 on the space of alternating two-forms. Then E' is L^2-orthogonal to the space of Lie derivatives Ł_X ϕ of ϕ for X S^1-invariant, with respect to the metric at ϕ, and the sum of these spaces is the set of all closed S^1-invariant three-forms. Consequently, locally E' is transverse to the orbits of Diff_0^S^1.

We now proceed to the S^1-invariant version of the implicit function theorem argument for Proposition <ref>. We have to show that when we pass to the S^1-invariant setting the derivative DF is still surjective and has the same kernel. Again, we state and prove a more general version that will be used for the asymptotically cylindrical case. We first prove some easy lemmas saying that harmonic forms are S^1-invariant. The proof is very similar to the proof of Lemma <ref>.

Suppose M × S^1 is a compact manifold with an S^1-invariant Riemannian metric, and α a harmonic form on it. Then α is S^1-invariant.

Consider a rotation Θ. As Θ is isotopic to the identity, [Θ^* α] = [α]. As the metric is S^1-invariant, Θ is also an isometry, and so Θ^* α is also a harmonic form. Thus, by the Hodge decomposition, Θ^*α = α; that is, α is S^1-invariant.

Suppose M × S^1 is a manifold with an end N × S^1 × (0, ∞), equipped with an asymptotically cylindrical S^1-invariant Riemannian metric, and α an asymptotically translation invariant harmonic form on it. Then α is S^1-invariant.

We apply asymptotically cylindrical Hodge theory, such as in <cit.>, and the argument in Lemma <ref>. By Theorem 5.9 of that paper, α is the sum of a decaying harmonic form, an exact harmonic form, and a coexact harmonic form. We first show that the exact harmonic form, β say, is S^1-invariant. By the discussion after Theorem 5.9, the map from an exact harmonic form to its limit is injective. By Lemma <ref>, its limit is S^1-invariant. Thus, the exact harmonic form Θ^* β has the same limit, so Θ^* β = β, as required. By the same argument the coexact harmonic form is also S^1-invariant. Taking the difference, we now have to show that the decaying harmonic form, γ say, is S^1-invariant. Again, Θ^* γ is a decaying harmonic form, and it follows from <cit.> that the map from such forms to cohomology is injective. Thus Θ^* γ = γ, exactly as in Lemma <ref>.

We now prove our general S^1-invariance proposition, which we will apply to Proposition <ref>.

Let M be a compact or asymptotically cylindrical manifold. Let F: X → Y be a smooth S^1-equivariant nonlinear map between Banach spaces of (asymptotically translation invariant) differential forms on M × S^1 (with S^1-invariant norms). Suppose that DF is surjective and its kernel consists of harmonic forms. Suppose further that in the compact case X is continuously contained in the space of L^2 forms, and in the asymptotically cylindrical case we can find an S^1-invariant complement W to the kernel of DF in X. Then the restriction

F^S^1: X' := X ∩{S^1-invariant forms}→ Y ∩{S^1-invariant forms} =: Y'

is a well-defined map, with DF^S^1 surjective and with the same kernel as DF.

Since F is S^1-equivariant, the image of an S^1-invariant form under it is an S^1-invariant form. Consequently, F^S^1 is a well-defined map. We now consider the derivative.
By hypothesis, its kernel consists of (asymptotically translation invariant) harmonic forms; by Lemmas <ref> and <ref> we know that these are S^1-invariant, and consequently we know that the kernel of DF^S^1 agrees with the kernel of DF. For surjectivity, note first that DF must also be S^1-equivariant. Suppose we have a form α∈ F(X'), and let α̇ be a tangent to Y' at α, so that we have to show that α̇ is in the image of DF^S^1. Since DF is surjective, we can find β̇ with DF(β̇) = α̇. There is always an S^1-invariant complement W to the finite-dimensional kernel of DF; in the asymptotically cylindrical case, this is assumed, and in the compact case we may let W be the L^2-orthogonal complement and note that since the metric is S^1-invariant, W is also S^1-invariant. We may suppose that β̇ lies in W. Now, given a rotation Θ, since DF is S^1-equivariant and α̇ is S^1-invariant, Θ^* β̇ also maps to α̇. Since W is S^1-invariant, Θ^* β̇ is also in W. Consequently the difference β̇ - Θ^* β̇ is in W and maps to zero; hence it is zero, and β̇ is S^1-invariant. Hence, α̇ = DF^S^1(β̇), and DF^S^1 is surjective.

Applying Proposition <ref> to Proposition <ref> gives

Let ϕ_0 be a torsion-free S^1-invariant G_2 structure on M × S^1 and let E and F be as in Proposition <ref>. When we restrict all the spaces to be S^1-invariant (passing to E' and so forth), the kernel of DF is unchanged and it remains surjective.

We apply Proposition <ref>. We have to check that F is S^1-equivariant and that its kernel consists of harmonic forms. The fact that the kernel consists of harmonic forms is stated in Proposition <ref>. For S^1-equivariance, we recall that F(ϕ) = P(*_0 * ϕ), where *_0 and * are the Hodge stars induced by ϕ_0 and ϕ and P is the orthogonal projection onto exact forms induced by ϕ_0. Since ϕ_0 is S^1-invariant, *_0 and P are S^1-equivariant (P was essentially proved to be so as part of Proposition <ref>). On the other hand, it is clear that the map ϕ↦ * ϕ is S^1-equivariant. Consequently, F is S^1-equivariant. By Proposition <ref>, the result follows.

Corollaries <ref> and <ref> essentially prove the compact case of Theorem <ref>. The asymptotically cylindrical case is similar but more involved. Propositions <ref> and <ref> will suffice to prove everything, but we have to apply them to the proof of smoothness of the asymptotically cylindrical G_2 moduli space in Nordström <cit.>, which is substantially more complicated than Hitchin's proof for the compact case. We consequently only summarise this case.

We begin by considering the limit as in <cit.>. The limit is a torsion-free Calabi-Yau structure on the cross-section N × S^1, so we first of all have to define an S^1-invariant Calabi-Yau moduli space on N × S^1. As in the compact G_2 case, the proof is fundamentally that we first choose a slice and then consider the torsion-freeness map from that slice. In our case, we work with a base Calabi-Yau structure (Ω, ω) which is S^1-invariant. The slice is defined in <cit.>, by identifying an orthogonal complement to the Lie derivatives of Ω (essentially symmetrically to that in Proposition <ref>). It is then not necessary to consider the Lie derivatives of ω, as each diffeomorphism can be identified from its action on Ω.
The Calabi-Yau case of Proposition <ref> says that when all forms in these splittings are taken S^1-invariant, the reduced orthogonal complement is still an orthogonal complement. The map defining torsion-freeness is F in <cit.>, viz.

F(β, γ) = (P_1(*β̂), P_2(β∧γ), 1/4 β∧β̂ - 1/6 γ^3)

where, for (β, γ) close to (Ω, ω), β̂ is the imaginary part of the unique decomposable complex 3-form of which β is the real part, P_1 is the orthogonal projection to those three-forms in the slice which are orthogonal to harmonic forms, and P_2 is an orthogonal projection on closed five-forms induced by the Calabi-Yau structure (to the harmonic forms and exterior derivatives of (3, 1) + (1, 3) forms). Wedge products are clearly S^1-equivariant. The map β↦β̂ is S^1-equivariant by uniqueness of the decomposable complex 3-form, so to show that F is S^1-equivariant we only need to show that the two projections are. However, the Calabi-Yau structure with respect to which these splittings are taken is S^1-invariant, so the subspaces that these splittings project to, and their complements, are S^1-invariant. It follows as in Proposition <ref> that the projection maps are S^1-equivariant. <cit.> says that DF is surjective and <cit.> says that the kernel of DF consists of harmonic forms. By Proposition <ref> it follows that the kernel is the same and the derivative remains surjective when we pass to the S^1-invariant case. This proves that the S^1-invariant moduli space of Calabi-Yau structures on N × S^1 is a smooth manifold, locally diffeomorphic to that for all Calabi-Yau structures on N × S^1.

Only a subspace of these Calabi-Yau structures might arise as limits of a torsion-free G_2 structure: we have to check that this subspace is still a manifold. In the non-S^1-invariant case, this is <cit.>. Again, the proof is that structures arising as limits correspond to the kernel of two nonlinear maps (taken consecutively). Note that the tangent space to the Calabi-Yau moduli space already consists of harmonic forms, so this hypothesis of Proposition <ref> does not need checking. The nonlinear maps concerned are composites of the wedge product, orthogonal projections to S^1-invariant subspaces determined by the base Calabi-Yau structure, and the orthogonal projection to the complement of such an S^1-invariant subspace with respect to the metric induced by the Calabi-Yau structure we consider. These are clearly all S^1-equivariant, and so Proposition <ref> shows that the derivatives remain surjective, so that the subspace of the S^1-invariant moduli space corresponding to limits of S^1-invariant torsion-free G_2 structures is indeed a submanifold.

We must now pass to the full asymptotically cylindrical setting. We must, here, restrict to asymptotically cylindrical structures and diffeomorphisms with a fixed decay rate δ > 0 to define our Banach spaces, as in <cit.>. The slice we take in the full asymptotically cylindrical setting is given in <cit.>. We restrict to the subspace of asymptotically translation invariant closed 3-forms with suitable limits, and in particular to vector fields (defining diffeomorphisms) whose limits are Killing fields for the limit structure. We then take E to be the subspace satisfying d^* α∈Ω^2_14, as in Proposition <ref>. Then we again have that E and the space of Lie derivatives are orthogonal complements. By Proposition <ref>, it follows that this orthogonal splitting is preserved when we pass to the S^1-invariant setting.
The final map we consider is F of <cit.>: F(ϕ) = P(*_0 * ϕ), exactly as in Proposition <ref>; by <cit.> the kernel of F is precisely the set of torsion-free G_2 structures. Exactly as in Corollary <ref>, F is S^1-equivariant. <cit.> says that the kernel of the derivative, when we restrict to the slice, consists exactly of harmonic forms, and that the derivative is surjective. Moreover, the kernel is all the harmonic forms with suitable limits, and by construction every nonzero limit arises as a limit of a harmonic form, and decaying forms orthogonal to harmonic forms form an S^1-invariant complement for the kernel. Consequently, by Proposition <ref>, the derivative has the same kernel and is surjective in the S^1-invariant setting too. We have now essentially proved

In both the compact and asymptotically cylindrical cases ℳ_G_2^S^1 is a manifold and is locally diffeomorphic to ℳ_G_2.

The remaining parts of this proof are passing from the reduced-regularity and fixed-decay-rate Banach spaces to smooth and all asymptotically cylindrical structures: S^1-invariance is irrelevant to these, which thus follow exactly as in <cit.> and <cit.>.

Note that the map ℳ_G_2^S^1→ℳ_G_2 is not known to be an inclusion map, as it need not be globally injective.

It is tempting to try to prove Theorem <ref> more directly. We outline this “more direct” proof in the compact case, and explain where it runs into difficulty. We know from Theorem <ref> that the slice neighbourhood for ℳ_G_2 around an S^1-invariant G_2 structure ϕ_0 must consist of S^1-invariant G_2 structures, because we know its representatives are preserved by the isometry Θ of ϕ_0, and thus we always have a local continuous map ℳ_G_2→ℳ_G_2^S^1. As Diff_0^S^1⊂Diff_0, and an S^1-invariant G_2 structure is a G_2 structure, we always also have a well-defined continuous map ℳ_G_2^S^1→ℳ_G_2, as in the last proof. If we could show that locally these were inverse to each other, and so defined a homeomorphism, we would immediately get the manifold structure on ℳ_G_2^S^1. We fix neighbourhoods so that these maps are well-defined, and work entirely on these neighbourhoods (so references to injectivity are only to local injectivity).

It is clear that the composition ℳ_G_2→ℳ_G_2^S^1→ℳ_G_2 is the identity. We need to check that the map ℳ_G_2^S^1→ℳ_G_2 is injective; it then follows that the other composition is also the identity. We know that every point in the image has a slice representative, and so it suffices to check that if ϕ_0 is in the slice neighbourhood and Φ∈Diff_0, then ϕ_1 = Φ^* ϕ_0 is also Ψ^* ϕ_0 for some Ψ∈Diff_0^S^1. If we have a curve ϕ_t of S^1-invariant G_2 structures from ϕ_0 to ϕ_1 then, at least after passing to a suitable C^k,α space, we know by Theorem <ref> that for all t, ϕ_t is the pullback of an element in the slice and can thus be written as Φ_t^* ϕ̂_t with ϕ̂_t always in the slice. (In particular, ϕ̂_1 = ϕ̂_0 = ϕ_0.) We know also, as ϕ̂_t is in the slice, that ϕ̂_t is S^1-invariant. Theorem <ref> says that isometries of ϕ̂_t are isometries of ϕ_0, and it follows that Φ_t^* ϕ_0 is also S^1-invariant, by considering the isometry Φ_t ∘Θ∘Φ_t^-1 of ϕ̂_t. It then follows from Proposition <ref> that Φ_t is S^1-invariant for all t; in particular, Φ_1 is S^1-invariant, and this proves that the map ℳ_G_2^S^1→ℳ_G_2 is (locally) injective. Thus this proof reduces to

The S^1-invariant torsion-free G_2 structures form a locally path-connected subset of the torsion-free G_2 structures.

Without torsion-freeness this claim is evident because of the openness of G_2 structures.
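To see why the claim is harder with torsion-freeness, note that averaging does handle the linear conditions (a sketch, writing Θ_t for rotation by t): given a closed G_2 structure ϕ near an S^1-invariant one, the average

ϕ̄ = ∫_{S^1}Θ_t^* ϕ dt

is S^1-invariant, is closed (pullback and integration over the circle commute with d), and is still a G_2 structure by openness. What fails is co-closedness: the condition d(*_ϕϕ) = 0 involves the Hodge star of ϕ itself, and *_ϕ̄ϕ̄ is not the average of the forms Θ_t^*(*_ϕϕ), because ϕ↦ *_ϕϕ is non-linear.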
As this sketch indicates, the natural approach, taking a path of torsion-free G_2 structures and averaging out the rotation, fails because the map defining the torsion is non-linear. Removing the remaining torsion would therefore require some analysis: it should be possible, but is unlikely to be easier than the arguments we have outlined in this subsection.

To complete this extended remark, we will observe where Claim <ref> enters our original proof, or, essentially equivalently, how we have local path-connectedness for all torsion-free G_2 structures (before passing to S^1-invariant ones). The idea is that we only use the whole set of torsion-free G_2 structures as the product of diffeomorphisms and the slice. First, we show that the set of torsion-free G_2 structures is locally this product, essentially by using the implicit function theorem on (Φ, ϕ) ↦Φ^* ϕ; then we use the implicit function theorem again to determine that the torsion-free G_2 structures in what remains are also a manifold, and so locally path-connected. Since we already know that the diffeomorphisms form a manifold, and so are locally path-connected, we know that the set of torsion-free G_2 structures is a product of locally path-connected spaces and so locally path-connected. It would perhaps be possible to combine these applications of the implicit function theorem and show Claim <ref> directly, by showing that S^1-invariant torsion-free G_2 structures are themselves a manifold; but the whole point of this “more direct argument” is to avoid these two applications of the implicit function theorem (which correspond, for instance, to Corollaries <ref> and <ref>).

Before turning to the components ℳ_SU(3) and Z of ℳ_G_2^S^1, we deal with the question promised after Definition <ref>, of what would happen if we defined our moduli spaces to also quotient by the rescaling action. Suppose for simplicity that M is compact; similar arguments will apply in the asymptotically cylindrical case. We know that there is a natural rescaling action on both SU(3) structures and G_2 structures. The action induced on S^1-invariant G_2 structures by rescaling of SU(3) structures is not just rescaling: it identifies the G_2 structure a^3/2Ω + az ∧ω with Ω + z ∧ω. Consequently, if we quotient by rescaling of SU(3) structures, we have to quotient by this partial rescaling of S^1-invariant G_2 structures, as otherwise Proposition <ref> no longer holds. If we also quotient by rescaling of the G_2 structures, we find in particular that Ω + z ∧ω is identified with a^3/2Ω + a^3/2 z ∧ω = a^3/2Ω + a(a^1/2 z) ∧ω, and hence with Ω + a^1/2 z ∧ω; that is, we are also quotienting by rescaling of Z.

The natural slice to take for Z is to recall that an element z of Z is of the form L[dθ] + [v] for some [v] ∈ H^1(M), and merely insist that L = 1. An easy calculation shows that if L = 1 and (Ω, ω) induces a metric of volume one, then so too does Ω + z ∧ω. Consequently, we could quotient by these and establish the following analogue of Proposition <ref>:

{[Ω, ω] ∈ℳ_SU(3): vol([Ω, ω]) = 1}× H^1(M) = {[ϕ = Ω + z ∧ω] ∈ℳ_G_2^S^1: vol([ϕ]) = vol([Ω, ω]) = 1}

Of course, the analogue of Theorem <ref> in this case is completely false: even if we quotient by rescaling of G_2 structures, so that we work with {[ϕ] ∈ℳ_G_2: vol([ϕ]) = 1}, the space {[ϕ = Ω + z ∧ω] ∈ℳ_G_2^S^1: vol([ϕ]) = vol([Ω, ω]) = 1} must be a proper subspace. Consequently, to prove that this space is smooth we would essentially have to proceed by the same argument as in this subsection and then continue.
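The volume computation just cited is immediate (a sketch, normalising the circle to have length one): the metric of Ω + z∧ω is z⊗z + g_Ω,ω, whose volume form is z∧vol_Ω,ω; writing z = dθ + v, the term v∧vol_Ω,ω vanishes, being a 7-form built entirely from forms pulled back from the 6-manifold M, so

vol(Ω + z∧ω) = ∫_{M × S^1} dθ∧vol_Ω,ω = vol(Ω, ω).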
It is in the sense of this remark that we claimed after Definition <ref> that quotienting by the rescaling action added additional complexity for no practical gain.

§.§ The space of twisting classes Z

We now know that ℳ_G_2^S^1 = Z ×ℳ_SU(3) is a manifold; it remains to show that both factors are manifolds and that, if we take the product manifold structure on the right hand side, this identification is a diffeomorphism. We begin with the quotient Z of Definition <ref>. Z is clearly an open subset of a vector space; the purpose of this subsection is to obtain a description of this vector space intrinsic to M. In the compact case, this is straightforward; in the asymptotically cylindrical case, we will use standard asymptotically cylindrical Hodge theory.

First of all, we need a standard lemma, which we will use later to set up gluing as well. Suppose that α is an exponentially decaying (with all derivatives) closed form on the end N × (0, ∞) of an asymptotically cylindrical manifold M with cross-section N. Then there is a form η on N × (0, ∞) such that dη = α|_N × (0, ∞).

We now state our intrinsic description of the vector space quotient. This is the only place where we explicitly need Remark <ref> and its G_2 analogue: that all interesting Ricci-flat asymptotically cylindrical manifolds have connected cross-section N.

Let M × S^1 be compact or asymptotically cylindrical. In the case that M × S^1 is asymptotically cylindrical, suppose further that the cross-section N of M is connected. Then the quotient

{closed S^1-invariant covector fields z (with z(∂/∂ t) → 0 exponentially)}/{differentials of S^1-invariant (asymptotically translation invariant) functions}

is isomorphic to H^1(M × S^1) = H^1(M) ×ℝ; in particular, if we restrict to Z, where the [dθ] component must be positive, we get the open subset Z = H^1(M) ×ℝ_>0.

Asymptotically cylindrical Hodge theory is well studied, and Lemma <ref> sits between the result for bounded harmonic forms and that for arbitrary closed forms with exponential growth, that is, between the generalisations to asymptotically cylindrical manifolds of Propositions 6.13 and 6.18 of Melrose <cit.>: those results say that each of these spaces is given by the first cohomology, and Lemma <ref> says that the space of arbitrary closed forms with specified limit, over a suitable quotient, is given by the first cohomology as well.

It is clear that Z corresponds to cohomology classes containing a positive [dθ] component, so it is enough to show that (<ref>) is isomorphic to H^1(M × S^1). In the compact case, (<ref>) reduces to

{closed S^1-invariant covector fields z}/{differentials of S^1-invariant functions}

Since a closed S^1-invariant covector field is of the form v + c dθ, where v is a closed 1-form on M and c is a constant, and an S^1-invariant function is just a function on M, it is easy to see that (<ref>) is indeed isomorphic to H^1(M) ⊕ℝ = H^1(M × S^1).
In the asymptotically cylindrical case, we will use the following facts of asymptotically cylindrical Hodge theory, for an asymptotically cylindrical manifold M with connected cross-section.

* H^1(M) is isomorphic to the space of bounded harmonic 1-forms, and this space coincides with the space of bounded harmonic 1-forms v satisfying the boundary condition that v(∂/∂ t) → 0 exponentially; the isomorphism is given by taking the cohomology class of a bounded harmonic (and so closed) form.

* An exact decaying S^1-invariant 1-form is the differential of a decaying S^1-invariant function.

The first point is simply <cit.> with different notation: that says that the map from bounded harmonic 1-forms to cohomology is an isomorphism, and that no bounded harmonic 1-form has dt as its limit. The second is easier: it is required to set up the Hodge theory, but here we can do it concretely using Lemma <ref>. If α = df is our 1-form, Lemma <ref> says that, on the end, we can explicitly find a g with dg = α. An examination of the proof shows that g is itself exponentially decaying. On the end, d(f-g) = 0, so that f-g is a constant, c say. Then f-c is an exponentially decaying function with d(f-c) = α, as required.

We clearly have a map from closed S^1-invariant covector fields z on M × S^1, with z(∂/∂ t) → 0 exponentially, to H^1(M × S^1). We have to apply (i) and (ii) to show that the induced map from (<ref>) to H^1(M × S^1) is a well-defined bijection. Since we quotient by exact forms, it is clearly well-defined. To show that it is injective, we have to show that if v is S^1-invariant with appropriate limit and represents the zero cohomology class, then it is the differential of an asymptotically translation invariant S^1-invariant function. Since [v] = 0, and v has appropriate limit, the cohomology class of the limit [ṽ] = 0. Thus ṽ = dg for some g defined on the cross-section N × S^1; we may assume g is S^1-invariant, since v is, for instance by averaging g around the circle factor. Then v - d(ψ g), where as in equation (<ref>) ψ is one for t large and zero for t small, represents the zero cohomology class and has zero limit, so by (ii) we have v - d(ψ g) = dh for some decaying function h; since v - d(ψ g) is S^1-invariant, we may assume that h is. Then we have v = d(ψ g + h); ψ g + h is an asymptotically translation invariant S^1-invariant function, as required. To show surjectivity, by the isomorphism in (i) it is enough to show that every bounded harmonic 1-form defines a class of (<ref>); but, by (i) and Lemma <ref>, this is immediate.

§.§ Smoothness of the SU(3) moduli space

We now turn to the ℳ_SU(3) factor. Since we have it as a subspace of ℳ_G_2^S^1 by Proposition <ref>, and we understand the structure of ℳ_G_2^S^1 by Theorem <ref>, we have a reasonable knowledge of its structure as a topological space. It remains to understand the ℳ_SU(3) factor as a smooth manifold. We use the projection π_Z from ℳ_G_2^S^1 to Z. We will show that π_Z is a submersion. Its fibres are precisely copies of ℳ_SU(3), and so the implicit function theorem will give a family of manifold structures on ℳ_SU(3). We then have to check that the manifold structure is independent of which fibre we take, and that consequently we indeed have a smooth product; this establishes Theorem A.

Firstly, we now know that ℳ_SU(3)(M) is locally homeomorphic to a subset of cohomology. In the compact case,

[Ω, ω] ↦ ([Ω], [ω]) ∈ H^3(M) ⊕ H^2(M)

is a local homeomorphism to its image.
In the asymptotically cylindrical case,

[Ω, ω] ↦ ([Ω], [ω], [Ω̃_2], [ω̃_2]) ∈ H^3(M) ⊕ H^2(M) ⊕ H^2(N) ⊕ H^1(N)

is a local homeomorphism to its image, where Ω̃_2 and ω̃_2 are, as in Theorem <ref>, the appropriate components of Ω̃ = Ω̃_1 + dt ∧Ω̃_2 and ω̃ = ω̃_1 + dt∧ω̃_2.

We first apply Theorem <ref>, which says that locally ℳ_G_2^S^1 is homeomorphic to ℳ_G_2 and hence to cohomology. Then, in the compact case, the result follows by combining Proposition <ref> with the Künneth theorem. Specifically, given a point [Ω, ω], take a neighbourhood of [Ω + dθ∧ω] in ℳ_G_2^S^1 that is homeomorphic to a neighbourhood in ℳ_G_2(M× S^1) and so to a neighbourhood in H^3(M × S^1). Then the map (<ref>) is given by the composition

ℳ_SU(3)→ℳ_G_2^S^1→ H^3(M × S^1) → H^3(M) ⊕ H^2(M)
[Ω', ω'] ↦ [Ω' + dθ∧ω'] ↦ [Ω' + dθ∧ω'] ↦ ([Ω'], [ω'])

Consequently, (<ref>) is continuous because every individual step is. The inverse can be written in exactly the same way and so is also continuous (the last map then being the projection ℳ_G_2^S^1→ℳ_SU(3)).

In the asymptotically cylindrical case, the only difficulty is that we have to use the Künneth theorem on the cross-section as well. The map from [Ω', ω'] to [ϕ' = Ω' + dθ∧ω'] is a local homeomorphism to its image exactly as in the compact case. We already know that (with the notation of Theorem <ref>) the map

[ϕ'] ↦ ([ϕ'], [ϕ̃'_2]) ∈ H^3(M× S^1) ⊕ H^2(N× S^1)

is a local homeomorphism to its image; finally, the map from H^3(M× S^1) ⊕ H^2(N× S^1) to H^3(M) ⊕ H^2(M) ⊕ H^2(N) ⊕ H^1(N) is again continuous in both directions, using the Künneth theorem for both H^3(M × S^1) and H^2(N × S^1).

We now turn to the projection map

π_Z: ℳ_G_2^S^1(M × S^1) → Z

We will show first that π_Z is smooth, then that it is a surjective submersion; the implicit function theorem then implies that the fibres, which are clearly copies of ℳ_SU(3), have manifold structures. For smoothness, we work locally, and so may assume we have a subset of torsion-free G_2 structures (open in some suitable slice). We already know from Proposition <ref> that the map taking a G_2 structure to the twisting z is smooth (as a map of Fréchet spaces). Since the map from z to its cohomology class [z] is linear and continuous, it is evidently smooth. Thus (<ref>) defines a smooth map between finite-dimensional manifolds.

To show that π_Z is a surjective submersion we will use the following elementary lemma. Suppose that (Ω, ω) is an (asymptotically cylindrical) Calabi-Yau structure on M and that z(s) is a smooth curve of closed 1-forms with [z(s)] ∈ Z for all s. Then the curve

[Ω + z(s) ∧ω] ∈ℳ^S^1_G_2(M × S^1)

is smooth.

It is obvious that Ω + z(s) ∧ω is a smooth curve of closed three-forms. The map to ℳ^S^1_G_2 is given locally by taking certain cohomology classes, by Theorem <ref>. This map is linear and continuous and so is smooth to cohomology classes; hence it is smooth to the image of ℳ_G_2^S^1 in cohomology.

Lemma <ref> yields: The map π_Z of (<ref>) is a surjective submersion.

Surjectivity is already done, since π_Z is the projection from a product. To prove π_Z is a submersion, we have to show that, given a tangent vector [y] ∈ H^1(M × S^1) at [z] ∈ Z, for every [ϕ] ∈π_Z^-1([z]) there is a tangent vector at [ϕ] that maps to [y] under Dπ_Z. By Proposition <ref>, we know any such [ϕ] = [Ω + z ∧ω] for some Calabi-Yau structure (Ω, ω) and representative z. Pick some representative y for the tangent. z + sy is a smooth curve and, by openness of Z, [z + sy] ∈ Z for s small enough.
Lemma <ref> yields:

The map π_Z of (<ref>) is a surjective submersion.

Surjectivity is already done, since π_Z is the projection from a product. To prove that π_Z is a submersion, we have to show that given a tangent vector [y] ∈ H^1(M × S^1) at [z] ∈ Z, for every [ϕ] ∈ π_Z^{-1}([z]) there is a tangent vector at [ϕ] that maps to [y] under Dπ_Z. By Proposition <ref>, any such [ϕ] = [Ω + z ∧ ω] for some Calabi-Yau structure (Ω, ω) and representative z. Pick some representative y for the tangent vector. Then z + sy is a smooth curve of closed 1-forms and, by openness of Z, [z + sy] ∈ Z for s small enough. By Lemma <ref>, therefore, γ(s) = [Ω + (z + sy) ∧ ω] is a curve in ℳ_G_2^S^1 through [ϕ]. We consider its tangent at [ϕ] = γ(0):

Dπ_Z(γ̇(0)) = d/ds|_{s=0} (π_Z ∘ γ)(s) = d/ds|_{s=0} [z + sy] = [y],

and so we have a submersion.

In particular, the implicit function theorem now proves that every ℳ_SU(3) fibre has a smooth structure, possibly different for each fibre. As we already have a topological product, we next show that all these smooth structures are the same and that the projection map ℳ_G_2^S^1 → ℳ_SU(3) is smooth; it is then straightforward to show that we obtain a smooth product.

Suppose [z_1] and [z_2] are classes of Z. The map π_Z^{-1}([z_1]) → π_Z^{-1}([z_2]) given, using the product structure of Proposition <ref>, by projection to ℳ_SU(3) and inclusion is a diffeomorphism when these fibres are equipped with their submanifold smooth structures.

We first prove the case where M is compact. Fix a class [ϕ_1 = Ω + z_1 ∧ ω] in π_Z^{-1}([z_1]) and its image [ϕ_2 = Ω + z_2 ∧ ω] in π_Z^{-1}([z_2]). Locally around these two, we know by Theorems <ref> and <ref> that ℳ_G_2^S^1 is locally diffeomorphic to H^3(M × S^1). Consequently, the fibres π_Z^{-1}([z_1]) and π_Z^{-1}([z_2]) are locally submanifolds of H^3(M × S^1). The map in the statement defines a map between these submanifolds, which we want to show is smooth. It suffices to show that there is a smooth map on H^3(M × S^1) which agrees with this map on π_Z^{-1}([z_1]). Recall from Lemma <ref> that Z is ℝ_{>0} × H^1(M). Suppose that [z_1] = L_1[dθ] + [v_1] and [z_2] = L_2[dθ] + [v_2]. By the Künneth theorem, we know that H^3(M × S^1) ≅ H^3(M) ⊕ H^2(M). We define a map on H^3(M × S^1) ≅ H^3(M) ⊕ H^2(M) by

([α], [β]) ↦ ( [α] + (([v_2] - [v_1])/L_1) ∧ [β], (L_2/L_1)[β] ).

This is linear and so certainly smooth. Suppose now that ([α], [β]) ∈ π_Z^{-1}([z_1]) is close to [ϕ_1]. Then [α + dθ ∧ β] = [Ω' + (L_1 dθ + v_1) ∧ ω'] for some Calabi-Yau structure (Ω', ω'). It follows that [α] = [Ω' + v_1 ∧ ω'] and [β] = [L_1 ω']; hence the image of this pair under our map is ([Ω' + v_2 ∧ ω'], [L_2 ω']), corresponding to [Ω' + (L_2 dθ + v_2) ∧ ω']. This is precisely the image under the map in the statement, and this proves the result in the compact case.

The asymptotically cylindrical case is very similar: the additional linear map H^2(N) ⊕ H^1(N) ∋ ([α̃], [β̃]) ↦ ([α̃], (L_2/L_1)[β̃]) for the limit factor behaves identically, and the fact that ℳ_G_2^S^1 is only diffeomorphic to a submanifold of H^3(M × S^1) ⊕ H^2(N × S^1) does not affect the argument.

We can be more concrete about what the smooth structure on the fibre π_Z^{-1}([z]) is. To set up our moduli space of gluing data in Proposition <ref>, we will need to know that the moduli spaces have coordinates corresponding to a suitable set of structures (essentially slice coordinates as in the G_2 case).

Suppose that [Ω, ω] ∈ ℳ_SU(3), and that (Ω, ω) is a Calabi-Yau structure representing it. Then there exists a subset U of Calabi-Yau structures containing (Ω, ω) such that U is diffeomorphic to a neighbourhood of [Ω, ω] ∈ ℳ_SU(3), and such that the group of automorphisms of (Ω', ω') isotopic to the identity is independent of (Ω', ω') ∈ U.

First we write ϕ = Ω + dθ ∧ ω; ϕ is an S^1-invariant G_2 structure. Consequently, by Theorems <ref> and <ref>, there exists a chart V for ℳ_G_2^S^1 diffeomorphic to a set of torsion-free S^1-invariant G_2 structures and such that the group of automorphisms of ϕ' isotopic to the identity is independent of ϕ' ∈ V. By Theorem <ref>, such torsion-free S^1-invariant G_2 structures ϕ' are given by a twisting z' and a Calabi-Yau structure (Ω', ω').
Let U be the set of Calabi-Yau structures

U = {(Ω', ω') : ∃ z' ∈ [dθ] such that Ω' + z' ∧ ω' ∈ V}.

U is precisely the set π_Z^{-1}([dθ]) expressed in the local coordinates provided by V: hence U defines a chart for ℳ_SU(3) containing (Ω, ω), as needed. It remains to check that the automorphisms isotopic to the identity do not vary with the Calabi-Yau structure in U. We apply the ideas of subsection <ref>. Suppose that (Ω', ω') and (Ω″, ω″) are structures in U, and Φ is an automorphism of (Ω', ω') isotopic to the identity. There are z', z″ ∈ [dθ] such that Ω' + z' ∧ ω', Ω″ + z″ ∧ ω″ ∈ V; since Φ^* z' - z' is exact, by Lemma <ref> we may find a diffeomorphism Ψ corresponding to a time-1 flow in the S^1 direction such that Φ ∘ Ψ is an automorphism of the G_2 structure Ω' + z' ∧ ω' (clearly isotopic to the identity). Consequently it is an automorphism of Ω″ + z″ ∧ ω″, and so its M part in the sense of Lemma <ref> is an automorphism of (Ω″, ω″). It is easy to see that this M part is precisely Φ, which proves the result.

Now that we have a fixed smooth structure on ℳ_SU(3), we can prove:

The projection map ℳ_G_2^S^1 → ℳ_SU(3) is smooth.

We now know that cohomology classes provide local coordinates for both ℳ_G_2^S^1 and its submanifold ℳ_SU(3) = π_Z^{-1}([dθ]), and therefore it is enough to show the smoothness of the projection map at the level of cohomology classes (H^3(M) in the compact case, and H^3(M) ⊕ H^2(N) as in Theorem <ref> in the asymptotically cylindrical case). We take a neighbourhood U = ℳ' × Z', where ℳ' and Z' are both charts, by the fact that ℳ^S^1_G_2 is a topological product.

For compact manifolds, the projection map becomes

[ϕ'] = [Ω' + z' ∧ ω'] ↦ [Ω' + dθ ∧ ω'],

that is, it is the addition of [dθ - z'] ∧ [ω']. We can work on a slice neighbourhood, so ϕ' is smooth, and then [z'] and [ω'] are smooth by Proposition <ref>. In the asymptotically cylindrical case, we note that z̃'(∂_t) and z̃(∂_t) are zero by the boundary conditions of Definition <ref>; thus the map on the H^2(N) term is the identity, and certainly smooth.

Together these yield Theorem A, the culmination of our work on deformations.

ℳ^S^1_G_2 = Z × ℳ_SU(3)(M),

where ℳ_SU(3)(M), the Calabi-Yau moduli space, is a manifold and this is a smooth product.

We now have a single smooth structure on the fibre ℳ_SU(3) such that both projections of the product ℳ^S^1_G_2 = ℳ_SU(3) × Z are smooth. We have to show that the combination of these two has an isomorphism for its derivative. We work at [ϕ] ∈ π_Z^{-1}([z]). We know by Proposition <ref> that Dπ_Z : Tℳ_G_2^S^1 → TZ is surjective, and by construction its kernel is Tπ_Z^{-1}([z]). On the other hand, π_{ℳ_SU(3)} : π_Z^{-1}([z]) → ℳ_SU(3) is essentially the identity, and so Dπ_{ℳ_SU(3)} : Tπ_Z^{-1}([z]) → Tℳ_SU(3) is also the identity. It follows immediately that Dπ_Z ⊕ Dπ_{ℳ_SU(3)} : Tℳ^S^1_G_2 → TZ ⊕ Tℳ_SU(3) is an isomorphism, and so that we have a local diffeomorphism for the smooth product structure. Hence, as smoothness is a local property, we have a global smooth product.
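As an aside, Theorem A makes dimension counting straightforward. Under the identification of Z with ℝ_{>0} × H^1(M) from Lemma <ref> (so this display is only a restatement of results already proved, not a new claim):

\[
\dim \mathcal{M}^{S^1}_{G_2}(M \times S^1)
\;=\; \dim Z + \dim \mathcal{M}_{SU(3)}(M)
\;=\; b^1(M) + 1 + \dim \mathcal{M}_{SU(3)}(M).
\]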
§ GLUING

We now turn to questions of gluing. The objective of this section is to prove Theorem B (Theorem <ref>), which states that the gluing map on Calabi-Yau structures induced from the gluing map on G_2 structures defines a local diffeomorphism from a moduli space of gluing data to the moduli space of Calabi-Yau structures. We must first show that we can induce a gluing map of Calabi-Yau structures from the gluing map of G_2 structures. We do this in subsection <ref>: we analyse the proof of the gluing result for G_2 structures found in Kovalev <cit.> to prove that matching asymptotically cylindrical S^1-invariant G_2 structures can be glued to form an S^1-invariant G_2 structure, and then Theorem <ref> (Theorem C) gives us a large family of gluing maps (Theorem <ref>). In subsection <ref>, we show that this family of gluing maps defines a unique map to the moduli space ℳ_SU(3) of Definition <ref>. In subsection <ref> we set up the moduli space of gluing data. Chiefly we follow <cit.>, but in a few places that paper abbreviates material from Nordström's thesis <cit.> and we need the full version. As this moduli space is induced from the moduli spaces on the asymptotically cylindrical ends, a result analogous to Theorem <ref> (Theorem A) remains true: this result is Theorem <ref> below. In subsection <ref>, we then restrict to the data that may be glued, and define the gluing map on the moduli space of gluing data. Finally, in subsection <ref>, we analyse the gluing map of G_2 structures in terms of this product structure on the moduli space of gluing data, and identify what deformations in each component correspond to. This analysis enables us to prove Theorem B, by showing that the deformations of G_2 gluing data corresponding to deformations of the Calabi-Yau gluing data give Calabi-Yau deformations, but that the deformations corresponding to the twistings do not affect the final Calabi-Yau structure.

§.§ Gluing of structures

In this subsection, we show that Calabi-Yau structures can be glued. We briefly review the perturbation argument for G_2 gluing. We then show in Theorem <ref> that this argument passes to the S^1-invariant case, using uniqueness, and so defines a collection of gluing maps for Calabi-Yau structures. We begin by identifying suitable pairs of asymptotically cylindrical manifolds.

Suppose that M_1 and M_2 are manifolds with ends, with corresponding cross-sections N_1 and N_2. M_1 and M_2 are said to match with the identification F if we have an orientation-reversing diffeomorphism F : N_1 → N_2. Such an F induces further orientation-preserving maps

F : N_1 × S^1 × (0, 1) → N_2 × S^1 × (0, 1),  (n, θ, t) ↦ (F(n), θ, 1-t),
F : N_1 × (0, 1) → N_2 × (0, 1),  (n, t) ↦ (F(n), 1-t).

Fix T > 1, a "gluing parameter". In practice T will be taken large enough to provide various analytic estimates. Let

M_1 ⊃ M_1^T := M_1^cpt ∪ N_1 × (0, T),

where M_1^cpt denotes the compact piece of M_1, and define M_2^T similarly. Using (<ref>), F defines an orientation-preserving diffeomorphism between N_1 × (T-1, T) and N_2 × (T-1, T). Then we consider

M^T = (M_1^T ∪ M_2^T)/F,

the identification of these two manifolds by F. M^T is a closed and oriented manifold. By virtue of our extension of F, we also see that if we do the same with M_1 × S^1 and M_2 × S^1 we just get M^T × S^1. If F_1 and F_2 are isotopic diffeomorphisms N_1 → N_2, then the manifolds M^T constructed using them are diffeomorphic. Thus our definition depends on the isotopy class of F, which is essentially an arbitrary choice: we shall ignore this choice for the time being, though it will re-emerge in Propositions <ref> and <ref>. We may then suppress F and just write N_1 = N_2. For any T and T', M^T and M^{T'} are diffeomorphic. T-dependence is important when we glue structures, however: we could apply appropriate diffeomorphisms to effectively vary T for the structures without actually varying T for the manifold, but to do so would make the notation more complex for no practical gain. Thus we retain T.
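Explicitly, transporting the model map (<ref>) by the affine shift t ↦ t - (T-1) (one natural choice of convention; any isotopic choice gives a diffeomorphic result), the identification over the neck reads:

\[
N_1 \times (T-1,\,T) \;\ni\; (n, t) \;\longmapsto\; \bigl(F(n),\; 2T-1-t\bigr) \;\in\; N_2 \times (T-1,\,T),
\]

so the two cylindrical ends are glued with the t-directions reversed, which is what makes M^T closed and oriented.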
Now, given a pair of matching structures, they consist of closed forms, and we would like to patch these forms together. The following definition is from <cit.>.

Let M_1 and M_2 be matching manifolds with ends as in Definition <ref>, and let g_1 and g_2 be asymptotically cylindrical metrics on them. Suppose that α_1 and α_2 are asymptotically translation invariant p-forms on M_1 and M_2 respectively. The diffeomorphism F (extended as in Definition <ref>) induces a pullback map F^* from the limiting bundle ⋀^p T^*M_2|_{N_2} to ⋀^p T^*M_1|_{N_1}. α_1 and α_2 are said to match if the image of α̃_2 under F^* is α̃_1. In particular, asymptotically cylindrical Calabi-Yau and G_2 structures are said to match if the forms defining them match, and twistings (as in Definition <ref>) are said to match if they match as forms.

Let T and M^T be as in Definition <ref>. Suppose that α_1 and α_2 are a pair of matching differential forms, and that α_1 and α_2 are both closed. Then the limits α̃_i are closed on N_i, and hence, treated as translation-invariant forms, on the end of M_i. By Lemma <ref> we then have that α_i - α̃_i is exact on the end, and so can be written as dβ_i there. As in equation (<ref>), let ψ_T be a cutoff function with

ψ_T(t) = 1 for t > T-1,  ψ_T(t) = 0 for t ≤ T-2,

and define

α'_i = α_i - d(ψ_T β_i)

on the end, and α'_i = α_i off the end. On the overlap of M_1^T and M_2^T, α'_i = α̃_i, and so the two forms are identified by F. Thus they define a global tensor field α^T on M^T, and α^T is closed because the α'_i are closed. Write α^T = γ_T(α_1, α_2); that is, γ_T is the gluing map giving a closed form on M^T from a closed matching pair.
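For example (a routine check, with the primitives β_i chosen in the obvious way for each summand), for a matching pair of twistings z_i = L dθ + v_i, the dθ part is already translation invariant and needs no cutoff correction, so that

\[
\gamma_T(z_1, z_2) \;=\; L\,d\theta \;+\; \gamma_T(v_1, v_2).
\]

This instance of the linearity of γ_T will be used repeatedly below.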
Given a pair of matching torsion-free G_2 structures, Definition <ref> yields a (not necessarily torsion-free) G_2 structure ϕ^T on M^T. By construction, dϕ^T = 0, and d*_{ϕ^T}ϕ^T can be bounded, with all derivatives, by bounds decaying exponentially in T. We can thus perturb ϕ^T to find a torsion-free G_2 structure. Proposition <ref> and Theorem <ref> below carry out this perturbation. The proposition, which provides the setup, is essentially due to Joyce, and the theorem is summarised from Kovalev <cit.>, though the same result can be obtained by using the work of Joyce. The second paragraph of the theorem is easy to establish from the proof, using lower semi-continuity of the first eigenvalue of the Laplacian in the metric (e.g. <cit.>). In Theorem <ref>, we restrict to the case where the seven-manifold is of the form M^T × S^1; the same proof applies for a seven-manifold glued as in Definition <ref>.

Let X be a compact Riemannian seven-manifold whose metric is defined by a closed, but not necessarily torsion-free, G_2 structure ϕ. Let ⟨·, ·⟩ be the induced inner product on differential forms. Suppose ϕ̂ is a sufficiently small 4-form such that dϕ̂ = d*_ϕϕ (that is, *_ϕϕ - ϕ̂ is close to *_ϕϕ and closed) and η is a sufficiently small 2-form satisfying a certain equation of the form

(dd^* + d^*d)η + *d((1 + (1/3)⟨dη, ϕ⟩)ϕ̂) - *dR(dη) = 0,

where the remainder term R satisfies |R(dη) - R(dξ)| ≤ ϵ|dη - dξ|(|η| + |ξ|) for some constant ϵ. Then ϕ + dη is a torsion-free G_2 structure.

Let M^T × S^1 be a compact seven-manifold constructed as in Definition <ref>, with ϕ^T given by gluing asymptotically cylindrical G_2 structures as in Definition <ref>. We may choose ϕ̂^T by using the approximate gluing (as in Definition <ref>) of the closed forms *_{ϕ_1}ϕ_1 and *_{ϕ_2}ϕ_2 as our closed approximation to *_ϕϕ. Then for T > T_0 sufficiently large we may find a small 2-form η solving (<ref>); dη is unique among solutions of its size, given ϕ̂^T. Moreover, ϕ^T + dη can be chosen to be continuous in the structures ϕ_1 and ϕ_2 with respect to the extended weighted C^∞ topology defined in Definition <ref>, and T_0 can be chosen to be upper semi-continuous in these structures.

A straightforward extension of Theorem <ref> yields the theorem that Calabi-Yau structures can be glued.

Suppose M_1 and M_2 are matching asymptotically cylindrical Calabi-Yau threefolds. Let (Ω_1, ω_1) and (Ω_2, ω_2) be Calabi-Yau structures on M_1 and M_2 matching in the sense of Definition <ref>, and let (z_1, z_2) be a pair of twistings matching in the same sense. Then ϕ_1 = Ω_1 + z_1 ∧ ω_1 and ϕ_2 = Ω_2 + z_2 ∧ ω_2 are S^1-invariant torsion-free G_2 structures, matching in the same sense. Write ϕ^T for the approximate gluing of these torsion-free G_2 structures given by Definition <ref>. There exists T_0 > 0 such that, for all T > T_0, ϕ^T can be perturbed to give an S^1-invariant torsion-free G_2 structure on M^T × S^1. In particular, we get a Calabi-Yau structure (Ω^T, ω^T) on M^T. For each choice of matching twistings, this procedure gives a well-defined and continuous map (Ω_1, ω_1, Ω_2, ω_2) ↦ (Ω^T, ω^T) of Calabi-Yau structures.

By Propositions <ref> and <ref>, Ω_1 + z_1 ∧ ω_1 and Ω_2 + z_2 ∧ ω_2 are indeed S^1-invariant torsion-free G_2 structures on M_i × S^1; they obviously match, since taking the real part and the wedge product commute with pullback. The approximate gluing procedure of Definition <ref> is clearly invariant under the rotation, and so our approximate gluing ϕ^T is S^1-invariant. We know by Theorem <ref> that for T > T_0 sufficiently large we can perturb ϕ^T to a torsion-free G_2 structure: we need to check that this structure is S^1-invariant. To follow the theorem, we need ϕ̂^T to be S^1-invariant. If a G_2 structure ϕ is S^1-invariant, then *_ϕϕ is also S^1-invariant, because pullback by the isometric rotation Θ commutes with the Hodge star. Thus the approximation to *_ϕϕ given by gluing *_{ϕ_1}ϕ_1 and *_{ϕ_2}ϕ_2 is S^1-invariant, and hence so is ϕ̂^T. We may now check that the solution dη, where η solves (<ref>), is S^1-invariant. Since ϕ^T, and so the metric being used, and ϕ̂^T are both S^1-invariant, the operator defining (<ref>) commutes with Θ; hence if η satisfies (<ref>), so too does Θ^*η. The uniqueness statement then implies that Θ^*dη = dΘ^*η = dη, i.e. that dη is S^1-invariant. Using Propositions <ref> and <ref> again, ϕ^T + dη then yields our Calabi-Yau structure (Ω^T, ω^T) on M^T. The claim of continuity in the structures is immediate from the claim in Theorem <ref>.

We have recovered, in more generality, the result of Doi–Yotsutani <cit.>. Their argument proceeds as in the first paragraph of the proof of Theorem <ref>, except with slightly more assumptions, to obtain a torsion-free G_2 structure on M^T × S^1. They then argue as follows.

Suppose M^T is a simply connected manifold and M^T × S^1 admits a torsion-free G_2 structure. Then M^T admits a Ricci-flat Kähler metric.

Consider the universal cover M^T × ℝ of M^T × S^1. M^T × ℝ also admits a torsion-free G_2 structure, and so a Ricci-flat metric, and by the Cheeger–Gromoll splitting theorem <cit.> the metric on M^T × ℝ is given by a Riemannian product N × ℝ. The metric induced on N is Ricci-flat Kähler, by holonomy considerations. By classification theory for compact simply connected spin 6-manifolds, M^T and N are diffeomorphic; hence M^T admits a Ricci-flat Kähler metric.

Our work makes this argument much more concrete, as well as generalising it to the not necessarily simply connected case.
We shall assume that the torsion-free G_2 structure on M^T × S^1 is S^1-invariant; by the proof of Theorem <ref>, this assumption requires no further hypotheses on the structures to be glued. Using S^1-invariance, we write the torsion-free G_2 structure on M^T × S^1 as Ω + (L dθ + v) ∧ ω, where z = L dθ + v is a twisting. We describe the Riemannian universal cover of M^T × S^1 with the G_2 structure Ω + (L dθ + v) ∧ ω. We need to equip the universal cover M^T × ℝ with a torsion-free G_2 structure, and we take the torsion-free G_2 structure Ω + (L dθ) ∧ ω, where θ is the coordinate along ℝ. We now need to define a projection π : M^T × ℝ → M^T × S^1 such that

π^*(Ω + (L dθ + v) ∧ ω) = Ω + L dθ ∧ ω.

Since M^T is simply connected, b^1(M^T) = 0 and so we may write v = df for some function f on M^T. Define π by (x, θ) ↦ (x, [θ - f(x)/L]); then π^*dθ = dθ - df/L, so that

π^*(Ω + (L dθ + v) ∧ ω) = Ω + (L dθ - df + v) ∧ ω = Ω + L dθ ∧ ω,

and π satisfies (<ref>) and so is a Riemannian covering of M^T × S^1 by M^T × ℝ; since M^T is simply connected, it is the universal cover. Note that the torsion-free G_2 structure Ω + (L dθ) ∧ ω is the product structure that can be obtained by using Cheeger–Gromoll, which shows immediately that N = M^T. We thus get the Calabi-Yau structure (Ω, ω) on M^T, and in particular a Ricci-flat Kähler metric. The major gain from our concrete approach is that we get an explicit identification of N and M^T (the identity), and an explicit universal cover; in particular, we obtain a relation between the Calabi-Yau structures we glue and the resulting Calabi-Yau structure.

§.§ Gluing to moduli space

This gluing map is not obviously independent of the twistings z_1 and z_2. However, we now show that the gluing map is independent of z_1 and z_2 as a map to the moduli space ℳ_SU(3)(M^T) defined in Definition <ref>: that is, the Calabi-Yau structure may depend on the twistings, but different twistings result in Calabi-Yau structures that are at worst pullbacks of each other.

We use cohomology. We know that the perturbation made in Theorem <ref> is by an exact form, so does not change the cohomology class of the G_2 structure. This cohomology class can be decomposed, for instance as in the proof of Proposition <ref>, into cohomology classes corresponding to the twisting and the Calabi-Yau structure. If it were the case that the cohomology classes corresponding to the Calabi-Yau structure did not change under this perturbation, then since they are originally just given by gluing the Calabi-Yau structures using Definition <ref> (as we will prove momentarily), they would be independent of the twistings used. It would then follow from Proposition <ref> that the ℳ_SU(3) class is also independent of the twistings used. Unfortunately, it is not quite true that the cohomology classes corresponding to the Calabi-Yau structure do not change under the perturbation: whilst this is the case for [Ω], it is possible we might have to rescale [ω]. In this subsection, we adjust the argument of the previous paragraph so as to prove the result in a similar way.
We begin with the following simple lemma, saying that in cohomology the wedge product of patched forms is the patching of the wedge products.

Suppose that α_1, β_1 and α_2, β_2 are two pairs of closed matching asymptotically translation invariant forms on M_1 and M_2, so that α_1 ∧ β_1 and α_2 ∧ β_2 is also a pair of closed matching asymptotically translation invariant forms. The approximate gluing of Definition <ref> gives well-defined cohomology classes [γ_T(α_1, α_2)], [γ_T(β_1, β_2)], and [γ_T(α_1 ∧ β_1, α_2 ∧ β_2)]. Then

[γ_T(α_1 ∧ β_1, α_2 ∧ β_2)] = [γ_T(α_1, α_2)] ∧ [γ_T(β_1, β_2)].

It is sufficient to show that on M_1^T the cutoff of α_1 ∧ β_1 and the wedge product (cutoff of α_1) ∧ (cutoff of β_1) differ by dγ_1, where γ_1 is supported away from the identified region of M_1^T; for then, when we identify, we get

γ_T(α_1 ∧ β_1, α_2 ∧ β_2) - γ_T(α_1, α_2) ∧ γ_T(β_1, β_2) = dγ_1 + dγ_2

with γ_1 and γ_2 having disjoint support, and the result follows. We will therefore drop the subscripts. As in Definition <ref>, divide α and β, on the end, into a limit part and an exact part:

α = α̃ + dα',  β = β̃ + dβ'.

We then get

α ∧ β = α̃ ∧ β̃ + d(α' ∧ β̃ + (-1)^{deg α} α̃ ∧ β' + α' ∧ dβ').

To simplify notation, set φ = 1 - ψ_T, so that φ = 0 for t > T-1 and φ = 1 for t ≤ T-2 (ψ_T is as defined in equation (<ref>) and used in Definition <ref>). The cutoffs of α and β are α̃ + d(φα') and β̃ + d(φβ'), and it follows that the wedge product of the cutoffs is given exactly by replacing α' and β' with φα' and φβ' in (<ref>). Similarly, the cutoff of the wedge product is given by introducing a φ into the exterior derivative in (<ref>). On taking the difference, all terms but the last then cancel, to give

d(φα' ∧ dβ' - φα' ∧ d(φβ')) = d(φα' ∧ dβ' - φ^2 α' ∧ dβ' - φα' ∧ dφ ∧ β').

Since φ - φ^2 = φ(1-φ) and φ dφ are both supported in (T-2, T-1), (<ref>) is supported in (T-2, T-1); that is, away from the identified region. Thus we have the claimed result.
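The instance of the lemma that we use below is the following (schematically; the matching hypotheses are exactly as in the lemma, and we also use the linearity of γ_T noted earlier):

\[
\bigl[\gamma_T\bigl((L\,d\theta + v_1)\wedge\omega_1,\;(L\,d\theta + v_2)\wedge\omega_2\bigr)\bigr]
\;=\;
\bigl[L\,d\theta + \gamma_T(v_1, v_2)\bigr] \wedge \bigl[\gamma_T(\omega_1, \omega_2)\bigr].
\]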
Combining Lemma <ref> with standard results on the cohomology ring of a compact Kähler manifold, we obtain our result on how the cohomology classes [Ω] and [ω] differ from the approximate gluings of the Ω_i and ω_i.

Suppose that Ω_1 + (L dθ + v_1) ∧ ω_1 and Ω_2 + (L dθ + v_2) ∧ ω_2 are matching torsion-free asymptotically cylindrical S^1-invariant G_2 structures obtained from torsion-free asymptotically cylindrical Calabi-Yau structures (Ω_i, ω_i) and twistings L_i dθ + v_i by Propositions <ref> and <ref>. (Note that L_1 = L_2 =: L, since these two G_2 structures match.) Suppose that for some T these glue as in Theorem <ref> to the S^1-invariant torsion-free G_2 structure ϕ, and that ϕ = Ω + (L' dθ + v) ∧ ω for a Calabi-Yau structure (Ω, ω) and twisting L' dθ + v. Then there exists c > 0 such that

L' = cL,  [Ω] = [γ_T(Ω_1, Ω_2)],  [ω] = (1/c)[γ_T(ω_1, ω_2)],  [v] = c[γ_T(v_1, v_2)].

Removing the torsion does not affect the cohomology class of ϕ, so we have

[γ_T(Ω_1 + (L dθ + v_1) ∧ ω_1, Ω_2 + (L dθ + v_2) ∧ ω_2)] = [Ω + (L' dθ + v) ∧ ω].

Using Lemma <ref> and the obvious linearity of γ_T, we obtain

[γ_T(Ω_1, Ω_2)] + [L dθ + γ_T(v_1, v_2)] ∧ [γ_T(ω_1, ω_2)] = [Ω] + [L' dθ + v] ∧ [ω].

Since γ_T can be defined on any pair of matching asymptotically cylindrical manifolds and commutes with matching maps of such pairs (provided the cutoff functions are chosen appropriately), and we can choose inclusions so that (M_1, M_2) ↪ (M_1 × S^1, M_2 × S^1) is such a pair, the classes [γ_T(Ω_1, Ω_2)], [γ_T(v_1, v_2)], and [γ_T(ω_1, ω_2)] lie in the subset H^*(M^T) of H^*(M^T × S^1) (corresponding to having no dθ terms). Evidently, [Ω], [v], and [ω] also lie in the subset H^*(M^T). By the Künneth theorem, therefore, we have

L[γ_T(ω_1, ω_2)] = L'[ω].

Recalling that L and L' are both positive, set c = L'/L; hence [ω] = (1/c)[γ_T(ω_1, ω_2)]. Taking the other component from the Künneth isomorphism, and writing [γ_T(ω_1, ω_2)] as c[ω], we have

[γ_T(Ω_1, Ω_2)] + [γ_T(v_1, v_2)] ∧ c[ω] = [Ω] + [v] ∧ [ω].

Now [ω] is a Kähler class on M^T and we have

[ω] ∧ [Ω] = 0 = [γ_T(Ω_1 ∧ ω_1, Ω_2 ∧ ω_2)] = [γ_T(Ω_1, Ω_2)] ∧ c[ω],

again using Lemma <ref>. Since c ≠ 0, (<ref>) means that [Ω] and [γ_T(Ω_1, Ω_2)] are classes of primitive 3-cohomology. The remaining two equations now follow from the Lefschetz decomposition, as the primitive 3-cohomology and 1-cohomology components of (<ref>).

We now prove that the constant c of Proposition <ref> does not change as we change the twistings. It is clear that if Ω is known, c is determined for each ω by condition iii) of Definition <ref>:

Ω ∧ Ω̄ = ((-1)^{n(n-1)/2}(-2i)^n/n!) ω^n,

and that if [Ω] is known, exactly the same applies for each [ω]. Using torsion-freeness we can say a little more, passing to [Ω].

Let (Ω, ω) be a Calabi-Yau structure. There is an open neighbourhood U of (Ω, ω) in Calabi-Yau structures, and ϵ > 0, such that if (Ω_1, ω_1) and (Ω_2, ω_2) both lie in U, with [Ω_1] = [Ω_2] and [ω_1] = C[ω_2] for |C - 1| < ϵ, then C = 1.

We work locally around the class of ℳ_G_2^S^1 corresponding to Ω + dθ ∧ ω. We know that there is an open subset V of ℳ_G_2^S^1 around this class which is homeomorphic to an open subset of H^3(M × S^1) as in Theorem <ref> and Theorem <ref>. By reducing the open set if necessary, we may assume using Proposition <ref> that V is a product of open sets U' and W in ℳ_SU(3) and Z respectively. Let U be the set of structures whose moduli class is in U'. W is an open set containing [dθ], so it contains an interval of the line ℝ[dθ]; choose ϵ < 1/2 so that it contains (1 - 2ϵ, 1 + 2ϵ)[dθ]. Now suppose given two structures (Ω_i, ω_i) as in the statement. Clearly, the S^1-invariant torsion-free G_2 class [Ω_1 + dθ ∧ ω_1] lies in V, and since |1/C - 1| < ϵ/(1-ϵ) < 2ϵ, the same is true of the class [Ω_2 + (1/C)dθ ∧ ω_2]. Moreover, we have the equality

[Ω_1 + dθ ∧ ω_1] = [Ω_2 + (1/C)dθ ∧ ω_2]

in H^3. Since V is homeomorphic to its image in H^3(M × S^1), we also have equality in ℳ_G_2^S^1. Applying Proposition <ref> again, we find that C = 1.

A natural question for further study is whether the constant c of Proposition <ref> is necessarily one. This would mean that any such gluing of S^1-invariant G_2 structures does not fundamentally alter the length of the circle factor, and seems natural if the gluing of the Calabi-Yau structures can be done without reference to the G_2 structures. On the other hand, if we regard c instead as a possible rescaling of the holomorphic volume form Ω, it is not obvious that the scaling of the holomorphic volume form should be preserved by gluing. Changing how we regard c would superficially affect much of what follows, as c would have to be controlled in different places, but would not make it substantially different. An interesting preliminary question would be whether the scaling of the holomorphic volume form is uniquely determined by the cohomology, that is, whether Lemma <ref> is true globally.

We now use Lemma <ref> to prove our foreshadowed well-definition result.

Let (Ω_1, ω_1) and (Ω_2, ω_2) be matching Calabi-Yau structures; let z_1 and z_2 be two matching twistings, and z'_1 and z'_2 be another matching pair of twistings. Then, for neck-length parameter T sufficiently large (depending on a curve of G_2 structures that will appear in the proof), the ℳ_SU(3) parts of the results of gluing the G_2 structures Ω_i + z_i ∧ ω_i and Ω_i + z'_i ∧ ω_i are equal, and the Z parts are given by the same multiple of the approximate gluing, though the approximate gluing may be different in the two cases.

The set of matching pairs of twistings is precisely ℝ_{>0} times the vector space of matching closed 1-forms (with appropriate limits) on M_1 and M_2. Therefore it is path-connected, and so there exists a path (z_1(s), z_2(s)) in it with z_i(0) = z_i and z_i(1) = z'_i.
For T sufficiently large (by upper semi-continuity of the minimal T_0 from Theorem <ref> and compactness of [0, 1]), the resulting pairs of matching torsion-free S^1-invariant G_2 structures

(Ω_1 + z_1(s) ∧ ω_1, Ω_2 + z_2(s) ∧ ω_2)

can be glued to give torsion-free S^1-invariant G_2 structures ϕ(s); ϕ(s) is a continuous curve, again by Theorem <ref>. Using the proof of Proposition <ref>, we can split these up as

ϕ(s) = Ω(s) + z(s) ∧ ω(s)

for continuous curves of Calabi-Yau structures and twistings. In the notation of Proposition <ref>, it follows that L' is continuous, and so c is too (because these are determined from z(s)); hence we know that the cohomology classes satisfy

[Ω(s)] = [γ_T(Ω_1, Ω_2)],  [ω(s)] = (1/c(s))[γ_T(ω_1, ω_2)]

for a continuous positive function c(s). In particular, we see that (Ω(s), ω(s)) is a continuous curve of Calabi-Yau structures with [Ω(s)] fixed and [ω(s)] only varying in a line. It follows from Lemma <ref> that c(s) is locally constant, and hence constant; thus [Ω(s)] and [ω(s)] are both fixed, and so the moduli class of (Ω(s), ω(s)) is fixed too, which proves the first claim. The second claim follows since we also have [z(s)] = c(s)[γ_T(z_1(s), z_2(s))] from Proposition <ref>.

Proposition <ref> essentially says that if two pairs of G_2 structures differ just by a change of twistings, then their images under gluing also differ just by a change of twisting, and potentially by a diffeomorphism. In subsection <ref>, we shall analyse the gluing map between moduli spaces for G_2 closely, to work out how the gluing map between Calabi-Yau moduli spaces behaves. Proposition <ref> will be used to say that a variation corresponding to a twisting glues to a variation corresponding to a twisting: we would like to know what happens when we vary the Calabi-Yau structure or the gluing parameter T.

In varying the Calabi-Yau structure, there are two complications compared with varying the twisting. Firstly, it is not at all clear that ℳ_SU(3) is connected, so we will need to assume the existence of the curve used in the proof of Proposition <ref>. In any case, the factor c may vary.

Suppose that z_1 and z_2 are a pair of matching twistings. Let (Ω_1, ω_1) and (Ω_2, ω_2) be a pair of matching Calabi-Yau structures, and let (Ω'_1, ω'_1) and (Ω'_2, ω'_2) be another such pair. Suppose that there exists a continuous curve through matching pairs of Calabi-Yau structures joining these pairs. Then, for neck-length parameter sufficiently large (depending on this curve), the Z parts of the results of gluing the G_2 structures Ω_i + z_i ∧ ω_i and Ω'_i + z_i ∧ ω'_i are proportional.

By choosing T large, as in the proof of Proposition <ref>, we get a continuous curve of glued structures; write them as Ω(s) + z(s) ∧ ω(s). By Proposition <ref>, we know that [z(s)] = c(s)[γ_T(z_1, z_2)]. Hence [z(0)] = (c(0)/c(1))[z(1)].

The only remaining question is the effect of varying the neck-length parameter T: again we obtain

Suppose (Ω_i, ω_i) are matching Calabi-Yau structures and z_i are matching twistings. Let T and T' be a pair of positive reals exceeding the minimal gluing parameter T_0 for the associated matching G_2 structures Ω_i + z_i ∧ ω_i. Then, as in Proposition <ref>, the Z parts of the glued structures are proportional.

Choose a curve T(s) from T to T', always greater than T_0. As in Propositions <ref> and <ref>, write the curve of glued structures as Ω(s) + z(s) ∧ ω(s) and use Proposition <ref> to get [z(s)] = c(s)[γ_{T(s)}(z_1, z_2)]. The statement follows as in Proposition <ref> if [γ_{T(s)}(z_1, z_2)] is independent of s.
Because the common limit of z_1 and z_2 has no dt term, the natural diffeomorphism pulls back the gluing with a larger T to the smaller T with only a compactly supported error (see <cit.>), and so [γ_{T(s)}(z_1, z_2)] is indeed independent of s.

§.§ Moduli spaces of gluing data

By combining Theorem <ref> with Proposition <ref>, we have thus shown that there is a single well-defined gluing map from matching pairs of Calabi-Yau structures to the moduli space ℳ_SU(3)(M^T). We now define a moduli space of gluing data and show that this gluing map induces a well-defined map between these moduli spaces. For the definition, we follow the ideas and notation for the G_2 case in <cit.>. There, Nordström restricts for simplicity to the special case in which the first Betti number of the glued manifold is zero, though the result is true in general. In our case, b^1(M^T × S^1) is clearly nonzero, and though we could similarly argue for the special case b^1(M^T) = 0, we will follow the full-generality analysis provided by Nordström in <cit.> in the relevant place.

In this subsection, we define a quotient which we expect to define a sensible space of gluing data, and show that this quotient is a manifold. The idea, which is used in <cit.>, is to use a sequence of larger and larger spaces, and show each in turn is a manifold. The smallest is the space ℬ of "matching moduli classes"; the second is the space 𝒢 of "moduli classes of matching pairs"; and finally we end up with the space 𝒢̃ of "moduli classes of matching pairs and gluing parameters", which is the one we require.

We first review the definitions and results in the G_2 case: these pass to the S^1-invariant G_2 case with very little additional work. We show that 𝒢̃_G_2^S^1 is a principal ℝ-bundle over 𝒢_G_2^S^1, and that 𝒢_G_2^S^1 is a bundle over ℬ_G_2^S^1, which defines its coordinates (Proposition <ref>). We provide some detail of the proof that 𝒢_G_2^S^1 is a bundle over ℬ_G_2^S^1, as this material is not available in <cit.> and is not fully given even in <cit.>. We then pass simultaneously to the analogous spaces for Calabi-Yau structures and the relationship between the G_2 and Calabi-Yau cases. We show that the analogous spaces for Calabi-Yau structures are smooth and that the inclusion maps from the Calabi-Yau versions to the G_2 versions are smooth. For smoothness of the spaces, we use both methods similar to the G_2 case and what we already know about the relationship between G_2 and Calabi-Yau structures. The final theorem of this subsection (Theorem <ref>) is a gluing-data version of Theorem A (Theorem <ref>), saying that the space of "S^1-invariant G_2 gluing data" is a product of the space of "Calabi-Yau gluing data" with a suitable space of twistings in the sense of Definition <ref>.

We now begin by summarising the G_2 case, with the minor changes required to make the results S^1-invariant. We will make some of the definitions in greater generality, however, as otherwise we would have to make exactly parallel definitions in the Calabi-Yau case. To define a moduli space of gluing data, we need to define an action on matching structures by matching pairs of diffeomorphisms. The type of the structure is irrelevant here; consequently, we shall write υ for the structures, which we shall use generally in this subsection when giving an argument that applies in both cases. We recall from Definition <ref> that the limit of an asymptotically cylindrical diffeomorphism Φ is a pair (Φ̃, L) such that the diffeomorphism decays to (n, t) ↦ (Φ̃(n), t + L).
The following definition is adapted from <cit.>. Suppose M_1, M_2, and F are as in Definition <ref>. Suppose that Φ_1 and Φ_2 are asymptotically cylindrical diffeomorphisms of M_1 and M_2 with limits (Φ̃_i, L_i). They are said to match if F^{-1}Φ̃_2F = Φ̃_1 as a map N_1 → N_1. Note that we do not require L_1 = L_2. A matching pair (Φ_1, Φ_2) is isotopic to the identity as a matching pair if Φ_1 and Φ_2 are both asymptotically cylindrically isotopic to the identity as in Definition <ref> and we may choose the isotopies Φ_{1,s} and Φ_{2,s} such that the pair of diffeomorphisms (Φ_{1,s}, Φ_{2,s}) matches for all s. As in Definition <ref>, we shall simply speak of a pair of diffeomorphisms being isotopic to the identity.

We define an action of matching pairs (Φ_1, Φ_2) with limits (Φ̃_i, L_i) on triples (υ_1, υ_2, T), where υ_1 and υ_2 are (asymptotically cylindrical Calabi-Yau or, potentially S^1-invariant, G_2) structures on M_1 and M_2 respectively and T is a real number, which will eventually be the gluing parameter, by

(Φ_1, Φ_2)(υ_1, υ_2, T) = (Φ_1^*υ_1, Φ_2^*υ_2, T - (1/2)(L_1 + L_2)).

(<ref>) is clearly an action on triples of structures. The first thing to show is that (<ref>) preserves the subspace of triples where the structures match as in Definition <ref>, but this is obvious from the definition of matching diffeomorphisms. Therefore we make:

Let

𝒢̃_G_2^S^1 = {matching pairs of torsion-free S^1-invariant G_2 structures and parameters T} / {matching pairs of S^1-invariant diffeomorphisms isotopic to the identity}.

S^1-invariant diffeomorphisms are as defined in Definition <ref>, for consistency with our moduli spaces. The orbit of a matching pair of G_2 structures under matching pairs of diffeomorphisms isotopic to the identity is closed: by Theorem <ref>, the orbit of a G_2 structure under diffeomorphisms isotopic to the identity is closed (and in fact the diffeomorphisms converge to a diffeomorphism giving the new point of the orbit), so we only have to check that being a matching pair isotopic to the identity is a closed condition on pairs of diffeomorphisms, which is obvious. Hence 𝒢̃_G_2^S^1 is Hausdorff. The same applies for Calabi-Yau structures, combining Theorem <ref> with Theorem A (Theorem <ref>) and using the closedness of exact forms.

We consider two additional spaces of gluing data, both of which are smaller than 𝒢̃_G_2^S^1, from which we can construct 𝒢̃_G_2^S^1, and hence infer that it is a manifold. First of all, we define the smallest possible space of gluing data: the subspace of the product of the moduli spaces on each part corresponding to matching moduli classes.

Let ℬ_G_2^S^1 be the space of matching pairs in the S^1-invariant G_2 moduli spaces on M_1 and M_2, that is, of pairs

([ϕ_1], [ϕ_2]) ∈ ℳ_G_2^S^1(M_1) × ℳ_G_2^S^1(M_2)

such that there exist representatives ϕ_1 and ϕ_2 matching in the sense of Definition <ref>.

The final space is the space of pairs of matching classes quotiented by matching diffeomorphisms, defined as for 𝒢̃_G_2^S^1 but forgetting the parameter T. Because we have to deal with two different kinds of structures, the notation is already quite involved, so we give it a specific name. The action by matching pairs of diffeomorphisms is just that of Definition <ref>, restricted.

Let

𝒢_G_2^S^1 = {matching pairs of torsion-free S^1-invariant G_2 structures} / {matching pairs of S^1-invariant diffeomorphisms isotopic to the identity}.
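Schematically, the three spaces just defined fit into a tower; the fibre statements here merely anticipate Proposition <ref> and the charts constructed below, and are proved there rather than assumed:

\[
\tilde{\mathcal{G}}_{G_2}^{S^1}
\;\xrightarrow{\ \text{forget } T\ }\;
\mathcal{G}_{G_2}^{S^1}
\;\xrightarrow{\ \ \pi\ \ }\;
\mathcal{B}_{G_2}^{S^1}
\;\subset\;
\mathcal{M}_{G_2}^{S^1}(M_1) \times \mathcal{M}_{G_2}^{S^1}(M_2),
\]

the first map a principal ℝ-bundle and the second a bundle whose fibre is the group A(ϕ_1, ϕ_2) defined below.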
The notation we are adopting is rather different from that of <cit.>. In that paper, our ℬ is called ℳ_y, following a general principle of using y subscripts to denote matching objects; our space 𝒢̃ is just denoted (𝒳_y × ℝ)/𝒟_y, and our space 𝒢 is denoted 𝒳_y/𝒟_y, or ℬ. It is possible that our use of ℬ for a different space may cause confusion; as in <cit.>, the reason for this notation is that it is the base space of a suitable bundle.

It is clear that 𝒢̃_G_2^S^1 is a principal ℝ-bundle over 𝒢_G_2^S^1. Therefore, by taking the natural smooth structure on such a bundle, it is enough to show that 𝒢_G_2^S^1 is smooth. The argument is essentially that 𝒢_G_2^S^1 is a manifold because it is a covering space of ℬ_G_2^S^1. First, therefore, we have to check that ℬ_G_2^S^1 is a manifold. We have:

ℬ_G_2^S^1 is a smooth manifold. Moreover, around the classes of any matching pair of structures there exist charts for ℬ_G_2^S^1 consisting of matching pairs of structures.

Strictly, of course, Nordström's argument is for the case of ℬ_G_2, defined as in Definition <ref> but removing the constraints on S^1-invariance; but we have shown that locally ℳ_G_2^S^1 is an open subset of ℳ_G_2, and therefore locally ℬ_G_2^S^1 is an open subset of ℬ_G_2. As open subsets of a submanifold are submanifolds, the S^1-invariant result is immediate. The existence of such charts is not part of <cit.> but is clear from its proof.

We now proceed to show that 𝒢_G_2^S^1 is a manifold. We give details for this proof, as it is not found in full generality in <cit.> but only in <cit.>; even there, not all the details are given. The factor by which 𝒢_G_2^S^1 is bigger than ℬ_G_2^S^1 appears in <cit.> (except of course for not requiring S^1-invariance), though we use a somewhat different setup. As it will reappear in the Calabi-Yau case, in Proposition <ref>, we define it separately. It could be defined in general, as it only depends on the Riemannian metric, but to prove its properties we need slice results analogous to those of Theorem <ref> and Proposition <ref>. These results could be obtained in the Riemannian case from Ebin <cit.>.

We first need to weaken the notion of isotopic with fixed limit from Definition <ref>. We will be interested in isotopies Φ_s such that, for some fixed diffeomorphism Ψ, (Ψ, Φ_s) is an isotopy of matching pairs in the sense of Definition <ref>, and thus we make:

Suppose that Φ and Ψ are asymptotically cylindrical diffeomorphisms of an asymptotically cylindrical manifold M, isotopic in the sense of Definition <ref>. An isotopy is a curve Φ_s of asymptotically cylindrical diffeomorphisms; taking limits, an isotopy gives us a curve (Φ̃_s, L_s) of diffeomorphisms of N and real numbers. Φ and Ψ are isotopic with fixed (N) limit if there is an isotopy such that Φ̃_s is independent of s.

To have fixed (N) limit is clearly weaker than having fixed limit in the sense of Definition <ref>, and is precisely what is needed to obtain an isotopy of matching pairs.

Suppose that (M_1, g_1) and (M_2, g_2) are matching asymptotically cylindrical Ricci-flat manifolds, with metrics induced from torsion-free G_2 or Calabi-Yau structures. Suppose that the cross-section is N and that g_1 and g_2 induce the metric g̃ on it, suppressing the diffeomorphism F of Definitions <ref> and <ref>. Consider the set of diffeomorphisms

{diffeomorphisms of N × [0, 3] of the form (n, t) ↦ (f_t(n), t), where f_t = 𝕀 for t ∈ [0, 1], f_t = f_2 for t ∈ [2, 3], and f_2 is an isometry of g̃}.

Let Ã(g̃) be the quotient of this set by isotopies preserving the diffeomorphism on N × ([0, 1] ∪ [2, 3]).
It is clear that a class of Ã(g̃) induces diffeomorphisms on M_1 and M_2, and that taking a different representative gives diffeomorphisms that are isotopic with fixed limit in the sense of Definition <ref>. Consider the subgroup of Ã(g̃) consisting of classes whose induced diffeomorphisms on M_1 are isotopic with fixed (N) limit to isometries of M_1, and the analogous subgroup for M_2. Consider the subgroup G generated by these two subgroups, and let A(g_1, g_2) be the quotient Ã(g̃)/G. By a minor abuse of notation, when the metric is induced from Calabi-Yau structures we shall write Ã(Ω̃, ω̃) and A(Ω_1, ω_1, Ω_2, ω_2). By a slightly larger abuse of notation, when the metric is induced from S^1-invariant torsion-free G_2 structures, we will write Ã(ϕ̃) and A(ϕ_1, ϕ_2) for the sets given by requiring all the diffeomorphisms above to be S^1-invariant in the sense of Definition <ref>.

Elements of Ã(g̃) give an "action" on pairs of structures, as follows.

Suppose that g_1 and g_2 are Ricci-flat metrics on M_1 and M_2 induced by matching Calabi-Yau or torsion-free G_2 structures, and let Ã(g̃) be as in Definition <ref>. Suppose that g'_1 and g'_2 are two other Ricci-flat metrics induced by matching structures. Define an action of Ã(g̃) on the set of such pairs of metrics by

[Φ](g'_1, g'_2) = (g'_1, Φ^*g'_2).

Note that (<ref>) is not well-defined; however, it is well-defined up to pullback by a matching pair of diffeomorphisms isotopic to the identity, and in practice we will only be using (<ref>) as a map to spaces (such as 𝒢_G_2^S^1) where we have quotiented by pullback by matching pairs isotopic to the identity.

We hope that A(ϕ_1, ϕ_2) will define the other smooth component in a local statement "𝒢_G_2^S^1 = A(ϕ_1, ϕ_2) × ℬ_G_2^S^1". We thus need to know that A(ϕ_1, ϕ_2) is smooth.

With the notation of Definition <ref>, Ã(g̃) is a finite-dimensional abelian Lie group. Its tangent space at the class of the identity is the space of Killing fields on N. A diffeomorphism defining a class of Ã(g̃) defines a class of G precisely if its image under the map of Definition <ref> is in the orbit of (g_1, g_2) by matching pairs isotopic to the identity, and so G is a closed subgroup, with tangent space the sum of the subspaces of Killing fields on N that have extensions to Killing fields on M_1 and on M_2. Hence A(g_1, g_2) is also a finite-dimensional manifold, and the map of Definition <ref> passes to a well-defined map of A(g_1, g_2) on structures up to pullback by matching pairs of diffeomorphisms isotopic to the identity. Locally around the class of the identity, A(ϕ_1, ϕ_2) as defined is equal to A(g_1, g_2) for the induced metrics.

To show Ã(g̃) is a finite-dimensional Lie group, we note that it is equivalent, by careful use of bump functions, to the space of curves in Diff_0(N) from the identity to isometries of N, modulo homotopy with fixed endpoints. The corresponding group in <cit.> is in fact defined as this space of curves modulo homotopy. As in <cit.>, standard arguments show that its identity component (in the sense of isotopy through diffeomorphisms in the subset of (<ref>)) is the universal cover of the identity component of the isometry group of N. The identity component of the isometry group is a compact Lie group, with Lie algebra given by the Killing fields. Hence the space of Killing fields gives the tangent space of this universal cover at the identity, and by considering each component in turn we consequently have that the whole of Ã(g̃) is a finite-dimensional Lie group, with tangent space at the class of the identity given by the Killing fields on N.
We now have to show that a diffeomorphism defining a class of Ã(g̃) defines a class of G if and only if its image under the map of Definition <ref> is in the orbit of the matching pair of structures inducing (g_1, g_2) by matching pairs of diffeomorphisms isotopic to the identity. Since this orbit is closed by the remark after Definition <ref>, and we can find local continuous maps giving representatives from Ã(g̃) to diffeomorphisms since Ã(g̃) is finite-dimensional, we may then deduce that G is a closed subgroup, and the rest of the first paragraph follows.

We note from Lemma <ref> and the remark after Definition <ref> that, within Diff_0, an isometry of a metric induced by a Calabi-Yau or torsion-free G_2 structure is an automorphism of that structure. We shall, as in Definition <ref>, write (υ_1, υ_2) for the matching pair of structures, which is either (ϕ_1, ϕ_2) or (Ω_1, ω_1, Ω_2, ω_2). Note that if we take a different diffeomorphism representing the same class of Ã(g̃), the two diffeomorphisms are isotopic, by an isotopy preserving limits. Since such an isotopy changes the result by a matching pair of diffeomorphisms isotopic to the identity, using this different representative does not affect whether the image under the map of Definition <ref> lies in the orbit, so the argument is independent of the representative we choose.

Firstly, if the diffeomorphism Φ representing a class of Ã(g̃) is isotopic, preserving (N) limit, to an automorphism Ψ of υ_2, then we have that Ψ^{-1}Φ is isotopic, preserving (N) limit, to the identity. Consequently the matching pair (id, Ψ^{-1}Φ) is isotopic to the identity, and hence (υ_1, Φ^*υ_2) = (υ_1, Φ^*(Ψ^{-1})^*υ_2) lies in the orbit by matching pairs isotopic to the identity. Consequently, representatives of the part of G corresponding to M_2 map into the orbit under the map of Definition <ref>. Secondly, if the diffeomorphism Φ representing a class of Ã(g̃) is isotopic, preserving (N) limits, to an automorphism Ψ of υ_1, then since Φ is isotopic to the identity the matching pair (Φ, Φ) is isotopic to the identity, and so (Ψ, Φ) is isotopic to the identity. Consequently, (υ_1, Φ^*υ_2) = (Ψ^*υ_1, Φ^*υ_2) lies in the orbit. We have now shown that every diffeomorphism representing a class of G maps into the orbit under the map of Definition <ref>.

It remains to show that if Φ represents a class of Ã(g̃) and (υ_1, Φ^*υ_2) is in the orbit, then Φ represents a class of G. We suppose that (υ_1, Φ^*υ_2) = (Ψ_1^*υ_1, Ψ_2^*υ_2) for some matching pair (Ψ_1, Ψ_2) isotopic to the identity. In particular, the (N) part of their common limit is isotopic to the identity. This isotopy is a curve as at the start of the current proof, so defines a diffeomorphism Ψ̃ giving a class of Ã(g̃). We will show that Ψ̃ is isotopic with fixed (N) limit to both Ψ_1 and Ψ_2. That is, Ψ̃ is isotopic with fixed (N) limit to the automorphism Ψ_1 of υ_1, and also ΦΨ̃^{-1} is isotopic with fixed (N) limit to the automorphism ΦΨ_2^{-1} of υ_2. That is, the classes defined by Ψ̃ and ΦΨ̃^{-1} are in G; it follows that the class defined by Φ lies in G. We note that the extensions of Ψ̃ to M_1 and M_2 are also isotopic to the identity and, moreover, that these isotopies can be chosen to match the original isotopies of Ψ_1 and Ψ_2 (which, since they match each other, have the same isotopy at the limit).
Inverting one of these isotopies, we find an isotopy with fixed (N) limit between 𝕀 and Ψ̃^{-1}Ψ_i for either i; composing with Ψ̃ gives the result.

For the tangent space, we use an infinitesimal version of the previous argument; it would be possible, but more complicated, to extract the tangent spaces from our description of G. Suppose a Killing field X maps, under the derivative of the map of Definition <ref>, into the tangent space of the orbit, extending X to M_2. That is, we have (υ_1, 𝓛_X υ_2) = (𝓛_{Y_1}υ_1, 𝓛_{Y_2}υ_2) for some matching pair of vector fields Y_1 and Y_2. It follows that X - Y_2 is a Killing field for g_2 and that Y_1 is a Killing field for g_1; hence, since Y_1 and Y_2 match, X is the sum of Killing fields extending to Killing fields for g_1 and g_2. The converse follows by reversing the argument. To prove the claim for A(ϕ_1, ϕ_2) we just observe that locally around the class of the identity these manifolds are given by their tangent spaces, and all Killing fields on S^1-invariant Ricci-flat manifolds are S^1-invariant.

The dimension of A(g_1, g_2) may be established using the Mayer–Vietoris theorem, see for instance <cit.>, which shows that it is zero under the condition b^1(M^T) = 0 (of course, in our case we are working with b^1(M^T × S^1) > 0). Before working further with these ideas, we note that we will need to extract elements of A(g_1, g_2) from pairs of matching structures. Consequently, we need to take slightly more care with the slice arguments proving Theorem <ref>, to make sure we can determine diffeomorphisms from structures. Specifically, we require the following result, claimed without proof by Nordström <cit.>.

Suppose one of the following holds.

(i) Let N be a compact five-dimensional manifold. Let U be a sufficiently small neighbourhood in a subspace around the translation-invariant S^1-invariant G_2 structure ϕ_0 on N × S^1 × ℝ, such that U is transverse to the orbit of the identity component Diff_0^{S^1} defined in Definition <ref>.

(ii) Let M be a six-dimensional manifold with an end. Let U be a sufficiently small neighbourhood in a subspace around the asymptotically cylindrical S^1-invariant G_2 structure ϕ_0 on M × S^1, consisting of structures whose limits are torsion-free S^1-invariant G_2 structures with the same automorphism groups as the limit of ϕ_0, and such that U is transverse to the orbit of the identity component of diffeomorphisms in Diff_0^{S^1}(M × S^1) whose limits are automorphisms of the limit structure ϕ̃_0.

Then, in both cases, on the image {Diff^{S^1}_0/(Aut(ϕ_0) ∩ Diff^{S^1}_0)}^*U of the pullback map, the map from a smooth G_2 structure to the class of diffeomorphisms required is continuous and smooth.

It suffices to prove that the analogous map to Diff_0/Aut(ϕ_0) is smooth and continuous, as Diff^{S^1}_0/(Aut(ϕ_0) ∩ Diff^{S^1}_0) is a submanifold of it, by Proposition <ref> and using Proposition <ref> to see that the quotient remains locally the same. We prove (i); (ii) is entirely analogous. Both are straightforward applications of the inverse function theorem. Note that the forms in U are not constrained to be torsion-free G_2 except in their limits, in case (ii). Consider the pullback map from C^{2,α} diffeomorphisms and C^{1,α} 3-forms in U to C^{1,α} 3-forms. By combining Baier's result on the smoothness of the pullback map in the diffeomorphism <cit.> with linearity in the form, the pullback map is smooth.
The derivative is an isomorphism, since U is transverse to the orbits (at the level of derivatives) and we have removed the automorphisms, and so by the inverse function theorem there is a small neighbourhood of ϕ_0 in U, and a small neighbourhood of the identity, which we call D, on which the inverse is continuous and smooth. When we restrict to smooth diffeomorphisms, the pullback map must remain continuous and smooth (as a smooth map to a submanifold). We now just have to globalise in diffeomorphisms. Given some diffeomorphism Φ, consider the subset DΦ and the slice neighbourhood U. The image of DΦ × U under the pullback map is just the pullback by Φ of the image of D × U. Consequently, the map from a point ϕ of the image to the diffeomorphism class is given by composing the inverse of Φ with the diffeomorphism class required for (Φ^{-1})^*ϕ, which depends smoothly on ϕ by the previous paragraph. Since composition and pullback by fixed smooth maps are smooth, it follows that the composition depends smoothly on ϕ.

The slice U exists by the proof of Theorem <ref>. The required transition to the S^1-invariant setting is carried out beginning on page <ref>. The slice required for case (i) occurs in the paragraph immediately before (<ref>), and the slice required for case (ii) in the penultimate paragraph before Theorem <ref>.

We may now use A(ϕ_1, ϕ_2) to find charts for 𝒢_G_2^S^1.

Suppose that [ϕ_1, ϕ_2] ∈ 𝒢_G_2^S^1. We have a chart U = U_1 × U_2 around ([ϕ_1], [ϕ_2]) ∈ ℬ_G_2^S^1 ⊂ ℳ_G_2^S^1(M_1) × ℳ_G_2^S^1(M_2) consisting of matching pairs of structures, such that all these pairs and their limits have the same identity components of their isometry groups, and such that for each ϕ'_i ∈ U_i, for Φ^*ϕ'_i sufficiently close to ϕ_i there is a continuous map Φ^*ϕ'_i ↦ [Φ] ∈ Diff_0^{S^1}/(Aut ∩ Diff_0^{S^1}). Then an open subset of A(ϕ_1, ϕ_2) × U is homeomorphic to an open neighbourhood of [ϕ_1, ϕ_2] in 𝒢_G_2^S^1, by the map from A(ϕ_1, ϕ_2) × U to 𝒢_G_2^S^1

([Φ], (ϕ'_1, ϕ'_2)) ↦ [ϕ'_1, Φ^*ϕ'_2],

where we take the extension of Φ to a diffeomorphism of M_2.

The set U exists by Proposition <ref>, and the properties required are just properties on M_1 and M_2, so hold by the slice theorem, Theorem <ref>. The existence of the required continuous map is given by Proposition <ref>.

We first show that (<ref>) gives a well-defined element of 𝒢_G_2^S^1. Since (ϕ'_1, ϕ'_2) are the representatives of the point of U in the chart, they match, and have the same identity components of their automorphism groups as (ϕ_1, ϕ_2), as do their limits. It follows immediately from Proposition <ref> that (<ref>) is a well-defined map, since we have quotiented by the stabiliser.

Now we show injectivity. If we have [ϕ'_1, Φ^*ϕ'_2] = [ϕ″_1, Ψ^*ϕ″_2] in 𝒢_G_2^S^1, then in particular these define the same class in ℬ_G_2^S^1. Thus so do (ϕ'_1, ϕ'_2) and (ϕ″_1, ϕ″_2), and by hypothesis both of these pairs lie in U. Since U is a slice neighbourhood, it follows that ϕ'_1 = ϕ″_1 and ϕ'_2 = ϕ″_2. It remains to show that if [ϕ'_1, Φ^*ϕ'_2] = [ϕ'_1, Ψ^*ϕ'_2], then [Φ] = [Ψ] in A(ϕ_1, ϕ_2). Again, since as in Proposition <ref> we have quotiented by the stabiliser, we indeed have [Φ] = [Ψ].

It is clear that (<ref>) is continuous, so it only remains to show that it maps to an open subset and that its inverse there is continuous. We shall construct the open set and the inverse on it simultaneously, taking a sequence of smaller open sets as required. First of all, the projection 𝒢_G_2^S^1 → ℬ_G_2^S^1 is continuous, and so the preimage of U is open. This preimage is our first open set V_1.
We also have a natural map from an open subset of 𝒢_G_2^S^1 contained in V_1 to A(ϕ_1, ϕ_2), as follows. Suppose given [ϕ″_1, ϕ″_2] ∈ V_1, which projects to ([ϕ″_1], [ϕ″_2]) ∈ U. By definition, there then exist slice structures ϕ'_1 and ϕ'_2 and asymptotically cylindrical diffeomorphisms Φ_1 and Φ_2 such that ϕ″_i = Φ_i^*ϕ'_i. By construction, ϕ'_1 and ϕ'_2 match, but Φ_1 and Φ_2 need not; note that Φ_1 and Φ_2 are only defined up to isometries, but changing them by an isometry will have no effect on the final class of A(ϕ_1, ϕ_2). Since Φ_1 is asymptotically cylindrically isotopic to the identity, its limit is isotopic to the identity, and hence so is its (N) part. The isotopy from the identity to the (N) part of its limit defines, as in the proof of Proposition <ref>, a diffeomorphism Ψ_1 representing a class of Ã(ϕ̃), such that (Φ_1, Ψ_1) is isotopic to the identity as a matching pair. On the other hand, Φ_2 is also asymptotically cylindrically isotopic to the identity, so we have a diffeomorphism Ψ_2 such that (Ψ_2, Φ_2) is isotopic to the identity as a matching pair. Let Φ' = Ψ_2Ψ_1^{-1}; the diffeomorphism Φ' defines a class of A(ϕ_1, ϕ_2). On a suitably small open set V_2, Φ_1 and Φ_2 depend continuously on ϕ″_1 and ϕ″_2, by the hypothesis. Consequently, since the isotopy can clearly be chosen continuously in the diffeomorphism, so do Ψ_1 and Ψ_2. Since inversion is continuous, the diffeomorphism Φ' = Ψ_2Ψ_1^{-1} also depends continuously on ϕ″_1 and ϕ″_2, and so on an even smaller open subset V_3, Φ' defines a class of A(ϕ_1, ϕ_2) depending continuously on ϕ″_1 and ϕ″_2.

We have now constructed an open subset V_3 of 𝒢_G_2^S^1 and a map to U × A(ϕ_1, ϕ_2) which we hope to be the inverse. It is clearly continuous, by construction. We have to check that it is an inverse, that is, that [ϕ'_1, (Ψ_2Ψ_1^{-1})^*ϕ'_2] = [ϕ″_1, ϕ″_2]. We know that the pairs (Φ_1^{-1}, Ψ_1^{-1}) and (𝕀, Φ_2^{-1}Ψ_2) are both isotopic to the identity as matching pairs. Consequently, we have

[ϕ″_1, ϕ″_2] = [Φ_1^*ϕ'_1, Φ_2^*ϕ'_2] = [Φ_1^*ϕ'_1, Ψ_2^*ϕ'_2] = [ϕ'_1, (Ψ_2Ψ_1^{-1})^*ϕ'_2].

We now show that the charts obtained in Proposition <ref> form an atlas, and so that 𝒢_G_2^S^1 is a manifold.

Suppose given two open subsets of 𝒢_G_2^S^1 as in Proposition <ref>, homeomorphic by the map in the proposition to open subsets of U × A(ϕ_1, ϕ_2) and of U' × A(χ_1, χ_2) respectively. Suppose that these subsets intersect; on the intersection, the transition map

([ϕ'_1], [ϕ'_2], [Φ]) ↦ [ϕ'_1, Φ^*ϕ'_2] = [χ'_1, Φ'^*χ'_2] ↦ ([χ'_1], [χ'_2], [Φ'])

is smooth, where χ'_1 and χ'_2 are the representatives of the classes [ϕ'_1] and [ϕ'_2] in the other chart, and [Φ'] is the relevant class of A(χ_1, χ_2).

The map ([ϕ'_1], [ϕ'_2]) ↦ ([χ'_1], [χ'_2]) is the identity on ℬ_G_2^S^1, and so is smooth. Consequently the maps to the slice representatives χ'_1 and χ'_2 are also smooth. For the map to [Φ'], we note that the structures ϕ'_1 and Φ^*ϕ'_2 depend smoothly on [ϕ'_1], [ϕ'_2] and [Φ]. It is obvious that ϕ'_1 and ϕ'_2 depend smoothly on these classes, by linearity; for [Φ], we first note that since the components of Ã(ϕ̃) are identified with the finite-dimensional space of Killing fields, we may choose a representative Φ for [Φ] smoothly, and then the pullback is smooth by <cit.>. Consequently it suffices to show that the map of Proposition <ref>, determining [Φ'] from the structures ϕ'_1, χ'_1, Φ^*ϕ'_2 and χ'_2, is smooth. Exactly the same argument works, using the smoothness result of Proposition <ref>.
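In summary (a schematic recapitulation of Proposition <ref> and its proof, not a new claim), the chart and its inverse fit together as:

\[
([\Phi], (\phi_1', \phi_2')) \;\longmapsto\; [\phi_1', \Phi^*\phi_2'],
\qquad
[\phi_1'', \phi_2''] \;\longmapsto\; \bigl([\Psi_2\Psi_1^{-1}],\, (\phi_1', \phi_2')\bigr),
\]

where (ϕ'_1, ϕ'_2) are the slice representatives and Ψ_1, Ψ_2 are the limit-correcting diffeomorphisms constructed above.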
Exactly the same argument works, using the smoothness result of Proposition <ref>.We now proceed to the Calabi-Yau case. The spaces are set up as in the S^1-invariant G_2 case, but to show they are manifolds requires some further work. We begin with the definitions.We first make, exactly as in Definition <ref>, Let_SU(3) = matching pairs of Calabi-Yau structures and parameters T/pairs of diffeomorphisms isotopic to the identity as matching pairswhere structures match if they match in the sense of Definition <ref>, and isotopy as matching pairs and the action are as in Definition <ref>.We want to show that _SU(3) is smooth and that we can include it into _G_2^S^1 as a smooth submanifold. It follows from our earlier analysis (in section <ref>) that to define such a map we need a matching pair of twistings (z_1, z_2), with the usual boundary condition z̃_i() = 0. Such a pair of twistings immediately gives a map from matching pairs of Calabi-Yau structures to matching pairs of S^1-invariant G_2 structures, using Theorem <ref> (Theorem C). This map may not induce a well-defined map _SU(3)→^S^1_G_2; if the triples (Ω_1, ω_1, Ω_2, ω_2, T) and (Ω'_1, ω'_1, Ω'_2, ω'_2, T') are identified by the isotopic-to-the-identity matching pair (Φ_1, Φ_2) then the extension (Φ̂_1, Φ̂_2) given as in Lemma <ref> need not identify (Ω_1 + z_1 ∧ω_1, Ω_2 + z_2 ∧ω_2, T) with (Ω'_1 + z_1 ∧ω'_1, Ω'_2 + z_2 ∧ω_2, T'). Hence, we take a quotient of twistings as in Proposition <ref>. We thus make, using notation inspired by Definition <ref>,Let M_1 and M_2 be as in Definition <ref>, and let _Z be the space of matching pairs of twisting classes; that is, ([z_1], [z_2]) in Z(M_1) × Z(M_2) such that there are representatives z_1 and z_2 matching in the sense of Definition <ref>. Here Z(M_i) is the open subset of H^1(M_i × S^1) of Lemma <ref>.We know that a twisting is of the form L dθ + v for v a 1-form on M. Thus if two twistings L_i dθ + v_i match, we have L_1=L_2 and v_1 matches with v_2 (we used the first of these in Proposition <ref>).Thus _Z is the product of _>0 with the set of matching pairs ([v_1], [v_2]). Now the forms v_i have no dt component in the limit and so [ṽ_1 - ṽ_2] is a well-defined element of H^1(N × S^1), and it is easy to see using Mayer-Vietoris that a pair ([v_1], [v_2]) equivalently matches if and only if [ṽ_1-ṽ_2] is zero. Thus the set of matching pairs is a vector space, and _Z is the product of _>0 with a vector space. It follows that _Z is a manifold, and in particular path-connected. Analogously to our use of path-connectedness of the space of twistings in Proposition <ref>, we will use path-connectedness of _Z in Proposition <ref> below to show that which element of _Z we use is not very important. These are indeed the classes we need to use to define our inclusion maps.Suppose that ([z_1], [z_2]) ∈_Z. Define the mapι_[z_1], [z_2]: _SU(3)→_G_2^S^1by taking a pair of matching representatives (z_1, z_2) and then mapping(Ω_1, ω_1, Ω_2, ω_2, T) ↦ [Ω_1 + z_1 ∧ω_1, Ω_2 + z_2 ∧ω_2, T]for any representative quintuple (Ω_1, ω_1, Ω_2, ω_2, T).The map ι_[z_1], [z_2]: _SU(3)→_G_2^S^1 of Definition <ref> is well-defined and injective.First suppose that (Ω_1, ω_1, Ω_2, ω_2, T) and (Ω'_1, ω'_1, Ω'_2, ω'_2, T') are two quintuples representing the same class of _SU(3). Suppose that we use the matching pairs (z_1, z_2) and (z'_1, z'_2), both of which represent ([z_1], [z_2]), to define the corresponding S^1-invariant G_2 structures. 
There is a matching pair of diffeomorphisms (Φ_1, Φ_2) isotopic to the identity such that Ω_1 = Φ_1^* Ω'_1 and so on as in Definition <ref>. It is clear from the definitions that extending these diffeomorphisms to M_i × S^1 as in Lemma <ref> gives a matching pair, still isotopic to the identity in the sense of Definition <ref>, that acts on (Ω'_1 + z'_1 ∧ω'_1, Ω'_2 + z'_2 ∧ω'_2, T') to give (Ω_1 + Φ^*_1(z'_1) ∧ω_1, Ω_2 + Φ^*_2(z'_2) ∧ω_2, T). To prove the result, it thus suffices to show that (Ω_1 + Φ^*_1(z'_1) ∧ω_1, Ω_2 + Φ^*_2(z'_2) ∧ω_2, T) and (Ω_1 + z_1 ∧ω_1, Ω_2 + z_2 ∧ω_2, T) represent the same class of _G_2^S^1. Since (Φ_1^* z'_1, Φ^*_2 z'_2) is another matching pair of representatives for ([z_1], [z_2]), by relabelling it suffices to prove the special case in which (Ω_1, ω_1, Ω_2, ω_2, T) = (Ω'_1, ω'_1, Ω'_2, ω'_2, T').Since z_i - z'_i are exact and asymptotically translation invariant, as in the proof of Lemma <ref> they are df_i for some asymptotically translation invariant f_i. Hence, there are asymptotically cylindrical diffeomorphisms Φ_i ∈_0^S^1 identifying Ω_i + z_i ∧ω_i and Ω_i + z'_i ∧ω_i on M_i × S^1 by Lemma <ref>. We have to check that (Φ_1, Φ_2) can be chosen to be isotopic to the identity as a matching pair and that the common limit is of the form (Φ̃_i, 0), i.e. has no translation component, so that T is unaffected. By the proof of Lemma <ref>, Φ_i is the time-1 flow of f_i, so its limit certainly has no translation component (which would correspond to a flow by ). It then only remains to show that f_1 and f_2 can be chosen to match, as then the flow defines a matching isotopy. However, the proof of Lemma <ref> also yields that the limits of the f_i only depend on the limits of the differences z'_i - z_i; hence, that f_1 and f_2 match follows, if we make appropriate choices, from the fact that these differences match.For injectivity, we apply the proof of Lemma <ref>. If [Ω_1 + z_1 ∧ω_1, Ω_2 + z_2 ∧ω_2, T] = [Ω'_1 + z_1 ∧ω'_1, Ω'_2 + z_2 ∧ω'_2, T']then there is a pair of S^1-invariant diffeomorphisms isotopic to the identity as a matching pair pulling back Ω_i + z_i ∧ω_i to Ω'_i + z_i ∧ω'_i. Taking the M_i parts of these diffeomorphisms as in Lemma <ref> gives diffeomorphisms of M_i pulling back Ω_i to Ω'_i and ω_i to ω'_i. Evidently these diffeomorphisms are also isotopic to the identity as a matching pair, and as the M part must include the translation part of the limit, the action on T also gives T'. Hence[Ω_1, ω_1, Ω_2, ω_2, T] = [Ω'_1, ω'_1, Ω'_2, ω'_2, T']ι_[z_1], [z_2] is injective. To show that _SU(3) is smooth, and the maps ι_[z_1], [z_2] are smooth inclusions, we introduce smaller moduli spaces exactly analogous to those in the S^1-invariant G_2 case. We begin with the space corresponding to _G_2^S^1 defined in Definition <ref>. Let ℬ_SU(3) be the space of matching pairs in the SU(3) moduli spaces on M_1 and M_2, that is:([Ω_1, ω_1], [Ω_2, ω_2]) ∈_SU(3)(M_1) ×_SU(3)(M_2)such that there exist representatives (Ω_1, ω_1) and (Ω_2, ω_2) matching in the sense of Definition <ref>. We also need the space corresponding to _G_2^S^1 of Definition <ref>. Let_SU(3) = matching pairs of Calabi-Yau structures/pairs of diffeomorphisms isotopic to the identity as matching pairs To show that _SU(3) is a manifold, we argue, just as in the S^1-invariant G_2 case, that _SU(3) and _SU(3) are manifolds. We also have to show that the inclusion maps corresponding to ι_[z_1], [z_2] are all smooth. We begin with _SU(3). 
Using Theorem A (Theorem <ref>), and essentially arguing as in the proof of that theorem that _SU(3) is the fibre of a smooth submersion to _G_2^S^1→_Z, we prove that _SU(3) is a manifold. Proposition <ref> is the Calabi-Yau analogue of Proposition <ref>._SU(3) is a smooth manifold. Moreover, there exist charts consisting of matching pairs of structures around every point, with their groups of automorphisms isotopic to the identity independent of the point in the slice. We have a diffeomorphism _SU(3)×_Z →_G_2^S^1. In particular, the inclusion map _SU(3)→_G_2^S^1 given by any pair of matching cohomology classes ([z_1], [z_2]) is a well-defined smooth immersion.We have shown in Theorem <ref> that _G_2^S^1(M_1 × S^1) ×_G_2^S^1(M_2 × S^1)=_SU(3)(M_1) × Z(M_1) ×_SU(3)(M_2) × Z(M_2)That is, given a pair of S^1-invariant G_2 moduli classes [ϕ_1] and [ϕ_2] we can express them in terms of Calabi-Yau structures by pairs ([Ω_1, ω_1], [z_1]) and ([Ω_2, ω_2], [z_2]). If moreover ([ϕ_1], [ϕ_2]) ∈_G_2^S^1, then there exist representatives ϕ_1 and ϕ_2 that match. By applying uniqueness in Proposition <ref> to the limits, it follows immediately that the corresponding representatives z_i, ω_i, and Ω_i all match. Conversely, given a matching pair of Calabi-Yau classes and a matching pair of twisting classes, taking matching representatives for these pairs and then combining them as in Proposition <ref> gives a matching pair of S^1-invariant G_2 structures and hence of classes.It follows therefore that the submanifold _G_2^S^1 can be expressed in terms of the product structure of (<ref>) as_G_2^S^1 = _SU(3)×_ZWe proceed exactly as in the case of proving that _SU(3) is a manifold by showing that _SU(3) is the fibre of a surjective submersion. By the remark after Definition <ref>, _Z is the product of _>0 with a vector space, and so a manifold. An obvious smooth path of structures and hence of classes (since a path of structures defines a path of cohomology classes) yields that the map _G_2^S^1→_Z is a submersion; it follows that we have a collection of manifold structures on _SU(3) by the implicit function theorem. The natural inclusion map from _SU(3) to _SU(3)×_SU(3) is a smooth immersion from the smooth structure given on _SU(3) by the implicit function theorem, because it is the composition_SU(3)_G_2^S^1_G_2^S^1×_G_2^S^1↠_SU(3)×_SU(3)It follows that the smooth structure on _SU(3) is independent of the point of _Z. We already know that _G_2^S^1 is a product as a topological space, as a subspace of the topological product (<ref>). It remains to check that the bijective homeomorphism _SU(3)×_Z →_G_2^S^1 is a diffeomorphism. We may choose coordinates as in the statement by applying Proposition <ref>. Now the maps from an S^1-invariant G_2 structure to its Z and Calabi-Yau parts are smooth, and combining these we see that the map _G_2^S^1→_SU(3)×_Z is smooth; similarly given a Calabi-Yau structure and a twisting the map to an S^1-invariant G_2 structure is smooth, so the map _SU(3)×_Z →_G_2^S^1 is smooth, and hence is indeed a diffeomorphism. We now turn to the smoothness of _SU(3). The coordinate charts are set up as in Proposition <ref>.Suppose that [Ω_1, ω_1, Ω_2, ω_2] ∈_SU(3). 
We have a chart U for _SU(3) around ([Ω_1, ω_1], [Ω_2, ω_2]) consisting of matching pairs of structures, such that all these pairs and their limits have the same groups of isometries isotopic to the identity, and that for each (Ω, ω) on M_1 or M_2, for (Φ^*Ω_i, Φ^* ω_i) sufficiently close to (Ω_i, ω_i) there is a continuous map (Φ^* Ω_i,Φ^* ω_i)↦Φ (which is not necessarily unique). Then an open subset of A(Ω_1, ω_1, Ω_2, ω_2) × U is homeomorphic to an open neighbourhood of [Ω_1, ω_1, Ω_2, ω_2] in _SU(3), by the map from A(Ω_1, ω_1, Ω_2, ω_2) × U to _G_2^S^1([Φ], (Ω'_1, ω'_1, Ω'_2, ω'_2)) ↦ [Ω'_1, ω'_1, Φ^* Ω'_2, Φ^* Ω'_2]where A(Ω_1, ω_1, Ω_2, ω_2) is as in Definition <ref>, and its action is as described in Definition <ref>.Once the first paragraph of the proposition is established, the rest follows by exactly the same methods as in Proposition <ref>. To establish the first paragraph, we note that we have charts with the required property on isometries by Proposition <ref>, and a continuous map giving diffeomorphisms between Calabi-Yau structures is the following composition of which every step is continuous. Given a pair of structures (Ω, ω) and (Ω', ω') close by and representing the same moduli class, we have that the continuous images Ω + dθ∧ω and Ω' + dθ∧ω' are also close by and represent the same moduli class. In turn, therefore, we have a continuously dependent S^1-invariant Φ pulling back the first to the second, by the result of Proposition <ref>. We know that (Ω, ω) and (Ω', ω') represent the same class of _SU(3), so there is a diffeomorphism Φ' pulling back one to the other. Composing the extension of Φ' using Lemma <ref> with Φ^-1 clearly gives an automorphism of the G_2 structure, and hence an isometry of the product metric. Therefore restricting the composition to M as in Lemma <ref> is also an isometry, and it follows by Lemma <ref> that it is an automorphism of the Calabi-Yau structure. In particular, we see that Φ^* Ω = Ω' and Φ^* ω = ω'. The map of diffeomorphisms given by restricting Φ to M is continuous, as we see in Lemma <ref> that restricting an S^1-invariant diffeomorphism to M is essentially composition with an inclusion and a projection: that is, the final step is continuous, as required. We now show that _SU(3) is smooth and the inclusion maps ι_[z_1], [z_2]: _SU(3)→_G_2^S^1 induced as in Definition <ref> are smooth.Let (Ω_1, ω_1, Ω_2, ω_2) define a class of _SU(3) and let (z_1, z_2) define a class of _Z. Let (ϕ_1 = Ω_1 + z_1 ∧ω_1, ϕ_2 = Ω_2 + z_2 ∧ω_2) define ι_[z_1], [z_2]([Ω_1, ω_1, Ω_2, ω_2]) ∈_G_2^S^1. The manifolds A(Ω_1, ω_1, Ω_2, ω_2) and A(ϕ_1, ϕ_2) of Definition <ref> can be naturally identified in a neighbourhood of the class of the identity. The inclusion map ι_[z_1], [z_2] can thus be examined locally in terms of the local homeomorphisms of Propositions <ref> and <ref> asA(Ω_1, ω_1, Ω_2, ω_2) ×_SU(3)→_SU(3)→^S^1_G_2→ A(ϕ_1, ϕ_2) ×_G_2^S^1It is smooth. Moreover, it is an immersion, so the coordinate charts defined on _SU(3) in Proposition <ref> form an atlas.Locally around the identity, both A(ϕ_1, ϕ_2) and A(Ω_1, ω_1, Ω_2, ω_2) are manifolds. Consequently, we may work with the tangent spaces at the identity. These are quotients as identified in Proposition <ref>: the quotient of Killing fields on the cross-section by those that extend to Killing fields on the asymptotically cylindrical pieces.It is clear that a Killing field on N extends to a Killing field on N × S^1. 
Conversely, given a Killing field on N × S^1, it is (since parallel and so S^1-invariant) X + c, with X a Killing field on N and c a constant, which defines a map from Killing fields on N × S^1 to Killing fields on N. Hence we have maps between Ã(ϕ̃) and Ã(Ω̃, ω̃). We have to show first that the maps induced on the quotients A(ϕ_1, ϕ_2) and A(Ω_1, ω_1, Ω_2, ω_2) by these are well-defined. If a Killing field on N extends to a Killing field on M_1, say, then clearly the corresponding Killing field on N × S^1 extends as a Killing field to M_1 × S^1. Conversely, if a Killing field X + c on N × S^1 extends to a Killing field on M_1 × S^1, then since c is itself a Killing field on M_1, X must also so extend. Thus these maps are well-defined. That the maps are inverse to each other also follows easily from the fact that c extends to an S^1-invariant Killing field of M_1. Thus, in a sufficiently small subset of the identity, A(Ω_1, ω_1, Ω_2, ω_2) and A(ϕ_1, ϕ_2) are naturally identified. For the second claim, for notational simplicity setting Ψ = Φ^-1, in these coordinates ι_[z_1], [z_2] becomes([Ω_1, ω_1], [Ω_2, ω_2], [Φ])↦ [(Ω_1, ω_1), ( Φ^* Ω_2, Φ^* ω_2)]↦ [Ω_1 + z_1 ∧ω_1,Φ^* (Ω_2 + Ψ^*z_2 ∧ω_2)] ↦ ([Ω_1 + z_1 ∧ω_1], [Ω_2 + Ψ^*z_2 ∧ω_2], [Φ])The map from A(Ω_1, ω_1, Ω_2, ω_2) to A(ϕ_1, ϕ_2) is clearly the identity under the identification of the previous paragraph and so smooth, so it is sufficient to check that the map to the _G_2^S^1 component is smooth. We show that the _G_2^S^1 component is independent of Φ. Then the _G_2^S^1 component is just ([Ω_1 + z_1 ∧ω_1], [Ω_2 + z_2 ∧ω_2]), which depends smoothly on the class ([Ω_1, ω_1], [Ω_2, ω_2]) of _SU(3) by Proposition <ref>. So, it is enough to show that as moduli classes we have [Ω_2 + z_2 ∧ω_2] = [Ω_2 + Ψ^*z_2 ∧ω_2]. On an appropriate neighbourhood of the class of the identity in A(Ω_1, ω_1, Ω_2, ω_2), possibly reducing the size of the charts, it is sufficient to show the equalities of cohomology classes[Ω_2 + z_2 ∧ω_2] = [Ω_2 + Ψ̃^* z_2 ∧ω_2][Ω̃_2, 2 + z̃_2 ∧ω̃_2, 2] = [Ω̃_2, 2 + Ψ̃^*z̃_2 ∧ω̃_2, 2]where the additional subscript 2 in the second equation denotes the relevant components of Ω̃_2 = Ω̃_2, 1 + dt ∧Ω̃_2, 2 and ω̃_2 = ω̃_2, 1 + dt ∧ω̃_2, 2 as in Theorem <ref>. Using that theorem, we know that structures that are sufficiently close and have these cohomology classes the same define the same moduli classes. But Φ is isotopic to the identity, and so so is Ψ, and ω_2 and ω_2, 2 are closed (since ω_2 is parallel): it follows that the cohomology classes are the same.ι_[z_1], [z_2] is now obviously an immersion, because the identity is and the inclusion of _SU(3) into _G_2^S^1 is (by Proposition <ref> again). Since the manifold structure on _G_2^S^1 is fixed, and ι_[z_1], [z_2] is independent of which chart we take on _SU(3), we find that each chart is a submanifold of _G_2^S^1. By uniqueness of the smooth structure on a submanifold, it follows that the transition functions for the charts of Proposition <ref> are smooth.Finally, _SU(3) is a principal -bundle over _SU(3) exactly as _G_2^S^1 is a principal -bundle over _G_2^S^1; consequently, it is smooth. The inclusions ι_[z_1], [z_2] of Definition <ref> are bundle maps over the corresponding inclusions _SU(3)→_G_2^S^1: thus, for every pair ([z_1], [z_2]) ∈_Z, ι_[z_1], [z_2]: _SU(3)→_G_2^S^1 is a smooth map. Consequently we have the final result of this subsection The spaces _SU(3) and _G_2^S^1 defined in Definitions <ref> and <ref> are manifolds. 
With _Z as defined in Definition <ref>, we have a diffeomorphism_SU(3)×_Z →_G_2^S^1induced from that of Theorem <ref>.Using the maps ι_[z_1], [z_2] of Definition <ref>, we have a map _SU(3)×_Z →_G_2^S^1([Ω_1, ω_1, Ω_2, ω_2, T], ([z_1], [z_2])) ↦ι_[z_1], [z_2]([Ω_1, ω_1, Ω_2, ω_2, T])(<ref>) is smooth because in local coordinates, by Proposition <ref>, it reduces to the corresponding map _SU(3)×_Z →_G_2^S^1, the identity on the T component, and the identity A(Ω_1, ω_1, Ω_2, ω_2) → A(ϕ_1, ϕ_2). Using that the map onspaces is a diffeomorphism, it follows that (<ref>) is a smooth local diffeomorphism. It is clearly a surjection, as any representative (ϕ_1, ϕ_2, T) of a class of _G_2^S^1 can be written as (Ω_1 + z_1 ∧ω_1, Ω_2 + z_2 ∧ω_2, T) for some matching pair of Calabi-Yau structures and matching pair of twistings as in Proposition <ref>. It is an injection because if [Ω_1 + z_1 ∧ω_1, Ω_2 + z_2 ∧ω_2, T] = [Ω'_1 + z'_1 ∧ω'_1, Ω'_2 + z'_2 ∧ω'_2, T']then there are asymptotically cylindrical diffeomorphisms relating these S^1-invariant G_2 structures, and in particular we see that [z_i] = [z'_i] as in Lemma <ref>. Injectivity then follows by injectivity in Proposition <ref>. Thus (<ref>) is a global diffeomorphism, as claimed.§.§ Restricting to data that can be glued In this subsection, we define the subspaces of the quotient _G_2^S^1 and _SU(3) that actually glue and the corresponding gluing maps (Definitions <ref> and <ref>). We have to define the gluing on Calabi-Yau structures by our inclusions _SU(3)→_G_2^S^1; as there is more than one such inclusion, there is more than one possible such gluing map. Consequently, we must prove that the Calabi-Yau gluing map is independent of the inclusion we consider: this is the final result of this subsection, Proposition <ref>, and essentially follows by combining Proposition <ref> with the fact that the gluing map is well-defined on the G_2 moduli space (Theorem <ref>). We begin by defining _G_2^S^1⊂_G_2^S^1: our definition, Definition <ref>, is adapted from Nordström<cit.>. From Theorem <ref>, we know that any pair of S^1-invariant G_2 structures glues for gluing parameter T>T_0 for some T_0, and T_0 is upper semi-continuous in the pair of structures. <cit.> says that the derivative of the gluing map between moduli spaces of G_2 structures (and hence of S^1-invariant G_2 structures) is an isomorphism for T>T'_0 for some, possibly larger, T'_0. The proof of <cit.> enables us to infer that T'_0 is also upper semi-continuous in the structures: in order to prove Theorem B, that the gluing map between moduli spaces of Calabi-Yau structures is a local diffeomorphism, we would like to have T>T'_0 as well. Therefore we make [cf. <cit.>]Let _G_2^S^1⊂_G_2^S^1be the subset of G_2 gluing data classes that have a representative (ϕ_1, ϕ_2, T) with T large enough that ϕ_1 and ϕ_2 can be glued with parameter T in the sense of Theorem <ref> and the derivative of the gluing map is an isomorphism at the triple (ϕ_1, ϕ_2, T). We see that _G_2^S^1 is an open subset of _G_2^S^1. We may then define a gluing map from _G_2^S^1 in the obvious way. Note that as T is now varying, we cannot sensibly use M^T for the glued manifold as in Theorem <ref>. We shall call it M.The gluing map _G_2^S^1 to _G_2^S^1(M × S^1) is defined as follows. Given a class in _G_2^S^1, by definition it admits a representative (ϕ_1, ϕ_2, T) that glues in the sense of Theorem <ref> (and by the proof of Theorem <ref> the resulting structure is S^1-invariant). 
Take the class of the result in _G_2^S^1(M × S^1).There is likely to be more than one such representative, but Nordström has shownThe map ^S^1_G_2→^S^1_G_2 given by Definition <ref> is well-defined.Of course, Nordström proved that the corresponding map _G_2→_G_2, with _G_2 defined analogously to _G_2^S^1, was well-defined: since both _G_2 and _G_2 are locally diffeomorphic to the corresponding S^1-invariant spaces and the map is defined identically, the S^1-invariant result immediately follows.The most natural definition of _SU(3) would be to take those classes of Calabi-Yau gluing data that have representatives (Ω_1, ω_1, Ω_2, ω_2, T) that glue using the inclusions ι_[z_1], [z_2] of Definition <ref>. However, it is possible that the required T may depend on z_i. We will therefore work initially with subsets depending on the class ([z_1], [z_2]) ∈_Z, but we will then take the union to define our space _SU(3), and check that the gluing map is still well-defined. First, we make Let_SU(3), ([z_1], [z_2]) = ι_[z_1], [z_2]^-1(_G_2^S^1) ={[Ω_1, ω_1, Ω_2, ω_2, T]: [Ω_1 + z_1 ∧ω_1, Ω_2 + z_2 ∧ω_2, T] ∈_G_2^S^1} _SU(3), ([z_1], [z_2]) is the inverse image of an open subset under a continuous map so open. Note that, for any choice of ([z_1], [z_2]), every class of Ĝ_SU(3) is included in _SU(3), ([z_1], [z_2]) for sufficiently large T,because every pair of matching S^1-invariant G_2 structures glues and the derivative is an isomorphism for T sufficiently large. We now define a family of gluing maps: Define the gluing map on the space of gluing data _SU(3), ([z_1], [z_2]) given by Definition <ref> by the composition_SU(3), ([z_1], [z_2])→_G_2^S^1→_G_2^S^1→_SU(3)where the first map is the inclusion ι_[z_1], [z_2], the second map is the gluing map of Definition <ref>, and the third map is the appropriate projection of Theorem <ref>. Rather than a family of spaces of gluing data and corresponding gluing maps, we would like a single space with a single gluing map. We makeLet _SU(3) = ⋃_([z_1], [z_2]) ∈_Z_SU(3), ([z_1], [z_2]) _SU(3) is also an open subset of _SU(3), and so a manifold. We can define a gluing map on it in the natural way Define the gluing map _SU(3)→_SU(3) by taking the map of Definition <ref> on each of the open subsets _SU(3), ([z_1], [z_2]). The gluing map of Definition <ref> is not a priori well-defined. If we are given a class [Ω_1, ω_1, Ω_2, ω_2, T] of _SU(3), there may be a pair of pairs ([z_1], [z_2]) and ([z'_1], [z'_2]) such that ι_[z_1], [z_2]([Ω_1, ω_1, Ω_2, ω_2, T]) and ι_[z'_1, z'_2]([Ω_1, ω_1, Ω_2, ω_2, T]) both lie in _G_2^S^1. We have to show that under the two maps of Definition <ref> [Ω_1, ω_1, Ω_2, ω_2, T] has the same image.We have already proved Proposition <ref>, which says that the _SU(3) components of the gluing of the two pairs (Ω_1 + z_1 ∧ω_1, Ω_2 + z_2 ∧ω_2, T+S) and (Ω_1 + z'_1 ∧ω_1, Ω_2 + z'_2 ∧ω_2, T+S) are equal for S large enough. We now have two distinct classes in _G_2^S^1 corresponding to our two inclusions. For each of these classes, there exist representatives that glue, but we know nothing about how the representatives for the different classes are related. However, for S large enough, we know that the explicit representatives (Ω_1 + z_1 ∧ω_1, Ω_2 + z_2 ∧ω_2, T+S) and (Ω_1 + z'_1 ∧ω_1, Ω_2 + z'_2 ∧ω_2, T+S) glue, and we may apply Proposition <ref> to deduce that these have the same _SU(3) component. 
We will show that the equality of the _SU(3) component is independent of increasing the gluing parameter, so that although (Ω_1 + z_1 ∧ω_1, Ω_2 + z_2 ∧ω_2, T) and (Ω_1 + z'_1 ∧ω_1, Ω_2 + z'_2 ∧ω_2, T) may not glue, the result of gluing the classes [Ω_1 + z_1 ∧ω_1, Ω_2 + z_2 ∧ω_2, T] and [Ω_1 + z'_1 ∧ω_1, Ω_2 + z'_2 ∧ω_2, T] must also have the same _SU(3) component.Suppose that (Ω_i, ω_i, T) and (Ω'_i, ω'_i, T') are representatives of the same class in _SU(3) and for matching pairs of twistings z_i and z'_i, possibly defining different twisting classes, the resulting representatives (Ω_i + z_i ∧ω_i, T) and (Ω'_i + z'_i ∧ω'_i, T') for the corresponding classes of _G_2^S^1 glue and they continue to glue if the parameters T and T' are increased. Suppose that there exists S>0 such that the results of gluing (Ω_i + z_i ∧ω_i, T+S) and (Ω'_i + z'_i ∧ω'_i, T'+S) have the same _SU(3) component. Then so too do the results of gluing (Ω_i + z_i ∧ω_i, T) and (Ω'_i + z'_i ∧ω'_i, T').We have two curves in _G_2^S^1, defined on [0, S] by gluing the curves s ↦ (Ω_1 + z_1 ∧ω_1, Ω_2 + z_2 ∧ω_2, T+s) and s ↦ s(Ω'_1 + z'_1 ∧ω'_1, Ω'_2 + z'_2 ∧ω'_2, T'+s). We consider the projection of these to _SU(3), and call them ([Ω(s), ω(s)]) and ([Ω'(s), ω'(s)]). By Proposition <ref>, _SU(3) is locally represented by the cohomology of Ω and ω, and so these curves are determined by their values at a point and the corresponding curves of cohomology classes. Now these cohomology classes are[γ_T+s(Ω_1, Ω_2)]andc(s)[γ_T+s(ω_1, ω_2)]and the same with primes, where γ_T is the gluing map of Definition <ref>, and the functions c and c' are as in Proposition <ref>. By assumption, [Ω(S), ω(S)] = [Ω'(S), ω'(S)]. In particular, we have the same for the cohomology classes:[γ_T+S(Ω_1, Ω_2)]= [γ_T'+S(Ω'_1, Ω'_2)] c(s)[γ_T+S(ω_1, ω_2)]= c'(s)[γ_T'+s(ω'_1, ω'_2)]As Ω_i, ω_i and Ω'_i, ω'_i define the same Calabi-Yau class, they have agreeing cohomology and agreeing limit cohomology, it follows that [γ_T+s(Ω_1, Ω_2)] = [γ_T'+s(Ω'_1, Ω'_2)] for all s (because they agree at s=S and the change as we reduce s are given by the Mayer-Vietoris sequence and the limit cohomology; see, for example, <cit.>). The Kähler parts are complicated slightly by the functions c and c'. Under the restriction map to the cohomology of M_1, [γ_T+S(ω_1, ω_2)] and [γ_T'+S(ω'_1, ω'_2)] give [ω_1] and [ω'_1], respectively, which are equal by assumption and nonzero by non-degeneracy of the Kähler form. Thus from (<ref>) we see that [γ_T+S(ω_1, ω_2)] = [γ_T'+S(ω'_1, ω'_2)]c(S) = c'(S)The same argument as in the previous paragraph then shows the equality of the cohomology classes over the whole curve. Now Lemma <ref> shows that there exists ϵ such that c(s) = c'(s) fors>S-ϵ. Since the cohomology represents _SU(3) locally homeomorphically we get that the curves in _SU(3) agree for s>S-ϵ; by continuity they also agree at s=S-ϵ. Generalising the above argument, and using the connectedness of [0, S], it follows that the curves in _SU(3) agree at s=0. As envisaged, Lemma <ref> enables us to prove a far stronger well-definition result than Proposition <ref>.The gluing map _SU(3)→_SU(3) of Definition <ref> is well-defined. Suppose that [Ω_1, ω_1, Ω_2, ω_2, T] is a class of _SU(3) and there exist two twisting class pairs [z_1, z_2] and [z'_1, z'_2] such that the classes [Ω_i + z_i ∧ω_i, T] and [Ω_i + z'_i ∧ω_i, T] both lie in _G_2^S^1. 
That is, there are representatives
(Ω_1 + z_1 ∧ω_1, Ω_2 + z_2 ∧ω_2, T) and (Ω'_1 + z'_1 ∧ω'_1, Ω'_2 + z'_2 ∧ω'_2, T')
both of which glue and continue to glue if T and T' are increased. Now take (Φ_1, Φ_2) to be a matching pair of diffeomorphisms of M_1 and M_2 isotopic to the identity pulling back (Ω'_i, ω'_i, T') to (Ω_i, ω_i, T), which represents the same G̃_SU(3) class, by construction. Then there exists S sufficiently large so that
(Ω_1 + Φ^*_1 z'_1 ∧ω_1, Ω_2 + Φ^*_2 z'_2 ∧ω_2, T+S)
also glues. Since the action of (Φ_1, Φ_2) is affine on the gluing parameter, gluing (<ref>) defines the same class of _G_2^S^1 as gluing
(Ω'_1 + z'_1 ∧ω'_1, Ω'_2 + z'_2 ∧ω'_2, T'+S)
By Theorem <ref>, the results of gluing these two are thus the same. Also, however, by Proposition <ref>, and possibly increasing S some more, the result of gluing (<ref>) has the same _SU(3) component as the result of gluing
(Ω_1 + z_1 ∧ω_1, Ω_2 + z_2 ∧ω_2, T+S)
Thus we have that the _SU(3) component given by gluing (<ref>) is the same as for gluing (<ref>). By Lemma <ref>, we can then reduce S to zero, which proves the proposition.

The proof of Proposition <ref> is closely allied to Nordström's proof of the G_2 theorem, Theorem <ref>, which again works by taking a curve for far larger gluing parameter and arguing on cohomology, and then increasing the gluing parameter in a controlled fashion. It would not be that hard to combine the two proofs (essentially proving Proposition <ref> in a representation-independent way).

§.§ The main theorem

We now turn to Theorem B, which is that the gluing map is a local diffeomorphism of moduli spaces, and is Theorem <ref> below. The idea is that by the previous work we locally have
_SU(3) = A(Ω_1, ω_1, Ω_2, ω_2) ×_>0×_SU(3) ⊂ A(Ω_1 + z_1 ∧ω_1, Ω_2 + z_2 ∧ω_2) ×_>0×_G_2^S^1 = _G_2^S^1
we have by Theorem A (Theorem <ref>) that
_G_2^S^1 = _SU(3)× Z
and by Nordström's work <cit.> we understand the gluing map defined (by Definition <ref> below) on _G_2^S^1 and its derivative, which is an isomorphism. We thus just have to consider what happens in terms of the splittings in (<ref>) and (<ref>). We consider the _Z part, _SU(3) part, the A part, and the _>0 part in turn, showing that variations in _Z lead to variations in the Z component of _SU(3)× Z but variations in the other three parts lead to variations in _SU(3) and perhaps rescaling of the Z component. It then follows easily that the composition defined by Definition <ref> also has derivative an isomorphism, and Theorem B then follows from the inverse function theorem.

Formally, we should first note Nordström's result for the G_2 case, which says (<cit.>; see also our Definition <ref>) that the map
_G_2→_G_2
defined analogously to Definition <ref> is a local diffeomorphism. As we know that _G_2 and _G_2 are locally diffeomorphic to _G_2^S^1 and _G_2^S^1 respectively, and the gluing map commutes with these local diffeomorphisms as it maps S^1-invariant structures to S^1-invariant structures (proof of Theorem <ref>), Nordström's result implies

The gluing map _G_2^S^1→_G_2^S^1 defined in Definition <ref> has derivative an isomorphism, and is thus a local diffeomorphism.

We show

The map _SU(3)→^S^1_G_2→^S^1_G_2→_SU(3) defined by Definition <ref> is also a local diffeomorphism.

As the result is local, we work on an open subset _SU(3), ([z_1], [z_2]) as in Definition <ref>. We apply the inverse mapping theorem, as Nordström did to prove the corresponding result for G_2.
He showed (in the proof of <cit.>) that the map between harmonic forms given by the derivative is essentially the gluing map given by Definition <ref>, and that whilst this gluing map isn't an isomorphism, it is injective, and a complement of its image can be obtained by varying T. We have to show the derivative remains an isomorphism when we pre-compose with the inclusion ι_[z_1], [z_2] and post-compose with the projection _G_2^S^1→_SU(3) of Theorem <ref>. We need to consider the tangent spaces of _SU(3), ([z_1], [z_2]), ^S^1_G_2, ^S^1_G_2 and _SU(3) and how they are related to each other. By the work in subsection <ref>, we essentially have the following

The tangent space to _SU(3), ([z_1], [z_2]) is the direct sum T_SU(3)⊕ TA ⊕ T, where T_SU(3) is the inclusion of the tangent space to _SU(3) and A = A(Ω_1, ω_1, Ω_2, ω_2) is defined in Definition <ref>. The tangent space to _G_2^S^1 is the direct sum T_SU(3)⊕ T_Z ⊕ TA ⊕ T, where T_SU(3) is as before, T_Z is the inclusion of the tangent space to _Z, and A = A(ϕ_1, ϕ_2), and the inclusion of T_SU(3) is given by the inclusion of the appropriate components (recalling that the two groups A have the same tangent spaces by Proposition <ref>). The tangent space to _G_2^S^1 is the direct sum of the tangent space to _SU(3) and the tangent space to Z at the corresponding points, and we write these as T_SU(3) and T_Z.

Locally, _SU(3), [z_1], [z_2] is diffeomorphic to _SU(3) as it is an open subset. But _SU(3) is locally diffeomorphic to _SU(3)×ℝ, as a principal ℝ-bundle, which in turn is locally diffeomorphic to _SU(3)× A(Ω_1, ω_1, Ω_2, ω_2) ×ℝ by Proposition <ref>. For _G_2^S^1, we again work locally and note that Theorem <ref> says that _G_2^S^1 is the product of _SU(3) with _Z. The inclusion is the identity for TA ⊕ T because Theorem <ref> preserves these components. For the inclusion _SU(3)→_G_2^S^1, the fact the inclusion is the identity on the _SU(3) component follows from the definition of _SU(3) as a submanifold in Proposition <ref>, and the corresponding product structure. The compact case is even easier.

We begin by considering Proposition <ref>, which almost immediately implies

Suppose that [ϕ_1, ϕ_2, T] = [Ω_1 + z_1 ∧ω_1, Ω_2 + z_2 ∧ω_2, T] ∈_G_2^S^1. Suppose that ([z'_1], [z'_2]) ∈ T_Z, so that [z'_1 ∧ω_1, z'_2 ∧ω_2, 0] ∈ T_Z ⊂ T_G_2^S^1. Applying the derivative of the gluing map of Definition <ref> takes [z'_1 ∧ω_1, z'_2 ∧ω_2, 0] into the T_Z component of _G_2^S^1.

Suppose that we have chosen our representatives such that the triple (Ω_1 + z_1 ∧ω_1, Ω_2 + z_2 ∧ω_2, T) glues. Choose a curve of matching pairs of twistings z_i(s) with z_i(0) = z_i and ż_i(0) = z'_i, for some representative. Upper semi-continuity of the minimal parameter T_0 implies that (Ω_1 + z_1(s) ∧ω_1, Ω_2 + z_2(s) ∧ω_2, T) will also glue for s small. Note that these triples define a curve through [Ω_1 + z_1 ∧ω_1, Ω_2 + z_2 ∧ω_2, T] in the same way as Lemma <ref>, with its tangent exactly [z'_1 ∧ω_1, z'_2 ∧ω_2, 0]. Proposition <ref> says that the curve in _G_2^S^1 constructed by gluing (Ω_1 + z_1(s) ∧ω_1, Ω_2 + z_2(s) ∧ω_2, T) has fixed _SU(3) component, and so its tangent lies in T_Z.

Using the analogous proposition for Calabi-Yau structures (Proposition <ref>), we obtain

Suppose that [Ω_i, ω_i, T] ∈_SU(3), ([z_1], [z_2]), and that [Ω'_i, ω'_i, 0] is a tangent vector in T_SU(3). Pick representatives of Ω_i and ω_i, and let z_1, z_2 be a pair of matching twistings such that the triple (Ω_1 + z_1 ∧ω_1, Ω_2 + z_2 ∧ω_2, T) is gluable.
Then the image of the tangent [Ω'_i + z_i ∧ω'_i, 0] ∈ T_SU(3)⊂ T_G_2^S^1 under the derivative of the gluing map lies in T_SU(3)⊕ [z] ⊂ T_G_2^S^1, where [z] is the twisting class given by applying the gluing of Definition <ref> to z_1 and z_2.

Pick some forms Ω'_i and ω'_i representing [Ω'_i, ω'_i, 0]. Take a curve of matching Calabi-Yau structures (Ω_i(s), ω_i(s)) such that Ω_i(0) = Ω_i, ω_i(0) = ω_i, Ω̇_i(0) = Ω'_i, and ω̇_i(0) = ω'_i. By upper semi-continuity of the minimal parameter T_0, (Ω_1(s) + z_1 ∧ω_1(s), Ω_2(s) + z_2 ∧ω_2(s), T) is gluable for s sufficiently small. We know by Proposition <ref> that the image of (Ω_1(s) + z_1 ∧ω_1(s), Ω_2(s) + z_2 ∧ω_2(s), T) in _G_2^S^1 is of the form [Ω(s) + c(s) z ∧ω(s)] for a curve of Calabi-Yau structures (Ω(s), ω(s)); the result clearly follows by differentiating in s.

In the same way, we have a similar result for the automorphism component TA.

As in Proposition <ref>, let [Ω_i, ω_i, T] ∈_SU(3), ([z_1], [z_2]). Choose representatives Ω_i and ω_i and let z_1, z_2 be a pair of matching twistings such that the triple (ϕ_1=Ω_1 + z_1 ∧ω_1, ϕ_2=Ω_2 + z_2 ∧ω_2, T) is gluable. Consider X ∈ TA(Ω_1, ω_1, Ω_2, ω_2) = TA(ϕ_1, ϕ_2) ⊂ T_G_2^S^1. Its image under the derivative of the gluing map lies in T_SU(3)⊕ [z].

Let Φ_s be the curve of diffeomorphisms of M_2 generated by the curve of Killing fields sX as in Proposition <ref>. By upper semi-continuity of the minimal T_0, (Ω_1, ω_1, Φ_s^* Ω_2, Φ_s^* ω_2, T) is gluable for s sufficiently small. Exactly as in Proposition <ref>, by Proposition <ref>, the image of (Ω_1, ω_1, Φ_s^* Ω_2, Φ_s^* ω_2, T) in _G_2^S^1 is of the form [Ω(s) + c(s) z ∧ω(s)]; the result follows.

Finally, we have to consider the effect of varying the neck length T. Exactly the same argument as in Propositions <ref>–<ref> shows that it follows from Proposition <ref> that T maps into T_SU(3)⊕ [z]. In sum, therefore, we have shown that T_Z maps into T_Z but that T _SU(3), [z_1], [z_2] maps into T_SU(3)⊕ [z].

The proof of Theorem <ref> is now straightforward linear algebra, as follows. We consider the derivative of the gluing map
_SU(3)→_SU(3)
around any given point. Because _SU(3) is given by a union of open sets, we may suppose that this point lies in _SU(3), ([z_1], [z_2]), and then by Definition <ref> the gluing map is locally given by the composition
_SU(3), ([z_1], [z_2])→_G_2^S^1→_G_2^S^1→_SU(3)
Its derivative is therefore given in terms of the decomposition in Proposition <ref> by
T_SU(3)⊕ TA ⊕ T ↪ T_SU(3)⊕ T_Z⊕ TA ⊕ T →^γ T_SU(3)⊕ T_Z ↠ T_SU(3)
where the middle map γ is the derivative of the gluing map on S^1-invariant moduli spaces and is therefore an isomorphism by Theorem <ref>.

Now we have that the components in the tangent space of _G_2^S^1 corresponding to the tangent space of _SU(3) are mapped into T_SU(3)⊕ [z] (by Propositions <ref>, <ref>, and the following comment) and the additional component T_Z is mapped into T_Z (by Proposition <ref>). Moreover, we know that [z] is mapped to by T_Z, by taking the obvious curve of twistings (1+s)(z_1, z_2): that is, by taking [z_1 ∧ω_1, z_2 ∧ω_2, 0] ∈ T_Z. It follows that the composition is injective, because if a vector in T_SU(3) maps to zero under the whole map, then under the G_2 gluing it must map to (0, c[z]). But (0, c[z]) is also the image of something in T_Z, which proves that the G_2 gluing map is not injective, a contradiction.
The composition is also surjective, simply because anything in T_SU(3) is mapped to by some tangent vector in T_G_2^S^1, and if we ignore the T_Z component we still get the same T_SU(3) component under the composition. Hence the composition of derivatives is an isomorphism and the gluing map of Calabi-Yau structures is a local diffeomorphism.We could also have used a simplified version of <cit.> to show that the patching in Definition <ref> defines an isomorphism between the two T_Z components, rather than analysing the remaining components separately.Finally, we make a remark on the possibility of complex gluing parameter T, which is natural when we are gluing complex manifolds. By the Haskins–Hein–Nordström structure theory (<cit.>), any asymptotically cylindrical Calabi-Yau has cross-section a finite quotient of S^1 × X for some X, and so has a rotation map on its asymptotically cylindrical end; we could say that taking gluing parameter T + iS corresponds to a rotation of one asymptotically cylindrical end. Evidently, if we permitted complex gluing parameter, the gluing map would still cover an open subset. Local injectivity depends on how variation of S interacts with the proof of Theorem <ref>, in particular how the automorphism of rotation on the end behaves as a class in A(Ω_1, ω_1, Ω_2, ω_2). If it defines the trivial class, then the moduli class we obtain on gluing is independent of S. If it defines a nontrivial class, then we lose local injectivity, as changing S is equivalent to a change in T_SU(3). abbrv
http://arxiv.org/abs/1703.09201v1
{ "authors": [ "Tim Talbot" ], "categories": [ "math.DG" ], "primary_category": "math.DG", "published": "20170327174158", "title": "Gluing and deformation of asymptotically cylindrical Calabi-Yau manifolds in complex dimension three" }
http://arxiv.org/abs/1703.09054v2
{ "authors": [ "Lara Sousa", "Pedro P. Avelino" ], "categories": [ "astro-ph.CO" ], "primary_category": "astro-ph.CO", "published": "20170327132757", "title": "Revisiting the VOS model for monopoles" }
Estimating parameter uncertainty in binding-energy models by the frequency-domain bootstrap
G.F. Bertsch^1 and Derek Bingham^2
^1Department of Physics and Institute of Nuclear Theory, Box 351560, University of Washington, Seattle, Washington 98915, USA
^2Department of Statistics, Simon Fraser University, Vancouver, CA
December 30, 2023
============================================================================================

We propose using the frequency-domain bootstrap (FDB) to estimate errors of modeling parameters when the modeling error is itself a major source of uncertainty. Unlike the usual bootstrap or the simple χ^2 analysis, the FDB can take into account correlations between errors. It is also very fast compared to the Gaussian process Bayesian estimate as often implemented for computer model calibration. The method is illustrated with a simple example, the liquid drop model of nuclear binding energies. We find that the FDB gives a more conservative estimate of the uncertainty in liquid drop parameters, in better accord with more empirical estimates. For the nuclear physics application, there is no apparent obstacle to applying the method to the more accurate and detailed models based on density-functional theory.

Introduction. The bootstrap method is widely used to estimate sampling distributions of statistics <cit.>. We will show here that the frequency-domain bootstrap (FDB) for time series analysis <cit.> is well suited for estimating uncertainty in the modeling parameters arising in the theory of nuclear binding energies. Parameters such as the binding energy of nuclear matter and the symmetry energy of nuclear matter are needed to construct models of the nuclear equation of state, which is an essential ingredient in the physics of neutron stars. We first describe the method in general terms, and then apply it to the very simple liquid-drop model of nuclear binding energies. The same approach can be applied to more sophisticated models, such as those based on density-functional theory <cit.>, which should provide even narrower limits than can be obtained from the liquid drop description.

χ^2 and the basic bootstrap. Estimating parameters in models and their respective uncertainties is fraught with difficulty. Obviously, if there is theoretical guidance on the functional form of the systematic difference between the system response and the model, it should be incorporated into the parameter estimation. Absent any guidance, the estimation can only be based on the model's performance; parameters that make the model “look" like data are preferred. Denote the model function M(x,p); it depends on parameters p and maps input data variables x (which may be vectors) onto output y. For the nuclear physics model treated below, the variables are x = (Z,N), the proton and neutron numbers, and y = M(x,p) is the binding energy of the nucleus. The first step in applying a model is to determine a best-fit parameter vector, p_0, by minimizing the square of the residual differences r between the model prediction and the experimental data,
r(x) = y_exp(x) - M(x,p).
Denote the corresponding vector of the best-fit residuals by r_0. The experimental data for this sort of procedure is implicitly specified as
y_exp = M(x, p) + C(x)
where C(x) is a correction or error term.
The perfect model would have the form y_exp = M(x,p_t) + C(x), where p_t are the true parameters and r(x) = C(x). Now comes the main assumption of the χ^2 method: the correction terms, C(x), are independent for each x_i (i=1,…,S) and follow a mean-zero Gaussian distribution with equal variances (the equal-variance assumption specifies that the experiments have the same uncertainty and is not strictly necessary). The likelihood for the parameters is then
L(p) = (2πσ_0^2)^-S/2 exp(-∑_i=1^S (y_exp(x_i) - M(x_i,p))^2 / 2σ_0^2),
where σ_0^2 is the residual variance.

Broadly speaking, the bootstrap is an approach to estimating the sampling distribution of a statistic that requires few assumptions on the process that generated the data. The basic idea is that the data, or some function of the data, is repeatedly sampled with replacement. For each bootstrap sample, parameters of interest are estimated, and the ensemble of parameter estimates is the corresponding bootstrap distribution. The basic bootstrap in our setting can now be defined as an approximation to the sampling distribution of the estimator of p_t without specifying a particular error distribution. To do so, we (i) draw samples, with replacement, of residuals from the entries in the r_0 vector and (ii) reestimate p_t. Doing this many times gives the bootstrap distribution of p_t values. From this bootstrap distribution, functionals such as uncertainty estimates or confidence intervals can be computed.

Dealing with correlations. The χ^2 method, or the basic bootstrap for that matter, greatly underestimates the uncertainty of the parameter distribution in many circumstances. The reason is that the assumed ensemble has no correlation between the residuals at different x points; an unlikely occurrence, because if the model overestimates the system mean response at a given x, it is also likely to overestimate the mean at nearby x's as well. If the residuals are correlated, that needs to be taken into account in constructing the sampling distribution for the estimator of p_t; otherwise the variance in the derived sampling distribution will be too small. Correlations can be taken into account by a Gaussian process ensemble of residuals <cit.>, and this method has become an accepted tool in nuclear physics <cit.> and elsewhere in physics <cit.> under the heading of computer model calibration. This amounts to an attempt to consider any systematic signal in the residuals, as a function of x, that is not accounted for by the model. The specification of a Gaussian process requires that a mean function (usually taken to be a constant) and a correlation function be chosen. The specification in (1) is essentially the same as that adopted in <cit.>, where their correction term is the sum of a discrepancy function and experimental error. In the applications we consider, the experimental error is a negligible component of C(x) and can be safely ignored. Therefore, C(x) is analogous to their discrepancy function. The frequency-domain bootstrap that we advocate here is an alternative way to include correlations. Furthermore, it is computationally much more efficient than the Gaussian process approaches of <cit.>.

The Frequency-Domain Bootstrap. As before, we start with a parameter fit p_0 producing a residual vector r_0.
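(Before developing the FDB, here is a minimal Python sketch of the machinery so far: a best fit p_0, the residual vector r_0, and the basic bootstrap just described. The straight-line model and the smooth sinusoid standing in for the correlated correction C(x) are invented purely for illustration and are not from this paper.)

import numpy as np

rng = np.random.default_rng(0)

x = np.arange(100, dtype=float)
def model(x, p):                      # an illustrative stand-in for M(x,p)
    return p[0] + p[1] * x

# "experimental" data = model plus a smooth, correlated modeling error C(x)
y_exp = model(x, [2.0, 0.5]) + np.sin(x / 10.0)

def fit(y):                           # least-squares estimate of p (the model is linear in p)
    A = np.vstack([np.ones_like(x), x]).T
    return np.linalg.lstsq(A, y, rcond=None)[0]

p0 = fit(y_exp)
r0 = y_exp - model(x, p0)             # best-fit residual vector r_0

# Basic bootstrap: resample the residuals with replacement and refit each time.
boot = np.array([fit(model(x, p0) + rng.choice(r0, size=r0.size, replace=True))
                 for _ in range(1000)])
print("basic-bootstrap uncertainties:", boot.std(axis=0))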
We need to have a measure of distance between data points, |x_i - x_j|, and we assume for the moment that x is a one-dimensional array of contiguous integers. This allows us to define the discrete Fourier transform r̃ of the residual vector r. For a residual vector of dimension S, the discrete Fourier transform (and its inverse) may be expressed
r̃(m) = S^-1/2 ∑_n exp(2π i n m/S) r(n),
with m in the range -S/2 < m < S/2. The Fourier transform can also be expressed with pure real variables as
r(n) = r̃(0) + (2/S)^1/2 ∑_m=1^S/2 |r̃(m)| sin(2π n m/S + ϕ_m).
If there are strong correlations of short range in x, the magnitudes |r̃(m)| will be enhanced for m ≪ S/2. On the other hand, if there are no correlations between different x points we expect the components of r̃(m) to be Gaussian distributed with a variance independent of m. We have no information about the phases, ϕ_m, and we assume that they are uniformly distributed in the interval 0 < ϕ < 2π to construct the FDB ensemble. For the “bootstrap" ensemble, we sample, with replacement, the values of |r̃(m)| from the r_0 set of residuals. The uncertainties are calculated as before: (i) sample the ensemble; (ii) refit the model to get a sample p_t; and (iii) extract the statistical uncertainty from the variance of the p_t samples.

Similar to the Gaussian process approach of Kennedy and O'Hagan <cit.>, our approach has an implicit Gaussian assumption for C(x). The main difference is that they view the correction as a realization of a stationary Gaussian process with a specifically chosen covariance function to relate the values of C(x) at inputs x_i and x_j. While our approach also assumes an underlying Gaussian process, we do not have to specify the correlation function. In <cit.>, the chosen correlation function specifies that C(x) is infinitely differentiable. The FDB does not make this smoothness assumption. In this sense, the proposed approach is more flexible and contains the specification in <cit.> as a special case. Furthermore, the Gaussian process approach adopted by <cit.> requires the inversion of an S × S correlation matrix for each evaluation of the Gaussian likelihood. Implementing their method can require tens of thousands of such inversions, each of order O(S^3) <cit.>. For the liquid drop model in the next section, the sample size S is in the thousands, thereby making their approach less computationally appealing. The proposed approach, on the other hand, requires computations of order O(S log S).

Application to the liquid drop model. A standard formulation of the liquid drop model of nuclear binding energies B is
B(Z,N) = a_v A - a_s A^2/3 - a_c Z^2/A^1/3 - a_a (N-Z)^2/A - δ [mod(Z,2) + mod(N,2) - 1]/A^1/2,
where Z and N are the proton and neutron numbers respectively, and A = Z + N. The coefficients of the terms in Eq. (<ref>) have clear physical interpretations.
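(Returning briefly to the resampling step defined above, here is a minimal sketch of one FDB surrogate in Python. numpy's rfft/irfft differ from the unitary normalisation above by an overall factor, which cancels in the round trip; keeping the zero-frequency component fixed is a choice made here that the text does not spell out.)

import numpy as np

def fdb_surrogate(r0, rng):
    # One FDB sample: resample the magnitudes |r~(m)| with replacement and
    # attach phases drawn uniformly from [0, 2*pi), as described in the text.
    S = r0.size
    rt = np.fft.rfft(r0)                                 # half-spectrum, length S//2 + 1
    mags = np.abs(rt)
    new_mags = rng.choice(mags[1:], size=mags.size - 1, replace=True)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=new_mags.size)
    rt_new = np.concatenate(([rt[0]], new_mags * np.exp(1j * phases)))
    return np.fft.irfft(rt_new, n=S)                     # a surrogate residual vector

rng = np.random.default_rng(1)
# r_sample = fdb_surrogate(r0, rng)   # r0 is the best-fit residual vector from the previous sketch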
The nuclear matter binding energy a_v and the asymmetry term a_a are the only ones to survive in the nuclear matter limit, providing the Coulomb energy term a_c is externally compensated (as in a neutron star). It should be emphasized that this should be considered a toy model for the physical problem due to the omission of shell effects. They are included in models based on nuclear energy density functionals; those models achieve a factor of two better accuracy at a cost of a factor of two in the parameter count <cit.>.

We first determine the p_0 parameter set by least-squares minimization of the residuals. The data set is the experimental binding energies of 2037 nuclei from the 2003 nuclear mass table <cit.>. The resulting fit gives a_v = 15.58 MeV and a_a = 22.18 MeV, with a variance of the binding energy residuals σ_r = 3.24 MeV. The nuclear matter binding energy has hardly changed with the additional data of the last 50 years. The first fit that included an error estimate <cit.> found a_v = 15.68 MeV. Their error estimate was “say to 1 or 2%."

We first carry out the χ^2 estimate of the parameter uncertainties, with the result for a_v shown as the top line in Table I. We next carry out the basic bootstrap, randomly assigning the residuals to different (Z,N) and re-optimizing the parameters. The results, shown on the second line of the Table, confirm that the method is a good approximation to χ^2. However, the estimated uncertainty, σ_a_v = 0.03, is wildly unrealistic. This may be seen from alternative formulations of the liquid drop model <cit.> or from improved models that include more of the actual physics. Examples using parameterized energy-density functionals are on the bottom two rows of the Table. The quoted uncertainties are 5 times larger than the naive estimates in the Table.

The problem of course is that the ordinary bootstrap assumes that the residuals are uncorrelated – true for experimental data but not for model errors. This may be seen in Fig. 1, showing the nuclei in the data set with the sign of r(Z,N) indicated by color. More quantitatively, Fig. <ref> shows the residuals as a function of A, averaging over the nuclei in the data set with given A. The central circles are the average residual at fixed A, r_A = ∑_Z+N=A r(Z,N)/N_A, where N_A is the number of data. The two curves delimit the variance of the residuals at fixed A. The residuals are obviously highly correlated.

We now construct the FDB ensemble. We take A to be the variable in the Fourier transform and r_A as the r_0 data set. The Fourier-transformed residuals |r̃_A| are plotted in Fig. <ref>. We take samples of the r_A distribution by inverse Fourier transforming |r̃_A| exp(iϕ_A), where ϕ_A is chosen to be uniform in the interval [0, 2π]. Several samples are shown in Fig. <ref>. We see that the locations of the peaks and troughs can be different from sample to sample.

Before we can refit within our new ensemble we have to decide how to deal with the dependence of the full residual function r(Z,N) on N-Z. The problem of correlations is less severe here because, as may be seen in Fig. 1, the chains of nuclides in the N-Z direction are quite short. We choose to deal with the N-Z degree of freedom by taking a χ^2 distribution of residuals r(Z,N) about r_A. The variance of the distribution is taken from the r_0 data set,
σ_A^2 = (1/N_A) ∑_Z+N=A r(Z,N)^2 - r_A^2.
Following this procedure, we generated parameter sets from 200 samples. A histogram of a_v values is shown in Fig. <ref>.
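(A sketch of this whole pipeline in Python. It reuses fdb_surrogate from the previous sketch; the arrays Z, N, B_exp stand for a mass table that must be supplied; and the Gaussian scatter of width σ_A about the phase-randomised r_A is my reading of the scatter prescription above. The liquid drop model is linear in its parameters, so each refit is a linear least-squares solve.)

import numpy as np

rng = np.random.default_rng(2)

def design_matrix(Z, N):
    # Columns multiply (a_v, a_s, a_c, a_a, delta) in the liquid drop formula.
    A = (Z + N).astype(float)
    pairing = (Z % 2 + N % 2 - 1) / np.sqrt(A)
    return np.column_stack([A, -A**(2.0/3.0), -Z**2 / A**(1.0/3.0),
                            -(N - Z)**2 / A, -pairing])

def fit_drop(Z, N, B):
    return np.linalg.lstsq(design_matrix(Z, N), B, rcond=None)[0]

# Z, N, B_exp: proton numbers, neutron numbers, and measured binding energies,
# assumed loaded from a mass table (2037 nuclei in the paper).
p0 = fit_drop(Z, N, B_exp)
r = B_exp - design_matrix(Z, N) @ p0

Avals = np.unique(Z + N)                   # assumed contiguous, as the FDB requires
r_A   = np.array([r[Z + N == A].mean() for A in Avals])
sig_A = np.array([r[Z + N == A].std()  for A in Avals])

a_v_samples = []
for _ in range(200):
    r_A_s = fdb_surrogate(r_A, rng)        # phase-randomised sample of r_A
    r_s = np.empty_like(r)
    for A, rb, s in zip(Avals, r_A_s, sig_A):
        sel = (Z + N == A)
        r_s[sel] = rb + rng.normal(0.0, s, size=sel.sum())   # scatter of width sigma_A
    a_v_samples.append(fit_drop(Z, N, B_exp - r + r_s)[0])   # refit: model(p0) + surrogate residuals
print("sigma(a_v) =", np.std(a_v_samples))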
The distribution is much broader than that of the ordinary bootstrap. The variance in a_v values comes out to be σ_a_v = 0.17 MeV, which appears to be quite reasonable compared with the results from the better models quoted in the last columns of Table I.

Comparison with the GP method. As mentioned earlier, ensembles of residuals based on the Gaussian process have become very popular. We have implemented the fully Bayesian estimate outlined in <cit.> to estimate parameter uncertainties using a 5-parameter Gaussian process as part of the distribution, with results shown on the fourth line of the Table. The uncertainty in a_v is somewhat smaller than the FDB estimate. That is perhaps to be expected. The Fourier transform can capture many degrees of freedom for the correlations, while the GP is limited to the number of parameters in the Bayesian ensemble. Put another way, the method outlined in <cit.> posits a specific type of correlation structure for the function C(x), while the FDB considers a broader class of Gaussian process models. The result of the increased flexibility is additional uncertainty in the estimated form of C(x). Whether or not this is a good thing depends on how strongly one believes in the more restrictive choice of correlation function in <cit.>.

Discussion. We have demonstrated that the FDB gives a better estimate of parameter uncertainty than the χ^2 method, which is very well known to be unrealistic in the description of nuclear properties <cit.>. Certainly, the uncertainties obtained via the FDB attempt to capture the correlations that the standard χ^2 method ignores. Whether the FDB estimate is large enough to be realistic can still be questioned. We have compared with the results from one family of density functional models, but other models can give larger deviations. Broadly speaking, the advantages of the FDB approach are computational efficiency and a broader exploration of correlation models than the methods in <cit.>. We expect to generally encounter more parameter uncertainty as a result of this broader exploration.

Acknowledgment. This work was stimulated by the program “Bayesian methods in nuclear physics" at the Institute for Nuclear Theory at the University of Washington. GB also acknowledges helpful discussions with W. Nazarewicz and J. Margueron. The research was partially funded by the Natural Sciences and Engineering Research Council of Canada.

[ef77] B. Efron, Annals of Statistics 7, 1 (1979).
[kr12] J.-P. Kreiss and S.N. Lahiri, Handbook of Statistics 30, 3 (2012).
[re16] P.-G. Reinhard and W. Nazarewicz, Phys. Rev. C 93, 051303 (2016).
[ke01] M.C. Kennedy and A. O'Hagan, J. R. Statist. Soc. B 63, 425 (2001).
[mc15] J.D. McDonnell, N. Schunck, D. Higdon, J. Sarich, S.M. Wild, and W. Nazarewicz, Phys. Rev. Lett. 114, 122501 (2015).
[hi15] D. Higdon, J.D. McDonnell, N. Schunck, J. Sarich, and S. Wild, J. Phys. G 42, 034009 (2015).
[pr15] S. Pratt, et al., Phys. Rev. Lett. 114, 202301 (2015).
[be16] J. Bernard, et al., arXiv:1605.0395 (2016).
[recent] Two recent examples are: C. Moore, et al., Phys. Rev. D 93, 064001 (2016); J. Cui and R. Krems, Phys. Rev. Lett. 115, 073202 (2015).
[ka11] C.G. Kaufman, D. Bingham, S. Habib, K. Heitmann, and J.A. Frieman, Ann. Appl. Stat. 5, 4 (2011).
[be05] G.F. Bertsch, B. Sabbey and M. Uusnakki, Phys. Rev. C 71, 054311 (2005).
[audi] G. Audi, A.H. Wapstra, and C. Thibault, Nucl. Phys. A729, 337 (2003). Nuclei were selected for the fit by the criteria Z>7, N>7, A<256, and experimental uncertainty on binding energy less than 0.2 MeV.
[my66] W. Myers and W. Swiatecki, Nucl. Phys. 81, 1 (1966).
[ma17] J. Margueron, R. Casali, and F. Gulminelli, to be published.
[Z2] For example, the replacement Z^2 → Z(Z-1) in Eq. (<ref>).
[hi04] D. Higdon, SIAM J. Sci. Comput. 26, 448 (2004).
[do14] J. Dobaczewski, W. Nazarewicz, and P.-G. Reinhard, J. Phys. G 41, 074001 (2014).
http://arxiv.org/abs/1703.08844v1
{ "authors": [ "G. F. Bertsch", "Derek Bingham" ], "categories": [ "nucl-th" ], "primary_category": "nucl-th", "published": "20170326164321", "title": "Estimating parameter uncertainty in binding-energy models by the frequency-domain bootstrap" }
Why there is no Newtonian backreaction
Nick Kaiser
Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822-1839, USA
======================================

In the conventional framework for cosmological dynamics the scale factor a(t) is assumed to obey the `background' Friedmann equation for a perfectly homogeneous universe while particles move according to equations of motion driven by the gravity of the density fluctuations. It has recently been suggested that the emergence of structure modifies the evolution of a(t) via Newtonian (or `kinematic') backreaction and that this may avoid the need for dark energy. Here we point out that the conventional system of equations is exact in Newtonian gravity and there is no approximation in the use of the homogeneous universe equation for a(t). The recently proposed modification of Racz et al. (2017) does not reduce to Newtonian gravity in the limit of low velocities. We discuss the relation of this to the `generalised Friedmann equation' of Buchert and Ehlers. These are quite different things; their formula describes individual regions and is obtained under the restrictive assumption that the matter behaves like a pressure-free fluid, whereas our result is exact for collisionless dynamics and is an auxiliary relation appearing in the structure equations. We use the symmetry of the general velocity autocorrelation function to show how Buchert's Q tends very rapidly to zero for large volume and that this does not simply arise `by construction' through the adoption of periodic boundary conditions as has been claimed. We conclude that, to the extent that Newtonian gravity accurately describes the low-z universe, there is no backreaction of structure on a(t) and that the need for dark energy cannot be avoided in this way.

Cosmology: theory, dark energy, cosmological parameters

§ INTRODUCTION

It is usually assumed that the spacetime in an inhomogeneous cosmology may be described by a metric which is that of a homogeneous FRW model ds^2 = -dt^2 + a(t)^2 dx^2, where x is conformal position, with additional very small `weak field' metric perturbations. This does not require that the density perturbations be small; only that the velocities associated with structures are small compared to c. The scale factor for the background is assumed to obey the Friedmann equation
ä + (4π/3) G ρ̄ a = 0
for a homogeneous background universe with density ρ̄ ∝ a^-3, and where a dot denotes a time derivative. The density here may be augmented by additional terms ρ + 3P/c^2 for homogeneous dark energy or radiation backgrounds satisfying the appropriate continuity equations. The peculiar (i.e. non-Hubble) motions of non-relativistic particles such as dark matter or galaxies obey the `structure evolution' equations
v̇ + Hv = -∇ϕ/a
where v ≡ aẋ, H = ȧ/a, the spatial derivative ∇ is with respect to comoving coordinates x, and where ϕ is a solution of Poisson's equation sourced by the density perturbation, i.e.
∇^2 ϕ = 4πG(ρ - ρ̄)a^2.
This system of equations, which may also be obtained in Newtonian cosmology (Peebles 1980), may be used to find the evolution of linearised perturbations and solved in N-body codes to obtain non-linear structure. Some, however, going back at least to Ellis (1984), have questioned the validity of this, as (<ref>) is derived assuming that the Universe is homogeneous, which is obviously not the case.
To address this, Buchert and Ehlers (1997), modelling the matter as a Newtonian pressureless fluid (only a crude approximation once multi-streaming occurs, but valid in the linear and quasi-linear regime), have found that the size a ≡ V^1/3 of a region of volume V containing mass M obeys

3ä/a + 4πGM/a^3 = Q.

Here Q = (2/3)⟨(θ - ⟨θ⟩)^2⟩ + 2⟨ω^2 - σ^2⟩, where θ is the volume expansion rate, σ^2 and ω^2 are the shear and rotation squared, and ⟨…⟩ denotes an average over the volume. As with (<ref>) this may be augmented by including a cosmological constant. This is highly suggestive. The equation of motion (<ref>) for the linear size a is strikingly similar to (<ref>) but has an extra term containing quantities that are second order in the velocity shear dv/dr. The quantities being averaged in Q are, like the individual terms in (<ref>), generally of order the inverse squared dynamical time Gρ, so one might naively think this would be a strong effect. However, as Buchert and Ehlers point out, for large volumes the actual effect is less than this. As regards the implications for cosmology, they say that `the average motion may be approximately given by a Friedmann model on a scale which is larger than the largest existing inhomogeneities.', but they also argue that `the "conspiracy assumption" that Q = 0 […] must be considered a strong restriction on generality'. Equation (<ref>) is the basis of Newtonian (or `kinematic') backreaction; the idea that there is a modification of the expansion rate caused by the emergence of structure. It has been studied by Buchert, Kerscher & Sicka 2000, who make some interesting claims; explored in N-body simulations by Kazimierczak 2016; and has been widely discussed in reviews of backreaction (e.g. Buchert & Räsänen, 2012). In a similar vein, Racz et al. (2017) have proposed that the successes of the ΛCDM concordance cosmology can be obtained without the need for dark energy. They say `Cosmological N-body simulations integrate Newtonian dynamics with a changing GR metric that is calculated from averaged quantities.' but that `There is a choice in how the averaging is done.' They propose to maintain equations (<ref>) and (<ref>) but obtain a(t) by averaging the local expansion rate ȧ/a computed from the local density under some simplifying assumptions and then using this to update a(t) at each time-step. Performing N-body calculations using this algorithm and with matter only, they find a(t) very similar to the solution of the Friedmann equation in ΛCDM. They argue that the successes of the concordance cosmology can thereby be retained without the need for dark energy through this `strong backreaction' effect. But is it really legitimate to assume that backreaction from structure causes a(t) to deviate at all from the solution of (<ref>)? We can address this in the context of Newtonian gravity. This is relevant because Newtonian gravity should provide a very accurate description of the local universe since all velocities – Hubble and peculiar – are small. And it is in the relatively local universe that the current expansion rate – a problem for matter-dominated cosmology in the conventional framework – is measured. As we shall discuss in more detail below, at z < 0.1 for example, where H_0 is reliably measured, the background can be treated as Newtonian to a precision of order z^2 ≃ 0.01 and corrections to lowest-order weak-field gravity perturbations are suppressed by at least a factor v_pec/c, the ratio of peculiar velocities to the speed of light.
Also, the absolute value of the curvature radius, which is arguably a non-Newtonian construct and which may be identified with a, is not relevant here. All that counts is the expansion rate ȧ/a and how a(t) changes with time. In a homogeneous model these are determined locally. The question of how inhomogeneity affects the expansion might seem to be more complex, but it would seem bizarre indeed if the expansion rate of the local universe were affected by the emergence of structure in the distant universe. So if backreaction is at all important it should be revealed in a Newtonian analysis. We will now show that, despite the apparently questionable assumption of homogeneity in (<ref>), the system of equations (<ref>-<ref>) is actually precisely equivalent to the full Newtonian equations of motion.

§ NEWTONIAN COSMOLOGY IN SCALED COORDINATES

For N particles of mass m interacting under their mutual gravitational attraction there are 3N second-order differential equations

r̈_i = Gm ∑_{j≠i} (r_j - r_i)/|r_j - r_i|^3.

These may be solved numerically provided initial positions r_i and velocities ṙ_i for the particles. Writing this in terms of arbitrarily re-scaled coordinates r = a(t) x, so ṙ = ȧx + aẋ and r̈ = äx + 2ȧẋ + aẍ, (<ref>) becomes

ẍ_i + 2(ȧ/a)ẋ_i = (Gm/a^3) ∑_{j≠i} (x_j - x_i)/|x_j - x_i|^3 - (ä/a) x_i.

What we are interested in is the motion of particles with initial conditions that are close to being in uniform Hubble expansion with some initial expansion rate H (very close if we start at early times). So we might lay down particles on a regular grid in r-space within some very large spherical boundary centred on the origin and give the particles small displacements δr and velocities ṙ = Hr + δv with `peculiar' velocities δv chosen to excite the growing mode. The corresponding initial conditions in terms of x-coordinates are x = r/a and ẋ = ((H - ȧ/a)r + δv)/a. The sum in (<ref>) will have two components: a `zeroth order' acceleration that, in the limit that the grid spacing becomes very small, is the same as the gravitational acceleration of a uniform density sphere, which grows linearly with x_i, plus a perturbation determined by the displacements from the grid (we may think of the source of the gravity being that of the unperturbed grid of particles plus that of a set of dipole sources). If we define the number density of particles in x-space n(x) ≡ ∑_i δ(x - x_i) and δn ≡ n - n̄ with n̄ the inverse of the grid cell volume in x-space, equations (<ref>) become

ẍ_i + 2(ȧ/a)ẋ_i - (Gm/a^3) ∫ d^3x δn(x) (x - x_i)/|x - x_i|^3 = -(ä/a + 4πGmn̄/(3a^3)) x_i.

It is interesting to compare this with the conventional equations. Those are 3N+1 equations (3 per particle plus the Friedmann equation for a) whereas here we have only 3N equations, just as in (<ref>). But since a(t) is arbitrary we may assert that a(t) is such that the RHS of (<ref>) vanishes – i.e. that a(t) is a solution of (<ref>) – in which case the vanishing of the LHS is equivalent to the conventional structure equations (<ref>) and (<ref>). Moreover, if we set the initial conditions for (<ref>) to be ȧ/a = H then we see from the second of (<ref>) that ẋ = δv/a; the initial velocity in x-space is a pure perturbation with no Hubble-flow component. Alternatively, if one does not require (<ref>) one obtains modified `structure' equations with a large-scale radial acceleration that would drive a Hubble-like flow to compensate. The results for all physical quantities such as positions, velocities, density etc.
however are all invariant with respect to the choice of a(t). We thus obtain the original conventional system of equations, in which there is no feedback (or `backreaction') from the structure equations on the expansion. But this is no longer open to the challenge that (<ref>) is only an approximation. Equation (<ref>) is precisely equivalent to (<ref>); we are simply using the freedom in the choice of a(t) to impose (<ref>) as an identity. We emphasise that the resulting system of equations – the basis of `Newtonian cosmology' – is not novel. It was first obtained by Dmitriev & Zeldovich (1964) and is what is integrated in essentially all modern N-body simulations. Equivalent equations of motion were also obtained by Peebles (1989) in the context of reconstruction of local group orbits from the action principle. The difference here is mainly one of perspective. We have shown that, in principle, the scale factor is arbitrary and need not obey (<ref>) but, in that case, one must then also modify the `structure' equations accordingly. We note that Newtonian cosmology with point mass particles was also considered by Ellis & Gibbons (2015), who considered a model in which there is a population of `background' particles with no peculiar motions and `galaxies' that respond to their own peculiar gravity and the mean-field gravity of the background particles. As discussed by Dmitriev and Zel'dovich (1964), Newtonian cosmology is obtained by considering perturbations to a large uniform density expanding sphere. The radius R of this sphere may be taken to infinity within Newtonian physics as all the physically observable quantities are regular in that limit. In that sense the results are insensitive to the `boundary conditions at infinity'. It is however required that one consider a sphere, as any other geometry would not expand isotropically and homogeneously. Within the infinite sphere the structure equations (<ref>) may be used to describe structure that is periodic within some finite-size box of side L, in which case the peculiar potential, velocity and displacements may be expanded as Fourier series as is commonly done.

§ DISCUSSION

We have tried to clarify the meaning of the conventional equations of Newtonian cosmology. We have expressed the usual Newtonian equations (<ref>) in terms of re-scaled (or what cosmologists call `comoving') coordinates to obtain (<ref>). But in these equations the scale factor a(t) is completely arbitrary and has no physical impact so there is no dynamical equation that a(t) must obey. This reflects the fact that the universe we live in can, if one so wishes, be considered to be a perturbation of some hypothetical `background' cosmology, but there is freedom in choosing the background. Exploiting this freedom, the Friedmann equation (<ref>) may be asserted as an identity, and with the initial conditions set to ȧ/a = H_0 we have shown that we then obtain the conventional equations of cosmological dynamics. In these equations (<ref>) should not be considered a dynamic equation so much as an auxiliary relation that determines the form of the equations of motion of the particles. Newtonian dynamics does not strictly require that the scale factor obey the conventional Friedmann equation. But if a(t) is chosen not to obey the Friedmann equation this results in an additional long-range radial force in the equations of motion in x-coordinates; the RHS of (<ref>). This is required in order that physical quantities like the expansion rate be independent of the choice of scale factor.
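The role played by the right-hand side of (<ref>) can be made concrete in a few lines of code. What follows is a minimal sketch, not the scheme of any particular N-body code: a direct-summation update of the comoving equations of motion, assuming units with Gm = 1, a Plummer softening, and a(t), ȧ(t), ä(t) supplied externally; all names are our own.

```python
import numpy as np

def accel(x, v, a, adot, addot, Gm=1.0, soft=1e-2):
    """Right-hand side of eq. (7) by direct summation over all particles:
    x''_i = -2(adot/a) x'_i + (Gm/a^3) sum_{j!=i} (x_j - x_i)/|x_j - x_i|^3
            - (addot/a) x_i,
    with a Plummer softening `soft` to regularise close encounters."""
    dx = x[None, :, :] - x[:, None, :]                 # dx[i, j] = x_j - x_i
    r3 = (np.sum(dx**2, axis=-1) + soft**2) ** 1.5
    np.fill_diagonal(r3, np.inf)                       # no self-force
    grav = Gm / a**3 * np.sum(dx / r3[:, :, None], axis=1)
    return grav - 2.0 * (adot / a) * v - (addot / a) * x

def rk2_step(x, v, a, adot, addot, dt):
    """One midpoint step. a, adot, addot are treated as constant across the
    step and supplied externally, e.g. from a Friedmann integration; in that
    case the -(addot/a) x term is designed to cancel the mean-field gravity
    of the unperturbed particle distribution (the RHS of eq. 8 vanishes)."""
    a1 = accel(x, v, a, adot, addot)
    xm, vm = x + 0.5 * dt * v, v + 0.5 * dt * a1
    a2 = accel(xm, vm, a, adot, addot)
    return x + dt * vm, v + dt * a2
```

If a(t) is instead chosen not to satisfy (<ref>), the last term in `accel` no longer cancels the mean gravity and the spurious large-scale radial force discussed above appears explicitly.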
Similarly, if the initial ȧ/a is not taken to agree with the initial physical expansion rate this implies initial conditions where there will be a net expansion or contraction in comoving coordinates. So if (<ref>) is violated, or the initial conditions are not set appropriately, the solutions of the `structure' equations no longer just describe the emerging structure; they also include part or all of the `background' evolution. The fully non-linear dynamics of the local universe are exactly described using the standard equations in the Newtonian limit. In these the evolution of the scale factor is decoupled from the evolution of structure, and is fixed by the initial density and expansion rate and, of course, the presence of dark energy. There is no Newtonian backreaction on a(t) from structure. Specifically, one cannot, as Racz et al. have proposed, keep (<ref>) and (<ref>) but modify (<ref>). These equations are seen from (<ref>) to be intimately linked together. Modifying (<ref>) alone results in a theory that does not reduce to Newtonian gravity in the limit of small velocities, as Einstein's gravity does. To remind ourselves why this is important: this means that a matter-only universe, with baryon and dark matter densities (in relation to the radiation density) set at values that are acceptable for big-bang nucleosynthesis and the CMB acoustic peaks, cannot be successfully matched to observations. As is well known, if the density parameter is taken to be unity this will result in an unacceptably small final expansion rate, and if a low Ω is chosen this would result in global hyperbolic spatial curvature that would mess up the angular scale of the CMB ripples. How does this relate to the `generalised Friedmann equation' (<ref>) of Buchert & Ehlers (1997)? It is important to realise that their formula has a very different meaning to the Friedmann equation that appears with the structure equation in the conventional framework. Their a is the cube root of a particular volume V and their equation describes the relationship between ä/a and the density within that volume. It is not at all surprising that the ä/a for some particular volume would differ at some level from -4πGM/(3a^3) if there is inhomogeneity. The acceleration is some combination of the background plus fluctuation and the mass density is similarly the background density plus the density fluctuation. But these two fluctuations need not be the same. Indeed, it is perhaps surprising that the deviations would appear only at second order in the fluctuations and not be already present in linear theory. But one would hardly call this `backreaction' of structure on the global expansion rate; it is simply inhomogeneity affecting the local expansion rate and local density but in slightly different ways. The key question is really whether there is a systematic difference.
If the combination of quantities being volume averaged in Q has a non-zero expectation value then this would imply deviations from Friedmann behaviour even in the limit that V → ∞ and one would have to reject (<ref>) in favour of (<ref>). But this is not the case. A strong indication of this, as shown by Buchert & Ehlers, is that Q can also be expressed as a surface integral. They obtained this by decomposing the total velocity into a Hubble flow plus a perturbation, v = Hr + u, with the expansion rate being that of the region in question. More relevant is to consider the peculiar velocity with respect to the global expansion rate. As shown in the appendix, this gives

Q = (1/V) ∮ dA·(u(∇·u) - (u·∇)u) - (2/3V^2) [∮ dA·u]^2

where the first term is equation 14 of Buchert & Ehlers and the second term appears in their appendix B. An obvious, but largely unanswered, question is: How does Q in (<ref>) depend on V? And how large is it typically? An under-appreciated feature of (<ref>) is that, as discussed in the appendix, the expectation value of the integrand of the first term vanishes by symmetry. Consequently, the average of this term, taken over an ensemble of volumes of any size, also vanishes. The typical value of the fluctuation in this contribution to Q for a volume of size r is |Q| ∼ v^2/r^2, independent of the `coherence length' λ of the peculiar velocity field. This tends to zero as r → ∞, and should be considered to be a `cosmic variance' fluctuation. The second term has a non-zero expectation value, but this is of order ⟨Q⟩ ∼ v^2 λ^2/r^4 and falls to zero even faster. Thus the quantitative answer to the question that Ellis posed and Buchert & Ehlers addressed is that, averaged over large volumes, the scale factor does obey the Friedmann equation and there is no backreaction on a(t) from the emergence of structure, consistent with what we have found above. It is reasonable to ask how, if at all, the conclusions here differ from the current position of experts in the backreaction community. In the first paragraph of Buchert & Räsänen (2012) they say that `In standard linear theory, the effect vanishes on average by construction. In Newtonian gravity, this turns out to be true also in the non-perturbative regime.' This is not in conflict with what we have found here. However, a key phrase here is `by construction'. Expanding on this they say `When we impose periodic boundary conditions in Euclidean space, the backreaction variable Q is strictly zero on the periodicity scale (a three-torus has no boundary)'. Similarly, Buchert, Kerscher & Sicka (2000) say `Note that both the numerical and analytic approaches enforce a globally vanishing backreaction by imposing periodic boundary conditions'. This connection between the vanishing of Q and periodic BCs is repeated, and later, in their discussion of N-body simulations, one reads that `Most cosmological Nbody simulations solve [....] with periodic boundary conditions. Hence, the boundary of C is empty [....], and from Eq. (10) we directly obtain Q_C = 0.' and, following this, `It will be a challenge to incorporate backreaction effects in Nbody simulations.' We think it might be possible for a reader of these papers to come away with the impression that the N-body simulations and analytic calculations are missing some extra non-negligible backreaction physics `by construction' through the special choice of periodic boundary conditions.
This might be further reinforced by Buchert & Ehlers' statement that for Q to be zero would require a `conspiracy'. What we have shown here is that Q tends to zero very rapidly in the limit of large volumes regardless of whether the structure is assumed to be periodic. This is based solely on the symmetry properties of statistically homogeneous and isotropic velocity fields. Another minor novelty of our approach is to show how the surface-integral form for Q may be obtained directly rather than through the intermediary step of Raychaudhuri's equation. The analysis leading to our (<ref>) represents a significant advance over the approach followed in e.g. Buchert & Ehlers and later backreaction studies, where it is assumed that the matter can be modelled as a pressure-free fluid. Uncondensed baryonic gas may, if the cooling time is sufficiently short, approximate such a fluid. Collisionless dark matter at very early times, before non-linear structures form, may behave a lot like such a fluid. But in the non-linear regime that is relevant here this assumption is, at the very least, highly questionable. Once multi-streaming occurs, collisionless dark matter and galaxies develop pressure. The same is true for the bulk of the baryonic gas, which cannot cool efficiently. It is only in this way that realistic equilibrated (i.e. `virialised') or quasi-equilibrated structures can form. The only equilibrium state for a pressure-free gas is, in contrast, a dense rotationally supported disk. The analysis here has been entirely Newtonian. It is certainly true that there must be genuine relativistic effects that will modify the expansion rate. One such effect is that of intergalactic pressure. It is known that most galaxies harbour black holes and it is thought that these merge in the process of the merging of their hosts. The rapidly time-varying gravitational potential will inevitably result in the expulsion of a small number of stars and dark matter particles at high velocities. This results in a non-zero kinematic pressure in intergalactic space which, owing to the expansion, will do PdV work. According to special relativity, δE = δm c^2, so this loss of energy results in a decrease in mass and therefore a modification to the continuity equation; i.e. there will be a non-zero, and positive, pressure P in ρ̇ = -3H(ρ + P/c^2). This pressure will also appear in the Friedmann ä equation and will result in a modification to the expansion rate. Simple estimates, however, suggest that this is negligible for all practical purposes. One might naively question whether the pressure inside bound stellar systems, or in stars themselves, might need to be included in some average sense in the Friedmann equations. That is not the case, as was shown by Einstein & Straus (1945). The title of their paper was `The Influence of the Expansion of Space on the Gravitation Fields Surrounding the Individual Stars' and they concluded that there is none. The fact that distant matter is expanding away from stars does not affect them; their gravitational mass – the parameter defining the Schwarzschild geometry that surrounds them – is fixed. Consequently the gravitational mass density of a population of stars, black holes or other compact objects must dilute as 1/a^3, so the pressure P in the continuity equation (and consequently also in the acceleration equation) must vanish. It has been proposed (e.g. Buchert & Räsänen 2012) that there may be strong GR backreaction on the expansion.
We would argue that something quite radical is required for this to be the case. Imagine that we live in an `island universe' much like ours, but extending only to say about z = 0.1, thus including the region where the expansion rate H_0 is reliably established to be approximately 70 km/s/Mpc. For a homogeneous sphere, the errors incurred in the Newtonian approximation – the difference between the proper and gravitational mass, for instance – are on the order of v^2/c^2, or about one percent. Adding structure within the island excites the usual scalar metric perturbation with only one spatial degree of freedom, as this is driven by the density. Beyond this lowest-order weak field are `gravitomagnetic' effects, driven by the matter 3-current associated with structure, but the metric perturbations arising from motions are suppressed relative to the Newtonian term by a factor v_pec/c. That such post-Newtonian effects are small is supported by the direct weak-field calculations of Adamek et al. (2013). Beyond the 4 degrees of freedom associated with the matter 4-current, all that is left are the two metric degrees of freedom of gravitational waves, but these do not affect the expansion rate as they are traceless. The errors involved in modelling the expansion of such an island universe with Newtonian physics should therefore be very small. What then is the effect of adding the external universe? If this is spatially homogeneous and isotropic then, as Einstein & Straus showed, there is no effect. The challenge for backreaction proponents is to explain how the emergence of structure at great distances can affect the local dynamics and make any appreciable change to the local expansion rate (and thus e.g. reconcile the large observed H_0 with that expected in a flat universe without dark energy). There are local tidal influences from distant structures, but these are small and, like gravitational waves, do not affect the expansion. The problem with believing that this occurs in GR is that a cornerstone of the theory is that spacetime is locally flat.
This means that in the local universe it is the local matter that controls the dynamics, through the 1st law of thermodynamics (energy conservation), expressed in the Friedmann continuity equation, and the conservation of momentum, expressed in the Friedmann acceleration equation. Finally, and returning to Newtonian dynamics, we mention another probably small but not obviously vanishing cause of backreaction: that of `tidal torques'. It is well known that, in conventional models for galaxy formation, galaxies acquire their angular momentum through non-linear effects as they depart from the linear regime but before they decouple. This can be thought of as a kind of `mode-mode' coupling between the galaxy-scale fluctuations and a larger-scale motion, the global expansion. It does not seem entirely obvious that this has vanishing effect on the expansion of the universe. But our main result here shows that, to the extent that the structure is a statistically homogeneous and isotropic random process, there can be no such effect.

§ ACKNOWLEDGEMENTS

I am grateful to Kevin Croker, John Learned and Istvan Szapudi for stimulating discussions on this topic. It is also a pleasure to acknowledge useful feedback from Pierre Fleury, Thomas Buchert, George Ellis, Syksy Räsänen, an anonymous referee and, in particular, Anthony Challinor.

Adamek J., Daverio D., Durrer R., Kunz M., 2013, PhRvD, 88, 103527
Buchert T., Ehlers J., 1997, A&A, 320, 1
Buchert T., Kerscher M., Sicka C., 2000, PhRvD, 62, 043525
Buchert T., Räsänen S., 2012, ARNPS, 62, 57
Dmitriev N. A., Zel'dovich Ya. B., 1964, Soviet Physics JETP, 18, 793
Einstein A., Straus E. G., 1945, RvMP, 17, 120
Ellis G. F. R., 1984, in Proc. 10th International Conference on General Relativity and Gravitation, B. Bertotti et al. (eds), Reidel, Dordrecht
Ellis G. F. R., Gibbons G. W., 2015, CQGra, 32, 055001
Gorski K., 1988, ApJ, 332, L7
Kazimierczak T. A., 2016, arXiv:1601.00110
Monin A. S., Iaglom A. M., 1975, Statistical Fluid Mechanics: Mechanics of Turbulence, Volume 2, MIT Press, Cambridge, Mass.
Peebles P. J. E., 1980, The Large-Scale Structure of the Universe, Princeton University Press, New Jersey
Peebles P. J. E., 1989, ApJ, 344, L53
Rácz G., Dobos L., Beck R., Szapudi I., Csabai I., 2017, MNRAS

§ ACCELERATION IN TERMS OF SURFACE AVERAGES

Raychaudhuri's equation leads to the Friedmann-like (<ref>) containing the additional term Q that is a volume average of quadratic scalars constructed from the velocity shear tensor. Decomposing the velocity into a Hubble flow plus a perturbation, Buchert & Ehlers obtained a surface-integral expression for Q. Here we show how this may be obtained directly. The rate of change of the volume at some time t is

V̇ = ∮ dA·v

where v is the velocity and dA = n̂ dA is an outward-directed surface area element.
At some slightly later time t' = t + δt the rate of change of the volume will be V̇' = ∮ dA'·v' = ∮ dA' n̂'·v'. The ratio dA'/dA is the determinant of the 2D matrix describing the mapping from positions in the initial surface to the final surface. Since r' = r + v δt this is easily found to be dA'/dA = 1 + (v_xx + v_yy)δt to first order in δt, where we have erected coordinates so the z-axis is parallel to n̂ and where v_xx ≡ ∂v_x/∂x etc. Similarly, the unit normal changes if v_z varies across the area element: n̂ → n̂' = n̂ - (x̂ v_zx + ŷ v_zy)δt. Thus

dA' n̂'·v' = dA (1 + (v_xx + v_yy)δt) × (n̂ - (x̂ v_zx + ŷ v_zy)δt) · (v + v̇ δt)

and therefore

d(dA n̂·v)/dt = (dA' n̂'·v' - dA n̂·v)/δt = [v̇_z + v_z v_xx + v_z v_yy - v_x v_zx - v_y v_zy] dA.

The coordinate-frame independent expression of this is easily found by noting that, in this frame, this is the same as dA n̂·(v̇ + v(∇·v) - (v·∇)v). The second time derivative of the volume V̈ = (V̇' - V̇)/δt is therefore

V̈ = ∮ dA·[v̇ + v(∇·v) - (v·∇)v].

Gauss's law tells us the first term is ∮ dA·v̇ = -4πGM. To convert this into an expression involving a = V^1/3 we use 3ä/a = V̈/V - (2/3)(V̇/V)^2 to obtain (<ref>) – i.e. equation 9 of Buchert & Ehlers – but now with

Q = (1/V) ∮ dA·(v(∇·v) - (v·∇)v) - (2/3V^2) [∮ dA·v]^2.

Writing the velocity as v = Hr + u we find

Q = (H/V) ∮ dA·(r(∇·u) - (r·∇)u - 2u) + (1/V) ∮ dA·(u(∇·u) - (u·∇)u) - (2/3V^2) [∮ dA·u]^2.

The integrand in the first line is n̂·∇×(r×u), so its integral vanishes. The second line is identical to equation 14 of Buchert & Ehlers. That was obtained assuming that 3H is the volume average of the volume expansion rate within the particular volume considered and, as they discuss, the last term here, which appears in their appendix B, enters if H is taken to be the global expansion rate. An under-appreciated feature of the surface-integral term is that the expectation value of the integrand vanishes by symmetry. This is because it involves products of the velocity and its spatial derivative like ⟨u_x u_zx⟩. This is the derivative with respect to the lag r', at r' = 0, of the correlation function ⟨u_x(r) u_z(r + r')⟩. But for a velocity field that is statistically homogeneous and isotropic – no further assumptions are required – these functions are even functions of r', so ⟨u_x u_zx⟩ vanishes (Monin & Iaglom, 1975; see also Gorski 1988). For an individual volume the contribution to Q will not vanish. If the velocity field has coherence length λ the integrand is on the order of u^2/λ. The mean square is ⟨Q^2⟩ ∼ N(ΔA u^2/λ)^2/V^2 where ΔA ∼ λ^2 and N = A/ΔA. It follows that the typical contribution is |Q| ∼ u^2/r^2, independent of λ. This becomes very small for large volumes. The last line in (<ref>), also of 2nd order in u, differs from the second in that it has a non-vanishing (negative) expectation value. But it is on the order of Q ∼ u^2 λ^2/r^4 and so is even smaller than the second term for large V. We believe this is what Kazimierczak (2016) has measured in N-body simulations.
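The symmetry argument above is easy to verify numerically. The following sketch is our own construction: a Gaussian potential flow u = ∇φ with an arbitrary smooth isotropic spectrum is assumed as a stand-in for the true peculiar velocity field, generated on a periodic grid, and ⟨u_x ∂u_z/∂x⟩ is then measured. Over the full periodic box the average vanishes identically, since the integrand is a total derivative; over a sub-volume it is the small `cosmic variance' residual discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)        # unit box
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                                        # avoid divide-by-zero

# statistically homogeneous, isotropic Gaussian potential flow u = grad(phi)
phi_k = np.fft.fftn(rng.standard_normal((n, n, n))) * np.exp(-k2 / (2 * 25.0**2))
phi_k[0, 0, 0] = 0.0
ux, uy, uz = (np.fft.ifftn(1j * k * phi_k).real for k in (kx, ky, kz))
duz_dx = np.fft.ifftn(1j * kx * np.fft.fftn(uz)).real

# u_x du_z/dx = (1/2) d/dz (dphi/dx)^2 is a total derivative, so its average
# over the full periodic box vanishes identically, while over a sub-volume it
# is a small but non-zero residual, illustrating the scaling argued above
full = np.mean(ux * duz_dx)
sub = np.mean((ux * duz_dx)[: n // 2, : n // 2, : n // 2])
norm = np.sqrt(np.mean(ux**2) * np.mean(duz_dx**2))
print(full / norm, sub / norm)    # machine-precision zero vs small residual
```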
http://arxiv.org/abs/1703.08809v2
{ "authors": [ "Nick Kaiser" ], "categories": [ "astro-ph.CO" ], "primary_category": "astro-ph.CO", "published": "20170326115116", "title": "Why there is no Newtonian backreaction" }
Department of Chemistry and Applied Biosciences, ETH Zürich c/o USI Campus, 6900 Lugano, Switzerland. Facoltà di Informatica, Istituto di Scienze Computazionali, and National Center for Computational Design and Discovery of Novel Materials MARVEL, Università della Svizzera italiana, 6900 Lugano, Switzerland
parrinello@phys.chem.ethz.ch
Department of Chemistry and Applied Biosciences, ETH Zürich c/o USI Campus, 6900 Lugano, Switzerland. Facoltà di Informatica, Istituto di Scienze Computazionali, and National Center for Computational Design and Discovery of Novel Materials MARVEL, Università della Svizzera italiana, 6900 Lugano, Switzerland

In this paper we combine two powerful computational techniques, well-tempered metadynamics and time-lagged independent component analysis. The aim is to develop a new tool for studying rare events and exploring complex free energy landscapes. Metadynamics is a well-established and widely used enhanced sampling method whose efficiency depends on an appropriate choice of collective variables. Often the initial choice is not optimal, leading to slow convergence. However, by analyzing the dynamics generated in one such run with a time-lagged independent component analysis and the techniques recently developed in the area of conformational dynamics, we obtain much more efficient collective variables that are also better capable of illuminating the physics of the system. We demonstrate the power of this approach in two paradigmatic examples.

A variational conformational dynamics approach to the selection of collective variables in metadynamics
Michele Parrinello
December 30, 2023
========================================================================================================

Molecular Dynamics (MD) simulations have become pervasive in contemporary science, and are extensively used in fields as diverse as chemistry, biology and materials science. Yet in spite of many successes, the limited time scale that can be explored in MD simulations severely limits their scope and power. In many cases the time scale problem results from the presence of metastable states separated by kinetic bottlenecks that render the transitions between such states so rare that they take place on a time scale that far exceeds what can be afforded by ordinary simulation methods. For this reason, a plethora of enhanced simulation methods have been suggested. Starting from the pioneering work of Torrie and Valleau <cit.>, a wide class of such methods relies on the identification of appropriate collective variables (CVs) <cit.>. The CVs are a set of functions s(R) of the atomic coordinates R that describe those degrees of freedom whose sampling needs to be accelerated. This latter goal is achieved by adding to the interaction potential U(R) an appropriately constructed bias V(s(R)), designed so as to accelerate sampling. Here we shall focus on one such method, namely well-tempered metadynamics (WTMetaD) <cit.>, which is enjoying increasing popularity and which offers the possibility, in one of its variants usually referred to as infrequent metadynamics <cit.>, to calculate transition rates from metastable state to metastable state.
However, in WTMetaD the rate of convergence depends on an appropriate choice of CVs, especially when it comes to calculating rates. In an important paper this has been pointed out by Tiwary and Berne <cit.>, who have proposed the spectral gap optimization of order parameters (SGOOP) method, which is based on maximum caliber and has been shown to improve the quality of an initial CV guess in a spectacular way. This work offers an alternative to SGOOP based on the signal-processing technique Time-lagged Independent Component Analysis (TICA) <cit.>. As in the case of Ref. , we start from a CV that is able to push the system from one metastable state to another, albeit sluggishly, and ameliorate it so as to greatly improve its efficiency. It has been shown in the important work of Noé and coworkers <cit.> that TICA provides an optimal solution of the variational approach to conformational dynamics (VAC), hence identifying slow order parameters which may serve as optimal CVs in a metadynamics simulation. We shall refer to this combination as the variational conformational dynamics approach to metadynamics (VAC-MetaD). We recall first that in WTMetaD the bias potential V(s(R),t) is built on the fly by periodically adding a small repulsive Gaussian whose amplitude decreases as the simulation progresses. A remarkable feature of WTMetaD is that this stochastic, apparently out-of-equilibrium process can be described by an ordinary differential equation whose asymptotic solution for the bias tends rigorously to the following equilibrium result <cit.>:

lim_{t→∞} V(s,t) = -(1 - 1/γ) F(s)

where F(s) is the free energy associated with the CV s, given within an irrelevant constant by:

F(s) = -(1/β) log ∫ dR δ(s - s(R)) e^{-βU(R)}

where β = 1/k_BT is the inverse temperature and γ is the so-called bias factor. One of the consequences of the existence behind WTMetaD of an ordinary differential equation is that the reweighting of the trajectories can be done on the fly and the equilibrium expectation value of an operator ⟨O(R)⟩ can be calculated <cit.> as an average over the metadynamics run as:

⟨O(R)⟩ = lim_{T→∞} ∫_0^T O(R_t) e^{β(V(s(R_t))-c(t))} dt / ∫_0^T e^{β(V(s(R_t))-c(t))} dt

where R_t are the atomic positions at time t and the time-dependent energy offset c(t) is:

c(t) = -(1/β) log [∫ ds e^{-β(F(s)+V(s,t))} / ∫ ds e^{-βF(s)}]

a quantity that tends asymptotically to the reversible work performed on the system by the bias. If we introduce a new time scale t̃ such that dt̃ = e^{β(V(s(R_t))-c(t))} dt we can recognise that ⟨O(R)⟩ can be written as an average over the t̃ time

⟨O(R)⟩ = lim_{T_t̃→∞} (1/T_t̃) ∫_0^{T_t̃} O(R_t̃) dt̃

where T_t̃ = ∫_0^T e^{β(V(s(R_t))-c(t))} dt is the total elapsed t̃ time. The times t and t̃ measure the metadynamics and Boltzmann-sampling progress, respectively. It follows from Eqn. <ref> and from the convergence properties of WTMetaD that we can think of WTMetaD as an ergodic dynamics in t̃ time that samples the Boltzmann distribution. We note however that the t̃ dynamics cannot be directly related to the unbiased dynamics but depends on the choice of CVs. Poor CVs lead to long convergence times, while good CVs lead to much shorter ones. In fact the very purpose of biasing the system is to turn rare events into frequent ones, and the time t̃ should not be confused with the actual unbiased time; it is rather a measure of the extent of metadynamics enhancement of the sampling.
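In practice the rescaled clock and the associated frame weights follow directly from the time series of the bias. A minimal numpy sketch with our own names; the arrays V_t and c_t are assumed to have been extracted from the simulation output (e.g. from the metadynamics engine's log):

```python
import numpy as np

def rescaled_time_and_weights(V_t, c_t, beta, dt):
    """Given the time series of the bias V(s(R_t), t) and of the offset c(t),
    return the rescaled clock t~ (cumulative sum of the scaled time steps)
    and the normalised frame weights used for reweighting."""
    w = np.exp(beta * (np.asarray(V_t) - np.asarray(c_t)))
    return np.cumsum(w) * dt, w / w.sum()

# usage: tt, weights = rescaled_time_and_weights(V_t, c_t, beta=1/2.494, dt=0.002)
# (beta in mol/kJ for T = 300 K); tt[-1] is the total elapsed rescaled time
```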
We want now to take advantage of progress made in the field of conformational dynamics <cit.> and the realization that a propagator associated with an ergodic dynamics can be spectrally decomposed into eigenvalues λ_i(t) and eigenfunctions ψ_i(R). The highest eigenvalue λ_0 is one, since the system evolves towards the invariant distribution, while the others decay with time <cit.>. If the first set of M non-trivial eigenvalues is separated by a gap from all others, the corresponding eigenfunctions can be identified as the CVs that describe the slowest modes of the system. In order to practically benefit from these theoretical results one needs an approximation method that can deal with the fact that the propagator is very high dimensional. Luckily an approximate evaluation, based on a variational principle similar to the Rayleigh-Ritz principle of quantum mechanics, has been suggested <cit.>. In this approach the dynamics is projected onto a low-dimensional space spanned by the basis functions O_k(R), k=1, ..., N, and the eigenfunctions are approximated by a linear expansion:

ψ_i(R) = ∑_{k=1}^N b_{ik} O_k(R).

The eigenvalues and eigenfunctions that best approximate the exact eigenvalues and eigenvectors are given by the solution of the following generalized eigenvalue equation:

C(τ) b_i = C(0) λ_i(τ) b_i

where τ is the lag time and C(τ) is the matrix of the dynamical correlation functions C_{j,k}(τ) = ⟨O_j(t) O_k(t+τ)⟩, while λ_i(τ) are the eigenvalues and b_i are the expansion coefficients of the eigenfunctions. As discussed above, it is possible to map WTMetaD onto a dynamics that asymptotically samples the Boltzmann distribution. Thus we make the ansatz that also for the t̃ dynamics the properties underlying the spectral decomposition described above hold, at least asymptotically, since metadynamics in this limit explores the Boltzmann distribution ergodically. If this is so, we can approximate the slow modes of the system with the solutions of the generalized eigenvalue equation Eqn. <ref> in which the time correlation functions are expressed as functions of the scaled time t̃. While the variational approach Eqn. <ref> aims at identifying the slowest CVs by varying the b_i so as to maximize the eigenvalues λ(τ), metadynamics aims at generating a biased dynamics in which the slowest processes are fast. Hence, a successful application of VAC-MetaD should result in a biased dynamics whose slowest processes are fast, corresponding to small leading eigenvalues λ_1, λ_2, .... Thus our strategy will be first to choose a set of CVs expressed in the space spanned by an appropriate set of functions O_k(R), then to perform a WTMetaD run to calculate the correlation functions in t̃ time, plug them into Eqn. <ref>, and use the eigenvectors of the highest eigenvalues as improved CVs. If we perform a new WTMetaD simulation driven by such CVs then we should see a much more efficient sampling and, due to the acceleration of the slowest modes, we should observe a decrease in the relaxation times associated with the leading eigenvalues. We must add however that, since metadynamics needs some incubation time τ_c before reaching the asymptotic limit in which Eqn. <ref> holds, only after the time τ_c will the trajectory yield a stable estimate of the eigenvalues. Only in this limit will the eigenvectors of the slowest modes in Eqn. <ref> be used as new CVs. We now explicate how the method works in practice.
As stated above, the VAC-MetaD procedure aims to find, from an initial set of candidate order parameters O = {O_k(R)}, the optimal linear combination of these order parameters to use as a CV in metadynamics. Initially we give equal weight to each order parameter and perform a short biased simulation with the initial CV s^(0)(R) = (1/√N) ∑_k^N O_k(R). In order to approximate the unbiased slow modes from the biased metadynamics simulation, we need to reweight the trajectory samples in our computation of the correlation matrices <cit.>. The two correlation matrices needed to solve Eqn. <ref> are

C(0) = ∑_t w(t) O(t) O(t)^T
C(τ) = ∑_t w(t) O(t) O(t+τ)^T

with O(t) the set of candidate order parameters at time t and w(t) the WTMetaD weight

w(t) = e^{β(V(s(t))-c(t))} / ∑_t e^{β(V(s(t))-c(t))}.

In practice the correlation functions of Eqn. <ref> are symmetrized to ensure that the λ's are real valued <cit.>. Diagonalization of the correlation matrix C(0) would give the usual reweighted principal components. In the time-lagged matrix C(τ) the lag time τ is given by the sum of the rescaled time steps

τ = ∑_{t=t_0}^{τ'} e^{β(V(s(t))-c(t))} Δt.

Insertion of the matrices given by Eqn. <ref> into Eqn. <ref> and solving the generalized eigenvalue equation for the eigenfunctions b_i gives a new set of N transformed basis functions s_i = b_i^T O, which are an approximation for the eigenfunctions of the dynamical propagator (Eqn. <ref>). Each of the N eigenvalues has an associated relaxation time given, at a fixed chosen lag time τ, by

t^*_i = -τ / log|λ_i|.

One expects to find a gap in timescales between the M < N slow modes corresponding to large t^*, which provides a natural way to select a subset of the N basis functions, those corresponding to the slowest relaxation times (largest eigenvalues), as new CVs to be biased with WTMetaD. Thus the procedure yields the desired set of M CVs as

s_i(R) = ∑_{k=1}^N b_{ik} O_k(R), i = 1, …, M < N,

with b_i the expansion coefficients in Eqn. <ref>. We shall test this procedure by studying in vacuum the conformational landscape of two simple peptides that have often been used as a testing ground for new methods, alanine dipeptide (Ace-Ala-Nme) and alanine tetrapeptide (Ace-Ala_3-Nme), which are shown in Fig. <ref>. All simulations were performed with the GROMACS 5.0.5 package <cit.> using the Amber99-SB force field with an integration time step of 2 fs. Trajectories were generated in the NVT ensemble with the temperature maintained at 300 K using the stochastic velocity-rescaling thermostat <cit.>. All metadynamics calculations were performed within the PLUMED2 plugin <cit.> with Gaussian hills deposited every 500 integration steps and an initial hill height of 1.2 kJ/mol, a width of 0.03 units, and a bias factor γ of 15. We start with the much-studied case of alanine dipeptide. Here we depart from the usual procedure and, besides the Ramachandran angles ϕ and ψ, we consider additionally the angle θ that is known to be part of the reaction coordinate <cit.> (see Fig. <ref> a). Each angle is transformed according to Θ_k = 0.5 + 0.5 cos(Θ_k - Θ_0) with the reference angle Θ_0 = 1.2 rad <cit.> and Θ = {ϕ, ψ, θ}. The initial CV is then taken to be a linear combination with equal weights of the transformed angles ϕ, ψ, and θ, i.e. s_0 = c_1 Θ_ϕ + c_2 Θ_ψ + c_3 Θ_θ with the {c_1, c_2, c_3} initially all equal and normalized to 1. Upper and lower restraints on the θ angle were introduced at ±0.5 radians. Following the usual TICA procedure we subtract the reweighted mean given by Eqn. <ref> from the raw trajectory data and work in a zero-mean vector space.
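For reference, the whole reweighted procedure of Eqns. <ref>-<ref> amounts to only a few lines. The sketch below uses our own names; the trajectory is assumed to have been resampled on a uniform grid in the rescaled time t̃, so that a fixed integer frame offset corresponds to a fixed lag τ:

```python
import numpy as np
from scipy.linalg import eigh

def vac_metad_tica(O, w, lag):
    """Reweighted, symmetrised TICA: solve C(tau) b = C(0) lambda b for the
    candidate order parameters O (n_frames x n_features) with frame weights
    w; `lag` is an integer frame offset on the uniform t~ grid."""
    O = np.asarray(O, dtype=float)
    mu = np.average(O, axis=0, weights=w)
    X = O - mu                                  # zero-mean vector space
    w0 = w[:-lag]
    C0 = (X[:-lag].T * w0) @ X[:-lag]           # sum_t w(t) O(t) O(t)^T
    Ct = (X[:-lag].T * w0) @ X[lag:]            # sum_t w(t) O(t) O(t+tau)^T
    Ct = 0.5 * (Ct + Ct.T)                      # symmetrise: real lambdas
    lam, B = eigh(Ct, C0)                       # generalized eigenproblem
    order = np.argsort(lam)[::-1]               # slowest modes first
    return lam[order], B[:, order]

# relaxation times at physical lag tau: t_star = -tau / np.log(np.abs(lam))
```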
From an initial WTMetaD trajectory of 20 ns biasing s_0 we compute the two correlation matrices in Eqn. <ref> with O = Θ. The trajectory of the initial CV s_0 is shown in Fig. <ref> (top left). It can be seen that such a CV is able to induce transitions between the different conformers; however, such transitions are not very frequent. This is reflected by the fact that the highest λ(τ) decays more slowly than the other eigenvalues, reflecting a difficulty of the chosen CV in promoting transitions. The middle row of Fig. <ref> shows the relaxation times t^*_i associated with each eigenvalue given by Eqn. <ref> at a lag time τ = 800 ps, chosen within the regime for which the eigenvector coefficients are stable. Fig. <ref> (middle left) clearly shows a slow process with a dominant large eigenvalue and a clear separation of time scales. The eigenvector associated with this highest eigenvalue, computed in the asymptotic regime where the eigenvectors are constant with respect to the lag time τ, is then used as a new CV s_1 = b_1^T Θ. If we use this eigenvector associated with the highest λ as the CV, the exploration of the conformational space is greatly accelerated, as depicted in Fig. <ref> (top right), and the rates of decay of the different λ's become comparable, reflecting the fact that the rare event has been made no longer rare (Fig. <ref>, middle right). It is interesting to note that in the optimized CV much of the weight is on the angle ϕ, but both ψ and θ are part of the optimal CV. In Fig. <ref> (bottom) we also show the free energies associated with the initial (left) and final (right) CV. The conformational landscape of alanine dipeptide is well known and characterized by a deeper basin in which conformers C5 and C7eq can easily interconvert, while conformer C7ax is higher in energy, separated by a sizeable barrier from the first two. It can be seen that, although the FES is fully converged, this physical picture is hidden in the free energy representation provided by the initial CV but clearly evident when using the final CV. We now turn to the second example to which we have applied our approach, that of alanine tetrapeptide (Ala_3) shown in Fig. <ref> b), a case already considered in <cit.>. The simulation is started as in Ref.  by taking as collective coordinate a linear combination with equal weights of the set of six dihedral angles Θ = {ϕ_1, ψ_1, ϕ_2, ψ_2, ϕ_3, ψ_3}, transformed as before so that Θ_k = 0.5 + 0.5 cos(Θ_k - Θ_0). As can be seen in Fig. <ref> (left), the exploration of the conformational space is somewhat inefficient, with rare conformational transitions. We project the t̃ dynamics onto the space of the chosen angles and observe that the decay times of the two topmost eigenvalues are significantly slower than those of the others. This suggests projecting the free energy surface onto the space of the two topmost eigenvectors. As shown in Fig. <ref>, in this representation the seven different minima identify conformers that have a different arrangement of the three dihedral angles ϕ_1, ϕ_2, and ϕ_3. The conformer in which the three ϕ's all have positive values is seldom visited and in this representation is projected into the top-left tail of the central minimum. An improvement in sampling efficiency is obtained if one uses the topmost eigenvector as a new CV (see the central panels in Fig. <ref>), but still it can be seen that a relatively slow process remains. However, the behavior in time of the λ's cries out for the use of at least two CVs. If this is done (see Fig.
<ref>, far right panels) the improvement is amazing and it offsets the cost of using two CVs instead of one. In fact, within a 20 ns run we get better converged results than the extensive parallel tempering run in Ref.  that used an aggregate time equivalent to 8 × 500 ns. These lead to a well-converged and smooth free energy surface (see Fig. <ref>, right). It is difficult at this stage to compare the relative practical merits of the SGOOP method to ours. Extensive tests on a variety of applications will be needed. As far as we can tell, for the two cases examined here they seem to have comparable performance in the case of Ala_3 when we use only the topmost eigenvector as collective variable. It is likely that the two methods will be complementary. However, a difference in philosophy must be underlined. In SGOOP one reweights a one-dimensional projection of the FES to find the optimal linear combination of CVs. Here we are reweighting the simulation time to take advantage of the variational formalism of conformational dynamics, whose solution provides an optimal estimate for the true dynamical propagator. In summary, there is a growing interest in applying dimensionality reduction techniques to a larger candidate set of possible CVs to find a subset of generically good CVs that can be used for enhanced sampling. A widely used example is principal component analysis (PCA), which projects the data along the directions of largest variance. On the other hand, it is well known in the field of conformational dynamics that time-lagged independent component analysis of high-dimensional coordinates from molecular dynamics can be useful for constructing Markov state models. In this letter we have taken this insight and combined it with well-tempered metadynamics to find a set of optimal CVs for further enhanced sampling. We surmise finally that the method can be adapted to other sampling methods by appropriately changing the weights in Eqn. <ref>. While preparing this manuscript a work has appeared in the literature <cit.> in which it is pointed out that useful collective coordinates can be extracted from the TICA formalism once a long enough trajectory is available from unbiased simulations. Our work extends considerably the scope of their approach by making it applicable to the more commonly encountered situation in which transitions between metastable states can only be observed by the use of a biased simulation. The authors thank Frank Noé and Pratyush Tiwary for careful reading of the manuscript and useful suggestions. Computational resources were provided by the Swiss National Supercomputing Center (CSCS). This research was supported by the VARMET European Union Grant ERC-2014-ADG-670227 and the National Center of Competence in Research Materials Revolution: Computational Design and Discovery of Novel Materials (MARVEL) 51NF40_141828.
http://arxiv.org/abs/1703.08777v2
{ "authors": [ "James McCarty", "Michele Parrinello" ], "categories": [ "cond-mat.stat-mech" ], "primary_category": "cond-mat.stat-mech", "published": "20170326065734", "title": "A variational conformational dynamics approach to the selection of collective variables in metadynamics" }
Department of Physics and Astronomy, Aarhus University, DK-8000 Aarhus C, Denmark
Department of Physics and Astronomy, Aarhus University, DK-8000 Aarhus C, Denmark

A detailed analysis of the system of four interacting ultra-cold fermions confined in a one-dimensional harmonic trap is performed. The analysis is done in the framework of a simple variational ansatz for the many-body ground state and its predictions are confronted with the results of numerically exact diagonalization of the many-body Hamiltonian. A short discussion of the role of quantum statistics, i.e., of Bose-Bose and Bose-Fermi mixtures, is also presented. It is concluded that the variational ansatz, although it seems oversimplified, gives surprisingly good predictions of many different quantities for mixtures of equal-mass as well as different-mass systems. The result may have some experimental importance since it gives a quite simple and validated method for describing experimental outputs.

Four fermions in a one-dimensional harmonic trap: Accuracy of a variational-ansatz approach
T. Sowiński
December 30, 2023
===========================================================================================

§ INTRODUCTION

One-dimensional systems of a few quantum particles have attracted a lot of attention in the past few years due to the amazing experimental progress in studying such systems <cit.>. At last, it has become possible not only to test and improve the theoretical description of such systems <cit.>, but also to test all these theoretical ideas experimentally <cit.>. New experiments of extremely high accuracy have challenged theoreticians to serve predictions with incredible precision and, as a consequence, to audit previous rough approximations made to describe properties of few quantum bodies <cit.>. The physics of a few quantum particles is extremely difficult to analyze without any approximations. This comes from the simple observation that 'a few' is too many for the straightforward methods of one- and two-body physics, and at the same time it is still not enough to adopt methods of statistical many-body theory and mean-field description <cit.>. Therefore, one has to find completely different approaches to the problem (for example those which were up to now in the domain of nuclear physics <cit.>). Independently of these facts, there always exists a temptation to describe a complicated few-body problem with evidently oversimplified methods. One such kind of approach is based on different implementations of the variational-ansatz method. In this paper we want to investigate the properties of a system of four fermionic atoms confined in a one-dimensional harmonic trap obtained via a simple variational method and to validate these results. The method is based on an assumption that the ground state of the system can be almost perfectly superposed from two limiting many-body states, i.e., the ground states obtained for vanishing and very strong repulsions <cit.>. Since the method was successfully adopted for systems of two and three quantum particles (and for a particular class of polaron systems with up to six bodies), a natural question arises about the validity of this assumption for larger numbers of particles and other system compositions. Here we try to answer this question by comparing predictions of the ansatz with predictions of numerically exact diagonalization of the four-body Hamiltonian.
A comparison is done on various levels by considering many different quantities that, in principle, may be extracted from experimental data. We stress that we consider the experimentally relevant situation where the particles in our system have different masses. This is a particularly difficult issue for one-dimensional systems. Such systems cannot be addressed using, for instance, the Bethe ansatz, as mass differences will generically break the assumption of non-diffractive scattering. This assumption is central to the traditional Bethe ansatz approach to generating exact solutions of one-dimensional many-body systems <cit.>. This implies that a simple approach to mass-imbalanced systems is highly desirable. In Sec. II a brief description of the system under study is given and both complementary methods of treatment, i.e., the interpolatory ansatz and the exact diagonalization, are briefly characterized. In Sec. III we compare different predictions of both methods and we discuss the discrepancies disclosed. Finally, in Sec. IV we give some remarks on four-body systems with other quantum statistics, we discuss some possible extensions of the variational method, and we conclude briefly.

§ THE MODEL

The system studied.— We consider N_a = N_b = 2 fermionic particles confined in an external one-dimensional harmonic potential of frequency ω. In principle, the particles of different kinds may have different masses, i.e., m_a ≠ m_b. We assume that the interactions between particles can be described with a two-body contact δ-like potential. In this case, due to the fermionic nature of the particles, the interactions are present only between particles of different components. The Hamiltonian of the system has the form

ℋ = ℋ_a + ℋ_b + ℋ_ab,
ℋ_a = -ħ²/2m_a (∂²/∂q_1² + ∂²/∂q_2²) + (m_a ω²/2)(q_1² + q_2²),
ℋ_b = -ħ²/2m_b (∂²/∂q_3² + ∂²/∂q_4²) + (m_b ω²/2)(q_3² + q_4²),
ℋ_ab = g[δ(q_1-q_3) + δ(q_2-q_3) + δ(q_1-q_4) + δ(q_2-q_4)].

Notice that since the masses are allowed to be different, the Bose-Fermi mapping <cit.> cannot be applied. However, in extreme limits the exact ground-state wave function is always known. For vanishing interactions, g=0, the particles occupy the two lowest single-particle orbitals of the corresponding harmonic oscillators. In the case of infinitely strong interactions, 1/g=0, a semi-analytical expression for the exact four-body ground state was found recently in <cit.> using the methods introduced in <cit.>. In general, the properties of the ground state for intermediate interactions cannot be found analytically and one needs to use numerical or approximate methods.

Interpolatory ansatz.— Quite recently, it was proposed to use a very simple variational method based on the assumption that the ground state of the system for any interaction can be well approximated by an appropriate superposition of the ground states in the limiting cases:

|Ψ(g)⟩ = α(g)|Ψ_0⟩ + β(g)|Ψ_∞⟩.

The coefficients α(g) and β(g) are determined by minimizing the expectation value of the many-body Hamiltonian (<ref>) in this state. Note that the many-body states |Ψ_0⟩ and |Ψ_∞⟩ are not necessarily orthogonal. Therefore, the variational parameters fulfill non-natural normalization conditions. The detailed prescription for obtaining the appropriate variational parameters α(g) and β(g) was discussed in <cit.>. For the completeness of our discussion we include a brief discussion of the method in the Appendix.
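Although the detailed prescription is deferred to the Appendix, the generic step, minimizing ⟨Ψ|ℋ|Ψ⟩ over the two-dimensional non-orthogonal subspace of Eq. (<ref>), reduces to a 2×2 generalized eigenvalue problem. A minimal sketch in our own notation, with the Hamiltonian matrix elements and the overlap between the two limiting states assumed precomputed (e.g. by numerical quadrature over the known limiting wave functions):

```python
import numpy as np
from scipy.linalg import eigh

def interpolatory_ansatz(H00, H01, H11, S01):
    """Best superposition alpha|Psi_0> + beta|Psi_inf> in the lowest-energy
    sense: the 2x2 generalized eigenvalue problem H c = E S c with overlap
    S01 = <Psi_0|Psi_inf> (the two states are not orthogonal). H00, H01, H11
    are matrix elements of the full Hamiltonian at a given coupling g."""
    H = np.array([[H00, H01], [H01, H11]])
    S = np.array([[1.0, S01], [S01, 1.0]])
    E, C = eigh(H, S)            # lowest generalized eigenpair is variational
    alpha, beta = C[:, 0]        # normalised so that c^T S c = 1, i.e. the
    return E[0], alpha, beta     # non-natural normalization noted above
```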
A small modification of the method, also mentioned in the Appendix, which substantially improves predictions of the ground-state energy, is discussed in the further analysis. Although the ansatz seems to be highly oversimplified, it was used for systems with equal masses with surprisingly good results. Here we want to make a comprehensive study of the accuracy of the ansatz when different quantities and interparticle correlations extracted from the ground state are considered. Especially, we are interested in the cases when the particles belonging to different components have different masses. To find quantitative answers to these open questions we perform the numerically exact diagonalization of the many-body Hamiltonian Eq. (<ref>), we find its exact ground state as a function of interactions, and we compare different quantities with the predictions of the variational ansatz.

Numerical diagonalization.— The exact diagonalization is performed in a straightforward and well-established way. First, we express the many-body Hamiltonian Eq. (<ref>) in matrix form in an appropriate Fock basis. It can be done by expressing all many-body states of the system in the basis composed as products of single-particle orbitals:

|kl;mn⟩ := 𝒜{φ_a,k(q_1) φ_a,l(q_2) φ_b,m(q_3) φ_b,n(q_4)},

where φ_a,k(q) are eigenstates of the corresponding single-particle harmonic oscillators, i.e.,

H_λ φ_λ,k(q) = (k + 1/2)ħω φ_λ,k(q),

and 𝒜{.} is the anti-symmetrization operator in the appropriate subspace of indistinguishable fermions, assuring that:

|kl;mn⟩ = -|lk;mn⟩ = -|kl;nm⟩.

Assuming some sufficiently large cutoff k ≤ N_max of the considered single-particle excitations, one can calculate all matrix elements of the Hamiltonian Eq. (<ref>). The resulting matrix is diagonalized to find the exact ground state of the system |Φ(g)⟩ and its energy E(g). In our case, the exact diagonalization is performed with the Arnoldi method <cit.> that was used previously with great success for similar models <cit.>. Alternative diagonalization routines that exploit effective interactions are also very efficient for all interaction strengths <cit.>, although these methods have yet to be extended to the case of particles of different mass. In the following, the many-body wave functions in position representation corresponding to the states |Ψ(g)⟩ and |Φ(g)⟩ will be denoted as Ψ_g(q_1,q_2;q_3,q_4) and Φ_g(q_1,q_2;q_3,q_4), respectively. Additionally, we introduce a dimensionless parameter μ = m_a/m_b for the mass ratio of atoms from different components.

§ QUALITY OF THE ANSATZ WAVE FUNCTION

The ground-state energy.— The quality of the assumed form of the variational wave function can be examined in various ways, depending on the physical quantity one is interested in. Before any sophisticated tests are performed, one should check predictions for the energy of the ground state, since this quantity is always bounded from below by the exact value of the ground-state energy. Moreover, the energy of the ground state is a quantity which, in systems of a few ultra-cold particles, can be measured experimentally with high accuracy <cit.>. To test the predictions of the variational method based on this natural quantity, we compare the variational energy of the ground state with its counterpart obtained with the exact diagonalization method. The results are presented in Fig. <ref>, where solid lines represent the variational-ansatz predictions whereas crosses and squares correspond to the exact-diagonalization predictions (see caption of Fig. <ref> for details).
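A self-contained sketch of such a diagonalization is given below. This is our own illustrative implementation, not the code used here: it uses a dense solver rather than the Arnoldi method, a small cutoff, and units ħ = ω = m_a = 1 with lengths in units of the a-species oscillator length (so that only the b-orbitals depend on the mass ratio μ). The slow convergence with N_max discussed below applies to it as well.

```python
import numpy as np
from itertools import combinations
from scipy.special import eval_hermite, factorial
from scipy.linalg import eigh

nmax, mu, g = 10, 4.0, 2.0            # cutoff, mass ratio m_a/m_b, coupling
m_a, m_b = 1.0, 1.0 / mu              # hbar = omega = m_a = 1 assumed

def orbital(k, x, m):
    """Harmonic-oscillator eigenfunction for mass m (hbar = omega = 1)."""
    xi = np.sqrt(m) * x
    norm = (m / np.pi) ** 0.25 / np.sqrt(2.0 ** k * factorial(k))
    return norm * eval_hermite(k, xi) * np.exp(-0.5 * xi ** 2)

# contact integrals U[p,q,r,s] = g * int dx phi^a_p phi^a_q phi^b_r phi^b_s
xq, wq = np.polynomial.hermite.hermgauss(150)
w = wq * np.exp(xq ** 2)                       # plain dx weights at the nodes
pa = np.array([orbital(k, xq, m_a) for k in range(nmax)])
pb = np.array([orbital(k, xq, m_b) for k in range(nmax)])
U = g * np.einsum('px,qx,rx,sx,x->pqrs', pa, pa, pb, pb, w)

pairs = list(combinations(range(nmax), 2))     # antisymmetric pairs k < l
basis = [(A, B) for A in pairs for B in pairs]
index = {st: i for i, st in enumerate(basis)}

def hops(pair):
    """All (p, q, target, sign) with <target| c+_p c_q |pair> = sign,
    for a two-fermion state |pair> = c+_i c+_j |0>, i < j."""
    i, j = pair
    out = []
    for q, keep, s in ((i, j, +1), (j, i, -1)):
        for p in range(nmax):
            if p == keep:
                continue
            tgt, sgn = ((p, keep), s) if p < keep else ((keep, p), -s)
            out.append((p, q, tgt, sgn))
    return out

H = np.zeros((len(basis), len(basis)))
for col, (A, B) in enumerate(basis):
    H[col, col] = sum(k + 0.5 for k in A) + sum(k + 0.5 for k in B)
    for p, q, A2, sa in hops(A):
        for r, s, B2, sb in hops(B):
            H[index[(A2, B2)], col] += sa * sb * U[p, q, r, s]

E = eigh(H, eigvals_only=True)
print(E[0])    # ground-state energy in units of hbar*omega (4.0 at g = 0)
```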
Quite obviously, the energy is well reproduced in the limiting cases of g=0 and g=∞. However, for the intermediate interactions the energy is clearly overestimated. Moreover, in the perturbation regime of small interactions (g≈ 0) the slope ∂ E(g)/∂ g|_g=0 is not predicted correctly. These results could suggest that the variational assumption that the ground state of the system can be well approximated with a simple superposition of two many-body eigenstates in limiting cases is maybe too simple.At this point it is worth noticing, that the variational ansatz we use can be essentially improved to make predictions of the ground-state energy much more accurately. The modification is extensively described in <cit.> and briefly discussed in the Appendix. The improved results obtained in this framework are presented in Fig. <ref> by dashed lines. It is clear that the improvement of the resulting energies is essential. Nevertheless, as shown in <cit.>, in this case one loses accuracy in the predictions of the many-body wave functions. Therefore, in further discussion of other quantities the original ansatz Eq. (<ref>) is used and the modified ansatz is adopted only when displaying the energy spectrum in Fig. <ref>.Let us note here that also the exact diagonalization method has some problems, mostly in the limit of very strong interactions. It is related to the fact that the resulting energies converge to the exact value very slowly with increasing cutoff N_max. Nevertheless, in principle one has a full control on this convergence and can unambiguously indicate a systematic error related to this numerical approximation. However, in cases where convergence is prohibitively slow, the access to a simple ansatz is extremely useful.Overlap of the many-body ground states.— It is quite natural that in the case of any variational method used to determine the ground state of a many-body problem, a coincidence of energies is not sufficient to claim that the quantum state is predicted correctly. One of the methods to check if the quantum state is reproduced correctly is to calculate its fidelity, i.e., an overlap of the approximate state with the many-body ground state obtained from the exact diagonalization method:F(g) = |⟨Ψ(g)|Φ(g)⟩|^2.Obviously in the case studied, for g=0 and g→∞, the fidelity F is equal to 1 since in these limiting cases the wave function is reproduced exactly. For intermediate interactions the fidelity is smaller than 1, and it is presented in Fig. 2. Surprisingly, for equal mass mixture μ=1 (thin line) the overlap is close to 1 for any interaction, i.e. the wave function of the ground state is reproduced correctly. However, if the mass ratio increases (thick line) the predictions of the ansatz become worse for intermediate interactions. However, the overlap is still quite large. This observation suggests that different quantities extracted from the approximate ground-state wave function served by the ansatz may have values close to those obtained from the exact method.To check this hypothesis, in the following we will compare different predictions of the variational approximation with predictions of the exact diagonalization method. Single particle density.— Apart from the ground-state energy, one of the quantities, which can be measured straightforwardly in experiments, is a spatial density profile of the particles of a given component. Typically it is done by repeating and averaging instantaneous detections of positions of all particles. 
In principle, in the limit of an infinite number of repetitions, the resulting density approaches the theoretical quantities extracted from the many-body wave function n_a(q_1) = ∫dq_2∫dq_3∫dq_4 |Φ_g(q_1,q_2;q_3,q_4)|^2, n_b(q_3) = ∫dq_1∫dq_2∫dq_4 |Φ_g(q_1,q_2;q_3,q_4)|^2. These profiles can be directly compared with the profiles calculated analogously from the variational ground state of the system |Ψ(g)⟩. Obviously, since the ansatz is based on the proper wave functions in g=0 and g→∞, in these limiting cases the predictions of both methods match. If any discrepancies between both predictions exist, one should expect them in the range of interactions where the fidelity F is essentially less than 1. In Fig. <ref> we show the density profiles obtained from both methods for g=2. For equal mass case μ=1 (left panel in Fig. <ref>), the exact profile is much flatter than the profile from the variational method. It means that for intermediate interactions variational wave function overestimate contribution from the non-interacting many-body wave function.When the mass difference between atoms is introduced (right panel in Fig. <ref>), the density profile of a heavier component is improved. At the same time, the density of a lighter component becomes worse. We checked that this scenario is quite general and it does not depend on statistics, i.e., the result is the same when analogous variational method is adopted for Bose-Bose or Bose-Fermi mixtures.Although the density profiles predicted by the variational ansatz have some discrepancies when compared to exact results, these differences are rather marginal and should not be of importance in comparison to experimental results. Additionally, we checked that also on the level of a complete single-particle density matrix (not only on its diagonal part) the predictions of the variational ansatz are very close to the exact results. It means that the proposed variational wave function can be safely used to predict any single-particle properties of the system of equal as well as different masses.Interparticle correlations.— A natural question, which arises at this point, is related to different interparticle correlations that are beyond description of a single-particle density matrix. Since the ansatz is based on a very simple superposition of two many-body states, in principle, it is not obvious if mutual correlations between particles, which are very sensitive to any change in the many-body wave function, are restored correctly. To answer this question we concentrate on the simplest two-body correlation, i.e., a two-particle density profile between components defined asρ(q_1,q_3) = ∫dq_2∫dq_4 |Φ_g(q_1,q_2;q_3,q_4)|^2. Density profiles for interacting system of four fermions of the same and different masses are presented in top and bottom panels of Fig. <ref>, respectively. As before, the presented results are obtained for the intermediate interactions g=2, where the fidelity F is essentially lower than 1. It is seen that, in general, the predictions of the variational method are also consistent with exact results. However, some differences are visible, especially for the different-mass systems. Firstly, the variational pair density profiles are much more smeared than the profiles obtained with the exact method. In addition, for the different-mass system, the exact probability of finding both particles in the middle of the trap, in contrast to predictions of the variational method, rapidly drops to zero when the mass ratio μ is increased. 
This observation is one of discrepancies of the variational ansatz, which may lead to some quantitative differences as compared to experimental data.Occupations.— One of the less obvious ways of comparing results obtained with different methods is checking the predictions for the occupations of the single-particle orbitals, i.e., the quantities which mathematically are defined for the variational ground state of the system |Ψ(g)⟩ asP_a(k)= ∑_lmn⟨ kl;mn|Ψ(g)⟩, P_b(m)= ∑_kln⟨ kl;mn|Ψ(g)⟩. For the exact ground state of the system |Φ(g)⟩ the definitions are analogous. These quantities are quite interesting in the context of ultra-cold atoms since they can be measured experimentally by an appropriate lowering of the external confinement <cit.>. Therefore, the theoretical predictions for these quantities can be validated.In Fig. <ref> we present probabilities Eq. (<ref>) calculated for some of the lowest single-particle states as the functions of interactions g for equal (top panel) and different (bottom panel) mass systems. The results based on the variational method (lines) are compared with the probabilities obtained from the exact-diagonalization approach (squares, crosses, etc.). Obviously, in the case of an equal-mass system, both flavors have exactly the same probabilities. For vanishing interactions particles can be found only in the two lowest states (black solid lines and crosses or black dashed lines and squares for states with k=0 or k=1, respectively). As interaction increases, both probabilities decrease and higher single-particle states become partially occupied. In this case, predictions of the variational method, although not perfect, reproduce results from the exact method quite well.The situation changes significantly, when a mixture of different masses is considered. In this case, predictions of both methods are roughly consistent only for the heavier component. For the lighter component, the occupation of the lowest single-particle orbital rapidly drops with the increase of the interactions and the third orbital becomes significantly occupied. Moreover, at some moment the occupation of the ground orbital becomes less probable than the occupation of the third state. This behavior of probabilities for lighter component is predicted by the variational ansatz. However, for small interactions a mentioned drop is too slow, whereas for stronger interactions (around g=2) it is too rapid. Nevertheless, differences between exact diagonalization predictions and variational approach are not essential. It means that also in the case of different-mass systems the variational ansatz can still be used for qualitative predictions when occupations of different single-particle orbitals are considered.This extremely good agreement between the predictions of the variational ansatz and the exact-diagonalization approach in the case of the heavy component is related to the fact that in both limiting cases (g=0 and g→∞) the heavy particles are located in the middle of the trap. The situation is different for light particles, i.e., in the limit of strong repulsion the light particles are pushed out from the middle of the trap. This implies that for the light particles interactions have a stronger effect spatially. As a consequence, it is considerably more difficult to capture this effect by the ansatz constructed as a superposition of the limiting wave functions. 
This effect is directly reflected in the occupations of the single-particle orbitals.§ FINAL REMARKSOther statistics.— Although the results presented are related to the fermionic mixtures, to obtain a wider perspective on the problem of accuracy of the variational ansatz, it is worth considering different kinds of mixtures of four quantum particles. Both methods, i.e., the variational ansatz and the exact diagonalization approach, can be easily adopted for mixtures of two kinds of bosons or one kind of boson and one kind of fermion. Formally, the only difference has to be introduced in (anti)-symmetrization definitions in Eq. (<ref>). Of course, these changes may have (and typically do have) decisive consequences for the results obtained.We have performed appropriate calculations for Bose-Bose and Bose-Fermi mixtures under the assumption that the bosons within a given flavor do not interact and the only non-vanishing interaction is present between different components. This assumption gives us a simple comprehensive tool for testing the role of quantum statistics. The strongly interacting states for the equal mass case in Bose-Fermi mixtures have been a subject of several recent discussions <cit.>. We note that in Bose-Bose mixtures with no interactions within a given flavor, the strongly interacting wave function cannot be found by building it on a basis of totally antisymmetric wave function, but other techniques to obtain it have been discussed recently <cit.>.While we do not present the full results of our calculations here, some of the results obtained in this way were already mentioned previously. From our numerical tests and comparisons some general conclusions about the role of the statistics can be given. Independently of the statistics, the simple variational ansatz works surprisingly well and it can be safely used for simple qualitative and quantitative predictions when single-particle observables are considered. In fact, the case that we have presented here with two kinds of fermions is that in which the comparison between the numerically exact and the variational method is the worst. For other compositions of the particles the variational method agrees even better with the numerical results. We do caution though that whenever higher interparticle correlations are considered, one should be very careful since the predictions of the variational method proposed can be overestimated.Improving the ansatz.— It is quite obvious that in principle the variational probe function (<ref>) could be extended by superposing additional many-body state — for example the many-body ground state obtained numerically for the interaction g for which the accuracy is the worst. Although such extension is possible it requires some numerical effort to obtain an additional many-body state. Therefore, a lot of the beauty and simplicity of the idea may be quickly lost. Nevertheless, this direction would be necessary if larger number of particles were considered.Conclusions.— In this paper we compared predictions of the interpolatory ansatz introduced in <cit.> with the numerically exact method of diagonalization of the many-body Hamiltonian. Surprisingly, the simple assumption that the ground state of four interacting fermions of different masses can be well approximated by a superposition of two many-body ground states obtained in the limits of very strong and vanishing interactions, is sufficient to describe many different properties of the system. 
Obviously, in this simplified description some discrepancies are present for the intermediate interactions, but they are rather small and not decisive in the view of a quite drastic simplification.§ ACKNOWLEDGMENTSThis work was partially supported by the (Polish) National Science Center Grants No. 2016/21/N/ST2/03315 (DP) and 2016/22/E/ST2/00555 (TS). The authors would like to thank M. E. S. Andersen, M. Valiente, and A. Volosniev for discussions. The work of A. S. D. and N. T. Z. is supported by the Danish Council for Independent Research DFF and the DFF Sapere Aude program.§ APPENDIX: DETAILS OF THE ANSATZIt can be shown straightforwardly that the trial energy E calculated as an expectation value of the Hamiltonian (<ref>) in the variational wave function (<ref>) is given as:E = ⟨Ψ(g)|ℋ|Ψ(g)⟩/⟨Ψ(g)|Ψ(g)⟩ = E(0) + ⟨Ψ_0| H_ab|Ψ_0⟩α^2 + Δ Eβ^2/α^2 + β^2 + 2⟨Ψ_0|Ψ_∞⟩αβ,where Δ E = E(∞) - E(0). By finding the extreme points of the above expression, one finds the stationary solutions, which are determined by the following condition (α/β)_opt^(±) = Δ E - ⟨Ψ_0| H_ab|Ψ_0⟩∓√((Δ E - ⟨Ψ_0| H_ab|Ψ_0⟩)^2+4 ⟨Ψ_0| H_ab|Ψ_0⟩Δ E ⟨Ψ_0|Ψ_∞⟩^2)/2 ⟨Ψ_0| H_ab|Ψ_0⟩⟨Ψ_0|Ψ_∞⟩.As a consequence, the optimized energy reduces toE_opt^(±) = E(0) +⟨Ψ_0| H_ab|Ψ_0⟩ + Δ E ±√(( ⟨Ψ_0| H_ab|Ψ_0⟩ + Δ E)^2 - 4 ⟨Ψ_0| H_ab|Ψ_0⟩Δ E(1- ⟨Ψ_0|Ψ_∞⟩^2))/2(1- ⟨Ψ_0|Ψ_∞⟩^2). The above estimation of the energy turns out to be insufficient in the limit of strong as well as weak interactions <cit.>. It can be improved by requiring the correct slope of the energy as a function of -1/g. It is shown that up to the first-order expansion the slope of the energy in the strong interactions regime can be calculated exactly and it is given as<cit.>:K_opt^∞ = ∂ E_opt/∂ (-1/g)|_g→∞ = Δ E^2/K^0⟨Ψ_0|Ψ_∞⟩^2,where K^0 =⟨Ψ_0| H_ab|Ψ_0⟩/g. For a given K_opt^∞ we can now find a new value of ⟨Ψ_0|Ψ_∞⟩, which can be inserted in the expression for the optimized energy (<ref>). In this way one obtains much better estimation of the ground-state energy. Even though the modified ansatz reproduces the energy much better than the original ansatz (see Fig. <ref>), it comes with the cost that the ground-state wave function is no longer known. As a consequence, the modified ansatz is useful only for performing a better estimation of the energy.apsrev4-1
http://arxiv.org/abs/1703.08720v2
{ "authors": [ "Daniel Pęcak", "Amin S. Dehkharghani", "Nikolaj T. Zinner", "Tomasz Sowiński" ], "categories": [ "cond-mat.quant-gas", "quant-ph" ], "primary_category": "cond-mat.quant-gas", "published": "20170325173819", "title": "Four fermions in a one-dimensional harmonic trap: Accuracy of a variational-ansatz approach" }
Alexander.Balakin@kpfu.ru Department of General Relativity and Gravitation, Institute of Physics, Kazan Federal University, Kremlevskaya str. 18, Kazan 420008, Russia Alexei.Zayats@kpfu.ru Department of General Relativity and Gravitation, Institute of Physics, Kazan Federal University, Kremlevskaya street 18, Kazan 420008, Russia In the framework of Einstein-Maxwell-axion theory we consider static spherically symmetric solutions, which describe a magnetic monopole in the axionic environment. These solutions are interpreted as the solutions for an axionic dyon, the electric charge of which is composite, i.e., in addition to the standard central electric charge, it includesan effective electric charge induced by the axion-photon coupling. We focus on the analysis of that solutions, which are characterized by the electric field regular at the center. Special attention is paid to the solutions with the electric field, which is vanishing at the center, has the Coulombian asymptote and thus display an extremum at some distant sphere.Constraints on the electric and effective scalar charges of such an object are discussed. 04.20.-q, 04.40.-bEinstein-Maxwell-axion theory: Dyon solution with regular electric field Alexei E. Zayats December 30, 2023 ========================================================================§ INTRODUCTION In 1987 Wilczek has formulated the idea that for a distant observer the magnetic monopole in an axionic environment looks like a dyon with magnetic and effective electric charge <cit.>. This idea was based on the prediction of the axion electrodynamics that the interaction between the radial magnetic field, attributed to the monopole, and the surrounding pseudoscalar (axion) field produces the radial electric field without real electric charge at the center. That is why one can say, that Wilczek presented in 1987 the first example of the so-called axionic dyon. The axion electrodynamics, on which this result was based, has been established and developed in the decade 1977-1987, being inspired by the theoretical discovery of Peccei and Quinnof the CP-invariance conservation <cit.>, and by discussions about a new light pseudo-Goldstone boson introduced by Weinberg <cit.> and Wilczek <cit.>. The model of coupling of the pseudoscalar and electromagnetic fields was formulated in covariant formby Ni in <cit.>; the axion electrodynamics written in the 3-dimensional form was used by many authors (see, e.g., the work of Sikivie <cit.>). Since the axions are considered to be candidates to the dark matter particles <cit.>, the physics of axions had become one of the key elements of numerous applications to cosmology and astrophysics. These applications involve into consideration various models of interaction of gravitational, electromagnetic, scalar and pseudoscalar fields, which are called nowadays the Einstein-Maxwell-axion, and Einstein-Maxwell-axion-dilaton models (see, e.g., <cit.>). Also, these applications attract the attention to the models, which belong to the class of theories indicated by term Extended Axion Electrodynamics <cit.>.In 1991 Lee and Weinberg <cit.> studied spherically symmetric solutions for static black holes with massless axionlike scalar field; in fact, it was a realization of the Wilczek idea in the framework of Einstein-Maxwell-axion theory. 
Lee and Weinberg have obtained self-consistent master equations for the axion field and metric coefficients, analyzed the asymptotic properties of the solutions, and studied analytic and numeric solutions for the cases of large and small values of the constant of the axion-photon coupling. If we omit the initial electric charge at the center of the object described in <cit.>, we find the solution for the axionic dyon, which was obtained in the framework of the Einstein-Maxwell-axion model, and was predicted in <cit.> using the simple Maxwell-axion model. In this sense, one can say, that in <cit.> the authors presented the first (static) example of the so-called Longitudinal Magneto-Electric Cluster, in which the magnetic and axionically induced electric fields are parallel to one another. Later the solutions describing the Longitudinal Clusters were found in the systems with the pp-wave symmetry <cit.>, and in the context of search for fingerprints of relic axions in the terrestrial magnetosphere <cit.>.Now we are interested to find a regular solution for the axionic dyon. What does this means? In 1968 Bardeen <cit.> attracted the attention to the solutions of the field equations regular in the center. The first idea was to modify the equations for the electric field so that it will be finite at r=0; for instance, it might be the function E(r)=Q/r^2+a^2 with E(0)=Q/a^2 and the Coulombian asymptote E(r)→Q/r^2. In the framework of Einstein-Maxwell theory the regularity in the center assumes that not only the electric field is finite, but the metric coefficients and all curvature invariants are finite as well. The story of search for regular solutions is worthy to be a subject of special review; we would like to mention only three details in this context. First, the nonminimal coupling of electromagnetic and gauge fields can provide the gravitational field to be regular (see, e.g., <cit.>). Second, the nonminimal coupling can provide the electric field to be finite in the center (see, e.g., <cit.>). Third, for solutions with a magnetic monopole field the situation is not perfect; for the mentioned solutions the first invariant of the electromagnetic (or gauge) field, B⃗^2-E⃗^2, is not regular in the center, since the magnetic field, B(r)=ν/r^2, in contrast to the electric one, cannot be finite there. As for the second (pseudo)invariant (B⃗·E⃗), it is possible to be finite in the center, when the electric field is not only finite, but tends to zero not slowly than r^2. On the other hand, if the electric field strength E is finite at the origin, but does not vanish, vector field E⃗ has a hedgehog-like singularity. Therefore if we expect to find a solution, which is characterized by the electric field regular in the center in the strict sense of the word, we should require the condition E(0)=0 to be satisfied.Thus, searching for the regular axionic dyons, we are faced with the problem to find an exact solution of the field equations, for which the electric component vanishes both at r→ 0 and r →∞. Below we intend to show that it is possible for magnetic monopole surrounded by the pseudoscalar (axion) field, when the guiding parameters of the model are specifically coupled.The paper is organized as follows. In Section <ref>, we remind basic details of the Einstein-Maxwell-axion theory, and recover well-known solutions with vanishing constant of the axion-photon coupling (γ=0), using the harmonic spacetime coordinates. 
In Section <ref>, we analyze the solutions with nonvanishing γ; in Subsection <ref> we discuss an example of exact solution for the axionic dyon singular at the center; in Subsection <ref> we study (analytically) the regular solutions of the axion electrostatics in the background of magnetic monopole; the results of the numerical study are presented in Subsection <ref>. Section <ref> contains conclusions.§ EINSTEIN-MAXWELL-AXION MODEL §.§ Basic formalism The action functional of the Einstein-Maxwell-axion modeltakes the formS_EMa=∫ L √(-g)d^4x, L=R/2κ+1/4 F_ikF^ik +1/4γ F_ik^ikϕ-1/2∇_iϕ∇^iϕ+1/2 m^2_aϕ^2.Here R is the Ricci scalar; g is the determinant of the metric tensor g_ik; κ is the Einstein constant, F_ik is the Maxwell tensor, _ik denotes its dual tensor, ϕ stands for the pseudoscalar (axion) field; γ is the constant of the axion-photon coupling; and m_a is the axion mass.The variation of the action (<ref>) with respect to potentials of the electromagnetic field A_i, to the axion field ϕ, to the spacetime metric g^ik gives, respectively, the equations of axion electrodynamics∇_iF^ik+γ^ik∇_iϕ=0,the equation for the axion field ϕ∇_i∇^i ϕ+m^2_aϕ+γ/4F_ik^ik=0,and the equations for the gravitational fieldR_ik-1/2 R g_ik=κ(T_ik^(M) +T_ik^(a)).Here T_ik^(M) and T_ik^(a) are the energy-momentum tensors for the electromagnetic and axion fields, respectively, which are defined as followsT_ik^(M)=1/4g_ikF_mnF^mn-F_inF_k^n, T_ik^(a)=∇_iϕ∇_kϕ-1/2 g_ik∇_nϕ∇^nϕ+1/2 m_a^2ϕ^2 g_ik.The dual Maxwell tensor satisfies the equation ∇_k ^ik=0, which is free of information about the axion field.§.§ Static spherically symmetric spacetime Let us consider a static spherically symmetric spacetime with the metricds^2= e^-2β(u)dt^2- e^2β(u)[ e^-4ρ(u)du^2+ e^-2ρ(u)dΩ^2]. We use the harmonic coordinate system <cit.>, in which the variable u plays the role of a radial coordinate; the spatial infinity corresponds to u=0.We assume that the axion field depends on the radial coordinate only, i.e., ϕ=ϕ(u). This system is more convenient to analyze scalar field models, but when we will need to revive the usual spherical coordinate notation, we putr= e^β(u)-ρ(u).At the spatial infinity, i.e, at u=0, asymptotic behavior of the spacetime metric is supposed to be Minkowskian. It means thatβ(0)=0, ρ(u)|_u→ 0∼ln u.In this paper, we focus on the study of configurations with a magnetic monopole located at the center; the Maxwell tensor components are chosen to be equal toF_ut=q(u) e^-2β(u), F_θφ=μsinθ,where the constant μ relates to the magnetic charge, q(u) is a function to be found. To characterize the electric field it is convenient to introduce also the scalar quantity E defined as followsE≡q/r^2=qe^2ρ-2β.In fact, the scalar E is the tetrad component of the Maxwell tensor E=√(-F_utF^ut). Inthese terms the equations of axion electrodynamics (<ref>) reduce to one equation(q-γμϕ)'_u=0,yielding the solutionq=Q+γμ(ϕ-ϕ_0).The constant of integrationϕ_0 is the value of the axion field at the infinity, i.e., ϕ(u=0)=ϕ_0; similarly, we define Q=q(0).The axion field equation (<ref>) takes now the formϕ”_uu-γμ qe^-2β-m^2_a ϕe^2β-4ρ =0.Using (<ref>) we can rewrite this equation as follows:q”_uu-γ^2μ^2 qe^-2β-m^2_a (q+γμϕ_0-Q)e^2β-4ρ=0. There are four nontrivial equations of the gravitational field. 
For the metric (<ref>) four nonvanishing components of the Einstein tensor G^k_i = R^k_i -1/2δ_i^k R areG_u^u= e^-2β+4ρ(β'_u^2-ρ'_u^2+ e^-2ρ),G_θ^θ=G_φ^φ= e^-2β+4ρ(-β'_u^2+ρ'_u^2+ρ”_uu),G_t^t= e^-2β+4ρ(-β'_u^2+ρ'_u^2-2β”_uu+2ρ”_uu+ e^-2ρ).The corresponding four nonvanishing components of the energy-momentum tensor T^k_i = T^k(M)_i+T^k(a)_itake the form (see (<ref>) and (<ref>))T_u^u=1/2 e^-2β+4ρ[(μ^2+q^2) e^-2β-ϕ'_u^2]+1/2 m^2_aϕ^2,T_θ^θ=T_φ^φ=1/2 e^-2β+4ρ[-(μ^2+q^2) e^-2β+ϕ'_u^2]+1/2 m^2_aϕ^2,T_t^t=1/2 e^-2β+4ρ[(μ^2+q^2) e^-2β+ϕ'_u^2]+1/2 m^2_aϕ^2.If we assume (as in <cit.>) that the axion field is massless, m_a=0, three independent equationsfor gravity field can be rewritten asρ”_uu+ e^-2ρ=0,β”_uu+κ/2(μ^2+q^2) e^-2β=0, ρ'_u^2- e^-2ρ=β'_u^2+κ/2ϕ'_u^2-κ/2(μ^2+q^2) e^-2β.Clearly, the first equation is decoupled from other ones, and can be immediately resolved. Indeed, the first integral of (<ref>) isρ'_u^2- e^-2ρ=C,and the the solution satisfying the condition (<ref>) takes the formρ=lnΠ(C,u), Π(C,u)≡{[ sinhν uν, when C=ν^2>0,;u, when C=0,; sinν uν, when C=-ν^2<0. ].Also, one can check directly, that with (<ref>) Eq. (<ref>) is a differential consequence of (<ref>) with (<ref>). Thus the key subsystem of master equations consists of the following pair of equationsβ'_u^2+κ/2γ^2μ^2q'_u^2-κ/2(μ^2+q^2) e^-2β=C,q”_uu-γ^2μ^2 qe^-2β=0.When the quantities β(u) and q(u) are found, the axion field and the electric field can be reconstructed asϕ=ϕ_0+q-Q/γμ ,E(u) =q(u) e^-2βΠ^2(C,u).In other words, we have to find two functions β and q, which satisfy the key system of equations (<ref>). Since we use nonstandard coordinate u instead of radial variable r, we would like to comment how the known solutions can be displayed in these terms. §.§ Known solutions in the u-representation with vanishing constant of axion-photon coupling In order to illustrate a behavior of the metric functions β, ρ and the function q, let us give some examples of well-known spacetimes, for which the axion-photon coupling is supposed to be absent, γ=0.§.§.§ Schwarzschild solution In case when μ=0 and q=0, the first equation from Eq. (<ref>) reduces to the following formβ'_u^2=C>0and the solution to it with the condition (<ref>) can be found immediatelyβ=Mu,where C=M^2. The formula (<ref>) givesρ=ln(sinh Mu/M).Thus, we obtain the Schwarzschild metric in the harmonic coordinatesds^2= e^-2Mudt^2-M^4e^2Mu/sinh^4 Mudu^2-M^2e^2Mu/sinh^2 MudΩ^2.After transformation of the radial coordinate (see (<ref>))r=Me^Mu/sinh Mu, ⇔ u=-1/2Mln(1-2M/r)this metric returns to its standard formds^2 = (1-2M/r)dt^2 - (1-2M/r)^-1dr^2 - r^2 dΩ^2.The constant M plays here the role of the mass. Mention should be made, that the u-coordinate system covers the Schwarzschild spacetime from the spatial infinity (u=0) till the horizon (u→∞) only. When u→∞ the metric component g_tt= e^-2Mu tends to zero, i.e., r→ 2M.§.§.§ Reissner-Nordström solution Let the axion field and the function q be constant, ϕ=ϕ_0, γ=0 and q=Q. Then the second equation from Eq. (<ref>) is an identity, and the first one is simplified asβ'_u^2-κ/2(μ^2+Q^2) e^-2β=C.The solution to this equation, which satisfies the condition β(0)=0, isβ=ln[Π(C,u+u_*)/Π(C,u_*)],where the value u_* can be obtained from the conditionΠ(C,u_*)=[κ(μ^2+Q^2)/2]^-1/2.In order to clarify the sense of the constant C for the Reissner-Nordström solution, we consider the case u→0. 
At the origin the metric function β behaves asβ|_u→0∼[C+1/Π(C,u_*)^2]^1/2· u,and keeping in mind the Schwarzschild solution we can identify the factor in front of u with the mass M, i.e.,C=M^2-κ(μ^2+Q^2)/2.Thus, the Reissner-Nordström solution in the harmonic coordinate system takes the formds^2 =Π(C,u_*)^2/Π(C,u+u_*)^2dt^2-Π(C,u+u_*)^2/Π(C,u)^2 Π(C,u_*)^2(du^2/Π(C,u)^2+dΩ^2).We have to compare this solution with the well-known oneds^2= A(r) dt^2 - dr^2/A(r)- r^2 dΩ^2,A(r) = (1-M/r)^2 + 1/2r^2(Q^2+μ^2-2M^2).The solutions (<ref>) can be identified with (<ref>) keeping in mind the number of horizons. (i)One horizon.When M^2>κ(Q^2+μ^2)/2, i.e., when C=ν^2>0, there is one horizon at u=∞ and r(u→∞)=M+√(C).(ii)Naked singularity.When C=-ν^2<0, one obtains thate^ρ=sinν u/ν,e^β=sinν (u+u_*)/sinν u_*,u_*=1/νarcsinν/√(ν^2+M^2), r= e^β-ρ=√(ν^2+M^2)·sinν(u+u_*)/sinν u.If u+u_*=π/ν then r→ 0, therefore this point u=u_* corresponds to the central naked singularity.(iii)Double horizon.When M^2=κ(Q^2+μ^2)/2, i.e., when C=0, we obtaine^ρ=u, u_*=1/M,e^-2β= (1-M/r)^2,After transformation of the radial coordinate r=(1+Mu)/u⇔ u=1/r-Mone can derive the standard form of the metricds^2=(1-M/r)^2dt^2-(1-M/r)^-2dr^2-r^2 dΩ^2. §.§.§ Penney and Fisher solutions When the axion-photon coupling constant γ is equal to zero and m_a=0, the Eq. (<ref>) reduces to ϕ”_uu=0, thus the axion field is linear in the variable uϕ=ϕ_0+Pu.The integration constant P can be indicated as an scalar (axion) “charge”. The constant C is now a combination of the charges P, Q, and μC=M^2-κ/2(μ^2+Q^2-P^2).The equations (<ref>) give nowβ_u^2=M^2+κ/2(μ^2+Q^2)[ e^-2β-1],and the solution to this equation takes the formβ=lnΠ(C̃,u+u_*)/Π(C̃,u_*).Here the modified constant C̃ is of the formC̃=M^2-κ/2(μ^2+Q^2), Π(C̃,u_*)=(κ(μ^2+Q^2)/2)^-1/2.For the metric functions β and ρ given by (<ref>)) the linear element (<ref>) covers the Penney solution <cit.>ds^2 =Π(C̃,u_*)^2/Π(C̃,u+u_*)^2dt^2-Π(C̃,u+u_*)^2/Π(C,u)^2 Π(C̃,u_*)^2(du^2/Π(C,u)^2+dΩ^2).Clearly, when P=0 the constants C and C̃ coincide, and the Penney solution reduces to the Reissner-Nordström one.When C̃=0, i.e.,M^2=κ/2(Q^2+μ^2),C=κ P^2/2>0,u_*=1/M ,we recover the “extremal” Penney solutionds^2=dt^2/(1+Mu)^2-C(1+Mu)^2/sinh^2 √(C)u(Cdu^2/sinh^2 √(C)u+dΩ^2). In the particular case, if both electric and magnetic charges, Q and μ, vanish, the metric (<ref>) turns into the Fisher metric <cit.>ds^2= e^-2Mudt^2- e^2Mu/Π(C,u)^4du^2- e^2Mu/Π(C,u)^2dΩ^2,where the constantC=M^2-κ P^2/2can be positive, vanishing or negative depending on the relation between the mass M and the scalar (axion) charge P.§ SOLUTIONS WITH NONVANISHING CONSTANT OF THE AXION-PHOTON COUPLING, Γ≠ 0Let us consider the general case, for which the axion-photon coupling constant γ does not vanish. We deal now with the key system of equationsβ'_u^2+κ/2γ^2μ^2q'_u^2-κ/2(μ^2+q^2) e^-2β=C,q”_uu-γ^2μ^2 qe^-2β=0,with the the boundary conditionsβ(0)=0,β'_u(0)=M ,q(0)=Q,q'_u(0)=γμ P.The first condition for β is the requirement that the spacetime is asymptotically Minkowskian; the second one introduces the asymptotic Schwarzschild mass M. The first condition for q means that Q is the asymptotic electric charge. As for the last condition, it appears from the relationship ϕ=ϕ_0+q-Q/γμ and the definition for the axion charge ϕ'_u(0)=P. As usual, we denote the asymptotic value of the pseudoscalar (axion) field as ϕ(0)=ϕ_0. 
and for this version of the key system of equations the constant C is not arbitrary, it satisfies the condition (<ref>) C=M^2-κ/2(μ^2+Q^2-P^2).Clearly, the key system of equations (<ref>) does not depend on u explicitly, we see u only as the argument of β(u) and q(u). This means that we can search for particular solutions of the form β=β(q(u)), and replace the derivative β^'_u by dβ/dq q^'_u in the key system yielding the following equation:β”_qq=-1/2κ(μ^2+q^2)+γ^2μ^2qβ'_q/C e^2β+1/2κ(μ^2+q^2) (β'_q^2+κ/2γ^2μ^2) .We will use this consequence in the next subsection to obtain particular exact solution to the key system. §.§ Exact solution with the singularity at the center In general case the key system of equations admits the numerical study only, that is why we would like to start our discussion with a particular but explicit example of a solution, when the constant C is vanishing, C=0.Then the first equation (<ref>) admits the solution quadratic in q:β(q)=Q^2-q^2/2μ^2,when five parameters M, Q, γ, κ, μ satisfy the following three relationshipsM=γ |Q|, κ=2γ^2, P=-μ Q.Since C=0, the second metric coefficient is of the form ρ(u)=ln u. In order to find the function q(u) we focus on the second equation(<ref>). With the parameters given by (<ref>) and boundary conditions (<ref>) the first integral of that equation isq'_u^2 - γ^2 μ^4e^q^2-Q^2/μ^2 =const = 0,so that its implicit solutionu=1/γ P√(π/2)e^Q^2/2μ^2[(q/μ√(2))-(Q/μ√(2))]is expressed in terms of the Gauss error function (x), defined as(x)=2/√(π)∫_0^x dte^-t^2 , (-x) = - (x).When |q|=∞, the first Gauss error function in (<ref>) takes finite value; this means that there exists a finite value u_∞, for which |q(u_∞)|=∞. For instance, when Q is positive, q(u_∞)= -∞, and u_∞ can be found as follows:u_∞=1/γ |μ|√(π/2)e^Q^2/2μ^2[1+(|Q|/|μ|√(2))] >0.The radial function r (<ref>) also can be presented in terms of Gauss error functions:r= e^β-ρ =√(2/π)γ Pe^-q^2/2μ^2×[(q/μ√(2))-(Q/μ√(2))]^-1.According to this formula, r(u_∞)=0, thus, we obtain thatE(u_∞)= ∞ and ϕ(u_∞)= ∞. In other words, we deal with central singularity at u=u_∞. On the other hand, q=0, when u=u_0, whereu_0=1/γ |μ|√(π/2)e^Q^2/2μ^2(|Q|/|μ|√(2)),which is valid for arbitrary signs of Q and μ. Similarly, we obtainr(u_0)= √(2/π)γ |μ|[(|Q|/|μ|√(2))]^-1>0 .Thus, the electric field takes zero value, when u=u_0 and r=r(u_0). Since E(u=u_0)=E(u=0)=0, the function E(u) reaches extremum at the finite value of the variable u (the type of extremum, minimum or maximum, is predetermined by the sign of the electric charge Q). Typical plots of E(r) and ϕ(r) are presented in Fig. <ref>.§.§ Axion electrostatics in the background field of the magnetic monopole §.§.§ Preamble: The regular solution to the equation of axion electrostatics in the flat spacetime In order to simplify further interpretation of solutions, let us assume, first, that the background spacetime is flat, i.e., β=0 and ρ=ln u. Then the last equation in (<ref>) reduces to the formq”_uu-γ^2μ^2q=0,and we can obtain the solution discovered by Campbell, Kaloper, and Olive <cit.>, which is regular at the center u=∞ and satisfies the condition q(u=0)=Qq(u)=Qe^-γ|μ| u.Another boundary condition q'(0)=γμ P gives the constraint on the axion chargeP=-Q μ,for which this regular solution exists.§.§.§ Exact solution to the equation of axion electrostatics We assume now that the background gravitational field is formed by the magnetic monopole without horizons and with the naked singularity at the center. 
In fact, the background metric relates to the Reissner-Nordström solution with a magnetic charge. This means that C= -ν^2 <0, ande^ρ=Π(C,u)=sinν u/ν, ν=√(R_μ^2-M^2) , e^β= R_μ sinν(u+u_*)/ν, sinν u_*=ν/R_μ . r= e^β-ρ=R_μ sinν (u+u_*)/sinν u,where R_μ=√(κμ^2/2) is the Reissner-Nordström radius. When u+u_* →π/ν, we obtain that e^β→ 0 and r → 0. In this spacetime background the function q(u), which determines the electric field induced by the axion-photon coupling, satisfies the equationsin^2 ν (u+u_*)q_uu = γ^2μ^2 ν^2/(ν^2+M^2) q .The replacement z=iν(u+u_*) transforms this equation into the Legendreequationd/dz[(1-z^2)dq/dz] + α (α +1) q =0.where the parameter α is introduced as follows:α (α +1)= γ^2μ^2/(ν^2+M^2)=2γ^2/κ ,α=1/2(±√(1+8γ^2/κ)-1) .The variable z is complex; the quantity |z| takes the value |z|=M/ν at u=0, and becomes infinite |z|=∞, when u=π/ν-u_*. We search for the solution q(z), which is regular for the interval M/ν < |z| < ∞, and we require especially, that the solution is regular at u=π/ν-u_*. As usual, q(z), the solution of the Legendre equation (<ref>), is the linear combination of P_α(z) and Q_α(z), the Legendre functions of the first and second kinds, respectively (see, e.g.,<cit.> for details). Keeping in mind the analytic properties of the Legendre functions, we can write the regular solution for q(u) satisfying the condition q(0)=Q in the following form: q(u)/Q =Q_α(z(u))+πiP_α(z(u))Θ(ν (u+u_*))/ Q_α(iν u_*)+πiP_α(iν u_*) ,z(u)=iν(u+u_*). Here Θ(ν (u+u_*)) is the Heaviside function; as was shown in <cit.> such a structure guarantees the regularity of the solution on the real axis of the complex plane z. Using (<ref>) and(<ref>) we can present the electric field E(r) as a function of r as follows: E(r)=Q/r^2· Q_α(iM/νΞ(r))+πiP_α(iM/νΞ(r))Θ(Ξ(r))/ Q_α(iM/ν)+πiP_α(iM/ν) , Ξ(r)=1-R_μ^2/rM.§.§.§ Integral representation of the solution For the analysis of regularity of the electric field one can use also the convenient integral representations of the Legendre functions (see <cit.>)Q_α(i x)+πiP_α(i x)Θ( x) = e^iπ(α+1)/2/(sin x)^α∫_x^π dξ(cos x- cosξ)^α = e^iπ(α+1)/2/(sin x)^α∫_-cos x^1 dz (z+cos x)^α/√(1-z^2),which yields, in particular,Q_α(iν u_*)+πiP_α(iν u_*) = e^iπ(α+1)/2(1+M^2/ν^2)^α/2∫_-M/√(ν^2+M^2)^1dz (z+M/√(ν^2+M^2))^α/√(1-z^2) .Using these representations, one can show thatq/Q=N^α/2·Z_α([M/R_μ-R_μ/r]N^-1/2)/Z_α(M/R_μ),where the function Z_α(ξ) is defined as followsZ_α(ξ)=∫_-ξ^1 dt (t+ξ)^α/√(1-t^2) ,and N is the standard Reissner-Nordström metric coefficientN=1-2M/r+R_μ^2/r^2.The function Z_α(ξ) satisfies the following relations:Z_α(0)=√(π)/2·Γ(α+1/2)/Γ(α+2/2),Z_α(1)=2^α√(π) Γ(α+1/2)/Γ(α+1),Z_α(-1+ξ)∼√(π/2)·Γ(α+1)/Γ(α+3/2) ξ^α+1/2, .Z_α(ξ)|_α→∞∼√(π/2α) (1+ξ)^α+1/2.Using the formula (<ref>), one can obtain that. q/Q|_r→ 0∼√(π)·Γ(α+1)/Γ(α+3/2)(1-M^2/R_μ^2)^α+1/2/2^α+1 Z_α(M/R_μ)(r/R_μ)^α+1 . The electric field E(r)=q/r^2 is regular at the center r=0, when α≥ 1. The value E(0) is finite, when α= 1, and E(0)=0, whenα>1. The second invariant of the electromagnetic field I_(2)≡1/4F_mn^mn is regular at the center, when α≥ 3. Indeed,1/4 F_mn^mn = 1/√(-g) E^ut θφ F_ut F_θφ = μ qe^4ρ-4β=μ q/r^4,thus, at r→ 0 the invariant 1/4 F_mn^mn∝ r^α-3. In Fig. <ref> we present typical plots of the function E(r) for three values of the parameter α.§.§.§ Behavior of the axion field When the function q(u) is found, the axion field ϕ=ϕ_0 + q-Q/γμ can be easily reconstructed. 
In particular, one can see that the axion field is regular at the center, when the function q(u) takes finite value at r=0. We focus now on the following detail: when u → 0, the quantity q-Q/γμ tends to u q^'(0)/γμ, so, in fact, we have to analyze the value of the quantity P/Q. This ratio can be calculated asP/Q =-√(α+1/α)μ×(ν/iR_μ· Q_α+1(iM/ν)+πiP_α+1(iM/ν)/ Q_α(iM/ν)+πiP_α(iM/ν)-M/R_μ).Integral representation (<ref>) of this quantity givesP/Q=-√(α+1/α)μ·( Z_α+1(M/R_μ)/Z_α(M/R_μ)-M/R_μ).For two limiting cases, M→0 and M→ R_μ, this expression takes the form.P/Q|_M→ 0=-2μ/√(α(α+1))·[Γ(α/2+1)/Γ(α+1/2)]^2, .P/Q|_M→ R_μ=-√(α/α+1)μ. Using the formula (<ref>), one can demonstrate the following detail: if α→∞, i.e., if the axion-photon coupling constant γ is much greater than √(κ), the ratio P/Q tends to a constant for any values of M∈[0,R_μ):.P/Q|_α→∞→ -μ.Thus, in this limit, α→∞, we obtain the result coinciding with the flat spacetime case(see (<ref>)).§.§.§ Limiting case M→ R_μ Let us consider the extremal Reissner-Nordström case with M→ R_μ. For this limit, we haveν=√(R^2_μ-M^2)=0, u_*=1/R_μ,u=1/r-R_μ.The metric function β takes the formβ=-ln(1-R_μ/r),while the electric field function q according the formula (<ref>) can be written as followsq=Q(1-R_μ/r)^α,or, excluding the variable r,β =-1/αlnq/Q.In contrast to the case M<R_μ, the function q vanishes at the double horizon r=R_μ≠0 and e^β|_q=0→∞. §.§ Qualitative and numerical studies of the regular solutions When the spacetime background is not fixed, i.e., the model is self-consistent, we have to solve the general system of the key equations (<ref>) and (<ref>). In contrast to the explicit example demonstrated in Subsection <ref>, regular solutions to this system can be presented in a numerical form only.In this subsection we will study solutions with the electric field, regular at the center, when the function q vanishes at r=0. The metric function β has to tend to -∞, because the naked singularity associated with the magnetic monopole cannot be removed. Using Eq. (<ref>), we obtain that β behaves as.β|_q→0∼1/α+1lnq/Q,where the parameter α>-1 has to satisfy the conditionα(α+1)=2γ^2/κ, α=1/2(√(1+8γ^2/κ)-1).Obviously, this relation does not differ from the corresponding expression (<ref>) for the background solution. The formula (<ref>) relates to the following asymptotic behavior of the functions q(u) and β(u):β(u)∝ln (u_0-u), q(u)∝ (u_0-u)^α+1,where the value u_0 corresponds to the valuer=0 at the center. For instance, for the background solution considered above u_0=π/ν-u_*. When e^ρ does not vanish at u=u_0, the standard radial coordinate r= e^β-ρ behaves as follows:r∝ u_0-u,and we haveq∝ r^α+1, β∝ln r.The first formula coincides qualitatively with the corresponding expression for the background solution (see (<ref>), and the electric field E(r) = q/r^2 vanishes at the center when α>1 as well.When u=0 the boundary conditions (<ref>) giveβ|_q=Q=0,.β'_q|_q=Q=M/γμ P.If we fix the electric and magnetic charges, Q and μ, the coupling constant γ, and the mass M, desired solution to Eq. (<ref>) with conditions (<ref>) and (<ref>) exists only for a specific value of the axion charge P, and the inequality μ P/Q<0 has to be valid. This latter constraint arises from the second equation of (<ref>), becauseq(0)/Q=1,q(u_0)/Q=0,q”_uu/Q=γ^2μ^2qe^-2β/Q>0.To illustrate dependence between Q, μ, P, and α (or γ), we present Figs. <ref> and <ref>. 
Each figure consists of three panels, which correspond to specific values of the coupling parameter α, namely, for α=1, 2, and 3, respectively. On the other hand, if C=0 Eq. (<ref>) admits a solution, which at q=0 behaves as follows (cf. (<ref>))β∼ -1/αlnq/Q.As it was mentioned above, such a solution corresponds to the metric, which possesses a double horizon. Curves describing in Figs. <ref> and <ref> the relationship between the charges, P, Q, and μ, and the mass M for this limiting configuration are drawn using gray color.Fig. <ref> depicts dependence between the ratio P/Q and the mass M for fixed values of the electric charge Q and the coupling parameter α. The first (left) curve corresponds to the limiting case Q≪μ described in Subsection <ref>. Other curves correspond to |Q|=n|μ|, where n=1,… ,5.Fig. <ref> illustrates dependence between the ratio Qμ and the mass M for fixed values of the axion scalar charge P and the coupling parameter α. The range M/R_μ∈ [0,1) on the horizontal axis corresponds again to the limiting case P≪μ described in Subsection <ref>. Color curves correspond to |P|=0.5n|μ|, where n=1,… ,5. The gray line defines the mass-charge relation for the spacetime metric with double (extremal) horizon. If the gravitational interaction is much weaker than the axion-photon coupling, i.e. when γ^2≫κ, or, equivalently, α≫ 1, the color curves become horizontal, |P|→ |Q| (see (<ref>)). The solution of Campbell, Kaloper, and Olive (<ref>) can be considered as a non-gravitational limit. § CONCLUSIONS In the present paper we realize Wilczek's idea about a magnetic monopole surrounded by an axion-induced radial electric field in the framework of the Einstein-Maxwell model with the massless axion field. Since this electric field is created by interaction between the magnetic field of the monopole and the axion field and is not related to any real electric charge, the electric field has to be regular in the center in the strict sense, i.e., E(0)=0. In this sense, our solution is a generalization of the result of Campbell, Kaloper, and Olive <cit.>, taking into account the gravitational field of the monopole.In Subsection <ref> we present the four-parameter family of solutions (see Eqs. (<ref>) and (<ref>)) in the framework of the axion electrodynamics on the background of the magnetic monopole gravitational field with the metric of the Reissner-Nordström type. The fifth parameter, the axion field charge P, is determined by other parameters, namely, the electric and magnetic charges Q and μ, the mass M, and the coupling parameter α (see Eq. (<ref>)). Besides this relation, the parameters are bounded by two inequalities, which correspond to requirements of absence of horizons (M^2<κμ^2/2) and regularity at the origin (α>1). In addition, when α≥ 3 the invariant scalar F_ik^ik appears to be regular in the center too.In Subsection <ref>, using numerical methods, we solvethe total system of equations attributed to the Einstein-Maxwell-axion model, in which the gravitational field is self-consistent, not the background one. We demonstrate that the behavior of the solutions to the self-consistent system qualitatively coincides with the background solution, and this background solution can be extracted from the general solution as an asymptotic case with Q, P≪μ. The work was supported by Russian Science Foundation (Project No. 16-12-10401), and, partially, by the Program of Competitive Growth of Kazan Federal University. 99Wilczek2 F. Wilczek, Two applications of axion electrodynamics, Phys. 
Rev. Lett. 58, 1799 (1987).PQ R.D. Peccei, H.R. Quinn, CP conservation in the presence of instantons, Phys. Rev. Lett. 38, 1440 (1977).Weinberg S. Weinberg, A new light boson? Phys. Rev. Lett. 40, 223 (1978).Wilczek F. Wilczek, Problem of strong P and T invariance in the presence of instantons, Phys. Rev. Lett. 40, 279 (1978).Ni77 W.-T. Ni, Equivalence principles and electromagnetism, Phys. Rev. Lett. 38, 301 (1977).Sikivie83 P. Sikivie, Experimental tests of the “invisible” axion, Phys. Rev. Lett. 51, 1415 (1983).ADM1 G.G. Raffelt, Astrophysical methods to constrain axions and other novel particle phenomena, Phys. Rept. 198, 1 (1990).ADM2 M.S. Turner, Windows on the axion, Phys. Rept. 197, 67 (1990).ADM3 E.P.S. Shellard, R.A. Battye, On the origin of dark matter axions, Phys. Rept. 307, 227 (1998).ADM4 B.A. Bassett, M. Kunz, Cosmic acceleration vs axion-photon mixing, Astrophys. J. 607, 661 (2004).ADM5 R. Battesti et al., Axion searches in the past, at present, and in the near future, Lect. Notes Phys. 741, 199 (2008).ADM6 F.D. Steffen, Dark-matter candidates — axions, neutralinos, gravitinos, and axinos, Eur. Phys. J. C 59, 557 (2009).ADM7 L.D. Duffy, K. van Bibber, Axions as dark matter particles, New J. Phys. 11, 105008 (2009).ADM8 P. Sikivie, Q. Yang, Bose-Einstein condensation of dark matter axions, Phys. Rev. Lett. 103, 111301 (2009).ADM9 M. Khlopov, Fundamentals of cosmic particle physics (Springer, 2012).EMAD1 K. Flathmann, S. Grunau, Analytic solutions of the geodesic equation for Einstein-Maxwell-dilaton-axion black holes, Phys. Rev. D 92, 104027 (2015).EMAD2 M. Azreg-Aïnou, G. Clément, D.V. Gal'tsov, All extremal instantons in Einstein-Maxwell-dilaton-axion theory, Phys. Rev. D 84, 104042 (2011).EMAD3 T. Matos, G. Miranda, R. Sanchez-Sanchez, P. Wiederhold, Class of Einstein-Maxwell-dilaton-axion space-times, Phys. Rev. D 79, 124016 (2009).ExtendedAE1 A.B. Balakin, W.-T. Ni, Non-minimal coupling of photons and axions, Class. Quantum Gravity 27, 055003 (2010).ExtendedAE2 A.B. Balakin, R.K. Muharlyamov, A.E. Zayats, Non-minimal Einstein-Maxwell-Vlasov-axion model, Class. Quantum Gravity, 31, 025005 (2014).ExtendedAE3 A.B. Balakin, N.O. Tarasova, Extended axion electrodynamics: Optical activity induced by nonstationary dark matter, Gravit. Cosmol. 18, 54 (2012).ExtendedAE4 A.B. Balakin, V.V. Bochkarev, N.O. Tarasova, Gradient models of the axion-photon coupling, Eur. Phys. J. C 72, 1895 (2012).ExtendedAE5 A.B. Balakin, R.K. Muharlyamov, A.E. Zayats, Electromagnetic waves in an axion-active relativistic plasma non-minimally coupled to gravity, Eur. Phys. J. C 73, 2647 (2013).ExtendedAE6 A.B. Balakin, R.K. Muharlyamov, A.E. Zayats, Axion-induced oscillations of cooperative electric field in a cosmic magneto-active plasma, Eur. Phys. J. D 68, 159 (2014).ExtendedAE7 A.B. Balakin, T.Y. Alpin, Extended axion electrodynamics: Anomalous dynamo-optical response induced by gravitational pp-waves, Gravit. Cosmol. 20, 152 (2014).ExtendedAE8 T.Yu. Alpin, A.B. Balakin, The Einstein-Maxwell-aether-axion theory: Dynamo-optical anomaly in the electromagnetic response, Int. J. Mod. Phys. D 25, 1650048 (2016).LW1991 K. Lee, E.J. Weinberg, Charged black holes with scalar hairs, Phys. Rev. D 44, 3159 (1991).pp1 A.B. Balakin, W.-T. Ni, Anomalous character of the axion-photon coupling in a magnetic field distorted by a pp-wave gravitational background, Class. Quantum Gravity 31, 105002 (2014).BG A.B. Balakin, L.V. 
Grunskaya, Axion electrodynamics and dark matter fingerprints in the terrestrial magnetic and electric fields, Rep. Math. Phys. 71, 45 (2013).Bardeen J.M. Bardeen, Non-singular general-relativistic gravitational collapse, in Abstracts of the 5th International Conference on Gravitation and the Theory of Relativity, edited by V. A. Fock et al. (Tbilisi University Press, Tbilisi, Georgia, 1968), p. 174.Reg1 A.B. Balakin, A.E. Zayats, Non-minimal Wu-Yang monopole, Phys. Lett. B 644, 294 (2007).Reg2 A.B. Balakin, H. Dehnen, A.E. Zayats, Non-minimal Einstein-Yang-Mills-Higgs theory: Associated, color and color-acoustic metrics for the Wu-Yang monopole model, Phys. Rev. D 76, 124011 (2007).Reg3 A.B. Balakin, J.P.S. Lemos, A.E. Zayats, Regular nonminimal magnetic black holes in spacetimes with a cosmological constant, Phys. Rev. D 93, 024008 (2016).Reg4 A.B. Balakin, V.V. Bochkarev, J.P.S. Lemos, Non-minimal coupling for the gravitational and electromagnetic fields: black hole solutions and solitons, Phys. Rev. D 77, 084013 (2008).Reg5 A.B. Balakin, J.P.S. Lemos, A.E. Zayats, Nonminimal coupling for the gravitational and electromagnetic fields: Traversable electric wormholes, Phys. Rev. D 81, 084015 (2010).Reg6 A.B. Balakin, A.E. Zayats, Nonminimal black holes with regular electric field, Int. J. Mod. Phys. D 24, 1542009 (2015).Bron73 K.A. Bronnikov, Scalar tensor theory and scalar charge, Acta Phys. Pol. B 4, 251 (1973).Penney R. Penney, Generalization of the Reissner-Nordström solution to the Einstein field equations, Phys. Rev. 182, 1383 (1969).Fisher I.Z. Fisher, Scalar mesostatic field with regard for gravitational effects, Zh. Eksp. Teor. Fiz. 18, 636 (1948); gr-qc/9911008.Campbell B.A. Campbell, N. Kaloper, K.A. Olive, Axion hair for dyon black holes, Phys. Lett. B 263, 364 (1991).Erd H. Bateman, A. Erdelyi, Higher Transcendental Functions, vol. 1 (McGraw-Hill, New York, 1953).
http://arxiv.org/abs/1703.08858v1
{ "authors": [ "Alexander B. Balakin", "Alexei E. Zayats" ], "categories": [ "gr-qc", "hep-th" ], "primary_category": "gr-qc", "published": "20170326181545", "title": "Einstein-Maxwell-axion theory: Dyon solution with regular electric field" }
Coherent Online Video Style Transfer [==================================== We propose a simple and general variant of the standard reparameterized gradient estimator for the variational evidence lower bound.Specifically, we remove a part of the total derivative with respect to the variational parameters that corresponds to the score function.Removing this term produces an unbiased gradient estimator whose variance approaches zero as the approximate posterior approaches the exact posterior.We analyze the behavior of this gradient estimator theoretically and empirically, and generalize it to more complex variational distributions such as mixtures and importance-weighted posteriors. § INTRODUCTION r0.4 90KL( ϕ_init  ϕ_true ) < g r a p h i c s >IterationsFitting a 100-dimensional variational posterior to another Gaussian, using standard gradient versus our proposed path derivative gradient estimator.Recent advances in variational inference have begun to make approximate inference practical in large-scale latent variable models.One of the main recent advances has been the development of variational autoencoders along with the reparameterization trick <cit.>.The reparameterization trick is applicable to most continuous latent-variable models, and usually provides lower-variance gradient estimates than the more general REINFORCE gradient estimator <cit.>. Intuitively, the reparameterization trick provides more informative gradients by exposing the dependence of sampled latent variableson variational parameters .In contrast, the REINFORCE gradient estimate only depends on the relationship between the density function log q_(|, ) and its parameters. Surprisingly, even the reparameterized gradient estimate contains the score function—a special case of the REINFORCE gradient estimator.We show that this term can easily be removed, and that doing so gives even lower-variance gradient estimates in many circumstances.In particular, as the variational posterior approaches the true posterior, this gradient estimator approaches zero variance faster, making stochastic gradient-based optimization converge and "stick" to the true variational parameters, as seen in figure <ref>.§.§ Contributions * We present a novel unbiased estimator for the variational evidence lower bound (ELBO) that has zero variance when the variational approximation is exact. * We provide a simple and general implementation of this trick in terms of a single change to the computation graph operated on by standard automatic differentiation packages.* We generalize our gradient estimator to mixture and importance-weighted lower bounds, and discuss extensions to flow-based approximate posteriors.This change takes a single function call using automatic differentiation packages.* We demonstrate the efficacy of this trick through experimental results on MNIST and Omniglot datasets using variational and importance-weighted autoencoders. §.§ Background Making predictions or computing expectations using latent variable models requires approximating the posterior distribution p( | ).Calculating these quantities in turn amounts to using Bayes' rule: p( | ) = p( | )p() / p(). Variational inference approximates p( | ) with a tractable distribution q_ϕ( | ) parameterized bythat is close in KL-divergence to the exact posterior. 
Minimizing the KL-divergence is equivalent to maximizing the evidence lower bound (ELBO): ELBOℒ(ϕ) = 𝔼_∼ q [log p(, ) - log q_ϕ(|)]An unbiased approximation of the gradient of the ELBO allows stochastic gradient descent to scalably learn parametric models.Stochastic gradients of the ELBO can be formed from the REINFORCE-style gradient, which applies to any continuous or discrete model, or a reparameterized gradient, which requires the latent variables to be modeled as continuous.Our variance reduction trick applies to the reparameterized gradient of the evidence lower bound.§ ESTIMATORS OF THE VARIATIONAL LOWER BOUND In this section, we analyze the gradient of the ELBO with respect to the variational parameters to show a source of variance that depends on the complexity of the approximate distribution.When the joint distribution p(, ) can be evaluated by p( | ) and p() separately, the ELBO can be written in the following three equivalent forms: ℒ(ϕ)= 𝔼_∼ q [log p(|) + logp() - log q_ϕ( | )]= 𝔼_∼ q [log p(|) + logp())] + ℍ[q_ϕ]= 𝔼_∼ q [log p(|)] - KL(q_ϕ (|) || p())Which ELBO estimator is best?When p() and q_(|) are multivariate Gaussians, using equation (<ref>) is appealing because it analytically integrates out terms that would otherwise have to be estimated by Monte Carlo.Intuitively, we might expect that using exact integrals wherever possible will give lower-variance estimators by reducing the number of terms to be estimated by Monte Carlo methods.Surprisingly, even when analytic forms of the entropy or KL divergence are available, sometimes it is better to use (<ref>) because it will have lower variance.Specifically, this occurs when q_( |) = p( | ), i.e. the variational approximation is exact. Then, the variance of the full Monte Carlo estimator ℒ̂_MC is exactly zero. Its value is a constant, independent of q_(|). This follows from the assumption q_( |) = p( | ):ℒ̂_MC(ϕ)= log p(, ) - log q_ϕ( | ) = log p( | ) + log p() - log p( | )= log p() ,This suggests that using equation (<ref>) should be preferred when we believe that q_(|) ≈ p(|). Another reason to prefer the ELBO estimator given by equation (<ref>) is that it is the most generally applicable, requiring a closed form only for q_ϕ (|).This makes it suitable for highly flexible approximate distributions such as normalizing flows <cit.>, Real NVP <cit.>, or Inverse Autoregressive Flows <cit.>.Estimators of the lower bound gradientWhat about estimating the gradient of the evidence lower bound?Perhaps surprisingly, the variance of the gradient of the fully Monte Carlo estimator (<ref>) with respect to the variational parameters is not zero, even when the variational parameters exactly capture the true posterior, i.e., q_(|) = p(|).This phenomenon can be understood by decomposing the gradient of the evidence lower bound. Using the reparameterization trick, we can express a samplefrom a parametric distribution q_() as a deterministic function of a random variablewith some fixed distribution and the parametersof q_, i.e., = t(,).For example, if q_ is a diagonal Gaussian, then for ∼ N(0,𝕀), z = μ + σ is a sample from q_. Under such a parameterization of , we can decompose the total derivative (TD) of the integrand of estimator (<ref>) w.r.t. 
the trainable parameters ϕ as

∇̂_TD(ε, ϕ) = ∇_ϕ[ log p(x|z) + log p(z) − log q_ϕ(z | x) ]
            = ∇_z[ log p(x|z) + log p(z) − log q_ϕ(z | x) ] ∇_ϕ t(ε, ϕ) − ∇_ϕ log q_ϕ(z | x)
            = ∇_z[ log p(z | x) − log q_ϕ(z | x) ] ∇_ϕ t(ε, ϕ) (path derivative) − ∇_ϕ log q_ϕ(z | x) (score function),

where the last step uses log p(x|z) + log p(z) = log p(z|x) + log p(x), whose second term is constant in z.

[Figure: the evidence lower bound as a function of the sampled latent variables z and the variational parameters ϕ. As the variational distribution approaches the true posterior, the gradient with respect to the sampled z (blue) vanishes.]

The reparameterized gradient estimator w.r.t. ϕ decomposes into two parts. We call these the path derivative and score function components. The path derivative measures dependence on ϕ only through the sample z. The score function measures the dependence on log q_ϕ directly, without considering how the sample z changes as a function of ϕ. When q_ϕ(z|x) = p(z|x) for all z, the path derivative component of equation (<ref>) is identically zero for all z. However, the score function component is not necessarily zero for any z in some finite sample, meaning that the total derivative gradient estimator (<ref>) will have non-zero variance even when q matches the exact posterior everywhere. This variance is induced by the Monte Carlo sampling procedure itself. Figure <ref> depicts this phenomenon through the loss surface of log p(x, z) − log q_ϕ(z | x) for a mixture-of-Gaussians approximate and true posterior.

Path derivative of the ELBO: Could we remove the high-variance score function term from the gradient estimate? For stochastic gradient descent to converge, we require that our gradient estimate is unbiased. By construction, the gradient estimate given by equation (<ref>) is unbiased. Fortunately, the problematic score function term has expectation zero. If we simply remove that term, we maintain an unbiased estimator of the true gradient:

∇̂_PD(ε, ϕ) = ∇_z[ log p(z | x) − log q_ϕ(z | x) ] ∇_ϕ t(ε, ϕ).

This estimator, which we call the path derivative gradient estimator due to its dependence on the gradient flow only through the path variables z to update ϕ, is equivalent to the standard gradient estimate with the score function term removed. The path derivative estimator has the desirable property that as q_ϕ(z|x) approaches p(z|x), the variance of this estimator goes to zero.

When to prefer the path derivative estimator: Does eliminating the score function term from the gradient yield lower variance in all cases? It might seem that its removal can only have a variance reduction effect on the gradient estimator. Interestingly, the variance of the path derivative gradient estimator may actually be higher in some cases. This will be true when the score function is positively correlated with the remaining terms in the total derivative estimator. In this case, the score function acts as a control variate: a zero-expectation term added to an estimator in order to reduce variance. Control variates are usually scaled by an adaptive constant c^*, which modifies the magnitude and direction of the control variate to optimally reduce variance, as in <cit.>. In the preceding discussion, we have shown that c^* = 1 is optimal when the variational approximation is exact, since that choice yields analytically zero variance. When the variational approximation is not exact, an estimate of c^* based on the current minibatch will change sign and magnitude depending on the positive or negative correlation of the score function with the path derivative.
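The decomposition is easy to check numerically. Below is a toy PyTorch sketch of ours (not the authors' code): when q equals the true posterior (here both are N(0, 1)), the path derivative component is exactly zero for every sample, while the per-sample score function component is not:

```python
import torch

mu = torch.zeros(1, requires_grad=True)       # variational parameters phi
log_sigma = torch.zeros(1, requires_grad=True)
eps = torch.randn(1)
z = mu + log_sigma.exp() * eps                # z = t(eps, phi)

# Path derivative: parameters inside log q are detached, grad flows only via z.
log_p = torch.distributions.Normal(0., 1.).log_prob(z)
log_q_detached = torch.distributions.Normal(
    mu.detach(), log_sigma.detach().exp()).log_prob(z)
path = torch.autograd.grad((log_p - log_q_detached).sum(),
                           [mu, log_sigma], retain_graph=True)

# Score function: grad of -log q w.r.t. phi with the sample z held fixed.
log_q = torch.distributions.Normal(mu, log_sigma.exp()).log_prob(z.detach())
score = torch.autograd.grad(-log_q.sum(), [mu, log_sigma])

print(path)   # (tensor([0.]), tensor([0.])) -- vanishes exactly when q = p
print(score)  # generally nonzero for any single sample
```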
Optimal scale estimation is particularly important when the variance of an estimator is so large that convergence is unlikely. However, in the present case of reparameterized gradients, where the variance is already low, estimating a scaling constant introduces another source of variance. Indeed, we can only recover the true optimal scale when the variational approximation is exact, in the regime of infinite samples during Monte Carlo integration. Moreover, the score function must be independently estimated in order to scale it. Estimating the gradient of the score function independently of automatic reverse-mode differentiation can be a challenging engineering task for many flexible approximate posterior distributions such as Normalizing Flows <cit.>, Real NVP <cit.>, or IAF <cit.>. By contrast, in section <ref> we show improved performance on the MNIST and Omniglot density estimation benchmarks by approximating the optimal scale with 1 throughout optimization. This technique is easy to implement using existing automatic differentiation software packages. However, if estimating the score function independently is computationally feasible, and a practitioner has evidence that the variance induced by Monte Carlo integration will reduce the overall variance away from the optimum point, we recommend establishing an annealing schedule for the optimal scaling constant that converges to 1.

§ IMPLEMENTATION DETAILS

In this section, we introduce algorithms <ref> and <ref> in relation to reverse-mode automatic differentiation, and discuss how to implement the new gradient estimator in Theano, Autograd, Torch or TensorFlow <cit.>. Algorithm <ref> shows the standard reparameterized gradient for the ELBO. We require three function definitions: one to generate a reparameterized sample from the variational approximation, and functions that implement log p(x, z) and log q(z | x, ϕ). Once the loss is defined, we can leverage automatic differentiation to return the standard gradient evaluated at ϕ_t. This yields equation (<ref>).

Algorithm <ref> shows the path derivative gradient for the ELBO. The only difference from algorithm <ref> is the application of the stop_gradient function to the variational parameters inside the loss. Table <ref> indicates the names of stop_gradient in popular software packages. This simple modification to algorithm <ref> generates a copy of the parameter variable that is treated as a constant with respect to the computation graph generated for automatic differentiation. The copied variational parameters are used to evaluate the variational density log q_ϕ at z. Recall that the variational parameters ϕ are used both to generate z, through some deterministic function of an independent random variable ε, and to evaluate the density of z through log q_ϕ. By blocking the gradient through the variational parameters in the density function, we eliminate the score function term that appears in equation (<ref>). Per-iteration updates to the variational parameters ϕ then rely only on the path derivative component of the gradient of the loss function. This yields the gradient estimator corresponding to equation (<ref>).
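As an illustration of the single-function-call change, here is a minimal PyTorch-style sketch of ours, where detaching the parameters inside log q plays the role of stop_gradient; `log_p_joint` is again a hypothetical stand-in:

```python
import torch

def path_derivative_elbo(x, mu, log_sigma, log_p_joint):
    """Path-derivative ELBO: identical to the standard estimator except that
    the variational parameters are detached (stop_gradient) inside log q,
    which removes the score-function term from the reparameterized gradient."""
    eps = torch.randn_like(mu)
    z = mu + log_sigma.exp() * eps                   # gradient flows through z
    stopped_q = torch.distributions.Normal(mu.detach(), log_sigma.detach().exp())
    return log_p_joint(x, z) - stopped_q.log_prob(z).sum(-1)
```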
§ EXTENSIONS TO RICHER VARIATIONAL FAMILIES

Mixture Distributions: [Figure: trace norm of the covariance matrix of the path derivative and score function gradient components, each measured 1000 times as the variational parameters move from ϕ_init to ϕ_true, while fitting a mixture of 5 Gaussians as a variational approximation to a posterior that is also a mixture of 5 Gaussians. The path derivative goes to 0 as the variational approximation becomes exact, along an arbitrarily chosen path.]

In this section, we discuss extensions of the path derivative gradient estimator to richer variational approximations to the true posterior. Using a mixture distribution as an approximate posterior in an otherwise differentiable estimator introduces a problematic, non-differentiable random variable c ∼ Cat(α). We solve this by integrating out the discrete mixture choice from both the ELBO and the mixture distribution. In this section, we show that such a gradient estimator is unbiased, and introduce an extended algorithm to handle mixture variational families. For any mixture of K base distributions q_ϕ(z|x), a mixture variational family can be defined by

q_{ϕ_M}(z | x) = ∑_{c=1}^K π_c q_{ϕ_c}(z|x),

where ϕ_M = {π_1, ..., π_K, ϕ_1, ..., ϕ_K} are variational parameters, e.g., the weights and distributional parameters for each component. Then, the mixture ELBO ℒ_M is given by:

∑_{c=1}^K π_c 𝔼_{z_c ∼ q_{ϕ_c}}[ log p(x, z_c) − log( ∑_{k=1}^K π_k q_{ϕ_k}(z_c | x) ) ],

where the outer sum integrates over the choice of mixture component for each sample from q_{ϕ_M}, and the inner sum evaluates the density. Applying the new gradient estimator to the mixture ELBO involves applying it to each q_{ϕ_k}(z_c | x) in the inner marginalization. Algorithm <ref> implements the gradient estimator of (<ref>) in the context of a continuous mixture distribution. Like algorithm <ref>, the new gradient estimator of <ref> differs from the vanilla gradient estimator only in the application of stop_gradient to the variational parameters. This eliminates the gradient of the score function from the gradient of any mixture distribution.

Importance-Weighted Autoencoder: We also explore the effect of our new gradient estimator on the IWAE bound <cit.>, defined as

ℒ̂_K = 𝔼_{z_1,...,z_K ∼ q(z | x)}[ log( (1/K) ∑_{i=1}^K p(x, z_i)/q(z_i | x) ) ]

with gradient

∇_ϕ ℒ̂_K = 𝔼_{z_1,...,z_K ∼ q(z | x)}[ ∑_{i=1}^K w̃_i ∇_ϕ log w_i ],

where w_i ≜ p(x, z_i) / q(z_i | x) and w̃_i ≜ w_i / ∑_{j=1}^K w_j. Since ∇_ϕ log w_i is the same gradient as the Monte Carlo estimator of the ELBO (equation (<ref>)), we can again apply our trick to get a new estimator. However, it is not obvious whether this new gradient estimator is unbiased. In the unmodified IWAE bound, when q = p, the gradient with respect to the variational parameters reduces to:

𝔼_{z_1,...,z_K ∼ q(z | x)}[ −∑_{i=1}^K w̃_i ∇_ϕ log q_ϕ(z_i | x) ].

Each sample z_i is used to evaluate both w̃_i and the partial derivative term. Hence, we cannot simply appeal to the linearity of expectation to show that this gradient is 0. Nevertheless, a natural extension of the variance reduction technique in equation (<ref>) is to apply our variance reduction to each importance-weighted gradient sample. See algorithm <ref> for how to implement the path derivative estimator in this form. We present empirical validation of the idea in our experimental results section, which shows markedly improved results using our gradient estimator. We observe a strong improvement in many cases, supporting our conjecture that the gradient estimator is unbiased as in the mixture and multi-sample ELBO cases.
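A minimal sketch of ours of the path-derivative variant of the K-sample IWAE objective, detaching the variational parameters inside log q for every importance sample (`log_p_joint` is again a hypothetical stand-in, assumed to broadcast over the K samples):

```python
import math
import torch

def iwae_path_derivative(x, mu, log_sigma, log_p_joint, K=5):
    """K-sample IWAE bound with the score-function term removed: the
    variational parameters are detached inside log q for each sample."""
    eps = torch.randn(K, mu.shape[-1])
    z = mu + log_sigma.exp() * eps                              # K samples
    stopped_q = torch.distributions.Normal(mu.detach(), log_sigma.detach().exp())
    log_w = log_p_joint(x, z) - stopped_q.log_prob(z).sum(-1)   # log weights
    return torch.logsumexp(log_w, dim=0) - math.log(K)
```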
Flow Distributions: Flow-based approximate posteriors such as <cit.> are a powerful and flexible framework for fitting approximate posterior distributions in variational inference. Flow-based variational inference samples an initial z_0 from a simple base distribution with known density, then learns a chain of invertible, parameterized maps f_k(z_{k−1}) that warp z_0 into z_K = f_K ∘ f_{K−1} ∘ ... ∘ f_1(z_0). The endpoint z_K represents a sample from a more flexible distribution with density

log q_K(z_K) = log q_0(z_0) − ∑_{k=1}^K log| ∂f_k/∂z_{k−1} |.

We expect our gradient estimator to improve the performance of flow-based stochastic variational inference. However, due to the chain composition used to learn z_K, we cannot straightforwardly apply our trick as described in algorithm <ref>. This is because each intermediate z_j, 1 ≤ j ≤ K, contributes to the path derivative component in equation (<ref>). The log-Jacobian terms used in the evaluation of log q(z_K), however, require this gradient information to calculate the correct estimator. By applying stop_gradient to the variational parameters used to generate each intermediate z_j and passing only the endpoint z_K to a log density function, we would lose necessary gradient information at each intermediate step needed for the gradient estimator to be correct. At time of writing, the requisite software engineering to track and expose intermediate steps during backpropagation is not implemented in the packages listed in Table <ref>, and so we leave this to future work.

§ RELATED WORK

Our modification of the standard reparameterized gradient estimate can be interpreted as adding a control variate, and in fact <cit.> investigated the use of the score function as a control variate in the context of non-reparameterized variational inference. The variance-reduction effect we use to motivate our general gradient estimator has been noted in the special cases of Gaussian distributions with sparse precision matrices and Gaussian copula inference in <cit.> and <cit.>, respectively. In particular, <cit.> observes that by eliminating certain terms from a gradient estimator for Gaussian families parameterized by sparse precision matrices, multiple lower-variance unbiased gradient estimators may be derived. Our work is a generalization to any continuous variational family. This provides a framework for easily implementing the technique in existing software packages that provide automatic differentiation. By expressing the general technique in terms of automatic differentiation, we eliminate the need for case-by-case analysis of the gradient of the variational lower bound as in <cit.> and <cit.>. <cit.> introduce the generalized reparameterization gradient (GRG), which unifies the REINFORCE-style and reparameterization gradients. GRG employs a weaker form of reparameterization that requires only the first moment to have no dependence on the latent variables, as opposed to complete independence as in <cit.>. GRG improves on the variance of the score-function gradient estimator in BBVI without the use of Rao-Blackwellization as in <cit.>. A term in their estimator also behaves like a control variate. The present study, in contrast, develops a simple drop-in variance reduction technique through an analysis of the functional form of the reparameterized evidence lower bound gradient.
Our technique is developed outside of the framework of GRG but can strongly improve the performance of existing algorithms, as demonstrated in section <ref>. Our technique can also be applied alongside GRG. In the Python toolkit Edward <cit.>, efforts are ongoing to develop algorithms that implement stochastic variational inference in general as a black-box method. In cases where an analytic form of the entropy or KL-divergence is known, the score function term can be avoided using Edward. This is equivalent to using equations (<ref>) or (<ref>), respectively, to estimate the ELBO. As of release 1.2.4 of Edward, the total derivative gradient estimator corresponding to (<ref>) is used for reparameterized stochastic variational inference.

§ EXPERIMENTS

Experimental Setup: Because we follow the experimental setup of <cit.>, we review it briefly here. Both benchmark datasets are composed of 28 × 28 binarized images. The MNIST dataset was split into 60,000 training and 10,000 test examples. The Omniglot dataset was split into 24,345 training and 8,070 test examples. Each model used Xavier initialization <cit.> and was trained using Adam with parameters β_1 = 0.9, β_2 = 0.999, and ε = 10^{−4}, with 20 observations per minibatch <cit.>. We compared against both architectures reported in <cit.>. The first has one stochastic layer with 50 hidden units, encoded using two fully-connected layers of 200 neurons each, using a tanh nonlinearity throughout. The second architecture has two stochastic layers: the first stochastic layer encodes the observations, with two fully-connected layers of 200 hidden units each, into 100-dimensional outputs. The output is used as the parameters of a diagonal Gaussian. The second layer takes samples from this Gaussian and passes them through two fully-connected layers of 100 hidden units each into 50 dimensions. See table <ref> for NLL scores estimated as the mean of equation (<ref>) with k=5000 on the test set. We can see that the path derivative gradient estimator improves over the original gradient estimator in all but two cases.

Benchmark Datasets: We evaluate our path derivative estimator using two benchmark datasets: MNIST, a dataset of handwritten digits <cit.>, and Omniglot, a dataset of handwritten characters from many different alphabets <cit.>. To underscore both the easy implementation of this technique and the improvement it offers over existing approaches, we have empirically evaluated our new gradient estimator by a simple modification of existing code [See <https://github.com/geoffroeder/iwae>] <cit.>.

Omniglot Results: For a two-stochastic-layer VAE, using the multi-sample ELBO with the gradient corresponding to equation (<ref>) improves over the results in <cit.> by 2.36, 1.44, and 0.6 nats for k={1, 5, 50}, respectively. For a one-stochastic-layer VAE, the improvements are more modest: 0.72, 0.22, and 0.38 nats lower for k={1, 5, 50}, respectively. A VAE with a deep recognition network appears to benefit more from our path derivative estimator than one with a shallow recognition network. For comparison, a VAE using the path derivative estimator with k=5 samples performs only 0.08 nats worse than an IWAE using the total derivative gradient estimator (<ref>) and 5 samples. By contrast, using the total derivative (vanilla) estimator for both models, IWAE otherwise outperforms VAE for k=5 samples by 1.52 nats.
By increasing the accuracy of the ELBO gradient estimator, we may also increase the risk of overfitting. <cit.> report that they did not notice any significant problems with overfitting, as the training log likelihood was usually 2 nats lower than the test log likelihood. With our gradient estimator, we observe only 0.77 nats worse performance for a VAE with k=50 compared to k=5 in the two-layer experiments. IWAE using equation (<ref>) markedly outperforms IWAE using equation (<ref>) on Omniglot. For a 2-layer IWAE, we observe an improvement of 2.34, 1.2, and 0.52 nats for k={1, 5, 50}, respectively. For a 1-layer IWAE, the improvements are 0.72, 0.7, and 0.51 nats for k={1, 5, 50}, respectively. Just as in the VAE Omniglot results, a deeper recognition network for an IWAE model benefits more from the improved gradient estimator than a shallow recognition network.

MNIST Results: For all but one experiment, a VAE with our path derivative estimator outperforms a vanilla VAE on MNIST data. For k=50 with one stochastic layer, our gradient estimator underperforms a vanilla VAE by 0.13 nats. Interestingly, the training NLL for this run is 86.11, only 0.37 nats different from the test NLL. The similar magnitude of the two numbers suggests that training for longer than <cit.> would improve the performance of our gradient estimator. We hypothesize that the worse performance using the path derivative estimator is a consequence of fine-tuning towards the characteristics of the total derivative estimator. For a two-stochastic-layer VAE on MNIST, the improvements are 0.56, 0.33, and 0.45 nats for k={1, 5, 50}, respectively. For a one-stochastic-layer VAE on MNIST, the improvements are 0.36 and 0.14 nats for k={1, 5}, respectively. The improvements on IWAE are of a similar magnitude. For k=50 in a two-layer path-derivative IWAE, we perform 0.26 nats worse than with a vanilla IWAE. The training loss for the k=50 run is 82.74, only 0.42 nats different. As in the other failure case, this suggests we have room to improve these results by fine-tuning our method. For a two-stochastic-layer IWAE, the improvements are 0.66 and 0.22 nats for k=1 and 5, respectively. For a one-stochastic-layer IWAE, the improvements are 0.36, 0.34, and 0.33 nats for k={1, 5, 50}, respectively.

§ CONCLUSIONS AND FUTURE WORK

We demonstrated that even when the reparameterization trick is applicable, further reductions in gradient variance are possible. We presented our variance reduction method in a general way by expressing it as a modification of the computation graph used for automatic differentiation. The gain from using our method grows with the complexity of the approximate posterior, making it complementary to the development of non-Gaussian posterior families. Although the proposed method is specific to variational inference, we suspect that similar unbiased but high-variance terms might exist in other stochastic optimization settings, such as in reinforcement learning or gradient-based Markov chain Monte Carlo.
On Automating the Doctrine of Double Effect
===========================================

The doctrine of double effect (DDE) is a long-studied ethical principle that governs when actions that have both positive and negative effects are to be allowed. The goal in this paper is to automate DDE. We briefly present DDE, and use a first-order modal logic, the deontic cognitive event calculus, as our framework to formalize the doctrine. We present formalizations of increasingly stronger versions of the principle, including what is known as the doctrine of triple effect. We then use our framework to successfully simulate scenarios that have been used to test for the presence of the principle in human subjects. Our framework can be used in two different modes: One can use it to build DDE-compliant autonomous systems from scratch; or one can use it to verify that a given AI system is DDE-compliant, by applying a DDE layer on an existing system or model. For the latter mode, the underlying AI system can be built using any architecture (planners, deep neural networks, bayesian networks, knowledge-representation systems, or a hybrid); as long as the system exposes a few parameters in its model, such verification is possible. The role of the DDE layer here is akin to a (dynamic or static) software verifier that examines existing software modules. Finally, we end by sketching initial work on how one can apply our DDE layer to the STRIPS-style planning model, and to a modified POMDP model. This is preliminary work to illustrate the feasibility of the second mode, and we hope that our initial sketches can be useful for other researchers in incorporating DDE in their own frameworks.

§ INTRODUCTION

The doctrine of double effect (DDE) is a long-studied ethical principle that enables adjudication of ethically "thorny" situations in which actions that have both positive and negative effects appear unavoidable for autonomous agents <cit.>. Such situations are commonly called moral dilemmas. The simple version of DDE states that such actions, performed to "escape" such dilemmas, are allowed, provided that: 1) the harmful effects are not intended; 2) the harmful effects are not used to achieve the beneficial effects (harm is merely a side-effect); and 3) benefits outweigh the harm by a significant amount.

What distinguishes DDE from, say, naïve forms of consequentialism in ethics (e.g. act utilitarianism, which holds that an action is obligatory for an autonomous agent if and only if it produces the most utility among all competing actions) is that purely mental intentions in and of themselves, independent of consequences, are considered crucial (as condition 2 immediately above conveys). Of course, every major ethical theory, not just consequentialism, has its passionate proponents; cogent surveys of such theories make this plain (e.g. see <cit.>). Even in machine ethics, some AI researchers have explored not just consequentialism and the second of the two dominant ethical theories, deontological ethics (marked by an emphasis on fixed and inviolable principles said by their defenders to hold no matter what the consequences of abrogating them), but more exotic ones, for example contractualism (e.g. see <cit.>) and even divine-command ethics (e.g.
see <cit.>).

DDE in a sense rises above philosophical debates about which ethical theory is preferred. The first reason is that empirical studies have found that DDE plays a prominent role in an ordinary person's ethical decisions and judgments <cit.>. For example, in <cit.>, a large number of participants were asked to decide between action and inaction on a series of moral dilemmas, and their choices adhered to DDE, irrespective of their ethical persuasions and backgrounds, and no matter the order in which the dilemmas were presented. In addition, in legal systems, criminality requires the presence of malicious intentions <cit.>, and DDE plays a central role in many legal systems <cit.>. [On the surface, criminal negligence might seem to require no intentions. While that might be true, even in criminal negligence it seems rational to ask whether the negligence was accidental or something the "suspect" had control over. This suggests a milder form of intention, or something similar, but not exactly intention.] Assuming that autonomous systems will be expected to adjudicate moral dilemmas in human-like ways, and to justify such adjudication, it seems desirable to seek science and engineering that allows DDE, indeed even nuanced, robust versions thereof, to be quickly computed.

§ PRIOR WORK

We quickly review prior rigorous modeling of DDE. Mikhail in <cit.> presents one of the first careful treatments of the doctrine. While the presentation of the doctrine makes use of some symbolism, the level of formalization is not amenable to automation. <cit.> presents a model-theoretic formalization of a simple version of the doctrine. While this is an important first step, the calculus presented by Bentzen does not have any computational realization. However, there are two independent strands of research with implementations for DDE: that of Berreby et al. berreby2015modelling and Pereira and Saptawijaya pereira2016counterfactuals; both use logic programming. Notably, while Berreby et al. explicitly eschew counterfactuals for modeling DDE, Pereira and Saptawijaya model DDE using counterfactuals. To our knowledge, the two projects present some of the first formal models of DDE that can be implemented. It should be noted, however, that both of these formal systems are extensional, and it is well-known that when dealing with intensional states such as knowledge, belief, intention etc., extensional systems can quickly generate inconsistencies <cit.> (see the appendix for more details). The expressivity challenge is both quantificational and intensional; this challenge is acute for the logic-programming paradigm, as opposed to one based — as is ours — on formal languages beyond first-order logic and its variants, and proof theories beyond resolution and its derivatives. In particular, DDE requires elaborate structures for quantification (including, inevitably, first-order numerical quantifiers such as ∃^k: k ∈ ℝ, since quantification over utilities is essential), and many intensional operators that range over quantifiers, starting with the epistemic ones. Needless to say, modeling and simulation at the propositional level, while truly excellent in the case of <cit.>, is insufficiently expressive.

Among the many empirical experiments centered around DDE, the one in <cit.> deserves a mention. Malle et al.
devise an experiment in which they place either a human or a robot as the central actor in a hypothetical DDE scenario, and study an external viewer's moral judgement of action or inaction by the human or robot. This study shows that humans view ethical situations differently when robots participate in such situations; and the study demonstrates the need for rigorous modeling of DDE to build well-behaved autonomous systems that function in DDE-relevant scenarios.

§ THE CALCULUS

In this section, we present the deontic cognitive event calculus (DCEC). Dialects of this calculus have been used to formalize and automate highly intensional reasoning processes, such as the false-belief task <cit.> and akrasia (succumbing to temptation to violate moral principles) <cit.>. [Arkoudas and Bringsjord ArkoudasAndBringsjord2008Pricai introduced the general family of cognitive event calculi to which DCEC belongs.] DCEC is a sorted (i.e. typed) quantified modal logic (also known as sorted first-order modal logic). The calculus has a well-defined syntax and proof calculus; see <cit.>. The proof calculus is based on natural deduction <cit.>, and includes all the introduction and elimination rules for first-order logic, as well as inference schemata for the modal operators and related structures. A snippet of DCEC is shown in the Appendix.

§.§ Syntax

§.§.§ First-order Fragment
The first-order core of DCEC is the event calculus <cit.>. Though we use the event calculus, our approach is compatible with other calculi (e.g. the situation calculus) for modeling events and their effects.

§.§.§ Modal Fragment
The modal operators present in the calculus include the standard operators for knowledge 𝐊, belief 𝐁, desire 𝐃, intention 𝐈, etc. The general format of an intensional operator is 𝐊(a, t, ϕ), which says that agent a knows at time t the proposition ϕ. Here ϕ can in turn be any arbitrary formula. The calculus also includes a dyadic deontic operator 𝐎. The unary ought in standard deontic logic is known to lead to contradictions. Our dyadic version of the operator blocks the standard list of such contradictions, and beyond. [A nice version of the list is given lucidly in <cit.>.]

§.§ Semantics

§.§.§ First-order Fragment
The semantics for the first-order fragment is the standard first-order semantics. The truth-functional connectives ∧, ∨, →, ¬ and quantifiers ∀, ∃ for pure first-order formulae all have the standard first-order semantics.

§.§.§ Modal Fragment
The semantics of the modal operators differs from what is available in the so-called Belief-Desire-Intention (BDI) logics <cit.> in many important ways. For example, DCEC explicitly rejects possible-worlds semantics and model-based reasoning, instead opting for a proof-theoretic semantics and the associated type of reasoning commonly referred to as natural deduction <cit.>. Briefly, in this approach, meanings of modal operators are defined via arbitrary computations over proofs, as we will soon see.

§.§ Reasoner (Theorem Prover)
Reasoning is performed through a novel first-order modal logic theorem prover, ShadowProver, which uses a technique called shadowing to achieve speed without sacrificing consistency in the system. Extant first-order modal logic theorem provers that can work with arbitrary inference schemata are built upon first-order theorem provers. They achieve the reduction to first-order logic via two methods. In the first method, modal operators are simply represented by first-order predicates. This approach is the fastest but can quickly lead to well-known inconsistencies, as demonstrated in <cit.>. In the second method, the entire proof theory is implemented intricately in first-order logic, and the reasoning is carried out within first-order logic. Here, the first-order theorem prover simply functions as a declarative programming system. This approach, while accurate, can be excruciatingly slow. We use a different approach, in which we alternate between calling a first-order theorem prover and applying modal inference schemata. When we call the first-order prover, all modal atoms are converted into propositional atoms (i.e., shadowing), to prevent substitution into modal contexts. This approach achieves speed without sacrificing consistency. The prover also lets us add arbitrary inference schemata to the calculus by using a special-purpose language. While we use the prover in our simulations, describing the prover in more detail is out of scope for the present paper. [The prover is available in both Java and Common Lisp and can be obtained at: <https://github.com/naveensundarg/prover>. The underlying first-order prover is SNARK, available at: <http://www.ai.sri.com/ stickel/snark.html>.]

§ INFORMAL DDE

We now informally but rigorously present DDE. We assume we have at hand an ethical hierarchy of actions as in the deontological case (e.g. forbidden, neutral, obligatory); see <cit.>. We also assume that we have a utility or goodness function for states of the world or effects, as in the consequentialist case.

For an autonomous agent a, an action α in a situation σ at time t is said to be DDE-compliant iff:

𝐂_1 the action is not forbidden (where we assume an ethical hierarchy such as the one given by Bringsjord bringsjord201721st, and require that the action be neutral or above neutral in such a hierarchy);
𝐂_2 the net utility or goodness of the action is greater than some positive amount γ;
𝐂_3a the agent performing the action intends only the good effects;
𝐂_3b the agent does not intend any of the bad effects;
𝐂_4 the bad effects are not used as a means to obtain the good effects; and
𝐂_5 if there are bad effects, the agent would rather the situation be different and the agent not have to perform the action. That is, the action is unavoidable.

See Clause 6 of Principle III in <cit.> for a justification of 𝐂_5. This clause has not been discussed in any prior rigorous treatments of DDE, but we feel 𝐂_5 captures an important part of when DDE is normally used, e.g. in unavoidable ethically thorny situations one would rather not be present in. 𝐂_5 is necessary, as the condition is subjunctive/counterfactual in nature and hence may not always follow from 𝐂_1–𝐂_4, since there is no subjunctive content in those conditions. Note that while <cit.> model DDE using counterfactuals, they use counterfactuals to model 𝐂_4 rather than 𝐂_5. That said, the formalization of 𝐂_5 is quite difficult, requiring the use of computationally hard counterfactual and subjunctive reasoning. We leave this aside here, reserved for future work.

§ FORMAL DDE

The formalization is straightforward given the machinery of DCEC.
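To make the shadowing idea concrete, here is a minimal Python sketch of our own (not the actual ShadowProver code): modal subformulas, represented as nested tuples, are swapped for fresh propositional atoms before the first-order prover is invoked, so substitution of equals can never reach inside a modal context:

```python
MODALS = {'K', 'B', 'D', 'I', 'P', 'S', 'O'}   # knowledge, belief, desire, ...

def shadow(formula, table):
    """Replace each modal atom, e.g. ('K', 'a', 't', phi), with a fresh
    propositional symbol, recorded in `table` so it can be mapped back."""
    if isinstance(formula, tuple):
        if formula[0] in MODALS:
            if formula not in table:
                table[formula] = 'shadow_%d' % len(table)
            return table[formula]
        return tuple(shadow(part, table) for part in formula)
    return formula

# Example: K(a, t, Killer(owner(knife))) becomes an opaque atom, so the
# equality Moe = owner(knife) cannot be substituted inside it.
table = {}
print(shadow(('and', ('K', 'a', 't', ('Killer', ('owner', 'knife'))),
                     ('=', 'Moe', ('owner', 'knife'))), table))
```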
Let Γ be a set of background axioms, which could include whatever the given autonomous agent under consideration knows about the world; e.g., its understanding of physics, knowledge and beliefs about other agents and itself, etc. The particular situation that might be in play, e.g., "the autonomous agent is driving," is represented by a formula σ. We use ground fluents for effects. We assume that we have a utility function μ that maps from fluents and times to real-number utility values. μ needs to be defined only for ground fluents:

μ: Fluent × Time → ℝ

Good effects are fluents with positive utility, and bad effects are fluents that have negative utility. Zero-utility fluents could be neutral fluents (which do not have a use at the moment).

§.§ Defining means

The standard event calculus and DCEC don't have any mechanism to say when an effect is used as a means for another effect. While we could employ a first-order predicate and define axiomatically when an effect is used as a means for another effect, we take a modal approach that does not require any additional axioms beyond what is needed for modeling a given situation. Intuitively, we could say an effect e_1 is a mere side effect for achieving another effect e_2 if by removing the entities involved in e_1 we can still achieve e_2; otherwise we say e_1 is a means for e_2. Our approach is inspired by Pollock's Pollock1976-POLSR treatment, and while similarities can be found with the approach in <cit.>, we note that our definition requires at least first-order logic.

Given a fluent f, we denote by ⊙(f) the set of all constants and function expressions in f. For example:

⊙(hungry(jack)) = {jack}
⊙(married(jack, sister(mary))) = {jack, sister(mary), mary}

We need one more definition: the set of formulae mentioning a given set of entities, so that removing this set from Γ leaves the state of the world without those entities. Let ⊗(Γ, θ), where Γ is a set of formulae and θ is a set of ground terms, be defined as below:

⊗(Γ, θ) = {ψ ∈ Γ | some term occurring in ψ is in θ}

Note that the above definition relies on the Unique Names Assumption commonly used in most formulations of the event calculus. This assumption ensures that every object in the domain has at most one name or expression referring to it. If this assumption does not hold, we can have the following slightly more complicated definition for ⊗:

⊗(Γ, θ) = {ψ ∈ Γ | ψ contains a term s such that ∃t ∈ θ: Γ ⊢ s = t}

We introduce a new modal operator 𝐌, means, that says when an effect is a means for another effect; it applies to a pair of Holds statements:

𝐌: Formula × Formula → Formula

The meaning of the operator is defined computationally below. The definition states that, given Γ, a fluent f holding true at t_1 causes or is used as a means for another fluent g at time t_2, with t_2 > t_1, iff the truth condition for g changes when we remove formulae that contain entities involved in f. While this definition is far from perfect, it suffices as a first cut and lets us simulate experimental scenarios that have been used to test DDE's presence in humans.
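The two auxiliary definitions are easy to prototype; here is a minimal Python sketch of ours (nested tuples stand in for terms and formulae, and the syntactic, UNA-reliant version of ⊗ is used):

```python
def entities(fluent):
    """⊙: all constants and function expressions in a ground fluent,
    e.g. ('married', 'jack', ('sister', 'mary')) -> {jack, sister(mary), mary}."""
    found = set()
    def walk(term):
        found.add(term)                      # the constant or function expression
        if isinstance(term, tuple):          # recurse into a function expression
            for arg in term[1:]:
                walk(arg)
    for arg in fluent[1:]:
        walk(arg)
    return found

def mentioning(gamma, theta):
    """⊗(Γ, θ): the formulae in Γ that mention some ground term in θ
    (purely syntactic, hence the Unique Names Assumption)."""
    def mentions(formula):
        return formula in theta or (
            isinstance(formula, tuple) and any(mentions(p) for p in formula))
    return {psi for psi in gamma if mentions(psi)}
```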
(Three other similar definitions hold when we look at combinations of fluents holding and not holding.) The equation below follows (note that ⊢ is non-monotonic, as it includes the event calculus):

Γ ⊢ 𝐌(Holds(f, t_1), Holds(g, t_2)) ⇔
 { Γ ⊢ t_2 > t_1 ∧
   [Γ ⊢ Holds(f, t_1) ∧ Γ ⊢ Holds(g, t_2)] ⇒ [Γ − ⊗(Γ, ⊙(f)) ⊢ ¬Holds(g, t_2)] }

For example, let e_1 be "throwing a stone s at a window w" and e_2 be "the window w getting broken." We can see that e_2 is not just a mere side effect of e_1, and the definition works, since, if the stone is removed, e_2 wouldn't happen. This definition is not perfect. For instance, consider when there are common objects in both the events: the intuitiveness breaks down (but the definition still works). We might for example let e_1 be "hitting a window w with a bat b." If the window and bat are not present, e_2 would not happen.

§.§ The Formalization

Note that the DDE(Γ, σ, a, α, t, H) predicate defined below, though defined using DCEC, lies outside of the formal language of DCEC. While DDE is not fully formalized in DCEC, the individual clauses 𝐅_1–𝐅_4 are. This is how we can verify the conditions in the simulations described later. It is trivial to define a new symbol and formalize the predicate in DCEC: DDE ⇔ 𝐅_1 ∧ 𝐅_2 ∧ 𝐅_3 ∧ 𝐅_4. What is not trivial, we concede, is how this works with other modalities. For example, can we efficiently derive 𝐊(a, t_1, 𝐊(b, t_2, DDE(Γ, σ, a, α, t, H))) given some other formulae Γ? This could be difficult because the predicate's definition below involves provability, and one has to be careful when including a provability predicate. That said, for future work, we plan on incorporating this within an extended dialect of DCEC. One immediate drawback is that while we can have a system-level view of whether an action is DDE-sanctioned, agents themselves might not know that. For example, we would like to be able to write down "a knows that b knows that c's action is DDE-sanctioned."

Given the machinery defined above, we now proceed to the formalization. Assume, for any action type α carried out by an agent a at time t, that it initiates the set of fluents α_I^{a,t}, and terminates the set of fluents α_T^{a,t}. Then, for any action α taken by an autonomous agent a at time t with background information Γ in situation σ, the action adheres to the doctrine of double effect up to a given time horizon H, that is DDE(Γ, σ, a, α, t, H), iff the conditions below hold:

Formal Conditions for DDE:

𝐅_1 α carried out at t is not forbidden. That is:
 Γ ⊬ 𝐎(a, t, σ, ¬happens(action(a, α), t))

𝐅_2 The net utility is greater than a given positive real γ:
 Γ ⊢ ( ∑_{y=t+1}^{H} ( ∑_{f ∈ α_I^{a,t}} μ(f, y) − ∑_{f ∈ α_T^{a,t}} μ(f, y) ) ) > γ

𝐅_3a The agent a intends at least one good effect. (𝐅_2 should still hold after removing all other good effects.) There is at least one fluent f_g in α_I^{a,t} with μ(f_g, y) > 0, or f_b in α_T^{a,t} with μ(f_b, y) < 0, and some y with t < y ≤ H, such that the following holds:
 Γ ⊢ ( ∃ f_g ∈ α_I^{a,t}: 𝐈(a, t, Holds(f_g, y)) ) ∨ ( ∃ f_b ∈ α_T^{a,t}: 𝐈(a, t, ¬Holds(f_b, y)) )

𝐅_3b The agent a does not intend any bad effect.
For all fluents f_b in α_I^{a,t} with μ(f_b, y) < 0, or f_g in α_T^{a,t} with μ(f_g, y) > 0, and for all y such that t < y ≤ H, the following holds:

 Γ ⊬ 𝐈(a, t, Holds(f_b, y)) and Γ ⊬ 𝐈(a, t, ¬Holds(f_g, y))

𝐅_4 The harmful effects don't cause the good effects. Four permutations, paralleling the definition of 𝐌 above, hold here. One such permutation is shown below. For any bad fluent f_b holding at t_1, and any good fluent f_g holding at some t_2, such that t < t_1, t_2 ≤ H, the following holds:

 Γ ⊢ ¬𝐌(Holds(f_b, t_1), Holds(f_g, t_2))

𝐅_5 This clause requires subjunctive reasoning. The current formalization ignores this stronger clause. There has been some work in computational subjunctive reasoning that we hope to use in the future; see <cit.>.

§.§.§ Doctrine of Triple Effect

The doctrine of triple effect (DTE) was proposed in <cit.> to account for scenarios where actions that are viewed as permissible by most philosophers and deemed as such by empirical studies (e.g. the switch action in the third scenario in <cit.>) are not sanctioned by DDE, as they involve harm being used as a means to achieve an action. DTE allows such actions as long as the harm is not explicitly intended by the agent. Note that our version of DDE subsumes DTE through condition 𝐂_4.

§ DDE SCENARIOS

The trolley problems are quite popular in both philosophical and empirical studies in ethics. Hauser et al. hauser2007dissociation found empirical support that DDE is used by humans, courtesy of experiments based on a set of trolley problems. They use a set of 19 trolley problems in their experimentation, and describe in detail four of these. We consider the first two of these problems in our study here. The problem scenarios are briefly summarized below; common to both is this setup: There are two tracks, track_1 and track_2. There is a trolley loose on track_1 heading toward two people P_1 and P_2 on track_1; neither person can move in time. If the trolley hits them, they die. The goal is to save this pair. [For computational purposes, the exact number of persons is not important as long as it is greater than one.]

Scenario 1: There is a switch that can route the trolley to track_2. There is a person P_3 on track_2. Switching the trolley to track_2 will kill P_3. Is it okay to switch the trolley to track_2?

Scenario 2: There is no switch now, but we can push P_3 onto the track in front of the trolley. This action will damage the trolley and stop it; it will also kill P_3. Is it okay to push P_3 onto the track?

DDE-based analysis tells us it is okay to switch the trolley in Scenario 1, as we are killing the person merely as a side effect of saving P_1 and P_2. In Scenario 2, similar analysis tells us it is not okay to push P_3, because we are using that person as a means toward our goal.
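Continuing the Python sketch from the means section (and reusing `entities` and `mentioning` from there), the 𝐌 check that separates the two scenarios can be prototyped as follows; `holds` is a hypothetical stand-in for event-calculus entailment Γ ⊢ Holds(·, ·):

```python
def is_means(gamma, f, t1, g, t2, holds):
    """Holds(f, t1) is a means to Holds(g, t2) iff t2 > t1, both hold under
    Gamma, and g no longer holds at t2 once every axiom mentioning an entity
    of f is deleted (Gamma - ⊗(Gamma, ⊙(f)))."""
    if not (t2 > t1 and holds(gamma, f, t1) and holds(gamma, g, t2)):
        return False
    reduced = gamma - mentioning(gamma, entities(f))
    return not holds(reduced, g, t2)

# Scenario 2: deleting the axioms that mention P_3 removes the only thing
# that stops the trolley, so the harm to P_3 is a means to saving P_1 and
# P_2; in Scenario 1 the switch still diverts the trolley without P_3.
```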
§ SIMULATIONS

At the core of our simulation is a formalization of the basic trolley scenario based on the event calculus. We use a discrete version of the event calculus, in which time is discrete, but other quantities and measures, such as the utility function, can be continuous. We have the following additional sorts: Track and Trolley. We also declare that the Person and the Trolley sorts are subsorts of a Moveable sort, the instances of which are objects that can be placed on tracks and moved. We use the following additional core symbols:

 position: Moveable × Track × Number → Fluent
 dead: Person → Fluent
 on: Moveable × Track → Fluent
 switch: Agent × Trolley × Track → ActionType
 push: Agent × Person × Track → ActionType

The utility function μ is defined as follows:

 μ(f, t) = −1 if f ≡ dead(P), for any person P; 0 otherwise

We set the threshold γ at 0.5. The simulation starts at time t=0 with the only trolley, denoted by trolley, on track_1. We have an event-calculus trajectory axiom, shown below, as part of Γ:

 ∀ t: Trolley, track: Track, s: Time [Trajectory(on(t, track), s, position(t, track, Δ), Δ)]

The above axiom gives us the trolley's position at different points of time. Γ also includes axioms that account for non-effects. For example, in the absence of any actions, we can derive:

 Γ ⊢ Holds(position(trolley, track_1, 23), 23)

We also have in the background Γ a formula stating that in the given trolley scenario the agent ought to save both P_1 and P_2. Ideally, while we would like the agent to arrive at this obligation from a more primitive set of premises, this setup is closer to experiments with human subjects in which they are asked explicitly to save the persons. Note the agent performing the action is simply denoted by I, and let the time of the test be denoted by now:

 𝐎( I, now, σ_trolley, [¬∃t: Holds(dead(P_1), t) ∧ ¬∃t: Holds(dead(P_2), t)] )

Given that the agent knows that it is now in situation σ_trolley, and the agent believes that it has the above obligation, we can derive from DCEC's inference schemata what the agent intends:

 { 𝐊(I, now, σ_trolley),
   𝐁(I, now, 𝐎(I, now, σ_trolley, [¬∃t: Holds(dead(P_1), t) ∧ ¬∃t: Holds(dead(P_2), t)])),
   𝐎(I, now, σ_trolley, [¬∃t: Holds(dead(P_1), t) ∧ ¬∃t: Holds(dead(P_2), t)]) }
 ⊢ 𝐈(I, now, [¬∃t: Holds(dead(P_1), t) ∧ ¬∃t: Holds(dead(P_2), t)])

In both the simulations, P_1 is at position 4 and P_2 is at position 5 on track_1. In Scenario 1, P_3 is at position 3 on track_2, and the trolley can be switched from position 3 on track_1 onto position 0 on track_2. In Scenario 2, we push P_3 onto position 3 on track_1. The total number of formulae and run times for simulating the two scenarios, with and without the actions, are shown below. Note these are merely event-calculus simulation times. These are then used in computing DDE(Γ, σ, a, α, t, H). The event-calculus simulation helps us compute 𝐅_2.

 Scenario     |Γ|   Simulation time, no action (s)   Simulation time, action performed (s)
 Scenario 1   39    0.591                            1.116
 Scenario 2   38    0.602                            0.801

ShadowProver was then used to verify that 𝐅_1, 𝐅_3a, and 𝐅_3b hold. Both the scenarios combined take 0.57 seconds for 𝐅_1, 𝐅_3a, and 𝐅_3b. The scenarios differ only in 𝐅_4. The pushing action fails to be DDE-compliant due to 𝐅_4.
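For concreteness, the 𝐅_2 check amounts to a small computation once the event-calculus simulation has produced the initiated and terminated fluent sets. A sketch of ours, with the illustrative (not the authors') encoding that saving P_1 and P_2 terminates their dead fluents:

```python
def net_utility(initiated, terminated, mu, t, H):
    """F_2: sum of mu over initiated minus terminated fluents, y = t+1 .. H."""
    return sum(sum(mu(f, y) for f in initiated) - sum(mu(f, y) for f in terminated)
               for y in range(t + 1, H + 1))

mu = lambda f, y: -1.0 if f[0] == 'dead' else 0.0     # the trolley utility
gamma = 0.5
# Scenario 1 (switch): initiates dead(P3), terminates dead(P1) and dead(P2):
u = net_utility({('dead', 'P3')}, {('dead', 'P1'), ('dead', 'P2')}, mu, 0, 10)
print(u, u > gamma)   # 10.0 True -- F_2 is satisfied
```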
For verifying that 𝐅_4 holds in Scenario 1 and doesn't hold in Scenario 2, it takes 0.49 seconds and 0.055 seconds, respectively. [All the axioms for the two simulations, ShadowProver, and the combined DDE implementation can be obtained here: <https://goo.gl/9KU2L9>.]

§ ON OPERATIONALIZING THE PRINCIPLE

Given the above formalization, it's quite straightforward to build logic-based systems that are DDE-compliant. [For examples of logic-based systems in pure first-order logic, see <cit.>.] But how do we apply the above formalization to existing models and systems that are not explicitly logic-based? We lay down a set of conditions such models must satisfy for us to be able to verify that they are DDE-compliant. We then sketch how we could use DDE in two such modified systems: a STRIPS-like planner and a POMDP-type model.

The problem now before us is: Given a system and a utility function, can we say that the system is DDE-compliant? No; we need more information from the system. For example, we can have two systems in the same situation, with the same utility functions and the same set of available actions. [Where does a utility function come from? The obvious way to get a utility function seems to be to learn such a function. There are good arguments that such learning can be very hard <cit.>. For now, we are not concerned with how such a utility function is given to us. For exposition and economy, assume that it already exists.] One system can be DDE-compliant while the other is not. For example, assume that we have two autonomous driving systems d_1 and d_2. Assume that d_2 has learned to like killing dogs and intends to do so if possible during its normal course of operation. While driving, both come across a situation where the system has to hit either a human or a dog. In this scenario, d_1's action to hit the dog would be DDE-compliant while d_2's action will not be. Therefore, the formalization requires that we have access to an agent's intentions at all times.

One common objection to requiring that intentions be separate from utilities states that utilities can be used to derive intentions. This is mistaken: it is not always possible to derive intentions from a utility function. For example, there might be a state that has high utility but the agent might not intend to realize that state, as it could be out of reach for that agent (low perceived probability of success). For instance, winning a million dollars (w) has high utility, but most rational agents might not intend w, as they know this event is (alas) out of their reach. This holds for similar high-utility states. At a minimum, we believe utility and perceived probability of success go into an agent's intentions. This seems to align with the human case when we are looking at motivations, i.e. expectancy-value theory. How motivations could transform into intentions is another open research question.

§.§ Requirements

Practically speaking, there is a spectrum of systems that our techniques will be dealing with. At one end, we will encounter systems that are complete black boxes, taking in percepts from the environment and spitting out actions. Since DDE requires us to look at the intentions of systems, such black-box systems will be impossible to verify. We can of course ask the system to output its intention through language as one of its possible actions, but this means that we are relying on the system's honest reporting of its internal states. At the other end of the spectrum, we have complete white-box systems. We can be fully confident that we can get what the system intends, believes, knows, etc.
at any point in time. Verifying such systems is possible, in theory at least. While we don't know what kind of shape autonomous systems will take and where they will fall in the spectrum, we can explicitly list information we need from such systems before we can start the verification process. One such specification follows.

Gray Box Requirement: Given any autonomous system a, at any point of time t, we should at least be able to assert the following, if true, in order to verify that it is DDE-compliant:
* The system's intentions: (¬)𝐈(a, t, ϕ)
* Prohibitions: ¬𝐎(a, t, σ, ¬ϕ)

How would we go about applying the formalization to other formal systems? We very briefly sketch two examples.

§.§.§ STRIPS-like Planner

We first look at a STRIPS-style planning system. Briefly, a STRIPS-style planner has a set of actions {a_i} and a set of states {s_i}. The states are nothing but sets of formulae or atoms. The individual formulae would be our effects. Each action a has a set of preconditions pre(a), a set of formulae that should hold in a given state to execute that action in that state. After executing an action a in a state s, the new state is given by s ∪ additions(a) − deletions(a). The planner is given an explicit goal ϕ. This means that we know (¬)𝐈(a, t, ϕ) trivially. If we have an ethical hierarchy for the available set of actions, we then satisfy the gray-box requirement. What is then needed is a definition for 𝐌, an effect used as a means for another effect. The formalism gives us one possible way to define it. A plan ρ is nothing but a sequence of actions. Given a plan ρ, we say an effect e_1 is used as a means for another effect e_2 if e_1 ∈ pre(a_1), a_1 is an action in the plan, e_2 ∈ additions(a_2), and a_1 comes before a_2; a sketch of this check is given after the next subsection.

§.§.§ POMDP-derived System

Partially observable Markov decision process (POMDP) models have been quite successful in a large number of domains. It is highly likely that some of the first autonomous systems might be based on POMDPs. We note that in such models, the only goal is to maximize a reward function. Another issue is that states are atomic. In order to discern between good and bad effects, we would need states to be decomposed into smaller components. One possible approach could use factored Markov decision processes, which are MDPs in which states are represented as a mapping m from a set of state variables Θ = {s_1, s_2, …, s_n} to a set of values 𝒱. Here the utility and reward function could be defined on the assignments; i.e., reward(s) = ∑_i μ(s_i ↦ ν_i), where μ assigns a utility value to a particular assignment of a state variable. Additionally, the formalism could specify one or more goal states that the model seeks to attain while maximizing the reward along the way, giving us (¬)𝐈(a, t, ϕ).
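Here is the promised sketch of ours of the STRIPS-style means relation; `pre` and `additions` are hypothetical accessors mapping an action to its precondition and add lists:

```python
def strips_means(plan, pre, additions):
    """Collect pairs (e1, e2) where e1 is used as a means for e2: e1 is a
    precondition of some action a1, and e2 is added by a later action a2."""
    pairs = set()
    for i, a1 in enumerate(plan):
        for a2 in plan[i + 1:]:
            pairs |= {(e1, e2) for e1 in pre(a1) for e2 in additions(a2)}
    return pairs

# F_4 then fails for a plan exactly when some bad effect appears as the first
# component of a pair whose second component is a good effect.
```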
§ HIERARCHIES OF DOCTRINES

Our formalization, summarized in the equation below, gives rise to multiple hierarchies of the doctrine. We discuss some of the hierarchies below.

 DDE(Γ, σ, a, α, t, H) ⇔ 𝐅_1 ∧ 𝐅_2 ∧ 𝐅_3 ∧ 𝐅_4

Horizon: One obvious knob in the above equation is the horizon H. Increasing H will give us stronger versions of the doctrine. Since our formalization is in first-order modal logic, the horizon need not be finite: we could set the horizon to infinity, H = ω, and still obtain a tractable model, as long as we carefully develop our formalization. [It's a well-known fundamental result that first-order logic can handle infinite models with a finite number of axioms; see e.g. Ch. 12 in <cit.>.]

Agent Generality: Instead of just checking whether an action at a given time is DDE-compliant, we could ask whether an autonomous agent a in a given situation σ will be DDE-compliant at all times. This gives us the following condition:

 ∀ α: ActionType, t: Time. DDE(Γ, σ, a, α, t, H)

Situation Generality: In the hierarchy above, the quantification was over objects. We could ask whether an autonomous agent would be DDE-compliant in all situations. That would correspond to a quantification over formulae (see the centered formula immediately below), something not supported in the version of DCEC used herein.

 ∀ σ: Formula, α: ActionType, t: Time. DDE(Γ, σ, a, α, t, H)

Counterfactual Reasoning: The presence or absence of counterfactual reasoning in 𝐅_5 would correspond to a very strong version of the doctrine, but one that would also be very hard to automate in the general case. We note that there are hierarchies of counterfactual reasoning (see <cit.>) that could correspond to hierarchies of versions of DDE.

§ CONCLUSION

We now quickly summarize the chief contributions of the foregoing, and end by presenting future lines of work. Our primary contribution is the presentation of a novel computational logic, or cognitive calculus, in which important versions of DDE are formalized. As a part of this calculus, we formalized an effect being used as a means for another effect via the modal operator 𝐌. We also supplied an informal but rigorous version, 𝐂_1–𝐂_4, of the doctrine itself, from which we built our formalization 𝐅_1–𝐅_4. Included in this formalization is the clause 𝐂_5/𝐅_5, which requires subjunctive and counterfactual reasoning, an aspect that hitherto has simply not been considered in any systematic treatment of DDE. Our formalization subsumes the doctrine of triple effect, DTE; we have achieved the first computational simulations of the doctrine. A byproduct of these simulations is an event-calculus formalization of a demanding class of trolley problems (widely used in empirical and philosophical studies of ethics). We noted that our formalization gives rise to hierarchies of doctrines with varying strengths. Our readers can choose a particular-strength doctrine that fits their needs.

Future work includes simulating more intricate "ethically thorny" scenarios. Despite our progress, we note that our formalization is devoid of any mechanisms for handling uncertainty, and we are in the process of extending our work to include reasoning based on probabilistic versions of DCEC. [There exist probabilistic versions of the event calculus <cit.>. We will leverage similar work.] We also note that we have not said much about how our formalization could interact with an autonomous learning agent. We observe that even the possibility that such an intricate principle as DDE/DTE is learnable using existing learning frameworks remains open to question <cit.>. In the short term, a guaranteed-to-be-fruitful but less ambitious area of development will be the deployment of our mechanization of DDE in existing systems, and adapting existing formal models, as briefly discussed above, to exploit this mechanization. Finally, we note that since we are using first-order (multi) modal logic, we will eventually run into efficiency issues, as even vanilla first-order logic's decision problem, Γ ⊢ γ, is Turing-undecidable. There are a number of techniques to mitigate this issue. One approach is to exploit a library of commonly used proof patterns codified in a denotational proof language; see <cit.>. We are cautiously optimistic, as many formal enterprises outside of AI (e.g.
software verification <cit.> and formal physics <cit.>) routinely face such challenges and surmount them.

§ ACKNOWLEDGEMENTS

We are grateful to the Office of Naval Research for funding that enabled the research presented in this paper. We also thank Dr. Daniel Thero for reading a draft of the paper and providing valuable feedback. We are also grateful for the insightful reviews provided by the five anonymous referees.

§ DEONTIC COGNITIVE EVENT CALCULUS

We provide here a short primer on the deontic cognitive event calculus (DCEC). A calculus is a set of axioms in a formal logic. For example, the event calculus is a set of axioms couched in first-order logic. DCEC is a set of axioms in sorted first-order modal logic (also known as sorted quantified modal logic) that subsumes the event calculus. While first-order logic is an extensional system, modal logics are intensional systems. Note that there is a profound difference between intension vs. intention. One can have an intention to bring something about; this is traditionally captured by particular intensional operators. In other words, put concretely, the intention operator 𝐈 is an intensional operator, but so are 𝐃 for desire, 𝐁 for believes, and 𝐏 for perceives, etc. DCEC is intensional in the sense that it includes intensional operators. Unfortunately, the situation is further confused by the fact that traditionally in philosophy of mind, intentionality means the so-called "aboutness" of some mental states, so that my belief that Melbourne is beautiful is in this sense intentional, while my mental state has nothing to do with intending something. Most logicians working in formal intensional systems believe that at least intensional logic is required to formalize intentional states <cit.>. One simple reason is that using plain first-order logic leads to unsound inferences, as shown below. In the inference below, we have an agent a that knows that the killer in a particular situation is the person that owns the knife. Agent a does not know that Moe is the killer, but it's true that Moe is the owner of the knife. If the knowledge operator 𝐊 is a simple first-order predicate, we will get the proof shown below, which produces a contradiction from sound premises. See <cit.> for a sequence of stronger representation schemes in first-order logic for knowledge and belief that still result in inconsistencies.

Modeling Knowledge (or any Intension) in First-order Logic:
 1. 𝐊(a, Killer(owner(knife)))   [premise]
 2. ¬𝐊(a, Killer(Moe))           [premise]
 3. Moe = owner(knife)           [premise]
 4. 𝐊(a, Killer(Moe))            [from 1 and 3, by substitution of equals]
 5. ⊥                            [from 2 and 4]

In addition, consider this sentence: "Smith knows that Jones believes that there is exactly one fat man, and no more than three slim men, all four of whom desire that it not be the case that they die at the hands of Jones." We need, for DDE/DTE technology, this sentence to be expressible directly in a single formula of some formal language of some formal logic that has an implementable inference framework (not model-based) that isn't based on resolution. Because of this, while as logicists we applaud logic programming and various other logic-inclined groups, we need our own programmable frameworks, obviously. Needless to say, our great challenge is speed of processing in the deployment of our technology. The required speed requires expensive engineering. We don't deny that.

§.§ Syntax of Deontic Cognitive Event Calculus

DCEC is a sorted calculus. A sorted system can be thought of as being analogous to a typed single-inheritance programming language.
We show below some of the important sorts used in DCEC^∗. Among these, the Agent, Action and ActionType sorts are not native to the event calculus.

Sort — Description
Agent — Human and non-human actors.
Moment — Stands for time in the domain; e.g. simple, such as t_i, or complex, such as birthday(son(jack)).
Event — Used for events in the domain.
ActionType — Action types are abstract actions. They are instantiated at particular times by actors. Example: eating.
Action — A subtype of Event for events that occur as actions by agents.
Fluent — Used for representing states of the world in the event calculus.

The figures below show the syntax and inference schemata of DCEC^∗. The syntax is that of sorted quantified modal logic. Commonly used function and relation symbols of the event calculus are included. Particularly, note the following modal operators: 𝐏 for perceiving a state, 𝐊 for knowledge, 𝐁 for belief, 𝐂 for common knowledge, 𝐒 for agent-to-agent communication and public announcements, 𝐃 for desire, 𝐈 for intention, and finally and crucially, a dyadic deontic operator 𝐎 that states when an action is obligatory or forbidden for agents. It should be noted that DCEC^∗ is one specimen in a family of easily extensible cognitive calculi. Since the semantics of DCEC^∗ is proof-theoretic, as long as a new construct has appropriate inference schemata, the extension is sanctioned.

Syntax:
S ::= Agent | ActionType | Action ⊑ Event | Moment | Boolean | Fluent
f ::= action : Agent × ActionType → Action
      | initially : Fluent → Boolean
      | holds : Fluent × Moment → Boolean
      | happens : Event × Moment → Boolean
      | clipped : Moment × Fluent × Moment → Boolean
      | initiates : Event × Fluent × Moment → Boolean
      | terminates : Event × Fluent × Moment → Boolean
      | prior : Moment × Moment → Boolean
t ::= x : S | c : S | f(t_1,…,t_n)
φ ::= q : Boolean | ¬φ | φ∧ψ | φ∨ψ | 𝐏(a,t,φ) | 𝐊(a,t,φ) | 𝐂(t,φ) | 𝐒(a,b,t,φ) | 𝐒(a,t,φ) | 𝐁(a,t,φ) | 𝐃(a,t,holds(f,t′)) | 𝐈(a,t,φ) | 𝐎(a,t,φ,(¬)happens(action(a^∗,α),t′))

The figure below shows the inference schemata for DCEC^∗. R_𝐊 and R_𝐁 are inference schemata that let us model idealized agents whose knowledge and belief are closed under the DCEC^∗ proof theory. While normal humans are not deductively closed, this lets us model more closely how deliberate agents such as organizations and more strategic actors reason. (Some dialects of cognitive calculi restrict the number of iterations on intensional operators.) R_1 and R_2 state respectively that it is common knowledge that perception leads to knowledge, and that it is common knowledge that knowledge leads to belief. R_3 lets us expand out common knowledge as unbounded iterated knowledge. R_4 states that knowledge of a proposition implies that the proposition holds. R_5 to R_10 provide for a more restricted form of reasoning for propositions that are common knowledge, unlike propositions that are known or believed. R_12 states that if an agent s communicates a proposition φ to h, then h believes that s believes φ. R_14 dictates how obligations get translated into intentions.

Inference Schemata:
[R_𝐊] 𝐊(a,t_1,Γ), Γ⊢φ, t_1 ≤ t_2 ⟹ 𝐊(a,t_2,φ)
[R_𝐁] 𝐁(a,t_1,Γ), Γ⊢φ, t_1 ≤ t_2 ⟹ 𝐁(a,t_2,φ)
[R_1] 𝐂(t, 𝐏(a,t,φ) → 𝐊(a,t,φ))
[R_2] 𝐂(t, 𝐊(a,t,φ) → 𝐁(a,t,φ))
[R_3] 𝐂(t,φ), t≤t_1, …, t≤t_n ⟹ 𝐊(a_1,t_1,…𝐊(a_n,t_n,φ)…)
[R_4] 𝐊(a,t,φ) ⟹ φ
[R_5] 𝐂(t, 𝐊(a,t_1,φ_1→φ_2) → (𝐊(a,t_2,φ_1) → 𝐊(a,t_3,φ_2)))
[R_6] 𝐂(t, 𝐁(a,t_1,φ_1→φ_2) → (𝐁(a,t_2,φ_1) → 𝐁(a,t_3,φ_2)))
[R_7] 𝐂(t, 𝐂(t_1,φ_1→φ_2) → (𝐂(t_2,φ_1) → 𝐂(t_3,φ_2)))
[R_8] 𝐂(t, ∀x.φ → φ[x↦t])
[R_9] 𝐂(t, (φ_1 ↔ φ_2) → (¬φ_2 → ¬φ_1))
[R_10] 𝐂(t, [φ_1∧…∧φ_n→φ] → [φ_1→…→φ_n→ψ])
[R_12] 𝐒(s,h,t,φ) ⟹ 𝐁(h,t,𝐁(s,t,φ))
[R_13] 𝐈(a,t,happens(action(a^∗,α),t′)) ⟹ 𝐏(a,t,happens(action(a^∗,α),t))
[R_14] 𝐁(a,t,φ), 𝐁(a,t,𝐎(a,t,φ,χ)), 𝐎(a,t,φ,χ) ⟹ 𝐊(a,t,𝐈(a,t,χ))
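To make the sorted signature concrete for implementers, the following is a minimal Python sketch (ours; it is not part of the calculus itself, and all class and field names are illustrative assumptions) of how the single-inheritance sort hierarchy and a small fragment of the formula grammar above could be mirrored as typed data structures:

from dataclasses import dataclass

# Sorts: single inheritance mirrors Action ⊑ Event in the grammar for S.
class Sort: pass
class Agent(Sort): pass
class Moment(Sort): pass
class Fluent(Sort): pass
class Event(Sort): pass
class Action(Event): pass          # Action is a subtype of Event
class ActionType(Sort): pass

class Formula: pass

@dataclass
class Holds(Formula):              # holds : Fluent x Moment -> Boolean
    fluent: Fluent
    time: Moment

@dataclass
class K(Formula):                  # K(a, t, phi): agent a knows phi at t
    agent: Agent
    time: Moment
    phi: Formula

@dataclass
class O(Formula):                  # O(a, t, phi, chi): dyadic deontic operator
    agent: Agent
    time: Moment
    context: Formula
    obligated: Formula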
A perturbation theory approach to the stability of the Pais-Uhlenbeck oscillator Misael Avendaño-Camacho^†^*, José A. Vallejo^ and Yury Vorobiev^† ^†^*Departamento de Matemáticas, Universidad de Sonora (México) ^Facultad de Ciencias, Universidad Autónoma de San Luis Potosí (México) ^*CONACYT Research Fellow Emails: December 30, 2023 ======================================================================================================================================================================================================================================================

We present a detailed analysis of the orbital stability of the Pais-Uhlenbeck oscillator, using Lie-Deprit series and Hamiltonian normal form theories. In particular, we explicitly describe the reduced phase space for this Hamiltonian system and give a proof of the existence of stable orbits for a certain class of self-interaction, found numerically in previous works, by using singular symplectic reduction.

§ INTRODUCTION One of the main problems in modern field theories is their renormalizability, that is, the possibility of canceling infinities when developing solutions to their equations of motion as a perturbation series. In the case of quantum theories, another (related) problem is that of unitarity. It has been known for a long time that perturbatively renormalizable theories based on higher-order derivatives can be constructed, but most of these models have been discarded in physics because of unitarity problems in their quantization. Pais and Uhlenbeck <cit.> developed one such model in the 1950s, based on a generalization of the harmonic oscillator, but taking into account two frequencies w_1, w_2. Their model arose in the context of gravitation and can be described by the fourth-order differential equation d^4u/dt^4 + (w_1^2+w_2^2) d^2u/dt^2 + w_1^2w_2^2 u = 0, for a function of time u=u(t). This is the Euler-Lagrange equation for the Lagrangian L = ( d^2u/dt^2 + w_1^2 u )( d^2u/dt^2 + w_2^2 u ). Ostrogradsky's second-order formalism <cit.> generalizes the Legendre transform (which passes from the Lagrangian to the Hamiltonian for regular Lagrangians depending only on first derivatives), and allows us to find the corresponding Hamiltonian. If we introduce new variables α = w_1^2 − w_2^2, β = w_1^2w_2^2, and also q_1 = (1/√(2αβ))( d^2u/dt^2 + w_1^2 u ), q_2 = (1/√(2αβ))( d^2u/dt^2 + w_2^2 u ), the result can be written as H_0 = (1/2)( p_1^2 + w_1^2q_1^2 ) − (1/2)( p_2^2 + w_2^2q_2^2 ), where (denoting the time derivative by a dot over the corresponding letter) p_1 = −√(2αβ) u̇ = p_2. Thus, the dynamical system described by the fourth-order equation (<ref>), which is a toy model for a renormalizable theory, can be described by the Hamiltonian (<ref>), which is the difference of two harmonic oscillators. As long as the two oscillators are uncoupled, there are no physical problems at the classical level; there exist negative energies, but these can be interpreted in terms of the different labeling of the components, and have the same meaning as the positive ones. But for us it is more important to consider now the situation in which there is an interaction. If a nonlinear interaction term is added to (<ref>), so that the Hamiltonian becomes, say, H = (1/2)(p_1^2+w_1^2q_1^2) − (1/2)(p_2^2+w_2^2q_2^2) + (Λ/4)(q_1+q_2)^4, the interaction could lead to an exchange of energy from the `positive' oscillator to the `negative' one, and this exchange could take place without any lower bound. An infinite amount of energy could be dissipated by the `negative' oscillator, collapsing the system.
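As a numerical illustration of this energy-exchange mechanism (a minimal sketch only; the frequencies, coupling Λ and initial condition below are illustrative choices of ours, not values taken from any cited study), one can integrate Hamilton's equations for H and monitor the energies of the two sub-oscillators:

import numpy as np
from scipy.integrate import solve_ivp

w1, w2, Lam = 1.0, 2.0, 0.1          # illustrative parameters

def rhs(t, s):
    q1, p1, q2, p2 = s
    u = Lam * (q1 + q2) ** 3         # coupling force from V = (Lam/4)(q1+q2)^4
    # Hamilton's equations for H = H0 + V; note the sign flip in the
    # 'negative' oscillator (q2, p2), inherited from H0.
    return [p1, -w1**2 * q1 - u, -p2, w2**2 * q2 - u]

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.0, 0.5, 0.0],
                rtol=1e-10, atol=1e-10)
q1, p1, q2, p2 = sol.y
E1 = 0.5 * (p1**2 + w1**2 * q1**2)   # energy of the 'positive' oscillator
E2 = -0.5 * (p2**2 + w2**2 * q2**2)  # energy of the 'negative' oscillator
print(np.ptp(E1), np.ptp(E2))        # size of the energy exchange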
This classical instability is reflected in the existence of states of negative norm after canonical quantization of the system, so the time evolution presents problems regarding unitarity (although this phenomenon can be conveniently interpreted, as in <cit.>, in order to give it a physical meaning). The situation just described has been the main reason to discard higher-order models as useful physical ones. However, their property of being perturbatively renormalizable is strong enough to have prevented the complete vanishing of interest in them, and some studies have been conducted, numerically simulating their behavior, with the hope of finding a mechanism that would protect unitarity or prevent the collapse of the system. Surprisingly, what has been found is the existence of a class of interactions (of which (<ref>) is an example) admitting `islands of stability' (see <cit.>, <cit.> and <cit.>): for some values of the parameters characterizing the system, the interaction generates stable periodic motions, thus allowing these models to be considered as perfectly viable ones. Although some heuristic arguments have been given in these papers to justify the occurrence of these stable motions, this phenomenon remains basically a finding based on numerical simulations, and its physical origin remains unexplained. Precisely, our purpose in this note is to show how the appearance of closed, stable orbits for a system described by the Hamiltonian (<ref>) can be explained by using geometric perturbation theory and singular symplectic reduction.

The whole idea is very simple, and can be described as follows. The complete Hamiltonian (<ref>) is regarded as a perturbation of the Pais-Uhlenbeck oscillator (<ref>). The latter obviously admits closed orbits, invariant under the action of the Hamiltonian flow of X_H_0 (the Hamiltonian vector field of H_0). If the interaction term satisfies certain conditions, seeing it as a perturbation of H_0 we can expect that some of the unperturbed orbits will survive, and we can detect them by studying the fixed points of the map expressing the return time to a suitable Poincaré section. As the system presents U(1) symmetry (manifest in the periodicity of the flow of X_H_0), it will be convenient to carry out the study of the existence of periodic orbits in a reduced space constructed as the quotient of phase space under the U(1) action. This reduced space has the structure of a semi-algebraic variety and, as we will see, the periodic orbits originate precisely from its singular points (hence our use of singular reduction). These ideas have been explored in the case of the Hénon-Heiles system in <cit.>, and for perturbations of the isotropic harmonic oscillator in <cit.>. An alternative approach to stability, based on the notion of Lagrangian anchor, is developed in <cit.>. On the other hand, M. Pavšič has studied the stability problem under the assumption of bounded interaction potentials. With this restriction, in <cit.> he proves that no instabilities occur. However, in the present paper we do not restrict the potentials to the bounded case, so our approach is different, based on the geometric reasoning outlined above.

In the next section, we summarize the main results of the theory of Hamiltonian normal forms. As we will see, the analysis is considerably simplified if the normal form of H is expressed in the so-called Hopf variables, which for the case of the Pais-Uhlenbeck oscillator are determined in Section <ref>.
Then, we proceed to compute the normal form (in Section <ref>) and to determine its critical points on the reduced phase space, which will lead to the existence of closed stable orbits in the original system (in Section <ref>).

§ NORMAL FORMS IN PERTURBATION THEORY Given a Poisson manifold (M,P), with induced bracket {·,·}, consider a perturbed Hamiltonian of the form H=H_0+ε H_1, where H_0 is supposed to be integrable in general. Hamilton's equations for H are a coupled nonlinear system of differential equations, so its solution in closed form is very difficult, or even impossible, to obtain. Perturbation theory tries to construct approximate solutions to this system, and there are two big sets of techniques for doing this. The first group, which comprises classical methods such as Poincaré-Lindstedt or von Zeipel's, starts with a known solution of the unperturbed Hamiltonian H_0 and adds successive corrections to it; the computation of these corrections (usually through formal power series) involves H_1 and exploits the fact that ε is small. The methods in the second group, which include the Lie-Deprit method used in this paper, try to put the system of Hamiltonian equations in an approximate, simpler form suitable to be studied by analytic tools. Thus, the perturbed Hamiltonian H is said to admit a normal form of order n if there exists a near-identity canonical transformation on phase space such that H is transformed into H = ∑_{i=0}^n ε^i N_i + R_H, where N_0=H_0 and {N_i,H_0}=0, 1≤i≤n. The function N = ∑_{i=0}^n ε^i N_i is the normal form (of order n) of H. This approach is based on the fact that whenever ‖H−N‖ = ‖R_H‖ is small in a suitable norm, the trajectories of N provide us with good approximations to the true trajectories of H. In particular, closed orbits for H can be detected through the existence of closed orbits for N. Of course, in the setting of Poisson geometry, conditions (<ref>), (<ref>) can be expressed in terms of Hamiltonian vector fields instead of functions; then, we would write respectively X_H = ∑_{i=0}^n ε^i X_{H_i} + X_{R_H} and [X_{H_0}, X_{H_i}] = 0, 1≤i≤n, where, for any F∈𝒞^∞(M), X_F = {F,·} is the corresponding Hamiltonian vector field. To construct the desired canonical near-identity transformation that passes to the normal form, the Lie-Deprit method resorts to generating functions. In fact, a family of canonical transformations depending on the parameter ε, x↦y(x;ε) (where x collectively denotes the coordinates on M), such that y(x;0)=x, is defined by ∂y_j/∂ε = {S, y_j}, j∈{1,…,M}, where S=S(ε) is the generating function. Alternatively, (<ref>) can be written in terms of a Lie derivative as ∂y_j/∂ε = ℒ_{X_S} y_j, where X_S = {S,·} is the Hamiltonian vector field of S. It can be thought of as the `ε-flow generator', much in the same way as H is the time-flow generator. A problem appearing in the usual formulation of the Lie-Deprit method is that, in order to compute S = ∑_{j=0}^n ε^j S_j as a formal series (with n possibly equal to ∞), one has to use action–angle variables and Fourier analysis, so the formalism is of a local nature (recall that global action–angle variables do not always exist, see <cit.>). This problem can be overcome in a geometric setting by exploiting the symmetries of H_0. In <cit.>, a reformulation of the Lie-Deprit method is offered, valid for systems admitting a U(1)-action such that the Hamiltonian vector field X_{H_0} has periodic flow.
When trying to determine the defining properties of the generating function S or its Hamiltonian vector field X_S, one is led to a set of equations called the homological equations, which basically have the form ℒ_{X_{H_0}} S_j = F_j − (j+1)N_{j+1}, j≥0, for a certain set of functions F_j. As previously mentioned, these equations are usually solved by writing everything in action–angle variables, using some Fourier analysis and then averaging over angles on orbits with constant action. The idea in <cit.> was to solve the homological equations in a global setting, again using averaging operators, but this time constructing them by means of geometric properties of the flow of Hamiltonian vector fields, thus avoiding action–angle variables and the requirement that M be symplectic. In what follows we offer a brief summary of the results in <cit.>, in particular the explicit expressions for the normal forms to first and second order.

Given a manifold M and a complete vector field X∈𝒳(M), with periodic flow of period function T∈𝒞^∞(M), T>0, we have (for any p∈M) Fl_X^{t+T(p)}(p) = Fl_X^t(p), where Fl_X^t is the flow of X evaluated at time t∈ℝ. In this case, X induces a U(1)-action by putting (t,p) ↦ Fl_X^{t/w(p)}(p), where w = 2π/T > 0 is the frequency function. This U(1)-action is periodic with constant period 2π: Fl_X^{(t+2π)/w(p)}(p) = Fl_X^{t/w(p)}(p). A straightforward computation shows that the generator of this U(1)-action is given by the vector field Υ = (1/w)X ∈ 𝒳(M). Now, for any tensor field R∈ΓT_s^r(M), its U(1)-average is defined by ⟨R⟩ = (1/T)∫_0^T (Fl_Υ^t)^*R dt. Also, an 𝒮 operator, mapping ΓT_s^r(M) into itself, is defined as 𝒮(R) = (1/T)∫_0^T (t−π)(Fl_Υ^t)^*R dt. In terms of these averaging operators, the solution to the homological equations (considering now a Poisson manifold (M,P) and a Hamiltonian function H = H_0 + εH_1) can be expressed as S_j = 𝒮(F_j/w), N_{j+1} = (1/(j+1))⟨F_j⟩. Explicitly, the lowest-order expressions for the normal forms of the perturbed Hamiltonian are N_1 = ⟨H_1⟩ = (1/T)∫_0^T (Fl_{X_{H_0}}^t)^*H_1 dt, and N_2 = (1/2)⟨{𝒮(H_1/w), H_1}⟩.

§ INVARIANTS OF THE HAMILTONIAN FLOW OF THE FREE PAIS-UHLENBECK OSCILLATOR According to what we have seen in the Introduction, consider the Hamiltonian for the Pais-Uhlenbeck oscillator in T^∗ℝ^2, with coordinates (q_1,p_1,q_2,p_2) and the Poisson bracket induced by the usual canonical symplectic structure: H_0(q_1,p_1,q_2,p_2) = (1/2)(p_1^2+w_1^2q_1^2) − (1/2)(p_2^2+w_2^2q_2^2). Its associated Hamiltonian vector field is readily found to be X_{H_0} = p_1 ∂/∂q_1 − w_1^2q_1 ∂/∂p_1 − p_2 ∂/∂q_2 + w_2^2q_2 ∂/∂p_2. The curves c:I⊂ℝ→T^∗ℝ^2, c(t)=(q_1(t),p_1(t),q_2(t),p_2(t)), which are integral curves of X_{H_0}, satisfy the decoupled system (each dot denoting one time derivative) q̈_1 + w_1^2q_1 = 0, q̈_2 + w_2^2q_2 = 0, and hence we have an action of U(1) on T^∗ℝ^2 ≃ ℝ^4 given by the (linear) flow of X_{H_0}: Fl_{X_{H_0}}^t(q_1,p_1,q_2,p_2) = ( (p_1/w_1)sin w_1t + q_1cos w_1t, p_1cos w_1t − w_1q_1sin w_1t, q_2cos w_2t − (p_2/w_2)sin w_2t, w_2q_2sin w_2t + p_2cos w_2t ). Notice that whenever w_1 and w_2 are commensurable, this flow is periodic (although we may need a time rescaling to see it as a U(1) action). In particular, if w_1,w_2∈ℤ are coprime, then Fl_{X_{H_0}}^t is 2πw_1w_2-periodic. We are interested in determining the polynomial invariants under the action of this flow. To this end, following <cit.>, we introduce a set of complex coordinates through the relations z_j = p_j + iw_jq_j, z̄_j = p_j − iw_jq_j, for j∈{1,2}.
In terms of these, the linear flow of X_{H_0} can be written in the form Fl_{X_{H_0}}^t(z_1,z̄_1,z_2,z̄_2) = (e^{iw_1t}z_1, e^{−iw_1t}z̄_1, e^{−iw_2t}z_2, e^{iw_2t}z̄_2). Consider now an arbitrary monomial z_1^{j_1}z_2^{j_2}z̄_1^{k_1}z̄_2^{k_2}. It will be invariant under the action of the Hamiltonian vector field X_{H_0} if and only if z_1^{j_1}z_2^{j_2}z̄_1^{k_1}z̄_2^{k_2} = (e^{iw_1t})^{j_1}(e^{−iw_2t})^{j_2}(e^{−iw_1t})^{k_1}(e^{iw_2t})^{k_2} z_1^{j_1}z_2^{j_2}z̄_1^{k_1}z̄_2^{k_2} = e^{−i(−w_1j_1+w_2j_2+w_1k_1−w_2k_2)t} z_1^{j_1}z_2^{j_2}z̄_1^{k_1}z̄_2^{k_2}, that is, if and only if w_1(k_1−j_1) + w_2(j_2−k_2) = 0. For the sake of clarity, let us call m_1 = k_1−j_1 and m_2 = j_2−k_2, so we must solve m_1w_1 + m_2w_2 = 0. In what follows, we will assume that w_1,w_2∈ℤ^+, so we actually have a Diophantine equation in the unknowns (m_1,m_2) (the analysis of the general case is completely analogous). Moreover, we will suppose that w_1, w_2 are coprime; other cases, such as the resonance w_1=w_2, will be dealt with separately. The condition gcd(w_1,w_2)=1 guarantees that there exist solutions to (<ref>); indeed, a trivial one is given by (m_1,m_2)=(−w_2,w_1). Notice that (m_1,m_2) is a solution if and only if (rm_1,rm_2) is a solution for any r∈ℤ; thus, the set of all integer solutions (the only ones of interest for us) to (<ref>) is (m_1,m_2)=(−rw_2,rw_1) for arbitrary r∈ℤ, that is, (k_1−j_1, j_2−k_2) = (−rw_2, rw_1). Let us consider the different possibilities appearing here. * If k_1>j_1 and j_2>k_2, then m_1,m_2>0, and as w_1,w_2>0 by hypothesis, there are no solutions. * Analogously, if k_1<j_1 and j_2<k_2, then m_1,m_2<0, and as w_1,w_2>0, by factoring out a sign we arrive at |m_1|w_1+|m_2|w_2=0, which likewise has no solutions. * If k_1<j_1 and j_2>k_2, then r>0, so we can write j_1 = k_1+rw_2, j_2 = k_2+rw_1, and the invariant polynomial becomes z_1^{k_1+rw_2}z_2^{k_2+rw_1}z̄_1^{k_1}z̄_2^{k_2} = (z_1z̄_1)^{k_1}(z_2z̄_2)^{k_2}(z_1^{w_2}z_2^{w_1})^r. Hence, the monomials z_1z̄_1, z_2z̄_2, z_1^{w_2}z_2^{w_1} can be taken as generators. * If k_1>j_1 and j_2<k_2, then r<0 can be written as r=−s, with s∈ℤ^+. In this case, k_1 = j_1+sw_2 and k_2 = j_2+sw_1, so the invariant monomial is z_1^{j_1}z_2^{j_2}z̄_1^{j_1+sw_2}z̄_2^{j_2+sw_1} = (z_1z̄_1)^{j_1}(z_2z̄_2)^{j_2}(z̄_1^{w_2}z̄_2^{w_1})^s, and the monomials z_1z̄_1, z_2z̄_2, z̄_1^{w_2}z̄_2^{w_1} are generators. As a conclusion, the generators of the algebra of invariant polynomials (under the action of the Hamiltonian flow of X_{H_0}) can be taken as z_1z̄_1, z_2z̄_2, z_1^{w_2}z_2^{w_1}, z̄_1^{w_2}z̄_2^{w_1}. Alternatively, we can consider the following set of real generators (in terms of the original phase space variables (q_1,q_2,p_1,p_2)), called the Hopf variables: ρ_1 = z_1z̄_1 = w_1^2q_1^2 + p_1^2, ρ_2 = z_2z̄_2 = w_2^2q_2^2 + p_2^2, ρ_3 = Re(z_1^{w_2}z_2^{w_1}) = Re((p_1+iw_1q_1)^{w_2}(p_2+iw_2q_2)^{w_1}), ρ_4 = Im(z_1^{w_2}z_2^{w_1}) = Im((p_1+iw_1q_1)^{w_2}(p_2+iw_2q_2)^{w_1}). Rather than giving the most general explicit expression, let us illustrate this result with a simple example. For instance, in the case of a 1:2 resonance (that is, w_1=1, w_2=2), we get ρ_1 = q_1^2+p_1^2, ρ_2 = 4q_2^2+p_2^2, ρ_3 = p_2(p_1^2−q_1^2) − 4p_1q_1q_2, ρ_4 = 2q_2(p_1^2−q_1^2) + 2q_1p_1p_2. There exists a certain algebraic relation satisfied by the ρ variables. To begin with, we have ρ_1^{w_2}ρ_2^{w_1} = (z_1z̄_1)^{w_2}(z_2z̄_2)^{w_1} = z_1^{w_2}z_2^{w_1} · z̄_1^{w_2}z̄_2^{w_1}, but this can be rearranged as z_1^{w_2}z_2^{w_1} · z̄_1^{w_2}z̄_2^{w_1} = Re^2(z_1^{w_2}z_2^{w_1}) + Im^2(z_1^{w_2}z_2^{w_1}) = ρ_3^2 + ρ_4^2. Thus, the real generators (ρ_1,ρ_2,ρ_3,ρ_4) satisfy ρ_3^2 + ρ_4^2 = ρ_1^{w_2}ρ_2^{w_1}, ρ_1,ρ_2 ≥ 0, which are the equations of a singular algebraic surface in ℝ^4.
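The 1:2 expressions above can be checked mechanically. The following sketch (our own verification aid, not part of the original analysis) confirms with sympy both the algebraic relation ρ_3^2+ρ_4^2 = ρ_1^2ρ_2 and, by a numerical spot check, the invariance of ρ_3 under the flow (<ref>):

import sympy as sp

q1, p1, q2, p2, t = sp.symbols('q1 p1 q2 p2 t', real=True)

r1 = q1**2 + p1**2
r2 = 4*q2**2 + p2**2
r3 = p2*(p1**2 - q1**2) - 4*p1*q1*q2
r4 = 2*q2*(p1**2 - q1**2) + 2*q1*p1*p2

# Relation defining the singular surface (w1 = 1, w2 = 2):
assert sp.expand(r3**2 + r4**2 - r1**2 * r2) == 0

# Flow of X_{H_0} for w1 = 1, w2 = 2:
Q1 = p1*sp.sin(t) + q1*sp.cos(t)
P1 = p1*sp.cos(t) - q1*sp.sin(t)
Q2 = q2*sp.cos(2*t) - (p2/2)*sp.sin(2*t)
P2 = 2*q2*sp.sin(2*t) + p2*sp.cos(2*t)
r3_t = P2*(P1**2 - Q1**2) - 4*P1*Q1*Q2   # rho_3 evaluated along the flow

vals = {q1: 0.3, p1: -0.7, q2: 1.1, p2: 0.4, t: 0.9}
assert abs(float(sp.N((r3_t - r3).subs(vals)))) < 1e-12   # rho_3 is invariant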
For the particular case of the 1:2 resonance, we get ρ_3^2 + ρ_4^2 = ρ_1^2ρ_2, ρ_1,ρ_2 ≥ 0. Since (by a suitable rescaling) the action on T^∗ℝ^2 ≃ ℝ^4 of the flow of X_{H_0} can be seen as a smooth U(1)-action, the group U(1) is compact, and the orbit space ℝ^4/U(1) only contains finitely many orbit types (we will consider the geometric structure of this orbit space in Section <ref>), we can apply the result in <cit.>, which tells us that the smooth observables invariant under the action of U(1) are smooth functions of the polynomial generators (ρ_1,ρ_2,ρ_3,ρ_4).

§ NORMAL FORM OF THE PERTURBED HAMILTONIAN Recall from the Introduction that the main difficulty associated with the Hamiltonian form of the Pais-Uhlenbeck oscillator (<ref>) is the following fact: when an interaction between the two sub-oscillators is added, energy can be freely transmitted from one to the other. In particular, this transfer can occur unboundedly from oscillator 1 to oscillator 2, leading to increasing negative energies and an eventual collapse of the system[At the quantum level, this fact manifests itself in the appearance, after canonical quantization, of states with negative norm.]. However, it has been observed in <cit.>, <cit.> and <cit.> (among others) that for certain interactions there are "islands of stability", which have been detected numerically. It is our intention to prove analytically the existence of such stable configurations, and for this we need to write the interacting Hamiltonian in normal form. Then, we will take the quotient by the action of the Hamiltonian flow of H_0 and obtain the corresponding Hamiltonian on the reduced phase space, in the next section. An important feature of this reduction process is that the reduced Hamiltonian is a function of only three among the invariant generators (ρ_1,ρ_2,ρ_3,ρ_4).

For a cubically self-interacting Pais-Uhlenbeck oscillator (with the restriction w_1≠w_2), d^4u/dt^4 + (w_1^2+w_2^2) d^2u/dt^2 + w_1^2w_2^2 u − Λu^3 = 0, the following Hamiltonian has been proposed in <cit.>: H(q_1,p_1,q_2,p_2) = H_0 + ΛH_1 = (1/2)(p_1^2+w_1^2q_1^2) − (1/2)(p_2^2+w_2^2q_2^2) + (Λ/4)(q_1+q_2)^4, where the relation between the variable u and the set (q_1,q_2,p_1,p_2) is obtained through a series of substitutions and hyperbolic rotations that will not be needed here (they come from the application of Ostrogradsky's second-order formalism, see <cit.> and <cit.>). Due to the presence of a periodic flow, we will find it convenient to use the techniques in <cit.>, considering Λ as a perturbation parameter (in fact, here Λ is a small parameter, see <cit.>). We begin by noticing that the Hamiltonian flow Fl_{X_{H_0}}^t is given by Fl_{X_{H_0}}^t(q_1,p_1,q_2,p_2) = ( (p_1/w_1)sin w_1t + q_1cos w_1t, p_1cos w_1t − w_1q_1sin w_1t, q_2cos w_2t − (p_2/w_2)sin w_2t, w_2q_2sin w_2t + p_2cos w_2t ). The first two components are periodic with period T_1 = 2π/w_1, while the remaining two are periodic with T_2 = 2π/w_2. A common period T is obtained by finding integers a,b such that aT_1 = T = bT_2; in particular, T = 2πw_1w_2 is such a common period, and it is the one we use in what follows. The computation of the second-order normal form is described in <cit.>. In particular, the term N_1 is given by the average of the perturbation term H_1 along the flow Fl_{X_{H_0}}^t (<ref>), that is: N_1 = ⟨H_1⟩ = (1/(2πw_1w_2)) ∫_0^{2πw_1w_2} (Fl_{X_{H_0}}^t)^* H_1 dt. The result is N(q_1,p_1,q_2,p_2) = H_0 + ΛN_1 + O(Λ^2), where N_1 = (3/(32w_1^4w_2^4)) ( w_1^4w_2^4(q_1^4+q_2^4) + q_1^2(4w_1^4w_2^4q_2^2 + 2w_1^2w_2^4p_1^2 + 4w_1^4w_2^2p_2^2)
+ q_2^2(4w_1^2w_2^4p_1^2 + 2w_1^4w_2^2p_2^2) + p_1^2(w_2^4p_1^2 + 4w_1^2w_2^2p_2^2) + w_1^4p_2^4 ). As a quick check, one can verify that the Poisson bracket {H_0,N_1} vanishes, as it must for a normal form. Now, we use (<ref>) to re-express these results in terms of the Hopf invariants ρ_i, obtaining H_0(ρ_1,ρ_2,ρ_3,ρ_4) = (1/2)(ρ_1−ρ_2) for the free part, while N_1(ρ_1,ρ_2,ρ_3,ρ_4) = (3/8)( ρ_1^2/(4w_1^4) + ρ_1ρ_2/(w_1^2w_2^2) + ρ_2^2/(4w_2^4) ). We will make use of these explicit expressions in the next section, to determine the existence of stable periodic orbits under the flow of the Pais-Uhlenbeck oscillator.

§ STABILITY ANALYSIS ON THE REDUCED PHASE SPACE Recall from the discussion following (<ref>) that the Hamiltonian flow Fl_{X_{H_0}}^t is such that all its orbits are periodic with period 2πw_1w_2. The flow of X_{H_0} induces a U(1)-action on the phase space. Let 𝒪 be the orbit space obtained by identifying any two points of ℝ^4 lying on the same orbit of X_{H_0}. Any averaged vector field clearly satisfies ℒ_{X_{H_0}}⟨X⟩ = 0, thus leading to (Fl_{X_{H_0}}^t)^*⟨X⟩ = ⟨X⟩. Therefore, the averaged vector field is completely determined along an orbit of X_{H_0} once it is known at a single point. In particular, this means that ⟨X⟩ descends to the orbit space. Our reduced phase space will be obtained as a subspace of the orbit space, by fixing a value of the momentum map of the U(1)-action determined by Fl_{X_{H_0}}^t, which is nothing but the total energy H_0 = (ρ_1−ρ_2)/2 (see (<ref>)). At this point, there are two things to do. The first one is to identify the reduced phase space geometrically, and the second one is to give an explicit expression for the reduced Hamiltonian, that is, the normal form Hamiltonian N = H_0 + ΛN_1 + O(Λ^2) restricted to the reduced phase space.

For the first step, we will follow the technique described in <cit.> to prove that (<ref>), together with the condition of constant energy H_0=h, gives the algebraic description of the reduced phase space. We use a result by Poènaru <cit.> (actually a corollary of the theorem by Schwarz in <cit.>), which states that the basic invariant polynomials separate the orbits of the Hamiltonian flow Fl_{X_{H_0}}^t. In our case this implies[Here we collectively denote (q_1,p_1,q_2,p_2) by (q,p).] that the equality (ρ_1(q,p),…,ρ_4(q,p)) = (ρ_1(q′,p′),…,ρ_4(q′,p′)) holds if and only if (q,p) and (q′,p′) belong to the same orbit. Thus, it is enough to prove that for every (u_1,u_2,u_3,u_4) such that u_3^2+u_4^2 = u_1^{w_2}u_2^{w_1}, its inverse image under the map (q,p) ↦ (ρ_1(q,p),…,ρ_4(q,p)) is precisely a single orbit of the flow Fl_{X_{H_0}}^t. For instance, if u_2=0 then ρ_2(q,p)=0 and necessarily q_2=0=p_2 (from (<ref>)). This, in turn, implies that ρ_3=0=ρ_4, so we have the inverse image of (u_1,0,0,0), where u_1≥0, which is the set {(q_1,p_1,0,0)∈ℝ^4 : w_1^2q_1^2+p_1^2=u_1}, and this is clearly an orbit of Fl_{X_{H_0}}^t. The remaining cases can be done along similar lines, and will not be written here. In what follows, we will restrict our attention to fixed negative energy values. The reduced phase space is then given by the set of equations ρ_3^2 + ρ_4^2 = ρ_1^{w_2}ρ_2^{w_1}, ρ_1,ρ_2 ≥ 0, ρ_1−ρ_2 = 2h, that is, ρ_3^2 + ρ_4^2 = (2h+ρ_2)^{w_2}ρ_2^{w_1}, ρ_2 ≥ −2h. As advanced in Section <ref>, (<ref>) is the equation of a singular algebraic surface S in ℝ^3. In the case of the w_1:w_2 resonance for the sum of harmonic oscillators, the geometry of these surfaces (which typically are pinched spheres) has been extensively studied by Kummer, see <cit.>.
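The averaged term can also be validated numerically. The sketch below (our own consistency check, using the illustrative values w_1=1, w_2=2 and an arbitrary test point) computes the time average (<ref>) of H_1 along the explicit flow and compares it with the closed-form N_1 in the Hopf variables:

import numpy as np
from scipy.integrate import quad

w1, w2 = 1.0, 2.0
q1, p1, q2, p2 = 0.7, -0.3, 0.2, 0.9     # arbitrary test point

def H1_along_flow(t):
    Q1 = (p1/w1)*np.sin(w1*t) + q1*np.cos(w1*t)
    Q2 = q2*np.cos(w2*t) - (p2/w2)*np.sin(w2*t)
    return 0.25 * (Q1 + Q2)**4            # H_1 evaluated on the flow

T = 2*np.pi*w1*w2
avg, _ = quad(H1_along_flow, 0.0, T, limit=400)
avg /= T

r1 = w1**2*q1**2 + p1**2                  # Hopf variables at the test point
r2 = w2**2*q2**2 + p2**2
N1 = (3/8)*(r1**2/(4*w1**4) + r1*r2/(w1**2*w2**2) + r2**2/(4*w2**4))
print(avg, N1)                            # the two values agree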
In the present case, due to the fact that H_0 is an indefinite quadratic form, the surface (<ref>) is a noncompact semialgebraic variety, as can be seen in Figure <ref> (which represents the above surface for h=−1 from two different viewpoints). Notice that the point (ρ_2,ρ_3,ρ_4) = (−2h,0,0) always lies on the surface (<ref>). This point is smooth when w_2=1, has a conical singularity when w_2=2, and has a cusp-like singularity for w_2≥3. Therefore, the point (−2h,0,0) belongs to the reduced space if and only if w_2=1. This point corresponds to the curve in ℝ^4 given by p_1=q_1=0 and p_2^2+w_2^2q_2^2 = −2h, called the normal mode. We will analyze these smoothness issues later on. The preceding remarks lead us to consider two different cases, depending on the value of w_2. Before analyzing these, we need to gather some results related to Moser's theorem and its generalizations. The well-known theorem by Moser <cit.> can be rephrased as follows: Let H = H_0 + ΛH_1 + O(Λ^2) be a perturbed Hamiltonian, with M_h the hypersurface H_0=h. Suppose that the orbits of the Hamiltonian flow Fl_{X_{H_0}}^t are all periodic with period T, and let S be the quotient with respect to the induced U(1)-action on M_h. Then, to every non-degenerate critical point p∈S of the restricted averaged perturbation N_1|_S = ⟨H_1⟩|_S there corresponds a periodic trajectory of the full Hamiltonian vector field X_H that branches off from the orbit represented by p and has period close to 2π. A generalization can be found in <cit.>, Theorem 6.4 there; as we will see, our setting satisfies its hypotheses when w_2=1.

In order to apply these results, we must characterize the critical points of Hamiltonian vector fields on the reduced space. First, observe that the commutation relations among the generators (ρ_1,ρ_2,ρ_3,ρ_4) are given by {ρ_1,ρ_2} = 0, {ρ_1,ρ_3} = −2w_1w_2ρ_4, {ρ_1,ρ_4} = 2w_1w_2ρ_3, {ρ_2,ρ_3} = −2w_1w_2ρ_4, {ρ_2,ρ_4} = 2w_1w_2ρ_3, {ρ_3,ρ_4} = w_1w_2ρ_1^{w_2−1}ρ_2^{w_1−1}(w_1ρ_1+w_2ρ_2). Renaming the variables ρ_3=x, ρ_4=y, and ρ_2=z, these relations induce a Poisson bracket on the three-dimensional Euclidean space ℝ^3 = {(x,y,z)} given by {f,g} = w_1w_2⟨∇g, ∇f×∇F⟩, where F is the function F(x,y,z) = x^2 + y^2 − (z+2h)^{w_2}z^{w_1}, and the symbols ⟨·,·⟩, ×, ∇ stand for the usual inner product, cross product and nabla operator in ℝ^3, respectively. Hence, for any f∈C^∞(ℝ^3), its Hamiltonian vector field is given by X_f = w_1w_2 ∇f×∇F. It follows directly from definition (<ref>) that the function F(x,y,z) (<ref>) is a Casimir of the Poisson structure (<ref>). Thus, the symplectic leaves of the corresponding foliation are precisely the connected components of the level sets of F. If we define the mapping P:ℝ^4→ℝ^3 by P(ρ_1,ρ_2,ρ_3,ρ_4) = (ρ_3,ρ_4,ρ_2), we get that P is a Poisson map and P(H_0^{-1}(h)) = F^{-1}(0). Moreover, (P∘Fl_{X_{H_0}}^t)(q_1,p_1,q_2,p_2) = P(ρ_1(p_1,q_1,p_2,q_2), ρ_2(p_1,q_1,p_2,q_2), ρ_3(p_1,q_1,p_2,q_2), ρ_4(p_1,q_1,p_2,q_2)). Therefore, the reduced space is contained in a symplectic leaf of F^{-1}(0)⊂ℝ^3. Let us denote by M_h the reduced space. Then, a realization is given by M_h = { F^{-1}(0) and z ≥ −2h if w_2=1; F^{-1}(0) and z > −2h if w_2>1 }. Any function f∈C^∞(ℝ^3) defines a Hamiltonian vector field X_f on M_h by the restriction of (<ref>): X_f :=
(w_1w_2 ∇f×∇F)|_{M_h}. It also follows from (<ref>) that the Hamiltonian vector field X_f has a critical point at p∈M_h if and only if either ∇f(p) is orthogonal to M_h at p, or ∇f(p)=0. Next, we describe how to obtain the reduced Hamiltonian vector field corresponding to a function G∈C^∞(ℝ^4) such that {H_0,G}=0. As discussed in Section <ref>, G can be expressed in terms of the Hopf variables: G = G(ρ_1,ρ_2,ρ_3,ρ_4). Writing ρ_1 = z+2h, ρ_2 = z, ρ_3 = x and ρ_4 = y, we obtain the function Q(x,y,z) = G(z+2h, z, x, y). Thus, the reduced Hamiltonian vector field associated to G is the vector field X_G = (w_1w_2 ∇Q×∇F)|_{M_h}. This expression allows us to compute the critical points of the reduced vector field associated to the function N_1(ρ_1,ρ_2,ρ_3,ρ_4) (<ref>). Letting, as above, K(x,y,z) = N_1(z+2h, z, x, y), we get K(x,y,z) = (3/8)( (z+2h)^2/(4w_1^4) + (z+2h)z/(w_1^2w_2^2) + z^2/(4w_2^4) ) = (3/8)( (w_1^4+4w_1^2w_2^2+w_2^4)/(4w_1^4w_2^4) z^2 + (2w_1^2+w_2^2)/(w_1^4w_2^2) hz + h^2/w_1^4 ). Hence, the reduced vector field is X_{N_1} = (w_1w_2 ∇K×∇F)|_{M_h}. As we pointed out above, the critical points of (<ref>) are those points p∈M_h such that either ∇K(p)=0 or ∇K(p) is orthogonal to M_h. By a straightforward computation, we get ∇K = ( 0, 0, (3/8)( (w_1^4+4w_1^2w_2^2+w_2^4)/(2w_1^4w_2^4) z + (2w_1^2+w_2^2)/(w_1^4w_2^2) h ) ). Thus, ∇K(p)=0 if and only if p = (0, 0, (−2h)(2w_1^2w_2^2+w_2^4)/(w_1^4+4w_1^2w_2^2+w_2^4)). But because (2w_1^2w_2^2+w_2^4)/(w_1^4+4w_1^2w_2^2+w_2^4) < 1, ∇K never vanishes on M_h. Since, once the z-axis is fixed, M_h is invariant under rotations of ℝ^3 about this axis, and ∇K never vanishes on M_h, ∇K is orthogonal to M_h at (0,0,−2h) if and only if w_2=1 (otherwise, the point does not belong to M_h). We will consider the cases w_2=1 and w_2>1 separately, as they need different treatments.

§.§ The case w_2=1 As we have seen, the point (0,0,−2h) is the only critical point of the reduced Hamiltonian vector field X_K corresponding to N_1. A simple computation gives ∂F/∂z(0,0,−2h) = (−2h)^{w_1} ≠ 0. By the implicit function theorem, there exists a (locally defined) smooth function z = g(x,y) such that F(x,y,g(x,y)) = 0 and g(0,0) = −2h. Therefore, the function K in (<ref>) has the form K = K(g(x,y)) in a neighborhood of (0,0,−2h). Another computation shows that the Hessian of K(g(x,y)) at (0,0) is positive definite. Therefore, the critical point (0,0,−2h) is non-degenerate, and Theorem 6.4 in <cit.> implies that, for small enough Λ, the Pais-Uhlenbeck oscillator has a unique stable periodic orbit γ_Λ with energy h through each point m(Λ) sufficiently close to (0,0,−2h), with period T(Λ), such that H_0(m(Λ))→h and T(Λ)→2πw_1w_2.

§.§ The case w_2>1 If w_2>1, ∇K neither vanishes on M_h nor is orthogonal to M_h. M_h being a quotient space, each point on it represents a periodic orbit of the unperturbed Hamiltonian H_0. Thus, there is no point in M_h whose periodic orbit persists as a periodic orbit of the Pais-Uhlenbeck oscillator. But we must take into account that, in order to preserve smoothness, the point (0,0,−2h) was removed from M_h. This point corresponds precisely to the normal mode, and we have seen in the previous case that the normal mode persists as a periodic orbit of the oscillator, so we may expect that bringing back (0,0,−2h) could give us the orbits we seek. Unfortunately, if w_2>1 Moser's results do not apply. In this case, we will prove that the normal mode also persists as a periodic orbit, but we must resort to other techniques. Let f_2(p_1,q_1,p_2,q_2) = (1/2)(p_2^2 + w_2^2q_2^2).
The Hamiltonian vector field of f_2 with respect to the canonical symplectic structure on ℝ^4, X_{f_2}, has periodic flow with period T = 2π/w_2. This flow generates a free and proper U(1)-action on ℝ^2×(ℝ^2−{(0,0)}). For every fixed h<0, the level set f_2^{-1}(−h) is foliated by periodic orbits of X_{f_2}, and the reduced space is given by M_h = f_2^{-1}(−h)/U(1). Let us make the following change of variables from (p_1,q_1,p_2,q_2) to (p_1,q_1,L,θ): Ψ(p_1,q_1,L,θ) = (p_1, q_1, −√(2L) sin w_2θ, −(√(2L)/w_2) cos w_2θ), L>0, 0<θ<2π/w_2. In these coordinates, the canonical symplectic form on ℝ^2×(ℝ^2−{(0,0)}), dp_1∧dq_1 + dp_2∧dq_2, becomes dp_1∧dq_1 + dθ∧dL, and the Hamiltonian of the Pais-Uhlenbeck oscillator is H(p_1,q_1,L,θ) = (1/2)(p_1^2+w_1^2q_1^2) − L + (Λ/4)(q_1 − (√(2L)/w_2) cos w_2θ)^4. Consider the restriction to the level set Σ_h = {(p_1,q_1,L,θ) | L = −h}. Since this level set is foliated by orbits of X_{f_2}, the Hamiltonian equations of (<ref>) are θ̇ = 1 + Λ(q_1 − (√(−2h)/w_2) cos w_2θ)^3 (cos w_2θ/(w_2√(−2h))), ṗ_1 = −w_1^2q_1 − Λ(q_1 − (√(−2h)/w_2) cos w_2θ)^3, q̇_1 = p_1. We now consider the cross section σ_0 = {(p_1,q_1,−h,θ)∈Σ_h : θ=0}, and fix a point a = (p_1^0, q_1^0, −h, 0) ∈ σ_0. The trajectory of (<ref>) through a is: θ(t) = t + Λ∫_0^t (q_1 − (√(−2h)/w_2) cos w_2θ)^3 (cos w_2θ/(w_2√(−2h))) dt, p_1(t) = p_1^0 cos w_1t − w_1q_1^0 sin w_1t − Λ∫_0^t (q_1 − (√(−2h)/w_2) cos w_2θ)^3 dt, q_1(t) = (p_1^0/w_1) sin w_1t + q_1^0 cos w_1t.

Let T(a,Λ) be the time elapsed between two consecutive intersections with σ_0. From equation (<ref>), we get 2π/w_2 = T(a,Λ) + Λ∫_0^{T(a,Λ)} (q_1 − (√(−2h)/w_2) cos w_2θ)^3 (cos w_2θ/(w_2√(−2h))) dt, so T(a,Λ) has the form T(a,Λ) = 2π/w_2 + ΛT_1(a) + O(Λ^2), where T_1(a) = (1/(w_2√(−2h))) ∫_0^{2π/w_2} cos w_2t ( (√(−2h)/w_2) cos w_2t − q_1^0 )^3 dt. Let us remark that this is an average along the orbit of X_{f_2} through (p_1^0,q_1^0,−h,0). Moreover, T_1(0,0,−h,0) ≠ 0. Substituting (<ref>) in (<ref>) and (<ref>), we obtain p_1(T(a)) = p_1^0 + Λ( −w_1^2q_1^0 T_1(a) − ∫_0^{2π/w_2} (q_1^0 − (√(−2h)/w_2) cos w_2t)^3 dt ) + O(Λ^2), q_1(T(a)) = q_1^0 + Λp_1^0 T_1(a) + O(Λ^2).

In order to prove that there exist periodic orbits for the Pais-Uhlenbeck oscillator in Σ_h, we must show that, for each Λ small enough, there exist p_1^0(Λ) and q_1^0(Λ) such that we get a fixed point: p_1(T(p_1^0(Λ),q_1^0(Λ),−h,0,Λ)) = p_1^0(Λ), q_1(T(p_1^0(Λ),q_1^0(Λ),−h,0,Λ)) = q_1^0(Λ). To this end, we define the following function F:ℝ^3→ℝ^2: F(p_1,q_1,Λ) = ( −w_1^2q_1T_1(a) − ∫_0^{2π/w_2} (q_1 − (√(−2h)/w_2) cos w_2t)^3 dt + O(Λ), p_1T_1(a) + O(Λ) ). A straightforward computation shows that F(0,0,0) = (0,0)^T and det( ∂F/∂(p_1,q_1) )|_{(0,0,0)} = det[ 0, −w_1^2T_1(0,0,−h,0); T_1(0,0,−h,0), 0 ] > 0. By the implicit function theorem, there exist δ>0, an open neighborhood U of (0,0) and a function g:(−δ,δ)→U, g(Λ) = (p_1(Λ), q_1(Λ)), such that g(0) = (0,0) and F(g(Λ),Λ) = 0. Therefore, p_1(T(g(Λ),−h,0,Λ)) = p_1(Λ), q_1(T(g(Λ),−h,0,Λ)) = q_1(Λ). This proves that for sufficiently small Λ, the Pais-Uhlenbeck oscillator has a unique stable periodic orbit γ_Λ with energy h which branches off from the normal mode γ. Summarizing, in either case, w_2=1 or w_2>1, stable orbits for the self-interacting Pais-Uhlenbeck oscillator with quartic potential exist, all of them coming from the normal mode.

[CAVV13] M. Avendaño-Camacho, J. A. Vallejo and Yu. Vorobjev: A simple global representation for second-order normal forms of Hamiltonian systems relative to periodic flows. J. of Phys. A: Math. and Theor. 46 (2013) 395201.
[Bat88] L. M. Bates: Examples for obstructions to action-angle coordinates. Proc. of the Royal Soc. of Edinburgh Sec. A: Math.
110 1-2 (1988) 27–30.
[CKR83] R. C. Churchill, M. Kummer and D. L. Rod: On averaging, reduction, and symmetry in Hamiltonian systems. J. of Diff. Eqs. 49 (1983) 359–414.
[Cus94] R. H. Cushman: Geometry of perturbation theory. In "Deterministic chaos in General Relativity", Editors: D. Hobill et al. Springer Verlag 1994, 89–101.
[Cus97] R. H. Cushman and L. Bates: Global aspects of classical integrable systems. Birkhäuser, Basel, 1997.
[Cus99] R. H. Cushman, S. Ferrer and H. Hanssmann: Singular reduction of axially symmetric perturbations of the isotropic harmonic oscillator. Nonlinearity 12 2 (1999) 389–410.
[Dui80] J. J. Duistermaat: On global action-angle coordinates. Comm. in Pure and Appl. Math. 33 6 (1980) 687–706.
[IK13] I. B. Ilhan and A. Kovner: Some comments on ghosts and unitarity: The Pais-Uhlenbeck oscillator revisited. Phys. Rev. D88 (2013) 044045.
[KL14] D. S. Kaparulin, S. L. Lyakhovich and A. A. Sharapov: Classical and quantum stability of higher-derivative dynamics. The Eur. Phys. Journal C 74 (2014) 3072.
[KL15] D. S. Kaparulin and S. L. Lyakhovich: Energy and stability of the Pais-Uhlenbeck oscillator. In "Geometric Methods in Physics", Editors: P. Kielanowski et al. Birkhäuser Trends in Mathematics, Springer Verlag 2015, 127–134.
[Kum86] M. Kummer: On resonant Hamiltonian systems with finitely many degrees of freedom. In "Local and global methods in nonlinear dynamics", Editors: A. W. Sáenz et al. Lecture Notes in Physics 252, Springer Verlag 1986, 19–31.
[Mas16] I. Masterov: An alternative Hamiltonian formulation for the Pais-Uhlenbeck oscillator. Nucl. Phys. B902 (2016) 95–114.
[Mos70] J. Moser: Regularization of Kepler's problem and the averaging method on a manifold. Comm. on Pure and Appl. Math. XXIII (1970) 609–636.
[Mos10] A. Mostafazadeh: A Hamiltonian formulation of the Pais-Uhlenbeck oscillator that yields a stable and unitary quantum system. Phys. Letters A375 (2010) 93–98.
[Os50] M. Ostrogradsky: Mémoires sur les équations différentielles relatives au problème des isopérimètres. Mem. Acad. St. Petersbourg VI 4 (1850) 385–517.
[PU50] A. Pais and G. E. Uhlenbeck: On field theories with non-localized action. Phys. Rev. 79 (1950) 145–165.
[Pav13] M. Pavšič: Stable self-interacting Pais-Uhlenbeck oscillator. Mod. Phys. Letters A28 36 (2013) 1350165.
[Pav13-2] M. Pavšič: Pais-Uhlenbeck oscillator with a benign friction force. Phys. Rev. D87 10 (2013) 107502.
[Pav13-3] M. Pavšič: Quantum field theories in spaces with neutral signatures. J. Phys. Conf. Ser. 437 (2013) 012006.
[Pav16] M. Pavšič: Pais-Uhlenbeck oscillator with negative energies. Int. J. Geom. Meth. Mod. Phys. 13 09 (2016) 1630015.
[Poe76] V. Poènaru: Singularités C^∞ en présence de symétrie. Lecture Notes in Mathematics 510, Springer Verlag, Berlin, 1976.
[Sch75] G. Schwarz: Smooth functions invariant under the action of a compact Lie group. Topology 14 (1975) 63–68.
[Smi09] A. V. Smilga: Comments on the dynamics of the Pais-Uhlenbeck oscillator. SIGMA 5 (2009) 017.
Multiple Instance Learning with the Optimal Sub-Pattern Assignment Metric Quang N. Tran, Ba-Ngu Vo, Dinh Phung, Ba-Tuong Vo, and Thuong Nguyen=========================================================================

Multiple instance data are sets or multi-sets of unordered elements. Using metrics or distances for sets, we propose an approach to several multiple instance learning tasks, such as clustering (unsupervised learning), classification (supervised learning), and novelty detection (semi-supervised learning). In particular, we introduce the Optimal Sub-Pattern Assignment metric to multiple instance learning so as to provide versatile design choices. Numerical experiments on both simulated and real data are presented to illustrate the versatility of the proposed solution.

Point patterns, multiple instance data, set distances, clustering, classification, novelty detection, affinity propagation

§ INTRODUCTION Multiple instance (MI) data, more commonly known as `bags' <cit.>, <cit.>, <cit.>, are mathematical objects called point patterns. A point pattern (PP) is a set or multi-set of unordered points (or elements) <cit.>, in which each point represents the state or features of the object of study. Note that a set does not contain repeated points while a multi-set can. PPs appear in a variety of applications. In natural language processing and information retrieval, the `bag-of-words' representation treats each document as a collection or set of words <cit.>. In image and scene categorization, the `bag-of-visual-words' representation—the analogue of the `bag-of-words' in text analysis—treats each image as a set of its key patches <cit.>. In applications involving three-dimensional (3D) images, such as computed tomography and magnetic resonance imaging, point cloud data are actually sets of points in some coordinate system <cit.>. In data analysis for the retail industry as well as web management systems, transaction records such as market-basket data <cit.> and web log data <cit.> are sets of transaction items.

While PP data are abundant, fundamental MI learning tasks such as clustering (unsupervised learning), classification (supervised learning), and novelty detection[Novelty detection is not a special case of classification because anomalous or novel training data is not available <cit.>.] (semi-supervised learning) have received limited attention <cit.>. Indeed, to the best of our knowledge, there are no MI learning solutions based on PP models, nor any MI novelty detection solutions in the literature. In MI clustering, two algorithms have been developed for PP data: Bag-level Multi-instance Clustering (BAMIC) <cit.>, and Maximum Margin Multiple Instance Clustering (M^3IC) <cit.>. BAMIC adapts the k-medoids algorithm with the Hausdorff distance as a measure of dissimilarity between PPs <cit.>. M^3IC, on the other hand, poses the PP clustering problem as a non-convex optimization problem, which is then relaxed and solved via a combination of the Constrained Concave-Convex Procedure and Cutting Plane methods <cit.>. In MI classification, there are three paradigms: Instance-Space, Embedded-Space, and Bag-Space <cit.>. These paradigms differ in the way they exploit data at the local level (individual points within each bag) or at the global level (the bags themselves as observations). Instance-Space is the only paradigm that exploits data at the local level, thereby neglecting the relationship between points in a bag.
At the global level, the Embedded-Space paradigm maps all PPs to vectors of fixed dimension, which are then processed by standard classifiers for vectors. On the other hand, the Bag-Space paradigm addresses the problem at the most fundamental level by operating directly on the PPs. The philosophy of the Bag-Space paradigm is to preserve the information content of the data, which could otherwise be compromised through the data transformation process (as in the Embedded-Space approach). Existing methods in the Bag-Space paradigm uses the Hausdorff <cit.>, Chamfer <cit.>, and Earth Mover's <cit.> distances. In this paper, we propose the use of the Optimal Sub-Pattern Assignment (OSPA) distance <cit.> in MI clustering, classification and novelty detection. The choice of set distance in MI learning can markedly influence the performance, and the OSPA distance provides more flexible design choices for different types of applications. Our specific contributions are: * In MI clustering, we combine the Affinity Propagation (AP) clustering algorithm <cit.> with set distances as dissimilarity measures[Preliminary results have been presented in the conference paper <cit.>. This paper presents a more comprehensive study.]. We also examine the clustering performance amongst the OSPA, Hausdorff, and Wasserstein distances. Compared to existing k-medoids based techniques <cit.>, AP can find clusters faster with much lower error, and does not require the number of clusters to be specified <cit.>. In addition, the OSPA distance is more versatile than the Hausdorff distance used in <cit.>.* In MI classification, we use the OSPA distance in the k-nearest neighbour (k-NN) algorithm <cit.>, and examine the performance against the Hausdorff-based technique <cit.> and the Wasserstein-based technique (the Earth Mover's distance adapted for PPs <cit.>). Being a Bag-Space approach, this technique exploits data at the global level, and avoids potential information loss from the embedding. Moreover, the advantage over existing Bag-Space approaches lies in the versatility of the OSPA distance over the Hausdorff <cit.>, Chamfer <cit.> and Earth mover's <cit.> distances. * In MI novelty detection, we propose a solution based on the set distance between the candidate PP and its nearest neighbour in the normal training set. We also examine the detection performance amongst the OSPA, Hausdorff, and Wasserstein distances. This very first MI novelty detection method is simple, effective and versatile across various applications. The rest of this paper is organized as follows. Section <ref> presents the Hausdorff, Wasserstein and OSPA distances along with their properties in the context of MI. Based on these distances, sections <ref>, <ref>, and <ref> present the distance-based MI learning algorithms and numerical experiments for clustering, classification, and novelty detection, respectively. Section <ref> concludes the paper. § SET DISTANCES Machine learning tasks such as clustering, classification and novelty detection are mainly concerned with the grouping/separating of data based on their similarities/dissimilarities. A distance is a fundamental measure of dissimilarity between two objects. Hence, the notion of distance or metric is important to learning approaches without models <cit.>. 
In MI learning, several set distances have been introduced for PP data[A multi-set can be equivalently expressed as a set by augmenting the multiplicity of each element, i.e., a multi-set with elements x_1 repeated N_1 times, …, x_m repeated N_m times, can be represented as the set {(x_1,N_1),…,(x_m,N_m)}.], namely the Hausdorff <cit.> and Chamfer <cit.> distances. In this section, we present the Hausdorff <cit.>, Wasserstein <cit.>, and OSPA <cit.> distances. In particular, we discuss their properties and the implications in the context of design choices for MI learning algorithms. The choice of set distance in MI learning directly influences the performance, and hence it is important to select distances that are compatible with the applications. For completeness, we recall the definition of a distance function or metric on a non-empty space 𝒮. A function d:𝒮×𝒮→[0,∞) is called a metric if it satisfies the following three axioms: * (Identity) d(x,y)=0 if and only if x=y; * (Symmetry) d(x,y)=d(y,x) for all x,y∈𝒮; * (Triangle inequality) d(x,y)≤d(x,z)+d(z,y) for all x,y,z∈𝒮. Our interest lies in the distance between two finite subsets X={x_1,…,x_m} and Y={y_1,…,y_n} of a metric space (𝒲,d), where 𝒲 is a closed and bounded observation window, and d denotes the base distance between the elements of 𝒲. Note that d is usually taken to be the Euclidean distance when 𝒲 is a subset of ℝ^n.

§.§ Hausdorff distance The Hausdorff distance between two non-empty sets X and Y is defined by d_𝙷(X,Y) = max{ max_{x∈X} min_{y∈Y} d(x,y), max_{y∈Y} min_{x∈X} d(x,y) }. Note that the Hausdorff distance is not defined when either X or Y is empty. In addition to being a metric, the Hausdorff distance is easy to compute and was traditionally used as a measure of dissimilarity between binary images. It gives a good indication of the dissimilarity in the visual impressions that a human would typically perceive between two binary images. The Hausdorff distance has been successfully applied in applications dealing with PP data, such as detecting objects in binary images <cit.>, or measuring the dissimilarities between 3-D surfaces—sets of coordinates of points <cit.>. In MI learning it has been applied in classification <cit.> and clustering <cit.>. However, the Hausdorff distance can produce undesirable effects in many MI learning applications, since it may group together PPs that are intuitively dissimilar while separating PPs that are similar. Specifically: * The Hausdorff distance is relatively insensitive to dissimilarities in cardinality <cit.>. Consequently, it can group together PPs with large differences in cardinality (e.g., X and Y in Fig. <ref>). This can be undesirable in many applications since the cardinalities of the PPs are important in MI learning. * The Hausdorff distance heavily penalizes outliers—elements in one set which are far from every element of the other set <cit.>. Consequently, it tends to separate similar sets that differ only in a few outliers (e.g., X and Z in Fig. <ref>). This is undesirable in applications where the observed PPs of underlying groups are contaminated by outliers due to spurious noise. Nonetheless, there are applications where it is desirable to separate PPs with outliers from those without. Note that there are also generalizations of the Hausdorff distance that avoid the undesirable outlier penalty <cit.>. The Chamfer "distance" <cit.> is a variation of the Hausdorff construction, but does not satisfy the metric axioms.
In terms of measuring dissimilarity, it is very similar to the Hausdorff distance, and has been used to construct a Support Vector Machine kernel for MI classification in <cit.>. §.§ Wasserstein distance The Wasserstein distance (also known as Optimal Mass Transfer distance <cit.>) of order p≥1 between two sets X and Y is defined by <cit.>d_𝚆^(p)(X,Y)=min_C(∑_i=1^m∑_j=1^nc_i,jd(x_i,y_j)^p)^1/p,where C=(c_i,j) is an m× n transportation matrix (recall that m and n are the cardinalities of X and Y, respectively), i.e., c_i,j are non-negative and satisfies:∑_j=1^nc_i,j=1/m1≤i≤m,∑_i=1^mc_i,j=1/n1≤j≤n.Note that similar to the Hausdorff distance the Wasserstein distance is a metric <cit.> and is not defined when either X or Y is empty. The Wasserstein distance can be considered as the Earth Mover's distance <cit.> adapted for PPs <cit.>. Consider the sets X={x_1,...,x_m} and Y={y_1,...,y_n} as collections of earth piles at x_i each with mass 1/m and y_j each with mass 1/n, i.e., the total mass of each collection is 1, and suppose that the cost of moving a mass of earth over a distance is given by the mass times the distance. Then the Wasserstein distance (<ref>) can be considered as the minimum cost needed to build one collection of earth piles from the other. This is illustrated in Figs. <ref> and <ref>, where the arrows correspond to the optimal movements of the earth piles. Indeed the Earth Mover's distance has been used to construct a Support Vector Machine kernel for MI classification in <cit.>. The Wasserstein distance partially addresses the cardinality insensitivity and reduces the undesirable penalty on the outliers of the Hausdorff distance <cit.>, see for example Fig. <ref>. However, to the best of our knowledge, it has not been used in MI, and still has a number of drawbacks. * It is still possible for the Wasserstein distance to group together dissimilar sets while separating similar sets as illustrated in Fig. <ref>. Intuitively X and Z are very similar whereas X and Y are quite dissimilar, but the Wasserstein distance disagrees, i.e., d_𝚆^(2)(X,Y)≈1.6<d_𝚆^(2)(X,Z)≈2.3. The large Wasserstein distance between X and Z is due to the moving of earth from the bottom blue pile in Fig. <ref> over long distances (the two longest blue arrows to red piles in Fig. <ref>). Note that the elements of Z are not so balanced around the elements of X, and thus require the pile to be moved over long distances. On the other hand the elements of Y are more balanced around the elements of X thereby requiring less work and hence a smaller resulting distance. In general, the Wasserstein distance depends on how well balanced the numbers of points of X are distributed among the points of Y. * Both the Wasserstein and Hausdorff distances are not defined if one of the sets is empty. However, in PP data, empty PPs are not unusual. For example, in WiFi log data where each datum (a log record) is a set of WiFi access point IDs around the scanning device at a given time, there are instances when there are no WiFi access points leading to empty observations. In image data where each image is represented by a set of features describing some objects of interest, images without any object of interest are represented by empty PPs. 
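Before turning to the OSPA construction, the two distances discussed so far can be made concrete. The following is a minimal sketch (ours, for illustration only) of the Hausdorff distance (<ref>) and the Wasserstein distance (<ref>)-(<ref>), assuming each non-empty PP is stored as an m×d NumPy array and the base distance d is Euclidean; a dedicated optimal-transport solver would be preferable to this generic linear program for large PPs.

import numpy as np
from scipy.optimize import linprog

def base_dist(X, Y):
    # Pairwise Euclidean base distances d(x_i, y_j) as an m x n matrix.
    return np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)

def hausdorff(X, Y):
    D = base_dist(X, Y)
    return max(D.min(axis=1).max(),   # max over x of min over y
               D.min(axis=0).max())   # max over y of min over x

def wasserstein(X, Y, p=2):
    m, n = len(X), len(Y)
    cost = (base_dist(X, Y) ** p).ravel()
    # Equality constraints on the transportation matrix C = (c_ij):
    # row sums equal 1/m, column sums equal 1/n.
    A_eq, b_eq = [], []
    for i in range(m):
        row = np.zeros((m, n)); row[i, :] = 1.0
        A_eq.append(row.ravel()); b_eq.append(1.0 / m)
    for j in range(n):
        col = np.zeros((m, n)); col[:, j] = 1.0
        A_eq.append(col.ravel()); b_eq.append(1.0 / n)
    res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun ** (1.0 / p)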
§.§ OSPA distance The Optimal SubPattern Assignment (OSPA) <cit.> distance of order p≥1 and cutoff c>0 is defined by d_𝙾^(p,c)(X,Y) = ( (1/n)( min_{π∈Π_n} ∑_{i=1}^m d^{(c)}(x_i,y_{π(i)})^p + c^p(n−m) ) )^{1/p}, if n≥m>0 (recall that m and n are the cardinalities of X and Y, respectively), and d_𝙾^(p,c)(X,Y) = d_𝙾^(p,c)(Y,X) if m>n>0, where Π_n is the set of permutations of {1,2,…,n} and d^{(c)}(x,y) = min(c, d(x,y)). Further, d_𝙾^(p,c)(X,Y) = c if exactly one of the sets is empty, and d_𝙾^(p,c)(∅,∅) = 0. The two adjustable parameters p and c are interpreted as the outlier penalty and the cardinality sensitivity, respectively. Assuming p=1, to compute (<ref>) we assign m elements of Y to the m elements of X so as to minimize the total adjusted distance d^{(c)} (see Fig. <ref> for an illustration). This can be achieved via an optimal assignment procedure such as the Hungarian method. For each of the (n−m) elements in Y which are not assigned, we set a fixed distance of c. The OSPA distance is simply the average of these n distances (i.e., m optimal adjusted distances and (n−m) fixed distances c). Thus, the OSPA distance has a physically intuitive interpretation as the "per element" dissimilarity that incorporates both features and cardinality <cit.>. The OSPA distance is a metric with several salient properties that can address some of the undesirable effects of the Hausdorff and Wasserstein distances <cit.>. * The OSPA distance penalizes relative differences in cardinality in an impartial way by introducing an additive component on top of the average distance in the optimal sub-pattern assignment. The first term in (<ref>) is the dissimilarity in feature, while the second term is the dissimilarity in cardinality. * The OSPA distance is defined for any two PPs. It is equal to c (i.e., maximal) if exactly one of the two PPs is empty, and zero if both PPs are empty. * The outlier penalty can be controlled via the parameter p: the larger p is, the heavier the penalty on outliers. Note that the role of p in OSPA is similar to that in the Wasserstein distance; however, its effect is mitigated by the cutoff c. In practice it is common to use p=2.[For the rest of this paper, we use p=2, unless stated otherwise.] * The cutoff parameter c controls the trade-off between feature dissimilarity and cardinality difference (see Fig. <ref> for an illustration). Indeed, c determines the penalty for cardinality difference and is also the largest allowable base distance between constituent elements of any two sets. As a general guide: 1) to emphasize feature dissimilarity, c should be as small as the typical base distance between constituent elements of the PPs in the given dataset; 2) conversely, to emphasize cardinality difference, c should be larger than the maximum base distance in the given dataset; 3) for a balanced emphasis on cardinality and feature, a moderate value of c between the two aforementioned values should be chosen. Fig. <ref> shows four PPs X, Y, Z and O, where: Y has elements that are closest to the individual elements of X, but has a larger cardinality; Z has elements far away from the elements of X, but has the same cardinality; while O is visually most similar to X. In this scenario, the typical base distance between the elements of the PPs is about 1.4 and the maximum base distance is about 9.5. Choosing a small cutoff c=1.4 yields d_𝙾^(2,1.4)(X,Y) < d_𝙾^(2,1.4)(X,Z), indicating an emphasis of feature dissimilarity over cardinality difference.
Continuing the example, choosing a large cutoff c=15 yields d_𝙾^(2,15)(X,Y)>d_𝙾^(2,15)(X,Z), indicating an emphasis of cardinality difference over feature dissimilarity. Choosing a moderate cutoff c=6 makes O closest to X, indicating a balanced emphasis on both feature dissimilarity and cardinality difference.

We stress that while the OSPA distance offers more flexibility in design choices and some merits over the other distances, there is no single distance that works for all applications.

§ CLUSTERING OF POINT PATTERNS

In general, clustering is an unsupervised learning problem since the class (or cluster) labels are not provided <cit.>. The aim of clustering is to partition the data into groups so that members in a group are similar to each other whilst dissimilar to observations from other groups <cit.>. Clustering is a fundamental problem in data analysis with a long history dating back to the 1930s in psychology <cit.>. Comprehensive surveys on clustering can be found in <cit.>.

§.§ Problem Formulation

In a MI clustering context, the overall goal is to partition a given PP dataset 𝒟={X_1,...,X_N}⊆𝕏 into disjoint clusters which minimize the sum of (set) distances between PPs and their cluster centers, while penalizing the trivial partition 𝒫={{X_1},...,{X_N}} (i.e., each observation is a cluster) that yields zero sum of distances. More concisely, let μ:𝒟→𝕏 be a mapping that assigns a cluster center to each PP in 𝒟, i.e., μ(X) is the center of the cluster that X belongs to; then the clustering problem can be stated as

min_μ ∑_X∈𝒟 d(X,μ(X)) + γ(X) δ_X[μ(X)],

subject to μ(C)=C, ∀ C∈μ(𝒟),

where δ_A[B]=1 if A=B, and is 0 otherwise, and γ:𝒟→[0,∞) is a user-chosen penalty function that imposes a penalty for the selection of an observation X as its own cluster center, and hence penalizes the identity map μ:X↦X as a solution.

Remark: The mapping μ provides a partitioning 𝒫={𝒫_1,...,𝒫_|μ(𝒟)|} of the dataset 𝒟, where 𝒫_k={X∈𝒟:μ(X)=C_k} is the k^th cluster and μ(𝒟)={C_1,...,C_|μ(𝒟)|} is the set of cluster centers or centroids. The constraints ensure that if a PP C∈𝒟 is a cluster center, then C must belong to the cluster with center C. The user-defined penalty γ(X) can also be interpreted in terms of the preference for datum X to be a centroid: the smaller γ(X) is, the more we prefer X to be a centroid.

Note that the cluster center μ(X) of a datum X can be either defined as the mean (or, more generally, the Fréchet mean) of the observations in its group (e.g., k-means) or chosen among observations in the dataset, i.e., μ:𝒟→𝒟 (e.g., k-medoids). In general, the Fréchet mean of a collection of PPs is computationally intractable <cit.> and a better strategy is to select the centroids from the dataset. Such centroids, also known as `exemplars' <cit.>, can be efficiently computed as well as serving as real prototypes for the data. To the best of our knowledge, BAMIC <cit.> is the only exemplar-based clustering algorithm for PPs using a set distance (Hausdorff) as a measure of dissimilarity. The Hausdorff distance, used by BAMIC, has several undesirable properties, as discussed in section <ref>. Moreover, since BAMIC is based on the k-medoids algorithm, it requires the number of clusters as an input, which is not always available in practice. Determining the correct number of clusters is one of the most challenging aspects of clustering <cit.>. While it is possible to perform model selection via cross-validation for different numbers of clusters to decide on the best one, this process incurs substantial computational cost.
In addition, it is mathematically more principled to jointly determine the number of clusters and their centers. In this work, we propose a versatile MI clustering algorithm using the AP algorithm <cit.> with the OSPA distance as the dissimilarity measure. For the sake of performance comparison, we also include the Hausdorff and Wasserstein distances as baselines. Using message passing, AP provides good approximate solutions to problem (<ref>)-(<ref>) <cit.>, thereby determining the number of clusters automatically from the data (see details in section <ref>). Compared to k-medoids (used in BAMIC), AP can find clusters faster with considerably lower error <cit.> and does not require random initialization of cluster centers (since AP first considers all observations as exemplars). In addition, the OSPA distance does not suffer from the undesirable effects of the Hausdorff distance used in BAMIC, as well as being more flexible (section <ref>).

§.§ AP clustering with set distances

The AP algorithm has been widely used in several applications due to its ability to automatically infer the number of clusters and its fast execution time. The AP algorithm takes the similarity values between all pairs of observations in the dataset 𝒟={X_1,...,X_N} and user-defined exemplar preferences as input, and returns the `best' set of exemplars. The similarity values of interest in this work are the negatives of the OSPA distances between the PPs in 𝒟. The preference value for a datum X_n is the negative of the penalty, i.e., -γ(X_n); the larger its preference, the more likely X_n is an exemplar. In AP, the exemplar for an observation X_n (which could be X_n itself or another observation) is represented by a variable c_n, where c_n=k means that X_k is the exemplar for X_n. Note that a configuration (c_1,…,c_N) provides an equivalent representation of the decision variable μ:𝒟→𝒟 in problem (<ref>)-(<ref>) by defining μ(X_n)=X_k iff c_n=k. Treating each c_n as a random variable, a factor graph with nodes c_1,…,c_N can be constructed by encoding into the functional potentials the similarities between pairs of observations, the preferences for each observation, as well as constraints that ensure valid cluster configurations. Constraint (<ref>) means that in a valid configuration (c_1,…,c_N), c_c_n=c_n, i.e., if X_k is an exemplar for any observation, then the exemplar of X_k is X_k itself. This constraint can be enforced by setting the potential of any configuration (c_1,…,c_N) with c_c_n≠c_n to -∞. Ideally, performing max-sum message-passing yields a configuration that maximizes the sum of all potentials in this factor graph, and hence a solution to the clustering problem (<ref>)-(<ref>). AP is an efficient approximate max-sum message-passing algorithm using a protocol originally derived from loopy propagation on factor graphs <cit.>.[An equivalent binary graphical model representation for AP was later proposed in <cit.>. Instead of creating a latent node for each individual observation as in <cit.>, a binary node b_n,k is created for each pair (X_n,X_k), with b_n,k=1 if X_k is an exemplar for X_n. Message-passing on this new factor graph representation yields the same solution.] Further details on the AP algorithm can be found in <cit.>. In what follows, we discuss specific details for the clustering of PP data, summarized in Algorithm <ref>. The algorithm starts by computing all pairwise similarities input for AP, s(n,k)=-d_𝙾(X_n,X_k), and preferences s(k,k)=-γ(X_k).
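The algorithm then exchanges the two types of messages discussed in the next paragraphs. A compact vectorized sketch of the standard AP updates <cit.>, operating on the similarity matrix s(n,k), is given here; the damping factor, the iteration count, and the vectorization details are implementation assumptions, and the message equations referenced as (<ref>) in the text are taken to be these standard ones.

import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    # S: N x N similarity matrix with preferences on the diagonal, S[k,k] = -gamma_k.
    N = S.shape[0]
    R = np.zeros((N, N))   # responsibilities r(n, k)
    A = np.zeros((N, N))   # availabilities  a(n, k)
    for _ in range(iters):
        # r(n,k) = s(n,k) - max_{k' != k} [ a(n,k') + s(n,k') ]
        M = A + S
        idx = np.argmax(M, axis=1)
        first = M[np.arange(N), idx]
        M[np.arange(N), idx] = -np.inf
        second = np.max(M, axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(N), idx] = S[np.arange(N), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # a(n,k) = min(0, r(k,k) + sum_{n' not in {n,k}} max(0, r(n',k))),
        # a(k,k) = sum_{n' != k} max(0, r(n',k))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        Anew = np.minimum(0, Rp.sum(axis=0)[None, :] - Rp)
        np.fill_diagonal(Anew, Rp.sum(axis=0) - R.diagonal())
        A = damping * A + (1 - damping) * Anew
    return np.argmax(A + R, axis=1)   # exemplar index c_n for each datum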
A common practice is to give all observations the same preference, e.g., the median of the similarities (which results in a moderate number of clusters) or the minimum of the similarities (which results in a small number of clusters) <cit.>.

The AP algorithm passes two types of messages. The responsibility r(n,k), defined in (<ref>), indicating how well X_n trusts X_k as its exemplar, is sent from observation X_n to its candidate exemplar X_k. Then, the availability a(n,k), defined in (<ref>), reflecting the accumulated evidence for X_k to be an exemplar for X_n, is sent from a candidate exemplar X_k to X_n. Note from (<ref>)-(<ref>) that the responsibility r(n,k) is calculated from availability values that X_n receives from its potential exemplars, whereas the availability a(n,k) is updated using the `support' from observations that consider X_k as their candidate exemplar. Note from (<ref>) that when an observation is assigned to exemplars other than itself, its availability falls below zero. Such negative availabilities in turn decrease the effect of the input similarities s(n,k') in (<ref>), thereby eliminating the corresponding PPs from the set of potential exemplars.

The loopy propagation is usually terminated when changes in the messages fall below a threshold (see Algorithm <ref>), when the cluster assignments stay constant for some iterations, or when the number of iterations reaches a given value <cit.>. The cluster label c_n is the value of k that maximizes the sum r(n,k)+a(n,k) <cit.>.

§.§ Experiments

In this section we evaluate the performance of the proposed AP-based clustering algorithm on both simulated and real PP data. In particular, we compare the clustering performance amongst the Hausdorff, Wasserstein and OSPA distances. Note that BAMIC (which uses the k-medoids algorithm instead of AP) can be treated as AP clustering with the Hausdorff distance. Since the result of the AP algorithm depends on the choice of exemplar preferences, we first empirically select the exemplar preference that yields the best performance in terms of number of clusters for each distance, and then benchmark the best-case performance of one distance against the others. The relevant performance indicators are: Purity (Pu), Normalized mutual information (NMI), Rand index (RI), and F1 score (F1) <cit.>.

§.§.§ Clustering with simulated data

In this experiment, we consider three simulated datasets. Each dataset consists of 3 clusters, and each cluster consists of 200 PPs generated from a Poisson point process (PPP) with a 2-D Gaussian intensity.[This dataset is similar to that of <cit.>; in fact, they are simulated by the same mechanism.] In brief, a PP is sampled from a PPP with Gaussian intensity parameterized by (λ,μ,Σ) by first sampling the number of points from a Poisson distribution with rate λ, and then sampling the corresponding number of points independently from the Gaussian with mean and covariance (μ,Σ). The parameters for the PPPs used in this experiment are shown in Fig. <ref>. Three diverse scenarios are considered: in dataset (i) features of the PPs from each cluster are well separated from those of the other clusters, but their cardinalities significantly overlap (see Fig. <ref>); in dataset (ii) cardinalities of the PPs from each cluster are well separated from those of the other clusters, but their features significantly overlap (see Fig. <ref>); dataset (iii) is a mix of (i) and (ii) (see Fig. <ref>).
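For reproducibility, sampling a PP from a PPP with Gaussian intensity is straightforward; the sketch below illustrates the two-step mechanism just described (the specific parameter values are placeholders, not those of Fig. <ref>).

import numpy as np

rng = np.random.default_rng(0)

def sample_pp(lam, mu, cov):
    """Sample one point pattern from a PPP with 2-D Gaussian intensity (lam, mu, cov)."""
    n = rng.poisson(lam)                               # cardinality ~ Poisson(lam)
    return rng.multivariate_normal(mu, cov, size=n)    # features ~ N(mu, cov)

# e.g., one cluster of 200 PPs (illustrative parameters):
cluster = [sample_pp(10, [0.0, 0.0], np.eye(2)) for _ in range(200)]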
Three different cutoff values for the OSPA distance are tested: c=1 (small); c=12 (moderate); and c=26 (large). Note that c=1 is a typical value of the intra-PP base distance (i.e., the base distance between the features within the PPs in the dataset), c=26 is an estimate of the maximum intra-PP base distance, and c=12 is a moderate value of the intra-PP base distance. In dataset (i) (Fig. <ref>), the Hausdorff, Wasserstein and OSPA distances with small and moderate cutoffs show good performance. The OSPA distance with a large cutoff tends to emphasize the cardinality dissimilarities (which are negligible in this scenario) over feature dissimilarities (see subsection <ref>), leading to poor clustering performance. In dataset (ii) (Fig. <ref>), where cardinality difference is the main discriminative information, the Hausdorff and Wasserstein distances perform poorly since they are unable to capture cardinality dissimilarities between the PPs. The OSPA distance with a small cutoff tends to emphasize feature dissimilarities (which are negligible in this scenario) over cardinality dissimilarities (see section <ref>), leading to poor performance. On the other hand, the OSPA distance with moderate and large cutoffs performs better since it can appropriately capture cardinality dissimilarities. In dataset (iii) (Fig. <ref>), the results again confirm the discussion above. The OSPA distance with a moderate cutoff provides a balanced emphasis on both feature and cardinality dissimilarities, yielding the best performance.

§.§.§ Clustering with the Texture dataset

This experiment involves clustering images from the classes “T14 brick1”, “T15 brick2”, and “T20 upholstery” of the Texture images dataset <cit.>. Each class consists of 40 images, with some examples shown in Fig. <ref>. Each image is compressed into a PP of 2-D features by first applying the SIFT algorithm (using the VLFeat library <cit.>) to produce a PP of 128-D SIFT features, which is then further compressed into a 2-D PP by Principal Component Analysis (PCA). Fig. <ref> shows the superposition of the 2-D PPs from the three classes along with their cardinality histograms. Fig. <ref> shows that the OSPA distances (especially with c=20) outperform the Hausdorff and Wasserstein distances, since they can incorporate both feature and cardinality information. The poor performance of the Hausdorff and Wasserstein distances is due to the significant overlap in the features and their inability to measure cardinality dissimilarities in the data.

§.§.§ Clustering with the StudentLife dataset

This experiment involves WiFi scan data from the StudentLife dataset <cit.> collected from smartphones carried by students at Dartmouth College. At every preset interval, the phone automatically scans for surrounding WiFi access points and records the detected ones. Therefore, each observation is a PP of WiFi access point IDs (called WiFi IDs). The logs of these WiFi scans can be used to infer the history of visited places since scans containing similar PPs of WiFi IDs are normally recorded at close-by locations. Thus, estimating the visited locations from WiFi ID PPs can be formulated as a clustering problem, where each cluster represents a visited location. In the StudentLife data collection, the location of each scan (at the building level) is retrieved by mapping the detected WiFi IDs to the WiFi deployment information provided by Dartmouth Network Services. However, this deployment information is highly protected and is not available to the general public.
In this experiment, data from a random participant are pre-processed so as to keep only WiFi IDs appearing at least 10 times (544 such WiFi IDs). Further, only 4 locations that received more observations than the number of WiFi IDs are considered. Fig. <ref> shows the frequency histograms of WiFi IDs and the cardinality histogram of the observations collected from the 4 considered locations.

For performance assessment, we use the locations provided in the dataset as ground-truth. Observe from Fig. <ref> that the OSPA distance (other cutoff values have similar performance to that with c=300 and are not shown) performs better than both the Hausdorff and Wasserstein distances. However, the improvement is not drastic since there are substantial overlaps in both features and cardinalities between different clusters.

§ CLASSIFICATION OF POINT PATTERNS

Classification is the supervised learning task of assigning a class label ℓ∈{1,…,N_class} to each input observation X <cit.>. Unlike its unsupervised counterpart, i.e., clustering (section <ref>), classification relies on training data, which are fully-observed input-output pairs 𝒟_train={(X_n,ℓ_n)}_n=1^N_train <cit.>. Classification is arguably the most widely used form of supervised machine learning, spanning various fields of study <cit.>.

The classification problem can be approached with or without knowledge of the underlying data model <cit.>. In this paper, we focus on the so-called non-parametric classifiers, which do not require knowledge of the data model. Among non-parametric classifiers such as the Support Vector Machine (a binary classifier) <cit.>, the Parzen window <cit.>, and k-Nearest Neighbors (k-NN) <cit.>, k-NN is the most suited to PP data classification using set distances. The k-NN classifier has two phases: training and classifying. Contrary to eager learning algorithms, in which a model is learned from the training data in the training phase, the k-NN algorithm delays most of its computational effort to the classifying (or test) phase. In the training phase, the only task is storing the class labels of the training observations. In the test phase, when a new observation is passed in to query its label, the algorithm determines its k nearest observations, with respect to some distance, in the training set. The queried observation is then assigned the most popular label among its k nearest neighbours.

§.§ k-NN classification with set distances

In MI learning, PP classifiers based on the k-NN algorithm using set distances such as the Hausdorff <cit.>, Chamfer <cit.>, and Earth Mover's <cit.> distances have been proposed. However, the OSPA distance is more versatile and better at capturing feature and cardinality dissimilarities between PPs. Hence, the OSPA distance would be more effective with the k-NN algorithm for PP classification.

Unlike existing k-NN classification schemes that only store the class labels in the training phase, our proposed approach exploits the training data to learn a suitable dissimilarity measure. Since the fully observed training data can be used to assess whether the set distance agrees with the notion of similarity/dissimilarity of the application under consideration, in principle, a suitable distance can be learned. A simple approach is to perform cross-validation on the training data for a range of distances and select the best. Intuitively, a suitable distance entails small dissimilarities between observations in the same class, but large dissimilarities between observations from different classes.
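A minimal sketch of the k-NN classification rule with a plug-in set distance (e.g., the ospa function sketched earlier) is given below; the function and variable names are illustrative.

import numpy as np
from collections import Counter

def knn_classify(T, train_pps, train_labels, dist, k=3):
    # T: query point pattern; train_pps: list of PPs; dist: any set distance.
    d = np.array([dist(T, X) for X in train_pps])
    nearest = np.argsort(d)[:k]                       # indices of k nearest PPs
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]                 # most popular neighbour label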
Hence, for a given training dataset, we seek a distance (or its parameterization) that minimizes the ratio of intra-class dissimilarity to inter-class dissimilarity. In general, learning an arbitrary distance from training data is numerically intractable. However, it is possible to learn low-dimensional parameters such as the cut-off parameter in the OSPA distance. The OSPA distance provides the capability of adapting the weighting between feature dissimilarity and cardinality dissimilarity via the cut-off parameter c. While the right balance between feature and cardinality dissimilarities varies from one application to another, it can be learned from the fully observed training data via cross-validation. However, cross-validation is not suitable for small training datasets. In the following, we describe an alternative approach that also accommodates small datasets.

Let d̅_𝙾^(p,c)(X,C) denote the average OSPA distance, with cut-off c, from a PP X to its k nearest neighbours in a collection C (of PPs), and let C_ℓ denote the class of PP observations with class label ℓ in the training set. Then the intra-class dissimilarity for C_ℓ is defined by D̂^(p,c)(C_ℓ)=max_X∈C_ℓ d̅_𝙾^(p,c)(X,C_ℓ), while its inter-class dissimilarity is defined by Ď^(p,c)(C_ℓ)=min_j≠ℓ min_X∈C_ℓ d̅_𝙾^(p,c)(X,C_j). To enforce small intra-class dissimilarity and large inter-class dissimilarity, we seek cut-off parameters that minimize the worst-case (over the training dataset) ratio of intra-class dissimilarity to inter-class dissimilarity,

ρ(c) = max_ℓ ( D̂^(p,c)(C_ℓ) / Ď^(p,c)(C_ℓ) ).

The operations max and min in the definition of ρ can be replaced by averaging or a combination thereof. For large training datasets averaging is preferable.

§.§ Experiments

In the following experiments, we benchmark the classification performance of the OSPA distance against the Hausdorff[and hence the Chamfer “distance”, see subsection <ref>.] and Wasserstein[and hence the Earth Mover's distance, see subsection <ref>.] distances on both simulated and real data. Since the performance depends on the choice of k (the number of nearest neighbours), we ran our experiments for each k∈{1,...,10} and benchmark the best-case performance of one distance against the others.

§.§.§ Classification of simulated data

This experiment examines the classification performance on the three diverse scenarios from the simulated datasets of section <ref>. Using 10-fold cross-validation, the average classification performance is summarized in Fig. <ref>. Observe that in dataset (i), where features of the PPs from one cluster are well separated from those of the other clusters, all distances perform well. In datasets (ii) and (iii), where features of the PPs from one cluster overlap with those of the other clusters, the OSPA distance outperforms the Hausdorff and Wasserstein distances since it can appropriately capture the cardinality dissimilarities in the data (Fig. <ref>).

§.§.§ Classification of Texture data

This experiment examines the classification of the extracted PP data of section <ref>, consisting of three classes from the Texture images dataset. Using 4-fold cross-validation, the average performance is summarized in Fig. <ref>. Observe that for this dataset the OSPA distance also performs best, since it can give a good balance between feature and cardinality dissimilarities.

§.§.§ Classification of StudentLife data

This experiment examines the classification of the StudentLife WiFi dataset of section <ref>.
Using 10-fold cross-validation, the average performance is summarized in Fig. <ref>. For this dataset, the Hausdorff, Wasserstein and OSPA distances all achieve good performance, since the features (i.e., WiFi IDs) from the PPs of each cluster are well separated from those of the other clusters.

§ NOVELTY DETECTION FOR POINT PATTERNS

Novelty detection is the task of identifying new or strange data that are significantly different from `normal' training data <cit.>. Note that novelty detection is not a special case of classification because anomalous or novel training data are not available <cit.>. There are typically two phases in novelty detection: training and detection. Since its training phase requires only normal data, novelty detection is considered a semi-supervised learning problem <cit.>. Novelty detection is a fundamental problem in data analysis with a plethora of application areas ranging from intrusion detection <cit.>, fraud detection <cit.>, and structural health monitoring <cit.>, to tumor detection from MRI images <cit.>. However, novelty detection for point pattern data has not been studied.

This section introduces a solution to the novelty detection problem for PP data by incorporating set distances into the nearest neighbour algorithm. Like classification, novelty detection can be approached with or without knowledge of the underlying data model. The most common non-parametric novelty detection technique is nearest neighbour <cit.>, which is based on the assumption that normal observations are closer to the training data than novelties <cit.>. This approach requires a suitable notion of distance between observations <cit.>.

§.§ Novelty detection with set distances

If the distance (e.g., Hausdorff, Wasserstein or OSPA) between the candidate PP and its nearest normal neighbour (NNN)[This can be interpreted as the Hausdorff distance between the candidate and the normal data class.] is greater than a given threshold, then the candidate is deemed a novelty; otherwise it is normal. A suitable threshold can be chosen experimentally <cit.>. One suitable threshold is the 95th percentile of the distances between normal training observations and their NNNs. However, no single threshold is guaranteed to work well in all cases. Similar to classification with OSPA (section <ref>), training data can be used to determine a suitable balance between feature dissimilarity and cardinality dissimilarity. However, there is no inter-class dissimilarity here, and hence minimizing the intra-class dissimilarity for the normal data yields the trivial solution c=0. To determine a suitable balance, consider the cardinality dissimilarity d_𝚌𝚊𝚛𝚍^(p)(X,Y)=1/n(n-m) and the feature dissimilarity d_𝚏𝚎𝚊𝚝^(p)(X,Y)=1/n min_π∈Π_n ∑_i=1^m d(x_i,y_π(i))^p between all pairs of observations X,Y in the normal training set (assuming the cardinality m of Y is not greater than the cardinality n of X; otherwise we compute d_𝚌𝚊𝚛𝚍^(p)(Y,X) and d_𝚏𝚎𝚊𝚝^(p)(Y,X)). Note that for d_𝚏𝚎𝚊𝚝^(p) we use the base distance d to capture the absolute feature dissimilarity rather than the capped feature dissimilarity from the base distance d^(c). To decide whether a test PP T is novel, we need to determine its cardinality dissimilarity and feature dissimilarity relative to the normal data.
The relative cardinality dissimilarity and feature dissimilarity of T (with respect to the normal data) can be defined as d_𝚌𝚊𝚛𝚍^(p)(T,T^*)/m_𝚌𝚊𝚛𝚍^(p) and d_𝚏𝚎𝚊𝚝^(p)(T,T^*)/m_𝚏𝚎𝚊𝚝^(p), where T^* is T's NNN, and m_𝚌𝚊𝚛𝚍^(p) and m_𝚏𝚎𝚊𝚝^(p) are large values (e.g., the maximum or the 95th percentile) of d_𝚌𝚊𝚛𝚍^(p)(Y,X) and d_𝚏𝚎𝚊𝚝^(p)(Y,X) in the normal dataset, respectively. Observe that summing the relative dissimilarities and scaling by m_𝚏𝚎𝚊𝚝^(p) gives the uncapped OSPA “distance”

(d_𝙾^(p)(T,T^*))^p = (m_𝚏𝚎𝚊𝚝^(p)/m_𝚌𝚊𝚛𝚍^(p)) d_𝚌𝚊𝚛𝚍^(p)(T,T^*) + d_𝚏𝚎𝚊𝚝^(p)(T,T^*).

Hence, a suitable cut-off parameter is c=(m_𝚏𝚎𝚊𝚝^(p)/m_𝚌𝚊𝚛𝚍^(p))^1/p.

§.§ Experiments

In this section, we examine the novelty detection performance of the Hausdorff, Wasserstein and OSPA distances on both simulated and real data.

§.§.§ Novelty detection with simulated data

In this experiment, we consider cluster 2 from the simulated dataset in subsection <ref> as normal data, and clusters 1 and 3 as novel data. This allows us to study three diverse scenarios: dataset (i), see Fig. <ref>, is an example of feature novelty, where novel observations are similar in cardinality to the normal training data, but dissimilar in features; dataset (ii), shown in Fig. <ref>, is an example of cardinality novelty, where novel observations are similar in features to the normal training data, but dissimilar in cardinality; dataset (iii), shown in Fig. <ref>, is a mix of feature and cardinality novelty.

Using 10-fold cross-validation, the average performance is summarized in Fig. <ref>. Fig. <ref> shows boxplots of the distances between the test PPs and their NNNs. Observe that in datasets (i) and (iii), where novelties are dissimilar to the normal data in features (see Figs. <ref> and <ref>), all distances perform well. In dataset (ii), where novelties are dissimilar to the normal data in cardinality, but similar in features (see Fig. <ref>), the OSPA distance outperforms the Hausdorff and Wasserstein distances since it can appropriately penalize the cardinality dissimilarity between normal and novel data (see Fig. <ref>).

§.§.§ Novelty detection with Texture data

Using the Texture dataset from subsection <ref>, normal data are taken from class “T14 brick1” and novel data are taken from class “T20 upholstery”. We use 4-fold cross-validation. In each fold, the training data consist of 75% of the images from the normal class (30 images), while the testing set includes the remaining images from the normal class (10 images) and 25% of the images from the novel class (10 images). Observe that the performance of the set distances (Hausdorff, Wasserstein, and OSPA) on this dataset is similar to that on the simulated dataset in section <ref>. Since normal and novel data are dissimilar in features (see the feature plot in Fig. <ref>), all distances perform well (Fig. <ref>).

§.§.§ Novelty detection with StudentLife WiFi data

Using the StudentLife WiFi dataset described in subsection <ref>, we consider observations from locations 1 and 2 as normal data and observations from locations 3 and 4 as novelties. Using 10-fold cross-validation, the average performance is summarized in Fig. <ref>. Observe that all three distances (OSPA, Hausdorff and Wasserstein) perform similarly for this dataset, with an average F1 score of about 0.83.

§ CONCLUSIONS

In this paper, algorithms for clustering, classification, and novelty detection with point pattern data using the OSPA distance have been presented.
In clustering, AP is combined with the OSPA (or another set, such as the Wasserstein or Hausdorff) distance as the dissimilarity measure. In classification, the OSPA distance is incorporated into the k-nearest neighbour (k-NN) algorithm. In MI novelty detection, a solution is developed using the set distance between the candidate PP and its nearest normal neighbour in the training set. Numerical experiments on simulated and real data demonstrated that the OSPA distance offers more flexibility in design choices as well as the ability to better capture dissimilarities between sets compared to the other distances. We reiterate that while the OSPA distance does offer some merits over the other distances, there is no single distance that works for all applications.
http://arxiv.org/abs/1703.08933v1
{ "authors": [ "Quang N. Tran", "Ba-Ngu Vo", "Dinh Phung", "Ba-Tuong Vo", "Thuong Nguyen" ], "categories": [ "cs.LG" ], "primary_category": "cs.LG", "published": "20170327052332", "title": "Multiple Instance Learning with the Optimal Sub-Pattern Assignment Metric" }
TCP in 5G mmWave Networks: Link Level Retransmissions and MP-TCP

Michele Polese^*, Rittwik Jana^†, Michele Zorzi^*

^*Department of Information Engineering, University of Padova, Italy. e-mail: {polesemi, zorzi}@dei.unipd.it

^†AT&T Labs-Research, Bedminster NJ, USA. e-mail: rjana@research.att.com

=================================================================================================================================================================================================================================================

§ INTRODUCTION

Entropy is an important fundamental property of many-body systems. It governs their thermodynamics, heat transfer, and thermoelectric and thermomagnetic properties. On the other hand, the entropy has always been hard to measure directly in experiment. It has been revealed very recently that the entropy per particle, ∂S/∂n, where n is the electron density, can be experimentally studied <cit.>. To be more precise, the measured quantity is the temperature derivative of the chemical potential, ∂μ/∂T, which may be extracted by modulating the temperature of the gated structure, with a 2D electron gas playing the role of one of the plates of a capacitor. Both derivatives are equal as a consequence of the Maxwell relations:

s ≡ (∂S/∂n)_T = -(∂μ/∂T)_n.

In the theoretical paper <cit.>, quite surprisingly, it has been pointed out that in a quasi-two-dimensional electron gas (2DEG) with parabolic dispersion the entropy per electron, s, exhibits quantized peaks at resonances between the chemical potential and the size quantization levels. The amplitude of such peaks in the absence of scattering depends only on the subband quantization number and is independent of material parameters, the shape of the confining potential, the electron effective mass, and the temperature. The quantization of entropy per electron was interpreted in <cit.> as a signature of the Lifshitz electronic topological transition <cit.>, which in the 2D case is characterised by a discontinuity in the electronic density of states (DOS). The latter is caused by a change of the topological properties, viz. the connectivity, of the electronic Fermi surface <cit.>. Lifshitz transitions widely occur in multi-valley semimetals, doped semiconductor quantum wells, multi-band superconducting systems such as iron-pnictide compounds <cit.>, and also in 2D Dirac materials, as we discuss below.

In this Report, we analyze theoretically the behavior of the entropy per particle as a function of the chemical potential in gapped graphene deposited on a substrate and in other low-buckled Dirac materials, e.g. silicene and germanene. We show that the entropy per electron in these systems acquires quantized universal values at low temperatures if the chemical potential passes through the edge of one of the successive gaps. It is a universal property of electronic systems characterised by a step-like behaviour of the density of states. If the chemical potential is resonant with the Dirac point, we find a discontinuity in s at very low temperature. At low but finite temperatures this discontinuity transforms into the combination of a very sharp dip at negative chemical potential followed by a sharp peak at positive chemical potential. These predictions offer a new tool for the characterisation of novel crystalline structures. In particular, the very characteristic spikes of entropy, which should be relatively easy to observe, are indicative of the successive gaps, in particular those due to spin-orbit interaction.
We believe that measurements of the entropy per particle (e.g. following the technique of Ref. <cit.>) may reveal hidden peculiarities of the band structure of new materials.

§ RESULTS

§.§ The link between the discontinuity of the DOS and the quantization of entropy

To start with, let us consider an electronic system characterised by a DOS function D(ϵ) that has a discontinuity. In order to describe Dirac materials specifically, we assume that the DOS is a symmetric function, D(ϵ)=D(-ϵ), although this assumption is not essential. We shall assume that the DOS has 2N discontinuities at the points ϵ=±Δ_i and that it can be presented in the form

D(ϵ) = f(ϵ) ∑_i=1^N θ(ϵ^2-Δ_i^2).

The function f(ϵ) is assumed to be a continuous even function of energy ϵ, and it may account for the renormalizations due to electron-electron interactions in the system.

The case N=1 corresponds to gapped graphene with the dispersion law ϵ(k) = ±√(ħ^2 v_F^2 k^2 + Δ^2) and f(ϵ) = 2|ϵ|/(πħ^2 v_F^2), where we have taken into consideration both the valley and spin degeneracy. Here Δ is the gap, v_F is the Fermi velocity, and k is the wavevector. A global sublattice asymmetry gap 2Δ ∼ 350 can be introduced in graphene <cit.> if it is placed on top of hexagonal boron nitride (G/hBN) and the crystallographic axes of graphene and hBN are aligned.

The case N=2 corresponds to silicene <cit.>, germanene <cit.> and other low-buckled Dirac materials <cit.>. The dispersion law in these materials reads ϵ_ησ(k) = ±√(ħ^2 v_F^2 k^2 + Δ_ησ^2), where η and σ are the valley and spin indices, respectively. Here the valley- and spin-dependent gap is Δ_ησ = Δ_z - ησΔ_SO, where Δ_SO is the material-dependent spin-orbit gap caused by a strong intrinsic spin-orbit interaction. It has a relatively large value, e.g. Δ_SO ≈ 4.2 meV in silicene and Δ_SO ≈ 11.8 meV in germanene. The adjustable gap Δ_z = E_z d, where 2d is the separation between the two sublattices situated in different planes, can be tuned by applying an electric field E_z. The function f(ϵ) = |ϵ|/(πħ^2 v_F^2) is twice smaller than that for graphene, because the first transition in Eq. (<ref>) with i=1 corresponds to η = σ = ± with Δ_1 = |Δ_SO - Δ_z|, and the second one with i=2 corresponds to η = -σ = ± with Δ_2 = |Δ_z + Δ_SO|.

Since the DOS is a symmetric function, instead of the total density of electrons it is convenient to operate with the difference between the densities of electrons and holes (see the Methods), given by

n(T,μ,Δ_1,Δ_2,…,Δ_N) = 1/4 ∫_-∞^∞ dϵ D(ϵ) [ tanh((ϵ+μ)/(2T)) - tanh((ϵ-μ)/(2T)) ],

where we set k_B=1. Clearly, n(T,μ) is an odd function of μ and n(T,μ=0)=0. The density n in the Dirac materials may be controlled by an applied gate voltage. In what follows we consider the dependence of s on the chemical potential.

As mentioned above, the entropy per particle is directly related to the temperature derivative of the chemical potential at fixed density n (see Eq. (<ref>)). The latter can be obtained using the thermodynamic identity

(∂μ/∂T)_n = -(∂n/∂T)_μ (∂n/∂μ)_T^{-1}.

If the chemical potential is situated between the discontinuity points, Δ_i < |μ| < Δ_{i+1}, and T→0, one obtains for the first derivative in Eq. (<ref>) (see the Methods)

∂n(T,μ)/∂T = sign(μ) D'(|μ|) π^2 T/3,    Δ_i < |μ| < Δ_{i+1}.

On the other hand, at the discontinuity points μ=±Δ_J at T→0, one finds
∂n(T,μ)/∂T |_{μ=±Δ_J} = ±[ D(Δ_J+0) - D(Δ_J-0) ] ∫_0^∞ (x dx)/cosh^2 x = ± f(Δ_J) ln 2.

One can see that the factor of ln 2 originates from the integration of the derivative of the Fermi distribution (or of (1/2)tanh z) multiplied by the energy. If μ=±Δ_J with J≤N and T→0, one obtains for the second derivative in Eq. (<ref>)

∂n(T,μ)/∂μ |_{μ=±Δ_J} = f(Δ_J) ∑_i=1^N θ(Δ_J^2 - Δ_i^2) = f(Δ_J)(J - 1/2),

where the first J-1 θ-functions contribute J-1 and the last one contributes 1/2. Thus, we arrive at the conclusion that the entropy per particle in Dirac materials is

s(T→0, μ=±Δ_J) = ± ln 2/(J - 1/2),    J=1,2,…,N,

while for Δ_i < |μ| < Δ_{i+1} it vanishes. One can see that the behaviour of the entropy per particle as a function of the chemical potential in gapped Dirac systems is analogous to the one found in a quasi-2DEG with parabolic dispersion <cit.>. This fact allows us to speculate that such universal spikes are related to the topological changes of the Fermi surface rather than to the specific form of the spectrum.

§.§ Gapped Dirac materials

In the particular case of gapped graphene, the integral (<ref>) can be done analytically <cit.>:

n(T,μ,Δ) = (2T^2/(πħ^2 v_F^2)) [ (Δ/T) ln( (1+e^{(μ-Δ)/T}) / (1+e^{-(μ+Δ)/T}) ) + Li_2(-e^{-(μ+Δ)/T}) - Li_2(-e^{(μ-Δ)/T}) ],

where Li_2 is the dilogarithm (polylogarithm) function. The derivatives (∂n/∂T)_μ and (∂n/∂μ)_T are calculated in the Methods, Eqs. (<ref>) and (<ref>).

The density of carriers in silicene can be described using the formalism developed above for graphene by formally representing silicene as a superposition of two gapped graphene layers characterised by different gaps:

n(T,μ,Δ_1,Δ_2) = 1/2 [ n(T,μ,Δ_1) + n(T,μ,Δ_2) ].

Once the carrier imbalance function n(T,μ,Δ_1,Δ_2,…,Δ_N) is found, the entropy per electron can be calculated using Eqs. (<ref>) and (<ref>). In Fig. <ref> (a) and (b) we show the dependence s(μ) for graphene and silicene, respectively, for three different values of T. Since the entropy per electron is an odd function of μ, only the region μ>0 is shown. In the case of silicene we express μ and T in units of the smaller gap, Δ_1. The dependence s(μ) in the vicinity of the second gap, μ = Δ_2 = 2Δ_1, is shown in the inset of Fig. <ref> (b) to resolve the spike structure for three temperatures lower than the values on the main plot.

The most prominent feature seen in Fig. <ref> (a) and (b) is a sharp peak observed for the chemical potential in the temperature vicinity of the Dirac point, |μ| ∼ T. If the chemical potential is inside the gap but not very close to the Dirac point, T ≪ |μ| < Δ and T ≪ Δ-|μ|, the entropy per particle in gapped graphene is

s(T,μ,Δ) ≃ sign(μ) [ (Δ-|μ|)/T + 1 + T/(Δ+T) ].

Near the Dirac point, |μ| ≪ T ≪ Δ, one finds

s(T,μ,Δ) ≃ (μΔ/T^2) [ 1 + O(e^{-Δ/T}) ].

If the chemical potential crosses the Dirac point at T=0, the transition from hole-like to electron-like carriers is singular. Eqs. (<ref>) and (<ref>) show how the temperature smears it. The peak inside the gap is mainly due to the specific dependence of the chemical potential on the electron density. Indeed, since s = ∂S(T,μ)/∂n = (∂S(T,μ)/∂μ)(∂μ/∂n), the dependence s(μ) is governed by the sharpest function in the product. The chemical potential grows rapidly at small density n and then quickly reaches the value |μ| ≃ Δ, where the derivative ∂μ/∂n becomes small.
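The low-temperature results above are easy to verify numerically. The sketch below evaluates the carrier imbalance of Eq. (<ref>) by direct quadrature for gapped graphene and obtains s from the thermodynamic identity via finite differences; the choice of units (πħ^2 v_F^2 = 1), the integration cutoff, and the step size are numerical assumptions.

import numpy as np
from scipy.integrate import quad

def carrier_imbalance(T, mu, gap):
    # Eq. for n(T, mu) with D(e) = 2|e| theta(e^2 - gap^2), units pi*hbar^2*v_F^2 = 1.
    dos = lambda e: 2.0 * abs(e) if abs(e) > gap else 0.0
    f = lambda e: 0.25 * dos(e) * (np.tanh((e + mu) / (2 * T))
                                   - np.tanh((e - mu) / (2 * T)))
    upper = max(gap, abs(mu)) + 60 * T          # integrand decays exponentially
    val, _ = quad(f, gap, upper, limit=200)
    return 2 * val                              # integrand is even in e

def entropy_per_particle(T, mu, gap, h=1e-4):
    # s = (dn/dT)_mu / (dn/dmu)_T = -(dmu/dT)_n, via central differences.
    dn_dT = (carrier_imbalance(T + h, mu, gap)
             - carrier_imbalance(T - h, mu, gap)) / (2 * h)
    dn_dmu = (carrier_imbalance(T, mu + h, gap)
              - carrier_imbalance(T, mu - h, gap)) / (2 * h)
    return dn_dT / dn_dmu

# At the gap edge and low T, s approaches 2 ln 2 ≈ 1.386 (plus a small O(T/Delta) term):
print(entropy_per_particle(T=0.01, mu=1.0, gap=1.0))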
The peaked behavior of s may be considered a smoking gun for gap opening in gapped Dirac materials.

Near the Lifshitz transition points, μ = ±Δ, we observe that the dependences s(μ) are monotonic functions, so these points are not marked by spikes. This is typical for any system where the DOS has just one discontinuity <cit.>. Nevertheless, the entropy-per-particle quantization rule for graphene, s(μ=±Δ) = ±2 ln 2, is fulfilled. One can see that in both panels of Fig. <ref>, at low temperatures all curves cross each other near this point. The corresponding value s = 2 ln 2 is shown by the dotted line. This numerical result can be confirmed analytically. For T ≪ Δ we obtain

s(T,μ=Δ,Δ) = 2 ln 2 + ((π^2 - 12 ln^2 2)/3)(T/Δ) + O(T^2).

Now we briefly discuss the effect of broadening of the energy levels due to scattering from static defects. Let us smear the DOS function (<ref>) by convoluting it with the Lorentzian γ/[π(ω^2+γ^2)], where γ is the scattering rate. In the regime γ ≪ T ≪ Δ one finds

s(T,μ=Δ,Δ) = 2 ln 2 [ 1 - (γ/T)( 1/(π ln 2) + T/Δ ) ].

Eq. (<ref>) shows that the universality of the low-temperature entropy per particle is broken by disorder if the mean free path becomes comparable with the thermal diffusion length.

The case Δ=0 deserves special attention. In this limit, Eq. (<ref>) acquires a simple form (see the Methods, Eqs. (<ref>) and (<ref>)). For the entropy per particle one finds

s(T,μ,0) = (μ/T)(1 - (1/(6 ln 2)) μ^2/T^2) for |μ| ≪ T, and s(T,μ,0) = (π^2/3)(T/μ) for T ≪ |μ|.

It is important to note that the second expression in Eq. (<ref>), if multiplied by the factor k_B/e, yields the Seebeck coefficient for a free electron gas <cit.>. Moreover, the general expression for s = -∂μ/∂T, Eq. (<ref>), reproduces the thermal power S that can be extracted from results based on the Kubo formalism <cit.>, which validates the thermodynamic approach of <cit.>.

The presence of the second gap in silicene and similar materials, Δ_2 > Δ_1, results in the appearance of a peak in s(μ) ≈ ±2 ln 2/3 near the points μ = ±Δ_2, as seen in Fig. <ref>(b). The corresponding value s = 2 ln 2/3 is shown by the dotted line. This peak can be considered a signature of the second Lifshitz transition, which occurs when μ crosses Δ_2. Indeed, as shown for the quasi-2DEG in <cit.>, the peak structure in s(μ) develops only if the number of discontinuities in the DOS is N ≥ 2. Thus, these prospective Dirac materials, where the spin-orbit interaction plays a very important role, allow the simplest realization of the N=2 case with two discontinuities on both the electron and hole sides of the total DOS.

Fig. <ref> shows the 3D and density plots of s as a function of μ/Δ_1 and T/Δ_1. To be specific, we assumed that Δ_1 is the smaller of the gaps and chose Δ_2 = 4Δ_1. The black and blue lines correspond to the contours of constant values s = ±2 ln 2 and s = ±2 ln 2/3, respectively. The range of s in the 3D plot is restricted to -2 ≤ s ≤ 2, so that only the peaks at μ = ±Δ_2 can be seen.

A more careful examination of Fig. <ref> (b) shows that the peak occurring near μ = Δ_2 is somewhat shifted to values of μ smaller than Δ_2. Looking at Fig. <ref> (b) and its inset, one can trace how the position of this peak moves towards the point (μ = Δ_2, T = 0) as the temperature decreases. In Fig. <ref> (a) the increase of its height can be seen. Close to this point (T ≪ Δ_2) we obtain analytically

s(T,μ=±Δ_2) = ±[ 2 ln 2/3 + ((π^2 - 4 ln^2 2)/9)(T/Δ_2) ].

As for the behaviour of silicene's entropy per particle close to the smaller gap Δ_1, it is described by Eq.
(<ref>) with Δ replaced by Δ_1.

Recent successes in the fabrication of silicene field-effect transistors <cit.> offer the opportunity of a direct measurement of the entropy per particle in silicene. In the prospective experiment, a double-gate structure would be needed, enabling one to tune μ and Δ_z independently. Such a situation is modelled in Fig. <ref>, where we show the 3D and density plots of s as a function of μ/Δ_SO and Δ_z/Δ_SO. As in Fig. <ref>, the black and blue lines correspond to the contours of constant values s = ±2 ln 2 and s = ±2 ln 2/3, respectively. The points Δ_z = ±Δ_SO correspond to the cases where Δ_1 = 0 and Δ_2 = 2Δ_SO, or Δ_1 = 2Δ_SO and Δ_2 = 0, so that the system experiences a transition from a two-gap to a one-gap spectrum. For |Δ_z| < Δ_SO the system is a topological insulator, and for |Δ_z| > Δ_SO it is a band insulator.

§ DISCUSSION

We presented original analytical expressions for the entropy per particle in a wide energy range for various Dirac materials. Based on them, we have predicted characteristic spikes of the entropy per particle at the Lifshitz topological transition points in several 2D Dirac systems. The magnitude of the spikes is quantized at low temperatures and is independent of material parameters. The quantized spikes are expected to occur in silicene and germanene. They can also be found in gapped graphene in the presence of Zeeman splitting and in quasi-two-dimensional Dirac and Weyl materials. Note that the same quantization of entropy and the same spikes occur in a 2DEG in the presence of Zeeman splitting <cit.>; see the Methods.

Our results are based on the assumption that the function f(ϵ) in the DOS (<ref>) is continuous. Although this assumption is quite general, it is not fulfilled, for example, in bilayer graphene. The overall behavior of the entropy per electron ∂S/∂n as a function of the electronic chemical potential may be used as a tool for the characterization of the electronic dispersion in novel crystal structures. The crucial point is that ∂S/∂n is related to the temperature derivative ∂μ/∂T via the thermodynamic Maxwell relation (<ref>). The latter, as mentioned in the Introduction, can be directly measured using the experimental approach developed in <cit.>. It appears that this technique has a three orders of magnitude higher resolution than other methods, and thus it can be very helpful in probing interaction effects in 2D electron systems. Measurements of the entropy per particle can also be used to study the effect of interactions on the DOS in graphene, because the renormalization of the Fermi velocity due to electron-electron interactions <cit.> modifies the function s(n).

§ METHODS

§.§ Relationship between the carrier density and carrier imbalance

At thermal equilibrium, the total density of electrons in a nonrelativistic system can be expressed as

n_tot(T,μ) = ∫_-∞^∞ dϵ D(ϵ) f_FD(ϵ-μ),

where f_FD(ϵ) = 1/[exp(ϵ/T)+1] is the Fermi-Dirac distribution function and we set k_B=1. In a relativistic theory, for example in QED, the number of electrons or positrons is not conserved, while a conserved number operator is needed to build the statistical density matrix <cit.>. In QED, the conserved quantity is the difference of the numbers of positively and negatively charged particles: electrons and positrons. In Dirac materials, the “relativistic” nature of the carriers is encoded in the symmetric DOS function, D(ϵ) = D(-ϵ).
Accordingly, it is convenient to operate with the difference between the densities of electrons and holes instead of the total density of electrons <cit.>. The difference is given by

n(T,μ) = ∫_-∞^∞ dϵ D(ϵ) { f_FD(ϵ-μ) θ(ϵ) - [1 - f_FD(ϵ-μ)] θ(-ϵ) } = -1/2 ∫_-∞^∞ dϵ D(ϵ) tanh((ϵ-μ)/(2T)).

The last equation can be rewritten in the form of Eq. (3). One can verify that the carrier imbalance n(T,μ) and the total carrier density n_tot(T,μ) are related by the expression n(T,μ) = n_tot(T,μ) - n_hf, where n_hf is the density of particles for a half-filled band (in the lower Dirac cone),

n_hf = ∫_-∞^∞ dϵ D(ϵ) θ(-ϵ).

Consequently, it makes no difference whether the entropy per particle in Eq. (<ref>) is defined via the total carrier density n_tot or via the carrier imbalance n.

§.§ General expressions for ∂n/∂T and ∂n/∂μ

The first temperature derivative in Eq. (<ref>) depends on whether the chemical potential μ hits a discontinuity of the DOS D(ϵ) given by Eq. (<ref>). Differentiating Eq. (<ref>) over the temperature, one obtains

∂n(T,μ)/∂T = (sign(μ)/(4T)) ∫_-∞^∞ dϵ D(ϵ) [ ((ϵ-|μ|)/(2T)) cosh^{-2}((ϵ-|μ|)/(2T)) - ((ϵ+|μ|)/(2T)) cosh^{-2}((ϵ+|μ|)/(2T)) ].

Changing the variable to ϵ = 2Tx ± |μ| in the two terms and changing the limits of integration, one obtains

∂n(T,μ)/∂T = sign(μ) ∫_0^∞ dx [ D(|μ|+2Tx) - D(|μ|-2Tx) ] x/cosh^2 x.

If the DOS D(ϵ) has a continuous derivative at the point ϵ = |μ|, where Δ_i < |μ| < Δ_{i+1}, one can expand D(|μ|+2Tx) - D(|μ|-2Tx) ≃ 4Tx D'(|μ|). Then, integrating over x, we arrive at Eq. (<ref>):

∂n(T,μ)/∂T ≃ 4T sign(μ) D'(|μ|) ∫_0^∞ (x^2 dx)/cosh^2 x = sign(μ) D'(|μ|) (π^2/3) T.

On the other hand, at the discontinuity points μ = ±Δ_J at T→0, we arrive at Eq. (<ref>). The second derivative in Eq. (<ref>) in the zero-temperature limit is just the DOS. Indeed, we have

∂n(T,μ)/∂μ = (1/(8T)) ∫_-∞^∞ dϵ D(ϵ) [ cosh^{-2}((ϵ+μ)/(2T)) + cosh^{-2}((ϵ-μ)/(2T)) ] = D(μ),  T→0.

This is because (1/(4T)) cosh^{-2}(x/(2T)) → δ(x) for T→0. Substituting the DOS given by Eq. (<ref>) into Eq. (<ref>), we arrive at Eq. (<ref>).

§.§ Explicit expressions for the derivatives ∂n/∂T and ∂n/∂μ for the Dirac materials

The carrier imbalance for gapped graphene is given by Eq. (<ref>). The corresponding derivatives are
Quantization of entropy in a quasi-two-dimensional electron gas. Phys. Rev. B 93, 155404 (2016).Lifshitz1960JETP Lifshitz I.M.Anomalies of Electron Characteristics of a Metal in the High Pressure. Zh. Eksp. Teor. Fiz. 38, 1569 - 1576 (1960) [Sov. Phys. JETP 11, 1130 - 1135 (1960).]Blanter1994PRBlanter, Ya.M., Kaganov, M.I.,Pantsulaya, A.V.& Varlamov, A.A. The theory of electronic topological transitions. Phys. Rep. 245, 159-257(1994). Rodriguez2016JPCMRodriguez, J.P. Collective mode at Lifshitz transition in iron-pnictide superconductors. J. Phys. Cond. Matt.28, 375701 (2016). Hunt2013Science Hunt, B. et al. Massive Dirac Fermions and Hofstadter Butterfly in a van der Waals Heterostructure. Science 340, 1427 - 1430 (2013).Woods2014NatPhys Woods, C.R. et al. Commensurate–incommensurate transition in graphene on hexagonal boron nitride.Nat. Phys. 10, 451 - 456 (2014).Chen2014NatCom Chen,Z.-G. et al. Observation of an intrinsic bandgap and Landau level renormalization in graphene/boron-nitride heterostructures.Nat. Commun. 5, 4461; 10.1038/ncomms5461 (2014).Gorbachev2014Science Gorbachev, R.V. et al. Detecting topological currents in graphene superlattices.Science 346, 448 - 451 (2014).Kara2012SSR Kara, A. et al. A review on silicene—New candidate for electronics.Surface Sci. Rep. 67, 1 - 18 (2012).Acun2015JPCM Acun, A. et al. Germanene: the germanium analogue of graphene. J. Phys. Cond. Mat. 27, 443002 (2015). Liu2011PRL Liu,C.-C.,Feng, W. &Yao, Y. Quantum Spin Hall Effect in Silicene and Two-Dimensional Germanium. Phys. Rev. Lett. 107, 076802 (2011).Liu2011PRB Liu, C.-C.Jiang, H. & Yao, Y. Low-energy effective Hamiltonian involving spin-orbit coupling in silicene and two-dimensional germanium and tin. Phys. Rev. B 84, 195430 (2011).Gorbar2002PRB Gorbar, E.V., Gusynin, V.P., Miransky, V.A. & Shovkovy, I.A. Magnetic field driven metal-insulator phase transition in planar systems. Phys. Rev. B 66, 045108 (2002).Abrikosov.book Abrikosov, A.A. Fundamentals of the Theory of Metals. (Elsevier, Amsterdam, 1988).Sharapov2012PRB Sharapov, S.G. & Varlamov, A.A. Anomalous growth of thermoelectric power in gapped graphene. Phys. Rev. B 86, 035430 (2012).Varlamov2013EPL Varlamov, A.A. & Kavokin, A.V. Prediction of thermomagnetic and thermoelectric properties for novel materials and systems. Europhys. Lett. 103, 47005 (2013).Tao2015NatNano Tao, L. et al. Silicene field-effect transistors operating at room temperature.Nature Nanotechnology 10, 227 - 231 (2015).Elias2011NatPhys Elias, D.C. et al. Dirac cones reshaped by interaction effects in suspended graphene. Nat.Phys. 7, 701 - 704(2011).Kapusta.book Kapusta, J.I & Gale, C. Finite-Temperature Field Theory Principles and Applications. (Cambridge Univer. press, Cabmridge, 2006).Gusynin2004PRB Sharapov, S.G.,Gusynin, V.P. &Beck, H. Magnetic oscillations in planar systems with the Dirac-like spectrum of quasiparticle excitations. Phys. Rev. B 69, 075104 (2004).Sharapov2015JPA Sharapov, S.G. Thermodynamic properties of the 2+1-dimensional Dirac fermions with broken time-reversal symmetry. J.Phys. A 48, 365002 (2015).§ ACKNOWLEDGEMENTSWe acknowledge the support of EC for the RISE Project CoExAN GA644076. A.V.K acknowledges support from the EPSRC established career fellowship. V.P.G. and S.G.Sh. acknowledge a partial support from the Program of Fundamental Research of the Physics and Astronomy Division of the NAS of UkraineNo. 0117U00240. § AUTHOR CONTRIBUTIONS STATEMENT A.V.K., S.G.Sh., A.A.V. and V.P.G. conceived the work. S.G.Sh., A.A.V. and V.P.G. 
performed the calculations. V.Yu.T. carried out all numerical computations and prepared the figures. All authors contributed to writing the manuscript.

§ ADDITIONAL INFORMATION

Competing financial interests: The authors declare no competing financial interests.
http://arxiv.org/abs/1703.08962v2
{ "authors": [ "V. Yu. Tsaran", "A. V. Kavokin", "S. G. Sharapov", "A. A. Varlamov", "V. P. Gusynin" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170327080036", "title": "Entropy spikes as a signature of Lifshitz transitions in the Dirac materials" }
LIDAR-based Driving Path Generation Using Fully Convolutional Neural Networks

Luca Caltagirone^*, Mauro Bellone, Lennart Svensson, Mattias Wahde

^*Corresponding author: luca.caltagirone@chalmers.se. L. Caltagirone, M. Bellone, and M. Wahde are with the Adaptive Systems Research Group, Applied Mechanics Department, Chalmers University of Technology, Gothenburg, Sweden. L. Svensson is with the Signals and Systems Department, also at Chalmers University of Technology.

December 30, 2023

==============================================================================================================================================================================================================================================================================================================================================================================================================

In this work, a novel learning-based approach has been developed to generate driving paths by integrating LIDAR point clouds, GPS-IMU information, and Google driving directions. The system is based on a fully convolutional neural network that jointly learns to carry out perception and path generation from real-world driving sequences and that is trained using automatically generated training examples. Several combinations of input data were tested in order to assess the performance gain provided by specific information modalities. The fully convolutional neural network trained using all the available sensors together with driving directions achieved the best MaxF score of 88.13% when considering a region of interest of 60×60 meters. By considering a smaller region of interest, the agreement between predicted paths and ground-truth increased to 92.60%. The positive results obtained in this work indicate that the proposed system may help fill the gap between low-level scene parsing and behavior-reflex approaches by generating outputs that are close to vehicle control and at the same time human-interpretable.

§ INTRODUCTION

In recent years, universities, high-tech companies, and the automotive industry have invested vast resources into developing the technology necessary to enable fully autonomous driving. There are several reasons behind this collective effort; to name a few, it is argued that with autonomous vehicles there will be fewer accidents, less pollution, and a more efficient use of the infrastructure. Currently, two main paradigms exist to address the problem <cit.>: mediated perception and behavior reflex. Approaches utilizing mediated perception divide autonomous driving into subtasks, such as lane marking detection, vehicle detection, free road estimation, and traffic sign recognition. The results of each subtask are then combined to produce a world model that is afterwards analyzed to decide what driving action should be carried out. This paradigm is currently the leading one in both industry and academia thanks to its important strengths of being modular and interpretable. Modularity makes it possible to substitute a subcomponent (e.g., a lane marking detector) for a better one whenever it becomes available, without having to redesign the entire system.
Interpretability means that the output of each subcomponent can be easily understood and debugged: for example, one could look at the lane marking detections and immediately notice if the system is mistaking road barriers or shadows for lane markings. The behavior reflex paradigm was first successfully demonstrated in 1989 by Pomerleau <cit.>, who trained a simple three-layer fully connected artificial neural network, ALVINN, to predict the vehicle heading direction from camera and laser range finder data. More recently, Bojarski et al. <cit.> have used modern deep learning <cit.> techniques and trained a convolutional neural network (CNN) to infer appropriate steering angles given as input only forward-looking camera images. In <cit.>, the authors proposed a more sophisticated architecture to perform driving action prediction that combines a fully convolutional neural network (FCN) for visual feature extraction with a long short-term memory (LSTM) recurrent neural network for temporal fusion of visual features and past sensory information. While mediated perception approaches require time-consuming and expensive hand-labeling of training examples (e.g., selecting all the pixels belonging to lane markings in a given image), a behavior reflex system in its simplest form may only require on-board steering angle logs and their corresponding time-stamped camera images, both of which are easily obtainable. However, this kind of approach works as a black box, and modularity is lost in favor of a monolithic system that maps raw input information to control actions. It is therefore difficult to understand why a system is choosing one action over another, and consequently how to correct undesired behaviors. Chen et al. <cit.> have recently proposed an alternative approach, called direct perception, that takes an intermediate position between mediated perception and behavior reflex. Their main idea is to train a CNN to map camera images to a predefined set of descriptors such as heading angle and position in the ego-lane. It is argued that such a set provides a compact but complete representation of the vehicle's surroundings that can then be used for choosing appropriate control actions. Their approach, however, was developed within a simple driving simulator and it would probably not generalize well when applied to much more complex real-world scenarios. The approach described in this work also occupies an intermediate position between the two main paradigms previously described. By taking as input LIDAR point clouds, past GPS-IMU information, and driving directions, our system generates as output driving paths in the vehicle reference frame. This is accomplished by implicitly learning, from real-world driving sequences, which regions of a point cloud are drivable and how a driver would navigate them. One of the novelties of our approach is that GPS-IMU data and driving directions are transformed into a spatial format that makes possible the direct information fusion with LIDAR point clouds. In comparison with behavior reflex methods, the proposed approach preserves the decoupling between perception and control while, at the same time, producing interpretable results.
Whereas mediated perception methods carry out low-level scene parsing, our system generates a more abstract output that is closer to vehicle control, and its training data is obtained automatically without the need for time-consuming hand-labeling. The paper is organized as follows: In Section <ref>, an overview of the proposed system is presented, followed by a description of the preprocessing steps applied to the raw input data. The FCN architecture is presented in Section <ref>. The data set and details about the training procedure are described in Section <ref>. The results are presented in Section <ref> and are followed by the conclusions and future work in Section <ref>.

§ DRIVING PATH GENERATION

The goal of this work is to develop a system able to generate driving paths using as input LIDAR point clouds, GPS-IMU information, and driving intention. The problem is cast as a binary pixel-level semantic segmentation task using a fully convolutional neural network (FCN). Since FCNs handle data structured in the form of multi-dimensional arrays (i.e., tensors), some preprocessing steps are required in order to transform the input data into a suitable format. Similar to <cit.>, here this is accomplished by considering a top-view perspective where the point cloud is first partitioned within a discretized grid and then transformed into a 3D tensor by computing several basic statistics in each grid cell. The GPS coordinates are used for generating the ground-truth paths followed by the vehicle and for determining the driving intention whenever an intersection is approached. Each path is also augmented with IMU information about the vehicle's forward speed, forward acceleration, and yaw rate, and then transformed into a tensor. Finally, the LIDAR and GPS-IMU tensors are stacked and given as input to an FCN that is trained to generate a confidence map assigning to each grid cell a probability that the vehicle will drive over that area during its future motion.

§.§ LIDAR point cloud preprocessing

Vision-based algorithms for automotive applications usually work in one of two perspective spaces: camera or top-view. The former provides a view of the environment as seen from behind the windshield, whereas the latter offers a view of the vehicle's surroundings as seen from above, as if it were being observed from a bird flying above the scene. A top-view perspective is, in our opinion, a natural choice to perform path generation and therefore it will be used in this work. The procedure to transform a raw point cloud into an input tensor for a CNN consists of several steps. Initially, a discrete grid covering a region of interest (RoI) of 60×60 meters is created in the LIDAR x-y plane. The grid is centered in the LIDAR coordinate system and it has cells of 0.10×0.10 meters. Each point is then assigned to its corresponding grid cell, via projection on the x-y plane.
Afterwards, four basic statistics are computed in each cell: number of points, average reflectivity, and minimum and maximum point elevation. Finally, a 2D tensor is created for storing each one of the previously mentioned statistics. The final output of this procedure is therefore a 3D tensor of size 4×600×600. Figure <ref> (E) and (F) illustrate, respectively, an example of point cloud maximum elevation and average reflectivity.

§.§ GPS-IMU data preprocessing

As previously mentioned, the GPS information is used to generate the driving paths followed by the vehicle. Considering a generic current time step k of a driving sequence, the corresponding path Π, centered in the current position p_k = [x_k, y_k, z_k], is given by the union of two sets of 3D points, Π^- and Π^+, expressed in the LIDAR coordinate system. Here, Π^- denotes the past sub-path, which is defined in the time interval [0:k], whereas Π^+ is the future sub-path, defined in the time interval [k:N], where N denotes the total number of time steps in the driving sequence. Additionally, the past path Π^- is also augmented with information provided by the IMU, specifically, forward speed v, forward acceleration a, and yaw rate ω. By denoting the augmented descriptor with x = [x, y, z, v, a, ω], the past path then becomes Π^- = {x_0, …, x_k}. Given that the paths are expressed as sets of points in the LIDAR coordinate system, they can also be transformed into tensors using a similar procedure to the one described in Section <ref>. In this case, however, the vehicle's trajectory information is considered instead of the point elevation and the reflectivity statistics. Another difference is that it is necessary to add continuity and thickness to the paths, which are, so far, a simple collection of discrete points. Continuity is obtained by joining neighboring points to form a curve; then, by considering that the vehicle's width is about 1.80 meters, the curve is expanded 0.90 meters on each side in order to approximately cover the actual driving corridor. It is necessary to consider a driving corridor instead of a curve in order to provide the FCN with enough positive examples at training time. After inference, the 1D path can be recovered by finding the central curve of the FCN's output. To summarize, the above procedure is used to generate two tensors: the first one contains information about the vehicle's past motion up to the current time step and it has a size of 3×600×600. This tensor will be stacked with the LIDAR tensor, and will be part of the input for the FCN. The second tensor is the ground-truth future path that the FCN will be trained to predict as output. Figure <ref> (A), (B), and (C) show an example of forward acceleration, forward speed, and yaw rate tensors, respectively, whereas Figure <ref> (G) illustrates the corresponding future path ground-truth.
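For concreteness, the two tensorization steps just described can be sketched in a few lines of NumPy. The function names, the square stamp used to thicken the path, and the array layout below are our own simplifications for illustration, not the exact implementation used in the experiments:

```python
import numpy as np

def lidar_to_tensor(points, roi=60.0, cell=0.10):
    """Grid an N x 4 point cloud (x, y, z, reflectivity) into a
    4 x 600 x 600 tensor of per-cell statistics: point count, mean
    reflectivity, and minimum/maximum elevation."""
    n = int(roi / cell)                       # 600 cells per side
    half = roi / 2.0
    # Keep only points inside the RoI, centered on the sensor.
    inside = (np.abs(points[:, 0]) < half) & (np.abs(points[:, 1]) < half)
    pts = points[inside]
    # Project each point onto its grid cell in the x-y plane.
    ix = ((pts[:, 0] + half) / cell).astype(int).clip(0, n - 1)
    iy = ((pts[:, 1] + half) / cell).astype(int).clip(0, n - 1)
    count = np.zeros((n, n), dtype=np.float32)
    refl = np.zeros((n, n), dtype=np.float32)
    zmin = np.full((n, n), np.inf, dtype=np.float32)
    zmax = np.full((n, n), -np.inf, dtype=np.float32)
    for cx, cy, z, r in zip(ix, iy, pts[:, 2], pts[:, 3]):
        count[cy, cx] += 1
        refl[cy, cx] += r
        zmin[cy, cx] = min(zmin[cy, cx], z)
        zmax[cy, cx] = max(zmax[cy, cx], z)
    occ = count > 0
    tensor = np.zeros((4, n, n), dtype=np.float32)
    tensor[0] = count
    tensor[1][occ] = refl[occ] / count[occ]
    tensor[2][occ] = zmin[occ]
    tensor[3][occ] = zmax[occ]
    return tensor

def path_to_tensor(path_xy, descriptors, roi=60.0, cell=0.10, half_width=0.90):
    """Rasterize a path (K x 2 array of x-y points with K x C per-point
    descriptors, e.g. v, a, yaw rate) into a C x 600 x 600 tensor,
    stamping each point with a square footprint that approximates the
    1.80 m wide driving corridor."""
    n = int(roi / cell)
    half = roi / 2.0
    radius = int(half_width / cell)
    tensor = np.zeros((descriptors.shape[1], n, n), dtype=np.float32)
    for (x, y), vals in zip(path_xy, descriptors):
        cx = int((x + half) / cell)
        cy = int((y + half) / cell)
        x0, x1 = max(cx - radius, 0), min(cx + radius + 1, n)
        y0, y1 = max(cy - radius, 0), min(cy + radius + 1, n)
        tensor[:, y0:y1, x0:x1] = vals[:, None, None]
    return tensor
```

§.§ Generating driving intention input

Driving intention describes the knowledge about what direction the vehicle will take when multiple options are available, such as, for example, at intersections or highway exits.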
Intention is an important component of driving, considering that people rarely drive in a purely reactive fashion and indeed adapt their driving behavior according to their destination. Having access to the driving intention is also important when training the FCN in order to make sense of otherwise ambiguous situations: for example, in certain cases, when approaching an intersection the vehicle will turn right, while in others it will go straight or turn left. Without knowing where the driver intends to go, these examples will provide conflicting feedback and they might deteriorate the FCN's ability to learn a robust representation of the driver's model. In this work, Google Maps is used to obtain driving instructions when approaching locations where multiple directions can be taken. When queried given the current position and a certain destination, Google Maps returns a human-interpretable driving action, such as turn left or take exit, together with an approximate distance of where that action should be taken. This information is integrated into the past path Π^- described in Section <ref>, which is therefore augmented with two additional dimensions: intention direction, i_d ∈ {left, straight, right}, and intention proximity, i_p ∈ [0, 1]. See Fig. <ref> (D) and (H) for an example of intention proximity and intention direction tensors, respectively. The default direction is to go straight, which should not be literally interpreted as keeping a constant heading angle, but simply as to keep driving along the main road, which obviously may not be straight. The GPS positions and intention proximity are not exact, and their purpose is not to tell the FCN precisely where and how it is supposed to turn. They just provide an indication of the coming action that should be acted upon only if there is agreement with the scene understanding provided by the LIDAR.

§.§ FCN architecture

In this work, path generation is cast as a binary pixel-level semantic segmentation problem within a deep learning framework: an FCN is trained to assign to each RoI's grid cell, or pixel, a probability that the vehicle will drive over that area during its future motion. The future path is then obtained by considering the region defined by the grid cells with probability greater than a fixed threshold. In recent years, several CNNs for carrying out semantic segmentation have been proposed; some examples are Segnet <cit.>, FCN-8s <cit.>, and Dilation <cit.>. Here, however, it was preferred to implement a task-specific FCN by taking into account two factors: (1) the nature of our training data, and (2) recent design guidelines regarding semantic segmentation networks. In <cit.>, the authors have shown that working with high-resolution feature maps is helpful for achieving better performance. However, the higher the resolution, the larger the memory requirements of the network. Here, to compromise between the two, only two max-pooling layers were used. Additionally, the proposed FCN was designed to have a large receptive field, which has also been shown to be beneficial for semantic segmentation. The expansion of the receptive field was efficiently accomplished by using dilated convolutions <cit.>. The FCN's overall architecture is shown in Fig. <ref>, and further details are provided in Table <ref>.
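As a rough illustration of this design, the following PyTorch sketch wires together an encoder with only two max-pooling stages, a dilated-convolution context module with spatial dropout, and a decoder that restores the input resolution. The channel widths, activation functions, and layer counts are placeholders of our own and do not reproduce the exact architecture of Table <ref>:

```python
import torch.nn as nn

class PathFCN(nn.Module):
    """Schematic FCN: two max-pooling stages keep feature maps at
    relatively high resolution, while dilated convolutions enlarge
    the receptive field cheaply."""
    def __init__(self, in_channels=9):  # 4 LIDAR + 3 IMU + 2 intention
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ELU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ELU(),
            nn.MaxPool2d(2),
        )
        # Context module: growing dilation rates + spatial dropout.
        self.context = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(64, 64, 3, padding=d, dilation=d),
                          nn.ELU(), nn.Dropout2d(0.20))
            for d in (1, 2, 4, 8, 16)
        ])
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x):
        # Output: per-cell logits for P(vehicle drives over this cell).
        return self.decoder(self.context(self.encoder(x)))
```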
§ DATA SET AND TRAINING

The data set used in this work is entirely based on the KITTI raw data set <cit.>, which consists of 55 driving sequences taken over 4 days in three driving environments: city, rural, and highway. The sequences have lengths ranging from a few seconds to a few minutes. Out of the 55 available, only 45 were used: 30 were assigned to the training set, and 15 to the validation and test sets. Three behaviors determined by the vehicle yaw rate are defined in order to gain a coarse insight into the driving actions carried out in the sequences: turning left if the yaw rate is greater than 1.0^∘/s, turning right for a yaw rate less than -1.0^∘/s, and straight in all other cases <cit.>. A break-down of the sets is provided in Table <ref>. The FCNs were trained using the Adam optimization algorithm with an initial learning rate of 0.0005, a batch size of 2, and using cross-entropy loss as the objective function. The learning rate was decayed by a factor of 2 whenever there was no improvement of performance within the last epoch. For regularization, spatial dropout layers (p_d=0.20) were added after each dilated convolution layer in the context module. Furthermore, data augmentation was carried out on-the-fly in the form of random rotations in the range [-20^∘, 20^∘] about the LIDAR z-axis. The FCNs were implemented using the Torch7 framework and were trained on an NVIDIA GTX980Ti GPU.

§ EXPERIMENTS

The following experiments had two main goals: to investigate whether the proposed approach could learn to generate feasible driving paths from real-world driving sequences, and to study how different sensor and information modalities affect performance. For these purposes, the FCN described in Section <ref> was trained using five different combinations of input data[Videos of the FCNs applied to full driving sequences of the validation and test sets can be found at http://goo.gl/ksRrYA]: Lidar-IMU-INT(ention), Lidar-INT, Lidar-IMU, Lidar-only, and IMU-only. IMU-only denotes the input tensor consisting of three channels: forward speed, forward acceleration, and yaw rate (see also Section <ref>). Additionally, a baseline denoted as Straight was also included in the comparison: this consists of a straight path of width 1.80 meters originating in the vehicle and heading forward. The metrics used for evaluation are precision (PRE), recall (REC), and maximum F1-measure (MaxF). In the following, we will refer to the FCNs using their corresponding input tensor description, so, for example, Lidar-IMU-INT will denote the FCN trained with LIDAR, IMU, and intention data.

§.§ Results overview

As can be seen in Table <ref>, IMU-only obtained the lowest MaxF score of 81.63%. Performance increased every time an information modality was added, reaching the highest MaxF score of 88.13% in the case of Lidar-IMU-INT. These results support the intuitive assumption that the more information the FCN has access to, the better it can learn and perform the task. Furthermore, they confirm that the data representation adopted in this work (see Section <ref>) is appropriate for carrying out information fusion. The average inference time, that is, the time for a forward pass through the FCN, from input to output, was 0.033 seconds, which corresponds to a frame rate of about 30 Hz.
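For reference, the MaxF metric reported above is the maximum F1-measure obtained when sweeping a decision threshold over the predicted confidence map. A minimal sketch, with our own function name and threshold grid, is:

```python
import numpy as np

def max_f_measure(confidence, ground_truth, n_thresholds=100):
    """Return (MaxF, precision, recall) at the optimal threshold over
    a predicted confidence map and a binary ground-truth path mask."""
    best = (0.0, 0.0, 0.0)
    gt = ground_truth.astype(bool)
    for t in np.linspace(0.0, 1.0, n_thresholds, endpoint=False):
        pred = confidence > t
        tp = np.logical_and(pred, gt).sum()
        fp = np.logical_and(pred, ~gt).sum()
        fn = np.logical_and(~pred, gt).sum()
        if tp == 0:
            continue
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        if f1 > best[0]:
            best = (f1, precision, recall)
    return best
```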
The FCN was able to assign a high probability to grid cells with no LIDAR readings, such as regions falling in-between LIDAR layers or occluded areas, by exploiting context information; see row (A) of Fig. <ref> for an example of inference within an occluded region generated by Lidar-IMU-INT. In some cases, however, it was noticed that the predicted paths became less accurate at longer ranges. This could be because the densities of LIDAR point clouds decrease with distance from the sensor, so that their information content also decreases following the same pattern. By considering a smaller output RoI of 40×40 meters, accuracy increased for all the considered input combinations; also in this case, the best performance was achieved by Lidar-IMU-INT with a MaxF score of 92.60%.

§.§ Driving intention

In Section <ref>, it was argued that considering driving intention information would enable the FCN to learn a more robust driving model. This was indeed confirmed by the above results. Rows (A) and (B) in Fig. <ref> illustrate two driving scenarios, in proximity of an intersection, where the benefit provided by intention information is particularly evident. As can be seen, Lidar-IMU-INT and Lidar-INT were able to generate paths that are close to the ground-truth, whereas the other FCNs produced uncertain predictions. It is worthwhile to mention that the driving intention only provides information about the coarse action to be executed and an approximate distance to the point where the maneuver should take place. In addition to that, the FCN must carry out scene understanding from the LIDAR point cloud in order to generate a path that, besides following the driving intention, also takes into account drivability. The intention should be acted upon only if there is agreement with the scene understanding; otherwise, the system should trust the latter. Column (A) in Fig. <ref> shows an example where the driver intention was to turn right even though the road did not allow such a maneuver. The system recognized that turning was not feasible and generated a correct straight path. Panels B–D illustrate the paths generated by Lidar-IMU-INT in a situation where multiple directions are possible: only the driving intention was modified, whereas the LIDAR and IMU tensors were left unchanged.

§ CONCLUSION AND FUTURE WORK

In this work, an FCN is trained end-to-end to generate driving paths by integrating multiple sensor and information sources, that is, LIDAR point clouds, GPS coordinates, driving directions, and inertial measurements. The system generates interpretable outputs and preserves the decoupling between control and perception. Given that its training data is obtained automatically, a large volume of training examples for supervised learning could be collected with minimal effort. Several issues are left for future work: of particular interest to the authors is to explore approaches for integrating LIDAR point clouds acquired over successive time steps in order to generate paths that take into account the motion of nearby vehicles. Considering additional sensors such as, for example, radars and cameras could further enhance the system's accuracy and perception range. The accuracy of the ground-truth paths could be improved by performing dead-reckoning in addition to using GPS coordinates.
Lastly, the output of the proposed system is a probability map of future vehicle positions, and it still remains to be determined how to make best use of this information for carrying out trajectory planning or vehicle control.

§ ACKNOWLEDGMENT

The authors gratefully acknowledge financial support from Vinnova/FFI.
Optical line emission at z∼6.8

M. Castellano et al.

1 INAF - Osservatorio Astronomico di Roma, Via Frascati 33, I-00040 Monte Porzio Catone (RM), Italy
2 INAF - Osservatorio Astronomico di Bologna, Via Ranzani 1, I-40127, Bologna, Italy
3 Observatoire de Genève, Université de Genève, 51 Ch. des Maillettes, 1290, Versoix, Switzerland
4 Kapteyn Astronomical Institute, University of Groningen, Postbus 800, 9700 AV Groningen, The Netherlands
5 INAF Osservatorio Astronomico di Trieste, via G. B. Tiepolo 11, I-34143, Trieste, Italy
6 Department of Astronomy, The University of Texas at Austin, C1400, Austin, TX 78712, USA
7 Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
8 Cavendish Laboratory, University of Cambridge, 19 J. J. Thomson Ave, Cambridge CB3 0HE, UK
9 Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK
10 Department of Physics and Astronomy, University of Missouri-Columbia, Columbia, MO, USA

E-mail: marco.castellano@oa-roma.inaf.it

We analyze a sample of z-dropout galaxies in the CANDELS GOODS-South and UDS fields that have been targeted by a dedicated spectroscopic campaign aimed at detecting their Lyα line. Deep IRAC observations at 3.6 and 4.5 μm are used to determine the strength of optical emission lines affecting these bands at z∼6.5-6.9 in order to i) investigate possible physical differences between Lyα-emitting and non-emitting sources; ii) constrain the escape fraction of ionizing photons; and iii) provide an estimate of the specific star-formation rate at high redshifts. We find evidence of strong [OIII]+Hβ emission in the average (stacked) SEDs of galaxies both with and without Lyα emission. The blue IRAC [3.6]-[4.5] color of the stack with detected Lyα line can be converted into a rest-frame equivalent width EW([OIII]+Hβ)=1500^+530_-440 Å assuming a flat intrinsic stellar continuum. This strong optical line emission enables a first estimate of f_esc≲20% on the escape fraction of ionizing photons from Lyα-detected objects. The objects with no Lyα line show a less extreme EW([OIII]+Hβ)=520^+170_-150 Å, suggesting different physical conditions of the HII regions with respect to Lyα-emitting ones, or a larger f_esc. The latter case is consistent with a combined evolution of f_esc and the neutral hydrogen fraction as an explanation of the lack of bright Lyα emission at z>6. A lower limit on the specific star-formation rate, SSFR>9.1 Gyr^-1 for M_star=2×10^9 M_⊙ galaxies at these redshifts, can be derived from the spectroscopically confirmed sample.

§ INTRODUCTION

The synergy between deep photometric and spectroscopic observations is becoming fundamental to understanding the reionization epoch. On the one hand, selection through photometric redshifts or the Lyman-break technique has enabled the determination of the evolution of the UV luminosity density and the identification of faint star-forming galaxies as the most likely drivers of reionization <cit.>. On the other hand, the spectroscopic follow-up of such photometrically selected samples has yielded constraints on the timeline of the reionization process <cit.>.
Eventually, a thorough understanding of this major transition will require firm constraints on the physical properties of z>6 galaxies that affect both the interpretation of the UV LF <cit.> and the decrease of bright Lyα emission <cit.>. Looking for line-emission signatures in broadband photometry has recently emerged as a valuable tool for investigating the evolution of galaxy properties at high redshift <cit.>. The spectral energy distribution of objects in the reionization epoch is affected by emission from [OIII]λλ4959,5007 and Hβ at IR wavelengths, resulting in a bluing of the IRAC 3.6μm-4.5μm color at z∼6.6-6.9, where the lines affect the 3.6μm band, and a reddening at z>7, when they enter the 4.5μm one <cit.>. These signatures yielded evidence of extremely strong line emission in high-z galaxies, and enabled more accurate photometric redshifts and constraints on their specific star-formation rate (SSFR) <cit.>. In the present work we exploit deep IRAC observations to constrain the optical line emission properties of a sample of z-dropout galaxies from the CANDELS GOODS-South and UDS fields <cit.> observed by deep spectroscopic programs aimed at detecting their Lyα line <cit.>. In Sect. <ref> we present the sample under consideration and the procedure used to construct average (stacked) images for subsamples with different Lyα emission properties. The analysis of the IRAC colors in terms of the optical line contribution to the broadband photometry is given in Sect. <ref>. We discuss in Sect. <ref> the resulting constraints on the physical properties of our targets. We present a summary in Sect. <ref>. Throughout the paper, observed and rest-frame magnitudes are in the AB system, and we adopt the Λ-CDM concordance model (H_0=70 km/s/Mpc, Ω_M=0.3, and Ω_Λ=0.7).

§ THE HIGH-REDSHIFT SAMPLE

A comprehensive description of the sample will be presented in a forthcoming paper (Pentericci L. 2017, in preparation); here we summarize the information that is most relevant for the present analysis. The spectroscopic targets have been selected from the official H-band detected CANDELS catalogs of the GOODS-South <cit.> and UDS <cit.> fields. Sources have been initially selected through appropriate recastings of the "Lyman-break" technique as described in <cit.>. The final color-color selection criteria take into account the different sets of passbands available in the two fields <cit.>, resulting in slightly different redshift selection functions <cit.>. In addition to the Lyman-break-selected candidates, we also inserted in the available FORS2 slits targets that did not pass the above criteria but had a photometric redshift of z_phot>6.5. The photometric redshifts used for selection are the official CANDELS ones, built from a set of different photo-z runs through the hierarchical Bayesian approach described in <cit.>. We complemented the large program sample with data obtained by our previous programs <cit.>. All objects have been observed with the FORS2 spectrograph using the 600Z holographic grating (sensitivity in the range of 8000-10,000 Å with a spectral resolution of R=1390), following the observing strategy presented in P14. Finally, we add to our own sample the z∼7 targets observed by ESO programmes 086.A-0968(A) and 088.A-1013(A) (P.I. Bunker) with the same FORS2 setup.
The data have been processed through our own reduction pipeline, which is fine-tuned for the detection of faint emission lines <cit.>. The final spectroscopic sample comprises 84 objects, including those selected only from photometric redshifts. Only 17 of them show Lyα, in some cases quite faint, consistent with the decline of the Lyα emission fraction at high redshift (Pentericci L. et al. 2017, in preparation). In the present work, we will focus on the sources in the redshift range where [OIII]+Hβ generate a sharp bluing of the 3.6μm-4.5μm color. We consider 11 sources with detected Lyα, regardless of the relevant EW, at redshift z=6.565-6.836, and 25 sources with no Lyα emission having primary photometric-redshift solution in a slightly larger range (6.4<z_phot<7.0) to conservatively account for the effect of photo-z uncertainty. The samples include galaxies with H160 spanning the range of ∼25.0-28.0. We analyze the photometric properties of the spectroscopic samples exploiting the available CANDELS mosaics <cit.> in the four HST bands V606, I814, J125, and H160 that are available for both fields, and the Spitzer IRAC observations in the 3.6 (CH1 hereafter), 4.5 (CH2), 5.8, and 8.0 μm channels. The IRAC mosaics of the UDS field combine observations from the SWIRE <cit.>, spUDS <cit.>, and SEDS <cit.> surveys as described in <cit.>. For the GOODS-S field, we used 5.8 and 8.0 μm observations from the GOODS Spitzer Legacy project (PI: M. Dickinson) <cit.> together with our own reduction of all the available CH1 and CH2 IRAC observations, including data from the S-CANDELS program (PI G. Fazio) <cit.>. The analysis of individual sources is based on the 19-band photometric information for the GOODS-S and UDS fields described in <cit.> and <cit.>, respectively, with the notable exception of the IRAC GOODS-South photometry, which we re-estimated using the full-depth maps described above[The IRAC photometry will be publicly released as part of the revised GOODS-S photometric catalog by the ASTRODEEP collaboration (Fontana et al. 2017, in preparation).]. The IRAC GOODS-S CH1 and CH2 photometry has been obtained with v2.0 of the code presented in <cit.>, which exploits information from high-resolution HST images to extract photometry from lower resolution data where blending is a concern. As reference high-resolution templates, we use the source cutouts obtained from the H160 band after dilating its segmentation map, as described in <cit.>, to recover an unbiased estimate of the total flux in the low-resolution frames. These runs are performed by simultaneously fitting all of the objects in the field using object-dependent PSFs. This procedure takes into account the large variation of the point spread function resulting from the difference in position angle among the several programs contributing to the final maps.

§.§ Stacking Procedure

The sources under investigation have typical mid-IR fluxes close to the detection limit of the deep Spitzer observations. The CH1-CH2 color of the objects in our sample is in the best cases determined with an uncertainty of 0.3-0.5 magnitudes, while more than one-third of our sources have S/N<1 in one (mostly CH2) or both of the IRAC bands. For this reason, we will base our investigation on stacked images. We separately analyze objects with detected Lyα line and those with no line detection to discern possible correlations between the optical and the Lyα line emission properties.
We also consider subsamples of bright and faint sources to assess a possible relation between line-emission properties and UV luminosity. We consider as bright objects those with H_160<26.0 (roughly corresponding to L≳L^*): 4 (6) sources are brighter than this limit in the Lyα-detected (-undetected) samples, respectively. We build stacked images in the four IRAC channels and in the V606, I814, J125, and H160 HST bands. In this way, we can study the IRAC CH1-CH2 color as a probe of line emission as well as the overall "average" SED of the samples under consideration. For the IRAC bands, where source confusion and blending is significant, we first perform a second-pass run using the dedicated option <cit.> to generate residual images where only the z∼7 sources under analysis are left. In this way, all sources are modeled, and those close to the z∼7 targets are effectively removed (see, e.g., Fig. <ref>), such that these cleaned images can be used to generate reliable stacked images of the candidates. We then visually inspect all our sources and exclude three objects (one Lyα emitter and two non-emitters) due to the presence of bad residual features close to the targets that can possibly affect the photometry. In Table <ref> we list the sources actually used for the present analysis. The stacked images are then generated as weighted averages of the individual thumbnails and are presented in Fig. <ref>. Together with the stacks, we generate average CH1 and CH2 PSFs from the PSFs of the individual sources. The HST stacks are generated as weighted-average images of the individual thumbnails after masking all close-by sources according to the relevant segmentation map. The HST photometry is obtained by performing detection and estimating the total magnitude in the stacked H160 band. Total magnitudes in the other bands are computed on the basis of the relevant isophotal colors with respect to the H160 one. Photometry of the stacked IRAC images is estimated using the source cutout from the stacked H160 band as a prior. The resulting spectral energy distributions are shown in Fig. <ref> and Fig. <ref> (for "bright" and "faint" subsamples).

§ EVIDENCE OF OPTICAL LINE EMISSION

We show in Fig. <ref> the CH1-CH2 color of the Lyα-emitting and non-emitting stacks and the colors of all individual sources under consideration. We find CH1-CH2=-1.0±0.21 and CH1-CH2=-0.47±0.11 for the Lyα-emitting and non-emitting average sources, respectively. Clearly, these colors represent the average properties of the sample. We find that both samples show an evident relation between the UV luminosity and the CH1-CH2 color. The bright samples' stacks have a similar CH1-CH2≃-0.25 for both Lyα-emitting and non-emitting sources. The IRAC colors of the faint subsamples are bluer. The stack of the faint non-emitting subsample has CH1-CH2=-0.60±0.23, while the stack of faint Lyα-emitting sources is extremely blue (CH1-CH2<-1.5 at 1σ) due to the non-detection in CH2. The difference between the IRAC colors of bright and faint Lyα-emitting galaxies is significant at the ∼2.5σ level. As shown in Fig. <ref>, the average negative CH1-CH2 color we find for the two samples can only be explained by the presence of optical line emission affecting the CH1 filter. The most extreme color obtained for purely stellar emission is approximately -0.35 (which would also require no dust and extreme galaxy properties, see Sect.
<ref>), much redder than the stacked color of Lyα-emitting galaxies and only marginally compatible with the color from the stacking of non-emitting galaxies, implying that the bulk of the objects in the two samples have optical line emission affecting the IRAC bands. In particular, the stacked color of the bright subsamples still suggests the presence of emission lines but is also compatible with purely stellar emission from low-metallicity young galaxies, while line emission is surely present in most of the objects contributing to the faint subsamples. Interestingly, this value is bluer than for the youngest and lowest metallicity templates in our library, suggesting that the physical conditions in distant HII regions can be more extreme than what is assumed in our nebular emission model <cit.>. The evidence of optical emission lines is also shown by an SED fit of the stacked multi-band photometry. We fit the eight-band photometry with our χ^2 minimization code <cit.>, fixing the redshift at the average one of the relevant sample. The fit is performed both with stellar-only templates from the library of <cit.> (BC03 hereafter) and also including the contribution of line emission as in <cit.>, assuming an escape fraction of ionizing photons f_esc=0 <cit.>. A comparison of the stellar and stellar+nebular fits shows that the former solution is disfavored in terms of χ^2 (Fig. <ref>). By varying the contribution of nebular emission from f_esc=0 to f_esc=1 in steps of 0.2, we find that f_esc=0 models are always favored. Templates with f_esc>0.4 are excluded at 1σ in the case of Lyα-emitting galaxies, while the difference in terms of χ^2 among the various templates is not significant in the case of Lyα-undetected ones. Considering that the photometric-redshift estimates do not rely on nebular templates, the evident nebular feature in the IRAC bands of sources with no Lyα redshift, together with the deep non-detection in the stacked V606 band (>31.4 mag at 1σ), provides further evidence that these objects are robust z-dropout galaxies, thus strengthening the case for a declining Lyα fraction at z>6.

§ DISCUSSION

We can convert the observed IRAC color into a combined rest-frame EW([OIII]+Hβ) by assuming a baseline color for the intrinsic stellar emission, which, in turn, depends on the age, E(B-V), and metallicity of the stellar population. Intrinsic colors range from CH1-CH2≃-0.35 for a dust-free Z=0.02 Z_⊙ template of Age=10 Myr, to CH1-CH2≳0.2 (e.g., Age=100 Myr, E(B-V)=0.2, solar metallicity). In particular, age and dust extinction are the factors that mostly affect the continuum shape, with a 0.2 mag color difference between templates at E(B-V)=0 and E(B-V)=0.15 (at fixed age and metallicity) and between templates at Age=0 and Age=300 Myr (at fixed dust extinction and metallicity). A 0.1 mag difference in color is found between Z=0.02 Z_⊙ and solar metallicity templates of similar age and dust extinction. In principle, the difference between the IRAC colors of Lyα-emitting and non-emitting galaxies (∼0.5 mag) could be completely explained by a difference in the underlying stellar optical continuum, with Lyα-emitting galaxies being very young, metal-poor, and dust-free, and objects lacking Lyα emission being >100 Myr old, metal-enriched, and mildly extincted. However, the typical UV slope obtained from the J125-H160 stacked photometry is β≃-1.9 for both samples, and the distribution of individual UV slopes in the two samples is similar (Pentericci L. et al. 2017, in preparation).
We can thus exclude a significant presence of dust-free, low-metallicity galaxies among Lyα-emitting galaxies, since such an extreme population would show a bluer slope β∼-2.7 <cit.>. Therefore, different physical properties can contribute to, but not completely explain, the difference between the IRAC colors in our samples. For simplicity, we consider a flat CH1-CH2=0.0 for the no-emission-line case, as expected for a reference 100 Myr old Z=0.2 Z_⊙ galaxy with UV slope β∼-1.9 (corresponding to E(B-V)∼0.12), to convert IRAC colors into equivalent widths of the optical line emission. The measured color term can then be converted into EW([OIII]+Hβ)=1500^+530_-440 Å (Lyα-emitting sample) and EW([OIII]+Hβ)=520^+170_-150 Å (Lyα-undetected sample). These values are consistent within the uncertainty with the line strength predicted by the stellar+nebular SED fitting of the eight-band stacked photometry, thus providing further evidence that a difference in the stellar SEDs is unlikely to explain the different IRAC colors. The stacking of bright sources yields EW([OIII]+Hβ)∼230-290 Å. The largest equivalent widths are obtained for the faint subsamples, with EW([OIII]+Hβ)=720^+400_-330 Å for Lyα-undetected sources and a lower limit of EW([OIII]+Hβ)>2900 Å for Lyα-emitting ones. In fact, given the similar color of the stacked bright subsamples, the difference between Lyα-detected and -undetected objects appears to be mostly confined to the subsamples of faint (H160>26.0) sources. We summarize in Table <ref> the measurements for the different subsamples. Notably, the different IRAC colors of bright and faint Lyα-emitting galaxies (∼1 mag) cannot be explained by a variation of the underlying stellar continuum alone (≲0.5 mag). The relation between EW([OIII]+Hβ) and UV luminosity, which is evident in both the Lyα-detected and -undetected samples, can also be explained by a relation between age and UV luminosity. Moreover, such bright optical line emission from sub-L^* sources implies that stellar feedback is either not strong enough to deplete their interstellar medium, or the sources are too young, and thus feedback has not been effective for a long enough time to affect the ISM. An intriguing possibility is that different physical properties of the HII regions concur in explaining both the Lyα visibility and a larger EW([OIII]+Hβ) <cit.>. The IRAC color of the Lyα-emitting galaxies can thus be explained by these objects being younger and more metal-poor, and thus with harder ionization fields, than non-emitting ones. A higher escape fraction of ionizing photons can also explain a lower EW of the optical emission lines and play a role in the low Lyα visibility. In the next section, we will discuss the relation between the physical conditions of the HII regions and EW([OIII]+Hβ). An alternative explanation of the difference between the two stacks can be uncertainties affecting the sample of objects with no Lyα. We can exclude with high confidence any contamination from low-redshift interlopers, since no other lines are detected in any of the objects (P14, Pentericci L. et al. 2017, in preparation) and also because of the mag>31.4 non-detection in the stacked V606 band. Moreover, the nebular feature typical of this redshift range is more evident for faint sources, where a larger contamination might be expected given the lower reliability of photometric redshifts.
However, we cannot exclude the possibility that the Lyα-undetected samples in the 6.4<z_phot<7.0 range actually contain sources with true redshift >7.0, which would partially erase the line signature. At z>7.0 the CH1-CH2 color can be as red as ∼0.5-0.8 <cit.> because of [OIII]+Hβ affecting the 4.5μm passband: this can be the case for some of the sources in our sample with a positive color term (Fig. <ref>). Similarly, Hα emission can add to the CH2 flux of objects at z∼6.5. In such a case, the EW([OIII]+Hβ) we measure for Lyα-undetected sources should be considered a lower limit on the real, typical line strength. We perform two tests to ascertain possible biases due to the photometric-redshift selection. First of all, we restrict the analysis to a more conservative range 6.6<z<6.9, excluding sources with red IRAC colors (CH1-CH2>1): we find an average CH1-CH2∼-0.2, again suggestive of low EW([OIII]+Hβ). As a second test, we inspected the photometric-redshift probability distribution functions of our objects to isolate those with the highest probability (p>0.75) of being in the 6.6<z<6.9 range. Four out of five objects have IRAC colors in the range of ∼-0.26 to -0.39, the remaining one being UDS_22859 with CH1-CH2∼2. These results suggest no obvious bias due to photometric-redshift selection in the result from the stack of Lyα-undetected sources, though a future spectroscopic detection of the optical lines themselves with JWST is likely the only way to overcome the effect of photometric-redshift uncertainties in this kind of analysis.

§.§ Implications for the Escape Fraction

The escape of ionizing Lyman continuum (LyC) radiation from star-forming regions affects nebular emission and line strength. In particular, a high escape fraction and a high neutral hydrogen fraction in the IGM have similar effects on the Lyα visibility <cit.>, while optical emission lines such as [OIII] and Hβ are only affected by f_esc. <cit.> found that the observed decline of the Lyα emission at high redshift can be explained by a small increase of the Lyman continuum escape fraction Δf_esc<0.1, assuming f_esc is already high (∼0.65) at z=6, or by a modest increase in both the escape fraction (Δf_esc≃0.1) and the neutral IGM fraction (Δχ_HI≃0.2) from z=6 to z=7, starting from f_esc=0.15 at z=6. Two mechanisms can be responsible for LyC leakage: the presence of "holes" in standard radiation-bounded HII nebulae, and the formation of incomplete Strömgren spheres, or "density bounded" HII regions <cit.>. Real cases of LyC leakage are most probably due to a combination of the two phenomena. As discussed in depth by Z13, a combined measurement of the UV slope and of EW(Hβ), which will become feasible only with JWST, yields general constraints on the escape fraction of ionizing photons from high-redshift galaxies, albeit mid/far-IR rest-frame information might be needed to disentangle the effects of dust. However, the present evidence of strong line emission affecting the broadband colors of high-redshift galaxies allows us to put first constraints on the LyC leakage, since line luminosity is suppressed at increasing f_esc, with no line emitted in the extreme case of f_esc=1.
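Before turning to the models, it is useful to make the color-to-EW conversion of the previous section explicit. The following minimal Python sketch assumes a flat (CH1-CH2=0) stellar continuum, that all of the line flux falls in CH1, and an effective CH1 width of ≈7500 Å; the function name and the adopted filter width are our own simplifications:

```python
import numpy as np

def color_to_rest_ew(color, z, width_ch1=7500.0):
    """Convert an observed CH1-CH2 color excess (mag) into a rest-frame
    EW([OIII]+Hbeta) in Angstrom. The line boosts the CH1 flux by a
    factor (1 + EW_obs / W_CH1), so for a flat continuum
    color = -2.5 * log10(1 + EW_obs / W_CH1), and EW_rest = EW_obs / (1 + z)."""
    ew_obs = width_ch1 * (10 ** (-0.4 * color) - 1.0)
    return ew_obs / (1.0 + z)

# Stacked colors of the two subsamples at <z> ~ 6.7:
print(color_to_rest_ew(-1.00, 6.7))   # ~1500 A, Lya-detected stack
print(color_to_rest_ew(-0.47, 6.7))   # ~ 520 A, Lya-undetected stack
```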
We compute the expected IRAC color for different f_esc values as a function of galaxy age in two different ways: (1) from stellar+nebular templates following <cit.>, where hydrogen lines are computed considering case B recombination, and the relative line intensities of He and metals as a function of metallicity are taken from <cit.> and assumed to be independent of f_esc, as expected in ionization-bounded nebulae; (2) by modeling a density-bounded nebula with CLOUDY <cit.>, adopting the same assumptions as described in <cit.> and fixing the ionization parameter at log(q/cm s^-1)=7.75. Stellar templates from the BC03 library and a constant SFH with a minimum age=10 Myr are assumed in both cases, considering E(B-V)=0.15 (for Z=0.02 Z_⊙) and E(B-V)=0.10 (Z=0.2 Z_⊙), because this is the lowest value allowed by the observed UV slope at age=10 Myr, where the line EW is the largest for any f_esc. In Fig. <ref> we compare the observed stacked colors of our samples with the colors predicted for our reference models of radiation-bounded (top panel) and density-bounded (bottom) nebulae. In both cases, we find that the EW([OIII]+Hβ) of the Lyα-emitting stack is best reproduced by models with null escape fractions: it is consistent with f_esc up to 20%, but only for extremely young and probably unrealistic ages, especially in the radiation-bounded nebulae scenario (∼10 Myr). On the other hand, the CH1-CH2 color of the Lyα-undetected stack is compatible with a larger f_esc from very young and metal-poor galaxies, or with a similar f_esc<20-40% for ages up to >100 Myr in the density-bounded case. We further explored how different physical conditions in the HII regions can affect the emission line strength, and thus the IRAC colors, using CLOUDY. In particular, since it has been suggested that high-redshift star-forming regions might be characterized by more extreme conditions <cit.>, we assume a harder ionization field log(q/cm s^-1)=9.0 and a higher density n=1000 cm^-3. The results relevant to the present case are shown in the left panel of Fig. <ref>; a thorough investigation of the ISM conditions will be presented in a forthcoming paper (De Barros S. et al. 2017, in preparation). We find that for Lyα-emitting sources f_esc>50% can still be excluded at any age, while they are compatible with f_esc≲30% at young ages (<20 Myr). We then performed the same calculation described above using templates from the BPASSv2.0 library, including the effect of interacting binary stars, which can also significantly affect the emission budget of ionizing photons at high redshift <cit.>. As shown in the right panel of Fig. <ref>, the boosted ionizing flux in the BPASS templates yields ∼0.2 mag bluer colors than the BC03 ones. In any case, even in the most favorable ionizing conditions, we can basically exclude an escape fraction larger than 50% at all ages. This test highlights that not only a variation in the escape fraction, but also different physical properties of the HII regions, can contribute to explaining the different IRAC colors of Lyα-detected and Lyα-undetected sources. Clearly, only future spectroscopic investigations of the optical rest-frame emission will be able to assess the physical conditions of primordial HII regions and the link between Lyα emission, gas properties, and f_esc. If confirmed, a larger f_esc in Lyα-undetected sources would provide evidence of a scenario with a milder evolution of the neutral hydrogen fraction, as suggested by <cit.>.
In particular, the "density bounded" leakage case can be probed by future JWST mid-infrared spectroscopic observations disentangling the strong combined EW([OIII]+Hβ) detected in these galaxies, to look for non-standard [OIII]/Hβ and [OIII]/[OII] ratios as indirect tracers of high f_esc <cit.>. Interestingly, the similar EW([OIII]+Hβ) inferred for bright sources regardless of their Lyα emission suggests that f_esc or physical differences might involve only sub-L^* galaxies, while other factors, including IGM transmission, affect the Lyα visibility of bright ones.

§.§ The Specific Star-formation Rate of Reionization Galaxies

Our sample of spectroscopically confirmed high-redshift sources allows us, for the first time, to constrain the SSFR during the reionization epoch from a homogeneously selected sample of objects with secure redshifts. On the one hand, the strength of the optical line emission can be used as a star-formation rate indicator. On the other hand, the continuum emission in the 4.5μm band corresponds to the optical rest-frame emission and can be used as a proxy of the total stellar mass. As a first estimate, we compute a conservative lower limit on the SSFR, solely based on the stacked IRAC photometry <cit.>. We first build a library of constantly star-forming models from both the BC03 and BPASSv2.0[We compute the mass normalization of BPASSv2.0 constant-SFR templates assuming a 30% mass fraction recycled in the ISM <cit.>.] libraries at different ages that we use as a reference to estimate SFR and stellar mass. We assume a Salpeter IMF and consider models with E(B-V) from 0 to 1 and metallicity Z=0.02, 0.2, 1.0 Z_⊙ (for BC03) or Z=0.001, 0.004, 0.02 (BPASSv2.0). The SFR is obtained from the IRAC color after converting the corresponding EW([OIII]+Hβ) into Hα luminosity assuming standard line ratios <cit.> and a redshift of z=6.7, which is the average value of the Lyα-detected sample. The stellar mass is obtained by computing the relevant conversion with respect to the mid-IR continuum luminosity probed by the CH2 band. Among all considered models, we look for the one yielding the lowest SSFR, which we can safely assume as a conservative lower limit for the typical SSFR at these redshifts. We find minimum values of SSFR=9.1 Gyr^-1 and SSFR=10.5 Gyr^-1 from the BC03 and BPASS models, respectively, with a stellar mass of ∼2×10^9 M_⊙. Our analysis points to a larger SSFR with respect to the previous estimate from <cit.>, who used emission line signatures in seven LBG candidates at z∼6.6-7.0 to derive a lower limit of 4 Gyr^-1. An increased SSFR in low-luminosity galaxies might explain the difference between <cit.> (focused on L>L^* sources) and our sample, which includes fainter galaxies. In turn, this can be related to the bimodality found in z∼5-7 galaxies by <cit.>, with "old" (age>100 Myr) galaxies having SSFR∼3-4 Gyr^-1, and young (age<30 Myr) ones having a ten times larger SSFR. The real specific star-formation rate can be much higher than this limit. In fact, the nebular+stellar fit of the stacked SED yields an SSFR=103^+35_-39 Gyr^-1 <cit.>, which is a factor of ∼2 higher than the corresponding SSFR∼50 Gyr^-1 found by <cit.>, but similar to the SSFR of low-mass z>3 galaxies measured by <cit.>. We note that the SSFR we find for our Lyα-emitting z∼7 sources is comparable to estimates from other spectroscopically confirmed galaxies at z≳7, ranging from ∼10 to 20 Gyr^-1 <cit.> to values >100 Gyr^-1 <cit.>.
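The line-based step of this conversion chain (EW → Hβ → Hα → SFR) can be sketched as follows. The [OIII]/Hβ ratio below is a placeholder value (the actual analysis adopts the line ratios of the cited libraries and marginalizes over the full template grid), while the case B Hα/Hβ ratio and the Kennicutt (1998) Salpeter-IMF calibration are standard:

```python
def halpha_sfr_from_ew(ew_rest_tot, lum_cont, oiii_hb=5.0):
    """Sketch of the EW-to-SFR step.
    ew_rest_tot : rest-frame EW([OIII]+Hbeta) in Angstrom
    lum_cont    : continuum luminosity density near the lines, erg/s/A
                  (in practice inferred from the CH2 photometry)
    oiii_hb     : assumed [OIII]/Hbeta ratio (placeholder value)
    Returns (L_Halpha in erg/s, SFR in Msun/yr)."""
    ew_hbeta = ew_rest_tot / (1.0 + oiii_hb)  # EW(Hbeta) from the blend
    l_halpha = 2.86 * ew_hbeta * lum_cont     # case B: Ha/Hb = 2.86
    sfr = 7.9e-42 * l_halpha                  # Kennicutt (1998), Salpeter IMF
    return l_halpha, sfr
```

The SSFR then follows by dividing this SFR by the stellar mass inferred from the CH2 continuum, as described above.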
High SSFRs at these redshifts are also favored by the z∼3-6 redshift trend presented in <cit.>.

§ SUMMARY AND CONCLUSIONS

We have analyzed the IRAC 3.6μm-4.5μm color to gather information on the optical line emission of a sample of z∼7 galaxies in the CANDELS GOODS and UDS fields that have been targeted by a spectroscopic campaign to detect their Lyα line. After dividing the sample into Lyα-detected (10 sources) and -undetected (23 sources at 6.4<z_phot<7.0) subsamples, we built stacked images in the V606, I814, J125, and H160 HST bands and in the four IRAC channels at 3.6-8.0 μm. We analyzed the SEDs and the colors of the stacked sources, finding the following.

* There is evidence of strong [OIII]+Hβ emission in the average (stacked) SEDs both of galaxies with detected Lyα emission and of those lacking the Lyα line. On the basis of the χ^2, the SED fitting including the nebular contribution is clearly preferred with respect to stellar-only models. The stacked V606 band from objects lacking the Lyα line confirms the reliability of these sources as high-redshift candidates through a deep non-detection at mag>31.4, corresponding to a V606-H160 color ≃5.

* The CH1-CH2 color is bluer (-1.0±0.21) for the average object with a detected Lyα line than for non-emitting sources (-0.47±0.11). The IRAC colors can be translated into equivalent widths EW([OIII]+Hβ)=1500^+530_-440 Å (Lyα emitters) and EW([OIII]+Hβ)=520^+170_-150 Å (non-emitters), assuming a flat intrinsic stellar continuum. Optical emission lines appear stronger in the subsamples of faint (26.0<H_160<27.5) objects, with the average color of bright (H_160<26.0) sources compatible with stellar-only emission from low-metallicity young galaxies. Bright galaxies with and without confirmed Lyα emission show similar CH1-CH2 colors, such that the difference between the two populations effectively lies in the faint subsamples.

* The different IRAC color between the two populations can most likely be explained by a difference in the physical conditions of the HII regions, with Lyα-emitting galaxies being younger and/or more metal-poor, and thus with harder ionization fields, or by a larger escape fraction in non-emitting sources. A possible dilution of the line signature due to z>7 galaxies in the photometric-redshift sample cannot be excluded.

* The strong signature of optical line emission of Lyα-detected objects yields a limit of f_esc≲20% on the escape fraction of ionizing photons from these objects, both in the case of radiation-bounded and of density-bounded HII regions. A larger f_esc limit (≲50%) is found when assuming the extreme case of very high density and ionization parameter and the contribution from interacting binaries to the ionizing flux. The optical line emission from Lyα-undetected sources can be explained by a larger f_esc from very young and metal-poor galaxies, or with a similar f_esc<20-40% for ages up to ∼80-130 Myr. These results are qualitatively in agreement with the scenario suggested by <cit.> of a combined evolution of f_esc and the neutral hydrogen fraction explaining the lack of bright Lyα emission at z>6.

* By using only the spectroscopically confirmed objects, we derive SSFR=103^+35_-39 Gyr^-1 for M_star=5×10^8 M_⊙ galaxies at z∼6.7 from the stacked SED, and a robust lower limit of SSFR=9-10 Gyr^-1 (depending on the assumed library) under the most conservative assumptions on the conversion factors used to derive SFR and stellar mass using only information from the mid-IR photometry.
Mid-IR spectroscopy with JWST is clearly needed to move beyond constraints from broadband observations. In this respect, it is interesting to note that the strength of the optical line signature found in our sample implies typical [OIII] and Hβ fluxes of ∼10^-16-10^-17 erg/s/cm^2. Such bright lines can be detected at high S/N by NIRSpec with a few minutes of integration time[https://jwst.etc.stsci.edu], allowing us to fully constrain the dependence of Lyα emission on physical properties and to look for unusual line ratios as a signature of a large escape fraction from density-bounded regions.

We thank R.J. McLure for kindly providing IRAC mosaics of GOODS-South, and J.J. Eldridge for assistance in using the BPASS library. S.D.B. thanks Gary Ferland and the organizers of the 2015 Belfast CLOUDY Winter School as well as Kimihiko Nakajima for their support regarding CLOUDY simulations. K.C. acknowledges funding from the European Research Council through the award of the Consolidator Grant ID 681627-BUILDUP. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 312725. This work is based on data obtained with ESO Programmes 084.A-0951, 085.A-0844, 086.A-0968, 088.A-1013, 088.A-0192, and 190.A-0685.

[Anders&Fritze-v. Alvensleben(2003)]Anders2003Anders, P., &Fritze-v. Alvensleben, U. 2003, , 401, 1063[Ashbyet al.(2013)Ashby, Willner, Fazio, Huang, Arendt, Barmby, Barro, Bell, Bouwens, Cattaneo, Croton, Davé, Dunlop, Egami, Faber, Finlator, Grogin, Guhathakurta, Hernquist, Hora, Illingworth, Kashlinsky, Koekemoer, Koo, Labbé, Li, Lin, Moseley, Nandra, Newman, Noeske, Ouchi, Peth, Rigopoulou, Robertson, Sarajedini, Simard, Smith, Wang, Wechsler, Weiner, Wilson, Wuyts, Yamada, &Yan]Ashby2013Ashby, M. L. N., Willner, S. P., Fazio, G. G., et al. 2013, , 769, 80[Ashbyet al.(2015)Ashby, Willner, Fazio, Dunlop, Egami, Faber, Ferguson, Grogin, Hora, Huang, Koekemoer, Labbé, &Wang]Ashby2015—. 2015, , 218, 33[Bouwenset al.(2015)Bouwens, Illingworth, Oesch, Caruana, Holwerda, Smit, &Wilkins]Bouwens2015bBouwens, R. J., Illingworth, G. D., Oesch, P. A., et al. 2015, , 811, 140[Bruzual&Charlot(2003)]Bruzual2003Bruzual, G., &Charlot, S. 2003, , 344, 1000[Caputiet al.(2011)Caputi, Cirasuolo, Dunlop, McLure, Farrah, &Almaini]Caputi2011Caputi, K. I., Cirasuolo, M., Dunlop, J. S., et al. 2011, , 413, 162[Caruanaet al.(2012)Caruana, Bunker, Wilkins, Stanway, Lacy, Jarvis, Lorenzoni, &Hickey]Caruana2012Caruana, J., Bunker, A. J., Wilkins, S. M., et al. 2012, , 427, 3055[Castellanoet al.(2010)Castellano, Fontana, Paris, Grazian, Pentericci, Boutsia, Santini, Testa, Dickinson, Giavalisco, Bouwens, Cuby, Mannucci, Clément, Cristiani, Fiore, Gallozzi, Giallongo, Maiolino, Menci, Moorwood, Nonino, Renzini, Rosati, Salimbeni, &Vanzella]Castellano2010bCastellano, M., Fontana, A., Paris, D., et al. 2010, , 524, A28[Castellanoet al.(2014)Castellano, Sommariva, Fontana, Pentericci, Santini, Grazian, Amorin, Donley, Dunlop, Ferguson, Fiore, Galametz, Giallongo, Guo, Huang, Koekemoer, Maiolino, McLure, Paris, Schaerer, Troncoso, &Vanzella]Castellano2014Castellano, M., Sommariva, V., Fontana, A., et al. 2014, , 566, A19[Castellanoet al.(2016)Castellano, Dayal, Pentericci, Fontana, Hutter, Brammer, Merlin, Grazian, Pilo, Amorin, Cristiani, Dickinson, Ferrara, Gallerani, Giallongo, Giavalisco, Guaita, Koekemoer, Maiolino, Paris, Santini, Vallini, Vanzella, &Wagg]Castellano2016Castellano, M., Dayal, P., Pentericci, L., et al.
Chris-André Leimeister^1, Thomas Dencker^1, Burkhard Morgenstern^1,2

[1] University of Göttingen, Department of Bioinformatics, Goldschmidtstr. 1, 37077 Göttingen, Germany
[2] University of Göttingen, Center for Computational Sciences, Goldschmidtstr. 7, 37077 Göttingen, Germany

Anchor points for genome alignment based on Filtered Spaced Word Matches

Alignment of large genomic sequences is a fundamental task in computational genome analysis. Most methods for genomic alignment use high-scoring local alignments as anchor points to reduce the search space of the alignment procedure. Speed and quality of these methods therefore depend on the underlying anchor points. Herein, we propose to use Filtered Spaced Word Matches to calculate anchor points for genome alignment. To evaluate this approach, we used these anchor points in the widely used alignment pipeline Mugsy. For distantly related sequence sets, we could substantially improve the quality of alignments produced by Mugsy.

§ INTRODUCTION

Sequence comparison is one of the most fundamental tasks in computational biology. Here, a basic task is to align two or several DNA or protein sequences – either globally, over their entire length, or locally, by restricting the alignment to a single region of homology. Standard approaches to sequence alignment assume that the input sequences are derived from a common ancestral sequence, and that evolutionary events are limited to substitutions, insertions and deletions of single residues or small sequence segments. In this case, sequence homologies can be represented by global sequence alignments, that is, by inserting gap characters into the sequences such that evolutionarily related sequence positions are arranged on top of each other. Under most scoring schemes, calculating an optimal alignment of two sequences takes time proportional to the product of their lengths and is therefore limited to rather short sequences <cit.>.

With the rapidly increasing number of partially or fully sequenced genomes, alignment of genomic sequences has become an important field of research in bioinformatics, see <cit.> for a recent review and evaluation of some of the most popular approaches. Here, the first challenge is the sheer size of the input sequences, which makes it impossible to use traditional algorithms with quadratic run time. The second challenge is that related genomes often share multiple regions of local sequence homology, interrupted by non-conserved parts of the sequence where no significant similarities can be detected. This means that neither global nor local alignment methods can properly represent the homologies between whole genomes. Finally, evolutionary events such as duplications and large-scale rearrangements must be taken into account. Since it is not possible, in general, to represent homologies among genomes in one single alignment, advanced genome aligners return alignments of so-called Locally Collinear Blocks, i.e. blocks of segments of the input sequences that contain the same genes in the same relative order.

Since the late 1990s, major efforts have been made to address the problem of genome alignment, and many approaches have been published.
One of the first multiple-alignment programs that was applied to genomic sequences was DIALIGN <cit.>. This program composes multiple alignments from chains of local pairwise alignments, and it does not penalize gaps; it is therefore able to align sequences where local homologies are separated by long non-homologous segments. The program has been applied, for example, to identify small non-coding functional elements in genomic sequences <cit.>. However, the program was initially not designed for large genomic sequences, and it is limited to sequences up to around 10 kb. Moreover, DIALIGN is not able to deal with duplications, rearrangements and homologies on inverse strands of genomes.

To align longer sequences, most programs for genomic alignment rely on some sort of anchoring <cit.>. In a first step, they use a fast method for local alignment to identify high-scoring local homologies, so-called anchor points. Next, chains of such local alignments are calculated and, finally, sequence segments between the chained high-scoring local alignments are aligned with a slower but more sensitive alignment method. For multiple sequence sets, anchor points can be defined either between pairs of sequences or between several or all of the input sequences.

A pioneering tool to find anchor points for genomic alignment was MUMmer <cit.>; the current version of the program <cit.> is considered the state-of-the-art in alignment anchoring. MUMmer uses maximal unique matches as pairwise anchor points to align genomic sequences or protein sequences. By contrast, MGA <cit.> is a tool for multiple alignment of genomic sequences that uses maximal exact matches between all sequences within a given sequence set. Both MUMmer and MGA use suffix trees <cit.> to rapidly identify pairs or blocks of identical words, one word from each of the sequences, that are then used as anchor points. Both programs are able to align entire bacterial genomes; MUMmer was also used in the A. thaliana genome project <cit.>. However, since the probability of homologous exact matches rapidly decreases with increasing divergence, they are most useful to compare closely related genomes, such as different strains of E. coli.

Other approaches to genome alignment are OWEN <cit.>, AVID <cit.>, MAVID <cit.>, LAGAN and Multi-LAGAN <cit.>, CHAOS/DIALIGN <cit.>, the VISTA genome pipeline <cit.>, TBA <cit.> and Mauve <cit.>, see <cit.> for a review. All of these methods are based on alignment anchoring, and most of them are able to deal with duplications and genome rearrangements. Some methods for genomic alignments are based on statistical properties of the sequences <cit.>. Other methods are based on graphs, for example on A-Bruijn graphs <cit.> or on cactus graphs <cit.>. A further development of Mauve called progressiveMauve uses palindromic spaced seeds instead of exact word matches as anchor points <cit.>. That is, for a given binary pattern of length ℓ representing match and don't-care positions, one searches for a set of ℓ-mers, one ℓ-mer from each of the input sequences, such that all ℓ-mers have matching nucleotides at the match positions. At the don't-care positions, mismatches are allowed.
Palindromic patterns are used to cover both strands of the input sequences. Spaced seeds are used in database searching <cit.> and alignment-free sequence comparison <cit.> since they have been shown to lead to better results than contiguous word matches.

Mugsy <cit.> is a popular software pipeline for multiple whole-genome alignment. In a first step, this program uses Nucmer <cit.> to construct all pairwise alignments of the input sequences. Nucmer, in turn, uses MUMmer to find exact unique word matches which are used as alignment anchor points. An alignment graph is constructed from these pairwise alignments using the SeqAn software <cit.>, and Locally Collinear Blocks are constructed. Finally, a multiple alignment is calculated using SeqAn::TCoffee <cit.>. Mugsy has been designed to align closely related genomes, such as different strains of a bacterium. Here, it produces alignments of high quality. On more distantly related genomes, however, the program is often outperformed by other multiple genome aligners <cit.>.

Finding anchor points is the most important step in whole-genome sequence alignment. Here, a trade-off between speed, sensitivity and precision is necessary. A sufficient number of anchor points is required in order to reduce the search space and thereby the run time of the subsequent, more sensitive alignment routine. Wrongly chosen anchor points, on the other hand, can substantially deteriorate the quality of the final output alignment. If spurious similarities are used as anchor points, this not only results in non-homologous parts of the sequences being aligned; wrong anchor points may also prevent the program from aligning biologically relevant, true homologies, since aligning them may be incompatible with the selected anchors. Also, if the number of anchor points is too large, finding optimal chains of anchor points can become computationally expensive.

In this paper, we propose a novel algorithm to find pairwise anchor points for genomic alignments that is based on the Filtered Spaced Word Matches (FSWM) idea that we previously introduced <cit.>. Anchor points are calculated using a hit-and-extend approach where high-scoring spaced-word matches are used as seeds: for an underlying binary pattern of length ℓ representing match and don't-care positions, we rapidly identify spaced-word matches, i.e. length-ℓ segment pairs from the input sequences with matching nucleotides at the match positions but with possible mismatches at the don't-care positions. For each spaced-word match, we then calculate a similarity score considering all aligned positions – including the don't-care positions – and we keep only those spaced-word matches that have a score above a certain threshold. These segment pairs are then extended to locally-maximal gap-free alignments, similar as in BLAST <cit.>. To evaluate our anchoring approach, we used the Mugsy pipeline with our software in the initial step to find anchor points. For closely related input sequences, the quality of the resulting alignments is comparable to the original version of Mugsy, where exact word matches are used for anchoring. Our approach is far superior, however, if distantly related sequences are to be aligned, where most other alignment approaches either fail to produce alignments or require an unacceptable amount of time. Through our web site, we provide the adapted Mugsy pipeline with our anchoring approach as a readily installable pipeline for genome-sequence alignment.
A standalone version of our spaced-words software is provided as well, such that developers can integrate it into their own sequence-analysis pipelines.

§ FILTERED SPACED WORD MATCHES

For a sequence S of length L over an alphabet Σ and 0 < i ≤ L, S[i] denotes the i-th symbol of S. For integers w ≤ ℓ, a binary pattern P of length ℓ and weight w is a word over {0,1} of length ℓ such that there are exactly w indices i with P[i]=1. These positions are called match positions, while positions i with P[i]=0 are called don't-care positions. A spaced word with respect to a pattern P is a word w over Σ∪{∗}, where `∗' is a wildcard character not contained in Σ, and w[k]=∗ holds if and only if k is a don't-care position, i.e. if P[k]=0, see also <cit.>. A spaced word w with respect to a pattern P occurs in a sequence S at position i if S[i+k-1] = w[k] for all match positions k of the pattern P. For sequences S_1 and S_2 with lengths L_1 and L_2, respectively, a pattern P of length ℓ, and 1 ≤ i ≤ L_1-ℓ+1, 1 ≤ j ≤ L_2-ℓ+1, we say that there is a spaced-word match between S_1 and S_2 at (i,j) with respect to P if the spaced words at i in S_1 and at j in S_2 are identical – in other words, if for all match positions k in P, one has S_1[i+k-1] = S_2[j+k-1]. Below is a spaced-word match between two DNA sequences S_1 and S_2 at (5,2) with respect to the pattern P=1100101:

  S_1:  GCTGTATACGTC
  S_2:     STACACTTAT
  P:        1100101

Indeed, the spaced word `TA∗∗C∗T' occurs at position 5 in S_1 and at position 2 in S_2.

Herein, we propose to use spaced-word matches as a first step to calculate anchor points for pairwise alignment. We therefore need some criterion to distinguish between spaced-word matches representing true homologies and random background matches. In a previous paper, we used spaced-word matches to estimate phylogenetic distances between genomic sequences <cit.>. To this end, we first identified all spaced-word matches with respect to a given pattern P. To remove spurious random spaced-word matches, we applied a simple filtering procedure: using a nucleotide substitution matrix <cit.>, we calculated for each spaced-word match the sum of the scores of all aligned pairs of nucleotides (including match and don't-care positions), and we removed all spaced-word matches with a score below zero. A graphical representation of the spaced-word matches between two sequences shows that this procedure can clearly separate random spaced-word matches from true homologies. If we plot for each possible score value s the number of spaced-word matches with score s, we obtain a bimodal distribution with one peak for random matches and a second peak for homologies. We call such a plot a spaced-words histogram. For simulated sequence pairs under a simple model of evolution, both peaks are normally distributed. For real-world sequences, the random peak is still normally distributed, but the `homologous' peak is more complex, see Figure <ref>. Even so, using a cut-off value of zero can clearly distinguish between random matches and true homologies. More examples of spaced-words histograms are given in <cit.>.

Our approach to find anchor points for pairwise genomic alignment is as follows; before describing it in detail, we give a small code sketch of the basic matching-and-filtering step.
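The sketch below is our own illustration of spaced-word matching with a zero-score filter, not the actual FSWM implementation. The +5/-4 substitution scores are placeholders (the matrix actually used is given in the next paragraph), and the ambiguous first character of the example sequence S_2 is replaced by `C' so that the toy alphabet stays ACGT.

  # Illustrative sketch (not the FSWM implementation): enumerate spaced-word
  # matches between two sequences for a binary pattern P and keep only matches
  # whose total substitution score is non-negative.  Positions are 0-based
  # here, while the text uses 1-based positions.
  from collections import defaultdict

  def spaced_word(seq, i, pattern):
      # Spaced word of seq at position i: the characters at the match positions.
      return tuple(seq[i + k] for k, p in enumerate(pattern) if p == 1)

  def filtered_spaced_word_matches(s1, s2, pattern, score, threshold=0):
      ell = len(pattern)
      index = defaultdict(list)                 # spaced word -> positions in s1
      for i in range(len(s1) - ell + 1):
          index[spaced_word(s1, i, pattern)].append(i)
      for j in range(len(s2) - ell + 1):
          for i in index.get(spaced_word(s2, j, pattern), ()):
              # Score ALL aligned pairs, including the don't-care positions.
              total = sum(score[s1[i + k]][s2[j + k]] for k in range(ell))
              if total >= threshold:            # discard random background matches
                  yield i, j, total

  # Toy example with the pattern 1100101; the +5/-4 scores are placeholders.
  toy_score = {a: {b: (5 if a == b else -4) for b in "ACGT"} for a in "ACGT"}
  for hit in filtered_spaced_word_matches("GCTGTATACGTC", "CTACACTTAT",
                                          [1, 1, 0, 0, 1, 0, 1], toy_score):
      print(hit)   # includes (4, 1, ...), i.e., (5, 2) in 1-based coordinates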
For given parameters ℓ and w, we first calculate a binary pattern with length ℓ and weight (number of match positions) w using our recently developed software rasbhari <cit.>. We then identify all spaced-word matches with respect to P. To find homologies even for distantly related sequences, we use patterns with a low weight; by default, we use a weight of w=10. On the other hand, we use a large number of don't-care positions, since this makes it easier to distinguish true homologies from random spaced-word matches. By default, we use a pattern length of ℓ = 110, so our patterns contain 10 match positions and 100 don't-care positions. We use the following nucleotide substitution matrix described in <cit.>:

         A     C     G     T
   A    91  -114   -31  -123
   C         100  -125   -31
   G               100  -114
   T                      91

Based on this matrix, we calculate the score of each spaced-word match as the sum of the substitution scores of all aligned pairs of nucleotides. We then discard all spaced-word matches with a score below zero. Next, we extend the identified spaced-word matches in both directions without gaps. As the starting point for this extension, we do not use the full spaced-word matches, but their mid points. The reason for this is that, with our long patterns, even a high-scoring spaced-word match may not represent sequence homologies over its entire length. It often occurs that one part of a spaced word aligns homologous nucleotides, while another part extends into non-homologous regions of the sequences. There is a high probability, however, that the mid point of a long, high-scoring spaced-word match is located within a region of true homology. Finally, we use the produced `extended' gap-free alignments as anchor points for alignment.

§ EVALUATION

To evaluate Filtered Spaced Word Matches (FSWM) and to compare it to the state-of-the-art approach to alignment anchoring, we used the Mugsy software system. As mentioned above, the original Mugsy uses MUMmer to find pairwise anchor points. We replaced MUMmer in the Mugsy pipeline by our FSWM-based anchor points and evaluated the resulting multiple alignments. In addition, we compared these alignments to alignments produced by the multiple genome aligner Cactus <cit.>. Cactus is known to be one of the best existing tools for multiple genome alignment; it performed excellently in the Alignathon study <cit.>. To measure the performance of the compared methods, we used simulated genomic sequences as well as three sets of real genomes. To make MUMmer directly comparable to FSWM, we used a minimum length of 10 nt for maximal unique matches, corresponding to the default weight (number of match positions) used in Spaced Words. Note that, by default, MUMmer uses a minimum length of 15 nt. With this default value, however, we obtained alignments of much lower quality. The Cactus tool was run with default values.

§.§ Simulated genome sequences

To simulate genomic sequences, we used the Artificial Life Framework (ALF) developed by Dalquen et al. <cit.>. ALF evolves gene sequences based on a probabilistic model along a randomly generated tree, starting with an ancestral gene. During this process, evolutionary events are logged such that the true MSA is known for each simulated gene family. This true MSA can then be used as a reference to assess the quality of automatically generated alignments. We generated a series of 14 data sets, each containing 30 simulated `genomes', with increasing mutation rates for the different data sets.
For all other parameters in ALF, we used the default settings. In each data set, there are 750 simulated gene families such that one gene from each gene family is present in each of the 30 simulated genomes. Thus, each of the `genomes' contains the same set of 750 genes. We varied the mutation rates between an average of 0.1013 substitutions per position for the first data set and an average of 0.8349 substitutions per position for the 14th data set. The maximal pairwise distance between all pairs of sequences within one data set ranges from 0.1640 for the first to 1.0923 for the 14th data set. The simulated genes have an average length of about 1500 bp, summing up to a total size of about 32 MB per data set.

To assess the quality of the produced alignments, we calculated recall and precision values in the usual way. If, for one given data set, S is the set of all positions in the 30 simulated genomes, we denote by A ⊂ S^2 the set of all pairs of positions aligned by the alignment that is to be evaluated, while R ⊂ S^2 denotes the set of all pairs of positions aligned in the reference alignment. Recall and precision are then defined as

  recall = |A ∩ R| / |R|,   precision = |A ∩ R| / |A| .

The harmonic mean of recall and precision is called the balanced F-score and is often used as an overall measure of accuracy; it is thus defined as

  F_score = 2 × precision × recall / (precision + recall) .

To estimate these three values, we used the tool mafComparator, which was also used in the Alignathon study <cit.>. Since it is impractical to consider the entire set S^2 of pairs of positions of the test sequences, we sampled 10 million pairs of positions for each data set. This corresponds to the evaluation procedure used in Alignathon. The precision and recall values for the simulated sequence sets are shown in Figure <ref>. For data sets with smaller mutation rates, alignments obtained with FSWM are only slightly better than those obtained with MUMmer. However, as the mutation rate increases, our spaced-words approach substantially outperforms the original version of Mugsy, where exact word matches are used to find anchor points. Not only are more homologies detected, but the precision is also slightly higher when Filtered Spaced Word Matches are used instead of MUMmer.

§.§ Real-world genome sequences

For real-world genome families, it is usually not possible to calculate the precision of MSA programs, because it is, in general, not known which sequence positions exactly are homologous to each other and which ones are not. If there are core blocks of the sequences for which the biologically correct alignment is known, at least the recall can be calculated for these core blocks. For most genome sequences, however, no such core blocks are available. To evaluate Mugsy, the authors of the program used the number of core columns of the produced alignments as a criterion for alignment quality <cit.>. Here, a core column is defined as a column that does not contain gaps, i.e. a column that aligns nucleotides from all of the input sequences. In addition, the authors of Mugsy used the number of pairs of aligned positions of the aligned sequences as an indicator of alignment quality. In this paper, we use the same criteria to evaluate multiple alignments of real-world genomes. As a first real-world example, we used a set of 29 E. coli/Shigella genomes that has already been used in the original Mugsy paper, see the supplementary material for details; these sequences have also been used to evaluate alignment-free methods <cit.>. The total size of this data set is about 141 MB.
As a second test set, we used another prokaryotic data set, which consists of 32 complete Roseobacter genomes (details in the supplementary material). This data set was used to assess the performance on more distantly related organisms than the E. coli/Shigella strains. The total size of this data set is about 135 MB. To test our approach on eukaryotic genomes, we used as a third test case a set of nine fungal genomes, namely Coprinopsis cinerea, Neurospora crassa, Aspergillus terreus, Aspergillus nidulans, Histoplasma capsulatum, Paracoccidioides brasiliensis, Saccharomyces cerevisiae, Schizosaccharomyces pombe and Ustilago maydis (GenBank accession numbers are given in the supplementary material). The total size of this third data set is about 253 MB.

The results of Mugsy with MUMmer and FSWM for the three real-world data sets are shown in Table 1, together with the results obtained with Cactus. In addition to the number of core columns and the number of aligned pairs of positions, the table contains the number of core Locally Collinear Blocks, i.e. the number of Locally Collinear Blocks involving all of the input sequences, and the total number of Locally Collinear Blocks returned by the alignment programs.

§.§ Program run time

Table 2 reports the program run times of Mugsy with FSWM, Mugsy with MUMmer and Cactus on the above three real-world sequence sets. In addition, the table contains the run times for FSWM and MUMmer alone.

§ DISCUSSION

In this paper, we proposed a novel approach to calculate anchor points for genome alignment. Finding suitable anchor points is a critical step in all methods for genome alignment, since the selected anchor points determine which regions of the sequences can be aligned to each other in the final alignment. A sufficient number of anchor points is necessary to keep the search space and run time of the main alignment procedure manageable, so sensitive methods are needed to find anchor points. Wrongly selected anchor points, on the other hand, can seriously deteriorate the quality of the final alignments, so anchoring procedures must also be highly specific.

Earlier approaches to genomic alignment used exact word matches as anchor points <cit.>, since such matches can be easily found using suffix trees and related indexing structures. These approaches are limited, however, to situations where closely related genomes are to be aligned, for example different strains of a bacterium. In modern approaches to database searching, spaced seeds are used to find potential sequence homologies <cit.>. Here, binary patterns of match and don't-care positions are used, and two sequence segments of the corresponding length are considered to match if identical residues are aligned at the match positions, while mismatches are allowed at the don't-care positions. Such pattern-based approaches are more sensitive than previous methods that relied on exact word matches. We previously proposed to apply the `spaced-seeds' idea to alignment-free sequence comparison, by replacing contiguous words with so-called spaced words, i.e. words that contain wildcard characters at certain pre-defined positions <cit.>.
More recently, we introduced filtered spaced word matches <cit.> to estimate phylogenetic distances between genome sequences. In the latter approach, we first identify spaced-word matches using relatively long patterns with only few match positions. For the identified matching segments, we then look at all aligned pairs of nucleotides, including the ones at the don't-care positions, and we discard spaced-word matches if the overall degree of similarity between the two segments is below a threshold. Phylogenetic distances can be estimated based on the aligned nucleotides at the don't-care positions of the remaining spaced-word matches. We showed that this procedure is fast and highly sensitive, and that it can reliably distinguish between true homologies and spurious sequence similarities.

In the present study, we used filtered spaced word matches to calculate high-quality anchor points for genomic sequence alignment. Instead of using spaced-word matches directly as anchor points, we extend them in both directions, similar to the hit-and-extend approach to database searching. To evaluate these anchor points, we integrated them into the popular genome-alignment pipeline Mugsy. Test runs on simulated genome sequences show that, for closely related sequences, Mugsy produces alignments of high quality with both types of anchor points. For more distantly related sequences, however, the recall values of the program drop dramatically if anchor points are calculated with MUMmer, while with our spaced-word matches, one observes recall values close to 100% for distances up to around 0.7 substitutions per position.

For real-world genomes, it is more difficult to evaluate the performance of genome aligners, since there is only limited information available on which positions are homologous to each other and which ones are not. Angiuoli and Salzberg <cit.> therefore used the number of aligned pairs of positions as an indicator of alignment quality, together with the size of the `core alignment', i.e. the number of alignment columns that do not contain gaps. At first glance, these criteria might seem questionable; it would be trivial to maximize these values, simply by aligning sequences without internal gaps, adding gaps only at the ends of the shorter sequences. However, as shown in Figure <ref>, all MSA programs in our study have high precision values, i.e. positions aligned by these programs are likely to be true homologs. In this situation, the number of aligned position pairs and the size of the `core alignment' can be considered a proxy for the recall of the applied methods, i.e. the proportion of homologies that are correctly aligned.

For distantly related sequence sets, the total run time of Mugsy is much higher with our FSWM anchoring approach than with MUMmer. One reason for the increased run time with FSWM is the fact that, with spaced words, far more Locally Collinear Blocks are detected than if exact word matches are used as anchor points, especially for distantly related sequences where exact word matching is not very sensitive. One possible solution for this issue would be to apply user-defined threshold values for the total number of returned Locally Collinear Blocks or for their similarity scores, to reduce the run time of the final alignment procedure for large genomic sequences.
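For completeness, the hit-and-extend step at the core of our anchoring procedure can be sketched as follows. This is our own illustration under stated assumptions: the text does not detail the stopping rule of the ungapped extension, so a BLAST-like X-drop criterion with an assumed x_drop value is used here.

  # Sketch of an ungapped ("gap-free") extension from the mid point of a
  # filtered spaced-word match, using a BLAST-like X-drop stopping rule.
  # The x_drop value is an assumed parameter, not one quoted in the text.
  def extend_gapfree(s1, s2, i_mid, j_mid, score, x_drop=20):
      def walk(direction):                       # direction: +1 right, -1 left
          run = best = 0
          k = best_k = 0
          while True:
              i = i_mid + direction * (k + 1)
              j = j_mid + direction * (k + 1)
              if not (0 <= i < len(s1) and 0 <= j < len(s2)):
                  break
              run += score[s1[i]][s2[j]]
              k += 1
              if run > best:
                  best, best_k = run, k
              elif best - run > x_drop:          # stop when the score drops too far
                  break
          return best_k, best
      left_k, left_score = walk(-1)
      right_k, right_score = walk(+1)
      total = left_score + right_score + score[s1[i_mid]][s2[j_mid]]
      # (start in s1, start in s2, length, score) of the extended segment pair
      return i_mid - left_k, j_mid - left_k, left_k + right_k + 1, total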
Department of Physics and Astronomy, Seoul National University, Gwanak, Seoul 151–742, South Korea; yoonyoung@astro.snu.ac.kr, ishiguro@astro.snu.ac.kr
Faculty of Engineering, Kindai University, Hiroshima Campus, 1 Takaya Umenobe, Higashi-Hiroshima, Hiroshima 739–2116, Japan
Department of Planetology, Kobe University, Nada, Kobe 657–8501, Japan

We revisited a mass ejection phenomenon that occurred on asteroid P/2010 A2 in terms of the dynamical properties of the dust particles and large fragments. We constructed a model assuming anisotropic ejection within a solid cone-shaped jet and succeeded in reproducing the time-variant features in archival observational images over ∼3 years from 2010 January to 2012 October. When we assumed that the dust particles and fragments were ejected in the same direction from a point where no object had been detected in any observations, the anisotropic model could explain all of the observations, including (i) the unique dust cloud morphology, (ii) the trail surface brightness, and (iii) the motions of the fragments. Our results suggest that the original body was shattered by an impact with a specific energy of Q^∗ 350 J kg^-1, and that remnants of slow antipodal ejecta (i.e., anisotropic ejection in our model) were observed as P/2010 A2. The observed quantities are consistent with those obtained through laboratory impact experiments, supporting the idea that the P/2010 A2 event is the first evidence of impact shattering occurring in the present main asteroid belt.

§ INTRODUCTION

P/2010 A2 (LINEAR) is the fifth recognized "active asteroid". It was discovered on 2010 January 6 by the Lincoln Near Earth Asteroid Research (LINEAR) survey <cit.>. It orbits in the main asteroid belt but exhibited a comet-like dust ejection <cit.>. After the discovery, intensive follow-up observations were performed using a variety of ground-based and space telescopes to reveal the mass ejection mechanism. It displayed a distinctive dust cloud morphology (see Figure <ref>) with a prominent point-like source approximately 120 m in diameter (hereafter, the largest fragment, LF) at the leading edge of the dust trail. Two arc-like structures were noticed at the eastern edge of the dust trail (arcs A and B). In addition, several decimeter-sized or larger sub-fragments were found along the arcs <cit.>. It was also noticed that a fainter structure (hereafter, the outer diffuse source) extended more widely than the dust trail <cit.>.

The mass ejection mechanism of the asteroid still remains inconclusive, despite several efforts to understand the cause through dynamical modeling of the dust particles and fragments. In an early study, <cit.> considered that the dust particles were ejected continuously by sublimation of ice. Later, it became clear that the mass was ejected impulsively rather than continuously, on a day in 2009 February or March, by either an impact or rotational breakup <cit.>. <cit.> argued that the arcs are associated with an impact hollow cone, suggesting that a decameter-sized crater was formed on the LF. On the other hand, <cit.> studied the motions of sub-fragments using a series of high-resolution images taken by the Hubble Space Telescope (HST) from 2010 January to May and suggested that the mass was ejected in the equatorial plane of the LF by centrifugal force (called rotational breakup).
<cit.> further obtained an observational image in 2012 October with the 10 m Keck I telescope and conducted a model simulation of dust particles to understand the observed image. They claimed that their observation was consistent with an impact close to the shattering threshold, although they could not rule out the possibility of a rotational breakup. Therefore, the interpretations of the comet-like activity are inconsistent, that is, either impact shattering, cratering or rotational breakup.

Here, we note that none of the previous dynamical models is successful in reproducing the multi-epoch observed features (i.e., the time-variant dust cloud morphology, the trail surface brightness and the motions of fragments). For example, <cit.> did not deal with observation images taken before 2012, while <cit.> does not seem to replicate the trail surface brightness in 2012 (see Figure 8 in the reference). With the exception of <cit.>, no studies considered the motions of the fragments. To complement the incomplete modeling and elicit further information about this mysterious phenomenon, we revisited archival observations taken from 2010 January to 2012 October, together with our unpublished observation conducted with the 8.2 m Subaru telescope in 2011 June (see Figure <ref>, Table <ref>, and the abbreviations therein), and constructed a new model that can replicate these time-variant features in the archival observation data over ∼3 years since the discovery.

The rest of this paper is organized as follows. Section <ref> describes the model details, and Section <ref> presents the results of the dust modeling and further analysis of the fragment motion. We discuss the results based on the knowledge obtained through laboratory impact experiments in Section <ref> and summarize the findings and their physical implications in Section <ref>.

§ MODEL DESCRIPTION

To understand the time-variant morphology and surface brightness distribution over the entire period of the available observations (i.e., ∼3 years from 2010 January to 2012 October) in a comprehensive manner, we conducted a dynamical simulation of dust particles taking into account solar gravity and radiation pressure <cit.>. The trajectory of a dust particle is determined by the particle radius and the ejection velocity <cit.>. The size of the particle can be parameterized by β, the ratio of the radiation pressure acceleration to the solar gravity. For a spherical particle, β is given by

  β = K Q_pr / (ρ_d a_d) ,

where a_d and ρ_d are the particle radius and the mass density in MKS units, respectively. For large fragments, the β values can be approximated as zero. Because the P/2010 A2 dust particles have a composition similar to ordinary chondrites <cit.>, we assumed ρ_d = 3000 kg m^-3, a typical value for their bulk density <cit.>. This assumption is consistent with previous research <cit.>. K = 5.7 × 10^-4 kg m^-2 is a constant, and Q_pr is a radiation pressure coefficient that we considered to be unity <cit.>. We supposed an impulsive dust ejection on 2009 March 2, following the previous research <cit.>. We employed a size-dependent terminal speed for the dust particles:

  V_ej = V_0 (β/β_min)^u_1 ,

where V_0 is the reference ejection speed for the largest particles (β=β_min). The exponent u_1 is the power-index of the size-dependent ejection speed.
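As a quick numerical illustration of the two relations above, the following sketch (our own, using only the constants quoted in the text) links β, the grain radius, and the ejection speed; the example value β_max = 7×10^-4 is the one derived later in Section 3.

  # Minimal sketch relating beta, grain radius and ejection speed, using the
  # constants quoted in the text (K, Q_pr, rho_d).
  K, Q_PR, RHO_D = 5.7e-4, 1.0, 3000.0       # kg m^-2, dimensionless, kg m^-3

  def radius_from_beta(beta):
      """Grain radius a_d [m], inverting beta = K*Q_pr/(rho_d*a_d)."""
      return K * Q_PR / (RHO_D * beta)

  def ejection_speed(beta, v0, beta_min, u1):
      """Size-dependent terminal speed V_ej [m/s] = V_0 (beta/beta_min)^u_1."""
      return v0 * (beta / beta_min) ** u1

  # For beta_max = 7e-4 (derived later in the text), the smallest grains have
  print(radius_from_beta(7e-4))              # ~2.7e-4 m, i.e., ~270 micrometers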
The number of dust particles is given by

  N(a_d) da_d = N_0 (a_d/a_0)^-q da_d ,

in the size range a_min ⩽ a_d ⩽ a_max, where a_min and a_max are the minimum and maximum particle sizes given by a_min = K Q_pr/(ρ_d β_max) and a_max = K Q_pr/(ρ_d β_min), respectively. q is the power-index of the differential size distribution, and N_0 is the reference number of dust particles at the reference size a_0 = 1 m.

The assumptions and model above are, in principle, the same as those in <cit.>, where a size-independent (i.e., u_1=0 in Equation (<ref>)) "isotropic" ejection was assumed (see also Table <ref> for the model parameters of the isotropic model). Namely, they considered that the dust particles were ejected in every direction, without considering the size dependence of the ejection speed. In addition, <cit.> assumed that the dust particles were ejected from the observed position of the LF. Figure <ref> shows a comparison between the isotropic model and the observations at different epochs. It is true that the isotropic model reproduces the observed morphology and surface brightness distribution in the Keck-2012 image well, but in our opinion, it does not satisfactorily explain the time evolution of the dust cloud. In particular, we would like to draw attention to the unique morphology observed in early 2010 (the arcs), which is not present in the isotropic ejection model images of the early 2010 observations. Therefore, certain modifications are required to ensure consistency with the observations in early 2010.

There are several ideas for the creation of the arc-like features. For example, assuming little or no radiation pressure, it would be possible to replicate arc features that are consistent with the images at a single epoch or for a certain short time duration <cit.>. However, as we mentioned above, these models do not reproduce the morphology and brightness distribution at different epochs. We noticed that arc A can be produced without invoking elaborate dust ejection models when we assumed a simple cone-shaped jet with a half opening angle of w (not an impact hollow cone but a solid cone), although we initially did not consider the physical implication of the ejection model. Figure <ref> shows examples of model images with a hemispherical (i.e., w=90°) dust ejection, considering different orientations of the central axis of the cone-shaped jet. It is clear that some model images show arc A at the eastern edge of the trail structure, similar to the observations in early 2010 (see Figure <ref> (a) and (b)). We noticed that the brightness enhancement of arc A can be explained by the high existence probability of the largest particles, which were ejected at similar ejection velocities. Such particles tend to form a cut end of the dust trail at the leading edge when they are ejected toward the trailing direction of the orbital motion. For this reason, we modified the isotropic model into what we call an "anisotropic model", where we assumed that dust particles were ejected symmetrically with respect to a direction in the inertial frame (i.e., toward right ascension α_jet and declination δ_jet) in a cone-shaped jet with a half opening angle of w, and searched for the best-fit parameter set to match the observed data. In addition, we left the position of the dust source (i.e., the dust ejection point, DEP) unconstrained, so as not to be fixated on previous ideas (although the dust particles were assumed to have originated from the LF in previous publications).
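For illustration, a minimal Monte Carlo sketch of the anisotropic ejection geometry is given below. It is our own sketch (not the simulation code used in this work): it draws unit ejection vectors uniformly within a solid cone of half opening angle w about the axis (α_jet, δ_jet), and the numerical values in the example are the best-fit values quoted later in the text.

  # Draw n unit ejection vectors uniformly within a solid cone of half
  # opening angle w about the direction (alpha_jet, delta_jet).
  import numpy as np

  def cone_directions(alpha_deg, delta_deg, w_deg, n, seed=0):
      rng = np.random.default_rng(seed)
      a, d, w = np.radians([alpha_deg, delta_deg, w_deg])
      # Unit vector of the jet axis in equatorial rectangular coordinates.
      axis = np.array([np.cos(d) * np.cos(a), np.cos(d) * np.sin(a), np.sin(d)])
      # Uniform sampling over the spherical cap: cos(theta) uniform in [cos w, 1].
      cos_t = rng.uniform(np.cos(w), 1.0, n)
      sin_t = np.sqrt(1.0 - cos_t ** 2)
      phi = rng.uniform(0.0, 2.0 * np.pi, n)
      # Orthonormal basis (e1, e2) perpendicular to the axis.
      e1 = np.cross(axis, [0.0, 0.0, 1.0])
      e1 /= np.linalg.norm(e1)
      e2 = np.cross(axis, e1)
      return (cos_t[:, None] * axis
              + sin_t[:, None] * (np.cos(phi)[:, None] * e1
                                  + np.sin(phi)[:, None] * e2))

  vecs = cone_directions(35.0, 35.0, 25.0, 10000)   # (alpha_jet, delta_jet, w)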
To keep the width of the dust trail constant in the model, we imposed a constraint on the reference ejection speed of V_0 ∝ sin(w)^-1 <cit.>. Once we obtained the positions of the dust particles in celestial coordinates by analytically solving the Kepler equation with solar radiation pressure, we calculated the cross-sectional areas of the dust particles in the CCD coordinate system:

  C_pixel(x,y) = ∫_a_min^a_max N_cal(a_d,x,y) π a_d^2 da_d ,

where N_cal(a_d,x,y) is the number of dust particles counted within a pixel at the coordinates (x, y) in a CCD image. A full list of the free parameters and the tested ranges is shown in Table <ref>.

§ RESULTS

§.§ Dust Cloud Morphology and Surface Brightness

Among the eight unknown parameters in our model above (α_jet, δ_jet, w, β_max, β_min, V_0, u_1, q), we first determined the three parameters of the cone-shaped jet. We measured the position angle (PA) connecting the two edges of arc A, from the east edge to the west edge, in the Gemini-2010 observation images: PA = 45.0° with respect to the south direction. We then created a number of simulation images with a hemispherical (i.e., w=90°) dust ejection, considering different orientations of the central axis of the cone-shaped jet (α_jet, δ_jet), and measured the PA of each modeled arc (Figure <ref>). We found that the best fit to arc A is obtained when the particles are ejected in a jet direction of 25° ⩽ α_jet ⩽ 40° and 30° ⩽ δ_jet ⩽ 40°. The PA was independent of w. Assuming a jet direction of (35°, 35°), w was derived using the curvature κ of a fourth-order polynomial f(x) fitted to the (x, y) positions of the modeled arcs, where we reversed x and y (i.e., (y, x)), evaluated at the point x=x_0 most distant from the line connecting the two edges of the arc:

  κ = |f''(x)| / [1+f'(x)^2]^(3/2) ,

where we measured κ=4.07×10^-2 from the Gemini-2010 observation images. We obtained 20° ⩽ w ⩽ 30°. Repeating the process using different jet directions within the plausible ranges gives the same solution for w. Once we fixed (α_jet, δ_jet, w), we obtained the DEP as an outcome (see Section <ref> for details).

Second, we determined the smallest particle size (β_max) and the power-index of the size-dependent ejection speed (u_1) using the Subaru-2011 image. Small particles are susceptible to the solar radiation pressure and difficult to detect in images with narrow fields of view (FOV). We found that the Subaru-2011 image, taken with the wide-field camera Suprime-Cam, was the best to determine the smallest particle size because it has the largest orbital coverage (a delta mean anomaly of 0.42). We obtained β_max=7×10^-4 (a_d=270 μm) from the comparison of the observation and model images, in order to explain the existence of the dust trail that extended beyond the FOV of the Subaru-2011 image.

Figure <ref> shows the width of the trail as a function of distance from the reference point (i.e., the DEP), where we measured the width from the FWHM of a series of surface cut profiles perpendicular to the trail. In the figure, the trail widens sharply at 40–80 (open circles) and moderately beyond 80 (filled circles) as the distance increases. We ignored the data at 40–80, because we might have sampled data from the fine structures (i.e., arcs A and B), and fitted the slope with a power-law function (dashed line). Since smaller particles are distributed farther via the solar radiation pressure, the observed widening beyond 80 suggests that the smaller particles were ejected with higher ejection velocities.
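A minimal sketch of this width–distance power-law fit is given below (our illustration; `dists` and `widths` are placeholders for the FWHM measurements beyond the fine structures, which are not tabulated here).

  # Sketch of the power-law fit width = A * distance**u1, done as a least
  # squares fit in log-log space.  `dists` and `widths` are placeholders.
  import numpy as np

  def fit_power_law(dists, widths):
      u1, log_a = np.polyfit(np.log(dists), np.log(widths), 1)
      return u1, np.exp(log_a)

  # u1, A = fit_power_law(dists, widths)   # the text quotes u1 = 0.10 +/- 0.02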
We derived the power-index of the size-dependent ejection speed, u_1=0.10±0.02, through a power-law fit to the relation between width (i.e., ejection speed perpendicular to the orbital plane) and distance (i.e., proportional to β, assuming that the dust motion parallel to the orbital plane is determined by the radiation pressure acceleration). Once we determined these five parameters (α_jet, δ_jet, w, β_max and u_1), we deduced plausible values of V_0, β_min and q, considering both the morphology and the surface brightness of the dust trail at multiple observation epochs. Through comparison with our dynamical model, we derived V_0 and β_min by fitting the overall trail width and the extent of arc A (i.e., the cut end of the trail), which is sensitive to the ejection speed of the largest trail particles. The best-fit V_0 and u_1 result in a maximum speed for the smallest particles of ∼0.50 m s^-1.

In Table <ref>, we summarize the best-fit parameters for our dynamical model. The best-fit model shows good agreement with the observed morphology at all epochs in 2010–2012 (middle row of Figure <ref>). Moreover, the model also fits the surface brightness distribution in a broad sense (Figure <ref>), although there are modest differences near the peaks, which could be reduced by fine tuning the number of dust particles around the maximum size. Because our new model parameters were obtained by following the isotropic model, the results (especially the size and the size distribution) are consistent with those in <cit.>. However, the simple modification to anisotropic dust ejection resulted in a remarkable improvement in reproducing the observed morphology (the trail and arc A) at all observation epochs over ∼3 years. We emphasize that this is the first model that demonstrates consistency with both the time-variant morphology and the surface brightness profiles simultaneously. The remaining feature not reproduced by our model is arc B, which will be discussed later.

§.§ The Location of the LF and the Dust Ejection Point (DEP)

As mentioned in Section <ref>, we did not specify any "detectable objects" as the dust source when we determined the model parameters in Section <ref>. Since the LF had been considered the dust source in all previous research, it is important to examine the location of the LF with respect to the dust cloud in our model simulation. We hence calculated the positions of ≈10^6 test particles using the same anisotropic model, where we fixed β=0 and V_ej=V_0 to take into account only the large particles, which are unaffected by solar radiation pressure. We found that the large test particles (β=0) tend to appear along an arc with high probability, where the arc is morphologically identical to arc A (β=β_min) with a negligible offset of ∼0.4 (i.e., unresolved under the ground-based seeing disk size), because such large dust particles (β≤10^-6) are almost stationary against the solar radiation pressure. Figure <ref> (bottom row) shows the existence probability of the LF together with the observation images (top) and the dust model images (middle) for three different epochs in 2010, 2011 and 2012. Over the years, the positions of the largest test particles show good agreement with the modeled arc A. From this result, we conjecture that intermediate-sized fragments (from decimeter- to hundred-meter-sized bodies) may be distributed along arc A with high probability, but that only the LF had been detected because it is brighter than the detection limits of these observations.
In Figure <ref>, crosses show the location of the DEP. It is important to note that the DEP deviates from the observed position of the LF. This result implies that the dust particles were not ejected from the LF, but from a position where no object was detected in any observation (discussed further below). Similar trends can be seen in the surface brightness profiles (Figure <ref>). Although the derived parameters are very similar between the isotropic model and our anisotropic model, the two differ in that the former assumed dust ejection from the LF, while the latter assumes ejection from a position where no object has been detected.

We examined the orbital difference between the LF and the DEP. We analyzed the positions of the LF and the DEP in ∼40 observation epochs, together with the corresponding model images, and derived the orbital elements in the J2000 coordinate system using the Find_Orb[http://www.projectpluto.com/find_orb.htm] software package. Table <ref> shows the osculating orbital elements (a,e,i) of the LF and the DEP. We found the best set of orbital elements with a negligible residual in the celestial plane (∼0.7). The orbital elements of the DEP are consistent with those of the LF down to the ∼4th decimal place, but differ significantly at the level of the uncertainties (around the ∼5th decimal place, from the NASA/JPL Small-Body Database Browser[http://ssd.jpl.nasa.gov/]). To confirm whether the orbital elements of the LF are also usable in our dust model, we performed another set of anisotropic model simulations using the newly derived orbital elements for the DEP, but we could not find any notable difference in the modeled dust morphology. This means that the small difference in the orbital elements does not change the above results of our dust model simulation.

§.§ Motions of the Fragments

We performed further analysis of the motions of the fragments distributed along arcs A and B, following the designation in <cit.>. The motions of the fragments were thoroughly studied in <cit.>, where the LF was regarded as the source and the motions were investigated with respect to the LF. However, because in our analysis the fragments and dust particles were not ejected from the LF, we reconsider the motions with respect to the DEP derived above. Figure <ref> (a) shows the observed sky-plane trajectories of the fragments from UT 2010 January 25 to May 8 with respect to the DEP, with the positions at the first epoch of the HST-2010 image shown as filled circles. We show the motions not only of these fragments (labeled A1, A2, A3, AB, B1, B2 and B3) but also of the LF, because the LF is no longer the source of the materials. We found that all fragments were moving toward the northwest in the observed frame.

To interpret these motions, we examined the trajectories of the fragments (i.e., β=0) through a dynamical analysis. Figure <ref> (b) shows the calculated trajectories of fragments ejected from the DEP on UT 2009 March 2 (the same day as the dust ejection) with different ejection directions. We employed the orbital elements of the DEP derived in Section <ref> and considered fragment ejection in every direction (i.e., isotropic ejection) to cover all possibilities. We found that the observed trajectories can be explained only when these objects were ejected in the same direction as the dust particles in our anisotropic dust model (Table <ref>).
The agreement suggests that the large fragments were ejected together with the small dust particles from the DEP in the same direction (i.e., 25 ≲ α_jet ≲ 40 and 30 ≲ δ_jet ≲ 40 with 20 ≲ w ≲ 30). From this dynamical analysis, we derived a typical ejection speed of the fragments of V_ej=0.28 m s^-1, which is consistent with the best-fit V_0 (the speed of the largest dust particles) in our anisotropic dust model.

In Figure <ref> (a), the trajectories of the fragments on arc B (i.e., AB, B1, B2 and B3) are concentrated in a narrow region and are aligned parallel to the bulk motion of the fragments projected onto the sky plane. To explain this trend on arc B, we further ran a dynamical simulation of 100 test particles using our best-fit fragment model (Table <ref>) and compared them to the observed trajectories of the eight fragments. For convenience of classification, we divided them into two groups: group A for fragments having trajectories similar to the LF, A1, A2 and A3 (i.e., arc A), and group B for fragments having trajectories similar to B1, B2 and B3 (i.e., arc B). AB-like fragments that intersect both arcs A and B were assigned to group A. By visual inspection, we classified the 100 test particles into the two groups: 53 and 19 particles fell into groups A and B, respectively. Since we selected test particles only when their trajectories were nearly identical to those of the fragments, the remaining 28 test particles were not classified into either group. We then recorded the initial velocities of the selected particles in equatorial rectangular coordinates, (v_x,v_y,v_z), and reconstructed the ejection velocity field at the moment of disruption, as shown in Figure <ref>. From the cone-axis-centered view (Figure <ref> (a)), we found that group B particles (black) have a limited spatial distribution, while group A particles (gray) have an almost isotropic distribution within a cone-shaped jet. The edge-on view (Figure <ref> (b)) suggests that the ejection velocity field of group B particles is almost parallel to the jet direction (i.e., the bulk motion of the fragments), as expected from Figure <ref> (a). For a quantitative analysis, we can consider the total unit velocity vector with respect to the DEP, 𝐯̂_tot:

𝐯_tot = (v_x,tot, v_y,tot, v_z,tot) = ∑_i=1^N_tp (v_x,i, v_y,i, v_z,i) ,

𝐯̂_tot = 𝐯_tot / (v_x,tot^2 + v_y,tot^2 + v_z,tot^2)^1/2 ,

where we calculated the total unit velocity vectors as 𝐯̂_tot,A=(0.717,0.389,0.579) and 𝐯̂_tot,B=(0.672,0.457,0.583) for the group A and B fragments, respectively. It is notable that 𝐯̂_tot,B has a negligible separation of ∼0.9 from the jet centroid (35, 35), while 𝐯̂_tot,A has a relatively wide separation of ∼5.3. Further discussion is provided in Section <ref>.

§ DISCUSSION

For both the dust particles and the fragments, the key features to be explained are:

1. Absence of a central body at the DEP;
2. Anisotropic ejection within a limited angle;
3. Low ejection speed (≪ 1 m s^-1);
4. Similar ejection speeds for the fragments and the largest dust particles (V_ej ≈ V_0).

§.§ Ejection mechanism

Four ejection mechanisms have been suggested so far: sublimation of ice <cit.>, rotational breakup <cit.>, impact cratering <cit.> and impact shattering <cit.>. Here, we evaluate the four mechanisms against two fundamental questions: (i) Does the mechanism require momentum conservation on the original body? (ii) Does it require that the central body (i.e., the LF) survive at the DEP?
To answer the first question (i): sublimation and rotational breakup require momentum conservation on the original body before and after the mass loss (i.e., the total ejecta momentum should be zero with respect to the DEP), assuming no external force. By contrast, an impact injects the projectile momentum into the target asteroid, so we expect the ejecta momentum not to sum to zero unless this external momentum is taken into account. In the case of P/2010 A2, most of the mass resides in the large fragments rather than in the small dust particles, while the ejection velocities are approximately the same regardless of size, so the largest bodies carry a significant proportion of the total momentum. We therefore took the summed momentum of the eight largest fragments with respect to the DEP as a proxy for the total momentum of the system, and found that it does not converge to zero in our model (cf. Section <ref>). Independently, we revisited a rotational breakup model in which the DEP was assumed to be the LF <cit.> and considered the total momentum of seven sub-fragments with respect to the LF. All sub-fragments had negative velocities in the y- and z-directions, meaning that the total momentum cannot converge to zero regardless of the individual fragment masses and x-velocities. We conclude that momentum is not conserved on the original body of P/2010 A2, which rules out the two mechanisms requiring zero total momentum, i.e., sublimation and rotational breakup.

The second question (ii) is a criterion for judging the degree of fragmentation of the target asteroid, which differentiates the two remaining possibilities, impact cratering and impact shattering. Both are impacts, but shattering (i.e., complete target destruction) requires more “specific energy” (impact kinetic energy per total mass of the system) than cratering (i.e., partial target destruction). The central body must survive an impact cratering, whereas it need not survive an impact shattering, which can leave nothing at the DEP while producing interplanetary debris. These points are summarized in Table <ref>. From the two findings that (i) the total momentum of the system is non-zero and (ii) no central body exists at the DEP, we conclude that the ejection mechanism of P/2010 A2 was an impact shattering.
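Both the momentum argument above and the velocity-field analysis of the previous subsection reduce to elementary vector sums, parts of which can be checked numerically. The Python sketch below is illustrative only: the fragment masses and velocities are hypothetical placeholders (they are not tabulated here), and the conversion of the jet angles (α_jet, δ_jet) to a unit vector assumes a standard spherical convention; under that assumption it reproduces the ∼0.9 separation quoted for the group B vector.

```python
import numpy as np

def direction_vector(alpha_deg, delta_deg):
    # Unit vector for the jet direction angles (alpha_jet, delta_jet); this
    # spherical convention is our assumption, not stated explicitly above.
    a, d = np.radians(alpha_deg), np.radians(delta_deg)
    return np.array([np.cos(d) * np.cos(a), np.cos(d) * np.sin(a), np.sin(d)])

def separation_deg(u, v):
    # Angular separation (degrees) between two nearly unit vectors.
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def total_momentum(masses, velocities):
    # Vector sum of fragment momenta with respect to the DEP.
    m = np.asarray(masses, dtype=float)[:, None]
    return np.sum(m * np.asarray(velocities, dtype=float), axis=0)

# The group-B total unit velocity vector quoted in the text lies ~0.9 deg
# from the jet centroid at (35, 35) under the assumed convention.
v_hat_B = np.array([0.672, 0.457, 0.583])
print(separation_deg(v_hat_B, direction_vector(35.0, 35.0)))   # ~0.9

# Hypothetical masses (kg) and velocities (m/s): if every fragment has
# negative v_y and v_z, those components of the sum cannot vanish -- the
# rotational-breakup argument used in the text.
m = [5e6, 3e6, 2e6]
v = [[0.10, -0.15, -0.20], [0.08, -0.12, -0.18], [0.05, -0.10, -0.15]]
print(total_momentum(m, v))   # y and z components are strictly negative
```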
§.§ Impact shattering interpretation

§.§.§ Comparison to laboratory experiments

To verify our hypothesis that P/2010 A2 is the debris of an impact shattering, we compare the results of our simulation to those of laboratory impact experiments in the literature. In laboratory experiments using various targets, it is commonly observed that small (≲100 μm) particles with high velocities (>10 m s^-1) are produced at the point of impact in the direction opposite to the impact trajectory (cf. Figure <ref>), while the largest and slowest fragments are usually located directly opposite the impact site; we call these “antipodal” fragments <cit.>. Specifically, for basalt and gypsum targets, the antipodal region suffers the least fragmentation, and a number of large fragments are generated with a limited distribution around the antipodal point; it has been observed that such fragments have similar velocities <cit.>. Considering the typical antipodal velocity of ∼(5–10) m s^-1 in the shattering experiments, we notice a velocity discrepancy between P/2010 A2 (≪ 1 m s^-1) and the laboratory counterparts. To eliminate this discrepancy and explain the low antipodal velocities, we consider the two approaches below.

The first approach is to make the target weaker, so that it is easily shattered by a relatively small specific energy, adopting the idea of <cit.> that antipodal velocities decrease with decreasing specific energy. The key to low antipodal velocities is then to reduce the specific energy to the minimum sufficient to shatter a “weak” body. It has been suggested that a target with low static strength should be easier to shatter via impact than high-strength targets <cit.>, and it is also known that sub-kilometer to kilometer-sized bodies are significantly weaker than bodies of other sizes <cit.>.

Once the specific energy is sufficient to destroy the body, the second approach is to damp the propagation of the shock wave through the target, sheltering the antipodal region from significant damage while intensifying the deposition of shock energy at the impact point <cit.>. It has been reported that porous materials (e.g., gypsum and sandbags) show lower antipodal velocities than nonporous materials (e.g., basalt and ice) because pre-existing fractures and voids in the target body cause a rapid attenuation of the shock pressure <cit.>.

To summarize the comparison between P/2010 A2 and the laboratory experiments: the large fragments generated around the antipodal point provide the best match to our results, i.e., anisotropic ejection within a limited angle and almost constant ejection speeds for the large ejecta (V_ej ≈ V_0). To explain the velocity discrepancy, we considered two possible ways to reduce the antipodal velocities: (i) making the target weaker, so that it is easily shattered by a relatively small specific energy (e.g., via its size and strength), and (ii) damping the propagation of the shock wave through the target (e.g., via porosity). Combining all of the plausible material properties of the affected asteroid, we speculate that the original body of P/2010 A2 could have been a sub-kilometer-sized rubble-pile asteroid like (25143) Itokawa (i.e., a porous asteroid with low static strength).

§.§.§ Energy estimation

Given the above considerations, we can estimate the specific energy delivered to the target asteroid in order to determine how realistic it is that the asteroid was shattered while producing low antipodal velocities. Momentum conservation, taking into account the injected projectile momentum, can be written as (see also the configuration in Figure <ref>)

p_proj 𝐞̂_p = -p_ej 𝐞̂_p + Δ p_target 𝐞̂_p ,

where 𝐞̂_p is the unit vector along the projectile momentum, and p_proj and p_ej are the components of the projectile and escaping ejecta momentum, respectively. A minus sign appears in the ejecta term because the majority of the ejecta are expected to be generated in the direction opposite to the impact trajectory <cit.>. Δ p_target denotes the resulting momentum of the target along the impact direction (i.e., the antipodal component). Assuming Δ p_target ∼ M_anti v_anti, where M_anti and v_anti are the sum of the antipodal fragment masses and the mean antipodal velocity (i.e., 0.28 m s^-1), respectively, we can rewrite Equation <ref> as

m_p V_impact + (ejecta momentum) = M_anti v_anti ,

where m_p and V_impact are the mass of the projectile and the impact velocity, respectively. We assume V_impact ∼ 5 km s^-1, which is the average collision velocity in the main asteroid belt <cit.>.
This indicates that

M_anti v_anti ≲ 2 m_p V_impact ,

for collisions with velocities higher than 5 km s^-1 <cit.>. In this case, the specific energy (impact kinetic energy per total mass of the system) is given by

Q^∗ ∼ (1/2) (m_p/M_anti) V_impact^2 ,

where we assume that the total mass of the system (i.e., the sum of the projectile and target masses) can be approximated by M_anti. Combining Equations <ref> and <ref> gives

Q^∗ ≳ (1/2) (v_anti/(2 V_impact)) V_impact^2 = (1/4) v_anti V_impact ,

and we obtain Q^∗ ≳ 350 J kg^-1. Interestingly, this energy density corresponds closely to the shattering threshold, Q^∗_S, of a ten-meter-diameter body in a recent laboratory examination using porous gypsum targets <cit.>, which is also in good agreement with the thresholds of tens-to-hundreds-of-meter bodies estimated by numerical simulation <cit.>. We thus conjecture that even such small values of Q^∗ delivered to the original asteroid enable an impact shattering (Q^∗ ≳ Q^∗_S), resulting in low antipodal velocities down to ≪1 m s^-1.

§.§.§ Remaining morphological interpretation

Further details must be left as open questions; here, however, we suggest a possible scenario in which all observed ejecta (LF+A+B+dust) originated from the antipodal region of the target asteroid and were ejected with similar speeds of ∼0.28 m s^-1. Of the ejecta, three components (the LF, the arc A fragments, and the dust) essentially constructed arc A and the connected dust trail. In contrast, the distinct morphology of arc B remains unexplained so far, implying that it was created by a different mechanism. We note that the simulated ejection velocity field of the B fragments is parallel to the jet direction (Section <ref>) and suggest that the momentum injected from the projectile into the target asteroid created a number of antipodal fragments along the direction of the momentum transfer (i.e., the impact trajectory). On the other hand, there should be additional ejecta that originated elsewhere in the target asteroid, not from the antipodal region (cf. Equation <ref>). Most of those particles are small and fast, having undergone greater impact fragmentation, and they should have been pushed out of the FOV by radiation pressure during the ∼1 year between the disruption in 2009 and the first discovery in 2010. Some particles with intermediate sizes and velocities may have remained in the FOV at the time of the early observations, as seen in the outer diffuse sources (cf. Gemini-2010 and HST-2010) but not in Subaru-2011 or Keck-2012.

We remark that the above scenario is one possible interpretation that is consistent with the observational evidence. The ratio of antipodal to non-antipodal debris depends strongly on the target properties and on the distance between the impact point and the antipodal point, and the majority of the slow debris from an impact shattering does not necessarily have to be antipodal. The work presented here suggests the possibility that large amounts of antipodal debris existed, as seen in <cit.>.
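Before summarizing, the order-of-magnitude energy estimate above can be verified in one line. A minimal sketch, using only the values adopted in the text (v_anti = 0.28 m s^-1, V_impact = 5 km s^-1):

```python
# One-line check of the bound Q* >~ (1/4) * v_anti * V_impact.
v_anti = 0.28       # mean antipodal velocity [m/s]
V_impact = 5.0e3    # average main-belt collision velocity [m/s]
print(f"Q* >~ {0.25 * v_anti * V_impact:.0f} J/kg")   # -> 350 J/kg, as quoted
```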
§ SUMMARY AND CONCLUSIONS

In this paper, we performed dust model simulations assuming anisotropic ejection from P/2010 A2 and succeeded in reproducing the time-variant features in the archival observations over ∼3 years, from 2010 January to 2012 October. Assuming that the dust particles and fragments were ejected in the same direction from a point (the dust ejection point, DEP) where no object has been detected in any observation, our anisotropic model explains all of the observations, including (i) the unique dust cloud morphology, (ii) the trail surface brightness and (iii) the motions of the fragments. Our major finding is that the DEP is decoupled from the largest fragment (cf. LF = DEP had been assumed in all previous research).

Comparing our results to laboratory impact experiments, we speculated about the regional variation in the degree of fragmentation, the ejection velocity field, the specific energy, and physical properties such as the size, porosity and static strength of the impacted asteroid:

* The minimal fragmentation around the antipodal point of a shattered asteroid is consistent with an anisotropic ejection within a limited ejection velocity field.
* The asteroid underwent an impact with a specific energy of Q^∗ ≳ 350 J kg^-1.
* Impacts on sub-kilometer-sized rubble-pile asteroids like (25143) Itokawa may produce low antipodal ejection velocities down to ≪1 m s^-1.

Finally, we remark that our results, based on observations and their modeling, are consistent with those obtained through laboratory impact experiments. This consistency supports the idea that the P/2010 A2 event is the first evidence of an impact shattering (i.e., total disruption) occurring in the present main asteroid belt.

This work was supported by two research programs through the National Research Foundation of Korea (NRF) funded by the Korean government (MEST) (No. 2012R1A4A1028713, 2015R1D1A1A01060025). Yoonyoung Kim was supported by the Kwanjeong Educational Foundation scholarship and the Fellowship for Fundamental Academic Fields. Junhan Kim and Hidekazu Hanayama helped with the Subaru observation (Program S11A-038). Marco Micheli gave comments on the orbital determination. We also thank Prof. Masahiko Arakawa for a fruitful discussion. This research utilized the Keck Observatory Archive (KOA) and the facilities of the Canadian Astronomy Data Centre. We also thank the anonymous referee for a careful reading of the manuscript.

[Agarwal et al.(2013)]A13 Agarwal, J., Jewitt, D., & Weaver, H. 2013, , 769, 46
[Asada(1985)]Asada1985 Asada, N. 1985, , 90, 12
[Asphaug et al.(1998)]Asphaug1998 Asphaug, E., Ostro, S. J., Hudson, R. S., Scheeres, D. J., & Benz, W. 1998, , 393, 437
[Birtwhistle et al.(2010)]discovery Birtwhistle, P., Ryan, W. H., Sato, H., Beshore, E. C., & Kadota, K. 2010, Central Bureau Electronic Telegrams, 2114, 1
[Britt et al.(2002)]Britt2002 Britt, D. T., Yeomans, D., Housen, K., & Consolmagno, G. 2002, Asteroids III, 485
[Burns et al.(1979)]Burns1979 Burns, J. A., Lamy, P. L., & Soter, S. 1979, , 40, 1
[Davis & Ryan(1990)]DR90 Davis, D. R., & Ryan, E. V. 1990, , 83, 156
[Finson & Probstein(1968)]Finson1968 Finson, M., & Probstein, R. 1968, , 154, 327
[Fujiwara(1987)]Fujiwara1987 Fujiwara, A. 1987, , 70, 536
[Giblin et al.(1998)]Giblin1998 Giblin, I., Martelli, G., Farinella, P., et al. 1998, , 134, 77
[Hainaut et al.(2012)]H12 Hainaut, O. R., Kleyna, J., Sarid, G., et al. 2012, , 537, A69
[Housen & Holsapple(2015)]HH15 Housen, K. R., & Holsapple, K. A. 2015, Lunar and Planetary Science Conference, 46, 2894
[Holsapple & Housen(2012)]HH12 Holsapple, K. A., & Housen, K. R. 2012, , 221, 875
[Housen & Holsapple(1999)]HH99 Housen, K. R., & Holsapple, K. A. 1999, , 142, 21
[Hsieh & Jewitt(2006)]HJ06 Hsieh, H. H., & Jewitt, D. 2006, Science, 312, 561
[Ishiguro et al.(2016)]Ishiguro2016 Ishiguro, M., Kuroda, D., Hanayama, H., et al. 2016, , 152, 169
[Ishiguro et al.(2013)]Ishiguro2013 Ishiguro, M., Kim, Y., Kim, J., et al. 2013, , 778, 19
[Ishiguro et al.(2011)]I11 Ishiguro, M., Hanayama, H., Hasegawa, S., et al. 2011, , 741, L24
[Ishiguro(2008)]Ishiguro2008 Ishiguro, M. 2008, , 193, 96
[Ishiguro et al.(2007)]Ishiguro2007 Ishiguro, M., Sarugaku, Y., Ueno, M., et al. 2007, , 189, 169
[Jewitt et al.(2015)]J15 Jewitt, D., Hsieh, H., & Agarwal, J. 2015, Asteroids IV, 221
[Jewitt et al.(2013)]J13 Jewitt, D., Ishiguro, M., & Agarwal, J. 2013, , 764, L5
[Jewitt et al.(2010)]J10 Jewitt, D., Weaver, H., Agarwal, J., Mutchler, M., & Drahus, M. 2010, , 467, 817
[Jutzi et al.(2010)]Jutzi2010 Jutzi, M., Michel, P., Benz, W., & Richardson, D. C. 2010, , 207, 54
[Kim et al.(2012)]K12 Kim, J., Ishiguro, M., Hanayama, H., et al. 2012, , 746, L11
[Kleyna et al.(2013)]K13 Kleyna, J., Hainaut, O. R., & Meech, K. J. 2013, , 549, A13
[Marzari et al.(2011)]Marzari2011 Marzari, F., Rossi, A., & Scheeres, D. J. 2011, , 214, 622
[Moreno et al.(2010)]M10 Moreno, F., Licandro, J., Tozzi, G.-P., et al. 2010, , 718, L132
[Nakamura et al.(2015)]Nakamura2015 Nakamura, A. M., Yamane, F., Okamoto, T., & Takasawa, S. 2015, , 107, 45
[Nakamura & Fujiwara(1991)]N91 Nakamura, A., & Fujiwara, A. 1991, , 92, 132
[O'Brien et al.(2011)]5km O'Brien, D. P., Sykes, M. V., & Tricarico, P. 2011, Lunar and Planetary Science Conference, 42, 2665
[Okamoto & Arakawa(2009)]OA2009 Okamoto, C., & Arakawa, M. 2009, Meteoritics and Planetary Science, 44, 1947
[Setoh et al.(2010)]Setoh2010 Setoh, M., Nakamura, A. M., Michel, P., et al. 2010, , 205, 702
[Snodgrass et al.(2010)]S10 Snodgrass, C., Tubiana, C., Vincent, J.-B., et al. 2010, , 467, 814
[Yanagisawa & Itoi(1994)]YI94 Yanagisawa, M., & Itoi, T. 1994, 75 Years of Hirayama Asteroid Families: The Role of Collisions in the Solar System History, 63, 243
http://arxiv.org/abs/1703.08815v1
{ "authors": [ "Yoonyoung Kim", "Masateru Ishiguro", "Tatsuhiro Michikami", "Akiko M. Nakamura" ], "categories": [ "astro-ph.EP" ], "primary_category": "astro-ph.EP", "published": "20170326132446", "title": "Anisotropic Ejection from Active Asteroid P/2010 A2: An Implication of Impact Shattering on an Asteroid" }
Efficiently Clustering Very Large Attributed Graphs[This work has been published in ASONAM 2017 <cit.>. This version includes an appendix with a validation of our attribute model and distance function, omitted in <cit.> for lack of space. Please refer to the published version.]

Alessandro Baroni University of Pisa, Italy baroni@di.unipi.it
Alessio Conte University of Pisa, Italy conte@di.unipi.it
Maurizio Patrignani Roma Tre University, Italy patrigna@dia.uniroma3.it
Salvatore Ruggieri University of Pisa and ISTI-CNR, Italy ruggieri@di.unipi.it

December 30, 2023
==============================================================================================================================================================================================================================================================================================

Attributed graphs model real networks by enriching their nodes with attributes accounting for properties. Several techniques have been proposed for partitioning these graphs into clusters that are homogeneous with respect to both semantic attributes and the structure of the graph. However, the time and space complexities of state-of-the-art algorithms limit their scalability to medium-sized graphs. We propose SToC (for Semantic-Topological Clustering), a fast and scalable algorithm for partitioning large attributed graphs. The approach is robust, being compatible both with categorical and with quantitative attributes, and it is tailorable, allowing the user to weight the semantic and topological components. Further, the approach does not require the user to guess the number of clusters in advance. SToC relies on well-known approximation techniques such as bottom-k sketches, traditional graph-theoretic concepts, and a new perspective on the composition of heterogeneous distance measures. Experimental results demonstrate its ability to efficiently compute high-quality partitions of large-scale attributed graphs.

§ INTRODUCTION

Several approaches in the literature aim at partitioning a graph into communities that share some set of properties (see <cit.> for a survey). Most criteria for defining communities in networks are based on topology and focus on specific features of the network structure, such as the presence of dense subgraphs or other edge-driven characteristics. However, real-world graphs such as the World Wide Web and social networks are more than just their topology. A formal representation that is gaining popularity for describing such networks is the attributed graph <cit.>. An attributed graph is a graph where each node is assigned values on a specified set of attributes. Attribute domains may be either categorical (e.g., sex) or quantitative (e.g., age). Clustering attributed graphs consists in partitioning them into disjoint communities of nodes that are both well connected and similar with respect to their attributes.

State-of-the-art algorithms for clustering attributed graphs have several limitations <cit.>: they are too slow to be compatible with big-data scenarios, both in terms of asymptotic time complexity and in terms of running times; they use data structures that need to be completely rebuilt if the input graph changes; and they ask the user to specify the number of clusters to be produced. Moreover, they only work with categorical attributes, forcing the user to discretize the domains of quantitative attributes, which leads to a loss of information in distance measures. We offer a new perspective on the composition of heterogeneous distance measures.
Based on this, we present a distance-based clustering algorithm for attributed graphs that allows the user to tune the relative importance of the semantic and structural information of the input data, and that only requires as input qualitative parameters of the desired partition rather than quantitative ones (such as the number of clusters). This approach is so effective that it can be used to directly produce a set of similar nodes that form a community with a specified input node, without having to cluster the whole graph first. We rely on state-of-the-art approximation techniques, such as bottom-k sketches for approximating similarity between sets <cit.> and the Hoeffding bound (see <cit.>), to maintain high performance while keeping the precision under control. Regarding efficiency, our approach has an expected time complexity of O(m log n), where n and m are the number of nodes and edges in the graph, respectively. This performance is achieved via the adoption of a distance function that can be efficiently computed in sublinear time. Experimental results demonstrate the ability of our algorithm to produce high-quality partitions of large attributed graphs.

The paper is structured as follows. Section <ref> describes related work. Section <ref> summarizes our contributions. Section <ref> formally states the addressed problem. Section <ref> introduces the notion of distance, essential for cluster definition. The algorithm and its data structures are described in Section <ref>. Section <ref> describes the tuning phase, which selects suitable parameter values for the input graph according to the user's needs. The results of our experimentation are discussed in Section <ref>. Finally, Appendices <ref> and <ref> contain a validation of our attribute model and combined distance function.

§ RELATED WORK

Graph clustering, also known as community discovery, is an active research area (see, e.g., the survey <cit.>). While the overwhelming majority of the approaches assign nodes to clusters based on topological information only, recent works address the problem of clustering graphs with semantic information attached to nodes or edges (in this paper, we restrict to node-attributed graphs). In fact, topological-only or semantic-only clustering approaches are not as effective as approaches that exploit both sources of information <cit.>. A survey of the area <cit.> identifies the following classes of approaches for node-attributed graphs (here we recall only key references and recent papers not covered by the survey).

Reduction-based approaches translate the semantic information into the structure of the graph (for example, adding weights to its edges) or vice versa (for example, encoding topological information into new attributes), and then perform a traditional clustering on the obtained data. They are efficient, but the quality of the clustering is poor. Walk-based algorithms augment the graph with dummy nodes representing categorical-only attribute values and estimate node distance through a neighborhood random walk <cit.>: the more attributes two nodes share, the more paths connect them, and the closer the nodes are considered. A traditional clustering algorithm based on these distances produces the output. Model-based approaches statistically infer a model of the clustered attributed graphs, assuming they are generated according to some parametric distribution.
The BAGC algorithm <cit.> adopts a Bayesian approach to infer the parameters that best fit the input graph, but requires the number of clusters to be known in advance and does not handle quantitative attributes. The approach has recently been generalized to weighted attributed graphs in <cit.>. The resulting GBAGC algorithm is the best-performing of the state of the art (in Section <ref> we will mainly compare against this work). A further model-based algorithm is CESNA <cit.>, which addresses the different problem of discovering overlapping communities. Projection-based approaches focus on the reduction of the attribute set, omitting attributes that are irrelevant for some clusters. A recent work along this line is <cit.>. Finally, some approaches are devoted to variants of attributed graph clustering, such as: I-Louvain <cit.>, which extends topological clustering to maximize both modularity and a newly introduced measure called `inertia'; PICS <cit.>, which addresses a form of co-clustering for attributed graphs using a matrix-compression approach; FocusCO <cit.> and CGMA <cit.>, which start from user-preferred clusters; M-CRAG <cit.>, which generates multiple non-redundant clusterings for exploratory analysis; and CAMIR <cit.>, which considers multi-graphs.

Overall, state-of-the-art approaches to partitioning attributed graphs are affected by several limitations, the first of which is efficiency. Although the algorithm in <cit.> does not come with exact bounds, our analysis indicates an Ω(n^2) time and space complexity, which restricts its usability to networks with thousands of nodes. The algorithm in <cit.> aims at overcoming these performance issues, and does actually run faster in practice. However, as we show in Section <ref>, its time and space performance relies heavily on assuming a small number of clusters. Second, similarity between elements is usually defined through exact matches on categorical attributes, so that similarity among quantitative attributes is not preserved. Further, the data structures are not incrementally maintainable, so that after a change in the input graph they have to be fully recomputed. Finally, most of the approaches require as input the number of clusters to be generated <cit.>. In many applications it is unclear how to choose this value or how to evaluate the correctness of the choice, so that the user is often forced to repeatedly launch the algorithm with tentative values.

§ CONTRIBUTIONS

We propose an approach to partitioning attributed graphs that aims at overcoming the limitations of the state of the art discussed in Section <ref>. Namely: (i) We propose a flexible concept of distance that can be efficiently computed and that is both tailorable (allowing the user to tune the relative importance of the semantic and structural information) and robust (accounting for both categorical and quantitative attributes). Further, our structures can be maintained when entities are added or removed, without re-indexing the whole dataset. (ii) We present the SToC algorithm to compute non-overlapping communities. SToC allows for a declarative specification of the desired clustering, i.e., the user provides the sensitivity with which two nodes are considered close rather than forecasting the number of clusters in the output. (iii) We describe an experimental comparison with state-of-the-art approaches showing that SToC uses fewer time/space resources and produces better-quality partitions.
In particular, in addition to good quality metrics for the obtained clusters, we observe that SToC tends to generate clusters of homogeneous size, while most partitioning algorithms tend to produce a giant cluster and some smaller ones.

§ PROBLEM STATEMENT

Intuitively, attributed graphs extend the structural notion of graphs to include attribute values for every node. Formally, an attributed graph G(V,E,F) consists of a set V = {v_1, …, v_n} of nodes, a set E = {e_1, …, e_m} ⊆ V × V of edges, and a set of mappings F = {f_1, …, f_A} such that, for i ∈ [1..A], f_i: V → dom(a_i) assigns to a node v the value f_i(v) of attribute a_i, where dom(a_i) is the domain of attribute a_i. Notice that the definition is stated for directed graphs, but it readily applies to undirected ones as well. A distance function d: V × V → ℝ_≥ 0 quantifies the dissimilarity between two nodes through a non-negative real value, where d(v_1, v_2) = d(v_2, v_1), and d(v_1, v_2) = 0 iff v_1 = v_2. Given a threshold τ, the ball centered at node v, denoted B_d(v, τ), consists of all nodes at distance at most τ: B_d(v, τ) = { v' ∈ V | d(v, v') ≤ τ }. We distinguish between topological distances, which are based only on structural properties of the graph G(V, E), semantic distances, which are based only on the node attribute values F, and multi-objective distances, which are based on both <cit.>.

Building on distance functions, a cluster can be defined by considering the nodes within a maximum distance τ from a given node. Namely, for a distance function d() and a threshold τ, a τ-close cluster C is a subset of the nodes in V such that there exists a node v ∈ C such that for all v' ∈ C, d(v, v') ≤ τ. The node v is called a centroid. Observe that C ⊆ B_d(v, τ) is required, but the inclusion may be strict. In fact, a node v' ∈ B_d(v, τ) could belong to another τ-close cluster C' due to a lower distance from the centroid of C'. A τ-close clustering of an attributed graph is a partition of its nodes into τ-close clusters C_1, …, C_k. Notice that this definition requires that clusters are disjoint, i.e., non-overlapping.

§ A MULTI-OBJECTIVE DISTANCE

In this section we introduce a multi-objective distance function that is computable in sublinear time (Section <ref>) and that allows for tuning the semantic and topological components (Section <ref>). Regarding semantic distance, we assume that attributes can be either categorical or quantitative. For clarity, we assume that attributes 1, …, Q are quantitative, and attributes Q+1, …, A are categorical. Thus, the attribute values (f_1(v), …, f_A(v)) of a node v boil down to a relational tuple, which we denote by 𝐭_v. Semantic distance d_S() can then be defined as

d_S(v_1, v_2) = f( 𝐭_v_1, 𝐭_v_2 )

where f() is any distance function over relational tuples (see, e.g., <cit.>). In our implementation and experiments, we first normalize quantitative attributes to the [0, 1] interval using min-max normalization <cit.>.
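A minimal in-memory sketch of an attributed graph, together with the min-max normalization step, might look as follows; the class and function names are our own illustrative choices, not part of the paper's implementation:

```python
from collections import defaultdict

class AttributedGraph:
    """Nodes with attribute tuples; adjacency stored as sets (undirected)."""
    def __init__(self):
        self.adj = defaultdict(set)   # node -> set of neighbors
        self.attrs = {}               # node -> tuple of attribute values

    def add_edge(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)

def min_max_normalize(graph, i):
    """Rescale quantitative attribute i of every node to [0, 1] in place."""
    vals = [t[i] for t in graph.attrs.values()]
    lo, hi = min(vals), max(vals)
    for v, t in graph.attrs.items():
        x = 0.0 if hi == lo else (t[i] - lo) / (hi - lo)
        graph.attrs[v] = t[:i] + (x,) + t[i + 1:]
```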
Let us recall the notion of l-neighborhood from <cit.>:the l-neighborhood N_l(v) of v is the set of nodes reachable from v with a path of length at most l. N(v) is a shorthand for N_1(v), namely the nodes linked by v.Topological distance d_T() can then be defined as:d_T(v_1, v_2)= g( N_l(v_1), N_l(v_2))where g() is any distance function over sets of nodes, e.g., Dice, Jaccard, Tanimoto <cit.>. In particular, we adopt the Jaccard distance.Fig. <ref> shows an example on distances. Fix node v_0, and considerdistances (semantic and topological) between v_0 and the other nodes in the graph. For topological distance (with l=1), e.g., we have d_T(v_0,v_2) = J({v_0,v_1,v_2,v_7}, {v_0,v_1,v_2}) =1-|{v_0,v_1,v_2}|/|{v_0,v_1,v_2,v_7}| = 0.25. Thus, v_0 is closer to v_2 than to v_1, since d_T(v_0,v_1) = 0.4. For semantic distance (Equ. <ref>), instead, it turns out that v_0 is closer to v_1 than to v_2. In fact, all three of them have the same sex attribute, but v_1 is spatially closer to v_0 than to v_2.Finally, semantic and topological distance can be combined into a multi-objective distance d_ST() as follows <cit.>: d_ST(v_1, v_2) = h( d_S(v_1, v_2) ,d_T(v_1, v_2) ) where h: ℝ_≥ 0×ℝ_≥ 0→ℝ_≥ 0 is such that h(x, y) = 0 iff x = y = 0. This and the assumptions that d_S() and d_T() are distances imply that d_ST() is a distance.<cit.> set h(x, y) = x + y as the sum of semantic and topological distances.If x ≫ y then h(x, y) ≈ x, so the largest distance weights more. However, if x ≈ y then h(x, y) ≈ 2x, which doubles the contribution of the equal semantic and topological distances. In this paper, we consider instead h(x, y) = max(x,y). If x ≫ y then h(x, y) ≈ x ≈ y, as before. However, when x ≈ y then h(x, y) ≈ x, which agrees with the distance of each component.§ THE STOC ALGORITHM For a given distance threshold τ, SToC iteratively extracts τ-close clusters from the attributed graph starting from random seeds. This is a common approach in many clustering algorithms <cit.>. Nodes assigned to a cluster are not considered in the subsequent iterations, thus the clusters in output are not overlapping. The algorithm proceeds until all nodes have been assigned to a cluster.This section details the SToC algorithm. We will proceed bottom-up, first describing the STo-Query procedure which computes a τ-close cluster with respect to a given seed s, and the data structures that make its computation efficient. Then, we will present the main SToC procedure.§.§ The STo-Query ProcedureThe STo-Query procedure computes a connected τ-close cluster C for a given seed s and threshold τ. With our definition of d_ST(), it turns out that C ⊆ B_d_ST(s, τ) = B_d_S(s, τ) ∩ B_d_T(s, τ). We define C as the set of nodes in B_d_ST(s, τ) that are connected to s. Computing C is then possible through a partial traversal of the graph starting from s. This is the approach of the STo-Query procedure detailed in Algorithm <ref>.The efficiency ofSTo-Query relies on two data structures, S and T, that we adopt for computing semantic and topological distances respectively and, a fortiori, for the test d_ST(s, x) ≤τ at line 7. Recall that d_ST(s, x) = max( d_S(s, x), d_T(s, x) ).Semantic distance is computed by directly applying (Equ. <ref>). We store in a dictionary S the mapping of nodes to attribute values. Thus, computing d_S(s, x) requires O(A) time, where A is the number of attributes.For topological distance, instead, a naïve usage of(Equ. <ref>) would require to compute online the l-neighborhood of nodes. 
Computing an l-neighborhood online takes O(n) time for medium-to-large values of l, e.g., for small-world networks. We overcome this issue by approximating the topological distance with a bounded error, using bottom-k sketch vectors <cit.>. A sketch vector is a compressed representation of a set, in our case an l-neighborhood, that allows the estimation of functions, in our case the topological distance, with bounded error. The bottom-k sketch S(X) consists of the first k elements of a set X with respect to a given permutation of the domain of the elements in X. The Jaccard distance between N_l(v_1) and N_l(v_2) can be approximated using S(N_l(v_1)) and S(N_l(v_2)) in their place, with a precision ϵ, by choosing k = log n/ϵ^2 (see <cit.>). We store in a dictionary T the mappings of nodes v to the sketch vector of N_l(v). T allows for computing d_T(s, x) in O(log n) time.

§.§ The SToC Procedure

The algorithm consists of repeated calls to the STo-Query function on selected seeds. The τ-close clusters returned by STo-Query are added to the output and removed from the set of active nodes V' in Algorithm <ref> (lines 7–9). We denote by G[V'] the subgraph of G restricted to the nodes in V'. Seeds are chosen randomly among the active nodes (line 6). This philosophy can be effective in real-world networks (see, e.g., <cit.>), and is inherently different from selecting a set of random seeds at the beginning (as in k-means), since it guarantees that each new seed will be at a significant distance from previously chosen ones. Calls to STo-Query return non-overlapping clusters, and the algorithm terminates when all nodes have been assigned to some cluster, i.e., V' = ∅.

§.§ Time and space complexity

Let us consider time complexity first. For a seed s, STo-Query iterates over the nodes in C ⊆ N_l(s) through the queue Q. For each node v ∈ C, the distance d_ST(v, x) is calculated for all neighbors x of v. Using the data structures S and T, this takes O(log n + A). SToC iterates over seeds s, removing from the graph the nodes that appear in the cluster C returned by STo-Query. This implies that a node is enqueued in Q exactly once. In summary, the worst-case complexity of SToC is O(∑_x∈ V |N(x)|(log n + A)) = O(m(log n + A)). Following related work, we consider A to be constant in real-world datasets. This leads to an overall time complexity of O(m log n). The initialization of the data structures S and T has the same cost. In fact, S can be filled in linear time O(n) through a scan of the input attributed graph. Moreover, bottom-k sketches can be computed in O(m k) time <cit.>; hence, for k ∈ O(log n), building T requires O(m log n). Regarding space usage, the dictionary S requires O(nA) = O(n) space, assuming A constant. Moreover, since each sketch vector in T has size O(log n), T requires O(n log n) space. Thus, SToC requires O(n log n) space, in addition to that needed for storing the input graph.
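The bottom-k sketches stored in T can be illustrated in a few lines. This is a minimal sketch of the standard construction rather than the paper's actual code: a shared random ranking plays the role of the permutation, and the universe, sets, and value of k are illustrative.

```python
import random

def bottom_k_sketch(S, rank, k):
    # The k elements of S with the smallest ranks, as (rank, element) pairs.
    return sorted((rank[x], x) for x in S)[:k]

def jaccard_distance_estimate(sk1, sk2, k):
    # Among the k smallest ranks of the merged sketches (= bottom-k of the
    # union), count elements present in both sketches: these lie in X ∩ Y.
    merged = sorted(set(sk1) | set(sk2))[:k]
    common = set(sk1) & set(sk2)
    hits = sum(1 for p in merged if p in common)
    return 1.0 - hits / len(merged)

random.seed(0)
rank = {x: random.random() for x in range(10_000)}      # shared permutation proxy
X, Y = set(range(0, 6_000)), set(range(3_000, 9_000))   # true distance = 2/3
k = 400                                                  # ~ log n / eps^2
skX, skY = bottom_k_sketch(X, rank, k), bottom_k_sketch(Y, rank, k)
print(jaccard_distance_estimate(skX, skY, k))            # close to 0.667
```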
§ AUTO-TUNING OF PARAMETERS

The SToC algorithm assumes two user parameters in input[The error threshold ϵ is related more to implementation performance issues than to user settings.]: the value l to be used in the topological distance (Equ. <ref>), and the distance threshold τ tested at line 7 of STo-Query. The correct choice of these parameters can be critical and non-trivial. For example, consider the cumulative distributions of d_S() and d_T() shown in Fig. <ref> for the datasets used in the experiments. Small values of l make most of the pairs very distant w.r.t. topological distance and, conversely, high values of l make most of the pairs close. Analogously, high values of the threshold τ may lead to clustering together all nodes whose semantic and topological distances are both lower than τ. E.g., almost every pair of nodes has a semantic distance lower than 0.4 in the DIRECTORS dataset of Fig. <ref>.

Another issue with the parameters l and τ is that they are operational notions, with no clear intuition of how they impact the results of the clustering problem. In this section, we introduce a declarative notion with a clear intuitive meaning for the user, from which optimal values for l and τ can be derived. We define the attraction ratio α as a value between 0 and 1 specifying the expected fraction of nodes similar to a given one. The extreme values α = 1 and α = 0 mean that all nodes are similar to each other and that all nodes are different from each other, respectively. In order to let the user weight the semantic and the topological components separately, we actually assume that a semantic attraction ratio α_S and a topological attraction ratio α_T are provided by the user. We describe next how the operational parameters l and τ are computed from the declarative ones α_S and α_T.

Computing τ. We approximate the cumulative distribution of d_S() over all n^2 pairs of nodes by looking at a sample of 2 log n/ϵ^2 pairs. By the Hoeffding bound <cit.>, this guarantees an approximation error of ϵ. Then, we set τ = τ̂ as the α_S-quantile of the approximated distribution. By definition, d_S() will be lower than or equal to τ̂ for a fraction α_S of the pairs of nodes. Fig. <ref> (second and fourth plots) shows the approximated distributions of d_S() for the DIRECTORS and DBLP datasets. E.g., an attraction ratio α_S = 0.4 can be reached by choosing τ = 0.2 for DIRECTORS and τ = 0.45 for DBLP.

Computing l. In the previous step, we fixed τ = τ̂ using the semantic distance distribution. We now fix l using the topological distance distribution. The approach consists in approximating the cumulative distribution of d_T(), as done before, for increasing values of l starting from l=1. For each value of l, we look at the quantile of τ̂, namely the fraction α_l = Pr( d_T ≤ τ̂ ) of pairs of nodes having topological distance at most τ̂. We choose the value l for which |α_l - α_T| is minimal, namely for which α_l is closest to the attraction ratio α_T. Since α_l varies monotonically with l, we stop when |α_l+1 - α_T| > |α_l - α_T|. Fig. <ref> shows an example for α_T=0.3 and τ̂=0.6. The value l=6 yields the quantile α_l closest to the expected attraction ratio α_T=0.3.

We now show that the cost of the auto-tuning phase is bounded by O(m log n), under the assumptions that both ϵ^-1 and l are O(1). Such assumptions are realistic. In fact, values of ϵ cannot be too small, otherwise the performance improvement of using bottom-k sketches would be lost <cit.>. Regarding l, it is bounded by the graph diameter, which, for real-world networks, can be considered bounded by a constant <cit.>. Let us then consider the computational cost of auto-tuning. Computing τ requires calculating the semantic distance for 2 log n/ϵ^2 pairs of nodes, which requires O(log^2 n), and sorting the pairs accordingly, which requires O((log n)(log log n)). Overall, the cost is O(log^2 n). Computing l requires a constant-bounded loop. In each iteration, we need to build an approximate cumulative distribution of the topological distance d_T(), which, as shown before, costs O(log^2 n). In order to compute d_T() we also have to construct the data structure T for the value l at each iteration, which requires O(m log n). In summary, computing l has a computational cost of the same order as the SToC algorithm.
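The "Computing τ" step above admits a compact sketch. The function name and the toy usage below are our own; the sample size follows the 2 log n/ϵ² rule of the text.

```python
import math, random

def tune_tau(nodes, d_S, alpha_S, eps, rng=None):
    # tau := alpha_S-quantile of d_S over a sample of 2 log n / eps^2 pairs.
    rng = rng or random.Random(0)
    n_pairs = max(1, int(2 * math.log(len(nodes)) / eps ** 2))
    dists = sorted(d_S(*rng.sample(nodes, 2)) for _ in range(n_pairs))
    return dists[min(n_pairs - 1, int(alpha_S * n_pairs))]

# Toy usage: one normalized quantitative attribute, d_S = absolute difference.
vals = {v: v / 999.0 for v in range(1000)}
tau = tune_tau(list(vals), lambda u, v: abs(vals[u] - vals[v]),
               alpha_S=0.4, eps=0.3)
print(tau)   # approximate 0.4-quantile of the pairwise distance distribution
```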
§ EXPERIMENTS

We first present the evaluation metrics and the experimental datasets, and then show the results of our experiments, which aim at demonstrating the effectiveness of the approach and at comparing it with the state of the art.

§.§ Evaluation metrics

Following existing approaches <cit.>, we adopt both a semantic and a topological metric. The semantic metric is the Within-Cluster Sum of Squares (WCSS), also called the Sum of Squared Errors (SSE), widely used by widespread approaches such as k-means <cit.>:

WCSS = ∑_i=1^k ∑_v ∈ C_i ‖ v - μ_i ‖^2

where C_1, …, C_k is the clustering of the graph and, for i ∈ [1, k], μ_i is the centroid of the nodes in C_i w.r.t. the semantic distance d_S() <cit.>. WCSS ranges over the non-negative numbers, with lower values denoting better clusterings. Alternative semantic metrics, such as entropy <cit.>, are suitable for categorical/discretized attributes only. The topological metric is the modularity <cit.>, a de-facto standard for graph clustering evaluation <cit.>:

Q = 1/(2m) ∑_v,w ∈ V [ A_vw - k_v k_w/(2m) ] δ(c_v,c_w)

where A is the adjacency matrix of the graph, k_v is the degree of node v, c_v is the cluster ID of node v, and δ is the Kronecker delta (δ(i,j) = 1 if i=j and 0 otherwise). Q is defined in [-1/2,1], with a random clustering expected to have Q=0, and a good clustering Q>0 <cit.>.

§.§ Experimental datasets

We run experiments on two datasets, whose summaries are shown in Table <ref>.

DIRECTORS: a social network of directors. A director is a person appointed to serve on the board of a company. We had unique access to a snapshot of all Italian boards of directors stored in the official Italian Business Register. The attributed graph is built as follows: nodes are distinct directors, and there is an (undirected) edge between two directors if they sit on a same board. In other words, the graph is the bipartite projection of the bipartite directors-companies graph. In the following we distinguish the whole graph (DIRECTORS) from its giant connected component (DIRECTORS-gcc). Node attributes include quantitative characteristics of the directors (age, geographic coordinates of birth place and residence place) and categorical characteristics (sex, and the set of industry sectors of the companies on whose boards they sit, e.g., IT, banking, etc.). Clustering this network means finding communities of people tied by business relationships. A combined semantic-topological approach may reveal patterns of structural aggregation as well as social segregation <cit.>. For example, clusters may reveal communities of young/elderly directors in a specific subgraph.

DBLP: a scientific coauthorship network. This dataset consists of the DBLP bibliography database restricted to four research areas: databases, data mining, information retrieval, and artificial intelligence. The dataset was kindly provided by the authors of <cit.>, where it is used for the evaluation of their algorithm. Nodes are authors of scientific publications. An edge connects two authors who have co-authored at least one paper. Each node has two attributes: prolific (quantitative), counting the number of papers of the author, and primary topic (categorical), reporting the most frequent keyword in the author's papers.
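The two evaluation metrics above can be sketched compactly. Note that the WCSS centroid is simplified here to the mean of the quantitative attribute vectors (our own simplification of the centroid w.r.t. d_S); the modularity code follows the formula given above, and the small example checks it on two triangles joined by one edge.

```python
import numpy as np

def wcss(points, labels):
    # Within-Cluster Sum of Squares over (normalized) attribute vectors.
    points, labels = np.asarray(points, float), np.asarray(labels)
    return sum(np.sum((points[labels == c]
                       - points[labels == c].mean(axis=0)) ** 2)
               for c in np.unique(labels))

def modularity(adj, labels):
    # Newman modularity Q for a partition of an undirected graph;
    # adj is a symmetric 0/1 adjacency matrix.
    adj, labels = np.asarray(adj, float), np.asarray(labels)
    k = adj.sum(axis=1)                      # node degrees
    two_m = adj.sum()                        # equals 2m for undirected graphs
    same = np.equal.outer(labels, labels)    # delta(c_v, c_w)
    return float(np.sum((adj - np.outer(k, k) / two_m) * same) / two_m)

# Two triangles joined by one edge: a natural 2-way split has positive Q.
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
print(modularity(A, [0, 0, 0, 1, 1, 1]))     # 5/14 ≈ 0.357
```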
Fig. <ref> shows the cumulative distributions of the semantic and topological distances for the two datasets. In particular, the 1st and 3rd plots show the impact of the parameter l on the topological distance: the smaller (resp. larger) l, the more distant (resp. closer) the nodes. This is in line with the small-world phenomenon in networks <cit.>.

§.§ Experimental results

We compare the following algorithms:

* Inc-C: the Inc-Clustering algorithm by Zhou et al. <cit.>. It requires as input the number k of clusters to produce. Implementation provided by the authors.
* GBAGC: the General Bayesian framework for Attributed Graph Clustering algorithm by Xu et al. <cit.>, which is the best-performing approach in the literature. It also takes k as input. Implementation provided by the authors.
* SToC: our proposed algorithm, which takes as input the attraction ratios α_S and α_T, and the error threshold ϵ.
* ToC: a variant of SToC which considers only topological information (nodes and edges). It takes as input α_T and the error threshold ϵ (τ and l are computed as for SToC, with a dummy α_S = α_T).
* SC: a variant of SToC which considers only semantic information (attributes). It takes as input α_S and the error threshold ϵ.

All tests were performed on a machine with two Intel Xeon E5-2695 v3 processors (35M cache, 2.30 GHz) and 64 GB of main memory, running Ubuntu 14.04.4 LTS. SToC, ToC and SC are implemented in Java 1.8; Inc-C and GBAGC are developed in MATLAB.

Table <ref> shows the results of SToC for α_S = α_T = α varying from 0.1 to 0.9, and a fixed ϵ = 0.9. For every dataset and α, we report the number of clusters found (k), the evaluation metrics Q and WCSS, the running time, and the main memory usage. As general comments, we can observe that k and WCSS are inversely proportional to α, Q is always non-negative and in good ranges, memory usage is limited, and running times are negligible for DBLP and moderate for DIRECTORS and DIRECTORS-gcc. For every α, we executed 10 runs of the algorithm, which uses random seeds, and report the mean value and standard deviation. The low standard deviations of Q and WCSS show that the random choice of the seeds does not affect the stability of the results. The results of ToC and SC can be found in Appendix <ref>: in summary, the exploitation of both semantic and topological information leads to a superior performance of SToC w.r.t. both Q and WCSS.

Tables <ref> and <ref> report the results for Inc-C and GBAGC, respectively. Due to the different input parameters of these algorithms, we can compare their results with SToC only by looking at rows with similar k. Let us consider Inc-C first. Its running times are extremely high, even for the moderately sized DBLP dataset; it was infeasible to obtain results for DIRECTORS. Space usage is also high, since the algorithm is in O(n^2). Values of Q are considerably worse than SToC's, and WCSS tends to be generally high. Consider now GBAGC. The quality of its results is considerably lower than SToC's w.r.t. both Q and WCSS. The space usage and elapsed time increase dramatically with k, which is problematic for large graphs, where a high number of clusters is typically expected. On our experimental machine, GBAGC reaches a limit at k = 500 for the DIRECTORS dataset, requiring 65GB of main memory. Even more critical is the fact that the number of clusters actually returned by GBAGC is only a fraction of the input k; e.g., it is 1 for k=15,000 on the DBLP dataset. The user does not actually control the algorithm's results through the required input.
Figure <ref> clarifies the main limitation of Inc-C and GBAGC with respect to SToC. It reports, for some of the executions, the size distributions of the clusters found. Inc-C (bottom plot) tends to produce a single giant cluster including most of the nodes. GBAGC (middle plots) produces a small number of clusters regardless of the input parameters. Instead, SToC (top plots) produces more balanced results, as typically expected in sociology <cit.>, with a size distribution in line with the common power laws found in real-world networks, and with the input graphs in particular (see <cit.> for the DIRECTORS graph).

§ CONCLUSIONS

We proposed SToC, a clustering algorithm for large attributed graphs. It extracts non-overlapping clusters using a combined distance that accounts for both network topology and semantic features, based on declarative parameters (attraction ratios) rather than on the operational ones (number of clusters) typically required by other approaches. Experimental results showed that SToC outperforms state-of-the-art algorithms both in time/space usage and in the quality of the clustering found.

§ ACKNOWLEDGEMENTS

The authors would like to thank Andrea Marino and Andrea Canciani for useful conversations.

§ VALIDATION OF THE ATTRIBUTE MODEL

One of the main differences of the proposed approach with respect to the state of the art is that quantitative attributes do not need to be discretized. We validate the effectiveness of this choice by showing the loss of quality induced by discretization. Namely, we run the SToC algorithm on the DBLP dataset, where the quantitative attribute represents how prolific an author is, treating this attribute as a categorical one. Table <ref> shows the results of this process, which can be compared to the results obtained when the quantitative attribute is handled properly, shown in Table <ref>. We can see not only that both metrics (Q and WCSS) are significantly better when the quantitative attribute is treated as such, but also that ignoring the similarity between close attribute values may lead to an insignificant result, likely due to the flattening of the distances between nodes. This suggests that approaches that handle quantitative attributes have an inherent advantage over those that need to discretize them.

§ SC AND TOC COMPARED TO STOC

Table <ref> shows, for varying α, the number of clusters k produced, and the modularity and WCSS of the clustering found by the three variants of our algorithm. The best values for modularity and WCSS are marked in bold. As one could expect, the topological-only algorithm ToC performs poorly w.r.t. semantic metrics compared to SC and SToC, although the semantic-only algorithm SC is competitive with ToC on topology. The clear winner among the three is SToC, which gives superior performance compared to ToC and SC for most values of α. This shows how SToC can effectively combine semantic and topological information to provide a better clustering. Table <ref> also shows that the number k of clusters in output is inversely proportional to α when topology plays a role, i.e., for ToC and SToC. While it is not clear why SC does not seem to follow this behavior, we suspect it may be due to the small number of possible d_S values in the DBLP dataset (see Figure <ref> in the paper, right). It is worth noting that the WCSS metric degenerates for high values of α; this might be due to α · n approaching the size of the graph, making any given pair of nodes be considered similar. SC seems more resistant to this degeneration.
http://arxiv.org/abs/1703.08590v2
{ "authors": [ "Alessandro Baroni", "Alessio Conte", "Maurizio Patrignani", "Salvatore Ruggieri" ], "categories": [ "cs.SI", "cs.DM", "physics.soc-ph" ], "primary_category": "cs.SI", "published": "20170324203242", "title": "Efficiently Clustering Very Large Attributed Graphs" }
The Inner Structure of Time-Dependent Signals

David N. Levin
Dept. of Radiology, University of Chicago, 1310 N. Ritchie Ct., Unit 26 AD, Chicago, IL 60610
Email: d-levin@uchicago.edu
<http://radiology.uchicago.edu/directory/david-n-levin>
=============================================

This paper shows how a time series of measurements of an evolving system can be processed to create an "inner" time series that is unaffected by any instantaneous, invertible, possibly nonlinear transformation of the measurements. An inner time series contains information that does not depend on the nature of the sensors that the observer chose in order to monitor the system. Instead, it encodes information that is intrinsic to the evolution of the observed system. Because of its sensor-independence, an inner time series may produce fewer false negatives when it is used to detect events in the presence of sensor drift. Furthermore, if the observed physical system is comprised of non-interacting subsystems, its inner time series is separable; i.e., it consists of a collection of time series, each one being the inner time series of an isolated subsystem. Because of this property, an inner time series can be used to detect a specific behavior of one of the independent subsystems without using blind source separation to disentangle that subsystem from the others. The method is illustrated by applying it to: 1) an analytic example; 2) the audio waveform of one speaker; 3) video images from a moving camera; 4) mixtures of audio waveforms of two speakers.

§ INTRODUCTION

Consider a physical system that is being observed with a set of sensors. The time series of raw sensor measurements contains information about the evolution of the system of interest, mixed with information about the nature of the sensors. For example, video pictures contain information about the evolution of the scene of interest, but they are also influenced by sensor-dependent factors such as the position, angular orientation, field of view, and spectral response of the camera. Likewise, audio measurements may describe the evolution of an acoustic source, but they are also influenced by extrinsic factors such as the positions and frequency responses of the microphones. Calibration procedures can be used to transform measurements created with one set of sensors so that they can be compared to measurements made with a different set of sensors (<cit.>, <cit.>, <cit.>). However, there are situations in which it is inconvenient, awkward, or impossible to calibrate a measurement apparatus. For example: 1) the calibration procedure may take too much time; 2) the calibration process may interfere with the evolution of the system being observed; 3) the observer may not have access to the measuring device (e.g., because it is at a remote location).

This paper describes how a time series of measurements can be processed to derive a purely sensor-independent description of the evolution of the underlying physical system. Specifically, consider an evolving physical system with N degrees of freedom (N ≥ 1), and suppose that it is being observed by N sensors, whose output is denoted by x(t) = (x_k(t), k = 1, …, N). For simplicity, assume that the sensors' output is invertibly related to the system states. In other words, assume that the sensor measurements represent the system's state in a coordinate system defined by the nature of the sensors.
Section <ref> describes how measurements can be chosen to have this invertibility property. Now, suppose that the same system is also being observed by another set of sensors, whose output, x'(t), is invertibly related to the system states and, therefore, is invertibly related to x(t). For example, x(t) and x'(t) could be the outputs of calibrated and uncalibrated sensors, respectively, as they simultaneously observe the same system. Or, they could be the outputs of sensors that detect different types of energy (e.g., infrared light vs. ultraviolet light). Under these conditions, we show how to process x(t) in order to derive an "inner" time series, w(t) = (w_k(t), k = 1, …, N). We then demonstrate that the same inner time series will result if the other set of sensor outputs, x'(t), is subjected to the same procedure. Because of its sensor-independence, an inner time series may produce fewer false negatives when it is used to detect events in the presence of sensor drift. In mathematical terms, x(t) and x'(t) represent the evolving system's state in different coordinate systems on state space, and the inner time series is a coordinate-system-independent description of the system's velocity in state space.

To derive this sensor-independent time series, the time series of sensor measurements, x(t), is statistically processed in order to construct N local vectors at each point in state space. The system's path through state space can then be described by a succession of small displacement vectors, each of which is a weighted superposition of the local vectors. The inner time series is comprised of these time-dependent weights, w(t), which are coordinate-system-independent and, therefore, sensor-independent. Thus, any two observers will describe the system's evolution with the same inner time series, even though they utilize different sensors to monitor the system. Essentially, an inner time series is a "canonical" form of a measurement time series, created by normalizing the measurements with respect to their own statistical properties. No matter what linear or nonlinear transformation has been applied to a sequence of measurements, its canonical form (i.e., its inner time series) is the same. An inner time series is roughly analogous to the principal components of a data set, which represent the data in the same "canonical" way, no matter what rotation and/or translation has been applied to them.

There are many ways of using a time series of measurements to define local vectors on the system's state space, and each of these methods can be used to create a sensor-independent description of the system's evolution. However, the local vectors described in this paper have an unusually attractive property: namely, they produce separable sensor-independent descriptions of systems that are composed of non-interacting subsystems. Specifically, consider a system that is composed of two statistically independent subsystems, and suppose that the raw measurements of the system are linear or nonlinear mixtures of the state variables of its non-interacting subsystems. It can be shown that each component of the inner time series of the composite system is also a component of the inner time series of an isolated subsystem. In other words, each component of the inner time series of the composite system is a stream of information about just one of the subsystems, even though it may have been derived from measurements sensitive to several subsystems.
Because of this property, an inner time series can be used to detect a specific behavior of one subsystem, which is evolving in the presence of other subsystems. In contrast to blind source separation procedures (<cit.>, <cit.>, <cit.>), this is done without finding the mixing function, which relates the raw measurements of the composite system to the states of its subsystems.Reference <cit.> describes a different way of creating sensor-independent representations of evolving systems. First, the second-order correlations of the system's local velocity distributions are used to define a Riemannian metric and affine connection on the manifold of measurements. Then, each incremental displacement along the system's path through state space is described as a superposition of reference vectors, parallel transferred from the beginning of the path. Such a description will be coordinate-system-independent (and, therefore, sensor-independent), if it includes a coordinate-system-independent way of identifying the reference vectors at the initial point of each path of interest. In contrast, the method proposed in the current paper does not require reference vectors; instead it utilizes local vectors that are properties of the local velocity distributions of the system's past trajectory. Furthermore, the methodology in <cit.> does not provide a simple description of composite systems. In contrast, the method proposed here always creates a sensor-independent description of a composite system, consisting of a collection of the sensor-independent descriptions of the independent subsystems.The next section describes the procedure for computing an inner time series from a time series of raw measurements. It also demonstrates that the inner time series of a composite system consists of a collection of the inner time series of its constituent parts. Section <ref> illustrates the method by applying it to: 1) an analytic example; 2) the audio waveform of one speaker; 3) video images from a moving camera; 4) mixtures of audio waveforms of two speakers. The last section discusses the implications of this approach.§ METHODThe following subsection outlines how a time series of sensor measurements can be processed in order to derive local vectors at each point in the state space of the observed system. This procedure is only presented in outline form here, because detailed descriptions can be found in <cit.> and <cit.>. It is then shown how these vectors can be used to create an inner description of the system's path through state space. In the second subsection, the system is assumed to be composed of two statistically independent subsystems. It is shown that the inner time series of the composite system is a simple collection of the inner time series of its subsystems. 
§.§ Derivation of inner time series

The first step is to construct second-order and fourth-order local correlations of the data's velocity (ẋ):

C_kl(x) = ⟨ (ẋ_k - ẋ̅_k)(ẋ_l - ẋ̅_l) ⟩_x ,

C_klmn(x) = ⟨ (ẋ_k - ẋ̅_k)(ẋ_l - ẋ̅_l)(ẋ_m - ẋ̅_m)(ẋ_n - ẋ̅_n) ⟩_x ,

where ẋ̅ = ⟨ẋ⟩_x, where the bracket denotes the time average over the trajectory's segments in a small neighborhood of x, and where all subscripts are integers between 1 and N with N ≥ 1.

Next, let M(x) be any local N × N matrix, and use it to define the transformed velocity correlations, I_kl and I_klmn:

I_kl(x) = ∑_1 ≤ k', l' ≤ N M_kk'(x) M_ll'(x) C_k'l'(x) ,

I_klmn(x) = ∑_1 ≤ k', l', m', n' ≤ N M_kk'(x) M_ll'(x) M_mm'(x) M_nn'(x) C_k'l'm'n'(x) .

Because C_kl(x) is generically positive definite at any point x, it is almost always possible to find a particular form of M(x) that satisfies

I_kl(x) = δ_kl ,

∑_1 ≤ m ≤ N I_klmm(x) = D_kl(x) ,

where D(x) is a diagonal N × N matrix (<cit.>, <cit.>). As long as D is not degenerate, M(x) is unique, up to arbitrary local permutations and/or reflections. In almost all applications of interest, the velocity correlations will be continuous functions of x. Therefore, in any neighborhood of state space, there will always be a continuous solution for M(x), and this solution is unique, up to arbitrary global permutations and/or reflections. In any other coordinate system x', the most general solution for M' is given by

M'_kl(x') = ∑_1 ≤ m, n ≤ N P_km M_mn(x) ∂x_n/∂x'_l ,

where M is a matrix that satisfies (<ref>) and (<ref>) in the x coordinate system and where P is a product of permutation, reflection, and identity matrices (<cit.>, <cit.>). By construction, M is not singular.

Notice that (<ref>) shows that the rows of M transform as local covariant vectors, up to a global permutation and/or reflection. Likewise, the same equation implies that the columns of M^-1 transform as local contravariant vectors (denoted V_(i)(x), i = 1, …, N), up to a global permutation and/or reflection. Because these vectors are linearly independent, the measurement velocity at each time (ẋ(t)) can be represented as a weighted superposition of them:

ẋ(t) = ∑_1 ≤ i ≤ N w_i(t) V_(i)(x) ,

where the w_i are time-dependent weights. Because ẋ and the V_(i) transform as contravariant vectors (except for a possible global permutation and/or reflection), the weights w_i must transform as scalars or invariants; i.e., they are independent of the coordinate system in which they are computed (except for a possible permutation and/or reflection). Therefore, the time-dependent weights, w_i(t), provide an inner (coordinate-system-independent) description of the system's velocity in state space. Two observers, who use different sensors (and, therefore, different state space coordinate systems), will derive the same inner time series, except for a possible global permutation and/or reflection.

This equation can be integrated over the time interval [t_0, t] to give an expression for the system's state during that time interval:

x(t) = x(t_0) + ∫_t_0^t ∑_1 ≤ i ≤ N w_i(t') V_(i)[x(t')] dt' .

This is an integral equation for constructing x(t) on the interval [t_0, t] from the weight time series, w_i(t), on the same time interval. Note that, given a set of local vectors, there is a many-to-one correspondence between the set of measurement time series and the corresponding inner time series. Specifically, (<ref>) shows that each measurement time series maps onto just one weight time series.
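The construction of M(x) and of the weights can be sketched numerically. The following is a minimal illustration under simplifying assumptions (binned states, every bin containing enough velocity samples, and no treatment of the permutation/reflection ambiguity), not the author's reference implementation: in each bin, M is obtained by whitening the local velocity distribution (making I_kl = δ_kl) and then applying the rotation that diagonalizes the contracted fourth-order correlations ∑_m I_klmm; the weights then follow from w(t) = M(x(t)) ẋ(t), which inverts the superposition formula above.

```python
# Minimal sketch, not the author's implementation: states are binned, M(x) is
# computed per bin from the local velocity samples, and w(t) = M(x(t)) xdot(t).
import numpy as np

def local_M(vel):
    """M for one bin; vel holds the local velocity samples (n_samples x N)."""
    v = vel - vel.mean(axis=0)                    # center the local velocities
    C = v.T @ v / len(v)                          # second-order correlations C_kl
    evals, evecs = np.linalg.eigh(C)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T  # whitening: W C W^T = identity
    y = v @ W.T
    # contracted fourth-order correlations of the whitened velocities:
    T = (y * (y ** 2).sum(axis=1, keepdims=True)).T @ y / len(y)
    _, R = np.linalg.eigh(T)                      # rotation that diagonalizes T
    return R.T @ W                                # M = R^T W meets both conditions

def inner_time_series(x, dt, n_bins=16):
    """Weights w(t) for a one-component measurement series x(t)."""
    xdot = np.gradient(x, dt)
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    bins = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    M = np.array([local_M(xdot[bins == b][:, None])[0, 0] for b in range(n_bins)])
    return M[bins] * xdot
```

Applied to x(t) = a·sin(t), this sketch reproduces, up to sign and bin-discretization error, the analytic result w(t) = ±sgn[a cos(t)] derived in the next section.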
Conversely, as shown by (<ref>), one weight time series maps onto multiple time series of sensor measurements, differing by the choice of the initial point, x(t_0). It should also be mentioned that it may be difficult to use this equation to numerically compute the measurement time series corresponding to a given weight time series, because errors will tend to accumulate as one integrates the right side.

§.§ Inner time series of composite systems

Now, consider the special case in which the observed system is composite (or separable) in the sense that it consists of two statistically independent subsystems. Specifically, assume that there is a state space coordinate system, s, in which the state components, s_k(t), k = 1, …, N, can be partitioned into two groups, s_(1) = (s_k, k = 1, …, N_1) and s_(2) = (s_k, k = N_1 + 1, …, N), that are statistically independent in the following sense (<cit.>, <cit.>). Let ρ_S(s, ṡ) be the PDF in (s, ṡ). Namely, ρ_S(s, ṡ) ds dṡ is the fraction of total time that the location and velocity of s(t) are within the volume element ds dṡ at location (s, ṡ). The subsystem state variables, s_(1) and s_(2), are assumed to be statistically independent in the sense that the density function of the system variable is the product of the density functions of the two subsystem variables; i.e.,

ρ_S(s, ṡ) = ∏_a=1,2 ρ_a(s_(a), ṡ_(a)) .

This separability criterion in (s, ṡ) is stronger than the conventional formulation in s, and references <cit.> and <cit.> argue that this makes it preferable to the conventional criterion. In the following paragraphs, it is shown that, if the data are separable in the above sense, the components of the inner time series of the composite system can be partitioned into two groups, each of which provides an inner description of one of the subsystems. Although these results are demonstrated here for systems with two independent subsystems, they can easily be generalized to systems with any number of subsystems.

To show this, the first step is to transform (<ref>) into the s coordinate system, by multiplying each side by ds/dx. Because the V_(i) transform as contravariant vectors (up to a possible permutation and/or reflection), it follows that

ṡ(t) = ∑_1 ≤ i, j ≤ N w_i(t) P_ij V_S(j) ,

where V_S(j) is V_(j) in the s coordinate system and P is a possible permutation and/or reflection. By definition, the V_S(i) are the local vectors which are derived from the local distribution of ṡ in the same way that the V_(i) were derived from the local distribution of ẋ. Specifically, V_S(i) is the i^th column of M_S^-1, where M_S is the M matrix that is derived from the second- and fourth-order velocity correlations in the s coordinate system.

The next step is to show that the matrix M_S has a simple block-diagonal form. In particular, <cit.> and <cit.> show that M_S is given by

M_S(s) = ( M_S1(s_(1))   0
           0             M_S2(s_(2)) ) ,

where each submatrix, M_Sa for a = 1,2, satisfies (<ref>) and (<ref>) for correlations between components of s_(a). Observe that each vector V_S(i) vanishes except where it passes through one of the blocks of M_S^-1. Therefore, equation (<ref>) is equivalent to a pair of equations, which are formed by projecting it onto each block corresponding to a subsystem state variable. For example, projecting both sides of (<ref>) onto block a gives the result

ṡ_(a)(t) = ∑_1 ≤ i ≤ N, j ∈ block a w_i(t) P_ij V_S(ja) .

Here, V_S(ja) is the projection of V_S(j) onto block a; i.e., it is the column of M_Sa^-1 that coincides with column j of M_S^-1, as it passes through block a.
This means that the vectors, V_S(ja) for j ∈ block a, are the local vectors on the s_(a) manifold, which are derived from the local distribution of ṡ_(a) in the same way that the V_(i) were derived from the local distribution of ẋ. Notice that each time-dependent weight, w_i(t), describes the evolution of just one subsystem. In other words, the weights do not contain a mixture of information about the evolution of the two subsystems. This is true despite the fact that they can be derived from raw measurements that may be complicated unknown mixtures of the state variables of both subsystems. Next, define group 1 (group 2) to be the set of weights appearing in the expression

∑_1 ≤ i ≤ N w_i P_ij

for j ∈ block 1 (for j ∈ block 2). Equation (<ref>) shows that the weights in group 1 (group 2) comprise a sensor-independent description of the velocity of subsystem 1 (subsystem 2). Equation (<ref>) also suggests that the weights in group 1 must be statistically independent of the weights in group 2. Specifically, (<ref>) implies that the weights in each group can be computed from: 1) the time course of the state variable of the corresponding subsystem; 2) the local vectors of the corresponding subsystem, which themselves are constructed from the time course of the state variable of the corresponding subsystem. Because the weights in group 1 and group 2 are derived from s_1(t) and s_2(t), respectively, and because the latter are statistically independent, it is likely that the former are also statistically independent.

§ ANALYTIC AND EXPERIMENTAL EXAMPLES

In this section, the methodology of Section <ref> is illustrated by applying it to: 1) an analytic example (namely, a time series equal to a sine wave); 2) the audio waveform of a single speaker; 3) video data from a camera moving with two degrees of freedom; 4) nonlinear mixtures of the waveforms of two speakers.

§.§ Analytic example: a sine wave

In this subsection, the proposed methodology is applied to a measurement time series simulated by a sine wave. Its inner time series is derived analytically, before and after it is transformed by an arbitrary monotonic function. The transformed data, which simulate the output of a second sensor, are shown to have the same inner time series as the untransformed data from the first sensor.

Suppose the measured sensor signal is

x(t) = a sin(t) ,

where a is any real number and -∞ < t < ∞. Because of the periodicity of the signal, the local second-order velocity correlation can be shown to be

C_11(x) = a^2 - x^2 .

The 1 × 1 "matrix", M, is

M_11(x) = ± 1/√(a^2 - x^2) ,

and the one-component local vector, V_(1)(x), is

V_(1)1(x) = ± √(a^2 - x^2) .

Either sign can be chosen in (<ref>) and (<ref>), because M is only determined up to a global reflection. Substituting (<ref>) and (<ref>) into (<ref>) shows that the weight time series is

w_1(t) = ± sgn[a cos(t)] .

Thus, for this simple periodic signal, the inner time series is the sign of the signal's time derivative. As shown in the following subsections, a much larger amount of information is contained in the inner time series of more complex one-component signals. The sensor-independence (or coordinate-system-independence) of the inner time series can be demonstrated explicitly by computing it from measurements that have been transformed by a monotonic function, f(x), which simulates the relative response of a different sensor. Specifically, consider the transformed measurements given by

x'(t) = f[a sin(t)] ,

where f is monotonic.
The local second-order correlation of the velocity of these measurements is

C'_11(x') = [ (df/dx) a cos(t_x') ]^2 ,

where df/dx is evaluated at x = a sin(t_x') and where t_x' is any solution of f[a sin(t_x')] = x'. Because the measurements have just one component, the 1 × 1 "matrix" M' is equal to

M'_11(x') = ± 1/√(C'_11(x')) ,

and the local vector is

V'_(1)1(x') = ± √(C'_11(x')) .

Substituting (<ref>) and (<ref>) into (<ref>) shows that the weight function is

w'_1(t) = ± sgn[a cos(t)] = w_1(t) .

Thus, the transformed and untransformed measurements ((<ref>) and (<ref>)) have the same inner time series (up to a reflection). This shows that the weights are sensor-independent (and coordinate-system-independent), a fact that was proved in Section <ref>.

§.§ The audio signal of a single speaker

In this subsection, the proposed method is applied to the audio waveform of a single speaker, before and after it has been transformed by a nonlinear monotonic function, which simulates the relative response of another sensor. The inner time series of the untransformed and transformed signals are shown to be almost the same.

The male speaker's audio waveform, x(t), was a 31.25 s excerpt from an audio book recording. This waveform was sampled 16,000 times per second with two bytes of depth. The thin black line in Figure <ref> shows the speaker's waveform during a short (31.25 ms) interval. The thick gray line in Figure <ref>, x'(t), simulates the output of another sensor, which is related to x(t) by the monotonic nonlinear transformation in Figure <ref>.

The technique in Subsection <ref> was applied to 500,000 samples of x(t) and x'(t), in order to derive the one-component vectors, V_(1)(x) and V'_(1)(x'), in an array of 128 bins on the x and x' manifolds, respectively. These vectors are displayed in Figure <ref>.

Then, these vectors and equation (<ref>) were used to compute the inner time series, w_1(t) and w'_1(t), corresponding to the two measurement time series, x(t) and x'(t), respectively. The resulting time series of weights are shown in Figure <ref>. Notice that the two inner time series are almost the same, despite the fact that they were derived from sensor measurements which differed by a nonlinear transformation. This demonstrates the sensor-independence of the weights, a property that was proved in Subsection <ref>. When either inner time series was played as an audio file, it sounded like a completely intelligible version of the original audio waveform, x(t). No semantic information was lost, although the prosody of the signal may have been modified. Therefore, in this experiment, almost all of the signal's information content was preserved by the process of deriving its inner time series.

§.§ Video data from a moving camera

In this subsection, the procedure in Subsection <ref> is used to derive the inner time series of a sequence of video images recorded by a camera moving in an office. We also computed the inner time series of the same image sequence, after each image was subjected to a nonlinear transformation, thereby simulating the output of a different sensor (i.e., a different video camera). The two inner time series were almost the same, despite the fact that they were derived from the outputs of dramatically different sensors.

The original (i.e., untransformed) images were recorded by a cell phone video camera as it was moved in an irregular fashion over a portion of a spherical surface having a radius of approximately 25 cm.
The plane of the camera was oriented so that it was always tangential to the surface, and the camera's lower edge was kept parallel to the floor at all times. In this way, the camera was moved with two degrees of freedom; i.e., it was moved through a series of configurations (positions and orientations) that formed a two-dimensional manifold. The camera recorded thirty frames per second over the course of approximately 70 minutes, producing a total of 126,036 frames. Each frame consisted of a 320 × 240 array of pixels, in which the RGB responses were measured with one byte of depth. The top row of Figure <ref> displays a typical series of images, subsampled at 1.67 s intervals over the course of 17 s.The second time series of images was created by subjecting each recorded image to a nonlinear transformation.Specifically, each pixel with image coordinates (h,v) in a given recorded frame was mapped to the location with image coordinates (h'(h),v'(v)) in the corresponding transformed frame, where h(h') and v(v') are shown in Figure <ref>. It is evident that this transformation turns each image upside down and backwards, in addition to stretching or compressing each image near its borders. The bottom row of Figure <ref> shows the images that were produced by nonlinearly transforming the corresponding recorded frames in the top row. These images simulate the output of a different sensor (e.g., a video camera, which was "wearing" goggles having inverting/distorting lenses).Because the video was recorded as the camera moved through a two-dimensional manifold of configurations, the resulting images were expected to form a two-dimensional manifold in which each frame was represented by a point. A coordinate system, x, was imposed on this manifold in the following manner. First, we computed six numbers consisting of the centroids of the R, G, and B components for each recorded image. Then, we did a principal components analysis of the collection of six-dimensional multiplets for all recorded video frames. This showed that these multiplets were in or close to a two-dimensional planar subspace, which contained 99% of their variance. Because this subspace did not self-intersect, its points were invertibly related to the configurations of the camera. The x coordinates of each image were taken to be the first two variance-normalized principal components of the corresponding multiplet. The same procedure was applied to the collection of transformed images in order to assign a two-component coordinate, x', to each one. The thin black lines in Figure <ref> show the measurement time series, x(t), derived from the images recorded during a typical 17 s time interval. The thick gray lines in the same figure show the sensor measurements, x'(t), derived from the sequence of transformed images during the same time interval. The x(t) and x'(t) time series can be considered to be the measurements that were produced by two observers who were watching the same physical system with different sensors (i.e., with an ordinary video camera and with a camera having distorting/inverting lenses, respectively). Alternatively, x'(t) can be considered to be the measurements x(t), after they have been transformed to another coordinate system (x') on the two-dimensional manifold of images.The 126,036 measurements, x(t), derived from the sequence of untransformed images, were assigned to bins in a 4 × 4 array. Then, the procedure in Subsection <ref> was used to compute the local vectors in each bin (V_(i)(x) for i=1,2). 
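The preprocessing that turns raw frames into the two-component measurements can be sketched as follows. This is a minimal illustration, not the original pipeline: the array shapes are assumptions, and "the centroids of the R, G, and B components" is interpreted here as the intensity-weighted spatial centroid of each color channel.

```python
# Sketch of the frame-to-coordinates reduction described above (assumed
# shapes and centroid interpretation, not the original pipeline).
import numpy as np

def frame_features(frames):
    """frames: (T, H, W, 3) RGB array -> (T, 6) array of centroid multiplets."""
    T, H, W, _ = frames.shape
    hh, vv = np.meshgrid(np.arange(W), np.arange(H))   # pixel coordinate grids
    feats = np.empty((T, 6))
    for t in range(T):
        for c in range(3):
            img = frames[t, :, :, c].astype(float)
            w = img / img.sum()                        # assumes no all-black channel
            feats[t, 2 * c] = (w * hh).sum()           # horizontal centroid
            feats[t, 2 * c + 1] = (w * vv).sum()       # vertical centroid
    return feats

def pca_coordinates(feats, dim=2):
    """First `dim` variance-normalized principal components of the multiplets."""
    z = feats - feats.mean(axis=0)
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    pcs = z @ vt[:dim].T
    return pcs / pcs.std(axis=0)                       # variance-normalized coordinates
```

The coordinates x'(t) of the transformed images are obtained by running the same two functions on the transformed frames.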
The same procedure was applied to the measurements x'(t), derived from the transformed images, in order to compute the local vectors, V'_(i)(x'). These local vectors are shown in the left and right panels of Figure <ref>. The measurement time series, x(t), and the corresponding local vectors, V_(i), were substituted in (<ref>) in order to derive the inner time series, w_i(t), corresponding to the sequence of untransformed images. Likewise, the measurement time series, x'(t), and the corresponding local vectors, V'_(i), were used to derive the inner time series, w'_i(t), corresponding to the sequence of transformed images. The thin black lines and the thick gray lines in Figure <ref> show the weights, w_i(t) and w'_i(t), respectively, during the time interval depicted in Figure <ref>, after w'_i(t) was multiplied by a global permutation and reflection. Notice that the inner time series are nearly the same, despite the fact that they were derived from the outputs of dramatically different sensors. In other words, the inner time series are sensor-independent, as proved in Subsection <ref>. These results loosely mimic the findings of the well-known psychophysical experiments (<cit.>) in which subjects, who wore inverting/distorting goggles, eventually learned to perceive the world as it was perceived before wearing the goggles. Similarly, Figure <ref> shows that the observer, whose camera was "wearing" goggles, perceived the inner properties of the image time series to be the same (thick gray lines) as they were perceived before wearing the goggles (thin black lines).

§.§ Nonlinear mixtures of two audio waveforms

In this subsection, the system consists of two speakers, whose utterances are statistically independent and are observed in two ways: 1) as a pair of unmixed signals, each one being one speaker's waveform; 2) as a pair of nonlinear mixtures of the unmixed signals. The unmixed and mixed pairs of signals simulate measurements made by two observers who were using different sensors. The procedure in Subsection <ref> was applied to derive the inner time series corresponding to the unmixed and mixed signals. These inner time series are shown to be almost the same, thereby demonstrating their sensor independence. Furthermore, the time series of each weight component, derived from the signal mixtures, is almost the same as the time series of a weight component derived from one of the unmixed signals. This demonstrates that the inner time series of a composite system is simply a collection of the inner time series of its statistically independent subsystems, as proved in Subsection <ref>.

The unmixed signals were excerpts from audio book recordings of two male speakers, who were reading different texts. The two audio waveforms, denoted x_k(t) for k=1,2, were 31.25 s long and were sampled 16,000 times per second with two bytes of depth. Figure <ref> shows the two speakers' waveforms during a short (31.25 ms) interval. These waveforms were then mixed by the nonlinear functions

μ_1(x) = 0.763 x_1 + (958 - 0.0225 x_2)^1.5 ,

μ_2(x) = 0.153 x_2 + (3.75 × 10^7 - 763 x_1 - 229 x_2)^0.5 ,

where -2^15 ≤ x_1, x_2 ≤ 2^15. This is one of a variety of nonlinear transformations that were tried with similar results. The mixed measurements, x'_k(t), were taken to be the variance-normalized principal components of the waveform mixtures, μ_k[x(t)]. Figure <ref> shows how this nonlinear mixing function mapped an evenly-spaced Cartesian grid in the x coordinate system onto a warped grid in the x' coordinate system.
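The grid-mapping check behind that figure can be reproduced in a few lines; the following is a minimal sketch (the plotting details are of course not those of the original figure):

```python
# Push an evenly-spaced Cartesian grid through the mixing functions above and
# inspect the warped image for fold-over.
import numpy as np
import matplotlib.pyplot as plt

def mu(x1, x2):
    m1 = 0.763 * x1 + (958 - 0.0225 * x2) ** 1.5
    m2 = 0.153 * x2 + (3.75e7 - 763 * x1 - 229 * x2) ** 0.5
    return m1, m2

g = np.linspace(-2 ** 15, 2 ** 15, 21)    # evenly-spaced grid lines
x1, x2 = np.meshgrid(g, g)
m1, m2 = mu(x1, x2)                       # warped grid in the x' plane
plt.plot(m1, m2, 'k-', lw=0.5)            # images of lines of constant x1
plt.plot(m1.T, m2.T, 'k-', lw=0.5)        # images of lines of constant x2
plt.xlabel("mu_1"); plt.ylabel("mu_2"); plt.show()
```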
Notice that the mapped grid does not "fold over" onto itself, showing that the mapping is invertible. The lines in Figure <ref> show the time course of x'(t). When either waveform mixture (x'_1(t) or x'_2(t)) was played as an audio file, it sounded like a confusing superposition of two voices, which were quite difficult to understand. The method in Section <ref> was then applied to these data as follows:

* The 500,000 measurements of the first unmixed waveform, consisting of x_1 and ẋ_1 at each sampled time, were sorted into an array of 16 bins in x_1. Then, the ẋ_1 distribution in each bin was used to compute local velocity correlations, and these were used to derive the one-component local vector, V_(1)(x_1), in each bin in x_1. The left panel of Figure <ref> shows these local vectors at each point. These vectors and the ẋ_1 time series were substituted in (<ref>) in order to compute the inner time series, w_1(t), for the first unmixed waveform. The result is shown by the thin black line in the left panel of Figure <ref>.

* The same procedure was applied to the second unmixed waveform in order to compute its inner time series, w_2(t). The result is shown by the thin black line in the right panel of Figure <ref>.

* The 500,000 samples of the mixed waveform, x'(t), were sorted into a 16 × 16 array of bins in x', and the distribution of velocities, ẋ', in each bin was used to compute the local vectors, V'_(i)(x'), at each point. These are shown in the right panel of Figure <ref>. These vectors and the velocity time series, ẋ'(t), were substituted in (<ref>) to compute the inner time series, w'_i(t), of the mixed waveforms. These are depicted by the thick gray lines in Figure <ref>, after they had been multiplied by an overall permutation/reflection matrix.

It is evident that the unmixed and mixed waveforms have inner time series that are almost the same. This demonstrates that an inner time series is not affected by invertible transformations of the measurement time series. In other words, the inner time series encodes sensor-independent information. When each inner time series was played as an audio file, it sounded like a completely intelligible recording of one of the speakers. In each case, the other speaker was not heard, except for a faint buzzing sound in the background. Thus, each inner time series contained all of the semantic information in the corresponding unmixed waveform.

Notice that this composite system has an inner time series, w'_i(t), which is equal to the collection of the inner time series of its statistically independent subsystems, w_1(t) and w_2(t). This demonstrates the separability property of the inner time series of a composite system, which was proved in Subsection <ref>. Also, notice that the correlation between the time series, w'_1(t) and w'_2(t), is quite low (-0.0016). As discussed in Subsection <ref>, this is expected, because these are inner time series of two statistically independent subsystems.

§ CONCLUSION

This paper describes how a time series of sensor measurements can be processed in order to create an inner time series, which is unaffected by the nature of the sensors. Specifically, if a system is observed by two sets of sensors, each set of measurements will lead to the same inner time series if the two sets of measurements are related by any instantaneous, invertible, differentiable transformation.
In effect, an inner time series encodes information about the intrinsic nature of the observed system's evolution, without depending on extrinsic factors, such as the observer's choice of sensors. An inner time series is created by statistically processing the local distributions of measurement velocities in order to derive vectors at each point in measurement space. The system's velocity can then be described as a weighted superposition of the local vectors at each point. These time-dependent weights comprise the inner time series. Because they are independent of the coordinate system in measurement space, they represent sensor-independent information about the system's velocity in state space.

The inner time series may be useful in certain practical applications. For instance, it may be used to reduce false negatives in the detection of events of interest. To see this, imagine that the objective is to detect certain "targeted" movements of a system as it moves through state space, and suppose that this is being done by using a pattern recognition technique to monitor the output of sensors that are observing the system. If the pattern recognition software is trained on the output of calibrated sensors, subsequent sensor drift will cause false negatives to occur. This can be avoided if the pattern recognition algorithm is trained on the inner time series, instead of the time series of raw measurements. As long as the local vectors are computed from recent data from the drifted sensors, the inner time series will be unaffected by sensor drift, and this procedure will sensitively detect the targeted movements. However, it should be noted that this procedure may be accompanied by some false positives. This is because a given inner time series corresponds to multiple measurement time series, which describe trajectories in different regions of the measurement space, as mentioned in Subsection <ref>.

As an example, consider the output of the moving camera in Subsection <ref>, and suppose that our objective is to detect camera movements that produce the sensor output shown by the thin black lines in Figure <ref>. Imagine that a pattern recognition algorithm is trained to detect this particular trajectory segment. However, suppose that the camera's lens subsequently "drifts" so that the targeted camera movements produce the signal shown by the thick gray line in Figure <ref>. In that case, the drifted data will not be recognized, and false negatives will occur. Now, suppose that the pattern recognition software was trained to recognize the inner time series (Figure <ref>) corresponding to the targeted camera movements. Then, sensor drift will not cause false negatives, as long as the time series to be recognized is processed with local vectors computed from recently acquired data from the drifted sensors.

As described in Subsection <ref>, an inner time series has another attractive property, in addition to its sensor independence. Namely, it automatically provides a separable description of the evolution of a system that is composite in the sense of (<ref>). To see this, consider the sensors which observe such a composite system. They may be sensitive to the movements of many subsystems, causing the raw sensor outputs to be unknown, possibly nonlinear, mixtures of many subsystem state variables.
Now, suppose that we compute the time series of multi-component weights derived from such mixture measurements. As proved in Subsection <ref>, each component of the inner time series of the composite system is the same as a component of the inner time series of one of its subsystems. In other words, the inner time series of a composite system can be partitioned into groups of components, with each group being equal to the inner time series that would have been derived from a subsystem, if it were possible to observe it alone. Because of this separability property, the inner time series may be useful for detecting a targeted movement of one particular subsystem, in the presence of other independent subsystems. In particular, a pattern recognition procedure can be trained to determine if the components of the inner time series of the targeted movement can be found among the components of the inner time series derived from the mixed measurements of the entire system. An advantage of this procedure is that it is not necessary to use blind source separation (<cit.>, <cit.>, <cit.>, <cit.>, <cit.>) to disentangle the measurement time series into its independent components. On the other hand, false positive detections can complicate any such attempt to recognize a targeted signal by its inner time series (instead of its time series of sensor measurements). These errors may occur because multiple different measurement time series may have the same inner time series, as described in Subsection <ref>.

As an illustrative example, consider the system comprised of two independent audio signals, described in Subsection <ref>, and imagine that our objective is to detect an utterance of the first speaker (left panel of Figure <ref>) in the presence of the second speaker (right panel of Figure <ref>). It is difficult to determine if this targeted signal is present in the mixtures that are actually measured (Figure <ref>). However, notice that the inner time series of the movement of interest, derived from the unmixed waveforms of a subsystem (the thin black lines in Figure <ref>), is almost the same as one of the inner time series components derived from the mixed signals of the composite system (thick gray lines in Figure <ref>). Therefore, a pattern recognition procedure which is trained on the inner time series of the unmixed signal is likely to recognize the targeted signal, even in the presence of signals from other subsystems.

Some comments on these results:

* As stated in Section <ref>, we have assumed that the sensors produce measurements that are invertibly related to the state variables of the underlying system. This invertibility property can almost be guaranteed by observing the system with a sufficiently large number of independent sensors: specifically, by utilizing at least 2N+1 independent sensors, where N is the dimension of the system's state space. In this case, the sensors' output lies in an N-dimensional subspace embedded within a space of at least 2N+1 dimensions. Because an embedding theorem asserts that this subspace is very unlikely to self-intersect (<cit.>), the points in this subspace are almost certainly invertibly related to the system's state space. Then, dimensional reduction techniques (e.g., <cit.>) can be used to find the subspace coordinates (x) that are invertibly related to the state space points, as desired. An example was presented in Subsection <ref>.
There, the camera configurations formed a two-dimensional subspace, embedded in a six-dimensional space of raw sensor measurements. This subspace was very unlikely to self-intersect, given that 6 > 2N+1 = 5. Then, principal components analysis was used to dimensionally reduce the description of each subspace point from six-dimensional coordinates to two-dimensional coordinates (x).

* An inner time series contains information that is intrinsic to the evolution of the observed system, in the sense that it is independent of extrinsic factors, such as the type of sensors used to observe the system. In other words, an inner time series contains information about what is happening "out there in the real world", independent of how the observer chooses to describe it or experience it. Mathematically speaking, an inner time series is a coordinate-system-independent property of the measurement time series; i.e., its values are the same no matter what measurement coordinate system is used on the system's state space. The local vectors (V_(i)) also represent a kind of intrinsic structure on state space. These vectors "mark" state space in a way that is analogous to directional arrows, which mark a physical surface and which can be used as navigational aids, no matter what coordinate system is being used.

* It is interesting to speculate about the role of inner time series in speech perception. By definition, two people, who understand the same language, tend to perceive the same semantic content of an utterance in that language. Remarkably, this listener-independence occurs despite the fact that the listeners may be using significantly different sensors to make measurements of that utterance (e.g., different outer, middle, and inner ears; different cochleas; different neural architectures of the acoustic cortex). This sensor-independence of speech perception suggests that the semantic content of speech may be an inner property; i.e., it may be encoded in the inner time series of speech (w_i(t)). Specifically, assume that the two listeners have past exposure to statistically similar collections of speech-like sounds. Then, they will perceive the speech-sound manifold to be "marked" by the same local vectors (V_(i)(x)), even though they may represent those vectors in different coordinate systems on the speech-sound manifold. Therefore, when the two listeners use (<ref>) to decode an utterance, they will derive the same inner time series, and they will perceive the same semantic content.

* It is equally remarkable that speech perception is largely speaker-independent. Namely, a single listener will instantly recognize that two speakers are uttering the same text. This is true despite the fact that the two sounds were produced by significantly different vocal tracts and may have traversed different regions of the speech-sound manifold. This speaker-independence will occur as long as each speaker and the listener have past exposure to statistically similar collections of speech sounds. In that case, because of the above-mentioned listener-independence, each speaker and the listener will derive the same inner time series when they listen to the speaker's utterance. Therefore, if the two speakers have encoded the same semantic content (i.e., the same inner time series) in their utterances, the listener will immediately perceive that they are saying the same thing.
Notice that two speakers' utterances, which have the same semantic content, may correspond to two different speech-sound trajectories, which have the same inner time series. Thus, in this speculative scenario, the fact that the same inner time series may be encoded in many measurement time series (see the discussion following (<ref>)) corresponds to the fact that the same semantic content can be expressed by many different voices.

[Elbert] C. Elbert, Calibration Technology. Suddeutscher Verlag, Munich, 2012.
[Morain] S. A. Morain and A. M. Budge, Post-Launch Calibration of Satellite Sensors. CRC Press, 2004.
[Bottomley] G. E. Bottomley, Channel Equalization for Wireless Communications. Wiley, New York, 2012.
[Comon Jutten] P. Comon and C. Jutten (eds.), Handbook of Blind Source Separation: Independent Component Analysis and Applications. Academic Press, Oxford, 2010.
[Jutten] C. Jutten and J. Karhunen, "Advances in blind source separation (BSS) and independent component analysis (ICA) for nonlinear mixtures," International J. Neural Systems, vol. 14, pp. 267-292, 2004.
[Almeida] L. Almeida, "Nonlinear source separation," in Synthesis Lectures on Signal Processing, vol. 2, Morgan and Claypool Publishers, 2006.
[Levin ci-JAP] D. N. Levin, "Channel-independent and sensor-independent stimulus representations," J. Applied Physics, vol. 98, art. no. 104701, 2005.
[Levin arXiv] D. N. Levin, "Model-independent analytic nonlinear blind source separation," http://arxiv.org/abs/1703.01518, March 2017.
[Levin LVA-ICA] D. N. Levin, "Model-independent method of nonlinear blind source separation," in: P. Tichavsky, M. Babaie-Zadeh, O. Michel, and N. Thirion-Moreau (eds.), Latent Variable Analysis and Signal Separation, Lecture Notes in Computer Science, vol. 10169, Springer, pp. 310-319, 2017.
[Roweis] S. T. Roweis and L. K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, pp. 2323-2326, 2000.
[Sauer] T. Sauer, J. A. Yorke, and M. Casdagli, "Embedology," J. Statistical Physics, vol. 65, pp. 579-616, 1991.
[Held] R. Held and R. Whitman, Perception: Mechanisms and Models. Freeman, San Francisco, California, 1972.
http://arxiv.org/abs/1703.08596v1
{ "authors": [ "David N. Levin" ], "categories": [ "stat.ME", "cs.SD", "math.ST", "stat.TH" ], "primary_category": "stat.ME", "published": "20170324205952", "title": "The Inner Structure of Time-Dependent Signals" }
Erel Segal-Halevi (Bar-Ilan University, Ramat-Gan 5290002, Israel; erelsgl@gmail.com)
Balázs Sziklai ('Momentum' Game Theory Research Group, Centre for Economic and Regional Studies, Hungarian Academy of Sciences, H-1112 Budapest, Budaörsi út 45., sziklai.balazs@krtk.mta.hu; Corvinus University of Budapest, Department of Operations Research and Actuarial Sciences, H-1093 Budapest, Fővám tér 8.)

In the classic cake-cutting problem (Steinhaus, 1948), a heterogeneous resource has to be divided among n agents with different valuations in a proportional way — giving each agent a piece with a value of at least 1/n of the total. In many applications, such as dividing a land-estate or a time-interval, it is also important that the pieces are connected. We propose two additional requirements: resource-monotonicity (RM) and population-monotonicity (PM). When either the cake or the set of agents changes and the cake is re-divided using the same rule, the utility of all remaining agents must change in the same direction. Classic cake-cutting protocols are neither RM nor PM. Moreover, we prove that no Pareto-optimal proportional division rule can be either RM or PM. Motivated by this negative result, we search for division rules that are weakly-Pareto-optimal — no other division is strictly better for all agents. We present two such rules. The relative-equitable rule, which assigns to all agents the maximum possible equal relative value, is proportional and PM. The so-called rightmost mark rule, which is an improved version of the Cut and Choose protocol, is proportional and RM for two agents.

Keywords: fair division; cake-cutting; resource-monotonicity; population-monotonicity; connected utilities

§ INTRODUCTION

Monotonicity axioms have been extensively studied with respect to cooperative game theory <cit.>, political representation <cit.>, computer resource allocation <cit.> and many other fair division problems <cit.>, <cit.>. These axioms express the idea of solidarity among agents: whenever the environment changes in a way that requires the re-allocation of resources, the welfare of all agents not responsible for the change should be affected in the same direction — either they should all be made at least as well off as they were initially, or they should all be made at most as well off. This is the so-called replacement principle, which was formulated by <cit.>.

Two common monotonicity axioms are resource monotonicity (RM) and population monotonicity (PM). Resource-monotonicity, sometimes known as aggregate monotonicity, requires that when new resources are added, and the same division rule is used consistently, the utility of all agents should weakly increase. Population-monotonicity is concerned with changes in the number of participants. It requires that when someone leaves the division process and abandons his share, the utility of the remaining participants should weakly increase. Conversely, when a new agent joins the process, all existing participants should participate in supporting the new agent, thus their utility should weakly decrease.

Monotonicity axioms are sometimes regarded as more important than other, more basic fairness axioms. A prominent example is the practical problem of apportionment: there is a parliament with a fixed number of seats and administrative regions with different numbers of voters. The seats have to be distributed among the regions in such a way that the resulting allotment ensures proportional representation.
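Before describing the historical solution, it is useful to see the largest-remainder (Hamilton) apportionment rule, discussed next, in executable form. The following is a minimal sketch with made-up populations (not the historical Alabama data), chosen so that the monotonicity failure described below is visible:

```python
# Largest-remainder (Hamilton) apportionment, and a hypothetical instance of
# the Alabama paradox: adding a seat costs the smallest region a seat.
from math import floor

def hamilton(populations, seats):
    quotas = [seats * p / sum(populations) for p in populations]
    alloc = [floor(q) for q in quotas]
    # hand out the remaining seats by largest fractional remainder
    for i in sorted(range(len(quotas)),
                    key=lambda i: quotas[i] - alloc[i],
                    reverse=True)[:seats - sum(alloc)]:
        alloc[i] += 1
    return alloc

pops = [6, 6, 2]              # hypothetical region populations
print(hamilton(pops, 10))     # -> [4, 4, 2]
print(hamilton(pops, 11))     # -> [5, 5, 1]
```

With 10 seats the allocation is (4, 4, 2); with 11 seats it becomes (5, 5, 1), so the smallest region loses a seat when the house grows. This is exactly the monotonicity violation discussed next.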
The solution originally employed in the USA Congress was the Hamiltonian rule (the largest-remainder scheme sketched above), which guaranteed proportional representation. However, it was found that this rule exhibits the Alabama paradox: increasing the total number of seats would have left the state of Alabama with fewer seats. In other words, the rule violates resource-monotonicity. Later, it was found that this rule also exhibits the new state paradox (the "Oklahoma paradox"): the addition of Oklahoma to the USA would have given the state of Maine more seats. In other words, the rule also violates population-monotonicity. These violations pressed legislators to adopt a new apportionment method. The currently used method, the so-called Huntington-Hill method, fails to uphold the Hare quota, a basic guarantee of proportionality, but it satisfies the monotonicity axioms <cit.>.

The present paper studies these two monotonicity requirements in the framework of the fair cake-cutting problem <cit.>, where a single heterogeneous resource - such as land or time - has to be divided fairly. We approach the problem from the classic point of view, in which each agent is interested in getting a connected piece. Connected utility functions make sense in many applications. E.g., when dividing land, a large connected piece can be used for building a house, but a collection of small disconnected patches of land is virtually useless. Similarly, when departments dispute over the availability of a conference room, each of them is interested in reserving the room for a contiguous period which is free of disruption. Another example is a long TV ad, which needs to be aired in one piece. These examples show that assuming connected utilities is, in some cases, a reasonable restriction, and indeed, many cake-cutting papers explicitly assume that each agent must be allocated a connected piece.

§.§ Results

We survey many traditional cake-cutting protocols and show that they do not satisfy either of the monotonicity axioms. In particular, all methods based on the Cut and Choose scheme violate both resource-monotonicity and population-monotonicity. This motivates a search for division rules that are both fair in the conventional sense and monotonic. We conducted this search under two different assumptions regarding the agents' utility functions, which are equally common in the cake-cutting literature.

In both models, each agent has a value measure defined over the cake. In the additive model, the utility of a piece of cake is just the value measure of that piece; the geometry of the piece has no importance. In the connected model, the utility of a piece of cake is the value of the most valuable connected component of the piece. Our results for the additive model can be found in another manuscript <cit.>. These results were mainly positive: we found several Pareto-optimal proportional division rules that satisfy one or both monotonicity axioms. In particular, the Nash-optimal rule, maximizing the product of values, is envy-free (hence also proportional), resource-monotonic and population-monotonic.

The present paper studies the connected model. Here, the situation is not so positive. Each of the monotonicity properties is incompatible with proportionality and Pareto-optimality. That is, no Pareto-optimal proportional division rule can be either resource- or population-monotonic. Thus the fair divider has to choose between Pareto-optimality and monotonicity. While from an economics perspective Pareto-optimality is crucial, public opinion may not always agree.
In some cases people are willing to sacrifice efficiency to get fairness <cit.>. As a compromise, we suggest several division rules which are proportional and weakly-Pareto-optimal (no other allocation is strictly preferred by all agents; see e.g. <cit.>) while satisfying one of the monotonicity axioms. The max-equitable-connected rules, which give equally-valuable pieces to each agent while maximizing this value, are both population-monotonic. There are two such rules: the rule equalizing the relative values (normalized such that the entire cake value is 1) is proportional but not resource-monotonic, and the rule equalizing the absolute (not normalized) values is resource-monotonic but not proportional. Additionally, we present a proportional and resource-monotonic division protocol for two agents. It is an open question whether there exists a weakly-Pareto-optimal, proportional and resource-monotonic rule for n agents.

The equitable rule belongs to the cardinal welfarism framework (cf. chapter 3 of <cit.>). It relies on inter-agent utility comparison, and makes sense if and when such a comparison is feasible. For example, suppose the agents are firms, each of which wants to use the land for a pre-specified purpose (e.g. one firm plans to dig for oil, another firm wants to build housing complexes, etc.). Then, economic models can be used to estimate the monetary utility of each firm for each piece of land, and the estimates can be used to calculate equitable divisions (see the conclusion of <cit.> for further discussion of the additive utility model).

The paper is organized as follows. Section <ref> reviews the related literature. Section <ref> formally presents the cake-cutting problem and the monotonicity axioms. Section <ref> examines classic cake-cutting protocols and shows that they are not monotonic. Sections <ref> and <ref> present our negative and positive results. Finally, conclusions and future work are discussed in Section <ref>.

§ RELATED WORK

The cake-cutting problem originates from the work of the Polish mathematician Hugo Steinhaus and his students Banach and Knaster <cit.>. Their primary concern was how to divide the cake in a fair way. Since then, game theorists have analyzed the strategic issues related to cake-cutting, while computer scientists have focused mainly on how to implement solutions, i.e., on the computational complexity of cake-cutting protocols.

Many economists regard land division as an important application of division procedures <cit.>. Hence, they note the importance of imposing some geometric constraints on the pieces allotted to the agents. Connectivity is the most well-studied constraint.

As already noted in the introduction, there is a vast literature on monotonicity-related issues. To our knowledge, our paper is the first that explicitly defines RM and PM for the cake-cutting setting. However, there are a few other axioms which bear resemblance to these two. <cit.> study the "dumping paradox" in cake-cutting. They show that, in some cakes, discarding a part of the cake improves the total social welfare of any envy-free division. This implies that enlarging the cake might decrease the total social welfare.
This is related to resource-monotonicity; the difference is that in our case we are interested in the welfare of the individual agents and not in the total social welfare.

<cit.> studies a related cake-cutting axiom called "division independence": if the cake is divided into sub-plots and each sub-plot is divided according to a rule, then the outcome should be identical to dividing the original land using the same rule. He proves that the only rule which satisfies Pareto-optimality and division independence is the utilitarian-optimal rule, i.e. the rule which maximizes the sum of the agents' utilities. The rule is only feasible when the utilities are additive (with no connectivity constraints). Unfortunately, this rule does not satisfy fairness axioms such as proportionality.

<cit.> studies the problem of "online cake-cutting", in which agents arrive and depart during the process of dividing the cake. He shows how to adapt classic procedures like Cut and Choose and the Dubins-Spanier protocol in order to satisfy online variants of the fairness axioms. Monotonicity properties are not studied, although the problem is similar in spirit to the concept of population-monotonicity.

Finally, we mention that the consistency axiom (cf. <cit.> or <cit.>) is related to population-monotonicity, but it is fundamentally different, as in that case the leaving agents take their fair shares with them.

§.§ Equitable divisions

The "heroes" of the present paper are the equitable division rules. The equitability condition in cake-cutting is much less studied than other properties such as proportionality and envy-freeness. Some notable exceptions are presented below.

The first proof of the existence of an equitable division is implied by the seminal work of <cit.>. The piece allocated to each agent can be an arbitrary member of a σ-algebra, i.e., not necessarily connected. The result of <cit.> implies the existence of an equitable division with a limited number of cuts, but still not necessarily connected. Max-relative-equitable divisions without the connectivity requirement were studied extensively by <cit.> (he calls such divisions equi-optimal). Max-relative-equitable divisions with the connectivity requirement for two agents were studied by <cit.>. The generalization to n agents was mentioned by <cit.>. They related the problem of equitable-connected cake-cutting to a set of integral equations, but did not prove that these equations are solvable. The latter point was discussed by <cit.> for the special case when the valuations are piecewise-constant and everywhere-positive. <cit.> proved that equitable-connected allocations exist for general valuations. <cit.> extended this result and proved that an equitable-connected division exists for any ordering of the agents, and that for at least one ordering it is also proportional.

The computability of equitable allocations is discussed in several recent works. <cit.> proved that there is no finite discrete procedure for finding an allocation that is equitable, connected and proportional. <cit.> showed that this impossibility holds even without the connectivity and proportionality requirements. On the positive side, <cit.> provided discrete procedures that attain ϵ-equitable connected divisions — divisions in which the difference between the values of every two agents is at most ϵ.
Independently and contemporaneously with our work, <cit.> presented a moving-knife procedure for equitable cake-cutting, for the special case in which all players are "hungry" (i.e., all valuations are strictly positive).

The main contribution of the present paper to the literature on equitable division is in showing its advantages over other, more famous cake-cutting procedures. In particular, we show that it is population-monotonic, and can be made either proportional or resource-monotonic depending on whether relative or absolute values are used. A secondary contribution is a moving-knife procedure for finding an equitable-connected division in any ordering of the agents, which is applicable for general valuations (not only strictly positive ones). This does not contradict the impossibility results mentioned above, since a moving-knife procedure is continuous rather than discrete.

§ MODEL

§.§ Cake-cutting

A cake-cutting problem is a triple Γ(N,C,(V_i)_i ∈ N) where:
* N={1,2,…,n} denotes the set of agents who participate in the cake-cutting process. In examples with a small number of agents, we often refer to them by names (Alice, Bob, Carl...).
* C is the cake. For simplicity we assume that C is an interval, C=[0,c] for some real number c. We call a Borel subset of C a slice.
* V_i is the value measure of agent i. It is a finite real-valued function defined on the Borel subsets of [0,∞).

As the term "measure" implies, the value measures of all agents are countably additive: the value measure of a union of disjoint slices is the sum of the values of the slices. Moreover, we assume that the value measures are non-negative and bounded. That is, V_i assigns a non-negative, but finite, number to each slice of C. We also assume that the value measures are absolutely continuous with respect to the Lebesgue measure: this means that a slice with zero length has zero value to anyone. Therefore it is unimportant to specify which agent gets the endpoints of an interval, since the endpoints have zero value. All these assumptions are standard in the cake-cutting literature.

Our model diverges from the standard cake-cutting setup in that we do not require the value measures to be normalized. That is, the value of the entire cake is not necessarily the same for all agents. This is important because we examine scenarios where the cake changes, so the cake value might become larger or smaller. Hence, we differentiate between absolute and relative value measures:
* The absolute value measure of the entire cake, V_i(C), can be any positive value and it can be different for different agents.
* The relative value of the entire cake is 1 for all agents. Relative value measures are denoted by v_i and defined by: v_i(S) := V_i(S) / V_i(C).

It is also common to assume that the value measures are private information of the agents. This naturally raises the question of whether the agents report their preferences honestly. Cake-cutting problems can be studied from a strategic angle; however, the results are mostly negative. For example, in any deterministic discrete strategy-proof protocol, there always exists an agent that gets the empty piece <cit.>. Here, we will not analyze the strategic behavior of the agents but assume they act truthfully.

The utility of an agent is based on its value measure. In the present paper we assume that:

U_i(X) = sup_{I⊆X} V_i(I),    u_i(X) = sup_{I⊆X} v_i(I),

where the supremum is over all connected intervals I that are subsets of X. That is, an agent can only use a single connected piece. The aim is to divide the cake into n pairwise-disjoint slices.
A division rule is a correspondence that takes a cake-cutting problem as input and returns a division X=(X_1,…, X_n), or a set of divisions. Note that a division does not necessarily compose a partition of C (i.e. free disposal is assumed). Since all agents have connected utilities, we can assume without loss of generality that each agent receives a connected piece, i.e., for all i, X_i is an interval. Under this assumption, U_i(X_i)=V_i(X_i) and u_i(X_i)=v_i(X_i) for all i, so from now on we will use only V_i and v_i.

A division rule R is called essentially single-valued (ESV) if X,Y ∈ R(Γ) implies that for all i ∈ N, V_i(X_i)=V_i(Y_i). That is, even if R returns a set of divisions, all agents are indifferent between these divisions.

The classic requirements of fair cake-cutting are the following. A division X is called:
* Pareto-optimal (PO) if there is no other division which is weakly better for all agents and strictly better for at least one agent.
* Weakly-Pareto-optimal (WPO) if there is no other division which is strictly better for all agents.
* Proportional (PROP) if each agent gets at least a 1/n fraction of the cake according to his own evaluation, i.e. for all i ∈ N, v_i(X_i) ≥ 1/n. Note that the definition uses relative values.
* Envy-free (EF) if each agent gets a piece which is weakly better, for that agent, than all other pieces: for all i,j ∈ N, V_i(X_i) ≥ V_i(X_j). Note that here it is irrelevant whether absolute or relative values are used. Note also that PO+EF imply PROP.

A division rule is called Pareto-optimal (PO) if it returns only PO divisions. The same applies to WPO, PROP and EF.

§.§ Monotonicity

We now define the two monotonicity properties. In the introduction we defined them informally for the special case in which the division rule returns a single division. Our formal definition is more general and applicable to rules that may return a set of divisions.

Let N be a fixed set of agents, C=[0,c] and C'=[0,c'] two cakes with c<c', and (V_i)_i ∈ N value measures on [0,∞). The cake-cutting problem Γ'=(N, C', (V_i)_i ∈ N) is called a cake-enlargement of the problem Γ=(N, C, (V_i)_i ∈ N). By this definition the cake is always enlarged on the right-hand side. This might be critical for some protocols. For instance, in the Dubins-Spanier moving-knife protocol the cake is processed from left to right <cit.>. However, most of our results (except those of Subsection <ref>) are valid whenever C⊂ C', regardless of whether the cake is enlarged from the left, right or middle.

(a) A division rule R is called upwards resource-monotonic, if for all pairs (Γ, Γ'), where Γ' is a cake-enlargement of Γ, for every division X∈ R(Γ) there exists a division Y ∈ R(Γ') such that V_i(Y_i) ≥ V_i(X_i) for all i ∈ N (i.e. all agents are weakly better-off in the new division).
(b) A division rule R is called downwards resource-monotonic, if for all pairs (Γ', Γ), where Γ' is a cake-enlargement of Γ, for every division Y∈ R(Γ') there exists a division X ∈ R(Γ) such that V_i(X_i) ≤ V_i(Y_i) for all i ∈ N (i.e. all agents are weakly worse-off in the new division).
(c) A division rule is resource-monotonic (RM), if it is both upwards and downwards resource-monotonic.

Let C be a fixed cake, N and N' two sets of agents such that N⊃ N', and (V_i)_i ∈ N their value measures.
The cake-cutting problem Γ'=(N', C, (V_i)_i ∈ N') is called a population-reduction of the problem Γ=(N, C, (V_i)_i ∈ N).

(a) A division rule R is called upwards population-monotonic, if for all pairs (Γ', Γ) such that Γ' is a population-reduction of Γ, for every division Y ∈ R(Γ') there exists a division X ∈ R(Γ) such that V_i(X_i) ≤ V_i(Y_i) for all i ∈ N' (all the original agents are weakly worse-off in the new division).
(b) A division rule R is called downwards population-monotonic, if for all pairs (Γ, Γ') such that Γ' is a population-reduction of Γ, for every division X ∈ R(Γ) there exists a division Y ∈ R(Γ') such that V_i(Y_i) ≥ V_i(X_i) for all i ∈ N' (all remaining agents are weakly better-off in the new division).
(c) A division rule is population-monotonic (PM), if it is both upwards and downwards population-monotonic.

As usual in the literature, the monotonicity axioms care only about absolute values. In other words, it is not considered a violation of RM if the relative value of an agent decreases when the cake grows.

For essentially-single-valued solutions, downwards resource (or population) monotonicity implies upwards resource (or population) monotonicity and vice versa. Set-valued solutions, however, may satisfy only one direction of these axioms.

The monotonicity axioms in <cit.> require that all divisions in R(Γ) be weakly better/worse than all divisions in R(Γ'). In contrast, our definition only requires that there exist such a division. This is closer to the definition of aggregate monotonicity, which originates from cooperative game theory <cit.>. The rationale is that even if a set-valued solution is used, only a single allocation will be implemented. Hence, the divider can be faithful to the monotonicity principles even if the rule suggests many non-monotonic allocations as well. Because our monotonicity requirements are weaker, any impossibility result in our model is valid in Thomson's model, too. This is not true in general for positive results; however, the specific positive results in the present paper are all based on essentially-single-valued rules. Hence, by the previous remark, they are valid in Thomson's model, too.

§ MONOTONICITY OF CLASSIC CAKE-CUTTING PROTOCOLS

Although resource- and population-monotonicity are well-established axioms in various fields of fair division, the cake-cutting literature has not adopted these ideas so far. Moreover, classical division methods like the Banach-Knaster <cit.>, Cut and Choose, Dubins-Spanier <cit.>, Even-Paz <cit.>, Fink <cit.> or Selfridge-Conway protocols do not satisfy these axioms. A detailed explanation for most of these can be found in <cit.>. For completeness, our survey includes procedures that return disconnected pieces.

All the counterexamples below feature piecewise-homogeneous cakes. These are finite unions of disjoint intervals, such that on each interval the value densities of all agents are constant (although different agents may evaluate the same piece differently). In such cases, the function V_i([0,x]) — which gives the value (for agent i) of the piece that lies to the left of the point x∈ℝ — is a piecewise-linear function (see Figure <ref>). Piecewise-homogeneous cakes are interesting on their own (see e.g. <cit.>); however, we stress that our results hold for arbitrary cakes, not only for piecewise-homogeneous ones.

Piecewise-homogeneous cakes can be represented by a simple table containing the value densities of the agents on the different slices.
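To make this representation concrete, the following minimal Python sketch encodes a piecewise-homogeneous cake as a list of densities per agent (one entry per unit-length slice) and evaluates the measure V_i of an arbitrary interval, together with the "cut query" used repeatedly by the protocols below. The function names and the unit slice length are our own illustrative choices, not part of the original protocols.

    def value(densities, a, b, slice_len=1.0):
        """Absolute value V_i([a, b]) for an agent whose constant density on
        slice k (each slice of length slice_len) is densities[k]."""
        total = 0.0
        for k, d in enumerate(densities):
            lo, hi = k * slice_len, (k + 1) * slice_len
            overlap = max(0.0, min(b, hi) - max(a, lo))
            total += d * overlap
        return total

    def mark(densities, a, target, slice_len=1.0):
        """Leftmost point x >= a such that value([a, x]) == target
        (a 'cut query'); returns infinity if the target is unreachable."""
        acc = 0.0
        for k, d in enumerate(densities):
            lo, hi = max(k * slice_len, a), (k + 1) * slice_len
            if hi <= lo:
                continue
            if d > 0 and acc + d * (hi - lo) >= target:
                return lo + (target - acc) / d
            acc += d * (hi - lo)
        return float('inf')

    # example: with densities [1, 1, 3, 3], the piece [0, 2] is worth 2
    # value([1, 1, 3, 3], 0.0, 2.0)  ->  2.0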
For example, the cake in Figure <ref> has the following representation.

§.§ Resource-monotonicity

First let us examine the Cut and Choose protocol for two agents. We define the cut-and-choose rule as the rule in which one pre-specified agent (say, Alice) cuts the cake into two pieces that are equal in her eyes, the other agent (Bob) picks the piece that he prefers, and the first agent receives the remaining piece. The following example shows that this rule is not resource-monotonic. In the examples below, the ▾ sign marks the slices added by the enlargement, and the cells marked with * in an agent's row indicate the agent's piece.

Smaller cake:
V_A | 1* 1* 1  1
V_B | 1  1  3* 3*

Enlarged cake (rightmost slice added, ▾):
V_A | 1* 1* 1* 1  2
V_B | 1  1  3  3* 2*

When the extra piece is not present (top), Alice cuts the cake after the second slice, allowing Bob to choose the piece worth 6 for him. However, when the cake is enlarged (bottom), Alice cuts after the third slice and Bob's utility drops to 5. This example implies that the Banach-Knaster, Dubins-Spanier, Even-Paz and Fink methods are not resource-monotonic either, as they all produce the same divisions on the above cake as Cut and Choose. [A more recent protocol, the Recursive Cut and Choose, proposed by <cit.>, violates resource-monotonicity for the same reason.] A short computational check of the cut-and-choose outcome on this cake is sketched below.
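The sketch below reuses the value/mark helpers defined earlier; it is an illustrative verification of the example, not part of the original paper.

    def cut_and_choose(dens_cutter, dens_chooser):
        """Cutter halves the cake by her own measure; the chooser takes the
        side he values more (ties broken towards the right piece)."""
        end = float(len(dens_cutter))              # unit-length slices
        x = mark(dens_cutter, 0.0, value(dens_cutter, 0.0, end) / 2.0)
        left = value(dens_chooser, 0.0, x)
        right = value(dens_chooser, x, end)
        return left if left > right else right     # chooser's utility

    print(cut_and_choose([1, 1, 1, 1], [1, 1, 3, 3]))        # 6.0
    print(cut_and_choose([1, 1, 1, 1, 2], [1, 1, 3, 3, 2]))  # 5.0

The run confirms that enlarging the cake lowers the chooser's utility from 6 to 5, i.e. the violation of resource-monotonicity.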
Finally, we examine the Selfridge-Conway envy-free protocol for three agents. This protocol has a pre-specified cutter (who cuts the cake into three equal pieces) and a pre-specified trimmer (who trims his best piece to make it equal to his second-best piece). W.l.o.g., we define the Selfridge-Conway rule as the rule in which Alice is the cutter and Bob is the trimmer. The following example shows that this rule is not RM.

Smaller cake:
V_A | 4  2  2  4  4* 2*
V_B | 5* 2* 3  4  1  1
V_C | 1  2  4* 4* 1  1

Enlarged cake (rightmost slice added, ▾):
V_A | 4  2  2  4  4  2* 6*
V_B | 5* 2  3  4  1  1  1
V_C | 1  2  4  4* 1* 1  1

In the smaller cake, Alice cuts the cake into three parts worth 6 to her, each made of two adjacent slices. The two most valuable parts for Bob are worth the same (7), so he passes. Then the agents choose in the order Carl, Bob, Alice. Carl's utility is 8. In the larger cake, Alice cuts three parts worth 8 to her, made of 3, 2 and 2 slices. The leftmost part is the most valuable for Bob, so he trims it to make it equal to the middle part. Carl takes the uncut part, which is worth 5 for him. Now, Carl divides the remainder into 3 equal pieces and the agents choose in the order Bob, Alice, Carl. Carl receives a piece worth at most 2, so his total utility is at most 7.

§.§ Population-monotonicity

Population-monotonicity is not applicable to protocols with a fixed number of agents, such as Cut and Choose and Selfridge-Conway. The following example shows that the Dubins-Spanier moving-knife protocol is not PM. In the next couple of examples the cells of the leaving player are colored gray.

When all three agents are present, Alice is the first to stop the knife and get a piece. In the second round, Bob stops the knife and gets a piece, and finally Carl receives the remainder, which is worth 40 for him. However, if Bob is not present, then Carl will be the first to stop the knife, and his value will be 30. The Even-Paz method and the Banach-Knaster protocol (when the agents are ordered Alice-Bob-Carl) produce the same allocations. Thus, none of these three methods is population-monotonic.

Consider now the Fink procedure. This procedure was specifically designed with upwards-population-monotonicity in mind: when a new agent joins an existing division, he takes a proportional share from each of the existing agents, so all existing agents are weakly worse off; they all participate in supporting the new agent, which is what PM is all about. However, the Fink procedure is not downwards-PM, as the following example shows:

Suppose that initially Alice and Bob use Cut and Choose and Bob is the chooser. He is able to salvage the whole cake according to his own evaluation. Now they divide their pieces into three equal parts, and Carl gets to choose one slice from each of them. Hence, Bob ends up with a piece worth at least 8 for him. But if Alice leaves, then Bob and Carl have to redivide the cake using Cut and Choose. Then, no matter who cuts, Bob ends up with only 6.

The above example seemingly contradicts our claim that upwards-PM implies downwards-PM and vice versa for essentially-single-valued solutions. However, there is a subtle difference here. The Fink procedure is based on a predefined order of the agents, and it is only upwards-monotonic if the new agent is the last in the order. An alternative explanation is to treat the Fink rule as a set-valued rule, which returns n! possible allocations, one for each of the n! possible orderings of the agents. Under this definition, the Fink rule is upwards-PM, but not downwards-PM, as shown in the example.

§ NEGATIVE RESULTS

When the agents do not care about connectivity, the ideal division rule (at least in terms of fairness) is the Nash-optimal rule, which maximizes the product of utilities: it is RM, PM, PO and EF, hence also proportional <cit.>. Moreover, for every Nash-optimal allocation there exists a price-vector that is a competitive equilibrium from equal incomes (CEEI). Therefore, it makes sense to ask whether Nash-optimality and/or CEEI retain all these desirable properties with connectivity too. Unfortunately, the answer is no.

Consider first the Nash-optimal rule and the following cake:

V_A | 2 2 2 2 2 2
V_B | 1 1 4 4 1 1

Without connectivity, it gives the two central slices to Bob and the four peripheral slices to Alice. The Nash welfare is 8·8=64. The allocation is EF and PROP. Moreover, it is supported in a competitive equilibrium from equal incomes, in which the price of a central slice is 2, the price of a peripheral slice is 1, and the income of both agents is 4. In contrast, with connectivity, the Nash-optimal rule is not proportional. To see this, observe that in both of the connected proportional divisions, Alice and Bob each get three slices and a value of 6, so the Nash welfare is 36. However, when Bob gets four slices and Alice two slices, the Nash welfare is 40. Hence neither of the two possible proportional allocations is Nash-optimal.

Moreover, with connectivity, a CEEI allocation might not exist at all. Recall that any CEEI allocation is both PO and EF; in the following cake, no PO+EF allocation exists:

[No connected allocation is both PO and EF]:
V_A | 2 0 3 0 2 0 0
V_B | 0 0 0 0 0 7 0
V_C | 0 2 0 2 0 0 3

EF requires giving Bob a part of his 7-slice; PO then requires giving him his entire 7-slice.
Carl's piece can then be either to Bob's left or to Bob's right:
* If Carl is to Bob's left and his utility is 2, then the allocation is not PO, since Carl can be moved to the rightmost slice and get a utility of 3 without harming any other agent.
* If Carl is to Bob's left and his utility is more than 2, then Alice envies him, since he holds her 3-slice while her own utility is at most 2.
* If Carl is to Bob's right, then by PO the entire left part is given to Alice, but then Carl envies her.

Since both the Nash-optimal and the CEEI rules fail in the presence of connectivity requirements, we have to look for different rules. But first we show that, with connectivity requirements, Pareto-optimality and proportionality are incompatible with resource-monotonicity:

When there are two or more agents with connected utilities, any division rule which is proportional and Pareto-optimal cannot be resource-monotonic.

Consider the following cake, where the enlargement is marked by the ▾ sign:

Smaller cake:
V_A | 6 0 1 1
V_B | 0 4 2 2

Enlarged cake (rightmost slice added, ▾):
V_A | 6 0 1 1 6
V_B | 0 4 2 2 0

In the smaller cake, any PROP+PO rule must give the leftmost slice to Alice and the rest of the cake to Bob. Hence, Alice's utility is 6 and Bob's utility is 8. In the larger cake, any PROP rule must give Alice a utility of at least 7. This leaves Bob a utility of at most 6.

Moreover, Pareto-optimality and proportionality are incompatible with population-monotonicity, too:

When agents have connected utilities, any division rule which is proportional and Pareto-optimal cannot be population-monotonic.

Consider again the cake of Example <ref>:

V_A | 2 0 3 0 2 0 0
V_B | 0 0 0 0 0 7 0
V_C | 0 2 0 2 0 0 3

Note that all agents value the entire cake as 7, so by PROP each agent must receive a connected piece with a value of at least 7/3. Bob's piece must intersect his 7-slice (by PROP) and must contain all of this slice and nothing more (by PO). Carl's piece can be either to the left or to the right of Bob's 7-slice. If Carl's piece is to Bob's left, then by PROP it must contain the two 2-slices of Carl. This leaves Alice a utility of at most 2, which violates PROP. So Carl's piece must be to Bob's right, and Carl's utility is 3. This leaves the entire region to Bob's left to Alice. By PO, she receives this entire region and her utility is 7. Suppose now that Bob leaves. Now n=2, so Carl must get a value of at least 7/2 = 3.5, so his piece must touch his middle 2-slice. But this leaves Alice a utility of at most 5.

Despite the crucial importance of Pareto-optimality in economics, in our case it is problematic: it is incompatible with the stronger fairness criterion of envy-freeness. Even with the weaker criterion of proportionality, it is incompatible with any of the monotonicity axioms. If we believe that a division rule is fair only if it satisfies both proportionality and monotonicity, we must compromise on efficiency. Some possible compromises are presented in the following section.

§ POSITIVE RESULTS

§.§ Exactly-proportional rule: PROP+RM+PM

Our first division rule is both resource- and population-monotonic, but very inefficient. We present it merely as a benchmark for comparison with the more advanced rules that come later. A division X is called exactly-proportional if it gives every agent a relative value of exactly 1/n. Formally: ∀ i∈ N: v_i(X_i)=1/n. The exactly-proportional rule returns an exactly-proportional division; such a division can be found, for example, using the following variant of the Banach-Knaster procedure <cit.> (a code sketch follows the list):
* Every agent marks a point x_i such that v_i([0,x_i])=1/n.
* The procedure selects the leftmost point x_min (breaking ties arbitrarily) and gives [0,x_min] to the agent that made that mark.
* The remaining agents divide the remaining cake recursively in the same way (keeping the fraction 1/n fixed).
* The cake that remains after the n-th step is discarded.
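The following sketch implements this Banach-Knaster variant for piecewise-constant valuations, using the helpers defined earlier; it is an illustrative rendering under those assumptions, not a general implementation.

    def exactly_proportional(all_densities):
        """Banach-Knaster variant: every remaining agent marks the point
        closing a piece worth exactly 1/n of his own whole cake; the
        leftmost marker wins that piece.  Leftover cake is discarded."""
        n = len(all_densities)
        end = float(len(all_densities[0]))
        remaining = list(range(n))
        start, pieces = 0.0, {}
        while remaining:
            marks = {i: mark(all_densities[i], start,
                             value(all_densities[i], 0.0, end) / n)
                     for i in remaining}
            winner = min(remaining, key=lambda i: marks[i])
            pieces[winner] = (start, marks[winner])
            start = marks[winner]
            remaining.remove(winner)
        return pieces   # each piece is worth exactly 1/n to its owner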
The exactly-proportional rule is proportional, resource-monotonic and population-monotonic, but not weakly-Pareto-optimal.

PROP is obvious by definition. RM holds because when the cake grows/shrinks, all agents receive the same fraction of a larger/smaller whole. PM holds because when an agent leaves/joins, the remaining agents receive a larger/smaller fraction of the same whole. The following cake shows that the rule is not WPO:

V_A | 2 0
V_B | 0 2

An exactly-proportional division must give each agent a utility of exactly 1, yet it is possible to give each agent a utility of 2. In essence, the exactly-proportional rule tells the agents: "keep your happiness at the minimal proportional level of exactly 1/n, so that when new resources become available, you can only become happier". This guarantees PROP, RM and PM, but it is very inefficient.

§.§ Relative-equitable rule: WPO+PM+PROP

We present a population-monotonic division rule based on the notion of equitable cake divisions. The idea of an equitable division is that all agents are equally happy — each agent receives a piece with the same personal value.

(a) A cake division X is called relative-equitable if all agents receive exactly the same relative value. Formally: v_i(X_i) = v_j(X_j) for all i,j ∈ N. This value is called the relative-equitable value of the division.
(b) A relative-equitable division is called max-relative-equitable if its relative-equitable value is weakly larger than that of all relative-equitable divisions.

(a) An agent-ordering, denoted by π, is a permutation on the set of agents N.
(b) A connected partition of the cake into n intervals is called a π-partition if the intervals are assigned to the n agents in the order specified by π. For example, a 132-partition is a connected partition in which Agent 1 receives the leftmost piece, Agent 3 receives the middle piece and Agent 2 receives the rightmost piece.

Independently of ours, some of the following lemmata were proved by <cit.>. We mention each lemma that was previously proved. For completeness, we provide alternative proofs.

For every agent-ordering π, there exists a relative-equitable π-partition.

A proof using generalized-inverse functions is given in Theorem 5 of <cit.>. A new and shorter proof, using the Borsuk-Ulam theorem, is given in Appendix <ref>. Below, we present a moving-knife procedure that finds an equitable-connected division for any ordering of the agents. Note that an equitable-connected division cannot be found by a discrete procedure <cit.>, so a moving-knife procedure is a natural alternative.

Without loss of generality, assume that π is the order 1,…,n. The procedure starts with the agents holding their knives at the leftmost end of the cake. There is a large screen where the current relative-equitable value is displayed, which is zero at the beginning.
During the procedure, the positions of the knives determine a division of the cake: the piece allotted to Agent i is the piece between the knife of Agent i and the knife of Agent i-1 (or, for i=1, the leftmost end of the cake). The value on the screen increases continuously, and all agents move their knives to the right such that the value of each piece matches the value on the screen. This goes on until one of the following two things happens:
(a) The rightmost knife reaches the end of the cake.
(b) The knife of an agent reaches the leftmost endpoint of an interval in which the value density of that particular agent is 0.

In case (a), the procedure stops and we have obtained a relative-equitable partition of the entire cake. The relative-equitable value is the value on the screen. In case (b), the value on the screen is frozen temporarily and the procedure enters its second phase. Let j be the rightmost agent whose piece is adjacent to a zero-value interval. So every agent who comes after j in the predefined order can strictly increase the value of his piece by moving his knife to the right. We ask all agents starting from j to move their knives to the right such that their values remain constant. This goes on until either (a) holds, or another agent k>j reaches an interval of zero measure, so that (b) holds for that agent, or Agent j reaches the end of his zero-measure interval. In the first case the procedure stops, in the second case the procedure continues in the second phase with agent k, and in the third case the procedure goes back to the first phase. Since the rightmost knife moves continuously and monotonically to the right, eventually it reaches the end of the cake and an equitable division is found. [Here we implicitly use the assumption that the value measures are bounded. If V_i([0,c]) were infinite, then the rightmost knife could move to the right indefinitely without reaching the end of the cake, by slowing down.]

We demonstrate the somewhat informal description of the above moving-knife procedure on an example. Consider the piecewise-homogeneous cake depicted in Figure <ref>. Three agents, Green, Red and Blue, seek an equitable division of the cake, which has a total value of 8 for each of them (this is a special case where the same relative value indicates the same absolute value). They agree on using the above procedure with the order Green, Red, Blue. Immediately at the beginning, we are in case (b), because Blue's knife is at a zero-value region. Thus, we enter phase 2 and Blue's knife moves to x=4. Then we return to phase 1. As the knives move to the right, the agents increase the value of their pieces until they reach a relative value of 2/8. At that moment Green's knife rests at x=2, Red's knife at x=3 and Blue's knife at x=4.5. The value displayed on the screen becomes fixed at this point, since Red has reached a zero-value interval. As Red gradually increases his piece, Blue moves his knife to the right, making sure his value does not change. This continues until Blue himself reaches x=6, which is the start of an interval of zero value. Red stops his knife at x=5.5, but Blue continues until he reaches the right end of the cake. The resulting division is relative-equitable with value 2/8. (A numerical stand-in for this computation is sketched below.)
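For piecewise-constant valuations, the relative-equitable value of a given ordering can be approximated by bisection on the common value, again using the helpers defined earlier. This sketch is only a numerical stand-in for the moving-knife procedure; consistently with the impossibility results cited above, it is exact only up to a tolerance.

    def equitable_value(all_densities, order, tol=1e-9):
        """Bisection on the common relative value r of a pi-partition:
        sweep the agents in the given order, giving each a leftmost piece
        worth r times his total; r is feasible iff the sweep ends inside
        the cake."""
        end = float(len(all_densities[0]))
        totals = [value(d, 0.0, end) for d in all_densities]

        def sweep(r):
            x, cuts = 0.0, []
            for i in order:
                x = mark(all_densities[i], x, r * totals[i])
                cuts.append(x)
            return cuts

        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if sweep(mid)[-1] <= end:
                lo = mid        # feasible: try a larger common value
            else:
                hi = mid
        return lo, sweep(lo)

Running this for all n! orderings and taking the largest value yields a numerical approximation of the max-relative-equitable division discussed next.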
Let X be a division of a cake. We denote the smallest relative value obtained by an agent by v^X_min and the largest by v^X_max, so that for all i=1,…,n: v^X_min ≤ v_i(X_i) ≤ v^X_max. Note that v^X_min = v^X_max if and only if X is a relative-equitable division.

Let π be an agent-ordering and X a π-partition of a cake. Let Y be a relative-equitable π-partition of the same cake, having relative-equitable value v^Y. Then: v^X_min ≤ v^Y ≤ v^X_max.

The proof that v^X_min ≤ v^Y is in Lemma 7 of <cit.>; the proof that v^Y ≤ v^X_max (which is analogous) is in Lemma 1 of <cit.>. We provide an alternative, graphic proof that v^X_min ≤ v^Y (see Figure <ref>). Assume w.l.o.g. that π is the ordering 1,…,n. Assume by contradiction that v^Y < v^X_min. In particular, this means that Agent 1 receives a smaller value in partition Y than in partition X, that is, v_1(Y_1)<v_1(X_1). Hence, the cut-point between pieces Y_1 and Y_2 is to the left of the cut-point between pieces X_1 and X_2. The same is true for the n-th agent: v_n(Y_n)<v_n(X_n). Hence the cut-point between pieces Y_{n-1} and Y_n is to the right of the cut-point between pieces X_{n-1} and X_n. Because the leftmost cut-point moved to the left and the rightmost cut-point moved to the right, there must be a pair of adjacent cut-points such that the left one moved to the left and the right one moved to the right (see Figure <ref>). Hence, there must be an index k such that:
* The left boundary of piece Y_k is to the left of the left boundary of X_k, and
* The right boundary of Y_k is to the right of the right boundary of X_k.
This means that Y_k ⊃ X_k, which in turn implies that v_k(Y_k) ≥ v_k(X_k). This contradicts our assumption that v^Y < v^X_min.

For every agent-ordering π, there may be many different equitable π-partitions. However, all these divisions have the same equitable value:

For every agent-ordering π, there is a unique value v^π which is the relative-equitable value of all relative-equitable π-partitions.

This simple corollary is also proved in Corollary 2 of <cit.>. Assume that there are two relative-equitable π-partitions: X with equitable value v^X and Y with equitable value v^Y. By Lemma <ref>, v^X ≤ v^Y ≤ v^X. Hence v^X=v^Y.

A straightforward corollary of the above lemmata is that the orderings can be sorted by their equitable value. Since for n agents there are only finitely many different orderings, the following holds.

There exist max-relative-equitable divisions with connected pieces.

Define the relative-equitable rule as the rule that returns all connected max-relative-equitable divisions of the cake. By Lemma <ref>, this rule is essentially-single-valued.

The relative-equitable division rule is weakly-Pareto-optimal.

Let Y be a max-relative-equitable division with equitable value v^Y. Suppose by contradiction that there is a division X in which the utility of all agents is strictly more than v^Y. Let π be the agent-ordering in X. By Lemma <ref>, the relative-equitable value of the relative-equitable division in ordering π is at least v^X_min > v^Y. But this contradicts the maximality of Y.

The relative-equitable division rule is population-monotonic.

Since the rule is essentially-single-valued, it is sufficient to prove downwards-PM. Let X be a max-relative-equitable division for n agents with equitable value v^X. Suppose that an agent i∈ N abandons his share. Give agent i's piece to an agent that holds an adjacent piece, e.g. to agent i+1. Call the resulting division Y. We obtained a connected division for n-1 agents, in which the smallest value enjoyed by an agent is at least v^X (indeed, the value of all agents except i+1 is exactly v^X, and the value of agent i+1 is at least as large). By Lemma <ref>, the maximum equitable value in the new situation is at least v^X.
Hence, in the max-relative-equitable division for n-1 agents, the value of all agents is at least as large as in the previous division.

The relative-equitable division rule is proportional.

This is also proved in Corollary 1 of <cit.>. Let X be any connected proportional division of the cake (by <cit.> such a division always exists). Because X is proportional, v^X_min ≥ 1/n. Hence, by Lemma <ref>, the value of a relative-equitable division in the same ordering as X is at least 1/n. Hence, the maximum relative-equitable value is at least 1/n.

Unfortunately, the relative-equitable rule is not resource-monotonic. Consider the following cake, where M is a large constant, M≫ 2:

Smaller cake:
V_A | M* M* 1  1
V_B | 1  1  M* M*

Enlarged cake (two rightmost slices added, ▾ ▾):
V_A | M* M* 1* 1  M  M
V_B | 1  1  M  M* 1* 1*

In the smaller cake, the unique max-relative-equitable division gives the two leftmost slices to Alice and the two rightmost slices to Bob. The relative-equitable value is M/(M+1). However, in the larger cake the unique max-relative-equitable division is attained by cutting exactly in the middle, decreasing the relative-equitable value to 1/2. While Alice gains from the enlargement and her absolute value increases by 1, Bob loses, since his absolute value drops from 2M to M+2.

The following theorem summarizes the properties of the relative-equitable rule:

The relative-equitable rule is weakly-Pareto-optimal, population-monotonic and proportional, but not resource-monotonic.

§.§ Absolute-equitable rule: WPO+PM+RM

As shown by Example <ref>, the relative-equitable rule is not RM, since the relative value of some agents becomes smaller when the cake becomes larger. This suggests that, if we use absolute instead of relative values, we may get resource-monotonicity. Fortunately, almost all definitions, examples, procedures, lemmata and proofs from the previous subsection can easily be adapted to absolute values by just replacing "relative" with "absolute"; the only exception is Lemma <ref>.

The absolute-equitable rule is weakly-Pareto-optimal, population-monotonic and resource-monotonic, but not proportional.

WPO holds by Lemma <ref> and PM by Lemma <ref>, replacing "relative" by "absolute". The proof of RM is essentially the same as the proof of Lemma <ref>: the cake enlargement can be treated as a piece that was acquired from an agent who left the scene. [Note that this argument does not hold for the relative-equitable-connected rule. When the cake grows, while the absolute value of all agents weakly increases, the relative value of some agents may decrease. Hence, the relative-equitable value in the enlarged cake might be smaller than in the original cake, and this may make some agents worse-off. See Example <ref>.] To see that the rule is not PROP, suppose that Alice values the entire cake as 1 and Bob values the entire cake as M≫ 2. Then, any absolute-equitable division must give Bob at most a fraction 1/M ≪ 1/2 of his value.

§.§ Rightmost-mark rule: WPO+PROP+RM

In this section we present a resource-monotonic procedure that produces an envy-free (hence proportional) division of the whole cake for 2 agents, giving each agent a connected piece. The procedure is called the rightmost-mark rule and consists of the following steps (a code sketch is given after the list):
* Ask both agents to make a mark which cuts the cake in half according to their own valuation. If more than one point satisfies this criterion, i.e. the middle of the cake is worthless to one of the agents, take the rightmost such point.
* Cut the cake at the rightmost mark and give the slice on the right to the agent who made that mark.
* The remaining part is given to the other agent.
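A minimal sketch of the rule follows, again built on the value/mark helpers. It assumes strictly positive densities, where each agent's half-point is unique, so the tie-breaking refinement for zero-density regions is not needed.

    def rightmost_mark(dens_a, dens_b):
        """Two-agent rightmost-mark rule: each agent marks her half-point,
        the cake is cut at the rightmost mark, and its owner takes the
        right piece (assumes strictly positive densities)."""
        end = float(len(dens_a))
        h_a = mark(dens_a, 0.0, value(dens_a, 0.0, end) / 2.0)
        h_b = mark(dens_b, 0.0, value(dens_b, 0.0, end) / 2.0)
        if h_a >= h_b:   # Alice made the rightmost mark
            return {'Alice': (h_a, end), 'Bob': (0.0, h_a)}
        return {'Alice': (0.0, h_b), 'Bob': (h_b, end)}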
For two agents with connected utilities, the rightmost-mark procedure is envy-free, proportional, weakly-Pareto-optimal and resource-monotonic.

EF and PROP are obvious. To prove WPO, suppose w.l.o.g. that Bob made the rightmost mark. Suppose we want to give Bob a piece worth strictly more than his current utility of 1/2. This can be done in two ways. One way is to keep Bob on the right side and move the division line leftwards; this necessarily does not increase Alice's utility. The other way is to switch Alice and Bob. But then, if Bob's utility is to be improved, he must receive at least Alice's current share (which is worth 1/2 for him). This leaves at most 1/2 to Alice. Hence, there is no division in which the utilities of both agents are strictly higher.

We now prove that the procedure is RM. Since the rule is single-valued, it is sufficient to prove upwards-RM. Suppose w.l.o.g. that Bob made the rightmost mark on the smaller cake (see Fig. <ref>). Thus, Bob obtained the piece marked with B, which is worth exactly half for him. Alice received the part marked with A, which is worth at least half for her. When the cake is enlarged, two cases are possible: the order of the cut marks made by the agents either remains the same or gets reversed. In the first case, Bob still receives the rightmost piece (marked with B'). Since it is still worth half of the cake for him, and since the cake is enlarged, he is not worse off. Neither is Alice, who receives a piece that contains her original share. In the second case, Alice receives the rightmost piece A''. Note that she believes that the pieces A'' and B'' represent the same value, and B'' contains A, her original piece. Thus, she is not worse off. Similarly, Bob evaluates A and B the same, and he received B'', which contains A; thus he is not worse off either.

Now we show that, in the special case in which the value densities of both agents are strictly positive, the rightmost-mark division rule is the only rule which is PROP+WPO+RM. First we need a lemma.

Suppose there are n=2 agents, Alice and Bob, with strictly-positive valuations and half-points h_A and h_B, respectively. Then, any PROP+WPO division rule that allocates connected pieces must:
(1) Cut the cake in the closed interval between h_A and h_B (that is, in [h_A,h_B] if h_A ≤ h_B, or in [h_B,h_A] if h_B ≤ h_A);
(2) Allocate the leftmost piece to the agent with the leftmost half-point (that is, to Alice if h_A ≤ h_B, or to Bob if h_B ≤ h_A).

If h_A=h_B then obviously the only PROP allocation is to cut at that point and give a half to each agent. W.l.o.g., we now assume that h_A < h_B. If the cake is cut at x<h_A, then the piece to the left of x is worth less than 1/2 to both agents, so it cannot be given to either of them. Similarly, if the cake is cut at x>h_B, then the piece to the right of x is worth less than 1/2 to both agents. Hence, the cake must be cut at some x∈[h_A,h_B]. Since h_A<h_B, we must have either h_A<x or x<h_B (or both). In the former case, the piece to the right of x is worth less than 1/2 to Alice; in the latter case, the piece to the left of x is worth less than 1/2 to Bob.
So in both cases, Alice must get the left piece and Bob must get the right piece.

For n=2 agents with strictly-positive valuations, any PROP+WPO+RM division rule that allocates connected pieces must allocate the rightmost agent a relative utility of exactly 1/2.

If h_A=h_B then obviously both agents must receive exactly 1/2. W.l.o.g., we now assume that h_A<h_B, so the rightmost agent is Bob. We normalize the agents' valuations to 2, so the cake looks like the following (where a∈[0,1] and b∈[0,1] are some constants):

      | [0,h_A)  [h_A,h_B)  [h_B,1]
V_A   |    1         a        1-a
V_B   |    b        1-b        1

We claim that a PROP+WPO+RM algorithm must give Bob a value of at most 1. Suppose by contradiction that Bob's value is 1+2d, where d>0 is a constant, d∈(0,1-b). Now, the cake grows as follows:

      | [0,h_A)  [h_A,h_B)  [h_B,1]   ▾
V_A   |    1         a        1-a    2a
V_B   |    b        1-b        1      d

In the extended cake, h_A moves rightwards and is located exactly between slices #2 and #3. h_B also moves slightly rightwards and is now located inside slice #3. By Lemma <ref>, the cake is cut at or to the right of the new h_A, and Bob receives the rightmost piece. Hence, Bob's new value is at most 1+d, in contradiction to RM.

Theorem <ref> implies that, when the value densities are strictly positive, the rightmost-mark rule is the only rule that satisfies PROP+WPO+RM with connected utilities.

§ CONCLUSION AND FUTURE WORK

We studied the monotonicity properties in combination with the classical axioms of proportionality and Pareto-optimality. Table <ref> summarizes the properties of the various division rules. Most properties are proved in the body of the paper, except the WPO properties of the classic protocols, which are proved in Appendix <ref>.

Each of our connected division rules satisfies three of the four properties {PROP, WPO, RM, PM}. Thus, the divider has to choose whether to give up proportionality (PROP), efficiency (WPO), or one of the monotonicity properties. We are still missing a rule that satisfies PROP+WPO+RM for three or more agents, as well as a rule that satisfies PROP+WPO+RM+PM for two or more agents. Additionally, combining envy-freeness with monotonicity for three or more agents looks like a fairly challenging task.

In this paper we ignored strategic considerations and assumed that all agents truthfully report their valuations. An interesting future research topic is how to ensure monotonicity in truthful division procedures.

Finally, our procedure for equitable division uses moving knives and is thus not discrete. Recently, <cit.> and <cit.> presented discrete procedures that attain approximately-equitable connected divisions. A division rule based on such procedures naturally attains approximate versions of proportionality and monotonicity. Further development of this idea is deferred to future work.

§ ACKNOWLEDGMENTS

The idea of this paper was born in the COST Summer School on Fair Division in Grenoble, 7/2015 (FairDiv-15). We are grateful to COST and the conference organizers for the wonderful opportunity to meet with fellow researchers from around the globe. In particular, we are grateful to Ioannis Caragiannis, Ulle Endriss and Christian Klamler for sharing their insights on cake-cutting.
We are also thankful to Marcus Berliant, Shiri Alon-Eron, Christian Blatter and Ilan Nehama for their very helpful comments. The authors acknowledge the support of the 'Momentum' Programme (LP-004/2010) of the Hungarian Academy of Sciences, the Pallas Athene Domus Scientiae Foundation, the OTKA grants K108383 and K109354, the ISF grant 1083/13, the Doctoral Fellowships of Excellence Program, the Wolfson Chair and the Mordecai and Monique Katz Graduate Fellowship Program at Bar-Ilan University. In addition, Sziklai was supported by the ÚNKP-16-4-I. New National Excellence Program of the Ministry of Human Capacities.

§ EXISTENCE OF EQUITABLE-CONNECTED DIVISIONS

The proof uses the Borsuk-Ulam theorem. [Independently and contemporaneously with our work, <cit.> came up with a similar idea.] The theorem is about functions defined on spheres. Define the sphere S^{n-1} as the set of points (x_1,…,x_n) satisfying |x_1|+⋯+|x_n| = 1 (it is a sphere in the ℓ_1 metric).

[Borsuk-Ulam] Let f_i, for i=1,…,n-1, be real-valued functions of n variables that are continuous on the sphere S^{n-1}. Then, there exists a point on the sphere, X^*=(x_1,…,x_n)∈ S^{n-1}, such that for all i: f_i(X^*) = f_i(-X^*).

Assume that the cake is the interval [0,1]. Each point (x_1,…,x_n) ∈ S^{n-1} corresponds to a partition of the cake into n intervals, marked X_1,…,X_n, such that the length of interval X_i (the i-th interval from the left) is |x_i|. Note that each partition corresponds to many points, which differ in the signs of some or all of the coordinates (this representation of the partition space was introduced by <cit.> and used e.g. by <cit.>). Suppose w.l.o.g. that the players are ordered from 1 to n, so that player i receives the piece X_i. For every point X=(x_1,…,x_n)∈ S^{n-1} and for every i∈{1,…,n-1}, define the function f_i(X) as follows:

f_i(X) = sgn(x_i)·v_i(X_i) - sgn(x_{i+1})·v_{i+1}(X_{i+1})

Note that when x_i=0, the interval X_i is empty, so v_i(X_i)=0. Hence, the functions f_i are continuous on S^{n-1}. Hence, by the Borsuk-Ulam theorem, there exists a point X^* on S^{n-1} such that for all i: f_i(X^*) = f_i(-X^*). By the definition of the f_i, the cake division that corresponds to X^* necessarily satisfies:

sgn(x_i)·v_i(X^*_i) = sgn(x_{i+1})·v_{i+1}(X^*_{i+1})

This is possible only if, for all i∈{1,…,n-1}: v_i(X^*_i) = v_{i+1}(X^*_{i+1}). Hence the division X^* is equitable.

§ WEAK PARETO-OPTIMALITY OF CLASSIC PROTOCOLS

It is a well-known fact that the classic protocols that we discuss here are not Pareto-optimal. Now we show that, with the exception of Cut and Choose, they are not even weakly Pareto-optimal. In the following example the Banach-Knaster, Dubins-Spanier and Even-Paz protocols coincide.

Original allocation:
V_A | 2* 0  0  0  0  4
V_B | 2  3* 1* 1* 5  0
V_C | 2  3  1  1  5* 0*

A strictly improving allocation:
V_A | 2  0  0  0  0  4*
V_B | 2* 3* 1* 1  5  0
V_C | 2  3  1  1* 5* 0

Alice gets the first slice, as it composes 1/3 of her cake value, while Bob and Carl perform a Cut and Choose on the rest of the cake. The second table presents an alternative allocation (still with connected pieces) which is strictly better for all agents.

The Fink method is not contiguous; hence we can obtain an improvement by composing the pieces from more slices. For two agents, the Fink method proceeds as Cut and Choose: Alice cuts in the middle and Bob chooses the piece on the right.
The second cake is a Pareto-improvement with four slices, where every agent is strictly better off. The same example shows that Cut and Choose is not WPO for additive utilities. However, it is a contiguous protocol; thus it makes sense to investigate whether we could improve it with connected pieces.

The Cut and Choose method is weakly Pareto-optimal whenever the agents have connected utilities.

Suppose Alice cuts at point x ∈ [0,c] and Bob chooses. The cake is divided into the left piece (L) and the right piece (R). Without loss of generality we may assume that Alice receives the left piece. Alice's utility can be improved in only two ways. (a) Cut at some point x'>x and give the left piece to Alice. But then Bob would receive R' ⊂ R, which obviously cannot be strictly better than R. (b) Cut at some point x''<x and give Alice the right piece R''⊃ R. Then Bob receives L''⊂ L. Since Bob preferred R to L, he does not gain utility by this swap. So it is impossible to strictly increase the utilities of Alice and Bob simultaneously.

Finally, we show that the Selfridge-Conway protocol does not satisfy WPO either. Consider the following cake. Alice cuts the cake into three equal parts. Since the two most valuable parts have equal value for Bob, he passes. The agents choose in the Carl-Bob-Alice order and obtain the pieces shown in the table on the left. However, as the table on the right shows, there is an allocation where every agent is strictly better off.
Anomalous dynamical phase in quantum spin chains with long-range interactions

Jad C. Halimeh

Universität Göttingen, Institut für Theoretische Physik, Friedrich-Hund-Platz 1, 37077 Göttingen, Germany
Universität Göttingen, Institut für Theoretische Physik, Friedrich-Hund-Platz 1, 37077 Göttingen, Germany
Vienna Center for Quantum Technology, University of Vienna, Boltzmanngasse 5, 1090 Wien, Austria
Physics Department and Arnold Sommerfeld Center for Theoretical Physics, Ludwig-Maximilians-Universität München, D-80333 München, Germany
National Institute for Theoretical Physics (NITheP), Stellenbosch 7600, South Africa
Institute of Theoretical Physics, Department of Physics, University of Stellenbosch, Stellenbosch 7600, South Africa

The existence or absence of non-analytic cusps in the Loschmidt-echo return rate is traditionally employed to distinguish between a regular dynamical phase (regular cusps) and a trivial phase (no cusps) in quantum spin chains after a global quench. However, numerical evidence in a recent study [J. C. Halimeh and V. Zauner-Stauber, arXiv:1610.02019] suggests that instead of the trivial phase a distinct anomalous dynamical phase, characterized by a novel type of non-analytic cusps, occurs in the one-dimensional transverse-field Ising model when interactions are sufficiently long-range. Using an analytic semiclassical approach and exact diagonalization, we show that this anomalous phase also arises in the fully-connected case of infinite-range interactions, and we discuss its defining signature. Our results show that the transition from the regular to the anomalous dynamical phase coincides with ℤ_2-symmetry breaking in the infinite-time limit, thereby showing a connection between two different concepts of dynamical criticality. Our work further expands the dynamical phase diagram of long-range interacting quantum spin chains, and can be tested experimentally in ion-trap setups and ultracold atoms in optical cavities, where interactions are inherently long-range.

§ INTRODUCTION

Dynamical phase transitions have recently been the subject of intense theoretical and experimental investigation. Most commonly, they fall into two main types, both of which involve a quench where a control parameter in the system Hamiltonian is abruptly switched from some initial value to a final one, subsequently throwing the system out of equilibrium. The first kind of dynamical phase transition (DPT-I)<cit.> is of the Landau type: one waits for the system to relax into a (quasi-)steady state and extracts a suitable order parameter, usually that associated with spontaneous symmetry breaking in the system at equilibrium. This is done as a function of the final value of the quench-control parameter, and if a non-analyticity arises in this function, then a DPT-I has occurred in the system.

A second type of dynamical phase transition is the DPT-II,<cit.> in which non-analyticities in time, or their absence, in the Loschmidt-echo return rate

r(t) = -lim_{N→∞} (1/N) ln|⟨ψ_0|e^{-iℋ̂t}|ψ_0⟩|^2,

characterize different phases, with pre-quench ground state |ψ_0⟩, system size N, and post-quench Hamiltonian ℋ̂. In the context of the DPT-II, an analogy<cit.> is made between the thermal partition function and the Loschmidt echo ⟨ψ_0|e^{-iℋ̂t}|ψ_0⟩, or, equivalently, between the thermal free energy and the Loschmidt-echo return rate r(t), where evolution time is now interpreted as a complex inverse temperature.
Consequently, if the Loschmidt-echo return rate exhibits non-analyticities in evolution time after a quench, this is analogous to non-analyticities in the free energy of a system in equilibrium, which is the hallmark of an equilibrium phase transition.<cit.> This DPT-II, first classified in the seminal work of Ref. Heyl2013 for the one-dimensional nearest-neighbor transverse-field Ising model (NN-TFIM), has been studied both analytically<cit.> and numerically<cit.> in various models, and has also been experimentally observed.<cit.> Even though for certain quenches<cit.> the critical final value of the quenching parameter that separates the phase with cusps from that with no cusps coincides with the equilibrium critical point of the model, this is not always the case,<cit.> and in general the dynamical critical point separating such dynamical phases is different from its equilibrium counterpart.

In Fig. <ref> we show, in the context of the DPT-II for quenches from zero field strength, the dynamical phase diagram of the one-dimensional long-range transverse-field Ising model (LR-TFIM), given by the Hamiltonian

ℋ̂(Γ) = -(J/2𝒩) ∑_{i≠j}^N (1/|i-j|^α) Ŝ^z_i Ŝ^z_j - Γ ∑_i Ŝ^x_i,

where Ŝ^a_i, a=x,y,z, are the spin-1/2 operators on site i, J>0 is the spin-spin coupling constant, Γ is the strength of the transverse magnetic field, α≥0, and 𝒩 is the Kac normalization<cit.> given by

𝒩 = (1/(N-1)) ∑_{i≠j}^N 1/|i-j|^α = (2/(N-1)) ∑_{n=1}^N (N-n)/n^α,

which guarantees intensivity of the energy density for α≤1. The part of this diagram at α=0 is the main result of this work. The part of this phase diagram for α>1 has been constructed using matrix product state (MPS) techniques for infinite systems, a method known as iMPS.<cit.> In the limit α→∞ the nearest-neighbor result<cit.> is recovered. As can be seen in Fig. <ref>, quenching from zero field strength to above a certain dynamical critical value sets the system in a regular dynamical phase characterized by the appearance of an infinite sequence of cusps, with the first cusp appearing before the first minimum in the Loschmidt-echo return rate. These cusps become sharper and temporally less separated with increasing quench strength.<cit.> However, for sufficiently long-range interactions (α≲2.3), a new anomalous dynamical phase<cit.> appears whose defining signature is that cusps appear only after the first minimum in the return rate. In contrast to their regular counterparts, the anomalous cusps separate less in time from each other with decreasing quench strength, with more smooth maxima emerging in the return rate before their onset. In fact, numerical results<cit.> suggest that these cusps arise for arbitrarily small quenches, even though in the framework of iMPS and time-dependent density matrix renormalization group<cit.> (t-DMRG) techniques entanglement buildup prevents access to the long evolution times that would be necessary to see the onset of these anomalous cusps for extremely weak quenches.

In this paper, we turn our attention to the analytically tractable fully-connected transverse-field Ising model (FC-TFIM), and investigate the nature of the anomalous phase in a semiclassical approach.<cit.> The advantage of this is twofold: (i) In iMPS, it is intrinsically difficult to include the Kac normalization that ensures intensivity of the energy density for α≤1, whereas the FC-TFIM allows for investigating the anomalous phase with exact diagonalization (ED) and semiclassical techniques.
ED is a technique which is fundamentally different from iMPS methods, and therefore it additionally provides an alternative venue in which to study the anomalous phase. (ii) Moreover, from an intuitive point of view, it is logical to consider the limit of infinite-range interactions, since it appears that the anomalous phase occurs only for interactions that are sufficiently long-range.

The remainder of the paper is organized as follows: In Sec. <ref> we review the FC-TFIM and use a semiclassical treatment to derive the infinite-time average of the ℤ_2 order parameter and its oscillation period. In Sec. <ref> we present and discuss our results obtained from ED, characterize the anomalous phase, and discuss the connection between the cusps in the return rate and the ℤ_2 order parameter. We conclude in Sec. <ref>.

§ LOSCHMIDT-ECHO DYNAMICAL PHASE TRANSITION

Consider a closed quantum chain of N sites prepared in the initial state |ψ_0⟩, and whose time evolution is propagated by the Hamiltonian ℋ̂. One can then mark the analogy<cit.> between the Loschmidt echo

G(t) = ⟨ψ_0|ψ(t)⟩ = ⟨ψ_0|e^{-iℋ̂t}|ψ_0⟩,

and the partition function 𝒵(β) = Tr e^{-βℋ̂} describing the system in thermal equilibrium, where β is the inverse temperature, noting that we set ħ to unity all throughout. Taking |ψ_0⟩ as the corresponding boundary condition and considering a complex inverse temperature z, one can then view (<ref>) as a boundary partition function

𝒵_b(z) = ⟨ψ_0|e^{-zℋ̂}|ψ_0⟩

along the imaginary axis z = it. Thus, its natural logarithm, the return-rate function

r(t) = -lim_{N→∞} (1/N) ln|G(t)|^2,

arises as an analogue of the thermal free energy per site in the thermodynamic limit N→∞, in which a non-analyticity would indicate the presence of dynamical criticality. A connection is thus established between the system in thermal equilibrium and the system undergoing time evolution, where one would then naturally inquire whether a dynamical phase transition can occur in the latter case just as an equilibrium phase transition occurs in the former.

§ FULLY-CONNECTED TRANSVERSE-FIELD ISING MODEL

§.§ Model and quench

The one-dimensional FC-TFIM is described by taking the α=0 limit of (<ref>),

ℋ̂(Γ) = -(J/2N) ∑_{i≠j}^N Ŝ^z_i Ŝ^z_j - Γ ∑_i Ŝ^x_i - ϵ ∑_i Ŝ^z_i,

where we have additionally introduced a ℤ_2-symmetry-breaking term, with ϵ a small positive longitudinal field of 𝒪(1/N), because we treat finite-size systems only and spontaneous symmetry breaking is a feature of the thermodynamic limit. The FC-TFIM has an equilibrium quantum critical point<cit.> at Γ_c^e=J/2. Hence, in the ground state of (<ref>) the longitudinal magnetization is positive for Γ<Γ_c^e and vanishes for Γ>Γ_c^e.

We are interested in the DPT-II and its corresponding dynamical phases in the FC-TFIM, with Γ as the quench-control parameter. In the following, we prepare our system in the ground state |ψ_0⟩ of ℋ̂(Γ_i); at time t=0, the field strength is suddenly switched from Γ_i to Γ_f≠Γ_i, leading to time-evolving the system under ℋ̂(Γ_f) and subsequently discerning from the return rate which dynamical phase our system is in from the perspective of the DPT-II. The DPT-I in this model was first studied in Ref. Sciolla2011. Moreover, it was argued that there is an equivalence<cit.> between the DPT-I and the DPT-II in the LR-TFIM, and also in the FC-TFIM.<cit.> Nevertheless, to the best of our knowledge, the anomalous phase has not been previously investigated outside of Ref. Halimeh2016b, which does so numerically in the context of the LR-TFIM for α>1.
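Because the FC-TFIM conserves the total spin, the quench dynamics takes place entirely in the maximal-spin (Dicke) sector of dimension N+1, which is what makes ED feasible here for large N. The following numpy-only sketch (system sizes and quench values are illustrative choices of ours) builds the Hamiltonian in this sector and evaluates the return rate from the spectral decomposition of the post-quench Hamiltonian.

    import numpy as np

    def fc_tfim(N, gamma, J=1.0, eps=0.0):
        """FC-TFIM Hamiltonian in the maximal total-spin (Dicke) basis
        |S=N/2, m>; the constant term from Sum_i (S^z_i)^2 is dropped."""
        S = N / 2.0
        m = np.arange(-S, S + 1.0)                          # S^z eigenvalues
        sp = np.sqrt(S * (S + 1) - m[:-1] * (m[:-1] + 1))   # <m+1|S^+|m>
        H = np.diag(-J / (2.0 * N) * m**2 - eps * m)
        H -= gamma * 0.5 * (np.diag(sp, 1) + np.diag(sp, -1))
        return H

    def return_rate(N, gamma_i, gamma_f, times, eps):
        _, vecs = np.linalg.eigh(fc_tfim(N, gamma_i, eps=eps))
        psi0 = vecs[:, 0]                                   # pre-quench ground state
        E, V = np.linalg.eigh(fc_tfim(N, gamma_f, eps=eps))
        c2 = np.abs(V.conj().T @ psi0)**2                   # |<n|psi0>|^2
        G = np.array([np.sum(c2 * np.exp(-1j * E * t)) for t in times])
        return -np.log(np.abs(G)**2) / N

    # example: quench from Gamma_i = 0 into the regular phase
    # (Gamma_f = 0.4 > Gamma_c^d = 0.25 for J = 1):
    # r = return_rate(400, 0.0, 0.4, np.linspace(0, 50, 2001), eps=1.0/400)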
§.§ Semiclassical equations of motion

The period of the ℤ_2 order parameter ŝ^z(t) = ∑_i Ŝ^z_i(t)/N can be computed in an effective semiclassical picture.<cit.> To leading order in the mean-field limit N→∞, the post-quench magnetization expectation value ⟨ŝ^z⟩ = s(t) + 𝒪(1/N) evolves according to Hamilton's equations of motion, ṡ(t) = ∂_p H_eff and ṗ(t) = -∂_s H_eff, with the effective Hamiltonian

H_eff(s,p) = -(J/2) s^2 - (Γ_f/2) √(1-4s^2) cos p,

and initial conditions

s(0) = 0 if Γ_i > Γ_c^e, s(0) = √(1/4 - Γ_i^2) if Γ_i < Γ_c^e;
p(0) = 0 if Γ_i ≠ 0, p(0) = -π/2 if Γ_i = 0.

Henceforth, we choose units of time in which J = 1. The period of the classical orbit is

T = 2∫_s_-^s_+ ds/∂_p H_eff = 2∫_s_-^s_+ ds/√((1/4 - s^2)Γ_f^2 - (E + s^2/2)^2),

and the average magnetization along this orbit is

s̅ = (1/T)∫_0^T s(t) dt = (2/T)∫_s_-^s_+ s ds/√((1/4 - s^2)Γ_f^2 - (E + s^2/2)^2),

where the integration bounds s_- < s_+ are the turning points of the trajectory s(t), and the energy, E = H_eff(s(0),p(0)), is conserved. For Γ < Γ_c^e the Hamiltonian (<ref>) has a hyperbolic fixed point at (s,p) = (0,0), whose stable directions are connected to the unstable directions by two homoclinic orbits. The homoclinic orbits separate closed ℤ_2-invariant orbits (i.e., orbits that are invariant under s ↦ -s) from closed orbits that are not ℤ_2-invariant. As pointed out in Ref. Sciolla2011, this leads to a DPT-I at Γ^d_c(Γ_i) = (Γ^e_c + Γ_i)/2. For quenches to Γ_f = Γ^d_c the initial condition (<ref>) lies on a homoclinic orbit and s(t) approaches s = 0 exponentially in time, i.e., the period (<ref>) of s(t) diverges at Γ^d_c, as shown in Fig. <ref>. For quenches to Γ_f > Γ^d_c the orbit is ℤ_2-symmetric and s(t) oscillates around zero, such that the infinite-time average

s̅ = lim_t→∞ lim_N→∞ (1/(Nt)) ∫_0^t dt' ∑_i ⟨Ŝ^z_i(t')⟩

vanishes. Note that the limit N→∞ has to be taken before the limit t→∞ in order to obtain the semiclassical result (<ref>). In contrast, for Γ_f < Γ^d_c, the orbit is not ℤ_2-symmetric and the infinite-time average takes a nonzero value, cf. Fig. <ref>.

§ RESULTS AND DISCUSSION

We shall now present our results on the two distinct phases (regular and anomalous) of the DPT-II in the FC-TFIM, and argue that they are intimately related to the phases of the DPT-I in this model through sharing the same critical point Γ^d_c. Traditionally, the DPT-II is known to give rise to two phases: one with (regular) cusps for quenches across the DPT-II critical point, and a second with no cusps in the return rate for quenches not crossing it. In Ref. Heyl2013, this was demonstrated in the case of the NN-TFIM, where it can be analytically shown that the DPT-II critical point is Γ^e_c. Much like the case of the NN-TFIM, the return rate in the FC-TFIM also shows regular cusps for quenches across Γ^d_c, as shown in Fig. <ref> for Γ_i = 0. In agreement with previous results,<cit.> these cusps occur before the first minimum of the return rate and beyond. Also, the period of these cusps matches that of the order parameter at longer times and decreases with quench strength, while the cusps themselves get sharper.

In the case of the NN-TFIM, cusps in the return rate are absent<cit.> for quenches below Γ^e_c, and the return rate is fully analytic. This has also been observed in Ref. Halimeh2016b to be the case for the LR-TFIM with sufficiently short-range interactions, α ≳ 2.3. However, for longer-range interactions, the return rate does exhibit a new kind of cusps that are qualitatively different in their behavior from their regular counterparts.
These cusps characterize the anomalous dynamical phase, defined by a Loschmidt-echo return rate that displays non-analyticities only after its first minimum. In fact, it can be shown in iMPS that these anomalous cusps are caused by level crossings within a set of dominant eigenvalues of the MPS transfer matrix that is qualitatively different from the set responsible for the manifestation of the regular cusps, the latter being dominant for quenches above the DPT-II critical point. In good agreement with iMPS data for the LR-TFIM, our ED results in Fig. <ref> for the FC-TFIM show such anomalous cusps in the return rate for quenches below Γ^d_c, which, unlike the case of the NN-TFIM, is not equal to Γ^e_c for the FC-TFIM. At longer times they also possess the same period as the order parameter, and, in contrast to the regular cusps, their period increases with quench strength. Moreover, they separate less in time and are preceded by more smooth maxima in the return rate with decreasing quench strength.

However, it is to be emphasized that the distinctive signature of the anomalous phase is that its cusps are delayed, in the sense that they always occur after the first minimum of the return rate. This leads to smooth peaks preceding them, with more such analytic peaks the smaller the quench is. This can be seen in Fig. <ref>, and agrees with what is observed in iMPS for the nonintegrable model<cit.> for α ≲ 2.3. Additionally, we find that the anomalous cusps occur for arbitrarily small quenches in the FC-TFIM.

The transition from the regular phase to the anomalous phase can be understood by observing the regular cusp before the first minimum of the return rate in each panel of Fig. <ref>. This cusp moves away from the first maximum of the return rate and closer to the first minimum as Γ_f is decreased towards Γ^d_c. Once Γ_f ≤ Γ^d_c, this cusp crosses the first minimum of the return rate as we enter the anomalous phase, cf. Fig. <ref>. In fact, one can assign for quenches to Γ_f > Γ_c^d a (pseudo-)order parameter<cit.>

η = 1 - t_1^*/t_1^min,

with t_1^* the time at which the first cusp occurs and t_1^min the time of the first minimum in the return rate. Fig. <ref> shows this parameter decaying towards zero as one approaches the dynamical critical point from deep in the regular phase. For Γ_f ≫ Γ_c^d, we find that η → 0.5, meaning that the first cusp becomes situated exactly at the first maximum of the return rate, which is typical of quenches deep into the regular phase. As we are dealing with a finite system, this parameter will nevertheless not decay sharply to zero at Γ_c^d. In the thermodynamic limit, on the other hand, η is expected to decay sharply to zero at Γ_c^d, but this limit is not accessible in our numerical simulations. By definition, η = 0 for Γ_f < Γ_c^d, because there is no cusp before the first minimum of the return rate in the anomalous phase. More details on the regular-to-anomalous transition are provided in sec:transition.

It is evident in Figs. <ref> and <ref> that in the regular phase the ℤ_2 symmetry is preserved, whereas in the anomalous phase it is broken with a non-vanishing average of the order parameter, in agreement with the infinite-time limit of Fig. <ref>. This indicates that the DPT-I and DPT-II are intimately related by sharing a common critical point Γ^d_c. Also, Figs. <ref> and <ref> indicate that the period of the cusps in either dynamical phase and that of the oscillations of the order parameter are the same at long times.
In fact, our simulations show that the period of the cusps also grows indefinitely as Γ_f approaches Γ^d_c, in accordance with the diverging period of the order parameter shown in Fig. <ref>. As exemplified in sec:NonzeroInitialField, all findings also hold for other initial conditions Γ_i ≠ 0.

Furthermore, we comment that, unlike in the LR-TFIM for α ≲ 2.3 in Ref. Halimeh2016b, the Loschmidt-echo return rate of the FC-TFIM does not exhibit double-cusp structures. We speculate that these double cusps may be related to the nonintegrability of the LR-TFIM, and would thus be missing in the case of the FC-TFIM. We leave this question open for future investigation.

Finally, we remark that our ED results were extensively tested for convergence on various environments and using different independent implementations. In cases where the Loschmidt echo is very small, i.e., for large system sizes and at times when the Loschmidt return rate is large, we observed that double-precision (≈16 significant digits) ED is not sufficient to numerically resolve the Loschmidt return rate. In order to get rid of the numerical noise, we performed the numerical computations with enhanced precision of up to 256 significant digits.

§ CONCLUSION

Using semiclassical equations of motion and exact diagonalization, we have shown that the fully-connected transverse-field Ising model exhibits two distinct dynamical phases, one of which seems to occur as a direct result of the long-range interactions in this model. Starting in a ℤ_2-symmetry-broken ground state, quenches below the dynamical critical point give rise to the anomalous phase, whose defining signature is the occurrence of cusps only after the first minimum of the Loschmidt-echo return rate. On the other hand, quenches above the dynamical critical point lead to the regular phase, which shows cusps also before the first minimum of the return rate. The periods of the cusps in both phases display an intimate connection to the period of the ℤ_2 order parameter oscillations. In fact, our ED simulations indicate that the anomalous phase coincides with the DPT-I phase of broken ℤ_2 symmetry, while the regular phase coincides with the DPT-I disordered phase. Our results agree with numerical results on the nonintegrable transverse-field Ising model with long-range interactions, obtained using an infinite matrix product state technique. Additionally, they provide support for the notion that long-range interactions bring about a new anomalous dynamical phase not found in short-range quantum spin chains. Our findings further extend the dynamical phase diagram of quantum spin chains with ℤ_2 symmetry, and are suitable for investigation in ion-trap and optical-cavity atom-photon experiments where interactions are long-range.

§ ACKNOWLEDGMENTS

The authors acknowledge discussions with Markus Heyl, Stefan Kehrein, and Ulrich Schollwöck, and are grateful to Michael Kastner for carefully reading and providing valuable comments on the manuscript. Financial support from the Deutsche Forschungsgemeinschaft (DFG) through SFB/CRC1073 (Projects B03 and C03) is gratefully acknowledged. V. Z.-S. gratefully acknowledges support from the Austrian Science Fund (FWF): F4104 SFB ViCoM and F4014 SFB FoQuS.

§ TRANSITION FROM ANOMALOUS TO REGULAR PHASE

As mentioned in the main text, the transition from the anomalous to the regular phase manifests in the presence of a cusp immediately preceding the first minimum of the return rate in time at Γ_f ≳ Γ^d_c.
This cusp then moves away from the first minimum to smaller times towards the first maximum of the return rate as one quenches deeper into the regular phase. Fig. <ref> shows this behavior in the vicinity of Γ^d_c. For quenches very close to, yet below, Γ^d_c (top panels of Fig. <ref>), the first cusp always appears after the first minimum of the return rate, which is the defining signature of the anomalous phase. However, for quenches right above Γ^d_c (bottom panels of Fig. <ref>), we see that the first cusp is no longer preceded by a minimum in the return rate, which defines the regular phase. Also to be noted is that, in agreement with the main results of Figs. <ref> and <ref>, Fig. <ref> shows that the anomalous phase is linked to a finite nonzero average of the ℤ_2 order parameter, while in the regular phase this order parameter vanishes.

Ideally, one would want to scan even closer to Γ^d_c, but this requires impracticable computational resources. The reason is that close to Γ^d_c finite-size effects are particularly pronounced, and one has to use large N in order to see converged results. This can be understood from the semiclassical picture discussed in Sec. <ref>. For quenches close to Γ^d_c the initial wave packet is localized near the homoclinic orbit of (<ref>) (recall that for the quench to Γ_f = Γ^d_c the wave packet is exactly centered on the homoclinic orbit). As time evolves, the wave packet remains localized and follows the homoclinic orbit until it reaches the neighborhood of the unstable hyperbolic fixed point at (s,p) = (0,0). Even though the wave packet is not centered exactly at the hyperbolic point, the wave packet's finite width of 𝒪(1/√N) makes it `feel' the unstable directions. As a consequence, the wave packet gets deformed and spreads in the unstable directions. This leads to a deviation from the N→∞ result, where the width of the wave packet remains localized also close to the hyperbolic point. The closer one quenches to Γ^d_c, i.e., the closer the wave packet comes to the hyperbolic fixed point, the larger N has to be to avoid these finite-size effects. Thus, even though for the main results of the paper N = 800 leads to convergence, for the quenches in this Appendix we have to go to larger N to suppress most finite-size effects.

§ QUENCHES FROM Γ_i = 0.20

We now look at the effect of changing the initial condition of our quench. Whereas the main part of the paper treats the case Γ_i = 0, quenches with different initial values of the transverse-field strength lead to the same phase diagram, with the only difference being quantitative, because Γ^d_c is a function of Γ_i as expressed in (<ref>). Nevertheless, the anomalous (regular) phase still manifests for quenches below (above) Γ^d_c. As an example, Fig. <ref> shows four quenches from initial field strength Γ_i = 0.20. For this initial value of the transverse field, the dynamical critical point according to (<ref>) is Γ^d_c = 0.35 rather than 0.25 as when Γ_i = 0 (see main results). In Fig. <ref> we go from the anomalous phase (top panels) to the regular phase (bottom panels), where we see that in the anomalous phase the first cusp always occurs after the first minimum of the return rate, which is the defining feature of this phase. Note that the weaker the quench is in this phase, the more smooth maxima (and therefore the more smooth minima) precede the first cusp in time. However, after the transition to the regular phase, we see that the first cusp occurs before the first minimum of the return rate, which is the defining feature of that phase.
This is qualitatively the same behavior as in the case of Γ_i = 0 in the main part of the paper. Additionally, Fig. <ref> shows that the anomalous (regular) phase coincides with the ℤ_2-symmetry-broken (unbroken) phase of the DPT-I for the case of Γ_i = 0.20. This is also in agreement with our results in Figs. <ref>, <ref>, and <ref> for quenches from Γ_i = 0.

Sciolla2011 B. Sciolla and G. Biroli, J. Stat. Mech. (2011) P11003.
Smacchia2015 P. Smacchia, M. Knap, E. Demler, and A. Silva, Phys. Rev. B 91, 205136 (2015).
Pozsgay2013 B. Pozsgay, J. Stat. Mech. (2013) P10028.
Zunkovic2016a B. Zunkovic, A. Silva, and M. Fabrizio, Phil. Trans. R. Soc. A 374, 20150160 (2016).
Zunkovic2016b B. Zunkovic, M. Heyl, M. Knap, and A. Silva, arXiv:1609.08482.
Halimeh2016a J. C. Halimeh, V. Zauner-Stauber, I. P. McCulloch, I. de Vega, U. Schollwöck, and M. Kastner, Phys. Rev. B 95, 024302 (2017).
Piroli2017 L. Piroli, B. Pozsgay, and E. Vernier, J. Stat. Mech. (2017) P023106.
Heyl2013 M. Heyl, A. Polkovnikov, and S. Kehrein, Phys. Rev. Lett. 110, 135704 (2013).
Footnote1 The non-analyticities in the return rate would be phase transition points in the strict sense of the analogy to non-analyticities in the thermal free energy at equilibrium. However, in this paper our characterization of a phase is determined by the kind of non-analyticities (anomalous, regular, or none) that appear in the return rate.
Heyl2014 M. Heyl, Phys. Rev. Lett. 113, 205701 (2014).
Halimeh2016b J. C. Halimeh and V. Zauner-Stauber, arXiv:1610.02019.
Halimeh2017 V. Zauner-Stauber and J. C. Halimeh, arXiv:1709.06050.
Hickey2014 J. M. Hickey, S. Genway, and J. P. Garrahan, Phys. Rev. B 89, 054301 (2014).
Vajna2014 S. Vajna and B. Dóra, Phys. Rev. B 89, 161105(R) (2014).
Vajna2015 S. Vajna and B. Dóra, Phys. Rev. B 91, 155127 (2015).
Heyl2015 M. Heyl, Phys. Rev. Lett. 115, 140602 (2015).
Schmitt2015 M. Schmitt and S. Kehrein, Phys. Rev. B 92, 075114 (2015).
Heyl2017 M. Heyl, Phys. Rev. B 95, 060504 (2017).
Campbell2016 S. Campbell, Phys. Rev. B 94, 184403 (2016).
Karrasch13 C. Karrasch and D. Schuricht, Phys. Rev. B 87, 195104 (2013).
Fagotti2013 M. Fagotti, arXiv:1308.0277.
Canovi2014 E. Canovi, P. Werner, and M. Eckstein, Phys. Rev. Lett. 113, 265702 (2014).
Kriel2014 J. N. Kriel, C. Karrasch, and S. Kehrein, Phys. Rev. B 90, 125106 (2014).
Andraschko2014 F. Andraschko and J. Sirker, Phys. Rev. B 89, 125120 (2014).
Sharma2015 S. Sharma, S. Suzuki, and A. Dutta, Phys. Rev. B 92, 104306 (2015).
Zhang2016 J. M. Zhang and H.-T. Yang, Europhys. Lett. 114, 60001 (2016).
Sharma2016 S. Sharma, U. Divakaran, A. Polkovnikov, and A. Dutta, Phys. Rev. B 93, 144306 (2016).
Abeling2016 N. O. Abeling and S. Kehrein, Phys. Rev. B 93, 104302 (2016).
Vid2016 V. Stojevic et al., Phys. Rev. B 94, 165135 (2016).
Peng2015 X. Peng et al., Phys. Rev. Lett. 114, 010601 (2015).
Flaeschner2016 N. Fläschner et al., arXiv:1608.05616.
Jurcevic2016 P. Jurcevic et al., arXiv:1612.06902.
Fannes1992 M. Fannes, B. Nachtergaele, and R. Werner, Comm. Math. Phys. 144, 443 (1992).
Verstraete2008 F. Verstraete, V. Murg, and J. I. Cirac, Adv. Phys. 57, 143 (2008).
Haegeman2011 J. Haegeman et al., Phys. Rev. Lett. 107, 070601 (2011).
Haegeman2016 J. Haegeman et al., Phys. Rev. B 94, 165116 (2016).
Stauber2017 V. Zauner-Stauber, L. Vanderstraeten, M. T. Fishman, F. Verstraete, and J. Haegeman, arXiv:1701.07035.
Kac1963 M. Kac, G. E. Uhlenbeck, and P. C. Hemmer, J. Math. Phys. 4, 216 (1963).
White1992 S. R. White, Phys. Rev. Lett. 69, 2863 (1992).
Schollwoeck2005 U. Schollwöck, Rev. Mod. Phys. 77, 259 (2005).
Schollwoeck2011 U. Schollwöck, Ann. Phys. (NY) 326, 96 (2011).
White2004 S. R. White and A. E. Feiguin, Phys. Rev. Lett. 93, 076401 (2004).
Verstraete2004 F. Verstraete, J. J. García-Ripoll, and J. I. Cirac, Phys. Rev. Lett. 93, 207204 (2004).
Vidal2004 G. Vidal, Phys. Rev. Lett. 93, 040502 (2004).
Daley2004 A. J. Daley, C. Kollath, U. Schollwöck, and G. Vidal, J. Stat. Mech. (2004) P04005.
Gobert2005 D. Gobert, C. Kollath, U. Schollwöck, and G. Schütz, Phys. Rev. E 71, 036102 (2005).
Das2006 A. Das, K. Sengupta, D. Sen, and B. K. Chakrabarti, Phys. Rev. B 74, 144423 (2006).
Footnote2 One can in principle argue that η is a sort of <cit.>, since it arises only in the regular phase of the DPT-II, which, at least for the FC-TFIM, coincides with the disordered phase of the DPT-I.
PREPRINT OF DOI: 10.1109/TPWRS.2017.2671786, IEEE TRANSACTIONS ON POWER SYSTEMS

Security Constrained Multi-Stage Transmission Expansion Planning Considering a Continuously Variable Series Reactor

Xiaohu Zhang, Student Member, IEEE, Kevin Tomsovic, Fellow, IEEE, and Aleksandar Dimitrovski, Senior Member, IEEE

This work was supported in part by ARPA-E (Advanced Research Projects Agency-Energy), and in part by the Engineering Research Center Program of the National Science Foundation and the Department of Energy under NSF Award Number EEC-1041877 and the CURENT Industry Partnership Program. Xiaohu Zhang and Kevin Tomsovic are with the Department of Electrical Engineering and Computer Science, the University of Tennessee, Knoxville, TN, USA, email: {xzhang46,tomsovic}@utk.edu. Aleksandar Dimitrovski is with the Department of Electrical and Computer Engineering, University of Central Florida, FL, USA, email: Aleksandar.Dimitrovski@ucf.edu.

This paper introduces a Continuously Variable Series Reactor (CVSR) to the transmission expansion planning (TEP) problem. The CVSR is a FACTS-like device which has the capability of controlling the overall impedance of the transmission line. However, the cost of the CVSR is about one tenth of that of a similarly rated FACTS device, which potentially allows large numbers of devices to be installed. The multi-stage TEP with the CVSR considering N-1 security constraints is formulated as a mixed integer linear programming model. The nonlinear part of the power flow introduced by the variable reactance is linearized by a reformulation technique. To reduce the computational burden for a practical large-scale system, a decomposition approach is proposed. Detailed simulation results on the IEEE 24-bus system and a more practical Polish 2383-bus system demonstrate the effectiveness of the approach. Moreover, appropriately allocated CVSRs add flexibility to the TEP problem and allow reduced planning costs. Although the proposed decomposition approach cannot guarantee global optimality, it yields a high-level picture of how the network can be planned reliably and economically considering CVSRs.

Index Terms: Continuously variable series reactor, transmission expansion planning, power flow control, N-1 security, mixed integer linear programming.
§ NOMENCLATURE

§.§ Indices

i, j: Index of buses.
k: Index of transmission elements.
n: Index of generators.
m: Index of loads.
c: Index of states; c=0 indicates the base case; c>0 is a contingency state.
b: Index of load blocks.
t: Index of time.
E: Index for an existing transmission line.
C: Index for a candidate transmission line.

§.§ Variables

P^g_ncbt: Active power generation of generator n for state c in load block b at time t.
P_kcbt: Active power flow on branch k for state c in load block b at time t.
x^V_k: Reactance of a CVSR at branch k.
u_k,1^E, u_k,2^E: Slack variables for the flow violation at existing branch k.
u_k,1^C, u_k,2^C: Slack variables for the flow violation at candidate branch k.
θ_kcbt: Voltage angle difference across branch k for state c in load block b at time t.
α_kt: Binary variable associated with line investment for branch k at time t.
δ_kt: Binary variable associated with placing a CVSR on branch k at time t.

§.§ Parameters

b_k: Susceptance of branch k.
d: Discount factor.
n_l: Number of branches in the system.
x_k: Reactance of branch k.
x_k,V^min: Minimum reactance of the CVSR at branch k.
x_k,V^max: Maximum reactance of the CVSR at branch k.
A_bt: Duration of load block b at time t.
C_k^V: Investment cost of the CVSR at branch k.
C_k^L: Investment cost of branch k.
C_n^g: Operating cost coefficient for generator n.
N_kcbt: Binary parameter associated with the status of branch k for state c in load block b at time t.
P_ncbt^g,min: Minimum active power output of generator n for state c in load block b at time t.
P_ncbt^g,max: Maximum active power output of generator n for state c in load block b at time t.
P_mcbt^d: Active power consumption of demand m for state c in load block b at time t.
S_kcbt^max: Thermal limit of branch k for state c in load block b at time t.
θ_k^max: Maximum angle difference across branch k: π/3 radians.

§.§ Sets

𝒟_i: Set of loads located at bus i.
Ω_L: Set of existing transmission lines.
Ω_L^+: Set of candidate transmission lines.
Ω_L^i: Set of transmission lines connected to bus i.
Ω_T: Set of time periods.
Ω_c: Set of states.
Ω_b: Set of load blocks.
Ω_0: Set of the base operating state.
Ω_V: Set of candidate transmission lines to install a CVSR.
ℬ: Set of buses.
𝒢: Set of on-line generators.
𝒢_i: Set of on-line generators located at bus i.
𝒢_fix: Set of on-line generators with fixed generation.

§ INTRODUCTION

The Continuously Variable Series Reactor (CVSR) has recently been proposed for power flow control <cit.>. By controlling the saturation of a magnetic core, the device is capable of continuously and smoothly regulating its output reactance, which is similar to a series FACTS controller, the TCSC. The control circuit for the CVSR is a simple, low-power-rating AC/DC converter, so the cost of the CVSR is far less than that of the TCSC. Numerous CVSRs could be installed in a single system to enable comprehensive use of the transmission capacity. This could have a significant impact on Transmission Expansion Planning (TEP) decisions and is the main reason for revisiting the TEP problem formulation in this paper.

TEP is the task of determining the best strategy to add new transmission lines to the existing power network in order to satisfy the growth of electricity demand and generation over a specified planning horizon.
In the contemporary power system, due to power market restructuring and the massive integration of renewable energy, it is critical to have a rationally planned power system that is capable of serving the increasing load not only reliably and efficiently but also economically <cit.>.

Depending on the model, TEP can be classified as either a single-stage or a multi-stage model. For single-stage TEP, additional lines are planned only for the target planning year, while multi-stage TEP considers several different planning horizons with distinct load and generation patterns together. Multi-stage TEP decides not only where to build the new transmission lines, but also when to build them <cit.>.

The modeling and solution techniques for the traditional TEP problem have been studied extensively. Mathematical programming is a major category of solution methods. At the transmission level, the DC power flow model is capable of providing a good approximation, and linear methods can be applied. In <cit.>, the TEP problem with a DC network model was formulated as a mixed integer linear programming (MILP) problem and solved by a commercial optimization solver. A disjunctive factor was introduced to eliminate the product between continuous and binary variables. Given the non-convex nature of the power system, the exact AC network model for the TEP problem is generally a non-convex mixed integer nonlinear programming (MINLP) problem. This type of model is challenging for existing commercial solvers. Therefore, several relaxed or approximated AC models for the TEP problem have been proposed. In <cit.>, the nonlinear AC power flow equations were linearized around the operating point based on a Taylor series to achieve a linear model for the AC TEP. The quadratic constraints, such as the active and reactive power losses and the MVA limit of the transmission line, were approximated using piecewise linearization. In <cit.>, the lift-and-project <cit.> technique was adopted to lift the TEP problem into a higher-dimensional space and project the relaxed solution onto the original space. In <cit.>, the line-flow-based power flow equations <cit.> were employed to give a convex second-order cone model for the AC TEP. The voltage magnitude was assumed to be equal to one, and the non-convex constraint for the voltage drop across a transmission line was omitted. The AC or relaxed AC TEP models provide a relatively more accurate representation of the network and can include reactive power planning (RPP) in the TEP problem. However, to the best of the authors' knowledge, the AC TEP models have only been applied to small or medium scale systems. Meta-heuristic methods, such as genetic algorithms <cit.>, greedy randomized search <cit.>, particle swarm optimization <cit.> and differential evolution <cit.>, have also been proposed to solve the TEP problem. These techniques have the advantage of easy and straightforward implementation; however, they suffer from susceptibility to local optima and slow computational speed for large practical systems <cit.>.

Major hurdles for the construction of new transmission lines are difficulties in obtaining right-of-way, political resistance, long construction times and limited capital budgets. These challenging issues have drawn interest in techniques for delaying upgrades. In <cit.>, transmission switching (TS) was introduced to defer the construction of new transmission lines.
Benders decomposition was employed to solve the planning and operation problems alternately. In <cit.>, the authors evaluated the economic benefits and increased flexibility of including FACTS devices in the TEP. In <cit.>, a single-stage TEP model considering energy storage systems (ESS) was presented. The total investment cost for the transmission lines can be reduced by appropriately placing the ESS in the system.

This paper presents a MILP model for the multi-stage TEP considering CVSRs, while satisfying N-1 security constraints. Three load blocks are selected to accommodate the load profile of each stage, and the considered transmission contingencies can occur in any of the load blocks. Several benefits are anticipated by introducing the CVSR into TEP: 1) CVSRs improve the utilization of the existing network, which leads to deferment or avoidance of new transmission lines; 2) CVSRs change the power flow pattern and increase the use of lower-cost generation, which reduces the total operating cost; 3) CVSRs add flexibility to the system and provide additional corrective actions following contingencies.

The main contributions of this paper are summarized below:
* A security constrained multi-stage TEP model with consideration of CVSRs is formulated.
* A reformulation technique is proposed to transform the MINLP model into a MILP model, allowing the problem to be solved by mature commercial MILP solvers.
* An iterative approach is developed to decompose the model into a planning master problem and a security check sub-problem so that it is computationally tractable for practical sized systems. This is critical as the model size increases dramatically with the number of stages, load blocks and contingencies.

Due to the heuristic method used in the iterative approach, the solution obtained by the decomposed model is not guaranteed to be globally optimal. However, it provides a high-level picture of how the network can be rationally planned including CVSRs, so it is useful from an engineering point of view. In addition, the decomposition approach makes the originally large-scale MINLP model tractable.

The remainder of the paper is organized as follows. In Section <ref>, the steady-state model of the CVSR in the DC power flow is presented, and the reformulation technique used to transform the originally nonlinear power flow model into a linear model is illustrated. Section <ref> presents detailed information about the optimization model and the iterative approach. Simulation results are given in Section <ref> on the IEEE 24-bus and a more practical Polish system. Conclusions are given in Section <ref>.

§ STEADY STATE MODEL OF CVSR AND THE REFORMULATION TECHNIQUE

§.§ Steady State Model of CVSR in DC Power Flow

Fig. <ref> depicts the usage of a CVSR as a series control device. It can be represented by a continuously variable inductive reactance with the parasitic resistance ignored. With a CVSR inserted on a transmission line and the resistance ignored, the total susceptance of the transmission line becomes:

b_k^' = -1/(x_k + x_k^V) = -(b_k + b_k^V),

where

b_k = 1/x_k, b_k^V = -x_k^V/(x_k(x_k + x_k^V)).

Unlike a TCSC, the CVSR can only provide a positive reactance, and here it is assumed that the CVSR will be installed on a transmission line which is not overly compensated by a series capacitor, i.e., x_k ≥ 0.
Then the range of the variable susceptance b_k^V is:

b_k,V^min = -x_k,V^max/(x_k(x_k + x_k,V^max)),
b_k,V^max = -x_k,V^min/(x_k(x_k + x_k,V^min)).

§.§ Reformulation of the Nonlinear Power Flow Equation

The active power flow on the candidate transmission line assuming a DC power flow model can be expressed as:

P_k = (b_k + δ_k b_k^V)·θ_k,
b_k,V^min ≤ b_k^V ≤ b_k,V^max.

In (<ref>), the binary variable δ_k = 1 indicates that a CVSR is installed on line k. The nonlinearity in (<ref>) results from the trilinear term δ_k b_k^V θ_k. To linearize the nonlinear part, a new variable w_k is introduced as:

w_k = δ_k b_k^V θ_k.

Then the active power flow equation (<ref>) can be given as:

P_k = b_k θ_k + w_k.

We multiply each side of the constraint (<ref>) by the binary variable δ_k and combine with the variable w_k:

δ_k b_k,V^min ≤ w_k/θ_k = δ_k b_k^V ≤ δ_k b_k,V^max.

The sign of θ_k determines the allowable range for w_k:

δ_k θ_k b_k,V^min ≤ w_k ≤ δ_k θ_k b_k,V^max, if θ_k > 0;
w_k = 0, if θ_k = 0;
δ_k θ_k b_k,V^max ≤ w_k ≤ δ_k θ_k b_k,V^min, if θ_k < 0.

To realize the "if" constraints, an additional binary variable y_k and the big-M complementary constraints <cit.> are introduced:

-M_k y_k + δ_k θ_k b_k,V^min ≤ w_k ≤ δ_k θ_k b_k,V^max + M_k y_k,
-M_k(1-y_k) + δ_k θ_k b_k,V^max ≤ w_k ≤ δ_k θ_k b_k,V^min + M_k(1-y_k).

During the optimization process, only one of the two constraints (<ref>) and (<ref>) will become active, and the other one will be a redundant constraint that is always satisfied with a sufficiently large number M_k. Specifically, when θ_k < 0, y_k will be equal to one and constraint (<ref>) will be active; when θ_k > 0, y_k will be equal to zero and constraint (<ref>) will be active; when θ_k = 0, one of these two constraints will drive w_k to zero regardless of the value of y_k. Note that numerical problems occur when M_k is chosen to be too large <cit.>. Since b_k^V is negative, M_k is selected as |b_k,V^min θ_k^max|.

Now in constraints (<ref>) and (<ref>), it can be seen that there is still a nonlinear term, δ_k θ_k, which involves the product of a binary variable and a continuous variable. We introduce another new variable z_k = δ_k θ_k and linearize it using the method in <cit.>:

-δ_k θ_k^max ≤ z_k ≤ δ_k θ_k^max,
θ_k - (1-δ_k)θ_k^max ≤ z_k ≤ θ_k + (1-δ_k)θ_k^max.

We then substitute δ_k θ_k with z_k in the inequalities (<ref>) and (<ref>):

-M_k y_k + z_k b_k,V^min ≤ w_k ≤ z_k b_k,V^max + M_k y_k,
-M_k(1-y_k) + z_k b_k,V^max ≤ w_k ≤ z_k b_k,V^min + M_k(1-y_k).

Thus, the power flow equations (<ref>) and (<ref>) of the MINLP model are transformed into a MILP model with (<ref>) and (<ref>)-(<ref>).

§ OPTIMIZATION MODEL

§.§ N-1 Security Constraints

Power grid security is the primary concern for system operations and planning, and it cannot be compromised. According to the NERC planning standards <cit.>, a rationally planned power system should have the capability of maintaining an N-1 secure network. To include the N-1 contingencies for the transmission lines in the optimization model, a binary parameter N_kc, which represents the status of line k in state c, is introduced <cit.>:

N_kc = 1 if line k is in service in state c, and N_kc = 0 if line k is out of service in state c.

It should be noted that N_k0 is equal to one, since no transmission element is in outage in the base operating condition. The number of states for a complete transmission line N-1 contingency analysis together with the base case is n_l + 1. For most planning problems, a complete set of N-1 contingencies is not needed and just results in excessive computations, as n_l is large in a practical system.
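Before turning to the contingency model, the reformulation above can be made concrete. The sketch below (an illustration in Python with the PuLP toolkit; the susceptance range, angle limit, and fixed angle are assumed values, not data from the paper, and the actual study uses YALMIP with CPLEX) encodes analogues of the z_k- and w_k-linearizations for a single branch and verifies that, with δ_k = 1 and θ_k fixed, w_k ranges exactly over the interval between b_k,V^min θ_k and b_k,V^max θ_k, while δ_k = 0 forces w_k = 0:

```python
import pulp

b_min, b_max = -0.8, -0.2           # assumed CVSR susceptance range (negative)
th_max = 3.141592653589793 / 3      # theta_k^max = pi/3 rad
M = abs(b_min * th_max)             # M_k = |b_k,V^min * theta_k^max|

def extreme_w(theta_fix, delta_fix, sign):
    """Minimize sign*w subject to the big-M linearization of w = delta*b^V*theta."""
    prob = pulp.LpProblem("cvsr_bigM", pulp.LpMinimize)
    th = pulp.LpVariable("theta", -th_max, th_max)
    w = pulp.LpVariable("w", -M, M)
    z = pulp.LpVariable("z", -th_max, th_max)
    dl = pulp.LpVariable("delta", cat="Binary")
    y = pulp.LpVariable("y", cat="Binary")
    prob += sign * w                          # objective: min w or min -w
    prob += z <= dl * th_max                  # z = delta * theta
    prob += z >= -dl * th_max
    prob += z <= th + (1 - dl) * th_max
    prob += z >= th - (1 - dl) * th_max
    prob += w >= b_min * z - M * y            # branch active when theta > 0
    prob += w <= b_max * z + M * y
    prob += w >= b_max * z - M * (1 - y)      # branch active when theta < 0
    prob += w <= b_min * z + M * (1 - y)
    prob += th == theta_fix
    prob += dl == delta_fix
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(w)

# CVSR active, theta = 0.4: w spans [0.4*b_min, 0.4*b_max] = [-0.32, -0.08]
print(extreme_w(0.4, 1, +1), extreme_w(0.4, 1, -1))
# CVSR absent (delta = 0): w is forced to zero for either objective sense
print(extreme_w(0.4, 0, +1), extreme_w(0.4, 0, -1))
```

In the full model, these constraints appear once per branch, state, load block, and stage, and w enters the linear DC flow equation P_k = b_k θ_k + w_k, which is what keeps the overall problem a MILP.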
For the TEP problem, a complete N-1 contingency set is not needed, since the addition of new transmission lines in one area will mainly affect the power flow pattern in the nearby areas. The selection of the contingencies can be based on empirical data or contingency screening algorithms <cit.>.

§.§ Integrated Planning Formulation

Integrated planning indicates that all the planning stages, load blocks and security constraints are included in one planning problem, which is formulated as (<ref>)-(<ref>).

§.§.§ Objective Function

The objective employed in this paper for the TEP problem minimizes the total cost, which includes both the investment and operating cost. Assuming a fixed load demand (price inelastic), minimizing operating cost is equivalent to minimizing generation cost. The objective function is:

min ∑_t∈TPL ∑_k∈Ω_L^+ C_k^L(α_kt - α_k,t-1)/(1+d)^t-1 + ∑_t∈TPL ∑_k∈Ω_V C_k^V(δ_kt - δ_k,t-1)/(1+d)^t-1 + ∑_t∈TPL ∑_b∈Ω_b ∑_n∈𝒢 A_bt C^g_n P^g_n0bt/(1+d)^t-1,

where TPL represents the total planning horizon. The first two terms represent the one-time investment cost for the new transmission lines and the installed CVSRs. The third term is the generation cost across the operating horizon. Three distinct load patterns, representing peak, normal and low load conditions, are selected to accommodate the load profile in each stage. Here the generation cost is just an estimated cost. However, if the detailed load duration curve for each year is given, a relatively more accurate generation cost model can be formulated. All the cost terms are discounted to present value by using the discount factor d. In this paper, d is selected to be 5%.

§.§.§ Constraints

The active power flow through the existing transmission lines is:

P_kcbt^E - b_k θ_kcbt + M_k'(1-N_kcbt) ≥ 0, k ∈ Ω_L\Ω_V,
P_kcbt^E - b_k θ_kcbt - M_k'(1-N_kcbt) ≤ 0, k ∈ Ω_L\Ω_V,
P_kcbt^E - b_k θ_kcbt - w_kcbt + M_k'(1-N_kcbt) ≥ 0, k ∈ Ω_V,
P_kcbt^E - b_k θ_kcbt - w_kcbt - M_k'(1-N_kcbt) ≤ 0, k ∈ Ω_V.

Constraints (<ref>)-(<ref>) hold ∀ c ∈ Ω_c, b ∈ Ω_b, t ∈ Ω_T. Constraints (<ref>) and (<ref>) denote the active power flow on the lines without CVSRs, while constraints (<ref>) and (<ref>) represent the active power flow on the candidate lines to install CVSRs. If the line is in service, i.e., N_kcbt = 1, the line flow equations are enforced. A large disjunctive factor M_k' is introduced to ensure these constraints are not restrictive when the transmission line is out of service. As the phase angle will not fall outside the range [-π/2, π/2] if an appropriate slack bus is selected, M_k' is chosen to be |b_k π|.

The additional constraints introduced by the reformulation technique can be expanded to consider multiple states, load blocks and stages:

-M_k y_kcbt + z_kcbt b_k,V^min ≤ w_kcbt ≤ z_kcbt b_k,V^max + M_k y_kcbt,
-M_k(1-y_kcbt) + z_kcbt b_k,V^max ≤ w_kcbt ≤ z_kcbt b_k,V^min + M_k(1-y_kcbt),
-N_kcbt δ_kt θ_k^max ≤ z_kcbt ≤ N_kcbt δ_kt θ_k^max,
N_kcbt(θ_kcbt - (1-δ_kt)θ_k^max) ≤ z_kcbt ≤ N_kcbt(θ_kcbt + (1-δ_kt)θ_k^max).

Constraints (<ref>)-(<ref>) hold ∀ k ∈ Ω_V, c ∈ Ω_c, b ∈ Ω_b, t ∈ Ω_T. Constraints (<ref>)-(<ref>) guarantee that the line flow change w_kcbt introduced by the CVSR is zero when line k with a CVSR is out of service in state c, load block b and at stage t.
The power flow through the candidate transmission lines is:

P_kcbt^C - b_k θ_kcbt + M_k'(2-N_kcbt-α_kt) ≥ 0,
P_kcbt^C - b_k θ_kcbt - M_k'(2-N_kcbt-α_kt) ≤ 0.

Constraints (<ref>)-(<ref>) hold ∀ k ∈ Ω_L^+, c ∈ Ω_c, b ∈ Ω_b, t ∈ Ω_T. In contrast with the existing transmission lines, a candidate transmission line has two situations where it is not connected: either it is not built, or it has been built but is out of service.

The active power nodal balance at each bus is:

∑_n∈𝒢_i P_ncbt^g - ∑_m∈𝒟_i P_mcbt^d = ∑_k∈Ω_L^i P_kcbt^E + ∑_k∈Ω_L^i P_kcbt^C, i ∈ ℬ, c ∈ Ω_c, b ∈ Ω_b, t ∈ Ω_T.

The system physical limits are represented by:

-N_kcbt S_kcbt^max ≤ P_kcbt^E ≤ N_kcbt S_kcbt^max, k ∈ Ω_L,
-α_kt N_kcbt S_kcbt^max ≤ P_kcbt^C ≤ α_kt N_kcbt S_kcbt^max, k ∈ Ω_L^+,
P_ncbt^g,min ≤ P_ncbt^g ≤ P_ncbt^g,max, n ∈ 𝒢,
P_ncbt^g = P_n0bt^g, n ∈ 𝒢_fix, c ∈ Ω_c\Ω_0, b ∈ Ω_b, t ∈ Ω_T.

Constraints (<ref>)-(<ref>) hold ∀ c ∈ Ω_c, b ∈ Ω_b, t ∈ Ω_T. Constraints (<ref>) and (<ref>) ensure that the power flow is zero if the line is not built or is out of service; otherwise, the power flow on the line is limited by its thermal rating. Constraints (<ref>) and (<ref>) reflect that only a subset of the generators is allowed to re-dispatch after a contingency. The other generators, which do not participate in the rescheduling, are fixed at their base case power output.

The build decisions made in the current stage must be carried over to the later stages:

α_kt ≥ α_k,t-1, k ∈ Ω_L^+, t ∈ Ω_T,
δ_kt ≥ δ_k,t-1, k ∈ Ω_V, t ∈ Ω_T.

Note that α_k0 and δ_k0 are set to zero.

§.§ Decomposition

In the integrated planning model, the constraints have four dimensions: power system element, state, load block and time. Hence, the size of the optimization model grows dramatically with the system size and planning horizon. To reduce the computational burden for a large practical planning problem, the multiple stages are decomposed using forward planning <cit.>, in which the planning for each stage is solved successively while the building decisions from the previous stage are enforced on subsequent stages. Although forward planning may lead to a suboptimal plan, it greatly reduces the computational time with relatively minor degradation of the solution quality. This iterative approach is depicted in Fig. <ref>.

Essentially, the majority of the N-1 security analysis is performed iteratively at the sub-problem level. Checking the N-1 security constraints iteratively is an effective way to reduce the computational burden in the TEP problem <cit.>. The majority of utilities use similar approaches for solving the security constrained TEP <cit.>. The process is as follows:

1) Initialize the stage number N_s = 1.
2) Run the single-stage TEP with CVSR model for the base case considering all the load blocks and several critical contingencies (CC). Obtain the solution and update the system with the new transmission lines and CVSRs.
3) Perform the remaining N-1 security analysis for the expanded system. If there are no violations, go to step 5); otherwise, identify the contingency leading to the worst violations and temporarily remove that line from the system.
4) Run the TEP with CVSR model. The generation dispatch is assumed to be unchanged. The purpose of this step is to find the optimal building plan (lines and CVSRs) to resolve the worst contingency. Replace the contingency line, update the system with the new lines and CVSRs from this solution, and go to step 3).
5) If the last stage is solved, then stop; otherwise, increase the stage number, N_s = N_s + 1, and go to step 2).

Including several critical contingencies in the master problem is motivated by the natural thought that the critical contingencies have large impacts on the TEP results. However, considering more contingencies tends to increase the dimension of the master problem. The computational issues are discussed in Section <ref>. For a practical system, the critical contingencies can be selected based on empirical data. In our test system, we rank the contingencies in terms of circuit loading and the generation cost <cit.>. The two sections below detail the problem formulations of the master problem and sub-problem described above. Note that the constraints in Section <ref> all pertain to a specific state c, load block b and stage t.

§.§.§ Master Problem

The planning master problem obtains the optimal building plan for the base case considering several critical contingencies. The optimization minimizes (<ref>) subject to (<ref>)-(<ref>). Note that the solution from the previous stage is the input for the current stage, i.e., α_k,t-1 and δ_k,t-1 are known before solving stage t.

§.§.§ Sub-problem

After obtaining the solution of the master problem in stage t, the sub-problem performs the N-1 security analysis for the expanded system. Here, P_n0bt^g, α_kt and δ_kt are all input values for the security sub-problem, where P_n0bt^g is the base case generation for each load block. In the iterative process of the sub-problem, new lines and CVSRs will be added to resolve the contingencies, i.e., step 4), so α_kt and δ_kt need to be updated accordingly at each iteration.

The violations in the DC power flow model are only thermal limit violations. For the N-1 security check, we introduce four positive slack variables to represent possible violations on the existing and candidate transmission lines. For each contingency state c, the objective is to minimize the sum of these slack variables:

min ∑_k∈Ω_L (u_k,1^E + u_k,2^E) + ∑_k∈Ω_L^+ (u_k,1^C + u_k,2^C).

Obviously, the contingency with the maximum objective is regarded as the worst contingency. If there is no violation, the objective for all the contingencies must fall within a specified tolerance. The thermal limit constraints are:

-N_k(S_k^max + u_k,1^E) ≤ P_k^E ≤ N_k(S_k^max + u_k,2^E), k ∈ Ω_L,
-α_k N_k(S_k^max + u_k,1^C) ≤ P_k^C ≤ α_k N_k(S_k^max + u_k,2^C), k ∈ Ω_L^+.

Constraints (<ref>) and (<ref>) force the power flow on the lines that are not connected to zero; however, they allow thermal violations on the lines in service. The remaining constraints include (<ref>)-(<ref>) and (<ref>)-(<ref>).

§ CASE STUDIES

The proposed planning model is applied to the IEEE 24-bus system and a more practical Polish 2383-bus system. The data for the IEEE 24-bus and Polish 2383-bus systems are included in the MATPOWER software <cit.>. For all the test systems, each stage is 5 years, and all the selected lines and CVSRs are built at the beginning of each stage. The investment cost for the CVSR is assumed to be $10/kVA <cit.>. Based on the prototype that is going to be installed by the Bonneville Power Administration (BPA), the maximum output reactance of the CVSR is allowed to be 20% of the corresponding line reactance:

0 ≤ x_k^V ≤ 0.2 x_k, k ∈ Ω_V.

§.§ IEEE 24-Bus System

The IEEE 24-bus system has 29 transmission lines, 5 transformers, 32 generators and 21 loads. The thermal limits of all the transmission branches are decreased artificially to introduce congestion.
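As an aside, the computation performed by the security-check sub-problem can be illustrated with a minimal DC power-flow overload screen. The following self-contained NumPy sketch is a toy 3-bus example with assumed susceptances, limits, and injections (the studies below use the full IEEE 24-bus and Polish systems); it solves the DC power flow for the base case and for each single-line outage, and reports the total thermal overload, i.e., the quantity captured by the slack variables u:

```python
import numpy as np

def dc_overloads(lines, P, nbus, slack=0, skip=None):
    """DC power-flow overload screen.
    lines: (from_bus, to_bus, susceptance, thermal limit) tuples.
    P: net active injections per bus (sums to zero).
    Returns per-line overloads max(|P_k| - S_k^max, 0); assumes the
    network remains connected when line `skip` is outaged."""
    A = np.zeros((len(lines), nbus))          # branch-bus incidence
    b = np.zeros(len(lines))
    for k, (f, t, bk, _) in enumerate(lines):
        if k == skip:
            continue                          # outaged line carries no flow
        A[k, f], A[k, t], b[k] = 1.0, -1.0, bk
    Bbus = A.T @ (b[:, None] * A)             # bus susceptance matrix
    keep = [i for i in range(nbus) if i != slack]
    theta = np.zeros(nbus)                    # slack-bus angle fixed at 0
    theta[keep] = np.linalg.solve(Bbus[np.ix_(keep, keep)], P[keep])
    flows = b * (A @ theta)                   # P_k = b_k * (theta_f - theta_t)
    limits = np.array([l[3] for l in lines])
    return np.maximum(np.abs(flows) - limits, 0.0)

# toy 3-bus ring: 1 p.u. generated at bus 0, consumed at bus 2
lines = [(0, 1, 10.0, 0.7), (1, 2, 10.0, 0.7), (0, 2, 10.0, 0.7)]
P = np.array([1.0, 0.0, -1.0])
for c in [None, 0, 1, 2]:
    u = dc_overloads(lines, P, nbus=3, skip=c)
    label = "base case" if c is None else f"outage of line {c}"
    print(f"{label}: total violation = {u.sum():.3f}")
```

Here the outage of the direct line 0-2 forces the full 1 p.u. through the remaining corridor and produces the largest total violation, which is exactly the kind of "worst contingency" that step 3) of the iterative procedure would address first.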
For this test system, we assume only one candidate transmission line per existing line (i.e., excluding transformer upgrades), so the number of candidate transmission lines is 29. In addition, all the existing transmission lines are possible locations to install a CVSR, so the number of candidate locations for CVSRs is also 29. Excluding one contingency (line 7-8) which splits the system into two parts, complete N-1 contingency constraints for the existing branches are considered. Due to the absence of actual system expansion data, the investment cost for building a new transmission line is estimated from its length and the cost per mile. The cost per mile for different voltage levels can be found in <cit.>.

§.§.§ Single Stage Planning

We first consider single-stage planning for this test system. The selected lines and CVSRs are committed at the beginning of the stage, and the operating cost is evaluated over the five years thereafter. The simulation results using the integrated model are summarized in Table <ref>. From Table <ref>, it can be seen that the TEP without CVSRs requires building 3 transmission lines. When the CVSR is introduced in the TEP, only 2 transmission lines are needed for the considered stage. The construction of line 14-16 ($36.47M) is avoided by installing 3 low-cost CVSRs ($13.5M) on lines 11-14, 14-16 and 15-21. Thus, the investment cost decreases from $74.25M to $51.28M. Although the operating cost of the case with CVSRs is $10M higher than that of the case without CVSRs, the total saving for this five-year plan is about $13M. The computation time for the case without CVSRs is 9.25 s, and the time increases to 388.51 s for the case considering CVSRs.

Table <ref> shows the TEP results using the decomposed model. To evaluate the impacts of the decomposition, two cases are simulated:
* Considering one critical contingency (line 18-21) for the peak and normal load levels in the master problem.
* Considering two critical contingencies (lines 18-21 and 15-21) for the peak and normal load levels in the master problem.

The critical contingencies are selected based on the circuit loading at the peak load level <cit.>. As observed from Table <ref>, the investment plans for the TEP without CVSRs are the same for these two cases, and are also identical to the results of the integrated model. Nevertheless, the computational time using the decomposed model is only around 1.2 s. The investment plans for the TEP with CVSRs are different for the two cases. For the case considering one critical contingency, 1 transmission line and 6 CVSRs are added; the total cost is $1234.73M. The case considering two critical contingencies requires building 2 transmission lines and 3 CVSRs, which are the same planning results as those of the integrated model. The computational time for the decomposed model considering two critical contingencies is 34.71 s. This is 11 times faster than the integrated model.

§.§.§ Multi-stage Planning

We then consider two-stage planning for this test system. The load growth is estimated to be 25% in five years, and this growth is distributed equally among the load buses. We first evaluate the impact of the N-1 contingency constraints on the TEP results. Table <ref> summarizes the TEP results with and without CVSRs for the cases that do and do not consider N-1 contingency constraints. The numbers in parentheses indicate the installation years of the new lines and CVSRs. It can be seen that the two cases lead to different network expansion plans.
Without CVSRs, 3 lines are built in the first stage and no line is needed in the second stage for the case that does not consider N-1 security constraints. For the case considering N-1 security constraints, 2 transmission lines are committed in the first stage and 1 line is added in the second stage. Although the total number of installed transmission lines is the same for the two cases, one long transmission line (15-21) that costs $69.41M is needed for the case considering N-1 security constraints. The construction of this line significantly increases the investment cost for the case considering N-1 security constraints. Similar results can also be found in the TEP model with CVSRs. As observed from Table <ref>, for the case considering N-1 security constraints, 2 CVSRs on lines 11-14 and 14-16 are installed in order to avoid the building of line 14-16. The total saving for this ten-year plan is around $16.63M.

Table <ref> shows the two-stage TEP results using the decomposed model. The same two critical contingencies (lines 18-21 and 15-21) are considered for the normal and peak load levels in stage one and for all the load levels in stage two, so the total number of operating states in the master problem is 7 in stage one and 9 in stage two. As observed from Table <ref>, the avoidance of building line 14-16 in stage one is achieved by installing 3 CVSRs on lines 11-14, 14-16 and 15-21. In addition, the construction of line 18-21 in the second stage is avoided by installing 2 CVSRs on lines 18-21 and 21-22. The total saving on the investment is $44.46M. When comparing the planning results of the integrated model with those of the decomposed model, one long and expensive transmission line, 15-21 ($69.41M), is installed in stage one in the integrated model. This result arises since forward planning is myopic and does not see the future benefits of the present reinforcement. Still, the difference in the total cost between the decomposed model and the integrated model is $8.54M for the case considering CVSRs, which is only 0.27% of the planning cost. The computation time of the decomposed model is far less than that of the integrated model. For the case considering CVSRs, the computation is approximately 18 times faster using the decomposed model.

§.§ Polish 2383-Bus System

The approach is also applied to a more practical Polish 2383-bus system. The system has 2895 existing branches, 327 generators and 1822 loads. A single-stage planning model is used for this case study. Only a few transmission corridors have the potential for construction of new lines because of physical or regulatory constraints; it is assumed for this study that the number of candidate lines is 60. In addition, 80 existing transmission lines have been selected as candidate locations to install a CVSR. The selection criterion is the congestion severity of the transmission lines. The line investment cost is estimated by the approach given in Section <ref>. To obtain the contingency list, we first eliminate 643 contingencies that would cause islanding. Then we run an optimal power flow (OPF) for each of the remaining transmission N-1 contingencies and select the worst 100 in terms of the operating cost <cit.>. Note that the short-term line rating is used for each OPF. Moreover, the worst 6 contingencies are considered for the peak and normal load levels in the master problem. Table <ref> shows the TEP planning strategies for the cases with and without CVSRs using the decomposed model.
As observed from Table <ref>, the TEP without CVSRs requires building 13 transmission lines. The total investment cost for this planning strategy is $178.53M. For the TEP with CVSRs, 11 transmission lines and 8 CVSRs are selected. The investment cost increases by $3.58M compared to the case without CVSRs. Nevertheless, the operating cost decreases significantly, by $124.38M, with the inclusion of CVSRs. The saving for this five-year plan is $120.9M, which accounts for 1.13% of the total planning cost. It can also be seen from Table <ref> that the operating cost takes up a large portion of the total cost for this practical large-scale system. The CVSRs are intended to be installed on the appropriate transmission lines to reduce congestion and the operating cost. For the peak load level, the hourly operating cost is $35988 for the case without CVSRs; the cost is reduced to $35383 when CVSRs are introduced. The computational time when considering CVSRs for this practical system is around 1.87 hours, which is acceptable for a planning study <cit.>.

§.§ Computational and Optimality Issues

The computer used for all simulations has an Intel Core(TM) i5-2400M CPU @ 2.30 GHz with 4.00 GB of RAM. The MILP problem is modeled using the YALMIP <cit.> toolbox in MATLAB, with the CPLEX solver <cit.> selected to solve the model. A heuristic method is used for the decomposed model, so the global optimality of the solution is not guaranteed. The impact of the decomposition on the solution can be reduced by including more contingencies in the master problem. This will increase the dimension of the master problem and result in greater computations, so there is a compromise between solution quality and computational time. Table <ref> compares the TEP results for the Polish system considering different numbers of critical contingencies. As can be observed from the table, the TEP considering 6 critical contingencies in the master problem gives better results than the TEP considering 3 critical contingencies. Nevertheless, the computational time is higher for the case considering 6 contingencies. The planner has to balance computational time against solution quality. Note also that each N-1 check sub-problem takes around 1.2 s and the sub-problems are independent. If parallelization techniques are used, the computational time of the sub-problems can be significantly reduced, and the total time will be largely determined by the master problem. In this case, adding more contingencies in the planning model would have little impact on the computational time as long as the size of the master problem is unchanged, i.e., when including the same number of critical contingencies.

§ CONCLUSION

In this paper, the CVSR is investigated for improving transmission expansion planning. The CVSR is a FACTS-like device which has the capability of continuously varying the transmission line impedance. Due to its simple, low-power-rating control circuit, the cost of the CVSR is far less than the cost of a similarly rated FACTS device. Thus, a large number of such devices could be installed and have a large impact on the planning process. A security constrained multi-stage TEP model considering CVSRs is presented. A reformulation technique is proposed to transform the MINLP model into a MILP model so that it can be efficiently solved by commercial solvers. To relieve the computational burden for a practical large-scale system, a decomposition approach is introduced to separate the problem into a planning master problem and a security analysis sub-problem.
Simulation results on two test systems show that if several CVSRs are appropriately allocated in the system, the building of new transmission lines can be postponed or avoided entirely. Moreover, the CVSRs can change the power flow pattern, which is beneficial in reducing the operating cost. Finally, the installation of CVSRs adds flexibility to power system operation and can provide alternative control actions to handle various contingencies.

Xiaohu Zhang (S'12) received the B.S. degree in electrical engineering from Huazhong University of Science and Technology, Wuhan, China, in 2009, the M.S. degree in electrical engineering from the Royal Institute of Technology, Stockholm, Sweden, in 2011, and the Ph.D. degree in electrical engineering from The University of Tennessee, Knoxville, in 2017. Currently, he works as a power system engineer at GEIRI North America, Santa Clara, CA, USA. His research interests are power system operation, planning and stability analysis.

Kevin Tomsovic (F'07) received the B.S. degree in electrical engineering from Michigan Technological University, Houghton, in 1982 and the M.S. and Ph.D. degrees in electrical engineering from the University of Washington, Seattle, in 1984 and 1987, respectively. Currently, he is the CTI Professor of the Department of Electrical Engineering and Computer Science at The University of Tennessee, Knoxville, where he also directs the NSF/DOE ERC CURENT. He was on the faculty of Washington State University from 1992 to 2008. He held the Advanced Technology for Electrical Energy Chair at Kumamoto University, Kumamoto, Japan, from 1999 to 2000 and was an NSF program director in the ECS division from 2004 to 2006.

Aleksandar Dimitrovski (SM'09) is an Associate Professor at the University of Central Florida, Orlando. Before joining UCF, he worked in research institutions, the power industry, and academia in both the USA and Europe. He received his B.Sc. and Ph.D. degrees in electrical engineering with emphasis in power from University Ss. Cyril & Methodius, Macedonia, and his M.Sc. degree in applied computer sciences from the University of Zagreb, Croatia. His area of interest is focused on uncertain power systems and their modeling, analysis, protection and control.
http://arxiv.org/abs/1703.08935v1
{ "authors": [ "Xiaohu Zhang", "Kevin Tomsovic", "Aleksandar Dimitrovski" ], "categories": [ "math.OC" ], "primary_category": "math.OC", "published": "20170327053150", "title": "Security Constrained Multi-Stage Transmission Expansion Planning Considering a Continuously Variable Series Reactor" }
http://arxiv.org/abs/1703.09203v2
{ "authors": [ "Greger Torgrimsson", "Christian Schneider", "Johannes Oertel", "Ralf Schützhold" ], "categories": [ "hep-th", "hep-ph" ], "primary_category": "hep-th", "published": "20170327174513", "title": "Dynamically assisted Sauter-Schwinger effect - non-perturbative versus perturbative aspects" }
Multipair Massive MIMO Relaying Systems with One-Bit ADCs and DACs Chuili Kong, Student Member, IEEE, Amine Mezghani, Member, IEEE, Caijun Zhong, Senior Member, IEEE, A. Lee Swindlehurst, Fellow, IEEE, and Zhaoyang Zhang, Member, IEEE C. Kong, C. Zhong and Z. Zhang are with the Institute of Information and Communication Engineering, Zhejiang University, Hangzhou 310027, China (e-mail: kcl_dut@163.com; caijunzhong@zju.edu.cn; sunrise.heaven@gmail.com). A. Mezghani and A. L. Swindlehurst are with the Center for Pervasive Communications and Computing, University of California, Irvine, CA 92697, USA (e-mail: amine.mezghani@uci.edu; swindle@uci.edu). A. L. Swindlehurst and A. Mezghani were supported by the National Science Foundation under Grant ECCS-1547155. A. L. Swindlehurst was further supported by the Technische Universität München Institute for Advanced Study, funded by the German Excellence Initiative and the European Union Seventh Framework Programme under grant agreement No. 291763, and by the European Union under the Marie Curie COFUND Program. December 30, 2023 ============================================================ This paper considers a multipair amplify-and-forward massive MIMO relaying system with one-bit ADCs and one-bit DACs at the relay. The channel state information is estimated via pilot training, and then utilized by the relay to perform simple maximum-ratio combining/maximum-ratio transmission processing. Leveraging the Bussgang decomposition, an exact achievable rate is derived for the system with correlated quantization noise. Based on this, a closed-form asymptotic approximation for the achievable rate is presented, thereby enabling efficient evaluation of the impact of key parameters on the system performance. Furthermore, power scaling laws are characterized to study the potential energy efficiency associated with deploying massive one-bit antenna arrays at the relay. In addition, a power allocation strategy is designed to compensate for the rate degradation caused by the coarse quantization. Our results suggest that the quality of the channel estimates depends on the specific orthogonal pilot sequences that are used, contrary to unquantized systems where any set of orthogonal pilot sequences gives the same result.
Moreover, the sum rate gap between the double-quantized relay system and an ideal non-quantized system is a moderate factor of 4/π^2 in the low power regime. Index Terms: Massive MIMO, relays, one-bit quantization, power allocation. § INTRODUCTION Multipair multiple-input multiple-output (MIMO) relaying networks have recently attracted considerable attention since they can provide a cost-effective way of achieving performance gains in wireless systems via coverage extension while maintaining a uniform quality of service. In such a system, multiple sources simultaneously exchange information with multiple destinations via a shared multiple-antenna relay in the same time-frequency resource. Hence, multi-user interference is the primary system bottleneck. The deployment of massive antenna arrays at the relay has been proposed to address this issue due to their ability to suppress interference, provide large array and spatial multiplexing gains, and in turn yield large improvements in spectral and energy efficiency <cit.>. There has recently been considerable research interest in multipair massive MIMO relaying systems. For example, <cit.> derived the ergodic rate of the system when maximum ratio combining/maximum ratio transmission (MRC/MRT) beamforming is employed and showed that the energy efficiency gain scales with the number of relay antennas in Rayleigh fading channels. Then, <cit.> extended the analysis to the Ricean fading case and obtained similar power scaling behavior. For full-duplex systems, <cit.> analytically compared the performance of MRC/MRT and zero-forcing reception/transmission and characterized the impact of the number of user pairs on the spectral efficiency. All the aforementioned works are based on the assumption of perfect hardware. However, a large number of antennas at the relay implies a very large deployment cost and significant energy consumption if a separate RF chain is implemented for each antenna in order to maintain full beamforming flexibility. In particular, the fabrication cost, chip area and power consumption of the analog-to-digital converters (ADCs) and the digital-to-analog converters (DACs) grow roughly exponentially with the number of quantization bits <cit.>. The cumulative cost and power required to implement a relay with a very large array can be prohibitive, and thus it is desirable to investigate the use of cheaper and more energy-efficient components, such as low-resolution (e.g., one bit) ADCs and DACs. Fortunately, it has been shown in <cit.> that large arrays exhibit a certain resilience to RF hardware impairments that could be caused by such low-cost components. §.§ Related Work Several recent contributions have investigated the impact of low-resolution ADCs on the massive MIMO uplink <cit.>. For example, <cit.> optimized the training pilot length to maximize the spectral efficiency, while <cit.> revealed that in terms of overall energy efficiency, the optimal level of quantization is 4-5 bits. In <cit.>, the Bussgang decomposition <cit.> was used to reformulate the nonlinear quantization using a second-order statistically equivalent linear operator, and to derive a linear minimum mean-squared error (LMMSE) channel estimator for one-bit ADCs. In <cit.>, a near-optimal low complexity bit allocation scheme was presented for millimeter wave channels exhibiting sparsity. The work of <cit.> examined the impact of one-bit ADCs on wideband channels with frequency-selective fading.
Other work has focused on balancing spectral and energy efficiency, either through the combined use of hybrid architectures with a small number of RF chains and low-resolution ADCs, or using mixed-ADC architectures with both high- and low-resolution converters. In contrast to the uplink case, there are relatively few contributions that consider the massive MIMO downlink with low-resolution DACs. In <cit.>, it was shown that performance approaching the unquantized case can be achieved using DACs with only 3-4 bits of resolution. The nearly optimal quantized Wiener precoder with low-resolution DACs was studied in <cit.>, and the resulting solution was shown to outperform the conventional Wiener precoder with 4-6 bits of resolution at high signal-to-noise ratio (SNR). For the case of one-bit DACs, <cit.> showed that even simple MRT precoding can achieve reasonable results. In <cit.>, an LMMSE precoder was proposed by taking the quantization non-linearities into account, and different precoding schemes were compared in terms of uncoded bit error rate. §.§ Contributions All these prior works are for single-hop systems rather than dual-hop connections via a relay. Recently, <cit.> considered a relay-based system that uses mixed-resolution ADCs at the base station. Unlike <cit.>, we consider a multipair amplify-and-forward (AF) relaying system where the relay uses both one-bit ADCs and one-bit DACs. The one-bit ADCs cause errors in the channel estimation stage and subsequently in the reception of the uplink data; then, after a linear transformation, the one-bit DACs produce distortion when the downlink signal is coarsely quantized. In this paper, we present a detailed performance investigation of the achievable rate of such doubly quantized systems. In particular, the main contributions are summarized as follows: * We investigate a multipair AF relaying system that employs one-bit ADCs and DACs at the relay and uses MRC/MRT beamforming to process the signals. We take the correlation of the quantization noise into account, and present an exact achievable rate by using the arcsine law. Then, we use asymptotic arguments to provide an approximate closed-form expression for the achievable rate. Numerical results demonstrate that the approximate rate is accurate in typical massive MIMO scenarios, even with only a moderate number of users. * We show that the channel estimation accuracy of the quantized system depends on the specific orthogonal pilot matrix that is used, which is in contrast to unquantized systems where any orthogonal pilot sequence yields the same result. We consider the specific case of identity and Hadamard pilot matrices, and we show that the identity training scheme provides better channel estimation performance for users with weaker than average channels, while the Hadamard training sequence is better for users with stronger channels. * We compare the achievable rate of different ADC and DAC configurations, and show that a system with one-bit DACs and perfect ADCs outperforms a system with one-bit ADCs and perfect DACs. Focusing on the low transmit power regime, we show that the sum rate of the relay system with one-bit ADCs and DACs is 4/π^2 times that achievable with perfect ADCs and DACs. Also, it is shown that the transmit power of each source or the relay can be reduced inversely proportional to the number of relay antennas, while maintaining a given quality-of-service. * We formulate a power allocation problem to allocate power to each source and the relay, subject to a sum power budget.
Locally optimum solutions are obtained by solving a sequence of geometric programming (GP) problems. Our numerical results suggest that the power allocation strategy can efficiently compensate for the rate degradation caused by the coarse quantization. §.§ Paper Outline and Notations The remainder of the paper is organized as follows. Section <ref> introduces the multipair AF relaying system model under consideration. Section <ref> presents an approximate closed-form expression for the sum rate, and compares the rate achieved with different ADC and DAC configurations. Section <ref> formulates a power allocation problem to compensate for the rate loss caused by the coarse quantization. Numerical results are provided in Section <ref>. Finally, Section <ref> summarizes the key findings. Notation: We use bold upper case letters to denote matrices, bold lower case letters to denote vectors and lower case letters to denote scalars. The notation (·)^H, (·)^*, (·)^T, and (·)^-1 respectively represent the conjugate transpose operator, the conjugate operator, the transpose operator, and the matrix inverse. The Euclidian norm is denoted by || · ||, the absolute value by | · |, and [ A]_mn represents the (m,n)-th entry of A. Also, x ∼ CN( 0, Σ) denotes a circularly symmetric complex Gaussian random vector with zero mean and covariance matrix Σ, while I_k is the identity matrix of size k. The symbol ⊗ is the Kronecker product, vec( A ) represents a column vector containing the stacked columns of matrix A, diag( B) denotes a diagonal matrix formed by the diagonal elements of matrix B, and Re(C) and Im(C) stand for the real and imaginary parts of C, respectively. Finally, the statistical expectation operator is represented by E{·}, the variance operator is Var(·), and the trace is denoted by tr(·). § SYSTEM MODEL Consider a multipair relaying system with one-bit quantization, as shown in Fig. <ref>. There are K single-antenna user pairs, denoted as S_k and D_k, k = 1,…,K, intending to exchange information with each other with the assistance of a shared relay. The relay is equipped with M receive antennas with one-bit ADCs and M transmit antennas with one-bit DACs. The one-bit ADCs cause errors in the channel estimation stage and subsequently in the reception of the uplink data; then, after a linear transformation, the one-bit DACs produce distortion when the downlink signal is coarsely quantized. Thus, the system we study is double quantized. We assume that direct links between S_k and D_k do not exist due to large obstacles or severe shadowing. In addition, we further assume that the relay operates in half-duplex mode, and hence it cannot receive and transmit signals simultaneously. Accordingly, information transmission from S_k to D_k is completed in two phases. In the first phase, the K sources transmit independent data symbols to the relay, and in the second phase the relay broadcasts the double-quantized signals x̃_R to the destinations. The signals at the relay's receive antennas and at the destinations before quantization are respectively given by y_R = G_SR P_S^1/2 x_S + n_R and y_D = γ G_RD^T x̃_R + n_D, where γ is chosen to satisfy a total power constraint p_R at the relay, i.e., E{||γ x̃_R ||^2 } = p_R, which will be specified shortly. The source symbols are represented by x_S = [ x_S,1,…, x_S,K]^T, whose elements are assumed to be Gaussian distributed with zero mean and unit power. P_S is a diagonal matrix that denotes the transmit power of the sources with [ P_S]_kk = p_S,k.
The vectors n_R and n_D represent additive white Gaussian noise (AWGN) at the relay and destinations, whose elements are both identically and independently distributed (i.i.d.) CN(0,1). Note that to keep the notation clean and without loss of generality, we take the noise variance to be 1 here, and also in the subsequent sections. With this convention, p_S and also the subsequent transmit powers can be interpreted as the normalized SNR. The matrices G_SR = [ g_SR,1,…, g_SR,K] and G_RD = [ g_RD,1,…, g_RD,K] respectively represent the uncorrelated Rayleigh fading channels from the K sources to the relay with g_SR,k ∼ CN(0,β_SR,k I_M) and the channels from the relay to the K destinations with g_RD,k ∼ CN(0,β_RD,k I_M). The terms β_SR,k and β_RD,k model the large-scale path-loss, which is assumed to be constant over many coherence intervals and known a priori. §.§ Channel Estimation We assume training pilots are used to estimate the channel matrices G_SR and G_RD, as in other massive MIMO AF relaying systems <cit.>. Therefore, during each coherence interval of length τ_c (in symbols), all sources simultaneously transmit their mutually orthogonal pilot sequences Φ_S ∈ ℂ^τ_p× K satisfying Φ_S^H Φ_S = τ_p I_K to the relay while the destinations remain silent (τ_p ≥ K). Afterwards, all destinations simultaneously transmit their mutually orthogonal pilot sequences Φ_D ∈ ℂ^τ_p× K satisfying Φ_D^H Φ_D = τ_p I_K to the relay while the sources remain silent. Since the channels G_SR and G_RD are estimated in the same fashion, we focus only on the first link G_SR. The received training signal at the relay is given by Y_p = √(p_p) G_SR Φ_S^T + N_p, where p_p represents the transmit power of each pilot symbol, and N_p denotes the noise at the relay, which has i.i.d. CN(0,1) elements. After vectorizing the matrix Y_p, we obtain y_p = vec( Y_p) = Φ̅_S g̅_SR + n̅_p, where Φ̅_S = Φ_S ⊗ √(p_p) I_M, g̅_SR = vec( G_SR), and n̅_p = vec( N_p). §.§.§ One-bit ADCs After the one-bit ADCs, the quantized signal can be expressed as r_p = Q( y_p), where Q(·) denotes the one-bit quantization operation, which separately processes the real and imaginary parts of the signal. Therefore, the output set of the one-bit ADCs is (1/√(2)){± 1 ± 1j}. Using the Bussgang decomposition <cit.>, r_p can be represented by a linear signal component and an uncorrelated quantization noise q_p: r_p = A_p y_p + q_p, where A_p is the linear operator obtained by minimizing the power of the quantization noise E{|| q_p||^2 }: A_p = R_ y_p r_p^H R_ y_p y_p^-1, where R_ y_p r_p denotes the cross-correlation matrix between the received signal y_p and the quantized signal r_p, and R_ y_p y_p represents the auto-correlation matrix of y_p, which is computed as R_ y_p y_p = Φ̅_S D̃_SR Φ̅_S^H + I_M τ_p, where D̃_SR = ( D_SR ⊗ I_M) and D_SR is a diagonal matrix whose elements are [ D_SR]_kk = β_SR,k for k = 1,…, K. For one-bit quantization, by invoking the results in <cit.> and applying the arcsine law <cit.>, we have R_ y_p r_p = (2/π) R_ y_p y_p diag(R_ y_p y_p)^-1/2 and R_ r_p r_p = (2/π)( arcsin(J) + j arcsin(K) ), where J = diag(R_ y_p y_p)^-1/2 Re(R_ y_p y_p) diag(R_ y_p y_p)^-1/2 and K = diag(R_ y_p y_p)^-1/2 Im(R_ y_p y_p) diag(R_ y_p y_p)^-1/2. Substituting (<ref>) into (<ref>), and after some simple mathematical manipulations, we have A_p = √(2/π) diag(R_ y_p y_p)^-1/2. Since q_p is uncorrelated with y_p, we have R_ q_p q_p = R_ r_p r_p - A_p R_ y_p y_p A_p^H. Substituting (<ref>) into (<ref>) yields R_ q_p q_p = (2/π)( arcsin(J) + j arcsin(K) ) - (2/π)( J + jK).
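To make the Bussgang relations above concrete, the following Python sketch (our own illustration, not part of the original derivation; NumPy is assumed and all function names are ours) implements the one-bit quantizer Q(·), the diagonal Bussgang gain A_p = √(2/π) diag(R_y_p y_p)^-1/2, and the arcsine-law covariance of the quantization noise:

import numpy as np

def one_bit_quantize(y):
    # Q(.): separately quantize real and imaginary parts; outputs lie in {±1 ± 1j}/sqrt(2)
    return (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)

def bussgang_gain(R_yy):
    # A = sqrt(2/pi) * diag(R_yy)^(-1/2), returned as a full diagonal matrix
    return np.diag(np.sqrt(2 / np.pi) / np.sqrt(np.real(np.diag(R_yy))))

def quantization_noise_cov(R_yy):
    # R_qq = R_rr - A R_yy A^H, with R_rr from the arcsine law
    d = np.sqrt(np.real(np.diag(R_yy)))
    D = np.outer(d, d)
    J = np.clip(np.real(R_yy) / D, -1.0, 1.0)   # normalized real part
    K = np.clip(np.imag(R_yy) / D, -1.0, 1.0)   # normalized imaginary part
    R_rr = (2 / np.pi) * (np.arcsin(J) + 1j * np.arcsin(K))
    A = bussgang_gain(R_yy)
    return R_rr - A @ R_yy @ A.conj().T

For a diagonal R_yy this reduces to R_qq = (1 - 2/π) I, which is exactly the approximation used repeatedly in the sequel.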
§.§.§ LMMSE estimator Based on the observation r_p and the training pilots Φ_S, we use the LMMSE technique to estimate G_SR. Hence, the estimated channel ĝ_SR is given by ĝ_SR =R_g̅_SR r_p R_ r_p r_p^-1 r_p.As a result, the covariance matrix of the estimated channel ĝ_SR is expressed as R_ĝ_SRĝ_SR = D̃_SRΦ̃_S^H (Φ̃_SD̃_SRΦ̃_S^H +A_p A_p^H +R_ q_p q_p)^-1Φ̃_SD̃_SR, where Φ̃_S =A_pΦ̅_S. From (<ref>), we can see that R_ĝ_SRĝ_SR is a non-trivial function of Φ̃_S, which indicates that the quality of the channel estimates depends on the specific realization of the pilot sequence, which is contrary to unquantized systems where any set of orthogonal pilot sequences gives the same result.Although our conclusion in Remark 1 is obtained based on the LMMSE estimator, it also holds for the maximum likelihood estimator <cit.>.In the following, we study the performance of two specific pilot sequences to show how the pilot matrix affects the channel estimation. Here, we choose τ_p = K, which is the minimum possible length of the pilot sequence. a) Identity Matrix. In this case, Φ_S = √(K) I_K, and hence we have R_ y_p y_p =K p_pD̃_SR+I_M K.Consequently, A_p =√(2/π)( K p_pD̃_SR+I_M K)^-1/2 = A̅_p⊗ I_MR_ q_p q_p = (1 - 2/π)I_M K,where A̅_p is a diagonal matrix with [A̅_p]_kk = α_p,k= √(2/π1/K p_pβ_SR,k + 1). Substituting (<ref>) and (<ref>) into (<ref>), we obtainR_ĝ_SRĝ_SR =Q_SR^(1)⊗ I_M,where Q_SR^(1) is a diagonal matrix with elements[ Q_SR^(1)]_kk = σ_SR,k^2 = 2/πK p_pβ_SR,k^2/K p_pβ_SR,k + 1.b) Hadamard Matrix. In this case, every element of Φ_S is +1 or -1, and hence we haveA_p = √(2/π1/p_p∑_n=1^K β_SR,k + 1) I_M KR_ q_p q_p ≈(1 - 2/π)I_M K,where the approximation in (<ref>) holds for low p_p. Substituting (<ref>) and (<ref>) into (<ref>), we obtainR_ĝ_SRĝ_SR =Q_SR^(2)⊗ I_M,where Q_SR^(2) is a diagonal matrix with entries[ Q_SR^(2)]_kk = κ_SR,k^2 = K α̅_p^2 β_SR,k^2 p_p/K α̅_p^2 β_SR,k p_p + α̅_p^2 + 1 - 2/π, where α̅_p = √(2/π1/p_p∑_k=1^Kβ_SR,k + 1). For both cases, the channels from the sources to the relay g_SR,k can be decomposed asg_SR,k = ĝ_SR,k +e_SR,k,where e_SR,k is the estimation error vector. The elements of ĝ_SR,k and e_SR,k are respectively distributed as CN(0, σ_SR,k^2 ) and CN(0, σ̃_SR,k^2 ) when Φ_SR is an identity matrix, while they are distributed as CN(0, κ_SR,k^2 ) and CN(0, κ̃_SR,k^2) when Φ_SR is a Hadamard matrix, where σ̃_SR,k^2 = β_SR,k - σ_SR,k^2 and κ̃_SR,k^2 = β_SR,k - κ_SR,k^2. In what follows we define Ĝ_SR = [ĝ_SR,1,…,ĝ_SR,K] and E_SR = [ e_SR,1,…, e_SR,K].Similarly, the channels from the relay to the destinations g_RD,k can be decomposed asg_RD,k = ĝ_RD,k +e_RD,k,where ĝ_RD,k and e_RD,k are the estimated channel and estimation error vectors. The elements of ĝ_RD,k and e_RD,k are distributed as CN(0, σ_RD,k^2 ) and CN(0, σ̃_RD,k^2 ) when Φ_RD is an identity matrix, while they are CN(0, κ_RD,k^2 ) and CN(0, κ̃_RD,k^2 ) when Φ_RD is a Hadamard matrix, whereσ_RD,k^2=2/πK p_pβ_RD,k^2/K p_pβ_RD,k + 1 κ_RD,k^2= K α̂_p^2 β_RD,k^2 p_p/K α̂_p^2 β_RD,k p_p + α̂_p^2 + 1 - 2/π,with α̂_p = √(2/π1/p_p∑_k=1^Kβ_RD,k + 1), and σ̃_RD,k^2 = β_RD,k - σ_RD,k^2, κ̃_RD,k^2 = β_RD,k - κ_RD,k^2. We also define Ĝ_RD = [ĝ_RD,1,…,ĝ_RD,K] and E_RD = [ e_RD,1,…, e_RD,K].For the channel from the k-th source to the relay, the mean-square error (MSE) is given byMSE_SR,k =E{ ||ĝ_SR,k -g_SR,k||^2 }.Based on the above results, we have MSE_SR,k = σ̃_SR,k^2 for the identity matrix and MSE_SR,k = κ̃_SR,k^2 for the Hadamard matrix. 
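As a quick numerical check of the two MSE expressions above, the following sketch (ours; the pilot power value is an arbitrary placeholder, and the Hadamard expression uses the low-p_p approximation) tabulates MSE_SR,k for both training schemes:

import numpy as np

def mse_identity(beta_k, p_p, K):
    # identity pilots: sigma^2 = (2/pi) K p_p beta^2 / (K p_p beta + 1); MSE = beta - sigma^2
    sigma2 = (2 / np.pi) * K * p_p * beta_k**2 / (K * p_p * beta_k + 1)
    return beta_k - sigma2

def mse_hadamard(beta_k, p_p, beta_all):
    # Hadamard pilots with alpha_bar^2 = (2/pi) / (p_p * sum(beta) + 1)
    K = len(beta_all)
    a2 = (2 / np.pi) / (p_p * np.sum(beta_all) + 1)
    kappa2 = K * a2 * beta_k**2 * p_p / (K * a2 * beta_k * p_p + a2 + 1 - 2 / np.pi)
    return beta_k - kappa2

beta = np.array([0.6, 0.3, 0.1, 0.9])   # the coefficients also used in the numerical section
p_p = 0.5                               # placeholder pilot power
for b in beta:
    print(f"beta={b:.2f}  MSE(identity)={mse_identity(b, p_p, len(beta)):.4f}  "
          f"MSE(Hadamard)={mse_hadamard(b, p_p, beta):.4f}")

Users with β_SR,k below the mean (here 0.475) should come out better under identity pilots, which is exactly the criterion formalized in the proposition that follows.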
The following proposition compares the MSE of the two approaches. For estimating the channel g_SR,k, the identity matrix is preferable to the Hadamard matrix for user k if β_SR,k < 1/K∑_i=1^K β_SR,i, and vice versa. The proof is trivial since σ̃_SR,k^2 < κ̃_SR,k^2 if β_SR,k < 1/K∑_i=1^K β_SR,i. Proposition <ref> reveals that the accuracy of the individual channel estimates depends on the particular choice of the orthogonal training scheme, contrary to the ideal case without quantization. More precisely, the scaled identity matrix is beneficial for any user with higher path loss than the average. This is because a weak user benefits from being the only one transmitting at a given time, without the presence of stronger users that dominate the behavior of the ADC. In the case of the Hadamard matrix, all users transmit simultaneously, resulting in an average quantization noise level for all users jointly, which is advantageous for users with stronger channels. The question of optimizing the pilot sequence for a given performance metric is an interesting one, but is beyond the scope of the paper. For simplicity, we will assume the identity matrix approach in which each user's channel is estimated one at a time. §.§ Data Transmission §.§.§ Quantization with One-bit ADCs With one-bit ADCs at the receiver, the resulting quantized signals can be expressed as ỹ_R = Q(y_R) = A_a y_R + q_a, where A_a is the linear operator and q_a is the quantization noise, which is uncorrelated with y_R. By adopting the same technique as in the previous subsection, we have A_a = √(2/π) diag(R_ y_R y_R)^-1/2 and R_ q_a q_a = (2/π)( arcsin(X) + j arcsin(Y) ) - (2/π)( X + jY), where X = diag(R_ y_R y_R)^-1/2 Re(R_ y_R y_R) diag(R_ y_R y_R)^-1/2, Y = diag(R_ y_R y_R)^-1/2 Im(R_ y_R y_R) diag(R_ y_R y_R)^-1/2, and R_ y_R y_R = G_SR P_S G_SR^H + I_M. §.§.§ Digital Linear Processing We assume that the relay adopts an AF protocol to process the one-bit quantized signals ỹ_R, yielding x_R = W ỹ_R, where W = Ĝ_RD^* Ĝ_SR^H for MRC/MRT beamforming. §.§.§ Quantization with One-bit DACs Assuming one-bit DACs at the transmitter, the resulting quantized signals to be sent by the relay's transmit antennas can be expressed as x̃_R = Q( x_R) = A_d x_R + q_d, where A_d is the linear operator, and q_d is the quantization noise at the relay's transmit antennas, which is uncorrelated with x_R. Due to the one-bit DACs, we have E{||x̃_R ||^2 } = M. Therefore, the normalization factor γ (c.f. (<ref>)) can be expressed as γ = √(p_R/M). Following the same steps as in the ADC derivations, we obtain A_d = √(2/π) diag(R_ x_R x_R)^-1/2 and R_ q_d q_d = (2/π)( arcsin(U) + j arcsin(V) ) - (2/π)( U + jV), where U = diag(R_ x_R x_R)^-1/2 Re(R_ x_R x_R) diag(R_ x_R x_R)^-1/2, V = diag(R_ x_R x_R)^-1/2 Im(R_ x_R x_R) diag(R_ x_R x_R)^-1/2, R_ x_R x_R = W R_ỹ_Rỹ_R W^H, and R_ỹ_Rỹ_R = A_a R_ y_R y_R A_a^H + R_ q_a q_a.
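Putting the pieces of this section together, a minimal end-to-end simulation of the double-quantized relay chain might look as follows (our own sketch; perfect CSI is assumed in W for brevity, and all parameter values are placeholders):

import numpy as np
rng = np.random.default_rng(0)

M, K, p_S, p_R = 128, 10, 1.0, 1.0
cn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
q1 = lambda z: (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)   # one-bit Q(.)

G_SR, G_RD = cn(M, K), cn(M, K)                # i.i.d. CN(0,1) channels (beta = 1)
x_S = cn(K)                                    # unit-power source symbols
y_R = np.sqrt(p_S) * G_SR @ x_S + cn(M)        # relay receive signal
y_R_q = q1(y_R)                                # one-bit ADCs
W = np.conj(G_RD) @ G_SR.conj().T              # MRC/MRT matrix
x_R_q = q1(W @ y_R_q)                          # one-bit DACs, so ||x̃_R||^2 = M
gamma = np.sqrt(p_R / M)                       # power normalization
y_D = gamma * G_RD.T @ x_R_q + cn(K)           # signals at the K destinations

Averaging such realizations over many channel and noise draws is, in principle, how the "exact numerical results" curves discussed later can be reproduced.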
Combining (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>) yields the received signal at the k-th destination y_D,k = γ√(p_S,k) E{ g_RD,k^T A_d W A_a g_SR,k} x_S,k_desired signal + ñ_D,k_effective noise, where ñ_D,k = γ√(p_S,k)( g_RD,k^T A_d W A_a g_SR,k - E{ g_RD,k^T A_d W A_a g_SR,k}) x_S,k_estimation error + γ∑_i ≠ k√(p_S,i) g_RD,k^T A_d W A_a g_SR,i x_S,i_interpair interference + γ g_RD,k^T A_d W A_a n_R_noise at the relay + γ g_RD,k^T A_d W q_a_quantization noise of ADCs + γ g_RD,k^T q_d_quantization noise of DACs + n_D,k_noise at k-th destination, where n_D,k is the k-th element of the noise vector n_D. Noticing that the “desired signal” and the “effective noise” in (<ref>) are uncorrelated, and capitalizing on the fact that the worst-case uncorrelated additive noise is independent Gaussian, we obtain the following achievable rate for the k-th destination: R_k = ((τ_c - 2 τ_p)/(2 τ_c)) log_2(1 + A_k/(B_k + C_k + D_k + E_k + F_k + 1/γ^2)), where A_k = p_S,k | E{ g_RD,k^T A_d W A_a g_SR,k} |^2, B_k = p_S,k Var( g_RD,k^T A_d W A_a g_SR,k), C_k = ∑_i≠ k p_S,i E{ | g_RD,k^T A_d W A_a g_SR,i|^2 }, D_k = E{ || g_RD,k^T A_d W A_a ||^2}, E_k = E{| g_RD,k^T A_d W R_ q_a q_a W^H A_d^H g_RD,k^* |}, and F_k = E{| g_RD,k^T R_ q_d q_d g_RD,k^* |}. §.§ Asymptotic Simplifications As we can see, the matrices R_ q_a q_a, A_d, and R_ q_d q_d all involve arcsine functions, which does not give much insight into how the rate changes with various parameters. To facilitate the analysis, we focus on the asymptotic regime with a large number of users, in which (<ref>) can be approximated by R_ y_R y_R ≈ diag(R_ y_R y_R) ≈ (1 + ∑_k=1^K p_S,k β_SR,k) I_M. Substituting (<ref>) into (<ref>) and (<ref>), we have A_a ≈ √(2/π)√(1/(1 + ∑_k=1^K p_S,k β_SR,k)) I_M = α_a I_M and R_ q_a q_a ≈ (1 - 2/π) I_M. Similarly, asymptotically we have R_ x_R x_R ≈ diag( R_ x_R x_R) ≈ α̂_d I_M, where α̂_d = M (α_a^2 + 1 - 2/π) ∑_k=1^K σ_SR,k^2 σ_RD,k^2 + M α_a^2 ∑_k=1^K σ_SR,k^2 σ_RD,k^2 (M p_S,k σ_SR,k^2 + ∑_i=1^K p_S,i β_SR,i). Note that the derivation of this approximation of R_ x_R x_R can be found in Appendix <ref>. As a result, the matrices A_d and R_ q_d q_d can be approximated by A_d ≈ √(2/(πα̂_d)) I_M = α_d I_M and R_ q_d q_d ≈ (1 - 2/π) I_M. §.§ Approximate Rate Analysis In this section, we derive a simpler closed-form approximation for the achievable rate. Substituting (<ref>), (<ref>), (<ref>), and (<ref>) into (<ref>), the exact achievable rate R_k can be approximated by R̃_k = ((τ_c - 2 τ_p)/(2τ_c)) log_2(1 + Ã_k/(B̃_k + C̃_k + D̃_k + Ẽ_k + F̃_k + G̃_k)), where Ã_k = p_S,k | E{ g_RD,k^T W g_SR,k} |^2, B̃_k = p_S,k Var( g_RD,k^T W g_SR,k), C̃_k = ∑_i≠ k p_S,i E{ | g_RD,k^T W g_SR,i|^2 }, D̃_k = E{ || g_RD,k^T W ||^2}, Ẽ_k = (1 - 2/π)(1/α_a^2) E{ || g_RD,k^T W ||^2}, F̃_k = (1 - 2/π)(1/(α_a^2 α_d^2)) E{|| g_RD,k||^2}, and G̃_k = 1/(γ^2 α_a^2 α_d^2).
With this expression, we can compute R̃_k using random matrix theory and present a closed-form approximate rate for the k-th destination, as formalized in the following theorem. With one-bit ADCs and DACs at the relay, the approximate achievable rate of the k-th destination is given by (<ref>), where Ã_k = p_S,k M^4 σ_SR,k^4 σ_RD,k^4, B̃_k = p_S,k M^2 ( M σ_SR,k^4 σ_RD,k^2 β_RD,k + β_SR,k t_k ), C̃_k = M^2 ∑_i≠ k p_S,i( M σ_SR,i^4 σ_RD,i^2 β_RD,k + β_SR,i t_k ), D̃_k = M^2 t_k, Ẽ_k = (π/2 - 1)(1 + ∑_k=1^K p_S,k β_SR,k) M^2 t_k, F̃_k = β_RD,k(π/2 - 1)M^3 ∑_k=1^K p_S,k σ_SR,k^4 σ_RD,k^2 + β_RD,k M^2 (π/2)(π/2 - 1) (1 + ∑_k=1^K p_S,k β_SR,k) ∑_k=1^K σ_SR,k^2 σ_RD,k^2, and G̃_k = (M^3 π/(2 p_R))∑_k=1^K p_S,k σ_SR,k^4 σ_RD,k^2 + (M^2 π^2 /(4 p_R))(1 + ∑_k=1^K p_S,k β_SR,k) ∑_k=1^K σ_SR,k^2 σ_RD,k^2, with t_k = M σ_RD,k^4 σ_SR,k^2 + β_RD,k∑_n=1^K σ_SR,n^2 σ_RD,n^2. See Appendix <ref>. From Theorem <ref>, we can more readily see the impact of key parameters on the achievable rate. For instance, R̃_k decreases with the number of user pairs K. This is expected since a higher number of users increases the amount of inter-user interference. In addition, R̃_k is an increasing function of M, which reveals that increasing the number of relay antennas always boosts the system performance. As p_S,k approaches infinity, R̃_k converges to a constant that is independent of p_S,k. In this case, the system becomes interference-limited. To quantify the impact of the double quantization on system performance, in the following corollaries we compare the achievable rate with several different ADC and DAC configurations. With perfect ADCs and one-bit DACs, the achievable rate of the k-th destination can be expressed as (<ref>) (shown at the top of the next page), where α̃_d = M ∑_k=1^K σ̂_SR,k^2 σ̂_RD,k^2 + M∑_k=1^K σ̂_SR,k^2 σ̂_RD,k^2 (M p_S,k σ̂_SR,k^2 + ∑_i=1^K p_S,i β_SR,i), with σ̂_SR,k^2 = Kβ_SR,k^2 p_p/(Kβ_SR,k p_p + 1) and σ̂_RD,k^2 = Kβ_RD,k^2 p_p/(Kβ_RD,k p_p + 1); Â_k, B̂_k, Ĉ_k, D̂_k can be obtained by replacing σ_SR,k^2 and σ_RD,k^2 with σ̂_SR,k^2 and σ̂_RD,k^2 in Ã_k, B̃_k, C̃_k, D̃_k, respectively. With perfect DACs and one-bit ADCs, the achievable rate of the k-th destination can be expressed as R_k^pD = ((τ_c - 2 τ_p)/(2τ_c)) log_2(1 + Ã_k/(B̃_k + C̃_k + D̃_k + Ẽ_k + (2/π)G̃_k)). With perfect ADCs and DACs, the achievable rate of the k-th destination can be expressed as R_k^p = ((τ_c - 2 τ_p)/(2τ_c)) log_2(1 + Â_k/(B̂_k + Ĉ_k + D̂_k + α̃_d/γ^2)). Corollaries <ref>-<ref> together with Theorem <ref> provide four cases with different ADC/DAC configurations at the relay: 1) Case I: perfect ADCs and DACs; 2) Case II: perfect ADCs and one-bit DACs; 3) Case III: one-bit ADCs and perfect DACs; 4) Case IV: one-bit ADCs and DACs. The relative performance of these four configurations is described below. As the number of relay antennas becomes very large, we have R_k^p > R_k^pA > R_k^pD > R̃_k. See Appendix <ref>. Proposition <ref> indicates that the rate of the system with perfect ADCs and one-bit DACs is higher than that of the system with one-bit ADCs and perfect DACs. This is because one-bit ADCs cause both channel estimation errors and rate degradation, while one-bit DACs only lead to a rate reduction. For what follows, we define the three rate ratios [δ_1, δ_2, δ_3] = [R_k^pA/R_k^p, R_k^pD/R_k^p, R̃_k/R_k^p]. We will compare these ratios for low SNR situations where massive MIMO systems are likely to operate.
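As a sanity check on the theorem above, the closed-form terms can be evaluated directly. The sketch below (our own; all inputs are placeholder arrays, and the example parameters at the end are illustrative, not the paper's) returns R̃_k for every destination:

import numpy as np

def rate_closed_form(M, pS, beta_SR, beta_RD, s2_SR, s2_RD, p_R, tau_c, tau_p):
    # pS, beta_*, s2_* are length-K arrays; s2_* are the estimator variances sigma^2_{SR,k}, sigma^2_{RD,k}
    S1 = np.sum(s2_SR * s2_RD)                      # sum_n sigma^2_SR,n sigma^2_RD,n
    S2 = np.sum(pS * s2_SR**2 * s2_RD)              # sum_n p_S,n sigma^4_SR,n sigma^2_RD,n
    P = 1 + np.sum(pS * beta_SR)
    t = M * s2_RD**2 * s2_SR + beta_RD * S1
    A = pS * M**4 * s2_SR**2 * s2_RD**2
    B = pS * M**2 * (M * s2_SR**2 * s2_RD * beta_RD + beta_SR * t)
    C = M**2 * (M * beta_RD * (S2 - pS * s2_SR**2 * s2_RD)      # interference, sum over i != k
                + t * (np.sum(pS * beta_SR) - pS * beta_SR))
    D = M**2 * t
    E = (np.pi / 2 - 1) * P * M**2 * t
    F = (beta_RD * (np.pi / 2 - 1) * M**3 * S2
         + beta_RD * M**2 * (np.pi / 2) * (np.pi / 2 - 1) * P * S1)
    G = (M**3 * np.pi / (2 * p_R)) * S2 + (M**2 * np.pi**2 / (4 * p_R)) * P * S1
    return (tau_c - 2 * tau_p) / (2 * tau_c) * np.log2(1 + A / (B + C + D + E + F + G))

# Illustrative use: K = 10 users, beta = 1, sigma^2 from the identity-pilot estimator
K, M, p_p = 10, 128, 1.0
beta = np.ones(K)
s2 = (2 / np.pi) * K * p_p * beta**2 / (K * p_p * beta + 1)
print(rate_closed_form(M, np.ones(K), beta, beta, s2, s2, p_R=1.0, tau_c=200, tau_p=K))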
For the power scaling analysis, we consider two cases: a) the transmit power of each source scales as p_S = E_S/M (where we define p_S = p_S,k for k = 1,…,K) with fixed E_S, while p_p and p_R are fixed. This case focuses on the potential power savings of the sources; b) the transmit power of the relay scales as p_R = E_R/M with fixed E_R, while p_S and p_p are fixed. This case focuses on the potential power savings of the relay. With p_S = E_S/M, and E_S, p_p, p_R fixed, we have R̃_k → ((τ_c - 2 τ_p)/(2τ_c)) log_2(1 + (2/π)E_S σ_SR,k^2 ) and R_k^p → ((τ_c - 2 τ_p)/(2τ_c)) log_2(1 + E_S σ̂_SR,k^2), as M →∞. In addition, if E_S→ 0, the rate ratios are given by [δ_1, δ_2, δ_3] = [1, 4/π^2, 4/π^2 ]. With p_R = E_R/M, and E_R, p_p, p_S fixed, we have R̃_k → ((τ_c - 2 τ_p)/(2τ_c)) log_2(1 + (2 E_R/π) σ_SR,k^4 σ_RD,k^4/∑_k=1^K σ_SR,k^4 σ_RD,k^2) and R_k^p → ((τ_c - 2 τ_p)/(2τ_c)) log_2(1 + E_R σ̂_SR,k^4 σ̂_RD,k^4/∑_k=1^K σ̂_SR,k^4 σ̂_RD,k^2), as M →∞. In addition, if E_R→ 0, the rate ratios are given by [δ_1, δ_2, δ_3] = [2/π, 2/π, 4/π^2 ]. From Propositions <ref> and <ref>, we can see that the system with one-bit ADCs and DACs has the same power scaling laws as the perfect hardware case, which is an encouraging result. In addition, for both Propositions <ref> and <ref>, δ_3 = 4/π^2, revealing that the rate of the double-quantized system is 4/π^2 times that of the perfect ADC/DAC case, for low transmit power at the sources or low transmit power at the relay. This result is the same as for a system that is quantized only once <cit.>. Interestingly, focusing on the values of δ_1 and δ_2, we observe that the way the final factor of 4/π^2 arises is quite different in the two cases. For the low p_S case, the factor 4/π^2 results solely from δ_2 = 4/π^2, implying that the rate loss is caused only by the one-bit ADCs. In contrast, for the low p_R case, the factor 4/π^2 is generated by δ_1 = 2/π and δ_2 = 2/π, indicating that the rate degradation comes from both the one-bit ADCs and the one-bit DACs. § POWER ALLOCATION In this section, we formulate a power allocation problem maximizing the sum rate of the system for a given total power budget P_T, i.e., ∑_k=1^K p_S,k + p_R ≤ P_T. §.§ Problem Formulation Defining p_S = [p_S,1, …, p_S,K]^T, the problem is expressed as P_1: maximize_{p_S, p_r} ((τ_c - 2 τ_p)/(2 τ_c)) ∑_k = 1^K log_2( 1 + γ_k ) subject to γ_k = p_S,k/ξ_k, k = 1, …, K, ∑_i = 1^K p_S,i + p_r ≤ P_T, p_S ≥ 0, p_r ≥ 0, where ξ_k = ∑_i=1^K p_S,i a_k,i + p_R^-1( ∑_i=1^K b_k,i p_S,i + c_k ) + d_k, c_k = (π^2/(4 M^2 σ_SR,k^4 σ_RD,k^4)) ∑_n=1^K σ_SR,n^2 σ_RD,n^2, d_k = π/(2 M σ_SR,k^2) + (π^2 β_RD,k/(4 M^2 σ_SR,k^4 σ_RD,k^4)) ∑_n = 1^K σ_SR,n^2 σ_RD,n^2, and a_k,i and b_k,i are respectively given by (<ref>) and (<ref>), shown at the top of the next page. Since log(·) is an increasing function, problem P_1 can be reformulated as P_2: minimize_{p_S, p_r} ∏_k = 1^K( 1 + γ_k )^-1 subject to γ_k ≤ p_S,k/ξ_k, k = 1, …, K, ∑_i = 1^K p_S,i + p_r ≤ P_T, p_S ≥ 0, p_r ≥ 0, which can be identified as a complementary geometric program (CGP) <cit.>. Note that the equality constraints (<ref>) of P_1 have been replaced with inequality constraints (<ref>). Since the objective function of P_2 decreases with γ_k, the inequality constraints (<ref>) must be active at any optimal solution of P_2, which means that problem P_2 is equivalent to P_1. §.§ Successive Approximation Algorithm CGP problems are in general nonconvex. Fortunately, we can approximate the CGP by solving a sequence of GP problems. Then, each GP can be solved very efficiently with standard convex optimization tools such as CVX.
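To make this sequence-of-GPs idea concrete, the following sketch (our own illustration; CVXPY's geometric-programming mode is assumed, the coefficients a, b, c, d are random placeholders rather than the values from (<ref>)-(<ref>), and the exponents μ_k it uses are the monomial parameters derived immediately after this sketch) solves the successive approximation:

import numpy as np
import cvxpy as cp

K, P_T, theta = 4, 10.0, 1.1
rng = np.random.default_rng(1)
a = rng.uniform(0.01, 0.1, (K, K))    # placeholder a_{k,i}
b = rng.uniform(0.01, 0.1, (K, K))    # placeholder b_{k,i}
c = rng.uniform(0.1, 1.0, K)          # placeholder c_k
d = rng.uniform(0.1, 1.0, K)          # placeholder d_k

pS = cp.Variable(K, pos=True)
pr = cp.Variable(pos=True)
gam = cp.Variable(K, pos=True)
gam_hat = 0.5 * np.ones(K)            # in practice, initialize from a feasible point

for _ in range(20):                   # successive GP iterations
    mu = gam_hat / (1 + gam_hat)      # monomial exponents; omega_k dropped (constants)
    obj = cp.Minimize(cp.prod(cp.hstack([gam[k] ** float(-mu[k]) for k in range(K)])))
    cons = [cp.sum(pS) + pr <= P_T]
    for k in range(K):
        xi_k = a[k] @ pS + (b[k] @ pS + c[k]) * pr ** -1 + d[k]   # posynomial xi_k
        cons += [gam[k] * xi_k * pS[k] ** -1 <= 1]                # gamma_k <= p_S,k / xi_k
    cons += [gam <= theta * gam_hat, gam >= gam_hat / theta]      # trust region
    cp.Problem(obj, cons).solve(gp=True)
    gam_hat = gam.value               # re-center the approximation and repeat

A full implementation would also monitor the objective for convergence; the fixed iteration count here only keeps the sketch short.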
The key idea is to use a monomial function ω_k γ_k^μ_k to approximate 1 + γ_k near an arbitrary point γ̂_k > 0. To make the approximation accurate, we need to ensure that 1 + γ̂_k = ω_k γ̂_k^μ_k and μ_k ω_k γ̂_k^(μ_k - 1) = 1. These conditions hold if the parameters ω_k and μ_k are chosen as ω_k = γ̂_k^-μ_k(1 + γ̂_k) and μ_k = γ̂_k/(1+γ̂_k). At each iteration, the GP is obtained by replacing the posynomial objective function with its best local monomial approximation near the solution obtained at the previous iteration. The following algorithm shows the steps for solving P_2. We have neglected ω_k in the objective function of P_3 since they are constants and do not affect the problem solution. Also, trust region constraints are added, i.e., θ^-1 γ̂_k ≤ γ_k ≤ θ γ̂_k, which limit how much the variables are allowed to differ from the current guess γ̂_k. The parameter θ > 1 controls the desired accuracy. More precisely, when θ is close to 1, it provides good accuracy for the monomial approximation but slower convergence speed, and vice versa when θ is large. As discussed in <cit.>, θ = 1.1 offers a good tradeoff between accuracy and convergence speed. § NUMERICAL RESULTS In this section, we present numerical results to validate the previous analytical results and demonstrate the benefits of the power allocation algorithm. §.§ Impact of the input pilot matrix In this section, we evaluate the channel estimation accuracy of the identity and Hadamard pilot matrices. We choose K = 4, and the large scale fading coefficients β_SR = [0.6, 0.3, 0.1, 0.9]. Fig. <ref> illustrates the MSE of each channel from the sources to the relay versus the transmit power of each pilot symbol. For β_SR,k = {0.1, 0.3}, which are less than the average large scale fading value of 0.475, the identity pilot matrix outperforms the Hadamard matrix, in agreement with Proposition <ref>. In addition, observing the curves associated with the Hadamard matrix, we can see that the approximate results nearly overlap with the exact results in the low p_p regime, indicating the validity of our theoretical analysis. However, as p_p increases, the gap between the approximate and exact results grows. §.§ Validation of analytical results In this section, we validate the theoretical derivations. For simplicity, we set the large-scale fading coefficients as β_SR,k = β_RD,k = 1 and adopt an equal power allocation strategy, i.e., p_S,k = p_S. Fig. <ref> shows the sum rate versus the number of user pairs K. The curves associated with “Exact numerical results” and “Approximate numerical results” are respectively generated by Monte-Carlo simulations according to (<ref>) and (<ref>) by averaging over 10^3 independent channel realizations, and the “Theoretical results” curves are obtained based on Theorem <ref>. As can be seen, there exists a gap between the “Exact numerical results” (where the matrices R_ q_a q_a and R_ q_d q_d are not diagonal, which means that the quantization noise is correlated) and the “Approximate numerical results” (where the matrices R_ q_a q_a and R_ q_d q_d are approximated by identity matrices) when the number of user pairs is small, while the gap narrows and finally disappears as K becomes large. The reason is that the correlation effect is stronger for smaller K and weaker for larger K. In this example, our approximate model is very accurate when the number of user pairs is greater than 15, which is a reasonable number for this size of array.
In addition, we observe that the “Approximate numerical results” curve overlaps with that of the “Theoretical results”, which verifies our analytical derivations in Theorem <ref>. Fig. <ref> shows the sum rate versus the number of relay antennas. From Fig. <ref>, we can see that when K = 10, the gap between the exact and approximate numerical results increases with the number of relay antennas. This suggests that for large antenna arrays, the correlation of the quantization noise becomes important and cannot be neglected. However, we are interested in the typical massive MIMO setup where the ratio between the number of relay antennas and the number of users is on the order of M/K = 10, and thus we plot Fig. <ref>. In this figure, we can see the gap slightly narrows (from 0.2791 bit/s/Hz at M = 80 to 0.2505 bit/s/Hz at M = 200) as the number of relay antennas increases, which indicates that our approximate model is accurate for massive MIMO scenarios. Fig. <ref> shows the transmit power p_S of each source required to maintain a given sum rate of 5 bit/s/Hz. We can see that when the number of relay antennas increases, the required p_S is significantly reduced. Furthermore, if the number of relay antennas is very large, the required p_S becomes insensitive to the resolution of the DACs. In other words, the sources transmit the same power in Case I and Case II, and likewise in Case III and Case IV. Fig. <ref> plots the three rate ratios versus the number of relay antennas when p_S is very low. We observe that the three rate ratio curves converge to two nonzero limits, 1 and 4/π^2, which is consistent with Proposition <ref>. This property provides an efficient way to predict the sum rate of the one-bit quantized system from the known sum rate of systems with perfect ADCs and/or DACs, in the low source transmit power regime with large-scale relay antenna arrays. Fig. <ref> shows the transmit power p_R of the relay required to maintain a given sum rate of 5 bit/s/Hz. As in the previous case, the required power is substantially reduced when the number of relay antennas grows, which indicates the great benefit of employing large antenna arrays. In addition, when p_R is very small, e.g., p_R = -10 dB, the four curves show quite different results. The required number of relay antennas with one-bit ADCs and DACs is M = 512, which is approximately 2.5 times more than the case with perfect ADCs and DACs, which requires M = 208 antennas. For Case II and Case III, the required number of relay antennas is almost the same, respectively M = 314 and M = 345. Fig. <ref> compares the three rate ratios when p_R is very low. We can see that the three rate ratio curves converge to the nonzero limits 2/π, 2/π, and 4/π^2, which agrees with Proposition <ref>. §.§ Power allocation Fig. <ref> illustrates the impact of the optimal power allocation scheme on the sum rate when all users experience different large-scale fading. The large-scale fading coefficients are arbitrarily generated by β_SR,k = z_k(r_SR,k/r_0 )^-κ and β_RD,k = z_k(r_RD,k/r_0 )^-κ, where z_k is a log-normal random variable with standard deviation 8 dB, r_SR,k and r_RD,k respectively represent the distances from the sources and destinations to the relay, κ = 3.8 is the path loss exponent, and r_0 denotes the guard interval which specifies the nearest distance between the users and the relay. The relay is located at the center of a cell with a radius of 1000 meters and r_0 = 100 meters.
We choose β_SR = [0.2688, 0.0368, 0.00025, 0.1398, 0.0047 ] and β_RD = [0.0003, 0.00025, 0.0050, 0.0794, 0.0001 ]. As a benchmark scheme for comparison, we also plot the sum rate with uniform power allocation, i.e., p_S = P_T/2K and p_R = P_T/2. For uniform power allocation, we can see that the rate of Case I is the highest and that of Case IV is the lowest, while Case II outperforms Case III. These results are in agreement with Proposition <ref>. In addition, we observe that the optimal power allocation strategy significantly boosts the sum rate. Although the rate achieved by the optimal power allocation with one-bit ADCs and DACs is inferior to the case of perfect ADCs and DACs with uniform power allocation, it outperforms the other three configurations (Cases II-IV) under uniform power allocation. This demonstrates the great importance of power allocation in quantized systems. § CONCLUSIONS We have analyzed the achievable rate of a multipair half-duplex massive antenna relaying system assuming that one-bit ADCs and DACs are deployed at the relay. An approximate closed-form expression for the achievable rate was derived, based on which the impact of key system parameters was characterized. It was shown that the sum rate with one-bit ADCs and DACs is 4/π^2 times that achieved by an unquantized system in the low power regime. Despite the rate loss due to the use of one-bit ADCs and DACs, employing massive antenna arrays still enables significant power savings; i.e., the transmit power of each source or the relay can be reduced proportional to 1/M to maintain a constant rate, as in the unquantized case. Finally, we showed that a good power allocation strategy can substantially compensate for the rate loss caused by the coarse quantization. § PROOF OF THEOREM <REF> The end-to-end SINR given in (<ref>) consists of six expectation terms: 1) the desired signal power Ã_k; 2) the estimation error B̃_k; 3) the interpair interference C̃_k; 4) the noise at the relay D̃_k; 5) the quantization noise of the ADCs Ẽ_k; and 6) the quantization noise of the DACs F̃_k. Besides these terms, we also need to calculate an approximation of R_ x_R x_R. In the following, we compute them one by one. 1) Approximate R_ x_R x_R: R_ x_R x_R = E{Ĝ_RD^* Ĝ_SR^H R_ỹ_Rỹ_R Ĝ_SR Ĝ_RD^T } ≈ α_a^2 E{Ĝ_RD^* Ĝ_SR^H G_SR P_S G_SR^H Ĝ_SR Ĝ_RD^T } + (α_a^2 + 1 - 2/π) E{Ĝ_RD^* Ĝ_SR^H Ĝ_SR Ĝ_RD^T }. By using the fact that E{|| g_SR,k||^4 } = M (M + 1)β_SR,k^2, we have E{Ĝ_RD^* Ĝ_SR^H G_SR P_S G_SR^H Ĝ_SR Ĝ_RD^T } = E{Ĝ_RD^* Ĝ_SR^H Ĝ_SR P_S Ĝ_SR^H Ĝ_SR Ĝ_RD^T } + E{Ĝ_RD^* Ĝ_SR^H E_SR P_S E_SR^H Ĝ_SR Ĝ_RD^T } = M ∑_k=1^K σ_SR,k^2 σ_RD,k^2 (M p_S,k σ_SR,k^2 + ∑_i=1^K p_S,i β_SR,i) I_M and E{Ĝ_RD^* Ĝ_SR^H Ĝ_SR Ĝ_RD^T } = M∑_k=1^K σ_SR,k^2 σ_RD,k^2 I_M. Then, by substituting (<ref>) and (<ref>) into (<ref>), we directly obtain R_ x_R x_R ≈ M α_a^2 ∑_k=1^K σ_SR,k^2 σ_RD,k^2 (M p_S,k σ_SR,k^2 + ∑_i=1^K p_S,i β_SR,i) I_M + M(α_a^2 + 1 - 2/π) ∑_k=1^K σ_SR,k^2 σ_RD,k^2 I_M. 2) Ã_k: Since E{ g_RD,k^T W g_SR,k} = E{ g_RD,k^T ĝ_RD,k^* ĝ_SR,k^H g_SR,k} = M^2 σ_SR,k^2 σ_RD,k^2, we have Ã_k = p_S,k M^4 σ_SR,k^4 σ_RD,k^4. 3) B̃_k: E{ | g_RD,k^T Ĝ_RD^* Ĝ_SR^H g_SR,k|^2 } = E{∑_m = 1^K ∑_n=1^K g_RD,k^T ĝ_RD,m^* ĝ_SR,m^H g_SR,k g_SR,k^H ĝ_SR,n ĝ_RD,n^T g_RD,k^* }, which can be decomposed into three different cases: a) for m = n = k, E{ | g_RD,k^T Ĝ_RD^* Ĝ_SR^H g_SR,k|^2 } = E{ ||ĝ_SR,k||^4 ||ĝ_RD,k||^4 } + E{ ||ĝ_SR,k||^4 |ĝ_RD,k^T e_RD,k^* |^2 } + E{ ||ĝ_RD,k||^4 |ĝ_SR,k^H e_SR,k |^2 } + E{ |ĝ_SR,k^H e_SR,k |^2 |ĝ_RD,k^T e_RD,k^* |^2 } = M^2 (M+1)^2 σ_SR,k^4 σ_RD,k^4 + M^2 (M+1) σ_SR,k^4 σ_RD,k^2 σ̃_RD,k^2 + M^2 (M+1) σ_RD,k^4 σ_SR,k^2 σ̃_SR,k^2 + M^2 σ_SR,k^2 σ̃_SR,k^2 σ_RD,k^2 σ̃_RD,k^2.
b) for m = n ≠ k, E{ | g_RD,k^T Ĝ_RD^* Ĝ_SR^H g_SR,k|^2 } = M^2 β_SR,k β_RD,k ∑_n≠ k σ_SR,n^2 σ_RD,n^2. c) for m ≠ n ≠ k, E{ | g_RD,k^T Ĝ_RD^* Ĝ_SR^H g_SR,k|^2 } = 0. Combining a), b), and c), and utilizing the fact that σ_SR,k^2 + σ̃_SR,k^2 = β_SR,k and σ_RD,k^2 + σ̃_RD,k^2 = β_RD,k, we have E{ | g_RD,k^T Ĝ_RD^* Ĝ_SR^H g_SR,k|^2 } = p_S,k M^4 σ_SR,k^4 σ_RD,k^4 + p_S,k M^3 σ_SR,k^4 σ_RD,k^2 β_RD,k + p_S,k M^3 σ_RD,k^4 σ_SR,k^2 β_SR,k + p_S,k M^2 β_SR,k β_RD,k ∑_n=1^K σ_SR,n^2 σ_RD,n^2. Thus, B̃_k = p_S,k M^3 (σ_SR,k^4 σ_RD,k^2 β_RD,k + σ_RD,k^4 σ_SR,k^2 β_SR,k) + p_S,k M^2 β_SR,k β_RD,k ∑_n=1^K σ_SR,n^2 σ_RD,n^2. 4) C̃_k: E{ | g_RD,k^T Ĝ_RD^* Ĝ_SR^H g_SR,i|^2 } = E{∑_m = 1^K ∑_n=1^K g_RD,k^T ĝ_RD,m^* ĝ_SR,m^H g_SR,i g_SR,i^H ĝ_SR,n ĝ_RD,n^T g_RD,k^* }, which can be decomposed into six cases: a) for m ≠ n ≠ k,i, E{| g_RD,k^T Ĝ_RD^* Ĝ_SR^H g_SR,i|^2 } = 0. b) for m = n ≠ k,i, E{∑_n ≠ k,i | ĝ_SR,n^H g_SR,i |^2 | g_RD,k^T ĝ_RD,n^* |^2 } = M^2 β_SR,i β_RD,k ∑_n ≠ i,k σ_SR,n^2 σ_RD,n^2. c) for m = n = k (k ≠ i), E{ |ĝ_SR,k^H g_SR,i|^2 | g_RD,k^T ĝ_RD,k^*|^2 } = M^2 σ_SR,k^2 σ_RD,k^2 β_SR,i((M + 1)σ_RD,k^2 + σ̃_RD,k^2 ). d) for m = n = i (i ≠ k), E{ | g_RD,k^T ĝ_RD,i^*|^2 |ĝ_SR,i^H g_SR,i |^2} = M^2 σ_SR,i^2 σ_RD,i^2 β_RD,k((M + 1)σ_SR,i^2 + σ̃_SR,i^2 ). e) for m = i, n = k, E{| g_RD,k^T Ĝ_RD^* Ĝ_SR^H g_SR,i|^2 } = 0. f) for m = k, n = i, E{| g_RD,k^T Ĝ_RD^* Ĝ_SR^H g_SR,i|^2 } = 0. Combining a)-f), we have C̃_k = M^2 ∑_i≠ k p_S,i β_SR,i β_RD,k ∑_n =1^K σ_SR,n^2 σ_RD,n^2 + M^3 ∑_i≠ k p_S,i( σ_SR,k^2 σ_RD,k^4 β_SR,i + σ_SR,i^4 σ_RD,i^2 β_RD,k). 5) D̃_k: Following the same approach as in the derivation of B̃_k, we obtain D̃_k = M^3 σ_SR,k^2 σ_RD,k^4 + M^2 β_RD,k ∑_n = 1^K σ_SR,n^2 σ_RD,n^2. 6) Ẽ_k: Using the fact that Ẽ_k = (1 - 2/π)(1/α_a^2) D̃_k, we obtain the result for Ẽ_k. 7) F̃_k: F̃_k = ((1 - 2/π)/(α_a^2 α_d^2)) E{ || g_RD,k||^2 } = (1 - 2/π) M β_RD,k/(α_a^2 α_d^2). 8) G̃_k: Combining (<ref>), (<ref>), and (<ref>), we can find the value of G̃_k. § PROOF OF PROPOSITION <REF> We can readily observe that R_k^pD > R̃_k and R_k^p > R_k^pA. Thus, we only focus on comparing R_k^pA and R_k^pD. Using the fact that σ̂_SR,k^2 = (π/2)σ_SR,k^2 and σ̂_RD,k^2 = (π/2)σ_RD,k^2 (c.f. (<ref>), (<ref>), and Corollary <ref>), and neglecting the low order terms as M →∞, the ratio between the SINR of R_k^pA and that of R_k^pD can be expressed as (2^{(2τ_c/(τ_c - 2τ_p)) R_k^pA} - 1)/(2^{(2τ_c/(τ_c - 2τ_p)) R_k^pD} - 1) → f_2/f_1, where f_1 = (2/π) p_S,k σ_SR,k^4 σ_RD,k^2 β_RD,k + (2/π) p_S,k σ_RD,k^4 σ_SR,k^2 β_SR,k + (2/π)∑_i≠ k p_S,i( σ_SR,k^2 σ_RD,k^4 β_SR,i + σ_SR,i^4 σ_RD,i^2 β_RD,k) + σ_SR,k^2 σ_RD,k^4 + (1 - 2/π)β_RD,k ∑_i=1^K p_S,i σ_SR,i^4 σ_RD,i^2 + (1/p_R)∑_i=1^K p_S,i σ_SR,i^4 σ_RD,i^2 and f_2 = p_S,k σ_SR,k^4 σ_RD,k^2 β_RD,k + p_S,k σ_RD,k^4 σ_SR,k^2 β_SR,k + ∑_i≠ k p_S,i( σ_SR,k^2 σ_RD,k^4 β_SR,i + σ_SR,i^4 σ_RD,i^2 β_RD,k) + (π/2 - 1)(1 + ∑_k=1^K p_S,k β_SR,k) σ_SR,k^2 σ_RD,k^4 + (1/p_R)∑_i=1^K p_S,i σ_SR,i^4 σ_RD,i^2. Since f_1 < f_2, we conclude that R_k^pA > R_k^pD. This completes the proof.
References
[1] H. Xie, F. Gao, S. Zhang, and S. Jin, “A unified transmission strategy for TDD/FDD massive MIMO systems with spatial basis expansion model,” IEEE Trans. Veh. Technol., July 2016.
[2] H. Xie, B. Wang, F. Gao, and S. Jin, “A full-space spectrum-sharing strategy for massive MIMO cognitive radio,” IEEE J. Sel. Areas Commun., vol. 34, no. 10, pp. 2537–2549, Oct. 2016.
[3] C. Kong, C. Zhong, and Z. Zhang, “Performance of ZF precoder in downlink massive MIMO with non-uniform user distribution,” J. Commun. Netw., vol. 18, no. 5, pp. 688–698, Oct. 2016.
[4] C. Kong, C. Zhong, A. Papazafeiropoulos, M. Matthaiou, and Z. Zhang, “Sum-rate and power scaling of massive MIMO systems with channel aging,” IEEE Trans. Commun., vol. 63, no. 12, pp. 4879–4893, Dec. 2015.
[5] M. Cheng, S. Yang, and X. Fang, “Adaptive antenna-activation based beamforming for large-scale MIMO communication systems of high speed railway,” China Commun., vol. 13, no. 9, pp. 12–23, Sep. 2016.
[6] S. Jin, X. Liang, K.-K. Wong, X. Gao, and Q. Zhu, “Ergodic rate analysis for multipair massive MIMO two-way relay networks,” IEEE Trans. Wireless Commun., vol. 14, no. 3, pp. 1480–1491, Mar. 2015.
[7] X. Wang, Y. Wang, and R. Sun, “Approximate sum rate for massive multiple-input multiple-output two-way relay with Ricean fading,” IET Commun., vol. 10, no. 12, pp. 1493–1500, Aug. 2016.
[8] H. Q. Ngo, H. A. Suraweera, M. Matthaiou, and E. G. Larsson, “Multipair full-duplex relaying with massive arrays and linear processing,” IEEE J. Sel. Areas Commun., vol. 32, no. 9, pp. 1721–1737, Oct. 2014.
[9] Z. Zhang, Z. Chen, M. Shen, and B. Xia, “Spectral and energy efficiency of multipair two-way full-duplex relay systems with massive MIMO,” IEEE J. Sel. Areas Commun., vol. 34, no. 4, pp. 848–863, Apr. 2016.
[10] J. Yoo, K. Choi, and D. Lee, “Comparator generation and selection for highly linear CMOS flash analog-to-digital converter,” J. Analog Integr. Circuits Signal Process., vol. 35, no. 2–3, pp. 179–187, 2003.
[11] R. H. Walden, “Analog-to-digital converter survey and analysis,” IEEE J. Sel. Areas Commun., vol. 17, no. 4, pp. 539–550, Apr. 1999.
[12] Y. Li, C. Tao, A. L. Swindlehurst, A. Mezghani, and L. Liu, “Downlink achievable rate analysis in massive MIMO systems with one-bit DACs.” [Online]. Available: https://arxiv.org/pdf/1610.09630.pdf
[13] Y. Li, C. Tao, A. Mezghani, A. L. Swindlehurst, G. Seco-Granados, and L. Liu, “Optimal design of energy and spectral efficiency tradeoff in one-bit massive MIMO systems.” [Online]. Available: https://arxiv.org/pdf/1609.07427.pdf
[14] L. Fan, S. Jin, C.-K. Wen, and H. Zhang, “Uplink achievable rate for massive MIMO systems with low-resolution ADC,” IEEE Commun. Lett., vol. 19, no. 12, pp. 2186–2189, Dec. 2015.
[15] J. Zhang, S. Sun, and Z. Wang, “On the spectral efficiency of massive MIMO systems with low-resolution ADCs,” IEEE Commun. Lett., vol. 20, no. 5, pp. 842–845, May 2016.
[16] L. Fan, D. Qiao, S. Jin, C.-K. Wen, and M. Matthaiou, “Optimal pilot length for uplink massive MIMO systems with low-resolution ADC,” in Proc. IEEE SAM, July 2016.
[17] D. Verenzuela, E. Björnson, and M. Matthaiou, “Hardware design and optimal ADC resolution for uplink massive MIMO systems,” in Proc. IEEE SAM, July 2016.
[18] Y. Li, C. Tao, G. Seco-Granados, A. Mezghani, A. L. Swindlehurst, and L. Liu, “Channel estimation and performance analysis of one-bit massive MIMO systems.” [Online]. Available: https://arxiv.org/pdf/1612.03271.pdf
[19] J. Choi, B. L. Evans, and A. Gatherer, “ADC bit allocation under a power constraint for mmWave massive MIMO communication receivers.” [Online]. Available: https://arxiv.org/pdf/1609.05165.pdf
[20] C. Mollén, J. Choi, E. G. Larsson, and R. W. Heath Jr., “Uplink performance of wideband massive MIMO with one-bit ADCs,” IEEE Trans. Wireless Commun., vol. 16, no. 1, pp. 87–100, Jan. 2017.
[21] J. Mo, A. Alkhateeb, S. Abu-Surra, and R. W. Heath Jr., “Hybrid architectures with few-bit ADC receivers: Achievable rates and energy-rate tradeoffs.” [Online]. Available: http://arxiv.org/pdf/1605.00668.pdf
[22] W. Tan, S. Jin, C.-K. Wen, and Y. Jing, “Spectral efficiency of mixed-ADC receivers for massive MIMO systems,” IEEE Access, vol. 4, pp. 7841–7846, Aug. 2016.
[23] N. Liang and W. Zhang, “Mixed-ADC massive MIMO uplink in frequency-selective channels,” IEEE Trans. Commun., vol. 64, no. 11, pp. 4652–4666, Nov. 2016.
[24] J. J. Bussgang, “Crosscorrelation functions of amplitude-distorted Gaussian signals,” MIT Research Lab. Electronics, Tech. Rep. 216, Mar. 1952.
[25] S. Jacobsson, G. Durisi, M. Coldrey, T. Goldstein, and C. Studer, “Quantized precoding for massive MU-MIMO.” [Online]. Available: https://arxiv.org/abs/1610.07564
[26] A. Mezghani, R. Ghiat, and J. Nossek, “Transmit processing with low resolution D/A-converters,” in Proc. IEEE ICECS, Dec. 2009, pp. 683–686.
[27] J. Guerreiro, R. Dinis, and P. Montezuma, “Use of 1-bit digital-to-analogue converters in massive MIMO systems,” IEEE Electron. Lett., vol. 52, no. 9, pp. 778–779, Apr. 2016.
[28] Y. Li, C. Tao, A. L. Swindlehurst, A. Mezghani, and L. Liu, “Downlink achievable rate analysis in massive MIMO systems with one-bit DACs,” IEEE Commun. Lett., accepted for publication, 2017.
[29] O. B. Usman, H. Jedda, A. Mezghani, and J. A. Nossek, “MMSE precoder for massive MIMO using 1-bit quantization,” in Proc. IEEE ICASSP, Mar. 2016, pp. 3381–3385.
[30] J. Liu, J. Xu, W. Xu, S. Jin, and X. Dong, “Multiuser massive MIMO relaying with mixed-ADC receiver,” IEEE Signal Process. Lett., vol. 24, no. 1, pp. 76–80, Jan. 2017.
[31] F. Gao, R. Zhang, and Y.-C. Liang, “Optimal channel estimation and training design for two-way relay networks,” IEEE Trans. Commun., vol. 57, no. 10, pp. 3024–3033, Oct. 2009.
[32] A. Mezghani and J. A. Nossek, “Capacity lower bound of MIMO channels with output quantization and correlated noise,” in Proc. IEEE ISIT, July 2012.
[33] A. Papoulis, Probability, Random Variables, and Stochastic Processes, 3rd ed. McGraw-Hill, 1991.
[34] G. Jacovitti and A. Neri, “Estimation of the autocorrelation function of complex Gaussian stationary processes by amplitude clipped signals,” IEEE Trans. Inf. Theory, vol. 40, no. 1, pp. 239–245, Jan. 1994.
[35] M. T. Ivrlač and J. A. Nossek, “On MIMO channel estimation with single-bit signal-quantization,” in Proc. ITG Workshop on Smart Antennas, 2007.
[36] A. Mezghani and J. A. Nossek, “Analysis of 1-bit output noncoherent fading channels in the low SNR regime,” in Proc. IEEE ISIT, Jun. 2009, pp. 1080–1084.
[37] M. Avriel and A. C. Williams, “Complementary geometric programming,” SIAM J. Appl. Math., vol. 19, no. 1, pp. 125–141, July 1970.
[38] S. Boyd, Sequential Convex Programming, lecture notes, 2007. [Online]. Available: http://www.stanford.edu/class/ee364b/lectures/seq_slides.pdf
http://arxiv.org/abs/1703.08657v1
{ "authors": [ "Chuili Kong", "Amine Mezghani", "Caijun Zhong", "A. Lee Swindlehurst", "Zhaoyang Zhang" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170325072724", "title": "Multipair Massive MIMO Relaying Systems with One-Bit ADCs and DACs" }
LIP Lisboa and IST Lisboa, Portugal CBPF, Rio de Janeiro, Brazil LIP Coimbra and University of Coimbra, Portugal Università di Napoli “Federico II” and INFN Roma Tor Vergata, Italy INFN Padova, Italy Università di Udine, Italy Università di Padova, Italy INFN and Università di Roma Tor Vergata, Roma, Italy Coimbra Polytechnic - ISEC, Coimbra, Portugal Currently, the detection of Very High Energy gamma-rays for astrophysics relies on the measurement of the Extensive Air Showers (EAS), either using Cherenkov detectors or EAS arrays, the latter with larger fields of view but also larger energy thresholds. In this talk we present a novel hybrid detector concept for an EAS array with improved sensitivity at the lower energies (∼ 100GeV). We discuss its main features and capabilities, and present preliminary results on its expected performance and sensitivity. This wide field of view experiment is planned to be installed at high altitude in South America, making it a complementary project to the planned Cherenkov telescope experiments and a powerful tool to trigger further observations of variable sources and to detect transient phenomena. LATTES: a new gamma-ray detector concept for South America P. Assis1 U. Barres de Almeida2 A. Blanco3 R. Conceição1 ruben@lip.pt (presenter) B. D'Ettorre Piazzoli4 A. De Angelis5,6,1 M. Doro7,5 P. Fonte3,8 L. Lopes3 G. Matthiae7 M. Pimenta1 R. Shellard2 B. Tomé1 December 30, 2023 ============================================================ § INTRODUCTION The study of high-energy and very-high-energy gamma-rays is very important to probe extreme phenomena that take place in the Universe. Moreover, being neutral, this radiation can pin-point its emission sources, as gamma-rays are not deflected by the surrounding magnetic fields. The detection of gamma-rays at lower energies (below ∼ 100GeV) can be done using instruments placed on artificial satellites, for instance Fermi. However, as the gamma-ray energy increases, its flux at Earth becomes increasingly smaller, to the point where the collection areas available aboard satellites are not enough to study them. Fortunately, the interaction of gamma-rays with the Earth's atmosphere produces Extensive Air Showers (EAS) whose secondaries can be sampled by detector arrays; alternatively, one can collect the Cherenkov light produced by the secondaries using Imaging Atmospheric Cherenkov Telescopes (IACTs). These ground-based detectors have complementary advantages and disadvantages: IACTs have a lower energy threshold and better angular and energy resolution, as they can image the shower development; on the other hand, EAS arrays have significantly wider fields of view. In figure <ref> the sensitivities of current and future gamma-ray experiments with wide fields of view are shown. Two things become evident: there is no wide field-of-view experiment covering the Southern hemisphere sky, and there is a gap in the 100GeV region. Such a wide-FoV experiment with a low energy threshold and a large duty cycle would be fully complementary to the powerful narrow-FoV Cherenkov Telescope Array (CTA), as it would be able not only to issue alerts of transient phenomena but also to enable long term observations of variable sources and to help in the search for emissions from extended regions, such as the Fermi bubbles or dark matter annihilations from the centre of our galaxy.
Hence, we propose a novel hybrid detector to be installed at ∼ 5200m a.s.l., which ensures an improved sensitivity in the 100 GeV energy region. This manuscript is organised as follows: in section <ref> we describe the detector and the layout of the experiment; in section <ref> we discuss the performance of such a detector; and in section <ref> we present the achieved sensitivities. We end with final remarks.
§ DETECTOR DESCRIPTION In order to surpass the limitations of previous EAS array experiments, and to be able to lower the energy threshold while maintaining a reasonable energy and angular resolution, we propose to build a dense array with an area of 10 000m^2 made of modular hybrid detectors. Each station is composed of two low-cost Resistive Plate Chambers (RPCs) on top of a Water Cherenkov Detector (WCD), as shown in figure <ref>. Each RPC has 16 charge-collecting pads covering a total area of 1.5×1.5m^2. The WCD has a rectangular structure with dimensions 3×1.5×0.5m^3. The signals are read by two photomultipliers (PMTs), one at each of the smallest vertical faces of the WCD. On top of the RPCs there is a thin lead plate (5.6mm) to convert secondary photons. The conversion of the photons is important to improve the geometrical reconstruction, as photons have a stronger correlation with the primary direction than secondary shower electrons. The success of this hybrid detector concept lies in the fact that the RPCs contribute their high segmentation and time resolution, while the WCD provides a calorimetric measurement of the shower secondary particles, allowing the energy threshold to be lowered. Moreover, with this detector concept it is possible to trigger on the WCD, which allows the RPCs to operate at a low threshold while minimising several sources of noise (detector, electronics, environment).
§ EXPERIMENT PERFORMANCE The performance of this detector has been assessed using an end-to-end realistic Monte Carlo simulation. The EAS have been simulated using CORSIKA (COsmic Ray SImulations for KAscade) and the detector response was treated by a dedicated Geant4 simulation. We generated 10000 CORSIKA simulations for gammas and protons between 10GeV and 5TeV. To save computational time, the simulations were generated using a power-law differential energy spectrum with an index -1, and afterwards were weighted for the corresponding particle fluxes. The zenith angle for gammas was fixed to 10^∘, while for protons it ranged between 5 and 15 degrees. The detector was assumed to be placed at an altitude of 5200m a.s.l. To evaluate the effective area at the trigger level, we required that at least 3 stations have detected a signal. The trigger condition for a station requires at least 5 photoelectrons in each PMT. The effective area for this array has been computed using simulations. We have found that for gamma primaries with an energy of 100 GeV it is still around 10^3m^2, even considering the selection quality cuts, which will be described below. The energy estimation has been done using the total signal (S_tot) recorded in all WCD stations for each event. A calibration curve was derived and used to get the reconstructed energy (E_rec) for each event from S_tot. From this curve, it is possible to evaluate the energy resolution that one could achieve with such a detector as a function of E_rec.
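To make the simulation bookkeeping just described concrete, the following minimal sketch — our illustration, not the collaboration's analysis code — shows how events generated with an E^-1 spectrum can be reweighted to a physical flux and how a bin-wise effective area can be estimated from trigger decisions; the throwing area, the toy trigger model, and all names are placeholder assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Generation settings mirroring the text: 10 GeV-5 TeV with dN/dE ~ E^-1,
# i.e. uniform in log E.
E_MIN, E_MAX = 10.0, 5000.0        # GeV
N_THROWN = 10_000
A_THROWN = 8.0e5                   # m^2, assumed throwing area (placeholder)
energies = np.exp(rng.uniform(np.log(E_MIN), np.log(E_MAX), N_THROWN))

# Placeholder trigger decision (>= 3 stations, >= 5 photoelectrons per PMT);
# in the real analysis this comes from the Geant4 detector simulation.
p_trig = np.clip(np.log10(energies) - 1.0, 0.0, 1.0)
triggered = rng.random(N_THROWN) < p_trig

# Reweighting from the generation index -1 to a target index gamma:
# w(E) ~ E^-gamma / E^-1 = E^(1 - gamma).
GAMMA_TARGET = 2.0
weights = energies ** (1.0 - GAMMA_TARGET)

def effective_area(e_lo, e_hi):
    """A_eff = A_thrown * N_triggered / N_thrown within an energy bin."""
    sel = (energies >= e_lo) & (energies < e_hi)
    return A_THROWN * triggered[sel].sum() / max(sel.sum(), 1)

print("A_eff near 100 GeV: %.0f m^2" % effective_area(80.0, 125.0))
print("weighted trigger fraction: %.3f" % (weights[triggered].sum() / weights.sum()))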
The energy resolution obtained from this procedure is shown in figure <ref> (left), where it is possible to see that it improves as the shower energy increases; at the lowest energies one still has a reasonable energy resolution of 100%, this being mostly dominated by shower-to-shower fluctuations. The geometric reconstruction was done taking advantage of the RPC segmentation and fast timing (a time resolution of 1ns was used in the simulations). The position and time of the recorded hits in the RPCs were fitted to a shower-front plane model in order to reconstruct the primary direction. The quality of the reconstruction can be improved by applying a cut on the number of active RPC pads in the event: it was required that the event has at least 10 hits. The reconstructed angle was compared to the simulated one, and we calculated the 68% containment angle, σ_θ,68. The result is shown in figure <ref> (right), where it can be seen that, at energies around 100GeV, a reasonable resolution can be achieved, better than 2^∘.
§ SENSITIVITY In order to compute this detector's sensitivity to steady sources, one needs to know, apart from the effective area and the energy and angular resolutions, the discrimination capabilities between gammas and hadrons. Although we strongly believe that this hybrid detector could combine strategies explored in previous experiments, such as HAWC <cit.> and ARGO <cit.>, the complexity of the required study is out of the scope of this manuscript. Therefore, conservatively, we assumed no background rejection below 300GeV. As in this manuscript we aim to focus on the lowest energies, above 300GeV we took the HAWC gamma/hadron capabilities as an ansatz for the highest energies <cit.>. This should, of course, be carefully studied in future work. Figure <ref> shows the differential sensitivity of this detector to steady sources. We compute the sensitivity as the flux of a source giving N_excess/√(N_bkg) = 5 after 1 year of effective observation. It was assumed that the source is visible one fourth of the time, which is roughly the time that the galactic centre is visible in the Southern tropic. The obtained results are compared with the 1-year sensitivities of Fermi and HAWC. One can clearly see that this detector would be able to cover the gap between two of the most sensitive experiments in this energy range.
§ FINAL REMARKS We have presented a novel hybrid detector able to extend the reach of previous experiments down to the region of 100GeV (more information can be found in <cit.>). This modular, compact and low-cost detector has given encouraging results, but its capabilities are far from fully explored, in particular with respect to gamma-hadron discrimination. With the advent of the Cherenkov Telescope Array, this experiment would be a complementary project, as it could not only provide triggers for transient phenomena but also perform long-term observations of variable sources. Acknowledgments R. Conceição acknowledges the financial support given by FCT-Portugal.
http://arxiv.org/abs/1703.09254v2
{ "authors": [ "P. Assis", "U. Barres de Almeida", "A. Blanco", "R. Conceição", "B. D'Ettoree Piazzoli", "A. de Angelis", "M. Doro", "P. Fonte", "L. Lopes", "G. Matthiae", "M. Pimenta", "R. Shellard", "B. Tomé" ], "categories": [ "astro-ph.IM", "hep-ex" ], "primary_category": "astro-ph.IM", "published": "20170327183259", "title": "LATTES: a new gamma-ray detector concept for South America" }
We study definably compact definably connected groups definable in a sufficiently saturated real closed field R. We introduce the notion of group-generic point for ⋁-definable groups and show the existence of group-generic points for definably compact groups definable in a sufficiently saturated o-minimal expansion of a real closed field. We use this notion, along with some properties of generic sets, to prove that for every definably compact definably connected group G definable in R there are a connected R-algebraic group H and a definable injective map ϕ from a generic definable neighborhood of the identity of G into the group H(R) of R-points of H such that ϕ acts as a group homomorphism inside its domain. This result is used in <cit.> to prove that the o-minimal universal covering group of an abelian connected definably compact group definable in a sufficiently saturated real closed field R is, up to locally definable isomorphisms, an open connected locally definable subgroup of the o-minimal universal covering group of the R-points of some R-algebraic group.
§ INTRODUCTION This is the first of two papers on definably compact groups definable in real closed fields. Definable groups in o-minimal structures have been intensively studied in the last three decades, and they remain a field of current research. A real closed field is an ordered field elementarily equivalent to the real ordered field ℝ; for instance, ℝ, the real algebraic numbers ℝ_alg, and the ℵ_1-saturated hyperreal numbers ^*R, which contain infinite and infinitesimal elements, among other examples. By quantifier elimination in real closed fields (Tarski-Seidenberg), the definable sets in a real closed field R are the semialgebraic sets over R; namely, sets that are finite Boolean combinations of sets of solutions of finitely many polynomial equations and inequalities over R. Since a real closed field is an o-minimal structure (i.e., an ordered structure for which every definable subset of its universe is a finite union of points and intervals, see e.g., <cit.>), semialgebraic groups over a real closed field can be seen as a generalization of the semialgebraic groups over the real field ℝ, and also as a particular case of the groups definable in an o-minimal structure. There is a close relationship between groups definable in a field F and F-algebraic groups. Given an F-algebraic group H, the group of F-points H(F) is a definable group in F. When F is an algebraically closed field, every definable group in F is F-definably isomorphic, as an F-definable group, to some F-algebraic group (<cit.>); this fact is a version of Weil's theorem that asserts that any F-algebraic group can be recovered from birational data <cit.>. However, when F is real closed, there are semialgebraic groups over F that are not F-definably isomorphic to H(F) for any F-algebraic group H (e.g., consider the group ([0,1)⊆ F,+_mod 1)). Hrushovski and Pillay formulated in <cit.> a relationship between a semialgebraic group G over a real closed field R and the set of R-points H(R) of some R-algebraic group H. It roughly states that although the group operation of a semialgebraic group is given by a semialgebraic function, it is locally given by a rational function.
More specifically, it asserts the following. <cit.> Let G be a definably connected semialgebraic group over a real closed field ℛ=(R,<,+,0,·,1). Then there are a connected R-algebraic group H, a semialgebraic neighborhood U of the identity of G, and a semialgebraic homeomorphism f:U→ f(U)⊆ H(R), where H(R) is the set of R-points of H, such that x,y,xy∈ U implies f(xy)=f(x)f(y). Here, by a definably connected group definable in ℛ we mean a group G with no proper ℛ-definable subgroups of finite index. Nevertheless, the neighborhood of the identity of G given by the above result of Hrushovski and Pillay may not give enough information about G; for instance, if U is too small. Consider the following example: the group ([0,1)⊆ R,+_mod1) with addition modulo 1 is locally homomorphic to (R,+), where (R,<,+,0,·,1) is an ℵ_1-saturated real closed field. More precisely, the definable bijection f:[0,β)∪(1-β,1)→(-β,β) defined by f(x)=x if x∈[0,β), or f(x)=x-1 if x∈(1-β,1), where 0<β<1/n for every positive integer n (i.e., β is a positive infinitesimal), is a local homomorphism between ([0,1)⊆ R,+_mod1) and (R,+); here, by a local homomorphism between two groups we mean a map between some neighborhoods of their identities that acts as a group homomorphism inside its domain (see Def. <ref>). But U=[0,β)∪(1-β,1) cannot cover G with finitely many group translates, and even the subgroup ⟨ U⟩ generated by U says nothing about the torsion of G. Fortunately, the definable compactness (see Def. <ref>) of G allows us to obtain a local homomorphism between G and H(R) whose domain is a generic definable set in G. From now on, we will follow these conventions. By a sufficiently saturated structure we mean a κ-saturated structure for some sufficiently large cardinal κ. By a type-definable set in a sufficiently saturated structure ℳ we mean a subset of M^n that is the intersection of less than κ-many definable sets. And given a group G ⋁-definable in an o-minimal structure, by G^00 we denote the smallest type-definable subgroup of G of index <κ; if G is definable, then G^00 exists, by <cit.>. In this paper we prove the next theorem. <ref> Let G be a definably compact definably connected group definable in a sufficiently saturated real closed field R. Then there are * a connected R-algebraic group H such that dim(G)=dim(H(R))=dim(H), * a definable X⊆ G such that G^00⊆ X, * a definable homeomorphism ϕ:X⊆ G→ϕ(X)⊆ H(R) such that ϕ and ϕ^-1 are local homomorphisms. To prove the above result we introduce the notion of group-generic point in Section <ref>. An element a of a group G definable over A⊆ M is called group-generic of G over A if every A-definable X⊆ G with a∈ X is generic in G (namely, X covers G by finitely many group translates), where ℳ=(M,<,…) is a sufficiently saturated o-minimal structure. We show the existence of group-generic points in definably compact groups definable in a sufficiently saturated o-minimal expansion of a real closed field (Prop. <ref>) and establish some properties of generic points, group-generic points, and generic sets. With these tools we adapt the proof of <cit.> to obtain a strong version of the group configuration result for definably compact groups (Prop.
<ref>), which is one of the main ingredients for Theorem <ref>. In the second paper (<cit.>) we combine Theorem <ref> and a study of locally definable covering homomorphisms for locally definable groups to prove the following: if G is an abelian definably compact definably connected group definable in a sufficiently saturated real closed field, then its o-minimal universal covering group G is definably isomorphic, as a locally definable group, to a connected open locally definable subgroup of the o-minimal universal covering group H(R)^0 of the group H(R)^0 for some connected R-algebraic group H. This research is part of my PhD thesis at the Universidad de los Andes, Colombia and the University of Haifa, Israel.
§.§ The structure of the paper Section <ref> contains some basic background used throughout the paper. Group-generic points are introduced and studied in Section <ref>: we define group-generic points for ⋁-definable groups, show their existence in definably compact groups definable in a sufficiently saturated o-minimal expansion of a real closed field, and establish some of their properties and connections with generic points and generic sets. In Section <ref>, we show a group configuration proposition for definably compact groups (Prop. <ref>) used in the proof of the main result of this paper: Theorem <ref>, which is proved in Section <ref>. Our notation and any undefined term that we use from model theory, topology, or algebraic geometry are generally standard. For a group G whose group operation is written multiplicatively, we use the following notation: ∏_nX=X·…·X (n times), and X^n={ x^n:x∈ X} for any n∈ℕ.
§ PRELIMINARIES Familiarity with basic facts about o-minimality is assumed (they can be found in <cit.>, <cit.>, and <cit.>). Given a first-order structure ℳ with universe M, we say that a set C⊆ M^k is definable in ℳ over A⊆ M if there is a first-order formula ψ(x) with parameters from A such that C={ c∈ M^k:ℳ⊨ψ(c)}. A set C⊆ M^k is definable in ℳ if it is definable over M. A function f:C⊆ M^k→ M^n is definable if its graph is a definable set. A group (G,·) is definable if G is a definable set and its group multiplication is a definable function. In an o-minimal structure ℳ=(M,<,…) with C⊆ M^k definable in ℳ, we define the (geometric) dimension of C, dim(C), as the maximal n≤ k such that the projection of C onto n coordinates contains an open set of M^n, where M^n has the product topology induced by the order topology on M. From now until the end of this section, let ℳ=(M,<,…) be a sufficiently saturated o-minimal structure.
§.§ Algebraic dimension and generic points Recall that for A⊆ M and b∈ M, b is in the algebraic closure of A (b∈ acl(A)) if b is an element of a finite A-definable set. And b is in the definable closure of A (b∈ dcl(A)) if the singleton { b} is A-definable. In these definitions of algebraic and definable closure we can consider finite tuples from M instead of elements of M, with exactly the same definitions. By the Exchange Lemma (<cit.>), we can define a model-theoretic notion of dimension. Let A⊆ M and a tuple a∈ M^n. The (acl-)dimension of a over A, dim(a/A), is the cardinality of any maximal A-algebraically independent subtuple of a. If p∈ S(A), then dim(p)=dim(a/A) for any tuple a∈ M^n realising p. We recall some properties of this notion of dimension. <cit.> Let A,B⊆ M and tuples a∈ M^n and b∈ M^m. * If A⊆ B, then dim(a/A)≥dim(a/B). * (Additivity) dim(ab/A)=dim(a/Ab)+dim(b/A). * (Symmetry) dim(a/Ab)=dim(a/A) ⇔ dim(b/Aa)=dim(b/A). * Let p∈ S(A).
If A⊆ B, there is q∈ S(B) such that q⊇ p and dim(p)=dim(q). Let A⊆ M and tuples a∈ M^n and b∈ M^m. * a is independent from b over A, denoted by a[A]b, if dim(a/A)=dim(a/Ab); this is also expressed by saying that tp(a/Ab) does not fork over A. * Let X⊆ M^n be A-definable and a∈ X. a is a generic point of X over A if dim(a/A)=dim(X). Note, by <cit.>, that the (geometric) dimension of an A-definable set X⊆ M^n satisfies dim(X)=max{dim(a/A):a∈ X}. Since ℳ is sufficiently saturated, we have that for every X⊆ M^n definable over A⊆ M, with |A|<κ, there is a∈ X generic of X over A.
§.§ ⋁-definable groups A ⋁-definable group is a group (𝒰,·) whose universe is a union 𝒰=⋃_{i∈ I}Z_i of ℳ-definable subsets of M^n for some fixed n, all defined over A⊆ M with |A|<κ, such that for every i,j∈ I * there is k∈ I such that Z_i∪ Z_j⊆ Z_k (i.e., the union is directed), and * the group operation ·|_Z_i× Z_j and group inverse (·)^-1|_Z_i are ℳ-definable maps into M^n. We say that (𝒰,·) is locally ℳ-definable if |I| is countable. A map between ⋁-definable (locally definable) groups is called ⋁-definable (locally ℳ-definable) if its restriction to any ℳ-definable set is an ℳ-definable map. We define dim(𝒰)=max{dim(Z_i):i∈ I}. An element a∈𝒰 is generic of 𝒰 over A if dim(a/A)=dim(𝒰). * Let G be an ℳ-definable group, and X⊆ G definable containing the identity element of G. Then the subgroup 𝒰=⟨ X⟩=⋃_{n∈ℕ^×}∏_n(X· X^-1) of G generated by X is a locally ℳ-definable group. In particular, every countable group is a locally definable group in any structure, as is the commutator subgroup [G,G] of an ℳ-definable group G. * The o-minimal universal covering group (see <cit.>) of a connected locally ℳ-definable group exists and is a locally ℳ-definable group. * (<cit.>) Let (G,<,+) be a sufficiently saturated ordered divisible abelian group, and in it take an infinite increasing sequence of elements 0<a_1<a_2<⋯ such that na_i<a_{i+1} for every n∈ℕ. The subgroup 𝒰=⋃_i(-a_i,a_i) of G is a ⋁-definable group. This group has the distinction of having no 𝒰^00, and is not definably generated. Any ⋁-definable group in ℳ can be endowed with a topology τ making it into a topological group (<cit.>). This fact was a generalization by Peterzil and Starchenko <cit.> of the known result for definable groups due to Pillay <cit.>. In case ℳ expands the reals, that topology makes any definable group into a real Lie group. Moreover, any ⋁-definable homomorphism between two ⋁-definable groups is continuous with respect to their τ topologies <cit.>, and any ⋁-definable subgroup 𝒲 of a ⋁-definable group 𝒰 is τ-closed in 𝒰; if 𝒲 and 𝒰 have the same dimension, then 𝒲 is also τ-open in 𝒰 <cit.>. For the rest of the paper any topological property of ⋁-definable groups refers to this τ topology. Let 𝒰 be a ⋁-definable group. * 𝒰 is (τ-)connected if 𝒰 has no nonempty proper (τ-)clopen subset whose intersection with any definable subset of 𝒰 is definable. * 𝒰 is (τ-)definably compact if every definable path γ:(0,1)→𝒰 has limit points in 𝒰 (where the limits are taken with respect to the τ-topology). Note that if 𝒰 is a definable group, the above definition of connectedness agrees with the known notion of definable connectedness for definable groups.
§.§ Generic sets in ⋁-definable groups Let 𝒰 be a ⋁-definable group. A set X⊆𝒰 is left (right) generic in 𝒰 if less than κ-many left (right) group translates of X cover 𝒰. X is generic if it is both left and right generic, and X is called n-generic if n group translates of X cover 𝒰.
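To illustrate the last definition with a concrete case (anticipating the group ([0,1),+_mod1) used in the remarks of the next section), the following elementary computation — our illustration, not part of the source argument — shows that, for any n∈ℕ^×, the interval (0,1/n) is 2n-generic in ([0,1),+_mod1):

\[
[0,1)=\bigcup_{j=0}^{2n-1}\Big(\tfrac{j}{2n}+_{\mathrm{mod}\,1}\big(0,\tfrac{1}{n}\big)\Big),
\]

since the translate by j/(2n) is the (wrapped) interval from j/(2n) to j/(2n)+1/n, consecutive translates overlap in a set of length 1/(2n), and the point 0 lies in the translate by (2n-1)/(2n). By contrast, no interval of infinitesimal length is generic there, as finitely many of its translates have infinitesimal total length.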
By saturation, a definable subset that is generic in a definable group covers the group by finitely many group translates. Moreover, by <cit.>, any left generic definable subset of a connected ⋁-definable group 𝒰 generates 𝒰. Some examples of generic definable subsets of a definable group G are the large subsets of G; namely, definable sets Y⊆ G such that dim(G∖ Y)<dim(G); this fact was proved by Pillay in <cit.>. Also, note that for the additive group (M,+) a definable set X⊆ M is generic if and only if M∖ X is bounded in M (<cit.>). In case G is a definably compact definably connected group, by <cit.>, for any definable set X⊆ G, X is left generic if and only if X is right generic, so we just say generic. <cit.> Assume G is a definably connected group definable in a sufficiently saturated o-minimal expansion of a real closed field, and X⊆ G is a definable set whose closure in G is definably compact. If X is not left generic in G, then G∖ X is right generic in G.
§.§ Local homomorphisms Let G_1 and G_2 be two topological groups, X⊆ G_1 a neighborhood of the identity of G_1, and θ:X→ G_2 a map. θ is called a local homomorphism if x,y,xy∈ X implies θ(xy)=θ(x)θ(y). We say that an injective map θ:X⊆ G_1→ G_2 is a local homomorphism in both directions if θ:X→ G_2 and θ^-1:θ(X)→ X are local homomorphisms. Note that if θ:X⊆ G_1→ G_2 is an injective local homomorphism between the groups G_1, G_2, then θ^-1:θ(X)→ X need not be a local homomorphism; for instance, consider the groups G_1=(ℝ,+), G_2=([0,1),+_mod1), and θ:[-1/8,1/8]⊆ℝ→[0,1) the map θ(x)=π(3x), where π:ℝ→[0,1):t↦ t mod1. Then θ is an injective local homomorphism, but θ^-1:[0,3/8]∪(5/8,1)=θ([-1/8,1/8])→[-1/8,1/8]⊆ℝ is not a local homomorphism. To see this note that, for example, 5/8+_mod1 5/8=1/4∈θ([-1/8,1/8]), but θ^-1(5/8)+θ^-1(5/8)=-1/4∉[-1/8,1/8]. In Claim <ref> we formulate a necessary and sufficient condition on a local homomorphism θ in order for θ^-1 to be a local homomorphism. Let θ:X⊆ G_1→ G_2 be an injective local homomorphism between the groups G_1 and G_2. Then * θ^-1:θ(X)⊆ G_2→ X⊆ G_1 is a local homomorphism if and only if for all y_1,y_2∈θ(X), if y_1· y_2∈θ(X), then θ^-1(y_1)·θ^-1(y_2)∈ X. * If there is X^'⊆ X such that X^'· X^'⊆ X, then θ|_X^'^-1:θ(X^')→ X^' is a local homomorphism. (i) Let y_1,y_2,y_1· y_2∈θ(X) be such that θ^-1(y_1)·θ^-1(y_2)∈ X; then θ(θ^-1(y_1)·θ^-1(y_2))=y_1· y_2 ⇔ θ^-1(y_1· y_2)=θ^-1(y_1)·θ^-1(y_2). The other direction is clear. (ii) Let θ(x_1), θ(x_2), θ(x_1)·θ(x_2)∈θ(X^') with x_1,x_2∈ X^'. Since X^'· X^'⊆ X, θ(x_1· x_2)=θ(x_1)·θ(x_2), so θ^-1(θ(x_1))·θ^-1(θ(x_2))=x_1· x_2=θ^-1(θ(x_1)·θ(x_2))∈θ^-1(θ(X^'))=X^'.
§ GROUP-GENERIC POINTS FOR ⋁-DEFINABLE GROUPS From now until the end of this section, ℳ is a sufficiently saturated o-minimal expansion of a real closed field. Let 𝒰 be a ⋁-definable group over A⊆ M and a∈𝒰. * a is a left (right) group-generic point of 𝒰 over A if every A-definable X⊆𝒰 with a∈ X is left (right) generic in 𝒰. a is group-generic if it is both left and right group-generic. * A type p is generic in 𝒰 if for every formula φ∈ p, φ defines a generic subset in 𝒰. Therefore, a∈𝒰 is group-generic of 𝒰 over A if and only if tp(a/A) is generic in 𝒰. Let G be a group definable over A⊆ M, and a∈ G. If a is group-generic of G over A, then a is generic of G over A.
Suppose that there is an A-definable set Y with a∈ Y and dim(Y)<dim(G). Then dim(Y∩ G)<dim(G), so Y∩ G cannot be generic in G, which contradicts the group-genericity of a.
§.§ Basics on group-generic points Below we discuss some properties of group-generic points and their relationships with generic points and generic sets. The next fact is a consequence of <cit.>. <cit.> Let G be a definably compact definably connected definable group. Then: * The union of two nongeneric definable subsets in G is also nongeneric in G. * The set ℐ={ X⊆ G:X is definable and nongeneric in G} is an ideal of (Def(G),∪,∩), the Boolean algebra of definable subsets of G. * There is a complete generic type p(x)∈ S^ℳ(A) in G. Let G be a definably compact definably connected group definable over A⊆ M. Then: * If |A|<κ, then there is a group-generic point a of G over A. * Let p∈ S^ℳ(A) be a generic type in G. Then for any B such that A⊆ B⊆ M there is a generic type q∈ S^ℳ(B) in G such that q⊇ p and q|_A=p. (i) By Fact <ref>(iii), there is a complete generic type p(x)∈ S^ℳ(A) in G. Then, by saturation, there is a realising p, so tp(a/A)=p is generic in G; i.e., a is group-generic of G over A. (ii) Suppose that G=φ(ℳ,d) for some ℒ_A-formula φ with d⊆ A. Let ℐ_B={ X⊆ G:X is definable over B and nongeneric in G} and Φ_B={¬ψ∧φ:ψ is an ℒ_B-formula that defines in ℳ a set in ℐ_B}. Let p̃=p∪Φ_B. Then p̃ is a partial type: if there were {θ_i}_{i<k_1}⊆ p and {¬ψ_j∧φ}_{j<k_2}⊆Φ_B such that ⋀_{i<k_1}θ_i∧⋀_{j<k_2}(¬ψ_j∧φ) is not satisfiable, then ⋀_{i<k_1}θ_i (which implies φ, as φ∈ p) would imply ⋁_{j<k_2}ψ_j; but ⋀_{i<k_1}θ_i∈ p and, by Fact <ref>(i), ⋁_{j<k_2}ψ_j defines a nongeneric set, so a formula of p would imply a formula defining a nongeneric set, contradicting the genericity of p. Hence, p̃ is finitely satisfiable. Now, let q(x) be any complete type in S^ℳ(B) such that q⊇p̃; then q is a generic type in G and q|_A=p. This finishes the proof of (ii). Let G be a definably compact definably connected group definable over A⊆ M with |A|<κ. Let a∈ G be a group-generic element of G over A, and c a finite tuple from M. Then, * there is a^'∈ G such that tp^ℳ(a^'/A)=tp^ℳ(a/A) and a^' is a group-generic element of G over Ac. * There is c^' a finite tuple from M such that tp^ℳ(c^'/A)=tp^ℳ(c/A) and a is a group-generic element of G over Ac^'. * There is c^' a generic element of G over Aa such that a is group-generic of G over Ac^'. * Let b∈ G be group-generic of G over Ac. If c^' is a finite tuple from M such that tp^ℳ(c^'/Ab)=tp^ℳ(c/Ab), then b is group-generic of G over Ac^'. * Let b∈ G be group-generic of G over A and a group-generic of G over Ab. Then there is c^' a finite tuple from M such that tp^ℳ(c^'/A)=tp^ℳ(c/A), and b and a are group-generic of G over Ac^' and Abc^', respectively. * Let b∈ G be group-generic of G over A and a group-generic of G over Ab. Then there is c^' generic of G over Aab such that b and a are group-generic of G over Ac^' and Abc^', respectively. (i) Since a is group-generic of G over A, tp^ℳ(a/A) is a complete generic type in G. Let B=Ac. Then by Prop. <ref>(ii), there is a generic type q∈ S^ℳ(B) in G such that q⊇tp^ℳ(a/A) and q|_A=tp^ℳ(a/A). As ℳ is sufficiently saturated, there is a tuple a^' from M realising q, so tp^ℳ(a^'/B)=q. Since q|_A=tp^ℳ(a/A), tp^ℳ(a^'/A)=tp^ℳ(a/A). (ii) By (i), there is a tuple a^' from M such that tp^ℳ(a^'/A)=tp^ℳ(a/A) and a^' is a group-generic element of G over Ac. As ℳ is sufficiently saturated, tp^ℳ(a^'/A)=tp^ℳ(a/A) if and only if there is f∈ Aut(ℳ/A) such that f(a^')=a.
Then a^' is a group-generic element of G over Ac if and only if a is a group-generic element of G over Af(c), so with c^'=f(c) we obtain the desired conclusion. (iii) Let c be a generic element of G over A. By (ii), there is a tuple c^' from M such that tp^ℳ(c^'/A)=tp^ℳ(c/A) and a is a group-generic element of G over Ac^'. As a is group-generic of G over Ac^', a[A]c^'. Since tp^ℳ(c^'/A)=tp^ℳ(c/A) and c is generic of G over A, c^' is generic of G over A; but c^'[A]a, so c^' is generic of G over Aa. (iv) First note the following. Let b∈ G be group-generic of G over Ac, and f∈ Aut(ℳ/A). Then f(b) is group-generic of G over Af(c). Indeed, let φ(x,a^'',f(c)) be an ℒ_Af(c)-formula, with a^'' a finite tuple from A, such that ℳ⊨φ(f(b),a^'',f(c)) and φ(ℳ,a^'',f(c))⊆ G. We will see that φ(G,a^'',f(c)) is generic in G. As ℳ⊨φ(f(b),a^'',f(c)) if and only if ℳ⊨φ(b,a^'',c) (f fixes a^'' pointwise), φ(G,a^'',c) is generic in G. From this it is easy to see that φ(G,a^'',f(c)) is also generic in G. Thus f(b) is group-generic of G over Af(c). Now, as tp^ℳ(c^'/Ab)=tp^ℳ(c/Ab), there is f∈ Aut(ℳ/Ab) such that f(c)=c^'. Since b is group-generic of G over Ac, the above claim yields that b=f(b) is group-generic of G over Ac^'. (v) By (ii), there is c_1 a tuple from M such that tp^ℳ(c_1/A)=tp^ℳ(c/A) and b is group-generic of G over Ac_1. Again by (ii), there is c^' a tuple from M such that tp^ℳ(c^'/Ab)=tp^ℳ(c_1/Ab) and a is group-generic of G over Abc^'. And by (iv), b is group-generic of G over Ac^'. (vi) Let c be a generic element of G over A. By (v), there is c^' a tuple from M such that tp^ℳ(c^'/A)=tp^ℳ(c/A), and b and a are group-generic of G over Ac^' and Abc^', respectively. Since tp^ℳ(c^'/A)=tp^ℳ(c/A), a[Ab]c^', and b[A]c^', c^' is generic of G over Aab. Let G be a group definable over A⊆ M. If a is group-generic of G over A, then a is group-generic of G over acl^ℳ(A). Let c∈acl^ℳ(A), and assume that ℳ⊨φ(a,c) for some ℒ_c-formula φ(x,c) that defines a subset of G. We will see that φ(G,c) is generic in G. Since c∈acl^ℳ(A) and ℳ is an ordered structure, acl^ℳ=dcl^ℳ. Then there are a tuple c^' from A and an ℒ-formula γ(z,w) such that γ(ℳ,c^')={ c}. Thus, if ϕ(x,c^')=(∃!z)(φ(x,z)∧γ(z,c^')), then ℳ⊨ϕ(a,c^'). Since a is group-generic of G over A, ϕ(ℳ,c^') is generic in G. Therefore, there are g_1,…,g_n∈ G such that for every g∈ G there is g^'∈ϕ(ℳ,c^') such that g=g_i· g^' for some i∈{ 1,…,n}. But γ(ℳ,c^')={ c}, so g^'∈φ(ℳ,c). Thus φ(G,c) is generic in G. If G is a group definable over A⊆ M and a∈ G is generic of G over A, then a need not be group-generic of G over A. For instance, consider a sufficiently saturated real closed field ℛ=(R,<,+,0,·,1) and its additive group G=(R,+). By saturation, there is α∈⋂_{n≥1}(0,1/n). Since dim(α/∅)=tr.deg(ℚ(α):ℚ)=1=dim(G), α is generic of G over ∅; but α is not group-generic of G over ∅, since α∈(0,1) and (0,1) cannot cover G by finitely many group translates. Let X⊆ M^n be definable over A⊆ M. If b∈ X is generic of X over A and a∈ X is generic of X over Ab, then b is generic of X over Aa. Since a is generic of X over Ab, a[A]b. By the symmetry of independence (Fact <ref>(iii)), b[A]a, so b is generic of X over Aa. If G is a group definable over A⊆ M, b∈ G is group-generic of G over A and a∈ G is group-generic of G over Ab, then b need not be group-generic of G over Aa. For instance, consider a sufficiently saturated real closed field ℛ=(R,<,+,0,·,1).
Let [0,1)⊆ R and G=([0,1),+_mod1). We will find a,b∈ G such that b and a are group-generic of G over A and Ab, respectively, but with b not group-generic of G over Aa; here A⊆ R is a set of parameters with |A|<κ over which G is defined. Let φ be an ℒ_A-formula that defines G. Let ℐ_A={ψ:ψ is an ℒ_A-formula and ψ(ℛ) is nongeneric in G}. Let θ_n(x) be the formula 0<x<1/n for n∈ℕ∖{ 0}, and let Γ_A={¬ψ∧φ,θ_n:ψ∈ℐ_A,n∈ℕ∖{ 0}}. Γ_A is a partial type: if (⋀_{i<k_1}(¬ψ_i∧φ)∧⋀_{j<k_2}θ_{n_j})(ℛ)=∅, then (⋀_{j<k_2}θ_{n_j})(ℛ)⊆(⋁_{i<k_1}ψ_i)(ℛ); but the finite union of nongeneric definable subsets in G is nongeneric, so (⋁_{i<k_1}ψ_i)(ℛ) cannot contain the generic set (⋀_{j<k_2}θ_{n_j})(ℛ). Thus Γ_A is a partial type. Moreover, Γ_A is generic in G because every formula in Γ_A defines a generic subset in G. Now, let p∈ S^ℛ(A) with p⊇Γ_A; then p is also generic in G: otherwise, if there is ϕ∈ p defining a nongeneric subset in G, then ¬ϕ∧φ,ϕ∈ p, a contradiction. By saturation, there is b realising p, so b is group-generic of G over A. Next, we will show the existence of a group-generic point a of G over Ab that is also a positive infinitesimal, but such that b is not group-generic of G over Aa. Let ℐ_Ab={ψ:ψ is an ℒ_Ab-formula and ψ(ℛ) is nongeneric in G}. Let Γ_Ab={¬ψ∧φ,θ_n:ψ∈ℐ_Ab,n∈ℕ∖{ 0}} with θ_n(x) the formula 0<x<1/n for n∈ℕ∖{ 0} as above. As before, Γ_Ab is a generic partial type in G, and if q∈ S^ℛ(Ab) with q⊇Γ_Ab, then q is generic in G. By saturation, there is a realising q, so a is group-generic of G over Ab. Notice that 0<b<a: otherwise, if a<b, then a∈(0,b), and since (0,b) is an Ab-definable subset of G, the group-genericity of a implies that (0,b) is generic in G, which is not possible since b is infinitesimal. Therefore, b∈(0,a), which is an Aa-definable interval of infinitesimal length; then (0,a) cannot be generic in G, so b is not group-generic of G over Aa. Thus we finish Remark <ref>. <cit.> Let G be a group definable over A⊆ M. Let a,b∈ G; then: * If a is generic of G over Ab, then ab is generic of G over Ab. * There are b_1,b_2∈ G generic of G over Ab such that b=b_1b_2. Let G be a group definable over A⊆ M. Let a,b∈ G; then: * If a is group-generic of G over Ab, then ab is group-generic of G over Ab. * If there is a group-generic point of G over Ab, then there are b_1,b_2∈ G group-generic of G over Ab such that b=b_1b_2. (i) We will show that ab is group-generic of G over Ab, so let X⊆ G be Ab-definable with ab∈ X. Since a∈ Xb^-1 and a is group-generic of G over Ab, Xb^-1 is generic in G, and so is X. (ii) By hypothesis, there is a group-generic point b_1 of G over Ab. Since b_1^-1 is also group-generic of G over Ab, (i) implies that b_2=b_1^-1b is group-generic of G over Ab, and b=b_1(b_1^-1b).
§.§ Generic sets in the product group In this subsection we prove some properties of generic definable subsets of the product group G× G for a definably compact group G. Lemmas <ref> and <ref> will be used in the proof of Theorem <ref>. We first recall the notion of a Keisler measure on G (which exists by <cit.>), and a Fubini (or symmetry) theorem (<cit.>). Let X⊆ M^n be definable. * A (global) Keisler measure μ on X is a finitely additive probability measure on Def(X) (the set of all definable subsets of X); i.e., a map μ:Def(X)→[0,1]⊆ℝ such that μ(∅)=0, μ(X)=1, and for X_1,X_2∈ Def(X), μ(X_1∪ X_2)=μ(X_1)+μ(X_2)-μ(X_1∩ X_2). * If μ is a Keisler measure on a definable group G, μ is left (right) invariant if μ(gY)=μ(Y) (μ(Yg)=μ(Y)) for every g∈ G and Y∈ Def(G).
μ is called generic if: μ(Y)>0 if and only if Y∈ Def(G) and Y is generic in G. Note that if μ_1 and μ_2 are two Keisler measures on a definable group G, we can define a Keisler measure μ_1⊗μ_2 on G× G as follows: for a definable set D⊆ G× G, μ_1⊗μ_2(D)=∫μ_1(D_y)dμ_2(y), where D_y={ x∈ G:(x,y)∈ D}. Note that the function y↦μ_1(D_y) is integrable with respect to μ_2 (<cit.>). The next fact gathers Proposition 7.5 and Theorem 7.7 in <cit.> for the case of a definably compact group. <cit.> Let G be a definably connected definably compact group definable in ℳ. Then * G has a unique left invariant (global) Keisler measure, which is also the unique right invariant global Keisler measure on G. This measure is also generic. * If μ and λ are two left invariant (global) Keisler measures on G, then μ⊗λ=λ⊗μ and is also left invariant. Let G be a definably connected definably compact group definable in ℳ. Let Z⊆ G× G be a definable set. For each y∈ G, let Z_y={ x∈ G:(x,y)∈ Z}, Z_gen={ y∈ G:Z_y is generic in G}, and Z_n={ y∈ G:Z_y is n-generic in G}. Then Z is generic in G× G if and only if Z_gen is generic in G, if and only if there is n∈ℕ such that Z_n is generic in G. First, observe that for every i∈ℕ, Z_i⊆ Z_{i+1}, Z_i is definable, and Z_gen=⋃_{n∈ℕ}Z_n. Let Z_1^'=Z_1, and Z_n^'=Z_n∖ Z_{n-1} for n≥2. Then Z_gen=⋃̇_{n∈ℕ}Z_n^'. By saturation, Z_gen is generic in G if and only if there is n∈ℕ such that Z_n is generic in G, if and only if there is n∈ℕ such that Z_n^' is generic in G. Second, by Theorem 7.7 in <cit.>, G has a unique generic left invariant (global) Keisler measure μ. Since the Keisler measure μ is left invariant, so is the Keisler measure μ⊗μ on G× G. Thus, again by <cit.>, μ⊗μ is the unique left invariant Keisler measure on G× G. For a set X we denote by 1_X the indicator function of X; namely, 1_X(a)=1 if a∈ X and 1_X(a)=0 if a∉ X. μ⊗μ(Z) =∫_Gμ(Z_y)dμ(y) =∑_{n∈ℕ}∫_{Z_n^'}μ(Z_y)dμ(y) =∑_{n∈ℕ}∫_{Z_n^'}(∫_G1_{Z_y}(x)dμ(x))dμ(y) =∑_{n∈ℕ}∫_{Z_n^'}(∫_G1_Z(x,y)dμ(x))dμ(y). By Proposition 7.5 in <cit.>, ∫_{Z_n^'}(∫_G1_Z(x,y)dμ(x))dμ(y) =∫_G(∫_{Z_n^'}1_Z(x,y)dμ(y))dμ(x). Then, μ⊗μ(Z) =∑_{n∈ℕ}∫_G(∫_{Z_n^'}1_Z(x,y)dμ(y))dμ(x) =∑_{n∈ℕ}∫_Gμ(Z^{x,n})dμ(x), where Z^{x,n}={y∈ Z_n^':(x,y)∈ Z}. Note that Z^{x,n}⊆ Z_n^', so μ(Z^{x,n})≤μ(Z_n^') for every x∈ G; moreover, since μ(Z_y)≥1/n for every y∈ Z_n^' (Z_y is n-generic; see the computation below), we have μ(Z_n^')/n≤∫_Gμ(Z^{x,n})dμ(x)=∫_{Z_n^'}μ(Z_y)dμ(y)≤μ(Z_n^'). From the above (in)equalities, we have that μ⊗μ(Z)>0 ⇔ (∃ n∈ℕ)(∫_Gμ(Z^{x,n})dμ(x)>0) ⇔ (∃ n∈ℕ)(∃ x∈ G)(μ(Z^{x,n})>0) ⇔ (∃ n∈ℕ)(μ(Z_n^')>0). Thus, Z is generic in G× G if and only if there is n∈ℕ such that Z_n^' is generic in G, which is equivalent to Z_gen being generic in G by the first part of this proof. Observe that an analogous result can be proved in the same way for fibers of elements in the first component of G× G. Let G be a definably connected definably compact group definable in ℳ. If b∈ G is group-generic of G over A and a∈ G is group-generic of G over Ab, then (a,b)∈ G× G is group-generic of G× G over A. Let Z⊆ G× G be A-definable with (a,b)∈ Z. Since a∈ Z_b={ x∈ G:(x,b)∈ Z}, which is Ab-definable, Z_b is generic in G, so b∈ Z_gen=⋃_{n∈ℕ}Z_n, where Z_n={ y∈ G:Z_y is n-generic in G}. Therefore, there is n∈ℕ such that b∈ Z_n. As Z_n is A-definable and b is group-generic of G over A, Z_n is generic in G. By Lemma <ref>, this is equivalent to Z being generic in G× G. Then (a,b)∈ G× G is group-generic of G× G over A. Let X⊆ M^n be definable over A⊆ M. If b∈ X is generic of X over A and a∈ X is generic of X over Ab, then (a,b)∈ X× X is generic of X× X over A. It follows directly from the additivity property of the (acl-)dimension (Fact <ref>(ii)).
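The step used above — that a left invariant Keisler measure assigns measure at least 1/n to any n-generic definable set Y — is not spelled out in the source; the following one-line computation, our addition, justifies it. If G=⋃_{i<n}g_i· Y, then by finite subadditivity and left invariance,

\[
1=\mu(G)\le\sum_{i<n}\mu(g_i\cdot Y)=n\,\mu(Y),
\qquad\text{hence}\qquad \mu(Y)\ge\frac{1}{n}.
\]

In particular, in the proof of Lemma <ref>, μ(Z_y)≥1/n for every y∈ Z_n^', so μ(Z_n^')>0 implies ∫_{Z_n^'}μ(Z_y)dμ(y)≥μ(Z_n^')/n>0.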
Now, we will show that for a definably connected definably compact group G any definable generic set in G× G contains a definable generic box. Recall that every Hausdorff locally compact group G carries a natural measure called the Haar measure. A left Haar measure m on G is a measure on the Borel algebra (namely, the σ-algebra generated by all open sets of G) that is left invariant (i.e., m(gX)=m(X) for every g∈ G and Borel set X), finite on every compact subset of G, and positive on every non-empty open subset of G. By Haar's theorem, G has, up to a positive multiplicative constant, a unique nontrivial left Haar measure. If in addition G is compact, then its left Haar measures coincide with its right Haar measures, and since m(G)<∞, we can naturally choose a normalized Haar measure on G; namely, one with m(G)=1. Let G be a type-definable group. G is compactly dominated by (H,m,π), where H is a compact group, m is the unique normalized Haar measure on H and π:G→ H is a surjective group homomorphism, if for any definable Y⊆ G and for every c∈ H outside a set of m-measure zero, either π^-1(c)⊆ Y or π^-1(c)⊆ G∖ Y; namely, m({ c∈ H:π^-1(c)∩ Y≠∅ and π^-1(c)∩(G∖ Y)≠∅})=0. For what follows, we recall that given a ⋁-definable group 𝒰=⋃_{i∈ I}Z_i such that 𝒰^00 exists, we can endow the quotient group 𝒰/𝒰^00 with a topology, called the logic topology, as follows: let π:𝒰→𝒰/𝒰^00 be the canonical projection map and set C⊆𝒰/𝒰^00 to be closed if and only if for every i∈ I, π^-1(C)∩ Z_i is type-definable. Then, by <cit.>, these closed sets generate a locally compact topology on 𝒰/𝒰^00 making it into a Hausdorff topological group. <cit.> Let G be a definably connected definably compact group definable in ℳ; then G is compactly dominated by (G/G^00,m,π), where m is the Haar measure of the compact group G/G^00 with its logic topology, and π:G→ G/G^00 is the canonical surjective homomorphism. Let G be a definably connected definably compact group definable in ℳ. Let Z be a definable generic subset in G× G. Then there are definable sets A,B⊆ G generic in G such that A× B⊆ Z. By <cit.>, G is compactly dominated by (G/G^00,m,π), where m is the Haar measure of G/G^00 and π:G→ G/G^00 is the canonical surjective homomorphism. Likewise, G× G is compactly dominated by (G/G^00× G/G^00,m^',π̄), where m^' is the Haar measure of G/G^00× G/G^00 and π̄:G× G→ G/G^00× G/G^00 is given by π̄(x,y)=(π(x),π(y)). Note that on (G× G)/(G^00× G^00), which can be set-theoretically identified with G/G^00× G/G^00, the logic topology corresponds to the product topology on G/G^00× G/G^00. By <cit.>, there is g=(g_1,g_2)∈ G× G such that G^00× G^00⊆ Z·g. Since G^00× G^00 is a type-definable set and Z·g is definable, saturation yields definable sets A^*,B^*⊆ G such that G^00⊆ A^*,B^* and A^*× B^*⊆ Z·g. Let A=A^*· g_1^-1 and B=B^*· g_2^-1; then A× B⊆ Z, and A,B are both definable and generic in G. This ends the proof of the lemma.
§ A GROUP CONFIGURATION PROPOSITION FOR DEFINABLY COMPACT GROUPS From now until the end of this paper, assume that ℛ=(R,<,+,·) is a sufficiently saturated real closed field. For the proof of the next proposition we will adapt the notion of geometric structure and substructure given in <cit.>.
Therefore, for the real closed field ℛ, if ℒ={ +,·}, then the algebraic closure D=R(√(-1)) of R is a geometric structure, and R viewed as an ℒ-structure is a geometric substructure of D and therefore satisfies the following: * the algebraic closures of A⊆ R in ℛ in the model-theoretic and algebraic senses coincide, and for every A⊆ R the algebraic closure of A in R in the sense of the ℒ-structure R is precisely R∩acl^D(A), * R is definably closed in D; that is, if b∈ D and b∈acl^D(R), then b∈ R, and * for each ℒ-formula φ(x,y) there is some N<ω such that for any model R_1 of Th(R) and every tuple b from R_1, if φ(x,b) defines a finite subset of R_1, then it defines a set with at most N elements. We also adopt the notation of Hrushovski and Pillay in <cit.>, which we recall below. Let A⊆ R and a a finite tuple from R. tp(a/A) denotes tp^ℛ(a/A), and dcl(A) denotes dcl^ℛ(A). qftp(a/A) denotes qftp^D(a/A), that is, the set of quantifier-free ℒ_A-formulas satisfied by a in D. qfdcl(A) denotes qfdcl^D(A), that is, the set of elements of D definable over A by quantifier-free formulas; since R is definably closed in D, qfdcl^D(A)⊆ R. Note that since D has quantifier elimination, qfdcl^D(A)=dcl^D(A). Finally, by acl(A) we denote acl^D(A). Recall that a group H definable in an algebraically closed field D is D-definably connected if there is no proper nontrivial D-definable subgroup of H of finite index. Let D be the algebraic closure of ℛ. Let G be a definably compact definably connected group definable in ℛ. Then there are a finite subset A⊆ R over which G is defined, a D-definably connected group H definable in D over A, points a,b,c of G and points a^',b^',c^' of H(R) such that * a· b=c (in G) and a^'· b^'=c^' (in H), * acl(aA)=acl(a^'A), acl(bA)=acl(b^'A) and acl(cA)=acl(c^'A), * b is a group-generic point of G over A, a is a group-generic point of G over Ab, * a^' and b^' are generic points of H(R) over A and are independent from each other over A. Note that a^' and b^' are only generic and not group-generic. This proof is essentially the same as that of <cit.>; what is new is that we have to prove that the points a and b introduced below remain group-generic of G over each of the sets of parameters defined by Hrushovski and Pillay in their proof. To achieve this, we summarize without proof the unmodified parts of the proof of <cit.> and focus on the new parts. We refer the reader to <cit.> for the appropriate model-theoretic background. The first part of the proof of Proposition 3.1 in <cit.> is devoted to producing a set-up in which <cit.> can be applied, thereby obtaining the existence of the connected group H definable in D mentioned in the conclusion of Proposition 3.1. This is done through a series of lemmas and observations. Let us start with a finite subset A_0 of R over which G and its group operation are defined. Let dim(G)=n. By Proposition <ref>(i), there is b∈ G group-generic of G over A_0. By Prop. <ref>(ii) and saturation, there is a∈ G group-generic of G over A_0∪{b}; then a[A_0]b. Let c=a· b; then, by Claim <ref>, c is group-generic of G over A_0∪{b}. And also dim(a,b,c/A_0)=dim(a,b/A_0)=2n. In ℛ, c∈dcl(a,b,A_0) and b∈dcl(a,c,A_0). Thus we start with three group-generic points of G such that each two of them are independent (over some set of parameters) and define the third in ℛ.
As Hrushovski and Pillay point out in <cit.>, the key is to modify those points by points in R such that two of them define the third in the structure D; namely, dcl is to be replaced by qfdcl in order to lay the foundations to apply <cit.>. There are a finite subset A_2 of R, containing A_0, and tuples a_1, b_1, c_1 in R such that * b and a are group-generic of G over A_2 and A_2b, respectively, * acl(a,A_2)=acl(a_1,A_2), acl(b,A_2)=acl(b_1,A_2), acl(c,A_2)=acl(c_1,A_2), * b_1∈qfdcl(a_1,c_1,A_2), and c_1∈qfdcl(a_1,b_1,A_2). The only thing we need to prove here is the existence of elements x^', z_1 from R satisfying the same conditions as the x^' and z_1 of Hrushovski and Pillay in their original proof of <cit.>, and such that b and a are group-generic of G over A_2=A_0x^'z_1 and A_2b, respectively, because the rest of the proof is exactly the same as that of <cit.>. * There is a generic point x^' of G over A_0ab such that if A_1=A_0x^', then b is group-generic of G over A_1 and a is group-generic of G over A_1b. * Consider A_1 as in (i); then there is a generic point z_1 of G over A_1ab such that if A_2=A_1z_1, then b is group-generic of G over A_2 and a is group-generic of G over A_2b. (i) Since b is group-generic of G over A_0 and a is group-generic of G over A_0b, Corollary <ref>(vi) yields the existence of a generic element x^' of G over A_0ab such that b is group-generic of G over A_0x^' and a is group-generic of G over A_0x^'b. (ii) From the conclusion of (i) and Corollary <ref>(vi), we get the existence of a generic point z_1 of G over A_1ab such that b is group-generic of G over A_1z_1 and a is group-generic of G over A_1z_1b. This ends the proof of Lemma <ref>. Now, let a_1,b_1,c_1 and A_2 be as given by Lemma <ref>, and A=acl(A_2)∩ R. Therefore, a_1,b_1,c_1 each have dimension n over A. Since acl(A_2)∩ R=acl^ℛ(A_2), Remark <ref> implies that b and a are also group-generic of G over A and Ab, respectively. <cit.> yields that qftp(b_1,c_1/A,a_1) is stationary, and hence we can define the canonical base σ of qftp(b_1,c_1/A,a_1). Then σ∈qfdcl(A,a_1). Since R is definably closed in D, σ∈ R. Let r=qftp(σ/A), q_1=qftp(b_1/A), and q_2=qftp(c_1/A); then dim(q_1)=dim(q_2)=n. By Remark 3.4 in <cit.>, r is stationary, dim(r)=n, σ[A]b_1, σ[A]c_1, b_1∈qfdcl(σ,c_1,A), and c_1∈qfdcl(σ,b_1,A). Therefore, there is some A-definable partial function in the sense of D, say μ, such that c_1=μ(σ,b_1). And note that whenever σ^' realises r and b_1^' realises q_1 with σ^'[A]b_1^', then μ(σ^',b_1^') is well-defined, realises q_2 and is independent from each of σ^',b_1^' over A. Similarly, as b_1∈qfdcl(σ,c_1,A), there is some A-definable partial function in the sense of D, say υ, such that b_1=υ(σ,c_1). And note that whenever σ^' realises r and c_1^' realises q_2 with σ^'[A]c_1^', then υ(σ^',c_1^') is well-defined, realises q_1 and is independent from each of σ^',c_1^' over A. Now, let σ_1,σ_2 realise r with σ_1[A]σ_2 and σ_1,σ_2∈ R. Let b_2 realise q_1 with b_2[A]{σ_1,σ_2} and b_2∈ R. Then μ(σ_1,b_2) is defined, realises q_2, and is independent from σ_2 over A. Therefore, υ(σ_2,μ(σ_1,b_2)) is defined and realises q_1. Denote υ(σ_2,μ(σ_1,b_2)) by b_3. By Remark 3.6 in <cit.>, b_3∈qfdcl(σ_1,σ_2,b_2,A), b_2∈qfdcl(σ_1,σ_2,b_3,A), each of b_2,b_3 is independent from {σ_1,σ_2} over A, and qftp(b_2,b_3/σ_1,σ_2,A) is stationary. Then we can define the canonical base of qftp(b_2,b_3/σ_1,σ_2,A) and denote it by τ. Then τ∈qfdcl(σ_1,σ_2,A), so τ∈ R. Let s=qftp(τ/A). By <cit.>, dim(s)=n.
As was proved in Remark 3.4 in <cit.>, we have that b_3∈qfdcl(τ,b_2,A) and b_2∈qfdcl(τ,b_3,A); moreover, τ is independent from each of b_2,b_3 over A. Therefore, there is some A-definable partial function μ^' in the sense of D such that b_3=μ^'(τ,b_2), and whenever τ^' realises s and b_1^' realises q_1 with τ^'[A]b_1^', then μ^'(τ^',b_1^') is well-defined and realises q_1. At this stage Hrushovski and Pillay obtain two n-dimensional stationary types s and q_1 over A that satisfy the hypothesis of <cit.>. Moreover, the functions f and g, which are quantifier-free definable in D over A, in the hypothesis of Prop. 1.8.1 in <cit.> correspond to the function f in <cit.> and to μ^', respectively. Next comes the application of <cit.>. Let H, X, h_1, and h_2 be as given by Prop. 1.8.1 in <cit.>. We can assume that h_1,h_2 are both the identity function. Thus H is a connected group definable in D over A with generic type s, X is a set definable in D over A with generic type q_1, and there is a transitive group action Λ:H× X→ X:(h,x)↦Λ(h,x), which is also definable in D over A. Note that since τ∈ H(R), τ realises s, and ℛ viewed as an {+,·}-structure is a geometric substructure of D, we have dim(H(R))=n. Similarly, from b_1∈ X(R) and b_1 realising q_1, we have dim(X(R))=n. Moreover, we have the following: * for τ_1,τ_2 realising s with τ_1[A]τ_2, the product τ_1·τ_2 in the group H is exactly f(τ_1,τ_2), and * for any τ realising s and b realising q_1 with τ[A]b, Λ(τ,b) is exactly μ^'(τ,b). Since τ∈qfdcl(σ_1,σ_2,A), there is some A-definable partial function ξ in the sense of D such that τ=ξ(σ_1,σ_2). And note that whenever σ_1^',σ_2^' realise r with σ_1^'[A]σ_2^', then ξ(σ_1^',σ_2^') is well-defined, realises s and is independent from each of σ_1^',σ_2^' over A. Finally, in the last part of this proof we introduce some new sets of parameters, define the points a^',b^',c^' generic in H(R), and prove some interalgebraicity between them and the points a,b,c. There is a tuple σ_1 from R such that qftp(σ_1/A)=qftp(σ/A), σ_1[A]{ a,b}, and b and a are group-generic of G over acl(Aσ_1)∩ R and (acl(Aσ_1)∩ R)∪{ b}, respectively. Since r=qftp(σ/A) is stationary and every definable set in ℛ has generic points in R, there is a tuple σ_1^* from R such that qftp(σ_1^*/A)=qftp(σ/A) and σ_1^*[A]{σ,b_1,c_1}. Then, by Corollary <ref>(v), there is σ_1⊆ R such that tp(σ_1/A)=tp(σ_1^*/A), and b and a are group-generic of G over Aσ_1 and Aσ_1b, respectively. We will see that σ_1 satisfies the same properties as σ_1^*. First, since tp(σ_1/A)=tp(σ_1^*/A) and qftp(σ_1^*/A)=qftp(σ/A), we get qftp(σ_1/A)=qftp(σ/A). Second, from the construction throughout the proof of <cit.>, we have that {σ,b_1,c_1} and { a,b} are interalgebraic over A; hence σ_1[A]{σ,b_1,c_1} if and only if σ_1[A]{ a,b}. And note that since b[A]σ_1 and a[Ab]σ_1, we indeed have σ_1[A]{ a,b}. By Remark <ref>, if b and a are group-generic of G over Aσ_1 and Aσ_1b, respectively, then b and a are group-generic of G over acl(A,σ_1)∩ R and (acl(A,σ_1)∩ R)∪{ b}, respectively. This completes the proof of the claim.
Let σ_1 be as given by Claim <ref>. Then υ(σ_1,c_1) realises q_1, is in R, and υ(σ_1,c_1)[A]σ_1. Let c_2=υ(σ_1,c_1). Also, ξ(σ_1,σ) realises s, ξ(σ_1,σ)[A]σ_1, and ξ(σ_1,σ) is in H(R). Let τ=ξ(σ_1,σ), and A_1=acl(A,σ_1)∩ R. Then so far we have that: * b and a are group-generic of G over A_1 and A_1b, respectively, * acl(A_1,a)=acl(A_1,τ), acl(A_1,c)=acl(A_1,c_2), and * τ realises s, b_1 realises q_1, and c_2 realises q_1. We complete the proof of this proposition below. Since R is definably closed in D, for every τ^'∈ H(R) and every β∈ X(R), Λ(τ^',β)∈ X(R). Moreover, H(R) acts on X(R) by the group action Λ restricted to H(R)× X(R), which is definable in R over A. Let us define a relation ∼ on X(R). For β_1,β_2∈ X(R) we say β_1∼β_2 if and only if β_1 and β_2 are both in the same H(R)-orbit, namely if β_1∈Λ(H(R),β_2). Then ∼ is an equivalence relation on X(R) definable in R over A⊆ A_1.
Also,(τ_1,b_2,b_1/A_1,τ) =(b_2/A_1,τ)+(τ_1/A_1,τ,b_2) =n+(τ_1/A_1,τ,b_2). Hence, (τ_1/A_1,τ,b_2)=n.(iii) By (ii), τ_1[A_1,b_2]τ, and since b_2[A_1]τ, then τ[A_1]{ b_2,τ_1}. This finishes this proof. * acl(A_1,b_2,a)=acl(A_1,b_2,τ), * acl(A_1,b_2,b)=acl(A_1,b_2,τ_1), and * acl(A_1,b_2,c)=acl(A_1,b_2,τ·τ_1).(i) It follows from acl(A_1,a)=acl(A_1,τ).(ii) First, we will see that τ_1∈acl(A_1,b_2,b_1).(τ_1,b_2/A_1,b_1) =(b_2/A_1,b_1)+(τ_1/A_1,b_2,b_1) =n+(τ_1/A_1,b_2,b_1). Also,(τ_1,b_2/A_1,b_1) =(τ_1/A_1,b_1)+(b_2/A_1,b_1,τ_1) =n.Then, τ_1∈acl(A_1,b_2,b_1), and since b and b_1 are interalgebraic over A_1, so acl(A_1,b_2,τ_1)⊆acl(A_1,b_2,b).Additionally, b_1=Λ(τ_1,b_2), then b_1∈acl(A_1,b_2,τ_1). So acl(A_1,b_2,b)⊆acl(A_1,b_2,τ_1). This completes (ii).(iii) First, we will see that Λ(τ·τ_1,b_2)=c_2. From the properties of the action and the maps ν,μ, and ξ, we haveΛ(τ·τ_1,b_2) =Λ(τ·τ_1,Λ(τ_1^-1,b_1)) =Λ(τ,Λ(τ_1·τ_1^-1,b_1)) =Λ(τ,b_1), c_2=ν(σ_1,c_1) =ν(σ_1,μ(σ,b_1)) =Λ(ξ(σ_1,σ),b_1) =Λ(τ,b_1).Then, Λ(τ·τ_1,b_2)=c_2.We will see that τ·τ_1∈acl(A_1,b_2,c_2). By Fact <ref>, since τ is generic of H(R) over A_1τ_1 b_2, then τ·τ_1 is generic of H(R) over A_1τ_1 b_2.Now, since b_2[A_1]{ a,b} and acl(A_1,c)=acl(A_1,c_2), c_2[A_1]b_2, and thus we get (τ·τ_1,c_2/A_1,b_2) =(c_2/A_1,b_2)+(τ·τ_1/A_1,b_2,c_2) =n+(τ·τ_1/A_1,b_2,c_2).Also,(τ·τ_1,c_2/A_1,b_2) =(τ·τ_1/A_1,b_2)+(c_2/A_1,b_2,τ·τ_1) =n.Thus, τ·τ_1∈acl(A_1,b_2,c_2), and thus acl(A_1,b_2,τ·τ_1)⊆acl(A_1,b_2,c). Finally, as c_2=Λ(τ·τ_1,b_2) and acl(A_1,c)=acl(A_1,c_2), then acl(A_1,b_2,c)⊆acl(A_1,b_2,τ·τ_1), this ends this proof.Let A_2=acl(A_1,b_2)∩ R, a^'=τ, b^'=τ_1, and c^'=a^'· b^' the productof a^' by b^' in H. So far we have proved that: * By Remark <ref>, b and a are group-generic of G over A_2 and A_2b, respectively, * a^' and b^' are generic of H(R) over A_2 and a^'[A_2]b^', * acl(A_2,a)=acl(A_2,a^'), acl(A_2,b)=acl(A_2,b^'), and acl(A_2,c)=acl(A_2,c^').Finally, let A any finite subset of A_2 over which G and H are defined with the obtained properties. This concludes the proof of Proposition <ref>. § A LOCAL HOMOMORPHISM WITH GENERIC DOMAIN BETWEEN A SEMIALGEBRAICALLY COMPACT SEMIALGEBRAIC GROUP OVER R AND THE R-POINTS OF AN R-ALGEBRAIC GROUP In this section we prove the main theorem of this paper: Let G be a definably compact definably connected group definable in ℛ. Then there are * a connected R-algebraic group H such that (G)=(H(R))=(H),* a definable X⊆ G such that G^00⊆ X,* a definable homeomorphism ϕ:X⊆ G→ϕ(X)⊆ H(R) such that ϕ and ϕ^-1 are local homomorphisms.Denote by D the algebraic closure of R. By Proposition <ref>, there are a finite subset A⊆ R over which G is defined, a D-definably connected group H quantifier-free A-definable in D, a group-generic point b of G over A, a group-generic point a of G over Ab, thus c=a· b is also group-generic of G over Ab (this by Claim <ref>), as well as points a^',b^',c^'= a^'· b^'∈ H(R) generic in H(R) over A with the properties given there. Let k be the subfield generated by A.As every D-definably connected group definable over k in the algebraic closed field D is definably isomorphic over k to a connected k-algebraic group (<cit.>), we may assume that H is such algebraic group. Moreover, by the conditions of the points a,b,c, a^',b^',c^' of Proposition <ref>, the dimension of H as algebraic group is equal to the o-minimal dimensions (G) and (H(R)).Since a and a^' are interalgebraic over k in R and R is o-minimal, a and a^' are interdefinable over k in R, and similarly for b,b^' and c,c^'. 
From now on, we work in R, and by definable we will mean R-definable.

By <cit.> (which holds for R instead of ℝ), there are open k-definable neighbourhoods U,V and W in G of a,b,c, respectively, and U^', V^', W^' in H(R) of a^',b^',c^', respectively, and k-definable functions f,g, and h such that f(a)=a^' and f is a definable homeomorphism between U and U^', g(b)=b^' and g is a definable homeomorphism between V and V^', and h(c)=c^' and h is a definable homeomorphism between W and W^'.

Let

Z={(x,y)∈ G× G:x∈ U,y∈ V,x· y∈ W,f(x)· g(y)=h(x· y)}.

Since b is group-generic in G over k and a is group-generic in G over kb, Corollary <ref> yields that (a,b) is group-generic in G× G over k. Thus, as Z is k-definable and (a,b)∈ Z, Z is generic in G× G.

By Lemma <ref>, there are definable sets A,B generic in G such that A× B⊆ Z.

Let X,Y be definable sets generic in G. Then there is g∈ G such that X∩(Y· g^-1) is generic in G and (X∩(Y· g^-1))· g⊆ Y.

By genericity of X in G, there are g_1,…,g_m∈ G such that G=⋃_i≤ mX· g_i, hence Y=⋃_i≤ m(X· g_i)∩ Y=⋃_i≤ m(X∩(Y· g_i^-1))· g_i. Since Y is generic, there is i≤ m such that (X∩(Y· g_i^-1))· g_i is generic in G. Thus, with g=g_i we get the desired result.

Then, by the above claim applied to A^-1 and B, there is g∈ G such that (A^-1∩(B· g^-1))· g is generic and contained in B. Setting A^'=A∩(g· B^-1), A^' is generic in G and (A^')^-1· g⊆ B. By <cit.>, there is s∈ A^' such that s· G^00⊆ A^'. Since A^'=A∩(g· B^-1), s=g· b^-1 for some b∈ B. Let t=b. Note that s· t∈ A^'· B⊆ W. So far we have shown that there are generic sets A^' and B in G such that

(i) A^'× B⊆ Z, and
(ii) there is (s,t)∈ A^'× B such that s· G^00⊆ A^' and (A^')^-1·(s· t)⊆ B.

Let X=(A^')^-1· s; then G^00⊆ X. Finally, we will define the local homomorphism.

The definable homeomorphism

ϕ:X=(A^')^-1· s→(f(A^'))^-1· f(s)

defined by ϕ(x^-1· s)=f(x)^-1· f(s) for x∈ A^', and its inverse

ϕ^-1:(f(A^'))^-1· f(s)→ X,

which is given by ϕ^-1(y^-1· f(s))=(f^-1(y))^-1· s for y∈ f(A^'), are local homomorphisms between G and H(R)^0.

First, note the following.

* If (x_1^-1· s)·(x_2^-1· s)∈(A^')^-1· s, then

ϕ((x_1^-1· s)·(x_2^-1· s)) = ϕ(x_1^-1· s)·ϕ(x_2^-1· s) ⇔ f((x_1^-1· s· x_2^-1)^-1)^-1· f(s) = f(x_1)^-1· f(s)· f(x_2)^-1· f(s) ⇔ f(x_1)f((x_1^-1· s· x_2^-1)^-1)^-1f(x_2) = f(s).

* If (f(x_1)^-1· f(s))·(f(x_2)^-1· f(s))∈ (f(A^'))^-1· f(s), then

ϕ^-1((f(x_1)^-1· f(s))·(f(x_2)^-1· f(s))) = ϕ^-1(f(x_1)^-1· f(s))·ϕ^-1(f(x_2)^-1· f(s)) ⇔ (f^-1((f(x_1)^-1· f(s)· f(x_2)^-1)^-1))^-1· s = x_1^-1· s· x_2^-1· s ⇔ x_1·(f^-1((f(x_1)^-1· f(s)· f(x_2)^-1)^-1))^-1· x_2 = s.

It follows directly from the definitions of ϕ and ϕ^-1.

We will show in Claim <ref> that each of ϕ and ϕ^-1 satisfies one of the equivalent conditions formulated above. To prove Claim <ref> we will use the next technical fact. From now on, let s^'=f(s) and t^'=g(t).

(i) For every y_1∈ f(A^') and every y_2∈ g(B), f^-1(y_1)· g^-1(y_2)=h^-1(y_1· y_2).
(ii) (f(A^'))^-1·(s^'· t^')⊆ g(B).

(i) Let y_1∈ f(A^'), so there is x_1∈ A^' such that f(x_1)=y_1. Let y_2∈ g(B), so there is x_2∈ B such that g(x_2)=y_2. Since A^'× B⊆ Z, then y_1· y_2=f(x_1)· g(x_2)=h(x_1· x_2); therefore, h^-1(y_1· y_2)=x_1· x_2=f^-1(y_1)· g^-1(y_2).

(ii) Let x∈ A^'; then x^-1·(s· t)∈ B, so g(x^-1·(s· t))=(f(x))^-1· h(s· t)=(f(x))^-1·(s^'· t^'). Hence, (f(A^'))^-1·(s^'· t^')⊆ g(B).

Let x_1,x_2∈ A^'.

* Let z^-1=x_1^-1· s· x_2^-1. If z^-1∈(A^')^-1, then f(x_1)f(z)^-1f(x_2)=f(s)=s^'.
* Let w^-1=f(x_1)^-1· s^'· f(x_2)^-1.
If w^-1∈ f(A^')^-1, then x_1·(f^-1(w))^-1· x_2=s=f^-1(s^').

For the following, recall that (s,t)∈ A^'× B and A^'× B⊆ Z, so for every (x,y)∈ A^'× B, f(x)· g(y)=h(x· y).

(i)

f(s)· g(t) = h(s· t) = h((x_1· z^-1· x_2)· t) = h(x_1·(z^-1· x_2· t)), since z^-1· x_2· t=x_1^-1· s· t∈(A^')^-1·(s· t)⊆ B, = f(x_1)· g(z^-1· x_2· t) = f(x_1)· f(z)^-1· h(x_2· t) = f(x_1)· f(z)^-1· f(x_2)· g(t).

After cancelling g(t), the desired conclusion is obtained.

(ii) By Claim <ref>, we have the next equations.

f^-1(s^')· g^-1(t^') = h^-1(s^'· t^') = h^-1((f(x_1)· w^-1· f(x_2))· t^') = h^-1(f(x_1)·(w^-1· f(x_2)· t^')), since w^-1· f(x_2)· t^'=f(x_1)^-1· s^'· t^'∈(f(A^'))^-1·(s^'· t^')⊆ g(B), = f^-1(f(x_1))· g^-1(w^-1· f(x_2)· t^') = x_1·(f^-1(w))^-1· h^-1(f(x_2)· t^') = x_1·(f^-1(w))^-1· x_2· g^-1(t^').

After cancelling g^-1(t^'), we conclude the claim.

This finishes the proof of Proposition <ref>. Theorem <ref> is proved.

Let G be a definably connected definably compact group definable in a sufficiently saturated o-minimal expansion of a real closed field. Let X⊆ G be definable with G^00⊆ X. Then there are definable sets X_1,X_2 ⊆ G such that X_1 is definably simply connected, X_2 is definably connected and symmetric, and G^00⊆ X_1⊆ X_2⊆ X.

By saturation, we may write G^00=⋂_i∈ℕX_i=⋂_i∈ℕX_i· X_i^-1 for suitable definable sets X_i, and since G^00⊆ X, there is i∈ℕ such that G^00⊆ X_i· X_i^-1⊆ X. By the Cell Decomposition Theorem (<cit.>), X_i is a finite union of definably simply connected cells, so one of them has to be generic; call it C. By <cit.>, there is g∈ G such that G^00⊆ C· g⊆ C· C^-1⊆ X_i· X_i^-1⊆ X. Finally, let X_1=C· g and X_2=C· C^-1.

By Claim <ref>, the definable generic set X of Theorem <ref> can be taken either definably connected, symmetric, and with G^00⊆ X, or definably simply connected and with G^00⊆ X. Let X be the set in the conclusion of Theorem <ref>. As G^00⊆ X, there is a symmetric X^'⊆ X such that G^00⊆ X^'⊆ X^'· X^'⊆ X. Then the next result holds in ℛ:

Let G be a definably compact definably connected group definable in ℛ. Then there are

* a connected R-algebraic group H such that dim(G)=dim(H(R))=dim(H),
* definable sets X^',X⊆ G such that X^' is a symmetric neighborhood of the identity of G and generic in G, and X^'· X^'⊆ X,
* a definable homeomorphism ϕ:X⊆ G→ϕ(X)⊆ H(R) such that ϕ and ϕ^-1 are local homomorphisms.

By transferring Corollary <ref> from ℛ to any real closed field, we have that Corollary <ref> holds in any real closed field, not necessarily sufficiently saturated.

§ ACKNOWLEDGEMENTS

I would like to express my gratitude to the Universidad de los Andes, Colombia and the University of Haifa, Israel for supporting and funding my research as well as for their stimulating hospitality. I would also like to warmly thank my advisors, Alf Onshuus and Kobi Peterzil, for their support, generous ideas, and kindness during this work. I want to express my gratitude to Anand Pillay for suggesting to Kobi Peterzil the problem discussed in this work: the study of semialgebraic groups over a real closed field. Also, thanks to the Israel-US Binational Science Foundation for their support.

The main results of this paper have been presented in the winter of 2016 at the Logic Seminar of the Institut Camille Jordan, Université Claude Bernard - Lyon 1 (Lyon) and at the Oberseminar Modelltheorie of the Universität Konstanz (Konstanz).
http://arxiv.org/abs/1703.08606v2
{ "authors": [ "Eliana Barriga" ], "categories": [ "math.LO", "03C64, 20G20, 22E15, 03C68, 22B99" ], "primary_category": "math.LO", "published": "20170324213536", "title": "Definably compact groups definable in real closed fields. I" }
Topological photonics: from crystals to particles

Vincenzo Giannini

Department of Physics, Imperial College London
=================================================

Photonic crystal topological insulators host protected states at their edges. In the band structure these edge states appear as continuous bands crossing the photonic band gap. They allow light to propagate unidirectionally and without scattering. In practice it is essential to make devices relying on these effects as miniature as possible. Here we study all-dielectric photonic topological insulator particles (finite crystals) which do not require a magnetic field. In such particles the edge states' frequencies are discrete. Nevertheless, the discrete states support pseudospin-dependent unidirectional propagation. They allow light to bend around sharp corners similarly to the continuous edge states and act as topologically protected whispering gallery modes which can store and filter light as well as manipulate its angular momentum. In addition, they explain multiple experimental observations of discrete transmission peaks in photonic topological insulators.

The invention of photonic crystals has greatly enriched the field of photonics <cit.>. More recently this field benefited from applying the ideas of topology to photonic band structures <cit.>. This was triggered by the discovery <cit.> and experimental confirmation <cit.> of an electronic topological insulator (TI). The hallmark of TIs are protected states occurring at the boundary of the crystal (Fig. <ref>(a)). These states support unidirectional propagation and are immune to certain defects. Their analogues are now being explored in acoustics <cit.>, optical lattices <cit.>, plasmonics <cit.> and especially photonics <cit.>. While photonic crystal analogues of 3D TIs have only recently been proposed <cit.>, their 2D counterparts have already been realized experimentally <cit.>. Some of these realizations employ magnetic fields <cit.> (Quantum Hall effect) while others preserve the time-reversal symmetry <cit.> (Quantum Spin Hall effect). In the latter the angular momentum of light (spin) provides an additional degree of freedom <cit.>. The spin can be used within spin-chiral networks, spin-controlled gates and other devices for integrated optical circuits and quantum computing <cit.>. In contrast to conventional optical devices, these will be robust to manufacturing imperfections and able to take irregular shapes <cit.>.

Here we study a photonic TI particle (finite crystal). Because it is difficult to break time-reversal symmetry at optical frequencies <cit.>, the particle considered is made of dielectric rods <cit.> as proposed by Wu and Hu <cit.> and does not require a magnetic field. As opposed to a TI crystal (Fig. <ref>(a)) the particle has a discrete rather than continuous spectrum of edge states, as shown in Fig. <ref>(b). These states are a topologically protected version of whispering gallery modes <cit.>. They support unidirectional propagation determined by the pseudospin of light. Moreover, they exist for particles of different shapes and survive defects that do not strongly perturb the inherent crystal symmetry. Finally, they explain the observation of discrete peaks in transmission <cit.>, which occur because the photonic crystals studied in experiments are small and better described as particles.

Edge states of the photonic TI particle.
The parent photonic crystal studied is made from dielectric rods extending along the z-axis (Fig. <ref>(c)). Six of these rods constitute an artificial atom <cit.> as shown in Fig. <ref>(d). The crystal made from these atoms arranged on a triangular lattice possesses C_6 and time-reversal symmetries. Their combination allows the crystal to become a Quantum Spin Hall insulator for TM polarization <cit.>.

Each atom carries four orbitals labelled p_x,p_y,d_x^2-y^2,d_xy according to their E_z fields. The respective bands form a Dirac crossing at a planar interface of the photonic TI as illustrated in Fig. <ref>(a). A unitary transformation forms a new basis of p_±=p_x ± ip_y and d_±=d_x^2-y^2± id_xy. In this basis a photonic 4×4 k· p Hamiltonian can be found <cit.>, similar to the electronic Quantum Spin Hall Hamiltonian <cit.> (see Supplemental Material). We apply the method of Imura et al. <cit.> to the TI particle with a circular cross section lying in the x-y plane. The two solutions:

c_+,m=R(r)(e^-iϕ,1,0,0)^T e^imϕ/√(2)
c_-,m=R(r)(0,0,e^iϕ,1)^T e^imϕ/√(2)

are the coefficients in front of p_+,d_+,p_-,d_- in the field expansion (Eq. 10 of Supplementary Material). The azimuthal number, m=0, ± 1, ± 2..., ensures periodicity, c_±,m(2π+ϕ)=c_±,m(ϕ). The function R(r) peaks near the particle's boundary. The states have frequencies

w_±,m= w_0+C/r_P(1/2± m)

where w_0 is in the middle of the band gap, r_P is the particle's radius and C is some constant. Thus the states come in equally spaced pairs, with (+,m) and (-,-m) being degenerate. A similar degeneracy exists in electronic TI nanoparticles due to time-reversal symmetry of the Schrodinger equation <cit.>. However, photons are not fermions and require an additional symmetry to realize a TI. Here this is realized by the C_6 symmetry of the photonic crystal <cit.> as expanded below.
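As an illustration of this prediction, the short Python sketch below enumerates the discrete spectrum of Eq. (<ref>); the numerical values of w_0, C and r_P are placeholders chosen only for the example, not fitted parameters.

    import numpy as np

    def edge_state_frequencies(w0, C, rP, m_max=3):
        """Spectrum w_{+/-,m} = w0 + (C/rP)*(1/2 +/- m) of the analytic model.
        Returns a list of (branch, m, frequency) tuples."""
        states = []
        for m in range(-m_max, m_max + 1):
            for branch in (+1, -1):
                states.append((branch, m, w0 + (C / rP) * (0.5 + branch * m)))
        return states

    # The pairs (+, m) and (-, -m) are degenerate, the levels are equally
    # spaced, and the spacing C/rP shrinks as the particle radius rP grows.
    for branch, m, w in sorted(edge_state_frequencies(w0=0.5, C=0.05, rP=7.0),
                               key=lambda state: state[2]):
        print(f"branch {branch:+d}, m = {m:+d}: w = {w:.4f}")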
To verify the analytical model we have performed numerical simulations for a hexagonal particle (close to the circular shape). The constituent rods are made of silicon (ϵ=11.7) with air in-between them, as in <cit.>. We simulate the hexagonal TI particle of radius r_P/a=7 (169 atoms, a being the separation between them). The particle is encased in a trivial matrix, as in Fig. <ref>(b). The edge states appear at the boundary between the topological (r_a> a/3) and trivial atoms (r_a< a/3) as illustrated in Fig. <ref>(a). Their spectrum obtained with frequency domain simulations using the MPB package <cit.> is shown in Fig. <ref>(b). There are twelve states, half of which are shown in Fig. <ref>(c). For the simulation details and the six E_z fields not shown, see the Supplementary Material. Each standing wave state is made of two counter-propagating waves. These two waves have opposite pseudospins (H_x± i H_y) required by the time-reversal symmetry of Maxwell's equations, and each supports unidirectional propagation of light. The standing waves themselves come in pairs with regular spacing, agreeing with Eq. <ref>. The degeneracy is not exact because the particle is not circular. Also, the finite particle studied means the C_6 symmetry is only approximate due to the mismatch between the TI and trivial regions. The atoms have similar radii so the effect is weak (further decreasing with the size of the particle). In contrast, for electronic TI nanoparticles the necessary symmetry (time-reversal) is always present. There the states retain their degeneracy but get pushed out of the bulk gap as the size decreases <cit.>.

For the photonic TI particle, Eq. <ref> also implies that larger particles have more edge states. As the size increases, additional pairs of edge states emerge from the bulk bands. Their frequency scales with 1/r_P (Fig. <ref>(d)) in agreement with Eq. <ref>. For very small sizes there is not enough bulk to support the topological states. This effect was also predicted for ribbons of the photonic TI considered <cit.> and experimentally observed for slabs of electronic TIs <cit.>. It can be ascribed to the overlap of wave functions from opposite sides; the states themselves decay exponentially away from the boundary as seen in Fig. <ref>(e). Finally, the numerical results agree with the analytical model in terms of the states' symmetries. To obtain a state that has no nodes in the E_z field we need to have the coefficients e^∓ i ϕ (e^∓ 2 i ϕ) in front of p_± (d_±) such that the field rotates as p_x cosϕ+p_y sinϕ (d_x^2-y^2cos 2ϕ +d_xysin 2ϕ). This ensures that the 'atomic' orbitals merely rotate as we go around the circumference, and occurs for the states (+,m=-2) and (-,m=2). Both states have frequency w=w_0-3C/2r_P according to Eq. <ref>. The pair of states immediately above (below) have an additional factor of e^± iϕ and should possess one node, which is indeed observed in Fig. <ref>(c). We stress that the degeneracy here is a result of the photonic crystal symmetry, cf. electronic TIs. Each edge state (standing wave) is made of two time-reversed waves as required by Maxwell's equations. These edge states also occur for particles of other shapes.

Photonic TI particles of different shapes. The edge states arise due to bulk band structure and should be present for particles of any shape that are large enough and preserve relevant symmetries. Before proceeding to investigate the influence of shape, however, we consider a cavity, i.e. a trivial hexagonal particle inside the topological matrix. The analytical solution here is the same except for the R(r) part, which now extends outward from the cavity (see Supplementary Material). This agrees with Fig. <ref>(a), where the cavity states are practically identical to those of the particle. The spectrum is also very similar apart from a rigid shift in the frequencies of the states; Fig. <ref>(b) shows the hexagonal particle with the same number of atoms (r_P/a=5, 91 atoms). Note how the degeneracy of the states is worse than in Fig. <ref>(c), where the bigger particle has 'more' of the C_6 symmetry. To explore the shape effects we have computed the spectra for particles of triangular and rhombic shapes (66 and 64 atoms), as presented in Fig. <ref>(c) and (d). These contrast sharply with the case of a trivial particle inside a trivial matrix shown in Fig. <ref>(a). The TI particles have states that occur in pairs and are able to bend around sharp corners. The latter feature has also been observed experimentally <cit.>. This insensitivity to the shape confirms the topological origin. In addition, such states should also be immune against certain defects.

Photonic TI particle with a defect. The edge states of electronic topological insulators are protected against defects and impurities which are non-magnetic. The latter is important because magnetic disorder breaks the required (time-reversal) symmetry. For the photonic TI crystal here this can be achieved by breaking the C_6 symmetry of the lattice <cit.>. A strong symmetry breaking occurs when we completely remove three atoms from the TI particle as shown in Fig. <ref>(b). As expected, this modifies the spectrum and localizes the edge states.
In contrast, replacing the topological atoms with trivial ones is a relatively weak perturbation of the C_6 symmetry which hardly affects the spectrum and the edge states, as seen in Fig. <ref>(c). This suggests that the states survive weak breaking of the C_6 crystal symmetry, such as that due to slightly different radii of the atoms in the TI particle and the trivial matrix. Finally, in practice the constituent rods can be displaced from their ideal positions. We have simulated this case by displacing the rods in each topological atom randomly by 10%. The results in Fig. <ref>(d) show that the states are weakly affected. They suggest that the states can be realized and will support unidirectional propagation.

Unidirectional propagation of light. In electronic TIs the time-reversal symmetry restricts the edge states with opposite spins to travel in opposite directions. Analogously, in the photonic TI considered, TM modes can have opposite pseudospins (H_x± iH_y). For a slab it has been shown <cit.> that continuous edge states inside the band gap support propagation in either one or the other direction depending on the pseudospin. To show this for the discrete states in the particle we used the MEEP finite-difference time-domain package <cit.>. The TI particle is surrounded by the trivial matrix and the simulation cell walls are covered with a perfectly matched layer (see Supplementary Material). We excite the edge states with two TM-polarized dipoles 90° out of phase with each other (H_x ± iH_y). They emit a Gaussian pulse whose pseudospin determines its direction of propagation according to Fig. <ref>(a). The power transmission through a detector (normalized by the Q-factor) in Fig. <ref>(b) shows unidirectional propagation. The transmission spectrum confirms that each edge state (standing wave) consists of two counter-propagating waves, as required by the time-reversal symmetry of Maxwell's equations. Either one or the other of these waves is excited depending on the pseudospin of the incident light. In addition, we considered transmission for the particle with positional disorder as previously in Fig. <ref>(d). The resultant spectrum still shows unidirectional transmission for each given pseudospin, as seen in Fig. <ref>(b). MEEP also allows one to obtain complex frequencies of the edge states. Their real parts shown in Fig. <ref>(c) agree well with the frequency domain simulations. The quality factor is Q=O(10^3) already for a trivial coating that is several atoms thick. Thinner layers lead to more leakage as well as C_6 symmetry breaking and hence backscattering, as is evident from Fig. <ref>(d). Finally, we note that the unidirectional propagation for the photonic TI considered has been recently observed <cit.>. In fact, the sample used in <cit.> is comparable in size to the TI particles studied here. Therefore it should possess a discrete rather than continuous spectrum of edge states. We believe that this is the reason why discrete transmission peaks have been observed.

Conclusions. To summarize, we have investigated for the first time discrete edge states that emerge in a photonic TI particle. This particle made of dielectric rods is a particular realization of a TI which does not require magnetic fields. The particle's spectrum contains edge states that occur in pairs. The spacing between the pairs decreases with the particle's size, tending to the continuum limit. Each edge state, in turn, is made up of two counter-propagating waves with opposite pseudospins.
These edge states are insensitive to the particle's shape, as explicitly illustrated for hexagonal, rhombic and triangular particles. They are robust against certain defects and disappear only for very small particles. The states support propagation in one direction along the edge, which can be switched with pseudospin. In practice, such a microcavity can be used to manipulate photons. We expect interesting effects in arrays of such particles. Moreover, the edge states of TI particles will have a peculiar effect on the photonic local density of states and the radiation of chiral molecules.

[Yablonovitch(1987)] E. Yablonovitch, Phys. Rev. Lett. 58, 2059 (1987).
[John(1987)] S. John, Phys. Rev. Lett. 58, 2486 (1987).
[Haldane and Raghu(2008)] F. D. M. Haldane and S. Raghu, Phys. Rev. Lett. 100, 013904 (2008).
[Kane and Mele(2005)] C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 146802 (2005).
[König et al.(2007)] M. König, S. Wiedmann, C. Brüne, A. Roth, H. Buhmann, L. W. Molenkamp, X.-L. Qi, and S.-C. Zhang, Science 318, 766 (2007).
[Hsieh et al.(2008)] D. Hsieh, D. Qian, L. Wray, Y. Xia, Y. S. Hor, R. J. Cava, and M. Z. Hasan, Nature 452, 970 (2008).
[Prodan and Prodan(2009)] E. Prodan and C. Prodan, Phys. Rev. Lett. 103, 248101 (2009).
[Khanikaev et al.(2015)] A. B. Khanikaev, R. Fleury, S. H. Mousavi, and A. Alù, Nat. Commun. 6, 8260 (2015).
[He et al.(2016)] C. He, X. Ni, H. Ge, X.-C. Sun, Y.-B. Chen, M.-H. Lu, X.-P. Liu, and Y.-F. Chen, Nat. Phys. 12, 1124 (2016).
[Goldman et al.(2014)] N. Goldman, G. Juzeliunas, P. Ohberg, and I. B. Spielman, Rep. Prog. Phys. 77, 126401 (2014).
[Yuen-Zhou et al.(2016)] J. Yuen-Zhou, S. K. Saikin, T. Zhu, M. C. Onbasli, C. A. Ross, V. Bulovic, and M. A. Baldo, Nat. Commun. 7, 11783 (2016).
[Jin et al.(2017)] D. Jin, T. Christensen, M. Soljačić, N. X. Fang, L. Lu, and X. Zhang, arXiv:1702.02553 (2017).
[Pan et al.(2017)] D. Pan, R. Yu, H. Xu, and F. J. G. de Abajo, arXiv:1702.00036 (2017).
[Nalitov et al.(2015)] A. Nalitov, D. Solnyshkov, and G. Malpuech, Phys. Rev. Lett. 114, 116401 (2015).
[Lu et al.(2014)] L. Lu, J. D. Joannopoulos, and M. Soljačić, Nature Photon. 8, 821 (2014).
[Lu et al.(2016a)] L. Lu, C. Fang, L. Fu, S. G. Johnson, J. D. Joannopoulos, and M. Soljačić, Nat. Phys. 12, 337 (2016).
[Slobozhanyuk et al.(2016)] A. Slobozhanyuk, S. H. Mousavi, X. Ni, D. Smirnova, Y. S. Kivshar, and A. B. Khanikaev, Nature Photon. (2016), doi:10.1038/nphoton.2016.253.
[Wang et al.(2009)] Z. Wang, Y. Chong, J. D. Joannopoulos, and M. Soljačić, Nature 461, 772 (2009).
[Rechtsman et al.(2013)] M. C. Rechtsman, J. M. Zeuner, Y. Plotnik, Y. Lumer, D. Podolsky, F. Dreisow, S. Nolte, M. Segev, and A. Szameit, Nature 496, 196 (2013).
[Skirlo et al.(2015)] S. A. Skirlo, L. Lu, Y. Igarashi, Q. Yan, J. Joannopoulos, and M. Soljačić, Phys. Rev. Lett. 115, 253901 (2015).
[Cheng et al.(2016)] X. Cheng, C. Jouvaud, X. Ni, S. H. Mousavi, A. Z. Genack, and A. B. Khanikaev, Nat. Mater. 15, 542 (2016).
[Yang et al.(2016)] Y. Yang, Y. F. Xu, T. Xu, H.-X. Wang, J.-H. Jiang, X. Hu, and Z. H. Hang, arXiv:1610.07780 (2016).
[Khanikaev et al.(2013)] A. B. Khanikaev, S. H. Mousavi, W.-K. Tse, M. Kargarian, A. H. MacDonald, and G. Shvets, Nat. Mater. 12, 233 (2013).
[Shitrit et al.(2013)] N. Shitrit, I. Yulevich, E. Maguid, D. Ozeri, D. Veksler, V. Kleiner, and E. Hasman, Science 340, 724 (2013).
[Hafezi et al.(2013)] M. Hafezi, S. Mittal, J. Fan, A. Migdall, and J. M. Taylor, Nature Photon. 7, 1001 (2013).
[Dong et al.(2017)] J.-W. Dong, X.-D. Chen, H. Zhu, Y. Wang, and X. Zhang, Nat. Mater. 16, 298 (2017).
[Bliokh et al.(2015)] K. Y. Bliokh, F. J. Rodríguez-Fortuño, F. Nori, and A. V. Zayats, Nature Photon. 9, 796 (2015).
[Lu et al.(2016b)] L. Lu, J. D. Joannopoulos, and M. Soljačić, Nat. Phys. 12, 626 (2016).
[Wu and Hu(2015)] L.-H. Wu and X. Hu, Phys. Rev. Lett. 114, 223901 (2015).
[Ma and Shvets(2016)] T. Ma and G. Shvets, New J. Phys. 18, 025012 (2016).
[Milićević et al.(2017)] M. Milićević, T. Ozawa, G. Montambaux, I. Carusotto, E. Galopin, A. Lemaître, L. Le Gratiet, I. Sagnes, J. Bloch, and A. Amo, Phys. Rev. Lett. 118, 107403 (2017).
[Yang et al.(2015)] S. Yang, Y. Wang, and H. Sun, Adv. Opt. Mater. 3, 1136 (2015).
[Bernevig et al.(2006)] B. A. Bernevig, T. L. Hughes, and S.-C. Zhang, Science 314, 1757 (2006).
[Imura et al.(2011)] K.-I. Imura, Y. Takane, and A. Tanaka, Phys. Rev. B 84, 195406 (2011).
[Imura et al.(2012)] K.-I. Imura, Y. Yoshimura, Y. Takane, and T. Fukui, Phys. Rev. B 86, 235119 (2012).
[Siroki et al.(2016)] G. Siroki, D. K. K. Lee, P. D. Haynes, and V. Giannini, Nat. Commun. 7, 12375 (2016).
[Johnson and Joannopoulos(2001)] S. G. Johnson and J. D. Joannopoulos, Opt. Express 8, 173 (2001).
[Zhang et al.(2010)] Y. Zhang, K. He, C.-Z. Chang, C.-L. Song, L.-L. Wang, X. Chen, J.-F. Jia, Z. Fang, X. Dai, W.-Y. Shan, S.-Q. Shen, Q. Niu, X.-L. Qi, S.-C. Zhang, X.-C. Ma, and Q.-K. Xue, Nat. Phys. 6, 584 (2010).
[Oskooi et al.(2010)] A. F. Oskooi, D. Roundy, M. Ibanescu, P. Bermel, J. D. Joannopoulos, and S. G. Johnson, Comput. Phys. Commun. 181, 687 (2010).
http://arxiv.org/abs/1703.09248v2
{ "authors": [ "Gleb Siroki", "Paloma Arroyo Huidobro", "Vincenzo Giannini" ], "categories": [ "physics.optics" ], "primary_category": "physics.optics", "published": "20170327181656", "title": "Topological photonics: from crystals to particles" }
Domination number and minimum dominating sets in pseudofractal scale-free web and Sierpiński graph
==================================================================================================

Precipitation is a large-scale, spatio-temporally heterogeneous phenomenon, with frequent anomalies exhibiting unusually high or low values. We use Markov Random Fields (MRFs) to detect extended anomalies in gridded annual rainfall data across India from 1901-2005 that are spatio-temporally coherent while permitting flexibility in size. MRFs are undirected graphical models where each node is associated with a {location,year} pair, with edges connecting nodes representing adjacent locations or years. Some nodes represent observations of precipitation, while the rest represent unobserved (latent) states that can take one of three values: high/low/normal. The MRF represents a probability distribution over the variables, using node potential and edge potential functions defined on nodes and edges of the graph. Optimal values of latent state variables are estimated by maximizing their posterior probability using Gibbs sampling, conditioned on the observations. These latent states are used to identify spatio-temporally extended rainfall anomalies, both positive and negative. Edge potentials enforce spatial and temporal coherence, and can adjust the competing influences of these types of coherence. We study spatio-temporal properties of rainfall anomalies discovered by this method, using suitable measures. We also study the relations between spatio-temporal sizes and intensities of anomalies. Identification of such rainfall anomalies can help in monitoring and studying floods and droughts in India. Additionally, properties of anomalies learnt from this approach could present tests of regional-scale rainfall simulations by climate models and statistical simulators.

§ INTRODUCTION

In many parts of the world, such as India, rainfall plays an important role in the economy and the well-being of millions of people. Consequently, excess or deficient rainfall can have very significant effects, especially if it is spread over a large region, or a long time. It is known that low annual rainfall has an adverse effect on India's GDP <cit.>. Hence, identification of such spatio-temporally extended events of excess or deficient rainfall is important in both observed historical data and simulations of future scenarios by climate models. In this work, we call such events “anomalies".

In climate science, the “anomaly" of a climatic variable (such as precipitation) at a particular location and time is defined quantitatively as the amount of deviation from its climatological value, averaged over many years. But in this work, we will use the term “anomaly" to indicate deviation not only from the climatological value at individual locations but also with respect to spatial/temporal neighbors. Instead of individual locations such as grid-points in individual years, we consider spatially or temporally extended anomalies, with flexible spatio-temporal sizes. Anomalies can occur at different spatial and temporal scales, and their occurrence is heterogeneous (the statistics are location-dependent) and anisotropic (not uniform in all directions). The more consequential anomalies are the ones with significant spatiotemporal extent, and therefore it is important to identify them.
Identification of such rainfall anomalies is very useful in monitoring floods <cit.> and droughts <cit.> in India, as it tells us which regions received excess or deficient rainfall in any given year. Of course, floods and droughts can occur at sub-annual time scales, and any approach to detection of such anomalies should be general enough to work at any time scale of interest. Another factor is that with climate change, the frequency of rainfall extremes may increase, along with changes in the spatial pattern of rainfall <cit.>. To understand past and future changes, scientists rely on climate models like general circulation models (GCMs) which simulate global climatic variables including rainfall. Algorithms are necessary for analyzing large-scale simulations as well as observational data procured from sensors, and such analyses should include detecting and summarizing statistics of rainfall anomalies (<cit.>). Such analysis cannot be done manually because of the large and growing volume of data and simulation results, raising the need for automated procedures.

Automating anomaly detection is challenging, because anomalies are inherently subjective, depending on definition and detection threshold <cit.>. Anomaly detection in general, and spatiotemporal anomaly detection in particular, are considered important research areas in Data Science <cit.>. Anomalies can be both positive and negative depending on the sign of deviation of rainfall volume from the long-term mean. However, the magnitude of deviation to be considered an “anomaly" is a design choice. The simplest approach to anomaly detection is based on a predefined threshold, relative to statistics of the corresponding variable at individual spatial locations. With rainfall, one might consider the time-series of annual mean rainfall at each grid location, estimate its mean and variance, and identify years departing significantly from the mean. However, accounting for effects of spatiotemporal neighbours is important for detection <cit.> of extended anomalies, and the aforementioned location-wise threshold-based approach cannot do this. Neither is it suitable to establish fixed thresholds for the spatio-temporal sizes defining anomalies, as these have a wide range of sizes. Several spatially separated anomalies are present within the same year, some of which may be of different signs. Basically, we need to make a compromise between the magnitude and the spatio-temporal extent.

Anomalies can occur at different spatial scales, ranging from that of the entire domain, in this case the country scale, down to grid levels. An anomaly of all-India rainfall is likely to be manifested through several smaller anomalies of the same sign. For example, if the entire country has a negative anomaly in a given year, then several grid-locations within the country are likely to be parts of negative anomalies during the same year. The Indian Meteorological Department (IMD) declares years to be “excess rainfall" (positive anomaly), “deficient rainfall" (negative anomaly) or normal, by comparing the aggregate all-India annual rainfall against thresholds. In some applications, whether or not an anomaly is identified at a large spatial scale should also depend on the presence/absence of anomalies at smaller scales, and the methods illustrated here facilitate this.
For example, a year with widespread drought and many grid-locations under negative anomalies could be considered a year of negative anomaly at the all-India scale, even if all-India rainfall were not below the threshold.

Furthermore, the anomaly detection problem is broader, especially when the anomaly is conceived as a conceptual or abstract quantity represented by a state variable that cannot be directly observed or measured and must be inferred indirectly. Here we consider anomalies in rainfall as a latent variable, as often done in statistical modelling <cit.>, including spatiotemporal modelling <cit.>. Such latent (i.e. unobserved) states are best estimated through probabilistic methods <cit.>. We associate a latent state variable with each spatiotemporal location, i.e. each combination of grid-point and year. A graph is constructed with all these spatio-temporal variables as nodes, where pairs of nodes corresponding to neighboring locations are connected by edges. An anomaly is a connected component of such a graph, such that at each node in the component the associated latent variables have equal value. The approach of using local wet/dry conditions along with their spatio-temporal extents for monitoring floods and droughts has also been attempted earlier <cit.>, though using the standardized precipitation index instead of discrete variables.

In this work we model these latent variables to be spatiotemporally coherent through the parameters of a Markov Random Field. We estimate these latent variables as the maximum a-posteriori (MAP) solution of a Markov Random Field (MRF). MRFs are undirected graphical models satisfying Markov properties, and are generally used to model joint distributions of several variables <cit.>. Given a likelihood model of the data conditional on the states of this graph, the posterior density and correspondingly the MAP solution of these variables can be estimated. Each latent state node can take one of three values: 1 (positive anomaly), 2 (negative anomaly) or 3 (normal). We also have additional nodes for all-India states each year, which are connected to the local nodes to account for the interaction between spatial scales. MRFs are defined using “potential functions" for nodes and edges of the graph, which encode interactions between neighbouring variables. In our application, these functions influence the spatial and temporal coherence of the state variables. The local Markov properties inherent to MRFs imply that, for any node, its value is conditionally independent of all other nodes except its neighbours. To identify the MAP configuration of the latent states we use Gibbs sampling. Based on the inferred latent states we identify spatiotemporally coherent anomalies, and quantify their properties.

Effects of enforcing spatial and temporal coherence on the resulting anomalies are examined, and sensitivity to parameters is studied. We compare the spatial extents of positive and negative anomalies. There is an inherent trade-off between spatial and temporal extents of anomalies in any procedure, originating in the values of the parameters enforcing spatial and temporal coherence. Furthermore, even for any fixed set of parameters, there is variability in the spatial and temporal sizes of the anomalies detected across the spatio-temporal domain. Both of these effects are examined. Finally, we also study the intensity of anomalies, i.e. the degree by which the annual rainfall in a set of locations suffering an anomaly differs from the long-term rainfall there.
We also study how this intensity is related to the spatial and temporal extents of the anomalies. Somewhat similar properties of droughts have been studied earlier <cit.>, with an aim to filter out minor droughts. We illustrate our analysis with case studies of some spatially and temporally extended anomalies that our method detected.

The contribution of this paper is to study a new problem: detection of spatio-temporally extended rainfall anomalies. We cast the problem into the anomaly detection framework of Data Mining, and use a probabilistic approach based on mixture models and latent variables. We use Markov Random Fields for spatio-temporal coherence. A major advantage of this approach is that no thresholds are needed, and anomalies of arbitrary shapes and sizes can be detected. Also, we consider the interaction between different spatial scales. The properties of the model are studied extensively.

§ METHODOLOGY

§.§ Definitions and Notation

We consider S locations and T years, and spatiotemporal observations Y_st of a geophysical variable such as annual-mean rainfall. Here s indexes location and t indexes time, and Y_st signifies the rainfall received by location s at time t. Unlike time, 2-dimensional spatial locations have no natural ordering, so we order the spatial locations based on their longitude first, latitude next. Each location in the 2-dimensional spatial grid system has 8 neighbors. For each location s, we denote by NB(s) the set of its neighboring locations, according to the grid system. Thus, for a location having coordinates (lat,lon), its neighbors will be {(lat+i,lon+j)}, where i,j ∈{-1,0,1} and (i,j)≠(0,0). This particular way of ordering and indexing the spatial locations has no bearing on the analyses undertaken below, and any other indexing scheme is equally compatible with it. This is because the indexing does not indicate any sequence of the spatial locations; it just identifies them. The important thing in our analysis is the neighborhood structure, which is based on the spatial locations of the grids and independent of the indexing scheme.

Let us consider a graph G, where for each spatio-temporal location (s,t) we have two nodes, one corresponding to Z_st and one to Y_st. Z_st is a discrete variable which indicates the state of rainfall at location s, time t. While Y is known from the dataset, Z is unknown and must be estimated. We put edges between pairs of nodes corresponding to Z_st and Z_s't for each year t if s and s' are neighbouring grid-points, i.e. s' ∈ NB(s). We call such edges spatial edges. Again, we put edges between pairs of nodes corresponding to Z_st and Z_s,t+1 for each location s, and such edges are called temporal edges. Finally, for each spatio-temporal pair (s,t) we have an edge between Z_st and Y_st, and we call such edges data edges. Thus a spatial edge connects Z-nodes associated with neighboring locations and the same time, a temporal edge connects Z-nodes associated with the same location but adjacent times, and a data edge connects the Z-node and Y-node at the same location and time. Thus, we have 2ST nodes, ST data edges, S(T-1) temporal edges, and (T/2)∑_s|NB(s)| spatial edges (counting each undirected edge once). This bookkeeping is illustrated in the sketch below.
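The following minimal Python sketch constructs NB(s) and the three edge sets for a small regular grid; the grid dimensions and variable names are our own choices for illustration.

    import itertools

    def build_graph(n_lat, n_lon, T):
        """Build node and edge sets for an n_lat x n_lon grid over T years.
        Sites s are (lat, lon) pairs; each undirected spatial edge is
        stored once per year."""
        sites = list(itertools.product(range(n_lat), range(n_lon)))

        def NB(s):
            lat, lon = s
            return [(lat + i, lon + j)
                    for i in (-1, 0, 1) for j in (-1, 0, 1)
                    if (i, j) != (0, 0)
                    and 0 <= lat + i < n_lat and 0 <= lon + j < n_lon]

        spatial = {(min(s, sp), max(s, sp), t)                  # Z_{s,t} -- Z_{s',t}
                   for s in sites for sp in NB(s) for t in range(T)}
        temporal = {(s, t) for s in sites for t in range(T - 1)}  # Z_{s,t} -- Z_{s,t+1}
        data = {(s, t) for s in sites for t in range(T)}          # Z_{s,t} -- Y_{s,t}
        return sites, NB, spatial, temporal, data

    sites, NB, spatial, temporal, data = build_graph(4, 5, 10)
    S, T = len(sites), 10
    print(2 * S * T, len(data), len(temporal))  # 2ST nodes, ST data, S(T-1) temporal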
We consider each location s to be in one of three possible states in any year t: high (1), low (2) or normal (3), which is encoded by Z. This follows the conventional classification of rainfall-years as excess rainfall, deficient rainfall, or normal, at each location. The state is represented by a latent discrete variable Z_st taking one of 3 values. In such a graph, an anomaly is a connected component of the Z-nodes corresponding to spatio-temporal locations, such that all of the nodes in the component have the same value of Z: either 1 (positive anomaly) or 2 (negative anomaly). A goal of anomaly detection is to estimate these latent variables, from which the connected components can be computed and thus spatio-temporally coherent anomalies identified <cit.>.

§.§ Location-wise Analysis (LWA)

A naive solution to anomaly detection is to treat the time-series at each location individually. For each time-series we compute the mean μ_s and standard deviation σ_s. We then set Z_st=1 (high) for those years where Y_st≥ HIGH_s, Z_st=2 (low) for those years where Y_st≤ LOW_s, and Z_st=3 (normal) for all other years, where HIGH_s and LOW_s are thresholds specific to location s. We call this method Location-Wise Analysis (LWA), since it treats each location independently without considering the state of its neighbours. The corresponding assignments to the latent variables by this method are denoted as Z0. This approach suffers from two major limitations. Firstly, it is not clear how to choose the thresholds, and results vary strongly with the choice. The histogram of annual rainfall in most locations resembles the bell-shaped curve of a Gaussian distribution. So, it is reasonable to set HIGH_s=μ_s+σ_s and LOW_s=μ_s-σ_s. Through the rest of this paper, we will use this choice, sketched in code below.
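A minimal NumPy sketch of this baseline is as follows; the array names and the synthetic example are ours, chosen only for illustration.

    import numpy as np

    def location_wise_analysis(Y):
        """LWA baseline: Y has shape (S, T). Returns Z0 in {1, 2, 3} using
        the thresholds HIGH_s = mu_s + sigma_s and LOW_s = mu_s - sigma_s."""
        mu = Y.mean(axis=1, keepdims=True)
        sigma = Y.std(axis=1, keepdims=True)
        Z0 = np.full(Y.shape, 3, dtype=int)   # normal (3) by default
        Z0[Y >= mu + sigma] = 1               # high
        Z0[Y <= mu - sigma] = 2               # low
        return Z0

    Y = np.random.gamma(shape=2.0, scale=600.0, size=(50, 105))  # synthetic rainfall
    Z0 = location_wise_analysis(Y)

These Z0 assignments are also what we later use to initialize the Gibbs sampler and the state-specific Gaussian parameters.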
However, an approach that circumvents the need to specify such thresholds is a better solution. The second major limitation of this approach is of course its neglect of spatial coherence in the latent variable. For example, an individual location may be in a certain mode while all its neighbours are in a different mode in the same year. Isolated anomalies need not be spurious, but spatially or temporally extended anomalies are more consequential. An alternate approach might be to undertake location-wise analysis after having smoothed the data onto a coarser grid. This enlarges the scales of interest, but involves loss of spatial information. It also does not permit anomalies at multiple scales, or naturally accommodate spatial heterogeneity or anisotropy in anomalies. This is the most important limitation of LWA, and we need a fundamentally new approach to circumvent it.

Finally, this approach also neglects temporal coherence in each of the location-specific time-series. This shortcoming can be addressed by an approach like Hidden Markov Models, which consider a discrete state space for a time-series and model the state transition distributions. However, Hidden Markov Models are most suitable when there exists some natural ordering between the states and one particular state is likely to be followed by another state. In this case, we do not have any such ordering. Rather, we simply need state persistence to achieve temporal coherence. This can be achieved by the method proposed below.

§.§ Modeling by Markov Random Fields

Detecting extended anomalies requires a different lens from LWA, one inducing spatial or temporal coherence during assignment of the Z_st-variables. To address this shortcoming, we take an approach that assigns probabilities to different configurations of the latent Z-variables, with higher weights to configurations where Z-assignments are spatially or temporally coherent. This is achieved by modelling the latent variable as an MRF, along the lines of the drought discovery technique in <cit.>. We seek to discover spatial and temporal clusters within which Z-values are the same.

A Markov Random Field is an undirected graphical model, where a probability distribution is defined on an undirected graph. Each node in the graph corresponds to a random variable, and each edge has an associated potential function that depends on the random variables corresponding to the two nodes connected by that edge. The full likelihood of the model is defined as the product of all the edge potential functions. As already stated, we have 2 nodes for every spatio-temporal pair (s,t), corresponding to Z_st and Y_st. Spatial edges, temporal edges and data edges are defined between pairs of variables as mentioned above. In addition to grid-wise latent states, these can also be defined for the all-India mean, relative to its corresponding distribution across years. The Indian Meteorological Department (IMD) currently makes annual forecasts of spatially aggregated rainfall over India during the summer monsoon months of July-September (JJAS), called Indian Summer Monsoon Rainfall (ISMR). We define an analogous quantity for the entire year, All-India Mean Rainfall (AIMR), and denote it by Y_t. Its anomalies are relative to its interannual mean μ and standard deviation σ. Once again, we define a discrete latent variable Z_t corresponding to AIMR, which can take 3 values.

Each observation node Y_st has a single edge, to the corresponding latent variable node Z_st. The graph also contains nodes corresponding to each year, associated with latent Z_t and observed Y_t, corresponding to AIMR. For any year t, Z_t is linked by edges to the nodes {Z_st} for every location s in that year. Large anomalies in ISMR are declared by IMD as excess or deficient rainfall years. However, rainfall is highly heterogeneous spatially. Therefore, in order to define anomalies in the aggregate measure of AIMR, we consider not only the values of Y_t but also the frequencies of local anomalies in the corresponding year. This is achieved by linking the Z_st and Z_t nodes. Figure 1 illustrates the model.

Probabilities are assigned to each configuration of Z using node potential functions ψ^v(Z_st) on each node, edge potentials ψ^e(Z_st,Z_s't') on each edge occurring between spatiotemporal nodes, and ψ^f(Z_st,Z_t) on each edge occurring between spatiotemporal nodes and AIMR nodes. Edge potentials influence spatial and temporal coherence, and node potentials influence the threshold for anomaly detection. Edge potentials describe prior probabilities that the nodes connected by the edge are in the same state. The node potential functions can be interpreted as describing the prior probability distribution across different states. The precipitation amount at any location and year, given by Y_st, is modelled using a Gaussian distribution with parameters specific to the location s and latent state Z_st. These conditional distributions can be interpreted as edge potentials on the Z_st-Y_st data edges connecting the latent and observed states respectively. The likelihood function is:

L(Z) ∝∏_s,tψ^v(Z_st)∏_eψ^e(Z_st,Z_s't')∏_fψ^f(Z_st,Z_t)∏_s,t𝒩(Y_st; μ_sZ_st,σ_s)∏_t𝒩(Y_t; μ_Z_t,σ)

This defines the likelihood function, i.e. the probability of observing the data given the latent variables in the graph.
§.§ Spatial and Temporal Coherence through MRF

The spatiotemporal rainfall volume Y_st is modeled as a multi-modal Gaussian distribution, and Z_st=p specifies the mode (1: high, 2: low, 3: normal). The parameters (μ_sp,σ_s) of this distribution depend on the latent state p as well as the location s, and are estimated from data. Similarly, for the spatial mean rainfall Y_t we use a Gaussian distribution with state-specific parameters (μ_p,σ). Initial estimates of these parameters can be made from the dataset using LWA to assign states.

We define edge potential functions so that if two vertices connected by an edge have the same value of Z then the corresponding edge potential is larger than if the values were different. Since the likelihood function is multiplied by these edge potentials, this encourages spatial and temporal neighbours to have the same state, leading to spatial and temporal coherence. For each edge between a location state node Z_st and the corresponding AIMR state node Z_t for the same year, the edge potential influences the extent to which the local state is sought to be made coherent with the aggregate state. We define potential functions for the different edges as follows:

ψ^e(Z_st,Z_s't) = exp(C(s,s')) if Z_st=Z_s't, = exp(D) otherwise; where s' ∈ NB(s)

ψ^e(Z_st,Z_s,t+1) = P if Z_st=Z_s,t+1, = 1-P otherwise;

ψ^f(Z_st,Z_t) = exp(1/S) if Z_st=Z_t, = 1 otherwise.

To emphasize spatial coherence, D is a small constant compared to C(s,s'). The latter describes edge potentials if spatial neighbours are in the same state. As described previously, these edge potentials can be viewed as prior probabilities on the neighbours being in the same state. Therefore C(s,s') represents a prior probability that the states in locations s and s' are the same, and is estimated from data. Two neighbouring grid-locations need not be highly correlated, e.g. on either side of a narrow mountain range (such as the Western Ghats). Therefore, unlike the MRF estimated by <cit.>, where all edges between neighbouring pairs have the same potential function, here the potentials on edges are estimated from data and are location-dependent. The value of the edge potential P, for edges connecting nodes with neighbouring years, lies between zero and one. It induces temporal coherence, and hence is called the temporal coherence parameter. Higher values induce a higher emphasis on temporal coherence. The third set of edge potentials describes the behaviour of edges between the location nodes in any given year and the AIMR-node for that year. It is defined using the exponential, so that the contribution depends on the total number of locations whose states coincide with the state assigned to the spatial mean node. S is the total number of locations. The edge potential is higher when the location nodes are in the same state as the spatial mean node.

Next, we define the node potential functions. These are directly proportional to the prior probabilities of the nodes being in the different states, and generally influence the threshold for anomaly detection in most real situations when data is limited and the prior is not immaterial in the MAP solution. The state that is eventually assigned in the MAP solution depends only on the relative values of these node potentials. For the default model, all node potentials are set equal to the same value, which is set to 1. But they can be varied according to the problem of interest, as described further in the Appendix.
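In code, these potentials can be written directly in log-domain form, since only relative values matter for inference; the following is a minimal sketch under our own naming conventions, not the exact implementation used for the results.

    import numpy as np

    def log_psi_spatial(z_st, z_spt, C_ssp, D):
        """Spatial edge: psi = exp(C(s,s')) if states agree, exp(D) otherwise."""
        return C_ssp if z_st == z_spt else D

    def log_psi_temporal(z_st, z_st1, P):
        """Temporal edge: psi = P if states agree, 1 - P otherwise (0 < P < 1)."""
        return np.log(P) if z_st == z_st1 else np.log(1.0 - P)

    def log_psi_aimr(z_st, z_t, S):
        """Location-AIMR edge: psi = exp(1/S) if states agree, 1 otherwise."""
        return 1.0 / S if z_st == z_t else 0.0

    def log_psi_data(y_st, z_st, mu_s, sigma_s):
        """Data edge: Gaussian emission N(y; mu_{s,z}, sigma_s), up to a constant."""
        return -0.5 * ((y_st - mu_s[z_st - 1]) / sigma_s) ** 2

Choosing D well below C(s,s') rewards configurations in which neighbouring locations share a state, while P>0.5 rewards persistence of a state in time.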
MRF parameter settings: Only the part of the likelihood function that varies with the state Z affects the MAP solution. Therefore, a node or edge potential can be made irrelevant to a particular analysis by making it constant, independent of the value of Z. In the subsequent sections, we will use this device to consider alternate settings of the MRF, including ones where either spatial or temporal coherence is considered in isolation.

§.§ Anomaly Detection by Markov Random Fields

Having defined the likelihood function, we carry out inference on the latent variables Z and estimate the parameters (μ_sp, σ_s, C(s,s')) for locations s, corresponding neighbours s', conditioned on the latent state p. Unlike the maximum likelihood estimation of <cit.>, which is based on integer programming, here we carry out inference by Gibbs Sampling, which is computationally simpler <cit.>. Each latent variable Z_st is initialized based on the location-wise analysis described earlier, and the corresponding parameters are estimated. The Gibbs sampling technique entails, at each iteration, sampling each Z_st-variable from its updated conditional distribution by conditioning on the values of the other variables estimated thus far in the iteration, and then re-estimating the parameters. The procedure is repeated for several iterations, and samples are collected at regular intervals. The stationary distribution of this Markov chain Monte Carlo procedure is the posterior distribution on the latent variables. The maximum a-posteriori (MAP) estimate of the Z-variables can then be made from the samples. The Gibbs Sampling equations for the latent variables Z_st and Z_t are:

p(Z_st = p | Z_-s,-t, Z_t, Y_st) ∝ p(Z_st = p, Z_-s,-t, Z_t, Y_st)
                                 ∝ p(Z_st = p, Z_s't, Z_st', Z_t) p(Y_st | Z_st)
                                 ∝ ψ^v(Z_st = p) ψ^f(p, Z_t) ∏_s',t' ψ^e(p, Z_s't') 𝒩(Y_st; μ_sp, σ_s)

p(Z_t = q | Z_-t, Z_st, Y_t) ∝ p(Z_t = q, Z_-t, Z_st, Y_t)
                             ∝ p(Z_t = q, Z_st, Z_t') p(Y_t | Z_t)
                             ∝ ψ^v(Z_t = q) ∏_t' ψ^e(q, Z_t') ∏_s ψ^f(Z_st, q) 𝒩(Y_t; μ_q, σ)

where s' refers to the neighbours of s, t' to the previous and next years, i.e. (t-1) and (t+1), the state p ∈ {1,2,3}, and Z_-s,-t denotes all the Z-variables except Z_st. While applying these equations, we do not consider variables corresponding to spatiotemporal locations that are not neighbours of Z_st, since by the Markov property of the MRF each node is conditionally independent of all non-neighbouring nodes given the neighbouring nodes. The Gibbs Sampling proceeds by drawing samples for each Z_st and each Z_t from Equation 3, and the optimal value for each latent variable is estimated from the distribution across these samples. A sketch of this single-site update is given below.

After estimating the latent-variable set Z, we identify anomalies by discovering spatially and/or temporally coherent sets of spatiotemporal locations. Spatiotemporal anomalies are estimated as connected components of the MRF, such that each node of the connected component has the same value of Z. These values of Z can be either 1 or 2, corresponding to positive and negative anomalies respectively. Due to coherence, the clusters thus identified can be at a single location but extending over several continuous years, or spatially contiguous locations in a single year, or both. Clearly, the spatio-temporal extents of the anomalies discovered this way are not fixed.

§.§ Related Works

Anomaly Detection is a well-studied area of Data Mining <cit.>. However, its main challenge is that anomalies cannot be precisely defined and are subjective by definition; most papers on anomaly detection therefore solve a specific formulation of the problem.
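As referenced above, the following is a minimal sketch of the single-site Gibbs update of Equation 3 (our own illustration, with hypothetical helper names and the three-state encoding as assumptions; uniform node potentials are omitted):

```python
import numpy as np

def sample_Zst(s, t, Z, Zt, Y, mu, sigma, C, D, P, neighbours, rng):
    """Draw Z_st from its conditional given all other states (Equation 3)."""
    S, T = Z.shape
    logp = np.zeros(3)
    for p in (1, 2, 3):
        lp = -0.5 * ((Y[s, t] - mu[s, p - 1]) / sigma[s]) ** 2  # data term
        for sp in neighbours(s):                     # spatial edges
            lp += C[s, sp] if Z[sp, t] == p else D
        for tp in (t - 1, t + 1):                    # temporal edges
            if 0 <= tp < T:
                lp += np.log(P if Z[s, tp] == p else 1.0 - P)
        lp += (1.0 / S) if Zt[t] == p else 0.0       # AIMR edge
        logp[p - 1] = lp
    prob = np.exp(logp - logp.max())                 # normalize stably
    return rng.choice([1, 2, 3], p=prob / prob.sum())

# usage sketch: rng = np.random.default_rng(); Z[s, t] = sample_Zst(...)
```

One full sweep applies this update to every (s,t) pair and an analogous update to each Z_t, which is what makes each iteration O(ST).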
Much of the work on anomaly detection is about classifying each individual data-point as normal or anomalous, with respect to either its immediate neighbours or the entire dataset. The problem is more difficult when we deal with collections of data-points rather than individual points. While there are other approaches to anomaly detection <cit.>, including for spatiotemporal anomalies <cit.>, here we use MRFs for studying coherent rainfall anomalies. MRFs themselves have been used in similar applications involving geospatial fields <cit.>, including rainfall <cit.>. Fu et al. (2012) <cit.> have used MRFs to detect coherent droughts of the last century, and find that their procedure can identify well-known droughts around the world. Theirs appears to be the first formulation of the rainfall anomaly detection problem in terms of MRFs. Other Bayesian models have also been considered for studying floods and droughts, such as <cit.>, which also incorporates the spatial dependence of flood properties at local scales.

The present paper is partly motivated by the aforementioned work <cit.>. We focus on grid-level annual rainfall over India, but our method is general enough to work on rainfall data at any spatial and temporal resolution. Like <cit.> we use Markov Random Fields, but an important difference is that both positive and negative anomalies are considered, so that the latent variable in each node is in one of three states (positive, negative, normal). In addition, the relation between anomalies at small scales (grid-wise) and the large scale (all-India spatial mean) is explicitly modelled. To identify the MAP configuration of the latent states, <cit.> used integer programming. However, integer programming is very slow, with cost increasing exponentially in the size of the problem, thereby necessitating probabilistic inference techniques <cit.>. In this work we use Gibbs sampling to infer the latent variables. Gibbs sampling works by creating a Markov chain whose stationary distribution is the distribution we seek, and then carrying out a random walk on this Markov chain <cit.>. Gibbs sampling has been used previously in estimating MRFs (e.g. <cit.>), and here we illustrate its usefulness in estimating latent states corresponding to large and heterogeneous geospatial fields such as rainfall. A survey of inference techniques for Markov Random Fields is given in <cit.>.

Geophysical spatio-temporal processes have often been studied by approaches somewhat similar to the proposed one. Models such as STARMAX <cit.>, which are inspired by time-series models, express the S-dimensional observation vector Y_t at each time-step in terms of that at the previous time-step, as Y_t = CY_t-1 + Dv_t + u_t, where u_t is noise, v_t is an input vector, and C, D are matrices that introduce spatial correlation in the elements of Y_t. <cit.> proposes an approach for temporal segmentation of multivariate time-series based on latent factors, but it is not geared for spatial coherence or anomalies. In other models, such as Gaussian Random Fields <cit.> or Gaussian Processes <cit.>, the spatial correlations are more strongly captured through covariance matrices of a latent process X, which is however continuous, unlike our discrete process Z. At each time-step t, the observations Y_t are expressed in terms of X_t as Y_t = BX_t + DV_t + u_t, while X is itself modelled with a Gaussian prior as X_t ∼ 𝒩(0, Σ) or X_t ∼ GP(0, K), where K is a covariance function and Σ is a covariance matrix.
As in our case, X is latent and needs to be estimated conditioned on Y, which involves lengthy computations with the covariance matrices. Indeed, a lot of recent research has investigated how such computations can be sped up by considering covariance matrices of special forms (close to diagonal) <cit.>, or by a clever re-grouping of spatial locations which enables the X-variables there to be sampled simultaneously <cit.>. The use of discrete latent variables allows us to circumvent these issues, while providing a natural solution to anomaly detection.

§.§ Discussion of Model

Having discussed the model in detail, and before starting its empirical evaluation, we discuss certain aspects of the model which may place it in the perspective of existing models for similar problems.

§.§.§ Relation to other models

Although we consider latent variables to model a spatio-temporal process, our approach is different from <cit.> because we explicitly want three modes - positive anomaly, negative anomaly and normal. So, a discrete latent variable serves us better than a continuous one. This helps us avoid the matrix computations involved in the GP-type approaches. Unlike the STARMAX-type models we do not model the temporal dynamics explicitly, nor do we ignore them as in the Gaussian Process-type models. We do not use a directed graphical model like Bayesian Networks, because spatial locations cannot be ordered naturally, nor is there any known causal relation between spatial locations. So we attempt to model the joint distribution of all spatial variables instead of using conditionals as in a directed model. The spatial and temporal interactions among the Z variables are modelled locally, between pairs of nodes, and the global configuration of Z is inferred based on these local properties. The discrete representation used by our model is physically interpretable, and so are the local interactions. On the downside, this model is not suitable for prediction or simulation of Z, as no conditional distribution is modelled.

§.§.§ Computational Complexity

The inference process based on Gibbs Sampling is iterative, and in each iteration we need to sample the Z-variable for each (s,t) pair, and also the Z-variable at the all-India scale for each year. So, each iteration requires O(ST) sampling steps, where S is the number of locations and T the number of years. However, the sampling for each (s,t) pair can be done in constant time, since Z_st can take only 3 values, and their probabilities can be computed easily based on the current Z-assignments of other locations. The complexity is thus linear in the number of spatio-temporal locations. Moreover, this sampling step can be sped up by parallelized computation, where sets of Z-variables that are independent of each other are sampled simultaneously. Some of these aspects have been discussed in <cit.>. However, a detailed study of this matter is beyond the scope of this paper.

§.§.§ Spatio-temporal Separability

An important issue in spatio-temporal models is that of spatio-temporal separability, i.e. whether the covariance matrix can be written as a product of a purely spatial and a purely temporal component <cit.>. A separable covariance matrix implies that the spatial and temporal effects can be modeled independently, which is not a good assumption in most circumstances. But in our model no such assumption is made.
The covariance is a function of the edge potentials, and the covariance between the Y-variables at a pair of spatio-temporal locations Y(s,t) and Y(s',t') can be written as a product of all edge potentials along the graph path between these two nodes, through Z(s,t) and Z(s',t') along the spatial and temporal edges joining them (see Fig 1), marginalizing over the Z-variables on this path. The sum-product form of this term, along with the form of the edge-potential functions, ensures that spatial and temporal effects are not separable in this model, which is appropriate for spatio-temporal data. Since the latent space Z being modelled here is discrete rather than continuous, we are able to avoid making separability assumptions without resorting to complex computations, as discussed in <cit.>.

§ TEST OF METHOD

We fit the MRF model discussed above, and also perform the location-wise analysis (LWA) discussed previously in Section 2.1, on a dataset of 1°×1° gridded rainfall data measured all over India for the period 1901-2011. This grid system has 357 locations over India (S=357). The data is available at daily scale, but for the analysis in this paper we compute annual aggregate values. The Z-values are computed and anomalies are discovered. Before going into details of spatio-temporal properties, we first provide a test of the method by reproducing some known results about AIMR. The results from the MRF are compared with LWA to highlight the differences and benefits.

§.§ Local Anomalies in given years

We examine results from two years: 1998 (declared an excess-rainfall year by IMD) and 2002 (declared a deficient-rainfall year). Maps of positive and negative anomalies in these two years are shown in Figure 2. The first panel in each pair shows results from the LWA, while the second panel shows those of the MRF. Overall the maps have many similarities, which seems to validate the MRF approach. It was noted previously that LWA may yield isolated anomalies as well, and this is seen in the figure. By contrast, the constraint of spatial coherence in the MRF yields more spatially connected and extensive anomalies, with fewer isolated anomalies. Anomalies of both kinds are more spatially contiguous with the MRF. Furthermore, for the excess-rainfall year, the MRF yields a larger number of locations with positive anomaly state compared to LWA (84 in MRF as compared to 75 in LWA). Likewise, in the deficit-rainfall year, the MRF yields more locations having negative anomalies (194 in MRF compared to 147 in LWA). This is a result of the edges connecting the location-specific states Z_st to the aggregate state Z_t in the MRF, which have higher edge potential when the corresponding nodes are in the same state, as well as the effects of spatial coherence.

§.§ AIMR anomalies and local anomalies

The variables Z_t denote the anomaly corresponding to the All-India spatial mean rainfall (AIMR). For each year we compute the AIMR Y_t from the local measurements {Y_st}, and from this time-series estimate the mean μ and standard deviation σ across years. The excess-rainfall years H are defined as those with Y_t ≥ μ+σ, and the deficient-rainfall years L have Y_t ≤ μ-σ. These definitions do not depend on how widespread the local anomalies are, but only on the amount of spatial mean rainfall. We can instead define all-India anomalies so as to depend on the widespread occurrence of local anomalies.
For any year t, we compute the number of locations under anomalies of either kind (N1(t) and N2(t)) as found by LWA, and the corresponding means (μ_N1, μ_N2) and standard deviations (σ_N1, σ_N2) across the years. Based on these, we identify those years with exceptionally large numbers of locations under positive anomalies (HL) and exceptionally large numbers of locations under negative anomalies (LL). In other words, HL = {t: N1(t) ≥ μ_N1+σ_N1} and LL = {t: N2(t) ≥ μ_N2+σ_N2}. It turns out that H and HL are not equal, and their overlap |H ∩ HL|/|H| is only 0.7. Similarly, L and LL are also not equal, and |L ∩ LL|/|L| is only 0.7. This illustrates that the aggregate state Z_t, when defined based on spatial mean rainfall, often takes different values from when it is defined based on the widespread occurrence of local anomalies.

In the MRF model, the edge potentials ensure that the assignment of Z_t is also influenced by the values of the location-wise latent states Z_st, and large numbers of local anomalies of one kind increase the probability of Z_t being assigned to the same anomaly. At the same time, it also takes into account the AIMR estimate Y_t. Hence, in the MRF the value of Z_t should be able to capture all kinds of all-India anomalies defined so far - H, HL, L, LL. Let ZH and ZL be the positive and negative anomaly years identified by the MRF, i.e. ZH = {t: Z_t=1} and ZL = {t: Z_t=2}. The set ZH captures very well the contents of both H and HL, with |H ∩ ZH|/|H| = 1 and |HL ∩ ZH|/|HL| = 0.92. Similarly, ZL also overlaps well with L and LL, with |L ∩ ZL|/|L| = 1 and |LL ∩ ZL|/|LL| = 0.84. This shows that the MRF model helps discover both types of all-India anomalies, based on spatial-mean rainfall as well as on the widespread occurrence of local anomalies, simultaneously.

We describe anomaly statistics from the MRF for extreme years (H, L), where all-India rainfall is either excess or deficient. Generally, across approaches it can be expected that in the years of H (excess rainfall) the number of locations (N1H) assigned the positive state Z_st=1 is much higher than the number of positive-state locations in all other years (N1Y), while the number of locations (N2L) assigned the negative state in L (deficit-rainfall years) is much higher than in all other years (N2Y). These relationships are seen for the MRF with spatial coherence and for LWA in Table 2. Spatial coherence in the MRF causes the mean number of nodes with positive state in years of excess rainfall to be higher than in the case of LWA (Table 2). Similarly, there are more negative state assignments in years of deficit rainfall as compared to LWA. Furthermore, the mean difference between the numbers of locations with positive and negative states in H and L years respectively (D12H, D21L) is more pronounced with the MRF than with location-wise analysis (Table 2). Spatial coherence favours the occurrence of the corresponding anomaly states in excess or deficit rainfall years. Thus the proposed approach links AIMR states to local states, which helps to identify extreme-rainfall years in a more inclusive way. It also helps to localize the anomalies formed in such years.

§ EFFECTS OF MRF EDGE POTENTIALS

Clearly, the assignment of the latent state variables Z at the different spatio-temporal locations is strongly influenced by the edge potentials of the MRF. It can generally be expected that many isolated locations that are assigned to states 1 or 2 by LWA will be assigned to state 3 by the MRF to preserve spatial coherence.
On the other hand, some locations assigned to state 3 by LWA may be assigned to states 1 or 2 if several of their neighbours are in such states. In this section, we study how the Z-assignments are affected by different parameter settings of the MRF involving the edge potentials.

§.§ Assignment Statistics

For different parameter settings, we compute the total number of nodes in the entire graph assigned states 1 and 2 (N1, N2). We also compute confusion matrices to describe the degree of overlap between anomaly nodes found by location-wise analysis and by the MRF. NG1 denotes the number of positive-state nodes “gained" by the proposed method when compared to LWA, i.e. nodes satisfying Z_st=1, Z0_st ≠ 1 (recall that state assignments by LWA are denoted Z0). These are nodes not part of positive anomalies by LWA, but part of positive anomalies in the corresponding MRF. Similarly, NL1 is the number of positive-state nodes “lost" by the proposed method compared to LWA, i.e. nodes satisfying Z0_st=1, Z_st ≠ 1. The numbers of negative-state nodes “gained" and “lost" in this way are denoted NG2 and NL2 respectively.

§.§ Edge Potential Settings

First we isolate the effects of spatial coherence, in the absence of temporal coherence. The absence of temporal coherence is implemented by using constant edge potentials for all edges across years. We also use constant node potentials for all nodes and states. Next we study the effects of temporal coherence alone, leaving out spatial coherence effects. We consider the effects of temporal coherence with parameter P (MRF-TC-P), with increasing P denoting increasing emphasis on temporal coherence. The node potential is uniform, independent of the assignment of the latent variable Z.

Results are shown in Table 2 and Figure 3. In the presence of temporal coherence, the number of nodes in the positive state is much larger than the number of nodes in the negative state. The relative difference increases as the temporal coherence parameter increases. As the role of temporal coherence is increased, by increasing P from 0.5 to 0.99, the number of anomaly-states decreases. Increasing coherence generally leads to fewer anomaly-states. That is why it is not possible to generalize the effect of the MRF compared to LWA without also specifying the coherence parameters. In general, the number of anomaly-states “lost" when switching from LWA to the MRF is higher as either spatial or temporal coherence is introduced, and as the temporal coherence parameter is increased. This is expected, as many anomalies found by LWA are isolated and do not reflect coherent effects on larger scales. A less expected effect of introducing coherence is that a significant number of new anomalies are “gained", i.e. identified where LWA could not extract them. Such anomalies are manifested at larger scales only.

Finally, we consider the MRF where both spatial coherence and temporal coherence, the latter with parameter P, are present (MRF-STC-P). In the presence of spatial coherence, the effects of increasing the temporal coherence parameter P are similar to the previous discussion in the context of temporal coherence alone: a higher temporal coherence parameter leads to fewer anomaly-states. Furthermore, the number of positive states is larger than the number of negative states, and the relative difference becomes larger as temporal coherence is increased. There can be different approaches to enforcing spatial coherence based on Equation 2, and we consider their effects in the following.
We contrast five different approaches, for which the results are shown in the last part of Table 2. For this analysis, P is kept at 0.9. In the first three cases, D=0; that is, the edge potentials for spatial neighbours have zero weight if the latent states differ. These approaches differ in the choice of the edge potentials C(s,s') between spatial neighbours in case the latent states are the same: “prop", where for neighbouring pairs of locations C(s,s') is proportional to the number of years that the locations have the same phase, i.e. the same sign of rainfall change; “anml", where for neighbouring pairs of locations C(s,s') is proportional to the number of years that the locations had the same state as estimated by LWA; and “unif", where for neighbouring pairs of locations the C(s,s') values are equal. An important result is that these three approaches do not have much effect on the statistics of state assignments (Table 2). Therefore, anomaly detection using MRFs does not depend much on the details of the spatial coherence model, as long as the edge potentials in the presence of spatial coherence are much higher than the edge potentials when the neighbouring states differ; recall that for these three cases D=0, so that the ratio C/D is infinite.

In the last two approaches towards spatial coherence, we relax the constraint that D=0. This is essentially a weakening of the spatial coherence requirement. The ratio of C and D can, however, affect the relative weight given to spatial coherence, with higher ratios emphasizing spatial coherence more. We consider two settings: “mxd1", where C=2, D=1, and “mxd2", where C=5, D=1. If the ratio C/D is higher, there are fewer anomaly nodes (Table 2). The “prop" setting with D=0 is used for all the analysis before and after this comparison.

§ PROPERTIES OF DISCOVERED ANOMALIES

In Section 2.5, we discussed how the local state variables Z assigned by the MRF or LWA are used to identify spatio-temporally coherent zones as positive or negative anomalies. In this section we study the properties of these anomalies, under the different settings of the MRF discussed in the previous section. An important question is how widespread and persistent positive and negative rainfall anomalies are. Another important question is how different the rainfall volumes are from the long-term climatology for each anomaly. To evaluate these, we first define several properties of the anomalies.

§.§ Anomaly Statistics

The spatiotemporal size of each anomaly is the size of the corresponding connected component in the graph, i.e. the number of nodes present in it. We measure STS: the mean spatiotemporal size of all anomalies, including all years; and similarly STSP: the mean spatiotemporal size of all positive anomalies; and STSN: the mean spatiotemporal size of all negative anomalies. We define the spatial size of an anomaly as the number of distinct spatial locations included in the nodes covered by it. The temporal size of an anomaly is similarly defined as the number of distinct years included in it. We thereby estimate the mean spatial size of all anomalies (SS), of only positive (SSP) and of only negative (SSN) anomalies. Similarly, we measure (TS, TSP, TSN) for the corresponding mean temporal sizes. Each state of Z at each location is associated with a distribution over rainfall values. Fig 4 shows the mean rainfall values for each of the locations and each state of Z. Mathematically, these are mean_{t: Z_st=1}(Y_st) for positive anomalies, and mean_{t: Z_st=2}(Y_st) for negative anomalies.
Two different settings of the MRF are considered: using spatio-temporal coherence with temporal coherence parameters P=0.7 and P=0.9, and the “prop" setting of spatial coherence. The plots show that these mean rainfall values for the different states (shown by the green, blue and red plots) are clearly well-separated in most locations.

To quantify the severity or “anomalousness" of each anomaly, we first compute the ratio of the rainfall received at each spatio-temporal location covered by the anomaly to the long-term mean rainfall over each of these locations. We define the intensity parameter of the anomaly as the mean of these ratios. Mathematically, let A be the set of spatio-temporal locations affected by a particular positive or negative anomaly a. For each (s,t) ∈ A, we compute F_a(s,t) = Y_st/μ_s. Then the intensity of anomaly a is given by I_a = mean_{(s,t) ∈ A} F_a(s,t).

§.§ Effect of MRF settings

We consider location-wise analysis (LWA), and MRFs under different settings. These settings include only spatial coherence (SC), only temporal coherence with parameter P (TC-P), and both spatial and temporal coherence (STC-P). Results are shown in Table 3. The different groups of columns show the number of anomalies, spatiotemporal size, spatial size, temporal size and intensity respectively, each one separately for positive and negative anomalies. The results indicate complex relationships involving the spatial and temporal scales of anomalies. As expected, with LWA the number of anomalies is much larger and their mean sizes much smaller, in comparison to versions of the MRF where various constraints of coherence are present.

In the absence of spatial coherence, as the temporal coherence parameter is increased, the spatial size of anomalies becomes smaller. A larger temporal coherence parameter selects for more long-lived anomalies, and hence these tend to become smaller in spatial extent. The spatiotemporal size decreases as the temporal coherence parameter is increased. The aforementioned effect is also present when spatial coherence is included in the MRF. The selection of longer-lived but spatially less extended anomalies when the temporal coherence parameter is increased creates a trade-off between spatial and temporal extents. Such a trade-off is intrinsic to spatio-temporal anomaly detection: with a larger emphasis on a certain type of coherence (spatial or temporal), the corresponding size of anomalies increases while the other size decreases.

In Table 3, we also study the mean intensity of the anomalies under different settings of the MRF. Clearly, as either type of coherence is increased, the mean intensity parameter of positive anomalies increases and that of negative anomalies decreases; given the aforementioned definition of this parameter, the selected anomalies are thus more “intense". This is a welcome result, indicating that the use of spatio-temporal coherence helps us to identify severe anomalies while rejecting mild ones.

§.§ Variations among Anomalies

The above discussion pertained to parameter-based trade-offs in the mean spatial and temporal sizes of anomalies. However, even for fixed parameter settings of the MRF, there is substantial variation in the size and intensity of the detected anomalies. Such variation of spatial and temporal sizes is shown in Figure 5 for two realizations of the MRF. It is seen that generally larger anomalies tend to be shorter-lived, but there are individual exceptions.
There is a large range of temporal sizes for a given spatial size, for both positive and negative anomalies. In Figure 5 we also plot the variation of intensity with the spatio-temporal size of the anomalies in two realizations of the MRF. Here the correlation is even weaker. We compute the correlations between these statistics of individual anomalies. Once again, this is done separately for each setting of the MRF considered in Table 3, and separately for positive and negative anomalies. The results are shown in Table 4. It shows that in almost all the settings the correlation between spatio-temporal size and spatial size is very strong, though it reduces as the temporal coherence parameter P is increased (i.e. as the mean temporal size of the anomalies increases). The correlation between spatio-temporal and temporal sizes is less strong, though it increases slightly with P. The spatial and temporal sizes are less well correlated. There is no noticeable correlation between spatio-temporal size and intensity.

§ CASE STUDIES OF SOME ANOMALIES

Next, we individually investigate some of the anomalies discovered using the MRF with spatio-temporal coherence and temporal coherence parameter P=0.9. We first consider a positive anomaly that occurred in the states of Odisha and Jharkhand along the eastern coast (Fig 6A), in the year 1994. This anomaly covered 20 grid-locations, but persisted for only 1 year (spatial size 20, temporal size 1). The long-term mean annual rainfall over the concerned 20 grid-locations is 4.18 mm per day per location, but that year the mean rainfall over these locations was 5.84 mm per day per location (anomaly intensity of 1.4). Overall, the year 1994 was classified as a positive anomaly year in terms of AIMR, with mean rainfall of 4.23 mm per day per location, compared to the long-term mean of 3.94 mm per day per location (intensity of 1.1). The map of locations having local positive and negative anomalies in 1994 is shown in Fig 6B, which indicates that the Odisha anomaly was quite significant. The LWA-based local anomalies are shown in Fig 6C. Another major anomaly occurred roughly in the same area (Fig 6D) in 2001, covering 11 locations. The mean rainfall that year over this anomaly was 5.5 mm per location per day, compared to the long-term mean of 4.1 mm per location per day (intensity of 1.3). The year 2001 was classified as normal at the all-India scale, and the map in Fig 6E shows the locations under positive and negative anomalies according to the MRF.

A significant negative anomaly occurred around a stretch of Central India (Fig 7A) in 2000, which was classified as an all-India negative anomaly year. The anomaly map by the MRF for that year is shown in Fig 7B. The anomaly covered 22 locations which receive 3.88 mm per day per location of rainfall on average, but in that year they received only 2.17 mm (anomaly intensity of 0.56). Again, around 10 locations in Odisha near the eastern coast (Fig 7D) had a negative anomaly in 2002, which was a major drought year in terms of AIMR. These locations, which receive 4.18 mm on average, received only 2.4 mm in 2002 (intensity of 0.57). The MRF-based anomaly map for 2002 is shown in Fig 7E, while Fig 7F shows the local anomalies by LWA. Some anomalies are temporally extended, i.e. they cover several years. A good example is a positive anomaly that covered 5 years from 1987 to 1991, over the Meghalaya and Southern Assam region, covering 24 locations (Fig 8A).
The mean annual rainfall over these locations is 6.35 mm per location per day, but in these 5 years the mean rainfall volumes were 6.99, 8.53, 6.92, 7.14 and 7.45 mm per location per day. Among these years, only 1988 and 1990 were classified as positive anomalies at the all-India scale, while the other three years were classified as normal. The MRF-based anomaly map of 1987 is shown in Fig 8B. Again, 11 locations in the south-western state of Kerala (Fig 8D), one of the wettest parts of India, suffered a negative anomaly stretching over 1985-87, all of which were classified as normal years. The mean rainfall over these locations is 6.15 mm per location per day, but during these three years this mean was 4.83, 4.59 and 4.90 respectively. The MRF-based anomaly map of 1985 is shown in Fig 8E.

§ CONCLUSIONS

This paper describes a method for coherent anomaly detection using Markov Random Fields (MRFs), where each node is associated with a location and year. Coherence is emphasized because it is an inherent property of rainfall, and also because anomalies are especially consequential when extended spatially or temporally. The anomaly states are represented as latent random variables, so probabilistic methods are required for their estimation. For this purpose we use Gibbs sampling, a type of Markov chain Monte Carlo method. We also consider the sensitivities of the results to the parameters of the MRF. The MRF is able to identify more coherent anomalies compared to traditional analysis using location-specific thresholds. MRFs offer a principled approach to handling the heterogeneity and anisotropy in the occurrence of anomalies, where more traditional methods such as wavelets may not be appropriate. The method can discover intense positive and negative anomalies of various sizes, without requiring any thresholds. Furthermore, the method can be used to characterize both the occurrence of anomalies at large spatial scale, by assigning a state variable for All-India spatial-mean rainfall, and the widespread occurrence of grid-scale anomalies, through the effects of edge potentials and spatial coherence in the MRF.

The effects of the edge potentials enforcing coherence, as well as the node potentials influencing the threshold for anomaly detection within the MRF, are described. We show that adjusting the parameters has effects that are consistent with intuition. However, the results are not overly sensitive to the parameters. One effect of coherence is to reveal anomaly states that are classified as normal in location-wise threshold-based analysis, because of the influence of neighbouring locations being assigned to anomaly states. Increasing spatial coherence through edge potentials leads to fewer but larger anomalies. Enforcing any one type of coherence more strongly selects for either longer-lived or spatially more extended anomalies, though fewer in number. On the other hand, increasing spatio-temporal coherence results in the selection of more “intense" anomalies instead of mild ones. There is also variability in the spatial and temporal sizes of anomalies. Anomalies longer in one dimension (spatial/temporal) tend to be shorter in the other. Furthermore, positive anomalies are not necessarily larger or smaller than negative anomalies, as the results vary with the choice of parameters. Overall, this study provides some understanding of heterogeneities in rainfall over the Indian region.
The results also raise the question of whether the anomalies discovered by this method are relevant for understanding hydrological floods and droughts, which are based on considering multiple variables, including soil moisture. A natural extension of this work would be to infer anomaly states based on the inclusion of additional climatic and hydrological variables. Clearly, anomalies are a very significant feature of rainfall in general and of Indian rainfall in particular, and any realistic simulation of regional rainfall should be able to capture their salient properties. Statistics of coherent anomalies learnt from MRF-based approaches could provide further tests and benchmarks of regional-scale rainfall simulations made from climate models and statistical simulators.

§ ACKNOWLEDGMENTS

This research was supported by an Airbus India Postdoctoral Fellowship and the Divecha Centre for Climate Change, Indian Institute of Science. We are thankful to Dr. J. Srinivasan and Dr. V. Venugopal for valuable inputs.

§ APPENDIX

§.§ Choice of Node Potentials

As described in Section 2.4, node potentials can be varied depending on the problem being considered. These potentials can be viewed as prior probabilities on the occurrence of the different states. For example, a lower threshold on anomaly detection is achieved by specifying ψ^v(Z_st=1) = C_1, ψ^v(Z_st=2) = C_2 and ψ^v(Z_st=3) = C_3, where C_1 and C_2 are high while C_3 is low. The relative frequencies of positive and negative anomalies can be adjusted by changing C_1 and C_2 accordingly.

Another application might be to vary node potentials by location. In locations receiving low average rainfall (μ_s is small), negative anomalies may be more consequential and hence important to detect. Likewise, locations receiving higher average rainfall (μ_s is high) might be more sensitive to flooding events. We define the set of locations receiving low average rainfall as L and those receiving high average rainfall as H. Then

ψ^v(Z_st=1) = C_1, ψ^v(Z_st=2) = C_2 and ψ^v(Z_st=3) = C_2 when s ∈ L
ψ^v(Z_st=1) = C_2, ψ^v(Z_st=2) = C_1 and ψ^v(Z_st=3) = C_2 when s ∈ H
ψ^v(Z_st=1) = C_3, ψ^v(Z_st=2) = C_3 and ψ^v(Z_st=3) = C_3 in other locations

To achieve the above, we specify C_1 ≤ C_2. On the contrary, the goal may be to identify positive anomalies in dry locations, or negative anomalies in wet locations, by specifying C_2 ≤ C_1. Yet another application may involve inducing homogeneity or heterogeneity in anomaly detection: identifying positive anomalies especially during years of strong mean rainfall and negative anomalies in the reverse situation (homogeneity), or identifying anomaly states of the sign opposite to the year's mean rainfall (heterogeneity). For this type of problem, we denote the sets of years with excess and deficient spatial mean rainfall as H and L. Once again defining node potentials as

ψ^v(Z_st=1) = C_1, ψ^v(Z_st=2) = C_2 and ψ^v(Z_st=3) = C_2 when t ∈ L
ψ^v(Z_st=1) = C_2, ψ^v(Z_st=2) = C_1 and ψ^v(Z_st=3) = C_2 when t ∈ H
ψ^v(Z_st=1) = C_3, ψ^v(Z_st=2) = C_3 and ψ^v(Z_st=3) = C_3 in other years

homogeneity can be achieved by specifying C_1 to be low and C_2 high, and heterogeneity with the reverse specifications; a sketch of such settings is given below.
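As one concrete illustration of the location-dependent settings above, the following minimal sketch (our own; the array layout and helper names are assumptions, not the authors' code) builds log node potentials that can be added into the Gibbs conditional sketched earlier:

```python
import numpy as np

def node_log_potentials(S, T, L_locs, H_locs, C1, C2, C3):
    """Log node potentials logpsi[s, t, p-1] for states p in {1, 2, 3}.

    Implements the location-dependent setting above: with C1 <= C2,
    positive anomalies are disfavoured in dry locations (s in L) and
    negative anomalies are disfavoured in wet locations (s in H).
    """
    logpsi = np.full((S, T, 3), np.log(C3))            # other locations
    logpsi[list(L_locs), :, :] = np.log([C1, C2, C2])  # s in L (dry)
    logpsi[list(H_locs), :, :] = np.log([C2, C1, C2])  # s in H (wet)
    return logpsi
```

The year-dependent variant is analogous, indexing the second (year) axis by t ∈ H or t ∈ L instead of the first.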
There is a clear analogy between the two sets of problems: one in which node potentials are adjusted by location, and the second where the type of year is the primary factor.

§.§ Effects of node potentials

Node potentials influence the thresholds for anomaly detection, and can be interpreted as prior probabilities of the corresponding anomaly being present before any observations are made. To examine the effects, we compute the mean number of positive (NP) and negative (NN) anomalies with spatiotemporal size above 1. In all cases, we maintain spatial and temporal coherence through edge potentials in the MRF, with temporal coherence parameter P=0.99. In setting NP1, we consider equal weights for all 3 states at each node; NP2 favours detection of positive anomalies by setting C_1=2, C_2=1, C_3=1; NP3 favours negative anomalies by setting C_1=1, C_2=2, C_3=1; NP4 prioritizes both anomalies over the normal state using C_1=2, C_2=2, C_3=1.

One might also set node-specific potentials depending on statistics at either the location or the year associated with the node. We define the set LS of dry locations, where the mean annual rainfall (μ_s) is at least one standard deviation σ below the mean of this quantity across locations (μ), i.e. LS = {s: μ_s ≤ μ-σ}. We also define the set HS of wet locations, where HS = {s: μ_s ≥ μ+σ}. In NP5 we set node potentials C_1=2, C_2=1 in nodes of HS, and C_1=1, C_2=2 in nodes of LS. This favours positive anomalies in wet locations, and negative anomalies in dry locations. In contrast, the values are reversed in NP6, favouring positive anomalies in dry locations and negative anomalies in wet locations. For introducing year-specific node potentials, we consider deficient-rain years L and excess-rain years H once again. In NP7 we set C_1=2, C_2=1 in nodes of H, and C_1=1, C_2=2 in nodes of L. This favours positive anomalies in excess-rain years and negative anomalies in deficit-rain years. These settings are reversed in NP8, favouring positive anomalies in deficit-rain years and negative anomalies in excess-rain years.

Table 5 shows anomaly statistics for the various settings of node potentials examined here. When giving additional weight to positive anomalies (as in cases NP2, NP4), the number of positive anomalies increases, as would be expected. Similarly, when negative anomalies are given higher weight (as in cases NP3, NP4), the number of negative anomalies increases. A common tendency across these settings is that the number of distinct positive anomalies is much larger than that of negative anomalies, but negative anomalies have a larger mean spatiotemporal size. Employing node-specific potentials that depend on features of either the location or the year associated with the node, in NP5-NP8, does not substantially change the overall statistics, but it affects the particular anomalies detected (not shown). In NP7, where local anomalies of the same type as the AIMR anomaly are favoured in AIMR anomaly years, the difference between the mean sizes of negative and positive anomalies decreases. This is mainly because positive anomalies have a higher spatial size than negative anomalies in this condition.
This situation is reversed in NP8, where in anomaly years local anomalies of the opposite type are favoured.

§ REFERENCES

[gdp] Gadgil, Sulochana and Gadgil, Siddhartha; The Indian Monsoon, GDP and Agriculture, Economic and Political Weekly, 2006, pp 4887-4895.
[flood] Dhar, O N and Nandargi, Shobha; Hydrometeorological aspects of floods in India, Flood Problem and Management in South Asia, 2003, pp 1-33.
[drought1] Cook, E R, Anchukaitis, K J, Buckley, B M, D'Arrigo, R D, Jacoby, G C and Wright, W E; Asian monsoon failure and megadrought during the last millennium, Science, Vol 328(5977), 2010, pp 486-489.
[drought2] Kumar, K N, Rajeevan, M, Pai, D S, Srivastava, A K and Preethi, B; On the observed variability of monsoon droughts over India, Weather and Climate Extremes, Vol 1, 2013, pp 42-50.
[drought3] Dracup, J A; Drought Monitoring, Stochastic Environmental Research and Risk Assessment, Vol 5(4), 1991, pp 261-266.
[spatvar] Ghosh, Subimal and Das, Debasish and Kao, Shih-Chieh and Ganguly, Auroop R; Lack of uniform trends but increasing spatial variability in observed Indian rainfall extremes, Nature Climate Change, Vol 2(2), 2012, pp 86-91.
[abrupt] Narisma, Gemma T and Foley, Jonathan A and Licker, Rachel and Ramankutty, Navin; Abrupt changes in rainfall during the twentieth century, Geophysical Research Letters, Vol 34(6), 2007.
[wavelet] Ideião, Sandra Maria Araújo and Santos, Celso Augusto Guimarães; Analysis of precipitation time series using the wavelet transform, Revista Sociedade & Natureza, Vol 1(1), 2009.
[ICA] Sharma, Aditi; Spatial data mining for drought monitoring: An approach using temporal NDVI and rainfall relationship, Master thesis, International Institute for Geoinformation Science and Earth Observation, 2006.
[anomalysurvey] Chandola, Varun and Banerjee, Arindam and Kumar, Vipin; Anomaly Detection: A Survey, ACM Computing Surveys (CSUR), Vol 41(3), 2009.
[STDM] Shekhar, Shashi and Jiang, Zhe and Ali, Reem Y and Eftelioglu, Emre and Tang, Xun and Gunturi, Venkata and Zhou, Xun; Spatiotemporal data mining: A computational perspective, ISPRS International Journal of Geo-Information, Vol 4(4), 2015, pp 2306-2338.
[droughtstudy] Rouault, Mathieu and Richard, Yves; Intensity and spatial extent of droughts in southern Africa, Geophysical Research Letters, Vol 32(15), 2005.
[bishop] Bishop, Christopher M; Pattern Recognition and Machine Learning, 2006.
[neal] Neal, Radford M; Probabilistic inference using Markov chain Monte Carlo methods, 1993.
[spatstat] Gelfand, Alan E and Diggle, Peter and Guttorp, Peter and Fuentes, Montserrat; Handbook of Spatial Statistics, 2010.
[local] Du, Juan and Fang, Jian and Xu, Wei and Shi, Peijun; Analysis of dry/wet conditions using the standardized precipitation index and its potential usefulness for drought/flood monitoring in Hunan Province, China, Stochastic Environmental Research and Risk Assessment, Vol 27(2), 2013, pp 377-387.
[MRF] Kindermann, Ross and Snell, Laurie; Markov Random Fields and their Applications, 1980.
[droughtprop] Tu, Xinjun and Singh, Vijay P and Chen, Xiaohong and Ma, Mingwei and Zhang, Qiang and Zhao, Yong; Uncertainty and variability in bivariate modeling of hydrological droughts, Stochastic Environmental Research and Risk Assessment, Vol 30(5), 2016, pp 1317-1334.
[MRFdrought] Fu, Qiang and Banerjee, Arindam and Liess, Stefan and Snyder, Peter K; Drought detection of the last century: An MRF-based approach, SIAM International Conference on Data Mining (SDM), 2012.
[bayesflood] Yan, Hongxiang and Moradkhani, Hamid; A regional Bayesian hierarchical model for flood frequency analysis, Stochastic Environmental Research and Risk Assessment, Vol 29(3), 2015, pp 1019-1036.
[MRFspatialgibbs] Haran, Murali and Hodges, James S and Carlin, Bradley P; Accelerating computation in Markov random field models for spatial data via structured MCMC, Journal of Computational and Graphical Statistics, Vol 12(2), 2003, pp 249-264.
[mcmc] Robert, Christian and Casella, George; Monte Carlo Statistical Methods, 2013.
[mcmc2] Diaconis, Persi; The Markov Chain Monte Carlo revolution, Bulletin of the American Mathematical Society, Vol 46(2), 2009, pp 179-205.
[gibbsmrf] Rue, Håvard; Fast sampling of Gaussian Markov Random Fields, Journal of the Royal Statistical Society: Series B (Statistical Methodology), Vol 63(2), 2001, pp 325-338.
[MRFgibbs] Stoehr, Julien; A review on statistical inference methods for discrete Markov random fields, arXiv preprint arXiv:1704.03331, 2017.
[starmax] Stoffer, D S; Time Series Analysis: Theory and Practice, Elsevier/North Holland, New York, 1985.
[seqseg] Sun, Zhubin and Liu, Xiaodong and Wang, Lizhu; A hybrid segmentation method for multivariate time series based on the dynamic factor model, Stochastic Environmental Research and Risk Assessment, Vol 31(6), 2017, pp 1291-1304.
[GP1] Datta, Abhirup and Banerjee, Sudipto and Finley, Andrew O and Gelfand, Alan E; Hierarchical nearest-neighbor Gaussian process models for large geostatistical datasets, Journal of the American Statistical Association, Vol 114(514), 2016, pp 800-812.
[GP2] Katzfuss, Matthias and Guinness, Joseph; A general framework for Vecchia approximations of Gaussian processes, arXiv preprint arXiv:1708.06302, 2017.
[MRFgibbsfast] Brown, D Andrew and McMahan, Christopher S; Sampling strategies for fast updating of Gaussian Markov random fields, arXiv preprint arXiv:1702.05518, 2017.
[spatsep] Cressie, N and Huang, H C; Classes of nonseparable, spatio-temporal stationary covariance functions, Journal of the American Statistical Association, 1994, pp 1330-1340.
[STClus] Kisilevich, Slava and Mansmann, Florian and Nanni, Mirco and Rinzivillo, Salvatore; Spatio-temporal clustering, Springer, 2009.
[optics] Ankerst, Mihael and Breunig, Markus M and Kriegel, Hans-Peter and Sander, Jörg; OPTICS: ordering points to identify the clustering structure, ACM SIGMOD Record, Vol 28(2), 1999, pp 49-60.
[extreme] Goswami, Bhupendra Nath and Venugopal, V and Sengupta, D and Madhusoodanan, M S and Xavier, Prince K; Increasing trend of extreme rain events over India in a warming environment, Science, Vol 314(5804), 2006, pp 1442-1445.
{ "authors": [ "Adway Mitra", "Ashwin K. Seshadri" ], "categories": [ "stat.AP" ], "primary_category": "stat.AP", "published": "20170327110400", "title": "Detection of Spatiotemporally Coherent Rainfall Anomalies Using Markov Random Fields" }
More is Less: A More Complicated Network with Less Inference Complexity

(This paper was accepted by IEEE CVPR 2017.)

Xuanyi Dong^1 (This work was done when Xuanyi Dong was an intern at 360 AI Institute.), Junshi Huang^2, Yi Yang^1, Shuicheng Yan^2,3
^1CAI, University of Technology Sydney, ^2360 AI Institute, ^3National University of Singapore
dongxuanyi888@icloud.com; huangjunshi@360.cn; yi.yang@uts.edu.au; yanshuicheng@360.cn

In this paper, we present a novel and general network structure towards accelerating the inference process of convolutional neural networks, which is more complicated in network structure yet with less inference complexity. The core idea is to equip each original convolutional layer with another low-cost collaborative layer (LCCL), and the element-wise multiplication of the ReLU outputs of these two parallel layers produces the layer-wise output. The combined layer is potentially more discriminative than the original convolutional layer, and its inference is faster for two reasons: 1) the zero cells of the LCCL feature maps will remain zero after element-wise multiplication, and thus it is safe to skip the calculation of the corresponding high-cost convolution in the original convolutional layer; 2) the LCCL is very fast if it is implemented as a 1 × 1 convolution or as a single filter shared by all channels. Extensive experiments on the CIFAR-10, CIFAR-100 and ILSVRC-2012 benchmarks show that our proposed network structure can accelerate the inference process by 32% on average with negligible performance drop.

§ INTRODUCTION

Despite the continuously improved performance of convolutional neural networks (CNNs) <cit.>, their computation costs are still tremendous. Without the support of high-efficiency servers, it is hard to deploy CNN models in real-world applications. For example, to process a 224 × 224 image, AlexNet <cit.> requires 725M FLOPs with 61M parameters, VGG-S <cit.> involves 2640M FLOPs with 103M parameters, and GoogleNet <cit.> needs 1566M FLOPs with 6.9M parameters. Therefore, to leverage the success of deep neural networks on mobile devices with limited computational capacity, accelerating network inference has become imperative.

In this paper, we investigate the acceleration of CNN models based on the observation that the response maps of many convolutional layers are usually sparse after ReLU <cit.> activation. Therefore, instead of fully calculating the layer response, we can skip calculating the zero cells in the ReLU output and only compute the values of the non-zero cells in each response map. Theoretically, the locations of the zero cells can be predicted by a lower-cost layer, and the values of the non-zero cells from this lower-cost layer can be collaboratively updated by the responses of the original filters. The low-cost collaborative layer (LCCL), accompanied by the original layer, constitutes the basic element of our proposed low-cost collaborative network (LCCN). To equip each original convolutional layer with an LCCL, we apply an element-wise multiplication on the response maps from the LCCL and the original convolutional layer, as illustrated in Fig. <ref>.
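Before the formal description in Section 3, the following minimal sketch (our own illustration, not the authors' code; the conv helper and shapes are assumptions) conveys the gating idea: wherever the cheap map is zero after ReLU, the expensive convolution need not be evaluated.

```python
import numpy as np

def lccl_gated_layer(U, W, W_prime, conv):
    """Sketch of one LCCN block: a cheap collaborative map V' gates an
    expensive convolution V, and the block outputs V* = V' * V.

    `conv(U, W)` is assumed to return the full convolution response;
    a real kernel would skip the masked-out positions instead of
    computing them and discarding the result, as done here.
    """
    V_prime = np.maximum(conv(U, W_prime), 0.0)  # low-cost layer + ReLU
    mask = V_prime != 0                          # positions worth computing
    V = np.where(mask, conv(U, W), 0.0)          # expensive layer (gated)
    return V_prime * V                           # final response map V*
```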
In the training phase, this architecture can be naturally trained by the existing stochastic gradient descent (SGD) algorithm with backpropagation. First we calculate the response map V^' of the LCCL after the activation layer, and then use V^' to guide the calculation of the final response maps.

Despite the considerable amount of research in which sparsity-based frameworks are used to accelerate network inference <cit.>, we claim that the LCCN is unique. Generally, most of these sparsity-based methods <cit.> integrate the sparsity property as a regularizer into the learning of parameters, which usually harms the performance of the network. Moreover, to further increase acceleration, some methods even arbitrarily zeroize the values of the response maps according to a pre-defined threshold. Compared with these methods, our LCCN automatically sets the negatives to zero, and precisely calculates the positive values in the response map with the help of the LCCL. This two-stream strategy reaches a remarkable acceleration rate while maintaining a performance level comparable to the original network.

The main contributions are summarized as follows:

* We propose a general architecture to accelerate CNNs, which leverages low-cost collaborative layers to accelerate each convolutional layer.
* To the best of our knowledge, this is the first work to leverage a low-cost layer to accelerate the network. Equipping each convolutional layer with a collaborative layer is quite different from the existing acceleration algorithms.
* Experimental studies show significant improvements by the LCCN on many deep neural networks when compared with existing methods (e.g., a 34% speedup on ResNet-110).

§ RELATED WORK

Low Rank. Tensor decomposition with low-rank approximation based methods is commonly used to accelerate deep convolutional networks. For example, in <cit.>, the authors exploited the redundancy between convolutional filters and used low-rank approximation to compress convolutional weight tensors and fully connected weight matrices. In <cit.>, Yang et al. used an adaptive fastfood transform to replace a fully connected layer with a series of simple matrix multiplications, rather than the original dense and large ones. Liu et al. <cit.> propose a sparse decomposition to reduce the redundancy in convolutional parameters. In <cit.>, the authors used generalized singular vector decomposition (GSVD) to decompose an original layer into two approximated layers with reduced computation complexity.

Fixed Point. Some popular approaches to accelerate test-phase computation are based on “fixed point". In <cit.>, the authors trained deep neural networks with a dynamic fixed-point format, which achieves success on a set of state-of-the-art neural networks. Gupta et al. <cit.> use stochastic rounding to train deep networks with a 16-bit wide fixed-point number representation. In <cit.>, a standard network with binary weights represented by 1 bit was trained to speed up networks. Then, Rastegari et al. <cit.> further explored binary networks and expanded them to binarize the data tensor of each layer, increasing the speed by 57 times.

Product Quantization. Some other researchers focus on product quantization to compress and accelerate CNN models. The authors of <cit.> proposed a framework to accelerate the test-phase computation by quantizing the network parameters and learning better quantization with error correction.
Han et al. <cit.> proposed to use a pruning stage to reduce the connections between neurons, and then fine-tuned networks with weight sharing to quantize the convolutional parameters from 32 bits down to 5 bits. In another work <cit.>, the authors trained neural networks with extremely low precision, and extended this success to quantized recurrent neural networks. Zhou et al. <cit.> generalized the method of binary neural networks to allow networks with arbitrary bit-width in weights, activations, and gradients.

Sparsity. Some algorithms exploit the sparsity property of convolutional kernels or response maps in the CNN architecture. In <cit.>, many neurons were decimated by incorporating sparse constraints into the objective function. In <cit.>, a CNN model was proposed to process spatially-sparse inputs, which can be exploited to increase the speed of the evaluation process. In <cit.>, the authors used the group-sparsity regularizer to prune the convolutional kernel tensor in a group-wise fashion. In <cit.>, they increased the speed of convolutional layers by skipping their evaluation at some fixed spatial positions. In <cit.>, the authors presented a compression technique to prune the filters with minor effects on the output accuracy.

Architecture. Some researchers improve the efficiency of networks by carefully designing the structure of neural networks. In <cit.>, a simple model was trained by distilling the knowledge from multiple cumbersome models, which helps to reduce the computation cost while preserving the accuracy. Romero et al. <cit.> extended the knowledge distillation approach to train a student network, which is deeper but thinner than the teacher network, by extracting the knowledge of the teacher network. In this way, the student network uses fewer parameters and less running time to gain a considerable speedup compared with the teacher network. Iandola et al. <cit.> proposed a small DNN architecture that achieves performance similar to AlexNet with 50× fewer parameters and much less computation time via the same strategy.

§ LOW-COST COLLABORATIVE NETWORK

In this section, we present our proposed architecture for the acceleration of deep convolutional neural networks. First, we introduce the basic notations used in the following sections. Then, we demonstrate the detailed formulation of the acceleration block and extend our framework to general convolutional neural networks. Finally, we discuss the computation complexity of our acceleration architecture.

§.§ Preliminary

Let us recall the convolutional operator. For simplicity, we discuss the problem without the bias term. Given one convolutional layer, we assume the shapes of the input tensor U and the output tensor V are X × Y × C and X × Y × T, where X and Y are the width and height of the response map, and C and T represent the channel numbers of the response maps U and V. A tensor W with size k × k × C × T is used as the weight filter of this convolutional layer. V_t(x,y) represents the element V(x,y,t). Then, the convolutional operator can be written as:

V_t(x,y) = ∑_i,j=1^k ∑_c=1^C W_t(i,j,c) U(x+i-1, y+j-1, c)

where W_t(x,y) is the element W(x,y,t). In the LCCN, the output map of each LCCL should have the same size as that of the corresponding convolutional layer, which means that the shape of the tensor V^' is X × Y × T. Similarly, we assume the weight kernel of V^' is W^'. Therefore, the formula of the LCCL can be written as:

V^'_t(x,y) = ∑_i,j=1^k^' ∑_c=1^C W^'_t(i,j,c) U(x+i-1, y+j-1, c)

§.§ Overall Structure

Our acceleration block is illustrated in Fig.
<ref>. The green block V^* represents the final response map collaboratively calculated by the original convolutional layer and the LCCL. Generally, it can be formulated as: V^*_t(x,y) =0 if V^'_t(x,y) = 0 V^'_t(x,y) × V_t(x,y) if V^'_t(x,y) ≠ 0 where V is the output response map from the original convolutional layer and V^' is from the LCCL. In this formula, the element-wise product is applied to V and V^' to calculate the final response map. Due to the small size of the LCCL, the computation cost of V^' can be ignored. Meanwhile, since the zero cells in V^' will stay zero after the element-wise multiplication, the computation cost of V is further reduced by skipping the calculation of zero cells according to the positions of zero cells in V^'. Obviously, this strategy leads to a speedup of a single convolutional layer. To further accelerate the whole network, we can equip most convolutional layers with LCCLs. §.§ Kernel Selection As illustrated in the orange box in Fig. <ref>, the first form exploits a 1 × 1 × C × T kernel (k^' = 1) for each original kernel to collaboratively estimate the final response map. The second structure uses a k^'× k^'× C × 1 filter (we carefully tune the parameter k' and set k' = k) shared across all the original filters to calculate the final result. Both these collaborative layers use less time during inference when compared with the original convolutional layer, thus they are theoretically able to obtain acceleration. In many efficient deep learning frameworks such as Caffe <cit.>, the convolution operation is reformulated as matrix multiplication by flattening certain dimensions of tensors, such as: V = U^*× W^*   s.t.    U^*∈ R^XY × k^2C , W^*∈ R^k^2C × T Each row of the matrix U^* is related to the spatial position of the output tensor transformed from the tensor U, and W^* is a reshaped tensor from the weight filters W. These efficient implementations take advantage of the high efficiency of BLAS libraries, e.g., GEMM (the matrix-matrix multiplication function) and GEMV (the matrix-vector multiplication function). Since each position of the skipped cells in V^* corresponds to one row of the matrix U^*, we can achieve a realistic speedup in BLAS libraries by reducing the matrix size in the multiplication function. Different structures of the LCCL need different implementations. For a k × k × C × 1 kernel, the positions of the skipped cells in the original convolutional layer are the same in different channels. In this situation, we can reduce the size of U^* to S^'× k^2C, where S^' is the number of non-zero elements in V^'. For a 1 × 1 × C × T kernel, the positions of zero cells are different in different channels, so it is infeasible to directly use the matrix-matrix multiplication function to exploit the zero cells predicted by the LCCL. In this case, we have to separate the matrix-matrix multiplication into multiple matrix-vector multiplications. However, it is difficult to achieve the desired acceleration effect with this approach. The unsatisfying acceleration performance of 1 × 1 × C × T filters is caused by the inferior efficiency of multiple GEMV calls, and some extra operations also cost more time (e.g., data reconstruction). Therefore, we choose the k × k × C × 1 structure for our LCCL in our experiments, and leave the acceleration of 1 × 1 × C × T filters as future work. §.§ Sparsity Improvement According to the previous discussion, the simplest way to accelerate the model is to directly multiply the tensor V^' and the tensor V, as sketched below.
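To make the two-stream computation concrete, the following is a minimal NumPy sketch of the element-wise gating and of the row-reduced GEMM used for the k × k × C × 1 kernel. This is our own illustration, not the paper's released code, and all function names are hypothetical.

import numpy as np

def lccl_gate(V, V_prime):
    """Element-wise collaboration: zero cells of the LCCL map V' mask the
    corresponding cells of the main response map V."""
    return np.where(V_prime == 0.0, 0.0, V_prime * V)

def sparse_conv_rows(U_star, W_star, V_prime_flat):
    """Row-reduced GEMM for a k x k x C x 1 LCCL kernel: only the spatial
    positions predicted as non-zero by the LCCL are actually computed.

    U_star:       (X*Y, k*k*C) im2col matrix of the input tensor U.
    W_star:       (k*k*C, T)   reshaped weight filters W.
    V_prime_flat: (X*Y,)       flattened LCCL response map, shared over channels.
    """
    V = np.zeros((U_star.shape[0], W_star.shape[1]))
    rows = np.nonzero(V_prime_flat)[0]    # the S' surviving rows
    V[rows] = U_star[rows] @ W_star       # GEMM on an S' x (k*k*C) sub-matrix
    return V * V_prime_flat[:, None]      # element-wise collaboration

The first function implements the naive dense gating discussed next; the second shows how the shared zero pattern of a k × k × C × 1 kernel maps onto a smaller GEMM.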
However, this approach cannot achieve a favourable acceleration performance due to the low sparsity rate of V^'. To improve the sparsity of V^', a simple and effective way is the ReLU <cit.> activation, which sets the negative values to zero. Moreover, due to the redundancy of positive activations, we can also append an L_1 loss to the LCCL to further improve the sparsity rate. In this way, we achieve a smooth L_1L_2( X) = μ X + ρ| X| regularizer penalty for each V^', where: X = √(∑_i = 1^n X_i^2 )  ,  | X| = ∑_i = 1^n | X_i| However, there are thousands of free parameters in the regularizer term and the additional loss always degrades the classification performance, as it is difficult to achieve a balance between the classification performance and the acceleration rate. Recently, Batch Normalization (BN) <cit.> was proposed to improve the network performance and increase the convergence speed during training by stabilizing the distribution and reducing the internal covariate shift of the input data. During this process, we observe that the sparsity rate of each LCCL is also increased. As shown in Table <ref>, we can find that the BN layer increases the sparsity of the LCCL followed by ReLU activation, and thus can further improve the acceleration rate of our LCCN. We conjecture that the BN layer balances the distribution of V^' and reduces the redundancy of positive values in V^' by discarding some redundant activations. Therefore, to increase the acceleration rate, we carefully integrate the BN layer into our LCCL. Inspired by the pre-activation residual networks <cit.>, we exploit different strategies for activation and integration of the LCCL. Generally, the input of this collaborative layer can be either before activation or after activation. Taking pre-activation residual networks <cit.> as an example, we illustrate the “Bef-Aft" connection strategy at the bottom of Fig. <ref>. “Bef" represents the case where the input tensor is taken from the flow before BN and ReLU activation. “Aft" represents the case where the input tensor is the same as that of the original convolutional layer, after BN and ReLU activation. From the “Bef-Aft" strategy in Fig. <ref>, the “Bef-Bef", “Aft-Bef" and “Aft-Aft" strategies can be easily derived. During our experiments, we find that input tensors with the “Bef" strategy are quite diverse when compared with those of the corresponding convolutional layer due to the different activations. In this strategy, the LCCL cannot accurately predict the zero cells for the original convolutional layer. So it is better to use the same input tensor as the original convolutional layer, i.e., the “Aft" strategy. §.§ Computation Complexity Now we analyze the test-phase numerical calculation with our acceleration architecture. For each convolutional layer, the forward procedure mainly consists of two components, the low-cost collaborative layer and the skip-calculation convolutional layer. Suppose the sparsity (ratio of zero elements) of the response map V^' is r. We formulate the detailed computation cost of the convolutional layer and compare it with the one equipped with our LCCL. As shown in Table <ref>, the speedup ratio is highly dependent on r. The term 1/C costs little time since the input tensor has many channels in most CNN models, and it barely affects the acceleration performance. According to the experiments, the sparsity r reaches a high ratio in certain layers. These two facts indicate that we can obtain a considerable speedup ratio.
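As a rough illustration of how the speedup depends on r (the exact cost expressions of Table <ref> are not reproduced here, so the following estimate is our own simplification), one can count the multiply-accumulate operations per spatial position:

def estimated_speedup(r, k, C, T, k_lccl=None):
    """Rough per-layer speedup estimate for a k' x k' x C x 1 LCCL.

    r : sparsity (ratio of zero elements) of the LCCL response map V'.
    Original cost per output position:  k*k*C*T multiply-accumulates.
    LCCL overhead per position:         k'*k'*C (a single output channel).
    Remaining main-layer cost:          (1 - r) * k*k*C*T.
    """
    k_lccl = k if k_lccl is None else k_lccl
    original = k * k * C * T
    accelerated = k_lccl ** 2 * C + (1.0 - r) * original
    return original / accelerated

print(estimated_speedup(0.5, 3, 64, 64))   # ~1.9x for r = 0.5 and 3x3 kernels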
Detailed statistical results are described in the experiments section. In residual-based networks, if the output of one layer in the residual block is all zero, we can skip the calculation of the descendant convolutional layers and directly predict the results of this block. This property helps to further accelerate the residual networks. § EXPERIMENTS In this section, we conduct experiments on three benchmark datasets to validate the effectiveness of our acceleration method. §.§ Benchmark Datasets and Experimental Setting We mainly evaluate our LCCN on three benchmarks: CIFAR-10, CIFAR-100 <cit.> and ILSVRC-12 <cit.>. The CIFAR-10 dataset contains 60,000 32 × 32 images, which are categorized into 10 classes and each class contains 6,000 images. The dataset is split into 50,000 training images and 10,000 testing images. The CIFAR-100 <cit.> dataset is similar to CIFAR-10, except that it has 100 classes and 600 images per class. Each class contains 500 training images and 100 testing images. For CIFAR-10 and CIFAR-100, we split the 50k training dataset into 45k/5k for validation. The ImageNet 2012 dataset <cit.> is a famous benchmark which contains 1.28 million training images of 1,000 classes. We evaluate on the 50k validation images using both the top-1 and top-5 error rates. Deep residual networks <cit.> have shown impressive performance with good convergence behaviors. Their significance has increased, as shown by the amount of research <cit.> being undertaken. We mainly apply our LCCN to increase the speed of these improved deep residual networks. In the CIFAR experiments, we use the default parameter setting as in <cit.>. However, it is obvious that our LCCN is more complicated than the original CNN model, which leads to a requirement for more training epochs to converge to a stable solution. So we increase the training epochs and apply a different learning rate strategy to train our LCCN. We start the learning rate at 0.01 to warm up the network and then increase it to 0.1 after 3% of the total iterations. Then it is divided by 10 at 45%, 70% and 90% of the iterations, where the errors plateau. We tune the training epoch numbers from {200, 400, 600, 800, 1000} according to the validation data. On ILSVRC-12, we follow the same parameter settings as <cit.> but use different data augmentation strategies. (1) Scale augmentation: we use the scale and aspect ratio augmentation <cit.> instead of the scale augmentation <cit.> used in <cit.>. (2) Color augmentation: we use the photometric distortions from <cit.> to improve the standard color augmentation <cit.> used in <cit.>. (3) Weight decay: we apply weight decay to all weights and biases. These three differences should slightly improve performance (refer to the Facebook implementation[<https://github.com/facebook/fb.resnet.torch>]). According to our experience with CIFAR, we extend the training to 200 epochs, and use a learning rate starting at 0.1, which is then divided by 10 every 66 epochs. For the CIFAR experiments, we report the acceleration performance and the top-1 error to compare with the results provided in the original paper <cit.>.
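For concreteness, the CIFAR learning-rate strategy described above can be written as the following schedule (a sketch of our own; the warm-up value and milestone fractions are those given in the text):

def lr_schedule(step, total_steps, base_lr=0.1, warmup_lr=0.01):
    """Warm up at 0.01, raise to 0.1 after 3% of the iterations,
    then divide by 10 at 45%, 70% and 90% of training."""
    progress = step / total_steps
    if progress < 0.03:
        return warmup_lr
    lr = base_lr
    for milestone in (0.45, 0.70, 0.90):
        if progress >= milestone:
            lr /= 10.0
    return lr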
On ILSVRC-12, since we use different data augmentation strategies, we report the top-1 error of the original CNN models trained in the same way as ours, and we mainly compare the accuracy drop with other state-of-the-art acceleration algorithms including: (1) Binary-Weight-Networks (BWN) <cit.>, which binarizes the convolutional weights; (2) XNOR-Networks (XNOR) <cit.>, which binarizes both the convolutional weights and the data tensor; (3) Pruning Filters for Efficient ConvNets (PFEC) <cit.>, which prunes from CNNs the filters with small effect on the output accuracy. §.§ Experiments on CIFAR-10 and CIFAR-100 First, we study the influence on performance of using different connection strategies proposed in the Kernel Selection and Sparsity Improvement sections. We use the pre-activation ResNet-20 as our base model, and apply the LCCL to all convolutional layers within the residual blocks. Using the same training strategy, the results of four different connection strategies are shown in Table <ref>. Both collaborative layers with the after-activation method show the best performance with a considerable speedup ratio, because the Aft strategy receives the same input distribution as the corresponding convolution layer. We also tried to use the L_1L_2 loss to restrict the output maps of each LCCL, but this adds thousands of extra values that need to be optimized in the L_1L_2 loss function. In this case, the networks hardly converge and the performance is too poor to be compared. Furthermore, we analyze how performance is influenced by using different kernels in the LCCL. There are two forms of LCCL that collaborate with the corresponding convolutional layer. One is a tensor of size 1 × 1 × C × T (denoted as 1×1), and the other is a tensor of size k × k × C × 1 (denoted as k × k). As shown in Table <ref>, the k × k kernel shows a significant performance improvement with a similar speedup ratio compared with the 1×1 kernel. This may be caused by the fact that the k × k kernel has a larger receptive field than the 1 × 1 one. Statistics on the sparsity of each response map generated from the LCCL are illustrated in Fig. <ref>. This LCCN is based on ResNet-20 with each residual block equipped with an LCCL configured by a 1 × 1 × C × T kernel. To get stable and robust results, we increase the training epochs as much as possible, and the sparsity variations for all 400 epochs are provided. The first few collaborative layers show a great speedup ratio, saving more than 50% of the computation cost. Even though the last few collaborative layers contribute less than the first few, the k × k × C × 1 based method is capable of achieving more than a 30% increase in speed. Hitherto, we have demonstrated the feasibility of training CNN models equipped with our LCCL using different low-cost collaborative kernels and strategies. Considering the performance and realistic implementation, we select the weight-sharing kernel for our LCCL. This will be used in all following experiments by default. Furthermore, we experiment with more CNN models <cit.> accelerated by our LCCN on CIFAR-10 and CIFAR-100. Except for ResNet-164 <cit.>, which uses a bottleneck residual block {[ 1 × 1; 3 × 3; 1 × 1 ]}, all other models use a basic residual block {[ 3 × 3; 3 × 3 ]}. We use the LCCL to accelerate all convolutional layers except for the first layer, which operates on the original image and costs little time due to the small number of input channels (3 RGB channels).
In a bottleneck structure, it is hard to reach a good convergence with all the convolutional layers accelerated. The convolutional layer with a 1 × 1 kernel is mainly used to reduce the dimension to remove computational bottlenecks, which overlaps with the acceleration effect of our LCCL. This property makes layers with a 1 × 1 kernel more sensitive to collaboration with our LCCL. Thus, we apply our LCCL to modify the first and second convolutional layers in the bottleneck residual block on CIFAR-10. For CIFAR-100, we only modify the second convolutional layer, with a 3 × 3 kernel, in the bottleneck residual block. The details of the theoretical acceleration of the numerical calculation and the accuracy performance are presented in Table <ref> and Table <ref>. Experiments show our LCCL works well on much deeper convolutional networks, such as pre-activation ResNet-164 <cit.> or WRN-40-4 <cit.>. Convolutional operators dominate the computation cost of the whole network, holding more than 90% of the FLOPs in residual-based networks. Therefore, it is beneficial for our LCCN to accelerate such convolutionally-dominated networks, rather than networks with high-cost fully connected layers. In practice, we are always able to achieve more than a 30% calculation reduction for deep residual-based networks. With a similar calculation quantity, our LCCN is capable of outperforming the original deep residual networks. For example, on the CIFAR-100 dataset, the LCCN on WRN-52-1 obtains a higher accuracy than the original WRN-40-1 with only about 2% more cost in FLOPs. Note that our acceleration is data-driven, and can achieve a much higher speedup ratio on “easy" data. In cases where high accuracy is not achievable, it predicts many zeros, which harms the network structure. Theoretically, the LCCN will achieve the same accuracy as the original network if we set the LCCL as an identity (dense) network. To improve efficiency, the outputs of the LCCL need to be sparse, which may marginally sacrifice accuracy in some cases. We also observe an accuracy gain in some other cases (WRN-52-1 in Table <ref>), because the sparse structure can reduce the risk of overfitting. §.§ Experiments on ILSVRC-12 We test our LCCN on ResNet-18 and ResNet-34 with some structural adjustments. On ResNet-18, we accelerate all convolutional layers in the residual blocks. However, ResNet-34 is hard to optimize with all the convolutional layers accelerated. So we skip the first residual block of each stage (layers 2, 3, 8, 9, 16, 17, 28, 29) to make it more amenable to collaboration. The performance of the original models and our LCCN with the same setting is shown in Table <ref>. We demonstrate the success of the LCCN on ResNet-18 and ResNet-34 <cit.>, and both obtain a meaningful speedup with a slight performance drop. We compare our method with other state-of-the-art methods in Table <ref>. As we can see, similar to other acceleration methods, there is some performance drop. However, our method achieves better accuracy than the other acceleration methods. §.§ Theoretical vs. Realistic Speedup There is often a wide gap between the theoretical and realistic speedup ratios. It is caused by the limited efficiency of BLAS libraries, IO delay, buffer switching and other factors. So we compare the theoretical and realistic speedup of our LCCN. We test the realistic speed based on Caffe <cit.>, an open source deep learning framework. OpenBLAS is used as the BLAS library in Caffe for our experiments. We use CPU-only mode and a single thread to make a fair comparison.
The results are shown in Table <ref>. Discussion. As shown in Table <ref>, our realistic speedup ratio is less than the theoretical one, which is mainly caused by two reasons. First, we use data reconstruction and matrix-matrix multiplication to implement the convolution operator, as in Caffe <cit.>. The data reconstruction operation costs too much time, making the cost of our LCCL much higher than its theoretical cost. Second, the front convolution layers usually take more time but contain less sparsity than the rear ones, which reduces the overall acceleration effect of the whole convolutional neural network. These two defects can be solved in theory, and we will focus on the realistic speedup in future work. Platform. The idea of reducing the matrix size in convolutional networks can in principle be applied to GPUs as well, even though some modifications to our LCCN should be made to better leverage the existing GPU libraries. Further, our method is independent of the platform, and should work on the FPGA platform with customization. §.§ Visualization of LCCL Here is an interesting observation about our LCCL. We visualize the results of the LCCN on the PASCAL VOC2007 <cit.> training dataset. We choose ResNet-50 as the competitor, and add an additional 20-channel convolutional layer with an average pooling layer as the classifier. For our LCCN, we equip the last 6 layers of this competitor model with our LCCL. After fine-tuning, the feature maps generated from the last LCCL and the corresponding convolutional layer of the competitor model are visualized in Fig. <ref>. As we can observe, our LCCL might have the ability to highlight the fields of foreground objects, and to eliminate the impact of the background via the collaboration property. For example, in the second triplet, car and person are activated simultaneously in the same response map by the LCCL. At first glance, these highlighted areas look similar to the locations obtained by attention models. But they are intrinsically different in many ways, e.g., motivations, computation operations, response meaning and structures. § CONCLUSION In this paper, we propose a more complicated network structure, yet with less inference complexity, to accelerate deep convolutional neural networks. We equip the original convolution layer with a low-cost collaborative layer. This collaboration structure speeds up the test-phase computation by skipping the calculation of the zero cells predicted by the LCCL. In order to overcome the difficulty of achieving acceleration on basic LCCN structures, we introduce ReLU and BN to enhance sparsity and maintain performance. The acceleration of our LCCN is data-dependent, which is more reasonable than hard acceleration structures. In the experiments, we accelerate various models on CIFAR and ILSVRC-12, and our approach achieves a significant speed-up with only a slight loss in the classification accuracy. Furthermore, our LCCN can be applied to most tasks based on convolutional networks (e.g., detection, segmentation and identification). Meanwhile, our LCCN is capable of plugging in some other acceleration algorithms (e.g., fixed-point or pruning-based methods), which will further enhance the acceleration performance.
http://arxiv.org/abs/1703.08651v2
{ "authors": [ "Xuanyi Dong", "Junshi Huang", "Yi Yang", "Shuicheng Yan" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170325055142", "title": "More is Less: A More Complicated Network with Less Inference Complexity" }
Virtual Black Holes from Generalized Uncertainty Principle and Proton Decay Salwa Alsaleh^1, Abeer Al-Modlej^1, Ahmed Farag Ali^2,3 =============================================================================================================================== ^1 Department of Physics and Astronomy, King Saud University, Riyadh 11451, Saudi Arabia ^2 Netherlands Institute for Advanced Study, Korte Spinhuissteeg 3, 1012 CG Amsterdam, Netherlands ^3 Department of Physics, Faculty of Science, Benha University, Benha, 13518, Egypt We investigate the formation of virtual black holes in the context of the generalized uncertainty principle (GUP), as mediators for a proton decay process which is forbidden by the standard model. Then, we calculate lower bounds on the GUP deformation parameter from the experimental bound on the half-life of the proton. § INTRODUCTION Planck-scale physics is the main focus of quantum gravity approaches. This is because the merger of quantum uncertainty, which predicts enormous fluctuations of energy at small scales, with the theory of general relativity, which associates energy with spacetime curvature, leads to the notion of spacetime foam at the Planck scale <cit.>. This foam has been investigated in several models of quantum gravity, like the spin foam models in loop quantum gravity (LQG) <cit.>, S^2 × S^2 bubbles of `virtual' black holes as described by Hawking <cit.>, or fluctuating geometry as described by group field theory (GFT) and string field theory (SFT) <cit.>. The different models of quantum gravity agree that the structure of spacetime at the small scale is significantly different from the smooth structure described by general relativity <cit.>. However, there is disagreement about the particular scale at which this `fuzziness' in the spacetime starts to be prominent. This is mainly due to the disagreement on whether there exist compactified or large extra dimensions, as predicted by string theory and the Randall-Sundrum model <cit.>, respectively, or there is no need for such an assumption, as in LQG, GFT and the Hawking bubble model <cit.>. The existence of a minimal length scale, commonly predicted by various approaches to quantum gravity, is manifested phenomenologically by a deformation of the standard momentum dispersion relations <cit.> to incorporate a cut-off length scale ℓ_p or, equivalently, an energy scale E_p <cit.>. This can be achieved by a deformation of the metric, as in gravity's rainbow, through the rainbow functions <cit.>, or by a deformation of the Heisenberg algebra <cit.>. The latter is known as the generalized uncertainty principle (GUP) <cit.>, which is based on deforming the commutation relation between the momentum and position operators in quantum mechanics, while keeping the associative structure (Lie algebra structure) of the original commutation relations.
The most general type of deformation is <cit.>: [x^μ, x^ν] = 0 ,  [p_μ, p_ν] = 0 ,  [x^μ, p_ν] = ħ/2η^μ _ ν{1- α/E_p( |p|η^μ _ ν+p^μ p_ν/|p|) + α^2/E_p^2( p^2 η^μ _ ν +3 p^μ p_ν) } , which leads to the generalized uncertainty relation Δ x^μΔ p_μ≥ħ/2[ 1+ ( α/ E_p √(⟨ p^2 ⟩)+4 α^2/E_p^2) Δ p^2 + 4 α^2/E_p^2⟨ p⟩ ^2 -2α/E_p√(⟨ p^2 ⟩)] The GUP introduces two terms to the standard uncertainty principle, one linear and the other quadratic in momentum. This uncertainty relation is invariant under the doubly-special relativity transformations, which preserve both the speed of light and the maximum energy E_p <cit.>. It was shown that the GUP implies discreteness of space at short distance scales, as predicted by LQG <cit.>. A possible observable consequence of the existence of virtual black holes at the scale of quantum gravity is proton decay <cit.>, a process that is forbidden by the standard model due to the conservation of the baryon number B. The proton half-life is estimated from experimental observations to be larger than ∼ 10^34 years <cit.>. In quantum gravity approaches, the virtual black hole is considered as a mediator for proton decay. Simply, consider the proton as a spherical object of radius r_proton∼ 10^-15 m, inside of which virtual black holes form (the proton radius being ∼ 10^20 times larger than the Planck length). Two of the three point-like quarks could fall into the black hole, which then evaporates away, conserving only the energy, charge and angular momentum of the two quarks, due to the no-hair theorem <cit.>. The decay products could vary; rarely, they could be the same quarks that fell into the black hole, but in most cases they would be other types of particles. The proton half-life for this process is calculated by considering the probability for the two quarks to fall into the black hole before it evaporates, given the proton crossing time ∼ M_proton^-1∼ 10^-31 years. Considering the proton as a hard sphere of radius ∼ M_proton^-1, the probability for two quarks to be confined within a region of Planck volume is ∼ (M_proton/M_qg)^3. The fact that virtual black holes decay within a Planck time gives another factor of M_proton/M_qg. The proton half-life is therefore <cit.>: τ_p ∼ M_proton^-1 (M_qg/M_proton)^4 where M_qg is defined to be the minimal mass of virtual black holes predicted by a particular quantum gravity model. It is observed that the proton half-life depends on the quantum gravity mass of the virtual black hole. In the above analysis, the mass M_qg is equal to the Planck mass M_p. Thus, using equation (<ref>), the proton half-life due to this process is of the order of ∼ 10^45 years. Proton decay via virtual black holes was also studied in models where large extra dimensions exist, like the Randall-Sundrum model. In this case the half-life changes because of these extra dimensions, and the probability changes because there is more `space' for the quarks to move in: τ_p ∼ M_proton^-1 (M_qg/M_proton)^D The quantum gravity mass M_qg here may not be the Planck mass, but it is bounded by M_qg >(M^D_p Λ^4)^1/D, where Λ∼ 10^16 GeV is the energy scale defined by the experimental bound <cit.>. However, some theories beyond the standard model, like grand unified theories (GUTs), supersymmetry (SUSY), the electroweak sphaleron anomaly <cit.> and magnetic monopoles, break baryon number conservation.
These models predict a half-life for the proton of ∼ 10^30 (M_X/10^15 GeV)^4 years for the SU(5) Georgi-Glashow <cit.> and SO(10) GUT <cit.> models, in which the decay is mediated by a massive X boson with mass M_X > 10^15 GeV, making the half-life at least 10^32 years <cit.>. SUSY models predict a proton half-life close to the experimental limit, scaling as ∼ M^-2_SUSY, of approximately 10^36-10^39 years <cit.>. Proton decays via the Higgs sector in SU(5) or SU(15) GUT and via magnetic monopoles are also predicted to have a half-life close to that of SUSY (in 4 or higher dimensions) <cit.>. The GUT theories conserve the quantum number B-L instead of the baryonic B and leptonic L numbers individually, allowing a decay channel for the proton into a lepton and an antiquark, for example the decay into a neutral pion and a positron via the channel p ⟶π^0 + e^+ which is the most famous decay channel. Nevertheless, the decay channel p →π^0 + μ^+ is also predicted, particularly from monopoles <cit.>. Table <ref> summarizes the proton half-life for different theories. We observe that there is a large gap between the half-lives of the proton predicted by D=4 quantum gravity and by GUT and SUSY models. The existence of higher dimensions could bring the half-lives closer, due to the effect of higher dimensional quantum gravity models on the quantum gravity mass M_qg. Considering the GUP deformation as a phenomenological model of quantum gravity, it would be interesting to investigate the effect of the GUP deformation on the quantum gravity mass M_qg and hence on the proton half-life. § VIRTUAL BLACK HOLES AND UNCERTAINTY PRINCIPLE It is known from the uncertainty principle à la Heisenberg that in order to localise a system with accuracy Δ x one needs to shine it with a photon of wavelength λ such that Δ x ∼λ. However, since that photon carries a momentum inversely proportional to its wavelength, i.e. p = h/λ, we obtain the well-known uncertainty principle, Δ x λ ^-1∼ 1 , Δ x Δ p ∼ h ∼ħ. From the standard dispersion relation of the photon momentum and energy E= p, it is obvious that localisation of a system to short distances adds significant amounts of energy to that system. If that system is the spacetime, we expect a gravitational back reaction to the energy added to the spacetime from the Einstein field equations: G_μν = 2κ T_μν We may approximate Eq. (<ref>) over a small region of spacetime of radius L to obtain <cit.> δ g_μν/L^2∼8 π G/L^3 E, where δ g_μν is the deviation from the flat metric, given by the fractional uncertainty in positions Δ x/L. We also identify the Schwarzschild radius as 2MG, with E = M. We obtain the uncertainty relation: Δ x δ r_s ∼ℓ_p^2 leading to the conclusion that one cannot localise a region of spacetime to within a radius smaller than the Planck length without causing a virtual black hole to form <cit.>, giving the spacetime a `quantum' foam structure at the Planck scale <cit.>. This is a direct result of the HUP and is not connected to any particular model of quantum gravity, including the GUP. § VIRTUAL BLACK HOLES AND GUP Since the formation of virtual black holes is a direct result of the marriage between the Heisenberg uncertainty principle and general relativity, deforming the Heisenberg uncertainty principle (HUP) into a generalised uncertainty principle (GUP) will affect the nature of the virtual black holes and their physical properties; for further reading about the relation between virtual black holes and GUP, cf. <cit.>.
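Before introducing the GUP corrections, it is instructive to check numerically the D = 4 estimate τ_p ∼ M_proton^-1 (M_qg/M_proton)^4 with M_qg = M_p; the short Python script below (our own back-of-the-envelope check, using standard values) reproduces the ∼ 10^45 years quoted above.

M_planck = 1.22e19        # GeV, four-dimensional Planck mass
M_proton = 0.938          # GeV
crossing_time = 2.2e-32   # years, ~ hbar/(M_proton c^2), the ~10^-31 yr crossing time

tau_p = crossing_time * (M_planck / M_proton) ** 4
print(f"tau_p ~ {tau_p:.1e} years")   # ~ 6e44 years, i.e. the ~10^45 yr scale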
We are mainly concerned with calculating the minimal mass of black holes using the GUP; this corresponds to the mass of virtual black holes arising from the GUP instead of the HUP, since the mass of the virtual black holes is what enters the calculation of the proton half-life. In order to compute the minimal black hole mass in GUP, we follow an argument similar to that made in <cit.>. We make the argument very general and consider a D dimensional spacetime with all the D-1 momenta p_i being equal. The quadratic GUP is given by <cit.> Δ x Δ p ≥ħ/2( 1+14.9( D-3/π) ^2 α^2 Δ p^2/M_p^2) which leads to an expression for Δ p: Δ p ≥Δ x/ξ α^2{ 1-√(1-ξħα^2/Δ x^2)} , where ξ = 14.9( D-3/π) ^2. Now, let a particle be bound at the black hole event horizon, Δ x ∼ r_s; this particle resembles a particle emitted by Hawking radiation from the horizon, at a temperature associated with the resulting uncertainty in the momentum/energy of that particle localized at the horizon. Therefore the modified Hawking temperature is calculated using this argument in <cit.>: T_QGUP = 2 T_H [ 1+ √(1-ξα_0^2/4λ_D^2 m ^2/D-3)]^-1 in which λ_D =( 16 π/(D-2)Ω_D-2) ^1/D-3 Ω_D = 2 π^D-1/2/Γ(D-1/2) m= M/M_p The GUP modified temperature is only physical for particular masses, i.e. ξα_0^2 ≤ 4λ_D^2 m ^2/D-3 The minimal mass M_min, for which the inequality above becomes an equality, is the mass of GUP virtual black holes, and it is given by M_QGUP ( α) =M_p( √(πξ/4)) ^D-3 D-2/8Γ(D-1/2) α ^D-3 Since the factor multiplying α^D-3 is a pure numerical factor that depends on the dimension of spacetime, we denote it by f(D) and, with the D dimensional Planck mass M^(D)_p, we write M^(D)_QGUP = f(D) M^(D)_pα ^D-3 The proton half-life is given by the following expression, from which we can calculate the minimal mass of the black holes in QGUP, M_QGUP, using f(D), see Table <ref>: τ_p ∼ M_proton^-1( M^(D)_p/M_proton) ^D f(D)^Dα ^D(D-3) § LINEAR AND QUADRATIC GUP Another possible, and more general, generalized uncertainty relation is <cit.> Δ xΔ p ≥ħ/2{1- α 3.76/M_p( D-3/π) Δ p+α^27.64/M_p^2( D-3/π)^2 Δ p^2} This uncertainty relation can be used, similarly to Eq. (<ref>), to find the mass of GUP-deformed virtual black holes and to study proton decay using the same argument as in the previous section. Solving Eq. (<ref>) for Δ p gives Δ p ≥2Δ x + α M_p γ_1/α^2 γ_2( 1-√(1-2 γ_2 α^2 M_p^2/2Δ x + α M_p γ_1)) where γ_1 = 3.76( D-3/π)γ_2 = 15.28 ( D-3/π)^2 The modified Hawking temperature is then found to be T_LQGUP = 2T_H( 1+γ_1/4 λ_D m^1/D-3) ^-1{ 1+√(1-γ_2 α^2/8(λ_D m^1/D-3+1/4αγ_1)^2)} ^-1 The GUP modified temperature is only physical for particular masses, i.e. γ_2 α^2 ≤ 8(λ_D m^1/D-3+1/4αγ_1)^2 leading to the minimal mass M_LQGUP, that is, the mass of the virtual black holes: M_LQGUP(α) = M_p(1/2√(γ_2/2)-γ_1/4)^D-3 D-2/8 Γ(D-1/2)π^D-3/2 α^D-3. Since the factor multiplying α^D-3 is again a pure numerical factor that depends on the dimension of spacetime, just like for the quadratic GUP, we denote it by g(D), whose values are given in Table <ref>: M^(D)_GUP = g(D) M^(D)_pα ^D-3 The proton half-life is given by the expression: τ_p ∼ M_proton^-1( M^(D)_p/M_proton) ^D g(D)^Dα ^D(D-3) We can use the data for the Planck masses in different spacetime dimensions M_p^(D) <cit.> and the relations (<ref>), (<ref>) to estimate the bounds on the GUP deformation parameter for the quadratic-in-p deformation (α_QGUP) and the linear-plus-quadratic-in-p deformation (α_LQGUP), knowing that the experimental bound on the proton half-life is >10^34 years <cit.> and that the proton mass is ∼ 10^3 MeV.
The bound on α is obtained by requiring the predicted half-life to exceed the experimental limit: 10^34 < 10^-31 10^-3D M_p^(D)D g(D)^D α^D(D-3) 10^65+3D < M_p^(D)D g(D)^D α^D(D-3) 10^65+3D/D(D-3) M_p^(D)-1/D-3 g(D)^-1/D-3 < α or α_LQGUP > 10^65+3D/D(D-3) M_p^(D)-1/D-3 g(D)^-1/D-3. The same goes for the quadratic GUP, α_QGUP > 10^65+3D/D(D-3) M_p^(D)-1/D-3 f(D)^-1/D-3. Table <ref> contains the values of the minimal black hole masses for both QGUP and LQGUP in different spacetime dimensions. Moreover, Table <ref> contains the bounds on α_QGUP and α_LQGUP for the quadratic and linear-quadratic GUP deformations in different spacetime dimensions, relevant to physical models. It appears that the GUP deformations become more apparent in the low energy limit if the spacetime has extra dimensions. § CONCLUSIONS In this work, we investigated the production of virtual black holes in higher dimensions in the context of the generalized uncertainty principle. We used this black hole production to study the proton decay process that is mediated by these virtual black holes. We calculated the proton half-life in higher dimensions and we set lower bounds on the GUP deformation parameters α_QGUP and α_LQGUP. We found that the bounds on the GUP parameters are around 100, 0.87, and 0.51 for 6, 9 and 10 dimensions, respectively. These values are stringent and consistent with the bound set by the electroweak scale <cit.>. In fact, this is an improvement over various studies on phenomenological aspects of the GUP; if the GUP parameter is α∼ 1, this appears to be a new and interesting result, relevant to studies of low energy systems <cit.>, as also appeared from comparing the thermal corrections to the black hole temperature with the temperature obtained from the quantum-corrected Newtonian potential <cit.>. This indicates that the GUP could be useful to explain the proton decay process beyond the standard model and could open an interesting phenomenological window for studying quantum gravity effects in low energy systems. We hope to report on these issues in the future. § ACKNOWLEDGEMENTS We would like to thank the anonymous referees for their useful comments and help improving this manuscript. This research project was supported by a grant from the "Research Center of the Female Scientific and Medical Colleges", Deanship of Scientific Research, King Saud University.
http://arxiv.org/abs/1703.10038v2
{ "authors": [ "Salwa Alsaleh", "Abeer Al-Modlej", "Ahmed Farag Ali" ], "categories": [ "physics.gen-ph" ], "primary_category": "physics.gen-ph", "published": "20170326134222", "title": "Virtual Black Holes from Generalized Uncertainty Principle and Proton Decay" }
Ultracold collisions of molecules Goulven Quéméner Laboratoire Aimé Cotton, CNRS, Université Paris-Sud, ENS Paris-Saclay, Université Paris-Saclay, 91405 Orsay, France e-mail: goulven.quemener@u-psud.fr December 30, 2023 ==================================================================================================================================================================================== This paper deals with the theory of collisions between two ultracold particles with a special focus on molecules. It describes the general features of the scattering theory of two particles with internal structure, using a time-independent quantum formalism. It starts from the Schrödinger equation and introduces the experimental observables such as the differential or integral cross sections, and rate coefficients. Using a partial-wave expansion of the scattering wavefunction, the radial motion of the collision is described through a linear system of coupled equations, which is solved numerically. Using a matching procedure of the scattering wavefunction with its asymptotic form, the observables such as cross sections and rate coefficients are obtained from the extraction of the reactance, scattering and transition matrices. The example of the collision of two dipolar molecules in the presence of an electric field is presented, showing how dipolar interactions and collisions can be controlled. § INTRODUCTION The achievement of slowing, cooling and trapping atoms <cit.> to quantum degeneracy in Bose–Einstein condensates <cit.> or degenerate Fermi gases has tremendously impacted the Atomic, Molecular, and Optical scientific community. The world of ultracold matter is governed by Quantum Mechanics. The particles move so slowly that one has enough time in an experiment to precisely control their internal structure and external motion, with, for example, electric or magnetic fields, electromagnetic waves and optical lattices. Ultracold atomic physics has been extensively investigated since those achievements and has led to the exploration of new quantum phenomena <cit.>. Other types of particles, such as ultracold ions, ultracold atoms in Rydberg states and ultracold molecules, are also of specific interest. In this paper we focus mainly on ultracold molecules. Compared to atoms, molecules have a much richer structure, including rotation and vibration in addition to the electronic and spin structure. In contrast to atoms, which are directly cooled with lasers, it is harder to cool molecules with the same procedure due to the lack of closed cycles of absorption and spontaneous emission, even though it can work in certain cases <cit.>. Other techniques are then employed <cit.>: buffer gas cooling <cit.>, deceleration of molecules <cit.>, Sisyphus cooling <cit.>, association of ultracold atoms via photo-association <cit.>, magneto-association <cit.>, and coherent transfer driven by lasers <cit.>. If the molecules possess permanent electric or magnetic dipole moments, they can be manipulated by electric or magnetic fields <cit.>. In addition to the individual energies of the molecules, the strength and orientation of the molecule-molecule interaction can also be controlled, leading to promising applications <cit.>. The precise control over the initial ultracold particles and their interactions can be used to engineer different quantum edifices such as dipolar particles in optical lattices.
Such controlled and tunable set-ups can be used for quantum simulation to mimic the Hamiltonian of more complicated systems of condensed matter, quantum magnetism and many-body physics <cit.> or to design schemes of quantum information <cit.>. Dipolar molecules can also be used for testing fundamental theories <cit.>, or to explore a novel ultracold chemistry in a fully determined way <cit.>. Once the molecules are cooled, collisions between molecules and/or atoms can then occur <cit.>. In all cases, collisions play an important role for understanding the stability, the lifetime and the dynamics of an ultracold gas. This paper is devoted to the time-independent quantum description of collisions between two atoms or molecules with internal structure, therefore allowing for changes of the internal state during the collision. The proposed approach is general enough to describe atom-atom, atom-molecule, and molecule-molecule collisions, and we will emphasize the latter case. As it is based on an angular expansion of the scattering wavefunction in partial waves, the formalism is especially suited for ultralow collision energies. Section <ref> starts with a reminder on the Schrödinger equation for one and two particles, on the system of coordinates, and on the types of collisions. Two different parts of the colliding motion are tackled. Section <ref> describes the region beyond the range of interactions where the particles hardly feel each other. The relevant observables are introduced there. Section <ref> is devoted to the zone where the particles interact. This is where the partial wave expansion of the scattering wavefunction is introduced, leading to a system of coupled equations for the radial motion. The coupled system is solved using the method of the log-derivative matrix propagation. Symmetry considerations are also invoked, linked to the isotropy of space, to the symmetrization of identical particles, or to the presence of an external field. Section <ref> proceeds to the matching between the two latter regions. The reactance, scattering and transition matrices are defined and their relations with the observables are established. Section <ref> describes certain properties of collisions in the ultracold regime. In Section <ref>, as an application, we use this formalism to study the dipolar collisions between two ultracold KRb molecules in an electric field. We show how we can simplify the full problem to restrict the physical process to its main relevant element. Two cases are explored: (i) collisions of molecules in the ground rotational state and (ii) collisions of molecules in the first excited rotational state. It is found that collision rates can be enhanced or suppressed. We conclude and give some perspectives in Section <ref>. For readers who desire additional information, we refer for instance to references <cit.> among many others. § THE SCHRÖDINGER EQUATION §.§ The Schrödinger equation for one particle The dynamics of a quantum particle of mass m moving in a potential characterized by the operator V is described by the time-dependent Schrödinger equation in the ⟨r⃗| representation: iħ ∂Ψ(r⃗,t)/∂ t = H Ψ(r⃗,t) where ⟨r⃗| Ψ(t) ⟩ = Ψ(r⃗,t) is the wavefunction of the particle at the position r⃗ and time t. The operator H=T + V is the Hamiltonian of the particle, where T and V are the kinetic and potential energy operators, respectively, defined as: T = p⃗^ 2/2m≡1/2m (ħ/i ∇⃗)^2 = - ħ^2/2m ∇⃗^2; V≡ V(r⃗,t).
Wide hats will be used to represent the quantum operators in this paper. We define the presence probability density of a particle as ρ(r⃗,t) = |Ψ(r⃗,t)|^2 which has the unit of a volume density. This quantity determines the probability dP(r⃗,t)=ρ(r⃗,t) dr⃗ to find the particle at time t at position r⃗ in the volume element dr⃗. The presence probability of the particle in a finite volume V is P (t) = ∫_ V dP(r⃗,t) = ∫_ V ρ(r⃗,t) dr⃗. Since the probability of finding the particle over all space must be unity, the wavefunction has to be normalized using ∫_-∞^+∞|Ψ(r⃗,t)|^2 dr⃗ = 1. This normalization is possible if the wavefunction is square integrable, typically when the wavefunction represents a bound state of a particle. For a continuum state of a particle, that is when the particle is not bound in a specific region of space, this normalization is not possible. Several methods are used to normalize such wavefunctions, for example using a Dirac delta function in the normalization (see for example <cit.>). We define the probability current of a particle as: j⃗(r⃗,t)=- ħ/2mi[Ψ^*(r⃗,t)∇⃗Ψ(r⃗,t) - Ψ(r⃗,t)∇⃗Ψ^*(r⃗,t)] = Re{Ψ^*(r⃗,t)( p⃗/m Ψ(r⃗,t) ) } . The probability density and the probability current are related by: ∂ρ(r⃗,t)/∂ t + ∇⃗ . j⃗ = 0. Eq. (<ref>) is the continuity equation showing that the probability is conserved locally, just like a charge is conserved in electrostatics. Indeed we have, from the divergence theorem, ∫_ V ∂ρ(r⃗,t)/∂ t dr⃗ =∂ P(t)/∂ t = - ∫_ V ∇⃗ . j⃗ dr⃗= - ∮_ S j⃗ . dS⃗ . The decrease (increase) in time of P(t) inside the volume V at time t is equal to an outgoing (incoming) flux of j⃗ through the surface S enclosing the volume V. Note that j⃗ has the unit of a surface density per unit of time. If the potential energy is independent of time, V = V(r⃗), we can find a stationary solution Ψ(r⃗,t) with a well defined energy E_tot: Ψ(r⃗,t) = ψ^E_tot(r⃗) e^-i E_tot t/ħ, with a separation of space and time in the wavefunction. The solution is stationary since |Ψ(r⃗,t)|^2 = |ψ^E_tot(r⃗)|^2 is independent of time. Putting Eq. (<ref>) into Eq. (<ref>) gives the time-independent Schrödinger equation for ψ^E_tot(r⃗): H ψ^E_tot(r⃗)=[- ħ^2/2m ∇⃗^2 + V(r⃗) ]ψ^E_tot(r⃗) = E_tot ψ^E_tot(r⃗). It is often the case in collisions that the potential energy V is independent of time and the total energy E_tot is conserved. The time-independent formalism still applies when static electric or magnetic fields are present, but no longer when the fields vary in time. Similarly, the time-independent probability density and probability current are given by ρ(r⃗) = |ψ^E_tot(r⃗)|^2 and j⃗(r⃗) = - ħ/2mi [ [ψ^E_tot(r⃗)]^*∇⃗ψ^E_tot(r⃗) - ψ^E_tot(r⃗)∇⃗ [ψ^E_tot(r⃗)]^*]. §.§ The Schrödinger equation for two colliding particles §.§.§ Coordinate systems We consider a time-independent collision problem of a system of two composite particles i=1,2 (for example two molecules) of masses m_1, m_2 (see Fig. <ref>), described with individual external coordinates r⃗_1, r⃗_2 from an arbitrary point O' and internal coordinates ρ⃗_1, ρ⃗_2 (not to be confused with the presence probability density). It is also useful to introduce the center-of-mass (CM) coordinates R⃗ and the relative (rel) coordinates r⃗ (see Fig. <ref>) defined by: R⃗ = m_1r⃗_1 + m_2r⃗_2/m_tot r⃗= r⃗_1 - r⃗_2, where m_tot = m_1 + m_2 is the total mass and m_red = m_1 m_2 / (m_1 + m_2) is the reduced mass. If we define the point O so that O'O = R⃗, then we define a space-fixed frame by the axes OXYZ.
This is the space-fixed frame of the center-of-mass of the system; we could have defined any other arbitrary space-fixed frame O'XYZ. Space-fixed frames are also often called laboratory frames. The axis OZ is oriented along a unit vector e⃗_Z as shown in Fig. <ref>. We did not show the other unit vectors e⃗_X and e⃗_Y that orient the axes OX and OY. We also define a body-fixed frame of the system by the axes Oxyz, where now the Oz axis is oriented along a unit vector e⃗_z also shown in Fig. <ref>, following the orientation of the vector r⃗. In this paper, we will choose the OZ axis as the quantization axis in the space-fixed frame and the Oz axis in the body-fixed frame of the system. The two particles interact in general via a potential energy V(ρ⃗_1, ρ⃗_2, r⃗_1, r⃗_2). The particles are initially located at large distances and they start to interact as they approach each other. They are scattered in a given direction, reflecting the strength and the anisotropy of the potential energy. In general the potential energy V of the system can be separated into two terms: a potential energy V_int which describes the internal interactions of the particles between themselves, and a potential energy V_ext which describes possible external potentials. The former term, V_int, contains all electrostatic Coulombic interactions between the electrons and the nuclei of the atoms composing the system, which do not depend on the absolute positions of the charges but rather on their relative separations. Using extensive ab initio calculations, the full electronic problem is solved for parametric positions of the nuclei within the so-called Born–Oppenheimer approximation. It results in a potential energy term V_int(ρ⃗_1, ρ⃗_2, r⃗_1 - r⃗_2) = V_int(ρ⃗_1, ρ⃗_2, r⃗), usually called the potential energy surface of the system, as it represents an energy as a function of multiple coordinates in space. The set of vectors (ρ⃗_1, ρ⃗_2, r⃗) are often called the Jacobi coordinates. As it does not depend on the individual positions r⃗_1, r⃗_2 but only on the relative position of the molecules, it is separable in R⃗ and r⃗. The latter term, V_ext, can describe the interaction of the molecules with external fields, for example a static electric or magnetic field. If these fields are uniform throughout space, these potentials do not depend on the individual positions of the molecules r⃗_1, r⃗_2. An external potential can also depend on the individual positions of the molecules. Depending on the case, the potential can be separable in R⃗ and r⃗, for example if the external potential is described by a harmonic oscillator <cit.>, or not, for example if the potential is described by an optical lattice <cit.>. In the following we will consider a system described by an arbitrary potential energy surface V_int(ρ⃗_1, ρ⃗_2, r⃗) and a possible external potential V_ext_1(ρ⃗_1) + V_ext_2(ρ⃗_2) that does not depend on the individual positions of the molecules. The total potential energy is then separable in R⃗ and r⃗. The case of non-separable potentials is not treated here as it is beyond the scope of this paper. When the particles are far apart, |r⃗| = |r⃗_1 - r⃗_2| →∞, V_int(ρ⃗_1, ρ⃗_2, r⃗) → V_int_1(ρ⃗_1) + V_int_2(ρ⃗_2), the internal potential energy of the two separated molecules 1 and 2. We define the interaction potential energy by: U_int(ρ⃗_1, ρ⃗_2, r⃗) = V_int(ρ⃗_1, ρ⃗_2, r⃗) - V_int_1(ρ⃗_1) - V_int_2(ρ⃗_2), where U_int→ 0 if |r⃗| →∞.
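As a simple illustration of this asymptotic requirement, a toy isotropic interaction potential with a short-range repulsive wall and a -C_6/r^6 van der Waals tail, a common model for two neutral molecules, indeed vanishes at large separation. The Lennard-Jones form below is our own illustrative choice, not an actual potential energy surface.

import numpy as np

def U_int_toy(r, C6=1.0, C12=1.0):
    """Lennard-Jones-type model: repulsive wall plus van der Waals tail (arb. units)."""
    return C12 / r**12 - C6 / r**6

r = np.array([0.9, 1.0, 1.5, 3.0, 10.0])
print(U_int_toy(r))   # tends to zero as r -> infinity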
Then the time-independent Schrödinger equation gives: [- ħ^2/2m_1 ∇⃗^2_1 - ħ^2/2m_2 ∇⃗^2_2 + U_int(ρ⃗_1, ρ⃗_2, r⃗) + h_1(ρ⃗_1) + h_2(ρ⃗_2) ]ψ(ρ⃗_1, ρ⃗_2, r⃗_1, r⃗_2)= E_tot ψ(ρ⃗_1, ρ⃗_2, r⃗_1, r⃗_2). The operators h(ρ⃗_i) are the internal Hamiltonians of particles i=1,2: h_i(ρ⃗_i)ϕ_α_i(ρ⃗_i)= {T_i + V_i(ρ⃗_i) } ϕ_α_i(ρ⃗_i)= ε_α_i ϕ_α_i(ρ⃗_i), where V_i(ρ⃗_i) = V_int_i(ρ⃗_i)+ V_ext_i(ρ⃗_i). The index α_i represents the quantum numbers describing the internal eigenfunctions ϕ_α_i and eigenenergies ε_α_i of the Hamiltonian h_i of the individual particle i. As an example, if we consider a diatomic molecule with no spin structure, where V_int_i(ρ⃗_i) represents the vibrational and rotational internal potential energy, and if we consider no external potential energy, V_ext_i=0, then ϕ_α_i(ρ⃗_i) = χ_v_i,n_i(ρ_i)/ρ_i Y_n_i^m_n_i(ρ̂_i). We denote by n⃗_i the rotational angular momentum operator of the molecule i, characterized by the quantum number n_i, and n_Z_i represents the projection operator of n⃗_i onto the OZ space-fixed frame axis, characterized by the quantum number m_n_i. Then Y_n_i^m_n_i(ρ̂_i) represents the rotational wavefunction, where ρ̂_i represents the spherical angles of ρ⃗_i. Small hats correspond to angles here, not to be confused with the wide hats of the quantum operators. χ_v_i,n_i represents the radial vibrational wavefunction characterized by the vibrational and rotational quantum numbers v_i, n_i. The quantum numbers describing the internal state are α_i≡ v_i, n_i, m_n_i. We write ε_α = ε_α_1 + ε_α_2, ϕ_α = ϕ_α_1 ϕ_α_2, with α≡α_1α_2. The total energy of the system, E_tot = E_k 1 + E_k 2 + ε_α, where E_k i is the kinetic energy of particle i, is conserved during the collision. Because we consider potentials that do not depend on the individual positions of the molecules, one can separate the center-of-mass from the relative coordinates, and we can write the wavefunction as a product ψ(ρ⃗_1, ρ⃗_2, r⃗_1, r⃗_2) = ψ_CM(R⃗) ψ_rel(ρ⃗_1, ρ⃗_2, r⃗). One can show that Eq. (<ref>) can be decoupled into an equation for the center-of-mass motion: [- ħ^2/2m_tot ∇⃗^2_R⃗]ψ_CM(R⃗)= E_k CM ψ_CM(R⃗) and one for the relative motion: [ - ħ^2/2 m_red ∇⃗^2_r⃗ + U_int(ρ⃗_1, ρ⃗_2, r⃗) + h_1(ρ⃗_1) + h_2(ρ⃗_2) ]ψ_rel(ρ⃗_1, ρ⃗_2, r⃗) = (E_tot - E_k CM)ψ_rel(ρ⃗_1, ρ⃗_2, r⃗). with E_k 1 + E_k 2 = E_k CM + E_k rel. For the type of separable interaction potential energy U_int(ρ⃗_1, ρ⃗_2, r⃗) considered here, the solution in Eq. (<ref>) for the CM motion is a free motion unaffected by the internal interactions, and is represented as a plane wave. It can be separated from the collision problem. In the following, we consider the collision in the space-fixed frame OXYZ of the center-of-mass so that R⃗=0, E_k CM = 0, E_k rel = E_k, and E_tot=E_k+ε_α. Then Eq. (<ref>) describes the motion of a fictitious particle of mass m_red, of internal state ϕ_α = ϕ_α_1 ϕ_α_2, moving in the interaction potential U_int(ρ⃗_1, ρ⃗_2, r⃗). For simplicity we will omit the subscript “rel” in the wavefunction. §.§.§ Types of collisions Fig. <ref> shows different types of collisions, according to the different internal energy levels ε_α of the pair of particles, before the collision (left) and after the collision (right). For this example, there are five different possible states labeled α=1,2,3,4,5, with corresponding energies.
* On the left-hand side of the figure, the initial internal level is α=2 and the initial internal energy ε^i is ε_2 (the superscript `i' stands for `initial' here). * The initial kinetic energy E_k^i is fixed and is called the collision energy E_c. * The total energy is E_tot = ε^i + E_k^i. * The states with an internal energy larger than the total energy are called the closed states, which are not energetically accessible after the collision. * The states with an internal energy smaller than the total energy are called the open states, which are energetically accessible after the collision. * An elastic collision occurs when the final state is the same as the initial one, ε^f =ε^i (the superscript `f' stands for `final' here). As the total energy is conserved, the final kinetic energy E_k^f = E_k^i is also conserved. When the particles have the same internal energy after and before the collision, they also have the same kinetic energy. * If the final state is different from the initial one (ε^f ≠ε^i), an inelastic collision takes place. The case ε^f > ε^i corresponds to an excitation where E_k^f < E_k^i, leading to E_k^f = E_tot - ε^f = ε^i - ε^f + E_k^i. In this case, the particles have gained internal energy and lost kinetic energy. * The case ε^f < ε^i refers to a relaxation with E_k^f > E_k^i, and E_k^f = ε^i - ε^f + E_k^i. In this case, the particles have lost internal energy and gained kinetic energy. When the chemical identity of the products is different from that of the reactants, various kinds of reactive collisions can occur, provided that the states of the products are open:
AB + CD → AC + BD, AD + BC
        → A + BCD, B + CDA, C + DAB, D + ABC
        → A + B + CD, B + C + DA, C + D + AB, D + A + BC
        → A + B + C + D.
In such cases, a set of collective coordinates like the so-called hyperspherical coordinates <cit.> should be employed as they are more appropriate than the Jacobi coordinates to treat the different arrangements of the four particles in a more symmetric way. The resulting collisional formalism is more complicated <cit.> and beyond the scope of this paper. Therefore we will not treat the case of reactive collisions in the following, only the case of elastic and inelastic collisions. § IN THE REGION FAR FROM COLLISION §.§ Asymptotic form of the wavefunction As mentioned above, the motion of the center-of-mass can be separated from the collision problem. We therefore do not consider it anymore and we focus now on the relative motion described by the fictitious particle of mass m_red. The relative vector r⃗ can be written in spherical coordinates r⃗={r,r̂=(θ_r,φ_r)} with respect to the OZ space-fixed frame axis. The stationary scattering state for the relative motion in the CM frame, for a given total energy E_tot and for an initial state α, k⃗_α, behaves asymptotically as: ψ^E_tot_α, k⃗_α(ρ⃗_1, ρ⃗_2, r⃗)r →∞=A[ e^i k⃗_α . r⃗ ϕ_α(ρ⃗_1, ρ⃗_2) + ∑_α'f^+_α→α'(k⃗_α,r̂) e^i k_α' r/r ϕ_α'(ρ⃗_1, ρ⃗_2) ] = ψ_inc + ψ_scat . A is a normalization factor which does not play a role in the result of the collision, as we will see later. One could set A=1 for simplicity. Fig. <ref> represents schematically the asymptotic form of the wavefunction, which is separated into different parts: * The incident wavefunction ψ_inc is composed of an initial incident plane wave e^i k⃗_α . r⃗ and the internal structure of the particles ϕ_α(ρ⃗_1, ρ⃗_2). ψ_inc is a solution of Eq. (<ref>) when U_int→ 0 at r →∞. The plane wave is characterized by a wavevector k⃗_α of magnitude k_α and incident direction k̂_α.
The initial kinetic energy is E_k,α = ħ^2 k_α^2 / 2m_red = E_c. In general k⃗_α can take any orientation with respect to e⃗_Z. In Fig. <ref>, k⃗_α has been chosen with the same orientation as e⃗_Z. The incident wavefunction ψ_inc is expressed as a plane wave describing the particles at Z → -∞. A part of the plane wave may continue to propagate towards Z → +∞ without interacting within the potential range.* In the zone of interaction around Z ∼ 0, the particles interact via the interaction potential energy U_int(ρ⃗_1, ρ⃗_2, r⃗).* Due to this interaction, the plane wave can also be scattered in a spherical manner. This is represented by a spherical wave e^i k_α' r/r.* Due to the specific shape of the interaction potential, the plane wave is scattered with an amplitude f^+_α→α'(k⃗_α,r̂), referred to as the scattering amplitude. It represents the probability amplitude of the two particles for being scattered in the direction r̂ from the initial state α with wavevector k⃗_α into the final state α'. ψ_scat represents the overall scattered wavefunction including the internal structure.§.§ Observables We now relate the scattering amplitude to the observables. Considering a typical beam/target collision experiment, the observable is the number of beam particles, say particles 1, scattered out of the target particles, say particles 2, per unit of time and solid angle, detected by a detector in the laboratory frame somewhere far from the region of collision. In the CM frame it translates into the number of fictitious particles of mass m_red scattered out of the potential per unit of time and of solid angle dr̂ = sinθ_r dθ_r dφ_r, detected by the detector in the direction r̂ = (θ_r, φ_r) for a transition α→α', and for a given incident direction k⃗_α (see Fig. <ref>). This number is proportional to the incident probability current J_inc = N_incj_inc of the N_inc incoming particles of mass m_red. This is given by: ∂ N_scat/∂ t∂r̂ ∂k̂_α(k⃗_α, r̂)|_α→α' = J_inc ∂σ_α→α'(k⃗_α,r̂)/∂r̂ ∂k̂_α. The quantity ∂σ(k⃗_α,r̂)/∂r̂ ∂k̂_α is called the differential cross section. The flux of J_inc through the differential cross section gives the number of particles scattered per unit of time and solid angle. By expressing j_inc and j_scat from ψ_inc and ψ_scat in Eq. (<ref>), using the first line of the time-independent version of Eq. (<ref>), one can show that: ∂σ_α→α'(k⃗_α,r̂)/∂r̂ ∂k̂_α = k_α'/k_α|f^+_α→α'(k⃗_α,r̂)|^2. The integral cross section for a given direction k⃗_α of collision is given by integrating the differential cross section over all scattering directions: σ_α→α'(k⃗_α) = ∫∂σ(k⃗_α,r̂)/∂r̂ ∂k̂_αdr̂ =k_α'/k_α ∫|f_α→α'(k⃗_α,r̂)|^2 dr̂. If the direction of collision is not specified (for example in a gas-cell experiment, in contrast to a beam experiment), one also has to average over the incident directions to obtain the averaged integral cross section for a given collision energy E_c = ħ^2 k_α^2 / 2m_red: σ_α→α'(k_α) = σ_α→α'(E_c) = ∫σ_α→α'(k⃗_α) dk̂_α/∫ dk̂_α= 1/4π ∫σ_α→α'(k⃗_α) dk̂_α . Because: ∂ N_scat/∂ t |_α→α'= 1/4π ∫ dk̂_α ∫ dr̂ ∂ N_scat/∂ t∂r̂ ∂k̂_α(k⃗_α, r̂)|_α→α'= J_inc×σ_α→α'(E_c), we see that the number of scattered particles per unit of time, averaged over all incident directions, is the flux of the incident probability current J_inc through the averaged integral cross section σ_α→α'(E_c). Then, because J_inc= N_inc/(Δ S Δ t) is the number of incident particles crossing a given surface Δ S in the time interval Δ t, N_scat/N_inc = σ / Δ S.
Choosing a unit surface Δ S = 1 cm^2, σ expressed in cm^2 represents the number of scattered particles relative to the number of incident particles. For gas-cell experiments, one usually has access to the initial number density ρ_gas of the gas, not to the initial current J_inc. Using the second line of the time-independent version of Eq. (<ref>) and applying it to ψ_inc in Eq. (<ref>), one can notice that j_inc = |ψ_inc|^2 v = ρ_incv, where v = ħ k_α / m_red = √(2 E_c / m_red) is the initial velocity of the fictitious particle, that is, the initial relative velocity of the two colliding particles. Then J_inc = (N_inc ρ_inc) v = ρ_gasv. If we define another observable: β_α→α'(E_c) = σ_α→α'(E_c) × v , called the rate coefficient, then Eq. (<ref>) becomes: ∂ N_scat/∂ t |_α→α'=ρ_gas×β_α→α'(E_c). Therefore, knowing the initial number density instead of the initial current, one directly extracts the rate coefficients instead of the cross sections from the number of particles scattered per unit of time. If the cross section has units of cm^2 and the velocity has units of cm/s, the rate coefficient has units of cm^3/s.§ IN THE REGION OF COLLISION So far we have just defined some relevant quantities for the collisional properties of a system. We are now interested in how to calculate the cross section and the rate coefficient from a given potential energy.§.§ Partial wave expansion In Eq. (<ref>), the plane wave appearing in ψ_inc is a function of the vector r⃗. When r⃗ is represented by spherical coordinates r⃗={r,r̂=(θ_r,φ_r)}, the kinetic energy operator in Eq. (<ref>) can be expressed in spherical coordinates by: - ħ^2/2 m_red ∇⃗^2_r⃗≡- ħ^2/2 m_red[ 1/r^2 ∂/∂ r ( r^2∂/∂ r) ] + l⃗^ 2/2 m_redr^2. The spherical harmonics Y_l^m_l(r̂) are eigenfunctions of the square of the angular momentum operator l⃗, so that l⃗^ 2Y_l^m_l =ħ^2 l(l+1)Y_l^m_l. Using the spherical harmonic addition theorem, the plane wave in Eq. (<ref>) can be expanded in spherical harmonics of quantum numbers l,m_l:ψ_inc =Ae^i k⃗_α.r⃗ ϕ_α(ρ⃗_1, ρ⃗_2) =A 4π ∑_l=0^∞∑_m_l=-l^li^l j_l(k_αr) [Y_l^m_l(k̂_α)]^*Y_l^m_l(r̂)ϕ_α(ρ⃗_1, ρ⃗_2) . j_l is a regular spherical Bessel function which behaves at large distances as: j_l(k_αr)r →∞→ sin(k_α r - lπ/2)/k_α r→ i/2 k_α r [ e^-i(k_α r - lπ/2) - e^i(k_α r - lπ/2)] . The asymptotic behavior of ψ_inc is then: ψ_inc r →∞→ ∑_l=0^∞∑_m_l=-l^lN^inc_αl m_l(k⃗_α)ψ^inc_αl m_l(ρ⃗_1, ρ⃗_2, r⃗) , where N^inc_αl m_l(k⃗_α) ≡ A(2 π i)/k_α^1/2i^l[Y_l^m_l(k̂_α)]^* is a normalization factor independent of r. The functions: ψ^inc_αl m_l(ρ⃗_1, ρ⃗_2, r⃗)≡f^inc(r)/rY_l^m_l(r̂)ϕ_α(ρ⃗_1, ρ⃗_2) are called the partial waves, where f^inc(r) = (e^-i(k_α r - lπ/2) - e^i(k_α r - lπ/2))/k_α^1/2 is the incident radial function (not to be confused with the scattering amplitude f^+). The expansion over the quantum numbers l,m_l is called the partial wave expansion, which represents the description of the colliding system in terms of the components of the orbital angular momentum of the translational (collisional) motion. We then extend to all r the partial wave expansion Eq. (<ref>) for the total wavefunction ψ^E_tot_α, k⃗_α, for a given total energy E_tot, an initial internal quantum state of the molecules α and an initial wavevector k⃗_α: ψ^E_tot_α,k⃗_α(ρ⃗_1, ρ⃗_2, r⃗) = ∑_l=0^∞∑_m_l=-l^lN_αl m_l(k⃗_α)ψ^E_tot_αl m_l(ρ⃗_1, ρ⃗_2, r⃗), where N_αl m_l(k⃗_α) is the normalization factor for the wavefunction ψ^E_tot_αl m_l. It will be defined, in Eq.
(<ref>), by matching ψ^E_tot to the asymptotic form of the wavefunction. At finite r, the partial waves in Eq. (<ref>) are now expressed by: ψ^E_tot_αl m_l(ρ⃗_1, ρ⃗_2, r⃗) =∑_α'∑_l'=0^∞∑_m_l'=-l'^l' f^E_tot_α' l' m_l' , αl m_l(r)/rY_l'^m_l'(r̂)ϕ_α'(ρ⃗_1, ρ⃗_2) . In contrast with Eq. (<ref>) for the incident wavefunction, we now allow in Eq. (<ref>) the components ψ^E_tot_αl m_l(ρ⃗_1, ρ⃗_2, r⃗) to be a general linear combination of the other final internal states α' and final orbital quantum numbers l', m_l'. This is due to the presence at finite r of the potential energy term, which can couple the initial state to all the other final states. The quantum numbers (αl m_l) and (α' l' m_l') define the collisional channels for the initial and final states, respectively. The functions f_α' l' m_l' , αl m_l will be responsible for the transitions αl m_l →α' l' m_l'. The transition is inelastic if α≠α' or elastic if α = α', as illustrated in Fig. <ref>. In Eq. (<ref>), one can define the basis set functions:Φ_α' l' m_l'(ρ⃗_1, ρ⃗_2, r̂) ≡ Y_l'^m_l'(r̂) ϕ_α'(ρ⃗_1, ρ⃗_2) which include all degrees of freedom but the radial colliding motion. If we take the example mentioned above of two diatomic molecules with no spin, this basis set is: Φ_α' l' m_l'(ρ⃗_1, ρ⃗_2, r̂) =χ_v_1',n_1'(ρ_1)/ρ_1 χ_v_2',n_2'(ρ_2)/ρ_2Y_n'_1^m_n_1'(ρ̂_1) Y_n'_2^m_n_2'(ρ̂_2) Y_l'^m_l'(r̂) . The basis set in Eq. (<ref>) is independent of the particle separation r, and is referred to as a diabatic representation. There are other possible representations, such as the diabatic-by-sector representation or the adiabatic representation, which involve additional coupling terms. Such cases are beyond the scope of this paper. Note that the formalism chosen here uses coordinate axes that point in the space-fixed frame directions; in particular, the quantization axis is oriented along the space-fixed unit vector e⃗_Z. For that reason, this is referred to as a space-fixed frame formalism <cit.>. In contrast, there is also a body-fixed frame formalism <cit.> where the coordinate axes follow the body-fixed axes (see Fig. <ref>). The two formulations are equivalent. The appropriate formalism depends on the treated problem. Generally, it is more efficient to use the space-fixed frame approach for large J, long range, weak coupling collisions, and the body-fixed frame approach for small J, short range, strong coupling collisions <cit.>. Finally, because the basis set in Eq. (<ref>) uses uncoupled functions of angular momentum, this is called the uncoupled representation of the wavefunction <cit.>. In contrast, one can use a coupled representation <cit.> where all composite functions of angular momentum are coupled together to form a total angular momentum. Taking as an example the basis set of a diatomic molecule in Eq. (<ref>), one can couple the rotational angular momentum operators n⃗_1 and n⃗_2 and projections n_Z_1 and n_Z_2 into a coupled rotational angular momentum operator n⃗_12 and projection n_Z_12, with characteristic quantum numbers n_12, m_n_12. The coupled rotational wavefunction becomes: Y_n_12^m_n_12 = ∑_m_n_1,m_n_2 ⟨ n_1, m_n_1, n_2, m_n_2 | n_12, m_n_12⟩Y_n_1^m_n_1(ρ̂_1) Y_n_2^m_n_2(ρ̂_2) . The coupled rotational angular momentum operator n⃗_12 and orbital angular momentum l⃗, and projections n_Z_12 and l_Z, can be further coupled to form the total angular momentum operator J⃗ and projection J_Z with characteristic quantum numbers J, M_J.
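As a small illustrative sketch of this coupling step, the Clebsch–Gordan coefficients ⟨n_1 m_n_1, n_2 m_n_2 | n_12 m_n_12⟩ entering the coupled rotational functions can be generated symbolically (the values n_1 = n_2 = 1 and m_n_12 = 0 are our illustrative choices, and we assume SymPy's CG(j1, m1, j2, m2, j3, m3) convention for ⟨j1 m1, j2 m2 | j3 m3⟩):

from sympy.physics.quantum.cg import CG

# Clebsch-Gordan coefficients <n1 m_n1; n2 m_n2 | n12 m_n12> building the
# coupled rotational functions (example values are illustrative only).
n1 = n2 = 1
m12 = 0
for n12 in range(abs(n1 - n2), n1 + n2 + 1):     # triangle relation
    coeffs = [(m1, m12 - m1, CG(n1, m1, n2, m12 - m1, n12, m12).doit())
              for m1 in range(-n1, n1 + 1)]
    print("n12 =", n12, "->", coeffs)

The same construction, applied a second time to couple n_12 with l, yields the coefficients of the fully coupled wavefunction written next.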
The fully coupled wavefunction becomes: Y_J^M_J = ∑_m_n_12, m_l ⟨ n_12, m_n_12, l, m_l | J, M_J ⟩Y_n_12^m_n_12Y_l^m_l(r̂) with M_J = m_n_12 + m_l = m_n_1 + m_n_2 + m_l. The basis set is completed by combining this angular basis set with the internal radial functions χ_v_1,n_1(ρ_1)/ρ_1 χ_v_2,n_2(ρ_2)/ρ_2. The quantum numbers describing the wavefunction in the coupled representation are now v_1, v_2, n_1, n_2, n_12, l, J, M_J, in contrast with v_1, v_2, n_1, m_n_1, n_2, m_n_2, l, m_l for the uncoupled representation. If the total potential energy satisfies V(-ρ⃗_1, -ρ⃗_2, -r⃗) =V(ρ⃗_1, ρ⃗_2, r⃗), then J and M_J are good quantum numbers and are conserved during the collision. This is the case when no external potentials V_ext are applied, as the potential energy surface V_int does not depend on the global orientation of the two particles. When no external field is applied, it is therefore useful to use the coupled representation, since J and M_J are good quantum numbers, which leads to efficient and fast numerical calculations. When external potentials V_ext are applied, such as with arbitrary external electric or magnetic fields, different states with different values of J become coupled, so that J is no longer a good quantum number. Therefore the coupled representation loses its advantage. In contrast, M_J is still conserved if only one of the electric or magnetic fields is present at a time, or if both fields share the same quantization axis. When the two fields are not aligned <cit.>, M_J is not a good quantum number anymore. To treat ultracold collisions in arbitrary external fields, including both weak and strong regimes, the uncoupled representation is generally preferred. The weak (strong) regime corresponds to an interaction between the particle and the field much smaller (larger) than the typical zero-field particle energy. For example, for a diatomic molecule with a permanent electric dipole moment d, one has to compare the magnitude d E of the interaction of the molecule with an electric field E with its typical energy without field, that is, the rotational constant of the molecule B_rot. The weak (strong) regime is reached when typically d E ≪ B_rot (d E ≫ B_rot). Note though that in the case of strongly dominant anisotropic collisions <cit.> the use of a body-fixed coupled representation can still be beneficial for efficient calculations, provided an appropriate treatment of unphysical states. In the following, we will use the uncoupled representation in the collisional formalism since it is more intuitive to think in terms of the individual quantum numbers of the separated particles. Besides, the last section of the paper will illustrate the case of dipolar molecule collisions in an electric field, where the uncoupled representation is preferred and where the following formalism applies. §.§ Coupled equations We now look for the equations satisfied by the radial functions in Eq. (<ref>). The Schrödinger equation for a given partial wave l,m_l is: H ψ^E_tot_αl m_l = E_tot ψ^E_tot_αl m_l with E_tot = ε_α + ħ^2 k_α^2/2 m_red = ε_α + E_c and H given in Eq. (<ref>). Inserting Eq. (<ref>) into Eq. (<ref>) and using Eq. (<ref>), we obtain: ∑_α'∑_l'=0^∞∑_m_l'=-l'^l' [ - ħ^2/2 m_red d^2/d r^2 + ħ^2 l'(l'+1)/2 m_redr^2+ U_int(ρ⃗_1, ρ⃗_2, r⃗) + ε_α' - E_tot]× f^E_tot_α' l' m_l' , αl m_l(r) Y_l'^m_l'(r̂)ϕ_α'(ρ⃗_1, ρ⃗_2) = 0. The first derivatives and the 1/r term have disappeared in Eq. (<ref>) due to the choice of the form f(r)/r in Eq. (<ref>). If we multiply the left-hand side of Eq.
(<ref>) by [Y_l”^m_l”(r̂)]^*ϕ^*_α”(ρ⃗_1, ρ⃗_2) and integrate over all but the radial coordinate r, we are led to a system of coupled equations: ∑_α'∑_l'=0^∞∑_m_l'=-l'^l' [ { - ħ^2/2 m_red d^2/d r^2 + ħ^2 l'(l'+1)/2 m_redr^2 + ε_α' - E_tot} δ_α',α” δ_l',l” δ_m_l',m_l”+U^int_α”l”m_l” , α' l' m_l'(r) ] f^E_tot_α' l' m_l' , αl m_l(r)= 0, where: U^int_α”l”m_l” , α' l' m_l'(r) =∫dρ⃗_1 dρ⃗_2 dr̂ [Y_l”^m_l”(r̂)]^*ϕ^*_α”(ρ⃗_1, ρ⃗_2) U_int(ρ⃗_1, ρ⃗_2, r⃗)Y_l'^m_l'(r̂)ϕ_α'(ρ⃗_1, ρ⃗_2) = ∫dρ⃗_1 dρ⃗_2 dr̂ Φ^*_α”l”m_l”(ρ⃗_1, ρ⃗_2, r̂) U_int(ρ⃗_1, ρ⃗_2, r⃗) Φ_α' l' m_l'(ρ⃗_1, ρ⃗_2, r̂) is a matrix element of the coupling matrix 𝐔^int. This matrix is real, symmetric and in general non-diagonal. It provides the couplings between the collisional channels α”l”m_l” and α' l' m_l' and is responsible for the inelastic transitions in the collision. There are as many lines in Eq. (<ref>) as there are sets of quantum numbers α”l”m_l”. All degrees of freedom but the radial translational motion r, including the vibration and rotation of the molecules and the orbital angular momentum of the collision, have been integrated out in Eq. (<ref>). This provides a set of second-order coupled differential equations for the radial functions f^E_tot_α' l' m_l' , αl m_l(r) for a given αl m_l and E_tot. A centrifugal term has appeared in Eq. (<ref>) and Eq. (<ref>), coming from the development of the kinetic energy operator into an angular term proportional to the operator l⃗^ 2. One can define the corresponding (diagonal) matrix 𝐔^cent with (diagonal) matrix elements: U^cent_α”l”m_l” , α' l' m_l'(r) =ħ^2 l'(l'+1)/2 m_redr^2 δ_α',α” δ_l',l” δ_m_l',m_l”. At ultralow collision energy, only a few low quantum numbers l are required to describe the collision in Eq. (<ref>), since higher values of l imply higher centrifugal barrier elements. These in turn imply lower tunneling probabilities, which prevent the particles from coming close to each other as E_c→ 0. As more values of l are required for higher E_c, the time-independent partial wave method is better adapted to the study of collisions at ultralow energies than at high energies. It is often useful to plot some elements of the set of equations to get a sense of how strongly the system is coupled. Defining the indexes i”≡α”l”m_l” and i' ≡α' l' m_l', one can extract an effective potential matrix 𝐔^eff in Eq. (<ref>) with the following matrix elements: U^eff_i”,i'(r) = U^cent_i”,i'(r) +U^int_i”,i'(r)+ ε_α' δ_i',i” , which includes the diagonal centrifugal term elements, the coupling matrix elements and the energy thresholds of the two particles. One can plot each diagonal element of this matrix as a function of r. The corresponding curves are called the diabatic energy curves, and each of them tends at large r to one of the threshold energies of the two particles. These curves provide the set of all possible effective potentials for the radial motion of the two colliding particles when the non-diagonal terms of the coupling matrix are not present. One can also include the effects of the non-diagonal terms of the coupling matrix in Eq. (<ref>) by first diagonalizing the matrix 𝐔^eff and then plotting the eigenvalues as a function of r. The resulting curves are called the adiabatic energy curves; each of them also tends to the threshold energies of the two particles at large r.
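A minimal numerical sketch of this diabatic-to-adiabatic construction follows; the two-channel thresholds, partial waves and exponential coupling below are toy model values of ours (in reduced units ħ^2/2m_red = 1), not those of a real system:

import numpy as np

# Toy two-channel model: the diagonal of U_eff gives the diabatic curves,
# its eigenvalues give the adiabatic ones.
eps = np.array([0.0, 1.0])            # channel thresholds eps_alpha'
l = np.array([0, 2])                  # partial waves of the two channels
r = np.linspace(0.5, 20.0, 400)
diabatic = np.empty((r.size, 2))
adiabatic = np.empty((r.size, 2))
for j, rj in enumerate(r):
    U_eff = np.diag(l * (l + 1) / rj**2 - 1.0 / rj**6 + eps)
    U_eff[0, 1] = U_eff[1, 0] = 0.3 * np.exp(-rj)     # model coupling
    diabatic[j] = np.diag(U_eff)
    adiabatic[j] = np.linalg.eigvalsh(U_eff)
# Both sets of curves tend to the thresholds at large r; they differ only
# where the off-diagonal coupling is non-negligible.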
These curves also provide a set of effective potentials for the radial motion, but now with the effect of the couplings included. When comparing both types of curves, one can see directly how and where the non-diagonal coupling elements affect the diagonal elements. If the adiabatic curves are quite comparable to the diabatic curves, then the system is weakly coupled. However, it is strongly coupled if both types of curves differ significantly. This is illustrated later in Section <ref> in Fig. <ref>. Finally, the system of coupled equations can be expressed in a very compact form <cit.>, using a matrix notation: {𝐃^2+ 𝐖} 𝐅= 0. The matrix: 𝐃^2 = 𝐈 d^2/d r^2 is a diagonal matrix, 𝐈 being the identity matrix. 𝐖 and 𝐅 are real and symmetric matrices. The matrix elements of 𝐖 are: W_i”,i' = - 2 m_red/ħ^2[ U^cent_i”,i'(r) +U^int_i”,i'(r) + ( ε_α'- E_tot )δ_i',i”] . The square matrix 𝐅 involves the radial functions, and its elements are given by F_i',i = f^E_tot_i',i(r), for which the row i'≡α' l' m_l' refers to the final state and the column i ≡αl m_l to the initial state. There are as many initial states as there are open channels for a given total energy E_tot. For example in Figure <ref>, there are 4 open channels at the given total energy, say i=1,2,3,4 with increasing energies ε_1,ε_2,ε_3,ε_4. The initial state is two molecules in i=2 with internal energy ε_2, a collision energy E_c and total energy E_tot = ε_2 + E_c. The solution for the wavefunction is the column i=2 of 𝐅. The other columns of 𝐅 represent the other independent solutions of the wavefunction corresponding to the total energy E_tot but different initial conditions: i=1 corresponds to an initial state where the molecules start with internal energy ε_1 and collision energy E_c = E_tot - ε_1, i=3 to ε_3 and E_c = E_tot - ε_3 and finally i=4 to ε_4 and E_c = E_tot - ε_4. Each column of 𝐅 then represents a linearly independent solution of the problem.§.§ Case of long-range interactions described by an electrostatic multipole-multipole expansion In practice, one has to compute all the elements U^int of the coupling matrix in Eq. (<ref>). This requires the knowledge of the full potential energy surface U_int(ρ⃗_1, ρ⃗_2, r⃗). At long-range, the potential energy surface can be described in terms of an electrostatic multipole-multipole expansion <cit.>: U_mult = 1/4 πε_0∑_λ_1λ_2λ ∑_ω_λ_1 ω_λ_2(-1)^λ_1 ( (2 λ_1 + 2 λ_2 + 1)!/(2 λ_1)!(2 λ_2)!)^1/2 Q_λ_1ω_λ_1Q_λ_2ω_λ_2/ r^λ+1 ×δ_λ,λ_1+λ_2 ∑_m_λ_1m_λ_2m_λ A(ρ̂_1,ρ̂_2,r̂) with λ= λ_1 + λ_2. The angular part is given by: A(ρ̂_1,ρ̂_2,r̂)= ( [ λ_1 λ_2 λ; m_λ_1 m_λ_2-m_λ ])×[D^λ_1_m_λ_1ω_λ_1 (ρ̂_1)]^*[D^λ_2_m_λ_2ω_λ_2(ρ̂_2)]^*[D^λ_-m_λ 0(r̂)]^*. The symbol (:::) is a Wigner 3-j symbol, related to a Clebsch–Gordan coefficient; it is non-zero only if m_λ = m_λ_1 + m_λ_2 and if λ_1, λ_2, λ satisfy the triangle relation. Q_λ_i ω_λ_i is a generalized multipole in the body-fixed frame of the molecule, where we choose the unit vector ρ⃗_i/|ρ⃗_i| for molecule i=1,2 in Fig. <ref> to characterize the quantization axis. λ_i is an angular momentum quantum number corresponding to the electronic charge distribution in the molecules i=1,2. λ_i=0,1,2,3, ... correspond respectively to the charge, dipole, quadrupole, octopole moments, and so on. m_λ_i=[-λ_i,+λ_i] are the projections of these angular momenta onto the space-fixed frame quantization axis e⃗_Z.
ω_λ_i=[-λ_i,+λ_i] is the projection onto the body-fixed frame quantization axis ρ⃗_i/|ρ⃗_i| of molecule i. In the case of Σ electronic diatomic molecules, ω_λ_1=ω_λ_2=0 and one can write Eq. (<ref>) using the rotational eigenfunctions |n_1, m_n_1⟩ and |n_2, m_n_2⟩ of the molecules 1 and 2 for the internal wavefunction ϕ_α: ⟨n_1, m_n_1, n_2, m_n_2, l, m_l| U_mult | n_1', m_n_1', n_2', m_n_2', l', m_l' ⟩=1/4 πε_0∑_λ_1λ_2 (-1)^λ_1 ( (2 λ_1 + 2 λ_2 + 1)!/(2 λ_1)!(2 λ_2)!)^1/2 Q_λ_10Q_λ_20/ r^λ_1+λ_2+1 ∑_m_λ_1m_λ_2(-1)^m_n_1+m_n_2+m_l ( [λ_1λ_2λ_1 + λ_2;m_λ_1m_λ_2 - (m_λ_1 + m_λ_2 ) ]) ×√((2n_1+1) (2n_1'+1)) ( [n_1λ_1 n_1';000 ])( [n_1λ_1 n_1'; -m_n_1m_λ_1 m_n_1' ])×√((2n_2+1) (2n_2'+1)) ( [n_2λ_2 n_2';000 ])( [n_2λ_2 n_2'; -m_n_2m_λ_2 m_n_2' ])×√((2l+1) (2l'+1)) ( [ l λ_1 + λ_2l'; 0 0 0 ])( [ l λ_1 + λ_2l';-m_l - (m_λ_1 + m_λ_2)m_l' ]) . This provides the elements of the coupling matrix in Eq. (<ref>) at long-range. From the properties of the 3-j symbols in Eq. (<ref>), we find the following selection rules (in addition to the triangle relation selection rule): (i) -m_n_1 + m_λ_1 + m_n_1'=0, (ii) -m_n_2 + m_λ_2 + m_n_2'=0, (iii) -m_l - (m_λ_1 + m_λ_2) + m_l'=0, which imply m_n_1 + m_n_2 + m_l = m_n_1' + m_n_2' + m_l' or M_J = M_J'. There are no couplings if M_J' ≠ M_J, showing that M_J is conserved during the collision.§.§ Propagation. Log-derivative 𝐙 matrix To get all the radial functions f(r), we need to solve the system of coupled equations Eq. (<ref>). In practice, for numerical reasons, the log-derivative of the radial functions is computed, rather than the functions themselves. This avoids numerical instabilities of the radial functions when a classically forbidden region is reached, and it avoids the need to compute the normalization of the functions at each r. We define the log-derivative matrix of the matrix 𝐅(r) in Eq. (<ref>) by: 𝐙(r) = 𝐅'𝐅^-1= [ d/dr 𝐅(r) ][ 𝐅(r) ]^-1 . The log-derivative matrix is a real and symmetric matrix, so that 𝐙^* = 𝐙 and 𝐙^t = 𝐙. When r → r_min≃ 0, the potential energy surface becomes very repulsive due to the impenetrability of the particles. Then the radial functions become zero with no couplings, so that 𝐅 and 𝐙 are diagonal. We then impose the initial log-derivative at r=r_min to be: 𝐙(r_min) = ∞×𝐈 . If we divide the range of the radial coordinate r from r_min to r_max into small segments of width Δ r (called sectors), one can propagate the log-derivative from sector to sector. Knowing the log-derivative in the previous sector, one obtains it in the current sector. Because we know the log-derivative at r_min≃ 0, we can propagate it to r_max≃∞. In this way we solve the system of coupled equations Eq. (<ref>) for each r; this is called a close-coupling calculation. There are several efficient numerical methods to solve this set of equations, for example Refs. <cit.>, which present no specific problems and can be routinely implemented. Those methods can compute not only the scattering properties of the coupled system with positive collision energies above the energy threshold of the two initially separated particles, but also the bound states with negative energies below the same threshold <cit.>.
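As a minimal sketch of such a propagation, specialized to a single channel for brevity (a production close-coupling code propagates the full log-derivative matrix 𝐙 sector by sector; the Lennard-Jones-type model potential and reduced units ħ^2/2m_red = 1 below are our toy assumptions):

import numpy as np

# Single-channel Numerov integration of u'' + [k^2 - U(r) - l(l+1)/r^2] u = 0,
# starting from a hard wall at r_min (Z(r_min) -> infinity) and returning
# the log-derivative Z = u'/u at r_max.
def numerov_logderiv(U, k2, l, r_min, r_max, n=20000):
    r = np.linspace(r_min, r_max, n)
    h = r[1] - r[0]
    g = k2 - U(r) - l * (l + 1) / r**2
    f = 1.0 + h**2 * g / 12.0
    u = np.zeros(n)
    u[0], u[1] = 0.0, 1e-10              # u(r_min) = 0 mimics Z = infinity
    for i in range(1, n - 1):
        u[i + 1] = ((12.0 - 10.0 * f[i]) * u[i] - f[i - 1] * u[i - 1]) / f[i + 1]
        if abs(u[i + 1]) > 1e100:        # rescale: only the ratio u'/u matters
            scale = u[i + 1]
            u[i - 1:i + 2] /= scale
    du = (u[-1] - u[-3]) / (2.0 * h)     # central difference at r[-2]
    return du / u[-2], r[-2]

U = lambda r: 1.0 / r**12 - 2.0 / r**6   # model potential, depth 1 at r = 1
Z, rm = numerov_logderiv(U, k2=1e-6, l=0, r_min=0.5, r_max=100.0)
print("single-channel log-derivative Z(r = %.2f) = %.6e" % (rm, Z))

The returned Z(r_max) is precisely the quantity handed over to the matching procedure of the next sections.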
§.§ Symmetry considerations We discuss in this section the role of the inversion and permutation symmetries and how they are handled in the quantum formalism. Inversion symmetry is considered when the potential energy does not include potentials that depend on the absolute position of the particles, while permutation symmetry is required when dealing with collisions of identical particles. Including those symmetries will reduce the number of equations that are coupled in Eq. (<ref>). Finally, we briefly discuss how the quantum formalism is modified when dealing with external potentials, for example when electric or magnetic fields are applied.§.§.§ Inversion symmetry We consider here a potential energy V_int that does not depend on the absolute position of the particles, such as a potential energy surface of a system. The basis function in Eq. (<ref>) turns out to be also an eigenfunction of the inversion parity operator I. This operator corresponds to the transformation (ρ⃗_1,ρ⃗_2,r⃗) → (-ρ⃗_1,-ρ⃗_2,-r⃗). This gives I Φ_αl m_l(ρ⃗_1,ρ⃗_2,r̂) = ϵ_IΦ_αl m_l(ρ⃗_1,ρ⃗_2,r̂) with the inversion parity quantum number ϵ_I ≡ (-1)^n_1+n_2+l = ± 1. This comes from the fact that the inversion x⃗→ -x⃗ of a vector is equivalent to (x,θ_x,φ_x) → (x,π-θ_x,φ_x+π) and thus implies Y_j^m_j(-x̂) = (-1)^jY_j^m_j(x̂), while the radial wavefunction remains unchanged. If so, applying the inversion operator to a function depending on the radial coordinate, like the coupling elements U^int(r) in Eq. (<ref>), leaves the function unchanged, IU^int(r) =U^int(r). On the other hand: IU^int(r)= ∫dρ⃗_1 dρ⃗_2 dr̂ Φ^*_α”l”m_l”(-ρ⃗_1, -ρ⃗_2, -r̂)U_int(-ρ⃗_1, -ρ⃗_2, -r⃗)Φ_α' l' m_l'(-ρ⃗_1, -ρ⃗_2, -r̂) = ϵ_Iϵ_I'∫dρ⃗_1 dρ⃗_2 dr̂ Φ^*_α”l”m_l”(ρ⃗_1, ρ⃗_2, r̂)U_int(-ρ⃗_1, -ρ⃗_2, -r⃗)Φ_α' l' m_l'(ρ⃗_1, ρ⃗_2, r̂). As the potential energy surface satisfies U_int(-ρ⃗_1, -ρ⃗_2, -r⃗) = U_int(ρ⃗_1, ρ⃗_2, r⃗), then ϵ_Iϵ_I' = 1 or ϵ_I = ϵ_I'. Inversion parity is then conserved in a collision involving a potential energy surface. This can be checked directly in Eq. (<ref>). From the property of the 3-j symbols containing the zero projections, these three symbols are non-zero only if (-1)^n_1+λ_1+n_1'=1, (-1)^n_2+λ_2+n_2'=1, and (-1)^l+λ_1+λ_2+l'=1. By rearranging the (-1)^λ_1+λ_2 term, this implies (-1)^n_1+n_2+l=(-1)^n_1'+n_2'+l' and then ϵ_I=ϵ_I'. This applies to collisions of either identical or different molecules. Note that inversion is not always conserved in a collision if external potentials V_ext are included. §.§.§ Permutation symmetry If the two particles are identical, one also has to symmetrize the internal wavefunction ϕ_α(ρ⃗_1, ρ⃗_2) = ϕ_α_1(ρ⃗_1)ϕ_α_2(ρ⃗_2) with respect to the operator P that permutes the two particles. The permutation of the two particles is equivalent to the transformation (ρ⃗_1, ρ⃗_2,r⃗) → (ρ⃗_2, ρ⃗_1,-r⃗). The properly symmetrized internal wavefunction is given by: ϕ_α η(ρ⃗_1, ρ⃗_2) =1/√(2(1+δ_α_1,α_2)) {ϕ_α_1(ρ⃗_1)ϕ_α_2(ρ⃗_2) + η ϕ_α_2(ρ⃗_1)ϕ_α_1(ρ⃗_2) } . η = ± 1 describes, respectively, a symmetric and anti-symmetric internal wavefunction with respect to the permutation, so that P ϕ_α η = η ϕ_α,η. Now the basis set functions in Eq. (<ref>) become: Φ_αl m_lη(ρ⃗_1, ρ⃗_2, r̂)≡ϕ_α η(ρ⃗_1, ρ⃗_2) Y_l^m_l(r̂) , and P Φ_αl m_lη = η(-1)^l Φ_αl m_lη since r⃗→ -r⃗ is equivalent to (r, θ_r,φ_r) → (r, π-θ_r,φ_r+π) and Y_l^m_l(-r̂) = (-1)^lY_l^m_l(r̂).
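This parity property is easy to verify numerically; the short check below assumes SciPy's sph_harm argument convention (azimuthal angle before polar angle):

import numpy as np
from scipy.special import sph_harm   # sph_harm(m, l, azimuthal, polar);
                                     # deprecated in recent SciPy for sph_harm_y

# Numerical check of Y_l^m(pi - theta, phi + pi) = (-1)^l Y_l^m(theta, phi),
# the property behind both the inversion and permutation selection rules.
theta, phi = 0.7, 1.9                # arbitrary polar / azimuthal angles
for l in range(4):
    for m in range(-l, l + 1):
        y = sph_harm(m, l, phi, theta)
        y_inv = sph_harm(m, l, phi + np.pi, np.pi - theta)
        assert np.allclose(y_inv, (-1)**l * y)
print("Y_l^m_l(-rhat) = (-1)^l Y_l^m_l(rhat) verified for l = 0..3")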
One can show, using the above properly symmetrized basis set, that the coupled equations are diagonal in η. Additionally, under permutation P of two identical particles, the total wavefunction has to obey the symmetrization principle: P ψ^E_tot = ϵ_Pψ^E_tot with ϵ_P=+1 if the (composite) particles are identical bosons and ϵ_P=-1 if the (composite) particles are identical fermions. On the basis set functions, this gives P Φ =ϵ_PΦ. This implies specific selection rules for η and l, following from the fact that η(-1)^l = ϵ_P. In the case of identical bosons ϵ_P=+1, internal wavefunctions of η = +1 (resp. η = -1) symmetry imply even partial waves l=0,2,4... (resp. odd partial waves l=1,3,5...). In the case of identical fermions ϵ_P=-1, internal wavefunctions of η = +1 (resp. η = -1) symmetry imply odd partial waves l=1,3,5... (resp. even partial waves l=0,2,4...). Note that all values of l are included in the dynamics since both symmetries of η are generally allowed. In the special case of indistinguishable particles, meaning particles in the same quantum state so that α_1 = α_2, Eq. (<ref>) implies that the wavefunction for the η=-1 symmetry does not exist. In this special case, the number of partial waves describing the dynamics is reduced following the rules just mentioned, since only the η=+1 symmetry survives. This implies even partial waves l=0,2,4... for indistinguishable bosons and odd partial waves l=1,3,5... for indistinguishable fermions. At ultralow energy E_c → 0, only the first and lowest partial wave is important for the dynamics. It is common to say that identical bosons in indistinguishable states collide in the s-wave (referring to l=0) and identical fermions in indistinguishable states collide in the p-wave (referring to l=1). §.§.§ Collisions in external fields Often in ultracold physics, additional external fields, such as electric or magnetic fields, are present to control the properties of the individual particles <cit.> and their interactions <cit.>. Then, additional external potentials V_ext appear in the Hamiltonian of the system. The previous formalism remains unchanged except that the individual particles i=1,2 are now perturbed by the external field. As a consequence, the (bare) internal state ϕ_α_i of the particle in the absence of an external field is replaced with the corresponding (dressed) internal state ϕ̃_α_i in the presence of the field. The dressed states are linear combinations of the bare states, with coefficients determined by the interaction of the particle with the field. In the collision formalism, we just replace the individual bare states ϕ_α_i of the particles i=1,2 with their dressed states ϕ̃_α_i. To compute the elements of the coupling matrix in Eq. (<ref>) between the dressed states Φ̃_αl m_l = ϕ̃_α_1 ϕ̃_α_2Y_l^m_l, there is now just an additional step: we expand the dressed states as their linear combinations of bare states and compute the corresponding sum over all the bare elements.
This presents no difficulties and is routinely done numerically. The other consequence is that J is not a good quantum number anymore, as mentioned above, and an uncoupled representation basis set is generally preferred. The last section of this paper will illustrate such an example, where ultracold collisions of dipolar KRb molecules occur in an external electric field.§ MATCHING THE TWO REGIONS To relate the observables far from the collision region to the potential energy and radial functions in the collision region, we will equate Eq. (<ref>) and Eq. (<ref>). §.§ Reactance matrix 𝐊. Relation with 𝐙 For practical and numerical reasons, the matching is not done at r_max≃∞ but rather at r_max for which | U^int| ≪ | U^cent|. This is the distance at which the interaction terms (diagonal and non-diagonal) can be safely neglected compared to the centrifugal ones. The set of coupled equations Eq. (<ref>) becomes diagonal, each diagonal element taking the form: { - ħ^2/2 m_red d^2/d r^2 + ħ^2 l(l+1)/2 m_red r^2 + ε - E_tot}f(r) = 0 , the equations differing only by the values ε of the thresholds. This can also be written: r^2 f”(r) + [k^2 r^2 - l(l+1)] f(r) = 0 with the wavevector k = √(2 m_red(E_tot-ε)/ħ^2). Two independent solutions are given by j̃ and ñ, the Riccati-Bessel and Riccati-Neumann functions <cit.>. They are related to the spherical Bessel and spherical Neumann functions by j̃_l=kr j_l(kr) and ñ_l= kr n_l(kr), and to the Bessel and Neumann functions by j_l(kr) = √(π/2kr)J_l+1/2(kr) and n_l(kr)= √(π/2kr)N_l+1/2(kr). If we set ρ=kr, the solutions for the first l's are: j̃_0(ρ)= sin(ρ), j̃_1(ρ)= sin(ρ)/ρ - cos(ρ), ñ_0(ρ)= - cos(ρ), ñ_1(ρ)= - cos(ρ)/ρ - sin(ρ). The behaviour for ρ→ 0 is: j̃_l(ρ)ρ→ 0∝ρ^l+1/(2l+1)!!, ñ_l(ρ)ρ→ 0∝ -(2l-1)!!ρ^-l with x!!=x (x-2) (x-4) ... The j̃ are often called regular functions since j̃→ 0 as ρ→ 0, and the ñ are often called irregular functions since ñ→±∞ as ρ→ 0. For ρ→∞: j̃_l(ρ)ρ→∞∝sin(ρ - lπ/2), ñ_l(ρ)ρ→∞∝ -cos(ρ - lπ/2). A general solution of Eq. (<ref>) for the radial functions at r=r_max is given by: 𝐅(r) = 𝐅^(1) 𝐀 + 𝐅^(2) 𝐁 |_r=r_max where: F^(1)_i',i = δ_i',i1/k_α'^1/2 j̃_l'(k_α' r), F^(2)_i',i = δ_i',i1/k_α'^1/2 ñ_l'(k_α' r) . 𝐀, 𝐁 are real constant matrices, independent of r. In the special case without coupling terms, that is, no off-diagonal terms in Eq. (<ref>) ∀ r, the system is uncoupled and 𝐀, 𝐁 will be diagonal at r=r_max. More generally, when coupling terms are present for r < r_max in Eq. (<ref>), the system is coupled and 𝐀, 𝐁 will in general be full matrices at r=r_max. We can also write Eq. (<ref>) as: 𝐅(r) = 𝐅^K(r)𝐍^K |_r=r_max with: 𝐅^K(r) = {𝐅^(1) - 𝐅^(2) 𝐊} . 𝐊 is called the reactance matrix. 𝐍^K is a real normalization matrix. From Eq. (<ref>), Eq. (<ref>) and Eq. (<ref>), 𝐊≡ - 𝐁 𝐀^-1 and 𝐍^K≡𝐀. The superscript K indicates that the radial functions obey boundary conditions of the 𝐊 matrix. The 𝐊 matrix is real as the matrices 𝐀, 𝐁 are real. The off-diagonal matrix elements of 𝐊 provide an indication of the admixture of the other final channels in the wavefunction, due to the couplings from the interaction potential energy of the system, for a given initial colliding channel. We chose the factors k_α'^-1/2 in the two linearly independent functions 𝐅^(1), 𝐅^(2) so that the Wronskian matrix 𝐖 = 𝐅^(1)𝐅^'(2) - 𝐅^'(1)𝐅^(2) is the identity matrix 𝐈.
If so, 𝐊 is also a symmetric matrix. This is shown in Proof 1 of the appendix of this paper. 𝐊 is related to the 𝐙 matrix by (the order of the matrix multiplication is important to get a symmetric matrix): 𝐊 = {𝐙 𝐅^(2) - 𝐅^'(2)}^-1 {𝐙 𝐅^(1) - 𝐅^'(1)} |_r=r_max . This is often referred to as the matching procedure, performed at r=r_max. This is shown in Proof 2 of the appendix. From the proof, one can see that the reactance matrix is independent of the choice of the normalization matrix 𝐍^K of the radial functions. It depends only on the log-derivative matrix 𝐙 at r_max: if 𝐙 is diagonal (non-diagonal) due to the uncoupled (coupled) Schrödinger equations, 𝐊 is diagonal (non-diagonal).§.§ Scattering matrix 𝐒. Relation with 𝐊 The problem with Eq. (<ref>) is that the functions are not written in terms of incoming and outgoing radial functions, such as the ones appearing in the asymptotic wavefunction in Eq. (<ref>). When r →∞ in Eq. (<ref>), Eq. (<ref>) shows that the Riccati-Bessel and Riccati-Neumann functions behave as sine and cosine functions, which can also be written in terms of incoming/outgoing spherical waves. Another general solution of Eq. (<ref>) for the radial functions is then given by: 𝐅(r)r →∞= 𝐅^(-) 𝐀' + 𝐅^(+) 𝐁', where F^±_i',i = δ_i',i1/k_α'^1/2e^± i(k_α' r - l'π/2) are incoming (-) or outgoing (+) spherical waves and 𝐀', 𝐁' are complex constant matrices, independent of r. Again, in the special case without coupling terms ∀ r in Eq. (<ref>), 𝐀' and 𝐁' will be diagonal, while they will be full matrices if coupling terms are present. We can also write Eq. (<ref>) as: 𝐅(r)r →∞= 𝐅^S(r) 𝐍^S with: 𝐅^S(r) = {𝐅^- - 𝐅^+𝐒}. 𝐒 is the scattering matrix. 𝐍^S is a complex normalization matrix. The superscript S indicates now that the radial functions obey boundary conditions of the 𝐒 matrix. Eq. (<ref>) is the useful form to match with the asymptotic one in Eq. (<ref>) because it uses incoming and outgoing radial functions as well. From Eq. (<ref>), Eq. (<ref>) and Eq. (<ref>), 𝐒≡ - 𝐁' 𝐀'^-1 and 𝐍^S≡𝐀'. 𝐒 is related to the 𝐊 matrix by: 𝐒 = 𝐈+i𝐊/𝐈-i𝐊 . This is shown in Proof 3 of the appendix. Again from the proof, one can see that the scattering matrix is independent of the normalization matrix 𝐍^S of the radial functions. It depends only on the reactance matrix, and hence the log-derivative matrix. 𝐒 is a symmetric matrix: 𝐒^t = 𝐒, and a unitary matrix: 𝐒 𝐒^† = 𝐒^† 𝐒 = 𝐈, as shown in Proof 4 of the appendix. 𝐒 is in general a complex matrix, with a real and an imaginary part. The coefficient of the outgoing wave in the channel i' coming from an incoming wave in the channel i is given by the element S_i',i. The elements |S_i',i|^2 correspond to the ratio of the outgoing flux 4 πħ |S_i',i|^2 / m_red over the incoming one 4 πħ / m_red in absolute value (one can compute the flux using Eq. (<ref>) and Eq. (<ref>), using the radial functions in Eq. (<ref>) and integrating over the whole solid angle dr̂). Then the probability to collide from a state i to a state i' is simply given by P_i → i' = |S_i',i|^2 with:∑_i'P_i → i' = 1. Finally, we impose a diagonal normalization matrix 𝐍^S in Eq. (<ref>). This ensures that an independent solution of the Schrödinger equation, corresponding to a given column of the 𝐅 matrix in Eq. (<ref>), has the same overall normalization up to a multiplicative factor, as suggested by Eq. (<ref>). In that way, the diagonal elements of this matrix identify directly with the normalization factor we have already defined in Eq. (<ref>), so that N^S_αl m_l, αl m_l≡ N_αl m_l.
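A minimal numerical sketch of the 𝐊-to-𝐒 relation follows; the 3-channel reactance matrix below is made up at random, since any real symmetric 𝐊 must yield a symmetric, unitary 𝐒:

import numpy as np

# Sketch of S = (I + iK)(I - iK)^(-1) for a made-up 3-channel K matrix.
rng = np.random.default_rng(1)
K = rng.normal(size=(3, 3))
K = 0.5 * (K + K.T)                              # real, symmetric K

I = np.eye(3)
S = np.linalg.solve(I - 1j * K, I + 1j * K)      # (I - iK)^(-1) (I + iK)

print("unitary:  ", np.allclose(S @ S.conj().T, I))
print("symmetric:", np.allclose(S, S.T))
print("sum_i' |S_i'i|^2 for i = 0:", (np.abs(S[:, 0])**2).sum())

The last line illustrates the probability conservation ∑_i' P_i → i' = 1 stated above; with the lossy short-range boundary condition introduced later, this sum drops below one.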
§.§ Transition matrix 𝐓. Relation with observables At r_max≃∞, where U^int,U^cent→ 0 in Eq. (<ref>), the wavefunction tends to Eq. (<ref>) far from the collision region: ψ^E_tot_α, k⃗_αr →∞= A [ e^i k⃗_α . r⃗ ϕ_α + ∑_α'f^+_α→α'e^i k_α' r/r ϕ_α']= ψ_inc + ψ_scat, where ψ_inc has the form of Eq. (<ref>). When no interaction potential energy is present, there is no scattering (ψ_scat =0), and ψ_α, k⃗_α = ψ_inc contains only the initial internal state ϕ_α, for which the radial function is a superposition of incoming and outgoing spherical waves e^± i(k_α r - lπ/2)/r of equal amplitudes. In the presence of the interaction potential energy term U_int, the scattering wave ψ_scat will additionally produce outgoing spherical waves e^i(k_α' r - l'π/2)/r in final states ϕ_α', responsible for inelastic transitions. Both the asymptotic expansion Eq. (<ref>) and the partial wave expansion Eq. (<ref>) and Eq. (<ref>), using Eq. (<ref>), now contain an incoming and an outgoing spherical wave term. One can then identify their expressions. This leads to the expression of the normalization factor of each partial wave in Eq. (<ref>): N_αl m_l(k⃗_α) =A 2 π i/k_α^1/2i^l [Y_l^m_l(k̂_α)]^*. Similarly, the scattering amplitude in Eq. (<ref>) writes: f^+_α→α'(k⃗_α,r̂) = 2 π/i k_α^1/2k_α'^1/2 ∑_l=0^∞∑_m_l=-l^l∑_l'=0^∞∑_m_l'=-l'^l'i^l-l'[Y_l^m_l(k̂_α)]^* Y_l'^m_l'(r̂) T_α' l' m_l',αl m_l(k_α) in terms of the transition matrix: 𝐓 = 𝐒 - 𝐈 . Note that some references use the definition 𝐓 = 𝐈 - 𝐒, but the scattering amplitude is then defined with a factor of 2 π i instead of 2 π / i in Eq. (<ref>), which provides in any case the same scattering amplitude. One can then get the observables in terms of the 𝐓 matrix. The differential cross section is given by Eq. (<ref>): ∂σ_α→α'(k⃗_α,r̂)/∂r̂ ∂k⃗_α = k_α'/k_α|f^+_α→α'(k⃗_α,r̂)|^2 = 4 π^2/k_α^2∑_l_a^∑_m_l_a^∑_l_b^∑_m_l_b^∑_l_c^∑_m_l_c^∑_l_d^∑_m_l_d^ i^-l_a+l_b+l_c-l_d × Y_l_a^m_l_a(k̂_α) [Y_l_b^m_l_b(r̂)]^* [Y_l_c^m_l_c(k̂_α)]^* Y_l_d^m_l_d(r̂)×T^*_α' l_a m_l_a,α l_b m_l_b(k_α) T_α' l_c m_l_c,αl_d m_l_d(k_α), where running indexes l_a, m_l_a, ..., l_d, m_l_d have been used in the expression of the modulus squared of the scattering amplitude. The averaged integral cross section is given by Eq. (<ref>): σ_α→α'(E_c)= Δ×π/k_α^2 ∑_l^∑_m_l^∑_l'^∑_m_l'^|T_α' l' m_l',αl m_l(k_α)|^2 = ∑_l^∑_m_l^σ_α→α', l m_l(E_c), where we can define a partial wave cross section σ_α→α', l m_l. From Eq. (<ref>) to Eq. (<ref>), we used the fact that the integration over k̂_α gives δ_l_a,l_c δ_m_l_a,m_l_c and the integration over r̂ gives δ_l_b,l_d δ_m_l_b,m_l_d. In the case of identical particles starting in indistinguishable states (ϕ_α = ϕ_α_1ϕ_α_2 with α_1 = α_2), one has to multiply the cross sections by a factor Δ = 2 for symmetry reasons, as the differential cross sections have to be integrated over half space only <cit.>. Note that in this case the number of partial waves is halved compared to the case of identical but distinguishable or different particles, due to the specific rules for the partial waves mentioned above. In the case of identical particles starting in distinguishable states (α_1 ≠α_2), or in the case of different particles, Δ = 1. Eq. (<ref>) is used to obtain the corresponding rate coefficient. In a numerical calculation, one usually computes the 𝐙,𝐊,𝐒,𝐓 matrices in this order to get the observables. §.§ Link to scattering of structureless particles.
The central potential problem It is interesting to see how to recover the central potential problem for elastic scattering of structureless particles (which can be found in many textbooks <cit.>) from the more general elastic and inelastic scattering formalism of particles with internal structure presented in this paper. First, in the central potential problem the interaction is assumed to be isotropic, U_int(r⃗)=U_int(r), so that it does not depend on the angles r̂. Then the operators H,L^2,L_z commute and l,m_l are good quantum numbers which are conserved during the collision, in addition to the total energy. So l'=l and m_l'=m_l. Secondly, for an elastic collision, α'=α. Finally, the collision does not depend on the direction of the incident particles since the potential is isotropic. One can choose for example the direction k̂_α≡ẑ = (0,0). Then [Y_l^m_l(0,0)]^* ≡√(2l+1/4π) δ_m_l,0, which implies m_l=0. The asymptotic expansion writes: ψ^E_tot_k r →∞=A [ e^i kz+ f^+(k,r̂) e^i k r/r] . Then the scattering amplitude reduces to: f^+_α→α(k⃗_α,r̂)=f^+(k_α,r̂) = 2 π/i k_α^1/2k_α'^1/2∑_l=0^∞∑_m_l=-l^l∑_l'=0^∞∑_m_l'=-l'^l'i^l-l'[Y_l^m_l(k̂_α)]^*Y_l'^m_l'(r̂) T_α' l' m_l',αl m_l(k_α) = 2 π/i k_α ∑_l=0^∞ i^0 √(2l+1/4π) δ_m_l,0Y_l^0(r̂) T_αl 0,αl 0(k_α) = 2 π/i k ∑_l=0^∞ i^0 √(2l+1/4π) √(2l+1/4π)P_l^0(cosθ)T_l= 1/2 i k ∑_l=0^∞ (2l+1) P_l^0(cosθ)T_l and the cross section reduces to: σ(k)= ∫dr̂|f^+|^2 = 1/4 k^2 ∑_l=0^∞∑_l'=0^∞(2l+1) (2l'+1) [ ∫dr̂P_l^0(cosθ)P_l'^0(cosθ) ] T^*_lT_l'= π/k^2 ∑_l=0^∞(2l+1) |T_l|^2 . From Eq. (<ref>) to Eq. (<ref>), we used ∫_0^πP_l^0P_l'^0sinθdθ = 2/(2l+1)δ_l,l' and ∫_0^2πdφ = 2 π. Because for elastic collisions the 𝐒 matrix reduces to a single element for a given l, it can be written S_l=e^2i δ_l(k). δ_l(k) is called the scattering phase shift in the partial wave l. Since there are no inelastic channels, |S_l|^2=1. The role of the central potential is then to shift the phase of the outgoing wave by δ_l(k). By noting that 1 -e^2i δ_l(k) = e^i δ_l(k) ( e^-i δ_l(k) - e^i δ_l(k) ) = - 2i e^i δ_l(k)sinδ_l(k), one can also find: σ(k) =4 π/k^2 ∑_l=0^∞(2l+1)sin^2δ_l(k) which is a formula often quoted in textbooks. The phase shift is related to the K matrix by K_l=tanδ_l. Note that we recover Eq. (<ref>) because: S_l = e^2i δ_l = 1+itanδ_l/1-itanδ_l=1+i K_l/1-iK_l. We used the fact that 1 ± i tanδ = 1 ±e^iδ-e^-iδ/e^iδ+e^-iδ =2 e^± i δ/e^iδ+e^-iδ.§ BEHAVIOUR AT ULTRALOW ENERGY. SCATTERING LENGTH AND THRESHOLD LAWS We now present how the dynamics of two colliding particles behaves at ultralow energy, when E_c → 0. To simplify the discussion, we will take the case of an elastic collision of structureless particles interacting with a central potential U_int(r), as described in the previous section. The Schrödinger equation reads: { - ħ^2/2 m_red d^2/d r^2 + ħ^2 l(l+1)/ 2 m_red r^2 +U^int(r) - E_c }f(r) = 0 where E_c = ħ^2 k^2/2m_red (we take the energy of the two separated particles as the reference energy). The matching procedure Eq. (<ref>) is performed at r_max=r_0, where r_0 denotes the typical distance for which | U^int(r_0)| ≪ | U^cent(r_0)|. On the one hand, there is always a typical collision energy E_c^* at and below which E_c ≪ | U^int(r_0)|, | U^cent(r_0)|, so that the Schrödinger equation is in this limit independent of E_c at r_0. Then, the function and its derivative at r = r_0 are also independent of E_c. Its log-derivative is then a given constant Z = C at r = r_0. On the other hand, from Eq.
(<ref>), we know the general form of f(r) = f^(1)(ρ) - f^(2)(ρ) K_l = j̃(ρ)/√(k) - ñ(ρ)/√(k)K_l (using ρ = kr) and its derivative f'(r) = f^'(1)(kr) - f^'(2)(kr) K_l, the prime being a derivative with respect to r. If we use d/dr = k d/dρ, we have f'(r) = k (df^(1)(ρ)/dρ) - k (df^(2)(ρ)/dρ) K_l = √(k)(dj̃(ρ)/dρ) - √(k)(dñ(ρ)/dρ)K_l. We perform the matching procedure at r_max=r_0, using Eq. (<ref>) for the functions and their derivatives as E_c, k → 0, using a constant energy-independent value of the log-derivative Z = C, and using the fact that K_l = tan(δ_l) where δ_l is the scattering phase shift (see the central potential problem above). Eq. (<ref>) gives <cit.>: tan(δ_l)= Z_l f^(1) - f^'(1)/Z_l f^(2) - f^'(2)= Cj̃(ρ)/√(k) - √(k)(dj̃(ρ)/dρ)/Cñ(ρ)/√(k) - √(k)(dñ(ρ)/dρ)= Cj̃(ρ) - k (dj̃(ρ)/dρ)/Cñ(ρ) - k (dñ(ρ)/dρ)ρ→ 0=-1/(2l+1)!!(2l-1)!! C Dρ^l+1 - E k ρ^l/C Fρ^-l - G k ρ^-l-1k → 0=-(2l+1)/[(2l+1)!!]^2 C D k^l+1r_0^l+1 - E k^l+1r_0^l/C F k^-lr_0^-l - G k^-lr_0^-l-1k→ 0=-(2l+1)/[(2l+1)!!]^2( C D r_0^l+1 - E r_0^l/C F r_0^-l - G r_0^-l-1) k^2l+1k → 0=-Lk^2l+1 where D,E,F,G are dimensionless proportionality factors in Eq. (<ref>). Since C and k have the dimension of an inverse length and tan(δ_l) has no units, the constant L has the dimension of a length to the power 2l+1. The most important partial wave to describe the collision at ultralow energies corresponds to the lowest partial wave. For identical and indistinguishable bosonic particles or for different particles, the first partial wave is l=0 as mentioned earlier; then L has the dimension of a length. We define the s-wave scattering length by: a_s = k→ 0lim - tanδ_l=0(k)/k. The cross section can be linked to the scattering length by: σ_l=0(k)= 4π/k^2sin^2δ_0(k) = 4π/k^21/sin^2δ_0(k)+cos^2δ_0(k)/sin^2δ_0(k)= 4π/k^21/1 + 1/tan^2δ_0(k)= 4π/k^21/1 + 1/(a_s k)^2k→ 0→4 π a_s^2. This cross section is the same as the one provided by a hard sphere potential of radius a_s, that is, U^int(r) = ∞ if r ≤ a_s, 0 otherwise. Then at ultralow energy, one can safely replace a complicated interaction potential energy by a simple hard sphere model potential, since the cross sections will be the same. The model potential represents a simple, effective potential for the collision of the system, for which the scattering length is the essential parameter. In the ultracold physics of many-body interacting systems, the scattering length plays a crucial role, in terms of which the many-body physics is described. It appears, for example, in the Gross–Pitaevskii equations describing the physics of ultracold gases of particles in interaction <cit.>. For identical and indistinguishable fermionic particles, the first partial wave is l=1; then L is a volume. We define the p-wave scattering length (the volume L is the cube of this length) by: a^3_p = k→ 0lim - tanδ_l=1(k)/k^3. The results in Eq. (<ref>), Eq. (<ref>) and Eq. (<ref>) are not generally valid for potentials falling off asymptotically as an inverse power of the distance r. For an interaction potential U^int(r) = ± C_s / r^s, with s>2, the threshold behaviour in Eq. (<ref>) is dominant for partial waves l < (s-3)/2 <cit.>. For partial waves l > (s-3)/2, the dominant threshold behaviour becomes: tan(δ_l) k→ 0∝ k^s-2. For partial waves l = (s-3)/2, both contributions Eq. (<ref>) and Eq. (<ref>) are taken to describe the threshold behaviour. Using Eq. (<ref>) and Eq.
(<ref>), the behaviour of the elastic cross sections and rate coefficients at vanishing collision energy, when using the threshold behaviour Eq. (<ref>), becomes: σ^el_l k, E_c→ 0∝ k^4l∝ E_c^2l β^el_l k, E_c→ 0∝ k^4l+1∝ E_c^2l+1/2. When using the threshold behaviour Eq. (<ref>), it becomes: σ^el_l k, E_c→ 0∝ k^2s-6∝ E_c^s-3 β^el_l k, E_c→ 0∝ k^2s-5∝ E_c^s-5/2. The behaviours of the inelastic/reactive cross sections and rate coefficients are given without proof <cit.>: σ^in/re_l k, E_c→ 0∝ k^2l-1∝ E_c^l-1/2 β^in/re_l k, E_c→ 0∝ k^2l∝ E_c^l . These expressions are called the threshold laws or Wigner laws <cit.>. § APPLICATION TO ULTRACOLD COLLISIONS OF DIPOLAR MOLECULES IN ELECTRIC FIELDS In 2008, a major breakthrough was made in the field of ultracold molecular physics with the production of a dense and coherent gas of ultracold dipolar fermionic ^40K^87Rb molecules <cit.>. In contrast with the previous experiments of that time <cit.>, these molecules were produced in their ground electronic state ^1Σ^+, their ground vibrational state v=0, and their ground rotational state n=0, with additional control over the hyperfine states <cit.>. The experimentalists were therefore able to prepare the internal state of all the molecules of a dense gas in the absolute ground state. The KRb molecule possesses, in its own frame, a permanent electric dipole moment of d=0.57 D <cit.>. Therefore, the energy of the molecules and their interactions can be manipulated with an external electric field. KRb molecules are also chemically reactive even in their absolute ground state <cit.>, so that KRb + KRb → K_2 + Rb_2 is an exoergic process. On the one hand, this is a drawback for creating long-lived gases of strongly dipolar ultracold molecules in experiments, since this chemical reaction will lead to large molecular losses. But on the other hand, if an electric field is applied, the molecular losses, which can be quite easily measured in an experiment as a function of time, will directly provide a signature of the dipolar interaction of the colliding molecules. It is therefore important to understand the collisional properties of the dipolar gas, in terms of its stability and lifetime. Collisions also drive the thermal equilibrium of the gas and are very important for evaporative cooling, used to further decrease the temperature and eventually reach quantum degeneracy, as was done for ultracold gases of atoms <cit.>. As an illustration of the formalism studied in this paper, we will present in this section the collisional properties of KRb + KRb → K_2 + Rb_2 as a function of an electric field, for ^1Σ^+, v=0 molecules initially in the ground rotational state n=0 and in the first excited rotational state n=1, for both fermionic ^40K^87Rb molecules and bosonic ^41K^87Rb molecules. The spin structure of the molecules will not be taken into account in the following. §.§ A simplified problem The full time-independent quantum mechanical formalism developed previously still represents a numerical challenge for diatom-diatom or polyatomic molecular collisions at the present time: (i) Firstly, full potential energy surfaces of polyatomic systems (involving all degrees of freedom) are generally challenging to compute, especially in the region of the complex where the atoms are close to each other. This is still feasible for tri-atomic systems but becomes in general difficult for tetra-atomic ones. (ii) Secondly, when systems are chemically reactive, the Jacobi coordinates used in the present formalism are not appropriate anymore.
Instead, one has to use hyperspherical coordinates <cit.>, as already mentioned, which treat in a symmetric way the polyatomic system formed by the atoms. The hyperspherical formalism <cit.> is well adapted for proper symmetrization of the overall wavefunction with respect to identical atom exchange, as well as for treating the products of a chemical reaction. However, the formalism becomes difficult to handle numerically, especially using a full potential energy surface. Consequently, chemically reactive collisions of diatomic molecules have to be tackled in another way at the present time. To overcome those problems, we will use two assumptions to treat the collisions of two reactive diatomic molecules.§.§.§ Long-range interaction First we will consider only the long-range interaction of the potential energy, so that U_int = U_mult. At ultralow collision energies, the dynamics becomes more and more sensitive to the longest-ranged term in the potential energy. In the case of neutral diatomic molecules which possess an electric dipole moment, like KRb, the longest-ranged term in the multipole-multipole interaction is the dipole-dipole interaction (λ_1 = λ_2 = 1, λ=2 in Eq. (<ref>)), so that U_mult = U_dd. The matrix elements in the uncoupled basis presented above for a diatomic molecule are given by:⟨n_1, m_n_1, n_2, m_n_2, l, m_l| U_dd | n_1', m_n_1', n_2', m_n_2', l', m_l' ⟩=-√(30) d^2/4 πε_0 r^3∑_m_λ_1m_λ_2(-1)^m_n_1+m_n_2+m_l ( [112;m_λ_1m_λ_2 -(m_λ_1 + m_λ_2) ]) ×√((2n_1+1) (2n_1'+1)) ( [n_11 n_1';000 ])( [n_11 n_1'; -m_n_1m_λ_1 m_n_1' ])×√((2n_2+1) (2n_2'+1)) ( [n_21 n_2';000 ])( [n_21 n_2'; -m_n_2m_λ_2 m_n_2' ])×√((2l+1) (2l'+1)) ( [l2 l';000 ])( [l2 l'; -m_l -(m_λ_1 + m_λ_2) m_l' ]) where d ≡ Q_1 0 is the electric dipole moment. Higher multipole terms such as the quadrupole and octopole terms <cit.> can become important at higher collision energies <cit.>. In addition to the dipole-dipole term, we include a diagonal electronic -C_6/r^6 van der Waals interaction <cit.>.§.§.§ A short-range tunable condition Secondly, we will use a phenomenological approach to treat the molecular collisions at short-range. The initial condition for the propagation of the radial wavefunction was given by a diagonal matrix in Eq. (<ref>), corresponding to an infinite wall at r=r_min. We will now slightly modify this condition. We still keep the matrix diagonal, meaning no couplings between channels at short-range, but we now allow some additional effective scattering phase shift and some effective loss for each channel, due to the (unknown) potential energy surface at short-range. We then construct a flexible and tunable log-derivative matrix, where the diagonal elements for a channel i are given by <cit.>: Z_i,i(r=r_min)= 4 k_mins c√(1-p_SR)/c^2 (√(1-p_SR)-1)^2 + s^2 (√(1-p_SR)+1)^2-ik_minp_SR/c^2 (√(1-p_SR)-1)^2 + s^2 (√(1-p_SR)+1)^2, where: k_min = √(2 m_red[E_tot -U^eff_i,i(r=r_min)]/ħ^2) and: c = cos(k_minr_min + δ_SR), s = sin(k_minr_min + δ_SR). The log-derivative at r_min can be continuously tuned by two parameters 0 ≤ p_SR≤ 1 and 0 ≤δ_SR≤π.
p_SR represents a loss probability for the flux coming from the long-range region r > r_min, describing phenomenologically a loss at short-range, while δ_SR represents a phase shift accumulated in the short-range region r < r_min, describing phenomenologically the effect of the (unknown) potential energy surface there. The above log-derivative condition has been constructed at r=r_min so that it describes a square well of constant depth U^eff_i,i(r_min) given by Eq. (<ref>) from r=0 to r=r_min, with a tunable complex phase shift δ= δ_r + iδ_i and a corresponding amplitude e^ 2iδ= e^-2 δ_ie^ 2i δ_r appearing in front of the outgoing solution of the radial wavefunction of the square well potential e^- i k_min r - e^ 2iδe^+ i k_min r <cit.>. e^ 2iδ corresponds to a short-range S_SR matrix element, whose probability |S_SR|^2=e^-4δ_i is a number between 0 and 1 depending on δ_i and represents the probability for the flux returning to the long-range region r > r_min. The loss probability p_SR is then defined as p_SR=1-e^-4δ_i, so that e^-2 δ_i≡√(1 - p_SR). We also note δ_r ≡δ_SR. The condition for full loss of the flux at short range is given by p_SR=1 and gives Z_i,i(r_min) = - ik_min for all diagonal elements. This is often called the universal regime, since no resonances appear in the cross sections or rate coefficients <cit.> and they are independent of the phase shifts δ_SR <cit.>. The results then become independent of the short-range interaction of the systems. The opposite condition for full reflection of the flux is given by p_SR=0 and gives Z(r_min) = k_min c/s, which is the usual case for a square potential and depends on the tunable phase shift δ_SR. With an adequate choice of δ_SR = - k_minr_min (modulo π), we can recover the infinite wall condition from Eq. (<ref>). A value 0 < p_SR < 1 in between, with 0 < δ_SR < π, describes an intermediate case where we can have both loss and reflection <cit.>. This can actually be used to fit the theoretical results to experimental data <cit.>, since the short-range potentials are generally not known. The form of this initial tunable log-derivative is then flexible and can treat the possibility of loss at short-range in a phenomenological way. The complex log-derivative matrix provides a complex 𝐊 matrix and an 𝐒 matrix which is no longer unitary. The difference between unity and the sum of the |S|^2 matrix elements for one channel provides the overall loss probability of this channel, which translates into a loss cross section and a loss rate coefficient. This is an overall loss, as we cannot determine the individual final state-to-state loss probabilities. When describing ultracold collisions of reactive molecules, the universal regime condition at short-range p_SR=1 is often chosen, as we know nothing about the full potential energy surface. It is convenient since this condition is independent of the short-range interaction of the systems, as mentioned above. It means that when the two molecules meet at short-range, the probability of reaction is one. Comparison with experimental data will eventually tell if one deviates from this regime or not. For the case of non-reactive molecules with a high density of Fano-Feshbach resonances around the collisional threshold <cit.>, this condition is also often chosen. In this case, it has been supposed that the molecules might form a molecule-molecule complex for a certain time. The higher the density of Fano-Feshbach resonances, the longer the lifetime of this forming complex.
As a consequence, in this high density regime, it has been shown that the rate at which two molecules form a tetra-atomic complex is exactly the same as the rate at which two molecules are destroyed at short-range with a full loss probability p_SR=1 <cit.>. Subsequently, the complex can be destroyed by a collision with a third molecule, resulting in losses of the molecules. Recent experiments observed losses of non-reactive molecules in their absolute ground state for RbCs <cit.>, NaK <cit.>, and NaRb <cit.> molecules. Even though a direct observation of the forming complexes was not obtained, the hypothesis formulated in <cit.> could be a possible explanation of the experimental molecular losses. We end up with: (i) a long-range interaction from r=r_min to r=r_max and (ii) a tunable short-range boundary condition at r=r_min, which describes phenomenologically scattering phase-shifts and additional losses from short-range. To study the collision KRb + KRb → K_2 + Rb_2, we will use the full loss (universal) condition at short-range, p_SR=1, so that Z_i,i(r_min) = -i k_min for each diagonal element.
§.§ Molecules in an electric field We consider KRb molecules in their ground electronic state ^1Σ^+ and their ground vibrational state v=0. We do not take into account any spin structure, as mentioned earlier. Then, only their rotational structure can change in a collision. The bare internal rotational states of a molecule are usually described by spherical harmonics Y_n_i^m_n_i, denoted by the ket |n_i m_n_i⟩ for molecule i=1,2. In this basis set, the rotational Hamiltonian is given by ⟨ n_i m_n_i | h_rot | n_i' m_n_i' ⟩ = B_rot n_i(n_i+1) δ_n_i,n_i' δ_m_n_i,m_n_i', where B_rot is the rotational constant of the molecule. We take B_rot = 1.113950 GHz <cit.> for the fermionic ^40K^87Rb molecule and B_rot = 1.095362 GHz <cit.> for the bosonic ^41K^87Rb molecule. In an electric field, we add the Stark term given by the interaction h_S = - d⃗ · E⃗ between the permanent electric dipole moment d⃗ of the molecule and an electric field E⃗ = E e⃗_Z taken along the OZ direction. In the basis set |n_i m_n_i⟩, the Stark term is written <cit.>: ⟨ n_i m_n_i | h_S | n_i' m_n_i' ⟩ = - d E δ_m_n_i,m_n_i' (-1)^m_n_i √(2n_i+1) √(2n_i'+1) × ( [ n_i 1 n_i'; 0 0 0 ])( [ n_i 1 n_i'; -m_n_i 0 m_n_i' ]) . A permanent electric dipole moment d⃗ is defined in the frame of the individual molecule, where the inter-atomic axis is chosen as the quantization axis. We choose the convention that the orientation of the permanent dipole moment d⃗ points from the negative to the positive distribution of charge <cit.>. The sign of the vector d⃗ depends on the inter-atomic axis orientation in the frame of the individual molecule. This is an arbitrary choice but needs to be specified to avoid confusion. Here we assume that the inter-atomic axis is oriented from the lightest atom to the heaviest one <cit.> (for identical atoms of the same mass, there is no electric dipole moment), as shown in Fig. <ref>, where the unit vector ρ⃗_i/|ρ⃗_i| for molecule i=1,2 points from the lightest atom (represented by a small blue circle) to the heaviest one (represented by a big red circle). Using the above convention and the orientation of the inter-atomic axis, a positive vector d⃗ would then mean that the negative distribution of charge is on the lightest atom while the positive distribution is on the heaviest one. A negative vector would mean the opposite. As an example, the permanent dipole moment d⃗ for KRb is a positive vector with a magnitude of d=0.57 D. It means that the negative distribution of charge is on the K atom while the positive one is on Rb, in the individual molecular frame.
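As an illustration, these Stark matrix elements are easy to evaluate with standard 3-j routines. The following minimal sketch (ours, not the code behind the results presented here) assumes SymPy's wigner_3j and takes the product dE ≡ d·E already converted to the same frequency units as B_rot; it then diagonalizes h_rot + h_S, anticipating the dressed states introduced in the next paragraph:

```python
import numpy as np
from sympy.physics.wigner import wigner_3j

B_ROT = 1.113950   # GHz, rotational constant of fermionic 40K87Rb (value used in the text)

def stark_element(n, m, n2, m2, dE):
    """<n m | h_S | n' m'> for a field along Z; dE = d*E in GHz (our unit assumption)."""
    if m != m2:
        return 0.0                     # the Stark term conserves m_n
    return (-dE * (-1)**m
            * np.sqrt((2*n + 1) * (2*n2 + 1))
            * float(wigner_3j(n, 1, n2, 0, 0, 0))
            * float(wigner_3j(n, 1, n2, -m, 0, m2)))

def dressed_energies(dE, m_n=0, n_max=5):
    """Eigenenergies of h_i = h_rot + h_S in the fixed-m_n block of the |n m_n> basis."""
    ns = list(range(abs(m_n), n_max + 1))
    H = np.zeros((len(ns), len(ns)))
    for i, n in enumerate(ns):
        H[i, i] = B_ROT * n * (n + 1)                        # rotational part
        for j, n2 in enumerate(ns):
            H[i, j] += stark_element(n, m_n, n2, m_n, dE)    # Stark part
    return np.linalg.eigvalsh(H)

# Induced dipole of the dressed ground state, d_ind = -d(eps)/dE,
# estimated by a finite difference (in units of d):
h = 1e-4
d_ind_over_d = -(dressed_energies(2.0 + h)[0] - dressed_energies(2.0 - h)[0]) / (2 * h)
```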
If we diagonalize the internal Hamiltonian matrix h_i = h_rot + h_S for molecule i=1,2 in the basis set |n_i m_n_i⟩, we get the corresponding eigenvectors (often called dressed states) |ñ_i m_n_i⟩ for a given electric field, which are a linear combination of the bare states |n_i m_n_i⟩. The quantum number m_n_i is conserved. The tilde corresponds to a certain admixture of different rotational quantum numbers due to the electric field, but when E → 0, the dressed states |ñ_i m_n_i⟩ tend to the bare states |n_i m_n_i⟩. The number of significantly admixed bare states increases with the magnitude of the electric field. The eigenenergies ε_α_i for molecule i=1,2 are shown in Fig. <ref>-a for the fermionic ^40K^87Rb molecule, for E=[0-50] kV/cm, where we used n=[0-5] to ensure convergence of the results. It is also useful to plot the induced dipole moment in the electric field direction in the space-fixed frame. The induced dipole moment is the mean value of the permanent dipole moment over the dressed state |ñ_i m_n_i⟩ at a given electric field E=E_0: d_ind(E_0) = ⟨ñ_i m_n_i| d⃗·e⃗_Z |ñ_i m_n_i⟩ |_E_0 = - dε_α_i/dE |_E_0 . The sign of the induced dipole moment now represents the sign of the mean value of the permanent dipole moment for a given state in the direction of the electric field in the space-fixed frame. A positive sign represents a mean value pointing along the field while a negative sign represents a mean value pointing against the field. The induced dipole moments for different rotational states are shown in Fig. <ref>-b for the fermionic ^40K^87Rb molecule. The ground rotational state |0̃, 0⟩ has a positive induced dipole moment growing in a monotonic way from 0 to d=0.57 D. For the first excited state |1̃, 0⟩ this is different. The induced dipole moment of |1̃, 0⟩ is first negative from E=0 to E = 19 kV/cm, with an increase in magnitude up to E = 7.25 kV/cm and a decrease afterwards. Then it becomes positive for E ≥ 19 kV/cm. Finally, the energy of the combined initial dressed states ε_α = ε_α_1 + ε_α_2 for two fermionic ^40K^87Rb molecules i=1,2 in an electric field is shown in Fig. <ref> as a function of the electric field. This gives an indication of the energy thresholds of the possible collisional states.
§.§ Collisions of molecules in an electric field We now use a fixed collision energy E_c = 500 nK, since this is a typical value reached in experiments with ultracold molecules. From the previous section, we also know the energy of the individual molecules as a function of an applied electric field. We use the dipole-dipole interaction in Eq. (<ref>). The interaction varies as -C_3/r^3 and depends on the applied electric field. For the van der Waals interaction we use a value of C_6=12636 a.u. <cit.> for KRb. The molecules are identical and start in the same internal state, so that they are indistinguishable. The partial waves used are l=1,3,5 for the fermionic molecules and l=0,2,4 for the bosonic ones. The initial quantum numbers for the individual molecules i=1,2 are m_n_i=0. Those numbers are still good quantum numbers even in an electric field. The total M_J = m_n_1 + m_n_2 + m_l = m_n_1' + m_n_2' + m_l' is conserved during the collision. As we start with m_n_1,m_n_2=0, then M_J=m_l. At such an ultralow energy E_c = 500 nK, the most important partial wave is the first and lowest one.
For fermions, the lowest partial wave quantum number is l=1 (p-wave), so that m_l=0,±1. Then we restrict the calculation to M_J=0,±1. For bosons, the lowest partial wave quantum number is l=0 (s-wave), so that m_l=0, and then we restrict to M_J=0. The corresponding diabatic and adiabatic energies for two fermionic ^40K^87Rb molecules in the ground rotational state |0̃,0⟩ at an electric field of E = 5 kV/cm are plotted in Fig. <ref> as a function of r (see definition in Section <ref>). We selected the component M_J=0 for this figure, so that m_l=0. At large distances, the energies tend to the energy of two separated molecules, recovering the results in Fig. <ref>. At short distances, one can see the onset of the centrifugal terms characterized by the partial wave numbers l=1,3,5 and the corresponding barriers. The diabatic curves are shown in black while the adiabatic ones are shown in red. The effect of the dipole-dipole coupling elements in Eq. (<ref>) can be seen in this figure, where the adiabatic energies differ from the diabatic ones. We apply the quantum formalism that we have presented in this paper. This is what is referred to as the close-coupling quantum calculation in the following. Starting with a boundary condition at r_min = 10 a_0, where a_0 is the Bohr radius, corresponding to a full loss condition at short-range, we propagate the log-derivative matrix 𝐙 up to r_max = 10000 a_0. At this distance, we obtain the reactance, scattering and transition matrices 𝐊, 𝐒, 𝐓, and finally the cross sections and rate coefficients. As we use a boundary condition with full loss at short-range, three collisional processes are possible: elastic, inelastic and loss processes. The loss processes mimic chemical reaction processes for reactive molecules. For non-reactive molecules, they would mimic the losses of two free molecules into a molecule-molecule complex, subsequently destroyed by a collision with a third molecule. In the following, we will call quenching processes the sum of inelastic and loss processes, that is, everything that leads to molecular losses, and compare them with elastic processes. We present two cases: collisions of molecules (i) in the ground rotational state and (ii) in the first rotational excited state. For the former case, we also introduce an insightful model, a quantum threshold model, that semi-quantitatively explains the collisional results.
§.§.§ Molecules in the ground rotational state: enhancement of the loss rates We present in Fig. <ref> the elastic (red) and quenching (blue) rate coefficients for fermions (Fig. <ref>-a) and bosons (Fig. <ref>-b) for two molecules in the ground rotational state |0̃,0⟩. The results were obtained using the aforementioned close-coupling quantum calculation. The energy threshold for the two molecules |0̃,0⟩+|0̃,0⟩ is shown in blue in Fig. <ref>. For fermions, experimental data of Ref. <cit.> are also included. Globally, for both cases, the quenching rate dominates over the elastic rate or they have the same order of magnitude. This is a bad outcome, for example, for evaporative cooling purposes, where elastic collisions have to be important while quenching collisions have to be negligible. Comparing fermions to bosons, a similar behaviour is seen, except that the bosonic rates are globally higher than the fermionic ones. This is expected from the parity of the l quantum numbers. For bosons, the l numbers are even and include the s-wave l=0 curve, for which there is no centrifugal barrier (barrierless case).
For fermions, the l numbers are odd and include the p-wave l=1 curve, for which there is a centrifugal barrier. In the former case, the particles approach each other easily without any barrier, so that the rate is high, while in the latter case, the particles approach less easily due to the presence of the p-wave centrifugal barrier. Both rates increase with increasing electric field. They display the same behaviour as their induced dipole moment. When the electric field increases, the induced dipole moment increases monotonically (see Fig. <ref>-b), and so do the magnitude of the dipole-dipole interaction and, in turn, the rate coefficient. This can be explained by the fact that for fermions or bosons at ultralow energies, the main contribution to the rates comes from an attractive dipole-dipole interaction from the m_l=0 component <cit.>. When the electric field increases, the dipole-dipole interaction becomes more and more attractive, favouring the meeting of the molecules at short-range and hence molecular losses. Fermionic and bosonic elastic rates behave as d_ind^4, as predicted in Ref. <cit.>. Quenching rate coefficients depend strongly on the electric field and the induced dipole moment. The fermionic quenching rates display a d_ind^6 behaviour, as found in Ref. <cit.>, while the bosonic quenching rates display a d_ind^2 one, as found in Ref. <cit.>.
§.§.§ A Quantum Threshold model The behaviour of the quenching rate coefficients can be found semi-quantitatively using a Quantum Threshold (QT) model <cit.>. The method consists in taking into account only the lowest channel curve of the initial colliding state, corresponding to the lowest partial wave quantum number. The energy of the two initial free particles is taken as reference. This curve is described by the interaction potential U^eff: U^eff(r) = U^cent(r) + U^int(r) = ħ^2 l(l+1)/(2 m_red r^2) - C_s/r^s, for an attractive interaction -C_s/r^s with s>2 and C_s > 0. The competition between the repulsive centrifugal potential and the attractive interaction creates a potential energy barrier for the incident colliding motion (or incident barrier), of height V_b at position r_b (see Fig. <ref>). The position and the height of the barrier are given by: r_b = [m_red s C_s/ħ^2 l(l+1)]^1/(s-2), V_b = ħ^2 l(l+1)/(2 m_red r_b^2) - C_s/r_b^s. In the case of barrierless collisions (l=0), one cannot define a position r_b and height V_b of a barrier. Instead, the characteristic length and energy of the -C_s/r^s interaction are taken in the model <cit.>: a_s = [2 m_red C_s/ħ^2]^1/(s-2), E_s = ħ^2/(2 m_red a_s^2). The model simply uses two probabilities of collision: one at long-range and one at short-range. At long range, the two molecules see the incident barrier and tunnel through it. This is described by a long-range (tunneling) probability P_LR. The molecules then enter the short-range region, where they can chemically react or form a complex and be lost from the trap, with a probability P_SR. We will assume full probability of loss at short range, so that P_SR=1. Then the probability of loss is P^loss = P_SR × P_LR = P_LR. To estimate the tunneling probability P_LR, we use: (i) a classical Langevin model <cit.>: when E_c ≥ V_b, P_LR(E_c=V_b)=1, that is, if the molecules have enough energy to overcome the barrier, the probability of passing above it is one; (ii) the form of the threshold laws for the loss probability (Eq. (<ref>)): when E_c → 0, the probability, which is proportional to the cross section multiplied by k^2 ∼ E_c (see Eq.
(<ref>)), should obey: P_LR(E_c) = γ E_c^l+1/2 , (iii) Eq. (<ref>) to determine the constant γ in Eq. (<ref>), so that P_LR(E_c=V_b) = 1 = γ V_b^l+1/2. Then we get γ = 1/V_b^l+1/2. Within the QT model, the total loss probability is: P^loss(E_c) = (E_c/V_b)^l+1/2. Inserting Eq. (<ref>) into Eq. (<ref>) leads to the quenching rate coefficient within the QT model for a given l,m_l: β^qu_l,m_l(E_c) = ħ^2 π/√(2 m_red^3) E_c^l/V_b^l+1/2 Δ , with Δ = 2 if the particles are identical and indistinguishable and Δ = 1 otherwise. This is a simple way to estimate the characteristics of loss collisions. Once we know the height of the barrier V_b, we know how the rate coefficient scales. At zero electric field, the dominant interaction is the attractive van der Waals interaction with s=6. For l=1, V_b = [8 ħ^6 / 54 m_red^3 C_6]^1/2. For l=0, E_6 = ħ^3 / [8 m_red^3 C_6]^1/2. In the electric field regime, V_b = (25 ħ^6 / 108 m_red^3) × (d_ind^2/4πε_0)^-2 for l=1 and E_4 = (15 ħ^6 / 16 m_red^3) × (d_ind^2/4πε_0)^-2 for l=0, where the characteristic interaction is s=4; see Ref. <cit.> for more details. Inserting these expressions into Eq. (<ref>), we see that the quenching rate then behaves as d_ind^6 and d_ind^2 for indistinguishable fermions (l=1) and bosons (l=0) respectively, and in general as d_ind^4(l+1/2). For l=0, the quenching rate coefficients are independent of the collision energy and hence of the temperature. For l=1, to get the rate coefficients as a function of the temperature T, one can replace E_c by ⟨E_c⟩ = 3 k_B T/2, the mean collision energy of a Maxwell-Boltzmann distribution. The QT rate coefficient is shown in Fig. <ref> as a dashed line for the fermionic case. It gives the proper scaling law and transition zone between the van der Waals regime (where we took l=1, m_l=0,±1) and the electric field regime (where we only took l=1, m_l=0). However, it overestimates both the quantum results and the experimental data by about a factor of 2. This can be traced back to the classical Langevin criterion, where we chose a unit probability when E_c = V_b. This is true in classical mechanics, but in quantum mechanics the colliding particles are described by a wave. Close to and at the top of a barrier, a wave has a transmission probability but also a reflection probability, the sum of both being one. It implies that the transmission probability of the wave function is not equal to unity, in contrast with what is assumed by the classical Langevin model. This explains why the QT model gives an upper value of the quenching rates for fermions. Comparing the QT quenching rates with the ones obtained using the quantum formalism for different molecular systems of dipolar alkali molecules <cit.> provides the corrections to make to the model. The correction is a factor p of order unity in front of Eq. (<ref>). For l=1, the corrections are p=0.53 for the van der Waals regime and p=0.54 for the electric field regime, while for l=0 they are p=1.92 and p=3.74, respectively. Some of those values can also be found using a Quantum-Defect Theory (QDT) formalism <cit.>. The QT model in Eq. (<ref>) then underestimates the rate for the barrierless case l=0, since the correction factor is p > 1, while it overestimates the rate for the barrier case l=1, since the correction factor is p < 1. With the corrections of about 0.5 for l=1 in this figure, one can see that the QT model then agrees with the numerical close-coupling quantum calculation and the experimental data.
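The QT estimate is straightforward to evaluate. Below is a minimal sketch of ours (reduced units with ħ = 1; masses, energies and C_s must be supplied consistently), including the order-unity correction factor p just discussed:

```python
import numpy as np

HBAR = 1.0  # reduced units assumed throughout this sketch

def barrier_height(m_red, l, C_s, s):
    """Height V_b of the incident barrier of hbar^2 l(l+1)/(2 m r^2) - C_s/r^s.
    Valid for l >= 1; for l = 0 the characteristic energy E_s must be used instead."""
    r_b = (m_red * s * C_s / (HBAR**2 * l * (l + 1)))**(1.0 / (s - 2))
    return HBAR**2 * l * (l + 1) / (2.0 * m_red * r_b**2) - C_s / r_b**s

def qt_quench_rate(E_c, m_red, l, C_s, s, identical=True, p=1.0):
    """beta^qu_{l,m_l} = p * Delta * (hbar^2 pi / sqrt(2 m_red^3)) * E_c^l / V_b^(l+1/2),
    with the order-unity correction factor p (e.g. p ~ 0.53-0.54 for l = 1)."""
    V_b = barrier_height(m_red, l, C_s, s)
    delta = 2.0 if identical else 1.0
    return (p * delta * HBAR**2 * np.pi / np.sqrt(2.0 * m_red**3)
            * E_c**l / V_b**(l + 0.5))

# Consistency check of the l = 1, s = 6 barrier height quoted above:
m, C6 = 1.0e3, 12636.0
assert np.isclose(barrier_height(m, 1, C6, 6),
                  np.sqrt(8.0 * HBAR**6 / (54.0 * m**3 * C6)))
```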
§.§.§ Molecules in the first rotational excited state: suppression of the loss rates What happens now if the molecules are prepared in the first excited rotational state |1̃,0⟩? The corresponding rate coefficients <cit.> are presented in Fig. <ref> using the close-coupling quantum calculation. Globally, we found the same overall trend as for two molecules in the ground rotational state. The rates again follow the behaviour of the induced dipole moment as a function of the electric field: when |d_ind| increases, from E=0 to E = 7.25 kV/cm and above E = 19 kV/cm, the rate increases, and conversely, when it decreases, from E = 7.25 kV/cm to E = 19 kV/cm, the rate decreases. We found again that the quenching rate behaves as d_ind^6 and d_ind^2 for fermions and bosons, and that the elastic rate behaves as d_ind^4. The main interesting feature of Fig. <ref> comes from the presence of sharply varying structures in the rates near E^* ∼ 12.5 kV/cm and E^* ∼ 11.5 kV/cm (two smoother ones appear near E^* ∼ 10.5 kV/cm and E^* ∼ 27 kV/cm but cannot be seen in the figure). This is in striking contrast with collisions of molecules in the ground rotational state. These features appear at the specific electric fields E^* where the energy threshold of other combined molecular states crosses the initial one |1̃,0⟩+|1̃,0⟩, shown in red in Fig. <ref>. Those states are, for example, |0̃,0⟩+|2̃,0⟩ and |0̃,0⟩+|2̃,±1⟩ for the two most prominent features, respectively. Slightly below E^*, the quenching rate first increases when the electric field is increased; then, above E^*, it suddenly drops. Eventually it gets back to a steady value far from E^*. In the region where the quenching rate is suppressed, the elastic rate remains quite high, so that elastic processes are larger than the loss processes, by a factor γ=20 for fermions and γ=7 for bosons. The principle of this mechanism was originally explored in Ref. <cit.> for molecules without losses at short-range (p_SR=0). We consider the initial colliding state of interest, here |1̃,0⟩+|1̃,0⟩, and we take the second prominent structure in the rates (insets of Fig. <ref>) as an example. When the electric field is increased starting from below E^*, the energy of the coupling state, |0̃,0⟩+|2̃,0⟩ in this example, approaches that of the initial state |1̃,0⟩+|1̃,0⟩ from above. The effective potential curve (U^eff(r) in Eq. (<ref>)) of the coupling state pushes that of the initial state downward, due to the dipole-dipole coupling between the two channels. This results in lowering the curve of the initial state, making it more attractive, hence favouring the molecules coming close to each other and reacting/being lost at short-range. The quenching rate is thus enhanced. When the electric field is further increased, now from above E^*, the energy of the coupling state lies below that of the initial state. Its effective potential curve pushes that of the incident state upwards. This now results in raising the curve of the initial state, making it more repulsive, hence preventing the molecules from coming close to each other. The quenching rate is suppressed. Even though this mechanism has to be confirmed by experimental results, this is a promising way of suppressing molecular losses of any origin (inelastic collisions, chemical reactions, complex-forming losses). This is also promising for performing evaporative cooling of a dipolar gas, since elastic processes are more efficient than quenching ones.
In order to perform efficient evaporative cooling, a ratio of γ ≃ 100 has to be reached <cit.>, with perhaps a safer estimation using γ ≃ 1000. As described above, this is not the case for the KRb system, where γ ≃ 10, so that evaporative cooling might not be an efficient method to further cool down the gas. However, the suppression of the quenching processes becomes more effective as the permanent electric dipole moment of the molecules increases <cit.>. For those molecules, the ratio γ can reach values of 1000 or more, so that the conditions for efficient evaporative cooling are fulfilled to further cool down dipolar gases and hopefully reach quantum degeneracy.
§ CONCLUSION AND PERSPECTIVES In this paper, we presented a time-independent quantum formalism to describe ultracold collisions of particles with internal structure, also accounting for the presence of an external field. It was shown, taking the dipolar KRb molecule as an example, how collisional properties can be tuned with an electric field, from enhancing the quenching rates to suppressing them. Of course, many other configurations could be engineered to control the molecular dynamics and could be implemented within the present formalism. For example, collisions of ultracold molecules in a confined geometry can be treated by adding to the quantum formalism an external harmonic oscillator trap that mimics the presence in an experiment of a one-dimensional optical lattice <cit.>. For sufficiently high induced dipole moments and strong confinements, fermionic and bosonic collisional losses in two dimensions can be suppressed due to the side-by-side repulsive dipole-dipole interaction. In the particular case where the confinement of the lattice is not strong enough, only fermionic collisional losses can be suppressed, due to appropriate selection rules related to the fermionic character of the system <cit.>. The long-range ultracold dipolar physics is also quite general, since experiments with ultracold magnetic dipolar molecules <cit.> lead to the same conclusions as for electric dipolar ones. In addition, any arbitrary electric or magnetic field with an arbitrary direction could also be added into the quantum formalism <cit.>, which can be interesting to control ultracold molecules that possess both electric and magnetic dipole moments. Another interesting tool of control is to employ electromagnetic waves, and especially microwaves, to control the rotational degree of freedom of the molecules <cit.>. Finally, in addition to two-body collisions, three-body collisions <cit.> and more <cit.> can start to play a role at high densities of the ultracold molecular cloud. The few- and many-body characters of the dipolar interactions can also start to reveal the increasing anisotropic complexity of the systems <cit.>. Treating all those additional possibilities goes beyond the scope of this paper. We introduced here only a small and simple part of the ultracold collision formalism. In the future, one could increase at will the versatility and the flexibility of the formalism to cover all possible configurations accessible in an experiment, certainly enabling the exploration of all new kinds of ultracold, ultra-controlled dynamics of molecules!
§ APPENDIX §.§ Proof 1 Let us start with Eq. (<ref>) (first equation) and its transpose (second equation), using the fact that 𝐔 is real and symmetric. Let us multiply by 𝐅^t on the left for the first equation and by 𝐅 on the right for the second equation: 𝐅^t × {𝐃^2 𝐅 + 𝐔 𝐅} = 0 , {𝐃^2 𝐅^t + 𝐅^t 𝐔} = 0 × 𝐅 .
By subtracting the two equations, one gets: 𝐅^t (𝐃^2 𝐅) - (𝐃^2 𝐅^t) 𝐅 = 0 which implies: 𝐃 [ 𝐅^t (𝐃 𝐅) - (𝐃 𝐅^t) 𝐅 ] = 0 where 𝐃 ≡ 𝐈 d/dr. This means that the matrix 𝐅^t (𝐃 𝐅) - (𝐃 𝐅^t) 𝐅 is independent of r. Moreover, at r = r_min we took 𝐅 = 0, as mentioned by Eq. (<ref>), so that: 𝐅^t (𝐃 𝐅) - (𝐃 𝐅^t) 𝐅 = 0 ∀ r. By inserting Eq. (<ref>) and its transpose into this expression, one gets: (𝐅^(1) - 𝐊^t 𝐅^(2)) (𝐅^'(1) - 𝐅^'(2) 𝐊) - (𝐅^'(1) - 𝐊^t 𝐅^'(2)) (𝐅^(1) - 𝐅^(2) 𝐊) = 0 after factorizing out the matrices (𝐍^K)^t and 𝐍^K, and by developing: 𝐅^(1) 𝐅^'(1) - 𝐅^(1) 𝐅^'(2) 𝐊 - 𝐊^t 𝐅^(2) 𝐅^'(1) + 𝐊^t 𝐅^(2) 𝐅^'(2) 𝐊 - 𝐅^'(1) 𝐅^(1) + 𝐅^'(1) 𝐅^(2) 𝐊 + 𝐊^t 𝐅^'(2) 𝐅^(1) - 𝐊^t 𝐅^'(2) 𝐅^(2) 𝐊 = 0. Using the fact that diagonal matrices commute, we finally get: 𝐊^t (𝐅^'(2) 𝐅^(1) - 𝐅^(2) 𝐅^'(1)) = (𝐅^(1) 𝐅^'(2) - 𝐅^'(1) 𝐅^(2)) 𝐊 or, in terms of the Wronskian matrix: 𝐊^t 𝐖 = 𝐖 𝐊 . Since 𝐖 = 𝐈 due to the k_α'^-1/2 factors, this implies that 𝐊^t = 𝐊, so that 𝐊 is symmetric. §.§ Proof 2 𝐙 = 𝐅' 𝐅^-1 = {𝐅^'(1) - 𝐅^'(2) 𝐊} 𝐍^K [ {𝐅^(1) - 𝐅^(2) 𝐊} 𝐍^K ]^-1 = {𝐅^'(1) - 𝐅^'(2) 𝐊} 𝐍^K [𝐍^K]^-1 {𝐅^(1) - 𝐅^(2) 𝐊}^-1 = {𝐅^'(1) - 𝐅^'(2) 𝐊} {𝐅^(1) - 𝐅^(2) 𝐊}^-1 . Then: 𝐙 {𝐅^(1) - 𝐅^(2) 𝐊} = {𝐅^'(1) - 𝐅^'(2) 𝐊} , 𝐙 𝐅^(1) - 𝐅^'(1) = {𝐙 𝐅^(2) - 𝐅^'(2)} 𝐊 , 𝐊 = {𝐙 𝐅^(2) - 𝐅^'(2)}^-1 {𝐙 𝐅^(1) - 𝐅^'(1)} . §.§ Proof 3 First, as r →∞: 𝐅^± = - 𝐅^(2) ± i𝐅^(1). This implies 𝐅^(1) = (𝐅^+ - 𝐅^-)/2i and 𝐅^(2) = -(𝐅^+ + 𝐅^-)/2. Then: 𝐅 = 𝐅^(1) 𝐀 + 𝐅^(2) 𝐁 = {(𝐅^+ - 𝐅^-)/2i} 𝐀 - {(𝐅^+ + 𝐅^-)/2} 𝐁 = (𝐅^+ 𝐀)/2i - (𝐅^- 𝐀)/2i - (𝐅^+ 𝐁)/2 - (𝐅^- 𝐁)/2 = 𝐅^- [- (𝐁 - i𝐀)/2] + 𝐅^+ [- (𝐁 + i𝐀)/2] ≡ 𝐅^- 𝐀' + 𝐅^+ 𝐁'. From that we identify: 𝐀' = i/2 (𝐀 + i𝐁) , 𝐁' = - i/2 (𝐀 - i𝐁) , 𝐀 = -i (𝐀' - 𝐁') , 𝐁 = - (𝐀' + 𝐁'). We know from Eq. (<ref>) and Eq. (<ref>) that: 𝐒 ≡ - 𝐁' 𝐀'^-1 = [𝐀 - i𝐁] [𝐀 + i𝐁]^-1 = [(𝐈 - i𝐁 𝐀^-1) 𝐀] [(𝐈 + i𝐁 𝐀^-1) 𝐀]^-1 = (𝐈 - i𝐁 𝐀^-1) 𝐀 𝐀^-1 (𝐈 + i𝐁 𝐀^-1)^-1 = (𝐈 + i𝐊) (𝐈 - i𝐊)^-1 where we used 𝐊 ≡ - 𝐁 𝐀^-1 from Eq. (<ref>) and Eq. (<ref>). The matrix 𝐌 = 𝐈 + i𝐊 is what is called a normal matrix, since 𝐌 and 𝐌^† commute (𝐌 𝐌^† = 𝐌^† 𝐌, easy to show using the fact that 𝐊 is real and symmetric). Then 𝐌 and 𝐌^† can be expressed as 𝐏 𝐃_M 𝐏^-1 and 𝐏 𝐃_M^† 𝐏^-1 with the same invertible matrix 𝐏. 𝐃_M,M^† are diagonal matrices with different complex eigenvalues, since 𝐌 is not a Hermitian matrix. Then: 𝐌 [𝐌^†]^-1 = 𝐏 𝐃_M 𝐏^-1 [𝐏 𝐃_M^† 𝐏^-1]^-1 = 𝐏 𝐃_M 𝐏^-1 𝐏 𝐃_M^†^-1 𝐏^-1 = 𝐏 𝐃_M 𝐃_M^†^-1 𝐏^-1 = 𝐏 𝐃_M^†^-1 𝐃_M 𝐏^-1 = 𝐏 𝐃_M^†^-1 𝐏^-1 𝐏 𝐃_M 𝐏^-1 = [𝐏 𝐃_M^† 𝐏^-1]^-1 𝐏 𝐃_M 𝐏^-1 = [𝐌^†]^-1 𝐌 so that one also has 𝐒 = {𝐈 + i𝐊} {𝐈 - i𝐊}^-1 = {𝐈 - i𝐊}^-1 {𝐈 + i𝐊}. Because both matrices commute, we can safely write the expression as: 𝐒 = 𝐈 + i𝐊/𝐈 - i𝐊 . We also know that: 𝐍^K ≡ 𝐀 = -i (𝐀' - 𝐁') = -i (𝐈 - 𝐁' 𝐀'^-1) 𝐀' = -i (𝐈 + 𝐒) 𝐍^S, and inversely: 𝐍^S ≡ i/2 (𝐀 + i𝐁) = i/2 (𝐈 + i𝐁 𝐀^-1) 𝐀 = i/2 (𝐈 - i𝐊) 𝐍^K. If we use the forms in Eq. (<ref>) or Eq. (<ref>), by developing we can find: 𝐅^K = {𝐅^(1) - 𝐅^(2) 𝐊} = {𝐅^- - 𝐅^+ {𝐈 + i𝐊} {𝐈 - i𝐊}^-1} {- {𝐈 - i𝐊}}/2i = {𝐅^- - 𝐅^+ 𝐒} {- {𝐈 - i𝐊}}/2i = 𝐅^S {- {𝐈 - i𝐊}}/2i = 𝐅^S 𝐍^S (𝐍^K)^-1, so that we check that 𝐅^K 𝐍^K = 𝐅^S 𝐍^S = 𝐅. §.§ Proof 4 𝐒^t = [𝐈 + i𝐊/𝐈 - i𝐊]^t = 𝐈 + i𝐊^t/𝐈 - i𝐊^t = 𝐒 so that 𝐒 is symmetric. 𝐒^† = [𝐈 + i𝐊/𝐈 - i𝐊]^† = 𝐈 - i𝐊^†/𝐈 + i𝐊^† = 𝐒^-1 so that 𝐒 is unitary. Chu_RMP_70_685_1998 S. Chu, Nobel Lecture: The manipulation of neutral particles, Rev. Mod. Phys. 70, 685 (1998). Cohen-Tannoudji_RMP_70_707_1998 C. N. Cohen-Tannoudji, Nobel Lecture: Manipulating atoms with photons, Rev. Mod. Phys. 70, 707 (1998). Phillips_RMP_70_721_1998 W. D. Phillips, Nobel Lecture: Laser cooling and trapping of neutral atoms, Rev. Mod. Phys. 70, 721 (1998). Cornell_RMP_74_875_2002 E. A. Cornell and C. E.
Wieman, Nobel Lecture: Bose-Einstein condensation in a dilute gas, the first 70 years and some recent experiments, Rev. Mod. Phys. 74, 875 (2002).Ketterle_RMP_74_1131_2002 W. Ketterle, Nobel lecture: When atoms behave as waves: Bose–Einstein condensation and the atom laser, Rev. Mod. Phys. 74, 1131 (2002).Lewenstein_AP_56_243_2007 M. Lewenstein, A. Sanpera, V. Ahufinger, B. Damski, A. Sen(De), and U. Sen, Ultracold atomic gases in optical lattices: mimicking condensed matter physics and beyond, Adv. Phys. 56, 243 (2007).Bloch_RMP_80_885_2008 I. Bloch, J. Dalibard, and W. Zwerger, Many-body physics with ultracold gases, Rev. Mod. Phys. 80, 885 (2008).Baranov_PRep_464_71_2008 M. Baranov, Theoretical progress in many-body physics with ultracold dipolar gases, Phys. Rep. 464, 71 (2008).Shuman_N_467_820_2010 E. S. Shuman, J. F. Barry, and D. DeMille, Laser cooling of a diatomic molecule, Nature 467, 820 (2010).Schnell_ACIE_48_6010_2009 M. Schnell and G. Meijer, Cold Molecules: Preparation, applications, and challenges, Angew. Chem. Int. Ed. 48, 6010 (2009).Dulieu_RPP_72_086401_2009 O. Dulieu and C. Gabbanini, The formation and interactions of cold and ultracold molecules: new challenges for interdisciplinary physics, Rep. Prog. Phys. 72, 086401 (2009).Hutzler_CR_112_4803_2012 N. R. Hutzler, H.-I. Lu, and J. M. Doyle, The Buffer Gas Beam: An intense, cold, and slow source for atoms and molecules, Chem. Rev. 112, 4803 (2012).VanDeMeerakker_CR_112_4828_2012 S. Y. T. van de Meerakker, H. L. Bethlem, N. Vanhaecke, and G. Meijer, Manipulation and control of molecular beams, Chem. Rev. 112, 4828 (2012).Narevicius_CR_112_4879_2012 E. Narevicius and M. G. Raizen, Toward cold chemistry with magnetically decelerated supersonic beams, Chem. Rev. 112, 4879 (2012).Zeppenfeld_N_491_570_2012 M. Zeppenfeld, B. G. U. Englert, R. Glöckner, A. Prehn, M. Mielenz, C. Sommer, L. D. van Buuren, M. Motsch, and G. Rempe, Sisyphus cooling of electrically trapped polyatomic molecules, Nature 491, 570 (2012).Thorsheim_PRL_58_2420_1987 H. R. Thorsheim, J. Weiner, and P. S. Julienne, Laser-induced photoassociation of ultracold sodium atoms, Phys. Rev. Lett. 58, 2420 (1987).Fioretti_PRL_80_4402_1998 A. Fioretti, D. Comparat, A. Crubellier, O. Dulieu, F. Masnou-Seeuws, and P. Pillet, Formation of cold Cs_2 molecules through photoassociation, Phys. Rev. Lett. 80, 4402 (1998).Weiner_RMP_71_1_1999 J. Weiner, V. S. Bagnato, S. Zilio, and P. S. Julienne, Experiments and theory in cold and ultracold collisions, Rev. Mod. Phys. 71, 1 (1999).Jones_RMP_78_483_2006 K. M. Jones, E. Tiesinga, P. D. Lett, and P. S. Julienne, Ultracold photoassociation spectroscopy: long-range molecules and atomic scattering, Rev. Mod. Phys. 78, 483 (2006).Ulmanis_CR_112_4890_2012 J. Ulmanis, J. Deiglmayr, M. Repp, R. Wester, and M. Weidemüller, Ultracold molecules formed by photoassociation: heteronuclear dimers, inelastic collisions, and interactions with ultrashort laser pulses, Chem. Rev. 112, 4890 (2012).Kohler_RMP_78_1311_2006 T. Köhler, K. Góral, and P. S. Julienne, Production of cold molecules via magnetically tunable Feshbach resonances, Rev. Mod. Phys. 78, 1311 (2006).Chin_RMP_82_1225_2010 C. Chin, R. Grimm, P. Julienne, and E. Tiesinga, Feshbach resonances in ultracold gases, Rev. Mod. Phys. 82, 1225 (2010).Bergmann_RMP_70_1003_1998 K. Bergmann, H. Theuer, and B. W. Shore, Coherent population transfer among quantum states of atoms and molecules, Rev. Mod. Phys. 70, 1003 (1998).Ni_S_322_231_2008 K.-K. Ni, S. Ospelkaus, M. H. G. de Miranda, A. 
Pe'er, B. Neyenhuis, J. J. Zirbel, S. Kotochigova, P. S. Julienne, D. S. Jin, and J. Ye, A high phase-space-density gas of polar molecules, Science 322, 231 (2008).Danzl_S_321_1062_2008 J. G. Danzl, E. Haller, M. Gustavsson, M. J. Mark, R. Hart, N. Bouloufa, O. Dulieu, H. Ritsch, and H.-C. Nägerl, Quantum gas of deeply bound ground state molecules, Science 321, 1062 (2008).Koch_CR_112_4928_2012 C. P. Koch and M. Shapiro, Coherent control of ultracold photoassociation, Chem. Rev. 112, 4928 (2012).Bergmann_JCP_142_170901_2015 K. Bergmann, N. V. Vitanov, and B. W. Shore, Perspective: stimulated Raman adiabatic passage: the status after 25 years, J. Chem. Phys. 142, 170901 (2015).Krems_IRPC_24_99_2005 R. V. Krems, Molecules near absolute zero and external field control of atomic and molecular dynamics, Int. Rev. Phys. Chem. 24, 99 (2005).Krems_PCCP_10_4079_2008 R. V. Krems, Cold controlled chemistry, Phys. Chem. Chem. Phys. 10, 4079 (2008).Quemener_CR_112_4949_2012 G. Quéméner and P. S. Julienne, Ultracold molecules under control!, Chem. Rev. 112, 4949 (2012).Lemeshko_MP_111_1648_2013 M. Lemeshko, R. V. Krems, J. M. Doyle, and S. Kais, Manipulation of molecules with electromagnetic fields, Mol. Phys. 111, 1648 (2013).Carr_NJP_11_055049_2009 L. D. Carr, D. DeMille, R. V. Krems, and J. Ye, Cold and ultracold molecules: science, technology and applications, New J. Phys. 11, 055049 (2009).Micheli_PRA_76_043604_2007 A. Micheli, G. Pupillo, H. P. Büchler, and P. Zoller, Cold polar molecules in two-dimensional traps: tailoring interactions with external fields for novel quantum phases, Phys. Rev. A 76, 043604 (2007).Gorshkov_PRL_107_115301_2011 A. V. Gorshkov, S. R. Manmana, G. Chen, J. Ye, E. Demler, M. D. Lukin, and A. M. Rey, Tunable superfluidity and quantum magnetism with ultracold polar molecules, Phys. Rev. Lett. 107, 115301 (2011).Baranov_CR_112_5012_2012 M. A. Baranov, M. Dalmonte, G. Pupillo, and P. Zoller, Condensed matter theory of dipolar quantum gases, Chem. Rev. 112, 5012 (2012).Wall_BookChapter_2014 M. L. Wall, K. R. A. Hazzard, and A.-M. Rey, Quantum magnetism with ultracold molecules, Chapter 1 in From atomic to mesoscale: the role of quantum coherence in systems of various complexities. Edited by S. A. Malinovskaya, I. Novikova, World Scientific Publishing Co 1406, 4758 (2014).DeMille_PRL_88_067901_2002 D. DeMille, Quantum computation with trapped polar molecules, Phys. Rev. Lett. 88, 067901 (2002).Yelin_PRA_74_050301_2006 S. F. Yelin, K. Kirby, and R. Côté, Schemes for robust quantum computation with polar molecules, Phys. Rev. A 74, 050301 (2006).Karra_JCP_144_094301_2016 M. Karra, K. Sharma, B. Friedrich, S. Kais, and D. Herschbach, Prospects for quantum computing with an array of ultracold polar paramagnetic molecules, J. Chem. Phys. 144, 094301 (2016).Hinds_PS_1997_34_1997 E. A. Hinds, Testing time reversal symmetry using molecules, Phys. Scr. 1997, 34 (1997).Tarbutt_BookChapter_2009 M. R. Tarbutt, J. J. Hudson, B. E. Sauer, and E. A. Hinds, Preparation and manipulation of molecules for fundamental physics tests, Chapter 15 in Cold molecules: theory, experiments, applications. Edited by R. Krems, B. Friedrich, B. and W. C. Stwalley, CRC Press, 69 (2009).Gonzalez-Martinez_PRA_90_052716_2014 M. L. González-Martínez, O. Dulieu, P. Larrégaray, and L. Bonnet, Statistical product distributions for ultracold reactions in external fields, Phys. Rev. A 90, 052716 (2014).Tscherbul_PRL_115_023201_2015 T. V. Tscherbul and R. V. 
Krems, Tuning bimolecular chemical reactions by electric fields, Phys. Rev. Lett. 115, 023201 (2015).Weck_IRPC_25_283_2006 P. F. Weck and N. Balakrishnan, Importance of long-range interactions in chemical reactions at cold and ultracold temperatures, Int. Rev. Phys. Chem. 25, 283 (2006).Hutson_IRPC_26_1_2007 J. M. Hutson and P. Soldán, Molecular collisions in ultracold atomic gases, Int. Rev. Phys. Chem. 26, 1 (2007).Quemener_BookChapter_2009 G. Quéméner, N. Balakrishnan, and A. Dalgarno, Inelastic collisions and chemical reactions of molecules at ultracold temperatures, Chapter 3 in Cold molecules: theory, experiments, applications. Edited by R. Krems, B. Friedrich, B. and W. C. Stwalley, CRC Press, 3 (2009).Brandsen_Joachain_Book_2003 B. Brandsen and C. Joachain, Physics of atoms and molecules, Addison-Wesley, 2003.Cohen-Tannoudji_Book_1997 C. Cohen-Tannoudji, B. Diu, and F. Laloë, Mécanique quantique, Hermann, 1997.Friedrich_Book_2005 H. Friedrich, Theoretical atomic physics, third edition, Springer, 2005.Landau_Book_1958 L. D. Landau and L. M. Lifshitz, Quantum mechanics (non-relativistic theory), Butterworth Heinemann, 1958.Child_Book_1996 M. S. Child, Molecular collision theory, Dover Publications, 1996.Atkins_Friedman_Book_2005 P. W. Atkins and R. S. Friedman, Molecular quantum mechanics, Oxford University Press, 2005.Launay_Book_2000 J.-M. Launay, Collisions moléculaires, cours du DEA Physique, option “physique atomique et moléculaire", Université de Rennes 1.Quemener_PRA_83_012705_2011 G. Quéméner and J. L. Bohn, Dynamics of ultracold molecules in confined geometry and electric field, Phys. Rev. A 83, 012705 (2011).Grishkevich_PRA_84_062710_2011 S. Grishkevich, S. Sala and A. Saenz, Theoretical description of two ultracold atoms in finite three-dimensional optical lattices using realistic interatomic interaction potentials, Phys. Rev. A 84, 062710 (2011).Whitten_JMP_9_1103_1968 R. C. Whitten and F. T. Smith, Symmetric representation for three-body problems. II. Motion in space, J. Math. Phys. 9, 1103 (1968).Johnson_JCP_79_1916_1983 B. R. Johnson, The quantum dynamics of three particles in hyperspherical coordinates, J. Chem. Phys. 79, 1916 (1983).Pack_JCP_87_3888_1987 R. T. Pack and G. A. Parker, Quantum reactive scattering in three dimensions using hyperspherical (APH) coordinates. Theory, J. Chem. Phys. 87, 3888 (1987).Launay_CPL_163_178_1989 J. M. Launay and M. Le Dourneuf, Hyperspherical close-coupling calculation of integral cross sections for the reaction H+H_2 → H_2+H, Chem. Phys. Lett. 163, 178 (1989).Rittenhouse_JPBAMOP_44_172001_2011 S. T. Rittenhouse, J. von Stecher, J. P. D’Incao, N. P. Mehta, and C. H. Greene, The hyperspherical four-fermion problem, J. Phys. B: At. Mol. Opt. Phys. 44, 172001 (2011).Curtiss_JCP_21_2045_1953 C. F. Curtiss, The quantum mechanics of collisions between diatomic molecules, J. Chem. Phys. 21, 2045 (1953).Takayanagi_PTP_11_557_1954 K. Takayanagi, The theory of collisions between two diatomic molecules, Prog. Theor. Phys. 11, 557 (1954).Arthurs_PRS_256_540_1960 A. M. Arthurs and A. Dalgarno, The theory of scattering by a rigid rotator, Proc. Roy. Soc. 256, 540 (1960).Pack_JCP_60_633_1974 R. T. Pack, Space-fixed vs body-fixed axes in atom-diatomic molecule scattering. Sudden approximations, J. Chem. Phys. 60, 633 (1974).Green_JCP_62_2271_1975 S. Green, Rotational excitation in H_2-H_2 collisions: close-coupling calculations, J. Chem. Phys. 62, 2271 (1975).Alexander_JCP_66_2166_1977 M. H. Alexander and A. E. 
DePristo, Symmetry considerations in the quantum treatment of collisions between two diatomic molecules, J. Chem. Phys. 66, 2166 (1977).Launay_JPBAMOP_9_1823_1976 J. M. Launay, Body-fixed formulation of rotational excitation: exact and centrifugal decoupling results for CO-He, J. Phys. B: At. Mol. Opt. Phys. 9, 1823 (1976).Heil_JCP_68_2562_1978 T. G. Heil, S. Green, and D. J. Kouri, The coupled states approximation for scattering of two diatoms, J. Chem. Phys. 68, 2562 (1978).Takayanagi_AAMP_1_149_1965 K. Takayanagi, The production of rotational and vibrational transitions in encounters between molecules, Adv. At. Mol. Phys. 1, 149 (1965).Zarur_JCP_60_2057_1974 G. Zarur and H. Rabitz, Effective potential formulation of molecule-molecule collisions with application to H_2-H_2, J. Chem. Phys. 60, 2057 (1974).Quemener_PRA_88_012706_2013 G. Quéméner and J. L. Bohn, Ultracold molecular collisions in combined electric and magnetic fields, Phys. Rev. A 88, 012706 (2013).Tscherbul_JCP_133_184104_2010 T. V. Tscherbul and A. Dalgarno, Quantum theory of molecular collisions in a magnetic field: Efficient calculations based on the total angular momentum representation, J. Chem. Phys. 133, 184104 (2010).Tscherbul_PRA_85_052710_2012 T. V. Tscherbul, Total-angular-momentum representation for atom-molecule collisions in electric fields, Phys. Rev. A 85, 052710 (2012).Johnson_JCP_13_445_1973 B. R. Johnson, The multichannel log-derivative method for scattering calculations, J. Comp. Phys. 13, 445 (1973).Johnson_JCP_69_4678_1978 B. R. Johnson, The renormalized Numerov method applied to calculating bound states of the coupled-channel Schrödinger equation, J. Chem. Phys. 69, 4678 (1978).Manolopoulos_JCP_85_6425_1986 D. E. Manolopoulos, An improved log derivative method for inelastic scattering, J. Chem. Phys. 85, 6425 (1986).Stone_Book_1996 A. J. Stone, The theory of intermolecular forces, Oxford University Press, 1996.Hutson_CPC_84_1_1994 J. M. Hutson, Coupled channel methods for solving the bound-state Schrödinger equation, Comput. Phys. Commun. 84, 1 (1994).Abramowitz_Stegun_Book_1964 M. Abramowitz and I. Stegun, Handbook of mathematical functions with formulas, graphs, and mathematical tables, United States Department of Commerce, National Bureau of Standards, 1964.Burke_PhDThesis_1999 J. P. Burke, Jr., Theoretical investigation of cold alkali atom collisions, PhD thesis, University of Colorado, Boulder (USA), 1999.Tscherbul_NJP_11_055021_2009 T. V. Tscherbul, Y. V. Suleimanov, V. Aquilanti, and R. V. Krems, Magnetic field modification of ultracold molecule molecule collisions, New J. Phys. 11, 055021 (2009).Pethick_Smith_Book_2001 C. J. Pethick and H. Smith, Bose–Einstein condensation in dilute gases, Cambridge University Press, 2001.Pitaevskii_Stringari_Book_2003 L. P. Pitaevskii and S. Stringari, Bose–Einstein condensation, Oxford: Clarendon Press, 2003.Sadeghpour_JPBAMOP_33_93_2000 H. R. Sadeghpour, J. L. Bohn, M. J. Cavagnero, B. D. Esry, I. I. Fabrikant, J. H. Macek, and A. R. P. Rau, Collisions near threshold in atomic and molecular physics, J. Phys. B: At. Mol. Opt. Phys. 33, 93 (2000).Hutson_BookChapter_2009 J. M. Hutson, Theory of cold atomic and moleculer collisions, Chapter 1 in Cold molecules: theory, experiments, applications. Edited by R. Krems, B. Friedrich, B. and W. C. Stwalley, CRC Press, 3 (2009).Wigner_PR_73_1002_1948 E. P. Wigner, On the behavior of cross sections near thresholds, Phys. Rev. 73, 1002 (1948).Ospelkaus_PRL_104_030402_2010 S. Ospelkaus, K.-K. Ni, G. Quéméner, B. 
Neyenhuis, D. Wang, M. H. G. de Miranda, J. L. Bohn, J. Ye, and D. S. Jin, Controlling the hyperfine state of rovibronic ground-state polar molecules, Phys. Rev. Lett. 104, 030402 (2010). Zuchowski_PRA_81_060703_2010 P. S. Żuchowski and J. M. Hutson, Reactions of ultracold alkali-metal dimers, Phys. Rev. A 81, 060703 (2010). Byrd_PRA_82_010502_2010 J. N. Byrd, J. A. Montgomery, and R. Côté, Structure and thermochemistry of K_2Rb, KRb_2, and K_2Rb_2, Phys. Rev. A 82, 010502 (2010). Meyer_PRA_82_042707_2010 E. R. Meyer and J. L. Bohn, Product-state control of bi-alkali-metal chemical reactions, Phys. Rev. A 82, 042707 (2010). Byrd_PRA_86_032711_2012 J. N. Byrd, J. A. Montgomery, and R. Côté, Long-range forces between polar alkali-metal diatoms aligned by external electric fields, Phys. Rev. A 86, 032711 (2012). Wang_NJP_17_035015_2015 G. Wang and G. Quéméner, Tuning ultracold collisions of excited rotational dipolar molecules, New J. Phys. 17, 035015 (2015). Kotochigova_NJP_12_073041_2010 S. Kotochigova, Dispersion interactions and reactive collisions of ultracold polar molecules, New J. Phys. 12, 073041 (2010). Lepers_PRA_88_032709_2013 M. Lepers, R. Vexiau, M. Aymar, N. Bouloufa-Maafa, and O. Dulieu, Long-range interactions between polar alkali-metal diatoms in external electric fields, Phys. Rev. A 88, 032709 (2013). Zuchowski_PRA_87_022706_2013 P. S. Zuchowski, M. Kosicki, M. Kodrycka, and P. Soldán, Van der Waals coefficients for systems with ultracold polar alkali-metal molecules, Phys. Rev. A 87, 022706 (2013). Idziaszek_PRA_82_020703_2010 Z. Idziaszek, G. Quéméner, J. L. Bohn, and P. S. Julienne, Simple quantum model of ultracold polar molecule collisions, Phys. Rev. A 82, 020703 (2010). Idziaszek_PRL_104_113202_2010 Z. Idziaszek and P. S. Julienne, Universal rate constants for reactive collisions of ultracold molecules, Phys. Rev. Lett. 104, 113202 (2010). Bishof_PRA_84_052716_2011 M. Bishof, M. J. Martin, M. D. Swallows, C. Benko, Y. Lin, G. Quéméner, A. M. Rey, and J. Ye, Inelastic collisions and density-dependent excitation suppression in a ^87Sr optical lattice clock, Phys. Rev. A 84, 052716 (2011). Ludlow_PRA_84_052724_2011 A. D. Ludlow, N. D. Lemke, J. A. Sherman, C. W. Oates, G. Quéméner, J. von Stecher, and A. M. Rey, Cold-collision-shift cancellation and inelastic scattering in a Yb optical lattice clock, Phys. Rev. A 84, 052724 (2011). Jachymski_PRL_110_213202_2013 K. Jachymski, M. Krych, P. S. Julienne, and Z. Idziaszek, Quantum theory of reactive collisions for 1/r^n potentials, Phys. Rev. Lett. 110, 213202 (2013). Mayle_PRA_85_062712_2012 M. Mayle, B. P. Ruzic, and J. L. Bohn, Statistical aspects of ultracold resonant scattering, Phys. Rev. A 85, 062712 (2012). Mayle_PRA_87_012709_2013 M. Mayle, G. Quéméner, B. P. Ruzic, and J. L. Bohn, Scattering of ultracold molecules in the highly resonant regime, Phys. Rev. A 87, 012709 (2013). Takekoshi_PRL_113_205301_2014 T. Takekoshi, L. Reichsöllner, A. Schindewolf, J. M. Hutson, C. R. Le Sueur, O. Dulieu, F. Ferlaino, R. Grimm, and H.-C. Nägerl, Ultracold dense samples of dipolar RbCs molecules in the rovibrational and hyperfine ground state, Phys. Rev. Lett. 113, 205301 (2014). Park_PRL_114_205302_2015 J. W. Park, S. A. Will, and M. W. Zwierlein, Ultracold dipolar gas of fermionic ^23Na^40K molecules in their absolute ground state, Phys. Rev. Lett. 114, 205302 (2015). Guo_PRL_116_205303_2016 M. Guo, B. Zhu, B. Lu, X. Ye, F. Wang, R. Vexiau, N. Bouloufa-Maafa, G. Quéméner, O. Dulieu, and D.
Wang, Creation of an ultracold gas of ground-state dipolar ^23Na^87Rb molecules, Phys. Rev. Lett. 116, 205303 (2016). Aikawa_PRL_105_203001_2010 K. Aikawa, D. Akamatsu, M. Hayashi, K. Oasa, J. Kobayashi, P. Naidon, T. Kishimoto, M. Ueda, and S. Inouye, Coherent transfer of photoassociated molecules into the rovibrational ground state, Phys. Rev. Lett. 105, 203001 (2010). Bohn_BookChapter_2009 J. L. Bohn, Electric dipoles at ultralow temperatures, Chapter 2 in Cold molecules: theory, experiments, applications. Edited by R. Krems, B. Friedrich, and W. C. Stwalley, CRC Press, 3 (2009). Aymar_JCP_122_204302_2005 M. Aymar and O. Dulieu, Calculation of accurate permanent dipole moments of the lowest ^1,3Σ^+ states of heteronuclear alkali dimers using extended basis sets, J. Chem. Phys. 122, 204302 (2005). Ni_N_464_1324_2010 K.-K. Ni, S. Ospelkaus, D. Wang, G. Quéméner, B. Neyenhuis, M. H. G. de Miranda, J. L. Bohn, D. S. Jin, and J. Ye, Dipolar collisions of polar molecules in the quantum regime, Nature 464, 1324 (2010). Quemener_PRA_81_022702_2010 G. Quéméner and J. L. Bohn, Strong dependence of ultracold chemical rates on electric dipole moments, Phys. Rev. A 81, 022702 (2010). Quemener_PRA_84_062703_2011 G. Quéméner, J. L. Bohn, A. Petrov, and S. Kotochigova, Universalities in ultracold reactions of alkali-metal polar molecules, Phys. Rev. A 84, 062703 (2011). Bohn_NJP_11_055039_2009 J. L. Bohn, M. Cavagnero, and C. Ticknor, Quasi-universal dipolar scattering in cold and ultracold gases, New J. Phys. 11, 055039 (2009). Gao_PRA_78_012702_2008 B. Gao, General form of the quantum-defect theory for -1/r^α type of potentials with α > 2, Phys. Rev. A 78, 012702 (2008). Langevin_ACP_5_245_1905 P. Langevin, A fundamental formula of kinetic theory, Ann. Chim. Phys. 5, 245 (1905). Gao_PRL_105_263203_2010 B. Gao, Universal model for exoergic bimolecular reactions and inelastic processes, Phys. Rev. Lett. 105, 263203 (2010). Avdeenkov_PRA_73_022707_2006 A. V. Avdeenkov, M. Kajita, and J. L. Bohn, Suppression of inelastic collisions of polar ^1Σ state molecules in an electrostatic field, Phys. Rev. A 73, 022707 (2006). Quemener_PRA_93_012704_2016 G. Quéméner and J. L. Bohn, Shielding ^2Σ ultracold dipolar molecular collisions with electric fields, Phys. Rev. A 93, 012704 (2016). DeMiranda_NP_7_502_2011 M. H. G. de Miranda, A. Chotia, B. Neyenhuis, D. Wang, G. Quéméner, S. Ospelkaus, J. L. Bohn, J. Ye, and D. S. Jin, Controlling the quantum stereodynamics of ultracold bimolecular reactions, Nature Physics 7, 502 (2011). Frisch_PRL_115_203201_2015 A. Frisch, M. Mark, K. Aikawa, S. Baier, R. Grimm, A. Petrov, S. Kotochigova, G. Quéméner, M. Lepers, O. Dulieu, and F. Ferlaino, Ultracold dipolar molecules composed of strongly magnetic atoms, Phys. Rev. Lett. 115, 203201 (2015). Quemener_PRA_92_042706_2015 G. Quéméner, M. Lepers, and O. Dulieu, Dynamics of ultracold dipolar particles in a confined geometry and tilted fields, Phys. Rev. A 92, 042706 (2015). Gorshkov_PRL_101_073201_2008 A. V. Gorshkov, P. Rabl, G. Pupillo, A. Micheli, P. Zoller, M. D. Lukin, and H. P. Büchler, Suppression of inelastic collisions between polar molecules with a repulsive shield, Phys. Rev. Lett. 101, 073201 (2008). Alyabyshev_PRA_80_033419_2009 S. V. Alyabyshev and R. V. Krems, Controlling collisional spin relaxation of cold molecules with microwave laser fields, Phys. Rev. A 80, 033419 (2009). Avdeenkov_PRA_86_022707_2012 A. V. Avdeenkov, Dipolar collisions of ultracold polar molecules in a microwave field, Phys. Rev.
A 86, 022707 (2012).Ticknor_PRL_105_013201_2010 C. Ticknor and S. T. Rittenhouse, Three body recombination of ultracold dipoles to weakly bound dimers, Phys. Rev. Lett. 105, 013201 (2010).Wang_PRL_106_233201_2011 Y. Wang, J. P. D'Incao, and C. H. Greene, Efimov effect for three interacting bosonic dipoles, Phys. Rev. Lett. 106, 233201 (2011).Wang_PRL_107_233201_2011 Y. Wang, J. P. D'Incao, and C. H. Greene, Universal three-body physics for fermionic dipoles, Phys. Rev. Lett. 107, 233201 (2011).Lepers_JPBAMOP_49_014004_2016 M. Lepers, G. Quéméner, E. Luc-Koenig, and O. Dulieu, Four-body long-range interactions between ultracold weakly-bound diatomic molecules, J. Phys. B: At. Mol. Opt. Phys. 49, 014004 (2016).
Adversarial Examples for Semantic Segmentation and Object Detection Cihang Xie1*, Jianyu Wang2*, Zhishuai Zhang1*, Yuyin Zhou1, Lingxi Xie1, Alan Yuille1 (*The first three authors contributed equally to this work. This work was done when Jianyu Wang was a Ph.D. student at UCLA.) 1Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218 USA 2Baidu Research USA, Sunnyvale, CA 94089 USA {cihangxie306, wjyouch, zhshuai.zhang, zhouyuyiner, 198808xc, alan.l.yuille}@gmail.com ============================== It has been well demonstrated that adversarial examples, i.e., natural images with visually imperceptible perturbations added, cause deep networks to fail on image classification. In this paper, we extend adversarial examples to semantic segmentation and object detection, which are much more difficult. Our observation is that both segmentation and detection are based on classifying multiple targets on an image (e.g., the target is a pixel or a receptive field in segmentation, and an object proposal in detection). This inspires us to optimize a loss function over a set of pixels/proposals for generating adversarial perturbations. Based on this idea, we propose a novel algorithm named Dense Adversary Generation (DAG), which generates a large family of adversarial examples, and applies to a wide range of state-of-the-art deep networks for segmentation and detection. We also find that the adversarial perturbations can be transferred across networks with different training data, based on different architectures, and even for different recognition tasks. In particular, the transferability across networks with the same architecture is more significant than in other cases. Besides, summing up heterogeneous perturbations often leads to better transfer performance, which provides an effective method of black-box adversarial attack. § INTRODUCTION Convolutional Neural Networks (CNN) <cit.><cit.><cit.><cit.> have become the state-of-the-art solution for a wide range of visual recognition problems. Based on a large-scale labeled dataset such as ImageNet <cit.> and powerful computational resources like modern GPUs, it is possible to train a hierarchical deep network to capture different levels of visual patterns. A deep network is also capable of generating transferable features for different tasks such as image classification <cit.> and instance retrieval <cit.>, or being fine-tuned to deal with a wide range of vision tasks, including object detection <cit.><cit.>, visual concept discovery <cit.>, semantic segmentation <cit.><cit.><cit.>, boundary detection <cit.><cit.>, etc. Despite their success in visual recognition and feature representation, deep networks are often sensitive to small perturbations to the input image. In <cit.>, it was shown that adding visually imperceptible perturbations can result in failures for image classification. These perturbed images, often called adversarial examples, are considered to fall on some areas in the large, high-dimensional feature space which are not explored in the training process.
Thus, investigating this not only helps understand the working mechanism of deep networks, but also provides opportunities to improve the robustness of network training. In this paper, we go one step further by generating adversarial examples for semantic segmentation and object detection, and showing their transferability. To the best of our knowledge, this topic has not been systematically studied (e.g., on a large dataset) before. Note that these tasks are much more difficult, as we need to consider orders of magnitude more targets (e.g., pixels or proposals). Motivated by the fact that each target undergoes a separate classification process, we propose the Dense Adversary Generation (DAG) algorithm, which considers all the targets simultaneously and optimizes the overall loss function. The implementation of DAG is simple, as it only involves specifying an adversarial label for each target and performing iterative gradient back-propagation. In practice, the algorithm often comes to an end after a reasonable number of, say, 150 to 200, iterations. Figure <ref> shows an adversarial example which can confuse both deep segmentation and detection networks. We point out that generating an adversarial example is more difficult in detection than in segmentation, as the number of targets is orders of magnitude larger in the former case, e.g., for an image with K pixels, the number of possible proposals is O(K^2) while the number of pixels is only O(K), where O(·) is the big-O notation. In addition, if only a subset of proposals is considered, the perturbed image may still be correctly recognized after a new set of proposals is extracted (note that DAG aims at generating recognition failures on the original proposals). To increase the robustness of the adversarial attack, we change the intersection-over-union (IOU) rate to preserve an increased but still reasonable number of proposals in optimization. In experiments, we verify that when the proposals are dense enough on the original image, it is highly likely that incorrect recognition results are also produced on the new proposals generated from the perturbed image. We also study the effectiveness and efficiency of the algorithm with respect to the denseness of the considered proposals. Following <cit.>, we investigate the transferability of the generated perturbations. To this end, we use the adversarial perturbation computed on one network to attack another network. Three situations are considered: (1) networks with the same architecture but trained with different data; (2) networks with different architectures but trained for the same task; and (3) networks for different tasks. Although the difficulty increases as the difference becomes more significant, the perturbations generated by DAG are able to transfer to some extent. Interestingly, adding two or more heterogeneous perturbations significantly increases the transferability, which provides an effective way of performing a black-box adversarial attack <cit.> on networks with unknown structures and/or properties. The remainder of this paper is organized as follows. Section <ref> briefly introduces prior work related to our research. Section <ref> describes our algorithm for generating adversarial perturbations, and Section <ref> investigates the transferability of the perturbations. Conclusions are drawn in Section <ref>.
§ RELATED WORK

§.§ Deep Learning for Detection and Segmentation

Deep learning approaches, especially deep convolutional neural networks, have been very successful in object detection <cit.><cit.><cit.> and semantic segmentation <cit.><cit.> tasks. Currently, one of the most popular object detection pipelines <cit.><cit.><cit.> involves first generating a number of proposals of different scales and positions, classifying each of them, and performing post-processing such as non-maximal suppression (NMS). On the other hand, the dominating segmentation pipeline <cit.> works by first predicting a class-dependent score map at a reduced resolution, and then performing up-sampling to obtain high-resolution segmentation. <cit.> incorporates the “atrous” algorithm and the conditional random field (CRF) into this pipeline to further improve the segmentation performance.

§.§ Adversarial Attack and Defense

Generating adversarial examples for classification has been extensively studied in many different ways recently. <cit.> first showed that adversarial examples, computed by adding visually imperceptible perturbations to the original images, make CNNs predict a wrong label with high confidence. <cit.> proposed a simple and fast gradient sign method to generate adversarial examples based on the linear nature of CNNs. <cit.> proposed a simple algorithm to compute the minimal adversarial perturbation by assuming that the loss function can be linearized around the current data point at each iteration. <cit.> showed the existence of universal (image-agnostic) adversarial perturbations. <cit.> trained a network to generate adversarial examples for a particular target model (without using gradients). <cit.> showed that adversarial examples for machine learning systems also exist in the physical world. <cit.> studied the transferability of both non-targeted and targeted adversarial examples, and proposed an ensemble-based approach to generate adversarial examples with stronger transferability. <cit.> generated images using evolutionary algorithms that are unrecognizable to humans, but cause CNNs to output very confident (incorrect) predictions. This can be thought of as going in the opposite direction of the above works.

In contrast to generating adversarial examples, some works try to reduce their effect. <cit.> proposed a foveation-based mechanism to alleviate adversarial examples. <cit.> showed that networks trained using defensive distillation can effectively defend against adversarial examples, while <cit.> developed stronger attacks that defensive distillation is unable to defend against. <cit.> trained the network on adversarial examples using the large-scale ImageNet, and showed that this brings robustness to adversarial attacks. This is improved by <cit.>, which proposed an ensemble adversarial training method to increase the network robustness to black-box attacks. <cit.> trained a detector on the inner layers of the classifier to detect adversarial examples.

There are two concurrent works, <cit.> and <cit.>, that studied adversarial examples in semantic segmentation on the Cityscapes dataset <cit.>, where <cit.> showed the existence of adversarial examples, and <cit.> showed the existence of universal perturbations. We refer interested readers to their papers for details.

§ GENERATING ADVERSARIAL EXAMPLES

In this section, we introduce the DAG algorithm.
Given an image and the recognition targets (proposals and/or pixels), DAG generates an adversarial perturbation which is aimed at confusing as many targets as possible.

§.§ Dense Adversary Generation

Let 𝐗 be an image which contains N recognition targets 𝒯={t_1,t_2,…,t_N}. Each target t_n, n=1,2,…,N, is assigned a ground-truth class label l_n∈{1,2,…,C}, where C is the number of classes, e.g., C=21 (including the background class) in the PascalVOC dataset <cit.>. Denote ℒ={l_1,l_2,…,l_N}. The detailed form of 𝒯 varies among different tasks. In image classification, 𝒯 only contains one element, i.e., the entire image. Conversely, 𝒯 is composed of all pixels (or the corresponding receptive fields) in semantic segmentation, and of all proposals in object detection. We will discuss how to construct 𝒯 in Section <ref>.

Given a deep network for a specific task, we use 𝐟(𝐗,t_n)∈ℝ^C to denote the classification score vector (before softmax normalization) on the n-th recognition target of 𝐗. To generate an adversarial example, the goal is to make the predictions of all targets go wrong, i.e., ∀ n, arg max_c{f_c(𝐗+𝐫,t_n)}≠ l_n. Here 𝐫 denotes an adversarial perturbation added to 𝐗. To this end, we specify an adversarial label l'_n for each target, in which l'_n is randomly sampled from the other (incorrect) classes, i.e., l'_n∈{1,2,…,C}∖{l_n}. Denote ℒ'={l'_1,l'_2,…,l'_N}. In practice, we define a random permutation function π:{1,2,…,C}→{1,2,…,C} for every image independently, in which π(c)≠ c for c=1,2,…,C, and generate ℒ' by setting l'_n=π(l_n) for all n. Under this setting, the loss function covering all targets can be written as:

L(𝐗,𝒯,ℒ,ℒ')= ∑_n=1^N[f_l_n(𝐗,t_n)-f_l'_n(𝐗,t_n)]

Minimizing L can be achieved by making every target incorrectly predicted, i.e., suppressing the confidence of the original correct class f_l_n(𝐗+𝐫,t_n), while increasing that of the desired (adversarial) incorrect class f_l'_n(𝐗+𝐫,t_n).

We apply a gradient descent algorithm for optimization. At the m-th iteration, denote the current image (possibly after adding several perturbations) as 𝐗_m. We find the set of correctly predicted targets, named the active target set:

𝒯_m={t_n | arg max_c{f_c(𝐗_m,t_n)} = l_n}.

Then we compute the gradient with respect to the input data and accumulate all these perturbations:

𝐫_m=∑_t_n∈𝒯_m[∇_𝐗_mf_l'_n(𝐗_m,t_n)-∇_𝐗_mf_l_n(𝐗_m,t_n)]

Note that |𝒯_m|≪|𝒯| when m gets large; thus this strategy considerably reduces the computational overhead. To avoid numerical instability, we normalize 𝐫_m as

𝐫'_m=(γ/‖𝐫_m‖_∞)·𝐫_m

where γ=0.5 is a fixed hyper-parameter. We then add 𝐫'_m to the current image 𝐗_m and proceed to the next iteration. The algorithm terminates if either all the targets are predicted as desired, i.e., 𝒯_m=∅, or it reaches the maximum number of iterations, which is set to 200 in segmentation and 150 in detection.

The final adversarial perturbation is computed as 𝐫=∑_m𝐫'_m. Note that, in practice, we often obtain the input image 𝐗 after subtracting the mean image 𝐗̄. In this case, the adversarial image is Trunc(𝐗+𝐫+𝐗̄), where Trunc(·) denotes the function that truncates every pixel value to [0,255]. Although truncation may harm the adversarial perturbation, we observed little effect in experiments, mainly because the magnitude of the perturbation 𝐫 is very small (see Section <ref>). The overall pipeline of the DAG algorithm is illustrated in Algorithm <ref>.
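To make the iteration above concrete, the following is a minimal sketch of the DAG loop in PyTorch-style Python. This is our illustration, not the authors' released implementation: the hypothetical `model` is assumed to return an N×C matrix of per-target logits (per-pixel scores in segmentation, per-proposal scores in detection), and all tensor names here are ours.

```python
import torch

def dag_attack(model, x, true_labels, adv_labels, gamma=0.5, max_iters=200):
    """Sketch of Dense Adversary Generation (DAG).

    model(x)     -> (N, C) per-target logits (assumed interface)
    true_labels  -> (N,)   ground-truth labels l_n
    adv_labels   -> (N,)   adversarial labels l'_n, with l'_n != l_n
    """
    x_adv = x.clone().detach()
    r_total = torch.zeros_like(x)
    for _ in range(max_iters):
        x_adv.requires_grad_(True)
        scores = model(x_adv)
        active = scores.argmax(dim=1).eq(true_labels)  # active target set T_m
        if not active.any():                           # every target fooled
            break
        idx = active.nonzero(as_tuple=True)[0]
        # L restricted to active targets: sum of f_{l_n} - f_{l'_n}
        loss = (scores[idx, true_labels[idx]] - scores[idx, adv_labels[idx]]).sum()
        grad, = torch.autograd.grad(loss, x_adv)
        r_m = -grad                                    # descent direction for L
        r_m = gamma * r_m / r_m.abs().max().clamp_min(1e-12)  # L_inf normalization
        x_adv = (x_adv + r_m).detach()
        r_total = r_total + r_m
    return x_adv, r_total
```

Restricting the loss to the active set 𝒯_m is what keeps late iterations cheap, since |𝒯_m| shrinks rapidly as more targets are fooled.

§.§ Selecting Input Proposals for Detection

A critical issue in DAG is to select a proper set 𝒯 of targets.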
This is relatively easy in the semantic segmentation task, because the goal is to produce incorrect classification on all pixels, and thus we can set each of them as a separate target, i.e., performing dense sampling on the image lattice. This is tractable, i.e., the computational complexity is proportional to the total number of pixels.

In the scenario of object detection, target selection becomes a lot more difficult, as the total number of possible targets (bounding box proposals) is orders of magnitude larger than that in semantic segmentation. A straightforward choice is to only consider the proposals generated by a sideway network, e.g., the region proposal network (RPN) <cit.>, but we find that when the adversarial perturbation 𝐫 is added to the original image 𝐗, a different set of proposals may be generated according to the new input 𝐗+𝐫, and the network may still be able to correctly classify these new proposals <cit.>. To overcome this problem, we make the proposals very dense by increasing the IOU threshold of NMS in RPN. In practice, when the intersection-over-union (IOU) threshold goes up from 0.70 to 0.90, the average number of proposals on each image increases from around 300 to around 3000. Using this denser target set 𝒯, most probable object bounding boxes are only a few pixels away from at least one of the selected input proposals, and we can expect the classification error to transfer among neighboring bounding boxes. As shown in experiments, this heuristic idea works very well, and the effect of adversarial perturbations is positively correlated with the number of proposals considered in DAG.

Technically, given the proposals generated by RPN, we preserve all positive proposals and discard the rest. Here, a positive proposal satisfies the following two conditions: 1) the IOU with the closest ground-truth object is greater than 0.1, and 2) the confidence score for the corresponding ground-truth class is greater than 0.1. If both conditions hold on multiple ground-truth objects, we select the one with the maximal IOU. The label of the proposal is defined as the corresponding ground-truth class. This strategy aims at selecting high-quality targets for Algorithm <ref>.
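The two conditions defining a positive proposal can be sketched as a short filter. The snippet below is illustrative only: the (x1, y1, x2, y2) box format, the NumPy tooling, and the simplified tie-breaking (we pick the maximal-IOU ground truth first and then test both conditions on it) are our assumptions, not the paper's exact bookkeeping.

```python
import numpy as np

def iou(a, b):
    """IOU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda c: (c[2] - c[0]) * (c[3] - c[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def select_positive_proposals(proposals, scores, gt_boxes, gt_labels):
    """Keep proposals with IOU > 0.1 against the best-overlapping ground
    truth and score > 0.1 on that ground truth's class; label each kept
    proposal by that ground-truth class."""
    kept, labels = [], []
    for box, score_vec in zip(proposals, scores):
        overlaps = [iou(box, g) for g in gt_boxes]
        best = int(np.argmax(overlaps))
        if overlaps[best] > 0.1 and score_vec[gt_labels[best]] > 0.1:
            kept.append(box)
            labels.append(gt_labels[best])
    return kept, labels
```

§.§ Quantitative Evaluation

Following some previous work <cit.><cit.>, we evaluate our approach by measuring the drop in recognition accuracy, i.e., mean intersection-over-union (mIOU) for semantic segmentation and mean average precision (mAP) for object detection, using the original test images and the ones after adding adversarial perturbations[For implementation simplicity, we keep targets with ground-truth class label background unchanged when generating adversarial examples.].

* For semantic segmentation, we study two network architectures based on the FCN <cit.> framework. One of them is based on the AlexNet <cit.> and the other one is based on the 16-layer VGGNet <cit.>. Both networks have two variants. We use FCN-Alex and FCN-VGG, which are publicly available, to denote the networks that are trained on the original FCN <cit.> training set, which has 9610 images, and use FCN-Alex* and FCN-VGG* to denote the networks that are trained on the DeepLab <cit.> training set, which has 10582 images. We use the validation set in <cit.>, which has 736 images, as our semantic segmentation test set.

* For object detection, based on the Faster-RCNN <cit.> framework, we study two network architectures, i.e., the ZFNet <cit.> and the 16-layer VGGNet <cit.>.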
Both networks have two variants, which are either trained on the PascalVOC-2007 trainval set, or on the combined PascalVOC-2007 and PascalVOC-2012 trainval sets. These four models are publicly available, and are denoted as FR-ZF-07, FR-ZF-0712, FR-VGG-07 and FR-VGG-0712, respectively. We use the PascalVOC-2007 test set, which has 4952 images, as our object detection test set.

Results are summarized in Table <ref>. We can observe that the accuracy (mIOU for segmentation and mAP for detection) drops significantly after the adversarial perturbations are added, demonstrating the effectiveness of the DAG algorithm. Moreover, for detection, the networks with more training data are often more sensitive to the adversarial perturbation. This is verified by the fact that FR-ZF-07 (from 58.70% to 3.61%) has a smaller performance drop than FR-ZF-0712 (from 61.07% to 1.95%), and that FR-VGG-07 (from 69.14% to 5.92%) has a smaller performance drop than FR-VGG-0712 (from 72.04% to 3.36%). To verify the importance of the spatial structure of adversarial perturbations, we evaluate the accuracy after randomly permuting the rows and/or columns of 𝐫. In Table <ref>, we find that permuted perturbations cause negligible accuracy drop, indicating that it is the spatial structure of 𝐫, instead of its magnitude, that indeed contributes to generating adversarial examples. For the permutation results, we randomly permute 𝐫 three times and take the average.

§.§ Adversarial Examples

Figure <ref> shows an adversarial example that fails both detection and segmentation networks. In addition, we show that DAG is able to control the output of adversarial images very well. In Figure <ref>, we apply DAG to generating one adversarial image (which humans can recognize but deep networks cannot) and one fooling image <cit.> (which is completely unrecognizable to humans but on which deep networks produce false positives). This suggests that deep networks only cover a limited area in the high-dimensional feature space, and that we can easily find adversarial and/or fooling examples that fall in the unexplored parts.

§.§ Diagnostics

§.§.§ The Denseness of Proposals

We first observe the impact of the denseness of the proposals on adversarial generation. To this end, we use different IOU rates in the NMS process after the RPN. This directly affects the number of proposals preserved in Algorithm <ref>. As we can see in Figure <ref>, the mAP value goes down (i.e., stronger adversarial perturbations are generated) as the IOU rate increases, which means that fewer proposals are filtered out and thus the set of targets 𝒯 becomes larger. This is in line with our expectation, since DAG only guarantees misclassification on the targets in 𝒯. The denser sampling on proposals allows the recognition error to propagate better to other possible object positions. Therefore, we choose a large IOU value (0.90), which produces good results.

§.§.§ Convergence

We then investigate the convergence of DAG, i.e., how many iterations are needed to find the desired adversarial perturbation. Figure <ref> shows the number of active targets, i.e., |𝒯_m|, with respect to the number of iterations m. In general, the optimization goes smoothly in the early rounds, in which we find that the number of active proposals is significantly reduced. After the algorithm reaches the maximal number of iterations, i.e., 200 in segmentation and 150 in detection, only a few (less than 1%) images fail to converge.
Even in these cases, DAG is able to produce reasonable adversarial perturbations.

Another interesting observation is the difficulty in generating adversarial examples. In general, the detection networks are more difficult to attack than the segmentation networks, which is arguably caused by the much larger number of potential targets (recall that the total number of possible bounding boxes is one or two orders of magnitude larger). Meanwhile, as the IOU rate increases, i.e., a larger set 𝒯 of proposals is considered, convergence also becomes slower, implying that more iterations are required to generate stronger adversarial perturbations.

§.§.§ Perceptibility

Following <cit.><cit.>, we compute the perceptibility of the adversarial perturbation 𝐫, defined by p=(1/K ∑_k ‖𝐫_k‖_2^2)^1/2, where K is the number of pixels and 𝐫_k is the intensity vector of the k-th pixel (3-dimensional in the RGB color space), normalized to [0,1]. We average the perceptibility value over the entire test set. In semantic segmentation, these values are 2.6×10^-3, 2.5×10^-3, 2.9×10^-3 and 3.0×10^-3 on FCN-Alex, FCN-Alex*, FCN-VGG and FCN-VGG*, respectively. In object detection, these values are 2.4×10^-3, 2.7×10^-3, 1.5×10^-3 and 1.7×10^-3 on FR-ZF-07, FR-ZF-0712, FR-VGG-07 and FR-VGG-0712, respectively. One can see that these numbers are very small, which guarantees the imperceptibility of the generated adversarial perturbations. The visualized examples (Figures <ref> and <ref>) also verify this point.
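For reference, the perceptibility measure translates directly into a few lines of code. This is a sketch under our own conventions: an (H, W, 3) array with intensities already normalized to [0, 1].

```python
import numpy as np

def perceptibility(r):
    """p = sqrt( (1/K) * sum_k ||r_k||_2^2 ) for a perturbation r of shape
    (H, W, 3), intensities in [0, 1]; K = H * W is the number of pixels."""
    K = r.shape[0] * r.shape[1]
    return float(np.sqrt((r.astype(np.float64) ** 2).sum() / K))
```

§ TRANSFERRING ADVERSARIAL PERTURBATIONS

In this section, we investigate the transferability of the generated adversarial perturbations. To this end, we add the adversarial perturbation computed on one model to attack other models. The attacked model may be trained based on a different (sometimes unknown) network architecture, or even targeted at a different vision task. Quantitative results are summarized in Tables <ref>–<ref>, and typical examples are illustrated in Figure <ref>. In the following parts, we analyze these results by organizing them into three categories, namely cross-training transfer, cross-network transfer and cross-task transfer.

§.§ Cross-Training Transfer

By cross-training transfer, we mean applying the perturbations learned from one network to another network with the same architecture but trained on a different dataset. We observe that the transferability largely exists within the same network structure[We also studied training on strictly non-overlapping datasets, e.g., the model FR-ZF-07 trained on the PascalVOC-2007 trainval set and the model FR-ZF-12val trained on the PascalVOC-2012 val set. The experiments deliver similar conclusions. For example, using FR-ZF-07 to attack FR-ZF-12val results in a mAP drop from 56.03% to 25.40%, and using FR-ZF-12val to attack FR-ZF-07 results in a mAP drop from 58.70% to 30.41%.]. For example, using the adversarial perturbations generated by FR-ZF-07 to attack FR-ZF-0712 obtains a 22.15% mAP. This is a dramatic drop from the performance (61.07%) reported on the original images, although the drop is less than that observed in attacking FR-ZF-07 itself (from 58.70% to 3.61%). Meanwhile, using the adversarial perturbations generated by FR-ZF-0712 to attack FR-ZF-07 causes the mAP to drop from 58.70% to 13.14%. We observe similar phenomena when FR-VGG-07 and FR-VGG-0712, or FCN-Alex and FCN-Alex*, or FCN-VGG and FCN-VGG* are used to attack each other. Detailed results are shown in Tables <ref> and <ref>.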
§.§ Cross-Network Transfer

We extend the previous case to consider the transferability through different network structures. We introduce two models which are more powerful than the ones we used to generate adversarial perturbations, namely DeepLab <cit.> for semantic segmentation and R-FCN <cit.> for object detection. For DeepLab <cit.>, we use DL-VGG to denote the network based on the 16-layer VGGNet <cit.>, and use DL-RN101 to denote the network based on the 101-layer ResNet <cit.>. Both networks are trained on the original DeepLab <cit.> training set, which has 10582 images. For R-FCN <cit.>, we use R-FCN-RN50 to denote the network based on the 50-layer ResNet <cit.>, and use R-FCN-RN101 to denote the network based on the 101-layer ResNet <cit.>. Both networks are trained on the combined trainval sets of PascalVOC-2007 and PascalVOC-2012. The perturbations applied to these four models are considered as black-box attacks <cit.>, since DAG does not know the structure of these networks beforehand.

Detailed results are shown in Tables <ref> and <ref>. Experiments reveal that the transferability between different network structures becomes weaker. For example, applying the perturbations generated by FR-ZF-07 leads to slight accuracy drops on FR-VGG-07 (from 69.14% to 66.01%), FR-VGG-0712 (from 72.07% to 69.74%), R-FCN-RN50 (from 76.40% to 74.01%) and R-FCN-RN101 (from 78.06% to 75.87%), respectively. Similar phenomena are observed when using different segmentation models to attack each other. One exception is using FCN-VGG or FCN-VGG* to attack DL-VGG (from 70.72% to 45.16% for the FCN-VGG attack, or from 70.72% to 46.33% for the FCN-VGG* attack), which results in a significant accuracy drop of DL-VGG. Considering the cues obtained from previous experiments, we conclude that adversarial perturbations are closely related to the architecture of the network.

§.§ Cross-Task Transfer

Finally, we investigate cross-task transfer, i.e., using the perturbations generated by a detection network to attack a segmentation network, or in the opposite direction. We use a subset of the PascalVOC-2012 segmentation validation set as our test set[There are training images of FR-ZF-07, FR-VGG-07, FCN-Alex and FCN-VGG included in the PascalVOC-2012 segmentation validation set, so we validate on the non-intersecting set of 687 images.]. Results are summarized in Table <ref>. We note that if the same network structure is used, e.g., using FCN-VGG (segmentation) and FR-VGG-07 (detection) to attack each other, the accuracy drop is significant (the mIOU of FCN-VGG drops from 54.87% to 43.06%, and the mAP of FR-VGG-07 drops from 68.88% to 56.33%). Note that this drop is even more significant than cross-network transfer on the same task, which again verifies our hypothesis that the adversarial perturbations are related to the network architecture.

§.§ Combining Heterogeneous Perturbations

From the above experiments, we assume that different network structures generate roughly orthogonal perturbations, which means that if 𝐫_𝔸 is generated by one structure 𝔸, then adding it to another structure 𝔹 barely changes the recognition results, i.e., 𝐟^𝔹(𝐗,t_n)≈𝐟^𝔹(𝐗+𝐫_𝔸,t_n). This motivates us to combine heterogeneous perturbations towards better adversarial performance. For example, if both 𝐫_𝔸 and 𝐫_𝔹 are added, we have 𝐟^𝔸(𝐗+𝐫_𝔸+𝐫_𝔹,t_n)≈𝐟^𝔸(𝐗+𝐫_𝔸,t_n) and 𝐟^𝔹(𝐗+𝐫_𝔸+𝐫_𝔹,t_n)≈𝐟^𝔹(𝐗+𝐫_𝔹,t_n). Thus, the combined perturbation 𝐫_𝔸+𝐫_𝔹 is able to confuse both network structures.
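The combination step itself is elementary; below is a hedged sketch (array layout and dtype conventions are our assumptions) of summing heterogeneous perturbations and truncating the result, as used in the black-box strategy discussed later.

```python
import numpy as np

def combine_and_apply(image, perturbations):
    """Sum perturbations computed on different networks (e.g., one from a
    detection model and one from a segmentation model), add them to the
    image, and truncate pixel values to [0, 255]."""
    r = sum(p.astype(np.float64) for p in perturbations)
    adv = np.clip(image.astype(np.float64) + r, 0, 255).astype(np.uint8)
    return adv, r
```

In Tables <ref>–<ref>, we list some results obtained by adding multiple adversarial perturbations.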
Also, in order to verify that the spatial structure of the combined adversarial perturbations is the key point that leads to the statistically significant accuracy drop, we randomly generate three permutations of the combined adversarial perturbations and report the average accuracy. From the results listed in Tables <ref>–<ref>, we can observe that adding multiple adversarial perturbations often works better than adding a single source of perturbations. Indeed, the accuracy drop caused by the combined perturbation approximately equals the sum of the drops caused by each perturbation. For example, the adversarial perturbation 𝐫_2+𝐫_4 (combining FR-ZF-0712 and FR-VGG-0712) causes significant mAP drops on all ZFNet-based and VGGNet-based detection networks, and the adversarial perturbation 𝐫_5+𝐫_7 (combining FCN-Alex* and FCN-VGG*) causes significant mIOU drops on all AlexNet-based and VGGNet-based segmentation networks. However, permutation destroys the spatial structure of the adversarial perturbations, leading to negligible accuracy drops. The same conclusion holds when the perturbations from different tasks are combined. Table <ref> shows some quantitative results of such combinations and Figure <ref> shows an example. Note that the perceptibility value defined in Section <ref> remains very small even when multiple adversarial perturbations are combined (e.g., 4.0×10^-3 for 𝐫_1+𝐫_3+𝐫_5+𝐫_7).

§.§ Black-Box Attack

Combining heterogeneous perturbations allows us to perform better on the so-called black-box attack <cit.>, in which we do not need to know the detailed properties (architecture, purpose, etc.) of the defender network. According to the above experiments, a simple and effective way is to compute the sum of perturbations from several known networks, such as FR-ZF-07, FR-VGG-07 and FCN-Alex, and use it to attack an unknown network. This strategy works well even when the structure of the defender has not been investigated before. As an example shown in Table <ref>, the perturbation 𝐫_1+𝐫_3+𝐫_5+𝐫_7 leads to a significant accuracy drop (from 80.20% to 64.52%) on R-FCN-RN101 <cit.>, a powerful network based on the deep ResNet <cit.>.

§ CONCLUSIONS

In this paper, we investigate the problem of generating adversarial examples, and extend it from image classification to semantic segmentation and object detection. We propose the DAG algorithm for this purpose. The basic idea is to define a dense set of targets as well as a different set of desired labels, and optimize a loss function in order to produce incorrect recognition results on all the targets simultaneously. Extensive experimental results verify that DAG is able to generate visually imperceptible perturbations which confuse the originally high-confidence recognition results in a well controllable manner.

An intriguing property of the perturbations generated by DAG lies in their transferability. The perturbations can be transferred across different training sets, different network architectures and even different tasks. Combining heterogeneous perturbations often leads to more effective adversarial perturbations in black-box attacks. The transferability also suggests that deep networks, though starting with different initializations and trained in different ways, share some basic principles such as local linearity, which make them sensitive to similar sources of perturbations. This reveals an interesting topic for future research.

§ ACKNOWLEDGEMENTS

We thank Dr.
Vittal Premachandran, Chenxu Luo, Weichao Qiu, Chenxi Liu, Zhuotun Zhu and Siyuan Qiao for instructive discussions.

§ MORE FANCY EXAMPLES

§.§ Generating Geometric Patterns

As an additional showcase, the deep segmentation networks can be confused to output some geometric shapes, including stripes, circles, triangles, squares, etc., after different adversarial perturbations are added to the original image. Results are shown in Figure <ref>. Here, the added adversarial perturbation varies from case to case.

§.§ Same Noise, Different Outputs

In Figure 2 of the main article, we show that we can generate adversarial perturbations to make a deep segmentation network output a pre-specified segmentation mask (e.g., ICCV and 2017). However, the perturbations used to generate these two segmentation masks are different. Here, we present a more challenging task, which uses the same perturbation to confuse two networks. More specifically, we hope to generate a perturbation 𝐫 such that, when it is added to an image 𝐗, the FCN-Alex and FCN-VGG models are confused to output ICCV and 2017, respectively. To implement this, we apply the locally linear property of the networks, and add two sources of perturbations, i.e., 𝐫=𝐫_1+𝐫_2, where 𝐫_1 is generated on FCN-Alex with the mask ICCV, and 𝐫_2 is generated on FCN-VGG with the mask 2017. As shown in Figure <ref>, this simple strategy works very well, although the segmentation boundary of each letter or digit becomes somewhat jagged.
{ "authors": [ "Cihang Xie", "Jianyu Wang", "Zhishuai Zhang", "Yuyin Zhou", "Lingxi Xie", "Alan Yuille" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170324212616", "title": "Adversarial Examples for Semantic Segmentation and Object Detection" }
We study piecewise linear co-dimension two embeddings of closed oriented manifolds in Euclidean space, and show that any such embedding can always be isotoped to be a closed braid as long as the ambient dimension is at most five, extending results of Alexander (in ambient dimension three), and Viro and independently Kamada (in ambient dimension four). We also show an analogous result for higher co-dimension embeddings.

§ INTRODUCTION

A classical theorem of Alexander <cit.> says that every oriented link in ℝ^3 is isotopic to a closed braid. This theorem has been used to study knot theory; for example, the Jones polynomial <cit.> is a knot invariant defined using braids. In this paper we study generalizations of Alexander's theorem in higher ambient dimensions. Braided surfaces were first introduced by Rudolph <cit.> for surfaces with boundary, but the notion we will be using is due to Viro. Viro defined the notion of a closed braid for a closed oriented surface in ℝ^4, which can be thought of as the closure of certain (the ones with trivial boundary) braided surfaces in the sense of Rudolph. Hilden, Lozano and Montesinos <cit.>, using different terminology, first studied braided embeddings to prove that every three-manifold has a braided embedding in ℝ^5. The notion of braided embedding was defined in general by Etnyre and Furukawa <cit.>, and such embeddings have been studied previously by Carter and Kamada <cit.>. The first analogue of Alexander's theorem for surfaces is due to Rudolph <cit.>, who showed that every oriented ribbon surface is smoothly isotopic to a closed braid.
Alexander's theorem was generalized to closed oriented surfaces in ℝ^4 by Viro and independently by Kamada. Viro announced his results in a lecture in 1990, but his proof was never published. Kamada gave an alternative proof <cit.> using the motion picture method to describe surfaces in ℝ^4. The main result of this article is to show that, in the piecewise linear category, Alexander's theorem can be generalized to ambient dimension 5.

Any closed oriented piecewise linear (n-2)-link in ℝ^n can be piecewise linearly isotoped to be a closed braid for 3≤ n ≤ 5.

Our approach is similar to Alexander's original proof in <cit.> (see also <cit.>, or <cit.>) in the classical case. We give an alternate proof of Kamada's generalization of Alexander's theorem in dimension 4. For completeness, we also include the proof of the classical case of dimension 3. We also recover another classical result of Alexander <cit.>, which says that any closed oriented piecewise linear k-manifold is a piecewise linear branched cover over the sphere S^k; see Remark <ref>.

If Theorem <ref> can be upgraded to the smooth category, then there are applications to contact geometry. Etnyre and Furukawa (see <cit.>) showed that if Alexander's theorem holds in the smooth category (with the branch locus being a submanifold) in ambient dimension five, then any embedding of a closed oriented 3-manifold in S^5 can be isotoped to be a transverse contact embedding. One may wonder if there are analogues of Alexander's theorem for higher co-dimension links. More precisely: Given a natural number k, is there a natural number n≥ k+2 such that any closed oriented k-manifold embeds in ℝ^n, and moreover any embedding is isotopic to a closed braid? It is well known that the embedding problem holds as long as n≥ 2k, see <cit.> for the piecewise linear category, and <cit.> for the smooth category. By Theorem <ref> below, in the piecewise linear category we have that for k≥ 2 and n≥ 2k, any embedding is isotopic to a closed braid, so the answer to Question <ref> is affirmative. Moreover, we can ask: given any k-link in ℝ^n, is it always isotopic to a closed braid? The following result gives a partial answer to that question.

Any closed oriented piecewise linear k-link in ℝ^n can be piecewise linearly isotoped to be a closed braid for 2n≥ 3k+2.

Organization. The paper is organized as follows: in Section 2 we define closed braids and positive links, and we show that the notions of closed braid and positive link are equivalent, thereby reducing the braiding problem to isotoping a link to be positive. In Section 3, we describe cellular moves, which will be used to replace a negative simplex with some positive simplices. In Sections 4 and 5, we study co-dimension two and higher co-dimension embeddings respectively. We will show that under the given hypotheses of Theorems <ref> and <ref>, any closed link can be isotoped to be positive, completing the proofs.

Acknowledgements. The author is grateful to John Etnyre for introducing the problem and many useful discussions. The author would like to thank James Conway for making helpful comments on earlier drafts of this paper.
§ CLOSED BRAIDS AND POSITIVE LINKS

We assume that all spaces are piecewise linear, all embeddings are piecewise linear and locally flat, all isotopies are piecewise linear and ambient, and all other maps (radial projections, coverings and branched coverings) are topological[Radial projections need not be piecewise linear, see Chapter 1 in <cit.>.]. By linear we will mean linear in the affine sense.

Let k and l be natural numbers with l≥ 2. Let f:M^k→ℝ^k+l be an embedding of a closed oriented k-manifold (possibly disconnected); we call the image a (co-dimension l) k-link. We will be mostly concerned with co-dimension two embeddings, i.e. l=2. We say that f is a co-dimension l braided embedding if f(M) is contained in a regular neighborhood N(S^k)=S^k× D^l of the standard sphere (the unit sphere in ℝ^k+1⊂ℝ^k+l) such that the embedding composed with the projection to the sphere, pr_1∘ f:M→ S^k, is an oriented branched covering map. Note that in case k=1, the map pr_1∘ f is just an oriented covering map, since the branch locus is empty; if moreover l=2, then f(M) is a closed braid (in the classical sense). We generalize this notion and call the image f(M) of a co-dimension l braided embedding f a co-dimension l closed braid. We will just say f is a braided embedding and f(M) is a closed braid if the co-dimension is clear from the context. By braiding we will mean isotoping a link to be a closed braid. We will identify M with f(M), and think of f as an inclusion. A simplex of M is understood to be in ℝ^k+l.

Let us choose (and fix) an (l-1)-dimensional subspace ℓ of ℝ^k+l, which will play the role of the braiding axis. Let π:ℝ^k+l→ℝ^k+1 denote the orthogonal projection to ℓ^⊥, and let O denote the origin of ℝ^k+1. We say that a k-simplex σ=[p_0,...,p_k] in ℝ^k+l is in general position with respect to ℓ if any of the following equivalent conditions hold:

* There is no hyperplane in ℝ^k+l which contains both σ and ℓ.
* There is no hyperplane in ℝ^k+1 which contains both π(σ) and O.
* The vectors π(p_0),...,π(p_k) are linearly independent.
* The determinant of [π(p_0)|π(p_1)|...|π(p_k)] is nonzero.

We can always assume each simplex is in general position (with respect to ℓ), because if not, then by slightly perturbing the vertices we can put it in general position.

General Position. We will need several general position arguments, and we will outline the proof of one of them. They all follow the same pattern: the degenerate case happens if and only if a continuous function vanishes. So, if the system was non-degenerate, then any slight perturbation does not change that fact, and if the system was degenerate, it would be possible to make it non-degenerate with a slight perturbation.

We say that a simplex [p_0,...,p_k] in ℝ^k+l in general position (with respect to ℓ) is positive if the simplex [O,π(p_0),...,π(p_k)] has the standard orientation of ℝ^k+1 (i.e. [π(p_0)|π(p_1)|...|π(p_k)] has positive determinant); otherwise we say it is negative. We say that an embedded link f:M^k→ℝ^k+l is positive (with respect to ℓ) if the image of each simplex is in general position with respect to ℓ and positive. Hereafter the axis ℓ will be in the background; it will be understood that a simplex being positive/negative/in general position means it is positive/negative/in general position with respect to ℓ.
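Condition (4) of the definition makes positivity easy to test numerically. The following sketch (our illustration, in Python with NumPy, not part of the paper) takes the k+1 vertices of a simplex together with the matrix of π, and returns the sign of det[π(p_0)|π(p_1)|...|π(p_k)]:

```python
import numpy as np

def simplex_sign(vertices, proj, tol=1e-12):
    """vertices: (k+1, k+l) array of p_0, ..., p_k; proj: (k+1, k+l) matrix
    of the orthogonal projection pi onto the complement of the axis ell.
    Returns +1 for a positive simplex, -1 for a negative one."""
    cols = proj @ np.asarray(vertices, dtype=float).T  # columns are pi(p_i)
    d = np.linalg.det(cols)
    if abs(d) < tol:
        raise ValueError("simplex is not in general position w.r.t. the axis")
    return 1 if d > 0 else -1
```

A link is then positive exactly when this test returns +1 on every simplex of the decomposition.

Let p:ℝ^k+1∖ O→ S^k be the radial projection.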
For any piecewise linear manifold M^k with a given cellular decomposition, let δ M denote the union of all (k-2)-faces of cells of M. The following theorem shows that to prove Theorem <ref>, it suffices to show we can isotope any link to be positive.

Let f:M^k→ℝ^k+l∖ℓ be an embedding; then the composition h := p∘π∘ f, given by M^k → ℝ^k+l∖ℓ → ℝ^k+1∖ O → S^k (the three maps being f, π and p respectively), is an oriented branched covering map if and only if all simplices of M are positive. In other words, the notions of closed braid and positive link are equivalent.

If M is a closed braid, then the restriction of h to any particular simplex σ must be orientation preserving, and it follows that all simplices of M must be positive. Let us now assume that M is a positive link. Let Σ:=h(δ M). We will show that h restricts to a covering map on M∖ h^-1(Σ). Now any point x of M∖ h^-1(Σ) is either an interior point of a k-simplex, or in the interior of a (k-1)-face shared by two k-simplices. We will show that in both these cases, we can find a compact neighbourhood N of x such that h|_N is injective.

Let x be in the interior of the k-simplex σ=[p_0,...,p_k]. Then for any y in σ we see that the ray passing through O and π(y) meets π(σ) exactly once, since π(p_0),...,π(p_k) form a basis for ℝ^k+1. Thus in this case h|_σ is injective. Let us now suppose that x is in the interior of the intersection of the adjacent simplices σ=[p_0,...,p_k] and τ=[p_1,q_0,p_2,...,p_k]. For σ and τ to be compatible, the induced orientations on the (k-1)-face ν=[p_1,...,p_k] they share must be opposite, i.e. the determinant of the matrix [π(p_0)|π(p_1)|...|π(p_k)] is positive, and the determinant of the matrix [π(q_0)|π(p_1)|...|π(p_k)] is negative. Suppose y is in σ and the ray passing through O and π(y) meets π(τ∖ν); then we see that for some non-negative scalars c_0,...,c_k, d_1,...,d_k and positive scalars d_0,λ we have

c_0π(p_0)+c_1π(p_1)+...+c_kπ(p_k)=λ(d_0π(q_0)+d_1π(p_1)+...+d_kπ(p_k))

and so we have c_0π(p_0)-λ d_0π(q_0)∈ Span{π(p_1),...,π(p_k)}, and hence

c_0 det[π(p_0)|π(p_1)|...|π(p_k)]=λ d_0 det[π(q_0)|π(p_1)|...|π(p_k)],

which is a contradiction to our assumption that both σ and τ are positive. Thus in this case h|_σ∪τ is injective. Thus in either case, for a compact neighborhood N of x, h|_N is a continuous bijection between compact Hausdorff spaces and hence a homeomorphism onto its image. Thus h|_M∖ h^-1(Σ):M∖ h^-1(Σ)→ S^k∖Σ is a local homeomorphism, and in fact a covering map, since for any y∈ S^k∖Σ the fiber h^-1(y) is compact and discrete. Also we can check that h is orientation preserving. Thus h is an oriented branched covering, as required.

The map h above is only continuous since the radial projection p is so. However, we can compose π∘ f with the pseudo-radial projection[Pseudo-radial projection is the linear extension of the restriction of the radial projection to the vertices of the domain, see Chapter 2 in <cit.>.] instead of the radial projection p, and then the resulting composition will be a piecewise linear branched cover.
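As an illustration of the remark, here is a sketch of the pseudo-radial projection on a single simplex (again our NumPy rendering, not the paper's): each vertex is first sent to its radial image on the sphere, and the map is then extended linearly via barycentric coordinates.

```python
import numpy as np

def pseudo_radial(vertices, bary):
    """Linear extension of the radial projection restricted to the vertices:
    a point with barycentric coordinates bary in the simplex [v_0, ..., v_k]
    maps to sum_i bary[i] * v_i / |v_i|.  (The images of the simplices then
    assemble into the boundary of a polytope, a piecewise linear copy of
    S^k, rather than landing on the round sphere itself.)"""
    v = np.asarray(vertices, dtype=float)
    unit = v / np.linalg.norm(v, axis=1, keepdims=True)
    return np.asarray(bary, dtype=float) @ unit
```

Let us choose (and fix) a unit vector v∈ℓ⊂ℝ^k+l, let ℓ_v denote the line ℝv, and let π_v:ℝ^k+l→ℝ^k+l-1 denote the orthogonal projection to ℓ_v^⊥. By the v-coordinate of a point p∈ℝ^k+l we will mean the scalar projection of p onto v.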
We say that a point p on a k-simplex σ of M^k in ℝ^k+l is an overcrossing (respectively undercrossing) if there is another point q∈ M with π(p)=π(q) and the difference of the v-coordinate of p and the v-coordinate of q is positive (respectively negative).

§ CELLULAR MOVES

In this section we describe cellular moves, which we will use repeatedly in the next section to isotope any link to be positive. Suppose we have an embedded oriented (k+1)-disk D in ℝ^k+l such that D meets M^k in a k-disk σ in ∂ D which is a union of simplices of both M and ∂ D, and the induced orientations coming from M and ∂ D are opposite. Let M' be the manifold obtained from M by replacing σ with ∂ D∖σ (with the orientation on the new simplices coming from ∂ D); Proposition 4.15 of <cit.> shows that M and M' are ambient isotopic. We call such a replacement a cellular move along D. Hereafter, we will keep calling the manifold M even after applying a cellular move.

We want the new manifold to be oriented, and so we need the orientations (induced by σ) on the co-dimension one faces of σ to agree with the induced orientations coming from the new simplices. This forces the orientation on the new simplices, which is why we require the orientations of the simplices common to M and ∂ D to be as above.

We will use cellular moves for constructing all our isotopies; they will be of two types:

* Moving the vertices of M slightly for general position arguments.
* Replacing a negative simplex with a union of positive simplices.

For the first type of isotopy, we note that for any vertex x of M^k, the union of all k-simplices of M which contain x is a k-cell, and slightly moving x is a cellular move. We note that after moving x slightly, a simplex will remain positive (respectively negative) if it was initially positive (respectively negative). We will say more about the second type of isotopy in Remark <ref>, after we make a general observation.

The join of two subsets A and B of ℝ^n is defined to be A*B:={λ a+(1-λ) b : a∈ A, b∈ B, λ∈[0,1]}.

Let σ=[p_0,...,p_k] be a k-simplex of M^k in general position in ℝ^k+l, and suppose we can find a point q∈ℝ^k+l such that D=-(q*σ) (the minus sign indicates that D is oppositely oriented compared to q*σ) meets M only in σ, and π(D) contains O. Then the result of the cellular move along D is that σ is replaced by the other simplices of ∂ D, and all the new simplices are oppositely oriented compared to σ.
We have obtained an alternate way to look at another classical theorem of Alexander (see <cit.>), which states that every closed oriented piecewise linear k-manifold is a branched cover over S^k. Any such manifold M embeds in ℝ^N for some N>k, and as we saw above, for a generic orthogonal projection to ℝ^k+1, all the simplices will be non-degenerate. For any negative simplex σ of M in ℝ^k+1, we can choose apoint q∈ℝ^k+1 such that q*σ contains O in its interior. Replacing[Right now, we are just constructing a new piecewiselinear map, and not saying that this operation is an isotopy. However if N is sufficiently large, by Theorem <ref> we can carry out the entire construction by an isotopy.] σ with the other simplices of -q*σ gives us a new piecewise linear map from Mto ℝ^k+1, with one fewer negative simplex. Thus by induction on the number of negative simplices, we can always construct a map from Mto ℝ^k+1 with all simplices being positive, and by Remark <ref>, we get a piecewise linear branched cover of M over S^k by composing with the pseudo-radial projection.It seems likely that this approach will produce a branched cover with fewer number of sheets than Alexander's original construction. The following lemma shows that it is always possible to find embedded disks to do cellular moves if the crossings are only of one type.Suppose f:M→ℝ^k+l is a embedded closed oriented link, and let σ be a k-simplexof M in general position in ℝ^k+l and does not have both overcrossings and undercrossings. Then there is a point q∈ℝ^k+2 such that O∈π(D) and D∩ M=σ, where D=-(q*σ).Let us assume that all crossings are overcrossings (respectively undercrossings).Choose a point q∈ℝ^k+l such that O∈π(D) and π(q)∉π(M). Note that changing only the v-coordinate of q does notchange the projection π_v(D), and we will change the v-coordinate of q if necessary. Let x∈ M∖σ be such that there is a point y_x∈ D whose image under π_v is the same (since π_v|_D is injective, for any given x, there can be at most one y_x) . If we can ensure that the difference of v-coordinate of x and the v-coordinate of of y_x is negative (respectively positive), then we would have D∩ M=σ. We can in fact reduce to checking this condition for finitely many such points x, as follows: let τ be a simplex of M, then π_v(τ)∩π_v(D) will be a bounded polytope, henceby Proposition 2.7 of <cit.>, the convex hull of finitely many points. So as long as we ensure that the v-coordinates of all points which map to these extreme points of π_v(τ)∩π_v(D) satisfy the required inequality, we have that D∩τ=σ∩τ.Now, if this holds for all simplices τ of Mthen we would have D∩ M=σ as required. Since M is compact, thereare finitely many simplices τ, and thus we only have to check the inequality for finitely many points. Now given a point x∈ M∖σ with π_v(x)∈π_v(D)∖π_v(σ)[Note that if π_v(x)∈π_v(σ),then we already know if the crossing at π_v(x) is an overcrossing or undercrossing, and this is independentof the v-coordinate of q. This is why we require the condition that σ does not have both overcrossings and undercrossings in the statement ofthe lemma.], let z be the unique point in σ whose projectionunder π_v is the point of intersection of π_v(σ) and the line passing through π_v(x) and π_v(q). Then we will have that x is below (respectively above) D as long as q is above (respectively below) the point where the line π_v(q)+ℓ_v (i.e. the translate of ℓ_v which projects to π_v(q)) meets the line joining x and z. 
Thus we see that each such point x gives riseto a lower (respectively upper) bound of v-coordinate of q, and we can simultaneously satisfy finitely many such bounds. The result follows. Sometimes we will not be able to find a q as in Lemma  <ref>, but we may be able to subdivide σ into cells so that the crossings in each subcell is only of one type and then we have similar results as Lemmas <ref> and <ref>. Suppose f:M→ℝ^k+2 is a embedded closed oriented link, and let τ bea k-dimensional cell contained in a negative k-simplex σ of M in ℝ^k+2. If τ does not have both overcrossings and undercrossings, then there is a point q∈ℝ^k+2 such that D=-(q*τ) meets M only in τ, and π(D) contains O. Moreover, the result of cellular move along D is that τ is replaced by a union of positive simplices. § CO-DIMENSION TWO BRAIDINGIn the first subsection, we will use the tools developed so far to complete the proof of Theorem <ref>. We ask some questions about co-dimension two braidings in other cases in the second subsection. We observe that since we have co-dimension l=2, then ℓ=ℓ_vand π=π_v.§.§ Isotoping co-dimension two link to be positive To prove our main result it remains to show the following. For 1≤ k≤ 3 , each embedded closed oriented link f:M^k→ℝ^k+2 is isotopic to a positive link.Strategyof proof. We will use induction on the number of “negative k-simplices”. If all crossings are of one type then we can use cellular moves to replace (isotope) a negative k-simplex with a number of positive k-simplices. Sometimes we will have to break up a negative k-simplex into smaller k-simplices (temporarily increasing the number of negative k-simplices) and show that we can use cellular moves to replace each of the subsimplices, thereby reducing the number of negative k-simplices.Notation.Let S be a subset of M. We say that a point x∈ S is a double point of S if |π|_S^-1(π(x))|≥ 2, a triple point of S if |π|_S^-1(π(x))|≥ 3, a quadruple point if |π|_S^-1(π(x))|≥ 4, and a quintuple point of S if |π|_S^-1(π(x))|≥ 5. We call the collection ofall double (respectively triple) points of a subset S of M the double (respectively triple) point set of S and denote it by𝒟_S (respectively 𝒯_S), and we call their closure in S the double (respectively triple) point complex of S and denote it by 𝒟_S (respectively 𝒯_S). If S is not mentioned explicitly, it is understood that S is M. For any k-simplex σ of M, let 𝒯_σ denote the closure in σ of σ∩𝒯_ℳ.Figures. A note on the figures; in the cases k=1,2, when we are illustrating special cases of crossings on negative k-simplices we will frequently show both an immersed picture, where we will show all the simplices crossing, and a preimage picture, where we indicate all the crossing points in the negative k-simplex. In the case k=3, we can only draw preimage pictures. We argue the 3 cases for k separately.Case: k=1 (Alexander, <cit.>). The proof is by induction on the number of negative 1-simplices of the triangulation of M.General Position. We can ensure the all the crossings are isolated double points, and there are no triple points. Special Case. If a negative 1-simplex does not have both overcrossings and undercrossings, then we can use Lemma  <ref>to replace the 1-simplex with positive 1-simplices. General Case. We can break up a negative 1-simplex into smaller 1-simplices such that no part has both overcrossings and undercrossings, and we can apply Lemma  <ref> to each of the subsimplices. 
We note that applying such a cellular move to one such subsimplex does not introduce any new crossings on the other subsimplices of our original negative 1-simplex. Thus we can reduce the number of negative 1-simplices, and we are done by induction for the case k=1.Digression. For k=2,3 we have to deal with the fact that if we break up a k-simplex and apply cellular move to the various parts, the result will not be triangulatedany more. Of course we could subdivide the adjacent k-simplices so that the result is in fact triangulated, but this may increase the number of negative k-simplices, which we do not want to happen. We will need to modify the induction hypothesis in cases k=2,3. For this reason, following Kamada (see Chapter 26 in <cit.>), we will introduce the notion of a division of a piecewise linear manifold. A division for a link M^k⊆ℝ^k+2 is a collection of k-simplices {σ_1,...,σ_l} whose union is M,and such that for distinct i and j, if σ_i∩σ_j is nonempty, it is contained in faces of both σ_i and σ_j, and is a face of σ_ior σ_j. We say that the σ_i's are k-simplices of the division for M, and the notion of a positive/negative k-simplex is defined as before. We say that a k-simplex σ is inner (respectively outer) if its intersection with any other k-simplex τ is a face of σ (respectively τ). If σ is inner, then we can break σ up into smaller cells and apply cellular moves on each subcell, and the result would be a division. We can only move a vertex x of a k-simplex σ of M slightly without changing the number of k-simplices of the division if x is a vertex of every k-simplex τ that contains x. In particular if σ is outer, we can slightly move any of its vertices slightly without changing the number of k-simplices of the division.A division is a triangulation if and only if all k-simplices are both inner and outer. We say a division is good if allthe negative k-simplices are outer. Lemmas  <ref> and <ref> still hold if we have a division of M.Notation. For any k-simplex σ of M^k, let X(σ) denote the union of all the k-simplices which are not adjacent to or equal to σ, let Y(σ) denote the complement of σ in M, let Y(σ) denote the closure of Y(σ) in M, and let Z(σ) denotethe union of all the k-simplices of M whose intersection with σ in has dimension at most k-2.We return to the proof of Theorem <ref>. Case: k=2. The proof is by induction on the number of negative 2-simplices of the division of M.General Position for the initial triangulation.We may assume that all the crossings are double point lines and isolated triple points in the interior of respective 2-simplices, and there are no quadruple points.We will make modifications to M in three steps, at each step assuming results from the previous steps hold. In the first step we can make sure that all 2-simplices intersect “nicely” pairwise, and in the laststep we will make sure that all triples of 2-simplices intersect “nicely”, after which we get the required general position statement. Step 2 is a special case of Step 3, where we make sure all non-adjacent triples of 2-simplices intersect “nicely”. By slightly moving each vertex of (the triangulation of) M, we may assumethat: * The projection of a vertex x of (the initial triangulation of) M is not contained in a hyperplane generated (=affinely spanned) by the projection a 2-simplex τ of M, if x∉τ. 
If y_1,y_2,y_3 are vertices of M such that π(x) is contained in the hyperplane H defined by π(y_1),π(y_2),π(y_2)then that means α(π(x))=0 where α is the dual (with respect to the standard inner product) linear functional defined by choosing an unit normal to H in π(ℝ^4)=ℝ^3. By a slight perturbation ofx one can ensure that α(π(x))≠ 0, and moreover this is generic, i.e. a small perturbation of x would not change the non-vanishing of α(π(x)). We can keep on perturbing the vertices slightly until the above condition holds. This ensures that the projection of two 2-simplices can only intersect in a line segment, that the projection of two 2-simpliceswhich only share a vertex cannotintersect along any edge of either 2-simplex, that the projection of two 2-simpliceswhichshare an edge do not intersect elsewhere.Consequently, 𝒟_M and π(𝒟_M) are graphs. * For any 2-simplex σ of M, we have that π(𝒟_X(σ)) meets π(σ) and π(∂σ) transversely. We will only perturb the vertices of σ, and so π(𝒟_X(σ)) will stay fixed.Let [p_1,p_2] be an edge of π(𝒟_X(σ)). If we make sure that the points p_1,p_2 are not in the hyperplane generatedby π(σ), and the projections of vertices q_1,q_2 and q_3 of σ are maximally affinely independent with p_1,p_2 (i.e. there is no hyperplane in ℝ^3 containing σ and [p_1,p_2], or equivalently p_2-p_1, π(q_2)-π(q_1) and π(q_3)-π(q_1)are linearly independent, or equivalently the determinant of [p_2-p_1| π(q_2)-π(q_1)|π(q_3)-π(q_1)] is nonzero),then [p,q] and σ have the required property. If we move the vertices of σslightly, this property still remains true, hence we can make sure that the required property holds for each edge of π(𝒟_X(σ)).Slightly perturbing the vertices of M would not change this, and hence we can make sure the property holds for all 2-simplices of M. * For any 2-simplex σ of M, we have that π(𝒟_Y(σ)) meets π(σ) and π(∂σ)transversely, except at projection of points of ∂σ which are not triple points. Given any vertex x of M, we can make sure that the set of normal vectors (based at π(x)) in ℝ^3 to the hyperplane generated by projections of all the 2-simplices that have x has a vertex are maximally affinely independent (i.e. if we think of π(x) as the origin, any three of these normal vectors are linearly independent) by perturbing other (i.e. except x) vertices of such 2-simplices. This condition ensures that for any three 2-simplices that share the vertex x, their projection can intersect in at most one point.We can make sure the above condition holds for all vertices x of M. For any 2-simplexσ of M, we can make sure that there is no triple point in any edge of σ by perturbing the vertices of σ slightly, while fixing the hyperplane generated by the projection of σ. Thus, the projection of any triple of 2-simplices intersect at at most one point, and triple points occur in the interior of their respective simplices. General Position for a division. When applying a cellular move along D=-(q*ν), we may assume that the q and ν are chosen so that π(𝒟_M∖ν) meetsπ(∂ D) and π(δ D) transversely.Consequently, there are no quadruple points, and moreover all triple points are isolated and lie in the interior of their respective 2-simplices.Special Cases. 
Let us look at some special cases (which will contain previous special cases) of crossings in a negative 2-simplex: * Suppose σ is a negative 2-simplex such that does not have both overcrossings and undercrossings, see Figures  <ref> and  <ref>.We can replace σ with a union of positive 2-simplices by Lemma  <ref>.* Suppose σ is a negative 2-simplex such that there are no triple points, see Figure  <ref>. In this case we can break σ up into smaller 2-simplices which are inner, and moreover each of the subsimplices does not have both overcrossings and undercrossings, and we can use Lemma  <ref> to replace each of them with positive 2-simplices. So we can reduce the number of negative 2-simplices by one. * Suppose σ is a negative 2-simplex with exactly one triple point p∈σ, see Figure  <ref>. We know that the line segment [O,π (p)] can meet the projection of each of the three 2-simplices giving rise to the triple point only in π(p) (since all the 2-simplices are in general position), and we choose a point q∈ℝ^4 (the v-coordinate will be changed later if necessary) such that O is in the line segment (π (q),π (p)). As in the proof of Lemma  <ref>, by choosing the v-coordinate of q to be sufficiently positive (or negative), we have [q,p]∩ M= {p}.By compactness we have d([q,p],Y(σ))>0, and hence we can choose a small 2-simplex [p_0,p_1,p_2] in σ containing p such that [q,p_0,p_1,p_2] meets M only at [p_0,p_1,p_2], and we can use cellular move to replace [p_0,p_1,p_2] by positive 2-simplices, see Figure  <ref>. The rest of σ can be broken up into smaller inner 2-simplices each of which does not have both overcrossings and undercrossings, and by Lemma  <ref>, we can replace them with positive 2-simplices and hence we are done.General Case. Suppose σ is any negative 2-simplex, see Figure  <ref>. We break σ up into smaller inner 2-simplices each of which has at most one interior triple point and then use the above special cases. Thus we can reduce the number of negative simplices, and we are done by induction for the case k=2. Case: k=3. It suffices to prove the following claim, since the initial triangulation is a good division.Every embedded closed oriented link M^3 in ℝ^5 with a good division is isotopic to a positive link. The proof of the claim is by induction on the number of negative 3-simplices on the good division. After we prove the claim, it follows that the result holds for any division, because we can subdivide any division further so that it becomes a good division. General Position. We may assume that the double point complex is a 2-dimensional CW-complex, the triple point complex is a graph, all quadruple points are isolated and in the interior of respective 3-simplices, and there are no quintiple points. Moreover we can also assume that alltriple points are disjoint from 1-faces of 3-simplices, and for any vertex x of 𝒯_M, the set π|_M^-1(π(x)) contains exactly one pointon the 2-faces of 3-simplices of M. This can be proved in a similar way we proved the general position statement in the case k=2, and we outline an argument below.By slightly moving each vertex of (the triangulation of) M, we may assume that for the initial triangulation we have: * The projection of a vertex x of M is not contained in a hyperplane generated by the projection a 3-simplex τ of M, if x∉τ. Consequently,𝒟_M and π(𝒟_M) are 2-dimensional CW-complexes. 
* For any 3-simplex σ of M, π(𝒟_Y(σ)) and the edges of π(𝒟_Y(σ)) meet π(σ) and π(∂σ) transversely, except at projections of points of ∂σ which are not triple points. Hence, 𝒯_M and π(𝒯_M) are graphs, and for any vertex x of 𝒯_M, the set π|_M^-1(π(x)) contains exactly one point on the 2-faces of 3-simplices of M. * For any 3-simplex σ of M, π(𝒯_Y(σ)) meets π(σ) and π(∂σ) transversely, except at projections of points of ∂σ which are not quadruple points. Now we have the required general position statement for the initial triangulation. At each step of applying a cellular move along D=-(q*ν), we may assume that ν is chosen so that there are no vertices of 𝒯_M in the 2-faces of ν (except in the case that such a point is in 𝒯_M∖𝒯_M), and q is chosen so that π(𝒟_M∖ν) and π(𝒯_M∖ν) meet π(∂ D) and π(δ D) transversely. We will then have the required general position statement.

Special Cases. We will look at some special cases (which typically will contain previous special cases) of crossings in a negative 3-simplex: * Suppose σ is a negative 3-simplex such that all crossings are overcrossings (respectively undercrossings). We can replace σ with a union of positive 3-simplices by Lemma <ref>. * Suppose σ is a negative 3-simplex such that there are no triple points. In this case we can break σ up into smaller inner 3-simplices such that the crossings are only overcrossings or undercrossings (but not both), and we can use Lemma <ref> to replace each of them with positive 3-simplices. So we can reduce the number of negative 3-simplices by one. * Suppose σ is a negative 3-simplex with exactly one triple point line segment [p_0, p_1] (with p_0 and p_1 not in a vertex or edge of σ) coming from 3-simplices τ (above σ) and η (below σ) [Note that if both τ and η are above (respectively below) σ, then the crossings in σ corresponding to τ and η are both undercrossings (respectively overcrossings), and we are in special case 2.] which are not adjacent to σ, and such that π(τ)∩π(η) contains the projection of [p_0, p_1] in its interior, and there are no quadruple points in σ, see Figure <ref>. Since the 3-simplices τ and η are in general position, the 2-simplex [O,π(p_0),π(p_1)] meets π(τ) and π(η) only in [π(p_0),π(p_1)]. We choose a point q∈ℝ^5 (the v-coordinate will be changed later if necessary) such that O is in the interior of [π(q),π(p_0),π(p_1)], and consequently the 2-simplex [π(q),π(p_0),π(p_1)] meets π(τ) and π(η) only in [π(p_0),π(p_1)]. The hypotheses ensure that in [π(q),π(p_0),π(p_1)] we do not see other (i.e. except [π(p_0),π(p_1)]) lines of over- or undercrossings starting from the vertices π(p_0),π(p_1). Just like in the proof of Lemma <ref>, we may choose the v-coordinate of q to be sufficiently positive (or negative) such that: * If ρ is a 3-simplex whose intersection with σ is 2-dimensional, then the cone D meets ρ only in σ∩ρ. * [q,p_0,p_1] does not intersect Z(σ). By compactness, the distance between [q,p_0,p_1] and Z(σ) is positive, and hence we can choose a cell ν in σ containing [p_0,p_1] such that q*ν meets M only in ν. Using Remark <ref>, we can use a cellular move to replace ν with a finite union of positive 3-simplices, and can break the rest of σ into smaller inner 3-simplices and use Lemma <ref> to replace them with positive 3-simplices. So we can reduce the number of negative 3-simplices by one. * Suppose σ is a negative 3-simplex such that 𝒯_σ is nonempty but does not meet σ in a vertex or an edge, see Figure <ref>.
There will be finitely many points p_1,...,p_m in the interior of σ which are either a quadruple point or a vertex of the triple point complex 𝒯_M. We can choose points q_1,...,q_m such that O∈(π(q_i),π(p_i)) and each of these line segments [q_i,p_i] are mutually disjoint. Like before, we can find 3-simplices P_i containing p_i in σ such that q_i*P_i are mutually disjoint, and then we can use cellular moves to replace each P_i with a union of positive 3-simplices. The rest of σ can be broken up into smaller inner 3-simplices such that we are in the previous special cases. The hypothesis and our general position statement ensure that for all subsimplices which contain triple point line segments, we are in the previous special case. As we have seen, we can replace each of these 3-simplices by positive 3-simplices, and hence we can reduce the number of negative 3-simplices by one. The special cases of crossings in negative 3-simplices we considered so far are analogous to the ones we saw in the case k=2. In the next two special cases we will consider the “new” type of crossings, when 𝒯_σ meets a vertex or an edge of σ, and we will need a new idea. * Suppose σ is a negative 3-simplex, with the only crossings coming from 3-simplices τ (above) and η (below) which share a vertex p_0 with σ. Moreover, there is a triple point semi-open line segment (p_0,p_1] in σ, and π(τ)∩π(η) contains π((p_0,p_1]) in its interior, see Figure <ref>. Let σ_1, σ_2 be the subcells of σ such that the hyperplane generated by π(τ) breaks π(σ) into the two parts π(σ_1) and π(σ_2), and let us assume that π(σ_1) is in the same half-space as O. As in the proof of Lemma <ref>, we can choose a point q (by making the v-coordinate sufficiently positive) such that the cone q*σ_1 meets M only in σ_1, and O is in the interior of π(q*σ_1). We use a cellular move to replace σ_1 by the other faces of this cone. If τ is negative, it must be an outer 3-simplex (since we assumed that the division is good), and hence we can move some of the other (except p_0) vertices of τ a little (so that the projections of the vertices lie in the half-space, determined by the old π(τ), containing O) so that σ_2 does not have any triple point. If τ is positive, we can apply a cellular move on a smaller subsimplex (so that all the new 3-simplices are positive) of τ containing the triple points, so that σ_2 does not have any triple point. By using the above special cases, we see that we can replace σ_2 with a union of positive 3-simplices, and we have reduced the number of negative 3-simplices by one. A similar argument works in the special case: * Suppose σ is a negative 3-simplex, with the only crossings coming from 3-simplices τ (above) and η (below), where only one of τ or η shares an edge with σ. Moreover, there is a triple point semi-open line segment (p_0,p_1] in σ, where p_0 lies in the common edge, and π(τ)∩π(η) contains π((p_0,p_1]) in its interior, see Figure <ref>.

General Case. Suppose σ is any negative 3-simplex. Let us first consider all the points where 𝒯_σ meets a vertex or an edge of σ; by special cases 5 and 6, we can find small inner 3-simplices containing these points where we can apply cellular moves and replace them with positive 3-simplices. We can break the rest of σ up into small inner 3-simplices so that we are in special case 4, and as we have seen, we can replace each of the subsimplices with positive 3-simplices, thereby reducing the number of negative 3-simplices by one.
This completes the proof of Theorem <ref> for the case k=3.

§.§ Questions For k>3, if we have an embedded closed oriented k-link in ℝ^k+2, and we can make sure (at every step of applying cellular moves) that the triple points of M do not intersect δ M, then we can use the above method to isotope the link to be positive. However, we cannot always guarantee such a condition on the triple point set, and our approach seems to fail if k>3 (i.e. ambient dimension n>5). Can Theorem <ref> be extended to higher ambient dimensions? If not, can a counterexample/obstruction be found? We can restrict our attention to closed braids where the branched set in M is a (k-2)-dimensional submanifold. In our proof, we could only show that every closed oriented 3-link in ℝ^5 can be isotoped to be a closed braid where the branched set is a graph. Can every closed oriented 3-link in ℝ^5 be isotoped to be a closed braid where the branched set is a link? More generally, for which n≥ 5 is every closed oriented (n-2)-link in ℝ^n isotopic to a closed braid with the branched set being a submanifold? We can also ask similar questions in the smooth category. For which n is every closed oriented smooth (n-2)-link in ℝ^n smoothly isotopic to a closed braid (with the branched set being a submanifold)?

§ HIGHER CO-DIMENSION BRAIDING In the first subsection, we will use the tools developed so far to complete the proof of Theorem <ref>. We end with some questions about higher co-dimension braidings in the second subsection.

§.§ Isotoping higher co-dimension links to be positive To prove Theorem <ref> it remains to show the following. Any closed oriented piecewise linear k-link f:M→ℝ^k+l can be piecewise linearly isotoped to be a closed braid for 2l≥ k+2. In case k=1 or 2, l=2 satisfies the hypothesis of the above theorem, and we know in this case the result follows from Theorem <ref>. In the rest of the section, we will assume that l≥ 3. We will also assume that l≤ k+1, since otherwise[The proof given still holds if l>k+1, one just has to interpret statements (like negative dimensional space) correctly.] it is easy to see that the theorem holds, as there will be no crossings in the projection under π_v. In fact one can show that if l>k+1, then any two embeddings of M^k in ℝ^k+l are isotopic (see <cit.>). The proof will be similar to the proof of the case k=2 of Theorem <ref>, and we will not discuss special cases of negative simplices this time. The proof will be by induction on the number of negative simplices in the division of M. Let us consider the projection under π_v:ℝ^k+l→ℝ^k+l-1, and see what we can say about the crossings under the given hypothesis 2l≥ k+2. For any two simplices σ and τ, we may assume that π_v(σ) and π_v(τ) intersect transversely in π_v(ℝ^k+l)=ℝ^k+l-1, and in that case the intersection of the affine subspaces generated by them has dimension 2k-(k+l-1)=k-l+1. We may assume, for any triple of simplices τ, σ and ν, that π_v(σ∩τ) intersects π_v(σ∩ν) transversely in π_v(σ), and in that case the intersection has dimension 2(k-l+1)-k=k-2l+2≤ 0. Consequently, all triple points are isolated and can be assumed to be in the interior of their respective simplices.

General Position for the initial triangulation. We may assume that the double point complex is a (k-l+1)-dimensional CW-complex, all triple points are isolated and in the interior of respective k-simplices, and there are no quadruple points.

General Position for a division.
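The dimension count above can be checked mechanically. The following small script is illustrative only (not part of the argument); it evaluates the two transverse-intersection dimensions and verifies that the hypothesis 2l ≥ k+2 forces the triple-point set to have dimension at most 0:

```python
def crossing_dimensions(k, l):
    """In pi_v(R^(k+l)) = R^(k+l-1), two projected k-simplices in
    general position meet in dimension 2k-(k+l-1) = k-l+1 (double
    points); two such double-point sets meet transversely inside a
    k-simplex in dimension 2(k-l+1)-k = k-2l+2 (triple points)."""
    double = 2 * k - (k + l - 1)
    triple = 2 * double - k
    return double, triple

for k in range(2, 10):
    for l in range(2, k + 2):
        double, triple = crossing_dimensions(k, l)
        if 2 * l >= k + 2:
            assert triple <= 0   # triple points are isolated (or empty)
```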
When applying a cellular move along D=-(q*ν), we may assume that q and ν are chosen so that π_v(𝒟_M∖ν) meets π_v(∂ D) and π_v(δ D) transversely. Consequently, there are no quadruple points, and moreover all triple points are isolated and lie in the interior of their respective k-simplices. Now given any negative simplex σ, it will contain finitely many triple points p_1,...,p_m in its interior. We can choose points q_1,...,q_m such that O∈(π_v(q_i),π_v(p_i)) and each of these line segments [q_i,p_i] are mutually disjoint, and do not intersect the rest of M. Using Remark <ref> we can find k-simplices P_i containing p_i in σ such that q_i*P_i are mutually disjoint, and then we can use cellular moves to replace each P_i by a union of positive k-simplices. The rest of σ can be broken up into smaller inner k-simplices such that there are only crossings of one type, and then by Lemmas <ref> and <ref>, we can replace σ with a union of positive simplices. We have reduced the number of negative simplices by one, and hence we are done by induction.

§.§ Questions We have shown, in the piecewise linear category, that the answer to Question <ref> is affirmative if n≥ 2k for k≥ 2, and also if n=k+2 for 1≤ k≤ 3. (in both piecewise linear and smooth categories) Given a natural number k, is there a natural number n such that the answer to Question <ref> is affirmative, and if so, what is the smallest such n? We can also focus on a given manifold and ask: Given a smooth closed oriented k-manifold M, is there a smooth branched cover g:M→ S^k over the sphere S^k? We know that the answer is yes to the corresponding question in the piecewise linear category, due to Alexander <cit.> (see also Remark <ref>). (in both piecewise linear and smooth categories) Given a closed oriented k-manifold M which is a branched cover over the k-sphere, what is the minimum[In the smooth category, there will always be a braided embedding of M in some ℝ^N if we know there is a smooth branched cover g:M→ S^k. We know there is a smooth embedding i:M→ℝ^2k. The map f=g× i:M→ N(S^k)⊆ℝ^3k is a braided embedding.] n such that there is a braided embedding of M in ℝ^n? Is there an M such that this n is larger than the dimension of the smallest Euclidean space into which M embeds? It is very likely that in some cases the condition 2l≥ k+2 we had in Theorem <ref> can be weakened and still any embedding can be braided. (in both piecewise linear and smooth categories) Given a closed oriented k-manifold M, what is the range of l≥ 2 such that M embeds in ℝ^k+l, and every embedding is isotopic to a closed braid? If l is not in that range, can we find a counterexample/obstruction?

[A0] J.W. Alexander. Note on Riemann spaces. Bull. Amer. Math. Soc., 26:370–372, 1920.
[A] J.W. Alexander. A lemma on systems of knotted curves. Proc. Nat. Acad. Sci. USA, 9:93–95, 1923.
[B] J.S. Birman. Braids, Links, and Mapping Class Groups. Annals of Mathematics Studies, Princeton University Press, 1974.
[CK] J.S. Carter and S. Kamada. How to Fold a Manifold. ArXiv e-prints, January 2013.
[EF] J.B. Etnyre and R. Furukawa. Braided Embeddings of Contact 3-manifolds in the standard Contact 5-sphere. ArXiv e-prints, October 2015.
[HLM] H.M. Hilden, M.T. Lozano and J.M. Montesinos. All three-manifolds are pull-backs of a branched covering S^3 to S^3. Trans. Amer. Math. Soc., 279(2):729–735, 1983.
[J] V.F.R. Jones. Hecke algebra representation of braid groups and link polynomials. Annals of Mathematics, 126:335–388, 1987.
[K1] S. Kamada. A characterization of groups of closed orientable surfaces in 4-space. Topology, 33:113–122, 1994.
[K2] S. Kamada. Braid and Knot Theory in Dimension Four. American Mathematical Society, 2002.
[RS] C.P. Rourke and B.J. Sanderson. Introduction to Piecewise Linear Topology. Springer-Verlag, 1972.
[R] L. Rudolph. Braided surfaces and Seifert ribbons for closed braids. Comment. Math. Helv., 58(1):1–37, 1983.
[W] H. Whitney. The Self-Intersections of a Smooth n-Manifold in 2n-Space. Annals of Mathematics, 45(2):220–246, 1944.
1Faculty of Sustainability Studies, Hosei University, Fujimi, Chiyoda-ku, Tokyo 102-8160, Japan matsu@hosei.ac.jp 2Department of Earth and Planetary Sciences, Kyushu University, Fukuoka 812-8581, Japan 3Department of Physics, Nagoya University, Chikusa-ku, Nagoya 464-8602, Japan

We investigate the formation of circumstellar disks and outflows subsequent to the collapse of molecular cloud cores with the magnetic field and turbulence. Numerical simulations are performed by using an adaptive mesh refinement to follow the evolution up to ∼ 1000 yr after the formation of a protostar. In the simulations, circumstellar disks are formed around the protostars; those in magnetized models are considerably smaller than those in nonmagnetized models, but their size increases with time. Models with a stronger magnetic field tend to produce smaller disks. During evolution in the magnetized models, the mass ratio of a disk to a protostar is approximately constant at ∼ 1-10%. The circumstellar disks are aligned according to their angular momentum, and the outflows accelerate along the magnetic field on the 10-100 au scale; this produces a disk that is misaligned with the outflow. The outflows are classified into two types: a magneto-centrifugal wind and a spiral flow. In the latter, because of the geometry, the axis of rotation is misaligned with the magnetic field. The magnetic field has an internal structure in the cloud cores, which also causes misalignment between the outflows and the magnetic field on the scale of the cloud core. The distribution of the angular momentum vectors in a core also has a non-monotonic internal structure. This should create a time-dependent accretion of angular momenta onto the circumstellar disk. Therefore, the circumstellar disks are expected to change their orientations as well as their sizes during their long-term evolution.

§ INTRODUCTION The magnetic field and turbulence play important roles in the early phase of star formation. Observations of the magnetic field have indicated that molecular clouds and molecular cloud cores have a large amount of magnetic energy, which is approximately equal to the kinetic energy <cit.>. The magnetic field therefore has the potential to control the gravitational collapse of cloud cores. Molecular clouds exhibit broad molecular lines, which are interpreted as supersonic turbulence <cit.>. The turbulence seems to obey a scaling relation such that a smaller scale has a smaller velocity dispersion <cit.>. For high-density cloud cores, weak turbulence is suggested by the narrow molecular lines <cit.>. As shown by <cit.>, such turbulence reproduces the observed rotational properties of cloud cores. This indicates that the turbulence contributes angular momentum to the cloud cores, and it is the origin of the rotation of circumstellar disks and protostars. The magnetic field extracts angular momentum from the circumstellar disks and the infalling envelopes, due to magnetic braking and outflows. As a consequence, the protostars accrete gas from the disks. In the past decade, the existence of the so-called “magnetic braking catastrophe” has been debated <cit.>; in this phenomenon, the magnetic field prevents the formation of circumstellar disks around the protostars. Axisymmetric models have been investigated intensively, and they show that circumstellar disks are formed around the protostar. The size of these disks increases with time, but they are considerably smaller than nonmagnetized disks <cit.>.
Models in which the magnetic field and rotation axes are misaligned have also been investigated <cit.>. Few studies have investigated the role of turbulence in disk formation <cit.>. <cit.> performed numerical simulations of the collapse of magnetized turbulent cloud cores, but they only followed the evolution up to the formation of the first core. Their simulations reproduced the formation of the outflow but not that of disks, because the period simulated was too short. Recent observations have revealed misaligned young stellar objects. <cit.> reported that for 16 Class 0 and Class I sources, the magnetic field in protostellar cores of ∼ 1000 au scale is not tightly aligned with outflows. It has been suggested that in the Class I source L1489 IRS, the central Keplerian disk is inclined with respect to a flattened infalling envelope <cit.>. Moreover, it has been suggested that the Class I binary source IRS 43 exhibits a misalignment between the orbit of the binary and the circumbinary disk <cit.>. The formation mechanism for such misaligned systems is not yet known. In this paper, we use high-resolution numerical simulations to investigate the collapse of cloud cores to form protostars, and we include the effects of the magnetic field and turbulence of the cloud core. The formation of circumstellar disks and outflows is also investigated. In the simulations, it is expected that misaligned protostars will be formed, because a priori rotation axes are not assumed in the initial conditions. We focus on the early phase of protostar formation, because recent observations have provided high-resolution images of very young protostars and circumstellar structures <cit.>. This paper is organized as follows. Section <ref> presents the model. Section <ref> discusses the simulation methods, the results are presented in Section <ref>, and they are discussed in Section <ref>. Our conclusions are presented in Section <ref>.

§ MODELS As the initial state of a molecular cloud core, we consider a turbulent, spherical cloud threaded by a uniform magnetic field. The cloud is confined by a uniform ambient gas. This is similar to the initial state considered by <cit.>. It is further specified by the initial strength of the turbulence and the magnetic field strength, as summarized in Table <ref>. As a template for a molecular cloud core, we consider a cloud for which the density profile is that of the critical Bonnor-Ebert sphere <cit.>. We let ϱ_BE(ξ) denote the nondimensional density profile of the critical Bonnor-Ebert sphere <cit.>, and then the initial density distribution is given by ρ(r) = ρ_0 ϱ_BE(r/a) for r < R_c, and ρ(r) = ρ_0 ϱ_BE(R_c/a) for r ≥ R_c, where a = c_s ( f/4 π G ρ_0 )^1/2, and r, G, c_s, and ρ_0 denote the radius, gravitational constant, isothermal sound speed, and initial central density, respectively. The gas temperature is assumed to be 10 K (c_s = 0.19 km s^-1). The initial central density is set at ρ_0 = 10^-18 g cm^-3, which corresponds to a number density of n_0 = 2.61 × 10^5 cm^-3 for an assumed mean molecular weight of 2.3. The parameter f denotes the nondimensional density enhancement factor, and the critical Bonnor-Ebert sphere is obtained when f=1. For a given central density, an increase in density by a factor of f is equivalent to an enlargement of the spatial scale by a factor of f^1/2. We adopt f=2 in this paper.
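The nondimensional profile ϱ_BE(ξ) is the standard solution of the isothermal Lane-Emden equation, and can be tabulated as in the following sketch (the integrator settings and function name are illustrative choices of ours; scipy is assumed):

```python
import numpy as np
from scipy.integrate import solve_ivp

def bonnor_ebert_profile(xi_max=6.45, n=1000):
    """Nondimensional density rho_BE(xi) = exp(-psi) of the critical
    Bonnor-Ebert sphere, from the isothermal Lane-Emden equation
    (1/xi^2) d/dxi (xi^2 dpsi/dxi) = exp(-psi), psi(0) = psi'(0) = 0."""
    def rhs(xi, y):
        psi, dpsi = y
        return [dpsi, np.exp(-psi) - 2.0 * dpsi / xi]
    xi = np.linspace(1e-6, xi_max, n)          # start just off xi = 0
    sol = solve_ivp(rhs, (xi[0], xi[-1]), [0.0, 0.0], t_eval=xi,
                    rtol=1e-10, atol=1e-12)
    return xi, np.exp(-sol.y[0])

xi, rho = bonnor_ebert_profile()
print(rho[0] / rho[-1])   # center-to-edge contrast ~ 14, as quoted below
```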
The radius of the cloud is set to be R_c = 6.45 a = 0.0434 f^1/2 pc = 0.0614 pc, where the factor 6.45 comes from the nondimensional radius of the critical Bonnor-Ebert sphere. The density contrast of the initial cloud is ρ(0)/ρ(R_c) = 14.0. The initial freefall timescale at the center of the cloud is thus t_ff ≡ (3 π / 32 G ρ_0)^1/2 = 6.66×10^4 yr. The mass of the cloud core (r ≤ R_c) is M_c = 0.89 f^3/2 M_⊙ = 2.51 M_⊙. The ratio of the thermal energy to the gravitational energy is estimated as E_th/|E_grav| = 0.84 f^-1 = 0.42. The spherical cloud described above is located at the center of the computational domain of x, y, z ∈ [-2R_c, 2R_c]^3. The turbulence is determined by the initial velocity field, and it is not driven during the course of the simulations; that is, we assume free decay of the turbulence. The initial velocity field is incompressible, with a power spectrum of P(k) ∝ k^-4, and it is generated in accordance with that in <cit.>, where k is the wavenumber. This power spectrum results in a velocity dispersion of σ(λ) ∝ λ^1/2, which is in agreement with the Larson scaling relations <cit.>, where λ denotes the length scale. The root-mean-square (rms) Mach number in the computational domain is specified by a model parameter, M; M = c_s^-1 ( (1/V_cd) ∫_V_cd |v|^2 dV )^1/2, where V_cd denotes the volume of the computational domain. We utilized a common velocity field for generating the turbulence in all the models. We changed the amplitude of the turbulence by changing the Mach number M. Note that even if we assume M = 1, the rms Mach number on a 0.06 pc scale (the cloud core scale) is estimated to be M_c = M (R_c/2R_c)^1/2 = 0.71, according to the scaling relations, and therefore the turbulence is subsonic on the cloud core scale. When M = 0.5, the rms Mach number is estimated to be M_c = 0.35. Note that weak turbulence was suggested by the narrow molecular lines in the dense cores in Taurus <cit.>. The rms Mach number on the cloud core scale, M_c, for each model is listed in Table <ref>. The initial magnetic field is uniform in the z-direction. The field strength is given by B_z = α B_cr, where α denotes the nondimensional flux-to-mass ratio (see Table <ref>), and B_cr denotes the critical field strength for the center of the cloud core, given by B_cr = 2 π G^1/2 Σ_0 <cit.>. The central column density Σ_0 is calculated by Σ_0 = ∫_-R_c^R_c ρ dz = 5.38 ρ_0 a, where the integral is performed along a line passing through the center of the cloud core. In this paper, we consider a magnetically supercritical core (α < 1). The initial field strength is estimated to be B_z = 181 α f^1/2 μG = 256 α μG; note this uses the model parameters α and f. Note that the model parameter α is inversely proportional to the dimensionless mass-to-flux ratio μ, which is defined as μ = (M_c/Φ)/(M_c/Φ)_cr; its value for each model is listed in Table <ref>. The magnetic flux is defined by Φ = π R_c^2 B_z, and the mass of the cloud core is defined by M_c = 4π ∫_0^R_c ρ r^2 dr. The critical value is (M_c/Φ)_cr = (2π G^1/2)^-1. The parameter μ is the mass-to-flux ratio of the entire cloud core, while the parameter α reflects the mass-to-flux ratio only at the central axis. The barotropic equation of state is assumed as P(ρ) = c_s^2 ρ + κ ρ^7/5, where κ = c_s^2 ρ_cr^-2/5; the critical density is set at ρ_cr = 10^-13 g cm^-3 (the corresponding number density is n_cr = 2.62× 10^10 cm^-3), which is taken from the numerical results of <cit.>. The ohmic dissipation is also considered.
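For illustration, a divergence-free random field with the stated spectrum can be generated as follows. This is a minimal sketch only: the grid size, random seed, the spectral convention P(k) = |v_k|^2, and the use of numpy FFTs are our assumptions, while the paper cites an earlier generation method.

```python
import numpy as np

def turbulent_velocity(n=64, mach=1.0, c_s=0.19, seed=0):
    """Gaussian random velocity field with |v_k|^2 ~ k^-4, made
    incompressible by projecting out the component parallel to k,
    then rescaled so the rms Mach number over the box equals `mach`."""
    rng = np.random.default_rng(seed)
    k1 = np.fft.fftfreq(n) * n
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    kvec = np.stack([kx, ky, kz])
    k2 = (kvec**2).sum(axis=0)
    k2[0, 0, 0] = 1.0                          # avoid 0/0 at the mean mode
    # random complex Fourier amplitudes with |v_k| ~ k^-2
    vk = (rng.normal(size=(3, n, n, n))
          + 1j * rng.normal(size=(3, n, n, n))) / k2
    vk -= kvec * (kvec * vk).sum(axis=0) / k2  # Helmholtz projection: k.v_k = 0
    vk[:, 0, 0, 0] = 0.0                       # no net bulk motion
    v = np.fft.ifftn(vk, axes=(1, 2, 3)).real
    v *= mach * c_s / np.sqrt((v**2).sum(axis=0).mean())   # set rms Mach number
    return v   # shape (3, n, n, n); km/s when c_s is given in km/s

v = turbulent_velocity(mach=0.5)   # e.g., the weak-turbulence (M = 0.5) case
```

With this convention the angle-integrated energy spectrum scales as k^2 |v_k|^2 ∝ k^-2, which yields the quoted σ(λ) ∝ λ^1/2 scaling.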
The resistivity η is quantitatively estimated according to equations (9) and (10) of <cit.>. The magnetic Reynolds number is estimated as Re_m = v_f λ_J η^-1, where v_f = [(4/3)π G λ_J^2 ρ]^1/2 is the free-fall velocity, λ_J = (π c_p^2 / G ρ)^1/2 is the Jeans length, and c_p = (dP/dρ)^1/2 is the sound speed for the barotropic equation of state. The magnetic Reynolds number is less than unity (Re_m < 1) in the region where n ≳ 2× 10^12 cm^-3, and thus the magnetic field is dissipative in that region. This indicates that the inner portion of the first core is magnetically dissipative. We allowed the models listed in Table <ref> to develop until t_p ≃ 10^3 yr for the magnetized models (α ≠ 0) and t_p ≃ 10^4 yr for the nonmagnetized models (α = 0), where t_p denotes the elapsed time following formation of a sink particle (a model of a protostar). The short elapsed times in the magnetized models are due to the short time steps of the simulations, which are caused by an extremely fast Alfvén speed around the sink particles. In this study, we therefore focus on the early evolutionary stages of low-mass star formation. Recent observational studies have investigated young stars in such early evolutionary stages, e.g., the first hydrostatic core candidates <cit.>.

Table: Model parameters

Model    M    α     M_c   μ
M05B0    0.5  0.00  0.35  ∞
M1B0     1.0  0.00  0.71  ∞
M05B01   0.5  0.10  0.35  2.81
M1B01    1.0  0.10  0.71  2.81
M05B025  0.5  0.25  0.35  1.12
M1B025   1.0  0.25  0.71  1.12

§ METHODS Gravitational collapse of the cloud cores was calculated using the three-dimensional adaptive mesh refinement (AMR) code SFUMATO <cit.>. The magnetohydrodynamics (MHD) scheme has second-order accuracy in space and time. The computational domain was resolved on a base grid of l = 0 with 256^3 cells. The maximum grid level was set at l_max = 9. The cell width was Δx_min = 0.386 au on the finest grid, l = 9, and Δx_max = 198 au on the base grid of l = 0. The Jeans condition was employed as a refinement criterion: blocks were refined when the Jeans length was shorter than eight times the cell width, i.e., λ_J < 8 Δx, where λ_J is the Jeans length. This condition is twice as strict as that originally proposed by <cit.>, and tested in <cit.>. An even stricter refinement criterion was proposed by <cit.> to capture dynamo amplification of a very weak magnetic field in the gravitational collapse of primordial clouds. Although it would be better to use such a criterion to describe the small-scale dynamo action in a very high-beta plasma, such a weak turbulent magnetic field does not appear anywhere in the present models. For the MHD scheme, we adopted the HLLD Riemann solver <cit.>, though the Roe-type MHD Riemann solver <cit.> was implemented in the original SFUMATO <cit.>. The HLLD Riemann solver is known to be more robust than the Roe-type scheme, because it preserves positivity. We did not apply any density floors, even when the plasma beta became low in the low-density regions. For the sub-grid model of a protostar, we used sink particles. The details of the implementation of the sink particles are given in <cit.>. The critical density for sink particle formation is set at ρ_sink = 1× 10^-10 g cm^-3 (n_sink = 2.62× 10^13 cm^-3), and the sink radius is set at r_sink = 4 Δx_min = 1.55 au. The minimum cell width Δx_min is determined so that the Jeans length at ρ_sink is resolved by the cell width Δx_min according to the Jeans condition.
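The interplay between the barotropic equation of state and the Jeans refinement criterion can be illustrated with a short script in cgs units (a sketch; the function names are ours, and the actual SFUMATO implementation is block-based and more involved):

```python
import numpy as np

G = 6.674e-8          # gravitational constant [cgs]
c_s = 0.19e5          # isothermal sound speed [cm/s]
rho_cr = 1e-13        # critical density of the barotropic EOS [g/cm^3]

def sound_speed(rho):
    """c_p = (dP/drho)^(1/2) for P = c_s^2 rho + kappa rho^(7/5)."""
    kappa = c_s**2 * rho_cr**(-0.4)
    return np.sqrt(c_s**2 + 1.4 * kappa * rho**0.4)

def needs_refinement(rho, dx):
    """Jeans condition used here: refine when lambda_J < 8 dx."""
    lam_J = np.sqrt(np.pi * sound_speed(rho)**2 / (G * rho))
    return lam_J < 8.0 * dx

# At the sink-formation density, lambda_J is ~4 au, which eight cells
# of the finest width (0.386 au) still resolve:
au = 1.496e13
print(needs_refinement(1e-10, 0.386 * au))   # False: level l = 9 suffices
```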
The ohmic dissipation was calculated in accordance with <cit.>. The induction equation for the magnetic field was split into a hyperbolic term and a parabolic term. The former corresponds to the ideal MHD and was solved explicitly; the latter corresponds to the ohmic dissipation and was solved implicitly with the multigrid AMR. This provides second-order spatial accuracy and first-order temporal accuracy.

§ RESULTS §.§ Overall structures Figures <ref>–<ref> show snapshots of the models at t_p ≃ 10^3 yr on the cloud core scale, the infalling envelope scale, and the circumstellar disk scale, respectively. Hereafter, we will refer to a circumstellar disk simply as a disk. The cloud cores collapse on the timescale of ∼ 10^5 yr. In general, a stronger initial turbulence and/or a stronger initial magnetic field retards the formation epoch of the sink particle. The sink particle forms at t = 1.2× 10^5 yr for model M05B0 (the earliest formation) and t = 1.9× 10^5 yr for model M1B025 (the latest formation). On the cloud core scale (Figure <ref>), the cloud cores are deformed from a spherical shape. The nonmagnetized models and the weak magnetic field models (the left and middle panels of Figure <ref>) produce complex shapes for the cloud cores because of the turbulence, while the strong magnetic field models (the right panels of Figure <ref>) produce flattened cloud cores, which are oriented perpendicular to the mean magnetic field (the z-direction). The upper and lower panels in Figure <ref> show the models with weak turbulence (M=0.5) and moderate turbulence (M=1), respectively. Models with moderate turbulence result in shapes that are more disturbed than those resulting from models with weak turbulence; this is consistent with what was reported by <cit.>. Figure <ref> shows the column density and velocity distributions on the envelope scale of 400 au. The nonmagnetized models and weak magnetic field models (left and middle panels of Figure <ref>) result in rotating infalling envelopes. The rotation velocities are comparable to the infall velocities. The angular momentum of the infalling envelopes comes from the turbulent velocities imposed on the initial conditions. The density distributions in the envelopes show spiral structures, and these are caused by the inhomogeneity of the rotation velocities. On the other hand, the strong magnetic field models with α = 0.25 (the right panels of Figure <ref>) produce flattened infalling envelopes that are perpendicular to the direction of the mean magnetic field. The arrows in these models reflect the silhouettes of the bipolar outflows along the z-axis. Outflow formation is discussed in Section <ref>. Figure <ref> shows the column density and velocity distribution for the central (50 au)^3 cubes. The disks show almost face-on views for all the models because a common seed was adopted for the initial turbulence, which provides angular momentum to the cloud cores. In each panel, the measured size of the disk is indicated by the green circle; the following two criteria were used to define a disk: (1) the density is higher than ρ_cr, and (2) the rotation speed is considerably faster than the infall velocity. The measurement of the disk radii is shown in Appendix <ref>. The size of each disk is correlated with the strength of the magnetic field; in models with a stronger magnetic field, the disks are smaller, as shown in Figure <ref>.
The strong magnetic field model M05B025 has a small disk with a radius of R_d = 3.36 au at t_p = 10^3 yr, though this radius is larger than the sink radius, r_sink = 1.55 au. The disk is oriented nearly perpendicular to the y-axis (face-on view in Figure <ref>), and it is also perpendicular to the flattened infalling envelope. In the disk, the rotation velocity is considerably greater than the infall velocity. Model M1B025 also has a small disk, with a radius of 3.08 au at t_p = 10^3 yr; note that it is larger than the sink radius. The disk is elongated toward the upper left in Figure <ref>. The high-density portion of the disk, where ρ ≥ 10^6 ρ_0 = 10^-12 g cm^-3, is approximately axisymmetric, while the edge of the disk with ρ ∼ 10^-13 g cm^-3 has a cometary shape. For the weak field models with α = 0.1 (models M05B01 and M1B01), the disks have the face-on views shown in Figure <ref>. Each disk is divided into inner and outer parts. The inner disk has an approximately axisymmetric shape with a high density, and its radius is estimated to be R_d ∼ 5 au at t_p ∼ 10^3 yr, which is considerably larger than the sink radius. The outer disk has spiral arms that wind around the disk; this occurs because the rotation velocity is greater than the infall velocity. We found that the spiral feature is also caused by the inhomogeneity of the ratio between the thermal pressure and the magnetic pressure, and by the inhomogeneity of the rotation velocity along the azimuthal direction. The outer disk exhibits a Q-value <cit.> larger than ∼ 2, indicating that it is gravitationally stable. The Atacama Large Millimeter/submillimeter Array (ALMA) recently observed a circumstellar disk with a similar dual structure in Elias 2-27, a Class II star <cit.>. The spiral arms of this object are probably caused by gravitational instability <cit.>, in contrast to the models here. The nonmagnetized models, M05B0 and M1B0, have disks with a large radius of R_d ∼ (20-30) au at t_p ∼ 10^3 yr. For model M1B0, the disk fragments three times: at t_p = 7.1× 10^3 yr, 7.8× 10^3 yr, and 8.6× 10^3 yr. Two of these fragments merge at t_p = 8.9× 10^3 yr, and at the end of the calculation period, t_p ≃ 10^4 yr, there are still three sink particles. Similar fragmentation is also seen when a filamentary cloud is used as the initial condition <cit.>. For model M05B0, the disk has not fragmented by the end of the calculation period, t_p ≃ 10^4 yr. For all the models, the thicknesses of the disks are resolved by more than four cells. This is consistent with the fact that a self-gravitating disk has the scale height H = c_p/(2 π γ G ρ)^1/2 = λ_J/(√(2γ) π) = 0.19 λ_J <cit.>, and the Jeans condition adopted here requires 2H ≳ 3 Δx_min.

§.§ Disk formation Figure <ref> shows the rotation velocity profile as a function of the distance from the sink particle for each of the models. The rotation velocity profile is obtained as follows. The orientation of the disk axis is determined according to the total angular momentum for a volume of ρ ≥ ρ_cr. According to the disk orientation, the azimuthal velocity v_φ is calculated with respect to the sink particle, and it is averaged with a density weight along the vertical direction of the disk for a volume of ρ ≥ ρ_cr.
Finally, the density-weighted v_φ is azimuthally averaged to obtain the rotation velocity profile shown in Figure <ref>. The nonmagnetized models (M05B0 and M1B0) exhibit rotation velocity profiles faster than that of the Keplerian rotation because of the massive disks. The self-gravity of the disk increases the rotation velocity. For the weak field models (M05B01 and M1B01), the inner parts of the disks (r ≲ (5-7) au) exhibit the Keplerian rotation. These regions correspond to those showing an approximately axisymmetric shape in Figure <ref>. In the outer parts of the disks, the rotation velocity is slower than that of the Keplerian rotation, and the infall motion is observed there, as shown in Figure <ref>. For the strong field models (M05B025 and M1B025), the disk sizes are as small as ∼ 3 au, and the rotation velocities are close to those of the Keplerian rotation near the sink radius. We confirmed that the centrifugal force is the dominant force against gravity in the disks, and the rotation velocity is considerably faster than the infall velocity. Note that, for all the models, the regions within the sink radius exhibit slower rotation than the Keplerian rotation because of the softening of the gravity therein. Figure <ref> shows the increase in the radius of the disks as a function of time following the formation of sink particles for each of the models. The strongly magnetized models (α=0.25) have disks that are smaller than those of the nonmagnetized models (α=0). The disk radius for the strong field models remains at R_d ∼ 3 au during the simulation period of 6× 10^2 yr ≲ t_p ≲ 10^3 yr. In the weak magnetic field models (α=0.1), the disk radius increases with considerable undulations, which are caused by the dynamical changes in the spiral arms. The radius of the disks is sensitive to changes in the spiral arms associated with the outer parts of the disks. Figure <ref> shows the mass of the disks as a function of time following the formation of sink particles. The stronger magnetic field models show slower growth of the disk mass, and this is roughly independent of the Mach number M. The models with α=0.25 have a disk mass of M_d ∼ 10^-3 M_⊙ at t_p = 10^3 yr, while models with α=0.1 have a disk mass of M_d ∼ 8 × 10^-3 M_⊙. Figure <ref> shows the mass of the sink particles as a function of time following the formation of sink particles. The growth of the sink particles exhibits a clear tendency: the magnetized models have higher accretion rates than do the nonmagnetized models; all the magnetized models have accretion rates of approximately (4-5)× 10^-5 M_⊙ yr^-1, while the nonmagnetized models have rates of ∼ 1 × 10^-5 M_⊙ yr^-1. Note that we follow only the first ∼ 10^3 yr of evolution of the sink particles, and thus we cannot determine the final stellar masses in our models. Figure <ref> shows the ratio of the disk mass to the sink particle mass for each model. The strong magnetic field models (α = 0.25) exhibit a low ratio of ∼ 0.02-0.03, indicating that strong magnetic braking efficiently extracts angular momentum from the disks. The weak magnetic field models (α=0.1) have a mass ratio of ∼ 0.1, which is an order of magnitude larger than that of the strong field models. The nonmagnetized models (α=0) have ratios that are on the order of unity, and such a large ratio induces fragmentation of the disk (model M1B0).

§.§ Outflow formation Figure <ref> shows the outflows at t_p = 700 yr for the magnetized models.
The strong magnetic field models (M05B025 and M1B025) have outflows that extend further than do those for the weak magnetic field models (M05B01 and M1B01). The envelopes for the weak magnetic field models are disturbed by the turbulence, as shown on the 800 au scale, while the strong magnetic field models have disk-shaped infalling envelopes. For all the models, the outflows extend in approximately the mean direction of the magnetic field on that scale. The direction of the magnetic field depends on its initial strength, and when the field is strong, the initial direction is approximately maintained (the z-direction). In the weak magnetic field models, the magnetic field lines become steeply inclined with respect to the initial direction, as has been reported by <cit.>. The outflows are not always aligned with the disks, whose orientations are shown in the lower panels of Figure <ref>. For models M05B025 and M1B025, the lower panels of Figure <ref> are almost edge-on views of the disks, and the outflows extend nearly vertically in the upper and middle panels of Figure <ref>. This indicates that the outflows are roughly perpendicular to the disk axes for models M05B025 and M1B025. In Figure <ref>, the face-on disk of model M05B01 is shown in the lower panel, and the outflow extends horizontally, as shown in the middle panel. This indicates that there is a misalignment between the disk and the outflow. On the other hand, model M1B01 produces a disk that is roughly aligned with the outflow. Note that, for each of the models, the infalling envelope on the ≳ 100 au scale is aligned with the outflow, as shown in Figure <ref>. The envelope is oriented perpendicular to the magnetic field on that scale, and the outflow extends along the magnetic field, as reported by <cit.>. Figure <ref> shows the length of the outflow as a function of time following the formation of sink particles. The length is measured from the sink particle to the maximum distance of the outflow region, where the outflow region is defined by a volume of v_r ≥ 2c_s. This plot indicates that the strong magnetic field models result in more rapid growth than do those with a weak magnetic field. The growth rates are ∼ 2 km s^-1 and ∼ 1 km s^-1 for the strong and weak magnetic field models, respectively. These rates are consistent with typical gas velocities in the outflow (Figure <ref>).

§.§ Cavity formation One of the prominent features in the strong field models is a cavity structure in the envelope. Figure <ref> shows the outflow and envelope structures for model M1B025 on the scales of 400 au and 100 au. On the 400 au scale, the flattened envelope is perpendicular to the outflow, which is associated with the helical magnetic field lines. On the 100 au scale, the flattened envelope has a cavity in which the magnetic field lines are straight. The bipolar outflow is not associated with the straight magnetic field lines of the cavity; instead, it is associated with the helical magnetic field lines, which thread the disk around the sink particles. The cavity is created beside the disk, as shown in Figure <ref>. The radius of the cavity increases with time, and it grows up to ∼ 50 au by t_p ≃ 10^3 yr. The cavity is caused by the magnetic pressure in the following way.
The magnetic pressure is higher inside the cavity than it is in the other regions of the envelope, and thus it pushes the gas away from the flattened envelope. The gas accumulates on the rim of the cavity, which then has a higher density than the other regions, as shown in Figure <ref>. Inside the cavity, the gas moves outward at a velocity of ∼ 0.2-0.5 km s^-1. The velocity at which the rim extends is typically 0.2 km s^-1 (= 50 au/10^3 yr), which is roughly equal to the speed of sound. Model M05B025 also results in cavities in the envelope, and the size of the cavities (∼ 10 au) is smaller than that for model M1B025 when compared at the same time (t_p = 10^3 yr). In the upper right panel of Figure <ref>, cavities can be seen at both the upper and lower sides of the disk, but their density contrast is less than that of the cavities seen in model M1B025. The weak magnetic field models M05B01 and M1B01 do not produce cavities in the envelopes. Similar cavity formation has been reported in recent MHD simulations <cit.>. In these simulations, the cavities are formed whether or not sink particles are implemented, and whether or not magnetic diffusion is considered. Our simulations suggest that the formation of a cavity depends on the magnetic field strength. The cavities reproduced in these simulations likely correspond to the magnetic wall that has been predicted by theoretical studies <cit.>.

§ DISCUSSION §.§ Disk growth Our simulations indicate that a cloud core with a stronger magnetic field produces a disk with a smaller radius. On the timescale of t_p ∼ 10^3 yr, this is roughly consistent with the long-term simulations of magnetized rotating cloud cores performed by <cit.> <cit.>. However, their simulations tend to result in larger disk radii than those produced by the models considered here. In their simulations, the disk radii exceed 10 au by t_p = 10^3 yr when μ = 1, and reach 20 au when μ = 3 <cit.>. Meanwhile, the models here had a disk radius of 3-4 au when μ = 1.12 and 10-20 au when μ = 2.81. The difference in the disk radii is due to the difference in the initial distribution of the angular momentum. <cit.> assumed a uniform rotation, which provides a specific angular momentum distributed as j ∝ r^2. On the other hand, the turbulence produces a velocity distribution of Δv ∝ r^1/2, due to the Larson scaling relations assumed here <cit.>, and the distribution of the specific angular momentum is expected to be j ∝ r^3/2. The differences seen in the initial angular momentum distributions are therefore larger for larger radii. This has a greater effect on the disk radius in the later stages of the accretion phase, because the angular momentum of the infalling gas has a strong impact on disk evolution <cit.>. This suggests that, for the models here, the disk radius is expected to increase with time, e.g., by t_p = 10^5 yr, but it will still be smaller than that produced by a model with uniform rotation (∼ 100 au). Small disks with radii of less than 100 au are therefore expected to be produced by the magnetized turbulent models. Recent high-resolution observations have revealed Keplerian disks around Class 0 and Class I protostars <cit.>. <cit.> recently suggested that the Class 0 protostar L1527 IRS has a Keplerian disk with a small radius of 54 au. A more extended disk has been suggested in the Class I source TMC 1A, which has an estimated radius of 100 au <cit.>. Disks with radii that exceed 100 au have also been suggested in the Class 0 source VLA1623A <cit.> and the Class I source L1489 IRS <cit.>.
Such a variety of Keplerian disks may result from the variety of magnetic fields and turbulence in the natal cloud cores.

§.§ Alignment between an outflow, a disk, and an envelope The various models examined here have shown that some disks are misaligned with the outflow, as described in Section <ref>. <cit.> investigated the direction of outflows when the initial magnetic field is misaligned with the rotation axis on the cloud core scale <cit.>. Their simulations indicate that the outflows extend in the direction parallel to the local magnetic field, even when this direction is not aligned with the magnetic field on the cloud core scale. This causes the outflow on the ∼ 100 au scale to be misaligned with the magnetic field on the cloud core scale. Figures <ref> and <ref> show the directions of the magnetic field, the rotation axes, the disk-like structures, and the outflows for the representative models M05B01 and M1B025. The measurement of these directions is described in Appendix <ref>. For model M05B01, the mean magnetic field B on the scale of 10^4 au is antiparallel to that on the 1 au scale, indicating that the magnetic field rotates by up to ∼ 180° around the vector j. The disk normal vector n is associated with B on scales larger than 10 au, indicating that the flattened envelope is perpendicular to the local magnetic field. On scales smaller than 10 au, the disk normal vector n is associated with the mean angular momentum j, indicating that the disk is aligned with the rotation axis. The outflow is accelerated on the 10 au scale, and it extends along the magnetic field up to the 100 au scale. The outflow is therefore misaligned with the rotating disk, but it is aligned with the flattened envelope. The ejection mechanism for this outflow is different from the ordinary magnetocentrifugal wind <cit.>; this outflow mechanism is a spiral flow (see Figure <ref>b). The model here demonstrates that the spiral flow reproduces a bipolar outflow. Similar spiral flows have been observed <cit.>. To confirm that the spiral flow mechanism continues to drive the outflow on the timescale of 10^4 yr, further long-term simulations are necessary <cit.>. For model M1B025, the outflow is driven by the magnetocentrifugal wind on the ∼ 100 au scale (see Figure <ref>a), and the flow direction is aligned with the local magnetic field B, as shown in Figures <ref> and <ref>. On this scale, the flattened envelope is also aligned with the magnetic field. On the scale of ≲ 10 au, the disk normal vector n is aligned with the mean angular momentum j, indicating that the disk is aligned with the rotation axis. The flow direction on this scale is contaminated by the velocity associated with the formation of a cavity (see the bottom right panel of Figure <ref>). The direction of the angular momentum vector depends on the radius of the cloud core, as shown in Figure <ref>. This indicates that the disk accretes lumps of gas for which the angular momentum vectors have highly nonuniform directions. The angular momentum of the infalling gas greatly affects the evolution of the disk, as shown in <cit.>. As the disk evolves further, both its orientation and size are expected to change.
Observations of misalignment between outflows, magnetic fields, circumstellar disks, and flattened envelopes have been reported. <cit.> showed that outflows are misaligned or randomly aligned relative to the magnetic field on the ∼ 1000 au scale for Class 0 and Class I objects, and this is consistent with the results of the models considered here, which predict that the direction of the magnetic field depends on the scale length (see, e.g., Figure <ref>), although the outflow accelerates along the magnetic field on the 10-100 au scale. If the outflow is aligned with the direction of its acceleration, it is expected to be misaligned with the magnetic field on the ∼ 1000 au scale. Observations of the Class I source L1489 IRS have suggested that the central Keplerian disk is inclined with respect to the flattened infalling envelope <cit.>. This misalignment between the central disk and the flattened envelope was reproduced in all the models we considered, as shown in Figure <ref>. In each model, the disk normal vector n (the green line) drifts on the θ_x - θ_y plane, indicating that the inclination of the flattened density structure depends on the radius. Moreover, n is aligned with B when r ≳ 100 au, suggesting that the inclination of the flattened envelope is guided by the magnetic field. In other words, the flattened density structure of the envelope is caused by the magnetic field. The Class I binary source in the Ophiuchus star-forming region, IRS 43, is a complex misaligned system. The most curious misalignment in this object is that between the circumbinary disk and the orbit of the binary. According to <cit.>, IRS 43 has a circumbinary disk whose inclination is nearly edge-on, while the orbit of the binary is close to being in the plane of the sky. Such misalignment is possibly produced by a non-monotonic distribution of the angular momentum on the scale of the cloud core, as shown in Figure <ref>. On the cloud scale, the infalling gas has misaligned angular momentum, leading to time-dependent accretion of angular momenta onto the circumbinary disk and circumstellar disks. Such a misaligned system cannot be reproduced by axisymmetric models <cit.>. Even when only weak turbulence is assumed, misaligned systems are reproduced.

§.§ Cavity and arc-like structure The rim of the cavity has a higher column density than does the envelope, as shown in Figure <ref>, and it may be observed as an arc-like structure. <cit.> reported that the ALMA Cycle 0 observations reveal an arc-like structure at the center of the high-density molecular cloud core MC27 or L1521F. The arc-like structure extends to a length of ∼ 1000 au, and they proposed that it was caused by a dynamical interaction between the dense gas condensation and the envelope. <cit.> performed hydrodynamical simulations to determine the origin of the arc-like structure, and they demonstrated that gravitational torques due to the orbiting protostars produce arc-like structures extending up to 1000 au. The typical length of the cavity is consistent with the observations of MC27/L1521F. The rim of the cavity obtained with model M1B025 extended to ∼ 50 au at t_p = 1000 yr. If we assume that the cavity continues to extend at a constant velocity of 0.2 km s^-1 (see Section <ref>), it takes 2× 10^4 yr for the rim to extend to 1000 au.
This timescale agrees with that of the arc structures reproduced by <cit.>, and it is also consistent with the timescale of the protostar (the Spitzer source) in MC27/L1521F. As shown in the literature <cit.>, the detailed structure of the cavity and the rim seems to be sensitive to the simulation settings. Comparison between the models and the observations should therefore be performed in terms of typical values, e.g., a typical length, as shown here. The rim of the cavity can account for the dense gas condensations in MC27/L1521F. Our simulation results indicate that the number density of the rim is ∼ 10^10 cm^-3 on the 50 au scale. When the cavity has expanded up to 1000 au, the number density of the rim is expected to be 2.5 × 10^7 cm^-3, assuming that the density is distributed as ρ ∝ r^-2. The number densities of the dense gas condensations MMS-2 and MMS-3 are estimated to be 10^7 cm^-3 and 10^6-7 cm^-3, respectively <cit.>. Therefore, MMS-2 and MMS-3 can be explained by the dense portions of the rim rather than by fragments. On the other hand, between the rim and the cavity, the column density differs by a factor of ∼ 30; the column densities are ∼ 5× 10^24 cm^-2 on the rim and ∼ 2× 10^23 cm^-2 in the cavity (Figure <ref>). Such a high contrast in the column density has not been seen in the ALMA Cycle 1 observations <cit.>. Observations of the magnetic field of the cloud core will be of key importance in determining which model best accounts for the origin of the arc-like structure.

§ SUMMARY Gravitational collapse of molecular cloud cores and the formation of circumstellar disks and outflows around protostars were investigated by performing AMR simulations; the effects of both turbulence and the magnetic field were considered. Ohmic dissipation was included in the MHD simulations. We allowed the system to evolve for ∼ 1000 yr following the formation of a protostar. The main outcomes are summarized as follows. * In each of the magnetized models, the cloud core collapses to form a protostar surrounded by a circumstellar disk. The star-disk system is surrounded by an infalling envelope, and bipolar outflows are ejected. The nonmagnetized models produce massive circumstellar disks, one of which undergoes fragmentation at ∼ 10^4 yr following the formation of a protostar. * The radius of the circumstellar disk depends on the initial strength of the magnetic field. Models with a stronger magnetic field produce a circumstellar disk with a smaller radius. The mass of the disk shows a similar dependence on the magnetic field, where a stronger field produces a less massive disk. The ratio of disk mass to stellar mass remains roughly constant at ∼ 1-10%, depending on the strength of the magnetic field. * The magnetized models reproduce the outflow, which can be classified into two types: a magnetocentrifugal wind and a spiral flow. In the latter, the outflow is not aligned with the rotational axis of the disk. In both cases, the outflow and the flattened envelope are aligned with the magnetic field on that scale. In some models, the outflow is misaligned with the circumstellar disk. Similarly, the flattened envelope may be misaligned with the circumstellar disk. * The internal distribution of angular momentum in the cloud cores is nonuniform.
After long-term evolution, the disk accretes lumps of gas in which the direction of the angular momentum vectors is highly nonuniform; hence, the disk is expected to change its orientation and size. This means that a planet formed during a later phase may have an orbital angular momentum that is highly misaligned with the angular momentum of the central star. * A strong magnetic field tends to produce a cavity in the infalling envelope; this is due to the strong magnetic pressure, and the gas accumulates on the rim. Thus, the rim can account for the arc-like structure and dense gas condensations observed in the high-density molecular cloud core MC27/L1521F, though it has not been verified by observation that there is a high contrast between the column density of the cavity and that of the rim.

We would like to thank K. Tokuda, T. Onishi, K. Tomida, R. Kawabe, and N. Ohashi for fruitful discussions. Numerical computations were carried out on the Cray XC30 and XT4 at the Center for Computational Astrophysics, National Astronomical Observatory of Japan, and the HITACHI HA8000 Cluster System (T2K-Todai) at the Information Technology Center, The University of Tokyo. This research was supported by JSPS KAKENHI Grant Numbers 16H02160, 15K05032, 26400233, 26287030, 25400232, 24244017, 23540270, and 23244027.

§ MEASUREMENT OF THE DISK RADIUS The disk radius for each model was estimated using the density and velocity distributions, as follows. The volume of the disk, V_d, was defined by two criteria: ρ ≥ ρ_cr and (v_φ^2 + v_θ^2)^1/2/(v_r^2 + c_s^2)^1/2 ≥ 3. The former indicates that the disk has a density higher than the critical density of the equation of state. The latter indicates that the velocity of rotation is greater than the radial velocity. In spherical coordinates, the velocity is (v_r, v_θ, v_φ), which was calculated by a transformation from Cartesian coordinates with the origin set at the location of the sink particle. Because the disk orientation is not aligned with any of the coordinate axes, the tangential velocity (v_φ^2 + v_θ^2)^1/2 was adopted as the rotational velocity. The value of three on the right-hand side of the second criterion was determined empirically. The disk radius was obtained from the inertia tensor of the volume V_d. The inertia tensor is calculated as I = ( I_xx I_xy I_xz; I_yx I_yy I_yz; I_zx I_zy I_zz ), and each element of the matrix is calculated as a moment of the coordinates, e.g., I_xy = ∫_V_d (x - x_p)(y - y_p) dV / ∫_V_d dV, where (x, y, z) are the coordinates of a cell, and (x_p, y_p, z_p) is the position vector of the sink particle. The volume integrals in equation (<ref>) were performed by summing over the cells within the volume V_d. The matrix I yields three eigenvalues, λ_1 > λ_2 > λ_3, and the square root of each of the eigenvalues corresponds to the length of a principal axis; λ_1^1/2, λ_2^1/2, and λ_3^1/2 correspond to the semi-major axis, the semi-minor axis, and the thickness of the disk, respectively. The semi-minor axis λ_2^1/2 was adopted as the disk radius so that this method could be applied to a highly elongated disk, such as the disk of model M1B025 (the lower right panel of Figure <ref>). The disk radius is defined as R_d = 2 λ_2^1/2, where the factor of two comes from the inertia tensor of a uniform thin disk with a radius of R_d, i.e., I_xx + I_yy = (1/2) R_d^2. We also calculated the mass of the disks using M_d = ∫_V_d ρ dV, where V_d is the disk volume.
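For reference, the measurement described in this appendix can be condensed into a short routine operating on per-cell arrays. This is a sketch under our own assumptions about the data layout (one row per cell; argument names are illustrative), not the actual AMR implementation:

```python
import numpy as np

def disk_radius_and_mass(pos, vel, rho, dV, x_p, v_p, rho_cr, c_s):
    """pos, vel: (N, 3) cell positions/velocities; rho, dV: (N,) cell
    densities and volumes; x_p, v_p: sink position and velocity.
    Returns R_d = 2 * sqrt(lambda_2) and the disk mass M_d."""
    r = pos - x_p
    v = vel - v_p
    rmag = np.linalg.norm(r, axis=1)
    v_r = (r * v).sum(axis=1) / rmag                      # radial velocity
    v_t = np.sqrt(np.maximum((v**2).sum(axis=1) - v_r**2, 0.0))  # rotation
    disk = (rho >= rho_cr) & (v_t / np.sqrt(v_r**2 + c_s**2) >= 3.0)
    d, w = r[disk], dV[disk]
    I = (d * w[:, None]).T @ d / w.sum()                  # coordinate moments
    eig = np.sort(np.linalg.eigvalsh(I))[::-1]            # lambda_1 > 2 > 3
    R_d = 2.0 * np.sqrt(eig[1])                           # semi-minor axis
    M_d = (rho[disk] * w).sum()
    return R_d, M_d
```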
§ MEASUREMENT OF THE DIRECTION OF THE AXES

In Figures <ref> and <ref>, the direction of the magnetic field, angular momentum, disk, and outflow are each shown as a function of the radius. These axes were measured as follows. In order to define the directions of the magnetic field and the angular momentum, we measured the mean magnetic field and the mean specific angular momentum:

B(r) = 1/V_s(r) ∫_V_s(r) B(r) dV,
j(r) = 1/M(r) ∫_V_s(r) ρ(r)(r-r_p) × (v(r) - v_p) dV,

where M(r) = ∫_V_s(r) ρ dV. The volume V_s(r) is that of a sphere with radius r whose center coincides with the position of the sink particle:

V_s(r) = { r ∈ ℝ^3 | |r - r_p| ≤ r } .

The vectors r_p and v_p, respectively, denote the position and velocity of the sink particle. The orientation of the disk was calculated using the eigenvector of an inertia tensor similar to I in equation (<ref>). The elements of this inertia tensor are obtained as follows:

I_xy = ∫_ρ≥ρ_d ρ ( x - x_p )( y - y_p ) dV / ∫_ρ≥ρ_d ρ dV ,

where the integration is performed inside the region in which ρ≥ρ_d, for a given threshold ρ_d. The eigenvector associated with the smallest eigenvalue represents a normal vector of the flattened disk-like structure. The radius of the disk-like structure is defined as the maximum extent of the region in which ρ≥ρ_d, as measured from the position of the sink particle. Thus, we obtain the direction of the disk normal vector n as a function of the radius r. Note that the disk measured here corresponds to the circumstellar disk for a small radius, e.g., r ≲ 10 au, and to the flattened infalling envelope for a large radius. The direction of the outflow was obtained from the flow of the gas inside the outflow region. Because the outflow is bipolar and the gas flows roughly parallel to the magnetic field, the direction of the outflow is calculated as follows:

v_of(r) = 1/V_of(r) ∫_V_of(r) v sign(v·B) dV,

where V_of(r) denotes the region of the outflow, defined as

V_of(r) = { r ∈ V_s(r) | v_r(r) ≥ 2 c_s } .

[Aso et al.(2015)]Aso15 Aso, Y., Ohashi, N., Saigo, K., et al. 2015, , 812, 27[Belloche et al.(2006)]Belloche06 Belloche, A., Parise, B., van der Tak, F. F. S., et al. 2006, , 454, L51[Blandford & Payne(1982)]Blandford82 Blandford, R. D., & Payne, D. G. 1982, , 199, 883[Bonnor(1956)]Bonnor1956 Bonnor, W. B. 1956, , 116, 351 [Brinch et al.(2007a)]Brinch07a Brinch, C., Crapsi, A., Hogerheijde, M. R., & Jørgensen, J. K. 2007a, , 461, 1037[Brinch et al.(2007b)]Brinch07b Brinch, C., Crapsi, A., Jørgensen, J. K., Hogerheijde, M. R., & Hill, T. 2007b, , 475, 915[Brinch et al.(2016)]Brinch16 Brinch, C., Jørgensen, J. K., Hogerheijde, M. R., Nelson, R. P., & Gressel, O. 2016, , 830, L16[Burkert & Bodenheimer(2000)]Burkert00 Burkert, A., & Bodenheimer, P. 2000, , 543, 822[Chandrasekhar(1939)]Chandrasekhar39 Chandrasekhar, S. 1939, An Introduction to the Study of Stellar Structure, (Chicago, Ill., Univ. Chicago press) [Ciardi & Hennebelle(2010)]Ciardi10 Ciardi, A., & Hennebelle, P. 2010, , 409, L39[Crutcher(1999)]Crutcher99 Crutcher, R. M. 1999, , 520, 706[Dubinski et al.(1995)]Dubinski95 Dubinski, J., Narayan, R., & Phillips, T. G. 1995, , 448, 226[Ebert(1955)]Ebert1955 Ebert, R. 1955, Z. Astrophys., 37, 222 [Federrath et al.(2011)]Federrath11 Federrath, C., Sur, S., Schleicher, D. R. G., Banerjee, R., & Klessen, R. S. 2011, , 731, 62[Fukuda & Hanawa(1999)]Fukuda99 Fukuda, N., & Hanawa, T. 1999, , 517, 226[Hennebelle & Ciardi(2009)]Hennebelle09 Hennebelle, P., & Ciardi, A. 2009, , 506, L29[Hogerheijde(2001)]Hogerheijde01 Hogerheijde, M. R.
2001, , 553, 618[Hull et al.(2013)]Hull13 Hull, C. L. H., Plambeck, R. L., Bolatto, A. D., et al. 2013, , 768, 159[Joos et al.(2012)]Joos12 Joos, M., Hennebelle, P., & Ciardi, A. 2012, , 543, A128[Jørgensen et al.(2009)]Jorgensen09 Jørgensen, J. K., van Dishoeck, E. F., Visser, R., et al. 2009, , 507, 861[Krasnopolsky et al.(2012)]Krasnopolsky12 Krasnopolsky, R., Li, Z.-Y., Shang, H., & Zhao, B. 2012, , 757, 77[Larson(1981)]Larson81 Larson, R. B. 1981, , 194, 809[Larson(1985)]Larson85 Larson, R. B. 1985, , 214, 379[Li & McKee(1996)]Li96 Li, Z.-Y., & McKee, C. F. 1996, , 464, 373[Li et al.(2013)]Li13 Li, Z.-Y., Krasnopolsky, R., & Shang, H. 2013, , 774, 82 [Machida & Hosokawa(2013)]Machida13 Machida, M. N., & Hosokawa, T. 2013, , 431, 1719[Machida et al.(2007)]Machida07 Machida, M. N., Inutsuka, S.-i., & Matsumoto, T. 2007, , 670, 1198[Machida et al.(2008)]Machida08 Machida, M. N., Inutsuka, S.-i., & Matsumoto, T. 2008, , 676, 1088-1108[Machida et al.(2011)]Machida11 Machida, M. N., Inutsuka, S.-I., & Matsumoto, T. 2011, , 63, 555[Machida et al.(2014)]Machida14 Machida, M. N., Inutsuka, S.-i., & Matsumoto, T. 2014, , 438, 2278[Masunaga, Miyama, & Inutsuka(1998)]Masunaga98 Masunaga, H., Miyama, S. M., & Inutsuka, S. 1998, , 495, 346.[Matsumoto & Hanawa(2011)]Matsumoto11 Matsumoto, T., & Hanawa, T. 2011, , 728, 47[Matsumoto & Tomisaka(2004)]Matsumoto04 Matsumoto, T., & Tomisaka, K. 2004, , 616, 266[Matsumoto et al.(2015a)]Matsumoto15a Matsumoto, T., Dobashi, K., & Shimoikura, T. 2015a, , 801, 77[Matsumoto et al.(2015b)]Matsumoto15b Matsumoto, T., Onishi, T., Tokuda, K., & Inutsuka, S.-i. 2015b, , 449, L123[Matsumoto(2007)]Matsumoto07 Matsumoto, T. 2007, , 59, 905[Matsumoto(2011)]Matsumoto11b Matsumoto, T. 2011, , 63, 317[Mellon & Li(2008)]Mellon08 Mellon, R. R., & Li, Z.-Y. 2008, , 681, 1356-1376[Miyoshi & Kusano(2005)]Miyoshi05 Miyoshi, T., & Kusano, K. 2005, Journal of Computational Physics, 208, 315[Murillo et al.(2013)]Murillo13 Murillo, N. M., Lai, S.-P., Bruderer, S., Harsono, D., & van Dishoeck, E. F. 2013, , 560, A103[Nakano & Nakamura(1978)]Nakano78 Nakano, T., & Nakamura, T. 1978, , 30, 671[Ohashi et al.(2014)]Ohashi14 Ohashi, N., Saigo, K., Aso, Y., et al. 2014, , 796, 131[Onishi et al.(1998)]Onishi98 Onishi, T., Mizuno, A., Kawamura, A., Ogawa, H., & Fukui, Y. 1998, , 502, 296[Pineda et al.(2011)]Pineda11 Pineda, J. E., Arce, H. G., Schnee, S., et al. 2011, , 743, 201[Pudritz & Norman(1986)]Pudritz86 Pudritz, R. E., & Norman, C. A. 1986, , 301, 571[Pérez et al.(2016)]Perez16 Pérez, L. M., Carpenter, J. M., Andrew, S. M., et al. 2016, Science, 353, 1519 [Seifried et al.(2012)]Seifried12 Seifried, D., Banerjee, R., Pudritz, R. E., & Klessen, R. S. 2012, , 423, L40 [Seifried et al.(2013)]Seifried13 Seifried, D., Banerjee, R., Pudritz, R. E., & Klessen, R. S. 2013, , 432, 3320 [Tassis & Mouschovias(2005)]Tassis05 Tassis, K., & Mouschovias, T. C. 2005, , 618, 783[Tobin et al.(2012)]Tobin12 Tobin, J. J., Hartmann, L., Chiang, H.-F., et al. 2012, , 492, 83[Tokuda et al.(2014)]Tokuda14 Tokuda, K., Onishi, T., Saigo, K., et al. 2014, , 789, L4[Tokuda et al.(2016)]Tokuda16 Tokuda, K., Onishi, T., Matsumoto, T., et al. 2016, , 826, 26[Tomida et al.(2017)]Tomida17 Tomida, K., Machida, M. N., Hosokawa, T., Sakurai, Y., & Lin, C. H. 2017, , 835, L11[Tomisaka et al.(1988)]Tomisaka88 Tomisaka, K., Ikeuchi, S., & Nakamura, T. 1988, , 335, 239[Toomre(1964)]Toomre64 Toomre, A. 1964, , 139, 1217[Truelove et al.(1997)]Truelove97 Truelove, J. K., Klein, R. I., McKee, C. F., Holliman, J. 
H., II, Howell, L. H., & Greenough, J. A. 1997, , 489, L179[Vorobyov et al.(2015)]Vorobyov15 Vorobyov, E. I., Lin, D. N. C., & Guedel, M. 2015, , 573, A5[Yen et al.(2014)]Yen14 Yen, H.-W., Takakuwa, S., Ohashi, N., et al. 2014, , 793, 1[Zhao et al.(2011)]Zhao11 Zhao, B., Li, Z.-Y., Nakamura, F., Krasnopolsky, R., & Shang, H. 2011, , 742, 10[Zuckerman & Evans(1974)]Zuckerman74 Zuckerman, B., & Evans, N. J., II 1974, , 192, L149
A note on unitizations of generalized effect algebras

Gejza Jenča, Department of Mathematics and Descriptive Geometry, Faculty of Civil Engineering, Slovak University of Technology, Radlinského 11, Bratislava 810 05, Slovak Republic. gejza.jenca@stuba.sk

There is a forgetful functor from the category of effect algebras to the category of generalized effect algebras. We prove that this functor is a right adjoint and that the corresponding left adjoint is the well-known unitization construction by Hedlíková and Pulmannová. Moreover, this adjunction is monadic. Primary: 03G12; Secondary: 06F20, 81P10.

§ INTRODUCTION AND PRELIMINARIES

§.§ Introduction

In <cit.>, the authors proved that every generalized effect algebra can be embedded into an effect algebra. The construction was subsequently studied and applied by several authors, for example in <cit.>, <cit.>, <cit.>. A generalization of the unitization construction to pseudoeffect algebras was recently introduced and studied in <cit.>. It is easy to see that this unitization construction is functorial. We prove that this unitization functor is left adjoint to the forgetful functor from the category of effect algebras to the category of generalized effect algebras, and that this adjunction is monadic. Thus, the category of effect algebras is the category of algebras for a monad defined on the category of generalized effect algebras. We assume working knowledge of basic category theory <cit.> and of the theory of effect algebras <cit.>.

§.§ Generalized effect algebras

Let P be a partial algebra with a nullary operation 0 and a binary partial operation ⊕. Denote the domain of ⊕ by ⊥. P is called a generalized effect algebra iff for all a,b,c ∈ P the following conditions are satisfied: (P1) a⊥b implies b⊥a and a⊕b = b⊕a. (P2) b⊥c and a⊥(b⊕c) implies a⊥b, (a⊕b)⊥c and a⊕(b⊕c) = (a⊕b)⊕c. (P3) a⊥0 and a⊕0 = a. (P4) a⊕b = a⊕c implies b = c. (P5) a⊕b = 0 implies a = 0.

In a generalized effect algebra P, we write a ≤ b iff a⊕c = b for some c ∈ P. It is easy to see that ≤ is a partial order and that 0 is the least element of the poset (P,≤). We denote c = a⊖b iff a = b⊕c. Owing to cancellativity, ⊖ is a well-defined partial operation with domain ≥.

Let P_1, P_2 be generalized effect algebras. A map f: P_1 → P_2 is called a morphism of generalized effect algebras if and only if it satisfies the following conditions.
* f(0) = 0.
* If a⊥b, then f(a)⊥f(b) and f(a⊕b) = f(a)⊕f(b).
A morphism f: P_1 → P_2 is full if f(a)⊥f(b) implies that there are a_1, b_1 ∈ P_1 such that a_1⊥b_1, f(a) = f(a_1) and f(b) = f(b_1).

§.§ Effect algebras

An effect algebra is a generalized effect algebra bounded above. Unwinding this definition, we observe that an effect algebra is a partial algebra (E;⊕,0,1) with a binary partial operation ⊕ and two nullary operations 0, 1 such that the reduct (E;⊕,0) is a generalized effect algebra and 1 is the greatest element of E. Effect algebras were introduced by Foulis and Bennett in their paper <cit.>. See also <cit.> and <cit.> for equivalent definitions, introduced independently.

Let E_1, E_2 be effect algebras. A map ϕ: E_1 → E_2 is called a morphism of effect algebras if and only if it is a morphism of generalized effect algebras satisfying the condition ϕ(1) = 1. A morphism ϕ: E_1 → E_2 is a full morphism if and only if ϕ(a)⊥ϕ(b) implies that there are x, y ∈ E_1 such that ϕ(x) = a, ϕ(y) = b and x⊥y.
A full, bijective morphism is an isomorphism.

§ THE UNITIZATION FUNCTOR

The category of generalized effect algebras is denoted by GEA, the category of effect algebras is denoted by EA, and U: EA → GEA is the evident forgetful functor. Let us define a functor F: GEA → EA, called unitization. For a generalized effect algebra P, F(P) is a partial algebra with underlying set P ∪̇ P^*, where P ∩ P^* = ∅ and a ↦ a^* is a bijection from P to P^*, equipped with a partial operation given as follows: for all a, b ∈ P,
* a ⊥_F(P) b iff a ⊥_P b, and then a ⊕_F(P) b = a ⊕_P b,
* a ⊥_F(P) b^* iff a ≤ b, and then a ⊕_F(P) b^* = (b ⊖_P a)^*,
* a^* ⊥_F(P) b iff a ≥ b, and then a^* ⊕_F(P) b = (a ⊖_P b)^*,
* a^* ⊥̸_F(P) b^*.
This construction was introduced by Hedlíková and Pulmannová in <cit.>. They proved that (F(P),⊕,0,0^*) is always an effect algebra. The basic idea of the construction predates effect algebras; see <cit.>, <cit.>.

Let P be the poset on the left-hand side of Figure <ref>. There is a unique partial operation ⊕ on P making (P,⊕,0) into a generalized effect algebra: a⊕b = b⊕a = c, b⊕b = d and 0⊕x = x⊕0 = x for all x ∈ P. The Hasse diagram of the effect algebra F(P) appears on the right-hand side of the picture.

Let us consider the generalized effect algebra (in fact, a total monoid) (ℕ,+,0), where + is the ordinary addition of natural numbers. Then F(ℕ) is a totally ordered MV-algebra, also known under the name Chang's MV-algebra.

For a morphism of generalized effect algebras f: P → Q, the morphism F(f): F(P) → F(Q) is given by F(f)(a) = f(a) and F(f)(a^*) = (f(a))^*. It is easy to check that F(f) is a morphism in the category EA and that F: GEA → EA is a functor.

F is left adjoint to U. Let us define the unit η. For every generalized effect algebra P, the component η_P: P → UF(P) is the embedding η_P(x) = x. This is obviously a natural transformation 1_GEA → UF. Let E be an effect algebra; to define the component of the counit ϵ at E, we need to take a closer look at FU(E). Let us prove that w: FU(E) → E×{0,1}, given by w(a) = (a,0) and w(a^*) = (a',1) for a ∈ E, is an isomorphism of effect algebras. Indeed, suppose that x, y ∈ FU(E) and that x⊥y. The only nontrivial case we have to check is when there are a, b ∈ E such that x = a, y = b^* and a ≤ b; in this case x⊕y = (b ⊖_E a)^* and w(x⊕y) = w((b ⊖_E a)^*) = ((b⊖a)',1) = (b'⊕a,1) = (a,0)⊕(b',1) = w(a)⊕w(b^*) = w(x)⊕w(y). The morphism w is easily seen to be full and bijective, hence an isomorphism. We may now define ϵ_E: FU(E) → E as the composition of w with the canonical projection p: E×{0,1} → E. Explicitly, ϵ_E(x) = x for x ∈ E and ϵ_E(x^*) = x' for x^* ∈ E^*. The commutativity of the naturality square of ϵ is also clear.

Let us check the triangle identities: we need to prove that the composites F ⇒ FUF ⇒ F (given by Fη followed by ϵF) and U ⇒ UFU ⇒ U (given by ηU followed by Uϵ) are the identity transformations 1_F and 1_U, respectively. To observe the first identity, let P be a generalized effect algebra. If x ∈ P, then (Fη_P)(x) = x and ϵ_F(P)(x) = x. If x^* ∈ P^*, then (Fη_P)(x^*) = (η_P(x))^* = x^* and ϵ_F(P)(x^*) = x^*. To observe the second identity, let E be an effect algebra and let x ∈ U(E). Then η_U(E)(x) = x and (Uϵ_E)(x) = x.

Let us consider the real interval [0,1]_ℝ, equipped with the usual addition of real numbers restricted to [0,1]_ℝ, meaning that a⊕b is defined if and only if a+b ≤ 1, and then a⊕b := a+b. A morphism of generalized effect algebras P → [0,1]_ℝ is called an additive map on P. For an effect algebra E, a state on E is an additive map E → [0,1]_ℝ preserving the unit, so a state is a morphism in EA.

Every additive map on P uniquely extends to a state on F(P).
If s is an additive map on P, then there is a unique state s̅: F(P) → [0,1]_ℝ such that U(s̅) ∘ η_P = s, that is, such that the evident triangle P → UF(P) → U([0,1]_ℝ) commutes. Every state f on an effect algebra must satisfy f(x') = 1 - f(x). Therefore, if s is an additive map on P, then the state s̅ on F(P) corresponding to s via the bijection established in <ref> is necessarily given by s̅(x) = s(x) for x ∈ P and s̅(x^*) = 1 - s(x) for x^* ∈ P^*.

In a very similar way, one can prove the following: there is a natural one-to-one correspondence between ideals of P and morphisms F(P) → 2^2, where 2^2 is the Boolean algebra with two atoms.

The forgetful functor U: EA → GEA creates coequalizers. Let f, g: A → B be a pair of morphisms in EA, and let h: U(B) → Z be a coequalizer of U(f), U(g) in GEA. We need to prove that h(1) is the top element of Z. Consider the pair U(f), U(g): U(A) → U(B), the coequalizer h: U(B) → Z, the corestriction t: U(B) → [0,h(1)]_Z given by t(x) = h(x), and the inclusion j: [0,h(1)]_Z ↪ Z, so that j∘t = h. Since t∘U(f) = t∘U(g), there is a unique u: Z → [0,h(1)]_Z such that u∘h = t. This gives us j∘u∘h = h. Since h is an epimorphism, j∘u = id_Z, and now we see that for every x ∈ Z, x = j(u(x)) ≤ h(1), because the range of j is bounded above by h(1). It remains to prove that h is a coequalizer in EA. If E is a generalized effect algebra bounded above and s is a top-preserving morphism such that s∘U(f) = s∘U(g), then there is a unique morphism u in GEA such that u∘h = s. However, since both h and s preserve the top element, u must be top-preserving as well.

Recall <cit.> that every adjunction F ⊣ U between categories 𝒟 and 𝒞 gives rise to a monad (UF, η, UϵF) on 𝒞. An adjunction is monadic if U is equivalent to the forgetful functor coming from the category of algebras 𝒞^UF for the monad (UF, η, UϵF), and the comparison functor then gives an equivalence 𝒟 ≃ 𝒞^UF.

The adjunction (F, U, η, ϵ) is monadic. By Beck's theorem <cit.>, an adjunction is monadic if and only if U creates absolute coequalizers. By Lemma <ref>, U creates all coequalizers.

EA ≃ GEA^UF.

This research is supported by grant VEGA G-2/0059/12 of MŠ SR, Slovakia and by the Slovak Research and Development Agency under the contracts APVV-0073-10, APVV-0178-11.

DvuPul:NTiQS Dvurečenskij, A., Pulmannová, S.: New Trends in Quantum Structures. Kluwer, Dordrecht and Ister Science, Bratislava (2000)
FouBen:EAaUQL Foulis, D., Bennett, M.: Effect algebras and unsharp quantum logics. Found. Phys. 24, 1325–1346 (1994)
foulis2014unitizing Foulis, D.J., Pulmannová, S.: Unitizing a generalized pseudo effect algebra. Order (2014). (to appear)
GiuGre:TaFLfUP Giuntini, R., Greuling, H.: Toward a formal language for unsharp properties. Found. Phys. 19, 931–945 (1989)
hedlikova1996generalized Hedlíková, J., Pulmannová, S.: Generalized difference posets and orthoalgebras. Acta Math. Univ. Comenianae 65(2), 247–279 (1996)
janowitz1968note Janowitz, M.: A note on generalized orthomodular lattices. J. Natur. Sci. Math 8, 89–94 (1968)
KopCho:DP Kôpka, F., Chovanec, F.: D-posets. Math. Slovaca 44, 21–34 (1994)
mac1998categories MacLane, S.: Categories for the Working Mathematician. No. 5 in Graduate Texts in Mathematics. Springer-Verlag (1971)
mayet1991generalized Mayet-Ippolito, A.: Generalized orthomodular posets. Demonstratio Math 24, 263–274 (1991)
paseka2009isomorphism Paseka, J., Riečanová, Z.: Isomorphism theorems on generalized effect algebras based on atoms. Information Sciences 179(5), 521–528 (2009)
riecanova2008effect Riečanová, Z.: Effect algebraic extensions of generalized effect algebras and two-valued states.
Fuzzy Sets and Systems 159(9), 1116–1122 (2008)riecanova2005generalized Riečanová, Z., Marinová, I.: Generalized homogeneous, prelattice and MV-effect algebras. Kybernetika 41(2), 129–142 (2005)
Tree Edit Distance Cannot be Computed in Strongly Subcubic Time (unless APSP can)

Karl Bringmann (Max Planck Institute for Informatics, Saarland Informatics Campus), Paweł Gawrychowski (University of Haifa; partially supported by the Israel Science Foundation grant 794/13), Shay Mozes (IDC Herzliya; partially supported by the Israel Science Foundation grant 794/13), Oren Weimann

Existing strategies for finite-armed stochastic bandits mostly depend on a parameter of scale that must be known in advance. Sometimes this is in the form of a bound on the payoffs, or the knowledge of a variance or subgaussian parameter. The notable exceptions are the analysis of Gaussian bandits with unknown mean and variance by <cit.> and of uniform distributions with unknown support <cit.>. The results derived in these specialised cases are generalised here to the non-parametric setup, where the learner knows only a bound on the kurtosis of the noise, which is a scale free measure of the extremity of outliers.

§ INTRODUCTION

The purpose of this note is to show that logarithmic regret is possible for finite-armed bandits with no assumptions on the noise of the payoffs except for a known finite bound on the kurtosis, which corresponds to knowing the likelihood/magnitude of outliers <cit.>. Importantly, the kurtosis is independent of the location of the mean and the scale of the central tendency (the variance). This generalises the ideas of <cit.> beyond the Gaussian case with unknown mean and variance to the non-parametric setting. The setup is as follows. Let k ≥ 2 be the number of bandits (or arms). In each round 1 ≤ t ≤ n the player should choose an action A_t ∈ {1,…,k} and subsequently receives a reward X_t ∼ ν_A_t, where ν_1,…,ν_k are a set of distributions that are not known in advance. Let μ_i be the mean payoff of the ith arm, μ^* = max_i μ_i and Δ_i = μ^* - μ_i. The regret measures the expected deficit of the player relative to the optimal choice of distribution:

R_n = E[ ∑_t=1^n Δ_A_t ] .

The table below summarises many of the known results on the optimal achievable asymptotic regret under different assumptions on {ν_i}. A reference for each of the upper bounds is given in Table <ref>, while the lower bounds are mostly due to <cit.> and <cit.>. An omission from the table is the case when the distributions are known to lie in a single-parameter exponential family (which does not fit well with the columns); details are given by <cit.>. With the exception of rows 5 and 8 in Table <ref>, all entries depend on some kind of scale parameter. Missing is an entry for a non-parametric assumption that is scale free. This paper fills that gap with the following assumption and regret guarantee.

There exists a known κ ∈ ℝ such that for all 1 ≤ i ≤ k, the kurtosis of X ∼ ν_i satisfies Kurt[X] = E[(X - E[X])^4] / V[X]^2 ≤ κ.

If Assumption 1 holds, then the algorithm described in <ref> satisfies

lim sup_n→∞ R_n / log(n) ≤ C ∑_i:Δ_i > 0 Δ_i ( κ - 1 + σ^2_i/Δ_i^2 ) ,

where σ^2_i is the variance of ν_i and C > 0 is a universal constant.

What are the implications of this result? The first point is that the algorithm in <ref> is scale and translation invariant, in the sense that its behaviour does not change if the payoffs are multiplied by a positive constant or shifted.
The regret also depends appropriately on the scale, so that multiplying the rewards by a positive constant factor also multiplies the regret by this factor. As far as I know, this is the first scale free bandit algorithm with logarithmic regret on a non-parametric class. The assumption of bounded kurtosis is much less restrictive than assuming an exact Gaussian model (which has kurtosis 3) or a uniform one (kurtosis 9/5). See Table <ref> for other examples.

Distribution | Parameters | Kurtosis
Gaussian | μ ∈ ℝ, σ^2 > 0 | 3
Bernoulli | μ ∈ [0,1] | (1 - 3μ(1-μ)) / (μ(1-μ))
Exponential | λ > 0 | 9
Laplace | μ ∈ ℝ, b > 0 | 6
Uniform | a < b ∈ ℝ | 9/5

The assumption can also be justified from a mathematical perspective. If the variance of an arm is not assumed known, then calculating confidence intervals requires an estimate of the variance from the data. Let X, X_1, X_2, …, X_n be a sequence of i.i.d. centered random variables with finite variance σ^2. A reasonable estimate of σ^2 is

σ̂^2 = 1/n ∑_t=1^n X_t^2 .

Clearly this estimator is unbiased and has variance

V[σ̂^2] = ( E[X^4] - E[X^2]^2 ) / n = σ^4 (κ - 1) / n .

Therefore, if we are to expect good estimation of σ^2, then the kurtosis should be finite. Note that if σ^2 is estimated by (<ref>), then the central limit theorem combined with finite kurtosis is enough for an estimation error of O(σ^2 ((κ-1)/n)^1/2) asymptotically. For bandits, however, finite-time bounds are required, which are not available using (<ref>) without additional moment assumptions (for example, on the moment generating function). Finite kurtosis alone is enough if the classical empirical estimator is replaced by a robust estimator such as the median-of-means estimator <cit.> or Catoni's estimator <cit.>.

Contributions The main contribution is the new assumption, algorithm, and the proof of Theorem <ref> (see <ref>). The upper bound is also complemented by a lower bound (<ref>).

Additional notation Let T_i(t) = ∑_s=1^t 1{A_s = i} be the number of times arm i has been played after round t. For measures P, Q on the same probability space, KL(P,Q) is the relative entropy between P and Q and χ^2(P,Q) is the χ^2 distance. The following lemma is well known.

Let X_1, X_2 be independent random variables, with X_i having variance σ^2_i, kurtosis κ_i < ∞ and skewness γ_i = E[(X_i - E[X_i])^3] / σ_i^3. Then:
(a) Kurt[X_1 + X_2] = 3 + ( σ_1^4(κ_1 - 3) + σ_2^4(κ_2 - 3) ) / (σ_1^2 + σ_2^2)^2 .
(b) γ_i ≤ √(κ_i - 1) .

§ ALGORITHM AND UPPER BOUND

Like the robust upper confidence bound algorithm by <cit.>, the new algorithm makes use of the robust median-of-means estimator.

Median-of-means estimator Let Y_1, Y_2, …, Y_n be a sequence of independent and identically distributed random variables. The median-of-means estimator first partitions the data into m blocks of equal size (up to rounding errors). The empirical mean of each block is then computed, and the estimate is the median of the means of the blocks. The number of blocks depends on the desired confidence level and should be O(log(1/δ)).
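To fix ideas, here is a minimal sketch of the estimator in Python (NumPy assumed); the block count ⌈8 log(1/δ)⌉ is an illustrative choice consistent with the O(log(1/δ)) prescription above, not the constant used in the analysis below.

```python
import numpy as np

def median_of_means(y, delta):
    """Median-of-means estimate of the mean of y at confidence level delta."""
    y = np.asarray(y, dtype=float)
    m = min(len(y), max(1, int(np.ceil(8.0 * np.log(1.0 / delta)))))
    blocks = np.array_split(y, m)              # m blocks of (almost) equal size
    return float(np.median([b.mean() for b in blocks]))

# demonstration on heavy-tailed noise with finite kurtosis
rng = np.random.default_rng(0)
x = rng.pareto(4.5, size=100_000)              # Lomax/Pareto II, mean 1/(a-1)
print(median_of_means(x, delta=0.01), 1 / 3.5)
```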
The median-of-means estimator at confidence level δ ∈ (0,1) is denoted by MoM_δ({Y_t}_t=1^n).

Let Y_1, Y_2, …, Y_n be a sequence of independent and identically distributed random variables with mean μ and variance σ^2 < ∞. Then

P( |MoM_δ({Y_t}_t=1^n) - μ| ≥ C_1 √( (σ^2/n) log(C_2/δ) ) ) ≤ δ ,

where C_1 = √(12·16) and C_2 = exp(1/8) are universal constants.

Upper confidence bounds The algorithm is an obvious generalisation of UCB, but with optimistic estimates of the mean and variance. Let δ ∈ (0,1) and Y_1, Y_2, …, Y_t be a sequence of independent and identically distributed random variables with mean μ, variance σ^2 and kurtosis κ < ∞. Furthermore, let

μ̃({Y_s}_s=1^t, δ) = sup{ θ ∈ ℝ : θ ≤ MoM_δ({Y_s}_s=1^t) + C_1 √( (σ̃^2_t({Y_s}_s=1^t, θ, δ)/t) log(C_2/δ) ) } ,

where

σ̃_t^2({Y_s}_s=1^t, θ, δ) = MoM_δ({(Y_s - θ)^2}_s=1^t) / max{ 0, 1 - C_1 √( ((κ - 1)/t) log(C_2/δ) ) } .

Note that μ̃({Y_s}_s=1^t, δ) may be (positive) infinite if t is insufficiently large. The following two lemmas show that μ̃ is indeed optimistic with high probability, and also that it concentrates with reasonable speed around the true mean.

P( μ̃({Y_s}_s=1^t, δ) ≤ μ ) ≤ 2δ .

Apply a union bound and Lemma <ref>.

Let δ_t be monotone decreasing and μ̃_t = μ̃({Y_s}_s=1^t, δ_t). Then there exists a universal constant C_3 such that for any ϵ > 0,

∑_t=1^n P( μ̃_t ≥ μ + ϵ ) ≤ C_3 max{ κ - 1, σ^2/ϵ^2 } log(C_2/δ_n) + 2∑_t=1^n δ_t .

First, by Lemma <ref>,

∑_t=1^n P( |MoM_δ_t({Y_s}_s=1^t) - μ| ≥ C_1 √( (σ^2/t) log(C_2/δ_t) ) ) ≤ ∑_t=1^n δ_t .

Similarly,

∑_t=1^n P( |MoM_δ_t({(Y_s-μ)^2}_s=1^t) - σ^2| ≥ C_1 σ^2 √( ((κ - 1)/t) log(C_2/δ_t) ) ) ≤ ∑_t=1^n δ_t .

Suppose that t is a round where all of the following hold:
(a) |MoM_δ_t({Y_s}_s=1^t) - μ| < C_1 √( (σ^2/t) log(C_2/δ_t) ) .
(b) |MoM_δ_t({(Y_s - μ)^2}_s=1^t) - σ^2| < C_1 σ^2 √( ((κ - 1)/t) log(C_2/δ_t) ) .
(c) t ≥ 16 C_1^2 (κ-1) log(C_2/δ_t) .

Abbreviating σ̃^2_t = σ̃^2({Y_s}_s=1^t, μ̃_t, δ_t) and μ̂_t = MoM_δ_t({Y_s}_s=1^t),

σ̃^2_t = MoM_δ_t({(Y_s - μ̃_t)^2}_s=1^t) / ( 1 - C_1 √( ((κ-1)/t) log(C_2/δ_t) ) )
≤ 2 MoM_δ_t({(Y_s - μ̃_t)^2}_s=1^t)
≤ 4 MoM_δ_t({(Y_s - μ)^2}_s=1^t) + 4(μ̃_t - μ)^2
≤ 4 MoM_δ_t({(Y_s - μ)^2}_s=1^t) + 8(μ̃_t - μ̂_t)^2 + 8(μ̂_t - μ)^2
< 4σ^2 + 4C_1 σ^2 √( ((κ-1)/t) log(C_2/δ_t) ) + 8C_1^2 (σ^2 + σ̃^2_t) ((κ-1)/t) log(C_2/δ_t)
≤ (11/2) σ^2 + σ̃^2_t/2 ,

where the first inequality follows from (c), the second since (x - y)^2 ≤ 2x^2 + 2y^2 and the fact that MoM_δ({a Y_s + b}_s=1^t) = a MoM_δ({Y_s}_s=1^t) + b, the third inequality again uses (x - y)^2 ≤ 2x^2 + 2y^2, while the last uses the definition of μ̃_t and (b). Therefore σ̃^2_t ≤ 11σ^2, which means that if (a–c) hold and additionally
(d) t ≥ 19 C_1^2 (σ^2/ϵ^2) log(1/δ_n) ,
then

|μ̃_t - μ| ≤ |μ̃_t - μ̂_t| + |μ̂_t - μ| < C_1 √( (σ̃_t^2/t) log(C_2/δ_n) ) + C_1 √( (σ^2/t) log(C_2/δ_n) ) ≤ C_1 √( (11σ^2/t) log(C_2/δ_n) ) + C_1 √( (σ^2/t) log(C_2/δ_n) ) ≤ ϵ .

Combining this with (<ref>) and (<ref>) and choosing C_3 = 19 C_1^2 completes the result.

Algorithm The new algorithm simply uses the upper confidence bound of the last section. Let δ_t = 1/(t^2 log(1+t)) and

μ̃_i(t) = μ̃({X_s : s ∈ [t], A_s = i}, δ_t) ∈ (-∞, ∞] .

In each round the algorithm chooses A_t = argmax_i ∈ [k] μ̃_i(t-1), where ties are broken arbitrarily. Assume without loss of generality that μ_1 = μ^*. The regret is

R_n = ∑_i=1^k Δ_i E[T_i(n)] .

A bound on E[T_i(n)] follows immediately from Lemmas <ref> and <ref>:

E[T_i(n)] ≤ ∑_t=1^n P( μ̃_1(t-1) ≤ μ_1 ) + ∑_t=1^n P( μ̃_i(t-1) ≥ μ_1 and A_t = i ) .

The first term is bounded using Lemma <ref>:

∑_t=1^n P( μ̃_1(t-1) ≤ μ_1 ) ≤ ∑_t=1^n ∑_u=1^t P( μ̃_1(t-1) ≤ μ_1 and T_1(t-1) = u ) ≤ 2∑_t=1^n ∑_u=1^t δ_t = 2∑_t=1^n t δ_t = o(log(n)) .

The second term is bounded using Lemma <ref>:
∑_t=1^n P( μ̃_i(t-1) ≥ μ_1 and A_t = i ) ≤ ∑_t=1^n P( μ̃_i(t-1) - μ_i ≥ Δ_i ) ≤ C_3 max{ κ - 1, σ^2_i/Δ_i^2 } log(C_2/δ_n) + 2∑_t=1^n δ_t ,

where the final sum is o(log(n)). Combining the last two displays with (<ref>) completes the proof.

§ LOWER BOUNDS

I briefly present some lower bounds. For the remainder, assume a fixed bandit strategy. We need two sets of distributions on ℝ:

𝒱_σ = { ν : ν is σ^2-subgaussian } .
𝒱_κ_0 = { ν : ν has kurtosis less than κ_0 } .

Following the nomenclature of <cit.>, a bandit strategy is called consistent over a set of distributions 𝒱 if R_n = o(n^p) for all p ∈ (0,1) and all bandits in 𝒱^k. A bandit {ν_i} is called non-trivial if there exists a suboptimal arm. The first theorem shows that if a strategy is consistent over ⋃_σ≥0 𝒱_σ^k, then it does not enjoy logarithmic regret on any non-trivial bandit. The proof is quite standard and is simply omitted.

Suppose there exists a σ > 0 and a non-trivial bandit {ν_i} ∈ 𝒱_σ^k such that

lim sup_n→∞ R_n / log(n) < ∞ .

Then the strategy is not consistent over ⋃_σ≥0 𝒱_σ^k. There are consistent strategies over ⋃_σ≥0 𝒱_σ^k. For example, let f(t) be a monotone increasing function with f(t) = ω(log(t)) and f(t) = o(t^p) for all p ∈ (0,1), and consider the strategy that maximises the following index:

μ̂_i(t-1) + √( f(t)/T_i(t-1) ) .

By following the analysis in Chapter 2 of the book by <cit.> and noting that for t sufficiently large f(t) ≥ 2σ^2 log(t), it is easy to show that this strategy satisfies

lim sup_n→∞ R_n / f(n) = ∑_i:Δ_i > 0 1/Δ_i

for any bandit {ν_i} ∈ 𝒱_σ^k. It is important to emphasise that the asymptotics here hide large constants that depend on τ = min{t : f(t) ≥ 2σ^2 log(t)}. The next theorem shows that the upper bound derived in the previous section is nearly tight up to constant factors. Like most lower bounds, the proof relies on understanding the information geometry of the set of possible distributions. Let 𝒱 be a family of distributions, let {ν_i} be a non-trivial bandit and let i be a suboptimal arm. <cit.> showed that for any consistent strategy

lim inf_n→∞ E[T_i(n)] / log(n) ≥ sup{ 1/KL(ν_i, ν'_i) : ν_i' ∈ 𝒱 and E_X∼ν'_i[X] > μ^* } .

In parameterised families of distributions, the optimisation problem can often be evaluated analytically (e.g., Bernoulli, Gaussian with known variance, Gaussian with unknown variance, Exponential). For non-parametric families the calculation is much more challenging. The following theorem takes the first steps towards understanding this problem for the class of distributions 𝒱_κ_0 with κ_0 ≥ 7/2.

Let κ_0 ≥ 7/2, Δ > 0 and ν ∈ 𝒱_κ_0 with mean μ, variance σ^2 > 0 and kurtosis κ. Then

inf{ KL(ν, ν') : ν' ∈ 𝒱_κ_0 and E_X∼ν'[X] > μ + Δ } ≤ min{ log(1/(1 - p)), C' Δ^2/σ^2 } if C κ^1/2(κ+1)Δ/σ < κ_0 - κ, and ≤ log(1/(1 - p)) otherwise,

where C, C' > 0 are universal constants and p = min{ Δ/σ, 1/κ_0 }. Notice that the result is strongest in the 'interior' of 𝒱_κ_0 (that is, when κ ≪ κ_0). In fact, this is necessary because 𝒱_κ_0 includes Bernoulli distributions with kurtosis approaching κ_0, and in this case there is very little wiggle room available to perturb the mean of the measure without also increasing the kurtosis. Since log(1 + x) ≤ x for all x we have

1/log(1/(1-p)) ≥ (1-p)/p = Ω( κ_0 + σ/Δ ) .

This means that provided κ and Δ are sufficiently small relative to κ_0, the lower bound derived from the above theorem and (<ref>) matches the upper bound in the previous section up to constant factors. The proof of Theorem <ref> involves explicit alternative distributions ν' based on ν and is given in <ref>.

§ SUMMARY

The assumption of finite kurtosis generalises the parametric Gaussian assumption to a comparable non-parametric setup with a similar basic structure. Of course there are several open questions.
Optimal constants The leading constants in the main results (Theorem <ref> and Theorem <ref>) are certainly quite loose. Deriving the optimal form of the regret is an interesting challenge, with both lower and upper bounds appearing quite non-trivial. It may be necessary to resort to an implicit analysis showing that (<ref>) is (or is not) achievable when 𝒱 is the class of distributions with kurtosis bounded by some κ_0. Even then, constructing an efficient algorithm would remain a challenge. Certainly what has been presented here is quite far from optimal. At the very least the median-of-means estimator needs to be replaced, or the analysis improved. An excellent candidate is Catoni's estimator <cit.>, which is slightly more complicated than the median-of-means, but also comes with smaller constants and could be plugged into the algorithm with very little effort. For the lower bound, there appears to be almost no work on the explicit form of the lower bounds presented by <cit.> in interesting non-parametric classes beyond rewards with bounded or semi-bounded support <cit.>.

Non-parametric Thompson sampling If an appropriate prior is used, then Thompson sampling has recently been shown to achieve the optimal rate when the distributions are Gaussian with unknown means and variances <cit.>. It is natural to ask whether this algorithm can be generalised to the non-parametric setting discussed here. Note that this is possible in the case where the rewards have bounded support <cit.>.

Absorbing other improvements There has recently been a range of improvements to the confidence level for the classical upper confidence bound algorithms that shave logarithmic terms from the worst-case regret or improve the lower-order terms in the finite-time bounds <cit.>. Many of these enhancements can be incorporated into the algorithm presented here, which may lead to practical and theoretical improvements.

Replacing median-of-means with self-normalised inequalities While the median-of-means led to the simple analysis presented here, there is another approach that has the potential to lead to significantly smaller constants, which is to use the theory of self-normalised processes <cit.>.

Comparison to Bernoulli Table <ref> shows that the kurtosis of a Bernoulli random variable with mean μ is κ = O(1/(μ(1-μ))), which is obviously not bounded as μ tends towards the boundaries. The optimal asymptotic regret for the Bernoulli case is

lim_n→∞ R_n / log(n) = ∑_i:Δ_i > 0 Δ_i / d(μ_i, μ^*) .

The interesting differences occur near the boundary of the parameter space. Suppose that μ_i ≈ 0 for some arm i and μ^* > 0 is close to zero. An easy calculation shows that d(μ_i, μ^*) ≈ log(1/(1 - Δ_i)) ≈ Δ_i. Therefore

lim inf_n→∞ E[T_i(n)] / log(n) ≈ 1/log(1/(1-Δ_i)) ≈ 1/Δ_i .

Here we see an algorithm enjoying logarithmic regret on a class with infinite kurtosis! But this is a very special case and is not possible in general, as demonstrated by Theorem <ref>. The reason is that the structure of the hypothesis class allows strategies to (essentially) estimate the kurtosis with reasonable accuracy and anticipate outliers more/less depending on the data observed so far.

§ PROOF OF THEOREM <REF>

Assume without loss of generality that ν is centered and has variance σ^2 = 1, which can always be achieved by shifting and scaling (neither affects the kurtosis or the relative entropy). The result is proved by piecing together two ideas.
The first idea is to perturb the distribution by adding a Bernoulli 'outlier'. The second idea is to perturb the distribution more smoothly. Let X be a random variable sampled from ν and B be a Bernoulli with parameter p = min{Δ, 1/κ_0}. Let Z = X + Y where Y = ΔB/p. Then E[Z] = Δ and

Kurt[Z] = 3 + ( κ - 3 + ((1-p)^2Δ^2/p)^2 · (1-6p(1-p))/(p(1-p)) ) / ( 1 + (1-p)^2Δ^2/p )^2
≤ 3 + ( κ_0 - 3 + ((1-p)^2Δ^2/p)^2 · (1-6p(1-p))/(p(1-p)) ) / ( 1 + (1-p)^2Δ^2/p )^2 ≤ κ_0 ,

where the first inequality used Lemma <ref> and the final inequality follows from calculus and the assumption that κ_0 ≥ 7/2. Let ν' = ℒ(Z) be the law of Z. Then

KL(ν, ν') ≤ log(1/(1-p)) .

Moving on to the second idea, where I use C for a universal positive constant that changes from equation to equation. Let A = {x : |x| ≤ √(aκ)} and A̅ = ℝ ∖ A. Define the alternative measure ν'(E) = ∫_E (1 + g(x)) dν(x), where g(x) = (α + βx) 1{x ∈ A} for some constants α and β chosen so that

∫ g(x) dν(x) = α ∫_A dν(x) + β ∫_A x dν(x) = 0 .
∫ g(x) x dν(x) = α ∫_A x dν(x) + β ∫_A x^2 dν(x) = Δ .

Solving for α and β shows that

β = Δ / ( ∫_A x^2 dν(x) - (∫_A x dν(x))^2/ν(A) ) and α = -Δ ∫_A x dν(x) / ( ν(A) ∫_A x^2 dν(x) - (∫_A x dν(x))^2 ) .

We still need to show that ν' is a probability measure, which will follow from the positivity of 1 + g(·). The first step is to control each of the terms appearing in the definitions of α and β. By the Cauchy-Schwarz and Chebyshev inequalities,

ν(A̅) = ν(x^2 ≥ aκ) ≤ 1/(κ a^2) and ∫_A x^2 dν(x) = 1 - ∫_A̅ x^2 dν(x) ≥ 1 - √(κ ν(A̅)) ≥ 1 - 1/a .

Similarly,

|∫_A x dν(x)| = |∫_A̅ x dν(x)| ≤ √(σ^2 ν(A̅)) ≤ 1/(a√κ) .

Therefore, choosing a = 2, we have

|α| = Δ |∫_A x dν(x)| / ( ν(A) ∫_A x^2 dν(x) - (∫_A x dν(x))^2 ) ≤ Δ (1/(a√κ)) / ( (1 - 1/(κ a^2))(1 - 1/a) - 1/(a^2 κ) ) ≤ 4Δ/√κ ,
|β| = Δ / ( ∫_A x^2 dν(x) - (∫_A x dν(x))^2/ν(A) ) ≤ Δ / ( 1 - 1/a - (1/(κ a^2)) / (1 - 1/(a^2 κ)) ) ≤ 6Δ .

Now g(x) is an increasing linear function supported on A, so

max_x∈ℝ |g(x)| = max{ |g(√(aκ))|, |g(-√(aκ))| } ≤ |α| + √(aκ) |β| ≤ 4Δ/√κ + 6Δ√(2κ) ≤ 1/2 ,

where the last inequality follows by assuming that

Δ ≤ √κ / ( 4(2 + 3√2 κ) ) = O(κ^-1/2) ,

which is reasonable without loss of generality, since if Δ is larger than this quantity, then we would prefer the bound derived in the first part of the proof. The relative entropy between ν and ν' is bounded by

KL(ν, ν') ≤ χ^2(ν, ν') = ∫ ( dν(x)/dν'(x) - 1 )^2 dν'(x) = ∫_A g(x)^2/(1 + g(x)) dν(x) ≤ 2 ∫_A g(x)^2 dν(x) ≤ 4 ∫_A α^2 dν(x) + 4 ∫_A β^2 x^2 dν(x) ≤ 4α^2 + 4β^2 ≤ 4·16 Δ^2/κ + 4·36 Δ^2 ≤ C Δ^2 .

In order to bound the kurtosis we need to evaluate the moments:

∫ x^2 dν' = ∫ x^2 dν + ∫_A g(x) x^2 dν = 1 + α ∫_A x^2 dν(x) + β ∫_A x^3 dν(x) ≤ 1 + C Δ √κ ,
∫ x^2 dν' = ∫ x^2 dν + ∫_A g(x) x^2 dν ≥ 1 - C Δ √κ ,
∫ x^4 dν' = ∫ x^4 dν + ∫_A g(x) x^4 dν = κ + α ∫_A x^4 dν(x) + β ∫_A x^5 dν(x) ≤ κ (1 + C Δ √κ) ,
|∫ x^3 dν'(x)| ≤ √( ∫ x^2 dν'(x) ∫ x^4 dν'(x) ) ≤ √(Cκ) .

Therefore, if κ' is the kurtosis of ν', then

κ' = ∫ (x - Δ)^4 dν'(x) / ( ∫ x^2 dν'(x) - Δ^2 )^2 = ( ∫ x^4 dν'(x) - 3Δ^4 + 6Δ^2 ∫ x^2 dν'(x) - 4Δ ∫ x^3 dν'(x) ) / ( 1 - Δ^2 + α ∫_A x^2 dν(x) + β ∫_A x^3 dν(x) )^2 .

As a brief aside, if ν is symmetric, then the odd moments vanish and ∫_A x^i dν(x) = 0 for odd i. Therefore α = 0 and

κ' = ( κ - 3Δ^4 + 6Δ^2 ) / (1 - Δ^2)^2 ≤ ( κ + 6Δ^2 ) / (1 - 2Δ^2) = κ + ( 6Δ^2 + 2κΔ^2 ) / (1 - 2Δ^2) ≤ κ + C κ Δ^2 .

On the other hand, if ν is not symmetric, then the odd moments must be controlled:

κ' = ( ∫ x^4 dν'(x) - 3Δ^4 + 6Δ^2 ∫ x^2 dν'(x) - 4Δ ∫ x^3 dν'(x) ) / ( ∫ x^2 dν'(x) - Δ^2 )^2 ≤ ( κ(1 + CΔκ^1/2) + 6Δ^2(1 + CΔκ^1/2) + CΔκ^1/2 ) / ( 1 - CΔκ^1/2 - Δ^2 )^2 ≤ κ + CΔκ^1/2(κ+1) / (1 - CΔκ^1/2) ≤ κ + CΔκ^1/2(κ+1) .
By patching the two results we obtain that, for all Δ > 0 and ν ∈ 𝒱_κ_0 with mean μ, variance σ^2 > 0 and kurtosis κ,

inf{ KL(ν, ν') : ν' ∈ 𝒱_κ_0 and E_X∼ν'[X] > μ + Δ } ≤ min{ log(1/(1 - p)), C' Δ^2/σ^2 } if C κ^1/2(κ+1)Δ/σ < κ_0 - κ, and ≤ log(1/(1 - p)) otherwise,

where C, C' > 0 are universal constants and p = min{ Δ/σ, 1/κ_0 }.
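As an illustrative numerical check of the first construction above (not part of the proof; the parameter values are arbitrary), one can sample Z = X + ΔB/p for a standard Gaussian ν and observe that the mean shifts by Δ while the empirical kurtosis stays bounded:

```python
import numpy as np

rng = np.random.default_rng(1)

def kurtosis(z):
    """Empirical kurtosis E[(Z - EZ)^4] / V[Z]^2."""
    c = z - z.mean()
    return float((c**4).mean() / (c**2).mean() ** 2)

kappa0, Delta = 7.5, 0.1
p = min(Delta, 1.0 / kappa0)
x = rng.standard_normal(1_000_000)       # nu: centered with unit variance
z = x + Delta * rng.binomial(1, p, size=x.size) / p

print(z.mean())                          # close to Delta
print(kurtosis(x), kurtosis(z))          # both stay well below kappa0 here
```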
Generating physically realizable stellar structures via embedding S.K. Maurya, M. Govender

We study a LiCs strongly-interacting molecular gas loaded into a one-dimensional optical lattice at quarter filling. The molecules are in the lowest electronic and vibrational state, X^1Σ (ν=0). Due to the large intermolecular distance and the low filling, the dipole-dipole interaction in the nearest-neighbor approximation governs the dynamics of the rotational excitations. For low DC electric field strengths, the full set of rotational levels N=0,1 must be taken into account; nevertheless, our calculations show that very weak fields act as field-selectors, disclosing two- and three-level systems out of the original four-level one. The dynamics and the generated von Neumann entanglement entropy among the internal rotational states throughout the evolution are presented for low, moderate and strong fields. We observe a sharp and monotonous growth of the entanglement as the dynamics take place, showing the potential of these molecular systems to be used in quantum information protocols. The numerical simulations are performed by means of the Time-Evolving Block Decimation algorithm, based on the Matrix Product State formalism and the Suzuki-Trotter decomposition.

§ INTRODUCTION

Ultracold and quantum degenerate molecular gases are currently a focus of interest within the atomic physics, molecular physics and physical chemistry communities <cit.>. The enormous control and high tunability of cold and ultracold molecular systems make them suitable for studying a broad spectrum of applications in diverse fields, from precision measurement and high-resolution spectroscopy <cit.> to ultracold chemistry <cit.>, quantum information processing <cit.> and quantum computing <cit.>. Cold lattice gases, either atomic or molecular, are often proposed as quantum simulators mimicking, for instance, the Hubbard and Heisenberg <cit.> models, serving as a way to develop new experimental, theoretical and numerical tools. The production of ultracold ground-state molecules opens the path to derive effective many-body models for nonreactive molecules in tight traps like optical tweezers <cit.> and optical lattices <cit.>. By arranging polar molecules in optical standing-wave trapping potentials, new opportunities are expected due to the nearest-neighbor dipole-dipole interaction; in contrast, ground-state atomic lattice gases dominated by 1/R^6 van der Waals forces exhibit only on-site interaction. Besides the long-range character of the dipolar interaction, it is interesting to explore its anisotropy, which can be manipulated by applying external electric fields <cit.>. The successful association of two pre-cooled atoms to form a molecule has been achieved using Feshbach resonances <cit.> and photoassociation <cit.>. Magnetoassociation followed by STIRAP has produced KRb molecules in their electronic and vibrational ground state <cit.>, and photoassociation close to a Feshbach resonance has produced LiCs molecules in their electronic and vibrational ground state <cit.>. It is of great interest to control the time evolution among specific discrete quantum states; specifically, arrays of long-range interacting polar molecules with rich internal structure form an excellent platform for understanding complex systems relevant for quantum information processing and simulations <cit.>.
For this purpose, new theoretical and numerical tools are needed to understand many-body quantum systems, such as the rotational dynamics present in state-of-the-art experiments. Therefore, in the present work we analyze the coherent population transfer of X^1Σ molecules, due to the dipole-dipole interaction, between their ground and first excited rotational levels as a function of an external electric field. Pursuing this aim, the molecular gas is loaded onto a one-dimensional optical lattice and prepared in the strongly-correlated Mott-insulator regime. Recently, the use of quantum information tools in several communities has grown <cit.>. Particularly, in the field of many-body strongly-correlated quantum systems <cit.>, these tools allow us to analyze nonlocal information and therefore provide a suitable mechanism to study quantum phases and phase transitions <cit.>. In a pure bipartite state, having measured only part of the system (a subsystem), it is possible to have information about the whole quantum state if there is entanglement among the parts <cit.>. In this paper, we use the von Neumann entropy associated to the reduced density matrix to provide the entanglement entropy of the measured subsystem. This paper is organized as follows: in sect. <ref> we describe both the model of the molecular lattice system and the implemented numerical method. Section <ref> is devoted to the dynamics of the molecular excitations in the presence of an external DC electric field. The entanglement generated throughout the evolution of the excitations is presented in sect. <ref>. Finally, sect. <ref> concludes.

§ MODEL AND METHODS

§.§ Molecular Hamiltonian in the Mott-insulator regime

We consider an optical lattice filled with X^1Σ LiCs molecules which are in the electronic and vibrational ground states. The lattice holds one molecule per site at a distance of 400 nm; hence, tunneling between lattice sites is completely suppressed <cit.>. Considering this low lattice filling, the on-site Hubbard interaction can be neglected as well. Thus, the interaction between the molecules is solely determined by the long-range anisotropic dipole-dipole interaction, creating quasi-particles responsible for excitation transfer along the lattice, so-called Frenkel excitons <cit.>. The only two states involved in the dynamics are the ground and the first excited rotational states. The Hamiltonian is given by <cit.>,

Ĥ = ∑_n=1^N_mol ( B_e N̂_n^2 - d_n·E ) + 1/2 ∑_n=1^N_mol ∑_m≠n^N_mol V̂_dd( r_n - r_m ) ,

where r_n is the position of the n-th lattice site, B_e is the rotational constant, d_n is the electric dipole operator of the individual molecules, E is the constant electric field, N̂_n is the rotational angular momentum operator of the molecule at site n, and V̂_dd is the dipole-dipole interaction between molecules at different lattice sites. N_mol is the total number of molecules. The local rotational Hamiltonian, ∑_n B_e N̂_n^2, corresponds to the crystal energy operator (rotational energy) without intermolecular interactions. In the absence of a static electric field, the rotational ground state of the given diatomic molecule has rotational angular momentum N=0, and the first excited rotational state N=1 is triply degenerate.
In the presence of a field E, the Stark effect is accounted for: the degeneracy of N=1 is lifted, and the level splits into two sub-levels corresponding to the projections M=0 and M=±1 onto the electric-field quantization axis. For strong fields beyond a critical value (see reference <cit.> for the critical field theory), the system reduces to a two-level problem involving exclusively the M=0 projections. In contrast, for a weak static electric field all four levels must be considered <cit.>; this is the scenario considered in the present work. The electric field acts only on the internal coordinates of single molecules, and couples different local molecular states; this coupling depends on the electric field strength. The dipole-dipole operator, V̂_dd( r_n - r_m ), involves two different sites and mixes two different molecular states. In fact, this interaction determines the dynamics of the system and its features. In the Molecular Fixed Frame (MFF) this operator takes the form

V̂_dd(r) = 1/r^3 [ d̂_n·d̂_m - 3(d̂_n·ê)(ê·d̂_m) ] .

Here, r = r_n - r_m is the intermolecular vector, ê = r/r is the unit vector in the direction of r, and d̂_m, d̂_n are the dipole moment operators of the molecules at the sites m and n, respectively. In order to study the interplay of the local interaction with E and the non-local dipole-dipole interaction, the operator (<ref>) must be represented in the Laboratory Fixed Frame (LFF),

V̂_dd(r) = -√6/r^3 ∑_q=-2^2 (-1)^q C_-q^2(r) [d̂_n ⊗ d̂_m]_q^(2) .

Here C_-q^2(r) = √(4π/5) Y^(2)_-q(r) are the reduced spherical harmonics, which describe the movement of the intermolecular axis in the LFF. The one-dimensional confinement of the lattice reduces this coefficient to a simple factor. The operators on the right-hand side of Eq. (<ref>) contain the information about the strength of the involved molecular dipole moments. The term can be seen as a second-rank tensor product of two tensors of rank one, T̂_i^1(d̂_s) <cit.>, which represents the dipole moment operator of the molecule located at site s ∈ {n,m}. As a result, we express [d̂_n ⊗ d̂_m]_q^(2) as ∑_p ⟨ 1 p, 1 q-p | 2 q ⟩ T̂_p^1(d̂_n) T̂_q-p^1(d̂_m). Using second quantization, the Hamiltonian (<ref>) can be written following Ref. <cit.> as

Ĥ_exc = ∑_n,M E_0 P̂_n,M^† P̂_n,M + ∑_n, m≠n ∑_M,M' F_n,m^M,M' P̂_n,M^† P̂_m,M' .

Here E_0 is the energy difference between the excited and ground rotational states, and P̂_n,M^† (P̂_m,M') creates (annihilates) a rotational excitation at site n (m) in the level M (M'). Specifically, the operators act as P̂_n,M^† |g,0⟩_n = |e,M⟩_n and P̂_m,M' |e,M'⟩_m = |g,0⟩_m, where |g,0⟩_n and |e,M⟩_n are the field-dressed ground and excited states, respectively. The coefficients F_n,m^M,M' = ⟨ e_n,M' g_m,0 | V̂_dd | g_n,0 e_m,M ⟩ provide the molecular selection rules, which determine the restricted phase space accessible at low energies. Here, we use a DC electric field perpendicular to the molecular arrangement, leading to the selection rule Δ(M_n + M_m) = 0, ±2 for molecules on sites n and m <cit.>.
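For orientation, the field-dressed levels used above can be obtained by diagonalizing the single-molecule Hamiltonian B_e N̂^2 - d·E in the {|N,M⟩} basis. A minimal sketch follows (Python/NumPy; the truncation to N ≤ 1 and the field value are illustrative assumptions, since quantitative work requires more rotational levels). For a field along the quantization axis the only nonzero coupling is ⟨0,0|cosθ|1,0⟩ = 1/√3, which is why the M = ±1 levels remain degenerate.

```python
import numpy as np

B_e = 5.816e9                 # LiCs rotational constant in Hz
d0 = 5.523 * 503.4e3          # dipole moment: 1 D x (1 V/cm) / h ~ 503.4 kHz
E = 50.0                      # DC field strength in V/cm

# basis ordering: |0,0>, |1,-1>, |1,0>, |1,1>; energies B_e * N(N+1)
H = np.diag([0.0, 2 * B_e, 2 * B_e, 2 * B_e])
# field along the quantization axis couples only Delta M = 0:
# <0,0|cos(theta)|1,0> = 1/sqrt(3)
H[0, 2] = H[2, 0] = -d0 * E / np.sqrt(3.0)

levels = np.linalg.eigvalsh(H)
print((levels - levels.min()) / 1e9)   # dressed levels in GHz; M=+-1 degenerate
```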
Finally, working with molecules separated by distances greater than 100 nm makes the intermolecular interaction considerably smaller than the rotational energy scale, so it is enough to account only for the nearest-neighbor interaction. The Hamiltonian (<ref>) allows us to describe the behavior of the excitations as a function of the external electric field and to study the coherent population transfer between the two involved rotational molecular levels.

We consider a lattice array with L sites; such a bipartite system can be separated by applying the Schmidt decomposition. Each state can be written as

|Ψ⟩ = ∑_α=1^N_B λ_α |α_B⟩ ⊗ |α_E⟩ ,

where {|α_B⟩} (on one subsystem, with Hilbert-space dimension N_B) and {|α_E⟩} (on the other, with dimension N_E) are two orthonormal basis sets belonging to the respective Hilbert subspaces, each entering the sum with at most N_B elements; here N_E > N_B. The coefficients λ_α, known as the Schmidt coefficients, are non-negative real numbers satisfying ∑_α λ_α^2 = 1. In the numerical implementation the sum is truncated and only χ states are kept. For a pure state, the density matrix has a single nonzero eigenvalue. The reduced density matrix of a subsystem, ρ̂_B = ∑_α λ_α^2 |α_B⟩⟨α_B|, however, represents a mixed state, whose entanglement entropy can be measured using the von Neumann entropy, S = -∑_α λ_α^2 ln(λ_α^2).

§.§ Time evolution

Numerical calculations reveal important aspects of the behavior of ultracold interacting molecules. In low dimensions, quantum fluctuations are enhanced, and therefore many-body quantum systems are in general not amenable to standard mean-field or perturbation-expansion methods, so one has to turn to non-perturbative techniques and numerical methods. Here, we use the Matrix Product State (MPS) ansatz <cit.> to efficiently approximate the quantum states. This formalism is highly recommended when the amount of entanglement in the system is limited, reducing the computational complexity from exponential to polynomial in the system size. Based on the MPS representation, we use the Time-Evolving Block Decimation (TEBD) algorithm <cit.> to numerically analyze the molecular quantum-chain dynamics, paying special attention to population transfer and quantum entanglement. The TEBD algorithm takes advantage of the fact that the Hamiltonian can be written as a sum over even and odd bonds, and the time-evolution operator is then approximated using the Trotter-Suzuki expansion formula <cit.>. Since the entanglement of the system grows with time, keeping the matrices at a fixed size means losing information throughout the evolution. In order to avoid this, one has to adapt the value of the entanglement parameter (χ) at every time-step to keep the state as accurate as possible <cit.>. The approximations discussed in sect. <ref> allow us to use this numerical technique for calculating the time evolution, since the dipole-dipole interaction couples only nearest neighbors <cit.>. In the following, we consider chains with L=16 sites and an initial entanglement parameter χ_0 = 8, since the evolution starts from an array of disentangled individual molecules. Towards the end, the entanglement grows and we reach χ_final = 100 on average after 660 time-steps.

§ DYNAMICS OF THE MOLECULAR EXCITATIONS

In this section, we first analyze the dynamics of the molecular excitations in the presence of a static and perpendicular electric field.
Several scenarios are considered depending on the field strength and the initial condition. We work with LiCs alkali molecules, whose dipole moment is d_0 = 5.523 D <cit.>, whose rotational constant is B_e = 5.816 GHz, and whose critical electric field is E_cri = 2100 V/cm <cit.>. Initially, all lattice sites are prepared in the same state (|Ψ_0⟩), which consists of either a single rotational level or a superposition of them. The choices are motivated by two ideas: first, the possibility of reaching these single-molecule states after two-atom association mechanisms; and second, the access to the strongly-correlated regime with dimers using tools developed in the atomic physics field <cit.>. As the system evolves, the rotational excitations are exchanged and spread throughout the lattice following the selection rules. Hence, we locally measure the rotational state of each molecule, sum over all sites and divide by the total initial number of molecules. Therefore, the figures presented in this section show the percent population of each single dressed state as a function of time. The time is given in units of (μ^2/d^3)^-1, which in our case is of the order of ∼ 9×10^4 GHz^-1.

§.§ Weak field: Effective two- and three-level dynamics

Let us start by analyzing the case where a weak field is applied, namely E=50 V/cm. For a very low external field, the coupling between |0,0⟩ and |1,0⟩ is weak. This physical fact is presented in Fig. <ref>: starting from the |0,0⟩ state (see Fig. <ref>(a)), the population rapidly starts to decrease and heads towards the two degenerate states |1,-1⟩ and |1,1⟩, while the |1,0⟩ state remains unchanged up to t=0.5 and then slightly, almost imperceptibly, rises in population. On the contrary, when all the population starts in the |1,0⟩ state, the states excluded from the exchange are |1,-1⟩ and |1,1⟩ (see Fig. <ref>(b)). The system behaves as a two-level system; moreover, the initial population does not decrease as dramatically as in <ref>(a), but slowly flows to the coupled state |0,0⟩. Initiating from |1,1⟩, the population first goes to the ground state due to the selection rules, and later on the state |1,-1⟩ begins to be populated. It can be seen how |1,-1⟩ takes a while before it enters the dynamics, and afterwards |0,0⟩, |1,-1⟩ and |1,1⟩ coherently oscillate. On the other hand, Figs. <ref>(d), (e) and (f) show different initial population configurations: the evolutions start from linear superpositions of the |1,-1⟩, |1,0⟩ and |1,1⟩ states with different contributions. For these three cases the systems behave effectively as three-level systems involving only the states {|0,0⟩, |1,-1⟩, |1,1⟩}, excluding the contribution of the |1,0⟩ state, which keeps its population practically constant for the whole evolution. The same behavior applies for Figs. <ref>(a) and (c). Hence, this low DC field strength allows us, depending on the initial condition, to switch from a four-level system to a two- or a three-level one.

§.§ Moderate field

As the field strength rises, all four states become involved in the dynamics. Here we use E=200 V/cm, and the coherent collective evolutions are shown in Fig. <ref>. Starting from the state |0,0⟩ (Fig. <ref>(a)) or the state |1,0⟩ (Fig. <ref>(b)), the difference is a direct consequence of the selection rules. In the case of <ref>(a) the excitations migrate directly to N=1, whereas in the case of <ref>(b) the excitations must first go to the ground state and only thereafter populate the M=±1 states. Hence, the states |1,±1⟩ remain unpopulated for a longer period of time.
The coupling between the M=0 projections becomes stronger than in the previous case (E=50 V/cm), so in Fig. <ref>(a) the population of the |1,0⟩ state climbs up to 20% by the end of the time evolution, and the smooth coherent oscillations between |0,0⟩ and |1,±1⟩ observed in Fig. <ref>(a) are weakened even further for E=200 V/cm. On top of that, after t=1.5 the populations in |1,±1⟩ level off to a stationary regime, each acquiring 25% of the total population. From this point on the dynamics again resemble an effective two-level system. In Fig. <ref>(b) the interaction between the ground state N=0 and the excited states N=1 increases, the exchange is stronger than in the case of the lower electric field, and slight oscillations among the rotational levels appear.

When the |1,1⟩ state is taken as the initial state (a similar analysis applies for |1,-1⟩), see Fig. <ref>(c), the population rapidly drops into the |0,0⟩ state, then goes into |1,-1⟩, and finally |1,0⟩ enters the dynamics. The field-dressed states with M=±1 have the same energy; hence, they have the same probability of being populated, which is the reason why the |1,-1⟩ state enters the dynamics earlier than the |1,0⟩ state. High-amplitude coherent oscillations are present for this initial condition at the considered field strengths. For the case shown in Fig. <ref>(d), the evolution starts from a superposition state with equal populations (a symmetric Bell-like state) built from |1,±1⟩ instead of a Fock state. As is observed, the rotational ground state becomes quickly populated, while |1,0⟩ rises late and slowly in the dynamics. The latter levels out, reaching a steady regime, whilst the populations of the other three states oscillate coherently, regardless of whether the initial state is the Fock state (c) or the Bell superposition (d). As long as the |1,±1⟩ dressed states are initially populated, the M=0 projection of the N=1 state behaves in the same manner, with the time-scales actually matching. This observation still applies even when the |1,0⟩ state is initially populated with almost 10% of the total population, see Fig. <ref>(e) (calculations were performed also with 20% of the population, delivering the same result), and the behavior of this state persists: it remains constant at the initial population for a while, and then rises slowly until it levels out. Finally, we explore the case in which all states of the rotational excited state N=1 have the same initial weight, which produces the evolution presented in Fig. <ref>(f). In this case, the rotational level |1,0⟩ has the largest probability of being populated as the evolution takes place, while the populations in the ground state and the projections M=±1 tend to a quasi-steady regime, sharing the same probability percentage. Between t=0.5 and t=1, the ground state reaches its maximum, almost when the population in |1,0⟩ starts to rise, see Figs. <ref>(c), (d) and (f). Therefore, for the ground state to get populated an elapsed time of around (μ^2/d^3)^-1 is needed; afterwards, a redistribution to the other levels is performed according to the selection rules and the weight probabilities.

§.§ Strong field

Let us now increase the electric field to E=1000 V/cm, which is still small compared to the rotational molecular energy. In this scenario, the coupling between the states with M=0 projection gets stronger, so the dynamics tend to be dominated by these two states, while the M=±1 projections behave smoothly; the evolutions for several initial states are presented in Fig. <ref>.
<ref>. Let us consider two initial cases, when all molecules are prepared in the rotational ground state (Fig. <ref>(a)) and in the first rotational excited state with M=0 (Fig. <ref>(b)). In both cases the population of the initial state drops dramatically, while that of the coupled state with the same projection rises accordingly, and the populations oscillate strongly between the M=0 projections. Thereafter, these two states coherently exchange most of the chain population, while the |1, ± 1 ⟩ dressed states, starting from zero, show an upward trend with a slight decrease, stabilizing towards the end. For the evolution of <ref>(b), the M=± 1 states remain unpopulated for a longer period of time, climbing up when the populations in |0,0⟩ and |1,0⟩ match each other, and leveling out, although swinging slightly, until the end of the evolution. Taking |1,1 ⟩ as the initial state leads to a different time evolution (see Fig. <ref>(c)). According to the selection rules, molecules go first to the ground state and, since the M=0 coupling becomes stronger due to the high field, the |1,0 ⟩ state has a larger probability of being populated. Hence, in contrast to the previous case (Fig. <ref>(c)), the order in which the levels become populated changes: the ground state enters the dynamics first, and then the | 1,0 ⟩ state overtakes | 1,-1 ⟩, which comes later. Once all levels are populated, the states with M=0 oscillate coherently, while the |1, ± 1 ⟩ states strive to equalize their populations. The maximum of | 1,- 1 ⟩ is reached approximately at t∼ 2. When launching the evolution with each molecule prepared in a Bell state made up of |1,± 1⟩ (see Fig. <ref>(d)), the population in both levels decreases continuously during the first time unit and remains stable around 30% throughout the rest of the evolution. Meanwhile, the |0,0 ⟩ and |1,0 ⟩ states behave in the same manner as in Fig. <ref>(c). Modifying the initial state by increasing the population in |1,0 ⟩ does not change the general behavior of the evolution of the M=± 1 projections, as shown in Fig. <ref>(e). This is true even if the initial population in |1,0 ⟩ equals that of the M=± 1 states, as observed in Fig. <ref>(f). In the latter, we analyze the initial state with the molecules in a linear combination of equally probable N=1 states. Here, one can see that the population in the M=± 1 levels decreases to an almost constant value, whilst both M=0 projections oscillate coherently, damping down towards the end of the evolution. Moderate and strong field strengths drive the evolutions, regardless of the initial conditions, towards a population distribution between 20 % and 40 %. Hence, for longer times, beyond the scope of our calculations, we infer that the steady-state population distribution is expected to land in the same range. § ENTANGLEMENT EVOLUTION In Sec. <ref> the evolution of the rotational levels has been presented depending on the initial state. Now, we focus on how the von Neumann entanglement entropy of the system grows as a function of time for three specific DC field strengths and several initial conditions. The corresponding results are shown in Fig. <ref>. At E=50 V/cm, see Fig. <ref>(a), the system develops both the highest and the lowest entanglement observed in our numerical calculations. Starting all molecules in the rotational ground state turns out to be the initial condition that entangles the system the most (lilac curve); this is due to the fact that this state interacts with all the excited states, backed up by the selection rules established by the dipole-dipole interaction.
Let us recall that for this electric field strength the state |1,0⟩ is excluded, while the populations in the M=± 1 states increase until the three involved states level out, reaching a quasi-stationary regime. Therefore, it is expected that initiating from |1,0⟩ (green curve) leads to the most disentangled dynamics, and this is exactly what we observe. When the majority contribution to the initial state comes from the M= ± 1 states (red and blue curves), independently of whether it is a Fock or a superposition state, the selection rules grant these levels access only to the ground state, so the system develops a similar entanglement throughout the evolution. Even for the initial state |Ψ_2⟩=(1/√(19))(3| 1,-1 ⟩ + |1,0 ⟩ + 3|1,1 ⟩), which has a partial contribution from | 1,0 ⟩, the greatest contribution to the dynamics comes from the M = ± 1 states, which determine the entanglement evolution. A different scenario arises when launching the evolution from an equally probable superposition of the excited rotational states (orange curve), where the entanglement grows steadily, changing to a lower slope at two time units. This is because such a superposition connects exclusively to the ground state by the selection rules. As the field strength is increased to E=200 V/cm, see Fig. <ref>(b), all evolutions rapidly become entangled except the one taking |1,0⟩ as the initial condition (green curve). In the dynamics, this initial state interacts mostly with the | 0,0 ⟩ state, while its interaction with the M=±1 states is mediated by | 0,0 ⟩. Notice that initiating from | 1,1 ⟩ and from the superposition |Ψ_1⟩=(1/√(3))(|1,-1 ⟩ + | 1,0 ⟩ + | 1,1 ⟩) leads to two different dynamics (Figs. <ref>(c) and (f)), but to a similar entanglement evolution, since both states transfer their population directly to the ground state. Let us point out that the coupling between | 1,0 ⟩ and | 0,0 ⟩ is enhanced as the external electric field increases, hence it differs from the interaction between the levels | 1, ± 1 ⟩ and | 0, 0 ⟩. Finally, in the case of E=1000 V/cm, see Fig. <ref>(c), most of the curves coincide, except for the evolution of the superposition with equally probable N=1 states, which does not match the rest, although it keeps the same behavior while staying below the others. For this electric field the coupling between the M=0 states gets even stronger, while that to the M=± 1 states gets weaker; hence, for superposition states (blue and orange curves) that include equally probable M=±1 components, the entanglement evolution slows down. Overall, it is observed that the weaker the field, the higher the entanglement rises for the ground state, owing to its connection with all N=1 states. As the electric field increases, the coupling between the M=0 levels is favored over the M=±1 states, the number of states involved throughout the evolution decreases, and the entanglement in the chain decreases as well. Let us note that at some point all curves must stop growing and level out, which is not captured by our calculations. This is due to the fact that the entanglement of finite-size systems does not grow indefinitely but rather saturates (with a logarithmic behavior in 1D), and our calculations show only the early evolution of the molecular chain.
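For readers who want to reproduce this kind of diagnostic, the snippet below shows how a half-chain von Neumann entropy is obtained from a pure state via its Schmidt (singular value) decomposition. It is a minimal illustration on a two-site toy state, not the matrix-product-state machinery that a long chain actually requires.

```python
import numpy as np

def half_chain_entropy(psi, q, n_left):
    """S = -sum_k p_k ln p_k with p_k the squared Schmidt coefficients."""
    n_total = int(round(np.log(psi.size) / np.log(q)))
    m = psi.reshape(q**n_left, q**(n_total - n_left))  # bipartition A|B
    s = np.linalg.svd(m, compute_uv=False)
    p = s**2
    p = p[p > 1e-15]                                   # drop numerical zeros
    return float(-np.sum(p * np.log(p)))

# Two-site example: a Bell-like state of |1,-1> and |1,1> yields S = ln 2
q = 4                          # local basis: |0,0>, |1,-1>, |1,0>, |1,1>
bell = np.zeros(q * q)
bell[1 * q + 3] = bell[3 * q + 1] = 1 / np.sqrt(2)
print(half_chain_entropy(bell, q, n_left=1))          # ~0.693 = ln 2
```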
§ CONCLUSIONS

We studied a LiCs molecular gas loaded in a 1D optical lattice prepared in the strongly correlated Mott-insulator regime with one molecule per lattice site. The molecules are in the lowest electronic and vibrational state, and the coherent population transfer between the internal rotational states is analyzed as a function of an external DC electric field. Due to the large intermolecular lattice distance and the low filling, the hopping between nearest neighbors and the on-site Hubbard interaction are suppressed; hence, the dipole-dipole interaction in the nearest-neighbor approximation solely governs the dynamics of the excitations. For field strengths lower than the critical field the full set of rotational levels must be taken into account, a fact confirmed by the numerical simulations reported here. Coherent transfer of population among the field-dressed states along the lattice is shown for several electric field intensities and initial conditions. Our numerical simulations show that weak fields have the potential to pick either a two- or a three-level system out of the original four-level one; therefore, the field acts as a level selector for the molecular system, simplifying the complexity of both numerical and analytical calculations. Moreover, the excluded states can be thought of as dark states and, depending on the initial condition, can be used to store information unperturbed throughout the dynamics. Numerical simulations, performed with three field strengths, showed that molecular excitations become entangled as the dynamics occur for most initial conditions. The higher the field intensity, the more certainly the system becomes entangled, regardless of the initial condition. Moreover, a sharp and monotonic growth of the entanglement was in general obtained for early time evolutions, which is why this type of novel molecular system is a suitable candidate for new quantum protocols.

§ ACKNOWLEDGEMENTS

K.R. thanks Ian Picken and Christiane P. Koch for carefully reading the manuscript and for their valuable comments. This work has been supported by Universidad del Valle under the internal project 7967. The authors acknowledge the support from the Colombian Science, Technology and Innovation Foundation–COLCIENCIAS “Francisco José de Caldas” under project 110665842793 (contract FP-005-2015). We also thank the Center of Excellence for Novel Materials (CENM) at Universidad del Valle for financial support for the research group.
http://arxiv.org/abs/1703.08602v1
{ "authors": [ "Vanessa Olaya-Agudelo", "Karen Rodriguez" ], "categories": [ "cond-mat.quant-gas" ], "primary_category": "cond-mat.quant-gas", "published": "20170324212325", "title": "Coherent collective dynamics and entanglement evolution of polar molecules on 1D lattices" }
TCP in 5G mmWave Networks: Link Level Retransmissions and MP-TCP

Michele Polese^*, Rittwik Jana^†, Michele Zorzi^*
^*Department of Information Engineering, University of Padova, Italy, e-mail: {polesemi, zorzi}@dei.unipd.it
^†AT&T Labs-Research, Bedminster NJ, USA, e-mail: rjana@research.att.com

MmWave communications, one of the cornerstones of future 5G mobile networks, are characterized at the same time by a potential multi-gigabit capacity and by a very dynamic channel, sensitive to blockage, wide fluctuations in the received signal quality, and possibly also sudden link disruption. While the performance of physical and MAC layer schemes that address these issues has been thoroughly investigated in the literature, the complex interactions between mmWave links and transport layer protocols such as TCP are still relatively unexplored. This paper uses the ns–3 mmWave module, with its channel model based on real measurements in New York City, to analyze the performance of the Linux TCP/IP stack (i) with and without link-layer retransmissions, showing that they are fundamental to reach a high TCP throughput on mmWave links, and (ii) with Multipath TCP (MP-TCP) over multiple LTE and mmWave links, illustrating the throughput-optimal combinations of secondary paths and congestion control algorithms in different conditions.

This paper was accepted for presentation at the 2017 IEEE Infocom 5G & Beyond Workshop, May 1, 2017, Atlanta, Georgia, USA.

§ INTRODUCTION

MmWave communications are expected to play a major role in reaching the performance target of the next generation of mobile networks (5G) <cit.>. At mmWave frequencies (i.e., above 10 GHz), indeed, there is a high availability of contiguous bandwidth that can be allocated to cellular networks. However, mmWave communications also present challenges and issues that must be faced in order to make this technology market-ready. In fact, these frequencies suffer from high isotropic pathloss (compensated by massive MIMO and beamforming gains) and blockage by solid materials, such as buildings, cars, and the human body <cit.>. These extreme propagation conditions demand a careful design of the PHY and MAC layers <cit.>, but also have an impact on the interplay with the higher layers of the protocol stack.
In particular, the congestion control mechanisms of the Transmission Control Protocol (TCP) may suffer from long blockages that trigger a Retransmission Timeout (RTO), and may fail to timely track the channel state when the link from a User Equipment (UE) to the serving evolved Node B (eNB) switches from a Line-of-Sight (LOS) to a Non-Line-of-Sight (NLOS) condition <cit.>. Finally, TCP may take a long time to fill the huge amount of bandwidth available, and this penalizes short-lived TCP sessions such as those used for browsing or instant messaging applications. In this paper, we systematically analyze the performance of the Linux kernel TCP/IP stack implementation over mmWave links using the mmWave module for the ns–3 simulator. First, we study the interaction with lower-layer retransmission protocols, in terms of both throughput and latency, proving that without these retransmissions TCP is only able to reach a fraction of the potential mmWave NLOS link throughput. Secondly, we show that multi-path transmissions improve the performance of the mmWave network, by using Multipath TCP (MP-TCP) with different congestion control algorithms over (i) an LTE and a mmWave link or (ii) two mmWave links with different carrier frequencies. Finally, we test the multi-path transmission of TCP ACKs only, measuring in which conditions sending the TCP ACK packets over an LTE link (and data over a mmWave one) improves the throughput in a mobility scenario. The rest of the paper is organized as follows. Sec. <ref> describes the simulation setup, while Sec. <ref> illustrates the performance analysis involving the lower-layer retransmissions. Sec. <ref> presents the results for MP-TCP in mmWave networks, and Sec. <ref> those for TCP ACK transmission on LTE links. Finally, in Sec. <ref> some conclusions are drawn and future work is suggested.

§ SIMULATION SETUP

The simulations use the NYU ns–3 module <cit.>, with the extensions developed in <cit.>, plugged into the TCP implementation of the Linux kernel (also including MP-TCP <cit.>) by means of a custom version of the Direct Code Execution (DCE) library <cit.>. In this way it is possible to test the real Linux implementation of TCP and MP-TCP with the flexibility of a network simulator. This approach can be applied to any application layer software or transport protocol, provided they only use the subset of kernel methods also available in DCE. It is possible to plug different applications on top; the following experiments use IPERF <cit.> and Linux wget, in order to test, respectively, the throughput of the end-to-end connection and the time it takes to download files of different sizes. The NYU ns–3 module <cit.> simulates an end-to-end cellular network, with a complete mobile stack (with custom TDD-based PHY and MAC layers, an RRC layer, and legacy LTE RLC and PDCP layers) which is able to transmit packets from the UE to a remote host, or vice versa. The main feature of the NYU ns–3 module is the channel model for the 28 GHz and the 73 GHz carrier frequencies, based on real measurements <cit.>, which can either statistically simulate LOS-NLOS-outage transitions <cit.> or rely on the ns–3 building module to track when a mobile terminal is in LOS or NLOS <cit.>.
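As an illustration of what "statistically simulate LOS-NLOS-outage transitions" means in practice, the toy sampler below draws a channel-state sequence from a three-state Markov chain. The transition probabilities are placeholders chosen for readability: in the actual NYU module they are distance dependent and fitted to the New York City measurement campaign.

```python
# Toy three-state Markov sampler for a LOS / NLOS / outage channel model.
# The transition matrix P is purely illustrative (rows sum to 1).
import numpy as np

STATES = ["LOS", "NLOS", "OUTAGE"]
P = np.array([[0.90, 0.09, 0.01],
              [0.15, 0.80, 0.05],
              [0.10, 0.30, 0.60]])

def sample_channel(n_steps, rng, start=0):
    s, out = start, []
    for _ in range(n_steps):
        s = rng.choice(3, p=P[s])   # next state drawn from row P[s]
        out.append(STATES[s])
    return out

print(sample_channel(10, np.random.default_rng(1)))
```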
The main simulation parameters are those typically used in the performance analysis of mmWave networks <cit.>, and are summarized in Table <ref>.

§ INTERACTION WITH LOWER-LAYER RETRANSMISSION PROTOCOLS

Current and future mobile networks deploy different retransmission mechanisms in order to prevent packet loss and increase the throughput at the mobile devices. When using mmWave links, these retransmission protocols become a key element in hiding the highly dynamic, and consequently unstable, behavior of the channel from the higher-layer transport protocols. At the MAC layer, Hybrid Automatic Repeat reQuest (HARQ) is used. When the PHY layer at the receiver receives a packet but detects the presence of errors that prevent reliable decoding, it asks for a retransmission. The sender then transmits additional redundancy that helps retrieve the correct version of the packet <cit.>. Moreover, in 3GPP networks (e.g., LTE), there is a layer on top of the MAC layer that may perform additional retransmissions, called the Radio Link Control (RLC) layer, which will also likely be a part of the 5G protocol stack <cit.>. Since the number of retransmissions at the MAC layer is usually limited (typically only 3 attempts are performed), the RLC layer Acknowledged Mode (AM) offers another way of recovering lost packets. Thanks to periodic reports from the receiver, the RLC AM sender knows which packets are missing and can retransmit them. The number of attempts that RLC AM can perform is also limited and, if some packets are still missing, a Radio Link Failure is declared. RLC Unacknowledged Mode (UM), instead, does not perform any retransmission in addition to those of the HARQ at the MAC layer. These retransmission mechanisms operate based on information related to the link and with a greater timeliness with respect to TCP, which instead uses packet losses to detect congestion and operates on the larger timescale of retransmission time-outs (RTOs), of the order of a second. In order to test the effectiveness of coupling TCP with lower-layer retransmission mechanisms, we performed some simulations using the framework described in Sec. <ref>, where we considered an uplink connection from a User Equipment (UE) placed at different distances from an evolved Node B (eNB). We use IPERF on top of the Linux implementation of TCP CUBIC, with the statistical channel model <cit.>, and perform Monte Carlo simulations for each distance d∈{50, 75, 100, 150} m. RLC AM introduces additional redundancy in order to perform the retransmissions but, when the distance between the eNB and the UE is equal to d=50 m and the UE is in LOS with very high probability, these retransmissions are not actually needed, because of the low packet error rate of the channel. Therefore, as also shown in Fig. <ref>, the throughput is lower when RLC AM is used (though by only a minimal amount). As the distance increases, instead, the performance of TCP without HARQ and without RLC AM collapses, because the TCP congestion control algorithm sees a very lossy link and triggers congestion avoidance mechanisms or, worse, an RTO. Fig. <ref> compares the traces at the PDCP layer of the throughput of a simulation over time, for d=150 m. It can be seen that the lack of HARQ and RLC AM places the whole burden of retransmissions on TCP, which does not manage to reach the high throughput allowed by the available bandwidth. If instead we compare the performance of HARQ with RLC UM and that of HARQ with RLC AM, it can be seen from Fig.
<ref> that the additional retransmissions provided by RLC AM increase the throughput by 100 Mbit/s at d=75 m and by 50 Mbit/s at d=100 m. For d=150 m, instead, RLC AM does not improve on the performance of RLC UM, showing that at such a distance even further transmission attempts fail to successfully deliver packets (for example, because of extended outage events). RLC AM at large distances instead increases the latency of successfully received packets, as shown in Fig. <ref>, because of retransmissions and additional segmentation that may introduce Head of Line (HoL) blocking delays. The smallest latency is achieved without HARQ and with RLC UM, because no retransmissions are performed, but this option is not able to deliver a high TCP throughput in general. Fig. <ref> shows the download time for files of different sizes (from 1 MB to 10 MB) using wget (the file is hosted in the UE and retrieved by the remote server, in order to be consistent with the previous uplink simulations). The results show that lower-layer retransmission mechanisms help decrease the download time, and that the performance gain increases as the distance and the file size increase. Moreover, the difference between the download times with RLC AM and with RLC UM (no RLC retransmissions) is more noticeable than that between the throughput values of Fig. <ref>, showing that for short-lived TCP sessions it is important to perform retransmissions as fast as possible, i.e., at a layer as close to the radio link as possible. These results are well known when applied to traditional LTE networks <cit.>, but these are the first simulations that show how much TCP depends on lower-layer retransmissions in mmWave networks, using the real Linux TCP/IP implementation. They show that, also in mmWave networks, the support of lower-layer retransmission mechanisms is fundamental for reaching a high TCP throughput even at large distances between transmitter and receiver, at the price of additional latency. In particular, in the simulated scenario the most effective retransmission scheme is HARQ at the MAC layer, since it provides the greatest throughput gain, but the acknowledged mode of the RLC layer also helps improve the performance of the mmWave link by reducing the download time for short-lived TCP sessions.

§ MULTIPATH TCP

Multipath TCP (MP-TCP) has been proposed as a way of allowing vertical and seamless handovers between cellular networks and Wi-Fi hotspots, and is currently under discussion for standardization at the IETF. It may also be used to provide path diversity in mmWave cellular networks. The three main design goals of MP-TCP are <cit.>: *Improve throughput: an MP-TCP flow should perform at least as well as a traditional single path TCP (SP-TCP) flow on the best path available. *On shared links, MP-TCP should not get more resources than standard TCP flows. *MP-TCP should prefer less congested paths, subject to the previous two conditions. There are three RFCs that describe MP-TCP <cit.>. They discuss the signaling and setup procedures <cit.>, the architectural choices for the deployment of MP-TCP <cit.>, and a congestion control (CC) algorithm <cit.>. Finally, the document in <cit.> discusses the impact on the application layer. There are several studies that propose coupled congestion control algorithms for MP-TCP connections. By coupling the congestion windows of the different subflows, the authors of <cit.> claim that it is possible to reach goals <ref> and <ref> above.
In particular, they propose a first coupled CC, which is however criticized in <cit.> and in <cit.>, because it (i) transmits too much traffic on congested paths and (ii) is unfriendly with respect to SP-TCP. Therefore, two more coupled CC algorithms were proposed: * In <cit.> the Opportunistic Linked Increases Algorithm (OLIA) is designed to overcome these two issues, but presents non-responsiveness problems with respect to congestion changes in the subflows; * In <cit.> the Balanced Linked Adaptation algorithm (BALIA) addresses both the problems of the original CC and those of OLIA. In particular, the parameters of the protocol are derived through a theoretical analysis of the performance of multipath congestion control algorithms. However, these schemes are based on the legacy design of the Reno and New Reno congestion control algorithms (Additive Increase - Multiplicative Decrease, AIMD), which are shown to suffer from the highly dynamic behavior of mmWave links more than the newer TCP CUBIC congestion control algorithm <cit.>. MP-TCP could be used as an end-to-end solution for multi-connectivity, i.e., next generation mobile devices may connect both to an LTE and to a mmWave eNB, or to two or more mmWave eNBs, with no need for coordination at the lower layers. However, there are some issues with its performance in mmWave networks, as we will show in the following paragraphs. In this performance evaluation campaign we used the real Linux implementation of MP-TCP (v0.90), which includes several CC algorithms, namely the original coupled CC, OLIA, BALIA, uncoupled (with any desired TCP flavor, e.g., CUBIC), and others. We co-deploy an LTE eNB and a mmWave eNB, or a mmWave eNB capable of transmissions at different frequencies (28 and 73 GHz, with the same bandwidth and the maximum number of antennas available in the ns–3 NYU simulator), and vary the distance of the multi-connected UE from the eNBs, using the statistical channel model. The remote host is a multi-homed server supporting MP-TCP connections. The UE uses IPERF and starts the connection on the 28 GHz mmWave link. Then another subflow is added on the LTE link, or on the 73 GHz mmWave link. Fig. <ref> shows the performance in terms of throughput of different MP-TCP congestion control algorithms over different connections, with respect to the baseline of an SP-TCP connection with TCP CUBIC. The dashed lines represent a scenario with paths on LTE and on mmWave (28 GHz), while the solid ones refer to paths on mmWave links with 28 GHz and 73 GHz as carrier frequencies. LTE vs mmWave as secondary path: When the UE is close to the eNB and has a LOS link most of the time on both the 28 and the 73 GHz connections (e.g., for d=50 m), the solution with multipath TCP on mmWave-only links outperforms SP-TCP, with a gain that ranges from 800 Mbit/s (28%) to 1 Gbit/s (36%). Instead, due to the limit of the LTE uplink, the performance of multipath on LTE and mmWave is close to that of SP-TCP (when CUBIC is used, because BALIA has much worse performance, as will be discussed later). However, it can be seen from Fig. <ref> that MP-TCP with LTE and mmWave links performs better than with mmWave-only connections for d≥100 m, and with the CUBIC uncoupled CC algorithm also for d=75 m. Indeed, the 73 GHz link offers a potentially larger throughput than an LTE uplink connection, but it has a lossy behavior that penalizes the overall throughput, except for small distances.
In particular, for d=150 m, MP-TCP with LTE and 28 GHz mmWave offers a gain of more than 450 Mbit/s (i.e., 100%) with respect to SP-TCP (i.e., more than the LTE uplink throughput), showing that the presence of the secondary and reliable LTE path improves the throughput on the mmWave link. This can be seen also in Fig. <ref>, where we plot the contribution of the two subflows of MP-TCP connections at d∈{100, 150} m when the second subflow is LTE or mmWave. It can be seen that the contribution given by the reliable LTE uplink subflow is smaller than that of the 73 GHz mmWave subflow, but the primary 28 GHz mmWave subflow reaches a higher throughput when coupled with the LTE secondary subflow. For short-lived TCP sessions, instead, using a secondary subflow on mmWave links improves the system performance. This can be seen in Fig. <ref>, which shows the download time of a file using wget with the same setup described in Sec. <ref>. However, the performance gain, especially for smaller files, is minimal, showing that the LTE link makes up for its smaller capacity with a higher reliability that benefits the performance of TCP. Coupled vs uncoupled CC: Another important observation is that MP-TCP with the BALIA CC algorithm fails to meet target <ref>, since in many cases its throughput is lower than that of SP-TCP, as shown in Fig. <ref>. The most striking cases are those with MP-TCP on LTE and mmWave, and d∈{50, 75} m. Here the congestion control algorithm sees the losses on the 28 GHz mmWave link as congestion and, according to design goal <ref>, it steers the whole traffic to the LTE subflow, degrading the performance of the end-to-end connection. Instead, the uncoupled congestion control algorithm is not affected by this issue, since each path behaves independently. However, in this case design goal <ref> is not met. An example of this behavior is shown in Fig. <ref>, where we compare the throughput over time of two different AIMD coupled CC algorithms (OLIA and BALIA) and of an uncoupled CC algorithm with CUBIC. It can be seen that at time t=7 s both OLIA and BALIA start using only the LTE connection, and that the throughput of the mmWave subflow goes to zero. A similar behavior for OLIA was observed in <cit.>. When considering short-lived TCP sessions and file download times, there are two different outcomes according to the file size. As shown in Fig. <ref>, when the file is smaller than 1 MB the BALIA coupled congestion control algorithm exhibits a slightly smaller download time than the CUBIC uncoupled CC. Instead, when the file is larger than 5 MB, as in Fig. <ref>, the MP-TCP solution with CUBIC as the CC mechanism manages to download the file in less than a fifth of the time required by MP-TCP with BALIA. This behavior can be explained by considering the shape of the window growth function of CUBIC, which resembles a cubic function, i.e., flat at the beginning and then rapidly increasing. The main conclusions from this performance analysis of MP-TCP for mmWave networks are that at larger distances and for long-lived TCP sessions it is preferable to use a more stable LTE-like link, and that the deployment of MP-TCP coupled congestion control algorithms on mmWave links is not able to satisfy the original design goals of <cit.>. A possible improvement of MP-TCP CC algorithms should adapt the TCP CUBIC scheme to a coupled scenario, so that the reactiveness and stability of TCP CUBIC enhance the performance of the transport protocol while not harming other legacy TCP flows.
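For reference, the sketch below contrasts the per-ACK window increase of the original coupled CC (the Linked Increases Algorithm of RFC 6356) with an uncoupled Reno-style increase; the window sizes and RTTs are illustrative numbers, not values taken from the simulations above. It makes design goal 2 visible: the coupled rule caps the aggregate aggressiveness, which is also the mechanism by which loss-prone mmWave subflows can end up starved in favor of the LTE path.

```python
# Sketch of the coupled window increase of MP-TCP's original LIA algorithm
# (RFC 6356), versus an uncoupled per-subflow Reno increase (windows in
# packets; one increase per received ACK on each subflow).
def lia_alpha(w, rtt):
    """alpha = w_tot * max_i(w_i/rtt_i^2) / (sum_i w_i/rtt_i)^2."""
    w_tot = sum(w)
    num = max(wi / ri**2 for wi, ri in zip(w, rtt))
    den = sum(wi / ri for wi, ri in zip(w, rtt)) ** 2
    return w_tot * num / den

def lia_increase(w, rtt, i):
    """Per-ACK increase on subflow i: min(alpha/w_tot, 1/w_i)."""
    return min(lia_alpha(w, rtt) / sum(w), 1.0 / w[i])

w, rtt = [40.0, 10.0], [0.030, 0.020]   # e.g. a mmWave and an LTE subflow
print("coupled:  ", [round(lia_increase(w, rtt, i), 4) for i in range(2)])
print("uncoupled:", [round(1.0 / wi, 4) for wi in w])
# The coupled increases are equal and smaller than the uncoupled ones,
# i.e. the aggregate flow is no more aggressive than a single TCP flow.
```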
§ MULTI-CONNECTED UE WITH ACKS ON LTE AND DATA ON MMWAVE

In this section, we test the performance of another kind of multipath deployment. The UE receives downlink data from the eNB on a mmWave connection, and sends the TCP ACKs either on LTE or on the same mmWave link. We consider a mobility scenario, with the eNB at coordinates (-1, 20) m and the UE moving from (151, 0) m to (151, 40) m at speed s∈{2,5} m/s. Ten small obstacles are deployed randomly in the area between the eNB and the UE, and the simulations exploit an ns–3 channel model that uses real traces to model the LOS to NLOS (or vice versa) transitions <cit.>. An example of random deployment is shown in Fig. <ref>. The results of these simulations are shown in Fig. <ref> for different RLC buffer sizes (2, 20 MB) and UE speeds s (2, 5 m/s). It can be seen that with a larger buffer the throughput is slightly higher when sending the ACKs on the LTE connection, while with the smaller one it is preferable to use the mmWave connection, but the throughput is 100 Mbit/s lower. Indeed, when the buffer is small there is a need for more timely updates of the TCP congestion window, since there are more chances to cause buffer overflow and lose packets. If LTE is used, the latency of the ACKs increases, therefore the timeliness of congestion control is reduced. However, the difference in throughput between the two solutions is small. Instead, with a larger buffer it is possible to queue more packets, and the transport layer is less sensitive to the latency in reporting the ACKs. In this case it is better to receive the ACKs on a more reliable LTE connection. However, notice that when the ACKs are sent on LTE the RTT increases, thus it takes more time to fill the capacity of the LOS link. This explains why, also in the large buffer case, the difference in performance between the system with ACKs on LTE and that with ACKs on mmWave is minimal. The main difference is thus in the choice of the buffer size: an undersized buffer degrades the throughput by up to 27%.

§ CONCLUSIONS

The large bandwidth available at mmWave frequencies could allow a link capacity of the order of gigabits per second. However, the interaction with legacy transport protocols could prevent the full exploitation of the rate potentially available in mmWave communications. In this paper, we presented the first comprehensive performance evaluation of the interaction of Single Path and Multi Path TCP with mmWave links, with and without lower-layer retransmission mechanisms, using (i) a simulator with a channel model based on real measurements and a complete 3GPP-like cellular stack, and (ii) the actual TCP implementation of Linux. We firstly remarked that for mmWave it is very important to mask the channel losses from the higher TCP layer with link retransmission mechanisms, otherwise it is not possible to reach a high throughput. Secondly, we studied the behavior of MP-TCP on 28 GHz mmWave links with LTE or 73 GHz mmWave links as secondary subflows. We showed that when the mmWave link has a high probability of being in the NLOS state, a secondary LTE subflow improves the throughput performance of long-lived TCP sessions more than a mmWave subflow, and that the design goals of MP-TCP may not be met with mmWave links. Finally, we evaluated whether or not using LTE as the uplink connection for TCP ACKs helps improve the throughput, and showed that there is not a clear gain, because of the additional latency introduced by the LTE radio link.
As part of our future work, we will study how it is possible to exploit connections over multiple paths and lower-layer retransmission mechanisms to reach a high throughput on mmWave links, while trying to reduce the additional delay introduced by retransmissions to meet the 1 ms latency 5G design goal. For example, we will extend our study to account for multiple mmWave eNBs deployed in different locations, so that the UE can use more than two MP-TCP subflows.
http://arxiv.org/abs/1703.08985v1
{ "authors": [ "Michele Polese", "Rittwik Jana", "Michele Zorzi" ], "categories": [ "cs.NI", "cs.IT", "math.IT" ], "primary_category": "cs.NI", "published": "20170327095020", "title": "TCP in 5G mmWave Networks: Link Level Retransmissions and MP-TCP" }
Dispersive dam-break flow of a photon fluid

Univ. Lille, CNRS, UMR 8523 - PhLAM - Physique des Lasers Atomes et Molécules, F-59000 Lille, France (e-mail: gang.xu@ircica.univ-lille1.fr)
Stefano Trillo, Department of Engineering, University of Ferrara, Via Saragat 1, 44122 Ferrara, Italy (e-mail: stefano.trillo@unife.it)

We investigate the temporal photonic analogue of the dam-break phenomenon for shallow water by exploiting a fiber optics setup. We clearly observe the decay of the step-like input (photonic dam) into a pair of oppositely propagating rarefaction wave and dispersive shock wave. Our results show evidence for a critical transition of the dispersive shock into a self-cavitating state. The detailed observation of the cavitating state dynamics allows for a fully quantitative test of the Whitham modulation theory applied to the universal defocusing nonlinear Schrödinger equation.

PACS: 05.45.Yv, 47.35.Fg, 47.35.Bb, 47.35.Jk

Introduction.—The laser light propagating in nonlinear media often behaves as a photon fluid <cit.>, sharing phenomena that characterise fluid flows such as rogue waves <cit.>, instabilities <cit.>, transition to turbulence <cit.>, coherent <cit.> and incoherent <cit.> shock waves, and superfluid flow around obstacles <cit.>. In regimes described by the defocusing nonlinear Schrödinger equation (NLSE), a distinctive trait of the photon fluid evolution is the formation of dispersive shock waves (DSWs, or undular bores) <cit.>, fast oscillating wavetrains that spontaneously emerge from the tendency to develop a gradient catastrophe <cit.>. DSWs are ubiquitous, being observed in other systems ruled by the NLSE such as cold atom condensates <cit.> and spin waves <cit.>, as well as in other dispersive hydrodynamic settings involving, e.g., electrons <cit.>, water waves <cit.> and viscous fluid conduits <cit.>. A major breakthrough in the analytical description of the DSW is the Whitham modulation theory <cit.>, which, however, assumes the DSW to develop from step-like initial conditions (the Riemann problem), whereas experiments to date have been mostly concerned with smooth or periodic initial conditions. Therefore, experiments devoted to investigating the dispersive Riemann problem are of paramount importance for advancing the understanding of dispersive hydrodynamic flows, a fortiori for systems such as the NLSE where the modulation theory predicts critical transitions in the behavior of the shock <cit.>. In this letter, we exploit a fiber optics set-up to investigate experimentally the Riemann problem associated with an initial step, in the temporal domain, in the optical power. In the absence of any frequency chirp across the jump, such a problem is isomorphic to the classic 1D dam-break problem of hydrodynamics <cit.>. Indeed, we demonstrate that the light evolves as a fluid mimicking the basic features of the dam-break in shallow water, namely the decay into a shock and rarefaction-wave (RW) pair, connected by an expanding plateau. The dispersive character of the shock, however, leads to a critical transition, which is predicted in the framework of Whitham theory <cit.>. Above a critical height of the jump, we report evidence for the onset of self-cavitation, i.e., the appearance of a null point in the optical power.
The full experimental characterisation of the cavitating state allows, for the first time, for a quantitative comparison with modulation theory.

Theory of dispersive dam-break.—The dam-break Riemann problem that we investigate is described by the NLSE that rules the evolution of the temporal envelope field E(T,Z) along the fiber distance Z,

i ∂E/∂Z - (k''/2) ∂²E/∂T² + γ|E|²E = 0,

subject to a step initial condition in T=0, i.e., E(T,0)=√(P_L) for T<0, and E(T,0)=√(P_R) for T>0, where the constant left and right power levels P_L and P_R ≥ P_L define the bottom and the top of the optical dam in T=0 (see Fig. <ref>(a)). Here T=T_lab - k'Z is the retarded time in a frame moving with group velocity 1/k' = (dk/dω|_ω_0)^{-1}, whereas k'' = d²k/dω²|_ω_0 = 176 ps²/km and γ = 3 (W km)^{-1} are the dispersion and the nonlinear coefficient of our fiber, ω_0 being the carrier frequency. γk'' > 0 corresponds to the defocusing regime of the NLSE. The Madelung transform E(T,Z) = √(P_R) √(ρ(t,z)) exp(-i ∫_{-∞}^t u(t',z) dt') allows us to formulate the NLSE in hydrodynamical form:

ρ_z + (ρu)_t = 0; u_z + (u²/2 + ρ)_t = (1/4)[ρ_tt/ρ - (ρ_t)²/(2ρ²)]_t,

where we set z=Z/Z_0, t=T/T_0 with Z_0 ≡ (γP_R)^{-1} and T_0 ≡ √(k''/(γP_R)). By neglecting the RHS containing higher-order derivatives (the quantum pressure term <cit.>), Eqs. (<ref>-<ref>) constitute the dispersionless vector conservation law known as the shallow water equations (SWEs) <cit.>. The roles of the local water depth and of the longitudinal velocity are played, here, by the normalized power ρ=|E|²/P_R and the chirp u, whereas space and time have interchanged roles. The evolution of an initial (z=0) step elevation from ρ_L to ρ_R > ρ_L <cit.>, with u(t,0) identically vanishing, is the classic dam-break Riemann problem. The solution of the SWEs, first given by Stoker <cit.>, can be formulated in terms of the self-similar variable τ=t/z <cit.> and involves a classical shock wave and a RW. As shown by the black solid line in Fig. <ref>(a), the shock and the RW propagate in opposite directions (towards t<0 and t>0, respectively, or down- and up-stream directions in the hydrodynamic problem), being connected by an expanding plateau characterised by the intermediate constant values ρ_i = (√(ρ_L) + √(ρ_R))²/4, u_i = √(ρ_L) - √(ρ_R). The step values ρ_L and ρ_R also fix the edge velocities of the smooth RW to the values:

τ_4 = √(ρ_R); τ_3 = (3√(ρ_L) - √(ρ_R))/2.

Conversely, the jump from ρ_i, u_i to ρ_L, u_L=0 constitutes a classical shock which moves with velocity τ_RH = ρ_i u_i/(ρ_i - ρ_L), deriving from the well-known Rankine-Hugoniot condition <cit.>. However, such a shock is regularized into an oscillating DSW by the effect of the RHS of Eq. (<ref>), which stems from dispersion. A snapshot of the breaking scenario obtained from the full NLSE is compared with the dispersionless limit (SWEs) in Fig. <ref>(a). The DSW is delimited by two edge velocities τ_{1,2} (τ_1 < τ_RH < τ_2), where the oscillations vanish (linear edge) or become deepest (soliton edge), respectively <cit.>. According to Whitham modulation theory, such velocities read as <cit.>

τ_2 = -(√(ρ_L) + √(ρ_R))/2; τ_1 = (ρ_L - 2ρ_R)/√(ρ_R),

whereas, in the same framework, owing to the RW smoothness, one recovers Eqs. (<ref>) for the RW edges. The velocities τ_{1,2} in Eqs. (<ref>) and τ_{3,4} in Eqs. (<ref>) define the wedges where the DSW and the RW expand, as shown (in dimensional units, i.e., including the multiplicative factor T_0/Z_0 = √(k''γP_R) for our fiber) in Fig. <ref>(b).
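The modulation-theory predictions above are straightforward to evaluate numerically. The sketch below computes the plateau values and the edge velocities of Eqs. (2)-(5) and converts them to delays T_j = √(k''γP_R) τ_j L; the fiber length L = 15 km is the value quoted in the supplemental material, and r = 0.05 is one of the experimental ratios used below.

```python
# Plateau and RW/DSW edge velocities for the dispersive dam-break,
# converted to delays T_j = sqrt(k2*gamma*P_R) * tau_j * L (ps).
import numpy as np

k2, gamma, L = 176.0, 3.0, 15.0        # ps^2/km, 1/(W km), km
P_R, P_L = 1.0, 0.05                   # W, i.e. r = P_L/P_R = 0.05
rho_L, rho_R = P_L / P_R, 1.0

rho_i = (np.sqrt(rho_L) + np.sqrt(rho_R))**2 / 4
u_i = np.sqrt(rho_L) - np.sqrt(rho_R)
tau = {"tau1 (linear edge) ": (rho_L - 2 * rho_R) / np.sqrt(rho_R),
       "tau2 (soliton edge)": -(np.sqrt(rho_L) + np.sqrt(rho_R)) / 2,
       "tau3 (RW edge)     ": (3 * np.sqrt(rho_L) - np.sqrt(rho_R)) / 2,
       "tau4 (RW edge)     ": np.sqrt(rho_R)}
scale = np.sqrt(k2 * gamma * P_R) * L  # ps per unit of tau
print(f"plateau: rho_i = {rho_i:.3f}, u_i = {u_i:.3f}")
for name, v in tau.items():
    print(f"{name}: tau = {v:+.3f} -> T = {scale * v:+7.1f} ps")
# DSW span T(tau2) - T(tau1) ~ 460 ps here, consistent with the
# >400 ps extension reported below for r = 0.05 and P_R = 1 W.
```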
Importantly, the modulation theory entails a crossover between two different regimes separated by a critical condition. In the first regime, the DSW envelopes are monotone and the DSW power never vanishes, as shown in Fig. <ref>(a). However, below a critical value of the key parameter, namely the ratio between the quiescent (down- and up-stream) states, which we henceforth denote as r=ρ_L/ρ_R=P_L/P_R, the DSW exhibits a self-cavitating point (i.e., zero power, corresponding to a vacuum point in gas dynamics), which we find at

τ_0 = √(ρ_L) - √(ρ_R) + 2√(ρ_L) [1 - ((√(ρ_R) - √(ρ_L))/(√(ρ_R) - 3√(ρ_L))) (E(m)/K(m))]^{-1},

where K and E are the complete elliptic integrals of the first and second kind, respectively, of modulus m = 4ρ_L/(√(ρ_L) - √(ρ_R))². The threshold for cavitation can be obtained by imposing m=1 (cavitation on the soliton edge of the DSW), or equivalently τ_2=u_i, from which we obtain

r_th ≡ (ρ_L/ρ_R)_th ≡ (P_L/P_R)_th = 1/9 ≃ 0.11.

For P_L/P_R below such a threshold, a vacuum point always exists, which tends towards the linear edge of the DSW in the limit P_L → 0. In this limit, however, both the oscillation amplitude and the plateau extension tend to vanish <cit.>, and one recovers the limiting hydrodynamic case known as the "dry-bed" dam-break, characterised by a single RW extending to zero and no shock <cit.>.

Experiment.—We performed a series of experiments to provide evidence for the decay of a photonic dam into the DSW-RW pair and for the critical transition to self-cavitation. In our experimental setup (see <cit.>), we make use of a continuous-wave laser diode source emitting at λ=1560 nm, which is intensity modulated by an electro-optic modulator driven by an arbitrary waveform generator, with typical rise time T_r ∼ 25 ps (raised tanh shape, rising from 10% to 90% in ∼ 50 ps) and 78 MHz repetition rate. The signal is pre-amplified in a semiconductor optical amplifier, then passes through a spectral filter to remove excess amplified spontaneous emission, and is finally amplified by means of an Erbium-doped fiber amplifier. This signal is launched in a dispersion compensating fiber (parameters as in the caption of Fig. <ref>) and analysed in the time domain with an optical sampling oscilloscope with 1.6 ps resolution. A typical shape of the step-like signal that we obtained is shown in Fig. <ref>(a). Two elevating steps in power, from zero to P_L and from P_L to P_R, can be adjusted independently, and are followed by a descending step (trailing edge) back to zero. The advantage of using this specific signal shape is that, in the same experimental run, we can compare (i) the DSW-RW dynamics developing around the jump from P_L to P_R with (ii) the "dry-bed" dynamics developing over the descending step. Importantly, the duration of each of the constant power states P_L, P_R was adjusted to be long, up to ∼ 1 ns, so that the DSW-RW pair can develop without feeling the interaction with the first step and the trailing edge of the waveform. Clearly, as can be seen in Fig. <ref>(a), the main step (P_L to P_R) is not instantaneous. However, the short rising time allows us to clearly observe the dam-break phenomenon, as we will see below. Another challenging issue faced in the experiment is the loss compensation. Indeed, the DSW-RW dynamics is very sensitive to losses, and even the weak losses of optical fibers (0.5 dB/km) are strongly detrimental. For instance, the plateau that connects the RW and the DSW would be completely distorted, not allowing for a quantitative comparison with theory (see Figs.
S2 and S3 in <cit.>). Inspired by transparent telecommunication networks, we counterbalanced the linear losses by means of Raman amplification <cit.>. To this end, a counterpropagating beam was injected in the fiber at λ=1482 nm. In this way, we achieve significant (close to peak) Raman gain with weak relative intensity noise transfer <cit.>. Figure <ref>(b) shows the output temporal trace obtained for the input step-like signal with P_R=0.6 W, P_L ≃ 100 mW (r ≃ 0.16). As can be seen, the input optical dam at T=0 breaks into a DSW-RW pair. The DSW is characterized by fast oscillations with ∼ 40 ps average period (the period scales proportionally to √(k''/(γ P_i))). The RW smoothly connects the plateau with power P_i ≃ 0.3 W (in agreement with the theoretical prediction P_i = P_R ρ_i = 0.297 W) to the peak level P_R. Conversely, over the trailing edge, no shock occurs and a smooth RW dropping to zero is observed, consistent with the case of the "dry-bed" dam-break. Overall, the data show an excellent agreement with the numerical simulations reported in Fig. <ref>(d) based on the NLSE (<ref>). We emphasise that the occurrence of the DSW-RW pair is related to the non-zero background and not to the character (ascending vs. descending) of the step. Indeed, the DSW-RW pair is observed on the trailing edge for a mirror-symmetric input. We have then proceeded to investigate in detail the breaking dynamics of the step-like input into the DSW-RW pair for different heights of the optical dam. The results shown in Fig. <ref> are obtained for a fixed peak power P_R=0.8 W (slightly larger than that of Fig. <ref>) and a variable background P_L, i.e., a variable ratio r. For a quantitative comparison with Whitham modulation theory we also report, as vertical dashed lines superimposed on the measured data, the delays T_j = √(k''γ P_R) τ_j L, j=1,2,3,4, corresponding to the edges of the RW in Eqs. (<ref>) and of the DSW in Eqs. (<ref>), respectively. We remark that: (i) the experimental traces (left column) show a very good agreement with both the simulations based on the NLSE (right column) and the delays T_j predicted by modulation theory; (ii) the duration of the plateau connecting the DSW and the elevating RW significantly shrinks as the ratio r decreases (from top to bottom in Fig. <ref>), again in quantitative agreement with modulation theory, which predicts that |τ_2-τ_3| reduces when r decreases; (iii) for relatively large ratios r the DSW never touches zero and the edge τ_2 of the DSW (where m=1) is constituted by a gray soliton <cit.>, as shown in Fig. <ref>(a1-a2). However, by decreasing r, we observe the onset of cavitation when the soliton at the trailing edge of the DSW becomes black (see Fig. <ref>(b1,b2), r = 0.11). The observed threshold value r=0.11 is in excellent agreement with the theoretical prediction [Eq. (<ref>)]. While the black soliton possesses a zero velocity (with respect to its background), the DSW edge maintains a finite velocity due to the chirped background (velocity u_i). Also associated with such a black soliton, we expect a phase jump of π (see Fig. S5(b) for more details), which, however, cannot be measured with our set-up. At even lower ratios r, the vacuum point shifts towards the left and the DSW envelope becomes non-monotone. While the results in Fig. <ref> already show the crossover to the cavitating state, a detailed quantitative study of this regime requires operating with a DSW possessing a larger extension and a shorter average period.
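Equation (6) is also easy to evaluate with standard elliptic-integral routines, as sketched below (here for ρ_R = 1, so that r = ρ_L). At r = r_th = 1/9 one has m = 1 exactly, K(m) diverges, and τ_0 tends to the soliton-edge velocity, i.e., the vacuum sits on a black soliton; the delays printed for r < r_th track the measured shift of the vacuum towards the linear edge.

```python
# Vacuum-point velocity tau_0 from Eq. (6) and the corresponding delays
# T_0 = sqrt(k2*gamma*P_R)*tau_0*L for P_R = 1 W and L = 15 km.
import numpy as np
from scipy.special import ellipk, ellipe      # K(m), E(m), parameter m

def tau0(r):
    sL, sR = np.sqrt(r), 1.0                  # sqrt(rho_L), sqrt(rho_R)
    m = 4 * r / (sL - sR)**2
    bracket = 1 - (sR - sL) / (sR - 3 * sL) * ellipe(m) / ellipk(m)
    return sL - sR + 2 * sL / bracket

scale = np.sqrt(176.0 * 3.0 * 1.0) * 15.0     # ps per unit of tau
for r in (0.03, 0.05, 0.07):                  # below threshold r_th = 1/9
    print(f"r = {r:.2f}: tau_0 = {tau0(r):+.3f} -> T_0 = {scale*tau0(r):+5.0f} ps")
# e.g. r = 0.05 gives T_0 ~ -430 ps, close to the ~ -400 ps vacuum location
# found in the full NLSE simulations of the supplement (Whitham theory being
# asymptotic, some discrepancy at finite fiber length is expected).
```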
To obtain such a DSW, we operate at the maximum available power in our set-up, P_R=1 W. This allows us to observe a DSW exhibiting several oscillations with an average period of ∼ 30 ps, spanning a range that exceeds 400 ps, as shown in the inset of Fig. <ref>(e) for P_L=50 mW, i.e., r=0.05. Importantly, in this regime the delay of the cavitating state (zero power) can be accurately identified within the DSW. Figures <ref>(a-d) display a zoom over the bottom part of the DSW in order to show how the cavitating state moves when the ratio r is varied across the threshold. As shown in Fig. <ref>(a1-a2), when r=0.15 the DSW still exhibits monotone envelopes featuring a gray soliton edge with a non-vanishing dip. However, Figs. <ref>(b1-b2) show very clearly the onset of cavitation (black soliton edge) at the threshold r=0.11, in agreement with Eq. (<ref>). Further decreasing r (i.e., for higher dam heights) leads the cavitating state to acquire increasingly negative delays, shifting progressively towards the linear edge of the DSW, as is clear from Fig. <ref>(c1-c2) for r=0.07 and Fig. <ref>(d1-d2) for r=0.03. In all cases the numerical simulations are still in good agreement with the measured profiles. We summarise in Fig. <ref>(e) the delays of the vacuum state extracted from the measured temporal traces for different ratios r (see also <cit.>) in the range 0.01 ≤ r ≤ r_th (below r ∼ 0.01, the residual noise makes it impossible to resolve the delay of the vacuum, which is quite close to the linear edge). The data are contrasted with the theoretical prediction from Eq. (<ref>), showing a satisfactory agreement in the whole range. We ascribe the discrepancies to the finite rise time of the step and to the asymptotic character of Whitham theory, which is expected to become more accurate as the propagation length increases and/or the dispersion decreases <cit.>. In summary, we have reported the fluid behavior of light in a dispersive dam-break experiment, revealing a transition to cavitation in close quantitative agreement with the predictions of modulation theory. Such behavior is expected to be universal for systems ruled by the NLSE, while qualitatively differing from other dispersive breaking scenarios observed in fluids <cit.>. Our platform could be further used to explore other critical behaviors in the general dispersive Riemann problem <cit.>, including the focusing case <cit.>.

The present research was supported by IRCICA (USR 3380 CNRS), by the Agence Nationale de la Recherche in the framework of the Labex CEMPI (ANR-11-LABX-0007-01), Equipex FLUX (ANR-11-EQPX-0017), by the projects NoAWE (ANR-14-ACHN-0014), TOPWAVE (ANR-13-JS04-0004), and the Ministry of Higher Education and Research, Hauts de France council and European Regional Development Fund (ERDF) through the Contrat de Projets Etat-Region (CPER Photonics for Society P4S). S.T. also acknowledges the grant PRIN 2012BFNWZ2. The authors are grateful to L. Bigot, E. Andresen and the IRCICA-TEKTRONIX European Optical and Wireless Innovation Laboratory for technical support with the electronic devices.

§ SUPPLEMENTAL MATERIAL

1. Details on the experimental setup

In this section, we provide more details on the experimental setup sketched in Fig. <ref>. The light source is a continuous-wave laser diode centered at 1560 nm which delivers 5 mW of power. The two steps in power are generated by means of an electro-optic modulator (EOM, NIR-MX-LN series, 20 GHz of bandwidth, Photline) driven by an arbitrary waveform generator (AWG7000, 50 GHz of bandwidth, Tektronix).
The signal is pre-amplified in a semiconductor optical amplifier, the excess spontaneous emission is removed by a spectral filter of 1 nm bandwidth (full width at half maximum), and the signal is further amplified with an erbium-doped fiber amplifier (EDFA). The power can be tuned by means of a variable attenuator before being launched inside the optical fiber. Temporal traces are recorded with an optical sampling oscilloscope with a 1.6 ps resolution (Eyechecker, Alnair). Overall, we obtain a clean step-like optical waveform with controllable power levels and durations, as well as a very short rise time (the 10-90% rise time is ∼ 50 ps), which is suitable to observe the dam-break phenomenon. The repetition rate was fixed to 78 MHz, which corresponds to a good trade-off between a value large enough for the sampling oscilloscope to operate stably and a value low enough to obtain high peak power from the EDFA. The distortions induced by the EOM and the nonlinear response of the amplifiers are pre-compensated by finely adjusting the shape of the electrical signal delivered by the AWG. As can be seen in Fig. S1, in order to obtain the clean step-like signal at the fiber input (as in Fig. 2(a) of the letter, and Fig. <ref>(d)), it was necessary to drive the modulator with the signal shown in Fig. <ref>(c), so as to avoid the unexpected overshoots at t ∼ 30, 1070 ps shown in Fig. <ref>(b), which result from driving the AWG with the nearly ideal signal in Fig. <ref>(a). Another crucial feature of the set-up is the loss compensation scheme. Indeed, while the losses in the optical fiber are rather low (0.5 dB/km), the long fiber length (15 km) leads to 7.5 dB (∼ 80%) of total attenuation. This strongly affects the dynamics of the process, as is evident by contrasting Fig. <ref>(a-h) with Fig. <ref>(a-h), which depict the regimes with and without losses for different values of r=P_L/P_R and the same input power P_R=0.8 W. The losses, besides substantially damping the output power (compare the vertical scales in Figs. <ref> and <ref>), determine a strong deformation of the characteristic plateau that separates the RW and the DSW, which is an essential trait of the fluid-like dam-break scenario. To correct this deviation from the ideal fluid behavior, we compensate the fiber losses by taking advantage of Raman amplification, as employed in telecommunication systems <cit.>. The aim is to obtain an effectively transparent fiber. We used a Raman pump of 2 W maximum power centered at 1482 nm (∼ 13 THz detuning from the signal, which guarantees a large Raman gain <cit.>), which is launched and extracted by means of multiplexers at the fiber input and output. The pump counter-propagates with respect to the signal in order to minimise the pump-to-signal relative intensity noise transfer <cit.>. We carefully adjusted the Raman pump power by looking at the output traces until we achieved both a flat plateau between the DSW and the RW and a peak power plateau that equals the input level. The validity of the scheme is clear by comparing the left columns in Fig. <ref> (Raman pump off) with those in Fig. <ref> (Raman pump on, loss compensated), and from the perfect agreement that we obtain with the numerical simulations (right columns), based on the NLSE with linear loss included in Fig. <ref>(a2-h2) and on the pure (conservative) NLSE in Fig. <ref>(a2-h2). To summarise, our fiber system accurately mimics the dispersive hydrodynamics described by the conservative NLSE.
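For completeness, here is a minimal sketch of the split-step (Fourier) integration of the NLSE used for such simulations (the scheme is described further in Sec. 3 below), in physical units. The grid, the step count, and the single elevating step are illustrative choices, not the actual simulation settings; setting alpha_db = 0.5 mimics the lossy runs, while alpha_db = 0 corresponds to the Raman-compensated, conservative case.

```python
# Minimal symmetrized split-step Fourier sketch for
# i E_Z - (k2/2) E_TT + gam |E|^2 E = 0 (T in ps, Z in km),
# with an optional linear loss term alpha_db (dB/km).
import numpy as np

def ssfm(E, dt, L, k2=176.0, gam=3.0, alpha_db=0.0, nz=4000):
    h = L / nz
    w = 2 * np.pi * np.fft.fftfreq(E.size, dt)        # rad/ps
    half_lin = np.exp(0.5j * k2 * w**2 * (h / 2))     # half dispersion step
    att = np.exp(-(alpha_db / 4.343) * h / 2)         # amplitude loss per step
    for _ in range(nz):
        E = np.fft.ifft(half_lin * np.fft.fft(E))
        E = E * np.exp(1j * gam * np.abs(E)**2 * h) * att
        E = np.fft.ifft(half_lin * np.fft.fft(E))
    return E

# Elevating tanh step of Sec. 3 (note: the FFT implies periodic boundaries;
# the real simulations close the waveform with the trailing "dry-bed" edge)
t = np.arange(-2048, 2048) * 1.0                      # ps grid, dt = 1 ps
P_L, P_R, Tr = 0.05, 1.0, 25.0
E0 = np.sqrt(P_L + (P_R - P_L) * (1 + np.tanh(t / Tr)) / 2)
E_out = ssfm(E0, dt=1.0, L=15.0)                      # conservative, 15 km
```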
2. Additional experimental results on the self-cavitation transition

We report in Fig. <ref> the full sequence of close-up measurements (with corresponding simulations) of the DSW bottom for different values of r=P_L/P_R (Fig. 4 in the letter contains a subset of such measurements). Above threshold, r > r_th=1/9, the DSW envelope is monotone and no vacuum appears (Fig. <ref>(a-b)). Conversely, below threshold, r ≤ r_th, a self-cavitating state (vacuum) appears, as highlighted by the red circles in Fig. <ref>(c-e). The large extension of the DSW due to the high power regime (P_R=1 W) allows us to observe with extremely good resolution the shift of the vacuum towards the leading (linear) edge of the DSW as r is reduced. A very good agreement is also achieved with the numerical simulations.

3. Numerics: further details

All the simulations based on the NLSE are made by means of the split-step method. The elevating step from P_L to P_R is modelled by the input field E(Z=0,T)=[P_L+(P_R-P_L)(1+tanh(T/T_r))/2]^{1/2}, where T_r=25-30 ps arises from the fit with the optical input trace of each run, and is due to the rise time of the modulator. Similarly, for the decreasing step on the trailing ("dry-bed") edge we use E(Z=0,T)=[P_R(1-tanh(T/T_f))/2]^{1/2} with T_f=45 ps, due to the slightly longer falling time of the modulator. Obvious combinations of the steps allow for modelling the full waveform as in Fig. <ref>(d). We display in Figs. <ref>(a) and (b) an example of the full evolution of the intensity and phase along the fiber, for r=P_L/P_R = 0.05, which corresponds to Fig. <ref>(f1,f2). The dot-dashed oblique lines in Fig. <ref>(a1) stand for the edges of the DSW and the RW from Eqs. (4-5) of the letter. From Fig. <ref>(a1) and Fig. <ref>(a2), reporting the output power profile, we clearly see the expanding plateau between the DSW and the RW. We also notice that the top of the pulse (peak power of 1 W) is eroded by the two RWs (the left and right RW are due to the "wet-bed" and "dry-bed" dynamics on the input leading and trailing edges, respectively), without any appreciable interaction between them (witnessing that the two scenarios do not mix over the finite fiber length). A close-up of the bottom of the shock fan is shown in the inset of Fig. <ref>(a2). Around the vacuum point (t ≃ -398 ps), the numerics reveal a π phase jump, as shown in the inset of Fig. <ref>(b2), consistent with the local nature of black solitons. Across the vacuum, the chirp (phase derivative) u exhibits strong peaks of opposite sign (see Fig. <ref>(c)). Unfortunately, such fast phase features are not measurable with the present set-up and will be investigated in the future by means of phase reconstruction techniques.

4. The velocities: from hydrodynamic approximation to modulation theory

Since the shallow water equations (SWEs) are a two-dimensional vector conservation law, the dynamics of the general Riemann initial value problem entails the superposition of a wave pair, where each wave can be of rarefaction wave (RW) type or of classical shock wave (SW) type. A classification into (i) RW-RW, (ii) SW-SW, (iii) RW-SW can be made according to the values of the step in ρ and u. Such a classification can be suitably extended to the dispersive problem ruled by the NLSE <cit.>. The dam-break Riemann problem corresponds to a step in the variable ρ(t,z=0) with u(t,z=0)=0, and leads to a RW-SW pair separated by a plateau with constant ρ_i, u_i.
For an increasing step (ρ_L < ρ_R), as in the experiment, the SW is left-going, heading towards t<0 (whereas it would be right-going for a decreasing step ρ_L > ρ_R). The first solution to the "wet-bed" problem (ρ_L ≠ 0) in the SWEs was reported by Stoker <cit.>. Essentially, one can start from the Rankine-Hugoniot conditions for the SW and, using also the constancy of u+2√(ρ) across the RW, eliminate u_i to obtain an equation to be solved for ρ_i. Then, returning to the starting equations, one can specify all the quantities of interest (u_i, the SW velocity, the edges of the RW) as functions of the quiescent states ρ_R, u_R=0 and ρ_L, u_L=0. A slightly different formulation, which is directly suited for the extension to the dispersive case ruled by the full NLSE, amounts to solving the problem in terms of so-called simple waves. The starting point is the SWEs cast in diagonal form in terms of the Riemann invariants r^±(t,z) ≡ u(t,z) ± 2√(ρ(t,z)),

∂_z r^± + v^± ∂_t r^± = 0,

where the eigenvelocities are v^± = v^±(r^-,r^+) = (3r^± + r^∓)/4. The dam-break problem with ρ_L<ρ_R implies an increasing [decreasing] step for r^+(t,0) [r^-(t,0)], as displayed in Fig. <ref> (steps from r^+_L=2√(ρ_L) to r^+_R=2√(ρ_R), and from r^-_L=-2√(ρ_L) to r^-_R=-2√(ρ_R), respectively). For this case, characterised by r_R^- < r_L^- < r_L^+ < r_R^+, we sketch in Fig. <ref> the simple wave solutions of Eqs. (<ref>), such that one of the Riemann invariants is constant and the other one varies in a self-similar way, i.e. r=r(τ=t/z). The solution involves, for τ>0, a constant r^-=r^-_R and a RW for r^+ described by τ=v^+(r^-_R,r^+), whereas for τ<0, r^+=r^+_L is constant, and τ=v^-(r^-,r^+_L) describes the characteristic overturning (see Fig. <ref>), which allows one to introduce the SW through the Rankine-Hugoniot condition (cyan step in Fig. <ref>, see also Fig. 1 of the letter). The upper state of the shock is the plateau in ρ,u = ρ_i,u_i, which is found by exploiting continuity in τ=0 to be u_i=(r^+_L+r^-_R)/2=√(ρ_L)-√(ρ_R) and ρ_i=[(r^+_L-r^-_R)/4]^2=(√(ρ_L)+√(ρ_R))^2/4. The appropriate limits of the solution give the relevant values of the self-similar variable τ_i, i=1,2,3,4, shown in Fig. <ref>, as follows:

τ_4 = (3r^+_R+r^-_R)/4 = √(ρ_R);
τ_3 = (3r^+_L+r^-_R)/4 = (3√(ρ_L)-√(ρ_R))/2;
τ_2 = (3r^-_L+r^+_L)/4 = -√(ρ_L);
τ_1 = (3r^-_R+r^+_L)/4 = (√(ρ_L)-3√(ρ_R))/2,

where we have exploited the fact that r^±_L,R = ± 2√(ρ_L,R). It is important to emphasise that τ_4 and τ_3 in Eqs. (<ref>-<ref>) represent the edge velocities of the RW, which remain valid, owing to the RW smoothness, also when dispersive corrections are accounted for. Conversely, when dispersive corrections are applied to the SW, one needs to calculate the edge velocities by averaging over the DSW oscillations, as explained below. We anticipate that this gives a result markedly different from the expressions of τ_2 and τ_1 in Eqs. (<ref>-<ref>). In the presence of an undular shock or DSW, the modulation theory based on Whitham averaging can be exploited to describe the DSW as a modulated cnoidal wave with parameters that vary slowly with respect to its natural periodicity. The averaged Whitham equations describe the slow variation of the parameters and allow one to compute the velocities of the linear and soliton edges of the DSW. For the details of the method, we refer the reader to the original literature <cit.> and to the excellent reviews on the topic, in particular Refs. <cit.>. Below, we only briefly outline the steps that lead us to obtain Eqs.
(4-6) in our letter. The Whitham equations for the NLSE can be expressed in the following diagonal form, first derived by Pavlov <cit.>, by introducing four Riemann invariants r_i=r_i(t,z), i=1,2,3,4, ordered in such a way that r_1 < r_2 < r_3 < r_4, which are a suitable combination of the parameters of the cnoidal wave <cit.>:

∂ r_i/∂ z + v_i ∂ r_i/∂ t = 0; i=1,2,3,4.

Here the velocities v_i=v_i(r_1, r_2, r_3, r_4) depend on combinations of {r_i} and on the complete elliptic integrals of the first (K(m)) and second (E(m)) kind. For instance the velocity v_2=v_2(r_1, r_2, r_3, r_4), which will be relevant in the following, reads explicitly as

v_2 = V + (1/2)(r_2-r_1) [ 1 - (r_3-r_1) E(m)/((r_3-r_2) K(m)) ]^-1,

where V=(1/4)(r_1+r_2+r_3+r_4) and m=(r_4-r_3)(r_2-r_1)/[(r_4-r_2)(r_3-r_1)]. As shown above, a left-going classical shock of the SWEs is characterised by a constant r^+ and a jump in r^- from r^-_L to r^-_R (with given left and right boundaries r^-_L, r^-_R such that r^-_R < r^-_L). The corresponding dispersive regularisation of the shock, i.e. the DSW, can be described by a solution of Eqs. (<ref>) where only one of the Riemann invariants varies. For the left-going DSW, this solution is generated by the initial value r_i0=r_i(t,z=0) with r_10 = r^-_R, r_30 = r^-_L, r_40 = r^+, and a step r_20 in t=0 from r^-_R to r^-_L (obtained from the so-called data regularisation procedure, see <cit.>). The solution is such that r_1, r_3, r_4 remain constant at any z, whereas r_2 gives rise to a smooth RW (owing to the fact that r_20 is non-decreasing) that depends on the self-similar variable τ=t/z. Indeed all the Whitham equations are formally satisfied when r_1,3,4(t,z)=r_10,30,40 and r_2(t,z)=r_2(τ), provided the equation (τ - v_2) r_2'=0 is fulfilled. For r_2(τ) ≠ constant, this implies τ=v_2. The latter relation, once v_2 is expressed as v_2=v_2(r_10, r_2, r_30, r_40) according to Eq. (<ref>), becomes a nonlinear equation in the single unknown r_2(τ), which can be solved to find the RW. The velocities τ_1 ≡ τ_linear and τ_2 ≡ τ_soliton of the leading and trailing edges of the DSW correspond to the edges of this rarefaction wave and can be calculated from the following limits (see Fig. <ref>):

τ_1 = lim_r_2 → r_R^- v_2(r_10, r_2, r_30, r_40) = (2r^-_R + r^+ + r^-_L)/4 + (r^+ - r^-_R)(r^-_L - r^-_R)/(2r^-_R - r^+ - r^-_L);
τ_2 = lim_r_2 → r_L^- v_2(r_10, r_2, r_30, r_40) = (r^-_R + r^+ + 2r^-_L)/4.

To obtain final formulas that depend only on the quiescent states r^±_L,R, we observe that, across the shock, one has a constant r^+=r^+_L, as is clear from Fig. <ref>. Moreover, since the minimum power of the cnoidal wave turns out to be ρ_min=(r_1-r_2-r_3+r_4)^2/16 <cit.>, the existence in the DSW of a cavitating or vacuum state ρ_min=0 requires r_2=r_1-r_3+r_4. Therefore, in analogy with Eqs. (<ref>-<ref>), we obtain the self-similar location of the vacuum by performing the following limit:

τ_0 = lim_r_2 → r^-_R - r^-_L + r^+_L v_2(r_10, r_2, r_30, r_40) = (r^+_L + r^-_R)/2 - [(r^-_L - r^+_L)/2] [ 1 - (r^-_R - r^-_L)/(r^+_L + r^-_R - 2r^-_L) · E(m)/K(m) ]^-1,

where m = [(r^-_L - r^+_L)/(r^-_R - r^-_L)]^2. Finally, we obtain Eqs. (5) and (6) in the letter by substituting into Eqs. (<ref>-<ref>) and (<ref>), respectively, the boundary values of the Riemann invariants expressed in terms of ρ, i.e. r^±_L,R = ± 2√(ρ_L,R).

References

[Brambilla91] M. Brambilla, L. A. Lugiato, V. Penna, F. Prati, C. Tamm, and C. O. Weiss, Phys. Rev. A 43, 5114 (1991).
[Onorato13PR] M. Onorato, S. Residori, U. Bortolozzo, A. Montina, and F. T. Arecchi, Phys. Rep. 528, 47-89 (2013); N. Akhmediev et al., J. Opt.
18, 063001 (2016).
[RTinst] S. Jia, M. Haataja, and J. W. Fleischer, New J. Phys. 14, 075009 (2012).
[Turitsyna13] E. G. Turitsyna, S. V. Smirnov, S. Sugavanam, N. Tarasov, X. Shu, S. A. Babin, E. V. Podivilov, D. V. Churkin, G. Falkovich, and S. K. Turitsyn, Nat. Photonics 7, 783 (2013).
[Wan07] W. Wan, S. Jia, and J. W. Fleischer, Nature Phys. 3, 46 (2006).
[inchshock] J. Garnier, G. Xu, S. Trillo, and A. Picozzi, Phys. Rev. Lett. 111, 113902 (2013).
[Carusotto13] I. Carusotto and C. Ciuti, Rev. Mod. Phys. 85, 299 (2013); I. Carusotto, Proc. R. Soc. A 470, 20140320 (2014).
[Vocke16] D. Vocke, K. Wilson, F. Marino, I. Carusotto, E. M. Wright, T. Roger, B. P. Anderson, P. Öhberg, and D. Faccio, Phys. Rev. A 94, 013849 (2016).
[Conti09] C. Conti, A. Fratalocchi, M. Peccianti, G. Ruocco, and S. Trillo, Phys. Rev. Lett. 102, 083902 (2009).
[Fatome14] J. Fatome, C. Finot, G. Millot, A. Armaroli, and S. Trillo, Phys. Rev. X 4, 021022 (2014).
[Wang15] J. Wang, J. Li, D. Lu, Q. Guo, and W. Hu, Phys. Rev. A 91, 063819 (2015).
[Xu16] G. Xu, A. Mussot, A. Kudlinski, S. Trillo, F. Copie, and M. Conforti, Opt. Lett. 41, 2656 (2016).
[Millot16] G. Millot, S. Pitois, M. Yan, T. Hovhannisyan, A. Bendahmane, T. W. Hänsch, and N. Picqué, Nat. Photon. 10, 27 (2016).
[ElHoefer16] G. A. El and M. A. Hoefer, Physica D 333, 11 (2016).
[Grava] T. Grava, in Rogue and Shock Waves in Nonlinear Dispersive Media, M. Onorato, S. Residori, F. Baronio, eds., Lecture Notes in Physics (Springer, Berlin, 2016).
[Kamchatnov] A. M. Kamchatnov, Nonlinear Periodic Waves and Their Modulations - An Introductory Course (World Scientific, Singapore, 2000).
[Hoefer06] M. A. Hoefer, M. J. Ablowitz, I. Coddington, E. A. Cornell, P. Engels, and V. Schweikhard, Phys. Rev. A 74, 023623 (2006).
[spin] P. A. P. Janantha, M. A. Hoefer, and M. Wu, arXiv:1610.08846 (2016).
[Mo13] Y. C. Mo, R. A. Kishek, D. Feldman, I. Haber, B. Beaudoin, P. G. O'Shea, and J. C. T. Thangaraj, Phys. Rev. Lett. 110, 084802 (2013).
[Trillo16] S. Trillo, G. Deng, G. Biondini, M. Klein, G. F. Clauss, A. Chabchoub, and M. Onorato, Phys. Rev. Lett. 117, 144102 (2016).
[Maiden16] M. D. Maiden, N. K. Lowman, D. V. Anderson, M. E. Schubert, and M. A. Hoefer, Phys. Rev. Lett. 116, 174501 (2016).
[Gurevich_JETP87] A. Gurevich and A. Krylov, JETP 5, 65 (1987).
[El_PhysD95] G. El, V. Geogjaev, A. Gurevich, and A. Krylov, Physica D 87, 186 (1995).
[Stoker] J. J. Stoker, Water Waves (Interscience, New York, 1957).
[Whitham] G. B. Whitham, Linear and Nonlinear Waves (Wiley, New York, 1974).
[Leveque] R. J. LeVeque, Finite-Volume Methods for Hyperbolic Problems (Cambridge, 2004).
[drybedexp] J. D. Martin and W. J. Moyce, Phil. Trans. R. Soc. A 244, 312 (1952).
[Kodama95] Y. Kodama and S. Wabnitz, Opt. Lett. 20, 2291 (1995).
[supplementary] See Supplementary Material for the derivation of Eqs. (<ref>-<ref>), for more details on the experiment along with a sketch of the set-up, and for additional experimental results.
[Gibbs] G. Biondini and T. Trogdon, Gibbs phenomenon for dispersive PDEs, arXiv:1411.6142 (2015).
[AgrawalRaman] C. Headley and G. P. Agrawal, Raman Amplification in Fiber Optical Communication Systems (Academic Press, Amsterdam, 2005).
[rho] Clearly, choosing P_R as the reference power amounts to fixing ρ_R=1. However, in order to write more general formulas we leave both ρ_R and ρ_L as free boundary values.
[warnvelocity] τ=t/z is referred to as a velocity, as commonly used in shock wave theory, though, dimensionally, it is an inverse velocity.
[dispdambreakwater] A. Treske, J. Hydraul. Res. 32, 355 (1994); S. Soares-Frazao and Y. Zech, J. Hydraul. Res. 40, 33 (2010); D.-H.
Kim and P. J. Lynett, J. Hydraul. Eng. 137, 754 (2011).
[bk06] G. Biondini and Y. Kodama, J. Nonlinear Sci. 16, 435 (2006).
[focusing] G. A. El, E. G. Khamis, and A. Tovbis, Nonlinearity 29, 2798 (2016).
[pavlov] M. V. Pavlov, Teor. Mat. Fiz. 71, 351 (1987).
[Gurevich1987JETP] A. Gurevich and A. Krylov, JETP 5, 65 (1987).
[EL1995PhysD] G. El, V. Geogjaev, A. Gurevich, and A. Krylov, Physica D 87, 186 (1995).
[EL2016PhysD] G. El and M. Hoefer, Physica D 333, 11 (2016).
[Hoefer2006PRA] M. A. Hoefer, M. J. Ablowitz, I. Coddington, E. A. Cornell, P. Engels, and V. Schweikhard, Phys. Rev. A 74, 023623 (2006).
{ "authors": [ "Gang Xu", "Matteo Conforti", "Alexandre Kudlinski", "Arnaud Mussot", "Stefano Trillo" ], "categories": [ "physics.optics", "nlin.PS", "nlin.SI", "physics.flu-dyn" ], "primary_category": "physics.optics", "published": "20170327114648", "title": "Dispersive dam-break flow of a photon fluid" }
We construct a linear system non-local game which can be played perfectly using a limit of finite-dimensional quantum strategies, but which cannot be played perfectly on any finite-dimensional Hilbert space, or even with any tensor-product strategy. In particular, this shows that the set of (tensor-product) quantum correlations is not closed. The constructed non-local game provides another counterexample to the "middle" Tsirelson problem, with a shorter proof than our previous paper (though at the loss of the universal embedding theorem). We also show that it is undecidable to determine if a linear system game can be played perfectly with a finite-dimensional strategy, or a limit of finite-dimensional quantum strategies.

Updated: March 2017

§ INTRODUCTION

A two-player non-local game 𝒢 consists of finite question sets I_A and I_B, finite output sets O_A and O_B, and a function V : O_A × O_B × I_A × I_B → {0,1}. During the game, the two players, commonly called Alice and Bob, are given inputs x ∈ I_A and y ∈ I_B respectively, and return outputs a ∈ O_A and b ∈ O_B respectively. The players win if V(a,b|x,y) = 1, and lose if V(a,b|x,y) = 0. The players know the rules of the game, and can decide ahead of time on their strategy. However, once the game is in progress, they are unable to communicate, meaning they do not know each other's inputs or subsequent choices. This can make it impossible for the players to win with certainty. Imagine that the game is played repeatedly. To an outside observer, Alice and Bob's actions during the game are described by the probability p(a,b|x,y) that Alice and Bob output a ∈ O_A and b ∈ O_B on inputs x ∈ I_A and y ∈ I_B. The collection {p(a,b|x,y)} ⊂ ℝ^O_A × O_B × I_A × I_B is called a correlation matrix (or a behaviour). Which correlation matrices can be achieved depends on the physical model. For instance, a correlation matrix {p(a,b|x,y)} is said to be classical if it can be achieved using classical shared randomness. Formally, this means that there must be some integer k ≥ 1, a probability distribution {λ_i} on {1,…,k}, probability distributions {p^ix_a} on O_A for each 1 ≤ i ≤ k and x ∈ I_A, and probability distributions {q^iy_b} on O_B for each 1 ≤ i ≤ k and y ∈ I_B, such that

p(a,b|x,y) = ∑_i=1^k λ_i p^ix_a q^iy_b for all (a,b,x,y) ∈ O_A × O_B × I_A × I_B.

The set of classical correlation matrices is denoted by C_c(I_A,I_B,O_A,O_B), although we typically write C_c when the output and input sets are clear. In quantum information, we are interested in what correlations can be achieved with a shared quantum state. Accordingly, a correlation matrix is said to be quantum if there are finite-dimensional Hilbert spaces H_A and H_B, a quantum state |ψ⟩ ∈ H_A ⊗ H_B, projective measurements[A projective measurement on a Hilbert space H is a collection {P_x}_x ∈ X of self-adjoint operators on H, such that P_x^2 = P_x for all x ∈ X, and ∑_x ∈ X P_x = 𝟙. The set X is interpreted as the set of measurement outcomes.] {M^x_a}_a ∈ O_A on H_A for every x ∈ I_A, and projective measurements {N^y_b}_b ∈ O_B on H_B for every y ∈ I_B, such that

p(a,b|x,y) = ⟨ψ| M^x_a ⊗ N^y_b |ψ⟩ for all (a,b,x,y) ∈ O_A × O_B × I_A × I_B.

The set of quantum correlation matrices is denoted by C_q = C_q(I_A,I_B,O_A,O_B). There are two natural variations on this definition. We can drop the requirement that H_A and H_B be finite-dimensional, in which case we get another set of correlations often denoted by C_qs.
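As a concrete instance of these definitions, the following sketch (ours, Python/NumPy) evaluates p(a,b|x,y) = ⟨ψ| M^x_a ⊗ N^y_b |ψ⟩ for the standard CHSH measurements on a maximally entangled qubit pair; the resulting CHSH value 2√(2) exceeds the classical bound 2, illustrating the separation C_c ≠ C_q recalled below (Bell's theorem).

```python
import numpy as np

def projectors(theta):
    """Eigenprojectors of the +-1 observable cos(theta) Z + sin(theta) X."""
    Z = np.diag([1.0, -1.0])
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    O = np.cos(theta) * Z + np.sin(theta) * X
    I = np.eye(2)
    return {0: (I + O) / 2, 1: (I - O) / 2}   # outcome 0 <-> +1, 1 <-> -1

# Alice's measurement angles for x = 0,1 and Bob's for y = 0,1 (optimal CHSH)
M = {x: projectors(th) for x, th in enumerate([0.0, np.pi / 2])}
N = {y: projectors(th) for y, th in enumerate([np.pi / 4, -np.pi / 4])}

psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # maximally entangled state

def p(a, b, x, y):
    return float(psi @ np.kron(M[x][a], N[y][b]) @ psi)

# correlators E(x,y) = sum_{a,b} (-1)^(a+b) p(a,b|x,y)
E = lambda x, y: sum((-1) ** (a + b) * p(a, b, x, y)
                     for a in (0, 1) for b in (0, 1))
chsh = E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)
print(chsh)   # ~2.828 = 2*sqrt(2), above the classical bound 2
```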
We can also look at correlations which can be realized as limits of finite-dimensional quantum correlations; the corresponding correlation set is the closure of C_q, and is typically denoted by C_qa. It is well-known that C_qs ⊆ C_qa, and consequently C_qa is also the closure of C_qs <cit.>. Since C_qs ⊆ C_qa, we get a hierarchy of correlation sets

C_c ⊆ C_q ⊆ C_qs ⊆ C_qa.

All the sets involved are convex, and C_c and C_qa are both closed. Bell's celebrated theorem <cit.> states that C_c ≠ C_q, and furthermore that the two sets can be separated by a hyperplane. It has been a longstanding open problem to determine the relationship between the quantum correlation sets, and in particular to determine whether C_q and C_qs are closed (see, e.g., <cit.>). Part of the interest in this latter question comes from the resource theory of non-local games: C_q ≠ C_qa if and only if there is a non-local game which can be played optimally (with respect to some payoff function) using a limit of finite-dimensional quantum strategies, but cannot be played optimally using any fixed dimension. Numerical evidence has suggested that even very simple non-local games might have this property <cit.>. For variants of non-local games (for instance, with quantum questions, or infinite output sets), there are several examples of games with this property <cit.>. The purpose of this paper is to show that there are indeed non-local games (with finite classical input and output sets) that cannot be played optimally using any fixed dimension. A perfect strategy for a non-local game 𝒢 is a correlation matrix {p(a,b|x,y)} such that Alice and Bob win with probability one on every pair of inputs x and y. Formally, this means that for all (a,b,x,y) ∈ O_A × O_B × I_A × I_B, if V(a,b|x,y) = 0, then p(a,b|x,y) = 0.

There is a non-local game with a perfect strategy in C_qa, but no perfect strategy in C_qs. In particular, neither C_q nor C_qs is closed.

The proof is constructive, with the game in question having input sets of size 184 and 235, and output sets of size 8 and 2. The set C_q is related to the cone of completely positive-semidefinite (cpsd) matrices defined in <cit.>. An n × n matrix M is said to be cpsd if there are non-negative operators P_1,…,P_n on some finite-dimensional Hilbert space with M_ij = tr(P_i P_j) for all 1 ≤ i,j ≤ n. By a theorem of Sikora and Varvitsiotis <cit.>, the set C_q is an affine slice of the cone of cpsd matrices, so the cone of cpsd matrices is not closed as a consequence of Theorem <ref>. The fact that C_qs ≠ C_qa also has an interesting reformulation. Let G_i be the n-fold free product ℤ_m * ⋯ * ℤ_m, where n = |I_i| and m = |O_i|, for i=A,B. Let M^x_a denote the a-th spectral projector of the x-th factor of G_A in the full group C^*-algebra C^*(G_A) of G_A, and define N^y_b similarly for C^*(G_B). For each i=A,B, find a faithful representation ν_i of C^*(G_i) on some Hilbert space H_i. The minimal (or spatial) tensor product C^*(G_A) ⊗_s C^*(G_B) is the norm-closure of the image ν_A(C^*(G_A)) ⊗ ν_B(C^*(G_B)) in the C^*-algebra B(H_A ⊗ H_B). A correlation matrix {p(a,b|x,y)} belongs to C_qa if and only if there is a state ω on the C^*-algebra C^*(G_A) ⊗_s C^*(G_B) with p(a,b|x,y) = ω(M^x_a ⊗ N^y_b) for all (a,b,x,y) ∈ O_A × O_B × I_A × I_B <cit.>. On the other hand, the correlation matrix belongs to C_qs if and only if there are representations ϕ_i of G_i on H_i, i=A,B, and a vector state |ψ⟩ ∈ H_A ⊗ H_B, with

p(a,b|x,y) = ⟨ψ| ϕ_A(M^x_a) ⊗ ϕ_B(N^y_b) |ψ⟩ for all (a,b,x,y) ∈ O_A × O_B × I_A × I_B.
Since C_qs ≠ C_qa, there can be states on the minimal tensor product C^*(G_A) ⊗_s C^*(G_B) which do not come from vector states on some tensor product ϕ_A ⊗ ϕ_B of representations ϕ_A and ϕ_B. There is another candidate set of quantum correlations, the commuting-operator correlations C_qc, which contains C_qa. Determining whether C_qc is equal to C_t for any t ∈ {q, qs, qa} is known as Tsirelson's problem <cit.>. In a previous paper <cit.>, we showed that C_qs ≠ C_qc. By showing that C_qs ≠ C_qa, we provide another proof of this fact. The proof that C_qs ≠ C_qc in <cit.> uses a universal embedding theorem, which states that every finitely-presented group embeds in the solution group of a linear system game. In this paper, we follow a similar line, proving a restricted embedding theorem for a subclass of finitely-presented groups which we call linear-plus-conjugacy groups. For the proof of this restricted embedding theorem, we use a completely different method from <cit.>, with the result that the proof is much shorter. However, it remains an open problem to prove the universal embedding theorem via the new approach. An easy consequence of the universal embedding theorem is that it is undecidable to determine if a linear system game has a perfect strategy in C_qc. In this paper we prove a stronger result by applying our restricted embedding theorem to Kharlampovich's example <cit.> of a finitely presented solvable group with an undecidable word problem.

There is a (recursive) family of linear system games such that

* it is undecidable to determine if a game in the family has a perfect strategy in C_qa, and
* every game in the family has a perfect strategy in C_qc if and only if it has a perfect strategy in C_qa.

Kharlampovich's construction has been extended by Kharlampovich, Myasnikov, and Sapir to show that the word problem for finitely-presented residually-finite groups can be as hard as any computable function <cit.>.[The word problem for finitely-presented residually-finite groups is always decidable, so this is the best possible lower bound.] Using this extension, we can show:

Let f : ℕ → ℕ be a computable function. Then there is a family of linear system games 𝒢_n, n ∈ ℕ, such that

* the games 𝒢_n have input and output sets of size exp(O(n)), and the function n ↦ 𝒢_n is computable in exp(O(n))-time;
* for any algorithm accepting the language { n ∈ ℕ : 𝒢_n has a perfect strategy in C_q }, the maximum running time over inputs n ≤ N is at least f(N) when N is sufficiently large;
* 𝒢_n has a perfect strategy in C_qc if and only if it has a perfect strategy in C_q.

Theorem <ref> has the following corollary.

It is undecidable to determine if a linear system game has a perfect strategy in C_q.

§.§ Acknowledgements

I thank Jason Crann, Richard Cleve, Tobias Fritz, Li Liu, Martino Lupini, Narutaka Ozawa, Vern Paulsen, Mark Sapir, Jamie Sikora, and Thomas Vidick for helpful comments and conversations.

§ GROUP THEORY PRELIMINARIES

§.§ Group presentations

Given a set S, let F(S) denote the free group generated by S. If H is a group, then homomorphisms F(S) → H can be identified with functions S → H, and we use these two types of objects interchangeably. If R is a subset of F(S), then the quotient of F(S) by the normal subgroup generated by R is denoted by ⟨ S : R ⟩. If G = ⟨ S : R ⟩ and R' ⊂ F(S ∪ S'), then we write ⟨ G, S' : R' ⟩ to mean ⟨ S ∪ S' : R ∪ R' ⟩. A group G is said to be finitely presentable if G = ⟨ S : R ⟩ for some finite sets S and R. A finitely presented group is a tuple (G,S,R), where G = ⟨ S : R ⟩.
In other words, a finitely presented group is a finitely presentable group along with a choice of finite presentation.

§.§ Approximate representations

Let ‖·‖ be the normalized Hilbert-Schmidt norm, i.e. if T is an endomorphism of a finite-dimensional Hilbert space H, then ‖T‖ = √(tr(T^* T)) / √(dim H). Let G = ⟨ S : R ⟩ be a finitely presented group. A finite-dimensional ε-approximate representation (or ε-representation for short) of G is a homomorphism ϕ : F(S) → U(H) from F(S) to the unitary group U(H) of some finite-dimensional Hilbert space H, such that ‖ϕ(r) - 𝟙‖ ≤ ε for all r ∈ R.

Note that the normalized Hilbert-Schmidt norm is invariant under conjugation by unitaries, so the set of ε-representations is independent of the cyclic order of the relations r ∈ R. That means that, for instance, we can write the relation x = y without worrying about whether we mean xy^-1 = e or y^-1 x = e.

There are several different notions of approximate representations in the literature. The notion we are using comes from the study of stable relations of C^*-algebras (see, for instance, Section 4.1 of <cit.>). For the purposes of this paper, we could also use the closely related notion of approximate homomorphisms as in <cit.>. However, Definition <ref> is very convenient for working with examples, as we frequently do in this paper. The main disadvantage of this definition is that it depends on the choice of presentation. We can work around this using the following easy lemma.

Let ψ : G → H be a homomorphism, where G = ⟨ S : R ⟩ and H = ⟨ S' : R' ⟩ are finitely presented groups. If Ψ : F(S) → F(S') is a lift of ψ, then there is a constant C > 0 such that if ϕ is an ε-representation of H, then ϕ ∘ Ψ is a Cε-representation of G.

We record two other simple lemmas for later use.

Let G = ⟨ S : R ⟩, and let M be the length of the longest relation in R. If ϕ is an ε-representation of G, and ψ is an approximate representation of G with ‖ψ(x) - ϕ(x)‖ ≤ δ for all x ∈ S, then ψ is an (Mδ + ε)-representation.

Given approximate representations ϕ : F(S) → U(H) and ψ : F(S) → U(H') of G = ⟨ S : R ⟩, we can form new approximate representations ϕ ⊕ ψ : F(S) → U(H ⊕ H') and ϕ ⊗ ψ : F(S) → U(H ⊗ H'). Suppose ϕ and ψ are ε- and ε'-representations of G respectively. Then ϕ ⊕ ψ is a max(ε,ε')-representation, and ϕ ⊗ ψ is an (ε + ε')-representation.

A group G is said to be residually finite-dimensional if every non-trivial element of G is non-trivial in some finite-dimensional representation. More generally, the set of elements which are trivial in finite-dimensional representations forms a normal subgroup of G. We let G^fin denote the quotient of G by this normal subgroup (alternatively, G^fin is the image of G in its profinite completion). Any homomorphism ϕ : G → H descends to a homomorphism G^fin → H^fin. A homomorphism ϕ : G → H is a fin-embedding if the induced map G^fin → H^fin is injective, and a fin^*-embedding if ϕ is both injective and a fin-embedding. Equivalently, ϕ is a fin-embedding if ϕ(g) is non-trivial in finite-dimensional representations whenever g ∈ G is non-trivial in finite-dimensional representations. We can similarly look at elements which are non-trivial in approximate representations:

Let G be a finitely presentable group.
An element g ∈ G is non-trivial in (finite-dimensional) approximate representations if there is a finite presentation G = ⟨ S : R ⟩, a representative w ∈ F(S) for g, and some constant δ > 0 such that, for all ε > 0, there is an ε-representation ϕ of G with ‖ϕ(w) - 𝟙‖ > δ.

Alternatively, if g ∈ G = ⟨ S : R ⟩, let

ℓ^fa(g) := lim_ε → 0^+ sup_ϕ ‖ϕ(w) - 𝟙‖,

where w is a representative for g, and the supremum is across ε-representations ϕ of G. It is easy to see that the right-hand side is independent of the choice of representative w. By Lemma <ref>, if ψ : G → H is a homomorphism, then ℓ^fa(g) ≥ ℓ^fa(ψ(g)). Consequently, ℓ^fa(g) is independent of the chosen presentation ⟨ S : R ⟩, and g is non-trivial in approximate representations if and only if ℓ^fa(g) > 0. This makes it apparent that the choice of presentation ⟨ S : R ⟩ and representative w in Definition <ref> is arbitrary. Standard amplification arguments show that the constant δ in Definition <ref> is also somewhat arbitrary; in fact, ℓ^fa(g) never takes values in (0,√(2)). The same amplification arguments can be used to show that a finitely-presented group G is hyperlinear if and only if every non-trivial element of G is non-trivial in approximate representations, and this can be used as the definition of hyperlinearity for finitely-presented groups. We refer to Section II.2 of <cit.> for the standard definition of hyperlinearity, along with the amplification arguments needed to prove the equivalence. Clearly ℓ^fa(g) ≥ 0 for all g ∈ G, and it is easy to see that ℓ^fa(gh) ≤ ℓ^fa(g) + ℓ^fa(h) and ℓ^fa(hgh^-1) = ℓ^fa(g) for all g,h ∈ G. Thus the set of elements of G which are trivial in approximate representations (i.e. for which ℓ^fa(g) = 0) forms a normal subgroup of G. Let G^fa be the quotient of G by this normal subgroup. Because ℓ^fa is decreasing via homomorphisms, any homomorphism ϕ : G → H between finitely presentable groups descends to a homomorphism G^fa → H^fa. A homomorphism ϕ : G → H is an fa-embedding if the induced map G^fa → H^fa is injective, and an fa^*-embedding if ϕ is injective, a fin-embedding, and an fa-embedding. Equivalently, ϕ is an fa-embedding if ϕ(g) is non-trivial in approximate representations whenever g ∈ G is non-trivial in approximate representations. If ϕ and ψ are approximate representations, then we say that ϕ is a direct summand of ψ if ψ = ϕ ⊕ ϕ' for some other approximate representation ϕ'. We use the following simple trick to construct fa^*-embeddings.

Let G = ⟨ S : R ⟩ and H = ⟨ S' : R' ⟩ be two finitely presented groups, and let Ψ : F(S) → F(S') be a lift of a homomorphism ψ : G → H.

* Suppose that for every representation (resp. finite-dimensional representation) ϕ of G, there is a representation (resp. finite-dimensional representation) γ of H such that ϕ is a direct summand of γ ∘ ψ. Then ψ is injective (resp. a fin-embedding).
* Suppose that there is an integer N > 0 and a real number C > 0 such that for every d-dimensional ε-representation ϕ of G, where ε > 0, there is an Nd-dimensional Cε-representation γ of H such that ϕ is a direct summand of γ ∘ Ψ. Then ψ is an fa-embedding.

Part (a) is clear, so we prove (b). Suppose ϕ is an ε-representation of G, where ε > 0. If γ ∘ Ψ = ϕ ⊕ ϕ', where ϕ is d-dimensional and ϕ' is (N-1)d-dimensional, then

‖γ(Ψ(w)) - 𝟙‖ = ‖ϕ(w) ⊕ ϕ'(w) - 𝟙‖ ≥ (1/√(N)) ‖ϕ(w) - 𝟙‖

for all w ∈ F(S). So ℓ^fa(ψ(g)) ≥ ℓ^fa(g)/√(N), and ψ is an fa-embedding.

In our applications it will be possible to check parts (a) and (b) of Lemma <ref> simultaneously, in which case ψ will be an fa^*-embedding.
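To make Definition <ref> concrete, here is a small numerical illustration of ours: an ε-representation of ℤ_2 × ℤ_2 = ⟨ x, y : x^2 = y^2 = e, [x,y] = e ⟩ built from two planar reflections at a small angle, with ε measured as the largest normalized Hilbert-Schmidt defect among the defining relations.

```python
import numpy as np

def hs_norm(T):
    """Normalized Hilbert-Schmidt norm: sqrt(tr(T* T)) / sqrt(dim H)."""
    return np.sqrt(np.trace(T.conj().T @ T).real / T.shape[0])

def reflection(angle):
    """Reflection of R^2 across the line at the given angle."""
    c, s = np.cos(2 * angle), np.sin(2 * angle)
    return np.array([[c, s], [s, -c]])

theta = 0.05
X, Y = reflection(0.0), reflection(theta)   # images of x and y
I = np.eye(2)

relations = [X @ X - I,                     # x^2 = e (holds exactly)
             Y @ Y - I,                     # y^2 = e (holds exactly)
             X @ Y @ X @ Y - I]             # [x,y] = e (x, y involutions)
eps = max(hs_norm(r) for r in relations)
print(eps)   # ~4*theta; this is an eps-representation, and eps -> 0 as theta -> 0
```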
§.§ Groups over ℤ_2

For convenience, we use the following definition from <cit.>: A group over ℤ_2 is a pair (G,J), where J is a central element of G of order two. Note that J is allowed to be the identity element. Typically we drop the pair notation, and just use the symbol J (or J_G where necessary) to refer to the special element of a group G over ℤ_2, in the same way that we use e to refer to the identity element. If G and H are groups over ℤ_2, then a morphism G → H over ℤ_2 is a group homomorphism G → H which sends J_G ↦ J_H. If a group G over ℤ_2 is finitely presentable, then it has a finite presentation ⟨ S : R ⟩ where J ∈ S, and R includes the relations J^2 = e and [J,s] = e for every s ∈ S ∖ {J}. We use presentations of this form often enough that it is helpful to have some notation for them.

Suppose that S_0 is a set of indeterminates, and R_0 ⊂ F(S_0 ∪ {J}). Then we set

⟨ S_0 : R_0 ⟩_ℤ_2 := ⟨ S_0 ∪ {J} : R_0 ∪ {[J,s] = e : s ∈ S_0} ∪ {J^2 = e} ⟩,

and call ⟨ S_0 : R_0 ⟩_ℤ_2 a presentation over ℤ_2. As with ordinary presentations, if G = ⟨ S : R ⟩ or ⟨ S : R ⟩_ℤ_2, then ⟨ G, S' : R' ⟩_ℤ_2 := ⟨ S ∪ S' : R ∪ R' ⟩_ℤ_2.

§ LINEAR SYSTEM GAMES AND SOLUTION GROUPS

Let Ax=b be an m × n linear system over ℤ_2. To the system Ax=b, we can associate a non-local game, called a linear system game, as follows. For each 1 ≤ i ≤ m, let V_i = {j : A_ij ≠ 0} be the set of indices of variables appearing in the ith equation. Let S_i ⊂ ℤ_2^V_i be the set of assignments to the variables x_j, j ∈ V_i, satisfying the ith equation, i.e. a ∈ ℤ_2^V_i belongs to S_i if and only if ∑_j ∈ V_i a_j = b_i. Then Alice receives an equation as input, represented by an integer 1 ≤ i ≤ m, and must output an element a ∈ S_i. Bob receives a variable, represented by an integer 1 ≤ j ≤ n, and must output an assignment b for x_j. The players win if either j ∉ V_i, or j ∈ V_i and a_j = b, i.e. Alice's and Bob's outputs are consistent.

A quantum strategy (presented in terms of measurements) for a linear system game consists of

* a pair of Hilbert spaces H_A and H_B,
* a projective measurement {N^j_b}_b ∈ ℤ_2 on H_B for every integer 1 ≤ j ≤ n,
* a projective measurement {M^i_a}_a ∈ S_i on H_A for every integer 1 ≤ i ≤ m, and
* a quantum state |ψ⟩ ∈ H_A ⊗ H_B.

The strategy is finite-dimensional if H_A and H_B are finite-dimensional. The associated quantum correlation matrix {p(a,b|i,j)} is defined by

p(a,b|i,j) = ⟨ψ| M^i_a ⊗ N^j_b |ψ⟩, 1 ≤ i ≤ m, 1 ≤ j ≤ n, a ∈ S_i, b ∈ ℤ_2.

As in the introduction, we also use the term strategy to refer to the correlation matrix {p(a,b|i,j)}. If j ∈ V_i, then the probability that Alice and Bob win on inputs i and j is p_ij := ∑_a,b : a_j = b p(a,b|i,j). A strategy is perfect if and only if p_ij = 1 for all 1 ≤ i ≤ m and j ∈ V_i. For linear system games, it is often convenient to work with strategies presented in terms of ±1-valued observables (self-adjoint operators which square to the identity) rather than measurement operators.

A quantum strategy (presented in terms of observables) consists of

* a pair of Hilbert spaces H_A and H_B;
* a collection of self-adjoint operators X_j, 1 ≤ j ≤ n, on H_B such that X_j^2 = 𝟙 for every 1 ≤ j ≤ n;
* a collection of self-adjoint operators Y_ij, 1 ≤ i ≤ m, j ∈ V_i, on H_A such that
  * Y_ij^2 = 𝟙 for every 1 ≤ i ≤ m and j ∈ V_i,
  * ∏_j ∈ V_i Y_ij = (-1)^b_i 𝟙 for every 1 ≤ i ≤ m, and
  * Y_ij Y_il = Y_il Y_ij for every 1 ≤ i ≤ m and j,l ∈ V_i; and
* a quantum state |ψ⟩ ∈ H_A ⊗ H_B.
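For concreteness, the following sketch (ours, Python/NumPy) verifies the operator solution behind the best-known example of such a strategy: the Mermin-Peres magic square, which reappears in a remark below. The nine two-qubit observables square to 𝟙, commute within every row and column, and have row products +𝟙 and column products -𝟙; taking the operator assigned to variable j as Y_ij for every context i containing j, they furnish the observables of a perfect strategy (with a maximally entangled state), even though the underlying 3 × 3 system (rows summing to 0, columns to 1) has no solution over ℤ_2.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Y = 1j * X @ Z                           # Pauli Y
kron = np.kron

# 3x3 grid of +-1-valued observables on C^2 (x) C^2
square = [[ kron(I2, Z),  kron(Z, I2), kron(Z, Z)],
          [ kron(X, I2),  kron(I2, X), kron(X, X)],
          [-kron(X, Z),  -kron(Z, X),  kron(Y, Y)]]

I4 = np.eye(4, dtype=complex)

def check(triple, sign):
    A, B, C = triple
    for P, Q in ((A, B), (A, C), (B, C)):
        assert np.allclose(P @ Q, Q @ P)       # commuting context
    for P in (A, B, C):
        assert np.allclose(P @ P, I4)          # involutions
    assert np.allclose(A @ B @ C, sign * I4)   # linear constraint

for r in range(3):
    check(square[r], +1)                           # rows: b_i = 0
for c in range(3):
    check([square[r][c] for r in range(3)], -1)    # columns: b_i = 1
print("magic square operator solution verified")
```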
Given a quantum strategy presented in terms of measurements, we can get a quantum strategy presented in terms of observables by setting X_j = N^j_0 - N^j_1 for every 1 ≤ j ≤ n, and

Y_ij = ∑_a ∈ S_i (-1)^a_j M^i_a

for 1 ≤ i ≤ m and j ∈ V_i. Conversely, given a quantum strategy in terms of observables, we can recover the measurement presentation using the spectral decomposition of the observables. So the two notions of strategy are equivalent. Note that if j ∈ V_i, then

⟨ψ| Y_ij ⊗ X_j |ψ⟩ = ⟨ψ| ∑_a ∈ S_i (-1)^a_j M^i_a ⊗ ∑_b ∈ ℤ_2 (-1)^b N^j_b |ψ⟩ = ∑_a ∈ S_i, b ∈ ℤ_2 (-1)^a_j + b p(a,b|i,j) = 2 [ ∑_a,b : a_j = b p(a,b|i,j) ] - 1 = 2p_ij - 1,

where p_ij is, again, the probability that Alice and Bob win on inputs i and j. The quantity 2p_ij - 1 is called the winning bias on inputs i and j. To every linear system, we can also associate a finitely presented group over ℤ_2, as follows.

Let Ax=b be an m × n linear system. The solution group of this system is the group

Γ(A,b) := ⟨ x_1,…,x_n : x_j^2 = e for all 1 ≤ j ≤ n, ∏_j=1^n x_j^A_ij = J^b_i for all 1 ≤ i ≤ m, and x_j x_k = x_k x_j if j,k ∈ V_i for some 1 ≤ i ≤ m ⟩_ℤ_2.

We say that a group over ℤ_2 is a solution group if it has a presentation over ℤ_2 of this form. Solution groups and linear system games are related as follows.

Let 𝒢 be the linear system game associated to a system Ax=b. Then the following are equivalent:

* 𝒢 has a perfect strategy in C_qs.
* 𝒢 has a perfect strategy in C_q.
* J_Γ is non-trivial in some finite-dimensional representation of Γ = Γ(A,b).

Although we haven't defined the set of commuting-operator correlations C_qc, we can work with C_qc through the following result.

The linear system game associated to a system Ax=b has a perfect strategy in C_qc if and only if J_Γ is non-trivial in Γ = Γ(A,b).

The main point of this section is to prove an analog of one direction of Theorem <ref> for approximate representations.

Let Γ = Γ(A,b) be a solution group. If J_Γ is non-trivial in finite-dimensional approximate representations of Γ, then the linear system game associated to Ax=b has a perfect strategy in C_qa.

The proof of Proposition <ref> is a straightforward application of a number of easy stability lemmas. We start by pinning down what we want to prove.

The linear system game associated to Ax=b has a perfect strategy in C_qa if and only if, for all ε > 0, there is a finite-dimensional quantum strategy (presented in terms of observables) {Y_ij}, {X_j}, |ψ⟩ such that

⟨ψ| Y_ij ⊗ X_j |ψ⟩ ≥ 1 - ε for all 1 ≤ i ≤ m, j ∈ V_i.

Since C_qa is the closure of C_q, the linear system game associated to Ax=b has a perfect strategy in C_qa if and only if, for every ε > 0, there is a finite-dimensional quantum strategy such that the winning probability p_ij ≥ 1 - ε/2 for every 1 ≤ i ≤ m and j ∈ V_i. But p_ij ≥ 1 - ε/2 if and only if the winning bias 2p_ij - 1 ≥ 1 - ε, so the lemma follows from equation (<ref>).

Next, we come to the stability lemmas, which will allow us to turn approximate representations of the solution group Γ into quantum strategies. The following lemmas are all likely well-known to experts (see, for instance, <cit.>); we include the proofs for completeness.

For any diagonal matrix X, there is a diagonal matrix D with D^2 = 𝟙 and ‖D - X‖ ≤ (1 + 1/√(2)) ‖X^2 - 𝟙‖.

Suppose X is a d × d matrix, and let D_ii = sgn(Re X_ii) for all 1 ≤ i ≤ d, where sgn(x) = 1 if x ≥ 0 and -1 if x < 0. To show that the desired inequality holds, consider a complex number α = a + bi.
Then

|α^2 - 1|^2 = |a^2 - b^2 - 1 + 2abi|^2 = [(a^2-1) - b^2]^2 + 4a^2b^2 = (a^2-1)^2 + 2b^2 + 2a^2b^2 + b^4.

In particular, this implies that |α^2 - 1|^2 is greater than or equal to (a^2-1)^2 and 2b^2. Consequently,

‖(Re X)^2 - 𝟙‖ = √((1/d) ∑_j [(Re X_jj)^2 - 1]^2) ≤ √((1/d) ∑_j |X_jj^2 - 1|^2) = ‖X^2 - 𝟙‖,

and

‖X - Re X‖ = ‖Im X‖ = √((1/d) ∑_j |Im X_jj|^2) ≤ √((1/2d) ∑_j |X_jj^2 - 1|^2) = (1/√(2)) ‖X^2 - 𝟙‖.

By considering the cases a ≥ 0 and a < 0 separately, we see that

|a^2 - 1| = |1+a| |1-a| = (1 + |a|) |sgn(a) - a| ≥ |sgn(a) - a| for all a ∈ ℝ.

Thus, as above, ‖D - Re X‖ ≤ ‖(Re X)^2 - 𝟙‖, and the lemma follows.

Suppose X_1,…,X_n are commuting unitary matrices, with X_i^2 = 𝟙 for all 1 ≤ i ≤ n, and Y is a unitary matrix such that Y^2 = 𝟙 and Y commutes with X_i for all 1 ≤ i ≤ n-1. Then there is a unitary matrix Z such that Z^2 = 𝟙, Z commutes with X_i for all 1 ≤ i ≤ n, and

‖Z - Y‖ ≤ (1 + 1/(2√(2))) ‖X_n Y - Y X_n‖.

Let Z_0 = (Y + X_n Y X_n)/2. Clearly Z_0 commutes with X_i for all 1 ≤ i ≤ n-1. Since X_n^2 = 𝟙, we also have that X_n Z_0 = (X_n Y + Y X_n)/2 = Z_0 X_n. Since Y^2 = 𝟙 = (X_n Y X_n)^2 as well, we have that

‖Z_0^2 - 𝟙‖ = (1/4) ‖Y X_n Y X_n + X_n Y X_n Y - 2·𝟙‖ ≤ (1/4) ‖Y X_n Y X_n - 𝟙‖ + (1/4) ‖X_n Y X_n Y - 𝟙‖ = (1/2) ‖X_n Y - Y X_n‖.

Since X_n and Y are self-adjoint, Z_0 is self-adjoint, so we can simultaneously diagonalize X_1,…,X_n and Z_0. Hence by Lemma <ref>, there is a matrix Z such that Z^2 = 𝟙, Z commutes with X_i for all 1 ≤ i ≤ n, and

‖Z - Z_0‖ ≤ (1 + 1/√(2)) ‖Z_0^2 - 𝟙‖ ≤ (1/2 + 1/(2√(2))) ‖X_n Y - Y X_n‖.

Finally,

‖Y - Z_0‖ = (1/2) ‖Y - X_n Y X_n‖ = (1/2) ‖X_n Y - Y X_n‖,

so the lemma follows.

Consider ℤ_2^k as a finitely-presented group with presentation

⟨ x_1,…,x_k : x_i^2 = e, [x_i,x_j] = e for all i ≠ j ⟩.

Then there is a constant C > 0, depending on k, such that if ϕ is an ε-representation of ℤ_2^k on a Hilbert space H, then there is a representation ψ of ℤ_2^k on H with ‖ψ(x_i) - ϕ(x_i)‖ ≤ Cε for all 1 ≤ i ≤ k.

Suppose ψ is an ε-representation of ℤ_2^k such that the following properties hold for some 1 ≤ l ≤ k-1:

* ψ(x_i)^2 = 𝟙 for all 1 ≤ i ≤ k, and
* ψ(x_i) commutes with ψ(x_j) for all 1 ≤ i ≤ l-1 and 1 ≤ j ≤ k.

In particular, property (b) requires that ψ(x_1),…,ψ(x_l) pairwise commute. Then by Lemma <ref>, for each l < j ≤ k there is a unitary matrix X_j such that X_j^2 = 𝟙, X_j commutes with ψ(x_i) for all 1 ≤ i ≤ l, and

‖X_j - ψ(x_j)‖ ≤ C_0 ‖ψ(x_l)ψ(x_j) - ψ(x_j)ψ(x_l)‖ ≤ C_0 ε,

where C_0 = 1 + 1/(2√(2)). Define an approximate representation ψ' of ℤ_2^k by ψ'(x_i) = ψ(x_i) if i ≤ l and ψ'(x_i) = X_i if i > l. Then ψ'(x_i)^2 = 𝟙 for all 1 ≤ i ≤ k, and ψ'(x_i) commutes with ψ'(x_j) for all 1 ≤ i ≤ l and 1 ≤ j ≤ k. In other words, ψ' satisfies properties (a) and (b) with l replaced by l+1. Finally, ‖ψ'(x_i) - ψ(x_i)‖ ≤ C_0 ε for all 1 ≤ i ≤ k, so ψ' is a (4C_0+1)ε-representation by Lemma <ref>.

Now suppose that ϕ is any ε-representation of ℤ_2^k. By Lemma <ref>, there is an approximate representation ψ_1 of ℤ_2^k with ψ_1(x_i)^2 = 𝟙 and ‖ψ_1(x_i) - ϕ(x_i)‖ ≤ C_1 ε for all 1 ≤ i ≤ k, where C_1 = 1 + 1/√(2). By Lemma <ref>, ψ_1 is a (4C_1+1)ε-representation. Clearly, ψ_1 satisfies conditions (a) and (b) with l=1. Using the argument in the previous paragraph, we can then iteratively define approximate representations ψ_2,…,ψ_k-1, where ψ_j satisfies conditions (a) and (b) with l=j for all 1 ≤ j ≤ k-1. Let ε_l = (4C_0+1)^l-1 (4C_1+1)ε, so ψ_1 is an ε_1-representation. It is not hard to check that ψ_l is an ε_l-representation, and furthermore that

‖ψ_l(x_i) - ψ_1(x_i)‖ ≤ (1/4) ((4C_0+1)^l-1 - 1) ε_1 = (1/4) ((4C_0+1)^l-1 - 1)(4C_1+1)ε

for all 1 ≤ i ≤ k. Since ψ_k-1 is an exact representation, we can take C = (1/4) ((4C_0+1)^k-2 - 1)(4C_1+1) + C_1.
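The diagonal-rounding lemma above is easy to test numerically; the following sketch (ours) perturbs a random ±1 diagonal matrix and checks the bound ‖D - X‖ ≤ (1 + 1/√(2)) ‖X^2 - 𝟙‖ with D_ii = sgn(Re X_ii).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50

def hs_norm(T):
    """Normalized Hilbert-Schmidt norm on d x d matrices."""
    return np.sqrt(np.trace(T.conj().T @ T).real / d)

# diagonal matrix with entries near +-1 (small complex perturbation)
signs = rng.choice([1.0, -1.0], size=d)
x = signs + 0.1 * (rng.standard_normal(d) + 1j * rng.standard_normal(d))
X = np.diag(x)

# the rounding from the proof: D_ii = sgn(Re X_ii)
D = np.diag(np.where(x.real >= 0, 1.0, -1.0).astype(complex))

lhs = hs_norm(D - X)
rhs = (1 + 1 / np.sqrt(2)) * hs_norm(X @ X - np.eye(d))
assert lhs <= rhs
print(f"{lhs:.4f} <= {rhs:.4f}")
```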
Suppose G = ⟨ S_0 : R_0 ⟩_ℤ_2, where R_0 includes the relations s^2 = e for all s ∈ S_0. If J_G is non-trivial in finite-dimensional approximate representations of G, then for every ε > 0 there is an ε-representation ϕ of G such that ϕ(J) = -𝟙, and ϕ(s)^2 = 𝟙 for all s ∈ S_0.

Let S = S_0 ∪ {J}. If J is non-trivial in approximate representations, then there is a δ > 0 such that for all ε > 0, there is an ε-representation ϕ with ‖ϕ(J) - 𝟙‖ > δ. By Lemmas <ref>, <ref>, and <ref>, there are constants C, C' > 0 such that if ϕ is an ε-representation, then there is a C'ε-representation ψ such that

* ψ(x)^2 = 𝟙 for all x ∈ S,
* ψ(s) and ψ(J) commute for all s ∈ S_0, and
* ‖ψ(J) - ϕ(J)‖ ≤ Cε.

(We can take C = 1 + 1/√(2), while C' will depend on the length of the longest defining relation of G.) If ‖ϕ(J) - 𝟙‖ > δ and ε < δ/(2C), then

δ < ‖ϕ(J) - 𝟙‖ ≤ ‖ϕ(J) - ψ(J)‖ + ‖ψ(J) - 𝟙‖ ≤ δ/2 + ‖ψ(J) - 𝟙‖,

so ‖ψ(J) - 𝟙‖ ≥ δ/2. Thus we conclude that for all ε > 0, there is an ε-representation ψ satisfying conditions (1) and (2), and with ‖ψ(J) - 𝟙‖ > δ/2.

Suppose ψ is an ε-representation satisfying conditions (1) and (2), and with ‖ψ(J) - 𝟙‖ > δ/2. Choose a basis with ψ(J) = 𝟙_d_0 ⊕ (-𝟙_d_1). Since ψ(s) commutes with ψ(J) for all s ∈ S_0, we must have ψ = ψ_0 ⊕ ψ_1, where ψ_a is an approximate representation of dimension d_a, and ψ_a(J) = (-1)^a 𝟙, a = 0,1. Since ψ(s)^2 = 𝟙, we also have ψ_a(s)^2 = 𝟙 for all s ∈ S_0, a = 0,1. To finish the proof, we just need to show that ψ_1 is a C''ε-representation for some constant C'' independent of ψ.

If w ∈ F(S), then

‖ψ(w) - 𝟙‖^2 = (d_0/(d_0+d_1)) ‖ψ_0(w) - 𝟙‖^2 + (d_1/(d_0+d_1)) ‖ψ_1(w) - 𝟙‖^2.

If w = J, then ‖ψ_0(w) - 𝟙‖^2 = 0 and ‖ψ_1(w) - 𝟙‖^2 = ‖-2·𝟙‖^2 = 4, so we conclude that

δ^2/4 < ‖ψ(J) - 𝟙‖^2 = 4 d_1/(d_0+d_1),

so d_1/(d_0+d_1) > δ^2/16. On the other hand, if w = r is one of the defining relations of G, then

ε^2 ≥ ‖ψ(r) - 𝟙‖^2 ≥ (d_1/(d_0+d_1)) ‖ψ_1(r) - 𝟙‖^2 > (δ^2/16) ‖ψ_1(r) - 𝟙‖^2.

Thus ψ_1 is a (4ε/δ)-representation with ψ_1(J) = -𝟙 and ψ_1(s)^2 = 𝟙 for all s ∈ S_0. Since δ is a constant, the lemma follows.

Suppose J is non-trivial in finite-dimensional approximate representations of Γ. Given ε > 0, let ϕ be an ε-representation of Γ with ϕ(J) = -𝟙 and ϕ(x_j)^2 = 𝟙 for all 1 ≤ j ≤ n, as in Lemma <ref>. Suppose ϕ has dimension d, and let |v⟩ be the maximally entangled state on ℂ^d ⊗ ℂ^d. For each 1 ≤ j ≤ n, set X_j = ϕ(x_j). For each 1 ≤ i ≤ m, let j_i be the maximal element of V_i, and set W_i := V_i ∖ {j_i}. The restriction of ϕ to the subgroup ⟨ x_j : j ∈ W_i ⟩ is an ε-representation of ℤ_2^W_i, and by Lemma <ref>, there is a representation ψ_i of ℤ_2^W_i with ‖ψ_i(x_j) - ϕ(x_j)‖ ≤ O(ε).[For this proof, we use the notation O(ε) to hide constants which are independent of ε, ϕ, and so on. The constants can still depend on the linear system Ax=b, however.] Set Y_ij := ψ_i(x_j)^T (the transpose of ψ_i(x_j) in a Schmidt basis for |v⟩) for all j ∈ W_i, and set Y_ij_i := (-1)^b_i ∏_j ∈ W_i Y_ij.

Suppose j ∈ W_i for some 1 ≤ i ≤ m. Since Y_ij and X_j are self-adjoint, we have that

2 - (2/d) tr(Y_ij^T X_j) = ‖Y_ij^T - X_j‖^2 = ‖ψ_i(x_j) - ϕ(x_j)‖^2 ≤ O(ε^2),

so (1/d) tr(Y_ij^T X_j) ≥ 1 - O(ε^2). For the remaining variable in V_i, we have that

‖Y_ij_i^T - X_j_i‖ = ‖(-1)^b_i ∏_j ∈ W_i ψ_i(x_j) - ϕ(x_j_i)‖ ≤ ‖(-1)^b_i ∏_j ∈ W_i ϕ(x_j) - ϕ(x_j_i)‖ + |W_i| · O(ε) = ‖(-1)^b_i ∏_j ∈ V_i ϕ(x_j) - 𝟙‖ + |W_i| · O(ε) ≤ O(ε),

where the last equality uses the fact that ϕ(x_j_i)^2 = 𝟙. Because the Y_ij's commute for all j ∈ W_i, Y_ij_i is also self-adjoint, so once again we conclude that

2 - (2/d) tr(Y_ij_i^T X_j_i) = ‖Y_ij_i^T - X_j_i‖^2 ≤ O(ε^2),

or in other words that (1/d) tr(Y_ij_i^T X_j_i) ≥ 1 - O(ε^2). Now clearly {Y_ij}, {X_j}, |v⟩ is a strategy for the linear system game associated to Ax=b.
If A and B are any two d × d matrices, it follows from the definition of maximally entangled states that

⟨v| A ⊗ B |v⟩ = (1/d) tr(A^T B).

We conclude that ⟨v| Y_ij ⊗ X_j |v⟩ = (1/d) tr(Y_ij^T X_j) ≥ 1 - O(ε^2) for all j ∈ V_i, 1 ≤ i ≤ m. The proposition follows from Lemma <ref>.

§ LINEAR-PLUS-CONJUGACY GROUPS

The goal of the next two sections is to show that there is a solution group Γ such that J_Γ is trivial in finite-dimensional representations, but non-trivial in approximate representations. In this section, we start by showing that it suffices to construct more general types of groups with these properties. Given an m × n linear system Ax=b, we once again let V_i = V_i(A) := {1 ≤ j ≤ n : A_ij ≠ 0}.

Suppose Ax=b is an m × n linear system over ℤ_2, and 𝒞 ⊆ [n] × [n] × [n], where [n] = {1,…,n}. Let

Γ(A,b,𝒞) := ⟨ Γ(A,b) : x_i x_j x_i = x_k for all (i,j,k) ∈ 𝒞 ⟩_ℤ_2.

Lacking a better term, we say that a group over ℤ_2 is a linear-plus-conjugacy group if it has a presentation over ℤ_2 of this form.

The conjugacy part of the name comes from the fact that since x_i is an involution, the relation x_i x_j x_i = x_k is equivalent to the relation x_i x_j x_i^-1 = x_k, so Γ(A,b,𝒞) can be thought of as a solution group with additional conjugacy relations. In the context of linear-plus-conjugacy and related groups, we use the term conjugacy relations as a convenient shorthand for relations of the form xyx = z. We also use the term linear relation x_1 ⋯ x_n = e to refer to the set of relations {x_1 ⋯ x_n = e} ∪ {[x_i,x_j] = e : 1 ≤ i ≠ j ≤ n}. Finally, observe that there are two ways to make generators x_i and x_j commute in a linear-plus-conjugacy group: we can add a conjugacy relation x_i x_j x_i = x_j, or add an additional generator x_n+1 and a linear relation x_i x_j x_n+1 = e. We pick and choose from these two methods based on what is convenient. The main point of this section is to prove:

Let G be a linear-plus-conjugacy group. Then there is an fa^*-embedding G → Γ over ℤ_2, where Γ is a solution group.

We prove Proposition <ref> by first showing that linear-plus-conjugacy groups can be embedded in linear-plus-conjugacy groups of a certain form.

A linear-plus-conjugacy group is nice if it has a presentation of the form Γ(A,b,𝒞), where A is an m × n matrix over ℤ_2, b ∈ ℤ_2^m, and 𝒞 ⊆ [n] × [n] × [n] is such that if (i,j,k) ∈ 𝒞, then j,k ∈ V_l for some 1 ≤ l ≤ m.

This means that if x_i x_j x_i = x_k is a defining relation of a nice linear-plus-conjugacy group, then x_j x_k = x_k x_j will also be a defining relation.

Let G be a linear-plus-conjugacy group. Then there is an fa^*-embedding G → K over ℤ_2, where K is a nice linear-plus-conjugacy group.

Suppose G = Γ(A,b,𝒞), where A is an m × n matrix. Let

K := ⟨ Γ(A,b), w_j, y_j, z_j for 1 ≤ j ≤ n and f : f^2 = e, y_j^2 = z_j^2 = w_j^2 = e for all 1 ≤ j ≤ n, x_j = y_j z_j = f w_j and f y_j f = z_j for all 1 ≤ j ≤ n, y_j z_k = z_k y_j for all (i,j,k) ∈ 𝒞, and w_i y_j w_i = z_k for all (i,j,k) ∈ 𝒞 ⟩_ℤ_2.

Since the generators are involutions, note that the relations imply that f w_j = w_j f, y_j z_j = z_j y_j, and f z_j f = y_j for all 1 ≤ j ≤ n. If (i,j,k) ∈ 𝒞, then

w_i z_j w_i = w_i f y_j f w_i = f w_i y_j w_i f = f z_k f = y_k,

so

x_i x_j x_i = f w_i y_j z_j f w_i = (f w_i y_j w_i f)(f w_i z_j w_i f) = (f z_k f)(f y_k f) = y_k z_k = x_k

in K. Thus there is a homomorphism ψ : G → K sending x_i ↦ x_i. Suppose ϕ is an ε-representation of G, where ε > 0.
Define an approximate representation γ of K by

γ(x_i) = [ ϕ(x_i) 0 ; 0 ϕ(x_i) ], γ(J) = [ ϕ(J) 0 ; 0 ϕ(J) ], γ(y_i) = [ ϕ(x_i) 0 ; 0 𝟙 ], γ(z_i) = [ 𝟙 0 ; 0 ϕ(x_i) ], γ(w_i) = [ 0 ϕ(x_i) ; ϕ(x_i) 0 ], and γ(f) = [ 0 𝟙 ; 𝟙 0 ].

It is straightforward to check that γ is an ε-representation of K. If Ψ is the lift of ψ sending x_i ↦ x_i, then γ ∘ Ψ = ϕ ⊕ ϕ. When ϕ is an exact representation of dimension d (possibly infinite), the same construction gives an exact representation γ of dimension 2d. By Lemma <ref>, ψ is an fa^*-embedding.

Finally, we observe that K is a nice linear-plus-conjugacy group. Indeed, since the relation x_i = y_i z_i forces y_i and z_i to commute, this relation is equivalent to the relations

x_i y_i z_i = e = [x_i,y_i] = [x_i,z_i] = [y_i,z_i],

which means that we can make x_i = y_i z_i, and similarly x_i = f w_i, part of the "linear" relations. By adding ancilla variables g_jk, the commuting relations y_j z_k = z_k y_j can also be replaced with equivalent linear relations g_jk y_j z_k = e. The conjugacy relations f y_j f = z_j and w_i y_j w_i = z_k will then satisfy the requirements of Definition <ref>.

By Lemma <ref>, we can assume that G is a nice linear-plus-conjugacy group. Let G = Γ(A,b,𝒞) be a presentation satisfying the conditions of Definition <ref>. Augment the linear system Ax=b by adding additional variables y_Ij for each I ∈ 𝒞 and 1 ≤ j ≤ 7, and additional relations

x_i + y_I1 + y_I2 = 0, x_j + y_I2 + y_I3 = 0, y_I3 + y_I4 + y_I5 = 0, x_i + y_I5 + y_I6 = 0, x_k + y_I6 + y_I7 = 0, y_I1 + y_I4 + y_I7 = 0

for every I = (i,j,k) ∈ 𝒞. Let Γ be the solution group of this augmented linear system, so

Γ = ⟨ Γ(A,b), y_Ij for I ∈ 𝒞, 1 ≤ j ≤ 7 : R ⟩_ℤ_2,

where R consists of the new relations (now written in multiplicative form)

x_i y_I1 y_I2 = x_j y_I2 y_I3 = y_I3 y_I4 y_I5 = x_i y_I5 y_I6 = x_k y_I6 y_I7 = y_I1 y_I4 y_I7 = e

for every I = (i,j,k) ∈ 𝒞, as well as the corresponding commutation relations. In Γ, we have that

x_i x_j x_i = (y_I1 y_I2)(y_I2 y_I3)(y_I5 y_I6) = y_I1 (y_I3 y_I5) y_I6 = y_I1 y_I4 y_I6 = y_I7 y_I6 = x_k

for every I = (i,j,k) ∈ 𝒞. So once again we get a homomorphism ψ : G → Γ sending x_i ↦ x_i. Suppose ϕ is an ε-representation of G. Define an approximate representation γ of Γ by

γ(x_i) = [ ϕ(x_i) 0 ; 0 ϕ(x_i) ], γ(y_I1) = [ 0 ϕ(x_i) ; ϕ(x_i) 0 ], γ(y_I2) = [ 0 𝟙 ; 𝟙 0 ], γ(y_I3) = [ 0 ϕ(x_j) ; ϕ(x_j) 0 ], γ(y_I4) = [ 0 ϕ(x_j x_i) ; ϕ(x_i x_j) 0 ], γ(y_I5) = [ ϕ(x_j x_i x_j) 0 ; 0 ϕ(x_i) ], γ(y_I6) = [ ϕ(x_j x_k) 0 ; 0 𝟙 ], and γ(y_I7) = [ ϕ(x_j) 0 ; 0 ϕ(x_k) ]

for all I = (i,j,k) ∈ 𝒞. It is straightforward to show that γ is a Cε-representation of Γ, where C is a positive constant ≤ 15. For instance, consider the relation y_I5^2 = e. To show that γ(y_I5)^2 ≈ 𝟙, we need to show that ϕ(x_j x_i x_j)^2 ≈ 𝟙. Write X ≈_ε Y to mean ‖X - Y‖ ≤ ε. Since ϕ(x_i)^2 ≈_ε 𝟙 and ϕ(x_j)^2 ≈_ε 𝟙, we have ϕ(x_j x_i x_j)^2 ≈_3ε 𝟙. We can conclude from this that γ(y_I5)^2 ≈_3ε 𝟙 (we can do slightly better by averaging over the blocks of γ(y_I5), but we ignore this to simplify the analysis). We can similarly show that γ(y_Ij)^2 ≈_3ε 𝟙 for all 1 ≤ j ≤ 7, and that the linear relations in Equation (<ref>) hold to within 3ε. This leaves the commuting relations. Consider the relation y_I3 y_I4 y_I5 = e. We want to show that γ(y_I3), γ(y_I4), and γ(y_I5) approximately commute. But since γ(y_I3) γ(y_I4) γ(y_I5) ≈_3ε 𝟙 and γ(y_Ij)^2 ≈_3ε 𝟙, we conclude that

γ(y_I4) γ(y_I5) ≈_3ε γ(y_I3)^* ≈_3ε γ(y_I3) ≈_3ε γ(y_I5)^* γ(y_I4)^* ≈_6ε γ(y_I5) γ(y_I4),

or in other words, γ(y_I4) γ(y_I5) ≈_15ε γ(y_I5) γ(y_I4). The other commuting relations follow similarly. Let Ψ be the lift of ψ sending x_i ↦ x_i. Then γ ∘ Ψ = ϕ ⊕ ϕ.
Once again, the same construction applies when ϕ is an exact representation, so ψ is an fa^*-embedding by Lemma <ref>.

Note that if j = k in a relation x_i x_j x_i = x_k, then the system in Equation (<ref>) is precisely the Mermin-Peres magic square <cit.>. The magic square has previously been used by Ji to show that linear system games can require a (finite but) arbitrarily high amount of entanglement to play perfectly <cit.>.

The proof of Proposition <ref> has several interesting features:

Let G = Γ(A,b,𝒞) be an m × n linear-plus-conjugacy group, and let Γ' = Γ'(A',b') be the solution group constructed in the proof of Proposition <ref>. Then, accounting for Lemma <ref>, the system A'x=b' has 11n + 8c + 1 variables and 8n + m + 7c equations, where c = |𝒞| is the number of conjugacy relations. A presentation for Γ' can be constructed in polynomial time in m, n, and c.

The proofs of Lemma <ref> and Proposition <ref> show that there is a constant C > 0, and a lift Ψ of the homomorphism G → Γ' to the defining free groups, such that for any d-dimensional ε-representation ϕ of G, there is a 4d-dimensional Cε-representation ψ of Γ' with ψ ∘ Ψ = ϕ^⊕ 4. Taking into account the fact that we have to change the presentation of the group K in the proof of Lemma <ref>, we can take the constant C ≤ 75. The lift Ψ can be chosen to send the generators of G to generators of Γ' (although not every generator of Γ' will lie in the image of Ψ).

It is important for our argument that the fa^*-embedding in Proposition <ref> is over ℤ_2. However, we can go a little further in what type of groups can be embedded if we drop this requirement.

Suppose A is an m × n matrix over ℤ_2, and 𝒞 ⊆ [n] × [n] × [n]. Let

Γ_0(A,𝒞) := ⟨ x_1,…,x_n : x_j^2 = e for all 1 ≤ j ≤ n, ∏_j=1^n x_j^A_ij = e for all 1 ≤ i ≤ m, x_j x_k = x_k x_j if j,k ∈ V_i(A) for some 1 ≤ i ≤ m, and x_i x_j x_i = x_k for all (i,j,k) ∈ 𝒞 ⟩.

We say that a group G is a homogeneous-linear-plus-conjugacy group if it has a presentation of this form.

Since Γ_0(A,𝒞) is not presented over ℤ_2, a homogeneous-linear-plus-conjugacy group is not a linear-plus-conjugacy group. However, the two types of groups are closely related, as Γ_0(A,𝒞) × ℤ_2 = Γ(A,0,𝒞).

Suppose A is an m × n matrix over ℤ_2, 𝒞_0 ⊆ [n] × [n] × [n], 𝒞_1 ⊆ [ℓ] × [n] × [n], and L is an ℓ × ℓ lower-triangular matrix with non-negative integer entries. Let

EΓ_0(A,𝒞_0,𝒞_1,L) := ⟨ Γ_0(A,𝒞_0), y_1,…,y_ℓ : y_i x_j y_i^-1 = x_k for all (i,j,k) ∈ 𝒞_1, and y_i y_j y_i^-1 = y_j^L_ij for all i > j with L_ij > 0 ⟩.

We refer to the generators x_i in this presentation as involutary generators, and to the generators y_j as non-involutary generators. We say that a group G is an extended homogeneous-linear-plus-conjugacy group if it has a presentation of this form.

Let G = EΓ_0(A,𝒞_0,𝒞_1,L) as in Definition <ref>, where A is an m × n matrix. Then there is an m × n' matrix A' and a set 𝒞' ⊆ [n'] × [n'] × [n'], where n ≤ n', such that there is an fa^*-embedding ψ : G → Γ_0(A',𝒞') with ψ(x_i) = x_i for all 1 ≤ i ≤ n.

Suppose G has ℓ non-involutary generators, and let

G' = ⟨ G, z, w : z^2 = w^2 = e, y_1 = zw, z y_i = y_i z for i = 2,…,ℓ ⟩.

We claim that the natural morphism ψ : G → G' is an fa^*-embedding. Indeed, let Ψ : F(S) → F(S ∪ {z,w}) be the natural inclusion, where S = {x_1,…,x_n, y_1,…,y_ℓ}. Given an ε-representation ϕ of G, define an approximate representation γ of G' by

γ(x_i) = [ ϕ(x_i) 0 ; 0 𝟙 ], γ(z) = [ 0 𝟙 ; 𝟙 0 ], γ(w) = [ 0 ϕ(y_1)^* ; ϕ(y_1) 0 ], γ(y_1) = [ ϕ(y_1) 0 ; 0 ϕ(y_1)^* ], and γ(y_i) = [ ϕ(y_i) 0 ; 0 ϕ(y_i) ] for i = 2,…,ℓ.

Because L is lower-triangular, G' has no defining relations of the form y_1 y_i y_1^-1 = y_i^L_1i.
Suppose L_i1 > 0, so that ϕ(y_i) ϕ(y_1) ϕ(y_i)^* ≈_ε ϕ(y_1)^L_i1, where once again X ≈_ε Y means that ‖X - Y‖ ≤ ε. Then ϕ(y_i) ϕ(y_1)^* ϕ(y_i)^* ≈_ε ϕ(y_1)^-L_i1, so γ(y_i) γ(y_1) γ(y_i)^* ≈_ε γ(y_1)^L_i1. It is easy to see that the remaining defining relations of G' hold to within ε, so γ is an ε-representation of G'. Since ϕ is a direct summand of γ ∘ Ψ, we can apply Lemma <ref> with N = 2 and C = 1 to see that ψ is an fa-embedding. The same construction for exact representations shows that ψ is an fa^*-embedding.

Next, observe that G' is an extended homogeneous-linear-plus-conjugacy group with ℓ-1 non-involutary generators. Indeed, suppose (1,j,k) ∈ 𝒞_1. Then the defining relation y_1 x_j y_1^-1 = x_k is equivalent to the relation z w x_j w z = x_k. By adding an ancilla variable Z_jk with Z_jk^2 = e, we can replace this relation with the two conjugacy relations w x_j w = Z_jk and z Z_jk z = x_k. Similarly, suppose L_i1 > 0. Then the relation y_i y_1 y_i^-1 = y_1^L_i1 is equivalent to the relation y_i w y_i^-1 = w (zw)^L_i1 - 1. Once again, we can replace this relation with a sequence of conjugacy relations by adding ancilla variables. For instance, if L_i1 = 3, then we would add ancilla variables W_i0 and W_i1 with W_i0^2 = W_i1^2 = e, and conjugacy relations z w z = W_i0, w W_i0 w = W_i1, and y_i w y_i^-1 = W_i1. After making these replacements, the only relation containing y_1 is y_1 = zw, so we can remove y_1 from the set of generators. The commuting relations added in G' are equivalent to y_i z y_i^-1 = z for all 2 ≤ i ≤ ℓ, so G' is an extended homogeneous-linear-plus-conjugacy group. The additional variables (including the ancillas) are involutary generators, so G' has ℓ-1 non-involutary generators. Iterating this construction, we get a sequence of fa^*-embeddings terminating in a homogeneous-linear-plus-conjugacy group, as desired.

The reason the above argument does not apply for groups over ℤ_2 is that, if we set γ(J) = ϕ(J) ⊕ 𝟙, then γ(J) would not commute with γ(z) and γ(w), while if we set γ(J) = ϕ(J) ⊕ ϕ(J), then any linear relations containing J would not be satisfied.

The above proof shows that, in Proposition <ref>, we can take

n' = n + 2ℓ + ℓ(ℓ-1)/2 + |𝒞_1| + sum(L) and |𝒞'| = |𝒞_0| + 2|𝒞_1| + ℓ(ℓ-1) + sum(L) + #(L),

where ℓ is the number of non-involutary generators, sum(L) is the sum of the entries of L, and #(L) is the number of non-zero entries of L. The matrix A' and set 𝒞' can be constructed in polynomial time in m, n, ℓ, |𝒞_0|, |𝒞_1|, and sum(L).

§ PROOF OF THEOREM <REF>

The point of this section is to prove the following proposition, and hence finish the proof of Theorem <ref>.

There is a solution group Γ for which J is trivial in finite-dimensional representations, but non-trivial in finite-dimensional approximate representations.

For the proof of Proposition <ref>, it is convenient to work with sofic groups. We do not need to know the definition of soficity, just that the class of sofic groups has the following properties:

* Amenable groups are sofic.
* Sofic groups are hyperlinear.
* If H is an amenable subgroup of a sofic group G, and α : H → G is an injective homomorphism, then the HNN extension of G by α is sofic.

An expository treatment of sofic groups can be found in <cit.>. In particular, the last "closure property" can be found in <cit.>. We need one more general-purpose lemma before proceeding to the proof.

Suppose G = ⟨ S : R ⟩ is a finitely-presented group, where R contains the relation a^2 = e for some a ∈ S. Let

Ĝ := ⟨ G, t : t^2 = e, tat = Ja ⟩_ℤ_2,

where J, t ∉ S.
If a is non-trivial in approximate representations of G, then J is non-trivial in approximate representations of Ĝ.

Note that Ĝ is the "ℤ_2-HNN extension" of G × ℤ_2, where J is the generator of the ℤ_2 factor, by the order-two automorphism sending a ↦ Ja and J ↦ J.

For the purposes of this proof, if X is a linear operator on a finite-dimensional Hilbert space H, let tr̃(X) := tr(X)/dim H. Suppose ϕ is an ε-representation of G with ϕ(a)^2 = 𝟙 and tr̃(ϕ(a)) ≥ 0. Because the eigenvalues of ϕ(a) belong to {±1}, we can choose a basis so that ϕ(a) = 𝟙_d_0 ⊕ (-𝟙_d_0) ⊕ 𝟙_d_1, where d_1 = tr(ϕ(a)). Define an approximate representation ψ of Ĝ by

ψ(x) = ϕ(x) for all x ∈ S, ψ(J) = -𝟙, and ψ(t) = [ 0 𝟙_d_0 0 ; 𝟙_d_0 0 0 ; 0 0 𝟙_d_1 ].

Clearly ‖ψ(r) - 𝟙‖ = ‖ϕ(r) - 𝟙‖ ≤ ε for all relations r ∈ R, ψ([J,s]) = 𝟙 for all s ∈ S ∪ {t}, and ψ(t)^2 = ψ(J)^2 = 𝟙. For the remaining relation,

‖ψ(tat) - ψ(Ja)‖ = ‖0_2d_0 ⊕ 2·𝟙_d_1‖ = 2√(d_1/(2d_0 + d_1)) = 2√(tr̃(ϕ(a))).

So ψ will be a max(ε, 2√(tr̃(ϕ(a))))-representation with ‖ψ(J) - 𝟙‖ = 2.

To make tr̃(ϕ(a)) small, we can use the tensor-power trick as in Section II.2 of <cit.>. Suppose a is non-trivial in approximate representations of G. By Lemmas <ref> and <ref>, there is a constant δ > 0, such that for all ε > 0, there is an ε-representation ϕ of G with ‖ϕ(a) - 𝟙‖ > δ and ϕ(a)^2 = 𝟙.

Given ε > 0, find an integer k such that

(1 - δ^2/4)^k ≤ ε^2/4,

and let ϕ be an (ε/k)-representation with ‖ϕ(a) - 𝟙‖ > δ and ϕ(a)^2 = 𝟙. Suppose ϕ has dimension d, and let γ be the direct sum of ϕ with d copies of the trivial representation. Then γ is an (ε/k)-representation of G by Lemma <ref>, and furthermore γ(a)^2 = 𝟙, tr(γ(a)) = d + tr(ϕ(a)) ≥ 0, and

‖γ(a) - 𝟙‖ = ‖𝟙_d ⊕ ϕ(a) - 𝟙_2d‖ = (1/√(2)) ‖ϕ(a) - 𝟙‖ > δ/√(2).

Since γ(a) is self-adjoint,

‖γ(a) - 𝟙‖^2 = 2 - 2 tr̃(γ(a)),

so we conclude that

0 ≤ tr̃(γ(a)) ≤ 1 - δ^2/4.

Since tr̃(X^⊗ k) = tr̃(X)^k, Lemma <ref> implies that γ^⊗ k is an ε-representation of G with

0 ≤ tr̃(γ^⊗ k(a)) ≤ (1 - δ^2/4)^k ≤ ε^2/4.

Applying the argument of the first paragraph to γ^⊗ k, we get an ε-representation ψ of Ĝ with ‖ψ(J) - 𝟙‖ = 2. This shows that J is non-trivial in approximate representations of Ĝ.

We are now ready to prove Proposition <ref>. Note that any hyperlinear but non-residually-finite group has an element which is trivial in finite-dimensional representations, but non-trivial in approximate representations. To prove Proposition <ref>, we show that

K = ⟨ x,y,a,b : a^2 = b^2 = e, ab = ba, y a y^-1 = a, y b y^-1 = ab, x y x^-1 = y^2 ⟩

is an extended homogeneous-linear-plus-conjugacy group which is hyperlinear but non-residually-finite. Indeed, to see that K has a presentation as in Definition <ref>, we can introduce a third variable c with c^2 = e and c = ab. Then K is equivalent to the extended homogeneous-linear-plus-conjugacy group with three involutary generators a,b,c, one linear relation abc = e (along with the corresponding commuting relations), two non-involutary generators x and y, and three conjugacy relations y a y^-1 = a, y b y^-1 = c, and x y x^-1 = y^2. For the remainder of this section, K will refer to this group.

K is sofic, and the element a ∈ K is non-trivial.

K_1 := ⟨ y,a,b : a^2 = b^2 = e, ab = ba, y a y^-1 = a, y b y^-1 = ab ⟩ is isomorphic to ℤ ⋉ (ℤ_2 × ℤ_2), and in particular is solvable (hence amenable). The group K is the HNN extension of K_1 by the injective endomorphism of ⟨ y ⟩ sending y ↦ y^2. Hence K is sofic by properties (1) and (3) of sofic groups above. In addition, the natural morphism K_1 → K is injective.
Since a is clearly non-trivial in K_1, we conclude that a is non-trivial in K.The following lemma comes from discussions with Tobias Fritz.The element a ∈ K is trivial in all finite-dimensional representations of K. By a theorem of Mal'cev <cit.>, it suffices to show that a is trivial in finite representations, rather than finite-dimensional representations. So let ϕ : GH be a homomorphism from G to a finite group H. Now the order k of ϕ(x) is finite, so ϕ(y) = ϕ(x)^k ϕ(y) ϕ(x)^-k = ϕ(y)^2^k. It follows that the order m = |ϕ(y)| of ϕ(y) divides 2^k-1, and in particular is odd. Since ϕ(y) ϕ(b) ϕ(y)^-1 = ϕ(ab) and ϕ(y) ϕ(ab) ϕ(y)^-1 = ϕ(b), we conclude that ϕ(b) = ϕ(y)^m ϕ(b) ϕ(y)^-m = ϕ(ab). Consequently ϕ(a) = as desired. By Proposition <ref>, there is an fa-embedding of K to a homogeneous-linear-plus-conjugacy group G = (A,), in which a ∈ K is mapped to a generator x_i of G. Let G = ⟨ G, t : t^2 = e, t x_i t = J x_i ⟩__2.The relation t x_i t = J x_i can be replaced with the relations t x_i t = Z and Z x_i = J, where Z is an ancilla variable with Z^2 = e. With this presentation, G is a linear-plus-conjugacy group. By Proposition <ref>, there is an fa-embedding over _2 of G to a solution group Γ. By Lemma <ref>, a is non-trivial in approximate representations of K, and hence x_i is non-trivial in approximate representations of G.By Lemma <ref>, J_G is non-trivial in approximate representations of G, and we conclude that J_Γ is non-trivial in approximate representations of Γ.Finally, there is a morphism from K to G which sends a to x_i, so x_i will be trivial in all finite-dimensional representations of G by Lemma <ref>. But since J_G = [t,x_i], this means that J_G (and hence J_Γ) is trivial in all finite-dimensional representations of G. Let Γ be the solution group from Proposition <ref>, and letbe the associated game. Since J is trivial in finite-dimensional representations, Theorem <ref> implies thatdoes not have a perfect strategy in C_qs. But since J is non-trivial in approximate representations, Proposition <ref> implies thathas a perfect strategy in C_qa.By Remarks <ref> and <ref>, the linear system constructed in the proof of Theorem <ref> will have 235 variables and 184 equations.§ PROOFS OF THEOREMS <REF> AND <REF> To prove Theorem <ref>, we want to find a hyperlinear group with an undecidable word problem, which fa-embeds in a solution group. For Theorem <ref>, we want to find a family of residually finite groups with arbitrarily hard (albeit computable) word problems, which fin-embed in solution groups. Fortunately, such groups are provided by Kharlampovich <cit.> and Kharlampovich, Myasnikov, and Sapir <cit.>. Since the presentations are rather complicated, we do not repeat them here. Instead, we summarize some points of the construction from <cit.> in the following theorem.It is helpful to use the following notation: given S_0 ⊆ S_1, let (S_0,S_1) denote the normal subgroup generated by S_0 in the free group (S_1). Note that if S_1 ⊆ S, then (S_0,S_1) is a (not necessarily normal) subgroup of (S) in a natural way. Also, if x,y are group elements, recall that [x,y] = xyx^-1y^-1, and x^y = y x y^-1.[This is the reverse of the convention in <cit.>, where [x,y] = x^-1 y^-1 x y and x^y = y^-1 x y.]Let X ⊆ be recursively enumerable. Then there is a finitely-presented solvable group K_X = ⟨ S : R ⟩ with the following properties: * The set S is divided into three subsets L_i, i=0,1,2. 
* The relations in R come in three types: * R contains the relations x^2 = e for all x ∈ L_0 ∪ L_1.* R also contains commuting relations of the form xy = yx, for certain pairs x,y ∈ S.* Every other relation r ∈ R belongs to some normal subgroup (S_0,S_1), where S_1 ⊆ S and S_0 ⊆ (L_0 ∪ L_1) ∩ S_1 are such that the image of (S_0,S_1) in K_X is abelian. * The image of (L_0,S) in K_X is abelian.* There are elements z_0,z_1 ∈ L_0, A_1,A_2∈ L_1, and a,a' ∈ L_2, such that n ∈ X if and only if [A_2,[A_1,w(2^n)]] = [A_2,[A_1,z_0]]in K_X, where w(m) is defined byw(m) := z_1 m = 0 w(m-1) w(m-1)^a^-1 w(m-1)^a w(m-1)^a'm ≥ 1. * If X is recursive, then K_X is residually finite. Note that there is some overlap between relations of type (2b) and (2c). Indeed, if [x,y]=e is a relation, then the image of ({x},{x,y}) in K_X is equal to ⟨ x ⟩, and in particular is abelian. Since [x,y] belongs to ({x},{x,y}), any relation [x,y] = e of type (2b) with x ∈ L_0 ∪ L_1 can also be regarded as a relation of type (2c).To see that property (4) of the theorem holds from the description in <cit.>, it is helpful to note that, by properties (1), (2a), and (3) of the theorem, w(m) is an involution for all m ≥ 1. Suppose K = ⟨ S : R ⟩ is a finitely-presented group satisfying properties (1) and (2) of Theorem <ref>.Then K is an extended homogeneous-linear-plus-conjugacy group (as in Definition <ref>). Furthermore, if S_0 ⊆ S_1 ⊆ S are two subsets such that S_0 ⊆ L_0 ∪ L_1, and the image of (S_0,S_1) in K is abelian, then for every w ∈(S_0,S_1), there is a presentation of K as an extended homogeneous-linear-plus-conjugacy group in which w is equal in K to one of the involutary generators x_i.The generating set of K is split into involutary generators L_0 ∪ L_1 and non-involutary generators L_2.Since the order on non-involutary generators matters in Definition <ref>, choose an arbitrary enumeration y_1,…,y_k of L_2. According to property (2) of Theorem <ref>, the defining relations for K (aside from the involutary relations on L_0 ∪ L_1) fall into two types: (2b) and (2c). Both types of relations can be rewritten as linear and conjugacy relations of the types allowed in Definition <ref>.Indeed, commuting relations (relations of type (2b)) can be regarded as conjugacy relations (note that for relations y_i y_j = y_j y_i, we can choose either y_i y_j y_i^-1 = y_j or y_j y_i y_j^-1 = y_i depending on whether i > j or i < j).This leaves relations of type (2c). For this, we first prove the second part of the lemma. Suppose that the image of (S_0,S_1) is abelian in K, where S_0 ⊂ L_0 ∪ L_1. We claim that for any non-trivial element w ∈(S_0,S_1), there is a finite set of generators S_w and relations R_w ⊂(L_0 ∪ L_1 ∪ L_2 ∪ S_w) such that* R_w consists of linear and conjugacy relations as in Definition <ref>, * the relations R_w := R_w ∪{ s^2 = e : s ∈ L_0 ∪ L_1 ∪ S_w}imply that w is equal to an element of L_0 ∪ S_w, and* the added generators S_w and relations R_w do not change the group, i.e. the inclusionK ⟨ K, S_w : R_w ∪{ s^2 = e : s ∈ S_w }⟩is an isomorphism.To prove the claim, we use induction on the length of w in (S_1). The claim is trivially true if w ∈ S_0^±. Suppose w = z x z^-1, where x ∈(S_0) has length less than w, and z ∈ S_1. By induction, there is a set of ancilla variables S_x and relations R_x satisfying properties (i)-(iii) for x. In particular, the relations R_x imply that x is equal to some X ∈ S_0 ∪ S_x. 
Then we can set S_w := S_x ∪{W}, where W is a new indeterminate, and R_w := R_x ∪{W = z X z} or R_x ∪{W = z X z^-1} depending on whether z ∈ L_0 ∪ L_1 or z ∈ L_2. If w = z^-1 x z, then we do the same thing, but using z W z^-1 = X in place of W = z X z^-1.Finally, suppose that w = x_1 ⋯ x_n, where each x_i ∈(S_0,S_1) has smaller length than w. By induction, there are sets S_x_i and relations R_x_i implying that x_i is equal to some X_i ∈ L_0 ∪ S_x_i.We then set S_w := ⋃ S_x_i∪{W}, where W is again a new indeterminate, and R_w := ⋃ R_x_i∪{W X_1 ⋯ X_n = e = [W,X_i] = [X_i, X_j]for all1 ≤ i,j ≤ n}.Since the image of (S_0,S_1) in K is abelian, adding the relations R_w does not change K. This proves the claim. Now suppose that K has a defining relation in (S_0,S_1). If r = z x z^-1 for some x ∈(S_0,S_1) and z ∈ S_1^±, then r can be replaced with the simpler relation x. So we can assume without loss of generality that r = x_1 ⋯ x_n, where each x_i ∈(S_0,S_1). By the claim, we can add ancilla variables and relations such that each x_i is equal to an involutary generator X_i in K, and the relation r can be replaced with the linear relation X_1 ⋯ X_n = e. We conclude that K is an extended homogeneous-linear-plus-conjugacy group. The claim also immediately implies the second part of the lemma.We now come to the main result of this section.Let X ⊆ be a recursively enumerable set. Then there is a family of solution groups Γ_n = Γ(A^(n), b^(n)), n ≥ 1, such that * A^(n) x = b^(n) is an exp(O(n)) ×exp(O(n)) linear system;* the function n ↦(A^(n),b^(n)) is computable in exp(O(n))-time;* J_Γ_n is non-trivial in Γ_n if and only if n ∈ X; * if J_Γ_n is non-trivial in Γ_n, then J_Γ_n is non-trivial in approximate representations; and* if X is recursive and J_Γ is non-trivial inΓ_n, then J_Γ_n is non-trivial in finite-dimensional representations. Before giving the proof, we need the following exact version of Lemma <ref>.Suppose G = ⟨ S : R ⟩ is a finitely-presented group, where R contains the relation a^2 = e for some a ∈ S. LetG := ⟨ G, t : t^2 = e, tat = Ja ⟩__2,where J,t ∉S. If a is non-trivial in finite-dimensional representations of G, then J is non-trivial in finite-dimensional representations of G. Suppose a is non-trivial in finite-dimensional representations of G. A theorem of Baumslag states that the free product of two residually finite groups amalgamated over a finite subgroup is residually finite <cit.>.Let G := G ×_2, where the generator of the _2 factor is denoted by J, and let H = ⟨ t, a : t^2 = a^2 = e, tat = aJ ⟩__2_2 ⋉_2 ×_2.Then G is isomorphic to amalgamated free product of G and H over ⟨ a,J ⟩_2 ×_2, a finite group. While G is not necessarily residually finite, the group G^fin is residually finite by definition, and there is natural map from G to the amalgamated free product of G^fin and H over _2 ×_2. The image of J_G is non-trivial in G^fin, and hence in the amalgamated product of G^fin and H. So J is non-trivial in finite-dimensional representations of G by Baumslag's result.Given a recursively enumerable subset X ⊆, let K_X = ⟨ S : R ⟩ be the associated group from Theorem <ref>. Using the notation from property (4) of Theorem <ref>, let c(n) = [A_2,[A_1,w(2^n)]] [A_2,[A_1,z_0]]^-1, so that c(n) = e in K_X if and only if n ∈ X. Since c(n) belongs to (L_0,S), Lemma <ref> and property (3) of Theorem <ref> implies that K_X has a presentation ⟨ S_n : R_n ⟩ as an extended homogeneous-linear-plus-conjugacy group, in which c(n) is equal to some involutary generator in S_n. 
Since the presentation ⟨ S : R ⟩ is fixed, the size of ⟨ S_n : R_n ⟩ depends only on the number of ancilla generators and relations needed to set c(n) equal to one of the involutary generators. Inspection of the argument from Lemma <ref> reveals that we need to add 4m ancilla generators and relations to set w(m) to an involutary generator. Thus S_n and R_n will have size O(2^n), and the function n ↦ (S_n, R_n) can be computed in O(2^n)-time.

By Proposition <ref>, there is an fa^*-embedding from ⟨ S_n : R_n ⟩ to a homogeneous-linear-plus-conjugacy group G_n, in which c(n) is mapped to some generator x_i. As in the proof of Proposition <ref>, let

G̃_n = ⟨ G_n, t : t^2 = e, t x_i t = J x_i ⟩_{ℤ_2}.

Then G̃_n is a linear-plus-conjugacy group, and by Proposition <ref>, there is an fa^*-embedding of G̃_n in a solution group Γ_n = Γ(A^(n), b^(n)). By Remarks <ref> and <ref>, A^(n) and b^(n) can be constructed in time polynomial in |S_n| and |R_n|, so A^(n) and b^(n) satisfy parts (a) and (b) of the proposition.

Suppose c(n) is non-trivial. Since K_X is solvable, it is hyperlinear, so c(n) is non-trivial in approximate representations. By Lemma <ref>, J_{Γ_n} will be non-trivial in approximate representations. If X is recursive, then K_X will be residually finite by property (5) of Theorem <ref>, and hence J_{Γ_n} will be non-trivial in finite-dimensional representations by Lemma <ref> (this uses the fact that fa^*-embeddings are also fin-embeddings). On the other hand, if c(n) is trivial then J_{Γ_n} will be trivial. Hence parts (c)-(e) of the proposition follow from property (4) of Theorem <ref>.

Let X ⊆ ℕ be a recursively enumerable but non-recursive set, and take the family {𝒢_n : n ∈ ℕ} of games associated to the solution groups {Γ_n : n ∈ ℕ} constructed in Proposition <ref>. By Theorem <ref> and part (c) of Proposition <ref>, 𝒢_n will have a perfect strategy in C_qc if and only if n ∈ X. By Proposition <ref> and part (d) of Proposition <ref>, 𝒢_n will have a perfect strategy in C_qc if and only if it has a perfect strategy in C_qa. Because the function n ↦ 𝒢_n is computable by part (b) of Proposition <ref>, it is undecidable to determine if the games in this family have perfect strategies in C_qa.

Given a computable function f(n), let X ⊆ ℕ be a recursive subset such that for any Turing machine accepting X, the running time over inputs n ≤ N is at least f(N) when N is sufficiently large.[Often when talking about the running time, we look at the maximum running time over inputs of size ≤ N, rather than value ≤ N. However, thinking of the running time in terms of the values of the inputs does not change the fact that such sets X exist.] Once again, we can take the family of games {𝒢_n : n ∈ ℕ} associated to the solution groups {Γ_n : n ∈ ℕ} from Proposition <ref>. Then part (a) of Theorem <ref> follows from parts (a) and (b) of Proposition <ref>, while parts (b) and (c) of Theorem <ref> follow from parts (c) and (e) of Proposition <ref>, as well as Theorems <ref> and <ref>.

Suppose there is an algorithm to decide if a linear system game has a perfect strategy in C_q. Let g(n) be the running time of this algorithm on games coming from linear systems with at most n rows and columns. Note that g(n) is an increasing function. Let f(n) be any computable function such that

f(n) > g(2^{n^2}) + 2^{n^2}

for all n ≥ 1. Let {𝒢_n} be the family of games associated to f(n) as in Theorem <ref>. Then there is a constant C such that 𝒢_n has size ≤ 2^{Cn} for all n ≥ 1, and the function n ↦ 𝒢_n is computable in time 2^{Cn}.
Plugging 𝒢_n into the algorithm to decide whether a linear system game has a perfect strategy in C_q, we get an algorithm for the language

X = { n ∈ ℕ : 𝒢_n has a perfect strategy in C_q }

with running time at most g(2^{CN}) + 2^{CN} on inputs n ≤ N. But by part (b) of Theorem <ref>, when N is sufficiently large the maximum running time on inputs n ≤ N for any algorithm for X must be at least f(N). Since N^2 will eventually be larger than CN, we get a contradiction. Thus there is no algorithm to decide if a linear system game has a perfect strategy in C_q.
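For completeness, the contradiction can be spelled out in one line. Writing T(N) for the worst-case running time of the combined algorithm on inputs n ≤ N, and N_0 for the threshold beyond which the hardness property of X applies, the only facts used are that g is increasing and that f(n) > g(2^{n^2}) + 2^{n^2}:

\[
f(N) \;\le\; T(N) \;\le\; 2^{CN} + g\!\left(2^{CN}\right) \;\le\; 2^{N^2} + g\!\left(2^{N^2}\right) \;<\; f(N)
\qquad \text{for } N \ge \max(C, N_0),
\]

where the first inequality is the hardness property of X, the middle one uses N^2 ≥ CN together with the monotonicity of g, and the last is the defining property of f.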
Probing subnucleon scale fluctuations in ultraperipheral heavy ion collisions

Heikki Mäntysaari and Björn Schenke

Physics Department, Brookhaven National Laboratory, Upton, NY 11973, USA

PACS: 13.60.-r, 24.85.+p, 25.20.-x

We show that introducing subnucleon scale fluctuations constrained by HERA diffractive J/Ψ production data significantly affects the incoherent diffractive J/Ψ production cross section in ultraperipheral heavy ion collisions. We find that the inclusion of the additional fluctuations increases the ratio of the incoherent to the coherent cross section approximately by a factor of 2, and modifies the transverse momentum spectra of the produced J/Ψ at momenta larger than the scale that corresponds to the distance scale of the subnucleonic fluctuations. We present predictions for J/Ψ production in ultraperipheral heavy ion collisions at √(s_NN) = 5.02 TeV at the LHC and 200 GeV at RHIC.

§ INTRODUCTION

Deep inelastic scattering (DIS) is a clean process to study the internal structure of hadrons via their interaction with (virtual) photons. The most precise data to date on the partonic structure of the proton come from the DIS experiments performed at the HERA e^± + p collider <cit.>. These measurements have shown that the gluonic density of the proton grows rapidly when the momentum fraction x of the gluons decreases. Nonlinear QCD phenomena must limit this growth at very small x in order to avoid violating unitarity. These nonlinearities are most conveniently described within the Color Glass Condensate (CGC) effective theory of QCD <cit.>, which defines the framework we use in this work.

The gluon density scales as A^{1/3}, which amplifies nonlinear effects in heavy nuclei with large mass number A. However, currently no nuclear DIS data at small x are available. The proposed Electron Ion Collider (EIC) <cit.> and Large Hadron Electron Collider (LHeC) <cit.> are designed to explore this region of large gluon densities. Before these machines are realized, one possibility to study deep inelastic scattering on nuclei is given by ultraperipheral heavy ion collisions, where a large impact parameter suppresses the relatively short range strong interactions. Instead, scattering processes involve a photon, emitted by one of the electrically charged nuclei, scattering off the other nucleus. First results from the LHC on diffractive vector meson production in ultraperipheral collisions have demonstrated the sensitivity of this process to nuclear effects at small x <cit.>. Recently, ultraperipheral proton-nucleus collisions, where the large electric charge of the nucleus causes the photon-proton scattering to dominate, have also been used to study deep inelastic scattering off a proton at energies much higher than were available at HERA <cit.>.

Diffractive scattering, where a system of particles or just a single particle is produced without exchanging a net color charge with the target, is a powerful process to study the small-x structure of hadrons. At leading order in perturbative QCD, the diffractive cross section is proportional to the square of the gluon density, making it very sensitive to small-x gluons. In addition, exclusive vector meson production is sensitive to the geometric structure of the hadron. In particular, in coherent diffraction (where the target hadron remains in the same quantum state) an average density profile is probed.
On the other hand, when the target breaks up, one is sensitive to the event-by-event fluctuations of the gluon fields in the target <cit.>. In recent phenomenological applications it has been demonstrated that one can indeed constrain the shape and the shape fluctuations of the proton (not included in previous literature) and nuclei at small x by studying exclusive vector meson production <cit.> (see also <cit.>). For a more formal introduction to diffractive scattering, the reader is referred to Ref. <cit.>.

We calculate coherent and incoherent diffractive J/Ψ production in ultraperipheral heavy ion collisions in the dipole picture. The main focus of this work is to study how the diffractive cross sections are affected by the inclusion of the subnucleon scale fluctuations that have been constrained using diffractive J/Ψ production at HERA in Refs. <cit.>. Apart from the inclusion of these fluctuations, we improve over previous state-of-the-art CGC work <cit.> (see also <cit.>) by using a Monte Carlo method to explicitly calculate target averages. This allows us to use the impact parameter dependent saturation model (IPsat) dipole amplitude <cit.> without factorization of the impact parameter dependence (an approximation which was necessary to derive the incoherent cross section in Ref. <cit.>). We also use an updated IPsat parametrization fitted to the combined HERA data <cit.>.

§ DIPOLE SCATTERING

The basic ingredient in the dipole framework is the forward elastic dipole-target scattering amplitude N(𝐫, 𝐛, x_ℙ), where 𝐫 is the two-dimensional vector that connects the quark and antiquark of the dipole in the transverse plane, 𝐛 is the impact parameter, and x_ℙ is the usual Bjorken variable of DIS in a diffractive event. In this work we use the IPsat model <cit.>, which employs an eikonalized DGLAP evolved <cit.> gluon distribution xg and includes saturation effects. The dipole-proton scattering amplitude is written as

N(𝐫, 𝐛, x) = 1 − exp( −(π^2 / 2N_c) α_s(μ^2) x g(x, μ^2) 𝐫^2 T_p(𝐛) ),

with the thickness function

T_p(𝐛) = 1/(2π B_p) e^{−𝐛^2/(2B_p)}.

Here both the coupling α_s and the gluon distribution are evaluated at the scale μ^2 = μ_0^2 + 4/𝐫^2. The proton width B_p, the initial scale μ_0^2, and the initial condition for the DGLAP evolution of the gluon distribution xg are parameters of the model. Their values are determined in Ref. <cit.> by performing fits to HERA DIS data.

To include proton structure fluctuations we follow Refs. <cit.> and assume that the gluonic density of the proton in the transverse plane is distributed around three constituent quarks (hot spots)[As shown in Ref. <cit.>, it is also possible to describe the HERA data using a different number of hot spots, or tubes instead of round hot spots. Consequently, we do not expect our results to depend on the exact choice of model for the proton geometry, as long as it has the correct amount of fluctuations constrained by the HERA data.]. These hot spots are assumed to be Gaussian. In practice we perform the replacement of the impact parameter profile (<ref>)

T_p(𝐛) → ∑_{i=1}^{3} T_q(𝐛 − 𝐛_i), with T_q(𝐛) = 1/(2π B_q) e^{−𝐛^2/(2B_q)},

where 𝐛_i are the locations of the hot spots. They are sampled from a two-dimensional Gaussian distribution whose width is B_qc. The free parameters B_q and B_qc are obtained in Ref. <cit.> by comparing with the HERA coherent and incoherent diffractive J/Ψ production data at the photon-proton center of mass energy W ∼ 75 GeV, corresponding to x_ℙ ∼ 10^{-3}.
The proton fluctuation parameters obtained are B_qc = 3.3 GeV^{-2} and B_q = 0.7 GeV^{-2}, and are close to the values obtained in a similar analysis in Ref. <cit.>.

An additional source of fluctuations we include here comes from fluctuations of the overall normalization of the saturation scale, which we refer to in short as saturation scale fluctuations. Following again Ref. <cit.>, we allow the saturation scale of each of the constituent quarks to fluctuate independently according to a log-normal distribution. The width of that distribution is obtained in Refs. <cit.> by comparing to the p+p multiplicity fluctuation data. The saturation scale fluctuations were shown in Ref. <cit.> to be necessary to describe the incoherent diffractive cross section measured by HERA at small |t|. For a more detailed description of the implementation see Ref. <cit.>.

The replacement (<ref>) changes the impact parameter dependence of the average dipole amplitude (<ref>) even though we require ⟨∑_i T_q(𝐛 − 𝐛_i)⟩ = T_p(𝐛). Here, the average is taken over different nucleon configurations. This is because the thickness function appears in the exponential. As a result, the description of the proton structure function data is not as good as with the original fit <cit.>. Also, as shown in Ref. <cit.>, this parametrization tends to slightly underestimate the coherent γp cross section measured at HERA. In principle one should perform a new fit to the HERA structure function data with the modified IPsat model parametrization. As the purpose of this work is to study the effect of the subnucleon scale fluctuations on ultraperipheral heavy ion collisions, this fitting is left for future work.[There is also a large model uncertainty from the modeling of the vector meson wave function that will affect the overall normalization of the diffractive cross sections. We will return to this issue below and show that the ratio of incoherent to coherent cross sections is largely independent of the wave function.]

The dipole-nucleus amplitude N_A is obtained by using an independent scattering approximation, similar to Refs. <cit.>,

N_A(𝐫, 𝐛, x) = 1 − ∏_{i=1}^{A} [1 − N(𝐫, 𝐛 − 𝐛_i, x)],

where 𝐛_i are the transverse positions of the nucleons in the nucleus. The interpretation here is that 1 − N is the probability not to scatter off an individual nucleon, thus ∏_i (1 − N_i) is the probability not to scatter off the entire nucleus.

§ DIFFRACTIVE DEEP INELASTIC SCATTERING

The scattering amplitude for diffractive vector meson production in γ*-nucleus scattering can be written as <cit.>

𝒜^{γ* A → V A}_{T,L}(x_ℙ, Q^2, 𝚫) = 2i ∫ d^2𝐫 ∫ d^2𝐛 ∫ (dz/4π) (Ψ*Ψ_V)_{T,L}(Q^2, 𝐫, z) e^{−i[𝐛 − (1−z)𝐫]·𝚫} N_A(𝐫, 𝐛, x_ℙ).

Here the momentum transfer is 𝚫 = (P' − P)_⊥, with |𝚫| ≈ √(−t), where P and P' are the momenta of the incoming and outgoing nucleus, respectively. The subscripts T and L refer to the transverse and longitudinal polarization of the virtual photon with virtuality Q^2. In ultraperipheral events, the photons are approximately real (Q^2 = 0) and only the transverse component contributes.

The scattering amplitude (<ref>) can be interpreted as follows: first, the incoming virtual photon fluctuates into a quark-antiquark dipole with transverse separation |𝐫|, the quark carrying the momentum fraction z. This splitting is described by the virtual photon wave function Ψ (see e.g. <cit.>). As discussed previously, the elastic scattering amplitude for the dipole to scatter off the nucleus is N_A(𝐫, 𝐛, x_ℙ). Finally, the vector meson V is formed, and the qq̅ → V formation is described by the vector meson wave function Ψ_V.
In this work we use both the Boosted Gaussian and Gaus-LC wave function parametrizations from Ref. <cit.> in order to estimate the model uncertainty related to the formation of the J/Ψ (see also Ref. <cit.> for a recent, more rigorous calculation of the vector meson wave functions).

Two phenomenological corrections to the diffractive cross sections are included. First, equation (<ref>) is derived assuming that the dipole amplitude is completely real, which makes the diffractive scattering amplitude purely imaginary (in case of a rotationally symmetric dipole amplitude). A correction for the presence of the real part is necessary. Secondly, the skewedness correction that takes into account the fact that in two-gluon exchange processes the gluons in the target are probed at different values of Bjorken-x is also included. These corrections are discussed in more detail in Appendix <ref>.

The coherent diffractive cross section is obtained by averaging the diffractive scattering amplitude over the target configurations and taking the square <cit.>:

dσ^{γ* A → V A}/dt = (1/16π) |⟨𝒜^{γ* A → V A}(x_ℙ, Q^2, 𝚫)⟩|^2.

Here the brackets ⟨·⟩ refer to averages over different configurations of the target. The incoherent cross section is obtained by subtracting the coherent cross section from the total diffractive cross section. It takes the form of a variance of the diffractive scattering amplitude <cit.> (see also Refs. <cit.>):

dσ^{γ* A → V A*}/dt = (1/16π) ( ⟨|𝒜^{γ* A → V A}(x_ℙ, Q^2, 𝚫)|^2⟩ − |⟨𝒜^{γ* A → V A}(x_ℙ, Q^2, 𝚫)⟩|^2 ).

The cross sections are related to Fourier transforms of the dipole-nucleus amplitude from coordinate space to momentum space, and the transverse momentum transfer 𝚫 is the Fourier conjugate to 𝐛 − (1−z)𝐫. Here the impact parameter 𝐛 points to the center of the dipole from the center of the nucleus, and the factor (1−z) appears due to the use of non-forward wave functions <cit.>. The dependence on 𝐛 shows that diffractive vector meson production is sensitive to the impact parameter profile, in contrast to calculations of proton structure functions where the impact parameter integral merely affects the overall normalization. This makes diffractive scattering a sensitive probe of the internal geometric structure of hadrons and nuclei. In particular, larger momentum transfers probe smaller distance scales, which we will show explicitly later. Because the incoherent cross section (<ref>) has the form of a variance, it is sensitive to the amount of fluctuations in coordinate space.

In Ref. <cit.> the incoherent cross section was calculated analytically assuming that the impact parameter dependence of the dipole amplitude factorizes, and is explicitly proportional to e^{−𝐛^2/(2B_p)}. In that case, the dipole amplitude does not saturate to unity at large dipoles or at large densities. As we calculate the target averages explicitly using a Monte Carlo method (similar to SARTRE <cit.>), we do not need to rely on this approximation. We note that for J/Ψ production at the LHC, usage of the factorized approximation for the dipole amplitude reduces the coherent cross section by approximately 15% when no subnucleonic fluctuations are included. The averages over target configurations are calculated by sampling hundreds of configurations; a schematic sketch of this sampling procedure is given below.
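To make the averaging procedure concrete, here is a minimal, self-contained Python sketch of the Monte Carlo outlined above. This is a toy illustration, not the code used in this work: a Gaussian hot-spot form factor stands in for the full dipole amplitude, saturation scale fluctuations are omitted, and the amplitude normalization is arbitrary. The Woods-Saxon parameters are standard values for Pb, while B_q and B_qc are the hot-spot parameters quoted above.

```python
import numpy as np

# Illustrative parameters: standard Woods-Saxon values for Pb and the
# hot-spot widths quoted in the text; everything else is a toy assumption.
A_NUCL = 208                    # number of nucleons
R_WS, A_WS = 6.62, 0.546        # Woods-Saxon radius and diffuseness [fm]
N_HOT, B_Q, B_QC = 3, 0.7, 3.3  # hot spots per nucleon; widths [GeV^-2]
HBARC = 0.1973                  # GeV fm

def sample_nucleons(rng):
    """Draw A_NUCL transverse nucleon positions [GeV^-1]: sample the 3D
    Woods-Saxon density by rejection and project onto the transverse plane."""
    pos = []
    while len(pos) < A_NUCL:
        r = rng.uniform(0.0, 2.5 * R_WS)
        # acceptance proportional to r^2/(1+exp((r-R)/a)); maximum ~0.5 near r=R
        if rng.uniform() < (r / R_WS) ** 2 / (1.0 + np.exp((r - R_WS) / A_WS)):
            costh = rng.uniform(-1.0, 1.0)
            phi = rng.uniform(0.0, 2.0 * np.pi)
            rho = r * np.sqrt(1.0 - costh ** 2)   # transverse radius
            pos.append([rho * np.cos(phi), rho * np.sin(phi)])
    return np.array(pos) / HBARC                  # fm -> GeV^-1

def toy_amplitude(delta, rng):
    """Toy stand-in for the diffractive amplitude at momentum transfer
    |Delta| = delta [GeV]: one Gaussian form factor per sampled hot spot."""
    amp = 0.0 + 0.0j
    for b_nucleon in sample_nucleons(rng):
        hot_spots = b_nucleon + rng.normal(0.0, np.sqrt(B_QC), size=(N_HOT, 2))
        # e^{-i b.Delta} phases (Delta chosen along x) times hot-spot profile
        amp += np.sum(np.exp(-1j * hot_spots[:, 0] * delta)) * np.exp(-0.5 * B_Q * delta ** 2)
    return amp

rng = np.random.default_rng(1)
delta = 0.3                                       # GeV
amps = np.array([toy_amplitude(delta, rng) for _ in range(400)])
coherent = np.abs(amps.mean()) ** 2               # |<A>|^2
incoherent = (np.abs(amps) ** 2).mean() - coherent  # <|A|^2> - |<A>|^2
print(coherent, incoherent)
```

The key point is visible in the last two lines: the coherent cross section is built from the configuration-averaged amplitude, while the incoherent one is its variance, so the latter vanishes if the sampled geometry does not fluctuate.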
This involves sampling nucleon positions from a Woods-Saxon distribution and the subnucleonic structure for each of the nucleons as described above.

The structure of the target is probed at the scale

x_ℙ = (Q^2 + M_V^2 − t)/(Q^2 + W^2 − m_N^2),

which can be interpreted as the longitudinal momentum fraction of the nucleon carried by the color-neutral “pomeron”. Here W is the center-of-mass energy in the photon-nucleon scattering, and m_N and M_V are the nucleon and the vector meson masses, respectively.

§ EXCLUSIVE VECTOR MESON PRODUCTION IN ULTRAPERIPHERAL COLLISIONS

Following Ref. <cit.>, we write the vector meson production cross section in ultraperipheral heavy ion collisions as a convolution of the photon flux n^{A_i} generated by one of the nuclei and the photon-nucleus cross section σ^{γ A_i}:

dσ^{AA → J/Ψ AA'}/dt = n^{A_2}(ω_2) dσ^{γ A_1}(y)/dt + n^{A_1}(ω_1) dσ^{γ A_2}(−y)/dt.

Here y is the rapidity of the vector meson, and the total photon flux n^{A_i}(ω) is obtained by integrating the photon flux of the nucleus over the impact parameter in the region |𝐛| > 2R_A. For an explicit expression, the reader is referred to Ref. <cit.>. The photon flux can be calculated by noticing that the photon energies are ω_1 = (M_V/2)e^{y} and ω_2 = (M_V/2)e^{−y}. The center of mass energy squared of the photon-nucleon system is W^2 = √(s_NN) M_V e^{±y}, and the Bjorken-x probed in the process becomes x_ℙ ≈ (M_V/√(s_NN)) e^{∓y}.

Note that there are two different contributions to the vector meson production at y ≠ 0. Either a large-x photon scatters off a small-x gluon, or vice versa. This limits the applicability of the framework at forward and backward rapidities, where a significant contribution to the cross section comes from a process where the center-of-mass energy of the photon-nucleon scattering is small. Thus we shall only calculate the cross section in the region where x_ℙ ≲ 0.015. At √(s_NN) = 2.76 TeV this corresponds to |y| ≲ 2.5, and at √(s_NN) = 5.02 TeV to |y| ≲ 3 in the case of J/Ψ production.

§ RESULTS

The coherent and incoherent J/Ψ production cross sections in ultraperipheral Pb+Pb collisions at √(s_NN) = 5.02 TeV calculated using the Boosted Gaussian wave function are shown in Fig. <ref>. The cross sections at y = 0 are first calculated without subnucleonic fluctuations, and then including both the geometric and Q_s normalization fluctuations for the nucleons. Only the first three coherent peaks are shown, as calculating the average becomes numerically challenging at high |t| <cit.>. Further, it will be hard to measure the coherent cross section in this regime. We find that the coherent cross section is slightly reduced when the subnucleon scale fluctuations are included. This change is caused by the modification of the impact parameter dependence of the dipole amplitude discussed in Sec. <ref>. Below |t| ≲ 0.25 GeV^2, where one is sensitive only to fluctuations on length scales larger than the nucleon size, the incoherent cross section is approximately the same with and without nucleon structure fluctuations. On the other hand, at larger |t| subnucleonic fluctuations clearly modify the slope of the incoherent cross section.
The |t| value 0.25 GeV^2, where the incoherent cross section becomes sensitive to subnucleonic fluctuations, corresponds to a distance scale of ∼ 0.4 fm, which is of the order of the size of the gluonic hot spots in the nucleon (their root mean square radius is √(2B_q) ≈ 0.24 fm). Preliminary ALICE data on exclusive J/Ψ production <cit.> show that the change in slope occurs around √(−t) ≈ p_T ∼ 0.5 GeV, which is in quantitative agreement with our results.

Next, we compare our results with the total (t-integrated) J/Ψ production cross sections measured by ALICE <cit.> and CMS <cit.> at √(s_NN) = 2.76 TeV as a function of the J/Ψ rapidity. The results for coherent and incoherent diffraction are shown in Figs. <ref> and <ref>, respectively. We show results obtained by using both the Boosted Gaussian and Gaus-LC wave functions for the J/Ψ. Similarly to previous literature <cit.>, we find that the different wave functions mainly affect the overall normalization of the cross section. In Fig. <ref> we see that the coherent cross section is somewhat reduced when subnucleonic fluctuations are included. This change is comparable to the model uncertainties related to the J/Ψ wave function. In particular, we find that when the Boosted Gaussian is replaced by the Gaus-LC parametrization, both the coherent and incoherent (shown in Fig. <ref>) cross sections are reduced by approximately 20%.

Using the Boosted Gaussian wave function without subnucleon scale fluctuations, the coherent cross section is significantly overestimated, as in Ref. <cit.>, while the incoherent cross section agrees with the data. Including geometric fluctuations increases the incoherent cross section almost by a factor of 2, and the results are consistently above the data for both processes. The rapidity dependence of the data is well reproduced. Results with subnucleon scale fluctuations obtained with the Gaus-LC wave function are close to the experimental data for the coherent cross section and slightly higher for the incoherent cross section. At midrapidity, the ALICE coherent cross section data point is overestimated by 1.3σ and the incoherent cross section by 2σ. We note that there is some tension with the HERA e+p data, for which the coherent cross section is underestimated by our model with subnucleonic fluctuations <cit.>.

To reduce model uncertainties related to the vector meson wave function, we study the ratio of the incoherent to the coherent cross section. Our results are compared to the ALICE data <cit.> at √(s_NN) = 2.76 TeV in Fig. <ref>. It can be seen that the inclusion of the subnucleonic fluctuations brings this ratio to a level compatible with the experimental data. The ratio is found to be approximately independent of rapidity, and we confirm that the dependence on the vector meson wave function is very weak.

Predictions for the J/Ψ rapidity distribution in ultraperipheral Pb+Pb collisions at √(s_NN) = 5.02 TeV are shown in Figs. <ref> and <ref>. As for the lower energy, we find that the subnucleon scale geometric fluctuations have a large effect on the incoherent cross section. The ratio of the cross sections for the two processes decreases at √(s_NN) = 5.02 TeV due to larger saturation effects on the incoherent cross section <cit.>. Numerically this effect is small in the LHC energy range, and we find that at √(s_NN) = 5.02 TeV the ratio decreases by 0.5–3%.

Finally, in Fig. <ref> we show |t|-spectra for J/Ψ production at midrapidity in ultraperipheral √(s_NN) = 200 GeV Au+Au collisions at RHIC, corresponding to W = 25 GeV.
This corresponds to x_ℙ = 0.015, which is at the edge of the validity of our model. In particular, the skewedness and real part corrections together are almost 100%, which makes the absolute normalization unreliable (see Appendix <ref> and Fig. <ref>). The spectra are calculated using the Boosted Gaussian wave function. Integrating the cross sections over t, we get 106 μb for the coherent and 62 μb for the incoherent cross section with subnucleon fluctuations (with the Gaus-LC wave function the cross sections are 100 μb and 52 μb). The corresponding cross section results without fluctuations are 121 μb and 33 μb (118 μb and 30 μb with the Gaus-LC wave function). The results for the coherent cross section are in agreement with the PHENIX <cit.> result of 76 ± 33 ± 11 μb.

The Bjorken-x evolution in this work comes directly from the Q_s evolution in the IPsat model. Thus, the amount of fluctuations and the size of the hot spots do not change as a function of x or center-of-mass energy. In principle the characteristic length scales (∼ Q_s^{−1}(x)) depend on the energy, and recent explicit calculations show that protons grow and fluctuations are reduced when Bjorken-x is decreased <cit.>. If that is the case, we would expect the incoherent cross section to grow more slowly with energy than in our calculation.

§ CONCLUSIONS

We have shown that subnucleon scale fluctuations, in particular geometric fluctuations of the nucleon shape, contribute significantly to the incoherent J/Ψ production cross section at |t| ≳ 0.25 GeV^2, measured in ultraperipheral heavy ion collisions at the LHC and RHIC. This is the first main result of this work: we expect the incoherent t slope of the J/Ψ production cross section to change at the value of t corresponding to the distance scale of the subnucleonic fluctuations. Compared to the case where the only contribution to the fluctuations originates from fluctuating nucleon positions, the |t|-integrated incoherent cross section increases almost by a factor of 2 when geometric fluctuations of the nucleon shape are included.

When comparing to experimental data, the cleanest presented result is the ratio of the incoherent and coherent cross sections, which eliminates a large part of the model uncertainties. It increases by a factor of two when subnucleonic fluctuations are included. This is the second main result of this work, and it explains the previous tension between the employed dipole model and experimental data in Ref. <cit.>, where one generally overestimated the coherent and underestimated the incoherent cross section.

In this work the energy evolution only affects the saturation scale of the nucleons. Our framework does not involve any possible smoothening of the nucleon or nucleus as one evolves towards small x, discussed e.g. in Refs. <cit.>. As shown in <cit.>, one could expect that the smoothening of the proton slows down the growth of the incoherent cross section with energy. Including these effects in our calculation explicitly by solving the JIMWLK evolution equations <cit.>, as done e.g. in <cit.>, is left for future work.

§ ACKNOWLEDGMENTS

We thank T. Lappi, D. Takaki and R. Venugopalan for discussions. This work was supported under DOE Contract No. DE-SC0012704. This research used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. BPS acknowledges a DOE Office of Science Early Career Award.
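As a simple numerical cross-check of the kinematic reach quoted above (a back-of-the-envelope estimate using M_{J/Ψ} ≈ 3.1 GeV), the condition x_ℙ ≲ 0.015 translates into

\[
x_{\mathbb{P}} \approx \frac{M_V}{\sqrt{s_{NN}}}\, e^{\mp y} \lesssim 0.015
\quad\Longrightarrow\quad
|y| \lesssim \ln\frac{0.015\,\sqrt{s_{NN}}}{M_V},
\]

which gives |y| ≲ ln(0.015 · 2760/3.1) ≈ 2.6 at √(s_NN) = 2.76 TeV and |y| ≲ ln(0.015 · 5020/3.1) ≈ 3.2 at 5.02 TeV, consistent with the rapidity windows |y| ≲ 2.5 and |y| ≲ 3 used in the calculations.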
§ PHENOMENOLOGICAL CORRECTIONS

The diffractive scattering amplitude (<ref>) is obtained by assuming that the dipole amplitude is completely real, which leads to a fully imaginary diffractive scattering amplitude before the Fourier transform to momentum space. The ratio between the real and the imaginary part of the amplitude, β, can be calculated as (see e.g. <cit.>)

β = tan(πλ/2), where λ = ∂ ln 𝒜^{γ* A → V A} / ∂ ln(1/x_ℙ).

In this work, we follow the prescription of Ref. <cit.> and calculate the effect of the corrections from photon-proton scattering, assuming that the correction has the same effect in photon-nucleus scattering. This correction is taken into account by multiplying the obtained cross sections by a factor 1 + β^2.

The second phenomenological correction we include is the skewedness correction, which takes into account the fact that in the two-gluon exchange the gluons in the target are probed at different longitudinal momentum fractions x_1 ≪ x_2 ≈ x_ℙ <cit.>. In the IPsat model the collinear factorization gluon distribution g(x, μ^2) is corrected to correspond to the off-diagonal (or skewed) distribution, which depends on both x_1 and x_2, by multiplying it by a skewedness factor R_g. Following the prescription of Ref. <cit.> we get

R_g = 2^{2λ_g + 3} Γ(λ_g + 5/2) / (√π Γ(λ_g + 4)), with λ_g = ∂ ln g(x, μ^2) / ∂ ln(1/x).

For photon-nucleus scattering the skewedness correction is approximated by calculating its effect on photon-proton scattering, and using the obtained correction factor. The skewedness correction in particular is numerically important and is needed to describe the HERA diffractive measurements. The effect of the real part and skewedness corrections on the total coherent diffractive cross section is shown in Fig. <ref>. When the corrections are calculated, no proton fluctuations are taken into account. The correction grows rapidly towards small rapidities (small y). Note that when J/Ψ production is calculated at nonzero rapidity, there are always large-x contributions (corresponding to negative y) and small-x contributions (positive y) to the cross section, see Eq. (<ref>).
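To give a sense of the typical magnitude of these corrections, consider the illustrative value λ_g = λ = 0.2 (chosen for demonstration only, not a fitted number):

\[
R_g = \frac{2^{2\lambda_g+3}\,\Gamma(\lambda_g+5/2)}{\sqrt{\pi}\,\Gamma(\lambda_g+4)}
    = \frac{2^{3.4}\,\Gamma(2.7)}{\sqrt{\pi}\,\Gamma(4.2)} \approx 1.19,
\qquad
1+\beta^2 = 1+\tan^2(0.1\pi) \approx 1.11.
\]

Since at leading order the scattering amplitude is proportional to the gluon density, R_g effectively enters the cross section squared, so in the dilute limit the combined enhancement is roughly R_g^2 (1+β^2) ≈ 1.6 for this choice; at the larger effective λ values reached near the edge of the model's validity, the combined correction approaches the ≈100% level mentioned above.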
High-energy exciton transitions in quasi-two-dimensional cadmium chalcogenide nanoplatelets

Kazuaki Sakoda

(corresponding author: romvas@inorg.chem.msu.ru)

Department of Chemistry, Lomonosov Moscow State University, 119991, Moscow, Russia
Department of Materials Science, Lomonosov Moscow State University, 119991, Moscow, Russia
Department of Physics, Lomonosov Moscow State University, 119991, Moscow, Russia
Moscow Institute of Physics and Technology (State University), 141700, Dolgoprudny, Russia
P.N. Lebedev Physical Institute of the Russian Academy of Sciences, 119991, Moscow, Russia
Photonic Materials Unit, National Institute for Materials Science, 1-1 Namiki, Tsukuba, Ibaraki 305-0044, Japan

PACS: 62.23.Kn, 73.22.-f, 78.67.-n

Semiconductor nanoparticles of cadmium chalcogenides are known to exhibit pronounced thickness-dependent E_0 series of exciton transitions at the Γ point of the Brillouin zone (BZ). In this work, we report experimental evidence for a high-energy series of exciton transitions, which originates from BZ points different from the Γ point, in the family of cadmium chalcogenide quasi-2D nanoplatelets (NPLs). Intensive UV absorption bands demonstrating a pronounced size effect are observed for CdTe, CdSe, and CdS NPLs in addition to the E_0 exciton bands in the visible region. These new bands are attributed to transitions analogous to the E_1, E_1+Δ_1, and E_2 series observed in bulk crystals. First-principles DFT calculations of the electronic structure and absorption spectra support this explanation and show that the main contribution to these optical transitions comes from the X and M points of the 2D BZ, which originate from the L and X points of the 3D BZ. At the same time, the E_0 series of transitions at the Γ point is well described by the multiband effective-mass model. The observation of the UV exciton bands reveals tunable optical properties of cadmium chalcogenide NPLs in the UV spectral region, which may be interesting for practical applications.

§ INTRODUCTION

Colloidal semiconductor nanoparticles have been intensively studied as materials with promising optical properties tunable with size effects. <cit.> In addition to the size, the shape of nanoparticles provides a way to vary their electronic and optical properties. Substantial progress in the synthesis of nanoparticles with different shapes <cit.> has been achieved by growing them in the presence of specific stabilizers. Recently, considerable interest has been attracted to cadmium chalcogenide colloidal quasi-2D nanoplatelets (NPLs) with a thickness of a few monolayers. <cit.> Record narrow absorption and emission bands resulting from the thickness-dependent exciton transitions make them promising for the development of new types of light-emitting devices, <cit.> lasers, <cit.> photodetectors, <cit.> photocatalytic systems, <cit.> and as materials with strong electroabsorption response. <cit.>

The atomically flat surface and very homogeneous thickness of NPLs result in very narrow luminescence bands with a typical width of 5–10 nm.
The giant oscillator strength of the exciton transitions <cit.> results in very short luminescence decay times (hundreds of picoseconds) <cit.> and high absorption coefficients for NPLs which are much higher than those for quantum dots (QDs). <cit.>The exciton transitions for cadmium chalcogenides nanoparticles lie in the visible part of the optical spectrum. However, the ultraviolet (UV) absorption is of interest too. It is known that cadmium chalcogenide QDs exhibit featureless absorption at photon energies above 3 eV (Ref. JPhysChemB.104.6112) where the absorption coefficient is independent of their size and proportional only to the molar fraction of semiconductor. This is why the UV absorption is often used as a reference for calculating the absorption coefficients for CdSe and CdTe QDs. <cit.> In Refs. JOptTechnol.78.693, ACSPhotonics.3.58 this technique was extended to an analysis of the absorption of core/shell QDs. To the best of our knowledge, the absorption features of nanoplatelets in the UV spectral region (shorter than 380–400 nm) have not been studied yet.In this work, we report an observation of new high-energy exciton transition series in the family of cadmium chalcogenide NPLs. We analyzed the absorption and photoluminescence excitation spectra of CdS, CdSe, and CdTe NPLs of different thickness capped with various ligands in the UV region and revealed pronounced and intensive thickness-dependent exciton bands in addition to exciton bands in the visible region. First-principles DFT calculations of the electronic structure and absorption spectra of NPLs shows that these high-energy bands result from transitions at the X and M points of the 2D Brillouin zone (BZ) of NPLs, which originate from L and X points of the BZ of bulk crystals. At the same time, the series of transitions at the Γ point in the visible region is well described by the multiband effective-mass model. § EXPERIMENT§.§ Nanoplatelet synthesis Colloidal technique with cadmium acetate as an agent promoting the formation of 2D nanoparticles was used to synthesize nanoplatelets. The CdTe NPLs (the populations with the main exciton emission bands at 430, 500, 556, 600, and 634 nm that are further referred to as CdTe430, CdTe500, etc.), CdSe NPLs (CdSe396, CdSe463, CdSe512, and CdSe550) and CdS NPLs (CdS380) were synthesized. The synthesis of CdTe NPLs (CdTe430, CdTe500, and CdTe556 populations) was performed according to the method proposed in Ref. ChemMater.25.2455. A mixture of thicker CdTe600 and CdTe634 populations was obtained by slow heating of the reaction mixture to 270 ^∘C. The synthesis of CdSe NPLs (CdSe512 and CdSe550 populations) was carried out following the protocols given in Ref. JAmChemSoc.133.3070.The modified technique adapted from Refs. JAmChemSoc.133.3070,NanoRes.5.337 was used for the synthesis of CdSe NPLs (CdSe396 and CdSe463 populations) and CdS NPLs (the CdS380 population). Briefly, a mixture containing 0.13 g of cadmium acetate dihydrate, 0.08 ml of oleic acid, and 10 ml of octadecene was heated to 230 ^∘C (CdSe463 and CdS380) or 130 ^∘C (CdSe396) under argon flow. After that, 100 μl of 1M solution of selenium or sulfur in trioctylphosphine was injected and the growth of the NPLs was continued for 30 min (CdSe463 and CdS380) or 6 hours (CdSe396). Then the reaction mixture was cooled down to the room temperature. During the cooling process, 1 ml of oleic acid was injected and nanoplatelets were precipitated by acetone. 
After 2–3 cycles of precipitation and redispersion, solutions of the NPLs in hexane with minimum impurity content were obtained. Ligand exchange with thioglycolic acid (TGA) was applied to synthesize a CdSe463TGA sample using a phase-transfer method similar to Ref. JAmChemSoc.134.18585.

§.§ Measurement and calculations

Micrographs of nanoplatelet ensembles were recorded on a LEO 912 AB OMEGA transmission electron microscope. The accelerating voltage was 15 kV.

Absorption spectra were recorded using the Varian Cary 50 spectrophotometer on nanoplatelets dispersed in spectroscopic grade solvents (hexane or methanol) in the spectral range of 200–800 nm. The spectra were analyzed with the PeakFit software and were fitted with Lorentz profile bands. In some cases, the background caused by light scattering was subtracted. The photoluminescence excitation spectra were collected using the Perkin-Elmer LS 55 fluorescence spectrometer by monitoring the intensity of the lowest-energy exciton luminescence band.

First-principles calculations of the band structure and absorption spectra of CdSe NPLs were carried out within density-functional theory using a plane-wave code. In order to correctly account for spin-orbit coupling, the LDA PAW pseudopotentials taken from Ref. ComputMaterSci.81.446 were used. The cut-off energy was 30 Ha, and the integration over the Brillouin zone was performed on an 8×8×2 Monkhorst-Pack mesh.

The absorption spectra were calculated using the optical-response program of the same package by summing the contributions of all band-to-band optical transitions into the imaginary part of the dielectric function on a dense k-point mesh in the Brillouin zone. In order to correctly take into account all possible electronic transitions, up to 30 empty conduction bands were used in these calculations. The 26×26×26 k-point mesh was used for bulk CdSe, and 26×26×1 or denser k-point meshes were used for NPLs. During the analysis, small changes in the code were made in order to isolate the contributions from given points of the BZ to the absorption spectrum.

To calculate the thickness dependence of the exciton transition energies resulting from the optical transitions at the Γ point, we used the multiband effective-mass calculation method which was previously used by Ithurria et al. for the study of CdSe, CdS, and CdTe NPLs. <cit.>

§ RESULTS

§.§ TEM studies

TEM studies of the obtained NPLs showed that their lateral size was 100–200 nm, except for the CdSe512 and CdSe550 populations whose size was about 10 nm [Fig. 1(a–d)]. CdSe NPLs with large lateral size were observed to roll up into nanoscrolls; however, after TGA treatment they unfolded into flat nanoplatelets, in agreement with the literature data. <cit.> According to X-ray and electron diffraction measurements, all the NPLs had the zinc-blende crystal structure.

§.§ Optical absorption and photoluminescence studies

Typical absorption spectra of CdTe, CdSe, and CdS NPLs are shown in Fig. 1(e). Two well-defined narrow absorption bands for each sample appear in the visible region. These bands correspond to the transitions from the light-hole and heavy-hole valence sub-bands to the conduction band and are denoted as E_0(lh–e) and E_0(hh–e), respectively. The spectral positions of these bands are consistent with the literature data. <cit.>

In the UV region (200–400 nm), however, unexpectedly non-monotonic spectra with pronounced and intensive absorption bands were revealed, in contrast with the featureless absorption spectra typical of spherical QDs.
The observed fine structure was different for various NPLs. For CdTe NPLs, three absorption bands with the width of 50–70 nm can be distinguished. These bands are wider than the E_0(lh–e) and E_0(hh–e) bands, whose widths are 6–8 and 15–20 nm, respectively. It should be noted that the shape of the UV bands is close to the Lorentz profile, which indicates the absence of inhomogeneous broadening. The UV absorption spectra of CdSe and CdS NPLs reveal a similar behavior. A single maximum was observed for CdSe463 and CdSe396 NPL populations, whereas two maxima were registered for CdS NPLs.The absorption spectra of CdTe NPLs with different thickness are shown in Fig. 2(a). For all populations, three bands in UV region are observed. These bands shift to longer wavelengths with increasing thickness.The presence of the absorption bands is supported by photoluminescence (PL) excitation spectroscopy. The PL excitation spectrum for CdTe550 NPL shown by dashed line in Fig. 2(a) clearly demonstrates the features coinciding well with those in the absorption spectrum. The PL excitation spectra and the thickness-dependent absorption spectra exclude a possible interpretation of the UV absorption bands as resulting from the traces of free oleic acid and cadmium oleate [Fig. 1(e)].In the case of CdSe NPLs, a pronounced size effect is observed for CdSe396 and CdSe463 populations [Fig. 2(b)]. A single UV band shifts to higher energies with decreasing thickness. The PL excitation spectra also show a pronounced peak in the UV region which coincides with the absorption bands in both the samples. Additional bands at 370 and 392 nm (CdSe463) and at 319 nm (CdSe396) observed in PL excitation spectra can be attributed to the E_0(2s) series transitions analogous to those observed in epitaxial quantum wells <cit.> or an exciton formed by an electron and a hole from spin-orbit split band. <cit.> However, this behavior does not apply to CdSe512 and CdSe550 populations whose high-energy peaks lie at higher energies as compared to their expected positions [Fig. 2(b)]. This could be a result of significantly smaller lateral size of these NPLs which is comparable to the Bohr radius of exciton in CdSe (7 nm) and so the expected 2D behavior is violated. To get additional information on the origin of the UV bands, we performed ligand exchange with thioglycolic acid to synthesize a CdSe463TGA sample covered with ligands other than the oleic acid. In this sample, the single UV band is retained but is shifted to longer wavelength [Fig. 2(b)] because of an increase in the NPL thickness resulted from the sulfide layer formation.§.§ First-principles calculations of the band structure and absorption spectra To calculate the band structure of NPLs, periodic structures consisting of slabs of (001)-oriented NPLs with the zinc-blende structure separated by vacuum layers were used. The nanoplatelets had a thickness n from one to six monolayers (ML) and were terminated by Cd atoms on both sides. The vacuum layer of 20 Å between the slabs was found to be sufficient to neglect the interaction between the NPL and its images. To make the nanoplatelets insulating, two terminating F atoms were added near the Cd atoms on both surfaces of the NPL to compensate the charge of s^2 electrons produced by an extra Cd plane. 
The energies of the on-top, hollow, and four different bridge configurations of F atoms on the surfaces of NPLs were compared; the bridge position with F atoms located near the expected Se sites of the zinc-blende structure was found to be the ground-state structure with the P4̅m2 space group.At first, the geometry of NPLs (the in-plane lattice parameter and atomic positions) was carefully optimized until the residual forces acting on the atoms decreased to below 5 · 10^-6 Ha/Bohr. The in-plane lattice parameter of NPLs was found to increase from 3.9513 Å for 1ML to 4.1731 Å for 6ML toward the calculated interatomic Cd–Cd distance of 4.2544 Å for bulk CdSe. The band structure of all NPLs was then calculated (Fig. 3). The scissors correction of 1.495 eV estimated from the band structure calculations for bulk zinc-blende CdSe was applied to the energies of all conduction bands of NPLs (the applicability of the scissors correction to CdSe was recently approved by GW calculations <cit.>).An analysis of the band structures shows that the valence band of Cd_n+1Se_nF_2 NPLs is composed of 3(n+2) bands with the A_1, B_2, and E symmetry at the Γ point originating mainly from F 2p and Se 4p atomic states. Note a large spin-orbit splitting for E bands originating from Se states (upper part of the valence band) and negligibly small splitting for E bands originating from F states (lower part of the valence band).The structure of the conduction band of NPLs appeared to be different from that expected in simple theories that predict size-effect-induced shifts of dispersion curves for sub-bands. In our case, we see a number of the lower-lying bands with the A_1 and B_2 symmetry at the Γ point originating mainly from, respectively, 5s and 5p_z empty atomic orbitals of Cd and Se. The higher-lying conduction bands have predominantly the E symmetry and originate from Cd and Se 5p_x,y states. An inspection of Fig. 3 and a comparison between the band structures of 6ML NPLs and bulk CdSe [Fig. 4(b)] shows that the number of bands below the E bands can be as large as 10–14. It looks like one of three 5p orbitals of Se, p_z, is strongly split off and form these extra bands while the other 5p_x and 5p_y orbitals form the higher-lying bands with wave functions of the E symmetry. As a result of a complex interaction between s and p_z atomic orbitals, a number of subbands have properties that are very different from those expected in simple theories. For example, these bands have very different effective masses (which is unusual for sub-bands). The comparison of the remaining parts of the band structures shows that there is a qualitative agreement between them: we can see the formation of dense “nodes” in the valence band along the dispersion curves of bulk CdSe when the number of monolayers is increased [see Fig. 3 and Fig. 4(b)]. § DISCUSSION The E_0(lh–e) and E_0(hh–e) bands in the visible region can be attributed to the E_0 exciton series at the Γ point of the BZ [Fig. 1(c), inset] and are related to the fundamental absorption edge. In the UV region, the behavior of the absorption bands is different. Although for different chalcogenides different fine structure of these bands was observed, in all cases the absorption fine structure in NPLs looks very similar to that in bulk crystals. <cit.> Three bands observed for CdTe NPLs in the UV region may be associated with E_1, E_1+Δ_1, and E_2 transitions found in bulk CdTe. 
<cit.> The single UV band observed for CdSe NPLs can be explained by overlapping E_1 and E_1+Δ_1 transitions resulting from the weaker spin-orbit coupling. This agrees with the optical spectra of bulk zinc-blende CdSe in the UV region, in which a pronounced E_1 peak and a shoulder for the E_1+Δ_1 transition were observed. <cit.> The two UV bands observed for CdS NPLs are in agreement with the two maxima observed for bulk CdS <cit.> (however, it should be noted that the data in Ref. JApplPhys.78.1183 were obtained for wurtzite CdS).

In all cadmium chalcogenide NPLs, the UV bands are shifted towards shorter wavelengths as compared to the data for bulk compounds. In Fig. 2(c), we plotted their energies E_i as a function of the reciprocal square of the thickness, 1/d^2. The NPL thickness values were taken from the literature. <cit.> The linear dependence of E_i on 1/d^2 for all exciton series confirms the size effect. The asymptotic energies in the 1/d^2 → 0 limit are close to the energies E_i^0 of the corresponding transitions in bulk compounds. <cit.> For the UV bands, the thickness dependence of the shifts is weaker than that for the E_0 series; this may indicate a larger effective mass of the charge carriers.

The hypothesis relating the UV absorption bands to the optical E_1 and E_2 transitions at the boundary of the BZ needs a detailed analysis. For bulk cadmium chalcogenides with the zinc-blende structure, these high-energy transitions are known to occur at the L point (E_1 and E_1+Δ_1 transitions) and the X point (E_2 transition) of the 3D BZ. In the case of quasi-2D nanoplatelets, the 3D BZ transforms to its 2D projection [Fig. 4(a)]: the X point becomes the M point at the corner of the square, and the L point (which does not exist in the 2D BZ) is projected to the X point at the middle of its side. A comparison of the band structures for a typical F-terminated CdSe NPL with a thickness of 6 ML and bulk CdSe [Fig. 4(b)] shows a good agreement between the dispersion curves at the band edges. This means that the energies of the band extrema at the boundary of the 2D BZ are close to those in the band structure of bulk CdSe. This fact supports our interpretation of the UV absorption bands as resulting from optical transitions at the X and M points of the 2D BZ.

In order to give further evidence for our interpretation, we performed first-principles modeling of the absorption spectrum for the CdSe463 NPL. The calculated absorption spectra for the in-plane and out-of-plane light polarizations (red and black dashed lines), the absorption spectrum averaged over polarizations (solid black line), and the partial contributions to this spectrum from different points of the 2D Brillouin zone (color shaded areas; the BZ regions used for integration are shown by shaded areas in the inset) are compared with the experimental spectrum (grey line) and the calculated absorption spectrum for bulk CdSe (dotted line) in Fig. 4(c). It is evident that the main contributions to the E_1 and E_2 bands originate from the X and M points of the BZ, respectively, not from the Γ point. This explains the weaker thickness-dependent shifts observed for these transitions. The agreement between the modeled and experimental spectra in the E_1 region is good. The position of the E_2 band lies outside our experimentally available wavelength range. The excitonic features in the low-energy E_0 part of the spectrum are not reproduced because our calculations did not take the excitonic effects into account.
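The size-effect analysis behind Fig. 2(c) is straightforward to reproduce: one fits a straight line in 1/d^2 and reads the bulk asymptote E_i^0 off the intercept. A minimal sketch is given below; the thickness and energy values in it are illustrative placeholders, not our measured band positions.

import numpy as np

# Hypothetical data: NPL thickness d (nm) and UV band energy E_i (eV)
d = np.array([1.2, 1.5, 1.8, 2.1])
E = np.array([4.95, 4.70, 4.55, 4.45])

# Fit E_i = E_i^0 + C / d^2; for an infinite well C ~ hbar^2 pi^2 / (2 mu),
# so a smaller slope C points to a heavier reduced carrier mass, as inferred
# above for the UV bands relative to the E_0 series.
C, E0 = np.polyfit(1.0 / d**2, E, 1)
print(f"bulk asymptote E_i^0 = {E0:.2f} eV, confinement slope C = {C:.2f} eV nm^2")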
It should be noted that the appearance of distinct E_1 and E_2 exciton peaks in the absorption spectra of NPLs is due to the specific features of their electronic structure, which result in step-like or divergent van Hove singularities. <cit.>

The contribution of the exciton transitions at the Γ point was analyzed within the eight-band Pidgeon–Brown model. In this model, the bulk band dispersion is characterized in terms of the k · p band parameters, which include the bulk band gap energy E_g, the Kane energy E_p, the valence band spin-orbit splitting Δ, the remote-band contribution to the electron effective mass α, and the valence band Luttinger–Kohn parameters γ_1 and γ_2. The parameters used for CdTe and CdSe NPLs were taken from Refs. NatureMater.10.936 and Adachi2009 and are summarized in Table <ref>. The effective-mass approximation was applied, assuming an infinite potential-barrier height, and the energies of the low-energy E_0(lh–e) and E_0(hh–e) transitions were calculated. We assumed that the in-plane wave vector k_∥ = 0 and that the out-of-plane wave vector component k_⊥ = Nπ/d, where N is the confinement quantum number and d is the thickness of the NPL. The thickness dependence of the exciton transition energies was found for N = 1, 2, 3. The calculated energies for CdTe NPLs are shown in Fig. 2(c) by dashed lines. The lowest-energy transitions are in good agreement with the experimental data. A similar behavior was observed for CdSe NPLs. However, the thickness dependence of the higher-order transitions deviates from the experimentally observed one, which possibly indicates that the higher-order transitions are weaker than the transitions at the boundary of the BZ.

§ CONCLUSIONS

In summary, we have revealed that cadmium chalcogenide NPLs exhibit distinct and intense exciton bands with a pronounced size effect not only in the visible region, but also in the UV region. We have shown that these high-energy bands are associated with the E_1, E_1+Δ_1, and E_2 exciton transitions occurring at the X and M points of the 2D BZ of the NPL, which originate from the L and X points of the BZ of bulk crystals. This behavior is general for all 2D NPLs and contrasts with the behavior of spherical QDs made of cadmium chalcogenides. The thickness-dependent shift of the absorption bands shows that the optical properties of the NPLs are tunable not only in the visible but also in the UV spectral region. The specific features of the UV absorption spectra, resulting from the step-like or divergent van Hove singularities typical of quasi-2D NPLs, may stimulate fundamental interest in the analysis of electronic transitions deep in the Brillouin zone of other 2D semiconductor systems. The UV features of the NPL absorption spectra may also be interesting from a practical perspective for the development of UV photodetectors and other optoelectronic devices.

This work was supported by the Russian Foundation for Basic Research (grants No. 16-03-00704, 16-29-11694, 15-02-05856, and 16-29-11805).

[ChemRev.110.389] D. V. Talapin, J.-S. Lee, M. V. Kovalenko, and E. V. Shevchenko, Chem. Rev. 110, 389 (2010).
[NatureMater.13.233] X. Lan, S. Masala, and E. H. Sargent, Nat. Mater. 13, 233 (2014).
[MaterToday.16.312] F. Hetsch, N. Zhao, S. V. Kershaw, and A. L. Rogach, Mater. Today 16, 312 (2013).
[NaturePhoton.7.13] Y. Shirasaki, G. J. Supran, M. G. Bawendi, and V. Bulović, Nat. Photon. 7, 13 (2013).
[RussChemRev.80.1139] R. B. Vasiliev, D. N. Dirin, and A. M. Gaskov, Russ. Chem. Rev. 80, 1139 (2011).
[Nature.404.59] X. Peng, L. Manna, W. Yang, J. Wickham, E. Scher, A. Kadavanich, and A. P. Alivisatos, Nature 404, 59 (2000).
[ChemMater.18.5722] M. Kuno, O. Ahmad, V. Protasenko, D. Bacinello, and T. H. Kosel, Chem. Mater. 18, 5722 (2006).
[NanoLett.10.3770] S. Deka, K. Miszta, D. Dorfs, A. Genovese, G. Bertoni, and L. Manna, Nano Lett. 10, 3770 (2010).
[NatureMater.2.382] L. Manna, D. J. Milliron, A. Meisel, E. C. Scher, and A. P. Alivisatos, Nat. Mater. 2, 382 (2003).
[NanoLett.5.2164] A. G. Kanaras, C. Sönnichsen, H. Liu, and A. P. Alivisatos, Nano Lett. 5, 2164 (2005).
[JAmChemSoc.130.16504] S. Ithurria and B. Dubertret, J. Am. Chem. Soc. 130, 16504 (2008).
[NatureMater.10.936] S. Ithurria, M. D. Tessier, B. Mahler, R. P. S. M. Lobo, B. Dubertret, and A. L. Efros, Nat. Mater. 10, 936 (2011).
[AdvFunctMater.24.295] Z. Chen, B. Nadal, B. Mahler, H. Aubin, and B. Dubertret, Adv. Funct. Mater. 24, 295 (2014).
[NanoLett.15.4611] F. Fan, P. Kanjanaboos, M. Saravanapavanantham, E. Beauregard, G. Ingram, E. Yassitepe, M. M. Adachi, O. Voznyy, A. K. Johnston, G. Walters, G.-H. Kim, Z.-H. Lu, and E. H. Sargent, Nano Lett. 15, 4611 (2015).
[ChemPhysLett.619.185] A. G. Vitukhnovsky, V. S. Lebedev, A. S. Selyukov, A. A. Vashchenko, R. B. Vasiliev, and M. S. Sokolikova, Chem. Phys. Lett. 619, 185 (2015).
[NatureNanotechnol.9.891] J. Q. Grim, S. Christodoulou, F. Di Stasio, R. Krahne, R. Cingolani, L. Manna, and I. Moreels, Nat. Nanotech. 9, 891 (2014).
[NanoLett.14.2772] C. She, I. Fedin, D. S. Dolzhnikov, A. Demortière, R. D. Schaller, M. Pelton, and D. V. Talapin, Nano Lett. 14, 2772 (2014).
[NanoLett.15.1736] E. Lhuillier, J.-F. Dayen, D. O. Thomas, A. Robin, B. Doudin, and B. Dubertret, Nano Lett. 15, 1736 (2015).
[JPhysChemLett.6.1099] D. O. Sigle, L. Zhang, S. Ithurria, B. Dubertret, and J. J. Baumberg, J. Phys. Chem. Lett. 6, 1099 (2015).
[ACSNano.8.7678] A. W. Achtstein, A. V. Prudnikau, M. V. Ermolenko, L. I. Gurinovich, S. V. Gaponenko, U. Woggon, A. V. Baranov, M. Y. Leonov, I. D. Rukhlenko, A. V. Fedorov, and M. V. Artemyev, ACS Nano 8, 7678 (2014).
[PhysRevLett.59.2337] J. Feldmann, G. Peter, E. O. Göbel, P. Dawson, K. Moore, C. Foxon, and R. J. Elliott, Phys. Rev. Lett. 59, 2337 (1987).
[ACSNano.6.6751] M. D. Tessier, C. Javaux, I. Maksimovic, V. Loriette, and B. Dubertret, ACS Nano 6, 6751 (2012).
[NanoLett.13.3321] M. D. Tessier, B. Mahler, B. Nadal, H. Heuclin, S. Pedetti, and B. Dubertret, Nano Lett. 13, 3321 (2013).
[JPhysChemC.119.20156] A. W. Achtstein, A. Antanovich, A. Prudnikau, R. Scott, U. Woggon, and M. Artemyev, J. Phys. Chem. C 119, 20156 (2015).
[JPhysChemB.104.6112] V. I. Klimov, J. Phys. Chem. B 104, 6112 (2000).
[JPhysChemC.113.6511] C. de Mello Donegá and R. Koole, J. Phys. Chem. C 113, 6511 (2009).
[JOptTechnol.78.693] D. N. Dirin, M. S. Sokolikova, A. M. Gaskov, and R. B. Vasilev, J. Opt. Technol. 78, 693 (2011).
[ACSPhotonics.3.58] I. Angeloni, W. Raja, R. Brescia, A. Polovitsyn, F. De Donato, M. Canepa, G. Bertoni, R. Proietti Zaccaria, and I. Moreels, ACS Photonics 3, 58 (2016).
[ChemMater.25.2455] S. Pedetti, B. Nadal, E. Lhuillier, B. Mahler, C. Bouet, B. Abécassis, X. Xu, and B. Dubertret, Chem. Mater. 25, 2455 (2013).
[JAmChemSoc.133.3070] S. Ithurria, G. Bousquet, and B. Dubertret, J. Am. Chem. Soc. 133, 3070 (2011).
[NanoRes.5.337] Z. Li, H. Qin, D. Guzun, M. Benamara, G. Salamo, and X. Peng, Nano Res. 5, 337 (2012).
[JAmChemSoc.134.18585] S. Ithurria and D. V. Talapin, J. Am. Chem. Soc. 134, 18585 (2012).
[ComputMaterSci.81.446] K. F. Garrity, J. W. Bennett, K. M. Rabe, and D. Vanderbilt, Comput. Mater. Sci. 81, 446 (2014).
[ChemMater.25.639] C. Bouet, B. Mahler, B. Nadal, B. Abecassis, M. D. Tessier, S. Ithurria, X. Xu, and B. Dubertret, Chem. Mater. 25, 639 (2013).
[PhysRevB.51.14395] R. Winkler, Phys. Rev. B 51, 14395 (1995).
[ChemMater.25.1190] J. Yang, J. S. Son, J. H. Yu, J. Joo, and T. Hyeon, Chem. Mater. 25, 1190 (2013).
[PhysRevB.50.10780] O. Zakharov, A. Rubio, X. Blase, M. L. Cohen, and S. G. Louie, Phys. Rev. B 50, 10780 (1994).
[JApplPhys.78.4681] S. Ninomiya and S. Adachi, J. Appl. Phys. 78, 4681 (1995).
[JApplPhys.74.3435] S. Adachi, T. Kimura, and N. Suzuki, J. Appl. Phys. 74, 3435 (1993).
[PhysRevB.49.7262] Y. D. Kim, M. V. Klein, S. F. Ren, Y. C. Chang, H. Luo, N. Samarth, and J. K. Furdyna, Phys. Rev. B 49, 7262 (1994).
[JApplPhys.78.1183] S. Ninomiya and S. Adachi, J. Appl. Phys. 78, 1183 (1995).
[Adachi2009] S. Adachi, Properties of Semiconductor Alloys: Group-IV, III-V and II-VI Semiconductors (John Wiley & Sons Ltd., 2009).
http://arxiv.org/abs/1703.08960v1
R. B. Vasiliev, A. I. Lebedev, E. P. Lazareva, N. N. Shlenskaya, V. B. Zaytsev, A. G. Vitukhnovsky, Y. Yao, and K. Sakoda, “High-energy exciton transitions in quasi-two-dimensional cadmium chalcogenide nanoplatelets,” arXiv:1703.08960 [cond-mat.mtrl-sci] (2017).
Many protostellar gapped and binary discs show misalignments between their inner and outer discs; in some cases, ∼70 degree misalignments have been observed. Here we show that these misalignments can be generated through a “secular precession resonance” between the nodal precession of the inner disc and the precession of the gap-opening (stellar or massive planetary) companion. An evolving protostellar system may naturally cross this resonance during its lifetime due to disc dissipation and/or companion migration. If resonance crossing occurs on the right timescale, of order a few Myrs, characteristic of young protostellar systems, the inner and outer discs can become highly misaligned (≳ 60 degrees). When the primary star has a mass of order a solar mass, generating a significant misalignment typically requires the companion to have a mass of ∼ 0.01-0.1 M_⊙ and an orbital separation of tens of AU. The recently observed companion in the cavity of the gapped, highly misaligned system HD 142527 satisfies these requirements, indicating that a previous resonance-crossing event misaligned the inner and outer discs. Our scenario for HD 142527's misaligned discs predicts that the companion's orbital plane is aligned with the outer disc's; this prediction should be testable with future observations as the companion's orbit is mapped out. Misalignments observed in several other gapped disc systems could be generated by the same secular resonance mechanism.

accretion, accretion disks — protoplanetary disks — binaries: general — stars: individual (HD 142527)

§ INTRODUCTION

Newly formed binary stars often undergo a period of co-evolution with a circumstellar and circumbinary disc system. In this scenario, a circumbinary disc is disrupted into accreting streams by the binary, and feeds onto circumstellar discs around the individual stars <cit.>. Young binary systems with both circumstellar and circumbinary discs have been observed for some time <cit.>. Interestingly, recent imaging observations with high angular resolution have revealed that the circumstellar and circumbinary discs are often misaligned, rather than sharing a common orbital plane. In a picture where binaries form from an isolated rotating molecular cloud core, strongly misaligned discs are rather surprising, as one would expect everything to be aligned. Furthermore, if the binary companion forms through disc fragmentation <cit.>, then one would naturally expect it to share the same orbital plane as the disc from which it formed.
Simulations of turbulent star formation, where the cores do not appear to be isolated from the surrounding molecular clouds, indicate that accretion from different directions can cause the angular momentum vector of the disc to evolve in single-star systems <cit.>, and a similar process is expected to happen in binary systems. Accretion through the binary/disc is generally expected to drive the system towards alignment <cit.>, although small misalignments are possible <cit.>. There are several observations of young binary systems that indicate misalignments between the circumbinary disc, the circumstellar disc and the binary orbital plane <cit.>. Additionally, several protoplanetary discs with large gaps/cavities (often called “transition discs” – see <cit.> for recent reviews), which could contain massive companions, are observed to contain misaligned inner and outer discs.

Of particular interest is the system HD 142527, which contains an extended, massive disc (hundreds of AU, ∼ 0.1M_⊙) with a large cleared dust gap from ∼ 10 AU to ∼ 100 AU <cit.>; an accreting M-dwarf companion (with mass about 10-20 times smaller than the 2 M_⊙ Herbig Ae/Be primary) has been found to orbit within the cavity <cit.>. Recent NIR scattered light observations <cit.> have revealed notches and shadows, indicating that the inner disc and the outer disc are misaligned by ∼70 degrees <cit.>; this misalignment is confirmed by the gas kinematics from resolved CO observations <cit.>. Another system, HD 100453, shows similar features to HD 142527 in scattered light observations, which have been interpreted as a misalignment between the inner and outer discs of 72 degrees <cit.>. Yet another system, HD 135344B, shows features in its image that indicate a weaker misalignment between its inner and outer disc of 22 degrees <cit.>.

In this paper, we explore a dynamical mechanism to generate large misalignments in gapped protoplanetary discs starting from a nearly co-planar configuration. The prototypical system we study consists of an inner disc and an outer disc, both assumed to be nearly flat (see Section 2), with a gap produced by a low-mass binary companion. Such a setup includes “transition” discs if they do contain massive companions. The gravitational interaction between the discs and the companion generates mutual nodal precession of the disc's and binary's orbital planes. As the system evolves in time (e.g., due to accretion or dissipation of the discs), these precession frequencies can match, which may lead to inclination excitation. Such a “secular precession resonance” has long been studied in the context of planetary spin dynamics <cit.>. Recently, Batygin & Adams (2013) considered a secular precession resonance as a mechanism for generating misalignment between the stellar spin and the protoplanetary disc in the presence of an external binary companion. Lai (2014) presented a simple way (based on vector equations) of studying this resonance during the disc evolution and also included the effects of accretion and magnetic fields (see also Spalding & Batygin 2014).
Matsakos & Königl (2017) studied a variant of this “primordial disc misalignment” model in which the binary companion is a giant planet orbiting in the gap between the inner and outer disc. Our motivation in this paper is the misaligned disc systems like HD 142527 and HD 100453. Our system is similar to that studied by Matsakos & Königl (2017); however, we ignore the stellar spin (since it has a negligible angular momentum compared to the discs and the embedded companion), and we focus on generating misalignments between the circumstellar (inner) and circumbinary (outer) discs, rather than on misalignments of the star's spin. The basic setup is described in Section 2, and the concept of secular precession resonance is discussed further in Section 3. In Section 4 we numerically investigate various possible scenarios. In Section 5 we put forward a simple qualitative understanding of our results and discuss them in the context of observations. Finally, we summarise our findings in Section 6.

§ PROBLEM DESCRIPTION

We consider a primary star of mass M_* surrounded by a circumstellar disc of mass M_cs and radial extent R_cs. The primary is then orbited by a lower-mass companion with mass M_c and separation a_c. The companion could be a lower-mass star, brown dwarf or giant planet. The primary star and companion are then surrounded by a circumbinary disc of mass M_cb and radial extent from R_in to R_out. Similar to previous works <cit.>, we take the two discs to be flat, each having a single orientation. This assumption is reasonable because different regions of the disc can communicate with each other efficiently by internal stresses (such as bending waves[We note protostellar discs are in the “wave-like” regime, where bending waves propagate with speeds of order half the sound speed, rather than the diffusive regime <cit.>]), so that any disc warp is small <cit.>.

Thus, our system is specified by three vectors: the angular momentum of the circumstellar disc (L_cs), the angular momentum of the companion (L_c) and the angular momentum of the circumbinary disc (L_cb). There are six torques between the various components, of which three are independent. We calculate the torques using the quadrupole approximation and assume that the primary star is much more massive than the companion and the discs <cit.>. The torque on the circumstellar disc due to the companion (τ_cs,c) is:

τ_cs,c = (3GM_c/4a_c^3) (L̂_cs·L̂_c) (L̂_c×L̂_cs) ∫_0^R_cs R^2 dM_cs

where the L̂_i are unit vectors. To get a sense of the magnitude and form of this torque, we can crudely evaluate the integral (assuming, as in most protostellar discs, that the disc mass is dominated at large radius, but without specifying its form – we will do this later in Section <ref>) to find [neglecting the (L̂_cs·L̂_c)(L̂_c×L̂_cs) factor]:

|τ_cs,c| ∼ GM_c M_cs R_cs^2/a_c^3

The torque on the companion due to the circumbinary disc (τ_c,cb) is:

τ_c,cb = (3GM_c a_c^2/4) (L̂_c·L̂_cb) (L̂_cb×L̂_c) ∫_R_in^R_out dM_cb/R^3

Similarly, the magnitude of this torque is approximately:

|τ_c,cb| ∼ GM_c M_cb^L a_c^2/R_in^3

where M_cb^L is the total disc mass “locally” around R_in, specifically M_cb^L ≡ 2π R_in^2 Σ(R_in).
Finally, the torque on the circumstellar disc due to the circumbinary disc (τ_cs,cb) is:

τ_cs,cb = (L̂_cs·L̂_cb) (L̂_cb×L̂_cs) ∫_0^R_cs R^2 dM_cs ∫_R_in^R_out (3G dM_cb/4R'^3)

which has a magnitude of order:

|τ_cs,cb| ∼ GM_cs M_cb^L R_cs^2/R_in^3

With these torques, the evolution of the angular momenta in the system can be written:

dL_cs/dt = τ_cs,c + τ_cs,cb + (d|L_cs|/dt) L̂_cs
dL_c/dt = -τ_cs,c + τ_c,cb + (d|L_c|/dt) L̂_c
dL_cb/dt = -τ_cs,cb - τ_c,cb + (d|L_cb|/dt) L̂_cb

This system contains many different precession frequencies, allowing for the possibility of a “secular precession resonance” that could result in large misalignments between the two discs. We note, as we will discuss later, that a resonance does not necessarily result in a large misalignment; in these models a resonance is a necessary, but not a sufficient, condition.

§.§ Precession Frequencies

To evaluate the precession frequencies, we consider a power-law surface density profile of the form:

Σ = Σ_0 (R/R_0)^-1

Such a choice is the typical surface density profile obtained from mm-images of protoplanetary discs <cit.> and is expected for a constant `alpha-disc' model with a flared structure (e.g. <cit.>, where temperature ∝ R^-1/2 – <cit.>). With our choice of density profile, the angular momenta of the circumstellar and circumbinary discs are (2/3) M_cs Ω_K(R_cs) R_cs^2 and (2/3) M_cb Ω_K(R_out) R_out^2 respectively (with Ω_K(R) the Keplerian angular velocity at radius R). The precession frequencies are defined via τ_i,j = Ω_i,j L̂_j × L_i, and note that Ω_j,i = Ω_i,j |L_i|/|L_j|. The six individual precession frequencies are:

Ω_cs,c = -(3/8) Ω_K(R_cs) (M_c/M_*) (R_cs/a_c)^3 (L̂_cs·L̂_c)
Ω_cs,cb = -(3/16) Ω_K(R_cs) (M_cb/M_*) [R_cs^3/(R_out R_in^2)] (L̂_cs·L̂_cb)
Ω_c,cb = -(3/8) Ω_K(a_c) (M_cb/M_*) [a_c^3/(R_out R_in^2)] (L̂_c·L̂_cb)
Ω_c,cs = -(1/4) Ω_K(a_c) (M_cs/M_*) (R_cs/a_c)^2 (L̂_c·L̂_cs)
Ω_cb,c = -(3/4) Ω_K(R_out) (M_c/M_*) (a_c/R_in)^2 (L̂_cb·L̂_c)
Ω_cb,cs = -(3/16) Ω_K(R_out) (M_cs/M_*) (R_cs/R_in)^2 (L̂_cb·L̂_cs)

For systems that are close to being aligned initially, there are three possible precession resonances: a resonance between the circumstellar disc and the companion (Ω_cs = Ω_c, where Ω_cs = Ω_cs,c + Ω_cs,cb and Ω_c = Ω_c,cs + Ω_c,cb); a resonance between the circumstellar disc and the circumbinary disc (Ω_cs = Ω_cb, where Ω_cb = Ω_cb,cs + Ω_cb,c); and finally a resonance between the companion and the circumbinary disc (Ω_c = Ω_cb). Not all of these resonances can occur in realistic systems or lead to large misalignments, as we shall demonstrate below.

§ SECULAR PRECESSION RESONANCES

To assess the importance of the various resonances, we consider a set of specific examples before performing numerical integrations. In each example we assume one component of the system has so much angular momentum that it precesses so slowly that the corresponding angular momentum unit vector can be considered constant.

§.§ A massive circumbinary disc (L_cb ≫ L_cs, L_c)

Perhaps the most natural setup consists of a massive extended circumbinary disc that dominates the angular momentum budget of the system. Therefore, only the circumstellar disc and the companion precess, and a secular precession resonance occurs when Ω_cs = Ω_c, or:

Ω_cs,c + Ω_cs,cb = Ω_c,cb + Ω_c,cs

For a massive, large circumbinary disc, Ω_c,cb/Ω_c,cs ≫ 1, unless the circumbinary disc is abnormally far away from the companion. In the scenario we are envisaging, it is the companion itself that truncates the circumstellar and circumbinary discs, so we would expect the inequality to be readily satisfied.
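It is useful at this point to attach numbers to these frequency comparisons. The sketch below simply evaluates the six Ω_i,j of Section 2.1 in convenient units (masses in M_⊙, lengths in AU, frequencies in rad/yr); the parameter values are merely illustrative of the gapped systems considered later, and the inclination cosines are set to unity, as appropriate for a nearly aligned configuration.

import numpy as np

def omega_K(M_star, R):
    # Keplerian angular velocity in rad/yr (M_star in M_sun, R in AU)
    return 2.0 * np.pi * np.sqrt(M_star / R**3)

def precession_frequencies(M_star, M_c, M_cs, M_cb, a_c, R_cs, R_in, R_out):
    # The six Omega_{i,j} defined above, with (Lhat_i . Lhat_j) -> 1
    O = {}
    O['cs,c']  = -3/8  * omega_K(M_star, R_cs)  * (M_c /M_star) * (R_cs/a_c)**3
    O['cs,cb'] = -3/16 * omega_K(M_star, R_cs)  * (M_cb/M_star) * R_cs**3/(R_out*R_in**2)
    O['c,cb']  = -3/8  * omega_K(M_star, a_c)   * (M_cb/M_star) * a_c**3 /(R_out*R_in**2)
    O['c,cs']  = -1/4  * omega_K(M_star, a_c)   * (M_cs/M_star) * (R_cs/a_c)**2
    O['cb,c']  = -3/4  * omega_K(M_star, R_out) * (M_c /M_star) * (a_c/R_in)**2
    O['cb,cs'] = -3/16 * omega_K(M_star, R_out) * (M_cs/M_star) * (R_cs/R_in)**2
    return O

# Illustrative, HD 142527-like numbers (not a fit to that system)
for name, val in precession_frequencies(M_star=2.0, M_c=0.1, M_cs=3e-3,
                                        M_cb=0.25, a_c=40.0, R_cs=15.0,
                                        R_in=65.0, R_out=300.0).items():
    print(f"Omega_{name} = {val*1e6:+.3f} rad/Myr")

Tracking Ω_cs = Ω_cs,c + Ω_cs,cb against Ω_c = Ω_c,cs + Ω_c,cb as the masses and radii evolve is then all that is needed to locate the resonance crossings discussed below.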
Furthermore, we would also expect Ω_cs,c/Ω_cs,cb ≫ 1 since, to truncate the circumstellar disc, the tidal torque from the binary should dominate over that from the circumbinary disc. Hence, the resonance condition is simplified to:

Ω_cs,c ≃ Ω_c,cb

or,

(M_c/M_cb^L) ∼ (a_c/R_cs)^3/2 (a_c/R_in)^3

When a circumbinary disc accretes on to a companion, starting with M_cb ≫ M_c and evolving to M_cb ≪ M_c, one would expect the criterion in Equation <ref> to be satisfied at some point during the evolution, leading to resonant behaviour and possibly large changes in the alignment angles. Setting Equation <ref> to have parameters similar to those observed in HD 142527, we find:

M_cb^L ∼ 2M_J (M_c/0.1M_⊙) (R_cs/10AU)^3/2 × (a_c/20AU)^-9/2 (R_in/100AU)^3

Local disc masses of order Jupiter masses are observed in many gapped systems <cit.>. In the case of HD 142527, <cit.> measure a local disc mass of approximately a Jupiter mass, assuming a gas-to-dust mass ratio of 100. Therefore, we anticipate that a resonance between the precession of the companion and the circumstellar disc can plausibly occur in real systems.

To determine whether the resonance (Equation <ref>) can lead to a significant production of misalignment, we can use a “geometric picture” following Lai (2014). In the limit of |L_cb| ≫ |L_c| ≫ |L_cs|, the unit angular momentum vectors evolve according to

dL̂_cs/dt ≈ Ω_cs,c (L̂_c × L̂_cs)
dL̂_c/dt ≈ Ω_c,cb (L̂_cb × L̂_c)

We can transform into a frame rotating with the precession of the companion at a rate Ω_c,cb, such that L̂_c remains fixed in time. In this rotating frame the evolution of L̂_cs is governed by:

(dL̂_cs/dt)_rot ≈ Ω_cs,c (L̂_c × L̂_cs) - Ω_c,cb (L̂_cb × L̂_cs)

This tells us that L̂_cs precesses around a vector L̂_r with precession rate Ω_r, given by:

Ω_r L̂_r = Ω_cs,c L̂_c - Ω_c,cb L̂_cb

As the system evolves from being far from resonance with Ω_cs,c ≫ Ω_c,cb to Ω_cs,c ≪ Ω_c,cb, the circumstellar disc goes from precessing about L̂_c to precessing around L̂_cb. However, close to resonance, when Ω_cs,c ∼ Ω_c,cb, the vector L̂_r about which L̂_cs is precessing deviates from L̂_c by a large angle. This is obvious if one considers the case where L̂_c and L̂_cb are misaligned by some small angle θ; then one finds:

arccos(L̂_r·L̂_c) ≈ π/2 + θ/2

Thus, in resonance L̂_cs is precessing around an axis that is almost orthogonal to L̂_c. For a large misalignment to be generated, a necessary (but not sufficient) condition is that Ω_r^-1 is comparable to, or shorter than, the timescale on which the system is evolving. If resonance crossing is too fast, L̂_cs is unable to precess far enough around L̂_r to generate a large misalignment. Finally, it is clear from Equation <ref> that the misalignment between the companion and the circumbinary disc is constant.

§.§ A massive companion (L_c ≫ L_cs, L_cb)

One can imagine that late in the evolution of a protostellar system, when the discs contain very little mass, the companion dominates the total angular momentum budget. Therefore, a “resonance” can occur between the precessing circumstellar and circumbinary discs when Ω_cs = Ω_cb, or:

Ω_cs,c + Ω_cs,cb = Ω_cb,c + Ω_cb,cs

In this case, the massive companion predominantly drives the precession of both the inner and outer discs, and Equation <ref> reduces to:

Ω_cs,c ≃ Ω_cb,c

or,

a_c ∼ 70AU (R_cs/10AU)^3/10 (R_in/100AU)^2/5 (R_out/300AU)^3/10

This criterion is independent of mass and could be satisfied in real systems.
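This estimate is easy to check numerically: setting Ω_cs,c = Ω_cb,c (with unit inclination cosines) and solving for a_c recovers a separation of ∼ 60 AU, matching the quoted ∼ 70 AU up to the order-unity factors dropped in the reduction. A minimal sketch, assuming the frequency expressions of Section 2.1 and scipy's Brent root-finder:

import numpy as np
from scipy.optimize import brentq

M_star, M_c = 2.0, 0.1                    # M_sun; M_c cancels in the ratio
R_cs, R_in, R_out = 10.0, 100.0, 300.0    # AU, as in the scaling above

def omega_K(R):
    return np.sqrt(M_star / R**3)         # arbitrary normalization

def mismatch(a_c):
    O_cs_c = 3/8 * omega_K(R_cs)  * (M_c / M_star) * (R_cs / a_c)**3
    O_cb_c = 3/4 * omega_K(R_out) * (M_c / M_star) * (a_c / R_in)**2
    return O_cs_c - O_cb_c

print(f"resonant separation: {brentq(mismatch, 1.0, 200.0):.0f} AU")  # ~61 AU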
In this scenario, the disc angular momentum unit vectors evolve according to:

dL̂_cs/dt ≈ Ω_cs,c (L̂_c × L̂_cs)
dL̂_cb/dt ≈ Ω_cb,c (L̂_c × L̂_cb)

Obviously, both discs are precessing around the same axis (L_c) independently. As such, in “resonance” a large misalignment cannot be generated from an initially small misalignment.

§.§ A massive circumstellar disc (L_cs ≫ L_c, L_cb)

While a circumstellar disc which contains most of the angular momentum seems strange, it is not so unusual from the perspective of the formation of the companion by fragmentation. Fragmentation is expected to occur in the outer regions of massive circumstellar discs <cit.>. At the point of formation, the angular momentum could still be dominated by the circumstellar disc. The companion is then expected to migrate through the disc to smaller separations <cit.>, wherein the more usual hierarchy of a circumbinary disc which contains most of the angular momentum is restored (Section 3.1). Here we assume that the circumstellar disc precesses so slowly that we can consider it to be fixed; thus a resonance can occur when the precession frequencies of the companion and the circumbinary disc become equal, Ω_c = Ω_cb, or:

Ω_c,cs + Ω_c,cb = Ω_cb,c + Ω_cb,cs

In this case of a massive inner disc, Ω_c,cs/Ω_c,cb ≫ 1. For the precession of the circumbinary disc, it is not clear whether the circumstellar disc or the companion will dominate. Geometric arguments indicate that a strong resonance will occur if Ω_cb,c/Ω_cb,cs ≫ 1. In this case Equation <ref> simplifies to:

Ω_c,cs ≃ Ω_cb,c

or,

M_cs ∼ 3M_c √(a_c^3/R_out^3) [a_c^4/(R_cs^2 R_in^2)]

Now, requiring that the companion forms by fragmentation out of the disc that will make up the circumbinary disc means that M_c ≲ M_cb. For fragmentation to take place one normally requires the outer regions to have a mass ∼ 0.1 M_*. Combining all these criteria, along with M_cs ≫ M_cb and M_cs ≫ M_c, implies a circumstellar disc more massive than the central star. We conclude that the precession resonance between the companion and circumbinary disc is unlikely to occur in realistic binary-disc systems. This inference is confirmed by our numerical investigation, where we can only obtain resonant behaviour for unphysically large circumstellar disc masses.

§ AN EVOLVING SYSTEM

We consider two basic scenarios. In the first (Section 4.1), the companion remains at a fixed separation and we allow the discs to evolve and accrete mass onto the companion. In the second scenario (Section 4.2), we allow the companion to migrate through the disc while the total mass of the two discs remains fixed. It is easiest to work in an inertial Cartesian frame, where we initialise the angular momentum unit vectors of the components to:

L̂_cs = [0, 0, 1]
L̂_c = [0, -sin(θ_0), cos(θ_0)]
L̂_cb = [-sin(θ_0), 0, cos(θ_0)]

where θ_0 is a small angle which we set to 5 degrees. We have checked that our results are qualitatively independent of the initial choice of orientation of the three vectors and of the choice of θ_0, provided it is small but non-zero.

§.§ Evolving Discs

In the “evolving disc” scenario we assume the companion remains on an orbit with a fixed separation, but the discs deplete due to accretion. Simple, self-similar viscous accretion theory <cit.> suggests that for Σ ∝ R^-1 the disc mass (M_d) declines as:

M_d = M_d^0/(1 + t/t_ν)^1/2

where t_ν is the “viscous timescale” that sets the time-scale for global disc evolution. Note that this form differs from the temporal evolution of the disc mass used by previous authors <cit.>. We take Equation <ref> to specify how the circumbinary disc evolves.
In the case where the companion accretes, we assume it accretes some fraction (f) of the material that is depleted from the circumbinary disc. Therefore, the circumbinary disc evolves as:

M_cb = M_cb(t=0)/(1 + t/t_ν)^1/2

and the companion evolves as:

M_c = M_c(t=0) + f M_cb(t=0) [1 - 1/(1 + t/t_ν)^1/2]

The numerical value of f is uncertain; for example, it is not clear whether the circumstellar disc will receive most of the material (f < 0.5) or whether the companion will accrete most of the material (f > 0.5). Thus, we will keep f as a free parameter. The remaining mass that does not accrete onto the companion resupplies the inner disc. Since the viscous time of the inner disc is considerably shorter than that of the circumbinary disc, the circumstellar disc can be considered to be in a steady state with a constant mass supply, where the accretion rate through the circumstellar disc is Ṁ_cs = (1-f) Ṁ_cb. As a steady disc has a surface density proportional to Ṁ_cs, we can write:

M_cs = M_cs(t=0)/(1 + t/t_ν)^3/2

As the companion grows in mass, the truncation radii of the circumstellar and circumbinary discs due to the companion's torque evolve. For high mass-ratio binaries (with M_c/M_* not too small), the disc is typically truncated several Hill radii away from the companion <cit.>. For concreteness, we adopt the following prescription for the disc truncation radii:

R_cs = a_c [1 - 3(M_c/3M_*)^1/3]
R_in = a_c [1 + 3(M_c/3M_*)^1/3]

§.§.§ Results of numerical integrations

As our canonical example we adopt parameters typical of gapped discs: M_* = 2M_⊙, a_c = 40 AU, M_c(t=0) = 10^-3 M_⊙, M_cs(t=0) = 3×10^-3 M_⊙, M_cb(t=0) = 0.25 M_⊙ and R_out = 300 AU, with evolution parameters f = 0.5 and t_ν = 2×10^5 years. In this example the companion will reach a mass of ∼ 0.1 M_⊙ after 5 Myr of accretion. In Figure <ref>, we show the evolution of the mutual inclinations between the discs and companion, as well as their precession frequencies. In this scenario, the circumbinary disc contains enough angular momentum that its precession timescale is long compared to the evolution of the system. In Figure <ref>, we show the evolution of various system parameters.

We see in Figure <ref> that the precession frequencies of the circumstellar disc and companion become comparable after about ∼ 0.25 Myr of evolution. This causes a resonant response in which the circumstellar and circumbinary discs strongly misalign, with a misalignment of ∼ 60-70 degrees found. After the discs have misaligned, the circumstellar disc precesses on a time-scale of ∼ 0.5 Myr with a nutation amplitude of about 10 degrees, driven by the torque from the circumbinary disc. As can be seen in Figure <ref>, the resonance occurs in the case where the outer disc dominates the angular momentum and mass budget of the system, and is representative of the case discussed in Section <ref>. We can investigate how the evolution parameters affect the results in Figure <ref>, where we vary the viscous time and f. The evolution curves show that qualitatively similar evolution is obtained when the disc evolution parameters are varied: rapid evolution towards resonance crossing (either through a shorter viscous time or a larger f) causes a large, rapid change in the misalignment angle, whereas a slower approach towards resonance crossing results in a smoother change over several precession periods of the circumstellar disc.
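The integrations in this section reduce to evolving the three unit vectors under the precession frequencies of Section 2.1 while the masses and truncation radii follow the prescriptions above. A stripped-down sketch of such a calculation is given below, using the canonical parameters quoted above. Our reference list includes Hindmarsh's ODEPACK, on which scipy's LSODA method is built, so an LSODA-type integrator is a natural choice here; this is an illustrative reimplementation under those assumptions, not the exact code used to produce the figures.

import numpy as np
from scipy.integrate import solve_ivp

# Canonical parameters of the accreting-companion run (Section 4.1.1)
M_star, a_c, R_out = 2.0, 40.0, 300.0        # M_sun, AU, AU
Mc0, Mcs0, Mcb0 = 1e-3, 3e-3, 0.25           # initial masses (M_sun)
f, t_nu = 0.5, 2e5                           # accretion efficiency; viscous time (yr)
theta0 = np.radians(5.0)

def omega_K(R):
    return 2.0 * np.pi * np.sqrt(M_star / R**3)   # rad/yr, R in AU

def rhs(t, y):
    Lcs, Lc, Lcb = y[:3], y[3:6], y[6:]
    s = np.sqrt(1.0 + t / t_nu)
    Mcb = Mcb0 / s                               # circumbinary disc drains...
    Mc = Mc0 + f * Mcb0 * (1.0 - 1.0 / s)        # ...partly onto the companion
    Mcs = Mcs0 / s**3                            # steady inner disc
    h = 3.0 * (Mc / (3.0 * M_star))**(1.0/3.0)
    Rcs, Rin = a_c * (1.0 - h), a_c * (1.0 + h)  # truncation radii
    # the six precession frequencies, including the (Lhat_i . Lhat_j) factors
    O_cs_c  = -3/8  * omega_K(Rcs)  * (Mc /M_star) * (Rcs/a_c)**3          * (Lcs @ Lc)
    O_cs_cb = -3/16 * omega_K(Rcs)  * (Mcb/M_star) * Rcs**3/(R_out*Rin**2) * (Lcs @ Lcb)
    O_c_cb  = -3/8  * omega_K(a_c)  * (Mcb/M_star) * a_c**3/(R_out*Rin**2) * (Lc  @ Lcb)
    O_c_cs  = -1/4  * omega_K(a_c)  * (Mcs/M_star) * (Rcs/a_c)**2          * (Lc  @ Lcs)
    O_cb_c  = -3/4  * omega_K(R_out)* (Mc /M_star) * (a_c/Rin)**2          * (Lcb @ Lc)
    O_cb_cs = -3/16 * omega_K(R_out)* (Mcs/M_star) * (Rcs/Rin)**2          * (Lcb @ Lcs)
    return np.concatenate([
        O_cs_c  * np.cross(Lc,  Lcs) + O_cs_cb * np.cross(Lcb, Lcs),
        O_c_cs  * np.cross(Lcs, Lc ) + O_c_cb  * np.cross(Lcb, Lc ),
        O_cb_cs * np.cross(Lcs, Lcb) + O_cb_c  * np.cross(Lc,  Lcb)])

y0 = np.array([0.0, 0.0, 1.0,                            # Lhat_cs
               0.0, -np.sin(theta0), np.cos(theta0),     # Lhat_c
               -np.sin(theta0), 0.0, np.cos(theta0)])    # Lhat_cb
sol = solve_ivp(rhs, (0.0, 3e6), y0, method='LSODA', rtol=1e-9, atol=1e-12)

Lcs, Lcb = sol.y[:3, -1], sol.y[6:, -1]
mis = np.degrees(np.arccos(np.clip(Lcs @ Lcb / np.linalg.norm(Lcs)
                                   / np.linalg.norm(Lcb), -1.0, 1.0)))
print(f"disc-disc misalignment after 3 Myr: {mis:.1f} deg")
# the text reports ~60-70 deg for these parameters after resonance crossing

The resonance crossing and the subsequent large disc-disc misalignment appear directly in the time series of arccos(L̂_cs·L̂_cb).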
In all cases we find that resonance crossing can produce large misalignments between the circumstellar and circumbinary discs, while aligning the companion with the circumbinary disc. Finally, if the companion does not accrete in this scenario, a resonance between the circumstellar disc's and companion's precession never occurs and no large misalignment is generated. This is demonstrated in Figure <ref>, where we drop the accretion efficiency to f = 0.01, so that the companion only reaches a mass of ∼ 1.5 M_J after 3 Myr. A similar outcome is obtained by beginning the evolution with much less mass in the outer disc, such that the companion cannot become massive enough to permit resonance crossing. In general, we do not expect misalignments to be generated in planet-hosting protoplanetary discs; this conclusion is specifically relevant to whether or not our mechanism operates in “transition” discs.

§.§ Migrating Companion

In this “migrating companion” scenario we consider a companion of fixed mass that has formed in the outer regions of the disc and is now migrating to smaller separations. We prescribe the migration rate as:

ȧ_c/a_c = -1/t_mig

where the migration time-scale (t_mig) is a free parameter. The truncation radii of the discs then evolve according to Equations <ref> & <ref>. For simplicity we do not allow the discs to deplete due to viscous accretion; instead, the total mass in the circumstellar and circumbinary discs remains fixed, but the individual mass of each disc evolves in response to the migrating companion. For a disc with a Σ ∝ R^-1 surface density profile, the disc mass scales linearly with its outer radius. Thus:

dM_cs/dt = M_cs (Ṙ_cs/R_cs) = -M_cs/t_mig

and, by mass conservation:

dM_cb/dt = -dM_cs/dt = M_cs/t_mig

§.§.§ Results of numerical integrations

Again we motivate our choice of initial parameters by those of the observed systems: M_* = 2M_⊙, a_c(t=0) = 200 AU, M_c = 0.2 M_⊙, M_cs(t=0) = 0.1 M_⊙, M_cb(t=0) = 0.05 M_⊙ and R_out = 1.2 R_in(t=0), with a migration time of t_mig = 7.5×10^5 years. The outer edge of the circumstellar disc and the inner edge of the circumbinary disc are again set using Equations <ref> & <ref>. Figure <ref> shows the evolution of the misalignment angles and precession frequencies for the migrating companion scenario. The evolution of the system components is shown in Figure <ref>.

We see that after about 2 Myr the precession frequencies of the circumstellar disc and companion cross; by this time the companion has migrated to ∼ 10 AU. Furthermore, the circumbinary disc now contains more mass than the circumstellar disc and dominates the angular momentum budget of the system. The resonance crossing again causes a large misalignment angle between the two discs (∼ 70-80 degrees). This evolution is similar to that of the accreting companion scenario discussed in Section <ref>, where the secular resonance is dominated by Ω_cs,c ∼ Ω_c,cb. Again we find that the circumbinary disc and the companion align slightly, meaning the circumstellar disc and companion are also highly misaligned. However, unlike the accreting companion scenario, as the companion continues to migrate to smaller separations, the precession period of the circumstellar disc gets shorter as its outer edge and mass become smaller. In Figure <ref>, we show how varying the migration time between 5×10^5 years (left panel) and 1×10^6 years (right panel) affects the result. The faster migration time leads to a sharper change in the misalignment angles.
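Because the companion's mass is fixed in this scenario, the prescriptions above integrate to simple closed forms, which supply all the time dependence that the secular integration of the previous subsection needs; a short sketch (same caveats as before):

import numpy as np

t_mig = 7.5e5  # migration time (yr)

def migration_state(t, a_c0=200.0, M_cs0=0.1, M_cb0=0.05):
    # da_c/a_c = -dt/t_mig  =>  a_c(t) = a_c0 exp(-t/t_mig)
    a_c = a_c0 * np.exp(-t / t_mig)
    # dM_cs/dt = -M_cs/t_mig (R_cs tracks a_c at fixed M_c), and the mass
    # lost by the circumstellar disc is gained by the circumbinary disc
    M_cs = M_cs0 * np.exp(-t / t_mig)
    M_cb = M_cb0 + (M_cs0 - M_cs)
    return a_c, M_cs, M_cb

print(migration_state(2e6))  # a_c ~ 14 AU; most disc mass is now circumbinary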
Similar to the canonical example, after several Myr of evolution the two discs are misaligned by >70 degrees.

§.§ Massive companions and low-mass discs

Towards the end of the disc's evolution, most of the angular momentum of the system will be contained in the binary companion. In this case the companion's orbit will remain unchanged, but it can drive nodal precession of the circumstellar and circumbinary discs (Section 3.2). In order to briefly explore this case, we consider the migrating companion scenario as described in Section <ref>, but reduce the mass of both discs to 5×10^-3 M_⊙, while keeping the companion's mass at 0.2 M_⊙. While it is of course questionable whether the companion could still migrate rapidly, our example is illustrative. Figure <ref> shows the evolution of such a system. We see that in this case the companion always dominates the angular momentum budget of the system. The precession frequencies of the circumstellar and circumbinary discs (both of which are completely dominated by driving due to the companion) become equal after about 1.5 Myr of evolution. This causes a modest increase in the misalignment angle (to about 20 degrees) between the circumstellar and circumbinary discs. In general, a “resonance” crossing between the circumstellar and circumbinary discs' precession frequencies does not lead to large misalignments from initially aligned configurations.

§ DISCUSSION

We have investigated the possibility that secular precession resonances can generate large misalignments between circumstellar and circumbinary discs in young binary systems that have mass ratios of order 0.1. For realistic systems we find that only resonant interactions between the precession of the circumstellar disc and the companion, or between the precessing circumstellar and circumbinary discs, can occur. The resonance between the circumstellar disc and the companion is the most robust and the only one that can generate large misalignments (>60 degrees) between the two discs from small (few degree) primordial misalignments.

In a frame precessing with the companion, the circumstellar disc precesses around an axis almost orthogonal to the disc's angular momentum axis when in resonance, with a precession rate Ω_r ≈ √2 θ_0 Ω_cs,c, where θ_0 is the small initial misalignment. In the examples depicted in Figures 1 & 5, Ω_cs,c ∼ 50 Myr^-1 in resonance, and the rate of change of our system is ∼ 1-5 Myr^-1 for both the accreting and migrating companion cases. Thus, the evolution timescale of the system is comparable to the precession time Ω_r^-1 (for θ_0 ∼ 0.1). This means that the circumstellar disc can precess a significant fraction of the way around L̂_r before the system is no longer in resonance. Now we can naturally see why we often get large misalignments if resonance crossing occurs. Young systems will always evolve on ∼ Myr timescales in the radial range of 10-100 AU. For a companion-to-star mass ratio (q) in the range 0.01-0.1, Equations <ref>, <ref> & <ref> tell us Ω_r ∼ 0.1 q θ_0 Ω_K(a_c), which, for the orbital periods of a few hundred years found at tens of AU, gives precession timescales about L̂_r of ∼ Myr. Therefore, it is not surprising that in the case of high mass-ratio binaries with separations of order tens of AU, resonance crossing leads to large misalignments. Finally, we note that while it is easy to construct initial and evolutionary parameters that cause our systems to undergo resonance, it is also easy to find examples that do not (see Figure <ref>).
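As a quick sanity check on the timescale estimate above, using the in-resonance value of Ω_cs,c quoted from the figures and the θ_0 = 5 degrees adopted in Section 4:

import numpy as np

theta0 = np.radians(5.0)     # initial misalignment from Section 4
Om_cs_c = 50.0               # Myr^-1, in-resonance value read off Figs. 1 & 5
Om_r = np.sqrt(2.0) * theta0 * Om_cs_c
print(f"Omega_r^-1 ~ {1.0/Om_r:.2f} Myr")  # ~0.16 Myr, comparable to the
                                           # ~0.2-1 Myr system evolution time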
Therefore, depending on their parameters, real systems may or may not experience resonant misalignment excitations during their lifetimes.

§.§ Assumptions

In this work, we have made several simplifications in order to keep the problem simple and to reveal the basic dynamics. The two most important assumptions are that we ignore explicit angular momentum conservation in our evolving system and that we treat both discs as rigid plates.

In the construction of our evolving system we have followed previous work and made no effort to ensure that our disc-companion system explicitly conserves angular momentum. In the case of the accreting disc system, we do not allow the circumbinary disc to expand as it accretes, or to lose mass (and angular momentum) in a photoevaporative wind. In the case of viscous accretion with R_out ≫ R_in, we would actually expect R_out to expand such that the circumbinary disc maintains approximately constant angular momentum in the absence of mass-loss. Since the resonance of interest between the circumstellar disc and the companion occurs when most of the angular momentum is in the circumbinary disc, this assumption does not have any important consequences for our calculations.

In our models, we have implicitly assumed that as disc material moves from the circumbinary disc to the circumstellar disc (in the case of the accreting system) or from the circumstellar disc to the circumbinary disc (in the case of the migrating companion) it joins the new disc's orbital plane. In reality, gas parcels may join the new disc in their original orbital plane. The interaction of material flowing between the two discs is mediated by the companion, and it is still uncertain how much angular momentum is absorbed by the companion (and in what orbital plane) during this interaction. However, since we are interested in starting from systems that are initially close to alignment, which then approach a resonance, this assumption should not significantly affect the resonant misalignment excitation. In the case of the accreting system, after the large misalignment has occurred, one would expect accretion from the circumbinary disc onto the circumstellar disc to slowly re-align the two discs on timescales of order a Myr (this is similar to the alignment of a binary's orbit when it accretes from a misaligned circumbinary disc, as discussed by <cit.>). Therefore, we imagine large misalignments should readily be observable after the resonance has occurred, even if accretion does slowly realign the discs afterwards.

In all our models we treat the discs as rigidly precessing plates. In general, the discs are warped and twisted <cit.>. Warp propagation in accretion discs occupies two distinctly different regimes <cit.>: when the viscous α parameter is larger than the aspect ratio (H/R) of the disc, warps propagate diffusively; however, if α is smaller than the aspect ratio, the warp is wave-like and propagates with a speed of approximately half the sound speed. With aspect ratios of order 0.1 in protostellar discs at tens of AU <cit.> and observationally inferred viscous “alphas” at least an order of magnitude smaller <cit.>, protoplanetary discs are in the wave-like warp regime <cit.>, where bending waves propagate at approximately half the sound speed <cit.>. On timescales longer than the warp propagation time, the disc evolves towards, and precesses as, a rigid body <cit.>.
Since our systems evolve on long, secular evolutionary timescales, much longer than the warp propagation time over hundreds of AU, the rigid-plate approximation is likely to be valid. Finally, even though the rigid-plate approximation is good, the discs are still likely to maintain a small warp; such a warp can viscously dissipate the mutual inclination between the discs' and companion's orbital planes. This viscous damping rate is highly uncertain (e.g. <cit.>; see the discussion in <cit.>); however, <cit.> suggest that large misalignments (>20 degrees) can be maintained for timescales similar to a disc's lifetime.

§.§ Observational implications

One of the motivations for our study was the observed misalignment between the circumstellar and circumbinary discs in HD 142527. HD 142527 is a well-known gapped disc with a dust cavity extending from a few tens of AU to ∼ 100 AU <cit.>. Scattered light imaging of the circumbinary disc at NIR wavelengths <cit.> was interpreted by <cit.> as showing a tilt of 70±5 degrees between the inner (circumstellar) and outer (circumbinary) discs. <cit.> discovered a low-mass companion in the cavity of HD 142527, which is now known to be an M dwarf with a mass of order 0.1-0.2 M_⊙ <cit.>; hence HD 142527 is a high mass-ratio binary system with a mass ratio q = 0.05-0.1. Therefore, the secular resonance model presented in this work is a likely explanation for the observed misalignment in HD 142527. The precession resonance could have been triggered by the companion's migration to smaller separations, by accretion of mass onto the companion from the disc, or by some combination of both.

There are several other well-known gapped discs which show evidence for large misalignments: HD 100453 <cit.> exhibits a similar misalignment between the inner and outer discs of ∼ 70 degrees, and HD 135344B <cit.> shows a smaller misalignment of ∼ 22 degrees. No companion has been detected in either system to date. The misalignment in HD 100453 could have been generated by resonance crossing, and such a scenario would imply a low-mass ∼ 0.01-0.1 M_⊙ companion residing in the cavity that is aligned with the outer disc. The misalignment of HD 135344B could have been generated by this mechanism, with the system now in the process of realigning due to accretion or viscous dissipation. We can probably rule out a resonance between the inner and outer discs in the case of HD 135344B, as it would require the companion to dominate the angular momentum budget of the system. HD 135344B has a particularly massive and extended outer disc <cit.>, which would require the companion to be of solar mass or above to induce a large misalignment.

A precession resonance between the circumstellar disc and the companion, starting from a nearly aligned configuration, will keep the companion close to alignment with the circumbinary disc. Small-arc analysis of approximately 2 years of observations of the companion in HD 142527 has provided weak constraints on the orbital elements of the companion <cit.>. These constraints show the companion is on an eccentric orbit, but the orbital inclination is more uncertain. Lacour et al. (2016) suggest (see their Figure 8) that, in order for the companion's apocenter to correspond to the inner edge of the circumbinary (outer) disc, it is likely that the companion is aligned with the inner disc. However, we caution that the apocenter distance is not where a companion would truncate the disc <cit.>.
Companions with q ∼ 0.1 are in between the regime at low masses, where they truncate the disc several Hill radii away <cit.>, and that of comparable mass ratios, where they truncate the disc at a distance of a few times the companion's separation <cit.>. Furthermore, the companion will truncate the gas disc, whereas the disc's inner edge is measured from dust emission. It is well known that the gas extends further in than the dust in HD 142527 <cit.>. Therefore, it is plausible that the companion is aligned with the outer disc and still plays a role in carving the circumbinary disc. For example, applying the results of <cit.> indicates that an aligned binary with a mass ratio q ∼ 0.1 and an eccentricity of ∼ 0.5 will truncate the circumbinary disc at between 3-4 times the binary separation. HD 142527's binary separation is 20^+17_-10 AU and its eccentricity is 0.5±0.2; thus, the companion can orbit in the same plane as the circumbinary disc and still truncate the disc's dust component at ∼ 100 AU. Further monitoring of the companion should be able to determine its orbital plane and test the resonant scenario presented here.

§ SUMMARY

We have shown that secular precession resonances can operate in protostellar gapped/binary discs and can result in a misalignment between the circumstellar (inner) and circumbinary (outer) discs. In some cases, the generated misalignments can be large (≳ 60 degrees), resulting from crossing a precession resonance between the circumstellar disc and a low-mass companion (or massive planet) as the system evolves. Our main results are summarised below:

* We identify two secular precession resonances in gapped/binary disc systems. First, a resonance between the precession of the circumstellar and circumbinary discs. Second, a resonance between the precession of the circumstellar disc and a low-mass companion residing in the gap between the circumstellar and circumbinary discs.

* The resonance between the circumstellar and circumbinary discs cannot lead to large misalignments, as both discs precess independently, even in “resonance”.

* The resonance between the circumstellar disc and the companion can lead to significant misalignments, even in systems that are close to being co-planar initially, provided that the resonance crossing occurs on the right timescale. This typically requires that the companion has a mass ∼ 0.01-0.1 M_⊙ and a separation ∼ 10-100 AU for an approximately solar-mass primary, and that the circumbinary disc dominates the angular momentum budget of the system. Such requirements are not satisfied by giant (of order Jupiter mass) planets in “transition” discs.

* In numerical calculations of realistic systems we find misalignments of ∼ 70 degrees, consistent with those observed in HD 142527 and HD 100453. Our results indicate that a secular resonance between a companion and the circumstellar (inner) disc occurred previously in the histories of HD 142527 and HD 100453 and misaligned their two discs. By analogy with the companion present in HD 142527, we suggest that a companion exists in HD 100453 which is either a very massive planet, a brown dwarf, or a low-mass star.

* For gapped disc systems that are initially close to being co-planar, the secular resonance mechanism predicts that the companion remains close to co-planar with the circumbinary (outer) disc, but misaligned with respect to the circumstellar (inner) disc. This prediction is testable in HD 142527 with a longer baseline of observations of the companion's orbit.
§ ACKNOWLEDGEMENTS JEO acknowledges support by NASA through Hubble Fellowship grant HST-HF2-51346.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. DL has been supported in part by NASA grants NNX14AG94G and NNX14AP31G, and a Simons Fellowship from the Simons Foundation. [Adams et al.(1989)]Adams1989 Adams, F. C., Ruden, S. P., & Shu, F. H. 1989, , 347, 959 [Albrecht et al.(2012)]Albrecht2012 Albrecht, S., Winn, J. N., Johnson, J. A., et al. 2012, , 757, 18 [Andersen et al.(1989)]Andersen1989 Andersen, J., Lindgren, H., Hazen, M. L., & Mayor, M. 1989, , 219, 142 [Andrews et al.(2009)]Andrews2009 Andrews, S. M., Wilner, D. J., Hughes, A. M., Qi, C., & Dullemond, C. P. 2009, , 700, 1502 [Andrews et al.(2010)]Andrews2010 Andrews, S. M., Wilner, D. J., Hughes, A. M., Qi, C., & Dullemond, C. P. 2010, , 723, 1241 [Andrews et al.(2011)]Andrews2011 Andrews, S. M., Wilner, D. J., Espaillat, C., et al. 2011, , 732, 42 [Artymowicz & Lubow(1994)]Artymowicz1994 Artymowicz, P., & Lubow, S. H. 1994, , 421, 651 [Avenhaus et al.(2014)]Avenhaus2014 Avenhaus, H., Quanz, S. P., Schmid, H. M., et al. 2014, , 781, 87 [Bate et al.(2010)]Bate2010 Bate, M. R., Lodato, G., & Pringle, J. E. 2010, , 401, 1505 [Batygin & Adams(2013)]Batygin2013 Batygin, K., & Adams, F. C. 2013, , 778, 169 [Benisty et al.(2016)]Benisty2016 Benisty, M., Stolker, T., Pohl, A., et al. 2016, , 597, A42 [Biller et al.(2012)]Biller2012 Biller, B., Lacour, S., Juhász, A., et al. 2012, , 753, L38 [Brinch et al.(2016)]Brinch2016 Brinch, C., Jørgensen, J. K., Hogerheijde, M. R., Nelson, R. P., & Gressel, O. 2016, , 830, L16 [Canovas et al.(2013)]Canovas2013 Canovas, H., Ménard, F., Hales, A., et al. 2013, , 556, A123 [Casassus et al.(2012)]Casassus2012 Casassus, S., Perez M., S., Jordán, A., et al. 2012, , 754, L31 [Casassus et al.(2013)]Casassus2013 Casassus, S., van der Plas, G., M, S. P., et al. 2013, , 493, 191 [Casassus et al.(2015b)]Casassus2015b Casassus, S., Wright, C. M., Marino, S., et al. 2015, , 812, 126 [Casassus et al.(2015a)]Casassus2015a Casassus, S., Marino, S., Pérez, S., et al. 2015, , 811, 9 [Casassus(2016)]Casassus2016 Casassus, S. 2016, , 33, e013 [Chiang & Goldreich(1997)]Chiang1997 Chiang, E. I., & Goldreich, P. 1997, , 490, 368 [Clarke(2009)]Clarke2009 Clarke, C. J. 2009, , 396, 1066 [Close et al.(2014)]Close2014 Close, L. M., Follette, K. B., Males, J. R., et al. 2014, , 781, L30 [Crida et al.(2006)]Crida2006 Crida, A., Morbidelli, A., & Masset, F. 2006, , 181, 587 [Foucart & Lai(2013)]Foucart2013 Foucart, F., & Lai, D. 2013, , 764, 106 [Foucart & Lai(2014)]Foucart2014 Foucart, F., & Lai, D. 2014, , 445, 1731 [Fukagawa et al.(2006)]Fukagawa2006 Fukagawa, M., Tamura, M., Itoh, Y., et al. 2006, , 636, L153 [Hartmann et al.(1998)]Hartmann1998 Hartmann, L., Calvet, N., Gullbring, E., & D'Alessio, P. 1998, , 495, 385 [Hébrard et al.(2008)]Hebrard2008 Hébrard, G., Bouchy, F., Pont, F., et al. 2008, , 488, 763 [Hindmarsh(1983)]Hindmarsh1983 Hindmarsh, A. C. in Scientific Computing, R. S. Stepleman et al. (eds.), North-Holland, Amsterdam, 1983 (vol. 1 of IMACS Transactions on Scientific Computation), pp. 55-64. [Hioki et al.(2011)]Hioki2011 Hioki, T., Itoh, Y., Oasa, Y., Fukagawa, M., & Hayashi, M. 2011, , 63, 543 [Kennedy et al.(2012)]Kennedy2012 Kennedy, G. M., Wyatt, M. C., Sibthorpe, B., et al. 2012, , 421, 2264 [Kenyon & Hartmann(1987)]Kenyon1987 Kenyon, S. J., & Hartmann, L.
1987, , 323, 714 [Kratter & Matzner(2006)]Kratter2006 Kratter, K. M., & Matzner, C. D. 2006, , 373, 1563[Kraus et al.(2016)]Kraus2017 Kraus, S., Kluska, J., Kreplin, A., et al. 2016, arXiv:1612.07804[Lacour et al.(2016)]Lacour2016 Lacour, S., Biller, B., Cheetham, A., et al. 2016, , 590, A90 [Lai(2014)]Lai2014 Lai, D. 2014, , 440, 3532 [Li et al.(2015)]Li2015 Li, Y., Kouwenhoven, M. B. N., Stamatellos, D., & Goodwin, S. P. 2015, , 805, 116[Lubow & Ogilvie(2000)]Lubow2000 Lubow, S. H., & Ogilvie, G. I. 2000, , 538, 326[Lynden-Bell & Pringle(1974)]LyndenBell1974 Lynden-Bell, D., & Pringle, J. E. 1974, , 168, 603 [Marino et al.(2015)]Marino2015 Marino, S., Perez, S., & Casassus, S. 2015, , 798, L44[Mathieu et al.(1997)]Mathieu1997 Mathieu, R. D., Stassun, K., Basri, G., et al. 1997, , 113, 1841[Matsakos & Königl(2017)]Matsakos2016 Matsakos, T., & Königl, A. 2017, , 153, 60 [Matzner & Levin(2005)]Matzner2005 Matzner, C. D., & Levin, Y. 2005, , 628, 817[Meru & Bate(2011)]Meru2011 Meru, F., & Bate, M. R. 2011, , 410, 559[Miranda & Lai(2015)]Miranda2015 Miranda, R., & Lai, D. 2015, , 452, 2396 [Moutou et al.(2011)]Moutou2011 Moutou, C., Díaz, R. F., Udry, S., et al. 2011, , 533, A113[Muñoz & Lai(2016)]Munoz2016 Muñoz, D. J., & Lai, D. 2016, , 827, 43 [Narita et al.(2009)]Narita2009 Narita, N., Sato, B., Hirano, T., & Tamura, M. 2009, , 61, L35[Rice et al.(2005)]Rice2005 Rice, W. K. M., Lodato, G., & Armitage, P. J. 2005, , 364, L56 [Rodigas et al.(2014)]Rodigas2014 Rodigas, T. J., Follette, K. B., Weinberger, A., Close, L., & Hines, D. C. 2014, , 791, L37[Shi et al.(2012)]Shi2012 Shi, J.-M., Krolik, J. H., Lubow, S. H., & Hawley, J. F. 2012, , 749, 118[Stamatellos(2015)]Stamatellos2015 Stamatellos, D. 2015, , 810, L11 [Stolker et al.(2016)]Stolker2016 Stolker, T., Dominik, C., Avenhaus, H., et al. 2016, , 595, A113[Spalding & Batygin(2014)]Spalding2014 Spalding, C., & Batygin, K. 2014, , 790, 42 [Takakuwa et al.(2017)]Takakuwa2017 Takakuwa, S., Saigo, K., Matsumoto, T., et al. 2017, arXiv:1702.05562 [Tremaine & Davis(2014)]Tremaine2014 Tremaine, S., & Davis, S. W. 2014, , 441, 1408 [Triaud et al.(2010)]Triaud2010 Triaud, A. H. M. J., Collier Cameron, A., Queloz, D., et al. 2010, , 524, A25[Ogilvie & Latter(2013)]Ogilvie2013 Ogilvie, G. I., & Latter, H. N. 2013, , 433, 2420[D'Orazio et al.(2013)]DOrazio2013 D'Orazio, D. J., Haiman, Z., & MacFadyen, A. 2013, , 436, 2997[Owen(2016)]Owen2016 Owen, J. E. 2016, , 33, e005 [Papaloizou & Pringle(1983)]Papaloizou1983 Papaloizou, J. C. B., & Pringle, J. E. 1983, , 202, 1181 [Papaloizou & Lin(1995)]Papaloizou1995 Papaloizou, J. C. B., & Lin, D. N. C. 1995, , 438, 841[Pinilla et al.(2012)]Pinilla2012 Pinilla, P., Benisty, M., & Birnstiel, T. 2012, , 545, A81 [Ward(1973)]Ward1973 Ward, W. R. 1973, Science, 181, 260[Winn et al.(2004)]Winn2004 Winn, J. N., Holman, M. J., Johnson, J. A., Stanek, K. Z., & Garnavich, P. M. 2004, , 603, L45[Winn et al.(2009)]Winn2009 Winn, J. N., Johnson, J. A., Fabrycky, D., et al. 2009, , 700, 302
http://arxiv.org/abs/1703.09250v1
{ "authors": [ "James E. Owen", "Dong Lai" ], "categories": [ "astro-ph.SR", "astro-ph.EP" ], "primary_category": "astro-ph.SR", "published": "20170327182116", "title": "Generating large misalignments in gapped and binary discs" }
Keldysh Institute of Applied Mathematics RAS, Moscow, 125047, Russia; National Research Nuclear University MEPhI (Moscow Engineering Physics Institute), Moscow, 115409, Russia; Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA; Institute of Physics ASCR, v.v.i. (FZU), ELI-Beamlines Project, 182 21 Prague, Czech Republic; National Institutes for Quantum and Radiological Science and Technology (QST), Kansai Photon Science Institute, 8-1-7 Umemidai, Kizugawa, Kyoto 619-0215, Japan Plasma properties inside a hydrogen-filled capillary discharge waveguide were modeled with dissipative magnetohydrodynamic simulations, enabling the analysis of capillaries of circular and square cross-sections and implying that square capillaries can be used to guide circularly-symmetric laser beams. When the quasistationary stage of the discharge is reached, the plasma density and temperature in the vicinity of the capillary axis have almost the same profiles for both the circular and square capillaries. The effect of the cross-section on the electron beam focusing properties was studied using the simulation-derived magnetic field map. Particle tracking simulations showed only slight effects on the electron beam symmetry in the horizontal and diagonal directions for the square capillary. PACS numbers: 52.65.-y, 52.65.Kj, 52.58.Lq, 52.38.Kd Plasma Equilibrium inside Various Cross-Section Capillary Discharges S. V. Bulanov § INTRODUCTION For several decades capillary discharges have been under intensive investigation due to various promising applications, such as waveguides for laser electron accelerators and X-ray lasers <cit.>, and more recently for the focusing of electron beams <cit.>. The majority of the experiments use circular cross-section capillaries, which reduce the dimensionality of the problem under consideration, simplifying theoretical and computer simulation studies and allowing the use of 1D MHD codes <cit.>. On the other hand, square cross-section capillaries, which have attracted substantially less attention, have several advantages for transverse plasma diagnostics <cit.>. For optical waveguides, the transverse electron density profile determines the guiding properties. A useful parameter to describe the guiding properties is the matched spot size of the channel, which in the low power limit is the beam size that will propagate without change in transverse dimension.
For a Gaussian beam and a parabolic channel, the matched spot size is given by <cit.>: W_m = (0.5 π r_e ∂^2 n_e/∂r^2)^-1/4, where n_e is the electron density, and r_e = e^2/m_e c^2 ≈ 2.817·10^-13 cm is the classical electron radius. The guiding density profile inside a hydrogen-filled capillary discharge waveguide was first simulated using 1D MHD simulations <cit.>. Subsequently, 2D simulations for capillaries of square cross-section <cit.> showed that for a capillary with transverse size 0.465 mm the matched spot size of the channel was 0.065 mm in the direction perpendicular to the capillary wall, and 0.066 mm along the diagonal of the square capillary. This result suggested that for this size of capillary, a square cross-section is suitable for guiding circularly-symmetric laser pulses. Furthermore, it was found that for circular capillaries with diameter equal to the square capillary width, there was negligible change to the matched spot size. For capillaries of smaller transverse size, one would expect the cross-section to have a larger effect on the guiding properties. In the present paper, square capillaries of width 500 μm and 250 μm are investigated. In addition, the effect of the cross-section shape on the electron beam focusing properties is analyzed via the MHD-derived magnetic field profile, which determines the focusing strength and quality. One of the aims of our paper is to compare the plasma density and temperature distributions formed at the quasistationary stage of the discharge inside hydrogen-filled capillaries with circular and square cross-sections under almost the same conditions characterizing the initial configurations and the external electric circuit. § SIMULATION CONFIGURATION AND PARAMETERS We use the MHD code MARPLE <cit.> for simulations of the capillary discharges with various cross-sections. The physical model implemented in the code is similar to the model formulated in <cit.>. The plasma is described within the framework of one-fluid, two-temperature (ion and electron) magnetohydrodynamics, taking into account the following dissipative processes: the electron and ion thermal conductivity, the electron-ion energy exchange, the magnetic field diffusion due to finite electric conductivity, the Joule heating, and the radiation losses. The equation of state and dissipative coefficients incorporate the degree of gas ionization. Since the magnetic field lines penetrate through the interface between the discharge plasma and the insulator, the magnetic field must also be calculated inside the insulator domain. For this reason the simulation region comprises two domains, namely the internal domain filled by plasma (called the “plasma”) and the external domain called the “insulator”. To calculate the magnetic field evolution we solve the same MHD equations in the “plasma” and “insulator” regions. In contrast to the “plasma” domain, in the “insulator” region the medium is treated as an immobile (v = 0), zero-conductivity (σ = 0) fluid. Two capillary cross-section shapes were considered in our simulations, circular and square, as shown in Fig. <ref>. In the simulations we consider capillaries with diameter/width d = 2r_0 = 500 and 250 μm, prefilled with pure hydrogen plasma of homogeneous initial (at t=0) density ρ_0 = 3.5·10^-6 g/cm^3. Initially there is no current inside the channel and hence, in order to initiate the discharge, the hydrogen is assumed to be slightly ionized (T_e = T_i = 0.5 eV).
The same temperature was set at the capillary wall. The external passive insulator domain has a radius of R = 500 μm. The discharges are driven by the same electric current pulse, with an approximately sinusoidal profile, a quarter-period of 180 ns, and a peak current of 311 A. The simulation time spans t∈[0, 250] ns. The chosen capillary, plasma and external electric circuit parameters correspond to discharge parameters that are typical of experiments on laser wakefield electron acceleration, where such discharges guide ultrashort high-power laser pulses (see Ref. <cit.> and the literature cited therein).§ SIMULATION RESULTS Figure <ref> presents the electron density and temperature distributions for both circular and square capillaries at the same time t = 200 ns (20 ns after the current has reached its maximum), when the quasistatic equilibrium stage of the discharge is reached. The results obtained for the circular cross-section capillary are in good agreement with the experiments and simulations conducted previously (see <cit.>). The time evolution of the electron density and temperature (taken along the dashed line shown in Fig. <ref>) in the square cross-section capillary is presented in Fig. <ref> and Fig. <ref>, respectively. At about 200 ns a near-parabolic temperature profile with the temperature maximum at the capillary axis is formed (Fig. <ref>). The corresponding electron density distribution has a near-parabolic profile with the density minimum at the capillary axis (Fig. <ref>). To compare the circular and square cross-section capillaries, in Fig. <ref> we show the distributions of magnetic field and current density inside the capillaries at 200 ns. Frames (a) and (b) in Fig. <ref> correspond to the circular cross-section capillary, and frames (c) and (d) in the same figure show the magnetic field and current density distribution, respectively, inside the square cross-section capillary. As seen in Fig. <ref> (c), the magnetic field lines are not perfectly circular near the plasma-insulator interface for the square capillary. For the circular one the field lines are perfectly circular, as expected. We note that near the axis (r→ 0) of the square capillary and near the external boundary of the insulator (r≈ R) the magnetic field lines become almost circular. This indicates that the size of the insulator region was chosen correctly. In the case of the square cross-section capillary, the values of the temperature (T_e = 5.34 eV) and density (n_e = 12.0·10^17 cm^-3) at the capillary axis are lower than for the circular one (T_e = 5.73 eV and n_e = 13.1·10^17 cm^-3), as seen in Fig. <ref>. These differences are caused by differences in the geometry; in particular, the electron temperature is lower because the current density in the square capillary is smaller than in the circular one. However, the general plasma profiles and the time dependence of the axial plasma parameters for the square capillary are quite similar to those of the circular one (see Fig. <ref>). Using these simulation results, we can approximate the electron density distribution in the vicinity (r ≤ 150 μm) of the capillary axis for the circular cross-section (◯) and square cross-section (□) capillaries, in the same way as was done in Ref. <cit.> for a circular capillary. The fits read: n_e^◯(r) = n_e(0) [1 + 0.33(r/r_0)^2 + 0.4(r/r_0)^4 + …], n_e^□(r) = n_e(0) [1 + 0.29(r/r_0)^2 + (0.24 + 0.075cos 4φ)(r/r_0)^4 + …], where φ is the azimuthal angle.
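As a quick numerical illustration of these fits, the following minimal Python sketch evaluates the residual azimuthal variation of n_e^□ at the edge of the fitted region; the choice r_0 = 250 μm (i.e. the d = 500 μm capillary) and the on-axis value n_e(0) = 12.0·10^17 cm^-3 are assumptions made for this example:

import numpy as np

# Near-axis density fit for the square capillary, with the coefficients above.
n0, r0 = 12.0e17, 250.0        # cm^-3 and um (assumed: d = 500 um capillary)

def ne_square(r, phi):
    x = r / r0
    return n0 * (1.0 + 0.29 * x**2 + (0.24 + 0.075 * np.cos(4 * phi)) * x**4)

phi = np.linspace(0.0, 2.0 * np.pi, 361)
ne = ne_square(150.0, phi)     # outer edge of the fitted region, r = 150 um
print(100.0 * (ne.max() - ne.min()) / ne.mean())   # ~1.7 % peak-to-peak

The cos 4φ term thus modulates the density by only a couple of percent even at r = 150 μm, consistent with the discussion that follows.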
The value of the coefficient in front of (r/r_0)^2 for the circular capillary, 0.33, coincides with the value obtained in Ref. <cit.>. We note here that the r^4 dependence of the transverse density distribution can in principle be detected by observing the laser centroid oscillations, as proposed in Ref. <cit.>. The azimuthal inhomogeneity of the electron density near the axis of the square cross-section capillary is quite weak, as indicated by the smallness of the coefficient in front of the cos 4φ term in Eq. (<ref>). However, the further from the axis, the more noticeable the effect of the cosine term becomes (see Fig. <ref>). To quantify this effect in the case of laser guiding, the matched spot size calculated according to Ref. <cit.> is shown in Table <ref> for the different capillaries in the horizontal and diagonal directions, together with the matched spot size obtained from the analytical quasistationary model <cit.>: W_Q.S. = 1.48× 10^5 √(r_0 [μm])/(n_0 [cm^-3])^1/4. For both the larger and smaller capillaries, the difference between the matched spot sizes calculated in the two directions is small, suggesting that even at a diameter of 250 μm, a square capillary is an effective waveguide for circularly-symmetric beams. For the electron beam focusing application of discharge capillaries, the focal length of the lens varies inversely with the magnetic field gradient and the capillary length. For an ideal lens, the field gradient (and hence the focal length) would be constant. For the analysis, a scaling factor of 0.81 was applied to the magnetic field inside the circular capillary to keep the focal lengths the same and facilitate comparison. Fig. <ref> shows the field gradient variation from the on-axis value as a function of position for the parameters of Fig. <ref>. One can see that the variations are similar for the circular and square capillaries, with the square capillary showing a lower variation along the diagonal. The effect of the differences in magnetic field gradient on electron beam focusing was studied using ELEGANT simulations <cit.>. Electron beam parameters relevant to the LPA-based XUV FEL under construction at the BELLA center at LBNL were chosen <cit.>. The electron beam energy and source size were 300 MeV and 0.5 μm, respectively. The electron beam was taken to have no energy spread, and the distance from the electron beam waist to the 15 mm-long capillary was 7 cm, providing a focal point approximately 1 m from the capillary exit. The emittance growth due to the field gradient variations of the plasma lens is shown in Fig. <ref>; as suggested by the field gradient variation, the difference between the circular and square capillaries is small. In both cases, for a sufficiently small beam at the entrance of the capillary (r.m.s. beam size ≲ 10% of the capillary radius), the emittance growth is negligible since the field gradient variation is small. As the beam fills a greater portion of the capillary, the emittance grows by approximately a factor of 3 when the r.m.s. beam size is ≈ 20% of the capillary radius. The square capillary shows a slightly lower emittance growth due to the lower field gradient variation along the diagonal.
For higher divergence, the difference becomes smaller due to increased charge loss (at the percent level) at the entrance of the circular capillary.§ CONCLUSIONS In conclusion, we have simulated the development of a hydrogen-filled capillary discharge with circular and square cross-sections under the same initial plasma density, capillary size, and external electric circuit parameters. We have found that the calculated magnetic field, electron temperature, and density distributions in the near-axis region of the square and circular capillaries are similar. These results indicate that square capillaries, which allow greater diagnostic accessibility, can be employed to guide cylindrically symmetric laser pulses and to focus electron beams.§ ACKNOWLEDGMENTS The work was supported in part by the Russian Foundation for Basic Research (Grant No. 15-01-06195), by the Competitiveness Program of National Research Nuclear University MEPhI (Moscow Engineering Physics Institute), contract with the Ministry of Education and Science of the Russian Federation No. 02.A03.21.0005, 27.08.2013, and the basic research program of the Russian Ac. Sci. Mathematical Branch, project 3-OMN RAS. The work at Lawrence Berkeley National Laboratory was supported by US DOE under contract No. DE-AC02-05CH11231. This research was also sponsored by the project ELI – Extreme Light Infrastructure–Phase 2 (CZ.02.1.01/0.0/0.0/15_008/0000162) through the European Regional Development Fund and by the Ministry of Education, Youth and Sports of the Czech Republic (National Program of Sustainability II Project No. LQ1606). Ehrlich1996 Y. Ehrlich, C. Cohen, A. Zigler, J. Krall, P. Sprangle, and E. Esarey, Phys. Rev. Lett. 77, 4186 (1996). ECL2009 E. Esarey, C. B. Schroeder, and W.P. Leemans, Rev. Mod. Phys. 81, 1229 (2009). Leemans2014 W.P. Leemans, A. J. Gonsalves, H.-S. Mao, K. Nakamura, C. Benedetti, C. B. Schroeder, Cs. Toth, J. Daniels, D. E. Mittelberger, S. S. Bulanov, J.-L. Vay, C. G. R. Geddes, and E. Esarey, Phys. Rev. Lett. 113, 245002 (2014). Benware1998 B. R. Benware, C. D. Macchietto, C. H. Moreno, and J. J. Rocca, Phys. Rev. Lett. 81, 5804 (1998). Steinke2016 S. Steinke, J. van Tilborg, C. Benedetti, C. G. R. Geddes, C. B. Schroeder, J. Daniels, K. K. Swanson, A. J. Gonsalves, K. Nakamura, B. H. Shaw, E. Esarey, and W.P. Leemans, Nature 530, 190 (2016). Tilborg2015 J. van Tilborg, S. Steinke, C. G. R. Geddes, N. H. Matlis, B. S. Shaw, A. J. Gonsalves, J. V. Huijts, K. Nakamura, J. Daniels, C. B. Schroeder, E. Esarey, S. S. Bulanov, N. A. Bobrova, P. V. Sasorov, and W. P. Leemans, Phys. Rev. Lett. 115, 105003 (2015). Pompili2017 R. Pompili, M. P. Anania, M. Bellaveglia, A. Biagioni, S. Bini, F. Bisesto, E. Brentegani, G. Castorina, E. Chiadroni, A. Cianchi, M. Croia, D. Di Giovenale, M. Ferrario, F. Filippi, A. Giribono, V. Lollo, A. Marocchino, M. Marongiu, A. Mostacci, G. Di Pirro, S. Romeo, A. R. Rossi, J. Scifo, V. Shpakov, C. Vaccarezza, F. Villa, and A. Zigler, Appl. Phys. Lett. 110, 104101 (2017). Tilborg2017 J. van Tilborg, S. K. Barber, H.-E. Tsai, K. K. Swanson, S. Steinke, C. G. R. Geddes, A. J. Gonsalves, C. B. Schroeder, E. Esarey, S. S. Bulanov, N. A. Bobrova, P. V. Sasorov, and W. P. Leemans, Phys. Rev. AB, in press. Bobrova2001 N. A. Bobrova, A. A. Esaulov, J.-I. Sakai, P. V. Sasorov, D. J. Spence, A. Butler, S. M. Hooker, and S. V. Bulanov, Phys. Rev. E 65, 016407 (2001). Broks2005 B. H. P. Broks, K. Garloff, and J. J. A. M. van der Mullen, Phys. Rev. E 71, 016401 (2005). Bobrova2013 N. A. Bobrova, P. V. Sasorov, C.
Benedetti, S. S. Bulanov, C. G. R. Geddes, C. B. Schroeder, E. Esarey, and W. P. Leemans, Phys. Plasmas 20, 020703 (2013). Gonsalves2016 A. Gonsalves, C. Pieronek, J. Daniels, S. S. Bulanov, W. Waldron, D. Mittelberger, W. Leemans, N. Bobrova, P. Sasorov, F. Liu, S. Antipov, J. Butler, J. Appl. Phys. 119, 033302 (2016). Gonsalves2007 A. J. Gonsalves, T. P. Rowlands-Rees, B. H. P. Broks, J. J. A. M. van der Mullen, and S. M. Hooker, Phys. Rev. Lett. 98, 025002 (2007). MARPLE12 V. Gasilov, A. Boldarev, S. Dyachenko, O. Olkhovskaya, E. Kartasheva, G. Bagdasarov, S. Boldyrev, I. Gasilova, V. Shmyrov, S. Tkachenko, J. Grunenwald, T. Maillard, in Applications, Tools and Techniques on the Road to Exascale Computing, (eds.: K. De Bosschere, E. H. D'Hollander, G. R. Joubert, D. Padua, F. Peters), IOS Press, Series "Advances in Parallel Computing" 22, 235 (2012). Broks2007 B. H. P. Broks, W. Van Dijk, J. J. A. M. van der Mullen, A. J. Gonsalves, T. P. Rowlands-Rees, and S. M. Hooker, Phys. Plasmas 14, 023501 (2007). Gonsalves2010 A. J. Gonsalves, K. Nakamura, C. Lin, J. Osterhoff, S. Shiraishi, C. B. Schroeder, C. G. R. Geddes, Cs. Tóth, E. Esarey, and W. P. Leemans, Phys. Plasmas 17, 056706 (2010). Benedetti2012 C. Benedetti, C. B. Schroeder, E. Esarey, and W. P. Leemans, Phys. Plasmas 19, 053101 (2012). ELEGANT M. Borland 2000 elegant: A Flexible SDDS-Compliant Code for Accelerator Simulation, Advanced Photon Source LS-287. XFEL_BELLA J. van Tilborg, S. K. Barber, F. Isono, C. B. Schroeder, E. Esarey, and W. P. Leemans, AIP Conference Proceedings 1812, 020002 (2017).
http://arxiv.org/abs/1703.08604v2
{ "authors": [ "Gennadiy Bagdasarov", "Pavel Sasorov", "Alexey Boldarev", "Olga Olkhovskaya", "Vladimir Gasilov", "Anthony J. Gonsalves", "Samuel Barber", "Stepan Bulanov", "Carl B. Schroeder", "Jeroen van Tilborg", "Eric Esarey", "Wim P. Leemans", "Tadzio Levato", "Daniele Margarone", "Georg Korn", "Sergei V. Bulanov" ], "categories": [ "physics.plasm-ph" ], "primary_category": "physics.plasm-ph", "published": "20170324212618", "title": "Plasma Equilibrium inside Various Cross-Section Capillary Discharges" }
http://arxiv.org/abs/1703.09205v3
{ "authors": [ "Daniel Elander", "Maurizio Piai" ], "categories": [ "hep-th", "hep-ph" ], "primary_category": "hep-th", "published": "20170327174744", "title": "Calculable mass hierarchies and a light dilaton from gravity duals" }
If a significant fraction of the dark matter in the Universe is made of an ultra-light scalar field, named fuzzy dark matter (FDM), with a mass m_a of the order of 10^-22-10^-21 eV, then its de Broglie wavelength is large enough to impact the physics of large scale structure formation. In particular, the associated cutoff in the linear matter power spectrum modifies the structure of the intergalactic medium (IGM) at the scales probed by the Lyman-α forest of distant quasars. We study this effect by making use of dedicated cosmological simulations which take into account the hydrodynamics of the IGM. We explore heuristically the amplitude of quantum pressure for the FDM masses considered here and conclude that quantum effects should not modify significantly the non-linear evolution of the matter density at the scales relevant to the measured Lyman-α flux power, for m_a ≳ 10^-22 eV. We show that increasing the threshold may increase or decrease the shape of the cutoff; we derive a scaling law between m_a and the mass of the well-studied thermal warm dark matter (WDM) model that is best adapted to the Lyman-α forest data, and which differs significantly from the one inferred by a simple linear extrapolation. By comparing FDM simulations with the Lyman-α flux power spectra determined from the BOSS survey, and marginalizing over relevant nuisance parameters, we exclude FDM masses in the range 10^-22 eV ≲ m_a < 2.3× 10^-21 eV at 95 % CL. Adding higher-resolution Lyman-α spectra extends the exclusion range up to 2.9× 10^-21 eV. This provides a significant constraint on FDM models tailored to solve the "small-scale problems" of ΛCDM. large-scale structure of Universe – dark matter § INTRODUCTION While non-baryonic dark matter is a cornerstone in our current understanding of the observable Universe, its nature remains completely unknown after decades of theoretical and experimental investigations. Cold dark matter (CDM) is necessary to understand at the same time the global dynamics of the Universe and the gravitational formation of large scale structures from an initial, almost scale-invariant, power spectrum. The current-time dark matter density derived from CMB observations, Ω_c h^2 = 0.118 ± 0.001 <cit.>, is about five times larger than the baryon density, Ω_b h^2 = 0.0222 ± 0.0002, which can also be independently derived from the measurements of light element abundances. Dark matter also seems to be necessary to understand the dynamics of gravitationally bound systems such as galaxies <cit.> and clusters of galaxies <cit.>. To understand the nature of dark matter, two complementary strategies are possible. A first option is to test specific models inspired by particle physics, using more or less direct detection strategies. Two outstanding candidates predicted in extensions of the Standard Model of particle physics are the supersymmetric neutralino, a thermal relic whose mass is in the GeV-TeV range, and the QCD axion, a relic pseudo-scalar field with a mass around 1-100 μeV. They are tested with dedicated experiments or observations, without a positive outcome as of now <cit.>, although future experiments will explore much more of their parameter space, in particular in the case of the currently poorly tested QCD axion. A second option consists in measuring or constraining generic properties of dark matter from astrophysical or cosmological observations.
For example, if the dark matter is made of fermions, then estimates of their phase-space density in dwarf galaxies imply the Tremaine-Gunn bound <cit.> on their mass, m ≳ 400 eV. Such a bound does not apply to bosonic particles. The small-scale clustering features of visible matter may also be used in different ways to constrain dark matter properties. In fact, at the galactic or sub-galactic scale it has been argued that CDM predictions do not match observations, concerning the number density of low-mass and dwarf galaxies <cit.>, and the absence of an observed central dark matter cusp in these objects <cit.>. While models which take into account the full richness of baryon physics at these scales may resolve these issues, it is still possible that these features result from specific DM properties not included in the most naive CDM model. A popular scenario to explain the apparent lack of structures at small scales is that the DM velocity dispersion could be large enough at the time of equality, when structures start to form, so that the related free-streaming of these warm DM (WDM) particles would suppress small-scale structures. For thermal WDM of mass m_X, using cosmological parameters from <cit.>, the associated cutoff in the linear density power spectrum P(k) is at the scale <cit.>: k_c = 6.46 (m_X/keV)^1.11 Mpc^-1, where √(P_WDM(k_c)/P_CDM(k_c)) = 0.5. In this equation, k_c does not depend on h. Recent observations of a 3.5 keV X-ray line in several DM-rich objects <cit.> were also interpreted as a hint for a 7 keV sterile neutrino which would constitute WDM. An issue with WDM is that it seems difficult to solve the halo abundance and the cusp-core problems at the same time, if both are to be taken seriously. This is the "Catch 22" problem <cit.>: for WDM with a mass of a few keV, the halo core size, which originates from the above-mentioned Tremaine-Gunn phase-space density argument, is too small to solve the apparent cusp-core issue. Other DM properties were suggested and studied to alleviate the small-scale CDM issues. One possibility is that DM is self-interacting with large cross-sections of the order of 0.1 - 1 cm^2/g, thus producing constant-density cores inside halos <cit.>. The free-streaming effect characteristic of WDM could also be produced by decaying DM with a small mass splitting <cit.>. In this article, we now focus on the alternative scenario according to which DM consists of extremely light bosons, with a mass scale m_a ∼ 10^-22 eV, so that the de Broglie wavelength of these particles is large enough to smooth the associated density fluctuations on the relevant small scales. The basic physics involved at large scales in this Fuzzy Dark Matter (FDM) scenario is presented in e.g. <cit.>. The effective "quantum pressure" associated with FDM generates an effective Jeans wavenumber, which for a fluid of density ρ is: k_J(ρ) = 2 (π G ρ)^1/4 (m_a/ħ)^1/2 = 0.11 h^1/2 (ρ/ρ_c)^1/4 (m_a/10^-22 eV)^1/2 kpc^-1. Structure formation is suppressed for comoving modes larger than the corresponding Jeans wavenumber at equality time, which is roughly ∼ 10 Mpc^-1 for m_a = 10^-22 eV. This results in a linear power spectrum with a small-scale cutoff produced at high redshift. The related phenomenology is therefore close to the WDM one.
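As a consistency check of these numbers, the following minimal Python sketch (in cgs units) evaluates k_J at the mean cosmic density today, and the comoving Jeans wavenumber at equality; the equality redshift z_eq ≈ 3400 is an assumption of this example:

import numpy as np

G, hbar, kpc = 6.674e-8, 1.0546e-27, 3.086e21   # cgs units
H0 = 67.5e5 / (1e3 * kpc)                       # s^-1, for h = 0.675
rho_c = 3.0 * H0**2 / (8.0 * np.pi * G)         # critical density, g cm^-3
m_a = 1e-22 * 1.783e-33                         # g (1 eV/c^2 = 1.783e-33 g)

def k_J(rho):
    # physical Jeans wavenumber of the equation above, in cm^-1
    return 2.0 * (np.pi * G * rho)**0.25 * np.sqrt(m_a / hbar)

print(k_J(rho_c) * kpc)                         # ~0.09 kpc^-1 at rho = rho_c

z_eq = 3400                                     # assumed equality redshift
rho_eq = 0.31 * rho_c * (1.0 + z_eq)**3         # matter density at equality
print(k_J(rho_eq) / (1.0 + z_eq) * kpc * 1e3)   # ~9 Mpc^-1, comoving

Both values agree with the scalings quoted above.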
On the other hand, in dense environments such as the centers of DM halos, where the effective Jeans scale is small due to the ρ^1/4 scaling, simulations which explicitly take into account these wave properties <cit.> demonstrate a rich phenomenology, including the development of solitonic cores inside Navarro-Frenk-White halos. In particular, these solitonic cores can fit the observed kinematic properties of some dwarf galaxies, with FDM masses ranging from ∼ 0.4× 10^-22 eV up to ∼ 4 - 6 × 10^-22 eV depending on the considered objects and analysis procedure <cit.>. There is to our knowledge no strong argument to explain why the DM mass would be such that its de Broglie wavelength is of the order of galactic scales. On the other hand, light bosonic pseudo-scalars are a generic prediction of string models, for example as the zero-mode of the Kaluza-Klein tower resulting from the compactification of antisymmetric tensor fields, with the mass scale arising due to the modulus fixing the GUT scale coupling <cit.>. It is plausible that DM is made of several such species, one of which would have the right mass to produce the FDM phenomenology. However, for simplicity, we consider in this article only the case of a single (fuzzy) DM species. Existing bounds on the mass of FDM particles arise from both linear and non-linear cosmological probes. The most robust constraints are based on linear probes of structure formation, mostly the CMB, which show that FDM is not a dominant component of dark matter for m_a ≲ 10^-25 eV <cit.>. Models with higher values of m_a cannot be distinguished from CDM, given that the scales for which structure formation is damped with respect to CDM are too small to be observed with the primary CMB anisotropies. Pushing into the mildly non-linear regime, CMB lensing at high ℓ could explore FDM masses up to ∼ 10^-23 eV <cit.>. For even higher-mass FDM, the halo mass function (HMF) was semi-analytically computed from the linear power spectrum, using formalisms similar to the case of pure CDM <cit.>: for the benchmark mass m_a = 10^-22 eV, the HMF starts to be suppressed for M ≲ 10^10 M_⊙, while it is fully cut off at M ∼ 10^7 - 10^8 M_⊙, depending on the calculation. The galaxy UV-luminosity function at high redshift, z ∼ 4-10, is therefore expected to be strongly attenuated for M_UV ≳ -16, and accordingly reionization is also expected to be delayed <cit.>. Recent estimations of the high-redshift UV-luminosity function <cit.>, as derived from deep observations such as the Hubble Frontier Fields, are still subject to large uncertainties <cit.>, but they do not clearly point towards a deviation from the CDM Schechter law. FDM bounds could therefore be derived, ranging from m_a > 1.2× 10^-22 eV to 5× 10^-22 eV or even 1.2× 10^-21 eV, depending on the data used and analysis strategy <cit.>. The absorption of the Lyman-α emission from distant quasars by neutral HI in the intergalactic medium (IGM) constitutes a powerful, high-redshift probe of spatial fluctuations of the matter density at comoving scales going down to ∼ 1 - 0.1 Mpc, depending on the instrumental wavelength resolution. As in the case of WDM, it is therefore expected that Lyman-α forest observations can constrain FDM models, with very different sources of uncertainty with respect to the aforementioned measurements. In fact, <cit.> disfavors pure FDM models with m_a < 10^-22 eV, by making use essentially of high-resolution Lyman-α forest spectra.
Given the similar phenomenologies of FDM and WDM, it is also possible to apply existing WDM bounds on m_X (e.g. <cit.>) to the FDM scenario, by using the correspondence between m_a and m_X which matches the cutoff position k_c in the linear matter power spectrum <cit.>. This correspondence can be roughly estimated as highlighted in <cit.>: the power spectrum suppression for WDM takes place for modes inside the horizon when T ∼ m_X, while for FDM the suppressed modes are inside the horizon when the scalar field starts oscillating, i.e. for H(T) ∼ m_a. Since T ∼ √(H M_Pl) (radiation era), the order-of-magnitude mass matching between both models is thus m_X ∼ √(m_a M_Pl) ∼ 0.5 (m_a/10^-22 eV)^0.5 keV. This relation is confirmed by more sophisticated calculations. The bound m_X > 4.09 keV <cit.> would therefore naively translate into m_a ≳ 7× 10^-21 eV. However, the significant differences in power spectrum shape between FDM and WDM at the cutoff scale, as computed in <cit.>, make this interpretation uncertain and, as we will see, too optimistic. Given the potential of the Lyman-α forest to constrain FDM, it is desirable to dedicate more in-depth studies to this scenario. In this article we examine the impact of the exact FDM linear power spectrum on the Lyman-α forest. We present the results of hydrodynamical simulations which make explicit use of FDM transfer functions. We highlight the impact of the difference between WDM and FDM linear transfer functions on the predicted Lyman-α flux spectra. While we do not take into account the full wave behavior of the FDM fluid in the numerical simulations, we present quantitative arguments which suggest that this approximation should be valid at the scales probed by our simulations for m_a ≳ 10^-22 eV. We then combine our models with existing high-statistics Lyman-α power spectra obtained from the BOSS survey, as published in <cit.>, to derive constraints on m_a, by applying a procedure similar to that used in e.g. <cit.> to take into account various sources of uncertainty. Finally, we also add higher-resolution Lyman-α spectra from <cit.> and <cit.> to the analysis in order to improve the sensitivity to higher values of m_a.§ CALCULATION OF THE LYMAN-Α POWER SPECTRUM§.§ Linear matter spectrum Throughout this paper we consider for simplicity a pure-FDM model, in which all of the dark matter is made of a (pseudo) scalar field Ψ of mass m_a. We also do not consider possible self-interactions. In the scenario where such interactions are given by the expansion of a periodic axion-like potential, V ∼ f_a^2 m_a^2 (1 - cos(Ψ/f_a)), their effects are negligible at the cosmological scales of interest for values of f_a that do not require fine-tuning to give the relic abundance. The homogeneous component Ψ_0 of the field then obeys the Klein-Gordon equation in the Friedmann-Robertson-Walker metric: Ψ̈_0 + 3 H(z) Ψ̇_0 + m_a^2 Ψ_0 = 0. For the masses of interest here, m_a ≫ H(z) throughout the epoch of structure formation, so that the field oscillates and behaves on large scales as CDM. On small scales, the effect of the pressure associated with Ψ gives the FDM fluid an effective sound velocity (e.g. <cit.>): c_s,eff^2 = (k^2/4m_a^2)/(1 + k^2/4m_a^2). The related Jeans scale damps structure formation in the linear regime at small scales. Therefore the transfer function defined from the linear matter power spectrum, P_FDM(k) = T^2(k) × P_CDM(k), presents a strong attenuation at the comoving Jeans scale during equality.
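Before turning to the full transfer-function calculation, the order-of-magnitude WDM-FDM matching given in the introduction can be made concrete; in the following minimal Python sketch, the use of the reduced Planck mass (M_Pl ≈ 2.4×10^27 eV) is an assumption that reproduces the 0.5 keV prefactor:

# Order-of-magnitude matching m_X ~ sqrt(m_a * M_Pl), all masses in eV.
M_pl = 2.435e27                      # eV, reduced Planck mass (assumed)
m_a = 1e-22                          # eV
m_X = (m_a * M_pl)**0.5
print(m_X / 1e3)                     # ~0.49 keV, the quoted prefactor

# Naive translation of the WDM bound m_X > 4.09 keV into an FDM bound:
print(1e-22 * (4.09e3 / m_X)**2)     # ~7e-21 eV

As shown below, the Lyman-α-calibrated scaling of Section 3.3 leads to a significantly weaker FDM bound than this naive translation.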
We calculate P_FDM(k) using the modification <cit.> of the Boltzmann solver of <cit.>, with all cosmological parameters from <cit.>. Fig. <ref> illustrates the corresponding cutoff calculated for three axion masses in the range of interest for this study, namely 3× 10^-22 eV ≲ m_a ≲ 5× 10^-21 eV. We also show the FDM transfer function as parametrized by <cit.>. For identical values of m_a, we observe a slight but significant shape difference between this parametrization and the full calculation: the power spectrum cutoff from the full calculation is smoother than the one from <cit.>. This trend is confirmed by <cit.>, an independent calculation of the FDM transfer function based on the Boltzmann solver of <cit.>. Based on calculations for several values of m_a in the range 10^-23 - 10^-20 eV, we find the following scaling relation for the cutoff mode k_c, defined by T(k_c) = 0.5: k_c = 4.97 (m_a/10^-22 eV)^0.46 Mpc^-1. This relation is in good agreement with those found in <cit.>. From Eqn. <ref>, the mass-matching relation between WDM and FDM becomes m_X = 0.79 (m_a/10^-22 eV)^0.42 keV, again in reasonable agreement with previous estimates, including the order-of-magnitude value given in Eqn. <ref>. The WDM transfer function, also represented in Fig. <ref> for m_X = 2.5 keV, is much smoother than the FDM one. Therefore it is expected that SDSS-based constraints on FDM will differ significantly from those obtained by the mapping of Eqn. <ref>, which justifies the need for dedicated simulations. Given that the Lyman-α forest essentially probes the one-dimensional matter power spectrum, P_1D(k) = (1/2π) ∫_k^∞ dk̃ k̃ P_3D(k̃), the cutoff in the 3-D spectrum modeled by a given T(k) will generically produce a cutoff in the 1-D spectrum with a different, smoother shape, and at much smaller values of k (which explains why SDSS spectra, with k ≲ 2 h Mpc^-1, are sensitive to models with k_c of several tens of h Mpc^-1). Non-linear and hydrodynamical effects make the picture more complicated, so that it is difficult to predict without full calculations how Lyman-α flux power spectra in the SDSS range are modified by changes in the shape of T(k). It is nevertheless expected that SDSS Lyman-α data probe essentially the low-k part of T(k), and therefore constraints on m_a should be less severe than those anticipated by making use of WDM bounds with relations such as Eqn. <ref>. Adding higher-resolution Lyman-α data will naturally extend the range of explored k, thereby improving the sensitivity to higher values of m_a.§.§ Non-linear matter spectrum In the non-linear, Newtonian and non-relativistic regime, i.e. at low redshifts and reasonably small scales, the evolution of the FDM fluid can be rigorously followed using two equivalent methods: either by explicitly solving the Poisson-Schrödinger equations as in <cit.>, or by solving the corresponding Madelung fluid equations. Given the orders of magnitude of the DM density, velocity and de Broglie wavelength, the particle occupancy numbers are very high, so that these equations can be interpreted in a classical way. For an FDM fluid whose wave function is written Ψ = √(ρ) e^iθ, with m_a v⃗ = ħ∇θ, the dynamical equation in a background geometry characterized by the scale factor a and expansion rate H, with Newtonian potential ϕ, is (see e.g. <cit.>): ∂_t v⃗ + H v⃗ + (1/a)(v⃗·∇)v⃗ = -(1/a) ∇[ ϕ - (ħ^2/2m_a^2 a^2)(∇^2 √(ρ)/√(ρ)) ]. This is equivalent to the Euler equation as implemented in standard N-body simulations, with an additional "quantum pressure" term Q ≡ -(ħ^2/2m_a) ∇^2 √(ρ)/√(ρ).
Note that, as <cit.> show, solving this fluid equation gives results identical to the Poisson-Schrödinger system away from interference fringes. Implementing this term in N-body simulations is difficult <cit.>, mostly because it involves a second-order derivative of the density field, which means that most of its power, in terms of its Fourier transform, is on very small scales. In <cit.>, the effect of quantum pressure in the non-linear regime was recently studied at cosmological scales using a particle-mesh scheme. For m_a = 2.5× 10^-22 eV, significant modifications of the matter power spectrum are visible for k ≳ 100 Mpc^-1. This is consistent with arguments from <cit.>: simulations with m_a ∼ 10^-22 eV have shown that the Q term has the most severe impact on the internal structures of DM halos, and much less impact on larger scales. However, the SDSS Lyman-α spectra do not probe comoving scales smaller than a few hundred kpc. As another argument, we computed the linear growth rate ratio ξ defined in <cit.>: for m_a = 3.4× 10^-22 eV, we find ξ = 1 within 1 % for k ≲ 20 h Mpc^-1. We therefore expect the Q term to have a negligible influence on the DM dynamics with respect to gravity, at the scales relevant for the observable Lyman-α forest. As a consequence, for this study it is most probably appropriate to use standard N-body simulations. In Section 3.1, we will show a posteriori distributions for the Q term which support this choice. §.§ Hydrodynamical simulations The methodology and numerical tools used here are very similar to those extensively described in <cit.>. We use an N-body simulation code which is an update of the program of <cit.>. In this exploratory work, which aims to assess specifically the impact of the shape of the linear power spectrum on the resulting Lyman-α flux, we use a single set of ΛCDM cosmological parameters from <cit.>. They are in accordance with the central values used in <cit.>: h = 0.675, Ω_M = 0.31, n_s = 0.96 and σ_8 = 0.83. Note that we checked that, as for WDM, the cutoff in the linear matter power spectrum induced by the FDM models considered here does not change significantly the value of σ_8, for a given primordial scalar perturbation amplitude. The initial conditions are set at z = 30, starting from the linear matter power spectrum computed for this redshift. The (fuzzy) dark matter fluid is then treated as a collection of fixed-mass point particles. The baryon fluid is evolved using the Smoothed Particle Hydrodynamics technique, with stars created in cold and dense baryon environments. Of importance for the Lyman-α forest, we model the IGM heating by the UV background light, using internal heating rate parameters which result in the redshift-dependent IGM temperature-density relation T_IGM = T_0(z) (1 + δρ/ρ)^(γ(z)-1). As for the cosmological parameters, we adopt here fixed benchmark parameters such that T_0(z=3) = 14000 K and γ(z=3) = 1.3, which are in agreement with the measurements from <cit.>. From snapshot files in the redshift range z = 4.6 - 2.2, adapted to SDSS Lyman-α data, we infer the line-of-sight-averaged one-dimensional Lyman-α flux power spectrum.
This observable is defined from the fluctuations of the quasar transmitted flux fraction, δ_φ(λ) = φ(λ)/φ̅ - 1, where φ̅ is the mean transmitted flux fraction at the HI absorber redshift, computed over the entire sample. We use here again a single HI optical depth model, which is known to roughly match existing data: τ_eff = α (1+z)^β, with α = 0.0025 and β = 3.7. In practice, the Lyman-α flux power spectrum is inferred in the range adapted either to published SDSS spectra, k = 10^-3 - 2× 10^-2 s km^-1, or to higher-resolution spectra, k = 10^-3 - 0.1 s km^-1. We exploit the splicing technique described in <cit.>. It consists in combining the results of two N-body simulation outputs, one of high resolution with 768^3 DM particles in a 25 h^-1 Mpc box, and one on larger scales, with 768^3 particles in a 100 h^-1 Mpc box, making use of a third, low-resolution simulation with 128^3 particles in a 25 h^-1 Mpc box. Simulations were computed for four different values of m_a between 3.4× 10^-22 and 4.1× 10^-21 eV. To assess FDM-related systematic effects, an additional simulation was run with T(k) given by the formula from <cit.>. § RESULTS Before comparing the predictions for the one-dimensional Lyman-α flux power spectrum with measured spectra, we first provide a discussion of the quantum pressure term, which was ignored in the N-body simulations. All the calculations presented here are therefore a posteriori and only hold if the dynamical impact of quantum pressure in the non-linear regime is negligible. §.§ Quantum pressure Fig. <ref> illustrates the properties of the DM fluid, derived from the snapshots at z = 2.6 and at scales of a few Mpc, which correspond to the median redshift and smallest comoving scales of relevance for the SDSS Lyman-α flux. The top panel provides a by-eye comparison of the DM mass density for CDM with respect to the lowest-mass FDM used in the N-body simulations, m_a = 3.4× 10^-22 eV. The severe attenuation of small-scale structures due to the linear cutoff in the FDM scenario is evident. In order to assess the relative importance of quantum pressure, the bottom panel compares the gravitational potential ϕ (left), as calculated explicitly in the simulation, with the quantum pressure Q/m_a (right). Both are expressed in (m/s)^2. The Q term is estimated from the numerical Laplacian of the density field ρ, which is itself obtained by smoothing the DM point particle distribution with a kernel adapted to the local density of DM particles in the simulation, so that higher resolutions are obtained in higher-density regions. We checked that the resulting Q distributions are stable with respect to the kernel size parameter, which means that our estimation of Q is not severely biased by shot-noise-related fluctuations. On the other hand, we find that the gradient estimator is limited by the simulation resolution in regions where the DM density fluctuates rapidly, such as inside halos of course, but also along the filament edges: this means that in these regions higher-resolution simulations would necessarily predict larger values of Q. With this caveat in mind, we can observe that Q is significant only in small regions of space, again along filaments and in the vicinity of DM halos. The fact that gradients of Q look concentrated along filaments is consistent with the results of <cit.>, who find that the main differences in DM density maps when fully taking into account the Q term are in filaments (apart from halo cores, which they do not resolve).
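As an illustration of such an estimator, the following minimal Python sketch evaluates Q by finite differences on a uniform grid; the toy density field, grid size and cell size are assumptions of this example, and the actual analysis uses an adaptive smoothing kernel rather than a fixed grid:

import numpy as np

def quantum_pressure(rho, dx, hbar=1.0546e-27, m_a=1.783e-55):
    # Q = -(hbar^2 / 2 m_a) * laplacian(sqrt(rho)) / sqrt(rho), cgs units
    s = np.sqrt(rho)
    lap = sum(np.gradient(np.gradient(s, dx, axis=i), dx, axis=i)
              for i in range(rho.ndim))
    return -(hbar**2 / (2.0 * m_a)) * lap / s

dx = 100.0 * 3.086e21                     # cm, a 100 kpc cell (assumed)
x = np.arange(64) - 32
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
r2 = (X**2 + Y**2 + Z**2) * dx**2
rho = 1e-29 * (1.0 + 10.0 * np.exp(-r2 / (10.0 * dx)**2))   # toy blob, g cm^-3

m_a = 1.783e-55                           # g, i.e. m_a = 1e-22 eV
Q = quantum_pressure(rho, dx, m_a=m_a)
print(np.abs(Q / m_a).max() / 1e4)        # Q/m_a converted to (m/s)^2

For such a smooth, Mpc-scale toy field, Q/m_a remains many orders of magnitude below typical gravitational potential wells, in line with the smallness of the ratio r_Q discussed below.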
Note that, although it could be expected that Q would blow up in deep voids due to the 1/√(ρ) factor <cit.>, we do not find any hint of this behavior in our calculations. To quantify the validity of the N-body approximation used, we compute from the snapshots the ratio of the two "force" terms which appear on the right-hand side of Eqn. <ref>, based on our available estimate of Q: r_Q ≡ |∇ Q/m_a| / |∇ϕ|. The mass-weighted distribution of this dimensionless ratio is represented in Fig. <ref> for several values of m_a, using high-resolution snapshots at z = 2.6. For m_a = 1.8× 10^-21 eV (which is close to the limits that we will set in Section 3.3), and even for the lowest FDM mass implemented in our simulations, m_a = 3.4× 10^-22 eV, the quantum force at the scales resolved by our simulations is always much smaller than the gravitational force. For m_a = 3.4× 10^-22 eV, we find that r_Q > 0.01 for much less than 1 % of the DM particles. The impact of Q on the DM dynamics is therefore probably negligible. We observe a strong increase of r_Q at higher redshifts, due to the variation of the gradients with the scale factor a, but this does not change the conclusion. However, we stress again the caveat that our estimator for Q is such that the quantum force is significantly underestimated in unresolved, high-gradient regions; indeed, we know from e.g. <cit.> that the quantum force does play a major role, especially inside DM halos, for such values of m_a. For values of m_a significantly lower than ∼ 10^-22 eV, such as m_a = 5.0× 10^-23 eV, Fig. <ref> shows the distribution of r_Q obtained by a simple 1/m_a^2 rescaling with respect to the case m_a = 3.4× 10^-22 eV. We see that a significant population of FDM particles is expected to undergo a "quantum force" larger than 10 % of the gravitational force, mostly along filaments. This implies that the use of standard N-body simulations may be inappropriate in this mass regime, if one wants to predict density power spectra with high precision. This discussion suggests that it is relevant to use classical N-body simulations in the context of this work, for scales resolved by our simulations (i.e. larger than ∼ 100 kpc) and for FDM masses large enough, roughly m_a ≳ 10^-22 eV. This is consistent with the results of <cit.>, who find more explicitly that the effect of quantum pressure in the non-linear regime does not affect the power spectrum for modes k ≲ 100 Mpc^-1 in the case m_a = 2.5× 10^-22 eV. Importantly, it is clear that the validity of this approximation is directly linked to the explored scales: the use of the relatively low-resolution SDSS Lyman-α flux power spectrum is therefore more robust against the unaccounted-for wave effects than higher-resolution probes such as the XQ-100, HIRES and MIKE spectra. §.§ Lyman-α flux power spectra The Lyman-α flux power spectrum calculated according to Section 2.3 at the scales measured by SDSS is illustrated in Fig. <ref>, for several redshifts of interest, both in the case of CDM and of FDM with m_a = 3.4× 10^-22 eV. The difference between the predicted fluxes is rather subtle, of the order of 5 %. This is however large enough for SDSS data to discriminate between models, using both the k and z dependences of the signal. On the contrary, for the highest FDM mass probed by our simulations, m_a = 4.1× 10^-21 eV, the typical differences in flux power with respect to CDM are below 1 % and therefore hard to test with SDSS spectra alone. Fig.
<ref> illustrates the impact of shape modifications of the linear transfer function T(k) on the final Lyman-α flux prediction, for transfer functions with the same cutoff value k_c. While the differences between the linear power spectra predicted by the full calculation and by the Hu formula are not negligible at all, as discussed in Section 2.1, the top panel of Fig. <ref> shows that the related difference for the SDSS Lyman-α spectra is well below 1 %. Therefore we conclude that the uncertainties related to the modeling of the scalar field linear evolution are negligible for Lyman-α forest predictions. On the contrary, the shape difference between the FDM and WDM transfer functions does propagate significantly to the Lyman-α forest, as Fig. <ref> (bottom) demonstrates. This is due to the fact that the WDM transfer function is smaller than the FDM one for values of k much smaller than the cutoff k_c. Finally, let us observe from Fig. <ref> that the flux power spectrum differences are more pronounced at high redshift, while they are reduced at low z, where non-linear effects tend to erase the original shape differences. The Lyman-α flux power spectrum is also attenuated by the thermal pressure, which smooths the spatial distribution of the IGM over cosmological times. The relevant Jeans smoothing scale is given by <cit.>: L_J ≃ 0.52 kpc × (T/10^4 K)^1/2 (n_H/cm^-3)^-1/2. Using the values of T and n_H calculated in our CDM simulation, the spatial distribution of L_J is peaked around L_J ∼ 500 kpc. The associated mode 2π/L_J corresponds to the value m_a ≃ 7× 10^-22 eV, using Eqn. <ref>. The effect of thermal Jeans smoothing therefore happens at a scale comparable to the one associated with the FDM effect we are looking for. There is therefore a degeneracy between the IGM temperature parameters (T_0, γ) and m_a. §.§ Constraining m_a from the SDSS/BOSS Lyman-α forest The SDSS Lyman-α forest data used here are the same as in <cit.>, in which all the computation details for the flux power spectrum and its related statistical and systematic uncertainties are extensively described. From a parent sample of ∼ 60,000 SDSS-III/BOSS DR9 quasars <cit.>, the data consist of a selection of 13,821 spectra that have signal-to-noise ratios per pixel greater than 2, no broad absorption line features, no damped or detectable Lyman-limit systems, and an average resolution in the Lyman-α forest of at most 85 km s^-1. The Lyman-α forest is defined as the region spanning 1050 < λ_RF / Å < 1180. The spectra in this sample are used to measure the transmitted flux power spectrum in 12 redshift bins from ⟨ z ⟩ = 4.4 to 2.2, each bin spanning Δ z = 0.2, and in 35 equally-spaced spatial modes ranging from k = 10^-3 to 2× 10^-2 s km^-1. The flux power spectrum is obtained from the Fourier transform of the fractional flux transmission δ_φ defined in Section 2.3, computed separately for each z-sector. It has been shown (e.g. in <cit.>) that this SDSS Lyman-α forest is well matched by predictions based on the CDM model, provided relevant astrophysical and instrumental parameters, such as the IGM thermal state modeling or the spectrometer resolution, are chosen appropriately. We can therefore constrain the m_a parameter of FDM in a way similar to the constraints previously set on the sum of active neutrino masses, or on the mass m_X of WDM, using the same data. We use the same likelihood as in <cit.>, in order to handle the influence of the relevant cosmological and astrophysical parameters: H_0, Ω_M, n_s and σ_8 on the one hand, and on the other hand the previously mentioned parameters (T_0, γ) (Eqn.
<ref>) and (α,β) (Eqn. <ref>), which provide a simple description of the IGM temperature and mean Lyman-α optical depth. No CMB-based prior is set on the cosmological parameters, except for H_0, to which we apply the Gaussian constraint H_0=67.3±1. Flat (i.e., non-informative) priors are used for (α,β), while we constrain γ=1±0.3 and T_0∈ [0, 25000] K.

Other physical processes may significantly impact our FDM constraints, but are poorly known and not explicitly modeled in our simulations. To capture their effects, we introduce additional nuisance parameters, with simple analytical modifications of the predicted flux power spectra, based on other published simulations or models. The description of these corrections is detailed in <cit.>, and they include feedback processes from galactic outflows <cit.>, fluctuations of the UV background from discrete sources, and the homogeneous reionization history <cit.>. Other identified nuisance parameters are taken into account, to model the SiIII-HI cross-correlation, the spectrometer noise, the presence of residual damped Lyman-α systems in the data, and simulation uncertainties related to the splicing algorithm. The signal dependence on m_a is captured using the grid of simulations with m_a=3.4×10^-22, 7.9×10^-22, 1.8×10^-21 and 4.1×10^-21 eV, for which all other parameters are fixed to the benchmark values given in Section 2. For each FDM simulation a χ^2 is computed with respect to the 35×12 SDSS data points in (k,z) space, assuming h=0.673±0.010 <cit.> while floating all other likelihood parameters. We then explored two methods to derive a bound on m_a from these χ^2 values.

In a first approach, we use already existing CDM simulations, which vary all the aforementioned astrophysical and cosmological parameters while keeping 1/m_a=0, in order to extrapolate the grid to any point of the whole parameter space. Since no FDM simulation including cross terms between m_a and other parameters was carried out, this is done by assuming that the variation of the flux power spectrum with all parameters at any given value of m_a is identical to the one calculated for 1/m_a=0. We then identify the minimal χ^2 value, letting all parameters free. It occurs for σ_8=0.85, n_s=0.94, Ω_M=0.29, (T_0,γ)=(8723,0.93), (α,β)=(0.0025,3.7), and 1/m_a=0, meaning that the best fit is the pure CDM scenario. The minimal χ^2 is 408.1, comparable to the one for the highest tabulated FDM mass, χ^2(m_a=4.1×10^-21 eV)=409.0. For the tabulated FDM model with the lowest mass we find χ^2(m_a=3.4×10^-22 eV)=431.6, which shows that such a value of m_a is clearly excluded. A frequentist 95 % lower limit on m_a is set, following e.g. <cit.>: m_a > 2.0×10^-21 eV. As already mentioned in <cit.>, all adjusted best-fit IGM nuisance parameters are compatible with no correction within 1σ.

In order to bypass the lack of FDM cross-term calculations, and also to highlight the correspondence with WDM models, we also used a second approach, which consists of using the WDM bound published in <cit.> in the following way. We again determine a relationship between m_X and m_a, this time so that the same χ^2 is obtained in the SDSS adjustment, χ^2(m_X) = χ^2(m_a). This requirement is satisfied by the following scaling law: m_X = 0.715×(m_a/10^-22 eV)^0.558 keV. This scaling is significantly different from Eqn. <ref>, which was derived from the cutoff position in linear matter spectra. However, it is best adapted to our goal, in the sense that the WDM and FDM models associated by this relation predict Lyman-α flux power spectra that are indiscernible using current SDSS data. The 95 % CL limit m_X>4.09 keV derived by <cit.> then converts to m_a>2.3×10^-21 eV. Given that this second approach takes into account the cross terms calculated in <cit.>, we consider it to be more accurate than the first one, and therefore conclude that the SDSS Lyman-α forest excludes FDM models with m_a<2.3×10^-21 eV at 95 % CL.
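Converting a WDM bound into an FDM bound with this scaling law is a one-line computation. The sketch below (our own illustration in Python; the function name is hypothetical) inverts m_X = 0.715 (m_a/10^-22 eV)^0.558 keV and reproduces the bounds quoted in the text:

```python
def wdm_to_fdm_bound(m_X_keV):
    """Convert a WDM mass bound m_X (in keV) into an FDM mass bound m_a (in eV),
    using the chi^2-matched scaling law m_X = 0.715 * (m_a / 1e-22 eV)**0.558 keV."""
    return 1e-22 * (m_X_keV / 0.715) ** (1.0 / 0.558)

# SDSS alone: m_X > 4.09 keV  ->  m_a > ~2.3e-21 eV
print(wdm_to_fdm_bound(4.09))
# SDSS + XQ-100 + HIRES/MIKE: m_X > 4.65 keV  ->  m_a > ~2.9e-21 eV
print(wdm_to_fdm_bound(4.65))
```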
The above-mentioned matter spectral index adjusted with these SDSS data only, n_s=0.94, is in slight tension with the CMB-derived value n_s=0.97 <cit.>. As was noticed in <cit.>, using SDSS only, without Planck-based priors (except for H_0), is similar to allowing for a running of n_s between the large scales of the CMB and the smaller Lyman-α regime. Applying Planck-based priors on all our cosmological parameters therefore leads to a less tight constraint on the FDM or WDM mass. The 28 % change in m_X derived in <cit.> translates into the looser bound m_a>1.3×10^-21 eV if one includes <cit.> in the adjustment.

Uncertainties linked to the modeling of FDM itself were already extensively discussed. We noticed that uncertainties in the linear matter spectrum do not modify the result. The impact of quantum pressure in the N-body simulations is more of an open question, but we argued that at least for m_a ≳ 10^-22 eV it should be negligible at the scales and precision probed by the SDSS Lyman-α forest.

Finally, errors related to the simplified treatment of IGM physics in our model were taken into account by allowing "nuisance" parameters to float, from (T_0,γ) to the analytical corrections for feedback effects or UV fluctuations. However, these corrections are derived from simulations or models with specific underlying assumptions, so it cannot be guaranteed that they fully account for all processes relevant to our study. In addition, other phenomena that may impact the HI gas properties were not taken into account, such as residual fluctuations due to the inhomogeneity of the reionization process <cit.>. This specific process should, however, mostly affect the high-redshift IGM properties.

§.§ Adding higher-resolution Lyman-α forest data The analysis presented in Section 3.3 can easily be extended by including additional Lyman-α forest data with higher resolution. For that purpose, in this Section we add to the SDSS flux power spectrum the data from both the XQ-100 survey <cit.> and the HIRES/MIKE data as presented in <cit.>. The XQ-100 survey, obtained with the XSHOOTER spectrograph at the VLT, consists of a homogeneous and high-quality sample of 100 quasar spectra at z=3.5 - 4.5. The Lyman-α flux power spectrum was first derived from this survey in <cit.>, and the publicly available XQ-100 raw data were recently reanalyzed by <cit.>, taking into account in particular an improved determination of the spectrograph resolution. Here we use the Lyman-α flux power spectrum as tabulated in <cit.>, for three values of average redshift in the range z=3.2 - 3.93, and spatial modes reaching values as high as k=0.05 - 0.07 s km^-1, depending on the redshift. In addition to XQ-100, we also include the Lyman-α flux power spectrum as presented in <cit.> (Fig. 7), which is based on a collection of quasar spectra from Keck/HIRES and Magellan/MIKE. These high-resolution spectrometers allow one to reach spatial modes up to k=0.09 s km^-1.
Here we use only the flux power spectra determined at z ≤ 4.6, due to the lack of simulation snapshots available at higher redshifts. With the same methodology as in Section 3.3, we combine the SDSS Lyman-α spectra with those higher-resolution data, as was done in <cit.>. We make use of the same simulated FDM flux power spectra, described in Section 2. With the procedure based on the m_X - m_a mass matching, the limit m_X > 4.65 keV from <cit.> translates into the following bound: m_a>2.9×10^-21 eV at 95 % CL. This corresponds to a 30 % improvement with respect to the limit based on SDSS only: the higher resolutions of XSHOOTER, HIRES and MIKE make it possible to probe fluctuations at smaller comoving lengths. If we include Planck priors on the cosmological parameters, as was done with SDSS alone, the 95 % CL bound becomes m_a>2.4×10^-21 eV. The relative change in the constraint is less severe than with SDSS data alone. This is because the inclusion of higher-resolution spectra in addition to the original SDSS data also reduces the above-mentioned tension on the n_s parameter, as can be seen in <cit.>.

The uncertainties which may impact this bound are similar to those discussed at the end of Section 2.3. The impact of potential small-scale fluctuations in the IGM properties related to the distribution of ionizing sources and patchy reionization is increased, although our measurements do not rely on observations at z≥ 5. Also, for a given value of m_a, the effect of quantum pressure is stronger at the scales probed by high-resolution spectrographs: the "quantum force", being a third-order derivative, scales as ∼ k^3. However, while this effect could be problematic for m_a∼ 10^-22 eV, it should not be a concern for the mass range m_a=2.3 - 2.9×10^-21 eV, which is not covered by SDSS alone.

§ CONCLUSIONS In this article, we studied the Lyman-α forest phenomenology within the Fuzzy Dark Matter scenario. The non-linear physics and hydrodynamics were modeled using standard N-body simulations with initial conditions which reproduce the FDM linear physics. This approach is an approximation which neglects the "quantum force" present in the Madelung equation. We presented semi-quantitative arguments, based on the simulation outputs, which strongly suggest that at least for m_a ≳ 10^-22 eV the quantum force at the scales resolved by our simulations is always much smaller than the gravitational force. In agreement with the conclusions of <cit.>, we find indications that for m_a ≲ 10^-22 eV the quantum term may impact the non-linear predictions for P(k), and should therefore be included in the simulations in order to provide more reliable predictions.

Our main result is obtained by comparing the SDSS Lyman-α flux spectra and simulations, which permits us to derive the limit m_a>2.3×10^-21 eV (95 % CL). Including higher-resolution spectra from the XSHOOTER, HIRES and MIKE spectrographs extends the exclusion range up to m_a=2.9×10^-21 eV. This analysis was done with a similar method, and similar assumptions, as for the case of WDM models <cit.>. For this study we also derived the most appropriate m_a - m_X scaling law adapted to the SDSS Lyman-α data, which is different from the one previously found using the linear transfer function cutoffs. These constraints confirm that the Lyman-α forest provides sensitivity to FDM masses as large as, or larger than, other observables such as high-redshift galaxy luminosity functions <cit.>.
More importantly, the Lyman-α forest is an independent probe with different sources of observational and astrophysical uncertainties. Together with others, our result now provides severe constraints on models tailored to solve small-scale problems of the CDM paradigm, for which m_a ≲ 10^-21 eV seems to be necessary.

While finalizing this article, we were informed of the work presented in <cit.>. In their study, the authors focus on the high-resolution Lyman-α forest to derive a bound on m_a. Their conclusions are very similar to ours in terms of constraints on m_a. In particular, our m_X - m_a scaling law given by Eqn. <ref> lies in between the curves labeled "k_1/2" and "k_0.75" in their Fig. 3, and reasonably interpolates their result points given by an MCMC analysis. We point out the complementarity in terms of systematics between both works. Whereas the work of <cit.> uses exclusively high-resolution and high signal-to-noise Lyman-α spectra, including at redshifts above z=5, their statistical uncertainties are larger, and we note that some astrophysical uncertainties associated with the physics of the IGM and the effect of reionization are more important at high redshift. We also stress that the results of <cit.>, like ours, could be impacted by wave effects in the non-linear regime for m_a ≲ 10^-22 eV.

In addition to progress in the modeling of IGM physics, which is needed to improve the robustness of all Lyman-α-based studies, future work should also be devoted to studying the impact of non-linear wave effects for m_a ≲ 10^-22 eV. Barring this issue, our results provide among the best absolute lower bounds on the mass of bosonic dark matter. Even given this constraint, a number of interesting phenomena are predicted in the FDM framework that are distinct from CDM and deserve to be explored in the future. More elaborate, but plausible, scenarios such as mixed CDM-FDM models or models with mass-varying FDM are also worth investigating.

§ ACKNOWLEDGEMENTS We acknowledge fruitful discussions with Jim Rich on the physics of FDM, and with Christophe Magneville on the N-body simulation pipeline. We thank Daniel Grin for exchanges on the program. The authors are also grateful to Vid Irsic and Matteo Viel for providing the HIRES/MIKE-derived flux power spectra. We thank Volker Springel for providing us with the program. Snapshots were exploited using the software <cit.>. We acknowledge PRACE (Partnership for Advanced Computing in Europe) for access to thin and xlarge nodes on the Curie cluster based in France at the TGCC (Très Grand Centre de Calcul) under allocation numbers 2010PA2777, 2014102371 and 2012071264. We also acknowledge the French national access to high-performance computing GENCI (Grand Equipement National de Calcul Intensif) for access to the Curie cluster under allocation t2016047706. DJEM acknowledges support of a Royal Astronomical Society Postdoctoral Fellowship, hosted at King's College London.
http://arxiv.org/abs/1703.09126v2
{ "authors": [ "Eric Armengaud", "Nathalie Palanque-Delabrouille", "Christophe Yèche", "David J. E. Marsh", "Julien Baur" ], "categories": [ "astro-ph.CO" ], "primary_category": "astro-ph.CO", "published": "20170327145315", "title": "Constraining the mass of light bosonic dark matter using SDSS Lyman-$α$ forest" }
We develop proper correction formulas at the starting k-1 steps to restore the desired k^th-order convergence rate of the k-step BDF convolution quadrature for discretizing evolution equations involving a fractional-order derivative in time. The desired k^th-order convergence rate can be achieved even if the source term is not compatible with the initial data, which is allowed to be nonsmooth. We provide complete error estimates for the subdiffusion case α∈ (0,1), and sketch the proof for the diffusion-wave case α∈(1,2). Extensive numerical examples are provided to illustrate the effectiveness of the proposed scheme.

Keywords: fractional evolution equation, convolution quadrature, initial correction, backward difference formula, nonsmooth, incompatible data, error estimates

§ INTRODUCTION We are interested in the convolution quadrature (CQ) generated by high-order backward difference formulas (BDFs) for solving the fractional-order evolution equation (with 0<α<1) ∂_t^α(u(t)-v) - Au(t) = f(t), 0<t<T, u(0)=v, where f is a given function, and ∂_t^α u denotes the left-sided Riemann-Liouville fractional time derivative of order α, defined by (cf. <cit.>) ∂_t^α u(t) := 1/Γ(1-α) d/dt ∫_0^t (t-s)^-α u(s) ds, where Γ(z):=∫_0^∞ s^z-1 e^-s ds is the Gamma function. Under the initial condition u(0)=v, the Riemann-Liouville fractional derivative ∂_t^α(u-v) in the model (<ref>) is identical with the usual Caputo fractional derivative <cit.>. In the model (<ref>), the operator A denotes either the Laplacian Δ on a polyhedral domain Ω⊂ℝ^d (d=1,2,3) with a homogeneous Dirichlet boundary condition, or its discrete approximation Δ_h by the Galerkin finite element method. Thus the operator A satisfies the following resolvent estimate (cf. <cit.> and <cit.>) (z-A)^-1_L^2(Ω)→ L^2(Ω) ≤ c_ϕ |z|^-1, ∀ z∈Σ_ϕ, for all ϕ∈ (π/2,π), where Σ_θ:={z∈ℂ∖{0}: |arg z|< θ} is a sector of the complex plane ℂ. The model (<ref>) covers a broad range of applications related to anomalous diffusion discovered in the past two decades, e.g., conformational dynamics of protein molecules, contaminant transport in complex geological formations and relaxation in polymer systems; see <cit.>.

There has been much recent interest in developing high-order schemes for problem (<ref>), especially spectral methods <cit.> and discontinuous Galerkin methods <cit.>. In this work, we develop robust high-order schemes based on CQs generated by high-order BDFs. The CQ developed by Lubich <cit.> provides a flexible framework for constructing high-order methods to discretize the fractional derivative ∂_t^α u. By its very construction, it inherits the stability properties of linear multistep methods, which greatly facilitates the analysis of the resulting numerical scheme, in a way often strikingly opposed to standard quadrature formulas <cit.>. Hence, it has been widely applied to discretize the model (<ref>) and its variants, especially the CQ generated by BDF1 and BDF2 (with BDFk denoting the BDF of order k). In the literature, the CQ generated by BDF1 is commonly known as the Grünwald-Letnikov formula. By assuming that the solution is sufficiently smooth, which is equivalent to assuming smoothness of the initial data v and imposing certain compatibility conditions on the source term f at t=0, the stability and convergence of the numerical solutions of fractional evolution equations have been investigated in <cit.>.
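To make the lowest-order case concrete, the BDF1 CQ (i.e., the Grünwald-Letnikov formula) can be checked against a closed-form Riemann-Liouville derivative. The following minimal Python sketch (our own illustration, assuming numpy and scipy; not part of the paper) approximates ∂_t^α u for u(t)=t, whose exact derivative is t^1-α/Γ(2-α):

```python
import numpy as np
from math import gamma
from scipy.special import binom

alpha, T, N = 0.5, 1.0, 1000
tau = T / N
t = tau * np.arange(N + 1)
u = t.copy()                               # test function u(t) = t, so u(0) = 0
j = np.arange(N + 1)
b = (-1.0)**j * binom(alpha, j)            # coefficients of (1 - zeta)^alpha
approx = np.array([tau**(-alpha) * np.dot(b[:n + 1], u[n::-1])
                   for n in range(N + 1)])
exact = t**(1 - alpha) / gamma(2 - alpha)  # Riemann-Liouville derivative of t
# max error is dominated by t near 0 and decays like O(tau^(1-alpha));
# at any fixed t > 0 the rate is the full O(tau)
print(np.max(np.abs(approx - exact)))
```

Since u(0)=0 here, the scheme behaves as predicted by the smooth theory; the difficulties described next arise precisely when the data are incompatible.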
In general, if the source term f is not compatible with the given initial data, the solution u of the model (<ref>) will exhibit a weak singularity at t=0, which deteriorates the convergence rate of the numerical solutions. This has been widely recognized for fractional ODEs <cit.> and PDEs <cit.>. In particular, a direct implementation of the CQ generated by high-order BDFs for discretizing fractional evolution equations generally only yields first-order accuracy. To restore the theoretical rate O(τ^k) of BDFk, two different strategies have been proposed.

For fractional ODEs, one idea is to use starting weights <cit.> to correct the CQ in discretizing the fractional time derivative, cf. (<ref>) below: ∂̅_τ^α φ^n = 1/τ^α ∑_j=0^n b_n-j φ^j + ∑_j=0^M w_n,j φ^j. The starting term ∑_j=0^M w_n,j φ^j is to capture all leading singularities so as to recover a uniform O(τ^k) rate of the scheme, where M∈ℕ and the weights w_n,j generally depend on both α and k. This approach works well for fractional evolution ODEs; however, the extension of this approach to fractional evolution PDEs relies on expanding the solution into a power series in t, which requires imposing certain compatibility conditions on the source term.

The second idea is to split the source term f into f(t)=f(0) + (f(t)-f(0)) and to approximate f(0) by ∂̅_τ∂_t^-1f(0), with a similar treatment of the initial data v. This leads to a corrected BDF2 at the first step and restores the O(τ^2) accuracy for any fixed t_n>0. The idea was first introduced in <cit.> for solving a variant of formulation (<ref>) in the diffusion-wave case and then systematically developed in <cit.> for BDF2, and was recently extended to the model (<ref>) in <cit.> for both the subdiffusion and diffusion-wave cases. A higher-order extension of this approach is possible, but is still not available in the literature.

The goal of this work is to develop robust high-order BDFs for fractional evolution equations along the second strategy <cit.>. Instead of extending this strategy to each high-order BDF method separately, we develop a systematic strategy for correcting initial steps for high-order BDFs, based on a few simple criteria, cf. (<ref>) and (<ref>) for the model (<ref>). These criteria emerge naturally from solution representations, and are purely algebraic in nature and straightforward to construct. The explicit correction coefficients will be given for BDFs up to order 6. For BDFk, the correction is only needed at the starting k-1 steps and thus the resulting scheme is easy to implement.

We develop proper corrections for high-order BDFs for both subdiffusion, i.e., α∈(0,1), and diffusion wave, i.e., α∈(1,2). It is noteworthy that for α∈(1,2), high-order BDFs can be either unconditionally or conditionally stable, depending on the fractional order α, and in the latter case an explicit CFL condition on the time step size τ is given. Theoretically, the corrected BDFk achieves the k^th-order accuracy at any fixed time t=t_n (when t_n is bounded from below), and the error bound depends only on the data regularity, without assuming any compatibility conditions on the source term or extra regularity on the solution (cf. Theorems <ref> and <ref>). These results are supported by the numerical experiments in Section <ref>.

The rest of the paper is organized as follows. In Section <ref> we develop the correction for the subdiffusion case, including the motivations of the algebraic criteria for choosing the correction coefficients.
The extension of the approach to the diffusion-wave case is given in Section <ref>. Numerical results are presented in Section <ref> to illustrate the efficiency and robustness of the corrected schemes. Appendix <ref> gives an alternative interpretation of our correction method in terms of Lubich's convolution quadrature for operator-valued convolution integrals. Some lengthy proofs are given in Appendices B, C and D. Throughout, the notation c denotes a generic positive constant, whose value may differ at each occurrence, but it is always independent of the time step size τ and the solution u.

§ BDFS FOR SUBDIFFUSION AND ITS CORRECTION Let {t_n=nτ}_n=0^N be a uniform partition of the interval [0,T], with a time step size τ=T/N. The CQ generated by BDFk, k=1,…,6, approximates the fractional derivative ∂_t^α φ(t_n) by ∂̅_τ^α φ^n := 1/τ^α ∑_j=0^n b_j φ^n-j, with φ^n=φ(t_n), where the weights {b_j}_j=0^∞ are the coefficients in the power series expansion δ_τ(ζ)^α = 1/τ^α ∑_j=0^∞ b_j ζ^j, where δ_τ(ζ) := 1/τ ∑_j=1^k 1/j (1-ζ)^j. Below we often write δ(ζ)=δ_1(ζ). The coefficients b_j can be computed efficiently by the fast Fourier transform <cit.> or recursion <cit.>. Correspondingly, the BDF for solving (<ref>) seeks approximations U^n, n=1,…,N, to the exact solution u(t_n) by ∂̅_τ^α (U-v)^n - AU^n = f(t_n). If the solution u is smooth and has sufficiently many vanishing derivatives at t=0, then U^n converges at a rate of O(τ^k) <cit.>. However, it generally only exhibits first-order accuracy when solving fractional evolution equations, due to the weak solution singularity at t=0, even if the initial data v and the source term f are smooth <cit.>. This has been observed numerically <cit.>. For α=1, BDFk is known to be A(ϑ_k)-stable with angle ϑ_k = 90^∘, 90^∘, 86.03^∘, 73.35^∘, 51.84^∘, 17.84^∘ for k = 1,2,…,6, respectively <cit.>.

To restore the k^th-order accuracy, we correct BDFk at the starting k-1 steps by (as usual, the summation disappears if the upper index is smaller than the lower one) ∂̅_τ^α (U-v)^n - AU^n = a_n^(k)(Av+f(0)) + f(t_n) + ∑_ℓ=1^k-2 b_ℓ,n^(k) τ^ℓ ∂_t^ℓ f(0), 1≤ n≤ k-1, and ∂̅_τ^α (U-v)^n - AU^n = f(t_n), k≤ n≤ N, where a_n^(k) and b_ℓ,n^(k) are coefficients to be determined below. They are constructed so as to improve the accuracy of the overall scheme to O(τ^k) for a general initial data v∈ D(A) and a possibly incompatible right-hand side f. The only difference between (<ref>) and the standard scheme (<ref>) lies in the correction terms at the starting k-1 steps. Hence, the proposed scheme (<ref>) is easy to implement. In the scheme (<ref>), the derivative ∂_t^ℓ f(0) may be replaced by its (k-ℓ-1)^th-order finite difference approximation f^(ℓ), without sacrificing the accuracy. The correction in (<ref>) is minimal in the sense that there is no other correction scheme which modifies only the k-1 starting steps while retaining the O(τ^k) convergence. This does not rule out corrections with more starting steps. We give an interesting correction closely related to (<ref>) in Appendix <ref>.
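As remarked above, the CQ weights b_j can be generated by a single FFT: sample δ_τ(ζ)^α on a circle of radius ϱ<1 and read off the power series coefficients. A minimal numpy sketch (our own; the function name is hypothetical) is given below; for k=1 it reproduces the Grünwald-Letnikov weights (-1)^j binom(α,j):

```python
import numpy as np

def bdf_cq_weights(alpha, k, N, tau=1.0, rho=0.99):
    """First N weights b_0, ..., b_{N-1} of delta_tau(zeta)^alpha, where
    delta_tau(zeta) = (1/tau) * sum_{j=1}^k (1 - zeta)^j / j."""
    l = np.arange(N)
    zeta = rho * np.exp(2j * np.pi * l / N)      # samples on |zeta| = rho
    delta = sum((1.0 - zeta)**j / j for j in range(1, k + 1)) / tau
    g = delta**alpha   # principal branch is correct: delta stays in a sector < pi
    return (np.fft.ifft(g) / rho**l).real        # undo the radius-rho scaling

# sanity check against the Gruenwald-Letnikov (BDF1) weights
from scipy.special import binom
w = bdf_cq_weights(0.5, 1, 2048)[:8]
ref = (-1.0)**np.arange(8) * binom(0.5, np.arange(8))
assert np.allclose(w, ref, atol=1e-7)
```

The radius ϱ<1 controls the aliasing error (of size roughly ϱ^N), so a large N with ϱ close to 1 is a safe choice.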
§.§ Derivation of the correction criteria Now we derive the criteria for choosing the coefficients a_j^(k) and b_ℓ,j^(k), cf. (<ref>) and (<ref>), using the Laplace transform and its discrete analogue, the generating function <cit.>. We denote by f̂ the Laplace transform of a function f, and, for a given sequence (f^n)_n=0^∞, denote by f̃(ζ) its generating function, defined by f̃(ζ) := ∑_n=0^∞ f^n ζ^n.

First we split the right-hand side f into f(t) = f(0) + ∑_ℓ=1^k-2 t^ℓ/ℓ! ∂_t^ℓ f(0) + R_k, where R_k is the corresponding local truncation error, given by R_k = f(t) - f(0) - ∑_ℓ=1^k-2 t^ℓ/ℓ! ∂_t^ℓ f(0) = t^k-1/(k-1)! ∂_t^k-1 f(0) + t^k-1/(k-1)! ∗ ∂_t^k f, where ∗ denotes the Laplace convolution. Thus the function w(t):=u(t)-v satisfies ∂_t^α w - Aw = Av + f(0) + ∑_ℓ=1^k-2 t^ℓ/ℓ! ∂_t^ℓ f(0) + R_k, with w(0)=0. Since w(0)=0, the identity ∂_t^α w(z) = z^α ŵ(z) holds <cit.>, and thus by Laplace transform, we obtain z^α ŵ(z) - Aŵ(z) = z^-1(Av+f(0)) + ∑_ℓ=1^k-2 1/z^ℓ+1 ∂_t^ℓ f(0) + R̂_k(z). By the inverse Laplace transform, the function w(t) can be readily represented by w(t) = 1/2πi ∫_Γ_θ,δ e^zt K(z)(Av+f(0)) dz + 1/2πi ∫_Γ_θ,δ e^zt zK(z)(∑_ℓ=1^k-2 1/z^ℓ+1 ∂_t^ℓ f(0) + R̂_k(z)) dz, with the kernel function K(z) = z^-1(z^α-A)^-1. In the representation (<ref>), the contour Γ_θ,δ is defined by Γ_θ,δ = {z∈ℂ: |z|=δ, |arg z|≤θ} ∪ {z∈ℂ: z=ρe^±iθ, ρ≥δ}, oriented with an increasing imaginary part. Throughout, we choose the angle θ such that π/2 < θ < min(π, π/α), and hence z^α∈Σ_θ' with θ'=αθ<π for all z∈Σ_θ. By the resolvent estimate (<ref>), there exists a constant c which depends only on θ and α such that (z^α-A)^-1 ≤ c|z|^-α and K(z) ≤ c|z|^-1-α, ∀ z∈Σ_θ.

Next, we give a representation of the discrete solution W^n:=U^n-v, which follows from lengthy but simple computations, cf. Appendix <ref>. The discrete solution W^n:=U^n-v is represented by W^n = 1/2πi ∫_Γ^τ_θ,δ e^zt_n μ(e^-zτ) K(δ_τ(e^-zτ))(Av+f(0)) dz + 1/2πi ∫_Γ^τ_θ,δ e^zt_n δ_τ(e^-zτ) K(δ_τ(e^-zτ)) ∑_ℓ=1^k-2 (γ_ℓ(e^-zτ)/ℓ! + ∑_j=1^k-1 b_ℓ,j^(k) e^-zt_j) τ^ℓ+1 ∂_t^ℓ f(0) dz + 1/2πi ∫_Γ^τ_θ,δ e^zt_n δ_τ(e^-zτ) K(δ_τ(e^-zτ)) τ R̃_k(e^-zτ) dz, with the contour Γ_θ,δ^τ := {z∈Γ_θ,δ: |Im(z)|≤π/τ} (oriented with an increasing imaginary part), where the functions μ(ζ) and γ_ℓ(ζ) are respectively defined by μ(ζ) = δ(ζ)(ζ/(1-ζ) + ∑_j=1^k-1 a_j^(k) ζ^j) and γ_ℓ(ζ) = (ζ d/dζ)^ℓ 1/(1-ζ).

By comparing the kernel functions in (<ref>) and (<ref>), we deduce that in order to have O(τ^k) accuracy, the following three conditions should be satisfied for z∈Γ_θ,δ^τ: |δ_τ(e^-zτ)-z| ≤ c|z|^k+1 τ^k, |μ(e^-zτ)-1| ≤ c|z|^k τ^k, and |(γ_ℓ(e^-zτ)/ℓ! + ∑_j=1^k-1 b_ℓ,j^(k) e^-zt_j) τ^ℓ+1 - 1/z^ℓ+1| ≤ c|z|^k-ℓ-1 τ^k. Note that for BDFk, the estimate |δ_τ(e^-zτ)-z| ≤ c|z|^k+1 τ^k holds automatically (cf. Lemma <ref> in Appendix <ref>). It suffices to impose the following algebraic criteria (changing e^-zτ to ζ and zτ to 1-ζ): for BDFk, choose the coefficients {a_j^(k)}_j=1^k-1 and {b_ℓ,j^(k)}_j=1^k-1 such that |μ(ζ)-1| ≤ c|1-ζ|^k and |γ_ℓ(ζ)/ℓ! + ∑_j=1^k-1 b_ℓ,j^(k) ζ^j - 1/δ(ζ)^ℓ+1| ≤ c|1-ζ|^k-ℓ-1, ℓ=1,…,k-2, where the functions μ(ζ) and γ_ℓ(ζ) are defined in (<ref>). It can be verified that for BDFk, k=3,…,6, the leading singularities on the left-hand side of (<ref>) do cancel out, and thus the criterion can be satisfied.

§.§ Computation of the coefficients a_j^(k) and b_ℓ,j^(k) First we compute the coefficients a_j^(k).
To this end, we rewrite ∑_j=1^k-1 a_j^(k) ζ^j as ∑_j=1^k-1 a_j^(k) ζ^j = ζ ∑_j=0^k-2 c_j (1-ζ)^j. Consequently, by writing ζ = 1-(1-ζ), expanding the summation and collecting terms, we obtain (with the convention c_-2=c_-1=0)
μ(ζ) = ∑_j=1^k 1/j (1-ζ)^j (ζ/(1-ζ) + ζ ∑_j=0^k-2 c_j (1-ζ)^j)
= ∑_j=0^k-1 1/(j+1) (1-ζ)^j (1-(1-ζ) - ∑_j=0^k c_j-2 (1-ζ)^j + ∑_j=0^k-1 c_j-1 (1-ζ)^j)
= ∑_j=0^k-1 1/(j+1) (1-ζ)^j - ∑_j=1^k 1/j (1-ζ)^j - ∑_j=2^k-1 (∑_ℓ=0^j 1/(j-ℓ+1) c_ℓ-2)(1-ζ)^j + ∑_j=1^k-1 (∑_ℓ=0^j 1/(j-ℓ+1) c_ℓ-1)(1-ζ)^j + O((1-ζ)^k)
= 1 + ∑_j=1^k-1 (1/(j+1) - 1/j - ∑_ℓ=0^j 1/(j-ℓ+1) c_ℓ-2 + ∑_ℓ=0^j 1/(j-ℓ+1) c_ℓ-1)(1-ζ)^j + O((1-ζ)^k)
= 1 + ∑_j=1^k-1 (-1/(j(j+1)) - ∑_ℓ=1^j-1 1/(j-ℓ) c_ℓ-1 + ∑_ℓ=0^j-1 1/(j-ℓ) c_ℓ)(1-ζ)^j + O((1-ζ)^k).
Thus, by choosing c_ℓ, ℓ=0,…,k-2, such that ∑_ℓ=0^j-1 1/(j-ℓ) c_ℓ = 1/(j(j+1)) + ∑_ℓ=1^j-1 1/(j-ℓ) c_ℓ-1, j=1,…,k-1, Criterion (<ref>) follows. The coefficients a_j^(k) can be computed recursively from (<ref>) and (<ref>), and are given in Table <ref>. It is worth noting that the result for k=2 recovers exactly the correction in <cit.>, and thus our algebraic construction generalizes the approach in <cit.>.

Next we compute the coefficients b_ℓ,j^(k). First we expand γ_ℓ(ζ)/ℓ! - 1/δ(ζ)^ℓ+1 in powers of 1-ζ as γ_ℓ(ζ)/ℓ! - 1/δ(ζ)^ℓ+1 = ∑_j=0^k-ℓ-2 g_ℓ,j^(k) (1-ζ)^j + O(|1-ζ|^k-ℓ-1), and then choose the coefficients b_ℓ,j^(k), j=1,…,k-1, to satisfy (<ref>). To this end, we rewrite ∑_j=1^k-1 b_ℓ,j^(k) ζ^j in the following form: ∑_j=1^k-1 b_ℓ,j^(k) ζ^j = ζ ∑_j=0^k-2 d_ℓ,j^(k) (1-ζ)^j = ∑_j=0^k-2 d_ℓ,j^(k) (1-ζ)^j - ∑_j=1^k-1 d_ℓ,j-1^(k) (1-ζ)^j. Then it suffices to choose d_ℓ,0^(k) = -g_ℓ,0^(k), d_ℓ,j^(k) = d_ℓ,j-1^(k) - g_ℓ,j^(k) for j=1,…,k-ℓ-2, and d_ℓ,j^(k) = 0 for j=k-ℓ-1,…,k-2. Now the coefficients b_ℓ,j^(k) can be computed recursively using (<ref>), (<ref>) and (<ref>), and the results are given in Table <ref>. Note that for k=4 and 6, the coefficients b_k-2,j^(k), j=1,2,…,k-1, vanish identically.
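The recursion for the c_ℓ and the binomial expansion back to the a_j^(k) are easy to implement in exact arithmetic. The following Python sketch is our own rendering (names are hypothetical); for k=2 it returns a_1^(2)=1/2, the correction recovered from the literature as noted above:

```python
from fractions import Fraction
from math import comb

def correction_coeffs_a(k):
    # Solve sum_{l=0}^{j-1} c_l/(j-l) = 1/(j(j+1)) + sum_{l=1}^{j-1} c_{l-1}/(j-l)
    # for c_0, ..., c_{k-2}; the system is lower triangular with unit diagonal.
    c = []
    for j in range(1, k):
        rhs = Fraction(1, j * (j + 1))
        rhs += sum(c[l - 1] / (j - l) for l in range(1, j))   # known c_{l-1} terms
        rhs -= sum(c[l] / (j - l) for l in range(0, j - 1))   # known c_0..c_{j-2}
        c.append(rhs)                                         # this is c_{j-1}
    # Expand zeta * sum_j c_j (1 - zeta)^j = sum_m a_m zeta^m binomially.
    a = [Fraction(0)] * k
    for j, cj in enumerate(c):
        for m in range(j + 1):
            a[m + 1] += cj * comb(j, m) * (-1) ** m
    return a[1:]            # a_1^{(k)}, ..., a_{k-1}^{(k)}

print(correction_coeffs_a(2))   # [Fraction(1, 2)]
print(correction_coeffs_a(3))   # [Fraction(11, 12), Fraction(-5, 12)]
```

The b_ℓ,j^(k) follow analogously from the d-recursion, once the series coefficients g_ℓ,j^(k) are expanded.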
§.§ Error estimates Lastly, we state the error estimate for (<ref>). The proof relies on the splitting u(t_n)-U^n = w(t_n)-W^n and the representations (<ref>) and (<ref>), and then bounding each term using (<ref>). The details can be found in Appendix <ref>. Let Criteria (<ref>) and (<ref>) hold. Then for the solution U^n to the corrected scheme (<ref>), the following error estimate holds for any t_n>0: U^n-u(t_n)_L^2(Ω) ≤ cτ^k (t_n^α-k f(0)+Av_L^2(Ω) + ∑_ℓ=1^k-1 t_n^α+ℓ-k ∂_t^ℓ f(0)_L^2(Ω) + ∫_0^t_n (t_n-s)^α-1 ∂_s^k f(s)_L^2(Ω) ds).

Theorem <ref> implies that for any fixed t_n>0, the convergence rate is O(τ^k) for BDFk. In order to have a uniform rate O(τ^k), the following compatibility conditions are needed: f(0)+Av=0 and ∂_t^ℓ f(0)=0, ℓ=1,…,k-1, concurring with known results on convolution quadrature <cit.>. In the absence of these conditions, the error estimate deteriorates as t→0, which is consistent with the corresponding regularity theory: the solution (and its derivatives) exhibits a weak singularity at t=0 <cit.>. The error estimate in Theorem <ref> requires Av∈ L^2(Ω), i.e., the initial data v is reasonably smooth. Upon minor modifications of the proof in Appendix <ref>, one can derive a similar error estimate for v∈ L^2(Ω): U^n-u(t_n)_L^2(Ω) ≤ cτ^k (t_n^-k v_L^2(Ω) + ∑_ℓ=0^k-1 t_n^α+ℓ-k ∂_t^ℓ f(0)_L^2(Ω) + ∫_0^t_n (t_n-s)^α-1 ∂_s^k f(s)_L^2(Ω) ds).

§ CORRECTED BDF FOR DIFFUSION-WAVE PROBLEM Now we extend the strategy in Section <ref> to the diffusion-wave problem, i.e., 1<α<2: ∂_t^α(u(t)-v-tb) - Au(t) = f(t), with the initial conditions u(0)=v and u'(0)=b, where ∂_t^α u(t) := 1/Γ(2-α) d^2/dt^2 ∫_0^t (t-s)^1-α u(s) ds. The main differences from the subdiffusion case lie in the extra initial condition b and the better temporal smoothing property <cit.>. A straightforward implementation of BDFk can fail to yield the O(τ^k) rate, as in the subdiffusion case, and further requires unnecessarily high regularity of f. We shall develop a corrected scheme to take care of both issues. First, in order to fully exploit the extra smoothing, we rewrite the source term f as f=∂_t g with g=∂_t^-1 f. Then the diffusion-wave equation can be rewritten as ∂_t^α(u-v-tb) - Au = ∂_t g. Next we correct the starting k-1 steps, and seek approximations U^n, n=1,…,N, by ∂̅_τ^α(U-v-tb)^n - AU^n = a_n^(k) Av + c_n^(k) τ Ab + ∂̅_τ g^n + ∑_ℓ=1^k-2 b_ℓ,n^(k) τ^ℓ-1 ∂_t^ℓ-1 f(0), 1≤ n≤ k-1, and ∂̅_τ^α(U-v-tb)^n - AU^n = ∂̅_τ g^n, k≤ n≤ N. The scheme involves ∂̅_τ g^n, instead of f^n, which enables one to relax the regularity requirement on f. The correction terms are to ensure the desired O(τ^k) rate.

Now we derive the criterion for choosing the coefficients in (<ref>) using the Laplace transform and the generating function. First, since g(0)=0, g(t) can be split into g(t) = ∑_ℓ=1^k-2 t^ℓ/ℓ! ∂_t^ℓ g(0) + R_k = ∑_ℓ=1^k-2 t^ℓ/ℓ! ∂_t^ℓ-1 f(0) + R_k, where R_k is the local truncation error R_k = t^k-1/(k-1)! ∂_t^k-1 g(0) + t^k-1/(k-1)! ∗ ∂_t^k g(t). With the splitting (<ref>), the function w=u-v-tb satisfies ∂_t^α w - Aw = Av + tAb + ∑_ℓ=1^k-2 ∂_t (t^ℓ/ℓ!) ∂_t^ℓ-1 f(0) + ∂_t R_k. Then by Laplace transform, we derive a representation of the continuous solution w(t): w(t) = 1/2πi ∫_Γ_θ,δ e^zt K(z)(Av + z^-1 Ab) dz + 1/2πi ∫_Γ_θ,δ e^zt zK(z)(∑_ℓ=1^k-2 1/z^ℓ ∂_t^ℓ-1 f(0) + zR̂_k(z)) dz, where the angle θ∈(π/2,π) is sufficiently close to π/2 such that αθ<π, and δ is small.

Since BDFk is A(ϑ_k)-stable, the scheme (<ref>) is unconditionally stable for any α < α^*(k) := π/(π-ϑ_k). The critical value α^*(k) is 1.91, 1.68, 1.40 and 1.11 for k=3,…,6, respectively. In contrast, for α ≥ α^*(k), it is only conditionally stable. Note that for any α∈(1,2), the curve δ(e^-iθ)^α is not tangent to the real axis at the origin (i.e., θ close to zero). This naturally gives rise to the following condition. Let r(A) be the numerical radius of A, and let the following condition hold: (i) the fractional order α < α^*(k), or (ii) the fractional order α ≥ α^*(k) and τ^α r(A) ≤ c(α,k)-γ for some γ>0, where the constant c(α,k) is given by the intersection point of {δ(ζ)^α: |ζ|=1} with the negative real axis (closest to the origin). Condition <ref>(ii) specifies the CFL condition on the time step size τ (so it holds only if r(A)<∞). The CFL constant c(α,k) is not available in closed form, but can be determined numerically; see Fig. <ref> for the values.
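Since c(α,k) is the modulus of the first intersection of the curve {δ(ζ)^α: |ζ|=1} with the negative real axis, it can be located by scanning the phase of δ(e^-iθ); by conjugate symmetry it suffices to scan θ∈(0,π]. The numpy sketch below is our own illustration of this computation:

```python
import numpy as np

def cfl_constant(alpha, k, n=200001):
    """Numerically determine c(alpha, k): the modulus of the intersection of
    {delta(zeta)^alpha : |zeta| = 1} with the negative real axis closest to
    the origin. Returns inf if there is no intersection, i.e. when
    alpha < alpha^*(k) and the scheme is unconditionally stable."""
    theta = np.linspace(1e-8, np.pi, n)
    zeta = np.exp(-1j * theta)
    delta = sum((1.0 - zeta)**j / j for j in range(1, k + 1))
    phase = np.angle(delta)        # stays below pi - theta_k, so no wrapping
    target = np.pi / alpha         # alpha * arg(delta) = pi on the negative axis
    idx = np.where(np.diff(np.sign(phase - target)) != 0)[0]
    if idx.size == 0:
        return np.inf
    return np.min(np.abs(delta[idx]))**alpha

# For the experiment of Section 4.2 (k = 5, alpha = 1.5): with c(1.5, 5) ~ 1.58
# and r(A) ~ 1.2e5, the threshold tau_0 = (c / r(A))**(1/alpha) ~ 5.6e-4.
print(cfl_constant(1.5, 5))
```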
It is interesting to observe the qualitative differences between BDFs of different order. For example, the CFL constant c(α,6) of BDF6 does not approach zero even as α tends to 2, and there is an interval of α values for which the CFL constant c(α,4) for BDF4 is larger than c(α,3) for BDF3, i.e., BDF4 is less stringent in the time step size.

The next result gives the representation of the solution W^n=U^n-v-t_nb, which follows from simple yet lengthy computations, cf. Appendix <ref>. Under Condition <ref>, the discrete solution W^n:=U^n-v-t_nb is given by W^n = 1/2πi ∫_Γ^τ_θ,δ e^zt_n μ(e^-zτ) K(δ_τ(e^-zτ)) Av dz + 1/2πi ∫_Γ^τ_θ,δ e^zt_n K(δ_τ(e^-zτ)) δ_τ(e^-zτ)(γ_1(e^-zτ) + ∑_j=1^k-1 c_j^(k) e^-zt_j) τ^2 Ab dz + 1/2πi ∫_Γ^τ_θ,δ e^zt_n δ_τ(e^-zτ) K(δ_τ(e^-zτ)) ∑_ℓ=1^k-2 (δ(e^-zτ) γ_ℓ(e^-zτ)/ℓ! + ∑_j=1^k-1 b_ℓ,j^(k) e^-zt_j) τ^ℓ ∂_t^ℓ-1 f(0) dz + 1/2πi ∫_Γ^τ_θ,δ e^zt_n δ_τ(e^-zτ)^2 K(δ_τ(e^-zτ)) τ R̃_k(e^-zτ) dz, with the contour Γ_θ,δ^τ := {z∈Γ_θ,δ: |Im(z)|≤π/τ} (oriented with an increasing imaginary part), for some θ sufficiently close to π/2, where μ(ζ) and γ_ℓ(ζ) are defined in (<ref>). Proceeding as before, from the solution representations (<ref>) and (<ref>), we deduce the following algebraic criteria for choosing the coefficients a_j^(k), c_j^(k) and b_ℓ,n^(k): |μ(ζ)-1| ≤ c|1-ζ|^k, |γ_1(ζ) + ∑_j=1^k-1 c_j^(k) ζ^j - 1/δ(ζ)^2| ≤ c|1-ζ|^k-2, and |δ(ζ) γ_ℓ(ζ)/ℓ! + ∑_j=1^k-1 b_ℓ,j^(k) ζ^j - 1/δ(ζ)^ℓ| ≤ c|1-ζ|^k-ℓ, ℓ=1,2,…,k-2, where the functions μ(ζ) and γ_ℓ(ζ) are defined in (<ref>).

By comparing Criterion (<ref>) with (<ref>), and Criterion (<ref>) with (<ref>), the coefficients a_j^(k) are identical to those for α∈(0,1), and the c_j^(k) are identical to the b_1,j^(k) for α∈(0,1). However, due to the presence of the extra factor δ(ζ), the coefficients b_ℓ,j^(k) are different from those of the case 0<α<1, and have to be determined. The procedure for computing the b_ℓ,j^(k) is similar to that in Section <ref>, and the results are given in Table <ref>.

Lastly, we state the error estimate for the approximation U^n. The proof is similar to that of Theorem <ref>, but with g=∂_t^-1 f in place of f. It is briefly sketched in Appendix <ref>. Let Criteria (<ref>), (<ref>) and (<ref>) hold, and let Condition <ref> be fulfilled. Then for the solution U^n to (<ref>), the following error estimate holds for any t_n>0: U^n-u(t_n)_L^2(Ω) ≤ cτ^k (t_n^α-k f(0)+Av_L^2(Ω) + t_n^α+1-k Ab_L^2(Ω) + ∑_ℓ=1^k-2 t_n^α+ℓ-k ∂_t^ℓ f(0)_L^2(Ω) + ∫_0^t_n (t_n-s)^α-2 ∂_s^k-1 f(s)_L^2(Ω) ds). Theorem <ref> only requires the (k-1)^th-order time derivative of f, instead of the k^th-order derivative as in Theorem <ref>. Thus it indeed relaxes the regularity condition.

§ NUMERICAL EXPERIMENTS AND DISCUSSIONS Now we present numerical results to show the efficiency and accuracy of the schemes (<ref>) and (<ref>) in one spatial dimension, on the unit interval Ω=(0,1). In space, the problem is discretized with the piecewise linear Galerkin finite element method <cit.>: we divide Ω into M equally spaced subintervals with a mesh size h=1/M. Since the convergence behavior of the spatial discretization is well understood, we focus on the temporal convergence. In the computation, we fix the time step size τ at τ=t/N, where t is the time of interest. We measure the accuracy by the normalized error e^N = u(t_N)-U^N_L^2(Ω)/u(t_N)_L^2(Ω), where the reference solution u(t_N) is computed on a much finer mesh. All the computations are carried out in MATLAB R2015a on a personal laptop, and further, in order to observe errors beyond double precision, we employ the Multiprecision Computing Toolbox[<http://www.advanpix.com/>, last accessed on January 11, 2017.] for MATLAB.
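For readers who wish to reproduce the subdiffusion experiments in another environment: each step of the corrected scheme solves a linear system with the fixed matrix b_0 I - A, and only the starting k-1 steps carry correction terms. The sketch below is our own Python rendering (the paper's computations are in MATLAB); it assumes the hypothetical helper bdf_cq_weights from the earlier sketch, and arrays a_k, b_k holding the correction coefficients a_n^(k) and b_ℓ,n^(k):

```python
import numpy as np

def corrected_bdfk_subdiffusion(A, v, f, dfdt0, alpha, k, tau, N, a_k, b_k):
    """Time stepping for d_t^alpha (u - v) - A u = f(t) with corrected starting
    steps. A: (m, m) dense matrix; f(t): load vector; dfdt0[l]: l-th time
    derivative of f at t = 0; a_k[n], b_k[l][n]: correction coefficients
    (zero, by convention, for n >= k)."""
    b = bdf_cq_weights(alpha, k, N + 1, tau=tau)   # weights of delta_tau^alpha
    m = len(v)
    W = np.zeros((N + 1, m))                       # W^n = U^n - v, with W^0 = 0
    M = b[0] * np.eye(m) - A                       # fixed system matrix
    for n in range(1, N + 1):
        rhs = f(n * tau) + A @ v                   # A v moved to the right-hand side
        if n <= k - 1:                             # corrected starting steps
            rhs = rhs + a_k[n] * (A @ v + f(0.0))
            for l in range(1, k - 1):
                rhs = rhs + b_k[l][n] * tau**l * dfdt0[l]
        rhs = rhs - b[1:n + 1] @ W[n - 1::-1]      # CQ history term
        W[n] = np.linalg.solve(M, rhs)
    return W + v                                   # the approximations U^n
```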
§.§ Numerical results for subdiffusion In the subdiffusion case, we consider the following two examples: (a) v=x(1-x)∈ H^2(Ω)∩ H_0^1(Ω) and f≡0; (b) v≡0 and f(x,t)=cos(t)(1+χ_(0,1/2)(x)). The numerical results for case (a) by the corrected scheme (<ref>) are presented in Table <ref>, where the numbers in brackets denote the theoretical rates predicted by Theorem <ref>. The scheme converges steadily at an O(τ^k) rate for all BDFs, which agrees well with the theory and clearly shows its robustness. Surprisingly, the asymptotic convergence of BDF6 kicks in only at a relatively small time step size, at N=50, which contrasts sharply with the other BDF schemes. Thus, in the preasymptotic regime, BDF5 is preferred over BDF6. To further illustrate Theorem <ref>, in Fig. <ref> we plot the numerical solution by BDF5 and its error profile. The solution decays first rapidly and then slowly, resulting in an initial layer. This layer clearly shows the limited temporal regularity of the solution at t=0, and as a result the approximation error near t=0 is predominant, partly confirming the prefactor t_n^α-k in Theorem <ref>.

To illustrate the impact of the initial correction, we present in Table <ref> the numerical results by the uncorrected BDF scheme (<ref>), and by two popular finite difference schemes, i.e., the L1 scheme <cit.> and the L1-2 scheme <cit.>. The uncorrected BDFk scheme can only achieve a first-order convergence, and all BDF schemes have almost identical accuracy, irrespective of the order k. This low-order convergence is due to the poor approximation at the initial steps, which persists in the numerical solutions at later steps. Meanwhile, for sufficiently smooth solutions, the L1 and L1-2 schemes converge at rates O(τ^2-α) and O(τ^3-α), respectively. For general problem data, the L1 scheme converges at an O(τ) rate <cit.>. The L1 and L1-2 schemes can only deliver an empirical O(τ) rate for case (a), due to the insufficient solution regularity for general problem data. Although not presented, the numerical results for other fractional orders are similar. Therefore, the correction is necessary in order to retain the desired rate, even for smooth initial data.

Next we consider the inhomogeneous problem in case (b). Since the source term f is smooth in time, Theorem <ref> is applicable, which predicts an O(τ^k) rate for the corrected BDFk scheme (<ref>). This is fully supported by the numerical results in Table <ref>. As before, the uncorrected scheme (<ref>) and the L1 and L1-2 schemes can only achieve an O(τ) rate, despite the smoothness of the problem data, cf. Table <ref>.

§.§ Numerical results for diffusion-wave Now we illustrate the corrected scheme (<ref>) on the following 1D diffusion-wave example: (c) v(x)=x(1-x), b(x)=sin(2π x) and f=e^t(1+χ_(0,1/2)(x)). For the diffusion-wave model, the scheme (<ref>) is only conditionally stable for α ≥ α^*(k) = π/(π-ϑ_k), with a stability threshold τ_0 = (c(α,k)/r(A))^1/α, according to Condition <ref>. To illustrate the sharpness of the threshold τ_0, or equivalently of the CFL constant c(α,k), we consider case (c) with k=5, α=1.5, h=1/M=1/100. The eigenvalues of the discrete Laplacian A are available in closed form <cit.>: λ_j^h = λ̅_j^h/(1 + (h^2/6) λ̅_j^h), with λ̅_j^h = -(4/h^2) sin^2(π j/(2M)), j=1,2,…,M-1. Thus the numerical radius r(A) = max_j |λ_j^h| ≈ 1.2×10^5, which together with the value c(α,k)=1.58 from Fig. <ref> gives a stability threshold τ_0 ≈ 5.60×10^-4. In Figs.
<ref> (a) and (b), we plot the numerical solutions computed by the corrected scheme (<ref>) with N=1700 (i.e., τ=5.88×10^-4) and with N=1800 (i.e., τ=5.55×10^-4), respectively. The scheme (<ref>) gives an unstable solution for N=1700 but a stable one for N=1800. This observation fully confirms the sharpness of the CFL constant c(α,k) in Condition <ref>. In Table <ref>, we present the L^2 error for α>α^* and small τ (such that the CFL condition is satisfied). The numerical results indicate the desired O(τ^k) rate, supporting the theory. For α<α^*(k)=π/(π-ϑ_k), with α^* being the critical value, the corrected scheme (<ref>) based on BDFk is unconditionally stable. Numerically, the corrected scheme (<ref>) converges steadily at an O(τ^k) rate, cf. Table <ref>, which agrees well with Theorem <ref>.

§ ACKNOWLEDGEMENTS The authors are grateful to Professor Christian Lubich for his valuable comments on an earlier version of the paper. The work of B. Jin is supported by UK EPSRC grant EP/M025160/1. The work of B. Li is supported by The Hong Kong Polytechnic University (A/C code: 1-ZE6L). The work of Z. Zhou is supported in part by the AFOSR MURI center for Material Failure Prediction through peridynamics and the ARO MURI Grant W911NF-15-1-0562.

§ AN ALTERNATIVE VIEW ON THE CORRECTION SCHEME (<REF>) In this appendix, we discuss the connection between our approach and the approach studied in <cit.>. The observation of this connection is due to Professor Christian Lubich. For the following integral and its convolution quadrature approximation u(t) = 1/2πi ∫_Γ_θ,δ F(z) e^zt dz and U^n = 1/2πi ∫_Γ_θ,δ^τ τ F(δ_τ(e^-τz)) e^zt_n dz, Lubich <cit.> showed the following error estimate away from t=0: |U^n-u(t_n)| ≤ Ct_n^ν-k-1 τ^k, where ν∈ℝ is a parameter in the kernel estimate |d^m/dz^m F(z)| ≤ C|z|^-ν-m, m≥0. In particular, if we choose F(z) = (z^α-A)^-1 z^-ℓ-1 ∂_t^ℓ f(0) in (<ref>), then u(t) = 1/2πi ∫_Γ_θ,δ (z^α-A)^-1 z^-ℓ-1 ∂_t^ℓ f(0) e^zt dz and U^n = 1/2πi ∫_Γ_θ,δ^τ τ (δ_τ(e^-τz)^α-A)^-1 δ_τ(e^-τz)^-ℓ-1 ∂_t^ℓ f(0) e^zt_n dz are the integral representations of the solutions of ∂_t^α u(t) - Au(t) = t^ℓ/ℓ! ∂_t^ℓ f(0), u(0)=0, and ∂̅_τ^α U^n - AU^n = ω_n^(ℓ) ∂_t^ℓ f(0), U^0=0, respectively, which are solutions and approximations of (<ref>) corresponding to a single component in the source splitting (<ref>). The weights {ω_n^(ℓ)}_n=0^∞ are the coefficients in the power series expansion δ_τ(ζ)^-ℓ-1 = ∑_n=0^∞ ω_n^(ℓ) ζ^n. By <cit.>, the approximation {U^n} has the desired accuracy (<ref>). Our scheme (<ref>) is connected to (<ref>) as follows: we replace δ(ζ)^-ℓ-1 by an O(|ζ-1|^k-ℓ-1)-close approximation γ_ℓ(ζ)/ℓ! + ∑_j=1^k-1 b_ℓ,j^(k) ζ^j, cf. (<ref>). Our choice of the kernel leads to ∂̅_τ^α U^n - AU^n = t_n^ℓ/ℓ! ∂_t^ℓ f(0) + b_ℓ,n^(k) τ^ℓ ∂_t^ℓ f(0), which formally differs from (<ref>) only in the starting k-1 steps, as b_ℓ,n^(k)=0 for n≥k in our scheme. Further, (<ref>) is minimal (or optimal) in the sense that it is the unique correction scheme that only modifies the starting k-1 steps while having an accuracy of O(τ^k).

§ PROOF OF THEOREM <REF> We need the following estimates on the function δ_τ(e^-zτ). Let α∈(0,2).
For any ε>0, there exists θ_ε∈(π/2,π) such that for any fixed θ∈(π/2,θ_ε), there exist positive constants c, c_1, c_2 (independent of τ) such that c_1|z| ≤ |δ_τ(e^-zτ)| ≤ c_2|z|, δ_τ(e^-zτ)∈Σ_π-ϑ_k+ε, |δ_τ(e^-zτ)-z| ≤ cτ^k|z|^k+1, and |δ_τ(e^-zτ)^α-z^α| ≤ cτ^k|z|^k+α, ∀ z∈Γ_θ,δ^τ.

Since the function δ(ζ)/(1-ζ) has no zero in a neighborhood 𝒩 of the unit circle <cit.>, and for θ sufficiently close to π/2, e^-zτ lies in the neighborhood 𝒩, there are positive constants c_1' and c_2' such that c_1' ≤ |δ(e^-zτ)|/|1-e^-zτ| = |δ_τ(e^-zτ)|/|(1-e^-zτ)/τ| ≤ c_2', ∀ z∈Γ_θ,δ^τ. Since c_1|zτ| ≤ |1-e^-zτ| ≤ c_2|zτ| for z∈Γ_θ,δ^τ, the first estimate follows. When |ζ|≤1 and ζ≠1, we have δ_τ(ζ)∈Σ_π-ϑ_k for the A(ϑ_k)-stable BDFk <cit.>. Hence, by expressing e^-zτ as e^-|z|τcos(θ) e^-i|z|τsin(θ), we have |δ_τ(e^-zτ) - δ_τ(e^-i|z|τsin(θ))| = |δ_τ(e^-|z|τcos(θ) e^-i|z|τsin(θ)) - δ_τ(e^-i|z|τsin(θ))| ≤ ce^-σ|z|τcos(θ) |δ_τ'(e^-σ|z|τcos(θ) e^-i|z|τsin(θ)) zτcos(θ)| for some σ∈(0,1), by the mean value theorem. For θ close to π/2 and z∈Γ_θ,δ^τ, by Taylor expansion, |z|τ ≤ π/sinθ and the first estimate, we have τ|δ_τ'(e^-σ|z|τcos(θ) e^-i|z|τsin(θ))| ≤ c and |δ_τ(e^-i|z|τsin(θ))| ≥ c|z|. Consequently, we deduce |δ_τ(e^-|z|τcos(θ) e^-i|z|τsin(θ)) - δ_τ(e^-i|z|τsin(θ))| ≤ c|cos(θ)| |δ_τ(e^-i|z|τsin(θ))| ≤ c|θ-π/2| |δ_τ(e^-i|z|τsin(θ))|. Hence, δ_τ(e^-τz) lies in a sector Σ_π-ϑ_k+c|θ-π/2|. If θ>π/2 is sufficiently close to π/2, then c|θ-π/2|<ε. This proves the second estimate. The third estimate is given in <cit.>. The last estimate follows from |δ_τ(e^-zτ)^α - z^α| = α|∫_z^δ_τ(e^-zτ) ξ^α-1 dξ| ≤ α max_ξ |ξ|^α-1 |δ_τ(e^-zτ)-z|, where ξ lies in the line segment with end points δ_τ(e^-zτ) and z. Since Re δ_τ(e^-zτ)>0 for z∈Γ_θ,δ^τ with Re z>0, we have by the first estimate that |ξ|^α-1 ≤ max(|z|,|δ_τ(e^-zτ)|)^α-1 ≤ c|z|^α-1. This inequality and (<ref>) yield the last estimate.

Proof of Theorem <ref>. The functions W^n, n=1,…,N, satisfy (with W^0=0): ∂̅_τ^α W^n - AW^n = (1+a_n^(k))(Av+f(0)) + ∑_ℓ=1^k-2 (t_n^ℓ/ℓ! + b_ℓ,n^(k) τ^ℓ) ∂_t^ℓ f(0) + R_k(t_n) for 1≤ n≤ k-1, and ∂̅_τ^α W^n - AW^n = Av+f(0) + ∑_ℓ=1^k-2 t_n^ℓ/ℓ! ∂_t^ℓ f(0) + R_k(t_n) for k≤ n≤ N. By multiplying both sides by ζ^n, summing over n and collecting terms, we obtain ∑_n=1^∞ ζ^n ∂̅_τ^α W^n - ∑_n=1^∞ AW^n ζ^n = (∑_n=1^∞ ζ^n + ∑_j=1^k-1 a_j^(k) ζ^j)(Av+f(0)) + ∑_ℓ=1^k-2 (∑_n=1^∞ t_n^ℓ/ℓ! ζ^n + ∑_j=1^k-1 b_ℓ,j^(k) τ^ℓ ζ^j) ∂_t^ℓ f(0) + R̃_k(ζ) = (ζ/(1-ζ) + ∑_j=1^k-1 a_j^(k) ζ^j)(Av+f(0)) + ∑_ℓ=1^k-2 (γ_ℓ(ζ)/ℓ! + ∑_j=1^k-1 b_ℓ,j^(k) ζ^j) τ^ℓ ∂_t^ℓ f(0) + R̃_k(ζ), where R̃_k(ζ) = ∑_n=1^∞ R_k(t_n) ζ^n, and we have used the elementary identities ∑_n=1^∞ ζ^n = ζ/(1-ζ) and ∑_n=1^∞ n^ℓ ζ^n = (ζ d/dζ)^ℓ 1/(1-ζ) =: γ_ℓ(ζ). Next we simplify the summations on both sides. Since W^0=0, by the convolution rule, ∑_n=1^∞ ζ^n ∂̅_τ^α W^n = δ_τ(ζ)^α W̃(ζ), and consequently, we obtain W̃(ζ) = K(δ_τ(ζ))[τ^-1 μ(ζ)(Av+f(0)) + ∑_ℓ=1^k-2 δ_τ(ζ)(γ_ℓ(ζ)/ℓ! + ∑_j=1^k-1 b_ℓ,j^(k) ζ^j) τ^ℓ ∂_t^ℓ f(0) + δ_τ(ζ) R̃_k(ζ)], where the operator K is given by (<ref>), and the functions μ(ζ) and γ_ℓ(ζ) are given by (<ref>). Since W̃(ζ) is analytic with respect to ζ in the unit disk of the complex plane, Cauchy's integral formula implies the following representation for arbitrary ϱ∈(0,1): W^n = 1/2πi ∫_|ζ|=ϱ ζ^-n-1 W̃(ζ) dζ = τ/2πi ∫_Γ^τ e^zt_n W̃(e^-zτ) dz, where the second equality follows from the change of variable ζ=e^-zτ, and Γ^τ is given by Γ^τ := {z = -ln(ϱ)/τ + iy: y∈ℝ, |y|≤π/τ}. Note that (1) η(ζ) := δ(ζ)/(1-ζ) is a polynomial without roots in a neighborhood 𝒩 of the unit circle <cit.>.
Thus, η(ζ)^α is analytic in 𝒩. (2) By choosing the angle θ sufficiently close to π/2, ϱ sufficiently close to 1 and 0<δ<-ln(ϱ)/τ, the function e^-τz lies in 𝒩 for z∈Σ_θ,δ^τ = {z∈Σ_θ: |z|≥δ, |Im(z)|≤π/τ, Re(z)≤-ln(ϱ)/τ}; (3) (1-e^-τz)^α is analytic for z∈ℂ∖(-∞,0] ⊃ Σ_θ,δ^τ. These properties ensure that δ_τ(e^-τz)^α = τ^-α (1-e^-τz)^α η(e^-τz)^α is analytic for z∈Σ_θ,δ^τ. By choosing ε small enough, Lemma <ref> implies that 0 ≠ δ_τ(e^-τz)^α ∈ Σ_α(π-ϑ_k+ε) ⊂ Σ_π-ε for z∈Σ_θ,δ^τ. Thus K(δ_τ(e^-τz)) = δ_τ(e^-τz)^-1 (δ_τ(e^-τz)^α-A)^-1 is analytic for z∈Σ_θ,δ^τ, which is a region enclosed by Γ^τ, Γ^τ_θ,δ and the two lines Γ_±^τ := ℝ ± iπ/τ (oriented from left to right). Since the values of e^zt_n W̃(e^-zτ) on Γ_±^τ coincide, Cauchy's theorem allows deforming the contour Γ^τ to Γ_θ,δ^τ in the integral (<ref>) to obtain the desired representation.

§ PROOF OF THEOREM <REF> Let Criteria (<ref>) and (<ref>) hold. Then for z∈Γ_θ,δ^τ, there hold μ(e^-zτ)K(δ_τ(e^-zτ)) - K(z) ≤ cτ^k |z|^k-1-α and (δ_τ(e^-zτ)^α-A)^-1 (γ_ℓ(e^-zτ)/ℓ! + ∑_j=1^k-1 b_ℓ,j^(k) e^-jzτ) τ^ℓ+1 - z^-ℓ K(z) ≤ cτ^k |z|^k-ℓ-1-α.

Proof. Since |1-e^-zτ| ≤ cτ|z| for z∈Γ_θ,δ^τ, by Criterion (<ref>), there holds |μ(e^-zτ)-1| ≤ c|1-e^-zτ|^k ≤ cτ^k|z|^k. Meanwhile, by the triangle inequality, we have K(δ_τ(e^-zτ)) - K(z) = δ_τ(e^-zτ)^-1 (δ_τ(e^-zτ)^α-A)^-1 - z^-1(z^α-A)^-1 ≤ |δ_τ(e^-zτ)^-1 - z^-1| (δ_τ(e^-zτ)^α-A)^-1 + |z|^-1 (δ_τ(e^-zτ)^α-A)^-1 - (z^α-A)^-1. The identity (δ_τ(e^-zτ)^α-A)^-1 - (z^α-A)^-1 = (z^α - δ_τ(e^-zτ)^α)(δ_τ(e^-zτ)^α-A)^-1 (z^α-A)^-1, Lemma <ref> and the resolvent estimate (<ref>) directly imply K(δ_τ(e^-zτ)) - K(z) ≤ cτ^k |z|^k-1-α. Consequently, we obtain the estimate (<ref>) by μ(e^-zτ)K(δ_τ(e^-zτ)) - K(z) ≤ |μ(e^-zτ)-1| K(δ_τ(e^-zτ)) + K(δ_τ(e^-zτ)) - K(z) ≤ cτ^k |z|^k-1-α, ∀ z∈Γ_θ,δ^τ. Next we show the estimate (<ref>). By Lemma <ref>, there holds |δ_τ(e^-zτ)^ℓ+1 - z^ℓ+1| ≤ c|δ_τ(e^-zτ)-z| |z|^ℓ ≤ cτ^k |z|^k+ℓ+1, ∀ z∈Γ_θ,δ^τ. By Criterion (<ref>), there holds |γ_ℓ(e^-zτ)/ℓ! + ∑_j=1^k-1 b_ℓ,j^(k) e^-jzτ - 1/δ(e^-zτ)^ℓ+1| ≤ cτ^k-ℓ-1 |z|^k-ℓ-1, ∀ z∈Γ_θ,δ^τ. Consequently, for any z∈Γ_θ,δ^τ, we have (δ_τ(e^-zτ)^α-A)^-1 (γ_ℓ(e^-zτ)/ℓ! + ∑_j=1^k-1 b_ℓ,j^(k) e^-jzτ) τ^ℓ+1 - z^-ℓ K(z) ≤ (δ_τ(e^-zτ)^α-A)^-1 [(γ_ℓ(e^-zτ)/ℓ! + ∑_j=1^k-1 b_ℓ,j^(k) e^-jzτ) τ^ℓ+1 - δ_τ(e^-zτ)^-ℓ-1] + δ_τ(e^-zτ)^-ℓ K(δ_τ(e^-zτ)) - z^-ℓ K(z) ≤ cτ^k |z|^k-ℓ-1-α. This completes the proof of the lemma.

Proof of Theorem <ref>. By (<ref>) and (<ref>), we split U^n-u(t_n) = W^n - w(t_n) into W^n - w(t_n) = I_1 + ∑_ℓ=1^k-2 I_2,ℓ - I_3 + I_4, where the terms I_1,…,I_4 are given by I_1 = 1/2πi ∫_Γ^τ_θ,δ e^zt_n (μ(e^-zτ)K(δ_τ(e^-zτ)) - K(z))(Av+f(0)) dz, I_2,ℓ = 1/2πi ∫_Γ^τ_θ,δ e^zt_n [δ_τ(e^-zτ)(γ_ℓ(e^-zτ)/ℓ! + ∑_j=1^k-1 b_ℓ,j^(k) e^-zτj) τ^ℓ+1 K(δ_τ(e^-zτ)) - z^-ℓ K(z)] ∂_t^ℓ f(0) dz, I_3 = 1/2πi ∫_Γ_θ,δ∖Γ^τ_θ,δ e^zt_n K(z)(Av+f(0) + ∑_ℓ=1^k-2 z^-ℓ ∂_t^ℓ f(0)) dz, and I_4 = 1/2πi ∫_Γ^τ_θ,δ e^zt_n (δ_τ(e^-zτ)^α-A)^-1 τ R̃_k(e^-zτ) dz - 1/2πi ∫_Γ_θ,δ e^zt_n (z^α-A)^-1 R̂_k(z) dz. It suffices to bound these terms separately.
By Lemma <ref>, and choosing δ=t_n^-1 in the contour Γ_θ,δ^τ, we bound the first term I_1 by I_1_L^2(Ω) ≤ cτ^k Av+f(0)_L^2(Ω) (∫_δ^π/(τsinθ) e^rt_ncosθ r^k-1-α dr + ∫_-θ^θ e^δt_n|cosψ| δ^k-α dψ) ≤ cτ^k (t_n^α-k + δ^k-α) Av+f(0)_L^2(Ω) ≤ cτ^k t_n^α-k Av+f(0)_L^2(Ω). By appealing to Lemma <ref> again and choosing δ=t_n^-1 in Γ_θ,δ^τ, we bound the terms I_2,ℓ by I_2,ℓ_L^2(Ω) ≤ cτ^k ∂_t^ℓ f(0)_L^2(Ω) (∫_δ^π/(τsinθ) e^rt_ncosθ r^k-ℓ-1-α dr + ∫_-θ^θ e^δt_n|cosψ| δ^k-ℓ-α dψ) ≤ cτ^k t_n^α+ℓ-k ∂_t^ℓ f(0)_L^2(Ω), ℓ=1,2,…,k-2. Direct computation yields the following estimate on I_3: I_3_L^2(Ω) ≤ cτ^k (t_n^α-k Av+f(0)_L^2(Ω) + ∑_ℓ=1^k-2 t_n^α+ℓ-k ∂_t^ℓ f(0)_L^2(Ω)). The term I_4 is the error of the numerical solution with a compatible right-hand side R_k. Upon recalling the definition of R_k in (<ref>), we use the splitting R_k = t^k-1/(k-1)! ∂_t^k-1 f(0) + t^k-1/(k-1)! ∗ ∂_t^k f(t) =: R_k^1 + R_k^2. Then we have I_4 = I_4^1 + I_4^2 with I_4^i = 1/2πi ∫_Γ^τ_θ,δ e^zt_n (δ_τ(e^-zτ)^α-A)^-1 τ R̃_k^i(e^-zτ) dz - 1/2πi ∫_Γ_θ,δ e^zt_n (z^α-A)^-1 R̂_k^i(z) dz. By repeating the preceding argument and (<ref>), we have the estimate for I_4^1: I_4^1_L^2(Ω) ≤ cτ^k t_n^α-1 ∂_t^k-1 f(0)_L^2(Ω), and, using the argument in <cit.>, I_4^2_L^2(Ω) ≤ cτ^k ∫_0^t_n (t_n-s)^α-1 ∂_s^k f(s)_L^2(Ω) ds. This completes the proof of the theorem.

§ PROOF OF THEOREM <REF> Using the splitting (<ref>), the functions W^n, n=1,…,N, satisfy (with W^0=0): ∂̅_τ^α W^n - AW^n = (1+a_n^(k))Av + (t_n + τc_n^(k))Ab + ∑_ℓ=1^k-2 (∂̅_τ (t_n^ℓ/ℓ!) + b_ℓ,n^(k) τ^ℓ-1) ∂_t^ℓ-1 f(0) + ∂̅_τ R_k(t_n) for 1≤ n≤ k-1, and ∂̅_τ^α W^n - AW^n = Av + t_nAb + ∑_ℓ=1^k-2 ∂̅_τ (t_n^ℓ/ℓ!) ∂_t^ℓ-1 f(0) + ∂̅_τ R_k(t_n) for k≤ n≤ N. By multiplying both sides by ζ^n and summing over n, we obtain ∑_n=1^∞ ζ^n ∂̅_τ^α W^n - ∑_n=1^∞ AW^n ζ^n = (∑_n=1^∞ ζ^n + ∑_j=1^k-1 a_j^(k) ζ^j)Av + (∑_n=1^∞ τnζ^n + ∑_j=1^k-1 τc_j^(k) ζ^j)Ab + ∑_ℓ=1^k-2 (∑_n=1^∞ ∂̅_τ (t_n^ℓ/ℓ!) ζ^n + ∑_j=1^k-1 b_ℓ,j^(k) τ^ℓ-1 ζ^j) ∂_t^ℓ-1 f(0) + ∑_n=1^∞ ∂̅_τ R_k(t_n) ζ^n. Using the elementary identities in (<ref>), the convolution rule ∑_n=1^∞ ζ^n ∂̅_τ^α W^n = δ_τ(ζ)^α W̃(ζ), and ∑_n=1^∞ ζ^n ∂̅_τ (t_n^ℓ/ℓ!) = δ_τ(ζ) ∑_n=0^∞ (t_n^ℓ/ℓ!) ζ^n = δ(ζ) (τ^ℓ-1/ℓ!) γ_ℓ(ζ), we derive W̃(ζ) = K(δ_τ(ζ))[τ^-1 μ(ζ) Av + δ_τ(ζ)(γ_1(ζ) + ∑_j=1^k-1 c_j^(k) ζ^j) τ Ab + ∑_ℓ=1^k-2 δ_τ(ζ)(δ(ζ)γ_ℓ(ζ)/ℓ! + ∑_j=1^k-1 b_ℓ,j^(k) ζ^j) τ^ℓ-1 ∂_t^ℓ-1 f(0) + δ_τ(ζ)^2 R̃_k(ζ)]. Under Condition <ref>(i), by choosing ε small enough, Lemma <ref> implies that 0 ≠ δ_τ(e^-τz)^α ∈ Σ_α(π-ϑ_k+ε) ⊂ Σ_π-ε for z∈Σ_θ,δ^τ. Under Condition <ref>(ii), we have dist(δ(e^-zτ)^α, τ^α S(A)) > 0 (cf. Appendix <ref>), where S(A) denotes the closure of the spectrum of A in the complex plane ℂ. In either case, the operator K(δ_τ(e^-τz)) = δ_τ(e^-τz)^-1 (δ_τ(e^-τz)^α-A)^-1 is analytic for z∈Σ_θ,δ^τ, which is the region enclosed by the four curves Γ^τ_θ,δ, -ln(ϱ)/τ + iℝ and ℝ ± iπ/τ (for θ and ϱ sufficiently close to π/2 and 1, respectively). Then, as in the proof of Theorem <ref>, the assertion follows from Cauchy's integral formula and the change of variables ζ = e^-zτ.

§ PROOF OF THEOREM <REF> Under Condition <ref>(i), Theorem <ref> can be proved in the same way as Theorem <ref>, using (<ref>) and (<ref>). Under Condition <ref>(ii), it can be proved analogously, provided that the following resolvent estimate holds: (δ_τ(e^-zτ)^α-A)^-1 ≤ c|z|^-α, ∀ z∈Γ^τ_θ,δ. To prove (<ref>), we use the following estimate (cf. <cit.>): (δ_τ(e^-zτ)^α-A)^-1 = τ^α (δ(e^-zτ)^α - τ^α A)^-1 ≤ cτ^α dist(δ(e^-zτ)^α, τ^α S(A))^-1, ∀ z∈Γ^τ_θ,δ, where S(A) denotes the closure of the spectrum of A in ℂ. For the discrete Laplacian A=Δ_h, we have S(A)=[-r(A),0].
Since the angle between the contour δ(e^-iξτ)^α, ξ∈[-π/τ,π/τ], and the segment [-r(A),0] is (1-α/2)π>0, it follows that, for small κ, dist(δ(e^-iξτ)^α, τ^α S(A)) ≈ |δ(e^-iξτ)^α| sin[(1-α/2)π] ≥ c|δ(e^-iξτ)^α| for |ξ|τ ≤ κ. Furthermore, the CFL condition <ref>(ii) implies dist(δ(e^-iξτ)^α, τ^α S(A)) ≥ c ≥ c|ξτ|^α for κ ≤ |ξ|τ ≤ π. Let Γ^τ_θ = {z∈ℂ: |arg(z)|=θ, 0 ≤ |z|sin(θ) ≤ π/τ}. Then the angle between the contour δ(e^-zτ)^α, z∈Γ^τ_θ, and the segment [-r(A),0] is π-αθ>0 (if θ is close to π/2). For small κ and z∈Γ^τ_θ with |z|τsin(θ) ≤ κ, we have dist(δ(e^-zτ)^α, τ^α S(A)) ≈ |δ(e^-zτ)^α| sin(π-αθ) ≥ c|δ(e^-zτ)^α| ≥ c|zτ|^α. Estimate (<ref>) implies |δ(e^-zτ) - δ(e^-i|z|τsin(θ))| ≤ c|θ-π/2|, and thus |δ(e^-zτ)^α - δ(e^-i|z|τsin(θ))^α| ≤ c|θ-π/2| min(|δ(e^-zτ)|^α-1, |δ(e^-i|z|τsin(θ))|^α-1) ≤ c|θ-π/2| |zτ|^α-1 for z∈Γ_θ^τ. Hence, if z∈Γ_θ^τ and κ ≤ |z|τsin(θ) ≤ π, with θ close to π/2, we have dist(δ(e^-zτ)^α, τ^α S(A)) ≥ dist(δ(e^-i|z|τsin(θ))^α, τ^α S(A)) - |δ(e^-zτ)^α - δ(e^-i|z|τsin(θ))^α| ≥ c - c|θ-π/2| |zτ|^α-1 ≥ c - c|θ-π/2| |zτsin(θ)|^α-1 ≥ c - c|θ-π/2| max(κ,π)^α-1 ≥ c ≥ c|zτ|^α. Thus we have dist(δ(e^-zτ)^α, τ^α S(A)) ≥ c|zτ|^α for z∈Γ_θ^τ. This inequality and (<ref>) yield (<ref>) for z∈Γ^τ_θ,δ∩Γ^τ_θ. Further, if z∈Γ^τ_θ,δ∖Γ^τ_θ, then |z|=δ and -θ<arg(z)<θ, and Taylor expansion yields |δ(e^-zτ)|^α ≤ |zτ|^α ≤ δ^α τ^α. By choosing δ small, we have dist(δ(e^-zτ)^α, τ^α S(A)) ≥ λ_min τ^α - δ^α τ^α ≥ cτ^α, where λ_min is the smallest positive eigenvalue of the operator -A (which can be made independent of h). This and (<ref>) yield (δ_τ(e^-zτ)^α-A)^-1 ≤ c ≤ cδ^-α = c|z|^-α, ∀ z∈Γ^τ_θ,δ∖Γ^τ_θ. This completes the proof of (<ref>).
http://arxiv.org/abs/1703.08808v1
{ "authors": [ "Bangti Jin", "Buyang Li", "Zhi Zhou" ], "categories": [ "math.NA" ], "primary_category": "math.NA", "published": "20170326113528", "title": "Correction of high-order BDF convolution quadrature for fractional evolution equations" }
SCAN: Structure Correcting Adversarial Network for Organ Segmentation in Chest X-rays

Wei Dai, Joseph Doyle, Xiaodan Liang, Hao Zhang, Nanqing Dong, Yuan Li, Eric P. Xing

Petuum Inc. {wei.dai,joe.doyle,xiaodan.,hao.zhang,nanqing.dong,christy.li,eric.xing}@petuum.com
=====================================================================================================================================================================================================

Chest X-ray (CXR) is one of the most commonly prescribed medical imaging procedures, often with over 2–10x more scans than other imaging modalities such as MRI, CT scans, and PET scans. These voluminous CXR scans place significant workloads on radiologists and medical practitioners. Organ segmentation is a crucial step towards effective computer-aided detection on CXR. In this work, we propose the Structure Correcting Adversarial Network (SCAN) to segment lung fields and the heart in CXR images. SCAN incorporates a critic network to impose on the convolutional segmentation network the structural regularities emerging from human physiology. During training, the critic network learns to discriminate between the ground truth organ annotations and the masks synthesized by the segmentation network. Through this adversarial process the critic network learns the higher order structures and guides the segmentation model to achieve realistic segmentation outcomes. Extensive experiments show that our method produces highly accurate and natural segmentations. Using only the very limited training data available, our model reaches human-level performance without relying on any existing trained model or dataset. Our method also generalizes well to CXR images from a different patient population and disease profiles, surpassing the current state-of-the-art.

§ INTRODUCTION Chest X-ray (CXR) is one of the most common medical imaging modalities. Due to CXR's low cost and low dose of radiation, hundreds to thousands of CXRs are generated in a typical hospital daily, which creates significant diagnostic workloads. In the 2015/16 year, over 22.5 million X-ray images were requested in the UK's public medical sector, constituting over 55% of the total number of medical images and dominating all other imaging modalities such as computed tomography (CT) scans (4.5M) and MRI (3.1M) <cit.>. Among X-ray images, 8 million are chest X-rays, which translates to thousands of CXR readings per radiologist per year. The shortage of radiologists is well documented in the developed world <cit.>, not to mention developing countries <cit.>. Compared with more modern medical imaging technologies such as CT scans and PET scans, X-rays pose diagnostic challenges due to their low resolution and 2-D projection. It is therefore of paramount importance to develop computer-aided detection methods for chest X-rays to support clinical practitioners. An important step in computer-aided detection on CXR images is organ segmentation. The segmentation of the lung fields and the heart provides rich structural information about shape irregularities and size measurements that can be used to directly assess certain serious clinical conditions, such as cardiomegaly (enlargement of the heart), pneumothorax (lung collapse), pleural effusion, and emphysema.
Furthermore, explicit lung region masks can also improve interpretability of computer-aided detection, which is important for clinical use.One major challenge in CXR segmentation is to incorporate the implicit medical knowledge involved in contour determination. In the most basic sense, the positional relationship between the lung fields and the heart implies the adjacency of the lung and heart masks. Moreover, when medical experts annotate the lung fields, they look for certain consistent structures surrounding the lung fields (Fig. <ref>). Such prior knowledge helps resolve boundaries around less clear regions caused by pathological conditions or poor imaging quality, as can be seen in Fig. <ref>. Therefore, a successful segmentation model must effectively leverage global structural information to resolve the local details.Unfortunately, unlike natural images, there is very limited CXR training data with pixel-level annotations, due to the expensive label acquisition involving medical professionals. Furthermore, CXRs exhibit substantial variations across different patient populations, pathological conditions, as well as imaging technology and operation. Finally, CXR images are gray-scale and are drastically different from natural images, which may limit the transferability of existing models. Existing approaches to CXR organ segmentation generally rely on hand-crafted features that can be brittle when applied to a different patient population, disease profile, or image quality. Furthermore, these methods do not explicitly balance local information with global structure in a principled way, which is critical to achieve realistic segmentation outcomes suitable for diagnostic tasks. In this work, we propose to use the Structure Correcting Adversarial Network (SCAN) framework that incorporates a critic network to guide the convolutional segmentation network to achieve accurate and realistic chest X-ray organ segmentation. By employing a convolutional network approach to organ segmentation, we side-step the problems faced by existing approaches based on ad hoc feature engineering. Our convolutional segmentation model alone can achieve performance competitive with existing methods. However, the segmentation model alone cannot capture sufficient global structure to produce natural contours due to the limited training data. To impose regularization based on the physiological structures, we introduce a critic network that discriminates the ground truth annotations from the masks synthesized by the segmentation network. The segmentation network and the critic network can be trained end-to-end. Through this adversarial process the critic network learns the higher order regularities and effectively transfers this global information back to the segmentation model to achieve realistic segmentation outcomes. We demonstrate that SCAN produces highly realistic and accurate segmentation outcomes even when trained on a very small dataset, without relying on any existing models or data from other domains. With the global structural information, our segmentation model is able to resolve difficult boundaries that require strong prior knowledge. Using intersection-over-union (IoU) as the evaluation metric, SCAN improves the segmentation model by 1.8% in absolute terms and achieves 94.7% for the lung fields and 86.6% for the heart, both of which are the new state-of-the-art by a single model, competitive with human experts (94.6% for the lungs and 87.8% for the heart).
We further show that the SCAN model is more robust when applied to a new, unseen dataset, outperforming the vanilla segmentation model by 4.3%. § RELATED WORKOur review focuses on two lines of literature most relevant to our problem: lung field segmentation and semantic segmentation with convolutional neural networks.Lung Field Segmentation. Existing work on lung field segmentation broadly falls into three categories <cit.>. (1) Rule-based systems apply a pre-defined set of thresholding and morphological operations that are derived from heuristics <cit.>. (2) Pixel classification methods classify the pixels as inside or outside of the lung fields based on pixel intensities <cit.>. (3) More recent methods are based on deformable models such as the Active Shape Model (ASM) and the Active Appearance Model <cit.>. Their performance can be highly variable due to the tuning parameters and whether the shape model is initialized near the actual boundaries. Also, the high contrast between the rib cage and the lung fields can cause the model to be trapped in local minima. Our approach uses convolutional networks to perform end-to-end training from images to pixel masks without using ad hoc features. The proposed adversarial training further incorporates prior structural knowledge in a unified framework. The current state-of-the-art method for lung field segmentation uses a registration-based approach <cit.>. To build a lung model for a test patient, <cit.> finds patients in an existing database that are most similar to the test patient and performs linear deformation of their lung profiles based on key point matching. This approach relies on the test patients being well modeled by the existing lung profiles and correctly matched key points, both of which can be brittle on a different population.Semantic Segmentation with Convolutional Networks. Semantic segmentation aims to assign a pre-defined class to each pixel, which requires a high level of visual understanding. Current state-of-the-art methods for semantic segmentation use fully convolutional networks (FCNs) <cit.>. Recently <cit.> applies adversarial training to semantic segmentation and observes some improvement. These works address natural images with color input, and are pre-trained with models such as the VGG network <cit.>, incorporating the learning from large-scale image classification <cit.>. We adapt FCNs to gray-scale CXR images under the stringent constraint of a very limited training dataset of 247 images. Our FCN departs from the usual VGG architecture and can be trained without transfer learning from existing models or datasets.Separately, U-net <cit.> and similar architectures are popular convolutional networks for biomedical segmentation and have been applied to neuronal structure <cit.> and histology images <cit.>. In this work we propose to use adversarial training on existing segmentation networks to enhance the global consistency of the segmentation outcomes.We note that there is a growing body of recent works that apply neural networks end-to-end on CXR images <cit.>. These models directly output clinical targets such as disease labels without well-defined intermediate outputs to aid interpretability. Furthermore, they generally require a large number of CXR images for training, which is not readily available for many clinical tasks involving CXR images.
§ PROBLEM DEFINITION We address the problem of segmenting the left lung field, the right lung field, and the heart on chest X-rays (CXRs) in the posteroanterior (PA) view, in which the radiation passes through the patient from the back to the front. Due to the fact that CXR is a 2D projection of a 3D structure, organs overlap significantly and one has to be careful in defining the lung fields. We adopt the definition from <cit.>: lung fields consist of all the pixels for which radiation passes through the lung but not through the following structures: the heart, the mediastinum (the opaque region between the two lungs), below the diaphragm, the aorta, and, if visible, the superior vena cava (Fig. <ref>). The heart boundary is generally visible on two sides, while the top and bottom borders of the heart have to be inferred due to occlusion by the mediastinum. As can be seen in Fig. <ref>, this definition captures the common notion of lung fields and the heart, and includes regions pertinent to CXR reading in clinical settings. § STRUCTURE CORRECTING ADVERSARIAL NETWORK We detail our approach to semantic segmentation of lung fields and the heart using the proposed Structure Correcting Adversarial Network (SCAN) framework. To tailor to the special problem setting of CXR images, we develop our network architecture from the ground up, following best practices and extensive experimentation. Using a dataset over an order of magnitude smaller than common semantic segmentation datasets for natural images, our model can be trained end-to-end from scratch with excellent generalization capability, without relying on existing models or datasets. §.§ Adversarial Training for Semantic Segmentation Adversarial training was first proposed in the Generative Adversarial Network (GAN) <cit.> in the context of generative modeling[We point out that GAN bears resemblance to the actor-critic model in the existing reinforcement learning paradigm.]. The GAN framework consists of a generator network and a critic network that engage in an adversarial two-player game, in which the generator aims to learn the data distribution and the critic estimates the probability that a sample comes from the training data rather than being synthesized by the generator. The generator's objective is to maximize the probability that the critic makes a mistake, while the critic is optimized to minimize the chance of mistakes. It has been demonstrated that the generator produces samples (e.g., images) that are highly realistic <cit.>.A key insight in this adversarial process is that the critic, which itself can be a complex neural network, can learn to exploit higher order inconsistencies in the samples synthesized by the generator. Through the interplay of the generator and the critic, the critic can guide the generator to produce samples more consistent with higher order structures in the training samples, resulting in a more “realistic” data generation process.The higher order consistency enforced by the critic is particularly desirable for CXR segmentation. Human anatomy, though exhibiting substantial variations across individuals, generally maintains stable relationships between physiological structures (Fig. <ref>). CXRs also provide consistent views of these structures thanks to the standardized imaging procedures.
We can, therefore, expect the critic to learn these higher order structures and guide the segmentation network to generate masks more consistent with the learned global structures.We propose to use adversarial training for segmenting CXR images. Fig. <ref> shows the overall SCAN framework, which incorporates the adversarial process into semantic segmentation. The framework consists of a segmentation network and a critic network that are jointly trained. The segmentation network makes pixel-level predictions of the target classes, playing the role of the generator in GAN but conditioned on an input image. On the other hand, the critic network takes as input the segmentation masks and outputs the probability that the input mask is the ground truth annotation instead of the prediction by the segmentation network. The networks can be trained jointly through a minimax scheme that alternates between optimizing the segmentation network and the critic network. §.§ Training Objectives Let S, D be the segmentation network and the critic network, respectively. The data consist of the input images x_i and the associated mask labels y_i, where x_i is of shape [H, W, 1] for a single-channel gray-scale image with height H and width W, and y_i is of shape [H,W,C] where C is the number of classes including the background. Note that for each pixel location (j,k), y_i^jkc=1 for the labeled class channel c while the rest of the channels are zero (y_i^jkc'=0 for c' ≠ c). We use S(x) ∈ [0,1]^[H,W,C] to denote the class probabilities predicted by S at each pixel location, such that the class probabilities normalize to 1 at each pixel. Let D(x_i, y) be the scalar probability estimate of y coming from the training data (ground truth) y_i instead of the predicted mask S(x_i). We define the optimization problem asmin_S max_D { J(S, D) := ∑_i=1^N J_s(S(x_i), y_i) - λ[ J_d(D(x_i, y_i), 1) + J_d(D(x_i, S(x_i)), 0) ]}, where J_s(ŷ,y) := 1/HW∑_j,k∑_c=1^C -y^jkcln ŷ^jkc is the multi-class cross-entropy loss for the predicted mask ŷ averaged over all pixels, and J_d(t̂, t) := -t lnt̂ - (1-t)ln(1-t̂) is the binary logistic loss for the critic's prediction. λ is a tuning parameter balancing the pixel-wise loss and the adversarial loss. We can solve Eq. (<ref>) by alternating between optimizing S and optimizing D using their respective loss functions.Training the Critic:Since the first term in Eq. (<ref>) does not depend on D, we can train our critic network by minimizing the following objective with respect to D for a fixed S:∑_i=1^N J_d(D(x_i, y_i), 1) + J_d(D(x_i, S(x_i)), 0)Training the Segmentation Network: Given a fixed D, we train the segmentation network by minimizing the following objective with respect to S:∑_i=1^N J_s(S(x_i), y_i) + λ J_d(D(x_i, S(x_i)), 1)Note that we use J_d(D(x_i, S(x_i)), 1) in place of -J_d(D(x_i, S(x_i)), 0), following the recommendation in <cit.>. This is valid as they share the same set of critical points. The reason for this substitution is that J_d(D(x_i, S(x_i)), 0) leads to weaker gradient signals when D makes accurate predictions, such as during the early stage of training.
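To make the alternating scheme above concrete, the following is a minimal NumPy sketch of the two loss functions and the update pattern. The names (seg_loss, critic_loss, S, D, lam) are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def seg_loss(y_hat, y, eps=1e-8):
    # J_s: multi-class cross-entropy between the predicted class
    # probabilities y_hat and the one-hot ground truth y, both of
    # shape [H, W, C], averaged over all H*W pixel locations.
    return -np.sum(y * np.log(y_hat + eps)) / (y.shape[0] * y.shape[1])

def critic_loss(t_hat, t, eps=1e-8):
    # J_d: binary logistic loss for the critic's scalar probability
    # estimate t_hat against the target t (1 = ground truth mask,
    # 0 = mask synthesized by the segmentation network).
    return -(t * np.log(t_hat + eps) + (1.0 - t) * np.log(1.0 - t_hat + eps))

# Alternating updates on a mini-batch {(x, y)}, with S the segmentation
# network, D the critic, and lam the balancing weight:
#   critic step:       minimize  critic_loss(D(x, y), 1) + critic_loss(D(x, S(x)), 0)
#   segmentation step: minimize  seg_loss(S(x), y) + lam * critic_loss(D(x, S(x)), 1)
```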
§.§ Segmentation Network Our segmentation network is a Fully Convolutional Network (FCN), which is also the core component in many state-of-the-art semantic segmentation models <cit.>. The success of FCN can be largely attributed to convolutional neural networks' excellent ability to extract high level representations suitable for dense classification. FCN can be divided into two modules: the down-sampling path and the up-sampling path. The down-sampling path consists of convolutional layers and max or average pooling layers, with an architecture similar to those used in image classification <cit.>.The down-sampling path extracts the high level semantic information, usually at a lower spatial resolution. The up-sampling path consists of convolutional and deconvolutional layers (also called transposed convolutions) to predict scores for each class at the pixel level using the output of the down-sampling path.Most FCNs are applied to color images with RGB channels, and their down-sampling paths are initialized with parameters trained in large-scale image classification <cit.>. However, CXR is gray-scale, and thus the large model capacity used in image classification networks that leverages the richer RGB input is likely to be counter-productive for our purpose.Furthermore, our FCN architecture has to be highly parsimonious to take into account that our training dataset of 247 CXR images is orders of magnitude smaller than those in the natural image domains. Lastly, in our task we focus on segmenting three classes (the left lung, the right lung, and the heart), which is a smaller classification space compared with datasets such as PASCAL VOC, which has 20 object classes. A more parsimonious model configuration is therefore highly favorable in this setting.Figure <ref> shows our FCN architecture. We find that it is advantageous to use far fewer feature maps than the conventional VGG-based down-sampling path. Specifically, we start with just 8 feature maps in the first layers, compared with 64 feature maps in the first layer of VGG <cit.>. To obtain sufficient model capacity, we instead go deep with 20 convolutional layers. We also interleave 1× 1 convolutions with 3 × 3 ones in the final layers to emulate the bottleneck design <cit.>. All in all the segmentation network contains 271k parameters, 500x smaller than the VGG-based FCN <cit.>. We employ residual blocks <cit.> (Fig. <ref>(b)) to aid optimization. The parsimonious network construction allows us to optimize it efficiently without relying on any existing trained model, which is not readily available for gray-scale images. §.§ Critic NetworkOur critic network mirrors the construction of the segmentation network, and is also a fully convolutional network. Fig. <ref> shows the architecture, omitting the intermediate layers that are identical to the segmentation network's. This way the critic network has model capacity similar to the segmentation network's, with a similar field of view, which is important due to the large object size in the CXR images. We can optionally include the original CXR image as input to the critic as an additional channel, which is a more economical way to incorporate the image into the critic network than <cit.>. Preliminary experiments show that including the original CXR image does not improve performance, and thus for simplicity we feed only the mask prediction to the critic network. Overall our critic network has 258k parameters, comparable to the segmentation network.§ EXPERIMENTSWe perform extensive evaluation of the proposed SCAN framework and demonstrate that our approach produces highly accurate and realistic segmentation of the lung fields and the heart in CXR images. §.§ Dataset and ProcessingWe use two publicly available datasets to evaluate our proposed SCAN network for the segmentation of lung fields and the heart on CXR images. To the best of our knowledge, these are the only two publicly available datasets with at least lung field annotations.
We point out that the datasets come from two different countries with different lung diseases, representing diverse CXR samples.JSRT. The JSRT dataset was released by the Japanese Society of Radiological Technology (JSRT) <cit.>, and the mask labels for the lung fields and the heart were made available by <cit.> (Fig. <ref>). The dataset contains 247 CXRs, among which 154 have lung nodules and 93 have no lung nodule. All images have resolution 2048× 2048 in gray-scale with a color depth of 12 bits. We point out that this dataset represents mostly normal lung and heart masks despite the fact that the majority of patients have lung nodules. The reason is that lung nodules in most cases do not alter the contour of the lungs and the heart, especially when the lung nodules are small.Montgomery. The Montgomery dataset contains images from the Department of Health and Human Services, Montgomery County, Maryland, USA. The dataset consists of 138 CXRs, including 80 normal patients and 58 patients with manifested tuberculosis (TB). The CXR images are 12-bit gray-scale images of dimension 4020× 4892 or 4892× 4020. Only the two lung mask annotations are available (Fig. <ref>).We scale all images to 400× 400 pixels, which retains sufficient visual details for vascular structures in the lung fields and the boundaries. Preliminary experiments suggest that increasing the resolution to 800× 800 pixels does not improve the segmentation performance, consistent with the observation in <cit.>. Due to the high variation in image contrast between datasets (Fig. <ref>), we perform per-image normalization. Given an image x we normalize it with x̃^jk := (x^jk - x̅)/√(var(x)), where x̅ and var(x) are the mean and variance of the pixels in x, respectively. Note we do not use statistics from the whole dataset. Data augmentation by rotating and zooming images did not improve results in our preliminary experiments. We thus did not apply any data augmentation.In post-processing, we fill in any hole in the predicted mask, and remove the small patches disjoint from the largest mask. We observe that in practice this is important for the prediction output of the segmentation network (FCN alone), but does not affect the evaluation results for FCN with adversarial training.§.§ Training ProtocolsGANs are known to be unstable during the training process and can “collapse” when the generator produces outcomes that lie in a much smaller subspace than the data distribution.To mitigate this problem, we pre-train the segmentation network using only the pixel-wise loss J_s (Eq. (<ref>)), which also gives faster training than the full adversarial training, as training the segmentation network using pixel losses involves forward and backward propagation through the segmentation network only, and not the critic network. We use the Adam optimizer with learning rate 0.0002 to train all models for 350 epochs, each defined as a pass over the training set. We use mini-batch size 10. When training involves the critic network, for each mini-batch we perform 5 optimization steps on the segmentation network for each optimization step on the critic network. Our training takes place on machines equipped with a Titan X GPU. We use the following two metrics for evaluation: Intersection-over-Union (IoU) is the agreement between the ground truth and the estimated segmentation mask. Formally, let P be the set of pixels in the predicted segmentation mask for a class and G the set of pixels in the ground truth mask for the same class.
We can define IoU as |P∩ G|/|P∪ G| = |TP|/(|TP| + |FP| + |FN|), where TP, FP, FN denote the sets of pixels that are true positives, false positives, and false negatives, respectively. Dice Coefficient is a popular metric for segmentation in the medical domain. Using the notation defined above, the Dice coefficient can be calculated as 2|P∩ G|/(|P| + |G|) = 2|TP|/(2|TP| + |FP| + |FN|).
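Both metrics are straightforward to compute from boolean masks; a minimal sketch (the function names are ours):

```python
import numpy as np

def iou(pred, gt):
    # pred, gt: boolean masks of the same shape for one class.
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

# Example: two 3x3 masks whose foregrounds overlap on 2 pixels.
pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]], dtype=bool)
gt   = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]], dtype=bool)
print(iou(pred, gt), dice(pred, gt))  # 0.5, 0.666...
```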
§.§ Experiment Design and Results We randomly divide the JSRT dataset into the development set (209 images) and the evaluation set (38 images). We tune our architecture and hyperparameters (such as λ in Eq. (<ref>)) using a validation set within the development set. Similarly, we randomly divide the Montgomery dataset into the development set (117 images) and the evaluation set (21 images). We tune our hyperparameters on the JSRT development set and use the same for the Montgomery experiments. We write FCN for the segmentation-network-only architecture, and SCAN for the full framework.Quantitative Comparison. In our first experiment we compare FCN with SCAN when trained on the JSRT development set and tested on the JSRT evaluation set. Table <ref> shows the IoU and Dice scores. We observe that the adversarial training significantly improves the performance. In particular, IoU for the two lungs improves from 92.9% to 94.7%. We also find that the performance of adversarial training is robust across a range of λ: the IoU for both lungs with λ=0.1,0.01,0.001 is 94.4%± 0.4%, 94.5%± 0.4%, and 94.7%± 0.4%, respectively.In Table <ref> we compare our approach to several existing methods on the JSRT dataset, as well as human performance. Our model surpasses the current state-of-the-art method, which is a registration-based model <cit.>, by a significant margin. Furthermore, our method is competitive with human performance for both lung fields and the heart. For clinical deployment it is important for the segmentation model to generalize to a different population with different imaging quality, such as when deployed in another country or a specialty hospital with a very different disease distribution among the patients. In our next experiment we therefore train our model on the full JSRT dataset, which is collected in Japan from a population with lung nodules, and test the model on the full Montgomery dataset, which is collected in the U.S. from patients potentially with TB. As can be seen in Fig. <ref>, the two datasets present very different contrast and background diseases. Table <ref> shows that FCN alone does not generalize well to a new dataset, as IoU for both lungs degrades to 87.1%. However, SCAN substantially improves the performance, surpassing the state-of-the-art method based on registration <cit.>. We further investigate the scenario of training on the two development sets from JSRT and Montgomery combined. Without any further hyperparameter tuning, SCAN improves the IoU on the two lungs to 95.1%± 0.43% on the JSRT evaluation set, and 93.0%± 1.4% on the Montgomery evaluation set, a significant improvement compared with training on the JSRT development set alone. Qualitative Comparison. Fig. <ref> shows the qualitative results from these two experiments. The failure cases in the middle row by our FCN reveal the difficulties arising from CXR images' varying contrast across samples. For example, the apex of the rightmost patient's ribcage is mistaken for an internal rib bone, resulting in the mask “bleeding out” to the black background, which has intensity similar to the lung field. Vascular structures near the mediastinum and anterior rib bones (which appear very faintly in the PA view CXR) within the lung field can also have intensity and texture similar to the exterior boundary, leading the FCN to make the drastic mistakes seen in the middle two columns. SCAN significantly improves all of the failure cases and produces much more natural outlines of the organs. We also notice that adversarial training sharpens the segmentation of the costophrenic angle (the sharp angle at the junction of the ribcage and the diaphragm). Costophrenic angles are important in diagnosing pleural effusion and lung hyperexpansion, among other conditions. Our SCAN framework is efficient at test time, as it only needs to perform a forward pass through the segmentation network but not the critic network. Table <ref> shows the run time of our method compared with <cit.> on a laptop. <cit.> takes much longer due to the need to search through lung models in the training data to find similar profiles, incurring a cost linear in the size of the training data. In clinical settings such as TB screening <cit.>, fast test-time results are highly desirable.§ CONCLUSIONIn this work we present the Structure Correcting Adversarial Network (SCAN) framework that applies the adversarial process to develop an accurate semantic segmentation model for segmenting the lung fields and the heart in chest X-ray (CXR) images. SCAN jointly optimizes the segmentation model, based on a fully convolutional network (FCN), and the adversarial critic network, which discriminates the ground truth annotations from the segmentation network predictions. SCAN is simple yet effective, producing highly accurate and realistic segmentation.Our approach improves the state-of-the-art and achieves performance competitive with human experts.To our knowledge this is the first successful application of convolutional neural networks to CXR image segmentation, and our method holds the promise to integrate with many downstream tasks in computer-aided detection on CXR images.§ ACKNOWLEDGEMENT We thank Carol Cheng and Ellen Sun for providing medical insights into understanding chest X-rays and the clinical practices around them. Their help has been very instrumental to this project.
http://arxiv.org/abs/1703.08770v2
{ "authors": [ "Wei Dai", "Joseph Doyle", "Xiaodan Liang", "Hao Zhang", "Nanqing Dong", "Yuan Li", "Eric P. Xing" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170326054838", "title": "SCAN: Structure Correcting Adversarial Network for Organ Segmentation in Chest X-rays" }
Algorithmic interpretations of fractal dimensionAnastasios Sidiropoulos[Dept. of Mathematics and Dept. of Computer Science & Engineering, The Ohio State University.] [Supported by the National Science Foundation (NSF) under grant CCF 1423230 and award CAREER 1453472.] Vijay Sridhar[Dept. of Computer Science & Engineering, The Ohio State University.] [2]December 30, 2023 ========================================================================================================================================================================================================================================================================================================================= We study algorithmic problems on subsets of Euclidean space of low fractal dimension. These spaces are the subject of intensive study in various branches of mathematics, including geometry, topology, and measure theory.There are several well-studied notions of fractal dimension for sets and measures in Euclidean space. We consider a definition of fractal dimension for finite metric spaces which agrees with standard notions used to empirically estimate the fractal dimension of various sets. We define the fractal dimension of some metric space to be the infimum δ>0, such that for any ε>0, for any ball B of radius r≥ 2ε, and for any ε-net N (that is, for any maximal ε-packing), we have |B∩ N|=O((r/ε)^δ).Using this definition we obtain faster algorithms for a plethora of classical problems on sets of low fractal dimension in Euclidean space. Our results apply to exact and fixed-parameter algorithms, approximation schemes, and spanner constructions. Interestingly, the dependence of the performance of these algorithms on the fractal dimension nearly matches the currently best-known dependence on the standard Euclidean dimension. Thus, when the fractal dimension is strictly smaller than the ambient dimension, our results yield improved solutions in all of these settings.We remark that our definition of fractal dimension is equivalent up to constant factors to the well-studied notion of doubling dimension. However, in the problems that we consider, the dimension appears in the exponent of the running time, and doubling dimension is not precise enough for capturing the best possible such exponent for subsets of Euclidean space. Thus our work is orthogonal to previous results on spaces of low doubling dimension; while algorithms on spaces of low doubling dimension seek to extend results from the case of low dimensional Euclidean spaces to more general metric spaces, our goal is to obtain faster algorithms for special pointsets in Euclidean space. More precisely, we obtain the following results:* Exact algorithms: We show that TSP on a set of n points of fractal dimension δ > 1 in constant-dimensional Euclidean space can be solved in time 2^O(n^1-1/δlog n). We also obtain an algorithm with quasi-polynomial running time when δ≤ 1. In contrast, it was previously known that for any fixed integer d≥ 2, TSP in d-dimensional Euclidean space can be solved in time 2^O(n^1-1/d). Our technique extends to other problems, such as Minimum Rectilinear Steiner Tree.
* Fixed parameter algorithms: Given a set D of unit balls in constant-dimensional Euclidean space, the k-Independent Set problem on D can be solved in time n^O(k^1-1/δ + log n), where δ>1 is the fractal dimension of the set of centers of the disks in D. When δ≤ 1, we get an algorithm with quasi-polynomial running time. Previously known algorithms for this problem in d-dimensional Euclidean space have running time n^O(k^1-1/d), for any d≥ 2. * Approximation schemes: Let P be a set of n points of fractal dimension δ>0, in d-dimensional Euclidean space. Then for any R>0 and any ℓ>0, we can compute a (1+d/ℓ)-approximate R-cover of P in time ℓ^d+δ n^O((ℓ√(d))^δ). This matches the performance of known algorithms after replacing δ by d. We also obtain a similar algorithm for the R-packing problem. * Spanners: We show that for any ε>0, any set of n points of fractal dimension δ in constant-dimensional Euclidean space admits a (1+ε)-spanner of linear size, and of pathwidth at most O(n^1-1/δlog n) if δ>1, and at most log^O(1) n if δ≤ 1. This provides a general polynomial-time reduction for geometric problems on Euclidean instances of low fractal dimension to corresponding graph instances of low pathwidth. § INTRODUCTIONSets of non-integral dimension are ubiquitous in nature and can be used to model a plethora of processes and phenomena in science and engineering <cit.>. Sets and measures in Euclidean space of certain fractal dimension are the subject of study in several branches of mathematics, including geometry, topology, and measure theory.In many problems in computational geometry, the dimension of the input set often determines the complexity of the best-possible algorithms.In this work we study the computational complexity of geometric problems on sets of bounded fractal dimension in low-dimensional Euclidean space. We observe the following interesting phenomenon: For many problems, it is possible to obtain algorithms with dependence on the fractal dimension similar to the best-possible dependence on the standard Euclidean dimension. This implies asymptotically faster algorithms when the fractal dimension of the input is smaller than the ambient dimension. §.§ Definition of fractal dimensionIntuitively, some X⊆ℝ^d has fractal dimension δ∈ [0,d] if, when scaling X by a factor of α>0, the “volume” of X is multiplied by a factor α^δ. There are many different ways this intuition can be formalized, such as Hausdorff dimension, Minkowski dimension, and so on. Unfortunately, some of these definitions are not directly applicable in the context of discrete computational problems. For example, the Hausdorff dimension of any countable set is 0.Despite this, there are some natural methods that are used to estimate the fractal dimension of a set in practice. Let X⊆ℝ^d. Let Γ_ε be a d-dimensional grid where each cell has width ε>0, and let I_ε(X) be the number of cells in Γ_ε that intersect X. The fractal box-counting dimension of X is defined to be lim_ε→ 0log(I_ε(X))/log(1/ε) <cit.>. This definition is often used experimentally as follows: Intersect X with a regular lattice (εℤ)^d,and estimate the rate by which the cardinality of the intersection grows when ε→ 0. In that context, X has fractal dimension δ when the size of the intersection grows as (1/ε)^δ <cit.>.We consider a definition that is closely related to box-counting dimension, but is more easily amenable to algorithmic analysis. Let A⊆ℝ^d and let S⊆ A. We say that S is an ε-covering of A if for any x ∈ A we have that dist({x},S)≤ε, where dist(A',B')=inf_x∈ A', y∈ B'ρ(x,y) and ρ denotes the Euclidean metric.
For any x ∈ A and y ∈ S we say that x is covered by y if ρ(x,y) ≤ε. S is an ε-packing if for any distinct x,y ∈ S we have ρ(x,y) ≥ε. If S is both an ε-covering and an ε-packing of A then we say that S is an ε-net of A. We define the fractal dimension of some family of pointsets P⊆ℝ^d, denoted by dim_f(P), to be the infimum δ, such that for any ε>0 and r≥ 2ε, for any ε-net[We arrive at an equivalent definition if we require N to be an ε-packing instead of an ε-net.] N of P, and for any x∈ℝ^d, we have|N ∩ B(x,r)| = O((r/ε)^δ), where B(x,r) denotes the ball of radius r centered at x. For the sake of notational simplicity, we will be referring to the fractal dimension of some family of pointsets P as the fractal dimension of the pointset P, with the understanding that in the asymptotic notation |P| is unbounded. Figure <ref> depicts an example of an infinite family of discrete pointsets P with non-integral fractal dimension constructed as follows: We begin with the 3^k× 3^k integer grid, for some k∈ℕ, we partition it into 9 subgrids of equal size, we delete all the points in the central subgrid, and we recurse on the remaining 8 subgrids. The recursion stops when we arrive at a subgrid containing a single point. This is a natural discrete variant of the Sierpiński carpet. It can be shown that dim_f(P)=log_3 8, which is equal to the Hausdorff dimension of the standard Sierpiński carpet.
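As an illustration of these definitions, the following self-contained sketch generates the discrete Sierpiński carpet described above and reproduces the box-counting behavior from the previous subsection; the function name carpet and the choice k=5 are our own illustrative assumptions.

```python
import numpy as np

def carpet(k):
    # Discrete Sierpinski carpet: points of the 3^k x 3^k grid whose
    # base-3 digit pairs never fall in the central subgrid (1,1) at
    # any level of the recursion.
    pts = []
    for x in range(3 ** k):
        for y in range(3 ** k):
            a, b, keep = x, y, True
            for _ in range(k):
                if a % 3 == 1 and b % 3 == 1:
                    keep = False
                    break
                a, b = a // 3, b // 3
            if keep:
                pts.append((x, y))
    return np.array(pts)

# Box-counting estimate: count grid cells of width w = 3^i that meet P.
# The counts decay by a factor of 8 as w grows by a factor of 3, so the
# slope of log I_w against log(1/w) is log 8 / log 3 = log_3 8 ~ 1.893.
k = 5
P = carpet(k)                      # 8^5 = 32768 points
for i in range(k):
    w = 3 ** i
    cells = {(x // w, y // w) for x, y in P}
    print(w, len(cells))           # len(cells) = 8^(k-i)
```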
§.§ Why yet another notion of dimension?We now briefly compare the above notion of fractal dimension to previous definitions and motivate its importance. The most closely related notion that has been previously studied in the context of algorithm design is doubling dimension <cit.>. We recall that the doubling dimension of some metric space M, denoted by ddim(M), is defined to be logκ, where κ is the minimum integer such that for all r>0, any ball in M of radius r can be covered by at most κ balls of radius r/2. It is easy to show that for any metric space M, we have[Note that for a set X containing two distinct points we have dim_f(X)=0 while ddim(X)=1, and thus it is not always the case that ddim(X)=O(dim_f(X)).] dim_f(M) = ddim(M) + O(1) and dim_f(M) = O(ddim(M)). Thus our definition is equivalent to doubling dimension up to constant factors. However, in the problems that we consider, the dimension appears in the exponent of the running time of the best-known algorithms; therefore, determining the best-possible constant is of importance. As we shall see, for several algorithmic problems, our definition yields nearly optimal bounds on this exponent, while doubling dimension is not precise enough for this task.Let us illustrate this phenomenon on the problem of solving TSP exactly on a set of n points in the Euclidean plane. It is known that TSP admits an algorithm with running time 2^O(√(n)logn) n^O(1) in this case <cit.>. Moreover, the exponent of O(√(n)logn) is known to be nearly optimal assuming the Exponential Time Hypothesis (ETH) <cit.> (see later in this Section for a more precise statement).We show that for sets of fractal dimension δ∈ (1,2], there exists an algorithm with running time 2^O(n^1-1/δlog n). Thus, for any fixed δ<2, we achieve an asymptotically faster algorithm than what is possible for general pointsets (assuming ETH).On the other hand, it is known that the unit disk cannot be covered with 6 disks of radius 1/2 (see <cit.>). Thus ddim(ℝ^2) ≥log_2 7 > 2.807, while dim_f(ℝ^2)=2. Therefore doubling dimension is not precise enough to capture the best-possible exponent in this setting.In summary, while algorithms on spaces of low doubling dimension seek to extend results from the case of low dimensional Euclidean space to a more general setting, our goal is to obtain faster algorithms for special classes of pointsets in Euclidean space.§.§ Our resultsWe obtain algorithms for various problems on sets of low fractal dimension in Euclidean space. We consider exact algorithms, fixed parameter algorithms, and approximation schemes.In each one of these settings, we pick classical representative problems. We believe that our techniques should be directly applicable to many other problems.Exact algorithms. We first consider exact algorithms in ℝ^d. It is known that for any fixed d, TSP on a set of n points in ℝ^d can be solved in time 2^O(n^1-1/dlog n) <cit.>. By adapting ideas from the Euclidean setting, we show that TSP on a set of n points of fractal dimension δ > 1 in constant-dimensional Euclidean space can be solved in time 2^O(n^1-1/δlog n). When δ=1 and δ<1, our algorithm has running time n^O(log^2 n) and n^O(log n) respectively. We remark that it has been shown by Marx and Sidiropoulos <cit.> that assuming ETH, there is no algorithm for TSP in ℝ^d with running time 2^O(n^1-1/d-ε), for any ε>0. Thus, our result bypasses this lower bound for sets of low fractal dimension. In particular, our result implies that, in a certain sense, the hardest instances for TSP in ℝ^d must be close to full-dimensional; that is, they must have fractal dimension close to d. Our technique also extends to the Minimum Rectilinear Steiner Tree problem in ℝ^2. Parameterized problems. We also consider algorithms for problems parameterized by the value of the optimum solution. A prototypical geometric problem in this setting is Independent Set of unit balls in ℝ^d. Formally, we show that given a set D of unit balls in ℝ^d, the k-Independent Set problem on D can be solved in time n^O(k^1-1/δ), for any fixed d, where δ>1 is the fractal dimension of the set of centers of the disks in D. When δ≤ 1, we get an algorithm with running time n^O(log k). Previously known algorithms for this problem in d-dimensional Euclidean space have running time n^O(k^1-1/d), for any d≥ 2<cit.>. Moreover, it has been shown that there is no algorithm with running time f(k) n^o(k^1-1/d), for any computable function f, assuming ETH <cit.> (see also <cit.>). Thus, our result implies that this lower bound can also be bypassed for sets of fractal dimension δ < d. Approximation schemes. We next consider approximation schemes. Let P be a set of n points of fractal dimension δ>0, in d-dimensional Euclidean space. We show that for any R>0, for any ℓ>0, we can compute a (1+d/ℓ)-approximate R-cover of P in time ℓ^d+δ n^O((ℓ√(d))^δ). This matches the performance of the algorithm of Hochbaum and Maass <cit.> after replacing δ by d. We also obtain a similar algorithm for the R-packing problem.Spanners and pathwidth. Recall that for any pointset P in ℝ^d, and for any c≥ 1, a c-spanner for P is a graph G with V(G)=P, such that for all x,y∈ P, we have ‖x-y‖_2 ≤ d_G(x,y) ≤ c ·‖x-y‖_2, where d_G denotes the shortest path distance in G. The parameter c is called the dilation of G. It is known that for any ε>0, any set of n points in ℝ^d admits a (1+ε)-spanner of size n (1/ε)^O(d) <cit.>. We strengthen this result in the following way.
We show that for any ε>0, any set of n points of fractal dimension δ in constant-dimensional Euclidean space admits a (1+ε)-spanner of size n(1/ε)^O(d), and of pathwidth at most O(n^1-1/δlog n) if δ>1, at most O(log^2 n) if δ= 1, and at most O(log n) if δ<1. Our spanner is obtained via a modification of the construction due to Vaidya <cit.>. This provides a general polynomial-time reduction for geometric optimization problems on Euclidean instances of low fractal dimension to corresponding graph instances of low pathwidth. This result can be understood as justification for the fact that instances of low fractal dimension appear to be “easier” than arbitrary instances. We remark that our construction also implies, as a special case, that arbitrary n-pointsets in ℝ^d admit (1+ε)-spanners of size n(1/ε)^O(d) and pathwidth O(n^1-1/dlog n); this bound on the pathwidth appears to be new, even for the case d=2. §.§ Related workThere is a large body of work on various notions of dimensionality in computational geometry. Most notably, there has been a lot of effort on determining the effect of doubling dimension on the complexity of many problems <cit.>. Other notions that have been considered include low-dimensional negatively curved spaces <cit.>,growth-restricted metrics <cit.>, as well asgeneralizations of doubling dimension to metrics ofbounded global growth <cit.>.A common goal in all of the above lines of research is to extend tools and ideas from the Euclidean setting to more general geometries. In contrast, as explained above, we study restricted classes of Euclidean instances, with the goal of obtaining faster algorithms than what is possible for arbitrary Euclidean pointsets. §.§ Notation and definitionsLet (X,ρ) be some metric space. For any x∈ X and r≥ 0, we define the ball B(x,r)= { y ∈ X : ρ(x,y) ≤ r } and the sphere S(x,r)= { y ∈ X : ρ(x,y) = r }. For some A,B⊆ X, we write dist(A,B)=inf_x∈ A, y∈ Bρ(x,y). For some r≥ 0, we write N(A,r) = {x∈ X : dist(A, {x}) ≤ r}. Let S⊆ A⊆ X. We say that S is an ε-covering of A if for any x ∈ A we have that dist({x},S)≤ε. For any x ∈ A and y ∈ S we say that x is covered by y if ρ(x,y) ≤ε. S is an ε-packing if for any distinct x,y ∈ S we have ρ(x,y) ≥ε. If S is both an ε-covering and an ε-packing of A then we say that S is an ε-net of A. We recall the following definition from <cit.>. Let D be a collection of subsets of ℝ^d. D is said to be κ-thick if no point is covered by more than κ elements of D. Let D' be any subset of D such that the ratio between the diameters of any pair of elements in D' is at most λ. Then D' is said to be λ-related. D is said to be (λ, κ)-thick if no point is covered by more than κ elements of any λ-related subset of D. The pathwidth of some graph G, denoted by pw(G), is the minimum integer k≥ 1, such that there exists a sequence C_1,…,C_ℓ of subsets of V(G) of cardinality at most k+1, such that for all {u,v}∈ E(G), there exists i∈{1,…,ℓ} with {u,v}⊆ C_i, and for all w∈ V(G), for all i_1<i_2<i_3 ∈{1,…,ℓ}, if w∈ C_i_1∩ C_i_3 then w∈ C_i_2.§.§ OrganizationThe rest of the paper is organized as follows.In Section <ref> we derive a separator theorem for a set of balls whose set of centers has bounded fractal dimension.
In Section <ref> we present our exact algorithms for TSP and RSMT. In Section <ref> we give a fixed-parameter algorithm for Independent Set of unit balls. In Section <ref> we give approximation schemes for packing and covering unit balls. Finally, in Section <ref> we present our spanner construction. § A SEPARATOR THEOREM In this section we prove a separator theorem for a set of d-balls intersecting a set of points with bounded fractal dimension.Subsequently, this result will form the basis for some of our algorithms. The proof uses an argument due to Har-Peled <cit.>. Let d≥ 2 be some integer, and let δ∈ (0,d] be some real number. Let P ⊂ℝ^d such that dim_f(P)= δ.Let B be a (λ,κ)-thick set of d-balls in ℝ^d, with |B|=n, λ≥ 2, and such that for all b ∈ B we have b ∩ P ≠∅. Then there exists a (d-1)-sphere C such that at most (1-2^-O(d))n of the elements in B are entirely contained in the interior of C, at most (1-2^-O(d))n of the elements in B are entirely outside C, and|A|= {[ O(κ (5λ)^d 6^δ (λ/(1-λ^1-δ)) n^1-1/δ) if δ>1; O(κ (5λ)^d 6^δlog n) if δ=1; O(κ (5λ)^d 6^δ/(λ^1-δ-1)) if δ<1 ].,where A={b∈ B : diam(b) ≤ diam(C) and b∩ C≠∅}. It is known that any ball in ℝ^d of radius r can be covered by at most k(d) = 2^O(d) balls of radius r/2. Let C' be the d-ball of minimum radius that contains at least n/(k(d)+1) of the elements in B, breaking ties by choosing the ball that contains the maximum number of elements in B. Let o denote the origin in ℝ^d. Without loss of generality we can scale and translate the elements of B and P until the radius of C' is 1 and it is centered at o. Now, let B^* denote the set of d-balls in B of diameter less than or equal to 4 after scaling. We pick uniformly at random r ∈ [1,2] and let C=S(o,r). Now we are ready to obtain an upper bound on the number of elements of B^* that intersect S(o,r) in expectation. Consider any d-ball b ∈ B^* of diameter x. The probability that S(o,r) intersects b is at most x. Now let M_1={b∈ B^* : diam(b) ≤ n^-1/δ and b∩ S(o,r)≠∅} andM_2={b∈ B^* : n^-1/δ < diam(b) ≤ 4 and b∩ S(o,r)≠∅}. In expectation |M_1| is at most O(n^1 - 1/δ), since |B^*| ≤ n and each ball of diameter at most n^-1/δ is intersected by S(o,r) with probability at most n^-1/δ. It remains to bound the expected value of |M_2|. Let B_i = {b∈ B^* : λ^i n^-1/δ < diam(b) ≤min{λ^i+1 n^-1/δ , 4 } and b∩ S(o,r)≠∅}. Let n_i denote |B_i|. We will construct a λ^i n^-1/δ-net of P as follows. Let B'_i = B_i. Let π be some arbitrary ordering of the elements of B'_i. In the sequence determined by π pick the next d-ball b from B'_i. Remove all d-balls from B'_i that are entirely within a ball of diameter 5 ·λ^i+1n^-1/δ centered at the center of b. Repeat this procedure for the next element determined by π until all the remaining d-balls in B'_i have been visited. From the fact that B is (λ,κ)-thick we have that there can be at most κ 5^d λ^d elements in B'_i that are contained within a ball of diameter 5 ·λ^i+1 n^-1/δ. This implies that we retain at least a 1/(κ 5^d λ^d) fraction of the elements of B_i in B'_i. Now from each b ∈ B'_i pick a point p_b that also belongs to P and take the union of all such points to get a set of points N_i. From the choice of d-balls in the above argument |N_i| ≥ n_i/(κ 5^d λ^d) and N_i is a λ^i n^-1/δ-packing. We can add more points from P to N_i to obtain a λ^i n^-1/δ-net N'_i. We have that|N_i| ≤ |N'_i ∩ B(o,6)| ≤ O((6/(λ^i n^-1/δ))^δ), since dim_f(P)= δ and the points of N_i are contained within the ball of radius 6 centered at the origin. This implies that |B_i| ≤ O( κ (5λ)^d 6^δλ^-i δn).
Since the d-balls in B_i are intersected by S(o,r) with probability at most λ^i+1 n^-1/δ, we have that the expected number of elements of B_i that are intersected by S(o,r) is O( κ (5λ)^d 6^δλ^i+1 -iδ n^1 -1/δ). We thus get𝔼[|M_2|] ≤∑_i = 0^logn/δ +2 |B_i| λ^i+1 n^-1/δ≤∑_i = 0^logn/δ +2 O( κ (5λ)^d 6^δλ^i+1 -iδ n^1 -1/δ). When δ > 1 this implies𝔼[|M_2|]≤ O( κ (5λ)^d 6^δ (λ/(1-λ^1-δ))n^1 -1/δ).When δ = 1 we have𝔼[|M_2|]≤ O( κ (5λ)^d 6^δλ(logn/δ + 3)n^1 -1/δ) ≤ O(κ (5λ)^d 6^δlogn).When δ < 1 we have𝔼[|M_2|]≤ O( κ (5λ)^d 6^δλ((λ^(logn/δ +3)(1-δ)-1)/(λ^1-δ-1))n^1 -1/δ) ≤ O(κ (5λ)^d 6^δ/(λ^1-δ-1)).For any r ∈ [1,2] we have that A ⊆ B^*. Thus𝔼[|A|] = 𝔼[|M_1|] + 𝔼[|M_2|] ≤ O(n^1-1/δ) + 𝔼[|M_2|], which implies that𝔼[|A|]= {[ O( κ (5λ)^d 6^δ (λ/(1-λ^1-δ))n^1 -1/δ) if δ>1; O(κ (5λ)^d 6^δlogn) if δ=1; O(κ (5λ)^d 6^δ/(λ^1-δ-1)) if δ<1 ].Finally we need to ensure that C separates a constant fraction of the elements of B. The choice of C' ensures that at least n/(k(d) + 1) = n/2^O(d) of the elements in B are entirely contained in the interior of C. This implies that at most (1-2^-O(d)) n of the elements of B are in the exterior of C. Since the d-ball of radius 2 is covered by the union of at most k(d) d-balls of unit radius, we have that there are at most (k(d)/(k(d)+1)) n = (1-2^-O(d)) n of the elements in B contained in the interior of C. We note that the upper bound on 𝔼[|A|] remains unaltered for any choice of C'. We further remark that using a more complicated argument, similar to the one used by Smith and Wormald <cit.>, a cube separator can be found that separates a constant fraction of the d-balls, where the constant is independent of d.§ EXACT ALGORITHMS In this section we give exact algorithms for the TSP and RSMT problems on pointsets of low fractal dimension. §.§ TSP on fractal dimension pointsetsWe first use Theorem <ref> with the following lemmas due to Smith and Wormald <cit.> to obtain a separator for any optimal TSP solution. Let d≥ 2 be some integer, and let P ⊂ℝ^d. Let W be the edge set of an optimal traveling salesman tour of the points of P. Let B be the set of circumballs of the edges of W. Then B is (2,κ)-thick where κ = 2^O(d).Let d≥ 2 be some integer, and let P ⊂ℝ^d. Let W be the edge set of an optimal traveling salesman tour of the points of P. For any x ∈ℝ^d let W_x = { w ∈ W : diam(w) ≥ 1 and w∩ B(x,1)≠∅}. Then |W_x| ≤ 2^O(d) for all x ∈ℝ^d. Let d≥ 2 be some integer, and let δ∈ (0,d] be some real number. Let P be a set of n points in ℝ^d with dim_f(P)= δ. Let W be the set of edges of any optimal Euclidean TSP tour of P. Then there exists a (d-1)-sphere C such that at most (1-2^-O(d))n points in P are contained in the interior of C, at most (1-2^-O(d))n points in P are contained outside C, and |W_C|= {[ O( n^1-1/δ) if δ>1; O(log n) if δ=1; O(1/(2^1-δ-1)) if δ<1 ].,whereW_C={w∈ W : w∩ C≠∅}.Let B be the set of circumballs of the edges in W. From Lemma <ref> we have that B is (2,2^O(d))-thick. Every ball in B contains an edge in W and therefore also two points in P. Therefore we can use Theorem <ref> on B to find a separator C.It remains to bound the number of edges in W that are intersected by C. Let W_1 = { w ∈ W : diam(w) ≤ diam(C) and w∩ C≠∅} and W_2 = { w ∈ W : diam(w) > diam(C) and w∩ C≠∅}. Therefore W_C = W_1 ∪ W_2. Let B_1 denote the circumballs of the edges in W_1 and B_2 denote the circumballs of the edges in W_2. If an edge in W_1 is intersected by C then the corresponding circumball in B_1 is also intersected by C. From Theorem <ref> we have that|W_1|= {[ O( n^1-1/δ) if δ>1; O(log n) if δ=1; O(1/(2^1-δ-1)) if δ<1 ].W.l.o.g.
we can assume that C has unit radius and is centered at the origin by scaling and translation. Therefore any edge in W_2 also intersects the unit ball centered at the origin. Combining this with Lemma <ref> we have that |W_2| ≤ O(1). Since |W_C|≤ |W_1|+|W_2|, this concludes the proof. We now use Theorem <ref> to obtain an exact algorithm for TSP.We note that the O-notation hides a factor of n^O(1)^d. Let d≥ 2 be some fixed integer, and let δ∈ (0,d] be some real number. Let P be a set of n points in ℝ^d with dim_f(P)= δ. Then for any fixed d an optimal Euclidean TSP tour for P can be found in time T(n), whereT(n)= {[ n^O(n^1-1/δ) if δ>1; n^O(log^2 n) if δ=1; n^O(log n) if δ<1 ].First we observe that the (d-1)-sphere separator C described in Theorem <ref> can be assumed to intersect at least d+1 points in P. This is because we can always decrease the radius of C without changing W_C until at least one point in P lies on it.We exhaustively consider all separating (d-1)-spheres to find the separator from Theorem <ref>. Since every relevant (d-1)-sphere is uniquely defined by at most d+1 points of P intersecting it, there are at most n^O(d) spheres to consider.Let f(n,δ) denote the number of edges intersected by the separator C. From Theorem <ref> we have thatf(n,δ)= {[ O(n^1-1/δ) if δ>1; O(log n) if δ=1; O(1) if δ<1 ]. We guess a set E' of at most f(n,δ) edges in the optimal tour that intersect C. For each such guess E', we also guess the permutation of E' defined by the order in which the optimal tour traverses the edges in E'. For each such permutation we solve the two sub-problems in the exterior and interior of the separator, respecting the boundary conditions. The resulting running time is T(n) ≤ n^O(d) n^O(f(n,δ)) 2 T((1-2^-O(d))n), which for any fixed d implies the assertion.§.§ Finding the rectilinear Steiner minimal tree in ℝ^2Let P ⊂ℝ^d be a set of n points. A Rectilinear Steiner Tree (RST) is a geometric graph connecting all the points in P and consisting only of line segments parallel to the coordinate axes. The length of an RST is the sum of the lengths of the line segments in it. A Rectilinear Steiner Minimal Tree (RSMT) is an RST of minimal length.We will use the following lemmas to prove our theorem.Let d ≥ 2 be some integer. Let P ⊂ℝ^d. Let S be an RSMT of P. Let B be the set of circumballs of the line segments of S. Then for any λ > 0, B is (λ,O(λ^d))-thick.Consider any edge xy ∈ S. Let m_xy denote the mid-point of xy. Let the diamond of xy, denoted by D_xy, be defined as D_xy = {p ∈ℝ^d : ‖m_xy-p‖_1 ≤ (1/2)‖m_xy-x‖_1 }. The d-volume of D_xy is Θ(‖x-y‖_1^d). Smith and Wormald <cit.> proved that for any pair of edges xy,wz ∈ S, D_xy and D_wz (called diamonds) are disjoint. Let B' ⊆ B be λ-related. Let the minimal diameter of any ball in B' be α. Let qr ∈ S be an edge of minimal length α whose circumball is in B'. This implies that the maximal diameter of any ball in B' is at most λα. Now consider any point p ∈ℝ^d. Any element of B' that covers p lies within B(p,λα). Since the diamonds of any pair of edges in S are disjoint it follows that the number of circumballs covering p is at most vol(B(p,λα))/vol(D_qr) = O((λα)^d/α^d) = O(λ^d).Let d ≥ 2 be some integer. Let P ⊂ℝ^d. Let S be an RSMT of P. For any x ∈ℝ^d let S_x = {s ∈ S : diam(s) ≥ 1 and s∩ B(x,1)≠∅}. Then |S_x| ≤ O(2^d) for all x ∈ℝ^d.Consider any line segment t ∈ S_x. We have that the diamond D_t of t occupies at least Ω(1) d-volume in B(x,2).
Since the diamonds of all of these edges are disjoint, we have that |S_x| ≤ vol(B(x,2))/Ω(1) = O(2^d).Let d>0 be some integer. Let P ⊂ℝ^d. Let S be an RSMT of P. There are only n^d possible locations for Steiner points in S. Consequently there are at most n^O(d) possible line segments that can be part of S. Let δ∈ (0,2] be some real number. Let P ⊂ℝ^2 such that dim_f(P)= δ. Let S be the set of line segments of an RSMT of P. Then there exists a 1-sphere C such that at least 1/8 of the points in P are contained in the interior of C and at least 1/8 of the points in P are contained outside C. Let S_C={s∈ S : s∩ C≠∅}. Then we have|S_C|= {[ O( n^1-1/δ) if δ>1; O(log n) if δ=1; O(1/(2^1-δ-1)) if δ<1 ]. Let B be the set of circumballs of the line segments in S. From Lemma <ref> we have that B is (2,O(2^2))-thick. W.l.o.g. we may assume that every line segment in S has at most one Steiner point as an end point, since P ⊂ℝ^2. Every circumball in B contains a line segment in S and therefore at least one point in P. Therefore we can use Theorem <ref> on B to find a separator C. We choose C' to be the 2-ball of minimum radius that contains 1/8 of the points in P. This gives us a separator C that separates a constant fraction of the points in P.Now it remains to bound the number of edges in S that are intersected by C. Let S_1 = { s ∈ S : diam(s) ≤ diam(C) and s∩ C≠∅} and S_2 = { s ∈ S : diam(s) > diam(C) and s∩ C≠∅}. Therefore S_C = S_1 ∪ S_2. Let B_1 denote the circumballs of the line segments in S_1 and B_2 denote the circumballs of the line segments in S_2. If a line segment in S_1 is intersected by C then the corresponding circumball in B_1 is also intersected by C. From Theorem <ref> we have that |S_1|= {[ O( n^1-1/δ) if δ>1; O(log n) if δ=1; O(1/(2^1-δ-1)) if δ<1 ].Without loss of generality we can assume that C has unit radius and is centered at the origin due to scaling and translation. Therefore any line segment in S_2 also intersects the unit ball centered at the origin. Combining this with Lemma <ref> we have that |S_2| ≤ O(1). This implies that |S_C|= {[ O( n^1-1/δ) if δ>1; O(log n) if δ=1; O(1/(2^1-δ-1)) if δ<1 ].Let δ∈ (0,2] be some real number. Let P ⊂ℝ^2 such that dim_f(P)= δ. Then an RSMT of P can be found in running time T(n), where T(n)= {[ 2^O(n^1-1/δlog n) if δ>1; n^O(log^2 n) if δ=1; n^O(log n) if δ<1 ]. Let S be an RSMT of P. Let f(n,δ)= {[ O(n^1-1/δ) if δ>1; O(log n) if δ=1; O(1) if δ<1 ].Theorem <ref> implies that there exists a separator C intersecting at most f(n,δ) elements of S and separating a constant fraction of the points in P for a certain choice of C'. As in Theorem <ref>, we first choose C' by finding the smallest 2-ball containing at least 1/8 of the points in P. We can do this exhaustively in time n^O(1) because every relevant 2-ball is uniquely determined by at most 3 points in P. Next we fix the center of our separator to be the center of C' and exhaustively consider all relevant radii. Since the separator C can be assumed to intersect at least one point in P ∪ Q, where Q is the set of possible Steiner point locations, we only need to consider at most |P ∪ Q| different radii. From the observation above we have that this can be done in time at most O(n^2). From Theorem <ref> we can also guess the line segments intersected by C in time n^O(2f(n,δ)). Then we can guess the boundary condition. Given M crossing line segments there are at most O(n^M) possible boundary conditions. Here M ≤ f(n,δ) so we can guess these in time at most O(n^f(n,δ)). Finally we can solve the two smaller subproblems in the interior and exterior of C.
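As an aside, the exhaustive enumeration of candidate separator circles used here (and again in the next section) — every relevant circle is determined either by two diametrically opposite points or by three points — can be sketched as follows, assuming the input points are given as 2-dimensional numpy arrays; the helper name candidate_circles is our own.

```python
import itertools
import numpy as np

def candidate_circles(points):
    # Enumerate the O(n^3) circles that can serve as separators: every
    # circle through 2 diametrically opposite points or through 3 points.
    for p, q in itertools.combinations(points, 2):
        yield (p + q) / 2.0, np.linalg.norm(p - q) / 2.0
    for p, q, r in itertools.combinations(points, 3):
        # Circumcenter via the standard determinant formula.
        d = 2.0 * (p[0] * (q[1] - r[1]) + q[0] * (r[1] - p[1]) + r[0] * (p[1] - q[1]))
        if abs(d) < 1e-12:      # collinear triple: no circumcircle
            continue
        ux = ((p @ p) * (q[1] - r[1]) + (q @ q) * (r[1] - p[1]) + (r @ r) * (p[1] - q[1])) / d
        uy = ((p @ p) * (r[0] - q[0]) + (q @ q) * (p[0] - r[0]) + (r @ r) * (q[0] - p[0])) / d
        c = np.array([ux, uy])
        yield c, np.linalg.norm(p - c)
```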
The running time follows the recursion T(n) ≤ n^{O(1)} · n^{O(f(n,δ))} · O(n^{f(n,δ)}) · 2T(7n/8). This implies that T(n) = 2^{O(n^{1-1/δ} log n)} if δ>1, n^{O(log^2 n)} if δ=1, and n^{O(log n)} if δ<1.

§ PARAMETERIZED PROBLEMS

In this section we present an algorithm for the parameterized version of the Independent Set problem on a set of unit d-balls in ℝ^d, where the set of centers of the d-balls has bounded fractal dimension. We first prove a separator theorem which will be used in the algorithm.

Let δ ∈ (0,2] be a real number. Let P ⊂ ℝ^2 be a set of n points with fractal dimension dim_f(P) = δ. Let D = {disk(x,1) : x ∈ P}. Let D' ⊆ D be a set of disjoint disks such that |D'| = k. Then there exist c ∈ ℝ^2 and r' > 0 such that at most O(k^{1-1/δ}) disks in D' intersect circle(c,r'), and at most (7/8)k disks in D' lie on either side (interior and exterior) of circle(c,r').

Let P' denote the set of centers of the disks in D'. Since |D'| = |P'| we have |P'| = k. Also, since the disks in D' are disjoint, P' is a 2-packing of P. Let circle(c,r) be the circle of minimum radius that contains in its interior (1/8)k points of P'. Consider a random circle separator circle(c,r') with radius r' ∈ [r,2r] chosen uniformly at random. We have that circle(c,r') is contained within ball(c,2r). Since ball(c,2r) can be covered by at most 7 disks of radius r, and considering our choice of circle(c,r), we have that circle(c,r') encloses at most (7/8)k points of P'. Since circle(c,r') also encloses at least (1/8)k points of P', there are at most (7/8)k points of P' in the interior of circle(c,r') and at most (7/8)k points of P' in the exterior of circle(c,r'). It remains to bound the number of disks in D' that intersect the separator circle(c,r'). First note that the center of any disk in D' that potentially intersects circle(c,r') lies within ball(c,2r+1). Therefore the number of disks that potentially intersect circle(c,r') is at most |P' ∩ ball(c,2r+1)|. Since P' is a 2-packing that can be augmented into a 2-net by only adding points, we have |P' ∩ ball(c,2r+1)| ≤ O(((2r+1)/2)^δ) = O(r^δ). Therefore the number of disks that potentially intersect circle(c,r') is at most min{k, O(r^δ)}. Any disk intersects circle(c,r') with probability at most 2/r, since r' is chosen uniformly at random from the interval [r,2r]. This implies that in expectation the number of disks in D' that intersect circle(c,r') is at most min{k·(2/r), O(r^δ)·(2/r)}. When r ≤ k^{1/δ} this is at most O(r^δ)·(2/r) = O(k^{1-1/δ}), and when r > k^{1/δ} this is again at most k·(2/r) = O(k^{1-1/δ}). This implies that there exists some specific value of r' such that the number of disks in D' that intersect circle(c,r') is at most O(k^{1-1/δ}).

Let δ ∈ (0,2] be a real number. Let P ⊂ ℝ^2 be a set of n points with fractal dimension dim_f(P) = δ. Let D = {disk(x,1) : x ∈ P}. The parameterized Independent Set problem on D for the parameter k can be solved in running time T(n,k), where T(n,k) = O(n^{O(k^{1-1/δ})}) if δ>1 and O(n^{O(log n)}) if δ≤1.

Let D' ⊆ D denote the set of k disjoint disks in any fixed optimal solution. Let P' denote the set of centers of the disks in D'. Since |D'| = |P'| we have |P'| = k. We use a divide and conquer approach. First we guess the circle separator. Without loss of generality we can assume that the smallest circle circle(c,r) enclosing (1/8)k points of P' either intersects 2 points diametrically or intersects 3 or more points. We can guess c and r by enumerating over all circles uniquely defined by 2 diametrically opposite points or by 3 points. This can be done in time O(n^2 + n^3) = n^{O(1)}. Next we guess r' and the separator circle circle(c,r').
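As an aside, the probabilistic argument behind the separator theorem above is easy to simulate. The following toy sketch is our own illustration (assuming numpy; the choice of c is a crude stand-in for the exact minimum enclosing circle of (1/8)k centers, and all names are ours): it samples r' uniformly in [r,2r] and counts the unit disks crossed by the separator.

```python
import numpy as np

def toy_circle_separator(centers, k_frac=8, trials=64, rng=None):
    """Toy illustration of the random circle separator for disjoint unit disks.

    centers : (k, 2) array of centers of disjoint unit disks (the 2-packing P').
    Picks c as the center whose (k/k_frac)-th nearest neighbour is closest,
    then samples r' uniformly in [r, 2r] and counts crossing disks.
    """
    rng = np.random.default_rng(rng)
    k = len(centers)
    m = max(1, k // k_frac)
    # Pairwise distances between all centers.
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    # For each candidate center, radius enclosing its m nearest other points.
    radii = np.sort(d, axis=1)[:, m]
    i = np.argmin(radii)
    c, r = centers[i], radii[i]
    best = None
    for _ in range(trials):
        r_prime = rng.uniform(r, 2 * r)
        dist = np.linalg.norm(centers - c, axis=1)
        crossing = int(np.sum(np.abs(dist - r_prime) <= 1.0))  # unit disks hit by the circle
        inside = int(np.sum(dist < r_prime - 1.0))
        outside = int(np.sum(dist > r_prime + 1.0))
        if best is None or crossing < best[0]:
            best = (crossing, inside, outside, r_prime)
    return c, best

# Example: 64 disjoint unit disks whose centers form a 2-packing (grid spacing 2.5).
pts = 2.5 * np.stack(np.meshgrid(np.arange(8), np.arange(8)), -1).reshape(-1, 2)
print(toy_circle_separator(pts.astype(float)))
```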
We can assume that the separator circle from Theorem <ref> intersects at least one disk in D' tangentially (otherwise r' can be increased or decreased until this condition is met, without altering the set of disks in D' that are intersected by the separator). Given a fixed center c and the set of disks D', the number of circle separators that tangentially intersect a disk in D' is at most 2n. We can enumerate over all such separators. For each such separator we again enumerate over all ways to pick the disks in D' that are intersected. This can be done in time O(n^{O(k^{1-1/δ})}). Therefore the algorithm follows the runtime recursion T(n,k) = O(n^{O(1)}) · O(n) · O(n^{O(k^{1-1/δ})}) · 2T(n,(7/8)k). This implies that T(n,k) = O(n^{O(k^{1-1/δ})}) if δ>1 and O(n^{O(log n)}) if δ≤1.

Let d ≥ 2 be an integer. Let δ ∈ (0,d] be a real number. Let P be a set of n points in ℝ^d with dim_f(P) = δ. Let D = {ball(x,1) : x ∈ P}. Let D' ⊆ D be a set of disjoint elements of D such that |D'| = k. Then there exist c ∈ ℝ^d and r' > 0 such that at most H d-balls in D' intersect sphere(c,r'), and at most (1-2^{-O(d)})k d-balls in D' are contained on either side (interior and exterior) of sphere(c,r'), where H = O(k^{1-1/δ}) if δ>1 and O(1) if δ≤1.

Let P' denote the set of centers of the d-balls in D'. We have |P'| = |D'| = k. Also, since the d-balls in D' are disjoint, P' is a 2-packing of P. Consider any c ∈ ℝ^d and any r ≥ 1. Consider a random (d-1)-sphere sphere(c,r') with radius r' ∈ [r,2r] chosen uniformly at random. We can now bound the number of d-balls in D' that intersect sphere(c,r'). First note that the center of any d-ball in D' that potentially intersects sphere(c,r') lies within ball(c,2r+1). Therefore the number of d-balls that potentially intersect sphere(c,r') is at most |P' ∩ ball(c,2r+1)|. Since P' is a 2-packing that can be augmented into a 2-net by only adding points, we have |P' ∩ ball(c,2r+1)| ≤ O(((2r+1)/2)^δ) = O(r^δ). Therefore the number of d-balls that potentially intersect sphere(c,r') is at most min{k, O(r^δ)}. Any d-ball in D' intersects sphere(c,r') with probability at most 2/r, since r' is chosen uniformly at random from the interval [r,2r]. So in expectation the number of d-balls in D' that intersect sphere(c,r') is at most min{k·(2/r), O(r^δ)·(2/r)}. When r ≤ k^{1/δ} and δ>1 this is at most O(r^δ)·(2/r) = O(k^{1-1/δ}); when r ≤ k^{1/δ} and δ≤1 this is at most O(r^δ)·(2/r) = O(r^{δ-1}) = O(1); and when r > k^{1/δ} this is again at most k·(2/r) = O(k^{1-1/δ}). This implies that there exists some specific r' ∈ [r,2r] such that the number of d-balls in D' that intersect sphere(c,r') is at most O(k^{1-1/δ}) when δ>1 and O(1) when δ≤1. It now remains to specify our choice of c and r so that sphere(c,r') induces a balanced separator. We will use the fact that for any r>0, any d-ball of radius 2r can be covered by at most g(d) = 2^{O(d)} d-balls of radius r. Let c and r be chosen such that sphere(c,r) is the (d-1)-sphere of minimum radius that contains in its interior k/(g(d)+1) elements of D'. Since the d-balls have unit radius, it follows that r ≥ 1. This ensures that there are at least k/2^{O(d)} elements of D' in the interior of sphere(c,r), and therefore at most (1-2^{-O(d)})k elements of D' in the exterior of sphere(c,r). We have that sphere(c,r') is contained within ball(c,2r). Since ball(c,2r) can be covered by at most g(d) d-balls of radius r, by our choice of c and r we have that sphere(c,r') encloses at most (g(d)/(g(d)+1))k = (1-2^{-O(d)})k d-balls in D', concluding the proof.

Let d ≥ 2 be an integer. Let δ ∈ (0,d] be a real number. Let P be a set of n points in ℝ^d with dim_f(P) = δ.
Let D = {ball(x,1) : x ∈ P}. Then there exists an algorithm that computes an independent set in D of size k, if one exists, in time T(n,k), where for any fixed d we have T(n,k) = n^{O(k^{1-1/δ})} if δ>1 and n^{O(log k)} if δ≤1.

Let D' ⊆ D denote the set of k disjoint d-balls in any fixed optimal solution. Let P' denote the set of centers of the d-balls in D'. We have |P'| = |D'| = k. We use a divide and conquer approach using the separator from Theorem <ref>. First we guess the center c and radius r of the smallest (d-1)-sphere enclosing 1/(g(d)+1) of the d-balls in D'. W.l.o.g. we can assume that there exists a set of at least d+1 d-balls in D' that are tangential to sphere(c,r) and enclosed by it; moreover, sphere(c,r) is uniquely defined by the d-balls that it is tangential to. This implies that sphere(c,r-1) intersects at least d+1 points in P and can be uniquely defined by at most d+1 points in P. We can exhaustively guess c by searching through all (d-1)-spheres uniquely defined by at most d+1 points in P in time n^{O(d)}. Next we can assume w.l.o.g. that the separator sphere(c,r') from Theorem <ref> is tangential to at least one d-ball in D' (otherwise r' can be increased or decreased until this condition is met, without altering the set of d-balls in D' that are intersected by the separator). This means that, given a fixed center c, we need to search through at most 2n different radii to guess r'. We enumerate over all such separators. For each such separator we again enumerate over all ways to pick the d-balls in D' that are intersected. This can be done in time n^{O(k^{1-1/δ})} when δ>1 and n^{O(1)} when δ≤1. Therefore we have T(n,k) = n^{O(d)} · O(n) · n^{O(k^{1-1/δ})} · 2T(n,(1-2^{-O(d)})k) when δ>1, or T(n,k) = n^{O(d)} · O(n) · n^{O(1)} · 2T(n,(1-2^{-O(d)})k) when δ≤1, which solves to the desired bound.

§ APPROXIMATION SCHEMES

In this section we describe polynomial-time approximation schemes for covering and packing problems. We use the shifting approach of Hochbaum and Maass <cit.>.

Let d ≥ 2 be some integer, and let δ ∈ (0,d] be some real number. Let P be a set of n points in ℝ^d with dim_f(P) = δ. Then there exists a polynomial-time approximation scheme which, given a natural number l > 0 and any ε > 0, computes a (1+d/l)-approximation to the minimum ε-cover of P, in time l^{d+δ} n^{O((l√d)^δ)}.

Let A be a d-rectangle that encloses the points in P. Consider a set of hyperplanes perpendicular to an axis of the ambient space that subdivide A into strips of width 2εl, which are left-closed and right-open. This gives a partition P_0 of A where each strip has width 2εl. Now for any integer i with 0 < i < l we shift the hyperplanes that define the partition P_0 by 2εi to the right, to get the partition P_i. Let S = {P_0, P_1, …, P_{l-1}}. Let OPT be an optimal ε-cover of P, and let D be the set of d-balls of radius ε centered at the points of OPT. Any d-ball in D intersects the hyperplanes from at most one partition in S. Therefore there exists a partition P_i such that at most |D|/l d-balls in D are intersected by the hyperplanes defining P_i; in other words, at most |D|/l d-balls in D intersect more than one strip of P_i. Now we partition A similarly along each axis to get a grid of hypercubes of side length 2εl, which we call cells. Using the argument described above, it follows that there exists a partition P' such that at most d|D|/l d-balls in D intersect more than one cell of P'. Now consider a cell C of side length 2εl.
Since dim_f(P) = δ and C is contained in a ball of radius √d εl, there exists an ε-cover of the points in C of cardinality at most O((√d εl/ε)^δ) = O((√d l)^δ). We combine the above observations to obtain our algorithm as follows. The algorithm enumerates all l^d partitions of P into cells of side length 2εl. Next it exhaustively enumerates all ε-covers of cardinality at most O((√d l)^δ) for each cell. Since verifying whether a set of points is a valid cover takes time O(n(√d l)^δ) = O(n l^δ) for fixed d, this step overall takes time at most n^{O((√d l)^δ)} · l^δ. Finally the algorithm takes the union of the ε-covers of all the cells to get an ε-cover of P, and returns the best solution over all partitions. Since there exists at least one partition where at most d|D|/l d-balls in D intersect more than one cell of the partition, the size of the returned solution is at most (1+d/l)|D| = (1+d/l)|OPT|. The running time of the algorithm is l^d · n^{O((√d l)^δ)} · l^δ = l^{d+δ} n^{O((l√d)^δ)}.

Let d ≥ 2 be some integer, and let δ ∈ (0,d] be some real number. Let P be a set of n points in ℝ^d with dim_f(P) = δ. There exists a polynomial-time approximation scheme which, given a natural number l > 0 and any ε > 0, computes a (1+d/(l-d))-approximation to the maximum ε-packing of P, in time l^{d+δ} n^{O((l√d)^δ)}.

We use the partitioning approach described in Theorem <ref>. We consider cells of side length εl. Since any ε-packing can be augmented into an ε-net, any ε-packing of the points in a cell has cardinality at most O((√d l/2)^δ). We consider ε-packings for each cell where the points of the packing are all at distance at least ε/2 from the boundary of the cell; this ensures that the d-balls of radius ε/2 centered at these points do not intersect multiple cells. Then we take the union of these points over all cells, and take the maximum-cardinality set over all partitions. The running time is l^{d+δ} n^{O((l√d)^δ)}, by the same reasoning as in Theorem <ref>. Let OPT be an optimal ε-packing of P. Since at most (d/l)|OPT| d-balls of the optimal packing intersect more than one cell, the solution returned by the algorithm has cardinality at least (1-d/l)|OPT|, as required.

§ SPANNERS AND PATHWIDTH

We remark that several other constructions of (1+ε)-spanners for finite subsets of d-dimensional Euclidean space are known. However, they do not yield graphs of small pathwidth. Here we use a construction that is a modified version of the spanner due to Vaidya <cit.>. Let P be a set of n points in ℝ^d. Let us first recall the construction from <cit.>. Let ε > 0. We will define a graph G with V(G) = P that is a (1+ε)-spanner for P.

Let I_1,…,I_d ⊂ ℝ be intervals, all having the same length, and such that each I_i is either closed, open, or half-open. Then we say that b = I_1×…×I_d is a box. We define size(b) to be the length of the interval I_1. For each i ∈ {1,…,d}, let ψ_i(b) be the center of I_i, and define the half-spaces L_i(b) = {(x_1,…,x_d) ∈ ℝ^d : x_i < ψ_i(b)} and R_i(b) = {(x_1,…,x_d) ∈ ℝ^d : x_i ≥ ψ_i(b)}. Let split(b) be the set of boxes

split(b) = {b' : b' = b ∩ (⋂_{i=1}^d f_i), where for all i ∈ {1,…,d}, f_i = L_i(b) or f_i = R_i(b)}.

We also define shrink(b) to be some box satisfying the following conditions: (1) If |b∩P| ≤ 1 then shrink(b) = b∩P (note that we allow shrink(b) to be empty). (2) If |b∩P| ≥ 2 then shrink(b) is some minimal box contained in b with shrink(b)∩P = b∩P; if there are multiple choices for shrink(b), then we choose one arbitrarily. For some box b with |b∩P| ≥ 2, we define succ(b) to be the set of boxes succ(b) = {b' : there exists b'' ∈ split(b) s.t.
b''∩P ≠ ∅ and b' = shrink(b'')}. If |b∩P| ≤ 1, then we define succ(b) = ∅.

The box-tree of P is defined to be a tree T where every node is some box. We set the root of T to be some minimal box b* containing P. For each b ∈ V(T), the set of children of b in T is succ(b). Note that |b∩P| = 1 if and only if b is a leaf of T. For each b ∈ V(T)∖{b*} we denote by father(b) the father of b in T. For each b ∈ V(T) let

near(b) = {b' ∈ V(T)∖{b*} : size(b') < size(b) ≤ size(father(b')) and dist(b,b') ≤ (6√d/ε)·size(b)}.

It follows from the construction that for each b ∈ V(T) we have b∩P ≠ ∅. For each b ∈ V(T) pick some arbitrary point rep(b) ∈ b∩P. We say that rep(b) is the representative of b. We further impose the constraint that for each non-leaf b ∈ V(T), if b' is the unique child of b with rep(b) ∈ b', then rep(b') = rep(b). This implies that for every b ∈ V(T), there exists a branch in T starting at b and terminating at some leaf, such that all the boxes in the branch have the same representative as b. We remark that this additional requirement is not necessary in the original construction of Vaidya <cit.>. We define E(G) = E_1 ∪ E_2, where E_1 = {{rep(b),rep(b')} : b ∈ V(T), b' ∈ succ(b), rep(b) ≠ rep(b')} and E_2 = {{rep(b),rep(b')} : b ∈ V(T), b' ∈ near(father(b))}. This completes the description of the spanner construction due to Vaidya <cit.>. His result is summarized in the following.

G is a (1+ε)-spanner for P. Moreover |E(G)| = O(ε^{-d} n).

For each e = {u,v} ∈ E(G), let D_e be the circumscribed ball of the segment u-v. Let D = ⋃_{e ∈ E(G)} {D_e}. For each i ∈ {1,2} let D_i = ⋃_{e ∈ E_i} {D_e}.

D_1 is (2, d^{O(d)})-thick.

Let r > 0 and define E_1,r = {{x,y} ∈ E_1 : r ≤ ‖x-y‖_2 < 2r}. Let D_1,r = {D_e ∈ D_1 : e ∈ E_1,r}. It suffices to show that D_1,r is d^{O(d)}-thick. For each e = {x,y} ∈ E_1,r we define some unordered pair of boxes γ(e) = {B(e), B'(e)}, as follows. By the definition of E_1, there exists some b ∈ V(T), b' ∈ succ(b), with rep(b) ≠ rep(b'), such that {x,y} = {rep(b), rep(b')}. Assume w.l.o.g. that x = rep(b) and y = rep(b'). By the choice of the representatives, there exists some branch b_0,…,b_t of T, for some t ≥ 1 with b_0 = b, that terminates at some leaf b_t, such that x = rep(b) = rep(b_0) = … = rep(b_t). Since x,y ∈ b, it follows that r ≤ ‖x-y‖_2 ≤ √d·size(b). Since b_t is a leaf, we have size(b_t) = 0. Let t* ∈ {1,…,t} be the maximum integer such that size(b_{t*-1}) ≥ r/√d. Let A ∈ split(b_{t*-1}) such that b_{t*} ⊆ A. Note that size(A) ≥ r/(2√d) and size(b_{t*}) < r/√d. Pick some box B(e), with b_{t*} ⊆ B(e) ⊆ A, such that size(B(e)) ∈ [r/(2√d), r/√d], in a consistent fashion (i.e., for a fixed choice of b_{t*} and A we always pick the same box). Similarly, let b'_0,…,b'_s be a sequence of boxes such that b'_0 ∈ split(b), with b' ⊆ b'_0, and b'_1,…,b'_s is a branch of T starting at b'_1 = b' and terminating at some leaf b'_s. Arguing as before, let s* ∈ {1,…,s} be the maximum integer such that size(b'_{s*-1}) ≥ r/(2√d). If s* = 1 then let A' ∈ split(b'_{s*-1}), with b'_{s*} ⊆ A'; pick some box B'(e), with b'_{s*} ⊆ B'(e) ⊆ A', such that size(B'(e)) ∈ [r/(4√d), r/(2√d)], in a consistent fashion. We say that e is charged to γ(e). By construction, at most one edge in E_1,r is charged to each pair of boxes. By (<ref>) and (<ref>) we have that for each e ∈ E_1,r, the pair γ(e) consists of two boxes, each of size Θ(r/√d). Moreover, by construction and our choice of boxes, for any e,f ∈ E_1,r, B(e) and B(f) are disjoint or equal; similarly, B'(e) and B'(f) are disjoint or equal. Thus, each point in ℝ^d can be contained in at most O(1) boxes among all the pairs γ(e), for all e ∈ E_1,r. Moreover, dist(B(e),B'(e)) ≤ ‖x-y‖_2 < 2r. Thus, each box participates in at most (√d)^{O(d)} = d^{O(d)} pairs.
For each e ∈ E_1,r, let A(e) = N(B(e),r) ∪ N(B'(e),r), where N(X,r) denotes the r-neighborhood of X in ℝ^d. It follows that {A(e)}_{e ∈ E_1,r} is d^{O(d)}-thick. Since for each e ∈ E_1,r we have D_e ⊆ A(e), it follows that D_1 is d^{O(d)}-thick, as required.

D_2 is (2, (d/ε)^{O(d)})-thick.

Let r > 0 and define E_2,r = {{x,y} ∈ E_2 : r ≤ ‖x-y‖_2 < 2r}. Let D_2,r = {D_e ∈ D_2 : e ∈ E_2,r}. It suffices to show that D_2,r is (d/ε)^{O(d)}-thick. As in the proof of Lemma <ref>, for each e = {x,y} ∈ E_2,r we define some unordered pair of boxes γ(e) = {B(e), B'(e)}. By the definition of E_2, there exist some b ∈ V(T), b' ∈ near(father(b)), such that {x,y} = {rep(b), rep(b')}. Assume w.l.o.g. that x = rep(b) and y = rep(b'). Thus we have size(b') < size(father(b)) ≤ size(father(b')) and dist(b', father(b)) ≤ (6√d/ε)·size(father(b)). Thus

r ≤ ‖x-y‖_2 ≤ √d·size(b) + √d·size(b') + dist(b,b') < √d·size(father(b)) + √d·size(father(b)) + dist(b', father(b)) + √d·size(father(b)) ≤ (3+6/ε)·√d·size(father(b)).

Thus size(father(b)) > εr/(9√d). Let b_0,…,b_t be a branch in T with b_0 = father(b) and b_t = {x}. Arguing as in Lemma <ref>, let t* ∈ {0,…,t-1} be the maximum integer such that size(b_{t*}) ≥ εr/(9√d). Let A ∈ split(b_{t*}), with b_{t*+1} ⊆ A, and pick some box B(e) ⊆ A with size(B(e)) ∈ [εr/(18√d), εr/(9√d)]. Similarly, let b'_0,…,b'_s be a branch of T with b'_0 = father(b') and b'_s a leaf with b'_s = {y}. Arguing as in Lemma <ref>, let s* ∈ {0,…,s-1} be the maximum integer such that size(b'_{s*-1}) ≥ εr/(9√d). Let A' ∈ split(b'_{s*-1}) with b'_{s*} ⊆ A', and pick some box B'(e), with b'_{s*} ⊆ B'(e) ⊆ A', such that size(B'(e)) ∈ [εr/(18√d), εr/(9√d)]. We say that e is charged to γ(e).
By construction, at most one edge in E_2,r is charged to each pair of boxes. By (<ref>) and (<ref>) we have that for each e ∈ E_2,r, the pair γ(e) consists of two boxes, each of size Θ(εr/√d). Thus, each point in ℝ^d can be contained in at most O(1) distinct boxes among all the pairs γ(e), for all e ∈ E_2,r. Moreover, dist(B(e),B'(e)) ≤ ‖x-y‖_2 < 2r. Thus, each box participates in at most (√d/ε)^{O(d)} = (d/ε)^{O(d)} pairs. For each e ∈ E_2,r, let A(e) = N(B(e),r) ∪ N(B'(e),r). It follows that {A(e)}_{e ∈ E_2,r} is (d/ε)^{O(d)}-thick. Since for each e ∈ E_2,r we have D_e ⊆ A(e), it follows that D_2 is (d/ε)^{O(d)}-thick, as required.

D is (2, (d/ε)^{O(d)})-thick.

By Lemma <ref> we have that D_1 is (2, d^{O(d)})-thick, and by Lemma <ref> we have that D_2 is (2, (d/ε)^{O(d)})-thick. Since D = D_1 ∪ D_2, we get that D is (2,κ)-thick, where κ = d^{O(d)} + (d/ε)^{O(d)} = (d/ε)^{O(d)}, as required.

Let x,y,z,w ∈ ℝ^d. We say that zw is a shortcut for xy if the following conditions hold: (1) ‖x-z‖_2 ≤ ε‖z-w‖_2/20. (2) The angle formed by the segments x-y and x-(w-z+x) is at most ε/20.

We now proceed to modify G to obtain a graph G'. Initially, G' contains no edges. We consider all edges of G in increasing order of length. When considering an edge e = {x,y}, if there exists {z,w} ∈ E(G') such that either zw is a shortcut for xy or zw is a shortcut for yx, then we do not add e to G'; otherwise we add e to G'. This completes the construction of G'.

We next argue that G' is a spanner with low dilation for P. The proof of the following is standard (see e.g. <cit.>). For completeness, we provide a sketch of the proof.

G' is a (1+2ε)-spanner for P.

We consider all pairs {x,y} of points of P in order of increasing ‖x-y‖_2, and we prove by induction that d_{G'}(x,y) ≤ (1+2ε)‖x-y‖_2. If {x,y} ∉ E(G), then the assertion follows by applying the inductive hypothesis to all the edges of the shortest path between x and y in G. If {x,y} ∈ E(G)∩E(G'), then d_{G'}(x,y) = ‖x-y‖_2, and the inductive hypothesis holds trivially. Finally, it remains to consider the case {x,y} ∈ E(G)∖E(G'). Since {x,y} was not added to G', it follows that there exists some {z,w} ∈ E(G') such that either zw is a shortcut for xy or zw is a shortcut for yx. Assume w.l.o.g. that zw is a shortcut for xy. We have

d_{G'}(x,y) ≤ d_{G'}(x,z) + d_{G'}(z,w) + d_{G'}(w,y)
≤ (1+2ε)‖x-z‖_2 + ‖z-w‖_2 + (1+2ε)‖w-y‖_2
≤ (1+ε/20+ε^2/10)‖z-w‖_2 + (1+2ε)‖w-y‖_2
< (1+ε/20+ε^2/10)(1+ε/4)(‖x-y‖_2 - ‖w-y‖_2) + (1+2ε)‖w-y‖_2
< (1+ε)(‖x-y‖_2 - ‖w-y‖_2) + (1+2ε)‖w-y‖_2
< (1+2ε)‖x-y‖_2,

which concludes the proof.

Let c ∈ ℝ^d and let r > 0. Let E* = {{x,y} ∈ E(G') : ‖x-y‖_2 > 2r and xy ∩ sphere(c,r) ≠ ∅}. Then |E*| ≤ (d/ε)^{O(d)}.

Let E*_0 = {{x,y} ∈ E* : ‖x-y‖_2 ≤ 100r/ε}. We can partition E*_0 into O(log(1/ε)) buckets, where the i-th bucket contains the edges whose circumballs have radius in [r2^i, r2^{i+1}). Since by Lemma <ref> D is (2,(d/ε)^{O(d)})-thick, and all these balls are contained in a ball of radius O(r/ε), it follows that each bucket can contain at most (1/ε)^{O(d)}·(d/ε)^{O(d)} balls. Thus |E*_0| = O(log(1/ε)) · (1/ε)^{O(d)} · (d/ε)^{O(d)} = (d/ε)^{O(d)}.

Let E*_1 = E* ∖ E*_0. Suppose that |E*_1| > (d/ε)^{Cd}. Setting C to be a sufficiently large universal constant, it follows that there exist distinct edges {x,y}, {z,w} ∈ E*_1 that form an angle of less than ε/20. Assume w.l.o.g. that ‖x-y‖_2 ≥ ‖z-w‖_2, x ∈ ball(c,r), and z ∈ ball(c,r). Then zw must be a shortcut for xy, which is a contradiction since {x,y} ∈ E(G'), concluding the proof.

We now prove the main result of this section. Let d ≥ 2 be some fixed integer, and let δ ∈ (0,d] be some real number. Let P ⊂ ℝ^d be some finite point set with |P| = n, such that dim_f(P) = δ.
Then, for any fixed ε ∈ (0,1], there exists a (1+ε)-spanner G' for P, with a linear number of edges, and with pathwidth pw(G') = O(n^{1-1/δ} log n) if δ>1, O(log^2 n) if δ=1, and O(log n) if δ<1. Moreover, given P, the graph G' can be computed in polynomial time.

Let G' be the spanner constructed above. The bound on the number of edges of G' follows from Theorem <ref>, since G' ⊆ G. We will bound the pathwidth of G'. By Lemma <ref> we have that D is (2,(d/ε)^{O(d)})-thick. By Theorem <ref> there exists some (d-1)-sphere C with radius r such that at most (1-2^{-O(d)})n points of P are contained on either side of C, and |A| ≤ M, where M = O(n^{1-1/δ}) if δ>1, O(log n) if δ=1, and O(1) if δ<1, and A = {{x,y} ∈ E(G) : ‖x-y‖_2 ≤ 2r and xy ∩ C ≠ ∅}. Let A' = {{x,y} ∈ E(G') : ‖x-y‖_2 ≤ 2r and xy ∩ C ≠ ∅}. We have A' ⊆ A, and thus |A'| ≤ |A|. Let A'' = {{x,y} ∈ E(G') : ‖x-y‖_2 > 2r and xy ∩ C ≠ ∅}. By Lemma <ref> we have |A''| = O(1) (for fixed d and ε). Let S be the set of all endpoints of all the edges in A' ∪ A''. We have |S| ≤ 2|A' ∪ A''| = O(|A|). Let U (resp. U') be the set of points of P inside (resp. outside) C. Then S separates in G' every vertex of U from every vertex of U'. We may thus recurse on G'[U ∖ (S∪U')] and G'[U' ∖ (S∪U)] and obtain path decompositions X_1,…,X_t and Y_1,…,Y_s, respectively. We then obtain the path decomposition X_1∪S,…,X_t∪S, Y_1∪S,…,Y_s∪S of G'. The width of the resulting path decomposition is at most O(M log n), concluding the proof.

Acknowledgements The authors wish to thank Sariel Har-Peled, Dimitrios Thilikos, and Yusu Wang for fruitful discussions.

§ FRACTAL DIMENSION AND DOUBLING DIMENSION

In this section we observe that the fractal dimension and the doubling dimension of a set of points are related up to a constant factor. Writing dim_f for the fractal dimension and dim_D for the doubling dimension, for any metric space M we have dim_D(M) = dim_f(M) + O(1) and dim_f(M) = O(dim_D(M)).

Let M = (X,ρ). We first show that dim_D(M) = dim_f(M) + O(1). Let dim_f(M) = δ. For any ε-net N of M, at most O((r/ε)^δ) points of N are contained in any ball of radius r, for any r > ε. Setting ε = r/2 and taking the balls of radius r/2 centered at the points of N, we get that any ball of radius r can be covered by the union of at most O(2^δ) balls of radius r/2. Thus dim_D(M) = δ + O(1).

Next we show that dim_f(M) = O(dim_D(M)). Let dim_D(M) = λ. From the definition of the doubling dimension we have that for any r > 0, any ball of radius r can be covered by at most 2^λ balls of radius r/2. Given r > ε > 0 and x ∈ X, applying the definition of the doubling dimension log(2r/(ε/2)) times and taking the centers of the balls obtained, we get that there exists S ⊆ X such that S is an ε/2-covering of X and |S ∩ ball(x,2r)| ≤ (4r/ε)^λ. Consider any S' ⊆ X such that S' is an ε-packing. Any two points of S' are covered by different points of S, since they are at least ε apart. Also, every point of S' ∩ ball(x,r) is covered by some point of S ∩ ball(x,2r). This implies that |S' ∩ ball(x,r)| ≤ |S ∩ ball(x,2r)| ≤ (4r/ε)^λ. Therefore, for any ε-net N, which is also an ε-packing by definition, we have |N ∩ ball(x,r)| ≤ (4r/ε)^λ. Thus dim_f(M) = O(λ), concluding the proof.
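The quantities in this appendix are easy to experiment with numerically. The following minimal sketch is our own illustration (all function names are ours, and the greedy construction is just one valid way to build an ε-net): it builds an ε-net of a finite point set and measures the largest number of net points inside a ball of radius r; the growth of this count with r/ε reflects the fractal dimension defined above.

```python
import numpy as np

def greedy_eps_net(points, eps):
    """Greedily pick an eps-net: every point is within eps of the net,
    and net points are pairwise > eps apart (so it is also an eps-packing)."""
    net = []
    remaining = points.copy()
    while len(remaining) > 0:
        p = remaining[0]
        net.append(p)
        # Drop every point covered by ball(p, eps).
        remaining = remaining[np.linalg.norm(remaining - p, axis=1) > eps]
    return np.array(net)

def max_ball_count(net, r):
    """Largest number of net points inside any ball(x, r) centered at a net point."""
    d = np.linalg.norm(net[:, None, :] - net[None, :, :], axis=-1)
    return int((d <= r).sum(axis=1).max())

# Example: points on a line segment embedded in the plane (expect delta ~ 1).
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 1, 4096), np.zeros(4096)])
eps = 0.01
net = greedy_eps_net(pts, eps)
for r in [4 * eps, 16 * eps, 64 * eps]:
    c = max_ball_count(net, r)
    print(f"r/eps = {r/eps:5.0f}  max count = {c:4d}  exponent ~ {np.log(c)/np.log(r/eps):.2f}")
```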
http://arxiv.org/abs/1703.09324v1
{ "authors": [ "Anastasios Sidiropoulos", "Vijay Sridhar" ], "categories": [ "cs.DS", "F.2.2" ], "primary_category": "cs.DS", "published": "20170327221013", "title": "Algorithmic interpretations of fractal dimension" }
1 Aix Marseille Univ, CNRS, LAM, Laboratoire d'Astrophysique de Marseille, Marseille, France, matthieu.bethermin@lam.fr
2 European Southern Observatory, Karl-Schwarzschild-Str. 2, 85748 Garching, Germany
3 California Institute of Technology, MC 367-17, Pasadena, CA 91125, USA
4 Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109, USA
5 Univ. Grenoble Alpes, CNRS, IPAG, F-38000 Grenoble, France
6 SRON Netherlands Institute for Space Research, Landleven 12, 9747 AD, Groningen, The Netherlands
7 Kapteyn Astronomical Institute, University of Groningen, Postbus 800, 9700 AV, Groningen, The Netherlands
8 CEA Saclay, Laboratoire AIM-CNRS-Université Paris Diderot, Irfu/SAp, Orme des Merisiers, F-91191 Gif-sur-Yvette, France
9 SISSA, Via Bonomea 265, 34136 Trieste, Italy
10 INAF-Osservatorio Astronomico di Trieste, via Tiepolo 11, 34131 Trieste, Italy
11 INFN-Sezione di Trieste, via Valerio 2, 34127 Trieste, Italy

Follow-up observations at high angular resolution of bright submillimeter galaxies selected from deep extragalactic surveys have shown that the single-dish sources are blends of several galaxies. Consequently, number counts derived from low- and high-angular-resolution observations are in tension. This demonstrates the importance of resolution effects at these wavelengths and the need for realistic simulations to explore them. We built a new 2 deg^2 simulation of the extragalactic sky from the far-infrared to the submillimeter. It is based on an updated version of the 2SFM (two star-formation modes) galaxy evolution model. Using global galaxy properties generated by this model, we used an abundance-matching technique to populate a dark-matter lightcone and thus simulate the clustering. We produced maps from this simulation and extracted the sources, and we show that the limited angular resolution of single-dish instruments has a strong impact on (sub)millimeter continuum observations. Taking these resolution effects into account, we reproduce a large set of observables, such as number counts, their evolution with redshift, and cosmic infrared background power spectra. Our simulation consistently describes the number counts from single-dish telescopes and interferometers. In particular, at 350 and 500 μm, we find that the number counts measured by Herschel between 5 and 50 mJy are biased towards high values by a factor of ∼2, and that the redshift distributions are biased towards low redshifts. We also show that clustering has an important impact on the Herschel pixel histogram used to derive number counts from P(D) analysis. We find that the brightest galaxy in the beam of a 500 μm Herschel source contributes on average only ∼60% of the Herschel flux density, but that this number will rise to ∼95% for future millimeter surveys on 30-meter-class telescopes (e.g., NIKA2 at IRAM). Finally, we show that the large number density of red Herschel sources found in observations but not in models might be an observational artifact caused by the combination of noise, resolution effects, and the steepness of color and flux density distributions. Our simulation, called SIDES (Simulated Infrared Dusty Extragalactic Sky), is available at <http://cesam.lam.fr/sides>.
The impact of clustering and angular resolution on far-infrared and millimeter continuum observations

Matthieu Béthermin1,2, Hao-Yi Wu3,4, Guilaine Lagache1, Iary Davidzon1, Nicolas Ponthieu5, Morgane Cousin1, Lingyu Wang6,7, Olivier Doré3,4, Emanuele Daddi8, Andrea Lapi9,10,11

Received ??? / Accepted ???
===========================

§ INTRODUCTION

The star formation history (SFH) in the Universe is one of the key constraints to understand the evolution of galaxies. The combination of various tracers (Hα, far UV, far infrared and millimeter) was successfully used in the last 20 years to measure the star formation rate density (SFRD) up to very high redshift (z ∼ 8; see <cit.> for a review). At z ≥ 2-3, building complete spectroscopic samples becomes very challenging, and continuum emission is thus mainly used to derive star formation rates (SFR). Consequently, the prime tracer of recent star formation is the redshifted far-UV emission from young stars. However, even at early epochs, massive galaxies have already formed a large amount of dust, and UV light is thus absorbed <cit.>. Two main approaches can then be used to derive the intrinsic SFR: correcting the UV absorption using the UV spectral slope as a proxy of attenuation <cit.>, or directly detecting the reprocessed UV light emitted by dust in the far-infrared and millimeter <cit.>.

Far-infrared and submillimeter observations are challenging because of the limited angular resolution of the instruments. The deepest observations of the most modern single-dish instruments are limited by confusion, that is, the blending of sources inside the beam of the instrument <cit.>. Only the brightest galaxies emerge from the confusion and can be extracted individually from far-infrared and submillimeter maps. However, because of the large beam of single-dish instruments, their measured flux density can be contaminated by fainter neighbors. Indeed, follow-up observations of the brightest 850 μm sources at high resolution with ALMA revealed that a large fraction are multiple sources <cit.>. Because of this, the flux density distributions measured with single-dish instruments and with interferometers such as ALMA <cit.> strongly disagree.

The Herschel space observatory <cit.> also has a limited angular resolution and could be affected by similar effects. This is, however, very difficult to verify, because interferometric follow-up observations are not possible at the high frequencies of the Herschel observations. Consequently, other approaches such as modeling must be used to explore possible biases induced by the angular resolution on Herschel number counts. In particular, we have to understand why the number density of red Herschel sources found in the extragalactic surveys is almost an order of magnitude higher than that predicted by the models (<cit.>; see also <cit.>).

In addition to studies of bright sources above the confusion limit, various advanced techniques were developed to probe galaxy populations within the confusion, such as the stacking method <cit.>, P(D) measurements <cit.>, or source extraction using position priors coming from shorter wavelengths <cit.>.
These methods can also be biased by the contamination of the measured flux by faint clustered sources. Simulations were developed to test these possible biases <cit.>, but the clustering of infrared galaxies at high redshift was poorly constrained at that time. Important progress has been made recently. In particular, Planck and Herschel measured cosmic infrared background (CIB) anisotropies with unprecedented precision <cit.>. Their modeling showed that the typical mass of the dark-matter halos hosting the bulk of the obscured star formation is almost constant, around 10^12 M_⊙, up to z∼3 <cit.>. In addition, clustering studies of bright high-redshift far-infrared and millimeter galaxies showed that they are hosted by massive halos (∼10^13 M_⊙; e.g., <cit.>). These massive halos are strongly clustered. The impact of clustering on the extraction of sources from confusion-limited surveys might thus be stronger than predicted by pre-Planck and pre-Herschel simulations, which assumed a weaker clustering.

It is thus timely to develop new simulations that are able to reproduce simultaneously the far-infrared and millimeter observations at various angular resolutions. These simulations must include clustering and take into account all the lessons learnt from Herschel and ALMA. On the one hand, <cit.> built a simulation based on abundance matching, but analyzed only the galaxy populations selected at 850 μm (see also <cit.>). On the other hand, <cit.> built a simulation of the panchromatic properties of galaxies, but did not include a physical clustering model. Our new simulation combines the strengths of these two approaches and accurately reproduces the spectral and spatial properties of galaxies and CIB anisotropies. In this paper, we focus on the continuum properties of galaxies and the effect of angular resolution from 70 μm to 1.2 mm. In a future paper, we will introduce the (sub)millimeter line ([CII], [NII], [CI], CO...) properties of galaxies, discuss the perspectives for (sub)millimeter intensity mapping, and test methods of line deblending.

Our simulation is based on the Bolshoi-Planck simulation <cit.>, from which a lightcone covering 2 deg^2 was produced. We populate the dark-matter halos using an abundance-matching technique <cit.>. The luminous properties of the galaxies are derived using an updated version of the 2SFM (two star-formation modes) model <cit.>. This model is based on the observed evolution of the main sequence of star forming galaxies <cit.>, that is, a SFR-M_⋆ correlation evolving with redshift, and on the observed evolution of the spectral energy distributions (SEDs) with redshift. In the new version of the model, we take into account the increase of dust temperature in main sequence galaxies recently measured from z=2 to z=4 <cit.>, extending the increase found from z=0 to z=2 by <cit.>. We also include the latest calibration of the evolution of the main sequence <cit.>.

In Sect. <ref>, we present the ingredients of our simulation and discuss its limitations. We compare our results with observed number counts and discuss the effects of resolution in Sect. <ref>. We then discuss the redshift-dependent observables and the consequences for the obscured star formation history (Sect. <ref>). We then show the significant impact of clustering on the pixel histogram of the Herschel maps, also known as P(D), and check that our model correctly reproduces the CIB anisotropies measured by Herschel and Planck (Sect. <ref>).
Finally, we discuss the existence of the red sources found by Herschel surveys (Sect. <ref>). We assume a <cit.> cosmology and a <cit.> initial mass function (IMF). The products of our simulation, called SIDES (Simulated Infrared Dusty Extragalactic Sky), are publicly available at <http://cesam.lam.fr/sides>.

§ INGREDIENTS OF THE SIMULATION

This section describes the ingredients used to build our simulated sky, namely:
* the dark-matter lightcone, which is the starting point (Sect. <ref>);
* the stellar mass function (Sect. <ref>);
* the abundance-matching procedure used to populate the dark-matter halos with galaxies (Sect. <ref>);
* our recipe to split galaxies into a star forming and a passive population (Sect. <ref>);
* our method to derive a SFR for each galaxy (Sect. <ref>);
* the assignment of SEDs to our simulated galaxies (Sect. <ref>);
* the implementation of strong and weak lensing (Sect. <ref>).

Finally, in Sect. <ref>, we discuss the limitations of our simulation. We homogenized the cosmology used in the dark-matter simulation and in the observed stellar mass functions; our method is described in Appendix <ref>.

§.§ Dark matter simulation and lightcone catalog

We use the publicly available halo catalogs from the Bolshoi-Planck simulation <cit.> [The catalogs are available at <http://hipacc.ucsc.edu/Bolshoi/MergerTrees.html>]. The simulation has a volume of (250 h^-1 Mpc)^3, with a dark-matter particle mass of 1.5 × 10^8 h^-1 M_⊙. The cosmological parameters are compatible with <cit.>: h = 0.678, σ_8 = 0.823, Ω_Λ = 0.693, Ω_M = 0.307, Ω_b = 0.048, and n_s = 0.96. Dark matter halos are identified by the phase-space halo finder Rockstar <cit.>. We use the halo mass M_200, defined by the radius within which the spherical overdensity is 200 times the critical density of the Universe. We only use halos with mass above 10^10 M_⊙, which have more than 50 dark-matter particles. We have explicitly verified that above 10^10 M_⊙ the halo mass function from our simulation agrees with the analytic halo mass function.

Using the simulation snapshots at different redshifts, we construct a lightcone catalog of 1.4 deg × 1.4 deg and 0<z<10, corresponding to a comoving volume of 0.17 Gpc^3, approximately three times the volume of the Bolshoi-Planck simulation. In a lightcone catalog, each object is at the cosmic distance that corresponds to the cosmic time at which it emits light. The simulation outputs are saved at discrete time steps, and we use snapshots approximately spaced by Δz = 0.25. We have explicitly checked that the structure in such a narrow redshift bin has negligible evolution and can be represented by a single snapshot. To construct the lightcone, we replicate the box in all three dimensions, using the periodic boundary condition inherent in the simulations. Since our lightcone catalog has a pencil-beam geometry, we use a "slanted" line of sight to reduce the repeated structure; that is, the line of sight is not parallel to any of the axes or diagonals of the box. Specifically, we first rotate the box by 10° about the y-axis and then by another 10° about the z-axis. We then transform the cartesian coordinates into equatorial coordinates, following the convention of astropy. The distance of an object from the observer is converted into the cosmological redshift, to which we add the contribution of the peculiar velocity along the line of sight; a minimal sketch of these transformations is given below. For more details about the construction of the lightcone, we refer to <cit.>.
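The coordinate transformations described above are straightforward with standard tools. The following minimal sketch is our own illustration (the exact conventions of the authors' pipeline may differ, and all function names are ours): it rotates the box coordinates, converts them to equatorial coordinates, and combines the cosmological redshift with the line-of-sight peculiar velocity.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM, z_at_value
from scipy.spatial.transform import Rotation

cosmo = FlatLambdaCDM(H0=67.8, Om0=0.307)  # Bolshoi-Planck-like cosmology

def lightcone_coords(xyz_mpc, v_los_kms):
    """xyz_mpc: (N, 3) comoving positions [Mpc], observer at the origin.
    v_los_kms: (N,) peculiar velocities along the line of sight [km/s]."""
    # "Slanted" line of sight: rotate by 10 deg about y, then 10 deg about z.
    rot = Rotation.from_euler("yz", [10, 10], degrees=True)
    x, y, z = rot.apply(xyz_mpc).T
    # Cartesian -> equatorial coordinates.
    d_c = np.sqrt(x**2 + y**2 + z**2)
    ra = np.degrees(np.arctan2(y, x)) % 360.0
    dec = np.degrees(np.arcsin(z / d_c))
    # Comoving distance -> cosmological redshift, then add the peculiar velocity.
    z_cos = np.array([float(z_at_value(cosmo.comoving_distance, d * u.Mpc)) for d in d_c])
    z_obs = (1 + z_cos) * (1 + v_los_kms / 299792.458) - 1
    return ra, dec, z_obs

ra, dec, z_obs = lightcone_coords(np.array([[3000.0, 500.0, 500.0]]), np.array([300.0]))
print(ra, dec, z_obs)
```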
§.§ Stellar mass function

In our simulation, the stellar mass function (SMF) is the starting point from which we generate all the properties of the galaxies. Similarly to the approach presented in <cit.>, we assume that it can be described by a double Schechter function <cit.>:

ϕ(M_⋆) d(M_⋆) = e^{-M_⋆/ℳ^⋆} [Φ_1^⋆ (M_⋆/ℳ^⋆)^{α_1} + Φ_2^⋆ (M_⋆/ℳ^⋆)^{α_2}] d(M_⋆)/ℳ^⋆,

where ℳ^⋆ is the characteristic mass of the knee of the SMF, Φ_1^⋆ and Φ_2^⋆ are the normalizations of the two components, and α_1 and α_2 are the power-law slopes at low mass. We use the same functional representation of the SMF at all redshifts to avoid discontinuities in its evolution with redshift.

The evolution with redshift of the parameters described above is based on observations. We use the data points of <cit.> in the GAMA field in the local Universe, <cit.> from the VIPERS survey up to z=1.5, <cit.> in the COSMOS field from z=1.5 to z=4, and <cit.> at z>4. <cit.> use a simple Schechter function; at z>4, to ensure a smooth transition with smaller redshifts at which a double Schechter function is used, we fix Φ_1^⋆ to 0 and use the Φ^⋆ and α of the single Schechter function for the second component. We connect the data points (taken at the center of the redshift bins of the authors) using a linear interpolation of each parameter (log(ℳ^⋆), Φ_1^⋆, log(Φ_2^⋆), α_1, α_2) as a function of (1+z). We chose to interpolate Φ_1^⋆ and log(Φ_2^⋆) to avoid problems with the logarithm where Φ_1^⋆ is fixed to zero, and to avoid negative values at z>7, respectively.

The stellar mass function of the galaxies used to generate our simulation is shown in Fig. <ref>. Below 10^8 M_⊙, the number density at fixed mass no longer evolves monotonically with redshift. This unphysical behavior is caused by the uncertainties on the low-mass slope of the observed data we used; this is a limitation of our empirical approach. However, these low-mass sources have a small impact on our simulation, since they emit only 4% of the infrared luminosity. We have thus chosen to keep these sources in the simulation, since they contribute to the confusion noise.

§.§ Abundance matching

To assign stellar masses to dark matter halos and subhalos, we perform subhalo abundance matching between the halo catalogs and the stellar mass functions described above. The basic idea of abundance matching is to assign higher stellar masses to more massive halos or subhalos, either monotonically or with some scatter, according to the number densities of the objects in the Universe <cit.>. In this work, instead of mass, we use the peak circular velocity v_pk of dark matter halos and subhalos to perform the abundance matching, since v_pk is known to be more tightly correlated with stellar mass <cit.>. We assume that the stellar mass has an intrinsic scatter of 0.2 dex at a given v_pk, which is required for the resulting galaxy catalog to reproduce the observed galaxy clustering <cit.>. The input SMF (Eq. <ref>) is deconvolved into a stellar mass function without the intrinsic scatter on the stellar versus halo mass relation; this deconvolved stellar mass function is then used to match the number density of halos monotonically. We use the implementation by Y.-Y. Mao [The code is publicly available at <https://bitbucket.org/yymao/abundancematching>], and we refer the readers to <cit.> and <cit.> for the detailed implementation. The left panel of Fig.
<ref> demonstrates that the stellar mass functions resulting from this abundance-matching calculation (solid curves) recover the input stellar mass functions (dashed curves). There is a slight tension at low mass in some redshift bins, caused by the evolution of the SMF inside a redshift bin. We also observe a sharp cut below 10^7 M_⊙ at 0<z<0.4, which is caused by the halo mass limit of the simulation: since halo and stellar masses are correlated, this limit also implies a low-mass cut in the stellar mass function. The right panel of Fig. <ref> shows the stellar mass-halo mass relation resulting from the abundance-matching calculation.

§.§ Fraction of star forming galaxies

We will randomly draw galaxy properties from stellar mass and redshift using the prescriptions of the 2SFM formalism <cit.>, which applies only to star forming galaxies. First, we have to estimate the probability of a galaxy at a given M_⋆ and redshift to be star forming. Accordingly, we split the galaxies in our simulation into two populations: passive galaxies, which have a negligible star formation, and star forming galaxies. We used the observed evolution of the star forming fraction from <cit.> to derive this fraction. In this work, the authors classified the galaxies as star forming or not using their position in the (NUV-r) versus (r-K) color diagram <cit.>. We fit their results with the following parametric form (see also Fig. <ref>):

f_SF(M_⋆, z) = (1 - f_Q,0(z)) × (1 - erf[(log_10(M_⋆) - log_10(M_t(z)))/σ_SF(z)])/2,

where f_Q,0(z) is the fraction of passive galaxies at low mass (M_⋆ ≪ M_t(z)). This fraction is higher at low redshift, where a significant fraction of low-mass galaxies in dense environments are passive. In Eq. <ref>, M_t(z) is the stellar mass of the transition between passive and star forming galaxies, and σ_SF(z) is the width of this transition. These three quantities evolve with redshift. Their evolution is parametrized in the following way:

f_Q,0(z) = f_Q,0,z=0 (1+z)^γ,
log_10(M_t)(z) = log_10(M_t,z=0) + α_1 z + α_2 z^2,
σ_SF(z) = σ_SF,z=0 + β_1 z + β_2 z^2.

This parametric form provides an excellent fit to the measurements (reduced χ^2 of 0.82). The best-fit parameters are f_Q,0,z=0 = 0.1017, log_10(M_t,z=0) = 10.53, σ_SF,z=0 = 0.8488, α_1 = 0.2232, α_2 = 0.0913, β_1 = 0.0418, β_2 = -0.0159, and γ = -1.039 (these values are used in the short sketch below).

This approach neglects environmental effects, since it depends only on stellar mass and redshift. Our simulation is optimized for field galaxies, cosmic infrared background, and intensity-mapping studies. Cosmic infrared background and intensity mapping are dominated by central galaxies and are thus not severely affected by these effects <cit.>. The limitations implied by this simplification are discussed in Sect. <ref>.

§.§ Star-forming properties

We assume that only galaxies classified as star forming have far-infrared and millimeter outputs. In passive galaxies, some residual emission of cirrus heated by the old stellar populations has been observed. However, at a given stellar mass, these galaxies usually have infrared luminosities at least one order of magnitude lower than galaxies on the main sequence <cit.>. Neglecting their infrared output is thus a fair assumption.

The SFR of star forming galaxies is derived using the 2SFM formalism <cit.>. The first step is to compute the mean SFR of sources from the measured evolution of the main sequence of star forming galaxies, hereafter written SFR_MS.
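Before turning to the parametric form of SFR_MS, we note that the star-forming fraction of Eq. <ref> is straightforward to evaluate. The following minimal sketch is our own illustration (assuming numpy and scipy, with the best-fit parameters quoted above):

```python
import numpy as np
from scipy.special import erf

def f_sf(log_mstar, z):
    """Star-forming fraction with the best-fit parameters quoted above."""
    f_q0 = 0.1017 * (1 + z) ** (-1.039)                 # passive fraction at low mass
    log_mt = 10.53 + 0.2232 * z + 0.0913 * z**2         # transition mass
    sigma_sf = 0.8488 + 0.0418 * z - 0.0159 * z**2      # width of the transition
    return (1 - f_q0) * (1 - erf((log_mstar - log_mt) / sigma_sf)) / 2

# Massive galaxies are more often star forming at high redshift.
print(f_sf(10.5, 0.0), f_sf(10.5, 2.0))
```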
<cit.> measured the evolution of this main sequence up to z=4 and proposed the following parametric description:

log_10(SFR_MS/(M_⊙/yr)) = log_10(M_⋆/10^9 M_⊙) - m_0 + a_0 log_10(1+z) - a_1 [max(0, log_10(M_⋆/10^9 M_⊙) - m_1 - a_2 log_10(1+z))]^2,

with m_0 = 0.5, a_0 = 1.5, a_1 = 0.3, m_1 = 0.36, and a_2 = 2.5. In <cit.>, we assumed a simple power law for the main sequence at a given redshift and, at z>2.5, no evolution of the sSFR, that is, SFR/M_⋆, with z at fixed M_⋆. In this updated version (Eq. <ref>), the SFR decreases sharply at high M_⋆, and the sSFR continues to evolve at higher redshift. This rising sSFR was already discussed in <cit.>, since it reproduces the CIB anisotropies better. The <cit.> formula is fitted on observations at z>0.5, and the resulting sSFR is too high at lower redshift. To correct for this offset, we applied a 0.1 × (0.5-z)/(0.5-0.22) dex offset to the <cit.> formula at z<0.5. Detailed explanations are provided in Appendix <ref>.

Star forming galaxies are not all on the main sequence. In this paper, a starburst is defined as a positive outlier of the main sequence <cit.>. Following <cit.>, the fraction of starbursts does not vary with stellar mass; it grows linearly with redshift from 1.5% at z=0 to 3% at z=1 and stays flat at higher redshift. We randomly drew a main sequence or a starburst galaxy using this probability.

The main sequence is of course not a perfect correlation, and it has a non-negligible scatter. We followed a procedure similar to <cit.> to distribute the galaxies around the main sequence. We randomly drew the SFR of each source using a log-normal distribution, in agreement with the observational results <cit.>. Following <cit.>, the <cit.> model assumed a width of 0.15 and 0.2 dex for main sequence galaxies and starbursts, respectively. However, more recent measurements by <cit.> and <cit.> found a slightly higher width of 0.3 dex (see also <cit.>). We thus use this updated value in our simulation. The distribution of main sequence galaxies is centered on 0.87 SFR_MS, and that of starbursts on 5.3 SFR_MS <cit.>. Since a log-normal distribution centered on 1 has a mean value above unity, the center of the main sequence is set to 0.87 SFR_MS in order to recover the correct mean SFR (see <cit.> and <cit.> for more explanations).

Using follow-up observations with interferometers of submillimeter sources detected by single-dish telescopes, <cit.> showed that the brightest of these sources have multiple components (see also <cit.>). They found that the bright end of the number counts at 850 μm was significantly overestimated and that none of the single components has a SFR significantly above 1000 M_⊙/yr. The SFR distribution of 870 μm-selected galaxies measured by <cit.> drops strongly above 1000 M_⊙/yr. A rapid drop of the number density at SFR ≳ 1000 M_⊙/yr was also found by high-resolution radio observations <cit.>. For simplicity, we implemented a sharp SFR limit at 1000 M_⊙/yr: the SFR of each galaxy is redrawn until it is lower than this limit (this sampling scheme is summarized in the sketch below). Consequently, the sSFR distribution of the most massive galaxy populations is truncated at high sSFR. Wide surveys found some rare sources with a higher SFR, suggesting that this is not a sharp limit <cit.>. However, using a sharp limit rather than an exponential cut of the sSFR distribution is a reasonable assumption considering the small size of our field. The impact of this SFR limit on the number counts is discussed in Sect. <ref>. So far, the physical origin of this SFR cut is not totally clear.
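The SFR sampling described in this subsection reduces to a few lines of code. The sketch below is our own illustration (function and parameter names are ours, and sfr_ms stands for the value given by Eq. <ref>): it draws a galaxy type, applies the log-normal scatter, and enforces the SFR cap by redrawing.

```python
import numpy as np

def draw_sfr(sfr_ms, z, rng, sfr_max=1000.0):
    """Draw one SFR [Msun/yr] given the main-sequence value sfr_ms at redshift z."""
    p_sb = min(0.015 + 0.015 * z, 0.03)      # starburst fraction: 1.5% at z=0 -> 3% at z>=1
    while True:
        is_sb = rng.uniform() < p_sb
        center = 5.3 if is_sb else 0.87      # distribution centers relative to SFR_MS
        log_sfr = np.log10(center * sfr_ms) + rng.normal(0.0, 0.3)  # 0.3 dex scatter
        sfr = 10 ** log_sfr
        if sfr < sfr_max:                    # sharp SFR limit: redraw above 1000 Msun/yr
            return sfr, is_sb

rng = np.random.default_rng(42)
print(draw_sfr(sfr_ms=120.0, z=2.0, rng=rng))
```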
Such extreme objects could be Eddington-limited starbursts limited by the radiative pressure <cit.>. The cut could also be explained by the weaker boost of star formation induced by mergers in gas-rich systems <cit.>.

§.§ Spectral energy distributions and continuum fluxes

We then assigned an SED to each of our sources to derive their flux densities in a large set of instrument filters from their total infrared luminosity (L_IR). L_IR is directly derived from the SFR using the <cit.> conversion factor (1.0×10^-10 M_⊙/yr/L_⊙, after converting to the <cit.> IMF). We use the <cit.> SED library. The shape of the SEDs depends on the galaxy type (main sequence or starburst) and on the ⟨U⟩ parameter, that is, the mean intensity of the radiation field. This parameter is strongly correlated with the dust temperature <cit.>. It evolves with redshift for main sequence galaxies (see Fig. <ref>). In <cit.>, we had no data above z=2 and we assumed a flattening at z>2, since it provided a better agreement with the observed submillimeter number counts. Two new observational inputs motivated us to update the evolution of ⟨U⟩ in our simulation. In <cit.>, we measured that ⟨U⟩ continues to rise at z>2 using a stacking analysis. In addition, <cit.> showed that submillimeter number counts were overestimated because of blending effects (see the discussion in Sect. <ref>). The previous measurements favored a scenario with no evolution of ⟨U⟩ at z>2, because it produced colder SEDs and consequently higher submillimeter counts. This is no longer true with the new number counts, which are observed to be lower.

For main sequence galaxies, we used ⟨U_MS⟩(z=0) = 5, as in <cit.>. An evolution in (1+z)^α does not fit the observational data from <cit.> very well, with an overly sharp decrease with decreasing redshift at z<0.5. This artificially low ⟨U⟩ at low redshift is responsible for an excess of the bright number counts at 160 μm. We thus used another parametric form, which fits the observational data better:

log_10[⟨U_MS⟩(z)] = log_10[⟨U_MS⟩(z=0)] + α_⟨U⟩ z,

with α_⟨U⟩ = 0.25. Following <cit.>, we use a constant ⟨U_SB⟩ = 31. However, at z>3, this would lead to starbursts colder than main sequence galaxies. This behavior could be considered unphysical, and we thus assumed ⟨U_SB⟩ = ⟨U_MS⟩ at higher redshift. At z>4, we have no constraints on the evolution of ⟨U_MS⟩. Extrapolating this behavior up to z∼10 would imply unphysically high values; we thus assume a plateau in the high-redshift regime (z>4). In addition to this mean evolution, we also included a 0.2 dex scatter on ⟨U⟩, following <cit.>.

§.§ Magnification by lensing

Gravitational lensing can have a non-negligible impact on the bright submillimeter number counts because of their steepness <cit.>. At 350 and 500 μm, this effect is maximal around 100 mJy, where ∼20% of the sources are lensed. Our simulation of a 2 deg^2 field contains only six sources brighter than this threshold. The lensing thus has a relatively weak effect on the total number counts; however, it has a non-negligible impact on the number of bright red sources (see Sect. <ref>), since the fraction of lensed sources is higher at high redshift. It is therefore important to take lensing into account. For each source of our simulation, we randomly drew the magnification μ. The determination of the magnification does not include any spatial information (see Sect. <ref> for a discussion of this approximation).
For the strong lensing (μ > 2), we used the probability distribution of <cit.>, also used in <cit.>, which depends only on the redshift. We also included a simplified weak-lensing model for the other sources: we randomly drew their magnification from a Gaussian whose width and mean value are derived from <cit.>.

§.§ Limits of our simulation

Our simulation is based on the observed evolution of star forming galaxies and aims to accurately reproduce current observations of the far-infrared and (sub)millimeter Universe. However, the current version of this simulation has several limitations, which should be kept in mind while comparing it with observations. Our simulation is based on a single 2 deg^2 field. Since it is based on a dark-matter simulation, it is affected by cosmic variance beyond simple Poisson fluctuations and can contain under- or overdensities at some specific redshifts.

Our abundance-matching procedure assumes that the stellar mass of a galaxy is associated with v_pk (a proxy of the potential well of dark-matter halos or subhalos), with some scatter. During our abundance-matching procedure, we implicitly assume that main halos and subhalos follow the same relation. In addition, the probability of a galaxy to be passive at a given z depends only on its stellar mass. Our simulation thus neglects the environmental quenching observed in the most massive halos (M_halo > 10^14 M_⊙) at z<1 <cit.>. There are only 26 such halos in our simulation, and the contribution of these massive structures to the star formation density is small <cit.>. This approximation should thus only be a problem if the simulation is used to study low-redshift overdensities.

Our description of the lensing depends only on the redshift and ignores the positions of foreground sources. This treatment is thus inconsistent with the large-scale structures of our simulation: overdensities of low-z galaxy populations are associated with massive halos, which can strongly magnify high-redshift sources <cit.>. A fully consistent treatment of lensing is beyond the scope of this paper, and we thus chose a purely probabilistic treatment of the lensing magnification. The impact of this simplification should be small for most statistics, but spatial correlations between bright lensed sources and their neighbors could be significantly affected.

Finally, at z>4, our simulation relies on extrapolations of relations calibrated at lower redshift. The SEDs used in our simulation evolve only up to z=4. At higher redshift, we assume no evolution, due to the lack of constraints. Potentially, the SEDs could become even warmer because of the effect of the CMB <cit.>. However, the temperature of our SEDs is higher than the dust temperature assumed in these studies (∼40 K versus ∼15 K). The CMB effect should thus be much smaller than estimated in these studies, but it might be non-negligible for the highest-redshift objects of our simulation. The evolution of the sSFR in our simulation is based on the <cit.> relation, which is derived from z<4 data. The scatter of the main sequence is also assumed to be constant with mass and redshift, since there is currently no evidence of the contrary. Finally, the evolution of the parameters of the stellar mass function is extrapolated at z>6. The predictions of our simulation at z>4 should thus be taken with caution.

Our simulation currently contains only the far-infrared and millimeter observables, and we thus assumed in Sect. <ref> that L_IR traces the total star formation.
However, in low-mass and high-redshift galaxies, the fraction of UV photons escaping the galaxies can be non-negligible. The impact of neglecting unobscured star formation on the infrared observables was discussed extensively in <cit.>. They showed that the scatter of the infrared excess (IRX = L_ IR / L_ UV) has a negligible impact on infrared observables. In contrast, the increase of IRX with increasing M_⋆ implies that low-mass objects have a smaller fraction of their UV reprocessed by dust, and the faint-end slope of the number counts would be slightly steeper if we included the UV <cit.>. The impact of these lower number counts on the confusion noise is small (<5 %), since the shot noise is proportional to ∫ (dN/dS) S^2 dS, where S is the flux density and dN/dS are the differential number counts.

§ NUMBER COUNTS AND MULTIPLICITY OF SOURCES DETECTED BY SINGLE-DISH INSTRUMENTS In this section, we demonstrate that our simulation is able to reproduce the observed number counts, when the effects of angular resolution on source extraction are properly taken into account. We also discuss in particular the multiplicity of Herschel/SPIRE and NIKA2 sources and the bias caused by clustering on stacking measurements.

§.§ Simulating the observational process <cit.> showed that the 850 μm sources found by single-dish telescopes are often blends of several sources. The same phenomenon could also impact other single-dish observations, especially Herschel. We thus compare the measurements with both the intrinsic number counts from our simulated catalog and the number counts extracted from simulated maps.

We built simulated Herschel maps from our simulated catalog. We used Gaussian beams with full widths at half maximum (FWHM) of 5.5, 6.5, 11, 18.2, 24.9, and 36.3 arcsec at 70, 100, 160, 250, 350, and 500 μm, respectively, corresponding to the measured sizes of the Herschel beams. We did not include any instrumental noise, since we are only interested in the effect of angular resolution. The faint sources in the simulated map are responsible for the confusion noise. We measured a confusion noise of 6.0, 6.5, and 6.0 mJy at 250, 350, and 500 μm, respectively. This is compatible at 2 σ with the measurements of <cit.>, who found 5.8±0.3, 6.3±0.4, and 6.8±0.4 mJy, respectively.

We extracted the sources from the Herschel maps using FASTPHOT <cit.>. This routine uses source positions from another wavelength as a prior to deblend their flux. A large fraction of Herschel catalogs were produced using the positions of 24 μm sources as priors <cit.>. Photometry routines using positional priors do not converge when too many sources are located in the same beam, because of degeneracies. We thus kept only the brightest 24 μm source within a 0.5 FWHM radius in our list of prior positions. Finally, even if catalogs extracted using positional priors are not affected by flux boosting, they are still affected by the Eddington bias. This bias appears when a steep flux distribution is convolved with measurement uncertainties (see <cit.>). We estimated the correction factor following <cit.>: we started from the flux distribution measured in the map, added random Gaussian noise to each flux, and compared the flux distributions before and after adding this noise. The photometric noise is estimated using the standard deviation of the residual map.
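The Eddington-bias estimate described above amounts to a simple Monte-Carlo procedure. A minimal sketch (the function name, binning, and number of realizations are illustrative choices, not our actual pipeline) is:

import numpy as np

def eddington_ratio(measured_flux, sigma_phot, bin_edges, n_real=100, rng=None):
    """Ratio of binned flux distributions after/before adding Gaussian noise;
    the measured counts are divided by this ratio to correct for the
    Eddington bias. sigma_phot is the standard deviation of the residual map."""
    rng = np.random.default_rng() if rng is None else rng
    before, _ = np.histogram(measured_flux, bins=bin_edges)
    after = np.zeros(len(bin_edges) - 1)
    for _ in range(n_real):
        noisy = measured_flux + sigma_phot * rng.standard_normal(measured_flux.size)
        after += np.histogram(noisy, bins=bin_edges)[0]
    return (after / n_real) / np.maximum(before, 1)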
Contrary to λ ≤ 500 μm, we cannot use 24 μm priors to extract the sources at 850 μm and 1.2 mm, since the 24 μm emission no longer probes the dust emission (>8 μm rest-frame) at the typical redshift of the sources detected at these wavelengths (see Sect. <ref>). We thus blindly extracted the >5 σ peaks in our simulated map. This task is relatively easy, since there is no instrumental noise in our simulations. We then measured the flux densities of the detected sources using FASTPHOT and deboosted the fluxes following <cit.>. Single-dish observations were performed with various angular resolutions. We chose to use the resolution of the James Clerk Maxwell Telescope (JCMT, 15-meter diameter). This choice was guided by the fact that the most recent single-dish surveys at 850 μm and 1.1 mm / 1.2 mm were performed with this telescope. Usually, ground-based (sub)millimeter maps are convolved with a Gaussian of the size of the beam before extracting the sources. This technique is optimal for extracting point sources in noise-limited maps, but it increases the confusion and blending problems. The convolved map is usually called a beam-smoothed map. We thus produced beam-smoothed simulated maps, with an effective FWHM after convolution of 21 and 26 arcsec at 850 μm and 1.1 mm, respectively. We used the beam at 1.1 mm instead of 1.2 mm, since the most accurate number counts were measured at this wavelength with the AzTEC camera <cit.>.
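For reference, the map-making used in these tests reduces to painting the catalog on a pixel grid and convolving with a Gaussian beam. The sketch below is a simplified stand-in for our actual procedure (it assumes periodic boundaries, nearest-pixel source placement, and mJy-per-pixel units); the confusion noise quoted above is then simply the standard deviation of the resulting noiseless map:

import numpy as np

def make_simple_map(x_arcsec, y_arcsec, flux_mjy, fwhm_arcsec,
                    map_size_arcsec, pix_arcsec):
    """Paint catalog sources on a grid and convolve with a Gaussian beam."""
    n = int(map_size_arcsec / pix_arcsec)
    image = np.zeros((n, n))
    ix = np.clip(np.floor(x_arcsec / pix_arcsec).astype(int), 0, n - 1)
    iy = np.clip(np.floor(y_arcsec / pix_arcsec).astype(int), 0, n - 1)
    np.add.at(image, (iy, ix), flux_mjy)          # delta map of the sources
    # FFT-based convolution with the Gaussian beam
    sigma_pix = fwhm_arcsec / (2.0 * np.sqrt(2.0 * np.log(2.0))) / pix_arcsec
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    beam_ft = np.exp(-2.0 * (np.pi * sigma_pix) ** 2 * (fx ** 2 + fy ** 2))
    return np.fft.ifft2(np.fft.fft2(image) * beam_ft).real

# e.g., for a 2 deg (7200 arcsec) field at 500 um (FWHM = 36.3 arcsec):
# smoothed = make_simple_map(x, y, flux, 36.3, 7200.0, 6.0)
# sigma_confusion = np.std(smoothed)   # confusion noise (no instrument noise)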
§.§ Spitzer and Herschel number counts The comparison between our simulation and the observed number counts is presented in Fig. <ref>. The intrinsic number counts in our simulated catalog (black solid lines) agree well with the data overall. However, there are some tensions at specific wavelengths and flux regimes. The number counts at 70 μm in our simulation (both intrinsic and extracted from the simulated maps) are 2 σ too high at the bright end compared with the <cit.> Spitzer measurements, but agree within 1 σ with the Herschel/PACS measurements of <cit.>. The intrinsic faint-end slope (<2 mJy) of the PACS number counts (70, 100, and 160 μm) is less steep in our simulation than in the observations, but the number counts recovered after source extraction in our simulated map (red solid lines, Sect. <ref>) agree with the observations. Using advanced source-extraction techniques, Jin et al. (in prep.) also found that the published PACS number counts are underestimated.

Below 5 mJy, the intrinsic Herschel/SPIRE number counts (250, 350, and 500 μm) are 2 σ higher than the constraints derived by stacking by <cit.> and by P(D) analysis by <cit.>. These constraints come essentially from the GOODS fields, which are deep but small and thus strongly affected by cosmic variance. For instance, only the S<5 mJy data points of <cit.> come from GOODS-N. The S>5 mJy data points are dominated by COSMOS, which probes a much larger volume than the GOODS fields, and agree well with our simulation at 250 μm. In addition, the pixel histograms of the COSMOS maps, that is, the P(D), which is very sensitive to the number of faint sources (see Sect. <ref>), agree well with our simulation.

The main disagreement between intrinsic and measured number counts is located between 5 mJy and 50 mJy at 350 μm and 500 μm, where the simulation is a factor of 2 below the measurements. In contrast, the number counts extracted from the simulated maps (red solid lines) agree well with the observations. The resolution thus has a strong impact on the bright Herschel/SPIRE number counts, and models should be compared with observations only after these resolution effects have been simulated. Consequently, models adjusted directly on the observed number counts potentially overestimate the number of bright dusty star forming galaxies.

The SCUBA2 camera observed deep fields at 450 μm with an 8 arcsec angular resolution <cit.>. In Fig. <ref>, these data points are shown using yellow, orange, and brown colors. We did not attempt to correct for the slightly different wavelength, since the 450 μm/500 μm color varies strongly with redshift. The latest data points of <cit.> agree very well with the intrinsic number counts of our simulation. This is not surprising, because the much better resolution of SCUBA2 compared with SPIRE limits the effect of resolution on the number counts. Our simulation also agrees well with <cit.> and <cit.>. The <cit.> measurements have a 3 σ excess between 10 and 20 mJy and disagree with both the previously quoted measurements and our simulation.

§.§ Ground-based (sub)millimeter number counts Contrary to the number counts at λ ≤ 500 μm, the number counts at 850 μm and 1.2 mm were measured with both interferometers and single-dish telescopes. <cit.> (see also <cit.>) showed that number counts derived using low- and high-angular-resolution data are inconsistent. These wavelengths are thus essential to test the ability of our simulation to consistently describe these resolution effects. The comparison between our simulation and the observed number counts at 850 μm and 1.2 mm is presented in Fig. <ref>. In order to homogenize these data taken at heterogeneous wavelengths, we applied a multiplicative factor of 0.8 to the 1.1 mm data to convert them to 1.2 mm and a factor of 1.07 to the 870 μm data to convert them to 850 μm. These factors are derived using our main sequence SED template at z=2 and are only weakly redshift dependent.

At 850 μm, our model agrees well with the sub-mJy number counts extracted by <cit.> using ALMA calibration observations. Above 1 mJy, we have access to two types of constraints: single-dish measurements (in orange, <cit.>) and interferometric follow-up of these bright single-dish sources (blue and purple, <cit.>). As explained in <cit.>, the number counts derived from the interferometric follow-up of bright sources are lower than the number counts extracted directly from single-dish data, because the flux density of some single-dish sources comes from several galaxies. Our intrinsic number counts agree perfectly with the interferometric data. The number counts extracted from the simulated map agree well with <cit.>, but are slightly lower than <cit.>.

At 1.2 mm, our intrinsic number counts are in good agreement with the deep blank ALMA fields <cit.>. The number counts extracted from the simulated single-dish maps agree perfectly with <cit.> and are 1 σ higher than <cit.>. <cit.> used a mix of ASTE and JCMT data; the angular resolution is thus similar to that used in our simulated map. The <cit.> data were taken with the IRAM 30-meter telescope and thus have an angular resolution two times higher, which explains why these measurements are significantly below the number counts extracted from the simulated map. In contrast, they agree well with the intrinsic number counts of the simulated catalog.

We thus managed to simultaneously reproduce the interferometric and single-dish number counts at 850 μm and 1.2 mm, together with those from Herschel.
This reconciles the observations at low and high angular resolution and highlights the importance of taking into account both clustering and resolution effects when modeling the evolution of dusty galaxies.

§.§ Multiplicity of single-dish sources As we showed in Sects. <ref> and <ref>, the number counts at λ ≥ 350 μm derived from single-dish observations are severely affected by the limited angular resolution of the instruments. We thus expect the flux density of bright single-dish sources to be emitted by several galaxies. This phenomenon has been well studied at 850 μm from both an observational and a theoretical point of view <cit.>. In contrast, it is much less explored for Herschel sources, because of the difficulty of observing with interferometers from the ground below 850 μm. In this paper, we present the results of our simulation for the Herschel sources and predictions for the new NIKA2 camera at IRAM <cit.>.

For each single-dish source extracted from the simulated map with the method described in Sect. <ref>, we searched our simulated catalog for the brightest galaxy in the beam. We used a search radius of 0.5 FWHM, since the brightest galaxy is usually close to the center of the single-dish source (<0.15 FWHM on average for Herschel and NIKA2 data) and we want to avoid selecting a galaxy contributing to another nearby single-dish source. We then computed the ratio between the flux density of this brightest galaxy in our simulated catalog and the flux density of the single-dish source measured in our simulated map. In Fig. <ref>, we show the average ratio as a function of the measured single-dish flux density.

We also estimated the fraction of the flux density emitted by other galaxies at a redshift similar to that of the brightest galaxy. We defined a redshift as similar if |Δz| < 0.01. This value was determined using the histogram of the difference between the redshift of the brightest source and that of the other sources in the beam. This histogram has a very sharp peak around Δz = 0, with a FWHM of 0.0072, 0.0064, and 0.0047 for SPIRE 250 μm, SPIRE 500 μm, and NIKA2 1.2 mm, respectively. Our |Δz| < 0.01 criterion thus corresponds to at least 3 σ. We then computed the contribution of these physically related sources to the single-dish flux density. The easiest way to proceed would be to sum the flux densities of all the galaxies at the same redshift and closer than a given distance. Unfortunately, this definition is problematic, since the result depends significantly on the chosen search radius. We thus chose the following alternative method. For every galaxy at the same redshift, we computed its contribution at the center of the single-dish source by multiplying its flux density in the simulated catalog by exp(-d^2 / 2 σ_ beam^2), where d is the distance between the galaxy and the center of the single-dish source and σ_ beam is the size of the Gaussian beam. We finally divided the sum of the contributions of all these sources at the same redshift as the brightest galaxy by the single-dish flux density measured in the simulated map. The results are presented in Fig. <ref>.
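This beam-weighted summation can be sketched as follows (the function and argument names are hypothetical; the inputs are the simulated catalog positions, flux densities, and redshifts):

import numpy as np

def same_z_fraction(src_xy, src_flux_map, gal_x, gal_y, gal_flux, gal_z,
                    z_bright, fwhm_arcsec, dz_max=0.01):
    """Fraction of a single-dish flux density contributed by galaxies at the
    same redshift as the brightest galaxy (|dz| < dz_max), each weighted by
    the Gaussian beam response exp(-d^2 / 2 sigma_beam^2) at its offset."""
    sigma_beam = fwhm_arcsec / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    d = np.hypot(gal_x - src_xy[0], gal_y - src_xy[1])
    sel = np.abs(gal_z - z_bright) < dz_max
    contrib = np.sum(gal_flux[sel] * np.exp(-d[sel] ** 2 / (2.0 * sigma_beam ** 2)))
    return contrib / src_flux_map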
At 250 μm, 80 to 90 % of the flux density is emitted by the brightest galaxy. <cit.> found ∼50 % based on a Bayesian source-extraction method using shorter-wavelength priors (black downward-facing triangles). These results could seem to contradict our analysis. However, they used a very different definition of the flux density fraction. They divided the flux density of the brightest galaxy by the sum of the flux densities of all the galaxies within a 1 FWHM radius. In our simulated catalog, this sum is larger than the flux density measured in the simulated map. Indeed, the numerous faint sources are responsible for a background <cit.>, which is removed by photometric tools and thus does not contribute to the flux densities measured in our simulated maps. In addition, galaxies at 1 FWHM from a Herschel source can contribute to another nearby single-dish source. Using the same method as <cit.>, we find a similar value of 50 %. However, the trend with flux density is different: we find a rising trend, while they find a decreasing one. Their observational method is based on several important assumptions, and only high-resolution far-infrared observations will allow us to identify which are the most reliable. Finally, we estimated the average contribution of the other sources at the same redshift and found 5 %. The sum of the flux-density fractions of the brightest galaxy and of the other galaxies at the same redshift remains smaller than unity. There is thus a significant contribution to SPIRE 250 μm sources from galaxies at redshifts different from that of the brightest galaxy.

At 500 μm, resolution effects are much stronger and, on average, only 58 % of the flux density comes from the brightest galaxy. At 70 mJy, this fraction is compatible with unity. The >60 mJy SPIRE sources are essentially local star-forming objects and lensed galaxies, which are sufficiently bright to be detected by themselves. The clustering of nearby objects is weaker than at high redshift. The contrast between a magnified source and its unlensed environment is also high. This explains why these galaxies suffer a smaller contamination from other galaxies when their flux density is measured with a single dish. The contribution to the measured flux density from physically related neighbors is ∼10 % between 20 mJy and 40 mJy and decreases to 5 % for these bright sources, in agreement with our understanding.

At 1.2 mm, we produced predictions for the NIKA2 camera <cit.>; these data will be less affected by resolution effects. The contribution of the brightest galaxy to the NIKA2 sources is ∼95 % at all flux densities, except the faintest ones, which are close to the confusion limit. The resolution effects will be smaller with NIKA2, essentially because of the smaller beam (∼12 arcsec). The contribution from galaxies at the same redshift is ∼5 %, and this fraction does not evolve significantly with flux density. At all flux densities, the brightest galaxy and the other galaxies at the same redshift are together responsible for at least 97 % of the single-dish flux density measured in our simulated map. The contamination by low-redshift galaxies will thus be much smaller than with Herschel, because they are observed far from their peak of emission.

§.§ The importance of a SFR limit In Sect. <ref>, we introduced a SFR limit at 1000 M_⊙/yr. The impact on the number counts below 500 μm is moderate: the model without SFR limit slightly overproduces the number of sources above 100 mJy. On the contrary, the impact is much stronger at 850 μm, as shown in Fig. <ref>. This is not surprising, because longer wavelengths are dominated by higher redshifts, where the sSFR is on average higher and more sources are thus affected by this limit. The models with and without SFR limit start to diverge at 4 mJy.
Above 10 mJy, the model without SFR limit is 5 σ above the counts of <cit.>, which should already be taken as upper limits, since they are extracted from single-dish observations. The version of the model without SFR limit is thus clearly ruled out, proving a posteriori the necessity of introducing this threshold.

The SFR limit used in our simulation is an effective way to obtain number counts at the bright end in agreement with the submillimeter observations. Other modifications could have produced similar number counts. Without the SFR limit, the S_850 > 10 mJy galaxies in our simulation are massive (⟨M_⋆⟩ = 8.6×10^10 M_⊙) and at relatively high redshift (⟨z⟩ = 2.9). A smaller number density of massive star-forming galaxies could thus have a similar impact on the number counts. Because of the steepness of the SMF at the high-mass end, the uncertainties on the stellar mass measurements could produce an artificial excess of massive objects (an effect similar to the Eddington bias). However, this effect was taken into account by <cit.> in their fit of the SMF. Some massive passive galaxies could also have been wrongly classified as star forming. However, the main sequence measured by stacking of star forming galaxies would then also be lower. Finally, the boost of star formation in starbursts (5.3 in our simulation, following <cit.>) could be lower in massive galaxies at z>2. A lower boost and a SFR limit causing a truncated sSFR distribution are very hard to disentangle with the current data. We thus chose the SFR-limit solution for its simplicity.

The infrared luminosity functions at z>2 measured with Herschel contain objects above 10^13 L_⊙ (SFR > 1000 M_⊙/yr), even if their density drops quickly above this luminosity <cit.>. We showed in Sect. <ref> that the SPIRE fluxes could be overestimated because of resolution effects. This could propagate to the luminosity function, as discussed in Sect. <ref>. Most of the interferometric follow-up observations of these highly star-forming objects were performed at λ > 850 μm. Future ALMA band-9 observations (450 μm) would thus be valuable for confirming the measurements of their obscured SFR.

§.§ Impact of clustering on stacking analysis Since confusion limits the detection of faint individual galaxies with single-dish instruments, a large fraction of the far-infrared and millimeter observables were measured by stacking analysis. Stacking analysis can also be biased by clustering effects. Since galaxies are clustered, there is a higher probability of finding a source in the beam of a stacked source than at a random position <cit.>. Consequently, the average flux density of a galaxy population measured by stacking tends to be biased toward higher values. This bias has been extensively discussed in the literature, and various methods have been proposed to correct for it <cit.>.

Our simulation is built using two observational studies based on stacking: the evolution of the main sequence measured by <cit.> and the evolution of the SEDs presented in <cit.>. These results were corrected for the clustering bias using empirical approaches. Since they are key elements in the calibration of our simulation, we checked that these empirical corrections are consistent with the biases we measure in our simulation. We discuss only the Herschel/SPIRE data, since shorter wavelengths have a negligible bias (<10 %, <cit.>).
As detailed in Appendix <ref>, our values of the excess of flux density caused by clustered neighbors agree well with the estimates of <cit.>: 13±1 % in our simulation versus 14_-9^+14 % in theirs at 250 μm, 21±1 % versus 22_-14^+19 % at 350 μm, and 34±1 % versus 39_-23^+22 % at 500 μm. In <cit.>, we used a redshift-dependent correction estimated using two different techniques, which also agrees with our simulation, as explained in Appendix <ref>. The observables derived from a stacking analysis corrected for clustering are thus, paradoxically, more reliable than the statistical properties derived from catalogs of individually detected sources.

§ REDSHIFT-DEPENDENT OBSERVABLES AND CONSEQUENCES ON THE STAR FORMATION HISTORY In this section, we compare the results of our simulation with redshift-dependent observables (redshift distributions, number counts per redshift slice) and discuss the impact of these results on the determination of the obscured star formation history.

§.§ Comparison with observed redshift distributions In our simulation, we implemented significant modifications compared to the <cit.> version of the model, such as the updated evolution of the SEDs and of the SFR-M_⋆ relation. We thus checked that this updated model correctly reproduces the observed redshift distributions in Fig. <ref> (see <cit.> for a detailed discussion of the modeling of redshift distributions). There is an overall good agreement between the intrinsic redshift distributions in our simulation and the measured ones from 100 μm to 1.1 mm.

However, the measurement of redshift distributions is a complicated task, which requires identifying the galaxy responsible for the main fraction of the far-infrared or submillimeter flux and measuring its redshift. Various methods can be used. The procedures based on high-resolution follow-up are difficult to reproduce with our simulation. In contrast, our simulation is perfectly suited to testing the prior-based source extraction, which was used to derive the Herschel redshift distributions <cit.>. In Fig. <ref>, we compare the intrinsic redshift distribution in our simulated catalog with the redshift distribution of the sources extracted from our simulated map using 24 μm positions as priors, as described in Sect. <ref>. At 250 μm, the two distributions are very similar, showing that this extraction technique does not bias the results. We obtained similar results at shorter wavelengths with Herschel/PACS. At 350 μm, we found that the redshift distribution derived from the sources extracted from the simulated map is slightly biased toward lower redshifts compared to the intrinsic distribution. At 500 μm, this bias becomes stronger. As discussed in Sect. <ref>, the flux density of 500 μm sources is emitted by several galaxies. In addition, the 24 μm/500 μm color varies more with redshift than the colors between 24 μm and shorter wavelengths. The brightest 24 μm galaxy in a 500 μm beam is thus not systematically the main contributor to the 500 μm flux density. In conclusion, the Herschel redshift distributions extracted using 24 μm positional priors are accurate only up to 250 μm. At longer wavelengths, other methods must be used.

In <cit.>, we used a prior-based extraction based on both the 24 μm flux density and the redshift. Instead of directly selecting the brightest 24 μm source within a 0.5 FWHM radius as an input for FASTPHOT, we predicted the 500 μm flux density from the 24 μm flux density and the redshift, and kept in the prior list the galaxy with the highest predicted 500 μm flux density within a 0.5 FWHM radius. For the prediction, we used the average colors measured by stacking. We thus kept more high-redshift sources in the prior list. This approach agrees with the intrinsic redshift distribution in our simulation (Fig. <ref>), but not with the extracted one. This highlights that more advanced prior-based source-extraction techniques could be sufficient to derive accurate redshift distributions from confusion-limited maps. Our simulation will be particularly useful for validating future studies of redshift distributions.
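This prior-selection scheme can be sketched as a greedy selection on the predicted 500 μm flux densities (a simplified illustration, not the exact algorithm of <cit.>; `color_of_z` stands for an assumed interpolation of the stacked average 24 μm to 500 μm colors):

import numpy as np

def build_500um_priors(x, y, s24, z, color_of_z, fwhm_500=36.3):
    """Keep, within each 0.5*FWHM radius, only the galaxy with the highest
    predicted 500 um flux density, S500_pred = S24 * color(z).
    Returns the indices of the retained prior sources."""
    s500_pred = s24 * color_of_z(z)
    order = np.argsort(s500_pred)[::-1]      # brightest predicted first
    kept = []
    r_min = 0.5 * fwhm_500
    for i in order:
        if all(np.hypot(x[i] - x[j], y[i] - y[j]) > r_min for j in kept):
            kept.append(i)
    return np.array(kept, dtype=int)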
§.§ Number counts per redshift slice In Fig. <ref>, we compare the results of our simulation with the number counts per redshift slice measured with PACS <cit.> and SPIRE <cit.>. This observable is very close to the monochromatic luminosity functions <cit.>, but is not affected by the assumptions made about the K-corrections, which are necessary to determine the luminosity functions. The full observational process used to measure the number counts per redshift slice is thus easier to simulate. There is an overall good agreement between 70 and 160 μm at z<2. At z>2, our simulation underpredicts the source counts at 70 μm and 100 μm by factors of 5 and 2.5, respectively. The number counts in our simulated catalog and after simulating the full source-extraction procedure are similar; this is thus not a problem caused by the resolution. The most likely explanation is contamination by active galactic nuclei (AGNs), since these Herschel bands at z>2 correspond to <23 and <33 μm rest-frame. Indeed, we did not implement the contribution of AGNs to the mid-infrared emission in our simulation, which focuses on the far-infrared and millimeter domain. However, at these wavelengths, 99 % of the sources lie at z<2, and the AGN contribution to the SEDs thus has a negligible impact on the global statistical properties of the galaxies.

At 250 μm, the number counts in our simulation at z<2 agree well with the observations, and there is no significant difference between the intrinsic number counts and those extracted using 24 μm priors. At z>2, the intrinsic counts underpredict the observations at the bright end, but the number counts extracted from the simulated map agree well with the data. At 350 μm and 500 μm, the intrinsic number counts are systematically below the observations at z>0.5, but the number counts extracted from the simulated maps agree better with the data. However, the number counts extracted from the simulated map tend to be lower than the observations at z>2 and higher at z<0.5. We note, however, that the source extraction from the simulated map was done using only the 24 μm position as a prior and is thus slightly biased toward low redshift, as shown in Sect. <ref>.

§.§ Consequences on the obscured star formation history In the previous sections, we have shown that the flux densities of individually detected sources (Sect. <ref>) are biased toward higher values because of angular-resolution effects, while stacking-derived observables were already corrected for clustering effects.
Since the peak of the far-infrared emission of galaxies is around 100 μm rest-frame, Herschel/SPIRE data are essential to derive accurate obscured SFRs, but they are unfortunately affected by these resolution effects. They are also limited by confusion, and only sources brighter than ∼20 mJy can be extracted reliably from the maps. These bright, individually detected sources play an important role in understanding the evolution of massive systems, but they contribute only marginally to the global star formation budget. At 250 μm, the resolution has an impact only at z>2 (Fig. <ref>). In our simulated catalog, at z>2, the S_250 > 20 mJy galaxies contribute only 2.5 % of the obscured star formation density. At 350 and 500 μm, the galaxies brighter than 20 mJy host only 2.9 and 1.4 % of the SFRD, respectively, at z>2. At these fluxes, the excess of flux density caused by the resolution effects (see Sect. <ref> and Fig. <ref>) is 21, 46, and 96 % at 250, 350, and 500 μm, respectively. It is hard to propagate this effect to the estimate of the total infrared luminosity density and star formation rate density (SFRD), since it requires combining several wavelengths. However, even in the worst-case scenario of using only 500 μm as a SFR estimator, the excess of SFRD caused by sources brighter than 20 mJy would remain below 10 %. This effect thus remains below the systematic uncertainties associated with the extrapolation of the contribution of the faint sources.

We checked whether the SFRD in our simulation agrees with other estimates from the literature. In Fig. <ref>, we compare the obscured SFRD from our simulation with the latest observations compiled by <cit.>. Our simulation agrees well with both the IR- and UV-derived measurements up to z∼3. This confirms that the impact of resolution effects on the global star formation budget is minor.

At z>3, our simulation is 2 σ higher than the measurements of <cit.> derived from Herschel observations. However, they have only three data points at L_ IR > 10^12.5 L_⊙ and had to make strong assumptions about the faint-end slope of the luminosity function. In contrast, our simulation is 0.5 σ and 2 σ lower than the estimates of <cit.> at z=4 and z=6, respectively. There is thus significant tension between the various estimates of the obscured SFRD at z>3. Our simulation agrees with the measured redshift distributions and the deep millimeter counts, and is thus compatible with the current non-extrapolated data. These differences between studies highlight how uncertain the obscured star formation history at z>3 remains. Concerning the observations derived from dust-corrected UV, our simulation agrees with the <cit.> measurement at z>5, but is 50 % higher at z∼4. This suggests that a fraction of the star formation might have been missed at this redshift by optical surveys. Future wide and deep millimeter surveys with NIKA2 at IRAM and with the Large Millimeter Telescope (LMT) will be essential to confirm or refute this result.

§ ONE- AND TWO-POINT MAP STATISTICS In addition to the statistical properties of the sources, we checked the agreement of our model with map statistics. This is particularly important for the SPIRE data, which have a limited angular resolution.

§.§ Pixel histograms: P(D) The distribution of the surface brightness in the pixels of a map, P(D), is directly connected to the number counts of the objects in this map <cit.>.
This method was used to measure the faint source counts with, for example, Bolocam <cit.>, LABOCA <cit.>, BLAST <cit.>, and Herschel <cit.>. As discussed in <cit.> and <cit.>, clustering could impact P(D) analyses. However, when these analyses were performed, the simulations used did not include clustering, which we now know to be critical. We can now investigate this effect using our new simulation.

In Fig. <ref>, we compare the pixel histograms of the simulated maps and of real Herschel maps. We used the COSMOS maps[<http://hedam.lam.fr/HerMES/>] from the HerMES survey <cit.>, which match the size of our simulation. We used the real noise maps released by the HerMES team to generate a similar Gaussian instrument noise in our simulated maps. In order to evaluate the impact of clustering, we produced another simulated map without clustering by randomly reshuffling the positions of the galaxies. At 250 μm, the clustering has an impact of less than 5 %, and both the clustered (red) and unclustered (blue) simulated maps agree at the 5 % level with the observed histogram. This is a very good agreement considering the 4 % calibration uncertainty of SPIRE <cit.>. At 350 and 500 μm, the effect of clustering is much larger because of the larger beam and can reach 15 %. The clustered maps agree well with the observed ones, but the randomized maps show a large excess at the peak. This shows that clustering has a non-negligible effect on P(D) analyses and must be taken into account. This could explain why the P(D) analysis of <cit.> agrees with the individual source counts even though they are biased high compared to the intrinsic counts (see Fig. <ref>).

§.§ Anisotropies of the cosmic infrared background: P(k) Measuring the clustering of individually detected populations in confusion-limited data is difficult. The sample sizes remain too limited to obtain good statistics <cit.>. The contamination of the fluxes by neighbors tends to introduce artificial correlations between redshift slices and to bias the measurements <cit.>. The power spectrum of the CIB anisotropies, which is not affected by this problem, is currently the best way to constrain how star formation is distributed in dark-matter halos <cit.>. CIB anisotropies are a powerful observable to test whether the model simultaneously reproduces the infrared emission and the spatial distribution of galaxies.

In Fig. <ref>, we compare the power spectrum measured with Herschel <cit.> (black diamonds) and in our simulation (blue solid lines). In order to reduce the Poisson noise, the brightest sources are usually masked or subtracted from the maps. We chose to use a S_ν,cut = 50 mJy flux density cut, which is the deepest cut used by <cit.>. We also included the Planck data at 857 GHz (350 μm, red triangles). We shifted these data by the difference of the Poisson noise in our simulations between a flux density cut of 50 mJy and one of 710 mJy (used by Planck). Finally, as shown by <cit.>, the Herschel/SPIRE 500 μm absolute flux calibration is 4.7 % too high compared to Planck. We thus corrected the <cit.> data points accordingly. To accurately measure the power spectrum, we generated a map without convolving it with the PSF and without including the sources above the flux cut. To be fully consistent with the observational process, we produced a map using the SPIRE spectral response to extended emission to measure the power spectrum, but we used the flux densities from the point-source spectral response to select the sources to put in the map (see Lagache et al. in prep. for a detailed discussion). We measured the power spectrum from these simulated maps using the POKER software <cit.>. This software accounts for the non-periodic boundary conditions of the map, which would otherwise bias the large-scale measurements. The error bars are estimated via Monte Carlo simulations of the estimated power spectrum.

At small scales (k > 0.3 arcmin^-1), the power spectrum is dominated by the shot noise from galaxies. These Poisson fluctuations of the number of galaxies in a patch of sky produce a plateau in the power spectrum, which can be derived directly from the number counts <cit.>:

σ_ Poisson^2 = ∫_0^S_ν,cut S_ν^2 (d^2N / dS_ν dΩ) dS_ν,

where S_ν is the flux density and d^2N/dS_ν dΩ are the differential number counts.
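In practice, this is a one-dimensional integral over the tabulated differential counts; a minimal sketch using the trapezoidal rule (the function name is ours, and the units of the result follow the inputs) is:

import numpy as np

def poisson_level(s, dnds, s_cut):
    """Shot-noise level sigma_Poisson^2 = int_0^Scut S^2 (d2N/dS dOmega) dS,
    evaluated with the trapezoidal rule on tabulated counts (e.g., Jy^2/sr
    for s in Jy and dnds in 1/Jy/sr)."""
    m = s <= s_cut
    y = s[m] ** 2 * dnds[m]
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(s[m]))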
At 350 and 500 μm, our simulation agrees within 1 σ with the measurements of <cit.>. At 250 μm, our simulation is systematically 1.5 σ above the measurements. Since the measurements are dominated by systematic effects (e.g., the deconvolution of the beam), their error bars are strongly correlated. This offset is thus not statistically significant. Overall, the Poisson levels in our simulation and in the real data agree.

<cit.> found a discrepancy between the Poisson level measured in the Herschel and Planck data and that derived from the measured Herschel number counts using Eq. <ref>: the Poisson levels derived directly from the number counts are higher than the measurements. This problem is solved naturally by the discrepancy between the measured and intrinsic number counts that we identified in Sect. <ref>. Indeed, the Poisson level depends on the number counts; since the observed number counts are overestimated at 350 and 500 μm because of resolution effects, the Poisson levels derived from them are also overestimated.

At 250 μm, our simulation agrees to better than 1 σ with the data at large scales (k < 0.3 arcmin^-1). In this regime, the power spectrum is dominated by the large-scale clustering of galaxies. At 350 and 500 μm, at k < 0.1 arcmin^-1, our simulation underestimates the power spectrum by 1.5 σ. Future larger simulations will allow us to determine whether this deficit is real or just a statistical fluctuation.

§ THE NATURE OF HERSCHEL RED SOURCES <cit.> and <cit.> found a large population of red Herschel sources in the HerMES survey <cit.>. They claimed that the number of sources they found is one order of magnitude higher than predicted by the models. If confirmed, these results would suggest that the models strongly underpredict the number of bright z>4 dusty star forming objects. <cit.> also found a large number of red high-redshift candidates in the H-ATLAS survey using a slightly different selection. In this section, we verify that our simulation accurately reproduces the statistics of red sources. First, we check the statistics of red sources in our 2 deg^2 simulation, which covers a small area but includes clustering (Sect. <ref>). We then investigate the statistics of red sources in a large simulated catalog without clustering (Sect. <ref>).

The criteria of <cit.> are hard to reproduce, since they involve some visual inspection. We thus focus our analysis on the results of <cit.>, who used the following criteria: S_250 < S_350 < S_500, S_500 > 52 mJy, and D = 0.92 M_500 - 0.392 M_250 > 34 mJy, where M_500 and M_250 are the values of the maps at the position of a source at 500 and 250 μm, respectively, after matching all the maps to the resolution of the 500 μm data.
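These criteria translate directly into a vectorized selection (a minimal sketch with a function name of our own choosing; all flux densities and map values in mJy):

import numpy as np

def is_red(m250, m350, m500, s500_min=52.0, d_min=34.0):
    """Asboth et al.-like red-source selection: rising SPIRE colors,
    S500 > 52 mJy, and D = 0.92*M500 - 0.392*M250 > 34 mJy, where M250 and
    M500 are map values after matching to the 500 um resolution."""
    d = 0.92 * m500 - 0.392 * m250
    return (m250 < m350) & (m350 < m500) & (m500 > s500_min) & (d > d_min)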
§.§ Simulation in map space in 2 deg^2 There is no galaxy in our 2 deg^2 simulated catalog that satisfies the <cit.> criteria. However, as we will show, some of these sources can be explained by noise fluctuations and resolution effects.

<cit.> homogenized the beams to the size of the 500 μm one. For simplicity, we directly generated the three SPIRE maps using a Gaussian beam with a FWHM of 36.3 arcsec. We then added instrumental noise using the same values as in <cit.>. The D map is generated from the 250 μm and 500 μm maps. The noise in our simulated D map is very close to that in the observations: 8.8 versus 8.5 mJy. This shows that our approximation on the beam is sufficiently good to perform our analysis of red sources.

We extracted the peaks higher than 34 mJy in the D map and measured the photometry on the maps at the native SPIRE resolution, using the flux in the central pixel of the source after subtracting the mean of the map. The number of detected red sources varies depending on the realization of the noise. We thus used 1000 realizations to estimate the mean number of detected sources. This estimate does not take into account the cosmic variance. We found 1.7_-0.9^+1.9 red sources in our 2 deg^2 field, which corresponds to 229_-121^+258 sources in 274 deg^2. This agrees within 1 σ with the 477 detections reported by <cit.>. Even if the statistics are very limited, our results indicate that there might be no real tension between models and observations of red sources.

§.§ Red sources in a 274 deg^2 catalog Our simulation covers only 2 deg^2 and thus provides limited statistics compared to <cit.>, who used 274 deg^2. We thus generated another simulated catalog based on the same prescriptions and covering the same area of 274 deg^2 as <cit.>. Since we do not have a sufficiently large dark-matter simulation, we had to ignore clustering and draw the sources directly from the stellar mass function. With this simplified method, we could potentially underestimate the number of red sources, since we neglect the boosting of the flux of massive high-redshift dusty galaxies by their neighbors. This effect is potentially important because of the strong clustering of the most star-forming galaxies at z>2 <cit.>. We generated only sources with M_⋆ > 10^10 M_⊙ and z>1 to save memory, because no source with a lower mass can be sufficiently bright and no source below z=1 can be sufficiently red to pass the red-source selection. We did not produce maps, so we computed D from the flux densities (D = 0.92 S_500 - 0.392 S_250). We have only 18 objects satisfying the criteria of <cit.> in our simulated catalog, compared to the 477 objects in the Herschel data. All of these 18 sources are strongly lensed (μ > 2). Our simulation contains 439 256 non-lensed sources with S_250 < S_350 < S_500, but none of them is sufficiently bright to satisfy S_500 > 52 mJy. The number counts of red sources at 500 μm are shown in Fig. <ref>. While the counts from our simulation above 100 mJy are close to the data, they are well below the data at fainter flux densities. The problem thus does not come from the lensing, but from a lack of intrinsically bright sources. If we remove the SFR limit in our simulation, we find 205 non-lensed sources matching the Asboth et al. criteria (676 if we add noise to the simulated catalog, see the next paragraph). However, this SFR cut is necessary for consistency with the 850 μm number counts (see Sect. <ref>). Another explanation must thus be found to understand this discrepancy.

The results described in the previous paragraph ignore the effect of noise on the number counts of red sources.
We thus simulated the effect of both confusion and instrumental noise by adding a random Gaussian noise to the flux densities of the simulated sources. We used the noise values provided in Table 1 of <cit.>. This method produces a higher noise on D than in the real D map, since the confusion noise at 250 and 500 μm tends to partially cancel out in the real maps. Because of the very steep color distribution and number counts, the noise strongly increases the number of red sources (see Fig. <ref>). We found 168 sources matching the Asboth et al. criteria (74 with only instrumental noise), of which only 29 % are strongly lensed (60 % with only instrumental noise). Noise thus plays an important role in producing red sources without strong lensing. Concerning the weakly lensed sources, the weakly lensed red sources are on average 6 % more magnified than the mean magnification at z>2. Red-source selections are thus biased toward higher magnifications. Weak lensing acts as an additional noise on the flux density, and sources on a positive fluctuation of the magnification tend to pass the 500 μm flux density threshold more often. This is similar to the Eddington bias. Without weak lensing, the number of red sources decreases to 132. In our simulation, the noise thus strongly increases the number density of detected red sources.

These results could seem to contradict <cit.>, who found a similar number of injected and recovered red sources using an end-to-end simulation. However, we identified one potentially incorrect assumption in their simulation. Their simulation used the <cit.> model with the intrinsically red sources removed and a power-law distribution of red sources with a fixed color based on the median observed one. The flux distribution of the red sources in their simulation is based on the number of detected red sources directly extracted from the real map, which is one order of magnitude higher than what is intrinsically in the <cit.> model. The relative contribution to the extracted number counts of intrinsically non-red sources, which match the red criteria because of the noise and resolution effects, might thus be significantly underestimated because of the very high number of intrinsically red sources in their simulation. This highlights the difficulty of correcting the biases in statistical measurements of red sources using only inputs from observations.

In Fig. <ref>, we illustrate the impact of the noise on the selection of red sources. If we apply the <cit.> criteria to the intrinsic fluxes of the sources, red sources are selected at z>3. If we select red sources with the same criteria after adding noise, the number of detections at z<4 increases dramatically, while the number of z>4 detections remains almost constant (see left panel). The noise thus has a strong impact on the redshift distribution of red sources. The red sources selected in the noisy catalog usually have a measured D value and a 500 μm flux density just above the limit (vertical black dotted line in the middle and right panels of Fig. <ref>). However, their intrinsic D values and 500 μm flux densities are in general below the detection limit (⟨D⟩ = 23 mJy and ⟨S_500⟩ = 42 mJy). The intrinsically non-red sources selected because of the noise are thus not purely spurious objects. They are mostly strongly star-forming objects with a D value and an intrinsic 500 μm flux density slightly below the cut.
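The flux-perturbation experiment described above can be sketched as follows (a simplified stand-in for our actual procedure; the noise values would be taken from Table 1 of <cit.>, and the function name is ours):

import numpy as np

def mean_red_counts(s250, s350, s500, sig250, sig350, sig500,
                    n_real=100, rng=None):
    """Mean number of 'red' sources after adding Gaussian noise to the
    catalog flux densities. Here D is computed from the fluxes,
    D = 0.92*S500 - 0.392*S250, as for our 274 deg^2 catalog."""
    rng = np.random.default_rng() if rng is None else rng
    counts = []
    for _ in range(n_real):
        p250 = s250 + sig250 * rng.standard_normal(s250.size)
        p350 = s350 + sig350 * rng.standard_normal(s350.size)
        p500 = s500 + sig500 * rng.standard_normal(s500.size)
        d = 0.92 * p500 - 0.392 * p250
        red = (p250 < p350) & (p350 < p500) & (p500 > 52.0) & (d > 34.0)
        counts.append(np.count_nonzero(red))
    return float(np.mean(counts))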
The first red sources that were followed up were selected in deep fields and were usually the reddest sources in the sample <cit.>. These objects were confirmed spectroscopically to be at high redshift. However, their selection was less affected by the noise, because of their particularly red observed colors and the lower instrument noise in such fields.

§.§ The challenge of using red sources to constrain models Overall, the combination of noise and weak lensing can dramatically boost the number density of detected red sources compared to their intrinsic number density. Our 2 deg^2 end-to-end simulation including clustering is compatible with the observations, but the statistics are limited. We also built a larger simulation based on a catalog of galaxies with random positions and a random Gaussian noise. However, the number of red sources remains lower by a factor of 2.8 in this simplified simulation. This could be explained by several effects. As shown by the pixel histograms of the Herschel maps in Fig. <ref>, the combination of instrument and confusion noise has an asymmetric distribution with a large positive tail, which could be responsible for more bright 500 μm outliers than in the Gaussian case. In addition, the faint foreground sources are clustered, and a local underdensity of faint blue foreground galaxies in a beam could create an artificially red color. Finally, the model is sensitive to the value of the SFR cut, and better observational constraints should allow us in the future to determine its value or to favor one of the alternative scenarios described in Sect. <ref>.

Comparing observations of red sources with models remains complicated, because it would require running the exact same algorithm on simulated maps with clustering covering ∼100 deg^2. For wide shallow fields such as the one used by <cit.>, the noise also plays a crucial role. Applying a new deblending method to the deeper 55 deg^2 HeVICS field, Donevski et al. (to be submitted) found number counts of red sources that are an order of magnitude lower than those of <cit.>. They simulated the effect of instrument noise and confusion on our simulated catalogs and found an excellent agreement with their new observations. They also found that the noise has only a mild impact on the number counts of red sources in this deeper field. This highlights the need to perform end-to-end simulations with the same extraction algorithm and realistic noise properties in order to compare observations and models. However, these simulations remain extremely difficult to perform on large areas, because they require both a large-volume dark-matter simulation and a sufficiently high mass resolution to include the faint galaxies responsible for confusion.

In addition to these difficulties, the lensing was included in a non-self-consistent way in our simulation, since it was drawn randomly from a distribution that does not vary with the position of the source (see Sect. <ref>). Non-trivial biases can occur in color selections of lensed sources. For instance, the sources clustered with the lens can change the color of the source, since they are at lower redshift and thus bluer than the background source <cit.>. In addition, we used only a simplified Gaussian weak-lensing model. The number of sources with a magnification between 1.5 and 2 is thus underestimated compared to the numerical simulation of <cit.>. For instance, the extreme starbursts reported by <cit.> were later shown to be lensed by a small factor <cit.>.
Interpreting the statistics of red sources with models is thus a very challenging task, because of the complexity of the various artifacts affecting the selection of these objects. Direct millimeter selections such as SPT <cit.> provide more straightforward constraints for models. We should stress, however, that despite the difficulty of interpreting their number counts, red-source selections in Herschel fields are a powerful tool to build large samples for studying the physics of high-redshift, dusty star forming galaxies.

§ CONCLUSION We presented a new simulation of the far-infrared and (sub)millimeter sky called SIDES. This simulation is based on an updated version of the <cit.> phenomenological galaxy evolution model using the latest observational constraints on the stellar mass function, the main sequence of star forming galaxies, and the evolution of the SEDs. To obtain realistic clustering, we used an abundance-matching procedure to populate the dark-matter halos of a light cone constructed from the Bolshoi-Planck simulation <cit.> with the galaxies produced by our model. The intrinsic galaxy number counts in this new simulation are significantly lower than the measurements from single-dish instruments, while they agree with the interferometric data. To understand this tension between our simulation and the observations, we simulated the full source-extraction process and showed that the number counts extracted from our simulated maps agree with the observed ones. When we take into account the observational effects, our simulation is able to simultaneously reproduce the single-dish and interferometric number counts from the far-infrared to the millimeter domain, together with the redshift distributions, the CIB anisotropies, and the pixel histograms of the SPIRE maps.

Our simulation also allowed us to evaluate the impact of clustering and angular resolution on the statistical properties derived from far-infrared and (sub)millimeter surveys. We identified the following effects:

* The flux density of Herschel sources is affected by resolution effects. The brightest galaxy in the Herschel beam is responsible for ∼85 % of the flux density at 250 μm, but only ∼60 % at 500 μm. The other galaxies contributing to the Herschel flux density are both galaxies at the same redshift as the brightest one and randomly aligned galaxies. Our simulation predicts that the fraction of the flux density coming from the brightest galaxy will rise to 95 % in future millimeter single-dish surveys performed with 30 meter-class telescopes (e.g., NIKA2 at IRAM).

* Measurements using stacking are also biased by clustering. However, this bias is already known, and the corrections made by the observational studies are compatible with the corrections derived from our simulation. Paradoxically, the stacking studies, which took clustering into account, are more accurate than the observations of individually detected sources.

* The redshift distributions of Herschel sources extracted using 24 μm positional priors tend to be biased toward lower redshifts at 350 and 500 μm, but are reliable at shorter wavelengths.

* Even if the flux densities of the brightest Herschel sources tend to be overestimated, the impact of these sources on the global star formation history is small. Our simulation is compatible with the UV- and IR-derived measurements of the SFRD at z<3. At z∼3.5, our simulation predicts a higher SFRD than the measurements of <cit.> and <cit.>.
At higher redshift, our results are compatible with the constraints from the UV.

* The clustering has a significant impact on the pixel histograms of the Herschel maps used to perform P(D) analyses. This explains why the number counts derived by <cit.> using this statistical technique are compatible with the measurements derived using standard source-extraction methods, but not with the intrinsic number counts.

* The resolution effects allow us to solve the tension between the measured level of the Poisson fluctuations of the CIB power spectrum and the level expected from the observed number counts. Indeed, the number counts are biased high, and the shot noise derived from them is thus overestimated.

* Recently, <cit.> identified a population of red Herschel sources. Their number density is one order of magnitude higher than in models, including our new simulation. However, after taking into account the noise and the resolution effects, our 2 deg^2 simulation produces the correct number of objects.

These results highlight the difficulty of interpreting long-wavelength single-dish observations. Correcting the observations for all the observational effects is a complex task, and the corrections usually assume an underlying model. In this paper, we started from our model and reproduced the full observational process. This approach is more direct and allowed us to test the validity of our model without having to rely on complex corrections of the data. It is probably the best way to deal with the complexity and the precision of modern data sets.

Our simulation (SIDES), released publicly at <http://cesam.lam.fr/sides>, has many potential applications:

* It can be used to prepare future surveys and, in particular, to predict the number of detected galaxies and their properties (redshift, SFR, stellar mass). The realistic clustering included in our simulation can also be used to accurately estimate the confusion limit.

* It is a powerful tool to test source-extraction techniques and characterize their biases. We can also use it to test and optimize methods to identify the shorter-wavelength counterparts of single-dish sources. Finally, it can be used to validate stacking software and determine the most efficient and least biased approaches.

* The various biases affecting the extraction of single-dish sources can have strong impacts on clustering measurements and bias the estimates of the host halo mass <cit.>. Since it accurately reproduces a large set of observables, our simulation is well suited to characterizing and correcting for these effects.

* Finally, in a future paper, we will include the emission of far-infrared and (sub)millimeter lines in our simulation and make predictions for spectroscopic surveys and for (sub)millimeter intensity-mapping experiments.

We thank the anonymous referee for his/her very useful and constructive comments. We thank Y.-Y. Mao for providing the abundance-matching code and for assistance. We thank Eric Jullo, Stéphane Arnouts, Olivier Ilbert, and Corentin Schreiber for their explanations. The Bolshoi-Planck simulation was performed by Anatoly Klypin within the Bolshoi project of the University of California High-Performance AstroComputing Center (UC-HiPACC) and was run on the Pleiades supercomputer at the NASA Ames Research Center. We acknowledge financial support from the "Programme National de Cosmologie et Galaxies" (PNCG) funded by CNRS/INSU-IN2P3-INP, CEA and CNES, France. This work has been partially funded by the ANR under the contract ANR-15-CE31-0017.
This work has been carried out thanks to the support of the OCEVU Labex (ANR-11-LABX-0060) and the A*MIDEX project (ANR-11-IDEX-0001-02) funded by the "Investissements d'Avenir" French government program managed by the ANR. HW acknowledges support from the US National Science Foundation (NSF) grant AST1313037. Part of the research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.

§ HOMOGENIZATION OF COSMOLOGY Our simulation is based on observational quantities (e.g., stellar masses, SFRs). They were derived assuming a cosmology different from the one used in the dark-matter simulation. To be consistent, we converted the observational constraints to the Planck cosmology. At fixed SED, the SFR and the stellar mass are proportional to the intrinsic luminosity of an object. In the Planck cosmology, the luminosity distance is ∼3 % larger (with a small redshift dependence) than in the 773 cosmology used in most of the observational papers (h = 0.7, Ω_Λ = 0.7, Ω_M = 0.3). The intrinsic luminosities of the objects, and consequently the stellar masses and SFRs, are thus slightly higher than in the observational papers using the 773 cosmology. We thus applied a (D_ L, Planck/D_ L, 773)^2 correction to the observed stellar masses and SFRs. Similarly, the volume corresponding to a redshift slice estimated in the 773 cosmology is smaller than in the Planck cosmology. The number densities estimated in the 773 cosmology are thus overestimated. We therefore applied (dV_ comoving, 773/dz) / (dV_ comoving, Planck/dz) corrections to the characteristic densities Φ. The correction is computed at the redshift corresponding to the center of the redshift bins used to derive the mass and luminosity functions. We checked that this correction does not vary by more than 1 % inside a bin.
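These corrections can be reproduced with standard cosmology routines. The sketch below uses astropy, with Planck15 as an assumed stand-in for the Planck parameter set (the exact parameters follow the Bolshoi-Planck simulation) and with the function names chosen for illustration:

import numpy as np
from astropy.cosmology import FlatLambdaCDM, Planck15

cosmo_773 = FlatLambdaCDM(H0=70.0, Om0=0.3)  # the "773" cosmology (flat)
cosmo_pl = Planck15                          # stand-in for the Planck cosmology

def mass_sfr_correction(z):
    """(D_L,Planck / D_L,773)^2: multiplies the observed M_star and SFR."""
    ratio = (cosmo_pl.luminosity_distance(z) /
             cosmo_773.luminosity_distance(z)).value
    return ratio ** 2

def phi_correction(z_center, dz=1e-3):
    """(dV_773/dz) / (dV_Planck/dz): multiplies the characteristic densities
    Phi, evaluated at the bin center (finite-difference estimate of dV/dz)."""
    dv_773 = cosmo_773.comoving_volume(z_center + dz) - cosmo_773.comoving_volume(z_center)
    dv_pl = cosmo_pl.comoving_volume(z_center + dz) - cosmo_pl.comoving_volume(z_center)
    return (dv_773 / dv_pl).value

At z ∼ 1, mass_sfr_correction returns a factor of order 1.06, consistent with the ∼3 % larger luminosity distance quoted above.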
They finally compared the mean flux density in their simulated catalog with the result of stacking in their simulated map. They used several methods to perform the photometry (small apertures, PSF-fitting photometry, signal in the central pixel). We chose to use the signal in the central pixel, since it is the easiest to simulate. After subtracting the mean value of the map to remove the background, we computed the mean flux density in all the SPIRE pixels hosting a galaxy of the input stacked catalog. The relative excess of flux density caused by clustered neighbors is computed using:

Relative excess = (S_stack - ⟨S_cat⟩) / ⟨S_cat⟩,

where S_stack is the mean flux density measured by stacking in the simulated map and ⟨S_cat⟩ is the mean flux density of the stacked sources in the simulated catalog. We found 13±1%, 21±1%, and 34±1% at 250, 350, and 500 μm, respectively, which agrees well with the values of <cit.> of 14_-9^+14%, 22_-14^+19%, and 39_-23^+22%, respectively.

We also compared our results with <cit.>, who used a redshift-dependent correction. To allow an easier comparison, we used the same stellar mass selection (3×10^10 M_⊙) and redshift bins, and derived the stacking excess using the method described in the previous paragraph. Our results are presented in Fig. <ref> (black squares). The relative excess caused by clustered neighbors is compatible with zero at z=0, rises to a maximum at z∼1, and slightly decreases with increasing redshift at z>2. This decrease of the relative excess at higher z was already discussed in <cit.> and was interpreted as the result of the rise of both the rarity and the infrared brightness of star-forming galaxies with increasing redshift, causing a higher contrast between the massive galaxies and their environment. The first approach used in <cit.> relies on an initial stacking to determine a mass-to-flux-density ratio that evolves with redshift (blue triangles). A simulated map is then produced from the real COSMOS catalog, using the real positions of the galaxies and their stellar masses. The relative excess caused by clustering is then estimated by comparing the injected flux density with the stacked flux density measured in this simulated map. This method neglects the diversity of the SEDs, the non-linearity of the M_⋆-SFR relation, and the scatter around it. The trend obtained with this method agrees overall with our simulation. However, our simulation finds slightly lower values at z>2. This disagreement of ∼15% is hard to explain; it could come from cosmic variance, from a systematic effect at small scales in the real catalogs (incompleteness, deblending problems), or from a poor description of the small-scale clustering at z>2 in our simulation. The other method (red diamonds) is based on a stacking in map space and a fit of the resulting radial profile by both a point-like and an extended, clustered component <cit.>. The results are similar to those obtained with the previous method and with our simulation at 250 μm and 350 μm. At 500 μm, this method predicts higher values than the other ones at z>2. However, at 500 μm the decomposition is hard to perform, since the typical scale of the intra-halo clustering is close to the size of the beam.

Larger biases can be found if we stack fainter or more strongly clustered galaxy populations. The excess can reach 90% if we stack, for example, all sources with M_⋆ > 10^9 M_⊙ at 500 μm. The impact of clustering should thus be carefully checked when stacking Herschel data.
Our simulation will be particularly well suited to checking the accuracy of the stacking approaches in future studies.
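As an illustration of the procedure described above, the central-pixel estimator of the relative stacking excess can be written in a few lines. The sketch below (Python) assumes a pixelized simulated map and the input catalog of stacked sources; the function and variable names are ours, not those of the SIDES pipeline.

import numpy as np

def relative_excess(sim_map, pix_xy, cat_flux):
    """Central-pixel estimator of the relative stacking excess defined above.
    sim_map  : 2-D simulated map (same units as cat_flux, e.g. Jy/beam)
    pix_xy   : (N, 2) integer (x, y) pixel coordinates of the stacked sources
    cat_flux : (N,) flux densities of the same sources in the input catalog
    """
    m = sim_map - sim_map.mean()                    # remove the mean background
    s_stack = m[pix_xy[:, 1], pix_xy[:, 0]].mean()  # mean central-pixel signal
    return (s_stack - cat_flux.mean()) / cat_flux.mean()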
Nash Equilibrium in Social Media

Farzad Salehisadaghiani
Department of Electrical and Computer Engineering, University of Toronto, 10 King's College Road, Toronto, ON, M5S 3G4, Canada
farzad.salehisadaghiani@mail.utoronto.ca

§ INTRODUCTION

In this work, we investigate an application of a Nash equilibrium seeking algorithm in a social network. In a networked game, each player (user) takes actions in response to the other players' actions in order to decrease (increase) his cost (profit) in the network. We assume that the players' cost functions are not necessarily dependent on the actions of all players; this choice better mimics the rules of standard social media. A communication graph is defined for the game, through which players are able to share their information with only their neighbors. We assume that the communication neighbors necessarily affect the players' cost functions, while the reverse is not always true. In this game, the players are only aware of their own cost functions and actions. Thus, each of them maintains an estimate of the others' actions and shares it with the neighbors in order to update his action and estimates. A Nash equilibrium is a point at which no player can unilaterally deviate from his action to gain a better profit while the others keep their actions unchanged. In this work, we do not study the various algorithms within this framework; instead, we analyze a specific application of these algorithms in social networks. Interested readers may refer to <cit.> for a detailed explanation of such algorithms.

§.§ Social Media Behavior

In this section we investigate user behavior in social networking media. In media such as Facebook, Twitter and Instagram, users are allowed to follow (or be friends with) other users and to post statuses, photos and videos, or to share links and events. The way of communication depends on the type of social medium. For instance, in Instagram, friendship is unidirectional, in the sense that a user can be a follower of another user without being followed back. Recently, researchers at Microsoft have been studying the behavior of the users of Facebook as a giant, global network <cit.>. Such studies can be useful in many areas, e.g., business (posting advertisements) and politics (posting for the purpose of a presidential election campaign). Generating a new status usually comes at a cost for the user, so that if there is no benefit in posting, users do not bother to generate new content. In any social medium, drawing others' attention is one of the most important motivations for posting <cit.>. Our objective is to find the optimal rate of posting for each selfish user in order to draw more attention in his network. In the following, we build an information/attention model of a generic social medium <cit.>, define a way of communication between users via a digraph G_C, and mark the interactions between users by an interference digraph G_I. Consider a social medium with N users. Each user i produces x_i units of information, which his followers see in their news feeds. The users' communication network is defined by a strongly connected digraph G_C in which i→j means that j is a follower of i, i.e., j receives x_i in his news feed. We also assume a strongly connected interference digraph G_I that captures the influence of the users on one another.
We assume that each user i's cost function depends not only on the users he follows, but also on the users that his followers follow. The cost function of user i is denoted by J_i and consists of three parts:
* C_i(x_i): the cost that user i pays to produce x_i units of information. C_i(x_i) := h_i x_i, where h_i > 0 is a user-specific parameter.
* f_i^1(x): a differentiable, increasing and concave utility function describing user i's benefit from receiving information in his news feed, with f_i^1(0) = 0. f_i^1(x) := L_i √(∑_j∈ N_C^in(i) q_ji x_j), where q_ji represents follower i's interest in user j's information and L_i > 0 is a user-specific parameter.
* f_i^2(x): an incremental utility function that each user i obtains from receiving attention in his network, with f_i^2(x)|_x_i=0 = 0. Specifically, this function targets the amount of attention that each follower pays to user i's information among all the information in his news feed. f_i^2(x) = ∑_l: i∈ N_C^in(l) L_l (√(∑_j∈ N_C^in(l) q_jl x_j) - √(∑_j∈ N_C^in(l)∖{i} q_jl x_j)). In fact, f_i^2(x) takes the total profit that the followers of user i derive from received information when i is included, and subtracts the corresponding profit when i is excluded.

The total cost function for user i is then J_i(x) = C_i(x_i) - f_i^1(x) - f_i^2(x); a code sketch evaluating these costs is given at the end of this section. For this example, we consider 5 users whose network of followers G_C is given in Fig. 2(a). From G_C, and taking J_i into account, one can construct G_I (Fig. 2(b)) in a way that specifies the interferences among users. Note that this is the reverse of the process discussed in <cit.>, because here G_C is given as the network of followers and G_I is constructed from G_C. For the particular networks in Fig. 2, the assumptions in <cit.> hold. We then employ the algorithm in <cit.> to find an NE of this game for h_i = 2 and L_i = 1.5 for i ∈ V, with q_41 = q_45 = 1.75, q_32 = q_43 = 2, and q_ij = 1 otherwise.

§.§ Analysis

In this section we analyze the NE x^* = [0, 0, 0.42, 2.24, 0.14]^T. From G_C in Fig. 2(a), one sees that user 4 has 3 followers (users 1, 3 and 5), user 3 has 2 followers (users 2 and 5), and each remaining user has only 1 follower. It is straightforward to predict that users 4 and 3 draw more attention due to their larger numbers of followers, which results in a lower cost. This lets them produce more information (x_4^* ≥ x_3^* ≥ x_j^* for j ∈ {1,2,5}). On the other hand, user 5 receives x_3 and x_4 in his news feed, and thus obtains a greater payoff from received information than users 1 and 2. This is why x_5^* ≥ x_j^* for j ∈ {1,2}.
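To make the model concrete, the following sketch (Python) evaluates J_i for the 5-user example. Only the follower relations explicitly quoted above are certain (users 1, 3, 5 follow user 4; users 2, 5 follow user 3); the remaining edges, which are needed to make G_C strongly connected, are a guess on our part since Fig. 2 is not reproduced here, and all helper names are ours.

import math

# N_C^in(i): the users that user i follows (whose information i receives).
# Only the in-edges implied by the text are certain; the rest are hypothetical.
follows = {1: {4}, 2: {3, 5}, 3: {4, 1}, 4: {2}, 5: {3, 4}}
q = {(4, 1): 1.75, (4, 5): 1.75, (3, 2): 2.0, (4, 3): 2.0}  # default q_ji = 1
h, L = 2.0, 1.5

def J(i, x):
    """Total cost J_i(x) = C_i - f_i^1 - f_i^2 for user i at action profile x."""
    recv = lambda k, excl=None: sum(q.get((j, k), 1.0) * x[j]
                                    for j in follows[k] if j != excl)
    f1 = L * math.sqrt(recv(i))                                # value of i's own feed
    f2 = sum(L * (math.sqrt(recv(l)) - math.sqrt(recv(l, excl=i)))
             for l in follows if i in follows[l])              # attention i receives
    return h * x[i] - f1 - f2

x_star = {1: 0.0, 2: 0.0, 3: 0.42, 4: 2.24, 5: 0.14}          # reported NE
print({i: round(J(i, x_star), 3) for i in follows})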
Assortative Mixing Equilibria in Social Network Games

Chen Avin^1, Hadassa Daltrophe^1, Zvi Lotker^1, David Peleg^2
^1 Ben Gurion University of the Negev, Beer Sheva, Israel (avin@cse.bgu.ac.il, hd@cs.bgu.ac.il, zvilo@bgu.ac.il)
^2 Weizmann Institute of Science, Rehovot, Israel (david.peleg@weizmann.ac.il)
Supported in part by a grant of the Israel Science Foundation (1549/13).

It is known that individuals in social networks tend to exhibit homophily (a.k.a. assortative mixing) in their social ties, which implies that they prefer bonding with others of their own kind. But what are the reasons for this phenomenon? Is it that such relations are more convenient and easier to maintain? Or are there also some more tangible benefits to be gained from this collective behaviour? The current work takes a game-theoretic perspective on this phenomenon, and studies the conditions under which different assortative mixing strategies lead to equilibrium in an evolving social network. We focus on a biased preferential attachment model where the strategy of each group (e.g., political or social minority) determines the level of bias of its members toward other group members and non-members. Our first result is that if the utility function that the group attempts to maximize is the degree centrality of the group, interpreted as the sum of the degrees of the group members in the network, then the only strategy achieving Nash equilibrium is perfect homophily, which implies that cooperation with other groups is harmful to this utility function. A second, and perhaps more surprising, result is that if a reward for inter-group cooperation is added to the utility function (e.g., externally enforced by an authority as a regulation), then there are only two possible equilibria, namely, perfect homophily or perfect heterophily, and it is possible to characterize their feasibility spaces. Interestingly, these results hold regardless of the minority-majority ratio in the population. We believe that these results, as well as the game-theoretic perspective presented herein, may contribute to a better understanding of the forces that shape the groups and communities of our society.

§ INTRODUCTION

Homophily (lit. "love of the same") <cit.>, also known as assortative mixing <cit.>, is a prevalent and well documented phenomenon in social networks <cit.>; in making their social ties, people often prefer to connect with other individuals of similar characteristics, such as nationality, race, gender, age, religion, education or profession. Homophily has many important consequences, both for the structure of the social network (e.g., the formation of communities) and for the behaviors and opportunities of participants in it, for example for the welfare of individuals <cit.> and for the diffusion patterns of information in the network <cit.>. It is therefore interesting to explore the reasons for this phenomenon. Clearly, one natural reason is that relationships with similar individuals may be more convenient and easier to maintain. But are there also some more tangible benefits to be gained from this collective behaviour of sub-populations in the network?
To better understand homophily, we take a different perspective on this phenomenon and study it through a strategic, game-theoretic prism. We investigate the conditions under which different assortative (and disassortative) mixing strategies lead to equilibrium in an evolving social network game. To model the network evolution, we use a variant of the classical preferential attachment model <cit.>, which incorporates a heterogeneous population and assortative mixing patterns for the sub-populations. This model, known as biased preferential attachment (BPA) <cit.>, maintains the "rich get richer" property, but additionally enables different mixing patterns (including perfect homophily and heterophily) between sub-populations, by using rejection sampling. In this paper, we modify this model by turning it into a game. Each sub-population is represented as a player who can choose its mixing pattern as a strategy. The utility function (or payoff) of a player is a result of its population's (expected) properties in the BPA model. A strategy profile (describing the strategies of both players) attains a Nash equilibrium for the game if no player can do better by unilaterally changing its own strategy. Obviously, the result of the game depends on the players' utility functions. In the current study we take an initial step and study two natural utility functions. In the first, we consider the payoff to be the total power of the group, that is, the sum of the degrees of all group members. In this case we prove that there is a unique stable Nash equilibrium, namely the perfect homophily profile; in other words, cooperation with other groups is harmful to this utility function. We stress that while there are other strategy profiles, like the unbiased profile, that guarantee the same total power to the groups, those profiles do not yield a Nash equilibrium.

Since perfect homophily results in complete segregation of the sub-populations, we consider a second utility function based on a linear combination of the total power of the group and the number of cross-population links (i.e., the size of the population cut). In particular, the utility is taken to be γ times the total power of the group plus 1-γ times the population cut size, for some weight factor 0 ≤ γ ≤ 1. Such a utility can be viewed as a rule (or a law) imposed by a regulator to encourage cooperation between the two sub-populations. At first glance, this utility seems to lead to different Nash equilibria for different γ values. Somewhat surprisingly, we show that only two possible equilibria may emerge. For γ > 1/2, the perfect homophily profile is the unique Nash equilibrium, and for γ < 1/2, the heterophily profile is the unique Nash equilibrium. For γ = 1/2, both profiles yield a Nash equilibrium, but only perfect homophily yields a stable equilibrium. (Note, by the way, that all our results are independent of the ratio r between the sizes of the two sub-populations.)

What may we learn from these results? A first, quite intuitive, lesson is that if the payoff includes benefits for heterophilic edges, then the game can move away from the perfect homophily equilibrium.
But, within the natural utility function we study, if the game moves away from the homophily equilibrium, then it must reach a perfect heterophily equilibrium. Both of these equilibria may appear to be too "radical" from a social capital perspective, which may find it desirable to maintain some balance in between the two extremes, i.e., to preserve the internal structure of both sub-populations as well as to form significant cross-population links between them. This leaves us with some interesting follow-up research directions: what 'mechanism design' rules can a regulator employ in order to have more fine-grained control over the equilibrium? What happens in a system with more than two sub-populations? How do the equilibria behave? We leave these questions for future work; we believe that taking a game-theoretic perspective on evolving social network models for heterogeneous populations is an important tool for understanding homophily, as shown in this initial model.

Due to space limitations, we provide only an outline of our proofs. The interested reader is referred to <https://goo.gl/h0Fegc> for details.

[Figure: Examples of the Biased Preferential Attachment (BPA) model with various parameter settings: (a) homophily, (b) heterophily, (c) unbiased. Each example depicts a 200-vertex bi-populated network generated by our BPA model, starting from a single edge connecting a blue and a red vertex, with 30% red nodes (vertex size proportional to degree).]

§ RELATED WORK

Game theory provides a natural framework for modeling selfish interests and the networks they generate <cit.>. While many studies (see <cit.> for a comprehensive survey) focus on local network formation games, others (e.g., <cit.>) model the players as making global structural decisions. In this paper we define a game that features a mixture of both local and global characteristics. This situation is close to cooperative games <cit.>, where all the nodes of the same group receive the same payment. However, the key idea of cooperative games is to choose which coalitions to form, whereas here the partition into groups is predefined. In this context, one should distinguish between network formation games <cit.> and evolving network games (e.g., <cit.>). The former involve a fixed set of nodes, with the connections between them changing over time. In contrast, in the evolving network model used herein, the nodes and edges are both dynamic, and new nodes join the network as it evolves over time.

Based on the assumption that people have a tendency to copy the decisions of other people, we suggest a network construction process that follows the well-known preferential attachment model <cit.>, with an additional phase to incorporate the mixing parameter <cit.>. Related studies in the economics literature examine different procedures for modeling social network formation. The studies of <cit.> assume that individuals are randomly paired with other members of the population and then match assortatively. Another model, presented in <cit.>, suggests two-phase attachments: the nodes first choose their neighbors with a bias towards their own type, and then make an unbiased choice of neighbors from among the neighbors of their biased neighbors. While the models of <cit.> and others assume that a connecting edge between a pair of nodes is fixed by bilateral agreement, in our model the matching choice is somewhat ambiguous.
The rejection of a proposed connection can be interpreted as either decided by one of the parties unilaterally or accepted by bilateral agreement. One of the main themes of this paper is the study of the homophily phenomenon and its influence on minority-majority groups. McPherson et al. <cit.> give an overview of research on homophily and survey a variety of properties and how they lead to particular bonding patterns. While some studies (e.g., <cit.>) model homophily as ranging over a spectrum between perfect homophily and an unbiased society, we have followed <cit.> and <cit.>, which also allow disassortative matching. Currarini, Jackson and Pin <cit.> examine friendship patterns in a representative sample of U.S. high schools and build a model of friendship formation based on empirical data. They report that all groups are biased towards same-type friendships relative to demographics, but different homophilic patterns emerge as a function of the group size: while homophily is essentially absent for groups that comprise very small or very large fractions of their school, it is significant for groups that comprise a middle-ranged fraction. In <cit.> it is also claimed that the majority group has a greater tendency to homophily. In contrast, we establish independence between the size of the group and the mixing pattern: the minority-majority parameter r does not influence the attained equilibria. This inconsistency can be explained by the different construction of the network (<cit.> and <cit.> assume random matching with biased agreement, as mentioned above), or perhaps by the simplicity of our model and the fact that it involves only two groups.

§ NETWORK AND GAME MODEL

Our network model is an extension of the bi-populated biased preferential attachment (BPA) model <cit.>. We use this model as the basis of an evolving heterogeneous network game. We start by describing the network model.

§.§ Biased Preferential Attachment Model

The biased preferential attachment model[In fact, here we extend the model of <cit.> to allow heterophily.] (BPA) <cit.> is a bi-populated preferential attachment model obtained by applying the classical preferential attachment model <cit.> to a bi-populated minority-majority network augmented with homophily. The model describes a bi-populated random evolving network with red and blue vertices, where n is the total number of nodes, r is the arrival rate of the red vertices, and Λ is the mixing matrix. Denote the social network at time t by G_t = (V_t, E_t), where V_t and E_t, respectively, are the sets of vertices and edges in the network at time t, and let d_t(v) denote the degree of vertex v at time t. The process starts with an arbitrary initial bi-populated (red-blue) connected network G_0 with n_0 vertices and m_0 edges. For simplicity we hereafter assume that G_0 consists of one blue and one red vertex connected by an edge, but this assumption can be removed. This initial network evolves in n time steps as follows. In every time step t, a new vertex v enters the network. The arrival rate of the red nodes is denoted by 0 < r < 1, i.e., the new vertex v is red with probability r and blue with probability 1-r. In the first stage, v selects a tentative neighbor u at random by preferential attachment, i.e., with probability proportional to u's degree at time t,

Pr[u is chosen] = d_t(u) / ∑_w∈ V_t d_t(w).
The second stage employs a 2 × 2 stochastic mixing matrix Λ, composed of the stochastic homophily vectors of the two players, Λ_R and Λ_B, i.e.,

Λ = ( Λ_R ; Λ_B ) = ( ρ_R , 1-ρ_R ; 1-ρ_B , ρ_B ).

Letting c ∈ {R, B} be v's color, the edge (v,u) is inserted into the graph with probability ρ_c when u's color is also c. If the colors differ, then the edge is inserted with probability 1-ρ_c. If the edge is rejected (i.e., is not inserted into the graph), then the two-stage procedure is restarted. This process is repeated until some edge {v,u} has been inserted. Thus in each time step, one new vertex and one new edge are added to the existing graph. Note that the mixing matrix Λ describes the degree of segregation (incorporated by using rejection sampling) of the system. In particular, using the perfect homophily matrix

Λ_H = ( 1 , 0 ; 0 , 1 ),

all added edges connect vertex pairs of the same color. At the other extreme, using the perfect heterophily matrix

Λ_Het = ( 0 , 1 ; 1 , 0 ),

all added edges connect vertex pairs of different colors. Similarly, using the unbiased strategy matrix

Λ_N = ( 0.5 , 0.5 ; 0.5 , 0.5 ),

edges are connected independently of the node colors. For intermediate values 0 < ρ_R, ρ_B < 1, the players show a tendency to favor one kind of interaction over the other. When ρ_R, ρ_B > 0.5, the players tend to be homophilic, and when ρ_R, ρ_B < 0.5, the players tend to be heterophilic. Figure <ref> presents three examples of parameter settings for the BPA model on a 200-vertex bi-populated social network with r = 0.3 (30% red nodes), using Λ_H, Λ_Het and Λ_N.

§.§ Evolving Heterogeneous Network Games

We now define the evolving heterogeneous network game EHNG(t, r, Λ, γ) between the two sub-populations. The game is played between two players, the red player R and the blue player B. (Note that we occasionally use R and B to denote either the color, the corresponding set of nodes, or the corresponding player. The exact meaning will be clear from the context.) Assume r and G_0 are given to the players. Each player P ∈ {R, B} can now choose its strategy vector as a mixing vector Λ_P in the mixing matrix Λ. The network then evolves according to the biased preferential attachment model BPA(t, r, Λ). Let n_t(R) and n_t(B), respectively, denote the number of red and blue nodes at time t > 0, where n_t = n_t(R) + n_t(B) = n_0 + t. Denote by d_t(R) (respectively, d_t(B)) the sum of the degrees of the red (resp., blue) vertices present in the system at time t ≥ 0. Altogether, the number of edges in the network at time t is m_t = m_0 + t, with d_t(R) + d_t(B) = 2m_t. Let C(G_t) denote the cut of the graph G_t defined by the red-blue partition of V_t, i.e., the set of edges that have one endpoint in R and the other in B. Formally,

C(G_t) = {(u,v) ∈ E_t | u ∈ R, v ∈ B}.

Let ϕ(G_t) = |C(G_t)| denote the size of the cut. In our game, the payoff of each player is a combination of two quantities: the total power of its sub-population (namely, its expected sum of degrees), and the expected cut size ϕ(G). Observe that these quantities pull in opposite directions, hence they are balanced using a parameter 0 ≤ γ ≤ 1 that serves as a weighting factor for the utility function of the game.
The parameter γ can be viewed as being set by a regulator to enforce cooperation between the sub-populations. Formally, the payoffs (utilities) of the players R and B at time t are

U_t^γ(R) = γ d_t(R)/d_t + (1-γ) ϕ_t/(2m_t) = (1/d_t)(γ d_t(R) + (1-γ) ϕ_t),
U_t^γ(B) = γ d_t(B)/d_t + (1-γ) ϕ_t/(2m_t) = (1/d_t)(γ d_t(B) + (1-γ) ϕ_t).

A strategy profile Λ is a Nash equilibrium for the game EHNG(t, r, Λ, γ) if no player P ∈ {R, B} can do better by unilaterally changing its own strategy Λ_P. A Nash equilibrium for the game EHNG(t, r, Λ, γ) is stable if a small change in Λ for one player leads to a situation where two conditions hold: (i) the player who did not change has no better strategy in the new circumstance, and (ii) the player who did change is now playing a strictly worse strategy. If both conditions are met, then the player who changed its Λ will return immediately to the Nash equilibrium, hence the equilibrium is stable. If condition (i) does not hold (but condition (ii) does), then the equilibrium is unstable.

§ DEGREE MAXIMIZATION GAME

Before studying the behavior of the general evolving heterogeneous network game, let us consider the solution of the game in the basic case where γ = 1 for every t, i.e., each player's utility depends only on the expected sum of degrees.

§.§.§ An urn process.

The biased preferential attachment process BPA(n, r, Λ) can also be interpreted as a Polya urn process, where each new edge added to the graph corresponds to two new balls added to the urn, one for each endpoint, and the balls are colored by the colors of the corresponding vertices. In this interpretation, a time step of the original evolving network process corresponds to the arrival of a new ball x (which is red with probability r and blue with probability 1-r), and in the ensuing procedure, we choose an existing ball y from the urn uniformly at random; now, if x is of the same (respectively, different) color c ∈ {R, B} as y, then with probability ρ_c (resp., 1-ρ_c) we add to the urn both x and a second copy of y (corresponding to the two endpoints of the added edge), and with probability 1-ρ_c (resp., ρ_c) we reject the choice of y and repeat the experiment, i.e., choose another existing ball y' from the urn uniformly at random. This is repeated until the choice of y is not rejected. Hence the arrival of each new ball x results in the addition of exactly two new balls to the urn, namely, x and a copy of some existing ball y. The key observation is that to analyze the expected fraction of red balls in the urn at time t, there is no need to keep track of the degrees of individual vertices in the corresponding evolving network process; the sum of the degrees of all red vertices, d_t(R), is exactly the number of red balls in the urn. Noting that exactly two balls join the system in each time step, we have

d_t(R) + d_t(B) = d_t = 2t + n_0 = 2(t+1).

Note that while d_t(R) and d_t(B) are random variables, d_t is not.

§.§.§ Convergence of expectations.

Let α_t = d_t(R)/d_t be a random variable denoting the fraction of red balls in the system at time t. Given the mixing matrix Λ, we claim that the process will converge to a ratio of α red balls in the system (as a function of Λ). More formally, we claim that, regardless of the starting condition, there exists a limit

α = lim_t→∞ E[α_t].
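Before deriving this limit formally, the claimed convergence is easy to check numerically. The following minimal sketch (Python; the function and parameter names are ours, not part of the model) simulates the urn process directly:

import random

def simulate_alpha(r, rho_R, rho_B, steps=200_000, seed=1):
    """Polya-urn view of the BPA process: returns alpha_t = d_t(R)/d_t after
    `steps` arrivals, starting from G_0 = one red-blue edge (one ball each)."""
    random.seed(seed)
    red, blue = 1, 1
    for _ in range(steps):
        c_red = random.random() < r               # newcomer's color
        rho = rho_R if c_red else rho_B
        while True:                               # rejection sampling of y
            y_red = random.random() < red / (red + blue)
            accept = rho if (y_red == c_red) else 1.0 - rho
            if random.random() < accept:
                break
        # two new balls: the newcomer x and a copy of the accepted ball y
        red += int(c_red) + int(y_red)
        blue += int(not c_red) + int(not y_red)
    return red / (red + blue)

# e.g., simulate_alpha(0.3, 1.0, 1.0) approaches r = 0.3 under perfect homophily.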
We first establish the following recursion.

Lemma. E[α_t+1 | α_t] = α_t + (F(α_t) - α_t)/(t+2), where

F(x) = (1/2) ( 1 - (1-r) ρ_B(1-x)/(ρ_B + x(1-2ρ_B)) + r ρ_R x/(1-x + ρ_R(2x-1)) ).

Proof. Given that the new vertex at time t+1 is blue, the probability P_B that it attaches to a blue vertex satisfies

P_B = ρ_B(1-α_t) + α_t ρ_B P_B + (1-α_t)(1-ρ_B) P_B,

hence

P_B = ρ_B(1-α_t)/(ρ_B + α_t - 2α_tρ_B).

Similarly, given that the new vertex at time t+1 is red, the probability P_R that it attaches to a red vertex satisfies

P_R = ρ_R α_t + α_t(1-ρ_R) P_R + (1-α_t)ρ_R P_R,

hence

P_R = ρ_R α_t/(1-α_t + ρ_R(2α_t-1)).

We later express P_B and P_R as functions of α_t = x, i.e.,

P_B(x) = ρ_B(1-x)/(ρ_B + x(1-2ρ_B)),  P_R(x) = ρ_R x/(1-x + ρ_R(2x-1)).

In each step the sum of the degrees increases by 2, so d_t+1 = d_t + 2. We start from an arbitrary ratio α_0 = d_0(R)/d_0. Let Δ_t+1(x) be a random variable denoting the number of new red balls at time t+1, assuming α_t = x. Then Δ_t+1(x) equals 0 with probability (1-r)P_B(x) (a blue ball entered and chose a blue ball), 2 with probability rP_R(x) (a red ball entered and chose a red ball), and 1 with the remaining probability (a blue ball chose a red ball or vice versa), and

d_t(R) = d_0(R) + ∑_i=1^t Δ_i(α_i-1).

We now define δ_t = E[Δ_t+1(α_t)] and calculate it to be

δ_t = E[d_t+1(R) - d_t(R) | α_t]
= 1·(1 - (1-r)P_B(α_t) - rP_R(α_t)) + 2·rP_R(α_t)
= 1 - (1-r)P_B(α_t) + rP_R(α_t)
= 1 - (1-r)ρ_B(1-α_t)/(ρ_B + α_t - 2α_tρ_B) + rρ_Rα_t/(1-α_t + ρ_R(2α_t-1))
= 2F(α_t).

Substituting d_t+1(R) = 2(t+2)α_t+1 and d_t(R) = 2(t+1)α_t and rewriting yields the lemma.

Lemma. The function F(x) has the following properties:
* F(x) is monotonically increasing.
* F(x) has exactly one fixed point α ∈ [0,1].
* The image of the unit interval under F(x) is contained in the unit interval: F([0,1]) = [r/2, (1+r)/2] ⊂ [0,1].
* If x < α then x < F(x) < α, and if x > α then x > F(x) > α.

Proof. For the first property, observe that

∂F/∂x = (1/2) ( (1-r)ρ_B(1-ρ_B)/(ρ_B + x - 2ρ_Bx)^2 + rρ_R(1-ρ_R)/(1 - ρ_R - x + 2ρ_Rx)^2 ) > 0

for every x, ρ_R, ρ_B ∈ [0,1] and r ∈ (0,1). For the second property, we define the function G(x) = F(x) - x. The roots of G(x) correspond to the fixed points of F(x), so we show that G(x) has exactly one real root in the interval [0,1]. We arrange G(x) as G(x) = Q(x)/W(x), where

W(x) = 2 (ρ_B + x(1-2ρ_B)) (1 - x + ρ_R(2x-1)).

Since the denominator W(x) is positive for all r, x, ρ_R, ρ_B ∈ [0,1], it is enough to show that the numerator Q(x) has exactly one real root in the interval [0,1], as shown in Lemma <ref> below. The third property follows from the fact that F(x) is strictly monotonically increasing, and by evaluating F at the two extreme values: F(0) = r/2 and F(1) = (1+r)/2. Finally, the fourth property follows from the facts that the function is strictly monotonically increasing, that there is only one fixed point, and that F(x) maps [0,1] into [0,1].

Lemma. The polynomial

Q(x) = 2(2ρ_B - 1)(2ρ_R - 1)x^3 + (-3 + 7ρ_B + rρ_B + 4ρ_R + rρ_R - 10ρ_Bρ_R - 4rρ_Bρ_R)x^2 + (1 - 3ρ_B - 2rρ_B - ρ_R + 3ρ_Bρ_R + 4rρ_Bρ_R)x + rρ_B(1 - ρ_R)

has a unique root in [0,1].

In what follows, we employ Sturm's Theorem (explained next) in order to bound the number of distinct real roots of Q(x). Consider a degree-n polynomial P(x) = a_n x^n + ... + a_1 x + a_0 over the reals. The Sturm sequence of P(x) is a sequence of polynomials p_0(x), p_1(x), ..., p_m(x), where p_0(x) = P(x), p_1(x) = dP(x)/dx, and p_i(x) = -remainder(p_i-2(x)/p_i-1(x)) for i > 1. This recursive definition terminates at the step m for which remainder(p_m-1(x)/p_m(x)) = 0. Since the degree of p_i(x) is at most n-i, we conclude that m ≤ n. Define SC_P(t) to be the number of sign changes in the sequence p_0(t), p_1(t), ..., p_m(t). We are now ready to state the following theorem, attributed to Jacques Sturm, 1829 (cf. <cit.>).

Theorem (Sturm). Consider two reals a, b, where a < b and neither of them is a root of P(x).
Then the number of distinct real roots of P(x) in the interval (a,b) is SC_P(a) - SC_P(b).

Let us examine the Sturm sequence of Q on (0,1) for every ρ_R, ρ_B, checking the different ρ ranges as follows.

For ρ_B ∈ (0,1/2) and ρ_R ∈ (1/2,1): to find the number of roots between 0 and 1, we first evaluate p_0(x), p_1(x), p_2(x) and p_3(x) at x = 1 and obtain the sign sequence {-,-,-,-}, which contains no sign changes. Evaluating p_0(x), p_1(x), p_2(x) and p_3(x) at x = 0 yields two possible sign sequences: for 1/2 < ρ_R < (1+2r)/(1+4r) and (ρ_R-1)/(3ρ_R-3-2r+4rρ_R) < ρ_B < 1/2 we get {+,-,-,-}; otherwise, we get the sequences {+,+,*,-} (where * is + or -). All of these sequences contain one sign change; hence the number of roots of Q between 0 and 1 is 1 - 0 = 1, as needed.

For ρ_B ∈ (1/2,1) and ρ_R ∈ (0,1/2): to find the number of roots between 0 and 1, we evaluate p_0(x), p_1(x), p_2(x) and p_3(x) at x = 0 and obtain the sign sequence {+,-,+,-}, which contains three sign changes. The same procedure at x = 1 gives, for 1/2 < ρ_B < (2r-3)/(4r-5) and (1-ρ_B)/(5-7ρ_B-2r+4rρ_B) < ρ_R < 1/2, the sign sequence {-,-,+,-}; otherwise, we get the sequences {-,+,*,-} (where * is + or -). Since all of these contain two sign changes, the number of roots of Q between 0 and 1 is 3 - 2 = 1, as needed.

For ρ_B, ρ_R ∈ (0,1/2) or ρ_B, ρ_R ∈ (1/2,1), we get that Q(1) < 0 and Q(0) > 0. Observe that Q(x) → ∞ when x → ∞ and Q(x) → -∞ when x → -∞. This implies that Q has at least one root in each of the intervals (-∞,0) and (1,∞), as well as one in (0,1). Knowing that Q(x) has exactly three roots concludes the claim that G has exactly one root in [0,1].

Finally, when either ρ_R = 1 and 0 ≤ ρ_B < 1, or ρ_B = 0 and 0 < ρ_R < 1, there is a root of Q at x = 0, so these cases must be dealt with separately. Another special case occurs when ρ_B = ρ_R = 1/2. For each of these special cases we explicitly solve the equation Q(x) = 0 and show that there is a unique root in (0,1). Lemma <ref> follows.

Assume w.l.o.g. that α_t < α. By Lemma <ref>, α_t < F(α_t) < α, so by Lemma <ref>, α_t < E[α_t+1 | α_t] < α. Taking expectations, we get that E[α_t] < E[α_t+1] < α. We have thus shown that the expected value of α_t converges to the fixed point α of F(x), and we have established the following.

Theorem. Given the rate r of red nodes and the mixing matrix Λ, for any initial graph, as t tends to infinity, the expected fraction of red balls, E[α_t], converges to the unique real α ∈ (0,1) satisfying the equation F(α) = α, or

2α = 1 - (1-r) ρ_B(1-α)/(ρ_B + α(1-2ρ_B)) + r ρ_Rα/(1-α + ρ_R(2α-1)).

Hence the limit α is the solution of the cubic equation

(2 - 4ρ_B - 4ρ_R + 8ρ_Bρ_R)α^3 + (-3 + 7ρ_B + rρ_B + 4ρ_R + rρ_R - 10ρ_Bρ_R - 4rρ_Bρ_R)α^2 + (1 - 3ρ_B - 2rρ_B - ρ_R + 3ρ_Bρ_R + 4rρ_Bρ_R)α + rρ_B - rρ_Bρ_R = 0.

Note that this limit is independent of the initial values d_0 and α_0 of the system.

§.§.§ Existence of a Nash Equilibrium.

Having shown that for any given strategy profile Λ the expected fraction of red node degrees converges to α, we examine the influence of the different strategies on the utility functions.

Lemma. The limit α and E[α_t] are monotone in the mixing matrix entries, i.e., both increase with increasing ρ_R and decrease with increasing ρ_B.

Proof. We show (strict) monotonicity in ρ_R; a similar proof can be obtained for ρ_B. Consider two urn processes U and U' corresponding to the games (n, G_0, r, Λ) and (n, G_0, r, Λ'), where

Λ = ( ρ_R , 1-ρ_R ; 1-ρ_B , ρ_B ),  Λ' = ( ρ_R+ε' , 1-(ρ_R+ε') ; 1-ρ_B , ρ_B )

for some ε' > 0. Denote by α_t = d_t(R)/d_t and α'_t = d'_t(R)/d_t the fractions of red balls at time t in U and U', respectively. Let α = lim_t→∞ E[α_t] and α' = lim_t→∞ E[α'_t]. In order to prove the first part of the lemma (i.e., the claim on the limit α) we show that α < α'.
Let F(x) and F'(x) be the functions defined for the processes U and U', respectively. Observe that ∂F/∂ρ_R > 0 for all ρ, r ∈ [0,1] and x ∈ (0,1), so F(x) < F'(x) for every x ∈ (0,1). Note that F(α) = α and F'(α') = α' are the unique fixed points of F(x) and F'(x), respectively; hence α = F(α) < F'(α') = α', as required.

The proof of the second part of the lemma (i.e., the claim on E[α_t]) uses stochastic domination (cf. <cit.>). We give the formal definition and a basic theorem that we use.

Definition. Let X and Y be two random variables, not necessarily on the same probability space. The random variable X is stochastically smaller than Y, denoted X ≼ Y, if Pr[X > z] ≤ Pr[Y > z] for every z ∈ ℝ. If additionally Pr[X > z] < Pr[Y > z] for some z, then X is stochastically strictly less than Y, denoted X ≺ Y.

Theorem. Let X and Y be two random variables, not necessarily on the same probability space.
* Suppose X ≺ Y. Then E[U(X)] < E[U(Y)] for any strictly increasing continuous utility function U.
* Suppose X_1 ≺ Y_1 and X_2 ≺ Y_2, for four random variables X_1, X_2, Y_1 and Y_2. Then aX_1 + bX_2 ≺ aY_1 + bY_2 for any two constants a, b > 0.

Let Δ_t+1(x) (respectively, Δ'_t+1(x)) be a random variable denoting the number of new red balls at time t+1 in U (resp., U'), assuming α_t = x (resp., α'_t = x).

Lemma. Δ_t+1(x) ≺ Δ'_t+1(x) for any 0 < x < 1 and integer t ≥ 0.

Proof. By Eq. (<ref>) and (<ref>),

Pr[Δ_t+1(x) = 0] = Pr[Δ'_t+1(x) = 0],  Pr[Δ_t+1(x) = 2] < Pr[Δ'_t+1(x) = 2].

Hence Pr[Δ_t+1(x) > z] ≤ Pr[Δ'_t+1(x) > z] for every z ∈ ℝ and Pr[Δ_t+1(x) > 1] < Pr[Δ'_t+1(x) > 1], yielding Δ_t+1(x) ≺ Δ'_t+1(x).

Lemma. For t ≥ 0, if α_t ≺ α'_t then Δ_t+1(α_t) ≺ Δ'_t+1(α'_t).

Proof. We would like to show that, for every z, Pr[Δ_t+1(α_t) > z] ≤ Pr[Δ'_t+1(α'_t) > z]. Denoting expectation according to the random variable Z by E_Z[·], we have

Pr[Δ_t+1(α_t) > z] = E_α_t[ Pr[Δ_t+1(α_t) > z] ] ≤ E_α_t[ Pr[Δ'_t+1(α_t) > z] ] ≤ E_α'_t[ Pr[Δ'_t+1(α'_t) > z] ] = Pr[Δ'_t+1(α'_t) > z],

where the first inequality follows from Lemma <ref>, which shows that Δ_t+1(x) ≼ Δ'_t+1(x), and the second is by Theorem <ref>(1), noting that Pr[Δ'_t+1(x) > z] is monotone in x. To show strictness (i.e., Δ_t+1(α_t) ≺ Δ'_t+1(α'_t)), we consider z = 1 and show that Pr[Δ_t+1(α_t) > 1] < Pr[Δ'_t+1(α'_t) > 1].

Lemma. α_t ≺ α'_t for t ≥ 0.

Proof. Note that

d_t(R) = d_0(R) + ∑_i=1^t Δ_i(α_i-1),  d'_t(R) = d'_0(R) + ∑_i=1^t Δ'_i(α'_i-1).

We prove the claim by induction over t.

Induction basis: d_0(R) = d'_0(R) = c_R for some constant c_R > 0, so α_0 = c_R/d_0 = α'_0. It follows that

Pr[Δ_1(α_0) = 0] = (1-r)ρ_B(1-α_0)/(ρ_B(1-α_0) + (1-ρ_B)α_0) = Pr[Δ'_1(α'_0) = 0]

and

Pr[Δ_1(α_0) = 2] = rρ_Rα_0/((1-ρ_R)(1-α_0) + ρ_Rα_0) < r(ρ_R+ε')α_0/((1-ρ_R-ε')(1-α_0) + (ρ_R+ε')α_0) = Pr[Δ'_1(α'_0) = 2],

hence Pr[Δ_1(α_0) > z] ≤ Pr[Δ'_1(α'_0) > z] for every z ∈ ℝ and Pr[Δ_1(α_0) > 1] < Pr[Δ'_1(α'_0) > 1], yielding Δ_1(α_0) ≺ Δ'_1(α'_0).

Induction step: suppose that α_t ≺ α'_t holds. By Lemma <ref>, Δ_t+1(α_t) ≺ Δ'_t+1(α'_t). Hence

d_t+1(R) = d_t(R) + Δ_t+1(α_t) ≺ d'_t(R) + Δ'_t+1(α'_t) = d'_t+1(R),

where d_t(R) ≺ d'_t(R) by the induction assumption (note that we also used Theorem <ref>(2)). This implies α_t+1 ≺ α'_t+1, as needed.

By Theorem <ref> we get E[α_t] < E[α'_t], which completes the proof of the second part of Lemma <ref>.

Given the utility functions U^1_t(R) = d_t(R) and U^1_t(B) = d_t(B), each player can choose its row in the mixing matrix Λ. By Theorem <ref> we get that U^1_t→∞(R) = d_tα and U^1_t→∞(B) = d_t(1-α). Lemma <ref> implies that the red and blue players maximize their utilities by increasing ρ_R and ρ_B, respectively. Hence, the homophily strategy profile Λ_H is strictly dominant for both players. The same applies for t < ∞.
Theorem. The homophily strategy profile Λ_H is the unique Nash equilibrium for the game EHNG(t, r, Λ, γ=1).

§ UTILITY MAXIMIZATION GAME

The evolving heterogeneous network game EHNG(t, r, Λ, γ) for a bi-populated network consists of two contrasting ingredients, the expected sum of degrees d(·) and the cut size ϕ(G). The following theorem expresses the impact of these forces on the system as a function of the weighting factor γ.

Theorem. Consider the evolving network game EHNG(t, r, Λ, γ) for 0 < r < 1.
* For γ > 1/2, the homophily strategy profile Λ_H is the unique Nash equilibrium.
* For γ < 1/2, the heterophily strategy profile Λ_Het is the unique Nash equilibrium.
* For γ = 1/2, the only two Nash equilibria are Λ_H and Λ_Het. The homophily strategy profile Λ_H is a stable Nash equilibrium, while the heterophily strategy profile Λ_Het is an unstable Nash equilibrium.

Proof. Let Ψ_t(x) be a random variable denoting the number of new cut edges at time t. We have Ψ_t+1(x) = 0 with probability (1-r)P_B(x) + rP_R(x), and 1 with the remaining probability, and

ϕ(G_t) = ϕ(G_0) + ∑_i=1^t Ψ_i(α_i-1).

Define the potential function of the red player, denoted Φ_R, as the expected increment of its utility at step t. Then

Φ_R = E[U_t+1^γ(R) - U_t^γ(R) | α] = γE[Δ_t+1(α)] + (1-γ)E[Ψ_t+1(α)]
= γ(1 - (1-r)P_B(α) + rP_R(α)) + (1-γ)(1 - ((1-r)P_B(α) + rP_R(α)))
= 1 - (1-r)P_B(α) + r(2γ-1)P_R(α).

Similar considerations imply that the potential function of the blue player is

Φ_B = 1 - rP_R(α) + (1-r)(2γ-1)P_B(α).

Let us examine the values of the potential functions Φ_R and Φ_B for every γ, checking the different γ ranges as follows.

γ > 1/2: in this range the value of P_R contributes positively to Φ_R and negatively to Φ_B. Hence, the red player would like to increase P_R, which is achieved by increasing ρ_R, as shown in Lemma <ref>. Similarly, in this range P_B contributes positively to Φ_B and negatively to Φ_R, hence the blue player prefers to increase ρ_B. It follows that the homophily strategies Λ_R = (1,0) and Λ_B = (0,1) are strictly dominant for both players. Note that this result also holds for the special case γ = 1, as shown in Theorem <ref>.

γ < 1/2: here, both P_R and P_B contribute negatively to Φ_R and Φ_B. Therefore, decreasing ρ_R implies decreasing P_R but also increasing P_B (see Lemma <ref>). The variation of P_B is due to the influence of ρ_R on 1-α, which is similar to the variation of P_R due to α. However, Φ_R is also decreased directly by P_R, hence the red player prefers to decrease ρ_R. Similarly, the blue player would like to decrease ρ_B, which implies that the heterophily strategies Λ_R = (0,1) and Λ_B = (1,0) are strictly dominant. Note that this result also holds for the special case γ = 0: in this case the utility is based only on the cut ϕ(G), so it is clear that the best strategy for both players is to attach to nodes of the opposite color, as dictated by the heterophily strategy.

γ = 1/2: in this range the potential function value is

Φ_R = 1 - (1-r)P_B = 1 - (1-r)ρ_B(1-α)/(ρ_B(1-α) + α(1-ρ_B)).

Although the strategy of the red player, ρ_R, does not appear explicitly in this expression, it appears implicitly in α. Setting ρ_B = 0 implies P_B = 0, yielding Φ_R = 1. Similarly, setting ρ_R = 0 yields Φ_B = 1. Since 1 is the maximum value of Φ_R and Φ_B, it follows that the heterophily strategies are dominant for both players, i.e., Λ_Het is a Nash equilibrium. However, as in the case γ > 1/2, when ρ_B > 0 the red player can decrease P_B by increasing ρ_R, as shown in Lemma <ref>. Similarly, when ρ_R > 0 the blue player would increase ρ_B. This leads both players to the homophily strategies Λ_R = (1,0) and Λ_B = (0,1). Thus, Λ_Het is an unstable Nash equilibrium and Λ_H is a stable Nash equilibrium.
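To illustrate the phase transition at γ = 1/2 numerically, one can solve F(α) = α by bisection and evaluate the closed-form potentials above. The following sketch (Python; the helper names and sample values are ours, not part of the original analysis) does this:

def F(x, r, rR, rB):
    """F(x) from the convergence lemma; assumes 0 < x < 1."""
    PB = rB * (1 - x) / (rB * (1 - x) + (1 - rB) * x)
    PR = rR * x / (rR * x + (1 - rR) * (1 - x))
    return 0.5 * (1 - (1 - r) * PB + r * PR)

def limit_alpha(r, rR, rB, iters=100):
    lo, hi = 0.0, 1.0          # G(0) = r/2 > 0 and G(1) = (r-1)/2 < 0
    for _ in range(iters):     # bisection on G(x) = F(x) - x
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid, r, rR, rB) > mid else (lo, mid)
    return 0.5 * (lo + hi)

def potentials(r, rR, rB, gamma):
    a = limit_alpha(r, rR, rB)
    PB = rB * (1 - a) / (rB * (1 - a) + (1 - rB) * a)
    PR = rR * a / (rR * a + (1 - rR) * (1 - a))
    return (1 - (1 - r) * PB + r * (2 * gamma - 1) * PR,   # Phi_R
            1 - r * PR + (1 - r) * (2 * gamma - 1) * PB)   # Phi_B

# Scanning rho_R at fixed rho_B shows Phi_R increasing in rho_R for gamma > 1/2
# and decreasing for gamma < 1/2, matching the two regimes of the theorem.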
Thus, is an unstable Nash equilibrium andis a stable Nash equilibrium.andare monotone in the entries of the mixing matrix:*increases with increasingand decreases with increasing , and*increases with increasingand decreases with increasing .Observing that ∂/∂>0 and ∂/∂α>0, and using Lemma <ref>, yields the first part of the claim. Similarly, ∂/∂>0 and ∂/∂α<0 yield the second part. paperGiven that the new vertex at time t+1 is blue, the probabilitythat it attaches to a blue vertex satisfiespaper()=(1-α_t)+α_t()+(1-α_t)(), ()=(1-α_t)+α_t()+(1-α_t)(), hence()=-α_t/+α_t-2α_t. Similarly, when the new vertex at time t+1 is red, the probability that it attaches to a red vertex is ()=α_t/1-α_t +(1-2α_t).Let _t(x) and _t(x) be random variables denoting, respectively,the number of new red balls and cut edges at time t. We have d_t(R)=d_0(R)+ ∑_i=1^t _t(α_i-1) and ϕ(G_t)=ϕ(G_0)+∑_i=1^t _i(α_i-1). Define the potential function of the red player, denoted , as the expected increment of its utility at step t. Then paper= [U_t+1^γ()-U_t^γ() |α]=[γ_t+1(α)+(1-γ)_t+1(α)]= γ(1-(1-r)(α)+ r(α)) +(1-γ)(1-((1-r)(α)+ r(α)))=1-(1-r)(α)+r(2γ -1)(α) . paperSimilar considerations imply that the potential function of the blue player ispaper =  1 - r (α) + (1-r)(2γ -1)(α). =  1 - r (α) + (1-r)(2γ -1)(α). The theorem follows by inspecting the value of the potential functionsandfor every γ and using Lemma <ref> (for the monotonicity of (α) and (α) with the entries of the mixing matrix).paper § DISCUSSION paper This work investigates the assortative mixing phenomenon using a game theory perspective. Given some predefined rules related to the probability of connecting to other node, each player is allowed to determine its strategy in order to maximize its payoff.First we used a utility function that captures degree centrality, and showed that the expected sum of degrees and its limit are monotonically increasing with the homophily tendency. This directly implies that the homophily strategy is the unique Nash equilibrium. In this context, it will be interesting to use different centrality measures (such as PageRank, betweenness, etc.) and examine their influence on the equilibria.Next we enhanced the utility function to give positive payoff for both the degree and the cut.The results we have presented show a phase transition in the strategy as a function the weight γ. A small fluctuation in γ might cause extreme changes in the preference of the players, i.e., from perfect homophily to perfect heterophily (or vice versa); the intermediate strategies are never in equilibrium.This result is independent of the fraction of the sub-population size in the population. Generalizing the model to more than two sub-populations or reformulating the utility function may shape the strategy function differently. An interesting outcome of the above is the possibility that setting a rule (or a law) by a regulator to encourage cooperation between the two sub-populations will play as a remedial strategy to achieve equal opportunities. This observation is remarkable since, in contrast to the usual affirmative action approach, this attitude does not discriminate any individual, but at the same time, it promises a fair representation of the different sub-populations and even a way for breaking the glass ceiling <cit.> that some minority sub-populations suffer from. We leave this direction for further work.paper 10aumann1988endogenous R. Aumann and R. 
Myerson. Endogenous formation of links between players and coalitions: an application of the Shapley value. The Shapley Value, pages 175–191, 1988.

C. Avin, B. Keller, Z. Lotker, C. Mathieu, D. Peleg, and Y.-A. Pignolet. Homophily and the glass ceiling effect in social networks. In Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, pages 41–50. ACM, 2015.

A.-L. Barabási and R. Albert. Emergence of scaling in random networks. Science, 286(5439):509–512, 1999.

S. Basu, R. Pollack, and M.-F. Roy. Algorithms in Real Algebraic Geometry. Springer, 2005.

J. M. Bilbao. Cooperative Games on Combinatorial Structures, volume 26. Springer Science & Business Media, 2012.

Y. Bramoulle, S. Currarini, M. O. Jackson, P. Pin, and B. W. Rogers. Homophily and long-run integration in social networks. Journal of Economic Theory, 147(5):1754–1786, 2012.

H.-L. Chen and T. Roughgarden. Network design with weighted players. Theory of Computing Systems, 45(2):302–324, 2009.

S. Currarini, M. O. Jackson, and P. Pin. An economic model of friendship: Homophily, minorities, and segregation. Econometrica, 77(4):1003–1045, 2009.

A. Di Stefano, M. Scatà, A. La Corte, P. Liò, E. Catania, E. Guardo, and S. Pagano. Quantifying the role of homophily in human cooperation using multiplex evolutionary game theory. PLoS ONE, 10(10):e0140646, 2015.

F. Fu, M. A. Nowak, N. A. Christakis, and J. H. Fowler. The evolution of homophily. Scientific Reports, 2, 2012.

M. O. Jackson. A survey of network formation models: stability and efficiency. In Group Formation in Economics: Networks, Clubs, and Coalitions, pages 11–49, 2005.

M. O. Jackson. Social and Economic Networks. Princeton University Press, 2008.

M. O. Jackson and D. López-Pintado. Diffusion and contagion in networks with heterogeneous agents and homophily. Network Science, 1(01):49–67, 2013.

M. O. Jackson and A. Watts. The evolution of social and economic networks. Journal of Economic Theory, 106(2):265–295, 2002.

P. F. Lazarsfeld, R. K. Merton, et al. Friendship as a social process: A substantive and methodological analysis. Freedom and Control in Modern Society, 18(1):18–66, 1954.

M. McPherson, L. Smith-Lovin, and J. M. Cook. Birds of a feather: Homophily in social networks. Annual Review of Sociology, pages 415–444, 2001.

M. E. J. Newman. Mixing patterns in networks. Phys. Rev. E, 67:026126, 2003.

M. Shaked and J. G. Shanthikumar. Stochastic Orders. Springer Science & Business Media, 2007.

E. Tardos and T. Wexler. Network formation games and the potential function method. In Algorithmic Game Theory, pages 487–516, 2007.
^a Texas Tech University, Lubbock (TX), USA
^b INFN Sezione di Pisa, Italy
^c INFN Sezione di Cagliari, Monserrato (CA), Italy
^d University College London, UK
^e Korea University, Seoul, Korea
^f INFN Sezione di Pavia and Dipartimento di Fisica, Università di Pavia, Italy
^g Dipartimento di Fisica, Università di Roma "La Sapienza" and INFN Sezione di Roma, Italy
^h INFN Sezione di Pavia, Italy
^i Kirchhoff-Institut für Physik, Ruprecht-Karls-Universität Heidelberg, Germany
^j Iowa State University, Ames (IA), USA
^l Dipartimento di Fisica, Università della Calabria and INFN Cosenza, Italy
^m Kyungpook National University, Daegu, Korea
^† Deceased

Corresponding author. Email wigmans@ttu.edu, fax (+1) 806 742-1182.

In this paper, we describe measurements of the response functions of a fiber-based dual-readout calorimeter for pions, protons and multiparticle "jets" with energies in the range from 10 to 180 GeV. The calorimeter uses lead as absorber material and has a total mass of 1350 kg. It is complemented by leakage counters made of scintillating plastic, with a total mass of 500 kg. The effects of these leakage counters on the calorimeter performance are studied as well. In a separate section, we investigate and compare different methods to measure the energy resolution of a calorimeter. Using only the signals provided by the calorimeter, we demonstrate that our dual-readout calorimeter, calibrated with electrons, is able to reconstruct the energy of proton and pion beam particles to within a few percent at all energies. The fractional widths of the signal distributions for these particles (σ/E) scale with the beam energy as 30%/√(E), without any additional contributing terms.

PACS: 29.40.Ka, 29.40.Mc, 29.40.Vj
Keywords: Dual-readout calorimetry, Čerenkov light, optical fibers

§ INTRODUCTION

The performance of hadron calorimeters is typically strongly dominated, and negatively affected, by the effects of fluctuations in the electromagnetic (em) shower fraction, f_em. One approach to eliminate the effects of such fluctuations is to measure f_em for each event. It turns out that the Čerenkov mechanism provides unique opportunities in this respect. Calorimeters that use Čerenkov light as signal source are, for all practical purposes, only responding to the em fraction of hadronic showers <cit.>. By comparing the relative strengths of the signals representing the visible deposited energy and the Čerenkov light produced in the shower absorption process, the em shower fraction can be determined and the total shower energy can be reconstructed using the known e/h value(s) of the calorimeter[The ratio e/h represents the ratio of the average calorimeter signals per unit deposited energy from the em and non-em components of hadron showers. A calorimeter with e/h = 1 is said to be compensating, but in practice almost all calorimeters have e/h > 1.]. This is the essence of what has become known as dual-readout calorimetry.
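The combination of the two signals that underlies this reconstruction can be written compactly in code. In the sketch below (Python), S and C denote the scintillation and Čerenkov signals calibrated with electrons (so S = C = E for em showers); the (h/e) values and function names are illustrative placeholders of ours, not the measured RD52 constants or analysis code.

def dual_readout_energy(S, C, h_over_e_S=0.77, h_over_e_C=0.25):
    # S = E*[f_em + (h/e)_S*(1 - f_em)] and C = E*[f_em + (h/e)_C*(1 - f_em)];
    # eliminating f_em gives the standard dual-readout combination:
    chi = (1.0 - h_over_e_S) / (1.0 - h_over_e_C)
    return (S - chi * C) / (1.0 - chi)

def em_shower_fraction(S, C, h_over_e_S=0.77, h_over_e_C=0.25):
    # invert S/E = f_em + (h/e)_S*(1 - f_em) for the event's em fraction
    E = dual_readout_energy(S, C, h_over_e_S, h_over_e_C)
    return (S / E - h_over_e_S) / (1.0 - h_over_e_S)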
We are studying the properties of particle detectors of this type in the context of CERN's RD52 project <cit.>. In the dual-readout calorimeter discussed in this paper, signals are generated in scintillating fibers, which measure the deposited energy, and in clear plastic fibers, which measure the relativistic shower particles by means of the Čerenkov light they generate. A large number of such fibers are embedded in a metal absorber structure. This detector is longitudinally unsegmented; the fibers are oriented in approximately the same direction as the particles to be detected. In previous papers, we have focused on the electromagnetic performance of such a detector <cit.> and on its capability to identify the particles developing showers in it <cit.>. In this paper, we describe experiments in which the hadronic performance of this calorimeter was measured. Hadron showers require a very large volume to fully develop. The 1350 kg fiducial volume of the calorimeter used for our purpose absorbed, in practice, on average only ∼90% of the shower, depending on the energy of the showering particle. Therefore, fluctuations in (side) leakage formed a dominating contribution to the energy resolution. In order to get a handle on this contribution, the calorimeter was surrounded by a (rather crude) system of leakage counters. In our measurements, we also tried to distinguish between showers initiated by pions and by protons, using the calorimeter information. Our experimental program concentrated on two issues:
* To what extent can the very crude system of leakage counters that we had installed around the calorimeter measure these event-to-event fluctuations and improve the measured energy resolution?
* Can we separate pions and protons in the CERN SPS H8 beam, and measure the dual-readout calorimeter performance separately for these particles?

We also studied the performance for multi-particle "jets," produced in high-multiplicity interactions by the beam hadrons in an upstream target. In modern particle physics experiments, the detection of jets is very important. The multiparticle events we used for our studies are, of course, not the same as the QCD jets that originate from a fragmenting quark or gluon. Yet, for the purpose of calorimetry they are very useful, since they represent a collection of particles that enter the calorimeter simultaneously. The composition of this collection is unknown, but the total energy is known. In the absence of a jet test beam, this is a reasonable alternative.

In Section 2, the instruments and the experimental setup in which the measurements were carried out are described, as well as the calibration and data analysis methods that were used. Experimental results are presented in Section 3. In Section 4, we investigate and compare different methods to measure the energy resolution of this calorimeter. Conclusions from these studies are presented in Section 5.

§ EQUIPMENT AND MEASUREMENTS

§.§ Detectors and beam line

For these particular studies, which were carried out in October 2015, we used secondary or tertiary beams derived from the 400 GeV proton beam delivered by the CERN Super Proton Synchrotron. These particle beams were steered through the H8 line into the RD52 fiber calorimeter. Figure <ref> shows the experimental setup. The fiber calorimeter used for these studies is modular and uses lead as the absorber material.
Each of the nine modules is 2.5 m long (10 λ_int), has a cross section of 9.2 × 9.2 cm^2 and a fiducial mass of 150 kg. Each module consists of four towers (4.6×4.6×250 cm^3), and each tower contains 1024 plastic optical fibers (diameter 1.0 mm, equal numbers of scintillating and clear plastic fibers)[The scintillating fibers are of the SCSF-78 type, produced by Kuraray; the Čerenkov light is generated in PMMA-based SK40 fibers, produced by Mitsubishi.]. Each tower produces two signals, a scintillation signal and a Čerenkov signal, which are detected by separate PMTs[10-stage Hamamatsu R8900 and R8900-100. In order to limit the effects of self absorption on the signals, the PMTs detecting scintillation light are equipped with yellow filters.]. For this reason, this type of detector is also known as a DREAM (Dual-REAdout Method) calorimeter. The sampling fraction for minimum ionizing particles in this calorimeter, both for the scintillation and for the Čerenkov sampling structure, is 5.3%. Measurements of the radial shower profile showed that the showers initiated by 60 GeV π^- were, on average, contained at the level of ∼93% in this structure. Electromagnetic showers are contained to better than 99%, and shower leakage was thus not an issue for electrons and photons.

In order to detect the hadronic shower leakage, the calorimeter was surrounded by large slabs of plastic scintillator (50×50×10 cm^3, mass 25 kg). Twenty such counters were used in these tests. They can be seen in Figure <ref> on the top, the bottom and the right hand side of the box containing the calorimeter. The location of the leakage counters with respect to the fiber calorimeter is shown in Figure <ref>.

The experimental setup also contained a number of auxiliary detectors, which were intended to limit and define the effective size of the beam spot, to determine the identity of individual beam particles, and to measure their trajectory. Figure <ref> shows a schematic overview of the beam line, in which the positions of these auxiliary counters are indicated (not to scale):
* A set of three small scintillation counters provided the signals that were used to trigger the data acquisition system. These trigger counters were 2.5 mm thick; the area of overlap between the first two (T_1, T_2) was 4×4 cm^2. Downstream from these counters, a third scintillation counter (T_H) was installed. The latter had a hole with a radius of 10 mm in it. A (anti-)coincidence between the logic signals from these counters provided the trigger (T_1·T_2·T_H).
* The trajectories of individual beam particles could be reconstructed with the information provided by two small delay wire chambers (DC1, DC2). This system made it possible to determine the location of the impact point of 80 GeV beam particles at the calorimeter surface with a precision of about 1 mm.
* About 80 cm upstream of the calorimeter, a preshower detector (PSD) provided signals that could be used to remove electrons contaminating the hadron beams. This PSD consisted of a 5 mm thick lead plate, followed by a 5 mm thick plastic scintillator. Electrons started developing showers in this device, while muons and hadrons typically produced a signal characteristic of a minimum ionizing particle (mip) in the scintillator plate.
* For certain (high) energies, an interaction target (IT), consisting of 10 cm of plastic, followed by a 5 mm thick plastic scintillator, was installed behind the PSD.
This detector was used to create and select interactions in the plastic in which a significant number of secondaries were produced. The signal in the scintillator provided a means to select events with a certain (minimum) multiplicity.
* Downstream of the calorimeter, a Tail Catcher (TC) could also serve to help identify pions and muons. This Tail Catcher consisted of a simple 20×20 cm^2 scintillation counter. Electrons were fully absorbed in the calorimeter and thus did not create a signal in this detector, while muons produced a mip signal in it. Larger signals were typically caused by late showering hadrons.
* Further downstream of the calorimeter, behind an additional 8 λ_int worth of absorber, a 50×50 cm^2 scintillation counter (μ) served to identify muons that contaminated the particle beam.
* About 50 m upstream of the calorimeter, two Threshold Čerenkov counters (Č_1,2) provided signals that made it possible to identify the type of beam particle. These counters were filled with CO_2 gas at a pressure that was chosen depending on the beam energy. These counters were in practice used to separate pions from protons.

The calorimeter was mounted on a table that could be displaced both horizontally and vertically, and also rotated around the vertical axis. This allowed us to choose both the impact point and the angle of incidence of the beam particles.

§.§ Data acquisition

In order to minimize delays in the DAQ system, short, fast cables were used to transport the signals from the trigger counters to the counting room. All other signals were transported through standard RG-58 cables with (for timing purposes) appropriate lengths. In the counting room, signals from the Threshold Čerenkov counters, PSD, Interaction Target, Tail Catcher and muon counter were fed into charge ADCs. The signals from the wire chambers were fed into TDCs.

The data acquisition system used VME electronics. Two VME crates hosted all the needed readout and control boards. The signals from the auxiliary detectors (Threshold Čerenkov counters, PSD, Interaction Target, Tail Catcher, and Muon counter) were integrated and digitized with a sensitivity of 100 fC/count and a 12-bit dynamic range on a 32-channel CAEN V862AC module. The timing information of the tracking chambers was recorded with 1 ns resolution in a 16-channel CAEN V775N TDC, and was converted into (x,y) coordinates of the point where the beam particle traversed the chamber. Our readout scheme optimized the CPU utilization and the data taking efficiency using the bunch structure of the SPS accelerator cycle (which lasted between 36 and 54 s, depending on the various tasks of the accelerator complex), during which period beam particles were provided to our experiment by means of two extractions with a duration of 4.8 seconds each.

§.§ Experimental data, calibration and analysis methods

The measurements described in this paper were performed in the H8 beam line of the Super Proton Synchrotron at CERN. We used either secondary beams directly produced by the 400 GeV protons from the accelerator on a target shared by several beam lines, or tertiary beams derived from these secondary ones. For the experiments described in this paper, a secondary beam of 20 GeV positive particles was used to calibrate all calorimeter towers. This beam consisted almost exclusively of positrons. Secondary beams of 60 GeV and 180 GeV were used to measure the response functions at a variety of energies.
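As an aside, the conversion from TDC data to chamber coordinates mentioned above is linear in the arrival-time difference of the pulses at the two ends of the delay line of such a chamber. The short Python sketch below illustrates the standard delay-line recipe; the slope and offset are hypothetical calibration constants, since the actual DC1/DC2 constants are not quoted in this paper.

    def chamber_coordinate(t_a, t_b, slope=0.2, offset=0.0):
        """Delay-wire-chamber coordinate (mm) from the arrival times (ns)
        of the delayed pulses at the two ends of the delay line.
        slope (mm/ns) and offset (mm) are hypothetical calibration constants."""
        return slope * (t_a - t_b) + offset

    # Each chamber provides one such time pair per projection, giving the
    # (x, y) crossing point of the beam particle (placeholder TDC values):
    x = chamber_coordinate(512.0, 498.0)
    y = chamber_coordinate(505.0, 501.0)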
Low energy beams were derived as tertiaries from the 60 GeV secondary beam, and beams with energies above 60 GeV were derived from the 180 GeV secondary beam. The latter was also used to provide 180 GeV μ^+ particles (obtained by blocking all other particles with absorbers), which were used to calibrate the leakage counters.

For the calibration runs, beam particles were steered into the center of each of the 36 individual calorimeter towers (see the insert of Figure <ref>), or through the central plane of the leakage counters. For each run, 10 000 events were collected, while 10% randomly triggered events provided pedestal information. The information from the wire chambers was used to select events in which the particles hit the calorimeter within a beam spot with a diameter of 10 mm. The HV settings were chosen such that the average calorimeter signal corresponded to several hundred ADC counts. The calibration runs were used to determine the energy equivalent of one ADC count for all individual signals: the 36 scintillation signals, the 36 Čerenkov signals and the 20 signals from the leakage counters. These calibration constants formed the basis for the energy determination of the hadronic events. The 72 calibration constants of the calorimeter towers were determined with 20 GeV positrons, which deposited 93% of their energy in one individual tower. The calibration constants of the leakage counters were equalized using 180 GeV muons which traversed the 50 cm long central plane of each counter. The overall scale of the leakage signals was set with 60 GeV pions sent into the center of the calorimeter. The total signal from all leakage counters combined was set to 3.84 GeV, representing 6.4% of the particle energy.

In the analysis of the hadronic calorimeter performance described in the following sections, the scintillation signal is the sum of the signals measured in the scintillating fibers embedded in the calorimeter and the signals from the leakage counters, both expressed in GeV, using the scale derived from the electron calibration. The electron scale was also used for the hadron signals from the Čerenkov fibers.

Dedicated hadron runs were carried out for the following energies and polarities: +20, +40, ±60, +80, +100, and +125 GeV. Dedicated multiparticle (“jet”) runs were performed at +40, +60, +100 and +125 GeV. For each event selected by the trigger counters, the ADC data from the auxiliary detectors and the TDC data from the wire chambers were recorded. Off-line, the beam chamber information was used to select events within a small beam spot (typically with a radius < 5 mm). The information provided by the auxiliary detectors was used to identify and select the desired particles.

In order to select hadron event samples, the electrons and muons had to be removed from the collected events. This had to be done in a way that would not bias the resulting hadron event samples, and therefore had to be based entirely on the auxiliary detectors. Electrons (or positrons) were identified as particles that produced a signal in the PSD that was larger than ∼200 ADC counts above pedestal, which corresponds to the combined signals produced by two minimum ionizing particles (mips) traversing this detector. An additional requirement was that no signals incompatible with electronic noise were produced in the muon counter. Muons were identified as particles that produced a signal incompatible with electronic noise in the muon counter.
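Schematically, the calibration described above amounts to one constant per channel, fixed by the 20 GeV positron runs in which 93% of the energy is deposited in a single tower. A minimal Python sketch (the array names are placeholders):

    import numpy as np

    def tower_calibration(adc_counts, pedestal, e_beam=20.0, containment=0.93):
        """GeV per ADC count for one tower, from a positron calibration run:
        the mean pedestal-subtracted signal corresponds to 0.93 * 20 GeV."""
        return e_beam * containment / np.mean(adc_counts - pedestal)

    # Applying it later: energy = constant * (adc - pedestal),
    # summed over the 36 towers (and, for hadrons, the leakage counters).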
At low energies, a significant fraction of the muons did not traverse that counter because of multiple scattering (or absorption) in the upstream material. In that case, particles were also identified as muons if they produced signals in the PSD or IT, as well as in the Tail Catcher, that were compatible with a mip, and no signal incompatible with the pedestal in the sum of all leakage counters.

Protons were defined as particles that produced a signal compatible with the pedestal in both upstream Threshold Čerenkov counters. Pions were required to produce a signal in at least one of these counters that was significantly (at least 3σ) above pedestal. Table <ref> lists the percentages of protons and pions in the hadron event samples determined on the basis of this criterion, as well as the percentage of contaminating electrons and muons. The counters were not fully efficient, and therefore the resulting proton/pion separation was not perfect, especially at the highest energies. Figure <ref> shows the signal distributions measured for beams of +40 GeV and +100 GeV. In order to determine the inefficiency of the Threshold Čerenkov counters, we compared the signal distributions measured for beams with negative and positive polarity at the same energy. This comparison assumes that the production of antiprotons on the production target is negligible. Table <ref> shows the fraction of hadrons (after electrons and muons have been removed from the event samples) that produced signals compatible with the pedestal in both Threshold Čerenkov counters, for beams of 40, 60 and 80 GeV. Based on these considerations, we estimated the purity of the proton sample. The possible contamination of the proton sample by misidentified pions is listed in the last column of Table <ref>, as a percentage of the total number of events. The negative polarity hadrons were all considered pions. No attempts were made to measure the contribution of kaons to the various event samples. Such contributions are estimated (from beam simulations) to be at the few percent level. Since kaons would also generate pedestal events in the Threshold Čerenkov counters, the percentage of protons listed in Table 1 is a maximum.

No distinction was made between protons and pions for the measurements with the Interaction Target. Interacting hadrons were selected by means of a cut in the signal from the scintillation plate connected to the downstream end of this plastic target. An interacting hadron was defined as an event in which a signal compatible with a mip was produced in the PSD, combined with a signal larger than a certain minimum value (equivalent to 6 mips) in the IT.

§ EXPERIMENTAL RESULTS

§.§ The Dual-Readout Method

The Dual-Readout approach for measuring hadron showers exploits the fact that the energy carried by the non-em shower component of hadron showers is mostly deposited by non-relativistic shower particles (protons), and therefore does not contribute to the signals of a Čerenkov calorimeter. By measuring simultaneously the visible deposited energy (dE/dx) and the Čerenkov light generated in the shower absorption process, one can determine f_em event by event and thus eliminate (the effects of) its fluctuations. The correct hadron energy can be determined from a combination of both signals. This principle was first experimentally demonstrated by the DREAM Collaboration <cit.>, with a Cu/fiber calorimeter. Scintillating fibers measured dE/dx, and quartz fibers measured the Čerenkov light.
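To illustrate how the event selection described above acts on the auxiliary-detector data, here is a schematic Python version of the cuts. The ∼200-count PSD threshold and the 3σ Čerenkov requirement are taken from the text; the noise threshold and the array names are placeholders.

    import numpy as np

    def classify_events(psd, muon, c1, c2, ped, sig, noise=10.0):
        """Schematic particle identification, following the text:
        electron: PSD signal > ~200 ADC counts above pedestal, quiet muon counter;
        muon:     signal incompatible with noise in the muon counter;
        proton:   both threshold Cerenkov counters compatible with pedestal;
        pion:     >= 3 sigma above pedestal in at least one Cerenkov counter."""
        is_muon = muon > noise
        is_elec = (psd > 200.0) & ~is_muon
        quiet = (c1 < ped + 3.0 * sig) & (c2 < ped + 3.0 * sig)
        is_hadron = ~is_elec & ~is_muon
        return is_elec, is_muon, is_hadron & quiet, is_hadron & ~quiet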
The response ratio of these two signals was related to f_em as

C/S = [f_em + 0.21 (1 - f_em)] / [f_em + 0.77 (1 - f_em)]

where 0.21 and 0.77 represent the h/e ratios of the Čerenkov and scintillator calorimeter structures, respectively. The hadron energy could be derived directly from the two signals <cit.>:

E = (S - χC) / (1 - χ),   with   χ = [1 - (h/e)_S] / [1 - (h/e)_C] ≈ 0.3

The e/h values, and thus the value of the parameter χ, are a bit different when lead absorber is used.

§.§ Impact of the leakage counters

In order to study the effectiveness of the described leakage counters, we first studied the correlation between the signals from these counters and the scintillation signals from the fiber calorimeter. The result, shown in Figure <ref> for 60 GeV π^-, indicates that there is indeed a good anti-correlation between the average signals. However, the resolution improvement depends of course on the event-by-event anti-correlation. The counters turned out to be indeed somewhat effective in that respect. An extreme example of this effectiveness is shown in Figure <ref>, in which the signal distribution for all events (Figure <ref>a) is compared with the signal distribution for the events in which no shower leakage was observed, i.e., the (small fraction of the) events that were entirely contained in the fiber calorimeter. The latter distribution exhibits an energy resolution that is almost a factor of two better, and is in addition well described by a Gaussian function. These signal distributions were obtained with the standard dual-readout procedure (Section 3.1).

The signal distribution for all 60 GeV π^- events shows deviations from a Gaussian shape. The type of deviations indicates that effects of light attenuation in the (scintillating) fibers are responsible for this <cit.>. The response of the fibers is not uniform in depth. Because of light attenuation, the response gradually increases as the light is produced closer to the PMTs, deeper inside the calorimeter. The convolution of the attenuation curve with the longitudinal light production profile in hadron showers leads to a response function with the measured characteristics. The steeper the light attenuation curve, the more pronounced these effects become. It has been demonstrated that the effective light attenuation length increases with the distance to the light detector <cit.>. An important feature responsible for this is the “cladding light,” which is attenuated much more strongly than light trapped inside the fiber core. Therefore, this cladding light contributes predominantly to the signals from energy deposited close to the light detector. By making the upstream end of the fibers reflective, the attenuation curve becomes flatter, increasingly so as the distance to the light detector increases. In this way, effective attenuation lengths in excess of 8 m were obtained for the fibers in the SPACAL calorimeter <cit.>. However, in our calorimeter, the open end of the fibers was not made reflective, and the attenuation length is therefore shorter. The effect of this is an additional contribution to the hadronic energy resolution which, in first approximation, is energy independent. This contribution, as well as the asymmetry of the response function, increases as the light is produced closer to the PMTs, deeper inside the calorimeter.
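To make the arithmetic of the two relations above explicit, the following Python sketch inverts the C/S ratio for f_em and applies the dual-readout energy combination. The h/e values 0.21 and 0.77 and χ ≈ 0.3 are the Cu/fiber numbers quoted above; the example signals are made-up illustrations, not measured data.

    import numpy as np

    def em_fraction(C, S, he_C=0.21, he_S=0.77):
        """Invert C/S = [f + he_C (1-f)] / [f + he_S (1-f)] for f = f_em."""
        r = C / S
        return (he_C - r * he_S) / (r * (1.0 - he_S) - (1.0 - he_C))

    def dual_readout_energy(S, C, chi=0.3):
        """E = (S - chi * C) / (1 - chi)."""
        return (S - chi * C) / (1.0 - chi)

    S, C = 55.0, 45.0  # illustrative 60 GeV event, em GeV scale
    print(em_fraction(C, S))          # -> ~0.70
    print(dual_readout_energy(S, C))  # -> ~59.3 GeV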
To investigate these phenomena, we separated the events into sub-samples, based on the fraction of the total leakage signal that was measured in ring 1 of the leakage counters (see Figure <ref>). Figure <ref>a shows the distribution of that fraction. In general, we may assume that a small fraction indicates that the scintillation light is, on average, produced deep inside the calorimeter, in the region where the light attenuation curve is steeper than for light produced close to the calorimeter's front face <cit.>. Figure <ref>b shows that the asymmetry is indeed predominantly observed for events in which that fraction is small (< 0.2), i.e., events in which most of the energy was deposited deep inside the calorimeter, where the effects of light attenuation are largest. The signal distributions for the other events are much more Gaussian (Figure <ref>c). The average calorimeter signal is also somewhat smaller for these events, which is consistent with larger attenuation losses due to the longer path length of the light on its way to the PMT. These results indicate that light attenuation in the scintillating fibers was indeed a significant factor contributing to the hadronic energy resolution of this calorimeter. By comparing Figures <ref>a and <ref>b, one may conclude that leakage fluctuations contributed the rest.

In order to investigate how effective the signals from the leakage counters were in reducing the effects of side leakage on the energy resolution, we compared the signal distributions for the 60 GeV pions in which the leakage signals were added event by event to those from the scintillating fibers (Figure <ref>a) with the signal distributions in which the fiber scintillation signals were all multiplied by a constant factor, representing the average leakage fraction (Figure <ref>b). The leakage counters did indeed improve the hadronic energy resolution significantly, albeit not as much as one might expect from a sufficiently enlarged fiber calorimeter. If we take the energy resolution of 6.4% (Figure <ref>b) as the value for fully contained showers[GEANT4 based Monte Carlo simulations gave similar resolution values <cit.>.], then the contribution of leakage fluctuations to the resolution shown in Figure <ref> was reduced from 11.1% in the absence of any leakage detection (Figure <ref>b) to 7.8% for the imperfect leakage detector used in these studies (Figure <ref>a).

We have shown in the past that the light attenuation effects can be eliminated event by event through the time structure of the signals <cit.>, or by placing the calorimeter at a small (∼1°) angle with the beam line <cit.>. That information was not available during these tests. Instead, we have chosen to limit the analyses to event samples in which more than 20% of the leakage signal was recorded in the first ring of the leakage counters. Figure <ref>c shows that this choice effectively eliminates events in which the hadrons start showering deep inside the calorimeter.

§.§ Proton/pion differences

A second issue we wanted to investigate with the data taken during the 2015 test beam period concerned the separation of pions and protons using only calorimeter information. To that end, we used the two threshold Čerenkov counters that were installed about 50 m upstream of the calorimeter setup. These counters were filled with CO_2 gas at a pressure that was chosen depending on the beam energy. Protons were defined as particles that produced a signal compatible with the pedestal in both Čerenkov counters.
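The quoted reduction of the leakage contribution from 11.1% to 7.8% follows from a quadrature decomposition of the measured widths, with the 6.4% of the fully contained events taken as the intrinsic resolution. A minimal sketch; the total widths of ≈12.8% and ≈10.1% are inferred from the quoted contributions, not read off the figures.

    import numpy as np

    def leakage_contribution(sigma_total, sigma_intrinsic=0.064):
        """Assume sigma_total^2 = sigma_intrinsic^2 + sigma_leakage^2."""
        return np.sqrt(sigma_total**2 - sigma_intrinsic**2)

    print(leakage_contribution(0.128))  # -> ~0.111, no leakage detection
    print(leakage_contribution(0.101))  # -> ~0.078, with leakage counters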
Pions were required to produce a signal significantly (at least 3σ) above pedestal in at least one of these counters (see Figure <ref>).

In 1998, Akchurin and coworkers showed that there are significant differences between the shower development of high-energy protons and pions, which have measurable consequences for the signals from non-compensating calorimeters <cit.>. In prototype studies of the Forward Calorimeter for CMS, which is based on the detection of Čerenkov light, they found that the signals from pions were typically ∼10% larger than those from protons of the same energy. On the other hand, event-to-event fluctuations in these signals were ∼10% smaller for protons, and the signal distributions were also more symmetric for protons. These differences are a consequence of the requirement of baryon number conservation, which prohibits a π^0 from being the leading particle in proton induced showers. Our data are in agreement with these findings. Figure <ref> shows signal distributions for the Čerenkov signals from 80 GeV pions (<ref>a) and 80 GeV protons (<ref>b), respectively. Indeed, the proton signals are, on average, ∼10% smaller than the pion ones. On the other hand, the signal distribution for the protons is more symmetric and also somewhat narrower than the pion one. Interestingly, a comparison between Figures <ref>c and <ref>d shows that application of the dual-readout method (Equations <ref>, <ref>) largely eliminated the differences between these two types of showers.

§.§ The calorimeter performance for single hadrons

In this section, we present results on the energy resolution measured for single hadrons of different energies. For the positive polarity, separate samples of protons and pions were used. No attempts were made to isolate the kaons, whose showers should also be different from pion ones in terms of the em shower component. Strangeness conservation prevents the production of leading π^0s in kaon induced showers, and therefore the characteristics of the em shower component (average value, event-to-event fluctuations in f_em) are probably similar to those in proton induced showers.

For every event, two signals were available, a Čerenkov signal and a scintillation signal. The particle energy was found by combining these signals as in Equation <ref>, using a parameter value χ = 0.45. Both the reconstructed energy and the quality of the Gaussian fit are sensitive to the value of this parameter, and the chosen value represents the result of an optimization procedure in which χ was varied in small steps. The optimal value is somewhat larger than for the copper-based DREAM calorimeter, as a result of differences between the e/h values for lead and copper, both for the scintillation and the Čerenkov sampling structure <cit.>. We have used a value χ = 0.45 for our lead-based calorimeter throughout this analysis. The procedures to obtain signal distributions for pions and protons were identical over the entire energy range studied here. As an example, Figure <ref> shows the distributions for the two individual signals, as well as the distribution of the dual-readout signals, combined according to Equation <ref>, for 20 GeV pions. These distributions illustrate the benefits of the dual-readout method. Whereas the C and S distributions are rather wide and asymmetric, the dual-readout signal distribution is well described by a Gaussian fit. The results of this study are summarized in Figures <ref> and <ref>.
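The optimization of χ mentioned above can be mimicked numerically: scan χ in small steps and keep the value that minimizes the fractional width of the combined-signal distribution. A sketch, with S and C as placeholder signal arrays:

    import numpy as np

    def optimal_chi(S, C, chi_grid=np.arange(0.30, 0.601, 0.01)):
        """Return the chi minimizing the fractional width of
        E = (S - chi * C) / (1 - chi)."""
        def frac_width(chi):
            E = (S - chi * C) / (1.0 - chi)
            return np.std(E) / np.mean(E)
        widths = [frac_width(chi) for chi in chi_grid]
        return chi_grid[int(np.argmin(widths))]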
Figure <ref> shows the average signal per unit deposited energy (the calorimeter response) as a function of energy, for pions with energies ranging from 20 to 125 GeV. Results are given separately for the Čerenkov signals and for the dual-readout signals. Whereas the Čerenkov response increased by more than 50% over this energy range, the dual-readout response was constant to within a few percent, except for the lowest energy. The results for protons were essentially the same.

The hadronic energy resolution is shown as a function of energy in Figure <ref>. The horizontal scale is proportional to E^-1/2, which means that the data points should be located on a straight line through the bottom right corner of this plot if the resolution is only determined by fluctuations that are governed by Poisson statistics. Any deviation from such a line means that non-stochastic effects play a significant role. The experimental data show that the energy resolution for pions is well described by stochastic fluctuations alone when the dual-readout signals are considered[Our Monte Carlo simulations have shown that this energy resolution is dominated by fluctuations in lateral shower leakage <cit.>.]. On the other hand, the energy resolution measured on the basis of the Čerenkov signals exhibits substantial deviations from E^-1/2 scaling. The straight line fit through the experimental data points suggests a 5% resolution at infinite energy. This is a consequence of the fact that the event-to-event fluctuations in the em shower fraction (f_em) are not stochastic.

§ THE ENERGY RESOLUTION OF A CALORIMETER AND HOW TO DETERMINE IT

§.§ Introduction

In this section, we look in detail at the meaning of the term energy resolution and discuss and compare various ways in which it is determined in practice. Strictly speaking, the energy resolution of a calorimeter describes the precision with which the energy of an unknown object that is absorbed in it can be determined. In practice, this important characteristic is usually measured with a beam of mono-energetic particles produced by an accelerator. This beam is sent into the detector. The (relative) energy resolution is deemed to be represented by the (fractional) width of the distribution of the signals produced by the calorimeter in response to these particles. Two important caveats should be mentioned in this context:
* Since all particles typically enter the calorimeter in the same small area defined by the beam spot, the results are strictly speaking only valid for this particular part of the calorimeter. If the average signal varies with the impact point of the particles, which is often the case, then the real energy resolution is underestimated in this procedure.
* The width of the signal distribution measured in this way is only indicative for the energy resolution if the average calorimeter signal indeed represents the correct energy of the beam particles. This condition may not be met, for example, when the calorimeter is intrinsically non-linear and has been calibrated at a different energy than that of the beam particles. It is also not met when the average signals are different for different types of particles with the same energy (π, K, p) and the beam composition is unknown.
However, even when we assume that these caveats do not play a role for the calorimeter in question, one should realize that the procedure chosen to determine the energy resolution relies on the signal distribution from an ensemble of particles with the same energy.
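The statement about the straight-line behavior in the E^-1/2 plot amounts to fitting σ/E = a·E^-1/2 + c and checking whether the constant term c is compatible with zero (dual-readout signals) or with ≈5% (Čerenkov signals only). A sketch, with placeholder data points chosen to follow 30%/√(E):

    import numpy as np

    E = np.array([20.0, 40.0, 60.0, 80.0, 125.0])        # GeV
    res = np.array([0.067, 0.047, 0.039, 0.034, 0.027])  # sigma/E (placeholders)

    # Least-squares fit of res = a * E^{-1/2} + c
    a, c = np.polyfit(E**-0.5, res, 1)
    print(a, c)  # a: stochastic term (GeV^{1/2}), c: constant term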
This is a crucial aspect of methods that attempt to improve the energy resolution by techniques known as offline compensation, or Particle Flow Analysis, where the width of a signal distribution is reduced with an iterative procedure, in which calibration constants of the various calorimeter sections that contribute to the signals are varied until an optimal result is obtained.

The procedure used to determine the energy resolution of the RD52 calorimeters, as described in the previous section, does not rely on the availability of an ensemble of events created by particles of the same energy. The particle energy is determined with a simple formula (<ref>), which combines the values of two signals. The energy resolution is measured by comparing that calculated energy with the true energy of the particle that created the event. The availability of an ensemble of mono-energetic particles is not essential in this case. However, if measuring the energy resolution is considered equivalent to measuring the width of the signal distribution for an ensemble of mono-energetic beam particles, then the dual-readout method also offers an alternative approach, described below. This approach leads to resolutions that are considerably better than the ones mentioned in the previous section.

§.§ The rotation method for single hadrons

Figure <ref>a shows a scatter plot of the Čerenkov signals versus the scintillation signals measured with this detector for 60 GeV pions. The signals from the leakage counters were added to those from the scintillating fibers, using the fact that the measured shower profile indicated that the side leakage at this energy was, on average, 6.4%. The energy scale for both the Čerenkov and the scintillation signals is given in units of GeV, derived from the calibration of these signals with electron showers. This scatter plot shows the data points located on a locus, clustered around a line that intersects the C/S = 1 line at the beam energy of 60 GeV. This is of course to be expected. In first approximation, the Čerenkov fibers only produced signals generated by the electromagnetic components of the hadron showers, predominantly π^0s. The larger the em shower fraction, the larger the C/S signal ratio. Events in which (almost) the entire hadronic energy was deposited in the form of em shower components thus produced signals that were very similar to those from 60 GeV electrons and are, therefore, represented by data points located near (60,60) in this scatter plot. The fact that the data points cluster around a straight line in this plot is in agreement with Groom's assessment of the fundamental aspects of dual-readout calorimetry <cit.>.

We can now rotate the scatter plot over the angle θ around this intersection point,

S^' = S cosθ + C sinθ,   C^' = -S sinθ + C cosθ,

and the result is shown in Figure <ref>b, for θ = 30°. The projection of this rotated scatter plot on the x-axis is shown in Figure <ref>c. This signal distribution is well described by a Gaussian function with a central value of 61.0 GeV and a relative width, σ/E, of 3.9%. This corresponds to 30%/√(E). The narrowness of this distribution reflects the clustering of the data points around the axis of the locus in Figure <ref>a.

We have applied exactly the same procedure for data taken at +125 GeV, +80 GeV, +40 GeV and +20 GeV, and obtained similar results. In addition, the use of positive polarity beams allowed us to separate the data into proton and π^+ samples.
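Numerically, the rotation method is a plain two-dimensional rotation of each (S, C) pair about the point where the locus axis crosses the C = S line, followed by a projection onto the x axis. A minimal numpy sketch, with 60 GeV and θ = 30° as in the text; the signal arrays themselves are placeholders:

    import numpy as np

    def rotate_sc(S, C, e0=60.0, theta_deg=30.0):
        """Rotate the (S, C) scatter plot by theta around (e0, e0)."""
        t = np.radians(theta_deg)
        S_rot = e0 + (S - e0) * np.cos(t) + (C - e0) * np.sin(t)
        C_rot = e0 - (S - e0) * np.sin(t) + (C - e0) * np.cos(t)
        return S_rot, C_rot

    # The projection on the x axis, np.histogram(rotate_sc(S, C)[0], ...),
    # gives the narrow, Gaussian energy distribution discussed in the text.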
Figure <ref> shows the Čerenkov versus scintillation scatter plots for the 80 GeV π^+ (Figure <ref>a) and proton (Figure <ref>c) signals. These plots show a significant difference between the pion and proton signals. The average Čerenkov signal is about 10% larger for the pions than for the protons, a consequence of the absence of leading π^0s in the proton showers. However, using the intersection of the axis of the locus and the C/S = 1 point as the center of rotation, and the same rotation angle (30°) as for 60 GeV, the resulting signal distributions had about the same average value: 80.7 GeV for the pions (Figure <ref>b) and 80.4 GeV for the protons (Figure <ref>d). The widths of both distributions were also about the same: 2.60 GeV for pions, 2.69 GeV for protons. Regardless of the differences between the production of π^0s (and thus of Čerenkov light) in these two types of showers, the signal distributions obtained after the dual-readout procedure applied here were thus practically indistinguishable.

We have applied exactly the same procedure for the 20 GeV, 40 GeV and the 125 GeV particles, with very similar results. Also here, the average Čerenkov signals in the raw data were significantly smaller for protons than for pions. However, after applying the same rotation procedure as for the 60 and 80 GeV data (always using the same rotation angle, θ = 30°), the resulting signal distributions were centered around approximately the same values, and also the relative widths of these distributions were approximately the same. The fact that the rotation angle used to achieve these results is independent of the particle type and the energy is consistent with Groom's observation that this angle only depends on the energy independent value of the χ parameter defined in Equation <ref> <cit.>. The results are summarized in Table <ref>, which lists for each type of particle the average value of the measured Čerenkov signals, the average signal after application of the dual-readout rotation method, the fractional energy resolution (σ/E) and the fractional energy resolution multiplied with √(E). All signal values are expressed in em GeV, the energy scale derived from the calibration with electron showers.

These results exhibit some very important features:
* The calorimeter is very linear, both for pion and for proton detection. The beam energy is correctly reconstructed at all energies within a few percent, using the energy scale for electrons, which were used to calibrate the signals. Figure <ref> shows the calorimeter response to protons and pions, i.e., the average signal per unit deposited energy, as a function of energy. Variations of ±1% about the average value are indicated by the shaded band. The vertical scale is normalized to the electron response. The hadron signals are thus a few percent larger than those for em showers of the same energy.
* The reconstructed signal distributions are very narrow, narrower than those reported by any other detector we know of.
* The reconstructed signal distributions are very well described by Gaussian functions. This is illustrated in Figure <ref>, which shows signal distributions for hadrons at the low and high end of the spectrum of particles studied here. The normalized χ^2 values varied between 1.02 and 2.27 for all particles listed in Table <ref>.
* The fractional width of the reconstructed signal distribution also scales very well as expected for an energy resolution dominated by Poissonian fluctuations.
Over the full energy range of 20 - 125 GeV we find: σ/E = (30 ± 2)%/√(E). This result is represented by the straight line in Figure <ref>, which shows the experimental data points, separately for protons and pions, as a function of the beam energy.

§.§ The rotation method for multiparticle events

This method was used with the same rotation angle (θ = 30°) for multiparticle events, samples of which were available for beam energies of +40, +60, +100 and +125 GeV. During these dedicated runs, the Interaction Target was installed in the beam line (see Figure <ref>). Events were selected by requiring that the beam hadrons produced a signal compatible with a mip in the upstream PSD and a signal of at least 6 mips in the downstream scintillation counter. No distinction was made between protons and pions for this analysis. Otherwise, the conditions were identical to the ones used for the single-hadron analysis.

Figure <ref> shows an example of the signal distribution for 125 GeV multiparticle events obtained with the rotation method. This distribution shows similar features as those for single hadrons (Figure <ref>): a rather narrow distribution, centered at approximately the correct (energy) value, well described by a Gaussian function. However, there are also some differences, which become more obvious when we look at the results for all energies for which this analysis was carried out. These are listed in Table <ref>, and shown graphically in Figure <ref>. It turns out that the multiparticle signal distributions are clearly wider than those for single hadrons. However, in both cases, the fractional width scales with E^-1/2, without any significant deviations: 53%/√(E) for “jets,” versus 30%/√(E) for single hadrons (Figure <ref>b). This indicates that only stochastic fluctuations contribute to this width. The reconstructed energies are also somewhat lower in the case of the multiparticle events, more so at low energy (Figure <ref>a). Very substantial differences are observed in the size of the Čerenkov component, which is on average considerably smaller for the multiparticle events.

These features can be understood by realizing that the primary interaction of the beam particles took place at a distance of about 75 cm upstream of the calorimeter. Low-energy secondaries produced in these interactions may have traveled at such large angles with the beam line that they physically missed the calorimeter, as well as the leakage counters surrounding the calorimeter. The effect of that is larger when the energy of the incoming beam particle is smaller. The increased side leakage is probably also the main factor responsible for the increased width of the signal distribution. The difference in the strength of the Čerenkov component most likely reflects the fact that the average energy fraction carried by the em component in hadronic showers increases with energy. Therefore, if the energy of the incoming beam particle is split between at least six secondaries (our trigger condition for multiparticle events), the total em energy fraction is likely to be smaller than when the beam particle enters the calorimeter and deposits its entire energy there in the form of a single hadronic shower.

§.§ Discussion

Notice that we have not used any knowledge about the energy of the beam particles in the rotation procedure described in the previous subsections. The coordinates of the rotation center were chosen on the basis of the equality of the hadronic Čerenkov and scintillation signals.
This implies that the hadronic response at that point must be equal to that for electrons, which was used to set the energy scale for both types of calorimeter signals. The described method has thus allowed us to measure the energy of the beam particles with great precision. The average beam energy has been correctly reproduced within a few percent for all energies studied, the fractional width of the signal distribution scaled with E^-1/2 and, most interestingly, the dual-readout signal distributions were found to be essentially identical for protons and pions, despite the substantial differences between the signal distributions for these particles measured in the scintillation or Čerenkov channels. The latter aspect is a unique feature of dual-readout calorimetry. No other calorimeter we know of is capable of this. ATLAS has reported significant differences between the signal distributions of protons and pions <cit.>, but their “offline compensation” methods required prior knowledge of the particle type to eliminate these differences.

Yet, while we have managed to obtain very narrow signal distributions for the beam particles using only the calorimeter information, we don't think it is correct to interpret the relative width of these distributions as a measure for the precision with which the energy of an arbitrary particle absorbed in this calorimeter may be determined. The determination of the coordinates of the rotation point, and thus the energy scale of the signals, relied on the availability of an ensemble of events obtained for particles of the same energy. In practice, however, one is only dealing with one event, of unknown energy, and the described procedure can thus not be used in that case.

The DREAM Collaboration has developed a procedure to determine the energy of an unknown particle showering in the dual-readout calorimeter that is not affected by this problem. In this procedure, described in Section 3.1, the em shower fraction (f_em) of the hadronic shower is derived from the ratio of the Čerenkov and scintillation signals. Using the known e/h values of the two calorimeter structures, the measured signals can then be converted to the em energy scale (f_em = 1). The energy resolutions obtained with this method are worse than the ones given in this section, although it should be mentioned that they are dominated by incomplete shower containment and the associated leakage fluctuations, and are likely to improve considerably for detectors that are sufficiently large <cit.>. However, the same is probably true for the measurements of which the results are shown in Figure <ref>. Figure <ref> graphically illustrates the difference between the values of the energy resolution obtained with the two methods discussed here. The precision of the energy measurement is represented by the arrows in the two diagrams.

The message we want to convey in this section is that one should not confuse the precision of the energy determination of a given event based on calorimeter signals alone with the width of a signal distribution obtained in a testbeam, since the latter is typically based on additional information that is not available in practice. In the example described above, this additional information derived from the fact that a large number of events generated by particles of the same energy were available. In other cases, additional information may be derived from knowledge of the particle energy.
This is especially true for calorimeters whose energy scale depends on “offline compensation,” or other techniques intended to minimize the total width of the signal distribution from a detector system consisting of several longitudinal segments. Such techniques rely on calibration constants whose values depend on the energy, on the type of showering particle, and sometimes also on the ratios of the signals from the different calorimeter sections.

§ CONCLUSIONS

We have studied the hadronic performance of a lead-based dual-readout fiber calorimeter with beams of pions and protons of different energies, and with multiparticle events created by upstream interactions of these beam particles in a dedicated target. The assessment of the performance characteristics, and thus of the potential possibilities of this type of detector, was limited by the fact that the calorimeter was too small and used lead as absorber material. As has been pointed out elsewhere <cit.>, a lower-Z absorber material such as copper would be much more suitable for this type of detector. However, we have not yet managed to identify a low-cost technique for mass production of the complicated absorber structure out of copper. On the other hand, lead could be extruded into the desired shape.

We have demonstrated that the hadronic energy resolution of the tested calorimeter was dominated by fluctuations in lateral shower leakage. We have tried to mitigate these effects with a crude and rather non-hermetic system of leakage counters surrounding the calorimeter. This certainly improved the energy resolution significantly, but not nearly enough to eliminate the leakage effects. The effects of leakage on the energy resolution became clear by selecting events in which no measurable leakage occurred. For these events, the measured resolution was comparable to that of the best compensating calorimeters ever built.

Similar performance was achieved with an analysis method in which we made use of the availability of an ensemble of events caused by particles of the same energy. The availability of two signals that provided complementary information about the showers made it possible to determine the energy of the particles, independent of any additional information. A simple rotation procedure then led to signal distributions with all the characteristics of an ideal calorimeter: signal linearity, Gaussian response functions with a very narrow width that scaled with E^-1/2, and the same response for pions and protons, whose responses differed substantially when measured in the scintillation or Čerenkov channels. With the exception of the energy resolution, similarly good performance was obtained with the standard dual-readout method, which can be applied for individual events. The width of the signal distribution which, as explained above, was dominated by lateral leakage fluctuations, was also in this case measured to be completely determined by stochastic fluctuations, as evidenced by the E^-1/2 scaling.

§ ACKNOWLEDGMENTS

We thank CERN for making good particle beams available to our experiments in the H8 beam. In particular, we also thank the technicians who are responsible for the construction and installation of the calorimeter: Freddi Angelo, Domenico Calabrò, Claudio Scagliotti and Filippo Vercellati.
This study was carried out with financial support of the United States Department of Energy, under contract DE-FG02-12ER41783, of Italy's Istituto Nazionale di Fisica Nucleare and Ministero dell'Istruzione, dell'Università e della Ricerca, and of the Basic Science Research Program of the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT & Future Planning under contract 2015R1C1A1A02036477.

References
[Akc97] N. Akchurin et al., Nucl. Instr. and Meth. A399 (1997) 202.
[RD52web] All publications and other results obtained in the context of this project can be found at the RD52 websites: http://highenergy.phys.ttu.edu/dream and http://dream.knu.ac.kr
[RD52_em] N. Akchurin et al., Nucl. Instr. and Meth. A735 (2014) 130.
[smalltheta] A. Cardini et al., Nucl. Instr. and Meth. A808 (2016) 41.
[PID] N. Akchurin et al., Nucl. Instr. and Meth. A735 (2014) 120.
[Akc05a] N. Akchurin et al., Nucl. Instr. and Meth. A537 (2005) 537.
[deg07] D.E. Groom, Nucl. Instr. and Meth. A572 (2007) 633.
[Aco90] D. Acosta et al., Nucl. Instr. and Meth. A305 (1991) 55.
[Hartjes] F.G. Hartjes and R. Wigmans, Nucl. Instr. and Meth. A277 (1989) 379.
[geant] N. Akchurin et al., Nucl. Instr. and Meth. A762 (2014) 100.
[Akc14] N. Akchurin et al., Nucl. Instr. and Meth. A735 (2014) 120.
[Akc98] N. Akchurin et al., Nucl. Instr. and Meth. A408 (1998) 380.
[Wig00] R. Wigmans, Calorimetry, Energy Measurement in Particle Physics, International Series of Monographs on Physics, Vol. 107, Oxford University Press (2000).
[PDG16] C. Patrignani et al. (Particle Data Group), Chin. Phys. C40, 100001 (2016), Section 34.9.2.
[Adr10] P. Adragna et al., Nucl. Instr. and Meth. A615 (2010) 158.
[Wig13] R. Wigmans, Nucl. Instr. and Meth. A713 (2013) 43.
Precision bounds for gradient magnetometry with atomic ensembles

iagoba.apellaniz@gmail.com
Department of Theoretical Physics, University of the Basque Country UPV/EHU, P. O. Box 644, E-48080 Bilbao, Spain
Department of Theoretical Physics, University of the Basque Country UPV/EHU, P. O. Box 644, E-48080 Bilbao, Spain
Department of Theoretical Physics, University of the Basque Country UPV/EHU, P. O. Box 644, E-48080 Bilbao, Spain
Dahlem Center for Complex Quantum Systems, Freie Universität Berlin, 14195 Berlin, Germany
Wigner Research Centre for Physics, Hungarian Academy of Sciences, P.O. Box 49, H-1525 Budapest, Hungary
Department of Theoretical Physics, University of the Basque Country UPV/EHU, P. O. Box 644, E-48080 Bilbao, Spain
Géza Tóth, toth@alumni.nd.edu, http://www.gtoth.eu
Department of Theoretical Physics, University of the Basque Country UPV/EHU, P. O. Box 644, E-48080 Bilbao, Spain
Wigner Research Centre for Physics, Hungarian Academy of Sciences, P.O. Box 49, H-1525 Budapest, Hungary
IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao, Spain

Received 7 July 2017; published 8 May 2018

We study gradient magnetometry with an ensemble of atoms with arbitrary spin. We calculate precision bounds for estimating the gradient of the magnetic field based on the quantum Fisher information. For quantum states that are invariant under homogeneous magnetic fields, we need to measure a single observable to estimate the gradient. On the other hand, for states that are sensitive to homogeneous fields, a simultaneous measurement is needed, as the homogeneous field must also be estimated. We prove that for the cases studied in this paper, such a measurement is feasible. We present a method to calculate precision bounds for gradient estimation with a chain of atoms or with two spatially separated atomic ensembles. We also consider a single atomic ensemble with an arbitrary density profile, where the atoms cannot be addressed individually, and which is a very relevant case for experiments. Our model can take into account even correlations between particle positions. While in most of the discussion we consider an ensemble of localized particles that are classical with respect to their spatial degree of freedom, we also discuss the case of gradient metrology with a single Bose-Einstein condensate.

DOI: 10.1103/PhysRevA.97.053603

§ INTRODUCTION

Metrology plays an important role in many areas of physics and engineering <cit.>. With the development of experimental techniques, it is now possible to realize metrological tasks in physical systems that cannot be described well by classical physics and instead quantum mechanics must be used for their modeling. Quantum metrology <cit.> is the novel field that is concerned with metrology using such quantum mechanical systems.

One of the basic tasks of quantum metrology is magnetometry with an ensemble of spin-j particles. Magnetometry with a completely polarized state works as follows. The total spin of the ensemble is rotated by a homogeneous magnetic field perpendicular to it. We would like to estimate the rotation angle or phase θ based on some measurement; this phase parameter can then be used to obtain the field strength.
To determine the rotation angle, one needs, for instance, to measure a spin component perpendicular to the mean spin. Up to now, it looks as if the total spin behaves like a clock arm and its position tells us the value of θ exactly. At this point one has to remember that we have an ensemble of N particles governed by quantum mechanics, and the uncertainty of the spin component perpendicular to the mean spin can never be zero. Hence, a simple calculation shows that the precision of the phase estimation scales as (Δθ)^2 ∼ N^-1, which is called shot-noise scaling <cit.>. However, spin squeezing <cit.> can decrease the uncertainty of one of the components perpendicular to the mean spin and this can be used to increase the precision of the measurements <cit.>. While it is possible to surpass the shot-noise limit, for the case of a linear Hamiltonian <cit.>, no quantum state can have a better scaling in the precision than (Δθ)^2 ∼ N^-2, called Heisenberg scaling.

In recent years, quantum metrology has been applied in many scenarios, from atomic clocks <cit.> and precision magnetometry <cit.> to gravitational wave detectors <cit.>. So far, most of the attention has been paid to the problem of estimating a single parameter. The case of multiparameter estimation for quantum systems is much less studied, possibly since it can be more complicated due to the noncommutative nature of the problem <cit.>.

In this paper, we compute precision bounds for the estimation of the magnetic field gradient (see Fig. <ref>). In general, in order to achieve these bounds, an estimate of the constant (homogeneous) part of the field is required. Hence, we have to use the formalism of multiparameter estimation. Magnetometry of this type can be realized with differential interferometry with two particle ensembles, which has raised a lot of attention in quantum metrology <cit.>. Another possibility is considering spin chains, which can be relevant in trapped cold ions or optical lattices of cold atoms, where we have individual access to the particles <cit.>. Finally, gradient magnetometry can be carried out using a single atomic cloud, which is very relevant from the point of view of cold gas experiments. One can consider both atomic clouds of localized particles, as well as Bose-Einstein condensates. While most works in magnetometry with a single ensemble focus only on the determination of the strength and direction of the magnetic field, certain measurement schemes for the gradient have already been proposed and tested experimentally. Some schemes use an imaging of the ensemble with a high spatial resolution. They do not count as single-ensemble methods in the sense we use this expression in our paper, since in this case not only collective observables are measured <cit.>. There is a method based on collective measurements of the spin length of a fully polarized ensemble given in Ref. <cit.>. There is also a scheme based on many-body singlet states described in Ref. <cit.>.

We use the quantum Fisher information (QFI) and the Cramér-Rao (CR) bound in our derivations <cit.>. Due to this, our calculations are generally valid for any measurement, thus they are relevant to many recent experiments <cit.>. We note that in the case of the spin singlet, our precision bounds are saturated by the metrological scheme presented in Ref. <cit.>.

We can also connect our results to entanglement theory <cit.>. We find that the shot-noise scaling cannot be surpassed with separable states, while the Heisenberg scaling can be reached with entangled states.
However, the shot-noise scaling can be surpassed only if the particle positions are correlated, which is the case, for instance, if the particles attract each other.

Next, we present the main characteristics of our setup. For simplicity, as well as following recent experiments (e.g., Ref. <cit.>), we consider an ensemble of spin-j particles placed in a one-dimensional arrangement. The atoms are then situated along the x axis with y=z=0. We assume that we have particles that behave classically with respect to their spatial state. That is, they cannot be in a superposition of being at two different places. On the other hand, they have internal degrees of freedom, their spin, which is quantum. This is a very good description of many of the cold gas experiments.

Based on these considerations, we assume that the state is factorizable into a spatial part and a spin part as

ϱ = ϱ^(x) ⊗ ϱ^(s),

where the internal state is decomposed in its eigenbasis as

ϱ^(s) = ∑_λ p_λ |λ⟩⟨λ|.

For the spatial part defined in the continuous Hilbert space, we assume that it can be modeled by an incoherent mixture of pointlike particles as

ϱ^(x) = ∫ P(x)/⟨x|x⟩ |x⟩⟨x| dx,

where x = (x_1, x_2, …, x_N) is a vector which collects all the particle positions, P(x) is the spatial probability distribution function of the atoms, and dx denotes dx_1 dx_2 ⋯ dx_N. Note that the spatial part (<ref>) is diagonal in the position eigenbasis, which simplifies considerably our calculations (see Appendix <ref> for more details). During the evolution of the state, correlations might arise between the internal and the spatial parts and the product form (<ref>) might not be valid to describe the evolution of the system.

First, we consider spin chains and two particle ensembles at different places. The gradient measurement with two ensembles is essentially based on the idea that the gradient is just the difference between two measurements at different locations. With these systems, it is possible to reach the Heisenberg scaling.

We also examine in detail the case of a single atomic ensemble. Since in such systems the atoms cannot be individually addressed, we assume that the quantum state is permutationally invariant (PI). We show that for states insensitive to the homogeneous magnetic field, one can reduce the problem to a one-parameter estimation scenario. Single-ensemble measurements have certain advantages because the spatial resolution can be higher and the experimental requirements are smaller, since only a single ensemble must be prepared.

For completeness, we mention the case of Bose-Einstein condensates (BEC). The spatial state in this case is pure,

ϱ^(x)_BEC = (|Ψ⟩⟨Ψ|)^⊗N,

where |Ψ⟩ is the spatial state of a single particle. Hence, the spatial state is delocalized and it is not an incoherent mixture of various eigenstates of x. While we do not consider such systems in detail, our formalism could be used to model them.

We now outline the model we use to describe the interaction of the particles with the magnetic field. The field at the atoms is given as

B(x,0,0) = B_0 + x B_1 + 𝒪(x^2),

where we neglect the terms of order two or higher, and where 𝒪(ξ) is the usual Landau notation to describe the asymptotic behavior of a quantity, in this case for small ξ. We consider the magnetic field pointing in the z direction; hence, B_0 = B_0 (0,0,1) and B_1 = B_1 (0,0,1). For this configuration, due to the Maxwell equations, with no currents or changing electric fields, we have

div B = 0,   curl B = (0,0,0).

This implies ∑_l=x,y,z ∂B_l/∂l = 0 and ∂B_l/∂m − ∂B_m/∂l = 0 for l ≠ m.
Thus, the spatial derivatives of the field components are not independent of each other. In this paper, however, we consider an elongated trap. In the case of such a quasi-one-dimensional atomic ensemble, or of a double-well experiment, only the derivative along the axis of the trap has an influence on the quantum dynamics of the atoms.

We determine the precision bounds for the estimation of the magnetic field gradient B_1. We calculate how the precision scales with the number of particles. We compare systems with an increasing particle number, but of the same size. As discussed later, if we follow a different route, we can obtain results that can incorrectly be interpreted as reaching the Heisenberg limit, or even a super-Heisenberg scaling.

The angular momentum of an individual atom is coupled to the magnetic field, yielding the following interaction term: h^(n)=γ B_z^(n)⊗ j_z^(n), where the operator B_z^(n)=B_0+B_1 x̂^(n) acts on the spatial part of the Hilbert space and x̂^(n) is the position operator of a single particle. Moreover, j_z^(n) is a single-particle spin operator, acting on the spin part of the Hilbert space. Finally, γ = g μ_B, where g is the gyromagnetic factor and μ_B is the Bohr magneton, and we set ħ=1 for simplicity. We use the hat notation to distinguish the operator x̂ from the coordinate x. Later, we will omit it for simplicity. The Hamiltonian of the entire system is just the sum of all single-particle interaction terms of the type (<ref>) and can be written as H = γ∑_{n=1}^N B_z^(n)⊗ j_z^(n). Equation (<ref>) generates the time evolution of the atomic ensemble.

One could also include the kinetic energy in the Hamiltonian. Such an extra term causes the gradient field to push atoms in state |0⟩ in one direction and atoms in state |1⟩ in the other. In our work, we do not take this effect into account. Moreover, we do not include in the model the initial thermal dynamics of the particles. Both of these effects are negligible in a usual setup, as shown in Appendix <ref>.

We calculate lower bounds on the precision of estimating B_1 based on a measurement on the state after it has passed through the unitary dynamics U=exp(-iHt), where t is the time spent by the system under the influence of the magnetic field. The unitary operator can be rewritten as U=e^{-i(b_0 H_0 + b_1 H_1)}, where b_i=γ B_i t. The generator describing the effect of the homogeneous field is given as H_0=∑_{n=1}^N j_z^(n) = J_z, while the generator describing the effect of the gradient is H_1=∑_{n=1}^N x^(n) j_z^(n). We omit ⊗ and the superscripts (x) and (s) for simplicity, and use them only if it is necessary to avoid confusion.

The operators H_0 and H_1 commute with each other. However, it is not necessarily true that the operators we have to measure to estimate b_0 or b_1 can be measured simultaneously. The reason is that both operators to be measured act on the same atomic ensemble. If the measurement operators do not commute with each other, then the precision bound obtained from the theory of the QFI cannot necessarily be reached. For the particular cases studied in this paper, we prove that a simultaneous measurement to estimate both the homogeneous and the gradient parameter can be carried out (see Appendix <ref>). On the other hand, in schemes in which the gradient is calculated based on measurements on two separate atomic ensembles or different atoms in a chain, the measuring operators can always commute with each other <cit.>.
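For readers who prefer to see the generators explicitly, the sketch below (our toy illustration; N, the positions, and b_0, b_1 are assumed values, not from the paper) constructs H_0 and H_1 for pointlike spin-1/2 particles at fixed positions, verifies that [H_0, H_1] = 0, and checks that the joint evolution consequently factorizes.

import numpy as np
from functools import reduce
from scipy.linalg import expm

N, b0, b1 = 3, 0.3, 0.1
jz = np.diag([0.5, -0.5])
I2 = np.eye(2)

def embed(op, n):  # single-particle operator acting on site n
    ops = [I2] * N
    ops[n] = op
    return reduce(np.kron, ops)

x = [1.0, 2.0, 3.0]  # fixed (classical) particle positions
H0 = sum(embed(jz, n) for n in range(N))           # homogeneous generator J_z
H1 = sum(x[n] * embed(jz, n) for n in range(N))    # gradient generator

print(np.allclose(H0 @ H1, H1 @ H0))               # True: [H0, H1] = 0
U = expm(-1j * (b0 * H0 + b1 * H1))
print(np.allclose(U, expm(-1j * b0 * H0) @ expm(-1j * b1 * H1)))  # True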
The paper is organized as follows. In Sec. <ref>, general precision bounds for the estimation of the gradient of the magnetic field are presented. In Sec. <ref>, we compute precision bounds for relevant spatial configurations appearing in cold atom physics, such as spin chains and two ensembles spatially separated from each other. In Sec. <ref>, we consider a single atomic ensemble in a PI state and we calculate the precision bounds for various quantum states, such as the singlet spin state or the totally polarized state. In Sec. <ref>, we consider Bose-Einstein condensates.

§ PRECISION BOUNDS FOR ESTIMATING THE GRADIENT

In this section, we show how the QFI helps us to obtain the bound on the precision of the gradient estimation. First, we discuss gradient magnetometry using quantum states that are insensitive to homogeneous fields. In this case, we need to estimate only the gradient and do not have to know the homogeneous field. Hence, this case corresponds to a single-parameter estimation problem. Then, we discuss the case of quantum states sensitive to homogeneous fields. Even in this case, we are interested only in the gradient, and we do not aim at estimating the homogeneous field. In spite of this, gradient estimation with such states is a two-parameter estimation task. We introduce the basics of multiparameter quantum metrology, and we adapt that formalism to our problem. We also show that the precision bound obtained does not change under spatial translation, which will be used later to simplify our calculations. In Appendix <ref>, we show that even the precision bounds for states sensitive to the homogeneous field, appearing in this paper, are saturable.

Next, we summarize important properties of the QFI used throughout this paper (for reviews, see Refs. <cit.>). Let us consider a quantum state with the eigendecomposition ϱ = ∑_k p_k|k⟩⟨k|. For two arbitrary operators A and B, and a state ϱ [Eq. (<ref>)], the QFI is defined as <cit.> F_Q[ϱ,A,B] := 2∑_{k,k'} [(p_k-p_{k'})^2/(p_k+p_{k'})] A_{k,k'} B_{k',k}, where A_{k,k'}=⟨k|A|k'⟩ and B_{k,k'}=⟨k|B|k'⟩. If the two operators are the same, then from Eq. (<ref>) the usual form of the QFI is obtained: F_Q[ϱ,A] ≡ F_Q[ϱ,A,A] = 2∑_{k,k'} [(p_k-p_{k'})^2/(p_k+p_{k'})] |A_{k,k'}|^2.

We list some useful properties of the QFI: (i) Based on Eq. (<ref>), F_Q[ϱ,A,B] is linear in the second and third arguments, F_Q[ϱ, ∑_i A_i, ∑_j B_j] = ∑_{i,j} F_Q[ϱ, A_i, B_j]. This will make it possible to calculate the QFI for collective quantities based on the QFI for single-particle observables. (ii) The QFI remains invariant if we exchange the second and the third arguments, F_Q[ϱ,A,B] = F_Q[ϱ,B,A]. Equation (<ref>) will help to simplify our calculations. (iii) The following alternative form, F_Q[ϱ,A,B] = 4⟨AB⟩ - 8∑_{k,k'} [p_k p_{k'}/(p_k+p_{k'})] A_{k,k'} B_{k',k}, is also useful since the correlation appears explicitly. (iv) For pure states, Eq. (<ref>) simplifies to F_Q[|ψ⟩,A,B] = 4(⟨AB⟩_ψ - ⟨A⟩_ψ⟨B⟩_ψ). Using Eq. (<ref>) for A=B, we obtain that for pure states the QFI equals four times the variance, i.e., F_Q[|ψ⟩,A] = 4(Δ A)^2. (v) The QFI is convex on the space of the density matrices, i.e., F_Q[pϱ_1+(1-p)ϱ_2, A] ⩽ p F_Q[ϱ_1,A] + (1-p) F_Q[ϱ_2,A]. Hence, when maximizing the QFI, we need to carry out an optimization over pure states only.
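The defining formula for F_Q[ϱ, A, B] translates directly into a few lines of code. The sketch below (our illustration; the test state is an arbitrary qubit state) evaluates the QFI from the eigendecomposition of ϱ and confirms property (iv), F_Q[|ψ⟩, A] = 4(ΔA)^2, for a pure state.

import numpy as np

def qfi(rho, A, B=None, tol=1e-12):
    """F_Q[rho, A, B] = 2 sum_{k,k'} (p_k - p_k')^2/(p_k + p_k') A_{kk'} B_{k'k}."""
    if B is None:
        B = A
    p, U = np.linalg.eigh(rho)          # rho = U diag(p) U^dagger
    Ae = U.conj().T @ A @ U             # matrix elements in the eigenbasis
    Be = U.conj().T @ B @ U
    F = 0.0
    for k, pk in enumerate(p):
        for l, pl in enumerate(p):
            if pk + pl > tol:
                F += 2 * (pk - pl)**2 / (pk + pl) * Ae[k, l] * Be[l, k]
    return F.real

jz = np.diag([0.5, -0.5])
psi = np.array([1.0, 1.0j]) / np.sqrt(2)          # arbitrary pure qubit state
rho = np.outer(psi, psi.conj())
var = (psi.conj() @ jz @ jz @ psi - (psi.conj() @ jz @ psi)**2).real
print(np.isclose(qfi(rho, jz), 4 * var))          # True: property (iv)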
In the following, we show the general form of the expressions giving the precision bounds for states insensitive to the homogeneous field, as well as for states sensitive to it. We also show that both bounds are invariant under spatial translations of the system, which makes the computation for particular cases much easier.

§.§ Precision bound for states insensitive to homogeneous fields: Single-parameter dependence

We will now consider quantum states that are insensitive to the homogeneous field. For such states, [ϱ, H_0]=0 holds. Hence, the unitary time evolution given in Eq. (<ref>) simplifies to U=e^{-ib_1H_1}, and the evolved state is a function of a single unknown parameter b_1. When estimating a single parameter, the Cramér-Rao bound gives the best achievable precision as <cit.> (Δ b_1)^{-2}|_max = F_Q[ϱ, H_1]. It is always possible to find a measurement that saturates the precision bound, (<ref>), which is indicated using the notation “|_max =”.

For states insensitive to the homogeneous fields, the maximal precision of the estimation of the gradient parameter b_1 is given as (Δ b_1)^{-2}|_max = ∑_{n,m}^N ∫ x_n x_m P(x) dx F_Q[ϱ^(s), j_z^(n), j_z^(m)], where the integral represents the correlation between the particle positions x_n and x_m. Moreover, Eq. (<ref>) is translationally invariant, i.e., it remains the same after an arbitrary displacement d of the form U_d=exp(-idP_x), where d is the distance displaced and P_x is the sum of all single-body momentum operators p_x^(n) in the x direction.

We have to evaluate the right-hand side of Eq. (<ref>). The state is a tensor product of the spatial and internal parts, and the spatial part is an incoherent mixture of position eigenstates, as in Eqs. (<ref>) and (<ref>). Hence, the eigenstates are |x,λ⟩, where |x⟩ and |λ⟩ are defined in the spatial and internal Hilbert spaces, respectively. Then, the matrix elements of H_1, which is diagonal in the spatial subspace, are obtained as (H_1)_{x,λ;y,ν} = δ(x-y) ⟨λ|∑_{n=1}^N x_n j_z^(n)|ν⟩. Calculating Eq. (<ref>) for A = H_1, Eq. (<ref>) leads to Eq. (<ref>) (see Appendix <ref> for details).

In the last part of the proof, we show that the precision (<ref>) remains the same for any displacement of the system. We use the Heisenberg picture, in which the operators must be transformed instead of the states. After the displacement, the operator H_1 describing the effect of the gradient is obtained as H_1(d) = H_1 - dH_0. Hence, the unitary evolution operator of the displaced system is obtained as U(d)=e^{-i[b_0H_0+b_1 H_1(d)]}=e^{-i[(b_0-b_1d)H_0+b_1H_1]}. Using the commutation relation (<ref>), we can see that Eq. (<ref>) is equal to the time evolution given in Eq. (<ref>).

§.§ Precision bound for states sensitive to homogeneous fields: Two-parameter dependence

We now show how to obtain the precision bounds for states sensitive to the homogeneous field. The homogeneous field rotates all the spins in the same way, while the field gradient rotates the spins differently depending on the position of the particles. Hence, in order to estimate b_1, we have to consider the effect of a second unknown parameter b_0. Note, however, that we are not interested in estimating b_0 precisely; we just need it to estimate b_1. In this case, the metrological performance of the quantum state is given by the 2×2 Cramér-Rao matrix inequality <cit.> C ≥ ℱ_Q^{-1}, where the covariance matrix is defined as C_{ij}=⟨b_i b_j⟩ - ⟨b_i⟩⟨b_j⟩. The matrix elements of the quantum Fisher information matrix ℱ_Q are ℱ_{ij} := F_Q[ϱ, H_i, H_j]. Unlike in the case of single-parameter estimation, Eq. (<ref>) can be saturated only if the measurements for estimating the two parameters are compatible with each other <cit.>. Hence, we use “⩽” instead of “|_max” for the bounds for quantum states sensitive to the homogeneous fields. Using the well-known formula for the inverse of 2×2 matrices, Eq. (<ref>) yields (Δ b_1)^{-2} ⩽ ℱ_{11} - ℱ_{01}ℱ_{10}/ℱ_{00} for the precision of b_1.
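As a sanity check of this two-parameter bound, the following sketch (our toy example with two spin-1/2 particles pinned at assumed positions ±a) evaluates ℱ_{11} - ℱ_{01}ℱ_{10}/ℱ_{00} for a pure internal state, using the pure-state form ℱ_{ij} = 4(⟨H_i H_j⟩ - ⟨H_i⟩⟨H_j⟩).

import numpy as np

a = 1.0
jz = np.diag([0.5, -0.5])
I2 = np.eye(2)
jz1, jz2 = np.kron(jz, I2), np.kron(I2, jz)
H0 = jz1 + jz2                    # generator of the homogeneous rotation
H1 = -a * jz1 + a * jz2           # generator of the gradient rotation

# a state sensitive to the homogeneous field: both spins along x
plus = np.array([1.0, 1.0]) / np.sqrt(2)
psi = np.kron(plus, plus)

def cov4(A, B):                   # pure-state QFI matrix element
    return 4 * ((psi @ A @ B @ psi) - (psi @ A @ psi) * (psi @ B @ psi))

F00, F11, F01 = cov4(H0, H0), cov4(H1, H1), cov4(H0, H1)
print(F11 - F01**2 / F00)         # 2.0; here F01 = 0, and 2.0 = 2 sigma^2 N j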
For states sensitive to the homogeneous field, the expression to compute the precision bound for the gradient parameter takes the following form: (Δ b_1)^{-2} ⩽ ∑_{n,m}^N ∫ x_n x_m P(x) dx F_Q[ϱ^(s), j_z^(n), j_z^(m)] - [∑_{n=1}^N ∫ x_n P(x) dx F_Q[ϱ^(s), j_z^(n), J_z]]^2 / F_Q[ϱ^(s), J_z]. Moreover, the bound, (<ref>), similarly to Eq. (<ref>), is invariant under spatial translations of the system.

To obtain the bound, (<ref>), we need to consider the matrix elements of the QFI one by one. First of all, we compute ℱ_{11}, which has the same form as Eq. (<ref>), ℱ_{11} = ∑_{n,m}^N ∫ x_n x_m P(x) dx F_Q[ϱ^(s), j_z^(n), j_z^(m)]. Next, we have that H_0, similarly to Eq. (<ref>), is diagonal in the spatial |x⟩ basis, and its matrix elements in the |x,λ⟩ basis of the state are written as (H_0)_{x,λ;y,ν} = δ(x-y) ⟨λ|∑_{n=1}^N j_z^(n)|ν⟩. With this we obtain F_Q[ϱ, H_0, H_0] as ℱ_{00} = F_Q[ϱ^(s), J_z]. Note that Eq. (<ref>) is not a function of the whole state but only of the internal state ϱ^(s). Finally, we compute ℱ_{01} and ℱ_{10}. Since ℱ_{01} = ℱ_{10}, we have to compute only one of them. Using Eqs. (<ref>) and (<ref>), F_Q[ϱ, H_0, H_1] is obtained as ℱ_{01} = ∑_{n=1}^N ∫ x_n P(x) dx F_Q[ϱ^(s), j_z^(n), J_z]. With these results, Eq. (<ref>) follows (see Appendix <ref>).

Let us now determine the bound on the precision for estimating the gradient in the translated system. We have to compute first the QFI matrix elements. We use the linearity of the last two arguments of F_Q[ϱ, A, B] given in Eq. (<ref>), and the fact that H_0 remains unchanged in the Heisenberg picture. We also use the formula (<ref>) for the shifted H_1 operator. The diagonal element of the QFI matrix corresponding to the measurement of the homogeneous field is ℱ_{00}(d) = F_Q[ϱ, H_0(d)] = ℱ_{00}; hence, it does not change due to the translation. For the diagonal element corresponding to the gradient measurement we obtain ℱ_{11}(d) = ℱ_{11} - 2dℱ_{01} + d^2ℱ_{00}. Finally, for the off-diagonal element, we get ℱ_{01}(d) = ℱ_{01} - dℱ_{00}. After determining all the elements of the QFI matrix, the bound for a displaced system can be obtained as (Δ b_1)^{-2} ⩽ ℱ_{11}(d) - [ℱ_{01}(d)]^2/ℱ_{00}(d) = ℱ_{11} - 2dℱ_{01} + d^2ℱ_{00} - [ℱ_{01}^2 - 2dℱ_{01}ℱ_{00} + d^2ℱ_{00}^2]/ℱ_{00}. The bound in Eq. (<ref>) can be obtained from the right-hand side of Eq. (<ref>) with straightforward algebra.

§ SPIN CHAIN AND TWO SEPARATED ENSEMBLES FOR MAGNETOMETRY

After presenting our tools in Sec. <ref>, we start with simple examples to show how our method works. We calculate precision bounds for gradient metrology for a spin chain and for two particle ensembles separated by a distance. Before considering the setups mentioned above, we introduce various quantities describing the distribution of the particles based on the probability distribution function appearing in Eq. (<ref>). The mean particle position is μ = ∫ [∑_{n=1}^N x_n/N] P(x) dx. The standard deviation of the particle positions, describing the size of the system, is computed as σ^2 = ∫ [∑_{n=1}^N x_n^2/N] P(x) dx - μ^2. Finally, the covariance averaged over all particle pairs is η = ∫ [∑_{n≠m}^N x_n x_m/(N(N-1))] P(x) dx - μ^2. The covariance is a large positive value if the particles tend to be close to each other, while it is negative if they tend to avoid each other.
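The behavior of σ^2 and η is easy to build intuition for with a short Monte-Carlo experiment (our illustration with assumed Gaussian position distributions): independently placed particles give η ≈ 0, while a rigidly co-moving cloud saturates the maximal positive correlation η = σ^2.

import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 200_000                       # particles per draw, number of draws

def sigma2_eta(X):                      # X has shape (M, N)
    mu = X.mean()
    s2 = (X**2).mean() - mu**2
    pair = (X.sum(axis=1)**2 - (X**2).sum(axis=1)).mean() / (N * (N - 1))
    return s2, pair - mu**2

X_iid = rng.normal(size=(M, N))                   # uncorrelated positions
X_rigid = rng.normal(size=(M, 1)) * np.ones(N)    # fully correlated positions
print(sigma2_eta(X_iid))     # ~ (1, 0): eta vanishes
print(sigma2_eta(X_rigid))   # ~ (1, 1): eta -> sigma^2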
After presenting the fundamental quantities above, let us study concrete metrological setups. The first spatial state we consider is given by N particles placed equidistantly from each other in a one-dimensional spin chain, as shown in Fig. <ref>. Such a system has also been studied in the context of single-parameter estimation in the presence of collective phase noise <cit.>. The probability density function describing such a system is P(x)=∏_{n=1}^N δ(x_n - na), where a is the distance between the particles in the chain. For this system, the average position of the nth particle is ∫ x_n P(x) dx = na, whereas the two-point average (<ref>) is ∫ x_n x_m P(x) dx = nma^2. The standard deviation defined in Eq. (<ref>) is obtained as σ_ch^2 = a^2 (N^2-1)/12.

Next, we will obtain precision bounds for particles placed in a spin chain. Let us consider a chain of N spin-j particles placed along the x direction separated by a constant distance, and a magnetic field pointing in the z direction. Then, for the spin state totally polarized in the y direction, |ψ_tp⟩=|j⟩_y^{⊗N}, the precision bound is given by (Δ b_1)^{-2}|_max = 2σ_ch^2 N j. Here, σ_ch denotes the standard deviation of the particle positions for the chain (ch). We use the precision bound for states sensitive to the homogeneous field given in Eq. (<ref>). We obtain (Δ b_1)^{-2}|_max = ∑_{n,m}^N nma^2 F_Q[|j⟩_y^{⊗N}, j_z^(n), j_z^(m)] - [∑_{n=1}^N an F_Q[|j⟩_y^{⊗N}, j_z^(n), J_z]]^2 / F_Q[|j⟩_y^{⊗N}, J_z, J_z] = 2a^2 [(N^2-1)/12] Nj. Note that the bound can be saturated (see Appendix <ref>). Here, for the last equality we used the definitions of the average quantities given in Eqs. (<ref>) and (<ref>), and we also used Eq. (<ref>) giving the QFI for pure states. We can see that the standard deviation given in Eq. (<ref>) coincides with a factor we have in Eq. (<ref>), with which we conclude the proof.

Note that the bound (<ref>) seems to scale with the third power of the particle number N, and hence seems to overcome the ultimate Heisenberg limit. The reason is that the length of the chain increases as we introduce more particles into the system. We should compare the metrological usefulness of systems with different particle numbers, but of the same size. In our case, we use throughout the paper the standard deviation of the particle positions as a measure of the spatial size of the system, and normalize the results with it. One can miss this important point, since when only the homogeneous field is measured such a normalization is not needed. [This comment is relevant for the setup of Ref. <cit.>, where the precision of the gradient estimation seems to reach the Heisenberg scaling. In reality, the shot-noise scaling has not been overcome. The question of normalization is also important for the setup in Ref. <cit.>.]

After the spin chain, we consider estimating the gradient with two ensembles of spin-j atoms spatially separated from each other. Such systems have been realized in cold gases (e.g., Ref. <cit.>), and can be used for differential interferometry <cit.>. We will determine the internal state with the maximal QFI. Let us assume that half of the particles are at one position and the rest at another one, both places at a distance a from the origin. The probability density function of the spatial part is P(x)=∏_{n=1}^{N/2} δ(x_n + a) ∏_{n=N/2+1}^N δ(x_n - a). Such a distribution of particles could be realized in a double-well trap, where the width of the wells is negligible compared to the distance between the wells. To distinguish the two wells we use the labels “L” and “R” for the left-hand-side and right-hand-side wells, respectively.
Based on these, we obtain the single-point averages as ∫ x_n P(x) dx = -a if n∈L and +a if n∈R. The two-point correlation functions are ∫ x_n x_m P(x) dx = +a^2 if (n,m)∈(L,L) or (R,R), and -a^2 if (n,m)∈(L,R) or (R,L). For the average particle position we obtain μ=0, while the variance for the spatial state in the double well (dw) is σ_dw^2 = a^2.

Next, we calculate the achievable precision of the gradient estimation. For the case of two ensembles of N spin-j particles in total, the state that maximizes the QFI is |ψ⟩ = (|j⋯j⟩^(L)|-j⋯-j⟩^(R) + |-j⋯-j⟩^(L)|j⋯j⟩^(R))/√2. The best achievable precision is given as (Δ b_1)^{-2}|_max = 4σ_dw^2 N^2 j^2. Equation (<ref>) agrees with the results obtained in Ref. <cit.>. The state given in Eq. (<ref>) is insensitive to the homogeneous field, hence we have to use the formula (<ref>) to bound the precision. We obtain (Δ b_1)^{-2}|_max = ∑_{(n,m)∈(L,L),(R,R)} a^2 F_Q[|ψ⟩, j_z^(n), j_z^(m)] + ∑_{(n,m)∈(L,R),(R,L)} (-a^2) F_Q[|ψ⟩, j_z^(n), j_z^(m)]. For the state (<ref>), the equation above, (<ref>), yields (Δ b_1)^{-2}|_max = ∑_{(n,m)∈(L,L),(R,R)} a^2 j^2 + ∑_{(n,m)∈(L,R),(R,L)} (-a^2)(-j^2) = 4a^2 N^2 j^2, where we have used the definition of the QFI for pure states given in Eq. (<ref>). A factor in Eq. (<ref>) can be identified with the variance, (<ref>), from which the proof follows.

It is interesting to simplify the QFI for product states |ψ⟩^(L)⊗|ψ⟩^(R), where |ψ⟩^(L) and |ψ⟩^(R) are pure states of N/2 particles each. This approach is also discussed in Ref. <cit.>. Such states can reach the Heisenberg limit, while they are easier to realize experimentally than states in which the particles in the two wells are entangled with each other. Before obtaining the precision for the case above, we present a method to simplify our calculations. The system is at the origin of the coordinate system such that for the mean particle position given in Eq. (<ref>), μ = ∫ [∑_n x_n/N] P(x) dx = 0 holds. Thus, the second term in the expression for the bound for states sensitive to the homogeneous field (<ref>) is zero, since all F_Q[ϱ^(s), j_z^(n), J_z] are equal when considering product states of two equal permutationally invariant states, |ψ⟩^(L)⊗|ψ⟩^(R). Hence, the bounds for states insensitive and sensitive to the homogeneous field, Eqs. (<ref>) and (<ref>), respectively, are the same in this case.

We now compute F_Q[ϱ, H_1] for the case when the state is sensitive to the homogeneous field; hence, we use the bound on the precision given in Eq. (<ref>). Using the probability density distribution function given in Eq. (<ref>), and following the steps leading to Eq. (<ref>), we obtain F_Q[|ψ⟩^(L)|ψ⟩^(R), H_1] = 2a^2 F_Q[|ψ⟩^(L), J_z^(L)], where we used that F_Q[|ψ⟩^(L), J_z^(L)] = F_Q[|ψ⟩^(R), J_z^(R)]. Note that our results concerning the use of product states for magnetometry can be interpreted as follows. In this case, essentially the homogeneous field is estimated in each of the two wells, and then the gradient is computed from the measurement results. The bounds for these types of states are also saturable (see Appendix <ref>).

We will now present precision bounds for various well-known quantum states in the two wells. We consider the Greenberger-Horne-Zeilinger (GHZ) state <cit.> |GHZ⟩ = (|00⋯00⟩+|11⋯11⟩)/√2, where |0⟩ and |1⟩ are the eigenstates of j_z^(n) with eigenvalues -1/2 and +1/2, respectively. We also consider unpolarized Dicke states <cit.> |D_N⟩_l = \binom{N}{N/2}^{-1/2} ∑_k 𝒫_k(|0⟩_l^{⊗N/2}⊗|1⟩_l^{⊗N/2}), where l=x,y,z and the summation is over all permutations 𝒫_k. Such states are the symmetric superposition of product states with an equal number of |0⟩_l's and |1⟩_l's. Based on these, in Table <ref> we summarize the precision bounds for states of the type |ψ⟩^(L)⊗|ψ⟩^(R) for the double-well case.
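As a numerical cross-check of the two-ensemble result above, the sketch below (our toy example with N = 4 qubits and an assumed well separation a = 1) verifies that the state of Eq. (<ref>) reaches (Δ b_1)^{-2}|_max = 4σ_dw^2 N^2 j^2.

import numpy as np
from functools import reduce

N, a = 4, 1.0
jz = np.diag([0.5, -0.5])
I2 = np.eye(2)

def embed(op, n):
    ops = [I2] * N
    ops[n] = op
    return reduce(np.kron, ops)

x = [-a, -a, +a, +a]                         # first half "L", second half "R"
H1 = sum(x[n] * embed(jz, n) for n in range(N))

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
comp1 = reduce(np.kron, [up, up, dn, dn])    # |j...j>_L |-j...-j>_R
comp2 = reduce(np.kron, [dn, dn, up, up])
psi = (comp1 + comp2) / np.sqrt(2)

F = 4 * (psi @ H1 @ H1 @ psi - (psi @ H1 @ psi)**2)   # pure-state QFI
print(np.isclose(F, 4 * a**2 * N**2 * 0.5**2))        # True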
§ MAGNETOMETRY WITH A SINGLE ATOMIC ENSEMBLE

In this section, we discuss magnetometry with a single atomic ensemble. We consider a one-dimensional ensemble of spin-j atoms placed in a trap which is elongated in the x direction. The setup is depicted in Fig. <ref>. In the second part of the section, we calculate precision bounds for the gradient estimation with some important multiparticle quantum states, for instance, Dicke states, singlet states, and GHZ states.

§.§ Precision bound for an atomic ensemble

In an atomic ensemble of many atoms, typically the atoms cannot be individually addressed. We will take this into account by considering states for which both the internal state ϱ^(s) and the probability distribution function P(x), appearing in Eq. (<ref>), are PI. The permutational invariance of P(x) implies that P(x) = (1/N!) ∑_k 𝒫_k[P(x)] holds, where the summation is over all possible permutations 𝒫_k of the variables x_n. Hence, we do not need to sum over all possible n's in Eqs. (<ref>) and (<ref>), nor over all n's and m's in Eq. (<ref>): all the terms in each sum are equal to each other due to the permutational invariance of the probability distribution function (<ref>).

An interesting property of the covariance (<ref>) is that it can only take values bounded by the variance in the following way: -σ^2/(N-1) ⩽ η ⩽ σ^2, where both the lower and the upper bounds are proportional to the variance σ^2. See Fig. <ref> for examples of how different correlations are obtained in an atomic 1D lattice. Next, we present precision bounds for PI states.

The maximal precision achievable by a single atomic ensemble insensitive to homogeneous fields is (Δ b_1)^{-2}|_max = (σ^2-η) ∑_{n=1}^N F_Q[ϱ^(s), j_z^(n)]. The precision given in Eq. (<ref>) can be reached by an optimal measurement. Nevertheless, it is worth noting that the precision cannot surpass the shot-noise scaling, because F_Q[ϱ^(s), j_z^(n)] cannot be larger than 4j^2. Moreover, η cannot be smaller than -σ^2/(N-1) due to Eq. (<ref>), which makes its contribution negligible for large N.

From the definition of the QFI for states insensitive to the homogeneous field [Eq. (<ref>)] we obtain the bound for a single ensemble as (Δ b_1)^{-2}|_max = ∑_{n,m}^N ∫ x_n x_m P(x) dx F_Q[ϱ^(s), j_z^(n), j_z^(m)] = ∑_{n=1}^N σ^2 F_Q[ϱ, j_z^(n)] + ∑_{n≠m}^N η F_Q[ϱ, j_z^(n), j_z^(m)]. Then, we have to use the fact that for states insensitive to the homogeneous fields F_Q[ϱ, J_z]=0 holds, which implies F_Q[ϱ, J_z] = ∑_{n,m}^N F_Q[ϱ, j_z^(n), j_z^(m)] = 0. Based on this, for such states the sum of QFI terms involving two operators can be expressed with the sum of QFI terms involving a single operator as ∑_{n≠m}^N F_Q[ϱ, j_z^(n), j_z^(m)] = -∑_{n=1}^N F_Q[ϱ, j_z^(n)]. Substituting Eq. (<ref>) into Eq. (<ref>), Observation <ref> follows.

For states sensitive to homogeneous fields, the precision of estimating the gradient is bounded from above as (Δ b_1)^{-2}|_max = (σ^2-η) ∑_{n=1}^N F_Q[ϱ^(s), j_z^(n)] + η F_Q[ϱ^(s), J_z], which may surpass the shot-noise scaling whenever η is a positive constant. We start from Eq. (<ref>) and take into account that in this case the bound is saturable (see Appendix <ref>). As explained in Sec. <ref>, if we move the system, the precision bounds do not change. We then move our system to the origin of the coordinate system, yielding μ=0 and making the second term appearing in Eq. (<ref>) zero. Thus, we only compute the first term in Eq. (<ref>) and obtain (Δ b_1)^{-2}|_max = ∑_{n=1}^N σ^2 F_Q[ϱ, j_z^(n)] + ∑_{n≠m}^N η F_Q[ϱ, j_z^(n), j_z^(m)]. Then, we add η ∑_{n=1}^N F_Q[ϱ, j_z^(n)] to the last term and subtract it from the first term to make the expression more similar to Eq. (<ref>). Note that the second term on the right-hand side of Eq. (<ref>) is new in the sense that it did not appear in the bound for states insensitive to homogeneous fields given in Eq. (<ref>). Even if the first term cannot overcome the shot-noise limit, in the second term the covariance is multiplied by the QFI for estimating the homogeneous field, and therefore this term, for extremely correlated particle positions, makes it possible to achieve the Heisenberg scaling.
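The first of the two Observations above can be verified directly for the smallest nontrivial example. The sketch below (ours; the two particle positions are arbitrary toy values) uses the two-qubit singlet, which is insensitive to homogeneous fields, and checks that F_Q[ϱ, H_1] equals (σ^2 - η) ∑_n F_Q[ϱ, j_z^(n)].

import numpy as np

x1, x2 = 0.0, 1.7                                    # assumed fixed positions
jz = np.diag([0.5, -0.5])
I2 = np.eye(2)
jz1, jz2 = np.kron(jz, I2), np.kron(I2, jz)
H1 = x1 * jz1 + x2 * jz2

s = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)     # two-qubit singlet
def fq_pure(A):                                      # F_Q = 4 (Delta A)^2
    return 4 * (s @ A @ A @ s - (s @ A @ s)**2)

mu = (x1 + x2) / 2
sigma2 = (x1**2 + x2**2) / 2 - mu**2
eta = x1 * x2 - mu**2
lhs = fq_pure(H1)
rhs = (sigma2 - eta) * (fq_pure(jz1) + fq_pure(jz2))
print(np.isclose(lhs, rhs))                          # True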
§.§ Precision limit for various spin states

In this section, we present the precision limits for various classes of important quantum states, such as the totally polarized state, the state having the best precision among separable states, the singlet state, the Dicke state (<ref>), and the GHZ state (<ref>). We calculate the precision bounds presented before, (<ref>) and (<ref>), for these systems.

§.§.§ Singlet states

A pure singlet state is a simultaneous eigenstate of the collective J_z and J^2 operators, with an eigenvalue zero for both operators. We will now consider PI singlet states. Surprisingly, the precision bound is the same for any such state. PI singlet states are very relevant for experiments, since they have been created experimentally in cold gases <cit.>, while they also appear in condensed matter physics <cit.>.

Let us now see the most important properties of singlet states of an N-particle system. There are several singlets pairwise orthogonal to each other. The number of such singlets, D_0, depends on the particle spin j and the number of particles N. It is most natural to write the singlet state in the angular momentum basis. The basis states are |J,M_z,D⟩, which are the eigenstates of J^2 = J_x^2+J_y^2+J_z^2 with eigenvalue J(J+1), and of J_z with eigenvalue M_z. The label D is used to distinguish different eigenstates corresponding to the same eigenvalues J and M_z. Then, a singlet state can be written as ϱ_singlet^(s)=∑_{D=1}^{D_0} p_D |0,0,D⟩⟨0,0,D|, where ∑_D p_D=1.

Let us see some relevant single-particle expectation values for the singlet. Due to the rotational invariance of the singlet ϱ_singlet^(s), we obtain that ⟨(j_x^(n))^2⟩=⟨(j_y^(n))^2⟩=⟨(j_z^(n))^2⟩ holds. We also know that for the sum of the second moments of the single-particle angular momentum components (j_x^(n))^2+(j_y^(n))^2+(j_z^(n))^2=j(j+1) holds. Hence, the expectation value of the second moment of the single-particle angular momentum component is obtained as ⟨(j_z^(n))^2⟩=j(j+1)/3.

After discussing the main properties of the singlet states, we can now obtain a precision bound for gradient metrology with such states. For PI spin states living in the singlet subspace, i.e., states composed of vectors that have zero eigenvalues for J_z and J^2 and all their possible statistical mixtures, the precision of the magnetic gradient parameter is bounded as (Δ b_1)^{-2}_singlet|_max = (σ^2-η) N [4j(j+1)/3].

First, we compute the QFI for the one-particle operator j_z^(n), F_Q[ϱ^(s), j_z^(n)]. For that we need the fact that when j_z^(n) acts on a singlet state, it produces a state outside of the singlet subspace. Hence, ⟨0,0,D|j_z^(n)|0,0,D'⟩=0 for any pair of pure singlet states. Then, we use the formula (<ref>) to compute the QFI. The second term of Eq. (<ref>) is obtained as 8∑_{D,D'} [p_D p_{D'}/(p_D+p_{D'})] |⟨0,0,D|j_z^(n)|0,0,D'⟩|^2 = 0, due to Eq. (<ref>).
It follows that the single-particle QFI for any singlet equals four times the second moment of the angular momentum component, F_Q[ϱ_singlet^(s), j_z^(n)] = 4 Tr[ϱ_singlet^(s) (j_z^(n))^2]. Note that Eq. (<ref>) is true even though ϱ_singlet^(s) is a mixed state. Inserting the expectation value of the second moment of the angular momentum component given in Eq. (<ref>) into Eq. (<ref>), we obtain F_Q[ϱ_singlet^(s), j_z^(n)] for any n. Then, we have all the ingredients to evaluate the maximal precision given in Eq. (<ref>), and with that we prove the Observation.

As mentioned earlier, singlet states are insensitive to homogeneous magnetic fields, hence determining the gradient leads to a single-parameter estimation problem. This implies that there is an optimal operator that saturates the precision bound given by Eq. (<ref>). However, it is usually very hard to find this optimal measurement, although a formal procedure for this exists <cit.>. In Ref. <cit.>, a particular setup for determining the magnetic gradient with PI singlet states was suggested, based on the measurement of the collective operator J_x^2. For this scenario the precision is given by the error-propagation formula (Δ b_1)^{-2} = |∂_{b_1}⟨J_x^2⟩|^2 / (⟨J_x^4⟩ - ⟨J_x^2⟩^2). In Appendix <ref>, we show that this measurement provides an optimal precision for gradient metrology for all PI singlets.

§.§.§ Totally polarized state

The totally polarized state can easily be prepared experimentally. It has already been used for gradient magnetometry with a single atomic ensemble <cit.>. For the gradient measurement, as for the measurement of the homogeneous field, the polarization must be perpendicular to the field we want to measure. We choose as before the totally polarized state along the y axis, given in Eq. (<ref>). The relevant variances for the state, (<ref>), are (Δ J_z)^2_tp = Nj/2 and (Δ j_z^(n))_tp^2 = j/2 for all n. Based on Eq. (<ref>), for pure states the QFI is just four times the variance. Hence, from Eq. (<ref>), we obtain F_Q[ϱ, j_z^(n)]=2j and F_Q[ϱ, J_z]=2Nj. Then, the bound on the sensitivity can be obtained from the formula for PI states sensitive to homogeneous fields (<ref>) as (Δ b_1)^{-2}_tp|_max = 2σ^2 Nj. We can see clearly that the precision scales as 𝒪(N) for large N.

Let us now see which measurement could be used to estimate the field gradient with a totally polarized state. The homogeneous field rotates all spins by the same angle, while the gradient rotates the spins at different positions by different angles. Due to that, the homogeneous field rotates the collective spin, but does not change its absolute value. On the other hand, the field gradient decreases the absolute value of the spin, since it has been prepared to be maximal, which has been used in Ref. <cit.> for gradient magnetometry (see Fig. <ref>). Hence, we can measure the spin length to estimate the field gradient.

§.§.§ Best separable state

We now turn our attention to the precision bound for all separable spin states. It is useful to obtain this value so that we have a direct comparison with the best classically achievable precision. It turns out that for j>1/2, it is possible to achieve a precision higher than with the fully polarized state (<ref>). Let us consider general separable states, which are not necessarily PI. We do not know a priori whether the optimal separable state is sensitive or insensitive to the homogeneous field. The corresponding precision bounds for the gradient estimation are given in Eqs. (<ref>) and (<ref>), respectively. Since the probability density function (<ref>) is PI, we have ∫ x_n P(x) dx = μ for all n.
As explained in Sec. <ref>, by moving the ensemble the precision bounds do not change. If we move the system to the origin of the coordinate system, achieving μ=0, we can make our calculations simpler, since the second term appearing in Eq. (<ref>) is zero. Thus, we only compute the first term in Eq. (<ref>). Hence, the two bounds (<ref>) and (<ref>) are the same in this case and we arrive at (Δ b_1)^{-2}_sep|_max = ∑_{n,m} ∫ x_n x_m P(x) dx F_Q[ϱ^(s), j_z^(n), j_z^(m)], where we already assume that the bound can be saturated (see Appendix <ref>).

We now look for the separable state that maximizes the right-hand side of Eq. (<ref>), which has to be a pure product state due to the convexity of the quantum Fisher information. Hence, we look for the pure product state maximizing F_Q[ϱ^(s), j_z^(n), j_z^(m)]. Based on Eq. (<ref>), for product states we find that F_Q[ϱ^(s), j_z^(n), j_z^(m)] = 0 if n≠m, and 4(Δ j_z^(n))^2 if n=m. For all n, a state that maximizes Eq. (<ref>) is |ψ_sep⟩ = [(|-j⟩+|+j⟩)/√2]^{⊗N}, for which the single-particle variances are maximal, i.e., (Δ j_z^(n))^2 = j^2. While we carried out an optimization over general, not necessarily PI separable states, the optimal state is PI. Plugging the state (<ref>) into the bound given in Eq. (<ref>) leads to the precision bound for separable states, (Δ b_1)^{-2}_sep|_max = 4σ^2 N j^2, where we have used the definition of the variance of the particle positions (<ref>) for a permutationally invariant state.

Note that the bound for the best separable state given in Eq. (<ref>) is above the bound obtained for the singlet state (<ref>), whereas the bound for the totally polarized state in Eq. (<ref>) is below. Nevertheless, when the singlet state is used, the homogeneous magnetic field has no effect on the state. In contrast, the state (<ref>) is sensitive to the homogeneous field.

§.§.§ Unpolarized Dicke states |D_N⟩ and |D_N⟩_x

Next, we compute precision bounds for entangled states. In this section, we consider unpolarized Dicke states, which play an important role in quantum optics and quantum information science. The Dicke state |D_N⟩_l [Eq. (<ref>)], with a maximal ⟨J_x^2+J_y^2+J_z^2⟩ and ⟨J_l⟩=0 for any l∈{x,y,z}, is particularly interesting due to its entanglement properties and its metrological usefulness <cit.>. This state has been created in photonic experiments <cit.> and in cold atoms <cit.>, while a Dicke state with ⟨J_z⟩>0 has been created with cold trapped ions <cit.>. The Dicke state |D_N⟩ is an eigenstate of J_z, so it is insensitive to a homogeneous magnetic field pointing in the z direction. Thus, the precision bound can be saturated by some measurement. The Dicke state |D_N⟩_x is sensitive to the homogeneous field. Moreover, it is very useful for estimating the homogeneous field, as has been shown in Ref. <cit.>. Here, we consider large particle numbers to make the results simpler.

Let us now see the most important properties of Dicke states. For the expectation values of the single-particle angular momentum components, ⟨j_l^(n)⟩ = 0 holds for l=x,y,z and for all n. The second moments of the collective angular momentum components are given as ⟨J_x^2⟩=⟨J_y^2⟩=(N/4)(N/2+1) and ⟨J_z^2⟩=0. Let us now see two-body correlations. Since the Dicke state is PI, we have ⟨j_l^(n) j_l^(m)⟩=⟨j_l^(1) j_l^(2)⟩ and ⟨(j_l^(n))^2⟩=⟨(j_l^(1))^2⟩ for all m≠n and l=x,y,z. Hence, the collective second moments are connected to the single-particle and two-particle operator expectation values as ⟨J_l^2⟩=N⟨(j_l^(1))^2⟩+N(N-1)⟨j_l^(1) j_l^(2)⟩ for l = x,y,z. Considering the symmetry under rotations around the z axis, we also have ⟨(j_x^(1))^2⟩ = ⟨(j_y^(1))^2⟩ and ⟨j_x^(1) j_x^(2)⟩ = ⟨j_y^(1) j_y^(2)⟩. Based on these and using Eq. (<ref>) for j=1/2, we arrive at <cit.> ⟨(j_l^(n))^2⟩ = 1/4 for l=x,y,z.
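These moments can be confirmed numerically. The sketch below (our illustration for N = 4 qubits) builds the unpolarized Dicke state explicitly and reproduces ⟨J_x^2⟩ = (N/4)(N/2+1) and ⟨J_z^2⟩ = 0.

import numpy as np
from functools import reduce
from itertools import permutations

N = 4
sx = np.array([[0.0, 1.0], [1.0, 0.0]]) / 2
sz = np.diag([0.5, -0.5])
I2 = np.eye(2)

def collective(op):
    return sum(reduce(np.kron, [op if k == n else I2 for k in range(N)])
               for n in range(N))

Jx, Jz = collective(sx), collective(sz)

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
strings = set(permutations([0, 0, 1, 1]))       # N/2 zeros and N/2 ones
dicke = sum(reduce(np.kron, [up if b == 0 else dn for b in bits])
            for bits in strings)
dicke = dicke / np.linalg.norm(dicke)

print(dicke @ Jx @ Jx @ dicke)   # 3.0 = (N/4)(N/2 + 1)
print(dicke @ Jz @ Jz @ dicke)   # 0.0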
After discussing the main properties of the Dicke states, we can now obtain a precision bound for gradient metrology with such states. For large N, the precision bound for the Dicke state |D_N⟩ is (Δ b_1)^{-2}_D|_max = (σ^2-η) N. For the Dicke state |D_N⟩_x, the precision is bounded as (Δ b_1)^{-2}_{D,x}|_max = (σ^2-η) N + η N(N+2)/2, which in principle allows a Heisenberg-limited behavior due to the second term on the right-hand side.

Let us prove first Eq. (<ref>). Since |D_N⟩ is a pure state, the QFIs appearing in Eq. (<ref>) are simply four times the corresponding variances of j_z^(n). Based on the relations (<ref>) and (<ref>), giving the first and second moments of j_l^(n), respectively, we obtain F_Q[|D_N⟩, j_z^(n)] = 4(Δ j_z^(n))^2 = 1. From Eq. (<ref>) and the bound for states insensitive to the homogeneous field (<ref>), the precision bound for the Dicke state |D_N⟩ follows.

We now prove the bound for the |D_N⟩_x states given in Eq. (<ref>). The second moments ⟨(j_z^(n))^2⟩ for |D_N⟩_x can be obtained from the second moments computed above for |D_N⟩ by relabeling the coordinate axes. Since |D_N⟩_x is a pure state, the QFI again equals four times the corresponding variance. Hence, we obtain F_Q[|D_N⟩_x, j_z^(n)]=1 and F_Q[|D_N⟩_x, J_z]= N(N+2)/2, and using the bound for states sensitive to homogeneous fields given in Eq. (<ref>) we have all we need to prove Observation <ref>.

§.§.§ GHZ state

The GHZ states are defined for qubits in Eq. (<ref>). Such states are very sensitive to the homogeneous field. GHZ states are highly entangled and play an important role in quantum information theory <cit.>. They have been created experimentally in photonic systems <cit.> and with trapped ions <cit.>. Let us see first the relevant expectation values for GHZ states. A direct calculation shows that ⟨j_z^(n)⟩=0 and ⟨J_z⟩=0. Moreover, for the second moments, ⟨(j_z^(n))^2⟩=1/4 and ⟨J_z^2⟩ = N^2/4 hold. Let us now calculate the precision bound. We recall that for pure states the QFI is given by Eq. (<ref>). Using the bound for states sensitive to homogeneous fields given in Eq. (<ref>), we obtain (Δ b_1)^{-2}_GHZ|_max = (σ^2-η) N + η N^2. From Eq. (<ref>) it follows that we can reach the Heisenberg limit with such states, but only in cases where η is positive, i.e., when the particles are spatially correlated.

§.§.§ Summary of results

Finally, we summarize the precision bounds obtained for various quantum states in Table <ref>. In Fig. <ref>, we show the mean values and variances of the collective angular momentum components for these states. Note that for these PI states the optimal estimators for the homogeneous field and the gradient field are compatible (see Appendix <ref>). This means that the two parameters can be estimated at once even for states sensitive to the homogeneous fields.

§ GRADIENT MAGNETOMETRY WITH A BOSE-EINSTEIN CONDENSATE

In this section we study the case when the external state is a Bose-Einstein condensate instead of an incoherent mixture of pointlike particles. We can write the spatial state of a BEC [Eq. (<ref>)] as ϱ^(x)_BEC = |0⟩⟨0|, where we define the state |0⟩ as the pure product state representing the BEC. Since all particles are in the same spatial state, several important quantities describing the ensemble can easily be computed. For such a quantum state, for the average particle position defined in Eq. (<ref>) we obtain μ = ⟨0|x̂^(n)|0⟩ for all n. For the variance given in Eq. (<ref>), σ^2=⟨0|(x̂^(n))^2|0⟩ - μ^2 holds for all n. Furthermore, there is no correlation between the particle positions, i.e., ⟨x̂^(n) x̂^(m)⟩=⟨x̂^(n)⟩⟨x̂^(m)⟩ if n≠m. Hence, the covariance, (<ref>), is zero, η=0. Finally, as explained in Sec. <ref>, the precision bounds do not change if we translate the system. We move the atomic ensemble to the origin of the coordinate system such that μ=0. This will make our calculations much simpler.

Based on Eq. (<ref>), for states insensitive to the homogeneous field we obtain (Δ b_1)^{-2}|_max = ℱ_{11}. Based on Eq. (<ref>), for states sensitive to the homogeneous field we obtain (Δ b_1)^{-2} ≤ ℱ_{11}. Here we used that ℱ_{01}=ℱ_{10}=4μ(Δ J_z)^2 is zero due to Eq. (<ref>). The bounds needed in Eqs. (<ref>) and (<ref>) are equal to each other and can be obtained as follows. We compute ℱ_{11} for pure states. Straightforward algebra leads to ℱ_{11}=4(Δ H_1)^2=4σ^2 Tr[∑_n (j_z^(n))^2 ϱ^(s)]. One can see that the optimal spin state for gradient estimation is the state totally polarized in the z direction, |Ψ⟩_opt,BEC=|j⟩^{⊗N}, which is separable. Hence, the precision is bounded for spin-j particles as (Δ b_1)^{-2}|_max = 4σ^2 N j^2. This is quite surprising, since the dynamics couples to the z component of the spin, and hence the state rotates around the z axis. One would naively expect that the optimal state is the state totally polarized in the y direction (<ref>), studied in Sec. <ref> for the case of cold atomic ensembles. Due to the convexity of the quantum Fisher information, the bounds are also valid for the case of a mixed spin state.
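The role of the spin quantum number in this bound is invisible for j = 1/2, where (j_z^(n))^2 is proportional to the identity, so the sketch below (our toy example with spin-1 particles, N = 3, and an assumed unit spatial variance) compares two product states through ℱ_{11} = 4σ^2 ∑_n ⟨(j_z^(n))^2⟩: the z-polarized state attains 4σ^2 N j^2, while the y-polarized one does not.

import numpy as np

sigma2, N = 1.0, 3                               # assumed spatial variance, mu = 0
jz = np.diag([1.0, 0.0, -1.0])                   # spin-1 j_z
jy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)

ket_z = np.array([1.0, 0.0, 0.0])                # |j=1, m=+1> along z
evals, evecs = np.linalg.eigh(jy)
ket_y = evecs[:, np.argmax(evals)]               # |j=1, m=+1> along y

for label, ket in [("z-polarized", ket_z), ("y-polarized", ket_y)]:
    jz2 = np.real(ket.conj() @ jz @ jz @ ket)    # <(j_z^(n))^2>
    print(label, 4 * sigma2 * N * jz2)           # 12.0 vs 6.0; 4 sigma^2 N j^2 = 12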
Based on Eq. (<ref>), we see that the Heisenberg scaling cannot be reached in this case. Interestingly, this is true for any spatial wave function. For instance, if a single BEC is in a double-well potential, it still cannot have a scaling better than the shot-noise scaling in gradient estimation. In contrast, in Sec. <ref> we have seen that a Heisenberg scaling is possible in a double well if two independent BECs are in the two wells.

§ CONCLUSIONS

In this work, we investigated the precision limits of measuring the gradient of a magnetic field with atomic ensembles arranged in different geometries and initialized in different states. We were particularly interested in how the best achievable precision scales with the number of particles. For spin chains and the two-ensemble case, the precision of the estimation of the gradient can reach the Heisenberg limit. For a single ensemble with localized particles, the shot-noise limit can be surpassed, and even the Heisenberg limit can be achieved if there is a strong correlation between the particle positions. We also studied the case of a single Bose-Einstein condensate, and found that the shot-noise limit cannot be surpassed in this case. However, even if the Heisenberg limit is not reached, single-ensemble methods can have a huge practical advantage compared to methods based on two or more atomic ensembles, since using a single ensemble makes the experiment simpler and can also result in a better spatial resolution. Independently from our work, Ref. <cit.> studied gradient metrology for different configurations of N particles distributed on a line.

We thank J. Calsamiglia, G. Colangelo, R. Demkowicz-Dobrzański, I. L. Egusquiza, O. Gühne, S. Altenburg, S. Wölk, M. Oszmaniec, C. Klempt, M. W. Mitchell, M. Modugno, L. Santos, R. J. Sewell, and A. Smerzi for stimulating discussions. We acknowledge the financial support of the EU (ERC Starting Grant No.
258647/GEDENTQOPT, CHIST-ERA QUASAR, COST Action CA15220, QuantERA CEBBEC), the Spanish Ministry of Economy, Industry and Competitiveness and the European Regional Development Fund FEDER through Grant No. FIS2015-67161-P (MINECO/FEDER), the Basque Government (Project No. IT986-16), the UPV/EHU program UFI 11/55, and the National Research, Development and Innovation Office NKFIH (Contracts No. K124351, No. K124152, and No. K124176). I. U.-L. acknowledges the support of a Ph.D. grant of the Basque Government. Z. Z. was supported by the János Bolyai Scholarship of the Hungarian Academy of Sciences.

§ THE EFFECTS OF THE MOVEMENT OF THE ATOMS ON THE PRECISION

In this paper, we compute the precision bounds neglecting the displacement of the particles generated by the gradient field and the thermal dynamics of the particles. We first analyze the displacement induced by the gradient of the magnetic field, and next we analyze the blurring effects caused by the thermal dynamics.

First of all, let us assume that we have for the internal subspace a completely mixed N-particle state ϱ^(s) placed at a single point in space (see Fig. <ref>). From the famous experiment of Gerlach and Stern <cit.>, we know that the final state is split in two. Moreover, the larger the gradient of the field, the larger the distance between the two final subensembles. Hence, surprisingly, taking into account the movement of the particles induced by the gradient reduces the error of the estimation, so even if we neglect it, our bounds on the precision remain valid. Nevertheless, the gradient induces a force which depends on the spin state of the atoms. The force is constant, thus the position changes quadratically in time, while the spin state changes linearly. Hence, for small enough evolution times the displacement of the particles can be neglected.

Moreover, in a typical experiment for sensing the gradient of the magnetic field, sensitivities of the order of 1pT/µm can be reached, for a gradient of the magnetic field of 100nT/µm <cit.>. Hence, the classical acceleration due to the gradient of the magnetic field is a ≈ g_F μ_B B_1/m, where m and g_F are the mass and the gyromagnetic g-factor of a ^87Rb atom, respectively, m≈87u and g_F≈0.5. This results in an acceleration of the order of 3×10^-2 m/s^2. After 0.5 ms of evolution <cit.>, the atom travels a distance of the order of 10 nm, which is irrelevant compared with the size of these systems.

Next, let us consider the thermalization of the state, which introduces random displacements of the particles, potentially blurring the signal. A typical cigar-shaped ensemble of ^87Rb atoms used for gradient magnetometry is a couple of millimeters long and has a temperature around 20µK <cit.>. We use the formula that connects the root-mean-square velocity of the particles and the temperature, v̄ = √(3k_B T/m). Note that not all the particles move in the same direction but randomly in any direction. Hence, we compute the average of the modulus of the projection of the velocity parallel to the direction of the cloud as |v_∥| = v̄/2. We conclude that the atoms are displaced by around 19µm along the axis of the cloud, which again is irrelevant for clouds of the size of millimeters <cit.>.

Moreover, the displacement due to the gradient and the thermal dynamics can clearly be neglected in the cases of the spin chain, the two ensembles, and the BEC, which are discussed in Secs. <ref> and <ref>. Hence, the precision bounds computed in this paper can be used as a tool to characterize different states.
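The order-of-magnitude estimates above are reproduced by the following back-of-envelope script (using the values quoted in the text for ^87Rb; the acceleration is taken from the text rather than recomputed from the field gradient).

import numpy as np

k_B, u = 1.381e-23, 1.661e-27       # SI units
m = 87 * u                          # mass of a 87Rb atom
a, t, T = 3e-2, 0.5e-3, 20e-6       # acceleration (from text), time, temperature

print(0.5 * a * t**2)               # ~4e-9 m: a few nm of gradient-induced motion
v_mean = np.sqrt(3 * k_B * T / m)   # root-mean-square thermal speed
print(v_mean / 2 * t)               # ~1.9e-5 m: the ~19 um thermal drift quoted above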
Concerning the sensitivity of our magnetometer, we can say the following. Assuming N=8.5×10^6 atoms, a trap length σ=3 mm, and the completely polarized state discussed in Sec. <ref>, we obtain Δ B_1 ≈ 3pT/mm, which is similar to the state of the art of other cold gas magnetometers <cit.>. The precision can be considerably improved if we use entangled states and there is correlation between the particle positions. There are other setups that work at much lower length scales; however, it is difficult to compare them to our system, since they would not work at mm length scales <cit.>.

§ SPATIAL STATE OF THERMALLY DISTRIBUTED POINTLIKE PARTICLES

We discuss the spatial state represented by Eq. (<ref>). For that, let us introduce the position operator as x̂ = ∫ x|x⟩⟨x| dx, where x is a vector of the particle positions, and |x⟩ denotes a spatial state in which the pointlike particles are at given positions, with the usual normalization ⟨x|y⟩=δ(x-y). Based on Eq. (<ref>), we see that x̂|x⟩=x|x⟩. Thus, |x⟩ is an eigenstate of the operator x̂. In order to obtain a normalized quantum state that represents N pointlike particles placed at the locations determined by the vector x, we write |φ_x⟩ = |x⟩/√(⟨x|x⟩). From Eq. (<ref>), introducing the probability distribution function P(x) as the probability of finding the particles at a given position x, we arrive at Eq. (<ref>).

§ CALCULATION OF THE QFI MATRIX ELEMENTS FOR POINTLIKE PARTICLES

In this appendix, we show how to compute the QFI F_Q[ϱ, H_i, H_j] if the spatial part of the state is written as in Eq. (<ref>). Let us write first the density matrix in its eigenbasis as ϱ = ∫ [P(x)/⟨x|x⟩] |x⟩⟨x| dx ⊗ ∑_λ p_λ|λ⟩⟨λ| = ∫ ∑_λ [P(x)p_λ/⟨x|x⟩] |x,λ⟩⟨x,λ| dx, where P(x)p_λ/⟨x|x⟩ are the eigenvalues. Based on Eq. (<ref>), the QFI matrix elements are written as F_Q[ϱ, H_i, H_j] = 2∫ ∑_{λ,ν} (1/⟨x|x⟩) {[P(x)p_λ - P(y)p_ν]^2/[P(x)p_λ + P(y)p_ν]} (H_i)_{x,λ;y,ν} (H_j)_{y,ν;x,λ} dx dy. Note that ⟨x|x⟩ ≡ ⟨y|y⟩ and that the integral is over 2N variables, x and y.

We now use the fact that the generators H_0 and H_1 are diagonal in the spatial basis [see Eqs. (<ref>) and (<ref>)]. Hence, the matrix elements can be rewritten as (H_i)_{x,λ;y,ν} ≡ δ(x-y) (ℋ_i)_{λ,ν} for i = 0,1, where ℋ_0 and ℋ_1 are shorthands for ∑_{n=1}^N j_z^(n) and ∑_{n=1}^N x_n j_z^(n), respectively. Using ⟨x|y⟩ = δ(x-y) and Eq. (<ref>), we write Eq. (<ref>) as F_Q[ϱ, H_i, H_j] = 2∫ ∑_{λ,ν} P(x) [(p_λ - p_ν)^2/(p_λ + p_ν)] (ℋ_i)_{λ,ν} (ℋ_j)_{ν,λ} dx, which, using the definition (<ref>) for F_Q[ϱ^(s), j_z^(n), j_z^(m)], simplifies to Eqs. (<ref>), (<ref>), (<ref>), and (<ref>), depending on the case.

§ OPTIMAL MEASUREMENTS FOR SINGLET STATES

In this appendix, we prove that the precision limits for gradient metrology can be saturated for singlet states if we measure J_x^2. Let the initial spin state of an atomic ensemble be an arbitrary PI singlet state ϱ^(s)_singlet, and consider the experimental setup in which b_1 is obtained by measuring J_x^2. The precision of estimating b_1, given by the error-propagation formula, is optimal in the short-time limit, i.e., lim_{t→0} |∂_{b_1}⟨J_x^2(t)⟩|^2 / (⟨J_x^4(t)⟩ - ⟨J_x^2(t)⟩^2) = F_Q[ϱ^(s), H_1], where J_x^k(t)=U^†(t) J_x^k U(t), the time-evolution unitary operator is of the form U(t)=e^{-ib_1H_1}, and H_1 is defined in Eq. (<ref>). Since for any pure singlet J_x^k|0,0,D⟩=0 holds [Eq. (<ref>)], we have that ⟨J_x^2(0)⟩=⟨J_x^4(0)⟩=0.
For the numerator, we have lim_{t→0} |∂_{b_1}⟨J_x^2(t)⟩|^2 = lim_{t→0} |Tr(∂_{b_1}[e^{i b_1 H_1} J_x^2 e^{-i b_1 H_1}] ϱ^(s))|^2 = |Tr(iH_1 J_x^2 ϱ^(s)) - Tr(iJ_x^2 H_1 ϱ^(s))|^2 = 0. We see that both the numerator and the denominator of the right-hand side of Eq. (<ref>) go to zero as t→0, thus the l'Hospital rule can be used, applying the derivative ∂_{b_1} to both the denominator and the numerator, which yields lim_{t→0} (Δ b_1)^{-2} = lim_{t→0} 2∂^2_{b_1}⟨J_x^2(t)⟩ ∂_{b_1}⟨J_x^2(t)⟩ / [∂_{b_1}⟨J_x^4(t)⟩ - 2⟨J_x^2(t)⟩ ∂_{b_1}⟨J_x^2(t)⟩]. However, here the numerator and the denominator are again zero at t=0, so we employ the l'Hospital rule once again and obtain lim_{t→0} (Δ b_1)^{-2} = lim_{t→0} {2[∂^2_{b_1}⟨J_x^2(t)⟩]^2 + 2∂^3_{b_1}⟨J_x^2(t)⟩ ∂_{b_1}⟨J_x^2(t)⟩} / {∂_{b_1}^2⟨J_x^4(t)⟩ - 2[∂_{b_1}⟨J_x^2(t)⟩]^2 - ⟨J_x^2(t)⟩ ∂_{b_1}^2⟨J_x^2(t)⟩} = lim_{t→0} 2[∂^2_{b_1}⟨J_x^2(t)⟩]^2 / ∂_{b_1}^2⟨J_x^4(t)⟩ = 2⟨[H_1,[H_1,J_x^2]]⟩^2 / ⟨[H_1,[H_1,J_x^4]]⟩ = 4⟨H_1 J_x^2 H_1⟩^2 / ⟨H_1 J_x^4 H_1⟩, where we dropped the expectation values that are zero, used the Heisenberg equation of motion twice for the second derivatives, and simplified the result using Eq. (<ref>) and the definition of the commutator.

Next, we compute the numerator and the denominator in Eq. (<ref>). First of all, using the angular momentum commutation relation [j_z^(n), j_x^(m)] = iδ_{n,m} j_y^(m), we compute [H_1, J_x], obtaining [H_1, J_x] = ∑_{n=1}^N x^(n)[j_z^(n), J_x] = i∑_{n=1}^N x^(n) j_y^(n) =: iH_{1,y}. From the formula [A, B^k] = ∑_{α=1}^k B^{α-1}[A,B]B^{k-α}, and using Eq. (<ref>), we arrive at [H_1, J_x^k] = i∑_{α=1}^k J_x^{α-1} H_{1,y} J_x^{k-α}, and similarly, [H_{1,y}, J_x^k] = -i∑_{α=1}^k J_x^{α-1} H_1 J_x^{k-α}. Now, using the commutator relations (<ref>) and (<ref>) and Eq. (<ref>), we can replace H_1 J_x^k by [H_1, J_x^k], for which only the first term of the summation, α = 1, survives; repeating the procedure for H_{1,y} J_x^{k-1}, we obtain ⟨H_1 J_x^k H_1⟩ = i⟨H_{1,y} J_x^{k-1} H_1⟩ = ⟨H_1 J_x^{k-2} H_1⟩. Hence, we have that ⟨H_1 J_x^k H_1⟩ = ⟨H_1^2⟩ for any even k. Finally, from Eq. (<ref>), we arrive at lim_{t→0} (Δ b_1)^{-2} = 4⟨H_1^2⟩, which for the case of the singlets is equal to F_Q[ϱ^(s), H_1] = 4(Δ H_1)^2, since ⟨H_1⟩ = 0. Hence, the proof follows.

§ PROOF THAT THE PRECISION BOUNDS CAN BE SATURATED

When working with a state that is sensitive to the homogeneous field, in order to estimate the gradient optimally, one must measure the gradient and the homogeneous field simultaneously. In other words, the optimal measurement for the homogeneous field and the optimal measurement for the gradient parameter should commute with each other. In this section, we show that in all the cases we considered the two measurements commute with each other. As a consequence, our bounds on the precision obtained based on the formalism given in Sec. <ref> can be saturated.

In order to proceed, it is necessary to define the symmetric logarithmic derivative (SLD) L(ϱ,A), which has the property that [L(ϱ,A)ϱ + ϱL(ϱ,A)]/2 = i[ϱ,A], and which, for a density matrix with an eigendecomposition of the form (<ref>), is given as L(ϱ,A) = 2i∑_{λ≠ν} [(p_λ-p_ν)/(p_λ+p_ν)] ⟨λ|A|ν⟩ |λ⟩⟨ν|. Quantum metrology then tells us that the condition for being able to construct compatible measurements to estimate b_0 and b_1 is <cit.> [L(ϱ,H_0), L(ϱ,H_1)] = 0. The two SLDs can be obtained as L(ϱ,H_0) = 𝟙^(x) ⊗ L(ϱ^(s), J_z) and L(ϱ,H_1) = ∑_{n=1}^N ∫ x_n |x⟩⟨x| dx ⊗ L(ϱ^(s), j_z^(n)), after reordering the subspaces.
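Before specializing to PI states, the eigenbasis formula for the SLD can be sanity-checked numerically. The sketch below (our illustration on a random full-rank two-qubit state) confirms the defining property [L(ϱ,A)ϱ + ϱL(ϱ,A)]/2 = i[ϱ,A].

import numpy as np

rng = np.random.default_rng(1)

def random_state(d):                       # random full-rank density matrix
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho)

def sld(rho, A):                           # eigenbasis formula given above
    p, U = np.linalg.eigh(rho)
    Ae = U.conj().T @ A @ U
    w = np.zeros((len(p), len(p)), dtype=complex)
    for k, pk in enumerate(p):
        for l, pl in enumerate(p):
            if k != l:
                w[k, l] = 2j * (pk - pl) / (pk + pl) * Ae[k, l]
    return U @ w @ U.conj().T

jz = np.diag([0.5, -0.5])
Jz = np.kron(jz, np.eye(2)) + np.kron(np.eye(2), jz)
rho = random_state(4)
L = sld(rho, Jz)
lhs = (L @ rho + rho @ L) / 2
print(np.allclose(lhs, 1j * (rho @ Jz - Jz @ rho)))   # True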
For all cases when the internal state is permutationally invariant, we arrive at the following expressions for the SLDs: L(ϱ,H_0) = 𝟙^(x) ⊗ L(ϱ^(s), J_z) and L(ϱ,H_1) = μ̂^(x) ⊗ L(ϱ^(s), J_z), where the SLD for the spin state is given as L(ϱ^(s), J_z) = 2i∑_{λ,ν} [(p_λ - p_ν)/(p_λ + p_ν)] ⟨λ|J_z|ν⟩ |λ⟩⟨ν| and the average position operator is defined as μ̂^(x) = (1/N)∑_n ∫ x_n |x⟩⟨x| dx. One can see by inspection that the operators given in Eqs. (<ref>) and (<ref>) commute with each other. Hence, all our bounds for PI states in Sec. <ref> can be saturated, and likewise for all PI states discussed in the other sections.

Finally, we have to discuss the states appearing in Table <ref>. They are product states of two PI states of N/2 particles each. Thus, in terms of “L” and “R,” we have the following expression for L(ϱ, H_1): L(ϱ, H_1) = μ̂^(L) ⊗ 𝟙^(R,x) ⊗ L(|ψ⟩^(L), J_z^(L)) ⊗ (|ψ⟩⟨ψ|)^(R) + 𝟙^(L,x) ⊗ μ̂^(R) ⊗ (|ψ⟩⟨ψ|)^(L) ⊗ L(|ψ⟩^(R), J_z^(R)), where μ̂^(L) is the average position operator for the “L” ensemble, and similarly for μ̂^(R), while J_z^(L) is the z projection of the total angular momentum of the “L” subsystem, as J_z^(R) is for “R.” Clearly, the operator L(ϱ, H_1) commutes with L(ϱ, H_0), which is given in Eq. (<ref>). With this we conclude this appendix, in which we have demonstrated that all the bounds in this paper can be saturated.
Recent studies suggest that the quenching properties of galaxies are correlated over several mega-parsecs. The large-scale “galactic conformity” phenomenon around central galaxies has been regarded as a potential signature of “galaxy assembly bias” or “pre-heating”, both of which interpret conformity as a result of direct environmental effects acting on galaxy formation. Building on the halo quenching framework developed in <cit.>, we discover that our fiducial halo mass quenching model, without any galaxy assembly bias, can successfully explain the overall environmental dependence and the conformity of galaxy colours in SDSS, as measured by the mark correlation functions of galaxy colours and the red galaxy fractions around isolated primaries, respectively. Our fiducial halo quenching mock also correctly predicts the differences in the spatial clustering and galaxy-galaxy lensing signals between the more vs. less red galaxy subsamples, split by the red-sequence ridge-line at fixed stellar mass. Meanwhile, models that tie galaxy colours fully or partially to halo assembly bias have difficulties in matching all these observables simultaneously. Therefore, we demonstrate that the observed environmental dependence of galaxy colours can be naturally explained by the combination of 1) halo quenching and 2) the variation of the halo mass function with environment — an indirect environmental effect mediated by two separate physical processes.

cosmology: observations — cosmology: large-scale structure of Universe — gravitational lensing: weak — methods: statistical

§ INTRODUCTION

Recent studies have shown that the quenching (i.e., the cessation of star-forming activities in galaxies) properties of central galaxies, such as star formation rate (SFR), morphology, neutral hydrogen content, and broad-band colour, are correlated with those of their neighbouring galaxies <cit.>. This so-called “galactic conformity” phenomenon exists over two distinct distance scales between the central galaxy of a primary dark matter halo and its surrounding galaxies, including the true satellite galaxies within the same halo <cit.>[<cit.> compared group-sized halos at fixed total optical luminosity, which they used as a proxy for halo mass.], and galaxies in halos that are a few virial radii away from the primary (∼3) — effects we refer to as “1-halo” and “2-halo” conformities, respectively. In essence, galactic conformity is a manifestation of some unknown environmental effect on galaxy formation, and is closely related to the colour-density or morphology-density relation that has been known for many decades <cit.>. However, it is not clear whether galactic conformity extends to even larger scales, and the underlying driver of this environmental effect remains one of the most important open questions in galaxy formation theory.

The 1-halo conformity is closely related to the physics of galaxy quenching within individual halos. For instance, virial shocks can heat the incoming gas to high temperatures and inhibit star formation if the halo is more massive than a few ×10^12 <cit.>.
Massive clusters can even drive extended distributions of hot halo gas via accretion shocks up to several times the virial radius <cit.>. Feedback from active galactic nuclei (AGN) can also potentially truncate the merger-driven star formation episodes at high redshift <cit.>, and keep the hot gas from cooling efficiently at low redshift <cit.>. In addition, the hot halo gas can strip the gaseous disks from newly accreted satellites <cit.>, while transforming spirals into S0 galaxies <cit.>. The efficiencies of all those physical processes are directly tied to the halo mass, and we will collectively refer to them as “halo quenching”.

Recently, from the weak lensing measurements of the locally brightest galaxies in the Sloan Digital Sky Survey <cit.>, <cit.> discovered a strong bimodality in the average host halo mass of the red vs. blue central galaxies — at fixed stellar mass, red central galaxies preferentially live in halos that are a factor of two (∼2×10^10) to almost ten (>2×10^11) more massive than the ones that host blue centrals <cit.>. <cit.> interpreted this halo mass bimodality as a pronounced signature of halo quenching, and demonstrated that models without an explicit halo quenching are unlikely to reproduce such a strong bimodality in halo mass. For example, in a model that maximizes the so-called “galaxy assembly bias” <cit.> by matching galaxy colours to the formation time of halos <cit.>, blue central galaxies are instead placed in slightly more massive halos than red centrals, due to the weak anti-correlation between the formation time and the mass of halos.

The observed 1-halo conformity effects are consistent with the halo quenching scenario. <cit.> found qualitative agreement between the 1-halo colour conformities observed in SDSS and those predicted by the <cit.> semi-analytic model (SAM), where star formation is regulated by the AGN “radio-mode” feedback with an efficiency that is directly tied to halo mass <cit.>. Using morphology as the quenching indicator, <cit.> found a similar 1-halo conformity effect in SDSS and argued that the hot halo gas in high-mass systems could be responsible for the coherent transformation of galaxy morphologies.

In the 2-halo regime, a possible conformity effect was first detected by <cit.> using photometrically-selected satellite galaxies in SDSS. Switching to the spectroscopic satellite sample, <cit.> found that for primaries with stellar masses around a few ×10^10, strong conformity between the gas-poor primaries and the HI content of satellites persists at projected distances of ∼3, where a direct causal link between the two becomes unlikely; for primaries with stellar masses ∼10^11, the large-scale conformity signal is weak and confined within a couple of virial radii of clusters. One potential problem with the 2-halo conformity detection is that the primary galaxies in <cit.> were selected by an isolation criterion that could include gas-poor low-mass central galaxies within the vicinity of a massive companion <cit.>, or even mis-identify a small fraction of satellites within a larger system as primaries <cit.>. <cit.> explored this contamination issue using the <cit.> SAM mock catalogue, and argued that the impact is too small to account for the observed conformity.
However, using a group-finding algorithm for identifying centrals, <cit.> argued that the 2-halo conformity seen by <cit.> could be artificially boosted by the mis-identified primaries in the sample. <cit.> also discovered that the isolation criteria could include low-mass central galaxies in the vicinity of massive systems, and that the large-scale conformity signal is likely a short-range effect sourced by massive halos. Currently, there are two possible explanations of the 2-halo conformity effect. <cit.> argued that the strong signal around the very low-mass central galaxies (a few ×10^9) favours the “pre-heating” scenario, in which large reservoirs of intergalactic medium (IGM) were heated up to a high entropy level, probably due to bursty star-forming activities at early epochs or spatially-coherent injection of energy from AGN/stellar feedback <cit.>. Despite the lack of robust detections so far <cit.>, the galaxy assembly bias effect can also produce a strong 2-halo conformity signal, by making use of the coupling between halo accretion histories in the same density environment across large scales <cit.>. In essence, both pre-heating and galaxy assembly bias are direct environmental effects on two key parameters of galaxy formation, i.e., the entropy of the IGM and the overall accretion rate of baryons, respectively. Alternatively, an indirect environmental effect, such as the combination of the environmental dependence of the halo mass function and the simple halo quenching mechanism, should also give rise to a large-scale galactic conformity. This third, indirect effect has not been adequately explored in previous studies, partly due to a common misconception that there is no inherent environmental dependence in the standard halo model framework <cit.>. But as articulated by <cit.> and <cit.>, the dependence of halo abundance and formation history on the large-scale density environment is a standard element of the excursion set theory of cosmological structure formation <cit.>, which also allows mass-dependent biasing to be understood within the peak-background split formalism <cit.>. Is halo quenching consistent with the environmental dependence of colours observed in SDSS? In the pioneering work by <cit.>, they demonstrated that a simple Halo Occupation Distribution <cit.> model of galaxy colours, without assembly bias, is able to reproduce the level of environmental dependence of colours in SDSS up to 20 Mpc/h. They were the first to employ the mark correlation functions of colours as a robust measure of the environmental dependence of galaxy quenching, which includes contributions from the large-scale conformity around central galaxies and the correlation between the colours of satellites inside different halos. However, <cit.> made a few overly-simplified assumptions, including that the galaxy colour depends solely on luminosity. As a result, this model does not represent a viable halo quenching model, and is thus unlikely to explain the observed strong halo mass bimodality of central galaxies in SDSS <cit.>. Similarly, <cit.> constructed an HOD mock of galaxy colours that has a similar M_*-dependence of red galaxy fractions as their age-matching mock, but without any galaxy assembly bias. They found that this HOD mock does not produce any conformity effects, as measured by the red galaxy fractions around the red vs.
blue primaries selected in the same way as in <cit.>. <cit.> further argued that a large-scale conformity signal is the smoking-gun evidence of galaxy assembly bias. However, despite the technical differences between the HOD mocks built by <cit.> and <cit.>, it is quite intriguing that a strong overall environmental dependence of colours seen in one may not yield an equally strong conformity signal of centrals in the other. Therefore, it is very important to explore whether a viable halo quenching model within the HOD framework can simultaneously explain the environmental effects in the spatial distribution of galaxy colours and the large-scale colour conformity signals observed in SDSS. In this paper, we build on the best-fitting halo quenching model within the framework developed in Papers I <cit.> and II <cit.> of this series, and employ the mark correlation functions and the red galaxy fractions at fixed M_* as the joint probe of environmental effects in our analysis. To better distinguish different models of galaxy colours, we also investigate the spatial clustering and the g-g lensing of more vs. less red galaxies, split by the red-sequence (RS) ridge-line at fixed M_*. This paper is organized as follows. We briefly describe the framework and the simple halo quenching model in Section <ref>, and introduce our three colour assignment schemes in Section <ref>. We present our main findings in Section <ref> and conclude by summarizing our results and looking to the future in Section <ref>. § THE MODEL AND MOCK GALAXY CATALOGUES The mock galaxy catalogues in this study are built on the model developed in Papers I & II. We will briefly describe its main features below, and refer readers to Papers I & II for more details, including the mathematical framework of the model, the selection of galaxy samples from SDSS, and the measurement of w_p and ΔΣ for those samples. We will also describe the N-body simulations and procedures we employ to generate the mock galaxy catalogues. Readers who are familiar with Papers I & II can skip the next subsection and start from Section <ref>. For the conformity “mark” we focus on the g-r colours (K-corrected to z=0.1), which are measured much more robustly than other quenching indicators like the SFR or the HI gas mass. Studies of environmental effects are particularly sensitive to the volume size, and <cit.> demonstrated that the clustering of low-M_* galaxies in the local volume below z=0.03 is subject to a very severe cosmic variance effect. Therefore, we limit our analyses to galaxies with M_* ⩾ 10^10, so that the minimum redshift range of our volume-limited galaxy samples is z=[0.01, 0.07], i.e., the redshift range of our lowest-M_* sample. The maximum redshift probed by the high-M_* samples is 0.2, and the median redshift of the SDSS galaxies used in this analysis is around 0.1. Throughout this paper and Papers I & II, all the length and mass units are scaled as if the Hubble constant were 100 km/s/Mpc. In particular, all the separations are expressed in co-moving distances in units of either Mpc/h or kpc/h, and the stellar masses and halo masses are in units of M_⊙/h² and M_⊙/h, respectively. We employ the stellar mass estimates from the latest MPA/JHU value-added galaxy catalogue[<http://home.strw.leidenuniv.nl/ jarle/SDSS/>], and convert other stellar masses <cit.> to the MPA/JHU values using the fitting formulae given by <cit.>.
Unless otherwise noted, the halo mass is defined by M_h ≡ M_200m = 200 ρ̅_m (4π/3) r_200m^3, where r_200m is the corresponding halo radius within which the average density of the enclosed mass is 200 times the mean matter density of the Universe, ρ̅_m. §.§ The Model and Simple Halo Mass Quenching The framework aims to describe the probabilistic connection between galaxies and halos, assuming that the enormous diversity in the individual galaxy assembly histories inside similar halos reduces to a stochastic scatter about the mean galaxy-to-halo relation by virtue of the central limit theorem. Therefore, the key is to derive P(𝐠|𝐡), the conditional probability distribution function (PDF) of galaxy properties 𝐠 at fixed halo properties 𝐡, where 𝐠 and 𝐡 are the corresponding vectors that describe the most important sets of galaxy and halo properties. For example, we could include stellar mass, optical colour, SFR, and morphology in 𝐠, and halo mass, concentration, spin, and tidal environment in 𝐡. In Papers I and II, we applied the model to the SDSS main spectroscopic sample, and successfully mapped the red and blue galaxies at different stellar masses to their underlying halos. In this first-cut analysis, we adopted a binary colour variable b_g-r, by classifying each galaxy into either red (b_g-r=1) or blue (b_g-r=0) based on an M_*-dependent colour split (g-r)_split|_M_* = 0.8 (lg M_*/10.5)^0.6. We then derived the conditional probability distribution of halo mass for galaxies at fixed stellar mass and colour category, P(M_h | M_*, b_g-r), from the stellar mass and colour dependence of the galaxy clustering (w_p) and g-g lensing (ΔΣ) measurements. Thanks to the probabilistic nature of the model, we are able to include ∼80% more galaxies in the analysis than traditional HOD methods, while accounting for the incompleteness of galaxy samples in a statistically consistent fashion. In practice, we first derive the overall stellar-to-halo connection P(M_h | M_*) in Paper I from the stellar mass dependence of w_p and ΔΣ. In Paper II, we describe the halo quenching effect statistically using the red galaxy fractions of centrals and satellites as functions of M_h (cf. equations 12 and 13 of Paper II), f_red^cen(M_h) = 1 - f_blue^cen(M_h) = 1 - exp[-(M_h/M^q_cen)^μ_cen], and f_red^sat(M_h) = 1 - f_blue^sat(M_h) = 1 - exp[-(M_h/M^q_sat)^μ_sat], where M^q_cen and M^q_sat are the critical halo masses responsible for triggering the quenching of centrals and satellites, respectively, and μ_cen and μ_sat are the respective powered-exponential indices controlling the transitional behaviour of halo quenching across the critical halo masses. Therefore, by combining Equations <ref> and <ref> with the overall P(M_h | M_*), we arrive at a complete model for P(M_h | M_*, b_g-r). More importantly, the best-fitting P(M_h | M_*, b_g-r) successfully predicts the strong bimodality in the host halo mass distributions of the red and blue galaxies in SDSS <cit.>, which implies a dominant halo quenching mechanism that turns on in halos above a critical mass ≃1.5×10^12 (with different powered-exponential indices for centrals and satellites). This success is highly non-trivial, as many alternative models that strive to explain galaxy colours fail this test <cit.>. One of the interesting extensions of the current model, expressed by P(M_h | M_*, b_g-r), is to add an important secondary halo property, such as concentration c, to see whether it would provide a more comprehensive description of the observed galaxy colours, especially when g-r is included as a continuous variable instead of a binary one. This extension, expressed by P(M_h, c | M_*, g-r), also represents a useful formalism for including galaxy assembly bias when connecting galaxy colours to halos, because concentration is one of the best indicators for halo assembly bias <cit.>.
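As a concrete illustration, the powered-exponential red fractions above are easy to evaluate numerically. The following is a minimal Python sketch (not our analysis pipeline; the function and variable names are ours), using the best-fitting parameter values quoted in the next subsection; satellite labels follow identically with the satellite parameters:

```python
import numpy as np

# Best-fitting values quoted in the next subsection:
# {lg M^q_cen, mu_cen, lg M^q_sat, mu_sat} = {11.78, 0.41, 12.19, 0.24}
LG_MQ_CEN, MU_CEN = 11.78, 0.41
LG_MQ_SAT, MU_SAT = 12.19, 0.24

def f_red(lg_mh, lg_mq, mu):
    # Powered-exponential quenched fraction: 1 - exp[-(M_h/M^q)^mu]
    return 1.0 - np.exp(-(10.0 ** (lg_mh - lg_mq)) ** mu)

rng = np.random.default_rng(42)
lg_mh = rng.uniform(11.0, 15.0, 100000)        # toy halo masses (lg M_h)
p_red = f_red(lg_mh, LG_MQ_CEN, MU_CEN)        # quenched probability of centrals
is_red_cen = rng.random(lg_mh.size) < p_red    # Bernoulli red/blue labels
print("toy red central fraction:", is_red_cen.mean())
```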
§.§ From the Model to Mock Galaxy Catalogues In Papers I & II we adopted an analytic method to predict the projected galaxy correlation function w_p and g-g lensing signal ΔΣ at fixed M_* for the red and blue galaxies in SDSS. In this paper, a fully analytic approach will not enable us to easily address the question of interest, so we will predict w_p, ΔΣ, as well as the colour-mark correlation function M_g-r and the red galaxy fraction as a function of projected distance R, using direct measurements from the mock galaxy catalogues generated under the model <cit.>. In order to cover a dynamic range of two orders of magnitude in stellar mass (10^10-10^12), we employ two cosmological N-body simulations that are evolved from the same WMAP5 cosmological parameters <cit.>, but with complementary sets of mass resolutions and volumes. For mock galaxies with lg M_* < 10.6, we use the <cit.> simulation because of its higher mass resolution (1.35 × 10^8). Its volume is too small ((250 Mpc/h)^3) to overcome the cosmic variance of more massive galaxies, so we use the larger simulation of <cit.> for deriving mock galaxies with lg M_* ≥ 10.6. In both simulations, we make use of the halo catalogues identified by the <cit.> spherical overdensity halo finder at z = 0.1, the median redshift of our SDSS galaxy samples. Since the cosmology assumed in Papers I & II is slightly different from the WMAP5 values used by the simulations, we have re-calculated the constraints on the global model parameters and the halo quenching model using the WMAP5 cosmology. We have also updated our analytic model in Paper I by including the so-called “residual redshift-space distortion” effect in w_p, using the correction method described in <cit.>. The main change in cosmology is the increase of σ_8, which we anticipate to affect primarily δ, the high-mass end slope of the mean stellar-to-halo mass relation (SHMR), but cause very little change to the slope at the low-mass end or the scatter about the mean SHMR <cit.>. Therefore, we only vary δ during the new fit, while keeping the other model parameters unchanged from the constraints listed in table 2 of Paper I. The new best-fitting δ is 0.44 (0.42 in Paper I). The new best-fitting halo quenching parameters are {lg M^q_cen, μ_cen, lg M^q_sat, μ_sat} = {11.78, 0.41, 12.19, 0.24}, slightly different from Paper II (cf. table 2). Conceptually, mock galaxies can be generated from simulated halo catalogues by drawing M_* and b_g-r jointly from P(M_*, b_g-r | M_h), which can be trivially derived from P(M_h | M_*, b_g-r) using Bayes' theorem. But since the centrals and satellites follow distinct stellar-to-halo mass relations, in practice it is more convenient to assign stellar masses to the centrals and satellites separately, then label them red or blue, and at last give them positions and velocities. We describe the three steps in turn below. * The first step is to assign stellar masses. Using the best-fitting model parameters as input, we derive the mean SHMR and its logarithmic scatter for central galaxies (cf. Fig. 10 of Paper I), and the conditional stellar mass functions (CSMF) for satellites in halos of different masses (ϕ(M_*^sat | M_h); cf. Fig. 12 of Paper I).
For each simulated halo with mass M_h, we randomly draw the stellar mass of its central galaxy from the log-normal distribution P(M_*^cen | M_h) specified by the combination of the SHMR and its logarithmic scatter at M_h; we then assign it a set of satellite galaxies, whose stellar mass distribution follows ϕ(M_*^sat | M_h). * Secondly, for each central (satellite) galaxy residing in a halo of M_h, we label it red or blue according to the mean red central (satellite) galaxy fraction at that halo mass (i.e., Equations <ref> and <ref>), predicted by the best-fitting halo quenching parameters. The label indicates whether the galaxy colour is above or below (g-r)_split, but without a particular g-r value. * We predict the relative positions and velocities of mock galaxies with respect to the halo centre as follows (a position-sampling sketch follows at the end of this subsection). For each main halo with concentration c, radius r_200m, and 3D dark matter velocity dispersion σ, the central galaxy is placed at the centre, while the satellite positions are assigned randomly according to an isotropic NFW profile with c_g ≡ f_c × c and a cut-off at r_200m. The galaxy velocities are assigned based on the galaxy velocity bias model described in <cit.>, where the relative velocities of central and satellite galaxies follow Gaussian distributions with zero means and standard deviations of σ_c = α_c σ and σ_s = α_s σ, with α_c = 0.20 and α_s = 1.00, respectively. Note that we do not fit the observed monopole and quadrupole of the correlation functions for the values of α_s and α_c, as was done in <cit.>, because our analyses focus on projected quantities and the impact of peculiar velocities on those quantities is minimal. It is worth noting that by adopting a halo boundary of r_200m, we have implicitly assumed that the halo quenching effects have a sharp transition across r_200m without extending beyond individual halos in those mocks, which is not necessarily an unreasonable assumption <cit.>. We have also ignored the radial segregation of satellite stellar mass <cit.> and colour <cit.> in this study. Whether a true 2-halo environmental dependence or galactic conformity emerges on scales beyond 3 Mpc/h is independent of the choice of the halo quenching boundary or the segregation effects, as they would only modify the shape of the mark correlation functions and conformity signals in the 1-halo regime. As a sanity check, Figure <ref> compares the projected correlation functions w_p measured from the SDSS (dashed curves with shaded uncertainty bands) and the mock galaxy samples (data points with errorbars), for red and blue galaxies within three different stellar mass bins. Clearly, the mock galaxy samples provide an excellent description of the spatial clustering of the red and blue galaxies in SDSS. In addition, the g-g lensing signals (not shown here, but see Fig. 8 in Paper II) predicted by the red and blue mock galaxies also agree with those measured from SDSS very well. Taking advantage of the realistic red galaxies in the mock catalogue, <cit.> constructed a mock cluster catalogue that mimics the clustering of the redMaPPer clusters <cit.> and predicted the observed level of cluster assembly bias in SDSS. We emphasize that it is imperative for the red and blue mock galaxies to accurately reproduce the observed auto-correlation (w_p) and cross-correlation with the dark matter (ΔΣ), as demonstrated by Figure <ref>. For any mock galaxy catalogue that fails to recover w_p and ΔΣ, systematic discrepancies in these two observables could propagate into the systematic uncertainties of the mark correlation functions and the red galaxy fractions around isolated primaries, rendering the physical interpretation of those measurements inconclusive.
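The position-sampling step referred to above can be illustrated with inverse-CDF sampling of the NFW enclosed-mass profile. This is a minimal sketch under our own conventions (dimensionless units, no velocity assignment), not the actual mock-making code:

```python
import numpy as np

def sample_nfw_positions(n_sat, c_g, r200m, rng):
    """Draw isotropic satellite positions from an NFW profile truncated at
    r200m, by inverting the enclosed-mass function
    mu(x) = ln(1+x) - x/(1+x), with x = c_g * r / r200m."""
    x = np.linspace(1e-4, c_g, 2048)
    mu = np.log(1.0 + x) - x / (1.0 + x)          # monotonically increasing
    u = rng.uniform(0.0, mu[-1], n_sat)
    r = np.interp(u, mu, x) * (r200m / c_g)       # radii via inverse CDF
    cos_t = rng.uniform(-1.0, 1.0, n_sat)         # isotropic angles
    phi = rng.uniform(0.0, 2.0 * np.pi, n_sat)
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    return np.column_stack((r * sin_t * np.cos(phi),
                            r * sin_t * np.sin(phi),
                            r * cos_t))

rng = np.random.default_rng(1)
pos = sample_nfw_positions(1000, c_g=5.0, r200m=1.0, rng=rng)  # toy halo
```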
§ ASSIGNING GALAXY COLOURS The simple halo quenching model described above allows us to label each mock galaxy as red or blue, i.e., to predict whether the g-r colour of that galaxy is above or below (g-r)_split. Although this binary split is adequate for the purpose of distinguishing different quenching models in Paper II, it is insufficient for studying the environmental dependence of colours, which is sensitive to small yet spatially coherent variations of galaxy colours. Therefore, we need to further assign specific g-r values to the mock galaxies before comparing them to the data. Most importantly, we need to make sure that the colour distribution of mock galaxies at any given stellar mass, P(g-r | M_*), is consistent with SDSS. <cit.> found a significant discrepancy between the conformity observed in SDSS and that predicted by a cosmological hydrodynamic simulation <cit.>; but since the detailed SFR and colour distributions in the simulation are quite different from those in the observations, it is unclear whether the existing physical processes in the simulation should be capable of reproducing the correct large-scale conformity, or some key environmental quenching mechanism is missing. In this analysis, we aim to eliminate this ambiguity by drawing conclusions based on mock galaxies that are generated with different quenching physics but have a P(g-r | M_*) identical to that of the real galaxies. In this section, we will derive P(g-r | M_*) by fitting double-Gaussian PDFs to the SDSS colour distributions in Section <ref>, and comment on the origin of scatter in galaxy colours in Section <ref>. We then describe the three colour assignment schemes in turn in Section <ref> and examine the theoretical sources of conformity in those schemes in Section <ref>. §.§ Modelling the Bimodal Distributions of Colour To accurately measure the underlying colour distributions in SDSS, we select a suite of volume-limited stellar mass-binned samples, with an equal bin width of 0.1 dex starting at lg M_* = 10.0. The stellar mass limit adopted in this study is the same as the “mixture limit” M_*^mix(z) defined in Paper I, lg M_*^mix(z) = 5.4 (z-0.025)^0.33 + 8.0, corresponding to the characteristic stellar mass at which the average galaxy colour ⟨g-r⟩ sharply transitions from below to above 0.8 at any given z. As explained in Paper I, galaxy samples selected above this limit can be regarded as being approximately volume-complete. As a result, we measure the colour distributions from the suite of stellar mass-binned samples, and use them directly as the input data for fitting P(g-r | M_*). Following previous studies of colour bimodality <cit.>, we employ the double-Gaussian function as the analytic form of P(g-r | M_*), P(g-r | M_*) = f'_red(M_*) 𝒩^red + [1-f'_red(M_*)] 𝒩^blue, where 𝒩^red and 𝒩^blue represent the red and blue Gaussian components, respectively, while f'_red(M_*) is the fraction of the red Gaussian component. Note that f'_red(M_*) is an unknown free parameter in the double-Gaussian model, different from the usual red galaxy fraction f_red(M_*) = ∫_(g-r)_split|_M_*^+∞ P(g-r | M_*) d(g-r), which can be directly measured from SDSS. During the fitting, we not only minimize the χ² between the double-Gaussian model and the colour distribution measured from each volume-limited sample, but also ensure that the predicted f_red is equal to the value measured from SDSS. We impose the latter primarily for the sake of consistency between the colour assignments below and the halo quenching model described in Section <ref>, which is based on the division of red vs. blue galaxies using (g-r)_split.
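A minimal sketch of this constrained fit is given below, using a simple penalty term to enforce the f_red condition; the bin number, penalty weight, and starting point are arbitrary choices of ours, not the values used in our actual fits:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_double_gaussian(colours, split, f_red_obs):
    """Fit P(g-r) = f' N^red + (1-f') N^blue to a colour sample, penalizing
    any mismatch between the model red fraction above `split` and f_red_obs."""
    hist, edges = np.histogram(colours, bins=40, density=True)
    mid = 0.5 * (edges[1:] + edges[:-1])

    def loss(p):
        fr, mr, sr, mb, sb = p
        model = fr * norm.pdf(mid, mr, sr) + (1 - fr) * norm.pdf(mid, mb, sb)
        chi2 = np.sum((model - hist) ** 2)
        # model prediction for the directly observable red fraction
        f_red_mod = fr * norm.sf(split, mr, sr) + (1 - fr) * norm.sf(split, mb, sb)
        return chi2 + 1e3 * (f_red_mod - f_red_obs) ** 2   # soft constraint

    p0 = [0.5, split + 0.1, 0.05, split - 0.2, 0.1]
    bounds = [(0, 1), (None, None), (1e-3, None), (None, None), (1e-3, None)]
    return minimize(loss, p0, bounds=bounds, method="L-BFGS-B").x

# toy usage with a synthetic bimodal colour sample
rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(0.9, 0.05, 6000), rng.normal(0.6, 0.1, 4000)])
params = fit_double_gaussian(sample, split=0.8, f_red_obs=np.mean(sample > 0.8))
```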
Figure <ref> compares the best-fitting double-Gaussians (gray curves) with the colour distributions (black histograms) measured from eight selected stellar mass samples, with the stellar mass range, redshift range, and red galaxy fraction of each sample indicated on the top left of every panel. The red solid and blue dashed distributions indicate the red and blue Gaussian components, and the orange dotted vertical lines indicate the red vs. blue division defined by (g-r)_split. In general, the best-fitting models of P(g-r | M_*) provide an adequate description of the observed colour distributions across all measured stellar mass bins, showing only very minor discrepancies in the so-called “green valley” where the two components overlap. Figure <ref> summarizes the result of the fitting. In the top panel, the red solid and blue dotted lines indicate the means of the red and blue Gaussian components, respectively, while each colour-shaded band represents the scatter about the respective mean. The orange dashed line indicates (g-r)_split, the red vs. blue division used for defining f_red. Clearly, there is progressively more overlap between the two Gaussian components at higher mass, mainly due to the two means approaching each other. As a result, there is a discrepancy of ∼0.10 between f_red and f'_red above lg M_* = 10, as shown by the two curves in the bottom panel. The strong overlap makes it difficult to distinguish the truly quiescent vs. active galaxies based on colour at the high-mass end (discussed further later). We will assign galaxy colours in two steps. In the first step, we generate an ensemble of mock colours at each M_* (adopting a bin size of Δ lg M_* = 0.1) by drawing random g-r values from the best-fitting analytic P(g-r | M_*). Within each narrow stellar mass bin, we divide those mock colours into red and blue, so that the colours of red galaxies follow P^red(g-r | M_*) = P(g-r | M_*)/f_red(M_*) if g-r ⩾ (g-r)_split, and 0 otherwise, while the colours of blue galaxies follow P^blue(g-r | M_*) = P(g-r | M_*)/[1-f_red(M_*)] if g-r < (g-r)_split, and 0 otherwise. We then distribute those red (blue) colours among the red (blue) galaxies based on the three different schemes below. In this way, a red (blue) galaxy will retain its red (blue) label across the three catalogues, but obtain a different red (blue) g-r colour depending on the relative strength of halo quenching vs. galaxy assembly bias assumed in each catalogue.
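Before turning to the three schemes, here is a minimal sketch of drawing colours from the truncated components above, by oversampling the full mixture and splitting it at (g-r)_split (toy parameters of ours; not the actual pipeline):

```python
import numpy as np

def draw_colours(n_red, n_blue, params, split, rng):
    """Draw n_red colours from P^red (renormalized upper tail) and n_blue
    from P^blue (renormalized lower tail) of the double-Gaussian mixture.
    params = (f_red_prime, mu_red, sig_red, mu_blue, sig_blue)."""
    fr, mu_r, sig_r, mu_b, sig_b = params
    n = 50 * (n_red + n_blue)                  # oversample the full mixture
    from_red = rng.random(n) < fr
    draws = np.where(from_red,
                     rng.normal(mu_r, sig_r, n),
                     rng.normal(mu_b, sig_b, n))
    red = draws[draws >= split][:n_red]        # realizes P^red above the split
    blue = draws[draws < split][:n_blue]       # realizes P^blue below the split
    return red, blue

rng = np.random.default_rng(3)
red_gr, blue_gr = draw_colours(500, 500, (0.55, 0.9, 0.05, 0.6, 0.1), 0.8, rng)
```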
§.§ Origin of Scatter in the Colour-Stellar Mass Relations In essence, our goal is to find out which halo properties are the most responsible for driving the intrinsic scatter in the colour–stellar mass relation (CSMR) of red or blue galaxies. Below 10^11, the scatter in the red/blue galaxy CSMR at any given M_* is dominated by that of the intrinsic stellar colours of galaxies, which reflects the diversity in the integrated star-formation histories of those red/blue galaxies at that stellar mass. Starting from lg M_* = 11, however, there is a sudden increase in the scatter of the blue component, as shown by the blue shaded band in the top panel of Figure <ref>. This increase is unlikely to be intrinsic; it is instead caused by a combination of strong dust reddening in some edge-on spirals and starbursts triggered by major mergers. The internal dust reddening in spirals only becomes prominent at the high-mass end, probably because such galaxies have accumulated significant dust during past star formation activities and are physically large, with long path lengths through the edge-on discs <cit.>. Ideally, we would prefer using the intrinsic stellar colours that are corrected for internal dust. <cit.> tried to estimate the internal dust reddening empirically from the internal extinctions (A_V) given by stellar population synthesis fits <cit.>. After subtracting this estimated reddening, they found a substantial decrease in the dispersion of the blue component at high M_*, mostly due to the large corrections to dusty star-forming systems. However, it is quite difficult to assess the systematic uncertainties associated with the assumed dust properties, especially the propensity to over-correct systems with little or no dust <cit.>. Therefore, for the sake of simplicity, we will not apply any reddening correction to the g-r colours in this analysis, and assume that the extrinsic scatter can be effectively accounted for by reducing the cross-correlation between the observed colour and halo properties. For galaxies with M_* > 10^11, the cross-correlation is further diluted by the extra extrinsic scatter and the severe overlap between the red and blue components. §.§ Colour Assignment Schemes With the mock colours generated from P^red and P^blue, we now proceed to construct three more comprehensive halo quenching mock catalogues for our analysis in Section <ref>. §.§.§ Baseline Halo Quenching The baseline halo quenching mock catalogue corresponds to our null hypothesis — the scatter within either colour population is independent of halo properties, despite the fact that the blue-to-red transformation is statistically driven by halo mass. In this case, the relative colour of a red galaxy with respect to the RS ridge-line is independent of its host halo properties, and the galaxies redder than the ridge-line (hereafter referred to as the “more red” galaxies) would live in similar halos as the less red galaxies (hereafter referred to as “less red”). Likewise in the blue population, the bluer half (“more blue”) would mix well with the less blue half (“less blue”) in terms of their dark matter habitats. To build such a catalogue, we simply assign the red g-r colours randomly to the red mock galaxies by drawing from Equation (<ref>) <cit.>, and likewise the blue colours to the blue galaxies using Equation (<ref>). This baseline halo quenching mock should exhibit the minimum level of environmental dependence of colours and colour conformity around centrals among all the halo quenching mocks. §.§.§ Fiducial Halo Quenching Within the halo quenching model, there are at least two channels through which the environmental dependence of colours could be boosted beyond the baseline mock. One possibility is that halo mass remains the key quantity in “tinting” the colour of a red or blue galaxy, so that the more blue, less blue, less red, and more red galaxies at fixed M_* live in progressively more massive halos. This naturally extends the simple halo quenching model from modelling a binary colour variable to a continuous one. To build such an extension, we introduce the cross-correlation coefficients between colour and halo mass at fixed stellar mass, ρ^cen_m and ρ^sat_m, as our two new parameters. As the red fraction of central galaxies is a steeper function of halo mass than that of satellites, we choose a higher value for ρ^cen_m (0.5) than for ρ^sat_m (0.3) for galaxies with lg M_* < 11; for the reasons outlined in Section <ref>, at lg M_* ⩾ 11 we reduce the value of ρ^sat_m to zero, while keeping the value of ρ^cen_m at 0.5 — we assume that the increase in the extrinsic scatter of colour is mainly due to the enhanced activity level of satellites in massive halos. We hereafter adopt this catalogue as our fiducial halo quenching mock. A rank-based sketch of this colour–halo-mass coupling is given below.
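A Gaussian-copula construction is one simple way to realize such a cross-correlation coefficient. The sketch below is our own construction, and the target coefficient is only approximately recovered as a rank correlation; it redistributes the pre-drawn colours among galaxies according to a noisy rank of lg M_h:

```python
import numpy as np
from scipy.stats import norm, rankdata

def assign_with_correlation(values, drivers, rho, rng):
    """Distribute pre-drawn `values` (e.g., red g-r colours) among galaxies so
    that their rank correlation with `drivers` (e.g., lg M_h) is roughly rho."""
    n = len(values)
    u = (rankdata(drivers) - 0.5) / n                    # uniform ranks in (0,1)
    z = rho * norm.ppf(u) + np.sqrt(1.0 - rho ** 2) * rng.normal(size=n)
    ranks = np.argsort(np.argsort(z))                    # rank of the noisy score
    return np.sort(values)[ranks]                        # higher z -> redder colour

# toy usage: fiducial-style coupling for centrals (rho = 0.5)
rng = np.random.default_rng(7)
lg_mh = rng.normal(12.0, 0.5, 1000)
red_colours = rng.normal(0.9, 0.05, 1000)
coupled = assign_with_correlation(red_colours, lg_mh, rho=0.5, rng=rng)
```

Setting rho = 0 recovers the random (baseline) assignment, and rho = 1 reduces to strict rank-order matching.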
§.§.§ “Assembly-Biased” Halo Quenching Alternatively, galaxy assembly bias could coherently modulate the quenching processes within the same large-scale environment, making galaxies slightly redder (bluer) in over (under)-dense regions than in the field. If such a galaxy assembly bias effect is mediated via some secondary halo property other than M_h, we would expect a strong correlation between galaxy colour and that mediator. For example, one might speculate that older halos host slightly redder galaxies than younger ones at the same mass, if they have had more time forming and quenching galaxies in denser environments. However, the theoretical connection between any secondary halo property and galaxy formation remains obscure. But for the purpose of our study, it is not necessary to distinguish which halo property is the true underlying mediator, as long as it is a good indicator of halo assembly bias that strongly correlates with large-scale overdensity. Halo assembly bias reveals itself as the dependence of halo clustering on a variety of secondary halo properties, the three most prominent of which are concentration, age, and spin <cit.>. Among the three, the age dependence is much weaker than the other two on the high-mass end <cit.>, while the impact of halo spin on galaxy formation is likely the most complicated <cit.>, depending on the spin-tidal field alignment <cit.> and the halo merger history <cit.>. Therefore, we will focus on halo concentration as our proxy for galaxy assembly bias. To remove the anti-correlation between halo concentration and mass <cit.>, we define a new variant of the concentration parameter for each halo with mass M_h, ln c_assem = ln c - ⟨ln c(M_h)⟩ if M_h ⩽ M^nl, and -(ln c - ⟨ln c(M_h)⟩) if M_h > M^nl, where c is the standard halo concentration parameter, ⟨ln c(M_h)⟩ is the average logarithmic concentration of all halos at M_h, and M^nl is the characteristic non-linear mass scale (lg M^nl = 12.8 in our mocks). Since halo concentration follows a log-normal distribution at fixed halo mass, and the logarithmic scatter is roughly constant with halo mass, there is little residual correlation between c_assem and M_h. In addition, we flip the sign inside the parentheses at M^nl, because the cross-correlation coefficient between halo concentration and overdensity goes from being positive to negative across M^nl <cit.>. Therefore, this new parameter c_assem serves as our proxy for halo assembly bias, and is positively correlated with large-scale overdensity on all mass scales. Similar to the fiducial halo quenching mock, we introduce two new parameters, ρ^cen_c and ρ^sat_c, as the cross-correlation coefficients between colour and c_assem for the centrals and satellites, respectively. For our assembly-biased halo quenching catalogue, we set both cross-correlation coefficients to unity, thereby maximizing the level of conformity that can be achieved by turning on galaxy assembly bias within halo quenching.
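In code, the c_assem transformation amounts to a sign-flipped residual about the mean concentration-mass relation. A minimal sketch, with a toy mean ln c(M_h) relation of our own choosing, is:

```python
import numpy as np

LG_M_NL = 12.8   # characteristic non-linear mass scale quoted for the mocks

def c_assem(ln_c, lg_mh, mean_ln_c):
    """Mass-free assembly-bias proxy: the residual of ln(c) about the mean
    ln-concentration at that halo mass, sign-flipped above M^nl so that
    c_assem correlates positively with overdensity on all mass scales."""
    resid = ln_c - mean_ln_c(lg_mh)
    return np.where(lg_mh <= LG_M_NL, resid, -resid)

# toy usage with an illustrative (not fitted) power-law c(M) relation
mean_ln_c = lambda lg_m: np.log(9.0) - 0.1 * np.log(10.0) * (lg_m - 12.0)
rng = np.random.default_rng(11)
lg_mh = rng.uniform(11.0, 15.0, 5)
ln_c = mean_ln_c(lg_mh) + rng.normal(0.0, 0.25, 5)   # log-normal scatter in c
print(c_assem(ln_c, lg_mh, mean_ln_c))
```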
We emphasize that our assembly-biased halo quenching mock is fundamentally different from a pure assembly-bias quenching mock like, e.g., the age-matching mock of <cit.>. At fixed M_*, in the assembly-biased halo quenching mock, the blue-to-red transition is statistically determined by the simple halo quenching model of Paper II, while the more vs. less red (blue) colours are driven by halo assembly bias via c_assem; in the age-matching mock, however, the colour of a galaxy at fixed M_* depends almost exclusively on its halo assembly bias via a characteristic redshift z_starve. For most of the centrals (below 10^11), z_starve is equivalent to the formation redshift of the subhalos, but at very high M_* it largely corresponds to the first epoch at which the halo mass exceeds 10^12. §.§ Theoretical Sources of Conformity in Mock Catalogues Before delving into our main results in the next section, we explore the two theoretical sources of conformity in our three halo quenching mock catalogues in Figure <ref>. In the top panel, we show the 3D real-space correlation functions (ξ_hh) of all halos above M_h = 10^11 (black circles), along with those of halos in the upper (solid) and lower (dashed) quartiles of the distributions in M_h (red) and c_assem (blue). The middle panel shows the ratios between the ξ_hh of the two halo subsamples selected by M_h (red squares) and c_assem (blue triangles), respectively. As expected from the standard theory of halo biasing <cit.>, the high-mass halos have a stronger clustering strength than the low-mass ones on all scales, while the halo assembly bias effect is illustrated by the high vs. low-c_assem halos — the clustering biases of these two halo subsamples are different despite having the same average mass (⟨lg M_h⟩ ≃ 11.45). However, the bias ratios here do not give us full information on the potential conformity signal that can be induced by M_h or c_assem. On the contrary, the mark correlation function is an ideal tool for revealing the environmental dependence of halo/galaxy properties. The 3D real-space mark correlation function is measured as ℳ^3d(r) = WW/DD = [1+W(r)]/[1+ξ(r)], where WW and DD are the mark-weighted and unweighted number counts of pairs with 3D distance r <cit.>, while W(r) and ξ(r) are the mark-weighted and unweighted real-space correlation functions, respectively. Since the marks are normalized to have a mean of unity, if the distribution of marks is independent of environment, ℳ^3d(r) should be unity on all scales due to the lack of conformity. Otherwise, if the marks of two objects are correlated over some scale, ℳ^3d(r) will deviate above unity at that scale. The bottom panel of Figure <ref> shows the 3D real-space mark correlation functions of halos, using the rank-orders of M_h (red squares) and c_assem (blue triangles) as marks. We use rank-orders so that the distributions of the two marks are the same uniform distribution from 0 to 2, making a direct comparison between the two mark correlations feasible <cit.>. Both mark correlation signals decline with radius, but stay significantly above unity on scales up to 15 Mpc/h, exhibiting strong conformities in the 2-halo regime. Therefore, we expect that an environmental dependence of colours (including a 2-halo colour conformity) would naturally arise in all three halo quenching mocks, but the overall amplitude and scale-dependence would differ from one to another, due to the varying relative strength of the mass effect and assembly bias in each mock.
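For concreteness, the estimator in the equation above reduces to ratios of weighted and unweighted pair counts. A brute-force O(N²) sketch for a small point set (our own illustration; not suitable for the full simulation volumes, which require tree- or grid-based pair counters) is:

```python
import numpy as np
from scipy.spatial.distance import pdist

def mark_correlation_3d(pos, marks, r_bins):
    """Estimate M^3d(r) = WW/DD; marks are normalized to a mean of unity."""
    marks = marks / marks.mean()
    d = pdist(pos)                                           # all pair separations
    w = pdist(marks[:, None], metric=lambda u, v: u[0] * v[0])  # pair mark products
    dd, _ = np.histogram(d, bins=r_bins)
    ww, _ = np.histogram(d, bins=r_bins, weights=w)
    return ww / np.maximum(dd, 1)                            # WW/DD per r bin

rng = np.random.default_rng(5)
pos = rng.uniform(0.0, 50.0, size=(400, 3))     # toy, unclustered points
marks = np.exp(rng.normal(0.7, 0.1, 400))       # e.g., exp(g-r) marks
m3d = mark_correlation_3d(pos, marks, np.logspace(-0.5, 1.2, 10))
```

For an unclustered, unmarked-environment toy sample like this one, the estimate scatters around unity, as the text describes.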
Finally, we emphasize that the underlying drivers of the environmental effects in galaxy colours are different among the three mocks — while the baseline and the fiducial halo quenching mocks rely solely on the environmental dependence of the halo mass function to produce an “indirect” environmental effect, the assembly-biased halo quenching mock predicts both direct and indirect environmental effects, via the halo assembly bias effect of c_assem and the environmental dependence of the halo mass function, respectively. Meanwhile, the age-matching mock employs the halo assembly bias effect of z_starve to produce direct environmental effects in the spatial distribution of galaxy colours. § RESULTS In this section, we present the main results of our analysis, by comparing the three halo quenching mock catalogues with the SDSS data using different measurements. These include the clustering w_p and weak lensing ΔΣ of the more red vs. less red galaxies, the 2D mark correlation functions of colours M_g-r, and the red galaxy fractions around red vs. blue primaries. §.§ Clustering and Lensing of “More Red” vs. “Less Red” Galaxies As a sanity check, we first compare the clustering and g-g lensing of the more red and less red galaxies predicted by the mocks to those measured from the data. By construction, the three halo quenching mocks assign different values of red g-r colours to the red galaxies that share the same set of halos in the simulation, therefore placing the more red and less red galaxies into different halos. In particular, the more red and less red galaxies occupy halos with similar M_h and c_assem in the baseline mock, but live in halos of different M_h or c_assem in the other two mocks. This segregation of galaxies by halo properties, which is lacking in the baseline mock, dictates that the more red and less red galaxies will exhibit different clustering and weak lensing signals, a prediction that can be directly tested using the SDSS data. Figure <ref> shows the ratios of the projected correlation functions w_p between the more red and less red galaxies for three different stellar mass bins. To obtain a more stable measurement of the clustering ratios, we compute the cross-correlations of the two subsamples with the overall red galaxy sample in each bin, instead of the auto-correlations shown in Figure <ref>. In each panel, the gray dashed curve with shaded band is the measurement from volume-limited samples in SDSS, and the data points with errorbars are predictions by the fiducial (red circles) and assembly-biased (blue triangles) halo quenching mocks. Clearly, the baseline mock, which predicts a w_p ratio of unity on all scales (dotted horizontal line), is readily ruled out in all three stellar mass bins. The fiducial halo quenching mock shows the best overall agreement with the SDSS measurements on all scales between 0.1 and 25 Mpc/h, whereas the assembly-biased halo quenching mock only provides an adequate description of the SDSS measurements on scales beyond 2 Mpc/h (except for the intermediate mass bin, where it fails to match the data on all scales). It is worth noting that some of the discrepancies between the mock predictions and the SDSS measurements on small scales are caused by the lack of colour segregation in the mocks (e.g., the rapid increase of the observed ratios on scales below 0.2 Mpc/h in SDSS). The assembly-biased halo quenching mock predicts that the more red and less red galaxies live in halos of similar mass (mostly above M^nl), hence the similar small-scale amplitude of w_p.
Furthermore, it predicts that the more red satellite galaxies live in high-c_assem halos, which mainly correspond to low-c halos and a less concentrated satellite distribution within halos, causing the ratio to cross from above to below unity on scales ≲ 0.5 Mpc/h. In the low (lg M_* = [10.0, 10.8]) and intermediate (lg M_* = [10.8, 11.0]) stellar mass bins, however, neither the decrease of the ratio on small scales nor the ratio inversion at 0.5 Mpc/h is seen in the data, indicating that assembly-biased halo quenching is not an adequate model for the colouring of red-sequence satellite galaxies below 10^11. For the high mass bin (lg M_* = [11.0, 11.4]), the SDSS measurement is consistent with predictions from both mocks, in part due to the large errorbars resulting in poor discriminating power on the small scales where the predictions from those mocks differ. Similarly, Figure <ref> shows the ratio comparison between the more blue vs. less blue galaxies in two different stellar mass bins. Due to the lack of a distinct colour ridge-line in the blue portion of the colour-stellar mass diagram, we divide each blue sample into two halves using the median blue colour as a function of M_*. At any given stellar mass, the two subsamples of blue galaxies in SDSS exhibit weaker discrepancies in their clustering biases than the two red galaxy subsamples, and the observed clustering ratio is also better reproduced by the blue galaxies in the fiducial mock than by those in the assembly-biased halo quenching mock on all scales. For diagnosing the colour properties of central galaxies, we turn to the weak lensing signals around central galaxies with different shades of red. Following <cit.>, we select a sample of “locally brightest galaxies” (LBGs) <cit.> from the SDSS spectroscopic sample as our candidates for central galaxies, and measure the weak lensing profiles for the red LBG subsamples split by the RS ridge-line at fixed stellar mass. <cit.> discovered a strong bimodality in the average halo mass between the red and blue LBGs, which we subsequently interpreted as the indication of a dominant halo quenching mechanism using our framework in Paper II. In the same spirit as <cit.>, we show the weak lensing profiles ΔΣ of the more red (maroon filled circles) and less red (magenta open triangles) LBGs within three stellar mass bins in the upper panels of Figure <ref>. In each upper panel, we also show the predictions from the fiducial (red solid and dashed curves) and assembly-biased (blue solid and dashed) halo quenching mocks. We ignore the profiles beyond R = 2 Mpc/h (covered by the gray shaded region), as they do not carry clean information on the halo mass profile. In the lower panels, we show the ratios between the more red and less red weak lensing profiles for the SDSS LBGs (black curve with gray shaded bands), the fiducial halo quenching mock (red curve), and the assembly-biased halo quenching mock (blue curve). As expected, the assembly-biased halo quenching mock predicts a ΔΣ ratio of roughly unity on all scales, modulo a minor tilt due to differences in halo concentration. But in the fiducial mock, the more red central galaxies exhibit a stronger weak lensing amplitude than the less red centrals on the relevant scales. In the lowest mass bin (lg M_* = [10.0, 10.6]), both mock predictions are consistent with the data, but the large uncertainties in the weak lensing measurements prevent us from making any statistical statements on one mock being preferred by the data. Similarly, in the intermediate mass bin (lg M_* = [10.6, 11.0]), both mock predictions are roughly consistent with the measured ΔΣ profiles.
However, the ΔΣ ratio measurement on scales above 0.6 Mpc/h slightly prefers the assembly-biased halo quenching mock, but shows a strong discrepancy between the two subsamples on scales below 0.6 Mpc/h, which tends to favour the fiducial halo quenching mock. Fortunately, the weak lensing measurements in the high mass bin (lg M_* = [11.0, 12.0]) leave no ambiguity as to which mock is the superior model for colouring high-mass central galaxies — the fiducial halo quenching mock provides an excellent description of the ΔΣ profiles of the more vs. less red LBGs and the ratio between the two, while the assembly-biased halo quenching mock completely fails to do so. Combining Figures <ref>, <ref>, and <ref>, it is clear that the fiducial halo quenching mock significantly out-performs the other two mocks in describing the clustering and weak lensing of red galaxies split by the RS ridge-line. However, the weak lensing measurements for low-M_* galaxies have large uncertainties, and we cannot perform the same lensing test on the blue galaxies due to their low number density in this mass range. With this result and its limitations in mind, we turn to the mark correlation functions of colours for a clearer picture in the next subsection. §.§ Environmental Dependence: Mark Correlation Functions of Galaxy Colours Marked statistics are an efficient tool for quantifying the correlation between the properties (i.e., marks) of galaxies and their environment <cit.>. Here we focus on the 2D projected mark correlation function of g-r colours, M_g-r(R), as our diagnostic of the underlying driver of the environmental dependence of galaxy colours, whether it be halo mass, halo assembly bias, or some combination of the two. Following earlier studies <cit.> of galaxy mark correlation functions in SDSS, we define M_g-r(R) as M_g-r(R) = [1+W_p(R)/R] / [1+w_p(R)/R], where W_p and w_p are the mark-weighted and unweighted projected correlation functions of galaxies, respectively. This particular definition makes M(R) ∼ ℳ^3d(r ≡ 2R) on large scales, where w_p(R)/R ∼ ξ. We measure M_g-r(R) by combining the measurements of W_p and w_p as in Equation <ref>, and compute the uncertainties by jackknife re-sampling 200 sub-regions within the sample footprint. We refer readers to Paper I for the technical details of the projected correlation function measurements. To make sure that the mark values are always positive, we adopt exp(g-r)/⟨exp(g-r)⟩ as the mark in both the data and the mock catalogues. Since the mock colour distribution at each fixed M_* is almost identical to that in SDSS, we can compare the colour-mark correlation functions of the mock and data catalogues directly. More importantly, since all the mocks accurately reproduce the abundance, spatial clustering, and g-g lensing of the red and blue galaxies in SDSS, any discrepancy between the colour-mark correlations of the mock and the data galaxies would be a clean sign of incorrect colour assignment in that mock. Figure <ref> compares the colour-mark correlation functions between the data and the various mocks within six different stellar mass bins. In each panel, the gray shaded band indicates the measurement (±1σ range) from a volume-limited sample of SDSS galaxies within the M_* range (listed on the top left); orange squares with errorbars are measured from the baseline halo quenching mock, while the red and blue thick curves are from the fiducial and assembly-biased halo quenching mocks, respectively, with similar uncertainties (not shown) as in the baseline case. We also show the measurement (magenta triangles with errorbars) from the age-matching mock produced by <cit.> [<http://logrus.uchicago.edu/ aphearin/SDSS_Mock_Catalog/SDSS_Mock_Catalog.html>], which serves as an interesting limiting case in which the galaxy assembly bias effect is maximized and the halo quenching effect is minimized.
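For reference, a minimal sketch of the two ingredients defined above, i.e., the positive-definite exponential mark and the combination of W_p and w_p into M(R), is:

```python
import numpy as np

def exp_mark(g_minus_r):
    """Positive-definite colour mark exp(g-r)/<exp(g-r)>, normalized to unit mean."""
    m = np.exp(np.asarray(g_minus_r))
    return m / m.mean()

def projected_mark_corr(R, wp, Wp):
    """M(R) = [1 + W_p(R)/R] / [1 + w_p(R)/R], given measured w_p and W_p."""
    return (1.0 + Wp / R) / (1.0 + wp / R)
```

Any pair-counting code (such as the brute-force sketch in the previous section, adapted to projected separations) can supply the w_p and W_p inputs.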
Overall, the observed mark correlation signal decreases with increasing stellar mass, partly due to the lack of a prominent blue population in the high-M_* samples. The observed M_g-r signal also decreases as a function of distance, analogous to the 1-halo to 2-halo transition in the regular correlation functions. However, in all the stellar mass bins below 10^11, the observed mark correlations stay significantly above unity on scales up to 15 Mpc/h, indicating a strong environmental dependence of galaxy colours at fixed M_*. The colour-mark correlation functions predicted by the baseline halo quenching model (orange squares with errorbars) have similar shapes to the M_g-r measured from SDSS, but their overall amplitudes are 30-50% lower than the observed ones. The lower amplitudes are expected: by design the baseline model includes the least amount of halo quenching effect that is allowed by the measurements of the clustering and g-g lensing of red and blue galaxies in SDSS. With a stronger coupling between halo mass and galaxy colour, the fiducial halo quenching mock (red curves) is able to roughly reproduce the correct amplitudes of M_g-r, especially the deviation from unity on large scales. Similar to the baseline model, the fiducial halo quenching mock is consistent with the clustering and lensing measurements of the red and blue galaxies in SDSS. In addition, the fiducial model is also the only one among the three colour assignment schemes that is consistent with the observations of the clustering and lensing of the more vs. less red (blue) galaxies. Therefore, it is tempting, based on the combined evidence so far, to argue that the fiducial halo quenching mock roughly encodes the correct information on the connection between galaxy quenching and halo properties in SDSS. The primary discrepancy between the mark correlations predicted by the fiducial model and the data is that the mock curves have a slightly sharper transition from 1-halo to 2-halo scales compared to the data, probably due to the lack of colour segregation and the sharp halo boundaries adopted in the mock. It is worth noting that the 1-halo to 2-halo transition is notoriously difficult to model correctly even for regular (not marked) correlation functions. The assembly-biased halo quenching mock predicts a very similar amplitude of M_g-r on large scales to the fiducial mock and the data, and is thus also consistent with the SDSS measurements. However, recall that the assembly-biased model represents the maximum amount of galaxy assembly bias (mediated by concentration) that can be allowed in the halo quenching scenario — the colour of a red/blue galaxy is maximally coupled with the rank-order of c_assem, our proxy for the halo environment; therefore, it is very unlikely that halo concentration is the underlying driver of the “tinting” of red or blue galaxies under the halo quenching framework. On small scales, the predicted curves level off and approach unity for the three stellar mass bins above lg M_* = 10.6.
Similar to the small-scale behaviour seen in Figure <ref> (blue curves), the low amplitudes on scales below 0.3 Mpc/h are caused by the lower values of g-r colours assigned to the satellites in massive halos with high c_assem than to those inside low-c_assem clusters. Unlike the three halo quenching mocks, the age-matching mock relies on galaxy assembly bias, i.e., matching galaxy colours to the halo formation times at fixed M_*, to introduce the environmental dependence of galaxy colours. Clearly, the maximal coupling between galaxy colour and halo age is strongly disfavoured by the data, as the predicted mark correlation functions (magenta triangles with errorbars) are 40-50% higher than the observations on almost all scales and for four of the six stellar mass bins. A more reasonable fit to the data could potentially be achieved by tuning down the coupling strength between galaxy colour and halo age, but as we pointed out in Section <ref>, without halo quenching it is still unlikely to reproduce the observed bimodality in the host halo mass of red vs. blue central galaxies. We need to globally match not just the mark correlation functions, but also the regular two-point correlations, for which Papers I and II demonstrated halo quenching is necessary. Clearly, Figure <ref> shows that there is a significant environmental dependence of galaxy colours up to scales as large as 15 Mpc/h, and the fiducial halo quenching model is able to reproduce this large-scale environmental dependence of colours across the entire stellar mass range we probed (M_* > 10^10). However, a strong environmental effect revealed by the mark correlation functions does not necessarily translate to a strong colour conformity, which is exclusively a phenomenon surrounding central galaxies. In particular, the mark correlation function has two contributing sources: one is the galactic conformity between the central galaxies and neighbouring galaxies (i.e., the cen-cen and cen-sat terms), and the other is the correlation between the colours of satellites (i.e., the sat-sat term). Therefore, it is possible that the sat-sat contribution could dominate the signal in the mark correlation functions, while the conformity contribution stays confined within the 1-halo regime. If proven true, this scenario would explain the findings of <cit.>, that an HOD model of galaxy colours cannot induce 2-halo conformity, despite the strong environmental effect in the spatial distribution of galaxy colours. Figure <ref> compares the three contributions, cen-cen (magenta solid), cen-sat (orange dotted), and sat-sat (blue dashed), between the fiducial halo quenching (upper panels) and the age-matching (lower panels) mocks in three stellar mass bins (columns from left to right: lg M_* = [10.2, 10.4], [10.6, 10.8], and [10.8, 11.0]). In each panel we also show the overall mark correlation functions measured in the mock (symbols with errorbars) and the data (gray shaded bands) from Figure <ref>. In the low-M_* bin (left column), the environmental dependence of colours in the fiducial halo quenching mock (upper left) is primarily driven by the correlation between the satellite colours, and partly by the colour conformity between central galaxies and satellites. In halo quenching, the dominant sat-sat term is caused by the strong halo quenching effect on the satellites of rich groups and clusters, which tend to live in dense environments; the cen-cen term is very weak, as in this low stellar mass regime the red vs. blue central galaxies live in halos whose masses differ only by a factor of two or less, due to the weak halo quenching effect below 10^12.
On the contrary, the age-matching mock (lower left) predicts strong cen-cen and cen-sat terms, because the host halos of those centrals (with M_h < M^nl) exhibit a strong assembly bias effect <cit.>; the sat-sat term is substantially weaker than the conformity terms, due to the weaker halo assembly bias effect in their massive host halos (with M_h > M^nl). At higher stellar masses (middle and right columns), the sat-sat term remains dominant in the upper panels (halo quenching), but the conformity (cen-cen and cen-sat) terms begin to increase, indicating stronger conformity effects for higher-M_* primaries in the halo quenching scenario. For the age-matching model, the cen-cen term decreases at higher M_*, because the halo assembly bias effect becomes weaker with increasing halo mass <cit.>. From Figure <ref>, we can see that the environmental dependence of colours shown in the halo quenching mocks is largely driven by the correlation between the colours of satellites. This finding explains the lack of large-scale conformity in the HOD model of colours in <cit.> — a strong mark correlation function signal does not necessarily imply strong conformity on large scales, especially for galaxies with lg M_* < 10.4. However, the 2-halo conformity signal predicted by our fiducial halo quenching mock increases rapidly with stellar mass, becoming comparable to the age-matching prediction at lg M_* ∼ 10.6, and much stronger than age-matching at lg M_* > 10.8. Furthermore, it is worth noting that even in the lowest stellar mass bin probed here (lg M_* ∼ 10.2), there still exists some colour conformity signal on large scales, mostly due to the correlation between the colours of central galaxies and those of satellites inside nearby massive halos. In the next subsection, we will explore whether such a level of 2-halo conformity is consistent with the observed red galaxy fractions around red vs. blue primaries in SDSS. §.§ Conformity: Red Galaxy Fraction Around Red vs. Blue Isolated Primaries So far, the fiducial halo quenching mock has successfully passed the sanity check in Section <ref> and reproduced the observed strong environmental dependence of colours in Section <ref>. As we alluded to in Section <ref>, <cit.> and <cit.> showed that the observed level of large-scale conformity is very sensitive to the isolation criteria for identifying primary galaxies. They found a significant reduction in the conformity signal around low-mass primary galaxies after removing a small fraction of mis-identified primaries, which are revealed by the group finders <cit.> to be living inside or near massive systems (e.g., the Coma cluster). However, based on the iterative redshift-space friends-of-friends technique, the group finders also have their own limitations in crowded environments <cit.>. Since our goal here is to find out whether the level of conformity predicted by the mocks is consistent with observations, the conclusion should not depend on the criteria for finding centrals, as long as we self-consistently apply the same criteria to the mock and data catalogues. Therefore, we will adopt the simple isolation criteria of <cit.>, and directly measure the red galaxy fractions around the red and blue isolated primaries in SDSS, without resorting to more stringent criteria based on complicated group finders. For the candidate sample of primary galaxies, we choose the log-stellar mass range of [10.2, 10.6], similar to the main stellar mass range probed in <cit.> and <cit.>. For the secondary galaxy sample, we select all galaxies with stellar mass above 5 × 10^9, 0.5 dex below the minimum stellar mass of the primaries. We adopt the same criteria as in <cit.> to find the isolated primaries — a galaxy with stellar mass M_* is defined as an isolated primary if there is no other galaxy with stellar mass greater than M_*/2 within a projected radius of 350 kpc and with a velocity difference of less than 500 km/s. A brute-force sketch of this selection is given below.
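This is our own O(N²) illustration of the isolation cut; a real selection would also need to handle survey edges and incompleteness, which we ignore here:

```python
import numpy as np

def find_isolated_primaries(x, y, v, lg_ms, r_iso=0.35, dv_iso=500.0):
    """Flag isolated primaries: no other galaxy with stellar mass above half
    the candidate's (lg M* > lg M*_i - lg 2) within a projected radius r_iso
    (Mpc; 0.35 Mpc = 350 kpc) and a velocity difference below dv_iso (km/s).
    x, y are projected coordinates in Mpc; v are line-of-sight velocities."""
    n = len(x)
    isolated = np.ones(n, dtype=bool)
    for i in range(n):
        rp = np.hypot(x - x[i], y - y[i])
        near = (rp < r_iso) & (np.abs(v - v[i]) < dv_iso)
        near[i] = False
        if np.any(near & (lg_ms > lg_ms[i] - np.log10(2.0))):
            isolated[i] = False
    return isolated

# toy usage on a random field
rng = np.random.default_rng(9)
x, y = rng.uniform(0, 100, 2000), rng.uniform(0, 100, 2000)
v = rng.normal(0.0, 300.0, 2000)
lg_ms = rng.uniform(9.7, 11.0, 2000)
primaries = find_isolated_primaries(x, y, v, lg_ms)
```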
We adopt the same criteria in <cit.> to find the isolated primaries — a galaxy with stellar massis defined as an isolated primary if there is no other galaxy with stellar mass greater than /2 within a projected radius of 350 and with velocity difference less than 500. We apply the same sample selection and isolation criteria in the fiducial halo quenching and the age-matching mock catalogues, and the purities (i.e., fraction of true central galaxies among the selected isolated primaries) are 0.81 (fiducial) and 0.95 (age-matching), respectively.The lower purity in the mock is largely due to the higher average frequency of having a satellite galaxy that is at least twice as massive as the central within halos above 10^13 (see the lower panels of Fig. 5 in Paper II).Since the age-matching mock catalogue is incomplete below ∼ 10, we expect the mean red galaxy fraction to be different from the observations, but the conformity signal, i.e., the ratio between the red fractions around red vs. blue primaries, should be preserved. For the observations, we also applied the same isolation criteria to a volume-limited SDSS galaxy sample withabove 5 × 10^9 and a redshift range of [0.01, 0.055]. We then measure the red galaxy fractions as functions of projected distance away from the red and blue primaries, as well as the ratio between the red galaxy fractions around red vs.blue primaries in each catalogue.Figure <ref> compares the conformity signals measured from the fiducial halo quenching mock (circles), the age-matching mock (triangles), and the SDSS galaxies (shaded bands), respectively. All the errorbars are 1-σ uncertainties derived from jackknife re-sampling, and the sizes of the errorbars measured from the two mock catalogues are comparable to the size of the symbols. The top panel shows , the red galaxy fractions as functions of projected distance away from the isolated primaries.The amplitude and scale-dependence ofpredicted by the fiducial halo quenching mock agree very well with the observations. This agreement is not by construction, because thequenching parameters were constrained solely from the clustering and g-g lensing information, without any input from the relative abundance of red/blue galaxies. The age-matching predictions for the overall red galaxy fractions are substantially lower, likely due to the sample incompleteness below 10^10. However, the age-matching mock clearly predicts strong conformity, i.e., large discrepancy between the red galaxy fractions around red vs. blue primaries, while the level of conformity in the data and the fiducial halo quenching mock is weaker.The bottom panel of Figure <ref> presents a clearer picture of conformity via the ratio between the red galaxy fractions around red vs. blue primaries. The SDSS galaxies show substantial colour conformity signal on projected distances up to 2-3, beyond which the signal becomes consistent with having no conformity on scales ∼5.The ratio predicted by the age-matching model is slightly higher than the measurement, and stays significantly above 1.0 on all scales. As expected, the ratio predicted by the fiducial halo quenching mock is lower than that predicted by the age-matching mock on all scales, due to the weakercontribution to the environmental dependence of colours. However, the halo quenching prediction is in good agreement with the SDSS measurement, showing considerable conformity on scales below 3. 
Figure <ref> demonstrates that the level of conformity observed at stellar masses around a few × 10^10 can be naturally explained by the HOD model of galaxy colours that includes a halo quenching prescription for the assignment of colours. Inside the halo quenching mock, the conformity signal is primarily (81%) sourced by the correlation between the colours of central galaxies and their neighbouring galaxies, and partly (19%) induced by the mis-identified primaries that are satellites of more massive halos. The underlying drivers of both components, however, are the same combination of the environmental dependence of the halo mass function and the strong dependence of galaxy quenching on halo mass — an indirect environmental effect imprinted on galaxy formation. § SUMMARY AND DISCUSSION §.§ Fiducial Picture of Halo Quenching In this paper, we have investigated whether halo mass quenching is capable of reproducing the environmental dependence of galaxy colours and the large-scale galactic conformity observed in SDSS, for which recent studies suggested that a strong galaxy assembly bias effect may be required. Developed in the first two papers of the series <cit.>, the best-fitting halo quenching model can accurately describe the observed abundance, spatial clustering, and weak gravitational lensing of the red and blue galaxies in SDSS, serving as an ideal test-bed for exploring the environmental effects predicted by halo quenching. For the prediction, we start by producing quiescent and active mock galaxies within a suite of N-body cosmological simulations based on the best-fitting halo quenching prescription, which describes the red galaxy fractions of the centrals and satellites as powered-exponential functions of the halo mass. We then assign g-r colours to the quiescent and active galaxies separately at fixed stellar mass , by introducing correlations between galaxy colours and the mass of their dark matter halos. In our fiducial halo quenching model, we set the cross-correlation coefficient ρ^cen_m between halo mass and the colours of the red or blue central galaxies to 0.5, while assuming a weaker coupling with halo mass for the red and blue satellites (ρ^sat_m = 0.3 below 10^11 and 0.0 above). Despite the fact that the quantity that determines the galaxy colours in the halo quenching model is associated with individual halos, the fiducial halo quenching mock predicts a strong environmental dependence of galaxy colours — the mark correlation functions of colours deviate significantly above unity on scales up to 15 in all stellar mass bins, in excellent agreement with the measurements from SDSS. This strong environmental dependence is induced by the combination of halo quenching and the environmental dependence of the halo mass function: in denser environments there are more massive halos, hence more quenched and redder galaxies at fixed . By decomposing the predicted colour-mark correlation function at fixedinto , , andcomponents, we find that the overall environmental dependence is dominated by theterm. However, there still remains a significant level ofandcontributions, i.e., galactic conformity, on scales up to 15. After applying the same isolation criteria of <cit.> to the halo quenching mock, we demonstrate that the halo quenching model correctly reproduces the level of large-scale conformity in SDSS, as measured by the red galaxy fractions around the red vs. blue primary galaxies.
Confirming the results from <cit.> and <cit.>, we find that ∼19% of the primaries in the mock are mis-identified satellite galaxies, which contribute a significant false conformity signal to the red fraction of distant neighbours. To summarize, the fiducial halo quenching model provides a remarkably simple yet accurate picture of the spatial distribution of galaxy colours in the local Universe (z < 0.25), including the colour dependence of galaxy clustering and weak gravitational lensing, the mark correlation functions of colours, and the large-scale galactic conformity in SDSS. On the contrary, models that rely on the halo assembly bias effect to quench star formation have great difficulty matching those observations, whether it be the age-matching model of <cit.> or the assembly-biased halo quenching model developed in our analysis.§.§ Alternative Environmental Effects as a Test of Halo QuenchingThere are at least two aspects of the large-scale environment that one could imagine might affect halo and galaxy properties: one is the large-scale background density, and the other is the large-scale tidal tensor field, i.e., the geometrical environment. In this paper we are primarily concerned with the former, but there could be additional dependences of quenching on the geometrical environment, which can be classified into clusters, filaments, sheets and voids via the Hessian matrix of the gravitational potential <cit.>. <cit.> investigated the variation of halo mass functions in different geometrical environments and found that, at fixed large-scale overdensity, the halo mass functions are similar among the four different types of structures within the cosmic web. Therefore, under the fiducial picture of halo quenching there should be no colour dependence on the geometrical type of the environment. However, some halo properties, like the halo spin and shape, depend strongly on the tidal and velocity shear fields <cit.>, and it is plausible that they could also affect galaxy quenching. Observationally, there is tentative evidence for correlations between galaxy properties (colour, size, spin, morphology, bar strength, etc.) and the proximity to various geometrical environments <cit.>. Therefore, it will be interesting to predict the distributions of galaxy colours in different geometrical environments from the fiducial halo quenching mock, and compare them to the measurements from data. Any discrepancies would signal additional environmental effects that cannot be accounted for by the halo quenching model, or by any galaxy formation model that is insensitive to the tidal tensor field. §.§ Theoretical Implications and Future Outlook Under our fiducial halo quenching picture, the efficiency of galaxy quenching is tied to the strength of the gravitational potential in the host halos, which could drive virial shocks that heat up the gas and/or harbour AGNs that inhibit star formation via powerful feedback processes. The great success of this simple picture, combined with the lack of any discernible galaxy assembly bias effect, suggests that galaxy quenching is likely a local process, in the sense that it is largely determined by the physical conditions inside a halo. In particular, the termination of star formation becomes prevalent when the halo reaches a critical mass of M_h^crit ∼ 10^12.
This value of M_h^crit, derived from the observational constraints in Paper II and in this paper under a different cosmology, is roughly the same for central and satellite galaxies, and is naturally expected from canonical halo quenching theory <cit.>. This scenario of galaxy quenching being locally driven by host halos is also consistent with the lack of environmental dependence in the mass-metallicity relation (MZR) of galaxies <cit.>, whereas a strong correlation between quenching and halo formation time would instead shift the overall amplitude of the MZR with the large-scale overdensity. The ability of our fiducial halo quenching model to capture the observed environmental dependence of colours is encouraging news for the modelling of large-scale structure in ongoing and upcoming surveys. In particular, the construction of large mock galaxy catalogues with realistic stellar mass and colour properties is vital to the success of future surveys like the Dark Energy Spectroscopic Instrument <cit.>, Prime Focus Spectrograph <cit.>, and Large Synoptic Survey Telescope <cit.>, for validating analysis pipelines and extracting cosmological information <cit.>. For example, the mock galaxy catalogue is one of the primary synthetic sky catalogues deployed within the web-based mock catalogue validation and comparison framework for the LSST Dark Energy Science Collaboration (DESCQA; Heitmann et al., in prep). Furthermore, the success of the halo quenching model reinforces the theoretical foundation for the red-sequence based cluster finding algorithms in modern photometric surveys <cit.> — the richness of a cluster λ, defined by λ() ≡ f_red() × N_sat(), is more tightly correlated with halo mass than either f_red or N_sat individually, and is insensitive to the average age of its subhalos or the large-scale environment. Another exciting prospect is to apply the halo quenching framework to high redshifts. In particular, the Bright Galaxy Survey (BGS) program within DESI will conduct a magnitude-limited survey of approximately 10 million galaxies with a median redshift of 0.2. When combined with the current analysis of the SDSS main spectroscopic sample, the modelling of the BGS galaxies will enable an exquisite understanding of the evolution of galaxy quenching properties over the past ∼2 Gyrs, which is comparable to the observed star formation efficiency timescale of the molecular gas <cit.>, as well as the expected timescale for quenching <cit.>. At slightly higher redshifts (z ∼ 0.5), comprehensive modelling will also shed important light on the apparent inconsistency between the clustering and lensing of galaxies under the Planck15 cosmology <cit.>. Finally, this paper concludes our three-paper series by building the fiducial halo quenching mock catalogue that accurately reproduces the spatial clustering, weak lensing, and density environment of galaxies at any given stellar mass and colour within the local Universe. We will be happy, upon request, to provide our mock catalogues or to generate new mock catalogues from user-defined halo catalogues for those interested.§ ACKNOWLEDGEMENTS We thank Ravi Sheth and David Weinberg for helpful discussions. YZ and RM acknowledge the support of the U.S. Department of Energy (DOE) Early Career Program. YZ is also supported by a CCAPP fellowship. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V.
(www.gauss-centre.eu) and the Partnership for Advanced Supercomputing in Europe (PRACE, www.prace-ri.eu) for funding the MultiDark simulation project by providing computing time on the GCS Supercomputer SuperMUC at the Leibniz Supercomputing Centre (LRZ, www.lrz.de). The Bolshoi simulations have been performed within the Bolshoi project of the University of California High-Performance AstroComputing Center (UC-HiPACC) and were run at the NASA Ames Research Center.
Theory of non-Markovian dynamics in resonance fluorescence spectrum
Abhishek Kumar^1,2,3
^1Beijing Computational Science Research Center, Beijing 100193, China ^2School of Science and Engineering, Reykjavik University, Menntavegi 1, IS-101 Reykjavik, Iceland ^3Department of Physics, McGill University, Montréal, Québec H3A 2T8, Canada
E-mail: ak@csrc.ac.cn
December 30, 2023
===================================================================
We present a detailed theoretical study of non-Markovian dynamics in the fluorescence spectrum of a driven semiconductor quantum dot (QD), embedded in a cavity and coupled to a three-dimensional (3D) acoustic phonon reservoir. In particular, we investigate the effect of pure dephasing on one of the side peaks of the Mollow-triplet spectrum, expressed in terms of the off-diagonal element of the reduced system operator. The QD is modeled as a two-level system whose excited state represents a single exciton and whose ground state represents the absence of an exciton. Coupling to the radiative modes of the cavity is treated within the usual Born-Markov approximation, whereas the dot-phonon coupling is treated in the non-Markovian regime beyond the Born approximation. Using an equation-of-motion technique, the dot-phonon coupling is solved exactly, and the exact result coincides with that obtained within the Born approximation. Furthermore, a Markov approximation is carried out with respect to the phonon interaction and compared with the non-Markovian lineshape for different values of the phonon bath temperature. We find that the coupling to the phonons vanishes for a resonant pump laser. For a non-resonant pump, we characterize the effect of dot-laser detuning and phonon bath temperature on the lineshape. The sideband undergoes a distinct narrowing and acquires an asymmetric shape with increasing phonon bath temperature. We explain this behavior using a dressed-state picture of the QD levels. § INTRODUCTION The laws of quantum mechanics allow for quantum computers<cit.>, which are known to be significantly more powerful than classical computers. In a quantum computer, information is stored in quantum bits (qubits), rather than classical bits<cit.>. A single qubit represents a zero or a one and is a two-level system whose two energy levels can be used to store and process information<cit.>. An example of a two-level system, frequently used in quantum optics, is composed of the ground and excited states of an atom. A semiconductor QD can be modeled as a two-level system with one exciton in the excited state<cit.>. Semiconductor QDs embedded inside a cavity have been a subject of intense research as promising candidates for quantum computation and information tasks, as well as sources of single photons<cit.>. Recently, experiments on cavity-embedded QDs have been reported to show different spectral features of the Mollow-triplet fluorescence spectrum<cit.>. In particular, a dot coupled to an acoustic phonon bath in the super-ohmic regime has been shown to exhibit modified spectral features as a function of the phonon bath temperature. More precisely, the triplet sideband is observed to show a systematic spectral broadening for both resonant and off-resonant cases. This problem was studied experimentally<cit.> and analyzed theoretically<cit.> in terms of the usual Born-Markov approximation.
However, the pure dephasing process due to a 3D phonon bath takes the form of a super-ohmic independent boson model (IBM), which is known to be highly non-Markovian<cit.>, whereas these results have so far been studied and discussed in terms of the usual Born-Markov approximation<cit.>. The system correlation function, for non-Markovian interactions (e.g., nuclear spins<cit.>, phonons<cit.>), decays on a typical time scale given by the correlation time τ_c, which does not die off to zero. In other words, the correlation time is non-zero and can be larger than the system decay time τ_S, which is the signature of a strongly history-dependent non-Markovian interaction. Furthermore, the equation of motion for the correlation function has an additional term, known as the irrelevant part, which is non-zero for non-Markovian interactions. This additional term vanishes under a Markov approximation, in which case the well-known quantum regression theorem (QRT) can be applied to find the system correlation<cit.>. Recent theories [see, e.g., Ref. hughes12prb] discussing the fluorescence spectrum in solid-state systems rely on a history-independent Markov process and apply the usual QRT, giving rise to an exponential decay of the system correlation. Often, however, physical processes in solid-state systems<cit.> are highly non-Markovian (history-dependent), and so the spectrum obtained from the QRT can no longer be used to describe their spectral properties. In this paper, we analyze the effect of pure dephasing due to a 3D acoustic phonon bath on the fluorescence spectrum of a cavity coupled to a semiconductor QD. The associated emission spectrum can be directly related to a correlation function for system observables, which we evaluate beyond the Markov approximation using a Nakajima-Zwanzig generalized master equation (GME)<cit.>. Assuming a large cavity bandwidth, the coupling to the radiation modes of the cavity can be treated within the Born-Markov approximation, which is relevant to cavity quantum electrodynamics (cavity-QED) experiments<cit.>. The dot is represented by a two-level system with an exciton in the excited state coupled to a phonon reservoir, and can be modeled with the usual IBM<cit.>. The resultant fluorescence spectrum has three components due to dressing of the levels by a pump laser<cit.>, and we project the system into a dressed-state basis, which allows us to characterize the three components of the triplet separately<cit.>. We have solved the dot-phonon coupling using an exact approach beyond the Born approximation, and the exact result coincides with that obtained within the Born approximation. The phonon coupling gives rise to a frequency-dependent frequency shift and dephasing, which introduce non-Lorentzian features into the fluorescence spectrum. The frequency shift and dephasing due to the phonon interaction are strongly temperature dependent and vanish at low temperatures. We find that, in the dressed-state basis, the levels of interest are coupled asymmetrically to the phonons and have vanishing dephasing and frequency shift when the laser is resonant with the dot. We have also observed that the sideband undergoes a distinct narrowing and becomes asymmetric with increasing temperature, which is explained using the dressed-state energy levels. This paper is organized as follows: In Sec. <ref>, we discuss the setup and establish the formula for the resonance fluorescence spectrum of a general two-level system. In Sec. <ref>, we introduce the model Hamiltonian for a driven cavity-QED two-level system interacting with a phonon bath. In Sec.
<ref>, we discuss and derive the exact form of the Nakajima-Zwanzig GME for the dynamics of the reduced density matrix and the correlation function, and obtain expressions for the lineshape functions in both the Markovian and non-Markovian regimes. In Sec. <ref>, we present our results and discuss the plots in different parameter regimes. In Sec. <ref>, we conclude with a discussion and summary of the results. Other technical details are given in the Appendixes.§ FLUORESCENCE SPECTRUM We start with the model Hamiltonian of a general two-level system interacting with the radiation modes of the electromagnetic field, which can be written in terms of system (H_S), field (H_R), and interaction (H_SR) parts in the standard Jaynes-Cummings form within a rotating-wave approximation (RWA):H_0 =H_S+H_R+H_SR,H_S =ω_ab/2σ_z,H_R =∑_kω_ka_k^†a_k,H_SR =∑_k g_k(σ_aba_k+σ_baa_k^†),where σ_ab=|a⟩⟨ b| and σ_ba=|b⟩⟨ a| are the raising and lowering operators between the excited state |a⟩ and the ground state |b⟩ in the Hilbert space of the system, and a_k, a_k^† are the annihilation and creation operators of a set of electromagnetic modes coupled to the system. The coupling to the radiation mode of frequency ω_k is given by the coupling constant g_k. For simplicity, we also set ħ=1. Here, we adopt the well-known gedanken spectrum analyzer of Ref. [scully], and assume that the radiation field emitted by the system is detected by a two-level atom (detector) with transition frequency ω_α-ω_β=ω_0, prepared initially in its ground state |β⟩, see Fig. <ref>. The Hamiltonian for the detector is given byH_D=ω_0/2(|α⟩⟨α|-|β⟩⟨β|),and the coupling between the detector atom and the radiation field is given by the Hamiltonian H_DR=∑_k g_k^D(|α⟩⟨β|a_k+|β⟩⟨α|a_k^†),where g_k^D is the coupling of the detector to a field mode k. Therefore, the Hamiltonian for the entire setup, including the system, the detector, and the coupling to the radiation modes, can be written as:H=H_0+H_D+H_DR.According to the Wiener-Khintchine theorem, the fluorescence spectrum, in the stationary regime and in the interaction picture with respect to the detector, is given by the Fourier transform of the correlation function<cit.>S(ω_0)=|℘_αβ|^2 Re∫_0^∞dτ ⟨ E^(-)(0)E^(+)(τ)⟩ e^iω_0τ,where ℘_αβ is the detector dipole matrix element, and the positive-frequency part of the electric field is defined by E^(+)(t) =∑_kε_ka_k(t),with the negative-frequency part E^(-)(t)=[E^(+)(t)]^†. The quantity ε_k=√(ħω_k/(2ϵ_0V)) is the electric field per photon and V is the effective volume of a cubic cavity resonator. We rewrite the correlation function in Eq. (<ref>) in terms of the system operators using the methods described in Refs. scully,cohen:S(ω_0)=I̅^2 Re∫_0^∞dτ ⟨σ_ab(0)σ_ba(τ)⟩ e^iω_0τ, where I̅ is the detector response function [discussed in Appendix <ref>]. The average ⟨…⟩=Tr{…ρ̅} in Eq. (<ref>) is taken with respect to the stationary density matrix ρ̅, where ρ̅=lim_T→∞1/T∫_0^Tdt ρ(t). Using the cyclic property of the trace, the fluorescence spectrum can be written asS(ω_0)=I̅^2 Re∫_0^∞dt Tr{σ_baΩ(t)} e^iω_0t,where the operator Ω(t) is defined as Ω(t)=e^-iH_0tρ̅σ_abe^iH_0t, with H_0=H_S+H_R+H_SR. Since σ_ab and σ_ba are operators in the system Hilbert space and [H_D,H_0]=0, the evolution of Ω(t) is determined by the Hamiltonian H_0 of the emitting system and radiation field, in the absence of the detector, and can be computed without using the well-known QRT<cit.>.
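As a numerical illustration of the last expression, the sketch below evaluates S(ω_0) as the one-sided Fourier transform of a given correlation function Tr{σ_ba Ω(t)} on a truncated time grid. The exponentially damped correlator used here is purely illustrative — it is of the form the Markovian theory derived below predicts for a single side peak — and the grid sizes and parameter values are arbitrary choices of ours.

```python
import numpy as np

def spectrum(C, t, omega0, Ibar=1.0):
    """S(w0) = Ibar^2 Re \int_0^inf dt C(t) exp(i*w0*t), evaluated by a
    simple Riemann sum on a truncated, uniform time grid."""
    dt = t[1] - t[0]
    w = np.asarray(omega0)[:, None]
    integrand = C[None, :] * np.exp(1j * w * t[None, :])
    return Ibar**2 * np.real(integrand.sum(axis=1)) * dt

# Illustrative correlator: damped oscillation at the sideband frequency.
t = np.linspace(0.0, 200.0, 8001)        # time, in units of 1/energy
wR, Gamma = 1.0, 0.05                    # assumed peak position and width
C = np.exp(-1j * wR * t - Gamma * t)     # stands in for Tr{sigma_ba Omega(t)}
w0 = np.linspace(0.5, 1.5, 201)
S = spectrum(C, t, w0)                   # Lorentzian of HWHM Gamma at w0 = wR
```

Replacing the illustrative C(t) by the correlator computed from the GME below turns this one-sided transform into the actual (generally non-Lorentzian) lineshape.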
§ MODEL §.§ HamiltonianWe consider a driven two-level cavity-QED system with an excited state |a⟩ representing a single exciton, and a ground state |b⟩ with no exciton. The QD interacts with the cavity photons and with a phonon reservoir that is coupled to the excited state |a⟩, as shown in Fig. <ref>. The Hamiltonian for the total system reads,H(t) = ω_ab/2σ_z+Ω/2(σ_ab+σ_ba)(e^iω t+e^-iω t)+∑_kω_ka_k^†a_k +∑_kg_k(σ_ab+σ_ba)(a_k+a_k^†)+∑_qω_qb_q^†b_q+∑_qλ_q σ_aa(b_q+b_q^†),where ω_ab is the transition frequency of the two-level system and ω is the frequency of the laser field. The photon (phonon) modes are represented by bosonic fields with frequencies ω_k (ω_q) and creation and annihilation operators a_k^† (b_q^†) and a_k (b_q), respectively. The system-photon (system-phonon) coupling strength is given by g_k (λ_q), and Ω (the Rabi frequency) is the coupling between the two-level system and the laser field. The system operators are denoted by σ_ij=|i⟩⟨ j| where i,j∈{ a,b} and σ_z=σ_aa-σ_bb.The explicit time dependence in H(t) [Eq. (<ref>)] can be removed by going to a rotating frame and applying an RWA. We perform the RWA on both the driving term and the system-photon coupling term, in which we neglect the rapidly oscillating terms and keep only the time-independent part. The resulting RWA Hamiltonian in the rotated frame is thenH̃= Δ/2σ_z+Ω/2(σ_ab+σ_ba)+∑_kΔ_ka_k^†a_k+∑_kg_k(σ_aba_k+σ_baa_k^†)+∑_qω_qb_q^†b_q+∑_qλ_q σ_aa(b_q+b_q^†),where Δ=ω_ab-ω and Δ_k=ω_k-ω are the detunings of the atomic and cavity frequencies from the laser pump frequency ω. In general, operators in the rest frame are transformed to the rotating frame according to the relations:σ_z(t) =σ̃_z(t), σ_ab(t) =e^-iω tσ̃_ab(t).Due to the presence of the intense laser field, the two bare states are strongly coupled to each other and give rise to two dressed states. The dressed states can be written, in terms of the bare states, as:|+⟩ = c|a⟩+ s|b⟩, |-⟩ =- s|a⟩+ c|b⟩,where c=cosθ and s=sinθ, with the mixing angle θ given by tanθ=√((Ω_R-Δ)/(Ω_R+Δ)), where Ω_R=√(Ω^2+Δ^2) is the dressed Rabi frequency.Furthermore, following closely the discussion of the IBM in Ref. [mahan], we apply a canonical transformation H'=e^BH̃e^-B, where B=1/21⊗∑_qλ_q/ω_q(b_q^†-b_q) is an anti-hermitian operator, to write the total Hamiltonian, after transforming to the dressed-state basis, in terms of free and perturbed parts asH̅≃ H_0+H_V, where the free part isH_0 =H_S+H_R+H_P,H_S =Ω'_R/2σ_3,H_R =∑_kΔ_ka_k^†a_k,H_P =∑_qω_qb_q^†b_qand the perturbed part is given byH_V =H_dR+H_SR+H_dP,H_dR = c.s σ_3 ∑_kg_k(a_k+a_k^†),H_dP = c^2- s^2/2 σ_3 ∑_qλ_q(b_q+b_q^†),H_SR =∑_kg_k[( c^2 a_k- s^2 a_k^†)σ_+-+h.c.],where σ_ij=|i⟩⟨ j|, i,j∈{ +,-} and σ_3=|+⟩⟨ +|-|-⟩⟨ -|. The polaron transformation introduces a frequency shift, Ω_R'=Ω_R-Δ_P, where Δ_P=( c^2- s^2)∑_qλ_q^2/ω_q is the polaron shift. We have also performed a Schrieffer-Wolff transformation on the above Hamiltonian to remove the energy-exchange term (T_1 lifetime process) due to phonons, see Appendix <ref>. The separation into pure dephasing and transition terms is determined by the form of the system coupling, i.e., the dephasing terms contain the diagonal couplings and the transition terms contain the off-diagonal couplings. Accordingly, the fluorescence spectrum in Eq. (<ref>) can be written in the Laplace domain, using the dressed-state representation, as a three-peak spectrum [Ref.
cohen]:S(Δ_0)= I̅^2 Re[ c.s Ω_z(s)+ c^2Ω_+-(s)- s^2Ω_-+(s)]_s=-iΔ_0,where Δ_0=ω_0-ω is the probe detuning, and we have used the relations Ω_ij=⟨ j|Ω|i⟩, i,j∈{ +,-} and Ω_z=Ω_++-Ω_–. The polaron transformation does not affect the fluorescence spectrum in the above expression, since B acts on the phonon-mode Hilbert space and commutes with the system operators. It can be seen that the first term in Eq. (<ref>) gives the central peak, whereas the last two terms give rise to the satellite peaks of the Mollow triplet, shown in Fig. <ref>. We are interested in the influence of pure dephasing due to the phonon interaction, which only affects the off-diagonal elements of the system operator. In what follows, we study the effect of pure dephasing on the fluorescence spectrum, with a particular focus on one of the Mollow-triplet sidebands (the Stokes line). Alternatively, when the width of each side peak, ∼Γ [see Eq. (<ref>) below], is small compared to the peak separation ∼Ω_R, we approximate the spectrum near the side peak centered at Δ_0≃Ω_R by (see Fig. <ref>)S(Δ_0)≃ S_+(Δ_0)=I̅^2 c^2 Re[Ω_+-(s=-iΔ_0)].In order to compute the spectrum, we will evaluate the dynamics of the matrix element Ω_+-(t)= Tr[σ_-+Ω(t)] in the dressed-state representation, with the Hamiltonian in the polaron frame given by Eq. (<ref>). §.§ Initial conditionsThe radiation and phonon modes are decoupled from the system for times t<t_0 (where t_0 is a time in the distant past), and are prepared independently in states described by the density matrices ρ_R(t_0), ρ_P(t_0) and ρ_S(t_0), respectively. The interactions (photons and phonons) are switched on at the time t=t_0, and the state of the entire system is described by the full density matrix ρ(t_0):ρ(t_0)=ρ_R(t_0)⊗ρ_P(t_0)⊗ρ_S(t_0),where the initial density matrix for the photons is the vacuum of the cavity modes ρ_R(t_0)=∏_k|0_k⟩⟨0_k|,and the phonon modes are described by a canonical ensemble at temperature T:ρ_P(t_0)=exp(-H_P/k_BT)/ Tr[exp(-H_P/k_BT)].We recall that the operator Ω(t), whereΩ(t)=e^-iH̅tρ̅σ_abe^iH̅t,is analogous to the density matrix operator with a modified initial condition, given byΩ(0)=ρ̅σ_ab,where the stationary density matrix ρ̅, and hence Ω(0), accounts for correlations that accumulate between the system and the reservoirs in the time interval t∈[t_0 ,0]. We choose an initial condition in which the exciton is in the excited state |a⟩, ρ_S(t_0)=|a⟩⟨a|, and evolves in the presence of the pump laser. We switch on the detector at t=0 and subsequently calculate the dynamics of Ω_+-(t) for t>0, with the initial condition given by the steady-state density matrix accumulated over the time interval [t_0, 0]. § GENERALIZED MASTER EQUATION We are interested in the dynamics of the reduced system operator, obtained after tracing over the photon and phonon variables: Ω_S(t)= Tr_RTr_PΩ(t). To study this dynamics, we introduce a projection superoperator P, defined by its action on an operator: P𝒪(t)=ρ_R(t_0)ρ_P(t_0) Tr_RTr_P𝒪(t). Both operators, ρ(t) and Ω(t), obey the same von Neumann equation, which, using Qρ(t_0)=0, can be written in the form of an exact Nakajima-Zwanzig GME<cit.>:Pρ̇(t)=-iPLPρ(t)-i∫_t_0^tdt' Σ(t-t')Pρ(t'),where Σ(t) is the self-energy superoperator Σ(t)=-iPLQ e^-iQLtQLP,and L is the full Liouvillian superoperator, defined as L_α𝒪=[H_α,𝒪] with α=0 (S,R,P), V (dR,dP,SR). We have used the properties of the projection operator, P^2=P and⟨𝒪_S⟩(t)=Tr{𝒪_Sρ(t)}=Tr{𝒪_SPρ(t)},and introduced its complement Q=1-P. We can derive an equation of motion for Ω(t) analogous to the equation for ρ(t) [Eq.
(<ref>)]. However, an additional term appears because QΩ(0)=Qρ̅σ_ab≠ 0, and we have assumed that the full density matrix operator is not separable at all times, i.e., ρ(t)≠ρ_R(t_0)⊗ρ_P(t_0)⊗ρ_S(t). The resulting equation of motion for PΩ(t) is thenPΩ̇(t)= -iPLPΩ(t)-i∫_0^tdt' Σ(t-t')PΩ(t')-iPLQe^-iQLtQΩ(0),where Σ(t) is defined in Eq. (<ref>) and the last term in the above equation contains QΩ(0), i.e., the irrelevant part of Ω(0), which is non-zero. When the radiation and phonon modes are described by Eqs. (<ref>) and (<ref>), the projection operator P satisfies some useful identities:PLP =L_SP=PL_S,PL_VP =0,PLQ =PL_V,QLP =L_VP.We apply the identities (<ref>)-(<ref>) and perform the partial traces on Eqs. (<ref>) and (<ref>) over the radiation and phonon variables to obtain equations for the reduced system operators, ρ̇_S(t) =-iL_Sρ_S(t)-i∫_t_0^tdt' Σ_S(t-t')ρ_S(t'), Ω̇_S(t) =-iL_SΩ_S(t)-i∫_0^tdt' Σ_S(t-t')Ω_S(t')+Φ_S(t), Σ_S(t) =-iTr_RTr_P[L_Ve^-iQLtL_Vρ_R(t_0)ρ_P(t_0)], Φ_S(t) =-iTr_RTr_P[L_Ve^-iQLtQΩ(0)],where Σ_S(t) and Φ_S(t) are the reduced self-energy and irrelevant-part superoperators, respectively, and 𝒪_S(t)=Tr_RTr_P𝒪(t)=∑_αβ∈{ +,-}𝒪_αβ(t)|α⟩⟨β| is the reduced system operator. Comparing Eqs. (<ref>) and (<ref>), we find that the first two terms are identical, but an additional irrelevant part, Φ_S(t), is present in the equation for Ω_S(t). This term vanishes under a Markov approximation, in which case the usual QRT can be applied to find the system correlation<cit.>. In addition, we have assumed that the full density matrix is not separable at all times, which is another reason the QRT is not valid in the present case; for a non-Markovian equation the irrelevant term is non-zero, and the usual QRT cannot be used to compute the system correlation<cit.>. The equation for the off-diagonal matrix element Ω_+- is coupled to both the diagonal elements Ω_++, Ω_– and the off-diagonal element Ω_-+ of the reduced system operator, and can be written in the form Ω̇_+-(t) =-iΩ_R'Ω_+-(t)-i∫_0^tdt' Σ_+-,+-(t-t')Ω_+-(t')-i∫_0^tdt' Σ_+-,-+^SR(t-t')Ω_-+(t')-i∫_0^tdt' Σ_+-,++^SR(t-t')Ω_++(t')-i∫_0^tdt' Σ_+-,–^SR(t-t')Ω_–(t')+G_+-,+-(t)Ω_+-(0),with a non-zero irrelevant-part matrix element expressed as (see Appendix <ref>)[G_S(t)]_+-,+-=[-Tr_RTr_PL_V e^-iQLt(1/0^++iQLL_V)ρ_R(t_0)ρ_P(t_0)]_+-,+-. The self-energy superoperator can be decomposed into three terms asΣ_S(t)=Σ_S^dR(t)+Σ_S^dP(t)+Σ_S^SR(t),where the first and second terms describe pure dephasing (T_2^* processes) due to the photon and phonon couplings, respectively, and the third term gives rise to transitions (T_1 lifetime) due to the coupling to the radiative modes. We treat the dot-cavity self-energy within the Born-Markov approximation, to second order in the perturbation Liouvillian L_V, see Appendix <ref>. Applying a continuum of cavity modes with the Lorentzian density of states [Eq. (<ref>)], we find the self-energy matrix elements in the Laplace domain, defined as f(s)=∫_0^∞e^-stf(t)dt:Σ_+-,+-^dR(s)≃ -ig^2(Ω_R^2-Δ^2)/2Ω_R^2(1/s+i(Ω_R'-Δ_c)+Γ_c+1/s+i(Ω'_R+Δ_c)+Γ_c), Σ_+-,+-^SR(s)≃-ig^2/4Ω_R^2((Ω_R+Δ)^2/s+iΔ_c+Γ_c+(Ω_R-Δ)^2/s-iΔ_c+Γ_c),where Δ_c=ω_c-ω is the detuning of the cavity from the pump laser frequency and Γ_c is the cavity bandwidth. Here, the Born approximation is justified by noting that the higher-order terms in the reduced self-energy are suppressed by the small parameter ∼ g^2/(Ω'_RΓ_c).
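For orientation, the two matrix elements above can be evaluated directly. The sketch below computes the cavity contributions to the sideband shift and decay at s = -iΩ'_R, using the illustrative InAs/GaAs parameter values quoted in the Results section (all energies in μeV); neglecting the small polaron shift (Ω'_R ≈ Ω_R) is an assumption of ours for this sketch.

```python
import numpy as np

# Illustrative parameters in micro-eV (see the Results section).
Omega, Delta, Delta_c = 500.0, 500.0, 500.0  # Rabi frequency, dot/cavity detunings
g, Gamma_c = 50.0, 2000.0                    # dot-cavity coupling, cavity bandwidth
OmegaR = np.hypot(Omega, Delta)              # dressed Rabi frequency
OmegaRp = OmegaR                             # polaron shift neglected here

def sigma_dR(s):   # pure-dephasing (photon) matrix element, first Eq. above
    pref = -1j * g**2 * (OmegaR**2 - Delta**2) / (2 * OmegaR**2)
    return pref * (1 / (s + 1j * (OmegaRp - Delta_c) + Gamma_c)
                   + 1 / (s + 1j * (OmegaRp + Delta_c) + Gamma_c))

def sigma_SR(s):   # transition (photon) matrix element, second Eq. above
    pref = -1j * g**2 / (4 * OmegaR**2)
    return pref * ((OmegaR + Delta)**2 / (s + 1j * Delta_c + Gamma_c)
                   + (OmegaR - Delta)**2 / (s - 1j * Delta_c + Gamma_c))

sig = sigma_dR(-1j * OmegaRp) + sigma_SR(-1j * OmegaRp)
dw_R, Gamma_R = sig.real, -sig.imag          # cavity shift and decay rate
# Born small parameter for these numbers: g**2/(OmegaRp*Gamma_c) ~ 2e-3.
```

With these parameters the Born small parameter g^2/(Ω'_RΓ_c) is indeed ≪ 1, consistent with the justification given above.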
Similarly, for the phonon modes, applying a continuum of modes [see Appendix <ref>] for the deformation-potential coupling mechanism<cit.>, N(ϵ)|λ(ϵ)|^2=α_P|ϵ|^3e^-|ϵ|/ϵ_c, where α_P is the phonon coupling parameter in units of frequency^-2 and ϵ_c is the phonon cutoff frequency, we obtainΣ_+-,+-^dP(s)= -iα_PΔ^2/2Ω_R^2∫_0^∞dϵ|ϵ|^3e^-|ϵ|/ϵ_c(2n_B(ϵ)+1)(1/s+i(Ω'_R-ϵ)+1/s+i(Ω'_R+ϵ)),where n_B(ϵ) is the Bose function. In the above expression, the dot-phonon self-energy is evaluated using an exact approach in the non-Markovian limit, and the Born approximation turns out to be exact in this case, see Appendix <ref>. Furthermore, the reduced self-energy matrix elements that couple to the populations can likewise be written within the usual Born approximation: Σ_++,++^SR(s)≃ -ig^2(Ω_R+Δ)^2/4Ω_R^2(1/s+i(Ω_R'-Δ_c)+Γ_c+1/s-i(Ω_R'-Δ_c)+Γ_c), Σ_++,–^SR(s)≃ ig^2(Ω_R-Δ)^2/4Ω_R^2(1/s+i(Ω_R'+Δ_c)+Γ_c+1/s-i(Ω_R'+Δ_c)+Γ_c),and similarly for the coherence Ω_-+,Σ_+-,-+^SR(s)≃ -ig^2(Ω_R^2-Δ^2)/4Ω_R^2(1/s+iΔ_c+Γ_c+1/s-iΔ_c+Γ_c).The equation for the coherence, Eq. (<ref>), contains both diagonal and off-diagonal elements of the self-energy superoperator, some of which oscillate fast compared to others. In the next section, we perform a secular approximation<cit.> to get rid of the fast-oscillating terms.§.§ Secular approximationThe secular approximation consists of neglecting the fast-oscillating terms in the Markovian equation of motion; within this approximation, the equation for Ω_+- [Eq. (<ref>)] decouples from the populations and the coherence Ω_-+<cit.>. Here, we consider a general equation of motion for the operator Ω(t), without the irrelevant-part matrix elements:Ω̇_+-(t)= -iΩ_R'Ω_+-(t)-i∫_0^tdt' Σ_+-,+-(t-t')Ω_+-(t')-i∫_0^tdt'Σ_+-,-+^SR(t-t')Ω_-+(t'),and would like to perform the secular approximation in order to eliminate the fast-oscillating terms. To this end, we introduce a rotating frame,Ω'_+-(t)=e^i(Ω_R'+Δω)tΩ_+-(t),where Δω is the total frequency shift, given implicitly by the expressionΔω=Re∫_0^∞dt'e^i(Ω_R'+Δω)t'Σ_+-,+-(t').In the weak-coupling regime, Ω_R'≫Δω, the frequency shift is, to leading order in Δω,Δω≃Re[Σ_+-,+-(s=-iΩ_R')].The purpose of introducing the rotating frame is to remove all oscillating parts from Ω'_+-(t) and hence obtain the equationΩ̇'̇_+-(t)= iΔω Ω'_+-(t)-i∫_0^tdt' Σ̃_+-,+-(t-t')Ω'_+-(t')-ie^i(Ω_R'+Δω)(t+t')∫_0^tdt' Σ_+-,-+^SR(t-t')Ω'_-+(t'),where Σ̃_+-,+-(t)=e^i(Ω_R'+Δω)tΣ_+-,+-(t). Due to the presence of the oscillatory exponentials in the last two terms on the RHS of the above equation, we decompose it into slowly varying and fast-oscillating parts as:Ω̇_+-^'S(t) =iΔω Ω'_+-(t)-i∫_0^tdt'Σ̃_+-,+-(t-t')Ω'_+-(t') Ω̇_+-^'F(t) =-ie^i(Ω_R'+Δω)(t+t')∫_0^tdt'Σ_+-,-+^SR(t-t')Ω'_-+(t'),where S and F stand for slow and fast, respectively. Carrying out the Markov approximation on the slow term by substituting t'→ t-t', replacing Ω'_+-(t-t')→Ω'_+-(t), and finally extending the upper limit of integration to infinity, we obtainΩ̇_+-^'S(t)=-Γ Ω'_+-(t),whereΓ=1/T_2=-Im∫_0^∞dt'e^i(Ω'_R+Δω)t'Σ_+-,+-(t').The condition for the validity of the Markov approximation is that e^i(Ω'_R+Δω)t'Σ_+-,+-(t') decays on a time scale τ_c≪ T_2, where T_2 is the decay time of Ω_+-^'S(t); this is expressed by the relation<cit.>∫_0^∞(∫_t^∞dt'e^i(Ω'_R+Δω)t'Σ_+-,+-(t'))dt≪ 1.On substituting Eqs. (<ref>) and (<ref>) into the above inequality, and for Γ_c≫Ω_R', Δω and Δ_c, this leads to the condition g/Γ_c≪ 1 and, similarly, for the phonon coupling, α_PΔω^3e^Δω/ϵ_c(2n_B(Δω)+1)/ϵ_c≪ 1.
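The super-ohmic integral for Σ_+-,+-^dP(s) above is straightforward to evaluate numerically. The following is a minimal sketch that returns the frequency shift Δω_P(Δ_0) = Re Σ^dP and dephasing Γ_P(Δ_0) = -Im Σ^dP at s = -iΔ_0, regularizing the 0^+ prescription by a small broadening η; the parameter defaults are the illustrative GaAs numbers quoted in the Results section (energies in μeV, k_B = 86.17 μeV/K), while the grid and η are our own numerical choices.

```python
import numpy as np

kB = 86.17  # Boltzmann constant in micro-eV per kelvin

def sigma_dP(Delta0, T, OmegaRp, Delta, OmegaR,
             alpha_P=2.08e-7, eps_c=1000.0, eta=1.0):
    """Non-Markovian phonon self-energy at s = eta - i*Delta0 (micro-eV);
    returns (frequency shift, dephasing)."""
    eps = np.linspace(1e-3, 12 * eps_c, 200000)
    therm = 1.0 / np.tanh(eps / (2 * kB * T))        # equals 2 n_B(eps) + 1
    J = alpha_P * eps**3 * np.exp(-eps / eps_c) * therm
    s = eta - 1j * Delta0
    kern = (1 / (s + 1j * (OmegaRp - eps))
            + 1 / (s + 1j * (OmegaRp + eps)))
    integrand = -1j * Delta**2 / (2 * OmegaR**2) * J * kern
    val = np.sum(integrand) * (eps[1] - eps[0])      # simple quadrature
    return val.real, -val.imag

# Example: shift and dephasing near the sideband centre at T = 10 K.
OmegaR = np.hypot(500.0, 500.0)
dw_P, G_P = sigma_dP(Delta0=OmegaR, T=10.0,
                     OmegaRp=OmegaR, Delta=500.0, OmegaR=OmegaR)
```

As η → 0, the delta-function part of the 1/(0^+ + iX) identity quoted in Appendix <ref> gives the closed form Γ_P(Δ_0) = (πα_PΔ^2/2Ω_R^2)|Δ_0-Ω_R'|^3 e^-|Δ_0-Ω_R'|/ϵ_c(2n_B(|Δ_0-Ω_R'|)+1), which is a useful cross-check of the quadrature.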
Similarly, for the fast term,Ω̇_+-^'F(t)=-i e^2i(Ω_R'+Δω)tΩ'_-+(t)∫_0^∞dt' Σ_+-,-+^SR(t'),and the condition for the validity of the Markov approximation,∫_0^∞(∫_t^∞dt'Σ_+-,-+^SR(t'))dt≪ 1,leads to the similar condition g/Γ_c≪ 1. For large Ω_R'+Δω, and due to the presence of the highly oscillatory exponential e^2i(Ω_R'+Δω)t, the contribution of the terms containing Ω'_-+ eventually averages out to a value small compared to that of Ω'_+-. Therefore, in the secular approximation, when|∫_0^∞dt'Σ_+-,-+^SR(t')|≪ 2(Ω_R'+Δω),we neglect the fast-oscillating term in the Markovian approximation, since it oscillates fast and averages out to a value small compared to the slow term. On substituting for Σ_+-,-+^SR(s) from Eq. (<ref>), and for Γ_c≫Δ_c, Ω_R'≫Δω, we get an explicit condition for the validity of the secular approximation: g^2/(Ω_R'Γ_c)≪ 1. In a similar manner, we can also neglect the contributions from Ω_++(t) and Ω_–(t) in Eq. (<ref>). Going back to the lab frame and within the secular approximation, we obtain, in the Laplace domain:Ω_+-(s=-iΔ_0)≃Ω_+-(0)/-i(Δ_0-Ω_R'-Δω)+Γ,with the initial condition expressed in terms of the reduced self-energy matrix elements given by Eqs. (<ref>) and (<ref>), see Appendix <ref>:Ω_+-(0)=- c^2Σ_++,–^SR(s=0)/Σ_++,++^SR(s=0)-Σ_++,–^SR(s=0).The irrelevant-part matrix elements due to photons, G_+-,+-^R, and phonons, G_+-,+-^P, are both identically zero under a Markov approximation. On substituting Ω_+-(s=-iΔ_0) from Eq. (<ref>) into the expression (<ref>), we obtain the one-peak Markovian spectrumS_m(Δ_0)≃XΓ/(Δ_0-Ω_R'-Δω)^2+Γ^2,which is a Lorentzian line centered at Δ_0=Ω_R'+Δω with width Γ; the frequency shift Δω and decay rate Γ are given by Eqs. (<ref>) and (<ref>), respectively, and the pre-factor is given byX=I̅^2 c^4[(Ω_R-Δ)^2/(Γ_c^2+(Ω_R'+Δ_c)^2)]/[(Ω_R+Δ)^2/(Γ_c^2+(Ω_R'-Δ_c)^2)+(Ω_R-Δ)^2/(Γ_c^2+(Ω_R'+Δ_c)^2)].Here, we have substituted the expressions for the self-energy matrix elements given by Eqs. (<ref>) and (<ref>). The lineshape in Eq. (<ref>) is Markovian with respect to both the photon and phonon interactions. However, we are interested in the non-Markovian regime with respect to the phonon coupling, and the equation for Ω_+-(t) then always contains an extra small term G_+-,+-^P(t) due to the non-Markovian interaction, known as the irrelevant-part matrix element. Assuming that the irrelevant part is small in the present problem, and in order to gain further insight, we estimate the typical size of its contribution and find a regime where the non-Markovian correction dominates over the irrelevant-part contribution. The equation for Ω_+-(t) in the Laplace domain, including its irrelevant-part matrix element, can be written as Ω_+-(Δ_0)=1/-i(Δ_0-Ω_R')+iΣ_+-,+-^dP(Δ_0)[1+G_+-,+-^P(Δ_0)]Ω_+-(0).The smallness of the irrelevant part compared to the non-Markovian self-energy due to the phonon interaction is expressed by the inequality|G_+-,+-^P(Δ_0)|≪ 1,in which case we only keep the contribution from the self-energy matrix element. Furthermore, expanding Eq.
(<ref>) in powers of the self-energy and ignoring the higher-order terms, we haveΩ_+-(Δ_0)≃1/-i(Δ_0-Ω_R')[1+Σ_+-,+-^dP(Δ_0)/Δ_0-Ω_R'+G_+-,+-^P(Δ_0)]Ω_+-(0). Assuming that the irrelevant part gives a small contribution and comparing it with the self-energy contribution leads to the following inequality,|Σ_+-,+-^dP(Δ_0)/Δ_0-Ω_R'|≫|G_+-,+-^P(Δ_0)|.The one-peak spectrum is centered around Δ_0∼Ω_R', with a width dominated by the Markovian decay rate Γ. Estimating the size of the above inequality around Δ_0∼Ω_R'+Γ, we find that the irrelevant-part matrix element is always suppressed by the small parameter Γ/Ω_R'≪ 1 compared to the self-energy contribution; see also Appendix <ref>. Following the above discussion, we neglect the irrelevant part in the equation for Ω_+-(t); after going back to the lab frame, we have, in the Laplace domain, Ω_+-(Δ_0)=Ω_+-(0)/-i[Δ_0-Ω_R'-Δω_R-Δω_P(Δ_0)]+Γ_R+Γ_P(Δ_0). The above expression is Markovian with respect to the photon coupling but non-Markovian with respect to the phonon interaction; the Markovian frequency shift Δω_R and decay rate Γ_R are given byΔω_R ≃ Re[Σ_+-,+-^dR(s)+Σ_+-,+-^SR(s)]_s=-iΩ_R', Γ_R =- Im[Σ_+-,+-^dR(s)+Σ_+-,+-^SR(s)]_s=-i(Ω_R'+Δω).Similarly, the non-Markovian frequency-dependent shift Δω_P(Δ_0) is expressed asΔω_P(Δ_0)= Re[Σ_+-,+-^dP(s)]_s=-iΔ_0and the dephasing Γ_P(Δ_0) is given byΓ_P(Δ_0)=- Im[Σ_+-,+-^dP(s)]_s=-iΔ_0. On substituting Ω_+-(s=-iΔ_0) from Eq. (<ref>) into the expression (<ref>), we obtain the one-peak non-Markovian spectrumS_nm(Δ_0)=X[Γ_R+Γ_P(Δ_0)]/[Δ_0-Ω_R'-Δω_R-Δω_P(Δ_0)]^2+[Γ_R+Γ_P(Δ_0)]^2, where the pre-factor X is given by Eq. (<ref>). Note that we have not performed a Markov approximation with respect to the phonon interaction. In the Markovian regime, the frequency shift Δω_P(Δ_0) and dephasing Γ_P(Δ_0) are replaced by their values at Δ_0=Ω_R'+Δω, giving rise to an exponential decay and hence to a Lorentzian line centered at Δ_0=Ω_R'+Δω with width Γ. In the non-Markovian regime, by contrast, the frequency shift and dephasing due to the phonon interaction are frequency dependent and lead to a non-exponential decay, giving rise to non-Lorentzian features in the lineshape. We apply the theoretical results obtained above to InAs/GaAs QDs, using the cavity and phonon parameters given in Refs. qd1,qd2,ulrich11prl,weiler12,brunner09,krummheuer02.§ RESULTS AND DISCUSSIONIn this section, we plot and analyze the results obtained in the previous sections for different parameter regimes. Typical phonon parameters for GaAs are obtained from Refs. weiler12,qd1,qd2: phonon cutoff ϵ_c=1 meV and coupling α_P=2.08× 10^-7 μeV^-2. For the laser and cavity, we choose the following parameters<cit.>: Ω=500 μeV, g=50 μeV, Γ_c=2 meV. The remaining parameters are varied in the plots and specified along with the figures.§.§ Temperature-dependent frequency shift and dephasingIn Fig. <ref>, we plot the frequency shift and dephasing as functions of the probe detuning for different values of the phonon bath temperature. We observe that the effect is strongly temperature dependent: it increases linearly in the high-temperature limit and vanishes at low temperatures. In the present parameter regime, for a typical GaAs QD, the dominant contribution from the phonon interaction is mainly due to the frequency shift, see Fig. <ref>. Here, we have assumed that the dot and the cavity are mutually resonant and set Δ=Δ_c=500 μeV.§.§ Temperature-dependent one-peak fluorescence spectraIn Fig.
<ref>, we also plot the associated one-peak spectra for different phonon bath temperatures at fixed detunings, and analyze the effect of the frequency shift and dephasing on the lineshape. We observe a distinct narrowing of, and asymmetry in, the side peak with increasing temperature, mainly due to the frequency shift: the frequency shift changes with increasing temperature, giving rise to non-Lorentzian features in the lineshape. This behavior is not observed in the Markovian lineshape, since both the frequency shift and the dephasing are constants in that case. In the dressed-state basis, the levels involved in the transition of interest (the Stokes line) are coupled asymmetrically to the phonon modes, and the phonon-induced frequency shift pulls these energy levels away from resonance, producing an additional shift. This extra shift increases the level separation and reduces the number of channels for radiative decay. In other words, some of the photons are used to compensate for this additional shift, which results in fewer photons reaching the outside world, or the detector, and eventually causes a narrowing of the sideband. Moreover, the frequency shift appears to be an odd function of the probe detuning (Fig. <ref>), which leads to different probabilities of emitting and absorbing phonons on either side of the probe detuning. We also observe that the Lamb shift and additional broadening due to the phonon interaction vanish for a resonant pump laser. This can be explained in the dressed-state representation, where both energy levels of interest, corresponding to one of the transitions (the Stokes line), are coupled asymmetrically to the phonons. For zero detuning they become degenerate, leading to vanishing frequency shift and dephasing.§ CONCLUSIONWe have discussed the dynamics of a driven cavity-QED system coupled to a 3D acoustic phonon reservoir with a non-Markovian pure-dephasing mechanism. We have observed highly modified non-Lorentzian features in the associated spectrum due to the phonon interaction, which for solid-state systems can only be described within the non-Markovian regime. We have shown that the quantum regression theorem is not valid in this case because of a non-zero irrelevant part, and because the full density matrix is not separable at all times. The system correlation is computed without using the quantum regression theorem, beyond the usual Born-Markov approximation. We have obtained analytical formulas for both the Markovian and non-Markovian one-peak lineshapes in terms of the model parameters. Both expressions look like Lorentzian lines centered around Δ_0≃Ω_R', but this analogy is not exact, since the shift and dephasing are also probe dependent in the case of the non-Markovian lineshape. We have investigated the one-peak spectrum of the Mollow triplet for different values of the phonon bath temperature with the dot and cavity mutually resonant. We have found vanishing shift and dephasing when the laser is resonant with the dot, because the levels are equally and asymmetrically coupled to the phonon modes, rendering them degenerate for a resonant pump laser. We have also observed small shifts in the peak positions due to the strong dependence of the frequency shift on the probe detuning. We have derived an exact form of the Nakajima-Zwanzig generalized master equation and shown that the Markovian and non-Markovian solutions give significantly different results. We show that the non-Markovian contribution is significant and can be clearly seen in the spectrum.
A distinct narrowing of the sideband has been found, contrary to recent results that report broadening in the presence of phonons. We have shown that the frequency-dependent shift has a strong temperature dependence, which causes features such as narrowing and asymmetry in the lineshape. This procedure can also be used to systematically account for features in the optical spectra of a general multi-level system due to genuine non-Markovian dynamics. § ACKNOWLEDGMENTSAK acknowledges financial support from the Icelandic Research Fund RANNIS, CIFAR, Canada, the National Key Research and Development Program of China (Grant No. 2016YFA0301200), and the NSFC grants (No. 11574025 and No. U1530401). AK thanks Bill Coish, Sigurdur I. Erlingsson, Stefano Chesi, Li-jing Jin, and Tilen Cadez for useful discussions and feedback.§ DETECTOR RESPONSE FUNCTIONThe detector response function in Eq. (<ref>) isI̅=∫_0^∞dτ ∑_kg_kg_k^De^-iω_kτ,where g_k^D=|℘_αβ|ε_k. Applying a continuum of modes by replacing the sum with an integral, ∑_kg_k^2→∫_0^∞D(ϵ)|g(ϵ)|^2dϵ, and using the well-known formula1/X± i0^+=𝒫(1/X)∓ iπδ(X)(𝒫 denotes the principal part), one can write the square of the detector response function in terms of principal-part and delta-function contributions asI̅^2= ∫_0^∞dϵ D_c(ϵ)|g(ϵ)|^2[-i𝒫(1/ϵ)+πδ(ϵ)]×∫_0^∞dν D(ν)|g^D(ν)|^2[i𝒫(1/ν)+πδ(ν)],where D_c(ϵ) and D(ν) are the photonic densities of states of the cavity and of open space, respectively. The photonic density of states of the cavity is described by the Lorentzian density of states given by Eq. (<ref>). § SCHRIEFFER-WOLFF TRANSFORMATIONThe Hamiltonian H' resulting from the polaron transformation is H'= Ω_R'/2σ_3+∑_kΔ_ka_k^†a_k+∑_qω_qb_q^†b_q+ c.s σ_3 ∑_kg_k(a_k+a_k^†)+( c^2- s^2)/2σ_3 ∑_qλ_q(b_q+b_q^†)+∑_kg_k[( c^2a_k- s^2a_k^†)σ_+-+h.c.]- c.s ∑_qλ_q(σ_+-+σ_-+)(b_q+b_q^†)-∑_qλ_q^2/4ω_q+ c.s ∑_qλ_q ^2/ω_q(σ_+-+σ_-+).The last two terms in the above Hamiltonian commute with the rest of the Hamiltonian and can be ignored within a secular approximation for large Ω_R'. We remove the energy-exchange process due to the phonon interaction using a leading-order Schrieffer-Wolff transformation<cit.>, starting from the full Hamiltonian:H' =H_1+V_2,H_1 =H_S+H_R+H_P+H_dR+H_dP+H_SR,where the individual terms are defined asH_S =Ω'_R/2σ_3,H_R =∑_kΔ_ka_k^†a_k,H_P =∑_qω_qb_q^†b_q,H_dR = c.s σ_3 ∑_kg_k(a_k+a_k^†),H_dP =( c^2- s^2)/2 σ_3 ∑_qλ_q(b_q+b_q^†),H_SR =∑_kg_k[( c^2a_k- s^2a_k^†)σ_+-+h.c.],V_2 =- c.s ∑_qλ_q(σ_+-+σ_-+)(b_q+b_q^†).We apply a transformation, H̅=e^AH'e^-A, generated by an anti-hermitian operator A=-A^†, to eliminate the transition terms to first order. Using the Baker-Campbell-Hausdorff formula and expanding H̅ in powers of A, we obtainH̅=H_1+V_2+[A,H_1]+[A,V_2]+1/2[A,[A,H_1]]+...In order to eliminate the transition term V_2 to leading order, we set V_2=-[A,H_1], so that A can be written asA=1/L_1V_2,where L_1𝒪=[H_1,𝒪]. Here A is of the order of the transition term V_2.
Substituting for A, we obtain the Hamiltonian up to second and higher order in V_2:H̅=H_1+1/2[A,V_2]+...Using the definitions of H_1 and V_2, we obtain the expression for A as:A=- c.s∑_qλ_q/Ω_R(b_q+b_q^†)(σ_+--σ_-+).Therefore, the transformed Hamiltonian, to first order in the transition terms due to phonons, can be well approximated and written in terms of free and perturbed parts asH̅≃ H_0+H_V, where the free part isH_0 =H_S+H_R+H_P,H_S =Ω'_R/2σ_3,H_R =∑_kΔ_ka_k^†a_k,H_P =∑_qω_qb_q^†b_qand the perturbed part is given byH_V =H_dR+H_SR+H_dP,H_dR = c.s σ_3 ∑_kg_k(a_k+a_k^†),H_dP = c^2- s^2/2 σ_3 ∑_qλ_q(b_q+b_q^†),H_SR =∑_kg_k[( c^2 a_k- s^2 a_k^†)σ_+-+h.c.],which are Eqs. (<ref>)-(<ref>) in the main text.§ IRRELEVANT PART MATRIX ELEMENTThe irrelevant part, given by Eq. (<ref>), can be written within the Born approximation, after transforming to the Laplace domain, asΦ(s)≃-i Tr_RTr_PL_V 1/s+iL_0 QΩ(0).In particular, we want the matrix element Φ_+-,+-(s) due to the phonon interaction, which can be simplified and written asΦ_+-,+-(s)=-i Tr_P L_Y^+ 1/s+i(Ω_R'+L_P) [QΩ(0)]_+-,where the irrelevant part of the stationary density matrix can be found<cit.> using the GME discussed in Sec. <ref>:[QΩ(0)]_+-=-ilim_s→ 0s/s+i(Ω_R'+L_P)L_Y^+ρ_P(t_0) [ρ_S(s)σ_ab]_+-.The propagators in Eq. (<ref>) have no poles at s=0^+, where 0^+ is a positive infinitesimal. After performing the limit in the above expression, we substitute [QΩ(0)]_+- into the expression for Φ_+-,+-(s) to obtain an expression for the irrelevant term asΦ_+-,+-(s)=- Tr_P L_Y^+ 1/s+i(Ω_R'+L_P)1/0^++i(Ω_R'+L_P)L_Y^+ρ_P(t_0)Ω_+-(0),where Ω_+-(0)=[ρ̅_Sσ_ab]_+-, see Eqs. (<ref>) and (<ref>). The above expression can be written in terms of the irrelevant-part matrix element asΦ_+-,+-(s) =G_+-,+-^P(s)Ω_+-(0),G_+-,+-^P(s) =- Tr_P L_Y^+ 1/s+i(Ω_R'+L_P)1/0^++i(Ω_R'+L_P)L_Y^+ρ_P(t_0). Solving the above expressions and applying a continuum of modes, we find that the irrelevant-part matrix element is suppressed by 1/Ω_R' compared to the self-energy matrix element due to the phonon interaction. In this limit, the contribution from the irrelevant part can be neglected compared to the contribution from the non-Markovian self-energy.§ SELF-ENERGY CALCULATIONSThe reduced self-energy superoperator Σ_S(t) in Eq. (<ref>) can be transformed into the Laplace domain; using L_V=L_dR+L_SR+L_dP, we obtainΣ_S(s)= -i Tr_RTr_P(L_dR+L_SR+L_dP)1/s+iL×(L_dR+L_SR+L_dP)ρ_R(t_0)ρ_P(t_0),where dropping Q from the exponential in Eq. (<ref>) does not affect the final expression<cit.>. Cross terms in the above expression vanish because L_dR and L_SR act on operators in the radiation-mode Hilbert space, whereas L_dP acts on the phonon Hilbert space, so such terms do not contribute to the final trace. Moreover, the cross terms between L_dR and L_SR give rise to off-block-diagonal matrix elements in the self-energy matrix and are neglected within the secular approximation, see Sec. <ref> in the main text. The self-energy superoperator in Eq. (<ref>) can be decomposed into three different parts as:Σ_S(s)=Σ_S^dR(s)+Σ_S^dP(s)+Σ_S^SR(s),where the first and second terms give rise to pure dephasing (T_2^* processes) due to the radiation and phonon modes, respectively, whereas the last term leads to transitions (T_1 processes) due to the radiation-mode coupling. The free propagator in the reduced self-energy expression can be expanded in powers of the interaction Liouvillian L_V as<cit.>1/s+iL=1/s+iL_0∑_k(-iL_V1/s+iL_0)^2k,where, because of the form of the couplings in the present model, only the even powers 2k in the above expression survive the final trace.
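As a toy check of this expansion, the sketch below compares the truncated resolvent series against exact matrix inversion for a small, arbitrary Liouvillian pair (L_0, L_V). The matrices are random stand-ins of ours, and note that the generic Dyson series contains all powers of L_V — the restriction to even powers 2k quoted above holds only after the final trace, for the couplings of the present model.

```python
import numpy as np

n = 4
L0 = np.diag([5.0, 10.0, -7.0, 3.0])     # free Liouvillian (diagonal basis)
rng = np.random.default_rng(0)
LV = 0.2 * rng.normal(size=(n, n))       # weak perturbation, assumed small
s = 1.0 + 0.5j
I = np.eye(n)

G0 = np.linalg.inv(s * I + 1j * L0)      # free resolvent 1/(s + i L0)
exact = np.linalg.inv(s * I + 1j * (L0 + LV))

# Truncated expansion 1/(s + i L) = G0 * sum_k (-i LV G0)^k.
series = sum(np.linalg.matrix_power(-1j * LV @ G0, k) for k in range(5))
approx = G0 @ series
print(np.abs(approx - exact).max())      # small; Born keeps only k <= 2
```

The Born (second-order) truncation corresponds to keeping k ≤ 2 in the sum, which is controlled here because the norm of LV G0 is small.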
In order to find the matrix elements of the self-energy superoperator, we write the superoperators in matrix form in the dressed-state basis as[L_S]= ( [ 0 0 0 0; 0 0 0 0; 0 0Ω_R' 0; 0 0 0 -Ω_R' ]),where [L_S]_αβ,γδ=Tr{|β⟩⟨α|L_S|γ⟩⟨δ|} and {α,β}∈{ +,-}. In the dressed-state basis, the non-interacting Liouvillian is diagonal, and its resolvent can be written in 2× 2 blocks as[1/s+iL_0]= ( [ G_∥(s)0;0 G_⊥(s) ]),where the parallel block is[G_∥(s)]= ( [ 1/s+i(L_R+L_P)0;0 1/s+i(L_R+L_P) ]),and the perpendicular block is given by[G_⊥(s)]= ( [1/s+i(Ω_R'+L_R+L_P)0;0 1/s+i(-Ω_R'+L_R+L_P) ]).In a similar fashion, we can find the other matrices as well:[L_dR(P)]= ( [L_X(Y)^- 0 0 0; 0L_X(Y)^- 0 0; 0 0L_X(Y)^+ 0; 0 0 0 -L_X(Y)^+; ]),where we have defined Liouvillians for the commutation and anti-commutation relations, L_X(Y)^±𝒪=[X_R(P),𝒪]_±, with the operators X_R and X_P given byX_R =√(Ω_R^2-Δ^2)/2Ω_R∑_kg_k(a_k+a_k^†),X_P =Δ/2Ω_R∑_qλ_q(b_q+b_q^†).Furthermore, we also find the matrix for the superoperator L_SR in the dressed-state basis:[L_SR]=( [00 -Z_r^†Z_l;00Z_l^† -Z_r; -Z_rZ_l00;Z_l^† -Z_r^†00 ] ),where we have defined the operators for left and right multiplication as:Z_l𝒪_R =∑_kg_k( c^2a_k- s^2a_k^†)𝒪_R,Z_r𝒪_R =𝒪_R∑_kg_k( c^2a_k- s^2a_k^†),and 𝒪_R is an operator in the radiation-mode Hilbert space. The reduced self-energy matrix elements of interest can then be calculated according to Eq. (<ref>) in the main text.§.§ Self-energy for photon interactionWe apply a continuum of modes for the cavity density of states given by a Lorentzian spectrum <cit.>,D_c(ϵ)|g(ϵ)|^2=1/πg^2Γ_c/(ϵ-Δ_c)^2+Γ_c^2,where Δ_c=ω_c-ω is the detuning of the cavity from the pump laser frequency, and Γ_c is the cavity bandwidth. The reduced self-energy matrix elements due to the radiation-mode coupling that give rise to transitions are Σ_+-,++^SR(s) =-ig^2(Ω_R+Δ)^2/4Ω_R^2(1/s+i(Ω_R'-Δ_c)+Γ_c+1/s-i(Ω_R'-Δ_c)+Γ_c),Σ_+-,–^SR(s) =ig^2(Ω_R-Δ)^2/4Ω_R^2(1/s+i(Ω_R'+Δ_c)+Γ_c+1/s-i(Ω_R'+Δ_c)+Γ_c),Σ_+-,+-^SR(s) =-ig^2/4Ω_R^2((Ω_R+Δ)^2/s+iΔ_c+Γ_c+(Ω_R-Δ)^2/s-iΔ_c+Γ_c),Σ_+-,-+^SR(s) =-ig^2(Ω_R^2-Δ^2)/4Ω_R^2(1/s+iΔ_c+Γ_c+1/s-iΔ_c+Γ_c),and the self-energy matrix element responsible for pure dephasing due to the cavity coupling isΣ_+-,+-^dR(s)=-ig^2(Ω_R^2-Δ^2)/2Ω_R^2(1/s+i(Ω_R'-Δ_c)+Γ_c+1/s+i(Ω'_R+Δ_c)+Γ_c). For a large-bandwidth cavity, the coupling of its radiative modes to the system is treated under the Markov approximation, and the above self-energies are replaced by their values at s=-i(Ω_R'+Δω); see the main text for details.§.§ Self-energy for phonon interactionSimilarly, we apply a continuum of modes for 3D acoustic phonons<cit.> with an exponential cutoff at ϵ=ϵ_c, ∑_qλ_q^2→α_P∫_0^∞dϵ|ϵ|^3e^-|ϵ|/ϵ_c,and obtain the expression for the self-energy in the Laplace domain asΣ_+-,+-^dP(s)= -iα_PΔ^2/2Ω_R^2∫_0^∞dϵ|ϵ|^3e^-|ϵ|/ϵ_c(2n_B(ϵ)+1)×(1/s+i(Ω_R'-ϵ)+1/s+i(Ω_R'+ϵ)),where α_P is the phonon coupling parameter in units of frequency^-2 and n_B(ϵ) is the Bose function. On further simplification, the above self-energy matrix element can be decomposed into real and imaginary parts after setting s=-iΔ_0, where Δ_0 is the detuning of the probe from the pump laser frequency:Σ_+-,+-^dP(s=-iΔ_0)=Δω_P(Δ_0)-iΓ_P(Δ_0),where Δω_P(Δ_0)= Re[Σ_+-,+-^dP(Δ_0)] is the frequency shift and Γ_P(Δ_0)=- Im[Σ_+-,+-^dP(Δ_0)] is the dephasing due to the phonon interaction.§.§ Self-energy matrix element calculated exactlyIn the previous section, we computed the self-energy matrix element for the phonon interaction to second order, within the Born approximation.
In this section, we discuss an equation-of-motion method to find the phonon self-energy to all orders in the phonon perturbation Liouvillian L_Y^+, beyond the Born approximation, and show that the exact approach recovers the result obtained within the Born approximation. Using the general form of the superoperator matrices, the self-energy matrix element due to phonons can be written in the Laplace domain as:Σ_+-,+-^P(s)=-i Tr_P L_Y^+ 1/s+i(Ω_R'+L_P+L_Y^+) L_Y^+ ρ_P(t_0),or, in the time domain,Σ_+-,+-^P(t)=-ie^-iΩ_R't Tr_P [L_Y^+ e^-i(L_P+L_Y^+) L_Y^+ ρ_P(t_0)]_𝒞(t).On further simplification, one obtains𝒞(t) =2Tr_P [[X_P,X_P(t)]_+ρ_P(0)]=2⟨ X_PX_P(t)⟩+2⟨ X_P(t)X_P⟩,whereX_P(t)=e^-i(L_P+L_Y^+)tX_P(0).The above expression gives rise to the differential equationẊ_P(t)=-i(L_P+L_Y^+)X_P(t),which can be written asẊ_P(t)=-i(L_Y^+-L_P)X_P(t).Introducing X̃_P(t)=e^-iL_PtX_P(t), we obtain an equation of motion for X̃_P(t):Ẋ̃̇_P(t)=-i[ X_P^0(t),X̃_P(t)]_+,where X_P^0(t)=Δ/2Ω_R∑_qλ_q(b_qe^iω_qt+b_q^†e^-iω_qt).Solving mode by mode, the solution for X̃_P(t) takes the formX̃_P,q(t)=U_q(t)X̃_P,q(0)W_q(t),whereX̃_P(t) =∑_qX̃_P,q(t), U̇_q(t) =-iX_P,q^0(t)U_q(t), Ẇ_q(t) =-iW_q(t)X_P,q^0(t).Using Eqs. (<ref>) and (<ref>), we have (for a single mode q)U̇_q(t)=-iλ_q^'e^-iH_qt(b_q+b_q^†)e^iH_qtU_q(t),whereλ_q^' =Δλ_q/2Ω_R,H_q =ω_qb_q^†b_q.IntroducingŨ_q(t)=e^iH_qtU_q(t)⇒ U_q(t)=e^-iH_qtŨ_q(t)and taking the time derivative, one finds the expressionŨ̇_q(t)=i[H_q-λ_q^'(b_q+b_q^†)]Ũ_q(t). Using the shift operator S_q=e^λ_q^'/ω_q(b_q-b_q^†), the above expression can be written asH_q-λ_q^'(b_q+b_q^†)=S_qH_qS_q^†-2λ_q'^2/ω_q,which impliesŨ̇_q(t)=i(S_qH_qS_q^†-2λ_q'^2/ω_q)Ũ_q(t).Multiplying by S_q^† on both sides,d(S_q^†Ũ_q(t))/dt=i(H_q-2λ_q'^2/ω_q)S_q^†Ũ_q(t),and solving for S_q^†Ũ_q(t), S_q^†Ũ_q(t)=e^iH_qtS_q^†Ũ_q(0)e^-iΔ_P,qt;Δ_P,q=2λ_q^'2/ω_q;using U_q(t)=e^-iH_qtŨ_q(t) and U_q(0)=1, we obtainU_q(t)=e^-iH_qtS_qe^iH_qtS_q^†e^-iΔ_P,qt. Similarly, for W_q(t),W_q(t)=S_qe^-iH_qtS_q^†e^iH_qte^iΔ_P,qt.Substituting for U_q(t) and W_q(t), and using U_q(0)=W_q(0)=1, we obtain an expression for X_P,q(t):X_P,q(t)=S_qe^iH_qtS_q^†X_P,q(0)S_qe^-iH_qtS_q^†.Recalling Eq. (<ref>):𝒞(t)=2⟨ X_PX_P(t)⟩_𝒞_1(t)+2⟨ X_P(t)X_P⟩_𝒞_2(t),and substituting for X_P,q(t), we have𝒞_1(t)= Tr_P[∑_q',q”'X_P,q'(0)(∏_q”S_q”e^iH_q”tS_q”^†X̃_P,q”'(0)× S_q”e^-iH_q”tS_q”^†)(∏_qρ_P,q(0))].The above expression can be evaluated for q=q'=q”=q”', because other mode combinations do not contribute to the final trace, as they do not conserve the particle number of the individual modes. After some algebraic manipulation, one obtains the simplified expression𝒞_1(t)=∑_q'=q”'∏_q=q'=q”Tr_P[X_P,q'(0)(S_q”e^iH_q”tS_q”^†X̃_P,q”'(0)S_q”e^-iH_q”tS_q”^†)ρ_P,q(0)],and similarly for 𝒞_2(t),𝒞_2(t)=∑_q'=q”'∏_q=q'=q”Tr_P[(S_q”e^iH_q”tS_q”^†X̃_P,q”'(0)S_q”e^-iH_q”tS_q”^†)X_P,q'(0)ρ_P,q(0)].
Considering the factor in parentheses that is common to both expressions, we have

X_P(t) = ∑_{q=q'} ∏_q (S_q e^{iH_q t} S_q^† X̃_P,q'(0) S_q e^{-iH_q t} S_q^†),

where we define

Z_q = S_q^† X̃_P,q(0) S_q  and  Y_q(t) = e^{iH_q t} Z_q e^{-iH_q t}.

Using the Baker-Campbell-Hausdorff formula,

Z_q = S_q^† X̃_P,q(0) S_q = λ_q'(b_q+b_q^†) - 2λ_q'^2/ω_q,

and similarly for Y_q(t),

Y_q(t) = e^{iH_q t} Z_q e^{-iH_q t} = λ_q'(b_q e^{-iω_q t}+b_q^† e^{iω_q t}) - 2λ_q'^2/ω_q.

Substituting for Y_q(t) and Z_q in the expression for X_P(t), and then substituting for X_P(t) in the expressions for 𝒞_1(t) and 𝒞_2(t), we perform the final trace to obtain

𝒞_1(t) = Δ^2/4Ω_R^2 ∑_q λ_q^2 (n_q e^{iω_q t} + (n_q+1) e^{-iω_q t}),
𝒞_2(t) = Δ^2/4Ω_R^2 ∑_q λ_q^2 (n_q e^{-iω_q t} + (n_q+1) e^{iω_q t}).

After substituting the expressions for 𝒞_1(t) and 𝒞_2(t) into the expression for the reduced self-energy matrix element and performing the final trace, we obtain

Σ_+-,+-^P(t) = -i Δ^2 e^{-iΩ_R't}/2Ω_R^2 ∑_q λ_q^2 (2n_q+1) cos(ω_q t).

Applying the continuum of modes and transforming back to the Laplace domain,

Σ_+-,+-^P(s) = -i α_P Δ^2/2Ω_R^2 ∫_0^∞ dϵ |ϵ|^3 e^{-|ϵ|/ϵ_c} (2n_B(ϵ)+1) × (1/(s+i(Ω_R'-ϵ)) + 1/(s+i(Ω_R'+ϵ))).

The above expression recovers the result obtained for the self-energy matrix element within the Born approximation, Eq. (<ref>) in the main text.

§ INITIAL CONDITION

In this section, we discuss the stationary density matrix and find the initial condition for the operator Ω(t) given in Eq. (<ref>). The stationary density matrix ρ̄ accounts for the correlations that accumulate between the system and the interactions in the time interval t ∈ [t_0, 0]. In this interval, the density matrix is given by its value at t = 0, and we can replace ρ_S(t') → ρ_S(t) in Eq. (<ref>) to obtain

ρ̇_S(t) = -iL_S ρ_S(t) - iρ_S(t) ∫_{t_0}^t dt' Σ_S(t-t').

Changing the integration variable to τ = t-t' gives

ρ̇_S(t) = -iL_S ρ_S(t) - iρ_S(t) ∫_0^{t-t_0} dτ Σ_S(τ).

One can extend the upper limit of the integration to infinity by setting t_0 → -∞; solving the differential equation then yields

ρ_S(0) = e^{i[L_S+Σ_S(s=0)]t_0} ρ_S(t_0),

and taking the Laplace transform on both sides, we obtain

ρ_S(0)/s = 1/(s+iL_S+iΣ_S(s=0)) ρ_S(t_0),

where the Laplace transform is defined as f(s) = ∫_0^∞ e^{-st} f(t) dt. We choose an initial condition in which the exciton is in the excited state |a⟩, given by ρ_S(t_0) = |a⟩⟨a|, and evolves in the presence of the pump laser. After performing a secular approximation and using the definition of the stationary limit,

ρ̄_S = lim_{s→0^+} s (ρ_S(0)/s),

one can find all the elements of the stationary density matrix:

ρ̄_++ = -Σ_++,--(s=0)/Σ_z(s=0),

where Σ_z = Σ_++,++ - Σ_++,--. Similarly,

ρ̄_-- = Σ_++,++(s=0)/Σ_z(s=0),
ρ̄_+- = ρ̄_-+ = 0.

Recalling the initial condition for the operator Ω(t) given by Eq. (<ref>),

Ω(0) = ρ̄ σ_ab,

the matrix element of interest can be extracted after performing the trace over the phonon and photon modes and substituting for the stationary density-matrix element ρ̄_++:

Ω_+-(0) = -c^2 Σ_++,--(s=0)/Σ_z(s=0).

The dynamics of the operator Ω(t) are evaluated using the Hamiltonian given by Eq. (<ref>), with the initial condition given by the above expression, which is Eq. (<ref>) in the main text.

§ REFERENCES

[1] J. von Neumann, Mathematical Foundations of Quantum Mechanics (Princeton University Press, Princeton, NJ, 1955).
[2] P. Benioff, J. Stat. Phys. (1980).
[3] R. P. Feynman, Int. J. Theor. Phys. (1982).
[4] D. P. DiVincenzo, in Mesoscopic Electron Transport (Springer Netherlands, Dordrecht, 1997), Chap. "Topics in Quantum Computers", pp. 657-677.
[5] B. Schumacher, Phys. Rev. A 51, 2738 (1995).
[6] D. Loss and D. P. DiVincenzo, Phys. Rev. A 57, 120 (1998).
[7] A. Zrenner, E. Beham, S. Stufler, F. Findeis, M. Bichler, and G. Abstreiter, Nature 418 (2002).
[8] D. Brunner et al., Science 325, 70 (2009).
[9] M. Strauß et al., Phys. Rev. B 93, 241306 (2016).
[10] A. V. Kuhlmann et al., Nat. Commun. 6 (2015).
[11] A. Ulhaq et al., Opt. Express 21, 4382 (2013).
[12] G. Cui and M. G. Raymer, Phys. Rev. A 73, 053807 (2006).
[13] D. P. S. McCutcheon and A. Nazir, New J. Phys. 12, 113042 (2010).
[14] S. M. Ulrich, S. Ates, S. Reitzenstein, A. Löffler, A. Forchel, and P. Michler, Phys. Rev. Lett. 106, 247402 (2011).
[15] C. Roy and S. Hughes, Phys. Rev. B 85, 115309 (2012).
[16] D. P. S. McCutcheon and A. Nazir, Phys. Rev. Lett. 110, 217401 (2013).
[17] S. Weiler et al., Phys. Rev. B 86, 241304 (2012).
[18] B. Krummheuer, V. M. Axt, and T. Kuhn, Phys. Rev. B 65, 195313 (2002).
[19] A. Vagov, M. Glässl, M. D. Croitoru, V. M. Axt, and T. Kuhn, Phys. Rev. B 90, 075309 (2014).
[20] S. Hughes and G. S. Agarwal, Phys. Rev. Lett. 118, 063601 (2017).
[21] G. Y. Kryuchkyan, V. Shahnazaryan, O. V. Kibis, and I. A. Shelykh, Phys. Rev. A 95, 013834 (2017).
[22] W. A. Coish and D. Loss, Phys. Rev. B 70, 195340 (2004).
[23] P. Kaer, T. R. Nielsen, P. Lodahl, A.-P. Jauho, and J. Mørk, Phys. Rev. Lett. 104, 157401 (2010).
[24] S. Swain, J. Phys. A: Math. Gen. 14, 2577 (1981).
[25] I. Thanopulos, V. Yannopapas, and E. Paspalakis, Phys. Rev. B 95, 075412 (2017).
[26] D. M. Toyli et al., Phys. Rev. X 6, 031004 (2016).
[27] D. P. S. McCutcheon, Phys. Rev. A 93, 022119 (2016).
[28] E. Fick and G. Sauermann, The Quantum Statistics of Dynamic Processes (Springer-Verlag, Berlin, 1990).
[29] A. Muller et al., Phys. Rev. Lett. 99, 187402 (2007).
[30] G. Mahan, Many-Particle Physics (Plenum, New York, 1990).
[31] B. R. Mollow, Phys. Rev. 188, 1969 (1969).
[32] C. Cohen-Tannoudji, J. Dupont-Roc, and G. Grynberg, Atom-Photon Interactions (Wiley-VCH, Berlin, 2004).
[33] C. Hopfmann et al., Phys. Rev. B 95, 035302 (2017).
[34] A. Laucht et al., Phys. Rev. B 94, 161302 (2016).
[35] M. O. Scully and M. S. Zubairy, Quantum Optics (Cambridge University Press, Cambridge, 1997).
[36] I. de Vega and D. Alonso, Phys. Rev. A 77, 043836 (2008).
[37] M. Lax, Opt. Commun. 179, 463 (2000).
[38] J. Jin, C. Karlewski, and M. Marthaler, New J. Phys. 18, 083038 (2016).
[39] M. E. Levinshtein and S. Rumyantsev, Handbook Series on Semiconductor Parameters (World Scientific, London, 1996).
[40] A. Dargys and J. Kundrotas, Handbook on Physical Properties of Ge, Si, GaAs and InP (Science and Encyclopedia Publishers, Vilnius, 1994).
[41] J. R. Schrieffer and P. A. Wolff, Phys. Rev. 149, 491 (1966).
[42] D. P. DiVincenzo and D. Loss, Phys. Rev. B 71, 035318 (2005).
[43] J. Ma, Z. Sun, X. Wang, and F. Nori, Phys. Rev. A 85, 062323 (2012).
Asymmetric noise-induced large fluctuations in coupled systems

Thomas W. Carr

U.S. Naval Research Laboratory, Code 6792, Plasma Physics Division, Nonlinear Systems Dynamics Section, Washington, D.C. 20375, USA (e-mail: Ira.Schwartz@nrl.navy.mil; tel: 202-404-8359; fax: 202-767-0631)

Department of Mathematics, Southern Methodist University, Dallas, Texas 75275, USA

Abstract: Networks of interacting, communicating subsystems are common in many fields, from ecology, biology, and epidemiology to engineering and robotics. In the presence of noise and uncertainty, interactions between the individual components can lead to unexpected, complex system-wide behaviors. In this paper, we consider a generic model of two weakly coupled dynamical systems and show how noise in one part of the system is transmitted through the coupling interface. Working synergistically with the coupling, the noise on one system drives a large fluctuation in the other, even when there is no noise in the second system. Moreover, the large fluctuation happens while the first system exhibits only small random oscillations. Uncertainty effects are quantified by showing how the characteristic time scales of noise-induced switching scale as a function of the coupling between the two parts of the experiment. In addition, our results show that the probability of switching in the noise-free system scales inversely as the square of the reduced noise-intensity amplitude, rendering switching in the virtual system an extremely rare event. Our results on the interplay between transmitted noise and coupling are also confirmed through simulations, which agree quite well with the analytic theory.

PACS numbers: 05.45.-a, 05.40.-a, 05.10.-a

§ INTRODUCTION

Understanding the interaction between noise and system dynamics is key to understanding unexpected system behaviors <cit.>, and hence to the robust and efficient operation of autonomous systems deployed in noisy, uncertain environments. It is often assumed that dynamics with small noise input can be modeled as small perturbations of the deterministic system dynamics; however, there are many known cases where small noise inputs can drive large-scale transitions in system behavior. Examples include noise-induced switching between attractors in continuous systems <cit.>, and noise-induced switching and extinction in finite-size systems <cit.>. In both switching and extinction, a significant change in the state of the system occurs as the result of a noise-induced large fluctuation. For systems with small noise, such a large fluctuation is a rare event, and it occurs on average when the noise signal lies along a so-called "optimal path" <cit.>. For systems operating in most common environments, the noise is assumed to be homogeneous, and it is relatively straightforward to compute the optimal paths that lead to large fluctuations <cit.>. In contrast to homogeneous noise, finite systems, whether continuous or discrete, are often subject to asymmetric noise <cit.>. One excellent example of multiple independent noise sources occurs in coupled finite communicating systems operating in noisy environments <cit.>, where the effects of noise on the collective motions of swarms of self-propelled autonomous agents result in drastic pattern changes.
Such systems are of tremendous practical importance; coordinated groups of agents have been deployed for a wide range of applications, including exploration and mapping of unknown environments <cit.>, search and rescue <cit.>, and construction <cit.>. Extensions of the basic swarming dynamics that use teams of heterogeneous agents capable of cooperatively executing more complex tasks are presented in <cit.>. In addition, network structure and uncertainties in delayed communication have been shown to give rise to dynamic patterns in collective swarm motion <cit.>. Usually, sophisticated models are used to predict behavioral patterns for large groups of interacting individual agents <cit.>. However, testing these behaviors in real-world environments often presents significant logistical challenges. In many cases, it is more practical to rely on mixed-reality experiments (similar to the ideas in <cit.>), where real agents are deployed alongside simulated ones, in order to better understand how real-world noise affects the collective dynamics, as well as to validate the theory against a critical number of agents <cit.>. This creates a situation where we have two coupled systems with asymmetric noise characteristics: the set of real agents, operating in a high-noise real-world environment, and the simulated agents, operating in an (at least partly) idealized simulated world. Our current paper is inspired by this situation; we consider a generic pair of coupled dynamical systems and study the effects of interaction on switching in the low-noise system.

As shown, e.g., in <cit.>, even weak coupling between system dynamics can significantly affect the behavior of the coupled systems. We show that even weak interaction between a low-noise, or noise-free, simulated system and a noisy "real" system can cause catastrophic transitions between states. That is to say, even if only part of the system operates in noisy real-world conditions, we can observe large changes in the dynamics of the idealized, low-noise virtual part, since noise is transmitted from the real to the virtual world via the coupling. Since one of our main results shows how the probability exponent is enhanced by the ratio of the two noise sources, we refer to the state transitions induced in the noise-free virtual system by coupling with the noisy real-world system as extreme rare events.

The rest of the paper is laid out as follows. In Section <ref> we define the general asymmetric noise problem for coupled systems (which include mixed-reality (MR) systems). Gaussian noise is considered here, but the theory can be made more general to include non-Gaussian perturbations <cit.> and correlations <cit.>. Noise-induced large fluctuations are posed in a variational setting for the coupled problem, and the linear response to the noise is derived in general. In Section <ref>, we consider a model problem of coupled bi-stable attractors subjected to asymmetric noise. For this specific problem we compare our theory to Monte Carlo simulations and derive scalings as a function of the heterogeneity of the noise. We derive a general scaling relation between the noise ratio and the coupling strength that governs the mean probability to switch. To quantify the extreme rare event of the low-noise system switching, we derive the exponent of the probability distribution and show that this exponent varies as the inverse noise ratio squared. Comparisons between our general theory and numerical experiments of such large fluctuation events show excellent agreement.
A discussion of the results and conclusions is given in Section <ref>.

§ PROBLEM SETUP IN GENERAL

The problem formulation described here is motivated by the mixed-reality system shown in Fig. <ref>. In this setup, the physical agents operate in an uncertain, noise-ridden environment, which imposes a larger noise source on all of the agents. In contrast, the virtual agents are isolated from any real environmental perturbations and experience only the noise modeled in the simulation. We let the time-dependent vectors x(t) and y(t) denote the state-space configurations of agents operating in the virtual and real environments, respectively. We wish to examine the situation where there is a significant asymmetry in the noise characteristics of the two coupled systems; in particular, where the noise intensity in the low-noise ("virtual") system goes to zero.

§.§ The stochastic equations of motion

To analyze how noise impacts the dynamics from one environment to another, we consider a general coupled stochastic differential equation of the form

ẋ(t) = f(x(t)) + h_1(x(t), y(t), K) + ϵ G_1(x(t)) ξ_x(t),
ẏ(t) = f(y(t)) + h_2(x(t), y(t), K) + G_2(y(t)) ξ_y(t),

where x ∈ ℝ^{n_x} and y ∈ ℝ^{n_y} represent the state-space configurations of the low- and high-noise systems, respectively, and the matrices G_i, i = 1, 2 [Throughout the paper, boldface lower-case letters indicate vectors, while boldface upper-case letters indicate matrices.] are given by

G_i(·) = diag{g_i1(·), g_i2(·), ..., g_in_i(·)},

where the g_ij's are general nonlinear functions of x (for i = 1) and y (for i = 2). The coupling strength is denoted by the parameter K, and we choose h_1, h_2 so that

h_1(x(t), y(t), 0) = h_2(x(t), y(t), 0) = 0;

i.e., the systems x, y are uncoupled when K = 0. We assume that the noise inputs ξ_x ∈ ℝ^{n_x} and ξ_y ∈ ℝ^{n_y} are independent Gaussian-distributed stochastic processes with independent components and intensity D. They are both characterized by a probability density functional 𝒫_ξ = e^{-ℛ_ξ/D}, where ℛ_ξ is defined as

ℛ_ξ[ξ(t)] = 1/4 ∫ dt ξ(t)·ξ(t).

In order to capture the asymmetric noise levels between the two systems, we introduce a parameter, ϵ ≪ 1, that controls the noise intensity of the state variable x. The case ϵ = 0 corresponds to noise-free operation. However, even with ϵ = 0, noise-induced transitions can still occur as a result of noise transference through the coupling with the high-noise system.

§.§ Deterministic dynamics

In the absence of any noise, Eqs. (<ref>) are ordinary differential equations, and we suppose that there exist steady states which depend on the coupling strength K. We therefore assume that there exists an attracting equilibrium (x_a(K), y_a(K)) and at least one saddle equilibrium point, (x_s(K), y_s(K)). The stationary states satisfy

f(x_a) + h_1(x_a, y_a, K) = f(x_s) + h_1(x_s, y_s, K) = 0,
f(y_a) + h_2(x_a, y_a, K) = f(y_s) + h_2(x_s, y_s, K) = 0.

The stability of an equilibrium state is given by the linear variational equations of motion about that state,

Ẋ(t) = M(x̄, ȳ, K) X(t),

where (x̄, ȳ) denotes either (x_a, y_a) or (x_s, y_s), and

M(x̄, ȳ, K) = [ ∂f(x̄)/∂x + ∂h_1(x̄,ȳ,K)/∂x  ∂h_1(x̄,ȳ,K)/∂y; ∂h_2(x̄,ȳ,K)/∂x  ∂f(ȳ)/∂y + ∂h_2(x̄,ȳ,K)/∂y ].

The matrix M(x̄, ȳ, K) evaluated at the saddle point is assumed to have only one positive real eigenvalue (associated with an unstable direction in the space of dynamical variables), while the rest of the eigenvalue spectrum lies in the left half of the complex plane. In particular, we assume that for all values of interest K, the saddle point lies on the basin boundary of the attractor.
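Before turning to the escape problem, it is useful to have a direct simulation of Eqs. (<ref>) in hand. The sketch below (Python) is a minimal Euler-Maruyama integrator for the scalar, additive-noise case (G_1 = G_2 = 1); the drift and coupling functions are user-supplied callables, and all numerical values in the usage example are illustrative assumptions.

```python
import numpy as np

def simulate(f, h1, h2, K, eps, D, x0, y0, T=100.0, dt=1e-3, seed=0):
    """Euler-Maruyama sketch of the coupled SDEs for scalar x, y with
    additive noise (G1 = G2 = 1).  With the convention
    <xi(t) xi(t')> = 2 D delta(t - t'), increments have variance 2*D*dt."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1); y = np.empty(n + 1)
    x[0], y[0] = x0, y0
    for i in range(n):
        dwx, dwy = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=2)
        x[i + 1] = x[i] + (f(x[i]) + h1(x[i], y[i], K)) * dt + eps * dwx
        y[i + 1] = y[i] + (f(y[i]) + h2(x[i], y[i], K)) * dt + dwy
    return x, y

# Example usage with a spring-like linear coupling (an assumed choice):
f  = lambda u: u - u**3
h1 = lambda x, y, K: -K * (x - y)
h2 = lambda x, y, K: +K * (x - y)
x, y = simulate(f, h1, h2, K=0.1, eps=0.5, D=0.05, x0=1.0, y0=-1.0)
```

Note that the weak noise on x enters both directly (scaled by ϵ) and indirectly, through the coupling to the noisy y dynamics.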
The generic switching scenario occurs for arbitrarily small noise when the dynamics in one basin approach the stable manifold of the saddle point, which guides the dynamics to the saddle. Once in the neighborhood of the saddle point, noise may cause a switch from one basin to another along the direction of the unstable manifold associated with the unstable eigenvalue.

When noise is added to the system, we wish to compute the probability of escaping from the basin of attraction of the attractor (x_a, y_a). The asymmetry of the noise between the virtual and real agents is controlled by ϵ, which scales the noise intensity of x. Computing the probability of escape in the small-noise limit implies that we compute the most likely paths which cross the basin boundary of the attractor at the saddle point (x_s, y_s). In describing how noise bleeds into the virtual world from the real world, we want to measure noise-induced changes that are large in the dynamics of the state x while y remains approximately stationary; i.e., y does not change as much as x. Thus, in the presence of noise, we are interested in describing how the most likely path develops when x(t) moves away from x_a(K) much more than y(t) moves away from y_a(K). In order to focus on this case, we assume that, when noise and coupling are both zero, x_a(0) lies in a different part of phase space than y_a(0). Correspondingly, we also assume that ||x_a(K) - x_s(K)|| ≫ ||y_a(K) - y_s(K)||, with ||y_a(K) - y_s(K)|| ≪ 1, given that the equilibria depend smoothly on the coupling strength K. We note, however, that these assumptions do not affect the general theory, and our methods could be equally well applied if they were dropped.

§.§ The Variational Formulation of Noise-Induced Escape

For a given coupling K, we wish to determine the path with the maximum probability of noise-induced switching from the initial attracting state (x_a(K), y_a(K)) to another (x_b(K), y_b(K)), where the initial and final states are equilibria of the noise-free versions of Eqs. (<ref>). Each attractor possesses its own basin of attraction, and therefore, on average, small noise is expected to induce small fluctuations about the stable equilibria. However, sometimes the noise will organize itself in such a way that a large fluctuation occurs, allowing escape over the effective energy barrier away from the stable equilibrium. If the fluctuation is sufficiently large to bring the system state close to the saddle point, there is a possibility of switching. Near the saddle point, depending on the sign of the projection of the local trajectory onto the unstable manifold of the positive eigenvalue, the system will approach one or the other attractor. Switching occurs once the trajectory enters a different basin of attraction from the one where it started. We assume the noise intensity D is much smaller than the effective barrier height, and that the scaling on the noise input ϵ satisfies 0 ≤ ϵ < 1. Note that the noise terms (ξ_x, ξ_y) are formally the time derivatives of Brownian motions, sometimes referred to as white noise <cit.>. For D sufficiently small, we make the ansatz that the probability distribution of observing such a large fluctuation scales exponentially as the inverse of D <cit.>, 𝒫_x = e^{-R/D}, where

R(K) = min_{(x, y, ξ_x, ξ_y, λ_1, λ_2)} ℛ(x, y, ξ_x, ξ_y, λ_1, λ_2; K),

and

ℛ(x, y, ξ_x, ξ_y, λ_1, λ_2; K) = ℛ_{ξ_x}[ξ_x(t)] + ℛ_{ξ_y}[ξ_y(t)]
 + ∫_{-∞}^{∞} dt λ_1(t)·[ẋ(t) - f(x(t)) - h_1(x(t), y(t), K) - ϵ G_1(x(t)) ξ_x(t)]
 + ∫_{-∞}^{∞} dt λ_2(t)·[ẏ(t) - f(y(t)) - h_2(x(t), y(t), K) - G_2(y(t)) ξ_y(t)].
We will see later that the Lagrange multipliers, λ_1 and λ_2, also correspond to the conjugate momenta of the equivalent Hamilton-Jacobi formulation of this problem. [The vector multiplication here is assumed to be an inner product.] As in classical mechanics, the exponent R of Eq. (<ref>) is called the action, and it corresponds to the minimizer of the action in the Hamilton-Jacobi formulation, which occurs along the optimal path <cit.>. This path minimizes the integral in Eq. (<ref>) and is found by setting the variations along the path, δℛ, to zero. The transition-rate exponent is proportional to the action R. When computing the action, the boundary conditions are important, especially since in general they depend on the parameters of the problem. Therefore, we suppose that the dynamics starts near the attractor (x_a, y_a). Small fluctuations will on average remain in the basin of the attractor until, at some point in time, the dynamics hits the saddle point, (x_s, y_s). Thus, we have the boundary conditions

lim_{t→-∞} (x(t), y(t)) = (x_a(K), y_a(K)),
lim_{t→∞} (x(t), y(t)) = (x_s(K), y_s(K)).

To examine the structure of the Hamiltonian governing the large fluctuations, we take the variational derivative of ℛ(x, y, ξ_x, ξ_y, λ_1, λ_2; K) with respect to the noise sources ξ_i (where i = x, y). Setting the derivative equal to zero gives

ξ_x = 2ϵ G_1(x) λ_1,
ξ_y = 2 G_2(y) λ_2.

The full set of equations of motion is then derived by taking the variational derivatives with respect to the state variables and their corresponding momenta:

ẋ = f(x) + h_1(x, y, K) + 2ϵ^2 G_1^2(x) λ_1,
λ̇_1 = -ϵ^2 [G_1(x) ∂G_1(x)/∂x λ_1]λ_1 - [∂(f(x) + h_1(x, y, K))/∂x] λ_1 - [∂h_2(x, y, K)/∂x] λ_2,
ẏ = f(y) + h_2(x, y, K) + 2 G_2^2(y) λ_2,
λ̇_2 = -[G_2(y) ∂G_2(y)/∂y λ_2]λ_2 - [∂(f(y) + h_2(x, y, K))/∂y] λ_2 - [∂h_1(x, y, K)/∂y] λ_1.

The full Hamiltonian is derived by substituting the ansatz of Eq. (<ref>) into the appropriate Fokker-Planck equation and dropping terms of order higher than 1/D, which results in a Hamilton-Jacobi equation with Hamiltonian

H = [ϵ^2 G_1^2(x) λ_1]·λ_1 + [G_2^2(y) λ_2]·λ_2 + λ_1·[f(x) + h_1(x, y, K)] + λ_2·[f(y) + h_2(x, y, K)].

One immediate observation from Eq. (<ref>) is that the set of conjugate variables (λ_1, λ_2) ≡ (0, 0) is an invariant manifold. Moreover, for the system to remain at the equilibria in Eq. (<ref>), the conjugate variables must vanish. (Here, we assume that the multiplicative noise functions do not vanish at the equilibria.) Although the action appears in the exponent of the distribution, the conjugate momenta act as an effective control force that pushes the system along a most likely path from the attractor to the saddle point. From Eqs. (<ref>) and (<ref>), it is therefore clear that the noise must be related to a large fluctuation governed by the conjugate variables of the system. Since at the equilibrium points of the attractor or saddle the noise does not contribute to the exponent of the distribution, we assume that the remaining boundary conditions for the λ_i at the equilibrium points are

lim_{t→±∞} (λ_1(t), λ_2(t)) = (0, 0).

Locating and computing the most likely, or optimal, path for basin escape revolves around solving the two-point boundary value problem consisting of Eqs. (<ref>) with the boundary conditions of Eqs. (<ref>) and (<ref>). However, one must check the local stability of the equilibria at the boundaries. It can be shown that if the attractor and saddle points of the deterministic system are hyperbolic, then the full set of conservative equations of motion will have saddle points at the boundaries.
That is, both the deterministic attractors and saddles will appear as saddles in the Hamiltonian formulation. A fairly general proof in finite dimensions, as well as a useful general method for computing the optimal path, can be found in <cit.>. Finally, we note that once we have the optimal path satisfying the variational problem above, the switching rate from one attractor to the other is given, to logarithmic accuracy, by

W = C exp(-R/D),

where C is a constant and R is given by Eq. (<ref>).

§.§ Perturbation of the Variation

Because the optimal-path equations are in general nonlinear, solving them analytically is unrealistic. However, in the case where the coupling constant K is small, we can use perturbation theory, assuming that the variational trajectories remain close to the corresponding trajectories for K = 0. Even though the perturbation terms are small, they affect the exponent of the distribution, and since the action is divided by a small intensity D, even a small change in the action can have a large effect on the density and the mean switching times. Assuming the terms in the vector field of Eq. (<ref>) are sufficiently smooth, we suppose the coupling terms (h_1(x, y, K), h_2(x, y, K)) may be expanded in terms of K as

h_1(x, y, K) = K ĥ_1(x, y) + O(K^2),
h_2(x, y, K) = K ĥ_2(x, y) + O(K^2).

Using Eq. (<ref>), we can write, to first order in K,

ℛ(x, ξ_x, y, ξ_y, λ_1, λ_2; K) = ℛ_0(x, ξ_x, y, ξ_y, λ_1, λ_2) + K ℛ_1(x, ξ_x, y, ξ_y, λ_1, λ_2),

where

ℛ_0(x, ξ_x, y, ξ_y, λ_1, λ_2) = ℛ_{ξ_x}[ξ_x(t)] + ℛ_{ξ_y}[ξ_y(t)]
 + ∫_{-∞}^{∞} dt λ_1(t)·[ẋ(t) - f(x(t)) - ϵ G_1(x(t)) ξ_x(t)]
 + ∫_{-∞}^{∞} dt λ_2(t)·[ẏ(t) - f(y(t)) - G_2(y(t)) ξ_y(t)]

and

ℛ_1(x, ξ_x, y, ξ_y, λ_1, λ_2) = -∫_{-∞}^{∞} dt [λ_1(t)·ĥ_1(x(t), y(t)) + λ_2(t)·ĥ_2(x(t), y(t))].

The first-order correction to the action can be found by first finding the solution (x^0, ξ_x^0, y^0, ξ_y^0, λ_1^0, λ_2^0) that minimizes ℛ_0(x, ξ_x, y, ξ_y, λ_1, λ_2). We then explicitly evaluate the integral in Eq. (<ref>) at the zeroth-order minimizer. We note that higher-order terms may be found by applying standard perturbation theory to the equations of motion and boundary conditions directly, or we may use the general distribution theory <cit.> to get the next order in K, which we do below.

The Hamiltonian for the variation of the action ℛ_0 of the uncoupled system is given by

H^0 = ϵ^2 [G_1^2(x^0) λ_1^0]·λ_1^0 + [G_2^2(y^0) λ_2^0]·λ_2^0 + λ_1^0·f(x^0) + λ_2^0·f(y^0).

The structure of Eq. (<ref>) is such that the total action is just the sum of the actions of the x and y variables, since K = 0. In addition, the initial and final states for K = 0 are given by the attractor (x_a^0, y_a^0) and saddle (x_s^0, y_s^0). Since we are interested in moving x through a large fluctuation while holding y approximately constant, when uncoupled the initial states satisfy x_a^0 ≠ y_a^0, while y_s^0 = y_a^0, the latter assuming no movement in y.

The effect of the noise-reducing parameter on the action is now evident from the equations of motion derived from Eq. (<ref>). The total action is just the sum of the x and y actions, R^0[x] and R^0[y], respectively. Moreover, since there is no movement in y, R^0[y] ≡ 0. Assuming the multiplicative noise term is non-singular, the resulting uncoupled action is therefore given by

R^0[x] = -1/ϵ^2 ∫_{x_a}^{x_s} [G_1^2]^{-1} f(x)·dx.

The expected effect of the parameter ϵ is evident, in that the action scales as 1/ϵ^2. Together with the fact that the action appears in the exponent of the distribution, this means the exponent scales as 1/(ϵ^2 D), which will make the probability of transitioning through a large fluctuation, conditioned on y staying approximately constant, a very rare event.
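For additive noise (G_1 the identity), the quadrature above is elementary to evaluate. As a check, the following minimal sketch (Python) computes R^0[x] for the double-well drift f(x) = x - x^3 used in the example of the next section; the choice of drift, end points, and ϵ are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import quad

# Uncoupled action R^0 = -(1/eps^2) * integral of f(x) from x_a to x_s,
# for additive noise (G1 = identity).  Illustrative choices below.
eps = 0.5
f = lambda x: x - x**3            # double-well drift of the next section
I, _ = quad(f, 1.0, 0.0)          # from the attractor x_a = 1 to the saddle x_s = 0
R0 = -I / eps**2
print(R0, 1.0 / (4.0 * eps**2))   # both evaluate to 1.0 for eps = 0.5
```

The agreement of the two printed numbers anticipates the result R^0 = 1/(4ϵ^2) derived for the model example below.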
Notice that we can also consider how y switches while keeping x approximately constant, by changing the boundary conditions. In this case the switching rate is much higher, since the exponent of the switching rate scales as 1/D. To see how such a rare event explicitly comes about, we consider the following generic bi-stable situation.

§ A MODEL EXAMPLE OF MIXED-REALITY NOISE-INDUCED PERTURBATIONS

For clarity, we now give an example of noise-induced switching in a generic coupled system in which the individual components are affected by different scales of noise. Consider two coupled particles interacting in a double-well potential U(x). One particle represents a simulated robotic agent, while the other represents the real-world robot that interacts with the simulation. The two-particle system is used because it is sufficiently complex to illustrate our argument, while remaining simple enough to be understood analytically. Our approach follows the general theory of switching in the previous section, but for purposes of analysis we consider the following symmetric double-well potential:

U(x) = x^4/4 - x^2/2.

In the absence of coupling, the resulting motion of a single particle is described by dx/dt = f(x) = -dU(x)/dx. Now suppose that the (x, y) particles are coupled through a spring potential <cit.>, and that white Gaussian noise ξ_x, ξ_y acts on each particle independently. Let x and y denote the positions of particles 1 and 2, respectively. Their equations of motion are then

ẋ = f(x) - K(x - y) + ϵ ξ_x,
ẏ = f(y) + K(x - y) + ξ_y.

We assume that E[ξ_i(t) ξ_j(s)] = 2D δ(t - s) δ_ij for i, j ∈ {x, y}, where D ≪ 1 is the noise intensity and ϵ satisfies the hypotheses of the previous section.

§.§ The deterministic picture

Consider the noise-free system obtained by setting ξ_x ≡ 0, ξ_y ≡ 0 in (<ref>). The system has an effective potential V(x, y; K) given by

V(x, y; K) = -x^2/2 + x^4/4 - y^2/2 + y^4/4 + (K/2)(x - y)^2.

The topology of the equilibria for K = 0.1 is pictured in Fig. <ref>. The system has stable equilibria at (x, y) = (-1, -1) and (1, 1). The equilibrium solution (x, y) = (0, 0) is unstable for K < 1/2 and a saddle point for K > 1/2. For K ≤ 1/2, the symmetric configuration about 0, with (x, y) = (±√(1-2K), ∓√(1-2K)), is stable for K ∈ [0, 1/3) and a saddle for K ∈ (1/3, 1/2]. As K → 1/2^-, this solution approaches the unstable equilibrium at (0, 0). The solutions collide at K = 1/2, resulting in a saddle point at (0, 0). The system has four additional equilibria, defined by

(x, y) = (ζ, ζ(ζ^2 + K - 1)/K),

where ζ is a root of ζ^4 + (K-1)ζ^2 + K^2 = 0. Solutions exist for K ∈ [0, 1/3]; the corresponding equilibria are saddle points. A plot of the equilibria of this system for different K is shown in Fig. <ref>.

§.§ Switching

When noise is added to the system, it is possible to observe noise-induced switching between the stable equilibria of the noise-free system. Here we derive the most likely noise-induced switching paths starting from the stable symmetric configuration in which the particles are located in separate basins; x then experiences a large fluctuation and transitions to the basin occupied by y. In the small-noise limit, the most likely path passes through a saddle point of the noise-free system.
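The saddle through which this path passes is easily located numerically from the quartic for ζ. A minimal sketch (Python; the value of K is an assumption for illustration):

```python
import numpy as np

# Roots of zeta**4 + (K - 1)*zeta**2 + K**2 = 0 give the four saddles
# (x, y) = (zeta, zeta*(zeta**2 + K - 1)/K); real roots require K <= 1/3.
K = 0.1
zetas = [r.real for r in np.roots([1.0, 0.0, K - 1.0, 0.0, K**2])
         if abs(r.imag) < 1e-9]
zeta = min(z for z in zetas if z > 0)      # branch relevant to x switching
x_s, y_s = zeta, zeta * (zeta**2 + K - 1.0) / K
print(x_s, y_s)   # approx. (0.106, -0.943) for K = 0.1
```

Note that y_s stays within O(K) of -1 while x_s sits near the barrier top, consistent with the switching scenario in which y remains approximately stationary.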
In the following analysis, we therefore compute the optimal switching path from the initial system configuration to the saddle point; the remaining transition from the saddle to the final stable configuration occurs much more rapidly, since it is dominated by the deterministic dynamics.

For sufficiently small noise intensity D, the switching dynamics can be described using the Hamiltonian formulation of Eq. (<ref>), where we extend the system to four dimensions by adding the conjugate momenta (λ_1 and λ_2) and set the multiplicative noise terms to the identity:

ẋ = f(x) - K(x - y) + 2ϵ^2 λ_1,
ẏ = f(y) + K(x - y) + 2λ_2,
λ̇_1 = -(f'(x) - K)λ_1 - Kλ_2,
λ̇_2 = -(f'(y) - K)λ_2 - Kλ_1,

with corresponding Hamiltonian

H(x, y, λ_1, λ_2) = [f(x) - K(x - y)]λ_1 + [f(y) + K(x - y)]λ_2 + λ_2^2 + ϵ^2 λ_1^2.

Note that H(x^*, y^*, 0, 0) = 0 for all (x^*, y^*) in the set of equilibria of (<ref>). Since H is time-invariant, optimal switching paths between equilibria are required to satisfy a two-point boundary value problem on the zero level set H = 0 in order to compute the action.

We use the numerical approach described in <cit.> to compute the optimal path starting at (x_a, y_a) = (√(1-2K), -√(1-2K)) for t → -∞ and passing through the saddle point given by (x_s, y_s) = (K/ζ, -ζ) ≈ (K + K^2/2, -1 + K/2 + 5K^2/8) as t → ∞, where ζ is the root of ζ^4 + (K-1)ζ^2 + K^2 = 0 with 1/√3 < ζ < 1, and K ≪ 1. An example of such a path is shown in Fig. <ref>.

In Eqs. (<ref>), consider the limit K → 0. The particle motions are uncoupled, and the situation is equivalent to a single-particle switching problem. In this case, it is possible to find an explicit analytic solution in time and to make use of the general perturbation theory. From Eq. (<ref>), we know that for non-zero ϵ the zeroth-order term of the action scales inversely with ϵ^2, and in fact is given by

R^0 = 1/(4ϵ^2),

where we have used the fact that, from the Hamiltonian, the optimal path when K = 0 is given explicitly by λ_1^0 = -f(x^0)/ϵ^2. To get the first-order corrections, we need the solution of the two-point boundary value problem along the zeroth-order optimal path as a function of time:

x^0(t) = 1/√(1 + e^{2t}),
y^0(t) = -1,
λ_1^0(t) = -e^{2t}/[(1 + e^{2t})^{3/2} ϵ^2],
λ_2^0(t) = 0.

Notice that as t → ±∞, the boundary conditions for (x^0(t), λ_1^0(t)) are satisfied while (y^0 ≡ -1, λ_2^0 = 0) are held constant: as t → -∞, x^0(t) → x_a^0 = 1 and λ_1^0(t) → 0; as t → ∞, x^0(t) → x_s^0 = 0 and λ_1^0(t) → 0. Using the zeroth-order time series in the first-order expression for the action gives, to linear order in K,

R = 1/(4ϵ^2) - 3K/(2ϵ^2).

§.§ Second-order effects

We can obtain the second-order effects of the coupling strength K on the action by considering the potential function of Eq. (<ref>) and using the general results for the probability of escape under Gaussian noise in <cit.>. The approach here, however, is problem-specific. We choose to formally examine the Hamiltonian in Eq. (<ref>) and notice that y and its conjugate momentum remain approximately near the attractor. Therefore, we use the asymptotic expressions for the attractor and saddle, in the limits of Eq. (<ref>):

ℛ_1 = -∫_{-∞}^{∞} dt [λ_1(t)·ĥ_1(x(t), y(t)) + λ_2(t)·ĥ_2(x(t), y(t))]
 = -∫_{-∞}^{∞} dt [f(x_0(t))/ϵ^2](x_0(t) - y_0(t))
 = 1/ϵ^2 ∫_{x_a(K)}^{x_s(K)} dx_0 (x_0 + 1).

Using the asymptotic expressions for the attractor and saddle for x_0, expanding for small K, and collecting terms, we have

R ≈ 1/(4ϵ^2) - 3K/(2ϵ^2) + 2K^2/ϵ^2.
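The O(K) coefficient can be verified directly from the explicit zeroth-order path. A short numerical check (Python; the value of ϵ is an assumption of the sketch):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit   # numerically stable logistic function

# Check that R1 = -(1/eps^2) * int f(x0(t)) (x0(t) + 1) dt = -3/(2 eps^2),
# using x0(t) = 1/sqrt(1 + exp(2t)) = sqrt(expit(-2t)) and f(x) = x - x**3.
eps = 0.5
x0 = lambda t: np.sqrt(expit(-2.0 * t))
f  = lambda x: x - x**3
I, _ = quad(lambda t: f(x0(t)) * (x0(t) + 1.0), -np.inf, np.inf)
print(-I / eps**2, -1.5 / eps**2)   # both ~ -6.0 for eps = 0.5
```

The quadrature reproduces the coefficient -3/(2ϵ^2) because, along the zeroth-order path, ẋ^0 = -f(x^0), so the time integral collapses to the spatial integral of (x_0 + 1) from x_a = 1 to x_s = 0.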
An example of the optimal path projections is given in Fig. <ref> for moderate noise reduction (ϵ = 0.5) and small coupling K. Notice that in the figure, (x(t), y(t)) spend most of their time near the equilibria specified at the boundaries. In addition, x(t) traverses a distance of order unity when it switches from the attractor to the saddle point, while deviations of y(t) from its equilibrium position are only of order K. Therefore, even though the scaled reduction of the noise parameter is small, the noise transmitted to x has a very strong effect through the coupling. Using the theory for the action, Fig. <ref> shows how it scales as a function of K when ϵ = 0.5. Along with the numerically computed action are the results of the asymptotic analysis for small coupling using Eq. (<ref>). Notice that for K < 0.2 the agreement is good, and it improves as K gets smaller.

One of the interesting facets of the problem occurs when there is noise only on the y component. This situation occurs as ϵ approaches zero. Although asymptotically the action is seen to approach infinity as ϵ approaches zero, since the system is coupled it is possible to compute an optimal path conditioned on large fluctuations occurring only in x. Using the results for finite ϵ as an initial guess, we use continuation to decrease ϵ to 0 and obtain the optimal path for switching in the coupled system with noise acting only on particle y (see Fig. <ref>), where the coupling constant is relatively small, i.e., K ≈ 0.06. The action along the optimal path is on the order of 10^5, which indicates that switching would be an extremely rare event. In this case we do observe a relatively large change in y, which is on the order of unity rather than K, but y does spend most of its time near its equilibrium.

The interaction of the coupling and the noise-induced forces is key in determining the switching times for the system. Increasing the coupling K by an order of magnitude results in a drastic change in the values of the conjugate variables along the optimal switching path, as shown in Figs. <ref>-<ref>. Here we see that in the system with increased K (Fig. <ref>), both x and y still undergo a change of order unity; however, the values of the conjugate variables λ_1, λ_2 are reduced by several orders of magnitude. The action is therefore much smaller (R ≈ 500, compared to R ≈ 5.7·10^5 with weak coupling), implying a much shorter switching time. The effect of the coupling strength on the action along the optimal path is shown in Fig. <ref> for different values of ϵ. We observe that the range of K for which the asymptotic prediction (K ≪ 1) of the action holds decreases as ϵ decreases.

§.§ Monte Carlo Simulation

We consider the problem of switching in Eqs. (<ref>), where the asymmetry in noise intensity between the two coupled systems is governed by the parameter ϵ. Using the Milstein method for the numerical solution of stochastic differential equations (SDEs), we implement a Monte Carlo scheme to compute the mean time for the x variable to switch while the y variable remains in its basin, given that the particles start in different basins of attraction. That is, we compute the mean time it takes for x to transition from x(0) = x_a(K) to the saddle point, x(T) = x_s(K). We first check the existence of an exponential distribution of times by computing the switching time as a function of the inverse noise intensity for various values of ϵ and K. From the ansatz that the mean switching-time exponent is proportional to R/D, we plot the log of the mean switching time as a function of 1/D, where the slope should be the action evaluated at the parameters ϵ, K. We can see how well the asymptotic theory holds as a function of K by comparing it with the mean switching times in Figs. <ref>-<ref>. For small K, the theory holds up quite well for a range of noise intensities (where the noise intensity is small compared to the barrier height) and over a sufficiently large range of K.
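A minimal version of this Monte Carlo scheme is sketched below (Python). Since the noise here is additive, the Milstein correction vanishes and the scheme reduces to Euler-Maruyama; the parameter values are chosen large enough for a quick demonstration and are assumptions, not the values used for the figures.

```python
import numpy as np

def mean_switch_time(K=0.1, eps=0.8, D=0.12, trials=50, dt=1e-3, seed=1):
    """Mean first time for x to reach the barrier top (x ~ 0, a crude proxy
    for the saddle at x_s) while y stays in its basin; realizations in which
    y crosses first are discarded and redrawn, implementing the conditioning."""
    rng = np.random.default_rng(seed)
    xa = np.sqrt(1.0 - 2.0 * K)
    f = lambda u: u - u**3
    times = []
    while len(times) < trials:
        x, y, t = xa, -xa, 0.0
        while x > 0.0 and y < 0.0:
            dwx, dwy = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=2)
            dx = (f(x) - K * (x - y)) * dt + eps * dwx
            dy = (f(y) + K * (x - y)) * dt + dwy
            x, y, t = x + dx, y + dy, t + dt
        if x <= 0.0:              # x switched first: keep this realization
            times.append(t)
    return np.mean(times)

# The log of this mean time should grow roughly linearly in 1/D with slope R:
print(mean_switch_time())
```

Sweeping D and fitting log of the mean time against 1/D recovers the action R, which is how the simulation results are compared with the asymptotic theory.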
§ DISCUSSION AND CONCLUSION

In this paper, we addressed the problem of how coupling can enhance noise-induced switching in systems with highly asymmetric noise characteristics. As a motivating example, we considered a simple mixed-reality experiment in which a virtual system with very low or zero noise is coupled with a noisy real system. We showed that the effect of coupling was sufficient to cause the virtual dynamics to undergo a large fluctuation while the real dynamics, which was driven by larger-intensity noise, remained quiescent. It was natural to take a variational approach to describing such a large fluctuation, and although the approach was applied here to Gaussian noise, it can be extended to more general noise sources <cit.>. Using the variational approach, we generated a Hamiltonian two-point boundary value problem with asymmetric driving representing the effect of the heterogeneous noise sources. We used the scaling parameter ϵ ∈ [0,1] to quantify the asymmetry in the noise. The solution of the Hamiltonian equations generates the optimal switching path, which in turn can be used to predict mean switching rates. We focused on the case where a large fluctuation occurs in the low-noise system while the system with higher-intensity noise remains near its equilibrium point. Note that, because of the asymmetry in noise levels, the probability that a large transition in the high-noise system occurs before the fluctuation in the low-noise system is very high. Thus, having the low-noise system transition first is an extremely rare event.

We illustrated the general theory using a generic model of a pair of coupled particles in a bi-stable potential. This example was inspired by bistable behaviors predicted for a mixed-reality system of swarming agents <cit.>. We quantified the action as a function of coupling strength over a range of scaling values ϵ, revealing excellent agreement between the asymptotic theory and the numerical solutions for the optimal paths. However, we note that for very small values of ϵ, the asymptotic theory diverges from the true action even for moderate values of K. This is a result of having two small parameters in the approximation; higher-order corrections may need to be included in Eq. (<ref>). We also quantified the mean escape times in terms of the parameters ϵ and D, again with excellent agreement between simulation and theory for the log of the mean switching time.

We computed the paths as the noise scaling parameter ϵ approaches zero, so that the probability of extremely rare events is governed by the coupling strength alone. That is, the noise is transmitted only through the coupling terms. The asymptotic theory predicts a logarithmic exponent for the probability of virtual switching, given that the real dynamics exhibits only small fluctuations, where the exponent scales as 1/ϵ^2. Although extremely rare, the switching is still observed when ϵ → 0 and the coupling K is sufficiently large <cit.>, as we have shown in Fig. <ref>. The physical interpretation of the transmitted-noise-induced large fluctuation is that the coupling acts as an effective force which, together with the effective stochastic momenta, enhances the observation of an extremely rare event. The coupling used in the generic example is similar to the couplings found in many physical systems, including the swarm experiment we described.
Since our theory is generic, it predicts that such transmitted-noise fluctuations should appear in many coupled systems, including mixed-reality situations where the noise intensities are highly skewed.

§ ACKNOWLEDGMENTS

The authors gratefully acknowledge the Office of Naval Research for support under N0001412WX20083, and the support of the NRL Base Research Program under N0001412WX30002. KS was a National Research post-doctoral fellow while performing the research. We acknowledge useful conversations with Ani Hsieh, Luis Mier, and Brandon Lindley about early versions of the research, and we thank Jason Hindes for an initial reading of the manuscript.

§ REFERENCES

[1] C. W. Gardiner, Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences (Springer-Verlag, 2004).
[2] N. G. Van Kampen, Stochastic Processes in Physics and Chemistry (Elsevier, 2007).
[3] M. I. Freidlin and A. D. Wentzell, Random Perturbations of Dynamical Systems (Springer-Verlag, 1984).
[4] C. R. Doering and J. C. Gadoua, Phys. Rev. Lett. 69, 2318 (1992).
[5] M. G. Castellano et al., Phys. Rev. B 54, 15417 (1996).
[6] L. J. Lapidus, D. Enzer, and G. Gabrielse, Phys. Rev. Lett. 83, 899 (1999).
[7] K. Kim et al., Phys. Rev. A 72, 053402 (2005).
[8] I. Siddiqi et al., Phys. Rev. Lett. 94, 027005 (2005).
[9] H. B. Chan and C. Stambaugh, Phys. Rev. Lett. 99, 060601 (2007).
[10] M. I. Dykman and I. B. Schwartz, Phys. Rev. E 86, 031145 (2012).
[11] O. D'Huys, T. Juengling, and W. Kinzel, Phys. Rev. E 90, 032918 (2014).
[12] J. Emenheiser, A. Chapman, M. Posfai, J. P. Crutchfield, M. Mesbahi, and R. M. D'Souza, Chaos 26 (2016).
[13] Q. Huang, C. Xue, and J. Tang, AIP Adv. 6, 015219 (2016).
[14] R. J. Allen, P. B. Warren, and P. R. ten Wolde, Phys. Rev. Lett. 94, 018104 (2005).
[15] B. Barzel and O. Biham, Phys. Rev. E 78, 041919 (2008).
[16] A. Kamenev, B. Meerson, and B. Shklovskii, Phys. Rev. Lett. 101, 268103 (2008).
[17] A. Kamenev and B. Meerson, Phys. Rev. E 77, 061107 (2008).
[18] M. I. Dykman, I. B. Schwartz, and A. S. Landsman, Phys. Rev. Lett. 101, 078101 (2008).
[19] D. K. Wells, W. L. Kath, and A. E. Motter, Phys. Rev. X 5, 031036 (2015).
[20] H. Chen, P. Thill, and J. Cao, J. Chem. Phys. 144 (2016).
[21] J. Hindes and I. B. Schwartz, Phys. Rev. Lett. 117, 028302 (2016).
[22] I. Schwartz, E. Forgoston, S. Bianco, and L. Shaw, J. R. Soc. Interface 8, 1699 (2011).
[23] M. I. Dykman, Phys. Rev. A 42, 2020 (1990).
[24] Q. Le Masne, H. Pothier, N. O. Birge, C. Urbina, and D. Esteve, Phys. Rev. Lett. 102, 067002 (2009).
[25] J. Ankerhold and P. Pechukas, J. Chem. Phys. 111, 4886 (1999).
[26] B. S. Lindley, L. Mier-y-Teran Romero, and I. B. Schwartz, in 2013 American Control Conference (IEEE, 2013), pp. 4587-4591.
[27] E. Earon, T. Barfoot, and G. D'Eleuterio, in IEEE/ASME International Conference on Advanced Intelligent Mechatronics (2001), pp. 1267-1272.
[28] C. Benerjee and N. Deepthi, in International Conference on Robotics, Automation, Control and Embedded Systems (2015), pp. 1-6.
[29] S. Bhattacharya, R. Ghrist, and V. Kumar, Int. J. Robot. Res. 33, 113 (2012).
[30] K. M. Lynch, I. B. Schwartz, P. Yang, and R. A. Freeman, IEEE Trans. Robot. 24, 710 (2008).
[31] W. Wu and F. Zhang, Automatica 47, 2044 (2011).
[32] R. Takano, D. Yamazaki, Y. Ichikawa, K. Hattori, and K. Takadama, in IEEE International Conference on Systems, Man and Cybernetics (2014), pp. 585-590.
[33] H. Al Tair, T. Taha, M. Al-Qutayri, and J. Dias, in International Conference on Information and Communication Technology Research (2015), pp. 210-213.
[34] F. Augugliaro et al., IEEE Control Syst. 34, 46 (2014).
[35] S. R. Ramp et al., Deep Sea Res. Part II 56, 68 (2009).
[36] M. Dorigo et al., IEEE Robot. Autom. Mag. 20, 60 (2013).
[37] K. Szwaykowska, I. B. Schwartz, L. Mier-y-Teran Romero, C. R. Heckman, D. Mox, and M. A. Hsieh, Phys. Rev. E 93, 032307 (2016).
[38] J. Hindes, K. Szwaykowska, and I. B. Schwartz, Phys. Rev. E 94, 032306 (2016).
[39] E. Ben-Jacob, I. Cohen, A. Czirók, T. Vicsek, and D. L. Gutnick, Physica A 238, 181 (1997).
[40] D. S. Calovi, U. Lopez, S. Ngo, C. Sire, H. Chaté, and G. Theraulaz, New J. Phys. 16, 015026 (2014).
[41] E. Carlen, R. Chatelin, P. Degond, and B. Wennberg, Physica D 260, 90 (2013).
[42] M. A. Hsieh, Á. Halász, S. Berman, and V. Kumar, Swarm Intell. 2, 121 (2008).
[43] V. Gintautas and A. W. Hübler, Phys. Rev. E 75, 057201 (2007).
[44] V. Kozlov, S. Vakulenko, and U. Wennergren, Phys. Rev. E 93, 032413 (2016).
[45] M. I. Dykman, Phys. Rev. E 81, 051124 (2010).
[46] R. P. Feynman and A. R. Hibbs, Quantum Mechanics and Path Integrals (McGraw-Hill, New York, 1965).
[47] W. Fleming and R. Rishel, Deterministic and Stochastic Optimal Control (Springer, New York, 1975).
[48] B. S. Lindley and I. B. Schwartz, Physica D 255, 22 (2013).
[49] F. Bouchet and J. Reygner, Ann. Henri Poincaré 17, 3499 (2016).
[50] L. Billings, I. B. Schwartz, M. McCrary, A. N. Korotkov, and M. I. Dykman, Phys. Rev. Lett. 104, 140601 (2010).
[51] B. Meerson, private communication.
http://arxiv.org/abs/1703.09249v2
{ "authors": [ "Ira B. Schwartz", "Klimka Szwaykowska", "Thomas W. Carr" ], "categories": [ "cond-mat.stat-mech" ], "primary_category": "cond-mat.stat-mech", "published": "20170327181904", "title": "Asymmetric noise-induced large fluctuations in coupled systems" }
Cohesion-based Online Actor-Critic Reinforcement Learning for mHealth Intervention

Feiyun Zhu^⋆, Peng Liao, Xinliang Zhu^⋆, Yaowen Yao^⋆, Junzhou Huang^⋆

⋆ Feiyun Zhu, Xinliang Zhu, Yaowen Yao and Junzhou Huang are with the Department of CSE at the University of Texas at Arlington. Feiyun Zhu and Peng Liao are with the Department of Statistics at the University of Michigan.

December 30, 2023
================================================================================

In the wake of the vast population of smart device users worldwide, mobile health (mHealth) technologies hold promise for generating a positive and wide influence on people's health. They are able to provide flexible, affordable and portable health guidance to device users. Current online decision-making methods for mHealth assume that the users are completely heterogeneous. They share no information among users and learn a separate policy for each user. However, the data for each user is very limited in size to support the separate online learning, leading to unstable policies that contain a lot of variance. Moreover, we observe that a user may be similar to some, but not all, users, and that connected users tend to have similar behaviors. In this paper, we propose a network cohesion constrained (actor-critic) Reinforcement Learning (RL) method for mHealth. The goal is to explore how to share information among similar users to better convert the limited user information into sharper learned policies. To the best of our knowledge, this is the first online actor-critic RL method for mHealth, and the first network cohesion constrained (actor-critic) RL method in any application. The network cohesion is important for deriving effective policies. We come up with a novel method to learn the network by using the warm start trajectories, which directly reflect the users' properties. The optimization of our model is difficult and very different from general supervised learning, due to the indirect observation of values. As a contribution, we propose two algorithms for the proposed online RL models. Apart from mHealth, the proposed methods can be easily applied or adapted to other health-related tasks. Extensive experiment results on the HeartSteps dataset demonstrate that, in a variety of parameter settings, the proposed two methods obtain clear improvements over the state-of-the-art methods.

Actor-Critic, Reinforcement Learning, Mobile Health (mHealth) Intervention, Cohesion

§ INTRODUCTION

With billions of smart device[i.e. smartphones and wearable devices, such as the Fitbit Fuelband and Jawbone, etc.] users globally, it is increasingly popular in the scientific community to make use of state-of-the-art artificial intelligence and mobile health technologies, leveraging supercomputers and big data to facilitate the prediction of healthcare tasks <cit.>.
In this paper, the goal of mobile health (mHealth) is to make use of various smart devices as platforms to collect and analyze raw data (weather, location, social activity, stress, etc.). Based on that, the aim is to provide effective interventions that help users change to or adopt healthy behaviors, such as reducing alcohol abuse <cit.> and promoting physical activity <cit.>. Traditional adaptive treatment has restrictions on the time, location and frequency: patients have to visit the doctor's office for treatments. Compared with it, mHealth is more affordable, portable and much more flexible, in the sense that smart devices allow for the real-time collection and analysis of data as well as the in-time delivery of interventions. Thus, mHealth technologies are widely used in lots of health-related tasks, such as physical activity <cit.>, eating disorders <cit.>, alcohol use <cit.>, mental illness <cit.>, and obesity/weight management <cit.>.

Formally, the mHealth intervention is modeled as a sequential decision making (SDM) problem. It aims to learn the optimal policy that determines when, where and how to deliver the intervention <cit.> to best serve users. This is a new research topic that lacks methodological guidance. In 2014, Lei <cit.> made a first attempt to formulate the mHealth intervention as an online actor-critic contextual bandit problem. Lei's method served as a good starting point for the mHealth study. However, this method did not consider the important delayed effect in the SDM: the current action may influence not only the immediate reward but also the next states and, through that, all the subsequent rewards <cit.>. Dr. Murphy <cit.> proposed an average reward based RL to account for the delayed effect in mHealth. However, those two methods rely on ideal assumptions: they assume that all the users are either completely homogeneous or completely heterogeneous. We find the truth lying between those extremes: a user might be similar to some, but not all, users. Their methods thus easily bring in either too much bias or too much variance. Besides, <cit.> is in the batch learning setting, which is different from this paper's focus.

Recently, Dr. Cesa-Bianchi <cit.> proposed a contextual bandit algorithm that considers the network information. It is for recommendation systems, which are very different from the mHealth task. Besides, there are three drawbacks making the method in <cit.> impractical for mHealth: (1) Cesa-Bianchi's method focuses on the bandit algorithm; it doesn't consider the important delayed effect in mHealth. (2) They assume the network information is given beforehand from social information. The given network may not be targeted for the mHealth study, and there is lots of misleading network information for the mHealth study <cit.>. (3) Their method is unable to control the amount of information shared among linked users, which is not flexible for the mHealth study <cit.>.

In this paper, we propose a cohesion-based reinforcement learning method for mHealth and derive two algorithms. It is in an online, actor-critic setting. The aim is to explore how to share information across similar users in order to improve the performance. The main contributions of this paper are summarized as follows: (1) to the best of our knowledge, this is the first online (actor-critic) RL method for mHealth. (2) Current evidence verifies the wide existence of networks among users <cit.>.
We improve the online RL by considering the network cohesion among users. Such improvement makes it, to the best of our knowledge, the first network constrained (actor-critic) RL method. It is able to relieve the tough problem of current online decision-making methods for mHealth by reducing variance at the cost of inducing some bias. Current online RL learns a separate policy for each user; however, there are too few samples to support the separate online learning, which leads to unsatisfactory interventions (policies) for the users. (3) Our method doesn't require a given network cohesion. We propose a method to learn the network intentionally for the mHealth study. It makes use of the warm start trajectories in the online learning, which are expected to represent the users' properties. (4) Compared with <cit.>, the proposed method has a tuning parameter, which allows us to control how much information we should share with similar users. It is worth mentioning that our method may not be limited to mHealth; it can be applied to other health-related tasks. Extensive experiment results on the HeartSteps dataset verify that our method can achieve clear improvements over the Separate-RL.

§ PRELIMINARIES

§.§ Markov Decision Process (MDP)

We assume the mHealth intervention is a Markov Decision Process (MDP) <cit.> that consists of a 5-tuple {𝒮, 𝒜, 𝒫, ℛ, γ}, where 𝒮 is the state space and 𝒜 is the action space. 𝒫: 𝒮×𝒜×𝒮↦[0,1] is the state transition model in which 𝒫(s,a,s') indicates the probability of transiting from one state s to another s' after taking action a; ℛ(s,a,s') is the corresponding immediate reward for such a transition, where ℛ: 𝒮×𝒜×𝒮↦ℝ. For simplicity, the expected immediate reward ℛ(s,a)=𝔼_s'∼𝒫[ℛ(s,a,s')] is assumed to be bounded over the state and action spaces. γ∈[0,1) is the discount factor that reduces the influence of future rewards. To allow for the matrix operators, the state space 𝒮 and action space 𝒜 are assumed to be finite, though very large in mHealth.

The policy of an MDP chooses actions for any state s∈𝒮 in the system <cit.>. There are two types of policies: (1) the deterministic policy π: 𝒮↦𝒜 selects an action directly for the state, and (2) the stochastic policy maps any state s∈𝒮 to π(·|s), a probability distribution over all the possible actions <cit.>. In mHealth, the stochastic policy is preferred for two reasons: (a) current evidence shows that some randomness in the action is likely to draw users' interest, and is thus helpful to reduce the intervention burden/habituation <cit.>; (b) though some deterministic policy is theoretically optimal for the MDP, we do not know where it is in the large state space on the one hand, and the MDP is a simplification of the complex behavioral process on the other; some variation may be helpful to explore the system and search for a desirable policy <cit.>. We consider the parameterized stochastic policy π_θ(a|s), where θ∈ℝ^m is the unknown parameter. Such a policy is interpretable in the sense that we can learn the key features that contribute most to the policy by analyzing the estimated θ, which is important to behavioral scientists for the state (feature) design <cit.>.

In RL, value is a core concept that quantifies the quality of a policy π <cit.>. There are two definitions of values: the state value and the state-action (Q-) value <cit.>. In mHealth, the Q-value is considered because the model (i.e.
state transition and immediate reward) is assumed to be unknown, and the Q-value allows for action selection without knowing the model, while the state value requires the model for the action selection <cit.>. Formally, the Q-value Q^π(s,a)∈ℝ^|𝒮|×|𝒜| measures the total amount of reward an agent can obtain when starting from state s, first choosing action a and then following the policy π. Specifically, the discounted reward is one of the most commonly used value measures: Q^π(s,a)=𝔼_{a_i∼π, s_i∼𝒫}{∑_{i=0}^∞ γ^i r_i | s_0=s, a_0=a}.

The goal of RL is to learn an optimal policy π^* that maximizes the Q-value for all the state-action pairs via interactions with the dynamic system <cit.>. The objective is θ^*=max_θ J(θ), where J(θ)=∑_{s∈𝒮} d_ref(s) ∑_{a∈𝒜} π_θ(a|s) Q^{π_θ}(s,a) and d_ref(s) is the reference distribution of states (e.g. the distribution of initial states); Q^{π_θ} is the value for the policy π_θ. According to (<ref>), we have to learn Q^{π_θ} for all the state-action pairs to determine the objective (<ref>) and then to improve the policy. Thus in this paper we employ the actor-critic algorithm. It alternates between two updating steps until convergence. At each iteration, the critic updating estimates the Q-value function (i.e. policy evaluation, cf. Sections <ref> and <ref>) for the latest policy; the actor updating (i.e. policy improvement, cf. Section <ref>) learns a better policy based on the newly estimated Q-value. Moreover, the actor-critic algorithm has the desirable properties of quick convergence with low variance and of learning continuous policies <cit.>.

§.§ Bellman Equation and Q-value Estimation

It is well known that, due to the Markovian property, the Q-value satisfies the linear Bellman equation <cit.> for any policy π: Q^π(s,a)=ℛ(s,a)+γ∑_{s'∈𝒮} 𝒫(s,a,s') ∑_{a'∈𝒜} π(a'|s') Q^π(s',a'). It has the matrix form 𝒬^π=ℛ+γ𝒫Π_π𝒬^π, where 𝒬^π and ℛ are vectors, both with |𝒮||𝒜| elements; 𝒫∈ℝ^{|𝒮||𝒜|×|𝒮|} is the stochastic state transition matrix, in which 𝒫((s,a),s')=𝒫(s,a,s'); Π_π∈ℝ^{|𝒮|×|𝒮||𝒜|} is the stochastic policy matrix, where Π_π(s,(s,a))=π(a|s) <cit.>. Once both the reward and the state transition models are given <cit.>, it is easy to obtain the analytical solution 𝒬^π=(𝐈−γ𝒫Π_π)^{-1}ℛ.

However, two factors make it impossible to obtain this analytical solution for the Q-value estimation: (a) in mHealth, both the reward model ℛ and the state transition model 𝒫 are unknown; (b) the state space in mHealth is usually very large or even infinite, which makes it impossible to directly learn the Q-value, due to the lack of observations for sharper learning and the storage requirement of O(|𝒮||𝒜|) just to store the Q-value table. We resolve these problems via parameterized function approximation, which assumes that Q^π lies in a low dimensional space: Q^π≈Q_𝐰=𝐰^⊺𝐱(s,a), where 𝐰∈ℝ^u is the unknown variable and 𝐱(s,a) is a feature processing step that combines information in the state and action. We then learn the value Q_𝐰 from observations via a supervised learning paradigm, which, however, is much more challenging than general supervised learning since the Q-value is not directly observed <cit.>. As a direct solution, the Monte Carlo (MC) method draws very deep trajectories to obtain observations of the actual Q-value. Although MC can provide an unbiased estimation of Q_𝐰, it is not suitable for mHealth since MC can't learn from incomplete trajectories <cit.>. Such a scheme requires massive sampling from users, which is very labor-intensive and expensive in time.
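Before moving to temporal-difference methods, it is worth making the matrix Bellman equation concrete. The following minimal sketch (Python/NumPy; the two-state, two-action MDP and all of its numbers are invented purely for illustration) builds 𝒫, Π_π and ℛ for a toy problem where the model is known, and verifies the analytical solution 𝒬^π=(𝐈−γ𝒫Π_π)^{-1}ℛ:

```python
import numpy as np

# Toy MDP: 2 states, 2 actions (all numbers are illustrative only).
S, A, gamma = 2, 2, 0.9
# P has one row per (s, a) pair and one column per next state s'.
P = np.array([[0.8, 0.2],   # (s=0, a=0)
              [0.1, 0.9],   # (s=0, a=1)
              [0.5, 0.5],   # (s=1, a=0)
              [0.3, 0.7]])  # (s=1, a=1)
R = np.array([1.0, 0.0, 0.5, 2.0])        # expected reward per (s, a)

# Stochastic policy matrix Pi: Pi[s, (s, a)] = pi(a | s), zero elsewhere.
pi = np.array([[0.6, 0.4], [0.2, 0.8]])   # pi(a | s)
Pi = np.zeros((S, S * A))
for s in range(S):
    for a in range(A):
        Pi[s, s * A + a] = pi[s, a]

# Closed-form solution of the matrix Bellman equation.
Q = np.linalg.solve(np.eye(S * A) - gamma * P @ Pi, R)

# Sanity check: Q must reproduce itself through the Bellman operator.
assert np.allclose(Q, R + gamma * P @ Pi @ Q)
print(Q)
```

With |𝒮||𝒜| unknowns, this direct solve costs O((|𝒮||𝒜|)^3) time and O((|𝒮||𝒜|)^2) memory, which is exactly why the linear approximation Q_𝐰=𝐰^⊺𝐱(s,a) is needed once the state space grows.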
As a central idea of RL <cit.>, temporal-difference (TD) learning is able to make use of the Bellman equation (<ref>) and to learn the value from incomplete trajectories. The learned result of TD has the property of low variance.

§.§ The critic updating: Least-Squares TD for Q-value (LSTDQ) Estimation

In mHealth, though the data for all users is abundant, the data for each user is limited in size. We employ least-squares TD for the Q-value (LSTDQ) estimation, due to its advantage of efficient use of samples over the pure temporal-difference algorithms <cit.>. The goal of LSTDQ is to learn a Q_𝐰 that approximately satisfies the Bellman equation (<ref>), by minimizing the TD error <cit.> as 𝐰=f(𝐰)=min_{𝐮∈ℝ^u} ‖Φ^⊺𝐮−(ℛ+γ𝒫Π_πΦ^⊺𝐰)‖_D^2, where 𝐰=f(𝐰) is a fixed point problem and f(𝐰) is a function of 𝐰; Φ is a designed matrix consisting of all the state and action pairs in the MDP; D describes the distribution over the state and action pairs.

Since the state transition 𝒫 is unknown and Φ is too large to form in mHealth, we cannot directly solve (<ref>). Instead, we have to make use of the trajectories collected from N users, i.e. 𝒟={𝒟_n}_{n=1}^N, where 𝒟_n={𝒰_i=(s_i,a_i,r_i,s_i') | i=0,⋯,t} summarizes all the t+1 tuples for the n-th user and 𝒰_i is the i-th tuple in 𝒟_n.

Current online contextual bandit (i.e. a special RL with γ=0) methods for mHealth assume that all users are completely heterogeneous. They share no information and run a separate algorithm for each user <cit.>. Following this idea, we extend <cit.> to the separate RL setting. The objective for the n-th user is defined as 𝐰_n=f(𝐰_n)=min_{𝐮_n} ∑_{𝒰_i∈𝒟_n} ‖𝐱_i^⊺𝐮_n−(r_i+γ𝐲_i^⊺𝐰_n)‖_2^2 for n∈{1,⋯,N}, where 𝐱_i=𝐱(s_i,a_i) is the value feature at time i and 𝐲_i=∑_{a'∈𝒜} 𝐱(s_i',a')π_θ(a'|s_i') is the value feature at the next time point. For the sake of easy derivation, we define the following matrices to store the actual observations:

𝐗_n=[𝐱(s_1,a_1), 𝐱(s_2,a_2), ⋯, 𝐱(s_t,a_t)]∈ℝ^{u×t},
𝐘_n=[𝐲(s_1';θ_n), ⋯, 𝐲(s_t';θ_n)]∈ℝ^{u×t},
𝐑_n=[r_1, r_2, ⋯, r_t]^⊺∈ℝ^t,

where u is the length of the sample feature for the Q-value approximation, t is the current time point in the online RL learning procedure (i.e. the current trajectory length), 𝐲(s_i';θ_n)=∑_{a'∈𝒜} 𝐱(s_i',a')π_{θ_n}(a'|s_i'), and π_{θ_n}(a|s) is the policy for the n-th user. Let 𝐑=[𝐑_1,⋯,𝐑_N]∈ℝ^{t×N} store the rewards of all N users at all the t time points. To prevent overfitting when t is small at the beginning of the online RL learning, an ℓ_2 norm based constraint is added to the objective as follows: 𝐰_n=f(𝐰_n)=min_{𝐮_n} ‖𝐗_n^⊺𝐮_n−(𝐑_n+γ𝐘_n^⊺𝐰_n)‖_2^2+ζ_c‖𝐮_n‖_2^2 for n∈{1,⋯,N}. The LSTDQ provides a closed-form solution 𝐰_{θ_n}=[𝐗_n(𝐗_n−γ𝐘_n)^⊺+ζ_c𝐈]^{-1}𝐗_n𝐑_n, for n∈{1,⋯,N}, where 𝐰_{θ_n} is a function of the policy parameter θ_n.

§.§ The actor updating for policy improvement

In mHealth, the reference distribution of states d_ref(s) is unknown and hard to estimate due to the lack of samples. We set d_ref(s) to the empirical distribution of states. Accordingly, the observations in the trajectory, i.e. 𝒟_n, are used to form the objective for the actor updating: θ_n=max_{θ_n} Ĵ(θ_n), where Ĵ(θ_n)=(1/|𝒟_n|) ∑_{s_i∈𝒟_n} ∑_{a∈𝒜} Q(s_i,a;𝐰_{θ_n})π_{θ_n}(a|s_i) − (ζ_a/2)‖θ_n‖_2^2 for n∈{1,⋯,N}. Here ‖θ_n‖_2^2 is the constraint that makes (<ref>) a well-posed problem and ζ_a is the tuning parameter that controls the strength of the smooth penalization <cit.>.
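Before moving on, note that the separate LSTDQ critic update above is just one regularized linear solve per user. The following minimal sketch (Python/NumPy; the features, rewards, and trajectory length are random placeholders, not HeartSteps data) computes 𝐰_{θ_n}=[𝐗_n(𝐗_n−γ𝐘_n)^⊺+ζ_c𝐈]^{-1}𝐗_n𝐑_n from one user's tuples:

```python
import numpy as np

def lstdq_separate(X, Y, R, gamma=0.9, zeta_c=0.1):
    """Separate-RL critic update for one user.

    X : u x t matrix of features x(s_i, a_i)
    Y : u x t matrix of expected next features
        y_i = sum_a' x(s'_i, a') * pi_theta(a' | s'_i)
    R : length-t vector of rewards r_i
    """
    u = X.shape[0]
    M = X @ (X - gamma * Y).T + zeta_c * np.eye(u)
    return np.linalg.solve(M, X @ R)

# Illustrative random trajectory (u = 4 features, t = 10 tuples).
rng = np.random.default_rng(0)
u, t = 4, 10
X = rng.normal(size=(u, t))
Y = rng.normal(size=(u, t))
R = rng.normal(size=t)
w = lstdq_separate(X, Y, R)
print(w)   # estimated Q-value weights for this user
```

The ℓ_2 term ζ_c𝐈 is what keeps this solve well-posed when t is small at the start of online learning, which is precisely the regime the cohesion constraint of the next section targets.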
We use Ĵ(θ_n) rather than J(θ_n) in (<ref>) to indicate that the objective function for the actor updating is defined based on the Q-value estimation. Since the critic updating results in a closed-form solution (<ref>), we could substitute the expression (<ref>) into the objective for the actor updating (<ref>). That, however, leads to a very complex optimization problem. In the case of a large feature space, one can recursively update 𝐰_θ and θ_n to reduce the computational cost.

§ NETWORK COHESION BASED ONLINE ACTOR-CRITIC RL

It is a famous phenomenon observed in lots of social behavior studies <cit.> that people are widely connected in a network and that linked users tend to have similar behaviors. Advances in social media help a lot to record the relational information among users, which ensures the availability of network information for health-related studies. Besides, individuals are widely connected due to similar features, such as age, gender, race, religion, education level, work, income, other socioeconomic status, medical records and genetic features, etc. <cit.>. However, for simplicity, current online methods for mHealth assume that users are completely different; they share no information among users and learn a separate RL for each user by only using his or her data. Such an assumption works well in the ideal condition where the sample drawn from each user is large enough in size to support the separate online learning. However, though the data for all users is abundant, the data for each user is limited in size. For example, at the beginning of online learning there are t=5 tuples, which is hardly enough to support a separate learning and is likely to result in unstable policies. From the perspective of optimization, the problem of lack of samples badly affects the actor-critic updating not only at the beginning of online learning but also along the whole learning process. This is because the actor-critic objective functions are non-convex and nonlinear; a bad solution at the beginning of online learning would bias the optimization in sub-optimal directions. Besides, a policy obtained at the early stage of online learning gives a bad user experience, which makes users likely to disengage from, or even abandon, the mHealth intervention.

Different from current methods, we consider the phenomenon that a user is similar to some (but not all) users, and that similar users behave similarly but not completely identically to each other. To this end, we propose a cohesion-based online RL method for the mHealth study. We aim to understand how to share information across similar users in order to improve the performance.

§.§ Construct the network cohesion by using the warm start trajectory (WST)

We assume there is an undirected network cohesion connecting similar users, i.e. 𝒢=(𝒱,ℰ), where 𝒱={1,2,⋯,N} is the set of nodes (representing users) and ℰ⊂𝒱×𝒱 is the edge set. Although advanced social media platforms, like Facebook, Twitter and LinkedIn, could provide us with various network information, they are not designed for mHealth. There is noisy and misleading relational information in such networks for the mHealth study <cit.>.
Thus, we want to learn the network cohesion intentionally for mHealth by measuring the similarities between the related behaviors of users.

In RL, the MDP provides a mathematical tool to describe the properties of users in a specific mHealth study[The MDPs of one user on two diverse mHealth studies should be very different; for example, the MDP in the HeartSteps study <cit.> for one user should be different from that in the alcohol control <cit.> study.]. By measuring the similarities among the users' MDPs, we could learn the network cohesion targeted to that mHealth study. However, the MDP models are unknown to the RL problem. Instead, the warm start trajectories (WSTs) of all the N users are available, which provide observations of the users. Thus, we use the WSTs for the graph learning, i.e. 𝒟^(0)={𝒟_n^(0) | n=1,⋯,N}, where 𝒟_n^(0)={(s_{i,n}, a_{i,n}, r_{i,n})}_{i=1}^{T_0} is the WST for the n-th user. Since an MDP consists of the state transition and immediate reward models, the feature for the cohesion network learning is constructed by stacking the states and rewards in the WST as follows: 𝐟_n=[s_{1,n}^⊺, r_{1,n}, ⋯, s_{T_0,n}^⊺, r_{T_0,n}]^⊺∈ℝ^{pT_0+T_0}, for n∈{1,⋯,N}. Note that the action or policy is not part of an MDP. To reduce the influence of random actions in the WST, we get rid of the temporal order by sorting all the elements in 𝐟_n (<ref>). Then the benchmark method, i.e. K-nearest neighbors (KNN), is used to learn the neighboring information among users:

c_ij = 1, if 𝐟_i∈KNN(𝐟_j) or 𝐟_j∈KNN(𝐟_i); 0, otherwise,

where 𝐟_i∈KNN(𝐟_j) indicates that the i-th node is a K-nearest neighbor of the j-th node <cit.>; (<ref>) defines an undirected graph. The value of K controls how widely the users are connected: a large K indicates a wide connection among users and vice versa.

§.§ Model of cohesion based Actor-Critic RL

The underlying assumption throughout this paper is that if two users are connected, their values and policies are constrained to be similar, e.g. ‖𝐰_i−𝐰_j‖ and ‖θ_i−θ_j‖ are small if i↔j <cit.>. With the network cohesion 𝐂=(c_ij)_{N×N}, the objective function for the critic updating is formed as follows:

𝐖=f(𝐖)=min_𝐔 ∑_{n=1}^N ∑_{𝒰_i∈𝒟_n} ‖𝐱_i^⊺𝐮_n−(r_i+γ𝐲_i^⊺𝐰_n)‖_2^2, s.t. ∑_{i,j=1}^N c_ij d(𝐮_i,𝐮_j)≤δ_1 and ∑_{i,j=1}^N c_ij d(𝐰_i,𝐰_j)≤δ_2,

where 𝐔=[𝐮_1,⋯,𝐮_N]∈ℝ^{u×N} and 𝐖=[𝐰_1,⋯,𝐰_N]∈ℝ^{u×N} are designed matrices that consist of all the N users' variables (each column summarizes the unknown variable of one user); d(𝐮_i,𝐮_j) is a distance measure between two vectors; usually we set d(·,·) to the Euclidean distance. With the matrix notations in Section <ref>, we turn (<ref>) into the following two-level nested optimization problems:

𝐔^*=min_𝐔 (∑_{n=1}^N ‖𝐗_n^⊺𝐮_n−(𝐑_n+γ𝐘_n^⊺𝐰_n)‖_2^2 + μ_1∑_{i,j=1}^N c_ij‖𝐮_i−𝐮_j‖_2^2 + ζ_1∑_{n=1}^N ‖𝐮_n‖_2^2),
𝐖^*=min_𝐖 (∑_{n=1}^N ‖Φ_n𝐰_n−Φ_n𝐮_n‖_2^2 + μ_2∑_{i,j=1}^N c_ij‖𝐰_i−𝐰_j‖_2^2 + ζ_2∑_{n=1}^N ‖𝐰_n‖_2^2),

where Φ_n is a designed matrix to facilitate the optimization of (<ref>). The first level (<ref>) projects the Bellman image onto a linear space (we refer to (<ref>) as the projection step); the second level (<ref>) deals with the fixed point problem (i.e.
the fixed-point step) <cit.>.

The objective for the actor updating is defined as follows: {θ_1,⋯,θ_n,⋯,θ_N}=max_{{θ_n}_{n=1}^N} Ĵ(θ_1,⋯,θ_N), where Θ=[θ_1,⋯,θ_N], Q(s_i,a;𝐰_{θ_n})=𝐱(s_i,a)^⊺𝐰_{θ_n} is the estimated value for the n-th policy π_{θ_n} and

Ĵ(Θ)=∑_{n=1}^N ((1/|𝒟_n|) ∑_{𝒰_i∈𝒟_n} ∑_{a∈𝒜} Q(s_i,a;𝐰_{θ_n})π_{θ_n}(a|s_i)) − (μ_3/2)∑_{i,j=1}^N c_ij‖θ_i−θ_j‖_2^2 − (ζ_3/2)∑_{n=1}^N ‖θ_n‖_2^2.

Although we are able to obtain a closed-form solution for the critic updating (<ref>), to reduce the computational costs we substitute the solution in value for {𝐰_n}_{n=1}^N, rather than the closed-form expression of {𝐰_{θ_n}}_{n=1}^N, into the actor updating. The actor updating algorithm performs the maximization of (<ref>) over Θ, which is computed via the Sequential Quadratic Programming (SQP) algorithm. We use the implementation of SQP with a finite-difference approximation to the gradient in the fmincon function of Matlab.

In the objectives (<ref>), (<ref>) and (<ref>), μ_1, μ_2 and μ_3 are the tuning parameters that control the strength of the network cohesion constraints. This is an advantage of our methods over the network based bandit <cit.>. When μ_1,μ_2,μ_3→∞, the connected users are enforced to have identical values and policies. When μ_1,μ_2,μ_3=0, there is no network cohesion constraint; in that case, our method is equivalent to the separate online RL method. Compared with the Separate-RL, the model complexity of our methods is reduced, since their parameter domain is constrained via the network cohesion regularization. This ensures that our methods work well when the sample size is small. However, the optimization of our method is much more complex than that of the Separate-RL: the updating rules of all the users are independent of each other in the Separate-RL, while in our method the optimization of all the users is coupled together. In the following sections, two actor-critic RL algorithms are proposed to deal with the objectives (<ref>) and (<ref>).

§ ALGORITHM#1 FOR THE CRITIC UPDATE

§.§ Updating Rules for the Projection Step (<ref>)

We first discuss how to minimize the objective for the projection step. The objective is J=∑_{n=1}^N ‖𝐗_n^⊺𝐮_n−(𝐑_n+γ𝐘_n^⊺𝐰_n)‖_2^2 + μ_1 tr(𝐔𝐋𝐔^⊺) + ζ_1‖𝐔‖_F^2, where ‖·‖_F^2 is the Frobenius norm of a matrix, tr(𝐔𝐋𝐔^⊺)=∑_{i=1}^N ∑_{j=1}^N c_ij‖𝐮_i−𝐮_j‖_2^2 and 𝐋=𝐃−𝐂∈ℝ^{N×N} is a graph Laplacian; 𝐃 is a diagonal matrix whose elements are column (or row, as 𝐂 is a symmetric matrix) sums of 𝐂, i.e. d_ii=∑_j c_ij. The partial derivative of J_1, i.e. the first term in (<ref>), with respect to 𝐮_n is ∂J_1/∂𝐮_n=2𝐗_n𝐗_n^⊺𝐮_n−2𝐗_n𝐑_n−2γ𝐗_n𝐘_n^⊺𝐰_n. Summarizing the partial derivatives with respect to all the variables in 𝐔=(𝐮_1,⋯,𝐮_N)∈ℝ^{u×N}, we have

∂J_1/∂vec(𝐔) = 2(∑_{n=1}^N 𝐄_n⊗(𝐗_n𝐗_n^⊺))vec(𝐔) − 2(∑_{n=1}^N 𝐄_n⊗𝐗_n)vec(𝐑) − 2γ(∑_{n=1}^N 𝐄_n⊗(𝐗_n𝐘_n^⊺))vec(𝐖),

where vec(𝐔)=[𝐮_1^⊺,⋯,𝐮_N^⊺]^⊺∈ℝ^{uN} is the vectorization of a matrix; 𝐄_n=diag(0,⋯,1,⋯,0)∈ℝ^{N×N} is a diagonal matrix with the n-th diagonal element equal to 1 and all the others equal to zero; ⊗ indicates the Kronecker product between two matrices, resulting in a block matrix. Let 𝐌_1=∑_n 𝐄_n⊗(𝐗_n𝐗_n^⊺), 𝐌_2=∑_n 𝐄_n⊗𝐗_n and 𝐌_3=∑_n 𝐄_n⊗(𝐗_n𝐘_n^⊺). We have a simpler formulation for ∂J_1/∂vec(𝐔) as follows: ∂J_1/∂vec(𝐔)=2𝐌_1 vec(𝐔)−2[𝐌_2 vec(𝐑)+γ𝐌_3 vec(𝐖)]. The partial derivative of the second term in (<ref>), i.e. J_2=μ_1 tr(𝐔𝐋𝐔^⊺)+ζ_1‖𝐔‖_F^2, with respect to 𝐔 is ∂J_2/∂𝐔=2μ_1𝐔𝐋+2ζ_1𝐔. According to the Encapsulating Sum <cit.>, we have ∂J_2/∂vec(𝐔)=2[(μ_1𝐋^⊺+ζ_1𝐈_N)⊗𝐈_u]vec(𝐔), where 𝐈_N∈ℝ^{N×N} and 𝐈_u∈ℝ^{u×u} are identity matrices.
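To make the Kronecker bookkeeping above concrete before assembling the closed-form updates, here is a minimal numerical sketch (Python/NumPy; all feature matrices, the variable 𝐔, and the graph are randomly generated placeholders). It forms 𝐌_1, 𝐌_2 and 𝐌_3 by block assembly and checks the Encapsulating Sum identity vec(𝐔𝐋)=(𝐋^⊺⊗𝐈_u)vec(𝐔) that underlies the gradient of J_2:

```python
import numpy as np

rng = np.random.default_rng(3)
N, u, t = 3, 4, 10
Xs = [rng.normal(size=(u, t)) for _ in range(N)]
Ys = [rng.normal(size=(u, t)) for _ in range(N)]
U = rng.normal(size=(u, N))

def kron_sum(blocks):
    """Assemble sum_n E_n (x) B_n; E_n has a single 1 on its diagonal."""
    N = len(blocks)
    out = None
    for n, Bn in enumerate(blocks):
        E = np.zeros((N, N)); E[n, n] = 1.0
        term = np.kron(E, Bn)
        out = term if out is None else out + term
    return out

M1 = kron_sum([X @ X.T for X in Xs])
M2 = kron_sum(Xs)
M3 = kron_sum([X @ Y.T for X, Y in zip(Xs, Ys)])

# Encapsulating Sum check: vec(U L) = (L^T (x) I_u) vec(U),
# with column-stacked vec, i.e. vec(U) = [u_1; ...; u_N].
C = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # toy adjacency
L = np.diag(C.sum(axis=1)) - C                            # graph Laplacian
lhs = np.kron(L.T, np.eye(u)) @ U.flatten(order="F")
rhs = (U @ L).flatten(order="F")
assert np.allclose(lhs, rhs)

# These same blocks are exactly what the closed-form critic updates below
# assemble; e.g. Algorithm #2 solves
#   vec(W) = [M1 - gamma*M3 + kron(mu1*L.T + zeta1*I_N, I_u)]^{-1} M2 vec(R).
```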
Setting the gradient of J in (<ref>) with respect to vec(𝐔) to zero gives the closed-form solution for the projection step as follows: vec(𝐔)=[𝐌_1+Λ_⊗(μ_1,ζ_1)]^{-1}[𝐌_2 vec(𝐑)+γ𝐌_3 vec(𝐖)], where Λ_⊗(μ_1,ζ_1)=(μ_1𝐋^⊺+ζ_1𝐈_N)⊗𝐈_u∈ℝ^{uN×uN}.

§.§ Updating Rules for the Fixed Point Step (<ref>)

Considering the first term in the fixed point step (<ref>) gives O_1=∑_n ‖Φ_n𝐰_n−Φ_n𝐮_n‖_2^2=‖Φ_⊗ vec(𝐖)−Φ_⊗ vec(𝐔)‖_2^2, where Φ_⊗=(∑_{n=1}^N 𝐄_n⊗Φ_n). To facilitate the optimization, we design {Φ_n}_{n=1}^N to let Φ_⊗=𝐌_1+Λ_⊗(μ_1,ζ_1), which leads to

O_1=‖Φ_⊗ vec(𝐖)−Φ_⊗ vec(𝐔)‖_2^2 =‖Φ_⊗ vec(𝐖)−[𝐌_2 vec(𝐑)+γ𝐌_3 vec(𝐖)]‖_2^2 =‖(Φ_⊗−γ𝐌_3)vec(𝐖)−𝐌_2 vec(𝐑)‖_2^2,

and finally results in an easy solution for the critic updating (<ref>) (cf. Theorem <ref>). Letting 𝐀=Φ_⊗−γ𝐌_3, we have O_1=‖𝐀 vec(𝐖)−𝐌_2 vec(𝐑)‖_2^2. The partial derivative of O_1 with respect to vec(𝐖) is ∂O_1/∂vec(𝐖)=2𝐀^⊺𝐀 vec(𝐖)−2𝐀^⊺𝐌_2 vec(𝐑). Considering the partial derivatives of the cohesion constraint and the Frobenius norm based smooth constraint with respect to vec(𝐖), and setting the overall partial derivative to zero, i.e. ∂O/∂vec(𝐖)=𝟎, we obtain the following closed-form solution: vec(𝐖^*)=[𝐀^⊺𝐀+Λ_⊗(μ_2,ζ_2)]^{-1}𝐀^⊺𝐌_2 vec(𝐑), where Λ_⊗(μ_2,ζ_2)=(μ_2𝐋^⊺+ζ_2𝐈_N)⊗𝐈_u. Here 𝐁=𝐀^⊺𝐀+Λ_⊗(μ_2,ζ_2) is a symmetric and positive definite matrix, which leads to an easy critic updating rule in (<ref>).

Suppose that 𝐏∈ℝ^{n×n} and 𝐐∈ℝ^{m×m} are square matrices. Let λ_1,⋯,λ_n be the eigenvalues of 𝐏 and ν_1,⋯,ν_m be those of 𝐐. Then the eigenvalues of 𝐏⊗𝐐 <cit.>, where ⊗ is the Kronecker product, are λ_iν_j, i∈{1,⋯,n}, j∈{1,⋯,m}.

§ ALGORITHM#2 FOR THE CRITIC UPDATE

In this section, we provide another updating rule for the critic update (i.e. policy evaluation). Note that to prevent overfitting when the sample size is very small, the conventional LSTDQ usually employs the ℓ_2 constraint on the variable 𝐮 in the projection step; it does not put the ℓ_2 constraint on the fixed-point variable 𝐰 <cit.>. Following this idea, we have a simpler objective function for the critic update: 𝐰_n=f(𝐰_n)=min_{𝐮_n} ∑_{𝒰_i∈𝒟_n} ‖𝐱_i^⊺𝐮_n−(r_i+γ𝐲_i^⊺𝐰_n)‖_2^2, n∈{1,⋯,N}, s.t. ∑_{i,j=1}^N c_ij d(𝐮_i,𝐮_j)≤δ_1. According to the derivation in Section <ref> that considers the Frobenius norm based smooth constraint, the updating rule for the projection step is (<ref>). In the fixed-point step, the objective is simply 𝐰_n=𝐮_n (i.e. a fixed-point problem), which leads to vec(𝐖)=vec(𝐔). Thus, we have the closed-form solution for vec(𝐖) as follows: vec(𝐖)=[𝐌_1−γ𝐌_3+Λ_⊗(μ_1,ζ_1)]^{-1}𝐌_2 vec(𝐑). It is simpler than the first updating rule for the critic update (<ref>).

§ EXPERIMENT RESULTS

We verify the proposed methods on the HeartSteps dataset. It has two choices for an action, i.e. {0,1}, where a=1 means sending the positive intervention, while a=0 indicates no intervention <cit.>. Specifically, the stochastic policy is assumed to be of the form π_θ(a|s)=exp[−θ^⊺ϕ(s,a)] / ∑_{a'} exp[−θ^⊺ϕ(s,a')], where θ∈ℝ^m is the unknown parameter and ϕ(·,·) is a feature process that combines the information in actions and states, i.e. ϕ(s,a)=[as^⊺,a]^⊺∈ℝ^m.

§.§ The HeartSteps Dataset

To verify the performance of our method, we use a dataset from a mobile health study, called HeartSteps <cit.>, to approximate the generative model. This is a 42-day mHealth intervention that aims to increase the number of steps users take each day by providing positive treatments (i.e. interventions), which are adapted to users' ongoing status, such as suggesting that users take a walk after prolonged sitting <cit.>, or do some exercises after work.

A trajectory of T tuples 𝒟={(s_i,a_i,r_i) | i=1,⋯,T} is generated for each user <cit.>.
The initial state is drawn from the Gaussian distribution S_0∼𝒩_p{0,Σ}, where Σ is a p×p covariance matrix with pre-defined elements. The action a_t for 0≤t≤T_0 is drawn from the random policy, with a probability of 0.5 of providing interventions, i.e. μ(1|s)=0.5 for all states s. Such a process is called drawing the warm start trajectory (WST) via micro-randomized trials <cit.>, and T_0 is the length of the WST. When t≥T_0, we start the actor-critic updating, and the action is drawn from the learned policy, i.e. a_t∼π_{θ_t}(·|s_t). When t≥1, the state and immediate reward are generated as follows:

S_{t,1}=β_1 S_{t−1,1}+ξ_{t,1},
S_{t,2}=β_2 S_{t−1,2}+β_3 A_{t−1}+ξ_{t,2},
S_{t,3}=β_4 S_{t−1,3}+β_5 S_{t−1,3} A_{t−1}+β_6 A_{t−1}+ξ_{t,3},
S_{t,j}=β_7 S_{t−1,j}+ξ_{t,j}, j=4,…,p,
R_t=β_14×[β_8+A_t×(β_9+β_10 S_{t,1}+β_11 S_{t,2})+β_12 S_{t,1}−β_13 S_{t,3}+ϱ_t],

where β={β_i}_{i=1}^{14} is the main parameter of the MDP and −β_13 S_{t,3} models the treatment fatigue; {ξ_{t,i}}_{i=1}^p∼𝒩(0,σ_s^2) is the noise in the state model (<ref>) and ϱ_t∼𝒩(0,σ_r^2) is the noise in the reward model (<ref>).

As is known, individuals are generally more or less different from each other, and each individual is similar to a part, but not all, of the individuals. In the mHealth and RL study, an individual is abstracted as an MDP, which is determined by the value of β, cf. (<ref>) and (<ref>). To achieve a more practical dataset compared with <cit.>, we come up with a method to generate N users (i.e. βs) that satisfy the above requirements in two steps: (a) manually design V basic βs, i.e. {β_v^basic | v=1,⋯,V}, that are very different from each other; (b) a set of N_v different individuals (i.e. βs) is generated for each β_v^basic via the process β_i=β_v^basic+δ_i, for i∈{1,2,⋯,N_v}, where δ_i∼𝒩(0,σ_β𝐈_14) is the noise in the MDPs and 𝐈_14∈ℝ^{14×14} is an identity matrix. After such processing, the individuals are all different from one another. The value of σ_β specifies how different the individuals are. In the experiments, the number of groups is set as V=3 (each group has N_v=15 people, leading to N=45 users involved in the experiment). The β^basic's for the V groups are set as follows:

β_1^basic=[0.40, 0.25, 0.35, 0.65, 0.10, 0.50, 0.22, 2.00, 0.15, 0.20, 0.32, 0.10, 0.45, 800],
β_2^basic=[0.35, 0.30, 0.30, 0.60, 0.05, 0.65, 0.28, 2.60, 0.35, 0.45, 0.45, 0.15, 0.50, 650],
β_3^basic=[0.20, 0.50, 0.20, 0.62, 0.06, 0.52, 0.27, 3.00, 0.15, 0.15, 0.50, 0.16, 0.70, 450].

§.§ Compared Methods and Parameter Settings

There are three online actor-critic RL methods in the comparison: (a) Separate-RL, which is an extension of the online actor-critic contextual bandit in <cit.> to online actor-critic reinforcement learning; it learns a separate RL policy for each user by only using his or her data. (b) Cohesion-RL#1 is the first version of our method. (c) Cohesion-RL#2 is the second version of our method (cf. Algorithm <ref> for details). Specifically, Cohesion-RL#1 and Cohesion-RL#2 share the same actor updating; the difference between them lies in the critic updating rules that they employ.

The noises in the MDP are set as σ_s=0.5, σ_r=1 and σ_β=0.05. The state has dimension p=3 and the policy feature has m=4 elements. We set the ℓ_2 constraint in the Separate-RL as ζ_a=ζ_c=0.1. When the cohesion constraint in our methods is too small (10^-4, say), we need the ℓ_2 constraint for the actor-critic updating to avoid overfitting, with the parameters set as ζ_1=ζ_2=ζ_3=0.1. Otherwise, we set ζ_1=ζ_2=ζ_3→0.
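The generative model above is fully specified by β, σ_s and σ_r, so a simulated user is easy to produce. The sketch below (Python/NumPy) draws a trajectory under the micro-randomized warm-start policy μ(1|s)=0.5 using the paper's β_1^basic; Σ is taken to be the identity purely for illustration, since its actual pre-defined elements are not listed here, and for p=3 the j=4,…,p state equation is vacuous:

```python
import numpy as np

def simulate_user(beta, T=80, p=3, sigma_s=0.5, sigma_r=1.0, seed=0):
    """Simulate one HeartSteps-style trajectory under the random policy."""
    rng = np.random.default_rng(seed)
    b = np.concatenate(([0.0], np.asarray(beta, float)))  # b[i] = beta_i
    S = np.zeros((T + 1, p))
    S[0] = rng.normal(0.0, 1.0, size=p)    # illustrative Sigma = I_p
    A = rng.integers(2, size=T + 1)        # micro-randomized: mu(1|s) = 0.5
    R = np.zeros(T + 1)
    for t in range(1, T + 1):
        xi = rng.normal(0.0, sigma_s, size=p)
        S[t, 0] = b[1] * S[t-1, 0] + xi[0]
        S[t, 1] = b[2] * S[t-1, 1] + b[3] * A[t-1] + xi[1]
        S[t, 2] = b[4] * S[t-1, 2] + b[5] * S[t-1, 2] * A[t-1] \
                  + b[6] * A[t-1] + xi[2]
        # Reward with treatment-fatigue term -b[13]*S[t,2].
        R[t] = b[14] * (b[8] + A[t] * (b[9] + b[10]*S[t, 0] + b[11]*S[t, 1])
                        + b[12] * S[t, 0] - b[13] * S[t, 2]
                        + rng.normal(0.0, sigma_r))
    return S, A, R

beta1 = [0.40, 0.25, 0.35, 0.65, 0.10, 0.50, 0.22,
         2.00, 0.15, 0.20, 0.32, 0.10, 0.45, 800]
S, A, R = simulate_user(beta1)
print(R[1:4])   # first few simulated rewards (daily step counts)
```

Adding group-level perturbations δ_i∼𝒩(0, σ_β𝐈_14) to beta1 then reproduces the heterogeneous population of N=45 users described above.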
The feature processing for the value estimation is 𝐱(s,a)=[1, s^⊺, a, s^⊺a]^⊺∈ℝ^u, where u=2p+2, for all the compared methods. The feature for the policy is processed as ϕ(s,a)=[as^⊺,a]^⊺∈ℝ^m, where m=p+1. We set K=8 for the KNN based network cohesion learning. Unless stated otherwise, the following three parameters are set as: (a) the trajectory length in mHealth is T=80, which indicates that the online RL learning ends at t=80; (b) the length of the warm start trajectory is set as T_0=10; (c) to reduce the number of parameters in the algorithm, the parameters for the cohesion constraint in our methods are set as μ_1=0.1, μ_3=μ_1 and μ_2=0.01μ_1.

§.§ Evaluation Metrics

We use the expectation of the long-run average reward (ElrAR) 𝔼[η^{π_Θ}] to quantify the quality of the estimated policy π_Θ on a set of N=45 individuals. Here π_Θ summarizes the policies for all the 45 users, in which π_{θ_n} is the n-th user's policy. Intuitively, ElrAR measures how much average reward in the long run we could get in total by using the learned policy π_Θ on the testing users (i.e. MDPs), for example measuring how much alcohol users consume in a fixed time period in the alcohol use study <cit.>. Specifically in HeartSteps, ElrAR measures the average number of steps that users take per day over a long time; a larger ElrAR corresponds to a better performance. The average reward for the n-th user, i.e. η^{π_{θ_n}}, is calculated by averaging the rewards over the last 4,000 elements in a trajectory of 5,000 tuples under the policy π_{θ_n}, i.e. η^{π_{θ_n}}=(1/(T−i))∑_{j=i}^T ℛ(s_{j,n},a_{j,n}) with a_{j,n}∼π_{θ_n}, where T=5000 and i=1000. Then the ElrAR 𝔼[η^{π_Θ}] is approximated by averaging over the 45 η^{π_{θ_n}}'s, i.e. 𝔼[η^{π_Θ}]≈(1/N)∑_{n=1}^N η^{π_{θ_n}}.

§.§ Comparisons in three experiment settings

The following experiments are carried out to verify different aspects of the three online actor-critic RL algorithms:

(S1) In this part, the trajectory length of all users ranges over T∈{50,80,110,150}. The experiment results are shown in Table <ref> and Fig. <ref>. There are two sub-tables in Table <ref>; each sub-table displays the ElrAR of the three RL methods (i.e. Separate-RL, Cohesion-RL#1 and Cohesion-RL#2, respectively) under six γ settings; the last row shows the average ElrAR over the results of all six γs. In Fig. <ref>, there are three sub-figures; each sub-figure illustrates the results of the three methods under one γ setting. As we shall see, the performance of the three methods generally increases as T rises. The performance of our RL methods, i.e. Cohesion-RL#1 and Cohesion-RL#2, has an obvious advantage over the Separate-RL under all the parameter settings in (S1). Besides, the advantage of our methods over Separate-RL slowly decreases as T rises. Compared with Separate-RL, our methods improve on average by 156.0 steps and 188.3 steps when T=50, and by 136.3 steps and 163.7 steps when T=150.

(S2) In this part, the length of the warm start trajectory ranges over T_0∈{5,10,15,20}, which indicates that the RL methods wait longer and longer before starting the online learning. The experiment results are summarized in Table <ref> and Fig. <ref>. As T_0 rises across this range, the performance of Separate-RL increases dramatically and Cohesion-RL#1 rises gradually, while Cohesion-RL#2 remains stable. Thus, the average advantage of our methods over Separate-RL decreases dramatically as T_0 rises, i.e., from 224.07 steps and 261.67 steps when T_0=5 to 33.26 steps and 52.09 steps when T_0=20.
Such a result suggests that our methods work particularly well when the WST is very short; in this case, the mining of network cohesion is necessary for the online RL learning. In general, however, our methods still outperform Separate-RL significantly.

(S3) The parameter of the network cohesion constraint μ_1 for the projection step ranges from 0.001 to 10. To reduce the number of parameters in our algorithm, we simply set μ_2=0.01μ_1 (i.e. the cohesion constraint for the fixed-point step) and μ_3=μ_1 (i.e. the cohesion constraint for the actor updating). The experiment results are illustrated in Fig. <ref>, which contains three sub-figures. Each sub-figure shows the results of the three online RLs vs. five μ_1 settings under one γ. As μ_1 rises across this range, our method always obtains superior performance compared with Separate-RL. Specifically, Cohesion-RL#2 is very stable and always better than Cohesion-RL#1. This indicates that it is reliable to follow the idea of how the ℓ_2 constraint is introduced in LSTDQ. In Fig. <ref>, since Separate-RL does not have the network cohesion constraint, its result remains unchanged.

Considering (S1) and (S2) for the Separate-RL, we find: (a) the lack of samples at the beginning of the online learning may bias the optimization direction, which badly influences the performance even when the trajectory is very long; (b) compared with T, an increase of T_0 has a more important influence on the performance. In (S1), where T_0=10 is fixed and T ranges from T=50 to T=150, the performance of Separate-RL increases by 32.74 steps. In (S2), where T=80 is fixed and T_0 rises from T_0=5 to T_0=20, Separate-RL achieves an improvement of 210.51 steps, which is much more significant than the rise caused by increasing T.

§ CONCLUSIONS AND DISCUSSION

This paper presents a first attempt to employ online actor-critic reinforcement learning for mHealth. Following the current methods that learn a separate policy for each user, the Separate-RL cannot achieve satisfactory results. This is because the data for each user is too limited in size to support the separate learning, leading to unstable policies that contain a lot of variance. After considering the universal phenomenon that users are generally connected in a network and linked users tend to have similar behaviors, we propose a network cohesion constrained actor-critic reinforcement learning method for mHealth. It is able to share information among similar users to convert the limited user information into sharper learned policies. Extensive experiment results demonstrate that our methods outperform the Separate-RL significantly. The proposed methods can easily be applied to other health-related tasks.

§.§ Appendix: the proof of Theorem <ref>

Considering Λ_⊗(μ_2,ζ_2)=(μ_2𝐋^⊺+ζ_2𝐈_N)⊗𝐈_u gives the equation 𝐁=𝐀^⊺𝐀+μ_2𝐋^⊺⊗𝐈_u+ζ_2𝐈_uN. The first term 𝐁_1=𝐀^⊺𝐀 is obviously positive semi-definite, as for all 𝐯 we have 𝐯^⊺𝐀^⊺𝐀𝐯=‖𝐀𝐯‖_2^2≥0. The graph Laplacian 𝐋 is positive semi-definite, which indicates that its eigenvalues are non-negative, i.e. λ_1,⋯,λ_N≥0. The eigenvalues of 𝐈_u are ν_1=⋯=ν_u=1. According to Lemma <ref>, we conclude that the eigenvalues of 𝐋^⊺⊗𝐈_u are non-negative, which indicates that it is a positive semi-definite matrix. The last term in 𝐁 is a scaled identity matrix, which is surely positive definite. The sum of two positive semi-definite matrices and a positive definite matrix results in a positive definite matrix.

Since for any matrices 𝐏∈ℝ^{l×k} and 𝐐∈ℝ^{m×n}, the Kronecker product has the property (𝐏⊗𝐐)^⊺=(𝐏^⊺⊗𝐐^⊺) <cit.>.
Besides, the graph Laplacian 𝐋 is symmetric. We have 𝐁^⊺=(𝐀^⊺𝐀+μ_2𝐋^⊺⊗𝐈_u+ζ_2𝐈_uN)^⊺=𝐁.
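As a quick numerical sanity check of the proof above (a sketch; 𝐀 and the toy graph are randomly generated placeholders), one can verify both the Kronecker eigenvalue property and the positive definiteness of 𝐁:

```python
import numpy as np

rng = np.random.default_rng(2)
N, u = 3, 4
# A toy symmetric adjacency and its graph Laplacian L = D - C.
C = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
L = np.diag(C.sum(axis=1)) - C
A = rng.normal(size=(u * N, u * N))
mu2, zeta2 = 0.01, 0.1

B = A.T @ A + mu2 * np.kron(L.T, np.eye(u)) + zeta2 * np.eye(u * N)
assert np.allclose(B, B.T)                 # B is symmetric
assert np.linalg.eigvalsh(B).min() > 0     # B is positive definite

# Kronecker eigenvalue property: eig(L (x) I_u) = {lambda_i * 1},
# each Laplacian eigenvalue repeated u times.
lam = np.sort(np.linalg.eigvalsh(L))
kron_eigs = np.sort(np.linalg.eigvalsh(np.kron(L, np.eye(u))))
assert np.allclose(np.repeat(lam, u), kron_eigs)
print("positive-definiteness and Kronecker eigenvalue checks passed")
```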
http://arxiv.org/abs/1703.10039v2
{ "authors": [ "Feiyun Zhu", "Peng Liao", "Xinliang Zhu", "Yaowen Yao", "Junzhou Huang" ], "categories": [ "cs.LG" ], "primary_category": "cs.LG", "published": "20170325230120", "title": "Cohesion-based Online Actor-Critic Reinforcement Learning for mHealth Intervention" }
DataSlicer: Task-Based Data Selection For Visual Data Exploration

Farid Alborzi, Rada Chirkova, Pallavi Deo, Christopher Healey, Gargi Pingale, Vaira Selvakani
North Carolina State University, USA
Email: {falborz,rychirko,psdeo,healey,gpingal,vbselvak}@ncsu.edu

Juan Reutter
Pontificia Universidad Catolica de Chile
Email: jreutter@ing.puc.cl

Surajit Chaudhuri
Microsoft Research, USA
Email: surajitc@microsoft.com
================================================================================

In visual exploration and analysis of data, determining how to select and transform the data for visualization is a challenge for data-unfamiliar or inexperienced users. Our main hypothesis is that for many data sets and common analysis tasks, there are relatively few "data slices" that result in effective visualizations. By focusing human users on appropriate and suitably transformed parts of the underlying data sets, these data slices can help the users carry their task to correct completion. To verify this hypothesis, we develop a framework that permits us to capture exemplary data slices for a user task, and to explore and parse visual-exploration sequences into a format that makes them distinct and easy to compare. We develop a recommendation system, DataSlicer, that matches a "currently viewed" data slice with the most promising "next effective" data slices for the given exploration task. We report the results of controlled experiments with an implementation of the DataSlicer system, using four common analytical task types. The experiments demonstrate statistically significant improvements in accuracy and exploration speed versus users without access to our system.

§ INTRODUCTION

Data-intensive systems accompanied by visualization software are being increasingly used for interactive data explorations <cit.>. These and other systems help data analysts in their exploratory tasks of visually identifying trends, patterns, and outliers of interest. The visualizations make it more efficient to find task-relevant types of objects in exploratory data analysis, especially in the presence of large data. The reason is that visualizations allow analysts to leverage their visual pattern-matching skills, domain expertise, knowledge of context, and ability to manage ambiguity in ways that fully automated systems cannot.

Due to the exploratory nature of their tasks, analysts often face a wide variety of visualization options to choose from. As pointed out in <cit.>, it is not the visualization per se that is the main challenge. Indeed, once the data to visualize have been selected and transformed (e.g., grouped and aggregated in an appropriate way), users can take advantage of a visualization tool to provide an effective visual presentation of the resulting data. In this paper we look into exploratory data analysis under the assumption that we have access to such presentation solutions, and focus instead on the issue of determining which "data slices" would be the most helpful to the user in addressing the task at hand when visualized. Here, the term data slice refers to the outcome of the process of selecting the data of interest from the given data set, as well as potentially transforming (e.g., grouping and aggregating) the selected data.
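To make the notion of a data slice concrete, the following minimal sketch (Python/pandas; the column names Region, Location, and Magnitude and all the rows are hypothetical stand-ins for attributes of the earthquake data set discussed below) expresses one slice as a selection plus a group-and-aggregate transformation:

```python
import pandas as pd

# Illustrative rows only; the real data set has 17 attributes, 8289 records.
quakes = pd.DataFrame({
    "Region": ["Central America", "Central America", "Central America",
               "Asia", "Asia", "Central America"],
    "Location": ["Guadeloupe", "Panama", "Nicaragua",
                 "Honshu", "Sumatra", "Costa Rica"],
    "Magnitude": [8.5, 6.1, 6.0, 6.2, 6.1, 6.0],
})

# A data slice = a selection of interest + a grouping/aggregation of it.
data_slice = (
    quakes[quakes["Region"] == "Central America"]   # select
    .groupby("Location")["Magnitude"]               # transform: group
    .max()                                          # transform: aggregate
)
print(data_slice)
```

Different choices of the selection predicate, grouping dimensions, and aggregates yield the combinatorially many candidate slices discussed next.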
Identifying the data slices that are appropriate for the given task is a challenge for inexperienced users or those not familiar with the data at hand. The reason is that, typically, only a small fraction of the available data slices results in task-relevant visualizations, while all the other options fail to help the user with her task. This may force such users to examine a large number of options, to find those that lead to relevant visualizations for their exploration or analysis task. While clearly a challenge in the presence of large-scale data, this is a hard problem even when the data set is small.

Our Focus: Our focus is on analytical tasks of common interest, such as detection of outliers or trends, that users often perform in visual exploratory analysis of data. Our objective is to improve the user experience by suggesting to her those data slices that, when visualized, present correct solutions to her task in a prominent way. Solving this problem would be instrumental in helping casual or inexperienced users to effectively conduct explorations of potentially unfamiliar data sets, in a number of application domains and for a spectrum of exploration objectives. For our study, we assume that a user begins work by declaring the task that she plans to perform. We also assume that she is able to identify a correct solution for her task (e.g., an outlier) when the solution is presented to her prominently in a visualization of some data slice.

Proposed Solution: We address the combinatorial explosion in data-slice selection by basing data-slice suggestions on the stage at which the user is in solving her task, and (when available) on expert knowledge of the domain, task, and data set. In this emphasis on, and appreciation of, expert knowledge in solving complex data problems, our effort is in line with research directions such as that of DeepDive <cit.>.

As an illustration, consider a relation storing the data from <cit.> (see <cit.> for the details) on major earthquakes worldwide from 1900–2013. The data set has 17 attributes and 8289 data points; please see Fig. <ref> for a fragment of the data. Suppose that in that data set, the user task is to find locations in Central America containing earthquakes that are outliers based on magnitude. In this user task, there is a wide range of options when selecting the initial data to be visualized. One natural starting point in the exploration would be to examine a map showing locations and other information about the earthquakes in the data set. One such visualization is shown in Fig. <ref>. The key point to note is that this visualization is unlikely to be helpful to those users who are not familiar with the data set. For instance, the arrow in Fig. <ref> is pointing to one correct answer (Guadeloupe in Fig. <ref>) for this exploration task; observe that the visualization is not conducive to finding that answer, as the data point in question does not stand out in the visualization.

One explanation for the relative ineffectiveness of the visualization of Fig. <ref> for the task at hand is that Fig. <ref> shows not only the location and magnitude, but also other information about each earthquake. Suppose the analyst eliminates those features of the data that are irrelevant to the task at hand; the resulting visualization could be as in Fig. <ref> or <ref>.[The difference between Figures <ref> and <ref> is just in the visual representation of the values of earthquake magnitude.]
Interestingly and perhaps counterintuitively, we have found that these visualizations are not very helpful either to human viewers performing this task on the data set <cit.>, again because the answers do not all stand out visually. A more effective way to address this exploratory task is for the user to first examine a box plot showing the earthquake-magnitude mean and outlier whiskers; please see Fig. <ref> for the visualization. Once the cutoff value for outlier earthquake magnitude has been found, the user can effectively construct a correct answer for her task by filtering out the irrelevant data. The result is visualized in Fig. <ref>.

The data slice depicted in Fig. <ref> is not related to the data slices used to construct Figures <ref>–<ref>. The difference goes beyond removing irrelevant data features and, in fact, represents a drastically different choice of both the data dimensions and of their grouping layout. We found that if a user is unable to find a data slice that would present prominently the outlier values of earthquake magnitude in the data set, then suggesting to her the data slice (and the straightforward visual presentation) of Fig. <ref> would typically enable her to proceed efficiently to constructing the data slice of Fig. <ref>. Moreover, if the user is not sure how to proceed even after examining Fig. <ref>, then she should find the data slice (and the map presentation) of Fig. <ref> a helpful suggestion for the final stage of her overall task.

In our experiments with this data set and user task (task 1 in Section <ref>), we found that for humans looking for earthquake-magnitude outliers for the first time, it is not trivial to come up with an effective first-step visualization such as the box plot of Fig. <ref>. Moreover, even though the data set <cit.> has relatively few (17) data attributes, it is impractical to enumerate all the possible data slices by brute force, in the hope of eventually identifying and visualizing a useful choice such as the data in Fig. <ref>. Indeed, a seemingly natural but suboptimal choice of the initial visualization to look at, such as those in Figures <ref>–<ref>, is not necessarily conducive to finding the answers to the exploration task in question. While clearly a challenge in the presence of large-scale data, this effect may be present even in those cases where the data sets are small by today's measures. (Recall that the earthquakes data set <cit.> has 8289 records.) Note that relatively minor ("local") modifications of initially suboptimal data choices to visualize, such as in the transition between Figures <ref>–<ref>, do not necessarily make the resulting visualization any more helpful to the user than the previous choice.

The main hypothesis put forth in this paper is that for many data sets and common exploratory-analysis tasks, there are relatively few data slices that are key to providing effective visualizations for the task. Intuitively, these data slices are manifestations of the domain and data-set knowledge that is relevant to the task at hand.
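The two-step workflow behind the box plot and the filtered result above (read the outlier cutoff off the box plot, then keep only the records above it) can be sketched as follows; this continues the hypothetical quakes frame from the earlier sketch, and the 1.5 × IQR whisker rule used here is the usual box-plot convention, assumed rather than taken from the paper:

```python
# Step 1 (box-plot slice): the upper whisker gives the outlier cutoff.
q1, q3 = quakes["Magnitude"].quantile([0.25, 0.75])
cutoff = q3 + 1.5 * (q3 - q1)    # upper whisker: Q3 + 1.5 * IQR

# Step 2 (filtered slice): Central American quakes above the cutoff.
answers = quakes[(quakes["Region"] == "Central America")
                 & (quakes["Magnitude"] > cutoff)]
print(answers[["Location", "Magnitude"]])   # e.g. Guadeloupe, 8.5
```

The point of the example is that the second slice cannot be reached by "local" edits to a map-style slice; it requires first constructing the box-plot slice, which is precisely the kind of step DataSlicer aims to recommend.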
As we argue in this paper and corroborate with our preliminary experiments (see Section <ref>), the data-slice choices made by domain experts may help other users of the data set solve similar exploration/analysis tasks in a more correct and efficient fashion. To substantiate and verify these claims, we use the specific measures (as in, e.g., <cit.>) of: result accuracy, understood as the average number of correct solutions found, and of user efficiency (speed), understood as the average number of data-specification steps taken to find a correct visualization for the task.

Significant advances have been made lately in developing visual solutions for data exploration and analysis. Major projects, including those described in <cit.>, focus on determining which data slices could be useful to human viewers when visualized. (We provide an overview of these projects in Section <ref>.) Typically, data slices in these and other projects are suggested to the users based on generic expectations about what a user might find interesting in the data, rather than on the context of a particular task that the user might be facing, or of the user's stage in solving the task. Thus, to the best of our understanding, the solutions in the literature still fail to solve the problem of how to efficiently lead casual or inexperienced human users to visualizations of the data that summarize in an effective and prominent way the data points of interest for the user's exploratory-analysis task. As observed via the preliminary experiments reported in this paper, solving two distinct visual-exploration/analysis tasks on the same data set may lead to distinct sequences of data slices, with the data slices in each sequence being of value in the context, and perhaps at the specific stage, of just one of these tasks but not the other. (Please see the discussion of experimental tasks 3 and 4 in Section <ref>.) In addition, to the best of our knowledge, suggesting (sequences of) data slices that would be helpful in solving at least one of these tasks, that of determining trends in the data, cannot be done using state-of-the-art tools.

The specific contributions that we report are as follows:

∙ We develop a formal framework for capturing data slices of interest in a given class of visual-exploration tasks, and for providing appropriately visualized user-specific modifications of each data slice. The data structures in the framework are scalable in the size of the data set, and typically do not need to be modified as the contents of the data set change over time.

∙ We develop prediction software that matches a "currently viewed" data slice with the most promising "next effective" data slice for the given type of exploration task on the data.

∙ We implement our framework and prediction system, DataSlicer, in tandem with commercial visualization software.

∙ Finally, we provide results from controlled experiments with 48 volunteers. The experiments demonstrate, for four common types of visual-analysis tasks, statistically significant improvements in accuracy and exploration speed versus users without access to our system.

Organization: After reviewing related work in Section <ref>, we present our framework in Section <ref>. Section <ref> outlines our main algorithms, and Section <ref> describes the construction of our data structures. The architecture of the DataSlicer system is discussed in Section <ref>. Section <ref> reports the experimental results, and Section <ref> concludes.
§ RELATED WORK

Significant advances have been made lately in developing various facets of visual solutions for data exploration and analysis. In this space, we focus mainly on projects that concentrate on the problem of finding the right visualization, e.g., <cit.>. We refer the reader to the survey <cit.> for a more general discussion of data-exploration techniques.

The system architecture in this current project is based on the connection between SQL queries and visualizations, which is at the core of commercial tools such as Tableau <cit.>. Our data-slice format, as detailed in Section <ref>, has been inspired by, and is similar to, the formalization of visualizations provided in <cit.>. At the same time, the main purpose of that formalization in <cit.> is for the visualization system to keep track of the current visualization, as it is being actively managed by the user, rather than by the system itself. In this current paper, the main purpose of the data-slice format is to match the user's current visualization with the stored past visualizations, and to recommend back to the user the best “next-step” data slice for her visualization sequence.

As in <cit.>, we view the task of constructing visualizations as a two-step process: One first decides on the data slice that is to be shown, and then chooses an appropriate visual specification for this data slice. Several projects, including <cit.>, have focused in this space on (semi) automatic recommendation of the best visual specification for a given task and data slice. However, the built-in assumption in those projects is that the appropriate data slice has been chosen. Our work is orthogonal to these efforts, in that we aim at choosing the best data slice, and assume that the visual specifications are given. We expect to be able to combine forces in the future, to create a system that can help users to select both the appropriate data and the best presentation.

Regarding the problem of choosing the appropriate data slice, the first connection that comes to mind is the problem of choosing the appropriate SQL query for a given task. This problem has received substantial attention in the database community (see, e.g., <cit.>). At the same time, our work is more closely related to those projects that focus on learning which data need to be presented using a visual interface, rather than on constructing directly the appropriate SQL query. Here we have systems such as Vizdeck <cit.> and Charles <cit.>, which aim to recommend the best visualization based on statistical properties of the data. There are also systems that recommend visualizations based on user feedback <cit.>. The system called SeeDB <cit.> automatically generates “interesting visualizations” based on those data slices where the trend deviates in a statistically significant way from the trend on the overall data set. Further, <cit.> describes a vision of an automated system, which can explore past user decisions with the goal of discovering further operations on the data of potential interest to the same user.

In this current project, our overall goal is the same as in the above papers. At the same time, instead of aiming for a fully automatic generic tool for selecting potentially popular individual data slices, we focus on choosing data slices that best address a given visualization-based task.
As a result, the data slices selected by our system are task dependent, rather than just data-set dependent, and are also not limited to statistically interesting data. (For an illustration of how our system provides task-dependent, rather than data-dependent, recommendations, see Section <ref> for experimental tasks 3 and 4 performed on the same data set.) Further, we work with the hypothesis that previous users, when faced with the same type of task, could guide the system as to which data slices (or sequences thereof), with their visualizations, are interesting for the current user. In its emphasis on domain knowledge for the given task and data set, our approach is in line with research directions such as that of DeepDive <cit.>. As a result, our approach can suggest to users data slices, such as those showing general trends on the data, that state-of-the-art systems cannot recommend to the best of our knowledge. (See the discussion of experimental task 4 in Section <ref>.)

Finally, a good example of a collaborative tool for visualizing data is AstroShelf <cit.>. This tool is specifically tailored for astrophysicists and, unlike ours, aims more at facilitating collaborations than at recommending visualizations.

§ THE FRAMEWORK: AN OVERVIEW

In this section we describe the envisioned user experience with a visualization-enabled system, where the system would advance the user's task-solving process by suggesting task-relevant data slices from the underlying data. We then outline our proposed approach to delivering such an experience.

§.§ The Intended User Experience

When presented with a visual-exploration or visual-analysis task, users need to make decisions on which data to visualize to solve the task. The default approach is for the user to construct various visualizations directly in a visualization tool, and to then keep improving or replacing them until one or more visualizations that are effective for the task are found. This can be time- and resource-consuming (cf. <cit.>). Our goal is to alleviate or eliminate the inefficiencies in solving the data-selection part of the user's visual-analysis task.

Our proposed system is designed to serve as a back-end of a standalone visualization tool. At any given time in working on the task, users may ask the system to suggest visualizations that would be useful for solving the task. If so requested, the system would analyze the current user's session and would recommend an (appropriately visualized) data slice based on the history of previous users who were involved in solving similar tasks. When analyzing the sequences of previous users, the system would assign higher priority to those data slices that were labeled by previous users as interesting; for instance, a data slice is considered interesting if past users spent a considerable amount of time looking at its visualization(s).

Consider, for example, the task of finding earthquake-magnitude outliers in Central America using the data set <cit.>, as presented in Section <ref>. A user may start her work on this task by constructing a visualization of Fig. <ref> or of Fig. <ref>. If she is overwhelmed by the amount of potentially relevant information in the visualizations, she would ask the system for a recommendation. The system would then analyze the user's current data slice, and would determine that the most successful past sequences involving the data slice of Fig. <ref> would next switch to the data slice whose fragment is shown in Fig. <ref>, and then to that whose fragment is shown in Fig.
<ref>. The two latter data slices, in that order and augmented by the current session's filtering conditions (Central America), would end up being chosen for the user. The system would determine appropriate visualizations for the recommended data slices either by using the user's visualization preferences in her current session or (if not available) by rules in the system. For the framework and system introduced in this paper, the claim of this example is corroborated by our experimental results; please see the discussion of experimental task 1 in Section <ref>.

§.§ The Proposed Approach: Data Sequences via Graphs

Our proposed framework and system are designed to work with users who create sequences of appropriately visualized data slices. A sequence could be exploratory, with the user trying to determine which individual (single) data slice works best for addressing her current task. Alternatively, a sequence could be part of a solution that calls for construction of multiple consecutive data slices, as in the earthquake-magnitude task of Sections <ref> and <ref>. Either way, we use the graph representation to encode all the sequences of data slices for a type of task on a data set; we call the resulting graph the data-slice graph for this task type and data set. In a data-slice graph, nodes encode data slices, together with any appropriate visualizations, and directed edges encode transitions between consecutive data slices in past user sessions.

When users ask for recommendations, our system matches their current session with the information stored in the data-slice graph, based on node similarity. Our approach can use any algorithm for measuring similarity between nodes; please see Section <ref> for a specific instantiation. The system then recommends to the user those data slices that were the most helpful, at the matched point in the graph, to previous users working on tasks of the same type. Again, our approach can use any algorithm for determining whether a node is helpful — interesting — enough to a user. (For instance, in our experiments we considered a data slice interesting if its visualization had been examined by at least one user for an amount of time above a fixed threshold.) To enable the recommendation feature, each node in the data-slice graph is marked as either “interesting” or “not interesting.”

The number of data slices that one could construct using a data set with even a few attributes may be prohibitively large for computational purposes. It may not be practical or even feasible to represent and store all the possibilities explicitly. Instead, since our goal is to present the user with a specific data slice, we manipulate abstractions from visualizations using the relational model, similarly to what was done in <cit.>. More precisely, we map each data slice to a (simplified) relational-algebra expression, and work with relational queries. We store as nodes in a data-slice graph only those relational-algebra expressions that were featured in at least one sequence executed for the same type of task on the data set at hand by at least one previous user. The data-slice graph contains all the information that we need to recommend data slices to the user: Once we match the user's current data slice to a node in the graph, it suffices to look for those interesting nodes in the graph that are “downstream closest” to the matched node.
Intuitively, this amounts to finding the next interesting nodes in previous sequences that feature a data slice similar to that of the current user. In the next two sections we provide details on the construction of the data-slice graph, how the matching is done, and how we look for the closest interesting nodes.

§ THE DATASLICER SYSTEM

In this section we describe the DataSlicer framework and system. We start with a description of our theoretical framework for specifying sequences of data slices and their accompanying visualizations. We then discuss how the framework stores sequences in a data-slice graph, and explain how this graph is used to recommend to users data slices for addressing their task on the data set.

§.§ Data-Slice Sequences and Graphs

We represent each visual depiction of data as a tuple V = ⟨ D, S ⟩. Here, D is the data specification, which contains the information on the data slice in the visual depiction. Further, S is the visual specification, with information regarding how the data slice is to be visually presented, including the type of visualization (e.g., box plot or pie chart), colors, shapes, and so on.

Consider, for instance, Fig. <ref>, which visualizes information on earthquakes in Central America. To create this visualization, we first need the latitude and longitude for each observation in the data set; this will tell us how to place each observation on the map. Fig. <ref> also shows three additional attributes for each observation point: the average magnitude, the number of records, and the average depth of the earthquakes. Each attribute is shown using a different visual cue: We use the dot color to represent magnitude, the dot size to represent the number of records, and the dot label for the average earthquake depth. The visualization terminology for each of these attributes is a layer; in general, each layer is assigned a different visual cue. Thus, the data specification D for Fig. <ref> will state which information to extract about the data points to be shown: the latitude, longitude, magnitude, number of records, and depth; see Fig. <ref>. The visual specification S for Fig. <ref> states that the visualization needs to show the map of Central America, that each data point is to be shown as a dot, and what visual cue is assigned to each of the layers: color for average magnitude, size for number of records, and label for average depth.

Our data-specification format has been inspired by, and is similar to, the formal definition of visualizations provided in <cit.>. (Please see Section <ref> for a discussion of the difference between <cit.> and this project in the use of the formalism.) Similarly to <cit.>, we assume that the data to be specified come from a single relational table.[If two or more relations are to be visualized, one could join them and treat the result as a single relation to be visually represented. This is a common approach in commercial data-visualization systems.]

To define a data specification on a relation R, the following information is required:

1. The fields applicable to the data set. These are either attributes of R (called simple fields), or complex fields formed by combining two or more fields using the operations of concatenation (+), cross product (×), and nesting (/) <cit.>. We also allow aggregation over simple and/or complex fields, using aggregation operators such as SUM, AVG, MIN, or MAX.

2. How the data from these fields are extracted. This amounts to specifying how the data are being grouped and which filters are currently active.
Here we also provide information about which fields are being mapped to the visual axes X and Y, and about which fields are being rendered as layers.

As an example of a data specification, consider again the visualization in Fig. <ref>. In this data specification, X corresponds to longitude, Y to latitude, and there are three layers: AVG(mag) (the average magnitude), SUM(nr) (the number of records), and AVG(de) (the average depth). We also need to mention that the data are being grouped by the value of “place.” (The attribute “place” is a standard construct included in geographical data sets; it is used to group the data points by their geographical location.) The full data specification for Fig. <ref> is shown in Fig. <ref>.

Formally, a data specification is a tuple (X, Y, L, G, F), where X and Y are the fields rendered respectively as the X and Y axes, L is the set of fields rendered as layers, G is the set of attributes used for grouping, and F is the set of filters in use. Continuing with our example, the data specification for Fig. <ref> is (lon, lat, {AVG(mag), SUM(nr), AVG(de)}, pl, -).

A data specification is a SQL-query template of the form[This is the way specifications are generated in, e.g., the Polaris prototype <cit.> of the Tableau software system <cit.>.]

SELECT X, Y, L FROM R WHERE F GROUP BY G

The connection between data specifications and SQL is important, as it provides flexibility when communicating with the log of visualization systems: We can either capture their data specifications, or we can capture SQL queries and produce specifications ourselves. For our example, the query is

SELECT lon, lat, AVG(mag), SUM(nr), AVG(de) FROM R GROUP BY pl

The Navigation Algebra: We now specify operations on data specifications. The purpose is to enable transitions from one data specification to the next in a visual-exploration sequence that a user generates on the data. The basic operations for transforming data specifications are as follows:

∙ Add or remove a filter condition;
∙ Add or remove a field to/from the SELECT condition (that is, the fields rendered as a layer), the X axis, or the Y axis;
∙ Add or remove a field to/from the grouping specification; and
∙ Modify the specification of a complex field by adding or removing an operation (such as × or +).

(In most systems, one can directly replace a field A with a field B. For technical reasons, we choose to model this action with two operations: removing A and then adding B.)

We use the Navigation Algebra to represent how users navigate between visualizations in a step-by-step fashion. Consider, for example, a user going from the visualization of Fig. <ref> to that of Fig. <ref>. We can model this as a sequence of three data specifications, starting with (lon, lat, {AVG(mag), SUM(nr), AVG(de)}, pl, -), then removing depth, to obtain (lon, lat, {AVG(mag), SUM(nr)}, pl, -), and then removing the number of records, to arrive at (lon, lat, {AVG(mag)}, pl, -), which corresponds to the data specification of Fig. <ref>.

Sequences and Data-Slice Graphs: When working on a visual-exploration or visual-analysis task, users create what we call sequences of visualizations: Starting at a particular visualization (such as that of Fig. <ref>), a user can create new visualizations (such as the one of Fig. <ref>), by performing operations made available to them by the user interface – e.g., filtering the data, adding an extra attribute to the data specification, or changing the type of visualization. Each subsequent operation produces a new visualization in the sequence, and users continue in this fashion until their task is complete.
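To make the data-specification tuples and the Navigation Algebra steps concrete, here is a minimal Python sketch (the class and function names are ours and purely illustrative, not part of the DataSlicer implementation); it reproduces the three-step example above:

    # Minimal sketch of data specifications (X, Y, L, G, F) and two
    # Navigation Algebra operations; all names are illustrative only.
    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class DataSpec:
        x: str                             # field on the X axis
        y: str                             # field on the Y axis
        layers: frozenset = frozenset()    # fields rendered as layers
        grouping: frozenset = frozenset()  # grouping attributes
        filters: frozenset = frozenset()   # active filter attributes

    def remove_layer(spec, f):
        return replace(spec, layers=spec.layers - {f})

    def add_filter(spec, f):
        return replace(spec, filters=spec.filters | {f})

    # The example transition: drop average depth, then number of records.
    d0 = DataSpec("lon", "lat",
                  layers=frozenset({"AVG(mag)", "SUM(nr)", "AVG(de)"}),
                  grouping=frozenset({"pl"}))
    d1 = remove_layer(d0, "AVG(de)")
    d2 = remove_layer(d1, "SUM(nr)")   # == (lon, lat, {AVG(mag)}, pl, -)

Each call produces a new immutable specification, matching the convention that every edge in a logged sequence corresponds to a single operation of the algebra.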
Our goal is to suggest to the user the slice of the data whose visualization is appropriate for the current stage of the user's task on the data. Thus, we do not concentrate on those parts of the sequences where new visualizations are created by modifying the visual specification. Rather, we focus on the underlying sequence given by the changes in the data specifications. These changes are modeled using our Navigation Algebra as described above.

Assuming that we have a log with visualization sequences generated by previous users, we construct what we call the Data-Slice Graph of this log: The nodes of this graph consist of all the data specifications occurring in the sequences in the log, and there is a directed edge from node D_1 to node D_2 if the log contains a sequence where D_1 and D_2 are consecutive data specifications.

As an example, Fig. <ref> shows a data-slice graph for the task of finding outlier earthquakes in the data set <cit.> (Section <ref>); this is task 1 in Section <ref>. The graph contains sequences generated by users who were solving the same type of task on the data set. Figure <ref> depicts a fragment of the graph, showing nodes with IDs 14, 13, 8, 9, 23 and 24. Figure <ref> was generated by the user sequence (D_8, D_9, D_23, D_24, D_23, D_8, D_13, D_14). The user started in node 8, with the specification D_8 = (longitude, latitude, {}, place, -), that is, assigning the earthquake longitude to the X axis, the latitude to the Y axis, and grouping by place. This specification corresponds to a visualization showing the map and just one dot for each place on that map where there has been at least one earthquake. (The grouping in D_8 means that all the earthquake events in the same vicinity are grouped into a single tuple.) The user then went on to add a filter on the attribute magnitude, to filter out places where the average magnitude is not high enough. Note that rather than storing the precise filter, D_8 stores just the fact that a filter was added. This allows us to store together all the data specifications with similar filters. Continuing with the sequence, the user then added depth (node ID 23 with D_23) and minimum depth (24 and D_24). Then the depth was removed, resulting in node 23, and so on.

Interesting Nodes: Some of the sequences of visualizations in a log may contain data that are important for the user task. We denote these as interesting visualizations, and mark these nodes as interesting nodes in the data-slice graph.[In general, determining whether a visualization is interesting to a user is a nontrivial problem. While our framework can use any interestingness-measuring algorithm as a black box, in our experiments we marked as interesting all those visualizations which at least one user had visually examined for at least a fixed number of milliseconds.] For example, in our experiments with task 1 of Section <ref>, the nodes with IDs 9 and 23 in Fig. <ref> were the most interesting to the human subjects. Since the data specification D_9 represents visualizations that are similar to that of Fig. <ref>, this confirms the intuition that the visualization in Fig. <ref> is amongst the most informative for this type of task.

We distinguish between two types of users: experts and regular users. (This distinction is discussed in more detail in Section <ref>.) We say that there is an expert (directed) edge from node D_1 to node D_2 if the sequence generating D_1 and D_2 was generated by an expert, and there is a user (directed) edge if it was generated by a regular user.
In addition, for each edge of the form (D_1, D_2) we maintain with the edge the number of sequences in the log in which D_2 followed D_1.

§.§ Algorithms to Match and Rank Data Slices

The main focus of our framework is on servicing user requests to recommend the next task-relevant data slice and its appropriate visualization. To continue with our example, suppose that a user is exploring the earthquakes data set for magnitude outliers in Central America, and is currently looking at the visualization of Fig. <ref>. The data specification for Fig. <ref>, as discussed in Section <ref>, is (lon, lat, {AVG(mag), SUM(nr), AVG(de)}, pl, -). When the user asks for a recommendation, the system needs to perform the following two operations:

1. The data specification currently being examined by the user needs to be matched to data-specification nodes in the data-slice graph. We keep all such “best-match” nodes.

2. Once a match has been found, the system needs to find in the data-slice graph those “downstream” data specifications that are potentially interesting to the user and are at the same time the closest to the matched node, in terms of operations of the Navigation Algebra.

The algorithm addressing the first challenge is called Match Data Slices; please see Algorithm <ref> for the pseudocode. The algorithm accepts a data specification D and computes, for the stored data-slice graph G, the edit distance between D and the data specification in each node of G. (As mentioned in Section <ref>, both this algorithm and the Rank Data Slices algorithm can use any distance measure, e.g., PageRank. The edit distance shown in the pseudocode of Match Data Slices is one specific choice made in our implementation described in Section <ref>.) We do not want to differentiate between the specifications where the X and the Y axis are switched, as they represent semantically the same object, and likewise for switching between layers and axes. Thus, we proceed as follows. For each node n in the graph we compute three distances between n and D: (1) the edit distance d_s that considers only the fields assigned to the X and Y axes and the layers in D and n; (2) the edit distance d_g considering only the fields in the grouping clause; and (3) the edit distance d_f that considers only the filters in each of D and n. We then add the three values, and output all the nodes n in the graph for which the resulting value is the lowest.

We now look at addressing the second challenge listed above, making recommendations using the current match. Once we have matched a specification to a node in the data-slice graph, the next task is to retrieve the interesting “downstream” nodes in the graph that are the closest to the matched node. We do this using our Rank Data Slices algorithm; please see Algorithm <ref> for the pseudocode. The algorithm works as follows. We assume that each node k in the data-slice graph is given an “interestingness” value I_k. (Any interestingness measure will work for our purposes, as outlined in Section <ref>.) We are also given a threshold T, with the objective of selecting only those nodes with an interestingness value above T, as well as the desired number M of output nodes. For each node n that is in the output of Match Data Slices, we select all the nodes in the data-slice graph whose interestingness value is greater than the threshold T, and rank them in terms of their weighted-shortest-path distance to n. (Other distance measures could be used instead.)
We then select and return the M nodes from this list that are closest to n; if there are not enough such nodes, we complete the list with the most interesting nodes overall according to the I-values in the graph. (This might be necessary if, for instance, the user's current visualization is not relevant to the task and thus cannot provide a useful input to the Match algorithm.)

In our experiments, as reported in Section <ref>, we chose screen time as our measure of interestingness of each data specification. (We assume there that the longer a user looks at the screen in examining a particular visualization, the more interesting that visualization is to the user.) We also set our threshold T to 3 seconds. Though it might look like a small value for the interestingness threshold, its effect is that of filtering out almost 70% of the graph nodes. Furthermore, in the experiments we considered the graph information that had originated from an expert as more helpful than the information from a regular user, and thus made the weight of expert edges in the graph lower (i.e., intuitively contributing to a shorter distance from the matched node) than the weight of “regular-user” edges. Specifically, the weight of an edge from a specification D to a specification D' that was part of an expert sequence would be set in the experiments to 1, and the weight of an edge from a regular-user sequence would be set to 1 + 1/n_u, where n_u is the total count of previous users' sequences that have moved from D to D' in one step. Please see Section <ref> for a discussion of expert and regular edges.

Coming back to our example, recall that the specification of Fig. <ref> was matched to the nodes 8 and 23 of the data-slice graph. A call to Rank Data Slices will now try to find the most interesting specifications that are closest to these nodes. Intuitively, this can be understood as asking for the most interesting specifications that include the latitude and longitude, and thus are expected to be shown in a geographical representation. The ranking algorithm would return the two interesting nodes that are closest to either 8 or 23; these answers include 23 itself, with distance 0, and 9, with distance 1. To present these back to the user, we take these specifications and produce a visualization using the user's previous visual specification, which was a geographical representation. If we use the visual specification of Fig. <ref>, the visualization of the specification of node 9 would look like that of Fig. <ref>.

§ CONSTRUCTING AND USING DATA-SLICE GRAPHS

In this section we outline the process of constructing the data-slice graph for a given task type on a data set. Then we discuss the modes of using data-slice graphs depending on whether domain experts have been involved in the construction.

§.§ The Construction Algorithm

Recall (Section <ref>) that we assume that each user declares her task as she begins the work. Thus, each user sequence can be associated in the log with the task that the user was solving when generating the sequence. We also assume that each expert sequence (if any) is marked as such by the log administrator; we discuss the implications later in this section.
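In code, a logged user sequence, together with the screen-time interestingness measure and the edge weights described in the discussion of Rank Data Slices, can be sketched as follows (an illustrative Python sketch of ours, reusing the DataSpec class from above; it is not the actual DataSlicer implementation):

    # A logged user sequence: one DataSpec per Navigation Algebra step,
    # plus the task label and the expert flag assumed in this section.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LoggedSequence:
        task_type: str               # declared by the user at the start
        expert: bool                 # marked by the log administrator
        specs: List[DataSpec] = field(default_factory=list)
        screen_time: List[float] = field(default_factory=list)  # seconds

        def interesting(self, i, threshold=3.0):
            # screen-time interestingness with T = 3 s, as in our experiments
            return self.screen_time[i] > threshold

    def edge_weight(expert, n_u):
        # expert edges weigh 1; a regular-user edge weighs 1 + 1/n_u,
        # where n_u counts the user sequences making this transition
        return 1.0 if expert else 1.0 + 1.0 / n_u

The construction algorithm described next consumes such sequences and merges them, node by node, into the data-slice graph.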
At the point of logging a completed user sequence, we reformulate it, with two goals in mind. First, we make sure that all the logged sequences are formulated “at the same level of granularity.” Toward this goal, we make each sequence detailed enough so that each edge in the output sequence corresponds to a single operation in the Navigation Algebra of Section <ref> (see Fig. <ref> for an illustration of the outcome). The second goal is to mark, in each sequence, each node that is interesting under the given interestingness measure; see Sections <ref> through <ref> for a discussion. The overall algorithm for this reformulation of user sequences is straightforward.

Suppose now that we have selected from the log all the sequences that are to be included in the data-slice graph that we are constructing. (We discuss potential selection criteria in Section <ref>.) We begin the construction by declaring one arbitrary selected sequence as the (initial) data-slice graph. We then keep adding the other selected sequences to the graph one at a time, by combining each node in the current sequence with some node in the graph, as long as the two nodes are the same in the D part of their V = ⟨ D, S ⟩ representation. That is, we combine a node in a user sequence with a node in the graph if and only if the D parts of these nodes are the same; we store with each resulting node as many visual (S) specifications as we had in all the nodes that we have combined. If, on the other hand, for a node n in the sequence being added there are no nodes in the data-slice graph that have the same D part as n, we just add n as a new node in the graph. For each node we keep the maximum interestingness amongst all the sequences in which this node appeared. Once we have merged all the nodes of a sequence with the graph in this manner, we add to the graph all the edges belonging to the sequence being added. In the process, if the sequence being added is an expert sequence, we re-weight its edges as described in the discussion of the Rank Data Slices algorithm in Section <ref>.

Output and Correctness: The output of the overall graph-construction algorithm is a data-slice graph constructed as described above. By definition of the algorithm, its output does not depend on the order in which the selected input sequences are processed and merged with the graph. The construction can be done either in the batch fashion or with the graph being enhanced over time in an incremental fashion, with the addition of one user sequence at a time as needed.

§.§ Recommendation and Prediction Systems

We now discuss possible criteria for selecting logged user sequences for entry into the data-slice graph.

Recommendation Systems: One criterion could be to include all the sequences from the log that are associated with the task type of interest. (We consider two tasks on the same data set to be of the same type if they differ only in the filtering criteria. E.g., we declare to be of the same type the tasks “find all the magnitude-outlier earthquakes in the world” and “find all the magnitude-outlier earthquakes in Central America” on the data set <cit.>; see Section <ref>.) In this case, there is no need to mark user sequences as expert, and thus the entire process of constructing both the log and the data-slice graph as described in Section <ref> can be fully automatic. We call such a data-slice graph a recommendation graph; the overall DataSlicer system will function as a recommendation system in this case.
The reason is, in this case we have no information on which nodes in the graph would be the most helpful to the users in prominently featuring correct solutions to their task. In working with such a graph, the users will possibly “upvote” over time those graph nodes that are more helpful to them in solving their task. This “upvoting” process is sound, as we assume (Section <ref>) that each user can recognize correct solutions once they have been presented to her prominently in some visualization of the data. (The “upvoting” functionality can be easily added to the ranking algorithm of Section <ref>.) The resulting graph nodes can then be recalibrated automatically into more interesting nodes.

Prediction Systems: We now consider the case where the help of domain experts is available, or perhaps even sought after, as would be the case in mission-critical applications. Recall that the log administrator can mark some of the sequences to be logged as coming from domain experts. This can be done in case one or more experts on the domain, task type, and data set are involved in solving tasks of this type for the benefit of the user community; the community could be employees of a certain company, analysts using a certain product, and so on. In this case, the process of constructing the graph is the same as before (see Section <ref>), with expert nodes and edges being marked explicitly as such in the construction. When the DataSlicer system uses a data-slice graph constructed using expert sequences, we refer to this mode of operation as “prediction mode,” and to the system as a “prediction system.” Indeed, domain experts are expected to know how to solve effectively and efficiently tasks of the given type on the data set, and nodes and edges generated in the graph by their solutions are expected to help the community in solving tasks of the same type more so than sequences created by regular users. Note how our algorithm of Section <ref> incorporates into a data-slice graph and automatically reconciles potentially different approaches of multiple experts to solving the same task. As a result, sequences coming from multiple experts get transformed into multiple solution paths in the graph.

§ PUTTING IT ALL TOGETHER

Fig. <ref> depicts a high-level overview of the architecture of our system. In this section we outline it component by component, and then discuss the scalability and implementation.

Front End: The front end of the system can be any visualization tool, as long as it can issue appropriate data-specification queries on the data-set store and visualize the answers, and also has a means of communicating its operations to other software. Some commercial visualization systems make available logs of their operations; we have implemented DataSlicer with a commercially available front-end tool, in such a way that all the DataSlicer communication with the front end is done through such logs, as explained in the next paragraph.

Interface: The DataSlicer interface is the means to connect with the front-end visualization tool. The interface is in charge of the following two main tasks: First, it provides a way to obtain and understand logs of the system, to enable extracting from the logs information about previous-user sessions. This part of the interface is called the log parser; it also maintains the current user's current visual specification, as well as the data specifications returned by the ranking algorithm of Section <ref>.
Second, once the system has recommended a set of data specifications, the interface visualizes them and presents them back to the user. To create these visualizations, we maintain the current user's previous visualization preferences and use them wherever possible to visualize the recommended data specifications. For those recommended data specifications that cannot be visualized using the current user's visual preferences, the system uses default visualization rules. Because of the closed architecture that many commercial visualization systems opt to implement, for our experiments we had to implement this second task in a semi-manual way.

Data-Slice Graph: The data-slice graph for the given task type and data set is physically stored as a separate database. We do not allow for any direct updates to the data-slice graph. Instead, to augment the data-slice graph with more information we set up separate system sessions, where past user sequences are provided to the log parser. During those sessions, the log parser enhances the existing data-slice graph with a new set of sequences, or creates a new data-slice graph from scratch, as detailed in Section <ref>.

Back End: The back end of the system is the part that is in charge of producing recommendations for users. It comprises the Match and Rank algorithms, as described in Section <ref>.

Scalability: In the DataSlicer architecture, visualizations are constructed for users by separate front-end visualization software, which sends to the data store queries based on the data slices, and then visually postprocesses the query answers. Thus, in the overall DataSlicer system, the processing of data-slice queries is decoupled from executing the Match and Rank algorithms of Section <ref> on the data-slice graphs. Further, data-slice graphs are constructed based on task-exploration sequences, and thus on the structure rather than on the contents of the data set being explored. Thus, the size of a data-slice graph does not depend on the number of tuples of the data set, and the graph does not need to be modified as the contents of the data set change over time. On the other hand, the size of a data-slice graph is directly proportional to the number of user sequences that it captures, and the Match and Rank algorithms clearly run in at most linear time with respect to the size of the graph. Addressing the issue of scalability of Match and Rank in the number of user sequences in the data-slice graph is a direction of our current work.

The Implementation for the Experiments: The system used for the experiments reported in Section <ref> has been built using the Java framework and compiled using JDK 1.8. To store the data-slice graphs for the experiments, we used MongoDB version 2.2. We worked with a commercial visualization tool; we can support working with any visualization tool, but for each different visualization tool, a different DataSlicer interface needs to be built. (This includes the log parser and the connection that presents visualizations back to the user.)

§ EXPERIMENTAL RESULTS

To evaluate DataSlicer's recommendation performance, we conducted a set of controlled experiments. The results were evaluated in terms of the measures of (see <cit.>) participant speed, understood as the average number of data-specification steps taken to find a correct visualization for the task, as well as of result accuracy, understood as the degree to which the participant's solution is close to the correct solution.
(In our experiments, the correct solutions were determined as part of the experimental setup.) Following the experiments, each participant completed a questionnaire to capture their perception of: (1) the difficulty of the assigned task, (2) the correctness of their solution, (3) the correctness of the system's solution, and (4) the overall usefulness of DataSlicer. Both the statistical and questionnaire results were positive. Specifically, the results suggest that DataSlicer provides technically correct visualizations and, perhaps more importantly, rapidly directs participants to a correct visualization, potentially improving their performance over time. Due to the page limit, some of the discussions in this section are omitted. All of the omitted information can be found in the full version <cit.> of this paper.

§.§ The Procedure

We conducted four sets of experiments involving 48 human participants, with 12 participants randomly assigned to each of the four separate groups. The participants were graduate students ranging in age from 21 to 34, with 31 males and 17 females, each with normal or corrected-to-normal vision. Each of the 48 participants was first trained to work with our choice of front-end visualization software, and was then given a task to complete. The tasks focused on the common data-analytics concepts of finding outliers and general data trends. After the initial training, each participant was asked to complete their assigned task without using DataSlicer. The resulting log files were analyzed for comparison with DataSlicer's recommended “correct” visualization. Next, the participants used DataSlicer to find additional solutions for the same task, on the same data set. We then compared the accuracy and speed of the participants' task completion with and without access to DataSlicer. The participants concluded their session by providing feedback via a questionnaire (see <cit.>).

The data sets used in the experiments are summarized in Table <ref>, and the experimental results are given in Table <ref>. Please note that the data sets (Table <ref>) were small in size. Still, we found (Table <ref>) that our human participants had difficulty completing the assigned tasks even on these small data sets. Presumably, increasing the number of observations would further degrade the users' unassisted performance.

§.§ The Tasks

Each participant was asked to perform one of four different tasks, both with and without assistance from DataSlicer: (1) locating spatial outliers in an earthquakes data set <cit.>; (2) locating data outliers in a baseball data set <cit.>; (3) locating outlier patterns and trends in an economic data set <cit.>; and (4) recognizing the general trends in the (same) data set <cit.>. The experiments were designed to cover common analytical tasks performed across a wide range of data domains; the tasks and data sets used in the experiments are as provided by <cit.>. The expert sequences for each task were generated and validated as part of the experimental setup. The experts' log files were retrieved from the front-end visualization tool, parsed, and integrated into DataSlicer as discussed in Section <ref>.

Task 1: Spatial Outliers. This task used an earthquakes data set <cit.> containing the location of 8,289 earthquakes with magnitude 6 or greater throughout the world, from 1900 to 2013 (Table <ref>). The participants were asked to find places (locations) on the map that contain earthquakes with either: (1) outlier magnitudes; or (2) outlier numbers of occurrences.
(The definitions of outliers, via inter-quartile ranges, are “as expected” and can be found in <cit.>.)

Task 2: Local Data Outliers. This task used a baseball data set <cit.> containing information on 45 baseball players from the 2012 Major League Baseball season (Table <ref>). The participants were asked to find the data points for players that were outliers based on a specific position or type. E.g., a participant could look for outlier players at the shortstop position by finding all shortstop players, then searching for outliers within that subgroup. If a data point contained any attribute that was an outlier relative to the other players in the subgroup (hence the name “local data outliers”), then that player would be reported as an outlier. (The definitions of outlier values, via inter-quartile ranges, are “as expected” and can be found in <cit.>.)

Task 3: Outliers in Economic Patterns. This task used a World Bank indicators data set <cit.> containing 11 economic, health, and population attributes for 216 countries for the years 2000–2010 (Table <ref>). The participants were asked to identify the top eight countries in terms of average exports, then determine which of these countries displayed an outlier pattern in terms of export statistics over the given years. Outliers are identified by differences in the direction of the slope of their trend lines versus the overall norm for a given attribute.

Task 4: General Economic Patterns. This task used the same data set <cit.> as Task 3. The participants were asked to identify a visualization that showed the similarities and dissimilarities between the export and import trends for the top country in the urban-population category over the years 2000 to 2010.

§.§ Expert Solutions

We now discuss the steps that were used by experts to solve tasks 3–4. (Due to the page limit, expert solutions for tasks 1–2 can be found in the full version <cit.> of the paper; Fig. <ref> and Fig. <ref> show the respective visualizations obtained by expert users to present the answers to the tasks.)

Task 3. Identifying export-pattern outliers in the World Bank indicators data set <cit.> involved two stages. First, the top eight countries in terms of average exports were filtered by setting a lower export bound to include only eight countries. Next, a line-graph visualization of each country's exports over the years 2000 to 2010 was generated. The countries whose trend lines deviated in slope from the norm (i.e., the trend lines that did not follow the ascending or descending pattern of the norm) were deemed to be outliers (Fig. <ref>).

Task 4. Recognizing general patterns in import and export data for the top urban-population country in the World Bank indicators data set <cit.> involved two stages. First, the top urban-population country in 2000–2010 was identified by setting a lower bound on urban population as a filter. Next, a line diagram was generated on imports and exports over these years. The resulting visualization contains the top country's trends for both imports and exports (Fig. <ref>).

§.§ The Results

The average results for accuracy (either the number of solutions found or the indicator of whether the single correct solution was found) and for speed (the number of query steps performed), both without and with assistance from DataSlicer, are detailed in Table <ref>.
Based on the average values in Table <ref>, the accuracy of user solutions for all tasks is at least 1.84 times better with DataSlicer than without it, with an average of 5.09. Moreover, the speed in obtaining the final visualization is at least 3 times better with DataSlicer than without it, with an average of 6.34.

We used Welch's analysis of variance (ANOVA) <cit.> to search for significant differences between the participant performance with and without assistance from DataSlicer. Based on this analysis, we determined that in each of the four tasks, the participants were in statistically significant ways both faster[Increased speed here means that fewer data-specification operations were required with DataSlicer than without, to identify a correct visualization.] and more accurate[Better accuracy here means that more outliers were located with DataSlicer than without, and general trends were located with DataSlicer but not without.] with help from DataSlicer than without the help. (Due to the page limit, the report on the detailed statistics is omitted from this paper; the report can be found in the full version of the paper <cit.>.)

Based on these results, we conclude that DataSlicer allows participants to find statistically significantly more outliers and trends, in significantly fewer data-specification steps, than unaided exploration. The tasks assigned to the participants include spatial outliers, local outliers, trend outliers, and general trends, which represent common analytic tasks on real data. Thus, the improved accuracy and speed in our experiments suggest better accuracy and speed for real-world data analysis.

The questionnaire results (see <cit.>) were also positive. On a scale of 1 to 7, with 1 being lowest and 7 highest, the participants rated the usefulness of DataSlicer as 5.44, on average, and the accuracy of DataSlicer as 5.94, on average. The participants were more confident about their answers with DataSlicer than without (5.88 versus 5.46, on average).

§ CONCLUSIONS

Searching for outlier data elements, data patterns, and trends are common and critical tasks during visual analytics. The value of visualizations is in their offering the ability to present data in ways that leverage a user's domain expertise, knowledge of context, and ability to manage ambiguity that fully automated systems cannot. At the same time, users are often overwhelmed by the sheer volume of data (even in small data sets such as that <cit.> of experimental task 1 in Section <ref>), which may prevent them from understanding even basic properties of their data sets. This becomes particularly important in situations where the data set is large.

In our experiments with four task types designed to be representative of real-time exploration and discovery, DataSlicer significantly improved both the accuracy and speed for identifying spatial outliers, data outliers, outlier patterns, and general trends. The system quickly predicted what a participant was searching for based on their initial operations, then presented recommendations that allowed the participants to transform the data, leading them to identification of the desired solutions. Although our data sets were moderate in size, the human participants had difficulty completing the assigned tasks on the data. Presumably, increasing the size of data would further degrade their performance, and therefore strengthen the value of using DataSlicer.
As discussed in Section <ref>, our predictive sequence comparisons are relatively insensitive to data-set size, depending most directly on the number of expert sequences to match against. In the scenarios that we have tested, larger data sets would lead to more target observations (e.g., outliers identified), but not to more steps required to find the targets. In this way, we address an important goal of scalability: With predictions based on user-generated sequences, the prediction cost is based on the number of sequences and sequence length, and not on data-set size.

We have run separate preliminary experiments with a “recommendation” data-slice graph involving only regular-user sequences from our original experiments with task 1 of Section <ref>. The outcomes, discussed in <cit.>, were far from satisfactory, as no graph nodes were of significant help to users in solving the task with DataSlicer. This confirms the intuition that such tasks are very difficult to solve for users who are not experts in their fields, therefore reinforcing the desirability of constructing data-slice graphs using expert sequences. It remains to be seen if recommendation graphs can be useful tools for simpler tasks or with significantly larger user bases.
On the minimum output entropy of random orthogonal quantum channels

MF: Yamagata University, 1-4-12 Kojirakawa, Yamagata, 990-8560 Japan, fukuda@sci.kj.yamagata-u.ac.jp
IN: Zentrum Mathematik, M5, Technische Universität München, Boltzmannstrasse 3, 85748 Garching, Germany and CNRS, Laboratoire de Physique Théorique, IRSAMC, Université de Toulouse, UPS, F-31062 Toulouse, France, nechita@irsamc.ups-tlse.fr

We consider sequences of random quantum channels defined using the Stinespring formula with Haar-distributed random orthogonal matrices. For any fixed sequence of input states, we study the asymptotic eigenvalue distribution of the outputs through tensor powers of random channels. We show that the input states achieving minimum output entropy are tensor products of maximally entangled states (Bell states) when the tensor power is even. This phenomenon is completely different from the one for random quantum channels constructed from Haar-distributed random unitary matrices, which leads us to formulate some conjectures about the regularized minimum output entropy.

§ INTRODUCTION

One of the most important questions in quantum information theory is to determine the optimal rate of transmission of classical information through noisy quantum channels. Unlike its classical counterpart, no closed formula has been found yet for the classical capacity of quantum channels. Since the capacity is defined as the maximum rate at which classical information can be sent reliably over the channel, in such a way that the probability of error approaches zero as the length of the codes goes to infinity, the capacity C(·) of a quantum channel Φ naturally has an asymptotic formula <cit.>

C(Φ) = lim_r →∞ 1/r χ(Φ^⊗ r),

where χ(·) is the Holevo capacity. Here, we assume that the errors appearing in the transmission of information are independent along the uses of the quantum channel Φ; this independence is represented by the tensor power in the formula.

For some classes of channels, such as depolarizing channels <cit.>, entanglement breaking channels <cit.>, Hadamard channels <cit.>, and unital qubit channels <cit.>, the above formula (<ref>) can be simplified. This is a consequence of the following additivity property, proved in the above cited papers: for any r ∈ℕ,

χ(Φ^⊗ r) = r χ(Φ).

Additivity for the Holevo capacity yields a closed formula (called a single-letter formula) for the classical capacity of such channels: C(Φ) = χ(Φ).

However, the above simplification does not hold for all quantum channels. In a breakthrough paper <cit.>, Hastings showed violation of additivity for another quantity, the minimum output entropy, which implies that (<ref>) does not hold for some quantum channels. These two concepts of minimum output entropy and Holevo capacity are a priori different; the former only cares about single output states, while the latter deals with ensembles of outputs (see Section <ref> for the exact definitions). However, prior to Hastings' work, Shor showed <cit.> that the additivity properties for those two quantities are globally equivalent to each other, allowing the translation of counter-examples from one setting to the other.

In this paper, we focus on the minimum output entropy S_min(Φ^⊗ r), which has a close conceptual connection to χ(Φ^⊗ r). We inquire what kind of input states will minimize the output entropy for randomly chosen quantum channels. We explain briefly our methodology in three main points.
First, we choose to focus on random quantum channels. The interest in the study of random quantum channels comes mainly from the fact that, to date, violation of additivity has been proved only through random techniques (typically with random unitary quantum channels generated by random unitary matrices), see <cit.>. Non-random counter-examples have been obtained only for p-Rényi minimum output entropies, see <cit.>.

Second, our main results concern random orthogonal quantum channels. As is explained in Section <ref>, any quantum channel can be dilated to a unitary closed evolution on a larger space. In this work, we only consider the case where the closed dynamics comes from an orthogonal rotation. The reason for this choice is that it allows us to consider identical copies of a random quantum channel, whereas if one uses the more general unitary evolutions, then one needs to take pairs of a channel and its complex conjugate to witness additivity violations:

S_min(Φ⊗Φ̅) < S_min(Φ) + S_min(Φ̅),

where the complex conjugation is applied to the unitary matrix which defines the channel Φ. To translate this result into a violation inequality for two copies of the same channel,

S_min(Φ^⊗ 2) < 2 S_min(Φ),

one needs to restrict oneself to the real case, where the complex conjugate does not make any difference (unless one employs a particular symmetrization operation, see <cit.>).

Third, we shall fix a sequence of input states, and study the asymptotic behavior of the output states. In order to obtain the exact value of the minimum output entropy, one has to optimize over all input states for a fixed realization of the random quantum channel, but our current techniques do not allow this setting. This is indeed a drawback of our method, but in this setting we can obtain quite precise results on the possible outputs in the asymptotic limit. The current setting, where a universal, channel-independent encoding is considered, is related to the coding theory for compound quantum channels, see e.g. <cit.>.

Our main results (Theorem <ref> and Corollary <ref>) can be informally stated as follows. Consider random quantum channels Φ_n obtained by partial-tracing the action of Haar-distributed random orthogonal matrices, where n is the system dimension. Then, among fixed sequences of input states, the ones achieving minimum output entropy (asymptotically, as n →∞) for the channels Φ_n^⊗ 2r are tensor products of r maximally entangled states (Bell states).

The paper is organized as follows. In Sections <ref> and <ref> we recall, respectively, some basic notions and facts from quantum information theory and from the combinatorial theory of permutations and pairings. In Section <ref> we present the theory of invariant integration over the orthogonal group, using the graphical tensor notation. We then discuss in Section <ref> the model of random quantum channels we are studying. Sections <ref> and <ref> are the technical core of the paper, in which we characterize the asymptotic output states for an arbitrary fixed sequence of inputs, and then optimize over input sequences. Finally, we discuss our results and a few conjectures in the closing Section <ref>.

Acknowledgement. We would like to thank the referees for their very helpful comments, which helped improve the quality of the presentation. I.N.'s research has been supported by a von Humboldt fellowship and the ANR project StoQ ANR-14-CE25-0003-01. M.F. was financially supported by JSPS KAKENHI Grant Number JP16K00005. I.N. and M.F.
are both supported by the PHC Sakura program (project number: 38615VA), implemented by the French Ministry of Foreign Affairs, the French Ministry of Higher Education and Research, and the Japan Society for the Promotion of Science. Both authors acknowledge the hospitality of the TU München, where this research was conducted.

§ BASICS FROM QUANTUM INFORMATION THEORY

We review in this section some basic definitions and facts from quantum information theory. Some excellent references on the subject are <cit.> and <cit.>. A quantum state is a positive semidefinite matrix with unit trace; we denote the set of quantum states by

ℳ_d^1,+(ℂ) := {ρ∈ℳ_d(ℂ): ρ≥ 0 and Tr ρ = 1}.

Rank one projections ρ = xx^* (here, x ∈ ℂ^d, ‖x‖ = 1) are the extremal points of the convex body of quantum states. In the case of bipartite composite systems, the state space is the tensor product [ℳ_d_1(ℂ) ⊗ ℳ_d_2(ℂ)]^1,+. Of particular importance is the maximally entangled state

ω̂ = d^-1 ΩΩ^* ∈ ℳ_d^2^1,+(ℂ),

which is also called the Bell state. Here,

ℂ^d ⊗ ℂ^d ∋ Ω := ∑_i=1^d e_i ⊗ e_i

is a vector of norm √(d) (hence the normalization factor d^-1 in the formula for ω̂). We denote by ω = ΩΩ^* the un-normalized version of ω̂. One can extend, using functional calculus, the notion of (Shannon) entropy to quantum states:

S(ρ) = - Tr ρ log ρ,

a quantity which is called the von Neumann entropy of the quantum state ρ.

Quantum channels are the most general transformations of quantum states allowed by the laws of quantum mechanics. Mathematically, quantum channels are completely positive, trace preserving maps between two matrix algebras (remember that we are concerned here only with finite-dimensional quantum systems). By the celebrated Stinespring dilation theorem <cit.>, all quantum channels Φ: ℳ_d(ℂ) → ℳ_k(ℂ) can be obtained as

Φ(X) = [id ⊗ Tr](VXV^*),

where V: ℂ^d → ℂ^k ⊗ ℂ^n is an isometry, and n is a parameter (called the ancilla dimension) which can be taken to be n = dk.

As explained in the introduction, quantum Shannon theory is concerned with information transmission tasks in the quantum world. One of the fundamental information processing protocols is the transmission of classical information through a noisy quantum channel. The classical capacity of a quantum channel Φ is defined as the optimal rate (# bits transmitted) / (# uses of channel), assuming that the probability of successfully decoding the transmitted information approaches one. The mathematical theory was developed in <cit.> and <cit.>; see also <cit.> for a textbook presentation. The definition of the classical capacity of a given quantum channel Φ is

C(Φ) = lim_r →∞ 1/r χ(Φ^⊗ r),

where χ is the Holevo capacity of Φ given by

χ(Φ) = max_{p_i, ρ_i} S(Φ(∑_i p_i ρ_i)) - ∑_i p_i S(Φ(ρ_i)),

where the maximum is taken over all ensembles of probability weights p_i and input quantum states ρ_i (actually, ensembles of size d^2, where d is the dimension of the input space of Φ, are enough). The question whether the quantity χ is additive, i.e.
Shor has shown in <cit.> that the additivity of χ is equivalent to the additivity of a much simpler quantity, the minimum output entropy
S_min(Φ) = min_ρ∈ℳ_d^1,+(ℂ) S(Φ(ρ)).
Much of the work on the additivity problem was about the quantity S_min, proving either that additivity holds for particular classes of channels, or providing counter-examples (see the discussion and references in Section <ref>). The focus of the current paper is to understand, for a random orthogonal quantum channel Φ, how the additivity S_min(Φ^⊗ r) = r S_min(Φ) is violated and to find input states achieving S_min(Φ^⊗ r).

§ COMBINATORIAL ASPECTS OF PERMUTATIONS AND PAIRINGS

As the reader shall see in the next section, the theory of invariant integration over the orthogonal group 𝒪(d) is intimately connected to the combinatorial theory of pairings and permutations. We gather in the current section the necessary definitions and basic facts from combinatorics, as well as some useful lemmas. We denote by 𝒮_r the symmetric group on r elements. For a permutation α∈𝒮_r, we denote by #α the number of its cycles (including fixed points). The quantity |α| = r - #α is called the length of α, and it can be shown to be equal to the minimal number of transpositions that multiply to α. Also, |α| is the distance between α and the identity permutation id∈𝒮_r inside the Cayley graph of 𝒮_r generated by all transpositions. Permutations α,β,γ∈𝒮_r satisfy the triangle inequality |αβ^-1| ≤ |αγ^-1| + |γβ^-1|; when equality holds, we say that γ is on a geodesic connecting α and β, and we write α - γ - β.

We write 𝒮̃_2r for the set of products of r disjoint transpositions. The set 𝒮̃_2r is in bijection with the set of pairings of [2r]:={1, 2, …, 2r}. To any permutation α∈𝒮_r, we associate an unoriented graph G_α, which has vertex set V = [r] and edge set E = {{i, α(i)}: i ∈ [r]}. It is obvious that each vertex has degree 2 (a loop at a vertex contributes degree 2 to that vertex) and that the cycles of α are in bijection with the connected components of G_α. In particular, it holds that G_α has #α connected components. We investigate next a similar setting, where the permutation is replaced by a pair of pairings. To a pair (α,β) of pairings of the set [2r], encoded by permutations α,β∈𝒮̃_2r, we associate an unoriented graph G_α,β having vertex set V=[2r] and edge set given by
E = {{i, α(i)}: i ∈ [2r]}∪{{i, β(i)}: i ∈ [2r]},
with the convention that we allow multiple (in our case, at most 2) edges between two vertices. The following lemma is implicit in <cit.>.

The number of connected components of the graph G_α,β is #(αβ)/2 = r - |αβ|/2.

First, note that #θ = 2r - |θ| for θ∈𝒮_2r. Indeed, choose γ∈𝒮_2r so that γθγ^-1 is non-crossing; this implies that
#θ = #(γθγ^-1) = 2r - |γθγ^-1| = 2r - |θ|,
based on the well-known fact on non-crossing permutations <cit.>. Next, we count the number of connected components of G_α,β for α,β∈𝒮̃_2r. To do so, we analyze the connected component which includes 1. Suppose a number, say m, is connected to 1 in the graph G_α,β. Then, we have the following two exclusive cases:
1 ↦β(1) ↦αβ(1) ↦βαβ(1) ↦…↦ m
1 ↦α(1) ↦βα(1) ↦αβα(1) ↦…↦ m,
i.e.
we can reach m by applying α and β in turn because both pairings are involutions:
α^2 = id = β^2.
Hence, we have now identified the connected component which includes 1 as a disjoint union of two sets of vertices:
{(αβ)^l(1): l ∈ℤ}⊔{(αβ)^l α(1): l ∈ℤ}.
Indeed, we have
βα = β^-1α^-1 = (αβ)^-1 and β = βαα = (αβ)^-1α.
Hence, a connected component in the graph G_α,β always consists of two loops generated by αβ(i) and βα(i) for some i ∈ [2r], so that the number of connected components is #(αβ)/2. In fact,
α(1) = (αβ)^l α(1) = α(αβ)^-l(1) ⇔ (αβ)^l(1) = 1,
so the two loops have the same length. This completes the proof.

To understand the proof more intuitively, see Figure <ref>. All numbers connected to 1 are represented by black and white dots, where from left to right 1↦β(1) ↦αβ(1) ↦…↦ (αβ)^l(1) = 1 for some l. The left part of (<ref>) corresponds to the black dots and the right part to the white dots. Note that α(1) = β(αβ)^l-1(1) = (βα)^l-1β(1) and those arrows represent applications of αβ.

§ INVARIANT INTEGRATION OVER THE ORTHOGONAL GROUP

Since the technical core of the paper consists of moment computations for random, Haar-distributed orthogonal matrices, we review in this section the Weingarten formula for averaging over the orthogonal group. Following the work of Weingarten <cit.>, the modern mathematical formulation was developed by Collins and Śniady in <cit.>; some further elements can be found in <cit.>. The orthogonal Weingarten formula provides a combinatorial expression for the average of a monomial in the entries of a Haar orthogonal matrix.

<cit.> For every choice of indices i_1, …, i_2r and j_1, …, j_2r, we have
∫_𝒪(n) U_i_1j_1⋯ U_i_2rj_2r dU = ∑_α, β∈𝒮̃_2r∏_s=1^2rδ_i_s,i_α(s)δ_j_s,j_β(s)Wg_n(α,β).
The odd moments vanish:
∫_𝒪(n) U_i_1j_1⋯ U_i_2r+1j_2r+1 dU = 0.

The Weingarten function Wg is a combinatorial function, which can either be seen as the matrix inverse of the loop counting matrix in the Brauer algebra or as a sum over Young diagrams, see <cit.>. The values of this function for r ≤ 4 can be found in <cit.>. In <cit.>, the authors also compute the leading order in the large-n asymptotic expansion of the orthogonal Weingarten function:
Wg_n(α,β) = (1+o(1)) n^-r-|αβ|/2Möb(α,β),
where Möb is the Möbius function that we define next (see <cit.>). The cycles of the permutation αβ come in pairs of equal length (see Lemma <ref>); denoting by p_1, p_2, … the lengths of these cycle pairs, define
Möb(α,β) := ∏_i (-1)^p_i-1Cat_p_i-1,
where Cat_p is the p-th Catalan number Cat_p = \frac{1}{p+1}\binom{2p}{p}.

In <cit.> and <cit.>, the authors introduced a graphical calculus for computing expectation values of expressions involving random unitary matrices and, respectively, random Gaussian matrices. We present next a natural extension of these ideas to integrals over the orthogonal group with respect to the Haar measure. We shall be brief in our exposition, since the procedure is very similar to the one in <cit.>, also described at length in <cit.>. We shall encode tensors (i.e. vectors, linear forms, matrices, bipartite matrices, etc.) by boxes having labels attached to them corresponding to the respective vector spaces. Empty labels are associated to duals of vector spaces (linear forms, or “inputs” of matrices), while filled labels correspond to primal spaces (that is, vectors, or “outputs” of matrices). Wires connect an empty label with a filled one of the same shape, corresponding to the same vector space. In other words, wires encode tensor contractions V^* × V →ℂ.
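Both the lemma and the Möbius function above lend themselves to a direct computational check. The following plain-Python sketch (our own illustration; the function names are ours) represents pairings as image lists, counts the connected components of G_α,β via the cycles of αβ, and evaluates Möb(α,β) from the paired cycle lengths.

```python
from math import comb

def compose(a, b):
    """Permutation product (a∘b)(i) = a[b[i]]; permutations as image lists."""
    return [a[b[i]] for i in range(len(a))]

def cycle_lengths(p):
    """Lengths of the cycles of a permutation given as a list of images."""
    seen, out = set(), []
    for i in range(len(p)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        out.append(length)
    return out

def components(alpha, beta):
    """# connected components of G_{alpha,beta} = #(alpha*beta)/2 (Lemma)."""
    return len(cycle_lengths(compose(alpha, beta))) // 2

def moebius(alpha, beta):
    """Moeb(alpha,beta): product over cycle pairs of alpha*beta of
    (-1)^(l-1) Cat_(l-1), where l is the common length of each pair."""
    lens = sorted(cycle_lengths(compose(alpha, beta)))
    assert lens[::2] == lens[1::2], "cycles must pair up (Lemma above)"
    catalan = lambda p: comb(2 * p, p) // (p + 1)
    prod = 1
    for l in lens[::2]:           # one representative per equal-length pair
        prod *= (-1) ** (l - 1) * catalan(l - 1)
    return prod

# Example: the pairings (12)(34) and (13)(24) of [4], zero-indexed.
alpha = [1, 0, 3, 2]
beta = [2, 3, 0, 1]
print(components(alpha, beta), moebius(alpha, beta))  # -> 1 -1
```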
Presented with a diagram 𝒟 (a collection of boxes and wires) containing boxes associated to a Haar-distributed random orthogonal matrix U ∈𝒪(n), we can interpret the Weingarten formula (<ref>) as a graph expansion corresponding to the sum over the pairings α and β. To each term in the sum we associate a new diagram 𝒟_α,β which is obtained by deleting the boxes corresponding to the random matrix U, and adding wires encoding the product of delta functions in (<ref>). For each pair (i,j) contained in α, a wire is added between each primal vector space (i.e. filled label) of the boxes corresponding to the i-th and the j-th matrix U. Similarly, wires are added between the empty labels, according to the permutation β. Assuming 𝒟 contains 2r U-boxes, we thus have
𝔼_U 𝒟 = ∑_α, β∈𝒮̃_2r𝒟_α,βWg_n(α, β).

Let us showcase the formula above using a simple example. Let A ∈ℳ_n(ℂ), and let us compute 𝔼_U UAU^⊤, for a Haar orthogonal matrix U ∈𝒪(n). Here, r=1, so there is only one possible pairing α = β = (12). The original diagram and the graph expansion are represented in Figure <ref>. We conclude that
𝔼_U UAU^⊤ = Tr(A) I_n Wg_n((12),(12)) = 1/nTr(A) I_n.

§ OUTPUT STATES FOR TENSOR POWERS OF RANDOM HAAR-ORTHOGONAL QUANTUM CHANNELS

We consider the following model of random quantum channels. We fix an integer k and a real number t ∈ (0,1), which are the parameters of the model. For each integer n, consider the random quantum channel Φ_n: ℳ_d_n(ℂ) →ℳ_k(ℂ), where d_n := ⌊ tkn ⌋ and
Φ_n(X) := [id_k ⊗Tr_n](V_nXV_n^⊤),
where V_n : ℝ^d_n→ℝ^k ⊗ℝ^n is a Haar-distributed random isometry. Note that although V_n is a real matrix, the matrix in (<ref>) is an element of ℳ_kn × d_n(ℂ). The random isometry V_n can be obtained by truncating a Haar-distributed random orthogonal matrix U_n ∈𝒪(kn).

We now investigate the sequence of random matrices which are output states of tensor powers of random Haar-orthogonal quantum channels, with some fixed sequence of input states. More precisely, given a fixed sequence of input states ρ_n = ψ_nψ_n^*, with ψ_n ∈ℂ^rd_n, ‖ψ_n‖=1, let Z(ρ_n) := Φ_n^⊗ r(ρ_n) ∈ℳ_k^r(ℂ). Our goal in this section will be to characterize the asymptotic behavior of the sequence of random matrices Z(ρ_n). In this setting, the parameters r,k,t are fixed.

The first result is a formula for the moments of the random matrices Z(ρ_n). Let p ≥ 1 be the order of the moment; we wish to compute 𝔼Tr Z(ρ_n)^p. We shall use the graphical orthogonal Weingarten formula from Section <ref>. We have depicted the diagram for Tr Z(ρ_n)^2, in the case r=3, in Figure <ref>. The diagram corresponding to the p-th moment contains p × r × 2 random orthogonal matrices U ∈𝒪(kn). We shall index these matrices by a triple [i,x,P], where
* the label i ∈{1, …, p} indicates the index of the copy of the matrix Z(ρ_n) the U box belongs to;
* the label x ∈{1, …, r} denotes the index of the channel Φ_n in the tensor power;
* the position label P ∈{L,R} indicates whether the box U appears on the “left” side of the picture or on the “right” side (i.e. the matrix U appears without or with a transposition in (<ref>)).
We introduce now two permutations which encode the initial wiring (tensor contractions) appearing in the diagram. To this end, we identify the set of integers {1, …, 2pr} with the set of triples [i,x,P] described above. We put
δ := ∏_i=1^p ∏_x=1^r ([i,x,L], [i,x,R])
γ := ∏_i=1^p ∏_x=1^r ([i,x,L], [i-1,x,R]).
In the second equation above, we abuse notation and write [0,x,P]:=[p,x,P] for any index x and position P.
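The simple example above is easy to verify numerically. The following sketch (our own illustration, relying on scipy.stats.ortho_group for Haar sampling) averages UAU^⊤ over Haar orthogonal matrices and compares the result with the Weingarten prediction Tr(A) I_n / n.

```python
import numpy as np
from scipy.stats import ortho_group

rng = np.random.default_rng(0)
n, samples = 4, 20000
A = rng.standard_normal((n, n))

# Monte Carlo average of U A U^T over Haar-distributed U in O(n).
acc = np.zeros((n, n))
for _ in range(samples):
    U = ortho_group.rvs(dim=n, random_state=rng)
    acc += U @ A @ U.T
avg = acc / samples

target = np.trace(A) / n * np.eye(n)   # Weingarten prediction
print(np.max(np.abs(avg - target)))    # small, shrinking like 1/sqrt(samples)
```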
It is important to notice that both permutations above are products of pr disjoint transpositions, so δ, γ∈𝒮̃_2pr. As we shall see, the permutations δ, γ encode the wirings corresponding to the partial trace (for each quantum channel) and, respectively, to the trace appearing in the moment of Z(ρ_n). The graphical formulation of the Weingarten formula for integrals over the orthogonal group 𝒪(kn) gives
𝔼Tr Z(ρ_n)^p = ∑_α, β∈𝒮̃_2pr𝒟_α,βWg_kn(α, β),
where the sum ranges over pairs (α, β) of pairings of the set of 2rp boxes containing the random isometry U; the permutation α is responsible for pairing the “outputs” of the boxes (corresponding to black labels), while β pairs the inputs (i.e. white labels). Let us compute explicitly the content of a given diagram 𝒟_α,β:
* Loops corresponding to the partial traces in the quantum channel. Since the original wiring of the boxes corresponding to these loops is encoded by the permutation δ, the contribution of these loops is n^#(δα)/2, by Lemma <ref>.
* Loops coming from the matrix multiplication, giving a total contribution of k^#(γα)/2 (for the same reasons as above).
* The contribution of the input state, let us call it f_β(ρ_n) for now.

Let us bound the contribution of the input state f_β(ρ_n). To this end, notice that f_β(ρ_n) = Tr[(ρ_n)^⊗ p M(β)], where M(β) ∈ℳ_d_n(ℂ)^⊗ pr is a matrix encoding the pairing β, having pr inputs corresponding to labels [i,x,L] and pr outputs corresponding to labels [j,y,R]; see Figure <ref> for an example. Let us define, for a pairing β∈𝒮̃_2q where q=pr, its number of bumps ♭(β) as the number of pairs inside β which connect elements on the R “side”. For the pairing β in Figure <ref>, we have ♭(β) = 1, since there is only one “bump” on the R side. It is obvious that the number of “bumps” on the L side is also ♭(β), and that, up to multiplying from the left and from the right with some unitary operators, the matrix M(β) is a tensor product of ♭(β) unnormalized maximally entangled states with the identity operator, up to rotations. In particular, we have ‖M(β)‖_∞ = d_n^♭(β), and thus, using Hölder's inequality, we conclude that
|f_β(ρ_n)| ≤ d_n^♭(β).

In order to get a better understanding of the number of bumps of a pairing, let us call a pairing τ transverse if it maps the L side to the R side and vice-versa. In other words, τ is transverse if for all (i,x) ∈ [p] × [r], τ([i,x,L]) = [*,*,R] and τ([i,x,R]) = [*,*,L]. Note that transverse pairings have zero bumps. We claim the following expression for the number of bumps of a given pairing β:

For β∈𝒮̃_2q,
2♭(β) = min_τ transverse |τβ|.
Moreover, the minimum is achieved if and only if τ = τ_1 ⊕τ_2. Here, τ_1 = ∏_i=1^2♭(r_i,l_i), where for 1≤ j ≤♭ each pair {r_2j-1,r_2j} or {l_2j-1,l_2j} supports a bump on the R or L side, respectively, and τ_2 = ∏_i=2♭+1^q(r_i,l_i) where (r_i,l_i) ∈β.

To prove our claim, we can assume without loss of generality that 2♭(β)=q, i.e., all 2q elements are supporting elements of bumps, and τ_2 = ∅. This is because for each transposition (r,l) ∈β, where r and l are from the R and L sides, respectively, we can restrict ourselves to transverse τ such that (r,l) ∈τ in the search for the minimum of |τβ|. To begin with, we prove ≤ in (<ref>). Consider the bumps on the R side and name the supporting elements in pairs by {r_1,r_2},…,{r_2♭-1,r_2♭}, where ♭ = ♭(β). Then, for a transverse τ∈𝒮̃_2q we have the following mapping of τβ: for 1≤ j ≤♭,
r_2j-1↦ l_2j, r_2j↦ l_2j-1,
for some distinct elements l_1,…,l_2♭ from the L side, i.e., τ(r_i) = l_i for 1 ≤ i ≤ 2♭.
Suppose τβ consists of disjoint cycles, say, c_1, …, c_m, so that
|τβ| = ∑_i=1^m (card(c_i)-1),
where card(c_i) is the cardinality of the cycle c_i. Here, we have m ≤ 2♭ based on the comment at the beginning of this proof. Now, each mapping in (<ref>) constitutes a part of some cycle. If c_i is related to k_i mappings in (<ref>), then card(c_i) ≥ 2k_i. This implies that
|τβ| ≥∑_i=1^m (2k_i - 1) = 4♭ - m ≥ 2♭.
The equality holds if and only if m=2♭ and card(c_i) = 2. In this case, the condition τβ(l_2j) = r_2j-1 implies that β(l_2j) = l_2j-1. This completes the proof.

Given 4m elements {l_1, …,l_2m,r_1,…,r_2m}, define two permutations in 𝒮̃_4m:
δ̂ = ∏_i=1^2m (r_i,l_i) = ∏_i=1^m (r_2i-1, l_2i-1)(r_2i, l_2i)
β̂ = ∏_i=1^m (r_2i-1, r_2i)(l_2i-1, l_2i).
Then, α̂∈𝒮̃_4m such that δ̂ - α̂ - β̂ is of the form
α̂ = ∏_i ∈Λ(r_2i-1, r_2i)(l_2i-1, l_2i) ∏_i∈[m]∖Λ(r_2i-1, l_2i-1)(r_2i, l_2i)
for some Λ⊆ [m]. Here we used the notation from (<ref>).

We decompose
{l_1, …,l_2m,r_1,…,r_2m} = ⊔_i=1^m {l_2i-1,l_2i,r_2i-1,r_2i}
and work on each component for the geodesic, because δ̂ and β̂ both respect this decomposition. Note that, at fixed i, the only elements on the geodesic are the restrictions of δ̂ and β̂ on the 4-element set:
dist((r_2i-1, r_2i)(l_2i-1, l_2i), (r_2i-1, l_2i-1)(r_2i, l_2i)) = 2,
while the intermediate permutations do not belong to 𝒮̃_4m. The proof is now complete, since each block of α̂ must be either of δ̂ type or of β̂ type.

With these ingredients in hand, and with the asymptotic formula for the Weingarten function from (<ref>), we can calculate the general term in the sum (<ref>) and upper bound its absolute value as follows (remember our notations for δ and γ in (<ref>)):
𝒟_α,βWg_kn(α, β) = n^#(δα)/2 k^#(γα)/2 f_β(ρ_n) Wg_kn(α,β),
and then
|𝒟_α,βWg_kn(α, β)| ≤ (1+o(1)) [n^#(δα)/2 k^#(γα)/2 (tkn)^♭(β) (kn)^-pr - |αβ|/2 |Möb(α,β)|].
By using (<ref>) in Lemma <ref>, the exponent of n (the only variable which grows) in the RHS of (<ref>) reads
#(δα)/2 + min_τ transverse |τβ|/2 - pr - |αβ|/2 = (min_τ transverse |τβ| - |δα| - |αβ|)/2 ≤ (min_τ transverse |τβ| - |δβ|)/2 ≤ 0,
where we have used the triangle inequality and the fact that the permutation δ is transverse.

To identify the leading order terms in (<ref>), we then try to ignore as many terms as possible by getting rid of terms which do not saturate the three bounds (<ref>), (<ref>) and (<ref>). Note that we consider the bound (<ref>) only asymptotically, as one can see below. First, the equality min_τ transverse |τβ| = |δβ| must hold in (<ref>). Since δ is transverse, Lemma <ref> shows that β must be of the form
β = ∏_B ([i_1(s),x_1(s),L],[i_2(s),x_2(s),L])([i_1(s),x_1(s),R],[i_2(s),x_2(s),R]) ×∏_B^c ([i_3(t), x_3(t),L],[i_3(t),x_3(t),R]).
In other words, β must be a product of symmetrical bumps and horizontal wires. Here, B ∈𝒞_p, and 𝒞_p is defined as a set of particular collections of transpositions:
𝒞_p = {{([i_1(s),x_1(s)], [i_2(s),x_2(s)])}_s=1^m: m ∈ [⌊ pr/2 ⌋], [i_j(s),x_j(s)] ≠ [i_l(t),x_l(t)] unless j=l and s=t}.
Also, we abuse notation by writing B^c to denote the fixed points in [pr] of all transpositions in B. Second, the equality in (<ref>) holds if and only if α lies on the geodesic between δ and β. This is equivalent, via Lemma <ref>, to the fact that α has the following form for some A ∈𝒞_p with A ⊆ B:
α = ∏_A ([i_1(s),x_1(s),L],[i_2(s),x_2(s),L])([i_1(s),x_1(s),R],[i_2(s),x_2(s),R]) ×∏_A^c ([i_3(t), x_3(t),L],[i_3(t),x_3(t),R]).
In other words, α consists of horizontal lines and a subset of the bumps of β.
Third, we discuss when the equality in (<ref>) is asymptotically saturated when p=2. To this end, we define ♭_in(β), the number of “non-trespassing” bumps for β defined in (<ref>). To this aim, we define
B_in = {([i_1,x_1], [i_2,x_2]) ∈ B: i_1 = i_2},
where “∈” means that the transposition on the left is one of the transpositions constituting B, so that we can set ♭_in(β) = |B_in|. Now, we need a lemma:

For β defined in (<ref>), we have the following bound for p=2:
|f_β(ρ_n)| ≤ d_n^♭_in(β).

Let ω_C be a maximally entangled state associated to C ∈𝒞_p, i.e. a tensor product of maximally entangled states, each of which is defined by a transposition in C (see Section <ref> for the definitions). Then, using the general “linearization trick”
Tr(XY^T) = Tr[ω(X ⊗ Y)ω],
we get
f_β(ρ_n) = d_n^♭(β)·Tr_B^c[(ω̂_B_in^* ⊗ω̂_B ∖ B_in^* ⊗ I_B^c) ρ_n⊗ρ_n (ω̂_B_in⊗ω̂_B ∖ B_in⊗ I_B^c)]
≤ d_n^♭(β)Tr_B_in⊗ B^c[(I_B_in⊗ω̂_B ∖ B_in^* ⊗ I_B^c) ρ_n⊗ρ_n (I_B_in⊗ω̂_B ∖ B_in⊗ I_B^c)]
= d_n^♭_in(β)Tr[Ψ^(1)(Ψ^(2))^T] ≤ d_n^♭_in(β),
where Ψ^(1) and Ψ^(2) are the reduced density operators of ρ_n on the first and second spaces, and we have used the trivial matrix inequality ω̂≤ I.

This means that we can reduce the candidates for leading order terms in (<ref>), and for writing purposes we define the set of non-trespassing bumps by
𝒞_p,in = {B ∈𝒞_p: B_in = B}.
Note that trivially 𝒞_1 = 𝒞_1,in. Then, finally, we can state the result giving the asymptotic moments of the sequence of random matrices Z(ρ_n). From here on, we identify α, β with A,B ∈𝒞_p.

For any given sequence of input states ρ_n,
1) all moments of Z(ρ_n) are expressed as
(1+o(1)) ∑_B∈𝒞_p, A ⊆ B k^#(γα)/2 +|A|-pr· t^|B|· g_B(ρ_n) · (-1)^|B|-|A|,
where g_B(ρ_n) = f_β(ρ_n)/(tnk)^|B| ≤ 1;
2) for the first and second moments of Z(ρ_n) one can replace 𝒞_p by 𝒞_p,in.

For pairings α and β as in (<ref>), resp. (<ref>), the Möbius function is given by (<ref>):
Möb(α,β) = (-1)^|B ∖ A| = (-1)^|B|-|A|.
Also note that ♭(β) = |B| for β in (<ref>). Neglecting the terms in (<ref>) which vanish according to the above discussions, the general moment can be written, except for the (1+o(1)) factor, as
∑_α, β as in (<ref>), (<ref>) k^#(γα)/2· (tk)^|B|· g_B(ρ_n) · k^-pr-|αβ|/2Möb(α,β) = ∑_B∈𝒞_p, A ⊆ B k^#(γα)/2 +|A|-pr· t^|B|· g_B(ρ_n) · (-1)^|B|-|A|,
which is the general formula we wanted. Moreover, we can replace 𝒞_p by 𝒞_p,in for p=1,2, based on Lemma <ref> and the remark following it.

Next, we calculate the average output state for a fixed input ρ_n. To this end, we introduce a useful notation before going on to our theorem. Define for A ∈𝒞_1
T_A^(k) := [⊗_{i,j}∈ A ω_ij] ⊗ [⊗_s ∉ A I_s],
where we denote by ω the (un-normalized) maximally entangled state ω = ΩΩ^* with Ω = ∑_i=1^k e_i ⊗ e_i ∈ℂ^k ⊗ℂ^k, see also Section <ref>. We write ω_ij for the operator ω acting on the copies i and j of the space ℂ^k. We also abuse notation so that s ∉ A means that s ∈ [r] stays fixed by the transpositions in A ∈𝒞_1. Then,
𝔼 Z(ρ_n) = (1+o(1)) M(ρ_n),
where
M(ρ_n) := ∑_B∈𝒞_1, A ⊆ B T_A^(k)· k^|A|-r· t^|B|· g_B(ρ_n) · (-1)^|B|-|A|.
Now we calculate “the first moment without trace”. To this end, we just replace k^#(γα)/2 in (<ref>) by T_A^(k). In fact, Tr T_A^(k) = k^#(γα)/2 = k^r-|A|, where γ = δ for p=1.

For a fixed sequence of input states (ρ_n)_n ≥ 1 we have the following convergence in probability:
‖Z(ρ_n) - 𝔼 Z(ρ_n)‖_2 → 0.
Using the second part of Theorem <ref>, the second moment of Z(ρ_n) is a sum indexed by sets B ∈𝒞_2,in.
For such a B, we write B = B_1 ⊕ B_2, where the two parts belong to the blocks with i=1,2, respectively, so that, using the notation from Theorem <ref>, we can factorize
g_B(ρ_n) = g_B_1(ρ_n) · g_B_2(ρ_n).
Then, the formula in (<ref>) with p=2, which represents the second moment, up to o(1) terms, changes into:
∑_B_1 ⊕ B_2∈𝒞_2,in, A_1 ⊕ A_2 ⊆ B_1 ⊕ B_2 k^#(γ(α_1 ⊕α_2))/2 +|A_1| +|A_2|-2r· t^|B_1|+|B_2|· g_B_1(ρ_n) · g_B_2(ρ_n) · (-1)^|B_1|+|B_2|-|A_1| -|A_2| = Tr[∏_i=1^2 (∑_B_i∈𝒞_1, A_i ⊆ B_i T_A_i^(k)· k^|A_i|-r· t^|B_i|· g_B_i(ρ_n) · (-1)^|B_i|-|A_i|)] = Tr[(M(ρ_n))^2] + o(1),
where α_i are defined by A_i, respectively. Then, Chebyshev's inequality shows for each ε >0
ℙ(‖Z(ρ_n) - 𝔼 Z(ρ_n)‖_2^2 ≥ε^2) ≤1/ε^2𝔼‖Z(ρ_n) - 𝔼 Z(ρ_n)‖_2^2 = 𝔼Tr[Z(ρ_n)^2] - Tr[M(ρ_n)^2] + o(1)/ε^2 = o(1)/ε^2.
This completes our proof of the convergence in probability.

For some models of random unitary channels, it is possible to show that similar convergence results hold almost surely, a stronger mode of convergence than the convergence in probability proven here. This is enabled by better controlling the error in equations such as (<ref>), up to O(n^-2) terms. This is one technical difference between random unitary and random orthogonal matrices: in the former case, the error in the approximation of the Weingarten formula (<ref>) is O(n^-2), while in the latter it is O(n^-1), see <cit.>.

§ OPTIMAL SEQUENCES OF INPUT STATES

Having computed in the previous section the asymptotic behavior of the outputs for a fixed sequence of input states, we turn now to the problem of finding the input sequences giving the outputs with least entropy (asymptotically). Our strategy is to show that for any sequence of input states, the outputs will lie, asymptotically, inside a fixed, deterministic set K_r,k,t. We shall then minimize the entropy for states inside this convex set K_r,k,t. We start by writing the expected value of an output state in a more compact form. In what follows we replace 𝒞_1 by 𝒫̂_2(r), the set of partial pairings on [r], because in this section the parameter r is more relevant. Starting from M(ρ_n) in (<ref>), we have
M(ρ_n) = ∑_A ⊆ B ∈𝒫̂_2(r) T_A^(k) t^|B| k^-r+|A| g_B(ρ_n) (-1)^|B|-|A| = ∑_B ∈𝒫̂_2(r)⟨T̃_B^(d_n), ρ_n ⟩∑_A ⊆ B t^|B| k^-r+|A| (-1)^|B|-|A| T_A^(k) = ∑_B ∈𝒫̂_2(r)⟨T̃_B^(d_n), ρ_n ⟩R̃_B^(k),
where the operators T̃_B^(d_n)∈ℳ_d_n^r(ℂ) and R̃_B^(k)∈ℳ_k^r(ℂ) for A,B ∈𝒫̂_2(r) are defined as follows:
T̃_B^(d_n) := d_n^-|B| T_B^(d_n) (= [⊗_{i,j}∈ B d_n^-1ω_ij] ⊗ [⊗_s ∉ B I_s])
R̃_B^(k) := [⊗_{i,j}∈ B t(k^-1ω_ij - k^-2I_ij)] ⊗ [⊗_s ∉ B k^-1 I_s] = ∑_A ⊆ B t^|B| k^-r+|A| (-1)^|B|-|A| T_A^(k),
where one can see the last equality via the binomial formula.

Note that equation (<ref>) is close to what we want: to express the output of the channel as a convex combination of simple quantum states. The problem here is that, although the scalars ⟨T̃_B^(d_n), ρ_n ⟩ are non-negative, the matrices R̃_B^(k) are not, in general, positive semidefinite. In fact, we have TrR̃_B^(k) = δ_B, ∅. In order to achieve our goal, we shall apply the Möbius inversion formula <cit.> to (<ref>). First, it is quite obvious that the Möbius function on the lattice 𝒫̂_2(r) is identical to the one for the lattice of subsets: if a partial pairing A is contained in another partial pairing B, then μ(A, B) = (-1)^|B|-|A|.
Hence, if we define
S̃^(k)_B := ∑_A ⊆ BR̃_A^(k),
Q̃_A^(d_n) := ∑_B ⊇ A (-1)^|B|-|A|T̃_B^(d_n),
we have, via the Möbius inversion formula,
R̃^(k)_B = ∑_A ⊆ B (-1)^|B|-|A|S̃_A^(k),
and we can rewrite (<ref>) as
M(ρ_n) = ∑_B ∈𝒫̂_2(r)⟨T̃_B^(d_n), ρ_n ⟩R̃_B^(k) = ∑_A ⊆ B ∈𝒫̂_2(r)⟨T̃_B^(d_n), ρ_n ⟩ (-1)^|B|-|A|S̃_A^(k) = ∑_A∈𝒫̂_2(r)⟨∑_B ⊇ A (-1)^|B|-|A|T̃_B^(d_n), ρ_n ⟩S̃_A^(k) = ∑_A∈𝒫̂_2(r)⟨Q̃_A^(d_n), ρ_n ⟩S̃_A^(k).
From (<ref>), we can actually obtain an explicit formula for the matrices S̃_B^(k):
S̃^(k)_B := ∑_A ⊆ BR̃_A^(k) = ∑_A ⊆ B [⊗_{i,j}∈ A t(k^-1ω_ij - k^-2I_ij)] ⊗ [⊗_s ∉ A k^-1 I_s] = [⊗_{i,j}∈ B t(k^-1ω_ij - k^-2I_ij) + k^-2I_ij] ⊗ [⊗_s ∉ B k^-1 I_s] = [⊗_{i,j}∈ B tk^-1ω_ij + (1-t) k^-2I_ij] ⊗ [⊗_s ∉ B k^-1 I_s] = [⊗_{i,j}∈ Bη_ij] ⊗ [⊗_s ∉ B k^-1 I_s],
where
η := tk^-1ω + (1-t) k^-2I ∈ℳ_k^2(ℂ)
is indeed a quantum state (i.e. a positive semidefinite matrix of unit trace); such states, convex mixtures between a maximally entangled state and a maximally mixed state, are called isotropic states in the quantum information theory literature. We now have all the ingredients to state the main result of this section.

Consider a sequence of random quantum channels Φ_n : ℳ_d_n(ℂ) →ℳ_k(ℂ) constructed from random Haar-distributed orthogonal matrices U_n ∈𝒪(kn), as in Section <ref>. Furthermore, assume that d_n ∼ tkn for some constant t ∈ (0,1) and define, for any r ≥ 1, the convex set
K_r,k,t := conv{S̃^(k)_B : B ∈𝒫̂_2(r)}⊆ℳ_k^r^1,+(ℂ).
Then, for any fixed sequence of input states ρ_n ∈ℳ_d_n^1,+(ℂ), the output states converge, in probability, to the convex body K_r,k,t: for all ε >0,
lim_n →∞ℙ[dist(Φ_n^⊗ r(ρ_n), K_r,k,t) > ε] = 0.
Note that K_r,k,t depends on t via (<ref>).

Let us fix a sequence of input states (ρ_n) and use the triangle inequality:
dist(Φ_n^⊗ r(ρ_n), K_r,k,t) ≤dist(𝔼Φ_n^⊗ r(ρ_n), K_r,k,t) + ‖Φ_n^⊗ r(ρ_n) - 𝔼Φ_n^⊗ r(ρ_n)‖_2.
We have shown in Theorem <ref> that the second term on the right hand side of the above inequality converges in probability towards zero; it is thus enough to show that the first term also vanishes as n →∞. From (<ref>), we have the following decomposition
𝔼Φ_n^⊗ r(ρ_n) = (1+o(1)) ∑_A∈𝒫̂_2(r)⟨Q̃_A^(d_n), ρ_n ⟩S̃_A^(k).
To finish the proof, we show next that the weights in the equation above are (asymptotically) non-negative and sum up to one. For the claim about the sum, note that
∑_A∈𝒫̂_2(r)Q̃_A^(d_n) = ∑_A ⊆ B∈𝒫̂_2(r) (-1)^|B|-|A|T̃_B^(d_n) = T̃_∅^(d_n) = I_d_n^r,
proving the claim. The other claim follows from <cit.>, where it was shown that the spectrum of the matrices Q̃_A^(d_n) is at distance O(1/n) from the set {0,1}. The reader should note that although the matrices Q̃^(d_n)_· are indexed by different combinatorial objects (partial pairings here and partial permutations in <cit.>), they encode the same linear operators and thus they have the same spectrum.

Let B_0 be a maximal partial pairing in 𝒫̂_2(r), i.e. a pairing consisting of ⌊ r/2 ⌋ pairs and, when r is odd, a singleton. Then, for any fixed sequence of input states ρ_n ∈ℳ_d_n^1,+(ℂ), the inputs
G^(d_n)_B_0 := [⊗_{i,j}∈ B_0 d_n^-1ω_ij] ⊗ [⊗_s ∉ B_0 d_n^-1 I_s] = d_n^2⌊ r/2 ⌋-rT̃^(d_n)_B_0
give output states having less entropy than the sequence of inputs ρ_n: for all ε >0,
lim_n →∞ℙ[H(Φ_n^⊗ r(ρ_n)) < H(Φ_n^⊗ r(G^(d_n)_B_0)) - ε] = 0.
In other words, the sequence of input states consisting of a tensor product of ⌊ r/2 ⌋ maximally entangled states and, when r is odd, a maximally mixed state yields the output sequence with the least asymptotic entropy.
By the theorem, the outputs belong, when n is large, to the set K_r,k,t. The extremal points of K_r,k,t are precisely the quantum states S̃^(k)_B, with B a partial pairing of [r]. Such an extremal state has von Neumann entropy
H(S̃^(k)_B) = |B| H(η) + (r-2|B|) log k,
where η is the bipartite quantum state defined in (<ref>); it has entropy strictly less than 2log k, more precisely
H(η) = h(t + (1-t)k^-2) + (k^2-1)h((1-t)k^-2),
where h(x) = -x log x. To finish the proof, we show that the input sequence G^(d_n)_B_0 produces the output sequence S̃^(k)_B_0. Indeed, from (<ref>), we have
𝔼Φ_n^⊗ r(G^(d_n)_B_0) = (1+o(1)) ∑_A∈𝒫̂_2(r)⟨Q̃_A^(d_n), G^(d_n)_B_0⟩S̃_A^(k) = (1+o(1)) ∑_A ⊆ B∈𝒫̂_2(r) (-1)^|B|-|A| d_n^2⌊ r/2 ⌋-r⟨T̃_B^(d_n), T̃^(d_n)_B_0⟩S̃_A^(k).
By direct inspection, and using the fact that B_0 is a maximal partial pairing, we have that (see also <cit.>)
d_n^2⌊ r/2 ⌋-r⟨T̃_B^(d_n), T̃^(d_n)_B_0⟩ = (1+o(1)) 1_B ⊆ B_0,
and thus
𝔼Φ_n^⊗ r(G^(d_n)_B_0) = (1+o(1)) ∑_A ⊆ B ⊆ B_0 ∈𝒫̂_2(r) (-1)^|B|-|A|S̃_A^(k) = S̃_B_0^(k),
finishing the proof.

§ DISCUSSION

In this work, using Weingarten calculus on the orthogonal group, we have shown that among fixed input sequences for a tensor power of a random orthogonal quantum channel, products of maximally entangled states achieve the smallest output entropy. We consider our results to be evidence toward the claim that such random channels do not violate (asymptotically, with high probability) the additivity relation. More precisely, for r ≥ 1 we conjecture that, almost surely for random orthogonal quantum channels such as the ones in Section <ref>,
lim_n →∞ S_min(Φ_n^⊗2r) ?= r lim_n →∞ S_min(Φ_n^⊗2).
For this conjecture we must refer to a sentence in <cit.>: “This two-letter additivity conjecture would enable us to restrict our attention to considering input states with a bipartite entanglement structure, possibly opening the way to computing the capacity for arbitrary channels”. Hastings thus conjectures the following additivity for quantum channels:
S_min((Ψ⊗Ψ̅)^⊗r) ?= r S_min(Ψ⊗Ψ̅).
In <cit.>, we have studied this question in the framework of the current work, but with random unitary quantum channels. There, we have shown that among a very large class of fixed input sequences, tensor products of maximally entangled states yield the outputs with least entropy. This is strong mathematical evidence supporting Hastings' conjecture. In the same direction, see <cit.> for considerations about upper bounds on the amount of additivity violations for random quantum channels.

Surprisingly, if we compare our calculations with the ones for unitary random quantum channels from <cit.>, we are inclined to conjecture that generically entanglement does not help to improve the minimum output entropy of tensor powers of random unitary quantum channels, while (only) bipartite entanglement helps for random orthogonal channels: almost surely,
lim_n →∞lim_r →∞1/r S_min(Ψ_n^⊗r) ?= lim_n →∞ S_min(Ψ_n) and lim_n →∞lim_r →∞1/r S_min(Φ_n^⊗r) ?= 1/2lim_n →∞ S_min(Φ_n^⊗2),
where Ψ_n and Φ_n are sequences of, respectively, unitary and orthogonal random quantum channels. We also conjecture that similar phenomena might occur for the Holevo capacity, and we hope that such results might shed light on capacity formulas. Indeed, according to <cit.>, certain random quantum channels satisfy a simple linear relation between their Holevo capacity and their minimum output entropy, while such a linear relation was initially observed in <cit.> for covariant channels.
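The entropy formula for the extremal states can be evaluated directly. The short sketch below (an illustration of ours, using natural logarithms as in the formula above, and our own function names) computes H(η) and H(S̃_B^(k)) for all possible numbers of pairs |B|, confirming that a maximal partial pairing minimizes the output entropy because H(η) < 2 log k.

```python
import numpy as np

def h(x):
    """h(x) = -x log x, with h(0) = 0."""
    return 0.0 if x <= 0 else -x * np.log(x)

def H_eta(k, t):
    """Entropy of the isotropic state eta = t k^-1 omega + (1-t) k^-2 I,
    from its eigenvalues t + (1-t)/k^2 (once) and (1-t)/k^2 (k^2 - 1 times)."""
    return h(t + (1 - t) / k**2) + (k**2 - 1) * h((1 - t) / k**2)

def H_extremal(b, r, k, t):
    """H(S_B) = |B| H(eta) + (r - 2|B|) log k for a partial pairing, |B| = b."""
    return b * H_eta(k, t) + (r - 2 * b) * np.log(k)

r, k, t = 4, 3, 0.5
for b in range(r // 2 + 1):
    print(b, H_extremal(b, r, k, t))
# The entropy decreases with b since H(eta) < 2 log k, so the maximal
# pairing b = floor(r/2) gives the minimum, as in the corollary above.
```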
Simultaneous detection of quantum oscillations from bulk and topological surface states in metallic Bi_2Se_2.1Te_0.9

Keshav Shrestha^a,*[^*Email address: keshav.shrestha@inl.gov], David E. Graf^b, Vera Marinova^c, Bernd Lorenz^d, and Paul C. W. Chu^d,e

^aIdaho National Laboratory, 2525 Fremont Ave., Idaho Falls, ID 83402, USA; ^bNational High Magnetic Field Laboratory, Florida State University, Tallahassee, FL 32310, USA; ^cInstitute of Optical Materials and Technology, Bulgarian Academy of Sciences, Acad. G. Bontchev Str. 109, Sofia 1113, Bulgaria; ^dTCSUH and Department of Physics, University of Houston, 3201 Cullen Blvd., Houston, Texas 77204, USA; ^eLawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, California 94720, USA

December 30, 2023
====================================================================================================

Shubnikov-de Haas (SdH) oscillations in metallic Bi_2Se_2.1Te_0.9 are studied in magnetic fields up to 35 Tesla. It is demonstrated that two characteristic frequencies determine the quantum oscillations of the conductivity. Angle dependent measurements and calculations of the Berry phase show that the two frequencies F_1 and F_2 describe oscillations from surface and bulk carriers, respectively. At low magnetic fields, only SdH oscillations from topological surface states can be detected, whereas at high magnetic fields the bulk oscillations dominate. The origin of the separation of bulk and surface SdH oscillations into different magnetic field ranges is revealed in the difference of the cyclotron masses m_c. The bulk m_c is nearly three times larger than the surface cyclotron mass, resulting in a stronger attenuation of the bulk oscillation amplitude upon decreasing magnetic field. This makes it possible to detect and characterize the surface SdH oscillations in the low-field range and the bulk oscillations at high magnetic fields.

Topological insulator; Shubnikov-de Haas oscillations; surface states; bulk states

§ INTRODUCTION

Topological surface states in systems with strong spin-orbit interactions have become the focus of interest in recent years <cit.>. The nontrivial topology of the respective electronic states results in a number of novel quantum phenomena. For example, the surface of a topological insulator (TI) has to be conducting, since the transition from a nontrivial insulator to the vacuum or an ordinary insulator demands the electronic surface states to be gapless. The surface states are usually protected by symmetry, e.g. the time reversal symmetry in a three-dimensional TI.
The intrinsic characteristics of topological surface states are the Dirac-like dispersion and the two-dimensional character; both properties can be experimentally verified. While the band structure and the existence of electronic states with a Dirac-like dispersion filling the gap between the valence and conduction bands can be verified in angle resolved photoelectron spectroscopy (ARPES), the electrical transport properties of the surface states and the associated quantum oscillations in magnetic fields (Shubnikov-de Haas effect) are frequently utilized to prove the topological surface states in various systems. This method works very well as long as the bulk states of the TI are insulating, i.e. the Fermi energy falls into the bulk band gap <cit.>. However, many topological systems are conducting in the bulk due to defects resulting in a shift of the Fermi energy into the conduction or valence bands. A typical example is the Bi_2Se_3 - Bi_2Te_3 system <cit.>, where it was also found that bulk conduction interfering with surface conduction made it difficult, if not impossible, to detect the specific signature of the surface states in Shubnikov-de Haas (SdH) oscillations of the conductivity <cit.>.

However, in a recent study of metallic Bi_2Se_2.1Te_0.9 we found the distinct signature of topological surface states in SdH oscillations by investigating the dependence of the quantum oscillations on the angle of the magnetic field with the surface of the crystal, as well as the Berry phase, which was found to be consistent with the Dirac nature of the conducting particles (holes) <cit.>. Surprisingly, in the magnetic field range up to 7 Tesla, no SdH oscillations from bulk carriers have been observed in this study, although the bulk conduction turned out to be metallic to the lowest temperatures and the total carrier density was relatively high, leading to the speculation that the quantum oscillations from bulk carriers had been strongly suppressed at low fields but might be found at higher magnetic fields.

In this work we extend the magnetic field range to 35 Tesla and study the SdH oscillation spectrum in hole-like metallic Bi_2Se_2.1Te_0.9. We prove that indeed SdH oscillations from bulk carriers dominate in the high-field range and are attenuated at lower fields. This enables us to separate bulk and surface states, determine the relevant parameters, and explain the interference and possible separation of surface and bulk quantum oscillations.

§ EXPERIMENTAL

The growth of the single crystals of Bi_2Se_2.1Te_0.9 was achieved by utilizing a modified Bridgman technique with high purity starting materials Bi (99.9999%), Se (99.9999%), and Te (99.9999%). The mixture, enclosed in quartz ampoules, was melted at 875 ^∘C and kept at this temperature for 2 days. The melt was slowly cooled to 670 ^∘C at a rate of 0.5 ^∘C/h. The crystals were finally cooled to room temperature at a rate of 10 ^∘C/h. Platelet-like crystals of typical size 5 mm x 3 mm x 0.1 mm have been extracted from the synthesis product. All crystals prepared for transport measurements have been cleaved to provide fresh and clean surfaces.

Magnetotransport measurements have been conducted using a lock-in technique at the National High Magnetic Field Laboratory (NHMFL) in Tallahassee, FL. Six gold contacts were sputtered onto one surface of the crystal to conduct longitudinal (resistance) and transverse (Hall) measurements. Platinum wires were attached to the gold pads with silver paint.
The sample was mounted on a rotating platform which allowed for positioning the sample at different angles with respect to the magnetic field. The platform, mounted in a ^3He cryostat (Oxford), was inserted into the 32 mm bore of a resistive magnet with a maximum field of 35 Tesla.

§ RESULTS AND DISCUSSION

§.§ Shubnikov-de Haas oscillations

The metallic character of the bulk conductivity is demonstrated in Fig. 1. The continuous decrease of the resistivity ϱ_xx(T) upon decreasing temperature and the positive slope of the Hall resistance R_xy(B) (shown in the inset to Fig. 1) prove that the charge carriers are holes and the Fermi energy is positioned below the top of the valence band, consistent with other crystals from the same growth batch <cit.>. From the Hall data measured at 5 K (Fig. 1), the bulk hole carrier density is determined as n_bulk=1.3×10^18 cm^-3. This number is in good agreement with the carrier density of a similar crystal of the same chemical composition <cit.>.

Longitudinal and Hall resistances of Bi_2Se_2.1Te_0.9 are measured in high magnetic fields up to 35 Tesla at the NHMFL, as shown in Fig. 2. Both R_xx(B) and R_xy(B) show quantum oscillations at high magnetic fields. The quantum oscillations in the longitudinal and Hall components have the same frequency but a phase difference of 90^∘. Since, in our data, a smooth background is easier to subtract from the Hall component than from the longitudinal one, we take the Fourier transform of the Hall signal for the frequency analysis. The oscillatory part Δ R_xy, obtained after subtracting a smooth polynomial background, is shown in Fig. 3. It is obvious from Fig. 3 that the data cannot be described by an oscillation with one single frequency only, but rather by a superposition of different frequencies. This is confirmed by analyzing the Fourier transform (FFT) of the data from Fig. 3. The FFTs for different field ranges are displayed in Fig. 4.

Two frequencies, F_1≈ 26 Tesla and F_2≈ 55 Tesla, dominate the oscillating behavior of R_xy. Since F_2 is nearly twice F_1, both frequencies could be the first and second harmonic of the same oscillation, as observed in other compounds <cit.>. However, the relative weight of the oscillations with F_1 and F_2 depends on the magnetic field range, suggesting that F_2 is not likely to be the second harmonic of F_1. This is supported by our analysis of the SdH oscillations proving its bulk origin, based on both the angle dependence measurements and the Berry phase calculations. The SdH oscillation with the lower frequency F_1 dominates in the low-field range whereas the higher frequency F_2 is stronger at higher magnetic fields. This is demonstrated in Fig. 4, where the Fourier transform of the data from Fig. 3 is shown for various field ranges B_R, as indicated in the figures. In the low-field range B_R = (0 to 5) Tesla, the FFT exhibits only one peak at frequency F_1 (Fig. 4a). This is similar to and consistent with the earlier work that was limited to magnetic fields below 7 Tesla <cit.>. With increasing magnetic field, a second peak at F_2 develops (Fig. 4b), and for fields up to 15 Tesla both peaks have about the same magnitude (Fig. 4c). With further increasing field, the F_2 peak becomes dominant (Fig. 4d). The development of the two peaks shown in Fig. 4 proves that F_1 and F_2 characterize SdH oscillations of different origin.
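The background subtraction and windowed FFT analysis described above can be condensed into a few lines of code. The following sketch is our own illustration, not the authors' pipeline: the arrays B and Rxy stand for measured field and Hall data (assumed sampled in increasing field), and the polynomial order and grid size are arbitrary choices. It resamples the signal on a uniform 1/B grid (SdH oscillations are periodic in inverse field), removes a smooth polynomial background, and returns the FFT over a chosen field range.

```python
import numpy as np

def sdh_fft(B, Rxy, B_min, B_max, poly_order=3, n_points=2048):
    """FFT of the oscillatory part of Rxy over the field window [B_min, B_max]."""
    mask = (B >= B_min) & (B <= B_max)
    invB = 1.0 / B[mask]
    # Uniform grid in 1/B; reverse so the interpolation grid is ascending.
    grid = np.linspace(invB.min(), invB.max(), n_points)
    signal = np.interp(grid, invB[::-1], Rxy[mask][::-1])
    # Subtract a smooth polynomial background to isolate Delta R_xy.
    background = np.polyval(np.polyfit(grid, signal, poly_order), grid)
    osc = signal - background
    # Frequencies are in Tesla (cycles per unit of 1/B).
    freqs = np.fft.rfftfreq(n_points, d=grid[1] - grid[0])
    return freqs, np.abs(np.fft.rfft(osc * np.hanning(n_points)))
```

Evaluating this function for several windows [B_min, B_max] reproduces the kind of field-range-resolved spectra shown in Fig. 4.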
In our previous communication, we have shown that the low-frequency oscillation (F_1) arises from topological surface states, but the origin of the second frequency observed at higher fields remained unclear. It appears conceivable to attribute the F_2 frequency to bulk SdH oscillations, as conjectured earlier <cit.>. To study the properties of the F_2 oscillation, it has to be resolved separately, without the interference from the surface state oscillations (F_1). This can be achieved by analyzing the high-field data above 10 Tesla. Fig. 5 shows that the FFT of the data above 10 Tesla exhibits only one pronounced peak at frequency F_2, i.e. the contribution from surface oscillations is largely eliminated.

§.§ Angle dependence of SdH oscillations

SdH oscillations from bulk and surface states can be distinguished by measuring their dependence on the angle with the magnetic field. Surface oscillations of Δ R_xy or Δ R_xx are expected to be periodic if plotted as a function of the inverse normal component, 1/B_⊥, of the field with respect to the surface. If the field angle Θ to the normal of the surface changes, the position of the oscillation frequency follows a 1/cosΘ scaling, due to the strictly two-dimensional character of the surface conduction. For bulk conduction, however, the SdH frequency will not follow the 1/cosΘ scaling, but it may still show a minor angle dependence if the Fermi surface geometry is anisotropic.

The angle dependent measurements have been conducted over the whole field range up to 35 Tesla and angles between 0^∘ and 70^∘. The Fourier transform to determine F_1 and F_2 at different field angles was calculated using the data measured at various angles Θ, similar to the data for Θ=0^∘ in Fig. 3. As shown in Fig. 6, the frequency F_1 scales well with 1/cos(Θ) (dashed line in Fig. 6a), indicating that this conduction channel is two-dimensional. The F_2 oscillation, however, changes only very little with the angle Θ and is therefore attributed to the bulk conduction channel (Fig. 6b).

It should be noted that there is a small shoulder in the Fourier transform near 100 Tesla visible in Figs. 4d and 5. This shoulder develops into a peak with frequency F_3≈ 90 to 100 Tesla with increasing angle Θ. This additional peak is attributed to another section of the bulk Fermi surface which contributes to the SdH oscillations only at higher angles Θ. Since the value of F_3 is nearly independent of the angle Θ, it cannot arise from surface states.

§.§ Berry phase

The angle dependent transport data discussed so far lead to the conclusion that SdH oscillations from bulk and topological surface states can be measured simultaneously and resolved separately in different magnetic field ranges. The conclusion is further supported by an analysis of the Berry phase, which distinguishes the nature of the charge carriers. The charge carriers of the topologically nontrivial surface states with a Dirac dispersion are expected to have a Berry phase β=1/2, in contrast to the bulk carriers with a Berry phase of zero. β can be determined from the Landau level fan diagram <cit.>. It has been shown that the SdH oscillations of the conductivity Δσ, in contrast to oscillations of the resistivity Δϱ, provide a more accurate determination of the Berry phase <cit.>.
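The fan-diagram analysis used in the following reduces to a simple linear fit: plotting the Landau index n against the inverse fields 1/B_n of the extrema and extrapolating to 1/B_n → 0 yields the Berry phase β from the intercept, via F/B_n - β = n - 1. The sketch below is our own illustration with placeholder arrays (chosen to be consistent with F ≈ 55 T and β ≈ 0, i.e. bulk-like behavior); it is not the measured data of Figs. 7 and 8.

```python
import numpy as np

def berry_phase_from_fan(n_index, B_n):
    """Linear fit of n vs 1/B_n; with n = F/B_n + (1 - beta), the
    intercept n0 at 1/B_n -> 0 gives beta = 1 - n0 (defined mod 1)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(B_n), np.asarray(n_index), 1)
    F = slope                      # SdH frequency in Tesla
    beta = (1.0 - intercept) % 1.0
    return F, beta

# Placeholder extrema: B_n = F/(n-1) with F = 55 T and beta = 0.
n_index = [4, 5, 6, 7, 8]
B_n = [18.33, 13.75, 11.0, 9.17, 7.86]
print(berry_phase_from_fan(n_index, B_n))   # -> (~55.0, ~0.0)
```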
To determine the nature of the charge carriers in the high-field range (with oscillation frequency F_2), we have to evaluate the SdH oscillations at sufficiently high fields, cutting off the low-field data, to eliminate any interference from the surface oscillations. It will be shown below that the crossover from surface (F_1) to dominantly bulk (F_2) oscillations takes place at B_c≈14 Tesla. Fig. 7 shows the oscillating part of the longitudinal conductivity σ_xx(B) above 13 Tesla, calculated using the formula σ_xx=ϱ_xx/(ϱ_xx^2+ϱ_xy^2). The vertical dashed lines indicate the positions of maxima and minima. It was shown that the minima and maxima of Δσ_xx correspond to integer and half-integer values of the Landau index n, respectively <cit.>. The Landau level fan diagram is shown in the inset of Fig. 7. The plot of n vs. 1/B_n reveals a linear relation given by F/B_n-β=n-1, and the value of n obtained from the extrapolation 1/B_n→0 is very close to 1. Accordingly, the linear fit determines the Berry phase as β=0.038±0.071. This value is consistent with the bulk nature of the charge carriers which give rise to the SdH oscillations in the high-field range <cit.>, in agreement with the weak angle dependence (Fig. 6b).

For comparison, the Berry phase of the surface carriers is determined from the low-field data, B< 7 Tesla. In this field range, the SdH oscillations are pronounced in the second derivative of σ_xx with respect to the inverse field 1/B (see Fig. 8). Here the maxima have to be assigned to integer values of the Landau level index n, as labeled in Fig. 8. The linear extrapolation of the Landau level fan plot (inset to Fig. 8) to 1/B→0 reveals a value of n_0=0.45, corresponding to a Berry phase of β=0.55±0.06. This value is in very good agreement with the earlier data for a similar crystal of Bi_2Se_2.1Te_0.9 <cit.>. The value of β close to 0.5 proves the Dirac nature of the topological surface carriers.

§.§ Lifshitz-Kosevich analysis

The separation of SdH oscillations arising from topological surface and trivial bulk states into low- and high-field ranges, respectively, needs to be understood. To this end, the microscopic parameters defining the quantum oscillations have to be determined. This can be achieved through the Lifshitz-Kosevich (LK) analysis of the SdH oscillations of Δ R_xx measured at different temperatures. According to the LK theory, the amplitude of the SdH oscillation of Δ R_xx is expressed as a function of temperature and magnetic field: <cit.>
Δ R(T,B) = Δ R_0 e^-λ_D(B)λ(T/B)/sinh[λ(T/B)]
with
λ_D(B) = 2π^2k_B m_c T_D/(ħ e B)
λ(T/B) = 2π^2k_B m_c T/(ħ e B).
The first term in equ. (1), Δ R_0, is the amplitude of the oscillation in the high-field limit 1/B→0. The next term is the Dingle factor, representing the exponential decrease of Δ R with decreasing field B. The last term describes the attenuation of Δ R with increasing temperature T. m_c is the cyclotron mass of the charge carriers and T_D is the Dingle temperature, which is related to the inverse lifetime of the carriers.

There are only three fit parameters in equs. (1) to (3), Δ R_0, m_c, and T_D, which can be determined for a specific oscillation by analyzing the field and temperature dependencies of Δ R(T,B). Fig. 9a shows the SdH oscillations of Δ R_xx in the high-field range at different temperatures. The temperature dependence of Δ R is solely determined by the λ/sinhλ term in equ. (1). Fitting this expression to the data at different constant magnetic fields, e.g. at 25 Tesla shown in Fig. 10, allows for the determination of the Landau level spacing Δ E_N(B)=ħ eB/m_c and the cyclotron mass m_c from the slope of the plot Δ E vs. B in the lower inset of Fig. 10. For the high-field (bulk) oscillations we obtain m_c=0.34 m_e (m_e is the bare electron mass).

The Dingle temperature can be determined from the semi-logarithmic plot shown in the upper inset of Fig. 10 for three different temperatures. The dashed lines are a linear fit to the data, and T_D = 6.6 K is calculated from the slopes. With the parameters m_c and T_D fixed, the oscillation amplitude in the high-field limit is estimated from the data of Fig. 9 as Δ R_0 = 5.04 mΩ. The three parameters completely define the bulk SdH oscillation amplitude, which dominates the quantum oscillations above 15 Tesla, as a function of magnetic field and temperature.

In the low-field range, the SdH oscillations are determined by the topological surface states. A similar evaluation within the LK theory, restricted to fields below 10 Tesla, reveals the set of parameters for the SdH oscillations arising from the surface conduction. Some results of the LK analysis, based on data displayed in Fig. 9b, are shown in Fig. 11. For the current sample, the parameters determined for surface conduction are Δ R_0 = 2.6 mΩ, m_c = 0.13 m_e, and T_D = 8.5 K. The parameters for bulk and surface quantum oscillations are compared and summarized in Table 1.

Note that the oscillation amplitude Δ R_0 of the bulk SdH oscillations is larger by a factor of 2 as compared to Δ R_0 of the surface states, explaining the domination of bulk oscillations at higher magnetic fields. However, the cyclotron mass m_c of the bulk oscillations is also significantly larger than that of the surface conduction, resulting in a faster exponential decay at lower fields and higher temperatures. Although the Dingle temperature T_D is slightly lower in the bulk, the product m_c· T_D, which determines the exponent of λ_D in equ. (1), is still larger, and the bulk oscillations decrease more rapidly upon decreasing magnetic field. Therefore, the SdH oscillations are dominated by surface states in the low-field range. As an example, we show in Fig. 12 the oscillation amplitudes for both surface (frequency F_1) and bulk (frequency F_2) transport at 5 K, calculated with the parameters from Table 1. It is obvious that, with increasing magnetic field, there is a crossover from surface dominated to bulk SdH oscillations. For example, at 5.85 Tesla (data shown in Fig. 10) the ratio of surface and bulk oscillation amplitudes is about 9, demonstrating the dominance of quantum oscillations from topological surface states at this field. At 14 Tesla, both oscillation amplitudes are equal, resulting in the strongest interference. Below and above this crossover field, surface and bulk oscillations can well be separated, as shown in the frequency analysis above (Figs. 4 and 5).

§ SUMMARY AND CONCLUSIONS

The question of how to detect topological surface states utilizing different experimental techniques has been discussed in recent years. The study of the electronic excitation spectrum in ARPES measurements has revealed the Dirac-like dispersion filling the semiconducting bulk gap in various systems with strong spin-orbit interactions. Alternatively, magnetotransport investigations have shown the characteristics of two-dimensional conduction arising from topological surface states in the angle dependence of the frequency of Shubnikov-de Haas oscillations.
This method was believed to work well as long as the bulk states do not contribute to the conduction <cit.>. In metallic topological compounds, however, the interference of SdH oscillations from surface and bulk states makes it very difficult to separate and study surface and bulk oscillations.

In the current example, metallic Bi_2Se_2.1Te_0.9 with hole-type carriers, Shubnikov-de Haas oscillations have been observed in magnetic fields up to 35 Tesla. Two characteristic oscillation frequencies, F_1 and F_2, can be clearly distinguished and attributed to oscillations from surface and bulk states, respectively. The character of the surface and bulk carriers is determined from the angle dependence of the SdH oscillations and the derived Berry phases. It is demonstrated that both oscillations can be separated, whereas the topological surface states dominate in the low-field range and the bulk oscillations increase in relative weight at higher magnetic fields. The main origin of this separation is found in the different cyclotron masses (m_c^bulk/m_c^surf≈3), which cause the bulk oscillations to decay (exponentially) more rapidly if the magnetic field is decreased. At a temperature of 5 K, the crossover from bulk to surface dominated quantum oscillations upon decreasing field is found at a critical value of B_c=14 Tesla.

The results of this study pave the way to studying topological materials with bulk metallic properties using magnetoconductance measurements. They show that SdH oscillations from topological surface states can be detected even when the Fermi energy cuts through the valence band and the bulk transport properties are metallic. The conditions for a successful separation of surface and bulk SdH oscillations have been identified. The key parameter is the difference of the cyclotron masses m_c, which have a profound effect on the oscillation amplitudes as a function of magnetic field. According to the Lifshitz-Kosevich theory, the oscillation amplitude decreases exponentially with the inverse magnetic field and the exponent is determined by m_c. In the current example, Bi_2Se_2.1Te_0.9, the field ranges where bulk and surface oscillations dominate are well separated, and the analysis of the quantum oscillations can be conducted at high and low fields, revealing the fundamental parameters of bulk and surface oscillations, respectively. Other topological systems with bulk metallic conduction are expected to show similar properties and may be analyzed following the procedure outlined in this work.

§ ACKNOWLEDGEMENT(S)

This work is supported in part by the T.L.L. Temple Foundation, the J.J. and R. Moores Endowment, the State of Texas through TCSUH, the US Air Force Office of Scientific Research, and at LBNL through the US Department of Energy. V. M. acknowledges support from the Bulgarian Science Fund, project FNI-T-02/26. A portion of this work was performed at the National High Magnetic Field Laboratory, which is supported by National Science Foundation Cooperative Agreement No. DMR-1157490 and the State of Florida. The work at Idaho National Laboratory is supported by the Department of Energy, Office of Basic Energy Sciences, Materials Sciences, and Engineering Division and through Grant No. DOE FG02-01ER45872.

[hasan:10] M. Z. Hasan and C. L. Kane, Topological insulators, Rev. Mod. Phys. 82 (2010), pp. 3045.
[qi:11] X.-L. Qi and S.-C. Zhang, Topological insulators and superconductors, Rev. Mod. Phys. 83 (2011), pp. 1057.
[ando:13] Y. Ando, Topological insulator materials, J. Phys. Soc. Jpn. 82 (2013), pp. 102001.
[taskin:11] A. A. Taskin, Z. Ren, S. Sasaki, K. Segawa, and Y. Ando, Observation of Dirac holes and electrons in a topological insulator, Phys. Rev. Lett. 107 (2011), pp. 016801.
[eguchi:14] G. Eguchi, K. Kuroda, K. Shirai, A. Kimura, and M. Shiraishi, Surface Shubnikov-de Haas oscillations and non-zero Berry phase of the topological hole conduction in Tl_1-xBi_1+xSe_2, Phys. Rev. B 90 (2014), pp. 201307(R).
[xia:10] Y. Xia, D. Qian, D. Hsieh, L. Wray, A. Pal, H. Lin, A. Bansil, D. Grauer, Y. S. Hor, R. J. Cava, and M. Z. Hasan, Observation of a large-gap topological-insulator class with a single Dirac cone on the surface, Nat. Phys. 5 (2009), pp. 398.
[chen:10] Y. L. Chen, J. G. Analytis, J.-H. Chu, Z. K. Liu, S.-K. Mo, X. L. Qi, H. J. Zhang, D. H. Lu, X. Dai, Z. Fang, S. C. Zhang, I. R. Fisher, Z. Hussain, and Z.-X. Shen, Experimental realization of a three-dimensional topological insulator, Bi_2Te_3, Science 325 (2009), pp. 178.
[hsieh:10] D. Hsieh, Y. Xia, D. Qian, L. Wray, F. Meier, J. H. Dil, J. Osterwalder, L. Patthey, A. V. Fedorov, H. Lin, A. Bansil, D. Grauer, Y. S. Hor, R. J. Cava, and M. Z. Hasan, Observation of time-reversal-protected single-Dirac-cone topological-insulator states in Bi_2Te_3 and Sb_2Te_3, Phys. Rev. Lett. 103 (2009), pp. 146401.
[shrestha1:10] K. Shrestha, V. Marinova, D. Graf, B. Lorenz, and C. W. Chu, Quantum oscillations in metallic Sb_2Te_2Se topological insulator, Phys. Rev. B 95 (2017), pp. 075102.
[qu:11] D.-X. Qu, Y. S. Hor, J. Xiong, R. J. Cava, and N. P. Ong, Quantum oscillations and Hall anomaly of surface states in the topological insulator Bi_2Te_3, Science 329 (2010), pp. 821.
[analytis:11] J. G. Analytis, R. D. McDonald, S. C. Riggs, J.-H. Chu, G. S. Boebinger, and I. R. Fisher, Two-dimensional surface state in the quantum limit of a topological insulator, Nature Phys. 6 (2010), pp. 960.
[eto:11] K. Eto, Z. Ren, A. A. Taskin, K. Segawa, and Y. Ando, Angular-dependent oscillations of the magnetoresistance in Bi_2Se_3 due to the three-dimensional bulk Fermi surface, Phys. Rev. B 81 (2010), pp. 195309.
[cao:13] H. Cao, S. Xu, I. Miotkowski, J. Tian, D. Pandey, M. Z. Hasan, and Y. P. Chen, Structural and electronic properties of highly doped topological insulator Bi_2Se_3 crystals, Phys. Status Solidi RRL 7 (2013), pp. 133.
[shrestha:14] K. Shrestha, V. Marinova, B. Lorenz, and P. C. W. Chu, Shubnikov-de Haas oscillations from topological surface states of metallic Bi_2Se_2.1Te_0.9, Phys. Rev. B 90 (2014), pp. 241111(R).
[taskin:09] A. A. Taskin and Y. Ando, Quantum oscillations in a topological insulator Bi_1-xSb_x, Phys. Rev. B 80 (2009), pp. 085303.
[jalan:10] B. Jalan, S. Stemmer, S. Mack, and S. J. Allen, Two-dimensional electron gas in δ-doped SrTiO_3, Phys. Rev. B 82 (2010), pp. 081103(R).
[li:14] G. Li, Z. Xiang, F. Yu, T. Asaba, B. Lawson, P. Cai, C. Tinsman, A. Berkley, S. Wolgast, Y. S. Eo, D.-J. Kim, C. Kurdak, J. W. Allen, K. Sun, X. H. Chen, Y. Y. Wang, Z. Fisk, and L. Li, Two-dimensional Fermi surfaces in Kondo insulator SmB_6, Science 346 (2014), pp. 1208.
[ren:10] Z. Ren, A. A. Taskin, S. Sasaki, K. Segawa, and Y. Ando, Large bulk resistivity and surface quantum oscillations in the topological insulator Bi_2Te_2Se, Phys. Rev. B 82 (2010), pp. 241306(R).
http://arxiv.org/abs/1703.08847v2
{ "authors": [ "Keshav Shrestha", "David E. Graf", "Vera Marinova", "Bernd Lorenz", "Paul C. W. Chu" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20170326165824", "title": "Simultaneous detection of quantum oscillations from bulk and topological surface states in metallic Bi2Se2.1Te0.9" }
Department of Physics, North Carolina State University, Raleigh, North Carolina 27695, USA We present an experiment in which a horizontal quasi-2D granular system with a fixed neighbor network is cyclically compressed and decompressed over 1000 cycles. We remove basal friction by floating the particles on a thin air cushion, so that particles only interact in-plane. As expected for a granular system, the applied load is not distributed uniformly, but is instead concentrated in force chains which form a network throughout the system. To visualize the structure of these networks, we use particles made from photoelastic material. The experimental setup and a new data-processing pipeline allow us to map out the evolution of the force network subject to the cyclic compressions. We characterize several statistical properties of the packing, including the probability density function of the contact force, and compare them with theoretical and numerical predictions from the force network ensemble theory. An experimental investigation of the force network ensemble Jonathan E. Kollmer (jekollme@ncsu.edu) and Karen E. Daniels § INTRODUCTION Although the positions of particles in a jammed granular system are fixed in a specific geometrical configuration, the particle positions alone are not sufficient to determine the force network that carries the load on that packing. In such an underdetermined mechanical system, there are many ways in which force and torque balance on each particle can be satisfied for any given packing geometry and boundary conditions <cit.>. There are both stable and unstable configurations, and initially stable granular systems can evolve into catastrophic failure. While two packings might have the same occupied volume or internal pressure, they might have vastly different bulk material properties <cit.>. For frictional granular systems it remains an open question whether the statistics due to the stress state and the volume of the system can be decoupled from each other. To make predictions for the physical behavior of granular systems, tools and concepts from statistical physics have become widely used <cit.>, but what is the correct ensemble to describe jammed granular packings? To get more insight into these jammed packings one needs to look at the structure of the force networks that form in loaded granular packings. When the packing is subjected to an external load, not all particles share the load equally; instead, the forces are highly localized into force chains. In this work, we present an experiment to look at the distribution of forces in a loaded granular system, while disentangling the effects of configuration from other influences. This is of interest for comparing these distributions to predictions from the Force Network Ensemble theory <cit.>. §.§ The Force Network Ensemble The Force Network Ensemble (FNE) is a concept introduced by Snoeijer et al. <cit.> in which they use an ensemble approach for examining the force distribution in static granular packings. Since forces in fixed granular packings are typically underdetermined, the ensemble averages over all microscopic variations of a packing, an approach that goes back to Edwards <cit.>. The FNE predicts, among other things <cit.>, a finite value for P(F) as F → 0 and a faster than exponential decay of the probability density function (PDF) of the contact forces at large forces, depending on the dimensionality of the packing <cit.>.
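The underdetermination at the heart of the FNE can be made concrete with a few lines of code. The following minimal sketch (our own illustration, not part of the experimental pipeline described below) samples non-negative solutions f of a force-balance system A f = b with more contact forces than balance equations; the matrix A, the load b, and all dimensions are hypothetical stand-ins for a real contact geometry.

```python
import numpy as np

# Toy force network ensemble: one fixed "geometry" A, many force states f.
rng = np.random.default_rng(1)
n_eq, n_contacts = 6, 10                  # e.g. 2 balance equations per disk in 2D
A = rng.normal(size=(n_eq, n_contacts))   # stand-in for the contact geometry matrix
f_star = rng.uniform(0.5, 1.5, n_contacts)
b = A @ f_star                            # a boundary load compatible with f_star

_, _, Vt = np.linalg.svd(A)
null_basis = Vt[n_eq:].T                  # null space of A: directions leaving A f = b intact

states = []
for _ in range(20000):                    # rejection-sample the set {f >= 0 : A f = b}
    f = f_star + null_basis @ rng.normal(scale=0.15, size=null_basis.shape[1])
    if np.all(f >= 0):
        states.append(f)
forces = np.concatenate(states)
print(len(states), "admissible force states for a single geometry")
print("mean force:", forces.mean().round(3), " std:", forces.std().round(3))
```

Every accepted sample is a distinct force configuration compatible with the same geometry and load, which is exactly the degeneracy the FNE averages over.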
For a two-dimensional system, P(F) is predicted to have a Gaussian tail. For a review see <cit.> (and references therein), where it is also discussed that the peak in the PDF should vanish for anisotropic stresses. Saitoh et al. <cit.>, using molecular dynamics simulations to determine transition rates for contact changes, recently also found a master equation that describes the PDF of forces in soft particle packings. § AN EXPERIMENTAL APPROACH While a number of experimental works exist, e.g. <cit.>, PDFs of contact forces are most often produced from numerical simulations, including a recent pair of papers by Pugnaloni <cit.> and Kondic <cit.> where they study the structure of force networks in tapped particulate systems of disks subject to gravity. However, in most experimental studies the external load that probes the force network is not the only force applied to the system; there is additionally a load superimposed by gravity, or basal friction <cit.>. Further, there are only a few experiments <cit.> probing granular ensembles. In this manuscript, we present an experiment that is designed to enumerate how many force configurations of a single hyperstatic granular arrangement are practically accessible, while at the same time keeping the external load the only force that is being applied to the system. To achieve this we prepare a horizontal quasi-2D granular system that is floated on a gentle air cushion, thereby generating an effectively gravity-free system without basal friction <cit.>. The particles are confined by a piston that can apply a uniaxial load to the packing. A schematic drawing of the experimental setup is detailed in Fig. <ref>. By cyclically loading and unloading the packing in a way that will not change the particle configuration (no neighbor changes), the system cycles through many contact force configurations due to microscopic changes of the exact contact point. For the experiments done here, we compress the packing in steps of constant volume (Δ V = 0.002869 V_initial) and the compression steps are applied quasistatically over 20 substeps of Δ x = 0.01 mm. The initial volume V_initial was chosen to be close to the onset of jamming, and the final volume so that the mean contact force rises by more than a factor of 3. We perform the experiment with several random configurations of 29 particles of two different radii (r_1 = 5.5 mm and r_2 = 7.6 mm) to prevent crystallization. In order to extract force information from the experiment, the particles are made of photoelastic material (Vishay PSM-4), which shifts the polarization of light that is shone through it as a function of the applied load. A model of the force-modulated light intensity can then be fitted to camera images of the particles <cit.>. § RESULTS When we start the load cycling we observe that, even after an initial annealing period, there are variations in the force network (see Fig. <ref>) while the particle configuration is unchanged. This validates our experimental approach and allows us to probe the nature of the FNE. We run the experiment for ≈ 1000 cycles and observe strong fluctuations in the contact number, determined by a minimum threshold force (F_th > 0.01 N), and in the number of load-bearing particles (the number of particles with one or more contacts above the threshold force). Figure <ref> shows the average contact force F = ⟨√(F_N^2+F_T^2)⟩ and its standard deviation over all cycles, as a function of the applied compression.
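A minimal sketch of the averaging just described (not the actual photoelastic fitting pipeline): given hypothetical arrays FN and FT of fitted normal and tangential forces on each detected contact, one thresholds and averages as in the text. The synthetic input distributions are placeholders; only the threshold F_th = 0.01 N is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
FN = rng.exponential(0.2, size=500)       # placeholder normal forces (N)
FT = rng.normal(0.0, 0.05, size=500)      # placeholder tangential forces (N)

F = np.sqrt(FN**2 + FT**2)                # per-contact force magnitude
F = F[F > 0.01]                           # apply the threshold F_th = 0.01 N
print("contacts above threshold:", F.size)
print("average contact force:", F.mean(), " STD:", F.std())
pdf, edges = np.histogram(F / F.mean(), bins=30, density=True)  # PDF of F/<F>
```

Normalizing by the mean contact force, as in the last line, is what allows PDFs from different compression steps to be compared on a single axis.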
We see that the standard deviations in F grow with the applied compression. For higher packing fractions, the intervals given by the STD(F) around F begin to overlap for consecutive compression steps. Figure <ref> shows the PDFs of the contact forces for 5 compression steps over all cycles. The PDF exhibits a strong peak, decaying in the limit of both low and high forces. These results are qualitatively similar to the predictions of FNE theory, most notably a stronger than exponential decay in P(F). We also see the peak in P(F), along with the average contact force, move towards higher values as the system is compressed more strongly. In fact, the PDFs collapse onto a single curve when normalized by the average contact force at the corresponding compression step. Although we perform an experiment with anisotropic loading, we nonetheless identify a peak in the distribution, a feature that is suggested to vanish for anisotropic loads <cit.>. For small forces we find P(F) to rise as a power law, approximately F^3/2. Wyart <cit.> showed that the exponent is determined by the pair distribution function g(r). However, in our small system with a fixed particle configuration, g(r) is undersampled, so the question arises what the exponent is set by here. § CONCLUSIONS AND OUTLOOK We have designed an experiment that can explore the Force Network Ensemble of a two-dimensional granular packing, while excluding forces other than the applied load. We find that we can qualitatively reproduce some of the features in the contact force distribution predicted by the FNE theory. Due to the limited number of cycles and initial packing configurations explored here, it is not immediately clear whether some features we observed in the contact force PDFs are due to sample size or the specific initial configuration. Repeating the experiment not only for more compression cycles but also for many different packing configurations would allow us to probe ergodic effects by contrasting the time-average statistics with the ensemble-average statistics. Furthermore, future work should try to identify whether there are several subpopulations of networks which differ in their contact force PDFs. These subpopulations can be found by using tools from network theory such as community detection <cit.>. More generally, the data generated by this experiment will be useful in relating force network features to macroscopic packing properties <cit.>. § ACKNOWLEDGEMENTS AND REFERENCES We gratefully acknowledge James Puckett for the design and construction of the air table on which the apparatus is based, and for the inspiration for the new parallelized version of the contact-force code. This research was supported by the James S. McDonnell Foundation and the NSF through grants DMR-0644743 and DMR-1206808.
http://arxiv.org/abs/1703.09169v1
{ "authors": [ "Jonathan E. Kollmer", "Karen E. Daniels" ], "categories": [ "cond-mat.soft" ], "primary_category": "cond-mat.soft", "published": "20170327162829", "title": "An experimental investigation of the force network ensemble" }
We compute the variances of sums in arithmetic progressions of arithmetic functions associated with certain L-functions of degree two and higher in 𝔽_q[t], in the limit as q→∞. This is achieved by establishing appropriate equidistribution results for the associated Frobenius conjugacy classes. The variances are thus related to matrix integrals, which may be evaluated. Our results differ significantly from those that hold in the case of degree-one L-functions (i.e. situations considered previously using this approach). They correspond to expressions found recently in the number field setting assuming a generalization of the pair-correlation conjecture. Our calculations apply, for example, to elliptic curves defined over 𝔽_q[t]. Variance of sums in arithmetic progressions of arithmetic functions associated with higher degree L-functions in 𝔽_q[t]. We are pleased to acknowledge support under EPSRC Programme Grant EP/K034383/1 LMF: L-Functions and Modular Forms. JPK is also grateful for support through a Royal Society Wolfson Research Merit Award and a Royal Society Leverhulme Senior Research Fellowship. We thank Nick Katz, Emmanuel Kowalski, and Zeev Rudnick for discussion and helpful comments. Edva Roditty-Gershon, School of Mathematics, University of Bristol, Bristol BS8 1TW, UK. § INTRODUCTION §.§ Analytic motivation Let Λ(n) denote the von Mangoldt function, defined by Λ(n) = log p if n=p^k for a prime p and an integer k≥ 1, and Λ(n)=0 otherwise. The prime number theorem implies that ∑_n ≤ xΛ(n) = x+o(x), as x→∞, determining the average of Λ(n) over long intervals. In many problems one needs to understand sums over shorter intervals and in arithmetic progressions. This is significantly more difficult, because the fluctuations between different short intervals/arithmetic progressions can be large, and in many important cases we do not have rigorous results. One may seek to characterize the fluctuations in these sums via their variances. These variances are the subject of several long-standing conjectures. For example, in the case of short intervals Goldston and Montgomery <cit.> have made the following conjecture: For any fixed ε>0, ∫_1^X( ∑_x< n ≤ x+hΛ(n)- h)^2 dx ∼ hX(log X-log h) uniformly for 1≤ h≤ X^1-ε.
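The conjecture is easy to probe numerically at modest heights (where one is of course far from the asymptotic regime). A short sketch, with arbitrary illustrative parameters X and h, sieves Λ(n) and compares a discrete analogue of the integral with hX(log X - log h):

```python
import numpy as np

X, h = 200_000, 50                        # illustrative parameters only
N = X + h
is_comp = np.zeros(N + 1, dtype=bool)
lam = np.zeros(N + 1)
for p in range(2, N + 1):                 # sieve of Eratosthenes
    if not is_comp[p]:
        is_comp[2 * p::p] = True
        pk = p
        while pk <= N:
            lam[pk] = np.log(p)           # von Mangoldt: log p at prime powers p^k
            pk *= p

psi = np.cumsum(lam)                      # psi[x] = sum_{n <= x} Lambda(n)
diff = psi[h:] - psi[:-h]                 # psi(x+h) - psi(x) at integer x
variance = np.sum((diff[1:X + 1] - h) ** 2)
print(variance, "vs", h * X * (np.log(X) - np.log(h)))
```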
It is natural to try to compute the variance in Conjecture <ref> using the Hardy-Littlewood Conjecture ∑_n≤ XΛ(n)Λ(n+k)∼𝔖(k)X as X→∞, where 𝔖(k) is the singular series: 𝔖(k) = 2∏_p>2(1-1/(p-1)^2) ∏_p>2, p|k (p-1)/(p-2) if k≠0 is even, and 𝔖(k)=0 if k is odd. Montgomery and Soundararajan <cit.> proved that (<ref>), together with an assumption concerning the implicit error term, implies a more precise asymptotic for the variance in Conjecture <ref> when log X≤ h≤ X^1/2, namely that it is equal to hX(log X-log h - γ_0-log 2π) + O_ε(h^15/16X(log X)^17/16+h^2X^1/2+ε), where γ_0 is the Euler-Mascheroni constant. An alternative approach to computing this variance follows from ζ^'(s)/ζ(s)=-∑_n=1^∞Λ(n)/n^s, which links statistical properties of Λ(n) to those of the zeros of the Riemann zeta-function ζ(s). Taking this line, Goldston and Montgomery <cit.> proved that Conjecture <ref> is equivalent to the following conjecture, due to Montgomery <cit.>, concerning the pair correlation of the non-trivial zeros 1/2 +iγ of the zeta-function: Let ℱ(X,T) = ∑_0<γ,γ'≤ T X^i(γ-γ') w(γ-γ'), where w(u)=4/(4+u^2). Then for any fixed A≥1 we have, assuming the Riemann Hypothesis, ℱ(X,T)∼ (T log T)/(2π) uniformly for T≤ X≤ T^A. See also <cit.> and <cit.>, where lower order terms are considered in the equivalence. There is a similar theory in the case of sums in arithmetic progressions. The Prime Number Theorem for arithmetic progressions states that for a fixed modulus Q, ∑_n≤ X, n≡ A mod Q Λ(n) ∼ X/ϕ(Q), as X→∞, where ϕ(Q) is the Euler totient function, giving the number of reduced residues modulo Q. The variance of sums over different arithmetic progressions is then defined by G(X,Q)=∑_A mod Q, (A,Q)=1 |∑_n≤ X, n≡ A mod Q Λ(n)-X/ϕ(Q)|^2. Asymptotic formulae are known when G(X,Q) is summed over a long range of values of Q (cf. <cit.>, <cit.> and <cit.>), but much less is known concerning G(X,Q) itself. In the latter case, Hooley has made the following conjecture <cit.>: G(X,Q)∼ X log Q. Hooley was not specific about the size of Q relative to X for which this asymptotic should hold. Friedlander and Goldston <cit.> have shown that in the range Q>X^1+o(1), G(X,Q) ∼ Xlog X - X - X^2/ϕ(Q) + O(X/(log X)^A) + O((log Q)^3). This is a relatively straightforward range because each progression contains at most one prime. They conjecture that Hooley's asymptotic holds if X^1/2+ϵ<Q<X, and further conjecture that if X^1/2+ϵ<Q<X^1-ϵ then G(X,Q) ∼ Xlog Q - X·(γ_0 +log 2π + ∑_p|Q log p/(p-1)). They show that both Conjecture <ref> and (<ref>) hold assuming the Hardy-Littlewood conjecture with small remainders. For Q<X^1/2 relatively little seems to be known. Conjectures <ref> and <ref> remain open, but their analogues in the function field setting have been proved in the limit of large field size <cit.>. Let 𝔽_q be a finite field of q elements and 𝔽_q[t] the ring of polynomials with coefficients in 𝔽_q. Let ℳ⊂𝔽_q[t] be the subset of monic polynomials and ℳ_n be the subset of polynomials of degree n. Let 𝒫⊂ℳ be the subset of irreducible polynomials and 𝒫_n=𝒫∩ℳ_n. The norm of a non-zero polynomial f∈𝔽_q[t] is defined to be |f|=q^deg f.
The von Mangoldt function is the function on ℳ defined as Λ(f) = d if f=π^m with π∈𝒫_d and m≥1, and Λ(f)=0 otherwise. The Prime Polynomial Theorem in this context is the identity ∑_f∈ℳ_nΛ(f) = q^n. The analogue of Conjecture <ref> is the following result, proved in <cit.>: for h≤ n-5, 1/q^n∑_A∈ℳ_n| ∑_|f-A|≤ q^hΛ(f)-q^h+1|^2 ∼ q^h+1(n-h-2) as q→∞; note that |{f:|f-A|≤ q^h}|=q^h+1. In the same vein, the function-field analogue of Conjecture <ref> was also established in <cit.>: fix n≥ 2; then, given a sequence of finite fields 𝔽_q and square-free polynomials Q∈𝔽_q[t] with 2≤deg(Q)≤ n+1, one has ∑_A mod Q, (A,Q)=1| ∑_f∈ℳ_n, f≡ A mod Q Λ(f)-q^n/Φ(Q)|^2 ∼ q^n(deg Q-1) as q→∞. The asymptotic formulae (<ref>) and (<ref>) were established in <cit.> by expressing the variances as sums over families of L-functions. These L-functions can be expressed as the characteristic polynomials of matrices representing Frobenius conjugacy classes. In the limit as q→∞, these matrices become equidistributed in one of the classical compact groups and the sums become matrix integrals of a kind familiar in Random Matrix Theory. Evaluating these integrals leads to the expressions above. This approach to computing variances has subsequently been applied to other arithmetic functions defined over function fields, including the Möbius function <cit.>, the square of the Möbius function (i.e., the characteristic function of square-free polynomials) <cit.>, square-full polynomials <cit.>, and the generalized divisor functions <cit.>. For overviews see <cit.>, <cit.>, and <cit.>. The arithmetic functions considered so far have all been associated with degree-one L-functions (or simple functions of these). Our main aim in this paper is to extend the theory to arithmetic functions associated with L-functions of degree-two and higher. For example, our results apply to L-functions associated with elliptic curves defined over 𝔽_q[t].
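Both function-field statements are easy to check by brute force for small parameters. The following self-contained sketch (with the arbitrary choices q = 3, n = 4 and Q = t^2 + 1, which is irreducible and hence square free over 𝔽_3) verifies the Prime Polynomial Theorem and gives a rough numerical look at the arithmetic-progression variance; q = 3 is of course far from the q→∞ regime, and the helper functions are our own naming.

```python
import itertools
import numpy as np

p = 3                                     # work over F_q with q = p = 3

def polmul(a, b):                         # coefficient tuples, constant term first
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % p
    return tuple(c)

def polmod(a, m):                         # remainder of a modulo the monic poly m
    a = list(a)
    while len(a) >= len(m):
        top = a[-1]
        if top:
            k = len(a) - len(m)
            for i, mi in enumerate(m):
                a[k + i] = (a[k + i] - top * mi) % p
        a.pop()
    while a and a[-1] == 0:
        a.pop()
    return tuple(a)

def monics(d):                            # all monic polynomials of degree d
    for tail in itertools.product(range(p), repeat=d):
        yield tuple(tail) + (1,)

irr = {}
def irreducibles(d):                      # monic irreducibles of degree d, by sieving
    if d not in irr:
        lower = [g for e in range(1, d // 2 + 1) for g in irreducibles(e)]
        irr[d] = [f for f in monics(d) if all(polmod(f, g) for g in lower)]
    return irr[d]

n, Q = 4, (1, 0, 1)                       # modulus Q = t^2 + 1; 2 <= deg Q <= n + 1
S = {}                                    # S[A] = sum of Lambda(f) over f = A mod Q
for d in (e for e in range(1, n + 1) if n % e == 0):
    for pi in irreducibles(d):
        f = (1,)
        for _ in range(n // d):           # f = pi^(n/d): the prime powers in M_n
            f = polmul(f, pi)
        r = polmod(f, Q)
        S[r] = S.get(r, 0) + d            # Lambda(f) = d = deg(pi)

print(sum(S.values()), "=", p ** n)       # Prime Polynomial Theorem: both equal 81
units = [u for a in range(p) for b in range(p) if (u := polmod((a, b), Q))]
vals = np.array([S.get(u, 0.0) for u in units])
print(np.sum((vals - p ** n / len(units)) ** 2), "vs", p ** n * (len(Q) - 2))  # ~ q^n (deg Q - 1)
```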
This will require us to establish the appropriate equidistribution results for such L-functions. We achieve this using the machinery developed by Katz <cit.>. The main reason for moving to higher-degree L-functions is the recent discovery in the number-field setting that one gets qualitatively new behaviour when the degree exceeds one <cit.>. We summarize briefly now the results in <cit.>. Let 𝒮 denote the Selberg class of L-functions. For F∈𝒮 primitive, write F(s) = ∑_n=1^∞ a_F(n)/n^s. Then F(s) has an Euler product F(s) = ∏_p exp(∑_l=1^∞ b_F(p^l)/p^ls) and satisfies the functional equation Φ(s) = ε_F Φ̄(1-s), where Φ̄(s) denotes the complex conjugate of Φ(s̄) and Φ(s) = Q^s(∏_j=1^rΓ(λ_j s+μ_j)) F(s), for some Q>0, λ_j>0, Re(μ_j)≥0 and |ε_F|=1. There are two important invariants of F(s): the degree d_F and the conductor 𝔮_F, given by d_F=2∑_j=1^rλ_j, 𝔮_F=(2π)^d_F Q^2∏_j=1^rλ_j^{2λ_j}, respectively. Another is m_F, the order of the pole at s=1, which equals 1 for the Riemann zeta function and is expected to be 0 otherwise. Let Λ_F be the arithmetic function defined by F'(s)/F(s) = -∑_n=1^∞Λ_F(n)/n^s, and let ψ_F be the function defined by ψ_F(x) := ∑_n ≤ xΛ_F(n). The former will be the main focus of our attention. A generalized prime number theorem of the form ∑_n ≤ xΛ_F(n) = m_F x+o(x) is expected to hold. In analogy with the case of the Riemann zeta function, it is natural to consider the variance Ṽ_F(X, h) := ∫_1^X |ψ_F(x+h)-ψ_F(x) - m_F h|^2 dx. For example, when F represents an L-function associated with an elliptic curve, Ṽ_F(X, h) is the variance of sums over short intervals involving the Fourier coefficients of the associated modular form evaluated at primes and prime powers; and in the case of Ramanujan's L-function, it represents the corresponding variance for sums involving the Ramanujan τ-function. For most F∈𝒮 it is expected that ∑_n≤ XΛ_F(n)Λ_F(n+h) = o(X). This might lead one to expect that Ṽ_F(X, h) typically exhibits significantly different asymptotic behaviour than in the case when F is the Riemann zeta-function, because in that case (<ref>) plays a central role in our understanding of the variance. However, all principal L-functions are believed to look essentially the same from the perspective of the statistical distribution of their zeros; that is, it is conjectured that the zeros of all primitive L-functions have a limiting distribution which coincides with that of random unitary matrices, as in Montgomery's conjecture (<ref>). It was proved in <cit.>, assuming the Generalized Riemann Hypothesis (GRH), that an extension of the pair-correlation conjecture for the zeros that includes lower order terms (and which itself follows from the ratio conjecture of <cit.> along the lines of <cit.>) is equivalent to the formulae (<ref>) and (<ref>) below for Ṽ_F(X, h), which generalize the Montgomery-Soundararajan formula (<ref>).
If 0<B_1<B_2≤ B_3<1/d_F, then Ṽ_F(X,h) = h X(d_F log(X/h)+log𝔮_F-(γ_0+log 2π)d_F) +O_ε(hX^1+ε(h/X)^c/3)+O_ε(hX^1+ε(hX^-(1-B_1))^1/3(1-B_1)) uniformly for X^1-B_3≪ h≪ X^1-B_2, for some c>0. Otherwise, if 1/d_F<B_1<B_2≤ B_3<1, Ṽ_F(X,h) = 1/6 h X(6log X-(3+8log 2))+O_ε(hX^1+ε(h/X)^c/3)+O_ε(hX^1+ε(hX^-(1-B_1))^1/3(1-B_1)) uniformly for X^1-B_3≪ h≪ X^1-B_2, for some c>0. If d_F=1 there is only one regime of behaviour, governed by (<ref>). When 𝔮_F=1, this coincides exactly with (<ref>); and when 𝔮_F≠ 1, it generalizes (<ref>) in a straightforward way. If d_F>1 there are two ranges of behaviour, depending on the size of h. In the first range, Ṽ_F(X,h)/(hX) grows like d_F log(X/h); in the second it is independent of h at leading order. It is this behaviour that we seek to understand better in the case of function fields. In that case we are able to establish unconditional theorems which illustrate the qualitatively new form of the variance when the degree is two or higher. §.§ Function-field analogue Our results are quite general and to state them requires a good deal of notation and terminology to be developed. For this reason we postpone presenting them until later sections, when the necessary theory has been developed. For reference, our main results are Theorem <ref> (see <ref>) and Theorem <ref> (see <ref>). The former provides the variance estimates we need and the latter provides an application of these estimates to L-functions of abelian varieties. Two key ingredients used to prove these theorems are Theorem <ref> (see <ref>) and Theorem <ref> (see <ref>), which provide the requisite equidistribution and big-monodromy results respectively. To illustrate our results we state now a special case of one of them. Suppose q is an odd prime power, and let E/𝔽_q(t) be the Legendre curve, that is, the elliptic curve with affine model y^2 = x(x-1)(x-t). Over the ring 𝔽_q[t], this curve has bad reduction at t=0,1 and good reduction everywhere else, so it has conductor 𝔪=t(t-1). It also has additive reduction at ∞, so the L-function is given by an Euler product L(T,E/𝔽_q(t)) = ∏_π∈𝒫 L(T^deg(π),E/𝔽_π)^-1, where 𝒫⊂𝔽_q[t] is the subset of monic irreducibles and 𝔽_π is the residue field 𝔽_q[t]/π𝔽_q[t]. Each Euler factor of L(T,E/𝔽_q(t)) is the reciprocal of a polynomial in ℚ[T] and satisfies T d/dT log L(T,E/𝔽_π)^-1 = ∑_m=1^∞ a_π,m T^m ∈ℤ[[T]]. Moreover, if we define Λ_Leg to be the function on the subset ℳ of monic polynomials given by Λ_Leg(f) = d· a_π,m if f=π^m with π∈𝒫 and deg(π)=d, and Λ_Leg(f)=0 otherwise, then the L-function satisfies T d/dT log(L(T,E/𝔽_q(t))) = ∑_n=1^∞( ∑_f∈ℳ_n Λ_Leg(f) ) T^n. Let Q∈𝔽_q[t] be monic and square free. For each n≥ 1 and each A in Γ(Q)=(𝔽_q[t]/Q𝔽_q[t])^×, consider the sum S_n,Q(A) := ∑_f∈ℳ_n, f≡ A mod Q Λ_Leg(f). Let A vary uniformly over Γ(Q), and consider the moments 𝔼_A[S_n,Q(A)] = 1/|Γ(Q)| ∑_A∈Γ(Q) S_n,Q(A), Var_A[S_n,Q(A)] = 1/|Γ(Q)| ∑_A∈Γ(Q) |S_n,Q(A)-𝔼_A[S_n,Q(A)]|^2. These moments (and the quantity |Γ(Q)|) depend on q, so one can ask how they behave when we replace 𝔽_q by a finite extension, that is, let q→∞. Using the theory we develop in this paper one can prove the following theorem. If (𝔪,Q)=t and if deg(Q) is sufficiently large, then |Γ(Q)|·𝔼_A[S_n,Q(A)] = ∑_f∈ℳ_n Λ_Leg(f), lim_q→∞ |Γ(Q)|/q^2n · Var_A[S_n,Q(A)] = min{n,2deg(Q)-1}. See Theorem <ref>. This should be compared to (<ref>). For definiteness, we could replace "sufficiently large" by deg(Q)>900, but we do not believe this bound to be optimal. We also do not believe the hypothesis on (𝔪,Q) is necessary (cf.
Remark <ref>). The fact that the expression for the variance depends on 2deg(Q) is a direct consequence of the fact that the associated L-functions have degree two. (For an L-function of degree r, one will get a leading term of r·deg(Q) instead.) This then leads to there being two ranges of behaviour. The analogues of our main results in the number field setting are formulae for the variance of Λ_F when summed over arithmetic progressions (a similar case to when these sums are considered in short intervals, as in (<ref>) and (<ref>)). For example, if we take a rational elliptic curve and write the number of points over the field of p elements as N_p=p+1-a_p and the number of points over an extension field of degree m as N_{p^m}=p^m+1-a_{p^m}, then our function field theorems are analogous to considering the fluctuations of the sum of a_{p^m}, weighted by the logarithm of p, over residue classes of p^m modulo c. §.§ Underlying equidistribution theorem The key ingredients we use to prove Theorem <ref> and its generalizations are the Mellin transform and Deligne's equidistribution theorem. More precisely, we start with a lisse sheaf ℱ on a dense open T⊂ℙ^1[1/𝔪] and twist it by variable Dirichlet characters χ with square-free conductor Q to obtain a family of lisse sheaves ℱ_χ on T[1/Q]; this family is a Mellin transform of ℱ. One can associate a monodromy group G_ℱ to this family, generated by Frobenius conjugacy classes Fr_χ for variable Dirichlet characters χ over finite extensions of 𝔽_q. A priori G_ℱ is reductive and defined over Q̄_ℓ, but Deligne's Riemann hypothesis allows us to associate the classes Fr_χ, for `good' χ, to well-defined conjugacy classes in a compact form of the `same' reductive group over ℂ. Deligne's equidistribution theorem implies these classes are equidistributed. For our applications, we need equidistribution in a unitary group U_R(ℂ), and thus we need G_ℱ to be as big as possible, namely the full group GL_R over Q̄_ℓ. We were only able to prove this is the case under the hypotheses that deg(Q)≫ 1 and that ℱ has a unipotent block of exact multiplicity one about the zero t=0 of (𝔪,Q). On one hand, while we do expect that one may encounter exceptions when deg(Q) is small, we do not believe our lower bound on deg(Q) is sharp. On the other hand, the hypothesis on the monodromy about the unique prime dividing (𝔪,Q) was made in order to ensure we could exhibit elements of G_ℱ whose existence helped ensure the group was big. We conjecture one still has big monodromy under the weaker hypothesis that (𝔪,Q)=1.
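For the Legendre curve these local quantities are elementary to compute. A minimal sketch for the degree-one primes π = t - c (so 𝔽_π = 𝔽_q): counting points via the quadratic character χ of 𝔽_q (Euler's criterion) gives a_π = -∑_x χ(x(x-1)(x-c)). The choice q = 11 is an arbitrary small odd prime.

```python
q = 11                                    # arbitrary small odd prime

def chi(x):                               # quadratic character of F_q, chi(0) = 0
    v = pow(x % q, (q - 1) // 2, q)       # Euler's criterion
    return 0 if v == 0 else (1 if v == 1 else -1)

def a(c):                                 # trace of Frobenius at pi = t - c
    return -sum(chi(x * (x - 1) * (x - c)) for x in range(q))

print({c: a(c) for c in range(2, q)})     # good reduction at c != 0, 1
print(all(a(c) ** 2 <= 4 * q for c in range(2, q)))  # Hasse bound |a_pi| <= 2 sqrt(q)
```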
§.§ Overview The structure of this paper is as follows. We start in <ref> by establishing notation and relatively basic facts that we need throughout the rest of the paper. In <ref> we define two L-functions that one can attach to a Galois representation ρ: the complete L-function L(T,ρ) and a partial L-function L_S(T,ρ). The former may be defined in terms of an Euler product over all places of the function field 𝔽_q(t), and for the latter we exclude the Euler factors indexed by a finite set S of places in 𝔽_q(t). If the excluded Euler factors are in fact trivial, then the two L-functions will coincide, but otherwise they will not. Either way, after imposing requisite hypotheses on the representation ρ, we apply the theory of L-functions and also Deligne's theorem to deduce information about their degrees and zeros. In <ref> we consider twists of the representation ρ by Dirichlet characters χ of square-free conductor Q. The material in this section is mostly a recasting of the results in <ref> in a manner which is convenient for us. The main objects of interest are the complete L-function L(T,ρ⊗χ) and the partial L-function L_S(T,ρ⊗χ). In <ref> we recall the notion of a good character χ for ρ: it is a character such that L(T,ρ⊗χ) and L_S(T,ρ⊗χ) are both polynomials and equal to each other. This is precisely the property we need to deduce that they are `pure', that is, that their zeros are Weil numbers, and to produce a unitarized L-function L̂(T,ρ⊗χ). This allows us to associate to each good character χ a conjugacy class θ_ρ,χ in a unitary group U_R(ℂ) for R=deg(L_S(T,ρ)). We define what it means for the resulting multiset of conjugacy classes Θ_ρ,q to be equidistributed in U_R(ℂ) as q→∞. Essentially it says that for any representation Λ: U_R(ℂ)→GL_n(ℂ), the average of tr(Λ(θ_ρ,χ)) over the good χ tends to the value of a matrix integral ∫_U_R(ℂ) tr(Λ(θ))dθ. We then prove a theorem which asserts that one achieves equidistribution when the Mellin transform of ρ has big monodromy. In <ref> we introduce the arithmetic functions of interest to us. More precisely, we define a generalization Λ_ρ of the von Mangoldt function and consider sums of its values in an arithmetic progression modulo Q. For each n, we consider the expected value and variance of these sums as A varies uniformly over Γ(Q). We show how to evaluate the limit of both quantities as q→∞ under the hypothesis that the Mellin transform of ρ has big monodromy. As mentioned above, we use this hypothesis to deduce that the conjugacy classes Θ_ρ,q are equidistributed and then to evaluate the variance in terms of an easy-to-evaluate matrix integral. In <ref> we prove a theorem which asserts that the Mellin transform of ρ has big monodromy provided ρ satisfies certain hypotheses. The material in this section rests heavily on the monumental works of Katz, most notably the monograph <cit.>. In order to prove our result, we were forced to impose the condition that the (square-free) conductor 𝔪 of ρ and the twisting conductor Q satisfy deg((𝔪,Q))=1. We also imposed conditions on the local monodromy of ρ at the zero of (𝔪,Q). We used both of these hypotheses to deduce that the relevant monodromy groups contained an element so special that the group was forced to be big (e.g., for the specific example considered in Theorem <ref> one obtains pseudoreflections). While the specific result we proved is new, it borrows heavily from the rich set of tools developed by Katz, and one familiar with his work will easily recognize the intellectual debt we owe him. In <ref> we bring everything together and show how Galois representations arising from (Tate modules of)
certain abelian varieties satisfy the requisite properties to apply the theorems of the earlier sections.More precisely, we consider Jacobians of (elliptic and) hyperelliptic curves of arbitrary genus, the Legendre curve being one such example.Because we chose to work with hyperelliptic curves we were forced to assume q is odd.Nonetheless, we expect one can find other suitable examples in characteristic two.There are two appendices to the paper containing material we needed for the results in Section <ref>.In the first appendix we prove the group-theoretic result which asserts that a reductive subgroup of _ with the sort of special element alluded to above is big.In the second appendix we recall much of the abstract formalism required to define the monodromy groups which we want to show are big.While none of this material is new, it elaborates on some of the facts which we felt were not always easy to give a direct reference for in <cit.>.In particular, our work should not be regarded as a substitute for Katz's original monograph, but we hope some readers will find it an acceptible complement to his masterful presentation. § FRAMEWORK§.§ Notation Let q be the power of an odd prime p,be the finite field with q elements, and K be the global field (t).Letbe the places of K and _d be the finite subset of places of degree d.For each v∈, letbe its residue field and d_v=[:] be its degree.If v is a finite place, then it corresponds to a monic irreducible π∈[t], andis the quotient ring [t]/π.On the other hand, the residue field of the unique infinite place v=∞ can be regarded as the quotient ring [u]/u by taking u=1/t.Let [t] be the subset of monic polynomials and _d be the subset of polynomials of degree d.Let Å_d_d be the subset of irreducible polynomials and vÅ_d→_d be the map which identifies an irreducible π with its corresponding finite place v(π).Letbe a separable closure of K andbe the algebraic closure of K.Let =(/K) and G_=(/), and let G̅_K G_K be the stabilizer ofso that there is an exact sequence1 G̅_K G_K G_ 1of profinite groups.Given a quotient Q of profinite groups, we write Q̅ Q for the image of G̅_K and call it the geometric subgroup.For each v∈, we fix a decomposition group D(v), that is, a representative of its conjugacy class; equivalently we fix a place ofover v.Let I(v) D(v) be the inertia subgroup and P(v) I(v) be the wild inertia subgroup (i.e., the p-Sylow subgroup).The quotient =D(v)/I(v) is the absolute Galois group of , and we write _v∈for the Frobenius element _q^d_v.For each subset S, letbe the maximal subextension unramified away from S andbe the maximal subextension tamely ramfied over S.Both extensions are Galois over K, so we writeandfor their respective Galois groups.There is a commutative diagram[rr][dr] [dl]of quotients.If v∉S, then the inertia subgroup I(v) is contained in the kernel of the horizontal map.In particular, every element of the coset _v I(v) maps to the same element ofwhich we denote _v∈.Moreover, the kernel of the horizontal map is generated by the conjugates of the I(v) for v∉S, and it and the conjugates of the P(v) for v∈ S generate the rest of the kernel of the other map from .Given a number field E, we write _E for the ring of integers.Given a maximal prime ł_E, we write ℓ∈ for the rational prime it divides and E_ł for the ł-adic completion of E.We also write E̅_ł for an algebraic closure of E_ł, e.g.,is an algebraic closure of .Given a smooth geometrically connected curve U over , we writefor the base change curve U×_.We fix (but do not name) 
a geometric generic point of U and write U andfor the arithmetic and geometric étale fundamental groups of U respectively.Moreover, if T is a second smooth geometrically connected curve overand if T→ U is a finite étale cover, then we implicitly suppose the geometric generic point of T maps to that of U and write T→U for the induced inclusion of fundamental groups.Given a sheafon U, we suppose thatis constructible, and unless stated otherwise we suppose it has coefficients in .We also write H^i(,) and H^i_c(,) for the étale cohomology groups of .For each integer n, we write (n) for the Tate twisted sheaf ⊗_(n) and recall that(1-T _q| H^i(,(n))) = (1-q^nT _q| H^i(,)).A similar identity holds for cohomology with compact supports (cf. <cit.>).In particular, we have identities(H^i(,(n))) = (H^i(,)), (H^i_c(,(n))) = (H^i_c(,))for every i and n.The sheafis lisse (or locally constant) on U if and only it corresponds to a continuous representation U→(V) from the étale fundamental group to a finite-dimensionalvector space V (cf. <cit.>).In that case one has identificationsH^0(,) = V^ H^2_c(,(2)) = V_with the subspace of -invariants and quotient space of -coinvariants (see <cit.>).§ L-FUNCTIONS Let ℓ be a prime distinct from p andbe a finite-dimensional -vector space.Let ∈[t] be monic and square free,be the subset consisting of ∞ and v(π) for every prime factor π of , andbe a finite subset of places.Suppose ρ is a homomorphismρ→()which is continuous with respect to the profinite topologies and which has trivial geometric invariants (i.e., the subspace of G̅_K,-invariants ofvanishes).In this section, we define, for each v∈, the Euler factor L(T,ρ_v)∈[T] of a local representation ρ_v G_v→(_v), as well as L-functionsL(T,ρ) = ∏_v∈ L(T^d_v,ρ_v)^-1, (T,ρ) = ∏_v∉ L(T^d_v,ρ_v)^-1and cohomological factorsi(T) = (1-T _q| H^i_c([1/], ρ)) i=1,2(see <ref>).We also define numerical invariants of ρ, including (ρ), ρ, and (ρ), and we show that ((T,ρ)) and (L(T,ρ)) equal(ρ) = (ρ) - ρ + (ρ) + (()-1)·()and(ρ) = (ρ) + (ρ) - 2·()respectively (see <ref>).Finally, we define what it means for ρ to be punctually ι-pure of weight w and use Deligne's Riemann hypothesis to derive some consequential properties of the L-functions (see <ref> and <ref>).Using these definitions we then given the main result of this section, Theorem <ref>, in <ref>. §.§ Galois modules versus sheaves While most of this paper uses the language of global fields, it is useful to adopt a geometric language.Certain readers will find the latter language more to their taste, and we acknowledge that many of our results may have a more appealing formulation in the language of geometry (and sheaves).However, we felt the language of Galois representations over global (function) fields was accessible to a broader audience, so we tried to do `as much as possible' in that language.§.§ Middle extensions Let U X be dense Zariski open subsets and j U→ X be the inclusion, and letbe a sheaf on X.Suppose everything is defined overso that the fiber _ ofover the geometric point =Spec() is a G_K-module.If the restriction j^* is lisse on U, then the fiber _ is even a module over the étale fundamental group .Conversely, for every continuous homomorphism →(), there is a lisse -sheaf on U whose fiber overis the -module .Given a sheafsheaf on U (e.g., j^*), there are two functorial extensions ofto a sheaf on all of X we wish to consider, the extension by zero j_! 
and the direct image j_*.(One can also consider hybrid versions such as j”_!j'_* for inclusions j' U→ U' and j” U”→ X, but we do not need such versions.)Asandvary we have_X(j_!,) = _U(,j^*) _X(,j_*) = _U(j^*,),that is, the functors j_!,j_* are adjoints of j^* (cf. <cit.>).In particular, the adjoints of the identity j^*→ j^* are maps of the form j_!j^*→ and → j_*j^* which we call adjunction maps.We say thatis supported on U iff the first map is an isomorphism, andis a middle extension iff the second map is an isomorphism for every j. *If j^* is lisse and → j_*j^* is an isomorphism, thenis a middle extension.*Ifis lisse, then j_* is a middle extension. Let U' X be a dense Zariski open and U”=U∩ U'.Consider the commutative diagramU”[r]^i'[d]_iU'[d]^j' U[r]_j Xof inclusions and the corresponding commutative diagram[ [d][r] j_*j^*[d] j'_*j^' *[r] (ij)_*(ij)^* = (i'j')_*(i'j')^* ]of adjunction maps.Supposeis lisse.On one hand, this implies the map → i_*i^* is an isomorphism, so the right map of (<ref>) is an isomorphism when j^* is lisse.In particular, if the top map of (<ref>) is also an isomorphism, then the left map must also be an isomorphism, for every j, hence (<ref>) holds.On the other hand, the direct image map j_*→ j_*i_*i^* is also an isomorphism.It even coincides with the adjunction map j_*→ j'_*j^' *j_* via the functorial identities j_*i_*i^* = j'_*i'_*i^* = j'_*j^' *j_*, so (<ref>) holds. The following proposition shows that there is a canonical middle extension sheaf onwe can associate to ρ.We denote it and its restriction to X by ρ. There is a middle extensionwith _= as G_K-modules, and it is unique up to isomorphism.There are quotients G_K and G_K, so _ andare G_K-modules.Moreoever, if U' U is a sufficiently small dense Zariski open, then there exist a quotientand a unique lisse sheafon U' with _= as -modules.Its direct image ρ on X is a middle extension by Lemma <ref>.<ref>, and ρ_=_= as -modules by construction.Letbe any middle extension with _= as -modules; we must show it isomorphic to ρ.Up to shrinking U, we may suppose that ρ andare lisse on U and thus ρ_,_ are -modules.Then the canonical bijection ρ_→_ extends uniquely to an isomorphism j^*ρ→ j^* of lisse sheaves.Moreover, the direct image j_*j^*ρ→ j_*j^* and adjunction maps ρ→ j_*j^*ρ and → j_*j^* are all isomorphisms, so there exists an isomorphism ρ→ as claimed.Let ' be a finite subset containingand ρ' G_K,'→() be the composition of ρ with the natural quotient G_K,'.Then ρ and ρ' are isomorphic.The quotient → factors as G_K,', and ρ'_==ρ as -modules.Since ρ,ρ' are both middle extensions, Proposition <ref> implies they are isomorphic.§.§ Euler characteristics Letbe a sheaf on U.Then there is an exact sequence0 j_! j_*_ 0 where _ is a skyscraper sheaf supported on Z= U, and the corresponding long exact sequence of (étale) cohomology (over ) can be written⋯→ H^n(,_) → H^n+1_c(,) → H^n+1(,j_*) →⋯where n∈. 
There exist exact sequences0 → H^0_c(,) → H^0(,j_*) → H^0(,_) → H^1_c(,) → H^1(,j_*) → 0and0 H^2_c(,) H^2(,j_*) 0and all other cohomology groups in (<ref>) vanish.The first term of (<ref>) vanishes unless n=0 since (Z)=0, and the other two terms vanish for n+1≠ 0,1,2 since U andare curves.Therefore (<ref>) breaks into the pieces (<ref>) and (<ref>), and all other terms vanish.If U=, then the middle term of (<ref>) vanishes, and otherwise the first term vanishes since any curve U⊊ is affine.Either way, the Euler characteristicsχ(,j_*) = ∑_n=0^2 (-1)^n(H^n(,j_*)), χ_c(,j_*) = ∑_n=0^2 (-1)^n(H^n_c(,j_*)),and χ(,_)=(H^0(,_)) satisfyχ(,j_*) - χ_c(,) = χ(,_) = ∑_z∈ Z(z)·(^I(z)_).§.§ L-functions of ρ The decomposition group D(v) stabilizes the subspace =, and I(v) acts trivially on it, so there is a representation →(). We identify the subspace V_v V and the representationwith ageometric fiber of ρ (cf. <cit.>).The Euler factor of ρ at v is given byT = (1 - T(_v)|)∈[T],and its degree equals the dimension of V_v.The partial and complete L-functions of ρ are the formal power series in [T] with respective Euler products(T,ρ) = ∏_v∉T^d_v^-1 L(T,ρ) = ∏_v∈T^d_v^-1.If U=[1/], then they equal the L-functions of the sheaves j_!j^*ρ and ρ, and the ratio(T,ρ) = L(T,ρ)/(T,ρ) = ∏_v∈ L(T^d_v,ρ_v)^-1is the L-function of the restriction of ρ to Z and hence is the reciprocal of a polynomial.The étale cohomology groups of these sheaves are finite-dimensional -vector spaces, and _q acts -linearly on them.In particular, we have characteristic polynomialsn(T) = (1-T _q| H^n_c([1/], ρ))which are trivial for i≠ 1,2 sinceis an affine curve, and they satisfy(T,ρ) = 1(T,ρ) / 2(T,ρ).Similarly, the characteristic polynomialsP_n(T,ρ) = (1-T _q| H^n(,ρ)).are trivial for i≠ 0,1,2 sinceis a complete curve, and they otherwise satisfyL(T,ρ) = P_1(T,ρ)/P_0(T,ρ)P_2(T,ρ).Moreover, the degrees are related to the respective Euler characteristics via the identities((T,ρ))=-χ_c(,ρ) (L(T,ρ))=-χ(,ρ).§.§ Numerical invariants of ρ Let_v(ρ) = (L(T,ρ_v)) and _v(ρ)=()-_v(ρ), and let _v(ρ) be the Swan conductor ofas an [I(v)]-module (see <cit.>).We call these andρ = ∑_v∈ d_v·_v(ρ).the local invariants of ρ and(ρ) = (), (ρ) = ∑_v∈ d_v·_v(ρ), (ρ) = ∑_v∈ d_v·_v(ρ)are the global invariants.The latter remain unchanged if we replaceby a finite extension.χ(,ρ) = 2·(ρ) - ( (ρ) + (ρ) ) Suppose ρ is lisse on U since ρ is a middle extension.On one hand, the Euler-Poincare formula, as proved by Raynaud <cit.>, assertsχ_c(,ρ) = (ρ) · (2-(Z)) - (ρ).On the other hand, a short calculation showsχ(,ρ) = (Z)·(ρ) - (ρ)since ρ is also a middle extension, and thusχ(,ρ) = χ_c(,ρ) + χ(,ρ) = 2·(ρ) - (ρ) - (ρ)as claimed.If ρ is supported on [1/], then χ_c([1/],ρ)=χ(,ρ), andχ_c([1/],ρ) = (1-())·(ρ) - ( (ρ) - ρ + (ρ) )in general.If ρ is supported on [1/], then ρ=()·(ρ) and ()=1+(), so it suffices to show (<ref>) holds in general.There is a canonical bijection Z= when U=[1/], so the desired identity follows easily from the identitiesχ_c([1/],ρ) = χ(,ρ) - χ(,ρ)andχ(,ρ) = ()·(ρ) - ρand from the identity in Proposition <ref>.§.§ Purity Let ι→ and → be field embeddings.A non-zero polynomial ψ∈[T] is ι-pure of q-weight w iff every zero α∈ is a q-Weil number of weight w, that is, lies inand satisfies|ι(α)|^2=(1/q)^w.It is pure of q-weight w iff it is ι-pure of q-weight w for every ι, and it is (ι-)mixed of q-weights ≤ w iff it is a product of (ι-)pure polynomials each of q-weights ≤ w.Our terminology is unconventional in that we incorporate q, however, we need to make q explicit 
since we have not said where ψ comes from. If M is an invertible d× d matrix with coefficients in E̅_ł and if det(1-M T) is mixed of q-weights ≤ w, then tr(M)∈E̅ and |ι(tr(M))|≤ d q^w/2 for every field embedding ι:E̅→ℂ. If M is invertible and ψ(T)=det(1-M T) is mixed, there exist β_1,…,β_d∈E̅^× such that ψ(T) = ∏_i=1^d (1 - β_i T) = 1 - tr(M)· T + ⋯ + (-1)^d·det(M)· T^d and such that tr(M)=β_1+⋯+β_d also lies in E̅. Therefore, if ι:E̅→ℂ is a field embedding, then |ι(tr(M))| = | ∑_i=1^d ι(β_i) | ≤ ∑_i=1^d |ι(β_i)| = dq^w/2 as claimed. The representation ρ is punctually (ι-)pure of weight w iff T is (ι-)pure of q^d_v-weight w for all v∉S. Equivalently, we want T^d_v to be pure of q-weight w for all v∉S. The modifier punctually should remind the reader that the definition is local. If ρ is punctually ι-pure of weight w, then the cohomological factors P_n,(T,ρ) are ι-mixed of q-weights ≤ w+n and the factors P_n(T,ρ) are ι-pure of q-weight w+n. See Theorems 1 and 2 of <cit.> for the respective assertions about P_n,(T,ρ) and P_n(T,ρ). Let be a middle-extension sheaf on . We say that it is punctually (ι-)pure of weight w iff for some dense Zariski open subset U on which it is lisse, the corresponding representation of U is punctually (ι-)pure of weight w. Let j: U→ be the inclusion of a dense Zariski open subset and Z= U. If the sheaf is lisse on U and punctually ι-pure of weight w, then det(1-T Fr_q| H^0(,j_*)) is ι-mixed of q-weights ≤ w. See <cit.>. §.§ Semisimplicity and irreducibility Consider an exact sequence of modules 0→ V_1→ V→ V_2→ 0, and let ρ_i: →GL(V_i) be the corresponding structure homomorphism for i∈{1,2}. A priori, (<ref>) does not split, but we say ρ is arithmetically semisimple iff the sequence splits for every invariant subspace V_1. By Clifford's theorem, the condition implies that ρ is geometrically semisimple, since the geometric Galois group is normal in the arithmetic one (cf. <cit.>), that is, every geometrically invariant subspace of V has a geometrically invariant complement, but the converse need not be true. We say that ρ is geometrically simple iff ρ is irreducible and geometrically semisimple. It is equivalent to assuming ρ is geometrically irreducible, that is, there are no non-zero proper subsheaves over . If ρ is punctually ι-pure, then it is geometrically semisimple, and in particular, the subspace of V of geometric invariants is trivial if and only if the quotient space of V of geometric coinvariants is trivial. One can rephrase semisimplicity for ρ in terms of semisimplicity for ρ (cf. <cit.>). It follows that both are geometrically semisimple if ρ is ι-pure (see <cit.>). In particular, ρ has trivial geometric invariants if and only if it has trivial geometric coinvariants, hence H^0(,ρ) vanishes if and only if H^2(,ρ) does. If ρ is punctually ι-pure, then the following are equivalent: *L(T,ρ) is in (T) but not [T];*V^ and V_ vanish;*P_0(T,ρ) and P_2(T,ρ) are non-trivial polynomials in [T];*V_ vanishes;*P_2(T,ρ) is a non-trivial polynomial in [T]. On one hand, Theorem <ref> implies that the cohomological factors P_n(T,ρ) are relatively prime, so (<ref>) and (<ref>) are equivalent. Moreover, (<ref>) and (<ref>) (resp.
(<ref>) and (<ref>)) are equivalent by (<ref>) and (<ref>).On the other hand, Proposition <ref> implies that P_0(T,) is trivial if and only if P_2(T,) is trivial, so (<ref>) and (<ref>) are equivalent.If ρ is punctually ι-pure and has trivial geometric invariants, then H^i(,ρ) and H^i_c(,ρ) vanish for i≠ 1, and there is an exact sequence0 H^0(,ρ) H^1_c(,ρ) H^1(,ρ) 0.Therefore L(T,ρ)=P_1(T,ρ) and (T,ρ)=P_1,(T,ρ).Suppose ρ is punctually ι-pure and has trivial geometric invariants so that Proposition <ref> implies ρ has trivial geometric coinvariants.We claim H^i(,ρ) vanishes for i≠ 1.The Corollary then follows by observing that (<ref>) simplifies to (<ref>) and that H^2_c(,ρ) vanishes by (<ref>).The claim is independent of U, so up to shrinking U, we suppose j^*ρ is lisse.ThenH^0(,ρ) = H^0(,ρ) H^2(,ρ) = H^2_c(,ρ)are the subspace of -invariants and (a Tate twist of the) quotient space of -coinvariants respectively ofby (<ref>).The claim is also independent of , so up to replacingby a finite superset in , we suppose ρ factors through a natural quotient .Then the cohomology spaces in question are the -invariants and -coinvariants of , which are trivial by hypothesis, so H^i(,ρ) vanishes for i≠ 1 as claimed.The following are equivalent: *(T,ρ)=1, that is, ρ is supported on [1/];*(T,ρ) is a polynomial which is ι-pure of q-weight w+1.Note, (T,ρ) is the L-function of the restriction of ρ to Z, so the former is trivial if and only if the latter is. If (<ref>) holds, then the subspace of I(∞)-invariants ofis trivial, so a fortiori, the subspace of -invariants is trivial.Therefore Corollary <ref> implies (T,ρ) equals L(T,ρ)=P_1(T,ρ) and hence Theorem <ref> implies (<ref>) holds.If (<ref>) holds, then P_2,(T,ρ) divides P_1,(T,ρ) by (<ref>).Theorem <ref> implies P_2,(T,ρ)=P_2(T,ρ) is ι-pure of q-weight w+2, so it is coprime to P_1,(T,ρ) and hence trivial.Therefore H^2(,ρ) vanishes, and hence H^0(,ρ) also vanishes since ρ is geometrically semisimple.That is, ρ has trivial geometric invariants.Moreover, 1/(T,ρ) is a polynomial which is ι-mixed of q-weights ≤ w by Lemma <ref> while L(T,ρ) is a polynomial which is ι-pure of q-weight w, so Corollary <ref> implies (<ref>) holds.§.§ Main Theorem The following theorem is the main result of Section <ref>.The essential ingredient it uses is Deligne's Riemann hypothesis. Suppose ρ is punctually ι-pure.Corollary <ref> implies that it has trivial geometric invariants if and only if L(T,ρ) is a polynomial if and only if P_2,(T,ρ) is trivial, so suppose these equivalent conditions hold.On one hand, Corollary <ref> implies L(T,ρ)=P_1(T,ρ) and (T,ρ)=P_1,(T,ρ), so both are polynomials are claimed.Moreover, Proposition <ref> implies(L(T,ρ)) = (ρ) + (ρ) - 2·() = (ρ)and Corollary <ref> implies ((T,ρ)) = -χ_c([1/],ρ) = (ρ)as claimed.On the other hand, Theorem <ref> implies L(T,ρ) is pure of q-weight w+1 and (T,ρ) is mixed of q-weights ≤ w+1 since ρ is punctually pure of weight w.Moreover, Lemma <ref> implies that (T,ρ)/L(T,ρ)=1/(T,ρ) is a polynomial which is ι-mixed of q-weights ≤ w, so L(T,ρ) is the largest ι-pure factor of (T,ρ) of q-weight w+1 as claimed. 
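The simplest instance of this theorem, with ρ trivial of weight w = 0, can be checked by brute force: the twisted L-functions are then Dirichlet L-functions over 𝔽_q[t]. The sketch below takes q = 3 and an order-eight (hence nontrivial on 𝔽_3^×) character χ of conductor Q = t^2 + 1, verifies that the coefficients of L(T,χ) = ∑_f χ(f) T^deg f vanish for deg f ≥ deg Q (so L is a polynomial), and confirms that the inverse root is a Weil number of q-weight w + 1 = 1, i.e. of absolute value √q. All parameter choices and helper names are illustrative.

```python
import itertools
import numpy as np

p = 3
Q = (1, 0, 1)                             # Q = t^2 + 1, irreducible over F_3

def polmul(a, b):                         # coefficient tuples, constant term first
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % p
    return tuple(c)

def polmod(a, m):                         # remainder modulo the monic polynomial m
    a = list(a)
    while len(a) >= len(m):
        top = a[-1]
        if top:
            k = len(a) - len(m)
            for i, mi in enumerate(m):
                a[k + i] = (a[k + i] - top * mi) % p
        a.pop()
    while a and a[-1] == 0:
        a.pop()
    return tuple(a)

# (F_3[t]/Q)^x is cyclic of order 8, generated by g = t + 1; set chi(g) = zeta_8.
dlog, x = {}, (1,)
for k in range(8):
    dlog[x] = k
    x = polmod(polmul(x, (1, 1)), Q)
zeta = np.exp(2j * np.pi / 8)

def chi(f):                               # chi(f) = zeta_8^(dlog of f mod Q), 0 if Q | f
    r = polmod(f, Q)
    return zeta ** dlog[r] if r in dlog else 0.0

coeff = [sum(chi(tail + (1,)) for tail in itertools.product(range(p), repeat=n))
         for n in range(5)]               # c_n = sum of chi(f) over monic f of degree n
print(np.round(coeff, 8))                 # c_n = 0 for n >= deg Q = 2: L is a polynomial
print(abs(coeff[1]), "=", np.sqrt(3))     # inverse root -c_1 is pure of q-weight 1
```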
§ TWISTED L-FUNCTIONS Recall we have a finite-dimensional -vector space V and a (continuous) representationρ→(V).We fix a field embedding ι→ and suppose ρ is punctually ι-pure of weight w so that we can apply the results of the previous section.Let ,∈[t] be monic and square free, and supposeis the finite subset consisting of ∞ and v(π) for every prime factor π of .Letbe defined similarly and =∪.Letbe the finite groupandbe the dual group of all Dirichlet characters→of conductor dividing .For each , we define a twisted representation→(V_)where V_=V as -vector spaces (see <ref>).We show thatis also punctually ι-pure of weight w and that the corresponding L-functionsL(T,) = ∏_v∈ L(T^d_v,()_v)^-1, (T,) = ∏_v∉ L(T^d_v,()_v)^-1are ι-mixed (see <ref> and <ref>). The proof is in <ref>. §.§ Dirichlet characters By definition, each ∈ is a homomorphism →.There is also a quotient G_K, from abelian class field theory, and we writeG_K,→_1()for the composition of these maps and the canonical isomorphism →_1().The corresponding middle-extension sheafis a so-called Kummer sheaf.It is tamely ramified oversince the hypothesis thatis square free implies thathas order prime to p and thus (P(t)) is trivial for every t∈.There is a natural quotientsince , and we writefor the composition of this quotient and . §.§ Tensor products The tensor product of ρ andis the representationρ⊗→(V_) given by ()(g)=ρ(g)(g) where V_=V as -vector spaces.The corresponding Euler factors are given byL(T,()_v) = (1 - T ()_v(_v)| V^I(v)_),and in particular,L(T,()_v) = L((_v)T,ρ_v)for v∉. *If ρ is geometrically simple, then so is .*If ρ is punctually ι-pure of weight w, then so is . If W_ V_ be a G̅_K,-invariant subspace, then W=W_⊗ is a G̅_K,-invariant subspace.Moreover, if ρ is geometrically simple, then W equalsor V, hence W_ equalsor V_.Thus (<ref>) holds.Observe that ζ=(_v) is a root of unity sincehas finite order, hence ζ∈ and |ι(ζ)|^2=1.If v∉ and if α∈ is a zero of L(T,()_v), then (<ref>) implies that α/ζ is a zero of L(T,ρ_v).In particular, |α|^2=|α/ζ|^2=(1/q^d_v)^w, hence L(T^d_v,()_v) is ι-pure of q-weight w for almost all v.Thus (<ref>) holds. Therefore we can apply Theorem <ref> to . () - (ρ) = - ρ and () = (ρ).If v∈, then _v()=_v(ρ) since tensoring with tamely ramified character (e.g., ) does not change the local Swan conductor.Moreover, if v∉, then V and V_ are isomorphic as I(v)-modules, and thus L(T,ρ_v) and L(T,()_v) have the same degree, that is, _v()=_v(ρ).() = (ρ).Combine the lemma and (<ref>) to deduce() = () - + () + (()-1)·(V)= (ρ) - ρ + (ρ) + (()-1)·(V) = (ρ)as claimed.§.§ Induced representations Let L=(u) be the subfield of K corresponding to the finite cover →, and letbe a finite set of places in L including those lying belowand those which ramify in L/K.Then for each ∈, we have an induced representation() G_L,→((V_))where (V_) is a vector space of dimension n·(V_). If ρ is punctually ι-pure of weight w, then so is (). 
Let w be a place in L not lying over the excluded places, and let v|w denote any place in K lying over w. Then L(T^deg(w),(Ind(ρ⊗χ))_w) = ∏_v|w L(T^deg(v),(ρ⊗χ)_v). In particular, Lemma <ref>.<ref> implies the factors on the right are ι-pure of q-weight w, so the left side is also ι-pure of q-weight w. §.§ Proof of Theorem <ref> If ρ is punctually ι-pure of q-weight w, then so is ρ⊗χ by Lemma <ref>.<ref>. Hence, if ρ⊗χ also has trivial geometric invariants, then we can apply Theorem <ref>. In particular, we deduce that L_S(T,ρ⊗χ) and L(T,ρ⊗χ) are polynomials of respective degrees () and (), that L_S(T,ρ⊗χ) is ι-mixed of q-weights ≤ w+1, and that L(T,ρ⊗χ) is the largest factor of L_S(T,ρ⊗χ) which is ι-pure of q-weight w+1. Finally, we observe that () Cor. <ref>= (ρ) (<ref>)= (ρ) - ρ + (ρ) + (()-1)·() Th. <ref>= deg(L(T,ρ)) + (()+1)·() - ρ as claimed. § STATEMENT OF EQUIDISTRIBUTION Recall we have an -vector space V of finite dimension r and a (continuous) representation ρ→GL(V) which is punctually pure of weight w. We also have monic square free 𝔪, Q∈𝔽_q[t] and corresponding finite subsets of supporting places. In this section, we consider the partial L-functions L_S(T,ρ⊗χ) as χ varies and regard them as a proxy for coefficients in a Mellin transform of ρ. One can easily show that there are hardly any characters χ such that ρ⊗χ has non-trivial geometric invariants, and otherwise, having trivial geometric invariants implies L_S(T,ρ⊗χ) is a polynomial in [T] of degree R(ρ) by Theorem <ref>. Moreover, the subset of good characters, { χ : L_S(T,ρ⊗χ)=L(T,ρ⊗χ)∈[T] }, is `big' (see Corollary <ref>) and consists of all χ for which L̂(T,ρ⊗χ) = L(T/(√(q))^1+w,ρ⊗χ) is pure of q-weight zero by Theorem <ref>. In particular, for each good χ, L̂(T,ρ⊗χ) is the characteristic polynomial of a unitary element of GL_R(ℂ), so there is a unique conjugacy class θ_ρ,χ of U_R(ℂ)⊂GL_R(ℂ) whose elements have the same characteristic polynomial. We would like to know whether or not they are equidistributed. We say the multiset Θ_ρ,q = { θ_ρ,χ : χ good } of conjugacy classes becomes equidistributed in U_R(ℂ) as q→∞ iff, for every continuous central function f: U_R(ℂ)→ℂ, one has lim_q→∞ 1/|{χ good}| ∑_χ good f(θ_ρ,χ) = ∫_U_R(ℂ) f(θ)dθ, where dθ is the unique Haar probability measure on U_R(ℂ). Equivalently, by the Peter-Weyl theorem, one has equidistribution if and only if for every irreducible finite-dimensional representation Λ: U_R(ℂ)→GL_n(ℂ) and for f=tr∘Λ, the identity in (<ref>) holds. In principle, one could try to exhibit equidistribution for all characters at once. Instead we follow Katz and (try to) prove simultaneous and uniform equidistribution for certain one-parameter families of characters. More precisely, we partition the characters into cosets of a subgroup (defined in <ref>) and (try to) prove equidistribution for the good characters in each coset. Doing so for a single coset is equivalent to showing that an associated monodromy group we denote below equals GL_R. See <ref>, <ref>, and <ref>. The monodromy group is an algebraic subgroup of GL_R. We say the former is big iff it equals the latter, and we write { χ : χ big } for the subset of big characters. We say that the Mellin transform of ρ has big monodromy iff |{χ big and good}| ∼ |{χ good}| as q→∞, or equivalently (cf. Corollary <ref>), |{χ big and good}| ∼ |{χ}| as q→∞. Suppose ρ is punctually ι-pure and χ is big. Let Λ: U_R(ℂ)→GL_dim(Λ)(ℂ) be a finite-dimensional representation. If q is sufficiently large, then 1/|{χ' good}| ∑_χ' good tr Λ(θ_ρ,χ') = ∫_U_R(ℂ) tr Λ(θ) dθ + o(1) as q→∞, and the implicit constant depends only on r=dim(V) and dim(Λ). In particular, if the Mellin transform of ρ has big monodromy, then Θ_ρ,q is equidistributed in U_R(ℂ). The proof is in <ref>.
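The Haar-measure integrals in this definition are easy to approximate by Monte Carlo. A short sketch using scipy's Haar sampler recovers the standard unitary-group moments E[tr U] = 0 and E[|tr U|^2] = 1 (Diaconis-Shahshahani); the dimension R and the sample size are arbitrary choices:

```python
import numpy as np
from scipy.stats import unitary_group     # Haar-distributed unitary matrices

R, samples = 9, 2000
traces = np.array([np.trace(unitary_group.rvs(R, random_state=i))
                   for i in range(samples)])
print(np.round(traces.mean(), 3))                 # E[tr U] = 0 under Haar measure
print(np.round((np.abs(traces) ** 2).mean(), 3))  # E[|tr U|^2] = 1
```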
Observe that the q-weight w of ρ plays no role in the statement of the theorem.This is because we factored out the weight in the normalization (<ref>).Another way to achieve the same renormalization is to replace ρ by an appropriate Tate twist so that w=-1 and(T,) = (T,). §.§ Reduction to _m Letanddenote the projectivet-line andu-line respectively, and let=(u).The function-field embedding→Kgenerated byu↦corresponds to a finite morphism→.The morphism has generic degreen=()and is generically etale sinceis square free of degreen, and it fits in a commutative diagram[1/][r][d]_ [d]^ ÷()[l][d]^ _m[r] {0,∞}[l]where the outer vertical maps are finite morphisms.There are canonical identifications of÷()withand{0,∞}with a setcomposed of two places of the function field(u).For any sheafon thet-line, one can define the direct image sheaf_*.On one hand, the geometric generic fiber of=_*ρis the induced representation(ρ)G_K'→((V))where(V)is a vector space of dimensionn·(V)(cf. <cit.>). Moreover, ifis a geometric closed point of, that is, a closed point of×_, and if^-1()={_1,…,_m}×_, then the various geometric fibers satisfy(_*)_ = H^0(,_*) = ⊕_i=1^m H^0(_i,) = ⊕_i=1^m __ias-vector spaces (cf. <cit.>).In particular, ifis supported on[1/], then_*is supported on_m.On the other hand, the functorial properties of_*yield canonical isomorphismsH^n(,) = H^n(,_*) H^n_c([1/],) = H^n_c(_m,_*)for eachn.For example,_*is exact sinceis a finite map, so the first identity in (<ref>) is a consequence of the (trivial) Leray spectral sequence (cf. <cit.>).In particular, the identities (<ref>), (<ref>), and (<ref>) jointly imply thatL(T,) = L(T,_*) (T,) = L_(T,_*)for∈. §.§ One-parameter families Recall∈[t]is monic and square free and→is the function-field embedding which sendsuto.The norm map→is multiplicative and sendstto(-1)^nuforn=().It also induces homomorphismsν→ν^* →where=([u]/u[u])^×andis its dual.In particular,νis surjective, so its dualν^*is injective, and we can identifywith its image.Moreover, as the following lemma shows, twisting by elements of the cosetis the `same' as twisting by elements of. Let ∈ and α∈. * _* is isomorphic to ().* _*α^ν is isomorphic to ()⊗α. By <cit.>, _* is a middle extension, and since it is generically equal to the middle extension sheaf (), Proposition <ref> implies part (<ref>) holds.Up to replacing ρ by , we suppose without loss of generality that =.Let T be a dense Zariski open subset and U=(T).Suppose that U so that ^*α is lisse on T, that the restriction T→ U is , and that ρ is lisse on T.Let i T→ and j U→ be theinclusions.We have ρ⊗α^ν≃ i_*i^*(ρ⊗α^ν) ≃ i_*i^*(ρ⊗α^ν) ≃ i_*i^*(ρ⊗^*α)since each of the sheaves is a middle extensions and lisse on T.Therefore the projection formula implies _*ρ⊗α^ν≃_*(i_*i^*(ρ⊗^*α)) ≃ j_*j^*(_*ρ⊗α)since each of the sheaves is lisse on U and a middle extension on(by part (<ref>)) and since T→ U is .Finally,j_*j^*(_*ρ⊗α) ≃ j_*j^*((ρ)⊗α) ≃(ρ)⊗αand thus part (<ref>) holds.§.§ Properties preserved by _* We say a character∈is good for ρ or simply good iff it lies in the subsetρdefined in (<ref>).When=tand thus[1/]=_m, then Lemma <ref> and the following lemma together show that our notion of good coincides with that of Katz's (cf. 
<cit.>): If ∈ and α∈, then the following are equivalent: * α^ν is good for ρ;* α^ν is supported on [1/];* ()⊗α is supported on _m;* α∈ is good (à la Katz) for _*.Corollary <ref> implies the first conditions (<ref>) and (<ref>) are equivalent.Conditions (<ref>) and (<ref>) are equivalent by the identity in (<ref>) for ∈'.Finally, taking =t and applying the equivalence of (<ref>) and (<ref>) yields the equivalence of (<ref>) and (<ref>). Letρbe the complementρandρ=ρ∩.|ρ|≤(1+())·(ρ).If ∈ρ, thenit coincides with some tame character of ρ at some v∈, and there are at most (1+())·(ρ) such characters.Compare <cit.>. |ρ|∼|| as q→∞.Observe that Corollary <ref> implies || - |ρ| = |ρ| = ∑_ |ρ| ≤ O(||/||) = o(||)as q→∞.§.§ Tannakian monodromy groups Suppose=tand thus=={0,∞}and=.Suppose moreover thatρis geometrically simple and(V)>1so that no geometric subquotient ofρis a Kummer sheaf.Letj_m→be the inclusion, letj_0_m→be the inclusion map, and for eachα∈, letω_α(ρ) = H^1_c(,j_0*j^*ρ⊗α).It is aG_-module, that is,_qacts functorially, and it corresponds to a well-defined conjugacy class of elements_,α(ω(ρ))whereω(ρ)=ω_1(ρ)and1∈is the trivial character.Moreover, ifαis good, thenω_α(ρ) = H^1_c(_m,ρ⊗α),and in particularL_(T,ρ⊗α) = (1-_α T|ω(ρ)).In a way we will not make precise here, the_α`generate'ℓ-adic reductive subgroupsρρ_,which are well-defined up to conjugacy.They are fundamental groups of certain Tannakian categories, and we call them the Tannakian monodromy groups of ρ.See Appendix <ref> for details.We say the Mellin transform ofhas big Tannakian monodromy iffρ=_,.For generaland∈, we writeρρ_,for the Tannakian monodromy groups of(), and we say that the Mellin transform ofhas big Tannakian monodromy iffρ=_,.Now the action of_qonω_α()corresponds to a well-defined conjugacy class_,αρ. §.§ Proof of Theorem <ref> We may suppose without loss of generality thatΛis irreducible since it is semisimple and(Λ_1⊕Λ_2)=(Λ_1)+(Λ_2)for any representationsΛ_1,Λ_2.Moreover, one can show that∫_U_() Λ(θ) dθ = 1Λ 0so to prove (<ref>) we must show that1/|ρ|∑_'∈ρ Λ() = 1Λ o(1)whenqis large.Ifqis sufficiently large, then Corollary <ref> implies that|ρ| ≤ (1+())·(ρ) < ||and thusρis non-empty.In particular, the left side of (<ref>) is defined for largeq, and it is identically1whenΛis the trivial representation.On the other hand, ifΛis non-trivial and ifqis bigger than(|ρ|+1)^2, then <cit.> implies that1/|ρ|| ∑_'∈ρ Λ() | ≤ ((V) + (Λ)) ( 1/√(q) + 1/√(q)^3).Thus (<ref>) holds, as claimed, and the implicit constant depends only onrand(Λ).To complete the proof of the theorem we must show thatbecomes equidistributed inU_().We observe that| Λ()|≤(Λ) '∈ρTherefore∑_∈ρ Λ() = ∑_∈ρ Λ() + o(1)·|ρρ|whereρ = ρ∩ρ.In particular, if the Mellin transform ofρhas big monodromy, that is, if (<ref>) holds, then|ρρ|/|ρ| = o(1) q→∞and thus1/|ρ|∑_∈ρ Λ() (<ref>)=1/|ρ|∑_∈ρ Λ() + o(1)· O((Λ)) (<ref>)=∫_U_() Λ(θ) dθ + o(1)asq→∞.Thereforebecomes equidistributed inU_()as claimed. An examination of the above proof will show that one does not need to suppose q→∞ by taking q=p^m and letting m→∞.Indeed, the key identities (<ref>) and (<ref>) are valid even if one takes q=p and p→∞ in .This would allow one to prove `horizontal' variants of Theorem <ref>.Because stating a correspondingly general result would be cumbersome and we do not need such results, we leave the details to an interested reader. 
§ SUMS IN ARITHMETIC PROGRESSIONS In addition to assuming that our representationρ→(V)is punctuallyι-pure of weightw, we suppose thatρis geometrically simple yet not an element ofand that the Mellin transform ofρhas big monodromy.The first hypothesis ensures thathas trivial geometric invariants for every∈while the second allows us to apply Theorem <ref>.In this section, which forms the heart of our paper, we shift gears and analyze the distribution of certain traces indexed by residue classes modulo.More precisely, for each monic irreducibleπ∈, the traces are coefficients of the Euler factorTofv=v(π), and we use them to define a function→satisfyingTd/dTlog(L_{∞}(T,ρ)) = ∑_n=1^∞( ∑_f∈_n(f) ) T^n(see <ref>).In particular, for eachn≥1andA∈, we consider the sum= ∑_f∈_nf≡ A(f),and then we consider the mean and variance of these sums given by_A[] = 1/ϕ()∑_A∈, _A[] = 1/ϕ()∑_A∈| - _A[]|^2respectively.Our main result has two parts.On one hand, we can precisely evaluate_A[]in terms of the coefficientsρcoming from the identityTd/dT(T,ρ) = ∑_n=1^∞ρT^nsatisfied by the normalizedL-function (see <ref>).We can also give bounds for the archimedean norm of these coefficients (see <ref>).On the other hand, we can evaluate_A[]using trace formulae (see <ref>), and its leading order term is the value of a matrix integral onU_()by our hypotheses onρ(see <ref> and <ref>).The value of this integral exhibits a dichotomy depending on whether or notn≤=(ρ), and in particular, the interval of smallngrows with=̊(V)sincedoes.After giving some preliminary results we calculate the mean and variance in Theorem <ref> of <ref>.In our proof we use a classification of the elements ofin terms of a trichotomy of good, mixed, and heavy characters (see <ref>).As we explain, this is a refinement of Katz's dichotomy of good and bad characters. §.§ Trace formula In this section we define local and cohomological traces ofρand recall how they are related by a trace formula.For details, see <cit.>.On one hand, the local traces ofρare given byρ,v = ((_v)^m|) v∈m≥ 1,and they satisfyTd/dTlog L(T,ρ_v)^-1 = ∑_m=1^∞ρ,vT^m v∈.Combining this identity with (<ref>) yields the more general identityTd/dTlog L(T,()_v)^-1 = ∑_m=1^∞(_v)^mρ,vT^m v∈. 
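For instance, if v is a place of degree one at which ρ is lisse, with Frobenius eigenvalues α_1,…,α_r, then the Euler factor is the polynomial L(T,ρ_v)=∏_i=1^r(1-α_i T), the local traces are the power sums α_1^m+⋯+α_r^m, and the first identity above is the familiar expansionTd/dTlog∏_i=1^r(1-α_i T)^-1 = ∑_m=1^∞(α_1^m+⋯+α_r^m) T^m.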
On the other hand, the cohomological traces ofρ⊗are given by= ∑_i=1^2 (-1)^i·(_q| H^i_c([1/],⊗_)) n≥ 1,and they satisfyTd/dTlog(T,) = ∑_n=1^∞T^n.Similarly, we define the normalized cohomological traces ofbyρ, = 1/q^n(1+w)/2 = 1/(√(q))^1+w∑_i=1^2 (-1)^i·(_q| H^i_c([1/],⊗_))so that (<ref>) and (<ref>) implyTd/dTlog(T,) = Td/dTlog(T/(√(q))^1+w,) = ∑_n=1^∞ρ,T^n.Combining (<ref>) and (<ref>) with (<ref>) yields the identityTd/dTlog(T,) = ∑_n=1^∞( ∑_md=n∑_v∈_d d·ρ,v) T^nand, in particular, we obtain the Grothendieck–Lefschetz trace formula∑_md=n∑_v∈_d d·ρ,v = .§.§ Von Mangoldt function We define the von Mangoldt function ofρto be the map→given by(f) = d·ρ,v(π)f=π^mπ∈Å_d 0.We also define the extension by zero of∈to be the map→given by(f) = (f+ [t])(f,)=1 0.It is multiplicative and satisfies(π) = (_v(π))π∤ 0π∈Å.These functions allow us to rewrite (<ref>) asTd/dTlog((T,)) = ∑_n=1^∞( ∑_f∈_n(f)(f) ) T^nand, in particular, to deduce the identity∑_f∈_n(f)(f) = n≥ 1.We observe that in the special case=this simplifies toρ = ∑_f∈_n (f,)=1(f).§.§ Random arithmetic-progression sums Regard A as a uniformly random element of, and consider the expected value_A[] = 1/ϕ()∑_A∈.Observe that, for eachA_1,A_2∈, one has1/ϕ()∑_∈(A_1)(A_2) = 1A_1=A_2 0A_1≠ A_2,and thus= 1/ϕ()∑_f∈_n(f) ∑_∈(f) (A) = 1/ϕ()∑_∈·(A)by (<ref>).Therefore, if we write∈for the trivial character, then the right side of (<ref>) equals1/ϕ()^2∑_∈∑_A∈(A) = 1/ϕ()ρ,since, for every,∈, one has1/ϕ()∑_A∈(A)(A) = 1= 0≠.In particular, we have the identity- _A[] = 1/ϕ()∑_∈ ≠·(A). Now consider the variance_A[] = 1/ϕ()∑_A∈| - _A[]|^2.If we apply identities (<ref>) and (<ref>), then the right side equals1/ϕ()^3∑_A∈∑_,∈ ,≠ρ⊗ρ⊗·(A)(A) = 1/ϕ()^2∑_∈ ≠ ||^2.In summary, the functionof the random variableAsatisfies_A[] = 1/ϕ()ρ⊗, _A[] = 1/ϕ()^2∑_∈ ≠ ||^2.Observe thatρ⊗=ρand thusρ⊗=ρ. §.§ Trichotomy of characters On one hand, a character∈is good for ρ (or ρ-good) if and only if theL-functions(T,)andL(T,)are polynomials and equal in[T]; see (<ref>).In that case Theorem <ref> implies they equal1(T,)and areι-pure ofq-weightw+1, and then(T,)is given by(T,) = (1-T _q| H^1_c([1/], ))where:= ((1+w)/2) = ⊗((1+w)/2)is a so-called Tate twist of.Moreover,(T,)has degree=()=(ρ)and isι-pure ofq-weight zero.In particular, it is the characteristic polynomial of a unique conjugacy classU_(), and thus= -(_q^n| H^1_c([1/],)) = - (^n)whereU_()→_()is the inclusionU_()_R().On the other hand, there are two ways a character can fail to be good forρ: eitherL(T,)is not a polynomial orL(T,)and(T,)are polynomials but not equal to each other.Only the first of these possibilities is problematic for us because in that case the denominator ofL(T,)has zeros of excessive weight.More precisely, if the factorP_2(T,)of the denominator ofL(T,)is non-trivial, then it is ι-mixed ofq-weights≤w+1but notι-mixed ofq-weights≤w(cf. Theorem <ref>).Hence we say thatis heavy for ρ (or ρ-heavy) iff it lies in the subsetρ = { ∈ : L(T,)∉[T] }.The following lemma can be used to classifywhich are heavy forρ.
Suppose ρ is geometrically simple and punctually ι-pure and ∈.Then ∈ρ if and only ifis geometrically isomorphic to the trivial representation.The essential point is that sinceis geometrically simple, the quotient space of geometric coinvariants (V_)_ either vanishes or equals V_.The former occurs if and only ifis geometrically isomorphic to the trivial representation, so the lemma follows from Corollary <ref>.Suppose ρ is geometrically simple and punctually ι-pure, and let r=(V).Then ρ{} if and only if one of the following holds: * r>1;* r=1 and ρ is geometrically isomorphic to the trivial representation;* r=1 and ρ is not geometrically isomorphic to a Dirichlet character in .Moreover, ρ={} if and only if (<ref>) holds.Let ∈.Lemma <ref> implies thatis heavy for ρ if and only ifis geometrically isomorphic to the trivial representation (and hence r=1).By the contrapositive,is not heavy for ρ if and only if r>1 or ρ is not geometrically isomorphic to 1/.Therefore (<ref>) or (<ref>) holds if and only if ρ is empty, and (<ref>) holds if and only if ρ={}. We also say thatis mixed for ρ (or ρ-mixed) iff it lies in the subsetρ =(ρ∪ρ).Equivalently,is mixed forρif and only if(T,)is a polynomial which isι-mixed ofq-weights≤w+1but notι-pure ofq-weightw+1.In summary, we classify the characters inby a trichotomy: each is eitherρ-good,ρ-mixed, orρ-heavy.This terminology refines Katz's because we divide his bad characters into mixed and heavy characters. Suppose ρ is punctually ι-pure of weight w and ∈.Then *Ifis heavy for ρ, then ||^2=O(q^n), and otherwise ||^2=O(1).* |ρ{}|∼ O(|ρ|/q) and |ρ|=O(1).Moreover, the bounds assume q tends to infinity and the implied constants depend only on ρ.Regardless of whetheris good, mixed, or heavy, we have = -(_q^n| H^1_c([1/],)) +(_q^n| H^2_c([1/],)).On one hand, the second term on the right vanishes unlessis heavy.On the other hand, Theorem <ref> and Lemma <ref> imply | (_q^n| H^i_c([1/],))|^2 = O(q^i-1)sinceis punctually pure of weight -1. Up to replacingby a proper monic divisor, we can apply the same trichotomy to characters in. Letbe a monic divisor ofin [t].If ρ is punctually ι-pure of weight w and if ∈, then |ρ|∼|| as q→∞.Apply Lemma <ref> within lieu of .§.§ Key estimates In this section we provide the exact formula and key asymptotic estimate we need to prove Theorem <ref>. Suppose ρ is punctually ι-pure of weight w and ρ{}.Then ϕ()·_A[] = ρ.By definition, ϕ()·_A[] = ∑_A∈ = ∑_A∈∑_f∈_nf≡ A(f) = ∑_f∈_n (f,)=1(f),and (<ref>) then yields the desired identity.While we do not need the result, we point out that Proposition <ref> and Lemma <ref> imply ϕ()/q^n(1+w)·|_A[]|^2 = |ρ|^2 ∼ O(1) q→∞ when ρ is punctually ι-pure of weight w and ρ{}.Combine .Suppose ρ is punctually ι-pure of weight w and ρ{}.Then ϕ()/q^n(1+w)·_A[] = 1/|ρ|∑_∈ρ | (^n)|^2 + O(q^-1) q→∞ where U_()→_() is the representation given by the inclusion U_(C)_().Lemma <ref> impliesϕ()^2·_A[] - ∑_∈ρ ≠ ||^2 = ∑_∈ρ ≠ ||^2 + ∑_∈ρ ≠ ||^2 ∼ |ρ{}|· O(q^n(1+w)) + |ρ{}|· O(q^n(2+w)),and thus Lemma <ref> implies _A[] ∼q^n(1+w)/ϕ()( 1/|ρ|∑_∈ρ ||^2 + O(q^-1) )as q→∞.The proposition now follows from (<ref>).§.§ Proof of Theorem <ref>The following theorem is the main result of this section. See Corollary <ref> for a classification ofρsatisfying the conditionρ{}.
The first part of the theorem is an immediate consequence of (<ref>) since ρ{} for all q.Let =(ρ).Then Theorem <ref> implies thatis equidistributed in U_() as q→∞ since the Mellin transform of ρ has big monodromy.Therefore Proposition <ref> implies that ϕ()·_A[] = ρ,and Proposition <ref> and (<ref>) imply ϕ()/q^n(1+w)·_A[] ∼∫_U_() | (θ^n)|^2 dθ.The second part of the theorem now follows from the identity ∫_U_() | (θ^n)|^2 dθ = min{n,} = min{n,(ρ)} (see[NB: The reference <cit.> is sometimes used, but as explained in <cit.>, the theorem is incorrectly stated.]<cit.>). § EXHIBITING BIG MONODROMY In this section we present sufficient criteria for the Mellin transform ofρto have big monodromy and refer the interested reader to <ref> for explicit examples of representations meeting these criteria.Before stating the main theorem, we make some hypotheses and introduce pertinent terminology.Throughout this section, we suppose that(,)=t-a, for somea∈.One could easily argue that this is less general than supposing that,are relatively prime; however, we do not presently have a way to avoid our hypothesis.For ease of exposition, we also suppose thata=0and observe that, up to performing an additive translationt↦t+a, this represents no additional loss of generality.Fort=0,∞, we regardV_as anI(t)-module and then denote itV_(t).We writeV_(t)^for the maximal subspace ofV_(t)on whichI(t)acts unipotently.It is a direct summand ofV_(t), and each simplee-dimensional submodule of it is isomorphic to a common module(e).We sayV_(t)has a unique unipotent block of exact multiplicity one iff, for a unique integere≥1, someI(t)-submodule is isomorphic to(e)but no submodule is isomorphic to(e)⊕(e). We prove the theorem in <ref>. As the reader will notice, the proof of our theorem has a lot in common with Katz's proof of <cit.>.We both need the hypothesis on (,) and the structure of V(0)^ in order to exhibit special elements of the relevant arithmetic monodromy groups.More precisely, the hypothesis that (,)=t helps ensure that, for sufficiently many , some induced representation (V_) has the property that (V_)(0)^=V(0)^ (cf. Lemma <ref>).The hypothesis on the structure of these coincident modules then leads to the desired element (cf. Lemma <ref>).We expect one can remove this hypothesis but do not know how to do so.The hypothesis (,)=t also plays a minor role in Proposition <ref>.However, one could easily make other hypotheses (e.g., (,)=1) and still be able to proceed (cf.
<cit.>).§.§ Two norm maps This subsection recalls material from <cit.> and borrows heavily from LetBbe the finite-algebra[t]/ [t].It is a direct product of finite extensions ofand hencesinceis square free.More generally, for each finite extension/, the-algebraB_ = B⊗_isand has the structure of a freeB-module of rankd=[:].Letbe the functor on variable-algebrasRdefined by(R) = R[t]/ R[t].It is the functorR↦B_R=B⊗_Rand takes values in the category of-algebras.In fact,(R)even has the structure of anR-algebra which is free of rank().In particular, for each-algebraR, there is a norm map(R)→Rwhich is part of a transformation_B/→𝕀_-algebrasbetweenand the identity functor on the category of-algebras.Letbe the functor on variable-algebrasRdefined by(R) = (R[t]/ R[t])^×.It is the composition ofwith the functorA↦A^×of-algebras and takes values in the category of groups.Moreover, the restriction of the norm map(R)→Rto the group of units yields a homomorphismν_R(R)→ R^×,and in particular,ν_is the mapνof <ref>.For each finite extension/, let_,_be the functors on variable-algebrasRdefined by_(R) = B_⊗_R,_(R) = (B_⊗_R)^×respectively.On one hand,_takes values in the category of-algebras.However,_(R)also has the structure of anB_R-algebra which is free of rankdas aB_R-module sinceB_⊗_R = B⊗_⊗_R = B_R⊗_and sinceB_is anB-algebra which is free of rankdas aB-module.In particular, there is a transformation_/_E→between the functors_Eand.On the other hand,_takes values in the category of groups and is even a smooth commutative group scheme.More precisely,is a group scheme overof multiplicative type (i.e., a torus), and_is the torus_/()overgiven by extending scalars toand then taking the Weil restriction of scalars ofback down to(cf. <cit.>).Moreover, the transformation_/induces a transformation_/_E→which is even ansurjective homomorphism of tori. In particular, since_() = () = ([t]/[t])^×one obtains a second norm mapν_^ ' ([t]/[t])^×→ ([t]/[t])^×which is a surjective homomorphism by Lang's theorem. §.§ Characters of a twisted torus Let/be a finite extension andbe the dual group((),^×)so that=.Suppose thatsplits completely over, and leta_1,…,a_n∈be the zeros ofso that=∏_i=1^n(t-a_i)in[t].For each-algebraR, the Chinese Remainder Theorem implies that there is a unique algebra isomorphismR[t]/ R[t]→∏_i=1^n R[t]/(t-a_i)R[t]which sends the residue class oftto the tuple(a_1,…,a_n)of residue class representatives.Writing it as an isomorphism(R)→R^nand restricting to units yields a group isomorphism(R)→(R^×)^n.AsRvaries over-algebras, the latter isomorphisms in turn yield an isomorphism of toriσ→^nover.In particular, applying Weil restriction of scalars fromtoyields an isomorphism_/(σ)_→_m,^nof tori overwhere_m,=_/().There is a unique permutationϕ∈([n])satisfyinga_ϕ^-1(i)=a_i^qsinceis square free and has coefficients in.Whileσdoes not descend to a morphism→^nin general, we can useϕto construct a twisted formof^noversuch thatσis the pullback of a morphism→over.More precisely, we define the twisted Frobeniusτon=^nas the composition(b_1,…,b_n) ↦ (b_1^q,…,b_n^q) ↦ (b_ϕ(1)^q,…,b_ϕ(n)^q)of the usual Frobenius automorphism and a permutation of the coordinates of^n.One can easily verify thatτ^dis thedth power of the usual Frobenius and thusis indeed a twist of^n.Moreover, one can also show that(a_1,…,a_n)is fixed byτand even that()=^τ=1=().In particular, by precomposing withτwe obtain the automorphismτ_^∨on((),^×) = (^n(),^×) = (^×,^×)^ngiven byτ_^∨ (_1,…,_n) ↦ (_ϕ^-1(1)^q,…,_ϕ^-1(n)^q). 
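As a toy example, suppose the modulus is itself irreducible of degree n.Its zeros then form a single Frobenius orbit a, a^q, …, a^q^n-1, and with a suitable ordering of the zeros ϕ is an n-cycle; after relabeling, the tuples fixed by τ_^∨ are exactly those of the form(, ^q, …, ^q^n-1) ^q^n = ,where χ is a character of the multiplicative group of the splitting field.This observation is the starting point of the counting arguments in the next subsection.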
Composition of_/(σ)with the projection_m,^n→_m,onto theith factor yields a surjective homomorphismπ_i_→_m,of tori over.In particular, taking duals of the respective groups of-rational points and using the bijections_m,()=()=^×yields an isomorphismσ_^∨∏_i=1^n(^×,^×) ∋(_1,…,_n)↦∏_i=1^n_iπ_i∈.We observe that sinceν_^'is surjective its dualν_^ ' ∨is a monomorphism→and thus we can identifywith a subset of(^×,^×)^n.More precisely, it is the subgroup of characters fixed byτ_^∨and thus(σ_^∨)^-1( ν_^ ' ∨() ) = { (_1,…,_n)∈(^×,^×)^n : _ϕ(i)=_i^qi∈[n] }.§.§ Characters with distinct components We say that a character∈has distinct components iff it lies in the subset= { σ_^∨(_1,…,_n)∈ : _i≠_j 1≤ i<j≤ n },and we define the corresponding subset ofas the intersection= ∩ν_^ ' ∨()whereν_^ ' ∨→is the dual ofν_^ '. is well defined, that is, it does not depend upon our choice of .Let '/ be a finite extension and observe that the norm map ^'×→^× is surjective, so induces a monomorphism (^×,^×) →(^'×,^×),and thus = '∩.In particular, if ”/ is a second finite extension over whichsplits completely and if ' contains the compositum ”, then ∩ν_^ ' ∨() = '∩ν_'^ ' ∨() = ”∩ν_”^ ' ∨()andis indeed well defined. Let=∏_j=1^rπ_j∈[t]be a factorization into monic irreducibles.The quotient_j=[t]/π_j[t]is a finite extension ofof degreen_j=(π_j).It is also the splitting field ofπ_jand thus may be embedded in.Moreover, there are bijections= ∏_j=1^rπ_j = ∏_j=1^r(_j^×,^×), = ∏_j=1^rπ_j = ∏_j=1^r(^×,^×)^n_jgiven by applying the Chinese Remainder Theorem.For each monic factorofin[t], letbe the subset ofdefined similarly as above but within lieu of.One can easily verify that it does not depend upon the polynomialof whichis a factor. |π_j|∼|π_j|, for each j∈[r], as q→∞.Let j∈[r], and suppose without loss of generality that a_1,…,a_n_j are the zeros of π_j and ϕ(i)≡ i+1n_j for i∈[n_j].Then by (<ref>) and (<ref>) there is an identificationπ_j={ (_1,…,_n_j)∈(_j^×,^×)^n_j : _i+1=_i^qi∈[n_j-1] } since any ∈(^×,^×) factors through an inclusion _j^×→^× if ^q^n_j=.The groups _j^× and (_j^×,^×) are cyclic and non-canonically isomorphic, so let g and χ be respective generators.Then we have further identificationsπ_j={ (χ^e_1,…,χ^e_n_j)∈(_j^×,^×)^n_j : e_i+1≡ qe_iq^n_j-1 i∈[n_j-1] } ={ (g^e_1,…,g^e_n_j)∈(_j^×)^n_j : e_i+1≡ qe_iq^n_j-1 i∈[n_j-1] }.From this last identification one easily deduces an identification between π_j and the set { (g^e_1,…,g^e_n_j)∈(_j^×)^n_j : e_i+1≡ qe_iq^n_j-1 i∈[n_j-1] (g^e_1)=_j },and thus |π_j| = |{ g^e∈_j^× : e∈[q^n_j-1] _j=(g^e) }|.Finally, it is well known that the cardinality of the right-hand set is asymptotic to q^n_j-1 as q→∞ (cf. <cit.>), and thus |π_j| = |(_j^×,^×)| = |_j^×| = q^n_j-1 ∼ |π_j| q→∞ as claimed.Ifis a monic factor ofin [t], then ||∼|| as q→∞.Suppose without loss of generality that =π_1⋯π_s with s∈[r] so that there is a bijection = ∏_j=1^s π_j.This bijection in turn induces an inclusion →∏_j=1^s π_j whose coimage is bounded above by ∏_j=1^s (()-n_j) since an element of the codomain lies in the image if (and only if) the components are pairwise distinct.In particular, || ∼∏_j=1^s |π_j| Lemma <ref>∼∏_j=1^s |π_j| q→∞ as claimed.§.§ Properties of H^2_c LetXbe a smooth geometrically connected curve over, letTXbe a dense Zariski open subset, and letbe a sheaf onX.
There is a bijection H^2_c(,)→ H^2_c(X̅,).Let j T→ X be the corresponding inclusion.Then the adjunction map j_!j^*→ is part of an exact sequence of sheaves on X 0→ j_!j^*→→→ 0whereis a skyscraper sheaf supported on X T.The bijection in question is part of the corresponding long exact sequence of cohomology ⋯→ H^1_c(X̅,) → H^2_c(,) → H^2_c(X̅,) → H^2_c(X̅,) →⋯ where H^i_c(X̅,) vanishes for i≠ 0 sinceis a skyscraper sheaf. Letbe a sheaf onXand^∨be its dual.Supposeandare lisse onT, and thus so is^∨.Letρ→(V),ω→(W), andω^∨→(W^∨)be the respective corresponding representations. Supposeandare lisse andgeometrically simple on T. * (H^2_c(,⊗^∨))=(_(W,V))≤ 1.* (H^2_c(,⊗^∨))=1 if and only ifandare geometrically isomorphic on T.Let G= so that ρ and ω^∨ are absolutely simple representations of G and ρ⊗ω^∨ is the representation on V⊗ W^∨ corresponding to ⊗^∨.Therefore (H^2_c(,⊗^∨)) (<ref>)= ((V⊗ W^∨)_G) = ((V⊗ W^∨)^G) = (_G(W,V))(cf. <cit.>).Moreover, the sheaves , are geometrically isomorphic on T if and only if V and W are isomorphic as representations of G.If these equivalent conditions hold, then Schur's lemma implies (_G(W,V))=1, and otherwise (_G(W,V))=0 (see <cit.>).§.§ Invariant scalars Letł∈^×.If we identifywith{0,∞}and regardłas an element of(), then multiplication by it (i.e., translation) induces an automorphism ofoverwhich we also denote ł→.We say ł is an invariant scalar ofiff the direct image ł_* is geometrically isomorphic to .For example, 1 is an invariant scalar for every , and every ł is an invariant scalar of the constant sheaf .Let α→^× be a tame character.The corresponding sheaf _α=α is a so-called Kummer sheaf. Every ł∈^× is an invariant scalar of _α.The tame fundamental group ofis a quotient and completely generated by the images of the inertia groups I(0) and I(∞).The character α is completely determined by these images, and translation by λ does not change how I(0) and I(∞) act since it fixes both 0 and ∞.Therefore ł_*_α and _α are lisse and geometrically isomorphic on , and ł is an invariant scalar of _α.ł is an invariant scalar ofif and only if it is an invariant scalar of ⊗_α. In particular, the answer to the question of whether or not ł is an invariant scalar of _* depends only on the coset . The sheaves ł_*_α and _α are lisse and geometrically isomorphic onby Lemma <ref>.Moreover,ł_*(⊗_α)⊗(⊗_α)^∨ = ł_*⊗(ł_*_α⊗_α^∨)⊗^∨,so ł_*⊗^∨ and ł_*(⊗_α)⊗(⊗_α)^∨ are lisse and geometrically isomorphic on U{0,∞}.Thus ł is an invariant scalar ofif and only if it is an invariant scalar of ⊗_α. The following lemma gives a cohomological criterion for detecting invariant scalars. Let ł∈^×.Suppose ł_* andare lisse and geometrically simple on U.Then the following are equivalent:* ł is an invariant scalar of ;* H^2_c(,ł_*⊗^∨)≠;* H^2(,ł_*⊗^∨)≠. Lemma <ref> implies the equivalence of (1) and (2), and Lemma <ref> implies the equivalence of (2) and (3).§.§ Avoiding invariant scalars Consider the affine plane curveX_ł : ł(x_1)=(x_2),and let π_i X_ł→ be the map (x_1,x_2)↦ x_i.They are part of a commutative diagramX_ł[r]^π_2[d]_π_1@.>[dr]^π [d]^ [r]_łwhere π=π_2=łπ_1.Moreover, the mapsand ł are generically étale of degree n=(), and thus their fiber product π is generically étale of degree n^2.Let / be a finite extension over whichsplits and Z={a_1,…,a_n} be the zeros of .
X_ł is smooth over the n^2 points of Z×_Z=Z× Z.The subset Z is the vanishing locus ofand ł, hence Z×_Z=Z× Z.Moreover,∂/∂ x_2(ł(x_1)-(x_2)) = '(x_2) = ∑_i=1^n∏_j≠ i(x-a_j)does not vanish at any a_i∈ Z sinceis square free, so X_ł is smooth at every (a_i,a_j)∈ Z× Z. Consider the external tensor product sheaf_,ł := ⊠^∨on × and the tensor product sheaf_,ł := ł_*⊗_*^∨one .They have respective generic ranks r and r^2 since bothand its dual have generic rank r.Let T_ł X_ł be a smooth dense Zariski open subset and U_ł=π(T_ł).Up to shrinking T_ł, we suppose that _,ł is lisse on T_ł and that π is étale over U_ł. The sheaves π_*(_,ł) and _,ł are lisse and isomorphic on U_ł.Let w be a geometric point of U_ł, and let W_1=(ł)^-1(w) and W_2=^-1(w).Then |W_1|=|W_2|=() and π^-1(w)=W_1× W_2 since π is unramified over w, andπ_*(_,ł)_w = ⊕_(w_1,w_2)∈ W_1× W_2_,ł,(w_1,w_2) = ⊕_(w_1,w_2)∈ W_1× W_2(_w_1⊗^∨_w_2)whereas_,ł,w = (⊕_w_1∈ W_1_w_1) ⊗(⊕_w_2∈ W_2^∨_w_2).Therefore both sheaves have the same geometric fibers, and hence they are isomorphic.It remains to show they are lisse on U_ł.On one hand, _,ł is lisse on T_ł, so its geometric fibers all have the same rank r^2.Moreover,isover U_ł by hypothesis, so the geometric fibers of π_*(_,ł) also all have the same rank ()r^2 and hence π_*(_,ł) is lisse on U_ł (see <cit.>).On the other hand, π_*(_,ł) is isomorphic to T_,ł on U_ł which implies the latter is also lisse on U_ł. The contrapositive of the following corollary gives us a way to show some ł is not an invariant scalar. Suppose ρ is geometrically simple and ∈.Then the following are equivalent:* ł is an invariant scalar of _*;* H^2_c(U̅_λ,_,ł)≠. They imply * H^2_c(T̅_ł,_,ł)≠.Lemmas <ref> and <ref> imply the equivalence of (1) and (2).If U_λ→(V) is the representation corresponding to _ł, then V^U_ł V^T_ł so (<ref>) and (2) imply (3). The following proposition was inspired by <cit.>. Suppose ()≥ 2+((,s)) and ∈.* If ρ is geometrically irreducible, then so is .* ł=1 is the only invariant scalar of _*. Let / be a splitting field ofand a_1,a_2∈ be zeros ofwhich are distinct from each other and the zeros of s.Let _1,_2∈(^×,^×) be the corresponding components of (σ_^∨)^-1(ν_^ ' ∨()) as an element of (σ_^∨)^-1() (compare (<ref>) and (<ref>)).Then _1,_2 are distinct characters, so α=_1/_2 is a non-trivial character.Let ł∈^̨× be an arbitrary scalar.If ł≠ 1, then for each component T'_ł T_ł over , there is a smooth point t'=(t'_1,t'_2)∈ T'_ł() satisfying {t'_1,t'_2}={a_1,a_2}.The map π is étale over 0 sinceis square free, hence we can use π to identify I(t') with I(0).We can also identify I(t'_1) and I(t'_2) with I(0).On one hand, the fiber ofat t=t'_i and the fiber at t=0 of ^r⊗__i are isomorphic as I(0)-modules since s(a_i)≠ 0.Moreover, the fiber of _,ł at t' and the fiber at u=0 of ^r^2⊗_ are isomorphic as I(0)-modules.On the other hand, the latter fibers have no I(0)-invariants sinceis non-trivial, so a fortiori, the geometric generic fiber of _,ł has no _ł-invariants.Therefore (<ref>) implies H^2_c(T̅_ł,_,ł) vanishes for ł≠ 1, and hence the contrapositive of Corollary <ref> implies ł=1 is the only invariant scalar of _*.§.§ Baby theorem In this subsection we prove a simplified version of Theorem <ref>.Let U be a dense Zariski open subset of ={0,∞} and θU→(W) be a continuous representation to a finite-dimensional -vector space W.Letbe the dual of =([u]/u[u])^× (cf. 
<ref>).For u=0,∞, let W(u) denote W regarded as an I(u)-module and W(u)^ be its maximal submodule where I(u) acts unipotently.If θ is geometrically simple and punctually pure of weight w and if (W)>1, then we can associate to θ a pair of Tannakian monodromy groups_(θ,) _(θ,) _,for =χ(,θ) (see <ref> and Theorem <ref>). Suppose that θ is geometrically simple and punctually pure of weight w, that (W)>1 or that θ does not factor through the composed quotient U, and that ł=1 is the only invariant scalar of θ.Suppose moreover that W(0)^ has dimension at most r and a unique unipotent block of exact multiplicity one and that >72(r^2+1)^2.Finally, suppose W(∞)^=.Then _(θ,) equals _,. The proof consists of a few steps and will occupy the remainder of this section.Let G=_(θ,) and H=_(θ,). G and H are reductive and there is an exact sequence1→ H→ G→ T→ 1for some torus T over .Observe that θ is geometrically simple yet is not a Kummer sheaf since otherwise one would have (W)=1 and θ would factor through u.Moreover, θ is geometrically simple and punctually pure of weight w by hypothesis.Therefore the lemma follows from Proposition <ref>.<ref>. A priori G or H could be disconnected, so let G^0 and H^0 be the respective identity components. G^0 and H^0 are (Lie-)irreducible subgroups of _,.This follows from <cit.> since ł=1 is the only invariant scalar of θ. Let m(^×)^m→^m be the mth weight multiplicity map for m= given in Definition <ref>. There exist an element g∈ G^0 and an eigenvalue tuple γ∈(^×)^R of g satisfying the following:* γ=(γ_1,…,γ_) lies in (^×)^ and thus (g)=γ_1⋯γ_ lies in ^×;*|ι((g))|^2=(1/q)^w for some w≠ 0 and every field embedding ι→;* c=(γ) satisfies (c)≤ r+1 and 1=c_(c)<c_(c)-1 and c_2≤ r. This follows from Proposition <ref>.<ref> with g=f^c for any element f∈_, and for c=[G:G^0].More precisely, if α=(α_1,…,α_) is an eigenvalue tuple of f, then all the α_i lie in , all the non-zero weights w_1,…,w_n of the α_i are negative since W(∞)^ vanishes, one has 1≤ n≤ r since 1≤(W(0)^)≤ r, there is a unique non-zero weight of multiplicity one since W(0)^ has a unique unipotent block of exact multiplicity one, and the weight zero has multiplicity -n≥-r>1.Hence it suffices to take γ∈(^×)^ to be the eigenvalue tuple with γ_i=α_i^c for 1≤ i≤ and w to be (w_1+⋯+w_n)c.(H) equals ^×.Follows from Lemma <ref>.<ref> and the argument in<cit.> using the element g in Lemma <ref>. Let [G^0,G^0] be the derived subgroup of G^0. [G^0,G^0] equals _,.Combine Lemmas <ref> and <ref> to deduce that the hypotheses of Theorem <ref> hold, and thus G^0 equals one of _() or _().The derived subgroup of both of these groups equals _(). We may now complete the proof of the theorem.First, we have inclusions[G^0,G^0] [G,G] [_,,_,] = _,,and Lemma <ref> implies the outer terms are equal, so the inclusions are equalities.Moreover, Lemma <ref> implies H is normal in G and G/H is abelian, so H contains [G,G]=_,, and hence, by Corollary <ref>, H=_, as claimed. §.§ Frobenius reciprocity Let T→ U be a finite étale map of smooth geometrically connected curves over .Let(resp. ) be a lisse sheaf on T (resp. U) and →(V) (resp. →(W)) be the corresponding representation.Let ^∨ be the dual ofand →(V^∨) be the corresponding representation. _*(^∨) is isomorphic to the dual of _*.See <cit.>. Therefore we may unambiguously write _*^∨. (H^2_c(,^*⊗^∨)) = (H^2_c(,⊗_*^∨)).Let H= and G=.We suppose that V (resp. W) is a left H-module (resp. 
G-module), and define _H^G(V) to be the (Mackey) induced module _G([H],V) and _H^G(W) to be the restricted module W regarded as a left H-module.Then Frobenius reciprocity implies that there is a bijection of vector spaces_H(_H^G(W),V) →_G(W,_H^G(V))given by ψ↦ (w↦ (r↦ψ(rw))) (cf. <cit.>).Moreover, Lemma <ref> implies that(H^2_c(,^*⊗^∨)) = (_H(_H^G(W),V))and that(H^2_c(,⊗_*^∨)) = (_G(W,_H^G(V))),so the proposition follows immediately.§.§ Begetting simplicity In this section we give a criterion for () to be geometrically simple.Our argument was inspired by <cit.>. Let ∈.Suppose that (,s)=t, that ()≥ 2, and that (Γ(t))=.If ρ is geometrically simple, then so areand ().Let T be a dense Zariski open subset and U=(T).Up to shrinking T, we suppose that = is lisse over T and thatis étale over U. Suppose that ρ is geometrically simple and thus so is .Let =_*^∨ (cf. Lemma <ref>), and observe that Lemma <ref>.<ref> implies thatand ()^∨ are isomorphic over U.We wish to show that (H^2(,⊗^∨))=1 so that Lemma <ref> implies that () is geometrically simple over U, that is, that () is geometrically simple.In fact, Lemma <ref> and Proposition <ref> imply that(H^2_c(,⊗^∨)) = (H^2_c(,_*⊗_*^∨)) = (H^2_c(,^*_*⊗^∨)),so it suffices to show the last term equals 1.The functor ^* is left adjoint to the functor _* sinceis finite (cf. <cit.>), so the identity map _*→_* induces an adjoint ^*_*→.Generically it is the trace map (V_)→ V_ and thus is surjective (cf. <cit.>).Letbe the kernel so that we have an exact sequence of sheaves0→→^*_*→→0.These sheaves and ^∨ are all lisse over T and free, so the sequence0→⊗^∨→^*_*⊗^∨→⊗^∨→0is exact on T.In particular, we have a corresponding exact sequence of cohomologyH^2_c(,⊗^∨) → H^2_c(,^*_*⊗^∨) → H^2_c(,⊗^∨) → H^3_c(,⊗^∨)the last term of which vanishes.The hypothesis thatis geometrically simple implies the penultimate term has dimension 1 by Lemma <ref>, so it suffices to show that the first term vanishes.Let / be a splitting field of , let a_1,…,a_n∈ be the zeros of , and let(_1,…,_n) = (σ_^∨)^-1(ν_^ ' ∨()) ∈(^×,^×)^nas in (<ref>).We suppose without loss of generality that a_1=0 and thus s(a_2)⋯ s(a_n)≠ 0 since (,)=1.Let G= and H=, and let G→(V_) and H→(_H^G(V_)) be the representations corresponding toand _* respectively.The exact sequences (<ref>) and (<ref>) correspond to exact sequences of G-modules0→ K→ R→ V_→ 0and0→ K⊗ V_^∨→ R⊗ V_^∨→ V_⊗ V_^∨→ 0where R=_H^G(_H^G(V_)).We claim the first term of the latter sequence has no I(0)-coinvariants, so a fortiori has no -coinvariants, and hence H^2(,⊗^∨) vanishes as claimed.The translation map t↦ t+a_i induces an isomorphism I(0)≃ I(a_i) for each i∈[n], so we can regard V_(a_i) as an I(0)-module.In fact, we have isomorphisms of I(0)-modulesR(0) ≃⊕_i=1^n V_(a_i) , K(0) ≃⊕_i=2^n V_(a_i) , (K⊗ V_^∨)(0) ≃⊕_i=2^n(^r-1⊗_i^-1).More precisely, the first isomorphism corresponds to the fact that the geometric fibers of ^*_* andsatisfy (^*_*)_0=⊕_(a)=0_a sinceisover u=0 (cf.
<cit.>); the second isomorphism uses (<ref>) and the assumption that a_1=0 to identify K(0) with R(0)/V_(0); and the last isomorphism uses that s(a_2)⋯ s(a_n)≠ 0, that is, {a_1} lies in the locus of lisse reduction of ^∨.The hypothesis that Γ(t) is in the kernel ofimplies that V_(0)≃ V(0) as I(0)-modules.Moreover, _2,…,_n are all non-trivial since they are distinct from the trivial character _1 by hypothesis, so each of the summands (^r-1⊗_i^-1) has trivial I(0)-coinvariants.Therefore K⊗ V_^∨ has trivial -coinvariants as claimed.§.§ Preserving unipotent blocks For each monic divisorofin [t], consider the subsetρ = { ∈ : [1/] }.If ρ is the trivial representation, then it consists of the odd primitive characters of conductor .For t=0,∞, let V_(t) denote V_ regarded as an I(t)-module.Similarly, for u=0,∞, let (V_)(u) denote (V_) regarded as an I(u)-module, and let (V_)(u)^ be the maximal submodule of (V_)(u) where I(u) acts unipotently.We say that (V_)(0) (resp. V_(0)) has a unipotent block of dimension e and exact multiplicity m iff it has an I(0)-submodule isomorphic to U(e)^⊕ m but no I(0)-submodule isomorphic to U(e)^⊕ m+1. Suppose (,s)=t, and let =/t and ∈∩ρ.Then*(V_)(0) has a unipotent block of dimension e and exact multiplicity m if and only if V(0) does; *(V_)(∞)^=.On one hand, V_(z)^= for every z∈{0} sinceis in ρ and (,)=1.Moreover, V_(0) and V(0) are isomorphic as I(0)-modules since ()=.Therefore the only unipotent blocks of (V_)(0) are those coming from V_(0), and all such blocks contribute identical blocks to V_(0), so (<ref>) holds.On the other hand, every unipotent block of (V_)(∞) contributes to V_(∞)^, and the latter vanishes sinceis ρ-primitive, so (<ref>) holds.§.§ Proof of Theorem <ref>Recall thatis given by:= (ρ) = (()+1)r + (L(T,ρ)) - ρand it equals ((T,)) for all ∈ (see Theorem <ref>). R>72(r^2+1)^2Follows from (<ref>) and the hypothesis on () in the statement of the theorem. Let =/t. Suppose ∈∩ρ.Then the following hold:*() is geometrically simple;*((V_)(0)^)=(V_(0)^) and (V_)(0) has a unique unipotent block of exact multiplicity one;*(V_)(∞)^=. Part (<ref>) follows from Proposition <ref> sinceis in ∩, since ρ is geometrically simple, and since ()≥ 2.Parts (<ref>) and (<ref>) follow from Lemma <ref> sinceis also in ρ and since V(0) has a unique unipotent block of exact multiplicity one.(∩ρ)ρ.Let ∈∩ρ, and let θ=() and W=(V_).Then Lemmas <ref> and <ref> imply that θ=() is geometrically simple and punctually pure of weight w since ∈.Moreover, (W)=()·(V)>2 since ()≥ 2, and Proposition <ref> implies that λ=1 is the only invariant scalar of θ≃_* since ()≥ 3 and ∈.Lemma <ref> also implies that W(0) has a unique unipotent block of exact multiplicity one, that (W(0)^)=(V(0)^)≤(V)=r, and that W(∞)^=.Finally, Lemma <ref> implies R>72(r^2+1)^2.Therefore the hypotheses of Theorem <ref> hold, and hence ∈ρ.(∩ρ)ρ.Follows from Corollary <ref> since ρ is a union of cosets . Let ∈ andbe the corresponding coset. |∩|=1.We must show that there is a unique element α∈ satisfying α^ν(Γ(t))=.Since (,)=t, we can speak of the component ofat t=0: it is the character given by restricting χ to the subgroup Γ(t).There is a unique element ofwith the same component at t=0, call it β^ν.Then α=1/β is the desired character. We need one more estimate to complete the proof of the theorem. 
|∩ρ| ∼ || ∼ || q→∞.We observe that there are natural inclusions(∪_π|/π) (∩)since an element ofwill fail to lie inonly if one of its () components is trivial, that is, if it lies in /π for some prime factor π|.Intersecting with ρ gives further inclusions((ρ∩)∪_π|/π) (∩ρ) .Finally, we know that|ρ| Lem. <ref>∼ || Cor. <ref>∼ || , |∪_π|/π|/|| ≪ 1/q = o(1)and hence|(ρ∩)∪_π|/π| ∼ ||as q→∞.|(∩ρ)| ∼ || for q→∞.Combine Lemma <ref> and Lemma <ref>. The theorem now follows by observing that|| Cor. <ref>∼ |(∩ρ)| Cor. <ref>≤ |ρ| ≤ ||and thus|ρ| ∼ ||for q→∞.∴The Mellin transform of ρ has big monodromy as claimed and Theorem <ref> holds. § APPLICATION TO EXPLICIT ABELIAN VARIETIES In this section we apply the theory developed in the previous sections torepresentations coming from (the Tate modules of) a general class of abelian varieties.More precisely, we give an explicit family of abelian varieties for which we can show the corresponding representations satisfy the hypotheses of Theorem <ref>.Our principal application, of which Theorem <ref> is a special case, is Theorem <ref>.Throughout this section we suppose that q is an odd prime power so that we can speak of hyperelliptic curves.One who is interested in even characteristic or in L-functions whose Euler factors have odd degree is encouraged to consider Kloosterman sheaves (e.g., see <cit.>). §.§ Some hyperelliptic curves and their Jacobians Let g be a positive integer.In this section we construct an explicit family of abelian varieties which give rise to Galois representations we can easily show satisfy the hypotheses Theorem <ref>.One member of this family is an elliptic curve, the Legendre curve, and it has affine model: y^2 = x(x-1)(x-t).It is isomorphic to its own Jacobian, and the general abelian varieties in our family will be Jacobians of curves.More precisely, we fix a monic square free f∈[x] of degree 2g and consider the projective plane curve X/K with affine modelXy^2 = f(x)(x-t).For technical reasons we will eventually suppose that f has a zero a in , and up to the change of variables x↦ x+a, we will suppose that a=0.We do not need this hypothesis yet since the discussion in this section does not use it.The curve X has genus g.If g>1, it is a so-called hyperelliptic curve, and otherwise it is an elliptic curve.Either way its Jacobian J is a g-dimensional principally polarized abelian variety over K.See <cit.> for more information about hyperelliptic curves and their Jacobians.For each finite place v=π, one can define a reduction X/ starting with the reduction of (<ref>) modulo π. The monic polynomial =f(t)∈[t] satisfies the following: * if π∤, then X/ is a smooth projective curve of genus g;* if π|, then X/ is smooth away from a single node and has genus g-1. The essential point is that, for any monic polynomial h(x) with coefficients in a field F of characteristic not two, the affine curve y^2=h(x) is smooth iff h is a square free polynomial.More generally, if h=h_1h_2^2 where h_1,h_2∈ F[x] are square free and relatively prime, then the following hold: * the map (x,y)↦ (x,y/h_2(x)) induces a birational map from y^2=h_1(x) to y^2=h(x);* the (h_2) points (x,y) satisfying h_2(x)=y=0 are so-called nodes of y^2=h(x);* the map in (1) corresponds to blowing up the nodes in (2);* the curve y^2=h_1(x) is smooth of genus ⌊((h_1)-1)/2⌋ since h_1 is square free;* both curves have one (resp. two) points at infinity if (h) is odd (resp. even). 
(Compare <cit.>.) The proof of the lemma will consist of showing that we are in this general situation.Let t_0∈ satisfy t≡ t_0π, and let h_0(x):=f(x)(x-t_0)∈[x].The polynomial f(x) is square free by hypothesis, so h_0(x) is square free iff f(t_0)≠ 0, or equivalently, π∤.In particular, if π∤, then h_0 is square free and y^2=h_0(x) is smooth of genus g.Otherwise, h_0=h_1h_2^2 where h_1=f/(x-t_0) and h_2=x-t_0 are coprime (since f is square free), and thus y^2=h_0(x) is smooth away from the node (t_0,0) and birational to the curve y^2=h_1(x) which is smooth of genus g-1.One can also define a reduction X/ by writing t=1/u and clearing denominators, and one eventually finds that X/ has genus zero.However, the arguments are subtler and beyond the scope of this article, so we omit them. For example,has smooth reduction away from t=0,1,∞; over t=0,1 its reduction is a so-called node, and over t=∞ it is a so-called cusp.Since it is isomorphic to its Jacobian, one sometimes refers to these as good, multiplicative, and additive reduction, respectively.However, in general, one needs to construct separately reductions J/, for every π, and also a reduction J/.* If π∤, then J/ is the Jacobian of X/ so is a g-dimensional abelian variety;* If π|, then J/ is an extension of an abelian variety by a one-dimensional torus. Both statements are easy consequences of Lemma <ref>.More precisely, if X/ is projective and smooth away from n nodes, then J/ is an extension of a (g-n)-dimensional abelian variety by an n-dimensional torus.See <cit.> and keep in mind Lemma <ref>.One can also show that J/ is a g-dimensional additive linear algebraic group, but demonstrating it directly is harder and requires a finer statement than the claim in Remark <ref>.One can regard the various reductions of J as the special fibers of the (identity component of the) Néron model of J/K over .However, for our purposes, Lemma <ref> contains all the information we need about the model.More precisely, we only need to know the respective dimensions g_π, m_π, and a_π of the good, multiplicative, and additive parts of J/.Thus(g_π,m_π,a_π) = (g,0,0)π∤ (g-1,1,0)π|by Lemma <ref>.In <ref> we will show that(g_∞,m_∞,a_∞) = (0,0,g)as claimed in Remark <ref>. §.§ Tate modules Let ℓ be a prime distinct from the characteristic p of .For each m≥ 0, let J[ℓ^m] J(K̅) be the subgroup of ℓ^m-torsion; it is isomorphic to (/ℓ^m)^2g and hence is a finite Galois module.Multiplication by ℓ induces an epimorphism J[ℓ^m+1] J[ℓ^m], for each m, and the -Tate module of J is the projective limitT_ℓ(J) :=J[ℓ^m].Concretely, one can regard T_ℓ(J) as the set{ (P_0,P_1,…) : P_m∈ J[ℓ^m]ℓ P_m+1=P_mm≥ 0 }.It is even a Galois _ℓ-module (since the action of G_K and multiplication by ℓ commute), and it is isomorphic to ^2g as a -module (cf. <cit.>).Let V be the vector space T_ℓ(J)⊗_ and →(V) be the corresponding Galois representation.For each v∈, let V(v) denote V as an I(v)-module and let V(v)^ be the maximal submodule where I(v) acts unipotently. Let v∈, and let g_v and m_v be the respective dimensions of the abelian and multiplicative parts of J/_v. ThenV(v)^≃ U(1)^⊕ 2g_v⊕ U(2)^⊕ m_v. This is a general fact about Tate modules of abelian varieties.See <cit.>. Let ={π∈:π|}∪{∞} where =f(t) as in Lemma <ref>.Then by Proposition <ref>, the action of G_K on V induces a representationρ→(V)since(V^I(v))=(V)=2g v∈by (<ref>).ρ is geometrically simple and punctually pure of weight one, and it satisfies_v(ρ) = 0 v∈ 1 v∈{∞} 2g v=∞,(ρ) = 0.
The values _v(ρ) for v≠∞ follow directly from (<ref>) since_v(ρ) = (V) - (V^I(v)) = 2g - 2g_v - m_vby Proposition <ref>.For the assertions about geometric simplicity and weight and about _∞(ρ) and (ρ) we refer to <cit.> (cf. <cit.> for a related discussion about J[ℓ]).L(T,J/K)=1, that is, it is a polynomial and (L(T,J/K)) = 0.The representation ρ is geometrically simple and (V)=2g>0, so ρ has trivial geometric invariants.Moreover, it is punctually pure of weight w=1, so Theorem <ref> implies L(T,ρ) is a polynomial of degree(ρ) = (ρ) + (ρ) - 2·() Lem. <ref>= ((f)· 1+1· 2g) + 0 - 2· 2g = 0as claimed. Let ∈[t] be monic and square free andbe the finite subset consisting of π and v(π) for every prime factor π of(cf. <ref>). For every ∈, the representationis geometrically simple and punctually pure of weight one, andis not heavy.Lemma <ref>.<ref> implies thatis geometrically simple since ρ is.Moreover, it has trivial geometric invariants since (V)=2g>1, sois not heavy.Finally, Lemma <ref>.<ref> implies that it is punctually pure of weight w=1 since ρ is.If ∈, then (T,) is a polynomial and((T,)) = 2g·() - ((,)). By Lemma <ref> the hypotheses of Theorem <ref> hold, and hence (T,) is a polynomial of degree(ρ) = (L(T,ρ)) + (()+1)(V) - ρ = 2g· (()+1) - _∩(ρ).The corollary follows by observing that_∩(ρ) = ∑_v∈∩ d_v·_v(ρ) = ((,))· 1 + _∞(ρ)and that _∞(ρ)=2g.§.§ Arithmetic application In this section we show how to apply our main theorem to the example given above.The Euler factor at v=∞ of the L-function of J is trivial since _∞(ρ)=(V), and thus the complete L-function satisfiesL(T,J/K) = ∏_π∈Å L(T^(π),J/)^-1 = ∏_v∈ L(T^d_v,ρ_v)^-1 = L_{∞}(T,ρ).Similarly, for the partial L-function of ρ, we have(T,ρ) = ∏_v∈ L(T^d_v,ρ_v)^-1 = ∏_π∈Å π∤ L(T^(π),J/)^-1. For each π∈Å, the Euler factor L(T,J/)^-1 is the reciprocal of a polynomial with coefficients inand so satisfiesTd/dTlog(L(T,J/)) = ∑_n=1^∞ a_π,nT^nfor integers a_π,n∈.The complete L-function is also a polynomial with coefficients in , and it satisfiesTd/dTlog(L(T,J/K)) = Td/dTlog(L_{∞}(T,ρ)) = ∑_n=1^∞( ∑_f∈_n (f) ) T^nwhere (f)→ is the von Mangoldt function of ρ defined in (<ref>) by(f) = d· a_π,nf=π^mπ∈Å_d 0.Similarly, the partial L-function of ρ is a polynomial with coefficients inand satisfiesTd/dT(T,ρ) = ∑_n=1^∞( ∑_f∈_n (f,)=1(f) ) T^n. For A in =([t]/[t])^× and a positive integer n, we defined the sumin (<ref>) by= ∑_f∈_n f≡ A(f).We then defined the expected value and variance of this sum as A varies uniformly overby_A[] = 1/ϕ()∑_A∈, _A[] = 1/ϕ()∑_A∈| - _A[]|^2respectively, where ϕ()=|| (see (<ref>)). Suppose that (,)=t and that ()>1/2g(72(4g^2+1)^2+1).Thenϕ()·_A[] = ∑_f∈_n (f,)=1(f) lim_q→∞ϕ()/q^2n·_A[] = min{n,2g·()-1}. This will follow from Theorem <ref> once we show that all the hypotheses of that theorem are met.Lemma <ref> implies that ρ is punctually pure of weight w=1 and that ρ is empty[There are mixed characters, but as shown in the proof of Proposition <ref>, they do not contribute to the main term of the variance estimate.].Moreover, Proposition <ref> implies that V(0) has a unique unipotent block of dimension two and no other unipotent block of multiplicity one (since 2g-2≠ 1), hence Theorem <ref> implies that the Mellin transform of ρ has big monodromy since (,)=t and since() > 1/2g(72((2g)^2+1)^2 - 2g - 0 + (1+2g)) = 1/2g(72(4g^2+1)^2+1).Therefore the hypotheses of Theorem <ref> hold as claimed. Taking g=1 and f=x(x-1) yields Theorem <ref> from <ref>.
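Concretely, when g=1 (so J is the Jacobian of a curve of the Legendre type considered above) and D denotes the degree of the modulus, the hypothesis of the theorem readsD > 1/2(72(4+1)^2+1) = 1801/2,that is, D≥ 901, and in that range the normalized variance tends to min{n,2D-1}: it grows linearly for n<2D-1 and then saturates.The normalization by q^2n=q^n(1+w) reflects the fact that ρ is punctually pure of weight w=1.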
§ DETECTING A BIG SUBGROUP OF _R§.§ Weight multiplicity map Let ι→ be a field embedding, m be a positive integer, and m={1,…,m}. A weight partition map of an element α=(α_1,…,α_m) in (^×)^m is a map w_α[m]→[m] satisfying the following for every i,j∈[m]:w_α(i) = w_α(j) |ι(α_i)| = |ι(α_j)|; |w_α^-1(i)|≥ |w_α^-1(j)| i≤ j.In general, α may have multiple weight partition maps, but all will have the same range and yield the same map [m]→ given by i↦ |w_α^-1(i)|.In particular, if w_α is a weight partition map of α and if σ∈(m), then the composed map w_ασ is also a weight partition map of α. The mth weight multiplicity map is the mapm (^×)^m→^mwhich sends an element α to the tuple ł=(ł_1,…,ł_m) satisfying ł_i=|w_α^-1(i)| for some weight partition map w_α and every i∈[m].Let α,β∈(^×)^m, and let s∈^× and σ∈(m).Suppose β_i=sα_σ(i) for every i∈[m].Then m(α)=m(β).Let w_α,w_β be respective weight partition maps of α,β.Then for every i,j∈[m], one hasw_β(i) = w_β(j) |ι(β_i)| = |ι(β_j)| |ι(α_σ(i))| = |ι(α_σ(j))| w_ασ(i) = w_ασ(j).In particular, the weight partition maps σ w_α,w_β of α,β respectively coincide, so m(α)=m(β) as claimed. For any ł=m(α), let (ł)=max{1≤ i≤ m:ł_i≠ 0}. Observe that [(ł)] is the range of any weight partition map w_α of α and (ł_1,…,ł_(ł)) is a partition of m. §.§ Tensor indecomposability Let m,n≥ 2 be integers, let α∈ (^×)^m, β∈ (^×)^n, and γ∈(^×)^mn be elements, and let a=m(α), b=n(β), c=mn(γ).Suppose τ[m]×[n]→ [mn] is a bijection satisfyingγ_τ(i,j) = α_iβ_j (i,j)∈[m]×[n],and let w_α,w_β,w_γ be weight partition maps of α,β,γ respectively.There exists a unique map [(a)]×[(b)]→[(c)] which makes the following diagram commute:[m]× [n][r]^τ[d]_w_α× w_β[mn][d]^w_γ [(a)]×[(b)][r] [(c)]. To see that such a map exists observe that w_γτ factors through w_α× w_β since(w_α× w_β)(i_1,j_1) = (w_α× w_β)(i_2,j_2) |α_i_1|=|α_i_2||β_j_1|=|β_j_2| ⟹ |α_i_1β_j_1| = |α_i_2β_j_2| |γ_τ(i_1,j_1)| = |γ_τ(i_2,j_2)| w_γτ(i_1,j_1) = w_γτ(i_2,j_2)for every i_1,i_2∈[m] and j_1,j_2∈[n].To see that the map is unique, observe that the left vertical map of the diagram is surjective and that the map must satisfy l↦ w_γτ(i,j) for any (i,j) in (w_α× w_β)^-1(l). Let κ [(a)]×[(b)]→[(c)] be the map of Lemma <ref>. For each l∈[(a)], the restriction of κ to {l}×[(b)] is injective.Recall that [(a)] and [(b)] are the respective ranges of w_α and w_β, so suppose i∈[m] and j_1,j_2∈[n].Moreover, one hasκ(w_α(i),w_β(j_1))=κ(w_α(i),w_β(j_2)) w_γτ(i,j_1) = w_γτ(i,j_2) |γ_τ(i,j_1)| = |γ_τ(i,j_2)| |α_iβ_j_1| = |α_iβ_j_2| w_β(j_1)=w_β(j_2),and thus the restriction of κ to {w_α(i)}×[(b)] is injective as claimed. Let r be a positive integer. *If c_(c)≤ r, then a_(a)≤ r and b_(b)≤ r. * If a_1>r (resp. b_1>r), then c_(b)>r (resp. c_(a)>r). For part (<ref>), we prove the contrapositive.More precisely, if k∈[(c)], then one hasc_k = ∑_κ(i,j)=k a_i b_j ≥ a_(a)b_(b)≥max{a_(a),b_(b)},and thus c_(c)>r if a_(a)>r or b_(b)>r.Thus (<ref>) holds.For part (<ref>), we suppose, without loss of generality, that a_1>r and show that c_(b)>r. We first observe that Lemma <ref> implies the integers κ(1,1),…,κ(1,(b)) are distinct.Moreover, for each l∈[(b)], one hasc_κ(1,l)≥ a_1b_l > r· 1 = r.Therefore at least (b) integers in the monotone decreasing sequence c_1,…,c_(b) exceed r, and thus (<ref>) holds. The following proposition is the main result of this subsection.We will use its contrapositive to deduce that a certain representation is tensor indecomposable whenever mn≫ r. 
Suppose c_(c)=1 and c_2≤ r.If (c)≤ r+1, then m,n≤ r^2+1 and thus mn≤ (r^2+1)^2.Lemma <ref>.<ref> implies that a_(a)=b_(b)=1 since c_(c)=1.Therefore (a)≥ 2 and (b)≥ 2 since m≥ 2 and n≥ 2 respectively, and moreover, c_2≥ c_(a) or c_2≥ c_(b).Hence the contrapositive of Lemma <ref>.<ref> implies a_1≤ r and b_1≤ r since c_2≤ r.In particular, if (c)≤ r+1, then Lemma <ref> implies (a),(b)≤ r+1, and thusm = ∑_i=1^(a) a_i≤ ra_1 + a_(a)≤ r^2+1, n = ∑_j=1^(b) b_j≤ rb_1 + b_(b)≤ r^2+1as claimed.§.§ Pairing avoidance Let n be a positive integer and I be the n× n identity matrix.We define the orthogonal and symplectic groups of matrices byØ_n() = { M∈_n() : MM^t = I }and_2n() = { M∈_2n() : MPM^t = P P=([0I; -I0 ]) }respectively.Suppose m=n (resp. m=2n) and g∈Ø_n() (resp. g∈_2n()).Let α∈(^×)^m be a tuple of the eigenvalues of g and a=m(α).Then some involution π∈((a)) satisfies the following:* a_i=a_π(i) for every i∈[(a)];* π has at most one fixed point. The involution s↦ 1/s of ^× induces a permutation of the eigenvalues of elements of Ø_n() and _2n().The latter is an involution σ∈(m) with the property that, for any weight partition map w_α of α and every i∈[m], one hasw_α(i) = w_ασ(i) |α_i| = |α_σ(i)| |α_i| = |1/α_i| |α_i| = 1.The involution in question is given by w_α(i)↦ w_ασ(i) for every i∈[m]; recall w_α maps onto [(a)]. The following is the main result of this subsection.We will use its contrapositive to show that some subgroup of _m() fails to preserve non-degenerate pairings which are either symmetric or alternating. Suppose m=n (resp. m=2n) and g∈_n().Let α∈(^×)^m be a tuple of the eigenvalues of g and a=m(α).If there exist i,j such that a_i,a_j are distinct from each other and from all a_k for k≠ i,j, then g∉Ø_n() (resp. g∉_2n()).We prove the contrapositive.More precisely, if g∈Ø_n() (resp. g∈_2n()) and if π∈((a)) is an involution satisfying the properties of Lemma <ref>, then π(i)=i for at most one i.Therefore, for all but at most one i and for j=π(i), one has i≠ j and a_i=a_j.In particular, there is at most one i such that a_i≠ a_j for j≠ i.§.§ Main theorem In this section we state and prove the main result of this appendix. Let r,R be positive integers and G be a connected reductive subgroup of _R().Let g∈ G be an element and γ∈(^×)^R be an eigenvector tuple of g.Suppose that G is irreducible, that γ lies in (^×)^R, and that c=R(γ) satisfies (c)≤ r+1 and 1=c_(c)<c_(c)-1 and c_2≤ r.If R>72(r^2+1)^2, then either G=_R() or G=_R(). 
The proof will occupy the remainder of this subsection.Since G is algebraic, it contains the semisimplification of g, an element for which γ is also an eigenvector.Hence we replace g by its semisimplification and suppose without loss of generality that g is semisimple.We also replace G and g by the conjugates h^-1Gh and h^-1gh by a suitable element h∈_R() so that we may suppose without loss of generality that g is the diagonal matrix diag(γ_1,…,γ_R).Let V=^R and f be the diagonal matrixf = diag(|ι(γ_1)|,…,|ι(γ_R)|).We claim we may regard f as an element of _R().More precisely, it is an element of _R(ι())_R() since |ι(γ_i)|^2=ι(γ_i)ι(γ_i) lies in the algebraically closed subfield ι() and thus so does |ι(γ_i)|.Replacing G, g, f by conjugates by a suitable common permutation matrix, we suppose without loss of generality that |ι(γ_1)| is an eigenvalue of f of multiplicity c_1.f is a semisimple element of G such that f-|ι(γ_1)|∈(V) has rank at most r^2.For some sequence e_1,…,e_n of tuples e_i=(e_i,1,…,e_i,R)∈^R, the intersection of G with the subgroup of diagonal matrices in _R() consists of all matrices (α_1,…,α_R) satisfying∏_i=1^Rα_i^e_1,i = ∏_i=1^Rα_i^e_2,i = ⋯ = ∏_i=1^Rα_i^e_n,i = 1.By hypothesis, g lies in this intersection, and thus|ι(∏_i=1^Rγ_i^e_1,i)| = |ι(∏_i=1^Rγ_i^e_2,i)| = ⋯ = |ι(∏_i=1^Rγ_i^e_n,i)| = |ι(1)|or equivalently∏_i=1^R|ι(γ_i)|^e_1,i = ∏_i=1^R|ι(γ_i)|^e_2,i = ⋯ = ∏_i=1^R|ι(γ_i)|^e_n,i = 1.Therefore f is a diagonal (hence semisimple) element of G as claimed.It remains to show f-|ι(γ_1)|∈(V) has rank at most r^2.Indeed, exactly c_1 of its eigenvalues equal |ι(γ_1)|, hence the rank of f-|ι(γ_1)| isR - c_1 ≤∑_i=2^(c) c_i ≤ r· r = r^2by our hypotheses on c. Let [G,G] be the derived (i.e., commutator) subgroup of G.Observe that G acts irreducibly on V=^R by hypothesis, so its center Z(G) consists entirely of scalars and G is an almost product of [G,G] and Z(G).In particular, [G,G] is a connected semisimple group which also acts irreducibly on V, and for some a∈^×, the scalar multiple af lies in [G,G].Let 𝔤_R=(V) be the Lie algebra of [G,G].It is a semisimple irreducible Lie subalgebra of _R since [G,G] is semisimple and acts irreducibly on V.It also contains af, and Lemma <ref> implies that ((af-a|ι(γ_1)|)V)≤ r^2.Finally, the contrapositive of Proposition <ref> implies that 𝔤 is simple since otherwise V would be tensor decomposable as a representation of G.Therefore, a result of Zarhin <cit.> implies that 𝔤 is one of 𝔰𝔩(V), 𝔰𝔬(V), or 𝔰𝔭(V) sinceR = (V) > 72(r^2)^2 ≥ 72((f-|ι(γ_1)|)V)^2 = 72((af-a|ι(γ_1)|)V)^2by our hypotheses on R.To complete the proof of the theorem it suffices to rule out 𝔤=𝔰𝔬(V) and 𝔤=𝔰𝔭(V) or equivalently to show that G preserves neither an orthogonal nor a symplectic pairing.However, our hypotheses on c together with the contrapositive of Proposition <ref> imply that G preserves neither type of pairing, so 𝔤=𝔰𝔩(V) as claimed.That is, [G,G] is (V) and G is equal to one of (V) or (V).§ PERVERSE SHEAVES AND THE TANNAKIAN MONODROMY GROUP§.§ Category of perverse sheaves Given a smooth curve X over a perfect field , we can speak of the so-called derived category X.Its objects M are complexes of constructible -sheaves on X overwhose cohomology complex⋯→ H^-1M→ H^0M→ H^1M→⋯is bounded and whose cohomology sheaves H^iM are all constructible.There is a well-defined dual object M, the Verdier dual of M.Moreover, for each n∈, there is a well-defined shifted complex M[n] which satisfies H^iM[n]=H^i+nM.We say that M is semi-perverse iff H^0M is punctual and H^iM vanishes
for i>0, and that M is perverse iff M and DM are semi-perverse. We write Perv(X) for the full subcategory of perverse objects in D^b_c(X). It is an abelian category, thus one can speak of subquotients of its objects as well as kernels and cokernels of its morphisms. It is common to call its objects perverse sheaves despite the fact that they are complexes of sheaves.

There is a natural functor from the category of constructible ℚ̄_ℓ-sheaves on X over k to D^b_c(X): it sends a sheaf ℱ to a complex concentrated in degree i=0 and takes a morphism to the unique extension to a morphism of complexes. The image of this functor is not stable under duality though: if ℱ^∨ is the dual of ℱ, then Dℱ is isomorphic to ℱ^∨(1)[2]. If instead one sends each ℱ to ℱ(1/2)[1], then self-dual objects are taken to self-dual objects and middle-extension sheaves are taken to perverse sheaves.

§.§ Purity

Let X be a smooth curve over 𝔽_q. We say an object M in D^b_c(X) is ι-mixed of weights ≤ w iff H^iM is punctually ι-mixed of weights ≤ w+i for every i, and then M[n] is ι-mixed of weights ≤ w+n. We also say M is ι-pure of weight w iff M is ι-mixed of weights ≤ w and DM is ι-mixed of weights ≤ -w, and then M[n] is ι-pure of weight w+n. Finally, we say M is pure of weight w iff it is ι-pure of weight w for every field embedding ι:ℚ̄_ℓ→ℂ.

§.§ Subobjects and subquotients

Let (𝒜,⊕) be an abelian category, let 0 be its zero object, and let M,N be a pair of objects in 𝒜. We say that N is a subobject of M and write N ⊆ M iff there is a monomorphism N→M in 𝒜. More generally, we say that N is a subquotient of M iff there exist an object S, a monomorphism S→M, and an epimorphism S↠N, all in 𝒜. Equivalently, N is a subquotient of M iff there exist an object Q, an epimorphism M↠Q, and a monomorphism N→Q, all in 𝒜.

If M∈ Perv(X) is ι-pure of weight w, then so is every subquotient N.

See <cit.>.

Given a pair N_1,N_2 ⊆ M of subobjects, we write N_1 ⊆ N_2 ⊆ M iff N_1 ⊆ N_2 and, for the corresponding monomorphisms, N_1→M equals the composition N_1→N_2→M. We also write N_1=N_2 ⊆ M iff N_1 ⊆ N_2 ⊆ M and N_2 ⊆ N_1 ⊆ M. For example, if M is an object in Perv(X) and if ϕ is the Frobenius automorphism of M̅, then the subobjects N ⊆ M give rise to precisely those subobjects N̅ ⊆ M̅ satisfying N̅=ϕ(N̅) ⊆ M̅.

§.§ Kummer sheaves

Let 𝔾_m=ℙ^1∖{0,∞} over 𝔽_q, and let π_1^t be the tame étale fundamental group, that is, the maximal quotient of π_1(𝔾_m) whose kernel contains the p-Sylow subgroups of I(0) and I(∞). It lies in an exact sequence
1 → π_1^{t,geom} → π_1^t → Gal(𝔽̄_q/𝔽_q) → 1,
where π_1^{t,geom} is the image of π_1(𝔾_m×_{𝔽_q}𝔽̄_q) via the tame quotient π_1↠π_1^t. We say a constructible sheaf on 𝔾_m is a Kummer sheaf iff it is a middle-extension sheaf which is lisse of rank one on 𝔾_m and for which the corresponding representation factors through the tame quotient π_1↠π_1^t. Equivalently, the Kummer sheaves are the middle-extension sheaves ℒ_ρ on 𝔾_m associated to a continuous character ρ:π_1^t→ℚ̄_ℓ^×.

§.§ Middle convolution on 𝔾_m

Let π:𝔾_m×𝔾_m→𝔾_m be the multiplication map on 𝔾_m over 𝔽̄_q. Using it one can define two additive bifunctors on D^b_c(𝔾_m) corresponding to two flavors of multiplicative convolution:
M⋆_! N := Rπ_!(M⊠ N), M⋆_* N := Rπ_*(M⊠ N).
There is a canonical map M⋆_! N→ M⋆_* N, but it need not be an isomorphism in general. However, if both convolution objects lie in Perv(𝔾_m), then one can speak of the image of the map and define
M ∗_mid N := Image(M⋆_! N→ M⋆_* N).

This observation led Katz to define the full subcategory 𝒫 of Perv(𝔾_m) whose objects are all M for which N↦ M⋆_!N and N↦ M⋆_*N take perverse sheaves to perverse sheaves (see <cit.> and <cit.>). Among other things, it includes the perverse sheaves ℱ[1] for ℱ a simple middle-extension sheaf on 𝔾_m of generic rank at least two. Moreover, it is an additive category with respect to the usual direct sum of sheaves. Katz called the resulting additive bifunctor ∗_mid on 𝒫 middle convolution.

§.§ The category 𝒫_{𝔽_q}

Let Perv(𝔾_m/𝔽_q)→Perv(𝔾_m) be the "extension of scalars" functor which sends an object M over 𝔽_q to the object M̅=M×_{𝔽_q}𝔽̄_q. It maps objects of Perv(𝔾_m/𝔽_q) to objects of Perv(𝔾_m), and we define 𝒫_{𝔽_q} to be the full subcategory of Perv(𝔾_m/𝔽_q) whose objects M are those for which M̅ lies in 𝒫. Among other things, 𝒫_{𝔽_q} contains the perverse sheaves ℱ[1] for ℱ a geometrically simple middle-extension sheaf on 𝔾_m over 𝔽_q which is of generic rank at least two. Once again we have the two flavors of multiplicative convolution
M⋆_! N := Rπ_!(M⊠ N), M⋆_* N := Rπ_*(M⊠ N)
for any pair of objects M,N in Perv(𝔾_m/𝔽_q). We can also define middle convolution on 𝒫_{𝔽_q} as before:
M ∗_mid N := Image(M⋆_! N→ M⋆_* N)
for any pair of objects M,N in 𝒫_{𝔽_q}.

If M and N are ι-pure of weights m and n respectively, then M ∗_mid N is ι-pure of weight m+n.

Our argument is essentially that of <cit.>. On one hand, M⊠ N is ι-pure of weight m+n on 𝔾_m×𝔾_m, hence <cit.> and Proposition <ref> imply M⋆_! N and its perverse quotient M ∗_mid N are ι-mixed of weights ≤ m+n. On the other hand, DM and DN are ι-pure of weights m and n respectively, and
D(M ∗_mid N) = Image(D(M⋆_*N)→ D(M⋆_! N)) = Image(DM⋆_! DN→ DM⋆_* DN) = DM ∗_mid DN,
hence D(M ∗_mid N) is ι-mixed of weights ≤ m+n (cf. <cit.>). Thus M ∗_mid N is ι-pure of weight m+n as claimed.

§.§ The category 𝒬

Gabber and Loeser defined an object M in Perv(𝔾_m) to be negligible iff its Euler characteristic χ(𝔾_m,M) vanishes (see <cit.>), or equivalently, it is isomorphic to a successive extension of shifted Kummer sheaves ℒ_ρ[1] (cf. <cit.>). They showed that the full subcategory Neg of Perv(𝔾_m) whose objects are the negligible sheaves is a thick subcategory of the abelian category Perv(𝔾_m) (see <cit.>), and thus one can speak of the quotient category
𝒬 := Perv(𝔾_m)/Neg.
They then proceeded to show that 𝒬 is a neutral Tannakian category (see <cit.> and <cit.>).

The composite map 𝒫→Perv(𝔾_m)→𝒬 induces an equivalence of categories such that:
* middle convolution on 𝒫 induces a tensor product ⊗ on 𝒬;
* the unit object corresponds to the skyscraper sheaf i_*ℚ̄_ℓ for i:{1}→𝔾_m the inclusion;
* the dual M^∨ of an object M is the object [x↦ 1/x]^*DM;
* the dimension dim(M) of an object M is χ(𝔾_m,M);
* a fiber functor is M↦ H^0(𝔸^1,j_{0!}M) for j_0:𝔾_m→𝔸^1 the inclusion.

See <cit.> and <cit.>.

§.§ The category 𝒬_{𝔽_q}

Let Neg_{𝔽_q} be the full subcategory of Perv(𝔾_m/𝔽_q) whose objects M are those for which M̅ lies in Neg, and let
𝒬_{𝔽_q} := Perv(𝔾_m/𝔽_q)/Neg_{𝔽_q}.
Like 𝒬, the quotient category is an abelian category and even a neutral Tannakian category with tensor product ⊗ given by middle convolution. Moreover, the "extension of scalars" functor induces a functor 𝒬_{𝔽_q}→𝒬 which we also call the "extension of scalars" functor.

Suppose M,N∈𝒬_{𝔽_q} are ι-pure of weights m and n respectively. Then M^∨, N^∨, and M⊗ N are ι-pure of weights m, n, and m+n respectively.

The Verdier duals DM and DN are ι-pure of weights m and n respectively, hence so are the Tannakian duals M^∨=[x↦ 1/x]^*DM and N^∨=[x↦ 1/x]^*DN. Moreover, Proposition <ref> implies that M⊗ N=M ∗_mid N is ι-pure of weight m+n.

§.§ Semisimple abelian categories

We say that M is simple iff the only subobjects N ⊆ M in 𝒜 are isomorphic to 0 or M. More generally, we say that M is semisimple iff it is isomorphic to a finite direct sum N_1⊕⋯⊕ N_m of simple subobjects N_1,…,N_m ⊆ M. We say that 𝒜 is semisimple iff each of its objects is semisimple.
If M∈𝒬_{𝔽_q} is ι-pure of weight zero, then M̅ is semisimple.

If N_1,N_2∈𝒬_{𝔽_q} are ι-pure of weight zero, then so is N_1⊕ N_2. Therefore Proposition <ref> implies that T^{a,b}(M) is pure of weight zero for every a,b≥ 0, and <cit.> implies that T^{a,b}(M̅) is semisimple.

§.§ Tannakian monodromy group

Let k be an algebraically closed field of characteristic zero and Vec_k be the category of finite-dimensional vector spaces over k. It is well known that the latter yields a rigid abelian tensor category (Vec_k,⊗) with respect to the usual operators ⊕ and ⊗ of vector spaces and with unit object 𝟙=k.

Let (𝒞,⊗) be a neutral Tannakian category over k. Thus (𝒞,⊗) is a rigid abelian tensor category whose unit object 𝟙 satisfies k=End(𝟙) and for which there exists a fiber functor ω, that is, an exact faithful k-linear tensor functor ω:𝒞→Vec_k. For example, Vec_k is a neutral Tannakian category and the identity functor Vec_k→Vec_k is a fiber functor. More generally, given an affine group scheme G over k, the category Rep_k(G) of linear representations of G on finite-dimensional k-vector spaces yields a neutral Tannakian category (Rep_k(G),⊗), and the forgetful functor Rep_k(G)→Vec_k is a fiber functor.

Given an object M of 𝒞, its dual M^∨, and non-negative integers a,b, let
T^{a,b}(M) := M^{⊗ a} ⊕ (M^∨)^{⊗ b},
and let ⟨M⟩ be the full tensor subcategory of 𝒞 whose objects consist of all subobjects of T^{a,b}(M) for all a,b≥ 0. For each automorphism γ∈Aut_𝒞(M), let γ^∨∈Aut_𝒞(M^∨) be the corresponding dual automorphism and T^{a,b}(γ)∈Aut_𝒞(T^{a,b}(M)) be the induced automorphism.

Let Alg_k be the category of k-algebras and Set be the category of sets. Given a pair ω_1,ω_2 of fiber functors 𝒞→Vec_k and an object M in 𝒞, one can define a functor
Isom^⊗(ω_1|M,ω_2|M): Alg_k→Set
by sending a k-algebra R to the set
{ γ∈Isom_R(ω_1(M)_R,ω_2(M)_R) : T^{a,b}(γ)(ω_1(N)_R) ⊆ ω_2(N)_R for all a,b≥ 0 and all N ⊆ T^{a,b}(M) },
where ω_i(M)_R=ω_i(M)⊗_k R and
Isom_R(ω_1(M)_R,ω_2(M)_R) = { γ∈Hom_R(ω_1(M)_R,ω_2(M)_R) : γ is invertible }.
Similarly, given a single fiber functor ω:𝒞→Vec_k and an object M in 𝒞, one can define a functor
Aut^⊗(ω|M): Alg_k→Set
as the functor Isom^⊗(ω|M,ω|M).

Let ω_1,ω_2 be fiber functors 𝒞→Vec_k and M be an object of 𝒞.
* Aut^⊗(ω_i|M) is representable by an algebraic group scheme G_{ω_i|M} over k;
* if M is semisimple, then G_{ω_i|M} is reductive;
* Isom^⊗(ω_1|M,ω_2|M) is represented by an affine scheme over k which is a G_{ω_1|M}-torsor.

See <cit.>.

We call the group scheme G_{ω_i|M} in the theorem the Tannakian monodromy group of M with respect to ω_i.

Let ω:𝒬→Vec_k be a fiber functor and M∈𝒬_{𝔽_q}. If M is pure of weight zero, then G_{ω|M̅} is reductive.

This follows from Proposition <ref> and Theorem <ref>.<ref>.

§.§ Geometric versus arithmetic monodromy

For every object M in 𝒬_{𝔽_q} and all integers a,b≥ 0, the "extension of scalars" functor sends a subobject N ⊆ T^{a,b}(M) to a subobject N̅ ⊆ T^{a,b}(M̅). Moreover, composing the functor with a fiber functor ω on 𝒬 yields a fiber functor on 𝒬_{𝔽_q} which we also denote ω. Thus there is a natural transformation
Aut^⊗(ω|M̅) → Aut^⊗(ω|M)
and a corresponding monomorphism of Tannakian monodromy groups
G_{ω|M̅} → G_{ω|M}.
We call G_{ω|M̅} and G_{ω|M} the geometric and arithmetic Tannakian monodromy groups of M with respect to ω respectively.

Suppose M is in 𝒬_{𝔽_q} and is pure of weight zero.
* G_{ω|M̅} is a normal subgroup of G_{ω|M}.
* If M is arithmetically semisimple, then G_{ω|M}/G_{ω|M̅} is a torus, and thus G_{ω|M} is reductive.
Proposition <ref> implies that M̅ is semisimple, so part (1) follows from <cit.>. Therefore we can speak of the quotient G_{ω|M}/G_{ω|M̅}, and <cit.> implies it is a torus when M is arithmetically semisimple. Moreover, Proposition <ref> implies that G_{ω|M̅} is reductive, so part (2) follows by observing that the extension of a torus by a reductive group is reductive.

§.§ Frobenius element

Let ω be a fiber functor 𝒬→Vec_k, let E/𝔽_q be a finite extension, and let M be in 𝒬_E. The geometric Frobenius element of Gal(𝔽̄_q/E) induces a well-defined automorphism ϕ_E of M̅. By applying ω, one obtains a well-defined k-linear automorphism of ω(M̅), that is, an element of GL(ω(M̅))=GL(ω(M)). It is even an element of G_{ω|M} since, for every N ⊆ T^{a,b}(M) and a,b≥ 0, one has
N̅=T^{a,b}(ϕ_E)(N̅) ⊆ T^{a,b}(M̅)
and thus
ω(N̅)=T^{a,b}(ω(ϕ_E))(ω(N̅)) ⊆ ω(T^{a,b}(M̅))=T^{a,b}(ω(M)).
We call ω(ϕ_E) the geometric Frobenius element of G_{ω|M}.

§.§ Frobenius conjugacy classes

Let ω_1,ω_2 be fiber functors 𝒬_E→Vec_k, let M be an object of 𝒬_E, and let π be an element of Isom^⊗(ω_1|M,ω_2|M)(k). Then Theorem <ref>.<ref> implies that the map g↦π g induces a bijection
G_{ω_1|M} → Isom^⊗(ω_1|M,ω_2|M).
Moreover, the map g_2↦ g_2^π=π^{-1}g_2π induces an isomorphism G_{ω_2|M}→ G_{ω_1|M}. While the map is not canonical (since π is not), the conjugacy class
Frob_{ω_2|M} = { ω_2(ϕ)^{π g_1} : g_1∈ G_{ω_1|M}(k) } ⊆ G_{ω_1|M}(k)
is well defined. We call it the geometric Frobenius conjugacy class of ω_2|M in G_{ω_1|M}.

For each finite extension E/𝔽_q and each character ρ of E^×, let ℒ_ρ be the corresponding Kummer sheaf on 𝔾_m over E and ω_ρ:𝒬_E→Vec_k be the functor given by
M↦ H^0(𝔸^1,j_{0!}(M⊗ℒ_ρ)).
It is a fiber functor by <cit.>, and ω_𝟙 is the fiber functor of Theorem <ref>.<ref>. We write
Frob_{E,ρ} ⊆ G_{ω_𝟙|M}
for the corresponding geometric Frobenius conjugacy class of ω_ρ|M_E, where M_E=M×_{𝔽_q}E.

Let m=dim(ω_ρ(M)) and n∈{0,1,…,m}. We say that ω_ρ(M) is mixed of weights w_1,…,w_m iff there exists an eigenvalue tuple α=(α_1,…,α_m)∈(ℚ̄_ℓ^×)^m of any element of Frob_{E,ρ} such that
|ι(α_i)|^2 = (1/|E|)^{w_i}, 1≤ i≤ m,
for every field embedding ι:ℚ̄_ℓ→ℂ. We also say that ω_ρ(M) is mixed of non-zero weights w_1,…,w_n iff it is mixed of weights w_1,…,w_m with w_{n+1}=⋯=w_m=0.

§.§ Monodromy for pure middle-extension sheaves

Let U ⊆ 𝔾_m be a dense Zariski open subset over 𝔽_q. Let θ:π_1(U)→GL(W) be a continuous representation to a finite-dimensional ℚ̄_ℓ-vector space W and ℱ=ℱ_θ be the associated middle-extension sheaf on 𝔾_m. Suppose that θ is punctually pure of weight w so that M=ℱ((1+w)/2)[1] is pure of weight zero. Suppose moreover that θ is geometrically simple and that it does not factor through the composed quotient π_1(U)↠π_1^t, so that M lies in 𝒫_{𝔽_q}.

Let Γ̂ be the dual of Γ=(𝔽_q[u]/u𝔽_q[u])^× (cf. <ref>). We define the geometric and arithmetic Tannakian monodromy groups of (the Mellin transformation of) θ to be
G_{geom}(θ,Γ̂):=G_{ω_𝟙|M̅}, G_{arith}(θ,Γ̂):=G_{ω_𝟙|M}.

For u=0,∞, let W(u) denote W regarded as an I(u)-module, and let W(u)^{unip} be the maximal submodule of W(u) where I(u) acts unipotently. Moreover, let e_{u,1},…,e_{u,d_u} be positive integers satisfying
W(u)^{unip} ≃ U(e_{u,1})⊕⋯⊕ U(e_{u,d_u})
as I(u)-modules, where U(e) denotes the irreducible e-dimensional I(u)-module on which I(u) acts unipotently.

* The groups G_{geom}(θ,Γ̂) and G_{arith}(θ,Γ̂) are reductive, and there is an exact sequence
1 → G_{geom}(θ,Γ̂) → G_{arith}(θ,Γ̂) → T → 1
for some torus T over ℚ̄_ℓ.
* For each finite extension E/𝔽_q and each character ρ of E^×, the fiber ω_ρ(M) is mixed of non-zero weights -e_{0,1},…,-e_{0,d_0},e_{∞,1},…,e_{∞,d_∞}.

Part (1) follows from Proposition <ref>, and part (2) follows from <cit.>.
http://arxiv.org/abs/1703.09190v1
{ "authors": [ "Chris Hall", "Jonathan P. Keating", "Edva Roditty-Gershon" ], "categories": [ "math.NT", "11L99, 11M06, 11M38, 11M50" ], "primary_category": "math.NT", "published": "20170327172253", "title": "Variance of sums in arithmetic progressions of arithmetic functions associated with higher degree $L$-functions in $\\mathbb{F}_q[t]$" }
^1Department of Solar Energy & Environmental Physics, Blaustein Institutes for Desert Research, Ben-Gurion University of the Negev, Midreshet Ben-Gurion 84990, Israel ^2Department of Physics, Bar-Ilan University, Ramat-Gan 52900, Israel ^3Institut für Theoretische Physik, Justus-Liebig-Universität Giessen, 35392 Giessen, Germany ^4Institute of Innovative Research, Tokyo Institute of Technology, 4259 Nagatsuta-cho, Midori-ku, Yokohama 226-8502, Japan j.fang.fan@gmail.com

El Niño is probably the most influential climate phenomenon on interannual time scales. It affects the global climate system and is associated with natural disasters; it has serious consequences in many aspects of human life. However, the forecasting of the onset and in particular the magnitude of El Niño is still not accurate enough, at least more than half a year ahead. Here, we introduce a new forecasting index based on climate network links representing the similarity of low frequency temporal temperature anomaly variations between different sites in the Niño 3.4 region. We find that significant upward trends in our index forecast the onset of El Niño approximately 1 year ahead, and the highest peak in our index since the end of the last El Niño forecasts the magnitude of the following event. We study the forecasting capability of the proposed index on several datasets, including ERA-Interim, NCEP Reanalysis I, PCMDI-AMIP 1.1.2 and ERSST.v5.

92.10.am, 05.40.-a, 89.60.-k, 89.75.-k

Keywords: ENSO, climate networks, complex systems, dynamic networks.

§ INTRODUCTION

El Niño Southern Oscillation (ENSO) is an inter-annual coupled ocean-atmosphere climate phenomenon <cit.>. El Niño is the warm phase of ENSO and is characterized by a warming of several degrees of the eastern equatorial Pacific ocean. It occurs every 3-5 years, and is regarded as the most significant climate phenomenon on decadal time scales. Among other factors, it affects the surface temperature, precipitation and mid-tropospheric atmospheric circulation over extended regions in America, Australia, Europe, India, and East Asia <cit.>. In particular, a strong El Niño can trigger a cascade of events that can affect many aspects of human life <cit.>. As a result of the environmental, economic, and social impacts of El Niño, intensive efforts have been undertaken to understand and eventually forecast El Niño <cit.>. Extensive atmospheric and oceanic observations have been used to track variations in the ENSO cycle, and many complex computer models have been developed to forecast El Niño <cit.>. Still, reliable forecast techniques for the onset and in particular the magnitude of El Niño with relatively long lead times (of more than half a year) are not fully satisfactory. We have just undergone one of the strongest El Niño events since 1948, which started at the end of 2014 and ended in mid-2016 <cit.>. The onset of this event was predicted one year ahead using the network approach <cit.>. Here, we develop a climate-network-based index to forecast the onset of El Niño approximately 1 year ahead (similar to <cit.>). In particular, our approach forecasts the magnitude of El Niño, once it begins.

§ METHODOLOGY

The Oceanic Niño Index (ONI) is a standard index that is used to identify El Niño <cit.>. It is the running 3-month mean sea surface temperature (SST) anomaly averaged over the Niño 3.4 region, based on 30-year periods, updated every 5 years. When the ONI exceeds 0.5^∘C for at least five consecutive months, the corresponding year is considered to be an El Niño year. We use the ONI (whose first value is at 1950) to estimate the accuracy of our predictions for El Niño events that occurred after 1950.
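For illustration, this event definition is straightforward to implement; the short Python sketch below (our own addition, with a hypothetical `oni` series) flags El Niño episodes as runs of at least five consecutive monthly ONI values above 0.5^∘C.

```python
import numpy as np

def elnino_episodes(oni, thresh=0.5, min_run=5):
    """Return (start, end) month indices of runs with ONI > thresh lasting
    at least min_run consecutive months."""
    above = np.asarray(oni) > thresh
    episodes, start = [], None
    for t, flag in enumerate(above):
        if flag and start is None:
            start = t
        elif not flag and start is not None:
            if t - start >= min_run:
                episodes.append((start, t - 1))
            start = None
    if start is not None and len(above) - start >= min_run:
        episodes.append((start, len(above) - 1))
    return episodes

# Example: a synthetic ONI trace with one qualifying warm episode.
oni = [0.1, 0.4, 0.6, 0.8, 1.2, 1.0, 0.7, 0.6, 0.2, -0.1]
print(elnino_episodes(oni))   # -> [(2, 7)]
```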
We analyze the variability of the daily mean near surface (1000 hPa) air temperature fields of the ERA-Interim reanalysis <cit.>, the NCEP Reanalysis I <cit.>, the AMIP Sea Surface Temperature boundary condition data (current version: PCMDI-AMIP 1.1.3) <cit.>, and the extended reconstructed Sea Surface Temperature v5 (ERSST.v5) <cit.> in the Niño 3.4 region (i.e., 5^∘S-5^∘N, 120^∘W-170^∘W) using a climate network approach <cit.>. See Table <ref> (rows 1-5) for detailed information on the datasets. We find that the temporal variations of the temperature anomaly (defined below in (i)) at different sites of the Niño 3.4 region become less coherent (more disordered) well before the onset of El Niño. In particular, the magnitude of the event is approximately proportional to the maximal degree of disorder (defined below in (ii)) that the Niño 3.4 region can reach before the onset of El Niño. We suggest a single index, the degree of disorder of the Niño 3.4 region, that can forecast both the onset and magnitude of El Niño. In the following, we first demonstrate the steps of the forecasting method we propose on 33 years (1984 to present) of the reanalysis data of the European Centre for Medium-Range Weather Forecasts Interim Reanalysis (ERA-Interim) <cit.>. We then examine the robustness and accuracy of the prediction method on longer periods using several other datasets (NCEP Reanalysis I <cit.>, PCMDI-AMIP 1.1.2 <cit.> and ERSST.v5 <cit.>). The daily mean near surface (1000 hPa) air temperature fields of the ERA-Interim reanalysis data have a spatial (zonal and meridional) resolution of 2.5^∘× 2.5^∘, resulting in 105 grid points in the Niño 3.4 region. Different locations (grid points) in the Niño 3.4 region correspond to nodes in the local climate network, and the weights of the links are determined by the similarities (defined below in (ii)) of the temporal temperature anomaly variations between pairs of nodes <cit.>. The forecasting algorithm is as follows:

* At each node k of the network, we calculate the daily atmospheric temperature anomalies T_k(t) (actual temperature value minus the climatological average, which is then divided by the climatological standard deviation) for each calendar day. For the calculation of the climatological average and standard deviation, only past data up to the prediction date have been used. For simplicity, leap days were excluded. We have used the first 5 years of data (1979-1983) to calculate the first average value and start the prediction from 1984.

* For obtaining the time evolution of the weight of the link between nodes i and j in the Niño 3.4 region, we follow <cit.> and compute, for each month t (the first day of the month), in the considered time span between Jan. 1, 1981 and Aug. 31, 2017, the time-delayed cross-correlation function defined as
C^(t)_{i,j}(-τ)=(⟨ T_i^(t)(t) T_j^(t)(t-τ) ⟩-⟨ T_i^(t)(t)⟩⟨ T_j^(t)(t-τ) ⟩)/(√(⟨ (T_i^(t)(t)-⟨ T_i^(t)(t)⟩)^2⟩)·√(⟨ (T_j^(t)(t-τ)-⟨ T_j^(t)(t-τ)⟩)^2⟩)),
and
C^(t)_{i,j}(τ)=(⟨ T_i^(t)(t-τ) T_j^(t)(t) ⟩-⟨ T_i^(t)(t-τ)⟩⟨ T_j^(t)(t) ⟩)/(√(⟨ (T_i^(t)(t-τ)-⟨ T_i^(t)(t-τ)⟩)^2⟩)·√(⟨ (T_j^(t)(t)-⟨ T_j^(t)(t)⟩)^2⟩)),
where the brackets denote an average over the past 365 days, according to
⟨ f(t) ⟩=(1/365)∑_{a=1}^{365} f(t-a).
We consider, for the daily datasets, time lags of τ∈ [0, 200] days, where a reliable estimate of the background noise level can be guaranteed (the appropriate time lag is discussed in <cit.>).
For the monthly updated datasets (PCMDI-AMIP 1.1.3 and ERSST.v5), the brackets denote an average over the past 12 months, according to
⟨ f(t) ⟩=(1/12)∑_{a=1}^{12} f(t-a),
and we consider time lags of τ∈ [0,6] months. The similarity between two nodes (the weight of the link) is determined by the value of the highest peak of the cross-correlation function, C^(t)_{i,j}(θ), where θ is the corresponding time lag at the peak. The degree of coherence/disorder of the Niño 3.4 region is quantified by the average value of all links at their peaks, i.e.,
C(t)=(2/N(N-1))∑_{i=1}^{N-1}∑_{j=i+1}^{N} C^(t)_{i,j}(θ),
where N=105 is the number of nodes in the Niño 3.4 region. Thus, higher values of C(t) indicate higher coherence in the Niño 3.4 region. We like to note that the strength of the link between nodes i and j is represented by the strength of the cross-correlation between the temperature records at the nodes, which is defined by <cit.>
W_{i,j}^(t) = (C_{i,j}^(t)(θ) - E(C_{i,j}^(t)))/√(E(C_{i,j}^(t)-E(C_{i,j}^(t)))^2),
where E(g) denotes the average over the 401 time lags, according to
E(g)=(1/401)(∑_{τ=0}^{200} g(τ)+∑_{τ=1}^{200} g(-τ)).
Thus, W_{i,j}^(t) is high when the peak at τ=θ is sharp and prominent, and it is low when the cross-correlation function C_{i,j}^(t)(τ) varies slowly with τ. In <cit.>, Ludescher et al. introduced a 12-month forecasting scheme based on the observation that the mean strength of links that connect the "El Niño basin" (equatorial Pacific corridor) with the surrounding sites tends to increase about one year before an El Niño event.

* The forecasting index (FI) we propose here is based on the temporal evolution of C(t) (defined in (ii), Eq. (<ref>)), representing the interactions or similarity (coherence) between the different sites within the Niño 3.4 region. We define the FI as a function of months as follows,
FI(t)=(1/(m+1))∑_{a=0}^{m} ln C(t-a) - ln C(t),
where m is the total number of months before t since Jan. 1981. We use a minus sign in the right hand side of Eq. (<ref>) so that peaks in the FI correspond to peaks in the ONI, see Fig. <ref>. We also use ln C(t) instead of just C(t) in order to make small variations of C(t) more significant, so that they can be seen more clearly in Fig. <ref>. We start to evaluate FI(t) from Jan. 1984. (For NCEP Reanalysis I, m equals the number of months before t since Jan. 1950, and FI(t) starts from Jan. 1953; for PCMDI-AMIP 1.1.2, m equals the number of months before t since Jan. 1872, and FI(t) starts from Jan. 1950; for ERSST.v5, m equals the number of months before t since Jan. 1856, and FI(t) starts from Jan. 1950.) Thus, it follows that FI(t) increases (C(t) decreases) when the Niño 3.4 region is less coherent or more disordered (due to the minus sign). FI(t) is calculated for each month (red dotted line in Fig. <ref>), and one can easily see that usually FI(t) increases well before the onset of El Niño, and decreases once El Niño begins. In other words, the temporal variations of the temperature anomaly at different sites of the Niño 3.4 region become less coherent (more disordered) prior to El Niño, and start to synchronize once El Niño begins. In particular, we find that the more disordered the Niño 3.4 region is before El Niño, the higher is the magnitude of the approaching El Niño. A simplified code sketch of items (i)-(iii) is given below.
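The following Python sketch is our own illustration of the pipeline, not the authors' production code: it uses plain Pearson correlations, a hypothetical anomaly array T of shape (nodes, days), and assumes t ≥ win + max_lag.

```python
import numpy as np

def C_of_t(T, t, max_lag=200, win=365):
    """C(t): mean over node pairs (i<j) of the peak of the time-delayed
    cross-correlation, estimated over the past `win` days."""
    N = T.shape[0]
    base = T[:, t - win:t]                        # unshifted window ending at t
    peaks = np.full((N, N), -np.inf)
    for lag in range(max_lag + 1):
        lagged = T[:, t - win - lag:t - lag]      # window delayed by `lag` days
        for x, y in ((base, lagged), (lagged, base)):   # covers tau and -tau
            xs = (x - x.mean(1, keepdims=True)) / x.std(1, keepdims=True)
            ys = (y - y.mean(1, keepdims=True)) / y.std(1, keepdims=True)
            peaks = np.maximum(peaks, xs @ ys.T / win)  # Pearson r at this lag
    iu = np.triu_indices(N, k=1)
    return peaks[iu].mean()

def forecasting_index(C_monthly):
    """FI(t): running mean of ln C over all months up to t, minus ln C(t)."""
    logC = np.log(np.asarray(C_monthly))
    running_mean = np.cumsum(logC) / np.arange(1, len(logC) + 1)
    return running_mean - logC
```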
§ THE FORECASTING ALGORITHM USING INDEX FI

Based on the above observation, we suggest the following algorithm to forecast simultaneously both the magnitude and onset of El Niño using FI(t). For demonstration, see the example shown in Fig. <ref> (b).

* To forecast the magnitude: as soon as the ONI rises across 0.5^∘C in a given month, we regard the value of the highest peak of FI(t) ("Peak", as indicated by the red points in Fig. <ref> (a) and the red arrow in (b)) since the end of the last El Niño as an estimate (forecasted magnitude) of the El Niño strength (observed magnitude). However, if the peak value is negative or there is no peak during this period, we use zero as the forecasted magnitude and forecast a weak El Niño event (ONI<1^∘C) (we counted the results for all the datasets we used, and find that the ratio of such events is 13% on average of all the El Niño events, and most of them (84% on average of this kind of events) are indeed weak). In addition, we should clarify that if the ONI rises across 0.5^∘C but does not stay above it for at least five months, we do not have an El Niño event, and thus the value of the highest peak is not a prediction of the El Niño magnitude.

* To forecast the onset: we track both FI(t) and the ONI, starting from the onset of the previous El Niño. If FI(t) increases from a local minimum ("Valley", as indicated by the blue arrow in Fig. <ref> (b)) continuously for at least two months (the time segment that yielded the best forecast), the time at which FI(t) exceeds 0 (if it is not during an ongoing El Niño/La Niña period, i.e., -0.5^∘C<ONI<0.5^∘C) is considered as a potential signal for the onset of either an El Niño or a La Niña event within approximately the next 18 months ("Forecast", as indicated by the green arrows in Fig. <ref>); this rule is sketched in code at the end of this section. Moreover, if a La Niña is experienced within these 18 months, we forecast a new El Niño to occur within 18 months after the end of the La Niña (the first month of ONI>-0.5^∘C after the La Niña). Given the above, a true-positive prediction of El Niño is counted if within 18 months after the potential signal an El Niño occurs ("normal", as indicated by the green arrows in Fig. <ref>), or a La Niña followed by an El Niño within the next 18 months occurs ("delayed", as indicated by the green arrows with stars on top in Fig. <ref>); otherwise, a false alarm is counted.

Next, we elaborate on the reasoning behind our approach. In Fig. <ref> (a), we plot the probability density function (PDF) of C_{i,j}^(t)(θ) for all links in the network windows at which "Valley" (m=Valley, blue), "Forecast" (m=Forecast, green) and "Peak" (m=Peak, red) occur, respectively. We compare these PDFs with a PDF of random networks that are obtained by shuffling the order of the calendar days for each node within the Niño 3.4 region. We find the strongest correlations for the "Valley" periods (as the PDF is stretched toward higher values), then weaker correlations for the "Forecast" periods, and then the weakest correlations for the "Peak" periods (closest to the shuffled correlations). Thus, the Niño 3.4 network (region) becomes less coherent when progressing from the "Valley" periods to the "Peak" periods. The order is reestablished towards the actual peak of El Niño. The evolution of the cross-correlation of a typical link (shown in Fig. <ref> (b)) before the onset of the 2014-2016 El Niño event is shown in Fig. <ref> (c). The three cross-correlation functions (blue, green, and red) correspond to the "Valley", "Forecast" and "Peak" points marked by the blue, green and red arrows in Fig. <ref> (b). Consistently, we find that the maximal value of the cross-correlation function, C_{i,j}^(t)(θ), decreases from the time of "Valley" to the "Forecast" time, and then to the "Peak" time. Moreover, while C_{i,j}^(t)(θ) is decreasing from the Valley to the Peak months, the strength of the link W_{i,j}^(t) <cit.> is increasing. This difference is probably due to the autocorrelation of the temperature anomaly variations in the Niño 3.4 region <cit.>; see Fig. S1.
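The onset rule above reduces to a simple scan over the monthly FI and ONI series. The sketch below is our own simplified reading of the rule (thresholds as stated in the text); the "delayed" La Niña branch and the 18-month bookkeeping are left to the caller.

```python
def onset_signals(fi, oni, rise_months=2, neutral=0.5):
    """Months t at which FI, after rising from a local minimum for at least
    `rise_months` consecutive months, crosses above 0 while the ONI is in
    the neutral band.  Each signal forecasts an El Nino (or La Nina) onset
    within approximately the next 18 months."""
    signals = []
    for t in range(rise_months + 1, len(fi)):
        rising = all(fi[t - k] > fi[t - k - 1] for k in range(rise_months))
        crossed_zero = fi[t - 1] <= 0.0 < fi[t]
        neutral_band = -neutral < oni[t] < neutral
        if rising and crossed_zero and neutral_band:
            signals.append(t)
    return signals
```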
§ RESULTS

§.§ Forecasting the magnitude of El Niño

We now examine the accuracy and robustness of our forecast of the magnitude of El Niño events between 1950 and the present (since the ONI begins in 1950), using several datasets. For this purpose, we plot the predicted magnitude versus the observed magnitude of El Niño (scatter plot), and use the Pearson correlation coefficient, r, to quantify the correlation. We present such scatter plots in Fig. <ref>. Next, we apply the Kolmogorov-Smirnov test to quantify the significance of the relationship between the predicted and observed magnitude of El Niño; Fig. <ref> (insets). Each time we randomly choose ten events and calculate the correlation coefficient between their predicted and observed magnitudes; we repeated this procedure 1 million times, and obtained the PDF of r-values for each dataset (colored green in Fig. <ref>). For comparison, we also consider random cases as follows. Each time we choose randomly ten predicted values and randomly ten observed values and then perform a linear regression between them; also here we have performed 1 million selections, and obtain the PDF of r-values for each dataset (colored gray in Fig. <ref>). Then we compare the PDFs of the observed r-values to the random r-values using the Kolmogorov-Smirnov statistic D <cit.>. For each dataset used here, D is relatively large (D≥ 0.37), indicating that the relationship between the observed and predicted El Niño magnitudes is highly significant. The results are summarized in Table <ref>, in the rows headed "El Niño magnitude". We note, however, that the prediction of the magnitude of El Niño is performed at the actual onset of El Niño, which on average occurs about half a year prior to the peak of El Niño.

§.§ Forecasting the onset of El Niño

Next we examine the forecasting power for the onset of El Niño. The results are summarized in Table <ref>, in the rows headed "El Niño onset". Here, the hit rate is defined as the number of hits (true positive predictions) divided by the number of El Niño events; the false alarm rate is defined as the number of false alarms divided by the number of years during which no El Niño started. The lead time equals the time from the potential signal (or the end of the La Niña, if a La Niña is experienced after the prediction) to the actual onset of El Niño (shaded areas in Fig. <ref>). Previous studies proposed various methods to forecast El Niño events. Some of these predict quite successfully the onset of El Niño about one year in advance <cit.>. We compare our prediction method to the prediction of the 12-month forecasting scheme based on the climate-network approach <cit.> and to the predictions of state-of-the-art models, the COLA anomaly coupled model <cit.> and the Chen-Cane model <cit.>; for this purpose we use the relative operating characteristic (ROC) <cit.>, see Fig. S3. We use the hit and false alarm rates as follows. The hit rate = hits/(hits+misses), where "hits" is the number of true positive predictions of El Niño. The false alarm rate = false alarms/(false alarms+correct rejections), where the number of "correct rejections" equals the number of years where no El Niño started and no false alarm appeared in the 18 months preceding the year. The resulting hit rate of our approach is ≥ 0.81 and the false alarm rate is ≤ 0.24. For a prediction lead time of 12 months, the hit rate is <0.4 for the COLA model <cit.> and <0.45 for the Chen-Cane model <cit.> with a false alarm rate of ∼ 0.2. The hit rate for the network approach in <cit.> is 0.667 and the false alarm rate is 0.095.
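To make the scoring concrete, here is a minimal sketch of the counting (our own illustration; it implements only the basic 18-month rule, not the "delayed" La Niña branch, and the month lists are hypothetical inputs):

```python
def score_forecasts(signal_months, onset_months, window=18):
    """Count hits (an El Nino onset within `window` months after a signal)
    and false alarms."""
    hits = false_alarms = 0
    for s in signal_months:
        if any(s < o <= s + window for o in onset_months):
            hits += 1
        else:
            false_alarms += 1
    return hits, false_alarms

# Example: the first signal is followed by an onset 10 months later.
print(score_forecasts([12, 40], [22]))   # -> (1, 1)
```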
The prediction scheme we propose here improves the prediction of the onset of El Niño. An additional, and also the most important, advantage of the proposed prediction scheme is that it provides predictions both for the magnitude and the onset of El Niño based only on the temperature variability and its coherence in the Niño 3.4 region.

§ SUMMARY

In summary, we introduce a new forecasting index (FI) that is based on climate networks and accurately and simultaneously forecasts both the onset and magnitude of El Niño. The performance of the FI is examined successfully on several datasets. Our forecasting algorithm is based on the finding that the similarity or coherence of the low frequency temporal variability of the temperature anomaly between different sites (strength of links) in the Niño 3.4 region decreases well before El Niño and increases at the onset of El Niño. The magnitude of the predicted El Niño is positively related to the highest peak in the FI during the period between the end of the last El Niño and the onset of the new one. The results presented here indicate an important characteristic of the phase of the ENSO cycle, i.e., a significant increase of disorder occurs in the Niño 3.4 region well before the onset of El Niño. The relationship between El Niño and the variation of the degree of disorder in the Niño 3.4 region may be further explained by defining an entropy based on the coherence of temperature variations at different sites of the Niño 3.4 region, which oscillates periodically with the ENSO cycle. There is surely room for further improvement of the forecasting algorithm proposed here, probably in combination with other forecasting techniques and models.

§ ACKNOWLEDGMENTS

We thank Kai Xu for helpful discussions and suggestions. We acknowledge the Israel-Italian collaborative project NECST, the Israel Science Foundation, ONR, Japan Science Foundation, BSF-NSF, and DTRA (Grant No. HDTRA-1-10-1-0014) for financial support. J.F. thanks the fellowship program funded by the Planning and Budgeting Committee of the Council for Higher Education of Israel.

§ REFERENCES

10 Dijkstra2005 Dijkstra H A 2005 Nonlinear Physical Oceanography: A Dynamical Systems Approach to the Large Scale Ocean Circulation and El Niño (New York: Springer Science)

Clarke2008 Clarke A J 2008 An Introduction to the Dynamics of El Niño and the Southern Oscillation (London: Academic)

Cane2010 Cane M A and Sarachik E S 2010 The El Niño-Southern Oscillation Phenomenon (Cambridge: Cambridge University Press)

Halpert1992 Halpert M S and Ropelewski C F 1992 Surface temperature patterns associated with the Southern Oscillation J. Clim. 5 577

Diaz2001 Diaz H F, Hoerling M P and Eischeid J K 2001 ENSO variability, teleconnections and climate change Int. J. Climatol. 21 1845

Ropelewski1986 Ropelewski C F and Halpert M S 1986 North American Precipitation and Temperature Patterns Associated with the El Niño/Southern Oscillation (ENSO) Mon. Weather Rev. 114 2352

Kumar2006 Kumar K K et al 2006 Unraveling the Mystery of Indian Monsoon Failure During El Niño Science 314 115

Kushnir2007 Ihara C, Kushnir Y and Cane M A 2007 Indian summer monsoon rainfall and its link with ENSO and Indian Ocean climate indices Int. J. Climatol. 27 179

Hsiang2011 Hsiang S M, Meng K C and Cane M A 2011 Civil conflicts are associated with the global climate Nature 476 438

Burke2015 Burke M, Gong E and Jones K 2015 Income Shocks and HIV in Africa Econ. J. 125 1157

Schleussner2016 Schleussner C F, Donges J F, Donner R V and Schellnhuber H J 2016 Armed-conflict risks enhanced by climate-related disasters in ethnically fractionalized countries Proc. Natl. Acad. Sci.
USA 113 9216

McPhaden1998 McPhaden M J et al 1998 The Tropical Ocean-Global Atmosphere observing system: A decade of progress J. Geophys. Res. 103 14169

Chen2008 Chen D and Cane M A 2008 El Niño prediction and predictability J. Comput. Phys. 227 3625

Cane1986 Cane M A, Zebiak S E and Dolan S C 1986 Experimental forecasts of El Niño Nature 321 827

Barnett1988 Barnett T P et al 1988 On the Prediction of the El Niño of 1986-1987 Science 241 192

Tang1997 Tangang F T, Hsieh W W and Tang B 1997 Forecasting the equatorial Pacific sea surface temperatures by neural network models Clim. Dyn. 13 135

Kirtman1997 Kirtman B P et al 1997 Multiseasonal predictions with a coupled tropical ocean global atmosphere system Mon. Wea. Rev. 125 789

Chen2004 Chen D, Cane M A, Kaplan A, Zebiak S E and Huang D 2004 Predictability of El Niño over the past 148 years Nature 428 733

Luo2008 Luo J, Masson S, Behera S K and Yamagata T 2008 Extended ENSO Predictions Using a Fully Coupled Ocean-Atmosphere Model J. Clim. 21 84

Yeh2009 Yeh S W et al 2009 El Niño in a changing climate Nature 461 511

Galanti2003 Galanti E, Tziperman E, Harrison M, Rosati A and Sirkes Z 2003 A Study of ENSO Prediction Using a Hybrid Coupled Model and the Adjoint Method for Data Assimilation Mon. Wea. Rev. 131 2748

ENSO_report ENSO Cycle: Recent Evolution, Current Status and Predictions, Climate Prediction Center/NCEP. <http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/lanina/enso_evolution-status-fcsts-web.pdf>

Ludescher2014 Ludescher J, Gozolchiani A, Bogachev M, Bunde A, Havlin S and Schellnhuber H J 2014 Very early warning of next El Niño Proc. Natl. Acad. Sci. USA 111 2064

Ludescher Ludescher J, Gozolchiani A, Bogachev M, Bunde A, Havlin S and Schellnhuber H J 2013 Improved El Niño forecasting by cooperativity detection Proc. Natl. Acad. Sci. USA 110 11742

Jun2017 Meng J, Fan J, Ashkenazy Y and Havlin S 2017 Percolation framework to describe El Niño conditions Chaos 27 035807

Yamasaki2008 Yamasaki K, Gozolchiani A and Havlin S 2008 Climate Networks around the Globe are Significantly Affected by El Niño Phys. Rev. Lett. 100 228501

Tsonis2008 Tsonis A A and Swanson K L 2008 Topology and Predictability of El Niño and La Niña Networks Phys. Rev. Lett. 100 228502

Donges20091 Donges J F, Zou Y, Marwan N and Kurths J 2009 Complex networks in climate dynamics Eur. Phys. J. Spec. Top. 174 157

Donges20092 Donges J F, Zou Y, Marwan N and Kurths J 2009 The backbone of the climate network Europhys. Lett. 87 48007

Gozolchiani2011 Gozolchiani A, Havlin S and Yamasaki K 2011 Emergence of El Niño as an Autonomous Component in the Climate Network Phys. Rev. Lett. 107 148501

Wang2013 Wang Y, Gozolchiani A, Ashkenazy Y, Berezin Y, Guez O and Havlin S 2013 Dominant Imprint of Rossby Waves in the Climate Network Phys. Rev. Lett. 111 138501

Zhou2015 Zhou D, Gozolchiani A, Ashkenazy Y and Havlin S 2015 Teleconnection Paths via Climate Network Direct Link Detection Phys. Rev. Lett. 115 268501

Fan2017 Fan J, Meng J, Ashkenazy Y, Havlin S and Schellnhuber H J 2017 Network analysis reveals strongly localized impacts of El Niño Proc. Natl. Acad. Sci. USA 114 7543

oni <https://www.esrl.noaa.gov/psd/data/correlation/oni.data>, Aug 2017

Dee2011 Dee D P et al 2011 The ERA-Interim reanalysis: configuration and performance of the data assimilation system Quarterly Journal of the Royal Meteorological Society 137 656

ncep Kalnay E et al 1996 The NCEP/NCAR 40-Year Reanalysis Project Bull. Am. Meteorol. Soc. 77 437

cmip6 Taylor, K.E., D. Williamson and F.
Zwiers, 2000: "The sea surface temperature and sea ice concentration boundary conditions for AMIP II simulations" PCMDI Report 60, Program for Climate Model Diagnosis and Intercomparison, Lawrence Livermore National Laboratory, 25 pp (for further descriptive details, see <http://www-pcmdi.llnl.gov/projects/amip/AMIP2EXPDSN/BCS/index.php>)

ersst Huang B, Thorne P W, Banzon V F, Boyer T, Chepurin G, Lawrimore J H, Menne M J, Smith T M, Vose R S and Zhang H-M 2017 NOAA Extended Reconstructed Sea Surface Temperature (ERSST), Version 5 NOAA National Centers for Environmental Information doi:10.7289/V5T72FNM

Guez2014 Guez O, Gozolchiani A and Havlin S 2014 Influence of autocorrelation on the topology of the climate network Phys. Rev. E 90 062814

k-s Ross S M 2004 Introduction to Probability and Statistics for Engineers and Scientists (3rd edition, Elsevier Academic Press)

Kirtman2003 Kirtman B P 2003 The COLA anomaly coupled model: Ensemble ENSO prediction Mon. Weather Rev. 131 2324

roc Mason S J and Graham N E 1999 Wea. Forecast. 14 713
http://arxiv.org/abs/1703.09138v2
{ "authors": [ "Jun Meng", "Jingfang Fan", "Yosef Ashkenazy", "Armin Bunde", "Shlomo Havlin" ], "categories": [ "physics.geo-ph" ], "primary_category": "physics.geo-ph", "published": "20170327151431", "title": "Forecasting the magnitude and onset of El Nino based on climate network" }
Polynomial-Time Methods to Solve Unimodular Quadratic Programs With Performance Guarantees

Shankarachary Ragi, Member, IEEE, Edwin K. P. Chong, Fellow, IEEE, and Hans D. Mittelmann

The work of S. Ragi and H. D. Mittelmann was supported in part by Air Force Office of Scientific Research under grant FA 9550-15-1-0351. The work of E. K. P. Chong was supported in part by National Science Foundation under grant CCF-1422658. This work appeared in part in the Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Mar 2017. S. Ragi and H. D. Mittelmann are with the School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ, 85287, USA e-mail: {shankarachary.ragi, mittelmann}@asu.edu. E. K. P. Chong is with the Department of Electrical and Computer Engineering, Colorado State University, Fort Collins, CO, 80523, USA e-mail: edwin.chong@colostate.edu.

December 30, 2023
======================================================================

We develop polynomial-time heuristic methods to solve unimodular quadratic programs (UQPs) approximately, which are known to be NP-hard. In the UQP framework, we maximize a quadratic function of a vector of complex variables with unit modulus. Several problems in active sensing and wireless communication applications boil down to UQP. With this motivation, we present three new heuristic methods with polynomial-time complexity to solve the UQP approximately. The first method is called dominant-eigenvector-matching; here the solution is picked that matches the complex arguments of the dominant eigenvector of the Hermitian matrix in the UQP formulation. We also provide a performance guarantee for this method. The second method, a greedy strategy, is shown to provide a performance guarantee of (1-1/e) with respect to the optimal objective value given that the objective function possesses a property called string submodularity. The third heuristic method is called row-swap greedy strategy, which is an extension to the greedy strategy and utilizes certain properties of the UQP to provide a better performance than the greedy strategy at the expense of an increase in computational complexity.
We present numerical results to demonstrate the performance of these heuristic methods, and also compare the performance of these methods against a standard heuristic method called semidefinite relaxation.

Unimodular codes, unimodular quadratic programming, heuristic methods, radar codes, string submodularity

§ INTRODUCTION

Unimodular quadratic programming (UQP) appears naturally in radar waveform-design, wireless communication, and active sensing applications <cit.>. To state the UQP problem in simple terms: a finite sequence of complex variables with unit modulus is to be optimized so as to maximize a quadratic objective function. In the context of a radar system that transmits a linearly encoded burst of pulses, the authors of <cit.> showed that the problems of designing the coefficients (or codes) that maximize the signal-to-noise ratio (SNR) <cit.> or minimize the Cramer-Rao lower bound (CRLB) lead to a UQP (see <cit.> for more details). We also know that UQP is NP-hard from the arguments presented in <cit.> and the references therein. In this study, we focus on developing tractable heuristic methods to solve the UQP problem approximately with polynomial complexity with respect to the size of the problem. We also provide performance bounds for these heuristic methods.

In this study, a bold uppercase letter represents a matrix and a bold lowercase letter represents a vector; if not bold, it represents a scalar. Let s represent the unimodular code sequence of length N, where each element of this vector lies on the unit circle Ω centered at the origin in the complex plane, i.e., Ω = { x ∈ℂ, |x| = 1 }. The UQP problem is stated as follows:
maximize_{s∈Ω^N} s^H R s,
where R∈ℂ^{N× N} is a given Hermitian matrix. There have been several attempts at solving the UQP problem (or a variant of it) approximately or exactly in the past; see the references in <cit.>. For instance, the authors of <cit.> studied the discrete version of the UQP problem, where the unimodular codes to be optimized are selected from a finite set of points on the complex unit circle around the origin, as opposed to the set of all points that lie on this unit circle in our UQP formulation (as shown in (<ref>)). Under the condition that the Hermitian matrix in this discretized UQP is rank-deficient and the rank behaves like 𝒪(1) with respect to the dimension of the problem, the authors of <cit.> proposed a polynomial-time algorithm to obtain the optimal solution. Inspired by these efforts, we propose three new heuristic methods to solve the UQP problem (<ref>) approximately, where the computational complexity grows only polynomially with the size of the problem.

The rest of the paper is organized as follows. In Section <ref>, we present a heuristic method called dominant-eigenvector-matching and a performance bound for this method. In Section <ref>, we develop a greedy strategy to solve the UQP problem approximately, which has polynomial complexity with respect to the size of the problem; we also derive a performance bound (when R satisfies certain conditions) for this method for a class of UQP problems. In Section <ref>, we discuss the third heuristic method called row-swap greedy strategy, and we also derive a performance bound for this method for a certain class of UQP problems. In Section <ref>, we show application examples where our greedy and row-swap greedy methods are guaranteed to provide the above-mentioned performance guarantees.
In Section <ref>, we demonstrate the effectiveness of the above-mentioned heuristic methods via a numerical study. Section <ref> provides a summary of the results and the concluding remarks.

§ DOMINANT EIGENVECTOR-MATCHING HEURISTIC

Let λ_1, …,λ_N be the eigenvalues of R such that λ_1 ≤⋯≤λ_N. We can verify that
λ_1 N ≤ max_{s∈Ω^N} s^H R s ≤ λ_N N.
The above upper bound on the optimal solution (λ_N N) will be used in the following discussions. In this study, a complex vector a is said to be matching a complex vector b when arg(a(i)) = arg(b(i)) for all i, where a(i) and b(i) are the ith elements of the vectors a and b respectively, and arg(x) represents the argument of a complex variable x.

Without loss of generality, we assume that R is positive semi-definite. If R is not positive semi-definite, we can turn it into one with the diagonal loading technique without changing the optimal solution to the UQP, i.e., we do the following: R = R - λ_1 𝕀_N, where λ_1 (< 0 as R is not positive semi-definite) is the smallest eigenvalue of R. Let R be diagonalized as follows: R = U Λ U^H, where Λ is a diagonal matrix with the eigenvalues (λ_1,…,λ_N) of R as the diagonal elements, and U is a unitary matrix with the corresponding eigenvectors as its columns. Let U = [e_1 … e_N], where e_i is the eigenvector corresponding to the eigenvalue λ_i. Thus, the UQP expression can be written as:
s^H R s = s^H U Λ U^H s = ∑_{i=1}^N λ_i |t(i)|^2,
where t(i) is the ith element of U^H s, and |.| is the modulus of a complex number. We know that ∑_{i=1}^N |t(i)|^2 = N for all s ∈Ω^N. Ideally, the UQP objective function would be maximal if we could find an s such that |t(N)|^2 = N and |t(i)| = 0 for all i<N; but for any given R such an s may not exist. Inspired by the above observation, we present the following heuristic method to solve the UQP problem approximately. We choose an s∈Ω^N that maximizes the last term in the above summation, |t(N)|. In other words, we choose an s∈Ω^N that "matches" (see the definition presented earlier) e_N, the dominant eigenvector of R. We call this method dominant-eigenvector-matching. But e_N may contain zero elements, and when this happens we set the corresponding entry in the solution vector to e^{j0}. Hereafter, this heuristic method is represented by 𝒟. The following result provides a performance bound for 𝒟.

Given a Hermitian and positive semi-definite matrix R, if V_𝒟 and V_opt represent the objective function values from the heuristic method 𝒟 and the optimal solution respectively for the UQP problem, then
V_𝒟/V_opt ≥ (λ_N + (N-1)λ_1)/(λ_N N),
where λ_1 and λ_N are the smallest and the largest eigenvalues of R of size N.

Let d be the solution obtained from the heuristic algorithm 𝒟. Therefore, the objective function value from 𝒟 is
V_𝒟 = d^H R d = ∑_{i=1}^N λ_i |e_i^H d|^2,
where λ_1,…,λ_N (λ_1 ≤…≤λ_N) are the eigenvalues of R with e_1,…,e_N being the corresponding eigenvectors. Since d matches the dominant eigenvector of R, we know that
|e_N^H d|^2 = ( ∑_{i=1}^N |e_N(i)| )^2 ≥ ∑_{i=1}^N |e_N(i)|^2 = 1.
We know that
∑_{i=1}^N |e_i^H d|^2 = ‖U^H d‖_2^2 = ‖d‖_2^2 = N,
where ‖·‖_2 represents the 2-norm. Thus,
V_𝒟/V_opt ≥ (λ_N |e_N^H d|^2 + λ_1(N-|e_N^H d|^2))/(λ_N N) ≥ (λ_N + (N-1)λ_1)/(λ_N N).

The above heuristic method has polynomial complexity, as most eigenvalue algorithms (used to find the dominant eigenvector) have a computational complexity of at most 𝒪(N^3) <cit.>, e.g., the QR algorithm.
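A direct numpy implementation of 𝒟 takes a few lines; the sketch below is our own illustration (helper names are hypothetical), and it also evaluates the normalized objective value against the lower bound in the proposition above.

```python
import numpy as np

def dominant_eigenvector_matching(R):
    """Heuristic D: unimodular vector matching the complex arguments of the
    dominant eigenvector of the Hermitian PSD matrix R (zero entries -> e^{j0})."""
    _, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
    e_N = V[:, -1]                            # dominant eigenvector
    phases = np.where(np.abs(e_N) > 0.0, np.angle(e_N), 0.0)
    return np.exp(1j * phases)

rng = np.random.default_rng(1)
N = 50
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
R = A @ A.conj().T                            # random Hermitian PSD matrix

d = dominant_eigenvector_matching(R)
lam = np.linalg.eigvalsh(R)
V_D = np.real(d.conj() @ R @ d)
bound = (lam[-1] + (N - 1) * lam[0]) / (lam[-1] * N)
print(V_D / (lam[-1] * N), ">=", bound)       # normalized value vs. the bound
```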
§ GREEDY STRATEGY

In this section, we present the second heuristic method, which is a greedy strategy with polynomial-time complexity (with respect to N). We also explore the possibility of our objective function possessing a property called string submodularity <cit.>, which allows our greedy method to exhibit a performance guarantee of (1-1/e). First, we describe the greedy method, and then explore the possibility of our objective function being string-submodular. Let g represent the solution from this greedy strategy, which is obtained iteratively as follows:
g(k+1) = arg max_{x∈Ω} [g(1), …, g(k), x]^H R_{k+1} [g(1), …, g(k), x], k=1,…,N-1,
where g(k) is the kth element of g with g(1) = 1. In the above expression, [a,b] represents a column vector with elements a and b, and R_{k+1} is the principal sub-matrix of R obtained by retaining the first k+1 rows and the first k+1 columns of R. This method is also described in Algorithm <ref>, and a code sketch of the update appears later in this section.

In other words, we optimize the unimodular sequence element-wise with a partitioned representation of the objective function as shown in (<ref>), which suggests that the computational complexity grows as 𝒪(N). Let this heuristic method be represented by 𝒢. The greedy method 𝒢 is known to exhibit a performance guarantee of (1-1/e) when the objective function possesses a property called string-submodularity <cit.>. To verify that our objective function has this property, we need to re-formulate our problem, which requires certain definitions as described below.

We define a set A^* that contains all possible unimodular strings (finite sequences) of length up to N, i.e.,
A^* = {(s_1,…,s_k) | s_i ∈Ω for i=1,…,k and k=1,…,N},
where Ω = { x ∈ℂ, |x| = 1 }. Notice that all the unimodular sequences of length N in the UQP problem are elements of the set A^*. For any given Hermitian matrix R of size N, let f:A^*→ℝ be a quadratic function defined as f(A) = A^H R_k A, where A = (s_1,…,s_k) ∈ A^* for any 1 ≤ k ≤ N, and R_k is the principal sub-matrix of R of size k× k as defined before. We represent string concatenation by ⊕, i.e., if A = (a_1,…,a_k) ∈ A^* and B = (b_1,…,b_r) ∈ A^* for any k+r ≤ N, then A⊕ B = (a_1,…,a_k,b_1,…,b_r). A string B is said to be contained in A, represented by B ≼ A, if there exists a D ∈ A^* such that A = B⊕ D. For any A,B ∈ A^* such that B ≼ A, a function f:A^* →ℝ is said to be string-submodular <cit.> if both of the following conditions are true:
* f is forward monotone, i.e., f(B) ≤ f(A).
* f has the diminishing-returns property, i.e., f(B⊕ (a)) - f(B) ≥ f(A ⊕ (a)) - f(A) for any a ∈Ω.

Now, going back to the original UQP problem, the UQP quadratic function may not be a string-submodular function for a given Hermitian matrix R. However, without loss of generality, we will show that we can transform R to R̃ (by manipulating the diagonal entries) such that the resulting quadratic function A_k^H R̃_k A_k for any 1≤ k≤ N and A_k ∈ A^* is string-submodular, where R̃_k is the principal sub-matrix of R̃ of size k× k as defined before. The following algorithm shows a method to transform R to such an R̃ that induces string-submodularity on the UQP problem.

* First define δ_1,…,δ_N as follows: δ_k = ∑_{i=1}^{k-1} |r_{ki}|, where k= 2,…,N, δ_1 = 0, and |r_{ki}| is the modulus of the entry in the kth row and the ith column of R.

* Define a vector with N entries (a_1,…,a_N), where a_k = 2δ_k + 4(∑_{i=1}^{N-k}δ_{k+i}) for k=1,…,N-1, and a_N = 2δ_N.
* Define R̃ as follows (this transform is sketched in code after the lemma below):
R̃ = R - Diag(R) + diag((a_1,…,a_N)),
where Diag(R) is a diagonal matrix with diagonal entries the same as those of R in the same order, and diag((a_1,…,a_N)) is a diagonal matrix with diagonal entries equal to the array (a_1,…,a_N) in the same order.

Since we only manipulate the diagonal entries of R to derive R̃, and the diagonal entries contribute only a constant to the objective over Ω^N, the following is true:
arg max_{A_N∈Ω^N} A_N^H R̃ A_N = arg max_{A_N ∈Ω^N} A_N^H R A_N.
For any given Hermitian matrix R and the derived R̃ (as shown above), let F:A^*→ℝ be defined as
F(A_k) = A_k^H R̃_k A_k,
where A_k ∈ A^*.

For a given R and F:A^*→ℝ as defined in (<ref>) with R̃ derived from R, and for any A,B ∈ A^* such that B ≼ A, with B = (b_1,…,b_k) and A = (b_1,…,b_k,…,b_l) (k≤ l ≤ N), the inequalities
4∑_{i=k+1}^l ∑_{j=1}^{N-i}δ_{i+j} ≤ F(A) - F(B) ≤ 4∑_{i=k+1}^l ( δ_i + ∑_{j=1}^{N-i}δ_{i+j} )
hold, where the δ_i for i=1,…,N are defined in (<ref>).

Let r̃_{ij} be the entry of R̃ in the ith row and jth column. From (<ref>), we can verify that r̃_{ii} = a_i for i=1,…,N, while r̃_{ij} = r_{ij} for i≠ j. Therefore, from the definitions of F and the a_i for i=1,…,N, we can verify that
F(A) - F(B) = A^H R̃_l A - B^H R̃_k B = ∑_{i=k+1}^l ( r̃_{ii} b_i b_i^* + b_i^*( ∑_{j=1}^{i-1} r_{ij} b_j ) + b_i ( ∑_{j=1}^{i-1} r_{ji} b_j^* ) ).
Therefore, from the definitions of the δ_i for i=1,…,N in (<ref>), it follows that
∑_{i=k+1}^l ( r̃_{ii} - 2( ∑_{j=1}^{i-1} |r_{ij}| ) ) ≤ F(A) - F(B) ≤ ∑_{i=k+1}^l ( r̃_{ii} + 2( ∑_{j=1}^{i-1} |r_{ij}| ) ),
and hence
4∑_{i=k+1}^l ∑_{j=1}^{N-i}δ_{i+j} ≤ F(A) - F(B) ≤ 4∑_{i=k+1}^l ( δ_i + ∑_{j=1}^{N-i}δ_{i+j} ).
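To make the constructions concrete, here is a short numpy sketch (our own illustration) of the diagonal transform above and the element-wise greedy update. For the update, note that with the first k entries fixed, the objective is maximized over the new phase in closed form by x = e^{j arg(c)} with c = R[k,:k]·g; since R and R̃ share their off-diagonal entries, the greedy iterates coincide for both, and R̃ enters only in the analysis.

```python
import numpy as np

def transform_R(R):
    """Diagonal transform R -> R~: keep the off-diagonal entries, replace the
    diagonal by a_k = 2*delta_k + 4*sum_{i>k} delta_i (so a_N = 2*delta_N)."""
    N = R.shape[0]
    delta = np.array([np.abs(R[k, :k]).sum() for k in range(N)])  # delta_1 = 0
    a = np.array([2 * delta[k] + 4 * delta[k + 1:].sum() for k in range(N)])
    return R - np.diag(np.diag(R)) + np.diag(a)

def greedy(R):
    """Element-wise greedy strategy G; g(1) = 1, and each new entry has the
    closed-form maximizer exp(j*arg(c)) with c = R[k, :k] @ g[:k]."""
    N = R.shape[0]
    g = np.ones(N, dtype=complex)
    for k in range(1, N):
        c = R[k, :k] @ g[:k]
        g[k] = np.exp(1j * np.angle(c)) if c != 0 else 1.0
    return g
```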
Since F in( <ref>) is string-submodular, given g is the solution from the heuristic 𝒢 and o being the optimal solution thatmaximizes F over all possible solutions of length N, from (<ref>) we know thatF(g) ≥(1 - 1/e) F(o)g^H Rg + Tr(R) - Tr(R) ≥(1 - 1/e) (o^H Ro + Tr(R) - Tr(R))g^H Rg ≥(1 - 1/e)o^H Ro + 1/e(Tr(R) -Tr(R) ) ≥(1 - 1/e)o^H Ro.We are interested in finding classes of Hermitian matrices that satisfy the requirement Tr(R) ≤ Tr(R), so that the above result holds true. Intuitively, it may seem that diagonally dominant matrices satisfy the above requirement. But it is easy to find a counter example R = [ 2 1; 1 2 ], where Tr(R) = 6 and Tr(R) = 4. Clearly, diagonal dominance is not sufficient to guarantee that the result in Theorem <ref> holds true. Thus, we introduce a new kind of diagonal dominance called M-dominance, which lets us finding a class of Hermitian matrices for which the result in Theorem <ref> holds true. A square matrix R = [r_ij]_N× N is said to be M-dominant if r_ii≥ M (∑_j=1, j≠ i^N |r_ij| ); ∀ i.If a Hermitian matrix R of size N is 2N-dominant, then Tr(R) ≤ Tr(R), where R is derived from R according to (<ref>), and g^H Rg≥( 1 - 1/e + 1/e(1/2N+1)) ( max_s∈Ω^Ns^H Rs),where g is the solution from the greedy method 𝒢.See Appendix.Clearly, if R is 2N-dominant, then the greedy method G provides a guarantee of (1-1/e). From the above proposition, it is clear that for such a matrix, G can provide a tighter performance bound of ( 1 - 1/e + 1/e(1/2N+1)). But this bound quickly converges to (1-1/e) as N →∞, as shown in Figure <ref>. As it turns out, the above result may not have much practical significance, as it requires the matrix R to be 2N-dominant, which narrows down the scope of the result. Moreover, as N increases, the bound looses any significance because the lower bound on the UQP objective value for any solution is much greater than the above derived bound. In other words, if R is 2N-dominant, the lower bound on the performance of any UQP solution s is given by s^H Rs≥(2N-1/2N+1) o^H Rs, where o is the optimal solution for the UQP. Clearly, for N > e (i.e., N ≥ 3), 2N-1/2N+1 > ( 1 - 1/e + 1/e(1/2N+1)). Thus, 2N-dominance requirement may bea strong condition, and further investigation may be required to look for a weaker condition that satisfies Tr(R) ≤ Tr(R).In summary, for applications with large N, the result in Proposition <ref> does not hold much significance, as can be seen from Figure <ref>.§ ROW-SWAP GREEDY STRATEGYIn this section, we present the third heuristic method to solve the UQP approximately. Let P_mn be a row-switching transformation such that P_mnA swaps the mth and nth rows of A and AP_mn swaps the mth and nth columns of A. Let P = {P_mn | m=1,…,N; n=1,…,N; m>n} be a collection of all such matrices that are exactly one row-swap operation away from the identity matrix of size N × N. In the UQP, if we replace R with PRP for any P∈P, and ifô = s∈Ω^Narg maxs^H PRPs,then we can relate ô to the optimal solution of the original UQP (<ref>) as follows: ô= Po. We also know that o^H Ro = ô^H Rô, i.e., for any row-switching matrix P∈P, the optimal objective value does not change if we replace R by PRP in the UQP. However, the objective value from the greedy strategy changes if R in the UQP is replaced by PRP. Thus, for a given UQP problem, we may be able to improve the performance of the greedy strategy by simply replacing R by PRP for some P∈P. 
We are interested in finding which matrix P among the N(N-1)/2 possible matrices in P gives the best performance from the greedy strategy (note that |P| = N(N-1)/2). We know that each of the above-mentioned N(N-1)/2 objective values (one for each UQP solved with R replaced by PRP) is upper bounded by the optimal objective value of the original UQP. Clearly, the best performance from the greedy strategy can be obtained by simply picking a matrix from the collection P ∪ {I_N} (I_N is the identity matrix of size N) that gives the maximum objective value. We call this method the row-swap greedy strategy. The motivation for using this strategy is that one of the (N(N-1)/2) + 1 row-switching matrices (including the identity matrix) moves us close to the global optimum. This method is also described in Algorithm <ref>. The objective value from the row-swap greedy method is given by
max_{P ∈ P ∪ {I_N}} V_P,
where V_P is the objective value from the greedy strategy applied to the UQP with R replaced by PRP. Clearly, the row-swap greedy strategy outperforms the "greedy strategy" and, provided Tr(R̃) ≤ Tr(R), provides a performance guarantee of (1-1/e), as max_{P ∈ P ∪ {I_N}} V_P ≥ V_{I_N} ≥ (1-1/e)(o^H R o), where o is the optimal solution to the UQP. We note that the computational complexity of the row-swap greedy strategy grows as O(N²(N-1)/2). It is quite possible, but unlikely (confirmed by our numerical study in Section <ref>), that the performance of the row-swap greedy method remains exactly the same as that of the standard greedy method, which happens when row-switching does not improve the performance. In this case, the optimum solution to (<ref>) is V_{I_N}.

§ APPLICATION EXAMPLES

In the case of a monostatic radar that transmits a linearly encoded burst of pulses (as described in <cit.>), the problem of optimizing the code elements that maximize the SNR boils down to a UQP, where SNR = |a|² c^H R c, R = M^{-1} ⊙ (pp^H)^* (⊙ represents the Hadamard product), M is an error covariance matrix (of size N) corresponding to a zero-mean Gaussian vector, a represents channel propagation and backscattering effects, c represents the code elements, and p is the temporal steering vector. See <cit.> for a detailed study of this application problem. From Theorem <ref>, we know that if Tr(R̃) ≤ Tr(R) for this application, then the greedy and the row-swap greedy methods are guaranteed to provide a performance of (1 - 1/e) of that of the optimal. In the case of a linear array of N antennas, the problem of estimating the steering vector in adaptive beam-forming boils down to a UQP as described in <cit.> <cit.>, where the objective function is c^H M^{-1} c, M is the sample covariance matrix (of size N), and c represents the steering vector; see <cit.> for details on this application problem. Again, we can verify that if Tr(R̃) ≤ Tr(R), where R = M^{-1}, then the greedy and the row-swap greedy methods provide a performance guarantee of (1 - 1/e), as the result in Theorem <ref> holds true for this case as well.
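For concreteness, the following is a minimal sketch of the two greedy variants discussed above. It assumes Ω is an M-PSK alphabet (an illustrative choice; the alphabet and algorithmic details used in this paper are those given in the cited references and in Algorithm <ref>):

```python
import numpy as np

def greedy_uqp(R, M=8):
    """Sketch of the greedy strategy: grow the unimodular sequence one
    symbol at a time, maximizing the partial objective s_k^H R_k s_k."""
    N = R.shape[0]
    omega = np.exp(2j * np.pi * np.arange(M) / M)   # M-PSK alphabet (assumed)
    s = np.zeros(0, dtype=complex)
    for k in range(N):
        Rk = R[:k + 1, :k + 1]                      # leading principal submatrix
        obj = lambda u: np.real(np.conj(np.append(s, u)) @ Rk @ np.append(s, u))
        s = np.append(s, max(omega, key=obj))
    return s

def row_swap_greedy_uqp(R, M=8):
    """Sketch of the row-swap greedy strategy: run the greedy heuristic on
    P R P for the identity and every single row/column swap P, keep the
    best value, and map the winner back through s -> P s."""
    N = R.shape[0]
    pairs = [None] + [(m, n) for m in range(N) for n in range(m)]
    best_val, best_s = -np.inf, None
    for p in pairs:
        perm = np.arange(N)
        if p is not None:
            perm[[p[0], p[1]]] = perm[[p[1], p[0]]]
        Rp = R[np.ix_(perm, perm)]                  # P R P
        s = greedy_uqp(Rp, M)
        val = np.real(np.conj(s) @ Rp @ s)
        if val > best_val:
            best_val, best_s = val, s[perm]         # undo the swap: P s
    return best_val, best_s
```

Since a row-swap P is an involution, evaluating the greedy result on PRP and permuting it back gives a valid candidate for the original UQP, which is why the best of the N(N-1)/2 + 1 runs can only improve on the plain greedy value.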
§ SIMULATION RESULTS

We test the performance of the heuristic method 𝒟 numerically for N=20, 50, 100. We generate 500 Hermitian and positive semi-definite matrices randomly for each N, and for each matrix we evaluate V_𝒟 (the value from the method 𝒟) and the performance bound derived in Proposition <ref>. To generate a random Hermitian and positive semi-definite matrix, we use the following algorithm: 1) first, we generate a random Hermitian matrix A using the function rherm, which is available at <cit.>; 2) second, we replace the eigenvalues of A with values drawn randomly (uniform distribution) from the interval [0,1000]. Figure <ref> shows plots of V_𝒟/(λ_N N) (the normalized objective function value) for each N along with the performance bounds for 𝒟; it also shows V_rand/(λ_N N), where V_rand is the objective function value when the solution is picked randomly from Ω^N. The numerical results clearly show that the method 𝒟 outperforms random selection by a good margin, and more importantly that the performance of 𝒟 is close to the optimal strategy: the objective function value from 𝒟 is at least 90% (on average) of the upper bound on the optimal value for each N. The results also show that the lower bound is much smaller than the value we obtain from the heuristic method for every sample. In our future study, we will tighten the performance bound for 𝒟, as the results clearly show that there is room for improvement. Figure <ref> shows the normalized objective function value from the greedy method, for each N, along with the bound (1-1/e), supporting the result from Theorem <ref>.

We now present numerical results to show the performance of the row-swap greedy method for N=10,20,30. We generate 100 Hermitian and positive semi-definite matrices. For each of these matrices, we solve the UQP via the row-swap greedy method and also evaluate the performance bound (1 - 1/e). Figure <ref> shows plots of the normalized objective function values from the row-swap greedy method along with the performance bounds. It is evident from these plots that this heuristic method performs much better than the lower bound suggests, and close to optimal.

We now compare the performance of the heuristic methods presented in this study against a standard benchmark method called semidefinite relaxation (SDR). The following is a brief description of SDR, as described in <cit.> (repeated here for completeness). We know that s^H R s = tr(s^H R s) = tr(R s s^H). Thus, the UQP can also be stated as follows:
maximize_S tr(RS) subject to S = s s^H, s ∈ Ω^N.
The rank constraint S = s s^H is what makes the UQP hard to solve exactly. If this constraint is relaxed, the resulting optimization problem is a semidefinite program, as shown below:
maximize_S tr(RS) subject to [S]_{k,k} = 1, k=1,…,N, and S positive semidefinite.
The above method is called semidefinite relaxation (SDR). The semidefinite program shown above can be solved in polynomial time by any interior point method <cit.>; we use a solver called cvx <cit.> to solve this SDR. The authors of <cit.> proposed a power method to solve the UQP approximately, which is an iterative approach described as follows:
s^{t+1} = e^{j arg(R s^t)},
where s^0 is initialized to a random solution in Ω^N. The authors also proved that the objective function value is guaranteed to increase with t. We now test the performance of our proposed heuristic methods (the dominant eigenvector matching heuristic, the greedy strategy, and the row-swap greedy strategy) against existing methods such as the SDR and the above-mentioned power method. For this purpose, we generate 100 Hermitian and positive-semidefinite matrices. For each of these matrices, we solve the UQP approximately with the above-mentioned heuristic methods.
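As a reference point for this comparison, the power-method iteration just described is only a few lines. The following is a minimal sketch; the random starting point and fixed iteration count are illustrative choices:

```python
import numpy as np

def power_method_uqp(R, iters=100, seed=0):
    """Sketch of the cited power method: s^{t+1} = exp(j * arg(R s^t)).
    Each iterate stays unimodular, and the cited work proves the
    objective value is non-decreasing in t."""
    rng = np.random.default_rng(seed)
    s = np.exp(2j * np.pi * rng.random(R.shape[0]))   # random unimodular start
    for _ in range(iters):
        s = np.exp(1j * np.angle(R @ s))              # elementwise phase of R s
    return s, np.real(np.conj(s) @ R @ s)
```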
Figure <ref> shows the cumulative distribution functions of the objective values and of the execution times of the heuristic methods for several values of N. It is evident from this figure that the proposed heuristic methods significantly outperform the standard benchmark SDR method. Specifically, the row-swap greedy and the dominant eigenvector-matching methods deliver the best performance among the methods considered here. However, the row-swap greedy method is also the most expensive (in terms of execution time) among the methods considered. We can arrange a few of the methods in a sequence of increasing (statistical) performance as follows: SDR method, greedy strategy, and then the dominant eigenvector matching heuristic or the row-swap greedy strategy. The figure also shows the cumulative distribution functions of the execution times for each N, which suggests that the heuristic methods considered here can be arranged in order of decreasing execution time as follows: row-swap greedy strategy, SDR method, greedy strategy, power method, and dominant eigenvector matching heuristic.

§ CONCLUDING REMARKS

We presented three new heuristic methods to solve the UQP problem approximately with polynomial-time complexity with respect to the size of the problem. The first heuristic method was based on the idea of matching the unimodular sequence with the dominant eigenvector of the Hermitian matrix in the UQP formulation. We provided a performance bound for this heuristic that depends on the eigenvalues of the Hermitian matrix in the UQP. The second heuristic method is a greedy strategy. We showed that the objective function derived from the Hermitian matrix possesses a property called string submodularity, which allows this greedy method to provide a performance guarantee of (1-1/e) under a mild trace condition (a consequence of string submodularity). We presented a third heuristic method called the row-swap greedy strategy, which is guaranteed to perform at least as well as the regular greedy strategy, but is computationally more intensive than the latter. Our numerical simulations demonstrated that each of the proposed heuristic methods outperforms the commonly used semidefinite relaxation (SDR) heuristic.

§ PROOF FOR PROPOSITION <REF>

We know that Tr(R̃) = ∑_{k=2}^{N} (4k-2) δ_k. If R is 2N-dominant, we can verify that
∑_{k=2}^{N} δ_k ≤ (1/4N) Tr(R).
Therefore, the following inequalities hold true:
Tr(R̃) = ∑_{k=2}^{N} (4k-2) δ_k ≤ (4N-2) ∑_{k=2}^{N} δ_k ≤ (4N-2)(1/4N) Tr(R) ≤ Tr(R).
For any given 2N-dominant Hermitian matrix R, if o is the optimal solution to the UQP and r_ij is the element of R at the ith row and jth column, we can verify the following:
o^H R o ≤ ∑_{i=1}^{N} ∑_{j=1}^{N} |r_ij| = ∑_{i=1}^{N} |r_ii| + ∑_{i=1}^{N} ∑_{j=1, j≠i}^{N} |r_ij| ≤ ∑_{i=1}^{N} |r_ii| + (1/2N) ∑_{i=1}^{N} |r_ii| = (1 + 1/2N) Tr(R).
Also, for any 2N-dominant Hermitian matrix R, from Remark <ref>, (<ref>), and (<ref>) we can derive the following:
Tr(R) - Tr(R̃) = Tr(R) - ∑_{k=2}^{N} (4k-2) δ_k ≥ Tr(R) - (4N-2) ∑_{k=2}^{N} δ_k ≥ Tr(R) - ((4N-2)/4N) Tr(R) = (1/2N) Tr(R) ≥ (1/(2N+1)) o^H R o.
From (<ref>) and the above result, we can obtain the following:
g^H R g ≥ (1 - 1/e) o^H R o + (1/e)( Tr(R) - Tr(R̃) ) ≥ (1 - 1/e) o^H R o + (1/e)(1/(2N+1)) o^H R o = ( 1 - 1/e + (1/e)(1/(2N+1)) ) o^H R o.
http://arxiv.org/abs/1703.08589v1
{ "authors": [ "Shankarachary Ragi", "Edwin K. P. Chong", "Hans D. Mittelmann" ], "categories": [ "math.OC", "cs.DS" ], "primary_category": "math.OC", "published": "20170324202152", "title": "Polynomial-Time Methods to Solve Unimodular Quadratic Programs With Performance Guarantees" }
Approximate moment dynamics for polynomial and trigonometric stochastic systems

Stochastic dynamical systems often contain nonlinearities which make it hard to compute probability density functions or statistical moments of these systems. For the moment computations, nonlinearities in the dynamics lead to unclosed moment dynamics; in particular, the time evolution of a moment of a specific order may depend both on moments of order higher than it and on some nonlinear function of other moments. Moment closure techniques are used to find an approximate, closed system of equations for the moment dynamics. In this work, we extend a moment closure technique based on derivative matching, originally proposed for polynomial stochastic systems with discrete states, to continuous-state stochastic differential equations with both polynomial and trigonometric nonlinearities. We validate the technique using two examples of nonlinear stochastic systems.

§ INTRODUCTION

Stochastic dynamical systems appear in numerous contexts in physics, engineering, finance, economics, and biology (see, e.g., <cit.>). In terms of mathematical characterization, the most useful quantity in the analysis of stochastic systems is the probability density function (pdf). However, the pdf is analytically intractable for most systems, so numerical techniques, such as Monte Carlo simulation, are employed to compute it <cit.>. Generally speaking, in the analysis of many stochastic systems, the goal is often less ambitious than computing the pdf, and knowing only a few lower order moments (mean, variance, etc.) might suffice.

If the system under consideration has polynomial dynamics, then the time evolution of various statistical moments can be computed by solving a system of coupled linear differential equations. However, a major drawback of using these moment equations is that, except for a few special cases such as systems with linear dynamics, the differential equations for moments up to a given order contain terms involving higher-order moments. This is known as the problem of moment closure. A typical way around this is to truncate the system of ODEs to a finite system of equations, and close the moment equations using some approximation of a given moment in terms of moments of lower order <cit.>. If the system under consideration involves nonlinearities such as trigonometric functions, which often arise in swing equations, then the differential equations describing the moments involve moments of nonlinear functions of the state. In such cases, the usage of moment closure schemes is rather limited.

For systems with polynomial dynamics, a number of moment closure techniques have been proposed to approximate a higher order moment in terms of lower order moments.
Some of these techniques make prior assumptions on the distribution of the system, while others attempt to find a linear or nonlinear approximation of the moment dynamics <cit.>. One method that falls in the latter category is the derivative matching based closure <cit.>. Here, a nonlinear approximation of a given moment is obtained in terms of lower order moments by matching the derivatives of the original moment dynamics with the proposed approximate dynamics at some initial point in time. This method was originally proposed for approximating moment dynamics of biochemical reaction systems, which are described via discrete states <cit.>. Given the attention received by this approach and its superior performance compared with several other moment closure schemes <cit.>, we apply it to close moments for nonlinear stochastic systems described via stochastic differential equations (SDEs). We further extend the method to include trigonometric functions in the dynamics. Our results show that the derivative matching technique provides a reasonably good approximation to the moment dynamics. The remainder of the paper is organized as follows. In Section II, we describe the moment equations for a stochastic differential equation and discuss the moment closure problem. In Section III, we discuss the derivative matching moment closure technique for SDEs and provide a proof for it. We illustrate the technique via examples in Section IV. The paper is concluded in Section V, along with a few directions of future research.

Notation: Vectors and matrices are denoted in bold. The sets of real numbers and non-negative integers are denoted by ℝ and ℕ, respectively. The expectation is represented by angled brackets, ⟨·⟩. 𝕀 is used to denote the identity matrix.

§ MOMENT DYNAMICS OF AN SDE

Consider an n-dimensional stochastic differential equation (SDE) represented as
dx = f(x,t) dt + g(x,t) dw_t,
where x = [x_1, x_2, …, x_n]^⊤ ∈ ℝ^n is the state vector; f(x,t) = [f_1(x,t), f_2(x,t), …, f_n(x,t)]^⊤ : ℝ^n × [0,∞) → ℝ^n and g(x,t) = [g_1(x,t), g_2(x,t), …, g_n(x,t)]^⊤ : ℝ^n × [0,∞) → ℝ^n describe the system dynamics; and w_t is the n-dimensional Wiener process satisfying
⟨dw_t⟩ = 0, ⟨dw_t dw_t^⊤⟩ = 𝕀 dt,
where 𝕀 is an n×n identity matrix. We further assume that sufficient mathematical requirements for the existence of the solution to (<ref>) are satisfied (see, e.g., <cit.>).

The moments of an SDE can be obtained using the well-known Itô formula <cit.>. This formula states that for any smooth scalar-valued function h(x(t)), the increment is given by
dh(x(t)) = (∂h(x(t))/∂x)( f(x(t)) dt + g(x(t)) dw(t) ) + (1/2) Tr( (∂²h(x(t))/∂x²) g(x(t)) g(x(t))^⊤ ) dt.
Taking expectations and dividing both sides by dt gives the following differential equation
d⟨h(x(t))⟩/dt = ⟨ (∂h(x(t))/∂x) f(x(t)) + (1/2) Tr( (∂²h(x(t))/∂x²) g(x(t)) g(x(t))^⊤ ) ⟩.

Let h(x) be a monomial of the form
h(x) = x_1^{m_1} x_2^{m_2} ⋯ x_n^{m_n} =: x^[m],
where m = [m_1, m_2, …, m_n]^⊤ ∈ ℕ^n; then ⟨h(x)⟩ represents a moment of x. For a given m, we denote the moment by μ_m = ⟨x^[m]⟩. Using (<ref>), the dynamics of μ_m evolves according to
dμ_m/dt = ∑_{i=1}^{n} ⟨ f_i ∂x^[m]/∂x_i ⟩ + (1/2) ∑_{i=1}^{n} ∑_{j=1}^{n} ⟨ (gg^⊤)_{ij} ∂²x^[m]/∂x_i∂x_j ⟩.
The sum ∑_{j=1}^{n} m_j is referred to as the order of the moment. As long as f(x,t) and g(x,t) are linear in x, a moment of a certain order is a linear combination of other moments of the same or smaller order <cit.>.
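The moment equation above can be generated symbolically for a scalar SDE; the following is a minimal sketch. The cubic drift f = -x³ with g = 1 used here is an illustrative choice, revisited as an example below:

```python
import sympy as sp

x = sp.symbols("x")
f = -x**3   # drift of dx = f dt + g dw_t (illustrative choice)
g = 1       # diffusion

def moment_rhs(m):
    """d<x^m>/dt from Ito's formula applied to h = x^m:
    <f h' + (1/2) g^2 h''>, returned as a polynomial in x whose
    monomials are the moments the equation couples to."""
    h = x**m
    return sp.expand(f * sp.diff(h, x) + sp.Rational(1, 2) * g**2 * sp.diff(h, x, 2))

for m in range(1, 4):
    print(f"d<x^{m}>/dt couples to:", moment_rhs(m))
# every right-hand side contains x**(m+2), so the hierarchy never closes
```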
In the linear case, if we construct a vector μ consisting of all moments of x up to order M, its time evolution is captured by the solution of the following system of linear differential equations:
dμ/dt = a + Aμ.
Here, μ = [μ_{m_1}, μ_{m_2}, …, μ_{m_k}]^⊤, with m_p ∈ ℕ^n for all p ∈ {1,2,…,k}, is a vector of k elements. The vector a and the matrix A are determined by the form of f(x,t) and g(x,t). Under some mild assumptions, standard tools from linear systems theory can be used to obtain the solution to (<ref>), and it is given by
μ(t) = -A^{-1}a + e^{At}( μ(0) + A^{-1}a ).

It is easy to see that there are (m+n-1)!/(m!(n-1)!) moments of order m. Therefore, the dimension of the vector μ in (<ref>) is given by
k = ∑_{m=1}^{M} (m+n-1)!/(m!(n-1)!) = (M+n)!/(M!n!) - 1.
Without loss of generality, we can assume that the elements in μ are stacked in graded lexicographic order. That is, the first n elements in μ are the moments of first order, the next n(n+1)/2 elements are moments of second order, and so on. ▪

In general, when f(x,t) and g(x,t) are polynomials in x, the time derivative of a moment might depend on moments of order higher than it. To see this, consider the following one-dimensional cubic drift
dx = -x³ dt + dw_t.
The time evolution of a moment of order m ≥ 1 is given by
d⟨x^m⟩/dt = ⟨ (∂x^m/∂x)(-x³) ⟩ + (1/2)⟨ ∂²x^m/∂x² ⟩ = -m⟨x^{m+2}⟩ + (m(m-1)/2)⟨x^{m-2}⟩,
which clearly depends upon the (m+2)th moment. In other words, the moment dynamics are not closed. Thus, for systems with nonlinear dynamics, the moment equations in (<ref>) need to be modified to the general form
dμ/dt = a + Aμ + Bμ̄,
where μ̄ ∈ ℝ^r is a vector of moments of order greater than or equal to M+1. The solution to (<ref>) is generally obtained by approximating the higher order moments in μ̄ as, possibly nonlinear, functions of the lower order moments in μ. The approximation might be made by assuming some underlying distribution, or by applying some other physical principle <cit.>. Essentially, the moment closure methods translate to approximating (<ref>) by a system of equations
dν/dt = a + Aν + Bφ(ν), ν = [ν_{m_1}, ν_{m_2}, …, ν_{m_k}]^⊤,
where the function φ : ℝ^k → ℝ^r is chosen such that ν(t) ≈ μ(t). Here, M is called the order of truncation.

If the functions f(x,t) are not polynomials, then it may not be possible to obtain a convenient form like (<ref>) for the moments. For instance, consider the following differential equation
dx_1 = x_2 dt, dx_2 = -sin x_1 dt + dw_t.
Here, the time evolution of ⟨x_2⟩ is given by
d⟨x_2⟩/dt = -⟨sin x_1⟩,
which depends on the nonlinear moment ⟨sin x_1⟩. Although (<ref>) can be used to write the dynamics of ⟨sin x_1⟩, it will further depend on other trigonometric moments. In Section IV, we will consider a system of this type and perform moment closure. In the next section, we first discuss the derivative matching closure scheme for SDEs.

§ DERIVATIVE MATCHING MOMENT CLOSURE TECHNIQUE FOR SDES

In this section, we describe the derivative matching based moment closure technique for SDEs. As the name suggests, the closure is performed by matching time derivatives of μ(t) and ν(t). This technique was originally proposed for approximating moment dynamics of discrete-state continuous-time systems <cit.>. The derivative matching technique attempts to approximate μ(t) by some ν(t) such that a sufficiently large number of their derivatives match point-wise. The idea is that if the values of these two vectors at some time t_0 are equal, and their derivatives up to a certain order also match, then they will closely follow each other for some time interval after t_0.
More precisely, for each δ > 0 and N ∈ ℕ, there exists T > t_0 such that if
ν(t_0) = μ(t_0) and d^iν(t)/dt^i |_{t=t_0} = d^iμ(t)/dt^i |_{t=t_0}
hold for a t_0 ∈ [0,∞) and i = 1,2,…,N, then
‖μ(t) - ν(t)‖ ≤ δ, for all t ∈ [t_0, T].
Further, one can obtain the bound in (<ref>) for the interval [t_0, ∞) under some appropriate asymptotic conditions <cit.>.

To construct the closed moment dynamics, we follow steps similar to <cit.>. Consider a vector m ∈ ℕ^n such that μ_m is an element of μ̄. We approximate μ_m as a function of the elements of the vector μ. Denoting the corresponding approximation of μ_m in φ(ν) by φ_m(ν), the following separable form is considered
φ_m(ν) = ∏_{p=1}^{k} ( μ_{m_p} )^{α_p},
where the α_p are appropriately chosen constants. Generally speaking, (<ref>) is a strong requirement and it is not possible to find coefficients α_p such that it holds for every initial condition. We therefore consider a relaxation of this by seeking α_p such that the derivatives match for a deterministic initial condition x(t_0) = x_0. Next, we state a theorem showing that the coefficients α_p can be obtained by solving a system of linear equations. Before that, we define a short-hand notation that is used in the theorem. For two vectors m̂ = [m̂_1, m̂_2, …, m̂_n]^⊤ ∈ ℕ^n and m̆ = [m̆_1, m̆_2, …, m̆_n]^⊤ ∈ ℕ^n, we have the following notation:
C_{m̆}^{m̂} := C_{m̆_1}^{m̂_1} C_{m̆_2}^{m̂_2} ⋯ C_{m̆_n}^{m̂_n}, where C_l^h = h!/(l!(h-l)!) if h ≥ l, and 0 if h < l.

For each element μ_m of the vector μ̄, assume that the corresponding moment closure function φ_m(ν) in the vector φ(ν) is chosen according to (<ref>), with the coefficients α_p chosen as the unique solution to the following system of linear equations
C_{[m_s]}^{[m]} = ∑_{p=1}^{k} α_p C_{[m_s]}^{[m_p]}, s = 1,2,…,k.
Then, for every initial condition x(t_0) = x_0 ∈ ℝ^n, we have that
ν(t_0) = μ(t_0), dν(t)/dt|_{t=t_0} = dμ(t)/dt|_{t=t_0}, d²ν(t)/dt²|_{t=t_0} = d²μ(t)/dt²|_{t=t_0}.

It is sufficient to prove that for each element μ_m of μ̄ and its corresponding moment closure function φ_m(ν), we have the following:
μ_m(t_0) = φ_m(ν(t_0)),
dμ_m(t)/dt|_{t=t_0} = dφ_m(ν(t))/dt|_{t=t_0}.
We first show that (<ref>) holds. Since the initial condition is x(t_0) = x_0 with probability one, we have
μ_m(t_0) = x_0^[m], φ_m(ν(t_0)) = ∏_{p=1}^{k} ( x_0^[m_p] )^{α_p} = x_0^[∑_{p=1}^{k} α_p m_p].
Recall from Remark <ref> that, without loss of generality, the moments in the vector μ can be assumed to be stacked in graded lexicographic order. Thus, the first n elements of μ are moments of order one. This allows us to write
m = [ C_{[e_1]}^{[m]}, C_{[e_2]}^{[m]}, …, C_{[e_n]}^{[m]} ]^⊤, m_p = [ C_{[e_1]}^{[m_p]}, C_{[e_2]}^{[m_p]}, …, C_{[e_n]}^{[m_p]} ]^⊤, for all p = 1,2,…,k,
where the vector e_i ∈ ℕ^n, i = 1,2,…,n, has 1 at the ith position and zeros elsewhere. Using these relations, and (<ref>) for s = 1,2,…,n, we obtain
m = ∑_{p=1}^{k} α_p m_p.
Substituting this result in (<ref>) proves (<ref>).

Next, we prove that (<ref>) holds. For this part, we write x_0 = [x_{01}, x_{02}, …, x_{0n}]^⊤ ∈ ℝ^n. Consider
dφ_m(ν(t))/dt |_{t=t_0} = φ_m(ν(t_0)) ∑_{p=1}^{k} α_p ( dμ_{m_p}(t)/dt |_{t=t_0} ) / μ_{m_p}(t_0) = ∑_{p=1}^{k} α_p x_0^[m - m_p] dμ_{m_p}(t)/dt |_{t=t_0}.
Writing m_p = [m_{p1}, m_{p2}, …, m_{pn}]^⊤ ∈ ℕ^n, we can use (<ref>) to obtain the expression for dμ_{m_p}(t)/dt = d⟨x^[m_p]⟩/dt. This enables us to write
dφ_m(ν(t))/dt |_{t=t_0} = ∑_{p=1}^{k} α_p x_0^[m] ∑_{i=1}^{n} (m_{pi}/x_{0i}) f_i(x_0,t_0) + (1/2) ∑_{p=1}^{k} α_p x_0^[m] ∑_{i=1}^{n} (m_{pi}(m_{pi}-1)/x_{0i}²)( g(x_0,t_0) g^⊤(x_0,t_0) )_{ii} + (1/2) ∑_{p=1}^{k} α_p x_0^[m] ∑_{i,j=1, i≠j}^{n} (m_{pi} m_{pj}/(x_{0i} x_{0j}))( g(x_0,t_0) g^⊤(x_0,t_0) )_{ij}
= x_0^[m] ∑_{i=1}^{n} ( ∑_{p=1}^{k} α_p m_{pi}/x_{0i} ) f_i(x_0,t_0) + (1/2) x_0^[m] ∑_{i=1}^{n} ( ∑_{p=1}^{k} α_p m_{pi}(m_{pi}-1)/x_{0i}² )( g(x_0,t_0) g^⊤(x_0,t_0) )_{ii} + (1/2) x_0^[m] ∑_{i,j=1, i≠j}^{n} ( ∑_{p=1}^{k} α_p m_{pi} m_{pj}/(x_{0i} x_{0j}) )( g(x_0,t_0) g^⊤(x_0,t_0) )_{ij}.
Comparing this with the expression for dμ_m/dt computed at t = t_0, which can be calculated from (<ref>) with m = [m_1, m_2, …, m_n]^⊤ ∈ ℕ^n, we require:
∑_{p=1}^{k} α_p m_{pi} = m_i,
∑_{p=1}^{k} α_p m_{pi}(m_{pi}-1)/2 = m_i(m_i-1)/2,
∑_{p=1}^{k} α_p m_{pi} m_{pj}/2 = m_i m_j/2.
Note that (<ref>) is nothing but the relation in (<ref>) written element-wise. Further, we had assumed that the vector μ has its elements stacked in graded lexicographic order (Remark 1). In particular, the moments of second order start at the (n+1)th element. In that case, the equality in (<ref>) follows when the relations in (<ref>)–(<ref>) are used in (<ref>) for s = n+1, 2n+1, ⋯, n²+1 (i.e., the second order moments with one of the exponents equal to 2 and the rest zero). Likewise, (<ref>) holds for the rest of the second order moments, wherein two exponents are 1 and the rest are zero.

It is worth noting that when the derivative-matching technique is applied to a discrete-state process, there is an error in matching the first two derivatives <cit.>. However, in the case of a continuous-state stochastic differential equation, the first two derivatives are matched exactly. Another important difference between discrete-state and continuous-state systems is that in the latter, the first two derivatives are matched exactly regardless of the form of f and g, whereas in the former one needs to assume a polynomial form for the rates at which the states change. ▪

Although we do not have a proof, the solution to the system of linear equations in (<ref>) results in integer values of the coefficients α_p for all examples we have solved thus far. ▪

§ NUMERICAL VALIDATION

In this section, we illustrate the derivative matching technique on two examples. The first example is a Van der Pol oscillator, which frequently arises in many engineering applications <cit.>. In this case, the system dynamics consist of polynomial functions of the state vector. The second example is a swinging pendulum subject to white noise. In this example, the dynamics consist of polynomial functions in one state and a trigonometric function in another state. We show that the derivative matching technique can be straightforwardly applied to the second example.

§.§ Van der Pol oscillator

In the deterministic setting, the Van der Pol oscillator is governed by the following second-order differential equation
d²x/dt² - ϵ(1 - x²) dx/dt + ω_n² x = A cos(ω_g t),
where ϵ is the bifurcation parameter, ω_n is the natural frequency, ω_g is the force frequency, and A is the force amplitude. A possible stochastic description of the oscillator is to assume that the force is noisy, i.e., the actuators that apply the force also add a zero-mean noise to the system. By choosing x_1 = x and x_2 = dx/dt, the oscillator dynamics can be written as
dx_1 = x_2 dt,
dx_2 = ( ϵ(1 - x_1²)x_2 - ω_n² x_1 + A cos(ω_g t) ) dt + A dw_t.
Suppose we are interested in the dynamics of ⟨x_1⟩. To this end, we write the moment dynamics of this oscillator up to order two:
d⟨x_1⟩/dt = ⟨x_2⟩,
d⟨x_2⟩/dt = ϵ( ⟨x_2⟩ - ⟨x_1²x_2⟩ ) - ω_n²⟨x_1⟩ + A cos(ω_g t),
d⟨x_1²⟩/dt = 2⟨x_1x_2⟩,
d⟨x_2²⟩/dt = 2ϵ( ⟨x_2²⟩ - ⟨x_1²x_2²⟩ ) - 2ω_n²⟨x_1x_2⟩ + 2A⟨x_2⟩cos(ω_g t) + A²,
d⟨x_1x_2⟩/dt = ⟨x_1²x_2⟩... wait
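The linear system in the theorem is small and can be solved directly. The following minimal sketch (assuming the two-state case with known moments up to order two) reproduces the closure exponents used next for ⟨x_1²x_2⟩:

```python
import numpy as np
from math import comb

def C(mhat, mbreve):
    """Product of binomial coefficients C(mhat_i, mbreve_i); comb() returns 0
    whenever mhat_i < mbreve_i, matching the convention in the text."""
    return np.prod([comb(h, l) for h, l in zip(mhat, mbreve)])

def derivative_matching_alpha(m, known):
    """Solve C^[m]_[m_s] = sum_p alpha_p C^[m_p]_[m_s] for the exponents alpha."""
    A = np.array([[C(mp, ms) for mp in known] for ms in known], dtype=float)
    b = np.array([C(m, ms) for ms in known], dtype=float)
    return np.linalg.solve(A, b)

known = [(1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]  # <x1>, <x2>, <x1^2>, <x1 x2>, <x2^2>
print(derivative_matching_alpha((2, 1), known))    # -> [-2. -1.  1.  2.  0.]
# i.e. <x1^2 x2> ~ <x1^2> <x1 x2>^2 / (<x1>^2 <x2>), as derived below
```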
In terms of the notation in (<ref>), we have μ = [⟨x_1⟩, ⟨x_2⟩, ⟨x_1²⟩, ⟨x_1x_2⟩, ⟨x_2²⟩]^⊤ and μ̄ = [⟨x_1²x_2⟩, ⟨x_1²x_2²⟩, ⟨x_1³x_2⟩]^⊤. Applying the derivative matching closure as described in Section III, we seek approximations of each element of μ̄ in terms of those of μ as in (<ref>). Solving (<ref>) for each of these yields the following approximations
⟨x_1²x_2⟩ ≈ ⟨x_1²⟩⟨x_1x_2⟩² / ( ⟨x_1⟩²⟨x_2⟩ ),
⟨x_1²x_2²⟩ ≈ ⟨x_1²⟩⟨x_1x_2⟩⁴⟨x_2²⟩ / ( ⟨x_1⟩⁴⟨x_2⟩⁴ ),
⟨x_1³x_2⟩ ≈ ⟨x_1²⟩³⟨x_1x_2⟩³ / ( ⟨x_1⟩⁶⟨x_2⟩² ).
Using the approximations from (<ref>) in (<ref>), we obtain a closed set of moment equations. Fig. 1 compares the solution of ⟨x_1⟩ with that of numerical simulations. Our results show an almost perfect match between the system with closure approximation and the numerical simulations. A caveat of the proposed derivative matching approximation is that, as in (<ref>), the means of the states appear in the denominator. Since the oscillator repeatedly crosses zero, it is possible that some of these moments approach zero. To avoid this, we add a small term δ to the denominators of the approximations.

§.§ Pendulum Swing

In the deterministic setting, the dynamics of a simple pendulum (see Fig. <ref>) are given by
d²θ/dt² + (k/m) dθ/dt + (g/l) sin θ = 0,
where g is the acceleration due to gravity, l is the length of the pendulum, and θ is the angular displacement <cit.>. We also consider friction in our system, with friction constant k. In the stochastic formulation, we consider that the dynamics are affected by white noise that arises due to random interactions of the pendulum with air molecules. This term scales inversely with the mass m of the pendulum, i.e., the interaction with air particles is negligible for a large mass. By choosing x_1 = θ and x_2 = dθ/dt, the dynamics of the pendulum can be represented as
dx_1 = x_2 dt, dx_2 = ( -(k/m)x_2 - (g/l)sin x_1 ) dt + (1/m) dw_t.
Here we have the trigonometric function sin x_1, which gives rise to nonlinear behavior. To illustrate how derivative matching closure can be used in this context, we approximate ⟨sin x_1⟩ using (<ref>). To this end, we use Euler's relation to write
sin x_1 = ( e^{jx_1} - e^{-jx_1} ) / (2j).
With a change of variables, we can use the Itô formula to transform (<ref>) into the following
d e^{jx_1} = j e^{jx_1} x_2 dt, d e^{-jx_1} = -j e^{-jx_1} x_2 dt,
dx_2 = ( -(k/m)x_2 + (j/2)(g/l) e^{jx_1} - (j/2)(g/l) e^{-jx_1} ) dt + (1/m) dw_t.
For these dynamics, we can write the moment dynamics, with moments of x_2 appearing in the form of monomials and moments of x_1 appearing in the form of complex exponentials, as below:
d⟨e^{jx_1}⟩/dt = j⟨e^{jx_1}x_2⟩,
d⟨e^{-jx_1}⟩/dt = -j⟨e^{-jx_1}x_2⟩,
d⟨x_2⟩/dt = -(k/m)⟨x_2⟩ + (j/2)(g/l)⟨e^{jx_1}⟩ - (j/2)(g/l)⟨e^{-jx_1}⟩,
d⟨e^{jx_1}x_2⟩/dt = j⟨e^{jx_1}x_2²⟩ - (k/m)⟨e^{jx_1}x_2⟩ + (j/2)(g/l)⟨e^{2jx_1}⟩ - (j/2)(g/l),
d⟨e^{-jx_1}x_2⟩/dt = -j⟨e^{-jx_1}x_2²⟩ - (k/m)⟨e^{-jx_1}x_2⟩ - (j/2)(g/l)⟨e^{-2jx_1}⟩ + (j/2)(g/l),
d⟨x_2²⟩/dt = -(2k/m)⟨x_2²⟩ + j(g/l)⟨e^{jx_1}x_2⟩ - j(g/l)⟨e^{-jx_1}x_2⟩ + 1/m²,
d⟨e^{2jx_1}⟩/dt = 2j⟨e^{2jx_1}x_2⟩,
d⟨e^{-2jx_1}⟩/dt = -2j⟨e^{-2jx_1}x_2⟩.
One way to interpret the above mixed complex exponential and monomial moment dynamics is to note that, since all moments of x_2 are generated by taking expectations of the monomials 1, x_2, x_2², …, we can consider the terms e^{jx_1} and e^{-jx_1} as two different variables. The mixed moments can then be generated by taking expectations of the products of the complex exponentials 1, e^{-jx_1}, e^{-2jx_1}, … (or 1, e^{jx_1}, e^{2jx_1}, …) with the monomials 1, x_2, x_2², ….
The order of the mixed moment can be thought of as the sum of the powers of the monomials and complex exponentials. Given the above interpretation, the moment dynamics in (<ref>) are not closed. In terms of the notation in (<ref>), we have μ = [⟨e^{jx_1}⟩, ⟨e^{-jx_1}⟩, ⟨x_2⟩, …, ⟨x_2²⟩]^⊤ and μ̄ = [⟨e^{jx_1}x_2²⟩, ⟨e^{-jx_1}x_2²⟩, ⟨e^{2jx_1}x_2⟩, ⟨e^{-2jx_1}x_2⟩]^⊤. An important point to note is that since e^{-jx_1}e^{jx_1} = 1, there is no need to consider their cross-moments. Thus, we only consider cross moments of e^{-jx_1} with x_2, and of e^{jx_1} with x_2.

Next, we present different closure schemes for approximating the moments in μ̄ as nonlinear functions of moments up to order 2. As an example, consider the third-order moment ⟨e^{jx_1}x_2²⟩. The aim of closure is to approximate this moment as
⟨e^{jx_1}x_2²⟩ ≈ ⟨e^{jx_1}⟩^{α_1} ⟨e^{jx_1}x_2⟩^{α_2} ⟨x_2⟩^{α_3} ⟨x_2²⟩^{α_4}.
Performing the derivative matching approach as explained in Section III results in
⟨e^{jx_1}x_2²⟩ ≈ ( ⟨x_2²⟩/⟨e^{jx_1}⟩ )( ⟨e^{jx_1}x_2⟩²/⟨x_2⟩² ).
With a similar approach we can approximate the other moments in the vector μ̄:
⟨e^{-jx_1}x_2²⟩ ≈ ( ⟨x_2²⟩/⟨e^{-jx_1}⟩ )( ⟨e^{-jx_1}x_2⟩²/⟨x_2⟩² ),
⟨e^{2jx_1}x_2⟩ ≈ ( ⟨e^{2jx_1}⟩/⟨x_2⟩ )( ⟨e^{jx_1}x_2⟩²/⟨e^{jx_1}⟩² ),
⟨e^{-2jx_1}x_2⟩ ≈ ( ⟨e^{-2jx_1}⟩/⟨x_2⟩ )( ⟨e^{-jx_1}x_2⟩²/⟨e^{-jx_1}⟩² ).
Another approximation can be obtained by assuming that the correlation between two random variables is small due to the presence of noise. Hence the third order moment ⟨e^{jx_1}x_2²⟩ can be approximated as
⟨e^{jx_1}x_2²⟩ ≈ ⟨e^{jx_1}⟩⟨x_2²⟩.
Similarly, the rest of the moments in μ̄ can be approximated as
⟨e^{-jx_1}x_2²⟩ ≈ ⟨e^{-jx_1}⟩⟨x_2²⟩, ⟨e^{2jx_1}x_2⟩ ≈ ⟨e^{2jx_1}⟩⟨x_2⟩, ⟨e^{-2jx_1}x_2⟩ ≈ ⟨e^{-2jx_1}⟩⟨x_2⟩.
The results of the closure approximations are compared to numerical solutions in Fig. 3. The results show that derivative matching provides a reasonably accurate approximation of the moment dynamics.
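The numerical solutions used for such comparisons can be generated by direct simulation of (<ref>). Below is a minimal Euler-Maruyama sketch; all parameter values are illustrative placeholders, not the values used for Fig. 3:

```python
import numpy as np

def simulate_pendulum(k=0.5, m=1.0, g=9.81, l=1.0, T=10.0, dt=1e-4,
                      n_paths=2000, x1_0=0.5, x2_0=0.0, seed=1):
    """Euler-Maruyama simulation of the stochastic pendulum
    dx1 = x2 dt, dx2 = (-(k/m) x2 - (g/l) sin x1) dt + (1/m) dW,
    used to benchmark the closed moment equations."""
    rng = np.random.default_rng(seed)
    x1 = np.full(n_paths, x1_0)
    x2 = np.full(n_paths, x2_0)
    for _ in range(int(T / dt)):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        x1_new = x1 + x2 * dt
        x2 = x2 + (-(k / m) * x2 - (g / l) * np.sin(x1)) * dt + dW / m
        x1 = x1_new
    # sample estimates of the moments entering the closure
    return np.mean(np.exp(1j * x1)), np.mean(x2), np.mean(x2**2)
```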
§ CONCLUSION

In this paper, we extended the derivative matching based moment approximation method to stochastic dynamical systems with continuous state. We further illustrated that the method is not limited to polynomial dynamics, and that it can be used to study systems that contain trigonometric functions. It would be interesting to extend the technique to other forms of mixed functions, and also to include differential algebraic inequalities. This would open possibilities of using moment closure techniques to study a variety of nonlinearities, with potential applications in power systems analysis. In addition, while in this paper we considered only continuous dynamics modeled through SDEs, many models contain both continuous dynamics and random discrete events <cit.>. Deriving derivative matching closures for such hybrid systems will be another avenue of research. Finally, we note that despite the promising results obtained by closure approximations, generally there are no guarantees on the errors of the closure approximation. Future work will carry out a detailed error analysis using other methods of finding bounds on moments <cit.>.

§ ACKNOWLEDGMENT

AS is supported by the National Science Foundation Grant DMS-1312926, the University of Delaware Research Foundation (UDRF) and Oak Ridge Associated Universities (ORAU).

http://arxiv.org/abs/1703.08841v1
{ "authors": [ "Khem Raj Ghusinga", "Mohammad Soltani", "Andrew Lamperski", "Sairaj Dhople", "Abhyudai Singh" ], "categories": [ "math.OC", "math.PR" ], "primary_category": "math.OC", "published": "20170326162730", "title": "Approximate moment dynamics for polynomial and trigonometric stochastic systems" }
Dynamics of trapped atoms around an optical nanofiber probed through polarimetry

Pablo Solano, Fredrik K. Fatemi, Luis A. Orozco, and S. L. Rolston
Joint Quantum Institute, Department of Physics, University of Maryland and NIST, College Park, MD 20742, USA. Army Research Laboratory, Adelphi, MD 20783, USA.

The evanescent field outside an optical nanofiber (ONF) can create optical traps for neutral atoms. We present a non-destructive method to characterize such trapping potentials. An off-resonance linearly polarized probe beam that propagates through the ONF experiences a slow axis of polarization produced by trapped atoms on opposite sides along the ONF. The transverse atomic motion is imprinted onto the probe polarization through the changing atomic index of refraction. By applying a transient impulse, we measure a time-dependent polarization rotation of the probe beam that provides both a rapid and non-destructive measurement of the optical trapping frequencies.

Nano-optical waveguides allow efficient ways to couple trapped atoms to propagating photons, a crucial element in the development of quantum technologies <cit.>. Optical nanofibers (ONFs) <cit.> have been shown to be a particularly versatile platform in this context by enabling quantum memories <cit.>, switches <cit.>, diodes <cit.>, and reflectors <cit.>. These examples show the integration of photonic and atomic systems. An ONF consists of a single-mode optical fiber heated and pulled to create a tapered profile. The tapers can adiabatically guide the propagating light in and out of a sub-wavelength diameter waist with less than 0.1% loss <cit.>. Because the nanofiber radius is smaller than the wavelength of the propagating mode, most of the field is outside the dielectric body as an evanescent field <cit.>. This field allows coupling of atoms near the ONF surface to the guided mode. The tight confinement of the propagating mode enables significant atom-light coupling. The large spatial gradient of the evanescent field enables an optical dipole trap for atoms with two different wavelengths of light, one detuned above atomic resonance (blue-detuned) to repel the atoms from the surface, and the other detuned below resonance (red-detuned) for confinement. Such traps are an effective tool to confine atoms close to the ONF surface for millisecond time-scales with low optical powers (≈5 mW), creating a robust platform for coupling propagating photons to atoms <cit.>.

A typical ONF dipole trap, with retro-reflection of the red-detuned light, creates two one-dimensional arrays of atoms, one on each side of the ONF, as sketched in Fig. <ref> (a). Characterizing the number of trapped atoms and the trap parameters is necessary for future applications of this platform. The number of trapped atoms can be measured on resonance <cit.> or off resonance <cit.>, i.e., with destructive and dispersive measurements, respectively. Parametric heating to find vibrational frequencies has also been applied to ONFs <cit.>, but it is destructive and constitutes a serial measurement of the trap frequencies. In this letter we present a method to non-destructively characterize the trapping potential of an ONF dipole trap.
We propagate a weak, off-resonance probe beam through the ONF that is linearly polarized and tilted 45^∘ relative to the azimuthal axis defined by the trapping potential. The probe experiences a modified refractive index with a fast axis and a slow axis due to the presence of trapped atoms. This effective birefringence rotates the polarization of the probe as a function of the position of the atoms. Turning on the probe beam imparts a momentum kick to the trapped atoms so that they oscillate at the radial and azimuthal trapping frequencies. Detecting the time-dependent polarization change of the probe gives us a direct and non-destructive measurement of the motion and transverse frequencies of the trapping potential. By probing the atomic motion directly, the spectrum of the system response can be analyzed in a single time-domain measurement up to the bandwidth of the detection.Because the evanescent field decay constant is proportional to its wavelength, the red (blue) detuned light creates a longer (shorter) range attractive (repulsive) potential. Combining both red and blue detuned light, the atoms experience a potential energy minimum a fraction of a wavelength away from the ONF surface. This two-color dipole trap provides radial confinement for the atoms. Two counter-propagating red-detuned beams in a standing-wave configuration provide confinement along the optical nanofiber in a one-dimensional lattice. Azimuthal confinement is achieved by correctly choosing the polarization of the trapping beams. At the ONF waist, linearly-polarized light becomes quasi-linearly polarized, breaking the azimuthal symmetry of the intensity profile of the propagating field. Aligning the polarization axis of the red-detuned beam orthogonal to the blue detuned one provides azimuthal confinement for the atoms (See Fig. <ref> (a) and (b)).We create a dipole trap for ^87Rb atoms with a 235-nm radius ONF waist by coupling two counter-propagating red-detuned beams (1064 nm) in a standing wave configuration and one blue-detuned beam (750 nm). The dominant resonances for Rb are at 780 nm (D2 line)and 795 nm (D1 line). We typically use 1 mW of power for each red-detuned beam, and 3 mW for the blue-detuned beam. Fig. <ref>(b) shows this configuration, which produces a trapping potential with a depth of about500 μK. Here, and throughout the paper, we consider only the atomic scalar polarizability for the calculations of the trapping potentials.We image the light scattered from the nanofiber to characterize the polarization of the laser beams at the ONF waist <cit.>. Because Rayleigh scattering preserves the polarization of the field, with the help of a linear polarizer in front of the camera we determine the polarization of the propagating field. The polarization can be controlled by wave plates at the input of the ONF. Each laser beam has to be characterized and controlled independently, since inherent stress in the ONF creates a birefringent medium that affects each wavelength differently.A magneto-optical trap (MOT) loads cold ^87Rb atoms into our ONF dipole trap in a vacuum chamber kept at lower than 10^-9 Torr. We further cool the atoms by increasing the detuning of the MOT beams for 90 ms. We then turn off the magnetic field gradient to create optical molasses for 1 ms. The atoms are typically at 15 μK when we let them fall into the dipole trap. Because of the tight confinement of the trap, the atoms are expected to be in a collisional blockade regime. 
This leads to a binary loading with one or zero atoms per trapping site. We typically trap a few hundred atoms with trapping lifetimes of the order of 10 ms. The trapped atoms are in a statistical mixture of m_F Zeeman sub-levels. We send an off-resonant beam, detuned 200 MHz to the blue of the F=2→F'=3 transition of the D2 line, through the ONF to probe the trapped atoms. We align its polarization to be 45° from the trapping beams when there are no atoms present. The projection of the transverse polarization component along the axis defined by the trapped atoms experiences a modified refractive index, while the orthogonal component, which does not interact with the atoms, propagates unaltered. The motion of trapped atoms in the transverse plane of the nanofiber will change this birefringence as a function of time, producing a dynamical polarization rotation of the probe beam. Motion along the fiber axis (z-direction) is likely to be only weakly coupled to the probe and would not produce significant polarization rotation. Because of the significant atom-light coupling provided by the tight mode area, more than a few tens of nW of probe power will perturb the trap near resonance. We use 70 nW of probe power, enough to imprint a momentum kick on the atoms to start their motion, but too weak to excite the atoms out of the trap. Fig. <ref> (c) shows the effect of the probe beam on the trapping potential. The polarization rotation of such a low probe power is detected by heterodyne measurement, mixing the probe with a local oscillator (LO) with a 1 MHz relative frequency shift. We typically use 9 mW of power for the LO beam. After the probe goes through the ONF it is combined with the LO using a 50/50 beam splitter. We use one of the output paths for detection. Its polarization components are separated by a Wollaston prism and sent to a 4 MHz bandwidth balanced photodetector. The 1 MHz beat note between the probe and the LO is mixed down to DC. This allows us to use the LO as gain for the probe, and to directly detect the probe polarization rotation as a function of time with a bandwidth higher than the expected trap frequencies.

Figure <ref> (a) shows a typical signal of the polarization rotation of the probe. Although the signal is visible in a single shot, the data is averaged to improve the signal-to-noise ratio by a factor of 10. The original data was acquired with a 2-ns bin width, and the plot is a 400-ns moving average for visualization purposes. The detector polarizations are set such that when there are no trapped atoms the measured output voltage is zero. However, the zero voltage at time t=0 in the plot is produced only by the LO (probe beam off). The probe field turns on at 2 μs. The signal can be decomposed into two time regimes: a short time regime, where we observe oscillations due to the atoms moving back and forth in the trapping potential; and a long time regime, where the oscillations vanish but the non-zero signal shows the presence of atoms in the trap. The sharp initial peak comes from atoms starting their motion closer to the ONF surface, where they interact more strongly with the probe beam, producing a larger signal. The decoherence of the oscillations comes from the large anharmonicity of the trapping potential and the thermal motion of the trapped atoms. The long timescale slope is the lifetime of the trap. In this case the characteristic decay time is 370±3 μs, where the error represents the standard error of the fit.
The lifetime is degraded by more than an order of magnitude when the probe beam is kept on. A small fraction of the probe beam gets absorbed by the trapped atoms, resulting in losses as the trapping potential becomes shallower (see Figs. <ref> (b) and (c) with the depth scale). The temporal response and initial oscillations in Fig. <ref> (a) encode information about the transverse trapping frequencies. By taking a discrete Fourier transform of the data (after the probe turns on) we obtain the resonance frequencies of the oscillating atoms. Fig. <ref> (b) shows the power spectrum of the signal. We observe two distinct peaks at ν_ϕ=73±3 kHz and ν_r=197±2 kHz, corresponding to the azimuthal and radial frequencies of the trap. The uncertainties in the mean are calculated from the full width at half maximum of the peak divided by the signal-to-noise ratio <cit.>. The width of the spectral peaks and the damping of the time-domain oscillations arise from the dephasing of the atoms due to the strong anharmonicity of the trap. As an approximation, we can model the problem as a damped harmonic oscillator. Fits to a Lorentzian line shape give linewidths of γ_ϕ=64±8 kHz and γ_r=47±6 kHz, respectively, where the errors are the standard errors of the fits. This corresponds to a decay time of the oscillations of around 20 μs, enough to measure trapping frequencies above 50 kHz. The observation of oscillations from the azimuthal motion of the atoms depends on the alignment of the probe polarization to within a few degrees. On the other hand, the detection of oscillations from the radial motion of the atoms is more robust under misalignment.
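The spectral analysis just described can be summarized in a few lines of code. The following is a minimal sketch (window widths and initial guesses are illustrative choices, not parameters from the analysis above):

```python
import numpy as np
from scipy.optimize import curve_fit

def trap_frequencies(signal, dt, guesses=(73e3, 197e3)):
    """Sketch of the analysis in the text: power spectrum of the
    polarization-rotation trace, then a Lorentzian fit around each peak
    to extract the trap frequency and linewidth."""
    freqs = np.fft.rfftfreq(len(signal), dt)
    power = np.abs(np.fft.rfft(signal - np.mean(signal)))**2

    def lorentzian(f, f0, gamma, a, c):
        return a * (gamma / 2)**2 / ((f - f0)**2 + (gamma / 2)**2) + c

    results = []
    for f0 in guesses:
        win = (freqs > 0.5 * f0) & (freqs < 1.5 * f0)   # window around each peak
        p, _ = curve_fit(lorentzian, freqs[win], power[win],
                         p0=[f0, 0.3 * f0, power[win].max(), 0.0])
        results.append((p[0], p[1]))                    # (center frequency, linewidth)
    return results
```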
We can compare the measured frequencies in Fig. <ref> (b) to a numerical calculation. Taking the second derivative of the trapping potential shown in Fig. <ref> (c) and knowing the atomic mass m, we can calculate the expected trapping frequencies as ν_i = (1/2π)√( (1/m) ∂²U/∂x_i² ), where the index i denotes the radial or azimuthal direction in cylindrical coordinates. For the experimental parameters listed in this paper, which produce Fig. <ref> (c), we find ν_ϕ=70±4 kHz and ν_r=195±6 kHz. The frequencies are extracted by fitting a harmonic potential to the bottom of the calculated potential and extracting the corresponding trapping frequency for each spatial direction. The errors represent the sensitivity of the simulation to a 5% variation of the experimental parameters, these parameters being the powers of the four laser beams (two red-detuned, one blue-detuned, and the probe) and the four polarization angles (three relative angles). We assume that the polarizations are perfectly linear, which is in general not true, but this greatly reduces the number of free parameters in the simulation. The theoretical results are 2% above and 7% below the measured values for the azimuthal and radial frequencies, respectively. The measured signal is in good agreement with the expected result within the experimental uncertainties.

The non-destructive feature of this method is further tested by probing the trapped atoms more than once while they are still in the trap. Fig. <ref> shows the polarization rotation as a function of time for a probe beam that turns on and off four times. We see that the first pulse is enough to extract the oscillation frequency of the atoms before it decreases. Subsequently, the probe turns off and on again, after 10 μs, reproducing the same oscillatory signal but with smaller amplitude. This process can be repeated as long as there are enough atoms in the trap to produce a detectable signal. The signal from the four pulses shown in Fig. <ref> has an overall slope corresponding to a trapping lifetime of 265±1 μs. This lifetime is almost 30% shorter than when the probe beam is kept constantly on (as in Fig. <ref> (a)), because the momentum kick from suddenly turning the probe beam on and off can induce atom loss. However, the dispersive measurement is non-destructive enough to test the characteristics of the trap while leaving a significant number of atoms for further experimentation. The inset of Fig. <ref> shows a numerical simulation of the detected signal for only radial oscillations (uncoupled motion). Using the simulated trapping potential (Fig. <ref> (c)), we calculate the motion of a set of 500 atoms randomly positioned with a flat distribution of ±75 nm centered 80 nm towards the ONF from the potential minimum. The trajectories of the atoms, computed and averaged, give an effective trajectory. The signal is proportional to the dynamical change of the coupling into the ONF of an atom following such an effective trajectory. The displacement of the center of the distribution of initial atomic positions takes into account the displacement of the center of the trap when the probe beam is turned on. The parameters for the simulation are found empirically within an experimentally realistic range. This simple model captures the qualitative behavior of the detected signal. Although the probe beam modifies the potential landscape felt by the atoms, the good agreement between the measurements and the simulations allows us to extract the trapping potential without the modification due to the probe beam. In our case we obtain ν_ϕ=178.3 kHz and ν_r=252.2 kHz from the potential shown in Fig. <ref> (b). Moreover, by optimizing the photodetection, a weaker probe beam could be used to minimally perturb the trapping potential. In this configuration another pulsed beam could rapidly imprint a momentum kick on the atoms, so that they start oscillating in phase. Colder atoms might also help to establish a longer coherence time for the oscillations, since the trapping potential approximates a harmonic trap around its minimum. The measured signal increases linearly with the number of trapped atoms. A more efficient loading of the trap may increase the number of atoms and the amplitude of the signal.
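The radial ensemble simulation described above can be sketched as follows. The potential U, the 87Rb mass value, and the numerical grid are assumptions of this sketch, not the exact implementation used for the inset of the figure:

```python
import numpy as np

def effective_trajectory(U, r_min, n_atoms=500, spread=75e-9, offset=80e-9,
                         mass=1.44e-25, T=50e-6, dt=1e-9, seed=0):
    """Atoms start uniformly within +/- spread around (r_min - offset),
    evolve in the 1D radial potential U(r) (a callable, assumed smooth),
    and their averaged position dephases through the trap anharmonicity."""
    rng = np.random.default_rng(seed)
    r = r_min - offset + rng.uniform(-spread, spread, n_atoms)
    v = np.zeros(n_atoms)
    h = 1e-11                                   # step for the numerical force
    traj = []
    for _ in range(int(T / dt)):
        force = -(U(r + h) - U(r - h)) / (2 * h)
        v += force / mass * dt
        r += v * dt
        traj.append(r.mean())                   # effective (averaged) trajectory
    return np.array(traj)
```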
We have shown how a polarimetric measurement of an off-resonance probe beam can be used for the rapid and non-destructive characterization of the trapping potential of a two-color ONF-based dipole trap. This technique can be easily implemented in any ONF-based dipole trap experiment, allowing a shot-to-shot measurement of the trapping potential before performing further experiments in the same experimental sequence, an advantage over other configurations of optical dipole traps. The results are in good agreement with theoretical predictions, showing an understanding of the variables involved in the problem. This points to different strategies to improve the technique in the future. We expect non-destructive, fast-readout characterization of the local potential experienced by trapped atoms near dielectric surfaces to become a standard tool in the growing field of interfacing nano-photonic platforms with cold atoms.

§ ACKNOWLEDGMENTS

This work has been supported by the National Science Foundation of the United States (NSF) (PHY-1307416) and the NSF Physics Frontier Center at the Joint Quantum Institute (PHY-1430094).
http://arxiv.org/abs/1703.09122v1
{ "authors": [ "Pablo Solano", "Fredrik K. Fatemi", "Luis A. Orozco", "S. L. Rolston" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170327144559", "title": "Dynamics of trapped atoms around an optical nanofiber probed through polarimetry" }
§ INTRODUCTION

Segmented silicon detectors play a dominant role today as precision tracking devices in high-energy-physics (HEP) experiments, with typical spatial resolutions in the order of 10 μm. Recently, there has been considerable interest in also studying and developing the timing capabilities of silicon detectors. Intermediate studies concentrate on optimising spatial and timing performance separately, but concepts of 4D tracking (high precision space and time measurements combined in one device) are also being conceived <cit.>. An important application of timing with an overall precision of 10–30 ps could be pile-up removal at the Large Hadron Collider (LHC) experiments, since the exact knowledge of the time-of-arrival of a particle hints at its originating primary vertex. This is needed for forward detectors such as the ATLAS Forward Proton (AFP) experiment <cit.> or the CMS-TOTEM Precision Proton Spectrometer (CT-PPS) <cit.>, which require 10–20 ps timing detectors already in 2017, as well as for high-luminosity (HL-)LHC upgrade projects such as the ATLAS High Granularity Timing Detector (HGTD) <cit.>, which are planned to be installed around 2024.

The time resolution of silicon detectors has two major contributions (neglecting contributions from the digitisation of the signal or non-uniform weighting fields): the jitter from the noise of the sensor-electronics system, which can be improved with a high signal-to-noise ratio (S/N) and a short rise time of the signal; and charge deposition variations, sometimes called Landau noise. The latter leads on the one hand to amplitude variations, which can typically be taken into account using amplitude corrections or constant-fraction thresholds. On the other hand, however, local charge deposition variations in different depths of the sensor lead to uncorrectable intrinsic signal fluctuations.
Their impact can be mitigated by using thinner sensors <cit.>. Standard silicon detectors without internal gain and with thicknesses between 100 and 200 μm have been shown to provide a time resolution of 100–150 ps <cit.> for a minimum-ionising particle (MIP). Since a high signal-to-noise ratio is key to obtaining a good time resolution, silicon detectors with a built-in charge multiplication mechanism providing internal gain can be advantageous. Early tests on commercial silicon avalanche diodes with an active thickness of 140 μm and a gain of 50 achieved time resolutions down to 65 ps <cit.>. Recently, following the observation of charge multiplication as a radiation effect in highly irradiated silicon detectors <cit.>, a new technology called Low Gain Avalanche Detectors (LGAD) <cit.> has been developed at CNM Barcelona in the framework of the CERN-RD50 collaboration <cit.>. It is based on implanting a highly doped p-type layer between the high-resistivity p-type bulk and the n⁺ implant, which acts as a high-field charge multiplication layer with a target gain of 10–50 and custom segmentation between the μm and mm levels. The first devices with a thickness of 300 μm and a gain of about 10 already showed a time resolution of 120 ps for a MIP <cit.>. It was predicted that thinner detectors would improve the timing performance significantly, to about 30 ps for 50 μm thin devices with a gain in the order of 10, due to both a steeper signal slope and less Landau noise compared to thicker LGADs <cit.>.

In May 2016, CNM finished the first production of LGADs with an active thickness of about 45 μm and a gain larger than 10. The first timing studies of these devices were performed in the AFP beam test in June/July 2016 at the CERN Super Proton Synchrotron (SPS) with 120 GeV pions, which will be reported here. Other groups have also studied the timing performance before irradiation <cit.>. However, a key to the above-mentioned applications in LHC experiments is radiation hardness. For example, AFP is exposed to a highly non-uniform irradiation with a peak fluence of about 10^15 n_eq/cm² per year <cit.>, and similarly for CT-PPS. Also HGTD is required to withstand radiation levels of the same order of magnitude. However, in previous studies on 300 μm thick LGADs, it was found that the gain is highly reduced after irradiation due to the effective removal of the initial acceptors <cit.>. Hence, a focus of this study is the gain and timing performance of 45 μm LGADs after irradiation with neutrons to fluences of 3×10^14 and 10^15 n_eq/cm², which was measured in the AFP beam test in September 2016 at the CERN-SPS. The first results on LGAD time resolutions after irradiation will be reported in this paper.

§ TECHNOLOGY AND SAMPLES

LGAD devices were produced by CNM Barcelona on 4" silicon-on-insulator (SOI) wafers with a nominal thickness of 50 μm on a 300 μm thick support wafer and a 1 μm buried oxide[The first run 9088, which is studied here, finished in May 2016.]. Due to the diffusion of the highly doped n⁺ and p⁺ implants at the front and back side, respectively, the active thickness is reduced to about 45 μm, which is consistent with capacitance measurements (see section <ref>). Figure <ref> (top left) shows a picture of the wafer with different structures.
It mainly includes single pad diodes of overall active area of 1.3×1.3 mm^2 (LGA) and 3.3×3.3 mm^2 (LGB) in the periphery, as well as segmented arrays of diodes with various dimensions in the centre of the wafer, which were designed for the HGTD and CT-PPS/TOTEM experiments. In this study, the small single pad diodes LGA were used. Figure <ref> shows top and side views of this device. The bulk is p-type doped with 12 kΩcm resistivity and the central pad is made of a 1.2 mm wide n^+ implantation to provide the p-n junction, including a junction termination extension (JTE) at the edges to improve the breakdown behaviour. It is surrounded by a p-stop implant, defining the 1.3 mm wide active area, and a guard ring. The gain is provided by a 1.0 mm wide central p-type multiplication layer underneath the n^+ implant, which enhances the electrical field in this region to values sufficient for impact ionisation. Three different multiplication layer implantation doses[In the following simply referred to as dose, not to be confused with the irradiation level, which will be referred to in terms of particle fluence.] were used on different wafers in order to provide devices with different gain and breakdown voltage (V_BD) behaviour: 1.8 (low), 1.9 (medium) and 2.0 (high) ×10^13 cm^-2. In this paper, the low and medium implantation doses were studied. The top implant is metalised with aluminium with a central hole for light injection. Four holes are etched through the insensitive substrate from the back side, which is subsequently covered with aluminium, to provide the back side contact to the thin active area.
Two of the medium-dose samples were irradiated with neutrons at the TRIGA reactor in Ljubljana to a 1-MeV-neutron-equivalent fluence of 3×10^14 n_eq/cm^2, and another two to 10^15 n_eq/cm^2. An overview of the samples studied is given in table <ref>. Typically, for each type of dose and fluence, there are two copies of LGAD diodes, briefly called L1 and L2 (for the low dose, a third device L3 was also used, but only for the Sr90 measurement). In the following, the devices will be referred to according to their short names listed in the table.
§ LABORATORY CHARACTERISATION: IV AND GAIN Before the beam tests, the devices were characterised in the laboratories of CNM, IFAE Barcelona and JSI Ljubljana with current-voltage (IV), capacitance-voltage (CV) and Sr90 charge collection measurements. Figure <ref> (top left) shows the 1/C^2 vs. voltage curves for one device with low and medium implantation dose before irradiation, respectively. Up to 32 V (for the medium dose about 1 V later due to the higher doping), the thin highly-doped p-type multiplication layer slowly depletes, and 1/C^2 stays at a low level. Then, within about 3 V, the remaining high-resistivity bulk fully depletes. The capacitance of the LGAD devices was measured to be 3.9 pF after full depletion, consistent with expectations for a 45 μm active thickness. Figure <ref> (top right) shows the IV curves for all devices, measured at room temperature before and at -10^∘C after irradiation. Before irradiation, the current is approximately at the nA level after full depletion at about 40 V. The current never reaches a real plateau and keeps increasing due to the charge multiplication (note the logarithmic scale), before it reaches a hard breakdown at about 240 V (320 V) for the medium (low) dose. After irradiation, the current increases with fluence, as expected, and reaches the μA level (for -10^∘C).
The current increase with voltage is stronger for the lower fluence, and the breakdown is also reached earlier since, as will be shown below, the gain degrades with fluence. The charge in response to beta particles from a Sr90 source was measured with the setup in Ljubljana described in detail in reference <cit.>. The devices were mounted in aluminium boxes that were placed between the source and a scintillator trigger. The signal was processed with a charge-sensitive amplifier and a custom-made 25 ns shaper, recorded with an oscilloscope and analysed offline. The charge was calibrated with an Am source. In the following, the most probable value (MPV) of a Landau-Gauss fit to the charge spectrum will be used. The gain is calculated as the ratio of the charge of the LGAD devices to the charge of an equivalent 45 μm thin pad diode from the same run, but without p-type charge multiplication implant. This no-gain reference charge was measured to be 2.88 ke^-, consistent with expectation <cit.>. Figure <ref> (bottom) shows the measured charge and corresponding gain as a function of voltage. The two devices of the same implantation dose or fluence behave very similarly. At a fixed voltage, the gain is highest for the medium dose before irradiation, which reaches a gain of 20 at 200 V, whereas the low dose reaches a lower value as expected, namely only 10. However, since the voltage stability of the low-dose device surpasses that of the medium-dose device, similar end-point gains can also be reached with lower doses, but at higher applied voltages. It should be noted that the exact end point is not well defined. The measurements were stopped when the waveform started to show instabilities like micro discharges or baseline fluctuations. This was typically the case close to the hard current breakdown and highly depends on the setup and operating and environmental conditions. For example, in the beam tests the measurements could typically be performed up to slightly higher voltages (see section <ref>). Due to the large slope at high voltages, this can make a considerable difference in the gain achieved. After irradiation, the gain is highly reduced at the same voltage, as already observed in thicker LGAD devices <cit.>. This can be explained by an effective removal of the initial acceptors of the p-type multiplication layer. For example, at 200 V the gain is only 4 at 3×10^14 n_eq/cm^2 and about 1 at 10^15 n_eq/cm^2. But also here, the improved voltage stability after irradiation helps to recover at least part of the original gain. A maximum gain of about 18 (10) is reached at 425 V (650 V) at 3×10^14 n_eq/cm^2 (10^15 n_eq/cm^2). A more detailed and extensive study of radiation effects in 45 μm thin LGADs is under way and will be published separately. It is observed that the effective acceptor removal of the multiplication layer seems to be completed at a fluence of about 2×10^15 n_eq/cm^2, where the LGAD sensors behave in the same way as standard reference diodes without built-in multiplication layer <cit.>. However, both LGADs and standard reference sensors do show charge multiplication at such high fluences with a gain up to 8, which in that case originates from high fields from radiation-induced charged bulk defects, as observed before in other thin silicon pad diodes <cit.>.
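To make the gain extraction described above concrete, here is a minimal numerical sketch. The function and variable names are ours, not from the original analysis; the 2.88 ke^- reference is the no-gain charge quoted in the text, and the example input charge is illustrative.

```python
# Gain = MPV charge of the LGAD / MPV charge of an equivalent no-gain diode.
NO_GAIN_CHARGE_KE = 2.88  # ke-, 45 um reference diode without multiplication layer

def gain(lgad_mpv_ke: float, reference_ke: float = NO_GAIN_CHARGE_KE) -> float:
    """Ratio of the Landau-Gauss MPV charges, as defined in the text."""
    return lgad_mpv_ke / reference_ke

# Example: a medium-dose device collecting about 57.6 ke- at 200 V
# corresponds to the gain of 20 quoted above.
print(round(gain(57.6)))  # -> 20
```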
§ BEAM TEST MEASUREMENTSThe first timing measurements of 45 μm LGAD detectors before and after irradiation were performed in the AFP beam tests in June/July and September 2016 in the H6B beam line of the CERN-SPS North Area with 120 GeV pions. §.§ Beam test setup and operation Figure <ref> shows a picture of the beam test setup. The LGAD detectors were mounted on a Printed Circuit Board (PCB) developed originally for Transient Current Technique measurements. The bias voltage was applied via a low-pass RC filter to the back side. The front side was connected via a bond wire and SMA connectors to an amplifier, typically a broad band CIVIDEC C2 TCT model with a band width of 2 GHz <cit.>. The guard ring was not connected. The signal was recorded by an oscilloscope, typically the Agilent infiniium DSA91204A oscilloscope with 40 GS/s sampling rate and a band width of 12 GHz, which was however reduced online to 1 GHz by default (see discussion below). Optionally, it was also possible to use an AFP Constant Fraction Discriminator (CFD) as described in <cit.> between the amplifier and the oscilloscope. As fast 10 ps timing references, two devices based on Cherenkov-light emitting quartz bars coupled to silicon photomultipliers (in the following simply referred to as SiPM1 and 2) were used <cit.>. The quartz bars have a cross section of 3×3 mm^2 and are 30 mm long in beam direction. A bias voltage of 30.7 V was applied to the SiPMs and the signal was discriminated with the CFDs before being recorded by the oscilloscope. The SiPMs were placed on a mechanically adjustable table to align them with respect to the LGADs. The whole setup was mounted on a base plate connected to a remotely controllable micrometer precision table. The base plate could be covered with a styrofoam box for light-tightness and thermal insulation during cooling. The unirradiated devices were measured without cooling at room temperature. The cooling of that setup during the measurements of the irradiated devices was performed with dry ice. The on-sensor temperature was extracted using the temperature dependence of the leakage current by comparing the current during the measurement to the one measured at different temperatures in a laboratory climate chamber. In this way, the on-sensor temperature during the measurements with the 3×10^14 n_eq/cm^2 (10^15 n_eq/cm^2) devices was determined as -6^∘C (-15^∘C). For the 3×10^14 n_eq/cm^2 devices, an additional run was performed in a beam-test compatible climate chamber set to -20^∘C. In that case, the on-sensor temperature was found to be consistent with the set one.At the beginning of the beam test, setup variations and operation conditions were studied to find the optimal measurement configuration, as well as to understand the running stability. Besides the CIVIDEC C2 TCT amplifier, also other broad band amplifiers were tested such as the AFP custom-made pre-amplifiers <cit.> or the Particulars TCT amplifier <cit.>, but both were found to give 5–10 ps worse time resolution due to a lower S/N. The performance of the CIVIDEC C6 charge-sensitive amplifier was even much worse (about 100 ps) due to the non-optimised shaping time of 4 ns, much longer than the original 500 ps rise time of the signal (see section <ref>). Also another oscilloscope (LeCroy, 2 GHz, 20 GS/s) was studied and, moreover, the sampling rate of the Agilent oscilloscope was reduced online to 10 or 20 GS/s, but these variations were found to have negligible impact. 
However, the setting of the vertical scale of the oscilloscope was found to contribute to the noise and hence the time resolution, due to the rather large digitisation uncertainty introduced by oscilloscopes with 8-bit vertical resolution. It was set by default to 50 mV/div and kept unchanged during the measurements to obtain comparable results. The band width of the oscilloscope, which could be set online, was also found to impact the time resolution, since it should be large enough not to deteriorate the fast rise time of the signal, but small enough to filter out high-frequency noise. This was studied in the range between 0.5 and 12 GHz and the optimum was found at 1 GHz, which was set as default. This corresponds to a rise time of 340 ps for a step function, which is below the intrinsic pulse rise time of about 500 ps.
Due to ambient noise in the beam area and the non-optimal assembly with long wires and connectors between the sensor and the amplifier, run-to-run variations were observed. The noise was found to vary between 3 and 4 mV, and the amplitude and integrated charge by up to 30–50%. However, the impact on the final time resolution was found to be typically only about 10%. Some runs were taken with two SiPMs and two LGAD devices connected to the oscilloscope channels, but in some cases only one SiPM was available, or, in the case of the additional run in the climate chamber, no SiPM. Typically 10,000 events were recorded for each run. The trigger could basically be set to any channel, and for test run periods, data with different triggers (L1, L2 and SiPM2) were taken and compared. Since no significant difference or bias depending on the trigger channel was found and the purity of the data sample was higher when triggering on the LGADs themselves, this was taken as the default trigger option. The LGAD trigger level for low- and medium-voltage runs was typically set to 15–20 mV, which was high enough above the noise but low enough not to cut into the signal. At the highest measured voltages, the trigger levels sometimes had to be increased significantly to avoid fake triggers, in rare cases up to 50 mV, not because the Gaussian noise increased so much, but probably due to micro discharges. However, at high voltages the signal was also much higher due to the gain, so that this could be afforded up to a certain limit. In general, the measurements were stopped at a voltage when the trigger level would have needed to be set so high that it would have cut into the signal distribution, or when the waveforms became unstable, deformed or started not to return to the baseline, due to micro discharges. For some devices, the last measured voltage point presented here is already affected by such unstable waveforms, see figure <ref> right (typically one device started slightly earlier than the other), but in general the measurement was stopped then. An incident occurred during the measurement of the devices at the highest fluence of 10^15 n_eq/cm^2, when both sensors stopped working during a longer no-beam waiting period, biased at 600 V while being cooled with dry ice. Possible explanations include warming up and thermal runaway during that time. After that incident, both devices showed a reduced breakdown voltage of less than 1 V and could not be used anymore for measurements.
§.§ Measured waveform parameters and properties In this section, the measured waveforms are presented and their properties discussed.
Figure <ref> shows example waveforms at two voltages, and figure <ref> the corresponding amplitude distributions for a few bias voltages. Figure <ref> displays the voltage dependence of key waveform parameters, namely the rise time τ_10-90%[Measured as the time from 10 to 90% of the amplitude.], the most probable value (MPV) of the amplitude, the gain, the baseline noise N, the signal-to-noise ratio S/N and the jitter σ_jitter.The waveforms are relatively fast and short with measured rise times of about 500 ps and durations of about 1.5 ns at high voltages (excluding the second peak about 1 ns after the first one, which is believed to originate from artifacts of the setup such as impedance mismatch, since it is not observed in other setups <cit.>). The intrinsic waveform shape (before distortions from electronics) is a superposition of the induced currents from the drift of the primary electrons and holes and the secondary holes (see reference <cit.> for a detailed simulation of the signal). The latter are created by impact ionisation in the charge multiplication layer from electrons that reach the front side (the field necessary for multiplication from holes is much higher than for electrons). This multiplication increases the number of charge carriers and hence the induced current as long as the primary electrons are drifting to the front side, which is expected to last for about 450 ps at drift velocity saturation. This constitutes the intrinsic rise time of the signal, consistent with the measurement. As can be seen from figure <ref>, the rise time slightly decreases with voltage due to the faster drift. Subsequently, the multiplied holes drift to the back side, which is expected to last slightly longer due to the slower drift of the holes, which adds to the total pulse width and gives a trapezoidal shape to the waveform. The intrinsic LGAD waveform is convoluted with the electronics response function that includes contributions from the sensor capacitance in combination with the 50 Ω impedance of the amplifier with a rise time of 2.2τ_RC=430 ps for a step function, and from the 1 GHz band width of the oscilloscope-amplifier system with a rise time of 340 ps for a step function.The amplitude distributions are shown in figure <ref> for selected voltages and fitted with a Landau-Gauss function to extract the MPV. From figure <ref> the strong voltage dependence of the MPV can be appreciated. A similar behaviour is obtained for the gain, which is extracted here by integrating the waveform from -1 to 4 ns and dividing by the expected no-gain signal of 3.2 ke^-, which is slightly different for 120 GeV pions than for Sr90 beta particles <cit.>. The signal-to-noise ratio was not high enough to directly measure the charge of a 45 μm no-gain reference sensor, but the charge calibration has been verified with a 300 μm thick pad diode. The in-situ gain from the beam test agrees with the Sr90 measurements from section <ref> within the rather large run-to-run variations of 30–40% mentioned in section <ref>. The kink in the curves of med,unirr,L1/2 at 200 V stems from such variations since measurements below and above have been obtained in different runs. Note that during the beam test measurements, it was possible to measure at slightly higher voltages than during the Sr90 tests, as mentioned before, and hence higher gains were achieved. In particular, the maximum gain of low,unirr was higher than for med,unirr, and the maximum gain of med,3e14 was approaching the one of med,unirr. 
Only for med,1e15 was it not possible to apply the maximum voltage from the Sr90 measurements, since the devices broke beforehand, as discussed above. The noise, as obtained from sampling the baseline before the signal at -1 ns, was measured to be typically in the range between 3–4 mV before irradiation, and also after irradiation for low voltages, which is consistent with the above-mentioned run-to-run variations. However, whereas the unirradiated devices show no voltage dependence in the range measured, it is interesting to note that the noise of the irradiated devices increases with voltage, up to 5 mV, probably due to increased shot noise from higher leakage currents. The signal-to-noise ratio, obtained as the ratio of the amplitude MPV and the noise, gave values as high as 60 before irradiation and up to 35 (10) at 3×10^14 n_eq/cm^2 (10^15 n_eq/cm^2). The time resolution contribution due to the electronics jitter can be estimated from the slope dV/dt of the signal and the noise N as σ_jitter = N/(dV/dt) ≈τ_10-90%/(0.8· S/N). It decreases steeply with voltage and reaches values as low as 10–15 ps before irradiation, and not much worse values of 15 (20) ps at 3×10^14 n_eq/cm^2 measured at -20^∘C (-6^∘C). However, for 10^15 n_eq/cm^2, the jitter is limited to 50 ps.
§.§ Time resolution The time resolution was obtained from the spread of the time-of-arrival difference Δ t between two devices. This technique includes the contributions of both channels. For the digital CFD signals of the SiPMs, the time-of-arrival was simply taken at a fixed threshold of -250 mV, which was about half of the constant amplitude. Variations of the threshold or more sophisticated algorithms were not found to change the results. For the analog LGAD signals without CFD, however, the time-of-arrival was obtained from an offline constant-fraction algorithm, which is needed to correct for the signal time walk due to amplitude variations. A threshold was set for each waveform individually at a certain fixed fraction of the amplitude, and the time of the threshold passing was interpolated linearly from the two measurement points just above and below that threshold. Many variations were tried, such as including more points in a linear or polynomial fit or a spline interpolation, but no significant improvements in resolution were obtained, so that this simple and robust approach was taken. The fraction was scanned in 5% steps from 10 to 90% of the amplitude, and the optimal value was taken. Interestingly, this differed systematically for different voltages and devices, probably due to waveform shape variations that led to different points of the steepest slope dV/dt. For example, for the unirradiated LGAD devices, the optimal fraction was found to be around 80% at low voltages, decreasing to 20% at the highest measured voltages, whereas for the irradiated devices it stayed constant between 80 and 90%.
Figure <ref> shows example distributions of the time-of-arrival difference Δ t for different devices. The histogram spreads were extracted as the standard deviation of a Gaussian fit σ_total, which contains the contributions of both devices. The individual resolutions of the SiPM reference devices were determined from some runs at high LGAD voltages, in which two LGADs and two SiPMs were included in the data taking.
In a combined analysis of all channel combinations, which allowed the individual resolutions to be disentangled, they were determined as σ_SiPM1=13±1 ps and σ_SiPM2=7±1 ps, with the uncertainties containing statistical uncertainties of about 0.2 ps and run-to-run variations of about 1 ps. In the following, the LGAD resolution σ_LGAD of each individual LGAD device was determined by default from Δ t(LGAD-SiPM2) by subtracting σ_SiPM2 quadratically, or, in the case where only two LGADs were measured, as for med,3e14,L1/2 at -20^∘C, the average resolution of both devices was obtained as σ_<LGAD>=σ_total/√(2).
Figure <ref> (left) shows the voltage dependence of σ_LGAD obtained in such a way for all devices measured. The uncertainties shown are statistical only (typically between 0.5–1 ps). As mentioned above, run-to-run variations add a systematic uncertainty of about 10%, about 3 ps at the highest voltages for each device. A strong, almost linear decrease of σ_LGAD with voltage can be seen. The two devices of the same type behave similarly, but between the different doses and fluences there is an almost constant offset, mostly due to the different gain at a fixed voltage. In fact, if σ_LGAD is plotted as a function of the gain, as shown in figure <ref> (right), an approximately universal behaviour for all doses and fluences is observed. The spread of this curve is within the systematic uncertainties of the gain obtained in-situ in the beam test measurements of about 30–40% (as discussed in section <ref>). Slight deviations from a universal gain dependence are only expected if the noise shows significant variations, or if the drift velocities and hence rise times are very different for the same gain, due to different voltages below saturation or different temperatures. The end-point time resolutions at the highest voltages measured are very similar for the medium and low doses before irradiation, with 29 ps at 235 V and 28 ps at 320 V, respectively. The values before irradiation agree within the 10% systematic uncertainties with the ones measured by other groups with a different setup <cit.>. Even after irradiation to 3×10^14 n_eq/cm^2, the same time resolution as before irradiation could be obtained when measured at -20^∘C, but at a higher voltage of 430 V. When measured at -6^∘C, the time resolution was about 8 ps worse at the same voltage, which might be explained partly by a lower gain due to a lower impact ionisation coefficient at higher temperatures, and partly by lower drift velocities. A more detailed study of the temperature dependence, also before irradiation, will be presented in a later paper. Whereas at 3×10^14 n_eq/cm^2 similar time resolutions could be achieved as before irradiation, the voltage stability of the devices irradiated to 10^15 n_eq/cm^2 was not good enough to compensate for the effective acceptor removal in the multiplication layer. The resolution at the highest measured voltage of 620 V was found to be minimally 57 ps. Better time resolutions might be achievable with more sophisticated algorithms that exploit the full waveform information. However, for HEP applications with a very large number of channels, a simple solution such as a CFD followed by a TDC is the most realistic option. This is why the above analysis with the offline constant-fraction algorithm tried to stay as close as possible to that scenario.
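As a worked illustration of the extraction just described, the following minimal sketch (names are ours) applies the quadratic subtraction of the reference resolution and the two-identical-devices average, together with the jitter estimate from the previous subsection; the default 7 ps is the σ_SiPM2 value quoted above, and the example inputs are illustrative.

```python
import math

def lgad_resolution(sigma_total_ps: float, sigma_ref_ps: float = 7.0) -> float:
    """sigma_LGAD = sqrt(sigma_total^2 - sigma_ref^2): quadratic subtraction
    of the SiPM reference contribution from the Delta-t spread."""
    return math.sqrt(sigma_total_ps**2 - sigma_ref_ps**2)

def average_lgad_resolution(sigma_total_ps: float) -> float:
    """Two similar LGADs measured against each other: sigma_total / sqrt(2)."""
    return sigma_total_ps / math.sqrt(2)

def jitter(rise_time_ps: float, s_over_n: float) -> float:
    """sigma_jitter ~ tau_10-90 / (0.8 * S/N), cf. the previous subsection."""
    return rise_time_ps / (0.8 * s_over_n)

print(round(lgad_resolution(30.0), 1))          # 30 ps total -> ~29.2 ps
print(round(average_lgad_resolution(41.0), 1))  # 41 ps total -> ~29.0 ps
print(round(jitter(500.0, 60.0), 1))            # ~10.4 ps, cf. the 10-15 ps quoted
```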
Furthermore, for one run with a device of the medium dose before irradiation, an AFP CFD was inserted between the amplifier and the oscilloscope, with a constant-fraction parameter of 50% of the amplitude and a 50 mV threshold below which events were rejected (hence only high voltages could be measured). At a voltage of 230 V, a time resolution of 35 ps was achieved, which is only slightly worse than the 30 ps achieved without CFD at the same voltage, especially considering the 10% systematic uncertainty and the non-optimised CFD threshold and fraction for this first test.
§ CONCLUSIONS AND OUTLOOK Low Gain Avalanche Detectors (LGADs) were produced with an active thickness of about 45 μm, and their gain and time resolution were studied for the first time for different initial multiplication layer implantation doses, and before and after irradiation with neutrons up to 10^15 n_eq/cm^2. The gain showed the expected decrease at a fixed voltage for a lower implantation dose and a higher fluence, due to lower acceptor concentrations in the multiplication layer. Time resolutions below 30 ps were obtained at the highest applied voltages for both doses before irradiation, as well as after a fluence of 3×10^14 n_eq/cm^2. Also, the time resolution was found to follow approximately a universal function of gain for all devices. This shows that, given a voltage stability good enough to reach a certain gain, different devices can reach similar time resolutions. However, at 10^15 n_eq/cm^2, the time resolution at the maximum applicable voltage of 620 V during the beam test was degraded to 57 ps, since the voltage stability was not good enough to compensate for the doping loss in the multiplication layer. Further investigations of the voltage stability and of the dependence on environmental conditions such as temperature and humidity are envisaged, as well as the addition of more fluence steps and other irradiation types.
These results demonstrate that thin LGADs are good candidates for HEP experiments with 10–30 ps timing requirements, since the one-layer time resolution obtained here can be improved using multiple layers, as demonstrated in reference <cit.>. LGADs are an option for experiments with moderate radiation levels of at least up to 3×10^14 n_eq/cm^2. Limitations can arise from the observed resolution degradation at about 10^15 n_eq/cm^2. Also here, the same method of using multiple layers could help to bring back a part of the performance: about 30 (20) ps could still be reached at that fluence using 4 (9) layers, as illustrated by the check below. Forward experiments like AFP or CT-PPS have the additional complication of non-uniform irradiation, which would make it necessary to apply different bias voltages to different parts of the detector to achieve a homogeneous timing performance throughout the whole device after irradiation. Hence, although the first results on irradiated LGADs obtained here are promising, it would eventually be desirable to make the LGAD technology intrinsically more radiation hard. Possible solutions being investigated include Gallium instead of Boron doping for the p-type multiplication layer, or Carbon enhancement <cit.>. First devices with these modifications have been produced at CNM and are under study <cit.>.
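The multi-layer numbers quoted above follow from the usual assumption of independent layers, so that the combined resolution scales as 1/√(N); a quick arithmetic check:

```python
import math

single_layer_ps = 57.0  # measured at 1e15 neq/cm^2
for n_layers in (4, 9):
    print(n_layers, "layers:", round(single_layer_ps / math.sqrt(n_layers)), "ps")
# -> 4 layers: 28 ps, 9 layers: 19 ps, matching the "about 30 (20) ps" above
```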
Furthermore, devices with an even thinner active area are being considered for investigation since, on the one hand, a better time resolution is generally expected and, on the other hand, the electric field, and hence the charge multiplication in the bulk region, is enhanced, especially after irradiation, making them potentially more radiation hard.
This work was partly performed in the frameworks of the ATLAS Forward Proton and the CERN RD50 collaborations. The authors wish to thank the CERN-SPS NA team for excellent beam and infrastructure support; DESY and Hamburg University (especially H. Jansen and M. Matysek) for providing the PCBs; and M. Moll, the CERN SSD group, E. Griesmayer and L. Paolozzi for providing, and helping with, the amplifiers. This work was partially funded by: the MINECO, Spanish Government, under grants FPA2013-48308-C2-1-P, FPA2014-55295-C3-2-R, FPA2015-69260-C3-2-R, FPA2015-69260-C3-3-R (co-financed with the European Union's FEDER funds) and SEV-2012-0234 (Severo Ochoa excellence programme), as well as under the Juan de la Cierva programme; the Spanish ICTS Network MICRONANOFABS, partially supported by MINECO; the Catalan Government (AGAUR): Grups de Recerca Consolidats (SGR 2014 1177); the Czech programmes IGA_Prf_2017_005 of Palacky University and MSMT INGO II č. LG15052; and the European Union's Horizon 2020 Research and Innovation programme under Grant Agreement no. 654168 (AIDA-2020). bib:UFSDproposal H. F.-W. Sadrozinski et al., Ultra-fast silicon detectors, http://dx.doi.org/10.1016/j.nima.2013.06.033 Nucl. Instrum. Meth. 730 (2013) 226 – 231. bib:4Dtracking N. Cartiglia et al., Tracking in 4 dimensions, http://dx.doi.org/10.1016/j.nima.2016.05.078 Nucl. Instrum. Meth. 845 (2017) 47 – 51. bib:AFPreference1 ATLAS Forward Proton Collaboration, Technical Design Report for the ATLAS Forward Proton Detector, Tech. Rep. CERN-LHCC-2015-009. ATLAS-TDR-024, CERN, Geneva, May, 2015. <https://cds.cern.ch/record/2017378>. bib:CTPPS CT-PPS Collaboration, CMS-TOTEM Precision Proton Spectrometer, Tech. Rep. CERN-LHCC-2014-021. TOTEM-TDR-003. CMS-TDR-13, Sep, 2014. <https://cds.cern.ch/record/1753795>. bib:CTPPStiming R. Arcidiacono on behalf of the CMS and TOTEM collaborations, A new timing detector for the CT-PPS project, http://dx.doi.org/10.1016/j.nima.2016.05.114 Nucl. Instrum. Meth. 845 (2017) 16 – 19. bib:HGTD L. Masetti et al., ATLAS High Granularity Timing Detector, January, 2017. <https://indico.desy.de/getFile.py/access?contribId=13 sessionId=8 resId=0 materialId=slides confId=16161>. 5th Beam Telescopes and Test Beams Workshop, Barcelona, Spain. bib:Lorenzo L. Paolozzi et al., 100 ps time resolution with thin silicon pixel detectors and a SiGe HBT amplifier, http://dx.doi.org/10.1088/1748-0221/11/03/P03011 JINST 11 (2016) P03011. bib:LGADDesignOpt N. Cartiglia et al., Design optimization of ultra-fast silicon detectors, http://dx.doi.org/10.1016/j.nima.2015.04.025 Nucl. Instrum. Meth. 796 (2015) 141 – 148. bib:NA62TB G. A. Rinella et al., Test-beam results of a silicon pixel detector with Time-over-Threshold read-out having ultra-precise time resolution, http://dx.doi.org/10.1088/1748-0221/10/12/P12016 JINST 10 (2015) P12016. bib:APD J. Hauger et al., A time-of-flight detector based on silicon avalanche diodes, http://dx.doi.org/10.1016/0168-9002(94)91104-5 Nucl. Instrum. Meth. 337 (1994) 362 – 369. bib:CMJoern J. Lange et al., Properties of a radiation-induced charge multiplication region in epitaxial silicon diodes, http://dx.doi.org/10.1016/j.nima.2010.07.036 Nucl. Instrum. Meth.
622 (2010) 49 – 58.bib:CMIgor I. Mandić, V. Cindro, G. Kramberger, and M. Mikuž, Measurement of anomalously high charge collection efficiency in n^+p strip detectors irradiated by up to 10^16 n_eq/cm^2, http://dx.doi.org/10.1016/j.nima.2009.01.207Nucl. Instrum. Meth. 603 (2009) 263 – 267.bib:CMGian G. Casse, A. Affolder, P. Allport, H. Brown, and M. Wormald, Enhanced efficiency of segmented silicon detectors of different thicknesses after proton irradiations up to 1×10^16 n_eq/cm^2, http://dx.doi.org/10.1016/j.nima.2010.02.134Nucl. Instrum. Meth. 624 (2010) 401 – 404.bib:LGAD G. Pellegrini et al., Technology developments and first measurements of Low Gain Avalanche Detectors (LGAD) for high energy physics applications, http://dx.doi.org/10.1016/j.nima.2014.06.008Nucl. Instrum. Meth. 765 (2014) 12 – 16.bib:RD50 RD50 - radiation hard semiconductor devices for very high luminosity colliders. <http://rd50.web.cern.ch/rd50/>.bib:UFSD300umTB H.-W. Sadrozinski et al., Ultra-fast silicon detectors (UFSD), http://dx.doi.org/10.1016/j.nima.2016.03.093Nucl. Instrum. Meth. 831 (2016) 18 – 23.bib:UFSD50umTBNicolo N. Cartiglia et al., Beam test results of a 16 ps timing system based on ultra-fast silicon detectors, http://dx.doi.org/10.1016/j.nima.2017.01.021Nucl. Instrum. Meth. 850 (2017) 83 – 88.bib:LGADradiationGregor G. Kramberger et al., Radiation effects in Low Gain Avalanche Detectors after hadron irradiations, http://dx.doi.org/10.1088/1748-0221/10/07/P07006JINST 10 (2015) P07006.bib:EPIcharge G. Kramberger et al., Charge collection properties of heavily irradiated epitaxial silicon detectors, http://dx.doi.org/10.1016/j.nima.2005.08.066Nucl. Instrum. Meth. 554 (2005) 212 – 219.bib:PDG Particle Data Group Collaboration, C. Patrignani et al., The Review of Particle Physics, http://dx.doi.org/10.1088/1674-1137/40/10/100001Chin. Phys. C 40 (2016) 100001.bib:GregorTrento G. Kramberger et al., Radiation hardness of thin LGAD detectors, February, 2017. <https://indico.cern.ch/event/587631/contributions/2471705/attachments/1414923/2165831/RadiationHardnessOfThinLGAD.pdf>. 12th Trento Workshop on Advanced Silicon Radiation Detectors, Trento, Italy.bib:CIVIDEC CIVIDEC Instrumentation GmbH, Schottengasse 3, 1010 Wien, Austria. <https://cividec.at>.bib:AFPbeamTests J. Lange et al., Beam tests of an integrated prototype of the ATLAS Forward Proton detector, http://dx.doi.org/10.1088/1748-0221/11/09/P09005JINST 11 (2016) P09005.bib:Particulars Particulars, advanced measurement systems, Dragomelj 154, SI-1230 Domzale, Slovenia. <http://www.particulars.si/products.php?prod=amplifiers.html>.bib:MarTrento M. Carulla et al., Last measurements and developments on LGAD detectors, February, 2017. <https://indico.cern.ch/event/587631/contributions/2471730/attachments/1415849/2167721/Trento_Meeting_21-02-2017.pdf>. 12th Trento Workshop on Advanced Silicon Radiation Detectors, Trento, Italy.
http://arxiv.org/abs/1703.09004v2
{ "authors": [ "J. Lange", "M. Carulla", "E. Cavallaro", "L. Chytka", "P. M. Davis", "D. Flores", "F. Förster", "S. Grinstein", "S. Hidalgo", "T. Komarek", "G. Kramberger", "I. Mandić", "A. Merlos", "L. Nozka", "G. Pellegrini", "D. Quirion", "T. Sykora" ], "categories": [ "physics.ins-det", "hep-ex" ], "primary_category": "physics.ins-det", "published": "20170327110024", "title": "Gain and time resolution of 45 $μ$m thin Low Gain Avalanche Detectors before and after irradiation up to a fluence of $10^{15}$ n$_{eq}$/cm$^2$" }
Control of the Black-Scholes equation Claire David December 30, 2023 =====================================Université Pierre et Marie Curie-Paris 6 Laboratoire Jacques Louis Lions - UMR 7598 Boîte courrier 187, 4 place Jussieu, F-75252 Paris cedex 05, France The purpose of this work is to apply the results developed by J.Y. Chemin and Cl. David <cit.>, <cit.>, to the Black-Scholes equation. Since this latter equation is directly linked to the heat equation, it enables us to propose a new approach that allows one to control properties of the solution by means of a shape parameter.
Key Words: Black-Scholes equation; control; shape parameters.
§ INTRODUCTION It is well known that the Black-Scholes (BS) model, which gives the dynamics of option prices in financial markets <cit.>, <cit.>, is given, in its usual arbitrage-free version, and in the specific case of a "call" for European options, by: ∂_t C + σ^2S^2/2∂^2 C/∂ S^2+ r( S∂_S C - C )= 0 where the price of the option, C, is a function of the underlying asset price S and time t; r is the risk-free interest rate, and σ the volatility of the stock. There are hundreds of papers dealing with the Black-Scholes equation. Yet, no one seems to have ever used the scaling invariance coming from the heat equation, and there appear to be very few studies on the "control aspect" of the equation (one can see, for instance, <cit.>). In <cit.>, J.Y. Chemin and Cl. David obtained new results for the mass critical nonlinear Schrödinger equation, building a continuous map from the Lebesgue space L^2(ℝ^2) into the set G of initial data which give birth to global solutions in the space L^4(ℝ^1+2). The principle is simple: one uses the fact that for this nonlinear equation, solutions of scales which are different enough almost do not interact. The nonlinearity of the original equation merely required determining a condition on the size of the scale, which depends continuously on the data. The idea is to apply the building technique of the map described above, starting from the fact that the Black-Scholes equation can be transformed into a linear heat equation, which has a natural scaling invariance, and that the linearity of the transformed equation enables one to obtain exact solutions and, thus, to yield interesting results. The aim of this paper is to present a new way to change either the Delta, or other Greeks, by a given amount; this technique thus appears as a new way to control the equation, without resorting to control functions. Indeed, one can change the Greeks directly using the general solution of the Black-Scholes equation; nevertheless, our technique provides an alternative that is simple to implement. Section <ref> is devoted to applying the Chemin-David technique to the Black-Scholes equation. The consistency of the approach with respect to initial conditions is examined in Section <ref>.
Section <ref> presents numerical results.
§ INTRODUCTION OF A SHAPE PARAMETER §.§ Classical results for the Black-Scholes equation If one denotes by E the exercise price of the option, for a study in the time interval [0,T], T > 0, the limit conditions are: {[ C(0, t) = 0 for all t; lim_S → + ∞ C(S, t) = S; C(S, T) = max { S - E, 0 } ]. The Black-Scholes equation can classically be transformed into a heat-diffusion equation, through the change of variables: τ = σ^2/2 (T-t), x = ln (S/E), setting: C(S,t) = E v(x,τ) and: v(x,τ) = e^α x+β τ u(x,τ), where: k = 2 r/σ^2, α = (1-k)/2, β = α^2+(k-1) α - k = -(k+1)^2/4, which lead to the normalized heat equation: ∂ u/∂τ = ∂^2 u/∂ x^2, ∀ (x,τ) ∈ ℝ×[0,σ^2 T/2], with the initial condition: u(x, 0) = C_0(x) = max { e^(k+1) x/2 - e^(k-1) x/2, 0 }. The classical analytical solution is then given by: C_classical (x, τ) = 1/(2 √(π τ)) ∫_-∞^∞ C_0 (y) exp[-(x - y)^2/(4 τ)] dy.
§.§ Our approach - Consistency with respect to initial conditions Let us recall the natural scaling of the heat equation (<ref>). If we denote by u a solution, then, for any strictly positive real number λ, the map (t,x) ↦ u_λ (t,x) = λ u(λ^2 t, λ x) is also a solution. Following J.Y. Chemin and Cl. David <cit.>, <cit.>, we define the mapping F, acting on initial data, by: F( C_0, λ, N_0 ) = C_0 + ϵ ∑_j=1^N_0 λ^-j C_0( λ^-j ·), ϵ ∈ { -1,1 }, N_0 ∈ ℕ^⋆, which takes its origin in the so-called "profile decomposition theory" initiated by P. Gérard and H. Bahouri <cit.>. It relies on the idea that two solutions of an evolution equation with scales that are different enough almost do not interact. An important question one may ask is whether our approach affects the required initial conditions (<ref>). The difficulty here lies in the fact that the functions at stake are not integrable on ℝ. This is the reason why we will work on L^2_loc(ℝ), more precisely, on L^2([0,S_0]), for S_0 ≥ 0. The main property of F is given by the proposition that follows. There exists a strictly positive constant λ_0 such that: λ≥λ_0 ⇒ ‖ F( C_0, λ, N_0 ) - C_0 ‖^2_L^2([0,S_0]) = o(1).
One has: [ ‖ F( C_0, λ, N_0 ) - C_0 ‖^2_L^2([0,S_0]) = ‖ ∑_j=1^N_0 λ^-j C_0( λ^-j ·) ‖^2_L^2([0,S_0]); = ∑_j=1^N_0 λ^-2j ‖ C_0( λ^-j ·) ‖^2_L^2([0,S_0]) + 2 ∑_1 ≤ j<k ≤ N_0 λ^-j-k ( C_0( λ^-j ·), C_0( λ^-k ·) )_L^2([0,S_0]); ≤ ( ∑_j=1^N_0 λ^-j ) ‖ C_0 ‖^2_L^2([0,S_0]) + 2 ∑_1 ≤ j<k ≤ N_0 λ^-j-k ( C_0( λ^-j ·), C_0( λ^-k ·) )_L^2([0,S_0]); = λ^-1 (1-λ^-N_0)/(1-λ^-1) ‖ C_0 ‖^2_L^2([0,S_0]) + 2 ∑_1 ≤ j<k ≤ N_0 λ^-j-k ( C_0( λ^-j ·), C_0( λ^-k ·) )_L^2([0,S_0]) ], due to: [ λ^-2j ‖ C_0( λ^-j ·) ‖^2_L^2([0,S_0]) = λ^-2j ∫_0^S_0 C_0^2( λ^-j S) dS = λ^-2j ∫_0^{λ^-j S_0} C_0^2(u) λ^j du ≤ λ^-j ∫_0^S_0 C_0^2(u) du = λ^-j ‖ C_0 ‖^2_L^2([0,S_0]) ], and, for any pair of integers (j,k) such that j < k, provided that the scaling factor λ is great enough, the pseudo-orthogonality, or the fact that scales that are different enough almost do not interact, yields: [ λ^-j-k ( C_0( λ^-j ·), C_0( λ^-k ·) )_L^2([0,S_0]) = λ^-k ∫_0^{λ^-j S_0} C_0(u) C_0( λ^-(k-j) u) du = o(1) ]. For any strictly positive real number ε, one can easily find the threshold value λ_0 such that: ∀ λ > λ_0 : λ^-1 (1-λ^-N_0)/(1-λ^-1) ‖ C_0 ‖^2_L^2([0,S_0]) ≤ ε.
The mapping F shows that the control depends on N_0 and λ. As λ increases, smaller values of the integer N_0 will be required: N_0 ln λ ≤ - ln ( 1 - ε λ (1-λ^-1)/‖ C_0 ‖^2_L^2([0,S_0]) ).
The illustration of the above theoretical results can be seen through the following numerical results.
We hereafter display the variations of:
⇝ the initial condition function C_0 given by (<ref>), in red;
⇝ the initial condition function C_0,λ given by: C_0,λ(x) = C_0(x) + ϵ ∑_j=1^N_0 C_0,λ,j(x) = C_0(x) + ϵ ∑_j=1^N_0 (1/λ^j) C_0(x/λ^j) = C_0(x) + ϵ ∑_j=1^N_0 (1/λ^j) max { e^(k+1) x/(2 λ^j) - e^(k-1) x/(2 λ^j), 0 }, in green;
in the case where: r = 0.06, σ = 0.3, E = 100. If one chooses values of λ greater than or equal to 10, the required initial conditions (<ref>) are not affected.
§.§ Analytic results As previously, we consider, for λ > λ_0, initial data of the form: C_0,λ(x) = C_0(x) + ϵ ∑_j=1^N_0 C_0,λ,j(x) = C_0(x) + ϵ ∑_j=1^N_0 (1/λ^j) max { e^(k+1) x/(2 λ^j) - e^(k-1) x/(2 λ^j), 0 }. The related exact analytic solution C, which is a function of x, τ, and of the scaling parameter λ, is given by: C(x,τ,λ) = C_classical (x, τ) + ϵ (1/(2 √(π τ))) ∫_-∞^+∞ ∑_j=1^N_0 C_0,λ,j(y) e^-(x-y)^2/(4 τ) dy.
For any integer j in {1, …, N_0}, setting z = (y-x)/(2√(τ)) and completing the square: [ (1/(2 √(π τ))) ∫_-∞^+∞ C_0,λ,j(y) e^-(x-y)^2/(4 τ) dy = (1/(√(π) λ^j)) ∫_{-x/(2 √(τ))}^{+∞} ( e^(k+1)(x+2 z √(τ))/(2 λ^j) - e^(k-1)(x+2 z √(τ))/(2 λ^j) ) e^-z^2 dz; = (e^{(k+1) x/(2 λ^j) + (k+1)^2 τ/(4 λ^{2j})}/(√(π) λ^j)) ∫_{-x/(2 √(τ))}^{+∞} e^{-(z-(k+1)√(τ)/(2 λ^j))^2} dz - (e^{(k-1) x/(2 λ^j) + (k-1)^2 τ/(4 λ^{2j})}/(√(π) λ^j)) ∫_{-x/(2 √(τ))}^{+∞} e^{-(z-(k-1)√(τ)/(2 λ^j))^2} dz; = (e^{(k+1) x/(2 λ^j) + (k+1)^2 τ/(4 λ^{2j})}/(2 λ^j)) Erf_c( -x/(2 √(τ)) - (k+1)√(τ)/(2 λ^j) ) - (e^{(k-1) x/(2 λ^j) + (k-1)^2 τ/(4 λ^{2j})}/(2 λ^j)) Erf_c( -x/(2 √(τ)) - (k-1)√(τ)/(2 λ^j) ); = (1/λ^j) e^{(k+1) x/(2 λ^j) + (k+1)^2 τ/(4 λ^{2j})} N( (x + (k+1)τ/λ^j)/√(2τ) ) - (1/λ^j) e^{(k-1) x/(2 λ^j) + (k-1)^2 τ/(4 λ^{2j})} N( (x + (k-1)τ/λ^j)/√(2τ) ) ], where the complementary error function Erf_c is defined, for any real number x, by: Erf_c(x) = (2/√(π)) ∫_x^{+∞} e^{-t^2} dt, while the normal (Gaussian) cumulative distribution function is given, for any real number d, by: N(d) = (1/√(2π)) ∫_{-∞}^d e^{-t^2/2} dt = (√(2)/√(2π)) ∫_{-d/√(2)}^{+∞} e^{-t^2} dt = (1/2) Erf_c(-d/√(2)).
Thus: [ C(x,τ,λ) = C_classical (x, τ) + ϵ ∑_j=1^N_0 (e^{(k+1) x/(2 λ^j) + (k+1)^2 τ/(4 λ^{2j})}/(2 λ^j)) Erf_c( -x/(2 √(τ)) - (k+1)√(τ)/(2 λ^j) ) - ϵ ∑_j=1^N_0 (e^{(k-1) x/(2 λ^j) + (k-1)^2 τ/(4 λ^{2j})}/(2 λ^j)) Erf_c( -x/(2 √(τ)) - (k-1)√(τ)/(2 λ^j) ) ].
It is interesting to note that: [ C(x,τ,λ) = C_classical (x, τ) + ϵ ∑_j=1^N_0 (1/λ^j) C_classical (x/λ^j, τ/λ^{2j}) ], and to notice, thus, that it includes, as expected, the natural scaling of the heat equation (<ref>). One easily goes back to the call function C through C(S,t) = E v(x,τ) = E e^{α x+β τ} C(x,τ,λ), with x = ln (S/E).
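The closed-form solution above is straightforward to evaluate numerically. The following is a minimal sketch (helper names are ours, not the paper's); the erfc arguments are written in the sign convention that reproduces the standard Black-Scholes call price in the limit ϵ = 0 (equivalently N_0 = 0), which provides a convenient sanity check.

```python
import numpy as np
from scipy.special import erfc

def u_classical(x, tau, k):
    """Heat-equation solution C_classical for the transformed call payoff."""
    d = -x / (2.0 * np.sqrt(tau))
    plus = np.exp(0.5 * (k + 1) * x + 0.25 * (k + 1)**2 * tau) \
        * erfc(d - 0.5 * (k + 1) * np.sqrt(tau))
    minus = np.exp(0.5 * (k - 1) * x + 0.25 * (k - 1)**2 * tau) \
        * erfc(d - 0.5 * (k - 1) * np.sqrt(tau))
    return 0.5 * (plus - minus)

def u_scaled(x, tau, k, lam, n0, eps=1):
    """C(x, tau, lambda): classical solution plus the rescaled copies."""
    total = u_classical(x, tau, k)
    for j in range(1, n0 + 1):
        total += eps * lam**(-j) * u_classical(x / lam**j, tau / lam**(2 * j), k)
    return total

def call_price(S, t, E, r, sigma, T, lam, n0, eps=1):
    """Go back to the call C(S, t) through x = ln(S/E)."""
    k = 2.0 * r / sigma**2
    alpha, beta = 0.5 * (1.0 - k), -0.25 * (k + 1.0)**2
    x, tau = np.log(S / E), 0.5 * sigma**2 * (T - t)
    return E * np.exp(alpha * x + beta * tau) * u_scaled(x, tau, k, lam, n0, eps)
```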
§ RESULTS In finance, the sensitivity of a portfolio to changes in parameter values can be measured through what are commonly called "the Greeks", i.e.: i. the Delta Δ = ∂ C/∂ S, which enables one to quantify the risk, and is thus the most important Greek. ii. The Gamma Γ = ∂^2 C/∂ S^2 ≥ 0. iii. The Vega (the name of which comes from the form of the Greek letter ν) ν = ∂ C/∂σ. iv. The Theta Θ = ∂ C/∂ t. v. The rho ρ = ∂ C/∂ r. The good strategy, for traders, is to have delta-neutral positions at least once a day, and, whenever the opportunity arises, to improve the Gamma and the Vega <cit.>.
§.§ Control of the Delta and Gamma To test our approach, we have chosen to compare, first:
⇝ the classical Delta Δ_classical and Gamma Γ_classical;
⇝ the ones of our approach.
Due to the decomposition [ C(x,τ,λ) = C_classical (x, τ) + ϵ ∑_j=1^N_0 (1/λ^j) C_classical (x/λ^j, τ/λ^{2j}) ], the change of variables x = ln (S/E) leads to: [ ∂ C/∂ S = Δ_classical (x, τ) + ϵ (E/S) ∑_j=1^N_0 (1/λ^j) ∂/∂ x [ C_classical (x/λ^j, τ/λ^{2j}) ] = Δ_classical (x, τ) + ϵ ∑_j=1^N_0 (1/λ^{2j}) Δ_classical (x/λ^j, τ/λ^{2j}) ], where we have set: Δ_classical(x, τ) = Δ_classical(S,t) = ∂ C/∂ S. It appears thus that:
⇝ for ϵ = 1, one can increase the Delta;
⇝ for ϵ = -1, one can decrease the Delta.
In practice, it seems interesting to determine a suitable value λ_0 of the shape parameter λ such that: Δ = (1+ϵ η_0) Δ_classical, η_0 ∈ ]0,1[. This can be achieved through a series expansion in λ^{-1}. One has: [ Δ_classical (x,τ) = (E/S) ∂/∂ x [ C(x,τ) ]; = (E/S) ((k+1)/2) (e^{(k+1) x/2 + (k+1)^2 τ/4}/2) Erf_c( -x/(2 √(τ)) - (k+1)√(τ)/2 ) + (1/(2 √(τ))) (E/S) (e^{(k+1) x/2 + (k+1)^2 τ/4}/√(π)) e^{-(-x/(2 √(τ)) - (k+1)√(τ)/2)^2} - (E/S) ((k-1)/2) (e^{(k-1) x/2 + (k-1)^2 τ/4}/2) Erf_c( -x/(2 √(τ)) - (k-1)√(τ)/2 ) - (1/(2 √(τ))) (E/S) (e^{(k-1) x/2 + (k-1)^2 τ/4}/√(π)) e^{-(-x/(2 √(τ)) - (k-1)√(τ)/2)^2} ], and, therefore: [ Δ_classical (x/λ^j, τ/λ^{2j}) = (E/S) ((k+1)/2) (e^{(k+1) x/(2λ^j) + (k+1)^2 τ/(4λ^{2j})}/2) Erf_c( -x/(2 λ^j √(τ/λ^{2j})) - (k+1)√(τ)/(2 λ^j) ) + (λ^j/(2 √(τ))) (E/S) (e^{(k+1) x/(2λ^j) + (k+1)^2 τ/(4λ^{2j})}/√(π)) e^{-(-x/(2 √(τ)) - (k+1)√(τ)/(2 λ^j))^2} - (E/S) ((k-1)/2) (e^{(k-1) x/(2λ^j) + (k-1)^2 τ/(4λ^{2j})}/2) Erf_c( -x/(2 λ^j √(τ/λ^{2j})) - (k-1)√(τ)/(2 λ^j) ) - (λ^j/(2 √(τ))) (E/S) (e^{(k-1) x/(2λ^j) + (k-1)^2 τ/(4λ^{2j})}/√(π)) e^{-(-x/(2 √(τ)) - (k-1)√(τ)/(2 λ^j))^2} ].
One then requires the series expansion of the Erf_c function at 0, which is given, for z ∈ ℝ, by: [ Erf_c (z) = 1 - (2/√(π)) ∑_n=0^{+∞} (-1)^n z^{2n+1}/((2n+1) n!) ]. Expanding each term λ^{-2j} Δ_classical(x/λ^j, τ/λ^{2j}) in powers of λ^{-j}, one obtains thus, for a given set (x,τ), or, in an equivalent way, (S,t), and a given value η_0, an equation in λ that can be solved numerically.
In the same way as above, the change of variables x = ln (S/E) leads to: [ ∂^2 C/∂ S^2 = Γ_classical (x, τ) + ϵ ∑_j=1^N_0 (1/λ^{3j}) Γ_classical (x/λ^j, τ/λ^{2j}) ]. It appears thus that:
⇝ for ϵ = 1, one can increase the Gamma;
⇝ for ϵ = -1, one can decrease the Gamma.
The interesting point is that, due to the relation (<ref>): [ ∂/∂ S [Δ - Δ_classical] = ϵ ∑_j=1^N_0 (1/λ^{3j}) Γ_classical (x/λ^j, τ/λ^{2j}) ], this expression being of the same sign as ϵ. Hence:
⇝ for ϵ = 1, the positive quantity Δ - Δ_classical is an increasing function of the asset stock price S;
⇝ for ϵ = -1, the negative quantity Δ - Δ_classical is a decreasing function of the asset stock price S.
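The equation in λ mentioned above can also be solved directly by root finding. Here is a hedged sketch, assuming the call_price routine from the previous sketch: the Delta is taken by central finite differences, the 60-day maturity is expressed in years, and the search bracket is an illustrative assumption (brentq requires a sign change of the residual inside it).

```python
from scipy.optimize import brentq

def delta(S, t, lam, n0, eps=1, E=100.0, r=0.06, sigma=0.3, T=60.0 / 365.0, h=1e-3):
    """Central finite-difference Delta of the modified call price."""
    up = call_price(S + h, t, E, r, sigma, T, lam, n0, eps)
    down = call_price(S - h, t, E, r, sigma, T, lam, n0, eps)
    return (up - down) / (2.0 * h)

def find_lambda0(S, t, eta0, eps=1, n0=5, bracket=(1.05, 50.0)):
    """Solve Delta(lambda) = (1 + eps*eta0) * Delta_classical for lambda."""
    target = (1.0 + eps * eta0) * delta(S, t, lam=2.0, n0=0)  # n0 = 0: classical Delta
    return brentq(lambda lam: delta(S, t, lam, n0, eps) - target, *bracket)
```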
It appears thus that if one finds a value λ_0 of the scale parameter such that
⇝ Δ = (1+η_0) Δ_classical, η_0 ∈ ]0,1[, one will increase the Delta on the study interval;
⇝ Δ = (1-η_0) Δ_classical, η_0 ∈ ]0,1[, one will decrease the Delta on the study interval.
§.§ Numerical results We present, in the following, a few numerical results. Tests have been made for: r = 0.06, σ = 0.3, E = 100, T = 60 days. The following figures display the graph of the difference Δ-Δ_classical as a function of stock price S and time t, for the values of the shape parameter λ such that, at the time T/2, for S=100:
⇝ Δ-Δ_classical = 0.15, which leads to the choice λ = 2.37163;
⇝ Δ-Δ_classical = 0.25, which leads to the choice λ = 1.91079;
⇝ Δ-Δ_classical = 0.35, which leads to the choice λ = 1.67543.
CheminEtClaire1 Chemin, J.Y., David, Cl., Sur la construction de grandes solutions pour des équations de Schrödinger de type "masse critique", Séminaire Laurent Schwartz - EDP et applications, 2013, in press. CheminEtClaire2 Chemin, J.Y., David, Cl., From an initial data to a global solution of the nonlinear Schrödinger equation: a building process, International Mathematics Research Notices, 2015, rnv199, doi:10.1093/imrn/rnv199. BlackScholes Black, F., & Scholes, M., The Pricing of Options and Corporate Liabilities, J. Pol. Econ., 81, 1973, 637-659. Merton Merton, R.C., Theory of Rational Option Pricing, Bell J. Econ. and Management Sci., 4, 1973, 141-183. FreyPolte Frey, R., & Polte, U., Nonlinear Black-Scholes equations in finance: associated control problems and properties of solutions, SIAM J. Control Optim., 49(1), 2011, 185-204. bahourigerard Bahouri, H., & Gérard, P., High frequency approximation of solutions to critical nonlinear wave equations, American Journal of Math., 121, 1999, 131-175.
http://arxiv.org/abs/1703.09206v1
{ "authors": [ "Claire David" ], "categories": [ "math.AP" ], "primary_category": "math.AP", "published": "20170327174809", "title": "Control of the Black-Scholes equation" }
Data Science Institute, Imperial College London National Heart & Lung Institute, Imperial College London St George's Hospital, University of London The Deep Poincaré Map: A Novel Approach for Left Ventricle Segmentation Yuanhan Mo1 Fangde Liu1 Douglas McIlwraith1 Guang Yang2 Jingqing Zhang1 Taigang He3 Yike Guo1 December 30, 2023 ================================================================================================ Precise segmentation of the left ventricle (LV) within cardiac MRI images is a prerequisite for the quantitative measurement of heart function. However, this task is challenging due to the limited availability of labeled data and motion artifacts from cardiac imaging. In this work, we present an iterative segmentation algorithm for LV delineation. By coupling deep learning with a novel dynamic-based labeling scheme, we present a new methodology where a policy model is learned to guide an agent to travel over the image, tracing out a boundary of the ROI – using the magnitude difference of the Poincaré map as a stopping criterion. Our method is evaluated on two datasets, namely the Sunnybrook Cardiac Dataset (SCD) and data from the STACOM 2011 LV segmentation challenge. Our method outperforms previous research on many metrics. In order to demonstrate the transferability of our method, we present encouraging results on the STACOM 2011 data when using a model trained on the SCD dataset.
§ INTRODUCTION The first passage time (FPT) is the earliest time at which a trajectory of a stochastic process initially inside a bounded region leaves the region. Automatic left ventricle (LV) segmentation from cardiac MRI images is a prerequisite to quantitatively measure cardiac output and perform functional analysis of the heart. However, this task is still challenging due to the requirement for relatively large manually delineated datasets when using statistical shape models or (multi-)atlas based methods. Moreover, as the heart and chest are constantly in motion, the resulting images may contain motion artifacts with a low signal-to-noise ratio. Such poor quality images can further complicate the subsequent LV segmentation. Deep learning based methods have proved effective for LV segmentation <cit.>. A detailed survey of the state-of-the-art lies outside the scope of this paper, but can be found elsewhere <cit.>. Such approaches are often based on, or extend, image recognition research, and thus require large training datasets that are not always available for cardiac MRI. To the best of our knowledge, there is very limited work using significant prior information to reduce the amount of training data required while maintaining a robust performance for LV segmentation. In this paper, we propose a novel LV segmentation method called the Deep Poincaré Map (DPM). Our DPM method encapsulates prior information with a dynamical system employed for labeling. Deep learning is then used to learn a displacement policy for traversal around the region of interest (ROI). Given an image, a CNN-based policy model can navigate an agent over the cardiac MRI image, moving toward a path which outlines the LV. At each time step, a next-step policy (a 2D displacement) is given by our trained policy model, taking into account the surrounding pixels in a local square patch. In order to learn the displacement policy, the DPM requires a data transformation step which converts the labeled images into a customized dynamic capturing the prior information around the ROI. An important property of the DPM is that no matter where the agent starts, it will eventually travel around the ROI.
This behavior is guaranteed by the existence of a limit cycle under our customized dynamic. The main contributions of this work are as follows. (1) The DPM integrates prior information in the form of the context of the image surrounding the ROI. It does this by combining a dynamical system with a deep learning method for building a displacement policy model, and thus requires much less data than traditional deep learning methods. (2) The DPM is rotationally invariant. Because our next-step policy predictor is trained with locally oriented patches, the orientation of the image with respect to the ROI is irrelevant. (3) The DPM is strongly transferable. Because the context of the segmentation boundary is considered, our method generalizes well to previously unseen images with the same or similar contexts.
§ METHODOLOGY As shown in Fig <ref>, the DPM uses a CNN-based policy model, trained on locally oriented patches from manually segmented data, to navigate an agent over a cardiac MRI image (256x256) using a locally oriented square patch (64x64) as its input. The agent creates a trajectory over the image tracing the boundary of the LV – no matter where the agent starts on the image. A crucial prerequisite of this methodology is the creation of a vector field whose limit cycle is equal to the boundary surrounding the ROI. This can be seen in Fig <ref>. In the following sections we will discuss the DPM methodology in detail, namely 1) the creation of a customized dynamic (i.e. a vector field) with a limit cycle around the ROI of the manually delineated images; 2) the creation of a patch-policy predictor; 3) the stopping criterion using the Poincaré map.
§.§ Generating a Customized Dynamic A typical training dataset for segmentation consists of many image-to-label pairs. A label is a binary map that has the same resolution as its corresponding image. In each label, pixels of ground truth will be set to 1 while the background will be set to 0. In our system, by contrast, we first construct a customized dynamic (a vector field) for each labeled training instance. The constructed dynamic results in a unique limit cycle which is placed exactly on the boundary of the ROI. To illustrate, let us consider the example indicated in step (a) in Fig <ref>. Considering a label of a training instance as a continuous 2D space ℝ^2 (a label with theoretically infinite resolution), we define the ground truth contour as a subspace Ω⊆ℝ^2, as shown in step (a) in Fig <ref>. To construct a dynamic in ℝ^2 where a limit cycle exists and is exactly the boundary ∂Ω, we first introduce the distance function S(p): S(p) = { d(p, ∂Ω) if p is not on ∂Ω; 0 if p is on ∂Ω }. Here, d(p, ∂Ω) denotes the infimum Euclidean distance from p to the boundary ∂Ω. Eqt. <ref> is used to create a scalar field from a binary image, as shown in step (b) in Fig <ref>. In order to build the customized dynamic, we need to create a vector field from this scalar field. A gradient operator is applied to create a dynamic equivalent to the active contour <cit.>, as shown in step (c) in Fig <ref>. This gradient operator is expressed as Eqt. <ref>: dp/dt = ∇_p S(p). Our final step adds a limit cycle onto the system by gradually rotating the vectors according to the distance between each pixel and the boundary, as shown in Fig <ref>. The rotation function is given by R(θ): R(θ) = [ cosθ, -sinθ; sinθ, cosθ ], where θ is defined by Eqt. <ref>: θ = π(1 - 𝐬𝐢𝐠𝐦𝐨𝐢𝐝(S(p))). Putting Eqt. <ref> and Eqt. <ref> together, we obtain Eqt. <ref>: dp/dt = R(θ) ∇_p S(p).
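As a concrete illustration, the following is a minimal, direct transcription of this construction (variable names are ours, not the paper's): S(p) is approximated by the Euclidean distance transform of a binary ROI mask, and the field is the gradient of that scalar field rotated by θ = π(1 − sigmoid(S)).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def customized_dynamic(mask):
    """mask: 2D boolean array, True inside the ROI. Returns (vx, vy)."""
    # Unsigned distance to the ROI boundary: distance to the background for
    # inside pixels, distance to the ROI for outside pixels.
    dist = np.minimum(distance_transform_edt(mask),
                      distance_transform_edt(~mask))
    gy, gx = np.gradient(dist)                            # grad_p S(p)
    theta = np.pi * (1.0 - 1.0 / (1.0 + np.exp(-dist)))   # Eqt. for theta
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    vx = cos_t * gx - sin_t * gy                          # apply R(theta)
    vy = sin_t * gx + cos_t * gy
    return vx, vy
```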
Eqt. <ref> has an important property: when p ∈∂Ω, S(p) = 0, so that θ is equal to π/2 according to Eqt. <ref>. This means that, on the boundary, the direction of dp/dt is tangent to ∂Ω at p, as shown in step (d) in Fig <ref>.
As opposed to active contour methods <cit.>, where the dynamic is generated from images, we generate the discretized version of Eqt. <ref> for each label. Then, a vector field is generated from it for each training instance, with the property that the limit cycle of the field is the boundary of the ROI. This process generates a set of tuples (image, label, dynamic). That is, for each cardiac image, we have its associated binary label image and its corresponding vector field. In the next subsection, we introduce the methodology to learn a CNN which maps an image patch to a vector from our vector field (Fig <ref>). This allows us to create an agent which follows step-by-step displacement predictions.
§.§ Creating a Patch-Policy Predictor using a CNN §.§.§ Training Our CNN operates over patches which are oriented with respect to our created dynamic. In order to prepare data for training, for each training image we randomly choose a pre-defined proportion of points acting as the centers of rectangular sampling patches. We define a sampling direction which is equal to the velocity vector of the associated point. For example, for a given position (x_0,y_0) on the image, its velocity (δ x, δ y) in the corresponding vector field is defined as the sampling direction, as shown in Fig <ref>. In the training process, such vectors are easily accessible; however, they must be predicted during inference (see Subsection <ref>). It is worth noting that a coordinate transformation is required to convert the velocity from the coordinate system of the dynamic to that of the patch, as illustrated in Fig <ref>. In order to improve robustness, training data augmentation can be performed by adding symmetric offsets to the sampling directions (e.g. ±45°). Our CNN is based on the AlexNet architecture <cit.> with two output neurons. During training we use the Adam optimizer with the mean squared error (MSE) loss.
§.§.§ Inference At the inference stage, before the first time step t=0, we determine an initial, rough starting point using a basic LV detection module and a random sampling direction. This ensures that we don't start on an image boundary, where there is insufficient input to create the first 64x64 pixel patch, and that we have an initial sampling direction. At each step, given a position p_t and a sampling direction s_t of the agent (which is unknown at inference time and is thus inferred from the difference between the current position and the previous one), a local patch is extracted and used as the input to the CNN-based policy model. The policy model then predicts the displacement for the agent to move, which in turn leads to the next local patch sample. This process iterates until the limit cycle is reached, as illustrated earlier (Fig <ref>).
§.§ Stopping Criterion: The Poincaré Map Instead of identifying the periodic orbit (the limit cycle) from the trajectory itself, we introduce the Poincaré section <cit.>, which is a hyperplane, Σ, transversal to the trajectory. This cuts through the trajectory of the vector field, as seen in Fig <ref>. The stability of a periodic orbit in the image can be reflected by the succession of corresponding intersection points in Σ (a lower-dimensional space).
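Anticipating the convergence rule made precise below, the following is a minimal sketch of how the section Σ can be used in practice (names are ours): the agent's trajectory is monitored for one-directional crossings of a fixed vertical section, and tracing stops once two successive crossing points are closer than a tolerance. Here `policy` stands for the trained CNN returning a 2D displacement for a local patch, and `extract_patch` for the locally oriented patch sampler; both are assumed callables.

```python
import numpy as np

def trace(policy, extract_patch, p0, sigma_x, tol=0.5, max_steps=5000):
    """Iterate the agent; stop when successive crossings of Sigma converge."""
    p, crossings = np.asarray(p0, dtype=float), []
    for _ in range(max_steps):
        p_new = p + policy(extract_patch(p))
        if p[0] < sigma_x <= p_new[0]:                 # one-directional crossing
            frac = (sigma_x - p[0]) / (p_new[0] - p[0])
            crossings.append(p[1] + frac * (p_new[1] - p[1]))
            if len(crossings) >= 2 and abs(crossings[-1] - crossings[-2]) < tol:
                break                                  # Poincare map has converged
        p = p_new
    return p, crossings
```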
The Poincaré map is the function that maps each intersection point to the next one; thus, when the distance between successive intersection points becomes small enough, we may say that the procession of the agent in the image has converged to the boundary (the limit cycle). The convergence of the customized dynamic has been studied using the Poincaré-Bendixson theorem <cit.>; however, the details are beyond the scope of this paper.

§ EXPERIMENTAL SETTING AND RESULTS In this study, we evaluate our method on (1) the Sunnybrook Cardiac Dataset (SCD) <cit.>, which contains 45 cases, and (2) the STACOM 2011 LV Segmentation Challenge, which contains 100 cases. SCD Dataset: The DPM was trained on the given training subset. We applied our trained model to the validation and online subsets (800 images from 30 cases in total) to provide a fair comparison with previous research, and we present our findings in Table <ref>. We report the Dice score, average perpendicular distance (APD, in millimeters) and `good' contour rate (Good) for both the endocardium (i) and epicardium (o). We obtained a mean Dice score of 0.94 with a mean sensitivity of 0.95 and a mean specificity of 1.00. Transferability to the STACOM 2011 Dataset: To demonstrate the strong transferability of our method we train on the training subset of the SCD dataset and test on the STACOM 2011 dataset. We performed myocardium segmentation by segmenting the endocardium and epicardium separately, using 100 randomly selected MRI images from 100 cases. We report the Dice index, sensitivity, specificity, and positive and negative predictive values (PPV and NPV) in Table <ref>. We obtained a mean Dice index of 0.74 with a mean sensitivity of 0.84 and a mean specificity of 0.99.

§ CONCLUSION In this paper we have presented the Deep Poincaré Map as a novel method for LV segmentation and demonstrated its promising performance. The developed DPM method is robust for medical images, which have limited spatial resolution, low SNR and indistinct object boundaries. By encoding prior knowledge of a ROI as a customized dynamic, fine-grained learning is achieved, resulting in a displacement policy model for iterative segmentation. This approach requires much less training data than traditional methods. The strong transferability and rotational invariance of the DPM can also be attributed to this patch-based policy learning strategy. These two advantages are crucial for clinical applications.

§ ACKNOWLEDGEMENT Yuanhan Mo is sponsored by Sultan Bin Khalifa International Thalassemia Award. Guang Yang is supported by the British Heart Foundation Project Grant (PG/16/78/32402). Jingqing Zhang is supported by LexisNexis HPCC Systems Academic Program. Thanks to TensorLayer Community.
http://arxiv.org/abs/1703.09200v2
{ "authors": [ "Yuanhan Mo", "Fangde Liu", "Douglas McIlwraith", "Guang Yang", "Jingqing Zhang", "Taigang He", "Yike Guo" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170327173733", "title": "The Deep Poincaré Map: A Novel Approach for Left Ventricle Segmentation" }
European Commission, Joint Research Centre (JRC), Directorate for Nuclear Safety and Security, Postfach 2340, D-76125 Karlsruhe, Germany
European Synchrotron Radiation Facility (ESRF), B.P.220, F-38043 Grenoble, France
European Commission, Joint Research Centre (JRC), Directorate for Nuclear Safety and Security, Postfach 2340, D-76125 Karlsruhe, Germany
European Commission, Joint Research Centre (JRC), Directorate for Nuclear Safety and Security, Postfach 2340, D-76125 Karlsruhe, Germany
European Commission, Joint Research Centre (JRC), Directorate for Nuclear Safety and Security, Postfach 2340, D-76125 Karlsruhe, Germany
European Commission, Joint Research Centre (JRC), Directorate for Nuclear Safety and Security, Postfach 2340, D-76125 Karlsruhe, Germany
European Commission, Joint Research Centre (JRC), Directorate for Nuclear Safety and Security, Postfach 2340, D-76125 Karlsruhe, Germany
Toyota Physical and Chemical Research Institute, Nagakute, Aichi 480-1192, Japan
European Commission, Joint Research Centre (JRC), Directorate for Nuclear Safety and Security, Postfach 2340, D-76125 Karlsruhe, Germany

We have performed high-resolution powder x-ray diffraction measurements on a sample of ^242PuCoGa_5, the heavy-fermion superconductor with the highest critical temperature T_c = 18.7 K. The results show that the tetragonal symmetry of its crystallographic lattice is preserved down to 2 K. Marginal evidence is obtained for an anomalous behaviour below T_c of the a and c lattice parameters. The observed thermal expansion is isotropic down to 150 K, and becomes anisotropic for lower temperatures. This gives a c/a ratio that decreases with increasing temperature to become almost constant above ∼150 K. The volume thermal expansion coefficient α_V has a jump at T_c, a factor ∼20 larger than the change predicted by the Ehrenfest relation for a second order phase transition. The volume expansion deviates from the curve expected for the conventional anharmonic behaviour described by a simple Grüneisen-Einstein model. The observed differences are about ten times larger than the statistical error bars but are too small to be taken as an indication for the proximity of the system to a valence instability that is avoided by the superconducting state.

Thermal Expansion of the Heavy-fermion Superconductor PuCoGa_5 R. Caciuffo December 30, 2023 ==============================================================

§ INTRODUCTION PuCoGa_5 has the highest T_c (18.7 K) of any heavy-fermion superconductor. Fifteen years after its discovery <cit.> our understanding of much of this material remains at best confused <cit.>. We do know from NMR <cit.> and point-contact spectroscopy <cit.> measurements that the superconducting state has d-wave symmetry. Magnetic form-factor measurements with polarized neutron diffraction <cit.> have shown that the ground state is not the conventional 5f^5 state found in many Pu intermetallics. Neutron inelastic scattering has failed to detect any sign of a resonance as found, for example, in the isostructural CeCoIn_5 compound <cit.>, which also has d-wave symmetry, although the difficulty of performing these neutron experiments on Pu should not be overlooked. Recent theoretical efforts <cit.> have concluded that the driving mechanism for superconductivity is valence fluctuations.
Electronic structure calculations combining the local-density approximation with an exact diagonalization of the Anderson impurity model <cit.> show an intermediate 5f^5-5f^6-valence ground state and delocalization of the 5f^5 multiplet of the Pu atom 5f shell. The 5f local magnetic moment is compensated by a moment formed in the surrounding cloud of conduction electrons, leading to a singlet Anderson impurity ground state. The presence of valence fluctuations has been recently suggested by resonant ultrasound spectroscopy measurements, showing that the three compressional elastic moduli exhibit anomalous softening upon cooling, which is truncated at the superconducting transition <cit.>. These results have been interpreted as evidence for a valence transition at a T_V < T_c that is avoided by the superconducting state, suggesting that PuCoGa_5 is near a critical end point involved in the unconventional superconductivity <cit.>. However, the identification of the fluctuating order parameter responsible for the observed anomalous softening requires information on the thermal expansion of the lattice, which is not available. Moreover, crystallographic studies at low temperature (T) have not yet been performed, so that the occurrence of a lattice distortion at T_c (or above T_c) has not been verified. These are the issues that we have addressed by performing high-resolution x-ray diffraction measurements in the T range between 2 and 300 K on a defect-free polycrystalline sample of ^242PuCoGa_5. The results of our investigation show that no measurable structural distortion is associated with the stabilization of the superconducting phase. The thermal expansion is isotropic down to 150 K, and anisotropic for lower temperatures, which is not surprising for a superconductor with an order parameter of d-wave symmetry. The T dependence of both a and c lattice parameters shows small anomalies at T_c and a behavior that deviates from the one expected from the simple quasi-harmonic approximation commonly used to describe the thermal expansion in solids. However, no convincing evidence is found for an incipient valence transition of the Pu electronic configuration associated with the formation and condensation of Cooper pairs.

§ EXPERIMENTAL DETAILS AND RESULTS The experiment was performed at the ID22 beamline of the European Synchrotron Radiation Facility (ESRF) in France. Data have been collected on a sample obtained by crushing a single crystal grown at the Karlsruhe establishment of the Joint Research Centre in a Ga flux using the ^242Pu isotope (99.932 wt% ^242Pu, 0.035 wt% ^241Pu, 0.022 wt% ^240Pu, 0.005 wt% ^239Pu, 0.004 wt% ^238Pu, 0.002 wt% ^244Pu as of December 2015) to avoid effects from radiation damage and self-heating. The total sample mass was 4.6 mg, corresponding to a plutonium mass of 1.7 mg and a total activity of ∼760 kBq. Magnetic susceptibility and specific heat measurements show superconductivity below T_c = 18.7 K (see inset of Fig. <ref>). Following a protocol developed for powder diffraction measurements at synchrotron radiation sources on other transuranium isotopes <cit.>, the sample was put inside a hermetic holder providing four levels of containment. For this, we used a kapton capillary (1 mm diameter, ∼25 mm in length) half filled with Stycast. The resin was allowed to cure, before a 5 mm mixture of a second resin (Epofix) and the sample was inserted with a pipette.
The Epofix was used because of its lower viscosity, allowing easier mixing with the powder sample and insertion into the narrow kapton capillary. The remainder of the capillary was then filled with Stycast and, once fully cured, it was inserted into a drilled-out plexiglass rod, which was sealed with a plexiglass plug, glued with further Stycast and finally enveloped within a 4 mm polyimide tube. Due to the contamination risk generated by the plutonium element, all operations of preparation and encapsulation have been carried out in shielded gloveboxes under inert nitrogen atmosphere following well-established safety procedures. The channel-cut Si-111 monochromator of ID22 provided an incident beam wavelength of 0.354155 Å. The sample capillary was mounted on the axis of the diffractometer inside a liquid-helium-cooled cryostat, allowing a base temperature of 2 K to be reached. To avoid any risks of mechanical failure of the containment, the sample was not spun within the cryostat. This did not result in preferred sample orientation issues in the data, as the setting process in the resin eliminates any preferred orientation and provides a good sample average. A NIST 640c Si standard was used to calibrate the Si-111 multi-analyser stage. In the first part of the experiment, the diffraction pattern was measured at several temperatures, from 2 to 300 K, with acquisition times up to four hours. The T dependence of the lattice parameters was obtained from the Rietveld refinement of diffraction patterns collected on warming from 5 K up to 260 K, with a counting time of 1 hour at each temperature. The experimental resolution was of the order of Δd/d = 10^-6. The main results are summarized below. The best fit of the diffraction profile is obtained within the tetragonal P4/mmm space group in the whole temperature range explored in this experiment. Close examination of the shape and width of individual Bragg peaks shows no evidence for the occurrence of a lattice distortion across T_c, as shown for the (020) Bragg peak in Fig. <ref>. The temperature dependence of the lattice parameters a and c is shown in Fig. <ref>, together with the thermal expansion of the unit cell volume V(T). The error bars on the experimental data represent the error σ_R estimated from the Rietveld refinement multiplied by a factor of 5. The solid line is a fit to a simple one-phonon Grüneisen-Einstein model,

ln(V(T)/V_0) = [k_B n γ / (B V_m)] · T_E / [exp(T_E/T) − 1],

and equivalent expressions for the lattice parameters a(T) and c(T). In Eq. <ref>, V_0 is the unit cell volume at T = 0, k_B is the Boltzmann constant, n = 7 is the number of atoms per unit cell, B ≃ 89-100 GPa is the bulk modulus <cit.>, V_m = 7.23×10^-5 m^3/mol is the molar volume, γ is the Grüneisen parameter and T_E is the Einstein temperature. The best fit is obtained for V_0 = 120.090(3) Å^3 (c_0 = 6.7607(4) Å, a_0 = 4.2146(3) Å), γ = 5.2(4), and T_E = 197(4) K. The values estimated by this simple model are in line with those obtained by self-consistent calculations reported in Ref. filanovich16. Moreover, the Grüneisen parameter has the order of magnitude reported for other mixed-valent Ce and U compounds, for instance CePd_3, CeSn_3, and UAl_2 <cit.>.
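The fit of Eq. <ref> can be reproduced with a short least-squares sketch like the one below. This is our own illustration, not the authors' analysis code: the data file name and initial guesses are placeholders, the bulk modulus is fixed at the mid-range of the quoted 89-100 GPa, and we normalize the prefactor per mole of unit cells (k_B n N_A = nR), an assumption we need in order to make the printed expression dimensionally consistent with V_m given as a molar volume.

import numpy as np
from scipy.optimize import curve_fit

R = 8.314462   # J/(mol K); using nR assumes the prefactor of Eq. (1) is per mole
N = 7          # atoms per unit cell
B = 95e9       # bulk modulus in Pa (mid-range of the quoted 89-100 GPa)
VM = 7.23e-5   # molar volume in m^3/mol

def ge_model(T, V0, gamma, T_E):
    # One-phonon Grueneisen-Einstein expression for V(T), Eq. (1).
    return V0 * np.exp(N * R * gamma / (B * VM) * T_E / np.expm1(T_E / T))

# T in K and V in Angstrom^3 from the Rietveld refinements (placeholder file):
# T, V = np.loadtxt("volume_vs_T.dat", unpack=True)
# (V0, gamma, T_E), _ = curve_fit(ge_model, T, V, p0=(120.1, 5.0, 200.0))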
Although the simple model above describes well the behavior at high temperature, clear deviations from the predicted dependence are observed below T_c. Whilst a decreases linearly with decreasing T, c has a small expansion at T_c and becomes constant at lower temperatures, a behavior similar to the one calculated by Millis and Rabe for La_2-xSr_xCuO_4 and YBa_2Cu_3O_7 by taking into account Gaussian fluctuation corrections to the mean-field superconducting free energy <cit.>. As a consequence, the volume expansion deviates from the curve expected for the conventional anharmonic behavior described by the Grüneisen-Einstein model, with differences that are about two times larger than the error bars given by 5σ_R. One must, of course, be aware that Eq. <ref> has its roots in the Einstein approximation for the specific heat of a solid, which underestimates the contribution of long-wavelength vibrational modes at very low temperatures. However, as significant differences with the curve predicted by more sophisticated models are expected at temperatures much smaller than T_c in PuCoGa_5, its use to signal an anomaly in the observed experimental data at the onset of superconductivity is, in the present case, justified. As shown in Fig. <ref> (top panel), upon cooling the expansion is isotropic down to 150 K and anisotropic for lower temperatures. This results in a c/a ratio that decreases with increasing T to become almost constant above ∼150 K. It is interesting to note that the marked increase of the c/a ratio below 150 K occurs in the temperature range where Ramshaw et al. [ramshaw15] observe an anomalous softening of the bulk modulus and a significant temperature dependence of the in-plane Poisson ratio. Such a behavior was attributed in Ref. [ramshaw15] to the development of in-plane hybridization between Pu 5f moments and conduction electrons. The inset (top panel) of Fig. <ref> shows the linear thermal expansion coefficients along the a and c directions around T_c. The temperature dependence of the volume thermal expansion coefficient α_V is shown in the bottom panel of Fig. <ref>. The presence of an anomaly with a minimum at T_c has been confirmed by repeating the sequence of measurements both on warming and on cooling cycles.

§ DISCUSSION The anisotropic change in thermal expansion at T_c is not unexpected for a d-wave superconductor adjusting its crystal structure in order to minimize the lattice free energy. On the other hand, the observed deviation of the unit cell volume with respect to the Grüneisen-Einstein prediction is much larger than the one obtained from the first Ehrenfest equation for second-order phase changes. This equation relates the difference between the temperature derivatives of the volume at constant pressure calculated above and below the phase transition with the jump of the specific heat at T_c and the initial slope of the hydrostatic pressure dependence of T_c (which measures the average of the stress derivatives):

(∂V_s/∂T)_p − (∂V_n/∂T)_p = Δα_V V_m = (∂T_c/∂p) (C_s − C_n)/T_c,

where Δα_V = α_Vs − α_Vn is the difference between the thermal expansion coefficients in the superconducting and normal phases. Previous studies have reported ∂T_c/∂p = 0.4(2)×10^-9 K/Pa <cit.> and (C_s − C_n)/T_c = 0.110(4) J mol^-1 K^-2 <cit.>, leading to Δα_V = 0.6×10^-6 K^-1. This value for the thermal expansion discontinuity is comparable with those calculated for La_2-xSr_xCuO_4 and YBa_2Cu_3O_7 in Ref. [millis88] but it is smaller by one order of magnitude than the anomaly observed in the experimental curve shown in Fig. <ref>.
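The quoted discontinuity follows directly from Eq. <ref> rearranged; as a quick numerical check (our own, using only the values cited above):

# Ehrenfest estimate of the thermal-expansion jump, Eq. (2) rearranged.
dTc_dp = 0.4e-9        # dT_c/dp in K/Pa
dC_over_Tc = 0.110     # (C_s - C_n)/T_c in J mol^-1 K^-2
VM = 7.23e-5           # molar volume in m^3/mol
delta_alpha_V = dTc_dp * dC_over_Tc / VM
print(f"{delta_alpha_V:.1e} K^-1")   # 6.1e-07, i.e. ~0.6e-6 K^-1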
Moreover, the positive jump at T_c of α_V (with increasing T), in conjunction with the negative jump of the specific heat, would indicate an initial negative value for ∂T_c/∂p, in contrast to the direct measurement. Such discrepancies in sign or magnitude have been observed in other superconductors, such as the A15 material V_3Si <cit.>, the Chevrel phase PbMo_6S_8 <cit.> or the iron-based layered superconductor Ba(Fe_1-xCo_x)_2As_2 <cit.>, where the thermal expansion is also highly anisotropic and the derivative ∂T_c/∂p deduced from the Ehrenfest relation is negative, whereas the pressure diagram clearly displays an increase of T_c with increasing pressure around p ∼ 0 GPa. Several hypotheses have been invoked to account for these deviations, but no clear explanation has emerged yet, so we refer interested readers to the references therein. One should also notice that PuCoGa_5 is a plutonium-based material, and elemental Pu is already at the origin of numerous exotic phenomena, such as a negative thermal dilatation in the δ phase <cit.>, due to its specific electronic structure that is far from being fully understood. Although we do not have any straightforward explanation for the departure from the predictions of the Ehrenfest relation, we think that this is an interesting finding that calls for further studies. Indeed, determining the thermal expansion of PuCoGa_5 single crystals along the main directions with a high-sensitivity technique like dilatometry would be valuable to yield more details on this anomaly at T_c and stimulate theoretical work. Thermal expansion measurements have been reported for the isostructural heavy-fermion superconductor CeCoIn_5 (T_c = 2.3 K) <cit.>. Also in that case, the thermal expansion shrinks for the [100] direction in the superconducting state, while it expands for [001]. The volume thermal expansion decreases linearly down to T_c with decreasing temperature, and more rapidly so in the temperature range from T_c down to 1.5 K <cit.>. This is similar to what we report for PuCoGa_5, but for CeCoIn_5 the coefficient of the volume thermal expansion shows an anomaly with a lambda shape, which is not the case for PuCoGa_5. Moreover, applying the Ehrenfest relation to CeCoIn_5 leads to a correct estimate for ∂T_c/∂p (both in sign and magnitude), again in contrast with what we report for the Pu analogue. The linear decrease of the unit cell volume with decreasing temperature below T_c is qualitatively similar to the one observed in CeRu_2Si_2, a compound where the Kondo screening (changing the 4f^1 localized state to a non-magnetic 4f-itinerant state) is accompanied by a volume contraction below the Kondo temperature T_K = 20 K <cit.>. In that case, the phenomenon can be interpreted in the framework of a theory describing critical valence fluctuations involving 4f^1 and 4f^0 electronic configurations <cit.>, although CeRu_2Si_2 is thought to be relatively far from criticality <cit.>. According to Ref. [miyake14], the variation of the f-shell occupation number Δn_f = n_f(T) − n_f(0) is proportional to the volume change ΔV(T), Δn_f ∝ ζ χ_0 ΔV / (η_0 + b T^ξ), where ζ is a temperature-independent constant that relates the energy variation of the f-electron levels to the volume change, χ_0 is the non-interacting susceptibility of the order of the quasiparticle density of states, η_0 is a parameter characterizing the degree of departure from the critical point, and ξ a critical exponent.
Although a theory for valence transitions between electronic configurations with occupation numbers higher than 1 is not yet available, the behavior should be qualitatively similar, and the observed volume shrinkage could be an indication that the valence of the Pu atoms changes below T_c. However, it must be emphasized that the observed departure of the volume from the Grüneisen-Einstein model is of the order of 10^-4 and, in the absence of a quantitative theory, we cannot claim that PuCoGa_5 is on the verge of a critical valence transition on the basis of our results. Our attempts to separate electronic and vibrational contributions to the observed thermal expansion failed. In PuCoGa_5, T_c is relatively high, whereas its linear specific heat capacity is relatively small compared to other heavy-fermion materials. As a consequence, the phonon contribution to the thermal expansion cannot be considered as a small correction, as in many heavy-fermion superconductors where the critical temperature is in the sub-kelvin range and the specific heat Sommerfeld coefficient γ is very high. Therefore, in the absence of a precise estimate of the vibrational term, a reliable separation of the different contributions to the thermal expansion was not feasible.

§ CONCLUSIONS X-ray diffraction measurements with a resolution of Δd/d ∼ 10^-6 in the lattice spacing show that the tetragonal symmetry exhibited by the PuCoGa_5 unconventional superconductor is preserved down to 2 K, well below the critical temperature T_c = 18.7 K. The lattice thermal expansion is isotropic down to 150 K, and anisotropic for lower temperatures. This gives a c/a ratio that decreases with increasing T to become almost constant above ∼150 K. The volume thermal expansion coefficient α_V has a jump at T_c, a factor ∼20 larger than the change predicted by the Ehrenfest relation. At low temperatures, the expansion of the unit cell volume deviates from the curve corresponding to a simple one-phonon Grüneisen-Einstein model and shows, below T_c, a continuous linear shrinking of the volume. In the case of the CeRu_2Si_2 Kondo system, a similar trend has been attributed to critical valence fluctuations. Although the deviations observed for PuCoGa_5 are about ten times larger than the statistical errors, in the absence of a quantitative theory it is not possible to establish the occurrence of critical valence fluctuations near T_c. The determination of the thermal expansion along the main directions in PuCoGa_5 single crystals with a technique affording higher sensitivity and a higher density of experimental points, such as dilatometry, would be welcome to study this anomaly at T_c more precisely, to confirm and refine our observations, and to stimulate theoretical work.

§ ACKNOWLEDGEMENT We thank A. Fitch, J. M. Lawrence, and E. D. Bauer for stimulating discussions and advice. We are grateful to P. Colomp of the ESRF radioprotection services for his cooperation during the execution of the experiment.
http://arxiv.org/abs/1703.08984v1
{ "authors": [ "R. Eloirdi", "C. Giacobbe", "P. Amador Celdran", "N. Magnani", "G. H. Lander", "J. -C. Griveau", "E. Colineau", "K. Miyake", "R. Caciuffo" ], "categories": [ "cond-mat.supr-con" ], "primary_category": "cond-mat.supr-con", "published": "20170327094118", "title": "Thermal Expansion of the Heavy-fermion Superconductor PuCoGa$_{5}$" }
Who Said What: Modeling Individual Labelers Improves Classification
Melody Y. Guan (Work done as a member of the Google Brain Residency program; g.co/brainresidency.) Stanford University, 450 Serra Mall, Stanford, California 94305, mguan@stanford.edu
Varun Gulshan, Andrew M. Dai, Geoffrey E. Hinton, Google Brain, 1600 Amphitheatre Pkwy, Mountain View, California 94043, {varungulshan, adai, geoffhinton}@google.com
================================================================================================================================================================================================================================================================================================================================================================

Data are often labeled by many different experts with each expert only labeling a small fraction of the data and each data point being labeled by several experts. This reduces the workload on individual experts and also gives a better estimate of the unobserved ground truth. When experts disagree, the standard approaches are to treat the majority opinion as the correct label and to model the correct label as a distribution. These approaches, however, do not make any use of potentially valuable information about which expert produced which label. To make use of this extra information, we propose modeling the experts individually and then learning averaging weights for combining them, possibly in sample-specific ways. This allows us to give more weight to more reliable experts and take advantage of the unique strengths of individual experts at classifying certain types of data. Here we show that our approach leads to improvements in computer-aided diagnosis of diabetic retinopathy. We also show that our method performs better than competing algorithms by <cit.>; <cit.>. Our work offers an innovative approach for dealing with the myriad real-world settings that use expert opinions to define labels for training.

§ INTRODUCTION Over the last few years, deep convolutional neural networks have led to rapid improvements in the ability of computers to classify objects in images and they are now comparable with human performance in several domains. As computers get faster and researchers develop even better techniques, neural networks will continue to improve, especially for tasks where it is possible to get a very large number of accurately labeled training examples. In the near future, we can expect neural networks to start serving as alternatives to human experts. We would, in fact, like the neural networks to perform much better than the human experts used to provide the training labels because these training labels are often unreliable as indicated by the poor agreement between different experts (55.4% for the datasets we consider) or even between an expert and the same expert looking at the same image some time later (70.7%). [Inter-grader variability is a well-known issue in many settings in which human interpretation is used as a proxy for ground truth, such as radiology and pathology <cit.>.] Intuitively, we would expect the quality of the training labels to provide an upper bound on the performance of the trained net. In the following section we show that this intuition is incorrect.
Figure <ref> summarizes our optimal procedure.

§.§ Beating the Teacher To demonstrate that a trained neural net can perform far better than its teacher, we use the well-known MNIST hand-written digit benchmark, for which the true labels are known, and we create unreliable training labels by corrupting the true labels. This corruption is performed just once per experiment, before training starts, so the noise introduced by the corruption cannot be averaged away by training on the same example several times. MNIST has 60k training images and 10k test images, and the task is to classify each 28×28-pixel image into one of the ten classes. For the purposes of this demonstration, we use a very simple neural net: two convolutional layers with 5×5 filters, rectified linear unit (ReLU) activation functions, and 16 and 25 output channels respectively, each followed by a max pooling layer with 2×2 filters; a fully connected hidden layer of 32 ReLUs; and a 10-way softmax layer. We train the net on 50k examples using stochastic gradient descent on mini-batches of size 200 with the Adam optimizer <cit.>, and we use the remaining 10k training cases as a validation set for tuning the learning rate and the magnitude of the initial random weights. The best-performing net has a test error rate of 1.01% when the training labels were all correct. If the labels are corrupted by changing each label to one of the other nine classes with a probability of 0.5, the test error rate only rises to 2.29%. Even if each training label is changed to an incorrect label with probability 0.8, so that the teacher is wrong 80% of the time, the trained net only gets 8.23% test error. If the teacher is even less reliable, there comes a point at which the neural net fails to “get the point” and its error rate rises catastrophically, but this does not happen until the teacher is extremely unreliable, as shown in Figure <ref>. This demonstrates that the performance of a neural net is not limited by the accuracy of its teacher, provided the teacher's errors are random. One obvious question is how many noisily labeled training examples are worth a correctly labeled training example. In Appendix A we show that this question can be answered, at least approximately, by computing the mutual information between label and truth.
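For reference, the one-time corruption protocol described above can be sketched in a few lines of NumPy; the function name and the seed handling are our own (hypothetical) choices:

import numpy as np

def corrupt_labels(y, p_wrong, num_classes=10, seed=0):
    """Replace each label, with probability p_wrong, by a label drawn
    uniformly from the other nine classes.  Applied once, before training."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    flip = rng.random(y.shape[0]) < p_wrong
    # Offsets in 1..num_classes-1 guarantee the new label always differs.
    offset = rng.integers(1, num_classes, size=y.shape[0])
    y[flip] = (y[flip] + offset[flip]) % num_classes
    return y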
§.§ Making Better Use of Noisy Labels for Diabetic Retinopathy Classification We are interested in noisy datasets of medical images where many different doctors have provided labels but each image has only been labeled by a few doctors and most of the doctors have only labeled a fairly small fraction of the images. This paper focuses on datasets of images used for screening diabetic retinopathy because neural networks have recently achieved human-level performance on such images <cit.>, and if we can produce even a relatively small improvement in the state-of-the-art system it will be of great value. Diabetic retinopathy (DR) is the fastest growing cause of blindness worldwide, with nearly 415 million diabetics at risk <cit.>. Early detection and treatment of DR can reduce the risk of blindness by 95% <cit.>. One of the most common ways to detect diabetic eye disease is to have a specialist examine pictures of the back of the eye called fundus images and rate them on the International Clinical Diabetic Retinopathy scale <cit.>, defined based on the type and extent of lesions (e.g. microaneurysms, hemorrhages, hard exudates) present in the image. The image is classified into one of 5 categories, consisting of (1) No DR, (2) Mild NPDR (non-proliferative DR), (3) Moderate NPDR, (4) Severe NPDR, and (5) Proliferative DR (Figure <ref>). Another important clinical diagnosis that can be made from the fundus image is the presence of diabetic macular edema (DME). While this work focuses only on the 5-point grading of DR, the findings should be applicable to DME diagnosis as well. Most of the prior work on DR classification focuses on obtaining a single ground truth diagnosis for each image, and then using that for training and evaluation. Deep learning has recently been used within this setting by <cit.>, who show a high sensitivity (97.5%) and specificity (93.4%) in the detection of referable DR (moderate or more severe DR). In this work we explore whether, in the context of data in which every example is labeled by multiple experts, a better model can be trained by predicting the opinions of the individual experts as opposed to collapsing the many opinions into a single one. This allows us to keep the information contained in the assignment of experts to opinions, which should be valuable because experts labelling data differ from each other in skill and area of expertise (as is the case with our ophthalmologists, see Figure <ref>). Note that we still need a single opinion on the test set to be able to evaluate the models. To that end, we use a rigorous adjudicated reference standard for evaluation, where a committee of three retinal specialists resolves disagreements by discussion until a single consensus is achieved.

§ RELATED WORKS Our work on learning from multiple noisy annotators relates to literature on noisy labels, crowd-sourcing, weak supervision, semi-supervised learning, item response theory, and multi-view learning. Since the foundational work of <cit.>, who model annotator accuracies with expectation-maximization (EM), and <cit.>, who integrate the opinions of many experts to infer ground truth, there has been a large body of work using EM approaches to estimate accurate labels for datasets annotated by multiple experts <cit.>. Works that use Bayesian probabilistic models for the image generation and/or annotation process include <cit.>; <cit.>; <cit.>; <cit.>. <cit.>; <cit.> learn from multiple annotators how to do active learning, i.e. which samples to select and which annotators to query, the latter using Gaussian processes to explicitly handle uncertainty. <cit.>; <cit.> propose message passing algorithms for crowdsourcing. An extension in weak supervision is generalizing from noisy sources to programmatically generate labeled training sets <cit.>. An extension in the crowdsourcing domain is budget allocation during label sourcing <cit.>. Previous work in biostatistics and epidemiology that estimates ground truth from multiple annotators in the absence of ground truth data includes <cit.>; <cit.>; <cit.>, but none of these model individual labelers as we do.

§ METHODS

§.§ Motivation for Model Design First we describe the rationale behind our proposed models. There is more information in the particular labels produced by particular doctors than is captured by simply taking the average of all the doctors who have labeled a particular image and treating this distribution as the correct answer. The amount of constraint that a training case imposes on the weights of a neural network depends on the amount of information required to specify the desired output.
So if we force the network to predict what each particular doctor would say for each particular training case, we should be able to get better generalization to test data, provided this does not introduce too many extra parameters. For a K-way classification task, we can replace the single softmax <cit.> that is normally used with as many different K-way softmaxes as we have doctors. Of course, there will be many doctors who have not labeled a particular training image, but this is easily handled by simply not backpropagating any error from the softmaxes that are used to model those doctors. At test time we can compute the predictions of all of the modeled doctors and average them. Our belief is that forcing a neural network to model the individual doctors and then averaging at test time should give better generalization than simply training a neural network to model the average of the doctors. We expect some doctors to be more reliable than others, and we would like to give more weight to their opinions. We should thus be able to do better than just averaging the opinions of the modeled doctors. After we have finished learning how to model all of the individual doctors, we can learn how much to weight each modeled doctor's opinion in the averaging. This allows us to downweight the unreliable doctor models. We also expect that the doctors will have received different training and may have experienced different distributions of images, so that the relative reliability of two doctors may depend both on the class of the image and on properties of the image such as the type of camera used. Our weights for averaging doctor models should therefore possibly be image-dependent.

§.§ Model Architecture With these intuitions in mind, we consider a sequence of models of increasing complexity for training the diabetic retinopathy classifier (Figure <ref>). The neural network base used in this work is the Inception-v3 architecture <cit.> (Figure <ref>). * Baseline Net (BN): Inception-v3 trained on average opinions of doctors; a TensorFlow reimplementation of the model used in <cit.>. * Doctor Net (DN): BN extended to model the opinions of each of the 31 doctors. * Weighted Doctor Net (WDN): Fixed DN with averaging weights for combining the predictions of the doctor models learned on top, one weight per doctor model. * Image-specific WDN (IWDN): WDN with averaging weights that are learned as a function of the image. * Bottlenecked IWDN (BIWDN): IWDN with a small bottleneck layer for learning the averaging weights. For BN, the outputs of the last hidden layer of Inception are used to compute the logits used in the five-way softmax output layer. For DN, the opinions of each doctor are modeled using a separate softmax for each doctor, while the Inception weights are shared. For evaluation, the predictions from the softmax “doctor models” are arithmetically averaged to give a single five-class prediction. For subsequent nets, the parameters and predictions of the DN model are frozen and only the averaging weights for the doctor models are learned. For WDN, one averaging weight per doctor is trained, shared across all images. For IWDN, these averaging weights are made image-dependent by letting them be a function of the last hidden layer of Inception. For BIWDN, a linear bottleneck layer of size three is added between the last hidden layer of Inception (of dimension 2048) and the 31-way softmax of IWDN as a precautionary measure against model overfitting. A bottleneck layer of this size reduces the number of trainable parameters ten-fold.
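As an illustration of the per-doctor heads and the masking of unlabeled doctors described above, a minimal PyTorch-style sketch follows. The paper's implementation is in TensorFlow; the tensor shapes, names, and the normalization by the number of observed grades are our assumptions.

import torch
import torch.nn.functional as F

def doctor_net_loss(features, heads, labels, mask):
    """DN training loss: one softmax head per doctor on the shared
    Inception features, with no error backpropagated from the heads
    of doctors who did not grade the image.

    features: (B, D) shared trunk output; heads: list of per-doctor
    torch.nn.Linear(D, 5) layers; labels: (B, num_doctors) integer
    grades (arbitrary where ungraded); mask: (B, num_doctors) with 1
    where doctor j actually graded image b."""
    total = features.new_zeros(())
    for j, head in enumerate(heads):
        ce = F.cross_entropy(head(features), labels[:, j], reduction="none")
        total = total + (ce * mask[:, j]).sum()  # masked heads contribute 0
    return total / mask.sum().clamp(min=1.0)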
In (B)(I)WDN, rather than directly learning the averaging weight for each doctor model, we learn averaging logits for each model that we then pass through a softmax to produce averaging weights that are guaranteed to be positive. To train the averaging logits, we use the opinions of the doctors who actually labeled a training image to define the target output distribution for that image (Appendix B.2 discusses an alternative target). We then combine the predictions of the models of all the other doctors using the weights defined by their current averaging logits. Finally we update our parameters by backpropagating with the cross entropy loss between the target distribution and the weighted average prediction. This way, all of the training cases that a doctor did not label can be used to learn the averaging logit for that doctor, and no extra data are needed beyond those used to learn the weights of DN. Moreover, if a doctor model has similar performance to other doctor models but makes very different errors, it will tend to be upweighted because it will be more useful in the averaging. This upweighting of diverse doctor models would not occur if we had computed the reliabilities of the doctors separately. For a single image, let I be the set of indices of the doctors who actually graded that image. Let the label of doctor i ∈ I be l_i. For every doctor j ∈ {1, 2, …, 31}, denote the prediction of its model by p_j. Let p_∅ be the prediction of the model of the average doctor in BN. For WDN, IWDN, and BIWDN, let w_j be the averaging weight for the jth modeled doctor, where ∑_j w_j = 1. Note that p_j is a five-dimensional vector and w_j is a scalar. The explicit inputs of the cross entropy loss being minimized during training of each model are shown in Table <ref> and post-Inception computations are shown schematically in Figure <ref>. In the case of DN, the cross entropy losses of the individual doctor models are added together to get the total loss for each training example.

§.§ Summary of Procedure Here we summarize the entire procedure for using Weighted Doctor Net (WDN), which turns out to be the best performing model. The process is illustrated for generic labelers in Figure <ref>. WDN has two phases of training: * Phase 1: We learn a doctor model for each doctor. Each doctor model consists of the Inception-v3 base followed by a softmax output layer. The Inception-v3 is shared by all the doctor models while the output layers are unique to each doctor model. * Phase 2: We fix the doctor models that we learned in Phase 1 (note that this implies that the predictions made by the doctor models for any given image are also fixed). Now we learn how to combine the predictions of the doctor models in a weighted manner. We do this by training averaging logits according to Table <ref> and then taking a softmax of these averaging logits to get averaging weights. During evaluation of WDN, the prediction made by our model is a linear combination of the doctor models' predictions, where the coefficients are the averaging weights learned in Phase 2 of training.
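A minimal sketch of Phase 2 follows, again in PyTorch style with assumed shapes; excluding the models of the doctors in I from the mixture during training follows the recipe above, and the epsilon inside the log is our own numerical safeguard.

import torch
import torch.nn.functional as F

class AveragingWeights(torch.nn.Module):
    """One trainable averaging logit per (frozen) doctor model."""
    def __init__(self, num_doctors=31):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(num_doctors))

    def forward(self, doctor_preds, graded_mask):
        # doctor_preds: (B, num_doctors, 5) frozen softmax outputs;
        # graded_mask: (B, num_doctors), 1 for the doctors in I, whose
        # models are excluded from the mixture during training (we
        # assume at least one doctor per image is NOT in I).
        masked = self.logits.expand_as(graded_mask).masked_fill(
            graded_mask.bool(), float("-inf"))
        w = F.softmax(masked, dim=1)                         # sums to 1 per image
        return (w.unsqueeze(-1) * doctor_preds).sum(dim=1)   # (B, 5)

def phase2_loss(mixture, target_dist):
    # Cross entropy against the empirical distribution of the grades
    # actually given by the doctors in I (cf. Table <ref>).
    return -(target_dist * torch.log(mixture + 1e-8)).sum(dim=1).mean()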
Next we describe two benchmarks to compare our models against.

§.§ Estimating Doctor Reliability with EM <cit.> use a representative online EM algorithm to estimate the abilities of multiple noisy annotators and to determine the most likely values of labels. We calculate updated labels by executing the method in <cit.> on our human doctors, and we use these updated labels to train BN, as a competing algorithm for our DN method. <cit.> also actively select which images to label and how many labels to request based on the uncertainty of their estimated ground truth values and the desired level of confidence, and they select and prioritize which annotators to use when requesting labels. We do not use these other aspects of their algorithm because labels for all images in our dataset have already been collected.

§.§ Modeling Label Noise <cit.> describe a deep neural network that learns to label road pixels in aerial images. The target labels are derived from road maps that represent roads using vectors. These vectors are converted to road pixels by using knowledge of the approximate width of the roads, so the target labels are unreliable. To handle this label noise, <cit.> propose a robust loss function that models asymmetric omission noise. They assume that a true, unobserved label 𝐦 is first generated from a w_m × w_m image patch 𝐬 according to some distribution p(𝐦|𝐬), and the corrupted, observed label 𝐦̃ is then generated from 𝐦 according to a noise distribution p(𝐦̃|𝐦). The authors assume an asymmetric binary noise distribution p(m̃_i|m_i) that is the same for all pixels i. They assume that, conditioned on 𝐦, all components of 𝐦̃ are independent and that each m̃_i is independent of all m_j≠i. The observed label distribution is then modeled as:

p(𝐦̃|𝐬) = ∏_i=1^w_m^2 ∑_m_i p(m̃_i|m_i) p(m_i|𝐬).

For another baseline, we use a multi-class extension of their method on DN, modeling the noise distribution prior for all doctors d with the parameters:

θ_ll' = p(m̃_d = l' | m_d = l),

where l, l' ∈ {1, 2, 3, 4, 5}. We estimate θ_ll' using the 5×5 confusion matrix between individual and average doctor opinions on training images. Treating the average doctor opinion as the true label, we convert each doctor's individual count matrix into proportions, which we average across all doctors. We train this model by minimizing the negative log posterior, −log(p(𝐦̃|𝐬)). This variant of the method by <cit.> is an alternative to our proposal of learning averaging weights (WDN) as a way to improve upon DN.
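Our reading of this multi-class extension is sketched below for a single doctor head; the shapes, the epsilon, and the convention that θ rows index the model's label l and columns the observed label l' are assumptions on our part.

import torch

def noisy_label_nll(pred_probs, noisy_labels, theta):
    """Negative log posterior of the observed grades under the
    noise-marginalized model: p(l~ | s) = sum_l p(l | s) * theta[l, l~].

    pred_probs: (B, 5) class probabilities from one doctor head of DN;
    noisy_labels: (B,) observed grades in 0..4; theta: (5, 5) noise prior."""
    observed = pred_probs @ theta                     # (B, 5)
    picked = observed.gather(1, noisy_labels.unsqueeze(1)).squeeze(1)
    return -torch.log(picked + 1e-8).mean()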
§ EXPERIMENTAL SETUP

§.§ Neural Network Training We train the network weights using distributed stochastic gradient descent <cit.> with the Adam optimizer on mini-batches of size 8. We train using TensorFlow with 32 replicas and 17 parameter servers, with one GPU per replica. To speed up the training, we use batch normalization <cit.>, pre-initialization of our Inception network using weights from the network trained to classify objects in the ImageNet dataset <cit.>, and the following trick: we set the learning rate on the weight matrix producing prediction logits to one-tenth of the learning rate for the other weights. We prevent overfitting using a combination of L1 and L2 penalties, dropout, and a confidence penalty <cit.>, which penalizes output distributions with low entropy. At the end of training, we use an exponentially decaying average of the recent parameters in the final model. We tune hyperparameters and pick model checkpoints for early stopping on the validation dataset, using the five-class classification error rate as the evaluation metric. The optimal values for these hyperparameters are displayed in Appendix C. Note that we tune the baseline as well to ensure that our improvements are not the result of more hyperparameter optimization. When evaluating on the test set we average the predictions for the horizontally and vertically flipped versions (four in total) of every image. We also train a version of BN where the output prediction is binary instead of multi-class, as is done in <cit.>. The binary output is obtained by thresholding the five-class output at the Moderate NPDR or above level, a commonly used threshold in clinics to define a referable eye condition. For this BN-binary network, the area under the ROC curve is used as the validation evaluation metric. To deal with differences in class distribution between the datasets (Table <ref>), we use log prior correction during evaluation. This entails adding to the prediction logits, for each class, the log of the ratio of the proportion of labels in that class in the evaluation dataset to the proportion of labels in that class in the training set. Our assumed test class distribution for computing the log prior correction is the mean distribution of all known images (those of the training and validation sets). So for each image under evaluation we update the prediction logit for class c by adding log(q_valid(c)/q_train(c)) for the validation dataset, and log(q_valid∪train(c)/q_train(c)) for the test dataset, where q(c) is the proportion of labels in that class. Applying log prior correction improves accuracy, and all our reported results use it.

§.§ Datasets The training dataset consists of 126,522 images sourced from patients presenting for diabetic retinopathy screening at sites managed by 4 different clinical partners: EyePACS, Aravind Eye Care, Sankara Nethralaya, and Narayana Nethralaya. The validation dataset consists of 7,804 images obtained from EyePACS clinics. Our test dataset consists of 3,547 images from the EyePACS-1 and Messidor-2 datasets. More details on image sourcing are in Appendix D. Each of the images in the training and validation datasets was graded by at least one of 54 US-licensed ophthalmologists or ophthalmology trainees in their last year of residency (postgraduate year 4). For training the doctor models, we use the 30 ophthalmologists who graded at least 1,000 images, and we lump the remaining doctors together as a single composite doctor to avoid introducing doctor-specific parameters that are constrained by fewer than 1,000 training cases. Meanwhile, the labels for the test set were obtained through an adjudication process: three retina specialists graded all images in the test dataset, and discussed any disagreements as a committee until consensus was reached. We scale-normalize our images by detecting the circular fundus disk and removing the black borders around them. We use images at a resolution of 587×587 pixels and we augment our training data with random perturbations to image brightness, saturation, hue, and contrast.

§.§ Our Baseline vs Published Baseline This section describes the multiple ways in which our baseline differs from that of <cit.>. For these reasons, results from this paper's own BN should be used for model comparisons with DN, WDN, IWDN, and BIWDN rather than numbers from <cit.>. * Unlike in <cit.>, we remove grades of doctors who grade test set images from both training and validation sets to reduce the chance that the model is overfitting on certain experts.
This handicaps our performance vis-à-vis their paper, especially because we exclude the most expert doctors (the retinal specialists) during model development, but ensures generalizability of our results. * We use different datasets, and in particular our adjudicated test set has gold standard “ground truth” labels. * We train with five-class loss instead of binary loss. * If a doctor grades a single image multiple times, as often occurs, <cit.> treats these as independent diagnoses while we collapse these multiple diagnoses into a distribution over classes. * We employ higher resolution images (587×587 pixels versus 299×299) and image preprocessing and theoretical techniques unused in <cit.>.

§ SUMMARY OF RESULTS We run 10 replicates of each model and average the resulting metrics, which are reported in Table <ref>. For full comparability of models we use the same 10 replicates reported for DN to serve as the fixed part of the model for training the WDN, IWDN, and BIWDN replicates.

§.§ Training with Five-Class Loss Beats Training with Binary Loss Even on Binary Metrics We find that training BN with a five-class loss improves test binary AUC compared to training with a binary loss, as is done by <cit.>, even when validating the former on five-class training error instead of binary AUC (Table <ref>). Test binary AUC is raised by a substantial 1.53% (97.11% vs 95.58%) from using five-class loss. Intuitively this fits with our thesis that generalization is improved by increasing the amount of information in the desired outputs. All results reported in Table <ref> and subsequent sections, including for BN, are obtained from training with five-class loss.

§.§ Averaging Modeled Doctors Beats Modeling the Average Doctor We see a reduction in five-class classification test error of 1.97% (from 23.83% to 21.86%) from averaging modeled doctors (DN) instead of modeling the averaged doctor (BN). In comparison, using labels from the algorithm in <cit.> to train BN only reduces five-class test classification error by 0.09%. Over BN, DN also increases binary AUC by 0.17% (97.11% to 97.28%), decreases binary classification error by 0.17% (9.92% to 9.75%), and increases specificity at 97% sensitivity (spec@97%sens) by 2.21% (79.60% to 81.81%). Meanwhile, using labels from <cit.> on BN merely increases spec@97%sens by 0.37% relative to vanilla BN and actually leads to slightly worse performance on binary AUC (−0.11%) and binary error (+0.20%). Note that the binary AUC, binary error, and spec@97%sens metrics would be improved for all models if we were to do hyperparameter tuning and early stopping for them specifically, but we decided to do all our model selection on one metric (five-class error), both for simplicity and to simulate the decision metric required in real-life automated diagnosis systems. We see that DN is significantly better on all test metrics compared to BN trained using the labels obtained from the algorithm in <cit.>.

§.§ Learning Averaging Weights Helps We see a further 1.28% decrease in five-class test error from using WDN as opposed to DN. Binary AUC increases an additional 0.17%, binary classification error decreases another 0.68%, and spec@97%sens increases an extra 0.88%, all on test data. Results from IWDN and BIWDN are slightly worse than those from WDN.
We would expect a bigger improvement from WDN, and potentially further improvements from training averaging logits in an image-specific way, if we had doctors with more varied abilities and greater environmental differences, but on our dataset image-specific averaging logits do not help. Our extension of the competing algorithm by <cit.> actually causes DN to perform worse by 0.90% on five-class classification test error, and is more computationally costly than (B)(I)WDN. A different noise model we considered does not help either (Appendix B.3).

§ CONCLUSION We introduce a method to make more effective use of noisy labels when every example is labeled by a subset of a larger pool of experts. Our method learns from the identity of multiple noisy annotators by modeling them individually with a shared neural net that has separate sets of outputs for each expert, and then learning averaging weights for combining their modeled predictions. We evaluate our method on the diagnosis of diabetic retinopathy severity on the five-point scale from images of the retina. Compared to our baseline model of training on the average doctor opinion, a strategy that yielded state-of-the-art results on automated diagnosis of DR, our method can lower five-class classification test error from 23.83% to 20.58%. We also find that, on binary metrics, training with a five-class loss significantly beats training with a binary loss, as is done in the published baseline. We compare our method to competing algorithms by <cit.>; <cit.> and we show that corresponding parts of our method give superior performance to both. Our methodology is generally applicable to supervised training systems using datasets with labels from multiple annotators.

§ ACKNOWLEDGMENTS We thank Dale Webster, Lily Peng, Jonathan Krause, Arunachalam Narayanaswamy, Quoc Le, Alexey Kurakin, Anelia Angelova, Brian Cheung, David Ha, Matt Hoffman, and Justin Gilmer for helpful discussions and feedback.

§ A. MUTUAL INFORMATION FOR NOISY LABELS Here we compute the mutual information between a noisy MNIST label and the truth, assuming random noise, in order to estimate the number of noisily labeled training cases equivalent to one case that is known to be correctly labeled. Empirically, N perfectly labeled training cases give about the same test error as N I_perfect/I_noisy training cases with noisy labels, where I_noisy is the mutual information per case between a noisy label and the truth and I_perfect is the corresponding mutual information for perfect labels. For ten classes, the mutual information (in nats) is I_perfect = −log(0.1) = 2.3, but when a noisy label is 20% correct on average, the mutual information is:

I_noisy = −log(0.1) − 10 × 0.02 × log(0.1/0.02) − 90 × (0.1 × 0.8/9) × log(0.1/(0.1 × 0.8/9)) = 0.044.

So if the learning is making good use of the mutual information in the noisy labels, we can predict that 60,000 noisy labels are worth 60,000 × 0.044/2.3 ≈ 1,148 clean labels. In reality we needed about 1,000 clean labels to get similar results.
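The same numbers can be checked directly, using the entropy decomposition I(L;T) = H(L) − H(L|T); this quick NumPy verification is our own and the function name is hypothetical:

import numpy as np

def label_info(p, k=10):
    """Mutual information (nats) between a noisy label and the truth when
    the label is correct with probability p and uniform over classes."""
    h_label = np.log(k)                      # H(L): labels are uniform
    h_cond = -(p * np.log(p) + (1 - p) * np.log((1 - p) / (k - 1)))
    return h_label - h_cond                  # I(L;T) = H(L) - H(L|T)

i_noisy, i_perfect = label_info(0.2), np.log(10)
print(i_noisy)                               # ~0.044 nats
print(60000 * i_noisy / i_perfect)           # ~1157; the text's 1,148 uses 0.044/2.3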
§ B. OTHER IDEAS TESTED

§.§ B.1 Mean Class Balancing In addition to log prior correction of class distributions, we also attempted mean class balancing, wherein samples from less frequent classes are upweighted and samples from more frequent classes are downweighted in the cross entropy loss, in inverse proportion to their prevalence relative to the uniform distribution across classes. Explicitly, we weight each sample of class c by:

α_c = q̄/q(c) = 1/(|c| q(c)),

where q̄ is the mean class proportion (so that q̄ = 1/|c| for |c| classes). <cit.> employ a similar method for computer vision tasks, although they use medians instead of means. In our case, using mean class balancing lowers performance, possibly because it makes too many assumptions on the unknown test distribution, so it was not employed.

§.§ B.2. Alternative Target Distribution for Training Averaging Logits To train the averaging logits, we take each training case and use the opinions of the doctors who actually labeled that case to define the target output distribution. Alternatively, the target distribution can be defined as the equally weighted average of the predictions of the doctor models corresponding to the doctors who labeled that case. In the notation used in Table <ref>, this would be (1/|I|) ∑_i∈I p_i. We experimented with using this alternative target distribution in calculating the cross entropy loss but saw inferior results.

§.§ B.3. An Alternative Noise Model Because our multi-class extension of <cit.> shows poor results, which we postulated may have been because it is sensitive to differences in class distributions between datasets, we considered a different noise model that makes fewer assumptions about the class distribution of the data. This model assumes a symmetric noise distribution that is determined by a single prior parameter: if a label is wrong, it has equal probability of belonging to any of the other classes. However, we allow this parameter to vary by doctor, and we estimate it for each doctor d as

θ_d = p(m̃_d = l | m_d = l),

where the real doctor reliability score is calculated from the <cit.> algorithm. Unfortunately this method performs slightly worse than the 5-class variant of <cit.>. Note that a number of other noise models of varying complexity can be considered as well.

§ C. HYPERPARAMETER SEARCH Table <ref> displays the optimal hyperparameters used in DR classification. We tuned using grid search on the following hyperparameter spaces: dropout for Inception backbone ∈ {0.5, 0.55, 0.6, …, 1.0}, dropout for doctor models ∈ {0.5, 0.55, 0.6, …, 1.0}, learning rate ∈ {1×10^-7, 3×10^-7, 1×10^-6, …, 0.03}, entropy weight ∈ {0.0, 0.0025, 0.005, …, 0.03} ∪ {0.1}, weight decay for Inception ∈ {0.000004, 0.00001, 0.00004, …, 0.1}, L1 weight decay for doctor models ∈ {0.000004, 0.00001, 0.00004, …, 0.04}, L2 weight decay for doctor models ∈ {0.00001, 0.00004, …, 0.04}, L1 weight decay for averaging logits ∈ {0.001, 0.01, 0.02, 0.03, …, 0.1, 0.2, 0.3, …, 1, 2, 3, …, 10, 100, 1000}, L2 weight decay for averaging logits ∈ {0.001, 0.01, 0.1, 0.2, 0.3, …, 1, 5, 10, 15, 20, 30, …, 150, 200, 300, 400, 500, 1000}, and bottleneck size (for BIWDN) ∈ {2, 3, 4, 5, 6, 7}. We used a learning rate decay factor of 0.99 optimized for BN. The magnitudes of the image preprocessing perturbations were also optimized for BN.

§ D. DATASET DETAILS Our training set consists of 119,589 of the 128,175 images used in the training set of <cit.> and 6,933 new labeled images acquired since the creation of their training dataset. The images in the training set of <cit.> that we do not use were excluded for the following reasons: (i) 4,204 images of their dataset were removed to create a separate validation dataset for experiments within the research group; (ii) 4,265 were excluded because they were deemed ungradable by every ophthalmologist that graded them. Unlike <cit.>, we do not predict image gradeability in this work and hence excluded those images.
§.§ B.2. Alternative Target Distribution for Training Averaging Logits

To train the averaging logits, we take each training case and use the opinions of the doctors who actually labeled that case to define the target output distribution. Alternatively, the target distribution can be defined as the equally weighted average of the predictions of the doctor models corresponding to the doctors who labeled that case. In the notation used in Table <ref>, this would be (1/|I|)∑_i∈I p_i. We experimented with using this alternative target distribution in calculating the cross-entropy loss but saw inferior results.

§.§ B.3. An Alternative Noise Model

Because our multi-class extension of <cit.> shows poor results, which we postulated may be because it is sensitive to differences in class distributions between datasets, we considered a different noise model that makes fewer assumptions about the class distribution of the data. This model assumes a symmetric noise distribution that is determined by a single prior parameter: if a label is wrong, it has equal probability of belonging to any of the other classes. However, we allow this parameter to vary by doctor, and we estimate it for each doctor d as

θ_d = p(m̃_d = l | m_d = l),

where the doctor reliability score is calculated using the <cit.> algorithm. Unfortunately, this method performs slightly worse than the 5-class variant of <cit.>. Note that a number of other noise models of varying complexity can be considered as well.

§ C. HYPERPARAMETER SEARCH

Table <ref> displays the optimal hyperparameters used in DR classification. We tuned using grid search on the following hyperparameter spaces: dropout for Inception backbone ∈{0.5, 0.55, 0.6, …, 1.0}, dropout for doctor models ∈{0.5, 0.55, 0.6, …, 1.0}, learning rate ∈{1×10^-7, 3×10^-7, 1×10^-6, …, 0.03}, entropy weight ∈{0.0, 0.0025, 0.005, …, 0.03}∪{0.1}, weight decay for Inception ∈{0.000004, 0.00001, 0.00004, …, 0.1}, L1 weight decay for doctor models ∈{0.000004, 0.00001, 0.00004, …, 0.04}, L2 weight decay for doctor models ∈{0.00001, 0.00004, …, 0.04}, L1 weight decay for averaging logits ∈{0.001, 0.01, 0.02, 0.03, …, 0.1, 0.2, 0.3, …, 1, 2, 3, …, 10, 100, 1000}, L2 weight decay for averaging logits ∈{0.001, 0.01, 0.1, 0.2, 0.3, …, 1, 5, 10, 15, 20, 30, …, 150, 200, 300, 400, 500, 1000}, and bottleneck size (for BIWDN) ∈{2, 3, 4, 5, 6, 7}. We used a learning rate decay factor of 0.99 optimized for BN. The magnitudes of the image preprocessing perturbations were also optimized for BN.

§ D. DATASET DETAILS

Our training set consists of 119,589 of the 128,175 images used in the training set of <cit.> and 6,933 new labeled images acquired since the creation of their training dataset. The images in the training set of <cit.> that we do not use were excluded for the following reasons: (i) 4,204 images were removed to create a separate validation dataset for experiments within the research group; (ii) 4,265 were excluded because they were deemed ungradable by every ophthalmologist who graded them (unlike <cit.>, we do not predict image gradeability in this work and hence excluded those images); (iii) 117 were excluded because they fail our image scale normalization preprocessing step. Our validation dataset consists of 7,963 images obtained from EyePACS clinics. These images are a random subset of the 9,963 images of the EyePACS-1 test set used in <cit.>. The remaining 2,000 images were used as part of our test set. In practice, only 7,805 of the 7,963 validation images have at least one label, since the remaining 158 images were of poor quality and considered ungradable by all ophthalmologists who labeled them. The test set consists of 1,748 images of the Messidor-2 dataset <cit.> and 2,000 images of the EyePACS-1 test dataset used in <cit.>, as just mentioned. 1,744 of the 1,748 images of Messidor-2 and 1,803 of the 2,000 images from EyePACS-1 were considered gradable after adjudication and were assigned labels.
http://arxiv.org/abs/1703.08774v2
{ "authors": [ "Melody Y. Guan", "Varun Gulshan", "Andrew M. Dai", "Geoffrey E. Hinton" ], "categories": [ "cs.LG", "cs.CV" ], "primary_category": "cs.LG", "published": "20170326063445", "title": "Who Said What: Modeling Individual Labelers Improves Classification" }
http://arxiv.org/abs/1703.08816v2
{ "authors": [ "Andrea L. Bertozzi", "Xiyang Luo", "Andrew M. Stuart", "Konstantinos C. Zygalakis" ], "categories": [ "cs.LG", "stat.ML" ], "primary_category": "cs.LG", "published": "20170326132925", "title": "Uncertainty quantification in graph-based classification of high dimensional data" }
Approaching Confinement Structure for Light Quarks in a Holographic Soft Wall QCD Model
Meng-Wei Li^a, Yi Yang^b and Pei-Hung Yuan^a

Raman scattering spectroscopy is an acknowledged characterization tool for layered materials such as graphene <cit.>, boron nitride <cit.>, and semiconducting transition metal dichalcogenides (TMDs) <cit.>. Since the beginning of the recent boom in the physics and technology of graphene-related systems, it has primarily been used to determine the number of layers in ultimately thin films of various 2D crystals serving as building blocks of more complex structures <cit.>. Additional advantages of this technique emerge when Raman scattering spectra are investigated as a function of the excitation energy and/or under resonant conditions, when either the incoming or outgoing photons coincide in energy with optically active electronic transitions <cit.>. Resonant Raman scattering offers supplementary information on TMD layers, concerning essentially the coupling of particular phonons to electronic transitions of a specific symmetry <cit.>. Here we report on a resonant Raman scattering study of monolayer WS_2. An extremely rich Raman scattering response is uncovered when using an unconventional spectroscopy scheme, referred to as Raman scattering excitation (RSE). This method relies on tracing the Raman scattering response when the detection energy of the outgoing photons is fixed and the laser energy is swept. The shape of the RSE spectrum strongly depends on the selected detection energy, in our case related to either the negatively charged (X^-) or the neutral (X^0) exciton. The former resonance condition results in a strong enhancement of the X^- emission due to cascade scattering by optical and acoustic phonons. In contrast, several Raman scattering processes, including double out-of-plane A'_1 zone-centre modes, are enhanced while being in resonance with the outgoing photons emitted at the energy of the X^0 exciton. RSE spectroscopy is proposed as a convenient tool to investigate electron-phonon interactions in thin layers of TMDs.

§ EXPERIMENTAL RESULTS

Figure <ref>(a) presents an optical microscope image of the investigated WS_2 monolayer supported by a Si/(300 nm)SiO_2 substrate. The topography of the flake is shown in Fig. <ref>(b) with a false-colour atomic-force-microscope (AFM) image of the area enclosed in panel (a) with a dashed white box. The inset to panel (b) shows a height profile measured along the white line crossing the edge of the monolayer. The monolayer thickness extracted from that profile (0.85 nm) is slightly larger than the value of 0.62 nm (Ref. ) based on the separation between adjacent layers in bulk WS_2 crystal. As reported also for other TMD monolayers deposited on Si/SiO_2 substrates, the difference may result from a non-zero equilibrium distance between the bottom surface of the monolayer and the top surface of the SiO_2 layer, as well as from a thin layer of water trapped between the monolayer and the substrate. A representative photoluminescence (PL) spectrum of the monolayer excited at non-resonant energy (λ=514.5 nm) is shown in Fig. <ref>(c). It is composed of several emission lines (see the inset to Fig.
<ref>(c)), which are attributed to the neutral (X^0), negatively charged (X^-), and localized (Ls) excitons related to the maximum of the valence band and the higher-lying energy level in the conduction band at the K^± points of the WS_2 monolayer's Brillouin zone (BZ) <cit.>. Subsequent (selected) spectra obtained with several excitation energies are displayed in Fig. <ref>(d). The spectral window presented covers the energy range that corresponds to the emission of light due to recombination of neutral and charged excitons. It can be seen that the lineshape of the spectra critically depends on the excitation energy. In particular, there are narrow lines superimposed on broad peaks due to recombination of the excitons. The lines follow the laser excitation energy, which points to Raman scattering as their origin. Moreover, the emission related to the negatively charged exciton is highly enhanced and broadened when its energy coincides with the energy of light scattered by processes identified later in the text. Complete sets of the collected data are presented in Fig. <ref> in the form of color-coded maps showing the intensity of the optical response as a function of excitation energy. Panels (a) and (b) of the Figure present the results obtained with a lower (∼10 μW) and higher (∼50 μW) excitation power, respectively. Three energy regions of the optical response can be clearly distinguished in Fig. <ref>: (i) the lowest-energy range (<2.075 eV), in which the emission ascribed to the X^- exciton can be observed, (ii) an intermediate energy range (2.075 eV…2.09 eV), in which a rather weak optical response can be seen, and (iii) the highest energy range (>2.09 eV), in which the emission due to the X^0 exciton becomes apparent. In order to identify the processes responsible for the Raman scattering in the three energy regions mentioned above, we have analyzed the optical response of the WS_2 monolayer measured as a function of excitation energy and detected at three selected energies: 2.061 eV, 2.078 eV, and 2.093 eV (see the white vertical dashed lines in Fig. <ref>(b)). The resulting RSE spectra, displayed as a function of the Stokes shift, measured at 2.061 eV (in resonance with the negatively charged exciton), 2.078 eV, and 2.093 eV (in resonance with the neutral exciton) are shown in Fig. <ref>(a) and Fig. <ref>(b), respectively. When interpreting the spectra, it is important to keep in mind that although they correspond to Raman scattering, they were not obtained at a constant excitation energy, as is typical for the Raman scattering technique. As a result, the spectra strongly depend on the detection energy, which corresponds to different resonance conditions. In particular, the richest spectrum can be observed while the scattered light is in resonance with the neutral exciton, which points to the role of exciton-phonon interactions in the Raman scattering processes. Let us focus first on the highest-energy region (see Fig. <ref>(b)). There are three zone-center (Γ) Raman active modes in monolayer 2H-TMDs, which belong to the A'_1, E', and E" representations <cit.>. Two of them (A'_1 and E') are usually observed in the back-scattering configuration, in which the presented results have been obtained. The corresponding Raman scattering peaks can be seen in our experiment at 418 cm^-1 and 357 cm^-1, respectively. As is widely accepted, the broadening of the latter line results from the double 2LA(M) process <cit.>, whose peak emerges at 352 cm^-1 in the spectrum.
In order to be Raman-active, the E" process requires the electric field of the laser light to be oriented out of the structure's plane, which is not possible in the back-scattering configuration. However, although the process is in principle forbidden in the geometry of our experiment <cit.>, its observation under resonant excitation conditions is often reported in TMDs <cit.>. We therefore ascribe the peak at 331 cm^-1 to the E" process, as no other phonon mode is expected near this frequency. It should be noted that the energy of the peak matches quite closely the maximum of the total density of phonon states related to the phonon dispersion near the M point of the BZ <cit.>. Another peak related to phonons from the border of the BZ, namely ZA(M), is observed at 146.5 cm^-1. Its attribution to out-of-plane acoustic phonons was previously proposed in Ref. . A weak feature at 176 cm^-1 can be attributed to a longitudinal acoustic mode from the M point of the BZ, LA(M) <cit.>. When analyzing the Raman features related to the border of the BZ, one must consider the possible presence of disorder in the structure. The phonon localization induced by disorder relaxes the momentum conservation rule, which allows the involvement of a single mode from outside of the Γ point <cit.>. The low intensity of the LA(M) feature suggests a rather low impact of disorder, while the much higher intensities of the ZA(M) and E"(M) features are more likely due to the resonance with the neutral exciton. Following the assignment of the peak at 146.5 cm^-1 to the ZA(M) process, it is natural to ascribe the relatively high-intensity peak observed at 294 cm^-1 to the process involving double out-of-plane acoustic phonons from the edge of the BZ, 2ZA(M) <cit.>. Other features which were previously observed in the Raman scattering spectrum of monolayer WS_2 are combined processes: A'_1(M) + LA(M) at 585 cm^-1 and 4LA(M) at 703 cm^-1 <cit.>. We ascribe the peak occurring at 714 cm^-1 to the double 2E'(Γ) process. The assignment of the peaks at 146.5 cm^-1 and 176 cm^-1 to the ZA(M) and LA(M) phonons, respectively, permits us to propose an identification of other combined acoustic processes at 475 cm^-1, 498 cm^-1, 648 cm^-1, and 680 cm^-1 as LA(M) + 2ZA(M), 2LA(M) + ZA(M), 2LA(M) + 2ZA(M), and 3LA(M) + ZA(M), respectively. Moreover, the low-energy shoulder of the feature due to the E" process is close to the expected energy of the combined LA(M) + ZA(M) processes (322.5 cm^-1). Finally, the peaks at 775 cm^-1 and 836 cm^-1 can be attributed to the sum E'(Γ)+A'_1(Γ) and the double 2A'_1(Γ) processes. The apparent broadening of the former feature may also result from the contribution of the 4LA(M) process. Neither of these features has previously been reported for monolayer WS_2.
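Since these combination-mode assignments amount to adding the single-phonon frequencies, they are easy to verify numerically. The sketch below is our own illustrative check, with all frequencies in cm^-1 taken from the text; the few-cm^-1 offsets between expected sums and observed peaks are compatible with phonon dispersion near the M point and peak-fitting uncertainty.

```python
# Combination-mode frequencies expected from the single-phonon assignments
# LA(M) = 176 cm^-1 and ZA(M) = 146.5 cm^-1, compared with observed peaks.
LA, ZA = 176.0, 146.5  # cm^-1

combinations = {
    "LA(M) + ZA(M)":   (LA + ZA,         322.5),  # shoulder of E'' feature
    "LA(M) + 2ZA(M)":  (LA + 2 * ZA,     475.0),
    "2LA(M) + ZA(M)":  (2 * LA + ZA,     498.0),
    "2LA(M) + 2ZA(M)": (2 * LA + 2 * ZA, 648.0),
    "3LA(M) + ZA(M)":  (3 * LA + ZA,     680.0),
}

for name, (expected, observed) in combinations.items():
    print(f"{name:16s} expected {expected:6.1f}, observed {observed:6.1f}, "
          f"offset {observed - expected:+5.1f} cm^-1")
```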
The RSE spectrum detected at 2.078 eV, in the intermediate energy range (see Fig. <ref>(a)), is dominated by two Raman scattering modes: A'_1(Γ) and 2LA(M)/E'(Γ). The relatively high intensity of the peak due to the 2ZA(M) mode should be noted (see the arrow in Fig. <ref>(a)). The optical response in the lowest-energy range (at 2.061 eV), which corresponds to the resonance of the scattered light with the charged exciton (see Fig. <ref>(a)), shows an emission band of substantial intensity. As can already be noticed in Fig. <ref>, the emission due to the charged exciton is strongly enhanced by the resonance with two main Raman-active processes: A'_1(Γ) and 2LA(M)/E'(Γ). The overall emission is also shifted to higher energies while being in resonance with the light scattered by those two processes. When the excitation energy increases towards non-resonant conditions, the intensity saturates at a substantially lower level. The difference between the resonant and non-resonant conditions may correspond to a fundamental change in the processes involved in the optical response. Out of resonance, only the PL emission is observed. In resonance, the optical response is also due to cascade Raman scattering processes, which involve optical and acoustic phonons. In the latter case, the energy of the photoexcited carriers is lost in a series of scattering processes, while the intensity of the emission in the former process basically does not change with increasing excitation energy. The difference between the two mechanisms responsible for the optical emission can be further appreciated by inspecting the spectra excited with laser light of particularly selected energies, as shown in Fig. <ref>. It is clearly seen in Fig. <ref> that the intensity and lineshape of the optical spectrum related to the charged and neutral excitons strongly depend on the resonance conditions. The intensity of the optical response due to the charged exciton is strongly enhanced while being in outgoing resonance with the main vibrational modes of the crystal lattice: 2LA(M)/E'(Γ) or A'_1(Γ) (see Fig. <ref>(a)). The resonance of the light scattered by vibrational modes of the crystal with the neutral exciton results in the enhancement of the Raman scattering features with no strong effect on the background PL emission (see Fig. <ref>(b)). Moreover, it can be seen in Fig. <ref>(a) that the peak related to the 2LA(M)/E'(Γ) process is accompanied by an additional structure whose maximum occurs ∼14 cm^-1 above. The structure is clearly visible while the outgoing resonance with the 2LA(M)/E'(Γ) mode takes place at the energy corresponding to the maximum of the charged exciton peak, and it becomes less pronounced outside that energy region. The lineshape of the spectrum around the resonance with the A'_1(Γ) peak also suggests an additional contribution from higher-energy processes. A weak, broad feature at an energy higher by ∼17 cm^-1 than the energy of the A'_1(Γ) peak can also be distinguished. Both high-energy structures must result from multiphonon processes, which involve principal optical phonons (2LA(M)/E'(Γ) or A'_1(Γ)) and additional acoustic phonons. Additional structures emerging on the higher-energy slopes of the principal phonon modes in the investigated monolayer are related to multiphonon scattering by optical (LO) and acoustic phonons. The exciton-enhanced multiphonon Raman scattering can be explained in terms of a "cascade model". The process involves (1) an optical excitation of an exciton, (2) its relaxation with the emission of an optical phonon down to the vicinity of the band, (3) the subsequent emission of acoustic phonon(s), and (4) finally its radiative recombination <cit.>. The peak due to an additional acoustic process, which follows the emission of the optical phonon, is broader than the LO phonon peak, with the energy occurring at the largest value of the crystal momentum allowed by the exciton dispersion. The cascade scattering can be strongly enhanced by an outgoing resonance with excitonic complexes, as observed in several semiconductor systems <cit.>.
A resonance with the recombination of a free electron and a hole localized on a carbon acceptor in GaAs (e, A^0) also leads to a similar effect <cit.>. Raman peaks related to the combined processes are dispersive, which reflects the exciton dispersion <cit.>. The absence of clear dispersion of the relevant peaks in our data most likely results from the limited spectral resolution of the experimental set-up. The resonance of the scattered light with the neutral exciton results in quite different spectra, as shown in Fig. <ref>(b). The resonance induces a strong enhancement of several Raman peaks, clearly visible in Fig. <ref>, but not of the background PL emission. This may point to a different exciton-phonon interaction compared with the charged exciton. In the case of the charged exciton, the strong optical response may be related to cascade Raman scattering involving both optical and acoustic phonons. The resonance with the neutral exciton gives rise mainly to the enhancement of Raman scattering by discrete modes. The enhanced modes, whose attribution has already been discussed, can clearly be seen in Fig. <ref>(b). Our results underline the complicated character of exciton-phonon interactions in thin TMD layers. This statement is even more valid in view of recent results reported by C. M. Chow and co-workers <cit.>, who showed virtually no effect of the Raman scattering on the emission due to the negatively charged exciton in monolayer MoSe_2 and a crucial effect of multiple LA(M) phonon emission on the neutral exciton. This might look surprising, as both WS_2 and MoSe_2 share the same crystallographic structure. It is, moreover, known that the influence of resonant excitation on Raman scattering can differ critically between materials <cit.>, which can be explained by theoretical calculations. The explanation presented in Ref.  requires solid theoretical justification, which is beyond the scope of our experimental work. We can, however, stress two points which may be important for the possible analysis. The first is a crucial difference in the electronic structure of MoSe_2 and WS_2. Monolayer WS_2 is a darkish material, in which the energetically lowest transition is optically inactive <cit.>. Monolayers of MoSe_2 are bright, which means that in their case the energetically lowest transition is optically active <cit.>. Second, a closer inspection of our results shows that the resonantly enhanced emission due to the charged exciton is blue-shifted as compared to the emission excited out of resonance (see Fig. <ref>(b)). Recently, it has been reported that the PL spectrum due to the negatively charged exciton in monolayer WS_2 is composed of two lines associated with two possible states of that exciton: intravalley (singlet) and intervalley (triplet) (for details see Ref. ). In consequence, the observed blue shift may suggest that the resonance involves the intervalley (triplet) state of the charged exciton. This contrasts with monolayer MoSe_2, where the charged exciton is ascribed to the intravalley (singlet) state. This could explain the difference between the resonant-excitation effect on the charged exciton emission seen in our results and that reported in Ref. . These facts may be of importance for the explanation of the data, and we believe that they will trigger some interest in establishing a theoretical framework.
§ CONCLUSIONS

We have presented a study of low-temperature optical emission from monolayer WS_2 excited resonantly in the energy range corresponding to the neutral and charged excitons. A clear difference between the Raman scattering excitation spectra detected at the energies of the negatively charged and neutral excitons has been observed, reflecting the differences in the electron-phonon interactions involved. The resonance of the emitted light with the negatively charged exciton results in cascade scattering by the A'_1(Γ), 2LA(M)/E'(Γ), and acoustic phonons, which strongly enhances the related optical response of the system. The outgoing resonance with the neutral exciton leads to the enhancement of the Raman scattering intensity by several processes, including the double A'_1(Γ) one. It has also been shown that the RSE spectroscopy employed in our experiment represents a sensitive tool to study electron-phonon interactions in thin films of TMD materials.

§ METHODS

The WS_2 monolayer under investigation was prepared by mechanical exfoliation of a bulk crystal purchased from HQ Graphene. Initially, thin WS_2 flakes were exfoliated onto a polydimethylsiloxane (PDMS) stamp attached to a glass plate. The monolayers were then identified based on their optical contrast and cross-checked with the use of room-temperature Raman scattering and PL measurements. The highest-quality flakes were finally transferred to chemically cleaned and oxygen-plasma-activated Si/(300 nm) SiO_2 substrates following a protocol similar to that described in Ref. . Topography images of selected flakes were acquired using an NSV-VEECO-D3100 atomic force microscope operated in tapping mode under ambient conditions. Raman scattering measurements were carried out at low temperature (T=5 K) using a typical set-up for PL and PL excitation experiments. The investigated sample was placed on a cold finger in a continuous flow cryostat mounted on x-y motorized positioners. The non-resonant PL measurements were carried out using 514.5 nm radiation from a continuous-wave Ar^+ laser. To study the optical response of the system as a function of excitation energy, a dye laser based on Rhodamine 6G was used, providing a tunable wavelength range extending from about 633 nm to almost 560 nm. The excitation light was focused by means of a 50x long-working-distance objective (NA=0.50) producing a spot of about 1 μm diameter. The signal was collected via the same microscope objective, sent through a 0.5-m-long monochromator, and then detected by a charge-coupled device camera.

§ ACKNOWLEDGEMENTS

The work has been supported by the European Research Council (MOMB project no. 320590), the EC Graphene Flagship project (no. 604391), the National Science Center (grant no. DEC-2013/10/M/ST3/00791), the Nanofab facility of the Institut Néel, CNRS UGA, and the ATOMOPTO project (TEAM programme of the Foundation for Polish Science, co-financed by the EU within the ERDF).

§ AUTHOR CONTRIBUTIONS STATEMENT

M.R.M. carried out the optical experiments and performed the preliminary data analysis. K.N. fabricated the samples under study and performed their AFM characterization. M.P. supervised the project and contributed to data analysis. A.B. performed the final data analysis. M.R.M. and A.B. wrote the paper with contributions from K.N. and M.P.

§ ADDITIONAL INFORMATION

Competing financial interests: The authors declare no competing financial interests.
http://arxiv.org/abs/1703.09175v2
{ "authors": [ "Maciej R. Molas", "Karol Nogajewski", "Marek Potemski", "Adam Babinski" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170327164235", "title": "Raman scattering excitation spectroscopy in monolayer WS$_2$" }
Extending Growth Mixture Models Using Continuous Non-Elliptical Distributions
Yuhong Wei^*, Yang Tang^*, Emilie Shireman^†, Paul D. McNicholas^*, and Douglas L. Steinley^† (^*Department of Mathematics & Statistics, McMaster University, Hamilton, ON, Canada; ^†Department of Psychological Sciences, University of Missouri, Columbia, MO, U.S.)

Growth mixture models (GMMs) incorporate both conventional random effects growth modeling and latent trajectory classes as in finite mixture modeling; therefore, they offer a way to handle the unobserved heterogeneity between subjects in their development. GMMs with Gaussian random effects dominate the literature. When the data are asymmetric and/or have heavier tails, more than one latent class is required to capture the observed variable distribution. Therefore, a GMM with continuous non-elliptical distributions is proposed to capture skewness and heavier tails in the data set. Specifically, multivariate skew-t distributions and generalized hyperbolic distributions are introduced to extend GMMs. When extending GMMs, four statistical models are considered with differing distributions of measurement errors and random effects. The mathematical development of GMMs with non-elliptical distributions relies on their expression as normal mean-variance mixtures and the resultant relationship with the generalized inverse Gaussian distribution. Parameter estimation is outlined within the expectation-maximization framework before the performance of our GMMs with non-elliptical distributions is illustrated on simulated and real data.

§ INTRODUCTION

Many longitudinal studies focus on investigating how individuals change over time with respect to a characteristic that is measured repeatedly for each participant. Conventional random effect growth modeling has provided a number of tools for modeling intra-individual change and inter-individual differences in change <cit.>, where within-class changes are described as a function of time and between-class changes are described by random effects and coefficients. Conventional random effects models provide a basis for formulating growth mixture models (GMMs) for longitudinal data. In the latent variable framework, a quadratic random effects growth model with a continuous outcome y_it for individual i at time t, and time-invariant covariates x_i, is specified according to

y_it = η_0i + η_1i(a_t - a_0) + η_2i(a_t - a_0)^2 + ϵ_it,
η_0i = α_0 + γ_0'x_i + ζ_0i,
η_1i = α_1 + γ_1'x_i + ζ_1i,
η_2i = α_2 + γ_2'x_i + ζ_2i,

where a_t are time scores (t=1,2,…,T) centred at a_0, η_0i is the random intercept, η_1i is the random linear slope, η_2i is the quadratic growth rate, and ϵ_i and ζ_i are normally distributed residuals, i.e., ϵ_i ∼ 𝒩(0, Θ) and ζ_i ∼ 𝒩(0, Ψ). Formally, η_0i, η_1i, and η_2i are continuous latent variables (called the growth factors) representing the growth patterns, the α_k are the mean parameters for the growth factors if there are no covariates x_i, and the γ_k are the effects of the covariates x_i on the growth factors. In conventional growth modeling applications, the individual growth parameters (e.g., individual intercept and slope factors) are usually assumed to be identically distributed, i.e., drawn from a single homogeneous population.
However, we are often interested in and deal with samples from multiple populations and, in most cases, the class memberships are either unknown or unobserved. Simultaneous modeling of change over time and unobserved multiple populations (heterogeneity) in the data can be accommodated using GMMs and latent class growth analysis (LCGA), wherein parameters describing growth patterns are estimated and each individual's most likely class membership is obtained via maximum a posteriori (MAP) probabilities. GMMs were introduced by <cit.> and then extended by <cit.>, <cit.>, and <cit.>. For convenience, define c_ik so that c_ik = 1 if individual i falls in class k and c_ik = 0 otherwise. The quadratic growth model in (<ref>) and (<ref>) can be extended to a simple GMM in class k (k=1,2,…,K) via

y_it | c_ik=1 = η_0i + η_1i(a_t - a_0) + η_2i(a_t - a_0)^2 + ϵ_it,
η_0i | c_ik=1 = α_0k + γ_0'x_i + ζ_0i,
η_1i | c_ik=1 = α_1k + γ_1'x_i + ζ_1i,
η_2i | c_ik=1 = α_2k + γ_2'x_i + ζ_2i,

where the α_k = (α_0k, α_1k, α_2k)' parameters vary across classes to capture different trajectories; the parameters γ_0, γ_1, and γ_2 remain the same across classes, but this could be relaxed to allow variation across classes with respect to how a covariate affects the growth factors; and ϵ_i and ζ_i are still normally distributed residuals but with class-specific covariance matrices, i.e., ϵ_i ∼ 𝒩(0, Θ_k) and ζ_i ∼ 𝒩(0, Ψ_k). The latent class growth analysis (LCGA) developed by Nagin and Land <cit.> can be thought of as a special case of GMMs in the sense that it assumes zero within-class growth factor variances, i.e., Ψ_k = 0 for k=1,2,…,K.

One common fundamental assumption for GMMs is that model errors are normally distributed. However, simulation studies in <cit.> show that, when the data are drawn from a single non-Gaussian distribution, a two-class Gaussian GMM is preferred when fitting the data. In such cases, the within-class parameter estimates become uninterpretable because there are too many groups. <cit.> give an example of strongly non-normal outcomes, i.e., body mass index (BMI) development over age, and show that more than one latent class is required to capture the observed variable distribution; interpreting mixture components as subpopulations will then lead to overestimation of the number of subpopulations. Relaxing the normality assumption to accommodate asymmetry and skewness, <cit.> develop a GMM with a normally distributed model error and "classical" multivariate skew-t (cMST) random effects, i.e., ϵ_i ∼ 𝒩(0, Θ_k) and ζ_i ∼ MST(0, Ψ_k). An alternative specification of GMMs (called nonlinear mixed effects mixture models) was developed by <cit.> within a Bayesian framework, wherein ϵ_i is assumed to follow a cMST distribution while ζ_i remains normally distributed. Note that the "classical" formulation of the multivariate skew-t distribution is that given by <cit.> and <cit.>. In this paper, we outline a more general extension of GMMs to the generalized hyperbolic distribution while also considering the formulation of the multivariate skew-t distribution that arises as its special and limiting case (Section <ref>). The advantage of the generalized hyperbolic distribution is its flexibility.
Many other well-known distributions are special or limiting cases of the generalized hyperbolic distribution; please refer to <cit.> for details on a variety of limiting cases. The remainder of this article is laid out as follows. In Section <ref>, we go through some important background material on the generalized hyperbolic distribution as well as a special and limiting case that gives a formulation of the multivariate skew-t distribution. In Section <ref>, we outline the extension of GMMs to generalized hyperbolic and multivariate skew-t distributions, respectively. Section <ref> presents an expectation-maximization (EM) algorithm for obtaining maximum likelihood estimates of model parameters. Then, our approach is illustrated on simulated and real data (Section <ref>). The paper concludes with some discussion and suggestions for future work (Section <ref>).

§ BACKGROUND

§.§ Generalized hyperbolic distribution

A multivariate generalized hyperbolic distribution arises from a multivariate mean-variance mixture where the weight function h(w|ω,η,λ) is the density of a generalized inverse Gaussian (GIG) distribution. The density of the GIG distribution is given by

h(w|ω,η,λ) = (w/η)^{λ-1} / (2η K_λ(ω)) × exp{ -(ω/2)(w/η + η/w) }

for w>0, where η>0 is a scale parameter, ω>0 is a concentration parameter, K_λ denotes the modified Bessel function of the third kind with index λ, and λ characterizes certain subclasses and considerably influences the size of the tail weights. Write W ∼ ℐ(ω,η,λ) to denote that the random variable W has the density in (<ref>). The GIG distribution has some attractive features, including tractable expected values. Consider W ∼ ℐ(ω,η,λ); then the following expected values hold:

E[W] = η K_{λ+1}(ω)/K_λ(ω),
E[1/W] = (1/η) K_{λ+1}(ω)/K_λ(ω) - 2λ/(ωη),
E[log W] = log η + (1/K_λ(ω)) ∂K_λ(ω)/∂λ.

Extensive details on the GIG distribution can be found in <cit.>. <cit.> show that a p-dimensional generalized hyperbolic random variable X can be generated using the relationship

X = μ + Wβ + √(W) U,

where W ∼ ℐ(ω,1,λ), μ and β are p-vectors that play the role of location and skewness parameters, respectively, and U ∼ 𝒩(0, Σ). From (<ref>), it follows that X | w ∼ 𝒩(μ + wβ, wΣ). Now, recalling that W ∼ ℐ(ω,1,λ) and that the unconditional distribution of X is generalized hyperbolic, Bayes' theorem gives

W | X ∼ GIG( ω + β'Σ^{-1}β, ω + δ(X, μ|Σ), λ - p/2 ).

Under this parameterization, a p-dimensional multivariate generalized hyperbolic distribution has density

f(x|ϑ) = [ (ω + δ(x,μ|Σ)) / (ω + β'Σ^{-1}β) ]^{(λ - p/2)/2} × K_{λ-p/2}( √( [ω + β'Σ^{-1}β][ω + δ(x,μ|Σ)] ) ) / ( (2π)^{p/2} |Σ|^{1/2} K_λ(ω) exp{ -(x - μ)'Σ^{-1}β } ),

with index parameter λ, concentration parameter ω, skewness parameter β, mean vector μ, and scale matrix Σ. Here, δ(x,μ|Σ) is the squared Mahalanobis distance between x and μ, i.e., δ(x,μ|Σ) = (x - μ)'Σ^{-1}(x - μ), K_{λ-p/2} and K_λ are modified Bessel functions of the third kind with indices λ - p/2 and λ, respectively, and ϑ = (λ, ω, β, μ, Σ) denotes the model parameters. Herein, let X ∼ GHD_p(λ, ω, μ, Σ, β) represent a p-dimensional generalized hyperbolic random variable X with density as per (<ref>). Note that the parameterization in (<ref>) is one of several available for multivariate generalized hyperbolic distributions <cit.>. There are a number of special and limiting cases that can be derived from the generalized hyperbolic distribution. However, the presence of the index parameter λ enables a flexibility that is not found in its special and limiting cases.
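The mean-variance mixture representation in (<ref>) also gives a direct recipe for simulating generalized hyperbolic variates. A sketch in Python follows; it assumes SciPy's geninvgauss, whose (p, b) parameterization coincides with the GIG density above when η = 1 (p = λ, b = ω), and the symbol names mirror the text.

```python
import numpy as np
from scipy.special import kv                 # modified Bessel function K_lambda
from scipy.stats import geninvgauss

def rghd(n, lam, omega, mu, Sigma, beta, rng=None):
    """Simulate X = mu + W*beta + sqrt(W)*U with W ~ GIG(omega, 1, lam) and
    U ~ N(0, Sigma), i.e., X ~ GHD_p(lam, omega, mu, Sigma, beta)."""
    rng = np.random.default_rng(rng)
    w = geninvgauss.rvs(lam, omega, size=n, random_state=rng)   # eta = 1
    u = rng.multivariate_normal(np.zeros(len(mu)), Sigma, size=n)
    return mu + np.outer(w, beta) + np.sqrt(w)[:, None] * u

def gig_mean(omega, eta, lam):
    """E[W] for W ~ GIG(omega, eta, lam), from the formula in the text."""
    return eta * kv(lam + 1, omega) / kv(lam, omega)

# Sanity check: the empirical mean of simulated weights matches E[W].
rng = np.random.default_rng(1)
w = geninvgauss.rvs(-0.5, 2.0, size=200_000, random_state=rng)
print(w.mean(), gig_mean(2.0, 1.0, -0.5))    # the two should agree closely
```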
Figure <ref> illustrates the Gaussian distribution as well as a skew-t distribution with ν=5 degrees of freedom and the generalized hyperbolic distribution for two different values of λ; clearly, the different values of λ lead to very different densities.

§.§ Multivariate skew-t distribution

Several alternative formulations of the multivariate skew-t distribution have appeared in the literature, e.g., <cit.>, <cit.>, and <cit.>. Some recent discussion of these formulations is given by <cit.>. <cit.> developed a GMM with the SDB version of the restricted multivariate skew-t distribution, i.e., the version of <cit.>. The formulation of the multivariate skew-t distribution used herein arises as a special and limiting case of the generalized hyperbolic distribution by setting λ = -ν/2 and χ = ν while also letting ψ → 0. This formulation of the multivariate skew-t distribution has been used by <cit.> to develop a mixture of skew-t factor analyzers model and by <cit.> to develop a mixture of common skew-t factor analyzers. A p-dimensional skew-t random variable X, with this formulation, has the density function

f(x|ϑ) = [ (ν + δ(x,μ|Σ)) / (β'Σ^{-1}β) ]^{(-ν-p)/4} × ν^{ν/2} K_{(-ν-p)/2}( √( [β'Σ^{-1}β][ν + δ(x,μ|Σ)] ) ) / ( (2π)^{p/2} |Σ|^{1/2} Γ(ν/2) 2^{ν/2-1} exp{ -(x - μ)'Σ^{-1}β } ),

where μ is the location parameter, Σ is the scale parameter, β is the skew parameter, ν is the degrees of freedom parameter, and K_{(-ν-p)/2} and δ(x,μ|Σ) are as defined in (<ref>). We write X ∼ GST(μ, Σ, β, ν) to denote that the random variable X follows the skew-t distribution with the density in (<ref>). Now, X ∼ GST(μ, Σ, β, ν) can be obtained through the relationship in (<ref>) with W ∼ IG(ν/2, ν/2), where IG(·) denotes the inverse-gamma distribution. We have X | w ∼ 𝒩(μ + wβ, wΣ) and so, from Bayes's theorem,

W | X ∼ GIG( β'Σ^{-1}β, ν + δ(X, μ|Σ), -(ν+p)/2 ).

§.§ The EM algorithm and its convergence criterion

The EM algorithm <cit.> is an iterative algorithm for finding maximum likelihood estimates when data are incomplete or treated as such, and is widely used to estimate model parameters in the context of model-based clustering. The E-step computes the expected value of the complete-data log-likelihood given the current model parameters, and the M-step maximizes this expected value with respect to the model parameters. After each E- and M-step, the log-likelihood is driven uphill, and the method iterates towards a maximum until some convergence criterion is satisfied. Many variants of the EM algorithm have been proposed over the years, such as the expectation-conditional-maximization (ECM) algorithm <cit.>, the alternating ECM (AECM) algorithm <cit.>, and the Fisher-EM algorithm <cit.>. Herein, we make use of the EM algorithm for parameter estimation, and a stopping criterion based on the Aitken acceleration <cit.> is used to determine convergence. The Aitken acceleration at iteration s is

a^(s) = ( l^(s+1) - l^(s) ) / ( l^(s) - l^(s-1) ),

where l^(s) is the (observed) log-likelihood value at iteration s. This yields an asymptotic estimate of the log-likelihood at iteration s+1, given by

l_∞^(s+1) = l^(s) + ( l^(s+1) - l^(s) ) / ( 1 - a^(s) )

<cit.>, and the EM algorithm is stopped when l_∞^(s+1) - l^(s) < ϵ, provided this difference is positive <cit.>. Note that this criterion is at least as strict as the lack-of-progress criterion in the neighbourhood of a maximum.
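In code, this stopping rule needs only the trail of log-likelihood values; the following is a sketch, with the tolerance value chosen purely for illustration.

```python
def aitken_converged(loglik, eps=1e-6):
    """Aitken-acceleration stopping rule: given log-likelihoods
    l^(1), ..., l^(s+1), return True when l_inf^(s+1) - l^(s) is
    non-negative and below eps."""
    if len(loglik) < 3:
        return False
    l_prev2, l_prev, l_curr = loglik[-3], loglik[-2], loglik[-1]
    # Aitken acceleration a^(s); assumes the trail is not exactly flat.
    a = (l_curr - l_prev) / (l_prev - l_prev2)
    l_inf = l_prev + (l_curr - l_prev) / (1.0 - a)   # asymptotic estimate
    diff = l_inf - l_prev
    return 0 <= diff < eps

# Inside an EM loop: append each new log-likelihood to a list and test
# if aitken_converged(loglik_trace): break
```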
§.§ Model selection

In model-based clustering, a penalized log-likelihood-based criterion is typically used to determine the "best" fitting model among a family of models. The most popular such criterion is the Bayesian information criterion <cit.>, which can be motivated as an approximation to a Bayes factor <cit.>. The BIC is defined as

BIC = 2l(ϑ̂) - ρ log n,

where ϑ̂ is the maximum likelihood estimate of the model parameters ϑ, l(ϑ̂) is the maximized log-likelihood, ρ is the number of free parameters, and n is the number of observations. Some theoretical support for the use of the BIC in mixture model selection is given by <cit.> and <cit.>.

§ METHODOLOGY

§.§ Conventional GMM with Gaussian random effects

Suppose a longitudinal study features n subjects and T time points or measurement occasions. For subject i (i = 1, …, n), let y_i be a T × 1 vector y_i = (y_i1, y_i2, …, y_iT)', where y_it represents the outcome on occasion t (t = 1, …, T), let x_i = (x_i1, x_i2, …, x_im)' be an m × 1 vector of observed time-invariant covariates, let η_i be a q × 1 vector containing q continuous latent variables, and note that C_i = (C_i1, …, C_iK)' has a multinomial distribution, where C_ik = 1 if individual i is in class k and C_ik = 0 otherwise. The conventional GMM with Gaussian random effects can be represented using a hierarchical three-level formulation as follows.

At level 1 of the GMM, the continuous outcome variables y_1, …, y_n are related to the continuous latent variables η_1, …, η_n via

y_i | (c_ik = 1) = Λ_y η_i + ϵ_i

for i = 1, …, n, where ϵ_i is a T × 1 vector of residuals or measurement errors that is assumed to follow a multivariate normal distribution, i.e., ϵ_i ∼ 𝒩(0, Θ_k), and Λ_y is a T × q design matrix consisting of factor loadings with each column corresponding to a specific aspect of change. The matrix Λ_y and the vector η_i determine the growth trajectory of the model. For instance, when q = 3, η_i = (η_0i, η_1i, η_2i)' and Λ_y is a T × 3 matrix. Assuming the a_t are age-related time scores (t = 1, 2, …, T) centred at age a_0, the design matrix Λ_y is given by

Λ_y = [ 1, a_1 - a_0, (a_1 - a_0)^2 ; 1, a_2 - a_0, (a_2 - a_0)^2 ; ⋮, ⋮, ⋮ ; 1, a_T - a_0, (a_T - a_0)^2 ].

At level 2 of the GMM, the continuous latent variables η_i are related to the latent categorical variables C_i and to the observed time-invariant covariate vector x_i by the relation

η_i | (c_ik = 1) = α_k + Γ_k x_i + ζ_i,

where α_k (k = 1, …, K) denotes the intercept parameter for class k, ζ_i is a q-dimensional vector of residuals assumed to follow a multivariate normal distribution ζ_i ∼ 𝒩(0, Ψ_k), and Γ_k is a q × m parameter matrix representing the effect of x_i on the latent continuous variables η_i, assumed to differ among classes. Note that the level 2 errors ζ_i are uncorrelated with the measurement errors ϵ_i. We may also constrain the class-specific effects Γ_k in (<ref>) to be equal across classes. By combining the first two levels of the GMM, we have

p(y_i | x_i) = ∑_{k=1}^K π_k ϕ(y_i; μ_k, Σ_k),

where π_k = Pr(C_ik = 1) is the kth class probability or mixing proportion, satisfying π_k ∈ (0, 1] and ∑_{k=1}^K π_k = 1, and ϕ(·; μ_k, Σ_k) is a multivariate Gaussian density with mean μ_k = Λ_y(α_k + Γ_k x_i) and covariance matrix Σ_k = Λ_y Ψ_k Λ_y' + Θ_k. Notice that the GMM in (<ref>) assumes that the class probability π_k is constant across individuals.
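To make the two-level formulation concrete, a short sketch follows that builds Λ_y for the quadratic case and evaluates the combined density (<ref>); all names and the example ages are our own illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

def design_matrix(ages, a0):
    """Quadratic-growth design matrix Lambda_y: row t is
    (1, a_t - a0, (a_t - a0)^2), giving T rows and q = 3 columns."""
    a = np.asarray(ages, dtype=float) - a0
    return np.column_stack([np.ones_like(a), a, a**2])

def gmm_density(y, x, pis, alphas, Gammas, Psis, Thetas, Lam):
    """Two-level Gaussian GMM density: p(y | x) = sum_k pi_k *
    N(y; Lam(alpha_k + Gamma_k x), Lam Psi_k Lam' + Theta_k)."""
    dens = 0.0
    for pi_k, a_k, G_k, Psi_k, Th_k in zip(pis, alphas, Gammas, Psis, Thetas):
        mu_k = Lam @ (a_k + G_k @ x)
        Sigma_k = Lam @ Psi_k @ Lam.T + Th_k
        dens += pi_k * multivariate_normal.pdf(y, mean=mu_k, cov=Sigma_k)
    return dens

# Example: Lam = design_matrix([16, 17, 18, 19], a0=16), matching four
# equally spaced measurement occasions.
```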
At level 3 of the GMM, we assume that the class probabilities are not constant for each class but depend on the observed covariates. In other words, we want to know how π_k is related to an individual's background variables, e.g., gender and income. At this level, the categorical latent variables C_i represent membership of mixture components that are related to x_i through a multinomial logit regression for unordered categorical responses. Define π_ik = Pr(C_ik = 1 | x_i), i.e., the probability that subject i falls into the kth class, depending on the covariates x_i. Let π_i = (π_i1, π_i2, …, π_iK)' and

logit(π_i) = ( log(π_i1/π_iK), log(π_i2/π_iK), …, log(π_i,K-1/π_iK) )'.

Then,

logit(π_i) = α_c + Γ_c x_i,

where α_c is a (K-1)-vector of parameters and Γ_c is a (K-1) × m parameter matrix. By combining these three levels of the GMM, we have

p(y_i | x_i) = ∑_{k=1}^K π_ik ϕ(y_i; μ_k, Σ_k),

where ϕ(·; μ_k, Σ_k) is defined as in (<ref>). Note that the right-hand side of (<ref>) is not a finite mixture model because the class probabilities are not constant with respect to i.

§.§ GMM with generalized hyperbolic random effects

The conventional GMM assumes that the residuals ϵ_i and ζ_i have multivariate Gaussian distributions with zero means and within-class covariance matrices, respectively. We are interested in constructing a GMM with generalized hyperbolic model errors, denoted GHD-GMM. The generalized hyperbolic distribution can be represented as a normal mean-variance mixture, where the mixing weight has a GIG distribution (see Section <ref>). To this end, we introduce a latent continuous variable W_ik with W_ik | c_ik = 1 ∼ ℐ(ω_k, 1, λ_k). Accordingly, conditional on c_ik and w_ik, we assume that the model errors ϵ_i and ζ_i are non-centered Gaussian error terms with distinct covariance matrices:

ϵ_i | w_ik, c_ik = 1 ∼ 𝒩(w_ik β_yk, w_ik Θ_k),
ζ_i | w_ik, c_ik = 1 ∼ 𝒩(w_ik β_ηk, w_ik Ψ_k),

where Θ_k is the diagonal covariance matrix for ϵ_i and Ψ_k is the covariance matrix for ζ_i. The T-dimensional vector β_yk is a vector of skewness parameters, which we refer to as the skewness parameter for the measurement errors. The q-dimensional vector β_ηk is the vector of skewness parameters for the continuous latent variables η_i. Then, based on (<ref>) and (<ref>), the observed random variables y_i, conditional on η_i, c_ik, and w_ik, follow a conditional Gaussian distribution of the form

y_i | η_i, w_ik, c_ik = 1 ∼ 𝒩(Λ_y η_i + w_ik β_yk, w_ik Θ_k).

Based on (<ref>) and (<ref>),

η_i | x_i, w_ik, c_ik = 1 ∼ 𝒩(α_k + Γ_k x_i + w_ik β_ηk, w_ik Ψ_k)

and, from the preceding equations, we have the conditional distribution

y_i | x_i, w_ik, c_ik = 1 ∼ 𝒩(μ_k + w_ik(Λ_y β_ηk + β_yk), w_ik Σ_k),

where μ_k = Λ_y(α_k + Γ_k x_i) and Σ_k = Λ_y Ψ_k Λ_y' + Θ_k. From (<ref>), we obtain the conditional distributions

η_i | x_i, c_ik = 1 ∼ GHD_q(λ_k, ω_k, α_k + Γ_k x_i, Ψ_k, β_ηk),
y_i | x_i, c_ik = 1 ∼ GHD_T(λ_k, ω_k, μ_k, Σ_k, Λ_y β_ηk + β_yk).

By combining the preceding setup and level 3 of the GMM, we arrive at a GMM with density

p(y_i | x_i) = ∑_{k=1}^K π_ik f_GHD_T(y_i; λ_k, ω_k, μ_k, Σ_k, Λ_y β_ηk + β_yk),

where f_GHD_T(·) is the density of a T-dimensional random variable following a generalized hyperbolic distribution. Note that the overall skewness for y_i is Λ_y β_ηk + β_yk. Note also that, within this setup, the dependent observed variable y_i, the latent growth factors η_i, and the residual variables ϵ_i and ζ_i all have generalized hyperbolic distributions. Note that the distribution of the covariates x_i is not modelled; please refer to <cit.> for detailed explanations.
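The hierarchical construction above can be simulated directly; the following sketch reuses the imports from the earlier GIG example and is in the spirit of, though not necessarily identical to, the generators used for the simulation studies later in the paper. Constant mixing proportions are assumed for simplicity; under the level-3 model, pis would be replaced by softmax(α_c + Γ_c x_i), and under Models I and III one would set beta_y to zero (beta_eta to zero under Models II and IV).

```python
def rghd_gmm(n, pis, lam, omega, alphas, Gammas, Psis, Thetas,
             beta_eta, beta_y, Lam, X, rng=None):
    """Simulate from the GHD-GMM: for subject i in class k, draw
    w ~ GIG(omega_k, 1, lam_k), then
    eta_i = alpha_k + Gamma_k x_i + w*beta_eta_k + sqrt(w)*N(0, Psi_k),
    y_i   = Lam eta_i + w*beta_y_k + sqrt(w)*N(0, Theta_k)."""
    rng = np.random.default_rng(rng)
    T, q = Lam.shape
    y = np.empty((n, T))
    z = rng.choice(len(pis), size=n, p=pis)          # class labels
    for i in range(n):
        k = z[i]
        w = geninvgauss.rvs(lam[k], omega[k], random_state=rng)
        eta = (alphas[k] + Gammas[k] @ X[i] + w * beta_eta[k]
               + np.sqrt(w) * rng.multivariate_normal(np.zeros(q), Psis[k]))
        y[i] = (Lam @ eta + w * beta_y[k]
                + np.sqrt(w) * rng.multivariate_normal(np.zeros(T), Thetas[k]))
    return y, z
```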
§.§ GMM with multivariate skew-t random effects

In this section, we are interested in extending the conventional GMM to have multivariate skew-t model errors, denoted GST-GMM. As in the case of the generalized hyperbolic distribution, the formulation of the multivariate skew-t distribution we use has a convenient representation as a normal mean-variance mixture; this time, the weight has an inverse-gamma distribution (see Section <ref>). In analogous fashion to the GHD-GMM, a latent continuous random variable W_ik is first introduced, where W_ik | c_ik = 1 ∼ IG(ν_k/2, ν_k/2). Accordingly, we assume that ϵ_i and ζ_i are non-centered Gaussian error terms with their own covariance matrices as in (<ref>) and (<ref>), and that y_i and η_i are conditionally normally distributed as in (<ref>) and (<ref>). From this characterization of the multivariate skew-t distribution, the following conditional distributions are obtained:

η_i | x_i, c_ik = 1 ∼ GST_q(α_k + Γ_k x_i, Ψ_k, β_ηk, ν_k),
y_i | x_i, c_ik = 1 ∼ GST_T(μ_k, Σ_k, β_yk + Λ_y β_ηk, ν_k),

where μ_k and Σ_k are as described above and ν_k is a concentration parameter (i.e., the degrees of freedom). Similarly, we arrive at a GMM with a multivariate skew-t distribution:

p(y_i | x_i) = ∑_{k=1}^K π_ik f_GST_T(y_i; μ_k, Σ_k, β_yk + Λ_y β_ηk, ν_k).

In this setup, the random variable y_i, the latent growth factors η_i, and the residual variables ϵ_i and ζ_i all follow multivariate skew-t distributions.

§.§ Comments on the GHD-GMM and GST-GMM

Recalling that the overall skewness for y_i is Λ_y β_ηk + β_yk, there are a total of T + q skewness parameters in our GMM extensions. Hence, the skewness parameters β_yk and β_ηk are subject to identifiability issues because no more than T skewness parameters can be identified from the T-dimensional y_i. Therefore, two special formulations are considered in this paper. The first formulation is where β_yk = 0. In this formulation, the residuals for y_i, i.e., the measurement errors, are not skewed, i.e., ϵ_i | w_ik, c_ik = 1 ∼ 𝒩(0, w_ik Θ_k), and all of the skewness in the data is assumed to come from the distribution of the latent factors. The second special formulation is the case where β_ηk = 0. In this formulation, the residuals for the latent factors η_i are symmetric, i.e., ζ_i | w_ik, c_ik = 1 ∼ 𝒩(0, w_ik Ψ_k). Accordingly, all of the skewness in the data is assumed to come from the residuals of y_i, i.e., the measurement errors. In practice, we would want as much of the skewness as possible in the observed data y_1, …, y_n to be explained through the latent factors. There appears to be no optimal strategy with respect to which skewness parameter to estimate. Accordingly, four statistical models, differing with respect to the distributions of measurement errors and random effects for the first two levels of the GMM, are employed and compared. These models are as follows:
* Model I: A model with independent multivariate generalized hyperbolic random effects and measurement errors, assuming all of the skewness in the data comes from the distribution of the latent factors (i.e., GHD-GMM under β_yk = 0).
* Model II: A model with independent multivariate generalized hyperbolic random effects and measurement errors, assuming all of the skewness in the data comes from the residuals of y_i (i.e., GHD-GMM under β_ηk = 0).
* Model III: A model with independent multivariate skew-t random effects and measurement errors, assuming all of the skewness in the data comes from the distribution of the latent factors (i.e., GST-GMM under β_yk = 0).
* Model IV: A model with independent multivariate skew-t random effects and measurement errors, assuming all of the skewness in the data comes from the residuals of y_i (i.e., GST-GMM under β_ηk = 0).
Take Model I (i.e., GHD-GMM under β_yk = 0) as an example. For different trajectory classes, the parameters λ_k, ω_k, α_k, β_ηk, Γ_k, Ψ_k, and Θ_k may differ across classes, or may be the same across classes. By imposing constraints on all these parameters (different or the same across classes), we obtain a family of GHD-GMM models.
In this paper, we consider only two such models. The first assumes that the parameters λ_k, ω_k, α_k, β_ηk, Γ_k, Ψ_k, and Θ_k all differ across classes; we call this the general model. The second assumes that only the parameter β_ηk differs across classes while all the other parameters are the same across classes, i.e., λ_k = λ, ω_k = ω, α_k = α, Γ_k = Γ, Ψ_k = Ψ, and Θ_k = Θ for k = 1, 2, …, K; we call this the most constrained model. To this end, the eight parameterizations in Table <ref> are considered. Models II and IV allow a more general representation of the class skewness parameters (i.e., β_yk). However, in terms of model complexity, Models II and IV have K(T - q) more parameters than Models I and III. Hence, Models II and IV need larger sample sizes, as small class sizes can create problems such as singularity of the covariance matrix and slow convergence or non-convergence of the EM algorithm. In addition, Model III is the most parsimonious, and it may be useful when the number of classes K is large.

§.§ Parameter estimation

To fit the models, we adopt the well-known EM algorithm. In our case, the missing data comprise the latent categorical variables c_1, …, c_n, the latent growth factors η_1, …, η_n, and the latent weights w_ik. Therefore, the complete-data consist of the observed outcome data y_1, …, y_n and the covariates x_1, …, x_n together with the c_i, η_i, and w_ik, and the complete-data likelihood is given by

ℒ_c(ϑ) = ∏_{i=1}^n ∏_{k=1}^K [ π_ik ϕ(y_i | Λ_y η_i + w_ik β_yk, w_ik Θ_k) ϕ(η_i | α_k + Γ_k x_i + w_ik β_ηk, w_ik Ψ_k) h(w_ik) ]^{c_ik},

where W_ik ∼ ℐ(ω_k, 1, λ_k) for the GHD-GMM and W_ik ∼ IG(ν_k/2, ν_k/2) for the GST-GMM. In the E-step, we compute the expected value of the complete-data log-likelihood, denoted 𝒬, conditional on the current model parameters. Then, in the M-step, we obtain the updated model parameters by maximizing 𝒬. Detailed parameter updates for Models I, II, and III are outlined in Appendix B.
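In the E-step, the conditional expectations of W_ik, 1/W_ik, and log W_ik are needed; because W_ik given (y_i, c_ik = 1) is again GIG, they follow from the moment formulas in Section <ref>. The sketch below is our own arrangement: the conversion from the (ψ, χ, λ) form returned by Bayes' theorem to the (ω, η, λ) form of the text, and the numerical derivative for E[log W], are implementation choices.

```python
def gig_estep_moments(psi, chi, lam):
    """E[W], E[1/W], E[log W] for W ~ GIG with density proportional to
    w^(lam-1) exp(-(psi*w + chi/w)/2). For the GHD-GMM, psi, chi, lam come
    from W | y, c_k = 1 ~ GIG(omega_k + s'Sigma_k^{-1}s,
    omega_k + delta(y, mu_k|Sigma_k), lam_k - T/2), with s the overall
    skewness vector."""
    # Convert to the (omega, eta, lam) parameterization used in the text.
    omega = np.sqrt(psi * chi)
    eta = np.sqrt(chi / psi)
    e_w = eta * kv(lam + 1, omega) / kv(lam, omega)
    e_inv_w = kv(lam + 1, omega) / (eta * kv(lam, omega)) \
        - 2 * lam / (omega * eta)
    # E[log W] via a central difference of K_lam(omega) in lam.
    h = 1e-5
    dk = (kv(lam + h, omega) - kv(lam - h, omega)) / (2 * h)
    e_log_w = np.log(eta) + dk / kv(lam, omega)
    return e_w, e_inv_w, e_log_w
```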
§ ILLUSTRATIONS

§.§ Performance assessment

Although all of our illustrations are treated as genuine cluster analyses, i.e., no prior knowledge of labels is assumed, the true labels are known in each case and can be used to evaluate the performance of our GHD-GMM and GST-GMM models. We use misclassification rates (ERR) and the adjusted Rand index (ARI) <cit.> to assess classification performance. The ERR is simply the proportion of misclassified observations. The ARI indicates the pairwise agreement between true and predicted group memberships while also accounting for the fact that random classification would classify some observations correctly by chance. An ARI value of 1 indicates perfect classification, its expected value is 0 under random classification, and a negative ARI value indicates classification that is worse than one would expect under random classification. Further details and discussion of the ARI are given by <cit.>.

§.§ Alcohol consumption data from the National Longitudinal Survey of Youth

§.§.§ The data

The National Longitudinal Survey of Youth (NLSY) is a longitudinal study conducted by the United States Bureau of Labor Statistics with the goal of understanding the interaction between labor force participation, education, and health behaviors in children and adolescents. The sample for this study was a cohort of children who were between the ages of 12 and 17 when first interviewed in 1997. The data of interest were gathered each year between 1997 and 2011 and again in 2013 (15 total possible interviews). Each respondent provided a number between 0 and 98 that represents the number of alcoholic drinks they typically consume on a given day on which they are drinking. Because we are interested in modeling drinking behaviour over the life span, the data are shifted from representing year of interview to age. We follow individuals who were first interviewed at age 16 until they are 19; all individuals with missing inputs are excluded and no covariates are adopted. To this end, 1,151 observations with four time points (i.e., at ages 16, 17, 18, and 19) are used for the following analyses.

§.§.§ Model selection

We implement the Gaussian GMM via Mplus Version 7.1 <cit.>. Our proposed GHD-GMM and GST-GMM are implemented in R and run with K = 1, …, 10 until the best model is obtained under each scenario. Table <ref> shows the results of fitting all of the aforementioned models for a varying number of latent classes. The BIC values show that more than eight classes are needed with the conventional GMM, two are needed with the constrained Models I and IV, and three are needed for all of the other models. The BIC values for the GST-GMM and GHD-GMM are always better than the BIC for the normal GMM. Notably, the BIC values for the GHD-GMM do not always improve on those for the GST-GMM. Among all fitted models, the three-cluster general GST-GMM under β_yk = 0 (i.e., the general Model III) is preferable according to the BIC. It is worth mentioning that, even though the skew-t distribution is a special case of the generalized hyperbolic distribution, the GST-GMM seems to be useful in addition to the GHD-GMM.

§.§.§ Interpretation of the best model

The best-fitting model, the three-class Model III, breaks the data into three groups. From Table <ref>, it can be seen that Class 1, comprising 56% of the population, begins with low-moderate drinking (<1 drink per drinking day), increases slightly during adolescence, and by age 19 averages about 1 drink per drinking day. These can be considered "consistent low" drinkers. Although the intercept for this class is heavily positively skewed (intercept skewness = 2.59), the slope is not (slope skewness = 0.03), which indicates that the individual slopes are nearly normally distributed around the class slope of 0.21. The second class, comprising 24% of the population, are what will be called the "decreasing" drinkers. This class has an intercept of around five drinks per drinking day (a drinking binge) and ends at about 3 drinks per drinking day (just below the amount considered a drinking binge).[The World Health Organization defines heavy episodic drinking (also called a drinking "binge") as the consumption of 60 or more grams of alcohol on one occasion (www.who.int/gho/alcohol/consumption_patterns/heavy_episodic_drinkers_text/en/), which is about four standard drinks (www.niaaa.nih.gov/alcohol-health/overview-alcohol-consumption/what-standard-drink).] The intercept is again positively skewed (intercept skewness = 2.90) but the slope is negatively skewed (slope skewness = -0.78), suggesting that individuals in this class decrease their consumption quickly over the period of adolescence. The third class, comprising 20% of the population, will be called the "increasing moderate" drinkers. Their initial level of drinking is around 2.87 drinks per drinking day (less than a binge) and this increases during adolescence, ending at age 19 around 7 drinks per drinking day (far above a drinking binge).
Both the slope and intercept are slightly positively skewed (intercept skewness = 0.48, slope skewness = 0.41; see Table <ref>). These results suggest that, during adolescence, which is typically a time when alcohol consumption is initiated, individuals will have different reactions to the exposure to alcohol given their previous experience. Those individuals who are low drinkers will tend to continue to be low drinkers, those who have already consumed alcohol heavily will begin to taper back to safe levels (alluding to these individuals "knowing their limits" when it comes to alcohol), and those who are only at moderate levels tend to increase to heavy drinking. This model may be useful for indicating which 15-year-olds should be the target of interventions if the goal is to prevent heavy drinking in late adolescence. Although the high drinkers may appear to be the most likely to develop problems related to alcohol, they may "grow out" of their alcohol consumption; most especially, the 15-year-olds who only drink at moderate levels should not be neglected. <cit.> find similar patterns using three different youth surveys. They measure the stability of alcohol consumption over four time periods. The Time 1 cohort is separated into three classes: abstainers, moderate drinkers, and heavy drinkers. They find that a high proportion of abstainers continue to abstain and very few drink more than once or drink heavily. Moderate drinkers also show considerable stability in these samples, with 70% or more staying in the moderate category. Time 1 heavy drinkers are the least stable; in all three surveys, less than half of the heavy drinkers remain heavy drinkers. Our results align even better with those in <cit.>, who found three groups of adolescent drinking initiation: a large group with no or low drinking (our "consistent low" drinkers), a group that drank exclusively in adolescence and then decreased (analogous to our "decreasing" drinkers), and a group that started low and increased (analogous to our "increasing moderate" drinkers).

§.§.§ Partition study

Other models tend to find similar latent classes. For instance, the three-class Model I (which has a very similar BIC to the three-class skew-t model) demonstrates a similar partition (Table <ref>). This suggests that the same pattern endures regardless of the distributional assumptions. However, the cluster proportions differ slightly (58%, 24%, and 18% for the low, high/decreasing, and moderate/increasing classes, respectively), which seems to suggest that the GHD model classifies more individuals into the "low" class than the skew-t model. If the goal of the analysis is to identify groups to target for interventions for the prevention of alcoholism, the proportions found in the skew-t model might be preferred, as they create population groups that are larger. Therefore, interventions targeting these groups may have a greater impact on the population than those targeting smaller groups.

§.§ Simulation studies

In addition to the real data application of our proposed models, we perform simulation studies with data generated under a number of scenarios: linear and quadratic GMMs with different distributions of the measurement errors and random effects, resulting in four distinct simulated data examples (see Table <ref> for details). Individual trajectories for these four simulation experiments are plotted in Figure <ref>.
We assess the performance of the family of non-elliptical GMMs in several different ways: Section <ref> illustrates the ability of our proposed family of models to recover underlying parameters when the number of classes and the model are correctly specified; then we present a comparison of the proposed models on BIC, ARI, and ERR from the clustering result for each simulation in Section <ref>; last but not least, we compare our proposed family of non-elliptical GMMs with the Gaussian GMM developed by Muthén and colleagues, which dominates the literature on GMMs, in Section <ref>. §.§.§ Parameter recovery under the true modelFirst, we evaluate the ability of our proposed model to recover underlying parameters when the number of classes and the model are correctly specified. To this end, 100 datasets are generated for each of the four simulation experiments. True values and the means of the parameter estimates with their associated standard deviations are summarized in Tables <ref>–<ref>. The results for each of the first three simulation experiments show that the means of all parameter estimates are close to the true values, with small standard deviations. The means of the last four elements of _1 and _2 are different from the true values due to the overlap between classes. §.§.§ Comparing Models I–IVSecond, we compare Models I–IV. One hundred datasets are generated for the four simulation experiments above and analyzed using the GMMs developed herein. The means of the BIC, the ARI, and the ERR are summarized in Table <ref>. For Simulations 1–3, the best models obtained are those with the underlying true data structures, as expected. The BIC selects Model II with the correct number of components for Simulation 4; however, the estimated indices, i.e., λ̂_1=-2.93 and λ̂_2=-2.70, are very close to the parametrization under the skew-t distribution. §.§.§ Comparison with Gaussian GMMsFinally, we compare our proposed family of models with the Gaussian GMMs (via Mplus). First, 100 datasets are generated, as described before, from Simulation 1, where the distributions of the random effects are not normal. Table <ref> summarizes the percentage of the replications favoured by the BIC when analyzing those 100 generated datasets for 1–6 latent classes (note that 6 latent classes were never selected; see Table <ref>) via Models I and III as well as Gaussian GMMs. We then generate 100 datasets from Simulation 2, where the distribution of the measurement errors is not normal (Table <ref>). It is not surprising that the Gaussian GMMs overestimate the number of classes in both cases because the normality assumptions are violated. Moreover, when the normality assumption of the random effects is violated (Simulation 1), Gaussian GMMs tend to point to even more classes. It is worth noting that the best models, based on the BIC, are consistently Models I and II, respectively.§ DISCUSSIONWe have introduced novel GHD-GMM and GST-GMM models, which are extensions of the GMMs introduced by <cit.> to the generalized hyperbolic and skew-t distributions, respectively, to accommodate heavier tails and asymmetry. Updates are derived for parameter estimation within the EM algorithm framework, which is made feasible by the fact that the generalized hyperbolic distribution can be represented as a normal mean-variance mixture, where the weight follows a GIG distribution.
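The normal mean-variance mixture representation just mentioned is easy to exercise directly. The short sketch below draws generalized hyperbolic variates by first sampling GIG weights and then mixing; all parameter values are illustrative, and scipy's geninvgauss(p, b), with density proportional to x^(p-1) exp(-b(x+1/x)/2), matches the GIG weight used here with η = 1.

```python
import numpy as np
from scipy.stats import geninvgauss

rng = np.random.default_rng(2)
lam, omega = -0.5, 2.0                           # GIG index and concentration
mu, beta = np.zeros(2), np.array([2.0, 0.0])     # location and skewness vectors
L = np.linalg.cholesky(np.array([[1.0, 0.3], [0.3, 1.0]]))

# W ~ GIG(omega, 1, lam); then X | W ~ N(mu + W*beta, W*Sigma) gives GHD draws.
w = geninvgauss(lam, omega).rvs(5000, random_state=rng)
z = rng.standard_normal((5000, 2))
x = mu + w[:, None] * beta + np.sqrt(w)[:, None] * (z @ L.T)
print(x.mean(axis=0))            # first coordinate is skewed by the W*beta term
```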
In our GMM extensions, four models were considered (GHD-GMM and GST-GMM under _yk = 0, GHD-GMM and GST-GMM under _η k = 0) and their performance was compared using simulated and real data. In terms of interpretation, the GHD-GMM under _η k = 0 is preferable to the GHD-GMM under _yk = 0 because the skewness parameters are in the data space and so the interpretation of the skewness parameters is clear. However, in terms of model complexity, the GHD-GMM under _yk = 0 is preferable to the GHD-GMM under _η k = 0 because the former model has K(T-q) fewer parameters than the latter. We believe that this kind of mixture modeling approach for longitudinal data is important in many biostatistical and psychological applications, allowing accurate inference of model parameters and class membership probabilities while adjusting for heterogeneity, heavy tails, and skewness in the data. The proposed GHD-GMM and GST-GMM models have several advantages over Gaussian GMMs. The proposed GHD-GMM, which includes the multivariate skew-t, variance-gamma, and multivariate normal inverse-Gaussian distributions, among others, as special or limiting cases, provides the flexibility to handle a broader range of multivariate longitudinal data — the same is true, albeit to a lesser extent, for the proposed GST-GMM. In the presence of heterogeneity, heavy tails, and skewness in longitudinal data, the proposed models can fit the data considerably better than Gaussian mixtures, thereby reducing the risk of extracting latent classes that are merely due to non-normality of the outcomes. When data are normal, the proposed GHD-GMM can be used to check the reproducibility of a Gaussian GMM solution due to the flexibility of the generalized hyperbolic distribution — again, the same is true for the GST-GMM. However, when there exist outliers, while the concentration parameter mitigates their influence, it has been shown that a contaminated approach can be more effective in handling the impact of the outliers <cit.>. Therefore, developing a contaminated version of the family of non-elliptical GMMs will be considered in future work. The models proposed herein can also be further developed in various ways. For example, for the first level of the GMM, only qth order polynomial equations are considered, but kernel regressions or non-linear regressions could be incorporated into the model. Bayesian mixture modeling may offer researchers an alternative way to handle clustering of longitudinal data due to the popularity of, and advances in, Markov chain Monte Carlo techniques. Finally, it is also worthwhile to consider other flexible parametric distributions for the measurement errors and random effects, such as the coalesced generalized hyperbolic distribution and the multiple scaled generalized hyperbolic distribution <cit.>, and the hidden truncation hyperbolic distribution <cit.>.
Aitken, A. C. (1926). A series formula for the roots of algebraic and transcendental equations. Proceedings of the Royal Society of Edinburgh 45, 14–22.
Arellano-Valle, R. B. and M. G. Genton (2005). On fundamental skew distributions. Journal of Multivariate Analysis 96(1), 93–116.
Azzalini, A., R. P. Browne, M. G. Genton, and P. D. McNicholas (2016). On nomenclature for, and the relative merits of, two formulations of skew distributions. Statistics and Probability Letters 110, 201–206.
Azzalini, A. and A. Capitanio (1999). Statistical applications of the multivariate skew normal distribution. Journal of the Royal Statistical Society: Series B 61(3), 579–602.
Azzalini, A. and A. D. Valle (1996). The multivariate skew-normal distribution. Biometrika 83, 715–726.
Barndorff-Nielsen, O. and C. Halgreen (1977). Infinite divisibility of the hyperbolic and generalized inverse Gaussian distributions. Probability Theory and Related Fields 38(4), 309–311.
Blæsild, P. (1978). The shape of the generalized inverse Gaussian and hyperbolic distributions. Research Report 37, Department of Theoretical Statistics, Aarhus University, Denmark.
Bauer, D. J. and P. J. Curran (2003). Distributional assumptions of growth mixture models: implications for overextraction of latent trajectory classes. Psychological Methods 8(3), 338–363.
Böhning, D., E. Dietz, R. Schaub, P. Schlattmann, and B. Lindsay (1994). The distribution of the likelihood ratio for mixtures of densities from the one-parameter exponential family. Annals of the Institute of Statistical Mathematics 46(2), 373–388.
Bouveyron, C. and C. Brunet (2012). Simultaneous model-based clustering and visualization in the Fisher discriminative subspace. Statistics and Computing 22(1), 301–324.
Browne, R. P. and P. D. McNicholas (2015). A mixture of generalized hyperbolic distributions. Canadian Journal of Statistics 43(2), 176–198.
Bryk, A. S. and S. W. Raudenbush (1987). Application of hierarchical linear models to assessing change. Psychological Bulletin 101(1), 147.
Bryk, A. S. and S. W. Raudenbush (1992). Hierarchical Linear Models: Applications and Data Analysis Methods. Sage Publications, Inc.
Dempster, A. P., N. M. Laird, and D. B. Rubin (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B 39(1), 1–38.
Good, I. J. (1953). The population frequencies of species and the estimation of population parameters. Biometrika 40(3-4), 237–264.
Halgreen, C. (1979). Self-decomposability of the generalized inverse Gaussian and hyperbolic distributions. Probability Theory and Related Fields 47(1), 13–17.
Hubert, L. and P. Arabie (1985). Comparing partitions. Journal of Classification 2(1), 193–218.
Jørgensen, B. (1982). Statistical Properties of the Generalized Inverse Gaussian Distribution. New York: Springer-Verlag.
Kass, R. E. and A. E. Raftery (1995). Bayes factors. Journal of the American Statistical Association 90, 773–795.
Kass, R. E. and L. Wasserman (1995). A reference Bayesian test for nested hypotheses and its relationship to the Schwarz criterion. Journal of the American Statistical Association 90(431), 773–795.
Keribin, C. (2000). Consistent estimation of the order of mixture models. Sankhyā. The Indian Journal of Statistics. Series A 62(1), 49–66.
Kerr, W. C., K. M. Fillmore, and A. Bostorm (2002). Stability of alcohol consumption over time: evidence from three longitudinal surveys from the United States. Journal of Studies on Alcohol 63(3), 325–333.
Laird, N. M. and J. H. Ware (1982). Random-effects models for longitudinal data. Biometrics 38(4), 963–974.
Leroux, B. G. (1992). Consistent estimation of a mixing distribution. The Annals of Statistics 20, 1350–1360.
Lindsay, B. G. (1995). Mixture models: Theory, geometry and applications. In NSF-CBMS Regional Conference Series in Probability and Statistics, Volume 5. Hayward, California: Institute of Mathematical Statistics.
Lu, X. and Y. Huang (2014). Bayesian analysis of nonlinear mixed-effects mixture models for longitudinal data with heterogeneity and skewness. Statistics in Medicine 33(16), 2830–2849.
McArdle, J. J. and D. Epstein (1987). Latent growth curves within developmental structural equation models. Child Development, 110–133.
McLachlan, G. and D. Peel (2000). Finite Mixture Models. New York, NY: John Wiley & Sons.
McNeil, A., R. Frey, and P. Embrechts (2005). Quantitative Risk Management: Concepts, Techniques and Tools. Princeton University Press.
McNicholas, P. D., T. B. Murphy, A. F. McDaid, and D. Frost (2010). Serial and parallel implementations of model-based clustering via parsimonious Gaussian mixture models. Computational Statistics and Data Analysis 54(3), 711–723.
Meng, X.-L. and D. B. Rubin (1993). Maximum likelihood estimation via the ECM algorithm: A general framework. Biometrika 80(2), 267–278.
Meng, X.-L. and D. Van Dyk (1997). The EM algorithm — an old folk-song sung to a fast new tune. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 59(3), 511–567.
Murray, P. M., R. P. Browne, and P. D. McNicholas (2014a). Mixtures of skew-t factor analyzers. Computational Statistics & Data Analysis 77, 326–335.
Murray, P. M., R. P. Browne, and P. D. McNicholas (2017). Hidden truncation hyperbolic distributions, finite mixtures thereof, and their application for clustering. Journal of Multivariate Analysis 161, 141–156.
Murray, P. M., P. D. McNicholas, and R. P. Browne (2014b). A mixture of common skew-t factor analysers. Stat 3(1), 68–82.
Muthén, B. (2004). Latent variable analysis. In The Sage Handbook of Quantitative Methodology for the Social Sciences, pp. 345–368. Thousand Oaks, CA: Sage Publications.
Muthén, B. and T. Asparouhov (2008). Growth mixture modeling: Analysis with non-Gaussian random effects. Longitudinal Data Analysis, 143–165.
Muthén, B. and T. Asparouhov (2015). Growth mixture modeling with non-normal distributions. Statistics in Medicine 34(6), 1041–1058.
Muthén, B. and K. Shedden (1999). Finite mixture modeling with mixture outcomes using the EM algorithm. Biometrics 55(2), 463–469.
Muthén, B. O. and L. K. Muthén (1998–2012). Mplus User's Guide. Seventh Edition. Los Angeles, CA: Muthén and Muthén.
Nagin, D. (2005). Group-Based Modeling of Development. Harvard University Press.
Nagin, D. S. (1999). Analyzing developmental trajectories: A semiparametric, group-based approach. Psychological Methods 4(2), 139.
Nagin, D. S. and K. C. Land (1993). Age, criminal careers, and population heterogeneity: Specification and estimation of a nonparametric, mixed Poisson model. Criminology 31, 327.
Punzo, A. and P. D. McNicholas (2016). Parsimonious mixtures of multivariate contaminated normal distributions. Biometrical Journal 58(6), 1506–1537.
Sahu, S. K., D. K. Dey, and M. D. Branco (2003). A new class of multivariate skew distributions with applications to Bayesian regression models. Canadian Journal of Statistics 31(2), 129–150.
Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics 6, 461–464.
Singer, J. D. and J. B. Willett (2003). Applied Longitudinal Data Analysis: Modeling Change and Event Occurrence. Oxford University Press.
Steinley, D. (2004). Properties of the Hubert-Arabie adjusted Rand index. Psychological Methods 9, 386–396.
Tortora, C., P. D. McNicholas, and R. P. Browne (2016). A mixture of generalized hyperbolic factor analyzers. Advances in Data Analysis and Classification 10(4), 423–440.
Verbeke, G. and E. Lesaffre (1996). A linear mixed-effects model with heterogeneity in the random-effects population. Journal of the American Statistical Association 91(433), 217–221.
Warner, L. A., H. R. White, and V. Johnson (2007). Alcohol initiation experiences and family history of alcoholism as predictors of problem-drinking trajectories. Journal of Studies on Alcohol and Drugs 68(1), 56–65.
Woodbury, M. A. (1950). Inverting modified matrices. Statistical Research Group, Memorandum Report 42. Princeton, New Jersey: Princeton University.
§ DISTRIBUTION OF _i | _i, _i, w_ik, c_ik=1
Herein, we give the detailed derivation of the conditional distribution of _i given _i, _i, w_ik, and c_ik=1, which facilitates computation of the conditional expectations in the E-step of the EM algorithm. It also serves as a way to estimate the growth factor scores. The joint distribution of _i and _i given _i, w_ik, and c_ik=1 is given by
[ _i; _i ] | _i, w_ik, c_ik=1 ∼ 𝒩( [ _k + _k_i + w_ik_η k; _y(_k + _k_i + w_ik_η k) ], [ w_ik_k, w_ik_k_y^'; w_ik_y_k, w_ik(_y_k_y^' + _k) ] ).
According to the properties of the conditional distribution for multivariate normal variables, the conditional distribution of _i given _i, _i, w_ik, c_ik=1 is also a multivariate normal distribution with
𝔼(_i |_i, _i, w_ik, c_ik=1) = _k + _k_i + w_ik_η k + _k_y^'(_y_k_y^' + _k)^-1(_i - _y(_k + _k_i + w_ik_η k)),
𝕍ar(_i |_i, _i, w_ik, c_ik=1) = w_ik_k - w_ik_k_y^'(_y_k_y^' + _k)^-1_y_k.
According to the Woodbury matrix identity <cit.>, the covariance matrix for the latent variable _i can be simplified to
𝕍ar(_i |_i, _i, w_ik, c_ik=1) = w_ik(_k^-1 + _y^'_k^-1_y)^-1.
Next, let us simplify the expectation of the latent variable:
𝔼(_i |_i, _i, w_ik, c_ik=1)
= (_T - _k_y^'(_y_k_y^' + _k)^-1_y)(_k + _k_i + w_ik_η k) + _k_y^'(_y_k_y^' + _k)^-1_i
= (_k - _k_y^'(_y_k_y^' + _k)^-1_y_k)_k^-1(_k + _k_i + w_ik_η k) + _k_y^'(_k^-1 - _k^-1_y(_k^-1 + _y^'_k^-1_y)^-1_y^'_k^-1)_i
= (_k^-1 + _y^'_k^-1_y)^-1_k^-1(_k + _k_i + w_ik_η k) + (_k(_k^-1 + _y^'_k^-1_y) - _k_y^'_k^-1_y)(_k^-1 + _y^'_k^-1_y)^-1_y^'_k^-1_i
= (_k^-1 + _y^'_k^-1_y)^-1_k^-1(_k + _k_i + w_ik_η k) + (_k^-1 + _y^'_k^-1_y)^-1_y^'_k^-1_i
= (_k^-1 + _y^'_k^-1_y)^-1(_k^-1(_k + _k_i + w_ik_η k) + _y^'_k^-1_i).
Finally, we obtain the conditional distribution
_i |_i, _i, w_ik, c_ik=1 ∼ 𝒩(_k(_k^-1(_k + _k_i + w_ik_η k) + _y^'_k^-1_i), w_ik_k),
where _k = (_k^-1 + _y^'_k^-1_y)^-1.
§ DETAILED PARAMETER ESTIMATION §.§ EM algorithm for Model I
For our GHD-GMM under _yk=0, the observed log-likelihood can be expressed as follows:
ℒ = ∑_i=1^n log p(_i|_i), where p(_i|_i) = ∑_k=1^K π_ik f_GHD_T(_i; λ_k, ω_k, _k, _k, _y_η k).
Now, _i |_i, w_ik, c_ik=1 ∼ 𝒩(_k + w_ik_y_η k, w_ik_k) independently for i = 1, …, n, and W_ik | c_ik=1 ∼ ℐ(ω_k, 1, λ_k); therefore, from Bayes's theorem, W_ik |_i, _i, c_ik=1 ∼ GIG(ψ_k, χ_ik, λ̃_k) with ψ_k = ω_k + _η k^'_y^'_k^-1_y_η k, χ_ik = ω_k + δ(_i, _k |_k), and λ̃_k = λ_k - T/2. It follows that
_i |_i, _i, w_ik, c_ik=1 ∼ 𝒩(_k(_k^-1(_k + _k_i + w_ik_η k) + _y^'_k^-1_i), w_ik_k),
where _k = (_k^-1 + _y^'_k^-1_y)^-1. The result in (<ref>) is used to estimate the latent growth factors _i, and a detailed proof thereof is given in Appendix A. Therefore, the complete-data likelihood is given by
ℒ_c() = ∏_i=1^n ∏_k=1^K [π_ik ϕ(_i|_y_i, w_ik_k) ϕ(_i|_k + _k_i + w_ik_η k, w_ik_k) h(w_ik|ω_k, λ_k)]^c_ik,
with the same notation used previously, where h(w_ik|ω_k,λ_k) is the density of the GIG distribution in (<ref>) with η = 1. After some algebra, the complete-data log-likelihood is
ℒ_c(|, ) = ℒ_1c(π) + ℒ_2c(_k) + ℒ_3c(_k, _η k, _k, _k) + ℒ_4c(, ),
where = (λ_1,…,λ_K) and = (ω_1,…,ω_K), and
ℒ_1c = ∑_i=1^n ∑_k=1^K c_ik log π_ik,
ℒ_2c = ∑_i=1^n ∑_k=1^K c_ik{ 1/2 log|_k^-1| - 1/(2w_ik) _i^'_k^-1_i + 1/w_ik _i^'_k^-1_y_i - 1/(2w_ik) _i^'_y^'_k^-1_y_i } + B_1,
ℒ_3c = ∑_i=1^n ∑_k=1^K c_ik{ 1/2 log|_k^-1| - 1/(2w_ik) _i^'_k^-1_i + 1/w_ik (_k + _k_i)^'_k^-1_i + _η k^'_k^-1_i - 1/(2w_ik) (_k + _k_i)^'_k^-1(_k + _k_i) - (_k + _k_i)^'_k^-1_η k - w_ik/2 _η k^'_k^-1_η k } + B_2,
ℒ_4c = ∑_i=1^n ∑_k=1^K c_ik{ (λ_k - 1)log w_ik - log K_λ_k(ω_k) - ω_k/2(w_ik + 1/w_ik) },
where B_1 and B_2 are constants with respect to the model parameters. In the E-step, we compute the conditional expectation of ℒ_c(|, ) given above, denoted 𝒬.
First, let p_ik denote the probability that the ith observation belongs to the kth component of the mixture; it is updated by
p_ik ≡ 𝔼[C_ik|_i,_i] = π_ik f_GHD_T(_i; λ_k,ω_k,_k,_k,_y_η k)/∑_l=1^K π_il f_GHD_T(_i; λ_l,ω_l,_l,_l,_y_η l).
The following expectations are required:
E_1ik = √(χ_ik/ψ_k) K_λ̃_k+1(√(ψ_kχ_ik))/K_λ̃_k(√(ψ_kχ_ik)),
E_2ik = √(ψ_k/χ_ik) K_λ̃_k+1(√(ψ_kχ_ik))/K_λ̃_k(√(ψ_kχ_ik)) - 2λ̃_k/χ_ik,
E_3ik = log(√(χ_ik/ψ_k)) + 1/K_λ̃_k(√(ψ_kχ_ik)) ∂/∂λ̃_k K_λ̃_k(√(ψ_kχ_ik)),
E_4ik = _k(_k^-1(_k+_k_i+E_1ik_η k)+_y^'_k^-1_i),
E_5ik = E_2ik_k(_k^-1(_k+_k_i)+_y^'_k^-1_i)+_k_k^-1_η k,
E_6ik = _k+_k(_k^-1(_k+_k_i)+_y^'_k^-1_i)_η k^'_k^-1_k+E_2ik_k(_k^-1(_k+_k_i)+_y^'_k^-1_i)(_k^-1(_k+_k_i)+_y^'_k^-1_i)^'_k+_k_k^-1_η k(_k^-1(_k+_k_i)+_y^'_k^-1_i)^'_k+E_1ik_k_k^-1_η k_η k^'_k^-1_k,
where ψ_k, χ_ik, and λ̃_k are as previously defined. These attractive closed forms for E_1ik, E_2ik, and E_3ik exist because W_ik|_i,_i,c_ik=1 ∼ GIG(ψ_k,χ_ik,λ̃_k), and so we can use the formulae in (<ref>). The existence of the attractive closed forms for E_4ik, E_5ik, and E_6ik is due to the conditional Gaussian distribution of _i, as in (<ref>).
In the M-step, we maximize 𝒬 with respect to the model parameters to get the updates. In particular, we maximize ∑_i=1^n ∑_k=1^K p_ik log π_k with respect to π_k for k=1,…,K, and we obtain π̂_k = ∑_i=1^n p_ik/n. The updates for ω_k and λ_k are computed by maximizing the function
q_k(ω_k,λ_k) = -log K_λ_k(ω_k) + (λ_k-1)d̅_k - ω_k/2(a̅_k+b̅_k),
where n_k = ∑_i=1^n p_ik, a̅_k=(1/n_k)∑_i=1^n p_ik E_1ik, b̅_k=(1/n_k)∑_i=1^n p_ik E_2ik, and d̅_k=(1/n_k)∑_i=1^n p_ik E_3ik. The associated updates are
λ̂_k = d̅_kλ̂_k^prev[∂/∂ t log K_t(ω̂_k^prev)|_t=λ̂_k^prev]^-1,
ω̂_k = ω̂_k^prev - [∂/∂ t q_k(t,λ̂_k)|_t=ω̂_k^prev][∂^2/∂ t^2 q_k(t,λ̂_k)|_t=ω̂_k^prev]^-1,
where the superscript "prev" denotes the previous estimate — refer to <cit.> for further details. Finally, we get the updates of the other parameters in the model:
_k = diag{ ∑_i=1^n p_ik(E_2ik_i_i^' - _iE_5ik^'_y^' - _yE_5ik_i^' + _yE_6ik_y^')/n_k },
_k = ∑_i=1^n p_ik(E_5ik_i^' - E_2ik_k_i^' - _k_i^')(∑_i=1^n p_ik E_2ik_i_i^')^-1,
_k = [a̅_k∑_i=1^n p_ik(E_5ik - E_2ik_k_i) - ∑_i=1^n p_ikE_4ik + ∑_i=1^n p_ik_k_i]/[n_k(a̅_kb̅_k - 1)],
_η k = [b̅_k∑_i=1^n p_ik(E_4ik - _k_i) - ∑_i=1^n p_ikE_5ik + ∑_i=1^n p_ikE_2ik_k_i]/[n_k(a̅_kb̅_k - 1)],
_k = 1/n_k ∑_i=1^n p_ik[E_6ik - _η kE_4ik^' - E_5ik(_k+_k_i)^' - E_4ik_η k^' - (_k+_k_i)E_5ik^' + E_2ik(_k+_k_i)(_k+_k_i)^' + (_k+_k_i)_η k^' + _η k(_k+_k_i)^' + E_1ik_η k_η k^'].
For the more parsimonious version of Model I, the M-step maximizes ∑_i=1^n ∑_k=1^K p_ik log π_ik, with logit(π_ik) = _c + _c_i, which may be viewed as a multinomial logistic regression with fractional observations p_ik. The updates are
_c = ∑_i=1^n ∑_k=1^K p_ik(E_5ik_i^' - E_2ik_k_i^' - _k_i^')(K∑_i=1^n p_ik E_2ik_i_i^')^-1, _c = n_k_k/n.
§.§ EM algorithm for Model II
The EM algorithm for Model II is employed for parameter estimation in an analogous fashion to the algorithm for Model I described in Section <ref>. The complete data comprise the observed outcomes _i and covariates _i, the class membership labels c_ik, the latent factors _i, and the latent w_ik, for i=1,…,n and k=1,…,K. Therefore, the complete-data log-likelihood is
ℒ_c() = ∑_i=1^n ∑_k=1^K c_ik[ log π_ik + log ϕ(_i|_y_i + w_ik_yk, w_ik_k) + log ϕ(_i|_k+_k_i, w_ik_k) + log h(w_ik|ω_k,λ_k)].
The E-step requires the computation of the conditional expectations regarding the latent factors _i and the latent variables W_ik.
Under this formulation,
_i |_i, _i, w_ik, c_ik=1 ∼ 𝒩(_k(_k^-1(_k + _k_i) + _y^'_k^-1(_i - w_ik_yk)), w_ik_k),
and the conditional distribution of the latent variable W_ik given _i, _i, and c_ik=1 is given by W_ik|_i,_i,c_ik=1 ∼ GIG(ψ_k^⋆,χ_ik,λ̃_k), with ψ_k^⋆ = ω_k + _yk^'_k^-1_yk, χ_ik = ω_k + δ(_i,_k|_k), and λ̃_k = λ_k - T/2, where _k = _y(_k + _k_i) and _k = _y_k_y^' + _k. Therefore, we have convenient forms for the following conditional expectations:
E_1ik^⋆ = √(χ_ik/ψ_k^⋆) K_λ̃_k+1(√(ψ_k^⋆χ_ik))/K_λ̃_k(√(ψ_k^⋆χ_ik)),
E_2ik^⋆ = √(ψ_k^⋆/χ_ik) K_λ̃_k+1(√(ψ_k^⋆χ_ik))/K_λ̃_k(√(ψ_k^⋆χ_ik)) - 2λ̃_k/χ_ik,
E_3ik^⋆ = log(√(χ_ik/ψ_k^⋆)) + 1/K_λ̃_k(√(ψ_k^⋆χ_ik)) ∂/∂λ̃_k K_λ̃_k(√(ψ_k^⋆χ_ik)),
E_4ik^⋆ = _k(_k^-1(_k+_k_i) + _y^'_k^-1(_i - E_1ik^⋆_yk)),
E_5ik^⋆ = _k(E_2ik^⋆(_k^-1(_k+_k_i) + _y^'_k^-1_i) - _y^'_k^-1_yk),
E_6ik^⋆ = _k - _k(_k^-1(_k+_k_i) + _y^'_k^-1_i)_yk^'_k^-1_y_k + E_2ik^⋆_k(_k^-1(_k+_k_i) + _y^'_k^-1_i)(_k^-1(_k+_k_i) + _y^'_k^-1_i)^'_k - _k_y^'_k^-1_yk(_k^-1(_k+_k_i) + _y^'_k^-1_i)^'_k + E_1ik^⋆_k_y^'_k^-1_yk_yk^'_k^-1_y_k.
At each E-step, the values of E_1ik^⋆, E_2ik^⋆, …, E_6ik^⋆ are updated. We also update the value of the class membership variable c_ik using
p_ik^⋆ ≡ π_ik f_GHD_T(_i; λ̃_k,ω_k,_k,_k,_yk)/∑_l=1^K π_il f_GHD_T(_i; λ̃_l,ω_l,_l,_l,_yl).
At each M-step, the following model parameters are obtained by maximizing the conditional expected value of ℒ_c() and are updated sequentially. The updates for π_ik, _c, _c, λ̃_k, and ω_k are similar to those used in Appendix <ref>. We update the skewness parameter _yk using
_yk = ∑_i=1^n p_ik^⋆(_i - _yE_4ik^⋆)/∑_i=1^n p_ik^⋆ E_1ik^⋆
and the measurement error covariance _k using
_k = 1/n_k ∑_i=1^n p_ik^⋆( E_2ik^⋆_i_i^' - _iE_5ik^⋆'_y^' - _i_yk^' - _yE_5ik^⋆_i^' + _yE_6ik^⋆_y^' + _yE_4ik^⋆_yk^' - _yk_i^' + _ykE_4ik^⋆'_y^' + E_1ik^⋆_yk_yk^'),
where n_k = ∑_i=1^n p_ik^⋆. We update _k, _k, and _k sequentially using
_k = [∑_i=1^n p_ik^⋆ (E_5ik^⋆ - E_2ik^⋆_k)_i^'][∑_i=1^n p_ik^⋆ E_2ik^⋆_i_i^']^-1,
_k = ∑_i=1^n p_ik^⋆ (E_5ik^⋆ - E_2ik^⋆_k_i)/∑_i=1^n p_ik^⋆ E_2ik^⋆,
_k = 1/n_k ∑_i=1^n p_ik^⋆[E_6ik^⋆ - E_5ik^⋆(_k + _k_i)^' - (_k + _k_i)E_5ik^⋆' + E_2ik^⋆(_k + _k_i)(_k + _k_i)^'].
§.§ EM algorithm for Model III
Similarly, parameter estimation for Model III is carried out within the EM algorithm framework. Suppose we observe the outcome _i and the covariates _i from a GMM with skew-t random effects as in (<ref>) but with _yk=0. There are three sources of unobserved data: the latent categorical variables _i, the latent growth factors _i, and the latent w_ik. The complete-data log-likelihood can be expressed as follows:
ℒ_c() = ∑_i=1^n ∑_k=1^K c_ik[ log π_ik + log ϕ(_i|_y_i, w_ik_k) + log ϕ(_i|_k+_k_i+w_ik_η k, w_ik_k) + log f(w_ik|ν_k/2,ν_k/2)],
where f(w_ik|ν_k/2,ν_k/2) is the density of the inverse Gamma distribution. The E-step requires the computation of the expected value of the complete-data log-likelihood. Note that W_ik|_i,_i,c_ik=1 ∼ GIG(ψ_k^∗,χ_ik^∗,λ_k^∗) with ψ_k^∗ = _η k^'_y^'_k^-1_y_η k, χ_ik^∗ = ν_k + δ(_i,_k|_k), and λ_k^∗ = -(ν_k+T)/2.
Therefore, we have convenient forms for the following expected values:
E_1ik^∗ = √(χ_ik^∗/ψ_k^∗) K_λ_k^∗+1(√(ψ_k^∗χ_ik^∗))/K_λ_k^∗(√(ψ_k^∗χ_ik^∗)),
E_2ik^∗ = √(ψ_k^∗/χ_ik^∗) K_λ_k^∗+1(√(ψ_k^∗χ_ik^∗))/K_λ_k^∗(√(ψ_k^∗χ_ik^∗)) - 2λ_k^∗/χ_ik^∗,
E_3ik^∗ = log(√(χ_ik^∗/ψ_k^∗)) + 1/K_λ_k^∗(√(ψ_k^∗χ_ik^∗)) ∂/∂λ_k^∗ K_λ_k^∗(√(ψ_k^∗χ_ik^∗)).
We also need the expected value of the class membership, i.e.,
τ_ik ≡ 𝔼[C_ik|_i,_i] = π_ik f_GHD_T(_i; _k,_k,_y_η k,ν_k)/∑_l=1^K π_il f_GHD_T(_i; _l,_l,_y_η l,ν_l),
as well as the following conditional expectations, which are similar to those derived in the E-step of parameter estimation for the GHD-GMM:
E_4ik^∗ = _k(_k^-1(_k+_k_i+E_1ik^∗_η k) + _y^'_k^-1_i),
E_5ik^∗ = E_2ik^∗_k(_k^-1(_k+_k_i) + _y^'_k^-1_i) + _k_k^-1_η k,
E_6ik^∗ = _k + _k(_k^-1(_k+_k_i) + _y^'_k^-1_i)_η k^'_k^-1_k + E_2ik^∗_k(_k^-1(_k+_k_i) + _y^'_k^-1_i)(_k^-1(_k+_k_i) + _y^'_k^-1_i)^'_k + _k_k^-1_η k(_k^-1(_k+_k_i) + _y^'_k^-1_i)^'_k + E_1ik^∗_k_k^-1_η k_η k^'_k^-1_k.
The M-step requires the computation of the parameter updates to maximize the conditional expected value of the complete-data log-likelihood. In this step, the parameter updates for _c, _c, _k, _η k, _k, _k, and _k are obtained in closed form and are similar to those derived in the M-step of parameter estimation for the GHD-GMM and, hence, are omitted here. To obtain the update for ν_k, we numerically solve the equation
log(ν_k/2) + 1 - φ(ν_k/2) - 1/n_k ∑_i=1^n τ_ik(E_3ik^∗ + E_2ik^∗) = 0
for ν_k, where φ(·) denotes the digamma function and n_k = ∑_i=1^n τ_ik.
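Both the GIG moment formulas above and the ν_k root-finding step are straightforward to implement numerically. The following minimal sketch (not the authors' code; parameter values are illustrative) evaluates E_1, E_2, and E_3 with modified Bessel functions, approximating the order-derivative in E_3 by a central finite difference, and solves the degrees-of-freedom equation with a bracketed root finder.

```python
import numpy as np
from scipy.special import kv, digamma
from scipy.optimize import brentq

def gig_moments(psi, chi, lam, h=1e-5):
    # E[W], E[1/W], E[log W] for W ~ GIG(psi, chi, lam), matching E_1, E_2, E_3;
    # the derivative of log K with respect to the order is a finite-difference
    # approximation, since no closed form is available.
    z = np.sqrt(psi * chi)
    r = kv(lam + 1.0, z) / kv(lam, z)
    E1 = np.sqrt(chi / psi) * r
    E2 = np.sqrt(psi / chi) * r - 2.0 * lam / chi
    E3 = 0.5 * np.log(chi / psi) + \
        (np.log(kv(lam + h, z)) - np.log(kv(lam - h, z))) / (2.0 * h)
    return E1, E2, E3

def nu_update(c_k):
    # Root of log(nu/2) + 1 - digamma(nu/2) - c_k = 0; here c_k stands for the
    # data term (1/n_k) sum_i tau_ik (E3 + E2), and a root requires c_k > 1.
    f = lambda nu: np.log(nu / 2.0) + 1.0 - digamma(nu / 2.0) - c_k
    return brentq(f, 1e-2, 1e6)

print(gig_moments(2.0, 3.0, -1.5), nu_update(1.1))
```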
Cosmic Equilibration: A Holographic No-Hair Theorem from the Generalized Second Law
Sean M. Carroll (seancarroll@gmail.com) and Aidan Chatwin-Davies (achatwin@caltech.edu)
Walter Burke Institute for Theoretical Physics, California Institute of Technology, Pasadena, CA 91125
CALT-TH-2017-013
December 30, 2023
In a wide class of cosmological models, a positive cosmological constant drives cosmological evolution toward an asymptotically de Sitter phase. Here we connect this behavior to the increase of entropy over time, based on the idea that de Sitter spacetime is a maximum-entropy state. We prove a cosmic no-hair theorem for Robertson-Walker and Bianchi I spacetimes that admit a Q-screen ("quantum" holographic screen) with certain entropic properties: If generalized entropy, in the sense of the cosmological version of the Generalized Second Law conjectured by Bousso and Engelhardt, increases up to a finite maximum value along the screen, then the spacetime is asymptotically de Sitter in the future. Moreover, the limiting value of generalized entropy coincides with the de Sitter horizon entropy. We do not use the Einstein field equations in our proof, nor do we assume the existence of a positive cosmological constant. As such, asymptotic relaxation to a de Sitter phase can, in a precise sense, be thought of as cosmological equilibration.
§ INTRODUCTION
Like black holes, universes have no hair, at least if they have a positive cosmological constant Λ <cit.>. A cosmic no-hair theorem states that, if a cosmological spacetime obeys Einstein's equation with Λ > 0, then the spacetime asymptotically tends to an empty de Sitter state in the future.[For a different definition of cosmic hair which more closely parallels black hole hair, see <cit.>.] A more precise statement is due to Wald, who proved the following theorem <cit.>: All Bianchi spacetimes (except for certain type IX spacetimes) that are initially expanding, that have a positive cosmological constant Λ > 0, and whose matter content besides Λ obeys the strong and dominant energy conditions, tend to a de Sitter state in the future. Bianchi spacetimes are cosmologies that are homogeneous but in general anisotropic <cit.>. For example, the metric of the 1+3-dimensional Bianchi I spacetime in comoving Cartesian coordinates is given by
ds^2 = -dt^2 + a_1^2(t) dx^2 + a_2^2(t) dy^2 + a_3^2(t) dz^2.
It is essentially a Robertson-Walker (RW) spacetime in which the scale factor can be different in different directions in space. In this case, when the necessary conditions are satisfied, Wald's theorem implies that each a_i(t) tends to the same de Sitter scale factor, exp(√(Λ/3)t) for a cosmological constant Λ > 0, as t tends to infinity.
For Bianchi I spacetimes, one can make this intuition explicit by writing down a Friedmann equation for the average scale factor, a̅(t) ≡ [a_1(t) a_2(t) a_3(t)]^1/3, which gives <cit.>
( 1/a̅(t) da̅(t)/dt )^2 ∝ ρ_Λ + ρ_matter + ρ_an.
On the right-hand side, ρ_Λ and ρ_matter denote the energy densities due to the cosmological constant and matter respectively, while ρ_an is an effective energy density due to anisotropy, similar to how one can think of spatial curvature as an effective source of stress-energy. Crucially, ρ_an scales at most like a̅^-2, and so as the universe expands, only the constant contribution due to ρ_Λ persists. The exception to Wald's theorem is the case of a Bianchi IX spacetime (which has positive spatial curvature) whose initial matter energy density is so high that the spacetime recollapses before the cosmological constant can dominate <cit.>. Intuitively, we expect not only anisotropies, but also perturbative inhomogeneities to decay away at late times, though this is harder to prove rigorously <cit.>. For arbitrary inhomogeneous and anisotropic cosmologies, one can always find regions that expand at least as fast as de Sitter, thus realizing a type of local no-hair theorem <cit.>. Beyond classical general relativity, various generalizations of Wald's theorem attempt to demonstrate analogous no-hair theorems for the quantum states of fields on a curved spacetime background <cit.>. As the universe expands and the cosmological constant increases in prominence with respect to other energy sources, something else is also going on: entropy is increasing. According to the Second Law of Thermodynamics, the entropy of any closed system (such as the universe) will increase or stay constant, at least until it reaches a maximum value. It is interesting to ask whether there is a connection between these two results, the cosmic no-hair theorem and the Second Law. Can the expansion of the universe toward a quiescent de Sitter phase be interpreted as thermodynamic equilibration to a maximum-entropy state? It is well established that de Sitter has many of the properties of an equilibrium maximum-entropy state, including a locally thermal density matrix with a constant temperature <cit.>, and the relationship between entropy and de Sitter space has been examined from a variety of perspectives <cit.>. In this paper we try to make one aspect of these ideas rigorous, showing that a cosmic no-hair theorem can be derived even without direct reference to Einstein's equation, simply by invoking an appropriate formulation of the Second Law. This strategy of deducing properties of spacetime from the behavior of entropy is reminiscent of the thermodynamic and entropic gravity programs <cit.>, as well as of the gravity-entanglement connection <cit.>. Though we do not attempt to derive a complete set of gravitational field equations from entropic considerations, it is interesting that a specific spacetime can be singled out purely from the requirement that entropy increases to a maximum finite value. To derive our theorem, we require a precise formulation of the Second Law that is applicable in curved spacetime, and that includes the entropy of spacetime itself. A step in this direction is Bekenstein's Generalized Second Law (GSL) <cit.>. Recall that the entropy of a black hole with area A is given by S_BH = A/4G.
The GSL is the conjecture that generalized entropy, S_gen, which is defined as the sum of the entropy of all black holes in a system as well as the ordinary thermodynamic entropy, increases or remains constant over time. Unfortunately this form of the GSL does not immediately help us in spacetimes without any black holes. Recently, Bousso and Engelhardt proposed a cosmological version of the GSL <cit.>, building on previous work on holography <cit.>, apparent horizons <cit.>, and holographic screens <cit.>. They define a version of generalized entropy on a hypersurface they call a "Q-screen." A Q-screen is a quantum version of a holographic screen, which in turn is a modification of an apparent horizon. Given a Cauchy hypersurface Σ and a codimension-2 spatial surface with no boundary σ⊂Σ that divides Σ into an interior region and an exterior region, the generalized entropy is the sum of the area entropy of σ, i.e., its area in Planck units, and the entropy of matter in the exterior region:
S_gen[σ,Σ] = A[σ]/4G + S_out[σ,Σ].
Bousso and Engelhardt's version of the GSL is the statement that generalized entropy increases strictly monotonically with respect to the flow through a specific preferred foliation of a Q-screen:
dS_gen/dr > 0,
where r parameterizes the foliation. Although it is unproven in general, this version of the GSL is well motivated and known to hold in specific circumstances (the discussion of which we defer to the next section). In this work, we use the GSL to establish a cosmic no-hair theorem purely on thermodynamic grounds. In an exact de Sitter geometry, the de Sitter horizon is a holographic screen[Pure de Sitter spacetime does not, however, satisfy the generic conditions outlined in <cit.>.], and every finite horizon-sized patch is associated with a fixed entropy that is proportional to the area of the horizon in Planck units <cit.>. We therefore conjecture that evolution toward such a state is equivalent to thermodynamic equilibration of a system with a finite number of degrees of freedom, and therefore a finite maximum entropy. Specifically, assuming the GSL, we show that if a Bianchi I spacetime admits a Q-screen along which generalized entropy monotonically increases up to a finite maximum, then the anisotropy necessarily decays and the scale factor approaches de Sitter behavior asymptotically in the future. At no point do we use the Einstein field equations, nor do we assume the presence of a positive cosmological constant. The GSL, together with the assumption that entropy tends to a finite maximum along the Q-screen, takes the logical place of these two respective ingredients. The proof essentially consists of first showing that an approach to a finite maximum entropy heavily constrains the possible asymptotic structure of a Q-screen. Second, we show that the spacetime must necessarily be asymptotically de Sitter (and in particular, isotropic as well) in order to admit a Q-screen with the aforementioned asymptotic structure. The structure of the rest of this paper is as follows. We review Q-screens and the GSL in sec:GSL. In sec:CNH-RW, we first prove a cosmic no-hair theorem for the simpler case of RW spacetimes using the GSL. Then, in sec:Bianchi, we move on to the proof for Bianchi I spacetimes, first in 1+2 dimensions to illustrate our methods, and then in 1+3 dimensions, which also illustrates how to generalize to arbitrary dimensions. We discuss aspects of the theorems and their proofs as well as some implications in sec:disc.
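Before turning to the entropic machinery, the classical intuition from the Friedmann-equation argument above can be checked numerically. The sketch below integrates a Friedmann-like equation for the average Bianchi I scale factor with matter, vacuum, and anisotropy contributions (for Bianchi I proper, the shear term falls off as a̅^-6, consistent with the "at most a̅^-2" bound quoted above); all density values are illustrative, and units are chosen so that the expansion rate relaxes to the constant de Sitter value √(ρ_Λ).

```python
import numpy as np
from scipy.integrate import solve_ivp

rho_L, rho_m0, rho_an0 = 1.0, 50.0, 200.0   # illustrative densities

def dabar_dt(t, abar):
    # H^2 = rho_L + rho_m0/abar^3 + rho_an0/abar^6 in these units
    return abar * np.sqrt(rho_L + rho_m0 / abar**3 + rho_an0 / abar**6)

sol = solve_ivp(dabar_dt, (0.0, 10.0), [1.0], dense_output=True, rtol=1e-8)
ts = np.linspace(6.0, 10.0, 5)
print(np.gradient(np.log(sol.sol(ts)[0]), ts))  # -> sqrt(rho_L) = 1: de Sitter
```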
§ THE GENERALIZED SECOND LAW FOR COSMOLOGY
We begin by briefly reviewing Bousso and Engelhardt's conjectured Generalized Second Law (GSL). The GSL can be thought of as a quasilocal version of Bekenstein's entropy law for black holes <cit.>, but one which also applies to cosmological settings. Moreover, the GSL is a natural semiclassical extension of Bousso and Engelhardt's area theorem for holographic screens, in the same way that Bekenstein's entropy law extends Hawking's area theorem to evaporating black holes. An early cornerstone of classical black hole thermodynamics <cit.> was Hawking's area theorem: in all spacetimes which satisfy the null curvature condition, the total area of all black hole event horizons can only increase, i.e., dA/dt ≥ 0 <cit.>. Of course, the area theorem fails for evaporating black holes, the technical evasion being that they do not satisfy the null curvature condition. Bekenstein pointed out, however, that if one instead interprets the area of the black hole event horizon as horizon entropy and includes the entropy of the Hawking radiation outside the black hole, S_out, in the total entropy budget, then the generalized entropy, S_gen = A/4G + S_out, increases monotonically or stays constant, dS_gen/dt ≥ 0 <cit.>.
A marginal surface σ is said to be marginally trapped if the expansion of the congruence in the other null direction is negative everywhere on σ, and a future holographic screen is a holographic screen whose leaves are marginally trapped; marginally anti-trapped surfaces and past holographic screens are defined analogously. Then, assuming the null curvature condition as well as a handful of mild generic conditions, Bousso and Engelhardt proved that future and past holographic screens obey the area theorem paraphrased below <cit.>: Let H be a regular holographic screen. The area of its leaves changes strictly monotonically under the flow through the foliation of H. Q-screens are related to holographic screens, but with expansion replaced by what is dubbed the “quantum expansion.” Let σ again denote a compact connected surface. The quantum expansion at a point y ∈σ in the orthogonal null direction k^μ is defined as the rate of change per unit proper area of the generalized entropy (<ref>), i.e., the sum of both area and matter entropy, with respect to affine deformations along the null ray generated by k^μ:Θ_k[σ;y] = lim_𝒜→ 0. 4G/𝒜d S_gen/dλ|_yThen similar to before, a quantum marginal surface is a surface σ such that the quantum expansion in one orthogonal null direction vanishes everywhere on σ. Just as a marginal surface locally extremizes area along a lightsheet, a quantum marginal surface locally extremizes the generalized entropy along the lightsheet generated by k^μ.The adjective “quantum” can be confusing in this context.In this work it denotes a shift from classical general relativity, where one proves theorems about the area of surfaces, to quantum field theory on a semiclassical background, where analogous theorems refer to a generalized entropy that adds the entropy of matter degrees of freedom to such an area. That matter entropy may be be calculated as the quantum (von Neumann) entropy of a density operator, but in the right circumstances (which we will in fact be dealing with below) it is equally appropriate to treat it as a classical thermodynamic quantity. So here “quantum” should always be interpreted as “adding an entropy term to the area of some surface,” whether or not quantum mechanics is directly involved.The remaining constructions have similarly parallel definitions. A Q-screen is a smooth codimension-1 hypersurface that can be foliated by quantum marginal surfaces. A quantum marginal surface σ is marginally quantum trapped if the quantum expansion in the other null direction is negative everywhere on σ, and a future Q-screen is a Q-screen whose leaves are marginally quantum trapped. Analogous definitions apply for anti-trapped surfaces and past Q-screens. A Q-screen may be timelike, null, spacelike, or some combination thereof in different regions. Future and past Q-screens that also obey certain generic conditions analogous to those for holographic screens are the objects that are conjectured to obey a Generalized Second Law <cit.>:Let 𝒬 be a regular future (resp. past) Q-screen. The generalized entropy of its leaves increases strictly monotonically under the past and outward (resp. future and inward) flow along 𝒬.Note that while the GSL remains unproven in general, it is known to hold in several examples, and it can in fact be shown to hold if one assumes the Quantum Focusing Conjecture <cit.>.So far we have not said much about the precise definition of generalized entropy, so let us discuss how it is defined in more careful terms. 
Our context here is quantum field theory in curved spacetime, rather than a full-blown theory of quantum gravity. Given some spacetime, suppose that it comes equipped with a foliation by Cauchy hypersurfaces, and suppose that the spacetime's matter content is described by a density matrix ρ(Σ) on each Cauchy hypersurface Σ. Let σ be a compact connected surface that divides a Cauchy hypersurface Σ into two regions: the interior and exterior of σ. The generalized entropy computed with respect to σ and Σ is then the sum of the area of σ in Planck units and S_out, the von Neumann entropy of the reduced state of ρ restricted to the exterior of σ, cf. eq:Sgendef. The reduced state of ρ outside σ, which we denote ρ_out, is obtained by tracing out degrees of freedom on Σ in the interior of σ,ρ_out≡tr_int σ [ρ(Σ)] ,and the Von Neumann entropy of ρ_out isS_out[σ,Σ] = - tr[ ρ_outlnρ_out] . For a general field-theoretic state, the von Neumann entropy S_out[σ,Σ] is a formally divergent quantity. Consequently, there is some subtlety surrounding how it should be regulated, whether through an explicit ultraviolet cutoff or via subtracting a divergent vacuum contribution <cit.>. Since we will exclusively be concerned with cosmology, we will work in a regime where the matter content of the spacetime has a conserved “thermodynamic," or coarse-grained entropy s per unit comoving volume. (Entropy per comoving volume is approximately conserved in cosmologies that do not have too much particle production <cit.>.) The von Neumann entropy of a quantum mechanical system coincides with the thermodynamic Gibbs entropy in the classical limit where the state ρ_out has no coherence, i.e., is diagonal in the energy eigenbasis of Gibbs microstates.We will suppose that we can take the matter contribution to the generalized entropy, which is formally given by the von Neumann entropy S_out[σ,Σ], to be given by a coarse-grained entropy S_CG[σ,Σ] in the interior of σ:S_out[σ,Σ]  →  S_CG[σ,Σ] = s ·vol_c[σ,Σ]Here, vol_c[σ,Σ] denotes the comoving (coordinate) volume of int σ on Σ. (This approach is also taken in the examples of <cit.>.) This expression is appropriate for cosmology, where observers find themselves on the inside of Q-screens and cosmological horizons when present, as opposed to observers who remain outside of a black hole and who are unable to access the interior of the black hole's horizon. Moreover, in the field-theoretic case where ρ(Σ) is a pure state, then it follows that S_in = S_out, where S_in is the Von Neumann entropy of ρ_in≡tr_ext σ[ρ(Σ)].The fact that each leaf of a Q-screen extremizes the generalized entropy on an orthogonal lightsheet leads to a useful method for constructing Q-screens <cit.>. Given some spacetime with a foliation by Cauchy surfaces, suppose that one is also supplied with a foliation of the spacetime by null sheets with compact spatial cross-sections. Let each null sheet be labeled by a parameter r, and on each null sheet, let σ(r) be the spatial section with extremal generalized entropy, when it exists. (Not every spacetime contains Q-screens, such as Minkowski space. 
But in Big Bang cosmologies, we expect both the area of, and entropy inside, a light cone to decrease in the very far past, so the generalized entropy will have an extremum somewhere.)It follows that each σ(r) is a quantum marginal surface, and so if the quantum expansion has a definite sign in the other orthogonal null direction on each σ(r), the union of these surfaces, 𝒬 = ⋃_r σ(r), is by construction a Q-screen.One way to generate a null foliation of a spacetime is to consider the past light cones of some timelike trajectory. Q-screens constructed from this type of foliation will be particularly useful for our purposes. This construction is illustrated through a worked example in Appendix <ref>. § A COSMIC NO-HAIR THEOREM FOR RW SPACETIMESWe can used the notions reviewed above to show that spacetimes that expand and approach a constant maximum entropy along Q-screens will asymptote to de Sitter space. The basic idea of our proof is made clear by the simple example of a metric that is already homogeneous and isotropic, so that all we are showing is that the scale factor approaches e^Ht for some fixed constant H. The anisotropic case, considered in the next section, is considerably more complex, but the ideas are the same.Let ℳ be a Robertson-Walker (homogeneous and isotropic) spacetime with the line elementds^2 = -dt^2 + a^2(t) (dχ^2 + χ^2 dΩ_d-1^2 ),where t ∈ (t_i,∞). Our aim is to show that if ℳ admits a past Q-screen along which the generalized entropy monotonically increases up to a finite maximum value, then this alone, together with a handful of generic conditions on ℳ, implies that ℳ is asymptotically de Sitter, or in other words, thatlim_t→∞ a(t) = e^Htfor some constant H. In particular, we will neither make use of the Einstein field equations nor assume that there is a positive cosmological constant.Begin by foliating ℳ with past-directed light cones whose tips lie at the spatial origin χ = 0, and suppose that ℳ admits a past Q-screen, 𝒬, constructed with respect to this foliation. In other words, suppose that each light cone has a spatial slice with extremal generalized entropy so that 𝒬 is the union of all of these extremal slices. Past light cones will generically have a maximal entropy slice in cosmologies which, for example, begin with a big bang where a(t_i) = 0. An example is portrayed in fig:QscreenEx, which shows a holographic screen and a Q-screen in a cosmological spacetime with a past null singularity and a future de Sitter evolution; this example is explained in more detail in Appendix <ref>. The intuition here is that while the past-directed null geodesics that make up a light cone may initially diverge, eventually they must meet again in the past when the scale factor vanishes and space becomes singular. Ultimately, however, we need only assume that the Q-screen exists, and we only remark on its possible origins for illustration.Because RW spacetimes are spherically symmetric, the extremal-entropy light cone slices will be spheres, i.e., constant-t slices. If the quantum expansion vanishes in the lightlike direction along the light cone and is positive in the other lightlike direction at a single point on some test sphere, then it maintains these properties at every point on that sphere due to symmetry. This sphere is by construction a marginally quantum anti-trapped surface, or equivalently has extremal generalized entropy on the light cone. 
We therefore take the Cauchy surfaces Σ with respect to which generalized entropy is defined to be the constant-t surfaces in ℳ, since constant-t slices of light cones are spheres.We will also make a handful of generic assumptions about ℳ and 𝒬 without which a cosmic no-hair theorem is not guaranteed. Indeed, Wald's theorem does not hold in completely general cosmologies either; it requires that the spacetime is initially expanding and that its matter content satisfies the strong and dominant energy conditions. Here, we will assume that space continues to expand for all cosmic time.[In principle, the expansion need not be monotonic, but we will find that monotonicity is implied when ℳ admits a Q-screen such as 𝒬.] We want to avoid cosmologies that crunch or that otherwise clearly do not admit a no-hair theorem. We will also suppose that 𝒬 satisfies the generic conditions outlined in <cit.>.With these considerations in mind, the theorem that we wish to prove is the following:Let ℳ be a RW spacetime with the line element (<ref>) and whose matter content has constant thermodynamic entropy s per comoving volume. Suppose that ℳ admits a past Q-screen, 𝒬, constructed with respect to a foliation of ℳ with past-directed light cones that are centered on the origin, χ = 0, and suppose that the Generalized Second Law holds on 𝒬. Suppose that ℳ and 𝒬 together satisfy the following assumptions: (a) a(t) →∞ as t →∞, (b) S_gen→ S_max < ∞ along 𝒬.Then, ℳ is asymptotically de Sitter and the scale factor a(t) approaches e^Ht, where H is a constant. For convenience we work in d=3 spatial dimensions, but the generalization to arbitrary dimensions is straightforward. As discussed above, the leaves of 𝒬 are spheres. Letting the leaves be labeled by some parameter r, the generalized entropy is then given byS_gen[σ(r),Σ(r)] ≡ S_gen(r) = π/Gχ^2(r) a^2(t(r)) + 43πχ^3(r) s .The hypersurface Σ(r) is the constant-t(r) surface in which the leaf σ(r) is embedded, and χ(r) denotes the radius of the leaf.First, we need to establish that 𝒬 extends out to future timelike infinity. In principle, 𝒬 could become spacelike and consequently not extend beyond some time t (or in other words, t(r) could have some finite maximum value), but it turns out that this does not happen.Recall the property of Q-screens that generalized entropy is extremized on each leaf with respect to lightlike deformations. Here we may writek^μ∂_μ S_gen = 0 ,where k^μ = (a(t), -1, 0, 0) is the lightlike vector that is tangent to the light cone and with respect to which S_gen is extremal. (Any point x^μ belongs to a unique sphere on a past-directed light cone and may therefore be associated with a particular value of S_gen. This lets us define the partial derivative in eq:sgendiv above.) The deformation corresponds to dragging the leaf σ(r) up and down the light cone, and by construction S_gen(r) is extremal on the leaf σ(r). Note that in more general settings we should consider deformations with respect to null geodesics, since the null generators of the light cone could have different normalizations at different points on σ(r). Or, in other words, the geometry of the leaf σ(r) could change as it is dragged by some fixed affine amount along the light cone. 
Here, however, the spherical symmetry of RW ensures that the null generators on σ(r) all have the same normalization, so that k^μ as defined above is proportional to the null generators everywhere on σ(r).Writing out the partial derivatives, (<ref>) becomes0= (a∂_t - ∂_χ) ( π/Gχ^2 a^2 + 43πχ^3 s )= 2π/Gχ^2 a^2 ȧ - 2π/Gχ a^2 - 4 πχ^2 s .(One must be careful to distinguish between the coordinate t and the value t(r) which labels leaves in the Q-screen.) If χ≠ 0, then it follows that1/χ = ȧ(t) - 2Gs/a^2(t).eq:Qexist lays out the criterion for when there is a leaf in a constant-t slice; when the right side is finite and positive, then there must be a leaf in that slice.Observe that the right side of eq:Qexist does not diverge for any finite t > t_i since a(t) is defined for all t ∈ [t_i,∞) and only diverges in the infinite t limit by assumption. Furthermore, if the right side is nonzero and positive for some time t_time (and consequently there is a leaf σ(r) in the t(r) = t_time slice), then the right side cannot approach zero, since this would cause the radius of subsequent leaves to grow infinitely large, which contradicts the assumption that S_gen remains finite. Therefore, if 𝒬 has a leaf at some time, then eq:Qexist shows that 𝒬 must have leaves in all future slices. 𝒬 is therefore timelike and extends out to future timelike infinity.[Alternatively, we could instead replace Assumption (a) with the assumption that 𝒬 is timelike and extending out to future timelike infinity and argue that a →∞. The arguments given here show that these two points are logically equivalent.] Furthermore, that the right side of eq:Qexist cannot vanish immediately implies that ȧ > 2Gs/a^2 > 0 for t > t_time, so that the expansion must be monotonic.Because 𝒬 is timelike, we can label each leaf by the constant-t_1 surface in which it lies, i.e., let the parameter r be a time t_1 (subscripted as such to distinguish it from the coordinate t). Referring to eq:sgenRW, since a(t) grows without bound by assumption, it must be that χ(t_1) decreases at least as fast as a^-1(t_1) in order for the area term in S_gen to remain finite (as it must, since by hypothesis S_gen≤ S_max). The matter entropy term therefore becomes irrelevant in the asymptotic future, and so that S_gen→ S_max, it must be thatχ(t_1) →√(G S_max/π)1/a(t_1)as t_1 →∞.Next, rearrange eq:Qexist to solve for ȧ. Using the asymptotic form for χ(t_1) in eq:chiRW, to leading order in a we find thatȧ→√(π/G S_max) a+ (subleading).Therefore, it follows that a(t) → e^Ht as t →∞, where H = (π/GS_max)^1/2, demonstrating that the metric approaches the de Sitter form, as desired. The entropy S_max = π/GH^2 coincides with the usual de Sitter horizon entropy.  We close this section by briefly remarking that the result above extends straightforwardly to open and closed RW spacetimes as well. More generally, the result of Theorem <ref> applies to a RW spacetime ℳ of any spatial curvature, i.e., with the line elementds^2 = -dt^2 + a^2(t) (dχ^2 + f^2(χ) dΩ_d-1^2 )wheref(χ) = {[ sinχ χ∈ [0,π] (closed);χ χ∈ [0,∞) (flat);sinhχ χ∈ [0,∞) (open) ]. .The overall proof technique is the same as in the proof of Theorem <ref>. Working in 1+3 dimensions, in the more general case, the generalized entropy of the leaves of 𝒬 is given byS_gen[σ(r),Σ(r)] ≡ S_gen(r) = π/G f^2(χ(r)) a^2(t(r)) + v(χ(r)) s .When ℳ is closed, the comoving volume v(χ) is given by v(χ) = 2π (χ - sinχcosχ), and when ℳ is open, v(χ) is given by v(χ) = 2π (sinhχcoshχ - χ). 
Consequently, the condition k^μ∂_μ S_gen = 0, which determines when there is a leaf in the constant-t hypersurface, gives1/f^2(χ) = ( ȧ(t) - 2Gs/a^2(t))^2 + k ,where k = +1, 0, or -1 if ℳ is respectively closed, flat, or open. Here as well, if there is a leaf at some time t_time so that the right-hand side of eq:Qexistgen is nonzero, then there are leaves in all subsequent constant-t slices, since the finiteness of S_gen demands that the right-hand side cannot approach zero. Therefore, 𝒬 extends out to future timelike infinity.For the general case, the condition in eq:chiRW that S_gen→ S_max reads[A minor technical point worth noting is that the condition in eq:chiRWgen is not identically equivalent to the condition χ(t_1) →√(GS_max/π)/a(t_1) when ℳ is closed. In this case, χ(t_1) →π - √(GS_max/π)/a(t_1) is also admissible.]f(χ(t_1)) →√(GS_max/π)1/a(t_1).Upon substituting eq:chiRWgen into eq:Qexistgen (and taking the positive root, since ℳ is expanding), we recover eq:RWasymp, and so the rest of the proof follows as before. § A COSMIC NO-HAIR THEOREM FOR BIANCHI I SPACETIMESIn a RW spacetime, we demonstrated that the existence of a Q-screen along which entropy monotonically increases to a finite maximum implies that the scale factor tends to the de Sitter scale factor far in the future. Now we will go one step further and show that in the case where the cosmology is allowed to be anisotropic, similar assumptions imply that any initial anisotropies decay at late times as well. Specifically, we will prove a cosmic no-hair theorem for Bianchi I spacetimes. The calculations in the proof for Bianchi I spacetimes are more involved than the RW case, so we will begin with a proof in 1+2 dimensions, where the anisotropy only has one functional degree of freedom. We will then generalize to 1+3 dimensions, which also makes apparent how to generalize to arbitrary dimensions.§.§ 1+2 dimensions Let ℳ be a Bianchi I spacetime in 1+2 dimensions with the line elementds^2 = -dt^2 + a_1^2(t) dx^2 + a_2^2(t) dy^2where t ∈ (t_i, ∞). Once again foliate ℳ with past-directed light cones whose tips lie at x = y = 0, and suppose that ℳ admits a past Q-screen 𝒬, constructed with respect to this foliation, together with an accompanying foliation by Cauchy hypersurfaces. Our aim is to show that if generalized entropy tends to a finite maximum along 𝒬, then the GSL implies that a_1(t), a_2(t) → e^Ht as t →∞ for some constant H.Here as well we will assume that space expands for all time, with a_1(t), a_2(t) →∞ as t →∞. We will also further assume that 𝒬 is timelike and extends out to future timelike infinity past some time t_time. We suspect that it might be possible to show that this latter property follows from the assumption that a_1(t) and a_2(t) grow without bound, as in the case of a RW spacetime, but we do not know of a straightforward way to show this.We will assume that generalized entropy is globally maximized on each light cone by the corresponding screen leaf (as opposed to only assuming local extremality). In other words, we will assume that there are no other slices of each light cone whose generalized entropy is larger than that of the screen leaf. This property of leaves is certainly true when the Quantum Focusing Conjecture (QFC) holds <cit.>. Moreover, the GSL is provably true when the QFC holds.The QFC is the conjecture that the quantum expansion of a null congruence is nonincreasing along the congruence. 
In symbols, for a null congruence generated by k^μ with an affine parameter λ on a given null ray, the QFC reads dΘ_k/dλ ≤ 0. The QFC is the semiclassical analogue of classical focusing, dθ/dλ ≤ 0, which holds when the null curvature condition holds. In particular, eq:QFC makes it clear that if a light cone slice locally maximizes generalized entropy with respect to deformations on the light cone, then it is also the unique global maximum. A leaf σ that locally maximizes generalized entropy obeys Θ_k[σ,y] = 0 for all y ∈σ. Therefore, if Θ_k is nonincreasing on the light cone[The pathological case of dΘ_k/dλ = 0 on a subset of the congruence with nonzero measure is ruled out by appropriate genericity conditions.], there are no deformations of σ that lead to a larger generalized entropy, and so σ attains the globally maximal generalized entropy. It is interesting to explore ways in which this assumption about global maximality of generalized entropy can be relaxed, which we shall do after the proof of the no-hair theorem. Next, we introduce conformal light cone coordinates <cit.>, which are more convenient coordinates to work in when dealing with anisotropy. First, observe that we may rewrite the line element (<ref>) as ds^2 = -dt^2 + a^2(t)[e^2b(t) dx^2 + e^-2b(t) dy^2] with a_1(t) = a(t) e^b(t) and a_2(t) = a(t) e^-b(t) <cit.>. In this parameterization, the "volumetric scale factor" a(t) describes the overall expansion of space while b(t) characterizes the anisotropy. Next, make the coordinate transformation to conformal time defined by dt = ± a(η) dη so that the line element (<ref>) reads ds^2 = a^2(η)[-dη^2 + e^2b(η) dx^2 + e^-2b(η) dy^2]. Choose the sign of η so that η(t) is a monotonically increasing function of t, and denote the limiting value of η(t) as t →∞ by η_∞. Conformal light cone coordinates are then defined by the following coordinate transformation: x(η,η_o,θ) = cosθ∫_η^η_o e^-2b(ζ)/√(cos^2θ e^-2b(ζ) + sin^2θ e^2b(ζ)) dζ, y(η,η_o,θ) = sinθ∫_η^η_o e^2b(ζ)/√(cos^2θ e^-2b(ζ) + sin^2θ e^2b(ζ)) dζ. The point with coordinates (η,η_o,θ) is reached by firing a past-directed null geodesic from the spatial origin x = y = 0 at an angle θ∈ [0,2π) counterclockwise relative to the x-axis at conformal time η_o and following the light ray in the past down to the conformal time η (fig:clc). Note that while η is a timelike coordinate, η_o acts as a radial coordinate at each η. The surfaces of constant η_o are precisely the past-directed light cones with respect to which 𝒬 is constructed. We can therefore label the leaves σ of 𝒬 by the values of η_o corresponding to the light cones on which they lie (fig:Qscreen): 𝒬 = ⋃_η_o σ(η_o). Similarly, label the Cauchy hypersurfaces with respect to which each leaf is defined by Σ(η_o). In various instances, it will be useful to use another coordinate, χ = η_o - η, which may be thought of as a comoving radius in a sense that will be made precise later. We will also sometimes work in the coordinates (η,χ,θ) or (χ,η_o,θ) in addition to the conformal light cone coordinates (η,η_o,θ). The no-hair theorem that we will prove is as follows: Let ℳ be a Bianchi I spacetime with the line element (<ref>) and whose matter content has constant thermodynamic entropy s per comoving volume. Suppose that ℳ admits a past Q-screen 𝒬, with globally maximal entropy leaves constructed with respect to a foliation of ℳ with past-directed light cones that are centered on the origin, x = y = 0.
Suppose that the Generalized Second Law holds on 𝒬 and that ℳ and 𝒬 together satisfy the following assumptions: (i) a_1(t), a_2(t) →∞ as t →∞, (ii) 𝒬 is timelike past some t_time and extends out to future timelike infinity, (iii) ȧ_1(t), ȧ_2(t) > 0 after some t_mono, (iv) S_gen→ S_max < ∞ along 𝒬. Then, ℳ is asymptotically de Sitter and the scale factors a_1(t) and a_2(t) approach C_1 e^Ht and C_2 e^Ht, respectively, where H, C_1, and C_2 are constants. Notes: To obtain a manifestly isotropic metric, rescale the coordinates x and y by C_1 and C_2, i.e., set X = C_1 x and Y = C_2 y. Then, the line element (<ref>) asymptotically reads ds^2 → -dt^2 + e^2Ht(dX^2 + dY^2). Also note that we have introduced an additional assumption compared to the RW case: Assumption (iii), that a_1(t) and a_2(t) grow monotonically past some time t_mono. Finally, also note that in terms of a(η) and b(η), Assumption (i) becomes: (i^') a(η) →∞ as η→η_∞ and a(η) e^± b(η)→∞. In terms of a(η) and b(η), the theorem is established by showing that a(η) → -1/Hη and b(η) → B as η→ 0^- (and also that η_∞ = 0) for some constant B. The proof can be broken down into three parts. First, we show that, asymptotically, 𝒬 squeezes into the comoving coordinate origin. Second, we use this asymptotic squeezing behaviour to demonstrate that the volumetric scale factor a(η) tends to the de Sitter scale factor. Finally, we show that the asymptotic behaviour of a(η) and Assumption (iii) together imply that anisotropy decays. §.§.§ Showing that 𝒬 squeezes into the coordinate origin χ=0 as η_o→η_∞. Consider the leaves of 𝒬 and work in x̃^μ = (η,η_o,θ) coordinates. On the light cone whose tip is at η_o, each leaf σ(η_o) is a closed path parameterized by x̃^μ(u; η_o) = (η(u; η_o), η_o, u), u ∈ [0,2π). Our first task is to show that χ(u;η_o) ≡η_o - η(u;η_o) tends to zero for all values of u as η_o→η_∞. We will do so through a proof by contradiction. Suppose to the contrary that 𝒬 never squeezes into the comoving coordinate origin. That is, suppose that there exists M > 0 such that, given any η_o > η_time, one can find values η̃_o > η_o and ũ such that χ(ũ; η̃_o) ≥ M. Let η̃≡η(ũ; η̃_o) and consider the constant η = η̃ slice of the light cone whose tip is at η̃_o (fig:bound_slice). Denote this (co-dimension 2) surface by ς(η̃; η̃_o), and denote the (co-dimension 1) hypersurface of constant-η̃ by X(η̃). Since the generalized entropy of the leaf σ(η̃_o) is globally maximal on this light cone by assumption, it must follow that S_gen[σ(η̃_o),Σ(η̃_o)] ≥ S_gen[ς(η̃; η̃_o), X(η̃)] ≥ A[ς(η̃; η̃_o)]/4G, where the last inequality follows because S_gen is always greater than or equal to just the area term. Our basic strategy will be to show that A[ς(η̃; η̃_o)] diverges as η̃_o →η_∞, which contradicts the assumption (iv) that S_gen must remain finite on 𝒬. To do this, let us compute the proper area A[ς(η̃; η̃_o)].
In three dimensions, the induced metric on a surface of constant η and η_o has only a single component, given by γ = (∂ x^μ/∂θ)(∂ x^ν/∂θ) g_μν = a^2(η)[e^2b(η)(∂ x/∂θ)^2 + e^-2b(η)(∂ y/∂θ)^2] ≡ a^2(η) γ̃, where the coordinate partial derivatives read[A Maple worksheet which implements the calculations in this article is available through the online repository <cit.>.] ∂ x/∂θ = ∫_η^η_o -sinθ/(cos^2θ e^-2b(s) + sin^2θ e^2b(s))^3/2 ds and ∂ y/∂θ = ∫_η^η_o cosθ/(cos^2θ e^-2b(s) + sin^2θ e^2b(s))^3/2 ds. It follows that the area of this surface is A(η,η_o) = ∫_0^2π√(γ) dθ = a(η) ∫_0^2π√(γ̃) dθ. It is fairly straightforward to place a lower bound on this area: A(η,η_o) ≥ a(η) e^b(η)∫_0^2π |∂ x/∂θ| dθ = a(η) e^b(η)∫_η^η_o ds ∫_0^2π dθ |sinθ|/(cos^2θ e^-2b(s) + sin^2θ e^2b(s))^3/2 = 4 a(η) e^b(η)∫_η^η_o ds e^-b(s). One arrives at a similar expression using ∂ y/∂θ. Note that in the middle line above, we were able to bring the absolute value into the integrand of ∂ x/∂θ because it has a definite sign for any given θ. Then, if e^-b(s) is minimized at s = η_m ∈ [η,η_o], it follows that A(η,η_o) ≥ 4 a(η) e^b(η) e^-b(η_m) (η_o - η) ≥ 4 a(η) (η_o - η). Applied to our surface ς(η̃; η̃_o), for which η̃_o - η̃≥ M, the bound reads A[ς(η̃; η̃_o)] ≡ A(η̃,η̃_o) ≥ 4 M a(η̃), which diverges as η̃_o and η̃ are chosen arbitrarily large. We therefore have the contradiction that we sought, and so the leaves of the Q-screen must squeeze into the comoving coordinate origin in the asymptotic future. §.§.§ Showing that a(η) is asymptotically de Sitter. Now we turn our attention to calculating S_gen[σ(η_o), Σ(η_o)] itself, and using its asymptotic properties as η_o →η_∞ to demonstrate that a(η) → -1/Hη for a constant H with η_∞ = 0. First, we will argue that the matter entropy term, which we assume can be calculated using the coarse-grained entropy S_CG[σ(η_o),Σ(η_o)], vanishes asymptotically in the future. To this end, let us prove the following useful lemma about constant-η slices of light cones when χ = η_o - η is infinitesimally small: Let ς(η; η+χ) be the constant-η slice of the past-directed light cone whose tip is at η_o = η + χ. The generalized entropy defined by this slice is given by S_gen[ς(η;η+χ),X(η)] = A(η,η+χ)/4G + c_g(η,χ) χ^2 s, where A(η,η+χ) is given by A(η,η+χ) = a(η)·[2πχ + O(χ^3)], and c_g(η,χ) is some O(1) geometric factor due to anisotropy that does not depend on a(η). First we justify the parameterization of the coarse-grained entropy S_CG = c_g(η,χ)χ^2 s. In the coordinates of the metric (<ref>), S_CG is given by S_CG[ς(η;η+χ),X(η)] = s·vol_c[ς(η;η+χ),X(η)] = s ∬_int ς dx dy, where int ς(η;η+χ) denotes the region on X(η) inside ς(η;η+χ). In terms of the coordinates (η,χ,θ), S_CG is S_CG[ς(η;η+χ),X(η)] ≡ S_CG(η,χ) = s ∫_0^χ∫_0^2π |∂(x,y)/∂(χ^',θ)| dθ dχ^'. Formally, the Jacobian can be calculated from the coordinate transformation (<ref>)-(<ref>) above. Expanding in powers of χ, one finds that S_CG(η,χ) = s·(πχ^2 + (π/8) b^'(η)^2 χ^4) + O(χ^5). Therefore, we can simply define c_g(η,χ) ≡ S_CG(η,χ)/χ^2 s = π + (π/8) b^'(η)^2 χ^2 + O(χ^3). The function c_g(η,χ) is O(χ^0) by construction, and from the coordinate transformation (<ref>)-(<ref>), in which a(η) never appears, we see that c_g cannot depend on a(η), as claimed. The expansion of A(η,η+χ) for small χ follows from expanding √(γ̃) in eq:AconstExact in powers of χ and then integrating. Note that eq:A3Dasymptotic demonstrates the sense in which χ is a comoving radius (at least for small values of χ).
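The area lower bound above is easy to probe numerically. The following sketch (ours, not part of the original argument; the anisotropy profile b and the conformal-time endpoints are arbitrary mild test choices, and a(η) is set to 1 since it multiplies both sides of the inequality) evaluates the slice area by quadrature from the coordinate derivatives quoted above and compares it against 4 a(η)(η_o - η):

```python
import numpy as np
from scipy.integrate import quad

# Mild, arbitrary test anisotropy profile (an assumption for illustration).
b = lambda s: 0.1 * np.sin(s)

def slice_area(eta, eta_o, n=200):
    """Proper area (length) of the constant-eta light cone slice, with a(eta) = 1."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    vals = []
    for th in thetas:
        denom = lambda s: (np.cos(th)**2 * np.exp(-2*b(s))
                           + np.sin(th)**2 * np.exp(2*b(s)))**1.5
        dx_dth = quad(lambda s: -np.sin(th) / denom(s), eta, eta_o)[0]
        dy_dth = quad(lambda s:  np.cos(th) / denom(s), eta, eta_o)[0]
        # sqrt(gamma~) evaluated at this theta
        vals.append(np.sqrt(np.exp(2*b(eta)) * dx_dth**2
                            + np.exp(-2*b(eta)) * dy_dth**2))
    return 2.0 * np.pi * np.mean(vals)

eta, eta_o = -3.0, -1.0
print(slice_area(eta, eta_o), ">=", 4.0 * (eta_o - eta))   # roughly 12.6 >= 8.0
```

With b ≡ 0 the slice is a round circle of circumference 2π(η_o - η), so the bound simply reflects 2π > 4; mild anisotropy only perturbs this margin.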
We can use the result of Lemma <ref> to show that S_CG[σ(η_o),Σ(η_o)] vanishes asymptotically in the future. Given a leaf σ(η_o), let η_min be the minimum value attained by η(u;η_o): η_min = min_u {η(u; η_o)}. Consider the constant-η_min slice of the light cone whose tip is at η_o, which we label by ς(η_min;η_o) (fig:const_slice). The comoving volume of σ(η_o) is contained within the comoving volume of ς(η_min;η_o), which, according to Lemma <ref>, vanishes in the asymptotic future limit. Therefore, the comoving volume of σ(η_o) vanishes as well, so S_CG[σ(η_o),Σ(η_o)] vanishes asymptotically in the future. Next we investigate the asymptotic behaviour of A[σ(η_o)]. For this part of the proof, we will work in the coordinates (χ,η_o,θ). In these coordinates, the leaf σ(η_o) is parameterized by some path x̃^μ(u) = (χ(u;η_o),η_o,u) with η_o held constant and 0 ≤ u < 2π. In the future, when S_CG becomes negligible, this path is the maximal area (also known as length in 1+2 dimensions) path on the light cone whose tip is at η_o, and so A[σ(η_o)] satisfies δ A[σ(η_o)]/δχ(u;η_o) = 0. In principle, one can therefore solve the Euler-Lagrange problem above to obtain the path χ(u; η_o) and hence also the maximal area A[σ(η_o)]. The tangent to the path is t^μ = dx̃^μ/du = (χ̇(u; η_o), 0, 1) (where a dot denotes a derivative with respect to the parameter u). Therefore, the area of σ(η_o) is given by A[σ(η_o)] = ∫_0^2π√(g̃_μν t^μ t^ν) du = ∫_0^2π√(g̃_00 χ̇^2 + 2g̃_02 χ̇ + g̃_22) du, where g̃_μν is the metric of eq:conformallineelement but rewritten in (χ,η_o,θ) coordinates. One finds that g̃_00 = 0 exactly, but g̃_02 and g̃_22 do not admit any such simplifications. Because of this, solving the full Euler-Lagrange problem to actually obtain the path χ(u;η_o) is intractable in general. Nevertheless, we can exploit the fact that 𝒬 squeezes into the coordinate origin and perform a small-χ expansion of A[σ(η_o)]. First, pull out a factor of a(η_o-χ) from the square root: A[σ(η_o)] = ∫_0^2π a(η_o-χ) √(2f_02 χ̇ + f_22) du. In so doing we have defined g̃_μν = [a(η_o-χ)]^2 f_μν. Then, expand the square root in χ. The result is A[σ(η_o)] = ∫_0^2π a(η_o-χ) [χ/R(u;η_o) + (1/2) b^'(η_o) Q(u;η_o)/R^2(u;η_o) χ^2 + O(χ^3)] du, where R(u;η_o) = e^-2b(η_o) cos^2 u + e^2b(η_o) sin^2 u and Q(u;η_o) = e^-2b(η_o) cos^2 u - e^2b(η_o) sin^2 u. Pulling out the scale factor is necessary to avoid pathologies that arise because both χ and η_o become small in the same limit (see Appendix <ref> for illustration). Keeping only the first-order term, the variation δ A/δχ = 0 gives 0 = -a^'(η_o-χ) χ/R(u;η_o) + a(η_o-χ)/R(u;η_o), so asymptotically, the maximizing path χ(u;η_o) = χ(η_o) is given implicitly by the solution of χ = a(η_o-χ)/a^'(η_o-χ). To first order, A[σ(η_o)] is given by A[σ(η_o)] = 2π a^2(η_o-χ)/a^'(η_o-χ). But the requirement that S_gen→ S_max means that A[σ(η_o)]/4G must tend to the constant value S_max, or in other words, lim_η_o →η_∞, χ→ 0 a^2(η_o-χ)/a^'(η_o-χ) = 2GS_max/π ≡ 1/H. Therefore, a(η) asymptotically approaches de Sitter, a(η) → -1/Hη as η→ 0^-, with H = π/(2GS_max). Since χ(η_o) is a function of η_o, a technical detail to address is to check that the higher-order coefficients in the expansion (<ref>), which themselves depend on η_o through b(η_o) and its derivatives, do not cause the higher-order terms to be larger than the term that is first-order in χ. This we can achieve by bounding the remainder, r_1(χ;η_o) ≡√(2f_02 χ̇ + f_22) - χ/R. Let F = √(2f_02 χ̇ + f_22).
We may write its second derivative with respect to χ as ∂^2 F/∂χ^2 = b^'(η_o-χ) Q(u; η_o-χ)/R^2(u; η_o-χ) + ε(χ; η_o), where the term ε(χ; η_o) → 0 as χ→ 0 for any η_o. As such, choose χ and η_o both small enough such that |ε(χ; η_o)| < |b^'(η_o-χ)|/R(u;η_o-χ) for all u.[The only instance in which this is not possible is if |b^'(η_o-χ)|/R(u;η_o-χ) vanishes faster than |ε(χ; η_o)|. But, in this case, the remainder |r_1(χ;η_o)| can be bounded arbitrarily tightly, since |∂^2 F/∂χ^2| can be made arbitrarily small.] With this choice, and since |Q/R| ≤ 1, we have that |∂^2 F/∂χ^2| < 2|b^'(η_o-χ)|/R(u; η_o-χ). Next we invoke the monotonicity Assumption (iii). Let η_⋆ = max{η_mono, η_time}. In terms of a(η) and b(η), Assumption (iii) reads (a(η) e^+b(η))^' > 0 and (a(η) e^-b(η))^' > 0, or |b^'(η)| < a^'(η)/a(η). Therefore, upon additionally requiring 0 > η_o - χ > η_⋆ (i.e., possibly making χ and η_o smaller), it follows that |∂^2 F/∂χ^2| < (2/R(u; η_o-χ)) a^'(η_o-χ)/a(η_o-χ) ⟶ 2/[R(u; η_o-χ) χ(η_o)]. So, by Taylor's remainder theorem we have that |r_1(χ; η_o)| < R(u;η_o-χ_1)^-1 (χ^2/χ(η_o)) on any interval χ∈ [χ(η_o),χ_2], where χ_1 ∈ [χ(η_o),χ_2] minimizes R(u;η_o-χ). Or, at the edge of the interval, |r_1(χ(η_o); η_o)| < χ(η_o)/R(u; η_o-χ_1). Since ∫_0^2π R(u; η_o)^-1 du = 2π irrespective of the value of η_o, it follows that the remainder in the expansion is strictly smaller than the first-order term, so we were safe in restricting our attention to the first-order solution. §.§.§ Showing that the anisotropy decays. The decay of anisotropy is directly implied by Assumption (iii) once we have established that the volumetric scale factor a(η) is asymptotically de Sitter. In the far future limit, Assumption (iii) recast as (a(η) e^± b(η))^' > 0 gives |b^'(η)| < a^'(η)/a(η) ⟶ H a(η) = 1/(-η) as η→ 0^-. Therefore, to capture the asymptotic scaling of b^'(η), we can write b^'(η) = f(η)/(-η)^1-p, where p > 0 and where |f(η)| ≤ F for some bounded constant F when η > η_⋆. In other words, b^'(η) cannot grow faster than 1/η as η→ 0^-, so that (-η)^1-p b^'(η) is some bounded function. To establish that the anisotropy decays, and thus complete the proof of the theorem, we need only establish that b(η) goes to a fixed limit at late times: If b^'(η) satisfies eq:bpasymp on (η_⋆,0), then lim_η→ 0^- b(η) exists. We show that the limit exists by showing that b(η) is a Cauchy function. Let ϵ > 0. We must find δ > 0 such that 0 < -η_1 < δ and 0 < -η_2 < δ implies that |b(η_2) - b(η_1)| < ϵ. Without loss of generality, suppose that η_⋆ < η_1 < η_2. Then: |b(η_2) - b(η_1)| = |∫_η_1^η_2 f(u)/(-u)^1-p du| ≤∫_η_1^η_2 |f(u)|/(-u)^1-p du ≤ F ∫_η_1^η_2 1/(-u)^1-p du ≤ (F/p)(-η_1)^p. Therefore, let δ = (ϵ p/F)^1/p. Then |b(η_2) - b(η_1)| < ϵ as required. It is interesting to briefly consider how one may relax the assumption that each light cone has a globally maximal generalized entropy section.[A particularly astute reader may have noticed that the light cones in the example in app:example do not satisfy this global maximality property, but this is just because the approximation in which S_out is estimated by S_CG breaks down. More precisely, S_CG is not a good estimate of the matter contribution to generalized entropy for light cone slices that are far to the past of the light cone's tip. For such slices, the comoving volume enclosed by the slice grows arbitrarily large.] If we do not assume that each light cone has a maximum generalized entropy surface, then the proof above pauses at eq:SgenLowerBound.
In this case, it is no longer true that S_gen[σ(η̃_o),Σ(η̃_o)] must be greater than S_gen[ς(η̃; η̃_o),X(η̃)]; the generalized entropy of the leaf σ(η̃_o) could just be a local maximum, and the entropy of the constant-η̃ slice ς(η̃; η̃_o) could be larger. We must therefore make a slightly different argument. It turns out that a weaker but sufficient assumption is only to assume that each light cone has a unique maximum area surface. As before, let us still suppose that 𝒬 never squeezes into the comoving coordinate origin and find a contradiction. We again suppose that there exists M>0 such that, given any η_o > η_time, one can find values η̃_o > η_o and ũ such that χ(ũ; η̃_o) ≥ M. First note that in order for S_gen[σ(η̃_o),Σ(η̃_o)] to remain finite, it must be that the function χ(u; η̃_o) is only greater than M on an interval that has vanishing measure in the limit as η̃_o →η_∞. Otherwise, the proper area of σ(η̃_o) diverges. Therefore, the leaves of 𝒬 develop "tendrils" in the asymptotic future limit, as illustrated in fig:tendrils. In this case, however, the comoving volume that is enclosed by σ(η̃_o) vanishes as η̃_o →η_∞, which means that the (locally) maximal entropy slice of each light cone coincides with the (globally) maximal area slice in the asymptotic future limit. We can then repeat the same arguments presented in the proof above for the constant-η̃ slice, but applied in comparison to the maximal area slice, to construct the required contradiction. Once it is established that 𝒬 squeezes into the comoving coordinate origin, the proof continues as before. This relaxation is interesting (albeit somewhat artificial) because it makes it possible to avoid assuming the Quantum Focusing Conjecture. Moreover, as is shown in app:hscreenContinuity, if a RW spacetime admits a continuous holographic screen that has maximal area leaves on every past-directed light cone, then the screen itself is unique and there is always a finite globally maximal area slice of each light cone. (However, this slice is not necessarily unique and may not be part of the unique continuous holographic screen with leaves on every past-directed light cone.) This result suggests that it might in general be possible to relate continuity properties of screens to the properties of extremal-area light cone slices. For practical purposes, however, it is much cleaner to simply assume the QFC (which also guarantees that the GSL holds). §.§ 1+3 dimensions Now suppose that ℳ is a Bianchi I spacetime in 1+3 dimensions with the line element ds^2 = -dt^2 + a_1^2(t) dx^2 + a_2^2(t) dy^2 + a_3^2(t) dz^2. The case where ℳ has 3 dimensions of space parallels the 1+2-dimensional case with only a handful of technical complications. The main difference is that now the anisotropy has two functional degrees of freedom: ds^2 = -dt^2 + a^2(t)[e^2b_1(t) dx^2 + e^2b_2(t) dy^2 + e^2b_3(t) dz^2]. One arrives at the equation above by setting a_i(t) = a(t)e^b_i(t) for i = 1, 2, 3, where the b_i(t) are subject to the constraint ∑_i=1^3 b_i(t) = 0. The definition of conformal light cone coordinates (η, η_o, θ, ϕ) is correspondingly modified: x^j(η,η_o,θ,ϕ) = D^j(θ,ϕ) ∫_η^η_o e^-2b_j(ζ)/√(∑_i=1^3 D^i(θ,ϕ)^2 e^-2b_i(ζ)) dζ, where D^j(θ,ϕ) = (sinθ cosϕ, sinθ sinϕ, cosθ). Nevertheless, the essential construction remains unchanged. We still consider a past Q-screen, 𝒬, constructed with respect to a foliation of ℳ by past-directed light cones, and the leaves of 𝒬 are still labeled by the conformal time η_o where the tip of their corresponding light cone is located.
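To make the modified transformation concrete, here is a small numeric sketch (ours; the anisotropy profile below is an arbitrary illustrative choice satisfying b_1 + b_2 + b_3 = 0) that evaluates the defining integrals by quadrature and verifies that in the isotropic limit the ray is radial with |x| = η_o - η:

```python
import numpy as np
from scipy.integrate import quad

def D(theta, phi):
    """Unit direction vector D^j(theta, phi)."""
    return np.array([np.sin(theta)*np.cos(phi),
                     np.sin(theta)*np.sin(phi),
                     np.cos(theta)])

def clc_to_comoving(eta, eta_o, theta, phi, b):
    """Map (eta, eta_o, theta, phi) to comoving (x, y, z); b(z) returns (b_1, b_2, b_3)."""
    Dj = D(theta, phi)
    def make_integrand(j):
        def g(z):
            bz = np.asarray(b(z))
            denom = np.sqrt(np.sum(Dj**2 * np.exp(-2.0*bz)))
            return np.exp(-2.0*bz[j]) / denom
        return g
    return np.array([Dj[j] * quad(make_integrand(j), eta, eta_o)[0]
                     for j in range(3)])

# Illustrative decaying anisotropy (an assumption, not from the text).
b = lambda z: (0.2*np.exp(z), -0.1*np.exp(z), -0.1*np.exp(z))
print(clc_to_comoving(-2.0, -1.0, 0.8, 1.1, b))

# Isotropic sanity check: with b = 0 the geodesic is radial and
# |x| = eta_o - eta, i.e., chi is a genuine comoving radius.
b0 = lambda z: (0.0, 0.0, 0.0)
x = clc_to_comoving(-2.0, -1.0, 0.8, 1.1, b0)
print(np.isclose(np.linalg.norm(x), 1.0))   # True
```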
The no-hair theorem also generalizes in a straightforward way: Let ℳ be a Bianchi I spacetime with the line element (<ref>) and whose matter content has constant thermodynamic entropy s per comoving volume. Suppose that ℳ admits a past Q-screen, 𝒬, with globally maximal entropy leaves constructed with respect to a foliation of ℳ with past-directed light cones that are centered on the origin, x = y = z = 0. Suppose that the Generalized Second Law holds on 𝒬 and that ℳ and 𝒬 together satisfy the following assumptions for i ∈{1,2,3}: (i) a_i(t) →∞ as t →∞, (ii) 𝒬 is timelike past some t_time and extends out to future timelike infinity, (iii) ȧ_i(t) > 0 past some t_mono, (iv) S_gen→ S_max < ∞ along 𝒬. Then, ℳ is asymptotically de Sitter and the axial scale factors a_i(t) approach C_i e^Ht, where H and the C_i are constants. Note: In terms of a(η) and the b_i(η), Assumption (i) becomes: (i^') a(η) →∞ as η→η_∞ and a(η)e^b_i(η)→∞. The proof for 1+3 dimensions exactly parallels the proof of Theorem <ref>, so we only note the most important modifications. Beginning with Part 1, in (η,η_o,θ,ϕ) coordinates, the leaves σ(η_o) are now parameterized surfaces, x̃^μ(u,v; η_o) = (η(u,v; η_o), η_o, u, v), u ∈ [0,π], v ∈ [0,2π). Our first task is again to show that χ(u,v;η_o) ≡η_o - η(u,v;η_o) tends to zero for all values of u and v as η_o →η_∞. As before, let us construct a contradiction of Assumption (iv) by supposing that 𝒬 never squeezes into the comoving coordinate origin. Suppose that there exists M > 0 such that, given any η_o > η_time, one can find values η̃_o > η_o, ũ, and ṽ such that χ(ũ, ṽ; η̃_o) ≥ M. Let η̃≡η(ũ, ṽ; η̃_o) and consider the constant η = η̃ slice of the light cone whose tip is at η̃_o. Denote this (co-dimension 2) surface by ς(η̃; η̃_o), and denote the (co-dimension 1) hypersurface of constant-η̃ by X(η̃). Here as well, eq:SgenLowerBound will lead us to the contradiction via a divergence in A[ς(η̃; η̃_o)]. In 1+3 dimensions, the induced metric on a surface of constant η and η_o is given by γ_ab = (∂ x^μ/∂θ^a)(∂ x^ν/∂θ^b) g_μν = a^2(η) ∑_j=1^3 e^2b_j(η) (∂ x^j/∂θ^a)(∂ x^j/∂θ^b), where x^j and g_μν refer to eq:comovingLC4D and eq:bianchiI4d_ab, respectively, with θ^a ≡ (θ,ϕ). The area of this surface is now given by the surface integral A(η,η_o) = ∫_0^π∫_0^2π√(γ) dϕ dθ, where the determinant of the induced metric is γ = a^4(η) ∑_i<j e^2(b_i(η) + b_j(η)) (∂ x^i/∂θ ∂ x^j/∂ϕ - ∂ x^j/∂θ ∂ x^i/∂ϕ)^2. One may therefore bound the area of ς(η;η_o) by, e.g., A(η,η_o) ≥ a^2(η) e^(b_1(η)+b_2(η)) ∬ dθ dϕ |∂ x/∂θ ∂ y/∂ϕ - ∂ y/∂θ ∂ x/∂ϕ|. Using the coordinate transformation eq:comovingLC4D, one can show that the Jacobian in the integrand above is given by |∂ x/∂θ ∂ y/∂ϕ - ∂ y/∂θ ∂ x/∂ϕ| = ∬_η^η_o ds ds^' sinθ |cosθ| (sin^2θ cos^2ϕ e^2(b_2(s)+b_3(s^')) + sin^2θ sin^2ϕ e^2(b_3(s)+b_1(s^')) + cos^2θ e^2(b_2(s)+b_1(s^'))) × (∑_i=1^3 D^i(θ,ϕ)^2 e^-2b_i(s))^-3/2 (∑_j=1^3 D^j(θ,ϕ)^2 e^-2b_j(s^'))^-3/2. This is quite beastly, but fortunately we can bound it nicely: |∂ x/∂θ ∂ y/∂ϕ - ∂ y/∂θ ∂ x/∂ϕ| ≥∬_η^η_o ds ds^' sinθ |cos^3θ| e^2(b_2(s)+b_3(s^')) × ((e^-2b_1(s) + e^-2b_2(s)) sin^2θ + e^-2b_3(s) cos^2θ)^-3/2 × ((e^-2b_1(s^') + e^-2b_2(s^')) sin^2θ + e^-2b_3(s^') cos^2θ)^-3/2. (One would arrive at similar results by choosing different terms to keep in the numerator of eq:beast1.)
Then, inserting eq:beast2 into eq:areaBound2D and performing the angular integration, one arrives atA(η,η_o)≥ 4 π a^2(η) e^b_1(η)+b_2(η)∬_η^η_o ds ds^' f̃(s,s^')wheref̃(s,s^')= e^b_1(s) + 3b_2(s) + 3b_1(s^') + b_2(s^')/[ √(e^2b_1(s) + e^2b_2(s)) e^2(b_1(s^') + b_2(s^'))+ √(e^2b_1(s^') + e^2b_2(s^')) e^2(b_1(s) + b_2(s))]^2≥e^b_1(s) + 3b_2(s) + 3b_1(s^') + b_2(s^')/[ (e^b_1(s) + e^b_2(s)) e^2(b_1(s^') + b_2(s^'))+ (e^b_1(s^') + e^b_2(s^')) e^2(b_1(s) + b_2(s))]^2≡ f(s,s^') .Note that we have also used the fact that b_3 = -b_1 - b_2 to eliminate b_3. Then, if f(s,s^') is minimized at s_m, s^'_m ∈ [η,η_o], it follows thatA(η,η_o) ≥ 4 π (η_o - η)^2 a^2(η) e^b_1(η)+b_2(η) f(s_m,s^'_m) . Given this result, we now apply it to our surface ς(η̃; η̃_o), for which (η̃_o - η̃) ≥ M. Doing so, we arrive atA[ς(η̃; η̃_o)] ≡ A(η̃, η̃_o) ≥ 4 π M^2 a^2(η̃) e^b_1(η̃)+b_2(η̃) f(s_m,s^'_m) .The right-hand side of the bound above then diverges as η̃ and η̃_o are chosen arbitrarily large. The only subtlety arises if either or both of b_1 and b_2 also diverge, but because the numerator and the denominator of the b-dependent part of the bound eq:4Dbound contain equal powers of b_1 and b_2, the overall divergent behaviour induced by a(η) is unchanged. (Recall that a(η) e^b_1(η), a(η) e^b_2(η), and a(η) e^b_3(η) = a(η) e^-b_1(η) - b_2(η) all grow infinitely large by assumption.) We therefore arrive at the desired contradiction of Assumption (iv) via eq:SgenLowerBound, and so the leaves of 𝒬 squeeze into the comoving coordinate origin in the asymptotic future.Next we turn to showing that the scale factor a(η) is asymptotically de Sitter (Part 2). Consider the generalized entropy S_gen[σ(η_o),Σ(η_o)] once more. First, Lemma <ref> is correspondingly modified:Let ς(η; η+χ) be the constant-η slice of the past-directed light cone whose tip is at η_o = η + χ. The generalized entropy defined by this slice is given byS_gen[ς(η;η+χ),X(η)] = A(η,η+χ)/4G + c_g(η,χ) χ^3 s,where A(η,η+χ) is given byA(η,η+χ) = a^2(η) ·[ 4πχ^2 + O(χ^4) ],and c_g(η,χ) is some O(1) geometric factor due to anisotropy that does not depend on a(η). Repeating the steps described in Lemma <ref>, one finds thatc_g(η,χ) ≡S_CG(η,χ)/χ^3 s = 4π/3 + 8π/45(b_1^'(η)^2 + b_1^'(η) b_2^' (η) + b_2^'(η)^2 ) χ^2 + O(χ^3). The expansion of A(η,η+χ) for small χ follows from expanding √(γ) in eq:area4D in powers of χ and then integrating.  From Lemma <ref>, it therefore again follows that the matter contribution to the generalized entropy, S_CG[σ(η_o),Σ(η_o)], vanishes in the asymptotic future limit. Consequently, we focus on the area term, A[σ(η_o)].For this part of the proof, we will work in the coordinates (χ,η_o,θ,ϕ). The leaf σ(η_o) is parameterized by some surface x̃^μ(u,v) = (χ(u,v;η_o),η_o,u,v) with η_o held constant and 0 ≤ u ≤π, 0 ≤ v < 2π. In the asymptotic future, this surface is the surface on the light cone with tip at η_o with maximal area, and so it is the solution ofδ A[σ(η_o)]/δχ(u,v;η_o) = 0 .The induced metric on this surface is, as usual, given byh_ab = ∂x̃^μ/∂ u^a∂x̃^ν/∂ u^bg̃_μνwhere g̃_μν is the metric of eq:bianchiI4d_ab but rewritten in (χ,η_o,θ,ϕ) coordinates. The area of σ(η_o) is given byA[σ(η_o)] = ∫_0^π∫_0^2π√( h) dv du ,and the components of h_ab are as follows:h_uu = (∂_u χ)^2 g̃_00 + 2(∂_u χ) g̃_02 + g̃_22h_uv = (∂_u χ)(∂_v χ) g̃_00 + (∂_u χ) g̃_03 + (∂_v χ) g̃_02 + g̃_23h_vv = (∂_v χ)^2 g̃_00 + 2(∂_v χ) g̃_03 + g̃_33. 
Once more, solving the full Euler-Lagrange problem for χ(u,v;η_o) to obtain the maximal area A is intractable, so we use the same trick where we extract an overall factor of a^4(η_o-χ) from h and then expand the square root of the quotient in powers of χ. The result is A[σ(η_o)] = ∫_0^π∫_0^2π a^2(η_o-χ) [sinθ/R(u,v;η_o)^3/2 χ^2 + Q(u,v;η_o) sinθ/R(u,v;η_o)^5/2 χ^3 + O(χ^4)] dv du, where R(u,v;η_o) = ∑_i=1^3 e^-2b_i(η_o) D^i(u,v)^2 and Q(u,v;η_o) = ∑_i=1^3 b_i^'(η_o) e^-2b_i(η_o) D^i(u,v)^2. Keeping only the lowest-order term, the variation δ A/δχ = 0 gives the maximal path χ(u,v;η_o) = χ(η_o) as the solution of χ = a(η_o-χ)/a^'(η_o-χ). So, to lowest order, A[σ(η_o)] is given by A[σ(η_o)] = χ^2 a^2(η_o-χ) ∫_0^π∫_0^2π sinθ/R(u,v;η_o)^3/2 dv du = 4π (a^2(η_o-χ)/a^'(η_o-χ))^2. But the requirement that S_gen→ S_max means that A[σ(η_o)]/4G must tend to the constant value S_max, or in other words, lim_η_o →η_∞, χ→ 0 a^2(η_o-χ)/a^'(η_o-χ) = √(GS_max/π) ≡ 1/H. Therefore, a(η) asymptotically approaches de Sitter, a(η) → -1/Hη as η→ 0^-, with H^2 = π/(GS_max). Note that we recover the same Hubble constant as in Theorem <ref> for RW spacetimes in 1+3 dimensions. Finally, as in the case of 1+2 dimensions, the condition (a(η)e^b_i(η))^' > 0 is enough to show that lim_η→ 0^- b_i(η) exists for each i. § DISCUSSION Assuming the Generalized Second Law, we have shown that if a Bianchi I spacetime admits a past Q-screen along which generalized entropy increases up to a finite maximum, then this implies that the spacetime is asymptotically de Sitter. We recover a version of Wald's cosmic no-hair theorem by making thermodynamic arguments about spacetime, without appealing to Einstein's equations. While the proof of these cosmic no-hair theorems is most tractable (and certainly easiest to visualize) in 1+2 dimensions, the generalization to 1+3 dimensions was fairly immediate. In principle, the proof strategy for arbitrary dimensions is the same, albeit more difficult from the perspective of calculation. This is chiefly because calculating area elements of codimension-2 surfaces in arbitrary dimensions is cumbersome. Nevertheless, it is natural to expect that analogous cosmic no-hair theorems hold for Bianchi I spacetimes of arbitrary dimensions. Within the proof itself, it would be interesting to see if the monotonicity assumptions, a^'_i(η) > 0, could be eliminated. The fact that the Generalized Second Law asserts that S_gen increases monotonically along a Q-screen does offer some leverage. In particular, asymptotically this implies that the average scale factor a(η) = (∏_i=1^d a_i(η))^1/d increases monotonically; however, we learn nothing about the anisotropies b_i(η), since the leading-order behaviour of S_gen does not depend on the b_i(η) in the asymptotic future regime. We also note that the monotonicity assumptions do not trivialize the cosmic no-hair theorems demonstrated in sec:Bianchi. For example, assuming monotonicity does not rule out exponential expansion with different rates in different spatial directions, nor asymptotically power-law scale factors, nor does it even imply accelerated expansion at all. An interesting extension would be to try to prove a no-hair theorem for classical cosmological perturbations <cit.>, or for quantum fields in curved spacetime. Given a scalar field on a curved spacetime background, the task is to show that the combined metric and scalar field perturbations approach the Bunch-Davies state <cit.> at late times.
In principle it would suffice to show that the background spacetime still tends to de Sitter in the future in this case, since one could then simply invoke known no-hair results about scalar fields in curved backgrounds <cit.>. Conceptually, such a calculation would be interesting because one can explicitly write down the quantum state of cosmological perturbations, and so a full treatment of the matter entropy as von Neumann entropy (modulo ultraviolet divergences) is possible.To prove our theorem, it was not strictly necessary to assume that the gravitational contribution to the entropy was precisely proportional to the surface area. We could imagine choosing some other function of the area, such thatS_gen[σ,Σ] = f(A[σ]/G) + S_CG[σ,Σ].For example, returning to the RW case, if one sets f(A/G) = C(A/G)^p for some constants C and p, exactly the same analysis as in the proof of Theorem <ref> leads to the conclusion that (cf. eq:RWasymp)ȧ(t) →√(4π/G(C/S_max)^1/p) a(t)in the limit as t →∞. In other words, one still concludes that the scale factor is asymptotically de Sitter, albeit with a Hubble constant that differs from the usual case of f(A/G) = A/4G.Finally, while we did not make use of the Einstein field equations in our derivation, upon reinvoking them, we note that the cosmic no-hair theorems established here imply a pure dark energy phase asymptotically in the future (in the sense that the stress energy tensor becomes proportional to the metric, g_μν). However, the GSL is not sensitive to the nature of the dark energy (whether it is a pure cosmological constant, whether it turns on, whether it's due to a slowing scalar field, and so on).This work can be thought of as part of the more general program of connecting gravitation to entropy, thermodynamics, and entanglement <cit.>. As in attempts to derive Einstein gravity from entropic considerations, we deduce the behavior of the geometry of spacetime from thermodynamics, without explicit field equations. Our result is less general, as we only obtain the asymptotic behavior of the universe, but is perhaps also more robust, as our assumptions are correspondingly minimal. Thinking of spacetime as emerging thermodynamically from a set of underlying degrees of freedom can change our perspective on the knotty problems of quantum gravity; for example, as emphasized by Banks <cit.>, the cosmological constant problem becomes the question of “Why does Hilbert space have a certain number of dimensions?" rather than “Why is this parameter in the low-energy effective Lagrangian so small?" Problems certainly remain (including why the entropy was so low near the Big Bang), but this alternative way of thinking about gravitation may prove useful going forward. § ACKNOWLEDGMENTS We would like to thank Cliff Cheung, John Preskill, and Alan Weinstein for helpful discussions. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award No. DE-SC0011632, as well as by the Walter Burke Institute for Theoretical Physics at Caltech and the Gordon and Betty Moore Foundation through Grant No. 776 to the Caltech Moore Center for Theoretical Cosmology and Physics. § Q-SCREENS, A WORKED EXAMPLEIn this appendix, we illustrate Q-screens by explicitly constructing one in a RW spacetime that is asymptotically de Sitter. Consider a RW spacetime in 1+3 dimensions with the line element ds^2 = -dt^2 + a^2(t)(dχ^2 + χ^2 dΩ_2^2) and where the scale factor is a(t) = sinh t, t ∈ (0,∞). 
Conformal time is given by η(t) = -2arccoth(e^t), η∈ (-∞,0), and the scale factor in conformal time is a(η) = 1/sinh(-η). Foliate the spacetime with past-directed light cones centered at the coordinate origin χ = 0, and let the Cauchy hypersurfaces of the spacetime be the constant-η hypersurfaces. Let us now construct a Q-screen by extremizing the generalized entropy on each light cone. Consider a past-directed light cone whose tip is at the conformal time η_o. A constant-η < η_o slice of this light cone is a 2-sphere of coordinate radius η_o - η, and so the generalized entropy computed with respect to this slice is S_gen(η;η_o) = (π/G)((η_o - η)/sinh(-η))^2 + (4/3)π (η_o - η)^3 s. A plot of S_gen(η;η_o) as a function of η for several values of η_o is shown in fig:SgenEx. The area term A(η;η_o)/4G alone is also overlaid on the plot, which illustrates that it is the dominant contribution to the generalized entropy at late times. Notice that in addition to having a local maximum, S_gen(η;η_o) also has a local minimum, and below a certain critical value η_o^crit there is in fact no nonzero value of η which locally extremizes S_gen(η;η_o). As such, the Q-screen, which is defined as the union of the slices with maximal generalized entropy, is only defined for η_o ≥η_o^crit. This is in contrast to the area A(η; η_o), which has a locally maximizing value of η for all η_o. The holographic screen, which is made up of extremal area slices, is therefore defined for all times. Both the Q-screen and the holographic screen were schematically illustrated previously in fig:QscreenEx. Generalized entropy is extremal when ∂ S_gen/∂η = 0. Excluding η = 0 and η→ -∞, the extremizing values of η are the real-valued solutions of η_o - η = sinh(-η)/(cosh(-η) - 2Gs sinh^3(-η)) when they exist. Let η^Q(η_o) denote the maximizing value, and hence also define the Q-screen leaf radius χ^Q(η_o) ≡η_o - η^Q(η_o). A plot of χ^Q(η_o) is shown in fig:chimax. As expected, χ^Q(η_o) vanishes as η_o → 0^-. For comparison, we also plot the holographic screen radius χ^H(η_o) ≡η_o - η^H(η_o), where η^H(η_o) maximizes the area of the light cone slice, i.e., it is the solution of η_o - η = tanh(-η). In particular, note that χ^Q(η_o) is always slightly larger than χ^H(η_o), but they ultimately coincide in the limit η_o → 0^- (cf. fig:QscreenEx). As a final exercise, let us investigate the asymptotic dependence of χ^H(η_o) on η_o (which is also the asymptotic dependence of χ^Q(η_o), since the two coincide as η_o → 0^-) to illustrate some of the subtleties involved in performing asymptotic expansions. Consider eq:maxAcond and let η = η_o - χ so that we have χ = tanh(χ-η_o). Since, asymptotically, χ→ 0, one may be tempted to expand this last equation for small values of χ: χ = tanh(-η_o) + (1 - tanh^2(-η_o))χ + O(χ^2) ⇒χ^H(η_o) ≟ 1/tanh(-η_o). Notice, however, that since 0 < tanh(-η_o) < 1, this expression for χ^H(η_o) cannot be infinitesimally small; the expansion is inconsistent! Rather, χ and η_o are simultaneously infinitesimal. Consider instead the double Taylor series in χ and η_o: χ = χ - η_o - (1/3)χ^3 + η_o χ^2 - η_o^2 χ + (1/3)η_o^3 + ⋯ ⇒χ^H(η_o) = (-3η_o)^1/3 + η_o + ⋯. This last result is the correct asymptotic behaviour of χ^H(η_o). Similarly, writing A = 4πχ^2 a^2(η_o - χ), one arrives at the wrong expressions for extremal values if one tries to expand A in small values of χ, η_o, or even both at the same time. The key is to keep a(η_o - χ) intact so that one arrives at eq:maxAcond.
Doing so leaves just enough nonlinearity to be able to restore the correct asymptotic behaviour of χ^H(η_o). This technique is exploited in sec:asympdS. § HOLOGRAPHIC SCREEN CONTINUITY AND MAXIMAL AREA LIGHT CONE SLICES When the null curvature condition holds, the Raychaudhuri equation guarantees that light rays focus, or in other words, that the expansion of a null congruence is always nonincreasing: dθ/dλ≤ 0. In particular, this means that if a null congruence has a spacelike slice whose area is maximal with respect to local deformations, then this is in fact the unique globally maximal area slice. A consequence of this observation is that if one's aim is to construct a holographic screen by stitching together maximal area slices of each null sheet in a null foliation, then the holographic screen is uniquely fixed by the choice of foliation. Here, we connect the uniqueness of locally maximal area slices to continuity properties of holographic screens in RW spacetimes. What we will first show is that, given a foliation of a RW spacetime by past-directed light cones, there is at most one continuous holographic screen that can be constructed with respect to this foliation that has maximal area leaves on every light cone. We will then show that a consequence of this observation is that if a spacetime admits a continuous holographic screen with maximal area leaves on every light cone, then each light cone necessarily has a globally maximal finite area slice. Let ℳ be a RW spacetime with the line element ds^2 = a^2(η)(-dη^2 + dχ^2 + χ^2 dΩ_d-1^2), where the conformal time η takes values in an unbounded (connected) interval ℐ⊆ℝ. Consider a foliation of ℳ by past-directed light cones whose tips are at χ = 0. If there is a past-directed light cone that has multiple spacelike slices that have maximal area with respect to local deformations, then ℳ admits at most one holographic screen, H, constructed with respect to the given foliation that is both (a) continuous, and (b) has maximal area leaves on every past-directed light cone. Consider a past-directed light cone whose tip is at η_o. For η < η_o, the area of the constant-η slice of this light cone is given by A(η,η_o) = 𝒩_d [(η_o - η) a(η)]^d-1, where 𝒩_d is a dimension-dependent constant. Because ℳ is spherically symmetric, such a slice has extremal area if ∂ A/∂η = 0, or equivalently, if η_o = η + a(η)/a^'(η) ≡ f(η). Therefore, constant-η slices of the past-directed light cone whose tip is at η_o for which f(η) = η_o and η < η_o are potential holographic screen leaves. Now suppose that there is a light cone whose tip is at η_o that has n locally maximal area slices at η = η_1, η_2, ..., η_n where, for convenience, these conformal times are ordered such that η_1 > η_2 > … > η_n. This means that a graph of f(η) must intersect the horizontal line at η_o at least 2n-1 times. (Between any two adjacent local maxima η_i and η_i+1, there must be a local minimum of area at some η^min_i ∈ (η_i,η_i+1), and there may also be inflection points.) Consider any two adjacent local maxima η_i and η_i+1. Schematically, in the vicinity of these points, the graph of f(η) must look like one of the two configurations shown in fig:hscreenf (a,b), since f is continuous if a/a^' is continuous. Now consider shifting the horizontal line at η_o up and down. This corresponds to shifting the tip of the light cone to the future and past of η_o. Where the horizontal line intersects the graph of f(η) tracks how the location of the local maxima and the local minimum of A move.
In particular, notice that by moving the horizontal line sufficiently far to the future or the past, one of the local maxima and the local minimum must eventually meet and become an inflection point before disappearing altogether. (Note that we may always move the horizontal line sufficiently far in at least one of the past or future directions, since the interval ℐ in which the conformal time takes its values is unbounded in at least one direction.) Therefore, if we track how the locations of the maxima at η_i and η_i+1 change as we move the location of the light cone's tip, we see that one of these local maxima must eventually disappear, as illustrated in fig:hscreenf (d).Inductively, then, there exists at most one continuous function, call it η_max(η_o), whose domain is all η_o ∈ℐ and is such that η = η_max(η_o) is a local maximum of A(η,η_o) for all η_o. The union of the constant-η_max(η_o) slices of all past-directed light cones is precisely the holographic screen H described in the statement of the proposition.  Examples of various f(η) are sketched below. fig:fex (a) depicts a case in which there exists a continuous holographic screen with leaves on every light cone. fig:fex (b) depicts a case in which there is no such holographic screen. In fact, from this example, one can see that if ℐ = ℝ, then there can never be a continuous holographic screen with leaves on every light cone if there is a light cone that has multiple maximal area slices. Referring to the proof above, the technical reason is that in this case, the horizontal line of constant η_o can be pushed arbitrarily far up and down since η_o can take all values in ℝ, and so any pair of adjacent maxima and minima will eventually merge (as a function of η_o).Finally, there is a partial converse of the result above: If ℳ as described in Proposition <ref> admits a continuous holographic screen, H, with maximal area leaves on every past-directed light cone, then each light cone has a spatial slice which is a global maximum of the area of all spatial slices of the light cone, and the area of this slice is finite. Consider first the case where there is a unique local maximum on each past-directed light cone and H is the union of these maximal area surfaces. Again denote the value of η that maximizes A(η,η_o) for a given η_o by η_max(η_o). The only way that η_max(η_o) could not be a global maximum of area is if there was some η_M < η_max(η_o) such that A(η_M,η_o) > A(η_max(η_o),η_o). However, for this to be possible, there must be a local minimum of A in between η_M and η_max(η_o). In other words, the function f(η) must intersect the horizontal constant-η_o line once for the local maximum, once for the local minimum, and then possibly an additional even number of times for pairs of inflection points—there cannot be more intersections if η_max(η_o) is the unique local maximum of area. This means that the graph of f(η) must be concave up or concave down (fig:fconcave), in which case there will be some horizontal η_o lines that do not intersect the graph of f(η), which contradicts the requirement that H have leaves on every light cone. Therefore, η = η_max(η_o) is in fact a global maximum of A(η,η_o).Then, according to Proposition <ref>, the other case is where some light cones have multiple local maxima of A(η,η_o), in which case ℐ is only semi-infinite. This can only happen for light cones whose tips are near the finite endpoint of the interval ℐ. 
Beyond some threshold value of η in the direction in which ℐ is unbounded, f(η) must still be monotonic in order for there to be leaves on every light cone. Therefore, for η_o beyond the threshold, the first case applies, and when there are multiple local maxima of area on a given light cone for η_o between the threshold and the finite endpoint of ℐ, at least one of them is a global maximum of A(η,η_o).
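As a closing illustration tying the two appendices together: for the worked-example scale factor a(η) = 1/sinh(-η), the function f(η) = η + a(η)/a^'(η) = η + tanh(-η) is monotonic, so every past-directed light cone carries a unique maximal area slice, and the Q-screen leaf can be found by directly maximizing S_gen(η;η_o). The sketch below (ours; G, s, and the search bracket are illustrative choices, with s taken small enough that the interior maximum dominates the bracket) recovers χ^Q(η_o) slightly exceeding χ^H(η_o), with both radii shrinking to zero as η_o → 0^-:

```python
import numpy as np
from scipy.optimize import minimize_scalar

G, s = 1.0, 0.005                       # illustrative values only
a = lambda eta: 1.0 / np.sinh(-eta)     # the worked-example scale factor, eta < 0

def S_gen(eta, eta_o):                  # generalized entropy of a constant-eta slice
    chi = eta_o - eta
    return np.pi/G * (chi * a(eta))**2 + 4.0*np.pi/3.0 * chi**3 * s

def area_term(eta, eta_o):              # the bare area term A/4G alone
    return np.pi/G * ((eta_o - eta) * a(eta))**2

for eta_o in [-0.5, -0.1, -0.01]:
    lo, hi = eta_o - 1.5, eta_o - 1e-9  # bracket containing the interior maximum
    eta_q = minimize_scalar(lambda e: -S_gen(e, eta_o),
                            bounds=(lo, hi), method='bounded').x
    eta_h = minimize_scalar(lambda e: -area_term(e, eta_o),
                            bounds=(lo, hi), method='bounded').x
    print(f"eta_o={eta_o:6.2f}  chi_Q={eta_o - eta_q:.5f}  chi_H={eta_o - eta_h:.5f}")
# chi_Q > chi_H by a small margin (the chi^3 entropy term pushes the leaf outward),
# and both tend to zero as eta_o -> 0^-, consistent with fig:chimax.
```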
{ "authors": [ "Sean M. Carroll", "Aidan Chatwin-Davies" ], "categories": [ "hep-th", "astro-ph.CO", "gr-qc" ], "primary_category": "hep-th", "published": "20170327180137", "title": "Cosmic Equilibration: A Holographic No-Hair Theorem from the Generalized Second Law" }
Kemi Ding (Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong; e-mail: {kdingaa,eesling}@ust.hk), Yuzhe Li (Department of Electrical and Computer Engineering, University of Alberta, Canada; e-mail: pku.ltracy@gmail.com), Subhrakanti Dey (Department of Engineering Science, Uppsala University, Sweden; e-mail: subhrakanti.dey@angstrom.uu.se), and Ling Shi (Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong). This paper considers remote state estimation in a cyber-physical system (CPS) using multiple sensors. The measurements of each sensor are transmitted to a remote estimator over a shared channel, where simultaneous transmissions from other sensors are regarded as interference signals. In such a competitive environment, each sensor needs to choose its transmission power for sending data packets, taking into account the behavior of the other sensors. To model this interactive decision-making process among the sensors, we introduce a multi-player non-cooperative game framework. To overcome the inefficiency arising from the Nash equilibrium (NE) solution, we propose a correlation policy, along with the notion of correlated equilibrium (CE). An analytical comparison of the game value between the NE and the CE is provided, with/without power expenditure constraints for each sensor. Numerical simulations also demonstrate the comparison results. § INTRODUCTION Cyber-physical systems (CPSs), which combine traditional control systems with information and communication technologies, can provide great improvements in system performance, including robustness to unexpected disturbances and efficient utilization of resources, see <cit.>. As the next generation of control systems, CPSs have attracted increasing interest in different realms, such as the smart grid, intelligent transportation, ubiquitous health care, and so on. The incorporation of communication networks, although providing stability and efficiency for physical systems, unfortunately raises a number of technical challenges in control system design. For example, when we consider remote state estimation using multiple sensors, if the communication bandwidth is limited and cannot allow all sensors to transmit data, then simultaneous data transmission will lead to signal interference, which in turn leads to packet drops and hence deteriorates the estimation performance. There are several representative methods for interference management in communication theory, such as code division multiple access (CDMA), see <cit.>. However, efficient approaches to cope with the multi-access issue in remote state estimation are lacking. Another factor to consider is the limited sensor energy budget. As most sensor nodes use on-board batteries, which are difficult to replace or recharge, the energy for sensing, computation and transmission is restricted. Motivated by this, a considerable amount of literature has been published on sensor transmission scheduling to achieve accurate estimation under limited energy constraints, e.g., <cit.>. However, many of these works focus on the single-sensor case and model the sensor scheduling as a Markov decision process (MDP). The problem becomes difficult when taking multiple sensors into account.
In this work, we provide a quantitative analysis of transmission competition over a shared channel for remote estimation, under abundant energy and limited energy, respectively. In communication theory, the traditional way to solve the competition problem is to model it as a non-cooperative game (see <cit.>). Precisely, the sensors are treated as selfish players aiming to maximize their utilities, such as their own throughput or certain signal-to-noise-ratio thresholds, and the Nash equilibrium (NE) concept provides the optimal strategy for each player. Different from these preliminary works, our work focuses on dynamic systems and considers the state estimation performance. Since the sensors have different time-varying objective functions, a more thorough analysis of the NE solution is required, as demonstrated in <cit.>. Unfortunately, the NE obtained in <cit.> leads to an inefficient outcome, the so-called "tragedy of the commons". To overcome these limitations, we introduce the notion of correlated equilibrium (CE), along with a correlation mechanism, and analyze its impact on the state estimation performance. As proposed by <cit.>, the CE is a generalization of the NE concept that captures the strategic correlation opportunities the players face. More importantly, it allows an increase in all players' profits simultaneously. The definition of CE includes an arbitrator who can send (private or public) signals to the players. Remarkably, this arbitrator requires no intelligence or any knowledge of the system, which is different from centralized management (where everyone obeys rules provided by the mediator). Therefore, the generated signal does not depend on the system state; it can come, for example, from surrounding weather conditions or the thermal noise of communication channels (see <cit.>). Unlike in a cooperative game, each player is self-enforced to comply with the outcome suggested by the mediator, rather than being restricted by a contract. In conclusion, the CE concept not only provides a tractable solution for this competition problem, but may also bring more benefits to each player than the NE. By employing the CE concept, we investigate the optimal transmission strategies for the sensors in a large-scale CPS with shared public resources. Compared with the previous work by <cit.>, not only is the performance difference between the CE and the NE studied, but the respective power constraint for each sensor is also considered. The main contributions of our current work are summarized as follows: * We provide a general game-theoretic framework for remote state estimation in a multi-access system, where the sensors compete for access to the same channel for packet transmission. * In the absence of power restrictions, we analyze the existence and uniqueness of the NE for this game. That is, at the NE, each sensor transmits with its maximum energy level. Moreover, the CE is proved to be equivalent to the NE. * With energy limitations, we formulate the problem as a constrained game and provide the closed-form NE. Moreover, after proposing an easy-to-implement correlation mechanism, we obtain the explicit representation of the CE. By comparison, the CE is preferable to the NE for this game. The remainder of the paper is organized as follows. Mathematical models of the system are described in Section <ref>. In Section <ref>, we introduce the multi-player non-cooperative game and give the definition of the CE. Section <ref> demonstrates the main theoretical results with/without power constraints.
The correlation policy is also introduced in Section <ref>, and the simulation results are shown in Section <ref>. Some concluding remarks are given at the end. Notations: ℤ is the set of non-negative integers. ℕ is the set of positive integers. k∈ℤ is the time index. ℝ^n is the n-dimensional Euclidean space. 𝕊_+^n is the set of n by n positive semi-definite matrices. When X ∈𝕊_+^n, it is written as X ≥ 0. X ≥ Y if X - Y ∈𝕊_+^n. 𝔼[·] is the expectation of a random variable and Tr(·) is the trace of a matrix. For functions f, f_1, f_2: 𝕊_+^n→𝕊_+^n, f_1∘ f_2 is defined as f_1∘ f_2(X) ≜ f_1(f_2(X)). 1(·) is the indicator function. Δ(·) represents a set of probability measures, and "w.p." means with probability. § PROBLEM SETUP As depicted in Fig. <ref>, the state information of different processes is sent to the remote estimator through one shared channel; the essential components of the overall system structure are introduced in this section. §.§ Local Kalman Filter Consider the following networked system containing one remote estimator and N sensors, which separately monitor different linear systems: x_i(k+1) = A_i x_i(k) + w_i(k), y_i(k) = C_i x_i(k) + v_i(k), i∈{1,…,N}, where at time k, the state vector of the system measured by sensor i is x_i(k) ∈ℝ^n_x, and the obtained noisy measurement is y_i(k) ∈ℝ^m_y. For each process i∈{1,…,N}, the process noise w_i(k) ∈ℝ^n_x and the measurement noise v_i(k) ∈ℝ^m_y are zero-mean i.i.d. Gaussian random variables with 𝔼[w_i(k)w_i(j)'] = δ_kj Q_i (Q_i ≥ 0), 𝔼[v_i(k)v_i(j)'] = δ_kj R_i (R_i > 0), and 𝔼[w_i(k)v_i(j)'] = 0, ∀ j,k. The initial state x_i(0) is a zero-mean Gaussian random vector with covariance Σ_i(0) ≥ 0, and it is uncorrelated with w_i(k) and v_i(k). The time-invariant pair (A_i, C_i) is assumed to be detectable and (A_i,√(Q_i)) is stabilizable. Here, we adopt "smart" sensors to improve the estimation/control performance of the current system. As illustrated in <cit.>, the so-called "smart" sensors, equipped with memory and embedded operators, are capable of processing the collected data. In Fig. <ref>, by running a Kalman filter locally, sensor i can compute the optimal estimate of the corresponding state x_i(k) based on the collected measurements {y_i(1),…,y_i(k)}. The obtained minimum mean-squared error (MMSE) estimate of the process state is given by x̂^s_i(k) = 𝔼[x_i(k)|y_i(1),…,y_i(k)]. The corresponding estimation error covariance is denoted as: P^s_i(k) ≜𝔼[(x_i(k) - x̂^s_i(k))(x_i(k) - x̂^s_i(k))'|y_i(1),…,y_i(k)]. These terms are computed recursively following the standard Kalman filter equations: x̂^s_i(k|k-1) = A_i x̂^s_i(k-1), P^s_i(k|k-1) = A_i P^s_i(k-1) A_i' + Q_i, K_i(k) = P^s_i(k|k-1) C_i'[C_i P^s_i(k|k-1) C_i' + R_i]^-1, x̂^s_i(k) = A_i x̂^s_i(k-1) + K_i(k)(y_i(k) - C_i A_i x̂^s_i(k-1)), P^s_i(k) = (I - K_i(k) C_i) P^s_i(k|k-1). The iteration starts from x̂^s_i(0) = 0 and P^s_i(0) = Σ_i(0). For notational simplicity, we define the Lyapunov and Riccati operators h_i(·) and g̃_i(·): 𝕊_+^n→𝕊_+^n as h_i(X) ≜ A_i X A_i' + Q_i, g̃_i(X) ≜ X - X C_i'[C_i X C_i' + R_i]^-1 C_i X. Since the time-invariant pair (A_i, C_i) is detectable and (A_i,√(Q_i)) is stabilizable, the estimation error covariance P^s_i(k) converges exponentially to the unique fixed point P_i of g̃_i∘ h_i according to <cit.>. For brevity, we ignore the transient periods and assume that the Kalman filter at the sensor has entered steady state; i.e., P^s_i(k) = P_i, k ≥ 1. As mentioned in <cit.>, the steady-state error covariance P_i has the following property.
As mentioned in <cit.>, the steady-state error covariance P_i has the following property. For 0≤ t_1 < t_2, the following inequality holds: Tr[P_i] ≤ Tr[h^t_1_i(P_i)] < Tr[h^t_2_i(P_i)]. §.§ Communication Model As demonstrated in Fig. <ref>, sensor i transmits the local estimate x̂^s_i(k) as a packet to the remote estimator through a single channel, which may be occupied by other sensors. Hence, this information delivery interferes directly with the transmissions of the other sensors that use the same channel; consequently, the estimation of the state x_i(k) is affected. In this multi-access system, we assume that the shared channel has independent additive white Gaussian noise (AWGN). By modeling the signals of the other sensors as interfering noise, the channel quality—measured by the signal-to-noise ratio (SNR) in point-to-point communication—is instead captured by the signal-to-interference-and-noise ratio (SINR); see <cit.>. For sensor i∈{1,…,N}, its SINR is defined as: γ_i(k) = L h_i a_i(k)/(∑_j≠ ih_j a_j(k) + σ^2) = L q_i(k)/(q̂_-i(k)+σ^2), in which a_i(k)≥ 0 and a_j(k)≥ 0 are the transmission powers used by sensor i and sensor j, respectively. For simplicity, we define q_i(k)≜ h_i a_i(k) and q̂_-i(k) ≜∑^N_j=1,j≠ i q_j(k). The extra term ∑_j≠ ih_j a_j(k) in the denominator of (<ref>) is due to the interference from the other sensors, and σ^2 is the channel noise. The parameter h_i ∈ (0,1), ∀ i ∈{1,…,N}, is the channel gain from sensor i to the remote estimator, and L is the spreading gain of the communication system. These channel parameters are assumed to be time-invariant[Time variation has little effect on the subsequent results.]. Moreover, they can be acquired by the sensors, as the sensors can access the channel state information (CSI) using pilot-aided channel estimation techniques. To characterize the packet dropout for sensor i, we introduce the notion of the symbol error rate (SER) and adopt a general function to represent the relationship between the SER and the SINR: SER_i≜ f(γ_i), ∀ i∈{1,…,N}, where f(·) depends on the channel characteristics and the modulation scheme. Note that since interference is significant in the multi-access system, the packet transmission typically operates at low SINRs. As investigated by <cit.>, in this regime the error probability function f(·) is strictly concave and decreasing in γ_i. Here, we consider an erasure channel in which a packet is dropped if it contains any error (in general, symbol errors can be detected by the channel coding method). Therefore, the simultaneous transmissions of this system are characterized by independent Bernoulli processes, denoted by η_i(k). Let η_i(k)=0 denote the loss of packet x̂^s_i(k), and η_i(k)=1 otherwise. Hence, we have Pr(η_i(k)=1) = 1- f(γ_i), ∀ i∈{1,…,N}. Note that the arrival of packet x̂^s_i(k) not only depends on the transmission power of sensor i, but is also affected by the behaviors of the other sensors. §.§ Remote State Estimation Let x̂_i(k) denote the MMSE estimate of the process x_i(k) generated by the remote estimator, with error covariance matrix P_i(k). Similar to <cit.>, the estimation process is as follows: if x̂^s_i(k) arrives successfully, the estimator synchronizes its respective estimate x̂_i(k) with it; otherwise, the estimator simply predicts the estimate based on its previous estimate and the system dynamics. In short, the estimate x̂_i(k) is given by x̂_i(k) =η_i(k) x̂^s_i(k) + (1-η_i(k)) A_ix̂_i(k-1).
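This update (together with the covariance recursion stated next) is straightforward to implement. The following minimal Python sketch performs one step of the remote estimator for a single sensor; the system matrices and the arrival pattern are placeholders chosen purely for illustration.

```python
import numpy as np

def remote_update(x_hat_prev, P_prev, eta, x_hat_s, P_bar, A, Q):
    """One step of the remote estimator under a Bernoulli arrival eta.

    eta == 1: the local-estimate packet arrived, so synchronize with it;
    eta == 0: the packet was lost, so time-update the previous estimate,
    which propagates the covariance through the Lyapunov operator h_i(.).
    """
    if eta == 1:
        return x_hat_s, P_bar
    return A @ x_hat_prev, A @ P_prev @ A.T + Q

# illustrative one-dimensional example
A, Q = np.array([[1.1]]), np.array([[0.8]])
P_bar = np.array([[0.9]])                  # steady-state covariance P_i
x_hat, P = np.array([0.0]), P_bar
for eta in [1, 0, 0, 1]:                   # a sample arrival pattern
    x_hat, P = remote_update(x_hat, P, eta, np.array([0.5]), P_bar, A, Q)
print(P)                                    # equals P_bar right after an arrival
```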
Similarly, the error covariance P_i(k) obeys the simple recursion P_i(k)≜𝔼[(x_i(k)-x̂_i(k)) (x_i(k)-x̂_i(k))'] = η_i(k) P_i + (1-η_i(k)) h_i(P_i(k-1)), where P_i stands for the steady-state error covariance defined in (<ref>). For each sensor, we define a random variable τ_i(k) ∈ℤ, the holding time: τ_i(k) ≜ k - max_0≤ l ≤ k{l: η_i(l) = 1 }, which represents the interval between the present time k and the most recent time at which a data packet x̂^s_i(·) arrived successfully. Without loss of generality, for all i∈{1,…,N}, we assume that the initial packets x̂^s_i(0) are received by the estimator, and hence τ_i(0) = 0. Note that the holding time and the estimation error covariance at the remote estimator are related by P_i(k) = h_i^τ_i(k)(P_i). Furthermore, the holding time evolves as τ_i(k+1) =(1-η_i(k))( τ_i(k)+1). §.§ Problem of Interest In our work, every sensor competes for the public communication resource to obtain accurate estimation performance, which can be formulated as a game with multiple self-interested players. Alternatively, it can be modeled as a constrained game when power limitations are taken into account. Traditionally, the best response of each player is characterized by the NE. In contrast, in this work we consider the notion of CE and investigate whether the coordination (CE) mechanism, compared to the NE, can bring extra benefits to every player simultaneously. § MULTI-SENSOR TRANSMISSION GAME In this section, we model the interactive decision-making process of the sensors as a multi-player game and introduce the concept of an equilibrium solution. §.§ Game theoretic framework The multi-player game, denoted by 𝒢, is characterized by a triplet <ℐ,𝒜,𝒰> where §.§.§ Players: ℐ = {1,…,N} is the set of players, in which i∈ℐ represents sensor i. As a necessary condition for the equilibrium analysis, we assume that all sensors are rational; that is, each sensor makes the best decision to maximize its benefit among all available actions. Also, the rationality assumption is common knowledge shared among the players. §.§.§ Actions: 𝒜 = {𝒜_i,i∈ℐ} denotes the sets of actions of the players. For simplicity, we consider transmission action sets with discrete energy levels, i.e., 𝒜_i = {e^(1)_i,…, e^(m_i)_i} for player i. Let a_i(k) ∈𝒜_i denote the transmission action (or pure strategy) taken by player i at time k. The mixed strategy of each player, denoted by s_i(k) ∈Δ(𝒜_i), i∈ℐ, is a probability distribution over the pure action space 𝒜_i. That is, player i, following strategy profile s_i(k), takes the transmission power e^(l)_i, l∈{1,…,m_i}, w.p. s_i(a_i(k)=e^(l)_i)[Here, we abuse the notation s_i(a_i(k)) to represent one element of the vector s_i(k).] at time k. Moreover, define a(k) = {a_1(k), a_2(k),…,a_N(k)} as the joint action played by all players. Alternatively, a(k) = {a_i(k),a_-i(k)}, in which a_-i(k) represents the joint action excluding player i. Similarly, the joint strategy profiles are represented by s(k) = {s_1(k), s_2(k),…, s_N(k)} = {s_i(k),s_-i(k)}. §.§.§ Utility: 𝒰 = {u_i,i∈ℐ} is the utility set, and u_i represents the utility function of player i with u_i: 𝒜→ℝ. As discussed previously, each sensor focuses on improving its respective estimation accuracy, measured by the estimation error covariance.
Hence, based on (<ref>), the utility function of player i is characterized by[In the rest of this paper, we will omit the time argument k of τ_i(k), γ_i(k), a_i(k), s_i(k) and q_i(k) when the underlying time index k is obvious from the context; otherwise, it will be indicated.] u_i(a)≜ -Tr{𝔼[P_i(k)]}= -Tr{(1-f(γ_i))P_i+f(γ_i)h_i^τ_i+1(P_i)}= f(γ_i)c_i - Tr{P_i}, in which c_i ≜Tr{P_i-h_i^τ_i+1(P_i)} is independent of γ_i, and c_i<0 follows from Lemma <ref>. Next, we define the expected utility function of player i, in a slight abuse of the notation u_i(·). Under the joint strategy profile s, the benefit obtained by player i is u_i(s)≜∑_a∈𝒜Pr(a|s) u_i(a), in which Pr(a|s) is the probability of the joint action a under strategy s. §.§ Equilibrium and Coordination In our game, each player i selfishly maximizes its utility at time k, i.e., For any player i∈ℐ, max_s∈Δ(𝒜)u_i(s) s.t. ∑_a_i ∈𝒜_i s_i(a_i) =1. Game theory (see <cit.>) provides a way to cope with these coupled optimization problems. One common solution concept is defined as follows: In this multi-player one-stage game with finite action space, the strategy profile s^⋆,NE ={s_1^⋆,NE,…, s_N^⋆,NE} is a Nash equilibrium if no player can benefit from changing its strategy while the others keep their equilibrium strategies unchanged; i.e., for any player i∈ℐ, u_i(s=s^⋆,NE)≥ u_i([s_i=s,s_-i=s^⋆,NE_-i]), ∀ s ∈Δ(𝒜_i). The corresponding optimal utility value of each player is denoted by u^⋆,NE_i. Since the players do not cooperate, the NE assumes that they choose their actions independently, i.e., Pr(a|s) = ∏_i∈ℐ s_i(a_i) in (<ref>). However, it is possible to extend the sets of strategies available to the players by allowing them to correlate their choices. Motivated by this, a concept more general than the NE, called the correlated equilibrium (CE), has been proposed. In a CE, the players can receive recommendations on what to play from an omniscient mediator. To be specific, at time k, the imagined mediator samples an N-tuple joint action a = {a_1,…,a_N} as a mode of play w.p. s(a), and the recommended action for player i is a_i. Player i may accept the recommendation, or it may use a meta-strategy, described by a transition t(a_i): 𝒜_i →𝒜_i, when it is suggested to play a_i. At a CE, no such meta-strategy improves a player's expected utility when the others are assumed to play according to the recommendation. Hence, we have the following definition. For the game 𝒢, a strategy profile s^⋆,CE is a correlated equilibrium if and only if ∑_a^-∈𝒜_-iPr(a_-i=a^-|a_i=a,s^⋆,CE) {u_i( [a_i=a, a_-i=a^-])-u_i( [t(a), a_-i=a^-])}≥ 0, for all players, all a ∈𝒜_i s.t. Pr(a_i=a|s^⋆,CE)>0, and all transitions t(a) ∈𝒜_i. We denote by u^⋆,CE_i the corresponding optimal utility value of each player. The CE concept addresses some drawbacks of the original NE concept. One of the advantages of the CE is that it can be computed in polynomial time (via linear programming), whereas the corresponding complexity of NE computation (finding its fixed point completely) is known to be NP-hard; see <cit.>. More importantly, at a CE, multiple self-interested players may achieve higher rewards by coordinating their actions than they could at an NE. The comparison between the NE and the CE is analyzed rigorously in the following section.
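Since the CE conditions above are linear in the joint distribution, they can be checked (or optimized over) mechanically, which is the source of the polynomial-time remark. The sketch below verifies the CE inequalities for a given joint distribution over action profiles in a small finite game; the toy utilities passed in are placeholders, not the estimation utilities of this paper.

```python
def is_correlated_eq(actions, utils, s, tol=1e-9):
    """Check the CE inequalities for a joint distribution s over profiles.

    actions: list of per-player action lists; utils[i](profile) -> utility
    of player i; s: dict mapping action profiles (tuples) to probabilities.
    """
    n = len(actions)
    for i in range(n):
        for a_i in actions[i]:
            for t in actions[i]:           # candidate deviation t(a_i) = t
                gain = 0.0
                for prof, p in s.items():
                    if p == 0.0 or prof[i] != a_i:
                        continue
                    dev = prof[:i] + (t,) + prof[i + 1:]
                    gain += p * (utils[i](prof) - utils[i](dev))
                if gain < -tol:            # a profitable deviation exists
                    return False
    return True

# toy 2-player example: both players prefer their high power (action 1)
acts = [[0, 1], [0, 1]]
utils = [lambda a: a[0] - 0.1 * a[1], lambda a: a[1] - 0.1 * a[0]]
print(is_correlated_eq(acts, utils, {(1, 1): 1.0}))   # True
```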
§ MAIN RESULTS In this section, we discuss the equilibrium solutions of this game in two different cases: with and without energy constraints. The specific representations and the comparison between the NE and the CE are also provided. §.§ Without Energy Constraints First, we consider the existence and uniqueness of the NE for Prob. <ref>, and provide the complete NE solution in the following theorem. The multi-player non-cooperative game 𝒢 admits a unique NE, at which every player transmits at its maximum power level; i.e., s^⋆,NE_i(a_i) ={[1, if a_i=e^(m_i)_i;;0, others. ]. ∀ i∈ℐ. First, we consider the existence of a pure strategy NE, denoted by {a^⋆_i,a^⋆_-i}. By definition, we have a^⋆_i = arg max_a_i ∈𝒜_i u_i([a_i,a^⋆_-i]) ⇔ a^⋆_i = arg min_a_i f(γ_i(a_i,a^⋆_-i)) = arg min_a_i f( L h_i a_i/(q̂^⋆_-i+σ^2)) ⇔ a^⋆_i = e^(m_i)_i, where the last equivalence holds because f(·) is decreasing in γ_i and γ_i increases with a_i. Hence, we obtain the pure strategy NE shown in (<ref>). Next, we prove the uniqueness of this NE. If there existed another NE, denoted by s̃^⋆ with s̃^⋆_i(e^(m_i)_i)<1 for some i, then u_i(s̃^⋆) =∑_a_-i∈𝒜_-i∑_a_i∈𝒜_i [∏_j≠ is̃^⋆_j(a_j)] s̃^⋆_i(a_i) u_i([a_i,a_-i])< ∑_a_-i∈𝒜_-i∑_a_i∈𝒜_i[∏_j≠ is̃^⋆_j(a_j)] s̃^⋆_i(a_i) u_i([e^(m_i)_i,a_-i]) = ∑_a_-i∈𝒜_-i [∏_j≠ is̃^⋆_j(a_j)] u_i([e^(m_i)_i,a_-i]) = u_i([s^⋆_i,s̃^⋆_-i]). This shows that player i would strictly prefer s^⋆_i if the others kept their strategies unchanged, which contradicts the NE concept. Hence, s^⋆,NE is unique. ▪ Recall that a CE is a joint distribution over actions from which no agent is motivated to deviate unilaterally. The following theorem demonstrates the uniqueness of the CE solution and captures the relationship between the CE and the NE for the game 𝒢. The multi-player game 𝒢 has a unique CE, denoted by s^⋆,CE, and it is equivalent to the NE. From the definition of the CE and conditional probability, we have ∑_a_-is^⋆,CE(a) [u_i(a)-u_i([t(a_i),a_-i])] ≥ 0, ∀ a_i, t(a_i)∈𝒜_i, in which s^⋆,CE(a) is the probability of the joint action a under the CE strategy profile, and t(·) is a mapping from 𝒜_i to 𝒜_i. Next, we interpret the computation of s^⋆,CE based on (<ref>). For player 1, if a_1 = e^(1)_1, then at a CE the following inequality holds for all t(a_1)∈𝒜_1: ∑_a_-1s^⋆,CE([e^(1)_1,a_-1]) [u_1([e^(1)_1,a_-1])-u_1([t(e^(1)_1),a_-1])] ≥ 0. Based on property (1) in the Appendix, we have u_1([e^(1)_1,a_-1])< u_1([t(e^(1)_1),a_-1]), ∀ a_-1∈𝒜_-1 and t(e^(1)_1)≠ e^(1)_1. Therefore, s^⋆,CE([e^(1)_1,a_-1]) = 0, ∀ a_-1∈𝒜_-1. Analogously, we obtain s^⋆,CE([e^(1)_i,a_-i]) = 0, ∀ a_-i∈𝒜_-i, ∀ i ∈{1,⋯,N}. That is, every player chooses its smallest power level e^(1)_i w.p. 0, no matter what strategies the others take. Here, we construct a game 𝒢' with the action space excluding the lowest power levels, i.e., 𝒜'_i = {e^(2)_i,⋯,e^(m_i)_i}, ∀ i∈ℐ. Obviously, the CE problem of the original game 𝒢 is converted into that of the game 𝒢'. Let s'^⋆,CE denote the CE of 𝒢'. Similar to the preceding analysis, we obtain s'^⋆,CE([e^(2)_i, a_-i]) = 0, ∀ a_-i∈𝒜'_-i and ∀ i∈ℐ. Consequently, the CE of the original game 𝒢 can be computed by this recursive analysis: s^⋆,CE(a) = {[1, if a={e^(m_1)_1,…,e^(m_N)_N};;0, others. ]. By comparing (<ref>) and (<ref>), the equivalence between the CE and the NE is proved. ▪
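The all-maximum-power structure of these equilibria is easy to confirm numerically: with any strictly decreasing drop-probability curve, raising one's own power is always a strictly better response. A minimal brute-force check over a small hypothetical game follows; the curve f and all parameter values are illustrative stand-ins.

```python
import itertools

def f(gamma):                      # illustrative strictly decreasing drop curve
    return 1.0 / (1.0 + gamma)

def utility(i, prof, h, L, sigma2, c):
    interf = sum(h[j] * prof[j] for j in range(len(prof)) if j != i)
    gamma_i = L * h[i] * prof[i] / (interf + sigma2)
    return c[i] * f(gamma_i)       # c_i < 0; the constant -Tr P_i is dropped

levels = [0.0, 0.5, 1.0]           # hypothetical discrete power levels
h, L, sigma2, c = [1.0, 0.8, 0.9], 2.0, 0.1, [-1.0, -1.0, -1.0]

for prof in itertools.product(levels, repeat=3):
    if all(utility(i, prof, h, L, sigma2, c) >=
           max(utility(i, prof[:i] + (a,) + prof[i+1:], h, L, sigma2, c)
               for a in levels)
           for i in range(3)):
        print("pure NE:", prof)    # only the all-max profile is printed
```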
§.§ With Energy Constraint As discussed previously, without constraints the optimal response of each player is to transmit the estimation packet constantly at its maximum power level. Nevertheless, such an inefficient situation is hardly sustainable, especially in a practical CPS, where the energy available for sensor transmission is restricted. Here, we place a constraint on the expected power consumption of each sensor and formulate the constrained game as follows: For each player i∈ℐ, max_s∈Δ(𝒜)u_i(s) s.t. ∑_a∈𝒜s(a) =1, 𝔼[a_i|s_i] = ∑_a_i∈𝒜_i s_i(a_i)a_i ≤ e^max_i. Moreover, a strict power constraint is considered: e^(1)_i<e^max_i < e^(m_i)_i. For the NE solution, we have the following result. The constrained game (i.e., Prob. <ref>) has a unique mixed strategy NE, whose specific representation is given in (<ref>). Let s^⋆,NE = {s^⋆,NE_1,…,s^⋆,NE_N} denote the optimal strategy profile. For player i, given the optimal strategies s^⋆,NE_-i of the others, we have u_i(s^⋆,NE_i,s^⋆,NE_-i)= max_s_i∈Δ(𝒜_i)u_i(s_i,s^⋆,NE_-i) = max_s_i∑^m_i_l=1 s_i(a_i=e^(l)_i) u^(l)_i, in which u^(l)_i≜ u_i(a_i=e^(l)_i,s^⋆,NE_-i)= ∑_a^-∈𝒜_-i∏^N_j=1,j≠ i s^⋆,NE_j(a_j) u_i(a_i = e^(l)_i,a^-) < ∑_a^-∈𝒜_-i∏^N_j=1,j≠ i s^⋆,NE_j(a_j) u_i(a_i = e^(l')_i,a^-) = u^(l')_i, for any l' ∈{l+1, …, m_i}. Note that the maximum utility is achieved when the energy constraint in (<ref>) holds with equality. Next, we rewrite the energy constraint in (<ref>) by replacing s_i(e^(1)_i) with 1- ∑^m_i_l=2 s_i(e^(l)_i). We obtain ∑^m_i_l=2 d_l s_i(e^(l)_i) =1, in which d_l ≜ (e^(l)_i-e^(1)_i)/(e^max_i-e^(1)_i)>0, ∀ l ∈{2, …, m_i}, is a constant, and max_s_iu_i(s_i,s^⋆,NE_-i) =max_s_i u^(1)_i+∑^m_i_l=2 [(u^(l)_i-u^(1)_i)/d_l] · [d_l s_i(e^(l)_i)]. According to property (2) in the Appendix, we have 0<(u^(l)_i-u^(1)_i)/d_l< (u^(l')_i-u^(1)_i)/d_l' for any l' ∈{l+1, …, m_i}. Hence, the optimal solution is obtained when d_m_i s_i(e^(m_i)_i) =1, and the NE of this constrained game is: for all players, s_i^⋆,NE(a_i) ={[ (d_m_i)^-1=(e^max_i-e^(1)_i)/(e^(m_i)_i-e^(1)_i), if a_i= e^(m_i)_i,; 1-(d_m_i)^-1=(e^(m_i)_i-e^max_i)/(e^(m_i)_i-e^(1)_i), if a_i = e^(1)_i,;0, others. ]. The uniqueness of the NE is guaranteed by the uniqueness of the optimal solution of (<ref>). ▪ Note that at the NE, each player transmits its data packets at either the highest or the lowest energy level, regardless of the middle levels. Motivated by this, we propose the following mechanism. We define the set of correlation policies as follows: * Assume that at each time k, all sensors can observe a signal in the form of a random variable X(k), uniformly distributed over the integers {1,…,N}. * A correlated strategy of sensor i is described by two numbers: α_i∈[0,1] and β_i∈[0,1]. * At time k, if X(k)=i, then sensor i is chosen and transmits its packet at the highest power e^(m_i)_i w.p. α_i and at the lowest power e^(1)_i w.p. (1-α_i). Otherwise, it transmits at the highest power e^(m_i)_i w.p. β_i and at the lowest power w.p. (1-β_i).
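This correlation mechanism is simple to implement, since the public signal carries no information about the system state. The sketch below samples one slot of correlated transmissions under the policy above; the parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def correlated_powers(alpha, beta, e_low, e_high):
    """Sample one slot of transmit powers under the correlation policy.

    A public signal X, uniform over the N sensors, designates one sensor;
    the designated sensor transmits high w.p. alpha[i], the rest w.p. beta[i].
    """
    N = len(alpha)
    X = int(rng.integers(N))              # the arbitrator's public signal
    return X, [e_high[i] if rng.random() < (alpha[i] if i == X else beta[i])
               else e_low[i] for i in range(N)]

N = 3
alpha, beta = [0.75] * N, [0.25] * N      # hypothetical policy parameters
X, powers = correlated_powers(alpha, beta, [0.0] * N, [1.0] * N)
print(X, powers)
# expected power of sensor i is (alpha[i] + (N-1)*beta[i]) / N * e_high[i]
```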
Next, we describe the computation of the CE under this correlation policy. To simplify the calculation and obtain a closed-form formula for the CE, we assume that d_m_i=d_m_j, ∀ i≠ j, for this constrained game. Recall that the CE concept, compared with the ordinary NE, can simultaneously improve the benefit of each player in this competitive environment. The following theorem captures this and provides a complete representation of the CE. The constrained game with d_m_i=d, ∀ i∈ℐ, admits a CE under the correlation policy proposed in Def. <ref>. The corresponding parameter is given by (<ref>). Moreover, the CE outcome of this game is superior to the NE outcome for each player i∈ℐ. First, we discuss the expected utility of player i under this policy. By definition, if X(k)=i, then s_i(a_i)= 1_{e^(m_i)_i}(a_i)α_i+1_{e^(1)_i}(a_i)(1-α_i), and for j≠ i, s_j(a_j)= 1_{e^(m_j)_j}(a_j)β_j+1_{e^(1)_j}(a_j)(1-β_j). The expected utility of sensor i is u_i(s)= α_i u_i([e^(m_i)_i,s_l,s_-i-l]) + (1-α_i) u_i([e^(1)_i,s_l,s_-i-l]), in which l≠ i, and the definition of u_i([a_i,s_l,s_-i-l]) is similar to (<ref>). Analogously, if X(k)=l, the expected utility of sensor i is u_i(s')= β_i u_i([e^(m_i)_i,s'_l,s_-i-l]) + (1-β_i) u_i([e^(1)_i,s'_l,s_-i-l]), in which s'_l(a_l)= 1_{e^(m_l)_l}(a_l)α_l+1_{e^(1)_l}(a_l)(1-α_l). Next, we consider the computation of the CE. Solving the constrained optimization problem, and hence finding the CE, rapidly becomes intractable as the number of players N increases. Here, we restrict attention to the same strategy (α^⋆,β^⋆) being adopted by all players, and investigate whether a single player can profit from deviating to a different strategy (α,β). If X(k)=i and player i adopts a meta-strategy characterized by α, then it obtains the benefit U_α =∑_a_-i-lPr(a_-i-l|α^⋆,β^⋆) [αβ^⋆ u^m,m_i +α(1-β^⋆)u^m,1_i+ (1-α)β^⋆u^1,m_i + (1-α)(1-β^⋆) u^1,1_i], in which, for example, u^m,m_i≜ u_i([e^(m_i)_i,e^(m_l)_l,a_-i-l]) and u^1,1_i≜ u_i([e^(1)_i,e^(1)_l,a_-i-l]). If X(k) = l and β is used by player i, then its utility is U_β =∑_a_-i-lPr(a_-i-l|α^⋆,β^⋆)[βα^⋆ u^m,m_i+ β(1-α^⋆)u^m,1_i +(1-β)α^⋆u^1,m_i + (1-β)(1-α^⋆)u^1,1_i]. Then, the expected utility of player i when adopting (α,β) is u_i(α,β)=[U_α+(N-1)U_β]/N = ∑_a_-i-lPr(a_-i-l|α^⋆,β^⋆) (N-1)/N (α^⋆-β^⋆)ûβ +c', in which û= u^m,m_i-u^m,1_i-u^1,m_i+u^1,1_i<0 by property (3) in the Appendix, and c' is a constant. The last equality is derived by replacing α with the expression N d^-1-(N-1)β (obtained from the energy constraint). Hence, u_i(α,β) is an affine function of β, and the optimal β is β^⋆ = {[ max(0, (Nd^-1-1)/(N-1)), if α^⋆>β^⋆;; min(1, Nd^-1/(N-1)), if α^⋆<β^⋆. ]. Last, we compare the CE outcome with that of the NE. The NE is the special case of the CE in which the signal is ignored, that is, α^⋆ = β^⋆ = s, where s is the NE transmission probability. From the previous discussion, we know that the maximizing β^⋆ of u_i(α,β) differs from s. Hence, the payoff of the CE is strictly greater than the payoff of the NE. ▪ In the absence of energy limitations, the best response of each sensor is to transmit at the maximum energy level, no matter which equilibrium concept it adopts. This result is unsurprising, since the players' utilities do not penalize power expenditure. However, when power restrictions are taken into account, the correlation mechanism displays its advantages over the NE. Intuitively, at the NE every player takes full advantage of its transmission power, which inevitably causes heavy communication conflicts. However, the mediator in the CE can coordinate the players' behaviors to alleviate these conflicts and achieve better performance. § SIMULATION AND EXAMPLES In this section, we compare the NE outcome and the CE outcome of the game-theoretic model using an example. Here, we consider a multi-agent system with three sensors and one remote estimator. The monitored dynamic processes have the respective system parameters listed in Tab. <ref>. In addition, the error covariances of the Gaussian noises are Q_i = R_i =0.8, ∀ i ∈{1,2,3}. Some parameters of the communication channel are given in Tab. <ref>. Moreover, e^(1)_i=0 (note that each sensor may choose to stay inactive), h_i =1, ∀ i ∈{1,2,3}, and L = 2. For the SER, we adopt the formula f(γ) = 1- 2 𝚀(√(4γ^-1-1)), where 𝚀(x) ≜1/√(2π)∫^∞_x exp(-y^2/2) dy.
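A Monte Carlo comparison of this kind can be reproduced in a few lines. The following sketch simulates a scalar stand-in for one sensor's error covariance under the NE and CE transmission profiles stated next; the drop curve and plant parameters are simplified placeholders rather than the exact values of Tabs. <ref> and <ref>.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(gamma):                     # simplified stand-in for the SER curve above
    return 1.0 / (1.0 + gamma)

def avg_cov(policy, T=20000, L=2.0, sigma2=0.5, a=1.2, q=0.8, Pbar=1.0):
    """Time-averaged P_0(k) for sensor 0 under a joint transmission policy.

    A scalar plant x(k+1) = a x(k) + w gives h(P) = a^2 P + q, and the
    packet arrives w.p. 1 - f(gamma_0), cf. the communication model.
    """
    P, total = Pbar, 0.0
    for _ in range(T):
        powers = policy()
        gamma0 = L * powers[0] / (sum(powers[1:]) + sigma2)
        P = Pbar if rng.random() > f(gamma0) else a * a * P + q
        total += P
    return total / T

ne = lambda: [1.0 if rng.random() < 0.5 else 0.0 for _ in range(3)]
def ce():                         # correlated profile: the chosen sensor is bolder
    X = rng.integers(3)
    return [1.0 if rng.random() < (0.75 if i == X else 0.25) else 0.0
            for i in range(3)]

print("NE avg P:", avg_cov(ne), " CE avg P:", avg_cov(ce))
```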
By Theorems <ref> and <ref>, we can obtain the following two strategy profiles for the sensors to transmit data packets: * s^NE: s_i(0)=s_i(e^(m_i)_i)=0.5, ∀ i ∈{1,2,3}. * s^CE: s_i(e^(m_i)_i) = {[0.75, if sensor i is chosen;;0.25, others. ]. and s_i(0)= 1-s_i(e^(m_i)_i). Via 100,000 Monte Carlo simulations, the comparison between s^NE and s^CE in terms of the state estimation error covariance of sensors 1 and 2 is depicted in Fig. <ref>. Furthermore, we present the performance difference for sensor 3 in Fig. <ref>. All comparison results confirm the analytical performance characterized in Theorem <ref> and highlight the superiority of the proposed coordination mechanism. Last, but not least, comparing Fig. <ref> and Fig. <ref>, the performance difference between the NE and the CE decreases as the energy constraint becomes stronger. § CONCLUSION We have investigated the remote estimation problem for a multi-sensor system under a game-theoretic framework. Motivated by the concept of Nash equilibrium in the previous work, we analyzed the performance advantage brought by the correlation policy. In the absence of power constraints, the correlated equilibrium outcome is equal to the NE outcome. However, the correlation policy improves the estimation performance in the presence of power constraints. § PROPERTIES OF THE UTILITY FUNCTION Consider the utility function u_i(a_i,a_-i), ∀ i ∈ℐ, defined in (<ref>). We can obtain the following properties: * ∂ u_i(a_i,a_-i)/∂ a_i>0 if a_i>0. * If a_2>a_1>a_0>0, then w_a_0(a_i=a_2)>w_a_0(a_i=a_1), in which w_a_0(a_i) ≜ [u_i(a_i,a_-i)-u_i(a_i=a_0,a_-i)]/(a_i-a_0) and a_-i is given. * If a_2>a_1>0 and a_4>a_3>0, then u_i(a_2,a_4,a_-i-l) -u_i(a_2,a_3,·)-u_i(a_1,a_4,·) +u_i(a_1,a_3,·)<0, where a_-i-l is given. * The first statement follows from c_i<0 and ∂ u_i(a_i,a_-i)/∂ a_i = c_i f'(γ_i)γ_i/a_i. * Since f''(γ_i)< 0, we have ∂^2 u_i(a_i,a_-i)/∂ a^2_i = c_i f''(γ_i)(γ_i/a_i)^2>0. Hence, u_i(a_i,a_-i) is an increasing and strictly convex function of a_i. Consequently, if a_2>a_1>a_0>0, then w_a_0(a_2)>w_a_0(a_1). * Consider the second partial derivative of the function u_i(·): ∂^2 u_i(a_i,a_l,a_-i-l)/∂ a_i ∂ a_l = -c_i h_lγ^2_i/(h_i L a^2_i) [f'(γ_i)+γ_i f''(γ_i)] <0. Hence, we have [u_i(a_2,a_4,·)-u_i(a_1,a_4,·)]/(a_2-a_1) < [u_i(a_2,a_3,·)-u_i(a_1,a_3,·)]/(a_2-a_1), and statement (3) follows. ▪
{ "authors": [ "Kemi Ding", "Yuzhe Li", "Subhrakanti Dey", "Ling Shi" ], "categories": [ "stat.ME", "cs.IT", "math.IT" ], "primary_category": "stat.ME", "published": "20170327145209", "title": "Multi-sensor Transmission Management for Remote State Estimation under Coordination" }
Cross section and transverse single-spin asymmetry of muons from open heavy-flavor decays in polarized p+p collisions at √(s)=200 GeV L. Zou December 30, 2023 ======================================================================================================================================
However, through a transient error, a node might choose too large an identifier, causing the system to run out of identifiers billions of years too soon–perhaps within a few seconds. A self-stabilizing algorithm using a finite number of identifiers would be quite useful, but we know of no such algorithm." Our goal in this paper is to identify a class of programs for which we can begin with a stabilizing program that relies on unbounded counters and transform it into a program with bounded counters and (theoretically unbounded but practically bounded) physical time. Contributions of the paper. * We introduce the notion of free and dependent counters and utilize them to develop an algorithm that transforms a stabilizing program with unbounded counters into a stabilizing program with bounded counters. * We demonstrate that our approach can be combined with that of Katz and Perry <cit.>. Specifically, <cit.> provides a mechanism to transform a non-stabilizing program into a stabilizing program with unbounded counters. We show that the generated stabilizing program can then be transformed into a stabilizing program that uses bounded counters and a (practically bounded) physical clock. * We demonstrate our algorithm in the context of several classical problems, such as consensus, logical/vector clocks, mutual exclusion, diffusing computation, etc. * We show that even with trivially satisfiable parameters for a practical system (e.g., clock drift of less than 100 seconds, and messages that are either delivered or lost within an hour), the size of the counters in our programs is small. We present an algorithm for transforming a stabilizing program with unbounded counters into a program with bounded counters. The main idea behind this algorithm is the notion of free counters and dependent counters.
Intuitively, free counters are those that are independent of other counters and can increase at will. Dependent counters, on the other hand, cannot be changed at will; however, they have a limited life, after which they no longer affect the operation of the program. We observe that, in fact, several programs can be viewed as consisting of a set of bounded variables, unbounded free counters, and unbounded dependent counters. Examples of such programs are logical clocks, diffusing computations, Paxos-based consensus algorithms, leader election algorithms, mutual exclusion algorithms, and so on. We show that for these and for several other programs, we can transform a given solution that uses unbounded counters into a solution that uses only bounded counters and physical time that is kept reasonably synchronized with NTP. Organization of the paper. The rest of the paper is organized as follows: We define distributed programs and relate their execution with time in Section <ref>. We define the notion of free and dependent counters in Section <ref> and illustrate them with the example of Lamport's logical clocks in Section <ref>. We present our algorithm in Section <ref> and discuss some applications of it in Section <ref> (and some in Appendix <ref>, <ref> and <ref>). Section <ref> also demonstrates that our approach can be combined with the approach in <cit.> by Katz and Perry, so that an existing program can be transformed into a stabilizing program that uses bounded variables and physical time. Section <ref> discusses related work and questions raised by our work. In Section <ref>, we present concluding remarks and future work. In Appendix <ref> and <ref>, we present the step-by-step illustration of our algorithm and its proof of correctness. In Appendix <ref>, we summarize the notations used in this paper. § PRELIMINARIES §.§ Modeling Distributed Programs A distributed program consists of a set of processes. Each process has a set of actions, and the program executes in an interleaving manner where an action of some process is executed in every step. Execution of the program is captured with a set of program variables, each of which is associated with a domain. Variables of the program can be classified as simple and complex variables. Simple variables include bounded and unbounded variables (counters).
Complex (composite or abstract) variables include containers, lists, queues, etc. The total number of variables in the program remains fixed throughout the execution. However, when an action of a process is executed, new entries may be added to or removed from one or more complex variables. With this intuition, we now formally define a program in terms of its variables and actions. Definitions <ref> to <ref> are from standard literature such as <cit.>. (Program). A program p is of the form ⟨ V_p,A_p ⟩, where V_p is a set of variables, and A_p is a set of actions that are of the form guard → statement, where guard is a condition involving the variables in V_p and the statement updates a subset of the variables in V_p. (State). A state s of program p is obtained by assigning each variable in V_p a value from its domain. (Enabled). An action of the form guard → statement is enabled in state s iff guard evaluates to true in state s. (Computation). A computation is a sequence of states s_0, s_1, s_2,⋯, where a state s_l+1, l ≥ 0, is obtained by executing some enabled action in state s_l. For the sake of simplicity, we assume that at least one action is enabled in every state; if no action is enabled in a state, we pretend that the program has an action corresponding to a self-loop at that state. (Computation Prefix). A finite sequence of states s_0, s_1, s_2, ⋯, s_n is a computation prefix of program p iff it is a prefix of some computation of p. In the programs considered in this paper, we partition the variables in V_p into simple variables and complex variables. Simple variables are bounded variables with a finite domain, or unbounded variables whose domain is N, the set of natural numbers. A complex variable is a dynamic “collection" of simple variables, where a collection can be a set, queue, array, etc., of flexible size. We can view such a program by ignoring the structure provided by the complex variables. In other words, if there is a complex variable C ∈ V_p which is a collection of simple variables, say C={c_1,c_2,c_3,⋯}, then we can ignore (the structure provided by) C by adding its elements, i.e., the simple variables c_1,c_2,c_3,⋯ in C, to V_p directly. However, such an inclusion makes V_p a dynamic set, whose size is not fixed anymore. Given the mapping of complex variables to simple variables, we can view V_p as a collection of only simple variables. So, for simplicity, in our further discussion we view the variables of the program in this manner. Finally, we recall the definition of stabilization from <cit.>: (Stabilization). Program p is stabilizing to S, where S is a set of states, iff * Starting from an arbitrary state, every computation of p reaches S, and * Starting from a state in S, no computation of p reaches a state outside S.
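To make these definitions concrete, a program in this model can be executed by repeatedly picking one enabled action nondeterministically. The toy Python sketch below does exactly that for an invented two-variable program whose legitimate states S are those where x tracks y; starting from an arbitrary (possibly fault-corrupted) state, every computation reaches S, illustrating the stabilization definition.

```python
import random

def step(state, actions):
    """Execute one step: nondeterministically run one enabled action."""
    enabled = [stmt for guard, stmt in actions if guard(state)]
    if not enabled:                    # implicit self-loop when none is enabled
        return state
    return random.choice(enabled)(dict(state))

# actions as (guard, statement) pairs over a state dictionary
actions = [
    (lambda s: s["x"] != s["y"], lambda s: {**s, "x": s["y"]}),      # correct x
    (lambda s: s["x"] == s["y"], lambda s: {**s, "y": s["y"] + 1}),  # progress
]

s = {"x": 7, "y": 3}                   # an arbitrary initial (faulty) state
for _ in range(20):
    s = step(s, actions)
print(s, abs(s["y"] - s["x"]) <= 1)    # converges to states where x tracks y
```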
§.§ Relating Program Computation and Time As discussed in the Introduction, our goal is to combine the existence of (reasonably synchronized) global time, achieved through services such as NTP, with reasonable timing properties in the given algorithm. Since the definitions in Section <ref> are time-independent, in this section we identify the role of time and the relation between program steps and time in our algorithm. Our algorithm relies on an NTP-like algorithm to provide a physical clock for each process that is close to an abstract global clock (this global clock is not available to the processes themselves). We partition the abstract global time, say t, into regions of size α. Thus, the (global) region is identified by t/α. Likewise, each process j is also associated with a physical time, say t_j. This time is also mapped to a region of process j. Thus, the region of process j is t_j/α. Note that due to clock drift, the global region and the region associated with process j may not be identical. Likewise, the region associated with process j may not be the same as that associated with process k. We choose α such that (1) the region identified by a process from its own local physical clock differs from the region identified by the global clock by at most 1, and (2) the regions identified by two processes from their local clocks differ by at most 1. Given current technology, choosing α to be a few milliseconds would be reasonable for many existing systems to satisfy this assumption. In our analysis, we assume α to be 100 seconds; note that achieving clock synchronization to within 100 seconds is trivial in any practical system. For a given computation, we identify the program subsequence that occurred in a given (abstract) global region. Although the processes themselves are not aware of this (abstract) global time, this association allows us to model assumptions such as (1) any message is delivered within time δ or it is lost, (2) any request for mutual exclusion is satisfied within time δ, etc. We can model such assumptions in terms of regions; if we utilize regions that are 100 seconds long and we are guaranteed that messages are either received or lost within one hour (3600 seconds), then a message has a lifetime of at most 36 regions. § FREE COUNTERS AND DEPENDENT COUNTERS In this section, we define the notion of free and dependent counters that form the basis of our transformation algorithm. However, before we do that, we focus on the structure of the variables in the program. In particular, for program p, we partition its variables V_p into two types: simple variables and complex variables. Simple variables are those variables whose domain is either a finite set or N, the set of natural numbers. And, complex variables are collections (e.g., set, sequence, list, etc.) of simple variables, whose constituent variables can be added/removed dynamically. To define the notion of free and dependent counters, we will unravel the structure of a complex variable and focus only on the simple variables contained in it. For example, if the program contains a complex variable, say C, which is a set and whose current value is {3, 5, 7}, then we visualize this as having three simple variables c_1, c_2 and c_3 whose values are 3, 5 and 7, respectively. With this intuition, we can view a program p with variables V_p as an equivalent program with variables SV_p, where SV_p is a dynamically changing collection of simple variables. Moreover, the domain of any variable in SV_p is either finite or equal to N, the set of natural numbers. A reader might wonder why we do not define program p in terms of SV_p in the first place.
As mentioned above, SV_p is a dynamic set that has a flexible size. To update a dynamic set of variables, one would require a dynamic or infinite set of actions. Without explicit precautions, such a model has the potential to describe programs that are not recursively enumerable. Our modeling with complex variables in V_p avoids this problem, as the set of actions is always finite. The set SV_p is dynamic. We say that a variable in SV_p is a permanent variable if it is guaranteed to be present in every state of p. For example, any simple variable in V_p would be a permanent variable, since it will be present in SV_p at all times. A variable that is not a permanent variable is called a temporary variable. (Valuation of a variable in V_p). Let x be a variable in V_p and let s be a state of program p. Then x(s) denotes the value of x in state s. We overload this definition for SV_p. Specifically, if variable x is present in SV_p in the given state, the value of that variable is defined in the same manner as in the above definition. And, if the variable x is not present in that state (entries in complex variables in V_p, or their equivalent simple variables in SV_p, may be added/removed), we denote its value as ⊥. In other words, (Valuation of a variable in SV_p). Let x be a variable in SV_p and let s be a state of program p. If x is present in state s, then x(s) denotes the value of x in state s. And, if x is not present in s, then we denote it as x(s) = ⊥. With the help of SV_p and permanent/temporary variables, we define the notion of free and dependent counters. Intuitively, a free counter is a permanent variable whose value never decreases. Moreover, if we increase the value of the free counter in the final state of a computation prefix, then the resulting sequence is also a valid computation prefix of the given program. Formally, (Free counter). A permanent variable fc of program p is a free counter iff for any computation prefix ρ = s_0, s_1, s_2,⋯,s_l of p the following conditions hold: (i) ∀ w: 0 ≤ w < l : fc(s_w+1) ≥ fc(s_w), (ii) ρ' = ρ + s_l+1 is also a valid computation prefix, where state s_l+1 is reached from state s_l by increasing the value of fc (and leaving other variables unchanged), and ρ + s_l+1 denotes the concatenation of ρ and s_l+1. Thus, if fc(s_l) is the value of the free counter fc of program p in state s_l, then fc(s_l+1) (i.e., the value of fc in the subsequent state s_l+1) is never less than its value in the previous state s_l. Also, if ρ = s_0, s_1, s_2,⋯,s_l is a valid computation prefix of p, appending a state s_l+1 with fc(s_l+1) = fc(s_l) + d (where d ≥ 0) to ρ results in another valid computation prefix ρ' = ρ + s_l+1. Next, we define the notion of dependent counters. A dependent counter is a temporary variable. We require that when this variable is created/added, its value is set to the value of some free counter from at most κ preceding steps. Moreover, after λ steps, this temporary variable is removed. And, in between, its value remains unchanged. Note that this requirement is not restrictive, because essentially the requirement is just that the value assigned to the dependent counter is somehow related to a free counter in the recent past. For example, if variable dc is set to fc-5, where fc is a free counter, then we can treat it as having two variables dc1 and dc2, where setting dc to fc-5 is modeled as setting dc1 to be the same as fc and dc2 to -5, and using dc1+dc2 instead of dc.
Note that the latter is a bounded variable, whereas the former can be used to satisfy the requirements of dependent counters. Likewise, setting dc to 2*fc or fc^2 + 10 would be acceptable as well. Since there are too many such choices, to keep the transformation algorithm simple we use the above definition; however, in practice it may require some syntactic tweaking of a given program, without affecting its properties. The goal of this requirement is that the value of the counter will eventually become obsolete and hence will no longer affect the program execution. We discuss this further in Section <ref>, where this requirement is handled by syntactic changes to a given program. ((Step based) Dependent counter). A temporary variable dc of program p is a (κ,λ)-(step-based) dependent counter iff for any computation ρ = s_0, s_1, s_2,⋯ of p the following conditions hold: ∀ a : a ≥ 0: * dc(s_a) = ⊥ ∧ dc(s_a+1) ≠ ⊥ ⇒ ∃ w : a-κ ≤ w ≤ a +1 : dc(s_a+1) = fc(s_w), where fc is a free counter in p * dc(s_a) ≠ ⊥ ⇒ ∀ w : w > a+λ : dc(s_w) = ⊥ * dc(s_a) ≠ ⊥ ∧ dc(s_a+1) ≠ ⊥ ⇒ dc(s_a) = dc(s_a+1) Note that when the values of κ and λ are clear from the context, we omit them and speak simply of a dependent counter. Also, the above definition characterizes dependent counters in terms of the number of program steps. We extend this definition next (cf. Definition <ref>) by considering the execution time. Recall that one of the assumptions in Section <ref> was intended to translate the steps of a program into the corresponding time. Based on this assumption, we next define the notion of (region-based) dependent counters, where the value of the dependent counter is based on the values of free counters in preceding regions. In particular, we translate κ and λ in Definition <ref> into corresponding region values. We treat a counter as a (κ_r, λ_r)-dependent counter if (1) when the dependent counter is set to a value different from ⊥, it is set to the value of some free counter from at most κ_r (global) regions in the past, and (2) after the value of the dependent counter is set to a value different from ⊥, within λ_r (global) regions it is set back to ⊥. Hence, we define region-based dependent counters as follows: ((Region based) Dependent counter). A temporary variable dc of program p is a (κ_r,λ_r)-dependent counter iff for any computation ρ = s_0, s_1, s_2,⋯ of program p the following conditions hold: ∀ a : a ≥ 0: * dc(s_a)= ⊥ ∧ dc(s_a+1) ≠ ⊥ ∧ {s_a+1 is in (global) region r} ⇒ ∃ w : w ≤ a +1 : dc(s_a+1) = fc(s_w), where fc is a free counter in p and s_w is in a (global) region in [r-κ_r..r] * dc(s_a) ≠ ⊥, where s_a is in (global) region r ⇒ ∀ w : region of s_w is greater than r+λ_r : dc(s_w) = ⊥ * dc(s_a) ≠ ⊥ ∧ dc(s_a+1) ≠ ⊥ ⇒ dc(s_a) = dc(s_a+1) Observe that the above definition overloads the definition of a step-based dependent counter. Specifically, we use the term (κ,λ)-(step-based) dependent counter while viewing the counter in terms of the number of steps, and we use (κ_r,λ_r) while viewing it in terms of regions.
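The lifecycle implied by these definitions — created with a recent free-counter value, frozen while present, and reverting to ⊥ after its lifetime — can be captured in a few lines. The Python sketch below models a (κ, λ)-step-based dependent counter; it is an illustrative reading of the definition, not code from the paper.

```python
BOTTOM = None          # stands for bottom: the variable is not present

class DependentCounter:
    """A (kappa, lam)-step-based dependent counter, per the definition above."""

    def __init__(self, kappa, lam):
        self.kappa, self.lam = kappa, lam
        self.value, self.created_at = BOTTOM, None

    def create(self, fc_history, now):
        # fc_history[t] is the free counter's value at step t; condition 1
        # allows copying any of its values from the last kappa steps
        window = fc_history[max(0, now - self.kappa): now + 1]
        self.value, self.created_at = window[-1], now

    def tick(self, now):
        # condition 2: after lam steps the counter reverts to bottom;
        # condition 3: the value is never modified while present
        if self.value is not BOTTOM and now - self.created_at >= self.lam:
            self.value, self.created_at = BOTTOM, None

dc = DependentCounter(kappa=2, lam=5)
fc_history = list(range(0, 40, 3))              # a nondecreasing free counter
dc.create(fc_history, now=4); print(dc.value)   # 12, a recent fc value
dc.tick(now=9); print(dc.value)                 # back to BOTTOM (None)
```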
However, any operation on the data item (e.g., if first item in the list is equal to 0) would be affected by our transformation algorithm. In this case, before the equality operation is performed, we apply the transformation based on the properties (defined in the subsequent discussion) of that list item. Since the global region may differ from region of a process, our algorithm will account for possible discrepancies between them. We do not use regions associated with processes in the above definition since dependent counters may be accessed by multiple processes. For example, cl.m, clock of message m in Lamport's clock, is accessed by both the sender and the receiver.(Dependent counter).A temporary variable (or an entry of a complex variable) dc of program p with limited lifetime say v is a dependent counter iff it is of the form : dc(s_l) = f(fc(s_l))orf(dc_1(s_l))Here the function f stands for a linear function. Briefly, the value of the dependent counter dc of a program p at a state s_l depends on the value of some free counter fc or some other dependent counter dc_1 of the program p. plain § ILLUSTRATING FREE AND DEPENDENT COUNTERS In this section, we illustrate our definitions of free and dependent counters with the help of Lamport's logical clocks <cit.>.In this program, the processes in the system communicate through messages.At any point in time each process j has a logical clock value cl.j associated with it, and cl.j increases whenever an event occurs at j.Next, using our formalism from Section <ref>, we specify the actions of this program. Also, we identify the notion of simple versus complex variables, dependent versus free counters, etc. The actions of a process, say j, in this program are as follows: * Action Local Eventtrue ⟶ cl.j = cl.j+d; * Action Send Event, say to process ktrue ⟶ cl.j = cl.j+d; cl.m = cl.j;channel_j, k = channel_j,k∪{m}. * Action Receive Event, say from process km ∈ channel_j,k⟶ cl.j = max(cl.j, cl.m)+d;channel_j, k = channel_j,k - {m} where d is any positive integer that can be different at different instances of the actions. Observe that for every process j, cl.j is a permanent variable. The variable channel_j,k is a complex variable which contains timestamps of messages in transit. If we unravel this variable, we get multiple timestamps, each corresponding to a message in transit. In Lamport's logical clock program, * cl.j is a free counterProof. The permanent variable cl.j is a free counter of process j that satisfies definition <ref>. In particular if ⟨ s_0,s_1,s_2,⋯,s_n⟩ is a computation prefix of p, then:(i) at a given state s_l when an event occurs, the value of cl.j is computed as cl.j(s_l) = cl.j(s_l-1)+d, d > 0, or cl.j(s_l) = max(cl.j(s_l-1),cl.m) + d, d > 0, i.e., it is higher than the logical clock value of j in its previous state s_l-1.Thus cl.j is an unbounded counter that has the form cl.j(s_l) > cl.j(s_l-1) i.e. it never decreases. (ii) Also, if ρ = s_0, s_1, s_2,⋯,s_l is a valid computation prefix of p, appending state s_l+1 to ρ, where cl.j(s_l+1) = cl.j(s_l)+ d results in ρ' = ρ + s_l+1 which is also a valid computation prefix that contains one extra event.In other words, an increase in the logical clock value of a process by d continues to preserve the correctness of the overall system. Thus the logical clock value associated with any process is a free counter.* Each entry in channel_j,k is a (0,)-(step-based) dependent counter provided any message is guaranteed to be received withinsteps. Proof. 
In Lamport's logical clock program, * cl.j is a free counter. Proof. The permanent variable cl.j is a free counter of process j that satisfies Definition <ref>. In particular, if ⟨ s_0,s_1,s_2,⋯,s_n⟩ is a computation prefix of p, then: (i) at a given state s_l when an event occurs, the value of cl.j is computed as cl.j(s_l) = cl.j(s_l-1)+d, d > 0, or cl.j(s_l) = max(cl.j(s_l-1),cl.m) + d, d > 0; i.e., it is higher than the logical clock value of j in its previous state s_l-1. Thus cl.j is an unbounded counter of the form cl.j(s_l) > cl.j(s_l-1), i.e., it never decreases. (ii) Also, if ρ = s_0, s_1, s_2,⋯,s_l is a valid computation prefix of p, appending a state s_l+1 to ρ, where cl.j(s_l+1) = cl.j(s_l)+ d, results in ρ' = ρ + s_l+1, which is also a valid computation prefix that contains one extra event. In other words, an increase in the logical clock value of a process by d continues to preserve the correctness of the overall system. Thus the logical clock value associated with any process is a free counter. ▪ * Each entry in channel_j,k is a (0,λ)-(step-based) dependent counter, provided any message is guaranteed to be received within λ steps. Proof. Entries in channel_j,k, i.e., message timestamps, are dependent counters in the system, since they are temporary variables that have the form outlined in Definition <ref>. In particular, let ⟨ s_0,s_1,s_2,⋯⟩ be a computation of p and let cl.m denote the timestamp of a message m in channel_j,k. Then, cl.m(s_a) is equal to ⊥ when m is not in transit (before transmission or after reception), and cl.m equals the timestamp of m when message m is in transit. (i) If there exists a state s_a such that cl.m(s_a) = ⊥ and cl.m(s_a+1) ≠ ⊥, then this corresponds to the sending of m. In this case, cl.m(s_a+1) is set to cl.j(s_a+1). It follows that this satisfies condition 1 of Definition <ref>. (ii) If cl.m(s_a) ≠ ⊥ in some state s_a, then message m is in transit in state s_a. Since we assume that every message is delivered within λ steps, it follows that after λ steps, in state s_a+λ, message m will no longer be in transit. It follows that this satisfies condition 2 of Definition <ref>. (iii) A message m is timestamped only once, i.e., when it is added to channel_j,k, and cl.m is set to ⊥ only after it is removed from channel_j,k. While m is in transit, the value of cl.m is never changed. This satisfies condition 3 of Definition <ref>. ▪
§.§ Lamport's Mutual Exclusion Consider a system of n processes p_0,p_1,p_2, ⋯,p_n with one shared resource r. At any point in time, |holder(r)| ≤ 1, i.e., r is accessed by at most one process at a time. The system uses Lamport's logical clocks for timestamping resource requests from the processes. Resource requests from two different processes can carry the same logical clock value, i.e., when two processes p_j and p_k request r, their current logical clock values can be equal, cl_j=cl_k. To order requests in this scenario, every timestamp consists of the logical clock value of the process and the process id, i.e., when process p_j makes a request for r, it timestamps the request with ts_j=⟨ cl_j,j ⟩. The request with the smallest timestamp accesses the resource, where the request from process p_j has a smaller timestamp than the request from process p_k iff (cl_j< cl_k) ∨ (cl_j= cl_k ∧ j < k). Every process maintains a queue of requests. When a process p_j makes a request for r, it adds its timestamp ⟨ cl_j,j ⟩ to its queue. Simultaneously, p_j broadcasts its request with the timestamp ⟨ cl_j,j ⟩ to all other processes. Whenever a process receives a request from another process, it inserts the received timestamp into its queue of requests and responds to the sender, acknowledging the reception. A process p_j accesses r if it has received an acknowledgement from every other process and the smallest timestamp in its queue has j as the process id. Process p_j removes this smallest timestamp from its queue as soon as it finishes accessing r and simultaneously broadcasts a release message to all other processes.
When a process receives a release message it removes the corresponding entry from its queue. Whenever a process p_j adds a request timestamp to its queue, its current request has a higher timestamp than its earlier request, i.e., it has the form ts_j' > ts_j. Also, adding a request timestamp with an increase in the current logical clock value by any constant K, i.e., ts_j = ⟨ cl_j + K, j ⟩, continues to preserve the overall correctness of the system. Thus the request timestamp ts_j of a process, with respect to its own queue, is an unbounded counter that acts as a free counter. However, for a given process p_j, any request timestamp in its queue corresponding to another process p_k has the form ts_k = f(cl_k, k). Also, ts_k has a limited life of v = |min{queue at p_j} - ts_k| + 1: ts_k is valid while it is greater than the smallest timestamp in the queue. When ts_k becomes the smallest timestamp in the queue, process p_k accesses r, and after p_k is done accessing r, ts_k is removed from all request queues. Thus the request timestamp ts_k of process p_k, with respect to the queue at any other process p_j, is an unbounded counter that acts as a dependent counter.

§.§ Vector Clocks:

Consider an asynchronous system of n processes p_0,p_1,p_2,⋯,p_n that uses vector clocks for timestamping events. Each process p_j has a vector clock cl_j associated with it, which is a vector of size n consisting of the latest clock values of all n processes that p_j is aware of. The value of cl_j[j] increases whenever an event occurs at p_j, i.e., when a local event occurs at process p_j or when p_j sends or receives a message m, the process increments its clock by 1: cl_j[j](s_l) = cl_j[j](s_l-1)+1, i.e., its clock value at state s_l is obtained by adding one to its clock value in the previous state s_l-1. Thus the clock value of any process in its own vector clock is an unbounded counter of the form cl_j[j](s_l+1) > cl_j[j](s_l), i.e., it has the form of a free counter. Also, at a given state s_l, an increase in the clock value of p_j by a constant K, i.e., cl_j[j](s_l+1) = cl_j[j](s_l)+K, does not affect the overall correctness of the system. Whenever a process p_k sends a message m to process p_j, p_k timestamps m with its current vector clock and then sends it to p_j. On receiving m, p_j updates the clock values of each of the n processes in its vector clock cl_j by comparing them with the clock values in the received vector clock cl_k: for any process p_a ≠ p_j, if cl_j[a] < cl_k[a] then cl_j[a] = cl_k[a], i.e., for any process p_a ≠ p_j, cl_j[a] has the form cl_j[a] = f(cl_k[a]). The current value of cl_j[a] is valid only until p_j receives a new message m from some process p_k such that cl_j[a] < cl_k[a]. However, if p_j does not receive any more messages from any other process in the system, then the vector clock values in cl_j corresponding to all other processes remain outdated and unchanged. To benefit from such cases, hybrid vector clocks <cit.> stop maintaining such outdated clock values, i.e., if process p_j does not hear about the clock value of process p_a, directly or indirectly, for ϵ time, then cl_j[a] is no longer valid. So for any process p_a ≠ p_j, cl_j[a] has a limited life v = ϵ. Thus, for any process p_a ≠ p_j, cl_j[a] is an unbounded counter that acts as a dependent counter.

§ CURRENT PROTOCOL:

We have a protocol with two variables, R and x, for each process. R is a variable that may be updated arbitrarily; the only guarantee about R is that |R_j - R_k| ≤ 1. We guarantee that, at any instant, the variable x is such that |x_j - x_k| ≤ 1 and |R_j - x_j| ≤ 1.
We provide a set of actions such that whenever x violates its properties, the corresponding corrective action is executed, thereby bringing x back to the invariant.

§ FAULT TOLERANCE / DISCUSSION

In our protocol the variable x is a free counter that is always incremented, unless it gets corrupted and has to be decremented or incremented by corrective actions to bring x back to the invariant. We do not have dependent variables in our current protocol, so let us introduce a variable y that is a dependent variable. At any instant, since y depends on x (directly or indirectly), y preserves the properties of x, i.e., |y_1 - y_2| ≤ 1 (in the worst case the x values are one interval apart and take only two different values, and y at any instant is equal to one of these two values), and if y is restricted such that y' ≥ y, then y never decreases. Otherwise y can decrease by at most I, where I = max|x_j - x_k|. When x gets corrupted, x is pulled back. How does this affect y? y is pulled back eventually, when it gets assigned to some corrected x. What happens if y alone gets corrupted? Assuming that the faults stop at some point in time, y gets corrected as in the above scenario. What happens if y gets assigned and is not updated for a very long time? That is, though x and y continue to preserve the properties |x_1 - x_2| ≤ 1 and |y_1 - y_2| ≤ 1, their relative distance |y - x| can keep increasing if y is not updated for a very long time. But since y is a dependent variable it has a limited life v, i.e., y becomes invalid after v states. So the maximum possible relative distance is |y - x| = v.

§ TRANSFORMATION ALGORITHM

Our transformation follows a three-step approach. In the first step (Section <ref>), we revise the given program so that the free counters in the program, while still being unbounded, are closely related to physical time. In the second step (Section <ref>), we do the same for dependent counters. Finally, in the third step (Section <ref>), we revise the program obtained in the second step so that all counters become bounded. Due to reasons of space, we illustrate our algorithm in the context of the example in Section <ref> in Appendix <ref>.

Our algorithm utilizes the observation that while the counters used in a program can grow unbounded, their growth in a given time period (assuming no transient faults) can be computed. In particular, consider a computation within one region as determined by the global clock. We assume that the growth of a counter (from its original value) is bounded by a constant in this region. As an illustration, for the program in Section <ref>, we can identify this bound by considering the number of events that could be created in the given region. Note that a region from the perspective of the global clock may not be the same as that of a process (cf. Figure <ref>). Hence, from the perspective of the process, the growth of the counter in its region may be different.

§.§ Illustration of Step 1 for Logical Clocks

In this section, before we describe our approach, we illustrate it in the context of Lamport's logical clocks from Section <ref>. Note that in this program, the value of cl.j for every process j is a free counter. For the sake of illustration, let us assume that the maximum increase in any free counter (i.e., cl.j for any process j) in one global region is at most 10. In other words, the value of cl.j in a computation that goes on for δ global time increases by at most 10. Furthermore, assume that initially all values of cl.j are -1.
Initially, the region of every process is -1. As soon as a process creates its first event, it is in Region 0. Thus, in one δ of time, the value of cl.j would increase to at most 10; in 2δ time, it would increase to at most 20; and so on. Our first attempt to revise this program would be to require that in Region 0 the value of cl.j is between [0..9], in Region 1 between [10..19], and so on. Observe that the first property is already satisfied by the original program. However, the second property may be violated, since cl.j may be less than 10 even in Region 1. We can remedy this by increasing the value of cl.j as needed. Note that in this instance, the fact that cl.j is a free counter is important, as it guarantees that cl.j never decreases and we are permitted to increase cl.j as needed. In Region 1, we also need to ensure that cl.j does not increase beyond 19, as we are not allowed to decrease it. We can try to ensure this property by the choice of the region length, which guarantees a bound on the number of events that can be created in Region 1.

While the above approach is reasonable, it suffers from the problem that the processes do not always agree on what the current region is. In particular, process j could be in Region 1 but process k could still be in Region 0. Now, j could send a message to k, causing k to have a value of cl.k that is outside [0..9]. Also, if process j moves quickly to the next region while process k is still in the previous one, it creates some additional difficulties. In such a system, clock synchronization may force j to slow down its clock to ensure that k can catch up. (An alternative is to let process k advance its clock more quickly. But we assume that we do not control the clock synchronization algorithm.) In other words, as far as process j is concerned, even if it starts with the initial value of cl.j being 10 at the beginning of Region 1, the value of cl.j may exceed 19 before j enters Region 2.

We can remedy the above problems with the observation that the regions of two processes differ by at most 1. Hence, even if the clock of j is forced to slow down to let k catch up, as long as process j is in the same region, its clock will not increase by more than 30. (Note that the value 30 = 10*3 is due to the fact that Region 1 of process j can overlap with global regions 0, 1 and 2, and in each global region the increase in cl.j is bounded by 10.) With this approach, we proceed as follows: in Region 0, we try to ensure that the value of cl.j is between [0..29]; in Region 1, between [30..59]; in Region 2, between [60..89]; and so on. Observe that with this change, when the first process, say j, moves to Region 1, all values were less than 30. Based on the assumption about the number of events in a region, as long as process j is in Region 1, the cl value cannot increase beyond 59.

We can summarize the above approach by the constraint that if the region of process j equals r then we try to ensure that cl.j is between [30r .. 30r+29]. However, while process j is in Region r, process k could move to Region r+1, thereby creating a value that is greater than 30r+29. Moreover, if process k communicates with process j, it could force process j to increase cl.j to more than 30r+29. However, as long as process j is in Region r and process k is in Region r+1 (which can last for at most 2δ time), the values of cl.j and cl.k cannot increase beyond 30r+29+20. Based on this observation, we define minr and maxr, which identify the minimum region value held by some process and the maximum region value held by some process, respectively. (Note that these values differ by at most 1. Furthermore, the processes themselves are not aware of these values.
They only know that the value of their own region equals one of them.) From the above discussion, we observe that the value of cl.j is in the interval [30minr .. 30minr+29+20]. Moreover, it is also guaranteed to be in [30maxr-30 .. 30maxr+19]. In addition, due to the property of regions, at some point all processes must be in the same region. (If some processes are in Region r-1 and some are in Region r, then no process can move to Region r+1 as long as some process is in Region r-1. Thus, just before the first process moves to Region r+1, all processes must be in the same region, namely Region r.) Let this region be r. Clearly, minr and maxr are equal to r at this time. From the above discussion, at this point, cl.j must be in [30r .. 30r+19].

The above analysis is correct if we assume that the value of cl.j started with initialized values. However, if the values are corrupted, the above property may not hold. To achieve it, we change the value of cl.j when we know that it is corrupted. For example, let process j be in Region r. In a legitimate state, the value of cl.j is at least 30r, and since no process can be beyond Region r+1, the value of cl.j is at most 30(r+1)+19. If process j finds itself in a situation where the value of cl.j is outside this domain, then it resets it to 30r. Additionally, as discussed above, there exists a time when all processes are in the same region. Given the local correction action ensuring that cl.j is in the range [30r .. 30(r+1)+19], it follows that when the first process is about to move to Region r+1, the highest cl value of any process is 30(r+1)+19. When the first process moves to Region r+2 (the overlap can be with at most two global regions), the increase can be up to 20, so the highest cl value is 30(r+2)+9. Using the same argument, when the first process moves to Region r+3, all cl values are less than 30(r+3). Thus, when all processes move to Region r+3, their cl values are in the range [30(r+3)..30(r+3)+29]. And this property is preserved for all future regions. Observe that this implies that within 3 regions, the value of the free counters is within the expected range. The above discussion rested on the assumption that the number of events created in one region is at most 10. Next, we generalize this to obtain the first step of our algorithm.

§.§ Algorithm for Step 1: Adjusting Free Counters

Let max_inc be the maximum increase in any free counter in one global region. Since free counters can be increased at will, the natural approach is to try to keep the value of a free counter in Region r within [r·max_inc .. (r+1)·max_inc]. It turns out that this is not feasible, since the process regions may not be identical: a process in Region r+1 may send a message to a process in Region r, causing it to receive values that are outside this range. Hence, in our algorithm, we proceed as follows: we try to ensure that any free counter is in the range [3r·max_inc .. 3(r+1)·max_inc - 1]. However, in practice, since the regions of two processes may not be identical, we ensure that the value of the free counter is in the range [3r·max_inc .. 3(r+1)·max_inc + 2·max_inc - 1].[The reason for the parameters 2 and 3 in this expression is discussed in Appendix <ref>, where we illustrate the first step with the problem of logical clocks.] Each process first ensures that this constraint is satisfied. If it is not, it restores the value of the free counter to 3r·max_inc, where r is its current region. This can be achieved by checking the value of the free counter (1) as soon as the region of the process changes (II.1 in Figure <ref>), (2) when the process updates its free counter as part of its actions (III.4 in Figure <ref>), or (3) when the process uses the free counter (in evaluating the guard of an action) (III.1 in Figure <ref>). Thus, the algorithm for transformation is as shown in Figure <ref>, and a direct transcription of the correction rule is sketched below.
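The following sketch (Python; check_fc and the example numbers are ours — the bounds are the ones derived above, instantiated with max_inc = 10 as in the illustration) transcribes this correction rule directly.

    # Step 1 correction: clamp a free counter fc of a process in Region r.
    def check_fc(fc, r, max_inc=10):
        lo = 3 * r * max_inc                          # 3r*max_inc
        hi = 3 * (r + 1) * max_inc + 2 * max_inc - 1  # 3(r+1)*max_inc + 2*max_inc - 1
        return fc if lo <= fc <= hi else lo           # corrupted values reset to lo

    assert check_fc(305, r=10) == 305   # inside [300..349]: left unchanged
    assert check_fc(123, r=10) == 300   # outside the range: reset to 3r*max_inc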
    For all actions guard ⟶ statement at process j:
      // The transformed program maintains free/dependent counters in modulo
      // form. First convert each counter to its integer format.
      For each free counter fc, obtain intfc:        intfc = convert(fc)
      Evaluate the guards from the values of the converted counters;
      select one guard that evaluates to true and execute the
      corresponding statement. While executing the statement:
        after any update to a free counter fc:       check(fc, r)
        after any update to a dependent counter dc:  checkdc(dc, r)
      Function check(fc, r):   restore fc if it is outside its permitted range
      Function checkdc(dc, r): restore dc if it is outside its permitted range

    Algorithm for Step 1 and Step 2

§.§ Illustration of Step 2 for Logical Clocks

In Step 1 of our transformation algorithm, we changed the values of the free counters so that they are related to the time maintained by each process. Step 2 does the same for dependent counters; the counters are still unbounded, and the goal of Step 3 is to bound them. Step 1 did not affect dependent variables directly. However, there was a side effect on these variables when they were updated or utilized. For example, changing the free counter cl.j affected cl.m, where m is a message sent by j. Step 2 is intended to overcome these difficulties. We illustrate our approach with the logical clocks from Section <ref>.

Observe that dependent counters have the property that after a certain number of steps of the program, they are removed from the system. In the case of logical clocks, the value cl.m for every message m is a dependent counter; its value is created when the message is sent, and it is no longer needed in Lamport's algorithm after the message is received. (Note that if a message timestamp is stored by the receiver, it would still be needed. We discuss this in Section <ref>.)

Step 2 relates the lifetime of dependent counters to regions. In particular, it maps how long a dependent counter needs to be maintained. For the sake of illustration, suppose that in Lamport's algorithm for logical clocks, any message sent in Region x will be received or lost before Region x+5. (For example, if a region is 1 hour long, this corresponds to a message being delivered within 5 hours or being lost.)

Continuing with our earlier analysis from Section <ref>, consider the case where the current region number of process j is 10. This implies that the global region is 9, 10, or 11, and hence any message received by j would have originated when the sender's region was in 4..11. Given that m was sent when the sender was in Region 4..11, the minimum timestamp of m is 30*4 = 120, and the maximum timestamp of m is 30*11+20-1. Thus, whenever process j compares the values of two counters, the difference between them is less than 30*8. Hence, instead of maintaining the entire values of cl.j and cl.m, we can simply maintain cl.j mod (30*8) and cl.m mod (30*8). Observe that given the current time at process j, we can uniquely identify the values of cl.j and cl.m if we are aware of cl.j mod (30*8) and cl.m mod (30*8), respectively. With this change, the values of cl.j and cl.m are bounded.
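The unique identification claimed here works because all legitimate values lie in a window of length at most 30*8 whose starting point is known from the receiver's region. A sketch of the reconstruction (Python; recover is a hypothetical helper, not part of the transformed program):

    # Recover the integer value of a counter from its value modulo M, given
    # that the true value lies in the window [base .. base + M - 1].
    def recover(v_mod, base, M):
        return base + (v_mod - base) % M

    M = 30 * 8        # window size from the analysis above
    base = 30 * 4     # smallest legitimate timestamp when j's region is 10
    assert recover(349 % M, base, M) == 349   # maximum timestamp 30*11+20-1
    assert recover(120 % M, base, M) == 120   # minimum timestamp 30*4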
§.§ Algorithm for Step 2: Adjusting Dependent Counters

Let dc be an (r_b, r_f) region-based dependent counter. We now identify the possible values of dc that may occur in legitimate states, i.e., in the absence of faults. Consider the case where a process is in Region r, so the value of its free counter is in the range associated with Region r. At this time, the global region is at least r-1. If the counter dc is used in global region r-1, then it was initialized in a global region greater than or equal to r-1-r_f. Moreover, the value it was set to can only come from a free counter r_b regions earlier. In other words, the value of dc was set to the value of a free counter in global region r-1-r_f-r_b or higher. Since the process region and the global region may differ by 1, the region of the process that set the value of dc is at least r-2-r_f-r_b. Hence, the value of dc is at least the minimum free-counter value of Region r-2-r_f-r_b, i.e., 3(r-2-r_f-r_b)·max_inc. Moreover, the maximum value of dc is the maximum value of some free counter in Region r.

Hence, in Step 2 of our algorithm, we ensure that the value of a given dependent counter is always within this range. If it is not, we set it to the minimum permitted value in this range, i.e., to 3(r-2-r_f-r_b)·max_inc. Similar to free counters, this is done (1) as soon as the region of the process changes (II.2 in Figure <ref>), (2) when the process sets a dependent counter as part of its actions (III.5 in Figure <ref>), or (3) when the process uses the dependent counter (in evaluating the guard of an action) (III.2 in Figure <ref>). Each dependent counter is characterized by the parameters r_b and r_f. Let the maximum value of r_b+r_f over all dependent counters be max_r. Thus, if a process is in Region r, its dependent counters must be in the range [3(r-2-max_r)·max_inc .. 3(r+1)·max_inc+2·max_inc-1]. A sketch of this correction rule is given after the description of Step 3 below.

§.§ Algorithm for Step 3: Bounding the Counters

Steps 1 and 2 focused on relating free and dependent counters to physical time. Recall that if a process is in Region r, then any free counter is in the range [3r·max_inc .. 3(r+1)·max_inc+2·max_inc-1], and the value of any dependent counter is in the range [3(r-2-max_r)·max_inc .. 3(r+1)·max_inc+2·max_inc-1]. Observe that the size of the latter range is (3·max_r+11)·max_inc. In Step 3, we revise the program so that instead of maintaining each counter as an unbounded variable, we only maintain it in modulo m arithmetic, where m is 3 times the range of any dependent counter; in other words, m = 3·(3·max_r+11)·max_inc.

Next, we give a brief description of why this value is chosen. Towards this end, we split the domain [0..m-1] into three intervals, [0..m/3-1], [m/3..2m/3-1] and [2m/3..m-1]; each interval corresponds to the range of dependent counters. First observe that an interval is long enough to ensure that all free counters stabilize to their expected values. Regarding dependent counters, consider the case where a process, say j, is about to move from Interval 0 to Interval 1. Since the program can be perturbed to an arbitrary state, at this point a dependent counter could be in any interval. However, any dependent counter that exists when j is about to move to Interval 1 will be removed from the system before process j moves to Interval 2. (The length of the interval was chosen precisely to guarantee this property.) Now, consider the computation of the program where process j just enters Interval 1 and continues its execution until it enters Interval 2. During this computation, process j will discard all dependent counters in Interval 2, since the only valid values for dependent counters in Interval 1 are from Interval 0 or Interval 1. Moreover, given the life-span of dependent counters, any dependent counter generated in Interval 0 will be removed before j enters Interval 2. In other words, when the first process enters Interval 2, all dependent counters are from Interval 1. Moreover, this property will be preserved for all subsequent intervals.
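A sketch of the dependent-counter correction rule from Step 2, in the same style as check_fc above (Python; the bounds are the ranges just derived, using the aggregate parameter max_r, and the helper name is ours):

    # Step 2 correction: clamp a dependent counter dc of a process in Region r.
    def check_dc(dc, r, max_r=5, max_inc=10):
        lo = 3 * (r - 2 - max_r) * max_inc            # minimum permitted value
        hi = 3 * (r + 1) * max_inc + 2 * max_inc - 1  # max value of a free counter
        return dc if lo <= dc <= hi else lo

    # For r = 10: dependent counters must lie in [90..349].
    assert check_dc(120, r=10) == 120
    assert check_dc(500, r=10) == 90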
§.§ Correctness of Transformation Algorithm

Let p be the given stabilizing program, and let p' be the program obtained after Steps 1, 2 and 3. Starting from a legitimate state, we show that there is a mapping from every computation of p' to a computation of p. We also show that, starting from an arbitrary state, any computation of p' has a suffix that maps to a computation of p. Taken together, this shows that if p is stabilizing then so is p'. For reasons of space, we present the proof in Appendix <ref> (Theorem <ref>).

Theorem. If p is stabilizing to state predicate S then p' is stabilizing to S^m, where S^m = { s^m | s ∈ S and s^m is the modulo m state of s } and m = 3·(3·max_r+11)·max_inc.

§ APPLICATION OF OUR ALGORITHM

In this section, we demonstrate how our algorithm can be used to transform stabilizing programs that use unbounded variables into stabilizing programs that use bounded variables and (practically bounded) physical time. First, in Section <ref>, we show that our approach can be applied to any algorithm that benefits from the earlier seminal work by Katz and Perry <cit.>, which focuses on adding stabilization to any program. One issue with that approach is that it needs to utilize unbounded counters; we show that our approach can be used to bound the counters in those applications. In Section <ref>, we demonstrate its application in Paxos. In Appendix <ref>, we show that our algorithm can be applied to diffusing computation, which is also applicable to leader election, mutual exclusion, loop-free routing, distributed reset, etc. In Appendix <ref>, we show that our algorithm can be applied to vector clocks and that the resulting algorithm is similar to that in <cit.>. And in Appendix <ref>, we demonstrate that our approach can be used for a mutual exclusion algorithm. Due to reasons of space, we include only an outline of our approach and of why the unbounded variables in these programs are either free or dependent counters. We have chosen these applications because they demonstrate the generality of our approach and provide several insights into how the algorithm can be applied in settings where free and dependent counters may not be immediately visible. These insights are discussed in remarks after the corresponding sections.

§.§ Application in the Katz and Perry Framework <cit.> for Adding Stabilization

In <cit.>, Katz and Perry presented an algorithm to add stabilization to an existing program. The key idea of the algorithm is as follows: (1) an initiator performs a snapshot of the system using the algorithm by Chandy and Lamport <cit.>; (2) if the snapshot indicates that the program state is not legitimate, then it performs a reset whereby the program state is restored to a legitimate state and the computation proceeds thereafter. To enable calculation of the snapshot and to perform the reset without stopping the program execution, they utilize a round number – an unbounded integer variable – that is incremented every time a new reset is performed. Furthermore, if a process in round x receives a message with round y then (1) if x < y, the process moves to round y, (2) if x = y, the process treats it as normal program execution, and (3) if x > y, it ignores that message.
While the round number in <cit.> does not meet the requirements of a free or dependent counter directly, we can modify the program slightly so that the new variables meet the constraints of free/dependent counters. First, we change the algorithm so that after every snapshot a reset is performed. However, a Boolean variable is used to identify when this reset is fake, in which case processes should simply move to the next round but continue accepting previous messages and not perform an actual reset. Towards this end, we maintain the following variables: (1) a variable nr, maintained at the initiator only, to identify the next round it should use for performing a reset; (2) a variable cr, maintained at all processes, to identify the current round (this variable is in the same spirit as in the algorithm in <cit.>); (3) a variable b, a Boolean that identifies whether the reset being done is real or fake (this variable is always 0 except at the moment when the process wants to perform a real reset because the snapshot indicated that the state is not legitimate); and (4) a variable lr that denotes the sequence number of the last real reset performed in the system.

With this change, we can observe that (1) nr is a free counter; it can be increased at will. (2) cr can be viewed as a dependent counter provided we split the action that increases the current round into cr = ⊥; followed by cr = (new value of the current round). In this case, whenever cr is changed, we treat it as if it were first set to ⊥, thereby removing this dependent counter from the system, and then set to the new value, thereby creating a new dependent counter in the system. (3) lr is a dependent counter, since lr needs to be set when a new real reset is performed; it can be set to ⊥ after sufficient time has passed to ensure that all existing messages in the system have been delivered. In <cit.>, the authors utilized channel bounds as a mechanism to bound the counters. The above discussion shows that bounding is possible without using channel bounds.

Figure <ref> shows the size of counters that is sufficient to preserve the stabilization provided by <cit.>. For the sake of analysis, we consider parameters that are satisfied by any practical system. Hence, we consider the clock drift between any two processes to be at most 100 seconds. Note that protocols such as NTP <cit.> provide clock synchronization within 100 milliseconds. In such a system, we consider different parameters for the message delay on a single channel and the number of resets performed in one 100-second window to compute the size of the counters. Even if one makes very conservative assumptions, namely a message delay of up to one hour and as many as 100 resets performed in a 100-second window, the size of the counters is very small (21 bits). The size is even lower for more reasonable parameters. Furthermore, Figure <ref> shows that the size of the counters is not very sensitive to the message delay and the number of resets in one window. Therefore, the designer can make very conservative assumptions without increasing the size of the counters.

Note that the above discussion also illustrates that neither the requirement that a dependent counter cannot be changed nor the requirement that it is reset to ⊥ is restricting. Essentially, we need to treat an update of a dependent variable as a two-step process where we first remove the old value of the dependent counter and then initialize it as a new dependent counter (which happens to have the same name) with the new value. All that our definition requires is that the old value is no longer relevant for the subsequent computation and that the new value of the dependent counter is set to a recent value of some free counter. These observations are illustrated by the sketch below.
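The sketch below (Python; class and method names, and the BOT sentinel, are ours) shows the intended life cycle of these variables: nr as a free counter at the initiator, and cr and lr as dependent counters whose updates are treated as the two-step remove-then-create process described above.

    BOT = None   # models the "undefined" value ⊥

    class Initiator:
        def __init__(self):
            self.nr = 0                    # next round to use: a free counter

        def start_reset(self, real):
            self.nr += 1                   # may be increased at will
            return (self.nr, real)         # broadcast round number and flag b

    class Node:
        def __init__(self):
            self.cr = 0                    # current round: a dependent counter
            self.lr = BOT                  # round of the last real reset

        def on_reset(self, rnd, real):
            if rnd > self.cr:              # ignore stale rounds (the x > y case)
                self.cr = BOT              # two-step update: remove old counter ...
                self.cr = rnd              # ... then create the new one
                if real:                   # fake resets leave lr untouched; lr is
                    self.lr = rnd          # set to BOT once old messages are gone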
§.§ Paxos Based Consensus

A Paxos based consensus protocol has the following features: (1) a proposer c proposes a prepare request with a sequence number c.seq to the acceptors; (2) each acceptor accepts the request if it has not accepted a request with a higher sequence number – to do so, each acceptor a maintains a.seq, the highest sequence number it has seen; (3) if an acceptor replies NO, it also notifies the proposer of the value of a.seq so that the proposer can choose a number higher than a.seq for its subsequent request; (4) if a proposer receives sufficiently many YES responses (the precise number depends upon the number of failstop/Byzantine faults we want to tolerate), it sends an accept request to the acceptors; (5) an acceptor accepts this request iff it has not already responded to a prepare request with a higher sequence number; and (6) a value is chosen provided sufficiently many acceptors accept the accept request.

Observe that in this protocol we have sequence numbers maintained by proposers and acceptors. We associate two sequence numbers with each proposer: PendingSeq, which denotes the sequence number of a pending request, if any, and NextSeq, which denotes the sequence number it would use for a future request. Observe that NextSeq is a free counter: the proposer can increase it at will without affecting the correctness of the Paxos based consensus algorithm. On the other hand, PendingSeq is a dependent counter; it is set to be equal to NextSeq whenever a request is made. As long as there exists a bound on message delivery and on the time required for acceptors to send a YES or NO message, PendingSeq is valid for a limited time, since each pending request will be accepted or rejected within a finite time. After this time, the value of PendingSeq is no longer relevant and can be set to ⊥. If the proposer chooses to send a new request, PendingSeq will be set to a different value.

A Paxos algorithm typically uses only one variable to model NextSeq and PendingSeq, which is always an integer (and is never set to ⊥). However, in that case, this variable is neither a free nor a dependent counter, as it is never set to ⊥ and it cannot be increased at will. By having two variables, we can observe that NextSeq is a free counter. To make PendingSeq a dependent counter, we split the action where the proposer learns that its previous request has failed (when we set PendingSeq to ⊥) from the action where it starts a new request. In this case, each instance of the pending request is a new dependent counter. Finally, the sequence number associated with an acceptor is also a dependent counter: it is relevant only until (1) it receives a new request with a higher sequence number, or (2) it has not received a request for a long enough time, so that it can treat a future request as if it were the first request it has ever received.

Once again, we use very conservative assumptions to identify the size of these counters. Similar to Section <ref>, we assume that clocks are synchronized to within 100 seconds of each other. In such a system, even if a message can be delayed up to 1 hour and there are 10^9 requests in one 100-second window, 46 bits are sufficient; for more reasonable assumptions, even fewer bits are required for each counter. Moreover, since the number of bits does not increase substantially as we increase the message delay or the number of requests, the designer can utilize extremely conservative assumptions. For example, the number of bits for a counter only increases from 41 to 46 even if the message delay is increased from 1 second to 4000 seconds. The proposer-side counters are sketched below.
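A sketch of the proposer-side counters (Python; illustrative names, with BOT again modeling ⊥): NextSeq can be bumped arbitrarily, e.g., past an a.seq value reported in a NO reply, while PendingSeq exists only while a request is outstanding.

    BOT = None

    class Proposer:
        def __init__(self):
            self.next_seq = 0              # NextSeq: free counter
            self.pending_seq = BOT         # PendingSeq: dependent counter

        def propose(self, at_least=0):
            self.next_seq = max(self.next_seq, at_least) + 1   # increase at will
            self.pending_seq = self.next_seq                   # new dependent counter
            return self.pending_seq

        def request_settled(self):
            self.pending_seq = BOT         # the old value is no longer relevant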
§ DISCUSSION AND RELATED WORK

One of the questions raised by our work is whether the timing properties utilized in our transformation algorithm affect its generality. We note that, given the impossibility of solving consensus, leader election and several other interesting problems in asynchronous systems <cit.>, any fault-tolerant solution to these problems must make some reasonable assumptions about the underlying system; typical guarantees concern process speeds, message delays, etc. Our algorithm utilizes assumptions of this nature to identify free and dependent counters. Also, as shown in our case studies, even trivially satisfiable requirements – such as clocks differing by at most 100s (when the current state of the art guarantees synchronization to less than 10 milliseconds), the number of events in a given region being at most 10^9, or a message being delivered within an hour – suffice to bound the variables within acceptable limits.

Not all programs that use unbounded counters can be used with our transformation algorithm. For example, consider algorithms such as those for causal broadcast that maintain an unbounded counter to keep track of the number of messages sent by each process. We cannot treat this as a free counter, since incrementing it would require us to send broadcast messages. In other words, there are programs whose unbounded counters are neither free nor dependent.

Our work also differs from previous work that uses a distributed reset mechanism <cit.> to bound the values of counters. Distributed reset affects all processes; by contrast, stabilization can often be achieved by only the processes in the vicinity of the affected processes <cit.>. Compared with the work in <cit.>, which assumes the counter size to be equal to the size of integers (32/64 bits in most systems), our approach has the potential to reduce the size of the counters. For example, the analysis from Section <ref> shows that a bound of 780 is sufficient. In other words, the bound depends upon the needs of the given application. Also, the algorithm in <cit.> requires multiple/all processes to reset their counters if some process has to reset its counters. By contrast, our algorithm, when applied in the context of Paxos, addresses this issue by ignoring messages and resetting the processes whose counters are affected rather than affecting all processes. Thus, if the perturbation is small, it is anticipated that our solution will affect only the corrupted processes.

§ CONCLUSION AND FUTURE WORK

We presented an algorithm to transform a stabilizing program with unbounded variables into a corresponding stabilizing program that only utilizes bounded variables and physical time. Our algorithm relied on classifying unbounded variables into free counters and dependent counters.
Intuitively, the former can be increased at will, whereas the latter have a limited life. Our work addresses a key conflict in the context of stabilization: (1) the use of unbounded variables in stabilizing programs should be avoided, since any implementation of such a program relies on allocating large enough but bounded memory to each variable, and transient faults could perturb the program to a state where the large bound associated with the variable is reached; and (2) (practically bounded) physical time is used in many systems because corruption associated with time is typically easily detectable and correctable. Our approach provides an alternative for practically bounded-space stabilization by utilizing system and application properties such as clock synchronization guarantees, message delivery guarantees, etc. Since a rich class of problems easily admits unbounded state-space solutions, our approach can be used to provide solutions where all program variables are bounded.

We demonstrated that our algorithm is applicable to several classic problems in distributed computing, namely logical clocks, mutual exclusion, vector clocks, diffusing computation and Paxos based consensus. We also demonstrated that our work can be combined with that of <cit.>, which transforms a given program into a stabilizing program with unbounded counters; our work can be used to convert those unbounded counters into bounded counters while still preserving stabilization. This work also demonstrates that, for a rich class of programs, the approach taken by non-stabilizing programs to deal with unbounded variables – providing large enough but bounded space – is feasible even for stabilizing programs. In our work, we chose the size of a region so that the regions of any two processes differ by at most 1. We anticipate that by choosing a more fine-grained region, the value of max_inc, i.e., the value by which a counter may increase in one region, can be reduced, thereby making it possible to reduce the size of the dependent counters.

§ PROOF OF CORRECTNESS

In this section, we present the proofs of correctness and the step-by-step illustration of our algorithm, along with some of its applications, that were omitted due to reasons of space.

§.§ Illustration of Our Algorithm in Logical Clocks (Section <ref>)

Illustration of Step 1. In this section, we identify the reason for choosing the range of the free counters of a process in Region r to be [3r·max_inc .. 3(r+1)·max_inc+2·max_inc-1]. We use the context of Lamport's logical clocks from Section <ref> to identify how this bound was derived. Note that in this program, the value of cl.j for every process j is a free counter. For the sake of illustration, let us assume that the maximum increase in any free counter (i.e., cl.j for any process j) in one global region is at most 10. In other words, the value of cl.j in a computation that goes on for δ global time increases by at most 10. Assume that initially all values of cl.j (i.e., the logical clock value of every process) are -1 and the region of every process is -1. As soon as a process creates its first event, it is in Region 0. Thus, in one δ of time, the value of cl.j would increase to at most 10; in 2δ time, to at most 20; and so on. Our first attempt to revise this program would be to require that in Region 0 the value of cl.j is between [0..9], in Region 1 between [10..19], and so on. Observe that the first property is already satisfied by the original program.
However, the second property may be violated, since cl.j may be less than 10 even in Region 1. We can remedy this by increasing the value of cl.j as needed. Note that in this instance, the fact that cl.j is a free counter is important, as it guarantees that cl.j never decreases and we are permitted to increase cl.j as needed. In Region 1, we also need to ensure that cl.j does not increase beyond 19, as we are not allowed to decrease it. We can try to ensure this property by the choice of the region length, which guarantees a bound on the number of events that can be created in Region 1.

While the above approach is reasonable, it suffers from the problem that the processes do not always agree on what the current region is. In particular, process j could be in Region 1 but process k could still be in Region 0. Now, if j sends a message to k, it can cause k to have a value for cl.k that is outside [0..9]. Also, if process j moves quickly to the next region while process k is still in the previous one, it creates some additional difficulties. In such a system, clock synchronization may force j to slow down its clock to ensure that k can catch up. (An alternative is to let process k advance its clock more quickly. But we assume that we do not control the clock synchronization algorithm.) In other words, as far as process j is concerned, even if it starts with the initial value of cl.j being 10 at the beginning of Region 1, the value of cl.j may exceed 19 before j enters Region 2.

We can remedy the above problems with the observation that the regions of two processes differ by at most 1. So, even if the clock of j is forced to slow down to let k catch up, as long as process j is in the same region, its clock will not increase by more than 30. (Note that the value 30 = 10*3 is due to the fact that Region 1 of process j can overlap with global regions 0, 1 and 2, and in each global region the increase in cl.j is bounded by 10.) With this approach, we proceed as follows: in Region 0, we try to ensure that the value of cl.j is between [0..29]; in Region 1, between [30..59]; in Region 2, between [60..89]; and so on. Observe that with this change, when the first process, say j, moves to Region 1, all values were less than 30. Based on the assumption about the number of events in a region, as long as process j is in Region 1, cl.j cannot increase beyond 59.

We can summarize the above approach by the constraint that if the region of process j equals r then we try to ensure that cl.j is between [30r .. 30r+29]. However, while process j is in Region r, process k could move to Region r+1, and if process k communicates with process j, it could force process j to increase cl.j to more than 30r+29. However, as long as process j is in Region r and process k is in Region r+1 (which can last for at most 2δ time), the values of cl.j and cl.k cannot increase beyond 30r+29+20. Based on this observation, we define minr and maxr, which identify the minimum region value held by some process and the maximum region value held by some process, respectively. (Note that these values differ by at most 1. Furthermore, the processes themselves are not aware of these values. They only know that the value of their own region equals one of them.) From the above discussion, we observe that the value of cl.j is in the interval [30minr .. 30minr+29+20]. Moreover, it is also guaranteed to be in [30maxr-30 .. 30maxr+19]. In addition, due to the property of regions, at some point all processes must be in the same region. (If some processes are in Region r-1 and some are in Region r, then no process can move to Region r+1 as long as some process is in Region r-1. Thus, just before the first process moves to Region r+1, all processes must be in the same region, namely Region r.)
Let this region be r. Clearly, minr and maxr are equal to r at this time. From the above discussion, at this point, cl.j must be in [30r .. 30r+19]. The above analysis is correct if we assume that the value of cl.j started with initialized values. However, if the values are corrupted, the above property may not hold. To rectify this, we change the value of cl.j when we know that it is corrupted. For example, let process j be in Region r. In a legitimate state, the value of cl.j is at least 30r, and since no process can be beyond Region r+1, the value of cl.j is at most 30(r+1)+19. If process j finds itself in a situation where the value of cl.j is outside this domain, then it resets it to 30r. Additionally, as discussed above, there exists a time when all processes are in the same region. Given the local correction action ensuring that cl.j is in the range [30r .. 30(r+1)+19], it follows that when the first process is about to move to Region r+1, the highest cl value of any process is 30(r+1)+19. When the first process moves to Region r+2 (the overlap can be with at most two global regions), the increase can be up to 20, so the highest cl value is 30(r+2)+9. Using the same argument, when the first process moves to Region r+3, all cl values are less than 30(r+3). Thus, when all processes move to Region r+3, their cl values are in the range [30(r+3)..30(r+3)+29]. And this property is preserved for all future regions. Observe that this implies that within 3 regions, the value of the free counters is within their expected range. The above discussion rested on the assumption that the number of events created in one region is at most 10. Our algorithm generalizes this (as max_inc) to adjust the free counters in the first step.

Illustration of Step 2. For the sake of illustration, suppose that in Lamport's algorithm for logical clocks, any message sent in Region x will be received or lost before Region x+5. In logical clocks, the value of cl.m is a dependent counter. When the value of cl.m is set, it is set to some current value of a free counter (namely, the cl value of the sender process). Moreover, the value will be available for at most 5 additional regions. In other words, cl.m is a (0,5)-dependent counter. Hence, the above analysis requires that when a process receives a message m in Region r, it checks whether its timestamp is at least 3(r-7)·max_inc and at most the maximum free-counter value of Region r, where max_inc equals 10.

Illustration of Step 3. Since we assume that cl.m is a (0,5)-dependent counter, max_r = 5. Putting this value (together with max_inc = 10) into m = 3·(3·max_r+11)·max_inc, we obtain 780. Hence, instead of maintaining the variables cl.j and cl.m, we maintain them as cl.j mod 780 and cl.m mod 780, respectively. Thus, 10 bits are sufficient to represent cl.j and cl.m.
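The arithmetic of this illustration can be checked directly (Python; the closed form for m is the one used above for the size of the dependent-counter range, so it should be read under that assumption):

    from math import ceil, log2

    def modulus(max_r, max_inc):
        return 3 * (3 * max_r + 11) * max_inc   # m = 3 * range of a dependent counter

    m = modulus(max_r=5, max_inc=10)
    assert m == 780
    assert ceil(log2(m)) == 10                  # 10 bits suffice for cl.j mod 780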
§.§ Proof of Correctness

In this section, we show that our algorithm, consisting of 3 steps, preserves the correctness and stabilization properties of the original program. Let p be the original program and let p' be the program obtained after all 3 steps. To facilitate the proof we define the notion of a modulo state.

Modulo m state. Let s be a state. The modulo m state of s, where m is an integer, denoted by s^m, is obtained by changing each variable x with unbounded domain in s to x mod m. We extend this to modulo state predicates and modulo computations. For example, s_0^m, s_1^m, ⋯ is a modulo m computation of p iff s_0, s_1, ⋯ is a computation of p and ∀ w: s_w^m is the modulo m state of s_w.

Lemma. In the absence of faults (i.e., starting from a valid initial state), any computation of p' is also a modulo m computation of p, where m = 3·(3·max_r+11)·max_inc.

Observe that the bounds on free counters are derived from the property that in any global region, the value of the free counter is in the range [3r·max_inc .. 3(r+1)·max_inc+2·max_inc-1]. Hence, starting from an initial state, there is no need to reset a free counter before/after the execution of an action. Likewise, there is no need to reset a dependent counter. The only additional change to free counters occurs when a process moves from one region to another. Now, consider a computation of p', say s_0, s_1, s_2, ⋯. Since s_0 is an initial state of p, s_0 is a modulo m computation-prefix of p from an initial state. Next, assume that s_0, s_1, ⋯, s_j is a modulo m computation-prefix of p. From the above discussion, (s_j, s_j+1) either executes an action of p or increases a free counter. In the former case, s_0, s_1, ⋯, s_j, s_j+1 is clearly a modulo m computation-prefix of p. In the latter case, s_0, s_1, ⋯, s_j, s_j+1 is also a modulo m computation-prefix of p by the property of free counters, namely that their value can be incremented at any time. From the above discussion it follows that s_0, s_1, s_2, ⋯ is a modulo m computation of p.

Lemma. A computation of p' that starts from an arbitrary state has a suffix that is a modulo m computation of p, where m = 3·(3·max_r+11)·max_inc.

Consider a computation of p'. Due to the local correction of free counters, the value of any free counter of a process in Region r would be in the range [3r·max_inc .. 3(r+1)·max_inc+2·max_inc-1]. Note that even when taking this value modulo m, it can be uniquely identified given the physical time. As discussed above, we partition the domain [0..m-1] into three intervals as shown in Figure <ref>. Wlog, let us assume that the current interval is 0 as determined by the physical time. As discussed above, if we consider the computation of p' over the entire Interval 1, then when the first process enters Interval 2, all counters are within Interval 1. Hence, the value of any counter can be uniquely determined from the physical time even if it is maintained in modulo arithmetic. Thus, from this point forward, every transition of p' is also a modulo m transition of p. It follows that every computation of p' has a suffix that is a modulo m computation of p.

Theorem. If p is stabilizing to state predicate S then p' is stabilizing to S^m, where S^m = { s^m | s ∈ S and s^m is the modulo m state of s } and m = 3·(3·max_r+11)·max_inc.

To show that p' is stabilizing, we need to show that:
* if p' starts from an arbitrary state then it recovers to its legitimate states — this follows from Lemma <ref>;
* if p' starts from a legitimate state then, in the absence of faults, it remains in legitimate states forever and is correct with respect to its specification — this follows from Lemma <ref>.

§.§ Application of our Algorithm in Diffusing Computation

The problem of diffusing computation <cit.> is intended to check/modify all processes in a given system. It is used in many applications such as ensuring loop-free routing <cit.>, leader election <cit.>, termination detection <cit.>, mutual exclusion <cit.> and distributed reset <cit.>. A key property of diffusing computation is that some process (or more than one process) may initiate it; this process is called the initiator. Upon initiation, the process forwards the computation to its neighbors; this is called the propagation phase. When the neighbors receive this diffusing computation for the first time, they forward it to their neighbors.
If they receive the same diffusing computation again — which can happen, since there are several paths from the initiator to a given process — they acknowledge it but do not forward it to others. When a process has received the diffusing computation from all its neighbors, it begins the completion phase and sends an acknowledgement to its parent, i.e., the process from which it received the diffusing computation for the first time. When the initiator completes its diffusing computation, it is guaranteed that all processes in the system (that were present throughout the computation) have received and completed their diffusing computation.

One important requirement of diffusing computation is that a process has to know whether the diffusing computation it has received is the same as the one it had received before. This is achieved by using sequence numbers: the initiator utilizes a higher sequence number every time it begins a new diffusing computation. A straightforward approach to achieve this is via an unbounded sequence number. (If there are multiple initiators, we can use the ID of the initiator together with the sequence number.) Following our algorithm, we can observe the following:
* the sequence number of the initiator is a free counter; it can be increased by any value when the initiator begins a new diffusing computation;
* the sequence number at the other (non-initiator) processes is a dependent counter: it is relevant only from when the process begins propagating the diffusing computation until it completes it. The specific values of (r_b, r_f) for this dependent counter are determined by the worst-case time it takes for a diffusing computation to complete.

Thus, our algorithm can be applied to bound the variables of a stabilizing diffusing computation algorithm. Figure <ref> shows the size of counters needed to achieve this. Once again, even if the number of diffusing computations initiated by one process in a 100-second window is 100 and the message delay is up to 1 hour, the number of bits required is 26. Furthermore, a process can ensure that the number of diffusing computations initiated by it satisfies this limit by simply counting the number it initiates in one window.

§.§ Application of our Algorithm in Vector Clocks

We discussed the application of our algorithm to Lamport's logical clocks in Section <ref>. We can also extend it to vector clocks <cit.> and hybrid vector clocks <cit.>. Vector clocks maintain a variable vc.j.k for each pair j and k; this variable captures the knowledge that j has about k, and vc.j.j denotes the counter maintained by j for itself. In this program, vc.j.j is a free counter: j can increment the counter it maintains for itself by any value it desires. For j ≠ k, vc.j.k is a dependent counter, provided the underlying communication graph is strongly connected and there exists a time t such that a message (timestamped with vector clocks) is sent on every link within time t. Following this approach, we can obtain a stabilizing program for vector clocks that uses bounded counters. The resulting program is the same as that in <cit.>; in other words, our algorithm can be utilized to derive the program in <cit.>. The size of a counter with vector clocks is small, as shown in Figure <ref>, even in scenarios where 10^9 events are created in each window (of size 100 seconds) and the message delay is as long as an hour. A sketch of the expiry-based merge is given below.
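The following sketch (Python; the dictionary representation, the function name, and the parameter EPSILON are ours) shows the dependent-counter behavior of the entries vc.j.k: they are refreshed from received clocks and dropped once they have not been heard about for ϵ time.

    EPSILON = 100   # validity period of an entry for another process

    def merge(own, received, last_heard, now, j):
        # own, received: dicts pid -> clock value; last_heard: pid -> time.
        for a, v in received.items():
            if a != j and own.get(a, -1) < v:
                own[a] = v                 # vc.j.k = f(vc.k.k): dependent counter
                last_heard[a] = now
        stale = [a for a in own
                 if a != j and now - last_heard.get(a, now) > EPSILON]
        for a in stale:
            del own[a]                     # entry no longer valid after EPSILON
        return own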
§.§ Application of our Algorithm in Mutual Exclusion

The classic algorithm by Lamport <cit.> for mutual exclusion utilizes logical clocks (recalled in Section <ref>). It works as follows: (1) all messages are timestamped with the logical timestamps presented in Section <ref>; (2) when a process wants to access the critical section, it sends a request to all processes; (3) when a process receives the request, it adds the request timestamp to its queue and replies to the requesting process; (4) a process enters the critical section iff it has received replies from all processes and the smallest request contained in its queue corresponds to its own request; and (5) finally, after a process is done with its critical section, it sends a release message to all other processes, thereby allowing them to remove its corresponding request from their queues.

While this algorithm is typically not viewed as a stabilizing algorithm, it can be made stabilizing with simple local checks and corrections. For example, if the queue of process j contains a request from k, but process k did not make the request, then this request should be removed. The algorithm ensures stabilization from such a state, but it still involves counters that are unbounded. In this program, we can observe the following: (1) as shown in Section <ref>, the value of cl.j, the timestamp of process j, is a free counter; (2) the timestamps contained in any message are dependent counters; and (3) the timestamps saved in the request queues or contained in request/release messages are dependent counters. Note that for the algorithm in Section <ref>, the value of cl.m became irrelevant when m was received. However, in this algorithm this value may be saved in the request queue; we treat this as the creation of a new dependent counter.

Observe that in the timestamping algorithm <cit.>, the timestamp of a message became irrelevant as soon as the message was received. However, when the same timestamp is used in the mutual exclusion algorithm, even though the message timestamp becomes irrelevant, it is also saved in the request queue. This extends how long the dependent counter remains relevant: superimposing another program on an existing stabilizing program may increase the time for which a dependent counter is relevant, thereby making it necessary to increase the bound associated with those counters. The entry test of step (4) is sketched below.
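For reference, the entry test of step (4) can be written compactly (Python; may_enter is our name, and requests are (cl, pid) pairs so that min gives exactly the timestamp order ⟨ cl, pid ⟩ used in Section <ref>):

    # queue: set of (cl, pid) request timestamps known to process j;
    # acks: set of pids from which j has received a reply.
    def may_enter(j, queue, acks, processes):
        if set(processes) - {j} - acks:    # some reply is still missing
            return False
        return bool(queue) and min(queue)[1] == j   # j owns the smallest request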
§.§ Summary of Notations

Generic variables:
  p            program
  V_p          set of variables of program p
  SV_p         dynamic-sized equivalent of V_p, i.e., a dynamically changing collection of only simple variables, obtained by unraveling complex variables of V_p into their constituent simple variables
  A_p          set of actions of program p
  s            state of program p
  s_l          l-th state in a computation of program p
  guard        condition involving variables in V_p
  statement    task involving an update of a subset of variables in V_p
  ρ, ρ'        computation prefixes
  x            variable in V_p
  x(s)         value of variable x in state s
  fc           free counter
  fc(s_l)      value of free counter fc in state s_l
  w, a, d      positive integers unless specified otherwise
  k_b, k_f     characterize the life of a dependent counter in terms of program steps
  dc           dependent counter
  S            set of states
  RS           region size
  t            abstract global time
  t_j          physical time at process j
  t/δ          abstract global region
  t_j/δ        region of process j
  δ            duration/length of a time region
  r            region
  r_b, r_f     characterize the life of a dependent counter in terms of regions
  max_r        maximum of (r_b+r_f) over all dependent counters
  max_inc      maximum increase in any free counter within a global region
  p'           program obtained by applying our transformation algorithm to program p

Variables in Lamport's logical clocks example:
  j, k         processes
  cl.j         logical clock value of process j
  m            message
  cl.m         message timestamp, i.e., the logical clock value associated with m
  channel_j,k  complex variable that contains the timestamps of messages in transit between process j and process k
  v            number of program steps within which a message is guaranteed to be delivered at the receiver process

Variables in the Katz and Perry example:
  x, y         round numbers
  nr           next round
  cr           current round
  lr           round number when the last real reset was performed
  b            Boolean variable that identifies whether the reset was real or fake

Variables in the Paxos based consensus example:
  c.seq        sequence number of the request made by proposer c
  a.seq        highest sequence number seen by acceptor a
  PendingSeq   sequence number of the pending request
  NextSeq      sequence number that would be used for a future request

Variables in the vector clocks example:
  vc.j         vector clock maintained at process j
  vc.j.k       highest clock (counter) value of process k that process j is aware of

§.§ Mutual Exclusion (Bakery Algorithm)

THIS EXAMPLE IS INCOMPLETE and MAY HAVE ISSUES.

Lamport's bakery algorithm is an instance of mutual exclusion where, among the processes that are requesting the critical section, one process is chosen to execute the critical section. The intuitive description of the algorithm is as follows: (1) each process is associated with a number; when the process is not requesting, this number is set to 0. While the algorithm is not typically viewed as a stabilizing program, it is stabilizing tolerant to transient faults that perturb the program counter (which determines which action is executed next) and the numbers selected by different processes.

Intuitively, a distributed program consists of a set of processes. Each process has a set of actions. Specifically, we consider processes defined by a set of event-driven actions. The program executes in an interleaving manner where an action of some process is executed in every step. An execution is captured with a set of initial variables, say V_j^0, for process j.
The union of these variables, V_p^0 = ∪_j V_j^0, denotes the initial variables of program p. When an event occurs, the corresponding event-driven action of a process is executed, and the variables are updated and/or new variables may be introduced. Hence, we utilize V_p^n to denote the variables of program p in step n. We consider two types of variables: bounded variables, which have a finite domain, and unbounded variables, whose domain is N, the set of natural numbers. With this intuition, we now formally define a program in terms of its variables and actions.

Definition (Program). A program p is of the form ⟨V_p, A_p⟩, where V_p is a set of bounded and unbounded variables (counters), and A_p is a set of event-driven actions that update variables in V_p and/or add new unbounded variables to it.

An event-driven action of a process in step n can read the variables in V_p^{n-1} and update them as well as introduce new variables. Intuitively, we model these actions in the form event ⟶ statement, which states that the process can execute the action only if the event evaluates to true. And, in that case, it can execute the corresponding statement. Now we define an action formally.

Definition (Action). An event-driven action in A_p of program p is of the form event → statement, where event is some program or process (inter-process/local) event (e.g., a local update event at a process, a message send/receive event, etc.); when the event occurs, the statement updates variables in V_p (belonging to the process) and/or adds new unbounded variables to V_p.

Definition (State). Let V be a set of variables. A state associated with V is obtained by assigning each variable in V a value from its domain.

Since an event-driven action can add new variables to p, we can observe that the initial state of program p is obtained by assigning the variables in V_p^0 a value from their respective domains. In step n, the state of p is obtained by assigning each variable in V_p^n a value from its domain. Let s be a state associated with a set of variables V. Given a variable x ∈ V, we use x(s) to denote the value of x in state s. For simplicity of presentation, we let x(s) = ⊥ if x ∉ V. In other words, x(s) is defined irrespective of whether x is in V or not.

Definition (Computation). A computation is a sequence of states s_0, s_1, s_2, ⋯, where state s_{l+1} is reached by executing an event-driven action (or actions) at state s_l.
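To make these definitions concrete, the following minimal Python sketch (our own illustration; the class and method names are ours, not part of the formal model) encodes the logical-clock example as an event-driven program in the above sense: the local clock cl.j is an unbounded free counter, and each message timestamp cl.m is a dependent counter that is added to V_p at send time and becomes irrelevant once the message is received.

```python
# Minimal sketch of the event-driven program model, using Lamport's
# logical clocks as the running example. Names are illustrative only.

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.cl = 0          # cl.j: an unbounded free counter

    def local_event(self):   # event -> statement: a local update
        self.cl += 1

    def send(self):          # creates a dependent counter cl.m
        self.cl += 1
        return {"src": self.pid, "cl_m": self.cl}

    def receive(self, msg):  # cl.m becomes irrelevant after this step
        self.cl = max(self.cl, msg["cl_m"]) + 1

# A state assigns each variable a value; new variables (message
# timestamps) are added to V_p when a send event executes.
j, k = Process("j"), Process("k")
m = j.send()        # V_p gains the dependent counter cl.m
k.receive(m)        # cl.m is read once, then no longer relevant
assert k.cl > m["cl_m"]
```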
http://arxiv.org/abs/1703.09326v2
{ "authors": [ "Vidhya Tekken Valapil", "Sandeep S. Kulkarni" ], "categories": [ "cs.DC" ], "primary_category": "cs.DC", "published": "20170327222436", "title": "Preserving Stabilization while Practically Bounding State Space" }
R(5,5) ≤ 48

Vigleik Angeltveit and Brendan D. McKay

=====================================================================================================================================================================================================

We improve the upper bound on the Ramsey number R(5,5) from R(5,5) ≤ 49 to R(5,5) ≤ 48. We also complete the catalogue of extremal graphs for R(4,5).

§ INTRODUCTION

The Ramsey number R(s,t) is defined to be the smallest n such that every graph of order n contains either a clique of s vertices or an independent set of t vertices.

Theorem. The Ramsey number R(5,5) is less than or equal to 48.

The history of R(5,5) is provided in <cit.>. The lower bound of 43, established constructively by Exoo <cit.>, is still the best. The previous best upper bound of 49 was proved by McKay and Radziszowski <cit.>. By Theorem <ref> we now have 43 ≤ R(5,5) ≤ 48. The actual value of R(5,5) is widely believed to be 43, because a lot of computer resources have been expended in an unsuccessful attempt to construct a Ramsey(5,5)-graph of order 43 <cit.>. As additional evidence, we can report that, in unpublished 2014 work, Lieby and the second author proved that any Ramsey(5,5) graph on 42 vertices other than the 656 reported in <cit.> does not share a 37-vertex subgraph with any of the 656.

The proof of Theorem <ref> is via computer verification, checking approximately two trillion separate cases. We wrote two independent programs to carry out the calculation, to minimise the chance of any computer bugs affecting our results.

§ OUTLINE OF THE PROOF OF THEOREM <REF>

Let (s,t,n) denote the set of isomorphism classes of graphs of order n without an s-clique or independent t-set, and (s,t) = ⋃_n (s,t,n). The main idea is that given a graph F ∈ (5,5,48), a large subgraph of it must be obtained by gluing together two graphs in (4,5,24) along a graph in (3,5,d) for some d.

A list of 350,904 graphs in (4,5,24) was compiled by McKay and Radziszowski <cit.> in 1995, and our first task was to complete their list. This was actually the most time-consuming part of the project.

Theorem. |(4,5,24)| = 352,366.

We now explain the main proof idea in more detail. For a graph F, V(F) is the vertex-set of F, N_F(w) is the neighbourhood of vertex w ∈ V(F), and F[W] is the subgraph of F induced by W ⊆ V(F). First, note that because R(4,5)=25 <cit.>, every vertex in a graph F ∈ (5,5,48) must have degree 23 or 24. By replacing F by its complement if necessary we can assume that F has at least 24 vertices of degree 24. Hence F must have two adjacent vertices a,b of degree 24. Define

G = F[N_F(b)], H = F[N_F(a)], K = F[V(G) ∩ V(H)].

In words, G is the subgraph of F induced by the 24 vertices adjacent to b (this includes a but not b), H is the subgraph induced by the vertices adjacent to a, and K is the intersection of G and H. Please see Figure <ref>. Note that G, H ∈ (4,5,24) and that K ∈ (3,5,d) for some d. Because R(3,5) = 14 we must have d ≤ 13, and d is also equal to the degree of a in G and the degree of b in H.

To reconstruct F[V(G) ∪ V(H)], which is a graph with 48-d vertices, from G, H and K, it suffices to specify how K is a subgraph of G and H, and whether or not we have an edge between x and y for x ∈ V(G) - V(K) - {a} and y ∈ V(H) - V(K) - {b}; i.e., between the parts labelled A and B in Figure <ref>. We call this procedure gluing.
For each inclusion of K into G and H there are 2^(23-d)^2 ways of gluing G and H along K, but we will only consider gluings that could give a graph in (5,5,48-d). For K ∈ (3,5,d), define

(4,5,24,K) = { (G,a) | G ∈ (4,5,24), a ∈ V(G), G[N_G(a)] ≅ K }.

We will call (G,a) a pointed graph of type K. Our proof of Theorem <ref> consists of the following steps.

Step 1: We completed the list of graphs in (4,5,24) compiled by McKay and Radziszowski, thereby proving Theorem <ref>. This was done by a straightforward (but computationally expensive) extension of the method in <cit.>. While that calculation would have taken too long in 1995, it was doable in 2016.

Step 2: For each K ∈ (3,5,d) with d ≤ 11 and for each pair (G,a), (H,b) ∈ (4,5,24,K), we used a computer program to calculate all ways of gluing G and H along K. Note that this consisted of one gluing problem for each automorphism of K.

Step 3: For each graph generated in Step 2, we used another program which attempts in all possible ways to add one vertex while staying within (5,5). Since this was never possible, none of the graphs generated in Step 2 are subgraphs of a graph in (5,5,48).

Execution of Steps 1–3 is sufficient to prove Theorem <ref>. Suppose F ∈ (5,5,48). We first prove that either F or its complement has a vertex of degree 24 adjacent to at least 12 other vertices of degree 24. Suppose that F is a counterexample to this claim, and let W ⊆ V(F) be its vertices of degree 24. Since F[W] has maximum degree 11, there are at least e_1 = 13|W| edges between W and V(F) ∖ W in F. Similarly, there are at least e_2 = 13(48-|W|) edges between W and V(F) ∖ W in F̅. However, this is impossible since e_1 + e_2 = 13 × 48 = 624 and |W|(48-|W|) ≤ 24^2 = 576.

So let b be a vertex of F of degree 24 that is adjacent to at least 12 other vertices of degree 24 and define G = F[N_F(b)]. From the (4,5,24) catalogue we find that G has at most 8 vertices of degree more than 11, so we can choose a ∈ N_F(b) that has degree 24 in F and degree at most 11 in G. Define H = F[N_F(a)]. Then the gluing of (G,a) and (H,b) in Step 2 will find a subgraph of F, and the failure of one-point extension in Step 3 will show that F doesn't exist.

§ STEP 1: COMPLETING THE LIST OF GRAPHS IN (4,5,24)

McKay and Radziszowski <cit.> produced a list of 350,904 such graphs, and proved that the list contains all graphs in (4,5,24) whose minimum degree is 6, 7 or 8, whose maximum degree is 12 or 13, or which are regular of degree 11. To complete the catalogue it suffices to find those graphs with minimum degree 9 or 10. We did this using the well-tested code from <cit.> to glue together graphs of types (3,5,9) and (4,4,14), and of types (3,5,10) and (4,4,13). Although this requires a very large number of graph pairs to be glued, it is feasible when the graphs of type (3,5,9) and (3,5,10) are arranged in a tree structure that exhibits common subgraphs and symmetries. See <cit.> for details. All graphs in (4,5,24) with a vertex of degree 9 or 10 were found, to increase the overlap with <cit.> for checking purposes. This took about 1.5 core-years of computer time and discovered 1462 new graphs in (4,5,24); recall that the search in <cit.> was not intended to be complete. Then we devoted another 6 core-months to sanity-checking of the completed catalogue. As an example, let 𝒩' be the set of all neighbourhoods of a vertex of degree 9 or 10 in the 1462 new graphs, and let 𝒞' be the set of all complementary neighbourhoods of the same vertices in those graphs.
Then, using a completely separate program, we constructed all graphs in (4,5,24) with a vertex having a neighbourhood in 𝒩' and a complementary neighbourhood in 𝒞'. Only known graphs appeared. We also proved, with a separate computation, that if there are any graphs in (4,5,24) but not in the catalogue, they do not share any 21-vertex subgraph with a graph in the catalogue. Summary statistics of the catalogue, to complete <cit.>, are provided in Table <ref>; e is the number of edges, i_k is the number of independent sets of size k, c_3 is the number of triangles, and δ, Δ are the minimum and maximum degrees. The graphs themselves are available at <cit.>.

§ THE STRUCTURE OF (4,5,24,K)

The neighbourhood of a vertex a of degree d in a pointed graph (G,a) ∈ (4,5,24,K) is the graph K ∈ (3,5,d). However, not all graphs in (3,5) appear in pointed graphs. In Table <ref>, we show the number of graphs K which occur at least once and the total number of pointed graphs for each d. Note that we have not used the automorphism group of G, so some of the pointed graphs are isomorphic. The great majority of graphs in (4,5,24) have trivial automorphism group, so we gave up the small available speedup (estimated at 3%) in order to have fewer steps in the computation. The total of 8,456,784 in the table is 24 × |(4,5,24)|. The number of pointed graphs in (4,5,24,K) for K ∈ (3,5,≤11) varies greatly: from 0 to 526,073, the latter from a rather irregular graph of order 11 with 21 edges.

For Step 2 we take two pointed graphs (G,a), (H,b) ∈ (4,5,24,K) and overlap them so that their common subgraph K coincides. This can be done in one distinct way for each automorphism of K (again ignoring some small reductions arising from automorphisms of G and H). Most graphs K have only trivial automorphisms, but some have large automorphism groups, the largest having order 1152 (for a vertex-transitive quartic graph of order 8). Taking the wildly varying sizes of (4,5,24,K) as well as the automorphism groups of the various K into account, we needed to solve approximately 2 trillion gluing problems. While that is certainly a lot, we were able to perform hundreds of thousands of such gluings per second per core. The whole calculation took approximately six core-months for one implementation and two core-months for the other.

§ STEP 2. FINDING ALL WAYS TO GLUE

In order to ensure correctness, the list of pointed graphs was prepared independently by the two authors, and all the gluings were performed by two programs written independently using different methods. The decision to use two different methods, rather than identifying the fastest method and implementing it twice, was based on the long-established axiom of software engineering that different programmers tend to make the same errors when faced with the same task.

Now we will describe the two different methods for gluing (G,a), (H,b) ∈ (4,5,24,K) after they are overlapped at the common subgraph K. Because of the large number of calculations needed, the naive approach of deciding one unknown adjacency at a time takes far too long. Define d' = 23-d. Suppose K has vertices v_0,…,v_d-1, G has vertices v_0,…,v_d-1, a, a_1,…,a_d' and H has vertices v_0,…,v_d-1, b, b_1,…,b_d'. Note that the vertices a and b cannot participate in any 5-cliques or independent 5-sets by the construction. To specify a gluing it suffices to specify whether or not a_i and b_j are connected by an edge for 1 ≤ i, j ≤ d'.
We will record this data in a d' × d' matrix M = (m_ij) with entries 0 (for no edge) and 1 (for edge). Define a potential (r,s,t)-clique to be r vertices w_1,…,w_r in K, s vertices x_1,…,x_s in V(G)-V(K)-{a}, and t vertices y_1,…,y_t in V(H)-V(K)-{b} such that {w_1,…,w_r,x_1,…,x_s} is an (r+s)-clique in G and {w_1,…,w_r,y_1,…,y_t} is an (r+t)-clique in H. Define a potential independent (r,s,t)-set similarly. The following lemma is immediate.

Lemma. A d' × d' 0-1 matrix M = (m_ij) defines a gluing if and only if:
* For each potential (r,s,t)-clique with r+s+t = 5, m_x_iy_j = 0 for some 1 ≤ i ≤ s, 1 ≤ j ≤ t. (This is needed for (1,2,2), (0,2,3) and (0,3,2).)
* For each potential independent (r,s,t)-set with r+s+t = 5, m_x_iy_j = 1 for some 1 ≤ i ≤ s, 1 ≤ j ≤ t. (This is needed for (3,1,1), (2,1,2), (2,2,1), (1,1,3), (1,2,2), (1,3,1), (0,2,3) and (0,3,2).)

Please refer to Figure <ref> and consider a set W of size 5. For W to be a clique in the completed graph, it must overlap both K ∪ A and K ∪ B, and the pairs of vertices in each of those intersections must be edges. That implies it is one of the potential (r,s,t)-cliques listed in part (1), and to prevent W from being a clique in the completed graph we need to include a non-edge. The case of an independent set is similar.

The two gluing methods are logically similar but implemented very differently. The first gluing method expands on the method in <cit.>. Define an interval to be a set of the form I = {X | B ⊆ X ⊆ T}, where B and T are subsets of {a_1,…,a_d'} × {b_1,…,b_d'}. We write I = [B,T]. We represent I by two d' × d' matrices with coefficients in {0,1}. Given an interval [B,T], we define collapsing rules as follows. There are 11 in total, one for each of the triples in Lemma <ref> above. The special event FAIL means that there is no X ∈ [B,T] which corresponds to a proper gluing.

Rule K_1,2,2. Suppose {w_1, x_1, x_2, y_1, y_2} is a potential (1,2,2)-clique. If three of the four pairs (x_i, y_j) lie in B, then the fourth must be removed from T; if all four lie in B, the result is FAIL. The collapsing rules for K_0,2,3 and K_0,3,2 are similar. In each case, the rule says that if 5 vertices include 9 edges, then the remaining vertex pair must not be an edge.

Rule E_3,1,1. Suppose {w_1, w_2, w_3, x_1, y_1} is a potential independent (3,1,1)-set. Then the pair (x_1, y_1) must be an edge, so it is added to B; if it is not in T, the result is FAIL. The collapsing rules for the other potential independent sets from Lemma <ref> are once again similar.

We start the search with a single interval I = [B,T] with B = ∅ and T = {a_1,…,a_d'} × {b_1,…,b_d'}, and we note that the collapsing rule E_3,1,1 can be applied even in this case. Each time we add an edge to B or remove an edge from T, the number of possible gluings is cut in half. After applying these collapsing rules repeatedly, we must eventually encounter either FAIL or a stable situation. The discussion in <cit.> applies, and the final state is independent of the order of the application of the collapsing rules. If we do not encounter FAIL, we pick some (a_i,b_j) with (a_i,b_j) ∉ B and (a_i,b_j) ∈ T, and consider the cases I = [B, T-(a_i,b_j)] and I = [B ∪ (a_i,b_j), T] separately.

The second method applies an equivalent procedure using data structures familiar from the constraint satisfaction area. Each entry m_ij of M is a variable, with value FALSE, TRUE or UNKNOWN, while each set {x_1,…,x_s} × {y_1,…,y_t} is a clause. Clauses from potential (r,s,t)-cliques can't have all their variables TRUE, while clauses from potential independent (r,s,t)-sets can't have all their variables FALSE. Each variable α has a list of the clique clauses which contain α, and a list of the independent set clauses which contain α.
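As a concrete illustration of the Lemma and of this clause representation, the following Python sketch (our own, not the authors' program) checks the two Lemma conditions for a completed 0-1 matrix M. Here clique_clauses and indep_clauses are assumed to be precomputed lists, each clause being the set of index pairs (i,j) of the unknown entries m_ij constrained by one potential clique or independent set; the brute-force enumerator is feasible only for very small d', which is precisely why the interval and propagation methods are needed.

```python
# Sketch of the Lemma check; clause lists are assumed precomputed from
# the potential cliques / independent sets of the pointed graphs.
import itertools

def defines_gluing(M, clique_clauses, indep_clauses):
    """Check the two Lemma conditions for a complete 0-1 matrix M."""
    # (1) no potential clique may have all its unknown pairs equal to 1
    for clause in clique_clauses:
        if all(M[i][j] == 1 for (i, j) in clause):
            return False
    # (2) no potential independent set may have all its pairs equal to 0
    for clause in indep_clauses:
        if all(M[i][j] == 0 for (i, j) in clause):
            return False
    return True

def enumerate_gluings(dp, clique_clauses, indep_clauses):
    """Naive enumeration over all 2^(d'^2) matrices (tiny d' only);
    the interval/propagation methods in the text replace this."""
    pairs = [(i, j) for i in range(dp) for j in range(dp)]
    for bits in itertools.product((0, 1), repeat=len(pairs)):
        M = [[0] * dp for _ in range(dp)]
        for (i, j), b in zip(pairs, bits):
            M[i][j] = b
        if defines_gluing(M, clique_clauses, indep_clauses):
            yield M
```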
There is also a stack S which maintains a set of distinct variables on a last-in first-out basis. Informally, at each moment S contains those variables which have been assigned FALSE or TRUE, but whose clause lists have not yet been scanned. Initially, variables are set to TRUE if required by independent (3,1,1)-set clauses, and UNKNOWN otherwise. The variables equal to TRUE are put onto S. Then we execute the following until it terminates: pop a variable α from S. If α is TRUE, scan the clique clauses containing α; for any such clause whose variables are all TRUE, exit FAIL, and for any such clause with exactly one UNKNOWN variable and all others TRUE, assign FALSE to that variable and push it onto S. If α is FALSE, scan the independent set clauses containing α symmetrically: for any such clause whose variables are all FALSE, exit FAIL, and for any such clause with exactly one UNKNOWN variable and all others FALSE, assign TRUE to that variable and push it onto S. When S becomes empty, the procedure terminates.

For good efficiency it is essential that variables be assigned values as they enter the stack and not when they leave it. Also, a good optimization is for clauses to remember how many UNKNOWN variables they have. If the algorithm terminates with “exit FAIL”, there is no solution. Otherwise, all the variables with value FALSE or TRUE have those values in all solutions. If there is any variable with value UNKNOWN, we can choose one such variable and try FALSE and TRUE separately with S initialised to that variable only. And so on, recursively.

Both methods were very fast for d ≥ 8, often performing 100,000 gluings per second per core, primarily because failure occurred early most of the time. For d ≤ 7, the methods as described could take much longer, since extremely large search trees with many useless branches could be generated. For those values of d we used additional techniques.

For the first method, two techniques were used. First, for each pair (a_i,b_j) ∈ T-B we applied the collapsing rules to both [B, T-(a_i,b_j)] and [B ∪ (a_i,b_j), T]. If for some pair (a_i,b_j) we arrived at FAIL in both cases, we then concluded that there were no gluings. If [B, T-(a_i,b_j)] led to FAIL then we replaced [B,T] by [B ∪ (a_i,b_j), T], and if [B ∪ (a_i,b_j), T] led to FAIL then we replaced [B,T] by [B, T-(a_i,b_j)]. This is of course more expensive than the original algorithm at each node of the search tree, but we found that for 6 ≤ d ≤ 7 it was worth it. Second, we ordered the pairs (a_i,b_j) according to how many independent sets of type (2,2,1) and (2,1,2) they were contained in, and started the binary search with a pair (a_i,b_j) which was maximal in this sense. The advantage is that when considering [B, T-(a_i,b_j)] the collapsing rules E_2,2,1 and E_2,1,2, which require only a single edge to be missing from T in order to modify B, come into play as much as possible.

For the second method, instead of choosing an arbitrary UNKNOWN variable to branch on, we used an UNKNOWN variable which occurred in the greatest number of clique clauses with all TRUE variables except two UNKNOWN variables, or independent set clauses with all FALSE variables except two UNKNOWN variables. This is a heuristic for how beneficial it is to assign FALSE or TRUE to the variable.

§ STEP 3. EMPIRICAL RESULTS

For 6 ≤ d ≤ 9, no gluings produced any output graphs, so Step 3 was unnecessary. For d=10 we found a total of 647,424 graphs (81,936 nonisomorphic) in (5,5,38), all of them from a single K ∈ (3,5,10). For d=11 we found a total of 15,244 graphs in (5,5,37), with 15,152 graphs (14,412 nonisomorphic) coming from one K ∈ (3,5,11) and 92 graphs (84 nonisomorphic) coming from another K. An example is shown in Figure <ref>.
None of these graphs could be extended by one more vertex while staying within (5,5), so Step 3 was completed successfully. By Step 2, we do not need gluings for d ≥ 12, which is fortunate since the number of successful gluings is around 57 billion for d=12 and perhaps even larger for d=13. This would make Step 3 very onerous. Of course, these considerations are the reason we sought to eliminate d ≥ 12 theoretically (Lemma <ref>).

We wish to acknowledge useful comments from Staszek Radziszowski.

[Ex89] Geoffrey Exoo. A lower bound for R(5,5). J. Graph Theory, 13(1):97–98, 1989.
[RamseyWeb] Brendan D. McKay. Ramsey Graphs. Web site at
[McRa95] Brendan D. McKay and Stanisław P. Radziszowski. R(4,5)=25. J. Graph Theory, 19(3):309–322, 1995.
[McRa97] Brendan D. McKay and Stanisław P. Radziszowski. Subgraph counting identities and Ramsey numbers. J. Combin. Theory Ser. B, 69(2):193–209, 1997.
[Sp94] Joel Spencer. Ten lectures on the probabilistic method, volume 64 of CBMS-NSF Regional Conference Series in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, second edition, 1994.
http://arxiv.org/abs/1703.08768v2
{ "authors": [ "Vigleik Angeltveit", "Brendan D. McKay" ], "categories": [ "math.CO", "05D10" ], "primary_category": "math.CO", "published": "20170326053113", "title": "$R(5,5) \\le 48$" }
http://arxiv.org/abs/1703.09173v1
{ "authors": [ "Y. A. Kharkov", "O. P. Sushkov", "M. Mostovoy" ], "categories": [ "cond-mat.str-el", "nlin.PS", "nucl-th", "quant-ph" ], "primary_category": "cond-mat.str-el", "published": "20170327163459", "title": "Bound states of skyrmions and merons near the Lifshitz point" }
firat.yilmaz@bilkent.edu.tr
Department of Physics, Bilkent University, 06800, Ankara, Turkey

The self-similar energy spectrum of a particle in a periodic potential under a magnetic field, known as the Hofstadter butterfly, is determined by the lattice geometry as well as the external field. Recent realizations of artificial gauge fields and adjustable optical lattices in cold atom experiments necessitate the consideration of these self-similar spectra for the most general two-dimensional lattice. In a previous work, we investigated the evolution of the spectrum for an experimentally realized lattice which was tuned by changing the unit cell structure while keeping the square Bravais lattice fixed. We now consider all possible Bravais lattices in two dimensions and investigate the structure of the Hofstadter butterfly as the lattice is deformed between lattices with different point symmetry groups. We model the optical lattice by a sinusoidal real space potential and obtain the tight binding model for any lattice geometry by calculating the Wannier functions. We introduce the magnetic field via the Peierls substitution and numerically calculate the energy spectrum. The transition between the two most symmetric lattices, i.e., the triangular and the square lattices, displays the importance of bipartite symmetry, featuring deformation as well as closing of some of the major energy gaps. The transitions from the square to the rectangular lattice and from the triangular to the centered rectangular lattice are analyzed in terms of the coupling of one-dimensional chains. We calculate the Chern numbers of the major gaps and the Chern number transfer between bands during the transitions. We use gap Chern numbers to identify distinct topological regions in the space of Bravais lattices.

Hofstadter Butterfly Evolution in the Space of Two-Dimensional Bravais Lattices

F. Yılmaz and M. Ö. Oktel

December 30, 2023

===============================================================================

§ INTRODUCTION

The Hofstadter butterfly is the self-similar energy spectrum of an electron moving in a periodic potential under a uniform magnetic field<cit.>. The self-similar behavior is due to the competition between two length scales, the lattice constant and the magnetic length. The energy spectrum and the topological properties of the system are determined solely by the magnetic field and the lattice geometry. The lattice geometry is in turn determined by the Bravais lattice and the structure of the unit cell. The observation of the self-similar spectrum requires a magnetic flux on the order of one flux quantum per unit cell. For typical solid state lattices, this requires very high magnetic fields, on the order of thousands of Tesla, which are not experimentally accessible. One solution to overcome this limitation is to increase the lattice constant, for example by stacking two layers to form a superlattice with a larger lattice constant<cit.>. A new and different approach is demonstrated in cold atom experiments utilizing artificial gauge fields<cit.>. Both of these approaches can create arbitrary lattices with desired lattice parameters. A striking demonstration of this flexibility is the creation of two-dimensional tunable optical lattices<cit.>.
Therefore, the study of the energy spectra under a magnetic field for arbitrary lattices, and of their dynamical properties, are relevant problems today.

In a previous work<cit.>, we calculated and examined the energy spectrum and the topological properties of the experimentally realized two-dimensional (2D) tunable optical lattices created by the Zurich group<cit.>. That lattice can be tuned from the square lattice to a honeycomb-like geometry. However, throughout this transition, the unit cell remains square and only the unit cell potential is varied. The honeycomb-like geometry, also referred to as the brick wall lattice, still has a square Bravais lattice, with a two-point basis. A natural follow-up question is to ask: “How does the energy spectrum evolve as the symmetries of the underlying Bravais lattice change?”

In this paper, we answer this question by calculating the energy spectrum for all two-dimensional Bravais lattices. We particularly investigate the transitions between lattices of distinct symmetry groups. To this end, we propose a real space sinusoidal potential which can create any Bravais lattice with a one-point basis. Such a potential can be created as an optical lattice which can be adjusted to generate all two-dimensional Bravais lattices. We consider this potential in the deep lattice limit to describe the system by a tight binding (TB) model. We calculate the TB parameters both by fitting momentum-space bands to the numerical solution of the Schrödinger equation and by calculating the Wannier functions (WFs).

Once the zero-field TB Hamiltonian is constructed, the effect of the magnetic field is introduced by the Peierls substitution<cit.>. This method modifies the hopping amplitude with a complex phase, with the constraint that the sum of the hopping phases over any closed loop on the lattice is proportional to the total magnetic flux through the loop. While this substitution only works for lattices in the TB limit, it is actually a more faithful description of the recent cold atom lattice experiments, where the artificial magnetic field is generated by creating these phases by modifying the tunneling between neighboring sites<cit.>. Because the phases are written on the links between the sites, it is easier to envision changing the lattice geometry while keeping the magnetic flux per plaquette of the lattice constant.

We first reduce the magnetic TB Hamiltonian to a q-by-q matrix by using translation symmetries under a magnetic flux per plaquette which we take to be p/q times the magnetic flux quantum. We restrict p and q to be co-prime integers. We numerically diagonalize this matrix for each k-point in the magnetic Brillouin zone to determine band edges. Repeating this calculation for different Bravais lattice parameters enables us to investigate the evolution of the energy spectrum. We find that the geometry of the Bravais lattice plays a critical role in determining the energy spectrum. New symmetries can emerge according to the point group of the lattice. In two dimensions, Bravais lattices can be classified into five distinct classes by their point group. The triangular and the square lattice are the most symmetric 2D lattices. A reduction of one mirror symmetry produces the centered rectangular and rectangular lattices from them. The oblique lattice is the most general form for 2D Bravais lattices. We parametrize the space of all these lattices and observe the evolution of the Hofstadter butterfly during the transitions between them.
The most striking evolution is between the square lattice and the triangular lattice. One of the main diagonal gaps in the square lattice energy spectrum shrinks, and after infinitely many gap closures and openings, the triangular lattice Hofstadter butterfly emerges. These gap closings and re-openings through the evolution are necessary to connect gaps with different Chern numbers in the two limits. We find that even a small departure from the high symmetry points, such as the triangular lattice, causes drastic changes in the spectrum, resulting in sudden energy gap openings and closures. The square lattice's bipartite symmetry causes the reflection symmetry along the E=0 line in the Hofstadter butterfly. This symmetry is swiftly broken during the evolution and completely fades away in the triangular lattice limit. For the triangular lattice, the area of the primitive cell, and consequently the flux per plaquette, is reduced by one half compared to the oblique lattice. Hence, the periodicity of the energy spectrum as a function of the flux is doubled. The transition between the square lattice butterfly and the triangular lattice butterfly has previously been studied by Hatsugai and Kohmoto<cit.> by considering a square lattice with NNN hopping. While the results obtained directly starting from the TB model are valuable, it is not straightforward to link the TB parameters with the lattice geometry and the real space potential. In this work, we generate the TB parameters starting from a realistic optical lattice potential and relate the lattice geometry directly to the energy spectrum. For the square to triangular lattice transition, our results are in agreement with Ref. <cit.>.

The square to rectangular lattice transition demonstrates how the self-similar spectrum emerges from a continuous band. For a rectangular lattice with extreme anisotropy, the hopping amplitude in one direction is suppressed and the system is equivalent to a collection of one-dimensional chains. In one dimension, an external magnetic field can be gauged away and has no effect on the spectrum. As the anisotropy of the rectangular lattice is reduced, the isolated one-dimensional chains become weakly connected. Treating this connection perturbatively, we observe how the self-similar energy gaps are formed. A similar transition takes place between the triangular lattice and the centered rectangular lattices. We also analyze the oblique and the centered rectangular to the triangular lattice transitions.

Finally, we investigate the Chern numbers of the main gaps during the lattice evolution. We find that Chern numbers are always transferred by a multiple of q when two bands connect and split. We argue that this is due to the q-fold degeneracy in momentum space causing q Dirac cones to emerge simultaneously. We also investigate the behavior of Chern numbers on a closed path in the space of two-dimensional Bravais lattices.

This article is organized as follows: Section II introduces the real space potential and constructs a proper TB model description under a magnetic field. The next section explains the diagonalization of the magnetic TB Hamiltonian and the calculation of the corresponding energy spectra. In Section IV, all possible Bravais lattice transitions and the corresponding Hofstadter butterflies are discussed in detail. In Section V, the Bravais lattices are characterized by their Chern numbers, and the parameter space of lattices is classified by the set of Chern numbers for each magnetic flux per plaquette.
In the conclusion, we give an overview of our results and discuss the experimental implications.

§ THE MODEL

In this section, we propose a lattice potential which can be adjusted to form any Bravais lattice in two dimensions. We reduce this potential to a TB model in the deep potential limit. The effect of a magnetic field is introduced by the Peierls substitution<cit.> in the next section. A 2D Bravais lattice is described by its primitive vectors<cit.>,

a⃗_1 = λ_1 x̂, a⃗_2 = λ_2 (cosθ x̂ + sinθ ŷ),

where θ is the angle between the primitive lattice vectors and the lattice constants are λ_1 and λ_2. Without loss of generality, we choose a⃗_1 to be along the x̂ direction. The Bravais lattice points are at positions,

R⃗_n_1,n_2 = n_1 a⃗_1 + n_2 a⃗_2 = (λ_1 n_1 + λ_2 n_2 cosθ) x̂ + (λ_2 n_2 sinθ) ŷ, n_1, n_2 ∈ ℤ.

A real space potential V(x,y) which is capable of generating all possible two-dimensional Bravais lattices with a minimum number of Fourier components is,

V(x,y) = -V_X cos(k⃗_1·x⃗) - V_Y cos(k⃗_2·x⃗) - 2cosθ √(V_X V_Y) cos((k⃗_1 - k⃗_2)·x⃗).

Sinusoidal potentials are routinely produced in cold atom experiments by retro-reflected lasers. Two-dimensional optical lattices are generated by using at least two beams. In principle, the above potential can be produced by at most three laser beams, but if |k_1| = |k_2|, the last term in the potential results from the interference of the first two. The laser wave vectors are k⃗_1 = 2π/(λ_1 sinθ) (0,1), k⃗_2 = 2π/(λ_2 sinθ) (-sinθ, cosθ). Up to a scale transformation, all two-dimensional Bravais lattices can be formed as |λ_1/λ_2| and θ are varied. The Schrödinger equation for the above potential is,

[ -ħ^2/2m (∂^2_x + ∂^2_y) + V(x,y) ] ψ(x,y) = E ψ(x,y).

We do not introduce the magnetic field into the continuum equation, as the translation symmetry group under a magnetic field is more complicated than the usual crystal symmetries<cit.>. Instead, we first project the continuum problem onto the lowest band in k-space, forming a TB description. Such a description is not only accurate in the deep lattice limit but also easily adapted to include the external magnetic field by the Peierls substitution. The TB parameters can be obtained by two methods. The first method is to calculate them by fitting the energy band obtained from the TB model to the lowest energy band of the numerical solution of the continuum problem. In the second method, WFs for the lowest energy band are constructed and used to project the continuum problem onto the TB model. If the WFs are obtained from the numerical solution of the continuum problem by direct Fourier transformation, these methods are equivalent. However, we use an alternative definition of WFs<cit.> which facilitates the direct construction of these functions from a finite system. We use both methods for all our lattices and find that the methods are in good agreement in the deep lattice limit.

The usual definition of WFs has a phase ambiguity. This gauge freedom is usually fixed by requiring maximum localization, which increases the computational burden<cit.>. An alternative definition of WFs was given by Kivelson<cit.>, based on projections onto the single-band Hilbert space.
The projected position operators for the n^th band are defined as,

x̂_n = P̂_n x̂ P̂_n, ŷ_n = P̂_n ŷ P̂_n, where P̂_n = ∑_k^BZ |n,k⟩⟨n,k|.

The eigenstates of x̂_n, ŷ_n are the WFs and the eigenvalues are the corresponding Wannier centers,

x̂_n |W_n(r⃗-R⃗)⟩ = R⃗ |W_n(r⃗-R⃗)⟩.

This definition reduces to the usual Fourier-transform WF definition, but is equally applicable to finite or disordered systems. Thus, we use this definition on a finite system to generate WFs and calculate the TB parameters. We use a finite system with four unit cells along each primitive vector. The projection operator for the lowest band is formed by the first sixteen nearly degenerate eigenstates. Instead of separately diagonalizing x̂_n, ŷ_n one after another, we diagonalize a linear combination, say Ô_n = x̂_n + α ŷ_n with arbitrary α, and obtain the corresponding eigenstates. Surprisingly, even a 4-by-4 finite lattice is large enough to capture the infinite-lattice hopping parameters to within one percent error.

During the evolution of the lattice, the number of nearest neighbors (NN) and the distances between NNs and next nearest neighbors (NNN) change. In order to capture the physics of the transition, we calculate the TB parameters for the eight neighbors shown in Fig.<ref>. Due to inversion symmetry, only four distinct parameters are required. The TB Hamiltonian for this system is,

Ĥ = - ∑_m_1,m_2 [ t_0 |m_1+1,m_2⟩⟨m_1,m_2| + t_1 |m_1,m_2+1⟩⟨m_1,m_2| + t_2 |m_1-1,m_2+1⟩⟨m_1,m_2| + t_3 |m_1+1,m_2+1⟩⟨m_1,m_2| + h.c. ].

The Schrödinger equation Ĥ|Ψ⟩ = E|Ψ⟩ can be expressed in the localized basis |Ψ⟩ = ∑_n_1,n_2 ψ_n_1,n_2 |n_1,n_2⟩. Then the Hamiltonian is diagonalized by a discrete Fourier transform, yielding the eigenvalues,

E(k_1,k_2) = -2 t_0 cos(k_1) - 2 t_1 cos(k_2) - 2 t_2 cos(k_1-k_2) - 2 t_3 cos(k_1+k_2).

The dimensionless wavenumbers are k_1 = k⃗·a⃗_1 and k_2 = k⃗·a⃗_2. This function reduces to the TB bands for the square and the triangular lattice with the corresponding NN and NNN hopping parameters.

We introduce the magnetic field using the Peierls substitution

t_𝐦,𝐧 |R⃗_𝐧⟩⟨R⃗_𝐦| → e^i Θ_𝐦,𝐧 t_𝐦,𝐧 |R⃗_𝐧⟩⟨R⃗_𝐦|,

where 𝐦 = (m_1,m_2) and 𝐧 = (n_1,n_2). We choose the magnetic vector potential in the Landau gauge along the a⃗_1 direction, A⃗ = B y x̂. The hopping phases are calculated as,

Θ_𝐦,𝐧 = -e/ħ ∫_R⃗_𝐦^R⃗_𝐧 A⃗·dℓ⃗ = -2πϕ ([R⃗_𝐧-R⃗_𝐦]·x̂) ([R⃗_𝐧+R⃗_𝐦]/2 ·ŷ),

where ϕ = Bλ_1λ_2 sinθ/ϕ_0 is the magnetic flux per plaquette normalized to the flux quantum ϕ_0 = h/e. This method faithfully describes a uniform magnetic field as long as the zero-field description of the TB model holds<cit.>. With the Peierls substitution, the TB Hamiltonian under a magnetic field becomes,

Ĥ = - ∑_m_1,m_2 [ t_0 e^-i2πϕ m_2 |m_1+1,m_2⟩⟨m_1,m_2| + t_1 e^-i2πϕcosθ(m_2+1/2) |m_1,m_2+1⟩⟨m_1,m_2| + t_2 e^i2πϕ(1-cosθ)(m_2+1/2) |m_1-1,m_2+1⟩⟨m_1,m_2| + t_3 e^-i2πϕ(1+cosθ)(m_2+1/2) |m_1+1,m_2+1⟩⟨m_1,m_2| + h.c. ].

The magnetic field enters the TB Hamiltonian only through the parameter ϕ, which is the magnetic flux through a primitive unit cell of the lattice. As the lattice geometry evolves, the area of the primitive unit cell will in general change. Therefore, it is possible to investigate the evolution under two different constraints. The first is to envision a uniform magnetic field acting on the system, so that ϕ changes with the lattice geometry. The second approach is to take ϕ to be constant during the evolution.
We use the second approach, as it is more relevant to cold atom experiments, where the artificial magnetic flux is generated by modifying the hopping parameters and is not affected by the unit cell area. In the next section, we calculate the energy spectrum under a constant ϕ as the geometry evolves.

§ CALCULATION OF THE ENERGY SPECTRUM

The magnetic Hamiltonian in Eq.<ref> acting on the state |Ψ⟩ = ∑_n_1,n_2 ψ_n_1,n_2 |n_1,n_2⟩ yields the following difference equation,

Eψ_m_1,m_2 = -t_0 (e^-i2πϕ m_2 ψ_m_1-1,m_2 + e^i2πϕ m_2 ψ_m_1+1,m_2) - t_1 (e^-i2πϕcosθ(m_2-1/2) ψ_m_1,m_2-1 + e^i2πϕcosθ(m_2+1/2) ψ_m_1,m_2+1) - t_2 (e^i2πϕ(1-cosθ)(m_2-1/2) ψ_m_1+1,m_2-1 + e^-i2πϕ(1-cosθ)(m_2+1/2) ψ_m_1-1,m_2+1) - t_3 (e^-i2πϕ(1+cosθ)(m_2-1/2) ψ_m_1-1,m_2-1 + e^i2πϕ(1+cosθ)(m_2+1/2) ψ_m_1+1,m_2+1).

The Landau gauge A⃗ is parallel to a⃗_1, and the Hamiltonian consequently preserves the zero-field discrete translational symmetry in this direction. We choose a superposition of plane waves as the mutual eigenstates of the Hamiltonian and the discrete translation operator along a⃗_1,

ψ_m_1,m_2(k_1,k_2) = e^i k_1 m_1 g_m_2(k_1,k_2).

With this choice, we obtain a one-dimensional difference equation for g_m_2. This equation is periodic only for rational values of ϕ. Assuming ϕ = p/q, where p and q are mutually prime integers, the one-dimensional equation becomes,

Eg_m_2 = -t_0 (e^-i2πp/q m_2 e^-ik_1 g_m_2 + e^i2πp/q m_2 e^ik_1 g_m_2) - t_1 (e^-i2πp/q cosθ(m_2-1/2) g_m_2-1 + e^i2πp/q cosθ(m_2+1/2) g_m_2+1) - t_2 (e^i2πp/q(1-cosθ)(m_2-1/2) e^ik_1 g_m_2-1 + e^-i2πp/q(1-cosθ)(m_2+1/2) e^-ik_1 g_m_2+1) - t_3 (e^-i2πp/q(1+cosθ)(m_2-1/2) e^-ik_1 g_m_2-1 + e^i2πp/q(1+cosθ)(m_2+1/2) e^ik_1 g_m_2+1).

However, the periodicity of this difference equation with period q is not obvious. Following Rammal's approach<cit.>, we apply the following unitary transformation,

g_m_2 = e^-iπp/q cosθ m_2^2 f_m_2.

The periodicity of the resulting equation allows the use of Bloch's theorem. Hence, the diagonalization is reduced to that of a q-by-q matrix,

E f_m_2 = -t_0 (2 cos(2πp/q m_2 + k_1)) f_m_2 - t_1 (f_m_2-1 + f_m_2+1) - t_2 (e^i2πp/q(m_2-1/2) e^ik_1 f_m_2-1 + e^-i2πp/q(m_2+1/2) e^-ik_1 f_m_2+1) - t_3 (e^-i2πp/q(m_2-1/2) e^-ik_1 f_m_2-1 + e^i2πp/q(m_2+1/2) e^ik_1 f_m_2+1), with f_m_2+q = e^i k_2 q f_m_2.

The calculation of the energies for all momenta (k_1,k_2) within the magnetic BZ is computationally laborious. Instead of sampling the whole BZ, it is possible to identify the special k-points at which the energy values are extremal. Using symmetry to calculate these k-points makes it possible to obtain the band edges<cit.> for each ϕ = p/q value by a few diagonalizations of a q-by-q matrix.

The reduced difference equation is solved numerically, and we obtain the energy spectra for any lattice geometry. In the next section, we discuss these eigenvalue spectra and analyze the lattice transitions among the five Bravais lattices.

§ TRANSITIONS BETWEEN BRAVAIS LATTICES

In this section, we analyze the changes in the butterfly spectra calculated in the previous section as the lattice geometry adiabatically evolves between Bravais lattices of different symmetry. As discussed in the model section, we characterize all the Bravais lattices by two parameters, θ and |a⃗_2|/|a⃗_1|. In Fig.<ref>, we show the five lattices of distinct symmetry groups and the transition paths between them in the space of these two parameters.
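For readers who wish to reproduce such spectra, a minimal numerical sketch of the diagonalization described in the previous section is given below (our own illustration; the hopping values are placeholders, not the fitted parameters quoted in this paper). It builds the q-by-q Bloch matrix of the reduced difference equation, with the Bloch phase e^{ik_2 q} attached to the wrap-around coupling, and collects eigenvalues over rational fluxes.

```python
# Minimal sketch: eigenvalues of the reduced q-by-q difference equation.
# Hopping amplitudes t0..t3 are illustrative placeholders.
import numpy as np

def bloch_matrix(p, q, k1, k2, t0, t1, t2, t3):
    """q-by-q Hermitian matrix of the reduced difference equation."""
    phi = p / q
    H = np.zeros((q, q), dtype=complex)
    for m in range(q):
        H[m, m] = -2.0 * t0 * np.cos(2 * np.pi * phi * m + k1)
        # coefficient multiplying f_{m+1} in the equation for f_m
        c = -(t1
              + t2 * np.exp(-2j * np.pi * phi * (m + 0.5) - 1j * k1)
              + t3 * np.exp(+2j * np.pi * phi * (m + 0.5) + 1j * k1))
        n = (m + 1) % q
        bloch = np.exp(1j * k2 * q) if m == q - 1 else 1.0  # f_{m+q} = e^{i k2 q} f_m
        H[m, n] += c * bloch
        H[n, m] += np.conj(c * bloch)   # Hermitian conjugate coupling
    return H

def spectrum(qmax, t0=1.0, t1=1.0, t2=0.0, t3=0.0, nk=4):
    """(flux, energy) points of the butterfly on a coarse k-grid."""
    pts = []
    for q in range(1, qmax + 1):
        for p in range(1, q + 1):
            if np.gcd(p, q) != 1:
                continue
            for k1 in np.linspace(0, 2 * np.pi, nk, endpoint=False):
                for k2 in np.linspace(0, 2 * np.pi / q, nk, endpoint=False):
                    for E in np.linalg.eigvalsh(
                            bloch_matrix(p, q, k1, k2, t0, t1, t2, t3)):
                        pts.append((p / q, E))
    return pts
```

With t2 = t3 = 0 this reduces to the standard square-lattice Harper problem; sampling only a few extremal k-points, as discussed above, suffices to extract the band edges.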
There are five Bravais lattices in two dimensions: the square, the triangular, the rectangular, the centered rectangular and the oblique lattices. The lattices with the higher symmetry occupy regions of smaller measure. In our representation, the whole area is covered by the oblique lattice, which has the least amount of symmetry. The rectangular and the centered rectangular lattices correspond to one-dimensional curves, while the most symmetric triangular and square lattices are confined to isolated points.

We start by considering the transition between the two most symmetric lattices, the square and the triangular lattices. In our parameter space, Fig.<ref>, the path BA indicates this transition. The energy spectra at four representative points along that path are displayed in Fig.<ref>. During the square to triangular lattice transition, the magnitudes of the hopping elements are calculated over this path. For a real space potential with V_X = V_Y = 50 E_R resembling the square lattice, the hopping parameters are t_0 = t_1 = 0.1913 and t_2, t_3 ≊ 0. The four NN hopping amplitudes are equal, while the four NNN hoppings almost vanish. This is an expected result in the deep lattice limit, as the WFs are localized to within a unit cell and the NNN transitions are strongly suppressed. The interference term in the potential in Eq.<ref> increases towards the triangular lattice. During the evolution, t_0 and t_1 decrease with the same magnitude until they are equal to t_2 = 0.0345 in the triangular lattice limit, see Fig.<ref>. It is natural that the NN elements are almost constant, because only the angles between the local minima of two adjacent sites change.

The square lattice butterfly has bipartite symmetry when NNN hopping is neglected. Even when this symmetry is broken by weak NNN hopping, the spectrum approximately preserves the reflection symmetry along the E=0 line. This symmetry is broken more strongly when the system moves towards the triangular lattice. The amount of the asymmetry is maximum at ϕ = 1/2, where it is proportional to |t_0-t_2|. The asymmetry of the energies results in the shrinking of the energy gaps at the upper right and the lower left main gaps of the butterfly. Meanwhile, the lower right and the upper left wings enlarge to form the main skeleton of the triangular butterfly. This lattice transition is non-trivial, as infinitely many gaps close and reopen during the evolution. This is expected, as the square lattice and the triangular lattice are adiabatically disconnected<cit.>. We observe that the energy spectrum depends sensitively on the parameters near the high symmetry points. Even small deviations of the lattice from these points result in large shifts of energy bands as well as gap closures. Hence, the triangular and the square lattice butterflies are representative of only small regions in the parameter space. This point is especially important for the design of experimental lattices.

As discussed in the model section, we keep the flux per unit cell ϕ constant during the transition. The flux per plaquette drops to ϕ/2 at the triangular lattice limit. This flux halving was first discussed by Claro and Wannier<cit.>. Consequently, the triangular lattice butterfly is periodic with period ϕ = 2, as seen in Fig.<ref>. The energy spectrum is symmetric under the ϕ → -ϕ operation for all the Bravais lattices. This symmetry is easy to understand for all cases except the oblique geometry.
If the lattice geometry does not distinguish between the ±z directions, the resulting Hamiltonian is invariant under the reversal of the flux. For the oblique lattice, although the ±z directions can be uniquely defined, the lattice still has inversion symmetry which, combined with a reflection in the lattice plane, restores the ϕ → -ϕ symmetry, as seen in Fig.<ref>; it is therefore valid for all Bravais lattices. The inversion symmetry can only be broken when there is a difference in the on-site energies of NNs.

The next transition we consider is between the square lattice and the rectangular lattice. Although this transition happens as soon as the four-fold symmetry is broken, it is instructive to study the evolution from the square limit to the extremely anisotropic case. If tunneling in one direction is much weaker than in the other, the system is a collection of one-dimensional chains. During this transition the square lattice butterfly energy gaps disappear and the spectrum becomes a one-dimensional TB band. It is not surprising that for a collection of 1D chains the spectrum is independent of the magnetic field. Fig.<ref> shows the evolution of the energy spectrum up to the rectangular lattice with extreme asymmetry (for |a⃗_1/a⃗_2| = 2.5 and the hopping amplitudes t_0 = 0.19 E_R, t_1 = 0.04 E_R). The energy spectra are mainly divided into three bands, and the corresponding two main energy gaps are preserved even for large |a⃗_1/a⃗_2| values.

The robustness of the major gaps can be understood by a perturbative approach. We consider the Hamiltonian of independent 1D chains as the unperturbed system and the hopping between the chains as the perturbation. The dispersion of the m^th unperturbed chain is,

E_m(k_1) = -2 t_0 cos(2πϕ m + k_1).

The energy bands of adjacent chains are degenerate at two k-points and separated by 2πp/q for ϕ = p/q. The tunneling amplitude between the adjacent chains, t_1, lifts the degeneracy at these k-points, and the new energies become ±2 t_0 cos(πp/q + k_1) ± t_1. Therefore, t_1 is the measure of the energy splitting for the main gaps of the butterfly, as in Fig.<ref>. The perturbative approach can also generate the minor gaps, and the Hofstadter butterfly emerges when higher orders are included<cit.>.

We conclude this section with the discussion of the transitions from the triangular lattice to the centered rectangular and to the highly anisotropic oblique lattice. These two transitions are represented in Fig.<ref> by the paths AG and AE. In both cases, the evolution of the energy spectrum is similar to the transition from the square to the rectangular lattice. Starting from the triangular lattice butterfly, the smaller gaps close first and then the larger gaps shrink to form an energy spectrum of weakly coupled one-dimensional chains. Due to the lack of bipartite symmetry, only one of the major gaps is robust, unlike in the square to rectangular transition. The asymmetry between the two diagonal gaps can be used as a measure of the closeness of the system to bipartite symmetry.

All possible energy spectra for the Bravais lattice parameter space are depicted in Fig.<ref>. In our TB model, the smallest plaquette area that can be enclosed by hopping is half of the unit cell. Therefore, all the energy spectra we calculate are periodic with period ϕ = 2. However, in the cases where the NNN hoppings (t_2, t_3) are negligible, the smallest enclosed area is the unit cell and the spectra are periodic with ϕ = 1.
In general, the periodicity of the spectrum with ϕ can be used to determine how long the range of significant hopping is. In the centered rectangular and the oblique lattice limits, the energy spectra preserve the periodicity of ϕ = 2 magnetic flux. The smallest cells in each case are one half of the unit cells, and therefore the enclosed fluxes are halved as well.

§ TOPOLOGICAL CHARACTERIZATION OF THE BRAVAIS LATTICE PHASE SPACE

The energy bands of lattices under a magnetic field are characterized by a topological invariant, the first Chern number<cit.>. The contribution of a band to the Hall conductivity for a non-interacting fermionic system is given by the Chern number, σ_xy = 𝒞 e^2/h. Alternatively, Chern numbers can be associated with gaps by summing the Chern numbers of all the bands below a certain gap. In this section, we use the Chern numbers to characterize the energy spectrum as the geometry of the lattice changes.

For a lattice with flux ϕ = p/q, the energy spectrum will have q bands and q-1 gaps. While the number of the gaps is determined only by the flux, the Chern numbers associated with these gaps change with the lattice geometry. For example, for ϕ = 1/3, the square lattice gaps have Chern numbers 1, -1, while the triangular lattice gaps have 1, 2. These Chern numbers would not be affected by small changes in the geometry, as they are topologically protected. Chern numbers only change if a gap in the spectrum closes. Thus, we can classify all lattices into equivalence classes by their Chern number sequence. Hence, our parametrization of the lattice space in Fig.<ref> can be separated into regions of topologically equivalent phases. The phase boundaries, then, show the parameters for which at least one gap closes.

Chern numbers can be calculated by k-space integration of the Berry curvature over the BZ. However, we use an indirect but computationally simpler approach<cit.>. The conductivity of the system is a thermodynamic variable and can be calculated as the variation of the number of levels below the Fermi level with respect to changes in the magnetic field in two dimensions<cit.>. In lattice systems, the Chern number for a gap is,

𝒞 = ∂n/∂ϕ

where n is the density per unit cell for states below the Fermi level and ϕ is the magnetic flux per plaquette. The Chern number is calculated from the difference in the number of energy eigenvalues below the Fermi energy for two close magnetic flux values.

We explored the lattice phase space for the two simplest non-trivial fluxes, ϕ = 1/3, 1/5. First, we take a closed rectangular path over the phase space, ATSBA, and calculate the evolution of the energy bands. The path is chosen such that it avoids the 1D-chain limit, where gaps are very small, as in Fig.<ref>. These evolutions are given in Fig.<ref>. For ϕ = 1/3, we find that all lattices are either equivalent to the square lattice with Chern numbers -1, 1 or equivalent to the triangular lattice with 2, 1. Thus, we separate the parameter space into two regions, as can be seen in Fig.<ref>. We find that the region equivalent to the triangular lattice roughly tracks the centered rectangular lattice curve. For ϕ = 1/5, we find four different regions which are sampled by the rectangular cut in Fig.<ref>. As q is increased, we get smaller gaps and correspondingly the phase diagram splits into smaller regions. An important observation is that at the phase boundaries, the Chern numbers for energy gaps change exactly by q for magnetic flux ϕ = p/q. It is possible to understand this surprising observation by a symmetry argument.
Because of the magnetic translation symmetry, the energy spectrum is q-fold degenerate inside the magnetic BZ. When two bands touch, they must be degenerate at at least q points. A Chern number of exactly 1 is exchanged when a single Dirac cone closes and re-opens. If each one of the degenerate points is a Dirac cone, a total Chern number of q is exchanged between the bands. If the degenerate points are of higher order, integer multiples of q can be exchanged. However, we have not numerically observed such an exchange.

§ CONCLUSION

The cold atom experiments in optical lattices have enabled a vast number of interesting phenomena, which are not possible to realize in solid state systems. Especially, the enhanced control over the lattice geometry and the artificial gauge fields can be utilized to directly realize the Hubbard-Hofstadter model<cit.>. In this respect, we investigate the inherent relation between the point group symmetries in two dimensions and the energy spectrum as a function of the magnetic flux per plaquette. We focus on the transitions between lattices of different symmetry by using an optical lattice potential which can realize all the two-dimensional Bravais lattices. We describe this potential through a TB model and calculate its parameters ab initio. The effect of the magnetic field is introduced by the Peierls substitution. Then, the energy spectra for each symmetry group are calculated, as well as the evolution of the energy spectrum during transitions between the symmetries.

We find that lattice deformations around high symmetry points yield dramatic changes in the energy spectra. The evolution between the square lattice and the triangular lattice is mainly influenced by broken bipartite symmetry. We find that the energy spectrum changes dramatically in the vicinity of θ = π/3, the triangular lattice limit. This rapid change should be experimentally observable even for the simplest non-trivial flux value, ϕ = 1/3. A few degrees of deviation from θ = π/3 is found to produce a jump in the Hall conductivity if the system is filled up to the first gap. During the evolution, bands touch and reopen to transfer Chern numbers between the gaps. We find that the Chern numbers of bands change only by integer multiples of q. We explain this observation by invoking the q-fold degeneracy within a magnetic BZ. This result is particularly important for solid state experiments where the Hall conductivity is directly measured through transport.

Finally, we regard the space of all possible Bravais lattices as a phase diagram where each phase is identified by the q-1 Chern numbers of the gaps. For ϕ = 1/3, there are two distinct regions belonging to the square lattice and the triangular lattice. In addition, the Chern number map for the phase space indicates that the region corresponding to the triangular lattice roughly follows the centered rectangular lattice curve. For ϕ = 1/5, the space is divided into four phases, and larger values of q result in smaller topological regions.

F.Y. is supported by Türkiye Bilimsel ve Teknolojik Araṣtırma Kurumu (TÜBİTAK) Scholarship No. 2211.
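As an appendix-style illustration of the Středa-formula evaluation 𝒞 = ∂n/∂ϕ used in the topological characterization above, the following self-contained Python sketch (our own simplification, restricted to the square-lattice limit t_2 = t_3 = 0) counts the states below a Fermi energy placed inside a gap at two nearby rational fluxes. It assumes the chosen gap stays open at the nearby flux; the Fermi energy should be read off inside a gap of the computed spectrum.

```python
# Sketch of the Streda formula C = dn/dphi for a gap Chern number,
# in the square-lattice limit (t2 = t3 = 0; illustrative only).
import numpy as np

def harper(p, q, k1, k2):
    """q-by-q Harper matrix with nearest-neighbour hoppings set to 1."""
    H = np.zeros((q, q), dtype=complex)
    for m in range(q):
        H[m, m] = -2.0 * np.cos(2 * np.pi * p / q * m + k1)
        n = (m + 1) % q
        b = np.exp(1j * k2 * q) if m == q - 1 else 1.0
        H[m, n] += -b
        H[n, m] += -np.conj(b)
    return H

def density(p, q, fermi, nk=4):
    """States per (non-magnetic) unit cell below the Fermi energy."""
    filled = 0
    for k1 in np.linspace(0, 2 * np.pi, nk, endpoint=False):
        for k2 in np.linspace(0, 2 * np.pi / q, nk, endpoint=False):
            E = np.linalg.eigvalsh(harper(p, q, k1, k2))
            filled += np.count_nonzero(E < fermi)
    return filled / (nk * nk * q)

def gap_chern(p, q, fermi, dq=50):
    """Finite-difference dn/dphi between phi = p/q and a nearby flux;
    assumes `fermi` lies in a gap that is open at both fluxes."""
    p2, q2 = p * dq + 1, q * dq          # phi2 - phi1 = 1/(q*dq)
    dn = density(p2, q2, fermi) - density(p, q, fermi)
    return round(dn * q * dq)

# e.g. gap_chern(1, 3, fermi=E_gap), with E_gap inside one of the two
# gaps of the phi = 1/3 square-lattice spectrum, returns +1 or -1.
```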
http://arxiv.org/abs/1703.08810v1
{ "authors": [ "F. Yılmaz", "M. Ö. Oktel" ], "categories": [ "cond-mat.quant-gas", "J.2" ], "primary_category": "cond-mat.quant-gas", "published": "20170326124520", "title": "Hofstadter Butterfly Evolution in the Space of Two-Dimensional Bravais Lattices" }
L. Zhao, F. Han and X. Peng et al.

Long Zhao^1 (corresponding author, lz311@cs.rutgers.edu), Fangda Han^1, Xi Peng^2, Xun Zhang^1, Mubbasir Kapadia^1, Vladimir Pavlovic^1, Dimitris N. Metaxas^1

[1] Department of Computer Science, Rutgers University, NJ, United States
[2] Department of Computer Science, Binghamton University, New York, United States

Corresponding author at: Rutgers University, 110 Frelinghuysen Road, Piscataway, New Jersey, 08854-8019, United States.

9 December 2018; 13 January 2019

We address the problem of using hand-drawn sketches to create exaggerated deformations to faces in videos, such as enlarging the shape or modifying the position of the eyes or mouth. This task is formulated as a 3D face model reconstruction and deformation problem. We first recover the facial identity and expressions from the video by fitting a face morphable model to each frame. At the same time, the user's editing intention is recognized from the input sketches as a set of facial modifications. Then a novel identity deformation algorithm is proposed to transfer these facial deformations from 2D space to the 3D facial identity directly, while preserving the facial expressions. After an optional stage for further refining the 3D face model, these changes are propagated to the whole video with the modified identity. Both the user study and experimental results demonstrate that our sketching framework can help users effectively edit facial identities in videos, while high consistency and fidelity are ensured at the same time.

Keywords: Video editing; Sketch-based modeling; Shape deformation; Deformation transfer; 3D morphable model

§ INTRODUCTION

Recent years have witnessed tremendous advances in the field of facial performance capture in videos, which serves as a vital foundation for other computer graphics applications <cit.>. In particular, impressive results have been achieved in state-of-the-art face editing frameworks, and they are widely used for creating funny facial effects for video games, movies and even mobile applications. In order to express the user's editing intention, this kind of framework always involves complex inputs (e.g., other images or videos within the same domain <cit.>) or additional capture devices (e.g., RGB or RGB-D cameras <cit.>). However, it is quite inconvenient for artists or amateur editors to access these resources in daily life. Moreover, current state-of-the-art methods usually aim to enable users to modify the facial expression of the actor in a video, since this kind of editing intention can be easily detected with fixed facial identities. Changing the identity, i.e., the original appearance of a face without the influence of the pose and facial expression, is quite difficult, since it is a form of modification which is hard to compute straightforwardly from reference inputs or a few parameters.

We address these two shortcomings by making use of sketches, which offer more efficiency and flexibility to editors, as demonstrated in recent research <cit.>. Consider the problem of transferring the facial characteristics of a cartoon image to an actor in a video, as shown in Fig. <ref>. In this paper, we propose a novel and robust interactive sketch-based face editing framework that allows both professional and amateur editors to finish this task very conveniently on a standard PC. We note that our framework is not a caricature system: the cartoon image we introduce here is not an input which will be processed by our framework; we are simply using it as an inspiration or guideline for users to edit the source video.
In fact, users are free to modify any facial appearance of an actor in the given video with the help of our framework. Compared to previous work, we focus on allowing users to edit the facial identity of the actor in the whole video, but not the expressions.There are three challenges towards this goal. (1) There is an inherent tradeoff between the flexibility of sketch-based specification and robustness. Specifically, unconstrained hand-drawn strokes may produce ambiguous inputs <cit.>. For example, a stroke drawn between the eyebrow and the upper eyelid might indicate editing either of them, and it is quite difficult for the framework to determine the user's true editing intention from this stroke alone. (2) Since the facial appearance depends on the pose of the actor as well as the identity, the influence of facial expression must be taken into account when applying changes to the identity. (3) Compared with previous sketch-based methods designed for static 2D images or 3D models <cit.>, our framework has to further propagate the modifications from one frame to the whole video. In this process, we need to predict the modified facial appearance in each frame while ensuring consistency and fidelity.In this paper, we introduce a 3D face model fitting and identity deformation transfer formulation. Our core idea is to first transfer modifications from the input sketch to the corresponding 3D face model fitted by the facial identity and expression, which is then used to propagate the changes to the whole video. To the best of our knowledge, ours is the first framework that allows users to edit the facial identity of an actor in a video using hand-drawn sketches. This is made possible by the following key contributions: (1) a sketch-based facial identity editing framework for videos, (2) a novel 2D-to-3D sketch-based identity deformation transfer algorithm, and (3) a contour-based interface for 3D model refinement. § RELATED WORK In this section, we review previous work in the field of sketch-based editing and deformation transfer, which motivates the design of our sketching system.§.§ Sketch-based shape editing Hand-drawn sketches are widely used in modeling static facial inputs, such as images or 3D shapes <cit.>. The main challenge for these systems is handling ambiguous user inputs, i.e., strokes which are difficult to match. Previous work <cit.> limits itself to pre-recorded data (curves or points) to mitigate ambiguity. The following two recently proposed sketch-based facial animation editing systems are most related to our work. Nataneli et al. <cit.> introduced an internal representation of a sketch's semantics, but users have to draw the sketch in predefined regions. Miranda et al. <cit.> built a sketching interface control system which only allows users to draw strokes on a predefined set of areas corresponding to different face landmarks, thereby avoiding ambiguous conditions.
In this paper, we introduce a sketch-based editing framework which differs from previous work in two aspects: (1) our method is the first framework that allows users to edit a face by sketch in a video sequence rather than a static image or 3D shape; (2) we utilize the sequence information of strokes to deal with ambiguous user inputs without predefined constraints.§.§ Deformation transfer Deformation transfer <cit.> first addressed the problem of transferring local deformations between two different meshes, where the deformation gradient of meshes is directly transferred by solving an optimization problem. Semantic deformation transfer <cit.> inferred a correspondence between the shape spaces of two characters from given example mesh pairs by using standard linear algebra. Zhou et al. <cit.> further utilized these methods to automatically generate a 3D cartoon of a real 3D face. Thies et al. <cit.> developed a system that transfers expression changes from the source to the target actor based on <cit.> and achieves real-time performance. Xu et al. <cit.> designed a facial expression transfer and editing technique for high-fidelity facial performance data. Moreover, other flow-based approaches <cit.> have also been proposed to transfer facial expressions to different face meshes. However, these traditional methods aim to transfer deformations, especially facial expressions, between 3D meshes. Differing from them, we propose a pipeline which directly transfers local identity changes in 2D space to a 3D face model. Huang et al. <cit.> presented an approach that projects changes of a mesh in 2D to 3D as a projection constraint. Compared with it, the main novelty of our algorithm is that we combine a sketch-based interface to enable users to perform the editing with hand-drawn sketches from 2D to 3D: we first map the sketch to a set of modifications corresponding to 3D space, and then transfer them to the target 3D mesh. § FRAMEWORK OVERVIEW The input of our framework is a monocular video consisting of continuous frames of a person's face, together with a frame t_0 in this video containing a corresponding hand-drawn sketch. This sketch may be a complete facial sketch or partial strokes representing changes that the user wants to make to the appearance of the face, e.g., to enlarge the mouth or modify the position of the eyebrows. Our goal is to recognize all these changes from the sketch and apply them to the whole video. Inspired by Thies et al. <cit.>, we formulate this task as a parametric face model fitting and deformation transfer problem.The whole pipeline of our framework is outlined in Fig. <ref>. Our core idea is to first reconstruct a 3D face model F_t for each frame t in the input video, where F_t can be disentangled into a unique component I representing the facial identity and a sequence of E_t describing the facial expression changes over time. Meanwhile, the face deformation encoded in the input sketch is approximated by a set of local deformations in 2D space. We then transfer these deformations from 2D space to the target 3D facial identity I, while the influence of the expression E_t is removed by solving an energy minimization problem. After computing the modified identity shape Î, we obtain the updated full 3D face model F̂_t for each frame t. Finally, these modifications are propagated throughout the whole video by rendering F̂_t to frame t with the isomap M_t.In the following, we discuss the individual steps in detail.
First, in Section <ref>, we show how the 3D face model is reconstructed and disentangled into the identity as well as the expression for each frame in the video by a robust 3D face morphable model fitting algorithm <cit.>. In Section <ref>, a robust sketch mapping and fitting scheme is introduced to recognize the user's editing intentions and apply them to the face in 2D space. Specifically, we utilize the order information carried by a series of strokes to mitigate ambiguity. Then an energy function is minimized to deform the facial appearance while handling stroke noise at the same time. In Section <ref>, we further present an approach to transfer deformations from 2D space to the 3D facial identity with depth estimation, while removing the influence of the expression. We also implement a contour-based interface for users to optionally refine the 3D identity in Section <ref>. Finally, Section <ref> presents the optimization algorithms for rendering the deformed face texture back to the whole video, which removes artifacts generated in the previous steps. The user study as well as the experimental results shown in Section <ref> indicate that our sketching framework is simple to use even for amateurs, while high-fidelity results are guaranteed as well. For clarity, we list the main symbols used throughout this paper in Table <ref>. § FACE MODEL FITTING We utilize FaceWarehouse <cit.> to construct the blendshape face meshes for each frame in the input video. Specifically, a fully transformed 3D face model F can be represented as:F = 𝐑 · V + 𝐭 = 𝐑 · (C ×_2 𝐮^T ×_3 𝐞^T) + 𝐭,where V is a set of vertices describing the shape of the face mesh. We utilize 𝐑 and 𝐭 to represent the global rotation and translation of V, respectively. C is the rank-3 core tensor from the FaceWarehouse database <cit.>, whose modes correspond to the vertices of the face mesh, the identity and the expression. 𝐮 is the identity vector and 𝐞 is the expression vector. Let L = {𝐥_k} denote a set of 2D facial landmarks of the face in a frame and {𝐯_k} denote their corresponding vertices of the 3D face mesh V. Let F^(𝐯_k) be the 3D position of 𝐯_k after the global rotation and translation according to Eq. <ref>. To compute 𝐥_k, we define a set of 2D displacements D = {𝐝_k}, each of which is added to the projection of F^(𝐯_k):𝐥_k = Π_𝐏(F^(𝐯_k)) + 𝐝_k,where Π_𝐏(·) denotes a perspective projection operation parameterized by a projection matrix 𝐏. To recover these unknown face parameters from the video, we employ DDE <cit.>, a state-of-the-art real-time regression algorithm for facial tracking. DDE predicts a sextuple ⟨𝐏, 𝐮; 𝐞_t, 𝐑_t, 𝐭_t, D_t⟩ for each frame t in the video. Note that 𝐏 and 𝐮 are invariant across all frames for the same actor and the same video camera during tracking, while the other unknowns change from frame to frame. The expression blendshapes B = {𝐛_k} of the actor in the input video are constructed as:B = C ×_2 𝐮^T.As commonly assumed in blendshape models, 𝐛_0 is the 3D shape of the neutral face. We can further represent the face shape V_t of the actor in frame t by:V_t = 𝐛_0 + ∑_n=1^N_B−1 (𝐛_n − 𝐛_0) · ê_t^(n) = 𝐛_0 + E_t = I + E_t,where 𝐞̂_t = [ê_t^(1), …, ê_t^(N_B−1)] is computed from the initially fitted expression coefficient vector 𝐞_t estimated by DDE. Intuitively, Eq. <ref> disentangles the actor's 3D face shape V_t into the identity I and the expression E_t. For each frame t, we also extract M_t, the isomap <cit.> which contains pixel textures for the face model, and compute the 2D landmarks L_t according to Eq. <ref>.
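To make the fitting pipeline concrete, the following minimal numpy sketch illustrates Eqs. <ref> and <ref>: contracting the core tensor with the identity and expression vectors, and projecting mesh vertices to 2D landmarks. The tensor layout, the use of a 3 × 3 intrinsic matrix K for Π_𝐏, and all variable names are illustrative assumptions rather than the exact conventions of FaceWarehouse or DDE.

import numpy as np

def face_shape(C, u, e):
    # C: core tensor (3*n_vertices, n_id, n_expr); u, e: identity and
    # expression coefficient vectors. Implements C x_2 u^T x_3 e^T.
    V = np.einsum('vij,i,j->v', C, u, e)
    return V.reshape(-1, 3)          # assumes (x, y, z) stored per vertex

def project_landmarks(V, R, t, K, landmark_idx, D):
    # Eq. (2): l_k = Pi_P(F^(v_k)) + d_k for each tracked landmark.
    F = V @ R.T + t                  # rigid transform, as in Eq. (1)
    X = F[landmark_idx]              # 3D positions of the landmark vertices
    x_h = X @ K.T                    # homogeneous image coordinates
    l = x_h[:, :2] / x_h[:, 2:3]     # perspective divide
    return l + D                     # add the regressed 2D displacements d_k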
Combining these estimates, we obtain the final per-frame face performance, which consists of six parameters ⟨I; E_t, 𝐑_t, 𝐭_t, L_t, M_t⟩. In the following steps, I, E_t and L_t are used for computing the deformed 3D face mesh, while 𝐑_t, 𝐭_t and M_t are employed to propagate the modifications throughout the entire video to generate the final result. § SKETCH-BASED DEFORMATION TRANSFER We present a robust sketch-based face editing framework that enables users to edit all face details that have been marked with corresponding vertices on the 3D face mesh. In this paper, we allow users to edit the 68 face landmarks predefined by Cao et al. <cit.> for illustration, as shown in Fig. <ref>(b). To apply the user's editing intention from the input sketch, we first map each stroke to a suitable part of the face, e.g., the contour of the eyes or mouth, and then deform this part according to the hand-drawn stroke.§.§ Sketch mapping Our target is to map each stroke to a landmark group in the 2D space (a collection of landmarks representing a meaningful part of the face, e.g., the left eyebrow or the upper eyelid), and to remove unreasonable strokes from the result at the same time. The main challenge in this task is how to deal with ambiguous user inputs; Fig. <ref>(d) shows an example. Previous methods solve this problem by only allowing users to edit the face with predefined curves <cit.> or to draw strokes in pre-ordered regions <cit.>. Instead, we introduce a robust mapping scheme which enables users to draw strokes without such constraints; the only assumption we make is that each stroke is “clean”, i.e., it aims to edit just one target landmark group.We notice that users always draw a sketch in a meaningful order encoding their editing intention. Landmark groups having a strong relation with each other, e.g., the upper and bottom eyelids of the same eye, tend to be drawn at the same time. Based on this observation, the input sketch is regarded as an ordered sequence of strokes, and a Hidden Markov Model (HMM) is employed to formulate the problem. Note that our HMM-based algorithm leverages the order in which users draw strokes for efficient matching with landmark groups. Prior work, e.g., <cit.>, has not combined this order information with an HMM; it uses an HMM to model the shape of a single stroke in order to match mesh templates. Let {G_1, …, G_t} be a set of landmark groups on the 2D image, and {S_1, …, S_t} be the stroke sequence of the input sketch. We treat each landmark group G as a hidden state while a stroke S is an observation of the HMM, and our target is to find the most probable sequence of hidden states (landmark groups) for a given observation sequence (strokes):max_G_1:t P(G_1:t|S_1:t).The initial probability P(G_0) of each hidden state is set to 1/N_G. The emission probability P(S_t|G_t), which measures the probability of a stroke belonging to a certain landmark group, is defined as:P(S_t|G_t) = exp(−d(S_t, G_t)² / 2σ²) if d(S_t, G_t) ≤ 3σ, and 0 otherwise,where d(S_t, G_t) measures the difference between S_t and G_t, namely the average Euclidean distance of their corresponding key points. Note that if S_t has a high distance to all landmark groups (which means that the stroke is invalid), S_t will not be matched with any G_t and is excluded from the result. P(G_t|G_t−1) is the transition matrix, which expresses the probability of moving from one hidden state to another.
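Before specifying the transition probabilities, we give a minimal sketch of how the emission matrix of Eq. <ref> and the most probable group sequence of Eq. <ref> can be computed via the standard Viterbi recursion. The precomputed distance matrix dist, the transition matrix trans (whose values are described next), and the small constants guarding log(0) are illustrative assumptions.

import numpy as np

def emission(dist, sigma):
    # dist[t, g] holds d(S_t, G_g); Eq. (6) with the 3-sigma cutoff.
    E = np.exp(-dist**2 / (2.0 * sigma**2))
    E[dist > 3.0 * sigma] = 0.0      # invalid strokes match no group
    return E                          # shape: (n_strokes, n_groups)

def viterbi(E, trans):
    # Most probable landmark-group sequence for the observed strokes.
    n_strokes, n_groups = E.shape
    logp = np.log(np.full(n_groups, 1.0 / n_groups)) + np.log(E[0] + 1e-12)
    back = np.zeros((n_strokes, n_groups), dtype=int)
    for t in range(1, n_strokes):
        scores = logp[:, None] + np.log(trans + 1e-12)
        back[t] = scores.argmax(axis=0)          # best predecessor per state
        logp = scores.max(axis=0) + np.log(E[t] + 1e-12)
    path = [int(logp.argmax())]
    for t in range(n_strokes - 1, 0, -1):        # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]                            # group index per stroke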
Transition probabilities between two landmark groups with a strong relation are assigned a higher value, i.e., twice as large as the other values, which makes it easier for their corresponding strokes to be mapped together and helps when strokes are ambiguous. Given these parameters, the most probable sequence problem can be solved by the Viterbi algorithm <cit.>.§.§ Landmark deformation For each input stroke S and its mapped landmark group G, we need to deform G into Ĝ according to S, where Ĝ is the final modified landmark group. This is achieved by minimizing an energy function that leverages the position of the input stroke (the editing intention of the user) and the original shape of the landmark group. Let 𝐠_i and 𝐠̂_i be the coordinates of the i-th landmark in G and Ĝ, respectively, and 𝐬_i be the corresponding i-th key point in S. The energy function is formulated as: ℰ = ∑_i=1^N_G ||𝐠̂_i − 𝐬_i||² + ∑_i=1^N_G−1 (1 − cos(γ̂_i − γ_i)),where γ_i is the included angle of 𝐠_i and 𝐠_i+1, and γ̂_i is that of 𝐠̂_i and 𝐠̂_i+1. To minimize this target function, we use the value of ||𝐠_i+1 − 𝐠_i|| to approximate ||𝐠̂_i+1 − 𝐠̂_i||, and solve it with a gradient descent algorithm.Intuitively, the first term of Eq. <ref> is the position constraint. It measures the distance between the modified landmark group and the input stroke, which moves the landmarks of this group to their expected positions. Meanwhile, the second term of Eq. <ref> represents the shape prior. It is employed to maintain the original shape information of the landmark group after the modification, which helps prevent unrealistic results due to noise carried by the input sketch. § FACIAL IDENTITY DEFORMATION TRANSFER The final modified facial identity Î is calculated from the target identity I by transferring the 2D deformations (a set of modified 2D face landmarks) to I while removing the influence of expressions. Our strategy is to first estimate the 3D positions of the 2D landmarks with the reconstructed face model parameters from Section <ref>. Then a robust deformation transfer technique is proposed to determine the modified identity according to these 3D landmark positions as well as the expression. This is achieved in two steps: depth estimation for key points in 2D, and solving an extended target function of <cit.>. The main contribution of our identity deformation transfer algorithm is that we perform the deformation transfer between different feature spaces, from 2D to 3D, while prior work <cit.> performs the transfer within the same 3D space.To estimate the 3D position (x̂_i, ŷ_i, ẑ_i) of a certain modified landmark 𝐥̂_i whose 2D coordinate in this frame is (x̂_i, ŷ_i), we need to reconstruct ẑ_i (the depth of this point with respect to the screen plane). However, the depth value is unknown since the deformation is made in 2D space. Notice that when the front face is right against the screen plane, i.e., the face plane and the screen plane are parallel, points on the face mesh have similar depth values (the difference between the maximum and minimum depth values of all points reaches its minimum), especially for landmarks whose normal vectors are perpendicular to the screen plane. Based on this observation, we estimate the depth ẑ_i of the modified landmark 𝐥̂_i with the original depth z_i of 𝐥_i, which can be computed directly according to Eq. <ref>. This estimation effectively avoids generating unrealistic facial identities.
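A minimal sketch of this fixed-depth estimate is given below. The pinhole model with a 3 × 3 intrinsic matrix K is an assumption for illustration; the paper only fixes the depth value itself.

import numpy as np

def backproject_with_original_depth(l_hat, X_orig, K):
    # l_hat: (2,) modified 2D landmark; X_orig: (3,) original 3D landmark
    # in camera coordinates. Returns the estimated 3D position.
    z = X_orig[2]                             # reuse the original depth z_i
    p = np.array([l_hat[0], l_hat[1], 1.0])
    ray = np.linalg.inv(K) @ p                # viewing ray through the pixel
    return ray * (z / ray[2])                 # scale the ray to depth z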
As a result of this fixed-depth estimate, however, our approach achieves the best performance when the editing is applied to a frame in which the actor faces the camera.We can then obtain the modified facial identity by transferring the deformations (the set of modified 3D landmark coordinates computed above) to the target identity. Our approach is inspired by the correspondence system in <cit.>, but developed in the context of our deformation framework. Let V = {𝐯_1, …, 𝐯_n} and V̂ = {𝐯̂_1, …, 𝐯̂_n} be the n vertices of the original and modified facial identity. Note that here V equals the facial identity I after removing the effect of the facial expression E_t according to Eq. <ref>. We let Q = {𝐐_i} be a set of mesh triangles and 𝐐_i = 𝐕̂_i(𝐕_i)^−1 be the affine transformations defining the deformation of the i-th triangle, where 𝐕̂_i and 𝐕_i are the corresponding vertex matrices <cit.> calculated from V̂ and V, respectively. The vertex positions of the modified identity are computed by minimizing the distance between the original and modified 3D landmarks after removing the influence of the facial expression. We define the landmark term as:ℰ_l = 1/2 ∑_i=1^m ||𝐯̂_i − 𝐥̃_i||²,where m is the number of modified 3D landmarks, and 𝐥̃_i = 𝐥̂_i + E_t^(𝐯_i) is the coordinate of the i-th modified landmark after merging the influence of the facial expression E_t. The whole energy function is then defined as:ℰ(𝐯̂_1, …, 𝐯̂_n) = w_s ℰ_s + w_r ℰ_r + w_l ℰ_l,  s.t. 𝐯̂_i = 𝐛_k, 𝐛_k ∈ ℱ(V),where w_* controls the effect of each term; w_r = 0.1 while the other two are set to 1 experimentally, and the 𝐛_k in Eq. <ref> are points on the face boundary ℱ(V). ℰ_s is a smoothness term that makes the transformations of adjacent triangles 𝒜 similar to each other <cit.>:ℰ_s(𝐯̂_1, …, 𝐯̂_n) = 1/2 ∑_i=1^|Q| ∑_j∈𝒜(i) ||𝐐_i − 𝐐_j||²,and ℰ_r maintains the original shapes of the triangles not affected by the deformation, in order to prevent drastic changes in the shape of the target identity <cit.>:ℰ_r(𝐯̂_1, …, 𝐯̂_n) = 1/2 ∑_i=1^|Q| ||𝐐_i − 𝐈||²,where 𝐈 is the identity matrix. The whole energy function can be minimized by solving a system of linear equations. During this procedure, the boundary points of the deformed mesh match exactly, since they are specified as constraints to keep the original face contour; the local deformations are transferred by the landmark term; and the rest of the mesh is carried along by the smoothness and identity terms.
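As a rough illustration of how such a linear system can be assembled and solved, the following sketch poses a heavily simplified version of Eq. <ref> as sparse linear least squares. For brevity, the triangle-based smoothness term ℰ_s is replaced by an edge-based surrogate that preserves original edge vectors, and the boundary constraint is imposed with a large penalty weight instead of a hard constraint; this is therefore not the exact formulation solved in our system.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def transfer_identity(V, landmark_idx, L_target, boundary_idx,
                      edges, w_s=1.0, w_r=0.1, w_l=1.0, w_b=1e3):
    # V: (n, 3) original identity vertices; L_target: (m, 3) modified
    # 3D landmarks; edges: list of (i, j) mesh edges. All simplifications
    # relative to Eq. (9) are noted in the lead-in above.
    n = V.shape[0]
    rows, cols, vals, rhs, eq = [], [], [], [], 0
    for i, j in edges:                        # surrogate smoothness term
        rows += [eq, eq]; cols += [i, j]; vals += [w_s, -w_s]
        rhs.append(w_s * (V[i] - V[j])); eq += 1
    for i in range(n):                        # E_r: stay near original shape
        rows.append(eq); cols.append(i); vals.append(w_r)
        rhs.append(w_r * V[i]); eq += 1
    for k, i in enumerate(landmark_idx):      # E_l: hit modified landmarks
        rows.append(eq); cols.append(i); vals.append(w_l)
        rhs.append(w_l * L_target[k]); eq += 1
    for i in boundary_idx:                    # keep the face contour fixed
        rows.append(eq); cols.append(i); vals.append(w_b)
        rhs.append(w_b * V[i]); eq += 1
    A = sp.coo_matrix((vals, (rows, cols)), shape=(eq, n)).tocsr()
    B = np.vstack(rhs)
    V_hat = np.column_stack([spla.lsqr(A, B[:, c])[0] for c in range(3)])
    return V_hat                              # (n, 3) deformed vertices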
§ CONTOUR-BASED FACE MODEL REFINEMENT Since the quality of the 3D face model computed from the identity deformation transfer strongly influences our final result, we implement an interface for artists to directly edit the deformed 3D face model for refinement. To ensure fluency and simplicity of user interaction during this process, we present a contour-based editing scheme. Note that this refinement step can be skipped if a favorable result has already been achieved in the previous steps or if the user has no experience in 3D model editing.At the beginning of this refinement stage, selected feature contours on the deformed 3D face model are projected onto the 2D canvas to produce an initial sketch-like contour map. The user can then drag or rotate it to observe the model from different views. If some of the projected lines are unsatisfactory, the user can redraw them with new sketches. In this refinement phase, the redrawn lines are matched more closely with the user inputs by adding them as new constraints on the 3D face model. Compared with traditional interactive 3D mesh editing software such as MAYA or 3DS-MAX, with which even a skilled artist needs a long time to create a decent 3D face model, our contour-based approach is much simpler and faster. Moreover, this contour-based refinement interface also ensures a smooth user experience during the whole editing process, since it involves no software or interaction modes other than hand-drawn sketches.Contour rendering. Recent studies <cit.> present a hybrid line rendering method that generates 2D contour maps for 3D shapes with good performance. In this paper, we adopt this approach, which combines predefined exterior silhouettes, occluding contours, suggestive contours <cit.> and shape boundaries to generate the final contour map from a given viewpoint for further editing. Examples are shown in Fig. <ref>. Note that the preprocessing steps in <cit.> are also applied to reduce the noise in the initial map.3D model refinement. Users are allowed to rotate the input 3D model to edit its 2D contour map from different viewpoints. Once an unsatisfactory line in the map is found, the user can modify it by first marking a region around it to erase it, and then drawing a new sketch. After sampling key points from them, the edited line is calculated by the same algorithm as described in Section <ref>. We treat the key points on the edited line as new landmark constraints, and the identity deformation transfer from Section <ref> is employed to update the face model. Users can repeat these steps until a favorable 3D facial identity Î is achieved. Note that since more constraints are added, we increase the weights w_s and w_r in Eq. <ref> at this stage so as to prevent drastic deformations of the final 3D shape Î. § TEXTURE RE-RENDERING AND SMOOTHING Propagating the deformations to the whole video is achieved by computing the modified F̂_t with Î for each frame t according to Eqs. <ref> and <ref>, and then re-rendering the extracted face texture isomap M_t back to the frame with F̂_t. For a high-fidelity result, the background should be warped as well, so that both sides of the face boundary deform coherently. We also apply a median filter on the boundary to blur the difference between the face and its surrounding background.Note that there might be some “holes” (invisible pixels) on the isomap M_t due to occlusion. If modifications are applied to the boundary of the 3D facial identity, these “holes” lead to artifacts after smoothing, since they might become visible as a result of the deformation. We notice that these missing pixels may be seen in other frames (since the actor has different poses in different frames), which can be exploited to fill the “holes” in a given frame. Therefore, we utilize a refined isomap M̂_t to synthesize the modified face in each frame. To obtain M̂_t, we first compute a mean isomap M̅ from all frames in the given video; then, for the isomap M_t of each frame t, we use this mean isomap to fill the “holes” in it. We obtain the final refined isomap M̂_t for each frame after applying a Gaussian filter to smooth the boundaries. Finally, the artifacts on the boundary are removed by re-rendering the face with M̂_t, as shown in Fig. <ref>.
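A minimal sketch of building the refined isomaps is shown below; the per-frame visibility masks, the masked averaging scheme, and the kernel size are illustrative assumptions.

import numpy as np
import cv2

def refine_isomaps(isomaps, masks, ksize=5):
    # isomaps: (T, H, W, 3) float arrays; masks: (T, H, W) booleans that
    # are True where the pixel was visible when the texture was extracted.
    w = masks[..., None].astype(np.float32)
    mean_map = (isomaps * w).sum(0) / np.maximum(w.sum(0), 1.0)  # M-bar
    refined = []
    for M, m in zip(isomaps, masks):
        filled = np.where(m[..., None], M, mean_map)   # fill the "holes"
        refined.append(cv2.GaussianBlur(filled.astype(np.float32),
                                        (ksize, ksize), 0))
    return refined                                     # list of M-hat_t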
To handle the missing background due to the facial deformation, one simple strategy is to build a background model (background as in non-face and non-body) over successive frames and then replace missing background pixels with newly revealed ones. However, this approach depends on an accurate background segmentation algorithm. In this paper, we solve the problem in a more robust warping-based manner. First, we employ SIFT <cit.> to detect static key points in the starting frame; optical flow is calculated throughout the whole video in order to track dynamic key points of the background. We then construct a set of control points for each frame by combining the static and dynamic points. Finally, we use the Moving Least Squares <cit.> algorithm to warp the background pixels based on the detected control points. This optimization strategy effectively avoids shaking of static objects in the background while maintaining the consistency of the face boundary. § RESULTS We evaluate the performance of our approach on different YouTube videos at a resolution of 1280 × 720. The videos show different actors in different scenes captured from varying camera angles; we choose one frame from each video and provide a corresponding sketch of the actor's face as the input. In our experiments, users are allowed to edit the 68 face landmarks marked by Huber et al. <cit.> by sketch for demonstration. Example results created by amateurs using our sketching interface are shown in Figs. <ref> and <ref>.§.§ Runtime performance We evaluate the runtime performance of our method by computing the average runtime of each step with respect to different video resolutions. Within the texture re-rendering step, an isomap with 256 × 256 resolution is computed for 360P and 480P videos, while a 512 × 512 resolution is used for HD videos. Our approach runs on a desktop computer with an Intel 4.00GHz Core i7-6700K CPU. Table <ref> shows the results. Texture re-rendering and background optimization are the slowest components, while the other steps run in a matter of milliseconds. Note that our framework achieves real-time performance (≥ 25 FPS) for standard-resolution videos without background optimization, which is compatible with streaming inputs. Moreover, our method does not rely on a powerful GPU and can be extended to lightweight devices.§.§ User study In this section, a user study is conducted to evaluate the user experience as well as the video results achieved by our framework. We invited 20 people, 10 men and 10 women. All participants are graduate students aged from 23 to 25; 3 of them major in arts while the others come from the statistics and computer science departments. In addition, 4 of them have a background in arts and 3D animation modeling (with 3 years of experience in MAYA and 3DS-MAX as well), while the others are amateur users who have limited or no knowledge of drawing and 3D editing. Before the following sessions, a 10-minute tutorial as well as 20 minutes of practice were given to guarantee that every participant knew how to use our interface. Another 40 minutes were used to introduce MAYA to the amateur users. In our evaluation, we use cartoon images as references to compare our results with the users' true editing intentions in an effective way. Users are asked to edit the face to match a cartoon image using our system; however, they are free to make additional modifications as they wish. Therefore, the results may contain some unrelated user inputs.
§.§.§ User experience of interfaces The goal of the first session is to evaluate the user experience of our sketch-based interface. To guide the users' editing intention, each participant was given a YouTube video together with a 2D cartoon face image as reference, and asked to edit the facial appearance of the actor in the video to match at least one prominent facial characteristic of the cartoon image using our system. Note that the created facial identity was not required to strictly follow the reference image; differences were allowed. All participants were asked to complete 5 tasks in this session with different pairs of videos and reference images, as illustrated in Figs. <ref> and <ref>. We also implemented another, deformation-based user interface, where the 3D face models were first calculated according to Section <ref>; MAYA was then used as the editing tool instead of sketches, and the final results were directly generated from the modified models as in Section <ref>. In this deformation-based interface, users are only allowed to edit/modify the positions of the mesh vertices with MAYA. This constraint is made for fair comparison, since our system deforms the mesh in the same way. All participants were asked to repeat the same tasks with this interface. The amateur users could stop at any time if it was too difficult for them to continue, while the artists were required to finish all tasks. To verify the effectiveness of our contour-based refinement mode from Section <ref>, the amateur users were recommended to try it after the initial sketching, and the artists were asked to use it for all tasks. At the end of this session, we asked the participants which system is easier to use.Among the 16 amateur users, 12 finished the editing tasks with the deformation-based interface while the others gave up halfway. In contrast, all participants completed the same tasks with our sketch-based interface. Moreover, 10 of the amateurs used our contour-based refinement mode; the others chose to skip it because favorable results had already been achieved in the previous steps. In terms of user experience, all participants agreed that our system is simpler to use and yields decent results. Those who tried our refinement mode agreed that it was very helpful, and the four professional artists agreed that it is more efficient than traditional deformation-based software. §.§.§ Comparison on mesh deformation Fig. <ref> shows a gallery of deformed 3D face models, corresponding to the actors of the cases shown in Figs. <ref> and <ref>, under three different editing interface settings: created with our sketch-based interface by amateurs (Ours), with the deformation-based interface by artists (MAYA*), and with our sketch-based interface plus the refinement mode by artists (Ours*+RF). Detailed timings are shown in Table <ref>. In addition, we also report the timing for editing with the deformation-based interface by amateurs (MAYA). As shown in Table <ref>, an amateur on average spent only 3.6 minutes to complete a task with our system, which is 3 times faster than with the MAYA-based interface. Meanwhile, our sketching interface also doubled the editing efficiency for artists. In Fig. <ref>, our system achieves results comparable to the ones created with MAYA by artists, while amateurs managed to perform reasonable mesh deformations with our interface.
§.§.§ Evaluation on visual results To further evaluate the visual results generated by our sketch-based face editing system, we designed a second session following the editing session. In this session, the results of the previous session were divided into 4 groups: videos edited by amateurs using the deformation-based interface or our sketch-based interface, and videos generated by artists using the deformation-based interface or our sketch-based interface with the refinement mode. For fair comparison, we manually chose the best result in each group for every task of the editing session. After that, all these videos (5 tasks done by 4 groups, respectively), together with their corresponding reference cartoon images, were presented to an additional 30 students who had not participated in the editing session. Given a reference image (displayed in random order), every participant was asked to look at the corresponding videos from the 4 different groups and rank them by choosing the one that better matches the cartoon. The final results are shown in Fig. <ref>.We find that videos created by amateurs using our sketching interface obtain a higher average rank than those created using the deformation-based interface. A t-test was also conducted to compare the rankings obtained with these two settings. There was a significant difference between the rankings using the deformation-based interface (M = 3.00, SD = 1.17) and the sketch-based interface (M = 2.40, SD = 1.08); t(58) = 2.19, p = 0.03. This demonstrates that our framework does improve the results for amateurs. Notice that the results created by artists and amateurs with our sketching interface have similar average ranks (2.33 and 2.40, respectively), which suggests that, with the help of our sketch-based interface, amateurs manage to create results comparable to those of artists to a certain extent. Another observation is that the results have similar ranks when artists use our interface and MAYA, which confirms that artists can produce competitive results with our interface compared with MAYA.§.§ Evaluation of 3D-based editing We argue that the 3D face model and the deformation transfer algorithms are the keys to ensuring the consistency as well as the fidelity of the editing results. To evaluate the effectiveness of our 3D-based editing system, we implemented a 2D-warping baseline for comparison. We use the deformed 2D face landmarks from Section <ref> as the inputs for this baseline; the face is then deformed by Moving Least Squares <cit.> in the 2D space. All participants of the user study were invited to try this 2D-warping baseline, and 85% of them preferred our system over the baseline.For quantitative comparison, we measure the content consistency of an edited video using the Content Distance metric introduced in <cit.>. We employ OpenFace <cit.>, which outperforms human performance in the face recognition task, to measure video content consistency. A feature vector is produced by OpenFace for each frame of a given video, and the distance is calculated as the pairwise L2 distance between feature vectors. To measure the quality of an edited video, we compute the distances between each frame and the first frame. A method with a lower distance curve handles changes in rotation or expression better.
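A minimal sketch of this measurement is given below, assuming a generic per-frame face-embedding function embed as a stand-in for OpenFace.

import numpy as np

def content_distance_curve(frames, embed):
    # Map each frame to a feature vector, then measure the L2 distance
    # between every frame and the first frame (the distance curve).
    feats = np.stack([embed(f) for f in frames])     # (T, d) embeddings
    return np.linalg.norm(feats - feats[0], axis=1)  # distance to frame 0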
We collected all the edited results from the user study for this evaluation. To highlight the performance, the videos are manually aligned so that a noticeable change in head rotation or expression occurs around the 25th frame. Using the deformed 2D landmarks as the inputs, we compute the distance curves of the edited videos generated by our 3D editing system and by the 2D-warping baseline. The results are given in Fig. <ref>. From the figure, we find that the content of the videos generated by our system is more consistent: ours achieves a lower distance curve. More importantly, our system handles rotation and expression changes better than the 2D-warping baseline, as there is a smaller gap in our distance curve. Therefore, our system offers a 3D solution which substantially outperforms the 2D-warping approach.§.§ Evaluation of sketch matching To evaluate the performance of sketch matching, we compare our method with a geometry-based algorithm described in <cit.> and a learning-based approach <cit.>, both of which achieve state-of-the-art performance. We use the stroke similarity measurements described in these works to match strokes with landmarks as their corresponding approximate implementations. Detailed results are shown in Fig. <ref>. All methods produce competitive results with clear user inputs. However, as shown in the second case of Fig. <ref>, <cit.> is sensitive to noise since it fits landmarks to strokes using only geometric features; Nataneli et al. <cit.> can handle this case due to pre-learned prior knowledge; our method removes noise by taking the original appearance of the landmark group (the shape prior term in Eq. <ref>) into consideration. For ambiguous inputs, as shown in the third case of Fig. <ref>, both <cit.> and <cit.> map the second stroke to the eyebrow incorrectly; we successfully match it with the upper eyelid, since the HMM we employ tends to match the upper and bottom eyelids at the same time during optimization. § CONCLUSIONS AND DISCUSSION This paper presents the first sketch-based face editing framework for monocular videos. To recognize the user's editing intentions from hand-drawn sketches, a robust sketch matching scheme is introduced to convert them into a set of face landmark deformations. Furthermore, a novel facial identity deformation transfer algorithm is employed to propagate these changes throughout the whole video while maintaining consistency and fidelity. Without background optimization, our framework achieves real-time performance for streaming inputs at standard definition. Overall, we believe our framework will contribute to many new and exciting face editing applications on lightweight devices, e.g., tablet PCs and mobile phones.Limitations. There are some notable limitations to our work. One limitation of our sketch-mapping algorithm is that amateur users may not produce correctly ordered sketches on their first try. We make use of an HMM to model the relation between different strokes, and users have to redraw strokes drawn in an incorrect order. However, this is mitigated by the fact that, on average, the complexity and number of sketches is small (fewer than 6 strokes in most cases), and our interactive system supports rapid iteration and refinement of strokes. Another limitation is that we only allow users to edit a few landmarks on a face. This is due to the limitation of the morphable models we employ to construct the 3D face: local geometric details such as wrinkles cannot be represented.
Moreover, since our method relies on a fixed z-value for more accurate depth estimation, users have to draw sketches on a frontal face to achieve the best performance.Future work. In the future, we plan to implement a stereo editing interface to enhance the user experience and enable users to edit faces from different viewpoints as well. To alleviate the problem of limited editing ability, we propose to utilize Generative Adversarial Networks (GANs) <cit.> to make pixel-to-pixel predictions <cit.> directly from sketches instead of using morphable models. Allowing users to edit more facial details from sketches is another future direction. We also expect other interesting applications of the framework shown here; one can imagine coupling this work with an artist to create a cartoon talking avatar starting from a video. § ACKNOWLEDGMENTS The authors would like to thank the reviewers for their constructive comments and the participants of our user study for their precious time. This work was funded in part by grant BAAAFOSR-2013-0001 to Dimitris Metaxas. This work is also partly supported by NSF 1763523, 1747778, 1733843 and 1703883 Awards. Mubbasir Kapadia has been funded in part by NSF IIS-1703883, NSF S&AS-1723869, and DARPA SocialSim-W911NF-17-C-0098.