fractional differential equations ( fdes ) have become more and more popular in applied science and engineering field recently .the history and mathematical background of fractional differential operators are given in with definitions and applications of fractional calculus .this kind of equations has been used increasingly in many fields , for example , in nature fractional operators applied in fractal stream chemistry and its implications for contaminant transport in catchments , in the fractional calculus motivated into bioengineering , and its application as a model for physical phenomena exhibiting anomalous diffusion , l motion , turbulence , etc .let us briefly review the development of numerical methods for the fractional convection - diffusion equations .several authors have proposed a variety of high - order finite difference schemes for solving time - fractional convection - diffusion equations , for example , and solving space - fractional convection - diffusion equations . in , w. mclean and k. mustaphahave used the piecewise - constant and piecewise - linear discontinuous galerkin ( dg ) methods to solve the time - fractional diffusion and wave equations , respectively .but these methods require more computational costs . in order to tackle those problems , in w. mcleanhas proposed an efficient scheme called fast summation by interval clustering to reduce the implementation memory ; more recent works on this issue can been in .furthermore , in deng and hesthaven have developed dg methods for fractional spatial derivatives and given a fundamental frame to combine the dg methods with fractional operators . in xu and hesthaven have applied the dg methods to the fractional convection - diffusion equations in one dimension . in the two dimensional case , ji and tang have applied the dg methods to recast the fractional diffusion equations in rectangular meshes with the numerically optimal convergence order . however , there are no theoretical results .so far very few literatures deal with the fractional problems in triangular meshes , besides .this motives us to consider a successful dg method for solving the fractional problems in triangular meshes . here, we consider the time - dependent space - fractional convection - diffusion problem in the domain and ] with uniform mesh and the interval length .the characteristic tracing back along the field of a point at time to is approximated by therefore , the approximation for the hyperbolic part of ( [ eq1 ] ) at time can be approximated as where , , and .[ see ][remark 2 ] assume that the solution of ( [ eq1 ] ) is sufficiently regular . under the assumption of the function , we have thus , the fully discrete scheme corresponding to the variational formulation ( [ eq3.4 ] ) is to find , for any , such that where .define the bilinear forms by and the linear form we can rewrite ( [ eq3.5 ] ) as a compact formulation : find at time , such that section focuses on providing the proof of the unconditional stability and the error estimates of the schemes . in the following, indicates a generic constant independent of and , which takes different values in different occurrences .[ ][lemma4.1 ] if , for any function and each , there is where .[ theorem4.2 ] if , the hdg scheme ( [ eq3.5 ] ) is stable , i.e. , for any integer , there is where , and the semi - norm is defined as let in the equations of ( [ eq3.6 ] ) , respectively . 
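To make the characteristic-tracing step above concrete, the following minimal sketch applies the same idea in a plain 1D finite-difference setting: each grid node is traced back along the convective field over one time step, and the previous solution is interpolated at the foot point. This is only an illustration of the technique, not the paper's 2D HDG scheme; the velocity field `b`, the grid, and the time step are assumptions made for the example.

```python
# Minimal sketch of the characteristic-tracing (modified method of
# characteristics) step for the convective part u_t + b(x) u_x = 0.
# Illustrative 1D finite-difference version, not the paper's 2D HDG scheme.
import numpy as np

def advect_by_characteristics(u_prev, x, b, dt):
    """One time step: trace each node back along the characteristic and
    interpolate the previous solution at the foot point x* = x - b(x)*dt."""
    x_foot = x - b(x) * dt                      # first-order backward tracing
    x_foot = np.clip(x_foot, x[0], x[-1])       # keep foot points inside the domain
    return np.interp(x_foot, x, u_prev)         # piecewise-linear interpolation

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 201)
    u = np.exp(-200.0 * (x - 0.3) ** 2)         # initial bump
    b = lambda x: 0.5 + 0.0 * x                 # assumed constant convective field
    dt = 0.002
    for _ in range(200):
        u = advect_by_characteristics(u, x, b, dt)
    print("bump centre is now near x =", x[np.argmax(u)])
```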
by the symmetry of the bilinear formulas , adding the above equations , we obtain following from the young inequality , the definition of and , and lemma [ lemma4.1 ] , we have summing from n=1,2, ... ,n , we get using the discrete grnwall inequality , with , there is in this subsection we state and discuss the error bounds for the hdg scheme .the main steps of our error analysis follow the classical methods in finite element analysis , i.e. , the so - called galerkin orthogonality property . as usual, we denote the errors by where and are the -projection and -projection operators from and onto the finite element spaces and , respectively . from ( [ eq3.6 ] ), we obtain the compact form where assume that the solution of problem ( [ eq1 ] ) is sufficiently regular. then by the consistency of the numerical fluxes , the exact solution satisfies ( [ eq3.4 ] ) . taking and subtracting ( [ eq3.5 ] ) from ( [ eq3.4 ] ) yield and by the galerkin orthogonality , there is substituting the equalities ( [ erro2 ] ) and ( [ erro3 ] ) into ( [ erro1 ] ) leads to the desired result .next we review two lemmas for our analysis .the first one is the standard approximation result for the -projection operator from onto satisfying for any .the second one is the standard trace inequality .[ ][lemma4.3 ] let . is the -projection operator from onto such that for any .then , for [ lemma4.4 ] there exists a generic constant being independent of , for any , such that now we are ready to prove our main results . in this subsection, we estimate the first left - side term of ( [ erro ] ) .[ lemma4.5 ] if , for any function and each , where .the following result is a straightforward consequence of the estimate of the first left - side term of ( [ erro ] ) .[ theorem4.6 ] assume that the solution of problem ( [ eq1 ] ) is sufficiently smooth and satisfies ( [ eq3.5 ] ) .if , we have from ( [ erro ] ) , it can be noted that using lemma [ lemma4.1 ] , we obtain where . also by the taylor expansion and the hlder inequality , there are and where and follow from cauchy - schwarz s inequality , young s inequality and lemma [ lemma4.5 ] . substituting into ( [ eq4.16 ] ) , the desired result is reached . in this subsection, we use the general analytic methods to get the bound of the right side term of ( [ erro ] ) .[ theorem4.7 ] let be sufficiently smooth solution of ( [ eqloworder ] ) . are standard -projection operators of , and solve ( [ eq3.5 ] ) .if , we have from the definition of , we have using hlder s , young s inequalities and lemma [ lemma4.3 ] , we obtain from lemma [ lemma2.3 ] , lemma [ lemma4.3 ] , definition [ definition2.7 ] , definition [ definition2.11 ] , and theorem [ theorem2.10 ] , it follows that where and are chosen as sufficiently small numbers such that and . integrating the first term of by parts , and using the orthogonal property of projection operator , we get with the same deduction of , there is by lemma [ lemma4.3 ] , we get note that and vanish because of the orthogonal property of the projection . substituting into ( [ eq4.18 ] ), the desired result is obtained .assuming that the solution of ( [ eq1 ] ) is sufficiently regular , we have the following error estimates .[ theorem4.8 ] let be the exact solution of ( [ eqloworder ] ) , the numerical solution of the fully discrete hdg scheme ( [ eq3.5 ] ) .if , for any integer , there is where , , , and are chosen as above , is dependent of . 
substituting the results of theorem [ theorem4.6 ] and theorem [ theorem4.7 ] into ( [ erro ] ), there is with , multiplying the above inequality by on both sides , summing over from to , and using the discrete grnwall inequality , there is by the triangle inequality , we obtain the desired result .in this section , we illustrate the numerical performance of the proposed schemes by the numerical simulations of two examples . in the first example , we take the vector function and verify the accuracy of the schemes with the exact smooth solution combining with the left fractional riemann - liouville derivatives with respect to -variable and -variable , respectively . when we compute the fractional integral part in triangular meshes( see figures 1 - 2 ) , the gauss points and weights are used to deal with the terms relating with the fractional operators element - by - element ( see ) . sincethis part needs more time and memory spaces ( see ) , we only use the piecewise linear basis functions to simulate the solution in triangular meshes .tables 1 - 3 illustrate that the schemes have a good convergence order with piecewise linear basis function for different choices of the fluxes . in the second example, we take to be a vector function and perform some numerical experiments with some figures ( see figures 3 - 4 ) which justify that the schemes simulate the solution very well for 2d - fractional convection - diffusion problems . consider 2d space - fractional convection - diffusion problem ( [ eq1 ] ) in domain .the initial condition and the exact solution are specified as then the force term is determined accordingly from ( 1.1 ) . in this case , we present a few results to numerically validate the analysis . for the numerical simulations , in order to validate the stability and the accuracy of the presented hdg scheme , we choose the time - stepsize , , used to advance the discrete formulation from to .the experimental convergence rate is given by .the -errors and convergence rates for and for example 5.1 .[ cols="^,^,^,^,^,^,^,^ " , ] in table 1 and table 2 we choose different observation time and to justify that the convergence rates at least have an order of for the solution in -norms based on the piecewise linear basis function . in table 3we take the same choice of as ref . and see that the convergence rates increase to ( see the explanations in ref . ) . comparing the numerical results with the work , we can see that the hdg method has smaller numerical errors for the first order polynomial approximation .* example 5.2 . * in this example , we investigate the approximation solution of problem ( [ eq1 ] ) . for convenience, we still choose the domain .the exact solution , initial value and the vector function are given by for the second example , in order to further support the theoretical convergence and justify the powerful hdg scheme , we take to be nonzero vector function and give some approximation solutions with the refining space - step to compare with the exact solutions and display the efficiency of the simulations . figure 3 displays the exact solution and the numerical solutions based on different space stepsizes at with , .figure 4 displays the exact solution and the numerical solutions based on different space stepsizes at with .it is clear that the exact solution of example 5.2 is nonnegative with four hills . 
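The experimental convergence rates referred to above are ordinarily extracted from errors on successively refined meshes as rate = log(E_1/E_2)/log(h_1/h_2). The short sketch below shows this standard calculation; the mesh sizes and error values are placeholders for illustration, not the entries of Tables 1-3.

```python
# Standard extraction of an experimental order of convergence from errors on
# successively refined meshes: rate = log(E1/E2) / log(h1/h2).
# The mesh sizes and errors below are placeholder values, not data from the paper.
import math

def convergence_rates(h, errors):
    return [math.log(errors[i] / errors[i + 1]) / math.log(h[i] / h[i + 1])
            for i in range(len(h) - 1)]

h      = [1 / 4, 1 / 8, 1 / 16, 1 / 32]            # hypothetical mesh sizes
errors = [2.1e-2, 5.6e-3, 1.5e-3, 3.9e-4]          # hypothetical L2-errors
print(convergence_rates(h, errors))                # values near 2 suggest second order
```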
in the simulations ,the -hdg solutions recover the exact solution perfectly with all four hills in coarse meshes .note that the numerical results display that the approximations are more and more accurate with the refining of the meshes .by carefully introducing the auxiliary variables , constructing the numerical fluxes , adding the penalty terms , and using the characteristic method to deal with the time derivative and convective term , we design the effective hdg schemes to solve 2d space - fractional convection - diffusion equations with triangular meshes . as we know, this work is the first time to deal two - dimensional space - fractional convection - diffusion equation with triangular mesh by the dg method .the stability and error bounds analysis are investigated . besides the general advantages of hdg method , the presented scheme is shown to have the following benefits : 1 )it is symmetric , so easy to deal with the fractional operators ; 2 ) theoretically , the stability can be more easily proved ; 3 ) the penalty terms make the error analysis more convenient ; 4 ) numerically verified to have efficient approximations ; 5 ) the schemes are performed very well in triangular meshes ; 6 ) it is possible to use this scheme to solve nonlinear equations which is the future research task .this work was partially supported by the national basic research ( 973 ) program of china under grant 2011cb706903 , the national natural science foundation of china under grant 11271173 and 11471150 , and the capes and cnpq in brazil .douglas , j. , russell , jr . , t. f. : numerical method for convection - dominated diffusion problems based on combining the method of characteristics with finite element or finite difference procedures .siam j. numer .19*(5 ) , 871 - 885 ( 1982 ) .ji , x. , tang , h. z. : high - order accurate runge - kutta ( local ) discontinuous galerkin methods for one- and two - dimensional fractional diffusion equations .5*(3 ) , 333 - 358 ( 2012 ) .liu , f. , zhuang , p. , anh , v. , turner , i. , burrage , k. : stability and convergence of the difference methods for the space - time fractional advection - diffusion equation .comp . * 191 * , 12 - 20 ( 2007 ) .
A hybridized discontinuous Galerkin method is proposed for solving 2D fractional convection-diffusion equations containing derivatives of fractional order in space on a finite domain. The Riemann-Liouville derivative is used for the spatial derivative. Combining the characteristic method and the hybridized discontinuous Galerkin method, the symmetric variational formulation is constructed. The stability of the presented scheme is proved. Theoretically, the convergence order is established for the corresponding models, and numerically better convergence rates are detected by carefully choosing the numerical fluxes. Extensive numerical experiments are performed to illustrate the performance of the proposed schemes. The first numerical example displays the convergence orders, while the second one justifies the benefits of the schemes. Both are tested with triangular meshes.

*AMS subject classifications:* 26A33, 35R11, 65M60, 65M12.
answer set programming ( asp ) , namely logic programming under the answer set semantics , is a constraint programming paradigm , which has been successfully deployed in many applications .recently , asp was extended to include constraints to facilitate reasoning with sets of atoms .these constraints include weight constraints , aggregates and abstract constraints . among them ,weight constraints and aggregates are the most widely used constraints in practice . in this paper , logic programs with weight constraints and aggregateswill be referred to as _ weight constraint _ and _ aggregate programs _ , respectively .the semantics of weight constraint programs , called the _ stable model semantics _ , is well established and implemented in a number of asp solvers .especially , the results of the asp solver competitions show that clasp is an efficient solver that implements this semantics . for aggregate programs , various semanticshave been proposed .the one proposed in ( previously in ) , called the _ ultimate stable semantics _ , is based on an iterative construction on partial interpretations .the same semantics is reformulated by and extended to logic programs with arbitrary abstract constraint atoms , which embodies a key concept called _ conditional satisfaction_. since this reformulation is conceptually simpler , as it does not resort to 3-valued logic , in this paper we call this semantics _ conditional satisfaction - based_. among the semantics for aggregate programs , this semantics is known to be the most conservative , in the sense that any answer set under this semantics is an answer set under others , but the reverse may not hold . the relationships of these semantics have been studied in . in this paper , we refer to the semantics based on conditional satisfaction as the _ answer set semantics_. despite the fact that weight constraint and aggregate programs are among the most popular classes of programs in practice , the relationship among them has not been fully studied , both in semantics and in representation . 
in this paper, we study the relationship between the stable model semantics and the answer set semantics .we show that for a broad class of weight constraint programs , called _ strongly satisfiable programs _ , the stable model semantics agrees with the answer set semantics .for example , weight constraint programs where weight constraints are upper bound free are all strongly satisfiable .this result is useful in that we are now sure that the known properties of the answer sets also hold for these programs .one important property is that any answer set is a _well - supported model _ , ensuring that any conclusion must be supported by a non - circular justification in the sense of .our study further reveals that for weight constraint programs where the stable model and answer set semantics disagree , stable models may be circularly justified .we then show that the gap between the two can be closed by a transformation , which translates an arbitrary weight constraint program to a strongly satisfiable program so that the answer sets of the original program are exactly the stable models of the translated program .we further demonstrate the precise difference between the two semantics using a more general logic programming framework , logic programs with nested expressions .we propose yet another transformation from weight constraint programs to logic programs with nested expressions which preserves the answer set semantics .we compare this transformation to the one given in , which is faithful to the stable model semantics .interestingly , the difference is small but subtle : given a weight constraint u ] expresses a collection of literals with weights , while in our transformation the satisfaction of the upper bound is interpreted directly as `` less than or equal to '' , in the interpretation is by negation - as - failure `` not greater than '' .the observation that the gap between the answer set and the stable model semantics can be closed by a transformation leads to an approach for computing answer sets of aggregate programs using the asp solvers that implement the stable models semantics of weight constraint programs .we propose such an approach where aggregate programs are encoded compactly as weight constraint programs and their answer sets are computed using a stable model solver .we conducted a series of experiments to evaluate this approach .the results suggest that representing aggregates by weight constraints is a promising alternative to the explicit handling of aggregates in logic programs .besides efficiency , another advantage is at the system level : an aggregate language can be built on top of a stable models solver with a simple front end that essentially transforms standard aggregates to weight constraints in linear time .this is in contrast with the state - of - the - art in handling aggregates in asp , which typically requires an explicit implementation for each aggregate .the paper is organized as follows .the next section gives preliminary definitions . in section[ relate ] we relate the stable model semantics with the answer set semantics .we first establish a sufficient condition for the two to coincide , and then discuss their differences . 
in section[ transform ] , we present a transformation to close the gap between the two semantics , followed by section [ aggr - wprogram ] where we show how to represent aggregate programs by weight constraint programs .further in section [ nested - expression ] , to pinpoint the precise difference between the stable model semantics and the answer set semantics for weight constraint programs , by proposing a transformation from weight constraint programs to logic programs with nested expressions which preserves the answer set semantics , and comparing this with that of .we implemented a prototype system called alparse and in section [ experiments ] we report some experimental results .section [ conclusion ] concludes the paper .a preliminary version of this paper has appeared as .the main extensions here include : ( i ) section [ nested - expression ] , where we propose a transformation from weight constraint programs to logic programs with nested expressions which preserves the answer set semantics - this transformation shows exactly what makes the answer set semantics differ from the stable model semantics ; ( ii ) section [ experiments ] , where experiments are expanded including the benchmarks for aggregate programs used in the 2007 asp solver competition ; and ( iii ) the proofs of all the theorems and lemmas .throughout the paper , we assume a fixed propositional language with a countable set of propositional atoms .a _ weight constraint _ is of the form \ , u \label{w - form}\end{aligned}\ ] ] where each , is an atom , and each atom and not - atom ( negated atom ) is associated with a _ weight_. atoms and not - atoms are also called _ literals _ ( the latter may be emphasized as _ negative _ literals).the literal set of a weight constraint , denoted , is the set of literals occurring in .the numbers and are the _ lower _ and _ upper bounds _ , respectively .the weights and bounds are real numbers .either of the bounds may be omitted in which case the missing lower bound is taken to be and the missing upper bound by .a set of atoms satisfies a weight constraint of the form ( [ w - form ] ) , denoted , if ( and only if ) , where satisfies a set of weight constraints if for every . a weight constraint is _ monotone _ if for any two sets and , if and , then ; otherwise , is _nonmonotone_. there are some special classes of nonmonotone weight constraints . is _ antimonotone _ if for any and , and imply ; is _ convex _if for any and such that , if and then for any such that we have . a _ weight constraint program _is a finite set of _ weight rules _ of the form where each is a weight constraint . given a ( weight ) rule of the above form , we will use to denote and the conjunction of the weight constraints in the body of the rule .we will use to denote the set of the atoms appearing in a program .weight constraint programs are often called _ lparse programs _ , which generally refer to the kind of non - ground , function - free logic programs one can write based on the lparse syntax .these programs are grounded before calling an asp solver . in this paper , for the theoretical study we always assume a given weight constraint program is ground . given a weight constraint program , if the head of each rule in is of the form ~1 ] , where is a literal , then we have a normal program . 
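As a concrete reading of the satisfaction condition above, the following sketch checks whether a set of atoms M satisfies a weight constraint l [a_1=w_{a_1}, ..., not b_1=w_{b_1}, ...] u in the usual lparse/smodels sense (the weights of atoms in M and of not-atoms whose atom is outside M are summed and compared with the bounds), and tests monotonicity by brute-force enumeration. The example constraint and its atom domain are assumptions chosen only to keep the enumeration tiny.

```python
# Minimal sketch of weight-constraint satisfaction in the lparse/smodels style:
# M satisfies  l [a1=w1,..., not b1=v1,...] u  iff
#   l <= (weights of atoms in M) + (weights of not-atoms whose atom is not in M) <= u.
# The brute-force monotonicity test is only for illustration on tiny domains.
from itertools import combinations

def weight(constraint, M):
    lower, upper, pos, neg = constraint            # pos/neg: dicts atom -> weight
    return (sum(w for a, w in pos.items() if a in M) +
            sum(w for b, w in neg.items() if b not in M))

def satisfies(constraint, M):
    lower, upper, pos, neg = constraint
    return lower <= weight(constraint, M) <= upper

def is_monotone(constraint, atoms):
    """Check, by enumeration, that satisfaction is preserved under supersets."""
    subsets = [set(c) for r in range(len(atoms) + 1)
               for c in combinations(atoms, r)]
    return all(satisfies(constraint, T)
               for S in subsets if satisfies(constraint, S)
               for T in subsets if S <= T)

# Example constraint  1 [a=1, b=1] 2  over the assumed domain {a, b, c}.
W = (1, 2, {"a": 1, "b": 1}, {})
print(satisfies(W, {"a"}), is_monotone(W, {"a", "b", "c"}))
```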
we will simply write a weight constraint ~1 ] can be transformed to ~4 ] in is not strongly satisfiable , since although satisfies the upper bound , its subset does not .both and are stable models by definition [ smodels - semantics ] .note that this is because and .but , is an answer set and is not , by definition [ w - program - ans ] . the reason that is not an answer set of is due to the fact that is derived by its being in .this kind of circular justification can be seen more intuitively below using equivalence substitutions .* the weight constraint is substituted with an equivalent aggregate : where . *the weight constraint is transformed to an equivalent one without negative literal , but with a negative weight , according to : {-1}\ ] ] * the weight constraint is substituted with an equivalent _ abstract constraint atom _ , where is a finite set of ground atoms called the _domain _ , and is a collection of subsets of called _ admissible solutions_. in this example , the set satisfies the abstract constraint atom , since the admissible solution in it is satisfied by . ] : for the claim of equivalence , note that for any set of atoms , we have : ] iff iff . for logic programs with abstract constraint atoms, it is often said that all of the major semantics coincide for programs with monotone constraints .for example , this is the case for the semantics proposed in .what is unexpected is that this is not the case for the stable model semantics for weight constraint programs . by the standard definition of monotonicity ,the constraint ~ 0 ] .we note that aggregates and can be encoded simply by substituting the weights in ( [ sum ] ) with and ( for avg the lower bound is also replaced by zero ) , respectively . + * * + let be an aggregate .the idea in the encoding of is that for a set of numbers , the maximum number in is greater than or equal to if and only if for each atom , two new literals and are introduced .the encoding consists of the following constraints .i \leq n\\ \label{max-2 } 0~[p(a_i)=-d_i , p^{+}(a_i)=d_i],~1\leq i \leq n\\ \label{max-3 } 0~[p(a_i)=d_i , p^{-}(a_i)=-d_i],~1\leq i \leq n\\ \label{max-4 } 1~[p(a_1)=d_1 , p^{+}(a_1)=d_1 , p^{-}(a_1)=-d_1 , \nonumber \\ ... , p(a_n)=d_n , p^{+}(a_n)=d_n , p^{-}(a_n)=-d_n]\\ \label{max-5 } 1~[p(a_1)=1 , ... , p(a_n)=1]\end{aligned}\ ] ] where . in the following , for any model of such an encoding, means and means . the constraints ( [ max-1 ] ) , ( [ max-2 ] ) and ( [ max-3 ] ) are used to encode . clearly ,if , we have and ; if , we have and ; and if , we have or . the constraint ( [ max-4 ] ) encodes the relation ( [ max - relation ] ) and the constraint ( [ max-5 ] ) guarantees that a model of is not an empty set . + * * + let be an aggregate .the idea in the encoding of is that for a set of numbers , the minimal number in is greater than or equal to if and only if similar to , the aggregate can be encoded by the following weight constraints .i \leq n\\ \label{min-2 } 0~[p^{+}(a_i)=d_i , p(a_i)=-d_i],~1\leq i \leq n\\ \label{min-3 } 0~[p^{-}(a_i)=-d_i , p(a_i)=d_i],~1\leq i \leq n\\ \label{min-4 } 0~[p(a_1)=d_1 , p^{+}(a_1)=-d_1 , p^{-}(a_1)=d_1 , \nonumber \\ ... ,p(a_n)=d_n , p^{+}(a_n)=-d_n , p^{-}(a_n)=d_n]~ 0\\ \label{min-5 } 1~[p(a_1)=1 , ... , p(a_n)=1]\end{aligned}\ ] ] where .the constraint ( [ min-1 ] ) , ( [ min-2 ] ) and ( [ min-3 ] ) are the same to the first three constraints in the encoding of ( except for the value of ) , respectively . 
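The relations underlying the MAX and MIN encodings above can be checked directly on a small instance: for a nonempty set of selected atoms, the maximum of the selected weights reaches the bound exactly when some selected weight does, and the minimum reaches the bound exactly when every selected weight does. The sketch below verifies both equivalences by enumeration; the weights and the bound are illustrative assumptions, not values taken from the paper.

```python
# Brute-force check, on a tiny instance, of the characterizations behind the
# MAX and MIN encodings: for a nonempty selection M,
#   MAX >= k  iff  some selected weight is >= k,
#   MIN >= k  iff  every selected weight is >= k.
from itertools import combinations

weights = {"p(a1)": 2, "p(a2)": 7, "p(a3)": 4}     # assumed atom -> d_i map
k = 4                                               # assumed bound

def max_ge(M):  return max(d for a, d in weights.items() if a in M) >= k
def min_ge(M):  return min(d for a, d in weights.items() if a in M) >= k
def some_ge(M): return any(a in M and d >= k for a, d in weights.items())
def all_ge(M):  return all(d >= k for a, d in weights.items() if a in M)

nonempty = [set(c) for r in range(1, len(weights) + 1)
            for c in combinations(weights, r)]
assert all(max_ge(M) == some_ge(M) and min_ge(M) == all_ge(M) for M in nonempty)
print("MAX/MIN characterizations verified on all", len(nonempty), "nonempty subsets")
```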
the constraint ( [ min-4 ] ) encodes the relation ( [ min - relation ] ) and the constraint ( [ min-5 ] ) guarantees that a model of is not an empty set .+ we note that all the encodings above result in weight constraints whose collective size is linear in the size of the domain of the aggregate being encoded . in the encoding of ( similarly for ) , the first three constraints are the ones between the literal and the newly introduced literals and . we call them _ auxiliary constraints_.the last two constraints code the relation between and , where .we call them _ relation constraints_. let be an aggregate , we denote the set of auxiliary constraints in by and the set of relation constraints by .if is aggregate , , or , we have that , because no new literals are introduced in their encodings .[ encoding - aggr ] the set of weight constraint ( [ sum ] ) , the set of weight constraints from ( [ max-1 ] ) to ( [ max-5 ] ) , and the set of weight constraints from ( [ min-1 ] ) to ( [ min-5 ] ) , are weight constraint encodings ( definition [ w - encoding ] ) of the aggregates , , and , respectively .the proof for the encoding of aggregate is straightforward .the proof for the encoding of aggregate is similar to that for , which we show below .let be a set of atoms and .suppose and .then , we can construct as follows : a. and , if and ; b. and , if and .we use to to denote the weight constraints in ( [ max-1 ] ) to ( [ max-5 ] ) .it is easy to check that the weight constraints , , and are satisfied by . since , we have and .therefore and are also satisfied by .so .let be a set of atoms and .since satisfies , and , we have and , for ; and , for ; and or , if . since and , there must be an , such that and .that is , .then , we have .we translate an aggregate program to a weight constraint program , denoted , as follows : 1 .for each rule of the form ( [ aggr - rule - form ] ) in , we include in a weight rule of the form where is the conjunction of the weight constraints that encode the aggregate ; and 2 .if there are newly introduced literals in the encoding of aggregates , the _ auxiliary rule _ of the form is included in , for each auxiliary constraint of each atom in the aggregates .note that a weight constraint program can be translated to a strongly satisfiable program using the translation given in section [ transform ] .we have the following theorem establishing the correctness of the transformation .[ thm - translation ] let be an aggregate program where the relational operator is not . for any stable model of , is an answer set of . for any answer set for ,there is a stable model of such that . the rules of the form ( [ transformed - rule - form ] ) are the translated counterpart of the rules in . the auxiliary rules of the form ( [ auxiliary - rule - form ] ) are added to enforce the auxiliary constraints .note that is a strongly satisfiable program .then the theorem follows from theorem [ ans - strong - programs ] and theorem [ encoding - aggr ] .* remark * for an aggregate where the relation operator is not , the aggregate can be encoded by a conjunction of weight constraints as we have shown in this section . in this case, logic equivalence leads to equivalence under conditional satisfaction .that is why we only need to ensure that an encoding is satisfaction - preserving .for an aggregate where the relation operator is , two classes are distinguished .one consists of aggregates of the forms .for these aggregates , the operator can be treated as the disjunction of the operators and . 
consider the aggregate . is logically equivalent to , where and .let and be two sets of atoms , it is easy to show that iff or .the other class consists of the aggregates of the forms , , , and . for these aggregates ,the operator can not be treated as the disjunction of and , since the conditional satisfaction may not be preserved .below is an example .consider the aggregates , , and .note that is logically equivalent to .consider and . while conditionally satisfies w.r.t . ( i.e. , ) , it is not the case that conditionally satisfies w.r.t . or conditionally satisfies w.r.t . .in this section , we further relate answer sets with stable models in terms of logic programs with nested expressions .we formulate a transformation of weight constraint programs to programs with nested expressions and compare this transformation to the one in .the comparison reveals that the difference of the semantics lies in the different interpretations of the constraint on the upper bounds of weight constraints in a program : while our transformation interprets it directly , namely as `` less than or equal to '' , the one in interprets it as `` not greater than '' , which may create double negations ( the atoms that are preceded by ` not ` ` not ` ) in nested expressions .it is the semantics of these double negations that differentiates the two semantics . in the language of nested expressions ,_ elementary formulas _ are atoms .the classical negation is irrelevant here . ] and symbols ( false ) and ( true ) ._ formulas _ are built from elementary formulas using the unary connective and the binary connectives , ( conjunction ) and ; ( disjunction ) .a rule with nested expressions is of the form where both and are formulas . for a rule of the form ( [ nested - rule - form ] ), we use and to denote the and the of , respectively .a program with nested expressions is a set of rules with nested expressions .the satisfaction of a formula by a set of atoms is defined as follows : * for a literal , if * * * if and * if or * if .the reduct of a formula with respect to a set of atoms , denoted , is defined recursively as follows : * for an elementary formula , * * * the reduct of a program with respect to a set of atoms is the set of rules for each rule of the form ( [ nested - rule - form ] ) in .the concept of a stable model is defined as follows .[ ] let be a logic program with nested expressions and a set of atoms . is a stable model of if is a minimal model of .we present a nested expression encoding , called the _ direct nested expression encoding _ of weight constraints .we show that conditional satisfaction of a weight constraint can be captured by the standard satisfaction of the reduct of the encoding of the weight constraint . in the rest of the paper, we will use the following notation : for a set of literals , we define and . given a weight constraint of the form ( [ w - form ] ) , the _ nested expression encoding _ of , denoted , is the formula \end{aligned}\ ] ] where is the set of atoms in . for and for .then we can rewrite this formula as \ ] ] ] intuitively , is a nested expression representing the sets that satisfy .let ] , where =[a_1 = w_{a_1}, ...,a_n = w_{a_n } , { \texttt{not } } b_1= w_{b_1}, ... ,{\texttt{not } } b_m= w_{b_m}] ] and ] is _ directly _ encoded by the sets of atoms that satisfy it . 
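A minimal executable sketch of the definitions above is given below: nested expressions are represented as tuples, satisfaction and the reduct are computed recursively, and stability is tested by checking minimality among the models of the reduct by enumeration. The single-rule example at the end, a <- not not a, previews the role of double negation discussed next: under this semantics it admits both the empty set and {a} as stable models. The tuple representation and helper names are choices made for this sketch only.

```python
# Nested expressions as tuples: an atom is a string, ("not", F), ("and", F, G),
# ("or", F, G); TRUE/FALSE are the constants below.  Rules are pairs (head, body).
from itertools import combinations

TRUE, FALSE = ("true",), ("false",)

def sat(F, M):
    if F == TRUE:  return True
    if F == FALSE: return False
    if isinstance(F, str): return F in M
    op = F[0]
    if op == "not": return not sat(F[1], M)
    if op == "and": return sat(F[1], M) and sat(F[2], M)
    if op == "or":  return sat(F[1], M) or sat(F[2], M)

def reduct(F, M):
    if F in (TRUE, FALSE) or isinstance(F, str): return F
    op = F[0]
    if op == "not": return FALSE if sat(F[1], M) else TRUE
    return (op, reduct(F[1], M), reduct(F[2], M))

def is_stable(program, M):
    red = [(reduct(h, M), reduct(b, M)) for h, b in program]
    def is_model(X): return all(sat(h, X) for h, b in red if sat(b, X))
    if not is_model(M): return False
    # minimality: no proper subset of M may be a model of the reduct
    return not any(is_model(set(c))
                   for r in range(len(M)) for c in combinations(M, r))

# Example: a <- not not a  has stable models {} and {a} under this semantics.
prog = [("a", ("not", ("not", "a")))]
for M in (set(), {"a"}):
    print(M, is_stable(prog, M))
```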
in the fl - translation , ] , where ] , possibly creating double negations .this difference is the _ only _ reason that the stable models of our translated program are the answer sets of the original program while the stable models of the fl - translated program are the stable models of the original program .it should be clear that the extra stable models that are not answer sets are created by double negations generated by the indirect interpretation in the fl - translation .we use the following example for an illustration .consider the program in example [ key - example ] , which consist of a single rule \end{aligned}\ ] ] by our translation , consists of the only stable model of is , which is the unique answer set of . by the fl - translation , the translated program is the stable models of are and . among them ,the set is not an answer set , but it is justified by the stable model semantics through the double negation .the theoretical studies show that an aggregate program can be translated to a weight constraint program whose stable models are precisely the answer sets of the original program .this leads to a prototype implementation called alparse to compute the answer sets for aggregate programs . in alparse ,an aggregate program is firstly translated to a strongly satisfiable program using the translation given in section [ aggr - wprogram ] , then the stable models of the translated strongly satisfiable program are computed using an asp solver that implements the stable model semantics for weight constraint programs . in the next two subsections , we use smodels version 2.34 and clasp version 2.0.3 respectively as the underlying asp solver of alparse andcompare alparse with the implementations of aggregate programs smodels and dlv version 2007 - 10 - 11 .the experiments are run on scientific linux release 5.1 with 3ghz cpu and 1 gb ram .the reported time of alparse consists of the transformation time ( from aggregate programs to strongly satisfiable programs ) , the grounding time ( calling to lparse version 1.1.2 for smodels and gringo version 2.0.3 for clasp ) , and the search time ( by smodels or clasp ) .the time of smodels consists of grounding time , search time and unfolding time ( computing the solutions to aggregates ) .the time of dlv includes the grounding time and search time ( the grounding phase is not separated from the search in dlv ) .all times are in seconds . in this section ,we compare our approach with two systems , smodels and dlv . * comparison with smodels * we compare the encoding approach proposed in last section to the unfolding approach implemented in the system smodels .ielkaban / asp - aggr.html . ] the aggregates used in the first and second set of problems ( the company control and employee raise problems ) are ; the third set of problems ( the party invitation problems ) are , and the fourth and fifth set of problems ( the nm1 and nm2 , respectively ) are and , respectively .the experimental results are reported in table [ smodels - a ] , where the `` sample size '' is measured by the argument used to generate the test cases .the times are the average of one hundred randomly generated instances for each sample size .the results show that smodels is often faster than smodels , even though both use the same search engine .scale - up could be a problem for smodels , due to exponential blowup .for instance , for an aggregate like , smodels would list all _ aggregate solutions _ in the unfolded program , whose number is . 
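The blow-up mentioned above can be made quantitative with a small count. Assuming, purely for illustration, a COUNT-style aggregate over an n-atom domain with bound n/2, the number of aggregate solutions an unfolding approach would enumerate grows combinatorially, while the weight-constraint encoding stays linear in n:

```python
# Illustration of the blow-up argument: explicit unfolding of an aggregate into
# its solutions grows combinatorially with the domain size, whereas the
# weight-constraint encoding stays linear.  A COUNT-style aggregate is assumed.
from math import comb

def num_aggregate_solutions(n, k):
    # subsets of an n-atom domain with at least k atoms true
    return sum(comb(n, i) for i in range(k, n + 1))

for n in (10, 20, 30):
    k = n // 2
    print(f"n={n:2d}: {num_aggregate_solutions(n, k):>12d} aggregate solutions "
          f"vs. {n} literals in the weight-constraint encoding")
```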
for a large domain and being around ,this is a huge number .if one or a few solutions are needed , alparse takes much less time to compute the corresponding weight constraints than smodels .* comparison with dlv * in the seating problem was chosen to evaluate the performance of dlv .the problem is to generate a sitting arrangement for a number of guests , with tables and chairs per table .guests who like each other should sit at the same table ; guests who dislike each other should not sit at the same table .the aggregate used in the problem is .we use the same setting to the problem instances as in .the results are shown in table [ seating ] .`` tables '' and `` chairs '' are the number of tables and the number of chairs at each table , respectively .the instance size is the number of atom occurrences in a ground program .we report the result of the average over one hundred randomly generated instances for each problem size .the experiments show that , by encoding logic programs with aggregates as weight constraint programs , alparse solves the problem efficiently . for large instances ,the running time of alparse is about one order of magnitude lower than that of dlv and the sizes of the instances are also smaller than those in the language of dlv .we use the benchmarks reported in an asp solver competition and run all instances for each benchmark . in the experiments, we set the cutoff time to 600 seconds .the instances that are solved in the cutoff time are called `` solvable '' , otherwise `` unsolvable '' .table [ alparse - summary ] is a summary of the results . in the table ,the time is the average running time in seconds for the solvable instances .it can be seen that alparse constantly outperforms dlv by several orders of magnitude , except for the benchmark of towers of hanoi .the system clasp has progressed to support aggregates , and .the aggregates used in the benchmarks are except for towers of hanoi where the aggregate is used .the aggregate is essentially the same as weight constraints .we compare the clasp programs with the aggregate and the corresponding translated weight constraint programs ( note that , the answer sets of this aggregate program correspond to those of the corresponding weight constraint program ) .the performances of clasp on these two kinds of programs are similar .as we have mentioned , the transformation approach indicates that it is important to focus on an efficient implementation of aggregate rather than on the implementation of other aggregates one by one , since they can be encoded by .can be translated to , using a logarithm transformation , thanks to tomi janhunen for the comments during the presentation of . ].benchmarks used by smodels [ cols="<,>,>,>",options="header " , ]we have shown that for a large class of programs the stable model semantics coincides with the answer set semantics based on conditional satisfaction . in general , answer sets admitted by the latter are all stable models . 
when a stable model is not an answer set , it may be circularly justified .we have proposed a transformation , by which a weight constraint program can be translated to strong satisfiable program , such that all stable models are answer sets and thus well - supported models .we have also given another transformation from weight constraint programs to logic programs with nested expressions which preserves the answer set semantics .in conjunction with the one given in , their difference reveals precisely the relation between stable models and answer sets . as an issue of methodology, we have shown that most standard aggregates can be encoded by weight constraints .therefore the asp systems that support weight constraints can be applied to efficiently compute the answer sets of logic programs with aggregates .the experimental results demonstrate the effectiveness of this approach .currently , alparse does not handle programs with aggregates like or , due to the fact that the complexity of such programs is higher than .what is the best way to include this practically requires further investigation .
Weight constraint and aggregate programs are among the most widely used logic programs with constraints. In this paper, we relate the semantics of these two classes of programs, namely the stable model semantics for weight constraint programs and the answer set semantics based on conditional satisfaction for aggregate programs. Both classes of programs are instances of logic programs with constraints, and in particular, the answer set semantics for aggregate programs can be applied to weight constraint programs. We show that the two semantics are closely related. First, we show that for a broad class of weight constraint programs, called _strongly satisfiable programs_, the two semantics coincide. When they disagree, a stable model admitted by the stable model semantics may be circularly justified. We show that the gap between the two semantics can be closed by transforming a weight constraint program to a strongly satisfiable one, so that no circular models may be generated under the current implementation of the stable model semantics. We further demonstrate the close relationship between the two semantics by formulating a transformation from weight constraint programs to logic programs with nested expressions which preserves the answer set semantics. Our study of the semantics leads to an investigation of a methodological issue, namely the possibility of compact representation of aggregate programs by weight constraint programs. We show that almost all standard aggregates can be encoded compactly by weight constraints. This makes it possible to compute the answer sets of aggregate programs using the ASP solvers for weight constraint programs. This approach is compared experimentally with ones where aggregates are handled more explicitly; the results show that the weight constraint encoding of aggregates enables a competitive approach to answer set computation for aggregate programs.

Keywords: stable model, weight constraint, aggregates, logic programs with constraints.
in this work we study rotating waves in rings of neurons described by the theta model . the theta model , which is derived as a canonical model for neurons near a ` saddle - node on a limit cycle ' bifurcation , assumes the state of the neuron is given by an angle , with , corresponding to the ` firing ' state , and the dynamics described by where represents the inputs to the neuron . when this represents an ` excitable ' neuron , which in the absence of external input ( ) approaches a rest state , while if this represents an ` oscillatory ' neuron which performs spontaneous oscillations in the absence of external input .a model of synaptically connected neurons on a continuous spacial domain takes the form : ,\ ] ] where is a positive function and is defined by here ( ) measures the synaptic transmission from the neuron located at , and according to ( [ syn]),([defp ] ) it decays exponentially , except when the neuron fires ( _ i.e. _ when , ) , when it experiences a jump .( [ de ] ) says that the neurons are modelled as theta - neurons , where the input to the neuron at , as in ( [ thn ] ) , is given by ( here assumed to be positive ) describes the relative strength of the synaptic coupling from the neuron at to the neuron at , while is a parameter measuring the overall coupling strength .the above model , in the case , is the one presented in . in the case this model is the one presented in ( remark 2 ) and .we always assume . when the geometry is linear , , it is natural to seek travelling waves of activity along the line in which each neuron makes one or more oscillations and then approaches rest .in it was proven that for sufficiently strong synaptic coupling , at least two such waves , a slow and a fast one , exist , and also that they always involve each neuron firing more than one time before it approaches rest , while for sufficiently small such waves do not exist .it was not determined how many times each neuron fires before coming to rest , and it may even be that each neuron fires infinitely many times .some numerical results in the case of a one and a two - dimensional geometry were obtained in . in this workwe consider a different possibility for the spacial geometry : , so the neurons are placed on a ring and our equations are ( [ syn ] ) and with where is continuous , positive and periodic and the solutions satisfy the periodicity conditions the integer ( the ` winding number ' ) is determined by the initial condition , and will be preserved as long as the solution remains continuous . in this geometry ,a different kind of wave of activity is possible : a wave that rotates around the ring repeatedly .such waves , that is solutions of the form : where is the wave velocity , are the focus of our investigation . in section [ pre ]we show that in the case that the winding number , there can exist only trivial rotating waves .thus the interesting cases are when . 
herewe study the case , the case being beyond our reach .thus , this work concentrates on the first non - trivial case .our central results about existence , nonexistence and multiplicity of rotating waves can be summarized as follows ( see figures [ syn1],[syn2 ] for the _ simplest _ diagrams consistent with these results ) : [ sum ] consider the equations ( [ ge]),([syn ] ) with conditions ( [ bnt ] ) , ( [ bns ] ) , and .\(i ) in the oscillatory case : for all there exists a rotating wave , with velocity going to as .\(ii ) in the excitable case : \(i ) for sufficiently small there exist no rotating waves .\(ii ) for sufficiently large there exist at least two rotating waves , a ` fast ' and a ` slow ' one , in the sense that their velocities approach and , respectively , as . therefore our results bear resemblance to those obtained in for the case of a linear geometry .we note that although for the rotating waves found here each neuron fires infinitely many times , the reason for this is that it is re - excited each time , because of the periodic geometry . during each revolution of the rotating wave ,each neuron fires once , so naively one could think that the analogous phenomenon in a linear geometry would be a travelling wave with each neuron firing once - but this was shown to be impossible in .it is interesting to note that while in some restrictions were made on the coupling function , like being decreasing with distance , here no such restrictions are imposed beyond ( [ jpos ] ) , ( [ jper ] ) .we would expect however that some restriction would need to be imposed on in order to obtain stability of travelling waves .the whole issue of stability remains quite open and awaits future investigation . in the case , both numerical evidence in and results obtained in other models indicate that the fast wave is stable while the slow wave is unstable , so we might conjecture that this is true for the case investigated here as well - at least under some natural assumptions on .some analytical progress on the stability question in the case has recently been achieved in .let us note that the model considered here , in the case , describes waves in an excitable medium , about which an extensive literature exists ( see and references therein ) . however , most models consider diffusive rather than synaptic coupling . in the case of the theta model on a ring , with _ diffusive _ coupling , and , it is proven in that a rotating wave exists regardless of the strength of coupling ( _ i.e. _ the diffusion coefficient ) , so that our results highlight the difference between diffusive and synaptic coupling . in section [ reduction ]we reduce the study of rotating waves to the investigation of the zeroes of a function of one variable . in section [ constant ]we investigate the special case in which the coupling is uniform ( is a constant function ) , which , although artificial from a biological point of view , allows us to obtain closed analytic expressions for the wave - velocity vs. 
coupling - strength curves in an elementary fashion .we can thus gain some intuition for the general case , and obtain information which is unavailable in the case of general , like precise multiplicity results .it is interesting to investigate to what extent the more precise results obtained in the uniform - coupling case extend to the general case , and we shall indicate several questions , which remain open , in this direction .in section [ general ] we turn to the case of general coupling functions , and prove the results of theorem [ sum ] above , obtaining also some quantitative estimates : lower and upper bounds for the critical values of synaptic coupling coupling strength , as well as for the wave velocities .we begin with an elementary calculus lemma which is useful in several of our arguments below .[ cut ] let be a differentiable function , and let , , be constants such that we have the following property : then the equation has at most one solution . assume by way of contradiction that the equation has at least two solutions . define by is nonempty because .let .by continuity of we have .we have either or , and we shall show that both of these possibilities lead to contradictions . if , then by ( [ kp ] ) we have so we conclude that there exists with , contradicting the definition of . if then is a limit - point of , which implies that , contradicting ( [ kp ] ) .these contradictions conclude our proof .turning now to our investigation , we note a few properties of the functions and defined by ( [ spec ] ) which will be used often in our arguments : plugging ( [ tw1]),([tw2 ] ) into ( [ ge ] ) , ( [ syn ] ) , and setting we obtain the following equations for , : in order to satisfy the boundary conditions ( [ bnt]),([bns ] ) , and have to satisfy us first dispose of the case of zero - velocity waves , .we get the equations if there exists some with , , then , substituting into ( [ teq00 ] ) and using ( [ hpi]),([wz ] ) , we obtain , a contradiction. hence we must have which implies that , so that ( [ req00 ] ) gives , and ( [ teq00 ] ) reduces to , and thus is a constant function , the constant being a root of .this implies , first of all , that the winding number is , since a constant can not satisfy ( [ bnp0 ] ) otherwise .in addition the function must vanish somewhere , which is equivalent to the condition .we have thus proven [ zvel ] zero - velocity waves exist if and only if and , and in this case they are just the stationary solutions we will now show that the trivial waves " of lemma [ zvel ] are the only ones that occur for .[ mz ] assume .\(i ) if there are no rotating waves .\(ii ) if the only rotating waves are those given by lemma [ zvel ] .assume is a solution of ( [ teq0]),([req0 ] ) satisfying ( [ bnp0 ] ) with , _i.e. _ and ( [ bnr0 ] ) .we also assume , otherwise we are back to lemma [ zvel ]. we shall prove below that must satisfy ( [ npi ] ) , and hence that , so that by ( [ req0]),([bnr0 ] ) we have , so that ( [ teq0 ] ) reduces to since , if has no roots ( ) , ( [ red ] ) has no solutions satisfying ( [ bnp00 ] ) . if does have roots ( ) then the only solutions of ( [ red ] ) satisfying ( [ bnp00 ] ) are constant functions , the constant being a root of , and we are back to the same solutions given in lemma [ zvel ] , which indeed can be considered as rotating waves with arbitrary velocity .it remains then to prove that ( [ npi ] ) must hold . assume by way of contradiction that for some integer . 
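Numerically, the reduction of rotating waves to the zeroes of a single scalar function can be exploited by bracketing and bisection, as in the sketch below. The function F used here is a placeholder with one slow and one fast root, standing in for the actual scalar equation ([tt]), which depends on β, g, and the coupling J.

```python
# Bracketing-and-bisection sketch for locating zeroes of the scalar reduction.
# F is a placeholder stand-in with a "slow" and a "fast" root; it is not the
# actual equation of the paper.
def bisect(F, a, b, tol=1e-10):
    fa = F(a)
    assert fa * F(b) < 0, "root must be bracketed"
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = F(m)
        if fm == 0.0:
            return m
        if fa * fm < 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

F = lambda lam: (lam - 0.3) * (lam - 2.0)      # placeholder scalar function
print("slow root ~", round(bisect(F, 0.1, 1.0), 6),
      " fast root ~", round(bisect(F, 1.0, 5.0), 6))
```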
by ( [ hpi]),([wz]),([teq0 ] ) , and the assumption , we have thus the assumptions of lemma [ cut ] , with , , , are satisfied , and we conclude that the equation has at most one solution , contradicting the fact that , by ( [ bnp00 ] ) , we have .having found all possible rotating waves in the case , we can now turn to the case . in fact , as was mentioned in the introduction , we shall treat the case , the cases being harder . by lemma [ zvel ] we know that there are no zero - velocity waves , so we can assume and define so that our equations for the rotating waves can be rewritten with periodic conditions study the equations ( [ teq]),([req ] ) for with periodic conditions ( [ bnp]),([bnr ] ) .we will derive a scalar equation ( see ( [ tt ] ) below ) so that rotating waves are in one - to - one correspondence with solutions of that equation .we note first that , since by ( [ bnr ] ) we have , and since any rotating wave generates a family of other rotating waves by translations , we may , without loss of generality , fix the following lemma shows that for a rotating wave ( in the case ) there is at any specific time a unique neuron on the ring which is firing .this fact is very important for our analysis .[ one ] assume satisfy ( [ teq]),([req ] ) with conditions ( [ bnp]),([bnr]),([fix ] ) .then ( [ one2 ] ) follows from ( [ one1 ] ) by ( [ bnp ] ) . to prove ( [ one1 ] ) , we first note that certainly , since and ( [ teq ] ) imply is constant , contradicting ( [ bnp ] ) .we note the key fact that , by ( [ teq]),([hpi ] ) and ( [ wz ] ) , by lemma [ cut ] , ( [ kf ] ) implies that the equation has at most one solution for each .in particular , since , we have for , and by continuity of this implies ( [ one1 ] ) .let us note that if we knew that for rotating waves the function must be monotone , then lemma [ one ] would follow immediately from ( [ fix ] ) .is it true in general that rotating wave solutions are monotone ( for m>0 ) ?[ lampos ] assume satisfy ( [ teq]),([req ] ) with conditions ( [ bnp]),([bnr]),([fix ] ). then . in other words , for all rotating waves with ,so the waves rotate clockwise .of course in the symmetric case the waves will rotate counter - clockwise . by ( [ fix ] ) and ( [ kf ] )we have .we have already noted that .if were negative , then would be decreasing near , so for small we would have , contradicting ( [ one1 ] ) .our next step is to solve ( [ req]),([bnr ] ) for , in terms of .we will use the following important consequence of lemma [ one ] : [ ret ] by lemma [ one ] we have so we will show that let be a test function . using lemma [ one ]again we have where is arbitrary .in particular , since , we may choose sufficiently small so that for , so that we can make a change of variables , obtaining this proves ( [ jon ] ) , completing the proof of the lemma . by lemma [ ret ] we can rewrite equation ( [ req ] ) on the interval as the solution of which is given by where is the heaviside function : for , for . 
substituting into ( [ gs ] ) and using ( [ bnr ] ) ,we obtain an equation for whose solution is and substituting this back into ( [ gs ] ) , we obtain that the solution of ( [ req]),([bnr ] ) which we denote by in order to emphasize the dependence on the parameter , is given on the interval by \;\;\;\ ; 0<|z|<2\pi.\ ] ] we note that , for general , is given as the -periodic extension of the function defined by ( [ fr ] ) from ] , hence where the standard theorems on dependence of solutions of initial - value problems on parameters hence imply uniformly in $ ] , where satisfies and . since , by ( [ wz ] ) , the constant function is a solution to this initial - value problem , the uniqueness theorem for initial - value problems implies that . in the case of constant the resultcan be proven by direct computation , so we now assume is not a constant function .we will show that when we have ,\ ] ] .\ ] ] together with ( [ aps ] ) , these imply the result of our lemma .to prove our claim we note that , using ( [ tww]),([hz]),([wnn ] ) , part ( ii ) of lemma [ ulb ] ( which is why we need the assumption that is non - constant ) and ( [ cl ] ) we now show that ( [ kee ] ) implies ( [ cons1 ] ) .if ( [ cons1 ] ) fails to hold , then we set \;|\ ; \phi_{\lambda}(z)= 2\pi\}.\ ] ] this number is well - defined by continuity and by the fact that , which implies also that . by ( [ kee ] )we have , but this implies that is decreasing in a neighborhood of , and in particular that there exist satisfying but this contradicts the definition of , and this contradiction proves ( [ cons1 ] ) .similarly , assuming ( [ cons2 ] ) does not hold and defining \;|\ ; \phi_{\lambda}(z)=0\},\ ] ] we conclude that and , so that is decreasing in a neighborhood of , and this implies a contradiction to the definition of and proves that ( [ cons2 ] ) holds .this concludes the proof of the lemma .let us note that since implies that ( [ tt ] ) does nt hold , and since , we can reformulate the previous lemma as a lower bound for the velocities of rotating waves in the excitable case. we shall show that if then for all , and thus that equation ( [ tt ] ) can not hold . by ( [ aps ] ) , ( [ sm ] )is equivalent to we note that , by lemma [ ell ] , we already have ( [ sm ] ) when ( [ cl ] ) holds , hence we may assume we define and we note that ( [ aaa ] ) is equivalent to the statement that using ( [ tww ] ) and lemma [ ulb ] we have \nonumber\\&\leq & \lambda \big[1-\cos(\phi_{\lambda}(x))+ \big(\beta+\frac{g}{2\lambda}\rho_c(\lambda)\max_{x\in{\mathbb{r}}}{j(x)}\big ) ( 1+\cos(\phi_{\lambda}(z)))\big]\nonumber\\&= & \lambda [ ( \mu + 1)+(\mu-1)\cos(\phi_{\lambda}(z))].\end{aligned}\ ] ] which implies ( note that the integral below is well - defined because of ( [ mgz ] ) ) making the change of variables we obtain if we assume , by way of contradiction , that ( [ din0 ] ) does not hold , _i.e. _ that , then , using ( [ form ] ) , so together with ( [ ha0 ] ) we obtain which is equivalent to which contradicts .this contradiction proves ( [ din0 ] ) , concluding the proof of the theorem .[ two ] in the excitable case , if there exists some with then there exist at least two solutions of ( [ tt ] ) with , hence two rotating waves , with velocities satisfying by lemma [ psiprop ] , we can choose so that .by lemma [ ell0 ] , if we fix then for all , and in particular it follows that .we thus have with thus by the intermediate value theorem , the equation ( [ tt ] ) has a solution and a solution , corresponding to two rotating waves . 
by ( [ aps ] ) , our claim is equivalent to we define and we note that ( [ kein ] ) is equivalent to using ( [ tww ] ) and lemma [ ulb ] we have \nonumber\\&\geq & \lambda\big[1-\cos(\phi_{\lambda}(x))+ \big(\beta+\frac{g}{2\lambda}\rho_c(\lambda)\min_{x\in{\mathbb{r}}}{j(x)}\big ) ( 1+\cos(\phi_{\lambda}(z)))\big]\nonumber\\&= & \lambda [ ( \eta + 1)+(\eta-1)\cos(\phi_{\lambda}(z))],\end{aligned}\ ] ] which implies making the change of variables , we obtain if we assume , by way of contradiction , that ( [ din ] ) does not hold , _i.e. _ that , then , using ( [ form ] ) , so together with ( [ ha ] ) we obtain this contradicts ( [ eg2 ] ) , and this contradiction implies that ( [ din ] ) holds , completing our proof . [ ts ] in the excitable case , let where is defined by ( [ defom ] ) .then when , there exist at least two rotating waves .in fact , we have a ` slow ' wave with velocity bounded from above by and a ` fast wave ' with velocity bounded from below by where are the functions defined by ( [ vdef ] ) .to prove ( [ sw]),([fw ] ) , we note that , assuming , the range of values of for which ( [ kein ] ) holds is the interval where the functions are defined in section [ constant ] .thus , applying lemma [ two ] with where is arbitrarily small , we obtain the existence of a solution of ( [ tt ] ) with .since is arbitrary , we have a solution of ( [ tt ] ) with , hence a rotating wave with velocity satisfying ( [ sw ] ) . similarly applying lemma [ two ] with , we obtain the existence of a wave with velocity satisfying ( [ fw ] ) .theorems [ noex ] and [ ts ] show that several of the qualitative features that we saw explicitly in the case of uniform coupling ( section [ constant ] ) remain valid in the general case .it is natural to ask whether more can be said , _e.g. _ , whether the following conjecture , or some weakened form of it , is true : for any , there exists a value such that : [ osct ] [ osc ] if , there exists a rotating wave solution for any value of , with velocity bounded from below by where is the function defined by ( [ vdef1 ] ) , and the asymptotic formulas ( [ fwe0]),([fwe ] ) hold with replaced by . if , then for any the equation has the unique solution hence , any satisfies ( [ kein ] ) , so that by lemma [ glar ] . on the other hand for small we have , by lemma [ psiprop ] , .hence there exists a solution of ( [ tt ] ) . since is arbitrary ,we conclude that there exists a solution of ( [ tt ] ) .hence a rotating wave with velocity satisfying ( [ fw1 ] ) .investigate the question of stability of the rotating waves , _i.e. _ , do arbitrary solutions of ( [ ge ] ) , ( [ syn ] ) approach one of the rotating waves in large time ? we conjecture that , at least under some restrictions on , the rotating wave is stable in the case , while in the case the fast rotating wave is stable and the slow one is unstable .izhikevich , _ class 1 neural excitability , conventional synapses , weakly connected networks , and mathematical foundations of pulse - coupled models _ ,ieee trans .neural networks * 10 * ( 1999 ) , 499 - 507 .
we study rotating waves in the theta model for a ring of synaptically - interacting neurons . we prove that when the neurons are oscillatory , at least one rotating wave always exists . in the case of excitable neurons , we prove that no travelling waves exist when the synaptic coupling is weak , and at least two rotating waves , a ` fast ' one and a ` slow ' one , exist when the synaptic coupling is sufficiently strong . we derive explicit upper and lower bounds for the ` critical ' coupling strength as well as for the wave velocities . we also study the special case of uniform coupling , for which complete analytical results on the rotating waves can be achieved .
the permeability of porous media is an important parameter to predict the unconventional gas production . for laminar flows in highly permeable porous media ,the darcy s law states that the volume flow rate is proportional to the pressure gradient : where is the cross - section area of the flow , is the shear viscosity of the fluid , and is the permeability of a porous medium that is independent of the fluid . for this reason , known as the intrinsic permeability . for gas flows in low permeable porous media , however , the measured permeability is larger than the intrinsic permeability and increases with the reciprocal mean gas pressure . in order to distinguish it from the intrinsic permeability , the permeabilityis called the apparent permeability , which can be expressed as : where is the correction factor .the variation of the apparent permeability with respect to the mean gas pressure is due to the rarefaction effects , where infrequent collisions between gas molecules not only cause the gas slippage at the solid surface , but also modify the constitution relation between the stress and strain - rate .the extent of rarefaction is characterized by the knudsen number ( i.e. the ratio of the mean free path of gas molecules to the characteristic flow length ) : where is the shear viscosity of the gas at a reference temperature , and is the gas constant .gas flows can be classified into four regimes ; for gas flows in porous media , the region of for different flow regimes may change . ] : continuum flow in which navier - stokes equations ( nses ) can be used ; slip flow where nses with appropriate velocity - slip / temperature - jump boundary conditions may be used ; transition flow and free - molecular flow , where nses break down and the boltzmann equation is used to describe rarefied gas flows .recently , based on nses with the first - order velocity - slip boundary condition ( fvbc ) , found that the apparent gas permeability of the porous media is a nonlinear function of .this result , however , is questionable , because the nses were used beyond its validity . through our theoretical analysis and numerical calculations ,we show that nses with fvbc can only predict the apparent permeability of porous media to the first - order accuracy of .consider a gas flowing through the periodic porous media .suppose the geometry along the -direction is uniform and infinite , the gas flow is effectively two - dimensional and can be studied in a unit rectangular cell abcd , with appropriate governing equations and boundary conditions ; one example of the porous medium consisting of a periodic array of discs is shown in fig .we are interested in how the apparent permeability varies with the knudsen number .the boltzmann equation is fundamental in the study of rarefied gas dynamics from the continuum to the free - molecular flow regimes , which uses the distribution function to describe the system state : where is the three - dimensional molecular velocity normalized by the most probable speed , is the spatial coordinate normalized by the length of the side ab , is the time normalized by , is normalized by , while is the boltzmann collision operator . 
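Before turning to the kinetic models used for the actual computations, a quick numerical illustration of the permeability and Knudsen-number definitions above may help fix orders of magnitude. All values below are illustrative assumptions (a methane-like gas in a nanoscale pore); the mean-free-path expression is the standard viscosity-based one, and the linear correction factor is a generic Klinkenberg-type example rather than a result from this work.

```python
import math

# Illustrative values (assumptions, not taken from the paper): methane-like gas
mu    = 1.1e-5     # shear viscosity [Pa s]
R     = 518.3      # specific gas constant [J/(kg K)]
T     = 300.0      # temperature [K]
p     = 1.0e6      # mean gas pressure [Pa]
L     = 50e-9      # characteristic pore size [m]
k_inf = 1.0e-18    # intrinsic permeability [m^2] (illustrative)

# Mean free path (viscosity-based definition) and Knudsen number
lam = (mu / p) * math.sqrt(math.pi * R * T / 2.0)
Kn  = lam / L

# Apparent permeability with a generic correction factor f(Kn);
# a first-order (Klinkenberg-type) slip correction is used as an example.
f   = 1.0 + 4.0 * Kn          # slope 4 is an illustrative slip coefficient
k_a = k_inf * f
print(f"lambda = {lam:.3e} m, Kn = {Kn:.3f}, k_apparent = {k_a:.3e} m^2")
```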
in order to save the computational cost , is usually replaced by the relaxation - time approximation , resulting in the bhatnagar - gross - krook ( bgk ) equation .numerical simulation for the poiseuille flow between two parallel plates shows that the bgk equation can yield accurate mass flow rates when the gas flow is not in the free - molecular regime .when the porous medium is so long that the pressure gradient is small , the bgk equation can be linearized .the distribution function is expressed as , where is the equilibrium distribution function , and the perturbation is governed by : ,\ ] ] where macroscopic quantities such as the perturbed density , the velocity and , and the perturbed temperature are calculated as the kinetic equation has to be supplied with the boundary condition .suppose the pressure gradient is along the direction , on the inlet and outlet of the computational domain abcd ( the coordinates of the four corners a , b , c , and d are , and , respectively ) , the pressure gradient is applied and the periodic condition for the flow velocity is used : at the lines ab and cd , the specular reflection boundary condition is used to account for the symmetry : when , and when , while at the solid surface , the diffuse boundary condition is used : where is the normal velocity vector at the solid surface .the apparent gas permeability , which is normalized by , is calculated by historically , the state of a gas is first described by macroscopic quantities such as the density , velocity , and temperature ; and its dynamics is described by the euler equations or nses ( based on the empirical newton s law for stress and the fourier s law for heat flux ) .these equations , however , can be derived rigorously from the boltzmann equation , at various order of approximations . by taking the velocity moments of the boltzmann equation ,the five macroscopic quantities are governed by the following equations : however , the above equations are not closed , since expressions for the shear stress and heat flux are not known .one way to close - is to use the chapman - enskog expansion , where the distribution function is expressed in the power series of : where is the equilibrium maxwellian distribution function .when , we have , and - reduce to the euler equations . when the distribution function is truncated at the first - order of , that is , we have and - reduce to nses .when , burnett equations can be derived . alternatively , following the method of , 13- , 20- and 26-moment equations can be derived from the boltzmann equation to describe flows at different levels of rarefaction . herethe regularized 20-moment ( r20 ) equations are used , which , in addition to - , include governing equations for the high - order moments , , and : the constitutive relationships between the unknown higher - order moments ( , and ) and the lower - order moments were given by structrup & torrilhon ( 2003 ) and gu & emerson ( 2009 ) to close to . for linearized flows , it is adequate to use the linear gradient transport terms only and they are : where the collision constant is 2.097 for maxwell molecules .macroscopic wall boundary conditions were obtained from the diffuse boundary condition . 
in a frame where the coordinates are attached to the wall , with the normal vector of the wall pointing towards the gas and the tangential vector of the wall , the velocity - slip parallel to the wall and temperature - jump conditions are : where .the rest of wall boundary conditions for higher - order moments are listed in appendix [ wall_hob ] .note that the velocity - slip boundary condition is also of higher - order due to the appearance of the higher - order moment . following the above introduction, it is clear that nses with fvbc are only accurate to the first - order of ; therefore , any apparent gas permeability showing the nonlinear dependence with is highly questionable .the r20 equations are accurate to the third - order of , which should give the some apparent permeability of the porous media as nses when , and be more accurate than nses as increases .numerical simulations are also performed to demonstrate this .we first investigate the rarefied gas through a periodic array of discs with the diameter , as shown in fig .[ geo ] . using nses and fvbc ,when the porosity is large , the slip - corrected permeability can be obtained analytically : ,\ ] ] where is the solid fraction and when the diffuse boundary condition is used .the intrinsic permeability is obtained when .+ the accuracy of the slip - corrected permeability is assessed by comparing to numerical solutions of the bgk equation and r20 equations , when the porosity is . for the linearized bgk equation ,two reduced distribution functions were introduced to cast the three - dimensional molecular velocity space into a two - dimensional one , and the obtained two equations are solved numerically by the discrete velocity method and the unified gas kinetic scheme . in the unified gas kinetic scheme ,a body - fitted structured curvilinear mesh is used , with 150 lines along the radial direction and 300 lines along the circumferential direction , see fig .[ apparent_permeability](a ) . in the discrete velocity method ,a cartesian grid with equally - spaced points is used and the solid surface is approximated by thestair - case " . in solving the r20 equations ,a similar body - fitted mesh with cells is used , and the detailed numerical method is given by .the molecular velocity space in the bgk equation is also discretized : and are approximated by the gauss - hermite quadrature when is small ( in this case ) , and the newton - cotes quadrature with non - uniform discrete velocity points when is large . 
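To make the velocity-space discretization above concrete, the sketch below builds a small two-dimensional Gauss-Hermite velocity grid and evaluates low-order moments of a perturbation by quadrature, as one would in the discrete velocity method at small Knudsen number. The 8-point rule and the test perturbation (a small Maxwellian shift along x) are illustrative choices, not the resolutions used in the paper.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# Gauss-Hermite rule: sum_i w_i g(x_i) ~ int exp(-x^2) g(x) dx
nodes, weights = hermgauss(8)            # 8 points per direction (illustrative)
VX, VY = np.meshgrid(nodes, nodes, indexing="ij")
W = np.outer(weights, weights) / np.pi   # 2-D weights of the normalised Maxwellian

def moments(h):
    """Perturbed density and velocity from a perturbation h(vx, vy)."""
    rho = np.sum(W * h)
    ux  = np.sum(W * VX * h)
    uy  = np.sum(W * VY * h)
    return rho, ux, uy

# Illustrative perturbation: a small shift of the Maxwellian along x
h_test = 2.0 * 0.01 * VX
print(moments(h_test))   # expect rho ~ 0, ux ~ 0.01, uy ~ 0
```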
the apparent permeability is plotted in fig .[ apparent_permeability](b ) as a function of .when , our numerical simulations based on the linearized bgk equation and r20 equations agree with each other , and the apparent permeability is a linear function of .when , the r20 equations ,although being accurate to the third - order of , predict lower apparent permeability than that of the bgk equation .the slip - corrected permeability increases linearly with only when , and then quickly reaches to a maximum value when .this comparison clearly demonstrates that , nses with fvbc are only accurate to the first - order of .this result is in accordance with the approximation adopted in the derivation of nses from the boltzmann equation .although the `` curvature of the solid - gas interface '' makes the apparent permeability a concave function of in the framework of nses with fvbc , higher - order moments in - and the higher - order velocity slip in , which are derived from the boltzmann equation and the gas kinetic boundary condition to the third - order accuracy of , restore linear dependence of the apparent permeability on when . + the conclusion that nses with fvbc is accurate only to the first - order of not only holds for the simple porous medium as shown in fig .[ geo ] , but also applies to more complex porous media , for example , see the unit cell in fig .[ randomdisc](a ) where the porosity is 0.6 . in this case, the linearized bgk equation is solved by the discrete velocity method , with a cartesian mesh of cells ; the grid convergence is verified , as using cells only results in a 0.6% increase of the apparent permeability when .nses with fvbc are solved in openfoam using the simple algorithm and a cell - centered finite - volume discretization scheme , on unstructured grids .a body - fitted computational grid is generated using the openfoam meshing tool , resulting in a mesh of about 600,000 cells of which the majority are hexahedra and the rest few close to the walls are prisms .the apparent permeability from nses increases linearly over a very narrow region of the knudsen number ( i.e. ) and then quickly reaches a constant value . again , from fig .[ randomdisc](b ) we see that nses with fvbc is roughly accurate when the apparent permeability is a linear function of ; in this region of , the maximum apparent permeability is only about one and a half times larger than the intrinsic permeability .in summary , through our numerical simulations based on the linearized bhatnagar - gross - krook equation and the regularized 20-moment equations , we show that the navier - stokes equations with the first - order velocity - slip boundary condition can only predict the apparent permeability of the porous media to the first - order accuracy of the knudsen number .lw acknowledges the support of an early career researcher international exchange award from the glasgow research partnership in engineering , allowing him to visit the hong kong university of science and technology for one month .lg thanks lianhua zhu for helpful discussions on the first - order velocity - slip boundary condition in openfoam .this work is also partly supported by the engineering and physical sciences research council in the uk under grant ep / m021475/1 .the wall boundary conditions for higher - order moments are given as follows : where and .
in a recent paper by lasseux , valds - parada and porter ( j. fluid mech . * 805 * ( 2016 ) 118 - 146 ) , it is found that the apparent gas permeability of the porous media is a nonlinear function of the knudsen number . however , this result is highly questionable , because the adopted navier - stokes equations and the first - order velocity - slip boundary condition are first - order ( in terms of the knudsen number ) approximations of the boltzmann equation and the kinetic boundary condition for rarefied gas flows . our numerical simulations based on the bhatnagar - gross - krook kinetic equation and regularized 20-moment equations prove that the navier - stokes equations with the first - order velocity - slip boundary condition are only accurate at a very small knudsen number limit , where the apparent gas permeability is a linear function of the knudsen number .
continuous - variable quantum key distribution ( cvqkd ) encodes information into the quadratures of optical fields and extracts it with homodyne detection , which has higher efficiency and repetition rate than that of the single photon detector .cvqkd , especially the gg02 protocol , is hopeful to realize high speed key generation between two parties , alice and bob . besides experimental demonstrations , the theoretical security of cvqkd has been established against collective attacks , which has been shown optimal in the asymptotical limit .the practical security of cvqkd has also been noticed in the recent years , and it has been shown that the source noise in state preparation may be undermine the secure key rate . in gg02 , the coherent states should be displaced in phase space following gaussian modulation with variance . however , due to the imperfections in laser source and modulators , the actual variance is changed to , where is the variance of source noise .an method to describe the trusted source noise is the beamsplitter model .this model has a good approximation for source noise , especially when the transmittance of beamsplitter approaches , which means that the loss in signal mode is negligible .however , this method has the difficulty of parameter estimation to the ancilla mode of the beamsplitter , without the information of which , the covariance matrix of the system are not able to determine . in this case, the optimality of gaussian attack should be reconsidered , and we have to assume that the channel is linear to calculate the secure key rate . to solve this problem , we proposed an improved source noise model with general unitary transformation . without extra assumption on quantum channel and ancilla state , we are able to derive a tight security bound for reverse reconciliation , as long as the variance of source noise can be properly estimated .the optimality of gaussian attack is kept within this model .the remaining problem is to estimate the variance of source noise properly .without such a source monitor , alice and bob can not discriminate source noise from channel excess noise , which is supposed to be controlled by the eavesdropper ( eve ) . in practice ,source noise is trusted and is not controlled by eve .so , such _ untrusted source noise model _ just overestimates evepower and leads to an untight security bound .a compromised method is to measure the quadratures of alice s actual output states each time before starting experiment .however , this work is time consuming , and in qkd running time , the variance of source noise may fluctuate slowly and deviate from preliminary result . 
in this paper , we propose two real - time schemes , that the active switch scheme and the passive beamsplitter scheme to monitor the variance of source noise , with the help of which , we derive the security bounds asymptotically for both of them against collective attacks , and discuss their potential applications when finite size effect is taken into account .in this section , we introduce two real - time schemes to monitor the variance of source noise for the gg02 protocol , based on our general model .both schemes are implemented in the so - called prepare and measurement scheme ( p&m scheme ) , while for the ease of theoretical research , here we analyze their security in the entanglement - based scheme ( e - b scheme ) .the covariance matrix , used to simplify the calculation , is defined by ,\ ] ] where operator , , mean value $ ] , is the density matrix , and denotes the anticommutator . in e - b scheme, alice prepares epr pairs , measuring the quadratures of one mode with two balanced homodyne detectors , and then send the other mode to bob .it is easy to verify that the covariance matrix of an epr pair is where is the variance of the epr modes , and corresponds to alice s modulation variance in the p&m scheme .however , due to the effect of source noise , the actual covariance matrix is changed to where is the variance of source noise .as mentioned in , we assume this noise is introduced by a neutral party , fred , who purifies and introduces the source noise with arbitrary unitary transformation . in this section, we show how to monitor with our active and passive schemes , and derive the security bounds in the infinite key limit .a method of source monitoring is to use an active optical switch , controlled by a true random number generator ( trng ) , combined with a homodyne detection .the entanglement - based version of this scheme is illustrated in fig .[ pic2 ] , where we randomly select parts of signal pulses , measure their quadratures and estimate their variance . in the infinite key limit, the pulses used for source monitor should have the same statistical identities with that sent to bob . comparing the estimated value with the theoretical one , we are able to derive the variance of source noise , and the security boundcan be calculated by ,\ ] ] where is the sampling ratio of source monitoring , is the classical mutual information between alice and bob , is the quantum mutual information between eve and bob , and is the reconciliation efficiency . after channel transmission , the whole system can be described by covariance matrix where is the variance of source noise , matrix is related to fred s two - mode state , is the transmittance and is the channel noise and is the channel excess noise . in practice, the covariance matrix can be estimated with experimental data with source monitor and parameter estimation . here, for the ease of calculation , we assume that parameters and have known values . in the infinite key limit . , height=144 ] given , the classical mutual information can be directly derived , while can not , since the ancilla state is unknown in our general model .fortunately , we can substitute with another state when calculating , where }\sigma_{z } \\0 & 0 & \sqrt{\eta [ ( v+\chi_{s})^{2}-1]}\sigma_{z } & \eta ( v+\chi_{s}+\chi)\mathbb{i } \\ \end{array } \right),\ ] ] and we have shown that such substitution provides a tight bound for reverse reconciliation . here , we have assumed that the pulses generated in alice is i.i.d . 
, and the true random number plays an important role in this scheme , without which , the sampled pulses may have different statistical characters from signal pulses sent to bob. the asymptotic performance of this scheme will be analyzed in sec .though the active switch scheme is intuitive in theoretical research , it is very not convenient in the experimental realization , since the high speed optical switch and an extra trng are needed .also , it lowers the secure key rate with due to the sampling ratio .inspired by , we propose a passive beam splitter scheme to simplify the implementation . as illustrated in fig .[ bsscheme ] , a beamsplitter is used to separate mode into two parts . one mode , , is monitored by alice , and the other , , is sent to bob .is replaced by a beamsplitter .alice and bob are able to estimate the source noise by measuring mode with the homodyne detection ., height=144 ] the security bound of passive beam splitter scheme can be calculated in a similar way that we substitute the whole state with .the covariance of its subsystem , , can be written as where mode is initially in the vacuum state .the covariance matrix after beam splitter is where then , mode is sent to bob through quantum channel , characterized by .the calculation of in this scheme is a little more complex , since an extra mode is introduced by the beamsplitter .we omit the detail of calculation here , which can be derived from .the performance of this scheme is discussed in the next section .in this section , we analyze the performance of both schemes with numerical simulation .as mentioned above , the simulation is restricted to the asymptotic limit .the case of finite size will be discussed later . to show the performance of source monitor schemes , we illustrate the secure key rate in fig .[ comparision ] , in which the _ untrusted noise scheme _ is included for comparison . for the ease of discussion , the imperfections in practical detectors are not included in our simulation , the effect of which have been studied previously . , the source noise is , the channel excess noise is , and the reconciliation efficiency is .the sample ration in the optical switch scheme is , and the transmittance in the beam splitter scheme is .,height=288 ] as shown in fig .[ comparision ] , secure key rate of each scheme is limited within , where large excess noise is used . 
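As a sketch of the covariance-matrix bookkeeping used in both schemes, the code below builds the EPR covariance matrix, adds source noise to the transmitted mode, taps it with the monitoring beamsplitter, and finally applies a lossy, noisy Gaussian channel. The parameter values, and the convention of injecting the trusted source noise as an additive variance on mode B0 only, are illustrative assumptions rather than the exact substitute state used in the security analysis.

```python
import numpy as np

I2 = np.eye(2)
SZ = np.diag([1.0, -1.0])

def epr_cov(V):
    """Covariance matrix of an EPR (two-mode squeezed vacuum) state with variance V."""
    c = np.sqrt(V**2 - 1.0)
    return np.block([[V * I2, c * SZ],
                     [c * SZ, V * I2]])

# Illustrative parameters in shot-noise units (assumptions, not the paper's values)
V, chi_s = 20.0, 0.05            # modulation variance and source-noise variance
T_B      = 0.9                   # transmittance of the monitoring beamsplitter
T, xi    = 0.5, 0.01             # channel transmittance and channel excess noise

# Three modes ordered (A, B0, C); C is the vacuum mode entering the beamsplitter tap
gamma = np.eye(6)
gamma[:4, :4] = epr_cov(V)       # EPR state shared by A and B0
gamma[2:4, 2:4] += chi_s * I2    # one simple way to inject trusted source noise on B0

# Beamsplitter mixing (B0, C): the transmitted mode goes to Bob, C is monitored
t, r = np.sqrt(T_B), np.sqrt(1.0 - T_B)
S_bs = np.eye(6)
S_bs[2:6, 2:6] = np.block([[ t * I2, r * I2],
                           [-r * I2, t * I2]])
gamma = S_bs @ gamma @ S_bs.T

# Lossy, noisy Gaussian channel acting only on the mode sent to Bob
X = np.eye(6)
X[2:4, 2:4] = np.sqrt(T) * I2
Y = np.zeros((6, 6))
Y[2:4, 2:4] = ((1.0 - T) + T * xi) * I2
gamma_out = X @ gamma @ X.T + Y
print(np.round(gamma_out, 3))    # 2x2 blocks: Alice, Bob's received mode, monitor tap
```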
under state - of - the - art technology , the excess noise can be controlled less than a few percent of the shot noise .so , our simulation is just a conservative estimation on the secure key rate .the _ untrusted source noise scheme _ has the shortest secure distance , because it ascribes the source noise into channel noise , which is supposed to be induced by the eavesdropper .in fact , source noise is neutral and can be controlled neither by alice and bob , nor by eve .so , this scheme just overestimates eve s power by supposing she can acquire extra information from source noise , which lower the secure key rate of this scheme .both the active and passive schemes have longer secure distance than the untrusted noise scheme , since they are based on the general source noise model , which does not ascribe source noise into eve s knowledge .the active switch scheme has lower secure key rate in the short distance area .this is mainly because that the random sampling process intercepts parts of the signal pulses to estimate the variance of source noise , which reduces the repetition rate with ratio .nevertheless , it does not overestimate eve s power . as a result, the secure key distance is improved . and transmittance , in which t varies from to .the colored parts illustrates the area with positive secure key rate , the empty parts illustrates to insecure area , and the abscissa values of boundary points corresponds to the secure distance.,height=336 ] both the secure key rate and secure distance of beam splitter scheme are superior than that of other schemes , when the transmittance is set to be , equal to the sampling rate in optical switch scheme , where no extra vacuum noise is introduced .this phenomena is quite similar to the `` noise beat noise '' scheme , which improves the secure key rate by introduce an extra noise into bob s side . though such noise lowers the mutual information between alice and bob , it also makes eve more difficult to estimate bob s measurement result . with the help of simulation, we find a similar phenomenon in the beam splitter scheme , the vacuum noise reduces mutual information more rapidly than its effect on .a preliminary explanation is that the sampled pulse in optical switch scheme is just used to estimate the noise variance , while in beam splitter model it increases eve s uncertainty on bob s information .combined with advantages in experimental realization , beam splitter scheme should be a superior choice . to optimize the performance of passive beam splitter scheme, we illustrate the secure key rate in fig .[ 3d ] for different beam splitter transmittance .the maximal secure distance about is achieved when , about km longer than that when .combined with the discussion above , this result can be understood as a balance between the effects of the noise on and , induced by beamsplitter .when is too small , also decreases rapidly , which limits the secure distance .the performance of source monitor schemes above is analyzed in asymptotical limit . in practice , the real - time monitor will concern the finite - size effect , since the variance of source noise may change slowly .a thorough research in finite size effect is beyond the scope of this paper , because the security of cvqkd in finite size is still under development , that the optimality of gaussian attack and collective attack has not been shown in the finite size case .nevertheless , we are able to give a rough estimation on the effect of block size , for a given distance . 
taking the active optical switch scheme for example , with a similar method in ,the maximum - likelihood estimator is given by where , is the measurement result of source monitor , and is the expected value of the variance of source noise . for large , the distribution converges to a normal distribution .so , we have where is such that , and is the failure probability .the reason why we choose is that given the values of and , estimated by bob , the minimum of corresponds to the maximum of channel noise , which may be fully controlled by eve .the extra variance due to the finite size effect in source monitor is for , we have . as analyzed in ,if the distance between alice and bob is km ( ) , the block length should be at least , which corresponds to induced by the finite size effect in source monitor .compared with the channel excess noise of , the effect of finite size in source monitor is very slight . due to the high repetition rate in cvqkd ,alice and bob are able to accumulate such a block within several minutes , during which the source noise may change slightly .in conclusion , we propose two schemes , the active optical switch scheme and the passive beamsplitter scheme , to monitor the variance of source noise . combined with previous general noise model , we derive tight security bounds for both schemes with reverse reconciliation in the asymptotic limit .both schemes can be implemented under current technology , and the simulation result shows an better performance of our schemes , compared with the untrusted source noise model .further improvement in secure distance can be achieved , when the transmittance is optimized . in practise ,the source noise varies slowly . to realize real - time monitoring ,the finite size effect should be taken into account , that the block size should not be so large , that the source noise has changed significantly within this block , and the block size should not be too small , that we can not estimate the source noise accurately .the security proof of cvqkd with finite block size has not been established completely , since the optimality of collective attack and gaussian attack has not been shown in finite size .nevertheless , we derive the effective source noise induced by the finite block size , and find its effect is not significant in our scheme .so , our schemes may be helpful to realize real - time source monitor in the future .this work is supported by the key project of national natural science foundation of china ( grant no .60837004 ) , national hi - tech research and development ( 863 ) program .the authors thank yujie shen , bingjie xu and junhui li for fruitful discussion .10 scarani v , bechmann - pasquinucci h , cerf n j , duek m , ltkenhaus n , and peev m 2009 _ rev . mod .phys . _ * 81 * 1301
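As a numerical companion to the block-size estimate discussed above: the standard deviation of the maximum-likelihood variance estimator of a Gaussian sample of size m is approximately sigma^2*sqrt(2/m), so the worst-case finite-size deviation of the monitored source noise is roughly z_{eps/2}*chi_s*sqrt(2/m). The sketch below evaluates this penalty; the source noise, failure probability, block length and sampling ratio are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# Illustrative numbers in shot-noise units (assumptions, not the paper's table)
chi_s   = 0.05          # expected source-noise variance
epsilon = 1e-10         # failure probability of the estimation
N, nu   = 1e8, 0.1      # block length and sampling ratio of the source monitor
m       = int(N * nu)   # pulses actually used for source monitoring

# Two-sided Gaussian quantile; for epsilon = 1e-10 this gives z ~ 6.5
z = norm.ppf(1.0 - epsilon / 2.0)

# Std of the ML variance estimator of a Gaussian is ~ sigma^2 * sqrt(2/m),
# so the worst-case (finite-size) deviation of the source noise is:
delta_chi_s = z * chi_s * np.sqrt(2.0 / m)
print(z, delta_chi_s)   # ~6.5, ~1.5e-4 -- small next to a typical excess noise of 0.01
```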
the noise in optical source needs to be characterized for the security of continuous - variable quantum key distribution ( cvqkd ) . two feasible schemes , based on either active optical switch or passive beamsplitter are proposed to monitor the variance of source noise , through which , eve s knowledge can be properly estimated . we derive the security bounds for both schemes against collective attacks in the asymptotic case , and find that the passive scheme performs better .
along with turbo codes , low - density parity - check ( ldpc ) block codes form a class of codes which approach the ( theoretical ) shannon limit .ldpc codes were first introduced in the by gallager .however , they were considered impractical at that time and very little related work was done until tanner provided a graphical interpretation of the parity - check matrix in 1981 . more recently , in his ph.d .thesis , wiberg revived interest in ldpc codes and further developed the relation between tanner graphs and iterative decoding .the convolutional counterpart of ldpc block codes was introduced in , and ldpc convolutional codes have been shown to have certain advantages compared to ldpc block codes of the same complexity . in this paper , we use ensembles of tail - biting ldpc convolutional codes derived from a protograph - based ensemble of ldpc block codes to obtain a lower bound on the free distance of unterminated , asymptotically good , periodically time - varying ldpc convolutional code ensembles , i.e. , ensembles that have the property of free distance growing linearly with constraint length . in the process , we show that the minimum distances of ensembles of tail - biting ldpc convolutional codes ( introduced in ) approach the free distance of an associated unterminated , periodically time - varying ldpc convolutional code ensemble as the block length of the tail - biting ensemble increases .we also show that , for protographs with regular degree distributions , the free distance bounds are consistent with those recently derived for regular ldpc convolutional code ensembles in and .further , for protographs with irregular degree distributions , we obtain new free distance bounds that grow linearly with constraint length and whose free distance to constraint length ratio exceeds the minimum distance to block length ratio of the corresponding block codes . the paper is structured as follows . in section [ sec : ldpccc ] , we briefly introduce ldpc convolutional codes .section [ sec : proto ] summarizes the technique proposed by divsalar to analyze the asymptotic distance growth behavior of protograph - based ldpc block codes . in section [ sec : distbnd ] , we describe the construction of tail - biting ldpc convolutional codes as well as the corresponding unterminated , periodically time - varying ldpc convolutional codes .we then show that the free distance of a periodically time - varying ldpc convolutional code is lower bounded by the minimum distance of the block code formed by terminating it as a tail - biting ldpc convolutional code . finally , in section [ sec : results ] we present new results on the free distance of ensembles of ldpc convolutional codes based on protographs .we start with a brief definition of a rate binary ldpc convolutional code .( a more detailed description can be found in . ) a code sequence } ] is the syndrome former matrix and }= ] the sparsity of the parity - check matrix is ensured by demanding that its rows have very low hamming weight , i.e. , , where denotes the -th row of } ] has exactly ones in every column and , starting from row , ones in every row .the other entries are zeros .we refer to a code with these properties as an -regular ldpc convolutional code , and we note that , in general , the code is time - varying and has rate .an -regular time - varying ldpc convolutional code is periodic with period if is periodic , i.e. 
, , and if , the code is time - invariant .an ldpc convolutional code is called irregular if its row and column weights are not constant .the notion of degree distribution is used to characterize the variations of check and variable node degrees in the tanner graph corresponding to an ldpc convolutional code .optimized degree distributions have been used to design ldpc convolutional codes with good iterative decoding performance in the literature ( see , e.g. , ) , but no distance bounds for irregular ldpc convolutional code ensembles have been previously published .suppose a given protograph has variable nodes and check nodes .an ensemble of protograph - based ldpc block codes can be created by the copy - and - permute operation .the tanner graph obtained for one member of an ensemble created using this method is illustrated in fig .[ fig : proto ] . the parity - check matrix corresponding to the ensemble of protograph - based ldpc block codes can be obtained by replacing ones with permutation matrices and zeros with all zero matrices in the underlying protograph parity - check matrix , where the permutation matrices are chosen randomly and independently .the protograph parity - check matrix corresponding to the protograph given in figure [ fig : proto ] can be written as where we note that , since the row and column weights of are not constant , represents the parity - check matrix of an irregular ldpc code .if a variable node and a check node in the protograph are connected by parallel edges , then the associated entry in equals and the corresponding block of consists of a summation of permutation matrices .the sparsity condition of an ldpc parity - check matrix is thus satisfied for large .the code created by applying the copy - and - permute operation to an protograph parity - check matrix has block length . in addition, the code has the same rate and degree distribution for each of its variable and check nodes as the underlying protograph code .combinatorial methods of calculating ensemble average weight enumerators have been presented in and .the remainder of this section summarizes the methods presented in .suppose a protograph contains variable nodes to be transmitted over the channel and punctured variable nodes .also , suppose that each of the transmitted variable nodes has an associated weight , where for all .copies of the protograph , the weight associated with a particular variable node in the protograph can be as large as . ]let be the set of all possible weight distributions such that , and let be the set of all possible weight distributions for the remaining punctured nodes .the ensemble weight enumerator for the protograph is then given by where is the average number of codewords in the ensemble with a particular weight distribution .the normalized logarithmic asymptotic weight distribution of a code ensemble can be written as where , , is the hamming distance , is the block length , and is the ensemble average weight distribution .suppose the first zero crossing of occurs at .if is negative in the range , then is called the _minimum distance growth rate _ of the code ensemble . 
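Before continuing with the asymptotic analysis, a small sketch of the copy-and-permute (lifting) operation described above may be useful. Each nonzero entry of a base matrix is replaced by a sum of that many randomly chosen N x N permutation matrices, and each zero by an all-zero block. The base matrix and lifting factor below are illustrative, not one of the protographs analysed in the paper.

```python
import numpy as np

def lift_protograph(B, N, rng=np.random.default_rng(0)):
    """Copy-and-permute lifting: replace entry B[i, j] by a sum of B[i, j]
    random N x N permutation matrices (0 -> all-zero block).  For parallel
    edges the summed permutations may overlap in this naive sketch."""
    nc, nv = B.shape
    H = np.zeros((nc * N, nv * N), dtype=int)
    for i in range(nc):
        for j in range(nv):
            for _ in range(int(B[i, j])):               # parallel edges allowed
                P = np.eye(N, dtype=int)[rng.permutation(N)]
                H[i*N:(i+1)*N, j*N:(j+1)*N] += P
    return H

# Small illustrative base (protograph) matrix -- not one from the paper
B = np.array([[1, 2, 1, 0],
              [1, 1, 1, 1]])
H = lift_protograph(B, N=4)
print(H.shape)            # (8, 16): block length n = N * n_v
print(H.sum(axis=0))      # column weights match the protograph variable-node degrees
```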
by considering the probability is clear that , as the block length grows , if , then we can say with high probability that the majority of codes in the ensemble have a minimum distance that grows linearly with and that the distance growth rate is .in this section we present a method for obtaining a lower bound on the free distance of an ensemble of unterminated , asymptotically good , periodically time - varying ldpc convolutional codes derived from protograph - based ldpc block codes . to proceed ,we will make use of a family of tail - biting ldpc convolutional codes with incremental increases in block length .the tail - biting codes will be used as a tool to obtain the desired bound on the free distance of the unterminated codes .suppose that we have an protograph parity - check matrix , where gcd .we then partition as a block matrix as follows : ,\ ] ] where each block is of size . can thus be separated into a lower triangular part , , and an upper triangular part minus the leading diagonal , .explicitly , where blank spaces correspond to zeros . this operation is called ` cutting ' a protograph parity - check matrix .rearranging the positions of these two triangular matrices and repeating them indefinitely results in a parity - check matrix of an unterminated , periodically time - varying convolutional code with constraint length and period given by of , where divides without remainder . if then the resulting convolutional code is time - invariant . ].\ ] ] note that if gcd , we can not form a square block matrix larger than with equal size blocks . in this case, and is the all zero matrix of size .this trivial cut results in a convolutional code with syndrome former memory zero , with repeating blocks of the original protograph on the leading diagonal .it is necessary in this case to create a larger protograph parity - check matrix by using the copy and permute operation on .this results in an parity - check matrix for some small integer .the protograph parity - check matrix can then be cut following the procedure outlined above . in effect , the choice of permutation matrix creates a mini ensemble of block codes suitable to be unwrapped to an ensemble of convolutional codes .we now introduce the notion of tail - biting convolutional codes by defining an ` unwrapping factor ' as the number of times the sliding convolutional structure is repeated . for , the parity - check matrix of the desired tail - biting convolutional codecan be written as {\lambda n_c \times \lambda n_v}.\ ] ] note that the tail - biting convolutional code for is simply the original block code .given a protograph parity - check matrix , we generate a family of tail - biting convolutional codes with increasing block lengths , , using the process described above .since tail - biting convolutional codes are themselves block codes , we can treat the tanner graph of as a protograph for each value of . 
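Before lifting these tail-biting graphs into ensembles, the cut-and-wrap construction just described can itself be sketched in a few lines. Below, a small rate-1/2 base matrix (illustrative, not one of the paper's protographs) is split along the block diagonal of an eta x eta grid with eta = gcd(b_c, b_v), and the two parts are rearranged into the tail-biting parity-check matrix for a given unwrapping factor lambda; lambda = 1 recovers the original matrix.

```python
import numpy as np

def cut(B):
    """Cut the base matrix B (size b_c x b_v) along the block diagonal of an
    eta x eta grid of equal blocks, eta = gcd(b_c, b_v): returns the block
    lower-triangular part B_l and the remaining upper part B_u."""
    bc, bv = B.shape
    eta = np.gcd(bc, bv)
    rb, cb = bc // eta, bv // eta
    Bl = np.zeros_like(B)
    for i in range(eta):
        Bl[i*rb:(i+1)*rb, :(i+1)*cb] = B[i*rb:(i+1)*rb, :(i+1)*cb]
    return Bl, B - Bl

def tail_biting(Bl, Bu, lam):
    """Tail-biting parity-check matrix with unwrapping factor lam: B_l on the
    block diagonal, B_u wrapped one block position to the left."""
    bc, bv = Bl.shape
    H = np.zeros((lam * bc, lam * bv), dtype=int)
    for i in range(lam):
        H[i*bc:(i+1)*bc, i*bv:(i+1)*bv] += Bl
        j = (i - 1) % lam
        H[i*bc:(i+1)*bc, j*bv:(j+1)*bv] += Bu
    return H

# Illustrative rate-1/2 base matrix (not one of the paper's protographs)
B = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]])
Bl, Bu = cut(B)
print(tail_biting(Bl, Bu, 1))    # lam = 1 recovers the original base matrix
print(tail_biting(Bl, Bu, 4))    # lam = 4 tail-biting structure
```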
replacing the entries of this matrix with either permutation matrices or all zero matrices , as discussed in section [ sec : proto ] ,creates an ensemble of ldpc codes that can be analyzed asymptotically as goes to infinity , where the sparsity condition of an ldpc code is satisfied for large .each tail - biting ldpc code ensemble , in turn , can be unwrapped and repeated indefinitely to form an ensemble of unterminated , periodically time - varying ldpc convolutional codes with constraint length and , in general , period .intuitively , as increases , the tail - biting code becomes a better representation of the associated unterminated convolutional code , with corresponding to the unterminated convolutional code itself .this is reflected in the weight enumerators , and it is shown in section [ sec : results ] that increasing provides us with distance growth rates that converge to a lower bound on the free distance growth rate of the unterminated convolutional code .tail - biting convolutional codes can be used to establish a lower bound on the free distance of an associated unterminated , periodically time - varying convolutional code by showing that the free distance of the unterminated code is lower bounded by the minimum distance of any of its tail - biting versions .a proof can be found in . consider a rate unterminated , periodically time - varying convolutional code with decoding constraint length and period .let be the minimum distance of the associated tail - biting convolutional code with length and unwrapping factor .then the free distance of the unterminated convolutional code is lower bounded by for any unwrapping factor , i.e. , a trivial corollary of the above theorem is that the minimum distance of a protograph - based ldpc block code is a lower bound on the free distance of the associated unterminated , periodically time - varying ldpc convolutional code .this can be observed by setting .one must be careful in comparing the distance growth rates of codes with different underlying structures .a fair basis for comparison generally requires equating the complexity of encoding and/or decoding of the two codes . 
traditionally , the minimum distance growth rate of block codes is measured relative to block length , whereas constraint length is used to measure the free distance growth rate of convolutional codes .these measures are based on the complexity of decoding both types of codes on a trellis .indeed , the typical number of states required to decode a block code on a trellis is exponential in the block length , and similarly the number of states required to decode a convolutional code is exponential in the constraint length .this has been an accepted basis of comparing block and convolutional codes for decades , since maximum - likelihood decoding can be implemented on a trellis for both types of codes .the definition of decoding complexity is different , however , for ldpc codes .the sparsity of their parity - check matrices , along with the iterative message - passing decoding algorithm typically employed , implies that the decoding complexity per symbol depends on the degree distribution of the variable and check nodes and is independent of both the block length and the constraint length .the cutting technique we described in section [ sec : tb ] preserves the degree distribution of the underlying ldpc block code , and thus the decoding complexity per symbol is the same for the block and convolutional codes considered in this paper .also , for randomly constructed ldpc block codes , state - of - the - art encoding algorithms require only operations per symbol , where , whereas for ldpc convolutional codes , if the parity - check matrix satisfies the conditions listed in section ii , the number of encoding operations per symbol is only . here again , the encoding complexity per symbol is essentially independent of both the block length and the constraint length . hence , to compare the distance growth rates of ldpc block and convolutional codes, we consider the hardware complexity of implementing the encoding and decoding operations in hardware .typical hardware storage requirements for both ldpc block encoders and decoders are proportional to the block length .the corresponding hardware storage requirements for ldpc convolutional encoders and decoders are proportional to the decoding constraint length ., encoding constraint lengths may be preferred to decoding constriant lengths . for further details ,see . ]we now present distance growth rate results for several ensembles of rate asymptotically good ldpc convolutional codes based on protographs . _example _ consider a regular ldpc code with the folowing protograph : for this example , the minimum distance growth rate is , as originally calculated by gallager .a family of tail - biting ldpc convolutional code ensembles can be generated according to the following cut : for each , the minimum distance growth rate was calculated for the tail - biting ldpc convolutional codes using the approach outlined in section [ sec : tailll ] .the distance growth rates for each are given as the free distance growth rate of the associated rate ensemble of unterminated , periodically time - varying ldpc convolutional codes is , as discussed above .then ( [ emre ] ) gives us the lower bound for .these growth rates are plotted in fig .[ fig : ex1 ] ..,width=307 ] we observe that , once the unwrapping factor of the tail - biting convolutional codes exceeds , the lower bound on levels off at , which agrees with the results presented in and and represents a significant increase over the value of . 
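As a practical aside: the growth rates quoted in the examples are asymptotic ensemble averages, but for any one short tail-biting code the bound d_free >= d_min(lambda) can also be checked directly by exhaustive search. The sketch below brute-forces the minimum distance from a parity-check matrix; it is only feasible for very small block lengths, and the matrix used here (a [7,4] Hamming code with minimum distance 3) is just a sanity check, not one of the codes from the examples.

```python
import itertools
import numpy as np

def min_distance(H):
    """Smallest Hamming weight of a nonzero codeword x with H x^T = 0 (mod 2).
    Exhaustive search -- only practical for very short block lengths."""
    n = H.shape[1]
    for w in range(1, n + 1):
        for support in itertools.combinations(range(n), w):
            x = np.zeros(n, dtype=int)
            x[list(support)] = 1
            if not (H.dot(x) % 2).any():
                return w
    return None            # the code contains only the all-zero codeword

# Sanity check on the [7,4] Hamming code (minimum distance 3)
H_hamming = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])
print(min_distance(H_hamming))   # 3
```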
in this case , the minimum weight codeword in the unterminated convolutional code also appears as a codeword in the tail - biting code .example _ the following irregular protograph is from the repeat jagged accumulate ( rja ) family .it was shown to have a good iterative decoding threshold ( db ) while maintaining linear minimum distance growth ( ) .we display below the associated matrix and cut used to generate the family of tail - biting ldpc convolutional code ensembles .we observe that , as in example , the minimum distance growth rates calculated for increasing provide us with a lower bound on the free distance growth rate of the convolutional code ensemble using ( [ lb ] ) .the lower bound was calculated as ( for ) , significantly larger than the minimum distance growth rate of the underlying block code ensemble . _example _ the following irregular protograph is from the accumulate repeat jagged accumulate family ( arja ) : where the undarkened circle represents a punctured variable node .this protograph is of significant practical interest , since it was shown to have and iterative decoding threshold , i.e. , pre - coding the protograph of example provides an improvement in both values . in this arja example, the protograph matrix is of size .we observe that gcd , and thus we have the trivial cut mentioned in section [ sec : tb ]. we must then copy and permute to generate a mini ensemble of block codes .results are shown for one particular member of the mini ensemble with , but a change in performance can be obtained by varying the particular permutation chosen .increasing for the chosen permutation results in a lower bound , found using ( [ lb ] ) , of for .again , we observe a significant increase in compared to .simulation results for ldpc block and convolutional codes based on the protograph of example were obtained assuming bpsk modulation and an additive white gaussian noise channel ( awgnc ) .all decoders were allowed a maximum of iterations , and the block code decoders employed a syndrome - check based stopping rule . as a result of their block structure, tail - biting ldpc convolutional codes were decoded using standard ldpc block decoders employing a belief - propagation decoding algorithm .the ldpc convolutional code , on the other hand , was decoded by a sliding - window based belief - propagation decoder .the resulting bit error rate ( ber ) performance is shown in fig.[fig : sim ] ..,width=336 ] we note that the protograph - based tail - biting ldpc convolutional codes outperform the underlying protograph - based ldpc block code ( which can also be seen as a tail - biting code with unwrapping factor ) .larger unwrapping factors yield improved error performance , eventually approaching the performance of the unterminated convolutional code , which can be seen as a tail - biting code with an infinitely large unwrapping factor .we also note that no error floor is observed for the convolutional code , which is expected , since the code ensemble is asymptotically good and has a relatively large ( ) distance growth rate .we also note that the performance of the unterminated ldpc convolutional code is consistent with the iterative decoding threshold computed for the underlying protograph . 
at a moderate constraint length of , the unterminated code achieves ber at roughly db away from the threshold , and with larger block ( constraint ) lengths , the performance will improve even further .this is expected , since both the unterminated and the tail - biting convolutional codes preserve the same degree distribution as the underlying protograph .in this paper , asymptotic methods were used to calculate a lower bound on the free distance that grows linearly with constraint length for several ensembles of unterminated , protograph - based periodically time varying ldpc convolutional codes .it was shown that the free distance growth rates of the ldpc convolutional code ensembles exceed the minimum distance growth rates of the corresponding ldpc block code ensembles .further , we observed that the performance of the ldpc convolutional codes is consistent with the iterative decoding thresholds of the underlying protographs .this work was partially supported by nsf grants ccr02 - 05310 and ccf05 - 15012 and nasa grants nng05gh736 and nnx07ak536 .in addition , the authors acknowledge the support of the scottish funding council for the joint research institute with the heriot - watt university , which is a part of the edinburgh research partnership .mitchell acknowledges the royal society of edinburgh for the award of the john moyes lessells travel scholarship. 1 r. g. gallager , `` low - density parity - check codes '' , _ ire trans .inform . theory _, it-8 : 21 - 28 , jan . 1962 .r. michael tanner , `` a recursive approach to low complexity codes '' , _ ieee trans .inform . theory _ , it-27 , pp.533 - 547 , sept .n. wiberg , codes and decoding on general graphs " , ph.d . thesis , dept . of e. e. , linkping university , linkping , sweden .a. jimnez - felstrm and k. sh .zigangirov , `` time - varying periodic convolutional codes with low - density parity - check matrices '' , _ ieee trans .inform . theory _ , it-45 , pp.2181 - 2191 , septd. j. costello , jr ., a. pusane , s. bates , and k. sh .zigangirov , `` a comparison between ldpc block and convolutional codes '' , _ proc .inform . theory and app .workshop _ , san diego , ca , feb . 2006 .d. j. costello , jr ., a. e. pusane , c. r. jones , and d. divsalar , `` a comparison of ara- and protograph - based ldpc block and convolutional codes '' , _ proc .inform . theory and app . workshop _ , san diego , ca , feb .m. tavares , k. sh .zigangirov , and g. fettweis , `` tail - biting ldpc convolutional codes '' , _ proc .symp . on inform .theory _ , nice , france , june 2007 .a. sridharan , d. truhachev , m. lentmaier , d. j. costello , jr . , and k. sh .zigangirov , `` distance bounds for an ensemble of ldpc convolutional codes '' , _ ieee trans .inform . theory _ , it-53 , pp .4537 - 4555 , dec . 2007 .d. truhachev , k. sh .zigangirov , and d. j. costello , jr ., `` distance bounds for periodic ldpc convolutional codes and tail - biting convolutional codes '' , _ ieee trans .inform . theory _( submitted ) , jan .available online at http://luur.lub.lu.se/luur?func=downloadfile + & fileoid=1054996 .d. divsalar , `` ensemble weight enumerators for protograph ldpc codes '' , _ proc .symp . on inform .seattle , wa , jul . 2006 .g. richter , m. kaupper , and k. sh .zigangirov `` irregular low - density parity - check convolutional codes based on protographs '' , _ proc .symp . on inform .seattle , wa , jul .a. sridharan , d. sridhara , d. j. costello , jr . , and t. e. 
fuja `` a construction for irregular low density parity check codes '' , _ proc .symp . on inform .theory _ , yokohama , japan , june 2003 .a. e. pusane , k. sh .zigangirov , and d. j. costello , jr . ,`` construction of irregular ldpc convolutional codes with fast decoding '' , _ proc .conf . on communications _ ,istanbul , turkey , june 2006 .j. thorpe , `` low - density parity - check ( ldpc ) codes constructed from protographs '' , _ jpl inp progress report _ , vol .42 - 154 , aug .s. l. fogal , r. mceliece , and j. thorpe , `` enumerators for protograph ensembles of ldpc codes '' , _ proc .symp . on inform .adelaide , australia , sept . 2005 .t. j. richardson and r. l. urbanke , `` efficient encoding of low - density parity - check codes , '' _ ieee trans .inform . theory _ , it-47 , pp.638 - 656 , feba. e. pusane , a. jimnez - feltstrm , a. sridharan , m. lentmaier , k. sh .zigangirov , and d. j. costello , jr ., `` implementation aspects of ldpc convolutional codes , '' _ ieee trans . commun . _ , to appear .d. g. m. mitchell , a. e. pusane , n. goertz , and d. j. costello , jr ., `` free distance bounds for protograph - based regular ldpc convolutional codes '' , _ 2008 int . symp . on turbo codes and related topics _ ( submitted ) .available online at http://arxiv.org/abs/0804.4466 .d. divsalar , s. dolinar , and c. jones , `` construction of protograph ldpc codes with linear minimum distance '' , _ proc .symp . on inform .theory _ , seattle , wa , jul .
ldpc convolutional codes have been shown to be capable of achieving the same capacity - approaching performance as ldpc block codes with iterative message - passing decoding . in this paper , asymptotic methods are used to calculate a lower bound on the free distance for several ensembles of asymptotically good protograph - based ldpc convolutional codes . further , we show that the free distance to constraint length ratio of the ldpc convolutional codes exceeds the minimum distance to block length ratio of corresponding ldpc block codes .
entanglement is a key property that makes quantum information theory different from its classical counterpart .maximally entangled states , for example , are the basis of quantum state teleportation which has recently been demonstrated . under realistic conditions ,however , one will only be able to generate partially entangled mixed states .it is then of interest to be able quantify the amount of entanglement in such states .some measures of entanglement for mixed states have been suggested recently .they are useful , for example , as upper bounds to entanglement purification protocols and to the quantum channel capacity of certain quantum communication channels .unfortunately entanglement measures for mixed states ( which are relevant in the presence of noise ) are usually quite hard to calculate analytically , although an analytic expression for the entanglement of formation of two spin- particles is now known .the general case remains unsolved .for some problems , however , it is not so important to know the exact amount of entanglement ( a quantity that is not unique anyway ). it would be completely sufficient if one knew which state of a family of states has the most entanglement . to answer this questionit would be sufficient to find a ( hopefully as simple as possible ) quantity that preserves the ordering of density operators with respect to entanglement , i.e. that for two measures and and any two density operators and we have that is equivalent to .in this paper we will compare the entanglement of formation for two spin- particles , for which a closed analytical form is known , with a quantity which was proposed in as a way to quantify the degree of entanglement of a mixed state .the basis of this ` measure of entanglement ' is the peres - horodecki criterion for the separability of bipartite systems . given a state of , for example , twospin- systems one calculates the partial transpose of the density operator .the state is separable exactly if the partial transpose is again a positive operator .if , however , one of the eigenvalues of the partial transpose is negative then the state is entangled .one can now imagine that the amount of entanglement is quantified by the modulus of this negative eigenvalue , i.e. the larger it is , the larger the entanglement of the state .it is important to check whether this measure indeed preserves the ordering of density operators as it has been used for this purpose in some publications . in sectionii we will explain the entanglement of formation and subsequently we summarize some properties of the negative eigenvalue measure of entanglement . finally , we present in section iii both a numerical and an analytical comparison of the two measures of entanglement with respect to the ordering of density operators induced by them . 
in sectioniv we sum up the results of this paper .in this section we briefly describe some entanglement measures in particular the entanglement of formation and the negative eigenvalue measure of entanglement .there are not very many good measures of entanglement .one example is the relative entropy of entanglement .in fact , it gives rise to the most restrictive upper bound on the channel capacity of the depolarizing channel .unfortunately , even for two spin- particles no general analytical expression has been found for it so far although many special cases can be solved analytically .a second measure of entanglement and actually the first measure of entanglement that has been proposed for mixed states is the entanglement of formation .it basically describes the amount of entanglement that needs to be shared previously in order to be able to create a particular ensemble in a given state by local operations .mathematically , this means that we find that pure state ensemble that realizes the state and which has the smallest amount of entanglement , i.e. , where is a set of not necessarily orthogonal pure states .the entanglement of formation is known to be larger than the relative entropy of entanglement which proves that in general quantum state purification methods can not recover all the entanglement that has been invested in the creation of the quantum state .a nice feature of the entanglement of formation is the fact that it can be solved analytically for a system of two spin- particles .this allows for fast numerical studies as the cumbersome minimization eq .( [ formation ] ) can be avoided .the entanglement of formation can be expressed in terms of the function where for a density operator one defines the spin flipped state where the denotes the complex conjugate in the standard computational basis .one then finds the entanglement of formation to be where the so called concurrence is defined by here , are the eigenvalues , in decreasing order , of the hermitean matrix . for properties of this measure of entanglement the reader should consult the literature .it is interesting to note that since as a function of the concurrence is a strictly monotonous function and maps the interval ] can in fact also be regarded as a measure for entanglement . we will now consider the negative eigenvalue of the partial transpose of a density operator as a measure of entanglement . in the next sectionwe will then compare it to the entanglement of formation .for two spin- particles ( which form the two systems and ) any disentangled state can be written as the convex sum of product states states which permit a representation of the form eq .( [ separa ] ) are also called separable . for two spin- particlesthere is a simple criterion to decide whether a given state is separable or not .one calculates the partial transpose of the density operator in the computational basis .this means that we transpose only one subsystem , either subsystem or .if the resulting matrix is positive semidefinite then the density operator is separable ; otherwise it is not .therefore the partial transpose of an entangled state has a negative eigenvalue and the idea of the negative eigenvalue measure is to use the modulus of the negative eigenvalue to quantify the entanglement of the state . in a mathematical formthis reads as where the are the eigenvalues of the partial transpose .however , we do not know whether this way of quantifying the entanglement constitutes a proper measure of entanglement. 
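the closed form above can be implemented in a few lines; the expressions below are the standard wootters formulas that the analytic result relies on (spin flip with sigma_y tensor sigma_y, concurrence from the square roots of the eigenvalues of rho times the spin-flipped rho, and the binary entropy):

```python
import numpy as np

def concurrence(rho):
    """wootters concurrence of a two-qubit density matrix in the computational basis."""
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    flip = np.kron(sy, sy)
    rho_tilde = flip @ rho.conj() @ flip                      # spin-flipped state
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real))
    lam = np.sort(lam)[::-1]                                  # decreasing order
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def entanglement_of_formation(rho):
    """e_f = h((1 + sqrt(1 - c^2)) / 2), with h the binary entropy."""
    c = concurrence(rho)
    if c == 0.0:
        return 0.0
    x = 0.5 * (1.0 + np.sqrt(1.0 - c * c))
    return float(-x * np.log2(x) - (1.0 - x) * np.log2(1.0 - x))
```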
therefore we will investigate in the next section numerically whether the entanglement of formation and the negative eigenvalue measure are compatible from a different point of view .we would expect that any two ` good ' entanglement measures should generate the same ordering of the density operators .this means that for two entanglement measures and and any two density operators and we have that why do we expect this relation to be true ? if in one measure of entanglement contains more entanglement than then we would expect that a quantum state purification method would generate more singlets from an ensemble in state than ensemble . if is also a measure of entanglement then we would expect that would yield more singlets than .while this reasoning is not strict , it nevertheless indicates that eq .( [ consist ] ) should be true for two ` good ' measures of entanglement . in the next two subsections we will now check whether eq .( [ consist ] ) is satisfied for the entanglement of formation and the negative eigenvalue measure of entanglement . for some classes of density operators we can easily check analytically whether the relation eq .( [ consist ] ) is true . for pure statesit is sufficient to consider states of the form as it follows from the schmidt decomposition .the entanglement of formation then reduces to the von neumann entropy of entanglement while the negative eigenvalue measure yields both measures decrease monotonically with so that for pure states eq .( [ consist ] ) is satisfied .it should be mentioned that for pure states with the concurrence is given by hence , also for arbitrary pure states the negative eigenvalue measure and the concurrence are connected by the simple equation werner states are defined as where is the singlet state and ] such that for all states .this is also obvious from fig .[ fig2 ] . in fig .[ fig3 ] again the distribution of states is shown , but this time the negative eigenvalue measure is plotted versus the concurrence .we can see that most of the dots are located close to the diagonal connecting and ; this diagonal corresponds to states satisfying eq . ( [ connection ] ) .note that the numerical simulation also strongly suggests that for a state with a certain value of the upper bound for the possible values of the negative eigenvalue measure is given by , that is , that in general holds for any state .furthermore , since for pure states the ordering induced by the two entanglement measures is the same but in general it is not , it is of interest to investigate the dependence of the probability that eq .( [ consist ] ) is violated for a pair of entangled states on the ` purity of those states ' with respect to a certain characterization of the purity . in fig .[ fig4 ] we show the relative number of pairs of states which do not satisfy eq .( [ consist ] ) versus based on a million pairs of entangled states . here , and are the linear entropies of and , respectively , as commonly employed , e.g. , in decoherence studies .obviously , pure states correspond to a vanishing linear entropy . we observe that the fraction of pairs of states with increases monotonically with the sum of the linear entropies of the respective states .this indicates that the more mixed the two states are with respect to the arithmetic mean of their linear entropies , the larger is the probability that this pair violates eq .( [ consist ] ) . 
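the numerical comparison of the two orderings can be sketched as follows; note that the sampling measure used here (partial trace of random pure states) is only a stand-in for the parametrization of zyczkowski et al. used in the actual simulations:

```python
import numpy as np
# uses entanglement_of_formation() and negativity_measure() defined in the sketches above

def random_density_matrix(dim=4, env_dim=4, rng=np.random.default_rng(0)):
    """random mixed state obtained by tracing out an environment from a random pure state
    (the measure over states is only illustrative)."""
    psi = rng.normal(size=(dim, env_dim)) + 1j * rng.normal(size=(dim, env_dim))
    psi /= np.linalg.norm(psi)
    return psi @ psi.conj().T

rng = np.random.default_rng(1)
violations, entangled_pairs = 0, 0
for _ in range(20000):
    r1 = random_density_matrix(rng=rng)
    r2 = random_density_matrix(rng=rng)
    e1, e2 = entanglement_of_formation(r1), entanglement_of_formation(r2)
    n1, n2 = negativity_measure(r1), negativity_measure(r2)
    if min(e1, e2, n1, n2) > 0:                  # compare only pairs of entangled states
        entangled_pairs += 1
        if (e1 - e2) * (n1 - n2) < 0:            # the two measures order this pair differently
            violations += 1
print(violations, entangled_pairs)
```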
above a certain cut off no pairs of entangles states can be found at all - the numerical data shown in fig .[ fig4 ] give an estimate for the probability density of finding a pair of entangled states with a certain value of ( compare also ) .it should finally be mentioned that in yet another measure of entanglement incorporating the partial transpose is proposed : for a given state the quantity is taken as a measure , where again , the denote the eigenvalues of the partial transpose of .while the actual value of for a given state is of course different from that of , from numerical simulations we come to the same conclusion that the ordering induced by this measure is different from the one induced by the entanglement of formation .we have compared the entanglement of formation with a potential measure of entanglement that is given by the negative eigenvalue of the partial transpose of the density operator of the system .in particular the ordering of density operators with respect to the amount of entanglement induced by the two measures has been compared both numerically and analytically .we have shown that the negative eigenvalue measure does not induce the same ordering as the entanglement of formation .therefore we do not expect it to be a ` good ' measure of entanglement .in particular it can not , in general , be used to determine the most entangled state for a given family of density operators as it has been used previously .the authors would like to thank peter knight , vlatko vedral , and martin wilkens for discussions and useful hints .this work was supported in part by the epsrc , the european tmr research network erbfmrxct960066 and the european tmr research network erbfmrxct960087 .plenio and v. vedral , to appear in cont .september 1998 , also available as lanl e - print quant - ph/9804075 .bennett , g. brassard , c. crepeau , r. jozsa , a. peres , and w.k .wootters , phys . rev .lett . * 70 * , 1895 ( 1993 ) .d. boschi , s. branca , f. demartini , l. hardy , and s. popescu , phys .lett . * 80 * , 1121 ( 1998 ) .d. bouwmeester , j.w .pan , k. mattle , m. eibl , h. weinfurter , and a. zeilinger , nature * 390 * , 575 ( 1997 ) .bennett , d.p .divincenzo , j.a .smolin , and w.k .wootters , phys . rev .a * 54 * , 3824 ( 1996 ) .v. vedral , m.b .plenio , m.a .rippin , and p.l .knight , phys .lett . * 78 * , 2275 ( 1997 ) .v. vedral , m.b .plenio , k. jacobs , and p.l .knight , phys .a * 56 * , 4452 ( 1997 ) .v. vedral and m.b .plenio , phys .a * 57 * , 1619 ( 1998 ) .bennett , h.j .bernstein , s. popescu , and b. schumacher , phys .a * 53 * , 2046 ( 1996 ) .bennett , g. brassard , s. popescu , b. schumacher , j.a .smolin , and w.k .wootters , phys .lett . * 76 * , 722 ( 1996 ) .n. gisin , phys . lett . * 210 * , 151 ( 1996 ) .m. horodecki , p. horodecki , r. horodecki , phys .lett . * 78 * , 574 ( 1997 ) .m. murao , m.b .plenio , s. popescu , v. vedral , and p.l .knight , phys .rev . a * 57 * , r4075 ( 1998 ) .e. rains , lanl e - print quant - ph/9707002 .wootters , phys .lett . * 80 * , 2245 ( 1998 ) .m. horodecki , p. horodecki , r. horodecki , phys . lett .a * 223 * , 1 ( 1996 ) .a. peres , phys .. lett . * 77 * , 1413 ( 1996 ) .e. schmidt , math .ann . * 63 * , 433 ( 1907 ) . for a recent analysis of measures on the set of mixed statesslater , lanl e - print quant - ph/9806039 .k. yczkowski and m. ku , j. phys .a * 27 * , 4235 ( 1994 ) ; see also m. poniak , k. yczkowski , and m. ku , j. phys .a * 31 * , 1059 ( 1998 ) .k. yczkowski , p. horodecki , a. sanpera , and m. 
lewenstein , lanl e - print quant - ph/9804024 . w. h. zurek , s. habib , and j. p. paz , phys . rev . lett . * 70 * , 1187 ( 1993 ) .
we compare the entanglement of formation with a measure defined as the modulus of the negative eigenvalue of the partial transpose . in particular we investigate whether both measures give the same ordering of density operators with respect to the amount of entanglement .
a number of papers have recently considered the problem of constructing a coherent quantum observer for a quantum system ; e.g. , see . in the coherent quantum observer problem , a quantum plantis coupled to a quantum observer which is also a quantum system .the quantum observer is constructed to be a physically realizable quantum system so that the system variables of the quantum observer converge in some suitable sense to the system variables of the quantum plant . the papers considered the problem of constructing a direct coupling quantum observer for a given closed quantum system . in , the proposed observer is shown to be able to estimate some but not all of the plant variables in a time averaged sense .also , the paper shows that a possible experimental implementation of the augmented quantum plant and quantum observer system considered in may be constructed using a non - degenerate parametric amplifier ( ndpa ) which is coupled to a beamsplitter by suitable choice of the ndpa and beamsplitter parameters .one important limitation of the direct coupled quantum observer results given in is that both the quantum plant and the quantum observer are closed quantum systems .this means that it not possible to make an experimental measurement to verify the properties of the quantum observer . in this paper, we address this difficulty by extending the results of to allow for the case in which the quantum observer is an open quantum linear system whose output can be monitored using homodyne detection . in this case , it is shown that similar results can be obtained as in except that now the observer output is subject to a white noise perturbation .however , by suitably designing the observer , it is shown that the level of this noise perturbation can be made arbitrarily small ( at the expense of slow observer convergence ) .also , the results of are extended to show that a possible experimental implementation of the augmented quantum plant and quantum observer system may be constructed using a non - degenerate parametric amplifier ( ndpa ) which is coupled to a beamsplitter by suitable choice of the ndpa and beamsplitter parameters . in this case , the ndpa contains an extra field channel as compared to the result in and this extra channel is used for homodyne detection in the observer .in this section , we extend the theory of to the case of a direct coupled quantum observer which is also coupled to a field to enable measurements to be made on the observer . in our proposed direct coupled coherent quantum observer , the quantum plant is a single quantum harmonic oscillator which is a linear quantum system ( e.g. , see ) described by the non - commutative differential equation where denotes the system variable to be estimated by the observer and .this quantum plant corresponds to a plant hamiltonian . here ] is a vector of quantum noises expressed in quadrature form corresponding to the input field for the observer and is the corresponding output field ; e.g. , see .the observer output will be a real scalar quantity obtained by applying homodyne detection to the observer output field . , , . 
also , ] is hurwitz and hence , the system ( [ augmented4 ] ) will converge to a steady state in which represents a standard quantum white noise with zero mean and unit intensity .hence , at steady state , the equation ^{-1 } j\beta z_p dt + dw^{out}\ ] ] shows that the output field converges to a constant value plus zero mean white quantum noise with unit intensity .we now consider the construction of the vector defining the observer output .this vector determines the quadrature of the output field which is measured by the homodyne detector .we first re - write equation ( [ yo ] ) as where ^{-1 } j\beta\ ] ] is a vector in .then hence , we choose such that and therefore where will be a white noise process at steady state with intensity .thus , to maximize the signal to noise ratio for our measurement , we wish to choose to minimize subject to the constraint ( [ kconstraint ] ) .note that it follows from ( [ kconstraint ] ) and the cauchy - schwartz inequality that and hence however , if we choose then ( [ kconstraint ] ) is satisfied and .hence , this value of must be the optimal .we now consider the special case of . in this case, we obtain j\beta = \frac{4}{\sqrt{\kappa}}j\beta.\ ] ] hence , as , and therefore .this means that we can make the noise level on our measurement arbitrarily small by choosing sufficiently small .however , as gets smaller , the system ( [ augmented4 ] ) gets closer to instability and hence , takes longer to converge to steady state .in this section , we describe one possible experimental implementation of the plant - observer system given in the previous section .the plant - observer system is a linear quantum system with hamiltonian and coupling operator defined so that = w_o x_o.\ ] ] furthermore , we assume that , , where , , and . in order to construct a linear quantum system with a hamiltonian of this form , we consider an ndpa coupled to a beamsplitter as shown schematically in figure [ f2 ] ; e.g. , see .a linearized approximation for the ndpa is defined by a quadratic hamiltonian of the form where is the annihilation operator corresponding to the first mode of the ndpa and is the annihilation operator corresponding to the second mode of the ndpa . these modes will be assumed to be of the same frequency but with a different polarization with corresponding to the quantum plant and corresponding to the quantum observer . 
also , is a complex parameter defining the level of squeezing in the ndpa and corresponds to the detuning frequency of the mode in the ndpa .the mode in the ndpa is assumed to be tuned .in addition , the ndpa corresponds to a vector of coupling operators ] and ,\\ f_2 & = & \left[\begin{array}{ll}0 & \frac{\epsilon}{2}\\\frac{\epsilon}{2 } & 0\end{array}\right].\end{aligned}\ ] ] also , the matrix is given by ,\ ] ] and the matrix is given by .\ ] ] it now follows from the proof of theorem 1 in that we can construct a hamiltonian for this system of the form \left[\begin{array}{l } a \\ b \\a^ * \\ b^*\end{array } \right]\ ] ] where the matrix is given by and ] where ,\nonumber \\ m_2 & = & \frac{\imath}{2}\left[\begin{array}{ll}0 & \epsilon\\\epsilon & 0\end{array}\right].\end{aligned}\ ] ] also , we can construct the coupling operator for this system in the form \left[\begin{array}{l } a \\b \\a^ * \\ b^*\end{array } \right]\ ] ] where the matrix $ ] is given by hence , , n_2 = 0.\ ] ] we now wish to calculate the hamiltonian in terms of the quadrature variables defined such that = \phi \left[\begin{array}{l } q_p \\ p_p \\q_o \\p_o\end{array } \right]\ ] ] where the matrix is given by .\ ] ] then we calculate \left[\begin{array}{l } q_p \\p_p \\q_o \\p_o\end{array } \right]\nonumber \\ & = & \half \left[\begin{array}{ll}x_p\trp & x_o\trp \end{array } \right ] r\left[\begin{array}{l}x_p\\ x_o \end{array } \right]\end{aligned}\ ] ] where the matrix is given by ,\end{aligned}\ ] ] \ ] ] and .hence , comparing this with equation ( [ rc ] ) , we require that = \alpha\beta^t\ ] ] and the condition ( [ kconstraint ] ) to be satisfied in order for the system shown in figure [ f2 ] to provide an implementation of the augmented plant - observer system .we first observe that the matrix on the right hand side of equation ( [ rc1 ] ) is a rank one matrix and hence , we require that = |\delta|^2 - |\epsilon|^2 = 0.\ ] ] that is , we require that note that the function takes on all values in for and hence , this condition can always be satisfied for a suitable choice of . this can be seen in figure [ f4 ] which shows a plot of the function ..,width=302 ] furthermore , we will assume without loss of generality that andhence we obtain our first design equation in practice , this ratio would be chosen in the range of in order to ensure that the linearized model which is being used is valid .we now construct the vectors and so that condition ( [ rc1 ] ) is satisfied . indeed , we let ,~~ \beta= \left[\begin{array}{l } -\im(\epsilon ) -\im(\delta)\\\re(\epsilon ) + \re(\delta)\end{array}\right].\ ] ] for these values of and , it is straightforward to verify that ( [ rc1 ] ) is satisfied provided that . with this value of , we now calculate the quantity defined in ( [ e ] ) as follows : \beta \\ & = & -\frac{4}{\kappa_3 ^ 2 + 16 \omega_o^2}\left[\begin{array}{ll } \kappa_3 & 4 \omega_o \\-4\omega_o & \kappa_3 \end{array}\right ] \left[\begin{array}{l}\re(\epsilon ) + \re(\delta ) \\\im(\epsilon)+ \im(\delta)\end{array}\right ] . \ ] ] then , the vector defining the quadrature measured by the homodyne detector is constructed according to the equation ( [ k ] ) . 
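a small numerical sketch of this construction follows; the parameter values are placeholders, and the constant c used in the least - norm step is an assumption, since the explicit form of eq . ( [ k ] ) and the prefactor of the noise intensity were lost in the extraction:

```python
import numpy as np

def e_vector(kappa3, omega_o, eps, delta):
    """the vector e of eq. ( [ e ] ) for given ndpa / beamsplitter parameters."""
    m = np.array([[kappa3, 4.0 * omega_o],
                  [-4.0 * omega_o, kappa3]])
    v = np.array([eps.real + delta.real, eps.imag + delta.imag])
    return -4.0 / (kappa3**2 + 16.0 * omega_o**2) * m @ v

def k_vector(e, c=-2.0):
    """minimum-norm k with k . e = c (the cauchy-schwarz optimum); c = -2 is a placeholder."""
    return c * e / np.dot(e, e)

# placeholder parameters, chosen only to illustrate the small-kappa_3 behaviour
eps, delta = 0.1 + 0.05j, 0.1 - 0.05j
for kappa3 in (1.0, 0.1, 0.01):
    e = e_vector(kappa3, omega_o=0.0, eps=eps, delta=delta)
    k = k_vector(e)
    print(kappa3, np.dot(k, k))   # |k|^2 shrinks as kappa_3 decreases, so the measurement noise
                                  # (proportional to |k|^2 up to prefactors lost in extraction) can be made small
```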
in the special case that , this reduces to .\ ] ] in terms of complex numbers , we can write this as then , in terms of complex numbers , the formula ( [ k ] ) becomes where denotes complex conjugate .also , as noted in section [ sec : observer ] , the steady state measurement noise intensity is given by which approaches zero as .however , this is at the expense of increasingly slower convergence to steady state .in this paper , we have shown that a direct coupling observer for a linear quantum system can be implemented in the case that the observer can be measured using a homodyne detection measurement .this would allow the plant observer system to be constructed experimentally and the performance of the observer could be verified using the measured data .i. vladimirov and i. r. petersen , `` coherent quantum filtering for physically realizable linear quantum plants , '' in _ proceedings of the 2013 european control conference _ ,zurich , switzerland , july 2013 , arxiv:1301.3154 .z. miao , l. a. d. espinosa , i. r. petersen , v. ugrinovskii , and m. r. james , `` coherent quantum observers for finite level quantum systems , '' in _australian control conference _ , perth , australia ,november 2013 .i. r. petersen , `` a direct coupling coherent quantum observer , '' in _ proceedings of the 2014 ieee multi - conference on systems and control _ , antibes , france , october 2014 , also available arxiv 1408.0399 . , `` a direct coupling coherent quantum observer for a single qubit finite level quantum system , '' in _ proceedings of 2014 australian control conference _ , canberra , australia , november 2014 , also arxiv 1409.2594 . , `` time averaged consensus in a direct coupled coherent quantum observer network for a single qubit finite level quantum system , '' in _ proceedings of the 10th asian control conference 2015 _ , kota kinabalu , malaysia , may 2015 .i. r. petersen and e. h. huntington , `` a possible implementation of a direct coupling coherent quantum observer , '' in _ proceedings of 2015 australian control conference _, gold coast , australia , november 2015 .m. r. james , h. i. nurdin , and i. r. petersen , `` control of linear quantum stochastic systems , '' _ ieee transactions on automatic control _ , vol .53 , no .8 , pp . 17871803 , 2008 , arxiv : quant - ph/0703150 .a. j. shaiju and i. r. petersen , `` a frequency domain condition for the physical realizability of linear quantum systems , '' _ ieee transactions on automatic control _ , vol .57 , no . 8 , pp .2033 2044 , 2012 .
this paper considers the problem of constructing a direct coupling quantum observer for a quantum harmonic oscillator system . the proposed observer is shown to be able to estimate one but not both of the plant variables and produces a measurable output using homodyne detection .
large complex systems are composed of various interconnected components .the measure of the behavior of a single component thus results from the superimposition of different factors acting at different levels .common factors such as global trends or external socio - economical conditions obviously play a role but usually different sub - units ( such as users in the internet , states or regions in a country ) will react in different ways and add their local dynamics to the collective pattern . for example, the number of downloads on a website depends on factors such as the time of the day but one can also observe fluctuations from a user to another one . in the case of criminality , favorable socio - economical conditions will impose a global decreasing trend while local policies will affect the regional time series . in the case of financial series, the market imposes its own trend and some stocks respond to it more or less dramatically . in all these cases it is important to be able to distinguishif the stocks or regions are at the source of their fluctuations or if on the opposite , they just follow the collective trend . extracting local effects in a collection of timeseries is thus a crucial problem in assessing the efficiency of local policies and more generally , for the understanding of the causes of fluctuations .this problem is very general and as the availability of data is always increasing particularly in social sciences , it becomes always more important for the modeling and the understanding of these systems .there is obviously a huge literature on studying stochastic signals ranging from standard methods to more recents ones such as the detrended fluctuation analysis , independent component analysis , and separation of external and internal variables .most of these methods treat the internal dynamics as a small local perturbation with zero mean which is in contrast with the method proposed here . in a first part we present the method . in a second part, we test it on synthetic series generated by correlated random walkers .we then apply the method to empirical data of crime rates in the us and france , and obesity rates in the us , for which , to our knowledge , no general quantitative method is known to provide such separation between global and local trends .in general , one has a set of time series where and we will assume that the number of units is large .the index refers to a particular unit on a specific scale such as a region , city , a country .the problem we address consists in extracting the collective trend and the effect of local contributions .one way to do so is to assume the signal to be of the form where the ` external ' part , , represents the impact on the region of a global trend , while the ` internal ' part , , represents the contribution due to purely local factors .usually , in order to discuss the impact of local policies , one compares a regional ( local ) curve to the average ( the national average in case of regions of a country ) computed as ( or if one has intensive variables and populations ) .although reasonable at first sight , this assumes that the local component is purely additive : _ local term_. 
in this article , following , we will rather consider the possibility of having both multiplicative and additive contributions .more specifically , we assume where is a collective trend common to all series , and which affects each region with a corresponding prefactor .these coefficients are assumed to depend weakly on the period considered , ie .to vary slowly with time .we thus write we first note that the global trend is known up to a multiplicative factor only ( one can not distinguish from whatever ) and we will come back to this issue of scale later . also , the purely additive case is recovered if the s are independent of .if on the contrary the s are different from one region to the other , the national average ( [ eq : fnat ] ) , , is then given by here and in the following we denote the sample average , that is the average over all units , by a bar , , and the temporal average by brackets .the ` naive ' local contribution is then estimated by the difference with the national average the estimated local contribution can thus be very different from the original one , , and the difference will be very large at all times where is large ( note that the conclusion would be the same by taking the national average as ) .this demonstrates that comparing local time series with the naive average could in general be very misleading . beside the correct computation of the external and internal contributions, the existence of both multiplicative and additive local contributions implies that the effect of local policies must be analyzed by considering both how the local unit follows the global trend ( ) and how evolves the purely internal contribution ( ) . in a previous study , menezes and barabasi proposed a simple method to separate the two contributions , internal ( ) and external ( written as ) .they assume that the temporal average is zero , and compute the external and internal parts by writing and .this method can be shown to be correct in very specific situations , such as the case where is the fluctuating number of random walkers at node in a network , but in many cases however , one can expect that the local contributions have a non zero sample average and the method of will yield incorrect results . indeed , if the hypothesis eq .( [ eq : hyp ] ) is exact , this method would give for the estimate , and in the limit for would lead to the estimates and , which are different from the exact results , except if . 
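the bias of the naive estimate can be made explicit on a short synthetic example (all numbers below are illustrative and not taken from the data sets studied later):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 50
w = np.cumsum(rng.normal(size=T))                  # a common trend (illustrative random walk)
a = rng.uniform(0.5, 1.5, size=N)                  # unit-dependent prefactors
mu = rng.uniform(-1.0, 1.0, size=N)                # non-zero means of the local terms
g = mu[:, None] + 0.3 * rng.normal(size=(N, T))    # local contributions g_i(t)
f = a[:, None] * w + g                             # observed series f_i(t)

f_bar = f.mean(axis=0)                             # the "national average"
naive_local = f - f_bar                            # naive estimate of the local part
# what the naive estimate contains beyond the true local part is exactly (a_i - a_bar) * w(t):
i = 0
leftover = naive_local[i] - (g[i] - g.mean(axis=0))
print(abs(np.corrcoef(leftover, w)[0, 1]))         # equals 1 whenever a_i differs from the mean prefactor
```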
in order to separate the two contributions we propose in this article a totally different approach , by taking an independent component analysis point of view in which we do not assume that the local contribution has a zero average ( over time and/or over the regions ) .to express the idea that the ` internal ' contribution is by definition what is specifically independent of the global trend , and that the correlations between regions exist essentially only through their dependence in the global trend , we impose that the global trend is statistically independent from local fluctuations ( we denote by the connected correlation ) , and that these local fluctuations are essentially independent from region to region , that is for where this statement will be made more precise below .we show that , for large , these constraints ( [ eq:<wfint > ] ) , ( [ eq:<gigj > ] ) are sufficient to extract estimates of the global trend and of the s .we denote by the average of and by its dispersion , so that we write with and .if we denote by and , we have with note that .if we now consider the correlations between these centered quantities , , we find if we assume that for is negligible ( of order ) compared to ( which is what we mean by having small correlations between internal components , eq .( [ eq:<gigj > ] ) ) , from this last expression we can show that at the dominant order in , we have these equations lead to which is valid when .we note that our method has a meaning only if strong correlations exist between the different s and if it is not the case , the definition of a global trend makes no sense and the approximation used in our calculations are not valid . in the supporting information ( section si1 )we show that the factors s can also be computed as the components of the eigenvector corresponding to the largest eigenvalue of - a method which is valid under the weaker assumption of having a small number ( compared to ) of non diagonal terms of the matrix which are not negligible .once the quantities are known , we can compute the global normalized pattern with the reasonable estimator given by , indeed , and since the quantity is a sum of independent variables with zero mean , we can expect it to behave as .we can show that this actually results from the initial assumptions .indeed , by construction and the second moment is by assumption we have if and we thus obtain . the computation of the s and of is equivalent to an independent component analysis ( ica ) with a _single _ source ( the global trend ) and a large number of sensors .however , in contrast with the standard ica , we are not interested in getting only the sources ( here the trend ) , but also the internal contributions ( which , in a standard ica framework , would be considered as noise terms , typically assumed to be small ) .we have already the s , and since has been calculated we can compute .we thus obtain at this stage this is a set of equations for unknown ( and the s ) and we are thus left with one free parameter , the ratio . knowingits value would give the local averages , the s .less importantly one may want also to fix the average ( hence both and ) in order to fully determine the pattern : this will be of interest only for making a direct comparison between this pattern and the national average ( [ eq : fnat ] ) .this equation ( [ eq : mui ] ) suggests a statistical linear correlation between and , with a slope given by . 
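a minimal sketch of the estimation step, using the eigenvector route mentioned above (section si1); the exact normalizations of eqs . ( [ eq : ai ] ) and ( [ eq : w ] ) were lost in the extraction, so the least - squares projection used below for the trend is an assumption, not necessarily the paper's exact estimator:

```python
import numpy as np

def etica_decompose(f):
    """separate f (n units x t times) into a_i * w(t) + g_i(t).
    a_i: leading eigenvector of the sample covariance of the series (known only up to a global scale);
    w(t): least-squares projection of the centered data on a (one choice of normalization)."""
    phi = f - f.mean(axis=1, keepdims=True)          # centered series
    cov = phi @ phi.T / phi.shape[1]                 # covariance between units
    vals, vecs = np.linalg.eigh(cov)
    a = vecs[:, -1]                                  # eigenvector of the largest eigenvalue
    if a.sum() < 0:
        a = -a                                       # fix the arbitrary overall sign
    w = (a @ phi) / np.dot(a, a)                     # fluctuating part of the global trend
    g = f - np.outer(a, w)                           # internal parts, plus the unresolved constants a_i <w>
    return a, w, g
```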
we will indeed observe a linear correlation in the data sets ( next section , figure )however , it could be that the s themselves are correlated with the . hence , and unfortunately, a linear regression can not be used to get an unbiased estimate of the parameter . in the absence of additional information or hypothesisthis parameter remains arbitrary .however one may compare the qualitative results obtained for different choices of : which properties are robust , and which ones are fragile .in particular one would like to be able to access how a given region is behaving , compared to another given region , and/or to the global trend . to do so , in the applications below wewill in particular analyze : ( i ) the correlations between the two local terms , and ; ( ii ) the robustness of the rank given by the s ; ( iii ) the sign of ; ( iv ) the quantitative and qualitative similarities between and the naive estimate .we will focus on two particular scenarios .first , one may ask the global trend to fall ` right in the middle ' of the series .there are different ways to quantify this .one way to do so is to note that , in the absence of internal contribution , would be equal to , hence would be equal to .therefore we may compute by imposing which is thus equivalent to impose an alternative is to ask the resulting to be as close as possible to the naive ones ( eq . ( [ eq : naive ] ) ) , by minimizing which gives in both cases one may then fix from or by imposing for some arbitrary chosen .finally , one may rather ask for a conservative comparison with the naive approach by minimizing the difference between and : either by writing ( or ) and , or by minimizing , which gives for is large , one can check that the results depend weakly on any one of these reasonable choices .the second scenario considers the correlations between the s and the s .as we will see , the first hypothesis leads to a strictly negative correlation .an alternative is thus to explore the consequences of assuming no correlations , hence asking for which implies that the slope of the observed linear correlation with gives the value of .as explained above , for each application below we will discuss the robustness of the results with respect to these choices of the parameter .we can now summarize our method .it consists in ( i ) estimating the s using eq .( [ eq : ai ] ) ( or using the eigenvector corresponding to the largest eigenvalue of the correlation matrix , section si1 ) , ( ii ) computing using eq .( [ eq : w ] ) , and finally ( iii ) comparing the results for different hypothesis on as discussed above .we propose to call this method the _ external trend and internal component analysis _ ( * etica * ) .we note that if the hypothesis eq .( [ eq : hyp ] ) , ( [ eq:<wfint > ] ) , ( [ eq:<gigj > ] ) are correct , the method gives estimates of , the ( hence of ) which become exact in the limit and large , and a good estimate of the full trend ( hence of the ) whenever this trend , qualitatively , does fall ` in the middle ' of the time series .once we have extracted with this method the local contribution , and the collective pattern together with its redistribution factor for each local series , we can study different quantities , as illustrated below on different applications of the method .in general , although this method gives a pattern very similar to the sample average , we will see that there is non trivial structure in the prefactors s leading to non trivial local contributions .in some cases one may expect to have 
, in addition to the local contribution , a linear combination of several global trends ( a small number of sources ) : we leave for future work the extension of our method to several external trends .we first test our method on synthetic series and we then illustrate it on crime rate series ( in the united states and in france ) and on us obesity rate series . for the crime rates , a plot of the time series shows that obviously a common trend exists ( fig .c + after computing the internal and external terms , we perform different tests in order to assess the validity of the approach .in particular , figure shows a plot of the local factors versus the data time - averages , the s .c + one observes a statistical linear correlation in the four set of time series .we stress that the s are computed from the covariance matrix of the data , hence after removing the means from the time series .the fact that we do observe a linear correlation is thus a hint that our hypothesis on the data structure is reasonable ( in contrast the very good linear correlation observed in can be shown to be an artefact of the method used in these works , leading to an exact proportionality independently of the data structure , ( see the section si2 ) .we now discuss in more detail the synthetic series , each one of the crime rate data sets , and the obesity rate .we can illustrate our method on the case of correlated random walkers described by the equation where is the global trend imposed to all walkers and the are gaussian noises but with possible correlations between different walkers /12 ] while states have always a negative local contribution ( vt , ga , la , nh , ct , ms ) . in these caseswe can reasonably imagine that local policies have a noticeable effect .finally , we can also analyze the ranking of the local contributions versus by studying kendall s for the two consecutive series and . in both cases ( france and us )we observe a larger than for the range chose ] or in other fields , e.g. in protein structure analysis ] is actually built in the method proposed by these authors : it is a direct consequence of their definitions of the internal and external parts , and it does not depend on the data structure . indeed , let be an arbitrary data set such that . for , following ] and by letting vary .we then obtain for the crime in the us ( in the case of the crime rates in france , the dataset is not large enough ) the figure (a ) .c + [ fig : ait ] this figure shows that in the case of the crime rate in the us , the converge to a stationary value , independent of the time interval , provided it is large enough .our method will then lead to reliable results constant in time .we also tested our method on the financial time series given by the 500 most important stocks in the us economy [ 1 ] , and which composition leads to the index . 
here the ` local ' units are the individual stocks ( ) , and the ( naive ) average - the analogue of a national average - is precisely the index time series . we study the time series for these stocks on the days of the period and we compute the global pattern , the coefficients , and the parameters ( defined in the text ) for the time window ] for varying from to . these quantities measure quantitatively the importance of local versus external fluctuations for each stock . the results for the s are shown in figure (b ) and display large variations , particularly when we approach october 2008 , a period of financial crisis . it is therefore not completely surprising that the ( and the s ) fluctuate strongly in this case . in some sense , we can conclude that the s correspond to an average susceptibility to the global trend ; they are not invariable quantities and can vary over different periods . this example shows that it is important to check the stability of the coefficients , which is a crucial assumption of our method . the variations of these coefficients are nevertheless interesting , and further studies are needed in order to understand them . [ 1 ] historical data for stocks : http://biz.swcp.com/stocks/ for the obesity rate series , we compare the variances of the internal ( ) and the external ( ) contributions . we observe on the figure that the variance of the external contribution became dominant after the year 2000 .
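as a usage example, the decomposition sketched earlier can be checked on synthetic correlated walkers of the type used in our tests (the trend shape, noise level and sample sizes below are arbitrary choices, not those used in the paper):

```python
import numpy as np
# uses etica_decompose() from the sketch above

rng = np.random.default_rng(2)
T, N = 500, 100
t = np.arange(T)
w_true = np.sin(2 * np.pi * t / 125.0)                       # imposed global trend (illustrative)
a_true = rng.uniform(0.5, 2.0, size=N)                       # heterogeneous prefactors
g_true = np.cumsum(0.02 * rng.normal(size=(N, T)), axis=1)   # independent local random walks
f = a_true[:, None] * w_true + g_true

a_est, w_est, _ = etica_decompose(f)
scale = np.dot(a_est, a_true) / np.dot(a_est, a_est)         # the overall scale of the a_i is not observable
print(np.corrcoef(scale * a_est, a_true)[0, 1])              # typically close to 1 when the trend dominates
print(np.corrcoef(w_est, w_true - w_true.mean())[0, 1])      # the centered trend is recovered up to scale
```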
a single social phenomenon ( such as crime , unemployment or birth rate ) can be observed through temporal series corresponding to units at different levels ( cities , regions , countries ... ) . units at a given local level may follow a collective trend imposed by external conditions , but may also display fluctuations of purely local origin . the local behavior is usually computed as the difference between the local data and a global average ( e.g. a national average ) , a viewpoint which can be very misleading . we propose here a method for separating the local dynamics from the global trend in a collection of correlated time series . we take an independent component analysis approach in which we do not assume a small , unbiased local contribution , in contrast with previously proposed methods . we first test our method on synthetic series generated by correlated random walkers . we then consider crime rate series ( in the us and france ) and the evolution of the obesity rate in the us , which are two important examples of societal measures . for crime rates , the separation between global and local policies is a major subject of debate . for the us , we observe large fluctuations in the transition period of the mid- s during which crime rates increased significantly , whereas since the s the state crime rates have been governed by external factors , with the importance of local specificities decreasing . in the case of obesity , our method shows that external factors dominate the evolution of obesity since 2000 , and that different states can have different dynamical behavior even if their obesity prevalence is similar . : physical sciences ( applied mathematics , physics ) , social sciences . : time series analysis , independent component analysis , financial time series , crime rates , obesity .
one of the oldest problem in galactic dynamics is the determination of the mass distribution in the stellar and gaseous disk from observed velocities .observationally , noisy measurements along with a certain velocity dispersion makes this inverse problem difficult to solve ( beauvais & bothum , 2001 ) . from a theoretical point of view , various levels of approximation are possible . for instance , the inversion is straightforward if the disk is considered as a flattened spheroid , since the surface density is simply obtained by an inverse abel transform over the velocity field ( brandt , 1960 ) .flattened spheroids are however a poor representation of real disks ( e.g. kochanek 2001 ) .another , more intuitive way to proceed is to generate a collection of surface density profiles , then to determine the corresponding potential or gravitational field , and find only those which model the velocity data at best . for a razor - thin axi - symmetrical disk , only one numerical quadrature over its radial extent must be performed to obtain the potential or the field , which is a priori very convenient because time in - expansive .meanwhile , this operation is commonly believed to be numerically uncomfortable when using elliptic integrals ( binney & tremaine , 1987 , cuddeford , 1993 , kochanek , 2001 ) .the reason is that these functions contain a singularity everywhere in the source . as suggested by binney & tremaine ( 1987 ) ,this drawback can be simply avoided by considering field points located just above / below the equatorial plane .even if the potential and its radial derivative do generally not exhibit strong variations slightly off the mid - plane , such a vertical shift introduces an inconsistency or a bias . as a matter of fact , instead of elliptic integrals , most authors work with bessel functions which have finite amplitudes ( toomre , 1963 ) .but their oscillatory behavior and specially their spatial extension ( the infinite range ) are a severe disadvantage from a numerical point of view ( cuddeford , 1993 ; cohl & tohline 1999 ) .+ in this paper , we show that point mass singularities can be properly and easily handled numerically from elliptic integrals by density splitting . for the case of a disk with zero thicknessas considered here , the gravitational potential and field can be determined exactly in the equatorial plane without any shift , and whatever the surface density profile ( provided it vanishes at the inner and outer edges of the disk ) . the extension to tri - dimensional disks is possible .for a disk with zero thickness , the expression for the radial gravitational field in the equatorial plane is known in a closed form .using cylindrical coordinates , this reads ( e.g durand , 1964 ) : , \label{eq : potential}\ ] ] where is the total surface density , is the field point , refers to the material distribution , and are respectively the inner and the outer radius of the disk , and and are respectively defined by and finally , ( respectively ) is the complete elliptic integral of the first ( resp .second ) kind . 
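as a numerical illustration of the field integral above (and a preview of the splitting introduced in the next section), the sketch below evaluates the kernel with scipy . since the explicit definitions of the modulus and of the quantity dividing e were stripped from the text, the kernel is written here in the equivalent form a [ e(k)/(a - r) - k(k)/(a + r) ], obtained from the ring potential , with k^2 = 4 a r / (a + r)^2 ; the disk scale and units are illustrative , not taken from the paper :

```python
import numpy as np
from scipy.special import ellipk, ellipe
from scipy.integrate import quad

G, sigma0, h = 1.0, 1.0, 1.0            # illustrative units
r_in, r_out = 0.1, 10.0

def sigma(a):                           # freeman exponential disk
    return sigma0 * np.exp(-a / h)

def kernel(a, r):
    """mid-plane radial-field kernel a * [E(k)/(a-r) - K(k)/(a+r)], written in an
    equivalent form derived from the ring potential; scipy's ellipk/ellipe take m = k**2."""
    m = 4.0 * a * r / (a + r) ** 2
    return a * (ellipe(m) / (a - r) - ellipk(m) / (a + r))

r = 3.0
# the raw integrand sigma(a) * kernel(a, r) diverges as a -> r ...
print([sigma(a) * kernel(a, r) for a in (r - 1e-3, r - 1e-6, r + 1e-6)])
# ... while the residual integrand (sigma(a) - sigma(r)) * kernel(a, r) stays finite:
print([(sigma(a) - sigma(r)) * kernel(a, r) for a in (r - 1e-3, r - 1e-6, r + 1e-6)])

g_res, _ = quad(lambda a: (2.0 * G / r) * (sigma(a) - sigma(r)) * kernel(a, r),
                r_in, r_out, points=[r], limit=200)
print(g_res)                            # to be added to the closed-form homogeneous-disk term
```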
note that the integrand of equation ( [ eq : potential ] ) diverges as the modulus , namely when .the nature of the singularity is twofold : a logarithmic singularity due to the -function , and a hyperbolic one due to the term .however , this singularity is integrable .there are different ways to estimate improper integrals .here is a simple and efficient recipe .let us write the surface density as where is the surface density at the field point and is the `` residual '' surface density which , by construction , equals zero when .thus , equation ( [ eq : potential ] ) can be written as the sum of two contributions , namely : where \label{eq : hom}\ ] ] is the radial gravitational field due to a radially homogeneous disk and \label{eq:1}\ ] ] corresponds to the residual profile .the point is that the expression for also exists in a closed analytical form ( see appendix [ sec : appena ] ) .this quantity is finite everywhere inside the disk , even at the disc edges provided that vanishes continuously there .it is important to mention that this restriction is no more necessary if we consider the disk as a tri - dimensional system . besides, can be easily computed numerically because both and are fully regular whatever and , and especially for ( see appendix [ sec : appenb ] for a proof ) . as a consequence ,the accuracy on ( and subsequently on the total field ) depends only on the performance of the quadrature scheme used to performed the radial integral in equation ( [ eq:1 ] ) . in figure[ figure ] , we illustrate this simple and efficient technique by considering a disk with inner edge , outer edge and surface density as commonly used to model spiral galaxies ( freeman , 1970 ) .the graph shows the residual profile , as well as the two regular functions and . here, the field point where stands the singularity is arbitrarily set to ( about the middle of the disk ) .we can now apply this method to compute the rotation law of any disk , whatever its size and surface density profile . here, we simply assume that the contribution to the rotation is only gravitation ( and neglect pressure and viscosity effects ) .thus , is defined by the relation where : and using the previous expressions for the radial field , these two terms respectively read : -{\bf e}\left(\frac{r_{\rm in}}{r}\right)+{\bf k}\left(\frac{r_{\rm in}}{r}\right)\right\ } \label{eq : omegahomo}\ ] ] and \label{eq:2}\ ] ] these two expressions are easy to compute numerically , for reasons mentioned above .let us remind that at the edges , vanishes because as assumed ( see above ) .we have reported a simple and efficient method to compute the radial field and the rotation curve of any razor - thin axi - symmetric disk in the mid - plane , using complete elliptic integrals . as a matter of fact, a similar treatment can be considered for the gravitational potential .even , this splitting technique can be employed to disks with finite ( non - zero ) thickness as suggested in hur ( 2003 ) , and to the fully tri - dimensional systems as well ( pierens & hur 2004 ) .abramowitz , m. & stegun , i. a. 1970 , handbook of mathematical functions , new york : dover , 1970 beauvais , c. & bothun , g. 2001 , , 136 , 41 binney , j. & tremaine , s. 1987 , princeton , nj , princeton university press , 1987 , 747 p. brandt , j. c. 1960 , , 131 , 293 cohl , h. s. & tohline , j. e. 1999 , , 527 , 86 durand e. , 1964 , in _ `` electrostatique i. les distributions '' _ , masson ed .freeman , k. c. 1970 , , 160 , 811 gradshteyn , i. s. & ryzhik , i. m. 
1980 , new york : academic press , 1980 .hur j - m . , 2003 ,submitted to pierens a. , & hur j.m ., 2004 , in preparation kochanek c.s . , 2001 ,submitted ( astro - ph/0108162 ) toomre , a. 1963 , , 138 , 385 sofue , y. & rubin , v. 2001 , , 39 , 137let us begin with the expression for the radial field due to a disk with constant surface density , i.e. equation ( [ eq : hom ] ) .this expression can be written as follows : +\int_{r}^{r_{\rm out}}\sqrt\frac{a}{r}k\left[\frac{{\bf e}(k)}{\varpi}-{\bf k}(k)\right]da\right\ } \label{eq : grhom}\ ] ] we can now set in the first integral and in the second one . changing the modulus in the elliptic integrals according to ( gradshteyn & ryzhik , 1980 ) : and \ ] ] with and , then equation ( [ eq : grhom ] )becomes : \label{eq : grhom2}\ ] ] each integral in this expression is known .actually , we have ( gradshteyn & ryzhik , 1980 ) : and inserting these relations into equation ( [ eq : grhom2 ] ) and re - arranging terms yields the formula -{\bf e}\left(\frac{r_{\rm in}}{r}\right)+{\bf k}\left(\frac{r_{\rm in}}{r}\right)\right\}\ ] ]in order to compute numerically equations ( [ eq:1 ] ) and ( [ eq:2 ] ) , we have to check that and remain finite when , which is not obvious at first glance .since by construction , a taylor expansion of around yields :
there is apparently a widespread belief that the gravitational field ( and consequently the rotation curve ) `` inside '' razor - thin , axially symmetric disks cannot be determined accurately from elliptic integrals because of the singular kernel in the poisson integral . here , we report a simple and powerful method to achieve this task numerically using the technique of `` density splitting '' .
the range is commonly regarded as a bound in spectral reconstructions . rather than being a fundamental limit , however , this bound only holds for certain classes of control sequences . here, we show that the ultimate bound is set by the _time resolution _ of the control sequence , .specifically , , with sequence having duration , where and .for example , all and for the family we construct from different orders of cdd , whereas the are different positive integers and the are rational multiples of for the cpmg sequences used in the protocol of alvarez and suter .each sequence has pulses at times .define and . in practice , the times constrained by the time resolution : since times less than are not resolvable , all times must be integer multiples of , say for .consequently , the time separations between any two times and must also be integer multiples of . we denote the time separations by for , . for simplicity , consider the task of reconstructing the spectrum of a stationary gaussian noise process .this requires measuring ^m} ] to the psd evaluated at a set of harmonic frequencies for . in the gaussian case , eq .( [ eq::chim ] ) reduces to ^m}\approx & -\frac { m}{2t_p}\sum_{h=0}^{h_p } m_{1}\!\!\left(\frac{2\pi h}{t_p}\right ) \big|f_p\!\left(\frac{2\pih}{t_p}\right)\notag \\ = & -\frac{m}{2t_p}\sum_{h=0}^{h_p } m_{1}\!\!\left(\frac{2\pi n_ph}{t}\right ) \big|f_{p}\!\left(\frac{2\pi n_ph}{t}\right)\big|^2s\!\left(\frac{2\pi n_ph}{t}\right ) \label{eq::chim2}.\end{aligned}\ ] ] inverting the system of linear equations formed by eq .( [ eq::chim2 ] ) for each in enables one to reconstruct the psd at the base harmonics " , integer multiples of .non - degeneracy of this linear system depends on the periodicity of the ffs .the periodicity of sequence is determined by the oscillatory component of , which is given by - 2(-1)^{n_p}\text{cos}[(t_{n_p+1}-t_0)\omega]\label{eq::ffnumerator}\\ & -4\sum_{j=1}^{n_p}(-1)^{j+n_p}\text{cos}[(t_j - t_{n_p+1})\omega]+4\sum_{j=1}^{n_p}(-1)^j\text{cos}[(t_j - t_0)\omega]\notag .\end{aligned}\ ] ] note that the last two terms in this expression cancel when is a time - symmetric sequence . in an abuse of notation ,define , and . for integer ,these definitions reduce to the ordinary gcf and lcm , respectively . in the frequency domain , the periodicities of the cosine terms in eq .( [ eq::ffnumerator ] ) are determined by the time separations . from these cosine terms , the periodicity of is , where here , when is time - symmetric and otherwise .note that the arguments of the cosine terms in eq .( [ eq::ffnumerator ] ) are even multiples of when .consider now the half periodicity " , .as expected , the oscillatory component is symmetric about , i.e. for all .we now extend this notion of periodicity to a set of control sequences .define the half periodicity of as where note that for all and . because divides all time separations by an integer , for some , implying . in the casethat is even , are base harmonics .this implies and ) for each is degenerate and non - invertible when .similarly , if is odd , then are base harmonics and latexmath:[\ ] ] by measuring ^m} ] to use will be an iterative process .for example , from a single set of measurements one can reconstruct both using the entangled form and using the separable form .these reconstructions can then be used to predict the dynamics of the qubit under free decay . 
by comparing the predictions to actual measurements of the qubit under free decay, one can decide which model best describes the noise , as mentioned in the text .
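returning to the time - resolution argument above, the half - periodicity that bounds the reconstruction range can be checked directly from the pulse times: for times on a grid of resolution tau, the oscillatory part of the filter function repeats with period 2 pi / (g tau), where g is the greatest common factor of the time separations (in units of tau). the sequence below is illustrative (not one of the cdd sequences used in the paper), and the 1/omega^2 normalization of the ff is omitted since only the periodicity matters:

```python
import numpy as np
from math import gcd
from functools import reduce
from itertools import combinations

tau = 1.0                                    # time resolution (arbitrary units)
# times t_0 = 0, pulse times, and t_{n+1} = T, all as integer multiples of tau (illustrative)
grid_times = [0, 2, 6, 8, 12]

def ff_numerator(times, omega):
    """|sum_j (-1)^j (exp(i w t_{j+1}) - exp(i w t_j))|^2, the oscillatory part of the ff."""
    t = np.asarray(times, dtype=float) * tau
    signs = (-1.0) ** np.arange(len(t) - 1)
    s = np.sum(signs * (np.exp(1j * omega * t[1:]) - np.exp(1j * omega * t[:-1])))
    return np.abs(s) ** 2

g = reduce(gcd, (abs(a - b) for a, b in combinations(grid_times, 2)))
period = 2.0 * np.pi / (g * tau)             # here g = 2, so the period is pi / tau
w = np.linspace(0.1, 5.0, 7)
print(np.allclose([ff_numerator(grid_times, x) for x in w],
                  [ff_numerator(grid_times, x + period) for x in w]))   # True
```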
we introduce open - loop quantum control protocols for characterizing the spectral properties of non - gaussian noise , applicable to both classical and quantum dephasing environments . the basic idea is to engineer a multi - dimensional frequency comb via repetition of suitably designed pulse sequences , through which the desired high - order noise spectra may be related to observable properties of the qubit probe . we prove that access to a high time resolution is key to achieve spectral reconstruction over an extended bandwidth , overcoming limitations of existing schemes . non - gaussian spectroscopy is demonstrated for a classical noise model describing quadratic dephasing at an optimal point , as well as a quantum spin - boson model out of equilibrium . in both cases , we obtain spectral reconstructions that accurately predict the qubit dynamics in the non - gaussian regime . accurately characterizing the spectral properties of environmental noise in open quantum systems has broad practical and fundamental significance . within quantum information processing , this is a prerequisite for optimally tailoring the design of quantum control and error - correcting strategies to the noisy environment that qubits experience , and for testing key assumptions in fault - tolerance threshold derivations . from a physical standpoint , precise knowledge of the noise is necessary for quantitatively modeling and understanding open - system dynamics , with implications ranging from the classical - to - quantum transition to non - equilibrium quantum statistical mechanics and quantum - limited metrology . quantum noise spectroscopy seeks to characterize the spectral properties of environmental noise by using a controlled quantum system ( a qubit under multi - pulse control in the simplest case ) as a dynamical probe . in recent years , interest in quantum noise spectroscopy has heightened thanks to both improved theoretical understanding of open - loop controlled dynamics in terms of transfer filter - function ( ff ) techniques and experimental validation in different qubit platforms . in particular , quantum control protocols based on dynamical decoupling ( dd ) have been successfully implemented to characterize noise properties during memory and driven evolution in systems as diverse as solid - state nuclear magnetic resonance , superconducting and spin qubits , and nitrogen vacancy centers in diamond . despite the above advances , existing noise spectroscopy protocols suffer from several disadvantages . notably , they are restricted to _ classical , gaussian _ phase noise . while dephasing ( - ) processes are known to provide the dominant decoherence mechanism in a variety of realistic scenarios , the gaussianity assumption is _ a priori _ far less justified . on the one hand , the gaussian approximation tends to break down in situations where the system is strongly coupled with an environment consisting of discrete degrees of freedom such as for noise , as ubiquitously encountered in solid - state devices . even for environments well described by a continuum of modes , non - gaussian noise statistics may be generally expected away from thermal equilibrium , or whenever symmetry considerations forbid a linear coupling . in all such cases , accurate noise spectroscopy mandates going beyond the gaussian regime . in this paper , we introduce open - loop control protocols for characterizing _ stationary , non - gaussian dephasing _ using a qubit probe . 
our approach is applicable to classical noise environments and to a paradigmatic class of open quantum systems described by linearly coupled oscillator environments as long as all relevant noise spectra obey suitable smoothness assumptions . while we build on the noise spectroscopy by sequence repetition proposed by alvarez and suter , our central insight is to leverage the simple structure of ffs in a purely dephasing setting to establish the emergence of a frequency comb for arbitrary high - order noise spectra ( so - called ) , paving the way to the desired multi - dimensional spectral estimation . we first demonstrate the power of our approach for gaussian noise , where we extend the range of spectral reconstruction over existing protocols . in the non - gaussian regime , we reconstruct the spectra associated with the leading high - order cumulants of the noise , absent in the gaussian limit . quantitative prediction of the qubit free evolution in the presence of these non - gaussian environments reveals clear dynamical signatures in both the classical and quantum case . _ control setting and noise polyspectra. _ we consider a qubit coupled to an uncontrollable environment ( bath ) . in the interaction picture with respect to the bath hamiltonian , , and the qubit hamiltonian , the joint system is described by , where the first term accounts for the bath - induced dephasing and is the external open - loop control , acting non - trivially on the qubit alone . for a classical bath , is a stochastic noise process , whereas is a time - dependent operator for a quantum bath . the applied control consists of repeated sequences of -pulses ( say , about ) , which for simplicity we take to be instantaneous . after transforming to the interaction picture associated with , the joint hamiltonian becomes , where the switching function " changes sign between with every -pulse applied to the qubit . the effect of dephasing is seen in the dynamics of the qubit s coherence element , which we may express in terms of bath - operator cumulants . specifically , , where the decay parameter and phase angle are respectively given by : where the - order noise cumulant depends on the bath correlation functions , , and denotes an ensemble average for a classical bath or an expectation value with respect to the initial bath state , , in the quantum case . for zero - mean gaussian noise , except for thus , gaussian noise gives _ no phase evolution_. for non - gaussian noise , higher - order even ( odd ) cumulants contribute to decay ( phase evolution ) , respectively . for stationary noise , where is a function of the time separations , , the noise spectral properties are fully characterized by the fourier transforms of the cumulants with respect to . using the compact notation , the _ - order _ is defined as where is the familiar power spectral density ( psd ) , and , are known as the `` bi - spectrum '' and `` tri - spectrum '' , respectively . for all orders , is a smooth -dimensional surface when the noise is classical and ergodic . more generally , may depend on fewer than time separations , leading to the presence of delta functions in . all polyspectra possess a high degree of symmetry , _ irrespective of the noise_. that is , is fully specified in all frequency space by its value on a particular subspace , , known as the _ principal domain _ . _ noise spectroscopy protocol. _ our objective is to characterize not only the psd but the polyspectra . 
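to make the definition concrete, the sketch below estimates a bi - spectrum with a textbook fft estimator (this is not the comb - based protocol developed in this paper; the `` square '' noise used for illustration mirrors the quadratic - dephasing model considered later):

```python
import numpy as np

def bispectrum_estimate(records, dt=1.0):
    """direct fft estimator of the bi-spectrum of a zero-mean stationary process:
    b(w1, w2) ~ < x(w1) x(w2) x(-w1 - w2) > / t, averaged over independent records."""
    n_rec, T = records.shape
    X = np.fft.fft(records, axis=1) * dt
    idx = np.arange(T)
    k3 = (-(idx[:, None] + idx[None, :])) % T          # index of the frequency -(w1 + w2)
    acc = np.zeros((T, T), dtype=complex)
    for x in X:
        acc += x[:, None] * x[None, :] * x[k3]
    return acc / (n_rec * T * dt)

rng = np.random.default_rng(0)
n_rec, T = 400, 256
gauss = rng.normal(size=(n_rec, T))
square = gauss**2 - 1.0                                # zero-mean "square" (non-gaussian) noise
print(np.abs(bispectrum_estimate(gauss)).mean())       # small: estimation noise only, the bi-spectrum vanishes
print(np.abs(bispectrum_estimate(square)).mean())      # of order the third cumulant (8 for this choice)
```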
we accomplish this by adapting the dd noise spectroscopy protocol proposed in for gaussian noise . this protocol relies on repetitions of identical _ base sequences _ , whose duration ( `` cycle time '' ) we shall denote by . following , the effect of a base control sequence in the frequency domain is characterized by a single _ fundamental ff _ , . if , direct calculation shows that repetitions of yield ^m}^{(k ) } \!=&\!\!\!\int_{\mathbb{r}^{k-\!1}}\!\!\!\!\!\!\!d\vec{\omega}_{k-\!1}\!\ ! \prod_{j=1}^{k-\!1}\!\!f_p(\omega_j)\frac{\text{sin}(m\omega_jt/2)}{\text{sin}(\omega_jt/2)}\notag \\&\times f_p(-|\vec{\omega}_{k-\!1}|)\frac{\text{sin}(m|\vec{\omega}_{k-\!1}|t/2)}{\text{sin}(|\vec{\omega}_{k-\!1}|t/2)}\frac{s_{k-\!1}(\vec{\omega}_{k-\!1})}{(2\pi)^{k-\!1 } } , \label{eq::ftcumulantm}\end{aligned}\ ] ] the key to extending the protocol in beyond gaussian noise ( ) is to realize that _ repetition produces a multi - dimensional frequency comb for all orders _ , namely , \frac{\text{sin}(m|\vec{\omega}_{k-1}|t/2)}{\text{sin}(|\vec{\omega}_{k-1}|t/2 ) } \label{eq::hypercomb } \\&\approx\!m\prod_{j=1}^{k-1}\!\!\big[\frac{2\pi}{t}\!\!\sum_{n_j\!=-\!\infty}^{\infty}\!\!\!\delta\big(\omega_{j}\!-\!\frac{2\pi n_j}{t}\big)\big]\notag , \quad m \gg1 , \forall k , \end{aligned}\ ] ] provided that in eq . ( [ eq::ftcumulantm ] ) is a smooth function . thanks to the `` hyper - comb '' in eq . ( [ eq::hypercomb ] ) , obtaining the polyspectra becomes an inverse problem . substituting eq . ( [ eq::hypercomb ] ) into eq . ( [ eq::ftcumulantm ] ) produces a linear equation that couples the polyspectra and the ffs evaluated at the harmonic frequencies , ^m}^{(k)}\!\!=\!\!\!\!\!\!\!\!\!\sum_{\vec{h}_{k-\!1}\in { \cal h}_{k-\!1}}\!\!\!\!\!\!\!\!\frac{m}{t^{k-\!1}}\!\!\prod_{j=1}^{k-1}\!\!f_p(h_j ) f_p(-|\vec{h}_{k-\ ! 1}|)s_{k-\!1}(\vec{h}_{k-\!1 } ) \label{eq::ftlinear}.\end{aligned}\ ] ] to obtain a finite linear equation , we need to truncate the above sum to a finite set . with no prior knowledge of the noise , it suffices to consider in the principal domain of the polyspectrum . truncating the expression in eq . ( [ eq::ftlinear ] ) enables us to relate the sampled polyspectra to experimentally observable dynamical quantities : ^m } \approx&\sum_{\ell=1}^\infty\frac{(-1)^\ell m}{(2\ell)!\,t^{2\ell-\!1}}\hspace*{-3 mm } \sum_{\vec{h}_{2\ell-\!1}\in\omega_{2\ell-\!1}}\!\!\!\!\!m_{2\ell-\!1}(\vec{h}_{2\ell-\!1 } ) \nonumber \\ & \times\prod_{j=1}^{2\ell-\!1}\!\!f_p(h_j ) f_p(-|\vec{h}_{2\ell-\!1}|)s_{2\ell-1}(\vec{h}_{2\ell-\!1 } ) , \label{eq::chim } \\ \phi_{[p]^m } \approx&\sum_{\ell=1}^\infty\frac{(-1)^\ell m}{(2\ell+1)!\,t^{2\ell}}\sum_{\vec{h}_{2\ell}\in\omega_{2\ell}}m_{2\ell}(\vec{h}_{2\ell } ) \nonumber \\ & \times\prod_{j=1}^{2\ell}f_p(h_j ) f_p(-|\vec{h}_{2\ell}|)s_{2\ell}(\vec{h}_{2\ell } ) , \label{eq::phim}\end{aligned}\ ] ] where the multiplicity accounts for the symmetry of the polyspectrum . whenever the contributions from high - order multi - point correlation functions are negligible ( e.g. , for sufficiently small evolution time and/or noise strength ) , the cumulant expansion in eqs . ( [ eq::chim])-([eq::phim ] ) may be truncated at a finite . if terms remain after truncation , experimentally measuring ^m} ] ) for at least control sequences creates a system of linear equations , that can be inverted to obtain the odd ( even ) polyspectra up to order ( ) . _ base sequence construction. 
_ in the original noise spectroscopy protocol of , a _ fixed _ base sequence is used ( cpmg , after carr , purcell , meiboom , and gill ) , with cycle times varying from to , where and is the minimum time between pulses . while this produces a well - conditioned linear inversion , both the number of distinct control sequences and the range of spectral reconstruction are limited in particular , for a minimum allowed . the use of a fixed dd sequence has an additional disadvantage : cpmg refocuses static noise ( , hence the `` filtering order '' is non - zero ) , precluding reconstruction at any point in frequency space containing a zero , a substantial information loss for higher - dimensional polyspectra . non - gaussian noise spectroscopy demands a _ large number of sequences with spectrally distinct ffs _ , including some with zero filtering order . we generate a family of base sequences satisfying these requirements by using different orders of concatenated dd , cdd : namely , not only cpmg ( ) , but also durations of free evolution ( ) , up to maximum dd order . the presence of free evolution permits sequences with zero filtering order , enabling the polyspectra to be reconstructed at points containing a zero . specifically , let a _ fixed _ cycle time be expressed in terms of a minimum _ time resolution _ , , where . while all pulse times will be integer multiples of , and are two independent constraints a priori , with in typical settings . if is an integer partition of , we place a cdd sequence into the interval , of duration , subject to the condition that no two pulses are separated by less than . as shown in the supplement , the range of spectral reconstruction is bounded by . a high resolution ( small ) is key to generate sequences with _ incommensurate _ periodicities , making it possible to achieve spectral reconstruction over an extended range . repetitions of control sequences with and . for our protocol , we employ 25 base sequences assembled from cdd , . the spectrum is a sum of lorentzians peaks , ^ 2 } + { w_2}/\{1 + [ 8\,(\text{sign}(\omega)\omega - d)/\omega_c)^2]\} ] , which we obtain numerically for each control sequence . in addition to accurately reconstructing the larger peaks over the expanded range , our protocol successfully resolves the small peak at , thanks to inclusion of control sequences with zero filtering order . with different degrees of gaussianity [ see text ] , and same spectrum for as in fig . [ fig : classicalbispectrum ] . curves are ordered according to decreasing degree of non - gaussianity : ( blue solid ) , ( red dashes ) and ( green dots ) . for the fully non - gaussian case , we have used the reconstructed spectrum and bi - spectrum [ fig . [ fig : classicalbispectrum ] ] to predict the qubit decay and phase evolution ( blue asterisks ) , showing excellent agreement with the theoretical evolution , computed up to the fifth - order noise cumulant . ] _ non - gaussian spectral reconstructions. _ we now return to our main goal , namely characterizing non - gaussian polyspectra . as a first example , we consider a classical square noise " process arising from a quadratic coupling to a gaussian source , as encountered in superconducting qubits operating at an optimal working point . that is , , where is a zero - mean gaussian process , and ] . here , the relevant principal domain is an octant bounded by and . reconstructing 35 points in enables us to obtain at 325 points in . representative results for the actual vs. 
reconstructed bi - spectrum at are shown in fig . [ fig : classicalbispectrum](a)-(b ) . the relative error in [ fig : classicalbispectrum](c ) indicates very good agreement at the interior points , but larger error in the tails . because there is minimal spectral concentration in the tails , however , this error has little effect on the qubit dynamics . as fig . [ fig::fid ] shows , excellent agreement is found between the theoretical phase evolution and the one predicted by the reconstructed bi - spectrum . extending quantum noise spectroscopy methods to quantum environments entails qualitatively new challenges because non - gaussian statistics ensues now from the _ combined effect of the bath operators and the initial bath state _ , and no general characterization of quantum polyspectra ( and their smoothness properties ) is available to the best of our knowledge . we take a first step in this direction by focusing on linearly coupled spin - boson environments , in which case and , where are canonical bosonic operators and both have units of ( angular ) frequency . for a general quantum bath , the noise is stationary if and only if =0 ] . is the quantity relevant to the qubit dynamics . as shown in , the tri - spectrum for any _ non - separable _ , stationary initial bath state has the form . ] and inverting the appropriate system of linear equations . to test the accuracy of our reconstructions , we again predict the dynamics of the qubit under free evolution . as shown in fig . [ fig::sb](a ) , taking into account the non - gaussianity of the noise by reconstructing both the effective spectrum and tri - spectrum improves the prediction by almost an order of magnitude in time . because the non - gaussian prediction only uses spectral quantities associated with the second and fourth cumulants , however , it fails when the fourth cumulant becomes comparable in size to the second , indicating that the higher - order cumulants can no longer be neglected ( see also fig . [ fig::sb](b ) ) . ( b ) , reconstructed effective tri - spectrum ( c ) . the non - gaussian initial bath state is , where , are thermal states at temperatures , . ohmic spectral density is assumed , with nhz , khz . the curves in ( a ) represent theoretical decay ( black solid ) , decay predicted by reconstructing and ( grey asterisks ) , and decay predicted by approximating the noise as gaussian and reconstructing only ( teal squares ) . the non - gaussian prediction in ( b ) fails when and become comparable . all reconstructions used repetitions of 21 base sequences composed of cdd , with , . ] _ conclusion. _ we introduced control protocols for characterizing the high - order spectra associated with non - gaussian dephasing on a qubit probe coupled to a classical or a quantum bosonic environment . our approach overcomes limitations of existing protocols , allowing in particular for spectral reconstruction over an extended bandwidth , which is of independent interest for quantum sensing applications . our work also points to the need for a deeper understanding of high - order quantum noise spectra beginning from more complex dephasing settings described by non - linear spin - boson models or spin baths . we expect implementation of our protocols to be within reach for various device technologies , in particular transmon or flux qubits , where they may help shed light onto the microscopic origin of the noise itself . we thank m. biercuk , w. oliver , s. gustavsson , and f. yan for valuable discussions . 
work at dartmouth was supported from the us aro under contract no . w911nf-14 - 1 - 0682 and the constance and walter burke special projects fund in quantum information science . gaps acknowledges support from the arc centre of excellence grant no . ce110001027 .
the second law of newtonian mechanics states that if is the force acting on a point particle of mass and is its acceleration , then . in a sense ,the physical meaning of this expression lies in its tacit assumptions , namely that forces are vectors , that is , elements of a vector space , and as such they sum .this experimental fact embodied in the second law is what prevent us from considering the previous identity as a mere definition of _force_. coming to the study of the rigid body , one can deduce the first cardinal equation of mechanics , where is the affine point of the center of mass , is the total mass and is the resultant of the external applied forces .this equation does not fix the dynamical evolution of the body , indeed one need to add the second cardinal equation of mechanics , where and , are respectively , the total angular momentum and the total mechanical momentum with respect to an arbitrary fixed point . naively adding the applied forces might result in an incorrect calculation of .as it is well known , one must take into account the line of action of each force in order to determine the _ central axis _ , namely the locus of allowed application points of the resultant .these considerations show that applied forces do not really form a vector space .this unfortunate circumstance can be amended considering , in place of the force , the field of mechanical momenta that it determines ( the so called _ dynamical screw _ ) .these type of fields are constrained by the law which establishes the change of the mechanical momenta under change of pole an analogy between momenta and velocities , and between force resultant and angular velocity is apparent considering the so called _ fundamental formula of the rigid body _ , namely a constraint which characterizes the velocity vector field of the rigid body the correspondence can be pushed forward for instance by noting that the concept of _ instantaneous axis of rotation _ is analogous to that of _ central axis_. screw theory explores these analogies in a systematic way and relates them to the lie group of rigid motions on the euclidean space .perhaps , one of the most interesting consequences of screw theory is that it allows us to fully understand that angular velocities should be treated as vectors applied to the instantaneous axis of rotation , rather than as free vectors .this fact is not at al obvious .let us recall that the angular velocity is defined through poisson theorem , which states that , given a frame moving with respect to an absolute frame , any normalized vector which is fixed with respect to satisfies in the original frame , where is unique .the uniqueness allows us to unambiguously define as the angular velocity of with respect to .as the vectors are free , their application point is not fixed and so , according to this traditional definition , is not given an application point .this fact seems close to intuition .indeed , let us consider foucault s 1851 famous experiment performed at the paris observatory .by using a pendulum he was able to prove that the earth rotates with an angular velocity which coincides with that inferred from the observation of distant stars .of course , the choice of paris was not essential , and the measurement would have returned the same value for the angular velocity were it performed in any other place on earth . 
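to make the screw of applied forces concrete , the following minimal numpy sketch ( an illustration with hypothetical input arrays ) reduces a system of forces applied at given points to its resultant , the field of mechanical momenta it generates , and the central axis with its pitch ; applied verbatim to angular velocities it would instead return the instantaneous axis of rotation .

```python
import numpy as np

def reduce_to_screw(forces, points):
    """reduce forces f_i applied at points r_i (arrays of shape (n, 3)) to a screw:
    resultant r, total momentum m_o about the origin, a point on the central axis
    and the pitch."""
    r = forces.sum(axis=0)                            # resultant
    m_o = np.cross(points, forces).sum(axis=0)        # total momentum about the origin
    r2 = float(np.dot(r, r))
    if r2 == 0.0:                                     # pure couple: the momentum field is constant
        return r, m_o, None, None
    axis_point = np.cross(r, m_o) / r2                # point of the central axis closest to the origin
    pitch = np.dot(r, m_o) / r2                       # momentum parallel to r, per unit resultant
    return r, m_o, axis_point, pitch

def momentum_about(pole, r, m_o):
    """change of pole: m(p) = m(o) + r x (p - o), here with o taken as the origin."""
    return m_o + np.cross(r, pole)
```

on the central axis the momentum field reduces to its component parallel to the resultant , which is the defining property used above .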
in fact , the reason for assigning to the angular velocity an application point in the instantaneous axis of rotation becomes clear only in very special applications , and in particular when the composition of rigid motions is considered .this fact will be fully justified in section [ njc ] .here we just wish to illustrate how , using the analogy between forces and angular velocities , it is possible to solve non - trivial problems on the composition of motions .consider , for instance , four frames , , where is the absolute frame and , , moves with respect to with an angular velocity .let us suppose that at the given instant of interest , and for , the instantaneous axes of rotation of as seen from , lie all in the same plane as shown in figure [ fun ] .we can apply the well known rules of statics , for instance using the funicular polygon , to obtain the angular velocity and the instantaneous axis of rotation of with respect to .motion with respect to by using the funicular polygon method .this method was originally developed for finding the central axis in problems of statics.,width=340 ] it is also interesting to observe if a frame rotates with angular velocity with respect to , and rotates with angular velocity with respect to , and if the two instantaneous axes of rotation are parallel and separated by an arm of length , then , at the given instant , translates with velocity in a direction perpendicular to the plane determined by the two axes . as a consequence, any act of translation can be reduced to a composition of acts of pure rotation .this result is analogous to the usual observation that two opposite forces and with arm generate a constant mechanical momenta of magnitude and direction perpendicular to the plane determined by the two forces . as a consequence, any applied momenta can be seen as the effect of a couple of forces .of course , screw theory has other interesting consequences and advantages .we invite the reader to discover and explore them in the following sections .the key ideas leading to screw theory included in this article have been taught at a second year undergraduate course of `` rational mechanics '' at the faculty of engineer of florence university ( saved for the last technical section ) .we shaped this text so as to be used by our students for self study and by any other scholars who might want to introduce screw theory in an undergraduate course .indeed , we believe that it is time to introduce this beautiful approach to mechanics already at the level of undergraduate university programs .screw theory is venerable ( for an account of the early history see ) .it originated from the works of euler , mozzi and chasles , who discovered that any rigid motion can be obtained as a rotation followed by a translation in the direction of the rotation s axis ( this is the celebrated chasles s theorem which was actually first obtained by giulio mozzi ) , and by poinsot , chasles and mbius , who noted the analogy between forces and angular velocities for the first time .it was developed and reviewed by sir r. 
ball in his 1870 treatise , and further developed , especially in connection with its algebraic formulation , by clifford , kotelnikov , zeylinger , study and others .unfortunately , by the end of the nineteenth century it was essentially forgotten to be then fully rediscovered only in the second half of the twentieth century .it remains largely unknown and keeps being rediscovered by various authors interested in rigid body mechanics ( including this author ) .unfortunately , screw theory is usually explained following descriptive definitions rather than short axiomatic lines of reasoning . as a result , the available introductions are somewhat unsatisfactory to the modern mathematical and physical minded reader . perhaps for this reason , some authors among the few that are aware of the existence of screw theory claim that it is too complicated to deserve to be taught .for instance , the last edition of goldstein s textbook includes a footnote which , after introducing the full version of chasles theorem ( sect . [ mvj ] ) , which might be regarded as the starting point of screw theory , comments _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [ there seems to be little present use for this version of chasles theorem , nor for the elaborate mathematics of screw motions elaborated at the end of the nineteenth century ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ were it written in the fifties of the last century this claim could have been shared , but further on screw theory has become a main tool for robotics where it is ordinarily used .furthermore , while elaborate the mathematics of screw theory simplifies the development of mechanics .admittedly , however , some people could be dissatisfied with available treatments and so its main advantages can be underestimated .we offer here a shorter introduction which , hopefully , could convince these readers of taking a route into screw theory .let us comment on some definitions of screw that can be found in the literature , so as to justify our choices .a first approach , that this author does not find appealing , introduces the screw by means of the concept of _ motor_.this formalism depends on the point of reduction , and one finds the added difficulty of proving the independence of the various deductions from the chosen reduction point .it hides the geometrical content of the screw and makes proofs lengthier .nevertheless , it must be said that the motor approach could be convenient for reducing screw calculations to a matter of algebra ( the so called screw calculus ) . 
in a similar vein ,some references , including selig s , introduce the screw from a matrix formulation that tacitly assumes that a choice of reference frame has been made ( thus losing the invariance at sight of the definition ) .still concerning the screw definition , some literature follows the practical and traditional approach which introduces the screw from its properties ( screw axis , pitch , etc . ) , like in old fashioned linear algebra where one would have defined a vector from its direction , verse and module , instead of defining it as an element of a vector space ( to complicate matters , some authors define the screw up to a positive constant , in other words they work with a projective space rather that a vector space ) .this approach could be more intuitive but might also give a false confidence of understanding , and it is less suited for a formal development of the theory .it is clear that the vector space approach in linear algebra , while less intuitive at first , proves to be much more powerful than any descriptive approach .of course , one has to complement it with the descriptive point of view in order to help the intuition . in my opinion the same type of strategy should be followed in screw theory , with a maybe more formal introduction , giving a solid basis , aided by examples to help the intuition .since descriptive intuitive approaches are not lacking in the literature , this work aims at giving a short introduction of more abstract and geometrical type .it should be said that at places there is an excess of formality in the available presentations of screw theory .i refer to the tendency of giving separated definitions of screws , one for the kinematical _ twist _ describing the velocity field of the rigid body , and the other for the _ wrench _ describing the forces acting on the body .this type of approach , requiring definitions for screws and their dual elements ( sometimes called co - screws ) , lengthens the presentation and forces the introduction and use of the dual space of a vector space , a choice which is not so popular especially for undergraduate teaching . who adopts this point of view argues that it should also be adopted for forces in mechanics , which should be treated as 1-forms instead as vectors .this suggestion , inspired by the concept of conjugate momenta of lagrangian and hamiltonian theory , sounds more modern , but would be geometrically well founded only if one could develop mechanics without any mention to the scalar product .the scalar product allows us to identify a vector space with its dual and hence to work only with the former .if what really matters is the pairing between a vector space and its dual then , as this makes sense even without scalar product , we could dispense of it .it is easy to realize that in order to develop mechanics we need a vector space ( and/or its dual ) as well as a scalar product and an orientation ( although most physical combinations of interest might be rewritten so as a to get rid of it , e.g. the kinetic energy is ] of two screws . 
in this sectionwe check that this commutator is itself a screw and calculate its resultant .[ mjc ] the lie bracket ] which , using eq .( [ kgu ] ) becomes ^i ] reads \rangle = s_3(p)\cdot ( { \bm{s}}_1\times { \bm{s}}_2)+s_2(p)\cdot({\bm{s}}_3\times { \bm{s}}_1)+s_1(p)\cdot({\bm{s}}_2\times { \bm{s}}_3),\ ] ] is independent of , and does not change under cyclic permutations of its terms .we use eq .( [ jfp ] ) \rangle & = { \bm{s}}_1\cdot[{\bm{s}}_2\times s_3(p)-{\bm{s}}_3\times s_2(p)]+s_1(p)\cdot(-{\bm{s}}_3\times { \bm{s}}_2 ) \\ & = s_3(p)\cdot ( { \bm{s}}_1\times { \bm{s}}_2)+s_2(p)\cdot({\bm{s}}_3\times { \bm{s}}_1)+s_1(p)\cdot({\bm{s}}_2\times { \bm{s}}_3).\end{aligned}\ ] ] this expression changes sign under exchange of and , thus we obtain the desired conclusion . given a screw it is possible to construct the linear map which is an element of the dual space .the linear map sends every screw to zero ( namely , it is the null map ) , if and only if . if is such that , then the scalar product with the free screw , shows that , a contradiction .if is a constant screw with vector invariant , then the screw scalar product with the applied screw , where is some point , gives , hence and thus is the null screw .we have shown that the linear map is injective .we wish to show that is surjective , namely any element of the dual vector space , can be regarded as the scalar product with some screw .we could deduce this fact using the injectivity and the equal finite dimensionality of and , but we shall proceed in a more detailed way which will allow us to introduce a useful basis for the space of screws and its dual .let us choose , and let be a positive oriented orthonormal base for , where denotes the orientation .namely , assume that we have made a choice of reference frame .the six screws , , , generate the whole space .indeed , if is a screw and is its motor at , , , then ] .clearly , and if are screws , the jacobi identity for the lie bracket of vector fields +[z,[x , y]]+[y,[z , x]]=0 ] , which preserves the distances between points , i.e. for every , ] .let be the above ( left ) group action on so that .each point induces an _ orbit map _ given by , thus .we are interested on at , so that .if then , gives , for every , a vector field on which is the image of the lie algebra element .such vector fields on are called _ fundamental vector fields_. let us consider the exponential which is obtained by the integration of the vector field from .the orbit passing through is obtained from the integration of the vector field starting from .in other words , the 1-parameter group of rigid maps coincides with the 1-parameter group of diffeomorphisms ( which are rigid maps ) generated by the vector field .conversely , every such 1-parameter group of rigid maps determines a lie algebra element .since every screw element , once integrated , gives a non - trivial 1-parameter group of rigid maps , every screw is the fundamental vector field of some lie algebra element .that is , the map is surjective . but and are two vector spaces of the same dimensionality , thus this map is also injective . 
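a small numerical check of the statement that the commutator of two screws is again a screw , representing each screw by its motor at the origin in the standard 4x4 matrix form ( this matrix - commutator convention may differ from the vector - field bracket used in the text by an overall sign ) :

```python
import numpy as np

def hat(w):
    """3x3 antisymmetric matrix with hat(w) @ x = w x x."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def as_matrix(omega, s_o):
    """4x4 representative of the screw with resultant omega and value s_o at the origin."""
    m = np.zeros((4, 4))
    m[:3, :3] = hat(omega)
    m[:3, 3] = s_o
    return m

rng = np.random.default_rng(0)
w1, s1, w2, s2 = (rng.standard_normal(3) for _ in range(4))
c = as_matrix(w1, s1) @ as_matrix(w2, s2) - as_matrix(w2, s2) @ as_matrix(w1, s1)
w12, s12 = np.array([c[2, 1], c[0, 2], c[1, 0]]), c[:3, 3]

# the commutator is again a screw: its resultant is w1 x w2 and its value at the
# origin is w1 x s2 - w2 x s1 (with this sign convention)
assert np.allclose(w12, np.cross(w1, w2))
assert np.allclose(s12, np.cross(w1, s2) - np.cross(w2, s1))
```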
in summary ,the screws are the representation on of the elements of .it is particularly convenient to study the lie algebra of through their representative vector fields on , indeed many features , such as the existence of a screw axis for each lie algebra element , become very clear .we can now use several results from the study of lie groups and their actions on manifolds .a central result is that the bijective map is linear and sends the lie bracket to the commutator of vector fields on . in some cases the exponential map from the lie algebra to the lie group is surjective ( which is not always true as the example of shows ) .this is the case of the group ( see ( * ? ? ?2.9 ) ) thus , since every element of the lie algebra corresponds to a screw , and the exponential map corresponds to the rigid map obtained from the integration of the screw vector field by a parameter 1 , the surjectivity of the exponential map implies that every rigid map can be accomplished as the result of the integration along a screw , or , which is the same , by a suitable rotation along an axis combined with the translation along the same axis .this is the celebrated chasles theorem ) reformulated and reobtained in the lie algebra language .let us recall that if , then the expression $ ] defines a linear map called _adjoint endomorphism_. the trace of the composition of two such endomorphisms defines a symmetric bilinear form called _ killing form _ on .the killing form is a special _ invariant bilinear form _ on , namely it is bilinear and satisfies ( use eq .( [ bjz ] ) ) proposition [ mns ] shows that the scalar product of screws provides an invariant symmetric bilinear form on the lie algebra .we wish to establish if there is any connection with the killing form .as done in section [ gvo ] let us introduce an origin and use the isomorphism of with .if are represented by and , then the screw scalar product is according to the result of section [ gvo ] we have we conclude that the killing form is an invariant bilinear form which is distinct from the screw scalar product .it coincides with the killing form of the lie group of rotations alone and thus , it does not involve the translational information inside the -terms .therefore , the screw scalar product provides a new interesting invariant bilinear form , which is sometimes referred to as the _ klein form _ of . 
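the difference between the two bilinear forms is easy to see numerically ; the sketch below computes the screw scalar product ( klein form ) from the motors taken at a common pole , and the killing form from the 6x6 adjoint matrices . with the canonical normalization the killing form equals -4 times the scalar product of the two resultants , so it is indeed blind to the translational parts :

```python
import numpy as np

def hat(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def klein(w1, s1, w2, s2):
    """screw scalar product of two screws given by their motors (w, s) at the same pole."""
    return np.dot(w1, s2) + np.dot(w2, s1)

def adjoint(w, s):
    """6x6 adjoint matrix of the lie algebra element represented by the motor (w, s)."""
    a = np.zeros((6, 6))
    a[:3, :3] = hat(w)
    a[3:, 3:] = hat(w)
    a[3:, :3] = hat(s)
    return a

def killing(w1, s1, w2, s2):
    """killing form: trace of the composition of the two adjoint maps."""
    return np.trace(adjoint(w1, s1) @ adjoint(w2, s2))

rng = np.random.default_rng(1)
w1, s1, w2, s2 = (rng.standard_normal(3) for _ in range(4))
# the killing form only sees the rotational parts, the klein form mixes both
assert np.isclose(killing(w1, s1, w2, s2), -4.0 * np.dot(w1, w2))
```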
from eq .( [ lmu ] ) we find that is represented by from which , using the symmetry properties of the mixed product , we can check again that the screw scalar product is invariant \\ & + [ ( { \bm{y } } \times { \bm{z}})\cdot { \bm{x}}^{o}+({\bm{y}}^{o}\times { \bm{z}})\cdot { \bm{x } } + ( { \bm{y } } \times { \bm{z}}^{o})\cdot { \bm{x}}]=0.\end{aligned}\ ] ]screw theory , although venerable , has found some difficulties in affirming itself in the curricula of the physicist and the mechanical engineer .this has changed in the last decades , when screw theory has finally found application in robotics , where its ability to deal with the composition of rigid motions has proved to be much superior with respect to treatments based on euler coordinates .we have given here a short introduction to screw theory which can provide a good starting point to a full self study of the subject .we started from a coordinate independent definition of screw and we went to introduce the concepts of screw axis , screw scalar product and screw commutator .we introduced the dual space and showed that any frame induces an isomorphism on which might be used to perform calculations .we then went to consider kinematical and dynamical examples of screws , reformulating the cardinal equations of mechanics in this language .particularly important was the application of the screw scalar product in the expressions for the kinetic energy and power , in fact the virtual work ( power ) is crucial in the formulation of lagrangian mechanics .in this connection , we mentioned the importance of reciprocal screws .finally , we showed that the space of screws is nothing but the lie algebra , and that the screw scalar product is the klein form .philosophically speaking , screw theory clarifies that the most natural basic dynamical action is not the force , but rather the force aligned with a mechanical momenta ( remark [ nhs ] ) . in teachingwe might illustrate the former action with a pushing finger and the latter action with a kind of pushing hand .analogously , the basic kinematical action is not given by the act of pure rotation ( or translation ) but by that of rotation aligned with translation .again , for illustration purposes this type of motion can be represented with that of a ( real ) screw .clearly , in our introduction we had to omit some arguments .for instance , we did not present neither the cylindroid nor the calculus of screws . nevertheless , the arguments that we touched were covered in full generality , emphasizing the geometrical foundations of screw theory and its connection with the lie group of rigid maps .we hope that this work will promote screw theory providing an easily accessible presentation to its key ideas .i thank d. zlatanov and and m. zoppi for suggesting some of the cited references . this work has been partially supported by gnfm of indam .
this work introduces screw theory , a venerable but little known theory aimed at describing rigid body dynamics . this formulation of mechanics unifies in the concept of screw the translational and rotational degrees of freedom of the body . it captures a remarkable mathematical analogy between mechanical momenta and linear velocities , and between forces and angular velocities . for instance , it clarifies that angular velocities should be treated as applied vectors and that , under the composition of motions , they sum according to the same rules as applied forces . this work provides a short and rigorous introduction to screw theory intended for an undergraduate and general readership . keywords : rigid body , screw theory , rotation axis , central axis , twist , wrench . msc : 70e55 , 70e60 , 70e99 .
while conventional causal inference methods use conditional independences to infer a directed acyclic graph of causal relations among at least three random variables , there is a whole family of recent methods that employ more information from the joint distribution than just conditional independences .therefore , these methods can even be used for inferring the causal relation between just two observed variables ( i.e. , the task to infer whether causes or causes , given that there is no common cause and exactly one of the alternatives is true , becomes solvable ) .as theoretical basis for such inference rules , postulate the following asymmetry between cause and effect : if causes then and represent independent mechanisms of nature and therefore contain no information about each other . here ,`` information '' is understood in the sense of description length , i.e. , knowing provides no shorter description of and vice versa , if description length is identified with kolmogorov complexity .this makes the criterion empirically undecidable because kolmogorov complexity is uncomputable . pointed out that `` information '' can also be understood in terms of predictability and used this to formulate the following hypothesis : semi - supervised learning ( ssl ) is only possible from the effect to the cause but not visa versa .this is because , if causes , knowing the distribution may help in better predicting from since it may contain information about , but can not help in better predicting from .information - geometric causal inference ( igci ) has been proposed for inferring the causal direction between just two variables and . in its original formulationit applies only to the case where and are related by an invertible functional relation , i.e. , and , but some positive empirical results have also been reported for noisy relations .we will also restrict our attention to the noiseless case .this is because attempts to generalize the theory to non - deterministic relations are only preliminary .moreover , the deterministic toy model nicely shows _ what kind _ of dependences between and occur while the dependences in the non - deterministic case are not yet well understood .we first rephrase how igci has been introduced in the literature and then explain our new interpretations .they also provide a better intuition about the relation to ssl . for didactic reasonswe restrict the attention to the case where is a monotonously increasing diffeomorphism of ] .then , the difference between the left and the right hand side of ( [ uncorr ] ) is the covariance of and with respect to the uniform distribution .the intuition is that it is unlikely , if and are chosen independently , that regions where the slope of is large ( i.e. 
large ) , meet regions where is large and others where is small .simple calculations show that ( [ uncorr ] ) implies that is positively correlated with the slope of since with equality iff .this is illustrated in figure [ fig : corr ] a ) .a ) ( taken from ) if the fluctuations of do nt correlate with the slope of the functions , regions of high density tend to occur more often in regions where is flat .b ) maps the cube to itself .the regions of points with large ( here : the leftmost sphere ) can only be a small fraction of the cube.,title="fig:",scaledwidth=30.0% ] a ) ( taken from ) if the fluctuations of do nt correlate with the slope of the functions , regions of high density tend to occur more often in regions where is flat .b ) maps the cube to itself .the regions of points with large ( here : the leftmost sphere ) can only be a small fraction of the cube.,title="fig:",scaledwidth=33.0% ] moreover , using , eq .( [ uncorr ] ) implies using we get with equality only for . for empirical data , with ( and hence ) , this suggests the following inference method : [ def : igci ] infer whenever some robustness of igci with respect to adding noise has been reported when the following modification is used : on the left hand side of ( [ empigci ] ) the -tuples are ordered such that , while the right hand side assumes .note that in the noisy case , the two conditions are not equivalent .moreover , the left hand side of ( [ empigci ] ) is no longer minus the right hand side since eq .( [ minus ] ) no longer makes sense .albeit hard to formulate explicitly , it is intuitive to consider the left hand side as measuring `` non - smoothness '' of and the right hand side the one of . then , the causal direction is the one with the smoother conditional . to describe the information theoretic content of ( [ uncorr ] ) and ( [ pc ] ), we introduce the uniform distributions and for and , respectively .their images under and are given gy the probability densities and , respectively .we will drop the superscripts and whenever the functions they refer to are clear . then( [ uncorr ] ) reads and is equivalent to the following additivity of relative entropies : likewise , ( [ pc ] ) reads in the terminology of information geometry , ( [ orth1 ] ) means that the vector connecting and is orthogonal to the one connecting and .thus the `` independence '' between and has been phrased in terms of orthogonality , where is represented by .likewise , the dependence between and corresponds to the fact that the vector connecting and is not orthogonal to the one connecting and .the information - theoretic formulation motivates why one postulates uncorrelatedness of and instead of one between and itself .a further advantage of this reformulation is that and can then be replaced with other `` reference measures '' , e.g. , gaussians with the same variance and mean as and , respectively ( which is more appropriate for variables with unbounded domains ) .however , both conditions ( [ uncorr ] ) and ( [ orth1 ] ) are quite abstract .therefore , we want to approach igci from completely different directions . in section [ sec : untyp ] we will argue that a large positive value for shows that the observed -tuple is untypical in the space of all possible -tuples . in section [ sec : numb ] we show that condition ( [ empigci ] ) implies that there are , in a sense , more functions from to than vice versa . 
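before doing so , we note that the empirical rule ( [ empigci ] ) is straightforward to implement ; a minimal sketch , which rescales both variables to the unit interval and compares the two directions , reads :

```python
import numpy as np

def igci_score(x, y):
    """empirical mean log-slope of the map from x to y, after rescaling both
    variables to [0, 1] (the uniform reference measure) and sorting by x."""
    x = (x - x.min()) / (x.max() - x.min())
    y = (y - y.min()) / (y.max() - y.min())
    order = np.argsort(x)
    dx, dy = np.diff(x[order]), np.diff(y[order])
    keep = (dx > 0) & (dy != 0)
    return np.mean(np.log(np.abs(dy[keep]) / dx[keep]))

def igci_direction(x, y):
    """infer 'x causes y' when the x-to-y score is the smaller one."""
    return 'x -> y' if igci_score(x, y) < igci_score(y, x) else 'y -> x'
```

in the noiseless invertible case the two scores are exact negatives of each other , so comparing them is equivalent to the criterion of definition [ def : igci ] ; the comparison form is the variant reported to be more robust to noise .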
in section [ sec : ssl ] we explain why the correlation between distribution and slope that occurs in the anticausal direction helps for unsupervised and semi - supervised learning .let us consider again a monotonously increasing diffeomorphism \rightarrow [ 0,1] ] can have a `` typical '' or an `` untypical '' position relative to .consider the function shown in figure [ fig : one_point ] a ) .the point is untypical because it meets in a region whose slope is larger than for the majority of points . of course , can also be untypical in the sense that the slope of is smaller than for the majority of points , see figure [ fig : one_point ] b ). there is , however , an important asymmetry between large slope and small slope : if the slope at is significantly higher than the _ average _ slope over the entire domain , then is necessarily untypical because the slope can significantly exceed the average only for a small fraction of points .if the slope is significantly _ below _ the average , this does not mean that the point is untypical because this may even be the case for most of the points , as one can easily see on figure [ fig : one_point ] a ) .this asymmetry is known from statistics : a non - negative random variable may quite often attain values that are smaller than their expected value by orders of magnitude , but exceeding the expectation by a large factor is unlikely due to the markov inequality .( a ) has an untypical position in both cases because it meets in a small region with large slope or in a region with small slope ( b ) .c ) function on a grid .the large dots denote the observed points , the small ones visualize one option of interpolating by a monotonic function that crosses and .,title="fig:",scaledwidth=30.0% ] ( a ) has an untypical position in both cases because it meets in a small region with large slope or in a region with small slope ( b ) .c ) function on a grid .the large dots denote the observed points , the small ones visualize one option of interpolating by a monotonic function that crosses and .,title="fig:",scaledwidth=30.0% ] ( a ) has an untypical position in both cases because it meets in a small region with large slope or in a region with small slope ( b ) .c ) function on a grid .the large dots denote the observed points , the small ones visualize one option of interpolating by a monotonic function that crosses and .,title="fig:",scaledwidth=30.0% ] the above idea straightforwardly generalizes to mappings between multi - dimensional spaces : then a point can be untypical relative to a function in the sense that the jacobian of is significantly larger than the average .this is , for instance , the case for the points in the leftmost sphere of figure [ fig : corr ] b ) .we first introduce `` untypical '' within a general measure theoretic setting : [ thm : unt ] let and be probability distributions on measure spaces and , respectively .let be measurable and let the image of under have a strictly positive density with respect to .then , points for which are unlikely ( `` untypical '' ) in the sense that for all .[ cor : hyper ] let ^n \rightarrow [ 0,1]^n ] for and and assume that all are taken from the grid as in figure [ fig : one_point ] c ) .we assume furthermore that and , similarly , and denote these observations by and .let be the set of all monotonic functions for which with .our decision which causal direction is more plausible for the observation will now be based on the following generating models : a function is chosen uniformly at randomly 
from , i.e. , the set of functions from to that pass the points and . then , each with is chosen uniformly at random from .this yields the following distribution on the set of possible observations : likewise , we obtain a distribution for the causal direction given by where denotes the corresponding set of functions from to . + for a general grid , elementary combinatorics shows that the number of monotonic functions from to that pass the corners and is given by therefore , the pair defines grids and is the product of the numbers for each grid .thus , where we have applied rule ( [ comb ] ) to each grid . combining ( [ genxy ] ) , ( [ genyx ] ) , ( [ comb ] ) , and ( [ combcomb ] ) yields we now consider the limit of arbitrarily fine grid , i.e. , ( while keeping the ratios of all and those of all constant ) . then expression ( [ igcid ] )becomes independent of the grid and can be replaced with thus , igci as given by definition [ def : igci ] simply compares the loglikelihoods of the data with respect to the two competing generating models above since ( [ discrigci ] ) coincides with the left hand side of ( [ empigci ] ) after normalizing such that .the above link is intriguing , but the function counting argument required that we discretized the space , leading to finite function classes , and it is not obvious how the analysis should be done in the continuous domain . in statistical learning theory , the core of the theoretical analysis is the following : for consistency of learning , we need uniform convergence of risks over function classes . for finite classes ,uniform convergence follows from the standard law of large numbers , but for infinite classes , the theory builds on the idea that whenever these classes are evaluated on finite samples , they get reduced to finite collections of equivalence classes consisting of functions taking the same values on the given sample . in transductive inference as well as in a recently proposed inference principle referred to as inference with the `` universum , '' the size of such equivalence classes plays a central role .the proposed new view of the igci principle may be linked to this principle .universum inference builds on the availability of additional data that is not from the same distribution as the training data in principle , it might be observed in the future , but we havent seen it yet and it may not make sense for the current classification task . let us call two pattern recognition functions equivalent if they take the same values on the training data . we can measure the size of an equivalence class by how many possible labellings the functions from the class can produce on the universum data .a classifier should then try to correctly separate the training data using a function from a large equivalence class i.e. , a function from a class that allows many possible labellings on the universum data , i.e. , one that does not make a commitment on these points .loosely speaking , the universum is a way to adjust the geometry of the space such that it makes sense for the kind of data that might come up .this is consistent with a paper that linked the universum - svm to a rescaled version of fisher s discriminant . taken to its extreme , it would advocate the view that there may not be any natural scaling or embedding of the data , but data points are only meaningful in how they relate to other data points . 
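returning to the counting argument , the sketch below illustrates it with one concrete convention : on each sub - grid the monotone non - decreasing functions with fixed endpoints are counted by stars - and - bars , and the resulting log - likelihoods of the two directions are compared . the boundary conventions of eqs . ( [ comb ] ) and ( [ combcomb ] ) may differ from this by endpoint terms .

```python
from math import comb, log

def log_monotone_count(width, rise):
    """log of the number of monotone non-decreasing functions on a grid interval of
    'width' steps whose value increases by 'rise' overall (stars and bars)."""
    return log(comb(width + rise - 1, width - 1)) if width > 0 else 0.0

def log_likelihood(xs, ys):
    """log-probability, up to direction-independent terms, of observations (xs, ys) under
    the model 'draw a monotone function uniformly, then draw inputs uniformly', for x -> y.
    xs, ys: strictly increasing integer grid coordinates that include the corner points."""
    through = sum(log_monotone_count(x1 - x0, y1 - y0)
                  for (x0, x1), (y0, y1) in zip(zip(xs, xs[1:]), zip(ys, ys[1:])))
    return through - log_monotone_count(xs[-1] - xs[0], ys[-1] - ys[0])

# compare log_likelihood(xs, ys) with log_likelihood(ys, xs) to decide the direction,
# in the spirit of the ratio of (genxy) to (genyx) above
```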
in our current setting , if we are given a set of universum points in addition to the training set , we use them to provide the discretization of the space .we consider all functions equivalent that interpolate our training points , and then determine the size of the equivalence classes by counting , using the universum points , how many such functions there are .the size of these equivalence classes then determines the causal direction , as described above our analysis works exactly the same no matter whether we have a regular discretization or a discretization by a set of universum points .we now argue that the correlations between and are relevant for prediction in two respects : first , knowing tells us something about , and second , tells us something about . note that section [ sec : untyp ] already describes the first part :assume is large .then , knowing ( and , in addition , a lower bound for ) restricts the set of possible -tuples to a region with small volume .we know explore what tells us about .this scenario is the one in unsupervised and semi - supervised learning ( ssl ) since the distribution of unlabeled points is used to get information about the labels . hypothesized that this is not possible in causal direction , i.e. , if the labels are the effect .in anticausal direction , the labels are the cause and unsupervised learning employs the information that contains about .as opposed to the standard notation in machine learning , where is the variable to be predicted from , regardless of which of the variables is the cause , we prefer to keep the convention that causes throughout the paper .thus , we consider the task of predicting from and discuss in which sense knowing the distribution helps .we study this question within the finite grid to avoid technical difficulties with defining priors on the set of differentiable functions .we use essentially the generating model from section [ sec : numb ] with monotonic functions on the grid with the following modification : we restrict the set of functions to the set of surjective functions to ensure that the image of the uniform distribution is a strictly positive distribution . to avoid that this is a strong restriction , we assume that .since we use the grid only to motivate ideas for the continuous case , this does not introduce any artificial asymmetry between and .then we assume that a function is drawn uniformly at random from .afterwards , -values are drawn uniformly at random from .this generating model defines a joint distribution for -tuples and functions via where denotes the application of in each component to yields the same distribution as in section [ sec : numb ] up to the technical modifications of having fixed endpoints and surjective functions . ] . in analogy to the continuous case, we introduce the image of the uniform distribution on under by and obtain hence , where we have used the fact that all functions are equally likely .we rephrase ( [ x > y ] ) as where denotes the distribution of empirical relative frequencies defined by the -tuple and is a summand that does not depend on .( [ predf ] ) provides a prediction of from .we now ask why this prediction should be _ useful _ although it is based on the wrong model because we assume that the true data generating model does not draw -values from the uniform distribution ( instead , only `` behaves like the uniform one '' in the sense of ( [ uncorr ] ) ) . 
to this end, we show that the likelihood of is unusually high compared to other functions that are , in a sense , equivalent . to define a set of equivalent functions ,we first represent by the following list of non - negative integers : and observe that this list describes uniquely because is monotonic .then every permutation on defines a new monotonic function by the list with .note that .therefore , one can easily argue that for large , most permutations induce functions for which this is because the difference between left and right hand side can be interpreted as covariance of the random variables and with respect to the uniform distribution on ( see also section [ sec : int ] ) and a random permutation yields approximately uncorrelated samples with high probability and upper bounds on , which goes beyond the scope of this paper . ] .therefore , if we observe that in the sense of significant violation of equality , the true function has a higher likelihood than the overwhelming majority of the functions . in other words , prefers the true function within a huge set of functions that are equivalent in the sense of having the same numbers of pre - images . translating this into the continuous setting, we infer from by defining a loglikelihood function over some appropriate set of sufficiently smooth functions via with a free parameter , since we have explained in which sense this provides a useful prediction in the discrete setting . rather than getting a distribution over the possible functions for we often want to get a single function that predicts from , i.e. , an estimator for .we define and observe that maps to the uniform distribution due to , i.e. , provides the correct prediction if is uniform .moreover , its inverse is the unique maximizer of ( [ flike ] ) since it maps to . to understand inwhat sense still provides a good prediction even if strongly deviates from , we observe that the error remains small if the cumulative distribution function does not deviate too much from the one for the uniform distribution .furthermore , shares some qualitative behavior with because it tends to have large slope where has large slope because correlates with due to ( [ pc ] ) .figure [ fig : exp ] visualizes unsupervised prediction based on for a simple function . a simple function from to ( left ) and the functions inferred from the empirical distributions of for two different input distributions .,scaledwidth=110.0% ] we now argue that information theory provides theoretical results on how close is to . to this end , we define an ( admittedly uncommon ) distance of functions by the relative entropy distance of the densities that they map to the uniform distribution . thus , measures the distance between and .since relative entropy is conserved under bijections , we have i.e. , the deviation between and coincides with the deviation of from the uniform distribution .together with ( [ orth1 ] ) , ( [ transf ] ) implies with equality only for .note that represents the functions obtained from the analog of ( [ ghat ] ) when trying to infer from ( although we know that this is pointless when and are chosen independently ) .since represents the true function , we conclude : no matter how much deviates from , deviates even more from , i.e. , the error of unsupervised prediction in causal direction always exceeds the one in anticausal direction . 
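the construction of ( [ ghat ] ) is easy to reproduce numerically : the estimate of the mechanism is the empirical quantile function of the unlabeled outputs , since its inverse ( the empirical cdf ) sends the observed output distribution to the uniform one . a toy sketch , with a hypothetical monotonic mechanism and a non - uniform input density :

```python
import numpy as np

def fit_g_hat(y_samples, grid_size=200):
    """unsupervised estimate of the mechanism: g_hat is the empirical quantile function
    of the outputs, i.e. the inverse of their empirical cdf."""
    ys = np.sort(y_samples)
    levels = (np.arange(1, len(ys) + 1) - 0.5) / len(ys)
    x_grid = np.linspace(0.0, 1.0, grid_size)
    return x_grid, np.interp(x_grid, levels, ys)

rng = np.random.default_rng(3)
f = lambda t: t ** 3                      # hypothetical true mechanism, monotonically increasing
x = rng.beta(2.0, 5.0, size=20000)        # non-uniform input density on [0, 1]
x_grid, g_hat = fit_g_hat(f(x))           # uses only the unlabeled outputs y = f(x)
# g_hat coincides with f exactly when p(x) is uniform; otherwise the discrepancy is
# controlled by how far the cdf of x is from the identity, as discussed above
```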
for the semi - supervised version ,we are given a few labeled points as well as a large number of unlabeled points .we consider again the limit where is infinite and the observations tell us exactly the distribution .then we use the information that provides on for interpolating between the labeled points via whenever $ ] .note that the above schemes for un- and semisupervised prediction are not supposed to compete with existing methods for real - world applications ( the assumption of a noiseless invertible relation does not occur too often anyway ) .the goal of the above ideas is only to present a toy model that shows that the independence between and typically yields a dependence between and that can be employed for prediction .generalizations of these insights to the noisy case could be helpful for practical applications .the authors are grateful to joris mooij for insightful discussions .p. hoyer , d. janzing , j. mooij , j. peters , and b schlkopf .nonlinear causal discovery with additive noise models . in d.koller , d. schuurmans , y. bengio , and l. bottou , editors , _ proceedings of the conference neural information processing systems ( nips ) 2008 _ , vancouver , canada , 2009 . mit press .http://books.nips.cc/papers/files/nips21/nips2008_0266.pdf .j. peters , d. janzing , and b. schlkopf . identifying cause and effect on discrete data using additive noise models . in _ proceedings of the thirteenth international conference on artificial intelligence and statistics ( aistats ) 2010 , jmlr : w&cp 9 , chia laguna , sardinia , italy , 2010_. http://jmlr.csail.mit.edu / proceedings / papers / v9/. j. peters , j. mooij , d. janzing , and b. schlkopf .identifiability of causal graphs using functional models . in _ proceedings of the 27th conference on uncertainty in artificial intelligence ( uai 2011)_. http://uai.sis.pitt.edu/papers/11/p589-peters.pdf .jason weston , ronan collobert , fabian sinz , lon bottou , and vladimir vapnik .inference with the universum .in _ in icml 06 : proceedings of the 23rd international conference on machine learning _ , pages 10091016 .acm , 2006 .sinz , o. chapelle , a. agarwal , and b. schlkopf .an analysis of inference with the universum . in jc platt , d koller , y singer , and s roweis , editors , _ advances in neural information processing systems 20 _ , pages 13691376 , 9 2008 .
information - geometric causal inference ( igci ) is a new approach to distinguishing between cause and effect for two variables . it is based on an independence assumption between the input distribution and the causal mechanism that can be phrased in terms of orthogonality in information space . we describe two intuitive reinterpretations of this approach that make igci more accessible to a broader audience . moreover , we show that the described independence is related to the hypothesis that unsupervised learning and semi - supervised learning only work for predicting the cause from the effect and not vice versa .
intro a remarkable phenomenon quite ubiquitous in nature is that of collective synchronization , in which a large population of coupled oscillators spontaneously synchronizes to oscillate at a common frequency , despite each constituent having a different natural frequency of oscillation .one witnesses such a spectacular cooperative effect in many physical and biological systems over length and time scales that span several orders of magnitude . some common examples are metabolic synchrony in yeast cell suspensions , synchronized firings of cardiac pacemaker cells , flashing in unison by groups of fireflies , voltage oscillations at a common frequency in an array of current - biased josephson junctions , phase synchronization in electrical power distribution networks , rhythmic applause , animal flocking behavior ; see for a survey .the kuramoto model provides a simple theoretical framework to study how synchronization may emerge spontaneously in the dynamics of a many - body interacting system .the model comprises globally - coupled oscillators of distributed natural frequencies that are interacting via a mean - field coupling through the sine of their phase differences , with the phases following a first - order dynamics in time . over the years, many aspects of the model , including applications cutting across disciplines , from physical and biological to even social modelling , have been considered in the literature . an early motivation behind studying the kuramoto model was to explain the spectacular phenomenon of spontaneous synchronization among fireflies : in parts of south - east asia , thousands of male fireflies gather in trees at night and flash on and off in unison .in this respect , focussing on fireflies of a particular species ( the _ pteroptyx mallacae _ ) , a study due to ermentrout revealed that the approach to synchronization from an initially unsynchronized state is faster in the kuramoto setting than in reality .ermentrout proposed a route to reconciliation by elevating the first - order dynamics of the kuramoto model to the level of second - order dynamics . including also a gaussian noise term that accounts for the stochastic fluctuations of the natural frequencies in time , one arrives at a generalized kuramoto model including inertia and noise , in which oscillator phases have a second - order dynamics in time .one can prove that the resulting dynamics leads to a nonequilibrium stationary state ( ness ) at long times .study of nesss is an active area of research of modern day statistical mechanics .such states are characterized by a violation of detailed balance leading to a net non - zero probability current around a closed loop in the configuration space .one of the primary challenges in this field is to formulate a tractable framework to analyze nonequilibrium systems on a common footing , similar to the one due to gibbs and boltzmann that has been established for equilibrium systems . in a different context than that of coupled oscillators, the dynamics of the generalized kuramoto model also describes a long - range interacting system of particles moving on a unit circle under the influence of a set of external drive in the form of a quenched external torque acting on the individual particles , in the presence of noise . 
with the noise , but without the external torques , the resulting model is the so - called brownian mean - field ( bmf ) model , introduced as a generalization of the celebrated hamiltonian mean - field ( hmf ) model that serves as a prototype to study statics and dynamics of long - range interacting systems . in recent years , there has been a surge in interest in studies of systems with long - range interactions . in these systems ,the inter - particle potential in dimensions decays at large separation as , with .examples are gravitational systems , plasmas , two - dimensional hydrodynamics , charged and dipolar systems , etc . unlike systems with short - range interactions , long - range interacting systems are generically non - additive , implying that dividing the system into macroscopic subsystems and summing over their thermodynamic variables such as energy do not yield the corresponding variables of the whole system .non - additivity leads to many significant thermodynamic and dynamical consequences , such as negative microcanonical specific heat , inequivalence of statistical ensembles , and others , which are unusual with short - range interactions . in this review ,starting with the first - order mean - field dynamics of the original kuramoto model , we progressively modify the dynamics by including first the effects of a gaussian noise , and then the consequences of an inertial term that makes the dynamics second order in time .in each case , we discuss the possible transitions to synchrony that the resulting stationary state exhibits . here, we will explicitly consider a unimodal distribution of the natural frequencies . while the derivation of the phase diagram in the original model is based on an insightful self - consistent approach due to kuramoto , inclusion of gaussian noise allows to employ usual tools of statistical mechanics and explicitly study the evolution of the phase space distribution by using a fokker - planck approach . in both these cases , the transition between the unsynchronized and the synchronized phase turns out to be continuous or second order .we conveniently study the dynamics of the generalized kuramoto model that includes the effects of both inertia and noise by introducing a reduced parameter space involving dimensionless moment of inertia , temperature , and width of the frequency distribution .we point out the relation of the model to the bmf model , thereby making references to the literature on long - range interacting systems .we give a rigorous proof that the system at long times settles into a ness unless the width of the frequency distribution is zero when it has an equilibrium stationary state .we highlight that the generalized dynamics exhibits a nonequilibrium first - order transition from a synchronized phase at low parameter values to an unsynchronized phase at high values . as a result , the system as a function of the transition parameters switches over in a discontinuous way from one phase to another , thereby mimicking an abrupt off - on switch .this may be contrasted to the case of no inertia when the transition is continuous . in proper limits ,we discuss how one may recover the known continuous phase transitions in the kuramoto model and in its noisy extension , and an equilibrium continuous transition in the bmf model .the present approach offers a complete and consistent picture of the phase diagram , unifying previous results with new ones in a common framework . 
in the last part of the review , we consider deviations from the mean - field setting of the kuramoto model . to this end, we analyze the generalized kuramoto dynamics on a one - dimensional periodic lattice on the sites of which the oscillators reside and interact with a coupling that decays as an inverse power - law of their separation along the lattice .we consider two specific cases of the dynamics , namely , in the absence of noise and inertia , and in the case when the natural frequencies are the same for all the oscillators ( giving rise to the so - called -hmf model ) .for the latter case , we consider both overdamped and underdamped dynamics .in particular , we discuss how the long - time transition to synchrony is governed by the dynamics of the mean - field mode ( zero fourier mode ) of the spatial distribution of the oscillator phases . in this review , besides extensive numerical simulations , aspects of phase diagram are derived analytically by performing a linear stability analysis of the mean - field incoherent stationary state .moreover , for the case of the overdamped dynamics of the generalized kuramoto model on the lattice with the same natural frequency for all the oscillators , we present analytical results also on the linear stability analysis of the mean - field synchronized stationary state .we end the review with conclusions and perspectives .kuramoto - bare we start with a derivation of the dynamics of the kuramoto model by following ref . .consider first a single landau - stuart oscillator .its dynamics is given in terms of the complex variable as = iq + ( -|q|^2)q , ls with , and additionally , . in ref . , it is explained that the oscillator represented by eq .( [ ls ] ) is a simple model for self - organized systems like , e.g. , reacting chemical species . writing in terms of its argument and modulus as with and ] ; .this group of oscillators are thus `` locked '' or synchronized , and has the distribution _st(,)=k r_st ( - k r_st ) ( ) ; || kr_st , kura - distr - locked where is the heaviside step function .on the other hand , oscillators with have ever drifting time - dependent phases .however , to be consistent with the fact that we have a time - independent average phase , it is required that for this group of `` drifting '' oscillators has the form _ st(,)= ; || > kr_st ; kura - distr - drifting this ensures that oscillators are more crowded at -values with lower local velocity than at values with higher local velocity .the constant in equation ( [ kura - distr - drifting ] ) is fixed by the normalization condition , yielding c=. 
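For readability, here is a cleaned-up restatement, in standard Kuramoto notation, of the quantities used in the self-consistent argument above. It assumes the usual conventions of the Kuramoto literature (in particular, the average phase is set to zero in the stationary state) and is intended only as a compact reference for the garbled expressions in the text.

```latex
% complex order parameter and mean-field form of the dynamics
r e^{i\psi} \;=\; \frac{1}{N}\sum_{j=1}^{N} e^{i\theta_j},
\qquad
\dot{\theta}_i \;=\; \omega_i + K r \sin(\psi - \theta_i).
% stationary state with \psi = 0:
%   locked oscillators   (|\omega| \le K r_{\rm st}):  \sin\theta = \omega/(K r_{\rm st}),
%   drifting oscillators (|\omega| >  K r_{\rm st}):   \rho_{\rm st}(\theta,\omega)
%        \propto 1/\,|\omega - K r_{\rm st}\sin\theta|.
```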
we now require that the given value of coincides with the one implied by the distributions in equations ( [ kura - distr - locked ] ) and ( [ kura - distr - drifting ] ) .plugging the latter forms into equation ( [ r - st ] ) , we get r_st&=&_-^_|| >kr_st g ( ) e^i + & + & _ -^ _ || kr_st g ( ) e^i k r_st ( - k r_st ) .the first integral on the right hand side vanishes due to the symmetry combined with the property that for the `` drifting '' oscillators , see equation ( [ kura - distr - drifting ] ) .the imaginary part of the second integral vanishes on using and for the `` locked '' oscillators , see equation ( [ kura - distr - locked ] ) ; the real part , after integration over , finally yields r_st = kr_st_-/2^/2 ^2 g(kr_st ) , which is the desired self - consistent equation .this equation has the trivial solution , valid for any value of , corresponding to the incoherent phase with .there can however be another solution corresponding to that satisfies 1=k_-/2^/2 ^2 g(kr_st ) .kura - bifurcation this solution bifurcates continuously from the incoherent solution at the value given by equation ( [ kura - barekc ] ) that follows from the above equation on taking the limit .since for a unimodal , one has a negative second derivative at , , one finds by expanding the integrand in equation ( [ kura - bifurcation ] ) as a powers series in that the bifurcation in this case is supercritical . as a matter of fact , it is not difficult to see that for a unimodal , a solution of equation ( [ kura - bifurcation ] ) exists only for .indeed , the right hand side of equation ( [ kura - bifurcation ] ) is equal to for , while its partial derivative with respect to , given by k^2_-/2^/2^2 g(kr_st ) , kura_r_deriv is negative definite ( here and henceforth , prime will denote derivative ) . on the other hand , for , the right hand side of equation ( [ kura - bifurcation ] ) afterthe change of variable can be written as _-k^k u ( 1- ) ^g(u ) , kura_r_eq1 which is clearly smaller than , tending to as . finally , its derivative with respect to is _-/2^/2 ^2 g(kr_st ) + kr_st _ -/2^/2 ^2 g(kr_st ) + = _-/2^/2 ^2 g(kr_st ) , kura_k_deriv which is positive .these properties imply that a solution of equation ( [ kura - bifurcation ] ) exists for , which equals for , and which increases with and approaches unity as .the linear stability of the incoherent solution , , will be considered in sec .[ seckurlongrange ] , where it will appear as a special case of the kuramoto model with non - mean - field long - range interactions . the stability analysis will establish that the incoherent state is neutrally stable below and unstable above .noisykuramoto in order to account for stochastic fluctuations of the s in time , the dynamics ( [ kuramoto - eom ] ) with an additional gaussian noise term on the right hand side was studied by sakaguchi .the dynamical equations are = _i+k r ( -_i)+_i(t ) , kura - noise - eom where _i(t ) = 0 , _i(t)_j(t ) = 2d_ij(t - t ) , with the parameter standing for the noise strength , while here and from now on , angular brackets will denote averaging with respect to noise realizations . in presence of , the continuous transition of the bare modelis sustained , with shifted to k_c(d ) = 2 ^-1 .kura - noise - kc on taking the limit in the above equation , one recovers the transition point ( [ kura - barekc ] ) for the bare model . in the following ,we briefly sketch the derivation of equation ( [ kura - noise - kc ] ) , following ref . 
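The noiseless self-consistent equation derived above, r = K r times the integral over (-pi/2, pi/2) of cos^2(theta) g(K r sin(theta)) dtheta, can be probed numerically. The sketch below does so for a unit-width Gaussian g by fixed-point iteration and compares the onset of a nonzero root with K_c = 2/(pi g(0)); the quadrature grid, the iteration scheme, and the sample values of K are implementation choices of this sketch, not part of the original derivation.

```python
import numpy as np

def g(w, sigma=1.0):
    """Unimodal (Gaussian) frequency distribution with zero mean."""
    return np.exp(-w**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

def self_consistent_rhs(r, K, n=4001):
    """Right-hand side of r = K r * int_{-pi/2}^{pi/2} cos^2(t) g(K r sin t) dt."""
    t = np.linspace(-np.pi / 2.0, np.pi / 2.0, n)
    dt = t[1] - t[0]
    return K * r * np.sum(np.cos(t)**2 * g(K * r * np.sin(t))) * dt

def stationary_r(K, iters=2000):
    r = 0.5                                    # start away from the trivial root r = 0
    for _ in range(iters):
        r = self_consistent_rhs(r, K)
    return r

if __name__ == "__main__":
    Kc = 2.0 / (np.pi * g(0.0))                # = sqrt(8/pi) ~ 1.596 for the unit Gaussian
    print(f"K_c = {Kc:.4f}")
    for K in (1.2, 1.8, 2.5, 4.0):
        print(f"K = {K:.2f}  ->  r_st ~ {stationary_r(K):.4f}")
```

For K below K_c the iteration collapses onto the trivial root r = 0, while above K_c it converges to the synchronized branch, consistent with the supercritical bifurcation discussed above.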
.the starting point is to write down a fokker - planck equation for the time evolution of the distribution , which follows straightforwardly from the dynamics ( [ kura - noise - eom ] ) as = -+d .fokplaeqnoisy as before , in the stationary state , we set , and obtain from the above equation the result _ st(,)&=&()_st(0 , ) + & & , kura - noise - distr where is fixed by the normalization . for above expression reduces to the incoherent state . substituting equation ( [ kura - noise - distr ] ) into equation ( [ r - st ] ) , one obtains a self - consistent equation for . as for the kuramoto model , it has the trivial solution , corresponding to the incoherent state . in finding the other solution , one observes that the imaginary part of the right hand side of equation ( [ r - st ] ) is zero due to the symmetry together with the property , see equation ( [ kura - noise - distr ] ) ; thus , only the real part contributes .expanding the resulting equation in powers of , and taking the limit yield the critical coupling strength given by equation ( [ kura - noise - kc ] ) .linear - stability - kura - noise the stability analysis of the incoherent state is performed by studying the linearized fokker - planck equation obtained from equation ( [ fokplaeqnoisy ] ) after expanding as ( , , t ) = + ( , , t ) ; || 1 .linearnoisy writing explicitly the expression for , the resulting linear equation is fokplalinearnoisy ( , , t ) = - ( , , t ) + d ( , , t ) + + _ -^ g( ) ( - ) ( ,,t ) . with the fourier expansion fouriernoisy ( , , t )= _ k=-^+ _ k(,t)e^ik , equation ( [ fokplalinearnoisy ] ) gives fokplalinearnoisyfourier _k(,t ) = -ik_k(,t ) - d k^2 _ k(,t ) + ( _ k,1 + _ k,-1 ) g()_k(,t ) . for the integral term vanishes , and we have fokplalinearnoisyfourierkne1 _ k(,t ) = -ik_k(,t ) - d k^2 _ k(,t ) , so that with varying in the support of , one has a continuous spectrum of stable modes that decay exponentially in time with rate . for , after posing expokeq1 _1(,t ) = _ 1(,)e^t , we have fokplalinearnoisyfourierk1 _ 1 ( , ) = g()_1( , ) .this equation also admits a continuous spectrum of stable modes , given by for each in the support of .the modes , normalized so that the right hand side of equation ( [ fokplalinearnoisyfourierk1 ] ) is equal to 1 , are given by _1(,i _ 0 -d ) = i p + c_1(_0 ) ( - _ 0 ) , eigen_cont_pm1_noisy with c_1(_0 ) g(_0 ) = i p , eigen_factor_noisy where denotes the principal value .however , unlike for , there is also a discrete spectrum for . 
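Before turning to the discrete part of the spectrum, a brief numerical aside: the critical coupling of the noisy model quoted above, K_c(D) = 2 divided by the integral of g(omega) D/(D^2 + omega^2) over omega, is easy to evaluate. The sketch below does so for a unit-width Gaussian g and checks that K_c(D) approaches the noiseless value 2/(pi g(0)) as D tends to zero; the integration grid is an implementation choice, and much smaller values of D would require a finer grid near omega = 0.

```python
import numpy as np

def g(w, sigma=1.0):
    return np.exp(-w**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

def Kc_noisy(D, wmax=50.0, n=200001):
    """K_c(D) = 2 / integral dw g(w) * D / (D^2 + w^2)."""
    w = np.linspace(-wmax, wmax, n)
    dw = w[1] - w[0]
    integral = np.sum(g(w) * D / (D**2 + w**2)) * dw
    return 2.0 / integral

if __name__ == "__main__":
    print("noiseless K_c =", round(2.0 / (np.pi * g(0.0)), 4))
    for D in (1.0, 0.1, 0.01):
        print(f"D = {D:<5}  K_c(D) ~ {Kc_noisy(D):.4f}")
```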
from equation ( [ fokplalinearnoisyfourierk1 ] ), we have _ 1 ( , ) = _ 1( , ) g( ) .fokplalinearnoisyfourierk1_discr in order to have a non - trivial solution of the above equation , the integral on the right hand side must not vanish .we can impose that this integral is equal to , since equation ( [ fokplalinearnoisyfourierk1 ] ) is linear .we then obtain the dispersion relation _disper_relat_noisy decomposing into real and imaginary parts , , we obtain from equation ( [ disper_relat_noisy ] ) that _ -^+ g ( ) = 1 , disper_relatr_noisy + _ -^+ g ( ) = 0 .disper_relati_noisy with the change of variable , the integral in the second equation can be transformed to _ 0^+ x .disper_relati_noisy_b one may check that for unimodal , the above expression can be equal to only for .so equation ( [ disper_relatr_noisy ] ) becomes _-^+ g ( ) = 1 , disper_relatr_noisy_b with real .this equation shows that only solutions are possible ; when such a solution is not present , there is no discrete spectrum , and the incoherent state is stable .however , stability holds also when there is a solution , since we have seen that all the eigenvalues of the continuous spectrum have a negative real part .the change of variable transforms equation ( [ disper_relatr_noisy_b ] ) to _ -^+ y g = 1 .disper_relatr_noisy_c the left hand side tends to as , while its derivative with respect to is _-^+ y g , disper_relatr_noisy_der which is negative .therefore , a solution for exists only when the value of the left hand side of equation ( [ disper_relatr_noisy_c ] ) for is larger than .in particular , we have stability when this solution is negative ; the threshold for stability is thus given by _y g(dy ) = 1 , disper_relatr_noisy_thres that gives the critical value ( [ kura - noise - kc ] ) .chap2 in the generalized dynamics , an additional dynamical variable , namely , angular velocity , is assigned to each oscillator , thereby elevating the first - order dynamics of the kuramoto model to the level of second - order dynamics ; the equations of motion are : = v_i , + eom + m =- v_i+ r(-_i)+_i+_i(t ) . here, is the angular velocity of the oscillator , is the moment of inertia of the oscillators , is the friction constant , is the strength of the coupling between the oscillators , while is a gaussian noise with _i(t ) = 0 , _i(t)_j(t ) = 2_ij(t - t ) .etatide - prop in the limit of overdamped motion ( at a fixed ) , the dynamics ( [ eom ] ) reduces to = r(-_i)+_i+_i(t ) .eom - overdamped1 then , defining and so that , the dynamics ( [ eom - overdamped1 ] ) for becomes that of the kuramoto model , equation ( [ kuramoto - eom ] ) , and for that of its noisy version , the dynamics ( [ kura - noise - eom ] ) . 
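The second-order dynamics introduced above can also be simulated directly. The sketch below uses a plain Euler-Maruyama discretization of a generic Langevin system of the form dtheta_i/dt = v_i, m dv_i = (-v_i + K r sin(psi - theta_i) + omega_i) dt plus a Gaussian increment of variance 2 T dt; this generic form, the unit-width Gaussian frequencies, and all parameter values are assumptions made for illustration and do not reproduce the review's dimensionless variables or the higher-order integrator described in its appendix.

```python
import numpy as np

def inertial_kuramoto(N=500, m=1.0, K=4.0, T=0.25, dt=0.01, steps=20000, seed=1):
    """Euler-Maruyama integration of a second-order (inertial) noisy Kuramoto model:
        dtheta_i = v_i dt
        m dv_i   = (-v_i + K r sin(psi - theta_i) + omega_i) dt + sqrt(2 T dt) * N(0,1)
    """
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal(N)            # natural frequencies (unit-width Gaussian)
    theta = rng.uniform(-np.pi, np.pi, N)     # incoherent initial condition
    v = np.zeros(N)
    for _ in range(steps):
        z = np.exp(1j * theta).mean()         # complex order parameter r e^{i psi}
        r, psi = np.abs(z), np.angle(z)
        noise = np.sqrt(2.0 * T * dt) * rng.standard_normal(N)
        v += (dt * (-v + K * r * np.sin(psi - theta) + omega) + noise) / m
        theta += dt * v
    return np.abs(np.exp(1j * theta).mean())

if __name__ == "__main__":
    print("late-time r ~", round(inertial_kuramoto(), 3))
```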
in appendixa , we illustrate how the dynamics ( [ eom ] ) without the noise term , studied in , arises in a completely different context , namely , in electrical power distribution networks comprising synchronous generators ( representing power plants ) and motors ( representing customers ) ; the dynamics arises in the approximation in which every node of the network is connected to every other .longrangemodel we now discuss that in a different context than that of coupled oscillators , the dynamics ( [ eom ] ) describes a long - range interacting system of particles moving on a unit circle , with each particle acted upon by a quenched external torque .much recent exploration of the static and dynamic properties of long - range interacting systems has been pursued within the framework of an analytically tractable prototypical model called the hamiltonian mean - field ( hmf ) model .the model comprises particles of mass moving on a unit circle and interacting through a long - range interparticle potential that is of the mean - field type : every particle is coupled to every other with equal strength .this system can also be seen as a set of -rotators that reside on a lattice and interact through ferromagnetic coupling .the structure and dimensionality of the lattice need not be specified , since the coupling between each pair of rotators is the same ( mean - field system ) . since the configuration of an -rotator is defined by a single angle variable, one might also view the rotators as spin vectors .however , one should be aware that this identification is not completely correct .this is because the dynamics of spins is defined differently , through poisson brackets , or , equivalently , through the derivative of the hamiltonian with respect to the spins , that yields the effective magnetic field acting on the individual spins . on the other hand , the -rotators are more correctly identified with particles confined to a circle , with angle and angular momentum as canonically conjugate variables , and with the dynamics generated by the hamilton equations for these variables .apart from academic interest , the model provides a tractable reference to study physical systems like gravitational sheet models and the free - electron laser .the hamiltonian of the hmf model is h=_i=1^n+_i , j=1^n , hmf - h where ] . *the transition in the bmf dynamics ( ) corresponds now to a continuous transition occurring at the critical temperature . ) in terms of dimensionless moment of inertia , temperature , and width of the frequency distribution . here, the shaded blue surface is a first - order transition surface , while the thick red lines are second - order critical lines .the system is synchronized inside the region bounded by the surface , and is incoherent outside .the transitions of known models are also marked in the figure .the blue surface in ( a ) is bounded from above and below by the dynamical stability thresholds and of respectively the synchronized and the incoherent phase , which are estimated in -body simulations from hysteresis plots ( see fig .[ fig : hys - mvary ] for an example ) ; the surfaces and for in the case of a gaussian with zero mean and unit width are shown in panel ( b).,title="fig:",scaledwidth=60.0% ] ) in terms of dimensionless moment of inertia , temperature , and width of the frequency distribution . 
here, the shaded blue surface is a first - order transition surface , while the thick red lines are second - order critical lines .the system is synchronized inside the region bounded by the surface , and is incoherent outside .the transitions of known models are also marked in the figure .the blue surface in ( a ) is bounded from above and below by the dynamical stability thresholds and of respectively the synchronized and the incoherent phase , which are estimated in -body simulations from hysteresis plots ( see fig .[ fig : hys - mvary ] for an example ) ; the surfaces and for in the case of a gaussian with zero mean and unit width are shown in panel ( b).,title="fig:",width=377 ] ) , the figure shows vs. adiabatically tuned for different values at , showing also the stability thresholds , and , for .the data are obtained from simulations with . for a given , the branch of the plot tothe right ( left ) corresponds to increasing ( decreasing ) ; for , the two branches almost overlap .the data are for the gaussian given by equation ( [ gomega - gaussian]).,width=377 ] fig : hys - mvary ) , the figure shows vs. adiabatically tuned at .the data are obtained from simulations with .the branch of the plot to the right ( left ) corresponds to increasing ( decreasing ) .the data are for a lorentzian with zero mean and unit width ., width=377 ] fig : hys - lor ) , the figure shows vs. adiabatically tuned for different temperatures at a fixed moment of inertia .the data are obtained from simulations with . for a given , the branch of the plot tothe right ( left ) corresponds to increasing ( decreasing ) ; for , the two branches almost overlap .the data are for the gaussian given by equation ( [ gomega - gaussian]).,width=377 ] fig : hys - tvary ) at , and the gaussian given by equation ( [ gomega - gaussian ] ) , ( a ) shows at , the numerically estimated first - order phase transition point , vs. time in the stationary state , while ( b ) shows the distribution at several s around .the data are obtained from simulations with .,title="fig:",width=377 ] + ) at , and the gaussian given by equation ( [ gomega - gaussian ] ) , ( a ) shows at , the numerically estimated first - order phase transition point , vs. time in the stationary state , while ( b ) shows the distribution at several s around .the data are obtained from simulations with .,title="fig:",width=377 ] fig : r - vs - t - pr the complete phase diagram is shown schematically in fig .[ fig : phdiag](a ) , where the thick red second - order critical lines denote the continuous transitions mentioned above . on the other hand , for all non - zero, we demonstrate below that the synchronization transition becomes first order , occurring across the shaded blue transition surface . this surfaceis bounded by the second - order critical lines on the and planes , and by a first - order transition line on the -plane .let us remark that all phase transitions for are in nesss , and are interpreted to be of dynamical origin , accounted for by stability considerations of stationary solutions of equations ( for example , the kramers equation discussed below ) for temporal evolution of phase space distributions .more rigorously , to qualify as thermodynamics phases , one needs to show that the different phases extremize a free energy - like quantity ( e.g. , a large deviation functional ) .such a demonstration in this nonequilibrium scenario is a daunting task , while for , the phases have actually been shown to minimize the equilibrium free energy . 
in order to demonstrate the first - order nature of the transition, we performed -body simulations for a representative , i.e. , the gaussian given by equation ( [ gomega - gaussian ] ) . for given and , we prepared an initial state with all oscillators at and frequencies s sampled from a gaussian distribution with zero mean and standard deviation .we then let the system equilibrate at , and subsequently increase adiabatically to high values and back in a cycle .the simulations involved integrations of the coupled equations of motion ( [ eom - scaled ] ) , see appendix b for details .figure [ fig : hys - mvary ] shows the behavior of for several s at a fixed less than the bmf transition point , where one may observe sharp jumps and hysteresis behavior expected of a first - order transition . with decrease of , the jump in becomes less sharp , and the hysteresis loop area decreases , both features being consistent with the transition becoming second - order - like as , see fig .[ fig : phdiag](a ) . for , we mark in fig . [fig : hys - mvary ] the approximate stability thresholds for the incoherent and the synchronized phase , denoted respectively by and .the actual phase transition point lies in between the two thresholds .let us note from the figure that both the thresholds decrease and approach zero with the increase of .a qualitatively similar behavior is observed for a lorentzian , see fig .[ fig : hys - lor ] . figure [ fig : hys - tvary ] shows hysteresis plots for a gaussian at a fixed and for several values of , where one observes that with approaching , the hysteresis loop area decreases , jumps in become less sharp and occur between smaller and smaller values that approach zero .moreover , the value at decreases as increases towards , reaching zero at .disappearance of the hysteresis loop with increase of similar to that in fig .[ fig : hys - tvary ] was reported in ref .our findings suggest that the thresholds and coincide on the second - order critical lines , as expected , and moreover , they asymptotically come close together and approach zero as at a fixed . for given and , and for in between and , fig .[ fig : r - vs - t - pr](a ) for as a function of time in the stationary state shows bistability , whereby the system switches back and forth between incoherent ( ) and synchronized ( ) states .the distribution depicted in figure [ fig : r - vs - t - pr](b ) is bimodal with a peak around either or as varies between and .figure [ fig : r - vs - t - pr ] lends further credence to the phase transition being first order .analysis we now turn to an analytical characterization of the dynamics ( [ eom - scaled ] ) in the continuum limit . to this end, we define the single - oscillator distribution that gives at time and for each the fraction of oscillators with phase and angular velocity .the distribution is -periodic in , and obeys the normalization _ -^ _ -^ v f(,v,,t)=1 , while evolving following the kramers equation = -v+(--r(-))f+ , kramers where re^i = v g()e^if(,v,,t ) .let us briefly sketch the derivation of equation ( [ kramers ] ) , while the details may be found in ref .we will along the way also indicate how one may prove rigorously that the dynamics ( [ eom - scaled ] ) does not satisfy detailed balance unless . for simplicity of presentation , we first consider the case of a discrete bimodal , and then in the end extend our discussion to a general . 
then , consider a given realization of in which there are oscillators with frequencies and oscillators with frequencies , where .let us then define the -oscillator distribution function as the probability density at time to observe the system around the values . in the following ,we use the shorthand notations and . note that satisfies the normalization ( _ i=1^nz_i)f_n(,t)=1 .the distribution evolves in time according to the following fokker - planck equation that may be derived straightforwardly from the equations of motion ( [ eom - scaled ] ) : -\sigma\sum_{j=1}^{n}\big(\omega^{t}\big)_{j}\frac{\partial f_{n}}{\partial v_{j}}+\frac{t}{\sqrt{m}}\sum_{i=1}^{n}\frac{\partial^{2}f_{n}}{\partial v_{i}^{2}}\nonumber \\ &-&\frac{1}{2n}\sum_{i , j=1}^{n}\sin(\theta_{j}-\theta_{i})\big[\frac{\partial f_{n}}{\partial v_{i}}-\frac{\partial f_{n}}{\partial v_{j}}\big ] , \label{eq : fp - eqn}\end{aligned}\ ] ] where the column vector has its first entries equal to and the following entries equal to , and where the superscript denotes matrix transpose operation : ^t .detailedbalance let us rewrite the fokker - planck equation ( [ eq : fp - eqn ] ) as }{\partial x_{i}}+\frac{1}{2}\sum_{i , j=1}^{2n}\frac{\partial^{2}[b_{i , j}(\mathbf{x})f_{n}(\mathbf{x})]}{\partial x_{i}\partial x_{j}},\label{eq : fp - compact}\end{aligned}\ ] ] where and here , the drift vector is given by while the diffusion matrix is the dynamics described by the fokker - planck equation of the form ( [ eq : fp - compact ] ) satisfies detailed balance if and only if the following conditions are satisfied : }{\partial x_{j}},\label{eq : detailed - balance - cond2}\end{aligned}\ ] ] where is the stationary solution of equation ( [ eq : fp - compact ] ) . here, denotes the parity with respect to time reversal of the variables s : under time reversal , we have , where ( respectively , ) depending on whether is odd ( respectively , even ) under time reversal . for example , s are even , while s are odd .using equation ( [ eq : bij - defn ] ) , the condition ( [ eq : detailed - balance - cond1 ] ) is trivially satisfied , while to check the condition given by ( [ eq : detailed - balance - cond2 ] ) , we formally solve this equation for and check if the solution solves equation ( [ eq : fp - compact ] ) in the stationary state . from equation ( [ eq : detailed - balance - cond2 ] ), we see that for , the condition reduces to the above equation , using equation ( [ eq : ai - defn ] ) , is obviously satisfied . for , we have solving which we get , \label{eq : stationary - soln1}\end{aligned}\ ] ] where is a function to be determined . substituting the distribution ( [ eq : stationary - soln1 ] ) into equation ( [ eq : fp - compact ] ) and requiring that it is a stationary solution implies that has to be equal to zero , while d(_1,_2, ,_n)=(-_i , j=1^n ) .thus , for , when the dynamics reduces to that of the bmf model , we get the stationary solution as .\label{eq : bmf - soln}\ ] ] where is the hamiltonian ( [ hmf - h ] ) ( expressed in terms of dimensionless variables introduced above ) .the lack of detailed balance for obviously extends to any distribution .kramers the starting point is to define the reduced distribution function , with and as note that the following normalizations hold for the single - oscillator distribution functions : z_1 f_1,0(z_1,t)=1 + 1 f_0,1(z_n_1 + 1,t)=1 . assuming that 1 . is symmetric with respect to permutations of dynamical variables within the same group of oscillators , and 2 . 
, together with the derivatives , vanish on the boundaries of the phase space , and then using equation ( [ eq : fp - eqn ] ) in equation ( [ eq : fs - defn ] ) , one obtains the bogoliubov - born - green - kirkwood - yvon ( bbgky ) hierarchy equations for the dynamics ( [ eom - scaled ] ) ( for details , see ) . in particular , the first equations of the hierarchy are and a similar equation for . in the limit , writing ,\ ] ]one can express equation ( [ eq : fp-1 ] ) in terms of . in order to generalize the above treatment to the case of a continuous , note for this casethat the single - oscillator distribution function is .the first equation of the hierarchy is then in the continuum limit , one may neglect oscillator - oscillator correlations , and approximate as f(,v,,v,,,t ) & = & f(,v,,t)f(,v,,t ) + & + & corrections subdominant in n , so that equation ( [ eq : final - eqn ] ) reduces to the kramers equation ( [ kramers ] ) .solutionkramers the stationary solutions of equation ( [ kramers ] ) are obtained by setting the left hand side to zero . for ,the stationary solution is f_st(,v ) , that corresponds to canonical equilibrium , with determined self - consistently , see equation ( [ rx - hmf ] ) . for ,the incoherent stationary state is f^inc_st(,v,)=1/((2)^3/2 ) .inc - state ) , here we show the marginal distributions , and , corresponding to the incoherent phase for .the points denoting simulation data are for for one fixed realization of the s sampled from the gaussian distribution ( [ gomega - gaussian ] ) , while the continuous lines denote theoretical results ( [ marginalv ] ) and ( [ marginalth ] ) . ,title="fig:",width=377 ] ) , here we show the marginal distributions , and , corresponding to the incoherent phase for .the points denoting simulation data are for for one fixed realization of the s sampled from the gaussian distribution ( [ gomega - gaussian ] ) , while the continuous lines denote theoretical results ( [ marginalv ] ) and ( [ marginalth ] ) ., title="fig:",width=377 ] in the class of unimodal frequency distributions , let us consider a representative , namely , a gaussian : g()= .gomega - gaussian we then have for the marginal angular velocity distribution p^inc_st(v)&=&_-^ g ( ) _ -^ f^inc_st(,v , ) + & = & , marginalv and the marginal angle distribution p^inc_st()&=&_-^ g ( ) _ -^ v f^inc_st(,v,)= , marginalth both correctly normalized to unity . in figs .[ marginal - thv - inc ] and [ marginal - thv - syn ] , we compare our theoretical predictions , ( [ marginalv ] ) and ( [ marginalth ] ) , with numerical simulation results . ) , the figure shows the marginal distributions , and , corresponding to the synchronized phase for .the points denoting simulation data are for for one fixed realization of the s sampled from the gaussian distribution ( [ gomega - gaussian]).,title="fig:",width=377 ] ) , the figure shows the marginal distributions , and , corresponding to the synchronized phase for .the points denoting simulation data are for for one fixed realization of the s sampled from the gaussian distribution ( [ gomega - gaussian]).,title="fig:",width=377 ] marginal - thv - syn the existence of the synchronized stationary state is borne out by our simulation results in fig .[ marginal - thv - syn ] , although its analytical form is not known .linearstability - incoherent - inertia we now discuss about the linear stability analysis of the incoherent state ( [ inc - state ] ) ; a similar analysis for the bmf model is discussed in ref .following ref . 
, we linearize equation ( [ kramers ] ) about the state by expanding as f(,v,,t)=f^inc_st(,v,)+e^tf(,v , ) , expansion_linear where satisfies the linearized kramers equation : since and are normalized , we have _-^_-^ v f(,v,)=0 .norm substituting f(,v , ) = _n=-^ b_n(v,,)e^i n try in equation ( [ linear - kramers ] ) , one gets & & + ( v-)+(1 - -inv)b_n + & & = ( i _ n,1 - i _n,-1 ) 1,b_n , b - eqn where one has the scalar product with denoting complex conjugation .since is real , one has , while equation ( [ norm ] ) implies that .we can then restrict to consider only .next , equation ( [ b - eqn ] ) is transformed into a nonhomogeneous parabolic cylinder equation by the transformations & & b_n(v , , ) = _ n ( z , , ) , trans + & & z = ( v- + 2nti ) , which when substituted into equation ( [ b - eqn ] ) yield \ , \beta_n\nonumber\\ = i\pi \sqrt{m}\fr{\partial f_{\rm st}^{\rm inc}}{\partial v } e^{{1\over 4}(z-2i\sqrt{mt } ) ^{2}}\ , \langle 1 , e^{-{1\over 4}(z-2i\sqrt{mt } ) ^{2 } } \beta_1 \rangle\ , \delta_{n,1}. \end{aligned}\ ] ] for , the right hand side of the above equation is zero , yielding the eigenvalues and the corresponding eigenfunctions that do not depend on and ; here , and are respectively the parabolic cylinder function and the hermite polynomial of degree .the eigenvalues form a continuous spectrum .all of them have negative real parts , thus leading to linear stability of the incoherent state ( [ inc - state ] ) , for and . for eigenvalues have also negative real parts unless those with , that have a vanishing real part .they would correspond to neutrally stable modes ; however , the modes with have zero amplitude due to the normalization condition ( [ norm ] ) . for , solving ( [ b - eqn ] )1(z,,)&=&-i1,e^- ( - i)^2 _ 1 + & & _p=0^ d_p(z ) , where ' & = & \left .\fr{\partial f_{\rm st}^{\rm inc}}{\partial v } \right|_{v= \sigma\omega\sqrt{m } - i2t\sqrt{m } + \sqrt{t } z}= - { ( z - 2i\sqrt{mt})\over ( 2\pi)^{{3\over 2}}t}e^{-{1\over 2}(z - 2i\sqrt{mt})^{2}},\end{aligned}\ ] ] using the above expression to compute , one obtains from the resulting self - consistent equation the following eigenvalue equation for : _p=0^_-^ = 1 .stability - eqn -plane , ( b ) , corresponding to the loop in the complex -plane , ( a ) , as determined by the function in equation ( [ eqfond]).,width=377 ] eigenvalueequation a detailed analysis of the eigenvalue equation ( [ stability - eqn ] ) , carried out in ref . , shows that the equation admits at most one solution for with a positive real part , and when the solution exists , it is necessarily real .we now briefly sketch the analysis .we rewrite equation ( [ stability - eqn ] ) as where is unimodal .the incoherent state ( [ inc - state ] ) is unstable if there is a with a positive real part that satisfies the above eigenvalue equation .we first look for possible pure imaginary solutions . separating equation ( [ eqfond ] ) into real and imaginary parts, we have \nonumber \\ & & = \frac{e^{mt}}{2t}\sum_{p=0}^\infty \frac{\left(-mt\right)^p}{p ! 
} \int { \mbox{d}}\omega \ , g(\omega ) \frac{\left(p+mt\right)^2}{\left(p+mt\right)^2+\left(m\sigma \omega + \sqrt{m}\mu\right)^2 } - 1 = 0 ,\\ \label{iml_imag } & & { \rm i m } \left[f(i\mu;m , t,\sigma)\right]\nonumber \\ & & = -\frac{e^{mt}}{2 t } \sum_{p=0}^\infty \frac{\left(-mt\right)^p}{p!}\int { \mbox{d}}\omega \ , g(\omega ) \frac{\left(p+mt\right)\left(m\sigma \omega + \sqrt{m}\mu \right ) } { \left(p+mt\right)^2+\left(m\sigma \omega + \sqrt{m}\mu\right)^2}= 0.\end{aligned}\ ] ] in the second equation above , making the change of variables , and exploiting the parity in of the sum , we get &=&-\frac{e^{mt}}{2 t } m\sigma\int_0^\infty { \mbox{d}}x\big\ { \left[g\left(x-\frac{\mu}{\sqrt{m}\sigma}\right ) -g\left(-x-\frac{\mu}{\sqrt{m}\sigma}\right)\right]\nonumber \\ & \times & x \sum_{p=0}^\infty \frac{\left(-mt\right)^p}{p ! } \frac{p+mt}{\left(p+mt\right)^2+m^2\sigma^2 x^2 } \big\ } = 0.\end{aligned}\ ] ] it is possible to show that the sum on the right - hand side is positive definite for any finite , while for our class of unimodal s , the term within the square brackets is positive ( respectively , negative ) definite for ( respectively , for ) .therefore , the last equation is never satisfied for , implying thereby that the eigenvalue equation ( [ eqfond ] ) does not admit pure imaginary solutions ( the proof holds also for the particular case ) .this analysis also proves that there can be at most one solution of equation ( [ eqfond ] ) with positive real part .in fact , let us consider , in the complex -plane , the loop depicted in fig .[ sm - fig1](a ) , with and representing , respectively , and the radius of the arc going to . due tothe sign properties of ] , obtained as the continuum limit of .however , contrary to section [ seckurlongrange ] , the one - particle distribution function will now not depend on the frequency . here, we introduce the one - particle distribution function , defined such that the quantity represents the fraction of oscillators located between and that at time has their phase between and .the normalization is _-^ ( , s , t ) = 1 s. 
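Before passing to the continuum limit, it may help to restate the lattice dynamics of this subsection (overdamped, noisy, identical natural frequencies) in clean notation. The following is a hedged reconstruction consistent with the conventions used below: d_ij is the closest distance between sites i and j on the periodic lattice, 0 <= alpha < 1, and the normalization factor is the one fixed in the text so that the coupling remains finite as N grows.

```latex
\frac{d\theta_i}{dt} \;=\; \frac{1}{\widetilde{N}}\sum_{j=1}^{N}
  \frac{\sin(\theta_j - \theta_i)}{d_{ij}^{\,\alpha}} \;+\; \eta_i(t),
\qquad
\langle \eta_i(t)\rangle = 0,\quad
\langle \eta_i(t)\,\eta_j(t')\rangle = 2T\,\delta_{ij}\,\delta(t-t').
% the j = i term vanishes identically since sin(0) = 0, so the value assigned to d_{ii}
% is irrelevant, as noted later in the appendix on the fast summation algorithm.
```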
norm_cond_rhoq in the continuum limit , the equation of motion takes the form = _ 0 ^ 1 s _ -^ ( ,s,t ) + ( s , t ) , kuramoto - eom_decaying_noise_cont where the normalizing factor and the closest distance convention are given in equations ( [ norm_fact ] ) and ( [ clos_dist_conv_cont ] ) , respectively .the statistical properties of the noise become ( s , t ) = 0 , + ( s , t ) ( s,t ) = 2t(s - s)(t - t ) .noise_statistics_cont the fokker - planck equation governing the evolution of is fokkerplanck_eq_decaying + = - \ { ( , s , t ) } + t .the generic stationary solution of the fokker - planck equation ( [ fokkerplanck_eq_decaying ] ) is obtained by setting the left hand side to zero , yielding _ 0(,s ) = a(s ) , noise_stationary where the constants for every are determined by the normalization condition ( [ norm_cond_rhoq ] ) .there are also consistency relations to be satisfied , as we now show .let us denote by and the two components of the local magnetization : m_x(s ) _-^ _ 0(,s ) , mxsdef + m_y(s ) _-^ _ 0(,s ) .mysdef from the definition of the modified bessel function of the first kind of order , i_n(x ) = _ -^ ( n ) e^x , defbessel and denoting _x^()(s ) = _ 0 ^ 1 s , mxsaldef + _ y^()(s ) = _ 0 ^ 1 s , mysaldef one obtains for the normalization constant the equation a(s ) = ^-1 , normconst together with the self - consistency relations = ( ) .selfalcons we note for later use that if we choose an -independent stationary distribution , then , and are also -independent , with .linear - stability - alphahmf - overdamped let us now consider the -independent ( that is , the mean - field ) incoherent state , obtained when , i.e. , _ 0 ( ) = .stationincoh as before , its linear stability can be analyzed by posing ( , s , t ) = + ( , s , t ) ; || 1 , alphaperturbation_b and studying the linearized fokker - planck equation for : fokkerplanck_eq_decaying_linear = _ 0 ^ 1 s _ -^ ( ,s,t ) + t .the procedure for stability analysis is the same as that adopted in the preceding subsection .we first perform a fourier expansion in : ( , s , t ) = _ k=-^+ _ k(s , t ) e^ik , fourier_theta_noise which when used in equation ( [ fokkerplanck_eq_decaying_linear ] ) gives linear_cont_eq_four_noise =( _k,1 + _ k,-1 ) _ 0 ^ 1 s -k^2 t_k(s , t ) . 
for , the first term on the right hand side of equation ( [ linear_cont_eq_four_noise ] ) vanishes , and we have _k(s , t ) = _ k(s,0 ) e^-k^2 t t ; k 1 ; damped_modes these are perturbations that decay exponentially in time , and thus correspond to stable modes .the equation for , = _ 0 ^ 1 s -t_1(s , t ) , linear_cont_eq_fourpm1_noise is studied by performing a further fourier expansion in -space : _1(s , t ) = _ n=-^+ _ 1,n(t ) e^2i ns .fourier_sspace_noise substituting in equation ( [ linear_cont_eq_fourpm1_noise ] ) , we obtain = _1,n(t ) -t_1(t ) , linear_cont_eq_fourpm1s_noise where is given by equation ( [ eigenlambda ] ) .we therefore have _1,n(t ) = _ 1,n(0 ) .solution_n1_noise for a fixed , this expression determines the value of the temperature for which the mode is stable .precisely , the mode decays exponentially in time and is therefore stable for , while it is unstable , growing exponentially in time , for .therefore , the critical temperature for the neutral stability of is t_c , n = .crit_temper since , as previously explained , , and is a decreasing function of , we have , and = t_c,0 > t_c,1 > t_c,2 > t_c,3 > order_critical we note in particular that for , we have for , so that the modes for never destabilize .numerics - alphahmf - overdamped in the following , we take without loss of generality ( with a rescaling of the time unit , it is always possible to reduce to such a case ) . from the analysis presented above, we see that for , the incoherent state is stable .decreasing the temperature , the first perturbation mode to destabilize will be , which happens at .decreasing further the temperature , the modes with will progressively destabilize .we now discuss the results of simulations of the equation of motion ( [ kuramoto - eom_decaying_noise ] ) for a system with oscillators with .the effect of the stochastic noise has been taken into account with the same method as that described in appendix b , equation ( [ formalintegration3 ] ) for the case of systems with inertia .we have studied the observables defined in equation ( [ discrete_order_params ] ) . in fig .[ sas - fig1 ] , we show the time evolution of , , and for simulations performed at , with initial conditions reproducing the incoherent state , for all , obtained by taking the phases independently and uniformly distributed between and . from equation ( [ crit_temper ] ) , we find that lies between and .then , in particular , the observables plotted in fig . [ sas - fig1 ] should all increase exponentially in time .the plot shows that the agreement between the numerical and the theoretical growth rates is very good . ), the figure shows the time evolution of the observables , and starting from an initial state that has been obtained by extracting the s uniformly in ] ; this is in analogy with the mean - field case ( ) studied in ref .the right hand side of the last equation vanishes only for the stationary states given in ( [ noise_stationary ] ) , and in particular , for the state ( [ noise_stationary_nos_b ] ) .in addition , as proved in , the -independent stationary state ( [ noise_stationary_nos_b ] ) realizes the minimum of the free energy .therefore , equation ( [ htheorem ] ) suggests that if this state is perturbed , the dynamics tends to restore it .the eigenvalues of the system ( [ system_x ] ) and ( [ system_y ] ) have been numerically evaluated by truncating the system at a finite value of , denoted by . 
as a matter of fact, we have found that the eigenvalues of the systems ( [ system_x ] ) and ( [ system_y ] ) always have a negative real part for any value of between and and for any temperature in the range ( except for the zero eigenvalue that we will consider in detail below ) .we recall that varying and , the factor can take any value in that range . obviously , by truncating the system , one can find only a finite number of eigenvalues , but by increasing the truncation value , we have checked that the new eigenvalues have negative real parts with larger absolute values , and the eigenvalues with negative real parts that have smaller absolute values converge extremely fast .we have also found that for not close to , the eigenvalues are in addition real .this can be understood by considering the systems ( [ system_x ] ) and ( [ system_y ] ) for . in that case , since for , they reduce to system_x_diag _ x , n^(p ) = - tp^2 _ x , n^(p ) + _ p,1_n ( ) _ x , n^(1 ) , + system_y_diag _ y , n^(p ) = - tp^2 _ y , n^(p ) + _ p,1_n()_y , n^(1 ) .the right hand sides give directly the eigenvalues .they are real and all negative , since and ( except for exactly equal to and for , where and then the right hand sides for are zero ) . by continuity, the eigenvalues will be real for at least a range of temperatures smaller than .we conclude the analysis by studying the zero eigenvalue for . for this, it is not convenient to analyze the systems ( [ system_x ] ) and ( [ system_y ] ) , but to start directly from equation ( [ fokkerplanck_eq_decaying_nonunif_mu ] ) with , i.e. , m_x ( _ n(,0 ) ) -_n ( ) ( _ 0 ( ) ) + + t = 0 .fokkerplanck_eq_decaying_nonunif_mu_0 the solution of this equation that satisfies the periodicity condition and equation ( [ zeronorm ] ) is _n(,0 ) = _ n ( ) , solution_mu_0 where the normalization constant is given in equation ( [ normconst_nos ] ) , and where we have used the definition ( [ tildemag ] ) .this equation shows that in order to have a non - trivial solution , and can not both be equal to .we still have to satisfy equation ( [ tildemag ] ) as a self - consistent equation .multiplying equation ( [ solution_mu_0 ] ) by and by , we obtain self_mtildex _ x , n^(1 ) = _ x , n^(1 ) ( 1 - t - m_x^2 ) , + _ y , n^(1 ) = _ y , n^(1 ) _ n ( ) .self_mtildey the first of these equations is satisfied by , or by self_mtildex_b m_x = , that must be satisfied together with the self - consistent relation ( [ selfalcons_nos ] ) . in fig .[ sas - fig3 ] , we plot as a function of as determined by the self - consistent relation ( [ selfalcons_nos ] ) and by equation ( [ self_mtildex_b ] ) for .we see that there is no solution for .since the right hand side of equation ( [ self_mtildex_b ] ) decreases for decreasing , this also proves that there is no solution for any .therefore , the only solution of equation ( [ self_mtildex ] ) is .this requires that , and then equation ( [ self_mtildey ] ) becomes .this is verified only for .we have finally arrived at the conclusion that equation ( [ fokkerplanck_eq_decaying_nonunif_mu_0 ] ) admits a solution only for , and that this solution is unique and is given by _0(,0 ) = _ 0 ( ) , solution_mu_0_n0 with .this solution represents a global rotation of all oscillators , and is a neutral mode due to the global rotational invariance .the uniqueness of the mode associated with the zero eigenvalue assures that there are no secular terms with a linear growth , thus completing the proof of the linear stability of . 
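The self-consistent relation invoked in this stability argument (and plotted in fig. [sas-fig3]) can be made concrete with a small numerical example. The sketch below assumes that the coupling normalization used in the text reduces the s-independent, mean-field self-consistency to the familiar form m = I_1(m/T)/I_0(m/T), with I_n the modified Bessel functions of the first kind, so that the mean-field critical temperature would be 1/2; that normalization is an assumption of this sketch, and the fixed-point iteration is an implementation choice.

```python
import numpy as np
from scipy.special import iv   # modified Bessel functions of the first kind

def self_consistent_m(T, iters=2000):
    """Fixed-point iteration for the assumed mean-field relation m = I_1(m/T)/I_0(m/T)."""
    m = 0.9                    # start on the magnetized side
    for _ in range(iters):
        x = m / T
        m = iv(1, x) / iv(0, x)
    return m

if __name__ == "__main__":
    for T in (0.30, 0.40, 0.45, 0.60):
        print(f"T = {T:.2f}  m_st ~ {self_consistent_m(T):.4f}")
```

Runs below and above T = 1/2 then display the magnetized and unmagnetized branches, mirroring the role of the self-consistent magnetization in the zero-eigenvalue discussion above.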
as a function of as determined implicitly by the self - consistent relation ( [ selfalcons_nos ] ) and by equation ( [ self_mtildex_b ] ) with .the two curves do not intersect at any in the range , showing that there is no solution satisfying both relations.,width=377 ] alphahmf we will now be concerned with the model with inertia , that in the overdamped limit reduced to the model studied in the preceding subsection .the equations of motion are = v_i , + long - range - inertia + m= - v_i + _ j=1^n + _i(t ) , with the same definitions as before of and of the closest distance convention .we recall the statistical properties of the gaussian white noise : _i(t ) = 0 , _i(t)_j(t ) = 2 t _ ij(t - t ) .etatide - prop_bis the equations of motion ( [ long - range - inertia ] ) describe the evolution of the -hmf model , within a canonical ensemble . by performing the reduction to dimensionless quantities as in equations ( [ dmsless1])-([dmsless6 ] ) ,the equations of motion become = v_i , + long - range - inertia_dms + = - v_i + _ j=1^n + _i(t ) , where we have disregarded the overbars of the dimensionless quantities for notational convenience , and we have _i(t ) = 0 , _i(t)_j(t ) = 2 ( t/ ) _ ij(t - t ) .etatide - prop_dms the continuum limit of the dynamics is implemented in a manner analogous to that in preceding sections , by introducing the variable ], we first choose a time step size .next , we set as the -th time step of the dynamics , where , and . in the numerical scheme ,we first discard at every time step the effect of the noise ( i.e. , consider ) , and employ a fourth - order symplectic algorithm to integrate the resulting symplectic part of the dynamics . following this , we add the effect of noise , and implement an euler - like first - order algorithm to update the dynamical variables .specifically , one step of the scheme from to involves the following updates of the dynamical variables for : for the symplectic part , we have , for , ; \nonumber \\ & & r\big(t_n+\frac{(k-1)\delta t}{4}\big)=\sqrt{r_x^2+r_y^2},\psi\big(t_n+\frac{(k-1)\delta t}{4}\big)=\tan^{-1}\frac{r_y}{r_x } , \nonumber \\ & & r_x=\frac{1}{n}\sum_{j=1}^n \sin\big[\th_j\big(t_n+\frac{(k-1)\delta t}{4}\big)\big],r_y=\frac{1}{n}\sum_{j=1}^n \cos\big[\th_j\big(t_n+\frac{(k-1)\delta t}{4}\big)\big ] , \nonumber \\ \label{formalintegration1 } \\ & & \th_i\big(t_{n}+\frac{k\delta t}{4}\big)=\th_i\big(t_n+\frac{(k-1)\delta t}{4}\big)+a(k)\delta t ~v_i\big(t_n+\frac{k\delta t}{4}\big ) , \label{formalintegration2 } \end{aligned}\ ] ] where the constants s and s are obtained from ref . . at the end of the updates ( [ formalintegration1 ] ) and ( [ formalintegration2 ] ), we have the set .next , we include the effect of the stochastic noise by keeping s unchanged , but by updating s as +\sqrt{2\delta t\frac{t}{\sqrt{m}}}\delta x(t_{n+1 } ) .\label{formalintegration3}\ ] ] here is a gaussian distributed random number with zero mean and unit variance .app - fast_algo for the models discussed in section [ chap3 ] , the interaction term in the equation of motion for each of the oscillators involves a sum over terms .this would imply at each time step of numerical simulation of the dynamics a computation time that scales as . 
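Before turning to the fast evaluation of the interaction sum, here is a minimal sketch of the splitting strategy described in the integration appendix above: a noiseless update followed by the Euler-like stochastic kick of equation ([formalintegration3]). The review uses a fourth-order symplectic integrator with coefficients taken from the cited reference for the noiseless part; the placeholder below substitutes a plain Euler update only so that the sketch runs end to end, and the 1/sqrt(m) factor in the drift is this sketch's reading of the dimensionless equations of motion, not a statement of the original scheme.

```python
import numpy as np

def deterministic_step(theta, v, omega, sigma, m, dt):
    """Placeholder for the noiseless update (the review uses a 4th-order symplectic
    scheme here); a first-order Euler step is used purely for illustration."""
    z = np.exp(1j * theta).mean()
    r, psi = np.abs(z), np.angle(z)
    dv = (-v + r * np.sin(psi - theta) + sigma * omega) / np.sqrt(m)
    return theta + dt * v, v + dt * dv

def noise_kick(v, m, T, dt, rng):
    """Euler-like stochastic update: v <- v + sqrt(2 dt T / sqrt(m)) * N(0,1)."""
    return v + np.sqrt(2.0 * dt * T / np.sqrt(m)) * rng.standard_normal(v.size)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    N, m, T, sigma, dt = 200, 1.0, 0.1, 0.5, 0.01      # illustrative values only
    omega = rng.standard_normal(N)
    theta, v = rng.uniform(-np.pi, np.pi, N), np.zeros(N)
    for _ in range(10000):
        theta, v = deterministic_step(theta, v, omega, sigma, m, dt)
        v = noise_kick(v, m, T, dt, rng)
    print("r ~", round(abs(np.exp(1j * theta).mean()), 3))
```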
herewe discuss an alternative and efficient numerical algorithm that transforms the interaction term into a convenient form , allowing for its computation by a fast fourier transform ( fft ) scheme in a time scaling as .use of fft requires that we choose a power of for .let us denote with the sum appearing in the equations of motion ( [ kuramoto - eom_decaying ] ) , ( [ kuramoto - eom_decaying_noise ] ) and ( [ long - range - inertia_dms ] ) : j_i = _ j=1^n , eom1 where is the shortest distance between sites and on a one - dimensional periodic lattice of sites .our simulations results presented in section [ chap3 ] were obtained by considering the lattice constant to be unity .therefore , for is given by d_ij=\ { ll |j - i| ; & , + n-|j - i| ; & , .+ [ dij ] while , as explained in the main text , we choose the value of , irrelevant for the equations of motion , equal to . equation ( [ eom1 ] ) may be rewritten as j_i = _ i _ j=1^nv_ij_j - _ i _j=1^nv_ij _ j .eom2 the first summation may be interpreted as the element of the column vector formed by the product of an matrix {i , j=1,2,\ldots , n} ] , where f_jk = e^-i2(j-1)(k-1)/n for 1 j , k n. then , one has {ij}=\lambda_j \delta_{ij}$ ] . in terms of the matrices and , one can rewrite equation ( [ eom2 ] ) as j_i = _ i _ j=1^n ( f^-1)_ij_j ( f ) _ j -_i _ j=1^n ( f^-1)_ij_j ( f ) _ j , eom3 where ( respectively , ) is the element of the column vector formed by multiplying the matrix with the column vector ( respectively , ) . and are just discrete fourier transforms , and may be computed very efficiently by standard fft codes ( see , e.g. , ref .the simulations reported in section [ chap3 ] were performed by using equation ( [ eom3 ] ) .
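To make the construction concrete, the sketch below builds the first row of the circulant matrix V from the closest-distance kernel, computes the two products V sin(theta) and V cos(theta) with FFTs, and checks the result against the direct O(N^2) double sum of equation ([eom1]). The value of the exponent alpha and the omission of the overall normalization factor are placeholder choices of this sketch.

```python
import numpy as np

def kernel(N, alpha):
    """First row of the circulant matrix V: v_j = 1 / d_{1j}^alpha, with the
    closest-distance convention on the ring and the (irrelevant) self term set to 0."""
    j = np.arange(N)
    d = np.minimum(j, N - j).astype(float)
    v = np.zeros(N)
    v[1:] = 1.0 / d[1:]**alpha
    return v

def J_direct(theta, alpha):
    """Direct O(N^2) evaluation of J_i = sum_j sin(theta_j - theta_i) / d_ij^alpha."""
    N = theta.size
    diff = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
    d = np.minimum(diff, N - diff).astype(float)
    np.fill_diagonal(d, np.inf)                       # self term contributes nothing
    return np.sum(np.sin(theta[None, :] - theta[:, None]) / d**alpha, axis=1)

def J_fft(theta, alpha):
    """J_i = cos(theta_i) (V sin theta)_i - sin(theta_i) (V cos theta)_i, with each
    circulant matrix-vector product computed as ifft(fft(kernel) * fft(x))."""
    fk = np.fft.fft(kernel(theta.size, alpha))
    Vsin = np.fft.ifft(fk * np.fft.fft(np.sin(theta))).real
    Vcos = np.fft.ifft(fk * np.fft.fft(np.cos(theta))).real
    return np.cos(theta) * Vsin - np.sin(theta) * Vcos

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    theta = rng.uniform(-np.pi, np.pi, 1024)          # power of 2, as required by the FFT scheme
    print("max |direct - fft| =", np.max(np.abs(J_direct(theta, 0.5) - J_fft(theta, 0.5))))
```

For N a power of two the two evaluations agree to machine precision, while the FFT route scales as O(N log N) instead of O(N^2).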
the phenomenon of spontaneous synchronization , particularly within the framework of the kuramoto model , has been a subject of intense research over the years . the model comprises oscillators with distributed natural frequencies interacting through a mean - field coupling , and serves as a paradigm to study synchronization . in this review , we put forward a general framework in which we discuss in a unified way known results with more recent developments obtained for a generalized kuramoto model that includes inertial effects and noise . we describe the model from a different perspective , highlighting the long - range nature of the interaction between the oscillators , and emphasizing the equilibrium and out - of - equilibrium aspects of its dynamics from a statistical physics point of view . in this review , we first introduce the model and discuss both for the noiseless and noisy dynamics and for unimodal frequency distributions the synchronization transition that occurs in the stationary state . we then introduce the generalized model , and analyze its dynamics using tools from statistical mechanics . in particular , we discuss its synchronization phase diagram for unimodal frequency distributions . next , we describe deviations from the mean - field setting of the kuramoto model . to this end , we consider the generalized kuramoto dynamics on a one - dimensional periodic lattice on the sites of which the oscillators reside and interact with one another with a coupling that decays as an inverse power - law of their separation along the lattice . for two specific cases , namely , in the absence of noise and inertia , and in the case when the natural frequencies are the same for all the oscillators , we discuss how the long - time transition to synchrony is governed by the dynamics of the mean - field mode ( zero fourier mode ) of the spatial distribution of the oscillator phases . keywords : stochastic particle dynamics ( theory ) , stationary states , phase diagrams ( theory )
the development of strategies to control the dynamic of a viral spread in a population is a central problem in public health and network security . in particular , how to control the traffic between subpopulations in the case of an epidemic outbreak is of critical importance . in this paper, we analyze the problem of controlling the spread of a disease in a population by regulating the traffic between subpopulations .the dynamic of the spread depends on both the characteristics of the subpopulation , as well as the structure and parameters of the transportation infrastructure .our work is based on a recently proposed variant of the popular sis epidemic model to the case of populations interacting through a network .we extend this model to a metapopulation framework in which large subpopulations ( i.e. , cities ) are represented as nodes in a metagraph whose links represent the transportation infrastructure connecting them ( i.e. , roads ) .we propose an extension of the susceptible - infected - susceptible ( sis ) viral propagation model to metapopulations using stochastic blockmodels .the stochastic blockmodel is a complex network model with well - defined random communities ( or blocks ) .we model each subpopulation as a random regular graph and the interaction between subpopulations using random bipartite graphs connecting adjacent subpopulations .the main advantage of our approach is that we can find the optimal traffic among subpopulations to control a viral outbreak solving a standard form convex semidefinite program .in this section we introduce some graph - theoretical nomenclature and the dynamic spreading model under consideration .let denote an undirected graph with nodes , edges , and no self - loops .we denote by the set of nodes and by the set of undirected edges of .the number of neighbors of is called the _ degree _ of node i , denoted by . a graph with all the nodes having the same degreeis called regular .the adjacency matrix of an undirected graph , denoted by ] .the infection probability of an individual at node at time is denoted by .let us assume , for now , that the viral spreading is characterized by the infection and curing rates , and , .hence , the linearized n - intertwined sis model in is described by the following differential equation : where , , and . concerning the non - homogeneous epidemic model, we have the following result : [ prop : heterogeneous sis stability condition]consider the heterogeneous n - intertwined sis epidemic model in ( [ eq : heterosis ] ) .then , if an initial infection ^{n} ] , and ,\end{aligned}\ ] ] according to proposition [ prop : heterogeneous sis stability condition ] , a small initial infection dies out exponentially fast if the largest eigenvalue of is strictly negative . in what follows ,we study the largest eigenvalue of in terms of metapopulation parameters .notice that is a random matrix , since its blocks represent random graphs . to analyze the largest eigenvalue of this random matrix , we make use of the following spectral concentration result from : consider the random matrix , then , almost surely , where is the expectation of .\ ] ] and is the largest eigenvalue under study . or equivalently , where ] , and . 
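The die-out condition derived above is straightforward to check numerically once the metapopulation matrices are assembled. The sketch below does so for a hypothetical three-city example: diag(beta) A_bar - diag(delta) plays the role of the expected infection-minus-curing matrix, and the epidemic is declared to die out at rate eps when the largest real part of its eigenvalues is at most -eps. The numerical values of beta, delta, and A_bar are invented for illustration and are not the stochastic-blockmodel expectations constructed in the text.

```python
import numpy as np

def dies_out(beta, delta, A_bar, eps=0.0):
    """Check the linearized-SIS condition: max Re eig( diag(beta) A_bar - diag(delta) ) <= -eps."""
    M = np.diag(beta) @ A_bar - np.diag(delta)
    lam = np.max(np.linalg.eigvals(M).real)
    return lam, lam <= -eps

if __name__ == "__main__":
    # hypothetical 3-city example: expected contacts within and between cities
    A_bar = np.array([[8.0, 1.0, 0.5],
                      [1.0, 6.0, 2.0],
                      [0.5, 2.0, 7.0]])
    beta  = np.array([0.05, 0.04, 0.06])    # per-contact infection rates
    delta = np.array([0.60, 0.50, 0.70])    # curing rates
    lam, ok = dies_out(beta, delta, A_bar, eps=0.05)
    print(f"largest eigenvalue = {lam:.3f}  ->  dies out at rate 0.05: {ok}")
```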
therefore , we can approximate the largest eigenvalue of the matrix ( where is the number of individuals in the population ) , using the largest eigenvalue of the matrix ( where is the number of subpopulations in the model ) .hence , the condition under which the epidemic is guaranteed die out at rate if is satisfied .in the following section we use this result to find an optimal distribution of traffic between subpopulations in order to contain the epidemic spread .we assume that ( [ die ] ) is not satisfied without implementing a travel restriction policy .we define the travel restriction policy as $ ] where with cost convex .the problem travel restriction problem is formally stated as where , since we do not consider permanent relocation between cities , we assume that . the following lemma states the condition on the model parameters under which the travel restriction problem is feasible . [ feas ]there exists a set the constraints in ( [ problem ] ) if for all cities . included in proof of theorem [ sdp ] the constraint in lemma [ feas ] is equivalent to the condition that in each individual city the virus would die out at a rate with no intercity connections .the virus can not be forced to die out in the whole system by controlling intercity connections if it can persist in any city in isolation .[ sdp ] the traffic restriction problem given in ( [ problem ] ) is equivalent to the standard form semidefinite program the eigenvalue constraint in ( [ problem ] ) is equivalent to because can be expressed with any basis .multiplying by the positive definite matrix preserves the sign of the largest eigenvalue so we can express the relation as since is a symmetric matrix , we can express ( [ sym ] ) as the semidefinite constraint given in ( [ sdp ] ) , completing the proof of theorem [ sdp ] . to prove lemma [ feas ], we construct a feasible point satisfying the equivalent constraint ( [ sym ] ) and the box constraint .let from ( [ feascond ] ) , . applying the triangle inequality choosing where guarantees that and that , completing the proof .standard form semidefinite programs are solvable in polynomial time via convex optimization methods therefore , a central authority can set traffic limits on all cities in order to guarantee the epidemic dies out at rate while minimizing the cost .in many cases it may not be possible to compute or implement a centralized policy . if we suppose the costs are incurred locally by each city directly effected by the restriction , then we can compute a heuristic local solution by first having each city manager solve then coordinating with neighboring cities allowing traffic [ heur ] the local heuristic solution defined in ( [ local1 ] ) and ( [ local2 ] ) yields a feasible solution to the traffic restriction problem , ( [ problem ] ) .equation ( [ local2 ] ) guarantees that is symmetric and . 
combining with the first constraint in ( [ local1 ] ) , the box constraint in ( [ local1 ] ) guarantees that .consider the matrix whose diagonal entries are strictly negative according to lemma [ feas ] .equation ( [ ddom ] ) guarantees that the matrix ( [ mat ] ) is diagonally dominant .theorem 6.1.10 from guarantees that the matrix in ( [ mat ] ) is negative semidefinite , satisfying the eigenvalue constraint and completing the proof .the heuristic solution proposed in ( [ local1 ] ) and ( [ local2 ] ) is local in the sense that traffic restrictions on the edges in the intercity network can be computed by each city solving the optimal restrictions for the edges connecting them to other cities .two cities will not necessarily compute the same optimal restriction so the minimum of the two values is used .this is consistent with our model because all travel assumed to be is round trip , thus the realized traffic can be at most the minimum of the traffic allowed by the two cities involved .theorem [ heur ] formally guarantees that this local method yields a solution that causes the virus to die out at at least rate .the fraction of people who are infected is driven to 0 in every city via traffic control .the cost incurred by the local heuristic is about 33% greater than the optimal cost . ]we demonstrate the relative performance of our centralized and local solutions using a sample problem with randomly generated parameters .we choose the cost function because it is convex and satisfies ( [ sep ] ) .furthermore , ( [ cost ] ) intuitively captures the cost of restricting traffic on each edge because there is no cost when traffic is unrestricted ( i.e. ) but the cost tends to as .it is not possible to completely delete an intercity connection .figure [ sim ] shows that with no traffic restrictions all cities go to 100% infection rate while both the optimal solution and local heuristic force the virus to die out .the heuristic solution incurs a higher total cost but also forces the epidemic to die out faster .figure [ net ] shows the network of cities and the traffic restrictions on the edges .points representing cities are scaled proportional to their populations .edges are scaled proportional to the the unrestricted traffic .the color of each edge is linearly scaled from green ( ) to red ( ) .it significant to note that despite the local approach , the restrictions imposed are very similar to the optimal case .more traffic restrictions are required by the heuristic solution , however there is a strong correlation between which edges are restricted by the heuristic solution and by the optimal solution . ]we have proposed a convex framework to contain the propagation of an epidemic outbreak in a metapopulation model by controlling the traffic between subpopulations . in this context , controlling the spread of an epidemic outbreak can be written as a spectral condition involving the eigenvalues of a matrix that depends on the network structure and the parameters of the model . based on our spectral condition, we can find cost - optimal approaches to traffic control in epidemic outbreaks by solving an efficient semidefinite program. 1 r.m .anderson and r.m .may , _ infectious diseases of humans : dynamics and control _ , oxford university press , 1991.[27 ] i. hanski and m. gilpin , metepopulation biology : ecology , genetics and evolution ( academic press , san diego , 1997 ) .
we propose a novel framework to study viral spreading processes in metapopulation models . large subpopulations ( i.e. , cities ) are connected via metalinks ( i.e. , roads ) according to a metagraph structure ( i.e. , the traffic infrastructure ) . we consider the problem of containing the propagation of an epidemic outbreak in a metapopulation model by controlling the traffic between subpopulations . the condition for containing the spread of an epidemic outbreak can be written as a spectral condition involving the eigenvalues of a matrix that depends on the network structure and the parameters of the model . based on this spectral condition , we propose a convex optimization framework to find cost - optimal approaches to traffic control in epidemic outbreaks .
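a minimal sketch of the kind of semidefinite program described above can be written with the cvxpy modeling package; the symmetric stand - in matrix diag(a) + w, the per - city constants a_i (each city assumed subcritical in isolation), the traffic bounds and the quadratic cost are all illustrative assumptions and differ from the exact matrix and the example cost used in the paper.

# sketch: cost-optimal traffic reduction as a semidefinite program.
# diag(a) + w below is a simplified, symmetric stand-in for the linearized
# metapopulation matrix, and the quadratic cost replaces the paper's
# example cost; both are assumptions made only for illustration.
import numpy as np
import cvxpy as cp

n = 4
a = np.array([-2.0, -1.5, -3.0, -2.5])       # assumed: each city subcritical in isolation
wbar = np.array([[0, 3, 1, 0],               # unrestricted intercity traffic (assumed)
                 [3, 0, 2, 2],
                 [1, 2, 0, 1],
                 [0, 2, 1, 0]], dtype=float)
eps = 0.1                                    # required die-out margin (assumed)

w = cp.Variable((n, n), symmetric=True)
constraints = [w >= 0, w <= wbar, cp.diag(w) == 0,
               -(np.diag(a) + w) >> eps * np.eye(n)]   # largest eigenvalue <= -eps
cost = cp.sum_squares(wbar - w)              # zero cost when traffic is unrestricted
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("optimal restricted traffic:\n", np.round(w.value, 2))

a locally computable alternative, in the spirit of the heuristic above, would have each city solve a smaller program of the same form and take the elementwise minimum of the traffic values proposed by the two cities sharing each road .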
the essence of radiotherapy with protons and heavier ions lies in precise control of incident particles that are designed to stop in tumor volume .the targeting precision will be inevitably deteriorated by multiple scattering in beam modifiers and patient body and such effects must be accurately handled for dose calculations in treatment planning .on the other hand , simplicity and efficiency are also essential in clinical practice and there has always been need for a computational method that balances all these demanding and conflicting requirements .fermi and then eyges ( 1948 ) developed a general theory for charged particles that undergo energy loss and multiple scattering in matter .a group of particles is approximated as a gaussian beam growing in space with statistical variances where is the longitudinal position and and are the projected particle position and angle .the original fermi - eyges theory adopted purely gaussian approximation ( rossi and greisen 1948 ) with ( projected ) scattering power where mev is a constant , is the radiation length of the material , and , , and are the charge , the momentum , and the velocity of the particles .the fermi rossi formula totally ignores effects of large - angle single scattering ( hanson 1951 ) and was found to be inaccurate ( wong 1990 ) . based on formulations by highland ( 1975 , 1979 ) and gottschalk ( 1993 ) , kanematsu ( 2008b )proposed a scattering power with correction for the single - scattering effect , although within the gaussian approximation , where is the radiative path length .although it would be difficult to calculate integrals because of the embedded integral in the terms , kanematsu ( 2008b ) further derived an approximate formula for the rms displacement of incident ions at the end point in homogeneous matter as where is the ion mass in units of the proton mass , is the expected in - water range on the incidence , is the stopping - power ratio of the matter relative to water , and and are constants . 
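the fermi - eyges moments can be evaluated numerically for any scattering power, which is convenient when comparing models; the sketch below is illustrative only, and the placeholder scattering power used in the demonstration is an assumed shape rather than the fermi - rossi or extended highland formula.

# sketch: numerical fermi-eyges moments for an arbitrary scattering power t(z).
import numpy as np

def fermi_eyges_moments(t_of_z, z, nstep=2000):
    """return (a0, a1, a2): angular variance, angle-position covariance and
    position variance at depth z, from a0 = int t dz', a1 = int (z-z') t dz',
    a2 = int (z-z')**2 t dz'."""
    zp = np.linspace(0.0, z, nstep)
    t = t_of_z(zp)
    a0 = np.trapz(t, zp)
    a1 = np.trapz((z - zp) * t, zp)
    a2 = np.trapz((z - zp) ** 2 * t, zp)
    return a0, a1, a2

# placeholder scattering power that grows toward the end of range r0 (assumed shape)
r0 = 20.0                                    # incident range in cm (assumed)
t_demo = lambda zp: 1.0e-3 / np.maximum(r0 - zp, 1.0e-3)

a0, a1, a2 = fermi_eyges_moments(t_demo, 15.0)
print("theta_rms =", np.sqrt(a0), " x_rms =", np.sqrt(a2))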
despite the complex involvement of variable in , kanematsu ( 2008b ) found the relation for ions in water to be very linear .in fact , preston and kohler of harvard cyclotron laboratory knew the linear relation and derived universal curve with for relative growth of rms displacement in homogeneous matter in an unpublished work in 1968 .starting with the empirical linear relation , this work is aimed to develop a simple and general multiple - scattering model to improve efficiency of numerical heterogeneity handling and to enable further analytical beam modeling .linear approximation for homogeneous systems greatly simplifies to where cm is the radiation length of water , is the scattering / stopping ratio of the material relative to water , and is the geometrical range .was calibrated to for water at .equation at the end point associates and as to lead to another scattering power where is the particle - type - dependent factor .the scattering power is inherently applicable to any heterogeneous system by numerical integral of .we examined these fermi - rossi , extended highland , and linear - displacement models with unpublished measurements by phillips ( hollmark 2004 ) , those by preston and kohler ( kanematsu 2008b ) , and molire - hanson calculations by deasy ( 1998 ) .we took growths of rms displacement with depth for cm protons , cm helium ions , and cm carbon ions in water and rms end - point displacements for them with varied incident range .for a point mono - directional ion beam with in - water range incident into homogeneous matter with constant and , equations are analytically integrated to as a function of residual range at distance .in fact reduces to the universal curve by preston and kohler .a radiation field at a given position can be effectively or virtually modeled with a source , ignoring the matter ( icru-35 1984 ) .the effective extended source is at , where would be minimum in vacuum .the virtual point source is at , from which radiating particles would form a field of equivalent divergence .similarly , the effective scattering point is at ^{1/2}$ ] , at which a point - like scattering would cause equivalent rms angle and displacement ( gottschalk 1993 ) .with depth and ( b ) rms end - point displacement with incident range for protons , helium ions , and carbon ions in water , calculated with the fermi - rossi ( dashed ) , extended highland ( solid ) , linear - displacement ( dotted overlapping with the solid ) models along with a molire - hanson calculation by deasy ( ) and measurements by phillips ( , , ) and by preston and kohler ( ) .( c ) end - point lateral shape of a proton pencil beam in water ( cm ) measured by preston and kohler ( ) and estimated by fermi - rossi ( dashed ) and the present ( solid ) formulations . 
]\(a ) shows rms - displacement growths and ( b ) shows rms end - point displacements .the present model was virtually identical to the extended highland model with deviations from measurements or molire - hanson calculations within either 2% or 0.02 cm , while the fermi - rossi model overestimated the rms displacements by nearly 10% .( c ) shows an end - point shape of a pencil beam measured by preston and kohler for cm protons in water and curves estimated by fermi - rossi and the present formulations with additional to for incident beam emittance in their experiment .the gaussian approximation was in fact adequate in this case .\(a ) shows analytical variances , , and growing with depth for protons in water .( b ) shows the relative distances from current position to the effective extended source , the virtual point source , and the effective scattering point .they approach , , and at the limit .increase of the scattering power with depth moves these points relatively closer to the current position .in application of bragg peaks , the end - point displacement is the most important , for which the gaussian approximation was valid in the proton experiment by preston and kohler .in fact , the single - scattering effect is theoretically small for a thick target ( hanson 1951 ) .although ions suffer nuclear interactions with resultant fragments that generally scatter at large angles ( matsufuji 2005 ) , their contributions may be relatively less significant at the bragg peaks .the present formulation would be thus adequate for radiotherapy . the linear - displacement model with the fermi - eyges theory has brought general formulas for ions , including the universal curve intuitively derived by preston and kohler without explicit formulation of the scattering power .in the present model , the kinematic properties are encapsulated in residual range that is always tracked in beam transport . the ion - type dependence in scattering angle and displacement is simply as proportional to , which leads to 50% for , 28% for , and 24% for with respect to that of protons for a given incident range .these numbers coincide with detailed numerical calculations by hollmark ( 2004 ) .the linearity between end - point displacement and range observed for water is the basis of the present model .its validity for general body - tissue materials is not obvious .we here examine water and two extreme elements hydrogen ( cm ) and calcium ( cm ) among major elements of body tissues ( icru-46 1992 ) , using in the extended highland model .shows their relations with geometrical scale correction and indicates that the linearity will generally hold for body - tissue elements .however , the elemental linearity may not truly warrant the validity for systems heterogeneous in atomic compositions .the scattering power only depends on the residual range that is irrelevant to the multiple scattering accumulated in the other upstream materials , whereas the accumulation should influence the single - scattering effect ( kanematsu 2008b ) . in other words ,the present model implicitly assumes heterogeneity in density only . 
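as an independent illustration of the linear - displacement idea, the relative growth of the rms displacement implied by a scattering power proportional to the reciprocal residual range can be worked out in closed form and cross - checked numerically; the derivation below is a sketch made for this purpose and is not copied from the paper.

# sketch of the universal relative-growth curve implied by a scattering power
# proportional to 1/(residual range), with t = depth / incident range.
import numpy as np

def relative_rms_growth(t):
    """rms displacement at fractional depth t divided by its end-point value,
    for a scattering power proportional to 1/(r0 - z) in the fermi-eyges
    position variance (closed form derived for this sketch)."""
    a = 1.0 - t                     # fractional residual range
    num = (1.0 - a**2) - 4.0 * a * (1.0 - a) + 2.0 * a**2 * np.log(1.0 / np.maximum(a, 1e-12))
    return np.sqrt(np.maximum(num, 0.0))

def relative_rms_growth_numeric(t, nstep=20000):
    """direct numerical integration of int (t - z')**2 / (1 - z') dz' as a check."""
    zp = np.linspace(0.0, t, nstep, endpoint=False)
    var = np.trapz((t - zp) ** 2 / (1.0 - zp), zp)
    return np.sqrt(var / 0.5)       # end-point variance is 1/2 in these units

for t in (0.25, 0.5, 0.75, 1.0):
    print(t, relative_rms_growth(t), relative_rms_growth_numeric(t))

at the end point the ratio is 1 by construction, and near the surface it grows like (2/3)(z / r0)**3, the cubic growth expected when the scattering power is nearly constant.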
in the current practice of treatment planning, the patient heterogeneity is normally modeled with variable - density water ( kanematsu 2003 ) , for which the present model is rigorous with further simplified scattering power the simplicity will minimize computation of the integrands in path integrals for demanding dose calculations ( kanematsu 1998 , 2006 , 2008a ) .a novel multiple - scattering model has been formulated based on the fact such that the rms end - point displacement is proportional to the incident range in water .the model was designed to be equivalent with the extended highland model for stopping ions in water and agreed with measurements within 2% or 0.02 cm in rms displacement .the resultant scattering - power formula that is only inversely proportional to residual range is much simpler than former formulations and can be used in the framework of the fermi - eyges theory for gaussian - beam transport in tissue - like matter .the simplicity enables analytical beam modeling for homogeneous systems and improves efficiency of numerical path integrals for heterogeneous systems .the present model is ideal for demanding dose calculations in treatment planning of heavy - charged - particle radiotherapy .hollmark m , uhrdin j , d b , gudowska i and brahme a 2004 influence of multiple scattering and energy loss straggling on the absorbed dose distributions of therapeutic light ion beams : i. analytical pencil beam model 324765 kanematsu n , akagi t , futami y , higashi a , kanai t , matsufuji n , tomura h and yamashita h 1998 a proton dose calculation code for treatment planning based on the pencil beam algorithm _ jpnphys . _ * 18 * 88103 matsufuji n , komori m , sasaki h , akiu k , ogawa m , fukumura a , urakabe e , inaniwa t , nishio t , kohno t and kanai t 2005 spatial fragment distribution from a therapeutic pencil - like carbon beam in water 3393403
dose calculation for radiotherapy with protons and heavier ions requires a large number of path integrals involving the scattering power of body tissue . this work provides a simple model for such demanding applications . there is an approximate linearity between the rms end - point displacement and the range of incident particles in water , found empirically in measurements and detailed calculations . this fact was translated into a simple linear formula , from which a scattering power that is simply inversely proportional to the residual range was derived . the simplicity enabled analytical formulation for ions stopping in water , which was designed to be equivalent to the extended highland model and agreed with measurements within 2% or 0.02 cm in rms displacement . the simplicity will also improve the efficiency of numerical path integrals in the presence of heterogeneity .
an important topic of interest , in modeling of saturated flow through porous media , is the analysis of the phenomenon when there is a microstructure present in the rock matrix .some of the achievements in the preexisting literature address the case when the microstructure is periodic ; see for the analytical approach and for the numerical point of view . from a different perspective , in is presented the homogenization analysis for a non - periodic fissured system where the geometry of the cracks surface satisfies -smoothness hypotheses .however , none of these mathematical analysis accomplishments , takes in consideration a fractal geometric structure of porous media , which is an important case due to the remarkable evidence of this fact ; in particular in the authors found that pore space and pore interface have fractal features .also , see for pore structure characterization , including random growth models .see for fractal geometry results in tracing experiments within a porous medium including dispersion , fingering and percolation .finally , discusses the use of fractal surfaces in modeling the storage phenomenon in gas reservoirs .this paper concentrates on finding adequate fluid transmission conditions for flow in porous media with fractal interface , as well as the well - posedness of the corresponding weak variational formulations .our goal is to blend " the modeling of porous media flow with the fractal roughness of the microstructure .we differ from the previously mentioned achievements since we replace the geometric feature of periodicity by that of self - similarity and explore , numerically , the effect of some randomness in the fractal geometry . on the other hand , the present study has a very different approach to the analysis on fractals from the preexisting literature , given that the mainstream pde analysis on fractals concentrates its efforts in solving strong forms on the fractal domain , the analysis of the associated eigenvalues and eigenfunctions , or the study of the adequate function spaces . in order to start understanding the key features of the phenomenon , we limit the study to the 1-d setting , defining the domain of analysis as .additionally , we will use and adjustment of the classic stationary diffusion problem below , in order to introduce the fluid exchange transmission conditions across the fractal interface here stands for the pressure , indicates the flux according to darcy s law and denotes de permeability , which will be set as throughout this work .in addition , we set dirichlet and neumann boundary conditions on the extremes of the interval .we use classical notation and results on function spaces and indicate with standard letters the functions on these spaces .we adopt the letters , to denote the microstructure and its elements respectively , in particular , the following classical hilbert space will be frequently used [ eqn hilbert space on fractal ] endowed with its natural inner product in the next section , we introduce the geometry of the fractal interface , together with the adequate mathematical setting in order to include it successfully in the pde model . throughout this work we limit to a particular type of fractal microstructure ,first we need to introduce a previous definition and a related result ( see ) let be a closed set a. a function is said to be a * contraction * if there exists a constant such that for all .additionally , we say that is a * similarity * if for all and is said to be its * ratio*. b. 
a finite family of contractions , on with is said to be an * iterated function system * or * ifs*. c. a non - empty compact subset is said to be an * attractor * of the ifs if in particular , if every contraction of the ifs is a similarity then , the attractor is said to be a * strictly self - similar * set .consider the ifs given by the contractions on then , there is a unique attractor satisfying the identity .see _ theorem _ 9.1 .[ rem the attractor ] in this work , the attractor of an ifs will prove to be important in an indirect way : not for the definition of a microstructure , but for analyzing the nature of the attained conclusions .it is a well - known fact that under certain conditions ( see , _ lemma _ 9.2 ) a strictly self - similar set has both , hausdorff and box dimensions which are equal ( see , _ theorem _ 9.3 ) , namely . moreover , if is the ratio of the similarity , then from now on , we limit our attention to fractal structures satisfying strict self - similarity .finally , we introduce the type of microstructure to be studied in throughout this work .[ def fractal microstructure ] we say that a set ] , satisfying the following conditions a. the set is finite and is recursively defined by b. the sequence of sets is monotonically increasing and in the following , we refer to as a **-finite development * * of .[ rem properties of the sigma finite development ] let ] .in particular for all , consequently and .moreover , since the above holds for any convergent subsequence of it follows that the whole sequence is weakly convergent i.e. , from here , the convergence statement follows trivially .[ rem partial interface development functionals ] it is important to observe that the weak convergence of the functionals in hypothesis is a stronger condition than the _ statement _ , since it can not be claimed that the fractal trace operator , is surjective .moreover , it will be shown that in most of the interesting cases this operator is not surjective .consider the variational problem with microstructure interface where , and .we claim that the solution of the problem above is the weak limit i.e. , _ problem _ is the * limit * " of _ problems _ .[ th well - posedness fractal interface problem ] the _ problem _ is well - posed .consider the bilinear form using the cauchy - schwartz inequality in each summand of the expression above , we conclude the continuity of the bilinear form . on the other hand ,it is direct to see that , which implies that the bilinear form is -elliptic . applying the lax - milgram theorem ,the result follows , see .now we are ready to prove the weak convergence of the whole sequence of solutions of _ problems _ to the solution of problem .[ th weak convergence of the solutions in v ] let be the sequence of solutions of _ problems _ , then , it converges weakly in to the unique solution of _ problem _ .since is bounded in there must exist a weakly convergent subsequence and a limit .test with arbitrary , this gives letting in the expression above and , in view of _ theorem _ [ th weak convergence of the whole sequence ] , we get additionally , _[ th weak convergence of the whole sequence ] implies that . 
therefore is a solution to _ problem _ , which is unique due to _ theorem _ [ th well - posedness fractal interface problem ]; therefore we conclude that .since the above holds for any weakly convergent subsequence of it follows that the whole sequence must converge weakly to .[ th norms convergence ] let be the sequence of solutions of _ problems _ then [ stmt strong convergence ] { } 0 .\ ] ] { } 0 .\ ] ] where is the fractal trace operator on .we know that weakly in and weakly in from _ theorems _ [ th weak convergence of the solutions in v ] and [ th weak convergence of the whole sequence ] respectively , therefore on the other hand , testing on the diagonal , we get letting in the expression above we get hence from here , a simple exercise of real sequences shows that both sequences and converge . combining these facts with _ inequality _ we have therefore , if or , the equality above could not be possible .then , it holds that and finally , the convergence of norms together with the weak convergence on the underlying spaces , imply the strong convergence of both sequences .we start this section proving a lemma which is central in the understanding of the space .[ th microstructure accumulation points ] let be a microstructure in ] then .due to _ lemma _ [ th microstructure accumulation points ] if it must hold that for every accumulation point of .since is dense in every point of is an accumulation point of , therefore for all . on the other hand, is absolutely continuous because it belongs to and due to the density of in ] be a dense microstructure then , the _ problem _ becomes trivial and the sequence satisfies the facts presented in _ lemma _[ th microstructure accumulation points ] and _ corollary _ [ th dense microstructure trivializes ] are unfortunate , since several important fractal microstructures are dense in ] but it is a perfect set ( which is also an important case ) the _ problem _ , without becoming trivial , becomes fully decoupled and consequently uninteresting .an important example of the first case are the dyadic numbers in ] be a fractal microstructure with as given in _ definition _ [ def fractal microstructure ] .then , a storage coefficient is said to * scale consistently * with a given -finite development , if it satisfies [ th scaled storage coefficient convergence ] let be a storage coefficient consistently scaled with then .since is the -finite development of with , the cardinality _ identity _ implies since , as stated in _ definition _ [ def scaled storage coefficient ] , the expression above is convergent and the result follows .it is immediate to see some variations of _ definition _ [ def scaled storage coefficient ] based on the geometric series properties , for instance if is a sequence such that {\alpha_{n } } < \frac{1}{l} ] , with .it is also assumed that the storage coefficient scales consistently with the fractal microstructure and that the forcing term is summable .[ th well posedness good coefficient problems ] the following problems are well - posed . consider the bilinear forms [ def bilinear forms scaled coefficient ] since , the _ inequality _ is satisfied and the bilinear forms , for each are continuous ; consequently the bilinear forms are also continuous . on the other hand , since the coefficient is non - negative the bilinear forms are both coercive on . applying the lax - milgram theorem the result follows .let be the sequence of solutions of problems and let be the solution to problem .then a. converges weakly to in .b. 
the following convergence statements hold + [ stmt convergence interface functionals ] { } \sum_{\b \ , \in \ , \b } \beta(\b)\ , p^{2 } ( \b).\ ] ] { } \sum_{\b \ , \in \ , \b } f(\b)\ , p ( \b).\ ] ] c. converges strongly to in .a. the result follows using the techniques presented in _ theorem _ [ th boundedness of the sequence of solutions ] yield the boundedness of the sequence and using the reasoning of _ theorem _ [ th weak convergence of the solutions in v ] the weak convergence follows .b. due to the weak convergence of the solutions and the fact that the evaluation is a continuous linear functional , it follows that and for all .let be such that and for all .fix such that implies , which we know to exist since .then , for any it holds that since is fixed choose such that implies hence , combining with the previous estimate , the convergence _statement _ follows . for the _ statement _it is enough to combine the strong convergence of the forcing terms {}0 ] be a microstructure and let be its set of limit points. then , _ problem _ is a weak formulation of the following strong problem .[ pblm strong microstructure good coefficients ] together with the interface fluid transmission conditions for isolated points of and the non - localizable fluid transmission conditions for limit points of the boundary condition holds because .let , then , there exists such that for all .problem _ with to get since the above holds for each smooth function on , we conclude that in .similarly , it follows that in , therefore and the statement follows .now , choose and test _ problem _ to get next , in the expression above , we break the interval conveniently in order to use integration by parts from the initial part of the proof , the first summand of the left hand side cancels with the first two summands of the right hand side .evaluating the boundary terms and recalling that this gives consequently , the normal flux balance condition in follows for points .the normal stress balance condition in follows from the continuity of across the interface which holds for any function in . the normal stress balance in _ statement _ holds due to the continuity of at any point of .now , fix and test the _ problem _ with , where ; this gives taking limits in the expression above and recalling the lebesgue dominated convergence theorem the result follows .notice that , by definition of distributions , the following equality holds in the sense of distribution for the solution of _ problem _ where is the dirac evaluation functional .it is easy to observe that if is a limit point of , both series in the expression above will pick up infinitely many non - null terms for test functions such that .therefore , the techniques used in _ theorem _ [ th pseudo strong form ] to derive point - wise statements from weak variational ones , do not apply .again , this fact is unfortunate because , as pointed out at the end of _ section _ [ sec a sigma finite interface microstructure ] , several important fractal microstructures are dense in ] is the collection of extremes of the removed intervals in the construction of the cantor set .in addition , its -finite development is the natural one i.e. , for the experiments it will be shown that the sequences are cauchy ; where indicates de stage of the cantor set construction . 
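to make this construction concrete before the experiments are described, the following sketch generates the stages of the cantor - set microstructure and assembles a piecewise - linear finite - element system with point storage terms at the interface points; it is not the authors' code, and the permeability, boundary conditions, scaling factor and point forcing are illustrative assumptions (in particular, all stage - n points are given the same weight here, whereas the scaled model weights points by the generation at which they appear).

# illustrative sketch: cantor-set microstructure stages and a p1 finite-element
# assembly for the 1-d interface problem with point storage terms.
# permeability = 1, dirichlet at x=0, neumann at x=1; scaling and forcing
# choices below are assumptions made only for this sketch.
import numpy as np

def cantor_stage(n):
    """endpoints of the removed middle thirds up to stage n (the set b_n)."""
    intervals = [(0.0, 1.0)]
    points = set()
    for _ in range(n):
        new = []
        for (a, b) in intervals:
            l, r = a + (b - a) / 3.0, b - (b - a) / 3.0
            points.update((l, r))
            new += [(a, l), (r, b)]
        intervals = new
    return np.array(sorted(points))

def solve_stage(n, nelem=3**6, beta0=1.0, scale=0.45, f_iface=1.0):
    """p1 fem for -u'' = 0 on (0,1) with point terms at each b in b_n."""
    x = np.linspace(0.0, 1.0, nelem + 1)
    h = x[1] - x[0]
    k = np.zeros((nelem + 1, nelem + 1))
    rhs = np.zeros(nelem + 1)
    for e in range(nelem):                       # standard stiffness assembly
        k[e:e + 2, e:e + 2] += np.array([[1, -1], [-1, 1]]) / h
    b_n = cantor_stage(n)
    beta = beta0 * scale ** n                    # 2**n new points per stage, scale < 1/2
    for b in b_n:                                # lump each interface point onto nearest node
        i = int(round(b / h))
        k[i, i] += beta
        rhs[i] += f_iface * beta                 # point source scaled like the storage term (assumption)
    k[0, :] = 0.0; k[0, 0] = 1.0; rhs[0] = 0.0   # dirichlet u(0)=0; natural neumann at x=1
    return x, np.linalg.solve(k, rhs)

for n in (2, 4, 6):
    x, u = solve_stage(n)
    print("stage", n, " u(1) =", u[-1])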
in all the cases ,the computations are made for the stages .for each example we present graphics for values of an -stage chosen from , based on optical neatness ; two or three graphs for the solution and the graphs of the corresponding derivative .also , for visual purposes we introduce vertical lines in the graph of the derivatives , to highlight the fractal structure of the function .for all the examples we set the forcing term equal to zero and he interface forcing term , is given by the following expression [ ex basic example ] in the first example we implement the model studied in _[ sec a sigma finite interface microstructure ] , in order to verify its conclusions .in particular , it must hold that .we set the storage coefficient and the forcing terms , defined in above .the convergence table is displayed below , together with the corresponding graphics ._ example _ [ ex basic example ] : convergence table , . [ cols="^,^,^,^,^,^",options="header " , ] in the table below the and -norms for the difference of the partial averages and the cesro average are displayed .clearly convergence is observed in both cases and , due to the law of large numbers , better behavior is observed at the stage over the stage , as expected . the average and varianceare also presented at the bottom of the table , which shows better behavior for the more developed stage over the stage not only in terms of centrality , but also in terms of deviation . + a. the authors tried to find , experimentally , a rate of convergence using the well - know estimate the sampling was made on the sequence of stages .experiments were run for _ examples _ [ ex basic example ] and [ ex scaled example ] however , in none of the cases , solid numerical evidence was detected that could suggest an order of convergence for the phenomenon .b. additional experiments for the random behavior _ example _[ ex random behavior example ] were executed .the probabilistic variations involved were a. experiments for the stages and . b. experiments with different number of realizations , namely , and , depending on the computational time demanded .c. using a storage coefficient , with different intervals and with uniform distribution normal distribution .d. probabilistic variations of the forcing terms .e. combinations of one or more of the previous factors .f. execution of the aforementioned probabilistic variations adjusted to the scaled _[ ex scaled example ] .+ in all the cases , convergence behavior was observed as expected . naturally , the quality of convergence deteriorates depending on the deviation of the distributions and the combination of uncertainty factors introduced in the experiment . c. all the previously mentioned scenarios were also executed in the same code for the case of homogeneous dirichlet boundary conditions on both ends i.e. , .as expected convergence behavior is observed which is comparable with the corresponding analogous version setting the boundary conditions , , used along the analysis of this paper .the present work yields several accomplishments as well as limitations listed below .a. the unscaled storage model presented in _ section _[ sec a sigma finite interface microstructure ] is in general not adequate , as it excludes most of the important cases of fractals .b. 
the scaled storage model presented in _ section _[ dec fractal scaling modeling ] is suitable for an important number of fractal microstructures .however , the asymptotic variational model is equivalent to a pointwise strong model only if the closure of the microstructure is negligible i.e. , if it has null lebesgue measure .such hypothesis excludes important cases e.g. the family of fat " cantor sets , see in 1-d or the fat " versions of the classic fractal structures in 2-d or 3-d , where only the averaged normal flux " _ statement _ can be concluded for the accumulation points of the microstructure . c. in order to overcomethe deficiency previously mentioned a first approach would be to take the traditional treatment of solving strong forms in fractal domains ( e.g. ) and then try to blend " it with the point of view presented here .such analysis is to be pursued in future research .d. the fractal microstructures addressed in this work are self - similar .this requirement is important only for the second model ( _ section _ [ dec fractal scaling modeling ] ) because it scales the storage according to the geometric detail of the structure at every level . in particular , we need an accurate idea of the growth rate of the microstructure from one level to the next , in order to scale the storage adequately .e. the self - similarity requirement for the microstructure in the introduction of the scaled model excludes the important family of the self - affine fractals ; this type of microstructures is a topic for future work . on the other hand , the unscaled model of _ section _ [sec a sigma finite interface microstructure ] does not require such detailed knowledge of the microstructure precisely because it avoids scaling , although it is likely to be unsuited for the analysis of self - affine microstructures it may be a good starting point in order to detect the needs of the modeling for this case .f. it is important to observe the relevance of self - similarity versus the fractal dimension in the scaled model .while the self - similarity of the microstructure is the corner stone of the scaling ( hence it can not be given up ) the model is more flexible with respect to the fractal dimension of as long as it is not the same of the host " domain .g. we also stress that the input needed by the present result , in terms of geometric information on the fractal structure , does not have to be as detailed as in the strong forms pde analysis on fractals . h. the random experiments presented in _ example _ [ ex random behavior example ] , as well as those only mentioned in _ subsection _ [ sec closing observations numerical ] , furnish solid numerical evidence of good behavior for probabilistic versions of the unscaled and the scaled models respectively .additionally , it is important to handle certain level of uncertainty because , assuming to have a deterministic description of the fractal microstructure is a too strong hypothesis to be applicable in realistic scenarios . in the authors opinion ,this is justification enough to pursue rigorous analysis of these problems to be addressed in future work .i. in _ example _ [ ex random behavior example ] uncertainty was introduced in the storage coefficient , or the forcing term , however the geometry of the microstructure was never randomized . in several works ( e.g. 
) the self - similarity is replaced by the concept of statistical self - similarity in the sense that scaling of small parts have the same statistical distribution as the whole set .clearly , this random property is consistent with the scaling of in _ definition _ [ def scaled storage coefficient ] ; consequently , the statistical self - similarity is a future line of research for handling geometric uncertainty of the fractal microstructure . in particular , the fractal percolation microstructures ,are of special interest for real world applications , see .j. the execution of all the numerical experiments shows that the code becomes unstable beyond the 9th stage of the cantor set construction .this suggests that in order to overcome these issues , an adaptation of the fem method has to be developed , targeted to the microstructure of interest .this aspect is to be analyzed in future research .k. the study of fractal microstructures in 2-d and 3-d are necessary for practical applications and in higher dimensions for theoretical purposes . given that passing from one dimension to two or more dimensions increases significantly the level of complexity in the microstructure and in the equation , considerable challenges are to be expected in this future line of research .l. another important analysis to be developed is the study of the models both , scaled and unscaled , in the mixed - mixed variational formulation introduced in . on one hand , this approach allows great flexibility for the underlying spaces of velocity and pressure which can constitute an advantage with respect to the treatment presented here . on the other hand , the mixed - mixed variational formulation allows modeling fluid exchange conditions across the interface of greater generality than , used in the present work ; this fact can contribute significantly to the development of the field .the authors wish to acknowledge universidad nacional de colombia , sede medelln for its support in this work through the project hermes 27798 .the authors also wish to thank professor magorzata peszyska from oregon state university , for authorizing the use of code * fem1d.m * in the implementation of the numerical experiments presented in _ section _ [ sec numerical experiments ] .it is a tool of remarkable quality , efficiency and versatility which has contributed significantly to this work .
we seek suitable exchange conditions for saturated fluid flow in a porous medium , where the interface of interest is a fractal microstructure embedded in the porous matrix . two different deterministic models are introduced and rigorously analyzed . numerical experiments for each of them are presented to verify the theoretically predicted behavior of the phenomenon , and some probabilistic versions are explored numerically to gain further insight . keywords : coupled pde systems , fractal interface , porous media . msc : 35q35 , 37f99 , 76s05
there have been many studies of dynamical predator - prey systems that simulate biological processes .particularly interesting early work was done by bell , who showed that the immune response can be modeled quite effectively by such systems . in bells work the time evolution of competing concentrations of one antigen and one antibody is studied .the current paper shows what happens if we combine two antibody - antigen subsystems _ in a -symmetric fashion _ to make an immune system in which there are _ two _ antibodies and _ two antigens_. an unexpected conclusion is that even if one antigen is lethal ( because the antigen concentration grows out of bounds ) , the introduction of a _ second _ antigen can stabilize the concentrations of _ both _ antigens , and thus save the life of the host .introducing a second antigen may actually drive the concentration of the lethal antigen to zero .we say that a classical dynamical system is _ symmetric _ if the equations describing the system remain invariant under combined space reflection and time reversal .classical -symmetric systems have a typical generic structure ; they consist of two coupled identical subsystems , one having gain and the other having loss .such systems are symmetric because under space reflection the systems with loss and with gain are interchanged while under time reversal loss and gain are again interchanged .systems having symmetry typically exhibit two different characteristic behaviors .if the two subsystems are coupled sufficiently strongly , then the gain in one subsystem can be balanced by the loss in the other and thus the total system can be in equilibrium . in this casethe system is said to be in an _ unbroken _ -symmetric phase .( one visible indication that a linear system is in an unbroken phase is that it exhibits rabi oscillations in which energy oscillates between the two subsystems . )however , if the subsystems are weakly coupled , the amplitude in the subsystem with gain grows while the amplitude in the subsystem with loss decays .such a system is not in equilibrium and is said to be in a _ broken _ -symmetric phase .interestingly , if the subsystems are very strongly coupled , it may also be in a broken -symmetric phase because one subsystem tends to drag the other subsystem . a simple linear -symmetric system that exhibits a phase transition from weak to moderate coupling and a second transition from moderate to strong coupling consists of a pair of coupled oscillators , one with damping and the other with antidamping .such a system is described by the pair of linear differential equations this system is invariant under combined parity reflection , which interchanges and , and time reversal , which replaces with .theoretical and experimental studies of such a system may be found in refs . . for an investigation of a -symmetric system of many coupled oscillatorssee ref .experimental studies of -symmetric systems may be found in refs . 
.it is equally easy to find physical nonlinear -symmetric physical systems .for example , consider a solution containing the oxidizing reagent potassium permanganate and a reducing agent such as oxalic acid .the reaction of these reagents is self - catalyzing because the presence of manganous ions increases the speed of the reaction .the chemical reaction in the presence of oxalic acid is thus , if is the concentration of permanganate ions and is the concentration of manganous ions , then the rate equation is where is the rate constant .this system is invariant , where exchanges and and replaces with . for this system , the symmetry is always broken ; the system is not in equilibrium .the volterra ( predator - prey ) equations are a slightly more complicated -symmetric nonlinear system : this system is oscillatory and thus we say that the symmetry is unbroken .these equations are discussed in ref .. a nonlinear -symmetric system of equations that exhibits a phase transition between broken and unbroken regions may be found in ref . . in analyzing elementary systems like that in ( [ e1 ] ) , which are described by constant - coefficient differential equations ,the usual procedure is to make the _ ansatz _ and .this reduces the system of differential equations to a polynomial equation for the frequency .we then associate unbroken ( or broken ) phases with real ( or complex ) frequencies .if is real , the solutions to both equations are oscillatory and remain bounded , and this indicates that the physical system is in dynamic equilibrium .however , if is complex , the solutions grow or decay exponentially with , which indicates that the system is not in equilibrium . for more complicated nonlinear -symmetric dynamical systems ,we still say that the system is in a phase of broken symmetry if the solutions grow or decay with time or approach a limit as because the system is not in dynamic equilibrium .in contrast , if the variables oscillate and remain bounded as increases we say that the system is in a phase of unbroken symmetry .however , in this case the time dependence of the variables is unlikely to be periodic ; such systems usually exhibit _ almost periodic _ or _chaotic _ behavior . to illustrate these possibilitieswe construct a more elaborate -symmetric system of nonlinear equations by combining a two - dimensional dynamical subsystem whose trajectories are _ inspirals _ with another two - dimensional dynamical subsystem whose trajectories are _outspirals_. for example , consider the subsystem this system has two saddle points and one stable spiral point , as shown in fig .[ f1 ] ( left panel ) .plane for the dynamical subsystem ( [ e4 ] ) with .the initial conditions are .right panel : an outspiral trajectory for ( [ e5 ] ) in the plane with .the initial conditions are . in the left panel from to and in the right panel ranges from to . ] next , we consider the reflection ( , , ) of the subsystem in ( [ e4 ] ) : the trajectories of this system are outspirals , as shown in fig . [ f1 ] ( right panel ) . the time evolution of the four dynamical variables in fig .[ f1 ] , and , and , is shown in fig .[ f2 ] . .the four variables , , , and are plotted as functions of . 
]let us now couple the two subsystems in ( [ e4 ] ) and ( [ e5 ] ) in such a way that the symmetry is preserved .the resulting dynamical system obeys the nonlinear equations in which and are the coupling parameters .this system has a wide range of possible behaviors .for example , for the parametric values , , and and the initial conditions we can see from figs .[ f3 ] and [ f4 ] that the system is in a broken--symmetric phase .-symmetric system ( [ e6 ] ) in a broken--symmetric phase , as indicated by the outspiral behavior in the ] planes .the parametric values are , , and and the initial conditions are . in these plots ranges from to . ] , , , for the parametric values and initial conditions shown in fig .[ f3 ] . ] when the coupling parameters are chosen so that the system ( [ e6 ] ) is in a phase of unbroken symmetry , the initial conditions determine whether the behavior is chaotic or almost periodic .for example , for the same parametric values , , and the system in ( [ e6 ] ) is in an unbroken--symmetric phase .two qualitatively different behaviors of unbroken symmetry are illustrated in figs .[ f5 ] , [ f6 ] , [ f78 ] and [ f9 ] , [ f10 ] , and [ f1112 ] .the first three figures display the system in two states of chaotic equilibrium and the next three show the system in two states of almost - periodic equilibrium .the poincar plots in figs .[ f5 ] and [ f6 ] ( left panels ) and figs . [ f9 ] and [ f10 ] ( left panels ) distinguish between chaotic and almost periodic behavior . ) in a phase of chaotic unbroken symmetry .the parametric values are , , and the initial conditions are . left panel : poincar plot of versus when .the two - dimensional scatter of dots indicates that the system is chaotic . in this plot from to .right panel : a plot of versus for ranging from to . ] symmetry .the parametric values and the ranges of are the same as in fig .[ f5 ] , but the initial conditions are now . ] plotted as a function of time .the chaotic behavior can be seen as the uneven oscillations .these oscillations are reminiscent of a trajectory under the influence of a pair of strange attractors.,title="fig : " ] + plotted as a function of time . the chaotic behavior can be seen as the uneven oscillations .these oscillations are reminiscent of a trajectory under the influence of a pair of strange attractors.,title="fig : " ] ) in a state of unbroken symmetry .the parametric values and the time ranges are the same as in fig .[ f5 ] , but the initial conditions are .the presence of one - dimensional _ islands _ in the poincar plot ( left panel ) shows that the time evolution of the system is almost periodic . ] ) in a different state of unbroken symmetry .the parametric values and the ranges of are the same as in fig .[ f9 ] , but the initial conditions are . the poincar plot ( left panel ) again shows that the time evolution of the system is almost periodic . ]plotted as a function of time .the almost periodic behavior is particularly evident in the graphs on the left , where the oscillations are quite regular.,title="fig : " ] + plotted as a function of time .the almost periodic behavior is particularly evident in the graphs on the left , where the oscillations are quite regular.,title="fig : " ] the choice of coupling parameters usually ( but not always ) determines whether the system is in an unbroken or a broken -symmetric phase . to demonstrate this, we take and examine the time evolution for roughly 11,000 values of the parameters and . 
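since the explicit right - hand sides of the subsystems and of the coupled system are not reproduced above, the sketch below instead classifies broken and unbroken phases for the linear dimer of coupled oscillators with balanced loss and gain, written in an assumed standard form; there the phase can be read directly from the eigenvalues of the first - order matrix, the symmetry being unbroken exactly when every eigenvalue is purely imaginary.

# sketch: phase classification for a linear loss/gain dimer taken in the
# assumed standard form
#   x'' + 2*gam*x' + w0**2 * x = eps*y ,   y'' - 2*gam*y' + w0**2 * y = eps*x .
# unbroken phase <=> all eigenvalues of the first-order matrix are purely imaginary.
import numpy as np

def phase(gam, eps, w0=1.0, tol=1e-6):
    a = np.array([[0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0],
                  [-w0**2, eps, -2.0 * gam, 0.0],
                  [eps, -w0**2, 0.0, 2.0 * gam]])
    lam = np.linalg.eigvals(a)
    return "unbroken" if np.max(np.abs(lam.real)) < tol else "broken"

gam = 0.2
for eps in (0.1, 0.5, 1.2):      # weak, moderate, and overly strong coupling
    print("eps =", eps, "->", phase(gam, eps))
# expected: broken for eps below 2*gam*sqrt(w0**2 - gam**2) (about 0.39 here),
# unbroken up to eps = w0**2 = 1, and broken again beyond that.

for the parameter values printed, the weak and the overly strong couplings give broken phases while the intermediate one is unbroken, reproducing the two transitions described above for the linear system .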
figure [ f13 ]indicates the values of and for which the system is in a broken or an unbroken ( chaotic or almost periodic ) phase .plane for the system ( [ e6 ] ) with the parametric value .the initial conditions are .the dots correspond to parametric values in the region of broken symmetry , and the white space corresponds to the region of unbroken symmetry .the edges of the regions are not completely sharp ; it can be difficult to determine the precise location of the boundary curves separating broken and unbroken regions because this requires integrating for extremely long times . ]having summarized the possible behaviors of coupled -symmetric dynamical subsystems , in sec . [ s2 ]we construct and examine in detail a -symmetric dynamical model of an antigen - antibody system containing _ two _ antigens and _ two _ antibodies .this system is similar in structure to that in ( [ e6 ] ) .we show that in the unbroken region the concentrations of antigens and antibodies generally become chaotic and we interpret this as a chronic infection .however , in the unbroken regions there are two possibilities ; either the antigen concentration grows out of bounds ( the host dies ) or else the antigen concentration falls to zero ( the disease is completely cured ) .some concluding remarks are given in sec .infecting an animal with bacteria , foreign cells , or virus may produce an immune response .the foreign material provoking the response is called an _ antigen _ and the immune responseis characterized by the production of _ antibodies _ , which are molecules that bind specifically to the antigen and cause its destruction .the time - dependent immune response to a replicating antigen may be treated as a dynamical system with interacting populations of the antigen , the antibodies , and the cells that are involved in the production of antibodies .a detailed description of such an immune response would be extremely complicated so in this paper we consider a simplified mathematical model of the immune response proposed by bell .bell s paper introduces a simple model in which the multiplication of antigen and antibodies is assumed to be governed by lokta - volterra - type equations , where the antigen plays the role of prey and the antibody plays the role of predator .while such a model may be an unrealistic simulation of an actual immune response , bell argues that this mathematical approach gives a useful qualitative and quantitative description .following bell s paper we take the variable to represent the concentration of antibody and the variable to represent the concentration of antigen at time . assuming that the system has an unlimited capability of antibody production , bell s dynamical model describes the time dependence of antigen and antibody concentrations by the differential equations according to ( [ e7 ] ) , the antigen concentration increases at a constant rate if the antibody is are not present .as soon as antigens are bound to antibodies , the antibodies start being eliminated at the constant rate .analogously , the concentration of antibody decays with constant rate in the absence of antigens , while binding of antigens to antibodies stimulates the production of antibody with constant rate .the functions and denote the concentrations of bound antibodies and bound antigens . assuming that , an approximate expression for the concentration of bound antigens and antibodies is where is called an _association constant_. 
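the sketch below integrates one assumed reading of this two - component model, with the bound - complex concentration approximated by a saturating form; the functional form and every rate constant are illustrative guesses, since the explicit symbols of the equations are not reproduced above, so the sketch should be read as a qualitative stand - in rather than the model itself.

# sketch: an assumed reading of bell's antigen-antibody model, with the bound
# concentration approximated by a saturating form; all rate constants and the
# binding constant k are illustrative, not values from the paper.
import numpy as np
from scipy.integrate import solve_ivp

def bell_rhs(t, y, r_g=1.0, e_g=2.0, d_a=1.0, p_a=2.5, k=1.0):
    g, a = y                                    # antigen, antibody concentrations
    bound = k * a * g / (1.0 + k * a + k * g)   # assumed bound-complex approximation
    dg = r_g * g - e_g * bound                  # antigen grows, is removed when bound
    da = -d_a * a + p_a * bound                 # antibody decays, is stimulated by binding
    return [dg, da]

sol = solve_ivp(bell_rhs, (0.0, 40.0), [0.1, 0.1], dense_output=True, max_step=0.05)
g_end, a_end = sol.y[:, -1]
print("final antigen =", g_end, " final antibody =", a_end)
# depending on the rates, trajectories spiral in toward a fixed point, spiral
# outward, or grow monotonically, mirroring the regimes listed for the scaled system.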
with the scalings and and the change of variable ^{-1},\ ] ] system ( [ e7 ] ) becomes the system ( [ e9 ] ) exhibits four different behaviors : * if , there is unbounded monotonic growth of antigen . *if and , there is an outspiral ( oscillating growth of antigen ) . *if and , there is an inspiral ( the antigen approaches a limiting value in an oscillatory fashion ) . *if and , the system exhibits exactly periodic oscillations .this behavior is unusual in a nonlinear system and indeed ( [ e6 ] ) does not exhibit exact periodic behavior .subsequent to bell s paper there have been many studies that use two - dimensional dynamical models to examine the antigen - antibody interaction .however , in this paper we construct a _four_-dimensional model consisting of two antigens and two antibodies .let us assume that an antigen attacks an organism and that the immune response consists of creating antibodies as described by ( [ e7 ] ) .however , we suppose that the organism has a second system of antibodies and antigens .this second subsystem plays the role of a -symmetric partner of the system , where parity interchanges the antibody with the antigen and the antigen with the antibody , and time reversal makes the replacement .the time evolution of this new antibody - antigen system is regulated by the equations we assume that the interaction between antibody and antigen is controlled by the same constant as in ( [ e8 ] ) .we assume that because antibodies may have many possible binding sites , can also bind to antigen and that antibody can also bind to antigen .moreover , for this model we assume that we can scale the dynamical variables so that this interaction is the same as the interaction and .this means that after the scaling , , , and , the dynamical behavior of the total system is described by the production of the antibody is stimulated by the presence of the antigen .the terms involving the parameter describe the production of additional antibodies and additional elimination of antigens .similarly , terms describe the production of new antibodies and additional elimination of antigens .we remark that the system ( [ e11 ] ) with can be derived from the hamiltonian that is , it can be recovered from the hamilton equations where figure [ f14 ] displays a phase diagram of the -symmetric model in ( [ e11 ] ) , where we have taken , , and . in this figure a portion of the plane is shown and the regions of broken and unbroken symmetry are indicated .unbroken--symmetric regions are indicated as hyphens ( blue online ) .there are two kinds of broken--symmetric regions ; x s ( red online ) indicate solutions that grow out of bounds and o s ( green online ) indicate solutions for which the concentration of antigen approaches 0. portion of the coupling - parameter plane for the -symmetric immune - response system ( [ e11 ] ) showing regions of broken and unbroken symmetry .we take as initial conditions and ; that is , we assume that the disease associated with antigen - antibody 1 is well established and that at a very small amount of antigen - antibody 2 is injected .points in the unbroken region are indicated as hyphens ( blue ) . in this regionthe concentrations are all oscillatory in time . 
in general , depending on the initial conditions , the solutions can be either almost periodic or chaotic . however , as shown in fig . [ f18 ] , the solutions to ( [ e11 ] ) are chaotic . thus , in this region the introduction of antigen - antibody 2 makes the potentially lethal infection chronic . the regions whose points are indicated as o s ( green ) and x s ( red ) have broken symmetry . in the x regions the solutions oscillate and grow out of bounds . in the o regions and vanish and and approach small finite values as . thus , in the x regions the host dies , but in the o regions the disease due to antigen is completely cured . ] figure [ f15 ] shows that the organism does not survive if the second antibody - antigen pair is not initially present . in this figure we take but we take . ) is described by ( [ e7 ] ) because we take and thus and remain 0 for all . we have taken , , and . the initial conditions are . ] figure [ f16 ] shows what happens in a broken--symmetric phase when the organism does not survive . we take and , which puts us in the lower - left corner of fig . [ f14 ] . the initial conditions are and . note that the level of the antigen grows out of bounds . -symmetric phase in the lower - left corner of fig . [ f14 ] ; specifically and . the organism does not survive the antigen attack . the antigen - antibody dynamics is described by ( [ e11 ] ) , where , , , , and the initial conditions are the same as in fig . [ f14 ] . ] figure [ f17 ] shows what happens in the unbroken region in fig . [ f14 ] . the organism survives but the disease becomes chaotically chronic . and , which is in the unbroken- phase in the lower - left portion of fig . [ f14 ] . the antigen - antibody dynamics is described by ( [ e11 ] ) , where , , , , and the initial conditions are the same as in fig . [ f14 ] . the concentrations of antigens and antibodies behave chaotically in time . ] figure [ f18 ] demonstrates the chaotic behavior at a point in the upper - right unbroken- portion of fig . [ f14 ] , specifically at and . the figure shows a poincaré map in the plane for . ) , where , , , , and the initial conditions are the same as in fig . [ f14 ] . in this plot and , which places the system in the unbroken phase in the upper - right corner of fig . [ f14 ] . the dynamical behavior is chaotic and the disease becomes chronic , as implied by the poincaré map in which trajectory points are plotted in the plane for . the scatter of points indicates chaotic behavior . the time interval for the plot is from to . ] figure [ f19 ] shows what happens in the broken- region in the lower - right corner of fig . [ f14 ] at and . in this region the antigen completely disappears and the disease is cured . and in the broken- region in the lower - right corner of fig . [ f14 ] . in this plot , , , , and the initial conditions are the same as in fig . [ f14 ] . note that the antigen concentration decays to zero and the disease is cured . the concentration of antigen approaches a small nonzero value as and , as was noted in ref . , this value is so small that we regard it as negligible . ] in this paper we have extended bell s two - dimensional predator - prey model of an immune response to a four - dimensional -symmetric model and have examined the outcomes in the broken- and the unbroken--symmetric phases . we have found that in the unbroken phase the disease becomes chronic ( oscillating ) while in the broken phase the host may die or be completely cured . in bell s model ( ref .
) an oscillating regime is assumed to be a transitory state and that either the antigen is completely eliminated at an antigen minimum or the host dies at an antigen maximum . however , there are many examples in which the immune system undergoes temporal oscillations ( occurring in pathogen load in populations of specific cell types , or in concentrations of signaling molecules such as cytokines ) .some well known examples are the periodic recurrence of a malaria infection , familial mediterranean fever , or cyclic neutropenia .it is not understood whether these oscillations represent some kind of pathology or if they are part of the normal functioning of the immune system , so they are generally regarded as aberrations and are largely ignored .a discussion of immune system oscillation can be found in ref .additional chaotic oscillatory diseases such as chronic salmonella , hepatitis b , herpes simplex , and autoimmune diseases such as multiple sclerosis , crohn s disease , and fibrosarcoma are discussed in ref . . in ref . it is not possible to completely eliminate the antigen , that is , to make the antigen concentration go to zero .however , it is possible to reduce the antigen concentration to a very low level , perhaps corresponding to less than one antigen unit per host , which one can interpret as complete elimination .however , we will see that in the -symmetric model ( [ e11 ] ) the antigen can actually approach 0 in the broken phase . in ref . it is stated that the predicted oscillations of increasing amplitude should be viewed with caution .such oscillations are predicted to involve successively lower antibody minima , which in reality may not occur .however , in ref . a modified two - dimensional predator - prey model for the dynamics of lymphocytes and tumor cells is considered .this model seems to reproduce all known states for a tumor . for certain parameters the system evolves towards a state of uncontrollable tumor growth andexhibits the same time evolution as that of and in figs .[ f15 ] and [ f16 ] . for other parametersthe system evolves in an oscillatory fashion towards a controllable mass ( a time - independent limit ) of malignant cells . in this casethe temporal evolution is the same as that of and in fig .[ f19 ] . in ref . this state is called a _dormant _ state .it is also worth mentioning that in ref .two_-dimensional dynamical system describing the immune response to a virus is considered ; this model can exhibit periodic solutions , solutions that converge to a fixed point , and solutions that have chaotic oscillations .ordinarily , a two - dimensional dynamical system can not have chaotic trajectories butthe novelty in this system is that there is a time delay .finally , we acknowledge that it is not easy to select reasonable parameters if one considers the application of bell s model to real biological systems . in the -symmetric model it is also difficult to make realistic estimates of relevant parameters .nevertheless , we believe that some of the qualitative features described in this paper may also be seen in actual biological systems . we thank m. rucco and f. castiglione for helpful discussions on the functioning of the immune system .cmb thanks the doe for partial financial support and mg thanks the fondazione angelo della riccia for financial support . c. m. bender , m. gianfreda , b. peng , s. k. ozdemir , and l. yang , phys .a * 88 * , 062111 ( 2013 ) .b. peng , s. k. ozdemir , f. lei , f. monifi , m. gianfreda , g. l. long , s. fan , f. nori , c. 
m. bender , l. yang , nat .* 10 * , 394 ( 2014 ) . c. m. bender , m. gianfreda , and s. p. klevansky , phys .a * 90 * , 022114 ( 2014 ) .j. rubinstein , p. sternberg , and q. ma , phys .lett . * 99 * , 167003 ( 2007 ) .a. guo , g. j. salamo , d. duchesne , r. morandotti , m. volatier - ravat , v. aimez , g. a. siviloglou , and d. n. christodoulides , phys . rev. lett . * 103 * , 093902 ( 2009 ) . c. e. rter , k. g. makris , r. el - ganainy , d. n. christodoulides , m. segev , and d. kip , nat .phys . * 6 * , 192 - 195 ( 2010 ) . k. f. zhao , m. schaden , and z. wu , phys .a * 81 * , 042903 ( 2010 ) .z. lin , h. ramezani , t. eichelkraut , t. kottos , h. cao , and d. n. christodoulides , phys .lett . * 106 * , 213901 ( 2011 ) . l. feng , m. ayache , j. huang , y .- l .xu , m. h. lu , y. f. chen , y. fainman , and a. scherer , science * 333 * , 729 ( 2011 ) .s. bittner , b. dietz , u. gnther , h. l. harney , m. miski - oglu , a. richter , and f. schfer , phys . rev .lett . * 108 * , 024101 ( 2012 ) .n. chtchelkatchev , a. golubov , t. baturina , and v. vinokur , phys .* 109 * , 150405 ( 2012 ) .c. zheng , l. hao , and g. l. long , phil .. r. soc .a * 371 * , 20120053 ( 2013 ) .j. schindler , a. li , m. c. zheng , f. m. ellis , and t. kottos , phys .a * 84 * , 040101(r ) ( 2011 ) . c. m. bender , d. d. holm , and d. w. hook , j. physics a : math .* 40 * , f793 ( 2007 ) .i. v. barashenkov and m. gianfreda , j. phys .a : math . theor . * 47 * , 282001(ftc ) ( 2014 ) .a. s. perelson and g. weisbuch , rev .mod . phys .* 69*,1219 ( 1997 ). m. plank , j. math .36 * , 7 ( 1995 ) .r. s. desowitz , _ the malaria capers : more tales of parasites and people , research and reality _( norton , new york , 1991 ) ; c. m. poser and g. w. bruyn , _ an illustrated history of malaria _ ( parthenon , new york , 1999 ) .
the study of -symmetric physical systems began in 1998 as a complex generalization of conventional quantum mechanics , but starting in 2007 experiments were published in which the predicted phase transition was clearly observed in classical rather than in quantum - mechanical systems . this paper examines the phase transition in mathematical models of antigen - antibody systems . a surprising conclusion that can be drawn from these models is that a possible way to treat a serious disease in which the antigen concentration is growing out of bounds ( and the host will die ) is to inject a small dose of a _ second _ ( different ) antigen . in this case there are two possible favorable outcomes : in the unbroken--symmetric phase the disease becomes chronic and is no longer lethal , while in the appropriate broken--symmetric phase the concentration of lethal antigen goes to zero and the disease is completely cured .
a number of studies over the past ten years have estimated the inter - galactic void probability function and investigated its departure from randomness .the basic random model is that arising from a poisson process of mean density galaxies per unit volume in a large box . then , in a _ given region of volume _ the probability of finding exactly galaxies is so the probability that the given region is devoid of galaxies is then it follows that the probability density function for the continuous random variable in the poisson case is for comparison with observations , the approximation fails for very large since a finite volume box is involved in any catalogue .a hierachy of -point correlation functions needed to represent clustering of galaxies in a complete sense was devised by white and he provided explicit formulae , including their continuous limit .in particular , he made a detailed study of the probability that a sphere of radius is empty and showed that formally it is symmetrically dependent on the whole hierarchy of correlation functions .however , white concentrated his applications on the case when the underlying galaxy distribution was a poisson process , the starting point for the present approach which is concerned with geometrizing the parameter space of departures from a poisson process .we choose a family of parametric statistical models that includes ( [ negexp ] ) as a special case .there are of course many such families , but we take one that has been successful in modelling void size distributions in terrestrial stochastic porous media and has been used in the representation of clustering of galaxies .the family of gamma distributions has event space parameters and probability density functions given by then and and we see that controls the mean of the distribution while the spread and shape is controlled by , the square of the coefficient of variation .the special case corresponds to the situation when represents the random or poisson process in ( [ negexp ] ) with thus , the family of gamma distributions can model a range of stochastic processes corresponding to non - independent ` clumped ' events , for and dispersed events , for as well as the random case ( cf .thus , if we think of this range of processes as corresponding to the possible distributions of centroids of extended objects such as galaxies that are initially distributed according to a poisson process with then the three possibilities are : chaotic or random structure : : with no interactions among constituents , clustered structure : : arising from mutually attractive interactions , dispersed structure : : arising from mutually repulsive interactions , figure [ fgamma ] shows a family of gamma distributions , all of unit mean , with ( 300,220)(0,0 ) ( 80,0 ) ( 45,177) ( 125,160) ( 180,70) ( 310,50) ( 300,-7)void volume shannon s information theoretic ` entropy ' or ` uncertainty ' for such stochastic processes ( cf .jaynes ) is given , up to a factor , by the negative of the expectation of the logarithm of the probability density function ( [ feq ] ) , that is in particular , at unit mean , the maximum entropy ( or maximum uncertainty ) occurs at which is the random case , and then the ` maximum likelihood ' estimates of can be expressed in terms of the mean and mean logarithm of a set of independent observations these estimates are obtained in terms of the properties of by maximizing the ` log - likelihood ' function with the following result where and is the digamma function , the logarithmic derivative 
of the gamma function the usual riemannian information metric on the parameter space is given by for more details about the geometry see .the 1-dimensional subspace parametrized by corresponds to the available ` random ' processes .a path through the parameter space of gamma models determines a curve \rightarrow { \cal{s } } : t\mapsto ( c_1(t),c_2(t))\ ] ] with tangent vector and norm given via ( [ gammametric ] ) by the information length of the curve is and the curve corresponding to an underlying poisson process has so and and the information length is as we know from elementary geometry , arc length is often difficult to evaluate analytically because it contains the square root of the sum of squares of derivatives .accordingly , we sometimes use the ` energy ' of the curve instead of length for comparison between nearby curves . energy is given by integrating the _ square _ of the norm of so in the case of the curve the energy is it is easily shown that a curve of constant has where and this has energy locally , minimal paths joining nearby pairs of points in are given by the autoparallel curves or geodesics defined by ( [ gammametric ] ) .some typical sprays of geodesics emanating from various points are provided in .the gaussian curvature of the surface actually controls all of the geometry of geodesics and it is given by a general account of large - scale structures in the universe , see fairall .kauffmann and fairall developed a catalogue search algorithm for nearly spherical regions devoid of bright galaxies and obtained a spectrum for diameters of significant voids .this had a peak in the range 8 - 11 a long tail stretching at least to 64 and is compatible with the recent extrapolation models of baccigalupi et al which yield an upper bound on void diameters of about 100 we shall return to the data of kauffmann and fairall later in this section .simulations of sahni et al . found strong correlation between void sizes and primordial gravitational potential at void centres ; void topologies tended to simplify with time .ghigna et al . found in their simulations that void statistics are sensitive to the passage from cdm to chdm models .this suggested that the void distribution is sensitive to the type of dark matter but not to the transfer function between types .chdm simulations gave a void probability in excess of observations , cdm simulations being somewhat better .vogeley et al . compared void statistics with cdm simulations of a range of cosmological models ; good agreement was achieved for samples of very bright galaxies ( ) but for samples containing fainter galaxies the predicted voids were reported to be ` too empty ' .ghigna et al . compared observational data with gaussian - initiated -body simulations in a box and found that at the scales the void probability for chdm was significantly larger than observed .ghigna et al . 
compared simulated galaxy samples with the perseus - pisces redshift survey .the void probability function did discriminate between dm and cdm models , the former giving particularly good agreement with the survey .little and weinberg used similar -body simulations , and found that the void probability was insensitive to the shape of the initial power spectrum .watson and rowan - robinson found that standard cdm predictors do yield reasonably good void probability function estimates whereas voronoi foam models performed less well .lachize - rey and dacosta were of the opinion that the available samples of galaxies were insufficiently representative , because of greater apparent frequency of larger voids in the southern hemisphere .bernardeau started from a gaussian field and derived an expression for the void probability function , obtaining with the variance of the number of galaxies in volume this distribution has an extended large tail because it is asymptotically more like the exponential of than the poisson case which decays like the exponential of cappi et al . examined the dependence of the void probability function on scale for a range of galaxy cluster samples , finding a general scaling to occur up to void diameters of about kerscher et al . used the void probability function to obtain spatial statistics of clusters on scales obtaining satisfactory agreement in a model with a cosmological constant and in a model with breaking of scale invariance of perturbations .( 300,220)(0,0 ) ( 80,0 ) ( 43,160) ( 300,-7)void diameter for our model we consider the diameter of a spherical void with volume having distribution ( [ feq ] ) .something close to the random variable has direct representation in some theoretical models , for example as polyhedral diameters in voronoi tesselations .( 300,220)(0,0 ) ( 50,0 ) the probability density function for is given by then the mean variance and coefficient of variation of are given , respectively , by the fact that the coefficient of variation ( [ cvd ] ) depends only on gives a rapid fitting of data to ( [ dpdf ] ) .numerical fitting to ( [ cvd ] ) gives this substituted in ( [ meand ] ) yields an estimate of to fit a given observational mean . 
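A sketch of the fitting procedure just described: the shape parameter is estimated by matching the observed coefficient of variation of the void diameters, which in this model depends on the shape parameter alone, and the mean parameter then follows from the observed mean diameter. Because the exact parametrization of (feq) is not legible in this extraction, the moment formulas below assume the standard mean-shape form of the gamma density together with V = (pi/6) D^3; the array `diameters` is placeholder data, not the ZCAT/SRC catalogue.

```python
# Fit (mu, kappa) of the gamma void-volume model from observed void diameters,
# assuming V = (pi/6) D^3 and the mean-shape gamma density
#   f(v) = (kappa/mu)^kappa v^(kappa-1) exp(-kappa v / mu) / Gamma(kappa).
# Then E[D^m] = (6 mu / (pi kappa))^(m/3) * Gamma(kappa + m/3) / Gamma(kappa),
# so the coefficient of variation of D depends on kappa only, as stated above.
import numpy as np
from scipy.special import gammaln
from scipy.optimize import brentq

def cv_diameter(kappa):
    # coefficient of variation of D as a function of kappa alone
    log_ratio = gammaln(kappa) + gammaln(kappa + 2.0 / 3.0) - 2.0 * gammaln(kappa + 1.0 / 3.0)
    return np.sqrt(np.exp(log_ratio) - 1.0)

def fit_gamma_void_model(diameters):
    d = np.asarray(diameters, dtype=float)
    cv_obs = d.std(ddof=1) / d.mean()
    # cv_diameter decreases monotonically in kappa, so a bracketed root works
    kappa = brentq(lambda k: cv_diameter(k) - cv_obs, 1e-3, 1e4)
    # invert E[D] to recover mu
    mean_factor = np.exp(gammaln(kappa + 1.0 / 3.0) - gammaln(kappa))
    mu = (np.pi * kappa / 6.0) * (d.mean() / mean_factor) ** 3
    return mu, kappa

# placeholder data: hypothetical void diameters in units of their mean scale
diameters = np.array([0.6, 0.8, 0.9, 1.0, 1.1, 1.3, 1.6, 2.1])
mu_hat, kappa_hat = fit_gamma_void_model(diameters)
print("fitted mu = %.3f, kappa = %.3f" % (mu_hat, kappa_hat))
```

A fitted shape parameter below the random (exponential) value indicates clumping and one above it indicates dispersion, which is the qualitative conclusion drawn for the catalogue data in the next paragraph.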
by way of illustration, this has been done in figure [ kfzcat ] for the zcat / src data from kauffmann and fairall figure 8a ( cf also figure 6.5 ) , both set to unit mean diameter the true mean for that catalogue was about the fitted values were and this fit is not particularly good if the reported peaks are not an artifact but , qualitatively , we observe that the fitted value is apparently significantly less than which would correspond to the random model .we conclude tentatively that , for the zcat / src data subjected to the kauffmann and fairall search algorithm , the new model suggests clumping rather than dispersion in the underlying stochastic process .a _ mathematica _notebook for performing the fitting procedure is available from the author , who would welcome more sets of data .suppose that the parameter space of gamma - based models is a meaningful representation of the evolutionary process and that some coordinates such as in represent current data .then geodesics in through this point represent some kind of extremal path .moreover , we may consider a vector field on such that is the present endpoint of an integral curve of the initial point of this curve being presumably in an epoch when less clustering ( ) , a random state ( ) or even dispersion ( ) was present at higher density .it would be interesting to investigate the various candidate cosmologies for their appropriate vector fields , via the statistics of matter and voids they predict .the present gamma - related parametric statistical models provide the means to convert the catalogue statistics into the coordinate parameters and a background geometrization of the statistics on which dynamical processes may be formulated .the author wishes to thank a.p .fairall , f. labini and m.b .ribeiro for helpful comments during the preparation of this article .c.t.j . dodson .gamma manifolds and stochastic geometry . in : *proceedings of the workshop on recent topics in differential geometry * , santiago de compostela 16 - 19 july 1997 . _ public .geometra y topologa _ 89 ( 1998 ) 85 - 92 .m. kerscher , j. schmalzing , j. retzlaff , s. borgani , t. buchert , s. gottlober , v. muller , m. plionis and h. wagner .minkowski functionals of abell / aco clusters ._ mon . not ._ 284 , 1 ( 1997 ) 73 - 84 .t. piran , m.lecar , d.s .goldwirth , l. nicolaci da costa and g.r .limits on the primordial fluctuation spectrum : void sizes and anisotropy of the cosmic microwave background radiation . _ mon . not ._ 265 , 3 ( 1993 ) 681 - 8 .
a number of recent studies have estimated the inter - galactic void probability function and investigated its departure from various random models . we study a family of parametric statistical models based on gamma distributions , which give realistic descriptions of void size distributions in other stochastic porous media . gamma distributions contain as a special case the exponential distributions , which correspond to the ` random ' void size probability arising from poisson processes . the random case corresponds to the information - theoretic maximum entropy or maximum uncertainty model . lower entropy models correspond either to more ` clustered ' structures or to more ` dispersed ' structures than expected at random . the space of parameters is a surface with a natural riemannian structure , the fisher information metric . this surface contains the poisson processes as an isometric embedding , provides the geometric setting for quantifying departures from randomness , and is perhaps a setting on which evolutionary dynamics for the void size distribution may be written . estimates are obtained for the two parameters of the void diameter distribution for an illustrative example of data published by fairall . + * keywords * gamma distribution , void distribution , randomness , information geometry + msc classes : 85a40 ; 60d05
ocean - tide information has many applications . the data obtained are used to solve vital problems in oceanography and geophysics , and to study earth tides , elastic properties of the earth s crust and tidal gravity variations . they are also used in space studies to calculate the trajectories of man - made satellites of the earth and to interpret the results of satellite measurements . the interaction of tides with deep sea ridges and chains of seamounts gives rise to deep eddies which transport nutrients from the deep to the surface . the alternate flooding and exposure of the shoreline due to tides is an important factor in the determination of the ecology of the region . [ [ section ] ] one of the first mathematical explanations for tides was given by newton by describing tide - generating forces . the first dynamic theory of tides was proposed by laplace . here we consider the tidal dynamics model proposed by marchuk and kagan . the existence and uniqueness of weak solutions of the deterministic tide equation and of strong solutions of the stochastic tide equation with additive trace - class gaussian noise have been proved in manna , menaldi and sritharan . in this work , we consider the stochastic tide equation with lévy noise and prove the existence , uniqueness and regularity of the solution in bounded domains . control of fluid flow has numerous applications in control of pollutant transport , oil recovery / transport problems , weather prediction , control of underwater vehicles , etc . unification of many control problems in the engineering sciences has been achieved by studying the optimal control problem for the navier - stokes equations ( see , ) . here we consider the initial data optimal control of the stochastic tidal dynamics model . we consider the stroock - varadhan martingale formulation of the stochastic model to prove the existence of an optimal initial value control . [ [ section-1 ] ] the organization of the paper is as follows . a brief description of the model is given in section [ model ] . section [ setting ] describes the functional setting of the problem and states the monotonicity property of the non - linear operator . in section [ estimate ] we consider the a - priori estimates and prove the existence , uniqueness and regularity of the strong solution . in section [ stochastic ] we consider the stochastic optimal control problem with initial value control . [ [ section-2 ] ] in the framework of a gelfand triple we consider the following tidal dynamics model with lévy noise : du + [ au + b(u) + g\nabla\zeta ] dt = f(t) dt + \sigma(t , u(t)) dw(t) + \int_z h(u , z) \tilde{n}(dt , dz) , \qquad \partial_t \zeta + \mathrm{div}(hu) = 0 , \qquad u(0) = u_0 , \ \zeta(0) = \zeta_0 . the operators and are defined in sections [ model ] and [ setting ] . is an -valued wiener process with trace - class covariance . is a compensated poisson random measure , where denotes the poisson counting measure associated to the point process on , a measurable space , and a sigma - finite measure on . [ [ section-3 ] ] the following theorem states the main result of section [ estimate ] . the functional spaces appearing in the statement of the theorem have been defined in section [ setting ] . [ thmint1 ] let us consider the above stochastic tide model with and such that , and assume that and satisfy the following hypotheses : 1 .\times{\mathbb{h}_0 ^ 1}(\mathcal{o});l_q(h_0,{\mathbb{l}^2 } ) ) , h\in\mathbb{h}^2_\lambda([0,t]\times z;{\mathbb{l}^2}(\mathcal{o})) ] , where is a bounded 2-d domain ( horizontal ocean basin ) with coordinates and represents the time .
here denotes the time derivative , and are the laplacian , gradient and the divergence operators respectively .[ [ section-5 ] ] the unknown variables represent the total transport 2-d vector ( i.e. , the vertical integral of the velocity from the ocean surface to the ocean floor ) and the displacement of the free surface with respect to the ocean floor . for details on the coefficients and the domain description see manna , menaldi and sritharan .[ [ section-6 ] ] denote by the following matrix operator and the nonlinear vector operator where and are positive constants , is a strictly positive smooth function . in this modelwe assume the depth to be a continuously differentiable function of , nowhere becoming zero , so that where m is some positive constant which equals zero at a constant ocean depth . [[ section-7 ] ] to reduce to homogeneous dirichlet boundary conditions consider the natural change of unknown functions and which are referred to the tidal flow and the elevation .the full flow which is given a priori on the boundary , has been extended to the whole domain ] satisfies and for ] .[ [ section-10 ] ] let be a symmetric non - negative operator . define . then is a hilbert space equipped with inner product , where is the pseudo - inverse of , and is the covariance operator of the -valued wiener process .[ [ section-11 ] ] let denote the space of linear operators such that is a hilbert - schmidt operator from to . define the norm on the space by .[ [ section-12 ] ] let be a separable and complete metric space .let denote the set of -valued functions on ] is right continuous at and has a left limit at ) , endowed with the skorokhod topology .this topology is metrizable by the following metric rcl _t(u , v):=__t & & , where is the set of increasing homeomorphisms of ] such that tends to the identity uniformly on ] .let be a filtered probability space , and be a banach space .a process with state space is called a lvy process if 1 . is adapted to , 2 . a.s . , 3 . is independent of if , 4 . is stochastically continuous , i.e. , , 5 . is cdlg , 6 . has stationary increments , i.e. , = { \mathbb{e}}[x_{t - s}],0\leq s < t ] as \times z;{\mathbb{l}^2}(\mathcal{o}))=\{x:{\mathbb{e}}\left[\int_0^t\int_z \|x\|_{{\mathbb{l}^2}}^2 { \lambda(dz)}dt\right]<\infty\}.\ ] ] we assume that and satisfy the following hypotheses : 1 .\times{\mathbb{h}_0 ^ 1}(\mathcal{o});l_q(h_0,{\mathbb{l}^2 } ) ) , h\in\mathbb{h}^2_\lambda([0,t]\times z;{\mathbb{l}^2}(\mathcal{o})) ] over equation before taking expectation rcl [ eq17 ] & [ & _ 0tt_n ( u^n(t)_^2 ^ 2+^n(t)_l^2 ^ 2)]+ + & & c+ + & & + + 3k(t_n)+3k + & & + 2 + + & & + + & & + [ u^n_0_^2 ^ 2+^n_0_l^2 ^ 2 ] . using burkholder - davis - gundy inequality , young s inequality and assumption h.2 lcl [ eq18 ] 2 + c_3 ^ 1/2 + c_3 ^ 1/2 + c_3k^1/2 + c_3k^1/2 lcl + ( c_3k)^2 + + ( c_3k)^2(t_n ) . again using burkholder - davis - gundy inequality , young s inequality and assumption h.2 lcl [ eq19 ] 2 + c_4 ^1/2 + c_4 ^1/2 + c_4k ^1/2 + + ( c_4k)^2 + + ( c_4k)^2(t_n ) . 
substituting equations and in equation and rearranging the terms we have rcl & [ & _ 0tt_n ( u^n(t)_^2 ^ 2+^n(t)_l^2 ^ 2)]+2 + & & c^+ + & & + 2+c ( t_n)+2[u^n_0_^2 ^ 2+^n_0_l^2 ^ 2 ] , where ] .now taking limit as and gronwall s inequality we get the desired a priori estimate .let .we assume that and satisfy the following hypotheses : 1 .\times{\mathbb{h}_0 ^ 1}(\mathcal{o});l_q(h_0,{\mathbb{l}^2 } ) ) , h\in\mathbb{h}^p_\lambda([0,t]\times z;{\mathbb{l}^2}(\mathcal{o})) ] which solves the differential equations and. then we have the following a priori estimate : l [ eq16 ] [ _ 0ttu^n(t)_^2^p+^n(t)_l^2^p]+p[_0^t u^n(t)_^2^p-2u^n(t)__0 ^ 1 ^ 2 dt]c_1(p ) , + where the constant depends on the coefficients and the norms and .the proposition can be proved using the same ideas used in proposition [ prop ] . a path - wise strong solution is defined on a given filtered probability space as a valued function which satisfies the stochastic tide equations and in the weak sense and also the energy inequalities in proposition [ prop ] [ thmuniq ] let and be such that suppose and satisfy the conditions in h.1-h.3 , then there exist a unique path - wise strong solution and with the regularity satisfying the stochastic equations - and the apriori bounds - * existence * : + define then using the a priori estimates - , it follows from the banach - alaoglu theorem that along a subsequence , the galerkin approximations have the following limits : rcl & & u^nu l^2(;l^(0,t,^2())l^2(0,t;_0 ^ 1 ( ) ) ) , + & & ^nl^2(;l^2(0,t;l^2 ( ) ) ) , + & & f(u^n)f_0l^2(;l^2(0,t;^-1 ( ) ) ) , + & & ( , u^n)_0l^2(;l^2(0,t;l_q ) ) , + & & h^n(u^n,)h_0_^2 ( [ 0,t]z;^2 ) , where has the differential form weakly in . + + applying it s formula to and the process rcl d[e^-ltu^n(t)_^2 ^ 2 & + & e^-ltg^n(t)_l^2 ^ 2 ] + & & = -le^-ltu^n(t)_^2 ^ 2 dt -le^-ltg^n(t)_l^2 ^ 2 dt + & & -e^-lt(2f(u^n(t)),hu^n(t))_^2 dt + & & + 2e^-lt(^n(t , u^n(t)),hu^n(t))_^2dw^n(t ) + & & + e^-lt^n(t , u^n(t))_l_q^2dt + & & + 2e^-lt_z ( h^n(u^n(t-),z),u^n(t-))_^2(dt , dz ) + & & + e^-lt_z h^n(u^n(t-),z)^2_^2 n(dt , dz ) .integrating from to and taking expectation lcl & [ & e^-ltu^n(t)_^2 ^ 2 + e^-ltg^n(t)_l^2 ^ 2 - u^n(0)_^2 ^ 2 - g^n(0)_l^2 ^ 2 ] + & = & -[_0^t le^-ltu^n(t)_^2 ^ 2 dt ] -[_0^t le^-ltg^n(t)_l^2 ^ 2 dt ] + & & -2[_0^t e^-lt(f(u^n(t)),hu^n(t))_^2 dt ] + & & + [ _ 0^t 2e^-lt(^n(t , u^n(t)),hu^n(t))_^2dw^n(t ) ] + & & + [ _ 0^t e^-lt^n(t , u^n(t))_l_q^2dt ] + & & + [ _ 0^t 2e^-lt_z ( h^n(u^n(t-),z),u^n(t-))_^2(dt , dz ) ] lcl & & + [ _ 0^t e^-lt_zh^n(u^n(t-),z)^2_^2 n(dt , dz ) ] . 
since ,\ ] ] and ,\ ] ] are martingales with zero averages , and since is strong 1-integrable w.r.t , lrl [ _0^t e^-lt_z h^n(u^n(t-),z)^2_^2 n(dt , dz ) ] + = [ _ 0^t e^-lt_z h^n(u^n(t-),z)^2_^2 ( dz)dt ] .so lcl [ e^-ltu^n(t)_^2 ^ 2 + e^-ltg^n(t)_l^2 ^ 2 - u^n(0)_^2 ^ 2 - g^n(0)_l^2 ^ 2 ] + = -[_0^t le^-ltu^n(t)_^2 ^ 2 dt ] -[_0^t le^-ltg^n(t)_l^2 ^ 2 dt ] + -2[_0^t e^-lt(f(u^n(t)),hu^n(t))_^2 dt ] + [ _ 0^t e^-lt^n(t , u^n(t))_l_q^2dt ] + + [ _ 0^t e^-lt_z h^n(u^n(t-),z)^2_^2 ( dz)dt ] .using the lower semi - continuity of -norm lcl [ eq25 ] _n \{-[_0^t le^-ltu^n(t)_^2 ^ 2 dt ] -[_0^t le^-ltg^n(t)_l^2 ^ 2 dt ] + -2[_0^t e^-lt(f(u^n(t)),hu^n(t))_^2 dt ] + [ _ 0^t e^-lt^n(t , u^n(t))_l_q^2dt ] + + [ _ 0^t e^-lt_z h^n(u^n(t-),z)^2_^2 ( dz)dt ] } + = _n \{[e^-ltu^n(t)_^2 ^ 2 + e^-ltg^n(t)_l^2 ^ 2 + - u^n(0)_^2 ^ 2 - g^n(0)_l^2 ^ 2 ] } + [ e^-ltu(t)_^2 ^ 2 + e^-ltg(t)_l^2 ^ 2 - u(0)_^2 ^ 2 - g(0)_l^2^ 2 ] + = -[_0^t le^-ltu(t)_^2 ^ 2 dt ] -[_0^t le^-ltg(t)_l^2 ^ 2 dt ] + -2[_0^t e^-lt(f_0(t),hu(t))_^2 dt ] + [ _ 0^t e^-lt_0(t)_l_q^2dt ] + + [ _ 0^t e^-lt_z h_0(t , z)^2_^2 ( dz)dt ] .using the monotonicity property of and assumption h.3 , we have for all lcl -2[_0^t e^-lt(f(u^n(t))-f(v(t)),hu^n(t)-hv(t))_^2 dt ] + + [ _ 0^t e^-lt^n(t , u^n(t))-^n(t , v(t))_l_q^2dt ] + + [ _ 0^t e^-lt_zh^n(u^n(t-),z)-h^n(v(t-),z)^2_^2 ( dz)dt ] + -[_0^t le^-ltu^n(t)-v(t)_^2 ^ 2 dt ] + -[_0^t le^-ltg-^n_l^2 ^ 2 dt ] + 0 . rearranging the terms lcl & -2&[_0^t e^-lt(f(u^n(t)),hu^n(t))_^2 dt ] + [ _ 0^t e^-lt^n(t , u^n(t))_l_q^2dt ] + &+ & [ _ 0^t e^-lt_zh^n(u^n(t-),z)^2_^2 ( dz)dt ] -[_0^t le^-ltu^n(t)_^2 ^ 2 dt ] + & -&[_0^t le^-ltg^n_l^2 ^ 2 dt ] + & & -2[_0^t e^-lt(f(v(t)),hu^n(t)-hv(t))_^2 dt ] -[_0^t e^-lt^n(t , v(t))_l_q^2dt ] + & & + 2[_0^t e^-lt(^n(u^n(t)),^n(v(t)))_l_qdt ] + & & -[_0^t e^-lt_zh^n(v(t-),z)^2_^2 ( dz)dt ] + & & + 2[_0^t e^-lt_z ( h^n(u^n(t-),z),h^n(v(t-),z))_^2 ( dz)dt ] + & & + [ _ 0^t le^-ltv(t)_^2 ^ 2 dt ] -2[_0^t le^-lt(u^n(t),v(t))_^2 dt ] + & & + [ _ 0^t le^-ltg_l^2 ^ 2 dt ] -2[_0^t le^-ltg(^n(t),(t))_l^2 dt ] + & & -2[_0^t e^-lt(f(u^n(t)),hv(t))_^2 dt ] .taking limit in and using lcl & -2&[_0^t e^-lt(f_0(t),hu(t))_^2 dt ] + [ _ 0^t e^-lt_0(t)_l_q^2dt ] + & + & [ _ 0^t e^-lt_zh_0(t , z)^2_^2 ( dz)dt ] -[_0^t le^-ltu(t)_^2 ^ 2 dt ] lcl & -&[_0^t le^-ltg_l^2 ^ 2 dt ] + & & -2[_0^t e^-lt(f(v(t)),hu(t)-hv(t))_^2 dt ] + & & -[_0^t e^-lt(t , v(t))_l_q^2dt ] + & & + 2[_0^t e^-lt(_0(t),(v(t)))_l_qdt ] + & & -[_0^t e^-lt_zh(v(t-),z)^2_^2 ( dz)dt ] + & & + 2[_0^t e^-lt_z ( h_0(t , z),h(v(t-),z))_^2 ( dz)dt ] + & & + [ _ 0^t le^-ltv(t)_^2 ^ 2 dt ] -2[_0^t le^-lt(u(t),v(t))_^2 dt ] + & & + [ _ 0^t le^-ltg_l^2 ^ 2 dt ] -2[_0^t le^-ltg((t),(t))_l^2 dt ] + & & -2[_0^t e^-lt(f_0(t),hv(t))_^2 dt ] . 
rearranging the terms lcl -2[_0^t e^-lt(f_0(t)-f(v(t)),hu(t)-hv(t))_^2 dt ] + + [ _ 0^t e^-lt_0(t)-(t , v(t))_l_q^2dt ] + + [ _ 0^t e^-lt_zh_0(t , z)-h(v(t-),z)^2_^2 ( dz)dt ] + -[_0^t le^-ltu(t)-v(t)_^2 ^ 2 dt ] + 0 .this estimate holds for any , for any .it is obvious from the density argument that the above inequality remains the same for any .in fact , for any there exists a strongly convergent subsequence satisfying the above inequality .+ + taking , we get and .+ now we take where and is an adapted process in .+ then we have \geq \lambda\mathbb{e}[\int_0^t e^{-lt}(f_0(t),hw(t))_{{\mathbb{l}^2 } } dt].\ ] ] dividing by on both sides on the inequality above and letting go to 0 , we have by the hemicontinuity of \geq 0.\ ] ] since is arbitrary and is a positive , bounded , continuously differentiable function , .this proves the existence of a strong solution .+ + * uniqueness * : + if and are two solutions then solves the stochastic differential equation rcl dw(t)+aw(t)dt&+&g((t)-(t))dt + & = & b(v(t))-b(u(t))dt+((t , u(t))-(t , v(t)))dw(t ) + & & + _ z ( h(u(t-),z)-h(v(t-),z))(dt , dz ) .applying it s formula to and to the process rcl d(w(t)_^2 ^ 2)&+&2w(t)__0 ^ 1 ^ 2 dt+2g(((t)-(t)),w(t))_^2 dt + & = & 2(b(v(t))-b(u(t)),u(t)-v(t))_^2dt + & & + 2((t , u(t))-(t , v(t)),w(t))_^2 dw(t ) + & & + ( t , u(t))-(t , v(t))_l_q^2 dt + & & + _ z h(u(t-),z)-h(v(t-),z)^2_^2 n(dt , dz ) + & & + _ z ( ( h(u(t-),z)-h(v(t-),z)),w(t-))_^2(dt , dz ) . using the result from lemma [ mon ] and equation rcl [ eq27 ] d&(&w(t)_^2 ^ 2)+2w(t)__0 ^ 1 ^ 2 dt + & & ( t)-(t)_l^2 ^ 2 dt+w(t)__0 ^ 1 ^ 2 dt + & & + 2((t , u(t))-(t , v(t)),w(t))_^2 dw(t)+(t , u(t))-(t , v(t))_l_q^2 dt + & & + _ z h(u(t-),z)-h(v(t-),z)^2_^2 n(dt , dz ) + & & + _ z ( ( h(u(t-),z)-h(v(t-),z)),w(t-))_^2(dt , dz ) .notice that taking inner product with , we have as in equation lcl [ eq28 ] d((t)-(t)_l^2 ^ 2 ) + mw(t)_^2 ^ 2 dt + ( + m)(t)-(t)_l^2 ^ 2 dt + w(t)__0 ^ 1 ^ 2 dt .let adding equations and rcl d&(&w(t)_^2 ^ 2+(t)-(t)_l^2 ^ 2)+w(t)__0 ^ 1 ^ 2 dt + & & c(w(t)_^2 ^ 2+(t)-(t)_l^2 ^ 2 ) dt + & & + 2((t , u(t))-(t , v(t)),w(t))_^2 dw(t)+(t , u(t))-(t , v(t))_l_q^2 dt + & & + _ z h(u(t-),z)-h(v(t-),z)^2_^2 n(dt , dz ) + & & + _ z ( ( h(u(t-),z)-h(v(t-),z)),w(t-))_^2(dt , dz ) .integrating from 0 to and taking expectation rcl & [ & w(t)_^2 ^ 2+(t)-(t)_l^2 ^ 2]+[_0^t w(t)__0 ^ 1 ^ 2 dt ] + & & c[_0^t(w(t)_^2 ^ 2+(t)-(t)_l^2 ^ 2 ) dt ] + & & + [ _ 0^t ( t , u(t))-(t , v(t))_l_q^2 dt ] + & & + [ _ 0^t _ z h(u(t-),z)-h(v(t-),z)^2_^2 ( dz)dt ] + & & + [ w(0)_^2 ^ 2+(0)-(0)_l^2 ^ 2 ] . using assumption h.3 rcl & [ & w(t)_^2 ^ 2+(t)-(t)_l^2 ^ 2]+[_0^t w(t)__0 ^ 1 ^ 2 dt ] + & & c[_0^t(w(t)_^2 ^ 2+(t)-(t)_l^2 ^ 2 ) dt ] + l[_0^t w(t)_^2 ^ 2 dt ] + & & + [ w(0)_^2 ^ 2+(0)-(0)_l^2 ^ 2 ] . in particular rcl [ w(t)_^2 ^ 2&+&(t)-(t)_l^2 ^ 2 ] + & & [ w(0)_^2 ^ 2+(0)-(0)_l^2 ^ 2 ] + & & + ( c+l)[_0^t(w(t)_^2 ^ 2+(t)-(t)_l^2 ^ 2 ) dt ]. + hence the uniqueness of pathwise strong solution follows using gronwall s inequality .we assume that and satisfy the following hypotheses : 1 .\times { \mathbb{h}_0 ^ 1}(\mathcal{o});l_q(h_0,{\mathbb{h}_0 ^ 1 } ) ) , h\in\mathbb{h}^2_\lambda([0,t]\times z;{\mathbb{h}_0 ^ 1}(\mathcal{o})) ] which solves the differential equations and. then we have lr _ t0_n ( _ 0tt_nu^n(t)__0 ^ 1 ^ 2+^n(t)_h_0 ^ 1 ^ 2 + _ 0^t_nu^n(t)^2_^2 dt + > ( n-1)+u^n_0__0 ^ 1 ^ 2+^n_0_h^1_0 ^ 2)=0 . 
applying it s formula to and to the process lr d(u^n(t)__0 ^ 1 ^ 2)+ 2u^n(t)^2_^2dt+2(b(u^n),u^n)_^2dt + + 2((g^n ) , u^n)_^2dt + = 2(f , u^n(t))_^2dt+2(^n(t , u^n(t)),u^n(t))_^2dw^n(t ) + + ^n(t , u^n(t))_l_q^2dt + _zh^n(u^n(t-),z)_^2 ^ 2 n(dt , dz ) + + 2_z ( h^n(u^n(t-),z),u^n(t-))_^2(dt , dz ) . using divergence theorem lr d(u^n(t)__0 ^ 1 ^ 2)+ 2u^n(t)^2_^2 dt + = 2 ( b(u^n),u^n)_^2dt+2(g^n , u^n)_^2dt+2(f , u^n(t))_^2dt + + 2(^n(t , u^n(t)),u^n(t))_^2dw^n(t)+^n(t , u^n(t))_l_q^2dt + + _zh^n(u^n(t-),z)_^2 ^ 2 n(dt , dz ) + + 2_z ( h^n(u^n(t-),z),u^n(t-))_^2(dt , dz ) .using young s inequality and .\ ] ] using lemma [ lemw ] , equation and young s inequality lrl 2(b(u^n),u^n)_^2&&b(u^n)_^2 ^ 2+u(t)_^2 ^ 2 + & & c_2(u^n(t)^4_^4+w^0(t)_^4 ^ 4)+u(t)_^2 ^ 2 + & & c_2(2u^n(t)^2_^2u^n(t)__0^ 1 ^ 2+w^0(t)_^4 ^ 4)+u(t)_^2 ^ 2 . now using the a - priori estimate given by equation we get hence lr d(u^n(t)__0 ^ 1 ^ 2)+ u^n(t)^2_^2 dt + ( 2c_2c_1(2)+1)u^n(t)__0 ^ 1 ^ 2dt+^n(t)_h_0 ^ 1 ^ 2+f(t)__0 ^ 1 ^ 2dt+c_2w^0(t)_^4 ^ 4 + + 2(^n(t , u^n(t)),u^n(t))_^2dw^n(t)+^n(t , u^n(t))_l_q^2dt + + _zh^n(u^n(t-),z)_^2 ^ 2 n(dt , dz ) + + 2_z ( h^n(u^n(t-),z),u^n(t-))_^2(dt , dz ) . using assumption a.2 lr [ eeeeq ] d(u^n(t)__0 ^ 1 ^ 2)+ u^n(t)^2_^2 dt + ( 2c_2c_1(2)+1)u^n(t)__0 ^ 1 ^ 2dt+^n(t)_h_0 ^ 1 ^ 2+f(t)__0 ^ 1 ^ 2dt+c_2w^0(t)_^4 ^ 4 + + 2(^n(t , u^n(t)),u^n(t))_^2dw^n(t)+k(1+u^n(t)__0 ^ 1 ^ 2)dt + + _zh^n(u^n(t-),z)_^2 ^ 2 n(dt , dz ) + + 2_z ( h^n(u^n(t-),z),u^n(t-))_^2(dt , dz ) .using now rcl |(div(hu^n(t)),^n(t))_l^2| & & u^n(t)__0 ^ 1^n(t)_l^2 + mu^n(t)_^2^n(t)_l^2 + & & [ u^n(t)__0 ^ 1^ 2 + ^n(t)_l^2 ^ 2 ] + & & + [ u^n(t)_^2 ^ 2+^n(t)_l^2 ^ 2 ] . using poincar s inequality and rearranging adding equations and lr d(u^n(t)__0 ^ 1 ^ 2+^n(t)_h_0 ^ 1 ^ 2)+ u^n(t)^2_^2 dt + ( 2c_2c_1(2)+1+mc_p^2+)u^n(t)__0 ^ 1 ^ 2dt+^n(t)_h_0 ^ 1 ^ 2 dt + + ( + m)^n(t)_l^2 ^ 2 dt+f(t)__0 ^ 1 ^ 2dt+c_2w^0(t)_^4 ^ 4 + + 2(^n(t , u^n(t)),u^n(t))_^2dw^n(t)+k(1+u^n(t)__0 ^ 1 ^ 2)dt + + _zh^n(u^n(t-),z)_^2 ^ 2 n(dt , dz ) + + 2_z ( h^n(u^n(t-),z),u^n(t-))_^2(dt , dz ) .define lrl _ n : = \{t : u^n(t)__0 ^ 1 ^ 2+^n(t)_h_0 ^ 1 ^ 2&+&_0^tu^n(s)^2_^2ds+^n(t)_^2 ^ 2 + & & >n+u^n_0__0 ^ 1 ^ 2+^n_0__0 ^ 1 ^ 2}. integrating and taking supremum over lr _ 0tt_n(u^n(t)__0 ^ 1 ^ 2+^n(t)_h_0 ^ 1 ^ 2)+ _ 0^t_nu^n(t)^2_^2 dt + s_0^t_n(u^n(t)__0 ^ 1 ^ 2+^n(t)_h_0 ^ 1 ^ 2 + ^n(t)_l^2 ^ 2+f(t)__0 ^ 1 ^ 2+w^0(t)_^4 ^ 4 + 1 ) dt + + 2_0tt_n_0^t((^n(t , u^n(t)),u^n(t))_^2dw^n(t ) + + _ 0^t_n_z ( h^n(u^n(t-),z),u^n(t-))_^2(dt , dz ) + + _ 0^t_n_zh^n(u^n(t-),z)_^2 ^ 2 n(dt , dz ) ) + + u^n_0__0 ^ 1 ^ 2+^n_0_h^1_0 ^ 2 , where .+ hence lr ( _ 0tt_nu^n(t)__0 ^ 1 ^ 2+^n(t)_h_0 ^ 1 ^ 2 + _ 0^t_nu^n(t)^2_^2 dt . + .> ( n-1)+u^n_0__0 ^ 1 ^ 2+^n_0_h^1_0 ^ 2 ) + ( s_0^t_n(u^n(t)__0 ^ 1 ^ 2+^n(t)_h_0 ^ 1 ^ 2+^n(t)_l^2 ^ 2 .+ f(t)__0 ^ 1 ^ 2+w^0(t)_^4 ^ 4 + 1 ) dt>(n-1)/2 ) + + 2(_0tt_n_0^t((^n(t , u^n(t)),u^n(t))_^2dw^n(t ) .+ + _ 0^t_n_z ( h^n(u(t-),z),u^n(t-))_^2(dt , dz ) + .+_0^t_n_zh^n(u(t-),z)_^2 ^ 2 n(dt , dz ) ) > ( n-1)/2 ) . now lr ( _ 0tt_n_0^t((^n(t , u^n(t)),u^n(t))_^2dw^n(t ) . + + _ 0^t_n_z ( h^n(u(t-),z),u^n(t-))_^2(dt , dz ) + .+_0^t_n_zh^n(u(t-),z)_^2 ^ 2 n(dt , dz ) ) > ( n-1)/2 ) lr ( _ 0tt_n_0^t(^n(t , u^n(t)),u^n(t))_^2dw^n(t)>(n-1)/6 ) + + ( _ 0tt_n_0^t _ z ( h^n(u(t-),z),u^n(t-))_^2(dt , dz)>(n-1)/6 ) + + ( _ 0tt_n_0^t_zh^n(u(t-),z)_^2 ^ 2 n(dt , dz ) > ( n-1)/6 ) . 
using doob s inequality lr ( _ 0tt_n_0^t(^n(t , u^n(t)),u^n(t))_^2dw^n(t)>(n-1)/6 ) + [ _ 0^t_n u^n(t)__0 ^ 1 ^ 2^n(t , u^n(t))_l_q^2 dt ] + k[_0^t_n u^n(t)__0 ^ 1 ^ 2(1 + u^n(t)__0 ^ 1 ^ 2 ) dt ] + kn(n+1)(t_n ) . using doob s inequality again lr ( _ 0tt_n_0^t_z ( h^n(u(t-),z),u^n(t-))_^2(dt , dz)>(n-1)/6 ) + [ _ 0^t_n u^n(t)__0 ^ 1 ^ 2_zh^n(u(t-),z)_^2 ^ 2 ( dz)dt ] + k[_0^t_n u^n(t)__0 ^ 1 ^ 2(1 + u^n(t)__0 ^ 1 ^ 2 ) dt ] + kn(n+1)(t_n ) . using doob s inequality and strong 2-integrability of lr ( _ 0tt_n_0^t_zh^n(u(t-),z)_^2 ^ 2 n(dt , dz )> ( n-1)/6 ) + [ _ 0^t_n_zh^n(u(t-),z)_^2 ^ 2 ( dz)dt ] + k[_0^t_n(1+u(t)__0 ^ 1 ^ 2 ) dt ] + k(n+1)(t_n ) . using chebychev s inequality lr ( s_0^t_n(u^n(t)__0 ^ 1 ^ 2+^n(t)_h_0 ^ 1 ^ 2+^n(t)_l^2 ^ 2 .+ f(t)__0 ^ 1 ^ 2+w^0(t)_^4 ^ 4 + 1 ) dt>(n-1)/2 ) + [ s_0^t_n(u^n(t)__0 ^ 1 ^ 2+^n(t)_h_0 ^ 1 ^ 2+^n(t)_l^2 ^ 2 + + f(t)__0 ^ 1 ^ 2+w^0(t)_^4 ^ 4 + 1 ) dt ] lr 3ns ( t_n)+s[_0^t_n(f(t)__0 ^ 1 ^ 2+w^0(t)_^4 ^ 4 + 1)dt ] .hence lr _t0_n ( _ 0tt_nu^n(t)__0 ^ 1 ^ 2+^n(t)_h_0 ^ 1 ^ 2 + _ 0^t_nu^n(t)^2_^2 dt . + .> ( n-1)+u^n_0__0 ^ 1 ^ 2+^n_0_h^1_0 ^ 2)=0 .let be a separable and complete metric space .let and let be given .a modulus of is defined by ,\mathbb{s}}(u,\delta):=\inf_{\pi_\delta}\max_{t_i\in\overline{\omega}}\sup_{t_i\leq s < t< t_{i+1}\leq t}\rho(u(t)u(s)),\ ] ] where is the set of all increasing sequences with the following property taking the path space into account , we call , where denotes the extended skorokhod topology , , where denotes the weak - star topology and , where denotes the weak topology and as the strong topology of . note that the spaces are completely regular and continuously embedded in .let be the supremum of four topologies , that is , .we can assume that is a closed subset of .by condition ( a ) the weak - star topology in induced on is metrizable .similarly by condition ( b ) the weak topology in induced on is metrizable .thus the compactness of a subset of is equivalent to its sequential compactness .let be a sequence in . by the banach - alaoglu theorem compact in as well as .+ we need to prove that is compact in . by condition ( a ) for every ] satisfying the usual hypotheses , and let be a sequence of cdlg , -adapted and -valued processes . is said to satisfy the aldous condition iff such that for every sequence of stopping times with [ aldt ] let satisfy the aldous condition .let be the law of on .then for every there exists a subset such that and }(u,\delta)=0.\ ] ] [ ald ] let be a separable banach space and let be a sequence of -valued random variables .assume that for every sequence of -stopping times with and for every and the following condition holds \leq c\theta^{\beta},\ ] ] for some and some constant .then the sequence satisfies the aldous condition in the space .we use the tightness condition for the prokhorov - varadarajan theorem which states that a sequence of measures is tight on a topological space if for every there exists a compact set such that .hence the tightness of measure in is given by the following theorem .* there exists a positive constant such that }\|x_n(s)\|_{{\mathbb{l}^2}}]\leq c_1,\ ] ] * there exists a positive constant such that \leq c_2,\ ] ] * satisfies the aldous condition in .let be the law of on .then for every there exists a compact subset of such that and the sequence of measures is said to be tight on .let . 
by the chebyshev inequality and by ( a ) , we see that for any }\|x_n(s)\|_{{\mathbb{l}^2}}>r)\leq\dfrac{\mathbb{e}[\sup_{s\in[0,t]}\|x_n(s)\|_{{\mathbb{l}^2}}]}{r}\leq \dfrac{c_1}{r}.\ ] ]let be such that . then }\|x_n(s)\|_{{\mathbb{l}^2}}>r_1)\leq\dfrac{\epsilon}{3}.\ ] ] let }\|u(s)\|_{{\mathbb{l}^2}}\leq r_1\} ] , 2 . for all ,there exists a positive constant such that for all 3 . for all ,there exists a positive constant such that for all * is a filtered probability space with a filtration , * is a time homogeneous poisson random measure over with the intensity measure , * is a cylindrical wiener process over , * for all ] , hence using fubini s theorem \leq c_1t.\ ] ] using the chebychev inequality , we see that for any }{r^2}\leq \dfrac{c_1t}{r^2}.\ ] ] let be such that . then define hence . using the assumption \leq c_c ] , we have \leq c_c.\ ] ] define for all lrl k^1(^n,^n,^n,^n,^n , v)(t)&=&(^n_0,v)_^2+(^n , v)_^2+_0^t ( a(^n(s)),v)_^2ds + & & + _ 0^t ( b(^n(s)),v)_^2ds-_0^t(^n(s),v)_^2 ds + & & + _ 0^t ( ^n(s,^n(s)),v)_^2d^n(s ) + & & + _ 0^t_z ( h^n(^n(s-),z),v)_^2^n(ds , dz ) , lrl k^1(u^*,z^*,u^*,n^*,w^*,v)(t)&=&(u^*_0,v)_^2+(u^*,v)_^2+_0^t ( a(u^*(s)),v)_^2ds + & & + _ 0^t ( b(u^*(s)),v)_^2ds-_0^t(z^*(s),v)_^2 ds + & & + _ 0^t ( ( s , u^*(s)),v)_^2dw^*(s ) + & & + _ 0^t_z ( h(u^*(s-),z),v)_^2^*(ds , dz ) , we show the term by term convergence of the above equation in . + since in , we have -a.s . hence by and the vitali theorem =0.\ ] ] hence since in , we have -a.s . since , by the vitali theorem =0.\ ] ] hence since in , therefore by vitali theorem , for all ] is measurable , 2 . is lower semicontinuous ] where is non - negative and where .the first integral on the right - hand side is zero due to the lemma given below . due to the lower semicontinuity of , we have so now using the beppo - levi theorem on the bounded measurable functions we get the required semicontinuity .let in the -topology and in -weak .let \times{\mathbb{h}_0 ^ 1}(\mathcal{o})\times{\mathbb{l}^2}(\mathcal{o})\rightarrow \mathbb{r}_+$ ] be a bounded measurable function such that , is lower semicontinuous . then define . for and , define \times{\mathbb{l}^2}(\mathcal{o});\inf\limits_{|\langle y , u\rangle-\langle y , z\rangle|\leq 1/m}\theta(t , z , u)\leq -\gamma\right\},\ ] ] and \times{\mathbb{l}^2}(\mathcal{o});\inf\limits_{|\langle y , u\rangle-\langle y , z\rangle|\leq 1/m}\theta(t , z , u)\leq -\gamma\right\}.\ ] ] clearly , .+ + as , we have hence the lower semicontinuity of implies that each -section of is closed .hence we have we restrict ourselves to admissible pairs which satisfy - in the martingale sense such that + + we have and now , implies that \leq c(r),\ ] ] for some constant .hence by lemma [ tight3 ] there exists a sequence of tight measures in . by theorem [ martexist ], there exists a corresponding sequence such that the pair gives a martingale solution to - , where in the -topology and weakly in .by the same theorem we also have that solves - .+ by lemma [ lowersemcont ] , is lower semi - continuous and hence by theorem 55 in chapter iii of , so is .so , * acknowledgements : * pooja agarwal would like to thank department of science and technology , govt . of india , for the inspire fellowship .utpal manna s work has been supported by the national board of higher mathematics of department of atomic energy , govt . 
of india , under grant no .nbhm / rp46/2013/fresh/421 .the authors would like to thank indian institute of science education and research(iiser ) - thiruvananthapuram for providing stimulating scientific environment and resources .martingale solutions to the 2d and 3d stochastic navier - stokes equations driven by the compensated poisson random measure ; preprint 13 .department of mathematics and computer sciences , lodz university .stochastic analysis of tidal dynamics equation ; _ infinite dimensional stochastic analysis , special volume in honor of professor hh ., edited by a. sengupta and p. sundar , world scientific publishers .
in this work we first establish the existence , uniqueness and regularity of the strong solution of the tidal dynamics model perturbed by lévy noise . monotonicity arguments are exploited in the proofs . we then formulate a stroock - varadhan martingale problem associated with an initial value control problem and establish the existence of optimal controls .
this is to certify that : 1 .the thesis comprises only my original work towards the mphil ; 2 .due acknowledgement has been made to all other material used ; and 3 .the thesis is less than 50,000 words in length . [cols="^",options="header " , ] in this section , we will prove that under certain conditions , the information state converges in distribution .this fact is already known for classical hidden markov models , and is quite robust : legland and mevel prove geometric ergodicity of the information state even when calculated from incorrectly specified parameters , while capp , moulines and rydn prove harris recurrence of the information state for certain uncountable state underlying chains .we will present a mostly elementary proof of convergence in the case of multiple observation processes . to determine the limiting behaviour of the information state , we begin by finding an explicit form for its one - step time evolution . [ def : rfunction ] for each observation process and each observed state , the * -function * is the function given by where is the dirac measure on and is the component of .[ lem : informationstaterecurrence ] in a hidden markov model with multiple observation processes and a fixed policy , the information state satisfies the recurrence relation let and . by the markov property as in definition [ def : observationprocess ] , and the simplification ( [ eqn : noidependence ] ) , & \hspace{3cm}\times{\mathbb p}\big(x_{t+1}=x{\,\big|\,}x_t = j\big){\mathbb p}\big(x_t = j , y^{(i_{(t)})}_{(t)}=y_{(t)}\big)\nonumber\\ & = \frac1{k_{t+1}}\sum_jm^{(i_{t+1})}_{x , y_{t+1}}t_{j , x}k_tz_t(y_{(t)})_j\nonumber\\ & = \frac{k_t}{k_{t+1}}\sum_jm^{(i_{t+1})}_{x , y_{t+1}}t_{j , x}z_t(y_{(t)})_j\nonumber\\ & = \frac{\sum_jm^{(i_{t+1})}_{x , y_{t+1}}t_{j , x}z_t(y_{(t)})_j}{\sum_x\sum_jm^{(i_{t+1})}_{x , y_{t+1}}t_{j , x}z_t(y_{(t)})_j},\end{aligned}\ ] ] since does not depend on and .note that for each information state and each observation process , there are at most possible information states at the next step , which are given explicitly by for each observation .[ lem : informationdistributionrecurrence ] the information distribution satisfies the recurrence relation where the sum is taken over all observation processes and all observation states , is the dirac measure on , and is the matrix product considering as a row vector .since is a deterministic function of , given that , this depends only on and , so given that and , integration over gives by definition [ def : informationstate ] , is the posterior distribution of given the observations up to time , so , the coordinate of the vector . since is a function of , which is a function of and the observation randomness , by the markov property as in definition [ def : observationprocess ] , substituting ( [ eqn : pygivenz ] ) into ( [ eqn : mutplusone ] ) completes the proof . notethat lemma [ lem : informationdistributionrecurrence ] shows that the information distribution is given by a linear dynamical system on , and therefore the information state is a markov chain with state space .we will use tools in markov chain theory to analyse the convergence of the information state , for which it will be convenient to give a name to this recurrence .[ def : transitionfunction ] the * transition function * of the information distribution is the deterministic function given by , extended linearly to all of by the recurrence in lemma [ lem : informationdistributionrecurrence ] .the coefficients are called the * -functions*. 
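A minimal sketch of the recurrence in lemma [lem:informationstaterecurrence]: the information state is propagated through the transition matrix and reweighted by the likelihood row of whichever observation process the policy selects, then renormalised; the normalising constant is the coefficient appearing in definition [def:transitionfunction]. The matrices T and M and the simple policy below are illustrative placeholders rather than parameters taken from the thesis.

```python
# Information-state (Bayes filter) update for a hidden Markov model with
# multiple observation processes: z_{t+1}(x) is proportional to
# M^{(i)}[x, y] * sum_j T[j, x] * z_t(j), where i is the observation process
# chosen by the policy.  All matrices below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

T = np.array([[0.9, 0.1],
              [0.2, 0.8]])                    # T[j, x] = P(X_{t+1} = x | X_t = j)
M = [np.array([[1.0, 0.0],                    # observation process 0
               [0.4, 0.6]]),
     np.array([[0.6, 0.4],                    # observation process 1
               [0.0, 1.0]])]                  # M[i][x, y] = P(Y = y | X = x, process i)

def r_function(z, i, y):
    """One filter step; returns (new information state, normalising constant)."""
    unnorm = M[i][:, y] * (T.T @ z)
    alpha = unnorm.sum()
    return unnorm / alpha, alpha

def policy(z):
    # placeholder policy: observe with process 0 when state 0 looks likely
    return 0 if z[0] >= 0.5 else 1

# simulate the hidden chain and run the filter alongside it
x = 0
z = np.array([0.5, 0.5])
for t in range(20):
    x = rng.choice(2, p=T[x])                 # hidden chain moves
    i = policy(z)                             # choose which process to observe with
    y = rng.choice(2, p=M[i][x])              # observation from the chosen process
    z, _ = r_function(z, i, y)
print("final information state:", z)
```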
we now give a criterion under which the information state is always positive recurrent . a discrete state markov chain is called * ergodic * if it is irreducible , aperiodic and positive recurrent .such a chain has a unique invariant measure , which is a limiting distribution in the sense that converges to in total variation norm .a discrete state markov chain is called * positive * if every transition probability is strictly positive , that is , for all , .this is a stronger condition than ergodicity .[ def : anchored ] we shall call a hidden markov model * anchored * if the underlying markov chain is ergodic , and for each observation process , there is a state and an observation such that and for all .the pair is called an * anchor pair*. heuristically , the latter condition allows for perfect information whenever the observation is made using observation process .this anchors the information chain in the sense that this state can be reached with positive probability from any other state , thus resulting in a recurrent atom in the uncountable state chain . on the other hand , since each information state can make a transition to only finitely many other information states , starting the chain at results in a discrete state markov chain , for which it is much easier to prove positive recurrence .[ lem : anchor ] in an anchored hidden markov model , for any anchor pair , for all . when , by definition [ def : anchored ] , so every term in the numerator of definition [ def : rfunction ] is zero except the coefficient of .since we know the coefficients have sum 1 , it follows that .[ lem : alpha ] in a positive , anchored hidden markov model , the -functions , for each , are uniformly bounded below by some , that is , for all and .we can write by definitions [ def : transitionfunction ] and [ def : anchored ] , which is bounded below by since . since each , if all the entries of are positive , then is bounded below uniformly in for fixed , which then implies a uniform bound in and since there are only finitely many .[ def : orbit ] for each state , the * orbit * of under the -functions is by requiring the -functions to be positive , we exclude points in the orbit which are reached with zero probability .let .[ prop : discreteness ] in a positive , anchored hidden markov model , there exists a constant such that for all measures , the mass of the measure outside is bounded by , that is , . we can rewrite definition [ def : transitionfunction ] as where in this notation , the integral is the lebesgue integral of the function with respect to the measure . since takes values in the and a probability , the integral also takes values in , thus maps the information state space to itself .since is a measure supported on the set of points reachable from via an -function , and is a union of orbits of -functions and therefore closed under -functions , it follows that all mass in is mapped back into under the evolution function , that is on the other hand , by lemma [ lem : anchor ] , for all , hence putting these together gives setting gives , hence by induction . by lemma [ lem : alpha ] , , while since we can always choose a larger value .up to this point , we have considered the evolution function as a deterministic function .however , we can also consider it as a probabilistic function . by definition [ def : transitionfunction ], maps points in to , hence the restriction gives a probabilistic function , and therefore a markov chain , with countable state space . 
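To make definition [def:orbit] concrete, the sketch below enumerates a finite truncation of the orbit of a point mass delta_x by breadth-first application of the r-functions, discarding branches whose normalising constant vanishes. The placeholder matrices are the same as in the previous sketch (repeated so the snippet is self-contained); they happen to satisfy the anchoredness condition of definition [def:anchored], since under each process one observation is possible from only one underlying state.

```python
# Enumerate a finite truncation of the orbit of delta_x under the r-functions
# (Definition [def:orbit]): breadth-first application of every
# (process, observation) pair whose normalising constant is positive.
# T and M are the same illustrative placeholders as in the previous sketch.
import numpy as np

T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
M = [np.array([[1.0, 0.0],
               [0.4, 0.6]]),
     np.array([[0.6, 0.4],
               [0.0, 1.0]])]

def r_function(z, i, y):
    unnorm = M[i][:, y] * (T.T @ z)
    alpha = unnorm.sum()
    return (unnorm / alpha if alpha > 0 else None), alpha

def truncated_orbit(x, depth, decimals=10):
    start = np.eye(len(T))[x]                      # the point mass delta_x
    seen = {tuple(np.round(start, decimals))}
    frontier = [start]
    for _ in range(depth):
        new_frontier = []
        for z in frontier:
            for i in range(len(M)):
                for y in range(M[i].shape[1]):
                    z_new, alpha = r_function(z, i, y)
                    if alpha <= 0:
                        continue                   # unreachable branch, excluded from the orbit
                    key = tuple(np.round(z_new, decimals))
                    if key not in seen:
                        seen.add(key)
                        new_frontier.append(z_new)
        frontier = new_frontier
    return seen

orbit0 = truncated_orbit(x=0, depth=6)
print("distinct information states reachable from delta_0 in 6 steps:", len(orbit0))
```

Because both processes here have an anchoring observation, each application either resets the state to a point mass or moves it one step along a continuation branch, so the truncated orbit stays small; this is the countable support on which proposition [prop:discreteness] concentrates the mass.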
by proposition [ prop : discreteness ] , the limiting behaviour of the information chain takes place almost entirely in in some sense , so we would expect that convergence of the restricted information chain is sufficient for convergence of the full information chain .this is proved below .[ prop : positiverecurrent ] in a positive , anchored hidden markov model , under any policy , the chain has at least one state of the form which is positive recurrent , that is , whose expected return time is finite .construct a markov chain on the set , with transition probabilities for all , whenever is nonempty , and all other transition probabilities zero .we note that this is possible since we allow each state a positive probability transition to some other state .since is a finite state markov chain , it must have a recurrent state .each state can reach some state , so some state is recurrent ; call it .consider a state of the chain which is reachable from , where is a composition of -functions with corresponding -functions nonzero .since the partition , one of them must contain ; call it .we will assume ; the proof follows the same argument and is simpler in the case when . by definition of the -functions , this means that is reachable from in the chain , hence in the chain , is reachable from , and by recurrence of , must also be reachable from via some sequence of positive probability transitions by definition of , is nonempty , and thus contains some point , where is a composition of -functions with corresponding nonzero . by definition [ def : orbit ] , each transition to in the information chain occurs with positive probability , so since , by anchoredness and positivity , the markov property then gives continuing via the sequence ( [ eqn : path ] ) , we obtain thus , for every state reachable from , we have found constants and such that by lemma [ lem : alpha ] , is uniformly bounded below , while depends only on the directed path ( [ eqn : path ] ) and not on , and thus is also uniformly bounded below since there are only finitely many , and hence it suffices to choose finitely many such paths .similarly , also depends only on the directed path ( [ eqn : path ] ) , and thus is uniformly bounded above . in particular , it is possible to pick and such that and .let be the first entry time into the state . by the above bound, we have for any initial state reachable from .letting and be independent copies of and , the time - homogeneous markov property gives & \hspace{2cm}\times{\mathbb p}\big(z_{ks}=z'\big|\tau > ks , z_0=z\big)\nonumber\\[.2 cm ] & \le\sup_{z'}{\mathbb p}\big(\tau>(k+1)s\big|\tau > ks , z_{ks}=z'\big)\nonumber\\ & = \sup_{z'}{\mathbb p}\big(\tau'>s\big|z'_0=z'\big)\nonumber\\ & \le1-c.\end{aligned}\ ] ] by induction , for all .dropping the condition on the initial distribution for convenience , we have & = \sum_{k\in{\mathbb z}^+}{\mathbb p}(\tau > k)=\sum_{k\in{\mathbb z}^+}\sum_{0\le t < s}{\mathbb p}(\tau > ks+t)\nonumber\\ & \le\sum_{k\in{\mathbb z}^+}\sum_{0\le t < s}{\mathbb p}(\tau > ks)\le\sum_{k\in{\mathbbz}^+}s(1-c)^k=\frac sc<\infty.\end{aligned}\ ] ] in particular , <\infty ] , via equating each point ] .we can write .note that the coefficient of is strictly positive and in the denominator , while is exactly the same with instead of . since , , hence .[ lem : monotonic ] the linear fractional transformations and are both strictly increasing when and both strictly decreasing when .the derivative of is , which is positive everywhere if and negative everywhere if . 
the same holds for , since it is identical with instead of .[ lem : fixedpoint ] the linear fractional transformations and have unique fixed points and , which are global attractors of their respective dynamical systems .split the interval ] .[ lem : eta0bigger ] the fixed points satisfy . by lemma [lem : r0bigger ] , .first consider the case .applying lemma [ lem : monotonic ] times gives , so the orbit of under is monotonically decreasing , but it also converges to by lemma [ lem : fixedpoint ] , hence . in the remaining case , suppose .then by lemma [ lem : monotonic ] , , which is a contradiction , hence .[ prop : exception ] the first exception to theorem [ thm : convergence2 ] , case 1 , can not occur under a threshold policy .suppose , , and the policy is threshold .since and , it follows that every point of is less than every point of .since and are the orbits of 0 and 1 under and respectively , by lemma [ lem : fixedpoint ] , they have limit points and respectively , hence .this contradicts lemma [ lem : eta0bigger ] , hence this can not occur .the remaining exception is when the information chain is periodic with period 2 , in which case the expected entropy oscillates between two limit points .the limiting expected entropy can still be defined in a sensible way , by taking the average , minimum or maximum of the two limit points , depending on which is most appropriate for the situation .thus , for threshold policies , it is possible to define optimality without exception .we conclude this section by writing down a closed form general expression for the limiting expected entropy .[ prop : entropyformula ] under the conditions of theorem [ thm : convergence2 ] , that is , in case 0 , the limiting expected entropy of a policy is given by where , for : * is the entropy function and is the constant function with value 1 ; * and are the combined -function and combined -function respectively , with , , and defined as in ( [ eqn : specialcase ] ) ; * ] , so that .this gives an error bound of while the constant appears daunting at first glance , solving for a prescribed error of gives hence , we require for any realistic value of , this is easily within computational bounds , as each iteration requires at most 36 arithmetic operations , 2 calls to the policy function , and 4 calls to the logarithm function . an alternative approach to estimating limiting expectedentropy would be to simulate observation of the hidden markov model under the given policy .the major drawback of this method is that it requires working with the information state , which takes values in ) ] into subintervals and treat each subinterval as a discrete atom , but this produces a very imprecise result . even using an unrealistically small subinterval width of , the entropy function has a variation of over across the first subinterval , restricting the error bound to this order of magnitude regardless of the number of iterations . 
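for concreteness, the direct method can be sketched as the evolution of a discrete measure that is pushed onto the orbits of 0 and 1. the maps f, the weights alpha and the threshold policy in the sketch below are illustrative placeholders rather than the model's actual combined f- and alpha-functions, so the printed value only demonstrates the mechanics of the computation.

```python
import math

def entropy(z):
    """binary entropy h(z) in nats, with h(0) = h(1) = 0."""
    if z <= 0.0 or z >= 1.0:
        return 0.0
    return -z * math.log(z) - (1.0 - z) * math.log(1.0 - z)

def limiting_expected_entropy(f, alpha, policy, n_iter=100, tol=1e-12):
    """evolve a discrete measure on [0,1]:
    f[a][y](z)     -- next information state under action a and outcome y
    alpha[a][y](z) -- probability of outcome y under action a at state z
    policy(z)      -- action chosen at information state z
    returns the expected entropy of the measure after n_iter steps."""
    measure = {0.5: 1.0}   # any starting point; it moves onto the orbits of 0 and 1
    for _ in range(n_iter):
        new = {}
        for z, m in measure.items():
            a = policy(z)
            for y in (0, 1):
                w = m * alpha[a][y](z)
                if w < tol:
                    continue   # drop negligible mass in the orbit tails
                zy = f[a][y](z)
                new[zy] = new.get(zy, 0.0) + w
        measure = new
    return sum(m * entropy(z) for z, m in measure.items())

# illustrative placeholder maps (the anchoring outcome sends the state to 0 or 1):
f = {0: {0: lambda z: 0.0, 1: lambda z: 0.3 + 0.5 * z},
     1: {0: lambda z: 1.0, 1: lambda z: 0.2 + 0.7 * z}}
alpha = {a: {0: lambda z: 0.4, 1: lambda z: 0.6} for a in (0, 1)}
threshold_policy = lambda z: 0 if z < 0.5 else 1

print(limiting_expected_entropy(f, alpha, threshold_policy))
```

since the masses in the orbit tails decay geometrically, pruning below a small tolerance keeps the support finite without materially affecting the estimate.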
in comparison ,example [ eg : errorbound ] shows that the direct estimation method has greatly superior performance .an improvement is to use the fact that the limiting distribution is discrete , and store a list of pairs containing the locations and masses of discrete points .since any starting point moves to either 0 or 1 in one step , at the iteration , the list of points must contain at least the first points in the orbit of either 0 or 1 .each such point requires a separate calculation at each iteration , and thus the number of computations is rather than as for algorithm [ alg : entropy ] .since the number of iterations corresponds to the last point in the orbit of 0 or 1 which is represented , for any given , this method differs from the direct computation method only in the masses on these points , thus we would expect the relationship between precision and number of iterations to be similar . since the simulation method has quadratically growing number of computations , this would suggest that it is slower than the direct computation method , and indeed , this is also indicated by empirical trials .we will use the direct computation method of estimating limiting expected entropy for all of our results . the problem of finding the policy which minimises limiting expected entropy is made much easier by restricting the search space to the set of threshold policies , as these can be represented by a single number representing the threshold , and a sign representing which observation process is used on which side of the threshold .the simplest approach is to pick a collection of test thresholds uniformly in ] , so moving the threshold does not change the policy as long as it does not move past a point in . as shown in figures [ fig : regioni][fig : regionvi ] , points in tend to be quite far apart , and thus the naive approach will cause a large number of equivalent policies to be tested . on the other hand , points in close to the accumulation points are closely spaced , so even with a very fine uniform subset , some policies will be missed when the spacing between points in becomes less than the spacing between test points . a better way is to decide on a direction in which to move the threshold , and select the test point exactly at the next point in the previous realisation of in the chosen direction , so that every threshold in between the previous point and the current point gives a policy equivalent to the previous policythis ensures that each equivalence class of policies is tested exactly once , thus avoiding the problem with the naive method .however , a new problem is introduced in that the set of test points depends on the iteration number , which determines the number of points of that are considered .this creates a circular dependence , in that the choice of depends on the desired error bound , the error bound depends on the policy , and the set of policies to be tested depends on .we can avoid this problem by adapting proposition [ prop : errorbound ] to a uniform error bound across all threshold policies .[ prop : errorboundthreshold ] for a threshold policy and , the error is where is the smallest integer such that , and first note that exists , since by lemma [ lem : eta0bigger ] , iterations of and converge to respectively . 
using proposition [ prop : errorbound ] , it suffices to prove that for , it is not possible for and , since this would mean and , which gives the ordering , but and are intervals for a threshold policy .hence , either or for some .if , then .since , this gives , as required .a similar argument holds in the case .the existence of this uniform bound for in the threshold case is closely related to proposition [ prop : exception ] , which states that the exception case 1 , where and , can not occur in a threshold policy . in this exceptional case ,proposition [ prop : entropyformula ] does not hold , as the denominator is zero , and hence for all .the fact that this can not occur in a threshold policy is the key ingredient of this uniform bound .now that we have an error bound which does not depend on the policy , we can determine a uniform number of iterations that will suffice for estimating the limiting expected entropy for all threshold policies .this reduces the search space to a finite one , as each point in the orbits of 0 and 1 must be in one of the two policy regions , hence , there are at most policies. most of these will not be threshold policies , but since orbit points need not be ordered , there is no obvious bound on the number of threshold policies that need to be checked .simulation results later in this section will show that in most cases , the number of such policies is small enough to be computationally feasible .[ def : orientation ] the * orientation * of a threshold policy is the pair of whether is to the left or right of , and whether the threshold is included in the left or right interval .let ] , ] denote the four possibilities , with the square bracket indicating inclusion of the threshold and round bracket indicating exclusion . our strategy for simplifying the space of threshold policies that need to be considered is to note that the policy only matters on , the support of the invariant measure .although depends on the policy , for a given orientation and threshold , any policy with the same orientation and some other threshold such that no points of lie between and is an equivalent policy , in that sense that the invariant measure is the same , since no mass exists in the region where the two policies differ .thus , for each orientation , we can begin with , and at each iteration , move the threshold left past the next point in , since every threshold in between is equivalent to the previous threshold .although changes at each step , this process must terminate in finite time since we already showed that there are only finitely many policies for any given , and by testing equivalence classes of policies only once , it is likely to that far fewer steps are required than the bound .furthermore , since is a discrete set , every threshold policy has an interval of equivalent threshold policies , so we can assume without loss of generality that the threshold is contained in the interval to the right , that is , only test the orientations ] .[ alg : optimalthreshold ] finding the optimal threshold policy . 1 .find , the smallest integer such that , by repeated application of and to 0 and 1 respectively ; 2 .prescribe an error and determine the number of iterations 3 .start with the policy and ] or ] with is equivalent to ] with is equivalent to ] , each of whose endpoints is identified with an endpoint of the other interval , which is topologically equivalent to a circle . 
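a compact sketch of this search is given below. the callables evaluate_entropy and orbit_support stand for the limiting expected entropy of a threshold policy and the support of its limiting measure (both computable as sketched earlier), and the orientation labels are only mnemonic strings.

```python
def optimal_threshold(evaluate_entropy, orbit_support, max_steps=100):
    """schematic version of algorithm [alg:optimalthreshold]:
    for each orientation, start at threshold 1 and step left past one support
    point at a time, so that each equivalence class of threshold policies is
    evaluated exactly once."""
    best = (float("inf"), None, None)
    for orient in ("right-interval-0", "right-interval-1"):
        theta = 1.0
        for _ in range(max_steps):
            h = evaluate_entropy(theta, orient)
            if h < best[0]:
                best = (h, theta, orient)
            pts = [t for t in orbit_support(theta, orient) if t < theta]
            if not pts:
                break   # no support point left of the threshold: the rest are equivalent
            theta = max(pts) - 1e-9   # step just past the next support point to the left
    return best

# toy stand-ins so the sketch runs; replace with the true quantities
toy_support = lambda theta, orient: [0.1, 0.35, 0.6, 0.8]
toy_entropy = lambda theta, orient: (theta - 0.4) ** 2 + (0.05 if orient == "right-interval-1" else 0.0)
print(optimal_threshold(toy_entropy, toy_support))
```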
to be technically correct, we note that identifying 0 and does not present a problem , since we can simply extend the interval to ] , while the bottom line represents the orientation ] and , so we can paste them together , and similarly for the left end of the top line and the right end of the bottom line .hence , we see that the set of threshold policies is topologically a circle.,title="fig : " ] + 0 1 + ] .the right end of the top line and the left end of the bottom line are both the policy ] with or ] with .this is the most difficult to understand of the threshold policies , as the orbits do not converge to the accumulation points and , but rather , oscillate around the threshold . , , , .entropy is 0.3251 .note that the masses do not converge to the accumulation points . ] * region iii * ( green ) : ] with .when , every policy here is equivalent to the all- policy , since the mass in the orbit of 1 approaches 0 ., , , .entropy is 0.3337 . under the evolution function, any mass eventually enters the white region since it contains both accumulation points in its interior , after which it can not escape , hence in the limit , there is zero mass in the orbit of 1 , and the policy is equivalent to the all- policy . ]* region iv * ( cyan ) : ] with .when , every policy here is equivalent to the policy and . , , , .entropy is 0.3265 .note that the orbit of 0 converges to while the orbit of 1 converges to . ]* region vi * ( magenta ) : ] , and therefore on the subinterval ] , , which is equal to by symmetry .hence , the inequality ( [ eqn : symmetricineq ] ) reduces to , that is , . and in relation to 0 , , and 1 .all positions are fixed except that may be to the left of . since lies to the left of and the diagram is symmetric , has lower entropy than . since lies between and and entropy is concave , has higher entropy than .hence the minimises the entropy at .,title="fig : " ] + 0 1 next , we show that for each and any other policy , let . since , by ( [ eqn : specialcase ] ), is decreasing , while is increasing .we have already established that and , which implies and respectively .this shows that . for ,write since and , decreasing increases , and the same is true for , since in that case we can write the expression in the same way with .this proves ( [ eqn : coeffdominance ] ) . using ( [ eqn : coeffdominance ] ) and , for any , since identically , the second series vanishes as , while the first series is always non - negative by ( [ eqn : coeffdominance ] ) , hence .thus , this proves the required minimisation .note that the proof above relies heavily on the fact that equality is attained in ( [ eqn : symmetrich ] ) .this occurs only in the symmetric case , and thus this approach does not generalise readily .the complexity of the proof in the symmetric case is indicative of the difficulty of the problem in general , and thus highlights the importance of the empirical description provided by figure [ fig : thresholdregions ] .in the course of performing the computations to generate figure [ fig : thresholdregions ] , we noticed that entropy is unimodal with respect to threshold , with threshold considered as a circle as in figure [ fig : thresholddomain ] . 
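the check behind this observation is a simple scan: evaluate the limiting expected entropy on a grid of thresholds and count the local minima around the circle. the profile entropy_of_threshold below is a smooth toy stand-in for the true map from threshold to limiting expected entropy.

```python
import math

def circular_local_minima(values):
    """count strict local minima of a sequence read around a circle."""
    n = len(values)
    return sum(1 for i in range(n)
               if values[i] < values[(i - 1) % n] and values[i] < values[(i + 1) % n])

# toy profile standing in for threshold -> limiting expected entropy
entropy_of_threshold = lambda t: 0.4 + 0.1 * math.cos(2 * math.pi * (t - 0.3))
grid = [i / 200.0 for i in range(200)]
print(circular_local_minima([entropy_of_threshold(t) for t in grid]))   # 1, i.e. unimodal
```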
while we can not prove this analytically , it is true for each of the 152000 points in the parameter space considered .this allows some simplification in finding the optimal threshold policy , since finding a local minimum is sufficient .thus , we can alter algorithm [ alg : optimalthreshold ] to begin by testing only two policies , then testing policies in the direction of entropy decrease until a local minimum is found .however , the running time improvement is only a constant factor ; if we model entropy as a standard sinusoid with respect to threshold , then the running time decreases by a factor of 3 on average .the problem of determining the optimal general policy is much more difficult , due to the complexity of the space of general policies .since a policy is uniquely determined by the value of the policy function at the orbit points , this space can be viewed as a hypercube of countably infinite dimension , which is much more difficult to study than the space of threshold policies , which is a circle .one strategy is to truncate the orbit and consider a finite dimensional hypercube , justified by the fact that orbit points have masses which decay geometrically , and thus the tail contributes very little .however , a truncation at ( that is , force the policy to be constant on , and similarly for the orbit of 1 ) gives possible policies , which is still far too large to determine optimality by checking the entropy of each policy .the next approximation is to only look for locally optimal policies , in the sense that changing the policy at each of the truncated orbit points increases entropy , and hope that by finding enough such locally optimal policies , the globally optimal policy will be among them .since a hypercube has very high connectivity , regions of attraction tend to be large , which heuristically suggests that this strategy will be effective .[ alg : localoptimum ] finding a locally optimal truncated policy . 1 .pick , and a starting policy , expressed as a pair of sequences of binary digits , with ; 2 .cycle through the digits , flipping the digit if it gives a policy with lower entropy , otherwise leaving it unchanged ; 3 .if the previous step required any changes , repeat it , otherwise a locally optimal truncated policy has been found .we picked since this allows a policy to be easily expressed as two unsigned 64-bit integers , and for each of the 152000 uniformly spaced parameters of figure [ fig : thresholdregions ] , we generated 10 policies uniformly on the hypercube and applied algorithm [ alg : localoptimum ] .none of the locally optimal policies for any of the parameter values had lower entropy than the optimal threshold policy from figure [ fig : thresholdregions ] , and on average 98.3% of them were equivalent to the optimal threshold policy , up to a precision of 0.1% , indicating that the optimal threshold policy is locally optimal with a very large basin of attraction , which strongly suggests that it is also the globally optimal policy . in the special case ,the infimum of entropy attainable under threshold policies is the same as that under general policies .the fact that a large proportion of locally optimal policies have globally optimal entropy gives a new method for finding the optimal policy . 
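a minimal sketch of this bit-flip local search is given below; entropy_of is assumed to return the limiting expected entropy of a truncated policy, for instance via the direct computation sketched earlier, and the toy objective only makes the sketch runnable.

```python
import random

def local_search(entropy_of, n_bits=64, seed=0):
    """hill-climbing over truncated policies, in the spirit of algorithm [alg:localoptimum]:
    a policy is a pair of bit-vectors giving the action at the first n_bits points of
    the orbits of 0 and of 1."""
    rng = random.Random(seed)
    policy = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(2)]
    best = entropy_of(policy)
    changed = True
    while changed:
        changed = False
        for orbit in (0, 1):
            for k in range(n_bits):
                policy[orbit][k] ^= 1          # flip one decision
                h = entropy_of(policy)
                if h < best:
                    best, changed = h, True    # keep the flip
                else:
                    policy[orbit][k] ^= 1      # revert
    return best, policy

# toy objective standing in for the true limiting expected entropy
toy = lambda p: 0.01 * sum(p[0]) + 1e-4 * sum((k + 1) * b for k, b in enumerate(p[1]))
print(local_search(toy, n_bits=8)[0])
```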
by picking 10 random truncated policies and running algorithm [ alg : localoptimum ], at least one of them will yield an optimal policy with very high probability .empirical observations suggest that this method is slower than algorithm [ alg : optimalthreshold ] on average , but since the success rate remains high while algorithm [ alg : optimalthreshold ] becomes significantly slower as approaches 1 , this method is a better alternative for some parameter values . .darkness increases with the proportion of simulated locally optimal policies which have the same entropy as the optimal threshold policy , up to a precision of 0.1% .the average is 9.83 out of 10 , but the distribution is far from uniform local optima are exceedingly likely to be the same as the threshold optimum for some parameter values and exceedingly unlikely for others .the boundaries are approximately those of the threshold regions ( see figure [ fig : thresholdregions ] ) , with some imprecision due to the non - deterministic nature of the simulation data . ]one last policy of interest is the greedy policy . in the previous sections, we considered a long term optimality criterion in the minisation of limiting expected entropy , but in some cases , it may be more appropriate to set a short term goal . in particular , one may instead desire to minimise expected entropy at the next step , in an attempt to realise maximal immediate gain while ignoring future implications .[ def_greedy ] the * greedy * policy is the policy such that the expected entropy after one observation is minimised .up to an exchage of strict and non - strict inequalities , this is given by :\alpha_0(z)h(r_0(z))<\alpha_1(z)h(r_1(z))\},\\ & a_1=\{z\in[0,1]:\alpha_0(z)h(r_0(z))\ge\alpha_1(z)h(r_1(z))\}.\end{aligned}\ ] ] the greedy policy has the benefit of being extremely easy to use , as it only requires a comparison of two functions at the current information state .since these functions are smooth , efficient numerical methods such as newton - raphson can be used to determine the intersection points , thus allowing the policy to be described by a small number of thresholds .in fact , only one threshold is required , as computational results show the greedy policy always a threshold policy . using the 152000 uniformly distributed data points from before , in each case the two functions defining the greedy policy crossed over at most once .the greedy policy is always a threshold policy .note that .it may appear at first glance that the factor of violates symmetry , but recall that maps to under relabelling . using this identity, the intersection points that define the greedy policy satisfy , where .it is easy to see that is monotonic decreasing on ] , hence is a well - defined one - parameter family of functions mapping ] to itself with fixed points at 0 and 1 .since the range of is contained in , we can discount the endpoints and , hence it suffices to show that the equation has at most one solution for .convexity methods may help in this last step but we have not been able to complete the proof . even when the greedy policy is not optimal , it is very close to optimal .of the 152000 uniformly distributed data points in figure [ fig : greedyoptimal ] below , the greedy policy is non - optimal at only 6698 points , or 4.41% , up to an error tolerance of . 
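part of the appeal of the greedy policy is how cheaply it can be evaluated at a given information state. the sketch below traces the rule over a grid of states; alpha_a and r_a are illustrative stand-ins for the model's functions, and the anchoring outcome is taken to contribute zero entropy as in the text.

```python
import math

def h(z):
    """binary entropy in nats."""
    return 0.0 if z <= 0.0 or z >= 1.0 else -z * math.log(z) - (1 - z) * math.log(1 - z)

def greedy_action(z, alpha, r):
    """pick the observation process with the smaller expected one-step posterior entropy;
    with an anchoring outcome the expectation reduces to alpha_a(z) * h(r_a(z))."""
    return 0 if alpha[0](z) * h(r[0](z)) < alpha[1](z) * h(r[1](z)) else 1

# illustrative stand-ins for alpha_a and r_a
alpha = {0: lambda z: 0.6 - 0.2 * z, 1: lambda z: 0.4 + 0.2 * z}
r = {0: lambda z: 0.3 + 0.4 * z, 1: lambda z: 0.6 * z}
print([greedy_action(k / 10.0, alpha, r) for k in range(11)])
```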
on average ,the greedy policy has entropy 0.0155% higher than the optimal threshold policy , with a maximum error of 5.15% occuring at the sample point , , and .thus the greedy polices provides an alternative suboptimal policy which is very easy to calculate and very close to optimal . .light grey indicates that the greedy policy is the optimal threshold policy ; darker points indicate suboptimality with darkness proportional to error .similarly to figure [ fig : localoptimum ] , the suboptimal points lie on the boundaries of the threshold regions . ] we make a final remark that the likelihood of a locally optimal policy being globally optimal as shown in figure [ fig : localoptimum ] , and the closeness of the greedy policy to the optimal threshold policy as shown in figure [ fig : greedyoptimal ] , both exhibit a change in behaviour at the boundaries of the threshold regions as shown in figure [ fig : thresholdregions ] .this suggests that these regions are indeed qualitatively different , and are likely to be interesting objects for further study .this thesis presents initial groundwork in the theory of hidden markov models with multiple observation processes .we prove a condition under which the information state converges in distribution , and give algorithms for finding the optimal policy in a special case , which provides strong evidence that the optimal policy is a threshold policy . *the information state converges for an anchored hidden markov model with only ergodicity rather than positivity ; * the greedy policy is always a threshold policy ; * among threshold policies , the limiting expected entropy is unimodal with respect to threshold ; and * the optimal threshold policy is also optimal among general policies .possible approaches to these problems are likely to be found in , and .the author was not aware of these papers under after the date of submission , and thus was unable to incorporate their methods into this thesis .better algorithms and error bounds for finding the optimal policy are also a worthwhile goal .although our algorithms are computationally feasible with reasonable prescribed errors , our focus was on finding workable rather than optimal algorithms , and thus there is plenty of room for improvement .baum and t. ptrie . statistical inference for probabilistic functions of finite state markov chains ._ annals of mathematical statistics_. volume 6 , number 37 , pages 15541563 , 1966 .o. capp , e. moulines and t. rydn ._ inference in hidden markov models_. springer , new york , 2005 .g. casella and r.l ._ statistical inference _ , second edition .thomson , pacific grove , 2002 .t.m . cover and j.a ._ elements of information theory_. wiley , new york , 1991 .elliott , l. aggoun and j.b ._ hidden markov models : estimation and control_. springer - verlag , new york , 1995 .evans and v. krishnamurthy .optimal sensor scheduling for hidden markov model state estimation ._ international journal of control_. volume 18 , number 74 , pages 17371742 , 2001 .l.a . johnston and v. krishnamurthy. opportunistic file transfer over a fading channel : a pomdp search theory formulation with optimal threshold policies ._ ieee transactions on wireless communications_. volume 5 , number 2 , pages 394205 , 2006. v. krishnamurthy .algorithms for optimal scheduling and management of hidden markov model sensors ._ ieee transactions on signals processing_. volume 6 , number 50 , pages 13821397 , 2002. v. krishnamurthy and d. 
djonin .structured threshold policies for dynamic sensor scheduling a pomdp approach ._ ieee transactions on signals processing_. volume 5 , number 10 , pages 49384957 , 2007 .f. le gland and l. mevel .exponential forgetting and geometric ergodicity in hidden markov models . _ mathematics of control , signals and systems_. volume 13 , pages 6393 , 2000 .macphee and b.p .optimal search for a moving target ._ probability in the engineering and informational sciences_. volume 9 , number 2 , pages 159182 , 1995 .meyn and r.l .tweedie . _ markov chains and stochastic stability_. springer - verlag , london , 1993 .rabiner . a tutorial on hidden markov models and selected applications in speech recognition ._ proceedings of the ieee_. volume 2 , number 77 , pages 257286 , 1989 .sensor scheduling for optimal observability using estimation entropy ._ proceedings of the fifth annual ieee international conference on pervasive computing and communications workshops_. march 2007 .d. sinno and d. cochran .dynamic estimation with selectable linear measurements ._ proceedings of the 1998 ieee international conference on accoustics , speech and signal processing_. may 1998 ._ supermodularity and complementarity_. princeton university press , 1998 .separation of estimation and control for discrete time systems ._ proceedings of the ieee_. volume 59 , number 11 , 1971 ._ hidden markov models with multiple observation processes_. honours thesis , university of melbourne , 2007 .
we consider a hidden markov model with multiple observation processes , one of which is chosen at each point in time by a policy , a deterministic function of the information state , and attempt to determine which policy minimises the limiting expected entropy of the information state . focusing on a special case , we prove analytically that the information state always converges in distribution , and derive a formula for the limiting entropy which can be used for calculations with high precision . using this formula , we find computationally that the optimal policy is always a threshold policy , allowing it to be easily found . we also find that the greedy policy is almost optimal .
binary exponential backoff ( beb ) is widely adopted as a key collision resolution mechanism in popular random - access networks , such as ieee ethernet and ieee wireless local area network ( wlan ) . with exponential backoff ( eb ) ,a packet is transmitted after waiting a number of time slots randomly selected from a contention window , the size of which increases multiplicatively on collisions .mathematically , the contention window after consecutive collisions of a packet . here, with is the backoff function for eb must be an increasing function for the backoff process to be meaningful .therefore , must be larger than unity . ] and is the initial contention window size .beb is a special case with .most of the research attention has been focused on investigating the throughput provided by eb .thanks to the seminal work of bianchi , the throughput is now well understood through a fixed point equation that characterizes the backoff process .subsequently , shows that the throughput of eb is stable against the network size in the sense that the throughput converges to a nonzero constant when the network size goes to infinity ( assuming no retry limit is enforced ) .throughput stability has been the most intriguing aspect of eb , and has enabled eb - based mac protocols to support a wide range of throughput oriented applications regardless of the network congestion level . with the recent boom of delay - sensitive multimedia applications such as voip and video conferencing , research interestsare being shifted to other aspects of system performance such as delay , delay jitter , and short - term fairness .indeed , it can be shown that delay jitter significantly affects the users perception of quality of real - time multimedia services .eb , despite its good throughput performance , has been shown to suffer poor performance in delay and short - term fairness .more specifically , eb could induce divergent ( i.e. , infinite ) second- and high - order moments of medium access delay , yielding extraordinarily large delay jitter and severe transmission starvation of users .essentially , the medium access delay follows a power law distribution , implying that a non - negligible number of packets may experience much larger delay than the average . as a motivating example, we monitor the packet transmission during a second period in a -node ieee g wlan , where beb is adopted .alarmingly , out of the nodes experience severe transmission starvation , as illustrated in fig . .the figure shows that node and perceive starvation for a duration of and seconds , respectively .even worse , node barely receives any service throughout the entire simulation time .consecutive seconds for of nodes in a 802.11 g system .assume that all nodes are continuously backlogged and no retry limit is enforced.,width=336 ] in an attempt to address the above issues , this paper seeks to understand the following important questions . 1 .what is the root cause of the power - law delay distribution of eb .is it an intrinsic issue of eb , or can be avoided by adjusting the backoff exponent .2 . if the problem is intrinsic with eb , can we find an alternative backoff function that does not suffer the same problem .in general , what is the necessary and sufficient condition for a backoff function to have convergent delay moments , i.e. , not to experience power law delay .3 . 
is it possible to achieve throughput stability and convergent delay moments at the same time by certain backoff functions .if not , are there any backoff functions that exhibit convergent delay moments and good throughput performance at the same time when the network size is within a finite and practical range . in the literature , has been partly addressed . first finds that the medium access delay distribution of eb is heavy - tailed when retry limit is infinite , regardless of the backoff exponent . later proves that the medium access delay indeed follows a power law distribution , the slope of which is obtained as a function of the backoff exponent and the collision probability .noticeably , the effect of power law delay can not be eliminated even if a finite retry limit k is enforced in practical systems . and observe that the medium access delay follows a truncated power law distribution , implying that small retry limit does not eliminate the power law characteristics induced by eb .this directly translates to high packet loss rate , if packets have to be discarded upon reaching a retry limit . indeed ,our simulation results show that beb suffers packet loss rate in a -node network with , leading to an equal percentage reduction of throughput as that with .the analysis in these prior work can be treated as a special case of the analysis for general backoff functions in this paper . as to , there are some initial attempts to replace eb with other more moderate backoff algorithms , such as linear backoff ( ) and polynomial backoff ( ) .observations made by showed that linear and polynomial backoffs with appropriate parameter settings can improve upon beb in terms of throughput and delay performance . observed that pb can achieve a similar saturation throughput as eb but with much smaller delay jitter .however , to the authors best knowledge , no analysis was provided to explain the root cause behind the phenomenon . to fully address the important questions , this paper attempts to uncover the fundamental laws that govern the throughput stability and tail distribution of medium access delay .our main contributions are detailed below .* we find that the heaviness of the tail distribution of medium access delay is closely related to how rapidly the contention window is augmented with each collision .specifically , eb always induces power law delay distribution regardless of the choice of backoff exponent .meanwhile , power law delay is mitigated as long as the backoff function is slower than exponential functions , i.e. , for all , where will be defined more rigorously later .this explains the observations made by and .furthermore , we find that delay distribution becomes light - tailed if the backoff function increases linearly or sub - linearly .* we prove that throughput stability is achieved only when the backoff function is at least as fast as an exponential function , i.e. , for some , where will be defined rigorously later . 
in other words , pb fails to sustain non - zero asymptotic throughput , although they yield convergent delay moments .this presents a fundamental tradeoff between throughput stability and the heaviness of tail delay distribution .* we find that super - linear polynomial backoff achieves high throughput across a wide range of practical network size , despite its throughput instability asymptotically .this , together with our findings in , suggests that super - linear polynomial backoff is a better alternative than eb in supporting broadband network applications that call for both high throughput and low delay and delay jitter .our study on the delay tail distribution of backoff process is not only for theoretical interest but also closely related to engineering applications . in the past few years, a number of modified exponential backoff schemes , including quality of service enhancing protocols , have been proposed to improve the delay performance of conventional beb .for instance , proposed a lmild backoff algorithm , in which the contention window doubles upon collisions whereas decreases linearly upon successful transmissions . besides, the enhanced distributed channel access ( edca ) scheme , which is adopted in the standard , gives priority to delay - sensitive applications by setting a shorter contention window and shorter arbitration inter - frame space . despite their respective contributions , they do not eliminate the fundamental feature of power law delay distribution induced by exponential backoff , and thus may still perceive relatively large delay jitter or high packet loss rate .instead , we propose to fundamentally solve the power law delay problem by replacing eb with pb .meanwhile , we show that high throughput can be achieved in a wide range of practical network size through parameter tuning of pb . in this sense, we can mitigate the power law delay distribution of eb without hurting the advantageous throughput performance .our simulation results show that pb with reasonable backoff parameter outperforms beb regardless of the existence of the retry limit . with current hardware processing power ,the implementation of pb in random access networks incurs minor extra cost .therefore , we believe it is a promising algorithm with broad applications in future random access networks . the rest of the paper is organized as follows .we briefly review the backoff protocols and introduce some background information in section ii . in section iii, the main results of this paper is summarized . in section iv, we analyze the power law tail distribution of medium access delay for general backoff protocols . in sectionv , we derive the condition to sustain stable throughput .simulation results are presented in section vi , where we show that pb is a better alternative than eb in random access networks .finally , the paper is concluded in section vii .in this section , we first briefly review the operation of general backoff protocols .we then introduce the notion of medium access delay and some important metrics that will be used in later sections to evaluate the performance of different backoff schemes .we consider a fully connected wlan consisting of continuously backlogged nodes .illustrated in fig . , the transmission of nodes is coordinated by a backoff mechanism . 
at each packet transmission ,a node sets a backoff counter value by randomly choosing an integer from a contention window ]is finite for all and is infinite for all .in fact , the tail decaying rate of a probability distribution is closely related to the convergence of moments .specifically , a finite ] are finite for all , the tail distribution of decays faster than all power law functions and belongs to region or in fig . . in this case, we say that the power law distribution is mitigated .* definition 2 : * a probability distribution is heavy - tailed distribution if its moment generating function diverges , i.e. , using taylor expansion to ( [ 21 ] ) , it holds that .\ ] ] * remark 2 : * the tail decay rate of a heavy - tailed distribution is slower than any exponential functions . from ( [ 26 ] ) , any divergent moment ] is determined by the most significant term in the rhs of ( [ 9 ] ) , i.e. .\end{aligned}\ ] ] that is to say , the convergence of medium access delay is equivalent to that of the integrated backoff countdown process .similarly , let denote the total number of backoff countdowns before the packet successfully transmits .if a packet is successfully transmitted after collisions , the total number of backoff countdowns denoted by , is where is the backoff counter value at the backoff stage .therefore , the moment of is =(1-p_c)\sum_{j=0}^{\infty}p_c^j e\left[\left(\sum_{k=0}^j b_k\right)^n\right ] .\end{aligned}\ ] ] recall that .besides , the countdown time slot is bounded as therefore , ( [ 10 ] ) is lower bounded by \ ] ] meanwhile upper bounded by .\ ] ] it can be seen that ( [ 10 ] ) converges if and only if ] is equivalent to that of ] in the following discussions .theorem presents the relation between power law delay distribution and backoff function growth rate .* theorem 1 : * a random - access network with an increasing backoff function suffers a power law delay if such that , and does not suffer a power law delay if , ._ proof _ : we first prove that a suffers a power law delay if such that . to prove the argument, we only need to show that there exists an infinite ] , where &\geq\left(1-p_c\right)\sum_{j=0}^{\infty}p_c^j \left(\sum_{k=0}^{j}e\left[b_k\right]\right)^n\\ & \geq \left(1-p_c\right)\sum_{j=0}^{\infty}\left\{p_c^j \sum_{k=0}^{j}\left(e\left[b_k\right]\right)^n\right\}\\ & = \frac{1}{2^n}\left(1-p_c\right)\sum_{j=0}^{\infty}\left\{p_c^j \sum_{k=0}^{j}\left(w_k-1\right)^n\right\}. \end{aligned}\ ] ] the last equality holds because =\frac{w_k-1}{2} ] becomes infinite when , or equivalently .this leads to the proof that suffers a power law delay then , we prove that does not suffer a power law delay if , .this is equivalent to show that ] is upper bounded by &\leq\left(1-p_c\right)\sum_{j=0}^{\infty}\left\{p_c^j\left(j+1\right)^{n-1 } \sum_{k=0}^{j}e\left[b_k^n \right]\right\}. \end{aligned}\ ] ] by assumption , is uniformly generated from ] into ( [ 12 ] ) , we have \leq\frac{1-p_c}{n+1}\sum_{j=0}^{\infty}\left\{p_c^j\left(j+1\right)^{n-1}\cdot \sum_{k=0}^{j}w_k^{n}\right\}.\ ] ] by definition , for any and there exists a , such that for all .then , the following inequality holds for for all and , \leq\frac{\left(1-p_c\right)w_0^n}{n+1}\biggl\{\sum_{j=0}^{k_r}\left[p_c^j\left(j+1\right)^{n-1}\cdot \sum_{k=0}^{j}g^n(k)\right]+ \\ & \sum_{j = k_r+1}^{\infty}\left[p_c^j\left(j+1\right)^{n-1 } \left(\sum_{k=0}^{k_r}g^n(k ) + \sum_{k = k_r+1}^{j}c^nr^{nk}\right)\right]\biggr\}. 
\end{aligned}\ ] ] the second term in the rhs of ( [ 61 ] ) , which determines the convergence of the upper bound , can be expressed as where is a finite constant for a given .noticeably , ( [ 62 ] ) is the upper bound on ] is finite as long as there exists an such that ( [ 62 ] ) is finite .notice that the first term in ( [ 62 ] ) is convergent for all .then , ( [ 62 ] ) is finite if and only if is finite .this can be achieved by selecting a in other words , we can always find a finite upper bound of ] is finite if and only if .this implies that yields a power law delay if , and a non - power law delay if ._ proof : _ see appendix a. the backoff functions discussed in corollary are special cases of the general ones discussed in theorem , in that the limit of exists as .it is easy to check that for eb ( ) and for pb ( , ) and for seb ( , ) .thus , eb suffers a power law delay , while pb and seb do not .this is consistent with theorem .moreover , following corollary , we see that ] for eb , seb and pb , respectively .this means that the throughput of the three backoff schemes are very similar .we can see in fig . with eb , the difference in the amount of service received is significant across different nodes .the number of successfully transmitted packets by a node can vary all the way from to .worse still , severe transmission starvation is observed with eb . on averagemore than of nodes transmit very few or even zero packets during the entire simulation time . in fig . , seb performs much better than eb , where the disparity of successful transmission is smaller and the maximum number of successfully transmitted packets is reduced to around .however , we can still observe around of the nodes in transmission starvation .this is because large delay can still occur with non - negligible probability when seb is adopted , although the power law tail is mitigated . in vivid contrast, we can see in fig . the range of the number of successful transmissions is significantly reduced and no transmission starvation occurs when pb is implemented .this indicates that pb achieves the fairest air time allocation among nodes .such observation is consistent with our results in fig . that pb has the lightest " tail of delay distribution among the three schemes . with different backoff parameters , we plot the normalized saturation throughput of pb in fig . when the number of contending nodes varies from to .besides , the saturation throughput of beb is also presented for comparison .we can see that the throughput of pb gradually decreases as increases .in fact , the throughput will decrease to zero when becomes significantly large . on the other hand, the throughput of exponential backoff converges to a constant as increases .these observations verify out analysis in theorem that the throughput is stable with eb while unstable with pb .however , we also see that pb with can sustain higher saturation throughput than beb for all .this implies that high efficiency can be obtained with pb in practical scenarios when the order of backoff exponent is set properly .therefore , we can safely enjoy the small delay jitter and better user fairness brought by pb without worrying about the instability of asymptotic throughput in practical systems . 
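the qualitative behaviour described above can be reproduced with a rough fixed-point sketch in the spirit of bianchi's mean-value model. the window functions, the stage cap and the single-success-per-slot throughput proxy below are illustrative simplifications rather than this paper's exact analysis.

```python
def saturation_throughput(n_nodes, window, n_stages=30, iters=500):
    """rough mean-value estimate for a general backoff: a node reaches stage k with
    probability p**k, waits on average (window(k)-1)/2 slots there, and its per-slot
    attempt rate tau must be consistent with p = 1-(1-tau)**(n_nodes-1)."""
    p = 0.5
    for _ in range(iters):
        attempts = sum(p ** k for k in range(n_stages))
        slots = sum(p ** k * (window(k) - 1) / 2.0 for k in range(n_stages))
        tau = attempts / (attempts + slots)
        p = 0.5 * p + 0.5 * (1.0 - (1.0 - tau) ** (n_nodes - 1))   # damped update
    return n_nodes * tau * (1.0 - tau) ** (n_nodes - 1)   # single-success probability per slot

beb = lambda k: 16 * 2 ** min(k, 6)    # binary exponential backoff, W0 = 16, capped window
pb = lambda k: 16 * (1 + k) ** 2       # quadratic polynomial backoff, W0 = 16
for n in (10, 50, 100):
    print(n, round(saturation_throughput(n, beb), 3), round(saturation_throughput(n, pb), 3))
```

the damped update is only there to keep the fixed-point iteration from oscillating; any other convergent update would serve the same purpose.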
)when varies.,width=336 ] simulation results in this section show that pb can achieve high throughput , smaller delay jitter and good user fairness at the same time within practical range of user population .it is therefore a better alternative than eb , especially for carrying real - time traffics with stringent delay requirements .in this paper , we have analyzed the tail delay distribution and throughput stability of general backoff functions .a tradeoff has been established between the tail decaying rate of medium access delay distribution and the stability of throughput . in particular, we found that power law delay distribution can be avoided if the backoff functions are slower " than an exponential function .examples of such slow " backoffs are pb and seb . in addition , the delay distribution becomes light tailed when linear - sublinear pb is used . on the other hand , non - zero asymptotic throughput is attainable only when backoff functions grow at least as fast as an exponential function , such as eb . for practical implementation , we show that pb obtains good throughput performance within a practical range of user population .meanwhile , all delay moments with pb are finite as opposed to the infinite delay moments with eb . as such, we advocate pb as a better alternative than eb , now that there are increasingly more multimedia applications with stringent delay requirements in the network ._ proof : _ we first show that the limit ^n}\ ] ] exists . since , it holds that , such that equivalently , we have then , it holds that the second term of the rhs of ( [ 70 ] ) can be rewritten as ^n } = \frac{1}{\sum_{k=0}^{k_0}\left[\frac{g(k)}{g(j+1)}\right]^n + \sum_{k = k_0 + 1}^{j}\left[\frac{g(k)}{g(j+1)}\right]^n},\ ] ] which is upper bounded by ^n + \sum_{k=1}^{j - k_0}\left(\frac{1}{\gamma+\epsilon}\right)^{kn } } \end{aligned}\ ] ] and lower bounded by ^n + \sum_{k=1}^{j - k_0}\left(\frac{1}{\gamma-\epsilon}\right)^{kn}}. \end{aligned}\ ] ] notice that ( ) can be made arbitrarily small ( large ) as .when , we see that the lower bound and upper bounds converge to as . otherwise , when , the lower bound and upper bounds converge to as . in both cases , next , we prove that ] .then , we prove that ] , which leads to the proof that ] into the rhs of ( [ 45 ] ) , & \geq \left(1-p_c\right)\left(\frac{w_0}{2}\right)^n\sum_{j=0}^{\infty}\left\{p_c^j \sum_{k=0}^{j}\left(1+k^b\right)^n\right\}\\ & \geq \left(1-p_c\right)\left(\frac{w_0}{2}\right)^n\sum_{j=0}^{\infty}\left\{p_c^j \sum_{k=0}^{j}k^{bn}\right\}\\ & \geq \left(1-p_c\right)\left(\frac{w_0}{2}\right)^n\sum_{j=0}^{\infty}\left\{p_c^j \int_{0}^{j } t^{bn } dt\right\}\\ & = \frac{1-p_c}{bn+1}\left(\frac{w_0}{2}\right)^n\sum_{j=0}^{\infty}p_c^j j^{bn+1}. \end{aligned}\ ] ] here , can be represented as where is a lerch s transcendent .when and , it holds that ( cf . , p. ) , where is a gamma function .correspondingly , we can write &\geq \frac{1-p_c}{bn+1}\left(\frac{w_0}{2}\right)^n\phi \left(p_c , -(bn+1 ) , 0\right)\\ & \approx\frac{1-p_c}{bn+1}\left(\frac{w_0}{2}\right)^n\gamma(bn+2 ) \left(\ln\frac{1}{p_c}\right)^{-\left(bn+2\right)}. \end{aligned}\ ] ] recall in ( [ 26 ] ) that is heavy - tailed if and only if =\infty,\ \ \forall \lambda>0.\ ] ] we substitute the inequality in ( [ 31 ] ) into the lhs of ( [ 38 ] ) , \\ \geq & \sum_{n=0}^{\infty } \frac{\lambda^n}{n ! } \text{e}\left[\lambda^n\right]l_{\text{min}}^n \\ \geq & \sum_{n=0}^{\infty } \frac{1-p_c}{bn+1}\left(\frac{w_0l_{\text{min}}\lambda}{2}\right)^n\frac{1}{n ! 
}\gamma(bn+2 ) \left(\ln\frac{1}{p_c}\right)^{-\left(bn+2\right)}. \end{aligned}\ ] ] the lhs of ( [ 38 ] ) is finite only if the lower bound is finite . using the ratio test to lower bound in ( [ 48 ] ) , we obtain the test parameter as the delay distribution of pb is heavy - tailed if is larger than for all .note that , when is a very large real positive number , gamma function can be well approximated by ( cf . , p. ) .we denote , then by lhospital s rule , .then , the rhs of ( [ 69 ] ) equals to therefore , we have accordingly , the test parameter in ( [ 30 ] ) is we can see that for all when .therefore , the delay distribution of a super - linear pb is heavy - tailed . for linear - sublinear pb with , however , we currently have not obtained conclusive analytical results to verify its heavy - tailed behavior .instead , we numerically calculate the probability mass function of and find it matches the features of light - tailed distribution when .the probability mass function can be obtained through calculating its probability generating function ( cf . , p. ) .we plot ) ] for some , indicating \thicksim e^{-\lambda_0 n} ] into ( [ 45 ] ) , then \geq \left(1-p_c\right)\left(\frac{w_0}{2}\right)^n\sum_{j=0}^{\infty}\left\{p_c^j \sum_{k=0}^{j}r^{nk^a}\right\}. \end{aligned}\ ] ] for any , and , there always so that as we have proved in previous subsection , the following inequality holds for pb with and , =\infty,\ ] ] . following ( [ 34 ] ) and( [ 35 ] ) , it suffices to claim to that the moment generating function of seb diverges , i.e. , =\infty,\ \ \forall \lambda>0.\ ] ] therefore , the distribution of medium access delay is heavy - tailed .y. j. zhang , s. c. liew , and d. r. chen , sustainable throughput of wireless lans with multipacket reception capability under bounded delay - moment requirements , " _ ieee transactions on mobile computing _ , vol9 , pp.1226 - 1241 , sept 2010 .a. kumar , e. altman , d. miorandi , and m. goyal , new insights from a fixed point analysis of single cell ieee 802.11 wlans , " _ ieee / acm transactions on networking _ ,vol.15 , no .588 - 601 , aug . 2007 .
exponential backoff ( eb ) is a widely adopted collision resolution mechanism in many popular random - access networks including ethernet and wireless lan ( wlan ) . the prominence of eb is primarily attributed to its asymptotic throughput stability , which ensures a non - zero throughput even when the number of users in the network goes to infinity . recent studies , however , show that eb is fundamentally unsuitable for applications that are sensitive to large delay and delay jitters , as it induces divergent second- and higher - order moments of medium access delay . essentially , the medium access delay follows a power law distribution , a subclass of heavy - tailed distribution . to understand and alleviate the issue , this paper systematically analyzes the tail delay distribution of general backoff functions , with eb being a special case . in particular , we establish a tradeoff between the tail decaying rate of medium access delay distribution and the stability of throughput . to be more specific , convergent delay moments are attainable only when the backoff functions grows slower than exponential functions , i.e. , when for all . on the other hand , non - zero asymptotic throughput is attainable only when backoff functions grow at least as fast as an exponential function , i.e. , for some . this implies that bounded delay moments and stable throughput can not be achieved at the same time . for practical implementation , we show that polynomial backoff ( pb ) , where is a polynomial that grows slower than exponential functions , obtains finite delay moments and good throughput performance at the same time within a practical range of user population . this makes pb a better alternative than eb for multimedia applications with stringent delay requirements . medium access control , backoff algorithms , wireless lan ( wlan ) , power law delay .
the design and management of green wireless networks has become increasingly important for modern wireless networks , in particular , to manage operating costs .futuristic ( beyond 5 g ) cellular networks face the dual challenges of being able to respond to the explosion of data rates and also to manage network energy consumption . due to the limited spectrum and large number of active users in modern networks ,energy - efficient distributed power control is an important issue .sensor networks , which have multiple sensors sending information to a common receiver with a limited energy , capacity have also recently surged in popularity .energy minimization in sensor networks has been analysed in many recent works .+ several of the above described systems have some common features : 1 .multiple transmitters connected to a common receiver .2 . lack of centralization or coordination , i.e. , a distributed and de - centralized network .3 . relevance of minimizing energy consumption or maximizing energy - efficiency ( ee ) .transmitters that have arbitrary data transmission .these features are present in many modern systems like a sensor network which has multiple sensors with limited energy connected in a distributed manner to a common receiver .these sensors do nt always have information to transmit , resulting in sporadic data transmission .another example would be several mobile devices connected to a hot - spot ( via wifi or even bluetooth ) . due to these features of the network ,inter - transmitter communication is not possible and the transmitters are independent decision makers .therefore , implementing frequency or time division multiple access becomes harder and a mac protocol ( with single carrier ) is often the preferred or natural method of channel access . in many existing works , both network - centric and user - centric approaches have been studied . in a network - centric approach , the global energy - efficiency ( gee )is defined as the ratio between the system benefit ( sum - throughput or sum - rate ) over the total cost in terms of consumed power . however ,when targeting an efficient solution in an user - centric problem , the gee becomes not ideal as it has no significance to any of the decision makers . in this case ,other metrics are required to reflect the individual interest of each decision maker .therefore , we redefine the gee to be the sum over individual energy - efficiencies as a suitable metric of interest .+ the major novelty of this work is in improving the * sum of energy - efficiencies * for a communication system with * all the listed features above*. in such a * decentralized and distributed network * , as each transmitter operates independently , implementing a frequency division or a time division multiple access is not trivial .therefore , we are interested in looking at a * mac system * where all transmitters operate on the same band. additionally , * ee * will be our preferred metric due to its relevance .this metric has been defined in as the ratio between the average net data rate and the transmitted power . in , the total power consumed by the transmitterwas taken into account in the ee expression to design distributed power control which is one of the most well known techniques for improving ee . 
however , many of the works available on energy - efficient power control consider the ee defined in where the possible presence of a queue at the transmitter is ignored .in contrast with the existing works , we consider a new generalized ee based on a cross - layer approach developed recently in .this approach is important since it takes into account : 1 ) a fixed cost in terms of power namely , a cost which does not depend on the radiated power ; and 2 ) * the presence of a finite packet buffer and sporadic packet arrival * at the transmitter ( which corresponds to including the 4th feature mentioned above ) .although providing a more general model , the distributed system in may operate at a point which is energy - inefficient . indeed , the point at which the system operates is a nash equilibrium ( ne ) of a certain non - cooperative static game .the present work aims at filling this gap by not only considering a cross - layer approach of energy - efficient power control but also improving the system performance in terms of sum of energy - efficiencies .nash bargaining ( nb ) solution in a cooperative game can provide a possible efficient solution concept for the problem of interest as it is pareto - efficient .however , it generally requires global channel state information ( csi ) .therefore , we are interested in improving the average performance of the system by considering long - term utilities .we focus then on repeated games ( rg ) where repetition allows efficient equilibrium points to be implemented . unlike static games which are played in one shot ,rg are a special case of dynamic games which consider a cooperation plan and consist in repeating at each step the same static game and the utilities result from averaging the static game utilities over time .there are two relevant dynamic rg models : finite ( frg ) and discounted ( drg ) .the frg is defined when the number of stages during which the players interact is finite . for the drg model ,the discount factor is seen as the stopping probability at each stage .the power control problem using the classic ee developed by goodman et al in has been solved with rg only in where authors developed an operating point ( op ) relying on individual csi and showed that rg lead to efficient distributed solution . here, we investigate the power control problem of a mac system by referring to rg ( finite and discounted ) where the utility function is based on a cross - layer approach .accordingly , we contribute to : 1 .determine the closed - form expressions of the minimum number of stages for the frg and the maximum discount factor for the drg .these two parameters identify the two considered rg .2 . determine a distributed solution pareto - dominating the ne and improving the system performances in terms of powers and utilities compared not only to the ne but also to the nb solution even for high number of users .3 . show that the rg formulation when using the new ee and the new op leads to significant gains in terms of social welfare ( sum of utilities of all the users ) compared to the ne .4 . show that the following aspects of the cross - layer model improve considerably the system performances when comparing to the goodman model even for large number of users : * the minimum number of stages in the cross - layer ee model can always be shorter than the minimum number of stages in the goodman ee formulation . 
*the social welfare for the drg in the cross - layer model decreases slightly when the number of users increases while it decreases considerably in the goodman model . 5 .show that in real systems with random packet arrivals , the cross - layer power control algorithm outperforms the goodman algorithm and then the new op with the cross - layer approach is more efficient .this paper is structured as follows . in section [ sec :problemstatement ] , we define the system model under study , introduce the generalized ee metric and define the non - cooperative static game .this is followed ( section [ sec : nashbargainingsolution ] ) by the study of the nb solution . in section [ sec : repeatedgamesformulation ] , we introduce the new op , give the formulation of both rg models ( frg and drg ) and determine the closed - form expressions of the minimum number of stages and the maximum discount factor as well .numerical results are presented in section [ sec : numericalresults ] and finally we draw several concluding remarks .we consider a mac system composed of small transmitters communicating with a receiver .the transmitter transmits a signal with a power ] denotes the efficiency function which is sigmoidal and corresponds to the packet success rate verifying and .authors of were the first to consider a total transmission cost of the type _ radiated power _( ) _ consumed power _ ( ) to design distributed power control strategies for multiple access channels as follows : in , a more generalized ee metric has been developed by considering a packet arrival process following a bernoulli process with a constant probability and a finite memory buffer of size .the new ee expression is given by : where the function identifies the packet loss due to both bad channel conditions and the finiteness of the packet buffer and is expressed as follows : where is the stationary probability that the buffer is full and is given by : with : it is important to highlight that this new generalized ee given by ( [ eqeegen ] ) includes the conventional case of ( [ eqchi ] ) when making .+ the static cross - layer power control game is a non - cooperative game which can be defined as a strategic form game . _the game is defined by the ordered triplet where is the set of players ( the transmitters ) , are the corresponding sets of strategies with ] and the utility function is continuous , the region is compact for a given channel configuration .since it is generally not convex , time - sharing has been a solution to convexify it . in order to illustrate the main idea of this technique applied to our problem ,let us consider a system of 2 users . during a time fraction ,the users use the powers to have utilities . during a time fraction , they use another combination of powers to have .thus , the new achievable utilities region ( for the 2-users system ) is : we define the pareto boundary ( the outer frontier ) of the convex hull of .[ fig : achievableregion ] shows the convexified achievable utilities region with the ne point , the nb solution and the nash curve ( both will be defined next ) .let define the improvement region of utilities versus the ne and it is given by : \}.\ ] ] the nb solution belongs to the region . 
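before turning to the bargaining solution itself, the utilities used above can be written in a small numerical form. everything in the sketch below (the sigmoidal efficiency, the constant loss term standing in for the buffer-overflow probability, the noise level, gains and powers) is an illustrative placeholder rather than the exact expressions of the cross-layer model.

```python
import math

def sinr(i, p, g, noise=1e-11, spreading=2):
    """signal-to-interference-plus-noise ratio of transmitter i on the shared band."""
    interference = sum(g[j] * p[j] for j in range(len(p)) if j != i) / spreading
    return g[i] * p[i] / (noise + interference)

def cross_layer_ee(i, p, g, rate=1e6, fixed_power=1e-3, loss=0.0, m=10):
    """schematic generalised energy-efficiency of transmitter i:
    useful rate * sigmoidal packet success * (1 - loss), over radiated plus fixed consumed power.
    the form (1-e^{-x})^m and the loss term are illustrative choices only."""
    success = (1.0 - math.exp(-sinr(i, p, g))) ** m
    return rate * success * (1.0 - loss) / (p[i] + fixed_power)

g = [2e-7, 5e-8]      # example channel gains
p = [0.01, 0.03]      # example transmit powers in watts
print([round(cross_layer_ee(i, p, g)) for i in range(2)])
```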
here, in the power control game , there exists a unique nb solution denoted as and is given by : } } \prod_{i=1}^{n}{(u_{i}-u_{i}^{ne } ) } , \label{30}\ ] ] since the ne can always be reached and the achievable utility region is a compact convex set , the nb solution exists .it is unique since it verifies certain axioms : individual rationality and feasibility , independence of irrelevant alternatives , symmetry , pareto optimality ( efficiency ) and independence of linear transformations .the nb solution results from the intersection of the pareto boundary ( ) with the nash curve whose form is where is a constant chosen such that there is precisely one intersection point ( see fig . [ fig : achievableregion ] ) .although the nb solution is pareto - efficient , it generally requires global csi at the transmitters due to the nash product introducing all the users utilities . for this reason , we are looking for another efficient solution through the study of the dynamic rg .rg consist in their standard formulation , in repeating the same static game at every time instance and the players seek to maximize their utility averaged over the whole game duration .repetition allows efficient equilibrium points to be implemented and which can be predicted from the one - shot static game according to the folk theorem , which provides the set of possible nash equilibria of the repeated game . in a repeated game, certain agreements between players on a common cooperation plan and a punishment policy can be implemented to punish the deviators . in what follows, we introduce the new op and characterize the two rg models .the new op consists in setting to a constant which is unique when maximizing the expected sum utility over all the channel states .it is given by : .\label{14}\ ] ] the power of the player is then deduced as follows : the new op pareto - dominates the ne and relies on individual csi at the transmitter . in order to implement a cooperation plan between the players , we assume in addition to the individual csi assumption , that every player is able to know the power of the received signal at each game stage , which is denoted by : when assuming that is set to the constant , the received signal power can be written as : accordingly , each transmitter needs only its individual sinr and the constant ( depending only on and ) to establish the received signal power .we assume that the data transmission is over block fading channels and that channel gains lie in a compact set ] .since the players detect a variation of the received signal power , a deviation from the cooperation plan has occurred . indeed , when playing at the new op , the received signal power is constant and equal to . consequently , when any player deviates from the new op , the latter quantity changes and the deviation is then detected .a rg is a long - term interaction game where players react to past experience by taking into account what happened in all previous stages and make decisions about their future choices .the resulting payoff is an average over all the stage payoffs .we denote by , the game stage which corresponds to the instant in which all players choose their actions .accordingly , a profile of actions can be defined for all players as .a history of player at time is the pair of vectors and which lies in the set with = [ 0,p^{\max}] ] ) is seen as the stopping probability at each stage . 
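One way to realize the new OP with purely individual CSI, consistent with the description above, is to impose a common SINR target gamma* on every user and solve for the corresponding powers; under that choice the received signal power is the same for every channel draw, which is exactly what makes a deviation detectable. The closed form below is a reconstruction from the text, and the numerical values of gamma*, the noise power, and the spreading factor are assumptions.

```python
import numpy as np

n          = 2       # number of users
N_sf       = 2       # spreading factor
sigma2     = 1e-10   # noise power (assumed value)
gamma_star = 1.5     # common SINR target defining the new OP (assumed value;
                     # in the paper it maximizes the expected sum utility)

def op_power(g_i):
    """Power of user i at the new OP, using only its individual channel gain.
    Obtained by imposing SINR_i = gamma_star for every user; a reconstruction
    consistent with the text, not necessarily the paper's exact formula."""
    denom = 1.0 - (n - 1) * gamma_star / N_sf
    assert denom > 0, "gamma_star too large for this n and spreading factor"
    return gamma_star * sigma2 / (g_i * denom)

def received_power(p, g):
    """Total received signal power observed at the receiver side."""
    return sigma2 + sum(gi * pi for gi, pi in zip(g, p))

rng = np.random.default_rng(1)
g = rng.exponential(1.0, size=n)            # Rayleigh-fading power gains
p_op = [op_power(gi) for gi in g]
baseline = received_power(p_op, g)          # constant whenever everyone cooperates

# A deviation (user 0 doubles its power) changes the observed received power,
# which is how the cooperation plan detects cheating.
p_dev = [2 * p_op[0]] + p_op[1:]
print(baseline, received_power(p_dev, g), received_power(p_dev, g) > baseline)
```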
the utility function of each player results from averaging over the instantaneous utilities over all the game stages in the frg while it is a geometric average of the instantaneous utilities during the game stages in the drg .we denote the joint strategy of all players . _ a joint strategy satisfies the equilibrium condition for the repeated game defined by if , , with for the frg or for the drg such that : _ in rg with complete information and full monitoring , the folk theorem characterizes the set of possible equilibrium utilities .it ensures that the set of ne in a rg is precisely the set of feasible and individually rational outcomes of the one - shot game .a cooperation / punishment plan is established between the players before playing .the players cooperate by always transmitting at the new op with powers .when the power of the received signal changes , a deviation is then detected and the players punish the deviator by transmitting with their maximum transmit power in the frg and by playing at the one - shot game in the drg .in what follows , we give the equilibrium solution of each repeated game model and mention the corresponding algorithm .it is important to note that in contrast with iterative algorithms ( e.g. , iterative water - filling type algorithms ) , there is no convergence problem in repeated games ( frg and drg ) .indeed , the transmitters implement an equilibrium strategy ( referred to as the operating point ) at every stage of the repeated game .the frg is characterized by the minimum number of stages ( ) .if the number of stages in the game verifies , a more efficient equilibrium point can be reached .however , if it is less than , the ne is then played .assuming that channel gains lie in a compact set ] defines the discount factor .accordingly , we can express the analytic form of the maximum discount factor in a drg when assuming that channel gains lie in a compact set ] and concave on .the throughput and the used bandwidth are equal to 1 mbps and 1 mhz respectively .the maximum power is set to 0.1 watt while the noise variance is set to watt .the buffer size , the packet arrival rate and the consumed power are fixed to 10 , 0.5 and watt respectively .we consider rayleigh fading channels and a spreading factor introducing an interference processing ( ) in the interference term of the sinr . in fig .[ reg2 ] , we present the achievable utility region , the new op , the ne and the nb solution .we stress that the new op and the nb solution dominate both the ne in the sense of pareto .the region between the pareto frontier and the min - max level is the possible set of equilibrium utilities of the rg according to the folk theorem . ) . ] in order to study the efficiency of the new op versus the nb solution and the ne , we are interested in comparing powers and utilities of the three equilibria by averaging over channel gains for different scenarios ( different number of users in the system ) . in fig .[ reg4 ] , we plot the power and the utility that a user ( in a system of users ) can reach for each equilibrium .thus , we highlight that the new op and the nb solution have better performances than the ne as they pareto - dominate it . 
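A minimal sketch of a stage-by-stage run of the cooperation/punishment plan described above is given next: players transmit at the OP, a deviation is detected through the change of the received signal power, and the punishment is maximum power in the FRG or reversion to the one-shot NE in the DRG, with the discount factor acting as a per-stage stopping probability. All function names, parameter values, and the single injected deviation are illustrative and not taken from the paper.

```python
import random

P_MAX = 0.1   # maximum transmit power in W, as in the numerical section

def stage_profile(op_p, ne_p, punishing, model, deviation=None):
    """Powers actually transmitted at one stage. `deviation` is an optional
    (player index, power) pair; all names here are illustrative."""
    if punishing:
        # Punishment phase: maximum power in the FRG, one-shot NE in the DRG.
        return [P_MAX] * len(op_p) if model == "FRG" else list(ne_p)
    prof = list(op_p)
    if deviation is not None:
        i, p = deviation
        prof[i] = p
    return prof

def run(op_p, ne_p, gains, sigma2, model="FRG", n_stages=50, stop_prob=0.05):
    rx_op = sigma2 + sum(g * p for g, p in zip(gains, op_p))  # constant under the OP
    punishing, trace = False, []
    for t in range(n_stages):
        dev = (0, P_MAX) if t == 10 and not punishing else None  # player 0 cheats once
        prof = stage_profile(op_p, ne_p, punishing, model, dev)
        rx = sigma2 + sum(g * p for g, p in zip(gains, prof))
        trace.append((t, prof, punishing))
        if not punishing and abs(rx - rx_op) > 1e-15:
            punishing = True             # every player detects the deviation
        if model == "DRG" and random.random() < stop_prob:
            break                        # discount factor as stopping probability
    return trace

trace = run(op_p=[0.02, 0.03], ne_p=[0.05, 0.05], gains=[1.0, 0.8], sigma2=1e-10)
print(trace[9], trace[10], trace[11])    # cooperation, deviation, punishment
```

The punishment here lasts for the rest of the game (a grim-trigger rule), which is one simple way to implement the plan; the precise duration used in the paper is not restated here.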
when , we notice that the new op and the nb solution are more efficient than the ne .it is clear that the nb solution requires less power and provides higher utility compared to the new op , but it is important to stress that values , in terms of powers and utilities , are slightly different for both equilibria ( new op and nb solution ) .when , we highlight that lower powers are provided with the new op which leads also to higher values of the utilities .thus , we notice that the new op gives better performances than the ne and the nb solution .therefore , the new op contributes not only to improve the system performances better than the ne for any given scenario but also enables important gains in terms of powers and utilities when compared to the nb solution for a system with a large number of users ( ) . .] we are interested in studying the performances of the social welfare ( ) according to the frg versus the ne in a multi - users system .the corresponding expression is given by : in fig .[ reg5 ] , we present the ratio of the social welfare corresponding to the frg ( ) vs the ne social welfare ( ) .we proceed by averaging over channel gains lying in a compact set such that .we highlight that the social welfare of the frg reaches higher values than the ne ( ) .in addition , we notice that the social welfare ratio increases with the number of users for both models ( goodman and cross - layer ) .the minimum number of stages according to the cross - layer model is much lower compared to the one related to the goodman model . to illustrate this , when , for the goodman model is equal to 4600 while it is 3700 for the cross - layer model .this difference becomes considerable with the increase of the number of users .indeed , when , the minimum number of stages for the goodman ee is 14300 while it is equal to 10900 for the cross - layer approach .( ) . ]we are interested in plotting the minimum number of stages as a function of the consumed power and the packet arrival rate according to both ee models .results , obtained by averaging over channel realizations , are drawn in figures [ reg6 ] and [ reg11 ] . according to fig .[ reg6 ] , we stress that increases with the number of users while it decreases with the spreading factor .it is clear that for any values of and , it exists a consumed power for which is less than when .thus , a good choice of the fixed consumed power leads to a lower minimum number of stages for the cross - layer model compared to the goodman model . for the cross - layer model ( )lower than of goodman model ( ) . ] in fig .[ reg11 ] , we highlight that the minimum number of stages is an increasing function of the packet arrival rate according to the cross - layer model while it is a constant function for the goodman model since the latter does not take into account the packet arrival process. one can confirm that the minimum number of stages is an increase function of the number of users as deduced previously .simulations show that it exists a packet arrival rate before which of the cross - layer model is much lower than of the goodman model for different number of users .simulations show that and for , of the cross - layer model converges to corresponding to the goodman model .it is important to highlight that when and , of the cross - layer model takes higher values than corresponding to the goodman model but values are quite similar . with the increase of the number of users, the difference between the minimum number of stages for both models becomes noticeable . 
according to figures [ reg6 ] and [ reg11 ], one can conclude that the cross - layer model can be exploited for short games . of the cross - layer model when comparing to goodman model ( ) .] for the drg model , we plot in a first step the improvement of the social welfare ( ) versus the one - shot game ( ) for goodman and cross - layer models ( and respectively ) as a function of the spectral efficiency .we simulated our algorithm by averaging over channel gains for different number of users .results are given in fig .it is important to highlight that the drg social welfare reaches higher values than the ne social welfare ( ) .for low values of the spectral efficiency , the social welfare ratio is quite similar for both models while the difference becomes noticeable when the spectral efficiency takes higher values .the social welfare ratio increases with the number of users for both ee models . for each model , when takes high values , the social welfare ratios become closer ( for the cross - layer model , the curves corresponding to and are closer than with the curve of ) . for different number of users . ] for this reason , we studied the variation of as a function of and for both ee models and for different number of users .results are given in figures [ reg8 ] and [ reg10 ] .according to fig .[ reg8 ] , we deduce how decreases with the number of users for both ee models . in addition , we stress that the values reached by becomes closer when takes higher values .this can explain fig .[ reg7 ] . for goodman and cross - layer models as a function of the spectral efficiency with different number of users .] the study of the variation of versus the packet arrival rate ( in fig .[ reg10 ] ) shows that the maximum discount factor decreases with the number of users and with the packet arrival rate as well .simulations show that it exists a packet arrival rate before which the corresponding to the cross - layer model takes higher values than the maximum discount factor of the goodman model for different number of users .we notice that starting from , the maximum discount factor of the cross - layer model converges to corresponding to the goodman model . as a function of the packet arrival rate ( ) . ] in a second step , we plotted in fig .[ reg9 ] the variation of the drg social welfare as a function of .we notice that is an increase function of .thus , when , reaches highest value .however , we stress that decreases with the number of users especially for the goodman model while it is quite similar for the cross - layer model .this confirms that the proposed new op is still quite efficient and can be utilized for games with high number of users . as a function of ( ) . 
] finally , we plot for both rg models ( frg and drg ) the social welfare when using the cross - layer approach against the constant power for two different values of the packet arrival rate ( and ) .the considered system is composed of users and the spreading factor is fixed to .the idea consists in studying the efficiency of the cross - layer approach regarding the goodman power control algorithm .accordingly , for each packet arrival rate , we plot the social welfare with the cross - layer approach ( powers at the equilibrium are determined normally according to ) and the social welfare with the cross - layer power control but when powers at the equilibrium are determined by the goodman algorithm )$ ] .indeed , the packet arrival rate is assumed constant in the goodman model and equal to 1 ( packets arrive with probability ) . for both rg models, we stress that the cross - layer power control approach outperforms the goodman algorithm for both values of the packet arrival rate .important ( relative ) gains are reached . to illustrate this , for and wattthe relative gain is higher than in the frg and the drg as well .therefore , we conclude that the op with the cross - layer approach provides better performances and is more efficient than the op with the goodman power control approach . for and : the cross - layer power control approach outperforms the goodman algorithm . ] for and : the cross - layer approach improves the power control when compared to the goodman algorithm . ]in this paper , we have investigated rg for distributed power control in a mac system .as the ne is not always energy - efficient , the nb solution might be a possible efficient solution since it is pareto - efficient .however , the latter , in general , requires global csi at each transmitter node .thus , we were motivated to investigate using the repeated game formulation and develop a new op , that simultaneously is both more efficient than the ne and achievable with only individual csi being required at the transmitter . 
also , we consider a new ee metric taking into account the presence of a queue at the transmitter with an arbitrary packet arrivals .cooperation plans are proposed where the new op is considered and closed - form expressions of the minimum number of stages for the frg and the maximum discount factor for the drg have been established .the study of the social welfare ( sum of utilities of all the users ) shows that considerable gains are reached compared to the ne ( for the frg and drg ) .moreover , our model proves that even with a high number of users , the frg can always be played with a minimum number of stages shorter than when using the goodman model .in addition , the social welfare in the drg decreases slightly with the number of users with the cross - layer approach while it decreases considerably with the goodman model .finally , the comparison of the cross - layer algorithm versus the goodman algorithm , shows that in real systems with random packet arrivals , the cross - layer power control algorithm outperforms the goodman algorithm .thus , the new op with the cross - layer approach is more efficient .an interesting extension to this work would be to consider the interference channel instead of the mac channel and generalize the framework applied here .another possible extension would be to consider the multi - carrier case and the resulting repeated game .let us determine the maximal utility that a player can get and which is denoted as follows : we denote the power maximizing the utility function and which is the solution of the following equation : =0 } , \label{a3}\ ] ] with , and .therefore , the expression of the maximum utility function writes as : with : we have to study then the behavior of regarding for and then we determine the sign of which is given by : we are interested to study the sign of the numerator : with : the next step would be to determine the sign of the expression + .it is obvious that since is an increasing function of the sinr .therefore , we need to determine the sign of .we have : the sign of the first term is negative while the sign of the second term is the same as since and we have : however and then : as shown in , we have : the latter quantity can be expressed as : consequently , we have : >0 . \label{13}\ ] ] therefore , and hence .in particular , we have .thus , we have and finally .we deduce then that is a decreasing function of .it reaches its maximum when and it is minimum when ( for all ) .when substituting in the sinr expression , this allows the determination of the optimal power : =0 , \label{17}\ ] ] with : .+ the latter equation is a function of the sinr . we determine then the solution in terms of sinr which we denote and for which the optimal power is .this sinr exists due to the quasi - concavity of in .then , we have : the sinr refers to the sinr when playing the new op while , and are the sinrs at the ne , at the maximal utility and at the utility min - max respectively . 
in order to simplify expressions, we define the following notations : at a stage , the equilibrium condition is : } \\\leq \lambda \tilde{u}_i(\mathbf{p}(t ) ) + \sum_{s\geq t+1}{\lambda ( 1-\lambda)^{s - t}\mathbb{e}_{g}[\tilde{u}_i(\mathbf{p}(s ) ) ] } \end{array}\ ] ]knowing that , we have : \leq \lambda \tilde{u}_i+(1-\lambda)\mathbb{e}_{g}[\tilde{u}_i]\ ] ] \\ \leq \lambda { \frac{g|g_i|^2}{{b|g_i|^2+\tilde{\alpha}h}}}+(1-\lambda)\mathbb{e}_{g}\left [ { \frac{g|g_i|^2}{{b|g_i|^2+\tilde{\alpha}h}}}\right ] \end{array}\ ] ] \\ \leq ( 1-\lambda)\left[{\frac{g\nu_i^{\min}}{{b\nu_i^{\max}+\tilde{\alpha}h } } - \frac{e\nu_i^{\min}}{{b\nu_i^{\max}+\gamma_i^{\ast}\left(\sigma^2+\sum_{j \neq i}{p_j^{\ast } \nu_i^{\max}}\right)f}}}\right ]. \end{array}\ ] ] let and define the following quantities : thus : authors declare that they have no competing interests .s. bandyopadhyay and e. j. coyle , an energy efficient hierarchical clustering algorithm for wireless sensor networks , infocom 2003 .22nd annual joint conference of the ieee computer and communications .ieee societies , 3:1713 - 1723 , ( 2003 ) .m. cardei , m. t. thai , y. li and w. wu , energy - efficient target coverage in wireless sensor networks , infocom 2005 .24th annual joint conference of the ieee computer and communications societies .proceedings ieee , 3:1976 - 1984 , ( mar .2005 ) . c. isheden , z. chong , e. jorswieck and g. fettweis , framework for link - level energy efficiency optimization with informed transmitter , ieee transactions on wireless communications , 11(8):2946 - 2957 , ( aug .2012 ) . s. m. betz and h. v. poor , energy efficient communications in cdma networks : a game theoretic analysis considering operating costs , ieee transactions on signal processing , 56(10):5181 - 5190 , ( sep .2008 ) .a. zappone , z. chong , e. jorswieck and s. buzzi , energy - aware competitive power control in relay - assisted interference wireless networks , ieee transactions on wireless communications , 12(4):1860 - 1871 , ( apr .2013 ) .v. s. varma , s. lasaulce , y. hayel and s. e. elayoubi , a cross - layer approach for distributed energy - efficient power control in interference networks , ieee transactions on vehicular technology , ( aug .2014 ) .m. mhiri , k. cheikhrouhou , a. samet , f. mriaux and samson lasaulce , energy - efficient spectrum sharing in relay - assisted cognitive radio systems , ieee proceedings of the 6th international conference on network games , control and optimization ( netgcoop ) , 86 - 91 , ( nov . 2012 ) .s. lasaulce , m. debbah and e. altman , methodologies for analyzing equilibria in wireless games : a look at pure , mixed , and correlated equilibria , ieee signal processing magazine , 26(5):41 - 52 , ( sep .2009 ) .m. mhiri , v. s. varma , m. le treust , s. lasaulce and a. samet , on the benefits of repeated game models for green cross - layer power control in small cells , 1st international black sea conference on communications and networking ( blackseacom ) , 137 - 141 , ( jul . 2013 ) .m. abidi and v. t. vakili , a game theoretic approach for sinr - constrained power control in 3 g cellular cdma communication systems , ieee 18th international symposium on personal , indoor and mobile radio communications ( pimrc ) , 1 - 5 , ( sep .2007 ) .y. xu , j. wang , q. wu , a. anpalagan and y. d. yao , opportunistic spectrum access in unknown dynamic environment : a game - theoretic stochastic learning solution , ieee transactions on wireless communications , 11(4):1380 - 1391 , ( apr .2012 ) .y. 
song , s. h. y. wong and k. w. lee , optimal gateway selection in multidomain wireless networks : a potential game perspective , mobicom11 proceedings of the 17th annual international conference on mobile computing and networking , 325 - 336 , ( 2011 ) . e. v. belmega and s. lasaulce , an information - theoretic look at mimo energy - efficient communications , valuetools09 proceedings of the 4th international icst conference on performance evaluation methodologies and tools , ( oct .

The main objective of this work is to improve the energy-efficiency (EE) of a multiple access channel (MAC) system, through power control, in a distributed manner. In contrast with many existing works on energy-efficient power control, which ignore the possible presence of a queue at the transmitter, we consider a new generalized cross-layer EE metric. This approach is relevant when the transmitters have a non-zero energy cost even when the radiated power is zero, and it takes into account the presence of a finite packet buffer and a sporadic packet arrival process at the transmitter. As the Nash equilibrium (NE) is an energy-inefficient solution, the present work aims at overcoming this deficit by improving the global energy-efficiency. Indeed, since the considered system comprises multiple agencies, each with its own interest, the performance metric that aggregates the individual interests of the decision makers is the global energy-efficiency, defined as the sum over the individual energy-efficiencies. Repeated games (RG) are investigated through the study of two dynamic games (finite RG and discounted RG), whose equilibrium is defined by introducing a new operating point (OP) that Pareto-dominates the NE and relies only on individual channel state information (CSI). Accordingly, closed-form expressions of the minimum number of stages of the game for the finite RG (FRG) and of the maximum discount factor of the discounted RG (DRG) are established. Our contributions consist of improving the system performance in terms of powers and utilities when using the new OP compared to the NE and the Nash bargaining (NB) solution. Moreover, the cross-layer model in the RG formulation achieves a shorter minimum number of stages in the FRG, even for a large number of users. In addition, the social welfare (sum of utilities) in the DRG decreases only slightly with the cross-layer model when the number of users increases, while it is reduced considerably with the Goodman model. Finally, we show that in real systems with random packet arrivals, the cross-layer power control algorithm outperforms the Goodman algorithm.
in classical logic , _ material implication _ or _ material conditional _ is defined by negation and disjunction . specifically , `` if _ p _ then _ q _ '' [ ole_link3][ole_link4]or `` _ _ p _ _ implies _ q _ '' is defined as `` not _ p _ or _, i.e. , _ p _ _ q _ _ _ _ q_. it is well known that this definition is problematic in that it leads to `` paradoxes '' such as \1 ) _ _ ( _ p _ _ q _ ) ( a false proposition implies any proposition ) , \2 ) _ p _ ( _ q _ _ p _ ) ( a true proposition is implied by any proposition ) , and \3 ) ( _ p _ _ q _ ) ( _ q _ _ p _ ) ( for any two propositions , at least one implies the other ) . these formulas can be easily proved as tautologies in classical logic with the _ rule of replacement _ for material implication , i.e. , replacing _ _ q _ with __ _ q_. but these tautologies are counterintuitive so they are called `` paradoxes '' of material implication in classical logic .there are many such paradoxes , see , e.g. , ( bronstein , 1936 ) , and ( lojko , 2012 ) where lists sixteen `` paradoxes '' of material implication . it should be noted that some paradoxes of material implication are even worse than `` paradoxical '' because they are actually wrong .let us take ( _ p _ _ q _ ) ( _ q _ _ p _ ) as an example . using the rule of replacement _ p _ _ q _= _ _ _ q _ ( where the equals sign denotes logical equivalence , see 1.2 ) , we have ( _ p _ _ q _ ) ( _ q _ _ p _ ) = ( _ _ _ q _ ) ( _ _ _ p _ ) = _ _ ( _ q _ _ _ ) _ p _ = _ _ _ p _ .thus , ( _ p _ _ q _ ) ( _ q _ _ p _ ) is a tautology , meaning that for any propositions _p _ and _ q _ , it must be that `` _ _ p _ _ implies _ q _ '' or `` _ _ q _ _ implies _ p _ '' ( or both ) .this is absurd since it is possible that it is not the case even if _ p _ and _ q _ are `` relevant '' .for instance , suppose _n _ is an integer , let _`` _ _ n _ _ = 1 '' and _ q _ `` _ _n _ _ = 0 '' , then , since ( _ p _ _ q _ ) ( _ q _ _ p _ ) is a tautology , it must be that `` if _ n _ = 1 then _ n _ = 0 '' is true or `` if _ n _ = 0 then _ n _ = 1 '' is true ( or both are true ) , and this is an absurdity .many efforts has been made to resolve this problem such as _ relevance logic _ which requires that antecedent and consequent be relevant , _ modal logic _ which uses concept of strict implication , _ intuitionistic logic _ which rejects the law of excluded middle , and _ inquisitive logic _( see , e.g. , ( ciardelli & roelofsen , 2011 ) ) which considers inquisitive semantics of sentences rather than just their descriptive aspects .these developments of non - classical logic are important to modern logics and other disciplines__. _ _however , classical logic exists on its own reason , and we are reluctant to discard it , see e.g. ( fulda , 1989 ) . indeed , classical logic , with its natural principles such as the law of excluded middle , is not only fundamental , but also simple and useful .classical logic is an important part of logic education , so it has been an essential part of most logic textbooks . on the other hand ,_ implication is a kernel concept in logic_. 
so it is not satisfying that the both simple and useful classical logic has an unnatural and even wrong definition of material implication appeared in many textbooks .therefore , the motivation of this work is to improve the definition of implication to replace that of the material implication in classical logic , so that 1 ) it is `` natural '' and `` correct '' , 2 ) it keeps the system still `` simple '' , and 3 ) it keeps the classical logic still as `` useful '' . in order to satisfy the second requirement, this work uses only concepts that already exist in classical logic rather than use those as specifically introduced in non - classical logics like relevance logic , modal logic , intuitionistic logic , many - valued logic , probabilistic logic , etc .the third requirement is to prevent developing `` too narrow '' a definition of implication , as bronstein ( 1936 ) commented on e. j. nelson s `` intensional logic '' : `` although his system does avoid the ` paradoxes ' , it does so only by unduly narrowing his conception of implication . ''although classical logics include at least propositional and first - order logic , this work concentrates on classical _ propositional logic _, as the relevant concepts are the same . there may exist different notations for one thing .for example , _ _ _ _ , _ _ _ _ , _ _ _ _ , _ _ = _ _ , and _ _ _ may all mean _ logical equivalence _ between _ _ and _ _ ( i.e. , they have the same truth - value in every model ) , which can also be denoted with the unicode symbol `` left and right double turnstile '' ( u+27da ) . on the other hand ,a notation may denotes different things .for instance , the equals sign ` = ' has a variety of usages in different contexts . to prevent ambiguities ,the usage of some important notions in this work is explained as follows . _ and _ _ are _ well - formed formulas _( hereinafter just referred to as formulas ) in a formal language .the _ equals sign _ ` = ' is used to denote logical equivalence in a logical equation like the form _ _ = _ _ that need not to be a tautology ( it may be just a condition or is conditional ) as in a derivation .the _ identical to _ symbol ` ' is used to denote logical equivalence in a tautology like the form _ _ _ _ , or used in derivations to emphasize that _ _ = _ _ is a tautology .the symbols ` ' and ` ' are used to denote the complementary cases of ` = ' and ` ' , respectively .for example , if _ _ = _ _ , then _ _= _ _ . the equals sign ` = ' is of special importance in this work to denote , e.g. , `` conditional '' logical equivalence as explained .the symbol ` ' is used to denote both traditional material implication and the implication relation defined in this work .the other symbols are used conventionally and unambiguously without explanation . 
for instance , ` [ole_link140][ole_link141 ] ' and ` ' are used to denote `` logically implies ( is logical consequence of ) '' and `` not logically imply ( is not logical consequence of ) '' , respectively ; ` ' denotes any tautology and ` ' denotes any contradiction , etc .the definition of material implication in classical logic is based on that , for propositions _p _ and _ q _ , `` if _ p _ then _ q _ '' or `` _ _ p _ _ implies _ q _ '' is logically equivalent to `` it is false that _ p _ and not _q _ '' , and the latter again is logically equivalent to `` not _ p _ orit is this `` logical equivalence '' that leads to the definition _ _ q _ _ _ _ q _ and hence the _ rule of replacement _ _ p _ _ q _= _ _ _ q _ for material implication .this definition makes the material implication a _ truth - functional _ connective , i.e. , the truth - value of the compound proposition _ _ q _ is a function of the truth - values of its sub - propositions .this means that the truth - value of `` if _ p _ then _ q _ '' is determined solely by the combination of truth - values of _ p _ and _ q_. this is unnatural as shown in the following .\1 ) when we know that _ p _ is true and _ q _ is true , can we decide that `` if _ p _ then _ q _ '' is true ( or false ) ?no , not sure .\2 ) when we know that _ p _ is true and _ q _ is false , can we decide that `` if _ p _ then _ q _ '' is true ( or false ) ?yes , we can decide that it must be false .\3 ) when we know that _p _ is false and _ q _ is true , can we decide that `` if _ p _ then _ q _ '' is true ( or false ) ?no , not sure .\4 ) when we know that _p _ is false and _ q _ is false , can we decide that `` if _ p _ then _ q _ '' is true ( or false ) ?no , not sure . in only one case , namely the second one , the truth - value of the compound proposition `` if _ p _ then _ q _ '' can be determined by the truth - value combination of _ p _ and _ q_. this indicates that `` if _ p _ then _ q _ '' is not logically equivalent to `` not _ p _ or _ q _ '' of which the truth - value is solely determined by the truth - value combination of _ p _ and _ q_. on the other hand , since `` if _ p _ then _ q _ '' is surely false when _ p _ is true and _ q _ is false , it suggests that when `` if _ p _ then _ q _ '' is true it must not be the case that _ p _ is true and _ q _ is false ( which is equivalent to `` not _ p _ or _q _ '' ) . thus , it should be that `` if _ p _ then _ q _ '' logically implies `` not _ p _ or _ q _ '' but not vice versa , so that _ p _ _ q _ _ _ _ q _ since _ p _ _ q _ _ _ _ q _ but _ _ _ q_. _ p _ _ q_. some researchers have already pointed out , or addressed the problem , although they might have different motivations or explained it in different ways , such as maccoll ( 1880 ) , bronstein ( 1936 ) , woods ( 1967 ) , dale ( 1974 ) , and lojko ( 2012 ) .therefore , the definition of material implication in classical logic is not only unnatural but also , even more severely , not correct .this is why it leads to some unacceptable results as exemplified in introduction . the use of the ( mistaken ) equivalence _ p _ _ q_ = _ _ _ q _ makes the classical logic unfortunately defective . in classical logic , `` not _ p _ '' , `` _ _ p _ _ and _ q _ '' , `` _ _ p _ _ or _ q _ '' , and `` if _ p _ then _ q _ '' are all viewed as the same kind of compound sentences formed with operations or functions , called `` logical connectives '' or `` logical operators '' . 
however , `` if _ p _ then _ q _ '' is actually different .essentially , `` implication '' should not be viewed as an operation but a relation . in mathematics ,1 + 2 is an expression formed by an operation while `` 1 2 '' is a sentence formed by a relation .we can not say that a mathematical expression like 1 + 2 is `` true '' or `` false '' , while we can say that a mathematical sentence like 1 2 is `` true '' .so , in mathematics , usually a function expression has no truth - value while a relation expression has . in propositional logic ,a function expression such as _ _ , _ p _ _ q _ or _ p _ _ q _ , unlike their mathematical counterpart , does have a truth - value , but this is because the output of such a function happens to be a proposition that has a truth - value itself .in contrast , when we say that _ p _ _ q _ is `` true '' or `` false '' , we concern about the implication `` '' itself being `` true '' or `` false '' .this intuition indicates that implication is a relation rather than a function .so , a relation expression is a `` higher level '' sentence than a function expression which if it happens to be a sentence . therefore , it is natural and important to view implication as a relation rather than an operation or function , and of course , the relation to represent the implication is not truth - functional ( woods , 1967 ) .let us now analyze possible relations between any two propositions _p _ and _ q_. there are three distinct cases as shown using venn diagram in figure 2.2.1 , where the square ` ' represents `` set of all interpretations '' , the circle ` _ _ p _ _ ' represents `` set of interpretations that make _ p _ true '' , and the circle ` _ _ q _ _ ' represents `` set of interpretations that make _ q _ true '' .thus , we have case 1 `` disjoint '' that is characterized by the equation _ p _ _ q _ = ; case 2 `` joint '' that is characterized by the equations _ q _ , _ p _ _ q _ _ p _ , and _ _ q _ _ q _ ; and case 3 `` inclusion '' that is characterized by the equation _ _ q _ = _ p_. the three cases are mutually exclusive except for trivial circumstances such as that _ p _ or _q _ equals to or .let us focus on case 3 characterized by _ _ q _ = _ p_. in this case , whenever _p _ is true _q _ must be true , so it is just the case that `` if _ p _ then _ q _ '' .therefore , it indicates that we can use _ _ q _ = _ p _ to define the implication , noting that _ r _ \{(_p _ , _ q _ ) _ p _ _ q _ = _ p _ } is a binary relation on the set of propositions .this is formalized in section 3 as follows .consider a propositional language with the set of logical connectives \{ , , , , , }. the semantics of the logical connectives is the same as in standard classical logic except for the implication symbol ` ' that is to be defined .* definition 3.1.1 . *( propositional language ) let _ p _ be a finite set of [ ole_link7][ole_link8]propositional letters and _ o _ \{ , , , , , } be the set of logical connectives .the propositional language _ _ = _ (_p _ , _ o _ ) is the set of formulas built from letters in _p _ using logical connectives in _o_. the _ valuation functions _ for the logical connectives , except for the implication symbol ` ' , are defined the same as in standard classical logic . *definition 3.1.2 . * ( semantics of implication ) for any formulas _ _ and _ _ in _ _ , the _ valuation function _ _ _ of the implication formula _ _ _ is defined as * proposition 3.1.1 . 
*( criterion ) for any formulas _ _ and _ _ in _ _ , it holds that _ _ _ iff _ _ _ = _ _ iff _ _ _ = iff _ _ _ _ = .* by the semantics given in definition 3.1.2 , _ _ _ iff _ _ _ = _ . on the other hand , if _ _ _ = _ _ then _ _ _ _ = ( _ _ _ _ ) _ _ , and if _ _ _ _= then , _ _ _ _ ( _ _ _ _ ) ( _ _ _ _ ) ( _ _ _ _ ) _ _ ( _ _ _ _ ) _ _ . so , we have _ _ _ = _ _ iff _ _ _ = , andthe latter is equivalent to __ _ _ = by _ de morgan s laws_. * remark 3.1.1*. it should be noted that _ _ _ _iff _ _ _ _ = in this work , while _ _ _ _= _ _ _ _ in traditional definition .the implication ` ' is thus a binary relation over _ _ , the set of formulas .this binary relation is determined by a logical equation on _. by definition 3.1.2 , the truth - value of an implication statement is not determined by the combination of the truth - values of its antecedent and consequent . in other words ,the so defined implication relation is non - truth - functional . *proposition 3.1.2 . *( association to logical implication ) for any formulas _ _ and _ _ in _ _ , _ _ _ _iff _ _ _ _ * proof . *_ [ ole_link54][ole_link55] _ _ _ means that the set of the interpretations that make _ _ true is a subset of the set of the interpretations that make _ _ true . in other words , there is no interpretation that make _ _ true and make _ _ false , i.e._ _ _ _ is false in all interpretations , this means _ _ _ , or _ _ _= , i.e. _ _ _ _ by proposition 3.1.1 . * proposition 3.1.3 . *( equivalence ) for any formulas _ _ and _ _ in _ _ , ( _ _ _ _ ) ( _ _ _ _ ) iff _ _ = _ . * proof . *if _ _= _ _, then _ _ _ _ = _ _ _ _ _ _ = _ _ and _ _ _ _ = _ _ _ _ = _ _ , so ( _ _ _ _ ) ( _ _ _ _ ) ; if ( _ _ _ _ ) ( _ _ _ _ ) , then _ _ = _ _ _ _ = _ . * remark 3.1.2*. the definition of implication relation in this work is clearly based on logical equivalence ` = ' , so there is no need to introduce another equivalence symbol such as ` ' .the implication relation defined in 3.1 has some important properties as listed in the following . *proposition 3.2.1 . * ( properties of the implication relation ) let _ _ , _ _ , and _ _ are any formulas in _ _ , the implication relation ` ' given by definition 3.1.2 has the following properties as a binary relation : \1 ) _ _ _ _ ( reflexivity ) ; \2 ) if _ _ _ and _ _ _ _ , then _ _ = _ _ ( anti - symmetry ) ; \3 ) if _ _ _ and _ _ _ _ , then _ _ _ ( transitivity ) ; \4 ) _ _ _ _ _ _ ( meet ) ; \5 ) _ _ _ _ _ _ ( join ) ; \6 ) _ _ ( bottom ) ;\7 ) _ _ ( top ) . *proof . * from results in 3.1 : \1 ) _ _ _ _ _ _ , so _ _ _ _ ; \2 ) it follows immediately from proposition 3.1.3 ; \3 ) _ _ _ _ and _ _ _ _ , so _ _ _ _ = _ _ and _ _ _ _ = _ _ , thus_ _ _ _ = ( _ _ _ _ ) _ _ = _ _ ( _ _ _ _ ) = _ _ _ _ = _ _ _ _ , therefore _ _ _ _ = _ _ _ _ , so __ _ _ ; \4 ) ( _ _ _ _ ) _ _ _ _ _ _ , so _ _ _ _ _ _ ; \5 ) _ _ ( _ _ _ _ ) _ _ _ _ _ _ _ _ , so _ _ _ _ _ _ ; \6 ) _ _ , so _ _ ; \7 ) _ _ _ _ , so _ _ .* remark 3.2.1 . *the implication relation ` ' is a _partial order _ over _ _ by properties 1 ) * * 3 ) , and the _ partial ordered set _ ( _ _ , ) is a _ bounded lattice _ with properties 4 ) * * 7 ) .* proposition 3.2.2 . *( rule of replacement and inference ) let _ _ and _ _ are any formulas in _ _ , for the implication relation ` ' given by definition 3.1.2 , we have \1 ) _ _ _ _ _ _ _ _ , \2 ) _ _ _ _ _ _ _ _ iff _ _ _ _ = iff _ _. * proof . *it is based on propositions 3.1.1 and 3.1.2 . 
_ ( _ _ _ _ ) ( _ _ _ _ ) ( _ _ _ _ ) ( _ _ _ _ ) .if _ _ _ is false , then _ _ is false ; if _ _ _ is true , then _ _ _ = , so _ _ is also false . thus _ _ is false in any interpretation , that is _ _ , or _ _= , or ( _ _ _ _ ) ( _ _ _ _ ) = , i.e. ( _ _ _ _ ) ( _ _ _ _ ) , that is _ _ _ _ . \2 ) if _ _ _ _ = , then _ _ _ _ is always true , so _ _ _ _ _ _ _ _ ; if _ _ _ _ , then _ _ _ _ _ _ _ _ can not hold , since this means that whenever _ _ _ _ is true then _ _ _ _ must be true so that _ _ _ _ = must be also true . * remark 3.2.2*. this means : 1 ) the traditional _ rule of replacement _ _ _ _ _ = _ _ _ _ can not be used unless it is known that _ _ _ _ = or_ _ _ is indeed true ; 2 ) the _ rule of inference _ _ _ _ _ _ _ _ _ can be used in any cases safely .* remark 3.2.3*. since the rule of replacement _ _ _ _ = _ _ _ _ can not be universally used , defining logical connectives , , or by is not appropriate .consider the most common `` paradoxes '' of traditional material implication mentioned in 1.1 and re - listed here for convenience .\1 ) _ _ ( _ p _ _ q _ ) ( a false proposition implies any proposition ) , \2 ) _ p _ ( _ q _ _ p _ ) ( a true proposition is implied by any proposition ) , and \3 ) ( _ p _ _ q _ ) ( _ q _ _ p _ ) ( for any two propositions , at least one implies the other ) .now let us check these `` paradoxes '' under the implication relation defined in 3.1 .* proposition 3.3.1 . _ and _ _ are formulas in _ _ , the implication relation ` ' is given by definition 3.1.2 , then we have that _ _ ( _ _ _ _ ) , _ ( _ _ _ _ ) , and ( _ _ _ _ ) ( _ _ _ _ ) are not tautologies . * proof .* consider the case that _ , _ _ , _ _ , _ _ , and_ _ _ _ = .\1 ) in this case , _ _ _ _ _ _ , so _ _ _ _ is false , thus ( _ _ ) ( _ _ _ _ ) = _ _ , therefore , _ _ ( _ _ _ _ ) is false .\2 ) in this case , _ _ _ _ _ _ , so _ _ _ _ is false , thus _ _ ( _ _ _ _ ) = _ _ , therefore , _ _ ( _ _ _ _ ) is false .\3 ) in this case , _ _ _ _ _ and _ _ _ _ _ _ , so both _ _ _ _ and _ _ _ _ are false , thus ( _ _ _ _ ) ( _ _ _ _ ) is false . this means that common `` paradoxes '' of traditional material implication do not exist under the implication relation defined in 3.1 .classical logic such as standard propositional logic is simple and useful except that the problematic definition of material implication making it unfortunately defective .this work defines an implication relation to replace the traditional material implication based on logical equivalence . 
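Proposition 3.3.1, like the criterion of Proposition 3.1.1, can be checked mechanically by enumerating valuations. The following self-contained sketch (illustrative, not part of the original paper) does so for a concrete pair of formulas mirroring the n = 1 / n = 0 example from the introduction: neither formula bears the relation to the other, even though the corresponding material disjunction is a classical tautology.

```python
from itertools import product

def models(formula, atoms):
    """Set of valuations (tuples of truth values for `atoms`) satisfying `formula`.
    A formula is represented as a Python function taking a dict {atom: bool}."""
    out = set()
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if formula(v):
            out.add(values)
    return out

def implies(p, q, atoms):
    """The implication relation of Definition 3.1.2: every model of p is a
    model of q (equivalently, p AND NOT q has no model)."""
    return models(p, atoms) <= models(q, atoms)

atoms = ("a", "b")
p = lambda v: v["a"] and not v["b"]     # plays the role of "n = 1"
q = lambda v: v["b"] and not v["a"]     # plays the role of "n = 0"

# Criterion of Proposition 3.1.1: p -> q holds iff (p and q) is equivalent to p.
crit = models(lambda v: p(v) and q(v), atoms) == models(p, atoms)
assert implies(p, q, atoms) == crit

# The third "paradox" (p -> q) or (q -> p) fails under the relational reading:
print(implies(p, q, atoms) or implies(q, p, atoms))      # False

# ...even though the corresponding material disjunction is a classical tautology.
material = lambda v: ((not p(v)) or q(v)) or ((not q(v)) or p(v))
print(models(material, atoms) == models(lambda v: True, atoms))   # True
```

Running the sketch prints False for the relational disjunction and True for the material one, which is precisely the gap between entailment and the truth-functional conditional discussed earlier.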
specifically , _ _ _ _ is defined by the equation _ _ _ _ = _ _ that is equivalent to _ _ _ _ = or _ _ _ _ = .this prevents it from common `` paradoxes '' of traditional material implication , while keeps the system still simple and useful .it becomes clear that the rule of replacement _ _ _ _ = _ _ _ _ can only be used safely in the case that _ _ _ and _ _ _ _ = are known being true .the definition of the implication relation of this work is very natural so it is much easier to understand , thus it is also beneficial to logic education .several more points are noted as follows about the implication relation defined in this work .\1 ) it should distinguish truth - values \{true , false } from tautology and contradiction symbols \{ , } , since the latter are not truth - values but ( special ) propositions .so , _ _ _ _ is true ( or its truth - value equals `` true '' ) is not the same thing as _ _ _ _ = .\2 ) defining _ _ _ _ by _ _ _ _ = is rational , since `` an alternative sentence expresses not only 1 our knowledge of the fact that one at least of the alternants is true and 2 our ignorance as to which of them is true , but moreover 3 our readiness to infer one alternant from the negation of the other . ''( ajdukiewicz , 1978 ) .\3 ) _ _ and _ _ are not paradoxical .they can be interpreted to intuition like `` if an always - false - thing is true , then anything is true '' and `` an always - true - thing is true without any premises '' . the similar is also mentioned , e.g. , in ( ceniza , 1988 ) .the corresponding formal system and the soundness and completeness of the system under the implication relation defined in 3.1 , are not investigated in this work .: : k. ( 1978 ) .conditional statement and material implication ( 1956 ) . in k. ajdukiewicz ,_ the scientific world - perspective and other essays , 19311963 _ ( j. giedymin , trans . , pp . 222238 ) .dordrecht , holland : d. reidel publishing caompany .: : d. j. ( 1936 , apr . ) .the meaning of implication ._ mind , new series , 178 _ , pp . 157180 .: : c. r. ( 1988 ) . material implication and entailment . _ notre dame journal of formal logic , 29_(4 ) , pp . 510519 .: : i. , & roelofsen , f. ( 2011 ) .inquisitive logic ._ journal of philosophical logic , 40 _ , pp .doi:10.1007/s10992 - 010 - 9142 - 6 : : a. j. ( 1974 , jan . ) .a defence of material implication ._ analysis , 34_(3 ) , pp .: : v. ( 2003 ) .which notion of implication is the right one ?from logical considerations to a didactic perspective . _ educational studies in mathematics , 53_(1 ) , pp . 534 .: : j. s. ( 1989 , mar . ) . material implication revisited . _ the american mathematical monthly , 96_(3 ) , pp .: : p. ( 2012 ) .paradoxes of material implication and non - classical logics . in p. lojko ,_ inquisitive semantics and the paradoxes of material implication .master s thesis _( pp . 3050 ) .universiteit van amsterdam , amsterdam .: : h. ( 1880 , jan . ) .symbolical reasoning ._ mind , 5_(17 ) , pp .: : j. ( 1967 , jul . ) .is there a relation of intensional conjunction ? _mind , new series , 76_(303 ) , pp .
Simple and useful classical logic is unfortunately rendered defective by its problematic definition of material implication. This paper presents an implication relation, defined by a simple equation, to replace the traditional material implication in classical logic. The common ``paradoxes'' of material implication are avoided, while the simplicity and usefulness of the system are preserved under this implication relation. *Keywords.* implication; material implication; conditional; relation; classical logic; propositional logic; paradox. *Defining an Implication Relation for Classical Logic.* Fu Li, School of Software Engineering, Chongqing University, Chongqing, China. fuli.edu.cn
epidemic , one of the most important issues related to our real lives , such as computer virus on internet and venereal disease on sexual contact networks , attracts a lot of attention . among all the models on the process of the epidemic , susceptible - infected ( si ) model , susceptible - infected - susceptible ( sis ) model , and susceptible - infected - removed ( sir ) model ,are considered as the theoretical templates since they can , at least , capture some key features of real epidemics . after some classical conclusions have been achieved on regular and random networks , recent studies on small - world ( sw ) networks and scale - free ( sf ) networks introduce fresh air into this long standing area ( see the reviews and the references therein ) .the most striking result is that in the sis and sir model , the critical threshold vanishes in the limit of infinite - size sf networks .it is also a possible explanation why some diseases are able to survive for a long time with very low spreading rate . in this paper , we focus on the sis model .although it has achieved a big success , the standard sis style might contain some unexpected assumption while being introduced to the sf networks directly , that is , each node s potential infection - activity ( infectivity ) , measured by its possibly maximal contribution to the propagation process within one time step , is strictly equal to its degree . as a result , in the sf networks the nodes with large degree , named _ hubs_ , will take the greater possession of the infectivity , so - called _ super - spreader_. this assumption may fail to mimic some cases in the real world where the relation between degree and infectivity is not simply equal .the first example is that , in most of the existing peer - to - peer distributed systems , although their long - term communicating connectivity shows the scale - free characteristic , all peers have identical capabilities and responsibilities to communicate at a short term , such as the gnutella networks .second , in sexual contact networks , even the hub node has many acquaintances ; he / she has limited capability to contact with others during limited periods .third , the referral of a product to potential consumers costs money and time in network marketing processes ( e.g. a salesman has to make phone calls to persuade his social surrounding to buy the product ) .therefore , the salesman will not make referrals to all his acquaintances .the last one , in some email service systems , such as the gmail system schemed out by google , the clients are assigned by limited capability to invite others to become gmail - user after being invited by an e - mail from another gmail - user .similar phenomena are common in our daily lives , thus need a further investigation .in the epidemic contact network , node presents individual and link denotes the potential contacts along which infections can spread .each individual can be in two discrete states , whether susceptible ( s ) or infected ( i ) . at each time step , the susceptible node which is connected to the infected one will be infected with rate .meanwhile , infected nodes will be cured to be again susceptible with rate , defining the effective spreading rate as . 
without losing of generality , we set .individuals run stochastically through the cycle susceptible - infected - susceptible , which is also the origin of the name , sis .denote and the density of the susceptible and infected population at the time step , respectively .then in the standard sis model , each individual will contact all its neighbors once at each time step , thus the infectivity of each node is equal to its degree . in the present model , we assume that every individual has the same infectivity . that is to say , at each time step , each infected individual will generate contacts where is a constant .multiple contacts to one neighbor are allowed , and the contacts to the infected ones , although without any effect on the epidemic dynamics , are also counted . in this paper , with half nodes infected initially , we run the spreading process for sufficiently long time , and calculate the fraction of infected nodes averaging over the last 1000 steps as the density of infected nodes in the steady stage ( denoted by ) .all of our simulation results are obtained from averaging over different network realizations , and for each independent runs with different initial configurations .[ 0.7 ] ( color online ) average value of as a function of the effective spreading rate on a ba network with average degree and network size .the black points represent the case of standard sis model , and the red , green and blue points correspond to the present model with , 3 and 2 , respectively .the arrows point at the critical points gained from the simulation .the insert shows the threshold scaling with , with solid line representing the analytical results.,title="fig : " ]let denote the fraction of vertices of degree that are infected at time .then using the mean - field approximation , the rate equation for the partial densities in a network characterized by a degree distribution can be written as : \sum_{k'}\frac{p(k'|k ) i_{k'}(t)a}{k'},\ ] ] where denotes the conditional probability that a vertex of degree is connected to a vertex of degree .considered the uncorrelated networks , where , the rate equation takes the form : (t)a.\ ] ] using to denote the value of in the steady stage with sufficiently large , then which yields the nonzero solutions where is the infected density at the network level in the steady stage .then , one obtains to the end , for the critical point where , we get this equation defines the epidemic threshold below which the epidemic prevalence is null , and above which it attains a finite value .the previous works about epidemic spreading in sf networks present us with a completely new scenario that a highly heterogeneous structure will lead to the absence of any epidemic threshold , while now , in the present model , it is instead . as shown in fig .1 , the analytical result agrees very well with the simulations .furthermore , it is also clear that the larger infectivity will lead to the higher prevalence .[ 0.65](color online ) average value of as a function of the effective spreading rate on ( a ) the random and ba networks ; ( b ) the sf configuration networks for different values of ; ( c ) the ba networks with different average degree . 
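A direct Monte Carlo sketch of the dynamics just described is given below: every infected node draws exactly A neighbors (with replacement, wasted contacts allowed), each contacted susceptible becomes infected with probability lambda, and all infected nodes recover at the end of the step (the recovery rate taken equal to one). Network size, mean degree, and the number of steps are assumed values for illustration only.

```python
import random
import networkx as nx

def sis_identical_infectivity(G, lam, A, steps=1500, avg_window=500, seed=0):
    """Monte Carlo sketch of the identical-infectivity SIS model described above."""
    rng = random.Random(seed)
    nodes = list(G.nodes())
    infected = set(rng.sample(nodes, len(nodes) // 2))   # half infected initially
    densities = []
    for _ in range(steps):
        newly_infected = set()
        for u in infected:
            nbrs = list(G.neighbors(u))
            for _ in range(A):                           # identical infectivity A
                w = rng.choice(nbrs)                     # contacts to infected are wasted
                if w not in infected and rng.random() < lam:
                    newly_infected.add(w)
        infected = newly_infected                        # all previously infected recover
        densities.append(len(infected) / len(nodes))
    return sum(densities[-avg_window:]) / avg_window     # steady-state prevalence

# Assumed network size and mean degree; the paper's exact values are not restated here.
G = nx.barabasi_albert_graph(1000, 3, seed=1)
for lam in (0.2, 0.3, 0.5):
    print(lam, round(sis_identical_infectivity(G, lam, A=4), 3))
```

Sweeping lambda for several values of A should reproduce the qualitative picture of fig. 1: the prevalence vanishes below a critical value that shrinks as A grows, and a larger infectivity gives a higher prevalence above it.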
in ( a ) and ( b ) , the average degree is with , and for all the simulations , and are fixed.,title="fig : " ] [ 0.65](color online ) average value of as a function of the effective spreading rate on ( a ) the random and ba networks ; ( b ) the sf configuration networks for different values of ; ( c ) the ba networks with different average degree . in ( a ) and ( b ), the average degree is with , and for all the simulations , and are fixed.,title="fig : " ] [ 0.65](color online ) average value of as a function of the effective spreading rate on ( a ) the random and ba networks ; ( b ) the sf configuration networks for different values of ; ( c ) the ba networks with different average degree . in ( a ) and ( b ), the average degree is with , and for all the simulations , and are fixed.,title="fig : " ] from the analytical result of the threshold value , , we can also acquire that the critical behavior is independent of the topology of networks which are valid for the mean - field approximation . to demonstrate this proposition, we implement the present model on various networks ; these include the random networks , the scale - free configuration model with different power - law exponent , and the ba networks with different average degree . as shown in fig .2 , under a given , the critical value are the same , which strongly support the valid of eq .furthermore , there is no distinct finite - size effect as shown in fig .3 . in the original sis model, the node s infectivity relies strictly on its degree and the threshold is .since the variance of degrees gets divergent with the increase of , the epidemic propagation on scale - free networks has an obvious size effect .however , in the current model , each infected node is just able to contact the same number of neighbors , , rather than its degree .thus the threshold value and the infected density beyond the threshold are both independent of the size .[ 0.7](color online ) average value of as a function of the effective spreading rate on the different sizes of ba networks with and .,title="fig : " ]for further understanding of the epidemic dynamics of the proposed model , we study the time behavior of the epidemic propagation .first of all , manipulating the operator on both sides of eq .( 3 ) , and neglecting the terms of order , we obtain thus the evolution of follows an exponential growing as where . [ 0.7](color online ) average value of in normal plots as time ( a ) and in single - log plots as rescaled time ( b ) for different spreading rate .the numerical simulations are implemented based on ba networks of size , , and .,title="fig : " ] [ 0.7](color online ) average value of in normal plots as time ( a ) and in single - log plots as rescaled time ( b ) for different spreading rate .the numerical simulations are implemented based on ba networks of size , , and .,title="fig : " ] in fig .4 , we report the simulation results of the present model for different spreading rates ranging from 0.7 to 0.9 . 
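The mean-field algebra can also be checked numerically. The fixed-point iteration below is reconstructed from the surviving fragment of the rate equation (the term summing P(k'|k) i_{k'} A / k' over k', with P(k'|k) = k' P(k')/<k> in the uncorrelated case); under this reconstruction it locates the smallest lambda with a nonzero steady-state prevalence and finds essentially the same value, close to 1/A, for different degree exponents, consistent with the claim that the threshold depends only on the dynamical parameter A.

```python
import numpy as np

def steady_prevalence(lam, A, gamma=3.0, kmin=3, kmax=1000, tol=1e-10):
    """Fixed-point iteration of the (reconstructed) uncorrelated mean-field
    equations: at the steady state i_k = x/(1+x) with x = lam*A*k*Theta/<k>,
    where Theta = sum_k P(k) i_k.  Returns the prevalence rho = Theta."""
    k = np.arange(kmin, kmax + 1, dtype=float)
    pk = k ** (-gamma)
    pk /= pk.sum()                       # configuration-model degree distribution
    kmean = (k * pk).sum()
    theta = 0.5                          # start from a high initial prevalence
    for _ in range(20000):
        x = lam * A * k * theta / kmean
        ik = x / (1.0 + x)
        new_theta = (pk * ik).sum()
        if abs(new_theta - theta) < tol:
            theta = new_theta
            break
        theta = new_theta
    return theta

A = 4
for gamma in (2.5, 3.0):                 # two different degree exponents
    lams = np.linspace(0.05, 0.6, 56)
    rho = np.array([steady_prevalence(l, A, gamma) for l in lams])
    lam_c = lams[np.argmax(rho > 1e-4)]  # first lambda with a nonzero prevalence
    print(gamma, round(float(lam_c), 2))
```

Both exponents give the same numerical threshold here, which is the mean-field counterpart of the simulation result that the critical point is independent of the underlying topology.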
the rescaled curves ( fig .4(b ) ) can be well fitted by a straight line in single - log plot for small and the curves corresponding to different will collapse to one curve with rescaling time , which strongly supports the analytical result eq .( 10 ) .furthermore , a more precise characterization of the epidemic diffusion through the network can be achieved by studying some convenient quantities in numerical experiments .first , we measure the average degree of newly infected nodes at time as then , we present the inverse participation ratio to indicate the detailed information on the infection propagation , which is defined as : where the weight of recovered individuals in each -degree class ( here -degree class means the set of all the nodes with degree ) is defined by . from this definition , one can acquire that if is small , the infected are homogeneously distributed among all degree classes ; on the contrary , if is relatively larger then , the infection is localized on some specific degree classes .[ 0.7](color online ) time behavior of the average degree of the newly infected nodes ( top ) and inverse participation ratio ( bottom ) in ba networks of size , for different values of ( with fixed ) ( a ) and ( with fixed ) ( b).,title="fig : " ] [ 0.7](color online ) time behavior of the average degree of the newly infected nodes ( top ) and inverse participation ratio ( bottom ) in ba networks of size , for different values of ( with fixed ) ( a ) and ( with fixed ) ( b).,title="fig : " ] in fig . 5 , we exhibit the time behaviors of these quantities for ba networks and find a hierarchical dynamics , that is , all those curves show an initial plateau , which denotes that the infection takes control of the large degree nodes firstly .once the highly connected hubs are reached , the infection pervades almost the whole network via a hierarchical cascade across smaller degree classes .thus , decreases to the next plateau , which approximates the average degree .immunity , relating to the people s strategies to struggle with the disease epidemics , shows great importance in practice .since the current model , which can mimic some real cases more accurately , shows different characters with the standard sis model , it requires some in - depth and detailed investigation about the immunity on this model . as we know, immunized nodes can not become infected and , therefore , will not transmit the infection to their neighbors . 
the simplest immunization strategy is to select immunization population completely randomly , so - called _ random immunization _ .however , this strategy is inefficient for heterogenous networks .similar to the preferential attachment mechanism introduced by ba model , dezs and barabsi proposed the _ proportional immunization _ strategy ,in which the immunizing probability of each node is proportional to its degree .this preferential selection strategy can remarkable enhance the immunization efficiency in scale - free networks .the extreme strategy for immunization in heterogenous networks is the so - called _ targeted immunization _ , where the most highly connected nodes are chosen to be immunized .compared with the random immunization and proportional immunization , the targeted immunization is demonstrated as the most efficient one for various networks , and several different but relative dynamics .[ 0.7](color online ) reduced prevalence from numerical simulations of the present model in the random ( square point ) and ba ( circle point ) network with random ( black line ) , proportional ( red line ) and targeted immunizations ( blue line ) . in the simulations ,the parameter , , and are fixed.,title="fig : " ] in fig . 6 , we report the simulation results about the three mentioned immunization strategies on the current model .the -axis , , denotes the fraction of immunized population , and the -axis , , represents the performance , where is the prevalence of infected nodes without immunization and the one after immunization . from the simulation results, one can find that the epidemic thresholds under random , proportional and targeted immunizations of random networks are , and , respectively . and those of ba networks are , and .it is clear from the simulation results , even in the current model where the infectivities of large - degree nodes are greatly suppressed , the targeted immunization performs best .combine with the hierarchical behavior observed in sec .iv , it strongly indicates that the heterogeneities of degree and infectivity could both contribute to the violent spreading of disease .hence even for the current model with identical infectivity , the hub nodes play much more important roles in determining the dynamical property .note that , in this model , for heterogenous networks , the random immunization is more efficient than the standard case , and the threshold is the same for ba and random networks .actually , the random immunization is implemented by randomly selecting and immunizing nodes on a network of fixed size . at the mean - field level , the presence of uniform immunity will effectively reduce the spreading rate by a factor . according to eq .( 8) , the immunization threshold is given by as shown in fig .6 , the simulated result ( ) agrees with the analytical result ( ) well . 
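the selection step of the three strategies compared above can be sketched as follows ; the epidemic simulation would then skip immunized nodes both as infection targets and as transmitters . the fraction of immunized nodes and the graph are illustrative inputs .

```python
# Selection of the immunized set: uniformly at random, with probability
# proportional to degree, or by targeting the highest-degree nodes first.
import random

def immunize(G, frac, strategy="random", seed=0):
    rng = random.Random(seed)
    nodes = list(G.nodes())
    n_imm = int(frac * len(nodes))
    if strategy == "random":
        return set(rng.sample(nodes, n_imm))
    if strategy == "proportional":             # immunization probability ~ degree
        weights = [G.degree(v) for v in nodes]
        chosen = set()
        while len(chosen) < n_imm:
            chosen.add(rng.choices(nodes, weights=weights, k=1)[0])
        return chosen
    if strategy == "targeted":                 # highest-degree nodes first
        return set(sorted(nodes, key=G.degree, reverse=True)[:n_imm])
    raise ValueError("unknown strategy: %s" % strategy)
```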
to compare ,the random immunization threshold of standard sis model is given by .namely , to control the spreading , one have to immunize all the population as in the thermodynamic limit .[ 0.7](color online ) average value of as time for no immunization ( black ) , random immunization ( red ) , proportional immunization ( green ) , and targeted immunization ( blue ) at and .the numerical simulations are implemented based on ba networks of size , , and .the arrows indicate the time that the whole spreading process comes to the steady stage.,title="fig : " ] for further understanding the effects of those different immunization strategies , we study the time behaviors as shown in fig .7 . in accordance with the above results ,the spreading velocity under target immunization is the lowest .note that , different from the standard sis model , the random immunization can obviously slow down the spreading in the early stage even with a tiny population .in this paper , we investigated the behaviors of sis epidemics with the identical infectivity . by comparing the dynamical behaviors of the present model of different values of with the standard one on ba networks, we found the existence of epidemic spreading threshold .the analytical result of the threshold is provided , which agrees with numerical simulation very well .the critical value is independent of the topology of underlying networks , just depends on the dynamical parameter and the whole spreading process does not have the distinct finite - size effect . for sf networks , the infected population grows in an exponential form in the early stage , and then follows a hierarchical dynamics .in addition , the time scale is also independent of the underlying topology . the last but not the least , the numerical results of random , proportional , and targeted immunization are presented .we found that the targeted immunization performs best , while the random immunization is much more efficient in heterogenous networks than the standard case .bhwang acknowledges the support of 973 project under grant no .2006cb705500 , the special research founds for theoretical physics frontier problems under grant no .a0524701 , the specialized program under the presidential funds of the chinese academy of science , and the national natural science foundation of china under grant no . 10472116 .tzhou acknowledges the support of the national natural science foundation of china under grant nos .70471033 and 10635040 .m. barthlemy , a. barrat , r. pastor - satorras , and a. vespihnani , phys .lett . * 92 * , 178701 ( 2004 ) ; m. barthlemy , a. barrat , r. pastor - satorras , and a. vespihnani , j. theor . biol . * 235 * , 275 ( 2005 ) .r. pastor - satorras , and a. vespignani , _ epidemics and immunization in scale - free networks_. in : s. bornholdt , and h. g. schuster ( eds . ) _ handbook of graph and networks _ , wiley - vch , berlin , 2003 ; t. zhou , z. -q .fu , and b. -h .wang , prog .* 16 * , 452 ( 2006 ) ; s. boccaletti , v. latora , y. moreno , m. chavez , and d. -u .hwang , phys .rep . * 424 * , 175 ( 2006 ) .j. joo , and j. l. leboitz , phys .e * 69 * , 066105 ( 2004 ) ; r. olinky , and l. stone , phys .e * 70 * , 030902 ( 2004 ) ; t. zhou , j .-liu , w .- j .chen , and b .- h .wang , phys .e * 74 * , 056109 ( 2006 ) ; r. yang , b .- h .wang , j. ren , w .- j .bai , z .- w .shi , w .- x .wang , and t. zhou , phys .a * 364 * , 189 ( 2007 ) .j. mller , siam j. appl . math . * 59 * , 222 ( 1998 ) ; d. s. callway , m. e. j. newman , s. h. strogatz , and d. 
j. watts , phys . rev . lett . * 85 * , 5468 ( 2000 ) ; r. cohen , k. erez , d. ben - avraham , and s. havlin , phys . rev . lett . * 85 * , 4626 ( 2000 ) .z. h. liu , y. c. lai , and n. ye , phys .e * 67 * , 031911 ( 2003 ) ; d. h. zanette , and m. kuperman , physica a * 309 * , 445 ( 2002 ) ; y. c. lai , z. h. liu , and n. ye , int . j. mod .b * 17 * , 4045 ( 2003 ) ; h. zhang , z. h. liu , and w. c. ma , chin .* 23 * , 1050 ( 2006 ) .n. madar , t. kalisky , r. cohen , d. ben - avraham , and s. havlin , eur .j. b * 38 * , 269 ( 2004 ) ; t. zhou , and b. -h .wang , chin .* 22 * , 1072 ( 2005 ) ; f. takeuchi , and k. yamamoto , lect .notes comput .* 3514 * , 956 ( 2005 ) ; w. -j .bai , t. zhou , and b. -h .wang , arxiv : physics/0610138 .
in this paper , a susceptible - infected - susceptible ( sis ) model with identical infectivity , where each node is assigned the same capability of active contacts , , at each time step , is presented . we found that on scale - free networks , the density of the infected nodes shows the existence of an epidemic threshold , whose value equals , as demonstrated by both analysis and numerical simulation . the infected population grows in an exponential form and follows hierarchical dynamics , indicating that once the highly connected hubs are reached , the infection pervades almost the whole network in a progressive cascade . in addition , the effects of random , proportional , and targeted immunization for this model are investigated . for the current model on heterogeneous networks , the targeted strategy performs best , while the random strategy is much more efficient than in the standard sis model . the present results could be of practical importance in the design of dynamic control strategies .
this article is concerned with the approximation of the distribution of markov processes conditioned to not hit a given absorbing state .let be a discrete time markov process evolving in a state space , where is an absorbing state , which means that where .our first aim is to provide an approximation method based on an interacting particle system for the conditional distribution where denotes the law of with initial distribution on .our only assumption to achieve our aim will be that survival during a given finite time is possible from any state , which means that our second aim is to provide a general condition ensuring that the approximation method is uniform in time . the main assumption will be that there exist positive constants and such that , for any initial distributions and , this property has been extensively studied in . in particular , it is known to imply the existence of a unique quasi - stationary distribution for the process on .another main result of our paper is that , under mild assumptions , the approximation method can be used to estimate this quasi - stationary distribution .the nave monte - carlo approach to approximate such distributions would be to consider independent interacting particles evolving following the law of under and to use the following asymptotic relation and then however , the number of particles remaining in typically decreases exponentially fast , so that , at any time , the actual number of particles that are used to approximate is of order for some . as a consequence, the variance of the right hand term typically grows exponentially fast and then the precision of the monte - carlo method worsens dramatically over time . in fact , for a finite number of particles , the number of particles belonging to eventually vanishes in finite time with probability one .thus the right hand term in the above equation eventually becomes undefined .since we re typically interested in the long time behavior of or in methods that need to evolve without interruption for a long time , the nave monte carlo method is definitely not well suited to fulfill our objective . in order to overcome this difficulty , modified monte - carlo methodshave been introduced in the recent past years by del moral for discrete time markov processes ( see for instance or the well documented web page , with many applications of such modified monte - carlo method ) .the main idea is to consider independent particles evolving in following the law of , but such that , at each time , any absorbed particle is re - introduced to the position of one other particle , chosen uniformly among those remaining in ; then the particles evolve independently from each others and so on .while this method is powerful , one drawback is that , at some random time , all the particles will eventually be absorbed simultaneously . 
at this time ,the interacting particle system is stopped and there is no natural way to reintroduce all the particles at time .when the number of particles is large and the probability of absorption is uniformly bounded away from zero , the time is typically very large and this explain the great success of this method .however , many situations does not enter the scope of these assumptions , such as diffusion processes picked at discrete times or the neutron transport approximation ( see section [ sec : example_neutron ] ) .our method is non - failable in these situations .moreover the uniform convergence theorem provided in section [ sec : main2 ] also holds in these cases , under suitable assumptions .when the underlying process is a continuous time process , one alternative to the methods of has been introduced recently .the idea is to consider a continuous time -particles system , where the particles evolve independently until one ( and only one ) of them is absorbed . at this time , the unique absorbed particle is re - introduced to the position of one other particle , chosen uniformly among those remaining in .this continuous time system , introduced by burdzy , holyst , ingermann and march ( see for instance ) , can be used to approximate the distribution of diffusion processes conditioned not to hit a boundary .unfortunately , it yields two new difficulties .the first one is that it only works if the number of jumps does not explode in finite time almost surely ( which is not always the case even in non - trivial situations , see for instance ) .the second one is that , when it is implemented numerically , one has to compute the exact absorption time of each particles , which can be cumbersome for diffusion processes and complicated boundaries .note that , when this difficulties are overcome , the empirical distribution of the process is known to converge to the conditional distribution ( see for instance the general result and the particular cases handled in ) .finally , it appears that both methods are not applicable in the generality we aim to achieve in the present paper and , in some cases , both method will fail ( as in the case of the neutron transport example of section [ sec : example_neutron ] ) .let us now describe the original algorithm studied in the present paper .fix .the particle system that we introduce is a discrete time markov process evolving in .we describe its dynamic between two successive times and , knowing , by considering the following random algorithm which act on any -uplet of the form * algorithm 1 . * initiate by setting and for all and repeat the following steps until for all . 1 .choose randomly an index uniformly among 2 .choose randomly a position according to .then * if , chose an index among and replace by in . *if , replace by in . after a ( random ) finite number of iterations , the -uplet will satisfy for all . 
when this is achieved , we set .our first main result , stated in section [ sec : main1 ] , is that , for all , the empirical distribution of the particle system evolving following the above dynamic actually converges to the conditional distribution of the original process at time .we prove this result by building a continuous time markov process such that is distributed as for all entire time , and such that the general convergence result of applies .our second main result , stated in section [ sec : main2 ] , shows that , if the conditional distribution of the process is exponentially mixing ( in the sense of or for the time - inhomogeneous setting ) and under a non - degeneracy condition that is usually satisfied , then the approximation method converges uniformly in time . in section [ sec : example_neutron ], we illustrate our method by proving that it applies to neutron transport process absorbed at the boundary of an open set .in this section , we consider the particle system defined by algorithm 1 .we state and prove our main result in a general setting . [ thm : intro - main ] assume that converges in law to a probability measure on .then , for any and any bounded continuous function , {law } \mathbb{e}_{\mu_0}(f(x_n ) | n<\tau_\partial).\ ] ] moreover , we emphasize that our result applies to any process satisfying , overcoming the limitations of all previously cited particle approximation methods , as illustrated by the application to a neutron transport process in section [ sec : example_neutron ] . the proof is divided in two steps .first , we provide an implementation of algorithm 1 as the discrete time included chain of a continuous time fleming - viot type particle system .in particular , this step provides a mathematically tractable implementation of algorithm 1 . in a second step ,we use existing results on fleming - viot type particle systems to deduce that the empirical distribution of the particle system converges to the conditional distribution ._ step 1 : algorithm 1 as a fleming - viot type process _+ let us introduce the continuous time process defined , for any , by }1_{t < u_{[t ] } } + x_{[t]+1}1_{t \geq u_{[t]}}\ ] ] where ] . with this definition , is a non - markovian continuous time process such that and have the same law for all .now , we define the continuous time process by by construction , the continuous - time process defined by is a strong markov process evolving in , with absorbing set ( see figure [ fig : graph ] for an illustration when and ) . and the thin lines are the trajectory of , which jumps from to at time . at any time , is equal to if the process has already jumped during any interval ] killed at , reflected at and solution to the following stochastic differential equation ,\ \beta>2.\end{aligned}\ ] ] in this case , the continuous time fleming - viot approximation method introduced in explodes in finite time almost surely , as proved in . as a matter of fact, it is not known if the fleming - viot type particle system is well defined as soon as the diffusion coefficient is degenerated or not regular toward the boundary .on the contrary , our assumption holds true for fairly general one dimensional diffusion processes , thanks to the study provided in .hence our approximation method is valid and , using the next results , converges uniformly in time for both neutron transport processes and degenerate diffusion processes . 
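as a concrete illustration of algorithm 1 , the following sketch runs the particle system for a simple random walk on the positive integers absorbed at the origin . the re - introduction rule is partly elided in the extracted description , so the sketch adopts one natural reading : when a proposed move is absorbed , the particle restarts from the current position of another particle chosen uniformly at random and is evolved again until it makes a non - absorbed transition . this is a hypothetical reading for illustration , not necessarily the authors' exact rule .

```python
# Illustrative implementation of Algorithm 1 for a toy absorbed chain.
# step_kernel stands in for the underlying transition kernel; absorption is
# encoded by returning 0.  The re-introduction rule is an assumption (see lead-in).
import random

def step_kernel(x, rng):
    """One step of a simple random walk absorbed at 0."""
    if x == 0:
        return 0
    return x + rng.choice((-1, 1))

def algorithm1_step(positions, rng):
    N = len(positions)
    pos = list(positions)
    done = [False] * N
    while not all(done):
        i = rng.choice([k for k in range(N) if not done[k]])
        y = step_kernel(pos[i], rng)
        if y == 0:                              # absorbed: restart from another particle
            j = rng.choice([k for k in range(N) if k != i])
            pos[i] = pos[j]
        else:                                   # survived: accept the move
            pos[i] = y
            done[i] = True
    return pos

def conditional_law_estimate(x0=5, N=200, n_steps=20, seed=0):
    rng = random.Random(seed)
    pos = [x0] * N
    for _ in range(n_steps):
        pos = algorithm1_step(pos, rng)
    return pos      # empirical distribution approximating the conditioned law at time n_steps
```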
for any , we define the empirical distribution of the process at time as , and for any bounded measurable function on , we set [ thm : main - two ] assume that and holds true .then there exist two constants and , such that , for all and all measurable function bounded by , with in the case where the initial position of the particle system are drawn as independent random variables distributed following the same law , then , choosing small enough so that , basic concentration inequalities and the equality imply that for some .this implies the following corollary .assume that are independent and identically distributed following a given law on .if and hold true , then there exist two constants and such that , and all measurable function bounded by , where is the same constant as in theorem [ thm : main - two ] .we emphasize that the above results and their proofs can be adapted to the time - inhomogeneous setting of , with appropriate modifications of assumption .the following result is specific to the time - homogeneous setting and is proved at the end of this section .[ thm : invariant - quasi - stationary - distribution ] under the assumptions of theorem [ thm : main - two ] , the particle system is exponentially ergodic , which means that it admits a stationary distribution ( which is a probability measure on ) and that there exists positive constants and such that moreover , there exists a positive constant such that , for all measurable function bounded by , where is the same as in theorem [ thm : main - two ] and is distributed following . using the exponential convergence assumption , we deduce that , for any function that and all , denoting by the natural filtration of the particle system , we deduce from theorem [ thm : intro - main ] that , almost surely , hence , for all , but ( * ? ? ?* theorem 2.1 ) entails the existence of a measure on and positive constants and such that for any and , note that , from now on , is a fixed constant .this entails where is the constant of .we deduce that with .our aim is now to control , uniformly in and for all . in order to do so , we make use of the following lemma , proved at the end of this subsection . [lem : controle_pos_part ] there exists and such that , for any value of , moreover , if for some , then from this lemma ( where we assume without loss of generality that , from the markov property applied to the particle system and since implies , we deduce that , for any , where the last line is obtained by iteration over .this and equation imply that , for any , taking and assuming , without loss of generality , that , we obtain finally , using inequality and taking straightforward computations implies the existence of a constant such that , for all , with now , for , we have using the same computations as above , this concludes the proof of theorem [ thm : main - two ] .assume that .we obtain from theorem [ thm : intro - main ] that markov s inequality thus implies that , for all , and hence that but , by assumption , we have , so that .we deduce that choosing , we finally obtained in the general case ( when one does not have a good control on ) , the above strategy is bound to fail since we do not have a good control on the distance between the conditioned semi - group and the empirical distribution of the particle system . 
as a consequence ,we need to take a closer look at algorithm 1 .as explained in the description of this algorithm , the position of the system at time is computed from the position of the system at time through several steps , each step being composed of two stages .we denote by the number of steps needed to compute the position of the system at time . for any step ,we denote by the position of the particle at the beginning of step , by the state of the particle at the beginning of step , and by the index of the particle chosen during the first stage of step . with this notation ,the process is a markov chain . in what follows ,we denote by the natural filtration of this markov chain .we also introduce the quantities of course , we have and , at the beginning of the first step , one has for any , conditionally to and on the event , the position of is chosen with respect to hence , conditionally to and on the event , is equal to from assumption , we deduce that , conditionally to and on the event , is equal to let us denote by the successive step numbers during which the sequence jumps , that is it is clear that , for all , is a stopping time with respect to the filtration .we are interested in the sequence of random variables , defined by conditionally to ( the filtration before the stopping time ) , we deduce from that , for all , since almost surely .we deduce that moreover , assumption entails that . as a consequence ,there exists a coupling between and the markov chain with initial law and transition probabilities such that , for all .the process is a positive super - martingale ( and a martingale if ) and hence it converges to a random variable almost surely as .let us now prove that is not equal to zero almost surely .consider a plya urn starting with balls with one white one , that is a markov chain in such that and it is well known that is a positive and bounded martingale which converges almost surely to a random variable distributed following a beta distribution with parameters .in particular , this implies that the event has a positive probability and that , conditionally to this event , converges to a positive random variable : {a.s . } \mathbf{1}_{w_k / k\leq c_0,\,\forall k\geq 0}\,s_\infty.\end{aligned}\ ] ] since and have the same transition probabilities at time from states such that , there exists a coupling such that and hence such that since the right hand side is positive with positive probability , we deduce that is positive with positive probability . but implies that , thus there exists such that because of the relation between and , we deduce that by definition of , implies and hence this concludes the proof of lemma [ lem : controle_pos_part ] .we first prove the exponential ergodicity of the particle system and then deduce .using lemma [ lem : controle_pos_part ] , we know that , for any initial distribution of , on the event , there exists at least one particle satisfying .let us denote by the set of indexes of such particles and by the set of indexes such that .the probability that the first steps of algorithm 1 concern the indexes of in strictly increasing order is strictly lowered by and hence by . for each of this step , the probability that the chosen particle with index in is killed and then is sent to the position of a particle with index in is lowered by and hence by .overall , the probability that , after the first steps of algorithm 1 , all the particles with index in have jumped on a particle with index in is bounded below by and hence by . 
on this event , the probability that , for each next step in the algorithm up to time , the chosen particle jumps without being absorbed is bounded below by .but , using ( * ? ? ?* theorem 2.1 ) , we know that , under assumption [ eq : expo - cv ] , there exist a probability measure on and a constant such that . since the particles are independent on the event where none of them is killed , we finally deduce that the distribution of the particle system at time satisfies classical coupling criteria ( see for instance ) entails the exponential ergodicity of the particle system .let us now prove that holds . consider and such that . then , applying theorem [ thm : main - two ] to the particle system with initial position , we deduce that , for all , using the exponential ergodicity of the particle system and the exponential convergence , we deduce that letting tend toward infinity implies and concludes the proof of theorem [ thm : invariant - quasi - stationary - distribution ] .the propagation of neutrons in fissible media is typically modeled by neutron transport systems , where the trajectory of the particle is composed of straight exponential paths between random changes of directions .the behavior of a neutron before its absorption by a medium is related to the behavior of neutron tranport before extinction , where extinction corresponds to the exit of a neutron from a bounded set .we recall the setting of the neutron transport process studied in .let be an open connected bounded domain of , let be the unit sphere of and be the uniform probability measure on .we consider the markov process in constructed as follows : and the velocity is a pure jump markov process , with constant jump rate and uniform jump probability distribution .in other words , jumps to i.i.d .uniform values in at the jump times of a poisson process . at the first time where , the process immediately jumps to the cemetery point , meaning that the process is absorbed at the boundary of .an example of path of the process is shown in fig .[ fig : sample - path ] . for all and , we denote by ( resp . ) the distribution of conditionned on ( resp .the expectation with respect to ) .we also assume the following condition on the boundary of the bounded open set .this is an interior cone type condition satisfied for example by convex open sets of and by open sets with boundaries .* is non - empty and connected ; * there exists and such that , for all , there exists measurable such that and for all , for all ] . by (* ( 4.3 ) ) , for any , there exists a constant such that in particular , we deduce that , for all , now , fix and consider the first ( deterministic ) time when the ray starting from with direction hits or , defined by let us first assume that .then for some and hence where denotes the successive jump times of the process . using the markov property , we deduce from that this implies that , for all and all such that , let us now assume that .using assumption ( h ) , we obtain using the strong markov property and , we deduce that if , then implies that the process jumps at least one time before reaching , that is before time , so that . using the markov property, we deduce that , for or , this , equations and together imply that assumption is fulfilled for all .this concludes the prood of proposition [ prop : neutron ] .
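the neutron transport dynamics described above are straightforward to simulate exactly , since the motion between direction changes is a straight line . the sketch below takes the open unit disc as an illustrative domain ( the text allows any bounded connected open set satisfying the cone condition ) and returns the new position and direction after a unit time step , or a flag signalling absorption at the boundary ; such a one - step kernel is exactly what the particle algorithm sketched earlier needs .

```python
# Exact event-driven simulation of one time step of the neutron transport process
# on the open unit disc (illustrative domain); jump rate, speed and dt are
# illustrative parameters.
import math
import random

def random_direction(rng):
    phi = rng.uniform(0.0, 2.0 * math.pi)
    return (math.cos(phi), math.sin(phi))

def exit_length(p, v):
    """Distance from p (strictly inside the unit disc) to the boundary along unit direction v."""
    b = p[0] * v[0] + p[1] * v[1]
    c = p[0] ** 2 + p[1] ** 2 - 1.0
    return -b + math.sqrt(b * b - c)       # positive root of s^2 + 2*b*s + c = 0

def neutron_step(state, dt=1.0, rate=2.0, speed=1.0, rng=random):
    """Advance (position, direction) by time dt; return None if absorbed before dt."""
    p, v = state
    t = 0.0
    while True:
        jump = rng.expovariate(rate)        # time until the next direction change
        if t + jump >= dt:                  # no further change before the end of the step
            flight = dt - t
            if speed * flight >= exit_length(p, v):
                return None                 # the straight flight leaves the domain
            return ((p[0] + speed * flight * v[0], p[1] + speed * flight * v[1]), v)
        if speed * jump >= exit_length(p, v):
            return None                     # absorbed during this flight
        p = (p[0] + speed * jump * v[0], p[1] + speed * jump * v[1])
        t += jump
        v = random_direction(rng)

if __name__ == "__main__":
    rng = random.Random(0)
    alive = 0
    for _ in range(1000):
        s = ((0.0, 0.0), random_direction(rng))
        for _ in range(10):
            s = neutron_step(s, rng=rng)
            if s is None:
                break
        alive += s is not None
    print("estimated survival probability over 10 steps:", alive / 1000)
```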
we consider a general method for the approximation of the distribution of a markov process conditioned not to hit a given set . existing methods are based on particle systems that are failable , in the sense that , in many situations , they are not well defined after a given random time . we present a method based on a new particle system which is always well defined . moreover , we provide sufficient conditions ensuring that the particle method converges uniformly in time . we also show that the method yields an approximation of the quasi - stationary distribution of markov processes . our results are illustrated by their application to a neutron transport model . _ keywords : _ particle system ; process with absorption ; approximation method for degenerate processes . _ 2010 mathematics subject classification . _ primary : 37a25 ; 60b10 ; 60f99 . secondary : 60j80 .
in recent years , a large number of investigations has been made into the control of chaotic dynamics , and many techniques have been applied in simulations and experiments ( see , for instance , for an overview ) .two techniques have been proposed for an _ open - loop _ control of chaos .the first approach is related to vibrational methods , as a scalar periodic perturbation is applied to the chaotic system . usually , the control signals are sinusoidal or two - mode forces .recently it has been shown that the method can be improved by use of optimized multimode signals which are more complex .the second method uses equations of motion and a specific goal dynamics to derive vector control forces .if the goal trajectory is suitably chosen , the chaotic system under control converges to the goal .therefore , this method is often referred to as entrainment control . in this paperit is shown how entrainment control can be improved with respect to small control forces . for this purpose ,a specific property of chaotic systems is exploited : dense unstable periodic orbits ( upos ) .in fact , upos are common goal orbits of most feedback techniques employed for control of chaos ( see , e.g. , ) .because a upo is a natural , but unstable motion of the system , control forces have to be applied only for transfer to and stabilization of the orbit . in the ideal( noiseless ) case , the stabilization forces tend to zero once the upo is actually reached .therefore , feedback control of upos can be maintained with very small forces in low noise systems . despite the power of such methods , however ,there are situations where one wants to or has to dispense with a feedback from the system . then , open - loop techniques are needed .while there exists a theory for open - loop stabilization of unstable fixed points ( vibrational control , see ) , a counterpart for stabilization of unstable periodic orbits is still lacking to the author s knowledge .although it is supposed that the scalar periodic perturbation methods mentioned above usually stabilize periodic dynamics in the vicinity of upos , this has not been shown yet explicitly . the underlying mechanism , possibly some resonance phenomenon , as well as the exact final dynamics are still unknown .this is different for entrainment control , where the appropriate dynamics is given _ a priori _ , and which is described in the following .we start with a nonlinear dynamical system in the continuous time domain , which is influenced by a control signal vector .the equations of motion are supposed to be known , according to the ordinary differential equation with the system state in the state space , the control vector in the control signal space and the vector field .let be a goal dynamics generated by a vector field , , . 
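to fix ideas before the control equations are derived , the following sketch shows the open - loop entrainment construction for an additively controlled lorenz system : choosing the control as the difference between the goal velocity and the uncontrolled vector field evaluated along the goal makes the goal trajectory an exact solution , and the force can be precomputed without any state measurement . the lorenz parameters and the simple periodic goal are illustrative assumptions , not the optimized upo deformations discussed later .

```python
# Open-loop entrainment force for an additively controlled Lorenz system
# dx/dt = F(x) + u(t), with u(t) = g'(t) - F(g(t)) so that g is an exact solution.
# Parameters and the goal trajectory are illustrative assumptions.
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def F(x):
    """Uncontrolled Lorenz vector field."""
    return np.array([SIGMA * (x[1] - x[0]),
                     x[0] * (RHO - x[2]) - x[1],
                     x[0] * x[1] - BETA * x[2]])

def goal(t, omega=8.0):
    """An arbitrary smooth periodic goal trajectory g(t) and its velocity."""
    g = np.array([2.0 * np.sin(omega * t),
                  2.0 * np.cos(omega * t),
                  25.0 + np.sin(omega * t)])
    dg = np.array([2.0 * omega * np.cos(omega * t),
                   -2.0 * omega * np.sin(omega * t),
                   omega * np.cos(omega * t)])
    return g, dg

def control(t):
    g, dg = goal(t)
    return dg - F(g)        # open-loop entrainment force along the goal

if __name__ == "__main__":
    ts = np.linspace(0.0, 2.0 * np.pi / 8.0, 400)
    umax = max(np.linalg.norm(control(t)) for t in ts)
    print("largest control force along the goal:", umax)
    # Whether nearby initial states are attracted to g depends on the stability
    # of the goal under the controlled flow - precisely what the goal
    # optimization described below addresses.
```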
to introduce the goal dynamics as solutions of eq .( [ e1 ] ) , we have to apply forces which solve the equation thus , the vector field is simply changed to by the control forces .to do this , however , feedback from the system is necessary , as the actual system state appears in eq .( [ e3 ] ) .the main point of the entrainment control method is to eliminate by the assumption that the system is already located in the initial goal state when the control is started at .if so , the correct control signal is given by the solution of the equation equation ( [ e4 ] ) can be solved without any system state measurement ; in fact , not even a generating vector field has to be given : it is sufficient to know the goal trajectory itself and its time derivative ( velocity ) for the control time interval , } ] yields control forces according to eq .( [ e5 ] ) .parameters of the lorenz system are set to , , and .the resulting chaotic attractor is shown in fig .[ fig1](a ) . an embedded upo which corresponds to in eq .( [ e7 ] ) is given in fig .[ fig1](b ) by the interrupted line .the deformation is defined by five fourier modes ( =5 ) which leads to a 34-dimensional search space for optimization ; is set to 5 , to 100 in eq .( [ e8 ] ) .the best result of several optimization runs is shown in fig .[ fig1](b ) by the solid line .it is a stable goal trajectory and therefore a stable periodic orbit ( spo ) of the controlled system .the actual values of deformation parameters can be found in tab .[ tab1 ] together with additional data of the upo and the spo .the deformation lies in the range of some percent , and it is plotted in fig .[ fig2](a ) .the resulting forces , shown in fig .[ fig2](b ) , change the vector field of the chaotic system less than about 10% .this is an improvement of more than a magnitude if compared to goals in convergent regions .numerical tests indicated that the spo is globally asymptotically stable ; the basin is the whole phase space . however , transient times until control is established depend strongly on the initial state , and range from just a few up to a few hundred control periods . a typical behavior is presented in fig . [ fig3 ] .after control is turned on , an intermittent transient appears .finally , the system settles down on the desired goal orbit , which is maintained .it has been shown how open - loop entrainment control in the vicinity of unstable periodic orbits can be realized .the search for suitable goal trajectories has been formulated in terms of an optimization problem with respect to upo deformations .feasibility has been demonstrated in an example , where a chaotic lorenz system has been successfully controlled to an optimized distortion of a upo .the locations of such goal orbits are independent of convergent regions , and thus the required forces are small compared to hitherto used goal dynamics far from the chaotic attractor ..[tab1 ] coordinates give a point of the original upo of the lorenz system , its period .maximum absolute values of the floquet multipliers are given by for the upo and for the optimized deformation ( spo ) .parameters of the spo are the time transformation coefficient and fourier coefficients , .subscripts of numbers indicate a decimal shift , i.e. , stands for . [ cols="<,<,<,<",options="header " , ] 99 chen , g. and dong , x. ( 1993 ) from chaos to order perspectives and methodologies in controlling chaotic nonlinear dynamical systems , _ int .j. 
of bifurcation and chaos _ * 3*(6 ) , 1363 - 1409 .alekseev , v.v and loskutov , a.yu .( 1987 ) _ dokl .. sssr _ * 293 * ; lima , r. and pettini , m. ( 1989 ) _ phys .rev . a _ * 41 * ; azevedo , a. and rezende , s.m .( 1991 ) _ phys .* 66 * ; braiman , y. and goldhirsch , i. ( 1989 ) _ phys .lett . _ * 66 * ; fronzoni , l. _ et al . _( 1991 ) _ phys .a _ * 43*. salerno , m. ( 1991 ) _ phys .b _ * 44 * ; farrelly , d. and milligan , j.a . (1993 ) _ phys .e _ * 47 * ; cicogna , g. and fronzoni , l. ( 1993 ) _ phys .e _ * 47*. mettin , r. and kurz , t. ( 1995 ) optimized periodic control of chaotic systems , _ phys .a _ * 206 * , 331 - 339 .hbler , a. and lscher , e. ( 1989 ) _ naturwissenschaften _ * 76 * ; plapp , b.b . and hbler , a. ( 1990 ) _ phys . rev. lett . _ * 65 * ; jackson , e.a .( 1990 ) _ physica d _ * 50 * ; shermer , r. _ et al . _( 1991 ) _ phys .a _ * 43 * ; breeden , j.l .( 1994 ) _ phys .a _ * 190*. ott , e. , grebogi , c. , and yorke , y.a .( 1990 ) controlling chaos , _ phys .* 64 * , 1196 - 1199 .bellman , r.e . ,bentsman , j. , and meerkov , s.m .( 1986 ) vibrational control of nonlinear systems : vibrational stabilizability , _ ieee trans ._ * ac-31 * , 710 - 716 .mettin , r. , hbler , a. , scheeline , a. , and lauterborn , w. ( 1995 ) parametric entrainment control of chaotic systems , _ phys .e _ * 51 * , 4065 - 4075 .jackson , e.a .( 1991 ) controls of dynamic flows with attractors , _ phys .a. _ * 44 * , 48394853 .parker t.s . and chua , l.o .( 1989 ) _ practical numerical algorithms for chaotic systems _ , springer - verlag , new york .press , w.h . ,teukolsky , s.a . , vetterling , w.t . and flannery , b.r .( 1992 ) _ numerical recipes in c _ , 2nd ed . , cambridge university press , cambridge
it is demonstrated that improved entrainment control of chaotic systems can maintain periodic goal dynamics near unstable periodic orbits without feedback . the method is based on the optimization of goal trajectories and leads to small open - loop control forces .
consider communication over a discrete - time memoryless channel modeled by a conditional point mass function ( pmf ) or probability density function ( pdf ) , where and are the input and output symbols , and are the input and output alphabets , respectively .let be the shannon capacity .fano showed in that the minimum error probability for block channel codes of rate and length is bounded by where is a positive function of channel transition probabilities , known as the error exponent . for finite input and output alphabets , without coding complexity constraint , the maximum achievable is given by gallager in , where is the input distribution , and is given for different values of as follows , the definitions of other variables in ( [ gallagere1 ] ) can be found in .if we replace the pmf by pdf , the summations by integrals and the operators by in ( [ gallagere ] ) , ( [ gallagere1 ] ) , the maximum achievable error exponent for continuous channels , i.e. , channels whose input and/or output alphabets are the set of real numbers , is still given by ( [ gallagere ] ) . in ,forney proposed a one - level concatenated coding scheme , which can achieve the following error exponent , known as forney s exponent , for any rate with a complexity of .}(1-r_o)e\left ( \frac{r}{r_o}\right ) , \label{ecr}\ ] ] where and are the outer and the overall rates , respectively .forney s coding scheme concatenates a maximum distance separable ( mds ) outer error - correction code with well performed inner channel codes . to achieve , the decoder is required to exploit reliability information from the inner codes using a general minimum distance ( gmd ) decoding algorithm .forney s gmd algorithm essentially carries out outer code decoding , under various conditions , for times .the overall decoding complexity of is due to the fact that the outer code ( which is a reed - solomon code ) used in has a decoding complexity of .forney s concatenated codes were generalized to multi - level concatenated codes , also known as the generalized concatenated codes , by blokh and zyablov in .as the order of concatenation goes to infinity , the error exponent approaches the following blokh - zyablov bound ( or blokh - zyablov error exponent ) .}}\left ( \frac{r}{r_o}-r\right)\left [ \int^{\frac{r}{r_o}}_0\frac{dx}{e_l(x , p_x)}\right ] ^{-1}.\ ] ] in , guruswami and indyk proposed a family of linear - time encodable / decodable nearly mds error - correction codes . by concatenating these codes ( as outer codes ) with _ fixed - lengthed _ binary inner codes , together with justesen s gmd algorithm , forney s error exponent was shown to be achievable over binary symmetric channels ( bscs ) with a complexity of , i.e. , linear in the codeword length .the number of outer code decodings required by justesen s gmd algorithm is only a constant , as opposed to in forney s case .since each outer code decoding has a complexity of , upper - bounding the number of outer code decodings by a constant is required for achieving the overall linear complexity . because justesen s gmd algorithm assumes binary channel outputs , achievability of forney s exponent was only proven for bscs in ( * ? ? ?* ; * ? ? ?* theorem 8) . in this paper, we show that forney s gmd algorithm can be revised to carry out outer code decoding for only a constant number of times . 
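the revised decoder can be pictured as the following driver loop , in which the outer errors - and - erasures decoder is invoked once per reliability threshold taken from a small fixed set , so the number of outer decodings does not grow with the block length . the decoder and the acceptance test below are placeholders standing in for the actual outer - code machinery and a forney - type weighted - distance check , not a specific library api .

```python
# Conceptual sketch of a GMD driver loop with a constant number of trials.
# ee_decode(symbols, erasures) is an errors-and-erasures decoder returning a
# codeword or None; accept(candidate, symbols, reliabilities) is the acceptance
# test.  Both are placeholders (assumptions), not real library calls.
def revised_gmd(symbols, reliabilities, thresholds, ee_decode, accept):
    """symbols[i], reliabilities[i]: inner-decoder estimate and weight in [0, 1]."""
    for theta in thresholds:                        # constant number of outer decodings
        erasures = [i for i, a in enumerate(reliabilities) if a < theta]
        candidate = ee_decode(symbols, erasures)
        if candidate is not None and accept(candidate, symbols, reliabilities):
            return candidate
    return None                                     # decoding failure
```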
with the help of the revised gmd algorithm , by using guruswami - indyk s outer codes with fixed - lengthed inner codes ,one - level and multi - level concatenated codes can arbitrarily approach forney s and blokh - zyablov exponents with linear complexity , over general discrete - time memoryless channels .consider one - level concatenated coding schemes .assume , for an arbitrarily small , we can construct a linear encodable / decodable outer error - correction code , with rate and length , which can correct symbol errors and symbol erasures so long as . note that this is possible for large as shown by guruswami and indyk in . to simplify the notations , we assume is an integer . the outer code is concatenated with suitable inner codes with rate and fixed length .the rate and length of the concatenated code are and , respectively . in forney s gmd decoding ,inner codes forward not only the estimates ] to the outer code , where , and .let for any outer codeword ] , for , where is a positive constant with being an integer , and is given by define dot product as then following theorem gives the key result that enables the revision of forney s gmd decoder .[ theorem2 ] if , then for some , .define a set of values for and an integer , where .can not be .because if , i.e. , , then there are at least zeros in vector . consequently , , which contradicts the assumption that . ]let we have and define a new weight vector ] with such that for and for we have define a set of indices according to the definition of , for , .hence since , and , we have consequently , implies if for all s , then which contradicts ( [ alphax ] ) .therefore , there must be some that satisfies since for , has no more than number of s , which implies , the vectors that satisfy ( [ px ] ) must exist among with . in words , for some , .theorems [ theorem1 ] and [ theorem2 ] indicate that , if is transmitted and , for some , errors - and - erasures decoding specified by ( where symbols with are erased ) will output .since the total number of vectors is upper bounded by a constant , the outer code carries out errors - and - erasures decoding only for a constant number of times .consequently , a gmd decoding that carries out errors - and - erasures decoding for all s and compares their decoding outputs can recover with a complexity of . since the inner code length is fixed , the overall complexity is .the following theorem gives an error probability bound for one - level concatenated codes with the revised gmd decoder .[ theorem3 ] assume inner codes achieve gallager s error exponent given in ( [ gallagere ] ) .let the reliability vector be generated according to forney s algorithm presented in ( * ? ? ?* ; * ? ? ?* section 4.2 ) .let be the transmitted outer codeword . for large enough , error probability of the one - level concatenated codes is upper bounded by ,\end{aligned}\ ] ] where is forney s error exponent given by ( [ ecr ] ) and is a function of and with if .the proof of theorem [ theorem3 ] can be obtained by first replacing theorem 3.2 in with theorem [ theorem2 ] , and then following forney s analysis presented in ( * ? ? ?* ; * ? ? 
?* section 4.2 ) .the difference between forney s and the revised gmd decoding schemes lies in the definition of errors - and - erasures decodable vectors , the number of which determines the decoding complexity .forney s gmd decoding needs to carry out errors - and - erasures decoding for a number of times linear in , whereas ours for a constant number of times .although the idea behind the revised gmd decoding is similar to justesen s gmd algorithm , justesen s work has focused on error - correction codes where inner codes forward hamming distance information ( in the form of an vector ) to the outer code .applying the revised gmd algorithm to multi - level concatenated codes is quite straightforward .achievable error exponent of an -level concatenated codes is given in the following theorem .[ theorem4 ] for a discrete - time memoryless channel with capacity , for any and any integer , one can construct a sequence of -level concatenated codes whose encoding / decoding complexity is linear in , and whose error probability is bounded by }}\frac{\frac{r}{r_o}-r}{\frac{r}{r_om}\sum_{i=1}^m\left [ e_l\left((\frac { i}{m})\frac{r}{r_o},p_x\right ) \right ] ^{-1 } } \nonumber \\\end{aligned}\ ] ] the proof of theorem [ theorem4 ] can be obtained by combining theorem [ theorem3 ] and the derivation of in .note that , where is the blokh - zyablov error exponent given in ( [ bzbound ] ) .theorem [ theorem4 ] implies that , for discrete - time memoryless channels , blokh - zyablov error exponent can be arbitrarily approached with linear encoding / decoding complexity .we proposed a revised gmd decoding algorithm for concatenated codes over general discrete - time memoryless channels . by combining the gmd algorithm with guruswami and indyk s error correction codes, we showed that forney s and blokh - zyablov error exponents can be arbitrarily approached by one - level and multi - level concatenated coding schemes , respectively , with linear encoding / decoding complexity .the authors would like to thank professor alexander barg for his help on multi - level concatenated codes .
guruswami and indyk showed in that forney s error exponent can be achieved with linear coding complexity over binary symmetric channels . this paper extends this conclusion to general discrete - time memoryless channels and shows that forney s and blokh - zyablov error exponents can be arbitrarily approached by one - level and multi - level concatenated codes with linear encoding / decoding complexity . the key result is a revision to forney s general minimum distance decoding algorithm , which enables a low complexity integration of guruswami - indyk s outer codes into the concatenated coding schemes . coding complexity , concatenated code , error exponent
stochastic approximation algorithms and their variants are commonly found in control , communication and related fields .popularity has grown due to increased computing power , and the interest in various ` machine learning ' algorithms . when the algorithm is linear , then the error equations take the following linear recursive form : x_t + w_{t+1 } , \elabel{rlsalpha}\ ] ] where is an error sequence , is a sequence of random matrices , is a `` disturbance '' , and is the identity matrix .an important example is the lms ( least mean square ) algorithm .consider the discrete linear time - varying model : where and are the sequences of ( scalar ) observations and noise , respectively , and ^t ] denote the -dimensional regression vector and time varying parameters , respectively .the lms algorithm is given by the recursion where , and the parameter ] is bounded in ?( ii ) : : what does the averaged model ode/ tell us about the behavior of the original stochastic model ?( iii ) : : what is the impact of variability on _ performance _ of recursive algorithms ?in this section we develop stability theory and structural results for the linear model rlsalpha/ where is a fixed constant .it is assumed that an underlying markov chain , with general state - space , governs the statistics of rlsalpha/ in the sense that and are functions of the markov chain : we assume that the entries of the -matrix valued function are bounded functions of .conditions on the vector - valued function are given below .we begin with some basic assumptions on , required to construct a linear operator with useful properties .we assume throughout that the markov chain is _ geometrically ergodic _ or , equivalently , _-uniformly ergodic_. this is equivalent to assuming the validity of the following two conditions : _ irreducibility & aperiodicity : _ there exists a -finite measure on the state space such that , for any and any measurable with , _ geometric drift : _ there exists a _lyapunov function _ , , , , a ` small set ' , and a ` small measure ' , satisfying under these assumptions it is known that is ergodic and has a unique invariant probability measure , to which it converges geometrically fast , and without loss of generality we can assume that for a detailed development of geometrically ergodic markov processes see .we let denote the set of measurable _ vector - valued _ functions satisfying where is the euclidean norm on , and is the lyapunov function as above . for a linear operator define the induced operator norm via where the supremum is over all non - zero .we say that is a bounded linear operator if , and its spectral radius is then given by the _ spectrum _ of the linear operator is if is a finite matrix , its spectrum is just the collection of all its eigenvalues .generally , for the linear operators considered in this paper , the dimension of and its spectrum will be infinite .the family of linear operators , , that will be used to analyze the recursion rlsalpha/ are defined by , \\[.25 cm ] & = & \expect_x \left[(i-\alpha m_1)^\transpose f(\phi_1 ) \right]\ , , \end{array}\ ] ] and we let denote the spectral radius of .we assume throughout the paper that is a bounded function . under these conditionswe obtain the following result as in .there exists such that for , , and . to ensure that the recursion rlsalpha/ is stable itis necessary that the spectral radius satisfy . 
under this conditionit is obvious that the mean t\to\infty t\to\infty ] is unbounded .proof outline for iterating the system equation esterr/ we may express the expectation ] is bounded in for any deterministic initial conditions , . to construct the stationary process we apply backward coupling as developed in .consider the system starting at time , initialized at , and let , , denote the resulting state trajectory .we then have for all , , \qquad t\ge 0\ , , \ ] ] which implies convergence in to a stationary process : , .we can then compare to the process initialized at , ,\qquad t\ge 0\ , , \ ] ] and the same reasoning as before gives ( ii ) .next we show that is in fact an eigenvalue of for a range of , and we use this fact to obtain a multiplicative ergodic theorem .the maximal eigenvalue in is a generalization of the perron - frobenius eigenvalue ; c.f . .suppose that the eigenvalues of are distinct . then , ( i ) : : there exists such that the linear operator has distinct eigenvalues for all , and is an analytic function of in this domain for each .( ii ) : : for there are associated eigenfunctions and eigenmeasures satisfying moreover , for each , , , are analytic functions on .( iii ) : : suppose moreover that the eigenvalues are real .then we may take sufficiently small so that are real for .the maximal eigenvalue is equal to , and the corresponding eigenfunction and eigenmeasure may be scaled so that the following limit holds : where the convergence is in the -norm .+ in fact , there exists and such that for any the following limit holds : = h_{\alpha } ( x ) \mu_\alpha(f ) + b_0 e^{-\delta_0 t } v(x)\ , .\ ] ] the linear operator possesses a -dimensional eigenspace corresponding to the eigenvalue .this eigenspace is precisely the set of constant functions , with a corresponding basis of eigenfunctions given by , where is the basis element in .the -dimensional set of vector - valued eigenmeasures given by spans the set of all eigenmeasures with eigenvalue .consider the linear operator defined by f , \qquad f\in\lv.\ ] ] it is obvious that is a rank- linear operator , and for we have from the -uniform ergodic theorem of , ^t \to 0,\qquad t\to\infty,\ ] ] where the convergence is in norm , and hence takes place exponentially fast .it follows that the spectral radius of is strictly less than unity . by standard arguments it follows that , for some , the spectral radius of is also strictly less than unity .the results then follow as in theorem 3 of .conditions under which the bound is satisfied are given in , where we also provide formulae for the derivatives of : suppose that the eigenvalues are real and distinct . then , the maximal eigenvalue satisfies , ( i ) : : .( ii ) : : the second derivative is given by , r_0 \ , , \ ] ] where is a right eigenvector of corresponding to , and is the left eigenvector , normalized so that .( iii ) : : suppose that , .then we may take in ( ii ) , and the second derivative may be expressed , where an is the central limit theorem covariance for the stationary vector - valued stochastic process v_0 ] is its variance . 
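before turning to the proof , a small simulation sketch illustrates the kind of gain sensitivity these derivative formulas quantify : the scalar lms error recursion is driven by a regressor modulated by a two - state markov chain , and the empirical steady - state mean - square error is reported for several gains . all numerical values are illustrative assumptions .

```python
# Empirical steady-state mean-square error of a scalar LMS error recursion
# e_{t+1} = (1 - a*phi_t^2) e_t + a*phi_t*w_t with a Markov-modulated regressor.
# All parameter values are illustrative.
import numpy as np

def lms_mse(alpha, n=200_000, p_switch=0.05, levels=(0.3, 3.0), noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    state, e, acc, count = 0, 0.0, 0.0, 0
    for t in range(n):
        if rng.random() < p_switch:                # two-state Markov modulation
            state = 1 - state
        phi = levels[state] * (1.0 if rng.random() < 0.5 else -1.0)
        e = (1.0 - alpha * phi * phi) * e + alpha * phi * noise * rng.standard_normal()
        if t >= n // 2:                            # discard the transient
            acc += e * e
            count += 1
    return acc / count

if __name__ == "__main__":
    for a in (0.02, 0.05, 0.1, 0.2):
        print(a, lms_mse(a))
```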
to prove ( i ), we differentiate the eigenfunction equation to obtain setting then gives a version of _ poisson s equation _, where ] .as before , we then obtain the steady - state expression , =-\barm^\transpose h_0-h_0 \barm=\eta'_0 \ho.\ ] ] and , as before , we may conclude that .consider the discrete - time , linear time - varying model where is a sequence of scalar observations , is a noise process , is the sequence of -dimensional regression vectors , and are -dimensional time - varying parameters . in this sectionwe illustrate the results above using the lms ( least mean square ) parameter estimation algorithm , where is the error sequence , , . as in the introduction , writing we obtain \ , .\ ] ]this is of the form rlsalpha/ with , and . for the sake of simplicity and to facilitate explicit numerical calculations , we consider the following special case: we assume that is of the form , where the sequence is bernoulli ( with equal probability ) and take to be an i.i.d . noise sequence . in analyzing the random linear system we may ignore the noise and take .this is clearly geometrically ergodic since it is an ergodic , finite state space markov chain , with four possible states .in fact , is geometrically ergodic with lyapunov function .viewing as a vector in , the eigenfunction equation for becomes \ha = \la\ha\ ] ] where ] , ] , we obtain the steady state expression since , we have .now , taking the 2nd derivatives on both sides of qfirst/ gives , letting and considering the steady state , we obtain =\eta''_0\ho+2\eta'_0 \expect_{\pi}[\ho ' ] .\elabel{qsecond_0}\ ] ] poisson s equation qfirst_0/ combined with equation qfirst_steady/ and equation ( 17.39 ) of implies the formula , \\ & = & \expect_{\pi}(\ho')+\sum_{l=0}^{\infty}\expect_{x}[(\barm - m_{l+1})^\transpose\ho+\ho ( \barm - m_{l+1 } ) ] .\end{array}\ ] ] so , from , andqsecond_0/ we have . in order to show is quadratic near zero , we take the 3rd derivative on both sides of qpprime/ and consider the steady state at , with equation ( 17.39 ) of and and , we can show and for , hence is quadratic around .we now turn to the nonlinear model shown in nonlin/. we take the special form , \ , , \elabel{nonlin2}\ ] ] we continue to assume that is geometrically ergodic , and that , , with .the associated ode is given by where , .( n1 ) : : the function is lipschitz , and there exists a function such that furthermore , the origin in is an asymptotically stable equilibrium point for the ode , ( n2 ) : : there exists such that .( n3 ) : : there exists a unique stationary point for the ode nonlinode/ that is a globally asymptotically stable equilibrium .( i ) : : for any , there exists such that ( ii ) : : if the origin is a globally exponentially asymptotically stable equilibrium for the ode nonlinode/ , then there exists such that for every initial condition , , \le b_2 \alpha.\ ] ] proof outline for the continuous - time process is defined to be the interpolated version of given as follows : let , , and define , with defined by linear interpolation on the remainder of $ ] to form a piecewise linear function . 
using geometric ergodicity we can bound the error between and solutions to the ode nonlinode/ as in , and we may conclude that the joint process is geometrically ergodic with lyapunov function .assume that ( n1)(n3 ) hold , and that the eigenvalues of the matrix have strictly positive real part , where then there exists such that for any , the conclusions of ( ii ) hold , and , in addition : ( i ) : : the spectral radius of the random linear system sensitivity/ describing the evolution of the sensitivity process is strictly less than one .( ii ) : : there exists a stationary process such that for any initial condition , , \to 0,\qquad t\to\infty\ , .\ ] ] p. bougerol .limit theorem for products of random matrices with markovian dependence . in _ proceedings of the 1st world congress of the bernoulli society , vol . 1 ( tashkent , 1986 ) _ , pages 767770 , utrecht , 1987 .vnu sci . press .o. dabeer and e. masry .the lms adaptive algorithm : asymptotic error analysis . in _ proceedings of the 34th annual conference on information sciences and systems , ciss 2000 _ , pages wp16 wp17 , princeton , nj , march 2000 .
we give a development of the ode method for the analysis of recursive algorithms described by a stochastic recursion . with variability modelled via an underlying markov process , and under general assumptions , the following results are obtained : ( i ) stability of an associated ode implies that the stochastic recursion is stable in a strong sense when a gain parameter is small . ( ii ) the range of gain - values is quantified through a spectral analysis of an associated linear operator , providing a non - local theory . ( iii ) a second - order analysis shows precisely how variability leads to sensitivity of the algorithm with respect to the gain parameter . all results are obtained within the natural operator - theoretic framework of geometrically ergodic markov processes .
an amazing characteristic of some old fashion problems is their endurement .the projectile motion is one of them .being one of the main problems used to teach elementary physics , variations and not well known facts about it appear in the physics literature of the xxi century .a search in the web or in the _ science citation index _ gives an idea of this fact .some of the recent studies deal with the problem of air resistance in the projectile motion and its pedagogical character made of it an excellent example to introduce the lambert function , a special function. the lambert function is involved in many problems of interest for physicist and engineers , from the solution of the jet fuel problem to epidemics or , even , helium atom eigenfunctions. one of those problems is the solution for the range in the case that the air resistance has the from . in this paper we analyze the not well known fact of the geometrical place formed by the maxima of all the projectile trajectories at launch angle and in the presence of a drag force proportional to the velocity , we shall denote this locus as .the resulting locus becomes a lambert function of the polar coordinate departing from the origin .this problem raises as a natural continuation from the nice fact that in the drag - free case such a locus is an ellipse with an universal eccentricity . the paper is organized as follows . in section [ projectilemotion ] , the set of maxima for projectile trajectories moving under in the presence of air resistance is presented . in section [ solutionlambert ]we find a closed form , in polar coordinates , to express such a geometrical place , . in section [ param ]we present a numerical calculation of the curvature of using the polar angle and the launch angle as parameterizations . additionally , we demonstrate that the synchronous curve is a circle as in the drag - free case in section [ synchronous ] . in section [ conclusions ]we conclude .several approximations in order to consider the air resistance exist in the literature , the simplest is the linear case . in such a casethe force is given by where is the mass of the projectile and is the drag coefficient .the units of are .the velocity components are labeled as , with and .the solutions for the position and velocity are obtained trough direct integration of eq.([eqnewton ] ) yielding rcl x(t ) & = & , [ timesolx ] rcl y(t ) & = & -g t / b , [ timesol ] for the coordinates , and for the speeds .we used the initial conditions and and . noticing that the terminal speed is in the axis .for the same initial speed these solutions are function of the launch angle and the locus formed by the apexes is obtained if time is eliminated between the solutions in time in eqs .( [ timesolx ] ) and ( [ timesol ] ) , giving the equation and considering the value at the maximum , via .the corresponding solution is where we introduce the dimensionless perturbative parameter , the dimensionless length , and noticing that can be expressed as .an alternative procedure consists in set the derivative to zero to obtain the time of flight to the apex of the trajectory and , evaluate the coordinates at that time .the points conform the locus of apexes for all parabolic trajectories as a function of the launch angle . 
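a minimal numerical sketch of these closed-form solutions and of the time of flight to the apex ; the drag coefficient , launch speed and angles below are illustrative values , not taken from the paper .

```python
import numpy as np

g = 9.81          # m/s^2
b = 0.4           # drag coefficient (1/s), illustrative value
v0 = 20.0         # launch speed (m/s), illustrative value

def trajectory(theta0, t):
    """Closed-form solution of the linear-drag equations of motion,
    launched from the origin at angle theta0 with speed v0."""
    vx0, vy0 = v0 * np.cos(theta0), v0 * np.sin(theta0)
    x = (vx0 / b) * (1.0 - np.exp(-b * t))
    y = ((vy0 + g / b) / b) * (1.0 - np.exp(-b * t)) - g * t / b
    return x, y

def apex(theta0):
    """Apex obtained by evaluating the trajectory at the time where vy = 0,
    namely t_m = (1/b) * ln(1 + b*vy0/g)."""
    vy0 = v0 * np.sin(theta0)
    t_m = np.log(1.0 + b * vy0 / g) / b
    return trajectory(theta0, t_m)

for deg in (15, 30, 45, 60, 75):
    xm, ym = apex(np.radians(deg))
    print(f"theta0 = {deg:2d} deg   apex = ({xm:6.2f}, {ym:6.2f}) m")
```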
in fig .[ fig : geometricplace ] we plot described by eqs .( [ xmeqno ] ) and ( [ ymeqno ] ) , for the drag - free case ( in dashed red line ) and for in continuous blue line .several projectile trajectories are plotted in thin black lines .the locus of apexes defined by of eqs .( [ xmeqno ] ) and ( [ ymeqno ] ) is described parametrically by the launch angle and it changes for different values of . in the next section we shall find a description of in terms of polar coordinates and in a closed form using the lambert function . formed by the apexes of all the projectile trajectories ( continuous line in blue ) given by eqs .( [ xmeqno ] ) and ( [ ymeqno ] ) in rectangular coordinates or by eq .( [ w1 ] ) , the last one express in polar coordinates and in term of the lambert function .the dashed red line is the ellipse of eccentricity which represents the drag - free case , i.e. . the parameters are and . ]in order to obtain an analytical closed form of the locus we change the variables to polar ones , i.e. , and .the selection of a description departing from that origin instead of the center or the focus of the ellipse is because the resulting geometrical place is no longer symmetric and the only invariant point is just the launching origin .we substitute the polar forms of and into equations ( [ xmeqno ] ) and ( [ ymeqno ] ) and rearranging terms it must be expressed as the lhs depends on and meanwhile the rhs depends on , however the last angle is a function of and reads as by making from eqs .( [ xmeqno ] ) and ( [ ymeqno ] ) . in order to obtain set since eq .( [ angleeq ] ) allows us to have , implicitly , .we shall return to this point later .hence , we can write eq .( [ ymeqno2 ] ) as where we multiplied both sides of eq .( [ ymeqno2 ] ) by . setting and in eq .( [ pre1 ] ) , it shall have the familiar lambert function form , , from which we can obtain as it is important to note that the argument of the lambert function in this equation is negative for all the values . remains real in the range and have the branches denoted by and . we select the principal branch , , since it is the bounded one , however , for values of there is a precision problem since the required argument values are near to .it is important to stress that in eq .( [ w1 ] ) the independent variable is the angle and , it constitutes the parameterization of the curve .the polar expression of can also be written in terms of the tree function , giving we recover the drag - free result = 2 [ ellipse0 ] when .an explanation of this unfamiliar form of an ellipse is given in appendix [ appendix1 ] followed by a discussion about the limit of expression ( [ w1 ] ) in appendix [ appendix2 ] .formula ( [ w1 ] ) exhibits the deep relationship between the lambert function and the linear drag force projectile problem , since not only the range is given as this function .the problem open the opportunity to study the w function in polar coordinates , that , almost in the review of referencei , is absent .even when it is possible to write the locus in terms of this form does not shows the formal elegance of relation ( [ w1 ] ) . as a function of the launch angle for two different values of and their inverses . 
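before passing to the closed lambert form , the locus can be tabulated numerically in polar coordinates ; a minimal sketch , where the parameter values are illustrative and the drag - free comparison uses the standard ellipse r(theta) = (2 v0^2/g) sin(theta)/(1 + 3 sin^2(theta)) recalled in the appendix .

```python
import numpy as np

g = 9.81

def apex_polar(theta0, v0, b):
    """Apex of the linear-drag trajectory in polar coordinates (r_m, theta_m)
    measured from the launch point."""
    vx0, vy0 = v0 * np.cos(theta0), v0 * np.sin(theta0)
    t_m = np.log(1.0 + b * vy0 / g) / b               # time of flight to the apex
    x_m = (vx0 / b) * (1.0 - np.exp(-b * t_m))
    y_m = ((vy0 + g / b) / b) * (1.0 - np.exp(-b * t_m)) - g * t_m / b
    return np.hypot(x_m, y_m), np.arctan2(y_m, x_m)

v0 = 10.0
angles = np.radians(np.linspace(5.0, 89.0, 9))

# locus of apexes for a moderate drag coefficient (illustrative value)
for th0 in angles:
    r, th = apex_polar(th0, v0, b=0.5)
    print(f"theta0 = {np.degrees(th0):5.1f} deg  ->  (r, theta) = ({r:6.3f}, {np.degrees(th):5.2f} deg)")

# drag-free limit: as b -> 0 the locus approaches the ellipse
#   r(theta) = (2 v0^2 / g) sin(theta) / (1 + 3 sin(theta)^2)
for b_small in (1e-1, 1e-2, 1e-3):
    r_b, th_b = apex_polar(angles, v0, b=b_small)
    r_ell = 2.0 * v0**2 / g * np.sin(th_b) / (1.0 + 3.0 * np.sin(th_b)**2)
    print(f"b = {b_small:.0e}   max |r - r_ellipse| = {np.max(np.abs(r_b - r_ell)):.2e}")
```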
]now we return to equation ( [ angleeq ] ) since we need to solve explicitly it in order to have the function .this task is not trivial since even when we approximate the rhs in expression ( [ angleeq ] ) up to first order in , the inversion is not easy .a way to do the inversion is to expand in a taylor series the rhs and then invert the series term by term. using _ mathematica _ to perform this procedure up to , we obtain as a result ll ( ) & ( 2 ) - ( 2 ) ^2 + ^2 ( 2 ) ^3 + & + ( 27 - 10 ^3 ) ( 2 ) ) ^4 + [ expantion ] the -independent terms had been resumated to yield .however , the series does not converge for values in the argument larger than .the reason is the small convergence ratio for the taylor expansion of . an easier way to perform the inversion is to evaluate using eq .( [ angleeq ] ) and plot the points , the result is shown in figure [ fig : inverse ] .the result is in agreement with the plot of eq .( [ expantion ] ) up to its convergence ratio and it is not shown .notice that this method is exact in the sense that we can obtain as many pair of numbers as we need , a function is , finally , a relation one to one between two sets of real numbers .another result is to obtain the derivative , since it shall be needed in the following sections . to this end, we note that both functions increase monotonically and their derivatives are not zero , except at the interval end .hence , we can use the inverse function theorem in order to obtain = . the resultis shown in fig .[ fig : invderiv](a ) as well as the second derivative in fig .[ fig : invderiv](b ) .the second derivative is calculated using an approximation to the slope to the function previously calculated and using points in the interval ] , as expected .a plot of for several values of beginning at zero and ending at appears in figure [ fig : angle](a ) . in redappears the drag - free case . note that both extremal values increase for increasing value as and .notice that for small , the crosses the drag - free curvature . for increasing values of parameter according to equation ( [ kappa_e ] ) .the values of are indicated in the inset .( b ) several important angles as a function of dimensionless parameter are plotted . in lines and diamondsappears the angle , , at which the curvature is maximum . in red circlesthe angle at which the range is maximum according to exact solution , eq .( [ exactangle ] ) , and in blue crosses the same angle according to eq .( [ asympangle ] ) ( see text for discussion ) . in a dashed blue line the angle at which skewness is maximum is plotted . ]the angles at which attain their maxima are obtained in the usual way and requires to solve , numerical or graphically , the equation l 3 ( 1 + ^ * ) 2^ * _ 1(^ * ) q_2(^ * ) + + ^ * q_3(^ * ) q_4(^ * ) = 0 , with ll q_1 = & 4 ( 3 + ^2 ) 2 ^ * + ( -4 - 15^ * + + & 5 3^ * ) ; ll q_2 = & 16 + 6 ^2 - 8 ^2 2 ^ * + 2 ^2 4 ^ * + + & 30 ^ * - 3 ^ * + 5 ^ * ; ll q_3 = & 5 + 3 4 ^*+ 3 ^ 2 - 4 ^2 2 ^ * + + & ^2 4 ^ * + 10^ * - + & 53^ * + 5 ^ * ; andll q_4 = & 70 + 36 ^2 - 16 ( 1 + 3 ^2 ) 2 ^ * + + & 2 ( 5 + 6 ^2 ) 4 ^ * + 154 ^ * - 31 3 ^ * + & + 7 5 ^*. in figure [ fig : angle](b ) the calculated values of as a function of appear .this angle is between the optimal angle for maximum range ( red circles and blue crosses ) and the angle for the greatest forward skew ( dashed line). _ skew = , where , valid for . 
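the tabulation-based inversion just described is easy to reproduce ; a minimal sketch , assuming illustrative parameter values , that inverts the monotone map from launch angle to polar angle of the apex and estimates the derivative via the inverse function theorem .

```python
import numpy as np

g, v0, b = 9.81, 10.0, 0.5      # illustrative values

def theta_m(theta0):
    """Polar angle of the apex as a function of the launch angle theta0."""
    vx0, vy0 = v0 * np.cos(theta0), v0 * np.sin(theta0)
    t_m = np.log(1.0 + b * vy0 / g) / b
    x_m = (vx0 / b) * (1.0 - np.exp(-b * t_m))
    y_m = ((vy0 + g / b) / b) * (1.0 - np.exp(-b * t_m)) - g * t_m / b
    return np.arctan2(y_m, x_m)

# forward map on a fine grid; since it is monotone, the tabulated pairs invert it
th0_grid = np.linspace(1e-3, np.pi / 2 - 1e-3, 2001)
thm_grid = theta_m(th0_grid)

def theta0_of_thetam(thm):
    """Numerical inverse obtained by interpolating the tabulated pairs."""
    return np.interp(thm, thm_grid, th0_grid)

# derivative of the inverse via the inverse-function theorem:
#   d(theta0)/d(theta_m) = 1 / (d(theta_m)/d(theta0))
dthm_dth0 = np.gradient(thm_grid, th0_grid)
print("theta0(theta_m = 30 deg) =", np.degrees(theta0_of_thetam(np.radians(30.0))), "deg")
print("min slope d(theta_m)/d(theta0) on the grid:", dthm_dth0.min())
```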
in figure [ fig : angle](b ) the optimal angles are drawn , in red circles the exact result in terms of lambert function _ max , s = , [ exactangle ] and the approximated result _ max , w = .[ asympangle ] both expressions are equivalent for large but differ at small , as expected .meanwhile the difference between these angles at small pertubative parameter is unimportant , at large the behavior of the corresponding trajectories is different .one reason is the large asymmetry in the locus formed by the set of apexes . in fig .[ fig : largeeps](a ) we plotted and the corresponding trajectories for the different launch angles for .the blue line corresponds to , note that the maximum height is in contrast to for the drag - free case , however , this can be the case of small friction parameter and large initial velocity giving a large value . in a black line appears the orbit launched at , in red line the corresponding to attain the maximum range and in blue dashed line the orbit with maximum skewness . for .see text for explanation . ]in macmillan s book the calculation of the synchronous curve was done for the drag - free case .this curve is formed if many projectiles were fired simultaneously from the same point , each one with at different launch angle and same initial speed .the locus will be a circle of radius and center in the point , i.e. here , we shall demonstrate that a circle is the synchronous curve in the linear drag case as well . following reference ,we eliminate the launch angle from the position solutions , in the present case they are equations ( [ timesolx ] ) and ( [ timesol ] ) .we write down and and rearrange the terms to give , substituting these expressions in the identity we obtain with the center and the radius . in order to recover the case where , we consider a taylor expansion for the exponential up to second order in the exponential in eq .( [ yc ] ) and up to first order in eq .( [ rc ] ) .the fact that this circle exists in the presence of a drag force is remarkable .we obtained an explicit form for the locus composed by the set of maxima of all the trajectories of a projectile launched at an initial velocity , and in the presence of a linear drag force , , i.e. is the locus of the apexes . in polar coordinates , is written in terms of the principal branch of the lambert function for negative values .this represents the parameterization of the curve by the polar angle only and gives in a closed form and exhibits the deep relationship between the lambert function and the linear drag problem .the curvature of was calculated for different values of the dimensionless parameter in two parameterizations .the first one , the polar parameterization , shows a maximum that slightly departs from the drag - free case in .a wider exploration of the functional dependence respect to is pending due to numerical accuracy in the calculation of the lambert function near the limit at . in the case of a parameterization using the launch angle is not such a restriction . in this case , the curvature was calculated for a wide range of the parameter yielding maximum at angle values larger than those corresponding to maximum range .comparison with the maximum skewness angle was also done and the difference is larger than the previous one . as an addendum , we demostrate that the synchronous curve , in this case , is a circle as in the drag - free case .this work was supported by promep 2115/35621 .hhs thanks to m. 
olivares - becerril for useful discussions and encouragement .ellipse canonical form or the polar form with the origin considered in one of the focus are standard knowledge . in the present case ,however , we require to consider the origin of coordinates located in the _ bottom _ of the ellipse , since , in the presence of a drag force , the _ launching origin _ is the only invariant point when we change the drag force value . to obtain the ellipse form ,we depart from the drag - free solutions at the locus of the apexes , x_m = , [ xmdf ] and y_m = ^2 , [ ymdf ] being . with the help of the trigonometric relations and we transform the upper equations into 2 & = & ,+ 2 & = & 1 - . taking the squares in both expressions , summing them and arranging terms , we arrive to ( ( 1 + 3 ^2 _ m ) - 2 _ m ) = 0 .where we used the polar coordinates and .the solutions are and r_m(_m ) = .the second one is the required form for the ellipse .in order to obtain the drag - free limit for the locus given in equation ( [ w1 ] ) , we note that = ( 1/2 ) [ tangent ] and that .the first expression is obtainable from the drag - free solutions eqs .( [ xmdf ] ) and ( [ ymdf ] ) , and the second is obtained by setting in eq .( [ fdtheta ] ) . where we used relation ( [ tangent ] ) in order to obtain the last line .using trigonometric identity and eq .( [ tangent ] ) we obtain that = 1 + 3 ^2 _ m. using this result in the expression of we obtain the desired result , eq . ( [ ellipse0 ] ) .oo see for instance ` scholar.google.com ` .corless , g.h .gonnet , g.h .hare , d.e.g .jeffrey , and d.e .knuth , `` on the lambert function '' , adv . in comp .mathematics * 5 * , 329 - 359 ( 1996 ) .scott , a.lchow , d. bressanini , and j.d .morgan iii , `` the nodal surfaces of helium atom eigenfunctions '' , phys .a , * 75*. 060101 - 060104 ( 2007 ) .r.d.h . warburton and j. wang , ``analysis of asymptotic motion with air resistance using the lambert function '' , am .* 72 * , 1404 - 1407 ( 2004 ) .e. packel and d. yuen , `` projectile motion with resistance and the lambert function '' , coll .j. * 35*(5 ) , 337 - 350 ( 2004 ) .fernndez - chapou , a.l .salas - brito , and c.a .vargas , `` an elliptic property of parabolic trajectories '' , am .* 72*,1109 - 1109 ( 2004 ) .macmillan , _ theoretical mechanics : static and the dynamics of a particle _ ( mcgraw - hill , new york and london , 1927 ) . reprinted in ( dover , new york , 1958 ) , pp .249 - 254 .thomas , m.b .weir , j. hass , f.r ._ calculus_. 11th ed .( addisson - wesley , 2004 ) ._ mathematical methods for physicist_. 5th ed .( academic press , 2000 ) .e. kreyzig .`` principal , normal , osculating circle '' in _ diffential geometry_. ( dover , n.y . , 1991 ) . e.w .wiesstein `` curvature '' from mathworld . `http://mathworld.wolfram.com/curvature.html ` .steward , `` a little introductory and intermediate physics with the lambert function '' , proc . of the 16th biennial congress of the australian institute of physics .m. colla ed .pp 194 - 197 .australian institute of physics .parville , vic ( 2005 ) .steward , `` characteristics of the trajectory of a projectile in a linear resisting medium and the lambert function '' , australian inst . of physics . 17th .national congress 2006 .wc0035.(2006 ) . in ref . 
, the author comments that for we obtain the special value with , the golden ratio . however , the solution that appears in the article is no longer valid for . if the solution corresponds to another real root of the equation , this special value corresponds to the case when the initial speed is equal to the limit speed . this makes this fact much more intriguing .
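closing the projectile discussion , a quick numerical check of the synchronous - curve result derived above : for a fixed flight time , the positions of projectiles launched at different angles with the same speed are equidistant from the predicted centre . the parameter values below are illustrative .

```python
import numpy as np

g, v0, b = 9.81, 10.0, 0.5        # illustrative values
t = 0.8                           # common flight time of all projectiles

theta0 = np.linspace(0.0, np.pi / 2, 50)
vx0, vy0 = v0 * np.cos(theta0), v0 * np.sin(theta0)
x = (vx0 / b) * (1.0 - np.exp(-b * t))
y = ((vy0 + g / b) / b) * (1.0 - np.exp(-b * t)) - g * t / b

# predicted centre and radius of the synchronous curve
y_c = (g / b**2) * (1.0 - np.exp(-b * t)) - g * t / b
R_c = (v0 / b) * (1.0 - np.exp(-b * t))

dist = np.hypot(x - 0.0, y - y_c)
print("radius spread:", dist.max() - dist.min())        # ~ machine precision
print("predicted radius:", R_c, "  measured:", dist.mean())
```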
we present an analysis of the geometrical place formed by the set of maxima of the trajectories of a projectile launched in a medium with linear drag . such a place , the locus of apexes , is written in terms of the lambert function in polar coordinates , confirming the special role played by this function in the problem . in order to characterize the locus , a study of its curvature is presented in two parameterizations , in terms of the launch angle and in the polar one . the angles of maximum curvature are compared with other important angles in the projectile problem . as an addendum , we find that the synchronous curve in this problem is a circle , as in the drag - free case .
the density matrix is a useful operator for quantum mechanical calculations . for a given system ,one may be unsure about what is the state vector .if the possible state vectors and their associated probabilities are , one creates the _ _ proper__ density matrix it is hermitian : .it is trace 1 : , where are an arbitrary complete orthonormal set of vectors .it is positive : for an arbitrary vector .all these properties can easily be verified from eq.([z1 ] ) .one can use the density matrix to conveniently calculate probabilities or mean values .if a measurement is set up to result in one of the eigenstates of an operator , that outcome s probability is and the mean eigenvalue of is . because each individual state vector evolves unitarily under the system hamiltonian * h * ( assumed for simplicity here to be time independent ) , , the density matrix in eq.([z1 ] ) satisfies the evolution equation \implies{\bf\rho}(t)=e^{-i{\bf h}t}{\bf\rho}(0)e^{i{\bf h}t}.\ ] ] the operator acting on is often called a _ _superoperator__ since it describes a linear transformation on an operator : it operates on both sides of , so to speak .the case sometime arises where the system under consideration is a subsystem of a larger system , and is not measured .the pure ( so - called because it is formed from a single state vector ) density matrix for the joint system is where are orthonormal bases for , respectively , and . by taking the trace of with respect to ,one arrives at an _ _ improper__ density matrix for from which predictions can be extracted : one can easily see that this is hermitian , trace 1 and positive .however , while the density matrix of evolves unitarily , the density matrix of the subsystem evolving under the influence of generally does not evolve unitarily .nonetheless , sometimes can be written in terms of for a range of times earlier than . sometimes that range is short compared to the time scale of evolution of so that one may make an approximation whereby depends linearly just on .this is quite useful , and is what shall be considered in this paper . in this case , the evolution equation is highly constrained by the requirements on , to be satisfied at all times : hermiticity , trace 1 and positivity .( the latter proves too general to simply implement , so a stronger requirement is imposed , called complete positivity see section [ secpos ] ) . )the result , for an -dimensional hilbert space , is the lindblad ( or lindblad - gorini - kossakowsky - sudarshan ) evolution equation for the density matrix : \nonumber\\ & & \quad-\frac{1}{2}\sum_{\alpha=1}^{n^{2}-1}[{\bf l}^{\alpha\dagger}{\bf l}^{\alpha}{\bf\rho}(t ) + { \bf\rho}(t){\bf l}^{\alpha\dagger}{\bf l}^{\alpha}-2{\bf l}^{\alpha}{\bf\rho}(t){\bf l}^{\alpha\dagger } ] .\nonumber\\\end{aligned}\ ] ] in eq.([z3 ] ) , the hamiltonian is an arbitrary hermitian operator , but the lindblad operators are completely arbitrary operators . actually ,as shall be shown , there need be no limitation on the number of terms in the sum in eq.([z3 ] ) , but this can always be reduced to a sum of terms .it is not a necessary condition , but if the equation is to be time - translation - invariant , the operators are time - independent .before deriving eq.([z3 ] ) , we give a few examples of the non - unitary evolutions it describes .since unitary evolution is well known , we shall let .it shall be seen that relaxation to some equilibrium ( constant ) density matrix is readily described . 
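a minimal numerical illustration of these basic density matrix properties , assuming a two - state ensemble built from two illustrative state vectors and an illustrative hamiltonian ( none of these values are taken from the text ) .

```python
import numpy as np
from scipy.linalg import expm

# proper density matrix built from an ensemble {(p_i, |psi_i>)}  (illustrative states)
psi1 = np.array([1.0, 0.0], dtype=complex)
psi2 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)
rho = 0.25 * np.outer(psi1, psi1.conj()) + 0.75 * np.outer(psi2, psi2.conj())

def check(rho):
    herm = np.allclose(rho, rho.conj().T)
    tr1 = np.isclose(np.trace(rho).real, 1.0)
    pos = np.all(np.linalg.eigvalsh(rho) >= -1e-12)
    return herm, tr1, pos

print("hermitian, trace 1, positive:", check(rho))

# probability of the measurement outcome |0>: <0| rho |0>
print("P(|0>) =", rho[0, 0].real)

# unitary (von Neumann) evolution rho(t) = exp(-iHt) rho(0) exp(+iHt)
H = np.array([[1.0, 0.3], [0.3, -1.0]], dtype=complex)   # illustrative Hamiltonian
U = expm(-1j * H * 0.7)
rho_t = U @ rho @ U.conj().T
print("properties preserved under unitary evolution:", check(rho_t))
```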
for simplicity , four of the five examples shall be in a hilbert space ( a restriction that is readily lifted ) .consider a state vector written in a basis whose phase factors undergo random walk. given an initial state vector ( ) , suppose at time , it has evolved to with probability the density matrix is }\big[ab^{*}|\phi_{1}\rangle\langle\phi_{2}|+a^{*}b|\phi_{2}\rangle\langle\phi_{1}|\big ] .\nonumber\end{aligned}\ ] ] we see that the off - diagonal elements decay at a fixed rate while the diagonal elements remain constant .it satisfies \big[{\bf \rho}(t)-\sum_{i=1}^{2}{\bf q}_{i}{\bf \rho}(t){\bf q}_{i}\big],\ ] ] where the projection operator . to see that this is a lindblad equation , note that and are two lindblad operators : identify in eq.([z3 ] ) .suppose in time , a state vector has probability of changing to ( probability of being unchanged ) , where is a hermitian operator and .the density matrix at time is therefore so its evolution equation is .\ ] ] this is of the lindblad form , with one lindblad operator . in the basis where is diagonal with elements , we get and .\ ] ] so , again, its diagonal elements remain constant .its off - diagonal elements decay at the fixed rate ] , then has components ] , has components ] , it is easy to verify that this is an orthonormal set of vectors . although each is a vector in a four dimensional space , with four components , can also be regarded as an operator in the two - dimensional hilbert space with four matrix elements .this leads to a neat way of writing the orthogonality relations for these eigenvectors . instead of , we can write where is the hermitian conjugate ( complex conjugate transpose ) of .it is easy to see how this works for the example where is the pauli+1 basis .the expression for the components of written in terms of its eigenvectors and eigenvalues is putting this into eq.([x1 ] ) results in the _ evolution equation _now , lets impose the _ trace constraint _ , i.e. , . in terms of componentsthis says writing , the trace constraint can be written as \rho_{rs}=0\ ] ] or in matrix notation as {\bf \rho}=0.\ ] ] where is the unit matrix. this must hold for arbitrary .we have seen how to handle such an expression . by successively putting in the four density basis matrices ,we obtain the trace constraint the -dimensional case works just like the two - dimensional case .it follows from eq.([x2 ] ) that can be viewed as an hermitian matrix .it has real eigenvalues .its complex eigenvectors satisfy the orthonormality conditions with * a * written in terms of its eigenvalues and eigenvectors , eq.([x1 ] ) becomes the evolution equation and depend upon , but we shall not write that dependence until it is needed .next , imposition of the trace constraint on eq.([x5 ] ) , with , gives {\bf \rho } = 0.\ ] ] using the density matrix basis as in eq.([x3 ] ) et seq ., we obtain the trace constraint : by taking the trace of eq.([x6 ] ) and using eq.([x4a ] ) , we find the interesting relation final constraint is _ positivity_. this says , given an arbitrary n - dimensional vector , that the expectation value of the density matrix is non - negative .this constraint , applied to eq.([x5 ] ) , is where we have defined .positivity of ensures .thus , we see from eq.([x7 ] ) , if all the s are non - negative , then will be positive too .however , while just shown to be _ sufficient _ for to be positive , is not _necessary_. in the next section , we shall give an example where an eigenvalue is negative , yet is positive ! 
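returning to the first ( phase - damping ) example above , a minimal sketch that integrates the corresponding lindblad equation and checks that the diagonal entries stay constant while the off - diagonal entry decays exponentially ; the rate and the initial amplitudes are illustrative choices .

```python
import numpy as np

lam = 2.0                                    # dephasing rate (illustrative)
Q1 = np.diag([1.0, 0.0]).astype(complex)     # projectors onto the two basis states
Q2 = np.diag([0.0, 1.0]).astype(complex)

def drho_dt(rho):
    # d(rho)/dt = -lam * ( rho - Q1 rho Q1 - Q2 rho Q2 ),  i.e. the Lindblad
    # equation with H = 0 and Lindblad operators sqrt(lam)*Q1, sqrt(lam)*Q2
    return -lam * (rho - Q1 @ rho @ Q1 - Q2 @ rho @ Q2)

# pure initial state a|1> + b|2>
a, b = np.sqrt(0.3), np.sqrt(0.7)
rho = np.array([[a * a, a * b], [a * b, b * b]], dtype=complex)

dt, T = 1e-4, 1.0
for _ in range(int(T / dt)):                 # simple explicit Euler stepping
    rho = rho + dt * drho_dt(rho)

print("diagonal (unchanged):", rho[0, 0].real, rho[1, 1].real)
print("off-diagonal:", abs(rho[0, 1]), "  expected:", a * b * np.exp(-lam * T))
```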
therefore , a stronger condition than positivity is necessary to ensure that . this condition , presented after the example , is _complete positivity_. this example uses the pauli+1 eigenvectors and .( note that the trace constraint ( [ x6 ] ) is satisfied , provided , since the square of each of the pauli+1 matrices is . )choose : = \begin{bmatrix}\rho_{22}&\rho_{12}\\\rho_{21}&\rho_{11 } \end{bmatrix}.\ ] ] is just with its diagonal elements exchanged .thus , because is positive , then is positive .this is a particularly simple example of a more general case discussed in appendix [ a ] .it is not positivity but , rather , _complete positivity _ that makes the non - negative eigenvalue condition necessary . here is what it means .add to our system a non - interacting and non - evolving additional system in its own -dimensional hilbert space .the enlarged hilbert space is of dimension .the simplest state vector in the enlarged space is a direct product : is a vector from the original hilbert space , is a vector from the added system .the general state vector in the joint space is the sum of such products with c - number coefficients .form an arbitrary density matrix for the enlarged system .suppose it evolves according to eq.([x5 ] ) , where is replaced by ( i.e. , the evolution has no effect on the vectors of the added system . )complete positivity says that the resulting density matrix must be positive .complete positivity says , given the evolution equation ( [ x5 ] ) , that for an arbitrary dimensional vector and for any initial density matrix in the enlarged hilbert space .we wish to prove that complete positivity implies the eigenvalues are non - negative .what we shall do is judiciously choose a single vector and four pure density matrices so that the expressions are , with a positive constant of proportionality .therefore , for complete positivity to hold , must be non - zero . hereare choices that will do the job .we shall choose the maximally entangled vector ( , but it need not be normalized to 1 ) .we construct the state vectors and use them to make four pure density matrices .( note that because of the orthogonality relation eq.([x4a ] ) ) .then , for one , putting this into eq.([x5 ] ) , the complete positivity condition is ( using the orthogonality relation ( [ x4a ] ) ) . thus , complete positivity implies .we follow the same procedure in the n - dimensional case .however , to be a bit more general , we shall use an arbitrary vector , and an arbitrary pure density matrix : where , are yet to be specified complex constants .the unit trace of in eq.([x8b ] ) requires .then , the complete positivity condition is [{\bf e}^{\alpha\dagger}{\bf d}{\bf c}^{\dagger}].\end{aligned}\ ] ] now , choose , for any particular .this choice can be made in many ways .two are , ( the choice made in the two - dimensional example just discussed ) or , ( note , both choices respect ) . 
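the distinction between positivity and complete positivity can also be checked numerically with the standard choi - matrix criterion ( a map is completely positive iff its choi matrix is positive semidefinite , which is equivalent to the maximally - entangled - vector test used above ) . a minimal sketch , with the diagonal - exchanging map of the two - dimensional example and , for contrast , a genuinely completely positive dephasing map .

```python
import numpy as np

def choi(channel, n=2):
    """Choi matrix  C = sum_{ij} |i><j| (x) channel(|i><j|);  the map is
    completely positive iff C is positive semidefinite."""
    C = np.zeros((n * n, n * n), dtype=complex)
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n), dtype=complex)
            E[i, j] = 1.0
            C += np.kron(E, channel(E))
    return C

# the positive-but-not-completely-positive example from the text:
# exchange the diagonal entries of rho, leave the off-diagonal entries alone
swap_diag = lambda r: np.array([[r[1, 1], r[0, 1]], [r[1, 0], r[0, 0]]])

# a genuinely completely positive map for comparison: full dephasing
dephase = lambda r: np.diag(np.diag(r))

print("swap-diagonal Choi eigenvalues:", np.linalg.eigvalsh(choi(swap_diag)).round(6))
print("dephasing     Choi eigenvalues:", np.linalg.eigvalsh(choi(dephase)).round(6))
```

the first map produces a negative choi eigenvalue even though it sends every positive density matrix to a positive one , while the dephasing map does not ; this is exactly the gap that the complete positivity requirement closes .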
with this choice in eq.([x9 ] ) , and with use of the orthonormality conditions eq.([x4a ] ) , we obtain as the consequence of complete positivity : have now applied all the constraints needed to obtain a valid density matrix at a later time from an earlier density matrix at time .this relation is eq.([x5 ] ) , supplemented by the orthonormality conditions ( [ x4a ] ) , the trace constraint ( [ x6 ] ) and the condition of non - negative eigenvalues ( [ x10 ] ) .it is customary to define , so that eqs.([x5 ] , [ x6 ] ) can be written in terms of alone : ( however , the orthonormality conditions , written in terms of , now depend upon ) .eq.([x11a ] ) is called the kraus representation and are called kraus operators . we have proved the necessity of the kraus representation , but it is also sufficient .that is , for _ any _ satisfying eqs.([x11a],[x11b ] ) , even for more than operators , also with no orthonormality conditions imposed , all the constraints on are satisfied .it is easy to see that hermiticity , trace 1 and positivity are satisfied .complete positivity requires a bit more work , and that is given in appendix [ b ] .this general statement of the kraus representation might seem to imply a larger class than we have derived as necessary , but that is not so .since the kraus representation is hermitian , trace 1 and completely positive , it may be written in the form eq.([x5 ] ) , as we have shown .now that we have satisfied all the constraints on the density matrix , we can let , and obtain the differential equation satisfied by . for the rest of this paper we shall only treat the -dimensional case since the argument is precisely identical for the two - dimensional case , except that .first , lets see what we can say about the eigenvectors and eigenvalues when .then , eq.([x5 ] ) says \rho_{rs}.\end{aligned}\ ] ] as we have done before , successive replacement of by the members of the density matrix basis results in multiply eq.([16 ] ) by and sum over .use of the orthonormality relation ( [ x4a ] ) gives if and , eq.([17 ] ) says that all the eigenvectors are .but only one of a set of orthogonal eigenvectors can be proportional to the identity .therefore , for the rest of the eigenvectors , and .call one eigenvector . from eq.([17 ] ) , we find the associated eigenvalue . for ,the eigenvalues vanish .note that the condition says that these eigenvectors are orthogonal to . and , indeed , in this case , eq.([x5 ] ) becomes the identity when , the eigenvalues and eigenvectors change infinitesimally .accordingly we write , \medspace \lambda ^{\alpha}(dt)=c^{\alpha}dt \medspace(\alpha\neq n^{2}),\nonumber\\ & & { \bf e}^{n^{2}}(dt)=\frac{1}{\sqrt{n}}[{\bf1}+{\bf b}dt ] , \medspace { \bf e}^{\alpha}(dt)= { \bf k}^{\alpha } \medspace(\alpha\neq n^{2}),\nonumber\\\end{aligned}\ ] ] where the are constants .we do not include a term in the expression for since , because , it would contribute a negligible term to eqs.([x5],[x6 ] ) . because the eigenvalues must be positive , andbecause the eigenvalues sum to ( equation following eq.([x6 ] ) ) , we see that ( all ) . 
and are restricted by the orthonormality conditions , which we shall look at later .putting eqs.([19 ] ) into the evolution equation ( [ x5 ] ) gives [{\bf 1}+{\bf b}dt]{\bf\rho}(t)[{\bf 1}+{\bf b^{\dagger}dt}]\nonumber\\ & & + dt\sum_{\alpha=1}^{n^{2}-1}c^{\alpha}{\bf k}^{\alpha}{\bf\rho}(t){\bf k}^{\alpha\dagger } , \medspace\hbox { or in the limit}\medspace dt\rightarrow 0,\nonumber \\ & & \frac{d}{dt}{\bf\rho}(t)=-c^{n^{2}}{\bf\rho}(t)+{\bf b}{\bf\rho}(t)+{\bf\rho}(t){\bf b^{\dagger}}+\sum_{\alpha=1}^{n^{2}-1}c^{\alpha}{\bf k}^{\alpha}{\bf\rho}(t){\bf k}^{\alpha\dagger}.\nonumber \\ \end{aligned}\ ] ] putting eqs.([19 ] ) into the trace constraint ( [ x6 ] ) gives [{\bf 1}+{\bf b^{\dagger}dt][{\bf 1}+{\bf b}}dt]\nonumber\\ & & \qquad\qquad\qquad+dt\sum_{\alpha=1}^{n^{2}-1}c^{\alpha}{\bf k}^{\alpha\dagger}{\bf k}^{\alpha}={\bf 1},\medspace\hbox{or,}\nonumber\\ & & c^{n^{2}}{\bf 1}= { \bf b}+{\bf b}^{\dagger}+\sum_{\alpha=1}^{n^{2}-1}c^{\alpha}{\bf k}^{\alpha\dagger}{\bf k}^{\alpha}.\end{aligned}\ ] ] using ( [ 21 ] ) to replace in ( [ 20 ] ) ( specifically , ] . using to simplify the result, we obtain : the eigenvalues of here are , , so the condition that they lie between 0 and 1 is the fourth density basis matrix is $ ] . using to simplify the result , we obtain : the eigenvalues of here are , , so the condition that they lie between 0 and 1 is so , we have obtained the result that will be positive if eqs.([a3a ] , [ a3b],[a5],[a7 ] ) and are satisfied .the sum of eqs.(a3b , [ a5 ] , [ a7]))minus ( [ a8 ] ) tells us that .eq.([a8 ] ) and the constraint boundaries are three - dimensional hyperplanes in the four dimensional -space .their intersections delineate the allowed areas for the eigenvalues. we shall be content here to set ( in which case eqs.(a3a ) simplifies to ) .then , eq.([a8 ] ) describes a plane in space , and its intersection with the constraint boundary planes can be drawn .this is shown in fig .1 . there are two regions where one of the eigenvalues is negative and the other two are positive : the points in the heavily outlined upper left triangle have , and the points in the heavily outlined lower right triangle have . allowed regions of eigenvalues for a positive density matrix .the two dark - outlined triangular regions are where an eigenvalue is negative .they abut an isosceles triangle , the restricted region of complete positivity , where all the eigenvalues are positive.,scaledwidth=50.0% ]the kraus form eq.([x11a ] ) and the kraus constraint eq.([x11b ] ) , generalized to _ any _ number of _ arbitrary _ operators , are respectively we want to show complete positivity .call any one of the . if we can show complete positivity for for an arbitrary , then eq.([b1a ] ) , which involves a sum of such terms ,will be completely positive . and, it is only necessary to prove complete positivity for , where is any possible basis density matrix ( described in the paragraph following eq.([x3 ] ) ) in the added hilbert space , since the most general density matrix in the direct product hilbert space is the linear sum of such terms .we now calculate for arbitrary .there is no loss of generality if we pick the basis vectors in the added hilbert space any way we like .we shall pick them to be the eigenstates of .now , we note that each has one eigenvalue 1 and the remaining eigenvalues are 0 .call the eigenvector which corresponds to the eigenvalue 1 . 
then , {\bf\rho } \big[\sum_{m ' = 1}^{n}d_{m1}{\bf m}^{\dagger}| \phi_{m}\rangle\big]\geq 0,\end{aligned}\ ] ] therefore, the kraus form with arbitrary s is completely positive .
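as a numerical companion to the lindblad form derived above , a minimal sketch that integrates the equation for a damped two - level system and verifies that the trace and the positivity of the density matrix are preserved along the evolution ; the hamiltonian , the single lindblad operator ( amplitude damping ) and the rates are illustrative choices , not taken from the text .

```python
import numpy as np

def lindblad_rhs(rho, H, Ls):
    """Right-hand side of the Lindblad equation
       d(rho)/dt = -i[H, rho] - 1/2 sum_a (La^+ La rho + rho La^+ La - 2 La rho La^+)."""
    out = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

# damped two-level system (amplitude damping), illustrative parameters
omega, gamma = 1.0, 0.5
H = 0.5 * omega * np.diag([1.0, -1.0]).astype(complex)
L_minus = np.sqrt(gamma) * np.array([[0.0, 0.0], [1.0, 0.0]], dtype=complex)   # |g><e|

rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)    # start in the excited state
dt = 1e-3
for _ in range(int(8.0 / dt)):                             # simple Euler stepping
    rho = rho + dt * lindblad_rhs(rho, H, [L_minus])

print("trace:", np.trace(rho).real)                  # stays equal to 1
print("eigenvalues:", np.linalg.eigvalsh(rho))       # remain non-negative
print("ground-state population:", rho[1, 1].real)    # relaxes towards 1
```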
the lindblad equation is an evolution equation for the density matrix in quantum theory . it is the general linear , markovian form which ensures that the density matrix is hermitian , trace 1 , positive and completely positive . some elementary examples of the lindblad equation are given . the derivation of the lindblad equation presented here is " simple " in that all it uses is the expression of a hermitian matrix in terms of its orthonormal eigenvectors and real eigenvalues . thus , it is appropriate for students who have learned the algebra of quantum theory . where helpful , arguments are first given in a two - dimensional hilbert space .
we analyse the problem of controllability for linear finite - dimensional systems submitted to parametrised perturbations , depending on unknown parameters in a deterministic manner .in previous works we have analysed the property of averaged control looking for a control , independent of the values of these parameters , designed to perform well , in an averaged sense ( , ) .here we analyse the complementary issue of determining the most relevant values of the unknown parameters so to provide the best possible approximation of the set of parameter - depending controls .our analysis is based on previous work on greedy and weak greedy algorithms for parameter - depending pdes and abstract equations in banach spaces ( , ) , which we adapt to the present context .although the greedy control is applicable to more general control problems and systems , here we concentrate on controllability issues and , to better illustrate the main ideas of the new approach , we focus on linear finite - dimensional systems of parameter - dependent odes .infinite - dimensional systems , as a first attempt to later consider pde models , are discussed separately in section [ infinite ] , as well as in the conclusion section .consider the finite dimensional linear control system in ( [ eq1f - d ] ) the ( column ) vector valued function is the state of the system , is a governing its free dynamics and is a -component control vector in , , entering and acting on the system through the control operator , a parameter - dependent matrix . in the sequel , to simplify the notation , will be simply denoted by .the matrices and are assumed to be lipschitz continuous with respect to the parameter , , being a compact set . however , some of our analytical results ( section [ infinite ] ) will additionally require analytic dependence conditions on . here , to simplify the presentation ,we have assumed the initial datum to be controlled , to be independent of the parameter . despite of this ,the matrices and being -dependent , both the control and the solution will depend on .similar arguments allow to handle the case when also depends on the parameter , which will be discussed separately .we address the controllability of this system whose initial datum is given , known and fully determined .we assume that the system under consideration is controllable for all values of .this can be ensured to hold , for instance , assuming that the controllability condition is satisfied for some specific realisation and that the variations of and with respect to are small enough . in these circumstances , for each value of there is a control of minimal ^m ] , whose regularity is determined by that of the matrices entering in the system , and .here we are interested on the problem of determining the optimal selection of a finite number of realisations of the parameter so that all controls , for all possible values of , are optimally approximated .more precisely , the problem can be formulated as follows .* problem 1 * _ given a control time and arbitrary initial data and final target , we consider the set of controls of minimal ^m ] ._ given we aim at determining a family of parameters in , whose cardinal depends on , so that the corresponding controls , denoted by , are such that for every there exists steering the system in time within the distance from the target , i. e. such that here and in the sequel , in order to simplify the notation , we denote by the control , and similarly we use the simplified notation . 
note that , in practice , the controllability condition ( [ eq2f - d ] ) is relaxed to the approximate one ( [ eq2f - dap ] ) .this is so since , in practical applications , when performing numerical approximations , one is interested in achieving the targets within a given error .this fact is also intrinsic to the methods we employ and develop in this paper , and that can only yield optimal procedures to compute approximations of the exact control , which turn out to be approximate controls in the sense of ( [ eq2f - dap ] ) .this problem is motivated by the practical issue of avoiding the construction of a control function for each new parameter value which , for large systems , although theoretically feasible by the uniform controllability assumption , would be computationally expensive . by the contrary, the methods we develop try to exploit the advantages that a suitable choice of the most representative values of provides when computing rapidly the approximation of the control for any other value of , ensuring that the system is steered to the target within the given error ( [ eq2f - dap ] ) .of course , the compactness of the parameter set and the lipschitz - dependence assumption with respect to make the goal to be feasible .it would suffice , for instance , to apply a _ naive _ approach , by taking a fine enough uniform mesh on to achieve the goal . however , our aim is to minimise the number of spanning controls and to derive the most efficient approximation .the _ naive _ approach is not suitable in this respect . to achieve this goal we adapt to the present frame of finite - dimensional control , the theory developed in recent years based on greedy and weak - greedy algorithms for parameter dependent pdes or abstract equations in banach spaces , which optimise the dimension of the approximating space , as well as the number of steps required for its construction .the rest of this paper is organised as follows . in section [ control - prel ]we summarise the needed controllability results for finite - dimensional systems and reformulate problem 1 in terms of the corresponding gramian operator .section 3 is devoted to the review of ( weak ) greedy algorithms , while their application to the control problem under consideration and its solution is provided in the subsequent section .the computational cost of the greedy control approach is analysed in section 5 .section 6 contains a generalisation of the approach to infinite dimensional problems followed by a convergence analysis of the greedy approximation errors with respect to the dimension of the approximating space .section 7 contains numerical examples and experiments for finite - difference discretisations of 1-d wave and heat problems .the paper is closed pointing towards future development lines of the greedy control approach .in order to develop the analysis in this paper it is necessary to derive a convenient characterisation of the control of minimal norm , as a function of the parameter . this can be done in a straightforward manner in terms of the gramian operator . 
in this sectionwe briefly summarise the most basic material on finite - dimensional systems that will be used along this article ( we refer to for more details ) .consider the finite - dimensional system of dimension : where is the -dimensional state and is the -dimensional control , with .this corresponds to a specific realisation of the system above for a given choice of the parameter .we omit however the presence of from the notation since we are now considering a generic linear finite - dimensional system .here is an matrix with constant real coefficients and is an matrix .the matrix determines the dynamics of the system and the matrix models the way controls act on it . in practice , it is desirable to control the components of the system with a low number of controls , the best possible case being the one of scalar controls : . recall that system ( [ fd ] ) is said to be _ controllable _ when every initial datum can be driven to any final datum in in time .this controllability property can be characterised by a necessary and sufficient condition , which is of purely algebraic nature , the so called _ kalman condition _ : system ( [ fd ] ) is controllable if and only if =n.\ ] ] when this rank condition is fulfilled the system is controllable for all .there is a direct proof of this result which uses the representation of solutions of ( [ fd ] ) by means of the variations of constants formula .but for our purpose it is more convenient to use the point of view based on the dual problem of observability of the adjoint system that we discuss now .consider the _ adjoint system _ system ( [ fd ] ) is controllable in time if and only if the adjoint system ( [ afd ] ) is _ observable _ in time , i. e. if there exists a constant such that , for all solution of ( [ afd ] ) , latexmath:[\[\label{fdoi } hold in all time if and only if the kalman rank condition ( [ rc ] ) is satisfied .furthermore , the control of minimal ^m]-norm is of the form where , being the minimiser of .let be the quadratic form , known as ( controllability ) gramian , associated to the pair , i.e. with being solutions to with the data and , respectively . then , under the rank condition , because of the observability inequality , this operator is coercive and symmetric and therefore invertible .its corresponding matrix , which we denote the same , is given by the relation the minimiser can be expressed as the solution to the linear system hereby , the left hand side , up to the free dynamics component , represents the solution of the control system ( [ fd ] ) with the control given by .as the solution is steered to the target , the last relation follows . 
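a minimal numerical sketch of this gramian characterisation of the minimal - norm control ; the system matrices , the target and the quadrature / time - stepping parameters below are illustrative assumptions , not taken from the paper .

```python
import numpy as np
from scipy.linalg import expm

def gramian(A, B, T, n=400):
    """Controllability Gramian  int_0^T e^{sA} B B^T e^{sA^T} ds,
    approximated by a midpoint quadrature rule (a minimal sketch)."""
    ds = T / n
    s = (np.arange(n) + 0.5) * ds
    return sum(expm(si * A) @ B @ B.T @ expm(si * A).T for si in s) * ds

def min_norm_control(A, B, x0, x1, T):
    """phi0 solves  Gramian * phi0 = x1 - e^{TA} x0 ; the minimal L^2 control
    is  u(t) = B^T e^{(T-t)A^T} phi0  (the adjoint-state representation above)."""
    phi0 = np.linalg.solve(gramian(A, B, T), x1 - expm(T * A) @ x0)
    return lambda t: B.T @ expm((T - t) * A).T @ phi0

# small illustrative example: a controllable 2x2 system with a scalar control
A = np.array([[0.0, 1.0], [-2.0, -0.1]])
B = np.array([[0.0], [1.0]])
x0, x1, T = np.array([1.0, 0.0]), np.array([0.0, 0.0]), 3.0
u = min_norm_control(A, B, x0, x1, T)

# integrate x' = Ax + Bu with the computed control; x(T) should approximately hit x1
x, dt = x0.copy(), 1e-3
for k in range(int(T / dt)):
    x = x + dt * (A @ x + (B @ u(k * dt)).ravel())
print("x(T) =", x, "  (target:", x1, ")")
```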
in our contextthe adjoint system depends also on the parameter : we assume that the system under consideration is controllable for all values of .this can be ensured to hold assuming the following uniform controllability condition where are positive constants , while is the gramian of the system determined by .as we mentioned above , this assumption is fulfilled , in particular , as soon as the system is controllable for some specific value of the parameter , and depend continuously on , and is close enough to .but our discussion and presentation makes sense in the more general setting where ( [ g - bound ] ) is fulfilled .as we restrict the analysis to the set of minimal ^m]-functions in the first formulation ) , uniquely determined as solutions to linear systems .this enables to adapt the ( weak ) greedy algorithms and reduced bases methods for parameter dependent problems , that we present in the following section .the approximate controls we obtain in this manner do not really belong to the space spanned by controls associated to selected parameter values , since the selection is done at the level of , the control being simply the natural one corresponding to the choice of .in this section we present a brief introduction and main results of the linear approximation theory of parametric problems based on the ( weak ) greedy algorithms , that we shall use in our application to controllability problems .a more exhaustive overview can be found in some recent papers , e.g. .the goal is to approximate a compact set in a banach space by a sequence of finite dimensional subspaces of dimension . by increasing one improves the accuracy of the approximation . determining _ offline _ an approximation subspace within a given error normally implies a high computational effort .however , this calculation is performed only once , resulting in a good subspace from which one can easily and computationally cheaply construct _ online _ approximations to every vector from .vectors spanning the space are called _ snapshots _ of .the goal of ( weak ) greedy algorithms is to construct a family of finite dimensional spaces that approximate the set in the best possible manner .the algorithm is structured as follows . *weak greedy algorithm * -3 mm fix a constant . in the first stepchoose such that at the general step , having found , denote and choose the next element such that the algorithm stops when becomes less than the given tolerance .+ the algorithm produces a finite dimensional space that approximates the set within the given tolerance .the choice of a new element in each step is not unique , neither is the sequence of approximation rates .but every such a chosen sequence decays at the same rate , which under certain assumptions given below , is close to the optimal one .thus the algorithm optimises the number of steps required in order to satisfy the given tolerance , as well as the dimension of the final space .the pure greedy algorithm corresponds to the case . as we shall see below, the relaxation of the pure greedy method ( ) to a weak greedy one ( ) will not significantly reduce the efficiency of the algorithm , making it , by the contrary , much easier for implementation . when performing the ( weak ) greedy algorithm one has to chose the next element of the approximation space by exploring the distance for all possible values .such approach is faced with two crucial obstacles : * the set in general consists of infinitely many vectors . 
* in practical implementationsthe set is often unknown ( e.g. it represents the family of solutions to parameter dependent problems ) .the first problem is bypassed by performing a search over some finite discrete subset of .here we use the fact that , being a compact set , can be covered by a finite number of balls of an arbitrary small radius . as to deal with the second one , instead of considering the exact distance appearing in , one uses some _ surrogate _ , easier to compute . in order to estimate the efficiency of the weak greedy algorithm we compare its approximation rates with the best possible ones .the best choice of a approximating space is the one producing the smallest approximation error .this smallest error for a compact set is called the _ kolmogorov -width of _ , and is defined as it measures how well can be approximated by a subspace in of a fixed dimension . in the sequelwe want to compare with the kolmogorov width , which represents the best possible approximation of by a dimensional subspace of the referent banach space .a precise estimate in that direction was provided by in the hilbert space setting , and subsequently improved and extended to the case of a general banach space in .[ , corollary 3.3 ] [ greedy_rates ] for the weak greedy algorithm with constant in a hilbert space we have the following : if the compact set is such that , for some and then where .this theorem implies that the weak greedy algorithms preserve the polynomial decay rates of the approximation errors , and the result will be used in the convergence analysis of our method in section [ infinite ] .a similar estimate also holds for exponential decays ( cf .in addition , it is also remarkable that the constant effects the efficiency only up to a multiplicative constant , while leaving approximation rates unchanged .in this section we solve problem 2 implementing the ( weak ) greedy algorithm in the manifold consisting of minimisers determined by the relation .the * goal * is to choose parameters such that approximates the whole manifold within the given error . to this effect ,as already stated in the introduction , we assume that the matrices and are lipschitz continuous with respect to the parameter . in turn , this implies that the mapping possesses the same regularity as well , with the lipschitz constant denoted by .the greedy selection of each new snapshot relies on the relation , which in this setting maximises the distance of elements of from the space spanned by already chosen snapshots .theoretically , this process requires that we solve for each value of . andthis is exactly what we want to avoid .actually , here we face the obstacle _ ii ) _ from previous section , since one has to apply the greedy algorithm within a set whose elements are not given explicitly .the problem is managed by identifying an appropriate surrogate for the unknown distances . to this effect note that where , while denotes equivalence of terms resulting from the uniform controllability assumption .in such a way we replace the unknown by an easy computed term , combining the target and the solution of the free dynamics at time . 
as for the other term , note that represents the value at time of the solution to the system where is the _ control _ obtained by solving the corresponding adjoint problem ( for the parameter ) with initial datum .thus instead of dealing with we use the * surrogate * , obtained by projecting an easy obtainable vector to a linear space whose basis is obtained by solving the adjoint system plus the state one times .the surrogate measures the control performance of the snapshots when applied to the system associated to parameter ( figure 1 ) .namely , by relation the minimiser uniquely determines the control steering the system from the initial datum to the target . by replacing with in the systemis driven to the state , whose distance from the target represents the surrogate value . ]if for every we can find a suitable linear combination of the above states close enough to the target , we deem that we have found a good approximation of the manifold .otherwise , we select as the next snapshot a value for which the already selected snapshots provide the worst performance . the precise description of the offline part of the algorithm is given below .-3 mm fix the approximation error .* step 1 ( discretisation ) * + choose a finite subset such that where is a constant to be determined later ( cf . , ) , in dependence on the problem under consideration and the tolerance .* step 2 ( choosing ) * + if stop the algorithm .else determine the first distinguished parameter value as and choose as the minimiser of corresponding to .* step 3 ( choosing ) * + having chosen calculate for each .if stop the algorithm .else and repeat step 3 . + the algorithm results in the approximating space , where is a number of chosen snapshots ( specially for ) .the value of the parameter is chosen by testing the performance of the null control as an initial guess for all .the selected value is the one for which this performance provides the worst approximation .the algorithm is stopped at this initial level only if the null control ensures the uniform control of all system realisations within the given tolerance .note that the set is linearly independent , as for vectors that linearly depend on already chosen ones , the corresponding surrogate distance ( 23 ) , which is the criterion for the choice of new snapshots , vanishes .thus the algorithm stops after , at most , iterations , and it fulfils the requirements of the weak greedy theory . more precisely the following result holds .[ main_result ] let the be lipschitz and such that the uniform controllability condition holds , and let be the lipschitz constant of the mapping determined by . fora given take the discretisation constant such that then the above algorithm provides a week greedy approximation of the manifold with the constant and the approximation error less than .the obtained approximation error of the family of minimisers is a consequence of the required approximation of the set ( as implies ) .the case , occurring when inequality holds , is a trivial one resulting in a null approximating space that we exclude from the proof .* proof : * in order to prove the theorem we have to show : * that the selected minimisers associated to parameter values determined by and satisfy for the constant given by ; * that the approximation error obtained at the end of the algorithm is less than . *a ) * to this effect note that for the first snapshot the following estimates hold : where we have used relation ( for ) and the criterion . 
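a minimal sketch of the offline greedy selection based on this surrogate ; the parameter - dependent family , the tolerance and the quadrature used for the gramians are illustrative assumptions , and a least - squares solve plays the role of the projection onto the span of the already selected snapshot states .

```python
import numpy as np
from scipy.linalg import expm

def gramian(A, B, T, n=400):
    ds = T / n
    s = (np.arange(n) + 0.5) * ds
    return sum(expm(si * A) @ B @ B.T @ expm(si * A).T for si in s) * ds

def greedy_offline(A_of, B_of, nus, x0, x1, T, tol):
    """Offline weak-greedy selection sketch: the surrogate for parameter nu is
    the distance of  x1 - e^{T A(nu)} x0  from span{ Gramian(nu) phi_j },
    i.e. the miss of the target when the snapshot controls are reused for nu."""
    data = {nu: (gramian(A_of(nu), B_of(nu), T),
                 x1 - expm(T * A_of(nu)) @ x0) for nu in nus}
    snapshots = []                                     # chosen minimisers phi_j
    while True:
        best_nu, best_res = None, -1.0
        for nu, (G, rhs) in data.items():
            if not snapshots:
                res = np.linalg.norm(rhs)              # performance of the null control
            else:
                cols = np.column_stack([G @ p for p in snapshots])
                c, *_ = np.linalg.lstsq(cols, rhs, rcond=None)
                res = np.linalg.norm(rhs - cols @ c)   # surrogate distance
            if res > best_res:
                best_nu, best_res = nu, res
        if best_res <= tol:
            return snapshots
        G, rhs = data[best_nu]
        snapshots.append(np.linalg.solve(G, rhs))      # new snapshot phi(best_nu)

# illustrative parameter-dependent family: oscillator with nu-dependent stiffness
A_of = lambda nu: np.array([[0.0, 1.0], [-nu, -0.1]])
B_of = lambda nu: np.array([[0.0], [1.0 + 0.1 * nu]])
nus = np.linspace(1.0, 4.0, 31)
snaps = greedy_offline(A_of, B_of, nus, np.array([1.0, 0.0]), np.zeros(2), T=2.0, tol=1e-4)
print("number of selected snapshots:", len(snaps))
```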
in order to obtain an estimate including the whole set of parameters , we employ lipschitz regularity of the mapping .thus for an arbitrary , by taking such that , by means of and it follows having excluded the case , we have implying latexmath:[\[\label{gamma_1 } given by . for a general -th iteration , and an arbitrary , by taking and using we get where the last two inequalities follow from the fact that the stopping criteria is not satisfied until and the definition of the next snapshot . combining the last relation with , estimate follows . *b ) * finally , having achieved inequality after iterations , for an arbitrary , taking as above we get thus obtaining the required approximation error of the set . besides a choice of the discretisation constant determined by , other choices are possible as well .actually , taken any , define .the implementation of the algorithm in that case would lead to the greedy constant and the approximation error of the set equal to .note , however , that no choice of can reduce the order of the error already obtained by .+ having constructed an approximating space of dimension , we would like to exploit it for construction of an approximate control associated to an arbitrary given value .such a control is given by the relation where is appropriately chosen approximation of from .it steers the system to the state . comparing formula with the one fulfilled by the exact control , we note that the only difference lies in the replacement of the unknown minimiser by its approximation .thus is a suitable linear combination of vectors . as our goal is to steer the system to as close as possible ,the best performance is obtained if is chosen as projection of to the space .for this reason we define approximation of as where the coefficients are chosen such that represents projection of the vector to the space .this choice of corresponds to the minimisation of the functional , determined by , over the space . in order to check performance estimate of the approximate control ,note that for any value we have where we have used that is the orthogonal projection of to the space ( that also contains ) , while is taken from the set .taking into account the stopping criteria , the last term in is less than , while the preceding one equals where denotes the lipschitz constant of the mapping .thus in order to obtain a performance estimate less than , one possibly has to refine the discretisation used in theorem [ main_result ] and to take where , let it be repeated , denotes the lipschitz constant of the mapping .this finalises solving of problem 2 and leads us to the following result .[ thm - online ] let be a lipschitz mapping such that the uniform controllability condition holds .given , let be an approximating space constructed by the greedy control algorithm with the discretisation constant given by . then for any the approximate control given by and steers the control system to the state within the distance from the target .as already commented in section [ control - prel ] , the approximate control does not belong to the space spanned by controls associated to selected parameter values , as the control operator and system matrix entering correspond to the given value , while .the greedy control also applies if we additionally assume that initial datum , as well as target state depend on the parameter in a lipschitz manner . in that case ,the greedy search is performed in the same manner as above , i.e. 
by exploring elements of the set in calculation of the surrogate distance , and the obtained results remain valid with the same constants. all the above results on greedy control remain valid if instead of lipschitz continuity we merely assume continuous dependence with respect to the parameter .namely , as is a compact set , the assumption directly implies uniform continuity , which suffices for the proof of theorems [ main_result ] and [ thm - online ] in which we need and to be close whenever and are .the only difference in that case is that the discretisation constant can not be given explicitly in terms of , unlike expressions and we finish the section describing the algorithm summarising the above procedure to construct the approximate control .hereby we suppose that , for fixed , the approximating space has been constructed by the weak greedy control algorithm with the choices of the constants and as in the statement of the theorem .-3 mm a parameter value is given .* step 1 * calculate .* step 2 * calculate , where * step 3 * project to .denote the projection by .* step 4 * solve the system for .* step 5 * the approximate control is given by where are already determined within step 2 .-3 mm for any the obtained approximate control steers the system within an distance from the target .note that for a parameter value that belongs to the discretisation set used in the construction of an approximating space steps 13 of the last algorithm can be skipped as the corresponding terms have already been calculated within the offline part of the algorithm .the offline part of the greedy control algorithm consists of two main ingredients .first , the search for distinguished parameter values by examining the surrogate value over the set , and , second , the calculation of the corresponding snapshots .this subsection is devoted to estimate its computational cost .* choosing * + in order to identify the first parameter value , one has to maximise the expression over the set , which represents the distance of the target from the free dynamics state . to this effect one has to solve the original system with zero control times .denoting the cost of solving the system by , the cost of choosing thus equals where the second term corresponds to calculation of the distance between two vectors in .in general , the cost differs depending on the type of matrices under consideration and the method chosen for solving the corresponding control system .for example , an implicit one step method consists on solving linear systems , where denotes the time discretisation step .however , as we consider time independent dynamics , all these systems have the same matrix , thus lu factorisation is required just ones .consequently , the cost can be estimated as where the first part corresponds to the factorisation , while the latter one is the cost of building and solving a system of a type . * calculating * + in order to determine the first snapshot one has to solve the system for the chosen parameter value . to this effectwe construct ( the unknown ) gramian matrix by calculating for the vectors of the canonical basis . according to the results presented in section [ control - prel ] , for any corresponding vector can be determined as the state of the control system at time with the control , with being the solution of the adjoint problem starting from .in such a way , the cost of composing the system for equals ( corresponding to control + adjoint problems starting from different data ) . 
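the two offline ingredients just listed , the surrogate - based search and the computation of snapshots through the gramian , together with the online steps 1 - 5 above , can be condensed into the following numerical sketch . it is only a schematic illustration under the standard gramian representation of the control : the function and variable names are ours ( not the authors ' ) , the adjoint+control loop is abbreviated by a matrix - exponential quadrature , and the incremental orthonormalisation and cost bookkeeping discussed in the rest of this subsection are deliberately omitted .

```python
import numpy as np
from scipy.linalg import expm

def gramian(A, B, T, nt=200):
    """Controllability Gramian int_0^T e^{(T-s)A} B B^T e^{(T-s)A^T} ds,
    by a crude trapezoidal rule (stands in for the adjoint+state loop)."""
    ts = np.linspace(0.0, T, nt)
    fs = [expm((T - s) * A) @ B @ B.T @ expm((T - s) * A.T) for s in ts]
    dt = ts[1] - ts[0]
    return sum(0.5 * (fs[i] + fs[i + 1]) for i in range(nt - 1)) * dt

def snapshot(A, B, x0, x1, T):
    """Minimiser phi: solve Gramian * phi = x1 - e^{TA} x0 (least squares for safety)."""
    deficit = x1 - expm(T * A) @ x0
    phi, *_ = np.linalg.lstsq(gramian(A, B, T), deficit, rcond=None)
    return phi

def surrogate(A, B, x0, x1, T, snaps):
    """Distance of the target from the states reachable with controls built on the
    already selected snapshots (performance of the null control if none is selected)."""
    deficit = x1 - expm(T * A) @ x0
    if not snaps:
        return np.linalg.norm(deficit)
    images = np.column_stack([gramian(A, B, T) @ p for p in snaps])
    coef, *_ = np.linalg.lstsq(images, deficit, rcond=None)
    return np.linalg.norm(deficit - images @ coef)

def greedy_offline(nus, system, x0, x1, T, eps):
    """system(nu) -> (A, B).  Returns the selected parameter values and snapshots."""
    selected, snaps = [], []
    while len(selected) < len(nus):
        cands = [nu for nu in nus if nu not in selected]
        d = [surrogate(*system(nu), x0, x1, T, snaps) for nu in cands]
        k = int(np.argmax(d))
        if d[k] <= eps:                 # uniform control within the tolerance: stop
            break
        selected.append(cands[k])
        snaps.append(snapshot(*system(cands[k]), x0, x1, T))
    return selected, snaps

def online_control(A, B, x0, x1, T, snaps):
    """Steps 1-5: project the deficit onto the span of the snapshot images under the
    Gramian of the *given* parameter value and assemble the approximate control."""
    deficit = x1 - expm(T * A) @ x0
    G = gramian(A, B, T)
    images = np.column_stack([G @ p for p in snaps])
    alpha, *_ = np.linalg.lstsq(images, deficit, rcond=None)
    phi_star = np.column_stack(snaps) @ alpha
    return lambda t: B.T @ expm((T - t) * A.T) @ phi_star
```

in an efficient implementation the explicit loops above are replaced by the incremental updates whose cost is accounted for in what follows .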
as solving the system requires a number of operations of order , the cost of this part of the algorithm turns out as * choosing * + suppose we have determined the first snapshots and have constructed the approximating space .the next parameter value is chosen by maximising the distance of the vector , calculated already in the first iteration , from the space over the set ( the already chosen values can obviously be excluded from the search ) .the basis of the above space has been determined gradually throughout the previous iterations up to the last vector , whose calculation is performed at this level .as explained above , it requires operations , that have to be performed times .in addition , this basis is orthonormalised , thus enabling the efficient calculation of the distance in .this process is performed gradually throughout the algorithm as well , and in each iteration we just orthonormalise the last vector with respect to the the rest of the , already orthonormalised set .the corresponding cost for a single value is of order .similarly , the ( orthogonal ) projection of to takes into account its projection to used in the previous iteration , and adds just a projection to the last introduced vector . as its correspondingcost is of order , the total cost of this part of the algorithm equals where , as in , the last term corresponds to calculating the distance between two vectors in . * calculating * + in order to determine the next snapshot , we have to construct corresponding gramian matrix .as explained in the part related to the calculation of , this can be done by applying the gramian to some basis of . as in the previous part of the algorithm we have already calculated ,it is enough to calculate , for vectors of canonical basis complementing the set to the basis of . thus building the matrix of the systemrequires operations .in addition , the complementation of the set takes advantage of the basis obtained as complementation of the set in the previous iteration . adding a new snapshot to that basis , one of vectors has to be removed such that the new set results in a basis again .identifying this vector requires solving a single system . finally , taking into account the cost of solving the system, the cost of calculating the next snapshot turns out to be * total cost * + summing the above costs for the total cost of the algorithm results in as the cost of solving the control system is , the most expensive part of the greedy control algorithm corresponds to the terms containing this cost .it is interesting to notice that , as the number of chosen snapshots approaches either the number of eligible parameter values , or the system dimension , this part converges to which corresponds , up to lower order terms , to the cost of calculating for all values .this demonstrates that the application of the greedy control algorithm is always cheaper than a naive approach that consists of calculating controls for all values of the parameter from an uniform mesh on , taken fine enough so to achieve the approximative control .let us note that in the above analysis we have assumed that in the each iteration we calculate the surrogate distance for all values of the set , except those already chosen by the greedy algorithm . and that we repeat the procedure until the residual is less than for all parameter values of .however , very likely , for some , or even many parameter values , the corresponding residual will be less than the given tolerance already before the last iteration . 
andonce it occurs , these values do not have to be explored any more , neither we have to calculate the corresponding surrogates for subsequently chosen snapshots . therefore , the obtained cost estimates are rather conservative , and in practice we expect the real cost to be lower than the above one .further significant cost reductions are obtained under assumption of a system with parameter independent matrix and the control operator ( with dependence remaining only in initial datum and/or in the target ) . in that caseit turns that the corresponding gramian matrix is also parameter independent .for this reason , maximising the distance appearing in requires solving of control + adjoint problem just once in each iteration . as a result ,the most expensive term of the total cost is replaced by quite a moderate one * step 1 * + calculating corresponds to solving the control system once , whose computational cost equals . * step 2 * + calculating requires solving the loop of the adjoint+control system times , resulting in the cost of . *step 3 * + the most expensive part of this step consist in the gram schmidt orthonormalisation procedure of the set , whose cost equals . * step 4 * + solving the system with qr decomposition requires operations .* step 5 * + the cost of this step is negligible compared to the previous ones .* total cost * + thus we obtain that the total cost of finding an approximative control for a given parameter value equals as approaches the system dimension , its most expensive part converges , up to lower order terms , to the cost of calculating an exact control .consequently , the cost reduction obtained by choosing the approximative control obtained by the greedy control algorithm instead of the exact one depends linearly on the ratio between the number of used snapshots and the system dimension .the theory and the ( weak ) greedy control algorithm developed in the preceding section for finite dimensional linear control systems extend to odes in infinite dimensional spaces .they can be written exactly as in the form ( [ eq1f - d ] ) except for the fact that solutions for each value of the parameter and each time live in an infinite.dimensional hilbert space .the key assumption that distinguishes these infinite - dimensional odes from partial differential equations ( pde ) is that the operators entering in the system are assumed to be bounded .the controllability of such systems has been extensively elaborated during last few decades , expressed in terms of semigroups generated by a bounded linear operator , see , for instance, .in fact , the existing theory of controllability for linear partial differential equations ( pde ) can be applied in that context too .this allows to characterise controllability in terms of the observability of the corresponding adjoint systems . 
in this way, the uniform controllability conditions of parameter - dependent control problems can be recast in terms of the uniform observability of the corresponding adjoint systems .however , in here , we limit our analysis to the case of infinite - dimensional odes for which the evolution is generated by bounded linear operators , contrarily to the case of pdes in which the generator is systematically an unbounded operator .the greedy theory developed here is easier to implement in the context of infinite - dimensional odes since , in particular , the analytic dependence property of controls with respect to the parameters entering in the system can be more easily established .in fact , in the context of infinite - dimensional odes , most of the results in section 2 on finite - dimensional systems apply as well . in particular, the characterisation of the controls as in ( [ control_nu ] ) , in terms of minimisers of quadratic functionals of the form as in ( [ funct_j ] ) holds in this case too , together with ( [ gramian ] ) for the gramian operators .obviously , the kalman rank condition can not be extended to the infinite - dimensional case .but the open character of the property of controllability with respect to parametric perturbations remains true in the infinite - dimensional case too , i.e. if the system is controllable for a given value of , under the assumption that depends on in a continuous manner , it is also controllable for neighbouring values of . in this sectionwe shall analyse convergence rates of the constructed greedy control algorithm as the dimension of the approximating space tends to infinity , an issue that only makes sense for infinite - dimensional systems .in particular , we are interested in the dimension of the approximating space required to provide an uniform control within the given tolerance . this problem can be stated in terms of the estimate : what is the number of the algorithm iterations we have to repeat until the estimate is fulfilled . in case of systems of a finite dimension , the algorithm constructs an dimensional approximating space of , and it certainly stops after , at most , iterations . for infinite - dimensional systems, however there is no such an obvious stopping criteria .in general we analyse the performance of the algorithm by comparing the greedy approximation errors with the kolmogorov widths , which represent the best possible approximation of by a dimensional subspace in . to this effectone could try to employ theorem [ greedy_rates ] which connects sequences and . however , we have to apply the theorem to the set ( instead of ) . but while kolmogorov width of a set of admissible parameters is usually easy to estimate , that is not the case for a corresponding set of solutions ( to a parametric dependent equation ) , or minimisers ( as studied in this manuscript ) .fortunately , a result in that direction has been provided recently for holomorphic mappings ( ) under the assumption of a polynomial decay of kolmogorov widths .[ greedy_rates2 ] for a pair of complex banach spaces and assume that is a holomorphic map from an open set into with uniform bound . 
if is a compact subset of then for any and for any and the constant depending on , and the mapping .the proof of the theorem provides an explicit estimate of the constant in dependence on , and the mapping .however , due to its rather complicated form we do not expose it here .going back to our problem the last theorem can be applied under the assumption that the mapping is analytic ( its image being embedded in the space of linear and bounded operators in ) , which implies , in view of the representation formula ( [ control_nu ] ) , that the mapping to is analytic as well .note that this issue is much more delicate in the pde setting , with the generator of the semigroup being an unbounded operator .in fact the property fails in the context of hyperbolic problems although it is essentially true for elliptic and parabolic equations .as we consider a finite number of parameters , lying in the set , the polynomial decay of can be achieved at any rate just by adjusting the corresponding constant in ( note that for ) . of course , the kolmogorov widths of the set do not have to vanish for large , but the last theorem ensures their polynomial decay at any rate . combining theorems [ greedy_rates ] , [ main_result ] and [ greedy_rates2 ]we thus obtain the following result .let the mapping , corresponding to the parameter - dependent infinite - dimensional odes , be analytic and such that the uniform controllability condition holds .then the greedy control algorithm ensures a polynomial decay of arbitrary order of the approximation rates .more precisely , for all there exists such that for any the minimiser determined by the relation can be approximated by linear combinations of the weak - greedy ones as follows : where can be determined by exploring constants appearing in , and .the last result provides us with a stopping criteria of the greedy control algorithm . for a given tolerance , the algorithm stops after choosing snapshots .it results with a dimensional space approximating the family of minimisers within the error , and providing , by means of formul and , a uniform control of our system within an tolerance .a similar result holds if is infinite - dimensional , provided its kolmogorov width decays polynomially .a typical example of such a set is represented by the so called affine model in which the parameter dependence is given by and/or similarly for . hereit is assumed that belongs to the unit cube in , i.e. that for any , while the sequence of numbers belongs to for some .however , note that in this case the kolmogorov width of the set does not decay polynomially ( actually these are constants equal to 1 ) , but the polynomial decay is obtained for the set . indeed , rearranging the indices so that the sequence is decreasing , it follows where we have used that for a decreasing sequence we must have .thus one can consider the mapping and apply theorem [ greedy_rates2 ] to .furthermore , in the case of the affine model this theorem can be improved , implying the kolmogorov -widths of sets and decay at the same rate ( e.g. ) .consequently , one obtains that the greedy approximation rates decay at the same rate as well .finally , let us mention that the cost of the greedy control is significantly reduced if one considers an affine model in which the control operator has finite representation of the form , while the system matrix is taken as parameter independent . 
in that case the corresponding gramian is of the form : where , while . here we consider a finite representation , but a more general one can be reduced to this one by truncation of the series . consequently , computing for a chosen snapshot does not require solving the loop of the adjoint+control system for each value . instead , it is enough to perform the loop times in order to obtain vectors , and express as their linear combination by means of . such an approach can result in a lower cost of the greedy control algorithm compared to the one obtained in the previous section , with a precise reduction rate depending on the relation between the series length and the number of eligible parameters . we consider the control system whose governing matrix has the block form where is the identity matrix of dimension , while the control operator is assumed to be of the parameter - independent form . the system corresponds to the semi - discretisation of the wave equation problem with the control on the right boundary : the parameter represents the ( square of ) velocity of propagation , while corresponds to the number of inner points in the discretisation of the space domain . for this example we specify the following ingredients : the final target is set at and we assume the system satisfies the kalman rank condition and accordingly the control exists for any value of . although the convergence of this direct approximation method , in which one computes the control for a finite - difference discretisation , hoping that it will lead to a good approximation of the continuous control , is false in general ( see ) , this is a natural way to proceed and it is interesting to test the efficiency of the greedy approximation for a given , which in practice corresponds to fixing the space - mesh for the discretisation . in any case , the time - horizon is chosen such that the geometrical control condition ( ensuring controllability of the continuous problem , see ) is satisfied for the continuous wave equation and all . the greedy control algorithm has been applied with and the uniform discretisation of in values . the algorithm stopped after 24 iterations , choosing 24 ( out of 100 ) parameter values in the following order : the corresponding minimisers have been calculated and stored , completing the offline part of the process . in the online part one explores the stored data in order to construct approximate controls for different parameter values ranging between 1 and 10 by means of formula . here we present the results obtained for . figure 2 displays the evolution of the last 5 components of the system , corresponding to the time derivative of the solution to at grid points , controlled by the approximate control . as required , their trajectories are driven ( close ) to the zero target . the first 25 components of the system are plotted by a 3-d plot ( figure 3 ) depicting the evolution of the solution to a semi - discretised problem governed by the approximate control . the starting wave front , corresponding to the sinusoidal initial position , as well as the oscillations in time , gradually diminish and the solution is steered ( close ) to zero , as required . furthermore , the total distance of the solution from the target equals .
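for concreteness , the semi - discrete model used in this example can be assembled along the following lines . this is a plausible reading of the block structure described above , with the standard centred second - difference matrix on the interior grid points ; the boundary - control scaling in particular is our assumption and not taken from the authors ' implementation .

```python
import numpy as np

def wave_system(N, nu, L=1.0):
    """First-order form x = (y, y') of the semi-discrete wave equation y'' = nu * y_xx
    on N interior grid points, with Dirichlet control acting at the right boundary."""
    h = L / (N + 1)
    lap = (np.diag(-2.0 * np.ones(N)) +
           np.diag(np.ones(N - 1), 1) +
           np.diag(np.ones(N - 1), -1)) / h**2
    A = np.block([[np.zeros((N, N)), np.eye(N)],
                  [nu * lap,         np.zeros((N, N))]])
    B = np.zeros((2 * N, 1))
    B[-1, 0] = nu / h**2      # the control enters the last velocity equation
    return A, B

# e.g. A, B = wave_system(N=25, nu=4.0); combined with the greedy routines sketched
# earlier this reproduces the type of experiment reported here, with the parameter
# nu ranging over [1, 10].
```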
the efficiency of the greedy control approach is checked by exploring the corresponding approximation rates , depicted by the red curve of figure 4 . the curve decreases rapidly below for , stopping the algorithm after 24 iterations . the bases of the approximation spaces are built iteratively in a _ greedy _ manner , by exploring the surrogate distance over the set . as explained in the previous section , this search is costly , but should produce optimal approximation rates in the sense of the kolmogorov widths . as opposed to that , one can explore a cheaper approach , in which the vectors spanning an approximation space are chosen arbitrarily , e.g. by taking vectors of the canonical basis . the blue curve of figure 4 plots the approximation errors of spaces spanned by the first vectors of the canonical basis in . obviously the greedy approach wins over the latter one , in accordance with the theoretical results , ensuring optimal performance . eventually , both curves terminate at zero at the moment in which the size of the approximation space reaches the size of the ambient space . for , with given by , and the control operator , the system corresponds to the space - discretisation of the heat equation problem with internal grid points and the control on the right boundary : the parameter represents the diffusion coefficient and is supposed to range within the set . the system satisfies the kalman rank condition for any and any target time . we aim to control the system from the initial state to zero in time . the greedy control algorithm has been applied for the system of dimension with , and the uniform discretisation of in values . the algorithm stops after only three iterations , choosing parameter values ( out of 100 eligible ones ) in the following order : the corresponding three minimisers are used for constructing approximate controls for all parameter values by means of formulas and . here we present the results obtained for . the evolution of the solution , presented by a 3-d plot , is given by figure 5 . the system is driven to the zero state within the error . the corresponding approximate control profile is depicted in figure 6 , exhibiting rather strong oscillations when approaching the target time , an intrinsic feature of the heat equation . the choice of a rather tiny value of the target time is due to the strong dissipation effect of the heat equation , providing an exponential solution decay even in the absence of any control . for the same reason , the algorithm stopped after only three iterations although the precision was set rather high with ( for the wave problem 24 iterations were required in order to produce uniform approximation within the error 0.5 ) . * _ exact controllability . _ in our analysis the exact control problem is relaxed to the approximate one by allowing a tolerance on the target at time , which is realistic in applications . note however that this approximation of the final controllability condition is achieved by ensuring a sufficiently small error on the approximation of exact controls . thus , in practice , the methods we have built are of direct application to the problem of exact controllability as well , as the tolerance tends to zero . * _ comparison of greedy and naive approach .
_as we have seen the weak greedy algorithm we have developed has various advantages with respect to the naive one , the latter consisting of taking a fine enough uniform mesh on the set of eligible parameters and calculating all corresponding controls .although the greedy approach requires an ( expensive ) search for distinguished parameter values , in the finite - dimensional case its computational cost is smaller than for the naive one . meanwhile , in the infinite - dimensional context it leads to algorithms with an optimal convergence rate . * _ model reduction : _ the computational cost of the greedy methods , as developed here , remains high .this is so because on the search of the new distinguished parameter values , to compute the surrogate , we are obliged to solve fully the adjoint system first and then the controlled one. it would be interesting to build cheaper ( from a computational viewpoint ) surrogates .this would require implementing model reduction techniques .but this , as far as we know , has non been successfully done in the context of controllability .model reduction methods have been implemented for elliptic and parabolic optimal control problems ( see , , and ) but , as far as we know , they have not been developed in the context of controllability . * _ parameter dependence ._ most existing practical realisations of ( weak ) greedy algorithms assume an affine parameter dependence presented at the end of section [ infinite ] ( ) .the assumption provides a cheaper computation of a surrogate and reduces the cost of the search for a next snapshot .+ by the contrary , the greedy control algorithm presented in this paper allows for very general parameter dependence , still providing an efficient algorithm beating the naive approach .smooth dependence is needed in order to obtain sharp convergence rates . still the algorithms can be applied for models depending on the parameters in a rough manner .+ the greedy control algorithm and corresponding approximation results of section 4 , as well as solution to our problem is derived under assumption of lipschitz continuity of the control system entries with respect to the parameter .however , additional analyticity requirement is imposed in section [ infinite ] in order to deduce convergence analysis of the algorithm .namely , transfer of kolmogorov width from the set of parameters into the set of solutions ( or controls ) is provided so far only by theorem [ greedy_rates2 ] , which requires analyticity assumption , as well as polynomial decay of kolmogorov widths .however , the lack of an analogous result under more general assumptions does not prevent possible applications of the greedy control algorithm .it can still be implemented , practically at no - risk .namely , in the worst case when dimension of approximation space reaches number of eligible parameters , one will calculate controls for all these parameter values , which corresponds to the naive approach . and this will cost the same as applying naive method directly from the beginning .but there are many reasons to believe the algorithm will stop before this ultimate point . *_ waves versus heat : _ in our numerical experiments we have observed that the greedy algorithm is more efficient for the semi - discrete heat equation than for the wave one , in the sense that for the first one it stops after iterations while for the second one it goes up to .this is a natural result to be expected . 
indeed , for the heat equation , because of the intrinsic dissipativity of the system , the high frequency components play a minor role when , as in our experiments , dealing with approximate controllability .thus , even if the algebraic dimension of both systems , waves and heat , are the same , in practice the relevant dimensions for the heat equation is much smaller , thus explaining the faster convergence of the greedy algorithm for heat like equations . * _ extension of greedy control to pde systems ._ as we have seen the methods and ideas developed in this paper can be easily adapted to infinite - dimensional odes .the same ideas and methods can be implemented without major changes on the pde setting .however , as we have seen above , in order to quantify the convergence rate of greedy algorithms , the analytic dependence of the controls with respect to parameters plays a key role .as far as we know , this issue has not been thoroughly addressed in the literature , i.e. that of whether or not controls for a given pde controllability problem depend analytically on the parameters entering in the system . and at this level , the type of pde under consideration can play a major role . indeed , for heat equations ,even when the unknown parameters enter in the principal part of the diffusion operator , the analytic dependence of the controls can be expected .this is not the case for wave equations ( see ) .indeed , note that even for the simplest first order transport equation solutions fail to depend analytically on the velocity of propagation in a natural or sobolev energy setting .this issue requires significant further investigation . *robust control ._ the analysis and methods developed in this paper apply to control problems where the data to be controlled are prescribed a priori , either depending on the parameters or not . in that sense , our results are the analogue to what has been done for the greedy approximation of pde solutions with given data ( right hand side terms and boundary values ) as in , and , among others .+ from the point of view of applications it would be interesting to develop greedy methods for control of potential application for all possible data to be controlled .this would require to establish a strategy for identifying the most relevant snapshots of the parameters for the approximation of the gramians , within the space of bounded linear operators . andthis , in turn , requires identifying efficient surrogates .this is an interesting problem to be addressed .+ actually , as far as we know , this has not been done even in the context of the solvability of elliptic problems .the work done so far , as mentioned above , addresses approximation issues for given specific data .but the problem of applying greedy algorithms to approximate the resolvent operators , in a robust manner and independently of the data entering in the pde , has not been addressed . , ( 2006 ) ._ controllability and observability of partial differential equations : some results and open problems_. _ handbook of differential equations : evolutionary equations , _ vol .3 , c. m. dafermos and e. feireisl eds . , elsevier science , amsterdam , 527621 .
we analyse the problem of controllability for parameter - dependent linear finite - dimensional systems . the goal is to identify the most distinguished realisations of those parameters so to better describe or approximate the whole range of controls . we adapt recent results on greedy and weak greedy algorithms for parameter depending pdes or , more generally , abstract equations in banach spaces . our results lead to optimal approximation procedures that , in particular , perform better than simply sampling the parameter - space to compute the controls for each of the parameter values . we apply these results for the approximate control of finite - difference approximations of the heat and the wave equation . the numerical experiments confirm the efficiency of the methods and show that the number of weak - greedy samplings that are required is particularly low when dealing with heat - like equations , because of the intrinsic dissipativity that the model introduces for high frequencies . keywords : parametrised odes and pdes , greedy control , weak - greedy , heat equation , wave equation , finite - differences .
stochastic modelling is often the most effective tool available in order to describe complex systems in physical , biological and social networks . in particular , since natural noise sources are mostly gaussian , stationary and non - stationary gaussian processes are often used to model the response of a system exposed to environmental noise . in view of the increasing interest towards complex systems ,a question thus naturally arises on whether an effective characterization of gaussian processes is achievable . in this paperwe address the characterization of classical random fields and focus attention on fractional gaussian processes .the reason is twofold : on the one hand , most of of the noise sources in nature are gaussian and the same is true for the linear response of systems exposed to environmental noise . on the other hand ,fractional processes have recently received large attention since they are suitable to describe noise processes leading to complex trajectories , e.g. irregular time series characterized by a haussdorff fractal dimension in the range . in particular , in order to maintain the discussion reasonably self contained , we focus on systems exposed to fractional brownian noise ( fbn ) , which is a paradigmatic nonstationary gaussian stochastic process with zero mean =0 ] , usually referred to as the hurst parameter .the hurst parameter is directly linked to the fractal dimension of the trajectories of the particles exposed to the fractional noise .the notation ] .we remind that fbn is a self - similar gaussian process , i.e. , and that it is suitable to describe anomalous diffusion processes with diffusion coefficients proportional to , corresponding to ( generalized ) noise spectra with a powerlaw dependence on frequency .the characterization of fbn amounts to the determination of the fractal dimension of the resulting trajectories , i.e. the determination of the parameter . in the following , in order to simplify notation and formulas , we will employ the complementary hurst parameter ] , and cases where we know in advance that only two possible values and are admissible and want to discriminate between them .several techniques have been suggested for the estimation of the hurst parameter in the time or in the frequency domain , or using wavelets . among themwe mention range scale estimators , maximum likelihood , karhunen - loeve expansion , p - variation , periodograms , weigthed functional , and linear bayesian models . compared to existing techniques , quantum probesoffers the advantage of requiring measurements performed at a fixed single ( optimized ) instant of time , without the need of observing the system for a long time in order to collect a time series , and thus avoiding any issue related to poor sampling .as we will see , quantum probes may be effectively employed to characterize fractional gaussian process when the the system - environment coupling is weak , provided that a long interaction time is achievable , or when the coupling is strong and the quantum probe may be observed shortly after that the interaction has been switched on .overall , and together with results obtained for the characterization of stationary process , our results indicate that quantum probes may represent a valid alternative to other techniques to characterize classical noise .the paper is structured as follows : in section [ s : mod ] we introduce the physical model and discuss the dynamics of the quantum probe . 
in section [ s : qest ] we briefly review the basic notions of quantum information geometry and evaluate the figures of merit that are relevant to our problems . in section [ s : qp ] we discuss optimization of the interaction time , and evaluate the ultimate bounds to the above figures of merit that are achievable using quantum probes . section [ s : out ] closes the paper with some concluding remarks . we consider a spin particle in a situation where its motion is subject to environmental fbn noise and may be described classically . we assume that the motional degree of freedom of the particle is coupled to its spin , such that the effects of noise also influence the dynamics of the spin part . we also assume that the noise spectrum of the fbn contains frequencies that are far away from the natural frequency of the spin part . when the spectrum contains frequencies that are _ smaller _ than , then the fluctuations induced by the fbn are likely to produce decoherence of the spin part , rather than damping , such that the time - dependent interaction hamiltonian between the motional and the spin degrees of freedom may be written as where denotes a pauli matrix and denotes the coupling between the spin part and its classical environment . we do not refer to any specific interaction model between the motional degree of freedom and the spin part and assume that eq . ( [ hi ] ) describes the overall effect of the coupling . the full hamiltonian of the spin part is given by and may be easily treated in the interaction picture . upon denoting by the initial state of the spin part , the state at a subsequent time is given by . the problem is to discriminate a quantum state within the continuous family . in this case , the relevant quantity is the so - called bures infinitesimal distance between nearby points in the parameter space , where the _ bures metric _ is given by , being the eigenvectors of , and may be rewritten in terms of the fidelity . for a given measurement , being the conditional probability of obtaining the value when the parameter has the value , one defines the ( classical ) fisher information ; when quantum systems are involved , we have , and one can prove that it is upper bounded by the quantum fisher information . the optimal povm for the discrimination problem is the one minimizing the overall probability of a misidentification , i.e. . for the simplest case of equiprobable hypotheses ( ) we have and where . this is usually referred to as the helstrom bound , and it represents the ultimate quantum bound to the error probability in a binary discrimination problem . in our case , is minimized when the two output states commute , i.e.
for leading to where is given in section [ s : mod ] .the minimization over the interaction time will be discussed in the next section .we notice , however , that any single - copy discrimination strategy based on quantum probes is inherently inefficient since eq .( [ peqp ] ) imposes an error probability larger than at any time .one is therefore led to consider different strategies , as those involving several copies of the quantum probes .indeed , let us now suppose that copies of both states are available for the discrimination .the problem may be addressed using the above formulas upon replacing with .we thus need to analyze the quantity .the evaluation of the trace distance for increasing may be difficult and for this reason , one usually resort to the quantum chernoff bound , which gives an upper bound to the probability of error where = \inf_{0\leq s\leq 1 } { \mathop{\text{tr}}\nolimits}\left[\rho_{\gamma_1 } ^s\ : \rho_{\gamma_2}^{1-s}\right]\,.\end{aligned}\ ] ] the bound may be attained in the asymptotic limit of large .notice that while the trace distance is capturing the notion of distinguishability for single copy discrimination this is not the case for multiple copies strategies , where the quantity represent the proper figure of merit .also in eq .( [ q ] ) we omitted the explicit dependence on the interaction time . for nearby statesthe relevant distance is the so - called infinitesimal quantum chernoff bound ( qcb ) distance , where the qcb metric is given by the qcb introduces a measure of distinguishability for density operators which acquires an operational meaning in the asymptotic limit .the larger is the qcb distance , the smaller is the asymptotic error probability of discriminating a given state from its neighbors . on the other hand , for a fixed probability of error ,the smaller is , the smaller the number of copies of and we will need in order to distinguish them . also the quantity is minimizedwhen the two output states commute , i.e. for and , in this case we have ^s \,[1-p_{\gamma_2}(t,\lambda)]^{1-s } \big\}\ , . \notag\end{aligned}\ ] ] the minimization over the parameter and the interaction time will be discussed in the next section . concerning the qcb metric, we have the general relation . in our case , since the maximum is achieved when only the eigenvalues of depends on , the only non zero terms in eqs .( [ gb ] ) and ( [ gqcb ] ) are those with . as a consequencethe first inequality above is saturated and we have , .the working conditions to optimize the estimation or the discrimination of nearby states are thus the same .in this section we discuss optimization of the estimation / discrimination strategies for fbn over the possible values of the interaction time .more explicitly , we maximize the bures metric and minimize the helstrom and qcb bound to error probability , as a function of the interaction time . in this way, we individuate the optimal working conditions , maximizing the performances of quantum probes , and establish a benchmark to assess any strategy based on non optimal measurements . for the estimation of the complementary hurst parameter as a function of and of the interaction time , for different values of the coupling .the contour plots correspond , from top left to bottom right , to , and , respectively .whiter regions correspond to larger values of the bures metric . 
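since , for a qubit probe , all three figures of merit used in this paper ( bures metric , helstrom bound , chernoff quantity ) reduce to small linear - algebra computations , a short numerical companion may be useful . the sketch below only encodes the textbook expressions recalled above ; the explicit dependence of the dephasing coefficient on the complementary hurst parameter , the coupling and the interaction time is the model - specific ingredient not reproduced here , so the dephased probe state and the finite - difference step are illustrative assumptions .

```python
import numpy as np
from scipy.linalg import sqrtm, eigvalsh, fractional_matrix_power

def dephased_qubit(q):
    """Probe prepared in |+> and subject to pure dephasing with coefficient q in [0, 1]."""
    return 0.5 * np.array([[1.0, q], [q, 1.0]])

def fidelity(r1, r2):
    s = sqrtm(r1)
    return float(np.real(np.trace(sqrtm(s @ r2 @ s))) ** 2)

def bures_metric(rho_of, gamma, d=1e-5):
    """g_B(gamma) from d_B^2 = 2(1 - sqrt(F)) ~ g_B * dgamma^2 (finite differences);
    with the usual conventions the quantum Fisher information is 4 * g_B."""
    F = fidelity(rho_of(gamma), rho_of(gamma + d))
    return 2.0 * (1.0 - np.sqrt(F)) / d ** 2

def helstrom_bound(r1, r2, p1=0.5):
    """Single-copy minimum error probability P_e = (1 - ||p1 r1 - p2 r2||_1) / 2."""
    return 0.5 * (1.0 - np.sum(np.abs(eigvalsh(p1 * r1 - (1.0 - p1) * r2))))

def chernoff_Q(r1, r2, ns=101):
    """Q = min over 0 <= s <= 1 of Tr[r1^s r2^(1-s)], by a brute-force grid search;
    for n copies the error probability is bounded by Q^n / 2 asymptotically."""
    ss = np.linspace(0.0, 1.0, ns)
    vals = [np.real(np.trace(fractional_matrix_power(r1, s) @
                             fractional_matrix_power(r2, 1.0 - s))) for s in ss]
    return float(min(vals))
```

for two commuting dephased states , as in the case considered here , these quantities depend only on the populations , which is what the optimisation over the interaction time carried out in the next section exploits .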
] upon inspecting the functional dependence of the bures metric on the quantities , and in eq .( [ gbg ] ) one sees that is somehow a function of the quantity and thus maxima are expected , loosely speaking , for small and large or viceversa . on the other hand , this scaling is not exact and thus a richer structure is expected .this is illustrated in fig .[ f : f1 ] , where we show contour plots of as a function of and of the interaction time for different values of the coupling . as it is apparent from the plots , for any value of the couplingthere are two maxima located in different regions ( notice the different ranges for the interaction time ) .the global maximum moves from one region to the other depending on the values of the coupling ( see below ) . in fig .[ f : f2 ] we show the results obtained from the numerical maximization of the bures metric over the interaction time .the upper left panel is a log - log - plot of the maximized bures metric as a function of the coupling for randomly chosen values of ] ( gray points ) .we also report some curves at fixed values of , showing that for any value of the complementary hurst parameter , except those close to the limiting values and , a threshold value on the coupling , i.e. on the intensity of the noise , naturally emerges .the bures metric is large , i.e. estimation may achieve high precision , in the weak and in the strong coupling limit , that is , when or . on the other hand , for intermediate values of the coupling estimation of the fractal dimension is inherently inefficient .this behavior is further illustrated in the lower left panel , where we report the same random points as a function of , also showing curves at fixed values of the coupling .values of close to or may be precisely estimated for any value of the coupling whereas intermediate values needs a tuning of , in order to be placed in the corresponding weak ( or strong ) coupling limit .the threshold value increases with and does not appear for or .for those values high precision measurements are achievable only in the strong coupling limit ( for , i.e. fractal dimension close to ) or the weak coupling limit ( , i.e. negligible fractal dimension ) .[ cols="^,^ " , ] the plots confirm the overall symmetry of the helstrom bound at fixed .another feature that emerges from fig .[ f : f3 ] is that , say , the pairs and or and have different discriminability despite the fact that for both pairs we have , i.e. the helstrom bound is not uniform .the plots also confirm the overall picture obtained in discussing estimation problems : for each pair of values , two regimes of strong or weak coupling may be individuated , where discrimination may be performed with reduced error probability , whereas for intermediate values of the coupling performances are degraded .the only exception regards values close to the limiting values or , where no threshold appears .we also notice that by increasing the coupling one enlarges the region in the - plane where discrimination may be performed with reduce error probability .this is illustrated in the right panels of fig .[ f : f3 ] , where we show a density plot of the minimized helstrom bound as a function of both the values and for two different values of the coupling : ( top panel ) and ( bottom panel ) of values of the complementary hurst parameter by quantum probes . in the left panelwe report the maximized chernoff bound as a function of the coupling with the environment for pair of values with not too close to the limiting values or . 
from left to right we have , ( blue squares ) , ( green triangles ) , ( red circles ) , ( magenta stars ) , ( gray squares ) , ( gray circles ) . in the right panelwe show the same quantity for pair of values close to the boundaries and .the increasing curves correspond to ( blue circles ) , ( blue stars ) , ( blue triangles ) , whereas the decreasing ones are for ( black circles ) , ( black stars ) , ( black triangles ) ., title="fig : " ] of values of the complementary hurst parameter by quantum probes . in the left panelwe report the maximized chernoff bound as a function of the coupling with the environment for pair of values with not too close to the limiting values or . from left to right we have , ( blue squares ) , ( green triangles ) , ( red circles ) , ( magenta stars ) , ( gray squares ) , ( gray circles ) . in the right panelwe show the same quantity for pair of values close to the boundaries and .the increasing curves correspond to ( blue circles ) , ( blue stars ) , ( blue triangles ) , whereas the decreasing ones are for ( black circles ) , ( black stars ) , ( black triangles ) . , title="fig : " ] as mentioned in section [ s : qest ] , the helstrom bound to the single - shot error probability by quantum probes is bounded from below by the value , making these kind discrimination schemes of little interest for applications .we are thus naturally led to consider multiple - copy discrimination . in fig .[ f : f4 ] we report the results of the optimization of the chernoff bound of eq .( [ q ] ) over the parameter and the interaction time . in the left panelwe show the quantity , minimized over the interaction time , as a function of the coupling with the environment for different pairs of values and not too close to the limiting values and .also in this case , the plot also confirms that better performances are obtained in the regimes of weak and strong coupling , whereas for intermediate values no measurements are able to effectively extract information from the quantum probe . the threshold to define the two regimes increases with the value of the s themselves . when the values of the hurst parameter are approaching the limiting values and no threshold appears . in thesetwo limiting cases discrimination may be reliably performed in the weak coupling limit ( for negligible fractal dimension ) or in the strong coupling one ( fractal dimension closer to its maximum value ) .this behavior is illustrated in the right panel of fig .[ f : f4 ] , where we show the minimized as a function of the coupling for pairs of values and close to or . for both , single- and multiple - copy discrimination ,the behavior of the optimal interaction time is analogue to that observed in the discussion of estimation problem .we have addressed estimation and discrimination problems involving the fractal dimension of fractional brownian noise . upon assuming that the noise induces a dephasing dynamics on a qubit , we have analyzed in details the performances of inferences strategies based on quantum limited measurements . 
in particular , in order to assess the performances of quantum probes , we have evaluated the bures metric , the helstrom bound and the chernoff bound , and have optimized their values over the interaction time .our results show that quantum probes provide an effective mean to characterize fractional process in two complementary regimes : either when the the system - environment coupling is weak , provided that a long interaction time is achievable , or when the coupling is strong and the quantum probe may be observed shortly after that the interaction has been switched on .the two regimes of weak and strong coupling are defined in terms of a threshold value of the coupling , which itself increases with the fractional dimension .our results overall indicate that quantum probes may represent a valid alternative to characterize classical noise .this work is dedicated to the memory of r. f. antoni .the author acknowledges support by miur project firb lichis - rbfr10yq3h ) .99 p. sibani , j. h. jensen , _ stochastic dynamics of complex systems _ ( world scientific , new york , 2013 ) .d. j. wilkinson , nat .* 10 * , 122 ( 2009 ) .d. most , d. keles , eur . j. op .res . * 207 * , 543 ( 2010 ) .p. e. smouse , s. focardi , p. r. moorcroft , j. g. kie , j. d. forester , j. m. morales , phyl .b * 365 * , 2201 ( 2010 ) .r. f. fox , phys . lett . * 48 * , 179 ( 1978 ) .b. b. mandelbrot , j. w. van ness , siam rev . * 10 * , 432 ( 1968 ) .b. b. mandelbrot , j. r. wallis , water resour . res . * 4 * , 909 ( 1969 ) .m. s. taqqu , stat .* 28 * , 131 ( 2013 ) .r. j. barton , h. v. poor , ieee trans .th . * 34 * , 943 ( 1988 ) .h. e. hurst , trans .eng . * 116 * , 770 ( 1951 ) .p. flandrin , iee trans .th * 35 * , 197 ( 1989 ) .r. b. davies , d. s. harte , biometrika * 74 * , 95 ( 1987 ) .h. d. jeong , j. s. lee , d. mcnickle , and k. pawlikowski , simul .theory * 15 * , 1173 ( 2007 ) .j. barunik , l. kristoufek , physica a * 389 * , 3844 ( 2010 ). g. w. wornell , a. v. oppenheim , ieee trans . signal pro- cess . * 40 * , 611 ( 1992 ) .l. zunino , d. g. peez , m. t. martn , a. plastino , m. garavaglia , o. a. rosso , phys .e * 75 * , 021115 ( 2007 ) . c. m. kendziorski , j. b. bassingthwaighte , p. j. tonellato , physica a * 273 * , 439 ( 1999 ) . l. a. salomon , j. c. fort , j. stat . comp .simul . * 83 * , 542 ( 2013 ) .m. magdziarz , j. k. slezak , j. wjcik , j. phys .a * 46 * , 325003 ( 2013 ) .m. s. taqqu , v. teverovsky , w. willinger , fractals * 3 * , 785 ( 1995 ) . y. liu , y. liu , k. wang , t. jiang , l. yang , phys . rev .e * 80 * , 066207 ( 2009 ) .d. boyer , d. s. dean , c meja - monasterio , g. oshanin , phys .e * 87 * , 030103(r ) ( 2013 ) .n. makarava , s. benmehdi , m. holschneider , phys .e * 84 * , 021109 ( 2011 ) .j. schmittbuhl , j .-vilotte , s. roux , phys .e * 51 * , 131 ( 1995 ) .a. mehrabi , h. rassamdana , m. sahimi , phys .e * 56 * , 712 ( 1997 ) . c. castelnovo , a. podest , p. piseri , p. milani , phys rev .e * 65 * , 021601 ( 2002 ) . c. benedetti , f. buscemi , p. bordone , m. g. a. paris , phys , rev . a * 89 * , 032114 ( 2014 ) . c. benedetti ,m. g. a. paris , int .j. quantum inf .* 12 * , 1461004 ( 2014 ) .j. ehek , m. g. a. paris ( eds ) _ quantum state estimation _ , lect . not* 649 * ( springer , berlin , 2004 ) i. bengtsson , k. zyczkowski , _ geometry of quantum states _ , ( cambridge university press , 2006 ) .d. bures , trans . am . math. soc . * 135 * , 199 ( 1969 ) .a. uhlmann , rep .math . phys .* 9 * , 273 ( 1976 ) .w. k. 
wootters phys .d * 23 * , 357 ( 1981 ) .r. josza , j. mod . opt . * 41 * , 2314 ( 1994 ) .sommers , k. zyczkowski , j. phys .a * 36 * , 10083 ( 2003 ) .s. braunstein and c. caves , phys .lett . * 72 * , 3439 ( 1994 ) .s. braunstein , c. caves , and g. milburn , ann .247 * , 135 ( 1996 ) .d. c. brody , l. p. hughston , proc .a * 454 * , 2445 ( 1998 ) ; a * 455 * , 1683 ( 1999 ) .a. sun - ichi , h. nagaoka , _ methods of information geometry _ ( ams , 2000 ) .p. zanardi , m. g. a. paris , l. campos - venuti , phys .a * 78 * , 042105 ( 2008 ) . c. invernizzi , m. korbmann , l. campos - venuti , m. g. a. paris , phys .a * 78 * , 042106 ( 2008 ) .m. g. a. paris , int .* 7 * , 125 ( 2009 ) .m. hotta , t. karasawa , m. ozawa , phys .a * 72 * , 052334 ( 2005 ) .a. monras , m. g. a. paris phys .lett . * 98 * , 160401 ( 2007 ) .a. fujiwara , phys .a * 63 * , 042304 ( 2001 ) ; a. fujiwara , h. imai , j. phys .a * 36 * , 8093 ( 2003 ) .z. ji , g. wang , r. duan , y. feng , m. ying ieee trans .theory , * 54 * , 5172 ( 2008 ) . v. dauria , c. de lisio a. porzio , s. solimeno , and m. g. a. paris j. phys .b * 39 * , 1187 ( 2006 ) .m. brunelli , s. olivares , m. g. a. paris , phys . rev .a * 84 * , 032105 ( 2011 ) ; m. brunelli , s. olivares , m. paternostro , m. g. a. paris , phys .a 86 , 012125 ( 2012 ) .o. e. barndorff - nielsen , r. d. gill , r. d , j. phys .a * 33 * , 4481 ( 2000 ) .a. luati , ann . stat .* 32 * , 1770 ( 2004 ) . c. w. helstrom , _ quantum detection and estimation theory _ ( academic press , new york 1976 ) a. chefles , contemp* 41 * 401 ( 2000 ) .j. a. bergou , u. herzog , m. hillery in , pp 417 - 465 .a. chefles in , pp 467 - 511. j. a. bergou , j. mod . opt .* 57 * , 160 ( 2010 ) .j. calsamiglia , r. munoz - tapia , l. masanes , a. acn , e. bagan , phys .a * 77 * , 032311 ( 2008 ) .k. m. r. audenaert , j. calsamiglia , r. munoz - tapia , e. bagan , ll .masanes , a. acin , f. verstraete , phys .lett . * 98 * , 160501 ( 2007 ) .m. nussbaum , and a. szkola , ann .37 * , 1040 ( 2009 ) .k. m. r. audenaert , m. nussbaum , a. szkola , and f. verstraete , commun .* 279 * , 251 ( 2008 ) .s. pirandola and s. lloyd , phys .a * 78 * , 012331 ( 2008 ) .a * 84 * , 022334 ( 2011 ) .
we address the characterization of classical fractional random noise via quantum probes . in particular , we focus on estimation and discrimination problems involving the fractal dimension of the trajectories of a system subject to fractional brownian noise . we assume that the classical degree of freedom exposed to the environmental noise is coupled to a quantum degree of freedom of the same system , e.g. its spin , and exploit quantum limited measurements on the spin part to characterize the classical fractional noise . more generally , our approach may be applied to any two - level system subject to dephasing perturbations described by fractional brownian noise , in order to assess the precision of quantum limited measurements in the characterization of the external noise . in order to assess the performances of quantum probes we evaluate the bures metric , as well as the helstrom and the chernoff bound , and optimize their values over the interaction time . we find that quantum probes may be successfully employed to obtain a reliable characterization of fractional gaussian process when the coupling with the environment is weak or strong . in the first case decoherence is not much detrimental and for long interaction times the probe acquires information about the environmental parameters without being too much mixed . conversely , for strong coupling information is quickly impinged on the quantum probe and can effectively retrieved by measurements performed in the early stage of the evolution . in the intermediate situation , none of the two above effects take place : information is flowing from the environment to the probe too slowly compared to decoherence , and no measurements can be effectively employed to extract it from the quantum probe . the two regimes of weak and strong coupling are defined in terms of a threshold value of the coupling , which itself increases with the fractional dimension .
the time - varying nature of the underlying channel is one of the most significant design challenges in wireless communication systems . in particular, real - time media traffic typically has a stringent delay constraint , so the exploitation of long blocklength frames is infeasible and the entire frame may fall into deep fading channel states . furthermore , the receiver may have limited resources to feed the estimated channel state information back to the transmitter , which precludes adaptive transmission and forces the transmitter to use a stationary coding strategy .the above described situation is modeled as a slowly fading channel with receiver side information only , which is an example of a non - ergodic _composite channel_. a composite channel is a collection of component channels parameterized by , where the random variable is chosen according to some distribution at the beginning of transmission and then held fixed .we assume the channel realization is revealed to the receiver but not the transmitter .this class of channel is also referred to as the _ mixed channel _ or the _ averaged channel _ in literature .the shannon capacity of a composite channel is given by the verd - han generalized capacity formula where is the liminf in probability of the normalized information density .this formula highlights the pessimistic nature of the shannon capacity definition , which is dominated by the performance of the `` worst '' channel , no matter how small its probability . to provide more flexibility in capacity definitions for composite channels , in relax the constraint that all transmitted information has to be correctly decoded and derive alternate definitions including the _ capacity versus outage _ and the _ expected capacity_. the capacity versus outage approach allows certain data loss in some channel states in exchange for higher rates in other states .it was previously examined in for single - antenna cellular systems , and later became a common criterion for multiple - antenna wireless fading channels .see ( * ? ? ?* ch . 4 ) and references therein for more details .the expected capacity approach also requires the transmitter to use a single encoder but allows the receiver to choose from a collection of decoders based on channel states .it was derived for a gaussian slow - fading channel in , and for a composite binary symmetric channel ( bsc ) in .channel capacity theorems deal with data transmission in a communication system . when extending the system to include the source of the data , we also need to consider the data compression problem which deals with source representation and reconstruction .for the overall system , the end - to - end distortion is a well - accepted performance metric . when both the source and channel are stationary and ergodic , codes are usually designed to achieve the same end - to - end distortion level for any source sequence and channel realization .nevertheless , practical systems do not always impose this constraint .if the channel model is generalized to such scenarios as the composite channel above , it is natural to relax the constraint that a single distortion level has to be maintained for all channel states . in parallel with the development of alternative capacity definitions ,we introduce generalized end - to - end distortion metrics including the _ distortion versus outage _ and the _ expected distortion_. 
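before turning to the distortion metrics , we record for reference the generalized capacity formula invoked above , with the inline symbols lost from the extracted text restored in the standard notation of verdú and han :

\[
C \;=\; \sup_{\mathbf{X}} \, \underline{I}(\mathbf{X};\mathbf{Y}),
\qquad
\underline{I}(\mathbf{X};\mathbf{Y}) \;=\; \operatorname{p\text{-}liminf}_{\,n\to\infty}\ \frac{1}{n}\,\log\frac{P_{Y^n\mid X^n}(Y^n\mid X^n)}{P_{Y^n}(Y^n)} ,
\]

where the liminf in probability of a sequence of random variables is the supremum of all constants such that the probability of the sequence falling below that constant vanishes as the blocklength grows .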
the distortion versus outage is characterized by a pair , where the distortion level is guaranteed in receiver - recognized non - outage states of probability no less than .this definition requires csir based on which the outage can be declared .the expected distortion is defined as , i.e. the achievable distortion in channel state averaged over the underlying distribution .these alternative distortion metrics are also considered in prior works . in average distortion , obtained by averaging over outage and non - outage states , was adopted as a fidelity criterion to analyze a two - hop fading channel . here is the variance of the source symbols .the expected distortion was analyzed for the mimo block fading channel in the high snr regime and in the finite snr regime .various coding schemes for expected distortion were also studied in a slightly different but closely related broadcast scenario .data compression ( source coding ) and data transmission ( channel coding ) are two fundamental topics in shannon theory . for transmission of a discrete memoryless source ( dms ) over a discrete memoryless channel ( dmc ) , the renowned source - channel separation theorem ( * ? ? ?* theorem 2.4 ) asserts that a target distortion level is achievable if and only if the channel capacity exceeds the source rate distortion function , and a two - stage separate source - channel code suffices to meet the requirement .this theorem enables separate designs of source and channel codes with guaranteed optimal performance .it also extends to stationary and ergodic source and channel models .separate source - channel coding schemes provide flexibility through modularized design . from the source s point of view, the source can be transmitted over any channel with capacity greater than and be recovered at the receiver subject to a certain fidelity criterion ( the distortion ) .the source is indifferent to the statistics of each individual channel and consequently focuses on source code design independent of channel statistics . despite their flexibility and optimality for certain systems ,separation schemes also have their disadvantages .first of all , the source encoder needs to observe a long - blocklength source sequence in order to determine the output , which causes infinite delay .second , separation schemes may increase complexity in encoders and decoders because the two processes of source and channel coding are acting in opposition to some extent .source coding is essentially a data compression process , which aims at removing redundancy from source sequences to achieve the most concise representation . 
on the other hand , channel coding deals with data transmission , which tries to add some redundancy to the transmitted sequence for robustness against the channel noise .if the source redundancy can be exploited by the channel code , then a joint source - channel coding scheme may avoid this overhead .in particular , transmission of a gaussian source over a gaussian channel , and a binary symmetric source over a bsc , are both examples where optimal performance can be achieved without any coding .this is because the source and channel are matched " to each other in the sense that the transition probabilities of the channel solve the variational problem defining the source rate - distortion function and the letter probabilities of the source drive the channel at capacity .a careful inspection of the shannon separation theorem reveals some important underlying assumptions : a single - user channel , a stationary and ergodic source and channel , and a single distortion level maintained for all transmissions .violation of any of these assumptions will likely prompt reexamination of the separation theorem .for example , cover et .al . showed that for a multiple access channel with correlated sources , the separation theorem fails . in vembu et al .gave an example of a non - stationary system where the source is transmissible through the channel with zero error , yet its minimum achievable source coding rate is twice the channel capacity .in this work , we illustrate that different end - to - end distortion metrics lead to different conclusions about separability even for the same source and channel model .in fact , source - channel separation holds under the distortion versus outage metric but fails under the expected distortion metric . in proved the direct part of source - channel separation under the distortion versus outage metric and established the converse for a system of gaussian source and slow - fading gaussian channels . herewe extend the converse to more general systems of stationary sources and composite channels .source - channel separation implies that the operation of source and channel coding does not depend on the statistics of the counterpart .however , the source and channel do need to communicate with each other through a _negotiation interface _ even before the actual transmission starts . in the classical view of shannon separation for stationary ergodic sources and channels, the source requires a rate based on the target distortion and the channel decides if it can support the rate based on its capacity . for generalized source / channel models and distortion metrics ,the interface is not necessarily a single rate and may allow multiple parameters to be agreed upon between the source and channel .after communication through the appropriate negotiation interface , the source and channel codes may be designed separately and still achieve the optimal performance .vembu et al .studied the transmission of non - stationary sources over non - stationary channels and observed that the notion of ( strict ) domination ( * ? ? ?* theorem 7 ) dictates whether a source is transmissible over a channel , instead of the simple comparison between the minimum source coding rate and the channel capacity . 
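as a concrete instance of the single-number shannon interface and of the matched source-channel pair mentioned above, the sketch below checks the separation condition for a binary symmetric source with hamming distortion over a single bsc at one channel use per source symbol, and confirms empirically that uncoded transmission attains distortion equal to the crossover probability; the crossover probability used is an assumed illustrative value.

```python
import numpy as np

def h2(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def separation_ok(target_d, crossover_p):
    """classical shannon interface: compare r(d) = 1 - h2(d) with c = 1 - h2(p)."""
    return 1 - h2(target_d) <= 1 - h2(crossover_p)

p = 0.1                      # assumed bsc crossover probability
for d in (0.05, 0.1, 0.2):
    print(f"d = {d}: separation achievable -> {separation_ok(d, p)}")

# the 'matched' uncoded scheme: send source bits straight through the channel;
# the end-to-end hamming distortion is then exactly p, meeting r(d) = c with equality
rng = np.random.default_rng(1)
source = rng.integers(0, 2, 100000)
received = source ^ (rng.random(source.size) < p).astype(int)
print("uncoded empirical distortion:", np.mean(source != received))
```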
the notion of ( strict )domination requires the source to provide the distribution of the _ entropy density _ and the channel to provide the distribution of the _ information density _ as the appropriate interface .the source - channel interface concept also applies after the actual transmission starts . at the transmitter end , we see examples where the source sequence is directly supplied to the channel , such as the uncoded transmission of a gaussian source over a gaussian channel . butmore generally there is certain processing on the source side , and the processed output , instead of the original source sequence , is supplied to the channel .the _ transmitter interface _ contains what the source actually delivers to the channel .for example , in separation schemes the interface is the source encoder output ; in hybrid digital - analog schemes the interface is a combination of vector quantizer output and quantization residue .similarly we can introduce the concept of a _ receiver interface_. instead of directly delivering the channel output sequence to the destination , the receiver may implement certain decoding and choose the channel decoder output as the interface .the interfaces at the transmitter and the receiver are the same in classical shannon separation schemes , since the channel code requires all transmitted information to be correctly decoded with vanishing error , but in general the two interfaces can be different .for example , the receiver interface may include an outage indicator or partial decoding when considering generalized capacity definitions .different transmission schemes can be compared by their end - to - end performance .nevertheless , the concept of source - channel interface opens a new dimension for comparison .ideally the interface complexity should be measured by some quantified metrics .transmission schemes with low interface complexity are also appealing in view of simplified system design .we expect a performance enhancement when the source and channel exchange more information through a more sophisticated interface , and illustrate the tradeoff between interface complexity and end - to - end performance through some examples in this work .the rest of the paper is organized as follows .we review alternative channel capacity definitions and define corresponding end - to - end distortion metrics in section [ sec : performance ] . in section [ sec : source - channel ]we provide a new perspective of source - channel separation generalized from shannon s classical view and also introduce the concept of source - channel interface . in section [ sec : outagedistortion ]we establish the separation optimality for transmission of stationary ergodic sources over composite channels under the distortion versus outage metric . in section [ sec : interfacebsc ] we consider various schemes to transmit a binary symmetric source ( bss ) over a composite bsc and show the tradeoff between achievable expected distortion and interface complexity .conclusions are given in section [ sec : con ] .we first review alternate channel capacity definitions derived in to provide some background information .we then define alternate end - to - end performance metrics for the entire communication system , including the source and the destination . the channel is statistically modeled as a sequence of -dimensional conditional distributions . 
for any integer , is the conditional distribution from the input space to the output space .let and denote the input and output processes , respectively .each process is specified by a sequence of finite - dimensional distributions , e.g. . in a composite channel ,when the channel side information is available at the receiver , we represent it as an additional channel output .specifically , we let , where is the channel side information and is the output of the channel described by parameter . throughout , we assume the random variable is independent of and unknown to the encoder . thus for each the information density is defined similarly as in consider a sequence of codes .let be the probability that the receiver declares an outage , and be the decoding error probability given that no outage is declared .we say that a rate is outage- achievable if there exists a sequence of channel codes such that and .the _ capacity versus outage _ is defined to be the supremum over all outage- achievable rates , and is shown to be \leq q \right\}. \label{eqn : outage}\ ] ] the operational implication of this definition is that the encoder uses a single codebook and sends information at a fixed rate . assuming repeated channel use and independent channel state at each use , the receiver can correctly decode the information a proportion of the time and turn itself off a proportion of the time .we further define the _ outage capacity _ as the long - term average rate , which is a meaningful metric if we are only interested in the fraction of correctly received packets and approximate the unreliable packets by surrounding samples , or if there is some repetition mechanism where the receiver requests retransmission of lost information from the sender .the value can be chosen to maximize the long - term average throughput .this notion provides another strategy for increasing reliably - received rate .although the transmitter is forced to use a single encoder at a rate without channel state information , the receiver can choose from a collection of decoders , each parameterized by and decoding at a rate , based on csir .denote by the error probability associated with channel state .the expected capacity is the supremum of all achievable rates of any code sequence that has approaching zero . in a composite channel, different channel states can be viewed as virtual receivers , and therefore the expected capacity is closely related to the capacity region of a broadcast channel ( bc ) . in the broadcast system the channel from the input to the output of receiver under certain conditions , it is shown that the expected capacity of a composite channel equals to the maximum weighted sum - rate over the capacity region of the corresponding broadcast channel , where the weight coefficient is the state probability ( * ? ? ?* theorem 1 ) . 
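a minimal sketch of the capacity-versus-outage definition and of the long-term average rate for an assumed two-state composite bsc (the crossover probabilities and state probability below are illustrative, not taken from the text): when the allowed outage probability is below the bad-state probability the encoder must target the bad channel, otherwise it may target the good one.

```python
import numpy as np

def h2(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# assumed illustrative parameters
p_good, p_bad, prob_bad = 0.05, 0.25, 0.3
c_good, c_bad = 1 - h2(p_good), 1 - h2(p_bad)

def capacity_vs_outage(q):
    """largest rate whose outage probability does not exceed q (two-state channel)."""
    return c_good if q >= prob_bad else c_bad

qs = np.linspace(0, 0.99, 100)
throughput = [(1 - q) * capacity_vs_outage(q) for q in qs]
best = int(np.argmax(throughput))
print("c_bad =", c_bad, " c_good =", c_good)
print("best long-term average throughput:", throughput[best], "at q =", qs[best])
```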
using broadcast channel codes ,the expected capacity is derived in for a gaussian slow - fading channel and in for a composite bsc .the expected capacity is a meaningful metric if _ partial _ received information is useful .for example , consider sending an image using a multi - resolution ( mr ) source code over a composite channel .decoding all transmitted information leads to reconstructions with the highest fidelity .however , in the case of inferior channel quality , it still helps to decode partial information and get a coarse reconstruction .next we introduce alternative end - to - end distortion metrics as performance measures for transmission of a stationary ergodic source over a composite channel .we denote by the source alphabet and the source symbols are generated according to a sequence of finite - dimensional distributions , and then transmitted over a composite channel with conditional output distribution it is possible that the source generates symbols at a rate different from the rate at which the channel transmits symbols , i.e. a length- source sequence may be transmitted in channel uses with .the channel _ bandwidth expansion ratio _ is defined to be . for simplicitywe assume in this and the next two sections , but the discussions can be easily extended to general cases with .the numerical examples in section [ sec : interfacebsc ] will explicitly address this issue .here we design an encoder that maps the source sequence to the channel input .note that the source and channel encoders , whether joint or separate , do not have access to channel state information .however , the receiver can declare an outage with probability based on csir .in non - outage states , we design a decoder that maps the channel output to a source reconstruction .we say a distortion level is outage- achievable if and where is the distortion measure between the source sequence and its reconstruction . the _ distortion versus outage is the infimum over all outage- achievable distortions . 
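the sketch below evaluates the distortion-versus-outage pair for a binary symmetric source with hamming distortion over an assumed two-state composite bsc with bandwidth expansion ratio r, using a separate source-channel design: with no outage allowed the code must be decodable in the bad state, while allowing outage exactly in the bad state lets the rate rise to the good-state value. the inverse binary entropy is obtained by bisection, and all parameter values are assumptions.

```python
import numpy as np

def h2(p):
    p = np.clip(p, 1e-15, 1 - 1e-15)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def inv_h2(y):
    """inverse of the binary entropy on [0, 1/2], by bisection."""
    lo, hi = 0.0, 0.5
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h2(mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

def distortion_vs_outage(p_ch, ratio):
    """smallest hamming distortion d with 1 - h2(d) <= ratio * (1 - h2(p_ch))."""
    rate = ratio * (1 - h2(p_ch))
    return 0.0 if rate >= 1 else inv_h2(1 - rate)

# assumed parameters: good/bad crossover probabilities, bad-state probability, bandwidth ratio
p_good, p_bad, prob_bad, r = 0.05, 0.25, 0.3, 0.5
print("outage q = 0   (serve both states)  -> d =", round(distortion_vs_outage(p_bad, r), 3))
print(f"outage q = {prob_bad} (bad state in outage) -> d =", round(distortion_vs_outage(p_good, r), 3))
```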
in order to evaluatewe need the conditional distribution .assuming the encoder and the decoder are deterministic , this distribution is given by here is the indicator function .note that the channel statistics and the source statistics are fixed , so the code design is essentially the appropriate choice of the outage states and the encoder - decoder pair .we denote by the achievable average distortion when the channel is in state , and it is given by where the summation is over all such that and .notice that the transmitter can not access channel state information so the encoder is independent of ; nevertheless the receiver can choose different decoders based on csir .in a composite channel , each channel state is assumed to be stationary and ergodic , so for a fixed channel state we can design source - channel codes such that approaches a constant limit for large ; however , it is possible that approaches different limits for different channel states .the expected distortion metric captures the distortion averaged over various channel states .using the conditional distribution in and the definition of in , the average distortion can be written as the expected distortion is the infimum of all achievable average distortions .for transmission of a source over a channel , the system consists of three concatenated blocks : the encoder that maps the source sequence to the channel input ; the channel that maps the channel input to channel output , and the decoder that maps the channel output to a reconstruction of the source sequence .in contrast , a separate source - channel coding scheme consists of five blocks .the encoder is separated into a source encoder and a channel encoder where the index set of size serves as both the source encoder output and the channel encoder input .equivalently , each index in can be viewed as a block of bits ( * ? ? ?* defn . 5 ) .the decoder is also separated into a channel decoder and a source decoder .the difference between a general system and a separate source - channel coding system is summarized in fig . [fig : system3block ] .separation does not imply isolation - the source and channel encoders and decoders still need to agree on certain aspects of their respective designs .there are three interfaces through which they exchange information , the negotiation interface , the transmitter interface and the receiver interface . for classical shannon separation schemes with an end - to - end distortion target , these interfaces are summarized in table [ table : shannoninterface ] .the negotiation interface is a single rate comparison between and .since the shannon capacity definition requires that all transmitted information be correctly decoded , the transmission rate is the same as the receiving rate . assuming stationary and ergodic systems, these rates do not depend on the blocklength .however , these constraints can be relaxed to include more source - channel transmission strategies as separation schemes ..interface for shannon separation schemes [ cols="<,<",options="header " , ] [ table : interfaceresiduesplitting ] we provide some numerical examples to compare different schemes in this section .we assume the two states of the composite bsc have crossover probabilities and , and the bandwidth expansion ratio . for various schemes.,width=288 ] in fig .[ fig : d1d2 ] we plot the achievable distortion pair for each scheme . 
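the figures discussed here were produced with parameter values that were lost in extraction; as a rough stand-in, the sketch below (with assumed crossover probabilities and bandwidth ratio) computes the distortion pairs of the two end-point separation schemes, the shannon-capacity code designed for the bad state and the capacity-versus-outage code designed for the good state, and the resulting expected distortion as a function of the bad-state probability. a receiver in outage is assumed to output a fixed guess, giving distortion 1/2.

```python
import numpy as np

def h2(p):
    p = np.clip(p, 1e-15, 1 - 1e-15)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def inv_h2(y):
    """inverse of the binary entropy on [0, 1/2], by bisection."""
    lo, hi = 0.0, 0.5
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h2(mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

def dist_at_rate(rate):
    """hamming distortion of a bss described at 'rate' bits/symbol: 1 - h2(d) = rate."""
    return 0.0 if rate >= 1 else inv_h2(1 - rate)

# assumed parameters (the values used in the text are not recoverable here)
p_good, p_bad, ratio = 0.05, 0.25, 0.5
c_good, c_bad = 1 - h2(p_good), 1 - h2(p_bad)

# scheme a: shannon-capacity code, decodable in both states -> one distortion level
dA = (dist_at_rate(ratio * c_bad), dist_at_rate(ratio * c_bad))
# scheme b: capacity-versus-outage code for the good state -> outage (guess, d = 1/2) in the bad state
dB = (dist_at_rate(ratio * c_good), 0.5)
print("scheme a (d1, d2):", dA)
print("scheme b (d1, d2):", dB)

for t in (0.1, 0.3, 0.5):                    # bad-state probability
    print(f"t = {t}:  expected distortion a = {(1 - t) * dA[0] + t * dA[1]:.3f},"
          f"  b = {(1 - t) * dB[0] + t * dB[1]:.3f}")
```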
for the broadcast coding scheme , by varying the auxiliary variable from and , we change the rate allocation between the base layer and the refinement layer .the separation schemes using the shannon capacity code and the capacity versus outage code are the special cases of and , respectively .they are marked by the two end - points of the broadcast distortion region boundary .for the quantization residue splitting scheme , we calculate the distortion pairs for different parameters and .the plotted curve is the convex hull of all achievable distortion pairs .note that the broadcast scheme is a special case of the residue splitting scheme with , so the broadcast distortion region lies strictly within the residue splitting distortion region .there are two systematic codes , one targeting at each channel state .they are represented by two points , both out of the residue splitting distortion region . in fig .[ fig : edcompare ] we plot the expected distortion of various schemes for different channel state distributions .each systematic code achieves a single distortion pair , so the expected distortion is simply the weighted average and increases linearly with the bad channel state probability . for broadcast and residue splitting schemes ,we need to choose the optimal point on the distortion region boundary at each channel state probability . since the broadcast scheme is a special case of the residue splitting scheme , its expected distortion is no less , and sometimes strictly larger , than that of the residue splitting scheme . for different ranges of , the scheme that achieves the lowest expected distortion is also different . for or it is the residue splitting scheme , for it is the systematic code for the good channel state , and for it is the systematic code for the bad channel state . expected distortion alone does not provide the complete picture for comparison of the schemes . in fig .[ fig : interfaceenc ] and [ fig : interfacedec ] we assume the channel state probability and illustrate the tradeoff between the expected distortion and the transmitter / receiver interface complexity for different schemes , where the complexity is measured by bits per source symbol delivered through the interface .for the broadcast scheme , we can reduce the expected distortion by increasing , which reduces the base layer rate but increases the refinement layer rate and the total rate , hence a higher interface complexity . however , the distortion - complexity curve is not strictly decreasing . after we reach the minimum expected distortion, it does not provide any more benefit to further increase the interface complexity .the same trend is also observed in the residue splitting scheme . at channel state probability , the systematic code targeting the good state has the lowest expected distortion, nevertheless it also has the highest interface complexity .the choice about the appropriate scheme and operating points ( parameters ) depends on the system designer s view about this distortion - complexity tradeoff .we consider transmission of a stationary ergodic source over non - ergodic composite channels with channel state information at the receiver ( csir ) . to study the source - channel coding problem for the entire system, we include a broader class of transmission schemes as separation schemes by relaxing the constraint of shannon separation , i.e. 
a single - number comparison between source coding rate and channel capacity , and introducing the concept of a source - channel interface which allows the source and channel to agree on multiple parameters .we show that different end - to - end distortion metrics lead to different conclusions about separation optimality , even for the same source and channel models .specifically , one such generalized scheme guarantees the separation optimality under the distortion versus outage metric .separation schemes are in general suboptimal under the expected distortion metric .we study the performance enhancement when the source and channel coders exchange more information through a more sophisticated interface , and illustrate the tradeoff between interface complexity and end - to - end performance through the example of transmission of a binary symmetric source over a composite binary symmetric channel .in fig . [ fig : expected_enc ] , the multi - resolution source code can be constructed as follows .consider three independent auxiliary random variables , , and , where and , are given by .also define which has a bernoulli distribution with parameter .these variables are related to the source symbol through the relationship _ random codebook generation _ : generate sequences , , by uniform and independent sampling over the strong typical set .similarly , generate sequences , , drawn uniformly and independently over ._ decoding _ : if only the index is received , the decoder declares the estimate of the source sequence as .if both indices are received , the source is reconstructed as . following the procedures in and (* theorem 1 ) we can easily verify the following distortion targets are achievable : , . in practicethe mr source code can be implemented as a multi - stage vector quantization , which has an _ additive _ successive refinement structure . as shown in fig .[ fig : expected_enc ] , in channel state 2 only the base layer description is received and source dec 2 determines the base reconstruction .when both layers are received , source dec 1 determines a refinement sequence based on the refinement layer encoding index only , and add it to the base reconstruction to obtain the overall reconstruction . on the contrary , for generalmr source codes the overall reconstruction may require a joint decoding of indices from both layers .the additive refinement structure reduces coding complexity , provides scalability , and does not incur any performance loss under certain conditions ( * ? ? ?* theorem 3 ) , which are all satisfied in this example .the broadcast channel code design , for a chosen , is summarized as follows . _ random codebook generation _ : generate independent codewords , , by i.i.d .sampling of a bernoulli distribution .generate independent codewords , , by i.i.d .sampling of a bernoulli distribution ._ decoding _ : given channel output , in state 2 we determine the unique such that in state 1 we look for the unique indices such that following the analysis of ( * ? ? ?* theorem 14.6.2 ) , we can show that the channel decoding error probability approaches zero as long as the encoding rates satisfy . roughly speaking , in channel state 2 , we observe where the channel noise is a bernoulli sequence .we want to decode the sequence subject to the overall interference - plus - noise , which is a bernoulli sequence with parameter , hence the achievable rate . 
in channel state 1 , we observe since , the sequence can be decoded and then subtracted off .we then decode subject to the noise , and the rate is achievable .m. effros and a. goldsmith .capacity definitions and coding strategies for general channels with receiver side information . in _ proc .inform . theory ( isit ) _, page 39 , cambridge ma , august 1998 .m. effros , a. goldsmith , and y. liang .capacity definitions of general channels with receiver side information . in _ proc .inform . theory ( isit ) _ , pages 921925 , nice , france , june 2007 . c. t.k .ng , d. gndz , a. goldsmith , and e. erkip .recursive power allocation in gaussian layered broadcast coding with successive refinement . in _communications _ , glasgow , scotland , june 2007 . to appear . c. t.k .ng , d. gndz , a. goldsmith , and e. erkip .minimum expected distortion in gaussian layered broadcast coding with successive refinement . in _ proc .inform . theory ( isit ) _ , pages 22262230 , nice , france , june 2007 .g. d. hu . on shannon theorem and its converse for sequence of communication schemes in the case of abstract random variables . in _ proc .3rd prague conf . on inform .theory , stat .decision functions , random processes _ , pages 285333 , czechoslovak academy of sciences , prague , 1964 . c. tian , a. steiner , s. shamai(shitz ) , and s. diggavi . expected distortion for gaussian source with a broadcast transmission strategy over a fading channel . in _ proc .ieee inform .theory workshop on wireless networks _ ,pages 15 , bergen norway , july 2007 .
we consider transmission of stationary and ergodic sources over non-ergodic composite channels with channel state information at the receiver (csir). previously we introduced capacity definitions that are alternatives to the shannon capacity, including the capacity versus outage and the expected capacity. these generalized definitions relax the constraint of shannon capacity that all transmitted information must be decoded at the receiver. in this work, alternate end-to-end distortion metrics such as the distortion versus outage and the expected distortion are introduced to relax the constraint that a single distortion level has to be maintained for all channel states. for transmission of stationary and ergodic sources over stationary and ergodic channels, the classical shannon separation theorem enables separate design of source and channel codes and guarantees optimal performance. for generalized communication systems, we show that different end-to-end distortion metrics lead to different conclusions about separation optimality even for the same source and channel models. separation does not imply isolation: the source and channel still need to communicate with each other through some interfaces. for shannon separation schemes, the interface is a single-number comparison between the source coding rate and the channel capacity. here we include a broader class of transmission schemes as separation schemes by relaxing the constraint of a single-number interface. we show that one such generalized scheme guarantees separation optimality under the distortion versus outage metric. under the expected distortion metric, separation schemes are no longer optimal. we expect a performance enhancement when the source and channel coders exchange more information through more sophisticated interfaces, and illustrate the tradeoff between interface complexity and end-to-end performance through the example of transmitting a binary symmetric source over a composite binary symmetric channel.
in einstein s theory of general relativity , linearization of the field equations shows that small perturbations of the metric obey a wave equation ( misner , thorne & wheeler 1973 ) .these small disturbances , referred to as gravitational waves , travel at the speed of light .however , some other gravity theories predict a dispersive propagation ( see for references ) . the most commonly considered form of dispersion supposes that the waves obey a klein gordan type equation : \psi = 0.\ ] ] physically , the dispersive term is ascribed to the quantum of gravitation having a non - zero rest mass , or equivalently a non - infinite compton wavelength .the group velocity of propagation for a wave of frequency is then ,\ ] ] valid for ; only in the infinite frequency limit is general relativity recovered , with waves traveling at the speed of light . over the past few decades a number of different _dynamic _ tests of this dispersive hypothesis have been described , i.e. tests making use of direct observations of gravitational waves or their radiation reaction effects . in this paperwe add another method to this list ; we consider gravitational radiation from _ eccentric _ binary systems .such binaries emit gravitational radiation at ( infinitely many ) harmonics of the orbital frequency .our idea lies simply in measuring the phase of arrival of these harmonics .dispersion of the form described by equation ( [ eq : vg ] ) would be signaled by the higher harmonics arriving slightly earlier than the lower harmonics , as compared to the general relativistic waveform .we present a rough estimate of the bounds that might be obtained , deferring a more accurate calculation to a future study ( barack & jones , in preparation ) .the plan of this paper is as follows . in [ sect : dotb ] we derive formulae to make a simple estimate of the bounds that might be obtained using our method . in [ sect : r ] we estimate bounds obtainable on for lisa observations of two sorts of binary systems . finally in [ sect : c ] we summarize our findings and compare with those of other authors .to derive a rigorous estimate of the bound one should add the graviton mass to the list of unknown source parameters to be extracted from the measured signal , as was done by will in the case of circular orbits .the waveform can then be computed , allowing calculation of the fisher information matrix , which could then be inverted , the component , evaluated at , giving the best bound obtainable . for the case of eccentric binaries sucha calculation is not easy , and so in this paper we make a preliminary estimate of the possible bounds , without going to the trouble of calculating . we will begin by deriving a general formula for estimating the bound on that could be obtained from a system which produces gravitational waves at two different frequencies , say and .the two gravitational waves will travel with ( different ) speeds and , and so their journey times to the detector a distance away will differ by a time interval given by .\ ] ] multiplying this by , where is a characteristic frequency in the problem , gives the accumulated difference in phase of arrival of the two signals caused by the dispersion , measured in terms of radians of phase of : .\ ] ] this is to be compared with the accuracy with which the phase of arrival of the waves can be extracted from the noisy gravitational wave data stream . 
in the high signal to noise ratio regimethe error in extracting the phase of a continuous signal can be written as where we follow the notation of . in this formula is the signal to noise ratio of the measurement and is a dimensionless factor that depends upon how many unknown parameters ( including the phase ) need be extracted from the signal .the lower bound that can be placed on comes from equating and to give : .\ ] ] this shows that the best bounds will come from high mass ( i.e. high ) , high eccentricity , low orbital frequency systems .we will now apply this method of estimation to eccentric binary systems .in general many more than two harmonics will contribute significantly to , so equation ( [ eq : lambdageneral ] ) is not directly applicable . in order to take advantage of this spreadwe will make the following identifications .we will set equal to the total signal to noise of the observation . to identify appropriate frequencies , consider a plot of the signal to noise of the n - th harmonic , , verses gravitational wave frequency .we will set to be the frequency at which this curve peaks , and as the frequencies corresponding to the lower and upper full - width - at - half - maximum .in reality only discrete harmonic frequencies exist , but for the purpose of defining , and , we will treat the curve as continuous , interpolating to find the necessary frequencies .( a formalism using only discrete frequencies would have introduced spurious step - wise changes in our bounds on as a function of eccentricity ) .identification of a suitable value , which quantifies the error in phase measurement , is more problematic . consider errors in measuring the phase of a single monochromatic signal of known sky location ; they find that for a large fraction of the possible binary orientations . examine extreme mass ratio inspirals .they find phase measurement errors which again yield ( see the parameter of their table iii ). however , even in the dispersionless case of general relativity , the relative phasing of the detected harmonics is non - trivially determined by the source s sky location and the relative orientation of the detector and binary system .the phase differences we are considering here are _ additional _delays caused by dispersive propagation .clearly , then , the results of and do not directly apply to our problem .only a full fisher matrix calculation will accurately show how we can disentangle the phase differences contributed by dispersion , measurement error and those intrinsic to the binary .we expect that in those situations where the system parameters , including its sky location and orientation relative to lisa , are measured accurately , the dispersion - induced phase delays will be measured accurately too . in the absence of a full fisher matrix calculation to evaluate the correct measurement errors we will set , but note that this is the weakest link in our estimate. in particular , if the various geometric factors that enter the problem conspire such that a dispersionless signal from a certain binary is very similar to the dispersed signal from a binary with slightly different parameters ( e.g. a slightly different sky location ) , then the errors in could be very much larger than estimated here .also , will depend upon the type of system being studied .it will generally be smaller for systems where information in addition to the gravitational wave signal is available , e.g. 
galactic binary systems where optical measurements give accurate sky locations .note , however , that enters the bound on only rather weakly , as , and so we hope that our ignorance of this factor will not change our qualitative conclusions .it is expected that gravitational radiation reaction will result in most binary systems detectable by ground based interferometers being nearly perfectly circular and so will be unusable for deriving a bound of the sort described here .we will therefore concentrate exclusively on ( two sorts of ) binaries in the lisa band . in equation ( [ eq : lambdageneral ] )we will set , as discussed above .when calculating we will assume an integration time of one year .we computed the lisa noise using the online sensitivity curve generator , which included a fit to the galactic white dwarf background .we consider here the inspiral of a solar - mass type black hole into a massive one .these are excellent systems from our point of view , as they are expected to dominate the lisa inspiral event rate and , crucially , many will have very large eccentricities . to see if such systems can indeed be used to obtain a bound on , in figure [ fig1 ] we plot the eccentricity - orbital frequency phase space for a binary at a distance of .the upper curve describes the innermost stable orbit ( iso ) ; binary systems in nature only exist _ below _ this curve .the lower curve gives the minimum eccentricity required for the system to be detectable , with multiple harmonics contributing significantly to .[ our exact criterion is to see if exceeds some detection threshold when the single strongest harmonic is removed from the sum .we have set , as would be reasonable if computational power does not limit the search ] .our methods are only applicable for systems _ above _ this curve .it follows that we can use binary systems which lie _ between _ these two curves to bound .fortunately we see that this means that binaries in a significant portion of the plane are of use to us . to illustrate this, a trajectory of a plausible lisa source is shown between the two curves , with an eccentricity at the iso of about .this system spends about years between the two curves . in figure [ fig2 ]we show the actual bounds on that could be obtained from observations of extreme mass ratio systems .the distance is still fixed at , but now we fix the orbital frequency at and leave the eccentricity as a free parameter . results for binary systems with and several different values of are given , as indicated .the following features are of note : ( i ) each curve terminates at a minimum eccentricity below which the system is undetectable and/or fewer than two harmonics contribute significantly to , and at a maximum eccentricity above which the system is dynamically unstable .( ii ) for a system of given masses , the bound increases slightly ( i.e. becomes stronger ) the larger the eccentricity . (iii ) stronger bounds are obtained from more massive systems , and can be obtained for wider ranges of the eccentricity .lisa will be able to detect gravitational waves from a large number of low mass galactic binaries , consisting of white dwarfs and/or neutron stars . 
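a rough numerical sketch of the bound just described, for an assumed low-mass galactic binary, is given below. the distance, harmonic frequencies, signal-to-noise ratio and the factor quantifying the phase-measurement error (called Delta in the code, with an assumed value of 1) are illustrative guesses rather than the values used in the text, and the prefactor follows from the standard expansion of the group velocity in equation ([eq:vg]), so only the scaling of the result should be trusted.

```python
import numpy as np

# physical constants (si)
c = 2.998e8            # speed of light [m/s]
kpc = 3.086e19         # [m]

# assumed illustrative source parameters
D      = 10 * kpc      # distance to the binary
f_lo   = 1.0e-3        # lower half-maximum harmonic frequency [hz]
f_hi   = 4.0e-3        # upper half-maximum harmonic frequency [hz]
f_char = 2.0e-3        # characteristic frequency used to convert delay into phase [hz]
rho    = 100.0         # total signal-to-noise ratio
Delta  = 1.0           # assumed factor quantifying the phase-measurement error

# dispersive arrival-time difference between the two frequencies,
# from v_g/c ~ 1 - (1/2) * (c / (f * lambda_g))**2
def delta_t(lambda_g):
    return (D * c / (2 * lambda_g**2)) * (1 / f_lo**2 - 1 / f_hi**2)

# bound: the accumulated phase difference 2*pi*f_char*delta_t equals the
# phase error Delta / rho at the smallest detectable compton wavelength
lambda_bound = np.sqrt(np.pi * f_char * D * c * (1 / f_lo**2 - 1 / f_hi**2) * rho / Delta)
print(f"lambda_g bound ~ {lambda_bound:.2e} m "
      f"(dispersive delay at that bound: {delta_t(lambda_bound):.2e} s)")
```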
to investigate the suitability of these systems for bounding , in figure [ fig3 ] we plot the eccentricity frequency phase space for a galactic binary at a distance .we set , although a lower value could be used for electromagnetically studied binaries .we do not show the iso curve as for all plausible eccentricities such a binary would go dynamically unstable in the much higher ligo frequency band .clearly , binaries in a large portion of the phase space are of use for bounding . however , unlike the case of the extreme mass ratio inspiral , there is no compelling reason to expect the eccentricities of these systems to be large .many of them will have gone through a period of mass transfer in the past , which is believed to be an efficient circularizer .nevertheless , as we require merely _ one or more of them _ to have a sufficiently large eccentricity , greater than about , a bound on may well be obtained . in figure [ fig4 ]we present the bounds on that would be obtained from observations of various equal - mass binaries at a distance of and with an orbital frequency .the qualitative form is the same as in figure [ fig2 ] , except we terminate the curves at the high eccentricity end at as such extreme eccentricities seem unlikely .in table [ table : lambdag_dynamic ] we collect together reported and proposed dynamic bounds on that have appeared in the literature , and add two proposed bounds from this work .as is clear from perusal of the table and figures [ fig2 ] and [ fig4 ] , the bounds presented here for low mass galactic systems are comparable to those of , while our bounds from extreme mass ratio inspirals are comparable to those of for massive black hole coalescence . it should be remembered that our numbers can only be regarded as estimates , particularly given our rough guess as to the accuracy with which dispersion - induced phase delays can be measured .however , even if , the parameter which quantifies this error , were four orders of magnitude larger than assumed here , our results for extreme mass ratio inspirals would still beat both solar system and galactic low mass binary tests .the method presented here has several advantages over other methods .the analysis of requires knowledge of the initial relative phases of the x - ray and gravitational wave signals from an accreting white dwarf system ; it is not clear if the accretion process will be sufficiently well understood to allow this .less problematically , the method of requires knowledge of the phasing of the binary inspiral waveform in the strongly chirping regime , as it is this frequency variation that allows the dispersion test .in contrast , the method described here is very simple , requiring only that multiple harmonics can be detected .it is not necessary for the binary to be chirping significantly , and correlation with other ( i.e. non - gravitational ) radiation is not required . returning to equation ( [ eq : kg ] ) , in the static regimethe solution is of the form of a yukawa - type potential , i.e. a newtonian one suppressed by an exponential .this offers the possibility of bounding by looking for departures from newtonian gravity in the non - radiative regime .such results are given in table [ table : lambdag_static ] . 
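for comparison with the static and dynamic bounds collected in the tables, a compton-wavelength bound converts to a graviton rest-mass bound via m_g = h / (lambda_g c); the wavelength below is an assumed round number, not a value from either table.

```python
h_planck = 6.626e-34     # [j s]
c        = 2.998e8       # [m/s]
eV       = 1.602e-19     # [j]

lambda_g = 1.0e16        # assumed illustrative compton-wavelength bound [m]
m_g_kg   = h_planck / (lambda_g * c)
print(f"m_g < {m_g_kg:.2e} kg  =  {m_g_kg * c**2 / eV:.2e} eV/c^2")
```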
used planetary ephemeris data to obtain their bound , while cited evidence of gravitational binding of galaxy clusters , suggesting that the exponential suppression is not important over length scales of the order of a mpc .the bounds that could be obtained by using the methods described in this paper would be better than the solar system bounds by around orders of magnitude .however , they are weaker than those from galaxy clusters by orders of magnitude .therefore , if equation ( [ eq : kg ] ) is the correct linearization of the true theory of gravity , and if galaxy clusters are indeed gravitationally bound , the bounds on from the dynamic sector are much weaker than those from the static sector .however , the possibility remains that equation ( [ eq : kg ] ) is not the correct linearization , the static potential is not suppressed , but the wave propagation is nevertheless dispersive , i.e. equation ( [ eq : vg ] ) holds but is not derived from an equation of the form of equation ( [ eq : kg ] ) .the only way of settling this is to use the methods proposed in the dynamic regime .it could even be the case that neither equations ( [ eq : kg ] ) nor ( [ eq : vg ] ) are correct , but that gravitational waves have some other form of dispersion .the method considered here ( or any of the methods referred to in table [ table : lambdag_dynamic ] ) could be used to identify this .having used simple estimates to establish the competitiveness of the method presented here with other dynamic tests , we are currently working to improve the accuracy of our calculation by using the fisher information matrix to calculate the bound ( rather than the methods of [ sect : dotb ] ; barack & jones , in preparation ) .we also aim to extend the scope of the investigation by considering the full range of anticipated gravitational wave sources for both ground and space - based detectors .it is a pleasure to thank leor barack , shane larson , ben owen , steinn sigurdsson and nico yunes for useful discussions during this investigation , and the anonymous referee for providing comments which improved the manuscript .the center for gravitational wave physics is supported by the national science foundation under cooperative agreement phy 01 - 14375 .lll 1 & radio pulsars & + 2 & 4u1820 - 30 & + 3 & , & + 2 & ideal low mass binary & + 4 & ( & + 3 & , & + 4 & ( &
we describe a method by which gravitational wave observations of eccentric binary systems could be used to test general relativity's prediction that gravitational waves are dispersionless. we present our results in terms of the graviton having a non-zero rest mass, or equivalently a non-infinite compton wavelength. we make a rough estimate of the bounds that might be obtained following gravitational wave detections by the space-based lisa interferometer. the bounds we find are comparable to those obtainable from a method proposed by will, and several orders of magnitude stronger than other dynamic (i.e. gravitational wave based) tests that have been proposed. the method described here has the advantage over those proposed previously of being simple to apply, as it requires neither that the inspiral be in the strong-field regime nor any correlation with electromagnetic signals. we compare our results with those obtained from static (i.e. non-gravitational-wave-based) tests.
quantum discord ( qd ) is a measure of quantum correlation defined by ollivier and zurek almost ten years ago and , yet , a subject of increasing interest today .it is well known that , for a bipartite pure state , the definition of qd coincides with that of the entanglement of formation ( eof ) .but it has remained an open question how those two quantities would be related for general mixed states . here , we present this desired relation for arbitrarily mixed states and show that the eof and the qd obey a monogamic relation . surprisingly , this necessarily requires an extension of the bipartite mixed system to its tripartite purified version .nonetheless , we obtain a conservation relation for the distribution of eof and qd in the system - the sum of all possible bipartite entanglement shared with a particular subsystem , as given by the eof , can not be increased without increasing , _ by the same amount _ , the sum of all qd shared with this same subsystem . when extended to the case of a tripartite mixed state , this relation results in a new proof of the strong subadditivity of entropy , with stronger bounds depending on the balance between the sum of eof and the sum of qd shared with a particular subsystem . as an example of the importance of this conservation relation , we explore the distribution of entanglement in the deterministic quantum computation with one single pure qubit and a collection of mixed states ( dqc1 ) .the algorithm , developed by knill and laflamme , is able to perform exponentially faster computation of important tasks , when compared with well - known classical algorithms , without any entanglement between the pure qubit and the mixed ones .arguably , the power of the quantum computer is supposed to be related to qd , rather than entanglement . here , using the conservation relation ,we have shown that even in the supposedly entanglement - free quantum computation there is a certain amount of multipartite entanglement between the qubits and the environment , which is responsible for the non - zero qd ( see fig .1 ) . represents the pure qubit , the maximally mixed state , obtained through maximal entanglement with the environment . from the left to the right , is initially entangled with ( purple bar ) .the protocol is then executed and , although not directly entangled with , gets entangled with the pair as the qd between and increase . 
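a minimal density-matrix simulation of the dqc1 circuit of fig. 1 is sketched below, assuming an arbitrary unitary u on the n maximally mixed register qubits (the random unitary chosen here is purely illustrative): after the controlled-u, the expectation values of the pauli operators sigma_x and sigma_y on the control qubit recover the real and imaginary parts of tr(u)/2^n under the phase convention used in the code.

```python
import numpy as np

def dqc1_trace_estimate(U):
    """exact density-matrix simulation of one dqc1 run for the unitary U."""
    d = U.shape[0]
    plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)       # control in |+><+|
    rho = np.kron(plus, np.eye(d) / d)                              # register maximally mixed
    CU = np.block([[np.eye(d), np.zeros((d, d))],
                   [np.zeros((d, d)), U]]).astype(complex)          # controlled-U (control = first factor)
    rho = CU @ rho @ CU.conj().T
    rho_c = np.trace(rho.reshape(2, d, 2, d), axis1=1, axis2=3)     # reduced state of the control
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    # with this convention <sx> = re tr(U)/d and <sy> = im tr(U)/d
    return np.trace(rho_c @ sx).real + 1j * np.trace(rho_c @ sy).real

rng = np.random.default_rng(0)
n = 3                                                               # assumed register size
A = rng.normal(size=(2**n, 2**n)) + 1j * rng.normal(size=(2**n, 2**n))
Q, _ = np.linalg.qr(A)                                              # a random unitary, purely illustrative
print("dqc1 estimate  :", dqc1_trace_estimate(Q))
print("exact tr(U)/2^n:", np.trace(Q) / 2**n)
```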
]let us first consider an arbitrary system represented by a density matrix with and representing two subsystems and representing the environment .it is important to emphasize that the environment , here , is constituted by the universe minus the subsystems and , since , in this case , is a pure density matrix .there is an important monogamic relation between the entanglement of formation ( eof ) and the classical correlation ( cc ) between the two subsystems developed by koashi and winter , that we employ to understand the distribution of entanglement .it is given by where is the eof between and , is the cc between and , and is the usual shannon entropy of .further , and analogously for and .explicitly , cc reads ] containing , thus , less information about the trace of when is initially less entangled with .the worst case is when is in a definite state ( no entanglement with e ) , when we have access to only one eigenvalue of .curiously this corresponds to the situation where a maximal entanglement between and would be available at the end , which certainly does not contribute to any speedup for this special purpose .therefore , we suggest that one should look carefully at the redistribution of entanglement during any quantum computation , and its implication for the speedup of certain protocols . in the present situation , we see that this ability for entanglement redistribution is a necessary ( but not sufficient ) ingredient for efficient quantum computation .at this point , one could imagine what would be the implications of such a relation when some information is lacking for the description of the global state , i. e. , when the tripartite state involving systems , , and is mixed .in that case eq . ( [ koashi ] ) becomes an inequality and , therefore , eq .( [ uni1 ] ) turns into similarly , by changing for in the equation above , it now reads , which when added to eq .( [ u1 ] ) gives with being the balance between the entanglement and the quantum discord in the system .the inequality ( [ ineq ] ) can be stronger than the strong subadditivity ( ss ) , depending on .for it gives a remarkable lower bound for , which is more restrictive than ( [ sub ] ) and must be fulfilled by any quantum system .thus , we can define a more restrictive inequality than the ss , with where is given by the balance between eof and qd , eq . ( [ delta ] ) .it is important to emphasize that the ss , despite of being more difficult to prove , is essentially derived through extensions of its classical counterpart , but correlations play a different role in quantum systems .so , it is not surprising that a more restrictive bound may occur . to exemplify thislet us suppose we have a convex mixed state , where : + \alpha|000\rangle$ ] , with , and is the identity operator over the joint hilbert space of .let us define two quantities and in fig .( [ fig2 ] ) we plot and as a function of , and , in the inset , we plot for a fixed .it is easy to see that in this situation can be positive or negative . 
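the strong subadditivity inequality ([sub]) can be checked numerically by drawing random three-qubit density matrices and computing the four von neumann entropies, as in the sketch below; checking the stronger bound ([ineq]) itself would additionally require the eof and qd terms entering the balance, which involve an optimization over measurements and are omitted here. the random-state construction is an illustrative choice, not the example state used in the text.

```python
import numpy as np

def von_neumann(rho):
    """von neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def partial_trace(rho, dims, keep):
    """trace out every subsystem not listed in 'keep'; dims are the local dimensions."""
    n = len(dims)
    keep = sorted(keep)
    traced = [i for i in range(n) if i not in keep]
    rho = rho.reshape(dims + dims)
    for count, ax in enumerate(traced):
        row = ax - count                     # row axis, shifted by axes already removed
        col = row + (n - count)              # matching column axis
        rho = np.trace(rho, axis1=row, axis2=col)
    d_keep = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d_keep, d_keep)

rng = np.random.default_rng(0)
dims = [2, 2, 2]                             # subsystems a, b, c (three qubits)
d = int(np.prod(dims))
worst = np.inf
for _ in range(500):
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T                     # random full-rank density matrix
    rho /= np.trace(rho).real
    gap = (von_neumann(partial_trace(rho, dims, [0, 1]))
           + von_neumann(partial_trace(rho, dims, [1, 2]))
           - von_neumann(rho)
           - von_neumann(partial_trace(rho, dims, [1])))
    worst = min(worst, gap)
print("minimum of s(ab) + s(bc) - s(abc) - s(b) over samples:", worst, "(non-negative by ss)")
```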
when the inequality given by eq .( [ ineq ] ) is weaker than the ss given by eq .( [ sub ] ) .however , when , meaning that the eof of all bipartions is larger than their qd , eq .( [ ineq ] ) is stronger than eq .( [ sub ] ) , limiting the lower bound for .this is a strikingly different bound imposed on the entropies of quantum systems , which is not shared by their classical counterpart .the inequality above recovers the ss only when is null , meaning that the distribution of bipartite entanglement is equal to the amount of distributed quantum discord or smaller than that .it is important to emphasize here the essential role that ss plays in classical and quantum information theories .many fundamental inequalities , as nonnegativity of entropy and subadditivity , can be derived from that . to the best of our knowledge, the only inequality known to be independent is the one proposed in ref . , which is valid when the ss saturates on some particular subsystems configuration .it is straightforward to show that the inequality in ref . is independent of ( [ newstrsub ] ) as well , when , since in this case ss can not be saturated .however something else can be learned from this saturation . given a quadripartite quantum system such that ss is saturated for the three triples , , and then , .substituting by and using the monogamic relation , eq .( [ koashi ] ) , and the conservation law , eq .( [ unix ] ) , it is straightforward to show that when we have .so , as in eq .( [ ineq ] ) , the difference between the eof and the qd is of fundamental importance . ) , , in red ( solid line ) and the difference between the right and left hand sides of eq .( [ ineq ] ) , , in blue ( dotted line ) . combining these two quantities the stronger inequality eq .( [ newstrsub ] ) is obtained .the difference between its right and left hand sides is given by the shaded area .the inset shows . ]to summarize , we have given a monogamic relation between the eof and the qd .for that , we have derived a general interrelation on how those quantities are distributed in a general tripartite system .we applied this relation to show that in the dqc1 the entanglement present between one of the subsystems and the environment is responsible for the non - zero quantum discord . since the maximally mixed state is entangled with the environment , we show that the circuit described by the dqc1 distributes this initial entanglement between the pure qubit and the mixed state .our results suggest that the protocol ability to redistribute entanglement is a necessary condition for the speedup of the quantum computer .in addition , we have extended the discussion for an arbitrary tripartite mixed system showing the existence of an inequality for the subsystems entropies which is stronger than the usual ss .99 h. ollivier and w. h. zurek , phys .lett . * 88 * , 017901 ( 2001 ) .b. p. lanyon , et al .rev . lett .* 101 * , 200501 ( 2008 ) , a. shabani and d. a. lidar , phys . rev .102 * , 100402 ( 2009 ) , t. werlang , et al .a * 80 * , 024103 ( 2009 ) , k. modi , et al .* 104 * , 080501 ( 2010 ) .e. knill and r. laflamme , phys .81 * , 5672 ( 1998 ) .d. poulin , r. blume - kohout , r. laflamme , and h. ollivier , phys .. lett . * 92 * , 177906 ( 2004 ) .p. w. shor and s. p. jordan , arxiv:0707.2831 ( 2007 ) .a. datta , a. shaji , and c. m. caves , phys .lett . * 100 * , 050502 ( 2008 ) . c. e. lpez , g. romero , f. lastra , e. solano , and j. c. retamal , phys .* 101 * , 080503 ( 2008 ) ; j. maziero , t. werlang , f. f. fanchini , l. c. 
cleri , and r. m. serra , phys . rev .a * 81 * , 022116 ( 2010 ) . c. h. bennett , d. p. divincenzo , j. a. smolin , and w. k. wootters , phys .a * 54 * , 3824 ( 1996 ) .l. henderson and v. vedral , j. phys . a * 34 * , 6899 ( 2001 ) ; v. vedral , phys .lett * 90 * , 050401 ( 2003 ) .m. koashi and a. winter , phys .a * 69 * , 022309 ( 2004 ) .m. a. nielsen and i. l. chuang , _ quantum computation and quantum information _( cambridge university press , cambridge , england , 2000 ) . b. schumacher , phys .a * 51 * , 2738 ( 1995 ) .m. horodecki , j. oppenheim , and a. winter , nature * 436 * , 673 ( 2005 ) ; m. horodecki , j. oppenheim , and a. winter , nature * 436 * , 673 ( 2005 ) ; m. f. cornelio , m. c. de oliveira , f. f. fanchini arxiv:1007.0228 ( 2010 ) .e.h lieb and m.b .ruskai , j. math . phys .* 14 * , 1938 ( 1973 ) .n. linden and a. winter , commun . math. phys . * 259 * , 129 ( 2005 ) .
we present a direct relation, based upon a monogamic principle, between entanglement of formation (eof) and quantum discord (qd), showing how they are distributed in an arbitrary tripartite pure system. by extending it to the paradigmatic situation of a bipartite system coupled to an environment, we demonstrate that the eof and the qd obey a conservation relation. by means of this relation we show that in deterministic quantum computation with one pure qubit (dqc1) the protocol has the ability to rearrange the eof and the qd, which implies that quantum computation can be understood on a different basis: as coherent dynamics in which quantum correlations are distributed between the qubits of the computer. furthermore, for a tripartite mixed state we show that the balance between distributed eof and qd results in a stronger version of the strong subadditivity of entropy.
a challenging problem in quantitative biology is to successfully model the evolutionary response of organisms to various environmental pressures . aside from its intrinsic interest , the development of models which can predict the time evolution of a population s genotype could prove useful in understanding a number of important phenomena , such as antibiotic drug resistance , cancer , viral replication dynamics , and immune response .perhaps the simplest formalism for modeling , at least phenomenologically , the evolutionary dynamics of replicating organisms is known as the quasispecies model .this model was introduced by manfred eigen in 1971 as a way to describe the _ in vitro _evolution of single - stranded rna genomes . in the simplest formulation of the model ,we consider a population of asexually replicating genomes , whose only source of variability is induced by point mutations during replication .we assume that each genome , denoted by , may be written as , where each `` base '' is drawn from an alphabet of size . with each genomeis associated a first - order growth rate constant , which we assume to be genome - dependent , since different genomes are expected to be differently suited to the given environment .the set of all growth rate constants is termed the _ fitness landscape _ , which will generally be time - dependent .replication and mutation give rise to mutational flow between the genomes .if we let denote the number of organisms with genome , then , where denotes the first - order mutation rate constant from to . if denotes the probability that , after replication , produces the daughter genome , then clearly . to compute , we assume a per base replication error probability for genome ( different genomes may have different replication error probabilities , since some genomes may code for various repair mechanisms which other genomes do not ) .it is then readily shown that , where denotes the hamming distance between and . in order to model the relative competition between various genomes, it proves convenient to reexpress the dynamics in terms of population fractions . defining , and , we obtain the system of equations , where , and is therefore simply the mean fitness of the population .the above system of equations is physically realizable in a chemostat , which continuously siphons off organisms to maintain a constant population size .this ensures that growth is not resource limited , so the assumption of simple exponential growth is a good one .it should be pointed out , however , that it is possible to introduce a death term which places a cap on the population size , without changing the form of the quasispecies equations .if we introduce a second - order crowding term ( logistic growth ) , so that , then if is genome - independent , it is readily shown that when converting to the the quasispecies equations are unchanged .the quasispecies equations may be written in vector form as , where is the vector of population fractions , is the matrix of first - order mutation rate constants , and is the vector of first - order growth rate constants . for a static fitness landscape ,eigen proved that evolves to the equilibrium distribution given by the eigenvector corresponding to the largest eigenvalue of .a considerable amount of research on quasispecies theory has focused on the simplest possible fitness landscape , known as the _ single fitness peak _ ( sfp ) landscape . 
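a minimal sketch of this machinery for a toy binary genome of assumed length L = 6 is given below: it builds the mutation matrix with entries proportional to the per-base error rate raised to the hamming distance times the complementary probability raised to the number of conserved bases (the standard form recalled above), and integrates the population-fraction equations for the single fitness peak landscape introduced next. all sizes and rates are assumptions chosen only so the full genome space fits in memory.

```python
import numpy as np
from itertools import product

L, S = 6, 2                                  # assumed toy genome length and alphabet size
genomes = list(product(range(S), repeat=L))
N = len(genomes)

def hamming(g1, g2):
    return sum(a != b for a, b in zip(g1, g2))

def mutation_matrix(eps):
    """m(g -> g') = (eps/(S-1))**HD * (1-eps)**(L-HD), per-base error probability eps."""
    M = np.empty((N, N))
    for i, g in enumerate(genomes):
        for j, gp in enumerate(genomes):
            d = hamming(g, gp)
            M[i, j] = (eps / (S - 1)) ** d * (1 - eps) ** (L - d)
    return M

def quasispecies_rhs(x, f, M):
    """dx/dt for population fractions x: mutational inflow minus mean-fitness dilution."""
    return M.T @ (f * x) - np.dot(f, x) * x

# single-fitness-peak landscape: the all-zero 'master' genome grows k times faster
k, eps = 10.0, 0.02                          # assumed values
f = np.ones(N); f[0] = k
M = mutation_matrix(eps)

x = np.full(N, 1.0 / N)                      # start from a uniform population
dt = 0.01
for _ in range(20000):                       # crude forward-euler integration
    x = x + dt * quasispecies_rhs(x, f, M)
    x = np.clip(x, 0.0, None); x /= x.sum()  # guard against small numerical drift
print("master-sequence fraction at (numerical) equilibrium:", round(float(x[0]), 4))
print("mean fitness:", round(float(np.dot(f, x)), 4))
```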
in the sfp model, there exists a single , `` master '' sequence for which , while for all other sequences we have .the sfp model assumes a genome - independent mutation rate , so that for all .the sfp landscape is analytically solvable in the limit of infinite sequence length .the equilibrium behavior of the model exhibits two distinct regimes : a localized regime , where the genome population clusters about the master sequence ( giving rise to the term `` quasispecies '' ) , and a delocalized regime , where the genome population is distributed essentially uniformly over the entire sequence space .the transition between the two regimes is known as the _ error catastrophe _ , and can be shown to occur when , the probability of correctly replicating a genome , drops below .the error catastrophe is generally regarded as the central result of quasispecies theory , and it has been experimentally verified in both viruses and bacteria . indeed , the error catastrophe has been shown to be the basis for a number of anti - viral therapies .the structure of the quasispecies equations naturally lends itself to application to more complex systems than rna molecules .indeed , the model has been used to successfully model certain aspects of the immune response to viral infection . however , in their original form , the quasispecies equations fail to capture a number of important aspects of the evolutionary dynamics of real organisms .for example , it is implicitly assumed that each genome replicates _ conservatively _ , meaning that the original genome is preserved by the replication process .correct modeling of dna - based life must take into account the fact that dna replication is _ semiconservative _ .furthermore , the assumption of a genome - independent replication error probability is also too simple , since cells often have various repair mechanisms which may become inactivated due to mutations .in addition , eigen s model neglects the effects of recombination , transposition , insertions , deletions , and gene duplication , to name a few additional sources of variability .thus , a considerable amount of work remains to be done before a quantitative theory of evolutionary response is developed .nevertheless , some progress has been made .for example , semiconservative replication was recently incorporated into the quasispecies model .a simple model incorporating genetic repair was developed in .diploidy has been studied in , and finite size effects in .one area in which more realistic models need to be developed is in the nature of the fitness landscape .as mentioned previously , the most common landscape studied thus far has been the single fitness peak .however , genomes generally contain numerous genes ( even the simplest of bacteria , the mycoplasmas , have several hundred genes ) , which work in concert to confer viability to the organism .therefore , in this paper , we consider the behavior of the model for an arbitrary gene network .we assume conservative replication and a genome - independent error rate for simplicity , though we hypothesize at the end of the paper how our results change for the case of semiconservative replication .this paper is organized as follows : in the following section , we introduce our generalized -gene model defining the `` gene network . 
''we first give the quasispecies equations in terms of the population fractions of each of the various genomes .we proceed to the infinite sequence length equations , and then obtain a reduced system of equations which dictates the equilibrium solution of our model .we solve the model in section iii .for the sake of completeness , we include a simple example to illustrate how our solution method may be applied to specific systems .we go on in section iv to discuss the results and implications of our model , such as the relation to c.o .`` survival of the flattest '' , and also what our model says about the response of gene networks to mutagens .finally , we conclude in section v with a summary of our results and future research plans .consider a population of conservatively replicating , asexual organisms , whose genomes consist of genes . each genome may then be written as .let us assume , for simplicity , a `` single fitness peak '' landscape for each gene . that is , for each gene there is a `` master '' sequence for which the gene is functional , while for all the gene is nonfunctional .we assume that the fitness associated with a given genome is dictated by which genes in the genome are functional , and which are not .we let denote the fitness of organisms with genome such that for , while for ( we adopt the convention that when ) . the choice of the landscape is arbitrary , so that the activity of the various genes in the genome are generally correlated .thus , the genes may be regarded as defining a gene network .we assume that the fitnesses are all strictly positive . without loss of generality ( i.e. , by an appropriate rescaling of the time ) , we may assume that . the simplest quasispecies equations for this -gene model are obtained by assuming a genome - independent per base replication error probability .we assume that gene has a sequence length , and we define .then , where , putting everything together , we obtain the system of equations , define the _ hamming class _ .also , define . by the symmetry of the landscape, we may assume that depends only on the corresponding to , and hence we may look at the total population fraction in and study its dynamics .the conversion of the quasispecies equations in terms of to is accomplished by a generalization of the method given in .the result is , we now let the in such a way that the and remain fixed .we assume that the are all strictly positive ( allowing an to be leads to certain difficulties which we choose not to address in this paper ) . because the probability of correctly replicating a genome is simply ,fixing is equivalent to fixing the genome replication fidelity in the limit of infinite sequence length . in this limit , it is possible to show that , for each gene , the only terms in eq .( 8) which survive the limiting process are the terms .this is equivalent to the statement that , in the limit of infinite sequence length , backmutations may be neglected .we also obtain that , and the final result is , it should be noted that the neglect of backmutations is only valid when one can group population fractions into hamming classes . in our case , by the symmetry of the fitness landscape , the equilibrium solution only depends on the hamming class , and hence , to find the equilibria , it is perfectly valid to `` pre - symmetrize '' the population distribution and study the resulting dynamics . 
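as a notational aid for the sketches that follow , the gene - network landscape described above can be encoded by assigning a growth rate to each set of knocked - out genes . the number of genes , the gene lengths and the fitness values below are arbitrary illustrative choices , not the paper's .

```python
# A small helper describing an N-gene landscape in terms of which genes are
# knocked out. Gene lengths, the number of genes, and the fitness values are
# illustrative assumptions, not values from the paper.
from itertools import combinations

N_GENES = 2
gene_lengths = [100, 100]                      # L_1, ..., L_N (illustrative)
L_total = sum(gene_lengths)
alpha = [l / L_total for l in gene_lengths]    # fraction of the genome per gene

# fitness kappa_S for every "node" S = frozenset of knocked-out gene indices;
# the fully functional genome is given the largest growth rate
kappa = {
    frozenset(): 10.0,        # both genes functional
    frozenset({0}): 3.0,      # gene 0 knocked out
    frozenset({1}): 2.0,      # gene 1 knocked out
    frozenset({0, 1}): 1.0,   # both knocked out (still viable, by assumption)
}

nodes = [frozenset(c) for n in range(N_GENES + 1) for c in combinations(range(N_GENES), n)]
for S in nodes:
    print(sorted(S), kappa[S])
```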
thus , when studying dynamics , it is generally not valid to neglect backmutations .for example , consider a single fitness peak landscape , and suppose that a population of organisms is at its equilibrium , clustered about the fitness peak . if the organisms are then mutated , so that they are shifted away from the fitness peak , then eventually they will backmutate and reequilibrate on the fitness peak ( this situation has been observed with prokaryotes ) .if we imagine that the mutation shifts the organism from the master genome to some other genome , then it is clear that the landscape is not symmetric about , and furthermore that the population distribution is not symmetric about .thus , eq . ( 11 ) does not apply . to correctly model the reequilibration dynamics, it is necessary to consider the finite sequence length equations , and explicitly incorporate backmutations . because of the neglect of backmutations , eq . (11 ) may in principle be solved recursively to obtain the equilibrium distribution of the at any , assuming we know the equilibrium mean fitness , denoted .the problem , of course , is that needs to be computed .this may be done as follows : given any collection of indices , define via , where , , and so forth .thus , is simply the total fraction of the population in which the genes of indices are faulty , while the remaining genes are given by their corresponding master sequences .the dynamics of the is derived in appendix a. the result is given by , we can provide an intuitive explanation for this expression : because backmutations may be neglected in the limit of infinite sequence length , it follows that , once a gene is disabled , it remains disabled .therefore , given a set of indices , mutational flow can only occur from to for which ( in this paper , if , then is a proper subset of . if , then either is a proper subset of or ) .similarly , can only receive mutational contributions from for which .for such a , the probability of mutation to may be computed as follows : because the genes corresponding to the indices remain faulty , the neglect of backmutations means that it does not matter whether these genes are replicated correctly or not .all genes with indices in must remain equal to the corresponding master sequences after mutation .the probability that gene replicates correctly is given by , so the probability that all genes with indices in replicate correctly is .the genes which must be replicated incorrectly are those with indices in .since each such gene replicates incorrectly with probability , it follows that the probability of replicating all genes in incorrectly is . putting everything together, we obtain a mutational flow from to of .summing over all possible gives us the expression in eq .( 13 ) .note that , so we need to solve eq .( 13 ) in order to obtain the equilibrium distribution of the model .in this section , we proceed to solve the reduced system of equations given by eq .since this provides us with and , it follows that we can recursively solve for the equilibrium values of all . in vector notation ,( 13 ) may be expressed in the form , where is the vector of all , is the vector of all , and is the matrix of mutation rate constants . 
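the node - level flow just described can be transcribed directly into code . the quantitative form used below is this author's reading of the verbal argument ( the symbols themselves are not reproduced in the text ) : in the infinite sequence length limit , a gene occupying a fraction alpha_j of the genome is assumed to replicate without error with probability exp(-alpha_j * mu ) , so a parent on node u feeds a node s containing u at the rate stated in the comment . for self - containment , the small two - gene landscape of the previous sketch is re - defined inline .

```python
# A sketch of the reduced node-level dynamics. The mutational flow is assumed
# from the verbal description above: a parent on node U feeds node S (U a
# subset of S) at rate
#   kappa_U * z_U * exp(-(1 - sum_{i in S} alpha_i) * mu) * prod_{j in S\U} (1 - exp(-alpha_j * mu)),
# where alpha_j is the length fraction of gene j. All parameter values are illustrative.
import numpy as np
from math import exp, prod
from itertools import combinations
from scipy.integrate import solve_ivp

N_GENES = 2
alpha = [0.5, 0.5]
kappa = {frozenset(): 10.0, frozenset({0}): 3.0,
         frozenset({1}): 2.0, frozenset({0, 1}): 1.0}
nodes = [frozenset(c) for n in range(N_GENES + 1) for c in combinations(range(N_GENES), n)]

def flow_coeff(U, S, mu):
    functional = [j for j in range(N_GENES) if j not in S]
    p_keep = exp(-sum(alpha[j] for j in functional) * mu)      # keep functional genes intact
    p_knock = prod(1.0 - exp(-alpha[j] * mu) for j in S - U)   # knock out the new genes
    return p_keep * p_knock

def reduced_rhs(t, z, mu):
    zdict = dict(zip(nodes, z))
    mean_fitness = sum(kappa[S] * zdict[S] for S in nodes)
    dz = []
    for S in nodes:
        inflow = sum(kappa[U] * zdict[U] * flow_coeff(U, S, mu)
                     for U in nodes if U <= S)                  # no backmutations: U must be inside S
        dz.append(inflow - mean_fitness * zdict[S])
    return dz

mu = 1.0
z0 = [1.0 if S == frozenset() else 0.0 for S in nodes]          # start on the master genome
sol = solve_ivp(reduced_rhs, (0.0, 200.0), z0, args=(mu,))
print(dict(zip(nodes, np.round(sol.y[:, -1], 4))))
```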
because of the neglect of backmutations in the limit of infinite sequence length , different regions of the genome space become mutationally decoupled , so that the largest eigenvalue of the mutation matrix will in general be degenerate .thus , the equilibrium of the reduced system of equations is not unique .however , for any initial condition , the system will evolve to an equilibrium , though of course different initial conditions will yield different equilibrium results . in this subsection, we define a variety of constructs which we will need to characterize the equilibrium behavior of our model .we begin with the definition of a _ node _ : we define a _ level n node _ to refer to any collection of `` knocked out '' genes with indices .the reason for this terminology is simple .we may imagine the set of all nodes to be connected via mutations .because of the neglect of backmutations , it follows that a node is accessible from a node via mutations if and only if . the result is that we can generate a directed graph of mutational flows between nodes , an example of which is illustrated in figure 1 . given some node ,define .therefore , may be regarded as the subgraph of all nodes which are mutationally accessible from .an example of such a subgraph is illustrated in figure 2 .let denote any collection of nodes .then we may define .furthermore , define .thus , is the set of all nodes in such that no node in is contained within the mutational subgraph of any other node in .figure 3 gives an example showing the construction of from .given some node , define .we then define . finally , given some , define . with these definitions in hand, we are now ready to obtain the structure of the equilibrium solution at a given .we claim that . we prove this in two steps .first of all , we claim that for some node . clearly , because , it follows that at least one of the at equilibrium .let be a node of minimal such that . then it should be clear that , at equilibrium , we have , which , since , may be solved to give .so now suppose that .then .such an equilibrium can never be observed because it is unstable . to see this , let denote a node for which .then from eq .( 13 ) we have , at equilibrium , that , and so .clearly , however , any perturbation on will push away from its equilibrium value .this equilibrium is therefore unstable , and hence , unobservable .note that since , it follows that the mean equilibrium fitness is a continuous function of . to find the equilibrium solution of the reduced system of equations , we first need to determine which at equilibrium .to this end , we begin with the claim that , for , unless . for suppose thereexists such that at equilibrium .then out of the set of all nodes which satisfy the above two properties , we may choose to be of minimal level .we claim that , for any , we have that , for otherwise it is clear that .therefore , by the minimality of the level of , it follows that whenever is a proper subset of .but then the equilibrium equation for gives , and so .therefore , .however , by assumption , , which means that contains nodes in which are distinct from . denote one of these nodes by .then at equilibrium we have , from eq .( 13 ) , that , which is clearly a contradiction .this establishes our claim .we now argue that our equilibrium solution may be found if we know for .we claim that for any , we may write , where the , and for a given is strictly positive if and only if . the above expression then holds for all , since we simply take for . 
we can prove the above formula via induction on the level of the nodes in . in doing so, we will essentially develop an algorithm for constructing the .so , let us start with , the minimal level nodes .then clearly , so that , hence the formula is correct for .so now suppose that , for some , the formula is correct for all such that . then for a level node in , denoted by ,we have , at equilibrium , that , now , if , then .otherwise , , so the equilibrium equation may be solved to give , note that . furthermore , if , then no proper subset of is in .therefore , , so .conversely , if , then since , it follows that .therefore , the sum in eq .( 20 ) is nonempty , hence , since the appearing in the sum are all strictly positive , it follows that .this implies that is strictly positive if and only if , which completes the induction step , and proves the claim . for each , we can define , and then define and . if , for each we also define , that is , the vector of all , and if we define , then we obtain , where .note that the form a linearly independent set of vectors .therefore , if contains more than one node , then the equilibrium solution of the reduced system of equations is not unique , but rather is defined by the parallelipiped .as mentioned earlier , the degeneracy in the equilibrium behavior follows from the neglect of backmutations in the limit of infinite sequence length .the various nodes in become mutationally decoupled in this limit , which can cause the largest eigenvalue of the mutation matrix to be degenerate . however , for _ finite _ sequence lengths , the quasispecies dynamics will always converge to a unique solution . in particular , if we start with the initial condition , then for finite sequence lengths we will converge to the unique equilibrium solution . because all nodes are mutationally connected in the infinite sequence length limit with this initial condition , we make the assumption that the way to find the infinite sequence length equilibrium which is the limit of the finite sequence length equilibria is to find the infinite sequence length equilibrium starting from the initial condition .this allows us to break the eigenstate degeneracy in a canonical manner . in the appendices , we describe a fixed - point iteration approach for finding the equilibrium solution of the model . within this algorithm, we also use the initial condition as the analogous approach to the one above for finding the infinite sequence length equilibrium which is the limit of the finite sequence length equilibria . finally ,the treatment thus far has been finding the equilibrium solution of the reduced system of equations for . the equilibrium solution for obtained by taking the limit of the solutions , so that . from the previous developmentit is clear that the nodes in may be regarded as `` source '' nodes which dictate the solution . to understand how the solution changes with , we therefore need to determine how depends on .we claim the following : that there exist a finite number of , which we denote by , where , for which contains distinct elements . 
in any interval , is constant , and may therefore be denoted by .the are all disjoint , and .we begin proving this claim by introducing one more definition .let denote the set of all sets of nodes , such that a collection of nodes is a member of if and only if contains distinct elements .note that since there are nodes , there are sets of nodes , hence consists of a finite number of elements .given some , we claim that for at most one . to show this , suppose that there exist for which .choose any two nodes , in , and note that , and similarly for .however , and implies that , so that and hence .therefore , and , so does not contain distinct elements . because this contradicts our assumption about , it follows that for at most one .so , since contains a finite number of elements , it follows that there are a finite number of for which satisfies the property that contains distinct elements .we can denote these by , where we assume that .note that if a collection of nodes has the property that , then must be a collection in .this is easy to see : contains some for which there exists a distinct where .therefore , which proves our contention .we now prove that is some constant , which we denote by , over .given some , let ( stands for `` supremum '' , which is the least upper bound of a set of real numbers . if is a set of real numbers with an upper bound , then exists , and satisfies the following properties : ( 1 ) is an upper bound for . ( 2 ) if is any upper bound of , then . (2 ) if , then there exists at least one element of which exceeds . ) .clearly , .we claim that . to show this , note first of all that for all , and that for any , there exists such that .for given any , we have , by definition of , that there exists some such that for all .in particular , this implies that .furthermore , if there exists for which for all , then for all , contradicting the definition of .now , suppose . then we can write and for all . then since , it follows by continuity that for in some neighborhood . butthis implies that over . since over , we obtain that over , contradicting the definition of .we have just shown that .since over , we must have that . using a similar argument with , we can show that over , and so is constant on ( stands for `` infimum '' , and is defined as the greatest lower bound of a set of real numbers .it satisfies properties analogous to those of ) .suppose for two with , we have and are not disjoint .then they share at least one node , and so , by the nature of the two sets , we must have that .define to be for any node in , , and to be .now , contains some node such that for in .but if for we have that and , then and . since is monotone decreasing or increasing , it follows that on , or equivalently , .therefore , .the are thus all disjoint , as claimed .finally , since is continuous , we have that .if , then this gives .similarly , considering gives that for .therefore , , so , as claimed .the various may therefore be regarded as defining different `` phases '' in the equilibrium behavior of the model .physically , each `` phase '' is defined by a set of `` source nodes , '' which dictate which genes in the genome are knocked out , and which are not .the transition from to corresponds to certain genes in the genome becoming knocked out , and perhaps other genes becoming viable again .this transition can happen more than once , and so we refer to the series of transitions as an `` error cascade . 
'' because , for sufficiently large , for any .therefore , for sufficiently large , .since is constant on , it follows that on .thus , the final transition from to corresponds to delocalization over the entire genome space , which is simply the error catastrophe .once we have determined , we can in principle obtain the population fractions in the various hamming classes .the problem is that , if , then for any _ finite _ values of , we get that . to show this ,suppose we can find such that at equilibrium . of the for which , choose a set of indices such that is as small as possible .note that if , with , then .now , let the nonzero be denoted by .then , and we have , from eq . ( 11 ) , that , at equilibrium , which gives .but , . therefore , , and so , hence .but then . this proves our claim .if , then the above claim does not present us with any problem .we can simply recursively solve eq .( 11 ) at equilibrium for all the .but once any delocalization occurs , it is impossible to solve for the equilibrium distribution in terms of the hamming classes .however , we can recursively obtain the distribution of another class of population fractions , as follows : given some collection of indices , another collection of indices , and a collection of hamming distances , we define and as , it is possible to show that , and hence that , we may then derive the expression for . since the derivation uses techniques similar to those used in appendices a and b , we simply state the final result , which is , where , where are the indices of the nonzero hamming distances in . we claim that , at equilibrium , only if for some for which . for if , let be a node of minimal level for which there exists such that .note then that for any proper subset , we must have that .this implies that , at equilibrium , among all for which , there exists an such that is minimal. then we obtain , which gives .now , let denote the indices of the nonzero hamming distances in . then .but since , we get , so .therefore , so since , we have .the may be obtained recursively from eq .( 27 ) , starting with the values of for .the idea is that , starting with the values of for , we may compute recursively .we then proceed down the levels , computing the using the values of and for .note then that instead of computing the , which will be as soon as any delocalization occurs , we first sum over a set of gene indices containing the delocalized genes as a subset , and only compute the population distribution for finite hamming distances of the localized genes . in this subsection, we compute various localization lengths associated with the population distribution .specifically , given a node , and some , we define two localization lengths , and , as follows : note that , and so , in analogy with and , we have that , we also define the localization length by , note that , and so is finite if and only if all the are finite .we can compute at equilibrium by finding the time derivative and setting it to . in appendix bwe show that , therefore , at equilibrium , we get , we can characterize the behavior of the .first of all , we claim that if and only if .secondly , we claim that if and only if with for some . to show this , note first of all that , from physical considerations , if .if , then , and so , since , it follows that .this establishes the first part of our claim .so now suppose that , with for some .then , and so , which of course implies that . 
to prove the converse ,let us suppose that .let us choose to be the minimal level subset for which .then if , it is clear from the expression for that for some , with . but this contradicts the minimality of , hence , so since , it follows that .this proves the converse , which establishes the second part of our claim .we now illustrate the theory developed above using a simple two - gene `` network '' as an example .we assume a genome containing two identical genes , so that , and we choose the following growth parameters : , , and . with these parameters ,the system exhibits two localization to delocalization transitions .first , for we have . for have .the error catastrophe occurs at .we determined the equilibrium behavior of the model by solving the finite sequence length equations for and .the details may be found in appendix c. figure 4 shows a plot of from the simulation results and from our theory .figure 5 shows plots of , , , and from the simulation results and from theory .the first point to note about the solution of the quasispecies equations for a gene network is that , unlike the single gene model , which exhibits a single `` error catastrophe , '' the multiple gene model exhibits a series of localization to delocalization transitions which we term an `` error cascade . ''the reason for this is that as the mutation rate is increased , the selective advantage for maintaining functional copies of certain genes in the genome is no longer sufficiently strong to localize the population distribution about the corresponding master sequences , and delocalization occurs in the corresponding sequence spaces .the more a given gene or set of genes contributes to the fitness of an organism , the larger will have to be to induce delocalization in the corresponding sequence spaces .eventually , by making sufficiently large , the selective advantage for maintaining any functional genes in the genome will disappear , and the result is complete delocalization over all sequence spaces , corresponding to the error catastrophe .the prediction of an error cascade suggests an approach for determining the selective advantage of maintaining certain genes in a genome .currently , the standard method for determining whether a gene is `` essential '' is by knocking it out , and then seeing if the organism survives . by knocking out each of the genes, one can construct a `` deletion set '' for a given organism , consisting of the minimal set of genes necessary for the organism to survive . while knowledge of the deletion set of an organism is important, it does not explain why the organism should maintain functional copies of other , `` nonessential '' genes .one possibility is that these `` nonessential '' do confer a fitness advantage to the organism , however , the time scale on which organisms are observed to grow during knockout experiments is simply too short to resolve these fitness differences .thus , an alternative approach to the deletion set method is to have organisms grow at various mutagen concentrations . by determining which genes get knocked out at the corresponding mutation rates , it is possible to determine the relative importance of various genes to the fitness of an organism . such an experiment is likely to be difficult to perform . 
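a compact way to reproduce the qualitative content of this example is to sweep the mutation rate and record where the equilibrium `` source '' nodes change . the closed form used below ( the equilibrium mean fitness taken as the largest node growth rate discounted by the probability of correctly replicating the still - functional genes ) is this author's reading of the equilibrium characterization sketched above , and the growth parameters are illustrative stand - ins , since the paper's two - gene values are not reproduced in the text .

```python
# A sketch of the "error cascade" for a two-gene network. It assumes the
# equilibrium characterization discussed above: the equilibrium mean fitness is
# the largest value, over all nodes S, of
#   kappa_S * exp(-(1 - sum_{i in S} alpha_i) * mu),
# and the maximizing node(s) are the "source" nodes of the current phase.
import numpy as np
from itertools import combinations

alpha = [0.5, 0.5]
kappa = {frozenset(): 10.0, frozenset({0}): 4.0,
         frozenset({1}): 1.5, frozenset({0, 1}): 1.0}
nodes = [frozenset(c) for n in range(3) for c in combinations(range(2), n)]

def mean_fitness(mu):
    vals = {S: kappa[S] * np.exp(-(1.0 - sum(alpha[i] for i in S)) * mu) for S in nodes}
    best = max(vals.values())
    src = [S for S in nodes if np.isclose(vals[S], best)]
    return best, src

prev = None
for mu in np.linspace(0.0, 8.0, 1601):
    kbar, src = mean_fitness(mu)
    key = tuple(sorted(tuple(sorted(S)) for S in src))
    if key != prev:
        # each printed line marks the start of a new "phase" of the cascade
        print(f"mu = {mu:5.3f}  mean fitness = {kbar:6.3f}  source nodes = {key}")
        prev = key
```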
nevertheless ,if successful , it would provide a considerably more powerful approach than the deletion set method for determining fitness advantages of various genes .the results in this paper also shed light on a phenomenon which c.o .wilke termed the `` survival of the flattest '' .briefly , what wilke ( and others ) showed was that at low mutation rates , a population will localize in a region of sequence space which has high fitness . at higher mutation rates, a population will relocalize in a region of sequence space which may not have maximal fitness , but is mutationally robust .the error cascade is exactly a relocalization from a region of high fitness but low mutational support to a region of lower fitness but higher mutational support .the reason for this is that the fitness landscape becomes progressively flatter as more and more genes are knocked out , because the more genes are knocked out , the smaller the fraction of the genome which is involved in determining the fitness of the organism .this implies that an error cascade is necessary for the `` survival of the flattest '' principle to hold .robustness in this sense is therefore conferred by modularity in the genome .that is , robustness does not arise because an individual gene may remain functional after several point - mutations , but rather arises from the fact that the organism can remain viable even if entire regions ( e.g. `` genes '' ) of the genome are knocked out ( it should be noted that the idea that mutational robustness is conferred by modularity in the genome has been discussed before ) . to see this point more clearly , one can consider a `` robust '' landscape in which the genome consists of a single gene .however , unlike the single - fitness peak landscape , the organism is viable out to some hamming class .therefore , if , then if , otherwise , where . using techniques similar to the ones used in this paper ( neglect of backmutations and stability criterion for equilibria ) , it is possible to show that the equilibrium mean fitness is exactly , unchanged from the single - fitness peak results .thus , in contrast to robustness studies which consider finite sequence lengths and do not have a well - defined viability cutoff , in the limit of infinite sequence length there is no selective advantage in having a genome which can sustain a finite number of point mutations and remain viable .this paper developed an extension of the quasispecies model for arbitrary gene networks .we considered the case of conservative replication and a genome - independent replication error probability .we showed that , instead of a single error catastrophe , the model exhibits a series of localization to delocalization transitions , termed an `` error cascade . 
'' while the numerical example we used in this paper was relatively simple ( in order to clearly illustrate the theory developed ) , it is possible to have nontrivial delocalization behavior , depending on the choice of the landscape .for example , it is possible that certain genes which are knocked out in one phase can become reactivated again in the following phase .that is , instead of a delocalization , one can have a _ re - localization _ to source nodes not contained in the mutational subgraphs of the source nodes in the previous phase .this implies that the can exhibit discontinuous behavior .the types of equilibrium behaviors possible is something which will be explored in future research .future research also will involve incorporating more details to the multiple - gene model introduced in this paper .for example , one extension is to move away from the `` single - fitness peak '' assumption for each gene .another natural extension is to study the equilibrium behavior of the multiple - gene quasispecies equations for the case of semiconservative replication .while this is a subject for future work , we hypothesize that many of the semiconservative results would be essentially unchanged from the conservative ones .thus , we claim that at equilibrium , we would still have that , only this time is computed by replacing with .we also claim that we would still have that define the `` source '' nodes of the equilibrium solution .indeed , we hypothesize that , for semiconservative replication , eq . ( 13 ) becomes , finally , another subject for future work is the incorporation of repair into our network model . in was assumed that only one gene controlled repair . by assuming that several genes control repair , then , in analogy with fitness , we hypothesize that instead of a single `` repair catastrophe '' , we obtain a series of localization to delocalization transitions over the repair gene sequence spaces , a `` repair cascade . '' this research was supported by the national institutes of health .the authors would like to thank eric j. deeds for helpful discussions .in this appendix , we derive eq . ( 13 ) from eq . ( 11 ) . to this end ,define , we then have that , we now claim that , this can be proved by induction . for this statement is clearly true , since .suppose then , that for some , the statement is true for all .then we have , and so , now , for each set appearing in the sum , a given subset occurs only once .the -element sets which contain as a subset must be of the form , where . therefore , there are distinct -element sets which contain . rearranging the sum, we obtain , this completes the induction step , and proves the claim .we are almost ready to derive the expression for . 
before doing so, we state the following identity , which we will need in our calculation : we now have , which is exactly eq .in this section we derive the expression for .we have , we therefore have that , \nonumber \\ & = & \sum_{l = 0}^{n } \sum_{\{j_1 ' , \dots , j_l'\ } \subseteq \{i_1 , \dots , i_n\ } } ( -1)^{n - l } e^{-(1 - \alpha_{j_1 ' } - \dots - \alpha_{j_l } ' - \alpha_i ) \mu } \times \nonumber \\ & & ( \kappa_{\{j_1 ' , \dots , j_l ' , i\ } } \tilde{\langle l_i\rangle}_{\{j_1 ' , \dots , j_l'\ } } + \alpha_i \mu \kappa_{\{j_1 ' , \dots , j_l'\ } } \tilde{z}_{\{j_1 ' , \dots , j_l'\ } } + \alpha_i \mu \kappa_{\{j_1 ' , \dots , j_l ' , i\ } } \tilde{z}_{\{j_1 ' , \dots , j_l ' , i\ } } ) \times \nonumber \\ & & \sum_{k - l = 0}^{n - l } ( -1)^{k - l } \sum_{\{j_1 , \dots , j_{k - l}\ } \subseteq \{i_1 , \dots , i_n\}/\{j_1 ' , \dots , j_l'\ } } e^{\alpha_{j_1 } \mu } \cdot \dots \cdot e^{\alpha_{j_{k - l } } \mu } \nonumber \\ & & - \bar{\kappa}(t ) \tilde{\langle l_i\rangle}_{\{i_1 , \dots , i_n\ } } \nonumber \\ & = & e^{-(1 - \alpha_{i_1 } - \dots - \alpha_{i_n } - \alpha_i)\mu } \sum_{k = 0}^{n } \sum_{\{j_1 , \dots , j_k\ } \subseteq \{i_1 , \dots , i_n\ } } \times \nonumber \\ & & ( \kappa_{\{j_1 , \dots , j_k , i\ } } \tilde{\langle l_i\rangle}_{\{j_1 , \dots , j_k\ } } + \alpha_i \mu \kappa_{\{j_1 , \dots , j_k\ } } \tilde{z}_{\{j_1 , \dots , j_k\ } } + \alpha_i \mu \kappa_{\{j_1 , \dots , j_k , i\ } } \tilde{z}_{\{j_1 , \dots , j_k , i\ } } ) \times \nonumber \\ & & \prod_{j \in \{i_1 , \dots , i_n\}/\{j_1 , \dots , j_k\ } } ( 1 - e^{-\alpha_j \mu } ) \nonumber \\ & & - \bar{\kappa}(t ) \tilde{\langle l_i\rangle}_{\{i_1 , \dots , i_n\}}\end{aligned}\ ] ] which is exactly the expression in eq .the finite sequence length equations , given by eq . ( 11 ) , may be expressed in vector form , at equilibrium , we therefore have that , the equilibrium solution may be found using fixed - point iteration , via the equation , the iterations are stopped when the stop changing .this is determined by introducing a cutoff parameter , and stop iterating when the fractional change of each of the components after iterations is smaller than . is chosen to be sufficiently large so that , on average , each base mutates at least once after iterations .thus , we choose . what this method does is account for the fact that equilibration takes longer for smaller values of .this means that the smaller the value of , the more times it is necessary to iterate before comparing the changes in the . for our two - gene simulation, we took , and .we chose this initial condition to show that , even though backmutations may become small at large sequence lengths , they still strongly affect the equilibrium solution . by iterating a sufficient number of times ,the cumulative effect of the backmutations becomes sufficiently large to lead to a unique equilibrium solution , independent of the initial condition .
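one simple way to implement a fixed - point scheme of the kind described above , for sequence lengths small enough that the whole genome space can be enumerated , is a normalized power iteration on the ( mutation times growth ) matrix . the sketch below does this for a toy two - gene genome ; the sequence lengths , error rate , fitness values and stopping rule are illustrative assumptions and are much cruder than the cutoff criterion used in the paper's own simulation .

```python
# Finite-sequence-length equilibrium by normalized power iteration. The genome
# is kept tiny so the full genome space can be enumerated; the paper's two-gene
# simulation uses much longer sequences and a stopping rule tied to the
# per-base mutation rate, which is not reproduced here.
import itertools
import numpy as np

L1 = L2 = 4
L = L1 + L2
eps = 0.05
genomes = np.array(list(itertools.product([0, 1], repeat=L)))

def fitness(g):
    gene0_ok = not g[:L1].any()        # gene 0 equals its master (all-zero) sequence
    gene1_ok = not g[L1:].any()
    table = {(True, True): 10.0, (False, True): 4.0,
             (True, False): 1.5, (False, False): 1.0}
    return table[(gene0_ok, gene1_ok)]

k = np.array([fitness(g) for g in genomes])
H = (genomes[:, None, :] != genomes[None, :, :]).sum(axis=2)      # Hamming distances
A = (eps ** H) * ((1 - eps) ** (L - H)) * k                       # A[i, j] = m(i <- j) * k_j

x = np.full(len(genomes), 1.0 / len(genomes))                     # uniform initial condition
for it in range(10000):
    y = A @ x
    y /= y.sum()
    if np.max(np.abs(y - x) / np.maximum(x, 1e-300)) < 1e-10:     # crude relative-change stop
        x = y
        break
    x = y
print("iterations:", it, " equilibrium mean fitness:", float(k @ x))
```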
in this paper , we study the equilibrium behavior of eigen s quasispecies equations for an arbitrary gene network . we consider a genome consisting of genes , so that each gene sequence may be written as . we assume a single fitness peak ( sfp ) model for each gene , so that gene has some `` master '' sequence for which it is functioning . the fitness landscape is then determined by which genes in the genome are functioning , and which are not . the equilibrium behavior of this model may be solved in the limit of infinite sequence length . the central result is that , instead of a single error catastrophe , the model exhibits a series of localization to delocalization transitions , which we term an `` error cascade . '' as the mutation rate is increased , the selective advantage for maintaining functional copies of certain genes in the network disappears , and the population distribution delocalizes over the corresponding sequence spaces . the network goes through a series of such transitions , as more and more genes become inactivated , until eventually delocalization occurs over the entire genome space , resulting in a final error catastrophe . this model provides a criterion for determining the conditions under which certain genes in a genome will lose functionality due to genetic drift . it also provides insight into the response of gene networks to mutagens . in particular , it suggests an approach for determining the relative importance of various genes to the fitness of an organism , in a more accurate manner than the standard `` deletion set '' method . the results in this paper also have implications for mutational robustness and what c.o . wilke termed `` survival of the flattest . ''
in the context of the predictive maintenance of the french railway switches ( or points ) which enable trains to be guided from one track to another at a railway junction , we are led to parameterize switch operation signals representing the electrical power consumed during a point operation ( see figure [ signal_intro ] ) . the final objective is to exploit these parameters for the identification of incipient faults . the method we propose to characterize signals is based on a regression model incorporating a discrete hidden process allowing abrupt or smooth switchings between various regression models . this approach has a connection with the switching regression model introduced by quandt and ramsey and is closely related to the mixture of experts ( me ) model introduced by jordan and jacobs through the use of a time - dependent logistic transition function . the me model , as discussed in , uses conditional mixture modeling where the model parameters are estimated by the expectation maximization ( em ) algorithm . other alternative approaches are based on hidden markov models in a regression context . a dedicated em algorithm including a multi - class iterative reweighted least - squares ( irls ) algorithm is proposed to estimate the model parameters . section 2 introduces the proposed model and section 3 describes the parameter estimation via the em algorithm . the fourth section is devoted to the experimental study using simulated and real data . we represent a signal by the random sequence of real observations , where is observed at time . this sample is assumed to be generated by the following regression model with a discrete hidden logistic process , where : in this model , is the -dimensional coefficient vector of a degree polynomial , is the time - dependent -dimensional covariate vector associated with the parameter , and the are independent random variables distributed according to a gaussian distribution with zero mean and variance . this section defines the probability distribution of the process that allows the switching from one regression model to another . the proposed hidden logistic process supposes that the variables , given the vector , are generated independently according to the multinomial distribution , where is the logistic transformation of a linear function of the time - dependent covariate , is the -dimensional coefficient vector associated with the covariate and . thus , given the vector , the distribution of can be written as : where if , i.e. when is generated by the regression model , and otherwise . the flexibility of transition allowed by the logistic transformation can be illustrated through simple examples with components . as shown in figure [ logistic_function_k=2_p=012 ] ( left ) , the dimension of controls the number of changes in the temporal variation of . more particularly , if the goal is to segment the signals into convex homogeneous parts , the dimension of must be set to . the quality of transitions and the change time point are controlled by the component values of the vector ( see figures [ logistic_function_k=2_p=012 ] ( middle ) and ( right ) ) .
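the role of the logistic transition function can be illustrated numerically . the sketch below assumes ( since the symbols are elided here ) that the proportion of component k at time t is a softmax over k of a polynomial of order q in t , which is consistent with the discussion of figure [ logistic_function_k=2_p=012 ] ; with k = 2 components and q = 1 the proportions cross exactly once , giving a segmentation into two contiguous intervals .

```python
# A minimal sketch of the time-dependent logistic proportions. The notation is
# assumed: pi_k(t; w) is a softmax over K components of a polynomial in t of
# order q; the weight values below are arbitrary illustrative choices.
import numpy as np

def logistic_proportions(t, W):
    """W has shape (K, q+1); returns an array of shape (len(t), K) whose rows sum to one."""
    t = np.asarray(t, dtype=float)
    q = W.shape[1] - 1
    V = np.vander(t, q + 1, increasing=True)    # columns 1, t, ..., t**q
    A = V @ W.T                                  # linear predictors
    A -= A.max(axis=1, keepdims=True)            # numerical stability
    E = np.exp(A)
    return E / E.sum(axis=1, keepdims=True)

t = np.linspace(0, 10, 200)
# K = 2 components; with q = 1 the proportions cross exactly once, so the
# signal is segmented into two convex (contiguous) intervals
W = np.array([[ 4.0, -1.0],      # component 1 favoured for small t
              [ 0.0,  0.0]])     # reference component
pi = logistic_proportions(t, W)
print("change point near t =", t[np.argmin(np.abs(pi[:, 0] - 0.5))])
```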
from the model given by equation ( [ eq.regression model ] ) , it can be proved that the random variable is distributed according to the normal mixture density where is the parameter vector to be estimated . the parameter is estimated by the maximum likelihood method . as in classic regression models , we assume that , given , the are independent . this also involves the independence of . the log - likelihood of is then written as : since the direct maximization of this likelihood is not straightforward , we use the expectation maximization ( em ) algorithm to perform the maximization . the proposed em algorithm starts from an initial parameter and alternates the two following steps until convergence : * e step ( expectation ) : * this step consists of computing the expectation of the complete log - likelihood , given the observations and the current value of the parameter ( being the current iteration ) : \[ \sum_{i=1}^{n}\sum_{k=1}^{K} t^{(m)}_{ik}\log \big ( \pi_{ik}(\mathbf{w})\,\mathcal{N}(x_{i};\boldsymbol{\beta}^{T}_{k}\boldsymbol{r}_{i},\sigma^2_k ) \big ) , \] where is the posterior probability that originates from the regression model . as shown in the expression of , this step simply requires the computation of . * m step ( maximization ) : * in this step , the value of the parameter is updated by computing the parameter maximizing the expectation with respect to . the maximization of can be performed by maximizing separately with respect to and with respect to . the maximization with respect to is a multinomial logistic regression problem weighted by the ; we use a multi - class iterative reweighted least - squares ( irls ) algorithm to solve it . maximizing with respect to consists of analytically solving a weighted least - squares problem . in addition to providing a signal parametrization , the proposed approach can be used to denoise and segment signals . the denoised signal can be approximated by the expectation where is the parameter vector obtained at convergence of the algorithm . on the other hand , a signal segmentation can also be deduced by computing the estimated label of : . this section is devoted to the evaluation of the proposed algorithm using simulated and real data sets . two evaluation criteria are used in the simulations : the misclassification rate between the simulated partition and the estimated partition , and the euclidean distance between the denoised simulated signal and the estimated denoised signal normalized by the sample size . the proposed approach is compared to the piecewise regression approach . each signal is generated according to the regression model with a hidden logistic process defined by eq . ( [ eq.regression model ] ) . the number of states of the hidden variable is fixed to and the order of regression is set to . the order of the logistic regression is fixed to , which guarantees a segmentation into convex intervals . we consider that all signals are observed over seconds .
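the em scheme outlined above can be sketched in a few dozen lines , and the toy usage example below also generates the kind of simulated signal with an abrupt regime change used in the experiments . the notation is assumed rather than taken verbatim from the text , and , for brevity , the update of the logistic parameters uses a few gradient - ascent steps on the weighted multinomial log - likelihood as a stand - in for the multi - class irls inner loop .

```python
# A compact numerical sketch of the EM algorithm described above. Symbols are
# assumed (they are elided in the text): x_i are observations at times t_i,
# r_i = (1, t_i, ..., t_i**p) is the polynomial covariate, beta_k and sigma2_k
# are the regression parameters, and pi_ik(w) are the logistic proportions.
# The w-update below is a gradient-ascent stand-in for the full IRLS iteration.
import numpy as np

def design(t, order):
    return np.vander(t, order + 1, increasing=True)

def softmax(a):
    a = a - a.max(axis=1, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=1, keepdims=True)

def em_hidden_logistic_regression(x, t, K=2, p=3, q=1, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    R = design(t, p)                       # n x (p+1) regression covariates
    V = design(t, q)                       # n x (q+1) logistic covariates
    beta = rng.normal(size=(K, p + 1))
    sigma2 = np.full(K, np.var(x))
    W = rng.normal(scale=0.1, size=(K, q + 1))
    for _ in range(n_iter):
        # E step: posterior probabilities t_ik
        mu = R @ beta.T                    # n x K component means
        pi = softmax(V @ W.T)
        logdens = -0.5 * (np.log(2 * np.pi * sigma2) + (x[:, None] - mu) ** 2 / sigma2)
        logpost = np.log(pi) + logdens
        logpost -= logpost.max(axis=1, keepdims=True)
        post = np.exp(logpost)
        post /= post.sum(axis=1, keepdims=True)
        # M step for (beta_k, sigma2_k): weighted least squares, solved analytically
        for k in range(K):
            w_k = post[:, k]
            A = R * w_k[:, None]
            beta[k] = np.linalg.solve(R.T @ A + 1e-8 * np.eye(p + 1), A.T @ x)
            resid = x - R @ beta[k]
            sigma2[k] = (w_k @ resid ** 2) / w_k.sum()
        # M step for w: a few gradient-ascent steps (stand-in for IRLS)
        for _ in range(10):
            pi = softmax(V @ W.T)
            W += 0.05 * (post - pi).T @ V / n
    return beta, sigma2, W, post

# usage: a noisy signal with an abrupt regime change at t = 5
t = np.linspace(0, 10, 300)
x = np.where(t < 5, 1.0 + 0.2 * t, 4.0 - 0.3 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
beta, sigma2, W, post = em_hidden_logistic_regression(x, t, K=2, p=1, q=1)
denoised = (softmax(design(t, 1) @ W.T) * (design(t, 1) @ beta.T)).sum(axis=1)
labels = post.argmax(axis=1)               # hard segmentation of the signal
```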
for each size we generate 20 samples .the values of assessment criteria are averaged over the 20 samples .figure [ fig.error rates ] ( left ) shows the misclassification rate obtained by the two approaches in relation to the sample size .it can be observed that the proposed approach is more stable for a few number of observations .figure [ fig.error rates ] ( right ) shows the results obtained by the two approaches in terms of signal denoising .it can be observed that the proposed approach provides a more accurate denoising of the signal compared to the piecewise regression approach . for the proposed model, the optimal values of has also been estimated by computing the bayesian information criterion ( bic ) for varying from to and varying from to .the simulated model , corresponding to and , has been chosen with the maximum percentage of .this section presents the results obtained by the proposed model for signals of switch points operations .one situation corresponding to a signal with a critical defect is presented .the number of the regressive components is chosen in accordance with the number of the electromechanical phases of a switch points operation ( ) .the value of has been set to , what guarantees a segmentation into convex intervals , and the degree of the polynomial regression has been set to which is adapted to the different regimes in the signals .figure [ resultat_signal_aig_2 ] ( left ) shows the original signal and the denoised signal ( given by equation ( [ eq .signal expectation ] ) ) .figure [ resultat_signal_aig_2 ] ( middle ) shows the variation of the proportions over time .it can be observed that these probabilities are very closed to when the regressive model seems to be the most faithful to the original signal .the five regressive components involved in the signal are shown in figure [ resultat_signal_aig_2 ] ( right ) .in this paper a new approach for signals parametrization , in the context of the railway switch mechanism monitoring , has been proposed .this approach is based on a regression model incorporating a discrete hidden logistic process .the logistic probability function , used for the hidden variables , allows for smooth or abrupt switchings between polynomial regressive components over time .in addition to signals parametrization , an accurate denoising and segmentation of signals can be derived from the proposed model .b. krishnapuram , l. carin , m.a.t .figueiredo and a.j .hartemink , sparse multinomial logistic regression : fast algorithms and generalization bounds , _ ieee transactions on pattern analysis and machine intelligence , _27(6 ) : 957 - 968 , june 2005 .s. r. waterhouse , _ classification and regression using mixtures of experts , _ phd thesis , department of engineering , cambridge university , 1997 .v. e. mcgee and w. t. carleton , piecewise regression , _ journal of the american statistical association _ , 65 , 1109 - 1124 , 1970 .
a new approach for signal parametrization , which consists of a specific regression model incorporating a discrete hidden logistic process , is proposed . the model parameters are estimated by the maximum likelihood method performed by a dedicated expectation maximization ( em ) algorithm . the parameters of the hidden logistic process , in the inner loop of the em algorithm , are estimated using a multi - class iterative reweighted least - squares ( irls ) algorithm . an experimental study using simulated and real data reveals good performance of the proposed approach .
nowadays , the diffused innovation policies require frequent survival estimates based on necessarily small samples .that may happen when the reliability of technological products continuously improved must be monitored ; or when the efficacy of always - new chemotherapy must be promptly checked .in helping statisticians to choose a suitable survival model , careful consideration of the generative mechanisms of the involved random variable ( rv ) plays an important ( often neglected ) role . such consideration can supplement or even prevail over usual model selection procedures , when the observations are extremely few and , consequently , the information about the effective shape of the `` parent '' distribution ( i.e. the population distribution ) is very scarce . in this context, the paper provides the mathematical models of three typical generative mechanisms of the inverse weibull ( iw ) rv .so , the paper helps exploiting the iw model to give correct answers for some specific survival problems , found in biometry and reliability , for which it appears the natural interpretative stochastic model . doubtless , the iw rv is not widely known and so scarcely identified .the iw model is referred to by many different names like `` frechet - type '' ( johnson et al .1995 ) , `` complementary weibull '' ( drapella 1993 ) , `` reciprocal weibull '' ( lu and meeker 1993 ; mudholkar and kollia 1994 ) , and `` inverse weibull '' ( erto 1982 ; erto 1989 ; johnson et al . 1994 ; murthy et al .. an early study of the iw model is reported in the unprocurable paper ( erto 1989 ) .however , it seems to be no comprehensive reference in the literature that studies the iw as survival model . this paper tries to do that specifically exploring its peculiar probabilistic and statistical characteristics .the peculiar heavy right tail of probability density as well as the upside - down bathtub ( ubt ) shaped hazard function of the iw model has been really found in several applications ( nelson 1990 ; rausand and reinertsen 1996 ; gupta et al . 1997 ; gupta et al .1999 ; jiang et al . 2003 ) .also the inverse gamma , inverse gaussian , log - normal , log - logistic , and the birnbaum - saunders models show similarly shaped hazard rates ( glen 2011 ; klein and moeschberger 2003 ; lai and xie 2006 ) .however , a model incorrectly fitted to iw data may lead to very wrong critical prognoses , even despite its good fitting to the empirical distribution .in fact , especially when few observations are available , the empirical distribution contains scarce information about the shape of the far - right tail , which is the main and unusual feature of the iw distribution .so , the knowledge of primary generative mechanisms leading to the iw rv can help one not to miss its proper application in some real life peculiar circumstances , analytically shown in the following .obviously , the inverse of the iw data follows a weibull distribution .so the parameter estimates of the iw distribution can be easily obtained by applying to its reciprocal data the same standard procedures implemented in packages for the weibull model ( see murthy et al .the probability density function ( pdf ) of the iw rv with scale parameter and shape parameter is : it is skewed and unimodal for .the moment of the iw rv is and it exists if then the mean and the variance follows .the most distinctive applicative feature of the iw model is its heavy right tail . that is highlighted by the _ property n. 
1 _ : `` the pdf of the iw model is infinitesimal of lower order than the negative exponential as goes to infinity . '' in fact , the ratio of the iw pdf ( [ eq1 ] ) ( setting , for simplicity ) to the negative exponential function goes to infinity as goes to infinity . the cumulative distribution function ( cdf ) , the survival function ( sf ) and the hazard rate ( hr ) are easily derived from ( [ eq1 ] ) : the hr is infinitesimal as goes to infinity .it is unimodal and belongs to the ubt class ( see glaser 1980 ) with only one change point : _ property n. 2 _ : `` the hr of the iw model has a unique global maximum between the mode and the value . ''the condition of maximum for the iw hr does not lead to a closed - form solution .however , taking the derivative of the logarithm of the iw hr ( and appropriately arranging the terms ) the necessary condition for the maximum of the hr implies that : the auxiliary functions and corresponding to the first and second members of this equation , have a unique intersection point . in the first quadrant these two functions are both increasing up to their maximum point , whose abscissa is for both functions equal to and then they are both decreasing and infinitesimal to the same order as goes to infinity .moreover , it is possible to verify that is null as goes to 0 , while is null for the iw mode . because of the following inequalities : we derive that the intersection point of the two auxiliary functions , that is the maximum point of the hr , falls between the mode and . the mean residual life ( , also called the life expectancy of the fraction of items lived longer than is : being the lower incomplete gamma function . the following _property n. 3 _ stands : `` the function of the iw model is bathtub - shaped . ''this property can be deduced from the general results given in gupta and akman ( 1995 ) and is in agreement with the properties of the hr .so , the iw model belongs to the class of distribution for which the reciprocity of the shape of the hr and functions holds .specifically , the decreases from the initial value ( as goes to 0 ) to its minimum at the change point and then increases infinitely as goes to infinity .being ( e.g. , see lai and xie , 2006 , chap .4 ) , the change point must solve the equation necessarily . in practice, this peculiar shape can be found , for example , in some biometry problems when the longer the patient s survival time from his tumor ablation the better his prognosis .if are i.i.d .random variables , the limit distribution for their maximum is the iw distribution ( [ eq2 ] ) ( johnson _ et al . 
_ 1995 ) .therefore , for instance , when a disease or failure is related to the maximum value of a critical non - negative variable , this generative mechanism can be considered .this generative mechanism differs from the following three new ones , since for these the time variable does play an explicit role in their modeling .let be a system deterioration index that , as such , is a strictly increasing function of the run time .at every intercept with the vertical line passing through , suppose that the uncertainty about can be reasonably fitted by a weibull pdf , with shape parameter constant and scale parameter , function of , modeled by a generic power law : if a threshold ( maximum , positive ) value allowed for exists , the system has the iw sf .in fact , consider a weibull random variable with pdf : , \\y\ge 0,\quad v,\;u>0 \\ \end{array}\ ] ] where , the shape parameter , is constant , and , the scale parameter , is the drift function ( [ eq8 ] ) . if is the threshold ( maximum , positive ) value for , then : } .\ ] ] substituting back into the previous relationship , we obtain : .\ ] ] on putting and the iw sf follows .this mechanism is found in many technological corrosion phenomena that give rise to failures only when they reach a threshold deepness the mechanism is found also in many biologic degenerative phenomena ( i.e. , gradual deterioration of organs and cells ) where the loss of function appears when the deterioration deep reaches a fixed threshold value . besides , this mechanism is found when tumors spread potential metastases with a dissemination probability proportional to their size .hence , a tumor size greater than a given threshold value causes a rate of occurrence of metastases which is really first increasing and then decreasing ( see le cam and neyman 1982 , p. 253 ) like the iw one ( [ eq3 ] ) .if the stress ( in the broad sense ) is a rv with distribution that can be reasonably fitted by a weibull model and the strength that opposes is a decreasing function of time that can be modeled by a generic power law : the resulting sf is the iw one .in fact , if the stress is a weibull random variable : and the strength that opposes follows the decreasing function of time ( [ eq12 ] ) : . \\ \end{array}\ ] ] substituting back into the previous relationship , we obtain : \ ] ] then , renaming and the iw sf follows .this mechanism is common for many mechanical components ( see , for example , bury 1975 , p. 593 ; shigley 1977 , p. 184 ) as well as it is found in patients with a decreasing vital strength following the ( [ eq12 ] ) ( e.g. , because they are subjected to intensive and prolonged chemotherapy ) and subjected to a relapse having a random virulence or gravity in these cases , an hr first quickly increasing and then slowly decreasing , is sometimes surprisingly observed ( see carter et al .1983 , p. 79 ) .suppose that a disease ( or failure ) is latent and the physiological defensive attempts averse to it occur randomly according to a poisson model .if the probability of one successful defensive attempt depends on the incubation time ( but not on the number of previously occurred defensive actions ) according to a generic power law decreasing function : the iw cdf follows . 
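the first of the three mechanisms above ( the deterioration model with a power - law drift and a failure threshold ) is easy to check by simulation . in the sketch below , each unit's degradation path is represented as a scaled unit - weibull draw , so that at every fixed time the deterioration index has the weibull distribution with constant shape and power - law scale assumed in the derivation ; the parameter values and this path representation are illustrative assumptions used only to compare the resulting failure times with the inverse weibull survival function .

```python
# Simulation of the deterioration-with-threshold mechanism. Parameter values
# and the representation of each path as a scaled unit-Weibull draw are
# illustrative assumptions, used only to check the failure times against the
# inverse Weibull survival function implied by the derivation.
import numpy as np

rng = np.random.default_rng(0)
v, a, b = 2.0, 0.3, 1.5           # Weibull shape, drift coefficient, drift exponent
w_max = 5.0                        # failure threshold on the deterioration index

# degradation path of unit i: W_i(t) = a * t**b * Y_i, with Y_i ~ Weibull(v, scale 1),
# so that at every fixed t the index is Weibull with shape v and scale a * t**b
Y = rng.weibull(v, size=200_000)
T = (w_max / (a * Y)) ** (1.0 / b)          # first crossing of the threshold

# implied inverse Weibull parameters: shape beta = b*v, scale lam = (w_max/a)**(1/b)
beta, lam = b * v, (w_max / a) ** (1.0 / b)
def iw_sf(t):
    return 1.0 - np.exp(-((lam / t) ** beta))

for t in (2.0, 5.0, 10.0, 20.0):
    print(f"t={t:5.1f}  empirical sf={np.mean(T > t):.4f}  iw sf={iw_sf(t):.4f}")
```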
in fact , suppose that the random variable describing the physiological defensive attempts against a latent disease ( or failure ) , occurs according to a poisson law : let be the probability of one successful defensive attempt , which depends on the incubation time ( but not on the number of previously occurred defensive actions ) according to the function ( [ eq16 ] ) .consequently , the probability of manifest disease ( or failure ) is : then , on putting and the iw cdf follows .this mechanism is found in biometry when the immune system works randomly against antigens , and its effectiveness decreases as the disease expands ( see le cam and neyman 1982 , p. 15 ) . in reliability, this mechanism is found when a technological system is randomly ( i.e. , without any definite plan ) maintained : the smaller the time from the beginning of the failure process ( up to the maintenance action ) the greater the maintenance efficacy .consider the following 50 pseudo random ( ordered ) data generated from a close - to - standard " parent cdf ( [ eq2 ] ) with and ( we can not put since , in general , the moment of the iw pdf exists if ) : 0.2776 , 0.2931 , 0.3384 , 0.4321 , 0.4739 , 0.4771 , 0.5331 , 0.5424 , 0.5482 , 0.5571 , 0.6139 , 0.6451 , 0.6523 , 0.6587 , 0.7166 , 0.7838 , 0.8466 , 0.8892 , 0.9278 , 0.9651 , 1.008 , 1.051 , 1.123 , 1.203 , 1.213 , 1.366 , 1.529 , 1.795 , 1.947 , 2.093 , 2.143 , 2.189 , 2.246 , 2.453 , 2.526 , 2.858 , 2.924 , 3.381 , 3.383 , 3.587 , 4.964 , 5.101 , 5.139 , 6.753 , 10.11 , 11.37 , 12.68 , 16.88 , 17.25 , 19.07 . the anderson - darling statistic ( anderson and darling 1954 ) , with a -value equal to 0.94333 , shows the high conformity of this sample to the parent cdf .incidentally , in this paper , we chose this specific goodness - of - fit test since it emphasizes the tails of the presumed parent distribution .however , in the above case , also tests that give less weight to the tails lead to similar results .suppose that we want to identify a generic cdf model being very well fitted to both the data and the parent cdf , but we do nt have any strong information about the latter .we decide to adopt a `` less informative model '' which is coherent with our poor information .we chose a polynomial cumulative hr ( hr ) model of order 3 , since it is the minimum able to fit a non - monotone model too .in our ( simulated ) condition , we can define an excellent `` a priori '' model by fitting the polynomial to 50 points ( vertically equally spaced ) of the known parent cdf .the resulting model is : which has a coefficient of determination .moreover , being the anderson - darling statistic , with a -value equal to 0.2856 , this `` a priori '' model appears very well fitted to data too .incidentally , the maximum likelihood ( ml ) estimates of its three parameters give the following polynomial hr model very close to the former ( [ eq19 ] ) : which has a coefficient of determination .suppose now that the analysis of the generative mechanism suggests us to fit the iw model to the 50 data .the ml estimates of its parameters are and .the coefficient of determination of the hr function estimated from this iw model is .the anderson - darling statistic is with a -value equal to 0.9530 .although the previous analysis has shown that the two cdf models fit the data very well , some important characteristics could be different . 
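the fit reported for these 50 values can be reproduced , at least approximately , with standard tools . as recalled at the beginning of the paper , the iw parameters can be estimated by fitting an ordinary weibull model to the reciprocal data ; the sketch below does this with scipy , assuming the parametrization f(x ) = exp(-(lambda / x)^beta ) , and evaluates the anderson - darling statistic directly from its definition . the resulting numbers need not coincide exactly with those reported above .

```python
# Maximum likelihood inverse Weibull fit to the 50 simulated values listed in
# the text, obtained by fitting a Weibull model to the reciprocal data, plus
# the Anderson-Darling statistic computed from its definition. The assumed
# parametrization is F(x) = exp(-(lam / x)**beta).
import numpy as np
from scipy import stats

x = np.array([
    0.2776, 0.2931, 0.3384, 0.4321, 0.4739, 0.4771, 0.5331, 0.5424, 0.5482, 0.5571,
    0.6139, 0.6451, 0.6523, 0.6587, 0.7166, 0.7838, 0.8466, 0.8892, 0.9278, 0.9651,
    1.008, 1.051, 1.123, 1.203, 1.213, 1.366, 1.529, 1.795, 1.947, 2.093,
    2.143, 2.189, 2.246, 2.453, 2.526, 2.858, 2.924, 3.381, 3.383, 3.587,
    4.964, 5.101, 5.139, 6.753, 10.11, 11.37, 12.68, 16.88, 17.25, 19.07])

# maximum likelihood Weibull fit to the reciprocals (location fixed at zero)
shape, _, scale = stats.weibull_min.fit(1.0 / x, floc=0)
beta_hat, lam_hat = shape, 1.0 / scale        # inverse Weibull shape and scale

def iw_cdf(t, lam, beta):
    return np.exp(-((lam / t) ** beta))

# Anderson-Darling statistic A^2 for the fitted cdf
xs = np.sort(x)
n = len(xs)
F = iw_cdf(xs, lam_hat, beta_hat)
i = np.arange(1, n + 1)
A2 = -n - np.mean((2 * i - 1) * (np.log(F) + np.log(1.0 - F[::-1])))
print(f"beta = {beta_hat:.3f}, lambda = {lam_hat:.3f}, A^2 = {A2:.3f}")
```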
to highlight that , we compare some critical estimates obtained from the `` a priori and less informative '' model ( [ eq19 ] ) with those obtained using the last `` fitted and informative '' iw model . from these two modelswe obtain the estimates reported in [ tab1 ] , where the true values are those of the parent population .. estimates for the polinomial and iw fitted models [ cols="<,<,<,<",options="header " , ] [ tab2 ]this example is representative of the critical real - world situations in which only tiny data sets are available .the dataset consists of 15 times to breakdown ( in minutes ) of an insulating fluid between electrodes at a constant voltage ( 36 kv ) , provided in nelson ( 1982 , p. 105 ) : 0.35 , 0.59 , 0.96 , 0.99 , 1.69 , 1.97 , 2.07 , 2.58 , 2.71 , 2.90 , 3.67 , 3.99 , 5.35 , 13.77 , 25.50 .unfortunately , due to small size of the sample , we can not rely on the sample point on the graph of [ fig1 ] to start the selection of a reasonable model .however , analyzing the experiment ( aiming to derive the lifetime distribution of the insulating fluid ) we come to the conclusion that it shows an example of the `` deterioration '' mechanism close to the one described in section 3.1 .in fact , the mean of the insulating resistance of the fluid decreases according to a positive ( and less than one ) power function of time .this model belongs to the arrhenius class of cumulative damage relationships , widely found in life tests with constant stress ( see , e.g. , nelson 1990 ) .consequently , the mean of the resistive leakage current ( i.e. , the system deterioration index increases with a positive ( and greater than one ) power of time to the dielectric failure , which occurs when a threshold value ( fixed by the operating and environmental conditions supposed constant ) is exceeded .moreover , the nature of the failure mechanism is stationary and does not induce any change in the shape of the pdf .then , a pdf model with mean increasing as a power function of time and with constant shape is well rendered by the weibull model ( [ eq9 ] ) .in fact , being constant the shape parameter , its mean is effectively a positive ( and greater than one ) power function of the time . hence we decide to assume the iw model as our weighted hypothesis .however , we consider also the log - logistic model because , as shown in [ fig1 ] , it plays the role of a frontier separating the iw model and many other alternative models .the ml estimates of the iw parameters are and ; the anderson - darling statistic is with a -value equal to 0.596 ; the _ mll _ is .the ml estimates of the log - logistic parameters are and and the anderson - darling statistic is with a -value equal to 0.870 ; the maximized log - likelihood is . the comparison of the two alternative models by means of the anderson - darling statistic and the _ mlls _( both at their ml value ) would support the log - logistic model .however , we think that the differences are not enough large ( e.g. 
only 0.3 unit separates the two _ mlls _ ) to contradict the previous choice based on a careful and detailed technological analysis .the paper proves that the iw distribution is another of the relatively few ubt survival distributions .so , when dealing with ubt distributions , it is helpful to have an alternative model that has , moreover , a distinctive heavy right tail .this paper demonstrates how the iw distribution is the natural candidate , among all the survival models , to face three unreported classes of real and well defined degenerative phenomena .so the practitioners are helped to choose this model by profiting from the knowledge of the involved phenomena , such as a disease or failure , rather than exclusively on the usual analysis of goodness - of - fit .some illustrative examples show that the polynomial cumulative hazard model and the log - logistic one can both fit the cdf of iw data very well .the polynomial model is used as antithetic benchmark because : a ) differently from the iw model , it is capable of giving a wide range of hr shapes ; b ) it is used in situations where strong assumptions about the parent distribution are unavailable .the log - logistic model has been considered because : a ) it is the closest model which shares the upside - down bathtub ( ubt ) shaped hazard function ; b ) it plays the role of a frontier separating the iw model from many other alternative models .however , all the illustrative examples show that the above models even though very well fitted to iw data may be very misleading because they entail highly incorrect assessments concerning , for instance , the mean residual life .the paper proves that when any knowledge about generative mechanism is unavailable selecting between the iw and the log - logistic models that one which minimizes the anderson - darling statistic or , even better , maximizes the likelihood is a very effective procedure .finally , we show that for the iw and log - logistic models both selection criteria are independent of hypothetical distribution parameters , and the corresponding probabilities of correct selection are respectively greater than 0.85 and 0.93 when the size of the available sample is greater than 50 . instead ,when the size of the available sample is less than 30 ( i.e. , in a very frequent situation in the technological and biological fields ) selecting the correct model purely on the basis of the empirical distribution remains a highly risky procedure , since the probabilities of wrong selection are respectively greater than 0.23 and 0.12 . +* references *
the peculiar properties of the inverse weibull ( iw ) distribution are shown . it is proven that the iw distribution is one of the few models having upside - down bathtub ( ubt ) shaped hazard function . three real and typical degenerative mechanisms , which lead exactly to the iw random variable , are formulated . so a new approach to proper application of this relatively unknown survival model is supported . however , we consider also the case in which any knowledge about generative mechanism is unavailable . in this hypothesis , we study a procedure based on the anderson - darling statistic and log - likelihood function to discriminate between the iw model and others alternative ubt distributions . the invariant properties of the proposed discriminating criteria have been proven . based on monte carlo simulations , the probability of the correct selection has been computed . a real applicative example closes the paper . = 4 mean residual life , model selection , ubt shaped hazard rate
networks are useful constructs to schematize the organization of interactions in social and biological systems. networks are particularly valuable for characterizing _interdependent _ interactions , where the interaction between components a and b influences the interaction between components b and c , and so on . for most such integrated systems , it is a flow of some entity passengers traveling among airports , money transferred among banks , gossip exchanged among friends , signals transmitted in the brain that connects a system s components and generates their interdependence .network structures constrain these flows .therefore , understanding the behavior of integrated systems at the macro - level is not possible without comprehending the network structure with respect to the flow , the _dynamics on _ the network. one major drawback of networks is that , for visualization purposes , they can only depict small systems .real - world networks are often so large that they must be represented by coarse - grained descriptions .deriving appropriate coarse - grain descriptions is the basic objective of community detection .but before we decompose the nodes and links into modules that represent the network , we must first decide what we mean by appropriate . "that is , we must decide which aspects of the system should be highlighted in our coarse - graining .if we are concerned with the process that _ generated _ the network in the first place , we should use methods based on some underlying stochastic model of network formation . to study the formation process , we can , for example , use modularity , mixture models at two or more levels , bayesian inference , or our cluster - based compression approach to resolve community structure in undirected and unweighted networks .if instead we want to infer system behavior from network structure , we should focus on how the structure of the extant network constrains the dynamics that can occur on that network . to capturehow local interactions induce a system - wide flow that connects the system , we need to simplify and highlight the underlying network structure with respect to how the links drive this flow across the network .for example , both markov processes on networks and spectral methods can capture this notion . in this paper , we present a detailed description of the flow - based and information - theoretic method known as the map equation . for a given network partition ,the map equation specifies the theoretical limit of how concisely we can describe the trajectory of a random walker on the network . with the random walker as a proxy for real flow , minimizing the map equation over all possible network partitions reveals important aspects of network structure with respect to the dynamics on the network . to illustrate and further explain how the map equation operates, we compare its action with the topological method modularity maximization . 
because the two methods can yield different results for some network structures ,it is illuminating to understand when and why they differ .there is a duality between the problem of compressing a data set , and the problem of detecting and extracting significant patterns or structures within those data .this general duality is explored in the branch of statistics known as mdl , or minimum description length statistics .we can apply these principles to the problem at hand : finding the structures within a network that are significant with respect to how information or resources flow through that network . to exploit the inference - compression duality for dynamics on networks ,we envision a communication process in which a sender wants to communicate to a receiver about movement on a network .that is , we represent the data that we are interested in the trace of the flow on the network with a compressed message .this takes us to the heart of information theory , and we can employ shannon s source coding theorems to find the limits on how far we can compress the data . for some applications , we may have data on the actual trajectories of goods , funds , information , or services as they travel through the network , and we could work with the trajectories directly . more often , however , we will only have a characterization of the network structure along which these objects can move , in which case we can do no better than to approximate likely trajectories as random walks guided by the directed and weighted links of the network .this is the approach that we take with the map equation . in order to effectively and concisely describe where on the network a random walker is , an effective encoding of positionwill necessarily exploit the regularities in patterns of movement on that network .if we can find an optimal code for describing places traced by a path on a network , we have also solved the dual problem of finding the important structural features of that network .therefore , we look for a way to assign codewords to nodes that is efficient with respect to the dynamics on the network . a straightforward method of assigning codewords to nodes is to use a huffman code .huffman codes are optimally efficient for symbol - by - symbol encoding and save space by assigning short codewords to common events or objects , and long codewords to rare ones , just as morse code uses short codes for common letters and longer codes for rare ones .figure [ fig1](b ) shows a prefix - free huffman coding for a sample network .it corresponds to a lookup table for coding and decoding nodes on the network , a _ codebook _ that connects nodes with codewords . in this codebook, each huffman codeword specifies a particular node , and the codeword lengths are derived from the ergodic node visit frequencies of a random walk ( the average node visit frequencies of an infinite - length random walk ) .because the code is prefix - free , that is , no codeword is a prefix of any other codeword , codewords can be sent concatenated without punctuation and still be unambiguously decoded by the receiver . with the huffman code pictured in fig .[ fig1](b ) , we are able to describe the nodes traced by the specific 71-step walk in 314 bits . if we instead had chosen a uniform code , in which all codewords are of equal length , each codeword would be bits long ( logarithm taken in base 2 ) , and bits would have been required to describe the walk . 
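For concreteness, codeword lengths like those used in the one-level description can be obtained from the ergodic node visit frequencies with a standard Huffman construction. The frequencies in the sketch below are hypothetical and are not those of the figure; the point is only that the frequency-weighted average codeword length falls below the uniform-code length.

```python
import heapq
from itertools import count

def huffman_lengths(p):
    """Codeword lengths of a binary Huffman code built from visit frequencies p."""
    tie = count()
    heap = [(w, next(tie), [i]) for i, w in enumerate(p)]
    heapq.heapify(heap)
    lengths = [0] * len(p)
    while len(heap) > 1:
        w1, _, m1 = heapq.heappop(heap)
        w2, _, m2 = heapq.heappop(heap)
        for i in m1 + m2:
            lengths[i] += 1              # every merge adds one bit to all members
        heapq.heappush(heap, (w1 + w2, next(tie), m1 + m2))
    return lengths

# hypothetical ergodic visit frequencies of an 8-node network
p = [0.25, 0.20, 0.15, 0.12, 0.10, 0.08, 0.06, 0.04]
lengths = huffman_lengths(p)
avg = sum(pi * li for pi, li in zip(p, lengths))
print(lengths, f"-> {avg:.2f} bits/step on average, vs 3 bits with a uniform code")
```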
this huffman code is optimal for sending a one - time transmission describing the location of a random walker at one particular instant in time .moreover , it is optimal for describing a list of locations of the random walker at arbitrary ( and sufficiently distant ) times .however , if we wish to list the locations visited by our random walker in a sequence of successive steps , we can do better . sequences of successive steps are of critical importance to us ; after all , this is flow .many real - world networks are structured into a set of regions such that once the random walker enters a region , it tends to stay there for a long time , and movements between regions are relatively rare .as we design a code to enumerate a succession of locations visited , we can take advantage of this regional structure .we can take a region with a long persistence time and give it its own separate codebook .so long as we are content to reuse codewords in other regional codebooks , the codewords used to name the locations in any single region will be shorter than those in the global huffman code example above , because there are fewer locations to be specified .we call these regions `` modules '' and their codebooks `` module codebooks . ''however , with multiple module codebooks , each of which re - uses a similar set of codewords , the sender must also specify which module codebook should be used .that is , every time a path enters a new module , both sender and receiver must simultaneously switch to the correct module codebook or the message will be nonsense .this is implemented by using one extra codebook , an index codebook , with codewords that specify which of the module codebooks is to be used .the coding procedure is then as follows .the index codebook specifies a module codebook , and the module codebook specifies a succession of nodes within that module .when the random walker leaves the module , we need to return to the index codebook . to indicate this , instead of sending another node name from the module codebook, we send the `` xit command '' from the module codebook .the codeword lengths in the index codebook are derived from the relative rates at which a random walker enters each module , while the codeword lengths for each module codebook are derived from the relative rates at which a random walker visits each node in the module or exits the module .here emerges the duality between coding a data stream and finding regularities in the structure that generates that stream . using multiple codebooks, we transform the problem of minimizing the description length of places traced by a path into the problem of how we should best partition the network with respect to flow .how many modules should we use , and which nodes should be assigned to which modules to minimize the map equation ?figure [ fig1](c ) illustrates a two - level description that capitalizes on structures with long persistence time and encodes the walk in panel ( a ) more efficiently than the one - level description in panel ( b ) . 
we have implemented a dynamic visualization and made it available for anyone to explore the inference - compression duality and the mechanics of the map equation ( http://www.tp.umu.se/~rosvall/livemod/mapequation/ ) .figure [ codebooks ] visualizes the use of one or multiple codebooks for the network in fig .the sparklines show how the description length associated with between - module movements increases with the number of modules and more frequent use of the index codebook .contrarily , the description length associated with within - module movements decreases with the number of modules and with the use of smaller module codebooks .the sum of the two , the full description length , takes a minimum at four modules .we use stacked boxes to illustrate the rates at which a random walker visits nodes and enters and exits modules .the codewords to the right of the boxes are derived from the within - module relative rates and within - index relative rates , respectively .both relative rates and codewords change from the one - codebook solution with all nodes in one module , to the optimal solution , with an index codebook and four module codebooks with nodes assigned to four modules ( see online dynamic visualization ) .we have described the huffman coding process in detail in order to make it clear how the coding structure works .but of course the aim of community detection is not to encode a particular path through a network . in community detection , we simply want to find the modular structure of the network with respect to flow and our approach is to exploit the inference - compression duality to do so . in fact , we do not even need to devise an optimal code for a given partition to estimate how efficient that optimal code would be .this is the whole point of the map equation .it tells us how efficient the optimal code would be for any given partition , without actually devising that code .that is , it tells us the theoretical limit of how concisely we can specify a network path using a given partition structure . to find an optimal partition of the network ,it is sufficient to calculate this theoretical limit for different partitions of the network and pick the one that gives the shortest description length . for a module partition of nodes into modules , we define this lower bound on code length to be . to calculate for an arbitrary partition ,we first invoke shannon s source coding theorem , which implies that when you use codewords to describe the states of a random variable that occur with frequencies , the average length of a codeword can be no less than the entropy of the random variable itself : ( we measure code lengths in bits and take the logarithm in base 2 ) .this provides us with a lower bound on the average length of codewords for each codebook .to calculate the average length of the code describing a step of the random walk , we need only to weight the average length of codewords from the index codebook and the module codebooks by their rates of use .this is the map equation : here is the frequency - weighted average length of codewords in the index codebook and is frequency - weighted average length of codewords in module codebook .further , the entropy terms are weighted by the rate at which the codebooks are used . with for the probability to exit module , the index codebook is used at a rate , the probability that the random walker switches modules on any given step . 
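A minimal sketch of how this description length can be evaluated for a candidate partition (the per-module terms are spelled out at the start of the next paragraph): given the node visit rates and the module exit rates, the index term and the module terms are frequency-weighted entropies. The rates below are illustrative and are not derived from any particular network.

```python
import numpy as np

def entropy(w):
    """Entropy (bits) of the normalised weights w."""
    w = np.asarray(w, dtype=float)
    s = w.sum()
    if s == 0.0:
        return 0.0
    p = w[w > 0] / s
    return float(-(p * np.log2(p)).sum())

def map_equation(modules):
    """Per-step description length L(M) in bits.  `modules` is a list of
    (node_visit_rates, exit_rate) pairs, one pair per module."""
    exits = [q for _, q in modules]
    L = sum(exits) * entropy(exits)                              # index codebook term
    for p_nodes, q in modules:
        L += (sum(p_nodes) + q) * entropy(list(p_nodes) + [q])   # module codebook terms
    return L

# illustrative visit rates for an 8-node network, one module vs split in two
one_module  = [([0.15, 0.15, 0.10, 0.10, 0.20, 0.10, 0.10, 0.10], 0.00)]
two_modules = [([0.15, 0.15, 0.10, 0.10], 0.02),
               ([0.20, 0.10, 0.10, 0.10], 0.02)]
print(map_equation(one_module), map_equation(two_modules))
```

With these (made-up) rates, the two-module partition gives the shorter per-step description, as expected when the exit rates are small compared with the within-module visit rates.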
with for the probability to visit node , module codebook is used at a rate , the fraction of time the random walk spends in module plus the probability that it exits the module and the exit message is used .now it is straightforward to express the entropies in and . for the index codebook ,the entropy is and for module codebook the entropy is by combining eqs .[ map_master ] and [ map_module ] and simplifying , we can write the map equation as : in this expanded form of the map equation , we note that the term is independent of partitioning , and elsewhere in the expression appears only when summed over all nodes in a module .consequently , when we optimize the network partition , it is sufficient to keep track of changes in , the rate at which a random walker enters and exits each module , and , the fraction of time a random walker spends in each module .they can easily be derived for any partition of the network , and updating them is a straightforward and fast operation .any numerical search algorithm developed to find a network partition that optimizes an objective function can be modified to minimize the map equation . [ cols= "< , < , < , < " , ] for undirected networks , the node visit frequency of node simply corresponds to the relative weight of the links connected to the node . the relative weight is the total weight of the links connected to the node divided by twice the total weight of all links in the network , which corresponds to the total weight of all link - ends . with for the relative weight of node , for the relative weight of module , for the relative weight of links exiting module , and for the total relative weight of links between modules ,the map equation takes the form for directed weighted networks , we use the power iteration method to calculate the steady state visit frequency for each node . to guarantee a unique steady state distribution for directed networks ,we introduce a small teleportation probability in the random walk that links every node to every other node with positive probability and thereby converts the random walker into a _random surfer_. the movement of the random surfer can now be described by an irreducible and aperiodic markov chain that has a unique steady state by the perron - frobineous theorem . as in google s pagerank algorithm , we use .the results are relatively robust to this choice , but as , the stationary frequencies may poorly reflect the important nodes in the network as the random walker can get trapped in small clusters that do not point back into the bulk of the network .the surfer moves as follows : at each time step , with probability , the random surfer follows one of the outgoing links from the node that it currently occupies to the neighbor node with probability proportional to the weights of the outgoing links from to .it is therefore convenient to set . with the remaining probability , or with probability if the node does not have any outlinks , the random surfer `` teleports '' with uniform probability to a random node anywhere in the system .but rather than averaging over a single long random walk to generate the ergodic node visit frequencies , we apply the power iteration method to the probability distribution of the random surfer over the nodes of the network .we start with a probability distribution of for the random surfer to be at each node and update the probability distribution iteratively . 
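The iterative update just outlined, and completed in the next paragraph, amounts to a teleportation-smoothed power iteration. A compact sketch might look as follows; the weight matrix is an arbitrary toy example and the teleportation probability is set to 0.15 as in the text.

```python
import numpy as np

def visit_frequencies(W, tau=0.15, tol=1e-12, max_iter=10000):
    """Ergodic visit frequencies of the random surfer on a directed, weighted
    network; W[i, j] is the weight of the link i -> j."""
    n = len(W)
    out = W.sum(axis=1)
    p = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        follow = np.zeros(n)
        teleport = 0.0
        for i in range(n):
            if out[i] > 0:
                follow += (1.0 - tau) * p[i] * W[i] / out[i]
                teleport += tau * p[i]
            else:
                teleport += p[i]            # dangling node: teleport all its flow
        p_new = follow + teleport / n
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new
    return p

# small directed example with one dangling node (weights are illustrative)
W = np.array([[0, 2, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
print(visit_frequencies(W))
```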
at each iteration, we distribute a fraction of the probability flow of the random surfer at each node to the neighbors proportional to the weights of the links and distribute the remaining probability flow uniformly to all nodes in the network .we iterate until the sum of the absolute differences between successive estimates of is less than and the probability distribution has converged .given the ergodic node visit frequencies for and an initial partitioning of the network , it is easy to calculate the ergodic module visit frequencies for module . the exit probability for module , with teleportation taken into account , is then where is the number of nodes in module .this equation follows since every node teleports a fraction and guides a fraction of its weight to nodes outside of its module . if the nodes represent objects that are inherently different it can be desirable to nonuniformly teleport to nodes in the network .for example , in journal - to - journal citation networks , journals should receive teleporting random surfers proportional to the number of articles they contain , and , in air traffic networks , airports should receive teleporting random surfers proportional to the number of flights they handle .this nonuniform teleportation nicely corrects for the disproportionate amount of random surfers that small journals or small airports receive if all nodes are teleported to with equal probability . in practice, nonuniform teleportation can be achieved by assigning to each node a normalized teleportation weight such that . with teleportation flow distributed nonuniformly , the numeric values of the ergodic node visit probabilities will change slightly and the exit probability for module becomes this equation follows since every node now teleports a fraction of its weight to nodes outside of its module .conceptually , detecting communities by mapping flow is a very different approach from inferring module assignments for underlying network models .whereas the former approach focuses on the interdependence of links and the dynamics on the network once it has been formed , the latter one focuses on pairwise interactions and the formation process itself .because the map equation and modularity take these two disjoint approaches , it is interesting to see how they differ in practice . to highlight one important difference, we compare how the map equation and the generalized modularity , which makes use of information about the weight and direction of links , operate on networks with and without flow . for weighted and directed networks ,the modularity for a given partitioning of the network into modules is the sum of the total weight of all links in each module minus the expected weight here is the total weight of links starting and ending in module , and the total in- and out - weight of links in module , and the total weight of all links in the network . to estimate the community structure in a network , eq .[ modularity ] is maximized over all possible assignments of nodes into any number of modules .figure [ compare ] shows two different networks , each partitioned in two different ways .both networks are generated from the same underlying network model in the modularity sense : 20 directed links connect 16 nodes in four modules , with equal total in- and out - weight at each module .the only difference is that we switch the direction of two links in each module . 
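The generalized modularity just defined is equally easy to evaluate for a given assignment of nodes into modules. In the sketch below the weight matrix and the partition are arbitrary illustrations, not the networks of the figure.

```python
import numpy as np

def directed_modularity(W, labels):
    """Generalized modularity for a directed, weighted network.
    W[i, j] is the weight of the link i -> j, labels[i] the module of node i."""
    w = W.sum()                                    # total weight of all links
    q = 0.0
    for m in set(labels):
        idx = [i for i, l in enumerate(labels) if l == m]
        w_mm = W[np.ix_(idx, idx)].sum()           # weight starting and ending in m
        w_out = W[idx, :].sum()                    # total out-weight of module m
        w_in = W[:, idx].sum()                     # total in-weight of module m
        q += w_mm / w - (w_in * w_out) / w**2
    return q

# two small directed 2-cycles joined by a single link (illustrative weights)
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(directed_modularity(W, [0, 0, 1, 1]))
```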
because the weights , , , and are all the same for the four - module partition of the two different networks in fig .[ compare](a ) and ( c ) , the modularity takes the same value .that is , from the perspective of modularity , the two different networks and corresponding partitions are identical .however , from a flow - based perspective , the two networks are completely different .the directed links shown in the network in panel ( a ) and panel ( b ) induce a structured pattern of flow with long persistence times in , and limited flow between , the four modules highlighted in panel ( a ) .the map equation picks up on these structural regularities , and thus the description length is shorter for the four - module network partition in panel ( a ) than for the unpartitioned network in panel ( b ) .by contrast , for the network shown in panels ( c ) and ( d ) , there is no pattern of extended flow at all .every node is either a source or a sink , and no movement along the links on the network can exceed more than one step in length . as a result , random teleportation will dominate and any partition into multiple modules will lead to a high flow between the modules . for networks with links that do not induce a pattern of flow , the map equation will always be minimized by one single module .the map equation captures small modules with long persistence times , and modularity captures small modules with more than the expected number of link - ends , incoming or outgoing .this example , and the example with directed and weighted networks in ref . , reveal the effective difference between them . though modularity can be interpreted as a one - step measure of movement on a network , this example demonstrates that one - step walks can not capture flow .any greedy ( fast but inaccurate ) or monte carlo - based ( accurate but slow ) approach can be used to minimize the map equation . to provide a good balance between the two extremes ,we have developed a fast stochastic and recursive search algorithm , implemented it in c++ , and made it available online both for directed and undirected weighted networks . as a reference ,the new algorithm is as fast as the previous high - speed algorithms ( the greedy search presented in the supporting appendix of ref . ) , which were based on the method introduced in ref . and refined in ref . . at the same time , it is also more accurate than our previous high - accuracy algorithm ( a simulated annealing approach ) presented in the same supporting appendix .the core of the algorithm follows closely the method presented in ref . : neighboring nodes are joined into modules , which subsequently are joined into supermodules and so on . first , each node is assigned to its own module . then , in random sequential order , each node is moved to the neighboring module that results in the largest decrease of the map equation .if no move results in a decrease of the map equation , the node stays in its original module .this procedure is repeated , each time in a new random sequential order , until no move generates a decrease of the map equation .now the network is rebuilt , with the modules of the last level forming the nodes at this level . and exactly as at the previous level , the nodes are joined into modules .this hierarchical rebuilding of the network is repeated until the map equation can not be reduced further . except for the random sequence order ,this is the algorithm described in ref . . 
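The node-moving core of this search can be sketched generically. To keep the example self-contained, the objective below is a placeholder (minus Newman modularity on a toy undirected network) standing in for the map equation; in the real implementation the objective is updated incrementally from the module entry/exit rates rather than recomputed from scratch, and the hierarchical rebuilding step is omitted here.

```python
import random

def core_pass(nodes, neighbors, cost):
    """One pass of the greedy node-moving stage: in random sequential order each
    node is moved to the neighbouring module that decreases cost(partition) the
    most, and the sweep is repeated until no move helps."""
    module = {v: v for v in nodes}            # every node starts in its own module
    improved = True
    while improved:
        improved = False
        order = list(nodes)
        random.shuffle(order)
        for v in order:
            best, best_cost = module[v], cost(module)
            for m in {module[u] for u in neighbors[v]}:
                trial = dict(module)
                trial[v] = m
                c = cost(trial)
                if c < best_cost:
                    best, best_cost = m, c
            if best != module[v]:
                module[v] = best
                improved = True
    return module

# toy network: two triangles joined by a single link
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
nodes = list(range(6))
neighbors = {v: [u for a, b in edges for u in (a, b) if v in (a, b) and u != v] for v in nodes}

def neg_modularity(part):
    """Placeholder objective (minus Newman modularity); the map equation would be
    evaluated here in the real algorithm."""
    L = len(edges)
    deg = {v: sum(v in e for e in edges) for v in nodes}
    q = 0.0
    for m in set(part.values()):
        members = {v for v in nodes if part[v] == m}
        l_m = sum(a in members and b in members for a, b in edges)
        d_m = sum(deg[v] for v in members)
        q += l_m / L - (d_m / (2 * L)) ** 2
    return -q

random.seed(2)
print(core_pass(nodes, neighbors, neg_modularity))   # recovers the two triangles
```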
with this algorithm, a fairly good clustering of the network can be found in a very short time .let us call this the core algorithm and see how it can be improved .the nodes assigned to the same module are forced to move jointly when the network is rebuilt . as a result ,what was an optimal move early in the algorithm might have the opposite effect later in the algorithm .because two or more modules that merge together and form one single module when the network is rebuilt can never be separated again in this algorithm , the accuracy can be improved by breaking the modules of the final state of the core algorithm in either of the two following ways : * _ submodule movements ._ first , each cluster is treated as a network on its own and the main algorithm is applied to this network .this procedure generates one or more submodules for each module .then all submodules are moved back to their respective modules of the previous step . at this stage , with the same partition as in the previous step but with each submodule being freely movable between the modules , the main algorithm is re - applied . *_ single - node movements . _first , each node is re - assigned to be the sole member of its own module , in order to allow for single - node movements .then all nodes are moved back to their respective modules of the previous step . at this stage , with the same partition as in the previous step but with each single node being freely movable between the modules , the main algorithm is re - applied . in practice , we repeat the two extensions to the core algorithm in sequence and as long as the clustering is improved .moreover , we apply the submodule movements recursively .that is , to find the submodules to be moved , the algorithm first splits the submodules into subsubmodules , subsubsubmodules , and so on until no further splits are possible .finally , because the algorithm is stochastic and fast , we can restart the algorithm from scratch every time the clustering can not be improved further and the algorithm stops . the implementation is straightforward and , by repeating the search more than once , 100 times or more if possible , the final partition is less likely to correspond to a local minimum . for each iteration ,we record the clustering if the description length is shorter than the previously shortest description length . in practice , for networks with on the order of 10,000 nodes and 1,000,000 directed and weighted links , each iteration takes about 5 seconds on a modern pc .in this paper and associated interactive visualization , we have detailed the mechanics of the map equation for community detection in networks . our aim has been to differentiate flow - based methods such as spectral methods and the map equation , which focus on system behavior once the network has been formed , from methods based on underlying stochastic models such as mixture models and modularity methods , which focus on the network formation process . by comparing how the map equation and modularity operate on networks with and without flow , we conclude that the two approaches are not only conceptually different , they also highlight different aspects of network structure . 
depending on the sorts of questions that one is asking ,one approach may be preferable to the other .for example , to analyze how networks are formed and to simplify networks for which links do not represent flows but rather pairwise relationships , modularity or other topological methods may be preferred .but if instead one is interested in the dynamics on the network , in how local interactions induce a system - wide flow , in the interdependence across the network , and in how network structure relates to system behavior , then flow - based approaches such as the map equation are preferable .
many real - world networks are so large that we must simplify their structure before we can extract useful information about the systems they represent . as the tools for doing these simplifications proliferate within the network literature , researchers would benefit from some guidelines about which of the so - called community detection algorithms are most appropriate for the structures they are studying and the questions they are asking . here we show that different methods highlight different aspects of a network s structure and that the the sort of information that we seek to extract about the system must guide us in our decision . for example , many community detection algorithms , including the popular modularity maximization approach , infer module assignments from an underlying model of the network formation process . however , we are not always as interested in how a system s network structure was formed , as we are in how a network s extant structure influences the system s behavior . to see how structure influences current behavior , we will recognize that links in a network induce movement across the network and result in system - wide interdependence . in doing so , we explicitly acknowledge that most networks carry flow . to highlight and simplify the network structure with respect to this flow , we use the map equation . we present an intuitive derivation of this flow - based and information - theoretic method and provide an interactive on - line application that anyone can use to explore the mechanics of the map equation . the differences between the map equation and the modularity maximization approach are not merely conceptual . because the map equation attends to patterns of flow on the network and the modularity maximization approach does not , the two methods can yield dramatically different results for some network structures . to illustrate this and build our understanding of each method , we partition several sample networks . we also describe an algorithm and provide source code to efficiently decompose large weighted and directed networks based on the map equation .
the ligo and virgo gravitational wave interferometric detectors are approaching their design sensitivity , and in the near future , coincidences between the three detectors ( ligo - hanford , ligo - livingston and virgo ) will be possible . in order to reconstruct the direction of the astrophysical sources in the sky , it is well known that a minimum of three detectors is mandatory even if an ambiguity remains between two positions symmetric with respect to the plane defined by the 3 detectors .the source direction is also provided by the coherent searches for bursts , , or coalescing binaries , , where one of the outputs of the detection algorithm is an estimation of the source direction . in this paper , we propose a method for estimating the source position using only the arrival time of the gravitational signal in each detector .the event detection is supposed to have been previously done by dedicated algorithms ( [ 10 - 23 ] for bursts , [ 24 - 29 ] for coalescing binaries ) and is not within the scope of this article .the direction reconstruction is based on a minimization as described in section ii .this technique can be easily extended to any set of detectors .moreover , the method can be applied to several types of sources ( burst , coalescence of binary objects ... ) as soon as an arrival time can be defined for the event .section ii also deals with the simulation procedure which will be used in the following sections to evaluate the reconstruction quality in several configurations .sections iii and iv describe the performances of the ligo - virgo network first neglecting ( iii ) , then including ( iv ) the angular response of the detectors . in sectionv , we consider the addition of other gravitational wave detectors ( supposing a similar sensitivity ) and investigate their impact on the reconstruction . in real conditions , systematic errors on arrival time are likely to exist and their impact on the reconstruction is tackled in section vi .within a network of _ n _ interferometers , we suppose that each detector measures the arrival time of the gravitational wave .of course , the definition of the arrival time depends on the source type and is a matter of convention , for example : peak value in the case of a supernova signal , end of the coalescence for binary events . in the following, it is assumed that all interferometers use consistent conventions .the error on the arrival time , , depends on the estimator used and on the strength of the signal in the detector , strength ( for a given distance and a given signal type ) which is related to the antenna pattern functions ( see and references therein ) at time . at that time , the antenna pattern depends on the longitude and the latitude of the detector location , as well as its orientation , the angle between the interferometer arms , the sky coordinates ( right ascension ) and ( declination ) of the source , and the wave polarization angle .the timing uncertainty can be parametrized by : where is the measured snr in detector , and are constants depending on the detection algorithm and the signal shape .for example , a burst search with a 1-ms gaussian correlator leads to ms and .typically , for an snr equal to 10 , the error on the arrival time is a few tenth of milliseconds and weakly depends on for snr values between 4 and 10 .the measured arrival times and their associated errors are the input for the reconstruction of the source direction in the sky , direction defined by and . 
in the 3-detector configuration ,the angles ( and ) of the source in the detector coordinate system ( see ref. and for exact definitions ) are given by : where d is placed at ( 0,0,0 ) , d at ( ,0,0 ) and d at ( ,,0 ) .when performing a coherent analysis of the gw detector streams the position of the source in the sky is part of the output parameters , corresponding to the stream combination which maximizes the snr .however , for a burst search , it is known that thousands of possible positions have to be tested to obtain the solution , or a least - square function involving the integration of detector streams has to be minimized .this minimization also implies the test of hundreds of initial conditions in order to reach the right minimum .concerning coalescing binaries , it implies the definition of a five - parameter bank of filters including the chirp mass , the three euler angles and the inclination angle of the orbital plane or a three - parameter bank of thousands filters for the two source angles and the chirp time . in all coherent techniques ,the extraction of the source direction is an heavy process imbedded in the detection procedure . in this paper, we propose a simpler approach where and are found through a least - square minimization using separately triggered events obtained by a coincidence search .we suppose that the detection is already performed applying suitable algorithms ( matched filter for coalescing binaries , robust filters for bursts ) .the is defined by : where is the arrival time of the gravitational wave at the center of the earth and is the delay between the center of the earth and the i detector which only depends on and .the first advantage of this definition is that it deals with absolute times recorded by each detector rather than time differences where one detector has to be singled out . otherwise , the best choice for the reference detector is not obvious : the detector with the lower error on the arrival time , the detector which gives the larger time delays or the detector leading the best relative errors on timing differences ?this definition leads to uncorrelated errors on fitted measurements .the second advantage is that the network can be extended to any number of detectors and the addition of other detectors is straightforward .obviously , the method requires that the event is seen by all detectors .the least - square minimization provides the estimation of and the covariance matrix of the fitted parameters . when the number of detectors is greater than 3 , the value at the minimum can also be used as a discriminating variable , as the system is overconstrained .a list of coincident events are defined by the three arrival times and their associated errors .no detection procedure is performed in these simulations as stated before .the simulation proceeds in two steps in order to study two coupled effects : antenna - patterns and location with respect to the 3-detector plane .the first step is a simplified approach : the antenna - pattern functions are ignored and the same error is assumed for arrival times .the second step is more realistic : we assume the same sensitivity for each detector and the signal strength is adjusted to have the mean ( over the three detectors ) snr equal to 10 . however, this implies that sometimes , due to the antenna - pattern , the signal is seen in a given detector with an snr lower than 4.5 , which remains an acceptable threshold for a real detection . 
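A minimal sketch of the least-square reconstruction described above: with the plane-wave delay t_i = t_0 - (r_i . n)/c predicted for each detector, the three measured arrival times are fitted for (t_0, theta, phi). The detector coordinates below are approximate Earth-centred values used only to set realistic baselines; the simulated event, the 0.1 ms timing error and the starting point are illustrative and do not reproduce the paper's simulation.

```python
import numpy as np
from scipy.optimize import least_squares

c = 299792458.0  # m/s

# approximate Earth-centred detector positions in metres
detectors = np.array([
    [-2.162e6, -3.834e6, 4.600e6],   # LIGO Hanford
    [-0.074e6, -5.496e6, 3.224e6],   # LIGO Livingston
    [ 4.546e6,  0.843e6, 4.379e6],   # Virgo
])

def direction(theta, phi):
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def predicted_times(t0, theta, phi):
    """Plane-wave arrival time at each detector relative to the Earth centre."""
    return t0 - detectors @ direction(theta, phi) / c

# one simulated event (illustrative values)
rng = np.random.default_rng(0)
t0_true, theta_true, phi_true = 0.0, 1.1, 2.3
sigma_t = 1.0e-4                       # 0.1 ms timing error in every detector
t_meas = predicted_times(t0_true, theta_true, phi_true) + rng.normal(0.0, sigma_t, 3)

def residuals(p):
    return (t_meas - predicted_times(*p)) / sigma_t

fit = least_squares(residuals, x0=(0.0, 1.0, 2.0))
print("fitted (t0, theta, phi):", fit.x)   # the mirror solution is also a minimum
```

Repeating the fit over many noise realisations and histogramming the angular distance between true and fitted directions gives the kind of resolution study discussed below.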
the same threshold equal to 4.5will be used later to see the effect of a reasonable detection scheme on the angular reconstruction error .we evaluate the errors for each arrival time using equation [ eq : sigma_t ] with . for a given simulation ,the coordinates ( ) of the source are chosen and the true arrival times on each detector are computed taken equal to 0 ( it is obvious that the timing origin can always been chosen such as ) the measured arrival times are drawn according to a gaussian distribution centered on and of width .the simulated values are then used as inputs for the least - square minimization . as the minimization is an iterative procedure , some initial values for the parameters have to be given . is initialized by the average of . for the angles, it appears that the initial values for the angles have no influence on the minimization convergence and a random direction is adequate . in the case of three interferometers, it is well known that there is a twofold ambiguity for the direction in the sky which can lead to the same arrival times in the detectors .these two solutions are symmetric with respect to the 3-detector plane . in order to resolve the ambiguity ,a fourth detector is needed . for the evaluation of the reconstruction accuracy ,only the solution closest to the source is retained .in this section , we only deal with the ligo - virgo network and the effect of the antenna - pattern functions are not included and it is assumed that all detectors measure the arrival time with the same precision . as previously said , it allows to decouple the effect of the antenna - patterns and of the location with respect to the 3-detector plane .first of all , as an example , in order to evaluate the accuracy of the reconstruction , we choose a given position in the sky ( coordinates of the galactic center ) and we perform the simulation with s at a fixed time .the results are shown on figure [ fig : reco_pos ] .a resolution of about 0.7 degrees can be achieved both on and .the angular error is defined as the angular distance on the sphere between the true direction and the reconstructed one ( it does not depend on the coordinate system and in particular there is no divergence ( only due to the coordinate system ) when is equal to 90 degrees ) .this variable will be used in the following steps as the estimator of the reconstruction quality .the mean angular error is 0.8 degrees . as shown on table [ tab : covar ] , the estimated errors ( given by the covariance matrix ) obtained by the minimization are in perfect agreement with these resolutions ..reconstruction accuracies on and and errors given by the covariance matrix .three digits are given in order to show the adequacy between rms and errors given by the covariance matrix . [ cols="^,^,^ " , ]all angular errors quoted previously suppose that arrival time measurements are only subject to gaussian noise .systematic biases can also be introduced by the analysis and their effect can be evaluated . 
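The angular error used as figure of merit, and the mirror solution responsible for the twofold ambiguity, can be written compactly; the helper names and the toy positions below are ours. The reflection of the source direction through the plane of the three detectors leaves every inter-detector delay (r_i - r_j) . n unchanged, which is why a fourth detector is needed to break the degeneracy.

```python
import numpy as np

def angular_error(n_true, n_rec):
    """Angle on the sphere between the true and reconstructed directions (rad)."""
    return np.arccos(np.clip(np.dot(n_true, n_rec), -1.0, 1.0))

def mirror_solution(n, r1, r2, r3):
    """Reflect a source direction through the plane of the three detectors,
    giving the degenerate twin solution of a 3-detector timing fit."""
    k = np.cross(r2 - r1, r3 - r1)
    k = k / np.linalg.norm(k)
    return n - 2.0 * np.dot(n, k) * k

# toy check with arbitrary detector positions and an arbitrary unit direction
r1, r2, r3 = np.eye(3)
n = np.array([0.3, 0.4, np.sqrt(1.0 - 0.25)])
twin = mirror_solution(n, r1, r2, r3)
print(np.degrees(angular_error(n, twin)), "deg between the two degenerate solutions")
```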
in order to do so , we modify equation [ eq : t_measured_bias ] introducing a timing bias for only one detector : as in sections [ source ] and [ including ] , we only consider the ligo - virgo network .it appears that the widths of the distribution for reconstructed and are not modified by the bias but the central values are shifted from the true ones .the differences between the reconstructed value and the true one are proportional to the bias and are significantly different from zero when the bias and the statistical error have the same order of magnitude .table [ tab : bias ] shows the effect of the bias for a given direction ( we check that the effect is independant of the source location ) . in this example , the bias has been applied to the livingstone interferometer .the width of statistical errors on arrival time was .1 ms leading to a statistical angular error about 0.8 .for the tested configurations , we do not observe significant differences between the three interferometers of the network .we described a method for the reconstruction of the source direction using the timing information ( arrival time and associated error ) delivered by gravitational wave detectors such as ligo and virgo .the reconstruction is performed using a least - square minimization which allows to retrieve the angular position of the source and the arrival time at the center of the earth .the minimization also gives an estimation of errors and correlations on fitted variables .for a given position , the angular error is proportional to the timing resolution and the systematic errors ( if they exist ) introduce a significant bias on reconstructed angles when they reach the level of the statistical one .when the antenna - pattern effect is included and imposing a mean snr value of 10 in the ligo - virgo network , a precision of can be reached for half of the sky . in order to reproduce a realistic case, we apply a threshold on the snr in each detector ( snr 4.5 leading to a false alarm rate about hz when performing a threefold coincidence ) .this condition is satisfied for 60 % of the sky and the median angular error in this case is . as a resolution of obtained for 30 % of the events satisfying the snr condition , it means that about 20 % of the whole sky is seen with an angular error lower than .adding other gravitational waves detectors allows to reduce the blind regions and to lower the mean resolution . in the best considered case( 6 detectors ) , the resolution is about and 99% of the sky is seen with a resolution lower than . all quoted resolutions ( about one degree )are similar to those delivered by -ray satellites when the first grb counterparts have been identified .so , we can expect it will be also sufficient for the first identification of gravitational wave sources .
this paper deals with the reconstruction of the direction of a gravitational wave source using the detection made by a network of interferometric detectors , mainly the ligo and virgo detectors . we suppose that an event has been seen in coincidence using a filter applied on the three detector data streams . using the arrival time ( and its associated error ) of the gravitational signal in each detector , the direction of the source in the sky is computed using a minimization technique . for reasonably large signals ( snr.5 in all detectors ) , the mean angular error between the real location and the reconstructed one is about . we also investigate the effect of the network geometry assuming the same angular response for all interferometric detectors . it appears that the reconstruction quality is not uniform over the sky and is degraded when the source approaches the plane defined by the three detectors . adding at least one other detector to the ligo - virgo network reduces the blind regions and in the case of 6 detectors , a precision less than on the source direction can be reached for 99% of the sky .
it is well known that adequate relativistic modelling is indispensable for the success of microarcsecond space astrometry .one of the most important relativistic effects for astrometric observations in the solar system is the gravitational light deflection .the largest contribution in the light deflection comes from the spherically symmetric ( schwarzschild ) parts of the gravitational fields of each solar system body .although the planned astrometric satellites gaia , sim , etc . will not observe very close to the sun , they can observe very close to the giant planets also producing significant light deflection .this poses the problem of modelling this light deflection with a numerical accuracy of better than 1 .the exact differential equation of motion for a light ray in the schwarzschild field can be solved numerically as well as analytically . however , the exact analytical solution is given in terms of elliptic integrals , implying numerical efforts comparable with direct numerical integration , so that approximate analytical solutions are usually used .in fact , the standard parametrized post - newtonian ( ppn ) solution is sufficient in many cases and has been widely applied . so far , there was no doubt that the post - newtonian order of approximation is sufficient for astrometric missions even up to microarcsecond level of accuracy , besides astrometric observations close to the edge of the sun .however , a direct comparison reveals a deviation between the standard post - newtonian approach and the exact numerical solution of the geodetic equations .in particular , we have found a difference of up to 16 in light deflection for solar system objects observed close to giant planets .this error has triggered detailed numerical and analytical investigations of the problem . usually , in the framework of general relativity or the ppn formalism analytical orders of smallness of various terms are considered . herethe role of small parameter is played by where is the light velocity .standard post - newtonian and post - post - newtonian solutions are derived by retaining terms of relevant analytical orders of magnitude . on the other hand , for practical calculationsonly numerical magnitudes of various terms are relevant . in this notewe attempt to close this gap and combine the analytical post - post - newtonian solution derived in with estimates of numerical magnitudes of various terms . in this way we will derive a compact analytical solution for the boundary problem for light propagation where all terms are indeed relevant at the level of 1 .the derived analytical solution is then verified using high - accuracy numerical integration of the differential equations of light propagation and found to be correct at the level well below 1 .we use fairly standard notations : * is the newtonian constant of gravitation ; * is the velocity of light ; * and are the parameters of the parametrized post - newtonian ( ppn ) formalism which characterize possible deviation of the physical reality from general relativity theory ( in general relativity ) ; * lower case latin indices , , take values 1 , 2 , 3 ; * lower case greek indices , , take values 0 , 1 , 2 , 3 ; * repeated indices imply the einstein s summation irrespective of their positions ( e.g. and ) ; * a dot over any quantity designates the total derivative with respect to the coordinate time of the corresponding reference system : e.g. 
; * the 3-dimensional coordinate quantities ( `` 3-vectors '' ) referred to the spatial axes of the corresponding reference system are set in boldface : ; * the absolute value ( euclidean norm ) of a `` 3-vector '' is denoted as or , simply , and can be computed as ; * the scalar product of any two `` 3-vectors '' and with respect to the euclidean metric is denoted by and can be computed as ; * the vector product of any two `` 3-vectors '' and is designated by and can be computed as , where is the fully antisymmetric levi - civita symbol ; * for any two vectors and the angle between them is designated as .the paper is organized as follows . in section [ section - schwarzschild ]we present the exact differential equations .high - accuracy numerical integration of these equations is discussion in section [ section : numerical_integration ] . in section [ section - standard - pn ]we discuss the standard post - newtonian approximation and demonstrate the problem with the standard post - newtonian solution by direct comparison between numerical results and the ppn solution . in section [ section - ppn - solution ] , the formulas for the boundary problem in post - post - newtonian approximation are considered .a detailed estimation of all relevant terms is given , and simplified expressions are derived .we demonstrate by explicit numerical examples the applicability of this analytical approach for the gaia astrometric mission . in section [ section - stars ]we consider the important case of objects situated infinitely far from the observer as a limit of the boundary problem .the results are summarized in section [ section - conclusion ] . in the appendicesdetailed derivations for a number of analytical formulas are given .for the reasons given above we need a tool to calculate the real numerical accuracy of some analytical formulas for the light propagation . to this end, we consider the exact schwarzschild metric and its null geodesics in harmonic gauge . those exact differential equations for the null geodesics will be solved numerically with high accuracy ( see below ) and that numerical solution provides the required reference .as it has been already discussed in in harmonic gauge the components of the covariant metric tensor of the schwarzschild solution are given by where is the schwarzschild radius of a body with mass .the contravariant components read considering that the determinant of the metric can be computed as one can easily check that this metric satisfies the harmonic conditions ( [ harmonic - conditions ] ) .the christoffel symbols of second kind are defined as using ( [ exact_5 ] ) and ( [ exact_10 ] ) one gets and all other christoffel symbols vanish .as it has been pointed out in section ii.c of the condition of isotropy leads to the following integral of the equations of light propagation where is the coordinate direction of propagation ( ) , is the position of the photon and is the absolute value of the coordinate light velocity normalized by : . reparametrizing the geodetic equations by coordinate time ( see e.g. 
section ii.d of ) and using the christoffel symbols computed above one gets the differential equations for the light propagation in metric ( [ exact_5 ] ) : { \mbox{\boldmath } } + 2 \frac{a}{x^2 } \;\frac{2 - a}{1 - a^2 } ( { \mbox{\boldmath } } \cdot \dot{{\mbox{\boldmath } } } ) \ , \dot{{\mbox{\boldmath } } } \ , .\label{exact_25}\end{aligned}\ ] ] eq .( [ isotropic_15 ] ) for the isotropic condition together with could be used to avoid the term containing , but it does not simplify the equations .our goal is to integrate eq .( [ exact_25 ] ) numerically to get a solution for the trajectory of a light ray with an accuracy much higher than the goal accuracy of . for this numerical integration a simple fortran 95 code using quadrupole ( 128 bit )arithmetic has been written .numerical integrator odex has been adapted to the quadrupole precision .odex is an extrapolation algorithm based on the explicit midpoint rule .it has automatic order selection , local accuracy control and dense output . using forth and back integration to estimate the accuracy , each numerical integrationis automatically checked to achieve a numerical accuracy of at least in the components of both position and velocity of the photon at each moment of time .the numerical integration is first used to solve the initial value problem for differential equations ( [ exact_25 ] ) .( [ isotropic_15 ] ) should be used to choose the initial conditions .the problem of light propagation has thus only 5 degrees of freedom : 3 degrees of freedom correspond to the position of the photon and two other degrees of freedom correspond to the unit direction of light propagation .the absolute value of the coordinate light velocity can be computed from ( [ isotropic_15 ] ) .fixing initial position of the photon and initial direction of propagation one gets the initial velocity of the photon as function of and computed for given and : the numerical integration yields the position and velocity of a photon as function of time .the dense output of odex allows one to obtain the position and velocity of the photon on a selected grid of moments of time .( [ isotropic_15 ] ) holds for any moment of time as soon as it is satisfied by the initial conditions .therefore , ( [ isotropic_15 ] ) can be also used to estimate the accuracy of numerical integration at each moment of integration . for the purposes of this work we need to have an accurate solution of two - value boundary problem .that is , a solution of eq .( [ exact_25 ] ) with boundary conditions where and are two given constants , is assumed to be fixed and is unknown and should be determined by solving ( [ exact_25 ] ) .instead of using some numerical methods to solve this boundary problem directly , we generate solutions of a family of boundary problems from our solution of initial value problem ( [ num_5 ] ) .each intermediate result computed by odex during the integration with initial conditions ( [ num_5 ] ) gives us a high - accuracy solution of the corresponding two - value boundary problem ( [ num_10 ] ) : and are just taken from the intermediate steps of our numerical integration . 
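The paper integrates the exact harmonic-gauge equations with a quadruple-precision ODEX code. As a rough double-precision illustration of the same workflow (forward integration plus a forward-backward closure check), the sketch below integrates the standard post-Newtonian light-ray equation, used here only as a stand-in right-hand side for the exact one, for a ray grazing Jupiter. The GM and radius values are approximate, the initial speed is set to c (adequate at this order), and the quoted tolerances are illustrative rather than those of the quadruple-precision integration.

```python
import numpy as np
from scipy.integrate import solve_ivp

c = 299792458.0          # m/s
GM = 1.26687e17          # Jupiter, m^3/s^2 (approximate)

def rhs(t, y, gamma=1.0):
    """Post-Newtonian light-ray equation (stand-in for the exact harmonic-gauge one)."""
    x, v = y[:3], y[3:]
    r = np.linalg.norm(x)
    a = (-(1 + gamma) * GM * x / r**3
         + 2 * (1 + gamma) * GM * np.dot(x, v) * v / (c**2 * r**3))
    return np.concatenate([v, a])

R_jup = 7.1492e7                        # m, equatorial radius (approximate)
x0 = np.array([-1.0e11, R_jup, 0.0])    # start 100 Gm "upstream" of the planet
sigma = np.array([1.0, 0.0, 0.0])       # initial direction of propagation
y0 = np.concatenate([x0, c * sigma])

t_span = (0.0, 2.0e11 / c)
sol = solve_ivp(rhs, t_span, y0, method="DOP853", rtol=1e-13, atol=1e-6)

# forward-backward check of the integration error, as in the text
back = solve_ivp(rhs, t_span[::-1], sol.y[:, -1], method="DOP853", rtol=1e-13, atol=1e-6)
print("position closure error [m]:", np.abs(back.y[:3, -1] - y0[:3]).max())

# bending: angle between initial and final directions of propagation
v_end = sol.y[3:, -1]
cosang = np.dot(v_end, sigma) / np.linalg.norm(v_end)
print("deflection [microarcsec]:", np.degrees(np.arccos(np.clip(cosang, -1, 1))) * 3.6e9)
```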
in the following discussionwe will compare predictions of various analytical models for the unit direction of light propagation for a given moment of time .the reference value for these comparisons can be derived directly from the numerical integration as the accuracy of this numerically computed in our numerical integrations is guaranteed to be of the order of radiant and can be considered as exact for our purposes .in this section we will recall the standard post - newtonian approach and will compare the results for the light deflection with the accurate numerical solution of the geodetic equations described in the previous section .the well - known equations of light propagation in first post - newtonian approximation with ppn parameters have been discussed by many authors .the differential equations for the light rays are given by the post - newtonian terms of eq .( 22 ) of : the analytical solution of ( [ pn_15 ] ) can be written in the form where solution ( [ pn_20])([pn_25 ] ) satisfies the following initial conditions : from eqs .( [ pn_20])([pn_25 ] ) it is easy to derive the following expression for the unit tangent vector at observer s position ( note , in boundary problem we consider as the exact position , according to eq .( [ pn_20 ] ) ) : where , . by means of eq .( [ omega_5 ] ) given below we obtain that for the angle between and one has ( for ) where in the limit of a source at infinity one gets in order to determine the accuracy of the standard post - newtonian approach we have to compare the post - newtonian predictions of the light deflection with the results of the numerical solution of geodetic equations . here , we compare the difference between the unit tangent vector defined by ( [ pn_30 ] ) and the vector calculated from the numerical integration using ( [ eq : n - numerical ] ) .having performed extensive tests , we have found that , in the real solar system , the error of for observations made by an observer situated in the vicinity of the earth attains 16 .these results are illustrated by table [ table0 ] and fig .[ fig : numeric1 ] . table [ table0 ] contains the parameters we have used in our numerical simulations as well as the maximal deviation between and in each set of simulations .we have performed simulations with different bodies of the solar systems , assuming that the minimal impact distance is equal to the radius of the corresponding body , and the maximal distance between the gravitation body and the observer is given by the maximal distance between the gravitational body and the earth .the simulation shows that the error of is generally increasing for larger and decreasing for larger .the dependence of the error of for fixed and and increasing distance between the gravitating body and the source at is given on fig .[ fig : numeric1 ] for the case of jupiter , being taken to be minimal and to be maximal as given in table [ table0 ] .moreover , the error of is found to be proportional to which leads us to the necessity to deal with the post - post - newtonian approximation for the light propagation ..numerical parameters of the sun and giant planets are taken from . is the minimal value of the impact parameter that was used in the simulations .for each body are equal its radius . for the sun at the impact parameter is computed as . is the maximal absolute value of the position of observer that was used in the simulations . is the maximal angle between and found in the numerical tests . 
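The quantity reported in the table and in Fig. [fig:numeric1] is the angle between the analytically predicted unit direction and the numerically integrated one. For separations far below 1 arcsecond, evaluating arccos of the scalar product loses precision, so a computation based on atan2 of the cross and dot products is preferable; the numbers in the short sketch below are hypothetical and chosen only to reproduce a separation of roughly 16 microarcseconds.

```python
import numpy as np

RAD_TO_MUAS = 180.0 / np.pi * 3600.0 * 1.0e6        # radians -> microarcseconds

def angle_between(u, v):
    """Angle between two 3-vectors, numerically stable for tiny separations."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return np.arctan2(np.linalg.norm(np.cross(u, v)), float(np.dot(u, v)))

if __name__ == "__main__":
    n_numeric = np.array([0.6, 0.48, 0.64])          # unit vector from the integration
    # hypothetical analytical prediction offset by ~16 muas in an orthogonal direction
    delta = 16.0 / RAD_TO_MUAS
    n_pn = n_numeric + delta * np.array([0.0, 0.8, -0.6])
    n_pn /= np.linalg.norm(n_pn)
    print(angle_between(n_numeric, n_pn) * RAD_TO_MUAS)   # approximately 16
```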
[cols="<,^,^,^,^,^,^ " , ] as soon as we accept the equality of and for our case the only relevant step is the transformation between and .this transformation in the post - post - newtonian approximation is given by eqs .( 53)(54 ) of .introducing impact vector computed using and the position of the observer we can re - write eqs .( 53)(54 ) of as where .now we need to estimate the effect of the individual terms in eq .( [ sigma - n - stars ] ) on the angle between and . this angle can be computed from vector product .the term in ( [ sigma - n - stars ] ) proportional to obviously plays no role and can be ignored . for the other terms taking into account that and considering the general - relativistic values we get where is the sum of all terms of order in ( [ sigma - n - stars ] ) .estimate ( [ psi - estimate ] ) obviously agrees with estimate ( [ n_20 ] ) for .numerical values of this estimate can be found in table [ table2 ] .the estimates show that these terms can be neglected at the level of 1 except for the observations within 5 angular radii from the sun .omitting these terms one gets an expression valid at the level of 1 in all other cases : note that for this coincides with ( [ n_85-better])([p - sso ] ) and with ( [ n - sigma - better])([n - sigma - t ] ) .this formula together with can be applied for sources at distances larger than 1 pc to attain the accuracy of 1 .alternatively eqs .( [ n_85-better])([p - sso ] ) can be used for the same purpose giving slightly better accuracy for very close stars .however , distance information ( parallax ) is necessary to use ( [ n_85-better])([p - sso ] ) .in this report the numerical accuracy of the post - newtonian and post - post - newtonian formulas for light propagation in the parametrized schwarzschild field has been investigated .analytical formulas have been compared with high - accuracy numerical integrations of the geodetic equations . in this waywe demonstrate that the error of the standard post - newtonian formulas for the boundary problem ( light propagation between two given points ) can not be used at the accuracy level of 1 for observations performed by an observer situated within the solar system .the error of the standard formula may attain 16 .detailed analysis has shown that the error is of post - post - newtonian order .on the other hand , the post - post - newtonian terms are often thought to be of order and can be estimated to be much smaller than 1 in this case . to clarify this contradictionwe have investigated the post - post - newtonian solution for the light propagation derived in . for each individual term in relevant formulasupper estimates have been found .it turns out that in each case one post - post - newtonian term may become much larger than the other ones and can not be estimates as .these terms depend only on and do not come from the post - post - newtonian terms of the corresponding metric . 
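To put these magnitudes into perspective, the leading post-Newtonian light deflection for a source and an observer far from the gravitating body is the classical value of about 4 m / d (in radians), with m = GM/c^2 and d the impact parameter, while genuinely post-post-Newtonian contributions scale roughly like (m/d)^2. The small helper below evaluates these two scales in microarcseconds for a grazing ray; the numerical parameters are generic reference values and the (m/d)^2 number is only an order-of-magnitude indicator, not one of the sharp estimates derived in the text.

```python
import numpy as np

RAD_TO_MUAS = 180.0 / np.pi * 3600.0 * 1.0e6      # radians -> microarcseconds

def deflection_scales(gm_over_c2, d):
    """Leading post-Newtonian deflection ~ 4*m/d and the (m/d)^2 scale often
    quoted for 'native' post-post-Newtonian terms, both in microarcseconds.
    gm_over_c2 = GM/c^2 [m], d = impact parameter [m]."""
    m_over_d = gm_over_c2 / d
    return 4.0 * m_over_d * RAD_TO_MUAS, m_over_d ** 2 * RAD_TO_MUAS

if __name__ == "__main__":
    bodies = {                     # generic reference values: (GM/c^2 [m], radius [m])
        "Sun": (1.48e3, 6.96e8),
        "Jupiter": (1.4e0, 7.15e7),
    }
    for name, (m, radius) in bodies.items():
        pn, ppn_scale = deflection_scales(m, radius)   # grazing ray, d = radius
        print(f"{name}: 4m/d ~ {pn:.3e} muas, (m/d)^2 ~ {ppn_scale:.3e} muas")
```

For the Sun this gives the familiar 1.75 arcsecond limb deflection and an (m/d)^2 scale of order 1 microarcsecond, consistent with the statement that post-post-Newtonian effects only matter for observations very close to the solar limb.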
the formulas for transformations between directions , and containing both post - newtonian terms and post - post - newtonian ones that can be relevant at the level of 10 cm for the shapiro delay and 1 for the directions have been derived .the formulas are given by eqs .( [ tau_30 ] ) , ( [ sigma - k - better])([sigma - k - s ] ) , ( [ n - sigma - better])([n - sigma - t ] ) , ( [ n_85-better])([p - sso ] ) , and ( [ sigma - n - stars - simplified])([q - stars ] ) .these formulas should be considered as formulas that guarantee this numerical accuracy .the derived analytical solution shows that no `` native '' post - post - newtonian terms are relevant for the accuracy of 1 in the conditions of this note ( no observations closer than five angular radii of the sun ) .`` native '' refers here to the terms coming from the post - post - newtonian terms in the metric tensor .it is , therefore , not the post - newtonian solution itself , but the standard analytical way to convert the solution of the initial value problem into the solution for the boundary problem that is responsible for the numerical error of 16 mentioned above .let us finally note that the post - post - newtonian term in ( [ n_85-better])([p - sso ] ) is closely related to the standard gravitation lens formula .here we only note that all the formulas given in and in this paper are not valid for ( always appear in the denominators of the relevant formulas ) .on the other hand , the standard post - newtonian lens equation successfully treats this case , known as the einstein ring solution .the relation between the lens approximation and the standard post - newtonian expansion is a different topic which will be considered in a subsequent paper .this work was partially supported by the bmwi grant 50qg0601 awarded by the deutsche zentrum fr luft- und raumfahrt e.v .( dlr ) . 999 s.a .klioner , s. zschocke , gaia - ca - tn - lo - sk-002 - 1 e. hairer , s. p. norsett , g. wanner , _ solving ordinary differential equations 1 .nonstiff problems _ , springer , berlin , 1993 .moyer , t.d .( 2000 ) formulation for observed and computed values of deep space network data types for navigation , deep space communications and navigation series , jpl publication 00 - 7 .weissman , l .- a .mcfadden , t.v .johnson , encyclopedia of the solar system , ( san diego : academic ) eds .iers conventions ( 2003 ) .dennis d. mccarthy and grard petit .( iers technical note 32 ) frankfurt am main : verlag des bundesamts fr kartographie und geodsie , 2004 , 127 pp .in order to get ( [ tau_20 ] ) we write the corresponding term as where is the angle between and , and .it is easy to see that for and this immediately gives ( [ tau_20 ] ) . here and below we always give estimates that can not be improved in the sense that they are reachable for certain values of the parameters . for ( [ tau_25 ] ) we write here and below is the angle between and , and .one can show that for and and this immediately gives ( [ tau_25 ] ) .( [ estimate - rho-0 ] ) we note that again is the angle between and , and .one can show that for and \displaystyle{\frac{2}{1 + z } } , & z>1 \end{array } \right.\ , \le 2 \label{for - rho-0 - 2}\end{aligned}\ ] ] and this leads to ( [ estimate - rho-0 ] ) .the discontinuity of and its estimate at are discussed in the main text after eq .( [ rho-4 ] ) . 
the term can be written as }{r^3 } \,\right| \nonumber\\ & = & 4\,\frac{m^2}{d^2}\,{x\over d}\ , z\,(1+z)\,{1-\cos\phi\over 1+z^2 - 2z\,\cos\phi}\,\left(1+{1-z\over\sqrt{1+z^2 - 2z\,\cos\phi}}\right ) .\label{for - rho-4 - 1}\end{aligned}\ ] ] for and one has \displaystyle{4\,\frac{z}{(1+z)^2 } } , & z<{1\over 2}\ { \rm or}\ z > 1\ , .\end{array } \right .\nonumber\\ \label{for - rho-4 - 2}\end{aligned}\ ] ] this gives eq .( [ rho-4 ] ) .the function itself and its estimate ( [ for - rho-4 - 2 ] ) are again not continuous for ( implying ) .this is discussed after eq .( [ rho-4 ] ) . in order to get ( [ sigma_40 ] ) we write one can show that for and this immediately leads to ( [ sigma_40 ] ) .estimate ( [ estim_5 ] ) for is trivial . for estimate ( [ estim_10 ] ) of write \ , \biggr|\ , \left(1 + \frac{{\mbox{\boldmath } } \cdot { \mbox{\boldmath}}}{x}\right)\ , \frac{r^2}{|\,{\mbox{\boldmath } } \times { \mbox{\boldmath}}_0\,|^4}\,\frac{r^2 - ( x - x_0)^2}{2 } \nonumber\\ & = & 4\,\frac{m^2}{d^2}\,{r\over d}\ , \left(1 + \frac{{\mbox{\boldmath } } \cdot { \mbox{\boldmath}}}{x}\right)\ , \frac{r^2 - ( x - x_0)^2}{2\,r^2 } \nonumber\\ & = & 4\,\frac{m^2}{d^2}\,{r\over d}\ , \left({1-z\,\cos\phi\over \sqrt{1+z^2 - 2z\,\cos\phi}}+1\right)\ , { z\,(1-\cos\phi)\over 1+z^2 - 2z\,\cos\phi},\end{aligned}\ ] ] where again is the angle between and , and .it is easy to see that for and this immediately leads to ( [ estim_10 ] ) . for eq .( [ estim_30 ] ) we write where is the angle between vectors and . herewe used that and .for we have and this proves eq .( [ estim_30 ] ) .in order to get ( [ omega_5 ] ) we write here again is the angle between and , and .one can show that for and that immediately gives ( [ omega_5 ] ) . to derive ( [ omega-1 ] ) and ( [ omega-1-alternative ] ) we write for and one gets this gives the first estimate in ( [ omega-1 ] ) .trivial inequalities , and give the second and third estimates in ( [ omega-1 ] ) and estimate ( [ omega-1-alternative ] ) , respectively. for ( [ omega_22 ] ) we write for and one can demonstrate that & & \quad\le 15\,\pi\end{aligned}\ ] ] and this leads to ( [ omega_22 ] ) .estimates ( [ psi-0])([psi-3 ] ) are trivial . for ( [ psi - estimate ] ) we write where is the angle between vectors and . herewe use and .therefore , for one can use estimate ( [ f7 ] ) for to prove ( [ psi - estimate ] ) .
* gaia - ca - tn - lo - sz-002 - 2 * issue 2 , numerical integration of the differential equations of light propagation in the schwarzschild metric shows that in some extreme situations relevant for practical observations ( e.g. for gaia ) the well - known standard post - newtonian formula for the boundary problem has an error up to 16 . the aim of this note is to identify the reason for this error and to derive an extended formula accurate at the level of 1 as needed e.g. for gaia . the analytical parametrized post - post - newtonian solution for light propagation derived by gives the solution for the boundary problem with all analytical terms of order taken into account . giving an analytical upper estimates of each term we investigate which post - post - newtonian terms may play a role for an observer in the solar system at the level of 1 . we conclude that only one post - post - newtonian term remains important for this numerical accuracy and derive a simplified analytical solution for the boundary problem for light propagation containing all the terms that are indeed relevant at the level of 1 . the derived analytical solution has been verified using the results of a high - accuracy numerical integration of differential equations of light propagation and found to be correct at the level well below 1 for arbitrary observer situated within the solar system .
an extensive class of financial time series models is based on two interrelated processes . in particular , many models include an unobservable part that reflects a certain regime or the volatility of the process .a well - known example is given by the garch family .it is typically applied in order to model financial log returns where the unobservable volatility process drives the observable price of an asset . in the following ,let denote such a process and its unobservable counterpart .let both and be univariate .a common approach for the analysis of the extremal behavior of such interrelated processes focusses on the joint sequence .more precisely , the process is studied under the condition for and an arbitrary norm on .the connection of this approach to the concept of multivariate regular variation has been discussed extensively in .we shall follow a more natural point of view where the process is unobservable .that is , we analyze its limiting behavior under the ( observable ) event as .hence , for we focus on the limit distribution of as .we assume to be of a simple markovian structure , i.e. for some measurable mapping and some sequence of i.i.d .innovations on a measurable space .additionally , we will require the sequence of innovations to be independent of for all .based on and the innovations let the observable process be given by for some measurable mapping with .we will always assume that a stationary solution to and exists .now , by as well as by and we have a simple , but flexible model for the dependence between and . however , note that from the recursive definition in we may find a function such that , , with .hence , for ease of notation we may in the following assume that there exists an such that we may interpret and as a generalized hidden markov model which incorporates a large class of models for financial time series , cf . for the general definition .we shall discuss the garch process ( cf . ) as a specific example , i.e. and for suitable constants and . here , the sequence is the observable part , e.g. a model for financial log returns , and the series describes the conditional standard deviation ( volatility ) of the process at time . in the basic setup the innovation sequence assumed to be i.i.d .standard normal .note that the above garch(1,1 ) model satisfies and for we remark that for in the garch setup includes the arch model as a special case , cf . . for further examples , cf .also remark [ examples ] .it is well - known that under quite general assumptions about the distribution of , , and about the size of the parameters , and the stationary solutions to and share a common regularly varying ( heavy tailed ) behavior .a heavy tailed behavior of both the volatilities and the log returns is a desirable feature of financial time series as it agrees with commonly accepted stylized facts .accordingly , we will assume regular variation for the stationary solutions to both and , cf .condition 1 below .as it is not clear whether the limit in exists we will discuss those questions in more detail in sections [ firstpart ] and [ uniqueness ] . 
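For later reference, the GARCH(1,1) recursion is straightforward to simulate, and the heavy-tail (regular variation) assumption on the stationary distributions, formalized as Condition 1 below, can be probed empirically with the classical Hill estimator. The sketch below does both; the indexing convention sigma_{t+1}^2 = alpha0 + alpha1 X_t^2 + beta1 sigma_t^2, the parameter names and the choice of the number of order statistics are our own illustrative conventions, and the Hill estimate on simulated returns is only a rough numerical check, not a substitute for the theory.

```python
import numpy as np

def simulate_garch11(n, alpha0, alpha1, beta1, burn=1000, rng=None):
    """Simulate (X_t, sigma_t) from a GARCH(1,1) with standard normal innovations:
    X_t = sigma_t Z_t,  sigma_{t+1}^2 = alpha0 + alpha1 X_t^2 + beta1 sigma_t^2."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal(n + burn)
    sigma2 = np.empty(n + burn)
    x = np.empty(n + burn)
    sigma2[0] = alpha0 / max(1.0 - alpha1 - beta1, 1e-8)   # rough stationary start
    for t in range(n + burn - 1):
        x[t] = np.sqrt(sigma2[t]) * z[t]
        sigma2[t + 1] = alpha0 + alpha1 * x[t] ** 2 + beta1 * sigma2[t]
    x[-1] = np.sqrt(sigma2[-1]) * z[-1]
    return x[burn:], np.sqrt(sigma2[burn:])

def hill_estimator(sample, k):
    """Hill estimator of the tail index based on the k largest order statistics."""
    s = np.sort(np.abs(np.asarray(sample, dtype=float)))[::-1]
    logs = np.log(s[:k + 1])
    return 1.0 / np.mean(logs[:k] - logs[k])

if __name__ == "__main__":
    # sanity check on an exact Pareto sample with tail index 3
    rng = np.random.default_rng(0)
    pareto = rng.pareto(3.0, size=200_000) + 1.0
    print("Pareto check:", hill_estimator(pareto, k=2000))
    # rough tail-index estimate for simulated GARCH(1,1) log returns
    x, _ = simulate_garch11(500_000, alpha0=1e-6, alpha1=0.09, beta1=0.90, rng=1)
    print("GARCH returns:", hill_estimator(x, k=2000))
```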
in section [ specialform ] we will show that under some further assumptions the limiting distribution in has a particularly simple form which can be seen as an extension to similar findings in .more precisely , outside of the period our results will allow for a representation of the limit process in as a multiplicative random walk , cf .proposition [ mainprop ] .heuristically , if we consider the example given by and , this is the case since a large value of stems most likely from a large value of as the tail of is heavier than the tail of .now , for a large value of , behaves asymptotically like . at the same time , for , the distribution of is influenced by the extremal event of being large while all future are not influenced by this condition . in section [ mrv ]we will analyze connections of our results with multivariate regular variation of the time series .the theoretical results are applied to the garch model in section [ garch ] .they allow for a simple representation of the tail - chain in this case ( cf . proposition [ easysimulation ] ) and are used for monte carlo evaluations of some extremal characteristics in section [ simulation ] .in the following , we will assume that the stationary distribution of , , cf . , is regularly varying with index and that it is tail - balanced , i.e. the following condition holds : .\ ] ] we will study the joint extremal behavior of and under the assumption that shares the tail behavior of , i.e. there exists a constant such that analogous to condition 1.a we say that * condition 1.b * holds if the time series satisfies with in place of ( with a possibly different value of ) . furthermore , if both conditions and are satisfied we will say that * condition 1 * holds .[ sequenceistight ] let and be stationary time series given by and and let and be satisfied .then , the family of conditional distributions is tight for all .. then by and the r.h.s .is bounded by for large. therefore , a weak accumulation point of the family of distributions exists .the following lemma shows , however , that it is not necessarily unique .[ nonuniquenesslemma ] there exist time series and of the form and such that condition 1 is satisfied but has more than one weak accumulation point. let i.e. .with we have , so as well .let and for a continuous function to be described below .thus . by independence ,any weak accumulation point of equals for some weak accumulation point of , where , denotes the dirac measure in . with will construct such that has a continuum of weak accumulation points .+ let .for the sequence each interval ] it interpolates linearly between the values and , and on ] such that ) \subset [ z_{i},5z_{i } ] ] and follows from monotone convergence .this gives the result .we show that ( ii ) implies ( i ) .let and denote two weak accumulation points . in the following ,let , and .we will show that and do not depend on . here, we shall use that ( ii ) implies by lemma [ pointmass ] , which in turn implies that for any weak accumulation point . since the above sets form a generating -system , any two weak accumulation points coincide . by tightness ( cf .proposition [ sequenceistight ] ) this implies weak convergence .+ consider first and that avoid the at most countably many point masses of the coordinate projections of and .then , is the limit of along a subsequence depending on . for general , insert in .this probability equals by conditions 1 and 2 , and since the variables have point masses at most at zero , this converges to cf . 
the proof of lemma [ pointmass ] .we have shown that does not depend on .approximation from inside extends this to all and . replacing by the same computation followed by an approximation argumentshows the same for . combining these two results for with shows that does not depend on .thus , the same holds for the sets .lemma [ pointmass ] shows that prop .[ onehelpsforall ] ( ii ) , and thus ( i ) , holds if and only if .we give some examples .suppose that and are nonnegative time series and , thus .then , for some implies by breiman s theorem ( cf . ) and condition 2 holds if is continuous .if for some , then suffices to derive the same result ( cf .e.g. ( * ? ? ?* lemma 2.1 ) ) . for the special case cf .the end of the proof of lemma 3.2 . for further generalizations of breimans theorem see . by similar computationsit can be shown that under the assumptions of proposition [ onehelpsforall ] uniqueness of the weak limit in is also ensured by , with as in proposition [ segers2 ] .a key step in the argument shows that this condition implies weak convergence of as for all and .we give an example with but , i.e. may ensure uniqueness even if property ( ii ) in proposition [ onehelpsforall ] fails . to this end , let and be nonnegative i.i.d .random variables with for , where solves . with ,let for all .then , implies , thus .for let .careful calculations show that ( cf . for similar arguments ) . butwith it holds that , thus by lemma [ pointmass ] .while the existence of a limit in has been analyzed in the preceding section we will now deal with the particular form of the limit . for easy reference we shall introduce the following condition .+ there exists a random vector such that we assume that the limit distribution in condition 3 is unique in order to simplify the statement of the proposition below .note , however , remark [ hasnottobeunique ] at the end of this section for a generalization to the case of non - uniqueness .we will use conditions 1 to 3 to derive a result for the form of the limit in which is similar to proposition [ segers2 ] . while conditions 1 and 2 bear a natural resemblance to the assumptions made in , condition 3is necessary to ensure that a `` starting point '' for a tail chain exists that covers the time span from to where the and therefore are directly influenced by the event .we will see that outside of this range the behavior of the process corresponds to proposition [ segers2 ] .[ mainprop ] let and be stationary time series given by and and let conditions 1 , 2 and 3 hold .then , for all integers and we have with as in condition 3 , and cf . proposition [ segers1 ] for the definition of . here, are independent , and independent of with further , and are as in definition [ bftc ]. the proof is predecessed by a lemma and a corollary where we only assume that conditions 1 and 2 hold .[ etadeltalemma ] let . for any is such that for large enough for all .for the statement follows with .so assume that .the l.h.s . equals the second factor converges to by condition 1 .it suffices to show that the first factor becomes small for . to this end , note that by stationarity the first factor equals which by definition of equals we proceed as in the proof of lemma [ pointmass ] . 
by an application of the continuous mapping theorem with condition 2 and proposition [ segers1 ] this converges to with again, we use that the two limit random variables include as an independent factor which excludes point masses on the positive axis .now , the set is contained in for the probability of this event gets arbitrarily small for small enough .[ makesmainproofeasier ] let and be a bounded uniformly continuous function on with .for any there is such that for large enough for all .since is bounded and uniformly continuous with , there is some such that choose as in lemma [ etadeltalemma ] . for split the expected value in into two by splitting into the first expected value is bounded by by lemma [ etadeltalemma ] , and the second by note that the case and is analogous to the proof of proposition [ segers1 ] ( cf .* theorem 2.3 ) ) . since is independent of the continuous mapping theorem can be applied to derive and leads to the multiplicative structure with independent increments .let now and , and let us assume that proposition [ mainprop ] holds for .let be bounded and uniformly continuous .we will show that with as defined in the statement of the proposition .let us further assume that as soon as .note that an arbitrary function can be split up additively into two functions and with such that the second function satisfies the aforementioned assumption and the first function depends merely on .since the induction hypothesis implies that is satisfied by a function of the assumption about the structure of is no loss of generality .the idea of the proof is to substitute the condition by a corresponding event in .let .then , for large enough for all , where is chosen according to corollary [ makesmainproofeasier ] .we have with the substitution . here, the first factor converges by condition 1 .furthermore , an application of the continuous mapping theorem in connection with propositions [ segers1 ] and [ segers2 ] yields that the whole expression converges to with and as in propositions [ segers1 ] and [ segers2 ] . defining new variables with the same distribution as in the statement of the proposition and independent of , the above expression equals by the definition of .next , note that by the continuous mapping theorem this equals replacing by and again using condition 1 this becomes since both and are uniformly continuous with and this gives for the complementary expression , where with .we may thus conclude from corollary [ makesmainproofeasier ] that tends to 0 as .thus , an application of the continuous mapping theorem in connection with the induction hypothesis yields that the latter expression equals with as in the statement of the proposition . since this finishes the proof .[ hasnottobeunique ] if is a random vector such that for a sequence with the relation holds instead of condition 3 then a statement analogous to proposition [ mainprop ] holds true along the sequence .the existence of such sequences is guaranteed by condition 1 , cf .proposition [ sequenceistight ] . in order to simplify notation ( using only instead of and ) , we have assumed that holds instead of . however , under assumption the statement of proposition [ mainprop ] looks very similar , cf . , theorem 3.5.2 , for details .in this chapter we will show that condition 3 is closely related to the theory of multivariate regular variation . in a time series context this property is well explored in the case of garch processes , cf . . 
from the equivalent definitions of multivariateregular variation given in the literature we shall refer to the one used in . recall that a measurable function is said to be univariate regularly varying with index if for all .we call a random vector multivariate regularly varying if there exists a univariate regularly varying function with index and a non - degenerate , non - zero radon measure on ^d\setminus\{\mathbf{0}\} ] .note also that under an additional mild mixing condition the extremal index corresponds to the inverse of the mean cluster size of extreme values in the series , cf . for reference and for further details about the extremal index .now , focussing on the garch model as defined in section [ s : intro ] we find that where cf . and ( * ? ? ?* section 5.2 ) .in addition to the extremal index we shall in the following also consider two alternative extremal characteristics that may be evaluated by the same simulation approach .the so - called extremal coefficient function discussed in is given by for . following the notion of usual autocovariancesthe extremal coefficient function gives the conditional probability of two extreme events separated by a lag . for two reasons we will also briefly describe a modification of this concept , i.e. a probability for threshold exceedances at a lag given that is not only extreme as in , but given that is also at the beginning of an extremal cluster in a time series , cf .* chapter 5 ) for a discussion .more precisely , let where the first reason to touch on this characteristic in our study is its potential to serve as a complement to the extremal coefficient function regarding questions of cluster structures in risk management and related applications that focus on the development of extremal events .the second reason is related to the numerical simulation of ( [ thetam ] ) to ( [ gammam ] ) that will be based on the tail chain concept discussed in section [ garch ] . while the evaluation of and the extremal coefficient function requires a series of runs of either the forward or the backward tail chain it is evident from ( [ gammam ] ) that the simulation of must be based on simultaneous runs of the forward and the backward tail chain at the same time .note at this point that we are not aware of any general closed form solutions for ( [ thetam ] ) to ( [ gammam ] ) that would include the garch model parameters , not even for .our simulation setup generalizes similar methods proposed by and .the algorithm used in is restricted to single time series which satisfy the assumptions of that do , however , not hold for the log returns in a garch setting . as a generalization of our algorithm is in principle not restricted to models with symmetric innovations . furthermore , to our knowledge , there have been no approaches to simulate from the backward tail chain so far .taking the limit in to we note that as indicated above all three characteristics can be expressed via the tail chain distribution . by simulation from this distribution ( cf .proposition [ easysimulation ] ) we may therefore evaluate these quantities by monte carlo estimation . in table[ t : es ] we report the results of such a simulation study for and , ( which we use as approximations for and ) and for .the evaluation of probabilities is based on draws .we fix in the table in order to reflect the stylized fact that is close to one in many applications . 
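The Monte Carlo evaluation described above can be sketched as follows. The code assumes the familiar multiplicative representation of the squared GARCH(1,1) volatility, sigma_{t+1}^2 = A_{t+1} sigma_t^2 + alpha0 with A_t = alpha1 Z_t^2 + beta1, whose forward tail chain is a multiplicative random walk with increments A_t and whose tail index kappa solves E[A^kappa] = 1, together with the identity lim_x P(sigma_m^2 > x | sigma_0^2 > x) = E[min(1, (A_1 ... A_m)^kappa)] for the lag-m extremal dependence. Both ingredients are assumptions in the sense that Proposition [easysimulation] and the exact parametrization of the tail chain are not reproduced here verbatim.

```python
import numpy as np
from scipy.optimize import brentq

def garch_kappa(alpha1, beta1, n_mc=1_000_000, rng=None):
    """Tail index kappa of the squared GARCH(1,1) volatility, defined as the
    solution of E[(alpha1*Z^2 + beta1)^kappa] = 1 with Z standard normal,
    obtained by Monte Carlo plus root finding."""
    rng = np.random.default_rng(rng)
    a = alpha1 * rng.standard_normal(n_mc) ** 2 + beta1
    return brentq(lambda kappa: np.mean(a ** kappa) - 1.0, 1e-3, 50.0)

def lagged_exceedance_prob(m, alpha1, beta1, kappa, n_mc=200_000, rng=None):
    """Monte Carlo estimate of lim_x P(sigma_m^2 > x | sigma_0^2 > x)
    = E[min(1, (A_1*...*A_m)^kappa)], with A_t = alpha1*Z_t^2 + beta1.
    Since {sigma_m > x} = {sigma_m^2 > x^2}, the same limit applies to the
    volatility itself."""
    rng = np.random.default_rng(rng)
    a = alpha1 * rng.standard_normal((n_mc, m)) ** 2 + beta1
    return float(np.mean(np.minimum(1.0, np.prod(a, axis=1) ** kappa)))

if __name__ == "__main__":
    alpha1, beta1 = 0.09, 0.90
    kappa = garch_kappa(alpha1, beta1, rng=0)
    for m in (1, 2, 3):
        print(m, lagged_exceedance_prob(m, alpha1, beta1, kappa, rng=m))
```

For comparison with data, as in the S&P 500 example discussed below, one also needs a blocks estimator of the extremal index. It exists in several variants; the sketch implements one standard version based on the approximation P(block maximum <= u) ≈ F(u)^{b*theta}, and the toy demonstration uses a moving-maximum process with known extremal index 1/2 so that the output can be checked directly.

```python
import numpy as np

def blocks_estimator(x, block_len, quantile=0.95):
    """One standard variant of the blocks estimator of the extremal index:
    theta_hat = log(1 - K_u/K) / (b * log(1 - N/n)), where K is the number of
    blocks of length b, K_u the number of blocks with at least one exceedance
    of the empirical quantile threshold u, and N the total number of exceedances."""
    x = np.asarray(x, dtype=float)
    u = np.quantile(x, quantile)
    n = (len(x) // block_len) * block_len          # drop an incomplete last block
    blocks = x[:n].reshape(-1, block_len)
    K = blocks.shape[0]
    N = int((blocks > u).sum())
    K_u = int((blocks > u).any(axis=1).sum())
    if K_u == K or N == 0:
        return np.nan                              # estimator degenerates
    return np.log(1.0 - K_u / K) / (block_len * np.log(1.0 - N / n))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    z = rng.standard_normal(20_001)
    x = np.maximum(z[1:], z[:-1])                  # moving maximum, extremal index 1/2
    print(blocks_estimator(x, block_len=100, quantile=0.99))   # roughly 0.5
```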
the last row of table [ t : es ]is motivated by the following example .cccccccccc & & & & & & & & & ' '' '' ' '' '' + 0.99 & 0 & 1.014 & 0.570 & 0.213 & 0.139 & 0.104 & 0.251 & 0.167 & 0.125 + 0.15 & 0.84 & 1.478 & 0.207 & 0.061 & 0.063 & 0.065 & 0.153 & 0.144 & 0.139 + 0.11 & 0.88 & 1.838 & 0.245 & 0.052 & 0.042 & 0.038 & 0.110 & 0.104 & 0.104 + 0.09 & 0.90 & 2.203 & 0.304 & 0.045 & 0.035 & 0.034 & 0.089 & 0.085 & 0.081 + 0.07 & 0.92 & 2.885 & 0.397 & 0.022 & 0.020 & 0.020 & 0.055 & 0.050 & 0.053 + 0.04 & 0.95 & 5.991 & 0.854 & 0.005 & 0.004 & 0.003 & 0.007 & 0.007 & 0.006 + 0.072 & 0.920 & 2.476 & 0.317 & 0.021 & 0.020 & 0.027 & 0.063 & 0.064 & 0.066 + [ ex : garch ] we fit the garch(1,1 ) model given by ( [ garchx ] ) to a data set of log returns of the s&p 500 index from 01.04.80 to 30.03.10 ( 7569 records ) .the estimated parameters are where the ml standard errors are given in brackets .we include an evaluation of the corresponding extremal measures by the above tail chain approach in the last row of table [ t : es ] . in order to discuss the adequacy of a garch model with regard to the extremal behavior we compare the result of table [ t : es ] with the so - called blocks estimator of the extremal index for the given data . for a block length of and a threshold corresponding to the empirical 0.95 quantile the estimator yields . here, the brackets represent the simulated 95% confidence interval which is based on independent garch(1,1 ) processes of length 7569 according to . as to the choice of the block length note that extremal events occuring in two distinct blocks are assumed to be independent . here, six trading months correspond to 126 days and appear to be a reasonable order of magnitude . given that our block length is a valid choicethe fact that the result falls within the simulated confidence interval indicates a satisfactory agreement of the data set and a garch model with regard to their extremal behavior .[ [ details - on - the - construction - of - f - in - the - proof - of - lemma - nonuniquenesslemma ] ] details on the construction of in the proof of lemma [ nonuniquenesslemma ] + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + let and , i_{2 } = [ 2\frac{1}{4 } z , 3z],i_{3}=[3z,4z],i_{4}=[4z,5z] ] .the function has to satisfy thus .observe that .thus is strictly increasing and , by the above formula .this shows that is well - defined as the inverse of .+ for the existence of on as described it suffices to show that $ ] , is strictly decreasing ( note that ) which follows from .andersen , t. g. : stochastic autoregressive volatility : a framework for volatility modelling .finance * 4 * , 75102 ( 1994 ) .basrak , b. , davis , r.a . and mikosch , t. : regular variation of garch processes .stochastic processes and their applications * 99 * , 95115 ( 2002 ) basrak , b. and segers , j. : regularly varying multivariate time series . stochastic processes and their applications * 119 * , 1055 - 1080 ( 2009 ) beirlant , j. , goegebeur , y. , segers , j. and teugels , j. : statistics of extremes .wiley , chichester ( 2004 ) bingham , n. h. , goldie , c. m. and teugels , j. l. : regular variation .cambridge university press , cambridge ( 1987 ) bollerslev , t. : generalized autoregressive conditional heteroskedasticity. j. econometrics * 31 * , 307327 ( 1986 ) boman , j. and lindskog , f. 
: support theorems for the radon transform and cramr - wold theorems. technical report , kth stockholm ( 2002 ) breiman , l. : on some limit theorems similar to the arc - sin law .. appl . * 10 * , 323331 ( 1965 ) carrasco , m. and chen , x. : mixing and moment properties of various garch and stochastic volatility models . econometric theory .* 18 * , 1739 ( 2002 ) davis , r. a. and mikosch , t. : extreme value theory for garch processes , in : andersen , t.g . ,davis , r.a ., krei , j .-mikosch , t. ( editors ) : handbook of financial time series .springer , new york , 187200 ( 2009 ) de haan , l. , resnick , s.i . ,rootzn , h. and de vries , c. g. : extremal behaviour of solutions to a stochastic difference equation with applications to arch processes .stochastic processes and their applications * 32 * , 213224 ( 1989 ) denisov , d. and zwart , b. : on a theorem of breiman and a class of random difference equations .* 44 * , 10311046 ( 2007 ) ehlert , a. : characteristics for dependence in time series of extreme valuesthesis , university of gttingen ( 2010 ) embrechts , p. , klppelberg , c. and mikosch , t. : modelling extremal events .springer , berlin ( 1997 ) engle , r. : autoregressive conditional heteroscedastic models with estimates of the variance of united kingdom inflation .econometrica * 50 * , 9871007 ( 1982 ) fasen , v. , klppelberg , c. and schlather , m. : high - level dependence in time series models .extremes * 13 * , 133 ( 2010 ) glosten , r. , jagannathan , r. and runkle , d. : on the relation between expected value and the volatility of the nominal excess return on stocks .journal of finance * 48 * , 17791801 ( 1993 ) gomes , m. i. , de haan , l. and pestana , d. : joint exceedances of the arch process .. prob . * 41 * , 919926 .( 2004 ) ( correction : * 43 * , 1206 .( 2006 ) ) janen , a. : on some connections between light tails , regular variation and extremes .thesis , university of gttingen ( 2010 ) kallenberg , o. : foundations of modern probability .springer , new york ( 2002 ) laurini , f. and tawn , j. a. : the extremal index for garch(1,1 ) processes .extremes , published online first , doi : 10.1007/s10687 - 012 - 0148-z ( 2012 ) nelson , d.b . : stationarity and persistence in the garch(1,1 ) model . econometric theory ,* 6 * , 318334 ( 1990 ) resnick , s. i. : heavy - tail phenomena .springer , new york ( 2007 ) segers , j. : multivariate regular variation of heavy - tailed markov chains , institut de statistique dp0703 , available on arxiv.org as math.pr/0701411 ( 2007 ) smith , r. l. : the extremal index for a markov chain .* 29 * , 3745 ( 1992 ) taylor , s. : modelling financial time series .wiley , chichester ( 1986 ) trapletti , a. and hornik , k. : tseries : time series analysis and computational finance , url ` http://cran.r-project.org/package=tseries ` , r package version 0.10 - 18 ( 2009 ) wuertz , d. , et .see the source file : fextremes : rmetrics - extreme financial market data , url ` http://cran.r-project.org/package=fextremes ` , r package version 2100.77 ( 2009 )
we study the behavior of a real - valued and unobservable process under an extreme event of a related process that is observable . our analysis is motivated by the well - known garch model which represents two such sequences , i.e. the observable log returns of an asset as well as the hidden volatility process . our results complement the findings of segers [ j. segers , multivariate regular variation of heavy - tailed markov chains , arxiv : math/0701411 ( 2007 ) . available online : http://arxiv.org/abs/math/0701411 ] and smith [ r. l. smith , the extremal index for a markov chain . j. appl . prob . ( 1992 ) ] for a single time series . we show that under suitable assumptions their concept of a tail chain as a limiting process is also applicable to our setting . furthermore , we discuss existence and uniqueness of a limiting process under some weaker assumptions . finally , we apply our results to the garch case .
in massive mimo systems a base station ( bs ) with a large number of antennas ( , a few hundreds ) communicates with several user terminals ( , a few tens ) on the same time - frequency resource . there has been recent interest in massive mimo systems due to their ability to increase spectral and energy efficiency even with very low - complexity multi - user detection and precoding .however , physically building cost - effective and energy - efficient large arrays is a challenge . specifically in the downlink , the power amplifiers ( pas ) used in the bsshould be highly power - efficient .due to the trade - off between the efficiency and linearity of the pa , highly efficient but non - linear pas must be used .the efficiency of the pa is related to the amount of backoff necessitated ( to reduce non - linear distortion ) by the peak to average ratio of the input waveform . for minimum backoff and hence maximum efficiency, the input waveform should have a constant or nearly _ constant envelope _ ( ce).[multiblock footnote omitted ] with this motivation , in had proposed a ce precoding algorithm for the frequency - flat multi - user mimo broadcast channel , which was then extended to frequency - selective channels in . with larger than and i.i.d .rayleigh fading , numerical studies done in both these papers revealed that in order to achieve a desired per - user ergodic information rate , the proposed ce precoding algorithm needed only about db extra total transmit power compared to that required under the less stringent and commonly used total average transmit power constraint .it was also observed that even under a stringent per - antenna ce constraint , an array gain is achievable , i.e. , with every doubling in the number of bs antennas the total transmit power can be reduced by db while maintaining a fixed information rate to each user ( assuming that the number of users is fixed ) . however , in the ce precoding algorithm proposed in the phase angle of the complex baseband signal transmitted from each bs antenna is unconstrained ( i.e. , it s principal value lies in ] for a fixed ( the special case of was considered in ) .it is shown that the complexity of the proposed ce algorithm is independent of and is the same as the algorithm proposed in .numerical studies on the i.i.d .rayleigh fading channel suggest that an array gain is achieved even under the additional phase angle variation constraint . to achieve a desired per - user information rate , the extra total transmit power required under the time variation constraint when compared to the special case of no time variation constraint (i.e. 
, ) , is _ small _ when is close to and .for example , with , , single - antenna users and a desired per - user rate of bit - per - channel - use ( bpcu ) , the magnitude of the phase variation is limited to and the extra transmit power required is less than db .in the previous works and also in this paper , without loss of generality we assume single - antenna users.[multiblock footnote omitted ] it is assumed that the bs has knowledge of the channel vector to each user.[multiblock footnote omitted ] the complex baseband constant envelope signal transmitted from the -th bs antenna at time is of the form & = & \sqrt{\frac{p_t}{n } } \ , e^{j \theta_i[t ] } \,\,\,,\,\,\ , i = 1,2,\cdots , n,\end{aligned}\ ] ] where , is the total power transmitted from the bs antennas and \in [ -\pi \,,\ , \pi) ] .the signal received at the -th user ( ) at time is given by & = & \sqrt{\frac{p_t}{n } } \,\ , \sum_{i=1}^n \, \sum_{l=0}^{l-1 } \ , h_{k , i}[l ] e^{j \theta_i[t - l ] } \,\,+\,\ , w_k[t ] \,,\,\end{aligned}\ ] ] where \sim { \mathcal c}{\mathcal n}(0,\sigma^2) ] . in the following webriefly summarize the ce precoding algorithm proposed in .suppose that , at time instances we are interested in communicating the information symbol \in { \mathcal u}_k \subset { \mathbb c} ] . also , let = ( \sqrt{e_1 } u_1[t ] , \cdots , \sqrt{e_m } u_m[t ] ) \ , \in \ , { \mathcal u}_1 \times \cdots \times { \mathcal u}_m ] . in find the transmit phase angles as a solution to the optimization problem in ( [ nls_joint_eqn ] ) , where = ( \theta_1^u[t ] , \cdots , \theta_n^u[t ] ) \,,\ ,t=1,\ldots , t ] .the main idea in ( [ nls_joint_eqn ] ) is to choose the transmit phase angles in a way so as to minimize the energy of the difference between the received noise - free signal and the intended information symbol for all users .note that the objective function in ( [ nls_joint_eqn ] ) is a function of variables ( phase angles transmitted at time instances ) .finding an exact solution to the problem in ( [ nls_joint_eqn ] ) is prohibitively complex , and therefore in we had proposed a low - complexity near - optimal solution to ( [ nls_joint_eqn ] ) .the ce precoding idea is primarily based on our previous work in ( for frequency - flat channels ) where we had analytically shown that for a broad class of frequency - flat channels ( including i.i.d .fading ) , for a fixed and fixed symbol energy levels ( ) , by having a sufficiently large it is always possible to choose the transmit phase angles in such a way that the received signals at the users are arbitrarily close to the desired information symbols .note that for the ce precoding method , the transmit phase angles can take any value in the interval ( see ( [ nls_joint_eqn ] ) ) .therefore it is possible that between consecutive time instances , the phase angle transmitted from a bs antenna could change by a large magnitude , which will distort the transmit signal at the output of the pa . to address this issue ,in this paper we propose a ce precoder where for each bs antenna the difference between the phase angles transmitted in consecutive time instances is constrained to lie in the interval ] for all .this constraint ensures that the maximum variation in the transmitted phase angle between consecutive time instances is at most ( e.g. , with the maximum phase angle variation is only ) . 
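The objective in (nls_joint_eqn) is the summed squared error between the normalised noise-free received samples and the scaled information symbols, and it is also the quantity minimized under the additional phase-variation constraint introduced in this paper. A direct vectorised evaluation of it is useful for monitoring the iterative algorithm described next; the sketch below uses our own array conventions (M users, N antennas, L channel taps, and a phase array whose first L-1 columns stand for the phases transmitted before the current block), which are assumptions about bookkeeping rather than anything prescribed by the text.

```python
import numpy as np

def ce_rx(h, theta):
    """(1/sqrt(N)) * sum_{i,l} h[k,i,l] * exp(j*theta_ext[i, t-l]) for t = 1..T.
    h: (M, N, L) channel taps; theta: (N, T+L-1) phase angles whose first
    L-1 columns act as the (known) phases transmitted before the block."""
    M, N, L = h.shape
    T = theta.shape[1] - (L - 1)
    e = np.exp(1j * theta)
    y = np.zeros((M, T), dtype=complex)
    for l in range(L):
        y += h[:, :, l] @ e[:, L - 1 - l:L - 1 - l + T]
    return y / np.sqrt(N)

def ce_objective(h, theta, u, energies):
    """NLS objective of (nls_joint_eqn): sum_{k,t} |ce_rx - sqrt(E_k) u_k[t]|^2."""
    target = np.sqrt(np.asarray(energies))[:, None] * u          # (M, T)
    return float(np.sum(np.abs(ce_rx(h, theta) - target) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M, N, L, T = 4, 64, 3, 50
    h = (rng.standard_normal((M, N, L)) + 1j * rng.standard_normal((M, N, L))) / np.sqrt(2 * L)
    u = np.exp(1j * rng.uniform(0, 2 * np.pi, (M, T)))           # unit-modulus symbols
    theta = rng.uniform(-np.pi, np.pi, (N, T + L - 1))
    print(ce_objective(h, theta, u, energies=np.ones(M)))
```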
in this paper , under the time - variation constraint we propose an optimization problem to find the transmit phase angles for given information symbols \,,\ , k=1,\ldots , m \,,\ , t=1,\ldots , t ] and minimize as a function of ] with its optimum value and then move onto the second sub - iteration where we minimize as a function of ] ( i.e. , the phase angle transmitted from the -th bs antenna in the -th time instance ) while keeping the other variables fixed . ,\cdots,\theta^{u}[t ] ) & = & \arg \hspace{-5 mm } \min_{\substack{\\ ( \theta[1 ] , \theta[2 ] , \cdots , \theta[t ] ) \\ \vert \theta_i[t ] - \theta_i[t-1 ] \vert \ , \leq \ , \alpha \pi \\ i=1,\ldots , n \,,\ , t=1,\ldots , t } } \ , f(\theta_1[1 ] , \cdots , \theta_n[1 ] , \cdots , \theta_1[t ] , \cdots , \theta_n[t]).\end{aligned}\ ] ] & = & \arg \hspace{-10 mm } \min_{\substack { \\ \\ \theta_r[q ] \\ \hspace{3 mm } \vert \theta_r[q ] \ , - \ , \theta_r[q-1 ] \vert \ , \leq \ , \alpha \pi } } \hspace{-16 mm } \sum_{t = q}^{\min(t , ( q + l -1 ) ) } \sum_{k=1}^m { \bigg \vert } s_{r , q}(k , t ) + \frac{h_{k , r}[t - q ] e^{j \theta_r[q]}}{\sqrt{n } } { \bigg \vert}^2 \,\,,\,\ , \mbox{where } \,\ , s_{r , q}(k , t ) \ , { \stackrel { \delta } { = } } \ , { \big ( } \sum \limits_{i=1}^n \hspace{-7 mm } \sum \limits_{\substack{l=0 \,,\ , \\ \hspace{6 mm } ( i , l ) \ne ( r,(t - q))}}^{l-1 } \hspace{-6 mm } \frac{h_{k , i}[l ] e^{j \theta_i[t - l ] } } { \sqrt{n } } { \big ) } - \sqrt{e_k } u_k[t ] \nonumber \\ & = & \arg \min_{\substack{\theta_r[q ] \\ ( \theta_r[q ] \ , - \ , \theta_r[q-1 ] ) \ , \in \ , [ -\alpha \pi \,,\ , \alpha \pi ] } } \ , \sum_{t = q}^{\min(t , ( q + l -1 ) ) } \sum_{k=1}^m { \bigg \vert } s_{r , q}(k , t ) + \frac{h_{k , r}[t - q ] e^{j \theta_r[q-1 ] } \ , e^{j ( \theta_r[q ] - \theta_r[q-1 ] ) } } { \sqrt{n } } { \bigg \vert}^2 \nonumber \\ & = & \theta_r[q - 1 ] \ , + \ , \arg \min_{\substack{\omega \ , \in \ , [ -\alpha \pi \,,\ , \alpha \pi ] } } \ , \sum_{t = q}^{\min(t , ( q + l -1 ) ) } \sum_{k=1}^m { \bigg \vert } s_{r , q}(k , t ) + \frac{h_{k , r}[t - q ] e^{j \theta_r[q-1 ] } \ , e^{j \ ,\omega } } { \sqrt{n } } { \bigg \vert}^2 \nonumber \\ & = & \theta_r[q - 1 ] \ , + \ , \arg \max_{\omega \in [ -\alpha \pi \,,\ , \alpha \pi ] } \ , \re { \bigg ( } \ , e^{j \omega } \ , { \big \ { } \ , - \ , \sum_{t = q}^{\min(t , ( q + l -1 ) ) } \sum_{k=1}^m \ , h_{k , r}[t - q ] e^{j \theta_r[q-1 ] } s^{*}_{r , q}(k , t ) { \big \ } } { \bigg ) } \nonumber \\ & = & \theta_r[q - 1 ] \ , + \ , \left \ { \begin{array}{cc } \alpha \pi \ , , & \,\,\ , - \pi \ , \leq \ , c \ ,< \ , - \alpha \pi \\ - c \ , , & \,\,\ , - \alpha \pi \ , \leq \ , c \ , < \ , \alpha \pi \\ - \alpha \pi \ , , & \,\,\ , \alpha \pi \ , \leq \ , c \leq \pi \end{array } \right . \,\,,\,\ , \mbox{where } \,\ , c \ , { \stackrel { \delta } { = } } \ , \mbox{arg } { \big ( } - \hspace{-4 mm } \sum_{t = q}^{\min(t , ( q + l -1 ) ) } \sum_{k=1}^m \ , h_{k , r}[t - q ] e^{j \theta_r[q-1 ] } s^{*}_{r , q}(k , t ) { \big ) } .\end{aligned}\ ] ] since the channel is causal and has a memory of time instances , it follows that in the summation on the right hand side of the definition of in ( [ nls_joint_eqn ] ) , only the terms corresponding to depend on ] is given by ( [ new_eqn_1 ] ) .in ( [ new_eqn_1 ] ) , for any complex number , \ , | \ , e^{j \phi } = z/\vert z \vert\} ] depends on .note that , for every different we need not recalculate explicitly using the sum in the r.h.s . 
of its definition in ( [ new_eqn_1 ] ) .instead , can be calculated by subtracting the current value of e^{j \theta_r[q]}/\sqrt{n} ] , i.e. e^{j \theta_r[q]}}{\sqrt{n}}.\end{aligned}\ ] ] note that with change in ] is the new updated value of the phase angle to be transmitted from the -th bs antenna at time instance , and is given by ( [ new_eqn_1 ] ) .after the last sub - iteration of an iteration ( i.e. , where we update ] ) of the next iteration .it is clear that the value of the objective function reduces monotonically from one sub - iteration to the next .numerically , it has been observed that the value of the objective function converges in a few iterations ( ) and further iterations lead to little reduction in the value of .further , the value that the objective function converges to , is observed to be small when .the complexity of each sub - iteration is and is independent of ( see ( [ new_eqn_1 ] ) ) .since we update phase angles in each iteration , the total complexity of each iteration is . with a fixed number of iterations , the overall complexity of the proposed algorithmis , i.e. , a per - channel - use complexity of , which is the same as that of the algorithm proposed in to solve ( [ nls_joint_eqn ] ) .for a given set of information symbol vectors \,,\,t=1,\cdots , t ] denote the output phase angles of the proposed iterative ce precoding algorithm ( see section [ sec - ce ] ) .let ] behaves like multi - user interference ( mui ) .also , let , \cdots , y_k[t])^t ] , , \cdots , i_k^u[t])^t ] .let \} ] to be i.i.d . i.e. , proper complex gaussian having zero mean and unit variance . where denotes the differential entropy operator , and the inequality in step ( a ) is due to the fact that conditioning reduces entropy .the inequality in step ( b ) follows from the fact that the proper complex gaussian distribution is the entropy maximizer , i.e. , , where ] , where the expectation is over . substituting this expression for in ( [ inf_rate_bnd ] ), we get \ ,+ \ , \frac{\sigma^2}{p_t } { \bf i } { \big \vert } } { t } { \bigg ] } ^+\end{aligned}\ ] ] here ^+ \ , { \stackrel { \delta } { = } } \ , \max ( 0 , x) ] ( expectation is over ).[multiblock footnote omitted ]we consider a frequency selective channel with a uniform power delay profile , i.e. , \vert^2 ] = 1/l \,,\,l=0,1,\cdots,(l-1) ] are i.i.d .rayleigh faded , i.e. , proper complex gaussian ( mean , variance ) .the ergodic sum rate ] .subsequently , we refer to this rate achieved by each user as the per - user ergodic information rate . in fig .[ fig_3 ] we plot the minimum required by the proposed ce precoder to achieve a per - user information rate of bpcu as a function of increasing with fixed users and .the special case of corresponds to an unconstrained ( time - variation ) ce precoder and therefore has the best performance .we see that for a given , more transmit power is required for a smaller .this is expected since a smaller places a more stringent constraint on the time - variation of the transmitted phase angles , which reduces the information rate . however , even with ( i.e. , limiting the magnitude of the time variation between consecutive time instances to be less than ) , the extra transmit power required when compared to is less than db when is sufficiently larger than ( in this case ). also , for a fixed the extra transmit power required when compared to , decreases with increasing . 
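Returning to the algorithm itself, the closed-form per-coordinate update in (new_eqn_1) together with the residual bookkeeping described above gives the following sketch of one sweep of the iterative procedure. It follows the array conventions of the objective sketch given earlier and is our own arrangement of the method, not reference code; in particular the treatment of the phases transmitted before the block (the first L-1 columns of theta, which requires L >= 2) is an assumption.

```python
import numpy as np

def ce_sweep(h, theta, u, energies, alpha):
    """One sweep of the cyclic coordinate-descent CE precoder: for q = 1..T and
    r = 1..N, update theta_r[q] by the closed form (new_eqn_1) subject to
    |theta_r[q] - theta_r[q-1]| <= alpha*pi.
    h: (M, N, L); theta: (N, T+L-1) with the first L-1 columns fixed (phases
    sent before the block, assumed L >= 2); u: (M, T) information symbols.
    Returns the updated theta (modified in place) and the objective value."""
    M, N, L = h.shape
    T = theta.shape[1] - (L - 1)
    sqrtN = np.sqrt(N)
    target = np.sqrt(np.asarray(energies))[:, None] * u            # (M, T)

    # residual R[k, t] = (1/sqrt(N)) sum_{i,l} h[k,i,l] e^{j theta_i[t-l]} - target
    e = np.exp(1j * theta)
    R = -target.astype(complex)
    for l in range(L):
        R += h[:, :, l] @ e[:, L - 1 - l:L - 1 - l + T] / sqrtN

    for q in range(1, T + 1):                  # time index within the block, 1..T
        col = q + L - 2                        # column of theta_r[q] on the extended axis
        t_hi = min(T, q + L - 1)               # last output time affected by theta[:, col]
        taps = np.arange(t_hi - q + 1)         # l = t - q for t = q..t_hi
        for r in range(N):
            hr = h[:, r, taps]                                     # (M, number of taps)
            old = np.exp(1j * theta[r, col])
            prev = np.exp(1j * theta[r, col - 1])
            # s_{r,q}(k, t): residual with the (r, t-q) term removed
            s = R[:, q - 1:t_hi] - hr * old / sqrtN
            c = np.angle(-np.sum(hr * prev * np.conj(s)))
            if c < -alpha * np.pi:
                omega = alpha * np.pi
            elif c < alpha * np.pi:
                omega = -c
            else:
                omega = -alpha * np.pi
            theta[r, col] = theta[r, col - 1] + omega
            new = np.exp(1j * theta[r, col])
            R[:, q - 1:t_hi] = s + hr * new / sqrtN                # residual bookkeeping

    return theta, float(np.sum(np.abs(R) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M, N, L, T, alpha = 4, 64, 3, 50, 0.25
    h = (rng.standard_normal((M, N, L)) + 1j * rng.standard_normal((M, N, L))) / np.sqrt(2 * L)
    u = np.exp(1j * rng.uniform(0, 2 * np.pi, (M, T)))
    theta = np.zeros((N, T + L - 1))
    for it in range(5):
        theta, obj = ce_sweep(h, theta, u, np.ones(M), alpha)
        print(it, obj)                         # objective value after each sweep
```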
from the figure, it is also observed that irrespective of the value of , for sufficiently large the required reduces by roughly db with every doubling in ( i.e. , an array gain with bs antennas ) . for the sake of completeness, we have also considered the sum rate achieved under only an average total transmit power constraint ( tapc ) which is clearly less stringent than the per - antenna ce constraint . under tapc, we have plotted an achievable sum rate ( zf - zero - forcing precoder ) and an upper bound on the sum capacity ( cooperative users ) .it can be observed that even with , the extra total transmit power required by the ce precoder when compared to the sum capacity achieving precoder under tapc , is roughly db when .ce precoding with non - linear pas is beneficial , since this db loss is less than the gain in power efficiency that one can achieve by using a non - linear power - efficient pa instead of using a highly linear inefficient pa . 1 f. rusek , d. persson , b. k. lau , e. g. larsson , o. edfors , f. tufvesson and t. l. marzetta , `` scaling up mimo : opportunities and challenges with very large arrays , '' _ ieee signal process . mag ._ , vol . 30 , no . 1 ,40 - 46 , jan . 2013 .e. g. larsson , o. edfors , f. tufvesson and t. l. marzetta , `` massive mimo for next generation wireless systems , '' _ ieee commun ._ , vol .52 , no . 2 , pp . 186 - 195 , feb .2014 . t. l. marzetta , `` noncooperative cellular wireless with unlimited number of base station antennas , '' _ ieee trans .wireless commun ._ , vol . 9 , no . 11 , pp3590 - 3600 , nov . 2010 .h. q. ngo , e. g. larsson and t. l. marzetta , `` energy and spectral efficiency of very large multi - user mimo systems , '' _ ieee trans ._ , vol .61 , no . 4 , april 2013 .s. k. mohammed , `` impact of transceiver power consumption on the energy efficiency of zero - forcing detector in massive mimo systems , '' to appear in _ ieee trans .commun . _ , 2014 .s. c. cripps , _ rf power amplifiers for wireless communications , _ artech publishing house , 1999 .s. k. mohammed and e. g. larsson , `` per - antenna constant envelope precoding for large multi - user mimo systems , '' _ ieee trans ._ , vol .61 , no . 3 , pp. 1059 - 1071 , march 2013 .s. k. mohammed and e. g. larsson , `` constant - envelope multi - user precoding for frequency - selective massive mimo systems , '' _ ieee wireless communications letters _ , vol . 2 , no . 5 , pp .547 - 550 , october 2013 .t. m. cover , _ elements of information theory , _ _ john wiley and sons _ , second edition , 2006 .f. d. nesser , j. l. massey , `` proper complex random processes with applications to information theory , '' _ ieee trans .info . theory _ ,39 , no . 4 , july 1993 .
we consider downlink precoding in a frequency - selective multi - user massive mimo system with highly efficient but non - linear power amplifiers at the base station ( bs ) . a low - complexity precoding algorithm is proposed , which generates constant - envelope ( ce ) transmit signals for each bs antenna . to avoid large variations in the phase angle transmitted from each antenna , the difference of the phase angles transmitted in consecutive channel uses is limited to $ ] for a fixed . to achieve a desired per - user information rate , the extra total transmit power required under the time variation constraint when compared to the special case of no time variation constraint ( i.e. , ) , is _ small _ for many practical values of . in a i.i.d . rayleigh fading channel with bs antennas , single - antenna users and a desired per - user information rate of bit - per - channel - use , the extra total transmit power required is less than db when . massive mimo , constant envelope .
calibration is ubiquitous in all fields of science and engineering . it is an essential step to guarantee that the devices measure accurately what scientists and engineers want . if sensor devices are not properly calibrated , their measurements are likely of little use to the application .while calibration is mostly done by specialists , it often can be expensive , time - consuming and sometimes even impossible to do in practice .hence , one may wonder whether it is possible to enable machines to calibrate themselves automatically with a smart algorithm and give the desired measurements .this leads to the challenging field of _ self - calibration _ ( or _ blind calibration _ ) .it has a long history in imaging sciences , such as camera self - calibration , blind image deconvolution , self - calibration in medical imaging , and the well - known phase retrieval problem ( phase calibration ) .it also plays an important role in signal processing and wireless communications .self - calibration is not only a challenging problem for engineers , but also for mathematicians .it means that one needs to estimate the calibration parameter of the devices to adjust the measurements as well as recover the signal of interests .more precisely , many self - calibration problems are expressed in the following mathematical form , where is the observation , is a partially unknown sensing matrix , which depends on an unknown parameter and is the desired signal .an uncalibrated sensor / device directly corresponds to _ imperfect sensing _ " , i.e. , uncertainty exists within the sensing procedure and we do not know everything about due to the lack of calibration .the purpose of self - calibration is to resolve the uncertainty i.e. , to estimate in and to recover the signal at the same time .the general model is too hard to get meaningful solutions without any further assumption since there are many variants of the general model under different settings . in, may depend on in a nonlinear way , e.g. , can be the unknown orientation of a protein molecule and is the desired object ; in phase retrieval , is the unknown phase information of the fourier transform of the object ; in direction - of - arrival estimation represents unknown offset , gain , and phase of the sensors .hence , it is impossible to resolve every issue in this field , but we want to understand several scenarios of self - calibration which have great potential in real world applications . among all the cases of interest , we assume that _ linearly _ depends on the unknown and will explore three different types of self - calibration models that are of considerable practical relevance .however , even for linear dependence , the problem is already quite challenging , since in fact we are dealing with _ bilinear ( nonlinear ) inverse problems_. all those three models have wide applications in imaging sciences , signal processing , wireless communications , etc . , which will be addressed later .common to these applications is the need for _ fast _ self - calibration algorithms , which ideally should be accompanied by theoretical performance guarantees .we will show under certain cases , these self - calibration problems can be solved by _ linear least squares _ exactly and efficiently if no noise exists , which is guaranteed by rigorous mathematical proofs .moreover , we prove that the solution is also robust to noise with tools from random matrix theory . by assuming that linearly depends on , becomes a bilinear inverse problem , i.e. 
, we want to estimate and from , where is the output of a bilinear map from bilinear inverse problems , due to its importance , are getting more and more attentions over the last few years . on the other hand , they are also notoriously difficult to solve in general .bilinear inverse problems are closely related to low - rank matrix recovery , see for a comprehensive review. there exists extensive literature on this topic and it we could not possible do justice to all these contributions . instead we will only highlight some of the works which have inspired us . blind deconvolution might be the most important examples of bilinear inverse problems , i.e. , recovering and from , where " stands for convolution .if both and are inside known low - dimensional subspaces , the blind deconvolution can be rewritten as , where , and " denotes the fourier transform . in the inspiring work , ahmed , romberg and recht apply the lifting " techniques and convert the problem into estimation of rank-1 matrix .it is shown that solving a convex relaxation enables recovery of under certain choices of and .following a similar spirit , uses lifting " combined with a convex approach to solve the scenarios with sparse and studies the so called blind deconvolution and blind demixing " problem .the other line of blind deconvolution follows a nonconvex optimization approach . in ,ahmed , romberg and krahmer , using tools from generic chaining , obtain local convergence of a sparse power factorization algorithm to solve this blind deconvolution problem when and are sparse and and are gaussian random matrices , . under the same setting as , lee et al . propose a projected gradient descent algorithm based on matrix factorizations and provide a convergence analysis to recover sparse signals from subsampled convolution .however , this projection step can be hard to implement . as an alternative, the expensive projection step is replaced by a heuristic approximate projection , but then the global convergence is not fully guaranteed .both achieve nearly optimal sampling complexity . proves global convergence of a gradient descent type algorithm when is a deterministic fourier type matrix and is gaussian .results about identifiability issue of bilinear inverse problems can be found in .another example of self - calibration focuses on the setup , where .the difference from the previous model consists in replacing the subspace assumption by multiple measurements .there are two main applications of this model .one application deals with blind deconvolution in an imaging system which uses randomly coded masks .the measurements are obtained by ( subsampled ) convolution of an unknown blurring function with several random binary modulations of one image . both and developed convex relaxing approaches ( nuclear norm minimization ) to achieve exact recovery of the signals and the blurring function .the other application is concerned with calibration of the unknown gains and phases and recovery of the signal , see e.g. .cambareri and jacques propose a gradient descent type algorithm in and show convergence of the iterates by first constructing a proper initial guess .an empirical study is given in when is sparse by applying an alternating hard thresholding algorithm .recently , study the blind deconvolution when inputs are changing .more precisely , the authors consider where each belongs to a different known subspace , i.e. 
, .they employ a similar convex approach as in to achieve exact recovery with number of measurements close to the information theoretic limit .an even more difficult , and from a practical viewpoint highly relevant , scenario focuses on _ self - calibration from multiple snapshots _ . here, one wishes to recover the unknown gains / phases and a signal matrix ] with , and ( ii ) is a steinhaus sequence ( uniformly distributed over the complex unit circle ) with .we pick those choices of because we know that the image we try to reconstruct has only non - negative values . thus , by choosing to be non - negative , there are fewer cancellation in the expression , which in turn leads to a smaller condition number and better robustness .the corresponding results of our simulations are shown in figure [ fig : pos - repeat-1 ] and figure [ fig : pos - repeat-2 ] , respectively . in both cases , we only measure the relative error of the recovered image .0.5 cm in figure [ fig : pos - repeat-1 ] , we can see that both uncalibrated / calibrated image are quite good . herethe uncalibrated image is obtained by first applying the inverse fourier transform and the inverse of the mask to each and then taking the average of samples .we explain briefly why the uncalibrated image still looks good .note that where is the pseudo inverse of here is actually a diagonal matrix with random entries . as a result ,each is the sum of rank-1 matrices with random coefficients and is relatively small due to many cancellations inside it .moreover , showed that most 2-d signals can be reconstructed within a scale factor from only knowing the phase of its fourier transform , which applies to the case when is positive .however , when the unknown calibration parameters are complex variables ( i.e. , we do not know much about the phase information ) , figure [ fig : pos - repeat-2 ] shows that the uncalibrated recovered image is totally meaningless .our approach still gives a quite satisfactory results even at a relatively low snr of 5db .the second experiment is about blind deconvolution in random mask imaging .suppose we observe the convolution of two components , where both , the filter and the signal of interests are unknown .each is a random -mask .the blind deconvolution problem is to recover .moreover , here we assume that the filter is actually a low - pass filter , i.e. , is compactly supported in an interval around the origin , where is the fourier transform .after taking the fourier transform on both sides , the model actually ends up being of the form with where is a fat " partial dft matrix and is the nonzero part of . in our experiment, we let be a image and be a 2-d gaussian filter of size as shown in figure [ fig : random - mask-1 ] .0.5 cm 0.5 cm in those experiments , we choose since both and are nonnegative .figure [ fig : random - mask-2 ] shows the recovered image from sets of noiseless measurements and the performance is quite satisfactory . 
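the recovery step in all of these experiments is the linear least squares approach advertised above . since the explicit formulas are lost in this extract , the sketch below uses a generic gain - calibration model y_l = diag(d) a_l x ( an illustrative assumption , not necessarily the exact setup behind the figures ) : writing s = 1/d entrywise , each snapshot gives a linear equation in ( s , x ) , and the pair is recovered , up to one global scale factor , as the null vector of the stacked system .

```python
import numpy as np

# minimal sketch (illustrative model and dimensions, not the article's exact setup):
# measurements y_l = diag(d) @ A_l @ x with unknown gains d and unknown signal x.
# with s = 1/d, each snapshot gives diag(y_l) @ s - A_l @ x = 0, which is linear in
# z = [s; x]; the solution is the null vector of the stacked system, found via an svd,
# up to one global scaling factor.

rng = np.random.default_rng(0)
m, n, p = 32, 16, 8                                  # sensors, signal length, snapshots
d = np.exp(0.3 * rng.standard_normal(m))             # unknown positive gains
x = rng.standard_normal(n)                           # unknown signal
A = [rng.standard_normal((m, n)) for _ in range(p)]
y = [np.diag(d) @ A_l @ x for A_l in A]

M = np.vstack([np.hstack([np.diag(y_l), -A_l]) for y_l, A_l in zip(y, A)])
z = np.linalg.svd(M)[2][-1]                          # right singular vector of the smallest singular value
s_hat, x_hat = z[:m], z[m:]

alpha = (x_hat @ x) / (x @ x)                        # undo the scaling ambiguity
print("rel. error in x:", np.linalg.norm(x_hat / alpha - x) / np.linalg.norm(x))
print("rel. error in d:", np.linalg.norm(alpha / s_hat - d) / np.linalg.norm(d))
```

in the noiseless case both relative errors are at machine precision , which is the exact - recovery behaviour claimed above .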
herethe oversampling ratio is we can see from figure [ fig : random - mask-3 ] that the blurring effect has been removed while the noise still exists .that is partially because we did not impose any denoising procedure after the deconvolution .we choose to be random hadamard matrices with and and with being a positive / steinhaus sequence , as we did previously .each is sampled from standard gaussian distribution .we choose if is uniformly distributed over ] ; right : is a random vector with each entry uniformly distributed over unit circle.,width=283 ] where , and is a gaussian random matrix . left: is a random vector with each entry uniformly distributed over $ ] ; right : is a random vector with each entry uniformly distributed over unit circle.,width=283 ]for each subsection , we will first give the result of noiseless measurements .we then prove the stability theory by using the result below .[prop : perturb ] suppose that is a consistent and overdetermined system .denote as the least squares solution to with . if , there holds , to apply the proposition above , it suffices to bound and .let us start with when and denote and , then we rewrite as where by definition , our goal is to find out the smallest and the largest eigenvalue of .actually it suffices to understand the spectrum of .obviously , its smallest eigenvalue is zero and the corresponding eigenvector is let be the -th column of and we have under all the three settings in section [ s : model1-setup ] .hence , it is easy to see that and the null space of is spanned by . has an eigenvalue with value of multiplicity and an eigenvalue with value of multiplicity .more importantly , the following proposition holds and combined with proposition [ prop : perturb ] , we are able to prove theorem [ thm : main1 ] . [ prop : main1 ] there holds a. with probability if is gaussian and ; b. with probability if each is a tall " random hadamard / dft matrix and ; c. with probability if each is a fat " random hadamard / dft matrix and .[ rmk : prop1 ] proposition [ prop : main1 ] actually addresses the identifiability issue of the model in absence of noise .more precisely , the invertibility of is guaranteed by that of . by weyls theorem for singular value perturbation in , eigenvalues of are greater than hence , the rank of is equal to if is close to the information theoretic limit under the conditions given above , i.e. , . in other words , the null space of completely spanned by .note that proposition [ prop : main1 ] gives the result if .the noisy counterpart is obtained by applying perturbation theory for linear least squares .let where is the noiseless part and is defined in . without loss of generality , we assume that according to proposition [ prop : perturb ] , it suffices to estimate the condition number of and . note that where from proposition [ prop : main1 ] and theorem 1 in , we know that where and following from , we have on the other hand , follows from proposition [ prop : main1 ] . in other words, we have found the lower and upper bounds for or equivalently , now we proceed to the estimation of let be a unit vector , where with and are the eigenvectors of .then the smallest eigenvalue of defined in has a lower bound as follows : which implies .combined with , therefore , with , the condition number of is bounded by from , is bounded by applying proposition [ prop : perturb ] gives the following upper bound of the estimation error which gives theorem [ thm : main1 ] . 
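the stability statements above rest on the classical perturbation bound for consistent overdetermined least squares problems recorded in proposition [ prop : perturb ] . a quick numerical sanity check of that bound on a generic random instance ( sizes and noise level are illustrative ) :

```python
import numpy as np

# sanity check of the least squares perturbation bound: for a consistent overdetermined
# system A x = b, the least squares solution of the perturbed system has a relative error
# controlled (up to constants) by kappa(A) times the relative size of the perturbation.

rng = np.random.default_rng(1)
m, n = 200, 20
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)
b = A @ x                                          # consistent system

eps = 1e-6
dA = eps * rng.standard_normal((m, n))
db = eps * rng.standard_normal(m)
x_hat, *_ = np.linalg.lstsq(A + dA, b + db, rcond=None)

kappa = np.linalg.cond(A)
rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
rel_pert = max(np.linalg.norm(dA, 2) / np.linalg.norm(A, 2),
               np.linalg.norm(db) / np.linalg.norm(b))
print(f"relative error {rel_err:.2e}  vs  kappa * relative perturbation {kappa * rel_pert:.2e}")
```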
*[ proof of proposition [ prop : main1](a ) ] * from now on , we assume , i.e. , the -th column of , obeys a complex gaussian distribution , .let be the -th column of ; it can be written in explicit form as denoting , we obtain obviously each is independent . in order to apply theorem [ thm : bern1 ] to estimate , we need to bound and . due to the semi - definite positivity of , we have and hence this implies now we consider by computing and , i.e. , the -th and -th block of , { \boldsymbol{e}}_i{\boldsymbol{e}}_i^ * , \\ ( { \mathcal{z}}_{l , i}{\mathcal{z}}_{l , i}^*)_{2,2 } & = & \frac{1}{m } ( { \boldsymbol{a}}_{l , i}{\boldsymbol{a}}_{l , i}^ * - { \boldsymbol{i}}_n){\boldsymbol{v}}{\boldsymbol{v}}^*({\boldsymbol{a}}_{l , i}{\boldsymbol{a}}_{l , i}^ * - { \boldsymbol{i}}_n ) + \frac{1}{m^2}({\boldsymbol{a}}_{l , i}{\boldsymbol{a}}_{l , i}^ * - { \boldsymbol{i}}_n)^2.\end{aligned}\ ] ] following from , , and lemma [ lem : pos ] , there holds by applying the matrix bernstein inequality ( see theorem [ berngaussian ] ) we obtain with probability .in particular , by choosing , i.e , the inequality above holds with probability * [ proof of proposition [ prop : main1](b ) ] * each is independent by its definition in if where is an partial dft / hadamard matrix with and and is a diagonal random binary matrix .let ; in explicit form where follows from the assumption .first we take a look at the upper bound of it suffices to bound since and is positive semi - definite . on the other hand , due to lemma [ lem : pos ], we have and hence we only need to bound . for , there holds also for any pair of , can be rewritten as where is the -th column of and . then there holds where the third inequality follows from lemma [ lem : rade ] . applying lemma [ lem : pos ] to , with probability at least .denote the event by .now we try to understand the -th and -th block of are given by by using , and , we have combining , , and lemma [ lem : pos ] , by applying with and over event , we have with probability if . *[ proof of proposition [ prop : main1](c ) ] * each is independent due to .let ; in explicit form here where is a fat " partial dft / hadamard matrix satisfying and is a diagonal -random matrix. there holds where and for each , hence , there holds , with probability at least , which follows exactly from and lemma [ lem : pos ] .now we give an upper bound for .the -th and -th block of are given by by using , and , we have for , we have where note that , and there holds , combining , , , and lemma [ lem : pos ] , by applying with , we have with probability if .we start with by setting . in this way, we can factorize the matrix ( excluding the last row ) into where is the normalized , .we will show that the matrix is of rank to guarantee that the solution is unique ( up to a scalar ) .denote and with where is a standard orthonormal basis in .the proof of theorem [ thm : main2 ] relies on the following proposition .we defer the proof of proposition [ prop : main2 ] to the sections [ s : model2-a ] and [ s : model2-b ] .[ prop : main2 ] there holds , a. with probability at least with if is an complex gaussian random matrix and b. 
with probability at least with if yields and [ rmk : prop2 ] note that has one eigenvalue equal to 0 and all the other eigenvalues are at least 1 .hence the rank of is .similar to remark [ rmk : prop1 ] , proposition [ prop : main2 ] shows that the solution to is uniquely identifiable with high probability when and [ * proof of theorem [ thm : main2 ] ] * from , we let where is the noiseless part of .now , gives define and . from proposition [ prop : main2 ] and theorem 1 in , we know that the eigenvalues of fulfill and for since ; and the eigenvalues of are 0 , 1 and 2 with multiplicities 1 , , respectively .the key is to obtain a bound for from , where on the other hand , gives since . for , such that and where . by using the same procedure as , where with .since the smallest eigenvalue of yields , combining and leads to applying proposition [ prop : perturb ] and , we have which finishes the proof of theorem [ thm : main2 ] . in this section, we will prove that proposition [ prop : main2](a ) if where is the -th column of . before moving to the proof, we compute a few quantities which will be used later .define as the -th column of where and are standard orthonormal basis in and respectively ; " denotes kronecker product . by definition , we have and all are independent from one another . and its expectation is equal to it is easy to verify that .[ * proof of proposition [ prop : main2](a ) ] * the tool is to use apply matrix bernstein inequality in theorem [ berngaussian ] .note that let and we have since follows from lemma [ lem : pos ] .therefore , the exponential norm of is bounded by and as a result now we proceed by estimating the variance .we express as follows : the -th and the -th block of are and .\ ] ] following from , and , we have due to lemma [ lem : pos ] , there holds , by applying , with , there holds with probability at least if we prove proposition [ prop : main2 ] based on assumption .denote and as the -th column of and and obviously we have denote and let be the -th block of in , i.e. , with , we have where the expectation of -th row of yields . hence its expectation equals [ * proof of proposition [ prop : main2](b ) ] * note that each block is independent and we want to apply bernstein inequality to achieve the desired result .let and by definition , we have note that since implies that , we have with probability at least .we proceed with estimation of by looking at the -th and -th block of , i.e. , note that .the -th diagonal entry of is where and implies since is still a unit vector ( note that is unitary since is a column of ) .therefore , by using lemma [ lem : laal ] , we have with and independence between and , we have where follows from and by using , , and lemma [ lem : pos ] , is bounded above by conditioned on the event , applying with gives with probability at least and it suffices to require recall that in and the only difference from is that here all are equal to .if , ( excluding the last row ) can be factorized into where is the normalized . before we proceed to the main result in this section we need to introduce some notation .let be the -th column of , which is a complex gaussian random matrix ; define to be a matrix whose columns consist of the -th column of each block of , i.e. , where " denotes kronecker product and both and are the standard orthonormal basis in and , respectively . in this way , the are independently from one another . 
by definition , where and with the expectation of is given by 0.25 cm our analysis depends on the _ mutual coherence _ of .one can not expect to recover all and if all are parallel to each other .let be the gram matrix of with , i.e. , and and in particular , .its eigenvalues are denoted by with .basic linear algebra tells that where is unitary and let , then there holds since here and .in particular , if , then ; if for all , then and we are now ready to state and prove the main result in this subsection . [ prop : main3 ] there holds with probability at least if and each is i.i.d .complex gaussian , i.e. , .in particular , if and , becomes the proof of theorem [ thm : main3 ] follows exactly from that of theorem [ thm : main2 ] when proposition [ prop : main3 ] holds .hence we just give a proof of proposition [ prop : main3 ] .[ * proof of proposition [ prop : main3 ] ] * let , where and are defined as [ [ estimation - of - sum_i1mmathcalz_i1 ] ] estimation of + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + following from , we have where is a matrix with orthonormal rows and hence each is rayleigh distributed .( we say is rayleigh distributed if where both and are standard real gaussian variables . ) due to the simple form of , it is easy to see from bernstein s inequality that with probability here where . therefore , now we only need to find out and in order to bound .[ [ estimation - of - mathcalz_i2-_psi_1 ] ] estimation of + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + denote and there holds , for , its -norm is bounded by let and .the -norm of yields where the second inequality follows from the cauchy - schwarz inequality , , and . therefore , and there holds [ [ estimation - of - sigma_02 ] ] estimation of + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + note that .let and be the -th and -th block of respectively , i.e. , {1\leq k\leq l\leq p } + \frac{1}{m^2 } { \boldsymbol{i}}_p\otimes ( { \boldsymbol{a}}_i{\boldsymbol{a}}_i^ * - { \boldsymbol{i}}_n)^2 .\label{eq : ai22 } \ ] ] since is a positive semi - definite matrix , lemma [ lem : pos ] implies so we only need to compute and . where now we have for note that which follows from . by , andlemma [ lem : pos ] , there holds , applying to with and , we have with probability . by combining with and letting ,we have with probability if or equivalently , [ lem : pos ] for any hermitian positive semi - definite matrix with , there holds , in other words , .[ thm : bern1 ] consider a finite sequence of of independent centered random matrices with dimension .we assume that and introduce the random matrix compute the variance parameter then for all with probability at least where is an absolute constant .[ berngaussian ] for a finite sequence of independent random matrices with and as defined in , we have the tail bound on the operator norm of , with probability at least where is an absolute constant .suppose that is a rademacher sequence and for any fixed and , there holds where is an matrix with only one nonzero entry equal to 1 and at position in particular , setting gives since is a rademacher sequence , i.e , each takes independently with equal probability , this implies and therefore , the -th entry of is .[ lem : laal ] there holds where , and is a deterministic unit vector . is a random partial fourier / hadamard matrix with and , i.e. 
, the columns of are uniformly sampled without replacement from an dft / hadamard matrix ; is a diagonal matrix with entries random sampled from with equal probability ; moreover , we assume and are independent from each other . in particular ,if , let be the -th column of and the -th entry of is where .the randomness of comes from both and and we first take the expectation with respect to . where follows from each entry in being a bernoulli random variable .hence , let be the -th column of and and " denotes the hadamard ( pointwise ) product .so we have there holds , where the third equation follows from linearity of the hadamard product and from the last one uses the fact that if is a vector from the dft matrix or hadamard matrix . by the property of conditional expectation , we know that and due to the linearity of expectation , it suffices to find out for , where and , by definition , are the -th and -th columns of which are sampled uniformly without replacement from an dft matrix .note that is actually an _ ordered _ pair of random vectors sampled without replacement from columns of .hence there are in total different choices of and where is defined as the -th column of an dft matrix .now we have , for any , where and now we return to . by substituting into, we end up with where follows from a. ahmed , a. cosse , and l. demanet . a convex approach to blind deconvolution with diverse inputs . in_ computational advances in multi - sensor adaptive processing ( camsap ) , 2015 ieee 6th international workshop on _ , pages 58 .ieee , 2015 .s. curtis , j. lim , and a. oppenheim .signal reconstruction from one bit of fourier transform phase . in _ acoustics , speech , and signal processing , ieee international conference on icassp84 ._ , volume 9 , pages 487490 .ieee , 1984 .r. gribonval , g. chardon , and l. daudet .blind calibration for compressed sensing by convex optimization . in _ acoustics ,speech and signal processing ( icassp ) , 2012 ieee international conference on _ , pages 27132716 .ieee , 2012 .j. shin , p. e. larson , m. a. ohliger , m. elad , j. m. pauly , d. b. vigneron , and m. lustig .calibrationless parallel imaging reconstruction based on structured low - rank matrix completion ., 72(4):959970 , 2014 .r. vershynin .introduction to the non - asymptotic analysis of random matrices . in y. c. eldar and g. kutyniok , editors , _ compressed sensing : theory and applications _ , chapter 5 .cambridge university press , 2012 .
whenever we use devices to take measurements , calibration is indispensable . while the purpose of calibration is to reduce bias and uncertainty in the measurements , it can be quite difficult , expensive and sometimes even impossible to implement . we study a challenging problem called _ self - calibration _ , i.e. , the task of designing an algorithm for devices so that the algorithm is able to perform calibration automatically . more precisely , we consider the setup where only partial information about the sensing matrix is known and where linearly depends on . the goal is to estimate the calibration parameter ( resolve the uncertainty in the sensing process ) and the signal / object of interest simultaneously . for three different models of practical relevance we show how such a _ bilinear _ inverse problem , including blind deconvolution as an important example , can be solved via a simple _ linear least squares _ approach . as a consequence , the proposed algorithms are numerically extremely efficient , thus allowing for real - time deployment . explicit theoretical guarantees and stability theory are derived , and the sampling complexity is nearly optimal ( up to a poly - log factor ) . applications in imaging sciences and signal processing are discussed , and numerical simulations are presented to demonstrate the effectiveness and efficiency of our approach .
the cosmological principle is one of the cornerstones of modern cosmology . roughly speaking , the principle states that the universe is homogeneous and isotropic on large scales .although large - scale homogeneity and isotropy were initially postulated , in recent decades the principle has received mounting experimental support , and today there is little doubt about its validity .the cosmological principle has a slightly preciser formulation , which states that a perturbed friedman - robertson - walker metric provides an accurate description of the universe .thus , according to the cosmological principle , the universe is well - described by the perturbed spacetime metric ,\ ] ] with sufficiently small ( scalar ) perturbations at long wavelengths but apart from that , the principle has nothing to say about the properties of these perturbations .many of the advances in modern cosmology consist in the characterization of the metric perturbations and .though not often explicitly emphasized , one of the key assumptions is that these perturbations are just a particular realization of a random process in a statistical ensemble .hence , we do not really try to describe the actual perturbations and ; our goal is to characterize the statistical properties of the random fields and .let denote any random field in the universe , such as the metric perturbations considered above , or the energy density of any of the components of our universe .the statistical properties of the random field are uniquely specified by its probability distribution functional .it turns to be simpler however to study the moments of the field where denotes ensemble average , and all the fields are evaluated at a common but arbitrary time , which we suppress for simplicity .the cosmological principle has formal counterparts in the properties of the perturbations , though , as we emphasized above , the cosmological principle itself only requires the actual perturbations in our universe to be small .we say that a random field is statistically homogeneous ( or stationary ) , if all its moments are invariant under translations , in some cases , statistical homogeneity may apply only to some field moments .the random field is _ stationary in the mean _ if and it is _ stationary in the variance _ if where we have defined . if the random field is gaussian , the one- and two - point functions uniquely determine all the remaining moments of the field .a gaussian random field stationary in the mean and in the variance is hence automatically fully stationary .parallel definitions apply to the properties of the perturbations under rotations . in particular, we say that a random field is _ isotropic in the mean _if analogously , a random field is _ isotropic in the variance _ if since there is always a rotation that maps to , and because any two points related by a rotation always differ by a translation , equations ( [ eq : stationary mean ] ) and ( [ eq : isotropic mean ] ) imply that homogeneity and isotropy in the mean are equivalent .but homogeneity in the variance _ does not _ imply isotropy in the variance , though the converse is true , homogeneity and isotropy in the mean have an important consequence : equations ( [ eq : stationary mean ] ) or ( [ eq : isotropic mean ] ) immediately imply that the expectation of a stationary field is constant , and , conversely , any random field with constant mean is homogeneous and isotropic in the mean . 
because , by definition , cosmological perturbations always represent deviations from a homogeneous and isotropic background , it is then always possible to assume that the constant value of their mean is zero , if they happen to be stationary .for example , in perturbation theory we write the total energy density as a background value plus a perturbation , this split into a background value and a perturbation is essentially ambiguous , unless we specify what the background actually is . in cosmology , what sets the background apart from the perturbations is symmetry . because of the cosmological principle , the background energy density is _ defined _ to be homogeneous . hence , if the constant mean of the stationary random field is not zero , we may redefine our background and perturbations by without affecting the overall value of the energy density , . in this case the redefined perturbation has zero mean , while the redefined background is still space - independent .it is important to recognize that cosmological perturbations can be assumed to have zero mean if and only if their mean is a constant .consider again the example of the energy density ( [ eq : split ] ) , but now assume that is not stationary .although the redefinitions ( [ eq : redefinition ] ) allow us to set the mean of the perturbations to zero , the redefined background is inhomogeneous in this case , in contradiction with our definition of the background density in equation ( [ eq : split ] ) .therefore , we conclude that homogeneity in the mean , isotropy in the mean and zero mean are all equivalent , homogeneity and isotropy in the variance also have important implications .if a random field is stationary in the variance , its two point function in momentum space has to be proportional to a delta function , and if the variance is isotropic , the power spectrum can only depend on the magnitude of , based on the equivalences ( [ eq : equivalencea ] ) and ( [ eq : equivalenceb ] ) , there are hence six possible different combinations of the statistical properties of the primordial perturbations , which we list in table [ tab : cases ] .hypothesis describes the standard assumption that underlies most analyses of cosmological perturbations , and case describes what is usually known as a violation of statistical isotropy . in this articlewe focus on violations of the zero mean hypothesis , cases through . our goal is to test the standard assumption against one of its non - zero mean alternatives ..the six possible different combinations of statistical properties of the primordial perturbations .we are concerned here with the mean and variance alone . [tab : cases ] [ cols="^,^,^,^ " , ]at present , the arguably cleanest and widest window on the primordial perturbations in our universe stems from the temperature anisotropies observed in the cosmic microwave background radiation ( cmb ) .hence , if we want to test whether cosmological perturbations have zero mean , we need to explore how these temperature anisotropies are related to the random fields that we have considered in the introduction . 
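the delta - function structure of the momentum - space two - point function mentioned above is easy to verify numerically . the toy example below ( not from the article ) generates many realizations of a stationary one - dimensional gaussian field by circularly convolving white noise with a fixed kernel and checks that the estimated fourier - space covariance is diagonal up to sampling noise .

```python
import numpy as np

# toy check: a field that is stationary in the variance has uncorrelated fourier modes,
# i.e. the estimated <delta*(k) delta(k')> is diagonal up to sampling noise.
rng = np.random.default_rng(2)
npix, nreal = 64, 20000
kernel = np.exp(-0.5 * (np.arange(npix) - npix / 2) ** 2 / 3.0 ** 2)   # fixed smoothing kernel

noise = rng.standard_normal((nreal, npix))                              # white noise realizations
fields = np.fft.ifft(np.fft.fft(noise, axis=1) * np.fft.fft(kernel), axis=1).real
modes = np.fft.fft(fields, axis=1)                                      # delta(k) per realization
cov = modes.conj().T @ modes / nreal                                    # fourier-space covariance

off = cov - np.diag(np.diag(cov))
print("max |off-diagonal| / max diagonal:",
      np.abs(off).max() / np.abs(np.diag(cov)).max())                   # small for a stationary field
```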
in a homogeneous and isotropic universe ,different fourier modes of cosmological perturbations evolve independently in linear perturbation theory .hence , we may always assume that the temperature anisotropies ( of primordial origin ) observed at point in the direction are given by where are the fourier components of a random field at a sufficiently early time , and is a transfer function whose explicit form we shall not need .say , in a standard cosmology we have , where is the primordial newtonian potential , which , because of the absence of anisotropic stress , also equals . due to the linear relation between temperature anisotropies and primordial perturbations, it immediately follows that zero mean in the primordial perturbations implies zero mean of the temperature anisotropies . for many purposes ,it is more convenient to represent functions on a sphere , like the temperature fluctuations , by their spherical harmonic coefficients throughout this article we work with _ real _ spherical harmonics , whose properties are summarized in appendix a. to calculate the spherical harmonic coefficients of the temperature anisotropies we note that because linear perturbations evolve in an isotropic background ( by definition ) , the transfer function can only depend on the two scalars and .hence , we may expand the latter in legendre polynomials , with real functions . substituting then equation ( [ eq : legendre expansion ] ) into ( [ eq : t anisotropies ] ) ,setting , and using the addition theorem for ( real ) spherical harmonics in equation ( [ eq : addition theorem ] ) we get clearly , if primordial perturbations have zero mean , so do the spherical harmonic coefficients of the temperature anisotropies : in particular , a violation of the condition would thus imply a violation of statistical homogeneity .later we shall also need to know the covariance of the temperature multipoles , which follows from equation ( [ eq : a ] ) .if the random field is homogeneous and isotropic in the variance , the covariance matrix of the multipoles has elements recall that for arbitrary we define thus , denotes departures from a homogeneous and isotropic background , whereas denotes deviations from the ensemble mean . in the following ,we drop the hat from random variables and fields . unfortunately , the temperature anisotropies we actually observe in the sky are not entirely of primordial origin .they are a superposition of primordial anisotropies and foreground contributions , such as dust emission and synchrotron radiation from our very own galaxy .appropriate foreground templates allow the wmap team to eliminate foregrounds in some regions of the sky , but the cleaning procedure does not completely remove foreground contamination along the galactic disc .it is thus necessary to subject these maps to additional processing .since the actual temperature measurements involve a convolution with the detector beam , and also include detector noise , we model the temperature anisotropies in a smoothed , foreground - reduced map by ,\ ] ] where is the smoothing kernel , the star denotes convolution , and represents the residual foreground contamination .we assume that the smoothing kernel and the detector beam are rotationally symmetric .this is actually not the case for the wmap beam , but it should be a good approximation at the scales we are going to consider . 
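a schematic version of this measurement model can be put together with the healpix / healpy tools ( the article 's acknowledgments mention healpix , but the snippet below is only an illustration , not the wmap pipeline ) : simulate a map , smooth it with a rotationally symmetric beam , apply a crude galactic cut and take the harmonic transform of the masked map . all numbers are placeholders , and healpy returns complex a_lm rather than the real coefficients used in the text .

```python
import numpy as np
import healpy as hp   # assumption: the healpy package is available

nside, lmax = 64, 128
cl = 1.0 / (np.arange(lmax + 1) + 10.0) ** 2            # toy angular power spectrum
t_map = hp.synfast(cl, nside)                            # simulated anisotropies
t_map = hp.smoothing(t_map, fwhm=np.radians(1.0))        # rotationally symmetric beam

theta, _ = hp.pix2ang(nside, np.arange(hp.nside2npix(nside)))
mask = (np.abs(np.degrees(theta) - 90.0) > 10.0).astype(float)   # crude cut around the equator

alm_masked = hp.map2alm(mask * t_map, lmax=lmax)         # coefficients of the masked map
print(alm_masked.shape)                                  # complex a_lm, m >= 0 ordering
```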
in that case , in harmonic space , the convolution acts on the spherical harmonic coefficients simply by multiplication , say , in order to remove the residual foregrounds , the contaminated sky regions have to be masked out .let be the corresponding mask , which is defined by (\hat{n})=0}ppp ] .we plot the -values of the statistic under the null - hypothesis for different multipole ranges and different differencing assemblies ( blue for , red for , green for . ) for reference , the horizontal line marks probability .clearly , the value of in the multipole range is anomalously small.,height=302 ] the reader may also wonder how much better the data are fit by a distribution with non - zero mean . in order to find that out ,we calculate first an effective by extremizing the likelihood under both the null and the alternative hypothesis , since the variates are normal and independent , the likelihood is simply a product of gaussian density functions .therefore , sample mean sample variance respectively are the maximum - likelihood estimators for the population mean and the population variance .the difference is a measure of how much the fit improves when we relax the assumption of zero mean . because of equation ( [ eq : chisq ] ), this difference is a monotonic function of the ratio of maximum likelihoods under and , which also happens to be a monotonic function of the statistic ( example 24.1 in ) . for illustration ,we list the corresponding values of in table [ tab : ic ] . clearly ,since we have an additional parameter to fit the data , we expect a better fit under . to correct for the presence of additional parameters , several model selection measures have been proposed in the literature . in table[ tab : ic ] we list the difference in the corrected akaike information criterion ( aic ) and the difference in the bayesian information criterion ( bic ) . from a bayesian perspective, the difference in information criteria is a measure of relatively model likelihood this equation allows us then to interpret as a number of standard deviations .again , a distribution with non - zero mean seems to be a better model to describe the data in the multipole range .but as we emphasized above , this is relatively likely to happen if multiple ranges of multipoles are considered . 
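the test just described is straightforward to reproduce on simulated data . in the sketch below , standard normal variates stand in for the appropriately normalized multipole coefficients of one bin ; student 's one - sample t test is applied , and the delta chi^2 , delta aic_c and delta bic differences between the zero - mean and fitted - mean gaussian models are computed as in table [ tab : ic ] ( one fitted parameter under the null , two under the alternative ) .

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
z = rng.standard_normal(123)          # stand-in for the normalized coefficients of one bin
n = z.size

t_stat, p_value = stats.ttest_1samp(z, popmean=0.0)   # student's one-sample t test against zero mean

mu_hat = z.mean()
var0 = np.mean(z ** 2)                # ML variance with the mean fixed to zero (H0)
var1 = np.mean((z - mu_hat) ** 2)     # ML variance with the mean fitted (H1)
delta_chi2 = n * np.log(var0 / var1)  # 2*(logL1 - logL0) for gaussian data

def aicc_penalty(k):                  # corrected akaike penalty for k fitted parameters
    return 2 * k + 2 * k * (k + 1) / (n - k - 1)

delta_aicc = delta_chi2 - (aicc_penalty(2) - aicc_penalty(1))
delta_bic = delta_chi2 - (2 - 1) * np.log(n)
print(t_stat, p_value, delta_chi2, delta_aicc, delta_bic)
```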
table [ tab : ic ] . fit improvement per multipole bin :

ell_min   ell_max   dof   delta chi^2   delta aic_c   delta bic
1         18        3     2.40          -             1.31
19        38        39    0.21          -2.02         -3.46
39        60        79    2.52          0.41          -1.85
61        86        123   7.24          5.17          2.43
87        112       175   0.64          -1.40         -4.52
113       142       227   0.41          -1.63         -5.02
143       176       287   0.02          -2.00         -5.63
177       212       355   4.08          2.06          -1.79

[ figure : limits on the mean of the temperature multipoles at the confidence limit ( in units ) , as in table [ tab : limits ] . blue , red and green label the limits derived from the w1 , v2 and q2 band maps respectively . the extent of each interval on the axis indicates the range of values of for which the limit applies . for comparison , the variance of the multipole components for the best wmap estimate of the binned power spectrum is also plotted . the temperature scale is logarithmic , with positive and negative values on either side of the axis . ]

our results show significant evidence for a non - zero mean of the temperature multipoles in the range to , at the confidence level . taken as a whole however , because this range is just one among eight different multipole bins , the evidence against the zero - mean assumption is statistically insignificant , falling under the confidence level . whatever the case , the limits we have set on the mean of the primordial anisotropies in a set of multipole bins indicate that an eventual non - zero mean has to be about an order of magnitude smaller than the standard deviation of the temperature anisotropies . in that sense , observations constrain the mean to be small . in retrospect , we have therefore partially justified the common assumption of vanishing mean of the cosmological perturbations .

this work is supported in part by the nsf grant phy-0855523 . some of the results in this paper have been derived using the healpix package . we acknowledge the use of the legacy archive for microwave background data analysis ( lambda ) . support for lambda is provided by the nasa office of space science .

in this article we expand functions defined on a sphere in _ real _ spherical harmonics . these are related to the conventional complex spherical harmonics by it follows that the real multipole coefficients and their complex counterparts are related to each other by where we have assumed that the function on the sphere being expanded is real . the transformation ( [ eq : sh ] ) is unitary , that is , we can write with a unitary matrix , whose matrix elements are implicitly defined by equation ( [ eq : sh ] ) . because of the unitary transformation , real spherical harmonics are orthonormal , and they also satisfy the addition theorem where is a legendre polynomial . sometimes we need to integrate over the product of three spherical harmonics . we define which clearly is totally symmetric in its three arguments . since the real spherical harmonics are related to the complex spherical harmonics by a unitary transformation , this expression is closely related to the integral of the product of three complex spherical harmonics . the latter can be expressed as a product of clebsch - gordan coefficients ( or wigner symbols ) , so we have with it follows then for instance that .
under ( active ) azimuthal rotations by an angle complex spherical harmonic coefficients transform according to .therefore , it follows from the left equation in ( [ eq : relations ] ) that real spherical harmonic coefficients transform according to n. jarosik _ et al . _[ wmap collaboration ] , `` first year wilkinson microwave anisotropy probe ( wmap ) observations : on - orbit radiometer characterization , '' astrophys .j. suppl .* 148 * , 29 ( 2003 ) [ arxiv : astro - ph/0302224 ] .e. komatsu _ et al ._ [ wmap collaboration ] , `` five - year wilkinson microwave anisotropy probe ( wmap ) observations : cosmological interpretation , '' astrophys . j. suppl . * 180 * , 330 ( 2009 ) [ arxiv:0803.0547 [ astro - ph ] ] .k. m. gorski , e. hivon , a. j. banday , b. d. wandelt , f. k. hansen , m. reinecke and m. bartelman , `` healpix a framework for high resolution discretization , and fast analysis of data distributed on the sphere , '' astrophys . j. * 622 * , 759 ( 2005 ) [ arxiv : astro - ph/0409513 ] .
a central assumption in our analysis of cosmic structure is that cosmological perturbations have zero ensemble mean . this property is one of the consequences of statistical homogeneity , the invariance of correlation functions under spatial translations . in this article we explore whether cosmological perturbations indeed have zero mean , and thus test one aspect of statistical homogeneity . we carry out a classical test of the zero mean hypothesis against a class of alternatives in which perturbations have non - vanishing means , but homogeneous and isotropic covariances . apart from gaussianity , our test does not make any additional assumptions about the nature of the perturbations and is thus rather generic and model - independent . the test statistic we employ is essentially student 's statistic , applied to appropriately masked , foreground - cleaned cosmic microwave background anisotropy maps produced by the wmap mission . we find evidence for a non - zero mean in a particular range of multipoles , but the evidence against the zero mean hypothesis goes away when we correct for multiple testing . we also place constraints on the mean of the temperature multipoles as a function of angular scale . on angular scales smaller than four degrees , a non - zero mean has to be at least an order of magnitude smaller than the standard deviation of the temperature anisotropies .
makespan minimization is a fundamental and extensively studied problem in scheduling theory .consider a sequence of jobs that has to be scheduled on identical parallel machines .each job is specified by a processing time , .preemption of jobs is not allowed .the goal is to minimize the makespan , i.e. the maximum completion time of any job in the constructed schedule .we focus on the online version of the problem where the jobs of arrive one by one .each incoming job has to be assigned immediately to one of the machines without knowledge of any future jobs , .online algorithms for makespan minimization have been studied since the 1960s . in an early paper graham showed that the famous _ list _ scheduling algorithm is -competitive .the best online strategy currently known achieves a competitiveness of about 1.92 .makespan minimization has also been studied with various types of _ resource augmentation _ , giving an online algorithm additional information or power while processing .the following scenarios were considered .( 1 ) an online algorithm knows the optimum makespan or the sum of the processing times of .( 2 ) an online strategy has a buffer that can be used to reorder .whenever a job arrives , it is inserted into the buffer ; then one job of the buffer is removed and placed in the current schedule .( 3 ) an online algorithm may migrate a certain number or volume of jobs . in this paperwe investigate makespan minimization assuming that an online algorithm is allowed to build several schedules in parallel while processing a job sequence .each incoming job is sequenced in each of the schedules . at the end of the scheduling processthe best schedule is selected .we believe that this is a natural form of resource augmentation : in classical online makespan minimization , studied in the literature so far , an algorithm constructs a schedule while jobs arrive one by one .once all jobs have arrived , the schedule may be executed . hence in this standard frameworkthere is a priori no reason why an algorithm should not be able to construct several solutions , the best of which is finally chosen .our new proposed setting can be viewed as providing an online algorithm with extra space , which is used to maintain several solutions .very little is known about the value of extra space in the design of online algorithms .makespan minimization with parallel schedules is of particular interest in parallel processing environments where each processor can take care of a single or a small set of schedules .we develop algorithms that require hardly any coordination or communication among the schedules .last not least the proposed setting is interesting w.r.t . to the foundations of scheduling theory , giving insight into the value of multiple candidate solutions .makespan minimization with parallel schedules was also addressed by kellerer et al . . however , the paper focused on the restricted setting with machines . in this paperwe explore the problem for a general number of machines . as a main resultwe show that a constant number of schedules suffices to achieve a significantly improved competitiveness , compared to the standard setting without resource augmentation .the competitive ratios obtained are at least as good and in most cases better than those attained in the other models of resource augmentation mentioned above . 
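for reference , the list scheduling rule mentioned above takes only a few lines ; this is a generic sketch , not code from any of the cited papers .

```python
import heapq

# graham's list scheduling: each arriving job is placed on a machine with the
# currently smallest load; the resulting schedule is (2 - 1/m)-competitive.
def list_schedule(jobs, m):
    """assign jobs online to m machines and return the resulting makespan."""
    loads = [(0.0, i) for i in range(m)]        # min-heap of (load, machine id)
    heapq.heapify(loads)
    for p in jobs:
        load, i = heapq.heappop(loads)          # least loaded machine
        heapq.heappush(loads, (load + p, i))
    return max(load for load, _ in loads)

print(list_schedule([3, 1, 4, 1, 5, 9, 2, 6], m=3))   # prints 12.0
```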
the approach to grant an online algorithmextra space , invested to maintain multiple solutions , could be interesting in other problems as well .the approach is viable in applications where an online algorithm constructs a solution that is used when the entire input has arrived .this is the case , for instance , in basic online graph coloring and matching problems .the approach is also promising in problems that can be solved by a set of independent agents , each of which constructs a separate solution .good examples are online navigation and exploration problems in robotics .some results are known for graph search and exploration , see e.g. , but the approach has not been studied for geometric environments .* problem definition : * we investigate the problem _ makespan minimization with parallel schedules ( mps)_. as always , the jobs of a sequence arrive one by one and must be scheduled non - preemptively on identical parallel machines .each job has a processing time . in mps , an online algorithm may maintain a set of schedules during the scheduling process while jobs of arrive .each job is sequenced in each schedule , . at the end of ,algorithm selects a schedule having the smallest makespan and outputs this solution .the other schedules of are deleted . as we shall show mpscan be reduced to the problem variant where the optimum makespan of the job sequence to the processed is known in advance .hence let mps denote the variant of mps where , prior to the arrival of the first job , an algorithm is given the value of the optimum makespan for the incoming job sequence .an algorithm for mps or mps is -competitive if , for every job sequence , it outputs a schedule whose makespan is at most times .we present a comprehensive study of mps .we develop a -competitive algorithm , for any , using a constant number of schedules .furthermore , we give a -competitive algorithm , for any , that uses a polynomial number of schedules .the number is , which depends on but is independent of the job sequence .these performance guarantees are nearly best possible .the algorithms are obtained via some intermediate results that may be of independent interest .first , in section [ sec : redu ] we show that the original problem mps can be reduced to the variant mps in which the optimum makespan is known .more specifically , given any -competitive algorithm for mps we construct a -competitive algorithm , for any .if uses schedules , then uses schedules .the construction works for any algorithm for mps .in particular we could use a 1.6-competitive algorithm by chen et al . that assumes that the optimum makespan is known and builds a single schedule .we would obtain a -competitive algorithm that builds at most schedules .we proceed and develop algorithms for mps . in section [ sec : ptas ] we give a -competitive algorithm , for any , that uses schedules . in section [ sec:4/3 ]we devise a -competitive algorithm , for any , that uses schedules . 
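the mps framework itself is easy to state in code . the sketch below maintains several schedules , each driven by its own assignment rule , sequences every job in every schedule and outputs the makespan of the best schedule ; the two rules used here are purely illustrative , whereas the algorithms analysed in the following sections choose them far more carefully .

```python
# bare-bones illustration of the mps framework (not one of the algorithms below).
def run_parallel_schedules(jobs, m, rules):
    """rules: one function (loads, p) -> machine index per schedule."""
    schedules = [[0.0] * m for _ in rules]
    for p in jobs:                                   # jobs arrive one by one
        for loads, rule in zip(schedules, rules):
            loads[rule(loads, p)] += p               # every schedule receives the job
    return min(max(loads) for loads in schedules)    # the best schedule is selected

def greedy(loads, p):                                # least loaded machine
    return loads.index(min(loads))

def small_to_first(loads, p):                        # toy alternative rule
    return 0 if p <= 4 else loads.index(min(loads))

jobs = [3, 1, 4, 1, 5, 9, 2, 6]
print(run_parallel_schedules(jobs, 3, [greedy, small_to_first]))   # 11.0, beats greedy alone (12.0)
```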
combining these algorithms with , we derive the two algorithms for mps mentioned in the above paragraph ; see also section [ sec : mps ] .the number of schedules used by our strategies depends on and exponentially on or .such a dependence seems inherent if we wish to explore the full power of parallel schedules .the trade - offs resemble those exhibited by ptases in offline approximation .recall that the ptas by hochbaum and shmoys for makespan minimization achieves a -approximation with a running time of .in section [ sec : lb ] we present lower bounds .we show that any online algorithm for mps that achieves a competitive ratio smaller than 4/3 must construct more than schedules . hence the competitive ratio of 4/3 is best possible using a constant number of schedules .we show a second lower bound that implies that the number of schedules of our -competitive algorithm is nearly optimal , up to a polynomial factor . our algorithms make use of novel guessing schemes . works with guesses on the optimum makespan .guessing and _ doubling _ the value of the optimal solution is a technique that has been applied in other load balancing problems , see e.g. .however here we design a refined scheme that carefully sets and readjusts guesses so that the resulting competitive ratio increases by a factor of only , for any . moreover , the readjustment and job assignment rules have to ensure that scheduling errors , made when guesses were to small , are not critical .our -competitive algorithm works with guesses on the job processing times and their frequencies in . in order to achieve a constant number of schedules, we have to sparsify the set of all possible guesses .as far as we know such an approach has not been used in the literature before .all our algorithms have the property that the parallel schedules are constructed basically independently .the algorithms for mps require no coordination at all among the schedules . in schedule only has to report when it fails , i.e. when a guess on the optimum makespan is too small .the competitive ratios achieved with parallel schedules are considerably smaller than the best ratios of about 1.92 known for the scenario without resource augmentation .our ratio of , for small , is lower than the competitiveness of about 1.46 obtained in the settings where a reordering buffer of size is available or jobs may be reassigned .skutella et al . gave an online algorithm that is -competitive if , before the assignment of any job , jobs of processing volume may be migrated .hence the total amount of extra resources used while scheduling depends on the input sequence .* related work : * makespan minimization with parallel schedules was first studied by kellerer et al .they assume that machines are available and two schedules may be constructed .they show that in this case the optimal competitive ratio is 4/3 .we summarize results known for online makespan minimization without resource augmentation . 
as mentioned before , _ list _is -competitive .deterministic online algorithms with a smaller competitive ratio were presented in .the best algorithm currently known is 1.9201-competitive .lower bounds on the performance of deterministic strategies were given in .the best bound currently known is 1.88 , see .no randomized online algorithm whose competitive ratio is provably below the deterministic lower bound is currently known for general .we next review the results for the various models of resource augmentation .articles study makespan minimization assuming that an online algorithm knows the optimum makespan or the sum of the processing times of .chen et al . developed a 1.6-competitive algorithm .azar and regev showed that no online algorithm can attain a competitive ratio smaller than 4/3 .the setting in which an online algorithm is given a reordering buffer was explored in .englert et al . presented an algorithm that , using a buffer of size , achieves a competitive ratio of , where is the lambert function .no algorithm using a buffer of size can beat this ratio .makespan minimization with job migration was addressed in .an algorithm that achieves again a competitiveness of and uses job reassignments was devised in .no algorithm using reassignments can obtain a smaller competitiveness .sanders et al . study a scenario in which before the assignment of each job , jobs up to a total processing volume of may be migrated , for some constant . for , they present a 1.5-competitive algorithm .they also show a -competitive algorithm , for any , where . as for memory in online algorithms , sleator and tarjan studied the paging problem assuming that an online algorithm has a larger fast memory than an offline strategy .raghavan and snir traded memory for randomness in online caching .* notation : * throughout this paper it will be convenient to associate schedules with algorithms , i.e. a schedule is maintained by an algorithm that specifies how to assign jobs to machines in .thus an algorithm for mps or mps can be viewed as a family of algorithms that maintain the various schedules .we will write . if is an algorithm for mps ,then the value is of course given to all algorithms of .furthermore , the _ load _ of a machine always denotes the sum of the processing times of the jobs already assigned to that machine .in this section we will show that any -competitive algorithm for mps can be used to construct a -competitive algorithm for mps , for any .the main idea is to repeatedly execute for a set of guesses on the optimum makespan .the initial guesses are small and are increased whenever a guess turns out to be smaller than .the increments are done in small steps so that , among the final guesses , there exists one that is upper bounded by approximately . in the analysis of this scheme we will have to bound machine loads caused by scheduling `` errors '' made when guesses were too small .unfortunately the execution of , given a guess , can lead to undefined algorithmic behavior . as we shall show , guesses are not critical .however , guesses have to be handled carefully .so let be a -competitive algorithm for mps that , given guess , is executed on a job sequence . 
upon the arrival of a job , an algorithm may _ fail _ because the scheduling rules of do not specify a machine where to place in the current schedule .we define two further conditions when an algorithm fails .the first one identifies situations where a makespan of is not preserved and hence -competitiveness may not be guaranteed .more precisely , would assign to a machine such that , where denotes s machine load before the assignment .the second condition identifies situations where is not consistent with lower bounds on the optimum makespan , i.e. is smaller than the average machine load or the processing time of . formally , an algorithm _ fails _ if a job , , has to be scheduled and one of the following conditions holds .a. does not specify a machine where to place in the current schedule .b. there holds , for the machine to which would assign in . c. there holds or .we first show that guesses are not problematic .if a -competitive algorithm for mps is given a guess , then there exists an algorithm that does not fail during the processing of and generates a schedule whose makespan is at most .this is shown by the next lemma .[ lem : guess1 ] let be a -competitive algorithm for mps that , given guess , is executed on a job sequence with .then there exists an algorithm that does not fail during the processing of and generates a schedule whose makespan is at most .let be an optimal schedule for the job sequence . moreover , let denote the load of machine in , .for any with , define a job of processing time .let be the job sequence consisting of followed by the new jobs .these up to jobs may be appended to in any order .obviously .hence when using guess is executed on , there must exist an algorithm that generates a schedule with a makespan of at most .since is a prefix of , this algorithm does not fail and generates a schedule with a makespan of at most , when given guess is executed on .* algorithm for mps : * we describe our algorithm for mps , where and may be chosen arbitrarily .the construction takes as input any algorithm for mps . for a proper choice of , will be -competitive , provided that is -competitive . at any time works with guesses on the optimum makespan for the incoming job sequence .these guesses may be adjusted during the processing of ; the update procedure will be described in detail below . for each guess , , executes .hence maintains a total of schedules , which can be partitioned into subsets .subset contains those schedules generated by using , .let denote the schedule generated by using .a job sequence is processed as follows .initially , upon the arrival of the first job , the guesses are initialized as and , for .each job , , is handled in the following way .of course each such job is sequenced in every schedule , and .algorithm checks if using fails when having to sequence in .we remark that this check can be performed easily by just verifying if one of the conditions ( i iii ) holds .if using does not fail and has not failed since the last adjustment of , then in job is assigned to the machine specified by using .the initialization of a guess is also regarded as an adjustment .if using does fail , then and all future jobs are always assigned to a least loaded machine in until is adjusted the next time .suppose that after the sequencing of all algorithms of using a particular guess have failed since the last adjustment of this guess .let be the largest index with this property .then the guesses are adjusted .set and , for . 
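conditions ( i ) - ( iii ) translate directly into a small predicate . the thresholds below reflect the natural reading of the stripped formulas ( load bound c * g in ( ii ) , and the two standard lower bounds on the optimum makespan , average load and largest processing time , in ( iii ) ) ; they are an interpretation , not a verbatim transcription .

```python
# interpretation of the failure test (i)-(iii); the exact formulas are stripped from this extract.
def fails(designated_machine, loads, p_t, g, c, m):
    """return True if the schedule working with guess g fails on the incoming job p_t."""
    if designated_machine is None:                      # (i) the rules give no machine
        return True
    if loads[designated_machine] + p_t > c * g:         # (ii) the makespan bound c*g would be violated
        return True
    avg_load = (sum(loads) + p_t) / m                   # lower bound on the optimum makespan
    if g < avg_load or g < p_t:                         # (iii) guess below the opt lower bounds
        return True
    return False
```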
forany readjusted guess , , algorithm using ignores all jobs with when processing future jobs of . specifically , when making scheduling decisions and determining machine loads , algorithm using ignores all job with in its schedule .these jobs are also ignored when checks if using guess fails on the arrival of a job .furthermore , after the assignment of , machines in machines are renumbered so that is located on a machine it would occupy if it were the first job of an input sequence .when guesses have been adjusted , they are renumbered , together with the corresponding schedule sets , such that again . hence at any time and , for .we also observe that whenever a guess is adjusted , its value increases by a factor of at least .a summary of is given in figure [ fig:1 ] .we obtain the following theorem .[ th : guess1 ] let be a -competitive algorithm for mps . then for any and , algorithm for mps is -competitive and uses schedules . for the analysis of we need the following lemma .[ lem : guess2 ] after has processed a job sequence , there holds . at any time guesses .we can view these guesses as being stored in variables .a variable is updated whenever its current guess is increased .hence during the processing of a variable may take any position in the sorted sequence of guesses .we analyze the steps in which adjusts guesses .we first show that when adjusts a guess , then .so suppose that after the arrival of a job , adjust guesses , where is the largest index such that all algorithms using have failed .we prove , which implies the desired statement because guesses are numbered in order of increasing value .let , with , be the most recent time when the variable storing was updated last .if the variable has never been updated since its initialization , then let .all the algorithms using ignore the jobs having arrived before when making scheduling decisions for .let .there holds , .if held true , then by lemma [ lem : guess1 ] there would be an algorithm that , using guess , does not fail when handling .this contradicts the fact that at time all algorithms using fail or have failed since the arrival of .let denote the value of the smallest guess when has finished processing .we distinguish two cases depending on whether or not the variable storing has ever been updated since its initialization .if the variable has never been updated , then , for some .if , there is nothing to show because . if , then the initial guess of value must have been adjustedthis implies , as shown above , and the lemma follows because .in the remainder of the proof we assume that the variable storing has been updated .consider the last update of before the end of and suppose that it took place on the arrival of job .first assume that stores the smallest guess , among the guesses , before the update .then , where is the largest guess before the update .if is also adjusted on the arrival of , then we are done because , as shown above , and thus . if is not adjusted on the arrival of , then is the smallest guess greater than after the update . by the end of guess must be adjusted since otherwise can not become the smallest guess . again and we are done . finally assume that before the update does not store the smallest guess .let be the variable that stores the largest guess smaller than that in .after the update there holds , where is the guess stored in after the update . until the end of , must be adjusted again since otherwise can not become the smallest guess . again and hence . 
throughout the prooflet and .consider an arbitrary job sequence and let be the smallest of the guesses maintained by at the end of .let be the set of schedules associated with , i.e. was generated by using a series of guesses ending with .let , with , be this series and be the variable that stored these guesses . here is one of the initial guesses and .a first observation is that at the end of there exists an algorithm that using has not failed .this holds true if was set to upon the arrival of a job with because the failure of all algorithms using would have caused an adjustment of .this also holds true if was set to upon the arrival of because in this case none of the algorithms using has failed at the end of .so let be an algorithm that using has not failed and let be the associated schedule .we prove that the load of every machine in is upper bounded by .this establishes the theorem .let .if the variable was updated during the processing of , then let be these points in time , i.e. the arrival of caused an update of and the variable was set to , . for any machine , , in let denote its final load at the end of . moreover , let denote its load due to jobs with , for .obviously first show that .immediately after has been scheduled s load consisting of jobs with is at most .since was set to on the arrival of , the guess adjustment rule ensures .until the end of algorithm using does not fail and hence condition ( ii ) specifying the failure of algorithms implies that the assignment of each further job does not create a machine load greater than in .we next show , for each .the latter difference is the load on machine caused by jobs of the subsequence .hence it suffices to show that after the assignment of any , with , s load due to jobs , with , is at most .after the assignment of s respective load is at most and this value is upper bounded by as ensured by the guess adjustment rule . at times , while using has not failed , s load due to jobs with does not exceed as ensured by condition ( ii ) specifying the failure of algorithms . finally consider a time , , at which fails or has failed .the incoming job is assigned to a least loaded machine .hence if is placed on , then the resulting machine load due to jobs with is upper bounded by .observe that after the arrival of there exists an algorithm that using has not yet failed , since otherwise would be adjusted before time .condition ( iii ) defining the failure of algorithms ensures that and .we obtain that s machine load is at most .we conclude that ( [ eq : b1 ] ) is upper bounded by by lemma [ lem : guess2 ] , . at the end of the description of we observed that whenever a guess is adjusted it increases by a factor of at least .hence .it follows that , for every .hence ( [ eq : b2 ] ) is upper bounded by here ( [ eq : xb2 ] ) uses the fact that and , as mentioned above , is a consequence of lemma [ lem : guess2 ] .line ( [ eq : b3 ] ) follows from the geometric series and , finally , ( [ eq : b4 ] ) is by the choice of and the assumption .we present an algorithm for mps that attains a competitive ratio of , for any . the number of parallel schedules will be .the algorithms will yield a -competitive strategy for and , furthermore , will be useful in the next section where we develop a -competitive algorithm for mps . there will be used as subroutine for a small , constant number of .* description of : * let be arbitrary . 
recall that in mps the optimum makespan for the incoming job sequence is initially known .assume without loss of generality that .then all job processing times are in ] and ] and \subseteq ( 0 , ( 1+{\varepsilon}')^l{\varepsilon}'] ] be the interval containing the processing times of small jobs .let and , where the logarithm is taken to base 2 .for , let it is easy to verify that and , for . furthermore and . for define ] .moreover , for , let ] .hence $ ] . inthe following represents _ job class _ , for .we say that is a _class- job _if , where .* definition of target configurations : * as mentioned above , for any incoming job sequence , works with estimates on the number of class- jobs arising in , . for each estimate, the algorithm initially determines a virtual schedule or _ target configuration _ on a subset of the machines , assuming that the estimated set of large jobs will indeed arrive .hence we partition the machines into two sets and .let .moreover , let and .set contains the machines for which a target configuration will be computed ; contains the reserve machines .the proportion of to is roughly .a target configuration has the important property that any machine contains large jobs of only one job class , .therefore , a target configuration is properly defined by a vector .if , then does not contain any large jobs in the target configuration , .if , where , then contains class- jobs , .the vector implicitly also specifies how many large jobs reside on a machine .if with , then contains two class- jobs .note that , for general , a third job can not be placed on the machine without exceeding a load bound of .if with , then contains one class- job .again , the assignment of a second job is not feasible in general . given a configuration , is referred to as a _ class- machine _ if , where and . with the above interpretation of target configurations , each vector encodes inputs containing class- jobs , for , as well as class- jobs , for .hence , for an incoming job sequence , instead of considering estimates on the number of class- jobs , for any , we can equivalently consider target configurations .unfortunately , it will not be possible to work with all target configurations since the resulting number of schedules to be constructed would be .therefore , we will work with a suitable sparsification of the set of all configurations .* sparsification of the set of target configurations : * let and .we will show that if is not too small ( see lemma [ lem : kappa ] ) .this property in turn will ensure that any job sequence can be mapped to a . for any vector , we define a target configuration that contains class- machines , for , provided that does not exceed . more specifically , for any , let and , be the partial sums of the first entries of , multiplied by , for .let .first construct a vector of length that contains exactly class- machines .that is , for , let for .we now truncate or extend to obtain a vector of length .if , then is the vector consisting of the first entries of .if , then , i.e.the last entries are set to 0 .let be the set of all target configurations constructed from vectors . * the algorithm family : * let .for any , algorithm works as follows .initially , prior to the arrival of any job of , determines the target configuration specified by and uses this virtual schedule for the machines of to make scheduling decisions .consider a machine and suppose , i.e. 
is a class- machine for some .let and be the targeted minimal and maximal loads caused by large jobs on , according to the target configuration .more precisely , if , then and . recall that in a target configuration a class- machine contains two class- jobs if . if and hence for some , then and . if is a machine with , then . while the job sequence is processed , a machine may or may not be _again assume that is a class- machine with .if , then at any time during the scheduling process is admissible if it has received less than two class- jobs so far .analogously , if , then is admissible if it has received no class- job so far . finally , at any time during the scheduling process , let be the current load of machine and let be the load due to small jobs , .algorithm schedules each incoming job , , in the following way .first assume that is a large job and , in particular , a class- job , .the algorithm checks if there is a class- machine in that is admissible .if so , is assigned to such a machine .if there is no admissible class- machine available , then is placed on a machine in .there jobs are scheduled according to the _ best - fit _ policy . more specifically , checks if there exists a machine such that .if this is the case , then is assigned to such a machine with the largest current load .if no such machine exists , is assigned to an arbitrary machine in .next assume that is small .the job is a assigned to a machine in , where preference is given to machines that have already received small jobs .algorithm checks if there is an with such that .if this is the case , then is assigned to any such machine .otherwise considers the machines of which have not yet received any small jobs . if there exists an with such that , then among these machines is assigned to one having the smallest targeted load .if again no such machine exists , is assigned to an arbitrary machine in .a summary of , which focuses on the job assignment rules , is given in figure [ fig:3 ] .we obtain the following result .[ th : alg2 ] is -competitive , for any and .the algorithm uses schedules . is -competitive if , for the chosen , the number of machines is at least . if the number of machines is smaller , we can simply apply algorithm with an accuracy of .let be the following combined algorithm . if for the chosen , , execute .otherwise execute .[ cor : a3 ] is -competitive , for any , and uses schedules .if is executed for a machine number , then by theorem [ th : guess2 ] the number of schedules is , which is . in the remainder of this sectionwe prove theorem [ th : alg2 ] .the stated number of schedules follows from the fact that consists of algorithms .recall that and .hence and , which gives that is .hence it suffices to show that , for any job sequence , generates a schedule whose makespan is at most , which we will do in the remainder of this section .more specifically we will prove that , for any , there exists a target configuration that accurately models the large jobs arising in .we will refer to such a vector as a valid target configuration .then we will show that the corresponding algorithm builds a schedule with a makespan of at most .we introduce some notation .consider any job sequence .for any , , let be the number of class- jobs arising in , i.e. is the number of jobs with .furthermore , for any target configuration and any with , let be the number of class- machines in , i.e. 
.let be the total number of class- machines with .similarly , is the total number of class- machines with .given , vector will be a valid target configuration if , for any , contains as many class- jobs as specified in and , moreover , if all the additional large jobs can be feasibly scheduled on the reserve machines .recall that in a configuration , any class- machine with is supposed to contain two class- jobs .formally , is a _ valid target configuration _ if the following three conditions hold .a. for , there holds .b. for , there holds . c. conditions ( i ) and( ii ) represent the constraint that contains as many class- jobs as specified in , . condition ( iii ) models the requirement that extra large jobs can be feasibly packed on the reserve machines . here is the extra number of class- jobs with in .any two of these can be packed on one machine since the processing time of any of these jobs is upper bounded by .hence two jobs incur a machine load of at most .analogously , is the extra number of class- jobs with , which can not be combined together because their processing times are greater than . in order to prove that , for any , there exists a valid target configurationwe need two lemmas .[ lem : jobs ] for any , there holds .consider any optimal schedule for and recall that we assume without loss of generality that . in machine containing a class- job with can not contain an additional large job : the class- job causes a load greater than and any additional large job , having a processing time greater than , would generate a total load greater than 1 .furthermore , any machine containing a class- job with can contain at most one additional job of the job classes because two further jobs would generate a total load greater than .[ lem : kappa ] for any , there holds if .there holds where the last line follows because of and , for any .the next lemma establishes the existence of valid target configurations .[ lem : config ] for any , there exists a valid target configuration if . in this prooflet .given , we first construct a vector .lemma [ lem : jobs ] implies that for any job class , , there holds . for any job class , , there holds . by lemma [ lem : kappa ] , , which is equivalent to . for any with ,set .for any with , set .then , for , and the resulting vector is element of . we next show that the vector constructed by is a valid target configuration .when constructs , it first builds a vector of length containing exactly entries with , for . if , then contains the first entries of .if , then is obtained from by adding entries of value 0 . in either case contains at most entries of values , for .hence for the target configuration , there holds , for , where is again the total number of class- machines in .if , then , which is equivalent to .similarly , if , then .therefore , conditions ( i ) and ( ii ) defining valid target configurations are satisfied and we are left to verify condition ( iii ) .first assume .in this case the vector contains no entries of value 0 and hence . recall that is the total number of class- machines with specified in .similarly , is the total number of class- machines with . by lemma [ lem : jobs ] , . subtracting the equation , we obtain there holds because is an integer .hence condition ( iii ) defining valid target configurations is satisfied .it remains to study the case . for any with , there holds and hence , which is equivalent to .hence the sum is the total number of entries with in . 
since , none of these entries is deleted when is derived from .hence is the total number of class- machines with specified in .we conclude for any with , there holds and hence .this implies . since is an integer we obtain .thus again because contains exactly entries with and all of these entries are contained in representing class- machines for .inequalities ( [ eq : n1 ] ) and ( [ eq : n2 ] ) together with the identity imply since again , condition ( iii ) defining valid target configurations holds .we next analyze the scheduling steps of .[ lem : sched1 ] let be any algorithm of processing a job sequence . at any time thereexists at most one machine with and in the schedule maintained by .consider any point in time while sequences .suppose that there exists a machine with and .we show that if a small job arrives and assigns it to a machine with , then so that no new machine with the property specified in the lemma is generated .a first observation is that is not a class- machine because in this case would be .also , if is a class- machine , there is nothing to show because , again , in this case .so assume that assigns to a machine , which is not a class- machine , and prior to the assignment .we first show that .consider the scheduling step in which assigned the first small job to . since is not a class- machine , for some and the assignment of to led to a load of at most .since is not a class- machine either , could have also been assigned to incurring a resulting load of at most on this machine .note that when an algorithm can not assign a small job to a machine with and instead has to resort to machines with , it chooses a machine having the smallest value .we conclude .next consider the assignment of .algorithm would prefer to place on as it already contains small jobs .since this is impossible , there holds and thus .since by assumption it follows . suppose that , for some . then .since we obtain as desired . the following lemmas focus on algorithms that is a valid target configuration for .[ lem : sched2 ] let be any job sequence and be an algorithm such that is a valid target configuration for .let .consider any point in time during the scheduling process .if the schedule of contains at most one machine with , then no further small job can arrive . since is a valid target configuration for , the job sequence contains as many class- jobs , for any , as indicated by .hence the total processing time of large jobs in is lower bounded by .hence the total processing time of jobs in is at least , where the machine loads due to small jobs may be considered at an arbitrary point in time .hence if there exists a time such that for at most one , we obtain the last inequality holds because , for any .hence no further small job can arrive .[ lem : sched3 ] let be any job sequence and be an algorithm such that is a valid target configuration for .let . then in the final schedule constructed by , each machine in has a load of at most .we consider the scheduling steps in which assigns a job to a machine in .first suppose that is large .let be a class- job , where .if is assigned to an , then must be an admissible class- machine , i.e. 
prior to the assignment of it contains fewer class- jobs as specified by the target configuration .this implies that for any machine , its load due to large jobs is always at most .the latter value is upper bounded by .hence , in order to establish the lemma it suffices to show that whenever a small job is assigned to a machine , the resulting load on is at most .suppose on the contrary that a small job arrives and schedules it on a machine in such that the resulting load is greater than .algorithm first tries to place on a machine with , which has already received small jobs . by lemma [ lem : sched1 ] , among these machines thereexists at most one having the property that .since an assignment to those machines is impossible without exceeding a load of , tries to place on a machine with .since this is also impossible without exceeding a load of , any with must be a class- machine .this holds true because for any class- machine with , there holds and an assignment of a small job would result in a total load of at most .observe that any class- machine has a targeted minimal load of .we conclude that immediately before the assignment of the schedule of contains at most one machine with .lemma [ lem : sched2 ] implies that the incoming job can not be small , and we obtain a contradiction .[ lem : sched4 ] let be any job sequence and be an algorithm such that is a valid target configuration for . then in the final schedule constructed by , each machine in has a load of at most .algorithm assigns only large jobs to machines in .a first observation is that whenever there exists an that contains only one class- job with but no further jobs , then an incoming class- job with will not be assigned to an empty machine .this holds true because the two jobs can be combined , which results in a total load of at most .the observation implies that at any time while processes , the number of machines of containing at least one job is upper bounded by . here denotes the total number of class- jobs with that have been assigned to machines of so far .analogously , is the total number of class- jobs with currently residing on machines in .since is a valid target configuration for conditions ( i ) and ( ii ) defining those configurations imply and . moreover , since assigns large jobs preferably to machines in , there holds and . by condition ( iii ) defining valid target configurations , .hence , while there holds and thus exists an empty machine to which an incoming class- jobs with can be assigned .similarly , while , there must exist an empty machine or a machine containing only one class- job with to which in incoming class- job with can be assigned . in either case , the assignment generates a load of at most on the selected machine .theorem [ th : alg2 ] now follows from lemmas [ lem : config ] , [ lem : sched3 ] and [ lem : sched4 ] .we derive our algorithms for mps .the strategies are obtained by simply combining , presented in section [ sec : redu ] , with and . in order to achieve a precision of in the competitive ratio ,the strategies are combined with a precision of in its parameters . for any ,let be the algorithm obtained by executing in .for any , let be the algorithm obtained by executing in .[ cor:2 ] is a -competitive algorithm for mps and uses no more than schedules , for any .theorem [ th : guess1 ] and corollary [ cor : a3 ] imply that is -competitive , for any , and that the total number of schedules is the product of and , where . 
by the taylor series for , , we obtain , for any .hence the second term of the product is .[ cor:3 ] is a -competitive algorithm for mps and uses no more than schedules , for any . by theorems [ th : guess1 ] and[ th : guess2 ] algorithm is -competitive , for any .the total number of schedules is the product of and , where .again , by the taylor series , , for any .hence both terms of the product are upper bounded by .we develop lower bounds that apply to both mps and mps .let be any deterministic online algorithm for mps or mps that maintains at most schedules .we show that s competitive ratio is at least . to this endwe construct an adversarial job sequence such that each schedule maintained by has a makespan of at least .the job sequence is composed of two subsequences and , i.e. .subsequence consists of jobs of processing time each .subsequence will consist of jobs having a processing time of either 2/3 or 1 .the exact number of these jobs depends on the schedules constructed by and will be determined later .consider the schedules that may have built after all jobs of have been assigned .each such schedule contains jobs of processing time 1/3 . for the moment we concentrate on schedules in which each machine contains either zero , one or three jobs , i.e. there exists no machine containing two or more than three jobs .each such schedule can be represented by a pair , where denotes the number of machines containing exactly one job and is the number of machines containing three jobs . here and are non - negative integers such that .let be the set of all these pairs .set has elements because can take any value between 0 and and .let be an arbitrary schedule containing jobs of processing time 1/3 and .we say that is an _-schedule _ if the number of machines containing exactly one job equals and the number of machines containing exactly three jobs equals .let be the set of schedules constructed by when the entire subsequence has arrived . by assumption at most schedules , i.e. .hence there must exist a pair such that no schedule of is an -schedule . on the other hand ,let be an -schedule .in we number the machines in order of non - decreasing load such that . schedule contains machines with a load smaller than 1 and , in particular , empty machines .now the subsequence consists of jobs , where the -th job has a processing time of , for .hence contains jobs of processing time 1 followed by jobs of processing time .obviously , the makespan of an optimal schedule for is 1 : the jobs of are sequenced so that an -schedule is obtained .again , after has arrived , the machines are numbered in order of non - decreasing load . while arrives , the -th job is assigned to machine , having a load of , for . in the remainder of this proofwe consider any schedule and show that after has been sequenced , the resulting makespan is at least 4/3 .this establishes the theorem .so let be any schedule and recall that contains jobs of processing time 1/3 each .if in there exists a machine that contains at least four of these jobs , then the makespan is already 4/3 and there is nothing to show .therefore , we restrict ourselves to the case that every machine in contains at most three jobs . 
againwe number the machines in in order of non - decreasing load so that .consider the -schedule in which the machines loads satisfy .there must exist a machine , , such that : for , if held for all , then for all because and both contain jobs with a total processing time of .thus would be an -schedule and we obtain a contradiction .the last machines in have a load of 1 .it follows that because otherwise in contained at least four jobs .the property implies because and only contain jobs of processing time .we finally show that sequencing of leads to a makespan of at least in .if assigns two jobs of to the same machine , then the resulting machine load is at least 4/3 because each job of has a processing time of at least .so assume that assigns the jobs of to different machines .the first jobs of each have a processing time of at least because the jobs arrive in order of non - increasing processing times . in thereexist at most machines having a load strictly smaller than .hence , after the first jobs have been scheduled in , there exists a machine having a load of at least .this concludes the proof .the next theorem gives a lower bound on the number of schedules required by a -competitive algorithm , where .it implies that , for any fixed , the number asymptotically depends on , as increases .for instance , any algorithm with a competitive ratio smaller than requires schedules .any algorithm with a competitive ratio smaller than needs schedules .[ th : lb2 ] let be a deterministic online algorithm for mps or mps . if attains a competitive ratio smaller than , where , then it must maintain at least schedules , where and .the binomial coefficient increases as decreases and is at least .we extend the proof of theorem [ th : lb1 ] .let .furthermore , let and be defined as in the theorem. there holds .let and note that .we will define a set whose cardinality is at least , and show that if maintains less than schedules , then its competitive ratio is at least .we specify a job sequence and first assume that is even .later we will describe how to adapt if is odd .again is composed of two partial sequences and so that .subsequence consists of jobs of processing time each .subsequence depends on the schedules constructed by and will be specified below .consider the possible schedules after has been sequenced on the machines . we restrict ourselves to schedules having the following property :each machine has a load of exactly 1 or a load that is at most .observe that each machine of load 1 contains jobs .each machine of load at most contains up to jobs because .therefore any schedule with the stated property can be described by a vector , where is the number of machines having a load of 1 and is the number of machines containing exactly jobs , for .the vector satisfies and . the last equation specifies the constraint that the schedule contains jobs . let be the set of all these vectors , i.e. we remark that each uniquely identifies one schedule with our desired property .let be any schedule containing exactly jobs of processing time and .we say that is an _-schedule _ if in there exist machines of load 1 and machines containing exactly jobs , for .now suppose that maintains less than schedules .let be the set of schedules constructed by after all jobs of have arrived . 
since there must exist an such that no schedule of is an -schedule .let be an -schedule in which machines are numbered in order of non - decreasing load such that .subsequence consists of jobs , where job has a processing time of , for .hence consists of jobs of processing time , for .these jobs arrive in order of non - increasing processing time .each job has a processing time of at least because .the makespan of an optimal schedule for is 1 .the jobs of are sequenced so that an -schedule is obtained .machines are again numbered in order of non - decreasing load .then , while the jobs of arrive , the -th job of the subsequence is assigned to machine in , .we next show that after has sequenced , each of its schedules has a makepan of at least .so consider any and , as always , number the machines in order of non - decreasing load such that .if in there exists a machine that has a load of at least and hence contains at least jobs , then there is nothing to show .so assume that each machine in contains at most jobs and thus has a load of at most 1 .we study the assignment of the jobs of to .if places two jobs of on the same machine , then we are done because each job has a processing time of at least .therefore we focus on the case that assigns the jobs of to different machines. schedules and both contain jobs of total processing time . since is not an -schedule there must exist a , , such that and hence .each machine in has a load of at most 1 while the last machines in have a load of exactly 1 .this implies .the first jobs of each have a processing time of at least . however , there exist at most machines in having a load strictly smaller than .hence after has sequenced the first jobs of there must exist a machine in with a load of at least .so far we have assumed that is even .if is odd , we can easily modify .the first job of is a job of processing time 1 .then and follow .these subsequences are defined as above , where is replaced by the even number . in this case the analysis presented above carries over because the first job of , having a processing time of 1 , must be scheduled on a separate machine and can not be combined with any job of or if a competitive ratio smaller than is to be attained .we next lower bound the cardinality of .again we first focus on the case that is even .in the definition of the critical constraint is , which implies that not every vector of represents a schedule that can be built of jobs . in particular , the vector of length would require jobs .therefore , we introduce a set and show . set contains vectors of length in which the first entries as well as the last one are equal to 0 .the other entries sum to at most , i.e. we show that each can be mapped to a .the mapping has the property that any two different vectors of are mapped to different vectors of .this implies .consider any .let be defined as follows . for ,let . for , let .finally , let .note that . we next show that .there holds .furthermore , it follows , as desired , .note that the last entries of are identical to the last entries of .hence no two vectors of that differ in at least one entry are mapped to the same vector of .hence .if the number of machines is odd , then in the definition of the entries of a vector sum to at most .the rest of the construction and analysis is the same .thus , for a general number of machines this set contains exactly elements , where again . in the remainder of this proof we lowerbound this binomial coefficient .e. angelelli , m.g .speranza and z. 
tuza . new bounds and algorithms for on - line scheduling : two identical processors , known sum and upper bound on the tasks . _ discrete mathematics & theoretical computer science _ , 8:116 , 2006 .
online makespan minimization is a classical problem in which a sequence of jobs has to be scheduled on identical parallel machines so as to minimize the maximum completion time of any job . in this paper we investigate the problem with an essentially new model of resource augmentation . more specifically , an online algorithm is allowed to build several schedules in parallel while processing . at the end of the scheduling process the best schedule is selected . this model can be viewed as providing an online algorithm with extra space , which is invested to maintain multiple solutions . the setting is of particular interest in parallel processing environments where each processor can maintain a single or a small set of solutions . as a main result we develop a -competitive algorithm , for any , that uses a constant number of schedules . the constant is . we also give a -competitive algorithm , for any , that builds a polynomial number of schedules . this value depends on but is independent of the input . the performance guarantees are nearly best possible . we show that any algorithm that achieves a competitiveness smaller than must construct schedules . our algorithms make use of novel guessing schemes that ( 1 ) predict the optimum makespan of a job sequence to within a factor of and ( 2 ) guess the job processing times and their frequencies in . in ( 2 ) we have to sparsify the universe of all guesses so as to reduce the number of schedules to a constant . the competitive ratios achieved using parallel schedules are considerably smaller than those in the standard problem without resource augmentation . furthermore they are at least as good and in most cases better than the ratios obtained with other means of resource augmentation for makespan minimization .
pareto optimality is a natural extension of the concept of maximum to multi - objective optimization problems .a solution is part of the pareto optimal set , or pareto front , if it is impossible to improve one objective without worsening another . instead of imposing an arbitrary aggregation of the different objectives into a single scalar function, pareto optimality keeps track of all potentially interesting solutions in the presence of trade - offs . the pareto approach , originally introduced in economics ,has shown to be useful in many engineering applications , decision - making analysis , more recently in biology . herewe apply the pareto approach to the optimization of the response of multi - input monotone systems , which are widely used to describe input - output systems in control theory .consider a system which processes a multi - input vector into a single output , each coordinate of being monotone .consider now a list of such input vectors .the natural order on induces a partial order between the elements , which in turn , due to the monotonicity of , induces partial order constraints between the values .these constraints limit the space accessible to the vector .we then introduce an optimization problem : certain values of have to be maximized while other have to be minimized .the question is to know how far the function can be optimized to fullfill this multi - objective problem , while keeping its monotonicity properties unaffected .this problem can be formulated beyond the framework of monotone systems , as and the can be essentially seen as a source of partial order constraints .the most general formulation of our problem is then to start directly from partial order constraints between bounded values , with either a minimization or a maximization objective associated to each of the , and search for the pareto optimal solutions of this problem .geometrically , setting the bounded between and , the space compatible with the constraints is a convex polytope in the hypercube ^n ] presented in the next part . propositions [ parpar ] and [ parparpar ] will not be used for the demonstration of our specific algorithm , but to discuss the complexity of more naive algorithms in section [ discusscomplexity ] .indeed , they can be used generally in multi - objective optimization problems for reducing the pareto set search into sub - problems .proposition [ parpar ] indicates that the pareto front can be found by taking the pareto front of the assembled pareto fronts of any decomposition of the search space .proposition [ parparpar ] extends this result to any finite number of recursive splittings of the search space , and shows that it is necessary to search only one time the pareto set of the assembled pareto sets of the terminal sub - problems ._ * pareto order * _ we consider bounded variables ^{n} ] __ be the space of all vectors respecting the partial order constraints represented by __ . _determine _ _ _ the set of vectors optimal under the pareto order _ _ _ _ on _ _ .the algorithm starts by setting steps 1 to 4 described below ( and illustrated in figure [ fig : illustration ] ) are then recursively applied to all diagrams , , generated at depth until step 2 can no longer be performed , _i.e. _ the diagram in question no longer contains any ascending or descending vertex ( eqivalently , only contains tovs and boundary vertices ) : 1 .perform a _ transitive reduction _ of , _i.e. _ remove any direct edge if there exists a longer _ path _ from to on .2 . 
select an extremal vertex .3 . consider the vertices which aims for .there is always at least one such vertex , in limiting cases provided by boundary vertices .define the diagrams by respectively contracting the edge connecting and according to the colouring rules defined in section [ definitions ] . at the end of this branching process ,we are left with a collection of terminal graphs , and we posit that the solution of the initial problem is : and identification of an extremal vertex . left : aggregation of with two conflicting vertices and , yielding a branching of the resolution tree into two diagrams , the union of which solutions results in the pareto front of the parent diagram .color code : filled blue circle : ascending vertex , filled red circle : descending vertex , filled grey circle : tov.[fig : illustration ] ] our proof of the algorithm described above proceeds in three steps : step 1 : : we first show that for each iteration of the algorithm , .step 2 : : next we show that .step 3 : : finally we show that the terminal graphs satisfy . for brevity s sake we treat only the case of an ascending connected component , the demonstration being easily adapted for a descending connected component . consider an ascending subgraph in and a maximal vertex .there are two possibilities : 1 .the vertex representing the upper bound is directly connected to . in this case is the unique vertex pointing to : if there would be another vertex pointing to , there would exist a chain of vertices pointing from to , as is the global upper bound , which would contradict the fact that we have taken the transitive reduction of the diagram .one or more vertices point to , none of which has constant value .we then define the sets a necessary condition for is .suppose otherwise that ( or in the case 1 above ) .then there exists such that for all .if we denote by the vector with coordinates for and otherwise , we have for all and as by assumption .this contradicts . as by definition , , we can use proposition [ necessary ] and have that .we only have to show that , the inclusion in the other direction directly following from proposition [ union ] .we only discuss the case as the result is trivial otherwise ( in particular in case 1 of the first step ) .consider two distinct indices from the set which we can take to be and without loss of generality .consider and suppose there exists such that . then by virtue of being an ascending vertex and by definition of the pareto order : next , as was maximal within its ascending subgraph , any vertex pointing to it must contain at least one descending variable , from which follows that indeed , the latter inequality is trivially true if all label descending variables , i.e. . if not , we can choose an and for which the statement implies both and and hence , together with and by definition of the aggregates , we obtain the equality .we also have implying : finally , and in , vertex points to by hypothesis , which implies : examining the above relations in the order ( [ c2]-[c4]-[c1]-[c3 ] ) , we see that all the variables at play must be equal , in particular . then and consequently .to summarize , we have just demonstrated : now , if we take in particular in relation ( [ inter ] ) , relation ( [ eq : maxequal ] ) implies that by maximality of in .this gives : applying this time relation ( [ eq : maxequal ] ) in the backward direction demonstrates the announced result : by construction a terminal graph contains only boundary vertices and tovs . 
now consider , and consider a vertex in .if is a boundary vertex , then the variables in are already at their optimum bounds , and otherwise , is a tov and as in step 2 above we can choose an and for which the statement implies both and and hence . as the aggregates form a partition of the initial index set ,the above immediately implies , and hence , by relation ( [ eq : maxequal ] ) , .as this is true for any , we have demonstrated our result : . in order to illustrate the working of the algorithm presented herewe have implemented it in mathematica 9 , using the built - in graph primitives .the program generates random dags with a specified number of internal nodes , randomly chosen to be ascending or descending , using the layer method and then applies the algorithm . in figure[ fig : example ] we show an example of a typical input graph , and the output of the algorithm , in this case the union of three terminal graphs . due to spatial limitations we can not display the intermediate steps the algorithm makes to produce the final result .internal vertices , randomly chosen to be either ascending or descending .bottom : the resulting pareto optimal set obtained from our algorithm , consisting of the union of the set parameterized by three distinct terminal graphs .colour code as in figure [ fig : illustration ] , in addition , black circles represent boundary values . ]we discuss here the complexity of different approaches and propose two further improvements of our algorithm .complexity will be discussed relative to the number of variables , and to the complexity of the pareto front , as quantified by its number of faces . a _ face _ is defined as a maximal convex subset of the pareto front , which itself is a subset of the convex polytope .note that the number of faces of the pareto front can be exponential in the number of variables , and that faces do not necessarily all have the same dimension .the first improvement consist of introducing an additional contraction rule with forbidden steps , leading to a complexity linear in the number of dimensions of the inital problem , relatively to the size to pareto front .this lead to resolution tree size , where the depth and the number of leaves of the tree respectively equal and .the second improvement consist of a combinatorial description of the pareto front from the set of solutions of specific components of the diagram . for thiswe will introduce the interface , which comprises all the conflicting vertices of the initial diagram .for each resolution of , the solutions of the monotone connected components can be computed independently , then assembled combinatorially .parameterization by the different resolutions can exponentially reduce the computing time and the size of the description of the full solution . in particular for series - parallel partial orders , the resolution of is unique . 
under the additional assumption that the size of monotone ( or monochromatic ) connected components is bounded , one obtains a resolution and a description of the pareto front in , eventhough the pareto front may comprise an exponential number of faces .an exhaustive seach of the pareto front would be to consider the corners of the hypercube , check for each whether the coordinates of the coresponding n - dimensional vector respect the partial order conditions ( test up to conditions ) , assess the pareto optimality of each admissible corner relative the other admissible corners ( , where is the potentially exponential number of admissible corners ) , and finally examine the pareto optimality of all possible interpolations( ) between the pareto optimal admissible corners . given that and can both scale like , the complexity of this process can reach . the edge contraction algorithm as it is described so far provides several benefits compared to the exhaustive search .first , contraction operations on the hasse diagram ensure to only explore solutions which are consistent with the partial order constraints.the complexity of the program can then be taken as examining the validity of subsets of contracted edges ( , as the number of edges is ) , then checking the resulting admissible subspacesfor their relative pareto optimality ( ) .second , the restriction of contractions to vertices which are maximum within monotone connected components ensures that the final result of each contraction process is indeed pareto optimal .finally , each terminal diagram provides a full parametrization of a face of the pareto front , the union of all which covers the full pareto front , without requiring to assess pareto optimality a second time within the set of potential solutions as propositions [ parparpar]and [ parpar ] would suggest in the general case .the worst case complexity of the resolution tree is bounded by the a case where all possible subsets of edges have to be contracted and the number of edges is maximum , the first chosen vertex is in conflict with the other vertices , the second with vertices remaining after contraction , and so on , thus resulting in a complexity of .however , the version of the algorithm described so far has potential sources of increased complexity : ( i ) duplicated diagrams representing the exact same parameterization of a face of the pareto front ; ( ii ) diagrams which aggregates the initial variables into a sub - partition of another diagram , so that the space of the pareto front parameterized by is only a subspace of the space parameterized by . case ( i ) happens in particular when an extremal vertex conflicts with two or more other vertices , because the two corresponding contractions occur in a certain order along a resolution branch , and in another order along another branch .case ( ii ) happens in particular when a vertex aims to a tov and a conflicting vertex : along a first resolution branch , aggregates with , then aggregates with the resulting tov , whereas along a second branch , aggregates with , resulting in two distinct tovs .consequently , the parameterization given by two distinct tovs in the second branch includes the solution obtained in the first . 
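to make the branching concrete, here is a loose python/networkx sketch of the naive resolution discussed above. everything in it is our own simplification: aggregates are frozensets of variable names, the colouring rule is reduced to "merging two differently coloured (or tov) vertices yields a tov, merging into a boundary vertex simply absorbs the aggregate", extremality is approximated by "ascending vertex with no ascending successor" (symmetrically for descending vertices), edges are oriented from smaller to larger values, and no edge freezing or memoization is applied, so duplicated terminal diagrams of the kind just described do appear.

```python
# naive branching resolution, sketched with networkx; all conventions are ours.
import networkx as nx

LOW, HIGH = "low", "high"            # boundary vertices (global lower / upper bounds)

def transitive_reduction(g):
    """remove every edge (u, v) for which another path from u to v exists."""
    h = g.copy()
    for u, v in list(g.edges()):
        h.remove_edge(u, v)
        if not nx.has_path(h, u, v):
            h.add_edge(u, v)         # the edge was not redundant: put it back
    return h

def contract(g, u, v):
    """merge v into u (aggregation) and recolour the resulting vertex."""
    if u in (LOW, HIGH) or v in (LOW, HIGH):
        new, colour = (u if u in (LOW, HIGH) else v), "boundary"
    else:
        new = u | v
        colour = g.nodes[u]["colour"] if g.nodes[u]["colour"] == g.nodes[v]["colour"] else "tov"
    mapping = {n: (new if n in (u, v) else n) for n in g.nodes}
    h = nx.DiGraph()
    for n in g.nodes:
        h.add_node(mapping[n], colour=colour if mapping[n] == new else g.nodes[n]["colour"])
    for a, b in g.edges:
        if mapping[a] != mapping[b]:
            h.add_edge(mapping[a], mapping[b])
    return h

def extremal_vertices(g):
    """ascending vertices without ascending successor, descending ones without descending predecessor."""
    out = []
    for v, d in g.nodes(data=True):
        if d["colour"] == "asc" and all(g.nodes[s]["colour"] != "asc" for s in g.successors(v)):
            out.append(v)
        elif d["colour"] == "desc" and all(g.nodes[p]["colour"] != "desc" for p in g.predecessors(v)):
            out.append(v)
    return out

def resolve(g):
    """return the terminal diagrams of the (naive, duplicate-prone) resolution tree."""
    g = transitive_reduction(g)
    ext = extremal_vertices(g)
    if not ext:
        return [g]
    v = ext[0]
    targets = list(g.successors(v)) if g.nodes[v]["colour"] == "asc" else list(g.predecessors(v))
    leaves = []
    for t in targets:                # one branch of the resolution tree per target
        leaves.extend(resolve(contract(g, t, v)))
    return leaves

if __name__ == "__main__":
    # toy instance: maximise x1, minimise x2 and x3, subject to x1 <= x2 and x1 <= x3
    x1, x2, x3 = frozenset({"x1"}), frozenset({"x2"}), frozenset({"x3"})
    g = nx.DiGraph()
    g.add_node(LOW, colour="boundary"); g.add_node(HIGH, colour="boundary")
    g.add_node(x1, colour="asc"); g.add_node(x2, colour="desc"); g.add_node(x3, colour="desc")
    for v in (x1, x2, x3):
        g.add_edge(LOW, v); g.add_edge(v, HIGH)
    g.add_edge(x1, x2); g.add_edge(x1, x3)
    leaves = resolve(g)
    print(len(leaves), "leaves,", len({frozenset(l.nodes()) for l in leaves}), "distinct terminal diagram(s)")
```

on this toy instance the two branches of the resolution tree end in the same terminal diagram (a single tov aggregating x1, x2 and x3 between the two bounds), which illustrates the redundancy that the modified contraction rule below is designed to eliminate.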
a way to fix the redundancies due to case ( i )would be to store known nodes of the resolution tree in a hash table , using a so - called _ memoization _ strategy .we propose instead an improved version of the contraction rules , which ensures that every terminal diagram represents a distinct face of the pareto front and which also removes sub - representations of the pareto front due to case ( ii ) .the resulting algorithm has a complexity of where is the complexity of the pareto front and the number of variables .we define _ frozen edges _ as edges which can not be contracted .furthermore , we impose this property to be inherited downstream of the resolution tree , i.e. a frozen edge remains frozen after contraction of other edges .otherwise , an edge is qualified as _ free_. the improved version of the algorithm consists of modifying step 3 of the original algorithm as follows : _ * modified contraction rule * _ : if possible , contract extremal vertices which aim for other vertices via a single free edge .otherwise : ( i ) contract in priority with conflicting vertices , then with tovs , and , ( ii ) for each , the edges contracted to obtain , , are frozen in .+ as shown below , these modifications ensure that the terminal diagrams describe all _ distinct _ faces of the pareto front .note that the total size of the resolution tree can be further minimized by treating in priority junctions comprising as few as possible alternatives , which generalizes the priority treament of single free edge contractions .however this changes neither the depth of the tree , nor its number of leaves .an elementary point is that the algorithm can be consistently run until the leaf of each branch is reached .a potential issue would indeed be that the creation of a frozen edge leads to the existence of an extremal vertex not connected to any free edge .however , this never happens due to the priority contraction of single free edges , which implies that frozen edges are generated only at stages when every extremal vertex is connected to two or more free edges .as the treatment of single free edge contractions does not differ from the rules of the initial algorithm , we set ourselves at a branching of the resolution tree corresponding to alternative contractions of an extremal vertex aiming to , where the first conflict with , and the remaining are tovs , where .we show first that the edge freezing rules leads to all admissible ( in the sense of other rules ) and distinct partitions of the initial variables . consider the first contraction of .the resulting graph induces all admissible partitions such that and are in the same set .consider then the contraction of , where is frozen according to the modified algorithm .the resulting branch induces all partitions such that and are in the same set but is not . for each -th iteration of this process , the sub - tree stemming from induces partitions such that and are in the same aggregate but are not . therefore , throughout the different branches , the contractions with enumerate without redundance all the accessible subsets of containing .we now want to show that the face parameterized by every terminal diagram is embedded in a stricly different direction of the euclidian space , thus parameterizes a distinct face of the pareto front . 
for this, we have to show that none of the partitions is included in another .this is obtained thanks to the prioritization of conflicting vertices contractions : when aggregates with a tov , , any edge , , joining to a conflicting is frozen in .consequently , at this stage , none of the conflicting can be aggregated to the tov resulting from the contraction of .this implies that two conflicting vertices susceptible to form a tov can not both aggregate to another tov , at any stage of the process .therefore , a tov can only contain a single pair of conflicting variables , whereas at least two such pairs would be necessary to form a sub - partition of the aggregate .we define the interface of the initial problem as the set of all conflicting vertices . contains all extremal vertices of which do not directly aim for the maximum or minumum bound .while can be composed of several connected components , a monotone connected component of may intersect several connected components of .we call the diagrams obtained by aggregating first all the extremal vertices of in .as all conflicting vertices have been aggregated into tovs at this point , the algorithm only results in aggregation of extremal vertices with existing tovs . in this sense ,the remaining monotone connected components of each are isolated from each other by tovs .now call the montone connected components of taken together with the tovs they aim for .each can be solved separately , leading to its own set of leaves .the parametrizations of the different parts of the pareto front of can be obtained by concatenating all possible combinations of the indexes of the . herethe concatenation between diagrams is defined as the merging of vertices which represent aggregated variable sets with a non - empty intersection . with these notations, we have : with for every : where .such a combinatorial representation of the pareto front can be exponentially smaller than the number of faces of the pareto front itself .in particular when the size of the is bounded , the number of terms of the concatenation representing increases linearly , while they represent an exponentially increasing number of faces of the pareto front . in the case of series - parallel , we have .this is due to series - parallel graphs being characterized by the absence of fence subgraph ( n " shaped motif ) .this property implies that there can not be conflicting vertices which each participate to a junction . in other words ,at least one of the two has no other alternative than contracting with its conflicting vertex , leading to the absence of branching process during the resolution of the interface of the hasse diagram . 
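before turning to the bounded-size case, here is a small python sketch (with our own data representation) of this assembly step: a terminal description is taken to be a list of aggregates (frozensets of variable names), one list of candidate descriptions is given per isolated component, and the concatenation merges aggregates that share variables (typically the shared tovs). it only shows the bookkeeping; the component solutions themselves are assumed to be given.

```python
# combinatorial assembly of per-component terminal descriptions (illustrative only).
import itertools

def merge_overlapping(aggregates):
    """union together aggregates that share at least one variable."""
    merged = []
    for a in aggregates:
        a = set(a)
        keep = []
        for b in merged:
            if a & b:
                a |= b              # absorb every overlapping aggregate
            else:
                keep.append(b)
        keep.append(a)
        merged = keep
    return [frozenset(s) for s in merged]

def assemble(solutions_per_component):
    """concatenate every combination of per-component terminal descriptions."""
    out = []
    for combo in itertools.product(*solutions_per_component):
        aggregates = [agg for description in combo for agg in description]
        out.append(merge_overlapping(aggregates))
    return out

if __name__ == "__main__":
    # two components sharing the tov {x2, x3}; each has two terminal descriptions
    comp1 = [[frozenset({"x1", "x2", "x3"})], [frozenset({"x1"}), frozenset({"x2", "x3"})]]
    comp2 = [[frozenset({"x2", "x3", "x4"})], [frozenset({"x2", "x3"}), frozenset({"x4"})]]
    for description in assemble([comp1, comp2]):
        print(sorted(sorted(a) for a in description))
```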
under the additional condition that monochromatic connected components have bounded size , all a bounded size , and the complexity of the resolution of these sub - diagrams is bounded .thus the pareto front of the full problem admits a representation which computation time grows linearly with the number of , which itself increases at much linearly with the number of initial variables .we have described and demonstrated an algorithm which allows to find an exact parameterization of the pareto front of any polytope defined by partial order relations within the hypercube ^n$ ] , given an ideal point located at a corner of this hypercube .the solution is obtained in a linear number of steps with the number of parameters , relative to the number of faces of the pareto front .this result is obtained by establishing a mapping between hasse diagrams and polytopes , where a colouring of the graph encodes the location of the ideal point .more explicitly , vertices represent sets of aggregated variables , edges correspond to ordering relations , and colours correspond to the optimization objectives associated with the variables . following a dynamic programming approach in the space of coloured graphs ,the initial polytope is successively projected onto smaller dimension spaces , corresponding to edge contractions in its diagram representation .the pareto front ultimately consists of the union of spaces corresponding to each terminal hasse diagram obtained after contractions .a major advantage of this approach is that assembling the solution from the decomposed sub - problems is a direct operation , which does not require any further pareto computation than taking a union of sets .we have furthermore introduced a parmeterized complexity approach , by introducing a specific subgraph , which we call the interface and which corresponds to the smallest set containing all the potential trade - offs .the edge contraction algorithm can be applied to this subgraph before all other vertices , until all conflicting parameters have been aggregated .this partial computation is sufficient to determine the dimension of the pareto front . in particular , given that it is necessary to combine at least two nodes with conflicting objectives to generate a trade - off , the dimension of the pareto front will be lower than , where is the number of vertices in the interface . as soon as the interface is resolved, the diagram has a specific structure , where some coloured connected components are isolated from others . due to this property ,given a resolution of the interface , the pareto front can be represented combinatorially from the solutions of the connected components which share a single objective . for series - parallel partial orders ,the interface has a unique resolution .when , additionally , coloured connected components are of bounded size , the pareto front , though of exponential complexity , can be computed and represented in .the propositions of the section 2 of the article are generic and can be applied to reduce multi - objective optimization problems and re - assemble the pareto front , similarly to other exact approaches to the pareto set .specifically , the maximality property expressed in proposition [ maximality ] and proposition [ necessary ] shows how to decrease the search space while conserving the pareto set , and proposition [ parparpar ] can then be used to break up the problem recursively into smaller subproblems . 
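as a quick sanity check of this decomposition principle, the following snippet (ours; restricted to finite point clouds and, for simplicity, to maximisation in every coordinate) verifies on random data that the pareto set of a union coincides with the pareto set obtained by assembling the sub-fronts of the parts:

```python
# numerical illustration of "pareto of a union = pareto of the union of pareto sets"
# for finite point sets under coordinate-wise maximisation (our own toy check).
import random

def dominates(p, q):
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

def pareto(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

if __name__ == "__main__":
    random.seed(0)
    cloud = [tuple(random.random() for _ in range(3)) for _ in range(200)]
    part_a, part_b = cloud[:100], cloud[100:]
    direct = set(pareto(cloud))
    assembled = set(pareto(pareto(part_a) + pareto(part_b)))
    print(direct == assembled)       # True: assembling sub-fronts loses nothing
```

note that this generic identity still requires one pareto filtering pass over the assembled candidates; the algorithm developed in this paper avoids even that step for its particular decomposition, since the union of the terminal parameterizations is already pareto optimal.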
in the context of the optimization of multi-input monotone systems, it would be important to investigate generalizations of the algorithm developed here to linear inequality constraints or to ideal points other than corners of the search space. pn was supported by fom programme 103 "dna in action: physics of the genome". this work is part of the research programme of the foundation for fundamental research on matter (fom), which is part of the netherlands organisation for scientific research (nwo). we would like to thank olivier spanjaard from the laboratoire d'informatique de paris 6 (lip6) and the members of the laboratoire d'informatique fondamentale de lille (lifl) for their invaluable advice. shoval o, sheftel h, shinar g, hart y, ramote o, mayo a, dekel e, kavanagh k, alon u (2012) evolutionary trade-offs, pareto optimality, and the geometry of phenotype space. _science_ 336:1157-1160.
we developed a graph - based method to solve the multi - objective optimization problem of minimizing or maximizing subsets of bounded variables under partial order constraints . this problem , motivated by the optimization of the response of multi - input monotone systems applied to biological gene networks , can find applications in other contexts such as task scheduling . we introduce a mapping between coloured graphs ( hasse diagrams ) and polytopes associated with an ideal point , and find an exact closed - form description of the pareto optimal set using a dynamic program based on edge contractions . the proof of the algorithm is based on decomposition properties of pareto optimal sets that follow from elementary set operations , notably a maximality property valid for compact ensembles . in the general case , the pareto front is found in steps , where is the number of variables and the number of faces of the pareto front . using a parameterized complexity approach , the computation and the representation of the solution reaches for series - parallel graphs when the size of monochromatic connected components is bounded .
throughout the world , devastating earthquakes constantly occur with little or no advance warning . brune ( 1979 ) proposed that earthquakes may be inherently unpredictable since large earthquakes start as smaller earthquakes , which in turn start as smaller earthquakes , and so on . in his model ,most of the fault is in a state of stress below that required to initiate slip , but it can be triggered and caused to slip by nearby earthquakes or propagating ruptures .any precursory phenomena will only occur when stresses are close to the yield stress . however , since even small earthquakes are initiated by still smaller earthquakes , in the limit , the region of rupture initiation where precursory phenomena might be expected is vanishingly small .even if every small earthquake could be predicted , one still faces the impossible task of deciding which of the thousands of small events will lead to a runaway cascade of rupture composing a large event .nevertheless , the discussion about the possibility of earthquake forecast continues to be open , and a wide spectrum of new spaceborne technologies for the earthquake study and forecast appeared during the last decades .the main advantage of spaceborne technologies is the ability to cover big territories and areas with difficult access .the list of these technologies is very large .as an example it is possible to mention the measurements of different ionospheric precursors of earthquakes including changes in electromagnetic elf radiation ( serebryakova et al . , 1992 , gokhberg et al . , 1995 ) , and ionospheric electron temperature ( sharma et al . , 2006 ) , and density ( trigunait et al . , 2004 ) ( see pulinets at al . , ( 2003 ) for a review ) .many efforts have been concentrated in the study of the ground deformation using the satellite radar interferometry , that makes it possible to determine the location and amount of coseismic surface displacements ( see for example satybala , 2006 ; schmidt and bergmann , 2006 , lasserre et al . , 2005 , funning et al . ,the ir satellite thermal imaging data were used to study pre - earthquake thermal anomalies ( ouzounov and freund , 2004 ) .the anomalies in the surface latent heat flux data were also detected a few days prior to coastal earthquakes ( cervone et al ., 2005 , singh and ouzounov , 2003 ; dey at al . , 2004 ) . during last years, significant progress has been reached in the understanding how the complex set of phenomena , related to the earthquake gestation is reflected , at least partially , in the geological lineaments . in particular, cotilla - rodriguez and cordoba - barba ( 2004 ) studied the morphotectonic structure of the iberian peninsula and showed that the main seismic activity is concentrated on the first- and second rank lineaments , and some of important epicenters are located near the lineament intersections .stich et al . , ( 2001 ) found from the analysis of 721 earthquakes with magnitude between 1.5 and 5.0 , that the epicenters draw well - defined lineaments and show two dominant strike directions n120 - 130e and n60 - 70e , which are coincident with the known fault system of the area .distances within multiplets ( typically several tens of meters ) are smaller than the fracture radii of these events .carver et al . 
(2003 ) have used the srtm and landsat-7 digital data and paleoseismic techniques to identify active faults and evaluate seismic hazards on the northeast coast of kodiak island , alaska .arellano et al .( 2004ab , 2005 ) studied the changes in the lineament structure caused by a 5.2 richter scale magnitude earthquake occurred january 27 , 2004 in southern peru . during last years this regionis studied intensively using the ground based seismic network ( comte et al . , 2003 ; david et al ., 2004 ; legrand , 2005 ) as well as gps and sar interferometry data ( campos et al . , 2005 ) .the aster / terra high resolution multispectral images 128 and 48 days before and 73 days after the earthquake were used .it was shown that the lineament system is very dynamical , and significant numbers of lineaments appeared between four and one month before the earthquake .they also studied the changes in stripe density fields .these fields represent the density of stripes , calculated for each direction as a convolution between the corresponding circular masks and the image .the stripe density field residuals showed the reorientation of stripes , which agrees with the dilatancy models of earthquakes .these features disappear in the image obtained two months after the earthquake .analysis of the similar reference area , situated at 200 km from the epicenter , showed that in the absence of earthquakes both lineaments and stripe density fields remain unchanged .similar results were obtained later by bondur and zverev ( 2005 ) due to analysis of modis ( terra ) images of earthquake in california .singh v.p and r.p .singh ( 2005 ) used the lineament analysis to study changes in stress pattern around the epicenter of mw=7.6 bhuj earthquake .this earthquake occurred 26 january 2001 in india .indian remote sensing ( irs-1d ) liss data were used .the lineaments were extracted using high pass filter ( sobel filter in all directions ) .the results obtained also confirm that the lineaments retrieved from the images 22 days before the earthquake differ from the lineaments obtained 3 days after the earthquake .it was assumed that they are related to fractures and faults and their orientation and density give an idea about the fracture pattern of rocks .the results also show the high level of correlation between the continued horizontal maximum compressive stress deduced from the lineament and the earthquake focal mechanism .studies of lineament dynamics can also contribute to better understanding of the nature of earthquakes .to date significant number of theories has been developed to explain how an earthquake occur .one of the oldest is the elastic rebound theory , proposed by harry reid after the california 1906 earthquake ( reid , 1910 ) .it is based on the assumption that the rocks under stress deform elastically , analogous to a rubber band .strain builds up until either the rock break creating a new fault or movement occurs on an existing fault . 
as stored strainis released during an earthquake , the deformed rocks `` rebound '' to their undeformed shapes .the magnitude of the earthquake reflects how much strain was released .the seismic gap hypothesis states that strong earthquakes are unlikely in regions where weak earthquakes are common and the longer the quiescent period between earthquakes , the stronger the earthquake will be when it finally occur ( see kagan and jackson , 1995 , and references therein ) .the complication is that the boundaries between crustal plates are often fractured into a vast network of minor faults that intersect the major fault lines .when an earthquake relieves the stress in any of these faults , it may pile additional stress on another fault in the network .this contradicts the seismic gap theory because a series of small earthquakes in an area can then increase the probability that a large earthquake will follow . the theory of dilatancy states that an earthquake develops similarly to the rupture of a solid body ( whitcomb et al ., 1973 ; scholz et al . , 1973 ; griggs et al . , 1975 )this approach has a physical basis in laboratory studies of rock samples , which showed that when rocks are compressed until they fracture , a dilatancy often occurs for a short time interval immediately before failure ( scholz , 1968 ) .mjachkin et al .( 1975ab ) modified the dilatancy approach and formulated the theory of unstable avalanche crack formation .the model is based on the two phenomena : interaction between the stress fields of the cracks , and the localization of the process of the crack formation .the number and size of cracks increases gradually under the action of tensions below a critical value . when the density of cracks reaches some critical value , the rock breaks very quickly .this process develops due to merging of cracks as a result of interaction between their stress fields .however , the larger cracks have more probability to interact , and it supposes that a small number of large cracks is gradually formed , and their merging leads to the macro - destruction . during the earthquake gestation , a gradual increase in number and size of cracks occur in the whole volume of rock under compression . when the crack density reaches a critical value , the barriers between cracks are destroyed , and the velocity of deformation increases .finally , an unstable deformation develops and localizes in a narrow zone of future macro - rupture , the cracks orient along the future macro rupture , and a macro - crack is formed , producing an earthquake . 
however, this model was modified recently by introducing the concept of self-organized criticality, proposed by bak et al. (1988) for the description of the behavior of complex systems. applied to earthquakes, this approach describes an interaction between the ruptures of different rank and the collective effects of rupture formation before a strong earthquake (for example varnes, 1989; keilis-borok, 1990; sammis and sornette, 2002). a wide area around the future epicenter reaches a metastable state, and the system becomes very sensitive to small external actions. the concept of soc does not contradict the concept of dilatancy. however, it assumes that a significantly greater region is involved during the last stages of earthquake preparation than the dilatancy theories imply. unfortunately, the main processes leading to an earthquake develop deep inside the crust, and there is no way to make direct measurements of any quantity. the only possibility we have is to search for traces of these processes disseminated over the earth's surface. in this context, lineament analysis could become in the future one of the powerful tools for earthquake study, complementing other ground-based and satellite studies. nevertheless, despite the promising results obtained, many important questions remain open. is the lineament system always affected by an earthquake? how early before an earthquake is this alteration manifested? how is it related to the earthquake magnitude and depth? how different is it in the case of different kinds of plate boundaries? this study represents a first step in the search for some answers. for this study we used the images from the advanced spaceborne thermal emission and reflection radiometer (aster) onboard the terra satellite. the satellite was launched into a circular solar-synchronous orbit with an altitude of 705 km. the radiometer is composed of three instruments: the visible and near infrared radiometer (vnir) with 15 m resolution (bands 1-3), the short wave infrared radiometer (swir) with 30 m resolution (bands 4-9), and the thermal infrared radiometer (tir) with 90 m resolution (bands 11-14), which together measure the reflected and emitted radiation of the earth's surface covering the range 0.56 to 11.3 μm (abrams, 2000). the images were processed using the lineament extraction and stripes statistic analysis (lessa) software package (zlatopolsky, 1992, 1997), which provides a statistical description of the position and orientation of short linear structures through detection of small linear features (stripes) and calculation of descriptors that characterize the spatial distribution of stripes. the program also makes it possible to extract the lineaments - straight lines crossing a significant part of the image. to make this extraction, a set of very long and very narrow (a few pixels wide) windows (bands), crossing the entire image in different directions, is used. in each band the density of stripes whose direction coincides with the direction of the band is calculated. when the density of stripes exceeds a pre-established threshold, the chain of stripes along the band is considered as a lineament. the value of the threshold depends on the brightness of the image, relief, etc., and is established empirically.
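purely as an illustration of this band-based procedure, the following toy sketch in python/numpy (our own simplification, not the lessa package itself; threshold values and helper names are hypothetical) marks stripe pixels from local gradients and then flags long, narrow horizontal bands whose density of similarly oriented stripes exceeds a threshold; other directions would be treated by rotating the image:

```python
import numpy as np

def stripe_map(image, edge_thresh=0.1):
    """mark stripe (short linear feature) pixels and estimate their orientation
    in degrees, using local image gradients."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    # a linear feature runs perpendicular to its intensity gradient
    orientation = (np.degrees(np.arctan2(gy, gx)) + 90.0) % 180.0
    return magnitude > edge_thresh * magnitude.max(), orientation

def horizontal_lineaments(stripes, orientation, band_width=3,
                          angle_tol=20.0, density_thresh=0.3):
    """scan long narrow horizontal bands and return those in which the density
    of horizontally oriented stripes exceeds the threshold (a chain of stripes
    along the band is then taken as a candidate lineament)."""
    rows, _ = stripes.shape
    aligned = stripes & ((orientation < angle_tol) |
                         (orientation > 180.0 - angle_tol))
    detected = []
    for top in range(0, rows - band_width + 1, band_width):
        band = aligned[top:top + band_width, :]
        density = band.sum() / band.size
        if density > density_thresh:
            detected.append((top, round(float(density), 3)))
    return detected

# toy image: weak noise plus one horizontal discontinuity
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.05, size=(60, 120))
img[30:, :] += 1.0
stripe_mask, orient = stripe_map(img)
print(horizontal_lineaments(stripe_mask, orient))
```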
previous studies showed that lineaments extracted from the image by applying the lessa program are strongly related to the main lineaments obtained from geomorphological studies (zlatopolsky, 1992, 1997). the details of the application of the lessa package to earthquake studies are given in (arellano et al., 2005). during this study we analysed 5 earthquakes that occurred on the pacific coast of south america and one earthquake that occurred in the himalaya, china. table 1 summarizes the main characteristics of these earthquakes, indicating the date, country, geographic coordinates, magnitude, and depth of each event. the aster images available for each earthquake are also indicated; for example, -126 means that an image acquired 126 days before the earthquake was used. the last column indicates that in all south american earthquakes the number and orientation of lineaments changed before the earthquake. in the case of the china earthquake, we cannot give a definite answer, because unfortunately the key images tens of days before the earthquake were covered by clouds over approximately 50% of their area, which made the lineament analysis difficult (last two lines, two areas covering the hypocenter and close to the hypocenter). nevertheless, a more sophisticated technique based on the analysis of stripe density fields was able to detect the alterations in these fields related to the earthquake. the methodology of this analysis is given in (arellano et al.). currently we are preparing a manuscript dedicated specifically to the analysis of this event. to illustrate the results obtained, we give as an example a detailed analysis of the mw 7.8 earthquake that took place on june 13, 2005 in northern chile close to arica (see figure 1). the hypocenter was situated 115 km deep in the crust; the coordinates were lat , long . in the top, a series of four band 3 aster (vnir) images around the hypocenter area is shown. it is possible to see that the presence of clouds was low. the second line contains the images showing the systems of lineaments, obtained from the images above using the lessa program with a threshold of 120 (zlatopolsky, 1992, 1997). a clear time evolution of the lineaments is visible, with a strong increase in the number of lineaments 5 days before the earthquake. the third and fourth lines quantify this effect by calculating the rose diagrams and the radon transforms. the reorientation of lineaments can be taken as indirect evidence in favour of the theory of dilatancy. nevertheless, more detailed studies are necessary before definitive conclusions can be drawn. table 1: main characteristics of the earthquakes analyzed. in this study we used multispectral satellite images from the aster/terra satellite for the detection and analysis of lineaments in the areas around strong earthquakes with magnitude greater than 5 mw. a lineament is a straight or somewhat curved feature in an image, which can be detected by special processing of the images based on directional filtering and/or the hough transform. it was established that the systems of lineaments are very dynamical.
by analyzing 5 events of strong earthquakes, it was found that a significant number of lineaments appeared approximately one month before an earthquake , and one month after the earthquake the lineament configuration returned to its initial state .these features were not observed in the test areas , situated hundreds kilometers away from the earthquake epicenters .the main question is how the lineaments extracted from images of 15 - 30 m ( aster ) in resolution are able to reflect the accumulation of stress deep in the crust given that the ground deformations associated with these phenomena are about a few centimeters ?the nature of lineaments is related to the presence of faults and dislocations in the crust , situated at different depth .if a dislocation is situated close to the surface , the fault appears as a clear singular lineament . in the case of a deep located fault, we observe the presence of extended jointing zones , easily detectable in satellite images even up to 200 m resolution .nevertheless , how well lineaments can be detected strongly depends on a number of factors .in particular , it depends on the current level of stress in the crust .generally , an enlarged presence of lineaments indicates that in these regions the crust is more permeable , allowing the elevation of fluids and gases to the surface .accumulation of stress deep in the crust modifies all afore mentioned processes and leads to the variation in the density and orientationn of lineaments , previous to a strong earthquake .we acknowledge hiroji tsu ( geological survey of japan csj ) aster team leader , anne kahle ( jet propulsion laboratory jpl ) us aster team leader and the land processes distributed active archive center for providing the aster level 2 images .we acknowledge a. zlatopolsky for providing the lineament extraction and stripes statistic analysis ( lessa ) software package and helpful suggestions .we thank very much milton rojas gamarra for his assistance in image procesing .this work has been supported by dicyt / usach grant . + + * references * abrams , m. , the advanced spaceborne thermal emission and reflection radiometer ( aster ) : data products for the high spatial resolution imager on nasa s terra platform , international journal of remote sensing , 21(5 ) , 847 - 859 , 2000 .alparone , s. , b. behncke , s. giammanco , m. neri , e. privitera , e. , paroxysmal summit activity at mt .etna ( italy ) monitored through continuous soil radon measurements , geoph .lett . , 32(16 ) , 10.1029/2005gl023352 , 2005 .arellano - baeza , a. ; zverev , a. ; malinnikov , v. study of the structure changes caused by earthquakes in chile applying the lineament analysis to the aster ( terra ) satellite data .35th cospar scientific assembly , paris , france , july 18 - 25 , 2004 .arellano - baeza , a. ; zverev , a. ; malinnikov , changes in geological faults associated with earthquakes detected by the lineament analysis of the aster ( terra ) satellite data ., xi latin american symposium on remote sensing and spatial information systems , santiago , november 22 - 26 , 2004 .arellano - baeza a.a ., a. zverev , v. malinnikov , study of changes in the lineament structure , caused by earthquakes in south america by applying the lineament analysis to the aster ( terra ) satellite data , advances in space research , doi:10.1016/j.asr.2005.07.068 , electronic access from 2005 .campos , j. , j. dechabalier , a. perez , p. bernard , s. bonvalot , m. bouin , o. charade , a. cisternas , e. clevede , v. clouard , r. 
dannoot , g. gabalda , e. kausel , d. legrand , a. lemoine , a. nercessian , g. patau , j. ruegg , j. vilotte , source parameters and gps deformation of the mw 7.8 tarapaca intermediate depth earthquake , abstract n s13b-0209 , american geophysical union fall meeting , san francisco , usa , 2005 .carver , g. , j. sauber , w. r. lettis , r. c. witter , use of srtm and landsat-7 to evaluate seismic hazards , kodiak island , alaska , abstract nnnnn 4513 , egs - agu - eug joint assembly , nice , france , 6 - 11 april , 2003 .cervone , g. , r.p .singh , m. kafatos , c. yu , wavelet maxima curves of surface latent heat flux anomalies associated with indian earthquakes , natural hazards and earth system sciences , 5(1 ) , 87 - 99 , 2005 .comte , d. , h. tavera , c. david , d. legrand , l. dorbath , a. gallego , j. perez , b. glass , h. haessler , e. correa , a. cruz , seismotectonic characteristics around the arica bend , central andes ( 16s-20s ) : preliminary results , abstract nnnnnnns41a-05 , american geophyisical union fall meeting , san francisco , usa , 2003 .funning , g. j. , b. parsons , t.j .wright , j.a .jackson , james e.j .fielding , surface displacements and source parameters of the 2003 bam ( iran ) earthquake from envisat advanced synthetic aperture radar imagery , surface displacements and source parameters of the 2003 bam ( iran ) earthquake from envisat advanced synthetic aperture radar imagery , 110(b9 ) , doi : 10.1029/2004jb003338 , 2005 .a. a. griffiths , the phenomenon of rupture and flow in solids , phil .london a 221 , 163 - 198 , 1921 .griggs , d. t. , d. d. jackson , l. knopoff , and r. l. shreve , earthquake prediction : modeling the anomalous vp / vs source region , science , 187 , 537 - 540 , 1975 .karnieli , a. , a. meisels , l. fisher , y. arkin , automatic extraction and evaluation of geological linear features from digital remote sensing data using a hough transform .photogrammetric engineering and remote sensing .62(5 ) , 525 - 531 , 1996 .king , g. c. p. the accommodation of large strains in the upper lithosphere of the earth and other solid by self - similar fault system : the geometrical origin of the b - value , pageoph .121 , 567 - 585 , 1983 .koike , k. , nagano s. and kawaba k. , construction and analysis of interpreted fracture planes through combination of satellite - image derived lineaments and digital elevation model data . computers and geosciences , 24 , 573 - 583 , 1998 .koizumi , n. , y. kitagawa , n. matsumoto , m. takahashi , t. sato , o. kamigaichi , k. nakamura , preseismic groundwater level changes induced by crustal deformations related to earthquake swarms off the east coast of izu peninsula , japan , geoph .lett . , 31(10 ) ,doi : 10.1029/2004gl019557 , 2004 .lasserre , c. , g. peltzer , f. cramp y. klinger , j. van der woerd , p. tapponnier , coseismic deformation of the 2001 mw = 7.8 kokoxili earthquake in tibet , measured by synthetic aperture radar interferometry , j. geoph .res . , 110(b12 ) ,doi : 10.1029/2004jb003500 , 2005 .legrand , d. , co - seismic deformation of the crustal mw=6.3 , 2001 , chusmiza , chile event triggered by the subduction mw=8.4 , 2001 , arequipa , peru earthquake studied using seismological data , abstract s43a-1051 , american geophysical union , fall meeting , san francisco , usa , 2005 .morrow , c. , w.f .brace , electrical resistivity changes in tuffs due to stress , j. geoph .res . , 86(b4 ) , 2929 - 2934 , 1986 .oleary , d. w. friedman , j. d. and pohn , h. a. 
, lineament , linear lineation some proposed new standards for old terms . geological society america bulletin , 87 , 1463 - 1469 . , 1976 .pulinets , s. a. , a.d .legenka , t.v .gaivoronskaya , v.kh .depuev , main phenomenological features of ionospheric precursors of strong earthquakes , j. atm .phys . , 65(16 - 18 ) , 1337 - 1347 , doi:10.1016/j.jastp.2003.07.011 , 2003 .reid , h. f. , the mechanics of the earthquake , v. 2 of the california earthquake of april 18 , 1906 : report of the state earthquake investigation commission : carnegie institution of washington publication 87 , 1910 .satyabala , s. p. , coseismic ground deformation due to an intraplate earthquake using synthetic aperture radar interferometry : the mw6.1 killari , india , earthquake of 29 september 1993 , j. geoph .res . , 111(b2 ) ,doi : 10.1029/2004jb003434 , 2006 .sharma , d. k. , m. israil , r. chand , j. rai , p. subrahmanyam , s. c. garg , signature of seismic activities in the f2 region ionospheric electron temperature , j. atm ., 68(6 ) , 691 - 696 , doi : 10.1016/j.jastp.2006.01.005 , 2006 .serebryakova , o. n. , s. v. bilichenko , v. m. chmyrev , m. parrot , j. l. rauch , f. lefeuvre , o. a. pokhotelov , electromagnetic elf radiation from earthquake regions as observed by low - altitude satellites , geophys ., 19(2 ) , 91 - 94 , 1992 .stich , d. , g. alguacil , j. morales , the relative locations of multiplets in the vicinity of the western almeria ( southern spain ) earthquake series of 1993 - 1994 , geophysical journal international , 146(3 ) , 801 - 812 , 2001 .szen , m. l. and toprak v. , filtering of satellite images in geological lineament analyses : an application to a fault zone in central turkey . international journal of remote sensing , vol .19 , p. 1101 - 1114 , 1998 .varnes , d. j. , predicting earthquakes by analyzing accelerating precursory seismic activity , pageoph .130(4 ) , 661 - 686 , 1989 .wang , j. , howarth , p.j .: use of the hough transform in automated lineament detection .ieee tran .geoscience and remote sensing , vol . 28 .p. 561 - 566 , 1990 .zlatopolsky , a. a. , program lessa ( lineament extraction and stripe statistical analysis ) : automated linear image features analysis experimental results , computers and geosciences , 18(9 ) , 1121 - 1126 , 1992 .
over the last decades strong efforts have been made to apply new spaceborne technologies to the study and possible forecast of strong earthquakes. in this study we use aster/terra multispectral satellite images for the detection and analysis of changes in the system of lineaments prior to a strong earthquake. a lineament is a straight or somewhat curved feature in an image, which can be detected by special processing of the images based on directional filtering and/or the hough transform. the images were processed with the lineament extraction and stripes statistic analysis (lessa) software package developed by zlatopolsky (1992, 1997). we assume that the lineaments allow one to detect, at least partially, the presence of ruptures in the earth's crust, and therefore enable one to follow the changes in the system of faults and fractures associated with strong earthquakes. we analysed 6 earthquakes that occurred on the pacific coast of south america and one earthquake in tibet, xizang, china, with the richter scale magnitude mw . they were located in regions with small seasonal variations and limited vegetation, to facilitate the tracking of features associated with the seismic activity only. it was found that the number and orientation of lineaments changed significantly approximately one month before an earthquake; after that the system gradually returns to its initial state. this effect increases with the earthquake magnitude, and it is much more easily detectable in the case of convergent plate boundaries (for example, the nazca and south american plates). the results obtained open the possibility of developing a methodology to evaluate the seismic risk in regions with similar geological conditions.
, where is the underlying genotype . with ; or indicates the state of the mutable site ( e.g. , amino acid position ) .the effect of a single , double , triple mutation is given by the red arrows .pairwise ( or second - order ) epistasis is defined as the differential effect of a mutation depending on the background in which it occurs , for example in ( b ) it is the degree to which the effect of one mutation ( e.g. ) deviates in the background of the second mutation ( ) .thus , the expression for second order epistasis is .the third order and higher cases are considered in the main text , [ fig : explain ] ] we begin with a formal definition of genotype , phenotype , and the representation of mutational effects . consider a specific sequence comprised of positions as a binary string with , where and represent the `` wild - type '' and mutant state of each position , respectively .this defines a total space of genotypes .the analysis could be expanded to the case of multiple substitutions per position , but we consider just the binary case for clarity here .each genotype has an associated phenotype , which is of the form that the independent action of two mutations means additivity in . for notational simplicity , we will simply write the genotype in a -bit binary form , where is the order of the mutations that are considered .for example , the effect of a single mutation is simply , the difference in the phenotype between the mutant and wild - type states ( fig .[ fig : explain]a ) .the effect of a double mutant is given by ( red arrow , fig .[ fig : explain]b ) , and its linkage through paths of single mutations is defined by a two - dimensional graph ( a square network ) with four total genotypes . similarly , a triple mutant effect is ( red arrow , fig .[ fig : explain]c ) , and its linkage through paths of single mutations are enumerated on a three - dimensional graph ( a cube ) with eight total genotypes .more generally , and as described by horowitz and fersht , the phenotypic effect of any arbitrary -dimensional mutation can be represented by an -dimensional graph , with total genotypes .understanding the relationship of the phenotypes of multiple mutants to that of the underlying lower - order mutant states is the essence of epistasis , and is described below .a well - known approach in biochemistry for analyzing the cooperativity of amino acids in specifying protein structure and function is to use the formalism of thermodynamic mutant - cycles , one manifestation of the general principle of epistasis . in this approach ,the `` phenotype '' is typically an equilibrium free energy ( e.g. of thermodynamic stability or biochemical activity ) , and the goal is to obtain information about the structural basis of this phenotype through mutations that represent subtle perturbations of the wild - type state . for pairs of mutations ,the analysis involves measurements of four variants : wild - type ( ) , each single mutant ( and ) , and the double mutant ( ) , where the subscripts designate the mutated positions , and the superscript o indicates free energy relative to a standard state ( fig .1b ) . 
from this , we can compute a coupling free energy between the two mutations ( ) as the degree to which the effect of one mutation ( ) is different when tried in the background of the other mutation ( ) : whereas the terms are individual measurements and terms are the effects of single mutations relative to wild - type , is a second order epistatic term describing the cooperativity ( or non - independence ) of two mutations with respect to the wild - type state .this analysis can be expanded to higher order .for example , the third order epistatic term describing the cooperative action of three mutations 1 , 2 , and 3 ( ) is defined as the degree to which the second order epistasis of any two mutations is different in the background of the third mutation : note that requires measurement of eight individual genotypes ( fig .more generally , we can define an -th order epistatic term ( ) , describing the cooperativity of mutations , it is possible to write this expansion in a compact matrix form : where is the vector of epistasis terms of all orders , and is the vector of free energies corresponding to phenotypes of all the individual variants listed in binary order .to illustrate , for three mutations , and we obtain 1 & 0 & 0 & 0 & 0 & 0 & 0 & \ \0\ \\ -1 & 1 & 0 & 0 & 0 & 0 & 0 & \ \0\ \\ -1 & 0 & 1 & 0 & 0 & 0 & 0 & \\ 0\ \\ 1 & -1 & -1 & 1 & 0 & 0 & 0 & \ \0\ \\ -1 & 0 & 0 & 0 & 1 & 0 & 0 & \\ 0\ \\ 1 & -1 & 0 & 0 & -1 & 1 & 0 & \\ 0\ \\ 1 & 0 & -1 & 0 & -1 & 0 & 1 & \ \0\ \\ -1 & 1 & 1 & -1 & 1 & -1 & -1 & \ \ 1\ \end{pmatrix * } \setlength{\arraycolsep}{6pt } * \begin{pmatrix * } y_{000}\\ y_{001}\\ y_{010}\\ y_{011}\\ y_{100}\\ y_{101}\\ y_{110}\\ y_{111 } \end{pmatrix*}\ ] ] in this representation , subscripts in represent combinations of mutations ( e.g. , a double mutant ) and subscripts in represent epistatic order ( e.g. , pairwise epistasis between mutations 1 and 2 ) .thus , equations and correspond to multiplying by the fourth or eighth row of , respectively , to specify and .note that and contain precisely the same information , re - written in a different form .the matrix represents an operator linking these two representations of the mutation data and we will return to the nature of the operation in a later section .we can write a recursive definition for that defines the mapping between and for all epistatic orders : \boldsymbol{g}_n & \ \ 0 \ \ \\[0.2em ] -\boldsymbol{g}_n & \boldsymbol{g}_n \end{pmatrix*}\ \ \mathrm{with}\ \ \ \setlength{\arraycolsep}{6pt } \boldsymbol{g}_0 = 1 \label{eq : grecursive}\ ] ] the inverse mapping is defined by .this relationship gives the effect of any combination of mutants ( in ) as a sum over epistatic terms ( in ) .for example , the energetic effect of three mutations 1,2 , and 3 ( ) is : thus , in the most general case , the free energy value of a multiple mutation requires knowledge of the effect of the single mutations and all associated epistatic terms . for the triple mutant, this means the wild - type phenotype , the three single mutant effects , the three two - way epistatic interactions , and the single three - way epistatic term .this analysis highlights two important properties of epistasis : ( 1 ) the lack of any epistatic interactions between mutations dramatically simplifies the description of multiple mutations to just the sum over the underlying single mutation effects , and ( 2 ) the absence of lower - order epistatic interactions ( e.g. 
) does not imply absence of higher order epistatic terms .in contrast to the biochemical definition , the significance of a mutation ( and its epistatic interactions ) may also be defined not solely with regard to a single reference state as the `` wild - type '' , but as an average over many possible genotypes . as we show below, such averaging better represents the epistatic level at which mutations operate , and in principle , can separate mutant effects that are idiosyncratic to particular genotypes from those that are fundamentally important .the concept of averaging epistasis over genotypic backgrounds is analogous to the idea of the schema average fitness in the field of genetic algorithms ( ga ) , which was recently introduced in biology . in its complete form, background - averaged epistasis considers averages over all possible genotypes for the remaining positions in the ensemble .for example , if , the epistasis between two positions 1 and 2 is computed as an average over both states of the third position ( , with the averaging denoted by ) ( see .1c ) : \\ + [ ( y_{011 } - y_{010 } ) - ( y_{001 } - y_{000 } ) ] \bigg\}\end{gathered}\ ] ] thus for , we can write all epistatic terms : \ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\ \\ \ 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1\ \\ \ 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1\ \\ \ 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1\ \\ \ 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1\ \\ \ 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1\ \\ \ 1 & 1 & -1 & -1 & -1 & -1 & 1 & 1\ \\ \ 1 & -1 & -1 & 1 & -1 & 1 & 1 & -1\ \end{pmatrix * } \setlength{\arraycolsep}{6pt } * \begin{pmatrix * } y_{000}\\ y_{001}\\ y_{010}\\ y_{011}\\ y_{100}\\ y_{101}\\ y_{110}\\ y_{111 } \end{pmatrix*}\ ] ] where is a diagonal weighting matrix to account for averaging over different number of terms as a function of the order of epistasis ; , where is the order of the epistatic contribution in row .more generally , for any number of mutations : where is the same vector of phenotypes of variants as defined above , is the vector of background averaged epistatic terms , and is the operator for background - averaged epistasis , defined recursively as \ \boldsymbol{h}_n & \boldsymbol{h}_n\\[0.2em ] \\boldsymbol{h}_n & -\boldsymbol{h}_n \end{pmatrix*}\ \ \mathrm{with}\ \ \ \setlength{\arraycolsep}{6pt } \boldsymbol{h}_0 = 1\ ] ] the recursive definition for the weighting matrix is \ \frac{1}{2}\boldsymbol{v}_n & 0 \\[0.2em ] \ 0 & -\boldsymbol{v}_n\ \end{pmatrix*}\ \ \mathrm{with}\ \ \ \setlength{\arraycolsep}{6pt } \boldsymbol{v}_0 = 1\ ] ] the matrix has special significance ; its action mathematically corresponds to a generalized fourier analysis known as the walsh - hadamard transform .this converts the phenotypes of individual variants ( in ) into a vector of averaged epistasis ( in ) , an operation that can also be seen as a spectral analysis of the high - dimensional phenotypic landscape defined by the genotypes studied . in this transform ,the phenotypic effects of combinations of mutations are represented as sums over averaged epistatic terms . 
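both operators are straightforward to reproduce numerically, since the recursions for the two operators above are n-fold kronecker products of a fixed 2x2 block. the following is a minimal illustrative sketch in python/numpy (toy phenotype values and hypothetical function names, not code from the paper), which also checks the pairwise terms against the explicit formulas given above:

```python
import numpy as np

def kron_power(block, n):
    """n-fold kronecker construction shared by the recursions above."""
    m = np.array([[1]])
    for _ in range(n):
        m = np.kron(block, m)
    return m

def g_matrix(n):
    """single-reference (mutant-cycle) operator, recursion [[G, 0], [-G, G]]."""
    return kron_power(np.array([[1, 0], [-1, 1]]), n)

def h_matrix(n):
    """walsh-hadamard operator, recursion [[H, H], [H, -H]]."""
    return kron_power(np.array([[1, 1], [1, -1]]), n)

def v_weights(n):
    """diagonal weights: a row of epistatic order k is scaled by (-1)^k / 2^(n-k),
    which is what the recursion diag(V/2, -V) produces."""
    k = np.array([bin(i).count('1') for i in range(2 ** n)])
    return np.diag(((-1.0) ** k) / 2.0 ** (n - k))

n = 3
# toy phenotypes listed in binary order y_000, y_001, ..., y_111
y = np.array([0.0, 0.3, -0.1, 1.2, 0.5, 0.4, 0.6, 2.0])

eps_local = g_matrix(n) @ y                # epistasis relative to the wild type
eps_avg = v_weights(n) @ h_matrix(n) @ y   # background-averaged epistasis

# pairwise term for positions 1 and 2: single reference vs averaged over position 3
assert np.isclose(eps_local[0b011],
                  (y[0b011] - y[0b010]) - (y[0b001] - y[0b000]))
assert np.isclose(eps_avg[0b011],
                  0.5 * (((y[0b011] - y[0b010]) - (y[0b001] - y[0b000])) +
                         ((y[0b111] - y[0b110]) - (y[0b101] - y[0b100]))))
# the highest-order term has no backgrounds to average over, so the two agree
assert np.isclose(eps_local[0b111], eps_avg[0b111])
print(eps_local)
print(eps_avg)
```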
in summary ,the definition of epistasis proposed in evolutionary genetics is a global definition over sequence space , averaging the epistatic effects of mutations over the ensemble of all possible variants .in contrast , the biochemical definition given in the previous section is a local one , treating a particular variant as a reference for determining the epistatic effect of mutations .a third approach for analyzing epistasis is linear regression .for example , when we have a complete dataset of phenotypes of all genotypes , we can use regression to define the extent to which epistasis is captured by only considering terms to some order .that is , whether terms up to the order are sufficient for effectively capturing the full complexity of a biological system .the standard form for a linear regression is a set of equations : for each genotype .the terms denote the regression coefficients corresponding to the ( epistatic ) effects between subscripted positions , and is the residual noise term . in matrix form this can be written as where tabulates which regression coefficients are summed over for genotypes . for ,regressing to full order , we can write 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0\\ 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0\\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix * } * \begin{pmatrix } \beta_{000}\\ \beta_{001}\\ \beta_{010}\\ \beta_{011}\\ \beta_{100}\\ \beta_{101}\\ \beta_{110}\\ \beta_{111 } \end{pmatrix } + \boldsymbol{\bar{\epsilon}}\ ] ] following the same rule for subscripts as before . has the recursive definition : \ \boldsymbol{x}_n & \ \ 0 \ \ \\\[0.2em ] \\boldsymbol{x}_n & \boldsymbol{x}_n\ \end{pmatrix*}\ \ \mathrm{with}\ \ \ \setlength{\arraycolsep}{6pt } \boldsymbol{x}_0 = 1 \label{eq : recursivex}\ ] ] it is worth noting that the inverse of is , the operator for biochemical epistasis ( eq . ; see supplementary information ) .thus , the multi - dimensional mutant - cycle analysis is indistinguishable from regression to full order the case in which and .however , the usual aim of regression is to approximate the data with fewer coefficients than there are data points , i.e. , . to express this ,we simply remove the columns from that refer to the epistatic orders excluded from the regression ( i.e. , ) : is multiplied by an -by- matrix , the identity matrix with columns corresponding to epistatic orders higher than removed . 
is the number of epistatic terms up to and is given by .thus for regression to order , we can define , and write the linear regression is performed by solving the so - called normal equations where is necessarily square and invertible as long as is full column rank and hence is full rank .note that in this analysis we compute epistatic terms only up to the order , but use phenotype / fitness data of all combinations of mutants .the more general case in which we estimate epistatic terms with less than data points is distinct and is discussed below .if the biochemical definition of epistasis is a local one , exploring the coupling of mutations of all order with regard to one `` wild - type '' reference , and the ensemble view of epistasis is a global one , assessing the coupling of mutations of all order averaged over all possible genotypes , then the regression view of epistasis is an attempt to project to a lower dimension - capturing epistasis as much as possible with low - order terms .the analysis presented above leads to a simple unifying concept underlying the calculations of epistasis . in general ,all the calculations are a mapping from the space of phenotypic measurements of genotypes to epistatic coefficients , in a general form , where is the epistasis operator .we give the bottom line of the different operators below ; their formal mathematical derivations can be found in the supplementary information .the most general situation is that of the background - averaged epistasis with averaging over the complete space of possible genotypes . in this case where is a matrix corresponding to the walsh - hadamard transform ( is the number of mutated sites ) and is a matrix of weights to normalize for the different numbers of terms for epistasis of different orders .the biochemical definition of epistasis using one `` wild - type '' sequence as a reference is a sub - sampling of terms in the hadamard transform . in this case where is , as defined in eq . .in essence , picks out the terms in that concern the wild - type background .note that both these mapping are one - to - one , such that the number of epistatic terms ( in ) is equal to the number of phenotypic measurements ( in ) and no information is lost . in contrast , regression to lower orders necessarily implies fewer epistatic terms than data points , which means the mapping is compressive and information is lost . in this case where ( ) is the identity matrix but with zeros on the diagonal at the orders that are higher than which we regress over .the fundamental point is that all three formalisms for computing epistasis are just versions of the walsh - hadamard transform , with terms selected as appropriate for the choice of a single reference sequence or limitations on the order of epistatic terms considered . from a computational point of view , it is interesting to note that regression using the hadamard transform makes matrix inversion unnecessary ( compare with eq . ) . 
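as a small numerical sketch of this truncated regression (python/numpy, hypothetical names; a sketch rather than the paper's implementation), the design matrix is built from its kronecker recursion, columns of order higher than p are dropped, and the coefficients are obtained by ordinary least squares, which is equivalent to solving the normal equations when the retained columns are linearly independent:

```python
import numpy as np

def x_matrix(n):
    """regression design matrix, recursion X_n = [[X_{n-1}, 0], [X_{n-1}, X_{n-1}]]."""
    x = np.array([[1]])
    for _ in range(n):
        x = np.kron(np.array([[1, 0], [1, 1]]), x)
    return x

def fit_epistasis(y, n, p):
    """least-squares estimate of epistatic coefficients up to order p from the
    complete phenotype vector y (length 2**n)."""
    x = x_matrix(n)
    order = np.array([bin(j).count('1') for j in range(2 ** n)])
    keep = order <= p                      # columns retained in the truncated model
    beta, *_ = np.linalg.lstsq(x[:, keep].astype(float), y, rcond=None)
    return keep, beta

n = 3
y = np.array([0.0, 0.3, -0.1, 1.2, 0.5, 0.4, 0.6, 2.0])
keep, beta = fit_epistasis(y, n, p=2)      # first- and second-order model
residual = y - x_matrix(n)[:, keep] @ beta
print(beta)       # 7 coefficients: intercept, three first-order, three second-order
print(residual)   # the part the truncated model cannot explain (third-order signal)
```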
to illustrate the different analyses of epistasis , we consider a small case study of three spatially proximal mutations that define a switch in ligand specificity in psd95-pdz3 , a member of the pdz family of protein interaction modules ( fig .[ fig : pdz]a ) .two mutations are in psd95-pdz3 ( g330 t and h372a ) , and one mutation in its cognate ligand peptide ( t-2f ) .the phenotype is the binding affinity , , and the absence of epistasis implies additivity in the corresponding free energy , expressed as in kcal mol .binding affinities for this system are from ref . , and given in figure [ fig : pdz]b .these quantitative phenotypes are then transformed to epistatic terms using eq.[eq : omegahadamard]-[eq : regresshadamard ] ( table 1 ) .a number of simple mathematical relationships are evident in the data .first , regression is carried out only to the second - order and therefore the third - order epistatic term for this analysis does not exist ( or , equivalently , is set to zero if the epistatic vector is defined to be of full length ) .second , there are some equalities .the regression terms at the highest order ( second , in this case ) are equal to the corresponding terms for the averaged epistasis .this is because sets columns corresponding to orders higher than the regression order to zero , leaving rows corresponding to the highest regression order with only one non - zero element , on the diagonal . for these rowsthe entries in the epistasis operators and are equal .another more trivial equality is the highest - order term for the mutant - cycle and averaged epistasis formalisms ; there is only one contribution for the highest order , and therefore no backgrounds to average over .values in m for all eight combinations of two amino acids at the three mutable positions.[fig : pdz ] ] 9.5pt depth4pt width0ptrr|rrrrr & & & & & + & & & & & + & & & & & + & & & & & + & & & & & + & & & & & + & & & & & + & & & & & + & & & & & + & & & & & + & & & & & + & & & & & & + + [ table : pdz ] the data also illustrate the key properties of the different formalisms .the g330 t , h372a , and t-2f mutations represent a collectively cooperative set of perturbations , as indicated by a significant third - order epistatic term by both mutant cycle and background averaged definitions ( kcal mol ) .but the three formalisms differ in the energetic value of the lower order epistatic terms .for example , g330 t is essentially neutral for wild - type ligand binding but shows a dramatic gain in affinity in the context of the t-2f ligand ; thus , a large second - order epistatic term by the biochemical definition ( kcal mol ) .however , the coupling between g330 t and t-2f is nearly negligible in the background of h372a ; as a consequence , the background averaged second - order epistasis term is smaller ( kcal mol ) .similarly , both biochemical and regression formalisms assign a large first - order effect to the t-2f ( 1 * * ) and h372a ( * 1 * ) single mutations , while the corresponding background - averaged terms are nearly insignificant . 
for example , the free energy effect of mutating h372a ( ) is kcal mol in the wild - type background , but is kcal mol in the background of the t-2f ligand mutation - a nearly complete reversal of the effect of this mutation depending on context .thus with background averaging , the first order term for h372a ( ) is close to zero .this makes sense ; given the experiment described in figure 2 , the h372a mutation should not be thought of as a general determinant of ligand affinity .instead it is a conditional determinant , with an effect that depends on the identity of the amino acid at the position of the ligand .note that the degree of averaging depends on the number of mutated sites , and thus the interpretation of mutational effects will depend on the scale of the experimental study .these examples show that background averaging has the effect of `` correcting '' mutational effects for the existence of higher - order epistatic interactions . without background averaging ,the effect of a mutation ( at any order ) idiosyncratically depends on a particular reference genotype and will fail to account for higher order epistasis which modulates the observed mutational effect . thus , background averaging provides a measure of the effects of mutation that represents its general value over many related systems , and more appropriately represents the cooperative unit within which the mutation operates . the analytical expressions in eq.[eq : omegahadamard]-[eq : regresshadamard ] involves the measurement of phenotypes ( ) for all combinatorial mutants , a fact that exposes two fundamental problems .first , it is only practical when is small . in such cases ( e.g figure 2 , ), the data can be combinatorially complete permitting a full analysis - the local and global structure of epistasis , possible evolutionary trajectories , and adaptive trade - offs .but for the typical size of protein domains ( ) , the combinatorial complexity of mutations precludes the collection of complete datasets .second , even if it were possible , the sampling of all genotypes is not desired ; indeed , the majority of systems in such an ensemble are unlikely to be functional and and averages over them are not meaningful with regard to learning the epistatic structure of native systems .how then can we apply these epistasis formalisms in practice , especially with regard to background averaging ? to develop general principles , we begin with two obvious approaches that lead to well - defined alternative expressions for averaged epistasis .first , consider the case in which the data are only `` locally complete '' ; that is , we have all possible mutants up to a certain order .we can then define a measure that is intermediate between epistasis with a single reference genotype and epistasis with full background - averaging , which we will refer to as the _ partial _ background - averaged epistasis .for example , for three positions ( ) with data complete only up to order ( ) , the partial background - averaged effect of the first position ( rightmost subscript ) , is calculated as . compared to the full background - averaged epistasis , the partial averages just leaves out the last term , , which represents the unavailable phenotype of the triple mutant .more generally , we can define this measure of epistasis as another special case of the hadamard transform : where designates the element - wise product . 
is again a diagonal weighting vector , now given by where is the epistatic order associated with row as defined earlier , and .note that because mutants of order higher than are considered absent in the dataset .the matrix simply serves to multiply by zero the terms in the hadamard matrix that include orders higher than .interestingly , the matrices display a self - similar hierarchical pattern ( fig .[ fig : sierpinski ] ) and are related to so - called sierpinski triangles ( see ref ) .this permits a recursive definition in both and for the product , which we will designate as : \ \boldsymbol{f}_{\!_{n-1,p } } \ \ \ & \boldsymbol{f}_{\!_{n-1,p-1}}\ \\[0.2em ] \\boldsymbol{f}_{\!_{n-1,p-1 } } & -\boldsymbol{f}_{\!_{n-1,p-1}}\ \end{pmatrix * } \label{eq : partbackgiter}\ ] ] + with for , and is a matrix of zeros , except for a 1 in the upper left corner .this analysis assumes that data are complete up to the order . if not , analytical schemes for background - averaged epistasis such as eqs[eq : partbackg]-[eq : partbackgiter ] are not obvious .a second analytically tractable case for incomplete data arises in regression , where the idea is to estimate epistatic terms up to a specified order from available data .this involves solving a set of equations similar to the normal equations : where is an matrix constructed from the by identity matrix by deleting the rows corresponding to the unavailable phenotypic data , and , with defined as above . in order for this system of equations to be solvable, a necessary constraint is that ; that is , the number of data points available should be larger than or equal to the number of regression parameters .in addition , the data must be such that it is possible to uniquely solve for all epistatic terms in the regression .for example , if two mutations always co - occur in the data , it is obviously impossible to calculate their independent effects . in such cases , the number of solutions to eq . is infinite ( is not invertible ). introduced to calculate the partial background - averaged epistasis , for . ( a ) for when data for mutants up to second - order is available and ( b ) for when only first - order mutants are available .both matrices are self - similar , which allows their generation for arbitrary order , and are related to the so - called logical sierpiski triangle .for example , where is the anti - diagonal identity matrix and is the sierpinski matrix ( i.e. multigrade and in boolean logic ) for three inputs.[fig : sierpinski ] ] in practice , even with `` high - throughput '' assays , we can only hope to measure a tiny fraction of all combinatorial mutants due to the vast number of possibilities .in this situation , the problem of inferring epistasis by regression may be further constrained by imposing additional conditions , termed regularization .for example , kernel ridge regression and lasso include a weighted norm of the regression coefficients in the minimization procedure .regularization comes with its own set of caveats , but its application is , unlike the approaches in eq . and , not conditional on specific structure of the data or depth of coverage .however , none of these approaches directly addresses the problem of optimally defining appropriate ensembles of genotypes over which averages should be taken . 
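as a toy illustration of this regularized route (written here in the regression parametrization for simplicity, rather than the background-averaged operator used in the constrained optimization discussed next; python/numpy, hypothetical names, not from the paper), the sketch below infers a sparse set of epistatic coefficients from an incomplete sample of genotypes by minimizing a least-squares error plus an l1 penalty with a simple iterative soft-thresholding (ista) loop:

```python
import numpy as np

def x_matrix(n):
    """full design matrix of epistatic terms (same recursion as in the
    regression section: X_n = [[X_{n-1}, 0], [X_{n-1}, X_{n-1}]])."""
    x = np.array([[1]])
    for _ in range(n):
        x = np.kron(np.array([[1, 0], [1, 1]]), x)
    return x

def sparse_epistasis(measured_idx, y_measured, n, lam=0.05, steps=5000):
    """l1-regularized (lasso-style) estimate of epistatic coefficients from an
    incomplete phenotype dataset, via iterative soft-thresholding (ista)."""
    a = x_matrix(n)[measured_idx, :].astype(float)   # rows of measured genotypes
    beta = np.zeros(a.shape[1])
    t = 1.0 / np.linalg.norm(a, 2) ** 2              # step size from spectral norm
    for _ in range(steps):
        grad = a.T @ (a @ beta - y_measured)
        z = beta - t * grad
        beta = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)   # soft threshold
    return beta

# toy ground truth: only a few of the 2**n possible epistatic terms are non-zero
n = 4
rng = np.random.default_rng(1)
beta_true = np.zeros(2 ** n)
beta_true[[0, 1, 6, 15]] = [1.0, 0.5, -0.8, 0.7]
y_full = x_matrix(n) @ beta_true
measured = rng.choice(2 ** n, size=12, replace=False)   # incomplete sampling
beta_hat = sparse_epistasis(measured, y_full[measured], n)
print(np.round(beta_hat, 2))
```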
in principle , the idea should be to perform background averaging over a representative ensemble of systems that show invariance of functional properties of interest .how can we generally find such ensembles without the impractical notion of exhaustive functional analysis of the space of possible genotypes ?one idea is motivated by the empirical finding of _ sparsity _ in the pattern of key epistatic interactions within biological systems .indeed , evidence suggests that in proteins , the architecture is to have a small subset of amino acids that shows strong and distributed epistatic couplings surrounded by a majority of amino acids that are more weakly and locally coupled .more generally , the notion of a sparse core of strong couplings surrounded by a milieu of weak couplings has been argued to be a signature of evolvable systems .if it can be more generally verified , the notion of sparsity might be exploited to define relevant strategies for optimally learning the epistatic structure of natural systems .one approach is to minimize the so - called -norm ( the sum of absolute values of the epistatic coefficients ) in a constrained optimization , which has the effect of producing many epistatic coefficients with zero or very small values , while projecting onto background - averaged epistatic terms : this procedure is akin to the technique of compressed sensing , a powerful approach used in signal processing to recognize the low - dimensional space in which the relevant features of a high - dimensional dataset occur given the assumption of sparsity of these features .the application of this theory for mapping biological epistasis has to our knowledge not been reported before , but might be explored with focused high - order mutational analyses in specific well - chosen model systems .the necessary technologies for such experiments are now becoming available , and should help define practical data collection strategies for studying epistasis more generally .it is worth pointing out that other approaches that use ensemble - averaged information to understand biological systems have been developed and experimentally tested .for example , statistical methods that operate on multiple sequence alignments of proteins calculate quantities related to epistasis that are averaged over the space of homologous sequences .importantly , these approaches have been successful at revealing a hierarchy of cooperative interactions between amino acids that range from local structural contacts in protein tertiary structures to more global functional modes . for defining good experimental approaches to epistasis, a conceptual advance may come from an attempt to formally map the constrained optimization problem described in eq.[eq : cs ] to the kind of ensemble averaging that underlies the statistical coevolution approaches .a fundamental problem is to define the epistatic structure of biological systems , which holds the key to understanding how phenotype arises from genotype .here we provide a unified mathematical foundation for epistasis in which different approaches are found to be versions of a single mathematical formalism - the weighted walsh - hadamard transform . 
in the most general case, this transform corresponds to an averaging of mutant effects over all possible genetic backgrounds at every order order of epistasis .this approach corrects the effect of mutations at every level of epistasis for higher order terms .importantly , it represents the degree to which the effects of mutations are transferable from one model system to another , the usual purpose of most mutagenesis studies . in contrast , the thermodynamic mutant cycle ( commonly used in biochemistry ) constitutes a special case of taking a single reference genotype and thus no averaging .this analysis represents the effects of mutations that are specific to a particular model system .regression ( commonly used in evolutionary biology ) is an attempt to capture features of a system with epistatic terms up to a defined lower order , often to bound the extent of epistasis or to predict the effects of higher - order combinations of mutations .the similarity of the regression operator to that of the mutant cycle ( see eq . ) indicates that this approach is also focused around the local mutational environment of a chosen reference sequence .in general , background averaging would seem to provide the most informative representation of the effect of a mutation .however , with the exception of very small - scale studies focused in the local mutational environment of extant systems , it is both impractical and logically flawed to collect combinatorially complete mutation datasets for any system .thus , the essence of the problem is to define optimal strategies for collecting data on ensembles of genotypes that is sufficient for discovering the biologically relevant epistatic structure of systems .the notion of sparsity in epistasis provides a general basis for developing such a strategy , and it will be interesting to test practical applications of this concept ( e.g. eq.[eq : cs ] ) in future work . defining optimal data collection strategies will not only provide practical tools to probe specific systems , but might guide us to principles underlying the `` design '' of these systems through the process of evolution , and help the rational design of new systems .the mathematical relations discussed here provide a necessary foundation to advance such understanding .we thank e. toprak , k. reynolds , and members of the ranganathan laboratory for critically reading the manuscript .fjp gratefully acknowledges funding by the helen hay whitney foundation sponsored by the howard hughes medical institute .rr acknowledges support from the robert a. welch foundation ( i-1366 , r.r . 
) and the green center for systems biology .10 wells ja ( 1990 ) additivity of mutational effects in proteins ._ biochemistry _ * 29*:8509 .phillips pc ( 2008 ) epistasis the essential role of gene interactions in the structure and evolution of genetic systems ._ nat rev genet _ * 9*:855 .costanzo m , baryshnikova a , myers cl , andrews b , boone c ( 2011 ) charting the genetic interaction map of a cell ._ curr opin biotechnol _ * 22*:66 .lehner b ( 2011 ) molecular mechanisms of epistasis within and between genes ._ trends genet _ * 27*:323 .dowell rd , ryan o , jansen a , cheung d , agarwala s , et al .( 2010 ) genotype to phenotype : a complex problem ._ science _ * 328*:469 .lunzer m , golding gb , dean am ( 2010 ) pervasive cryptic epistasis in molecular evolution ._ plos genet _ * 6*:e1001162 .kryazhimskiy s , dushoff j , bazykin ga , plotkin jb ( 2011 ) prevalence of epistasis in the evolution of influenza a surface proteins ._ plos genet _ * 7*:e1001301 .bateson w ( 1908 ) facts limiting the theory of heredity ._ science _ * 26*:647 .fisher ra ( 1918 ) the correlation between relatives on the supposition of mendelian inheritance ._ trans r soc edinb _ * 52*:399 .horovitz a ( 1987 ) non - additivity in protein - protein interactions ._ j mol biol _ * 196*:733. cordes mh , davidson ar , sauer rt ( 1996 ) sequence space , folding and protein design ._ curr opin struct biol _ * 6*:3 .horovitz a , bochkareva es , yifrach o , girshovich as ( 1994 ) prediction of an inter - residue interaction in the chaperonin groel from multiple sequence alignment is confirmed by double - mutant - cycle analysis ._ j mol biol _ * 238*:133 .dill ka ( 1997 ) additivity principles in biochemistry ._ j biol chem _ * 272*:701 .jain rk , ranganathan r ( 2004 ) local complexity of amino acid interactions in a protein core ._ proc natl acad sci usa _ * 101*:111 .lander es , schork nj ( 1994 ) genetic dissection of complex traits ._ science _ * 265*:2037 .pettersson m , besnier f , siegel pb , carlborg o ( 2011 ) replication and explorations of high - order epistasis using a large advanced intercross line pedigree ._ plos genet _ * 7*:e1002180 .kouyos rd , leventhal ge , hinkley t , haddad m , whitcomb jm , et al .( 2012 ) exploring the complexity of the hiv-1 fitness landscape. _ plos genet _ * 8*:e1002551 .brem rb , kruglyak l ( 2005 ) the landscape of genetic complexity across 5,700 gene expression traits in yeast ._ proc natl acad sci usa _ * 102*:1572 .ehrenreich i m , torabi n , jia y , kent j , martis s , et al .( 2010 ) dissection of genetically complex traits with extremely large pools of yeast segregants ._ nature _ * 464*:1039. burch cl , chao l ( 2004 ) epistasis and its relationship to canalization in the rna virus phi 6 ._ genetics _ * 167*:559 .weinreich dm , watson ra , chao l ( 2005 ) perspective : sign epistasis and genetic constraint on evolutionary trajectories ._ evolution _ * 59*:1165. 
poelwijk fj , kiviet dj , weinreich dm , tans sj ( 2007 ) empirical fitness landscapes reveal accessible evolutionary paths ._ nature _ * 445*:383 .poelwijk fj , tnase - nicola s , kiviet dj , tans sj ( 2011 ) reciprocal sign epistasis is a necessary condition for multi - peaked fitness landscapes ._ j theor biol _ * 272*:141 .lozovsky er , chookajorn t , brown km , imwong m , shaw pj , et al .( 2009 ) stepwise acquisition of pyrimethamine resistance in the malaria parasite ._ proc natl acad sci usa _ * 106*:12025 .maharjan rp , ferenci t ( 2013 ) epistatic interactions determine the mutational pathways and coexistence of lineages in clonal escherichia coli populations ._ evolution _ * 67*:2762. draghi ja , plotkin jb ( 2013 ) selection biases the prevalence and type of epistasis along adaptive trajectories. _ evolution _ * 67*:3120 .vandersluis b , bellay j , musso g , costanzo m , papp b , et al .( 2010 ) genetic interactions reveal the evolutionary trajectories of duplicate genes ._ mol syst biol _* 6*:429 .natarajan c , inoguchi n , weber re , fago a , moriyama h , et al . ( 2013 ) epistasis among adaptive mutations in deer mouse hemoglobin ._ science _ * 340*:1324 .gong li , suchard ma , bloom jd ( 2013 ) stability - mediated epistasis constrains the evolution of an influenza protein ._ elife _ * 2*:e00631 .ashworth a , lord c , reis - filho j ( 2011 ) genetic interactions in cancer progression and treatment ._ cell _ * 145*:30 .chakravarti a , clark ag , mootha vk ( 2013 ) distilling pathophysiology from complex disease genetics . _ cell _ * 155*:21 .leiserson mdm , eldridge jv , ramachandran s , raphael bj ( 2013 ) network analysis of gwas data ._ curr opin genet dev _ * 23*:602 .hinkley t , martins j , chappey c , haddad m , stawiski e , et al .( 2011 ) a systems analysis of mutational effects in hiv-1 protease and reverse transcriptase ._ nat genet _* 43*:487 .combarros o , cortina - borja m , smith ad , lehmann dj ( 2009 ) epistasis in sporadic alzheimer s disease ._ neurobiol aging _ * 30*:1333 .fitzgerald jb , schoeberl b , nielsen ub , sorger pk ( 2006 ) systems biology and combination therapy in the quest for clinical efficacy ._ nat chem biol_ * 2*:458 .fu w , oconnor td , akey jm ( 2013 ) genetic architecture of quantitative traits and complex diseases ._ curr opin genet dev _ * 23*:678 .wang x , fu aq , mcnerney me , white kp ( 2014 ) widespread genetic epistasis among cancer genes ._ nature comm _* 5*:4828 chen j , stites we ( 2001 ) higher - order packing interactions in triple and quadruple mutants of staphylococcal nuclease ._ biochemistry _ * 40*:14012 .frisch c , schreiber g , johnson cm , fersht ar ( 1997 ) thermodynamics of the interaction of barnase and barstar : changes in free energy versus changes in enthalpy on mutation ._ j mol biol _ * 267*:696 .jiang c , hwang yt , wang g , randell jcw , coen dm , et al . ( 2007 ) herpes simplex virus mutants with multiple substitutions affecting dna binding of ul42 are impaired for viral replication and dna synthesis . _j virol _ * 81*:12077 .natarajan m , lin km , hsueh rc , sternweis pc , ranganathan r ( 2006 ) a global analysis of cross - talk in a mammalian cellular signalling network ._ nat cell biol _ * 8*:571 .weinreich dm , delaney nf , depristo ma , hartl dl ( 2006 ) darwinian evolution can follow only very few mutational paths to fitter proteins ._ science _ * 312*:111 .aita t , iwakura m , husimi y ( 2001 ) a cross - section of the fitness landscape of dihydrofolate reductase . 
_ protein eng _ * 14*:633 .kinney jb , murugan a , callan cg , cox ec ( 2010 ) using deep sequencing to characterize the biophysical mechanism of a transcriptional regulatory sequence . _proc natl acad sci usa _ * 107*:9158 .weinreich dm , lan y , wylie cs , heckendorn rb ( 2013 ) should evolutionary geneticists worry about higher - order epistasis ?_ curr opin genet dev _ * 23*:700 .szendro ig , schenk mf , franke j , krug j ,de visser jagm ( 2013 ) quantitative analyses of empirical fitness landscapes ._ j stat mech _ * 2013*:p01005 .horovitz a , fersht ar ( 1990 ) strategy for analysing the co - operativity of intramolecular interactions in peptides and proteins ._ j mol biol _ * 214*:613 .horovitz a ( 1996 ) double - mutant cycles : a powerful tool for analyzing protein structure and function . _ fold des _ * 1*:r121 .horovitz a , fersht ar ( 1990 ) co - operative interactions during protein folding ._ j mol biol _ * 224*:733 .goldberg d ( 1989 ) genetic algorithms and walsh functions : part i , a gentle introduction . _ complex systems _ * 3*:129 .beer t ( 1981 ) walsh transforms . _ american journal of physics _ * 49*:466 . stoffer ds ( 1991 - 06 - 01 ) walsh - fourier analysis and its statistical applications . _ journal of the american statistical association _ * 86*:461 .mclaughlin rn , poelwijk fj , raman a , gosal ws , ranganathan r ( 2012 ) the spatial architecture of protein function and adaptation ._ nature _ * 491*:138 .hartl dl ( 2014 ) what can we learn from fitness landscapes ?_ curr opin microbiol _ * 21*:51 .sierpinski w ( 1915 ) sur une courbe do nt tout point est un point de ramification . _ cr hebd acad science paris _ * 160*:302 .hastie t , tibshirani r , friendman j ( 2009 ) the elements of statistical learning , 2 ed .new york : springer publishing .springer series in statistics .tibshirani r ( 2011 ) regression shrinkage and selection via the lasso : a retrospective ._ j roy stat soc : ser b _ * 73*:273 .otwinowski j , plotkin jb ( 2014 ) inferring fitness landscapes by regression produces biased estimates of epistasis ._ proc natl acad sci usa _ * 111*:e2301 .sadovsky y , yifrach o ( 2007 ) principles underlying energetic coupling along an allosteric communication trajectory of a voltage - activated k+ channel ._ proc natl acad sci usa _ * 104*:19813 .shi l , kay le ( 2014 ) tracing an allosteric pathway regulating the activity of the hslv protease ._ proc natl acad sci usa _ * 111*:2140 .ruschak am , kay le ( 2012 ) proteasome allostery as a population shift between interchanging conformers . _ proc natl acad sci usa _ * 109*:e3454 .luque i , leavitt sa , freire e ( 2002 ) the linkage between protein folding and functional cooperativity : two sides of the same coin ? ._ ann rev biophys biomol struct _ * 31*:235 .halabi n , rivoire o , leibler s , ranganathan r ( 2009 ) protein sectors : evolutionary units of three - dimensional structure. _ cell _ * 138 * : 774 .kirschner m , gerhart j ( 1998 ) evolvability ._ proc natl acad sci usa _ * 95*:8420 .cands ej , wakin mb ( 2008 ) an introduction to compressive sampling ._ ieee signal proc mag _ * 25*:21 .zaremba sm , gregoret lm ( 1999 ) context - dependence of amino acid residue pairing in antiparallel -sheets ._ j mol biol _ * 291*:463 .shepherd tr , hard rl , murray am , pei d , fuentes ej ( 2011 ) distinct ligand specificity of the tiam1 and tiam2 pdz domains ._ biochemistry _ * 50*:1296 .yifrach o , mackinnon r ( 2002 ) energetics of pore opening in a voltage - gated k+ channel. 
_ cell _ * 111*:231 .hidalgo p , mackinnon r ( 1995 ) revealing the architecture of a k + channel pore through mutant cycles with a peptide inhibitor ._ science _ * 268*:307 .carter pj , winter g , wilkinson aj , fersht ar ( 1984 ) the use of double mutants to detect structural changes in the active site of the tyrosyl - trna synthetase ( bacillus stearothermophilus ) ._ cell _ * 38*:835 .ranganathan r , lewis jh , mackinnon r ( 1996 ) spatial localization of the k+ channel selectivity filter by mutant cycle - based structure analysis ._ neuron _ * 16*:131 .hinkley t , martins j , chappey c , haddad m , stawiski e , whitcomb jm , petropoulos cj , bonhoeffer s ( 2011 ) a systems analysis of mutational effects in hiv-1 protease and reverse transcriptase ._ nat genetics _ * 43*:487 .marks ds , hopf t , sander c ( 2012 ) protein structure prediction from sequence variation .biotechnol _ * 11*:1072. morcos f , pagnani a , lunt b , bertolino a , marks ds , sander c , zecchina r , onuchic jn , hwa t , weigt m ( 2011 ) direct - coupling analysis of residue coevolution captures native contacts across many protein families ._ proc natl acad sci _ * 108*:e1293 .skerker jm , perchuk bs , siryaporn a , lubin ea , ashenberg o , goulian m , laub mt ( 2008 ) rewiring the specificity of two - component signal transduction systems ._ cell _ * 133*:1043 .+ [ [ a .- expressing - the - biochemical - epistasis - operator - boldsymbolg - as - a - hadamard - transform ] ] a. expressing the biochemical epistasis operator as a hadamard transform : ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^+ ( eq . ) + + + first we write the different matrix operators in their recursive form , and then proceed by induction .we have for the recursive form of : + + \ \boldsymbol{x}_n & \ 0 \ \ \ , \\[0.2em ] \ \boldsymbol{x}_n & \boldsymbol{x}_n\ , \end{pmatrix*}\ \ \mathrm{with}\ \ \ \boldsymbol{x}_0 = 1 ] , + + which we can solve by gauss - jordan elimination : + + \boldsymbol{x}_n & \boldsymbol{x}_n\ & \ 0 & \ \ \mathbb{i}\\ \end{array } \right ) \rightarrow \left ( \begin{array}{rr|rr } \mathbb{i } & \ \ 0 \ \ & \boldsymbol{x}^{^{-1}}_{\ n } & 0 \ \ \ \\[0.2em ] \mathbb{i } & \mathbb{i } \ \ & 0 \ \ \ & \boldsymbol{x}^{^{-1}}_{\ n}\\ \end{array } \right)\rightarrow \left ( \begin{array}{rr|rr } \mathbb{i } & \ \ 0 \ \ & \boldsymbol{x}^{^{-1}}_{\ n } & 0 \ \ \ \\[0.2em ] 0 & \mathbb{i } \ \ & -\boldsymbol{x}^{^{-1}}_{\ n } & \boldsymbol{x}^{^{-1}}_{\ n}\\ \end{array } \right ) ] + + + which is identical to the recursive form for : + + \ \boldsymbol{g}_n & \ \ 0 \ \ \ \\[0.2em ] \-\boldsymbol{g}_n & \boldsymbol{g}_n\ \end{pmatrix*} ] + + + and \ \frac{1}{2 } \boldsymbol{v}_{\!n } & 0 \ \ \ \\[0.2em ] 0 \ \ \ & -\boldsymbol{v}_{\!n}\ \end{pmatrix*}\ \ \mathrm{with}\ \ \ \boldsymbol{v}_{\!0 } = 1 ] + + + \ \frac{1}{2 } \boldsymbol{v}_{\!n } & 0 \ \ \ \\[0.2em ] 0 \ \ \ & -\boldsymbol{v}_{\!n } \end{pmatrix * } \begin{pmatrix*}[r ] \ \ 2 \boldsymbol{x}_{_{n}}^t \boldsymbol{h}_n & 0 \ \ \ \ \ \\[0.2em ] \ \\boldsymbol{x}_{_{n}}^t \boldsymbol{h}_n\ & -\boldsymbol{x}_{_{n}}^t \boldsymbol{h}_n\ \end{pmatrix * } = \begin{pmatrix*}[r ] \ \frac{1}{2 } \boldsymbol{v}_{\!n } & 0 \ \ \ \\[0.2em ] 0 \ \ \ & -\boldsymbol{v}_{\!n}\ \end{pmatrix * } \begin{pmatrix*}[r ] \\boldsymbol{x}_{_{n}}^t & \boldsymbol{x}_{_{n}}^t\\ \\[0.2em ] \ 0 \ \ & \boldsymbol{x}_{_{n}}^t\ \ \end{pmatrix * } \begin{pmatrix*}[r ] \ \ \boldsymbol{h}_n & \boldsymbol{h}_n\ \\[0.2em ] \ \ \boldsymbol{h}_n & 
-\boldsymbol{h}_n\ \end{pmatrix*}$ ] + + + + + + + + ( eq . ) + + + we will use and as defined in the main text .+ + for the right - hand side we can write + + + + where we used , which can be proven straightforwardly by induction using the generative function for .+ rearranging and using , we obtain + + + + we thus have to prove + + + + left - multiplying both sides by ( mind the hat is only on the first operator ) and right - multiplying by we are left to prove + + + + left - multiplication by yields + + + + which , again using the relation we proved in section a above , can be rewritten as + + + + or + + + + given the commutative properties of diagonal matrices and . + + this equality indicates that setting certain rows of to zero ( left - hand side ) is the same as setting both those rows and corresponding columns of to zero ( right - hand side ) .this is obviously not true for every set of rows and columns , and needs more discussion .+ + we can prove this iteratively starting at regression to order and going down to lower order .if regression is done to order , this means that only the last row of is set to zero , and by construction of ( see above ) the last column only has a non - zero element in this row .this means that in this case the equality is correct .another way to see this is looking at matrix for in its explicit representation in the main text ( here being identified with ) and noting that the highest order epistatic term is the only one that receives a contribution from the highest order ( ) mutant term .+ + next , if regression is performed instead to order , not only the last row of is set to zero , but also the rows corresponding to order mutants .analogously to above , the only terms in the vector that receive contributions from the order mutants are the ones in the rows corresponding to order of epistasis ( since the row corresponding to order is already set to zero ) , meaning that their corresponding column again has only one non - zero element . hence setting these rows to zerowill directly set their corresponding column to zero , and the equality holds .+ + and so forth for regression to order , etc .+ + qed + +
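as a small numerical complement to the recursive constructions used in the appendix above ( this check is an illustration added here , not part of the original derivation ) , one can verify directly that the operator generated by the recursion g_{n+1 } = [ [ g_n , 0 ] , [ -g_n , g_n ] ] is the inverse of x_{n+1 } = [ [ x_n , 0 ] , [ x_n , x_n ] ] , and that the hadamard matrices generated by h_{n+1 } = [ [ h_n , h_n ] , [ h_n , -h_n ] ] satisfy h_n h_n = 2^n i :

```python
import numpy as np

def X(n):
    # X_0 = 1, X_{k+1} = [[X_k, 0], [X_k, X_k]]
    M = np.array([[1.0]])
    for _ in range(n):
        Z = np.zeros_like(M)
        M = np.block([[M, Z], [M, M]])
    return M

def G(n):
    # G_0 = 1, G_{k+1} = [[G_k, 0], [-G_k, G_k]]
    M = np.array([[1.0]])
    for _ in range(n):
        Z = np.zeros_like(M)
        M = np.block([[M, Z], [-M, M]])
    return M

def hadamard(n):
    # H_0 = 1, H_{k+1} = [[H_k, H_k], [H_k, -H_k]]
    M = np.array([[1.0]])
    for _ in range(n):
        M = np.block([[M, M], [M, -M]])
    return M

n = 4
assert np.allclose(G(n) @ X(n), np.eye(2 ** n))   # G_n is indeed X_n^{-1}
print("G_n X_n = I verified for n =", n)
print("H_n H_n = 2^n I:",
      np.allclose(hadamard(n) @ hadamard(n), (2 ** n) * np.eye(2 ** n)))
```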
defining the extent of epistasis , the non - independence of the effects of mutations , is essential for understanding the relationship of genotype , phenotype , and fitness in biological systems . the applications cover many areas of biological research , including biochemistry , genomics , protein and systems engineering , medicine , and evolutionary biology . however , the quantitative definitions of epistasis vary among fields , and its analysis beyond just pairwise effects remains obscure in general . here , we show that different definitions of epistasis are versions of a single mathematical formalism - the weighted walsh - hadamard transform . we argue that one of the definitions , the background - averaged epistasis , is the most informative when the goal is to uncover the general epistatic structure of a biological system , a description that can be rather different from the local epistatic structure of specific model systems . key issues are the choice of effective ensembles for averaging and how to practically contend with the vast combinatorial complexity of mutations . in this regard , we discuss possible approaches for optimally learning the epistatic structure of biological systems . there has been much recent interest in the prevalence of epistasis in the relationships between genotype , phenotype , and fitness in biological systems . epistasis here is defined as the non - independence ( or context - dependence ) of the effect of a mutation , which is a generalization of bateson s original definition of epistasis as a genetic interaction in which a mutation masks the effect of variation at another locus . it is also in line with fisher s broader definition of epistacy . epistasis limits our ability to predict the function of a system that harbors several mutations given knowledge of the effects of those mutations taken independently , and makes these relationships increasingly complex . from an evolutionary perspective , the presence of epistatic interactions may limit or entirely preclude trajectories of single - mutation steps towards peaks in the fitness landscape . with regard to human health , epistasis complicates our understanding of the origin and progression of disease . thus , interest in the extent of epistatic interactions in biological systems has originated from the fields of protein biochemistry , protein engineering , medicine , systems biology , and evolutionary biology alike . originally epistasis was considered in the context of two genes , but we can define it more broadly as the non - independence of mutational effects in the genome , whether the effects are within , between , or even outside protein coding regions ( e.g. in regulatory regions ) . the perturbations may go beyond point mutagenesis , but we limit the discussion here for clarity of presentation . importantly , the definition of epistasis can be extended beyond pairwise effects to comprise a hierarchy of 3-way , 4-way , and higher - order terms that represent the complete theoretical description of epistasis between the parts that make up a biological system . how can we quantitatively assign an epistatic interaction given experimentally determined effects of mutations ? since epistasis is deviation from independence , it is crucial to first explicitly state the null hypothesis : asserting what exactly it means to have _ independent _ contributions of mutations . this by itself can be non - trivial .
in some cases the phenotype is directly related to a thermodynamic state variable , and the issue is then straightforward : independence implies additivity in the state variable . for example , for equilibrium binding reactions between two proteins , independence means additivity in the free energy of binding , such that the energetic effect of a double mutation is the sum of the energetic effects of each single mutation taken independently . however , in general , many phenotypes can not be so directly linked to a thermodynamic state variable , and quantification of epistasis needs to be accompanied by a proper rationale for the choice of null hypothesis . in what follows we will assume this step has already been carried out and we will equate independence with additivity of mutational effects . epistasis between two mutations is then defined as the degree to which the effect of both mutations together differs from the sum of the effects of the single mutations . in this paper , we describe three theoretical frameworks that have been proposed for characterizing the epistasis between components of biological systems ; these frameworks originate in different fields and use seemingly different calculations to describe the non - independence of mutations . we show that these formalisms are different manifestations of a common mathematical principle , a finding that explains their conceptual similarities and distinctions . each of these formalisms has its value depending on depth of coverage and nature of sampling in the experimental data , and the purpose of the analysis . in the end , the fundamental issue is to develop practical approaches for optimally learning the epistatic structure of biological systems in the face of explosive combinatorial complexity of possible epistatic interactions between mutations . demonstrating the mathematical relationships between the different frameworks for analyzing epistasis is a first key step in this process .
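as a small worked illustration of the pairwise definition above ( all numbers are hypothetical and chosen only for this example ) , the thermodynamic double - mutant cycle compares the measured effect of the double mutant with the additive expectation built from the two single mutants :

```python
# Pairwise epistasis with additivity as the null model:
# w_AB = f_AB - f_A - f_B + f_0 (the double-mutant cycle).
f_0  = 0.0    # reference genotype (e.g. a binding free energy, arbitrary units)
f_A  = -1.2   # single mutant A
f_B  = -0.8   # single mutant B
f_AB = -2.5   # double mutant

additive_expectation = f_0 + (f_A - f_0) + (f_B - f_0)
epistasis_AB = f_AB - additive_expectation      # = f_AB - f_A - f_B + f_0
print("additive expectation:", additive_expectation)    # -2.0
print("pairwise epistasis  :", round(epistasis_AB, 3))  # -0.5, i.e. non-additive
```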
collisionless magnetized plasmas are described by the kinetic ( vlasov - maxwell ) equations and are characterized by high dimensionality , anisotropy and a wide variety of spatial and temporal scales , thus requiring the use of sophisticated numerical techniques to capture accurately their rich non - linear behavior . in general terms , there are three broad classes of methods devoted to the numerical solution of the kinetic equations . the particle - in - cell ( pic ) methods sample the distribution function in the six - dimensional phase space via macro - particles that evolve according to newton s equations in the self - consistent electromagnetic field . pic is the most widely used method in the plasma physics community because of its robustness and relative simplicity . the well known statistical noise associated with the macro - particles implies that pic is really effective only for problems where a low signal - to - noise ratio is acceptable . the eulerian - vlasov methods discretize the phase space with a six dimensional computational mesh . as such they are immune to statistical noise but they require significant computational resources and this is perhaps why their application has been mostly limited to problems with reduced dimensionality . for reference , storing a field in double precision on a mesh with cells requires about terabytes of memory . a third class of methods , called transform methods , is spectral and is based on an expansion of the velocity part of the distribution function in basis functions ( typically fourier or hermite ) , leading to a truncated set of moment equations for the expansion coefficients . similarly to eulerian - vlasov methods , transform methods might be resource - intensive if the convergence of the expansion series is slow . in recent years there seems to be a renewed interest in hermite - based spectral methods . some reasons for this can be attributed to the advances in high performance computing and to the importance of simpler , reduced kinetic models in elucidating aspects of the complex dynamics of magnetized plasmas . another reason is that ( some form of ) the hermite basis can unify fluid ( macroscopic ) and kinetic ( microscopic ) behavior into one framework . thus , it naturally enables the fluid / kinetic coupling that might be the ( inevitable ) solution to the multiscale problem of computational plasma physics and is a very active area of research . the hermite basis is defined by the hermite polynomials with a maxwellian weight and is therefore closely linked to maxwellian distribution functions . two kinds of basis have been proposed in the literature ( differing in regard to the details of the maxwellian weight ) : symmetrically- and asymmetrically - weighted . the former features -stability but conservation laws for total mass , momentum and energy are achieved only in limited cases ( i.e. , they depend on the parity of the total number of hermite modes , on the presence of a velocity shift in the hermite basis , ...
) .the latter features exact conservation laws in the discrete and the connection between the low - order moments and typical fluid moments , but -stability is not guaranteed .earlier works pointed out that a proper choice of the velocity shift and the scaling of the maxwellian weight ( free parameters of the method ) is important to improve the convergence properties of the series .indeed , the optimization of the hermite basis is a crucial aspect of the method , which however at this point does not yet have a definitive solution .one could of course envision a different spectral approach which considers a full polynomial expansion without any weight or free parameter . while any connection with maxwellians is lost , such expansion could be of interest in presence of strong non - maxwellian behavior and eliminates the optimization problem .the legendre polynomials are a natural candidate in this case , because of their orthogonality properties .they are normally applied in some preferred coordinate system ( for instance spherical geometry ) to expose quantities like angles that are defined on a bounded domain .indeed , legendre expansions are very popular in neutron transport and some application in kinetic plasma physics can be found for electron transport described by the boltzmann equation .surprisingly , however , we have not found any example in the context of collisionless kinetic theory and in particular for the vlasov - poisson system .the main contribution of the present paper is the formulation , development and successful testing of a spectral method for the one dimensional vlasov - poisson model of a plasma based on a legendre polynomial expansion of the velocity part of the plasma distribution function .the expansion is applied directly in the velocity domain , which is assumed to be finite .it is shown that the legendre expansion features many of the properties of the asymmetrically - weighted hermite expansion : the structure of the equations is similar , the low - order moments correspond to the typical moments of a fluid , and conservation laws for the total mass , momentum and energy ( in weak form , as defined in sec .4 ) can be proven .it also features properties of the symmetrically - weighted hermite expansion : -stability is also achieved by introducing a penalty on the boundary conditions in weak form .this strategy is inspired by the simultaneous approximation strategy ( sat ) technique .the paper is organized as follows . in sec .[ sec : vlasov ] the vlasov - poisson equations for a plasma are introduced together with the spectral discretization : the velocity part of the distribution function is expanded in legendre polynomials while the spatial part is expressed in terms of a fourier series .the time discretization is handled via a second - order accurate crank - nicolson scheme . in sec .[ sec : l2:stability ] the sat technique is used to enforce the -stability of the numerical scheme . in sec .[ sec : conservation : laws ] conservation laws for the total mass , momentum and energy are derived theoretically .numerical experiments on standard benchmark tests ( i.e. 
, landau damping , two - stream instabilities and ion acoustic wave ) are performed in sec .[ sec : numerical ] , proving numerically the stability of the method and the validity of the conservation laws .conclusions are drawn in sec .[ sec : conclusions ] .we consider the vlasov - poisson model for a collisionless plasma of electrons ( labeled `` '' ) and singly charged ions ( `` '' ) evolving under the action of the self - consistent electric field .the behavior of each particle species with mass and charge is described at any time in the phase space domain \times[{{v}_a},{{v}_b}] ] .we also assume that an initial solution is given at the initial time . [remark : zero : velocity : bcs ] if the initial solution has a compact support in the phase space domain \times[{{v}_a},{{v}_b}] ] by ( * ? ? ?* chapters 8 , 22 ) : and normalized as follows we remap the legendre polynomials onto the velocity range ] to ] .then , we use the recursion formulas - and the orthogonality relation and we obtain the following system of partial differential equations for the legendre coefficients : }_{{{v}_a}}^{{{v}_b } } \right ) = 0 \quad\textrm{for~}n\geq 0 , \label{eq : legendre : system}\end{aligned}\ ] ] where conventionally , \displaystyle\frac{{{v}_b}-{{v}_a}}{2}\,\frac{n}{\sqrt{(2n+1)(2n-1 ) } } & \textrm{for~}n\geq 1 , \end{cases } \label{eq : legendre : sigma : def}\end{aligned}\ ] ] and }_{{{v}_a}}^{{{v}_b } } = \frac{{f^s}(x,{{v}_b},t)\phi_n({{v}_b})-{f^s}(x,{{v}_a},t)\phi_n({{v}_a})}{{{v}_b}-{{v}_a } } , \label{eq : delta_v : def}\end{aligned}\ ] ] is the boundary term resulting from an integration by parts of the integral term that involves the velocity derivative . the derivation of the coefficients and can be found in appendix a. if the distribution has compact support in , the homogeneous boundary conditions at and are imposed in weak form by assuming that }_{{{v}_a}}^{{{v}_b}} ] , i.e. , , and the vector containing the values of the legendre shape functions evaluated at .it holds that .system can be rewritten in the non - conservative vector form : }_{{{v}_a}}^{{{v}_b } } \right ) = 0 , \label{eq : vlasov : legendre : non - conservation}\end{aligned}\ ] ] where ( { \mathbbm{b}}{\mathbf{c}^s})_{n } & = \sum_{i=0}^{n-1}\sigma_{n , i}{{c}^s}_{i } , \label{eq : legendre : matb : def}\end{aligned}\ ] ] and }_{{{v}_a}}^{{{v}_b}}\big)_{n}={\delta_v\big [ { f^s}\phi_{n } \big]}_{{{v}_a}}^{{{v}_b}} ] ) as follows where each coefficient is a complex function of time .the fourier basis functions satisfy the orthogonality relation substituting in and using , we derive the system for the coefficients , which reads as : }_{{{v}_a}}^{{{v}_b } } \right ) \right]_{k } = 0 , \label{eq : legendre : fourier : system}\end{aligned}\ ] ] for and , and where denotes the convolution integral and {k} ] and and the coefficients of their fourier expansion on the basis functions , then the -th fourier mode of the convolution product is given by = \sum_{k'=-{n_f}}^{{n_f}}g_{k ' } h_{k - k'} ] , i.e. 
, .system can be rewritten in the vector form : }_{{{v}_a}}^{{{v}_b } } \right)\right]_k = 0 , \label{eq : legendre : fourier : compact}\end{aligned}\ ] ] where ( { \mathbbm{b}}{\mathbf{c}^{s}_k})_{n } & = \sum_{i=0}^{n-1}\sigma_{n , i}{{c}^s}_{i , k } \label{eq : legendre : fourier : matb : def}\\[0.5em ] \big({\delta_v\big [ { f^s}{\bm\phi } \big]}_{{{v}_a}}^{{{v}_b}}\big)_{n } & = { \delta_v\big [ { f^s}\phi_{n } \big]}_{{{v}_a}}^{{{v}_b } } = \frac{1}{{{v}_b}-{{v}_a}}\big({f^s}(x,{{v}_b},t)\phi_{n}({{v}_b } ) - { f^s}(x,{{v}_a},t)\phi_{n}({{v}_a})\big).\end{aligned}\ ] ] note the vector expressions : {k } & = \sum_{k'=-{n_f}}^{{n_f}}{e}_{k'}\big[{\mathbbm{b}}{\mathbf{c}}\big]_{k - k'}\\[0.5em ] \big[{e}{\star}{\delta_v\big [ { f^s}{\bm\phi } \big]}_{{{v}_a}}^{{{v}_b}}\big]_{k } & = \sum_{k'=-{n_f}}^{{n_f}}{e}_{k ' } \sum_{n'=0}^{{n_l}-1}{{c}^s}_{n',k - k'}(t)\big(\phi_{n'}({{v}_b}){\bm\phi({{v}_b})}-\phi_{n'}({{v}_a}){\bm\phi({{v}_a})}\big)\end{aligned}\ ] ] and for the -th legendre components : {n , k } & = \sum_{k'=-{n_f}}^{{n_f}}{e}_{k'}\sum_{i=0}^{n-1}\sigma_{n , i}{{c}^s}_{i , k - k'}(t)\\[0.5em ] \big[{e}{\star}{\delta_v\big [ { f^s}{\bm\phi } \big]}_{{{v}_a}}^{{{v}_b}}\big]_{n , k } & = \sum_{k'=-{n_f}}^{{n_f}}{e}_{k ' } \sum_{n'=0}^{{n_l}-1}{{c}^s}_{n',k - k'}(t)\big(\phi_{n'}({{v}_b})\phi_n({{v}_b})-\phi_{n'}({{v}_a})\phi_n({{v}_a})\big).\end{aligned}\ ] ] consider the current density of species given by .we apply the legendre decomposition , the fourier decomposition and we use to obtain the legendre - fourier representation of the total current density : taking the derivative in time of and using with , we obtain the fourier representation of ampere s equation : & = -({{v}_b}-{{v}_a})l\,\sum_{s\in\{e , i\}}{q^s}\left ( \left(\frac{2\pi{i}}{l}k\right)\big(\sigma_1{{c}^s}_{1,k}+\overline{\sigma}{{c}^s}_{0,k}\big ) + \frac{{q^s}}{{m^s}}\left[{e}{\star}{\delta_v\big [ { f^s}\phi_0 \big]}_{{{v}_a}}^{{{v}_b}}\right]_{k } \right ) .\label{eq : legendre : fourier : ampere:00}\end{aligned}\ ] ] for and using definition we reformulate ampere s equation as where }_{{{v}_a}}^{{{v}_b}}\right]_k .\label{eq : legendre : fouriere : ampere : boundary : condition}\end{aligned}\ ] ] for , the fourier decomposition of ampere s equation gives the consistency condition , the zero - th fourier mode of the total current density . to control the filamentation effect , we modify system by introducing the artificial collisional operator in the right - hand side : }_{{{v}_a}}^{{{v}_b } } \right)\right]_k = \mathcal{c}({\mathbf{c}^{s}_k } ) .\label{eq : legendre : fourier : compact : collisional}\end{aligned}\ ] ] consider the diagonal matrix whose -th diagonal entry is given by : and where is an artificial diffusion coefficient whose value can be different from species to species .then , the collisional term is given by .the effect of this operator is to damp the highest - modes of the legendre expansion , thus reducing the filamentation and avoiding recurrence effects .this operator is designed to be zero for , in order not to have any influence on the conservation properties of the method .let be the time step , the time index , and each quantity superscripted by as taken at time , e.g. , , , etc .we advance the legendre - fourier coefficients in time by the crank - nicolson time marching scheme . 
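before writing out the fully discrete scheme , it may be useful to check numerically the two properties of the rescaled legendre basis on which the semi - discrete system relies . the following sketch ( the velocity interval and the number of modes are arbitrary illustrative choices ) uses the normalization of appendix a , phi_n ( v ) = sqrt ( 2n+1 ) l_n ( s ( v ) ) , and verifies the orthogonality relation and the three - term recursion with the coefficients sigma_n and the shift ( v_a + v_b ) / 2 defined above :

```python
import numpy as np
from numpy.polynomial import legendre as npleg

va, vb, NL = -6.0, 6.0, 8          # illustrative velocity interval and modes

def phi(n, v):
    s = (2.0 * v - va - vb) / (vb - va)     # map [va, vb] -> [-1, 1]
    c = np.zeros(n + 1); c[n] = 1.0
    return np.sqrt(2 * n + 1) * npleg.legval(s, c)

def sigma(n):
    # sigma_n = (vb - va)/2 * n / sqrt((2n+1)(2n-1)) for n >= 1
    return (vb - va) / 2.0 * n / np.sqrt((2 * n + 1) * (2 * n - 1))

# Gauss-Legendre quadrature on [va, vb], exact for the polynomials involved.
x, w = npleg.leggauss(64)
v = 0.5 * (vb - va) * x + 0.5 * (va + vb)
w = 0.5 * (vb - va) * w

# orthogonality: (1/(vb-va)) * int phi_m phi_n dv = delta_mn
M = np.array([[np.sum(w * phi(m, v) * phi(n, v)) for n in range(NL)]
              for m in range(NL)]) / (vb - va)
assert np.allclose(M, np.eye(NL), atol=1e-12)

# recursion: v*phi_n = sigma_{n+1} phi_{n+1} + sbar phi_n + sigma_n phi_{n-1}
sbar = 0.5 * (va + vb)
for n in range(1, NL - 1):
    lhs = v * phi(n, v)
    rhs = sigma(n + 1) * phi(n + 1, v) + sbar * phi(n, v) + sigma(n) * phi(n - 1, v)
    assert np.allclose(lhs, rhs, atol=1e-10)

print("orthonormality and recursion verified for NL =", NL)
```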
omitting the superscript `` '' in and to ease the notation , vlasov equation for each species andany legendre - fourier coefficient becomes : & \qquad -\frac{{q^s}}{4{m^s } } \left [ \big({e^{\tau+1}}+{e^{\tau}}\big){\star}\left ( \sum_{i=0}^{n-1}\sigma_{n , i}\big({c^{\tau+1}}_{i}+{c^{\tau}}_{i}\big ) -\gamma^s{\delta_v\big [ \big({f^{\tau+1}}+{f^{\tau}}\big)\phi_n \big]}_{{{v}_a}}^{{{v}_b } } \right ) \right]_{k } = \mathcal{c}\left(\frac{1}{2}{c^{\tau+1}}_{n , k } + \frac{1}{2}{c^{\tau}}_{n , k}\right ) .\label{eq : legendre : fourier : system : time}\end{aligned}\ ] ] equation provides an implicit and non - linear system for the legendre - fourier coefficients as each electric field mode for depends on the unknown coefficient that must be evaluated at the same time . in practice, we apply a jacobian - free newton - krylov solver to search for the minimizer of the residual given by . consider the difference of the fourier representation of poisson s equation at times and by setting in , recalling that and noting that the collisional term does not give any contribution, we find that & -\frac{{q^s}}{4{m^s } } \left [ \big({e^{\tau+1}}+{e^{\tau}}\big){\star}\gamma^s{\delta_v\big [ \big({f^{\tau+1}}+{f^{\tau}}\big ) \big]}_{{{v}_a}}^{{{v}_b } } \right]_{k}=0 .\label{eq : ampere : discrete:05}\end{aligned}\ ] ] using in yields the discrete analog of ampere s equation that is consistent with the full crank - nicolson based discretization of the vlasov - poisson system : where we have introduced the explicit symbol }_{{{v}_a}}^{{{v}_b } } \right]_{k } \qquad \textrm{for~}k\neq 0 \label{eq : full : discrete : ampere : bc}\end{aligned}\ ] ] to denote the boundary terms related to the behavior of all the distribution functions of the plasma species at the boundaries of the velocity domain . in section [ sec : conservation : laws ] we make use of and to characterize the conservation of the total energy .the distribution function solving the vlasov equation satisfies the so - called -stability property for . to see this , just multiply equation by and integrate over the phase space domain \times[{{v}_a},{{v}_b}] ] denotes the zero - th fourier mode of the argument inside the brackets , and denotes the conjugate transpose .all terms in are real numbers .the stability of the legendre - fourier method depends on the behavior of the distribution function at the boundaries and .this result is stated by the following theorem .[ theo : l2stab ] the coefficients of the legendre - fourier decomposition have the property that : }_{{{v}_a}}^{{{v}_b}}\right]_{0 } -2\sum_{n=0}^{{n_l}-1}{\left|d^{s}_{n}\right|}\sum_{k=-{n_f}}^{{n_f}}{\left|{{c}^s}_{n , k}(t)\right|}^2 .\label{eq : l2stab } \end{aligned}\ ] ] _ proof_. multiply from the left by , the conjugate transpose of , to obtain : }_{{{v}_a}}^{{{v}_b } } \right)\right]_k= ( { \mathbf{c}^{s}_k})^{\dagger}{\mathbbm{d}}^{s}_{\nu}{\mathbf{c}^{s}_k}. \ ] ] add to this equation its conjugate transpose .matrix is real and symmetric , is real and the spatial term cancels out from the equation . summing over the fourier index we end up with : }_{{{v}_a}}^{{{v}_b } } \right)\ , \right]_{k } + 2\textsf{re}\big(({\mathbf{c}^{s}_k})^{\dagger}{\mathbbm{d}}^{s}_{\nu}{\mathbf{c}^{s}_k}\big ) .\label{eq : legendre : fourier : l2stab : thm:10}\end{aligned}\ ] ] since is a diagonal matrix with negative real entries and is a real quantity it holds that the assertion of the theorem follows by applying the result of lemma [ lemma : l2stab : useful ] . 
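the semi - norm appearing in the theorem is evaluated entirely in coefficient space . as a sanity check of the underlying identity ( proved in appendix b ) , the following sketch compares a brute - force quadrature of the squared norm of a distribution function built from random legendre - fourier coefficients ( the random data , the interval sizes , and the real - valuedness enforced by conjugate symmetry are assumptions of this illustration ) with the coefficient sum ( v_b - v_a ) l sum_{n , k } | c_{n , k } |^2 :

```python
import numpy as np
from numpy.polynomial import legendre as npleg

va, vb, L = -6.0, 6.0, 4.0 * np.pi
NL, NF = 6, 4
rng = np.random.default_rng(1)
ks = np.arange(-NF, NF + 1)

# random coefficients with c_{n,-k} = conj(c_{n,k}) so that f is real valued
c = rng.normal(size=(NL, ks.size)) + 1j * rng.normal(size=(NL, ks.size))
for n in range(NL):
    for i, k in enumerate(ks):
        if k > 0:
            c[n, np.where(ks == -k)[0][0]] = np.conj(c[n, i])
        elif k == 0:
            c[n, i] = c[n, i].real

def phi(n, v):
    s = (2.0 * v - va - vb) / (vb - va)
    coeff = np.zeros(n + 1); coeff[n] = 1.0
    return np.sqrt(2 * n + 1) * npleg.legval(s, coeff)

def f(x, v):
    out = np.zeros(np.shape(x), dtype=complex)
    for n in range(NL):
        pn = phi(n, v)
        for i, k in enumerate(ks):
            out += c[n, i] * np.exp(2j * np.pi * k * x / L) * pn
    return out.real

# brute-force quadrature: uniform (periodic) rule in x, Gauss-Legendre in v
xg = np.linspace(0.0, L, 512, endpoint=False)
vg, wv = npleg.leggauss(64)
vg = 0.5 * (vb - va) * vg + 0.5 * (va + vb)
wv = 0.5 * (vb - va) * wv
X, V = np.meshgrid(xg, vg, indexing="ij")
lhs = np.sum(f(X, V) ** 2 * wv[None, :]) * (L / xg.size)

rhs = (vb - va) * L * np.sum(np.abs(c) ** 2)
print(lhs, rhs)    # the two values agree to quadrature/roundoff accuracy
```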
if at the velocity boundaries ( see remark [ remark : zero : velocity : bcs ] ) , then at any instant the time derivative in is negative due to the collisional term and we have that note that in absence of the collisional term ( take in ) the time derivative is exactly zero and is constant . we refer to this property as _ the stability _ because the orthogonality of the legendre and fourier basis functions implies that ( see appendix b ) , from which we immediately find the stability of the distribution function .however , and can be different than zero and in general they are non zero since the legendre polynomials are globally defined on the whole domain and are non zero at the velocity boundaries .if the right - hand side of becomes positive , the collisional term may be not enough to control the other term in the right - hand side of .therefore , the method may become unstable and the time integration of is arrested . according to we can enforce the stability of the method by introducing the boundary conditions in weak form in the right - hand side of system through the penalty coefficient . to this end , we modify system as follows : }_{{{v}_a}}^{{{v}_b } } \right)\right]_k = { \mathbbm{d}}_{\nu}{\mathbf{c}^{s}_k}. \label{eq : legendre : fourier : modified}\end{aligned}\ ] ] by suitably choosing the value of the penalty we minimize or set equal to zero the term in the right - hand side of that may cause the numerical instability .this result is presented in the following theorem .[ theo : l2-stab : modified ] the modified form of the legendre - fourier method for solving the vlasov - poisson system is -stable for and any .the coefficients of the legendre - fourier decomposition have the property that : _proof_. repeating the proof of theorem [ theo : l2stab ] yields : }_{{{v}_a}}^{{{v}_b } } \right)\ , \right]_{k}\nonumber\\[0.5em ] & \ , -2\sum_{n=0}^{{n_l}-1}{\left|d^{s}_{n}\right|}\sum_{k=-{n_f}}^{{n_f}}{\left|{{c}^s}_{n , k}(t)\right|}^2 .\label{eq : stability : time : variation}\end{aligned}\ ] ] due to , the first term of the right - hand side of is zero ( when the coefficient that multiplies is non zero ) by setting {k } } { \sum_{k=-{n_f}}^{{n_f}}({\mathbf{c}^{s}_k})^{\dagger}\left[\,e{\star}{\delta_v\big [ { f^s}{\bm\phi } \big]}_{{{v}_a}}^{{{v}_b}}\right]_{k } } = \frac{1}{2}.\end{aligned}\ ] ] the assertion of the theorem is then proved by noting that any choice of in the collisional term makes the time derivative non - positive .the coefficient in affects also the first three moment equations and eventually perturbs the conservation properties of the vlasov - poisson system .we may overcome this issue by considering the modified system }_{{{v}_a}}^{{{v}_b } } \right)\right]_k = { \mathbbm{d}}_{\nu}{\mathbf{c}^{s}_k } , \label{eq : legendre : fourier : modified : b } \end{aligned}\ ] ] where the penalty is introduced through the diagonal matrix and does not change the conservation properties of the method .the penalty can be determined at any time cycle by the formula : {k } } { \sum_{n=3}^{{n_l}-1}\sum_{k=-{n_f}}^{{n_f}}\overline{c}^{s}_{nk}\left[\,e{\star}{\delta_v\big [ { f^s}{\bm\phi } \big]}_{{{v}_a}}^{{{v}_b}}\right]_{n , k } } , \end{aligned}\ ] ] where is the conjugate of , and the result of theorem [ theo : l2-stab : modified ] still holds .alternatively , we can apply to all the legendre modes except the first three , i.e. , for .this option is simpler to implement and computationally less expensive , but may not fix the stability issue of the method completely . 
instead of equation , it holds that }_{{{v}_a}}^{{{v}_b}}\right]_{n , k } -2\sum_{n=0}^{{n_l}-1}{\left|d^{s}_{n}\right|}\sum_{k=-{n_f}}^{{n_f}}{\left|{{c}^s}_{n , k}(t)\right|}^2 , \label{eq : stability : time : variation : gr2 } \end{aligned}\ ] ] and the first term in the right - hand side may still be a source of instability if it has the wrong sign . nonetheless ,if the dissipative effect of the collisional term in is strong enough the scheme will remain stable .we investigated the effectiveness of this latter strategy in the numerical experiments of section [ sec : numerical ] .the vlasov - poisson model in the continuum setting is characterized by the exact conservation of mass , momentum and energy .the spectral discretization that is proposed in the previous section reproduces these conservation laws in the discrete setting .it turns out that the discrete analogs of the conservation of mass , momentum and energy depends on the variation in time of the legendre - fourier coefficients for and , i.e. , , , and .the contribution of the second term in is zero when and the transformed equation for the coefficients ( including the stabilization factor of section [ sec : l2:stability ] ) becomes : }_{{{v}_a}}^{{{v}_b}}\right)\right]_{0}. \label{eq : legendre : fourier : reduced}\end{aligned}\ ] ] in particular , we have : }_{{{v}_a}}^{{{v}_b}}\right]_{0 } , \label{eq : legendre : fourier : dert_c00}\\[1.em ] \textrm{for~}n=1 , k=0:\qquad & \frac{d{{c}^s}_{1,0}}{{dt } } = \frac{{q^s}}{{m^s}}\left[{e}{\star}\big(\sigma_{1,0}{{c}^s}_{0}-\gamma^s{\delta_v\big [ { f^s}\phi_1 \big]}_{{{v}_a}}^{{{v}_b}}\big)\right]_{0 } , \label{eq : legendre : fourier : dert_c10}\\[1.em ] \textrm{for~}n=2 , k=0:\qquad & \frac{d{{c}^s}_{2,0}}{{dt } } = \frac{{q^s}}{{m^s}}\left[{e}{\star}\big(\sigma_{2,1}{{c}^s}_{1}-\gamma^s{\delta_v\big [ { f^s}\phi_2 \big]}_{{{v}_a}}^{{{v}_b}}\big)\right]_{0}. \label{eq : legendre : fourier : dert_c20}\end{aligned}\ ] ] to derive the conservation laws for mass , momentum and energy for the fully discrete approximation , we note that the analog of equation for becomes : & -\gamma^s{\delta_v\big [ \big({f^s}(\cdot,{v},t^{\tau+1})+{f^s}(\cdot,{v},t^{\tau})\big)\phi_n \big]}_{{{v}_a}}^{{{v}_b } } \bigg)\bigg]_{0 } \label{eq : legendre : fourier : reduced : full}\end{aligned}\ ] ] as the collisional term is zero , and where and are the electric field and the distribution function , respectively , as functions of for a given value of and .by setting in we can also derive the analog of equations - for the fully discrete approximation , which we omit . in the following developments we consider the boundary term : }_{{{v}_a}}^{{{v}_b } } \right]_{0}.\end{aligned}\ ] ] note that when for . using the legendre - fourier expansion of and the orthogonality relations and , the total mass of the species given by by taking the time derivative of eq and using it follows that }_{{{v}_a}}^{{{v}_b}}\right]_{0}. \label{eq : legendre : fourier : m3}\end{aligned}\ ] ] the conservation of the total mass per species includes a boundary term that is zero if ( see remark [ remark : zero : velocity : bcs ] ) . from and using with , we derive the conservation of the total mass per species in the full discrete model : equation states that the mass variation between times and is balanced by the boundary term in the right - hand side .the total momentum of the plasma is defined as where is the total momentum of the species . 
introducing the legendre - fourier expansion of , using the integrated recursive formula , orthogonality relations and , and mass equation yield the time derivative of equation and using it follows that & = \sum_{s\in\{e , i\ } } { q^s}({{v}_b}-{{v}_a})l\,\left ( \sigma_{1}\sigma_{1,0}\,\big[{e}{\star}{{c}^s}_0\big]_{0 } - \gamma^s \left[\,{e}{\star}\left ( \overline{\sigma}{\delta_v\big [ { f^s}\phi_0 \big]}_{{{v}_a}}^{{{v}_b } } + \sigma_{1}{\delta_v\big [ { f^s}\phi_1 \big]}_{{{v}_a}}^{{{v}_b}}\right)\,\right]_{0 } \right ) .\label{eq : legendre : fourier : p3}\end{aligned}\ ] ] using the poisson equation the first term in the last right - hand side is zero because the summation on the convolution index is on a symmetric range of indices and the argument of the summation is anti - symmetric : {0 } & = ( { { v}_b}-{{v}_a})l\,\sum_{k=-{n_f}}^{{n_f}}{e}_{k}(t)\,\sum_{s\in\{e , i\}}{q^s}{{c}^s}_{0,-k}(t ) \nonumber\\[0.5em ] & = -2\pi{i}\epsilon_0\,\sum_{k=-{n_f}}^{{n_f}}k{e}_{k}(t){e}_{-k}(t ) = 0.\end{aligned}\ ] ] consequently , equation becomes : }_{{{v}_a}}^{{{v}_b}}+\sigma_{1}{\delta_v\big [ { f^s}\phi_1 \big]}_{{{v}_a}}^{{{v}_b } } \right ) \ , \right]_{0}. \label{eq : legendre : fourier : p4}\end{aligned}\ ] ] the conservation of the total momentum includes a boundary term that is zero if ( see remark [ remark : zero : velocity : bcs ] ) . from and using with , we derive the variation of momentum per species between times and : & = \frac{{q^s}}{4}({{v}_b}-{{v}_a})l\,\sigma_{1}\delta t\ , \bigg[\big({e}(\cdot , t^{\tau+1})+{e}(\cdot , t^{\tau})\big){\star}\big(\sigma_{1,0}\big({{c}^s}_{0}(t^{\tau+1})+{{c}^s}_{0}(t^{\tau})\big)\big)\bigg]_{0 } \nonumber\\[0.5em ] & \quad+\delta t\left ( \sigma_{1}\mathcal{b}^{s;\tau,\tau+1}_{1,0 } + \overline{\sigma}\mathcal{b}^{s;\tau,\tau+1}_{0,0 } \right).\end{aligned}\ ] ] furthermore , summing over all the species , taking the zero - th fourier mode of the convolution product , and using the poisson equation yield : {0 } \nonumber\\ & \qquad\qquad= \sum_{k=-{n_f}}^{{n_f}}\big(e(\cdot , t^{\tau+1})+e(\cdot , t^{\tau})\big)_{k } \sum_{s\in\{e , i\}}{q^s}\,({{v}_b}-{{v}_a})l\,\big({{c}^s}_{0}(t^{\tau+1})+{{c}^s}_{0}(t^{\tau})\big)_{-k } \nonumber\\ & \qquad\qquad= -2\pii\epsilon_0 \sum_{k=-{n_f}}^{{n_f}}k \big(e(\cdot , t^{\tau+1})+e(\cdot , t^{\tau})\big)_{k } \big(e(\cdot , t^{\tau+1})+e(\cdot , t^{\tau})\big)_{-k } = 0.\end{aligned}\ ] ] therefore , in the full discrete model the _ conservation of the total momentum _ holds in the form : which states that the variation of the total momentum between times and is balanced by the boundary terms in the right - hand side of .the total energy of the plasma is defined as where and are the kinetic energy of the species and the potential energy at time , respectively . 
introducing the legendre - fourier expansion of and using the orthogonality relations and ,the kinetic energy of species is reformulated as : we take the derivative in time of and use - to obtain & = \frac{{q^s}}{2}({{v}_b}-{{v}_a})l\,\left[{e}{\star}\big ( \sigma_{1}\sigma_{2}\sigma_{2,1}\,{{c}^s}_{1 } + 2\overline{\sigma}\sigma_{1}\sigma_{1,0}\,{{c}^s}_{0 } \big)\right]_{0 } + \mathcal{b}^s_{kin } \label{eq : kinetic : energy:00}\end{aligned}\ ] ] where we introduced the `` kinetic '' boundary term per species : }_{{{v}_a}}^{{{v}_b } } + 2\sigma_1\overline{\sigma}{\delta_v\big [ { f^s}\phi_1 \big]}_{{{v}_a}}^{{{v}_b } } + ( \sigma_1 ^ 2+\sigma_0 ^ 2+\overline{\sigma}^2){\delta_v\big [ { f^s}\phi_0 \big]}_{{{v}_a}}^{{{v}_b } } \,\big)\right]_{0}.\end{aligned}\ ] ] as , and applying to , we obtain : {0 } + \mathcal{b}^{s}_{kin } = \big[{e}{\star}{j^s}\big]_{0 } + \mathcal{b}^{s}_{kin}. \label{eq : legendre : fourier : e3}\end{aligned}\ ] ] using , the orthogonality relation and the convolution notation , the potential energy of the electric field is given by : {0}. \label{eq : epot : fourier}\end{aligned}\ ] ] then , we take the time derivative of the equation above , use ampere s equation and note that =0 ] is zero to obtain : {0 } = -\bigg[{e}{\star}\bigg ( \sum_{s\in\{e , i\}}\big({j^s}+\gamma^sq^s\big ) + c_{a } \bigg ) \bigg]_{0 } = -\bigg[{e}{\star}\sum_{s\in\{e , i\}}{j^s}\bigg]_{0 } + \mathcal{b}_{pot}\end{aligned}\ ] ] where , after expanding the convolution product , we introduced the symbol for the `` potential '' boundary term , being the boundary term defined in ampere s equation . adding the total kinetic energy for all species and the potential energy gives : the conservation of the total energy includes a boundary term that is zero if ( see remark [ remark : zero : velocity : bcs ] ) . 
from, the variation of the kinetic energy between times and reads as : & \qquad + 2\sigma_{1}\overline{\sigma}\,\big({{c}^s}_{1,0}(t^{\tau+1 } ) - { { c}^s}_{1,0}(t^{\tau})\big ) + \big ( \sigma_{1}^2+\sigma_{0}^2+\overline{\sigma}^2 \big)\,\big({{c}^s}_{0,0}(t^{\tau+1 } ) - { { c}^s}_{0,0}(t^{\tau})\big ) \big).\end{aligned}\ ] ] using with yields : & + 2\sigma_{10}\sigma_{1}\overline{\sigma}\big({{c}^s}_{0}(t^{\tau+1})+{{c}^s}_{0}(t^{\tau})\big ) \big ) \bigg]_{0 } + \delta t\mathcal{b}^{s;\tau,\tau+1}_{kin},\end{aligned}\ ] ] where noting that , using the definition of the convolution product , the fourier decomposition of the electric field and the legendre coefficients , and the definition of the fourier coefficients of the current density given in yield : from , the variation of the potential energy between times and is given by : {0 } - \frac{\epsilon_0}{2 } \big[{e}(\cdot , t^{\tau}){\star}{e}(\cdot , t^{\tau})\big]_{0 } \nonumber\\[0.5em ] & = \frac{\epsilon_0}{2}\big [ \big({e}(\cdot , t^{\tau+1})-{e}(\cdot , t^{\tau})\big){\star}\big({e}(\cdot , t^{\tau+1})+{e}(\cdot,\tau)\big ) \big]_{0 } \nonumber\\[0.5em ] & = \frac{\epsilon_0}{2 } \sum_{k=-{n_f}}^{{n_f } } \big({e}^{\tau+1}_{k}-{e}^{\tau}_{k}\big)\ , \big({e}^{\tau+1}_{-k}+{e}^{\tau}_{-k}\big).\end{aligned}\ ] ] using the discrete analog of ampere s equation given by and yields : where finally , we add the kinetic energy terms for in and the potential energy to find the relation expressing the _ total energy conservation _ for the full discrete approximation : equation states that the variation of the total energy between times and is balanced by the proper combination of kinetic and potential boundary terms in the right - hand side and expresses the _ conservation of the total energy _ for the full discretization of the vlasov - poisson system .in this section we assess the computational performance of the legendre - fourier method by solving the landau damping , two - stream instability and ion acoustic wave problems .these test cases are classical problems in plasma physics and are routinely used to benchmark kinetic codes . in our numerical experiments , we are mainly interested in showing the conservation properties of the method , i.e. , the discrepancy between the initial value of mass , momentum and energy , and their value at successive instants in time during the simulation .we also investigate the stability of the method , i.e. , how the -norm of the distribution function defined as in changes during the time evolution of the system .the penalty is applied to all legendre modes except the first three and the stability of the legendre - fourier method is ensured by the artificial collisional term when .this strategy , which is discussed at the end of section [ sec : l2:stability ] , is very effective in providing a stable method with good conservation properties . in the two - stream instability problem, we also investigate the effect of applying penalty on all the moment equations on the conservation of the total energy . 
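in practice , these quantities can be evaluated directly from the low - order expansion coefficients by means of the moment relations of section 4 . the following sketch ( function and variable names are our own ; the species mass , the permittivity and the coefficient values are placeholders to be supplied by a solver ) collects the corresponding diagnostics for one species :

```python
import numpy as np

def sigma(n, va, vb):
    # sigma_n = (vb - va)/2 * n / sqrt((2n+1)(2n-1)), n >= 1
    return (vb - va) / 2.0 * n / np.sqrt((2 * n + 1) * (2 * n - 1))

def diagnostics(c00, c10, c20, E_k, m_s, va, vb, L, eps0=1.0):
    """Mass, momentum, kinetic and potential energy of one species, computed
    from the k=0 Fourier modes of its first three Legendre coefficients and
    from the Fourier modes of the electric field (eps0=1 assumes the
    normalized units of this section)."""
    s1, s2 = sigma(1, va, vb), sigma(2, va, vb)
    sbar = 0.5 * (va + vb)
    mass = (vb - va) * L * c00
    mom  = m_s * (vb - va) * L * (s1 * c10 + sbar * c00)
    kin  = 0.5 * m_s * (vb - va) * L * ((s1 ** 2 + sbar ** 2) * c00
                                        + 2.0 * s1 * sbar * c10
                                        + s2 * s1 * c20)
    pot  = 0.5 * eps0 * L * np.sum(np.abs(E_k) ** 2)
    return mass, mom, kin, pot

def relative_discrepancy(q_t, q_0):
    # discrepancy between the value at time t and the initial value
    return abs(q_t - q_0) / abs(q_0) if q_0 != 0 else abs(q_t - q_0)
```

the relative discrepancies plotted in the figures below are then obtained by comparing these quantities at successive times with their initial values .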
in the first two test problems ,the ions constitute a fixed background with density .we also introduce the following normalization : time is normalized on the electron plasma frequency ; position on the electron debye length ; velocity on the electron thermal velocity where is the boltzmann constant , the electron temperature and the electron mass ; the electric field on , where is the elementary charge ; species densities on a reference density ; and the species distribution function on .landau damping is a classical kinetic effect in warm plasmas , due to particles in resonance with an initial wave perturbation .this interaction leads to an exponential decay of the electric field perturbation .this problem is particularly challenging for kinetic codes because of the continuous filamentation in velocity space , which is a characteristic feature of the collision - less plasma described by the vlasov equation .filamentation is controlled by the artificial collisional operator introduced in .the initial distribution of the electrons is given by , \label{finit}\ ] ] with and .the legendre - fourier expansion of eq .( [ finit ] ) implies that the modes , and are excited at . in this test case , the final simulation time is with time step , legendre modes and fourier modes .the domain of integration is set to , .figure [ fig : ld:00 ] shows the first mode of the electric field versus time for two different values of the stabilization parameter ( ) and the collisional frequency ( ) . for all cases the damping rate is in good agreement with the landau damping theory , which predicts .one can also notice that for all cases the simulation is stable , regardless of the value of , and that does not really affect much the dynamics .as expected , when the system exhibits recursive behavior .the collisional operator with is however sufficient to remove the recurrence effect and stabilizes around for .figure [ fig : ld:02 ] ( left ) shows the time evolution of , which is normalized to its value at time , for the same cases of fig . [fig : ld:00 ] . according to ,this quantity is computed as when , the norm of is constant on the scale of the plot and the boundary term in has a rather negligible effect . instead , when , the norm of decreases with an almost constant slope since the collisional term in is dominant .figure [ fig : ld:02 ] ( right ) shows that theorem [ theo : l2stab ] [ equation ] is indeed satisfied numerically . 
in fig .[ fig : ld:02 ] ( right ) the time derivative is computed by central finite differences .figure [ fig : ld:04 ] shows the time evolution of the maximum value of the distribution function at the boundary of the system : , with the same format of fig .[ fig : ld:00 ] .one can notice the beneficial effect of the collisional operator : when there is a sharp increase of around , while for it holds that approximately throughout the whole simulation .finally , the legendre - fourier method presented in this work provides exact conservation laws .the relative discrepancy of the mass , defined as , and the discrepancy of momentum , defined as , are _ exactly zero _ at any discrete time step in our double precision implementation and are therefore not shown .the relative discrepancy of the total energy , defined as is shown in figure [ fig : ld:06 ] and is smaller than .the two - stream instability is excited when the distribution function of a species consists of two populations of particles streaming in opposite directions with a large enough relative drift velocity .we initialize the electron distribution function with two counter - streaming maxwellians with equal temperature : \,\left[1+\varepsilon\cos\big(\frac{2\pi}{l}kx\big)\right ] \label{eq : two - stream : init - sol}\end{aligned}\ ] ] where is the drift velocity . for this test case, we have chosen the following parameters : , , , .we integrate the vlasov - poisson system by using the time step , legendre modes , and fourier modes .the domain of integration in phase space is set to , for all the calculations shown in figures [ fig:2s:00]-[fig:2s:05 ] , while in figure [ fig : two - stream : fs : phase - space ] we show the distribution function of electrons that is computed for three different combinations of and velocity range ] for and .in particular , the plots on top are obtained by using and integrating over the velocity range ] ; the plots on bottom are obtained by using and the velocity range ] by .the two following recursion formulas hold : { v}^2\phi_{n}({v } ) & = \sigma_{n+2}\sigma_{n+1}\,\phi_{n+2 } ( { v } ) + 2\sigma_{n+1}\overline{\sigma}\,\phi_{n+1 } ( { v } ) + \big ( \sigma_{n+1}^2+\sigma_{n}^2+\overline{\sigma}^2 \big)\,\phi_{n } ( { v } ) \nonumber\\[0.5em]&\quad + 2\sigma_{n}\overline{\sigma}\,\phi_{n-1 } ( { v } ) + \sigma_{n}\sigma_{n-1}\,\phi_{n-2 } ( { v } ) , \label{eq : recursion : formula : b } \end{aligned}\ ] ] where and are defined in . to prove ,note that the left - hand side term and the two right - hand side terms of the recursion formula for can be rewritten as ( n+1 ) l_{n+1}(s ) & = \frac{n+1}{\sqrt{2(n+1)+1 } } \sqrt{2(n+1)+1}\,l_{n+1 } ( s({v } ) ) = \frac{n+1}{\sqrt{2(n+1)+1}}\phi_{n+1}({v } ) \\[0.5em ] n l_{n-1}(s ) & = \frac{n}{\sqrt{2(n-1)+1 } } \sqrt{2(n-1)+1}\,l_{n-1 } ( s({v } ) ) = \frac{n}{\sqrt{2(n-1)+1}}\,\phi_{n-1 } ( { v})\end{aligned}\ ] ] collecting together and rearranging the three terms yields : + \frac{{{v}_a}+{{v}_b}}{2}\,\phi_{n}({v}),\end{aligned}\ ] ] which has the same form as where and can be readily determined by comparison .to prove just consider and apply twice .moreover , a straightforward calculation yields and in particular we have that . 
integrating , , and using - give other three useful recurrence formulas : \int_{{{v}_a}}^{{{v}_b}}{v}\phi_{n}({v})\d{v}&= ( { { v}_b}-{{v}_a})\big ( \sigma_{1}\delta_{n,1 } + \overline{\sigma}\delta_{n,0 } \big ) \label{eq : intg : vf}\\[0.5em ] \int_{{{v}_a}}^{{{v}_b}}{v}^2\phi_{n}({v})\d{v}&= ( { { v}_b}-{{v}_a})\big ( + \big ( \sigma_{1}^2+\overline{\sigma}^2 \big)\,\delta_{n,0 } + 2\sigma_{1}\overline{\sigma}\,\delta_{n,1 } + \sigma_{2}\sigma_{1}\,\delta_{n,2 } \big ) .\label{eq : intg : v2f } \end{aligned}\ ] ] all these three relations follows by noting that and applying the orthogonality property .relation is obvious . to derive and we also note that we can remove the terms containing and since .moreover , we can substitute in the -coefficients of , , and , and note that the effect of and is respectively equivalent to and .finally , we note that .relation follows from & = ( { { v}_b}-{{v}_a})\big ( \sigma_{n+1}\delta_{n+1,0 } + \sigma_{n}\delta_{n-1,0 } + \overline{\sigma}\delta_{n,0 } \big).\end{aligned}\ ] ] relation follows from & = ( { { v}_b}-{{v}_a})\big ( \sigma_{n+2}\sigma_{n+1}\,\delta_{n+2,0 } + 2\sigma_{n+1}\overline{\sigma}\,\delta_{n+1,0 } + \big ( \sigma_{n+1}^2+\sigma_{n}^2+\overline{\sigma}^2 \big)\,\delta_{n,0 } \\[0.5em ] & \qquad + 2\sigma_{n}\overline{\sigma}\,\delta_{n+1,0}+\,\delta_{n-1,0 } + \sigma_{n}\sigma_{n-1}\,\delta_{n-2,0 } \big).\end{aligned}\ ] ]the proof of equation starts by applying expansion , legendre orthogonality property , expansion and fourier orthogonality property : & \qquad= ( { { v}_b}-{{v}_a})\sum_{m , n=0}^{{n_l}-1}\int_{0}^{l}{{c}^s}_{m}(x , t){{{c}^s}_{n}}(x , t)\delta_{m , n}{dx}= ( { { v}_b}-{{v}_a})\sum_{n=0}^{{n_l}-1}\int_{0}^{l}{\left|{{{c}^s}_{n}}(x , t)\right|}^2{dx}\\[0.5em ] & \qquad= ( { { v}_b}-{{v}_a})\sum_{n=0}^{{n_l}-1}\sum_{k , k'=-{n_f}}^{{n_f}}\big({{c}^s}_{n , k}(t)\big)^{\dagger}{{c}^s}_{n , k'}(t)\int_{0}^{l}\psi_{-k}(x)\psi_{k}(x){dx}\\[0.5em ] & \qquad= ( { { v}_b}-{{v}_a})l\,\sum_{n=0}^{{n_l}-1}\sum_{k , k'=-{n_f}}^{{n_f}}\big({{c}^s}_{n , k}(t)\big)^{\dagger}{{c}^s}_{n , k'}(t)\delta_{-k+k',0 } = ( { { v}_b}-{{v}_a})l\,\sum_{n=0}^{{n_l}-1}\sum_{k=-{n_f}}^{{n_f}}{\left|{{c}^s}_{n , k}(t)\right|}^2.\end{aligned}\ ] ]to prove the left - most equality in , we first note that : = l\sum_{n=0}^{{n_l}-1}\sum_{k=-{m_k}}^{{m_k}}({{c}^s}_{n , k})^{\dagger}\big[{e}{\star}({\mathbbm{b}}{\mathbf{c}^s})_{n}\big]_k \label{eq : lemma : proof:10:a}\end{aligned}\ ] ] using the definition of the discrete fourier expansion of the electric field , the legendre coefficients , and , we obtain : = l\sum_{k , k'=-{m_k}}^{{m_k}}({{c}^s}_{n , k})^{\dagger}{e}_{k'}\big({\mathbbm{b}}{\mathbf{c}^s}\big)_{n , k - k'}\nonumber\\[0.5em ] & \qquad\qquad= \sum_{k , k',k''=-{m_k}}^{{m_k}}({{c}^s}_{n , k})^{\dagger}{e}_{k'}\big({\mathbbm{b}}{\mathbf{c}^s}\big)_{n , k''}\,l\delta_{-k+k'+k'',0}\nonumber\\[0.5em ] & \qquad\qquad= \int_{0}^{l } \left(\sum_{k=-{m_k}}^{{m_k } } ( { { c}^s}_{n , k})^{\dagger}\psi_{-k}(x)\right ) \left(\sum_{k'=-{m_k}}^{{m_k}}{e}_{k'}\psi_{k'}(x)\right ) \left(\sum_{k''=-{m_k}}^{{m_k}}\big({\mathbbm{b}}{\mathbf{c}^s}\big)_{n , k''}\psi_{k''}(x)\right ) \,{dx}\nonumber\\[0.5em ] & \qquad\qquad= \int_{0}^{l}{{c}^s}_{n}(x , t){e}(x , t)\big({\mathbbm{b}}{\mathbf{c}^s}\big)_{n}. 
\label{eq : lemma : proof:10}\end{aligned}\ ] ] then , we note that : } \qquad\qquad\qquad\qquad \\[1.75em ] \end{array}\end{aligned}\ ] ] }\\[1.5em ] & \qquad\displaystyle=\frac{1}{{{v}_b}-{{v}_a}}\sum_{n=0}^{{n_l}-1}\sum_{i=0}^{n-1}\sigma_{n , i}{{{c}^s}_{n}}(x , t)\int_{{{v}_a}}^{{{v}_b}}{f^s}(x,{v},t)\phi_{i}({v})\d{v}&\qquad\mbox{\big[use derivative formula~\eqref{eq : legendre : first : derivative}\big]}\\[1.5em ] & \qquad\displaystyle=\frac{1}{{{v}_b}-{{v}_a}}\sum_{n=0}^{{n_l}-1}{{{c}^s}_{n}}(x , t)\int_{{{v}_a}}^{{{v}_b}}{f^s}(x,{v},t)\frac{d\phi_{n}({v})}{d{v}}\d{v}&\qquad\mbox{\big[use again decomposition~\eqref{eq : legendre : decomposition}\big]}\\[1.5em ] & \qquad\displaystyle=\frac{1}{{{v}_b}-{{v}_a}}\int_{{{v}_a}}^{{{v}_b}}{f^s}(x,{v},t)\frac{\partial{f^s}(x,{v},t)}{\partial{v}}\d{v}&\qquad\mbox{\big[use the definition of the derivative\big]}\\[1.5em ] & \qquad\displaystyle = \frac{1}{2({{v}_b}-{{v}_a})}\int_{{{v}_a}}^{{{v}_b}}\frac{\partial({f^s})^2}{\partial{v}}\d{v}= \displaystyle\frac{1}{2}{\delta_v\big [ ( { f^s})^2 \big]}_{{{v}_a}}^{{{v}_b } } \end{array}\end{aligned}\ ] ] using the last relation above in and tranforming back in fourier space yield : & = \sum_{n=0}^{{n_l}-1}\int_{0}^{l}{{c}^s}_{n}(x , t){e}(x , t)\big({\mathbbm{b}}{\mathbf{c}^s}\big)_{n } = \frac{1}{2}\int_{0}^{l}{e}(x , t){\delta_v\big [ ( { f^s}(x,{v},t))^2 \big]}_{{{v}_a}}^{{{v}_b}}\,{dx}\nonumber\\[0.5em ] & = \frac{1}{2}l\left[{e}{\star}{\delta_v\big [ ( { f^s}(x,{v},t))^2 \big]}_{{{v}_a}}^{{{v}_b}}\right]_0 , \label{eq : lemma : proof:15}\end{aligned}\ ] ] where {0}$ ] denotes the zero - th fourier mode and which is the first equality in . applying again the definition of the discrete fourier transform , the right - most equality in is proved as follows : {{{v}_a}}^{{{v}_b}}\big]_k = l\sum_{k=-{m_k}}^{{m_k}}\sum_{n=0}^{{n_l}-1}({{c}^s}_{n , k})^{\dagger}\big[{e}{\star}\delta_{{v}}\big[{f^s}\phi_n\big]_{{{v}_a}}^{{{v}_b}}\big]_k \nonumber\\[0.5em ] & \qquad = l\sum_{n=0}^{{n_l}-1}\sum_{k , k'=-{m_k}}^{{m_k}}({{c}^s}_{n , k})^{\dagger}{e}_{k'}\big(\delta_{{v}}\big[{f^s}\phi_n\big]_{{{v}_a}}^{{{v}_b}}\big)_{k - k ' } \nonumber\\[0.5em ] & \qquad=\sum_{n=0}^{{n_l}-1}\sum_{k , k',k''=-{m_k}}^{{m_k}}({{c}^s}_{n , k})^{\dagger}{e}_{k'}\big(\delta_{{v}}\big[{f^s}\phi_n\big]_{{{v}_a}}^{{{v}_b}}\big)_{k '' } \,l\delta_{-k+k'+k'',0}\nonumber\\[0.5em ] & \qquad=\sum_{n=0}^{{n_l}-1}\int_{0}^{l } \left(\sum_{k=-{m_k}}^{{m_k}}({{c}^s}_{n , k})^{\dagger}\psi_{-k}(x)\right ) \left(\sum_{k'=-{m_k}}^{{m_k}}{e}_{k'}\psi_{k'}(x)\right ) \left(\sum_{k''=-{m_k}}^{{m_k}}\big(\delta_{{v}}\big[{f^s}\phi_n\big]_{{{v}_a}}^{{{v}_b}}\big)_{k''}\psi_{k''}(x)\right ) \,{dx}\nonumber\\[0.5em ] & \qquad = \sum_{n=0}^{{n_l}-1}\int_{0}^{l}{{c}^s}_{n}(x , t){e}(x , t)\delta_{{v}}\big[{f^s}(x,{v},t)\phi_n({v})\big]_{{{v}_a}}^{{{v}_b}}{dx}\nonumber\\[0.5em ] & \qquad = \int_{0}^{l}{e}(x , t)\delta_{{v}}\big[{f^s}(x,{v},t)\,\sum_{n=0}^{{n_l}-1}{{c}^s}_{n}(x , t)\phi_n({v})\big]_{{{v}_a}}^{{{v}_b}}{dx}\nonumber\\[0.5em ] & \qquad = \int_{0}^{l}{e}(x , t)\delta_{{v}}\big[{f^s}(x,{v},t)^2\big]_{{{v}_a}}^{{{v}_b}}{dx}=l\left[{e}{\star}\delta_{{v}}\big[({f^s})^2\big]_{{{v}_a}}^{{{v}_b}}\right]_{0}. \label{eq : lemma : proof:20}\end{aligned}\ ] ] the three members of are real numbers since intermediate steps in the previous developments are formed by real quantities .
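The chain of equalities above repeatedly uses int_0^L psi_{-k}(x) psi_{k'}(x) dx = L delta_{k,k'}, i.e. a Parseval-type identity int_0^L |c(x)|^2 dx = L sum_k |c_k|^2 for the truncated Fourier expansion. The short numerical check below assumes psi_k(x) = exp(2*pi*i*k*x/L), which is consistent with that orthogonality relation, and verifies the identity with random coefficients.

```python
import numpy as np

L = 4.0 * np.pi             # spatial period (placeholder value)
NF = 8                      # Fourier truncation used in the check
rng = np.random.default_rng(0)

c = rng.normal(size=2 * NF + 1) + 1j * rng.normal(size=2 * NF + 1)   # coefficients c_k
k = np.arange(-NF, NF + 1)

def c_of_x(x):
    """Truncated Fourier sum, assuming psi_k(x) = exp(2*pi*i*k*x/L), which
    satisfies int_0^L psi_{-k} psi_{k'} dx = L * delta_{k,k'} as in the proof."""
    return np.sum(c[:, None] * np.exp(2j * np.pi * k[:, None] * x[None, :] / L), axis=0)

# periodic trapezoidal quadrature of |c(x)|^2 over one period (exact for this trig. polynomial)
x = np.linspace(0.0, L, 4096, endpoint=False)
lhs = np.sum(np.abs(c_of_x(x)) ** 2) * (L / x.size)
rhs = L * np.sum(np.abs(c) ** 2)
assert np.isclose(lhs, rhs), (lhs, rhs)
```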
we present the design and implementation of an -stable spectral method for the discretization of the vlasov - poisson model of a collisionless plasma in one space and velocity dimension . the velocity and space dependence of the vlasov equation are resolved through a truncated spectral expansion based on legendre and fourier basis functions , respectively . the poisson equation , which is coupled to the vlasov equation , is also resolved through a fourier expansion . the resulting system of ordinary differential equation is discretized by the implicit second - order accurate crank - nicolson time discretization . the non - linear dependence between the vlasov and poisson equations is iteratively solved at any time cycle by a jacobian - free newton - krylov method . in this work we analyze the structure of the main conservation laws of the resulting legendre - fourier model , e.g. , mass , momentum , and energy , and prove that they are exactly satisfied in the semi - discrete and discrete setting . the -stability of the method is ensured by discretizing the boundary conditions of the distribution function at the boundaries of the velocity domain by a suitable penalty term . the impact of the penalty term on the conservation properties is investigated theoretically and numerically . an implementation of the penalty term that does not affect the conservation of mass , momentum and energy , is also proposed and studied . a collisional term is introduced in the discrete model to control the filamentation effect , but does not affect the conservation properties of the system . numerical results on a set of standard test problems illustrate the performance of the method . vlasov - poisson , legendre - fourier discretization , conservation laws stability
turbulent flows in many different physical and engineering applications have a reynolds number so high that a direct numerical simulation of the navier - stokes equations ( dns ) is not feasible .the large - eddy simulation ( les ) is a method in which the large scales of turbulence only are directly solved while the effects of the small - scale motions are modelled .the mass , momentum and energy equations are filtered in space in order to obtain the governing equation for the large scale motions .the momentum and energy transport at the large - scale level due to the unresolved scales is represented by the so - called subgrid terms .standard models for such terms , as , for example , the widely used smagorinsky model , are based on the assumption that the unresolved scales are present in the whole domain and that turbulence is in equilibrium at subgrid scales ( see , e.g. , ) .this hypothesis can be questionable in free , transitional and highly compressible turbulent flows where subgrid scales , that is fluctuations on a scale smaller than the space filter size , are not simultaneously present in the whole domain .in such situations , subgrid models such as smagorinsky s overestimate the energy flow toward subgrid scales and , from the point of view of the large , resolved , scales , they appear as over - dissipative by exceedingly damping the large - scale motion .for instance , simulation of astrophysical jets could suffer from such limitation . in this regard, any improvement of the les methodology is opportune .astrophysical flows occur in very large sets of spatial scales and velocities , are highly compressible ( mach number up to ) and have a reynolds number which can exceed , so that only the largest scales of the flow can be resolved even by the largest simulation in the foreseeable future . as a consequence , today , in this field , les appears as a feasible simulation methods able to predict the unsteady system behaviour .we have recently proposed a simple method to localize the regions where the flow is underresolved .the criterion is based on the introduction of a local functional of vorticity and velocity gradients .the regions where the fluctuations are unresolved are located by means of the scalar probe function which is based on the vortical stretching - tilting sensor : where is the velocity vector , is the vorticity vector and the overbar indicates the statistical average. function ( [ f - dnsfluctuation ] ) is a normalized scalar form of the vortex - stretching term that represents the inertial generation of three dimensional vortical small scales inside the vorticity equation . when the flow is three dimensional and rich in small scales is necessarily different from zero , while , on the other hand , it is instead equal to zero in a two - dimensional vortical flow where the vortical stretching is absent .the mean flow is subtracted from the velocity and vorticity fields in order to consider the fluctuating part of the field only . .the initial velocity field is a laminar parallel flow , with an initial mach number equal to 5 , perturbed by eight waves which have an amplitude equal to 5% of the axis jet velocity and a wavelength from to times . 
] of the probability that ( see equation [ f - dnsfluctuation ] ) and thus the probability that subgrid terms are introduced in the selective les balance equation by the localization procedure .the lines represent the points where the longitudinal velocity is constant , where is the jet axis mean velocity .all data in this figure have been computed averaging on lines parallel to the jet axis . a three - dimensional animation which shows the time evolution of the underresolved regions where the subgrid terms are introduced in the selective les can be seen in the supplemental material .] a priori test of the spatial distribution of functional test have been performed by computing the statistical distribution of in a fully resolved turbulent fluctuation field ( dns of a homogeneous and isotropic turbulent flow ( , , data from ) ) and in some unresolved instances obtained by filtering this dns field on coarser grids ( from to ) .it has been shown that the probability that assumes values larger than a given threshold is always higher in the filtered fields and increases when the resolution is reduced .the difference between the probabilities in fully resolved and in filtered turbulence is maximum when is in the range $ ] for all resolutions . in such a rangethe probability that is larger than in the less resolved field is about twice the probability in the dns field .furthermore , beyond this range this probability normalized over that of resolved dns fields it is gradually increasing becoming infinitely larger . from that it is possible to introduce a threshold on the values of , such that , when assumes larger values the field could be considered locally unresolved and should benefit from the local activation of the large eddy simulation method ( les ) by inserting a subgrid scale term in the motion equation .the values of this threshold is arbitrary , as there is no sharp cut , but it can be reasonably chosen as the one which gives the maximum difference between the probability in the resolved and unresolved fields .this leads to .furthermore , it should be noted that the morkovin hypothesis , stating that the compressibility effects do not have much influence on the turbulence dynamics , apart from varying the local fluid properties , allows to apply the same value of the threshold in compressible and incompressible flows .such value of the threshold has been used to investigate the presence of regions with anomalously high values of the functional , by performing a set of a priori tests on existing euler simulations of the temporal evolution of a perturbed cylindrical hypersonic light jet with an initial mach number equal to 5 and ten times lighter than the surrounding external ambient .when the effect of the introduction of subgrid scale terms in the transport equation is extrapolated from those a priori tests , they positively compare with experimental results and show the convenience of the use of such a procedure . 
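The localization step amounts to evaluating the stretching-tilting sensor on the fluctuating field and thresholding it. A minimal sketch of that evaluation on a uniform Cartesian grid is given below; since the exact expression in eq. ([f-dnsfluctuation]) is not reproduced here, the normalization of the stretching-tilting vector by the local enstrophy is an assumption suggested by the abstract, and the code is meant only to illustrate the structure of the sensor and of the Heaviside switch with threshold 0.4.

```python
import numpy as np

T_OMEGA = 0.4   # threshold suggested by the a priori analysis

def selective_mask(u, v, w, dx, dy, dz, mean_axis=0):
    """Vortical stretching-tilting sensor on a uniform grid and the Heaviside
    mask that switches the subgrid terms on.  Normalization by the local
    enstrophy is an assumption; the paper's exact definition is eq. (f-dnsfluctuation)."""
    # fluctuating velocity: subtract the mean along the homogeneous direction
    flucts = [q - q.mean(axis=mean_axis, keepdims=True) for q in (u, v, w)]
    grads = [np.gradient(q, dx, dy, dz) for q in flucts]      # grads[i][j] = d u_i / d x_j
    # vorticity of the fluctuating field
    wx = grads[2][1] - grads[1][2]
    wy = grads[0][2] - grads[2][0]
    wz = grads[1][0] - grads[0][1]
    omega = np.stack([wx, wy, wz])
    # stretching-tilting vector (omega . grad) u
    stretch = np.stack([sum(omega[j] * grads[i][j] for j in range(3)) for i in range(3)])
    enstrophy = np.sum(omega ** 2, axis=0)
    f = np.sqrt(np.sum(stretch ** 2, axis=0)) / np.maximum(enstrophy, 1e-30)
    return f, (f > T_OMEGA).astype(float)       # sensor and switch H(f - t_omega)
```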
in this paper we present large - eddy simulations of thistemporal evolving jet , where the subgrid terms are selectively introduced in the transport equations by means of the local stretching criterion .the aim is not to model a specific jet , but instead to understand , from a physical point of view , the differences introduced by the presence of sub - grid terms in the under - resolved simulations of hypersonic jets .our localization procedure selects the regions where subgrid terms are applied and , as such , its effect could be considered equivalent to a model coefficient modulation , as the one obtained by the dynamic procedure or by the use of improved eddy viscosity smagorinsky - like models like vreman s model , which gives a low eddy viscosity in non turbulent regions of the flow . however , it operates differently because it is completely uncoupled from the subgrid scale model used as , unlike the common practical implementations of the dynamic procedure , does not require ensemble averaging to prevent unstable eddy viscosity .other alternatives , such as the approximate deconvolution model , are more complicated than the present selective procedure because involve filter inversion and the use of a dynamic relaxation term .the computational overload of the selective filtering is modest and can make les an affordable alternative to a higher resolution inviscid simulation : the selective les increases the computing time of about one - third with respect to an euler simulation , while the doubling of the resolution can increase the computing time by a factor of sixteen .c + c + c + we have simulated the temporal evolution of a three dimensional jet in a parallelepiped domain with periodicity conditions along the longitudinal direction .the flow is governed by the ideal fluid equations ( mono - atomic gas flow ) for mass , momentum , and energy conservation .the beam is considered thermally confined by the external medium , and the initial pressure is set uniform in the entire domain . in the astrophysical context , this formulation is usually considered to approximate the temporal hydrodynamic evolution inside a spatial window of interstellar jets , which are highly compressible collimated jets characterized by reynolds numbers of the order .see for example , the herbig - haro jets hh24 , hh34 and hh47 .we do not consider the effect of the radiative cooling , which can change the jet dynamics substantially ( see , e.g. ) .the transient evolution includes basically two principal mechanism , the growth and evolution of internal shocks and the dynamics of the mixing process originated by the nonlinear development of the kelvin helmholtz instability .the analysis is carried out through hydro - dynamical simulations by considering only a fraction of the beam which is far from its base and head . due to the use of the periodic boundary conditions ,the jet material is continually processed by the earlier evolution because of the multiple transits though the computational domain . in this way, the focus is put on the instability evolution and on the interaction between the jet and the external medium , rather than an analysis of the global evolution of the jet .it is known that the numerical solution of a system of ideal conservation laws ( such as the euler equations ) actually produces the equivalent solution of another modified system with additional diffusion terms . 
with the discretizations used in thisstudy it possible to estimate _ a posteriori _ that the numerical viscosity implies an actual reynolds number of about .in such a situation it is clear that the addition into the governing equations of the diffusive - dissipative terms relevant to a reynolds number in the range would be meaningless .the formulation used is thus the following : & = & { \frac{\partial } { \partial x_i}}h(f_{\rm les}-t_\omega)q_i^{sgs}\nonumber\\ & & \label{eq.ene } \ ] ] where the field variables , and and are the filtered pressure , density , velocity , and total energy respectively .the ratio of specific heats is equal to . here and are the subgrid stress tensor and total enthalpy flow , respectively .function is the heaviside step function , thus the subgrid scale fluxes are applied only in the regions where .the threshold is here taken equal to 0.4 , which is the value for which the maximum difference between the probability density function between the filtered and unfiltered turbulence was observed .sensor , as defined in ( [ f - dnsfluctuation ] ) , does not depend on the subgrid model used and on the kind of discretization used to actually solve the filtered transport equations . in principle , it can be coupled with any subgrid model and any numerical scheme .we have chosen to implement the standard smagorinsky model as subgrid model , where is the rate of strain tensor and its norm .constant has been set equal to 0.1 , which is the standard value used in the les of shear flows , and , the turbulent prandtl number , is taken equal to 1 .the initial flow configuration is an axially symmetric cylindrical jet in a parallelepiped domain , described by a cartesian coordinate system .the initial jet velocity is along the -direction ; its symmetry axis is defined by .the interface between the jet and the surrounding ambient medium is described by a smooth velocity and density transition in order to avoid the spurious oscillations that can be introduced by a sharp discontinuity .the longitudinal velocity profile is thus initialized as where is the distance from the jet axis , is the jet radius and the jet velocity . 
is a smoothing parameter which has been set equal to 4 .the same smoothing has been used for the initial density distribution , where is the initial density inside the jet ambient and is the ratio between the ambient density at infinity to that of on the jet axis .a value of larger than one implies that the jet is lighter than the external medium .the mean pressure is set to a uniform value , that is , we are considering a situation where there is initially a pressure equilibrium between the jet and the surrounding environment .this initial mean profile is perturbed at by adding longitudinal disturbances on the transversal velocity components whose amplitude is 5% of the jet velocity and whose wavenumber is up to eight times the fundamental wavenumber , with random phase shifts , so that even the perturbation with the shortest wavelength is , initially , fully resolved .the integration domain is , and , with and .we have used periodic boundary conditions in the longitudinal direction , while free flow conditions are used in the lateral directions .a scheme of the initial flow configuration used in the simulations is shown in figure [ fig.schema ] .as function of the distance from the axis of the jet .all averages have been computed as space averages on cylinders at constant .,title="fig : " ] + as function of the distance from the axis of the jet .all averages have been computed as space averages on cylinders at constant .,title="fig:"]-1.0 mm in the following , all data have been made dimensionless by expressing lengths in units of the initial jet radius , times in units of the sound crossing time of the radius , where is the reference sound velocity of the initial conditions , velocities in units of ( thus dimensionless velocities coincide with the initial mach number ) , densities in units of and pressures in units of .equations ( [ eq.cont]-[eq.ene ] ) have been solved , in cartesian geometry , using an extension of the pluto code , which is a godunov - type code that supplies a series of high - resolution shock - capturing schemes that are particularly suitable for the present application , because of their low numerical dissipation .in fact , as pointed out by , a high numerical viscosity can overwhelms the subgrid - scale terms effects .the code has been extended by adding the subgrid fluxes and the computation of the functional which allows to perform the selective large - eddy simulation . for this application ,a third order accurate in space and second order in time piecewise - parabolic - method ( ppm ) has been chosen .we have performed three simulations of a jet with an initial mach number equal to 5 and a density ratio equal to 10 .the density ratio is an important parameter in such flow configuration , as it has been shown that it has a strong influence on the temporal evolution and on the flow entrainment as it has been shown by numerical simulations and laboratory experiments .the selective les of the jet has been carried out on a uniform grid .a uniform grids avoids the need to cope with the non - commutation terms in the governing equations ( see , e.g. 
) .moreover , three additional simulations were performed for comparison : a standard non selective les where the subgrid model was introduced in the whole domain , which is obtained by forcing in equations ( [ eq.cont]-[eq.ene ] ) , and two euler simulations , which formally can be obtained by putting , one with the same resolution of the large eddy simulations and one which uses a finer grid ( ) .[ cols="^,^ " , ] , defined as the distance between the jet axis and the position where the normalized mean velocity is equal to 0.5 ; ( b ) density thickness , defined as the distance between the jet axis and the position where the mean density is equal to the average between the jet axis density and the external ambient density.,title="fig : " ] + , defined as the distance between the jet axis and the position where the normalized mean velocity is equal to 0.5 ; ( b ) density thickness , defined as the distance between the jet axis and the position where the mean density is equal to the average between the jet axis density and the external ambient density.,title="fig : " ] -in this work we show that the _ selective _ large eddy simulation , which is based on the use of a scalar probe function a function of the magnitude of the local stretching - tilting term of the vorticity equation can be conveniently applied to the simulation of time evolving compressible jets . in the present simulation , the probe function has been coupled with the standard smagorinsky sub - grid model .however , it should be noted that the probe function can be used together with any model because simply acts as an independent switch for the introduction of a sub - grid model .the main results is that even a simple model can give acceptable results when selectively used together with a sub - grid scale localization procedure .in fact , the comparison among the four kinds of simulations ( selective les , standard les , low and high resolution pseudo euler direct numerical simulations ) here carried out shows that this method can improve the dynamical properties of the simulated field . in particular, the selective les hugely improves the spectral distribution of energy and density over the resolved scales , the enstrophy radial distribution and the mean velocity ( up to the 200% ) and density profiles ( up to the 100% ) with respect to the standard les .furthermore , this method avoids the artificial over - damping of the unstable modes at the jet border which in the standard large eddy simulation inhibits the jet lateral growth . in comparison with an euler simulation which uses the same resolution , the selective les clearly improves the flow prediction when the field is reach in small scales ( up to the 50% on the momentum and 4% on the density fields ) .if , as in the example here shown , the kinetic energy in the small scales is not steady in the mean and decays , the improvement due to the use of the selective les in the long term reduces .thus , in flow simulations where the small scales are transient in time these two methods asymptotically offer same results . 
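The velocity half-width and the density thickness used in the comparisons above are simple radial-profile diagnostics. The sketch below (an illustration, not the paper's code) extracts both from radial mean profiles obtained by averaging over cylinders at constant r, assuming the profiles are monotonic near the crossing points.

```python
import numpy as np

def half_widths(r, u_mean, rho_mean, u_axis, rho_axis, rho_ambient):
    """Velocity half-width: radius where the mean longitudinal velocity drops to
    0.5 * u_axis.  Density thickness: radius where the mean density equals the
    average of the on-axis and ambient values."""
    def first_crossing(profile, level):
        # index of the first sample past the crossing, then linear interpolation
        idx = np.argmax(profile <= level) if profile[0] > level else np.argmax(profile >= level)
        r0, r1 = r[idx - 1], r[idx]
        p0, p1 = profile[idx - 1], profile[idx]
        return r0 + (level - p0) * (r1 - r0) / (p1 - p0)

    b_u = first_crossing(u_mean, 0.5 * u_axis)
    b_rho = first_crossing(rho_mean, 0.5 * (rho_axis + rho_ambient))
    return b_u, b_rho
```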
and internal energy in the computational domain , high resolution pseudo - dns simulation .the kinetic energy has been decomposed in the sum of the energy of the mean flow and of the energy of the fluctuations .the tilde denotes density weighted favre average : .all values have been normalized by the initial energy .the evolution of the kinetic energy of the fluctuations determines the extension of the underresolved regions where subgrid terms must be introduced in the governing equations , see the movie in the supplementary material visible online . ] in synthesis , the selective les explicitly introduces the sub - grid flows of momentum and energy in the governing equations in the regions of the flow where turbulence is physically present . in this way, one does not rely on the numerical diffusion to mimic the overall behaviour of all unresolved scales .this is a positive feature , since the numerical diffusion depends on the algorithm used and on the grid spacing and can not be conveniently controlled .the computing time of the selective les is about one third larger than that of the low resolution euler simulation and seven times smaller than the one of the higher resolution euler simulation .therefore , a selective les could be more convenient than a better resolved euler simulation . because of this properties , given the modest computational burden brought to the simulation , the application of the selective procedure to the simulation of complex flows in particular highly compressible free flows as , for instance , astrophysical jets seems promising. 99 d.tordella , m.iovieno , s.massaglia , `` small scale localization in turbulent flows .a priori tests applied to a possible large eddy simulation of compressible turbulent flows '' , _ comp . phys .comm . _ * 176*(8 ) , 539549 ( 2007 ) .d.k.lilly `` the representation of small - scale turbulence in numerical simulation experiments '' , _ proc .ibm scientific computing symp . on environmental sciences _ ,yorktown heights , new york , ed .goldstine , ibm form no .3201951 ( 1967 ) .s.stolz , n.a.adams , l.kleiser , `` the approximate deconvolution model for the large - eddy simulation of compressible flows and its application to shock - turbulent boundary layer interaction '' ,_ phys.fluids _ * 10 * , 29853001 , ( 2001 ) .b.reipurth , j.bally , `` herbig - haro flows : probes of early stellar evolution '' , _ annual review of astronomy and astrophysics _ * 39 * , 403455 ( 2001 ) .j.m.stone , p.e.hardee and j.xu , `` the stability of radiatively cooled jets in three dimensions '' , _ astrophysical journal _ * 543*(1 ) , 161167 ( 1997 ) .p.rossi , g.bodo , s.massaglia , a.ferrari , `` evolution of kelvin - helmholtz instabilities in radiative jets .2 .shock structure and entrainment properties '' , _ astronomy and astrophysics _ * 321*(2 ) , 672684 ( 1997 ) .a.mignone , g.bodo , s.massaglia , t.matsakos , o.tesileanu , c.zanni and a.ferrari , `` pluto : a numerical code for computational astrophysics '' , _ astr . j. supplement series _ * 170*(1 ) , 228242 ( 2007 ) , and http://plutocode.to.astro.it .m.iovieno , d.tordella , `` variable scale filtered navier - stokes equations : a new procedure to deal with the associated commutation error '' , _ phys. fluids _ * 15*(7 ) , 19261936 ( 2003 ) .m. micono , g. bodo , s. massaglia , p. rossi , a. ferrari , r. 
rosner , `` kelvin - helmholtz instabilities in three dimensional radiative jets '' , _ astronomy and astrophys ._ , * 360 * , 795808 ( 2000 ) .g.bodo , s.massaglia , p.rossi , r.rosner , a.malagoli , a.ferrari , `` the long - term evolution and mixing properties of high mach number hydrodynamic jets '' , _ astronomy and astrophysics _ * 303*(1 ) , 281298 ( 1995 ) .
rosner , `` kelvin - helmholtz instabilities in three dimensional radiative jets '' , _ astronomy and astrophysics _ , * 360 * , 795 - 808 ( 2000 ) . g.bodo , s.massaglia , p.rossi , r.rosner , a.malagoli , a.ferrari , `` the long - term evolution and mixing properties of high mach number hydrodynamic jets '' , _ astronomy and astrophysics _ * 303*(1 ) , 281 - 298 ( 1995 ) .
a new method for the localization of the regions where small scale turbulent fluctuations are present in hypersonic flows is applied to the large - eddy simulation ( les ) of a compressible turbulent jet with an initial mach number equal to 5 . the localization method used is called selective les and is based on the exploitation of a scalar probe function which represents the magnitude of the _ stretching - tilting _ term of the vorticity equation normalized with the enstrophy . for a fully developed turbulent field of fluctuations , statistical analysis shows that the probability that is larger than 2 is almost zero , and , for any given threshold , it is larger if the flow is under - resolved . by computing the spatial field of in each instantaneous realization of the simulation it is possible to locate the regions where the magnitude of the normalized vortical stretching - tilting is anomalously high . the sub - grid model is then introduced into the governing equations in such regions only . the results of the selective les simulation are compared with those of a standard les , where the sub - grid terms are used in the whole domain , and with those of a standard euler simulation with the same resolution . the comparison is carried out by assuming as reference field a higher resolution euler simulation of the same jet . it is shown that the _ selective _ les modifies the dynamic properties of the flow to a lesser extent with respect to the classical les . in particular , the prediction of the enstrophy , mean velocity and density distributions and of the energy and density spectra are substantially improved . small scale , turbulence , localization , large - eddy simulation , astrophysical jets 47.27.ep , 47.27.wg , 47.40.ki , 97.21.+a , 98.38.fs
for the description of pll - based circuits a physical model in the signals space and a mathematical model in the signal s phase space are used .the equations describing the model of pll - based circuits in the signals space are difficult for the study , since that equations are nonautonomous ( see , e.g. , ) .by contrast , the equations of model in the signal s phase space are autonomous , what simplifies the study of pll - based circuits .the application of averaging methods allows one to reduce the model of pll - based circuits in the signals space to the model in the signal s phase space ( see , e.g. , .consider a model of pll - based circuits in the signal s phase space ( see fig .[ ris : pllbased ] ) .a reference oscillator ( input ) and a voltage - controlled oscillator ( vco ) generate phases and , respectively .the frequency of reference signal usually assumed to be constant : the phases and enter the inputs of the phase detector ( pd ) .the output of the phase detector in the signal s phase space is called a phase detector characteristic and has the form the maximum absolute value of pd output is called a phase detector gain ( see , e.g. , ) .the periodic function depends on difference ( which is called a phase error and denoted by ) .the pd characteristic depends on the design of pll - based circuit and the signal waveforms of input and of vco . in the present work a sinusoidal pd characteristic with considered ( which corresponds , e.g. , to the classical pll with and ) .the output of phase detector is processed by filter .further we consider the active pi filter ( see , e.g. , ) with transfer function , , . the considered filter can be described as where is the filter state .the output of filter is used as a control signal for vco : where is the vco free - running frequency and is the vco gain coefficient . relations ( [ eq : input ] ) , ( [ eq : filterorig ] ) , and ( [ eq : vco ] ) result in autonomous system of differential equations denote the difference of the reference frequency and the vco free - running frequency by . by the linear transformation we have where is the loop gain . for signal waveforms listed in table [ table : pdgains ] , relations ( [ sys : pllsys ] )describe the models of the classical pll and two - phase pll in the signal s phase space .the models of classical costas loop and two - phase costas loop in the signal s phase space can be described by relations similar to ( [ sys : pllsys ] ) ( pd characteristic of the circuits usually is a -periodic function , and the approaches presented in this paper can be applied to these circuits as well ) ( see , e.g. , ) .+ & + & + & + & + , \\ 1-\frac{2}{\pi}\theta_1 ,\theta_1 \in \left[\pi ; 2\pi \right ] \end{cases} ] . in interval ] , .thus , one can study ( [ eq : apppllsys ] ) in interval ] there exist two equilibria and . 
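Because several normalizations of the autonomous system ([sys:pllsys]) are garbled above, the following sketch integrates a generic phase-space model of a PLL with sinusoidal PD characteristic and active PI filter F(s) = (1 + tau2*s)/(tau1*s); the numerical parameter values are placeholders chosen for illustration, not the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp

# placeholder loop parameters (the paper's exact normalization is not reproduced here)
tau1, tau2, L_vco = 0.05, 0.02, 250.0    # PI-filter time constants and VCO gain
omega_e_free = 100.0                      # detuning between reference and VCO free-running frequency

def pll_rhs(t, y):
    """Phase-space model with sinusoidal PD and active PI filter:
    x is the filter state, theta_e the phase error."""
    x, theta_e = y
    pd = np.sin(theta_e)                              # sinusoidal PD characteristic
    control = (x + tau2 * pd) / tau1                  # filter output driving the VCO
    return [pd, omega_e_free - L_vco * control]

sol = solve_ivp(pll_rhs, (0.0, 2.0), [0.0, 3.0], max_step=1e-4)
print("phase error at end of run:", sol.y[1, -1] % (2 * np.pi))   # wrapped to [0, 2*pi)
```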
to define type of the equilibria points let us write out corresponding characteristic polynomials and find the eigenvalues : + thus , equilibrium is a stable node , a stable degenerated node , or a stable focus ( that depends on the sign of ) .equilibrium is a saddle point for all , , .moreover , in virtue of periodicity each equilibrium is a saddle point , and each equilibrium is a stable equilibrium of the same type as .note also that equilibria of ( [ eq : apppllsys ] ) and corresponding equilibria of ( [ app : pllsysorig ] ) are of the same type , and related as follows : let us consider the following differential equation : the right side of equation ( [ eq : plleqphaseplane ] ) is discontinuous in each point of line .this line is an isocline line of vertical angular inclination of ( [ eq : plleqphaseplane ] ) .equation ( [ eq : plleqphaseplane ] ) is equivalent to ( [ eq : apppllsys ] ) in the upper and the lower open half planes of the phase plane .let the solutions of equation ( [ eq : plleqphaseplane ] ) be considered as functions of two variables , .consider the solution of differential equation ( [ eq : plleqphaseplane ] ) , which range of values lies in the upper open half plane of its phase plane .right side of equation ( [ eq : plleqphaseplane ] ) in the upper open half plane is function of class for arbitrary large .solutions of the cauchy problem with initial conditions , ( which solutions are on the upper half plane ) are also of class on their domain of existence for arbitrary large .let us study the separatrix in interval , which tends to saddle point and is situated in its second quadrant .separatrix is the solution of the corresponding cauchy problem for equation ( [ eq : plleqphaseplane ] ) .the separatrix is of class on its domain of existence for arbitrary large .consider separatrix as a taylor series in variable in the neighborhood of : let us denote as the -th approximation of in variable : the taylor remainder is denoted as follows : for the convergent taylor series its remainder for each point of interval .separatrix satisfies the following relation , which follows from ( [ eq : plleqphaseplane ] ) : let us represent as taylor series ( [ eq : tailors ] ) in relation ( [ eq : sineqint ] ) . let us write out the corresponding members of ( [ eq : taylorineq ] ) for each , .+ for : for : for : let us consequently find , , using relations ( [ eq : approxeps0 ] ) , ( [ eq : approxeps1 ] ) and ( [ eq : approxeps2 ] ). begin with evaluation of : according to ( [ rel : approxeps0 ] ) using equation ( [ eq : approxeps1 ] ) and relations ( [ rel : approxeps0 ] ) evaluate : let us evaluate the integral in the interval using the following substitutions : hence , an expression for in interval is obtained : moreover , to shorten the further evaluation of , write out in equivalent form ( in interval ) . hence , , , are evaluated ( equations ( [ rel : approxeps0 ] ) , ( [ rel : approxeps1 ] ) and ( [ rel : approxeps2 ] ) , correspondingly ) .i. e. the first and the second approximations , of separatrix are found .furthermore , using ( [ rel0:approxeps0 ] ) , ( [ rel0:approxeps1 ] ) and ( [ rel0:approxeps2 ] ) the following relations are valid : this work was supported by the russian scientific foundation and saint - petersburg state university .the authors would like to thank roland e. 
best , the founder of the best engineering company , oberwil , switzerland and the author of the bestseller on pll - based circuits for valuable discussion .alexandrov , n.v .kuznetsov , g.a .leonov , and s.m .best s conjecture on pull - in range of two - phase costas loop . in _2014 6th international congress on ultra modern telecommunications and control systems and workshops ( icumt ) _ , volume 2015-january , pages 7882 .ieee , 2014 .doi : 10.1109/icumt.2014.7002082 .best , n.v .kuznetsov , g.a .leonov , m.v .yuldashev , and r.v .simulation of analog costas loop circuits ._ international journal of automation and computing _ , 110 ( 6):0 571579 , 2014 . 10.1007/s11633 - 014 - 0846-x .best , n.v .kuznetsov , o.a .kuznetsova , g.a .leonov , m.v .yuldashev , and r.v .yuldashev . a short survey on nonlinear models of the classic costas loop : rigorous derivation and limitations of the classic analysis . in _ proceedings of the american control conference _ , pages 12961302 .ieee , 2015 .doi : 10.1109/acc.2015.7170912 .7170912 , http://arxiv.org/pdf/1505.04288v1.pdf .gelig , g.a .leonov , and v.a ._ stability of nonlinear systems with nonunique equilibrium ( in russian)_. nauka , 1978 .( english transl : stability of stationary sets in control systems with discontinuous nonlinearities , 2004 , world scientific ) .kuznetsov , o.a .kuznetsova , g.a .leonov , p. neittaanmaki , m.v .yuldashev , and r.v .. limitations of the classical phase - locked loop analysis . _ proceedings - ieee international symposium on circuits and systems _ , 2015-july:0 533536 , 2015 .doi : http://dx.doi.org/10.1109/iscas.2015.7168688 .kuznetsov , g.a .leonov , s.m .seledzgi , m.v .yuldashev , and r.v .elegant analytic computation of phase detector characteristic for non - sinusoidal signals ._ ifac - papersonline _ , 480 ( 11):0 960963 , 2015 .doi : http://dx.doi.org/10.1016/j.ifacol.2015.09.316 .kuznetsov , g.a .leonov , m.v .yuldashev , and r.v .rigorous mathematical definitions of the hold - in and pull - in ranges for phase - locked loops . _ifac - papersonline _ , 480 ( 11):0 710713 , 2015 .doi : http://dx.doi.org/10.1016/j.ifacol.2015.09.272 .leonov , n.v .kuznetsov , m.v .yuldahsev , and r.v .analytical method for computation of phase - detector characteristic ._ ieee transactions on circuits and systems - ii : express briefs _ , 590 ( 10):0 633647 , 2012 .doi : 10.1109/tcsii.2012.2213362 .leonov , n.v .kuznetsov , m.v .yuldashev , and r.v .nonlinear dynamical model of costas loop and an approach to the analysis of its stability in the large . _ signal processing _ , 108:0 124135 , 2015 .doi : 10.1016/j.sigpro.2014.08.033 .leonov , n.v .kuznetsov , m.v .yuldashev , and r.v .hold - in , pull - in , and lock - in ranges of pll circuits : rigorous mathematical definitions and limitations of classical theory ._ ieee transactions on circuits and systems i : regular papers _ , 620 ( 10):0 24542464 , 2015 .doi : http://dx.doi.org/10.1109/tcsi.2015.2476295 .v. smirnova , a. proskurnikov , and n. utina .problem of cycle - slipping for infinite dimensional systems with mimo nonlinearities . in _ultra modern telecommunications and control systems and workshops ( icumt ) , 2014 6th international congress on _ , pages 590595 .ieee , 2014 .
in the present work pll - based circuits with sinusoidal phase detector characteristic and active proportionally - integrating ( pi ) filter are considered . the notion of lock - in range an important characteristic of pll - based circuits , which corresponds to the synchronization without cycle slipping , is studied . for the lock - in range a rigorous mathematical definition is discussed . numerical and analytical estimates for the lock - in range are obtained . phase - locked loop , nonlinear analysis , pll , two - phase pll , lock - in range , gardner s problem on unique lock - in frequency , pull - out frequency
in the present work pll - based circuits with sinusoidal phase detector characteristic and active proportionally - integrating ( pi ) filter are considered . the notion of the lock - in range , an important characteristic of pll - based circuits which corresponds to synchronization without cycle slipping , is studied . a rigorous mathematical definition of the lock - in range is discussed , and numerical and analytical estimates for it are obtained . phase - locked loop , nonlinear analysis , pll , two - phase pll , lock - in range , gardner s problem on unique lock - in frequency , pull - out frequency
derived the equivalence of mass and energy ( ) by considering an object of mass that simultaneously emits two electromagnetic packets , each with energy in opposite ( and ) directions . by momentum conservation in the rest frame of the object, it does not change its velocity after emission .seen from a frame moving at velocity ( i.e. , along the axis defined by the emissions ) , the two packets are each doppler shifted ( in opposite directions ) , so that the total energy of these packets is higher in the moving frame than the rest frame by , where . argued that by energy conservation , the object must lose energy in the moving frame by an amount that is greater than what it loses in the rest frame by exactly this difference . had already shown in his earlier paper introducing special relativity , that the kinetic energy of a mass moving at velocity is .appealing to this result , concluded that the mass of the emitting object must decline from to , where . herei show that the same result can be derived from conservation of momentum , without invoking any results from special relativity .that is , the derivation uses only effects that are first order in , and does not employ the second - order effects that characterize special relativity .consider as above an object emitting the two electromagnetic packets that , viewed in its rest frame are equal and opposite . by momentum conservation ,the dual ejection leaves the object at rest in this frame .see figure [ fig : frames ] . now consider the same event from a frame that is moving _ perpendicular _ ( with velocity ) relative to the emission directions . by symmetry ,the wave packets are still equal , but they are no longer opposite : because of the aberration of starlight ( first discovered by james bradley in 1729 ) , the packets will both appear to be moving slightly upward , at an angle .denote the emitted energies of the packets _ in the moving frame _ by , and denote the mass of object in this frame before and after emission by and . by a variety of arguments elaborated below , the magnitude of the momenta of the two packets in this frameare hence , because of aberration of starlight , the vertical components of these momenta will be ( to first order in ) equating the total -momentum in the moving frame before and after emission yields , which can be solved to obtain , note that in carrying out this derivation , i explicitly ignored terms higher than first order in , in particular when i adopted . hence , the result strictly applies only in the limit , i.e. , in the rest frame .this can be expressed as an equivalence between energy and rest - mass , i address the question of how this result can be generalized to moving bodies in [ sec : moving ] . in the derivation ,i used the relation between the energy and momentum for ( monodirectional ) electromagnetic fields , of course , this can be derived from special relativity , but the orientation here is to derive equation ( [ eqn : emc2 ] ) with no recourse to relativity , nor to concepts of a similar vintage , such as photons . 
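The first-order momentum balance behind the derivation can be checked symbolically. In the sketch below (notation mine, since the paper's symbols are partly elided) each packet carries energy E in the transverse-moving frame, hence momentum E/c tilted by the aberration angle v/c, so its component along the frame velocity is (E/c)(v/c) to first order; momentum conservation then forces the emitter's mass to drop by the total emitted energy 2E divided by c^2.

```python
import sympy as sp

c, v, E, M, Mp = sp.symbols('c v E M M_prime', positive=True)

# vertical momentum carried by the two aberrated packets, to first order in v/c
p_packets = 2 * (E / c) * (v / c)

# momentum conservation along the direction of motion: M v = M' v + p_packets
balance = sp.Eq(M * v, Mp * v + p_packets)
delta_m = sp.simplify(M - sp.solve(balance, Mp)[0])
print(delta_m)        # -> 2*E/c**2, i.e. emitted energy 2E = (delta m) * c**2
```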
recapitulates s manipulations of maxwell s equations to derive the electromagnetic energy flux density , where and are the electric and magnetic fields .he then develops a similar manipulation of maxwell s equations ( together with the lorentz force law ) to derive the momentum density , where is the magnetic induction .combining these two equations for monodirectional electromagnetic waves in free space yields equation ( [ eqn : pegamma ] ) .this shows that this relation rests directly on the maxwell / lorentz equations , although whether anyone actually derived the expression for prior to the simplification of vector notation is not clear .however , already uses for isotropic electromagnetic radiation in his thermodynamic derivation of stefan s law . here is the pressure and is the energy density .this expression already implies for monodirectional electromagnetic waves .as emphasized in [ sec : derivation ] , by carrying out the derivation only to first order in , i ultimately restricted its validity to bodies at rest .put differently , if the true relation between mass and energy had the form , , the derivation would have proceeded exactly the same way .there are two paths to generalizing the result to moving bodies .the first is to adopt the results of special relativity .this is the approach of , who derived using momentum conservation when light is emitted in an arbitrary direction . in special relativity ,equation ( [ eqn : pgammamov ] ) is exact , so the derived relation between mass and energy is exact to all orders in .this approach is pedagogically useful : like einstein s derivation , it makes use of special relativity , but it is simpler and more direct .however , as a historical and logical exercise , one may also ask how equation ( [ eqn : em0c2 ] ) could have been generalized if it had been discovered prior to special relativity .such a generalization follows from a simple thought experiment .imagine a box filled with warm gas , whose thermal energy ultimately resides in the kinetic energy of the atoms . at the time , this picture was controversial but at least some physicists ( e.g. , boltzmann ) held to it .light is emitted from two holes in the box , similarly to the situation in [ sec : derivation ] .the energy of the light packets is drawn from the kinetic energy of the atoms in the box , some of which now move more slowly . by equation ( [ eqn : emc2 ] ) ,the box has lost not only energy , but also mass . however , since the box contains no inter - atom potential energy , the mass ( i.e. , inertia ) of the box must be the sum of the mass ( inertia ) of the atoms in it .as the number of these has not changed , the mass of some of the atoms must have been reduced by exactly the amount of reduced mass of the box , which is exactly the same as the kinetic energy lost from these atoms divided by .that is , kinetic energy also contributes to inertia .up to this point , i have derived without ever making use of s postulate that is the same in all frames of reference , nor of any of the results that he derived from this postulate .i now show that special relativity , including the universality of , can be derived from this equation .first , shows that leads to the growth of inertia with velocity , . to permit clarification of a subtle point, i repeat that derivation here , beginning with the newtonian equation relating force to the increase of kinetic energy , . 
using the definitions , , , , thiscan be written , or substituting in the just derived yields at this point , there may be some question as to whether the one may pull `` '' out of the derivative , since it has not yet been shown to be `` constant '' .but is a _ constant _ in any one frame : the point that has not yet been addressed is whether it is _ invariant _ under frame changes . in the present case ,the observer is not changing frames : it is the mass that is accelerating .the quantities , , , and are all as measured in the observer frame , which is inertial .we then obtain , whose solution is where is an integration constant , which we identify with the rest mass . from this point , it is straightforward to derive the other relations of special relativity by well - known arguments .for example , as a fast train passes by , a passenger and a bystander each throw tennis balls transverse to the motion of the train ( with equal strength ) in such a way that they hit and each bounces back directly to its respective thrower .the balls must each return at the speed they were launched or the train passenger could detect her own motion . thus , they must have equal and opposite momenta .the bystander reckons that the passenger s ball is more massive and therefore concludes it s transverse velocity is smaller , which can only be true if time passes more closely . by similar traditional arguments, one can go on to derive length contraction , etc . in this way, one can prove that the speed of light is the same in all frames of reference rather than assuming it .while the result obtained here is obviously not new , there are three reasons for establishing this result using a new derivation .first , the expression is zeroth order in , in sharp contrast to the majority of results from special relativity , which are second order .it seems more elegant , therefore , to derive this expression using first - order arguments , rather than relying on second - order expressions .second , because the derivation is more elegant , it has pedagogical value , i.e. , it is easier to transmit to students .third , because the derivation is independent of special relativity , it raises the question of why was not derived earlier than 1905 .in particular , the elements needed to derive it ( momentum conservation , aberration of starlight , and the proportionality between electromagnetic energy and momentum ) were all in place by 1884 . indeed , once one realizes that electromagnetic waves have momentum ( even if one does not yet know the exact expression for this quantity ) , it follows immediately from momentum conservation and aberration of starlight that a light - emitting object must lose mass . as reviewed by , during the 25 years before special relativitythere were many efforts to express the mass of particles in terms of their energy divided by .but these differed from the arguments given here ( and that i have argued could have been given at least as early as 1884 ) by two important features .first , they generally centered around evaluations of the ultimately rather nebulous electromagnetic self - energy of charged particles rather than the kinetic properties of all matter ( charged or neutral ) .second , these evaluations did not recognize ( at least explicitly ) that when an object emitted energy , it also lost mass . 
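The ordinary differential equation obtained above, c^2 dm = v d(mv), i.e. (c^2 - v^2) m'(v) = m(v) v, can be solved symbolically as a quick sanity check; the sketch below confirms that the integration constant fixed by m(0) = m_0 yields the relativistic mass increase.

```python
import sympy as sp

v, c, m0 = sp.symbols('v c m_0', positive=True)
m = sp.Function('m')

# c^2 dm = v d(m v)  =>  (c^2 - v^2) m'(v) = m(v) v
ode = sp.Eq((c**2 - v**2) * m(v).diff(v), m(v) * v)
sol = sp.dsolve(ode, m(v), ics={m(0): m0})
print(sp.simplify(sol.rhs))   # m_0*c/sqrt(c**2 - v**2), i.e. m_0/sqrt(1 - v**2/c**2)
```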
indeed , the very complexity of the arguments developed in this era , compared with the absolute simplicity of the derivation in [ sec : derivation ] , makes it even more puzzling why no one hit on the latter .
the equivalence of mass and energy is indelibly linked with relativity , both by scientists and in the popular mind . here i prove that by demanding momentum conservation of an object that emits two equal electromagnetic wave packets in opposite directions in its own frame . in contrast to einstein s derivation of this equation , which applies energy conservation to a similar thought experiment , the new derivation employs no effects that are greater than first order in and therefore does not rely on results from special relativity . in addition to momentum conservation , it uses only aberration of starlight and the electromagnetic - wave momentum - energy relation , both of which were established by 1884 . in particular , no assumption is made about the constancy of the speed of light , and the derivation proceeds equally well if one assumes that light is governed by a galilean transformation . in view of this , it is somewhat puzzling that the equivalence of mass and energy was not derived well before the advent of special relativity . the new derivation is simpler and more transparent than einstein s and is therefore pedagogically useful .
the precision of measurements may in principle be improved using quantum mechanical effects , viz . , the quantum aided metrology . a convenient way of quantifying the precision in parameter estimation , for instance , is via the quantum fisher information ( qfi ) .also , non - classical states of light have become a promising resource for the improvement of parameter estimation . amongst the quantum states of light ( probe states ) which might bring advantages to phase estimation and that have been already discussed in the literature , we may cite the continuous - variable states , either one - mode ( squeezed ) states or two - mode ( entangled coherent ) states .interestingly , schemes employing continuous - variable states in the low photon number regime have a superior performance when compared to schemes using other types of non - classical states , for instance , the noon " states .it has also been demonstrated the optimality of squeezed states in ideal phase estimation , as well as the fact that the qfi scales quadratically with the mean photon number if squeezed states are employed in place of coherent states ( linear scaling ) .it is therefore of importance to seek other probe states that could lead to more efficient protocols .we remark that in the above mentioned works , features such as squeezing and entanglement have been considered separately in different protocols .thus , we may ask ourselves : could we have any advantage if we use continuous - variable entangled states also exhibiting squeezing ? in this work we investigate the adequacy , for phase estimation purposes , of interpolated quasi - bell squeezed states , which are continuous - variable entangled states having squeezed coherent states as component states . it should be noted that , under more realistic conditions , if one considers external unwanted influences such as noise or even a unitary disturbance " , the accuracy of the estimation may be considerably degraded . accordingly, we will analyze the phase estimation using quasi - bell squeezed states also taking into account a linear unitary disturbance in the derivation of the qfi .our manuscript is organized as follows : in section 2 we present the calculations of the qfi and study the quantum phase estimation using quasi - bell squeezed states .we first consider the ideal case and also investigate the effect of a linear disturbance .the role of entanglement in the phase estimation is also discussed in that section . in section 3present our conclusions .a class of interesting continuous variable states are the quasi - bell squeezed states , defined as is the squeezing parameter , with and .the overlap between the different component states is given by \right\} ] .the most challenging terms to compute are of the kind ( the subscripts have been omitted ) and can be obtained through where is the operator after the transformation . 
in this way, the qfi can be computed straightforwardly term by term .after some manipulations we obtain : - \left(n_{\mathsf{in},a}\right)^{2}\right\ } \,,\ ] ] where we have defined +\frac{1}{2}\left(4\alpha^{2}-1\right)\cosh(2r)+\frac{3}{8}\cosh(4r)+\frac{1}{8}\,,\end{aligned}\ ] ] and \\ & = & \frac{1}{4}\kappa\left\ { 2\alpha^{2}\left[\alpha^{2}\left(1 + 2\sinh^{2}(4r)\cos(2\theta)+3\cosh(8r)\right)-2(\sinh(2r)+3\sinh(6r))\cos\theta\right.\right.+\\ & & \left.-6\cosh(6r)\right]+4\cosh(2r)\left[4\alpha^{2}\left(2\alpha^{2}\cosh(4r)+1\right)\sinh(2r)\cos\theta-\alpha^{2}-1\right]+\\ & & \left.+\left(8\alpha^{2}+3\right)\cosh(4r)+1\right\ } \,.\end{aligned}\ ] ] naturally , if we let we re - obtain the following equation derived by monras + \cosh(4r)-1\,.\label{qfilzero}\ ] ] we now re - parametrize the qfi as a function of and the squeezing fraction of the component state " , . in this approach, we should not interpret the parameters and as having any specific physical meaning ; they are just two auxiliary parameters that will be useful to compare the results of this work with the non - entangled case .the energy of the input state depends on the parameters and of the component state as well as on the interpolating parameter . for this reason, we represent the qfi as a function of the input average photon number [ eq .( [ nina ] ) ] in mode of the entangled state and as a function of the parameter .it is not an easy task to algebraically invert eq .( [ nina ] ) to obtain explicitly as function of , and we do this numerically , adjusting the value of in order to get the desired input photon number . in fig .[ fig : qfi - zero - eta ] ( _ top _ ) we plot the qfi as a function of the squeezing fraction .the optimal probe state is the one capable of reconciling the gains due to entanglement without loosing the gains due to squeezing .we notice that when we have a `` squeezing fraction '' for which the qfi is greater than the one obtained with non - entangled states . however , when , although we have a state that may be maximally entangled when , we notice that we do not have any increase for the qfi .thus we conclude that the best strategy is to spend all the energy in squeezing the state . to understand this phenomenon , we analyse the parameter that the component state must have in order to let the input photon number be the available value ( fig .[ fig : qfi - zero - eta](_bottom _ ) ) . because the energy increases for , we must reduce to keep the energy fixed while we change .this implies a reduction of the qfi when , unless we make and the optimal state is not entangled . [cols="^,^,^ " , ] we have verified that if the parameter is negative , the optimal input state is not an entangled state , and the mode of this state corresponds to what has been previously found , i.e. , the qfi is given by we remind that in the case of phase estimation with single - mode states we have .if increases from to , though , the qfi increases if there is enough energy . in this range of values for the energy ,increasing past leads to a reduction of entanglement .this is because the components of the quasi - bell state become two squeezed vacuum states , and the overlap increases .for this reason , is not equal to , which reconciles the gains due to entanglement with the gains due to squeezing for the phase estimation . in fig . 
[fig : emaranhamento - sonda - otima ] we plot the entanglement of the optimal probe state for various interpolation parameters .we draw attention to the fact that even when we take , entanglement is not imposed , because the state is not entangled if , as it happens for . moreover , in the case of a single mode squeezed state ( when there was no additional parameter ) was indeed the optimal value for the squeezing fraction " .this means that the phase estimation with linear unitary disturbance could be upgraded by the use of entangled states .the qfi is gradually increased when we allow the state to be more and more entangled ( increasing ) , so it seems natural to look for a more direct relation between the entanglement of the quasi - bell states and the resulting qfi . when we found the optimal parameters and for the input state by maximizing the qfi , we removed the dependence of the qfi on those parameters .we are now able to observe the dependence of the qfi on the interpolation parameter , which fixes the maximal entanglement of the input state .because both the qfi and the entanglement are monotonic functions of , for , we can represent the qfi directly as a function of the entanglement for this ( positive ) semi - axis . in fig .[ fig : qfi_functionof_e ] we notice that the qfi increases monotonically as the entanglement of the probe state increases , showing that entanglement is a resource for quantum phase estimation even if there is a unitary disturbance in the system .the qfi is an increasing function of the parameter of the disturbance because there is an energy increase parameterized by during the transformation .the average photon number in the mode a of the output state may be calculated in the following way : using again eq .( [ auxiliareq ] ) we obtain , in the limit of : \,,\ ] ] where we have defined \\ & = & 2\kappa\left\ { \eta^{2}+\sinh^{2}r-2\alpha^{2}\left[\sinh\left(4r\right)\cos\theta+\cosh\left(4r\right)\right]\right\ } \,.\end{aligned}\ ] ] we notice that , as expected , for and , reduces to in order to perform a more precise analysis of how the qfi depends on the disturbance , we plot in fig .[ fig : qfi - functionof - nout - entanglement ] the qfi as a function of the average photon number in mode for ( similarly to what is done in ) . of course for we re - obtain the previous results for single - mode gaussian states .we note that the presence of the disturbance affects the phase estimation also when the probe state is entangled , if the total available energy ( input state + transformation ) is taken into account .however , we observe that the qfi may attain larger values than in the non - entangled case , showing once more the advantages of using entangled states for quantum phase estimation even in the presence of a unitary disturbance .finally , we may analyze the feasibility of the adjustment of the parameters that were optimized along this work .firstly we remark that the plot in fig .[ fig : qfi - functionof - nout - entanglement ] corresponds to the behavior of the left ends ( ) of the plots in fig .[ fig : qfi - functionof - l ] ( _ bottom _ ) , when we consider higher energies . 
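since the argument above ties the qfi gain to the entanglement carried by the probe, a convenient numerical measure is the entropy of entanglement of either mode, obtained from the schmidt decomposition of the pure two - mode state. the helper below assumes only that the probe has been expanded in a truncated two - mode fock basis; it is a generic utility of our own, not the expression used in the text.

```python
import numpy as np

def entanglement_entropy(psi_ab, dim_a, dim_b):
    """Entropy of entanglement (ebits) of a pure two-mode state given as a vector
    of amplitudes on the product basis |n_a> x |n_b>."""
    coeffs = np.asarray(psi_ab).reshape(dim_a, dim_b)
    schmidt = np.linalg.svd(coeffs, compute_uv=False)   # Schmidt coefficients
    lam = schmidt ** 2
    lam = lam[lam > 1e-15]                              # drop numerical zeros
    return float(-np.sum(lam * np.log2(lam)))

# e.g. the superposition (|1,0> + |0,1>)/sqrt(2) carries exactly 1 ebit
dim = 2
psi = np.zeros(dim * dim)
psi[1 * dim + 0] = psi[0 * dim + 1] = 1.0 / np.sqrt(2.0)
print(entanglement_entropy(psi, dim, dim))   # -> 1.0
```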
for , the value of is well defined , but it may be hard to reach it for a total input photon number larger than . this is because , in this case , we have and the sensitivity of the qfi upon a very small variation of around is very high . the plot in fig . [ fig : qfi - functionof - nout - entanglement ] then represents only a theoretical indication of how entanglement may be useful for phase estimation . in this work we have analyzed the use of the interpolated quasi - bell states as input probes for quantum phase estimation . we have compared the qfi in the ideal case with the qfi when a unitary disturbance is included in the hamiltonian which determines the evolution . we have found that the use of continuous - variable entangled states based on squeezed coherent states makes it possible to increase the precision of phase estimation , especially when the total average photon number is not negligible ( ) . we have verified that the qfi is an increasing function of the interpolation parameter ( for ) , which is related to the entanglement contained in the optimal probe state . we have also observed that the larger the unitary disturbance parameter , the larger the energy of the output state , which increases the qfi . however , when we consider all the energy spent in the process ( including the energy used in the transformation ) we find that the disturbance actually impairs the phase estimation . we also highlight that for input states with higher energy , the `` optimal squeezing fraction '' parameter must be finely adjusted in order to maximize the qfi when . d.d.s . acknowledges fundação de amparo à pesquisa do estado de são paulo ( fapesp ) , grant no . 2011/00220 - 5 , brazil . a.v.b . acknowledges partial support from conselho nacional de desenvolvimento científico e tecnológico ( cnpq ) , inct of quantum information , grant no . 2008/57856 - 6 , brazil . yonezawa , h. , nakane , d. , wheatley , t.a . , iwasawa , k. , takeda , s. , arao , h. , ohki , k. , tsumura , k. , berry , d.w . , ralph , t.c . , wiseman , h.m . , huntington , e.h . , and furusawa , a. : quantum - enhanced optical - phase tracking . science 337 , 1514 ( 2012 ) .
in this paper we analyze the quantum phase estimation problem using continuous - variable , entangled squeezed coherent states ( quasi - bell states ) . we calculate the quantum fisher information having the quasi - bell states as probe states and find that both squeezing and entanglement might bring advantages , increasing the precision of the phase estimation compared to protocols which employ other continuous variable states , e.g. , coherent or non - entangled states . we also study the influence of a linear ( unitary ) perturbation during the phase estimation process , and conclude that entanglement is a useful resource in this case as well . * quantum phase estimation with squeezed quasi - bell states * douglas delgado de souza and a. vidiella - barranco + +
you are asked to complete a task drawn according to a pmf from a finite set of tasks .you do not get to see but only its description , where in other words , is described to you using bits .you know the mapping and you promise to complete based on , which leaves you no choice but to complete every task in the set in the interesting case where , you will sometimes have to perform multiple tasks , of which all but one are superfluous .( we use to denote the cardinality of sets . )given , the goal is to design so as to minimize the -th moment of the number of tasks you perform } = \sum_{x\in{\mathcal{x } } } p(x ) { { \lvertf^{-1}(f(x))\rvert}}^\rho,\ ] ] where is some given positive number .this minimum is at least one because is in ; it decreases as increases ; and it is equal to one when .our first result is a pair of upper and lower bounds on this minimum as a function of .the bounds are expressed in terms of the _ rnyi entropy of of order _ : throughout stands for , the logarithm to base . for typographic reasons we henceforth use the notation [ thm : oneshot ] let . 1 . for all positive integers and every , }\geq 2^{\rho(h_{{\tilde{\rho}}}(x)-\log m)}.\ ] ] 2 . for every integer there exists such that }\\ < 1 + 2^{\rho(h_{{\tilde{\rho}}}(x)-\log \widetilde{m})},\ ] ] where . a proof is provided in section [ sec : proof ] .the lower bound is essentially ( * ? ? ?* lemma iii.1 ) .theorem [ thm : oneshot ] is particularly useful when applied to the case where a sequence of tasks is produced by a source with alphabet and the first tasks are jointly described using bits : we assume that the order in which the tasks are performed matters and that every -tuple of tasks in the set must be performed . the total number of performed tasks is therefore , and the ratio of the number of performed tasks to the number of assigned tasks is .[ thm : main ] let be any source with finite alphabet . 1 .if , then there exist encoders such that stands for . ] } = 1.\ ] ] 2 .if , then for any choice of encoders , } = \infty.\ ] ] on account of theorem [ thm : oneshot ] , for all large enough so that , }\\ < 1 + 2^{n\rho\bigl(\frac{h_{{\tilde{\rho}}}(x^n)}{n } - r+\delta_n\bigr)},\end{gathered}\ ] ] where as .when it exists , the limit is called the _ rnyi entropy rate of order . it exists for a large class of sources , including time - invariant markov sources .theorem [ thm : main ] generalizes ( * ? ? ?* theorem iv.1 ) from iid sources to sources with memory and furnishes an operational characterization of the rnyi entropy rate for all orders in .note that for iid sources the rnyi entropy rate reduces to the rnyi entropy because in this case .the proof of the lower bound in theorem [ thm : oneshot ] hinges on the following simple observation .[ prop : count_lists ] if is a partition of a finite set into nonempty subsets ( i.e. , and if , and only if , ) , and is the cardinality of the subset containing , then note that the reverse of proposition [ prop : count_lists ] is not true in the sense that if satisfies then there need not exist a partition of into subsets such that the cardinality of the subset containing is at most .a counterexample is with , , and . 
in this example , , but we need 3 subsets to satisfy the cardinality constraints .however , as our next result shows , allowing a slightly larger number of subsets suffices : [ prop : sufficiency ] if is a finite set , and ( with the convention ) , then there exists a partition of into at most subsets such that where is the cardinality of the subset containing .proposition [ prop : sufficiency ] is the key to the upper bound in theorem [ thm : oneshot ] .combined with proposition [ prop : count_lists ] it can be considered an analog of the kraft inequality ( * ? ? ?* theorem 5.5.1 ) for partitions of finite sets . a proof is given in section [ sec : proposition ] .the construction of the encoder in the derivation of the upper bound in theorem [ thm : oneshot ] requires knowledge of the distribution of ( see section [ sec : upper_bound ] ) . in section [ sec : divergence ] we consider a mismatched version of this direct part where the construction is carried out based on the law instead of .we show that the penalty incurred by the mismatch between and can be expressed in terms of the divergence measures where can be any positive number not equal to one .( we use the convention and if . )this family of divergence measures was proposed by sundaresan , who showed that it plays a similar role in the massey - arikan guessing problem .the proof of the lower bound is inspired by the proof of ( * ? ? ?* theorem 1 ) .fix an encoder , and note that it gives rise to a partition of into the subsets let denote the number of nonempty subsets in this partition . also note that for this partition the cardinality of the subset containing is recall hlder s inequality :if , and , then rearranging gives substituting , , and in , we obtain where follows from , , and proposition [ prop : count_lists ] ; and where follows because . since hlder s inequality holds with equality if , and only if , ( iff ) is proportional to , it follows that the lower bound in theorem [ thm : oneshot ] holds with equality iff is proportional to .we derive the upper bound in theorem [ thm : oneshot ] by constructing a partition that approximately satisfies this relationship . to this end , we use proposition [ prop : sufficiency ] with in and where we choose just large enough to guarantee the existence of a partition of into at most subsets satisfying .this is accomplished by the choice ( this is where we need . ) indeed , and hence let then the partition with be as promised by proposition [ prop : sufficiency ] , and construct by setting if . for this encoder , where the strict inequality follows from and the inequality which is easily checked by considering separately the cases and .we describe a procedure for constructing a partition of with the desired properties .since the labels do not matter , we may assume for convenience of notation that and the first subset in the partition we construct is if , then the construction is complete and and are clearly satisfied. otherwise we follow the steps below to construct additional subsets . 
step : if then we complete the construction by setting and . otherwise we set and go to step .
step : if then we complete the construction by setting and . otherwise we let contain the smallest elements of , i.e. , we set and go to step .
we next verify that is satisfied and that the total number of subsets does not exceed . clearly , for every , so to prove we check that for every . it is clear that for all . let denote the smallest element in the subset containing . then for all by construction , and since , we have by the assumption , and hence for all . it remains to check that does not exceed . this is clearly true when , so we assume that . since for all , we have on account of proposition [ prop : count_lists ] . fix an arbitrary and let be the set of indices such that there is an with . we next argue that . to this end , enumerate the indices in as . for each select such that . then note that if and and , then . thus , because and , and . consequently , iterating this argument shows that and since for by , it follows that . continuing from with , where the first inequality follows because for , and where the second inequality follows from the hypothesis of the proposition . since is an integer and is arbitrary , it follows from that is upper - bounded by . the key to the upper bound in theorem [ thm : oneshot ] was to use proposition [ prop : sufficiency ] with as in and to obtain a partition of for which the cardinality of the subset containing is approximately proportional to . evidently , this construction requires knowledge of the distribution of . in this section , we derive the penalty when is replaced with in and . since it is then still true that proposition [ prop : sufficiency ] guarantees the existence of a partition of into at most subsets satisfying . constructing from this partition as in section [ sec : upper_bound ] and proceeding similarly as in to , we obtain where is as in and is as in theorem [ thm : oneshot ] . ( note that only if the support of is contained in the support of .
) the penalty in the exponent when compared to the upper bound in theorem [ thm : oneshot ] is thus given by . to reinforce this , further note that where and are the -fold products of and . consequently , if the source is iid and we construct similarly as above based on instead of , we obtain the bound }<1 + 2^{n\rho(h_{{\tilde{\rho}}}(x_1 ) + \delta_{{\tilde{\rho}}}(p||q ) - r + \delta_n)},\ ] ] where as . the rhs of tends to one provided that . thus , in the iid case is the rate penalty incurred by the mismatch between and . we conclude this section with some properties of . properties 1 - 3 ( see below ) were given in ; we repeat them here for completeness . note that rényi s divergence ( see , e.g. , ) satisfies properties 1 and 3 but none of the others in general . property 2 follows by inspection of . properties 3 - 5 follow by simple calculus . as to property 1 , consider first the case where . in view of property 2 , we may assume that . inequality with and gives the conditions for equality in hölder s inequality imply that equality holds iff . consider next the case where . by hölder s inequality with and , with equality iff .
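to make the encoder construction of the preceding sections concrete, the sketch below groups tasks greedily so that the subset containing each task never exceeds a prescribed cap, evaluates the resulting ρ - th moment of the number of performed tasks, and compares it with the lower bound written in terms of the rényi entropy (here of order 1/(1+ρ)). the greedy rule and the particular choice of caps are simplifications of ours for illustration, not the exact construction or constants of the proofs above.

```python
import numpy as np

def renyi_entropy_bits(p, order):
    """Renyi entropy of a pmf in bits, for order > 0 and order != 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(np.log2(np.sum(p ** order)) / (1.0 - order))

def greedy_partition(caps):
    """Group item indices so that the subset containing item x has at most caps[x]
    members: process items by increasing cap; each new subset inherits its capacity
    from its first (smallest-cap) member, so the constraint holds for all members."""
    order = sorted(range(len(caps)), key=lambda i: caps[i])
    subsets, current, capacity = [], [], 0
    for i in order:
        if current and len(current) < capacity:
            current.append(i)
        else:
            if current:
                subsets.append(current)
            current, capacity = [i], caps[i]
    if current:
        subsets.append(current)
    return subsets

rho = 1.0
p = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
caps = np.maximum(1, np.floor(p ** (-1.0 / (1.0 + rho)))).astype(int)  # caps grow as p shrinks
parts = greedy_partition(caps.tolist())
size_of = {i: len(s) for s in parts for i in s}
moment = sum(p[i] * size_of[i] ** rho for i in range(len(p)))          # rho-th moment of tasks performed
m = len(parts)                                                          # descriptions actually used
bound = 2.0 ** (rho * (renyi_entropy_bits(p, 1.0 / (1.0 + rho)) - np.log2(m)))
print(m, "descriptions; moment =", round(moment, 3), ">= lower bound", round(bound, 3))
```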
a task is randomly drawn from a finite set of tasks and is described using a fixed number of bits . all the tasks that share its description must be performed . upper and lower bounds on the minimum -th moment of the number of performed tasks are derived . the key is an analog of the kraft inequality for partitions of finite sets . when a sequence of tasks is produced by a source of a given rnyi entropy rate of order and tasks are jointly described using bits , it is shown that for larger than the rnyi entropy rate , the -th moment of the ratio of performed tasks to can be driven to one as tends to infinity , and that for less than the rnyi entropy rate it tends to infinity . this generalizes a recent result for iid sources by the same authors . a mismatched version of the direct part is also considered , where the code is designed according to the wrong law . the penalty incurred by the mismatch can be expressed in terms of a divergence measure that was shown by sundaresan to play a similar role in the massey - arikan guessing problem .
transient lunar phenomena are defined for the purposes of this investigation as localized ( smaller than a few hundred km across ) , transient ( up to a few hours duration , and probably longer than typical impact events - less than 1s to a few seconds ) , and presumably confined to processes near the lunar surface .how such events are manifest is summarized by cameron ( 1972 ) . in paperi we study the systematic behavior ( especially the spatial distribution ) of tlp observations - particularly their significant correlations with tracers of lunar surface outgassing , and in paper ii some simple , theoretical predictions of other , not - so - obvious aspects that might be associated with tlps and outgassing events . in this paperwe suggest several ways that more information might be gleaned to determine the true nature of these events . at several pointswe emphasize the importance of timely implementation of these approaches .tlps are infrequent and short - lived , and this is the overwhelming fact of their study that must be surmounted .it is our goal to design a nested system of observations which overcomes the problems that this fact has produced , a largely anecdotal and bias - ridden data set , and replace it with another data set with _ a priori _ explicit , calculable selection effects .this might seem a daunting task , since the data set we used in paper i was essentially the recorded visual observations of the entire human race since the invention of the telescope , and even somewhat before . with modern imaging and computer technology, however , we can overcome this .another problem that becomes clear in paper ii is the many , complex means by which outgassing can interact with the regolith . in the case of slow seepage, gases may take a long time to work their way through the regolith .if the gases are volcanic , there may be interactions along the way , and if water vapor is involved , it and perhaps others of these gases may remain trapped in the regolith .these factors must be remembered in designing our future investigations .we can make significant headway , however .the various factors which complicate our task due to the paucity of information about tlps also leave open avenues that modern technology can exploit .the many methods detailed in this paper are summarized in table 1 .there has been no areal - encompassing , digital image monitoring of the near side with appreciable time coverage using modern software techniques to isolate transients .there are no published panspectral maps at high spectral / spatial resolution of the near side surface , beyond what is usually called multispectral imaging .( to some degree this will be achieved by the moon mineralogy mapper onboard _chandrayaan-1 _ , but not before other relevant missions such as have passed ) .there are numerous particle detection methods that are of use .the relevant experiments on apollo were of limited duration , either of a week or less , or 5 - 8 years in the case of alsep .furthermore the _ clementine _ and _ lunar prospector _missions were also of relatively short duration .all of these limitations serve as background to the following discussions .by necessity the monitoring of optical transients from the vicinity of earth must be limited to the near side .as detailed in paper i , however , all physical correlations tied to tlps likewise strongly favor the near side e.g. 
, outgassing ( 4 of 4 episodes being nearside , as well as nearly all residual ( seen as ) and mare edges ( % nearside , somewhat depending on one s definition , even more so if low - contrast albedo features such as aitken basin are not included ) .remote sensing in the optical / ir is limited in spatial resolution either by the diffraction limit of the telescope or by atmospheric seeing .one arcsecond , a typical value for optical imaging seeing fwhm , corresponds to 1.8 - 2.0 km on the lunar surface , and is the diffraction limit of a 12 cm diameter telescope at nm .the best , consistent imaging resolution will come from the with arcsec fwhm , and indeed images of the moon have been obtained with the /advanced camera for surveys combination ( garvin et al . observations of the moon turn out to be relatively expensive in terms of spacecraft time due to setup time complicated by the relative motions of the target and spacecraft , and inefficiency due to exposure setup times of for each exposure of typically 1s . altogether - 1 h of spacecraft time is needed to successfully image a small region in one filter band ( due in part to several overlapping exposures needed for complete coverage avoiding masks and other obstructions on the hrc detector , as well as to reject cosmic ray signals ) . at least until the servicing mission 4, the guiding of and the state of acs will allow no further such observations .a competing method for producing high - resolution imaging is the `` lucky exposures '' ( le , also `` lucky imaging '' ) technique which exploits occasionally superlative imaging quality among a series of rapid exposures , then sums the best of these with a simple shift - and - add algorithm ( fried 1978 , tubbs 2003 ) .the technique requires a high - speed , linear - response imager , and can be accomplished only with great difficulty using a more conventional astronomical ccd system .nonetheless , many amateur setups have achieved excellent results with this technique , and the cambridge group ( law , mackay & baldwin 2006 ) have achieved diffraction - limited imaging on a 2.5-meter telescope , very close to angular resolution . in practice ,only about 1 - 10% of exposures , hence less than 1% of observing time , survive image quality selection , but for the moon this amounts to a small investment of telescope time ( a few minutes ) .we have attempted this ourselves and encountered some minor problems : image quality must be selected in terms of a fourier decomposition of the image rather than inspection of the point - spread function of a reference star , and shift - and - add parameters must be similarly defined , by image cross - correlation rather than by centroiding a bright star .we will present results from these efforts when they succeed more usefully .unlike adaptive optics approaches , le does not depend on a bright reference star to define the incoming wavefront , but le improvements are still limited to an angular area of the isoplanatic patch determined by atmospheric turbulence , arcsec . 
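a minimal version of the le selection and stacking just described can be written in a few lines: rank frames by the fraction of power at high spatial frequencies (a fourier - based sharpness proxy, since there is no reference star), keep the best few percent, register each survivor against the running stack by the peak of their cross - correlation, and average. the metric, the keep fraction and the fft - based integer - pixel registration below are illustrative choices on our part, not the exact recipe of the cambridge group or of our own tests.

```python
import numpy as np

def sharpness(img, f_min=0.25):
    """Fraction of image power above a normalized spatial frequency f_min (1 = Nyquist)."""
    spec = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(img.shape[0])),
                         np.fft.fftshift(np.fft.fftfreq(img.shape[1])), indexing="ij")
    radius = np.hypot(fy / 0.5, fx / 0.5)
    power = np.abs(spec) ** 2
    return power[radius > f_min].sum() / power.sum()

def register_offset(ref, img):
    """Integer-pixel shift aligning img to ref, from the cross-correlation peak."""
    cc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
    dy, dx = np.unravel_index(np.argmax(np.abs(cc)), cc.shape)
    ny, nx = ref.shape
    return (dy if dy <= ny // 2 else dy - ny, dx if dx <= nx // 2 else dx - nx)

def lucky_stack(frames, keep_fraction=0.05):
    """Keep the sharpest frames and co-add them by shift-and-add."""
    scores = [sharpness(f) for f in frames]
    n_keep = max(1, int(keep_fraction * len(frames)))
    best = [frames[i] for i in np.argsort(scores)[::-1][:n_keep]]
    stack = best[0].astype(float).copy()
    for f in best[1:]:
        dy, dx = register_offset(stack, f)
        stack += np.roll(np.roll(f, dy, axis=0), dx, axis=1)
    return stack / n_keep
```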
covering the entire nearside moon would be challenging ( fields needed - at least 20 nights on a moderate - sized telescope ) .likewise , the acs hrc on , covering 750 arcsec at a time , can not be used practically to map the entire near side .the greater flexibility of an le program , in terms of choice of epoch and wavelength coverage , provides many advantages ; acs hrc , on the other hand , would provide consistent - quality results , albeit at great expense .high resolution imaging can be used to monitor small , specific areas over time , or in a one - shot application comparing a few exposures to imaging from another source . currently , the best full - surface comparison map in the optical is the uvvis ccd map ( eliason et al .1999 ) , 5 bands at 415 - 1000 nm , with typically 200 m resolution , a good match to le and resolutions . unfortunately , neither uvvis, or infrared cameras nir ( 1100 - 2800 nm ) or lwir ( 8000 - 9500 nm ) cover some of the more interesting bands for our purposes ( for example , the regolith hydration bands at 2.9 and 3.4 m ) . in the future, we will be able to make comparisons to the extensive map of the moon mineralogy mapper ( pieters et al .2005 ) on chandrayaan-1 , with 140 m and 20 nm fwhm spatial and wavelength resolution , respectively , over 0.4 - 3.0 m .the 3 - reflectance hydration features in asteroidal regolith have been studied ( lebofsky et al . 1981 , rivkin et al .1995 , 2002 , volquardsen et al .there is little written about the spectroscopic reaction of lunar regolith to hydration ; however , it is apparent that the reflectance features near 3 m do not appear immediately in lunar samples subjected to the terrestrial atmosphere ( akhmanova et al . 1972 ) , but do after several years ( markov et al .1980 , pieters et al . 2006 ) .at least in the latter , samples lose this hydration reflectance effect within a few days of exposure to a dry environment . this issuecould easily be studied with further lunar sample experiments .the prime technique for detecting changes between different epochs in similar images will involve image subtraction .this technique is well - established in studying supernovae , microlensing and variable stars , and produces photon poisson noise - limited performance ( tomaney & crotts 1996 ) .this technique is well matched to ccd or cmos detectors , and at 1 - 2 arcsec fwhm resolution , these can cover the whole moon with 10 - 20 mpixels , as is available for conventional detectors . for proper image subtraction ,one needs at least 2 pixels per fwhm diameter , or else non - poisson residuals tend to dominate , driving up the variable source detection threshold . to illustrate how image subtraction would work , we present data of the kind that might be produced by a monitor to detect tlps . while the image shown in figure 1is taken on a 0.9-meter telescope with 24 m pixels , the data are similar to that would be produced by a smaller , 1-arcsec diffraction - limited telescope with typical commercially - available digital - camera pixels e.g. , 6 m on a 20-cm telescope .image subtraction delivers nearly photon - noise level accuracy in the residual images taken in a ground - based time series , and this is demonstrated in figures 2 - 4 .we introduce an artificial `` tlp '' signal that is a 8% enhancement over the background in the peak pixel of an unresolved source - a signal at or below the threshold of a visual search . the tlp is detected convincingly even in a single image , once subtracted from a reference image e.g. 
, the average of a time series .the subtraction gives a very flat residual subtracted image ( except for the simulated tlp and a few `` cosmic rays '' of much smaller area and amplitude ) .the only exception is in the complex image region of the highlands near the global terminator .more meaningful , perhaps , is the signal - to - noise ratio of residual sources , shown in figure 3 .this shows the tlp clearly and unambiguously , but there are some false detections in the highland local terminator region at the level of 10 - 20% of the tlp ; we would like to improve on this .one alternative to reduce this noise is to consider applying an edge filter to supply a weighting function to suppress regions where the image structure is too complex .figure 3 shows the result from processing the raw image with a roberts edge enhancement filter ( , where is the raw count in the pixel and is the function shown in figure 3 ) .when the signal difference from figure 2 is divided by figure 3 , the result ( figure 4 ) uniquely and clearly shows the tlp .we would like to avoid this edge filter strategy if possible , relying completely on simple image subtraction , since it may be that some tlps are associated with local terminators on the lunar surface .our group has automated a tlp monitor on the summit of cerro tololo that should be producing regular lunar imaging data as of mid-2007 ( crotts , hickson & pfrommer 2007 ) .this will cover the entire moon at 1 arcsec resolution , and we expect to be able to process the images at a rate of one per 10s .this is sufficient to time - sample nearly all reported tlps ( see paper i ) .in addition we plan to add a second imaging channel on a video loop ; this will retain a continuous record of imaging of sufficient duration so that an alert to a tlp event from the image subtraction processing pipeline will allow one to query the image cache of the video channel record and reconstruct the event at finer time resolution .the image subtraction channel will include a neutral - density filter to allow the exposure time to nearly equal the image cycle time , hence even short tlps ( or meteorite impacts ) will be detected , albeit at a sensitivity reduced by a factor roughly proportional to the square - root of the event duration .the presence of a lunar imaging monitor opens many possibilities for tlp studies .for the first time , this will produce an extensive , objective , digital record of changes in the appearance of the moon , at a sensitivity level much finer than the capability of the human eye .while we will see the true frequency of tlps soon enough , paper i indicates that perhaps one tlp per month might be visible to a human observer observing at full duty cycle .an automated system should be able to distinguish changes in contrast at the level of 1% or slightly better , whereas this is perhaps 10% for a point source observed by the human eye ( based on our tests ) .even augmented human - eye surveys ( such as project moon blink or the corralitos observatory tlp survey - see paper i ) would be at least several times less sensitive than a purely digital survey .the resulting frequency of tlp detections at higher sensitivity depends on the event luminosity distribution function , poorly defined even at brighter limits and completely unknown at the level that will now be accessible. 
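the detection scheme illustrated by figures 2 - 4 can be mimicked with a few lines of array arithmetic: subtract a reference image (e.g. the average of the time series), normalize by an estimate of the photon noise, and down - weight pixels where the scene itself is sharp - edged using a roberts - type gradient of the reference. this is a simplified stand - in that assumes the frames are already registered and photometrically matched (a production pipeline would use psf - matched subtraction), and the gain, floor and threshold values are placeholders.

```python
import numpy as np

def roberts_gradient(img):
    """Roberts cross gradient magnitude, used as a map of local scene structure."""
    grad = np.zeros_like(img, dtype=float)
    d1 = img[:-1, :-1] - img[1:, 1:]
    d2 = img[1:, :-1] - img[:-1, 1:]
    grad[:-1, :-1] = np.hypot(d1, d2)
    return grad

def tlp_candidates(frame, reference, gain=2.0, edge_floor=50.0, nsigma=5.0):
    """Return pixel positions whose edge-weighted residual S/N exceeds nsigma."""
    diff = frame - reference
    noise = np.sqrt(np.clip(frame + reference, 1.0, None) / gain)     # rough Poisson noise in ADU
    snr = diff / noise
    weight = edge_floor / (roberts_gradient(reference) + edge_floor)  # ~1 on smooth mare, <1 on edges
    score = snr * weight
    ys, xs = np.nonzero(score > nsigma)
    return list(zip(ys.tolist(), xs.tolist())), score
```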
it might be reasonable to assume that a single monitor might detect several tlps per month of observing time .over several years , monitors at a range of terrestrial longitudes might detect of order 100 or more tlps , providing a well - characterized sample that will avoid many of the selection problems of the anecdotal visual data base and approach similar sample sizes .our plan eventually is to run two or more such monitors independently . not only does this increase the likely tlp detection rate , but allows us to perform simultaneous imaging in different bands , or in different polarization states .dollfus ( 2000 ) details tlps evident as polarimetric anomalies .the timescales involved are not tightly constrained , between 6 min and 1 d. other transient polarimetric events ( dzhapiashvili & ksanfomaliti 1962 , lipsky & pospergelis 1966 ) are even less constrained temporally ; however , the fact that we can observe the same event with two monitors simultaneously ( while observing the rest of the moon ) , means that there is little systematic doubt concerning the degree of polarization due to variability of the source while the apparatus is switching polarizations .presumably , since these are likely due to simple scattering effects on linear polarization , we should align the e - vector of one monitor s polarizer parallel to the sun - moon direction on the sky , and the second perpendicular to it . in the case of three or four monitors operating simultaneously, we can reconstruct stokes parameters for linear polarization conventionally by orienting polarizer e - vectors every or , respectively .the total flux from two or more monitors can be obtained by summing in quadrature signals from the different polarizations .a tlp imaging monitor will also open new potential as an alert system for other observing modes .a monitor detection can trigger le imaging in a specific active area .a qualitatively unique possibility is using the monitor to initiate spectroscopic observations , which much better than imaging will provide information about non - thermal processes and perhaps betray the gas associated with the tlp .tlp spectroscopy has its challenges . in order to detect a change, we must make comparisons over a time series of spectroscopic observations .this is essentially a four - dimensional independent - variable problem , therefore : two spatial dimensions of the lunar surface , plus wavelength implying a data cube , plus time . whereas `` hyperspectral '' imaging usually refers to a resolving power , where is the fwhm wavelength resolution, the emission lines from tlps might conceivably be many times more narrow than this , thereby diluted if higher resolution is not employed .it is not currently conceivable to monitor the whole near side in this way ( at gpixel s for and an exposure every 10s ) , but this is unnecessary .a practical approach may be to set up the reduction pipeline of the tlp monitor to alert to an event during its duration e.g. 
, in under 1000s , and then to bring a larger telescope with an optical or ir spectrograph to bear on the target , which our experience shows might be accomplished in .we are working to implement this in 2007 .there are reasons to prepare an data cube in advance of a tlp campaign for reasons beyond simply having a `` before '' image of the moon prior to an event .for instance , in the ir there are regolith hydration bands near 2.9 and 3.4 m , the latter with substructure on the scale of nm , which will be degraded unless the instrumental resolution is . while there are fewer narrow features in the optical / near - ir , the surface fe feature at 950 nm of pyroxene ( which requires only to be resolved ) , shows compositional shifts in wavelength centroid and width on the scale of nm ( hazen , bell & mao 1978 ) , which requires to be studied in full detail .likewise , differentiating pyroxenes from iron - bearing glass ( farr et al .1980 ) requires . this fe band ( and the corresponding band near 1.9 m ) are useful for lunar surface age - determination since they involve surface states that are degraded by micrometeorites and solar wind in agglutinate formation ( adams 1974 , charette et al .it appears that overturn of fresh material can also be monitored with enhanced blue optical broadband reflectivity ( buratti et al .2000 ) .such datasets are straightforward to collect , as are their reduction ( although requiring of some explanation ) .observations involve scanning across the face of the moon with a long slit spectrograph , which greatly improves the contrast of an emission - line source relative to the background ( figure 5 , showing recent data from the mdm observatory 2.4-meter / ccd spectrograph ) .since the spectral reflectance function of the lunar surface is largely homogenized by impact mixing of the regolith , more than 99% of the light in such a spectrum can be simply `` subtracted away '' by imposing this average spectrum and looking for deviations from it ( figure 6 ) .if a tlp radiates primarily in line emission , this factor along with our ability to reject photons outside the line profile yields a contrast as high as 10,000 times better than the human eye observing the moon through a telescope .this could also be done farther into the infrared , for instance we are preparing to observe the l - band ( 2.9 - 4.3 m ) using spex on the nasa infrared telescope facility in single - order mode , which can deliver . in general observations of this kind might be useful in the infrared for wider band emission , which is repeatable based primarily on temperature ( versus ionizing excitation as in paper ii , appendix 1 ) . using the hitran database to compute vibrational / rotational states for different molecules ,one can see these starting in the infrared ( or smaller wavenumbers for h , nh , co and ch ) , and extending into the optical for h but at least to k - band for nh ( and intermediate bands for co , co and ch ) . at least for these molecules ,the band patterns are strong and highly distinct .to be clear , this latter idea requires having an ir spectrograph available at several minutes notice to follow up on an alert of a tlp ( probably found in imaging ) . 
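the `` subtract the average spectrum '' step described above amounts to removing, at every slit position, the best - fitting scaled copy of the mean lunar reflectance spectrum and inspecting what remains. a bare - bones version is sketched below; the per - position least - squares scaling is a simplification of ours (a production pipeline would also fit a smooth continuum and mask telluric features).

```python
import numpy as np

def reflectance_residuals(cube):
    """cube: array of shape (n_positions, n_wavelengths) of long-slit spectra.
    Subtract a per-position scaled copy of the mean spectrum, leaving candidate
    emission (positive) or absorption (negative) residuals."""
    cube = np.asarray(cube, dtype=float)
    mean_spec = cube.mean(axis=0)
    scale = cube @ mean_spec / np.dot(mean_spec, mean_spec)   # least-squares amplitude per row
    residuals = cube - np.outer(scale, mean_spec)
    return residuals, mean_spec, scale
```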
on a longer timescale, ir spectroscopy might also be useful for the l - band hydration test outlined above , especially on some of the narrower spectral features near 3.4 m that imaging might overlook , even through narrow - band filters .the data cube described above can be sliced in any wavelength to construct a map of lunar features in narrow or broad bands .figure 7 shows that specific surface features can be reconstructed in good detail and fidelity .given the constraints on imaging from the vicinity of earth , it is interesting to consider the limits and potentials of imaging monitors closer to the moon . in general, we will not be proposing special - purpose missions in space - based remote sensing , and indeed will only mention dedicated missions related to in - situ exploration of areas affected by volatiles , where special - purpose investment seems unavoidable . with in - situ cases , we would perform a more extensive study , so will largely postpone these discussions to later work concentrating on close - range science . herewe propose experiments and detectors which might ride on other platforms , either preceding or in concert with human exploration , and which will accommodate the same orbits and other mission parameters which might be chosen for other purposes .some of these purposes are not designated priorities for planned missions , but might prove useful and probably should be considered in the future . in some cases , we will give rough estimates of project costs based on our prior experience with similar spacecraft .these are for discussion only and would need to be re - estimated in detail to be taken with greater credibility .an instance of such joint use : does exploration of the moon imply establishment of a communications network with line - of - sight visibility from essentially all points on the lunar surface ( excepting those within deep craters , etc . ) ?if so , these platforms might also serve as suitable locations for comprehensive imaging monitoring .a minimal example of such a network might have a tetrahedral configuration ( with each point typically 60000 km above the surface ) with a single platform at earth - moon lagrange point l1 , covering most of the nearside moon , and three points in wide halo orbits around l2 , each covering their respective portion of the far side plus a portion of the limb as seen from earth . no single satellite will be capable of covering the entire far side , especially if operation of farside radio telescopes there require a policy of solely high - frequency communications e.g. , via optical lasers .a single l2 satellite will cover at most 97% of the far side ( subtending , selenocentrically ) ; full coverage ( not to mention some communications system redundancy ) will require three satellites , plus some means of covering the near side . with this configuration , the farthest points from each satellite will be typically ( in selenocentric angle ) , hence forshortened due to proximity to the limb by times .extensive discussion is underway of using a facility at l1 to aid in transfer orbits throughout the solar system ( lo 2004 , ross 2006 ) ; in that case we should also consider placing an imaging monitor at l1 . 
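the coverage figures quoted above follow from simple spherical geometry: a platform at height h above a sphere of radius r sees a cap extending to a selenocentric angle arccos [ r / ( r + h ) ]. the snippet below reproduces the roughly 97% far - side coverage for a halo orbit some 60000 km out; the altitudes are just the round numbers quoted in the text, not mission designs.

```python
import numpy as np

R_MOON_KM = 1737.4

def visible_cap_fraction(height_km):
    """Fraction of the whole lunar sphere visible from a given altitude."""
    cos_theta = R_MOON_KM / (R_MOON_KM + height_km)
    return 0.5 * (1.0 - cos_theta)

for h in (60000.0, 1000.0, 100.0):
    frac = visible_cap_fraction(h)
    print(f"h = {h:7.0f} km : {100 * frac:5.1f}% of the sphere, "
          f"{200 * frac:5.1f}% of one hemisphere")
# h = 60000 km gives ~48.6% of the sphere, i.e. ~97% of the far side from an L2 halo orbit
```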
an imaging monitor to improve significantly on earth - vicinity capabilities might need to be an ambitious undertaking . for instance , to achieve 100 m fwhm resolution at the sub - satellite point on the face of the moon requires an imager of about 4 gpixels , an aperture m , and a field - of - view of 3 . each such monitor , separate from power , downlink , attitude control and other infrastructure requirements , will cost perhaps 1b , and remote sensing could inform this effort as to where to look in detail , when dangerous eruptions might occur , and what the material goal is . without such information , this investigation is likely to be more time - consuming , problematic , and perhaps more hazardous . we concentrate further on remote sensing , even if the proposed expense might be significant . as explained in paper ii , an expected consequence of water vapor seepage from the lunar interior is an ice layer within the regolith about 15 m below the lunar surface . a remote means of studying this feature would be ground - penetrating radar , either from the ground or from spacecraft platforms . one should realize that there is significant heritage as well as plans involving lunar radar . the lunar sounder experiment ( lse ) on _ apollo 17 _ ( brown 1972 , porcello 1974 ) operated in both a high - frequency and a penetrating radar mode ( 5 , 16 and 260 mhz ) . also planned are the lunar radar sounder ( lrs ) aboard selene ( ono & oya 2000 ) , at 5 mhz ( with an option at 1 mhz and 15 mhz ) , and mini - rf on the lunar reconnaissance orbiter , operating at 3 ghz and ghz . finally , of note for comparison s sake in the martian case is marsis ( mars advanced radar for subsurface and ionosphere sounding , at 1.8 , 3.0 , 4.0 , and 5.0 mhz : porcello et al . 2005 ) . at 5 mhz ( m ) the depth of penetration is many kilometers below the lunar surface , but the spatial resolution is necessarily coarse . to study the regolith and shallow bedrock , we should choose a frequency closer to 100 - 300 mhz . the apollo lse operated for only a few orbits and only close to the equator . the selene lrs runs at lower frequency . a higher frequency mode is desirable . the ground - based alternative is useful ; lunar radar maps have been made at 40 mhz , 430 mhz , and 8 ghz ( thompson & campbell 2005 ) , also 2.3 ghz ( stacy 1993 , campbell et al . 2006a , b ) . at 8 ghz we are only studying structure of several centimeters within a meter of the surface . for 430 mhz we see perhaps m inside , and at 40 mhz , 100 m towards the interior ( with attenuation lengths of roughly 10 - 30 wavelengths ) . in practice , better angular resolution at higher frequencies is possible e.g. , 20 m ( campbell et al . 2006a , b ) . of course from earth only the nearside is accessible , and larger angles of incidence e.g.
, , imply echoes dominated by diffuse scattering in a way which can not be modulated .use of circular polarization return measurements can be used to test for water ice ( nozette 1996 , 2001 ) but have been questioned ( simpson 1998 , campbell et al .we will not review this debate here , but application of the idea to subsurface ice is problematic .it is unclear that this could be accomplished at frequencies of hundreds of mhz required to penetrate to depths of m , and the more standard technique ( at 13 cm ) only performs to depths m , where ice sublimation and diffusion rates are almost certainly prohibitive of accumulation .finding subsurface ice has its challenges .for instance , the dielectric constant for both regolith and water ice ( which is slightly higher ) , as it is for many relevant mineral powders of comparable specific gravity e.g. , anorthosite and various basalts . ice andthese substances have similar attenuation lengths , as well . on the strength of net radar return signal alone, it will be difficult to distinguish ice from any usual regolith by their mineral properties .however , in terrestrial situations massive ice bodies reflect little internally e.g. , moorman , robinson & burgess ( 2003 ) .one might expect ice - bearing regions to be relatively dark in radar images , if lunar ice - infused volumes homogenize or `` anneal '' in this way , either by forming a uniform slab or by binding together regolith into a single , uniform bulk .on the other hand , hydrated regolith samples have values much higher than unhydrated ones ( by up to an order of magnitude ) , as well as attenuation lengths even more than an order of magnitude shorter ( chung 1972 ) .this hydration effect is largest at lower frequencies , even below 100 mhz .one might suspect that significant water ice might perturb the chemistry of the regolith significantly , which might even increase charge mobility as in a solution , which appears to invariably drive up , and conductivity even more , increasing the loss tangent : conductivity divided by ( and the frequency ) .one should expect a reflection passing into this high- zone , but this depends strongly on the details of the suddenness of the transition interface . of particular interestis the radar map at 430 mhz ( ghent et al .2004 ) of the aristarchus region , site of roughly 50% of tlp and radon reports .the 43-km diameter crater is surrounded by a low radar - reflectivity zone some 150 km across , particularly in directions downhill from the aristarchus plateau onto oceanus procellarum . in generalthe whole plateau is relatively dark in radar , occasionally interrupted by bright crater pock - marks and vallis schrteri . in contrastthe dark radar halo centered on aristarchus itself is uniquely smooth , indicating that it was probably formed or modified by the impact itself , a few hundred million years ago .this darkness might be interpreted as higher loss tangent , consistent with the discussion in the previous paragraphs , or simply fewer scatterers ( ghent et al .2004 ) i.e. 
, rocks of approximately meter size ; it is undemonstrated why the latter would be true in the ejecta blanket of a massive impact especially given the bright radar halo within 70 km of the aristarchus center .ghent et al .( 2005 ) show that other craters , some comparable in size to aristarchus , have dark radar haloes , but none so extended .the region around aristarchus has characteristics that might be expected from subsurface ice redistributed by impact melt : dark , smooth radar - return , spreading downhill but otherwise centered on the impact ; this should be expected to be confused , at least , with the dark halo effect seen around some other impacts .it seems well - motivated to search for similar dark radar areas around other likely outgassing sites , particularly ones not associated with recent impacts ; unfortunately , the foremost candidate for such a signature is competing with such an impact , aristarchus , which can be expected to produce its own confusing effect .we would propose that radar at frequencies near hundreds of mhz be considered for future missions , in a search for subsurface ice .this is a complex possibility that we will not detail here , that must be weighed against the potential of future ground - based programs . in particular , the near side has been mapped at about 1 km resolution for 70 cm wavelength ( campbell et al .2007 ) , this could be improved with an even more intensive ground - based program , or from lunar orbit .orbital missions can be configured to combine with higher frequencies and different reception schemes to provide better spatial resolution , deal with ground clutter , and varying viewing angles .a lunar orbiter radar map would be less susceptible to interference speckle noise , which will likely require long series of pointings to be reduced from the ground . in combination with an optical monitor, a ghz - frequency radar might produce detailed maps in which changes due to tlps might be sought , and might be then correlated with few - hundred mhz maps to aid in interpretation in terms of volatiles . at shorter wavelengths one should consider mapping possible changes in surface features due to explosive outgassing , which paper ii hints might occur frequently on scales excavated over tens of meters , and expelled over hundreds or thousands of meters .again , earth - based observations suffer from speckle , but planned observations by the _ lunar reconnaissance orbiter ( lro ) _ mini - rf ( mini radio - frequency technology demonstration - chin et al . 2007 )at 4 and 13 cm might easily make valuable observations of this kind .both modes scan in a swath km wide , which would make comprehensive mapping difficult , but would mesh well with the event resolution from a ground - based optical monitor . a `` before '' and `` after '' radar sequence meshed with an optical monitoring program would likely be instructive as tohow outgassing and optical transients actually interact with the regolith. several upcoming missions will carry high - resolution optical imagers , each of which will be capable of mapping nearly the entire lunar surface e.g. , _change-1 _ ccd imager ( yue et al . 2007 ) , _selene _ spectrometer / multiband imager ( lism / mi ) ( ohtake et al . 
2007 ) , _ lro _ camera ( lroc ) ( robinson et al . 2005 ) , and _ chandrayaan-1 _ moon mineralogy mapper ( mmm ) ( pieters et al . 2006 ) , typically at tens to hundreds of meters resolution . in particular the mi / sp will usefully observe at 20 m resolution the pyroxene near - ir band that can indicate the exposure of fresh surface , as can the mmm ( albeit at 280 m resolution ) . all of these are sensitive at blue wavelengths which can also indicate surface age . the lroc and mmm will repeatedly map each point on the moon , not in any way sufficient to be considered realtime monitoring of transients , but sufficient to allow frequent sampling on timescales of a lunation . this allows an interesting synergy with ground - based monitors since they can highlight sites of activity for special analysis . furthermore , lroc has a high resolution pointed mode which might provide sub - meter information in areas where tlps have been recently detected , hence excellent sampling on the scales that we suspect will be permanently affected , perhaps in a `` before '' and `` after '' sequence . at any given time , these four spacecraft have a roughly 10% chance of at least one of them being in view of a particular site above its horizon ; it would be fascinating ( but perhaps too logistically difficult ) if a program could be implemented wherein spacecraft could be alerted to image a tlp site at high resolution in real time during an event . in order to study outgassing directly , we need instruments at or near the lunar surface . in the case of , the thermal velocity is typically m s , so typical ballistic free flight occurs over km . over its half - life of 3.8 d , a atom travels typically 50000 km in a random walk that wanders from the source only a few hundred km before decaying ( or sticking to a cold surface ) . thus the alpha particles must be detected in much less than a day after outgassing , or the signal disperses by an amount that makes it superfluous to place the detector less than a few hundred km above the lunar surface , except for sensitivity considerations . three alpha - particle spectrometers have observed the surface of the moon , but for relatively brief periods of time . the latitude coverage was severely limited on _ apollo 15 _ ( for 145 hours ) and _ apollo 16 _ ( , 128 h ) . _ lunar prospector s _ alpha particle spectrometer covered the entire moon , over 229 days spanning 16 months , but was partially damaged ( one of five detectors ) upon launch and suffered a sensitivity drop due to solar activity ( binder 1998 ) . _ apollo 15 _ observed two outgassing events ( from aristarchus and grimaldi ) , _ apollo 16 _ none , and _ lunar prospector _ two sources ( aristarchus and kepler ) , although the signals from these last sources were integrated over the mission duration . in addition , apollo and _ lunar prospector _ instruments detected an enhancement at mare / highlands boundaries from daughter product , indicating leakage over approximately the previous century . the expected detection rate for a single alpha - particle spectrometer in a polar orbit and without instantaneous sensitivity problems might be grossly estimated from these data . the _ apollo 16 _ instrument covered a sufficiently small fraction ( % ) of the lunar surface so that we will not consider it , whereas _ apollo 15 _ covered about 37% . these missions were in orbit d apiece , and considering the lifetime thereby were sensitive to events ( at % full sensitivity ) for d.
_ lunar prospector _ covered the entire lunar surface every 14 d , hence caught events typically at 28% instantaneous full strength ( minimum 8% ) , however , by averaging over the mission diluted this by an factor - 30 .these data are consistent with a picture in which aristarchus produces an outgassing event 1 - 2 times per month at the level detectable by _ apollo 15 _ , and by _ lunar prospector _ when integrated over the mission .apparently other sites such as grimaldi and kepler collectively are about equally active as aristarchus , together all sites might produce 2 - 4 events per month at the sensitivity level of _ apollo 15_. this level of activity is consistent with the statistics of tlps constrained in paper i. a new orbiting alpha - particle spectrometer with a lifetime of a year or more and an instantaneous sensitivity equal to that of _ apollo 15 _ s detector would likely produce a relatively detailed map of where outgassing occurs on the lunar surface , separate from any optical manifestation .this is likely an important test for many of the procedures mentioned above , which are critically dependent on the outgassing / optical correlation .this must be examined in further detail , because there are many ways in which one might imagine that gas issues from the interior , thereby producing radon , without a visible manifestation , either due on one extreme to such rapid outgassing that previous events have cleared the area of regolith that might interact with gas on its way to the vacuum , or due to seepage sufficiently slow to trap water ( and perhaps other gasses by reaction ) in the regolith , and too slow to perturb dust at the surface .radon , an inert gas that will not freeze or react on its way to the surface , is more likely to escape the regolith to be detected , regardless . the alpha - ray detector ( ard ) onboard _ selene _ ( nishimura et al .2006 ) promises to be 25 times more sensitive than the apollo alpha particle spectrometers , with a mission lifetime of one year or more , in a polar orbit .this , in conjunction with an aggressive optical monitoring program ( as in section 2.1 ) , holds the prospect of extending the tlp/ - outgassing correlation test from paper i to a dataset of order 10 times larger .this would likely serve as a significant advance in understanding their connection , but it is probably best to consider what a following generation alpha - particle spectrometer study might entail . to insure better sensitivity coverage two such detectors in complementary orbits would cover the lunar surface every 1.8 half - lives of .this may nearly double the detected sample . unless the alpha - particle detectors are constructed with a veto for solar wind particles , it is best to avoid active solar intervals .we will exit the solar minimum probably by year 2008 , with the next starting by about 2016 . on the other hand , some of the lack of sensitivity to lunar alpha particles and elevated solar particle background count on _lunar prospector _ was due in part to it being spin - stabilized .if detectors on a future mission were kept oriented towards the lunar surface and shielded from solar wind to the extent possible , the apollo results indicate that prompt outburst detection at good sensitivity is possible . 
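returning to the transport picture invoked above for the radon released by an outgassing event — ballistic hops at roughly the local thermal speed, repeated for about one half - life before decay — the order - of - magnitude numbers quoted earlier (a total path of tens of thousands of km but a net wander of only a few hundred km) can be checked in a few lines. the surface temperature, lunar gravity and 45 - degree hop geometry below are rough assumptions of ours.

```python
import numpy as np

K_B, AMU = 1.380649e-23, 1.660539e-27
G_MOON = 1.62                 # m s^-2
T_SURFACE = 300.0             # K, a rough representative value
M_RN222 = 222.0 * AMU         # kg
HALF_LIFE = 3.8 * 86400.0     # s

v = np.sqrt(2.0 * K_B * T_SURFACE / M_RN222)   # thermal speed, ~150 m/s
hop_range = v ** 2 / G_MOON                    # ballistic range for a 45-degree launch
hop_time = np.sqrt(2.0) * v / G_MOON           # time of flight for that hop
n_hops = HALF_LIFE / hop_time
total_path = n_hops * hop_range                # cumulative path length
net_wander = np.sqrt(n_hops) * hop_range       # random-walk displacement from the source
print(f"v ~ {v:.0f} m/s, hop ~ {hop_range / 1e3:.0f} km, {n_hops:.0f} hops, "
      f"path ~ {total_path / 1e3:.0f} km, wander ~ {net_wander / 1e3:.0f} km")
```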
beyond this , extending the mission(s ) , of course , will help , and the best approach might be to develop a small alpha - spectrometer package that might easily fly on any extended low - orbital mission .the radioactive decay delay in alpha - particle detection insures that a reasonable number of orbiting detectors can have near unit efficiency .this is not the case for prompt detection of outgassing e.g. , by mass spectrometers .an instantaneous outburst seen 100 km away will undergo a dispersion of only a few tens of seconds in arrival time .the detectors must either be very sensitive or densely spaced , and prepared to measure and analyze what they can in these short time intervals .this is a problem for apollo - era instruments e.g. , the _ apollo 15 _ orbital mass spectrometer experiment ( omse - hoffman & hodges 1972 ) required 62s to scan through a factor of 2.3 in mass ( 12 to 28 , or 28 to 66 amu ) .total amount of outgassing is in the range of many tons per year , and with perhaps tens of outbursts per year , the mass fluence of particles from a single outburst seen at a distance of 1000 km is approaching amu . while a burst on the opposite side of the moon will not be detected and/or properly interpreted , one that can be seen by a few detectors would be very well constrained .the specific operational strategies of these detectors is paramount . for example consider an event at 1000 km distance , which will spread over in event duration. a simple gas pressure gauge will not be overwhelmingly sensitive , in that even with an ambient atmosphere that is not unusual e.g. , number density ( varying day / night e.g. , hodges , hoffman & johnson 2000 ) , the background rate of collisions over 500 s amounts to an order of magnitude or more than the particle fluence than for a typical outgassing outburst , assuming amu particles in the outburst . since interplanetary solar proton densities can change by amount of order unity in an hour or less ( e.g. , mcguire 2006 ) , pressure alone is not likely to be a useful event tracer .a true mass spectrometer is useful in part by subdividing the incoming flux , in mass , obviously , but also in direction , thus decreasing the effective background rate .the disadvantage of this approach in the past has been that it can not cover the entire parameter range of this subdivision at once , so must scan in atomic mass or direction , or must always accept a significantly limited range . for a short burst , this means that mass components may not be examined during the event , or that events might be missed due to detectors pointing in the wrong direction . for apollo - era detectors ,these problems , particularly the former , were significant .we would prefer to operate a mass spectrometer operating continuously over a significant mass range , with ballistic trajectory reconstruction over a large incoming acceptance solid angle .we will return to this concept below .first , let us discuss low - orbit platforms .we will not propose special purpose probes of the atmosphere alone , but there are other reasons for dense constellations of lunar satellites , most prominently a lunar global positioning system ( gps ) .terrestrial systems in operation ( gps ) and planned ( galileo , beidou and glonass : global navigation satellite system ) are typically 25 - 30 satellites at orbital radii km . 
around the moonthis could be much lower , km , and with fewer satellites , , which would put satellites within km of a surface outburst .this is compared to km for apollo . scaling the sensitivity of the _ apollo 15 _ omse ( hodges et al .1973 ) , a detector on a gps would be sensitive ( at the 5 level ) to an instantaneous outburst of about 50000 kg ( and more depending on the details of non- propagation effects ) .this is insufficient sensitivity to detect outgassing events .one needs a lower orbit ( or much more sensitive detectors , by three orders of magnitude ) .it is unclear if a lower - orbit gps system , while more favorable for an add - on mass spectrometer array , would serve its navagational purpose .a gps / mass spectrometer constellation only 1000 km above the lunar surface could likely be made sufficiently sensitive for gas outburst monitoring , nearly continuously .such a low orbit makes gps more difficult , require several more satellites , and increasing the effects of mascons on their orbit .this requires further modelling .nonetheless , we should consider other science instrumentation on a lunar gps .high - resolution imaging from km radius could be 10 finer ( m ) than platforms at l1 or near l2 . coveringthe moon at this resolution would require pixels , which might allow mapping occasionally , but only crude monitoring temporally .still , if one - third of lunar gps platforms were equipped with a prompt , high - resolution imager , any portion of the lunar surface could be imaged during the course of a surface event .if an event is observed from the ground or from l1/l2 , it could be detailed at 10 m or even higher resolution .this imager network should establish an atlas of global maps ( at various illumination conditions ) to serve as a `` before '' image in this comparison ( as well as allowing a wealth of other studies ) . by allowing transient events to be studied at m resolution, this sets the stage for activity to be isolated at a sufficiently fine scale for in - situ investigations that would thereby be targetted and efficient in localization .returning to mass spectrometry , it is clear that there are two separate modes for gas propagation above the lunar surface , neutral and ionized , and that a significant amounts are seen in both ( vondrak , freeman & lindeman 1974 , hodges et al . 1972 ) , at a rate of one to hundreds of tonne y for each process .there is some possibility that a large portion of the ionized fraction might be molecular in nature ( vondrak et al . 1974 ) . for neutral atoms more massive than h or he ,their thermal escape lifetime is sufficiently long that they have ample time to migrate across the lunar surface until they stick in a shadowed cold - trap .furthermore , the ionized component will predominently follow the electric field embedded in the solar wind , which tends to be oriented perpendicular to the sun - moon vector and hence frequently pointing from the sunrise terminator into space .for these two reasons the best location to monitor outgassing is a point above the sunrise terminator , presumably on a low - orbit platform .note that there is some degeneracy between the timing information recorded by a particle detector on such a satellite between the episodic behavior of particle outgassing versus the motion of the spacecraft at km s .the ideal situation would be to triangulate such signals with more than one platform .such an experiment is not trivial , but there are alternatives , explored below . 
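the numbers behind the pressure - gauge argument above can be made explicit with a small parameterized estimate . in the sketch below the outburst is assumed to expand thermally , so the arrival - time spread at a given distance is of order the travel time times the fractional width of the speed distribution ; the species masses and the gas temperature are assumed values , not numbers taken from this paper .

    import math

    k_b = 1.380649e-23     # boltzmann constant, j k^-1
    amu = 1.660539e-27     # atomic mass unit, kg

    def arrival_spread(d_km, mass_amu, temp_k):
        """mean speed, travel time and a rough arrival-time spread
        (taken here as ~half the travel time) for a thermal burst."""
        v_mean = math.sqrt(8.0 * k_b * temp_k / (math.pi * mass_amu * amu))
        t_mean = d_km * 1e3 / v_mean
        return v_mean, t_mean, 0.5 * t_mean

    for d in (100.0, 1000.0):
        for m in (4.0, 20.0, 40.0):                  # assumed species masses, amu
            v, t, dt = arrival_spread(d, m, 400.0)   # assumed 400 k gas
            print("d = %5.0f km  m = %4.0f amu  v = %5.0f m/s  spread ~ %4.0f s" % (d, m, v, dt))

for light , warm gas the spread at 100 km indeed comes out at a few tens of seconds , and at 1000 km at a few hundred seconds , consistent with the figures used above ; heavier species arrive later and more smeared in time , which further favours instruments that subdivide the incoming flux in mass and direction .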
for a low lunar orbit to be `` low maintenance ''i.e. , require few corrections due to mascon perturbations , it should be at one of several special `` frozen orbit '' inclination angles , , or ( e.g. , ramanan & adimurthy 2005 ) .however , we want to maintain a position over the terminator , using a sun - synchronous orbit , which requires a precession rate rad s .natural precession due to lunar oblateness is determined by the gravitational coefficient ( konopliv et al .1998 ) according to , where is the lunar radius , the orbital angular speed , the lunar mass and the orbital radius . ( the precession caused by earth is 1000 times smaller , and 60000 times smaller for the sun . ) one can not effectively institute both conditions , however , since the maximum inclination orbit with s occurs at ( or else the orbit is below the surface ) . while an orbit at is stable ( at km , 138 km above the surface ) andhas the correct precession rate , it spends most of its time away from the terminator .in contrast , at , s , and the spacecraft needs to accelerate continuously only mm s to place it into sun - synchronous precession .this is nearly the same as the thrust provided by the hall - effect ion engine on _( and corresponds to an area per mass of 330 cm g under the influence of solar radiation pressure . ) while it is not apparent that an ion engine would be the best choice for a platform with mass and ion spectrometers , this illustrates the small amount of impulse need to maintain this favorable orbit , comparable to station - keeping in many non - frozen orbits . in truth , the most efficient location to apply this acceleration is only near the poles , so a slightly more powerful thruster might be needed . since , time - averaged , this perturbed orbit still lands in a frozen - orbit zone , it should still be relatively stable in terms of radius .we would propose that a instrumented platform in this driven , sun - synchronous polar orbit would be ideal for studying outgassing signals near the terminators .there is an interesting synergy between this outgassing monitor platform and another useful investigation from a similar satellite(s ) , although not necessarily simultaneously .an outstanding problem is gravitational potential structure of the moon , particularly the far side ( where satellite orbits can not be monitored from earth ) . with the inclusion of the the 562-day _ lunar prospector _data set ( konopliv et al .2001 ) the error is typically 80 milligals on the far side ( corresponding to surface height errors of about 25 m ) versus 10 milligals in the near - side potential . 
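before continuing with the gravity - field discussion , it may help to collect the orbit - maintenance relations used above in one place . the sketch below applies the usual j2 nodal - precession rate , (3/2) j2 n (r/a)^2 cos i , with commonly quoted lunar constants ( gm , radius , j2 ) ; the altitude , spacecraft mass , exhaust velocities and radiation - pressure value are likewise our own assumed inputs , so the printed numbers are order - of - magnitude checks rather than this paper 's figures . the transverse acceleration needed to drive a strictly polar orbit into sun - synchronous precession is approximated by the crude scaling f ~ v_orb times the required precession rate .

    import math

    gm = 4.9028e12      # lunar gm, m^3 s^-2 (assumed standard value)
    r_moon = 1.7374e6   # lunar radius, m (assumed standard value)
    j2 = 2.03e-4        # lunar j2 (assumed standard value)

    alt = 100e3                     # assumed orbital altitude, m
    a = r_moon + alt
    n = math.sqrt(gm / a**3)        # mean motion, rad s^-1
    v_orb = math.sqrt(gm / a)       # circular orbital speed, m s^-1

    omega_req = 2.0 * math.pi / (365.25 * 86400.0)   # sun-synchronous precession rate, rad s^-1
    omega_j2_max = 1.5 * j2 * n * (r_moon / a)**2    # |j2 nodal precession| at cos(i) = 1

    # for a strictly polar orbit the j2 precession vanishes, so the node must be
    # driven; crude scaling for the required transverse acceleration:
    f_req = v_orb * omega_req

    m_sc = 100.0                    # assumed spacecraft mass, kg
    month = 30.0 * 86400.0
    for name, v_ex in (("chemical", 4000.0), ("ion", 30000.0)):
        mdot = m_sc * f_req / v_ex  # propellant flow needed to sustain thrust m_sc * f_req
        print("%-8s propellant ~ %4.1f kg per month" % (name, mdot * month))

    p_srp = 9.1e-6                  # radiation pressure on a reflector at 1 au, n m^-2
    area = m_sc * f_req / p_srp
    print("required precession rate  %.2e rad/s" % omega_req)
    print("max j2 precession rate    %.2e rad/s" % omega_j2_max)
    print("drive acceleration        %.2e m/s^2" % f_req)
    print("equivalent solar sail     %.0f m^2 (radius ~ %.0f m)" % (area, math.sqrt(area / math.pi)))

with these assumptions the maximum j2 - driven rate only slightly exceeds the sun - synchronous requirement , the drive acceleration for a polar orbit comes out at a few times 10^-4 m s^-2 , and the propellant and sail figures printed above are of the same order as those quoted below .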
alsothe limiting harmonic is of order 110 approximately on the near side , and only order 60 on the far side ( 200 km resolution ) .in contrast , the grace ( gravity recovery and climate experiment ) can define the geodesy of earth at much better field and spatial resolution , a few milligals at about order 200 ( tapley et al .2005 - one year of data ) , using a double satellite at km above the earth in polar orbit , with the separation ( km ) between the two components carefully monitored ( by laser interferometer for the proposed grace follow - on mission - watkins et al .2006 , or in the microwave k - band for grace itself ) .such a satellite pair in lunar orbit would improve our knowledge of the farside field by orders of magnitude , determined independent of earth - based tracking measurements , and in general make the accuracy and detail of lunar potential mapping much closer in quality to mineralogical mapping already in hand .one interesting question this might address is whether mascons extend to much smaller scales than currently known .while this mapping is underway , one could use outgassing monitors on board to look for outbursts , and when the geodetic mission is complete , drive the satellites into a polar , sun - synchronous orbit above the terminator . depending on the type of monitors imployed , forcing sun - synchronous precession by chemical , ion or even solar - sail propulsion may or may not interfere ; neutral - gas spectrometers may be compatible with ion drives while charged species trajectories might be perturbed , for instance . maintaining s for a 100 kg spacecraft requires 20 kg month of chemical propellant ( exhaust velocity of 4000 m s ) versus 2.5 kg month of ion propellant ( 30000 m s ) . for a 100 kg spacecrafta solar sail about 30 m in radius would be required .none of these solutions are so easy that they do not inspire a search for alternatives , and their non - gravitational acceleration would mean that they could take place only after ( or before ) any geodesic mission phase .furthermore , ion propulsion and probably chemical propulsion would tend to interfere with mass spectrometry .these should be traded against other possibilites e.g. 
, several small probes on various orbital planes at , rather than one or two sun - synchronous platforms .the fact that there would be an outgassing detectors on each platform would make temporal / spatial location of specific outbursts more unambiguous , aided by differences in timing and signal strength at the two moving platforms , at least for neutral species .the timing difference will give an indication of the distance difference to the sources , with the source confined to the hyperboloid where is the distance along the line connecting the two satellites , with the origin at the half - way point between them , and is the distance perpendicular to this line .the distance between the two satellites is given by and the difference in distance between the source and the first satellite versus the source and the second is .there is still a left / right ambiguity in event location to be resolved by detector directionality , and better directional sensitivity would add a helpful overconstraint on the measurement .our research group is developing ways to efficiently transfer the insight gained from a program of remote sensing to a program of in - situ research involving the lunar surface .i would like to emphasize a few key points already becoming apparent .the neutral fraction from lunar outgassing need not respect the correlation with lunar sunrise ; a detector giving enough prompt information about outgassing might be invaluable .neutral gas emitted on the day side is free to bounce ballistically until either sticking to a cold surface or escaping ( either due to ionization or by reaching the high - velocity maxwellian tail ) .a highly desirable monitor of this activity would be a mass spectrometer capable of simultaneously accepting particles in a wide range of masses e.g. , a.m.u ., and reconstructing incoming particle trajectories and velocities to allow the locus of outgassing to be reconstructed ( at least within hundreds or thousands of km ) .in addition to tracking the sunrise terminator outgassing signal , such a mass spectrometer would be able to monitor wide areas of the moon for prompt neutral outburst signals from point sources , and therefore the instrument should be placed in the vicinity of known outgassing sites to establish which species succeed in propagating to the regolith surface .the suggested ground - based approaches provides this rough localization , buttressed by the low - orbital outgassing detectors . at some point the identification of a good tracer gas to act as a proxy for endogenous emission would be highly valuable in simplification of outgassing alert monitors not required to scan entire mass ranges .now it is unclear what that gas should be .it is true that seems to be highly correlated with optical transients , but the relationship between radiogenic gas emission and that of volcanic emission is uncertain . besides , while usefully radioactive , radon is a very minor constituent .radiogenic is more abundant , and episodic , but its relation to volcanic gas is uncertain ( as is its correlation to optical transients ) .the most reliable observed molecular atmospheric component is ch , but it is likely to derive in large part from cometary / meteoritic impacts and is somewhat unnatural to expect from the oxygen - rich interior .water suffers from the situation described in paper ii in which a large fraction of any large , endogenous source might never propagate gas to the surface , making it an unreliable tracer . 
even while endogenous water of nearly certain volcanic origin has been found in glasses likely derived for the deep interior ( saal et al .2007 ) , co is absent .the limits on co are more unclear , as are those for oxides of nitrogen .the first mass spectrometer probes should be designed to clarify this situation . to place these monitors on the surface ,one may exploit human exploration sorties , which will be relatively infrequent and potentially concentrated in sites of just a very few bases .i reiterate that another concern is the contamination that each of the missions will produce , concentrated primarily near the landing site itself .it is evident that by the deployment of lace on the final apollo landing that the outgassing environment was contaminated by a large contribution of anthropogenic gas , and that these vehicles in a new epoch of human exploration will deliver many tens of tons per mission of gases to the lunar surface of composition relevent to species suspected from a potential endogenous volcanic component , a level of contamination comparable to the potential annual output of such gases from endogenous sources .the constellation spacecraft consist of orion , carrying about 10 tonne of n ( nitrogen teroxide ) and ch ( monomethyl hydrazine ) propellant , and lsam , propelled by liquid oxygen and nitrogen .the orion fuel mix produces n , co and h and the lsam exhausts water . depending on the orientations and trajectories of the spacecraft when thrusting they will deposit about 20 tonnes of mostly water to the surface , where most will remain for days ( up to about one lunation ) .during the course of the return to the moon , measurements of at least these three product molecules will be suspect , since in fact their signal will disappear completely over successive lunations . in many respects the surface layer of regolith should be considered as a planet - sized sorption pump coupling the atmosphere , across which gases are free to propagate ( and exit the system if they are ionized or low - mass ) , and the lower regolith , which is cold ( ) and relatively impermeable .gas in the atmosphere can be delivered to the surface where , if it penetrates a few cm , enters a region in which particle mobility slows considerably and where it essentially becomes entrained in the time - averaged signal of endogenous gas ( radiogenic or volcanic ) that is leaking from greater depth .( indeed , since the temperature increases inwards , gas reaching this colder zone preferential migrates to greater depths . 
)furthermore , once gas from the interior reaches the outer few cm of regolith subject to large temperature swings , it is likely to escape into the vacuum .there is a scientific premium , therefore , to delivering surface monitors to their site without delivery of many tons of anthropogenic gas , annd for this purpose one might consider small , parasitic landing rockets that deliver an experiment package from the orion or lsam human exploration vehicles to the vicinity of the surface , but transition to a low - contamination soft lander system such as an airbag .this is an established , low - cost technology with extensive heritage ( from the ranger block 2 lunar probes to the highly successful mars exploration rovers ) and might easily be the landing technique of choice for small lunar surface packages .on small ( tens of km ) scales , robotic rovers are less prone to sowing contamination when delivering detector packages across the surface .when human exploration turns towards study of lunar outgassing sites the primary challenge may be converting the lower spatial resolution information obtained at earth or lunar orbit into meter or 10-meter scale intelligence regarding where to initiate in - situ exploration .the transitional technologies to bridge this gap consist of local networks of sensors that map area on the scale of a 1 km or 100 m to resolutions of 1 - 10 m using various techniques : local ground - penetrating radar , local seismic arrays , directional and ground - sniffing mass - spectrometers that work to localize , and another technique we propose to investigate : intensive laser grids that densely populate the space above the patch of surface in question with lines of sight sampling strong transitions of some predominant species e.g. 
, an infrared vibrational / rotational transition if molecules are discovered in quantity .the details of the ideas promulgated in this section are beyond the scope of the current paper and will be presented in a larger document currently in preparation .if the reader will allow a personal statement , i am not easily swayed into writing research papers based on data of the uncertain quality of those seen in papers i and ii , but this is the nature of the field .it has been the purpose of this investigation not only to clarify the implications of existing data , which i think it has done , but also to understand the range of interesting possibilities of phenomena consistent with these data and ask how we should proceed to investigate them , cognizant that many of our actions have implications in terms of disturbing the environment that we care to assay .we need to access which interesting questions need to be addressed , given the state of our ignorance , and consider how to proceed .i hope and intend that these works have advanced the discussion significantly .the phenomena that we have been studying are subtle , and many important aspects may be highly covert .the above - surface signals of outgassing of radiogenic endogenous sources is fairly clear , but gas of more magmatic origin , while possibly present , needs further study to be absolutely confirmed .activity associated with apollo landings easily dominated with anthropogenic gas production the activity in molecular species that might trace residual lunar magnatism .apollo - era and later data were insufficiently sensitive to establish the level of outgassing beyond , and isotopes of ar , plus he , presumably , but did detect molecular gas , particularly ch , but of uncertain origin .it is important to assess how we can advance the apollo - era understanding .consistent with these molecular gas outflows , and perhaps traced by optical transients , there is a range of possible phenomena that have interesting possible scientific consequences and might easily be useful in terms of resource exploitation for human exploration . while this amount of volatile production is inconsequential on the scale of the geology of the moon as a whole , and is poorly constrained by any measurement of current or previous volatiles , even in returned surface samples , it is still capable of massively altering the environment locally in ways which should be investigated in a timely way .we could learn a great deal from the current production of volatiles and their accumulation over geologic timescales in an extraterrestrial environment so easily explored .the salient facts from the above treatment is that for many years yet monitoring for optical transients will still be best done from the earth s surface , even considering the important contributions that will be made by lunar spacecraft probes in the next several years. these spacecraft will be very useful in evaluating the nature of transient events in synergy with ground - based monitoring , however . 
given the likely behavior of outgassing events , it is unclear that in - situ efforts alone will necessarily isolate their sources within significant winnowing of the field by remote sensing .early placement of capable mass spactrometers of the lunar surface , however , might prove very useful in refining our knowledge of outgassing composition , in particular a dominant component that could be used as a tracer to monitor outgassing activity with more simple detectors .this must take place before significant pollution by large spacecraft , which will produce many candidate tracer gasses in their exhaust .we do not know enough now to discuss the potential implications of this line of research in terms of resources for human exploration , or even in terms of prebiologic chemistry on the moon and for tenuous endogenous outgassing and atmospheric interactions with the regolith on other bodies , but all of these are interesting , new avenues of such research .it is crucial that exploration of these issues progress while we have a pristine lunar surface as our laboratory .i would much like to thank alan binder and james applegate , as well as daniel savin , daniel austin and the other members of aeolus ( `` atmosphere as seen from earth , orbit and lunar orbit '' ) for helpful discussion .* references : * adams , j.b . 1974 ,jgr , 79 , 4829 .akhmanova , m.v . ,dementev , b.v . ,markov , m.n . & sushchinskii , m.m . 1972 , cosmic research , 10 , 381 binder , a.b. 1998 , science , 281 , 1475 ; also see video interview , http://lunar.arc.nasa.gov/results/alres.htm brown , w.e ., jr . 1972 ,earth moon plan . , 4 , 133 .buratti , b.j . ,mcconnochie , t.h . ,calkins , s.b . , hillier , j.k .& herkenhoff , k.e .2000 , icarus , 146 , 98 .campbell , b.a . ,campbell , d.b . ,margot , j .-, ghent , r.r . , nolan , m. , carter , l.m . ,stacy , n.j.s .2007 , eos , 88 , 13 .campbell , d.b . ,campbell , b.a . ,carter , l.m . ,margot , j .-l . & stacy , n.j.s .2006 , nature , 443 , 835 .campbell , b.a . ,carter , l.m . ,campbell , d.b . ,hawke , b.r ., ghent , r.r . & margot , j .-2006 , lun . plan .conf . , 37 , 1717 .charette , m.p . ,adams , j.b . , soderblom , l.a . ,gaffey , m.j .& mccord , t.b .1976 , lun .7 , 2579 .chin , g. , et al. 2007 , lun . plan .38 , 1764 .chung , d.h .1972 , earth moon & plan . , 4 , 356 .crotts , a.p.s .2007 , icarus , submitted ( paper i ) .crotts , a.p.s . & hummels , c. 2007 , apj , submitted ( paper ii ) .crotts , a.p.s .2007 , et al .2007 , lun . plan .conf . , 28 , 2294 . dollfus , a. 2000 , icarus , 146 , 430 . dzhapiashvili , v.p . & ksanfomaliti , l.v . 1962 , the moon , iau symp . 14 , ( academic press : london ) eliason , e.m . , et al .1999 , lun . plan ., 30 , 1933 farr , t.g . ,bates , b. , ralph , r.l . & adams , j.b .1980 , lun . plan ., 11 , 276 . fried , d.l .1978 , opt .j. , 68 , 1651 .garvin , j. , robinson , m. , skillman , d. , pieters , c. , hapke , b. & ulmer , m. 2005 , proposal go 10719 .ghent , r.r . ,leverington , d.k . ,campbell , b.a . ,hawke , b.r .& campbell , d.b .2004 , lun . plan .conf . , 35 , 1679 .ghent , r.r . ,leverington , d.k . ,campbell , b.a . ,hawke , b.r .& campbell , d.b .2005 , jgr .110 , doi : 10.1029/2004je002366 .hazen , r.m . ,bell , p.m. & mao , h.k .1978 , lun . plan ., 9 , 483 .hodges , r.r . , jr . , hoffman , j.h . & johnson , f.s .1973 , lun .conf . , 4 , 2855 . hodges , r.r . , jr . , hoffman , j.h . & johnson , f.s .1974 , icarus , 21 , 415 .hodges , r.r . , jr . 
, hoffman , j.h ., yeh , t.t.j .& chang , g.k .1972 , jgr , 77 , 4079 .hoffman , j.h .& hodges , r.r . , jr .1972 , lun .conf . , 3 , 2205 .konopliv , a.s . ,asmar , s.w . ,carranza , e. , sjogren , w.l .& yuan , d.n .2001 , icarus , 150 , 1 konopliv , a.s . , binder , a.b . , hood , l.l . ,kucinskas , a.b . ,sjogren , w.l . & williams , j.g .1998 , science , 281 , 1476 law , n.m . ,mackay , c.d .& baldwin , j.e .2006 , a&a , 446 , 739 .lebofsky , l.a . ,feierberg , m.a . ,tokunaga , a.t ., larson , h.p . & johnson , j.r .1981 , icarus , 48 , 453 lipsky , yu.n .& pospergelis , m.m .1966 , astronomicheskii tsirkular , 389 , 1 .lo , m.w .2004 , in `` proc .internatl lunar conf .2003 , ilewg 5 '' ( adv . in astronaut ., sci . & tech .108 ) , eds .durst et al .( univelt : sandiego ) , p. 214 .markov , m.n . ,petrov , v.s . , akhmanova , m.v .& dementev , b.v .1979 , in _ space research , proc .open mtgs .working groups _ ( pergamon : oxford ) , p. 189 .mcguire , r.e .2006 `` space physics data facility '' - nasa goddard space flight center : http://lewes.gsfc.nasa.gov/cgi-bin/cohoweb/selector1.pl?spacecraft=omni moorman , b.j . ,robinson , s.d . & burgess , m.m .2003 , permafrost & periglac ., 14 , 319 .nishimura , j. , et al .2006 , adv .space res . , 37 , 34 .nozette , s. et al .1996 , science , 274 , 1495 .nozette , s. et al .2001 , jgr , 106 , 23253 .ono , t. & oya , h. 2000 , earth plan .space , 52 , 629 .picardi , g. , et al .2005 , science , 310 , 1925 .pieters , c.m . , et al .2005 , in _ space resources roundtable viii _ , lun . plan .contrib . , 1287 , 73 .pieters , c.m . , et al .2005 , http://moonmineralogymapper.jpl.nasa.gov/science/volatiles/ porcello , l.j , et al .1974 , proc .ieee , 62 , 769 .ramanan , r.v . & adimurthy , v. 2005 , j. earth syst .rivkin , a.s . ,howell , e.s . ,britt , d.t . ,lebofsky , l.a ., nolan , m.c .branston , d.d .1995 , icarus , 117 , 90 rivkin et al .2002 , asteroids iii , 237 ross , s.d .2006 , am ., 94 , 230 .saal , a.e ., hauri , e.h . , rutherford , m.j . &cooper , r.f .2007 , lun . plan ., 38 , 2148 .simpson , r.a .1998 , in `` workshop on new views of the moon , '' eds . b.l .jolliff & g. ryder ( lpi : houston ) , p. 61 .stacy , n.j.s .1993 , ph.d .thesis ( cornell u. ) .tapley , b.j . , et al .2005 , j. geodesy , 79 , 467 .thompson , t.w .& campbell , b.a .2005 , lun . plan .conf . , 36 , 1535 .tomanry , a.b . andcrotts , a.p.s .1996 , aj , 112 , 2872 .tubbs , r.n .2003 , ph.d .thesis ( university of cambridge ) .vondrak , r.r . ,freeman , j.w . & lindeman , r.a .1974 , lun . plan .conf . , 5 , 2945 .watkins , m. , folkner , w.m . ,nerem , r.s . & tapley , b.d .2006 , in _ proc .grace science meeting , 2006 dec .8 - 9 _ , in press ( http://www.csr.utexas.edu/grace/gstm/2006/a1.html ) .table 1 : summary of basic experimental / observational techniques detailed here ....all methods are earth - based remote sensing unless specified otherwise . 
-------------------------------------------------------------------------------- goal detection method channel advantages difficulties ------------- ----------------------- ------- -------------- --------------- map of tlp imaging monitor , entire optical schedulability nearside only , activity nearside , ~2 km resol .comprehensive ; limited resol .more sensitive than human eye polarimetric compare reflectivity in optical easy to requires use study of dust two monitors with schedule ; two monitors perpendicular polarizers further limits dust behavior changes in adaptive optic imaging , 0.95 " on demand " undemonstrated , small , active ~100 m resolution micron , given good depends on areas etc .conditions seeing ; covers ~50 km dia .max " lucky imaging , " 0.95 on demand low duty cycle , ~200 m resolution micron , given good depends on etc .conditions seeing hubble space telescope , 0.95 on demand currently ~100 m resolution micron , given advance unavailable ; etc .notice low efficiency clementine / lro/ 0.95 existing or limited epochs ; chandrayaan-1 imaging , micron , planned survey low flexibility ~100 m resolution etc . selene / chang'e-1 0.95 existing or limited epochs ; imaging , higher resol .micron , planned survey low flexibility etc .tlp spectrum scanning spectrometer nir , may be best requires alert map , then spectra taken optical method to find from tlp image during tlp event composition & monitor ; limit tlp mechanism to long events regolith nir hydration bands 2.9,3.4 directly probe requires alert hydration seen before / after tlp micron regolith / water from monitor , measurement in nir imaging chemistry ; flexible detect water scheduling scanning spectrometer 2.9,3.4 directly probe requires alert map , then spectra taken micron regolith / water from monitor , soon after tlp chemistry ; flexible detect water scheduling relationship simultaneous monitoring rn-222 refute / confirm optical monitor between tlps for optical tlps and by alpha & tlp / outgassing only covers & outgassing selene for rn-222 alpha optical correlation ; nearside ; more particles find gas loci monitors better subsurface penetrating radar ~430mhz directly find ice signal is water ice subsurface ice easily confused with existing with others technique penetrating radar from ~300mhz better resol . ;ice signal is lunar orbit can study easily confused sites of lower with others ; activity more expensive surface radar from > 1ghz better resol . ; redundant with lunar orbit study tlp site high resol .surface change imaging ?high resol .imagers at / near l1 , l2 optical map tlps with expensive , but tlp activity covering entire moon , greater resol . 
could piggyback map at 100 m resolution & sensitivity , communications entire moon network comprehensive two rn-222 alpha rn-222 map outgassing expensive ; even rn-222 alpha detectors in polar alpha events at full better response particle map orbits 90 degrees apart sensitivity w/ 4 detectors comprehensive two mass spectrometers ions & map outgassing expensive ; even map of outgas adjacent polar orbits neutral events & find better w/ more components composition spectrometers _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ in situ , surface experiments : we refer the reader to work in preparation by aeolus collaboration .abbreviations used : dia .= diameter , max = maximum , nir = near infrared , resol .= resolution _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ....0.00 in figure 5 - a ) left : spectrum of an 8-arcmin slit intersecting aristarchus ( bright streak just above center ) and extending over oceanus procellarum , and covering wavelengths 5500 - 10500 , taken by the mdm 2.4-meter telescope ; * b ) * right : the residual spectrum once a model consisting of the outer product the one - dimensional average spectrum from figure 3a times the one - dimensional albedo profile from figure 3a . the different spectral reflectance of material aroundaristarchus is apparent ( at a level of about 7% of the initial signal ) , with r.m.s .deviations of about 0.5% , dominated by interference fringing in the reddest portion , which can be reduced .figure 6 - a ) left : a b - band image of the region around aristarchus ; b ) right : an image of aristarchus in a 3 - wide centered near 6000 , constructed by taking a vertical slice through figure 3a and other exposures from the same sequence of spectra scanning the surface .any such band between 5500 and 10500 can be constructed in the same manner , with resolution of about 1 km and 3 .
in paper i of this series , we show that transient lunar phenomena ( tlps ) correlate with lunar outgassing , geographically , based on surface radon release episodes versus the visual record of telescopic observers ( the later prone to major systematic biases of unspecified nature , which we were able to constrain in paper i ) . in paper ii we calculate some of the basic predictions that this insight implies , in terms of outgassing / regolith interactions . in this paper we propose a path forward , in which current and forthcoming technology provide a more controlled and sensitive probe of lunar outgassing . many of these techniques are currently being realized for the first time . given the optical transient / outgassing connection , progress can be made by earth - based remote sensing , and we suggest several programs of imaging , spectroscopy and combinations thereof . however , as found in paper ii , many aspects of lunar outgassing seem likely to be covert in nature . tlps betray some outgassing , but not all outgassing produces tlps . some outgassing may never appear at the surface , but remain trapped in the regolith . as well as passive remote sensing , we also suggest more intrusive techniques , from radar mapping to in - situ probes . understanding these volatiles seems promising in terms of their exploitation as a resource for human presence on the moon and beyond , and offers an interesting scientific goal in its own right . this paper reads , therefore , as a series of proposed techniques , some in practice , some which might be soon , and some requiring significant future investment ( some of which may prove unwise pending results from predecessor investigations ) . these point towards enhancement of our knowledge of lunar outgassing , its relation to other lunar processes , and an increase in our understanding of how volatiles are involved in the evolution of the moon . we are compelled to emphasize certain ground - based observations in time for the flight of _ selene , lro _ and other robotic missions , and others before extensive human exploration . we discuss how study of the lunar atmosphere in its pristine state is pertinent to understanding the role of anthropogenic volatiles , at times a significant confusing signal . 6.5 in 8.5 in 0.0 in 0.0 in
the deviation of an active region ( ar ) magnetic field from its potential configuration can arise due to photospheric footpoint motions and/or during flux emergence .however , studies with the high - quality data from space and ground based observatories over the last decade suggest that the latter is more dominant mechanism .the deviation of magnetic field from potential configuration is called non - potentiality ( np ) of the field .the magnetic energy of an active region ( ar ) in excess of the potential magnetic energy is called free magnetic energy .the free magnetic energy of an ar , or a portion of it , is released when flare and/or coronal mass ejections ( cmes ) is triggered due to the instability or non - equilibrium .the energy released in the flare is then limited by the amount of free energy or magnetic np of the ar .therefore , it is important to characterize the np of the ar magnetic field in order to predict the intensity of the flare and/or cme . traditionally , the so - called magnetic shear i.e. , the angle between the observed ( ) and the potential ( ) field azimuths , is used to characterize the np of the active region magnetic field and is measured as where and are the observed and potential transverse field vectors . the term magnetic shear " , used here and in similar studies on solar - magnetism , really refers to observational shear " , which is not to be confused with the actual shear " a microscopic property in fluid dynamics . in what follows, we will use the term magnetic shear for traditional reasons and by this term we would always mean the observational shear .various forms of this parameter are used namely , mean shear , weighted mean shear , spatially averaged signed shear , most probable shear etc .it is well known that the polarity inversion line ( pil ) in active regions bearing highly sheared magnetic fields are the potential sites for flares .these pils are generally characterized by dark filaments in the images taken in chromospheric hydrogen alpha line .it may be noted that all of the shear parameters mentioned above measure the twist - shear component of the shear which is measured in the horizontal plane .however , one can also measure the shear of a magnetic field in the vertical plane .this type of shear can be called as the dip - shear " ( we choose the term dip " because it is synonymous to the dip angle measured in geomagnetism ) , and can be measured by taking the difference between the observed ( ) and the potential field inclination angle ( ) i.e. , . physically , the dip - shear can be understood in terms of azimuthal currents , in the same way as the twist - shear is understood in terms of axial currents . in our knowledge , the dip - shear has not been studied earlier in solar active regions .henceforth , we will call the parameter and and as twist - shear and dip - shear , respectively .the larger the value of these angles the larger will be the np of the ar .it may be noticed that , unlike twist - shear , the dip - shear is not affected by the 180 degree azimuth ambiguity , provided that the active region is observed close to the disk center . this is because the dip - shear depends upon the inclination angle of the magnetic field which can be measured without ambiguity . 
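since these two angles are used throughout what follows , it may be useful to state explicitly how they can be evaluated from a vector magnetogram . the python / numpy fragment below is a schematic illustration with our own variable names and toy test values ; the twist - shear is computed as the angle between the observed and potential transverse field vectors , and the dip - shear as the difference between the observed and potential inclinations measured from the local vertical .

    import numpy as np

    def twist_shear(bx_o, by_o, bx_p, by_p):
        """angle between observed and potential transverse field vectors (deg)."""
        dot = bx_o * bx_p + by_o * by_p
        mags = np.hypot(bx_o, by_o) * np.hypot(bx_p, by_p)
        return np.degrees(np.arccos(np.clip(dot / mags, -1.0, 1.0)))

    def dip_shear(bx_o, by_o, bz_o, bx_p, by_p, bz_p):
        """difference of observed and potential inclinations (deg);
        inclination is measured from the local vertical (0 deg = vertical)."""
        inc_o = np.degrees(np.arccos(bz_o / np.sqrt(bx_o**2 + by_o**2 + bz_o**2)))
        inc_p = np.degrees(np.arccos(bz_p / np.sqrt(bx_p**2 + by_p**2 + bz_p**2)))
        return inc_o - inc_p

    # tiny synthetic test values (gauss), chosen only for illustration
    print(twist_shear(100.0, 50.0, 120.0, 10.0))
    print(dip_shear(100.0, 50.0, -300.0, 120.0, 10.0, -300.0))

both functions operate equally well on full two - dimensional arrays , so the same fragment can produce shear maps over an entire magnetogram .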
in this _ letter _ , we study the evolution of twist - shear and dip - shear in a penumbral region located close to the flaring site in ar noaa 10930 .we use sequence of high - quality vector magnetograms observed by the _ hinode _ space mission .this active region was in a -sunspot configuration which led to a x3.4 flare and a large cme during 02:20 ut on 13 december 2006 .the flare was quite powerful and the white light flare ribbons along with impulsive lateral motion of the penumbral filaments were observed .evolution of the twist - shear and dip - shear together shows interesting patterns which can be distinguished in the pre - flare and post - flare stages .in general we find that ( a ) the regions with high twist - shear also exhibit high dip - shear , ( b ) the penumbral region close to the flare site shows high twist - shear and dip - shear , and ( c ) twist - shear and dip - shear studied together can be used to study flare related changes in the active regions .the paper is organized as follows .the observational data and the methods of analysis are described in section 2 .the results are presented in section 3 and the discussions and conclusions are made in section 4 .a sunspot with configuration was observed in ar noaa 10930 during 12 - 13 december 2006 by the spectro - polarimeter ( sp ) instrument with solar optical telescope ( sot ) onboard __ satellite .the sp obtains the stokes profiles , simultaneously in fe i 6301.5 and 6302.5 line pair .the spectro - polarimetric maps of the active region are made by scanning the slit across the field - of - view .this takes about an hour to complete one scan .we choose a sequence of six sp scans from 12 december 2006 03:50 ut to 13 december 2006 16:21 ut when the sunspot was located close to the disk center with heliocentric distance ( ) of 0.99 and 0.97 , respectively .the scans were taken in fast map " observing mode with following characteristics : ( i ) field - of - view ( fov ) 295 x 162 arc - sec , ( ii ) integration time of 1.8 seconds and ( iii ) pixel - width across and along the slit of 0.32 and 0.29 arc - sec , respectively .the stokes profiles were then fitted to an analytic solution of unno - rachkovsky equations under the assumptions of local thermodynamic equilibrium ( lte ) and milne - eddington model atmosphere with a non - linear least square fitting code called _ helix _ .the physical parameters of the model atmosphere retrieved after inversion are the magnetic field strength , its inclination and azimuth , the line - of - sight velocity , the doppler width , the damping constant , the ratio of the line center to the continuum opacity , the slope of the source function and the source function at = 0 .we fit a single component model atmosphere along with a stray light component .the inversion code _helix _ is based upon a reliable genetic algorithm .this algorithm , although slower , is more robust than the classical levenberg - marquardt algorithm in the sense that the global minimum of the merit function is reached with higher reliability .the vector magnetograms obtained after inversion , were first solved for 180 degree azimuth ambiguity by using the acute angle method and then transformed from the observed frame ( image plane ) to local solar frame ( heliographic plane ) using the procedure described in .the potential field was computed from the line - of - sight field component by using the fourier transform method .the idl routine used for potential field computation is ` fff.pro ` which is available in the 
nlfff ( non - linear force free field ) package of the solarsoft library .the continuum intensity images of the sunspot , corresponding to the sequence of scans used , are shown in figure 1 , with the transverse field vectors overlaid on it .the two magnetograms were aligned using the cross - correlation technique applied to the continuum image of the sunspot .a black rectangle is overlaid on these images to show the location of the region where we study the evolution of twist - shear and dip - shear .the location of this black rectangle is chosen with the help of a co - aligned g - band filtergram observed from hinode filtergraph ( fg ) instrument during flare .this g - band image is shown in the left panel of figure 2 .the flare ribbon is marked by ` + ' symbols and the black rectangle of figure 1 is shown here also .the flare ribbons sweep across the rectangular box during 02:20 to 02:26 ut .this indicates that the rectangle is chosen such that it samples the penumbra which is very close to the flaring site .the right panel of figure 2 shows the longitudinal magnetogram in order to indicate the location of rectangle ( shown here with white color ) with respect to the pil .the maps of dip - shear and twist - shear for the sequence of vector magnetograms are shown in figure 3 and 4 respectively . in figure 5we show the distribution of dip - shear and twist - shear inside the black rectangle and its evolution with time .the maps of dip - shear for the entire sequence of vector magnetograms covering the pre - flare ( panels ( a)(c ) ) and post - flare ( panels ( d)(f ) ) phases are shown in figure 3 .the value of field inclination is measured with respect to local solar vertical direction and ranges from 0 to 180 degrees . for purely vertical positive ( negative ) polarity fieldthe value of corresponds to 0 degrees ( 180 degrees ) .the black rectangle in figure 3 corresponds to negative polarity field .therefore , the positive value of dip - shear inside this rectangle means that the observed field is more vertical than potential field .the magnitude of dip - shear can be judged with the aid of colorbar at the bottom of figure 3 .it may be noticed that : + ( i ) the value of dip - shear is large inside the rectangle as compared to other penumbral locations . + ( ii ) in the pre - flare ( panels ( a)(c ) ) phase the dip - shear consistently has a large magnitude which decreases in the post - flare phase ( panels ( d)(f ) ) .the field azimuth has been solved for 180 degree azimuth ambiguity by using the acute angle method and the projection effects have been removed by the application of vector transformation .this azimuth ambiguity resolution method works well in the regions where the angle is less or greater than 90 . 
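the acute - angle rule just referred to amounts to a one - line test , sketched below in a generic form ( this is an illustration , not the actual code used in this analysis ) : each observed transverse vector is flipped by 180 degrees whenever it makes an obtuse angle with the corresponding potential - field transverse vector .

    import numpy as np

    def acute_angle_disambiguation(bx_o, by_o, bx_p, by_p):
        """resolve the 180-degree azimuth ambiguity against a reference field.

        bx_o, by_o : observed transverse components (arrays), ambiguous by 180 deg
        bx_p, by_p : potential-field transverse components on the same grid
        returns the disambiguated (bx, by).
        """
        flip = (bx_o * bx_p + by_o * by_p) < 0.0   # obtuse angle with the reference
        bx = np.where(flip, -bx_o, bx_o)
        by = np.where(flip, -by_o, by_o)
        return bx, by

the limitation of this prescription when the two azimuths are nearly perpendicular is discussed next .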
for regions where reaches value close to 90 such as along parts of polarity inversion line ( pil ) in flaring active regions , the accuracy of the method is poor .this is why we choose a rectangular box for studying the evolution of twist - shear and dip - shear sample the penumbra close to flaring site and at the same time stay away from the pil where the acute angle method may have problems in resolving the azimuth ambiguity .the maps of twist - shear for the entire sequence of vector magnetograms covering the pre - flare ( panels ( a)(c ) ) and post - flare ( panels ( d)(f ) ) phases are shown in figure 4 .the value of field azimuth is measured with respect to the positive x - axis and is positive in the anti - clockwise direction .the magnitude of twist - shear can be judged with the aid of colorbar at the bottom of figure 4 . however , for the present study the sign of shear angle is not important , so we shall focus on its magnitude .it may be noticed that the value of twist - shear is large inside the rectangle and adjacent pil as compared to other penumbral locations . however , the flare related changes are not so discernable to eye as compared to dip - shear in figure 3 . in figure 5we show the scatter between the dip - shear and twist - shear for the pixels within the black rectangle shown in previous figures .the panels ( a ) to ( c ) correspond to pre - flare while the panels i.e. , ( d ) to ( f ) correspond to post - flare phase .it may be noticed that : + ( i ) the distribution of dip - shear and twist - shear in panels ( a)-(c ) is different from the distribution in panels ( d)-(f ) .+ ( ii ) the dip - shear increases before the flare ( panels ( a)-(c ) ) but twist - shear tends to decrease at the same time .+ ( iii ) the dip - shear and twist - shear are in general correlated , i.e. the pixels with large dip - shear also have large twist - shear .+ ( iv ) the most important change can be noticed after the flare , i.e. , between panels ( c ) and ( d ) .after the flare , ( panel ( d ) ) the dip - shear decreases significantly while twist - shear increases .however , now both shear components show less dispersion i.e. , follow a tight correlation .+ ( v ) in panels ( d)-(f ) the two shears maintain smaller dispersion but dip - shear starts to increase once again .this increase suggests that the non - potentiality was building up again in the active region .it may be noted that the flaring activity continued in this region on the next day i.e. , 14 december 2006 also , with another x - class flare occurring at about 22:00 ut . 
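for completeness , the potential transverse field that serves as the reference for both shear measures can be sketched in a few lines . the fragment below is a schematic current - free extrapolation from the longitudinal component , in the spirit of the fourier method ( cf . alissandrakis 1981 ) ; it is evaluated at the photospheric level under one common sign convention , ignores boundary - periodicity effects , and is not the solarsoft routine actually used in this work .

    import numpy as np

    def potential_transverse(bz, dx=1.0, dy=1.0):
        """potential-field bx, by at z = 0 from the vertical component bz(x, y).

        for a current-free field decaying upward, the scalar potential in
        fourier space is phi_k = bz_k / k, so bx_k = -i kx bz_k / k and
        by_k = -i ky bz_k / k (the k = 0 mode is set to zero).
        """
        ny, nx = bz.shape
        kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
        ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
        kxg, kyg = np.meshgrid(kx, ky)
        k = np.sqrt(kxg**2 + kyg**2)
        k[0, 0] = 1.0                      # avoid division by zero for the mean mode
        bz_k = np.fft.fft2(bz)
        bx = np.real(np.fft.ifft2(-1j * kxg / k * bz_k))
        by = np.real(np.fft.ifft2(-1j * kyg / k * bz_k))
        return bx, by

    # toy bipole test
    y, x = np.mgrid[0:128, 0:128]
    bz = np.exp(-((x - 54)**2 + (y - 64)**2) / 50.0) - np.exp(-((x - 74)**2 + (y - 64)**2) / 50.0)
    bx, by = potential_transverse(bz)
    print(bx.shape, float(np.abs(bx).max()))

the disambiguation and shear calculations sketched earlier take the bx , by returned here as their reference field .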
conjectured that the free - energy is stored in non - potential magnetic loops that are stretched upwards and the free - energy release during the flare must be accompanied by sudden shrinkage or implosion in the field .also , it is predicted that after the flare the field should become more horizontal .using coronal images during flares , there are observational reports about detection of the loop contraction during flares .further , it was shown by that in force - free fields a high non - potentiality implies weaker magnetic tension , which in turn implies a larger vertical extension of the field due to lower magnetic pressure gradient .conversely , the release of free magnetic energy during flare implies a loss of magnetic non - potentiality leading to a decrease in the vertical extension of the field or shrinkage .the non - linear force - free field ( nlfff ) extrapolations of the noaa 10930 active region by show the non - potentiality of this active region in the form of a twist flux rope structure .as suggested by and , such a structure will have larger vertical extension in pre - flare as compared to the post - flare configuration .the closer the post - flare field approaches to the potential field configuration the smaller is the value of inclination difference expected .this may give an explanation for the decrease of dip - shear after the flare .however , in contrast , the increase in the twist - shear after the flare also needs an explanation .the opposite behaviour of twist - shear and dip - shear in relation to the flare can be understood in the following way .the twist - shear is dependent on sub - photospheric / photospheric forces , so the twist - shear will continue to increase independent of coronal processes like flare .however , the plasma decreases rapidly above the photosphere and thus there is no non - magnetic force or shear that is strong enough to change the inclination of the field lines .hence inclination will be more responsive to coronal processes .this may explain why inclination became more potential after the flare .hence , dip - shear could be a better diagnostic of np above the photosphere . in summary, we studied the evolution of twist - shear and dip - shear in a flaring -sunspot using a sequence of high - quality vector magnetograms spanning the pre - flare and post - flare phases and found that : ( i ) the penumbra located close to the flaring site has high twist - shear and dip - shear as compared to other parts of the penumbra , ( ii ) the twist - shear increases after the flare which was earlier reported by also , ( iii ) the dip - shear however shows a decrease after the flare , ( iv ) the twist - shear and dip - shear are correlated , i.e. pixels with high twist - shear exhibit high dip - shear , and this correlation is much tighter after the flare , and ( v ) distribution of twist - shear and dip - shear and its evolution ( in figure 5 ) clearly shows different patterns before and after the flare .this type of behaviour in the twist - shear and dip - shear parameters will need to be evaluated further in more flares before it can be understood physically .we plan to carry out more extensive study of the dip - shear and twist - shear in existing __ datasets . 
however , a high - cadence study of these shear parameters would be possible only with the upcoming observations from helioseismic and magnetic imager ( hmi ) onboard solar dynamics observatory ( sdo ) .the present study is important in the sense that it points the way to a vector - field follow - up to the results of , which established the line - of - sight field changes during powerful flares . in the context of present study, one should bear in mind that the vector magnetograms derived from the _ hinode _ sot / sp scans , although polarimetrically very precise are very noisy geometrically .an unwanted consequence of the geometric noise could be that the flows , specially on long time scales , would tend to create an appearance of non - potentiality , even if there was none .this is an important issue which needs to be addressed sooner than later , considering the widespread use of sot / sp magnetic maps as the vector magnetograms " .we plan to carry out a detailed study of this effect using simultaneously observed spectro - polarimeter ( sp ) scan from_ hinode _ sot and vector - magnetograms from hmi onboard sdo .we thank the anonymous referee for his / her valuable comments and suggestions , specially for pointing out the geometric noise in sot / sp scans and its side effects .we thank dr .ron moore and dr .pascal d for reading the manuscript .hinode is a japanese mission developed and launched by isas / jaxa , collaborating with naoj as a domestic partner , nasa and stfc ( uk ) as international partners .scientific operation of the hinode mission is conducted by the hinode science team organized at isas / jaxa .this team mainly consists of scientists from institutes in the partner countries .support for the post - launch operation is provided by jaxa and naoj ( japan ) , stfc ( u.k . ) , nasa , esa , and nsc ( norway ) .we also would like to thank dr .andreas lagg for providing his helix code used in this study .alissandrakis , c. e. 1981 , , 100 , 197 charbonneau , p. 1995, , 101 , 309 dmoulin , p. , mandrini , c. h. , van driel - gesztelyi , l. , lopez fuentes , m. c. , & aulanier , g. 2002 , , 207 , 87 falconer , d. a. , moore , r. l. , & gary , g. a. 2002 , , 569 , 1016 forbes , t. g. , & acton , l. w. 1996 , , 459 , 330 gary , g. a. 1989 , , 69 , 323 , s. , venkatakrishnan , p. , & tiwari , s. k. 2009 , , 706 , l240 sudol , j. j. , & harvey , j. w. 2005 , , 635 , 647 hagyard , m. j. , teuber , d. , west , e. a. , & smith , j. b. 1984 , , 91 , 115 hagyard , m. j. , venkatakrishnan , p. , & smith , j. b. , jr . 1990 , , 73 , 159 , j. w. 1969 , phd thesis , university of colorado at boulder .hudson , h. s. 2000 , , 531 , l75 hudson , h. s. , fisher , g. h. , & welsch , b. t. 2008 , subsurface and atmospheric influences on solar activity , 383 , 221 lites , b. w. , et al .2007 , new solar physics with solar - b mission , 369 , 55 liu , r. , wang , h. , & alexander , d. 2009 , , 696 , 121 , b. c. 1982 , , 77 , 43 metcalf , t. r. , et al .2006 , , 237 , 267 moon , y .- j . ,wang , h. , spirock , t. j. , goode , p. r. , & park , y. d. 2003 , , 217 , 79 priest , e. r. , & forbes , t. g. 2002 , , 10 , 313 rachkovsky , d. n. 1973 , izvestiya ordena trudovogo krasnogo znameni krymskoj astrofizicheskoj observatorii , 47 , 3 , p. h. , & sdo / hmi team .2002 , in bulletin of the american astronomical society , vol .34 , bulletin of the american astronomical society , 735+ schrijver , c. j. , de rosa , m. l. , title , a. m. , & metcalf , t. r. 2005 , , 628 , 501 schrijver , c. j. 
2007 , , 655 , l117 schrijver , c. j. , et al .2008 , , 675 , 1637 tiwari , s. k. , venkatakrishnan , p. , & sankarasubramanian , k. 2009 , , 702 , l133 tsuneta , s. , et al .2008 , , 249 , 167 unno , w. 1956 , , 8 , 108 venkatakrishnan , p.1990 , , 128 , 371 venkatakrishnan , p. , hagyard , m. j. , & hathaway , d. h. 1988 , , 115 , 125 wang , h. 1992 , , 140 , 85 wheatland , m. s. 2000 , , 532 , 616 zirin , h. , & tanaka , k. 1973 , , 32 , 173

figure 1 - continuum intensity maps of the δ-sunspot in noaa 10930 during the times mentioned at the top . the transverse magnetic field vectors are shown by arrows overlaid upon these maps . the black rectangle , shown in all panels , is the region where we monitor the evolution of twist - shear and dip - shear .

figure 2 - left panel : g - band image of the δ-sunspot in noaa ar 10930 during 12 december 2006 02:30 ut , with the location of the flare ribbon marked by ` + ' symbols . the flare ribbons sweep across the rectangular box during 02:20 to 02:26 ut . the right panel shows the map of the longitudinal field component for this sunspot . the black ( white ) rectangle in the left ( right ) panel marks the region where we monitor the evolution of twist - shear and dip - shear .
the non - potentiality ( np ) of solar magnetic fields is traditionally measured in terms of the magnetic shear angle , i.e. the angle between the observed and potential field azimuths . here we introduce another measure of shear that has not been studied earlier in solar active regions , namely one associated with the inclination angle of the magnetic field . this form of shear , which we call the dip - shear , is calculated by taking the difference between the observed and potential field inclinations . in this _ letter _ , we study the evolution of the dip - shear as well as the conventional twist - shear in a δ-sunspot using high - resolution vector magnetograms from the _ hinode _ space mission . we monitor these shears in a penumbral region located close to the flare site during 12 and 13 december 2006 . it is found that : ( i ) the penumbral area close to the flaring site shows higher values of twist - shear and dip - shear than other parts of the penumbra , ( ii ) after the flare the dip - shear drops in this region while the twist - shear tends to increase , ( iii ) the dip - shear and twist - shear are correlated , such that pixels with large twist - shear also tend to exhibit large dip - shear , and ( iv ) the correlation between the twist - shear and dip - shear is tighter after the flare . the present study suggests that monitoring the twist - shear during the flare alone is not sufficient ; it needs to be monitored together with the dip - shear .
we have recorded the flight of a fly during take-off and landing using digital high-speed photography . it is shown that the dynamics of the flexible wings differ between these two maneuvers . during these observations the fly flew freely in a large box and was not tethered .
in this fluid dynamics video , we demonstrate the take-off and landing of a fly . the video focuses on the deformation of the wings .
the feedback structure considered in this note is depicted in fig . 1 and the related transfer functions of the process and the controller are given by where is the plant steady - state gain , and the plant time constants , is the positive plant time delay and , and are the parameters of the pid controller .complete explicit expressions of the boundaries of the stability regions in first - order plants have been found in with a version of the hermite - biehler theorem derived by pontryagin , in with the nyquist criterion , and in with the root location method. moreover the second - order plants have been investigated in by means of a graphical approach ; the results obtained are correct , but the stability conditions are not all explicit and no finite number of required computation steps is specified . finally arbitrary - order plants have been studied with the nyquist criterion in , but p and pid controllers with a given separately are considered and no information about the set of the process parameters that allow stability is given .this note can be considered as an extension of to the arbitrary - order plants and is organized as follows . in section 2all the analytical expressions that will be used in the next sections are evaluated in detail . in section 3 a process transfer function without zerosis considered and the stability regions are explicitly evaluated by means of a version of the hermite - biehler theorem derived by pontryagin , already used in . a second - order plant is studied as example and the related stability regions are determined and plotted in two figures . in section 4a process transfer function with zeros is considered and the stability regions are found by means of a new theorem .the two procedures of the sections 3 and 4 are essentially equal , consist of a finite number of steps and yield the stability regions in both process and controller parameters planes . in section 5some conclusive remarks are given .the importance of explicit expressions of the boundaries of the stability zones has been enhanced by the introduction of the controller tuning charts in ( used also in ) .the closed - loop transfer function of the system is given by according to the pontryagin s studies , presented in and summarized in , it is necessary that has a bounded number of poles with arbitrary large positive real part for stability .this holds if the denominator of has a principal term ( in our case , where and , it exists if ) and the function , coefficient of , ( in our case for and for ) has all the zeros in the open left half plane .this happens if one of the following conditions is satisfied : 1 . 2 . and .the denominator of , given by ( [ eq:2.0a ] ) , divided by and hence named , can be written , according to ( [ eq:1.1 ] ) , as since all the poles of the closed - loop transfer function are zeros of and a system is stable if no pole of lies in the right half - plane , the above system is stable if no zero of lies in the right half - plane . for process transfer functions without zeros , examined in section 3 , is a quasi - polynomial and a version of the hermite - biehler theorem derived by pontryagin is employed . 
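as a quick numerical complement to the analytic approach developed in this note, the sketch below checks closed-loop stability for a first-order plant with delay under a pid controller by replacing the delay with a padé(1,1) approximation and inspecting the roots of the resulting characteristic polynomial. all numerical values are illustrative placeholders (they do not come from this note), and the padé substitution is only a rough surrogate for the exact quasi-polynomial analysis.

```python
import numpy as np

# Plant P(s) = k * exp(-L*s) / (1 + T1*s) with PID C(s) = kp + ki/s + kd*s.
# The delay is replaced by a Pade(1,1) approximation and the roots of the
# resulting characteristic polynomial are inspected. All values are placeholders.
k, T1, L = 1.0, 1.0, 0.5
kp, ki, kd = 2.0, 1.0, 0.3

num_delay = np.poly1d([-L / 2.0, 1.0])             # exp(-L*s) ~ (1 - L*s/2) / (1 + L*s/2)
den_delay = np.poly1d([L / 2.0, 1.0])

# characteristic equation 1 + C(s)P(s) = 0, cleared of denominators:
# s*(1 + T1*s)*(1 + L*s/2) + k*(kd*s^2 + kp*s + ki)*(1 - L*s/2) = 0
pid_num = np.poly1d([kd, kp, ki])
char_poly = np.poly1d([T1, 1.0, 0.0]) * den_delay + k * pid_num * num_delay

roots = char_poly.roots
print("characteristic roots:", roots)
print("stable (all roots in the open left half-plane):", bool(np.all(roots.real < 0)))
```

sweeping the controller gains over a grid with such a test gives a rough numerical picture of the stability regions whose exact boundaries are derived analytically here.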
for process transfer functions with zeros , examined in section 4 , is a quasi - polynomial divided by a polynomial and a new theorem , proved by use of the principle of the argument , is employed .now , before the explanation of the proposed procedures , let us evaluate all the expressions that will be used in the next sections .it is convenient to introduce the normalized time referred to the plant time delay and the dimensionless parameters , , , , and , in order to obtain equations independent of the real values of the parameters .applying these simplifications , ( [ eq:2.0b ] ) becomes moreover , assuming and , the real and the imaginary components and of , calculated for , are given by \ ] ] where \ ] ] for sake of clarity , and are the symmetric expressions of the time constants and ; is the sum of the products of different selected among the total ( for example and ) .the derivative of with respect to is given by assuming and in ( [ eq:2.3 ] ) , one obtains where it is easy to check that exists for only if .the derivative of with respect to , evaluated at and named , is given by denoting by and the two branches of related respectively to the minus and plus signs , their derivatives and are higher than the derivative of , equal to 0.5 , depending on and , given by in detail , the number of the derivatives and higher than 0.5 are the following : zero if and , one if , and two if and .let us denote by where , corresponding to equal derivatives with respect to of and , is a root of . differentiating ( [ eq:2.5 ] ) with respect to once and twice , one obtains it is worthwhile to note that , , and , where is given by ( [ eq:2.17 ] ) .evaluating , where is a root of given by ( [ eq:2.20 ] ) , one obtains eliminating and from and , given by ( [ eq:2.2 ] ) and ( [ eq:2.3 ] ) , yields denote by and the two straight lines whose equations in the ( )-plane are obtained introducing in ( [ eq:2.6 ] ) respectively , and , ( see figs . 2 and 5 ) ; denote further by , and the vertices of a triangle , whose sides are the axis and the two lines and .the coordinates of these vertices are given by considering ( [ eq:2.23 ] ) , the coordinates and of the points lying on and at are given by \frac{\sqrt{p^{2}(y_{bi})+q^{2}(y_{bi})-h^{2}}}{y_{bi}}\\ ] ] \frac{\sqrt{p^{2}(y_{ai})+q^{2}(y_{ai})-h^{2}}}{y_{ai}}\ .\ ] ] it is easy to check that , when , the absolute values of and , if , are equal to and , if , to given by the process transfer function does not have zeros , and thus , , , hold ; therefore , the function , given by ( [ eq:2.1 ] ) , is a quasi - polynomial and the pontryagin s results are integrally applicable .the following two conditions derived from theorem 3.2 of and from theorem 13.7 of , respectively , must be satisfied in order to have a stable system : * condition no .1 + consider that the principal term of , given by ( [ eq:2.1 ] ) , is , set and let be an appropriate constant such that the coefficient of in does not vanish at .the number of the real roots of in the interval for sufficiently large must be * condition no .2 + for all the zeros of the function the inequality , that is , must hold . in order to study both stable and unstable free - delay plants , the following two cases , adopted also in ,are considered : 1 . ( even number of negative plant time constants )+ and .2 . ( odd number of negative plant time constants ) + and . 
from ( [ eq:2.3 ] ) ,( [ eq:2.9 ] ) and ( [ eq:2.10 ] ) it follows that the coefficient of the highest degree of in is when is even and when odd ; hence we assume if is even and if odd .and typical functions and are plotted in fig .2 ; according to ( [ eq:2.3 ] ) there is one root of at and one for each intersection of with the horizontal line having the ordinate equal to a given . denoting by the number of the intersections between and corresponding to in fig .2 and assuming that no local minimum or maximum of is equal to for , the relationship between and is given by where is according to ( [ eq:2.17 ] ) ; ( [ eq:3.2 ] ) can be easily checked considering that from ( [ eq:2.5 ] ) , ( [ eq:2.20 ] ) and ( [ eq:2.21 ] ) it follows , and , and also that must hold if and if . since is the common limit value of for the two considered cases ( and ) , the existence of the intersections given by ( [ eq:3.2 ] ) represents a prerequisite of a plant to be made stable .the number can be evaluated by counting the intersections of the plots of and , given by ( [ eq:2.14 ] ) . from ( [ eq:2.14 ] )it follows that is an odd function of ; moreover , assuming ] , and can be expressed as * even + and . * odd + for and for , + for and for .if has no pole , splitting into ( ) and and considering the above described behavior of at and also at , one obtains * + if and , + if , + if and , + where and are given by ( [ eq:2.16 ] ) and ( [ eq:2.17 ] ) . * ; ( see fig .3 ( a ) ) + if is even , + if is odd and , + if is odd and , + where =sign[-(-1)^{n}u(n , n-1)u(n , n)] ] is a polynomial quotient .these functions can be written as where depends on the coefficients of in . in our case and ; since the roots must be positive , the required interval is from to and , therefore , the signs of each sturm function at these ends are the signs of evaluated respectively for and .
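the endpoint sign counting just described is an instance of sturm's theorem; as a small illustration, the sketch below (using sympy) counts the positive real roots of a sample polynomial, chosen arbitrarily rather than taken from this note, by comparing the sign changes of its sturm sequence at 0 and at +∞.

```python
import sympy as sp

# Sturm's theorem: the number of distinct real roots of p in (a, b) equals the
# difference in the number of sign changes of the Sturm sequence at a and at b.
# Here the interval is (0, +oo), matching the positive-root count used above;
# the polynomial is illustrative, not one arising from this note.
x = sp.symbols('x')
p = sp.Poly(x**4 - 3*x**3 + x + 1, x)

seq = sp.sturm(p)

def sign_changes(values):
    signs = [sp.sign(v) for v in values if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

at_zero = [f.eval(0) for f in seq]
at_infinity = [sp.LC(f) for f in seq]     # the sign at +oo is the sign of the leading coefficient
print("positive real roots (sturm):", sign_changes(at_zero) - sign_changes(at_infinity))
print("positive real roots (direct):", [float(r) for r in sp.real_roots(p) if r.evalf() > 0])
```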
the stability of feedback systems consisting of linear time-delay plants and pid controllers has been investigated for many years by means of several methods , of which the nyquist criterion , a generalization of the hermite-biehler theorem , and the root location method are the best known . the main purpose of this research is to determine the range of controller parameters that allows stability . explicit and complete expressions for the boundaries of these regions , together with computation procedures requiring a finite number of steps , are currently available only for first-order plants with a single time delay . in this note , the same results , based on pontryagin's studies , are presented for arbitrary-order plants .
we acknowledge useful and stimulating discussions with m. g. cosenza and v. m. kenkre .this work was supported by the nsf under grant no .j. c. is also supported by the james s. mcdonnell foundation and the national science foundation itr dmr-0426737 and cns-0540348 within the dddas program .99 w. weidlich , phys . rep . * 204 * , 1 ( 1991 ) .w. weidlich , _ sociodynamics : a systematic approach to mathematical modelling in the social sciences _ ( harwood academic publishers , amsterdam , 2000 ) . s. m. de oliveira , p. m. c. de oliveira , and d. stauffer , _ non - traditional applications of computational statistical physics _teubner , stuttgart , 1999 ) .k. sznajd - weron and j. sznajd , int .c * 11 * , 1157 ( 2000 ) .d. h. zanette , phys .e * 65 * , 041908 ( 2002 ) .m. kuperman and d. h. zanette , eur .j. b * 26 * , 387 ( 2002 ) .a. aleksiejuk , j. a. hoyst , and d. stauffer , physica a * 310 * , 260 ( 2002 ) .m. c. gonzlez , p. g. lind , and h. j. herrmann , phys .* 96 * , 088702 ( 2006 ) .j. candia , phys .e * 74 * , 031101 ( 2006 ) ; phys .e * 75 * , 026110 ( 2007 ) .r. axelrod , j. conflict res .* 41 * , 203 ( 1997 ) .r. axelrod , _ the complexity of cooperation _( princeton university press , princeton , 1997 ) . c. castellano , m. marsili , and a. vespignani , phys .* 85 * , 3536 ( 2000 ) .d. vilone , a. vespignani , and c. castellano , eur .j. b * 30 * , 299 ( 2002 ) .k. klemm , v. m. eguluz , r. toral , and m. san miguel , phys .e * 67 * , 045101(r ) ( 2003 ) .k. klemm , v. m. eguluz , r. toral , and m. san miguel , phys .e * 67 * , 026120 ( 2003 ) .j. c. gonzlez - avella , m. g. cosenza , and k. tucci , phys .e * 72 * , 065102(r ) ( 2005 ) .j. c. gonzlez - avella _ et al _ , phys .e * 73 * , 046119 ( 2006 ) .m. n. kuperman , phys .e * 73 * , 046139 ( 2006 ) .d. lawrence kincaid _ et al _ , international family planning perspectives * 22 * , 169 ( 1996 ) .a. t. bernardes , u. m. s. costa , a. d. araujo , and d. stauffer , int . j. mod .c * 12 * , 159 ( 2001 ) .a. t. bernardes , d. stauffer , and j. kertsz , eur .j. b * 25 * , 123 ( 2002 ) .m. c. gonzlez , a. o. sousa , and h. j. herrmann , int .c * 15 * , 45 ( 2004 ) .f. caruso and p. castorina , int .c * 16 * , 1473 ( 2005 ) .k. sznajd - weron and j. sznajd , physica a * 351 * , 593 ( 2005 ) . c. schulze , int .c * 14 * , 95 ( 2003 ) ; * 15 * , 569 ( 2004 ) .k. sznajd - weron and r. weron , physica a * 324 * , 437 ( 2003 ) .k. sznajd - weron and r. weron , int . j. mod .c * 13 * , 115 ( 2002 ) .
in the context of an extension of axelrod s model for social influence , we study the interplay and competition between the cultural drift , represented as random perturbations , and mass media , introduced by means of an external homogeneous field . unlike previous studies [ j. c. gonzlez - avella _ et al _ , phys . rev . e * 72 * , 065102(r ) ( 2005 ) ] , the mass media coupling proposed here is capable of affecting the cultural traits of any individual in the society , including those who do not share any features with the external message . a noise - driven transition is found : for large noise rates , both the ordered ( culturally polarized ) phase and the disordered ( culturally fragmented ) phase are observed , while , for lower noise rates , the ordered phase prevails . in the former case , the external field is found to induce cultural ordering , a behavior opposite to that reported in previous studies using a different prescription for the mass media interaction . we compare the predictions of this model to statistical data measuring the impact of a mass media vasectomy promotion campaign in brazil . .5 cm .5 cm 21truecm 15truecm _ keywords : _ sociophysics ; econophysics ; marketing ; advertising . the non - traditional application of statistical physics to many problems of interdisciplinary nature has been growing steadily in recent years . indeed , it has been recognized that the study of statistical and complex systems can provide valuable tools and insight into many emerging interdisciplinary fields of science . in this context , the mathematical modeling of social phenomena allowed to perform quantitative investigations on processes such as self - organization , opinion formation and spreading , cooperation , formation and evolution of social structures , etc ( see e.g. ) . in particular , a model for social influence proposed by axelrod , which aims at understanding the formation of cultural domains , has recently received much attention due to its remarkably rich dynamical behavior . in axelrod s model , culture is defined by the set of cultural attributes ( such as language , art , technical standards , and social norms ) subject to social influence . the cultural state of an individual is given by their set of specific traits , which are capable of changing due to interactions with their acquaintances . in the original proposal , the individuals are located at the nodes of a regular lattice , and the interactions are assumed to take place between lattice neighbors . social influence is defined by a simple local dynamics , which is assumed to satisfy the following two properties : ( a ) social interaction is more likely taking place between individuals that share some or many of their cultural attributes ; ( b ) the result of the interaction is that of increasing the cultural similarity between the individuals involved . by means of extensive numerical simulations , it was shown that the system undergoes a phase transition separating an ordered ( culturally polarized ) phase from a disordered ( culturally fragmented ) one , which was found to depend on the number of different cultural traits available . the critical behavior of the model was also studied in different complex network topologies , such as small - world and scale - free networks . these investigations considered , however , zero - temperature dynamics that neglected the effect of fluctuations . 
following axelrod s original idea of incorporating random perturbations to describe the effect of _ cultural drift _ , noise was later added to the dynamics of the system . with the inclusion of this new ingredient , the disordered multicultural configurations were found to be metastable states that could be driven to ordered stable configurations . the decay of disordered metastable states depends on the competition between the noise rate , , and the characteristic time for the relaxation of perturbations , . indeed , for , the perturbations drive the disordered system towards monocultural states , while , for , the noise rates are large enough to hinder the relaxation processes , thus keeping the disorder . since scales with the system size , , as , the culturally fragmented states persist in the thermodynamic limit , irrespective of the noise rate . more recently , an extension of the model was proposed , in which the role of _ mass media _ and other mass external agents was introduced by considering external and autonomous local or global fields , but neglecting random fluctuations . the interaction between the fields and the individuals was chosen to resemble the coupling between an individual and their neighbors in the original axelrod s model . according to the adopted prescription , the interaction probability was assumed to be null for individuals that do not share any cultural feature with the external message . in this way , intriguing , counterintuitive results were obtained : the influence of mass media was found to disorder the system , thus driving ordered , culturally polarized states towards disordered , culturally fragmented configurations . the aim of this work is to include the effect of cultural drift in an alternative mass media scenario . although still inspired in the original axelrod s interaction , the mass media coupling proposed here is capable of affecting the cultural traits of any individual in the society , including those who do not share any features with the external message . for noise rates below a given transition value , which depends on the intensity of the mass media interactions , only the ordered phase is observed . however , for higher levels of noise above the transition perturbation rate , both the ordered ( culturally polarized ) phase and the disordered ( culturally fragmented ) phase are found . in the latter case , we obtain an order - disorder phase diagram as a function of the field intensity and the number of traits per cultural attribute . according to this phase diagram , the role of the external field is that of inducing cultural ordering , a behavior opposite to that reported in ref . using a different prescription for the mass media interaction . in order to show the plausibility of the scenario considered here , we also compare the predictions of this model to statistical data measuring the impact of a mass media vasectomy promotion campaign in brazil . the model is defined by considering individuals located at the sites of an square lattice . the cultural state of the individual is described by the integer vector , where . the dimension of the vector , , defines the number of cultural attributes , while corresponds to the number of different cultural traits per attribute . initially , the specific traits for each individual are assigned randomly with a uniform distribution . similarly , the mass media cultural message is modeled by a constant integer vector , which can be chosen as without loss of generality . 
the intensity of the mass media message relative to the local interactions between neighboring individuals is controlled by the parameter ( ) . moreover , the parameter ( ) is introduced to represent the noise rate . the model dynamics is defined by iterating a sequence of rules , as follows : ( 1 ) an individual is selected at random ; ( 2 ) with probability , he / she interacts with the mass media field ; otherwise , he / she interacts with a randomly chosen nearest neighbor ; ( 3 ) with probability , a random single - feature perturbation is performed . the interaction between the and individuals is governed by their cultural overlap , , where is the kronecker delta . with probability , the result of the interaction is that of increasing their similarity : one chooses at random one of the attributes on which they differ ( i.e. , such that ) and sets them equal by changing the trait of the individual selected in first place . naturally , if , the cultural states of both individuals are already identical , and the interaction leaves them unchanged . the interaction between the individual and the mass media field is governed by the overlap term . analogously to the precedent case , is the probability that , as a result of the interaction , the individual changes one of the traits that differ from the message by setting it equal to the message s trait . again , if , the cultural state of the individual is already identical to the mass media message , and the interaction leaves it unchanged . notice that ; thus , the mass media coupling used here is capable of affecting the cultural traits of any individual in the society , including those who do not share any features with the external message . as commented above , this differs from the mass media interaction proposed in ref . , which was given by . = 4.2truein=3.1truein as regards the perturbations introduced in step ( 3 ) , a single feature of a single individual is randomly chosen , and , with probability , their corresponding trait is changed to a randomly selected value between 1 and . in the absence of fluctuations , the system evolves towards absorbing states , i.e. , frozen configurations that are not capable of further changes . for , instead , the system evolves continuously , and , after a transient period , it attains a stationary state . in order to characterize the degree of order of these stationary states , we measure the ( statistically - averaged ) size of the largest homogeneous domain , . the results obtained here correspond to systems of linear size and a fixed number of cultural attributes , , typically averaged over 500 different ( randomly generated ) initial configurations . = 4.2truein=3.1truein figure 1 shows the order parameter , , as a function of the noise rate , , for different values of the mass media intensity . the number of different cultural traits per attribute is . as anticipated , for small noise rates , the perturbations drive the decay of disordered metastable states , and thus the system presents only ordered states with . as the noise rate is gradually increased , the competition between characteristic times for perturbation and relaxation processes sets on , and , for large enough noise rates , the system becomes completely disordered . this behavior , which was already reported in the absence of mass media interactions , is here also observed for . 
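a minimal simulation sketch of the dynamics just defined is given below. since several formulas are not recoverable from this text, the field coupling probability is taken as (1 + overlap with the message)/2, which is nonzero even at zero overlap as required above but is only an assumed stand-in for the exact expression used in the paper; the lattice size, f, q, b, r and the run length are illustrative only, and the largest cultural group is used as a cheap proxy for the largest connected homogeneous domain.

```python
import numpy as np

# Minimal sketch of the extended Axelrod dynamics defined above. Assumptions
# (because the stripped formulas are not recoverable here): periodic L x L lattice,
# field coupling probability (1 + overlap with M)/2, and the largest cultural group
# as a cheap proxy for the largest homogeneous domain. All parameters are illustrative.
L, F, q = 20, 5, 10
B, r = 0.05, 1e-4
steps = 500_000
rng = np.random.default_rng(1)

culture = rng.integers(0, q, size=(L, L, F))
M = np.zeros(F, dtype=int)                         # external mass media message

def random_neighbor(i, j):
    di, dj = [(-1, 0), (1, 0), (0, -1), (0, 1)][rng.integers(4)]
    return (i + di) % L, (j + dj) % L

for _ in range(steps):
    i, j = rng.integers(L), rng.integers(L)
    if rng.random() < B:                           # step (2): interact with the field ...
        target = M
        overlap = np.mean(culture[i, j] == M)
        p_interact = 0.5 * (1.0 + overlap)         # assumed field coupling (see note above)
    else:                                          # ... or with a random nearest neighbor
        ni, nj = random_neighbor(i, j)
        target = culture[ni, nj]
        overlap = np.mean(culture[i, j] == target)
        p_interact = overlap                       # standard Axelrod rule
    differing = np.where(culture[i, j] != target)[0]
    if differing.size > 0 and rng.random() < p_interact:
        f = rng.choice(differing)                  # copy one differing trait
        culture[i, j, f] = target[f]
    if rng.random() < r:                           # step (3): cultural drift
        culture[rng.integers(L), rng.integers(L), rng.integers(F)] = rng.integers(q)

states, counts = np.unique(culture.reshape(L * L, F), axis=0, return_counts=True)
print("largest cultural group fraction:", counts.max() / (L * L))
```

sweeping b and r in such runs gives a qualitative picture of the competition between noise and external field discussed next.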
as we consider plots for increasing values of , the transition between ordered and disordered states takes place for increasingly higher levels of noise . indeed , this is an indication of the competition between noise rate and external field effects , thus showing that the external field induces order in the system . figure 2 shows the order - disorder phase diagram as a function of the field intensity and the number of traits per cultural attribute , for the noise rate . the transition points correspond to . for the case , noise - driven order - disorder transitions were found to be roughly independent of the number of traits per cultural attribute , as long as . here , we observe a similar , essentially behavior for as well . typical snapshot configurations of both regions are also shown in figure 2 , where the transition from the ( small- ) multicultural regime to the ( large- ) monocultural state is clearly observed . a majority of individuals sharing the same cultural state , identical to the external message , is found within the ordered phase . for smaller noise rates , , the system is ordered even for , and hence only the monocultural phase is observed . = 4.2truein=3.1truein in order to gain further insight into the interplay and competition between cultural drift and mass media effects , let us now consider the external message being periodically switched on and off . starting with a random disordered configuration and assuming a noise level above the transition value for the case , we observe a periodical behavior : the system becomes ordered within the time window in which the field is applied , while it becomes increasingly disordered when the message is switched off . a cycle representing this behavior is shown by the solid line in figure 3 , which corresponds to , , and . moreover , we can compare this behavior to statistical data measuring the impact of a mass media vasectomy promotion campaign in brazil . symbols in figure 3 correspond to the number of vasectomies performed monthly in a major clinic in so paulo , spanning a time interval of 2 years . the shaded region indicates the time window in which the mass media campaign was performed . the promotion campaign consisted of prime - time television and radio spots , the distribution of flyers , an electronic billboard , and public relations activities . in order to allow a comparison to model results , vasectomy data have been normalized by setting the maximal number of vasectomies measured equal to unity , while the relation between time scales has been chosen conveniently . in the model results , time is measured in monte carlo steps ( mcs ) , where 1 mcs corresponds to iterations of the set of rules ( 1)-(3 ) . for the comparison performed in figure 3 , we assumed that 1 month corresponds to 500 mcs . although the model parameters and scale units were arbitrarily assigned , it is reassuring to observe that a good agreement between observations and model results can be achieved . indeed , the steep growth in the number of vasectomies practiced during the promotion campaign , as well as the monotonic decrease afterwards , can be well accounted for by this model . in order to carry out a straightforward comparison between mass media effects in this model and the measured response within a social group , several simplifying assumptions were adopted . as commented above , the model parameters and scale units were conveniently assigned . 
moreover , no distinction was attempted between opinions ( as modeled , within axelrod s representation , by sets of cultural attributes in the mathematical form of vectors ) and actual choices ( as measured e.g. by the vasectomy data shown in figure 3 ) . in related contexts , analogous simplifying assumptions were adopted in the statistical physics modeling of political phenomena ( e.g. the distribution of votes in elections in brazil and india , italy and germany , four - party political scenarios , etc ) , marketing competition between two advertised products , and applications to finance . in summary , we have studied , in the context of an extension of axelrod s model for social influence , the interplay and competition between cultural drift and mass media effects . the cultural drift is modeled by random perturbations , while mass media effects are introduced by means of an external field . a noise - driven order - disorder transition is found . in the large noise rate regime , both the ordered ( culturally polarized ) phase and the disordered ( culturally fragmented ) phase can be observed , whereas in the small noise rate regime , only the ordered phase is present . in the former case , we have obtained the corresponding order - disorder phase diagram , showing that the external field induces cultural ordering . this behavior is opposite to that reported in ref . using a different prescription for the mass media field , which neglected the interaction between the field and individuals that do not share any features with the external message . the mass media coupling proposed in this work , instead , is capable of affecting the cultural traits of any individual in the society . in order to show the plausibility of the scenario considered here , we have compared the predictions of this model to statistical data measuring the impact of a mass media vasectomy promotion campaign in brazil . a good agreement between model results and measured data can be achieved . the observed behavior is characterized by a steep growth during the promotion campaign , and a monotonic decrease afterwards . we can thus conclude that the extension of axelrod s model proposed here contains the basic ingredients needed to explain the trend of actual observations . we hope that the present findings will contribute to the growing interdisciplinary efforts in the mathematical modeling of social dynamics phenomena , and stimulate further work .
the _ partially asymmetric simple exclusion process _( pasep ) is a physical model in which sites on a one - dimensional lattice are either empty or occupied by a single particle .these particles may hop to the left or to the right with fixed probabilities , which defines a markov chain on the states of the model .the explicit description of the stationary probability of the pasep was obtained through the matrix - ansatz . since then , the links between specializations of this model and combinatorics have been the subject of an important research ( see for example ) . a great achievment is the description of the stationary distribution of the most general pasep model through statistics defined on combinatorial objects called staircase tableaux .the objective of our work is to show how we can reveal an underlying tree structure in staircase tableaux and use it to obtain combinatorial properties . in this paper, we focus on some specializations of the _ fugacity partition function _ , defined in the next section as a -analogue of the classical partition function of staircase tableaux .the present paper is divided into two sections .section [ sec : tlst ] is devoted to the definition of labeled tree - like tableaux , which are a new presentation of staircase tableaux , and to the presentation of a tree structure and of an insertion algorithm on these objects .section [ sec : app ] presents combinatorial applications of these tools .we get : * a new and natural proof of the formula for ; * a study of ; * a bijective proof for the formula of .a _ staircase tableau _ of size is a ferrers diagram of `` staircase '' shape such that boxes are either empty or labeled with , , , or , and satisfying the following conditions : * no box along the diagonal of is empty ; * all boxes in the same row and to the left of a or a are empty ; * all boxes in the same column and above an or a are empty .figure [ fig : st ] ( left ) presents a staircase tableau of size 5 .[ weight ] the _ weight _ of a staircase tableau is a monomial in and , which we obtain as follows .every blank box of is assigned a or a , based on the label of the closest labeled box to its right in the same row and the label of the closest labeled box below it in the same column , such that : * every blank box which sees a to its right gets a ; * every blank box which sees a to its right gets a ; * every blank box which sees an or to its right , and an or below it , gets a ; * every blank box which sees an or to its right , and a or below it , gets a . after filling all blank boxes ,we define to be the product of all labels in all boxes .the weight of the staircase tableau on figure [ fig : st ] is .there is a simple correspondence between the states of the pasep and the diagonal labels of staircase tableaux : diagonal boxes may be seen as sites of the model , and and ( resp . and ) diagonal labels correspond to occupied ( resp .unoccupied ) sites .we shall use a variable to keep track of the number of particles in each state . to this way, we define to be the number of labels or along the diagonal of . 
for examplethe tableau in figure [ fig : st ] has .the _ fugacity partition function _ of the pasep is defined as we shall now define another class of objects , called labeled tree - like tableaux .they appear as a labeled version of tree - like tableaux ( tlts ) defined in .these tableaux are in bijection with staircase tableaux , and present two nice properties inherited from tlts : an underlying tree structure , and an insertion algorithm which provides a useful recursive presentation . in a ferrers diagram ,the _ border edges _ are the edges that stand at the end of rows or columns .the number of border edges is clearly the half - perimeter of . for any box of , we define as the set of boxes placed in the same column and above in , and as the set of boxes placed in the same row and to the left of in . by a slight abuse, we shall use the same notations for any tableau of shape .these notions are illustrated at figure [ fig : bordedges - legarm ] .{images / border_edges_leg_arm_1 } \end{array } ] {images / border_edges_leg_arm_3 } \end{array } ] a nice feature of the notion of ltlt is its underlying tree structure .let us consider an ltlt of size .we may see each label of as a node , and definition [ def : ltlt ] ensures that each node ( except the ne - most one , which appears as the root ) has either a node above it or to its left , which may be seen as its father .we refer to figure [ fig : tree ] which illustrates this property .crossing _ is a box such that * there is a label to the left and to the right of ; * there is a label above and below . in this way, we get a labeled binary tree with some additional information : crossings of edges .if we forget the crossings , we have a binary tree in which each internal node and leaf is labeled . as a consequence, any ltlt is endowed with an underlying binary tree structure , such that the size of the ltlt is equal to the number of internal nodes in its underlying binary tree .{images / underlaying_tree_structure_1 } \end{array } ]{images / underlaying_tree_structure_3 } \end{array } x y ] given two ferrers diagrams , we say that the set of cells ( set - theoretic difference ) is a _ ribbon _ if it is connected ( with respect to adjacency ) and contains no square . in this casewe say that can be added to , or that it can be removed from . for our purpose, we shall only consider the addition of a ribbon to an ltlt between a vertical border edge and an horizontal border edge .as in the row / column insertion , we observe that vertical ( resp .horizontal ) border edges are shifted horizontally ( resp .vertically ) , thus we shift also the corresponding labels .figure [ fig : ribbon_insertion ] illustrates this operation . [ def : spec ] let be an ltlt .the _ special box _ of is the northeast - most labeled box among those that occur at the bottom of a column .this is well - defined since the bottom row of contains necessarily a labeled box .{images / insertion_rubban_1 } \end{array } \xrightarrow[insertion]{ribbon } \begin{array}{c } \includegraphics[scale=\scalefigure]{images / insertion_rubban_2 } \end{array } ] an ltlt of size together with the choice of one of its border edges , and a compatible bi - label .[ etape_reperer_case_speciale ] find the special box of .[ etape_inserer_colonne ] add a row / column to at edge with new bi - label . [ etape_ajouter_ruban ] if is to the left of , perform a ribbon addition between and . 
a final ltlt of size .{images / insertion_procedure_2 } \end{array } \longrightarrow \begin{array}{c } \includegraphics[scale=\scalefigure]{images / insertion_procedure_3 } \end{array } \longrightarrow \begin{array}{c } \includegraphics[scale=\scalefigure]{images / insertion_procedure_4 } \end{array } \longrightarrow \begin{array}{c } \includegraphics[scale=\scalefigure]{images / insertion_procedure_5 } \end{array } ] .since is the degree of , the conclusion follows .equation ( [ eq : stable ] ) , as well as proposition [ prop : stable ] are new .our insertion algorithm shows its strength when it comes to study recursively the diagonal in staircase tableaux , which is meaningful in the pasep model : it is by far more natural than the already studied recursion with respect to the first column of the tableau .we show how our tools lead to a bijection between staircase tableaux without any label or weight and a certain class of paths enumerated by the sequence a026671 of .this is an answer to problem 5.8 of .let us consider ltlts which are in bijection with staircase tableaux without or .the first observation is that these tableaux correspond to trees without any crossing , since a crossing sees an or a to its right and a below , which gives a weight ( _ cf ._ figure [ fig : no_cross ] ) .thus we have to deal with binary trees whose left sons only have one choice of label ( ) and whose right sons may have one ( ) or two choices ( or ) of labels .the restriction of no weight is equivalent to forbidding any point in the tree which sees an or a to its right and a below it . since we deal with binary trees ( without any crossing ), we get that the only nodes where we may put a label are such that the path in the tree from the root to contains exactly one left son , followed by a right son .these nodes are illustrated on figure [ fig : pos_gamma ] .we may shift these labels to their father in the tree , and define the _ left depth _ of a node in a binary tree as the number of left sons in the path from the root to , and using the bijection , we get the following statement . [lem : bij ] the set of staircase tableaux of size without any label or weight is in bijection with the set of binary trees of size whose nodes of left depth equal to are labeled by or .we shall now code the trees in by lattice paths . to do this, we use a deformation of the classical bijection between binary trees and dyck paths : we go around the tree , starting at the root and omitting the last external node , and we add to the path a step when visiting ( for the first time ) an internal node , or a step when visiting an external node .let us denote by the ( dyck ) path associated to the binary tree under this procedure .it is well - known that is a bijection between binary trees with internal nodes and dyck paths of length ( of size ) .if we use the same coding , but omitting the root and the last external nodes , we get a bijection between binary trees with internal nodes and _ almost - dyck _ paths ( whose ordinate is always ) of size . 
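the coding map described above (an up step for each internal node visited for the first time, a down step for each external node, omitting the last one) can be made concrete for plain binary trees; the sketch below implements this classical encoding and its inverse, leaving aside the labels and the lazy-path modification treated next.

```python
# A binary tree is either None (an external node) or a pair (left, right).
def tree_to_dyck(t):
    """Preorder encoding: 'U' for each internal node, 'D' for each external node,
    omitting the last external node -- the classical map recalled above, for plain trees."""
    word = []
    def visit(node):
        if node is None:
            word.append('D')
        else:
            word.append('U')
            visit(node[0])
            visit(node[1])
    visit(t)
    return ''.join(word[:-1])

def dyck_to_tree(word):
    """Inverse map: rebuild the binary tree from a Dyck word."""
    it = iter(word + 'D')                 # restore the omitted final external node
    def build():
        if next(it) == 'U':
            left = build()
            right = build()
            return (left, right)
        return None
    return build()

t = ((None, (None, None)), (None, None))  # 4 internal nodes
w = tree_to_dyck(t)
print(w)                                  # 'UUDUDDUD', a Dyck word of length 8
print(dyck_to_tree(w) == t)               # True: the encoding is a bijection
```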
in the sequel ,we shall call _ factor _ of a path a minimal sub - path starting from the axis and ending on the axis .we may replace the negative factors by steps to get a bijection between binary trees of size and _ positive _ lazy paths of size ( these objects appear under the name ( ) in ) .figure [ fig : bij_pi ] illustrates bijections and .we observe that the nodes with left depth equal to in a binary tree correspond to steps which start on the axis in , thus to strictly positive factors in .these nodes may be labeled with or . to translate this bijectively, we only have to leave unchanged a factor associated to a label , and to apply a mirror reflexion to a factor associated to a label .figure [ fig : bij ] illustrates this correspondence .thanks to proposition [ prop : bij ] and lemma [ lem : bij ] , we get a bijection denoted by , between staircase tableaux and lazy paths ( see figure [ fig : from_st_to_lazy_path ] ) .{images / from_staircase_to_lazy_path_1 } \end{array } \longleftrightarrow \begin{array}{c } \includegraphics[scale=\scalefigure]{images / from_staircase_to_lazy_path_2 } \end{array } \longleftrightarrow \begin{array}{c } \includegraphics[scale=\scalefigure]{images / from_staircase_to_lazy_path_3 } \end{array } \longleftrightarrow ] we recall the following definition from .a row _ indexed _ by or in a staircase tableau is a row such that its left - most label is or . in the same way ,a column indexed by an or a is a column such that its top - most label is or .for example , the staircase tableau on the left of figure [ fig : st ] has 2 columns indexed by and 1 row indexed by .the application defines a bijection from the set of staircase tableaux without label and weight , of size , to the set of lazy paths of size . moreover ,if we denote : the number of steps , the number of steps , the length of the initial maximal sequence of steps , the number of steps , the number of factors , and the number of negative factors in , then : * the number of labels in is given by ; * the number of labels in is given by ; * the number of labels in is given by ; * the number of columns indexed by in is given by ; * the number of rows indexed by or in is given by .we still have to check the assertions about the different statistics .we recall that , as defined in , rows indexed by or ( resp .columns indexed by or ) in a staircase tabelau correspond to non - root labels in the first column ( resp .first row ) of .we have : * the number of labels is by definition the number of negative factors in ; * the number of or labels is ; * the number of labels is the number of external nodes minus the number of nodes in the left branch of the tree , thus ; * columns indexed by correspond to nodes in the right branch of the tree , their number is ; * rows indexed by or correspond to nodes in the left branch of the tree , their number is . we may observe that among this class of staircase tableaux , those who have only or labels on the diagonal , _i.e. _ such that are in bijection with binary trees whose internal nodes are of left depth at most , and such that we forbid the label on external nodes .the bijection sends these tableaux onto lazy paths of height and depth bounded by and whose factors preceding either a step or the end of the path are negative ( _ cf . _ figure [ fig : frob_path ] ) .let us denote by the number of such path of size . by decomposing the path with respect to its first two factors ,we may write which corresponds ( _ cf . 
_ the entry a001519 in ) to the recurrence of odd fibonacci numbers , as claimed in corollary 3.10 of .another interesting special case concerns staircase tableaux of size without any or labels , and without weight .it is obvious that the bijection maps these tableaux onto binary trees , enumerated by catalan numbers .moreover , if we keep track of the number of external nodes labeled with , we get a bijection with dyck paths of size with exactly peaks , enumerated by narayana numbers . * forthcoming objective .* we are convinced that ltlts , because of their insertion algorithm , are objects that are both natural and easy to use , as shown in this paper on some special cases . since a nice feature of our insertion procedure is to work on the boundary edges , which encode the states in the pasep , an objective is to use these objects to describe combinatorially the general case of the pasep model . to do that, we have to find an alternate description of the weight on ltlts , hopefully simpler than the one defined on staircase tableaux in definition [ weight ] .* acknowledgements * [ sec : ack ] this research was driven by computer exploration using the open - source mathematical software ` sage ` and its algebraic combinatorics features developed by the ` sage - combinat ` community .
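as an independent check of the two enumerative statements made in the preceding section, the following sketch counts dyck paths of semilength n by number of peaks and compares with the narayana numbers, and generates the sequence a001519 from the recurrence a(n) = 3a(n-1) - a(n-2); the specific recurrence displayed in the text is not recoverable from this extraction, but this is the standard recurrence satisfied by a001519.

```python
from math import comb
from itertools import product

def narayana(n, k):
    # Narayana number N(n, k) = (1/n) * C(n, k) * C(n, k-1)
    return comb(n, k) * comb(n, k - 1) // n

def dyck_paths(n):
    # brute-force generation of Dyck paths of semilength n
    for steps in product('UD', repeat=2 * n):
        height, ok = 0, True
        for s in steps:
            height += 1 if s == 'U' else -1
            if height < 0:
                ok = False
                break
        if ok and height == 0:
            yield ''.join(steps)

n = 5
by_peaks = {}
for w in dyck_paths(n):
    by_peaks[w.count('UD')] = by_peaks.get(w.count('UD'), 0) + 1
print({k: (by_peaks[k], narayana(n, k)) for k in sorted(by_peaks)})   # counts agree

# A001519 (odd-indexed Fibonacci numbers): a(n) = 3*a(n-1) - a(n-2), a(0) = a(1) = 1
a = [1, 1]
for _ in range(6):
    a.append(3 * a[-1] - a[-2])
print(a)    # 1, 1, 2, 5, 13, 34, 89, 233
```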
staircase tableaux are combinatorial objects which appear as key tools in the study of the pasep physical model . the aim of this work is to show that the discovery of a tree structure in staircase tableaux provides a significant means of deriving properties of these objects .
in this paper , we will use probabilistic methods to solve the dirichlet boundary value problem for the semilinear second order elliptic pde of the following form : where is a bounded domain in .the operator is given by where ( ) is a measurable , symmetric matrix - valued function satisfying a uniform elliptic condition , , and are merely measurable functions belonging to some spaces , and is a nonlinear function .the operator is rigorously determined by the following quadratic form : we refer readers to and for details of the operator . probabilistic approaches to boundary value problems of second order differential operators have been adopted by many people .the earlier work went back as early as 1944 in .see the books and references therein . if ( i.e. , the linear case ) , and moreover , the solution to problem ( [ 0.1 ] ) can be solved by a feynman kac formula \qquad \mbox{for } x\in d,\ ] ] where , is the diffusion process associated with the infinitesimal generator is the first exit time of the diffusion process from the domain .very general results are obtained in the paper for this case . when , `` '' in ( [ 0.0 ] ) is just a formal writing because the divergence does not really exist for the merely measurable vector field .it should be interpreted in the distributional sense .it is exactly due to the nondifferentiability of , all the previous known probabilistic methods in solving the elliptic boundary value problems such as those in and could not be applied .we stress that the lower order term can not be handled by girsanov transform or feynman kac transform either . in a recent work , we show that the term in fact can be tackled by the time - reversal of girsanov transform from the first exit time from by the symmetric diffusion associated with , the symmetric part of . the solution to equation ( [ 0.1 ] ) ( when ) is given by \\[-8pt ] & & \hphantom{e_x^0 \biggl[\varphi(x^0(\tau_d ) ) \exp\biggl\ { } { } -\frac{1}{2}\int_0^{\tau_d}(b-\hat{b})a^{-1}(b-\hat{b})^{\ast}(x^0(s))\,ds\nonumber \\ & & \hspace*{186pt } { } + \int_0^{\tau_d}q(x^0(s))\,ds \biggr\ } \biggr],\nonumber\end{aligned}\ ] ] where is the martingale part of the diffusion , denotes the reverse operator , and stands for the inner product in .nonlinear elliptic pdes [ i.e. , in ( [ 0.1 ] ) ] are generally very hard to solve .one can not expect explicit expressions for the solutions .however , in recent years backward stochastic differential equations ( bsdes ) have been used effectively to solve certain nonlinear pdes .the general approach is to represent the solution of the nonlinear equation ( [ 0.1 ] ) as the solution of certain bsdes associated with the diffusion process generated by the linear operator .but so far , only the cases where and being bounded were considered . the main difficulty for treating the general operator in ( [ 0.0 ] ) with , is that there are no associated diffusion processes anymore .the mentioned methods used so far in the literature ceased to work .our approach is to transform the problem ( [ 0.1 ] ) to a similar problem for which the operator does not have the `` bad '' term .see below for detailed description .there exist many papers on bsdes and their applications to nonlinear pdes .we mention some related earlier results .the first result on probabilistic interpretation for solutions of semilinear parabolic pde s was obtained by peng in and subsequently in . 
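a monte carlo illustration of the feynman-kac representation recalled above can be obtained by simulating the diffusion with an euler scheme until it leaves the domain. the sketch below does this for smooth, bounded illustrative coefficients and boundary data on the unit disc; it is not meant to handle the merely measurable coefficients or the time-reversal term treated in this paper, and the estimate carries both monte carlo error and a discretization bias from detecting the exit time on a grid.

```python
import numpy as np

# Monte Carlo sketch of u(x) = E_x[ phi(X_tau) * exp( int_0^tau q(X_s) ds ) ]
# on the unit disc D in R^2, with smooth illustrative coefficients
# (not the merely measurable coefficients treated in this paper).
rng = np.random.default_rng(0)

def b(x):                      # smooth drift, illustrative
    return -0.5 * x

def q(x):                      # bounded zero-order term, kept negative for safety
    return -0.25 * np.dot(x, x)

def phi(x):                    # boundary data on the unit circle
    return x[0] ** 2 - x[1] ** 2

def u_estimate(x0, n_paths=2_000, dt=1e-3, sigma=1.0):
    total = 0.0
    for _ in range(n_paths):
        x = np.array(x0, dtype=float)
        integral = 0.0
        while np.dot(x, x) < 1.0:                  # crude discrete exit time from D
            integral += q(x) * dt
            x = x + b(x) * dt + sigma * np.sqrt(dt) * rng.normal(size=2)
        total += phi(x) * np.exp(integral)
    return total / n_paths

print("u(0.3, 0.2) ~", u_estimate([0.3, 0.2]))
```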
in , darling and pardoux obtained a viscosity solution to the dirichlet problem for a class of semilinear elliptic pdes ( through bsdes with random terminal time ) for which the linear operator is of the form where and .bsdes associated with dirichlet processes and weak solutions of semi - linear parabolic pdes were considered by lejay in where the linear operator is assumed to be for bounded coefficients and .bsdes associated with symmetric markov processes and weak solutions of semi - linear parabolic pdes were studied by bally , pardoux and stoica in where the linear operator is assumed to be symmetric with respect to some measure .bsdes and solutions of semi - linear parabolic pdes were also considered by rozkosz in for the linear operator of the form now we describe the contents of this paper in more details . our strategy is to transform the problem ( [ 0.1 ] ) by a kind of -transform to a problem of a similar kind , but with an operator that does not have the `` bad '' term .the first step will be to solve ( [ 0.1 ] ) assuming . in section [ sec2 ], we introduce the feller diffusion process whose infinitesimal generator is given by in general , , is not a semimartingale .but it has a nice martingale part , . in this section ,we prove a martingale representation theorem for the martingale part , which is crucial for the study of bsdes in subsequent sections . in section [ sec3 ], we solve a class of bsdes associated with the martingale part , : the random coefficient satisfies a certain monotonicity condition which is particularly fulfilled in the situation we are interested .the bsdes with deterministic terminal time were solved first and then the bsdes with random terminal time were studied . in section [ sec4 ] , we consider the dirichelt problem for the second order differential operator where for some and for some .we first solve the linear problem with a given function and then the nonlinear problem with the help of bsdes . finally , in section [ sec5] , we study the dirichlet problem where is a general second order differential operator given in ( [ 0.0 ] ) .we apply a transform we introduced in to transform the above problem to a problem like ( [ 0.8 ] ) and then a reverse transformation will solve the final problem .let be an elliptic operator of the following general form : where ( ) is a measurable , symmetric matrix - valued function which satisfies the uniform elliptic condition , and are measurable functions which could be singular and such that for some . here is a bounded domain in whose boundary is regular , that is , for every , , where is the first exit time of a standard brownian motion started at from the domain .let be a measurable nonlinear function .consider the following nonlinear dirichlet boundary value problem : let denote the usual sobolev space of order one : we say that is a continuous , weak solution of ( [ 1.01 ] ) if : * for any , * , * , .next we introduce two diffusion processes which will be used later .let be the feller diffusion process whose infinitesimal generator is given by where is the completed , minimal admissible filtration generated by , .the associated nonsymmetric , semi - dirichlet form with is defined by \\[-8pt ] & = & { 1\over2}\sum_{i , j=1}^{d}\int_{r^d}a_{ij}(x)\,{\partial u\over { \partial x_i}}\,{\partial v\over{\partial x_j}}\,dx-\sum_{i=1}^{d}\int_{r^d}b_i(x)\,{\partial u\over{\partial x_i}}\,v(x)\,dx.\nonumber\end{aligned}\ ] ] the process , is not a semimartingale in general .however , it is known ( see , e.g. 
, and ) that the following fukushima s decomposition holds : where is a continuous square integrable martingale with sharp bracket being given by and is a continuous process of zero quadratic variation .later we also write , to emphasize the dependence on the initial value .let denote the space of square integrable martingales w.r.t .the filtration , .the following result is a martingale representation theorem whose proof is a modification of that of theorem a.3.20 in . it will play an important role in our study of the backward stochastic differential equations associated with the martingale part . for any ,there exist predictable processes such that it is sufficient to prove ( [ 1.5 ] ) for , where is an arbitrary , but fixed constant .recall that is a hilbert space w.r.t .the inner product ] be a given progressively measurable function . for simplicity , we omit the random parameter .assume that is continuous in and satisfies : * , * , * , where , are a progressively measurable stochastic process and is a constant .let .let be the constant defined in ( [ 1.0 ] ) .[ thm3.1 ] assume <\infty ] and <\infty.\ ] ] then , there exists a unique ( -adapted ) solution to the following bsde : where .we first prove the uniqueness .set .suppose and are two solutions to equation ( [ 2.1 ] ) .then \\[-8pt ] & & \qquad\quad { } + 2\bigl(y^1(t)-y^2(t)\bigr)\langle z^1(t)-z^2(t),dm(t)\rangle \nonumber\\ & & \qquad\quad { } + \bigl\langle a(x(t))\bigl(z^1(t)-z^2(t)\bigr ) , z^1(t)-z^2(t)\bigr\rangle \,dt.\nonumber\end{aligned}\ ] ] by the chain rule , using the assumptions ( a.1 ) , ( a.2 ) and young s inequality , we get take expectation in above inequality to get \leq c_{\lambda}\int_t^te\bigl [ e^{\int_0^sd(u)\,du}|y^1(s)-y^2(s)|^2\bigr]\,ds.\ ] ] by gronwall s inequality , we conclude and hence by ( [ 2.3 ] ) .next , we prove the existence .take an even , nonnegative function with . define where .since is continuous in , it follows that as .furthermore , it is easy to see that for every , for some constant .consider the following bsde : in view of ( [ 2.4 ] ) and the assumptions ( a.2 ) , ( a.3 ) , it is known ( e.g. , ) that the above equation admits a unique solution .our aim now is to show that there exists a convergent subsequence . to this end, we need some estimates . 
applying it s formula , in view of assumptions ( a.1)(a.3 )it follows that take expectation in ( [ 2.6 ] ) to obtain +\frac{1}{2}e\biggl[\int_t^te^{\int _ 0^sd(u)\,du}\langlea(x(s))z_n(s ) , z_n(s)\rangle\,ds \biggr]\nonumber\\ & & \qquad \leq e\bigl[|\xi|^2e^{\int_0^td(s)\,ds}\bigr]+c_{\lambda}\int_t^t e\bigl[e^{\int_0^sd(u)\,du}y_n^2(s)\bigr]\,ds\\ & & \qquad \quad { } + e\biggl[\int_t^t e^{\int_0^sd(u)\,du}|f(s,0,0)|^2\,ds\biggr].\nonumber\end{aligned}\ ] ] gronwall s inequality yields \nonumber\\[-8pt]\\[-8pt ] & & \qquad \leq c\biggl\{e\bigl[|\xi|^2e^{\int_0^td(s)\,ds}\bigr]+e\biggl[\int_0^t e^{\int_0^sd(u)\,du}|f(s,0,0)|^2\,ds\biggr]\biggr\}\nonumber\end{aligned}\ ] ] and also <\infty.\ ] ] moreover , ( [ 2.6])([2.9 ] ) further imply that there exists some constant such that \nonumber\\ & & \qquad \leq c+ce\biggl[\sup_{0\leq t\leq t}\int_0^te^{\int_0^sd(u)\,du}y_n(s)\langle z_n(s),dm(s)\rangle\biggr]\nonumber\\ & & \qquad \leq c+ce \biggl [ \biggl ( \int_0^te^{2\int_0^sd(u)\,du}y_n^2(s)\langle a(x(s))z_n(s ) , z_n(s)\rangle\,ds \biggr)^{{1/2 } } \biggr]\nonumber\\ & & \qquad \leq c+ce \biggl[\sup_{0\leq s\leq t}\bigl(e^{{(1/2)}\int_0^sd(u)\,du}|y_n(s)|\bigr)\nonumber\\ & & \qquad \quad \hphantom{c+ce \biggl [ } { } \times \biggl ( \int_0^te^{\int_0^sd(u)\,du}\langle a(x(s))z_n(s ) , z_n(s)\rangle\,ds \biggr)^{{1/2 } } \biggr]\\ & & \qquad \leq c+\frac{1}{2}e\bigl[\sup_{0\leq s\leq t}\bigl(e^{\int_0^sd(u)\,du}y_n^2(s)\bigr ) \bigr]\nonumber\\ & & \qquad \quad { } + c_1e \biggl [ \int_0^te^{\int_0^sd(u)\,du}\langle a(x(s))z_n(s ) , z_n(s)\rangle\,ds \biggr].\nonumber\end{aligned}\ ] ] in view of ( [ 2.9 ] ) , this yields <\infty.\ ] ] by ( [ 2.9 ] ) and ( [ 2.11 ] ) , we can extract a subsequence such that converges to some in ) ] .observe that \\[-8pt ] & & \qquad \quad { } -\frac{1}{2}\int_t^te^{{(1/2)}\int_0^sd(u)\,du}y_{n_k}(s)d(s)\,ds\nonumber \\ & & \qquad \quad { } - \int_t^te^{{(1/2)}\int_0^sd(u)\,du}\langle z_{n_k}(s),dm(s)\rangle.\nonumber\end{aligned}\ ] ] letting in ( [ 2.12 ] ) , using the monotonicity of , following the same arguments as that in the proof of proposition 2.3 in darling and pardoux in , we can show that the limit satisfies set an application of it s formula yields that namely , is a solution to the backward equation ( [ 2.1 ] ) .the proof is complete .let satisfy ( a.1)(a.3 ) in section [ sec3.1 ]. 
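before passing to random terminal times, a purely illustrative numerical aside: for a toy bsde driven by a one-dimensional brownian motion (an assumed simplification, rather than the martingale part considered in this section) with linear driver f(y) = c·y and terminal condition equal to the squared terminal value of the brownian motion, the backward equation can be solved by the standard least-squares regression scheme sketched below, and the exact value of the solution at time 0 is available for comparison.

```python
import numpy as np

# Toy BSDE on a fixed horizon, driven by a one-dimensional Brownian motion W
# (an assumed simplification of the general martingale used in this section):
#   Y_t = xi + int_t^T c*Y_s ds - int_t^T Z_s dW_s,   xi = W_T^2.
# The exact solution is Y_t = exp(c*(T-t)) * (W_t^2 + T - t), hence Y_0 = exp(c*T)*T.
c, T, n_steps, n_paths = 0.5, 1.0, 50, 50_000
dt = T / n_steps
rng = np.random.default_rng(0)
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])

Y = W[:, -1] ** 2                                  # terminal condition xi = W_T^2
for i in range(n_steps - 1, -1, -1):
    x = W[:, i]
    if i == 0:
        Ey = np.full(n_paths, Y.mean())            # F_0 is trivial: conditional mean = plain mean
    else:
        coef = np.polyfit(x, Y, deg=2)             # least-squares proxy for E[Y_{i+1} | W_{t_i}]
        Ey = np.polyval(coef, x)
    Y = Ey + c * Ey * dt                           # explicit backward Euler step for f(y) = c*y
    # (Z_{t_i} could be estimated similarly by regressing Y_{i+1}*dW_i/dt on W_{t_i}.)

print("regression estimate of Y_0:", Y.mean())
print("exact value exp(c*T)*T    :", np.exp(c * T) * T)
```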
in this subsection , set .the following result provides existence and uniqueness for bsdes with random terminal time .let be a stopping time .suppose is -measurable .[ thm3.2 ] assume <\infty ] and <\infty,\ ] ] for some , where is the constant appeared in ( 2.1 ) .then , there exists a unique solution to the bsde furthermore , the solution satisfies <\infty,\qquad e \biggl[\int_0^{\tau}e^{\int_0^{s}d(u)\,du}|z(s)|^2\,ds \biggr]<\infty,\hspace*{-35pt}\ ] ] and <\infty.\ ] ] after the preparation of theorem [ thm3.1 ] , the proof of this theorem is similar to that of theorem 3.4 in , where , were both assumed to be constants .we only give a sketch highlighting the differences .for every , from theorem [ thm3.1 ] we know that the following bsde has a unique solution on : +\int_{\tau\wedge t}^{\tau\wedge n}f(s , y_n(s ) , z_n(s))\,ds-\int_{\tau\wedge t}^{\tau\wedge n}\langle z_n(s ) , dm(s)\rangle.\hspace*{-35pt}\ ] ] extend the definition of to all by setting ,\qquad z_n(t)=0\qquad \mbox{for } t\geq n.\ ] ] then the extended satisfies a bsde similar to ( [ 2.18 ] ) with replaced by .let .by it s formula , we have -e[\xi & & \qquad \quad { } -\int_{t\wedge\tau}^{n\wedge\tau } e^{\int_0^{s\wedge \tau}d(u)\,du}|y_n(s\wedge\tau)-y_m(s\wedge\tau)|^2d(s)\,ds\nonumber\hspace*{-35pt}\\ & & \qquad \quad { } + 2\int_{t\wedge \tau}^{n\wedge\tau}e^{\int_0^{s\wedge\tau}d(u)\,du}\bigl(y_n(s\wedge \tau)-y_m(s\wedge\tau)\bigr)\nonumber\hspace*{-35pt}\\ & & \qquad \quad\hphantom { { } + 2\int_{t\wedge\tau}^{n\wedge\tau } } { } \times \bigl ( f\bigl(s , y_n(s\wedge\tau ) , z_n(s\wedge \tau)\bigr)\nonumber\hspace*{-35pt } \\ & & \qquad \quad\hphantom { { } + 2\int_{t\wedge\tau}^{n\wedge\tau}\times \bigl ( } { } -f\bigl(s , y_m(s\wedge\tau ) , z_m(s\wedge\tau)\bigr ) \bigr)\,ds\nonumber\hspace*{-35pt}\\ & & \qquad \quad { } + 2\int_{m\wedge \tau}^{n\wedge\tau}e^{\int_0^{s\wedge\tau}d(u)\,du}\bigl(y_n(s\wedge \tau)-y_m(s\wedge\tau)\bigr)\nonumber \hspace*{-35pt}\\ & & \quad \qquad\hphantom{{}+2\int_{m\wedge\tau}^{n\wedge\tau } } { } \times f\bigl(s , y_m(s\wedge\tau ) , z_m(s\wedge \tau)\bigr)\,ds\nonumber\hspace*{-35pt}\\ & & \qquad \quad { } -2\int_{t\wedge \tau}^{n\wedge\tau}e^{\int_0^{s\wedge\tau}d(u)\,du}\bigl(y_n(s\wedge \tau)-y_m(s\wedge\tau)\bigr)\langle z_n(s\wedge\tau ) , dm(s)\rangle\nonumber\hspace*{-35pt}\\ & & \qquad \quad { } + 2\int_{t\wedge \tau}^{m\wedge\tau}e^{\int_0^{s\wedge\tau}d(u)\,du}\bigl(y_n(s\wedge \tau)-y_m(s\wedge\tau)\bigr)\langle z_m(s\wedge\tau ) , dm(s)\rangle.\nonumber\hspace*{-35pt}\end{aligned}\ ] ] choose , such that and . 
in view of the ( a.1 ) and ( a.2 ) , we have \\[-8pt ] & & \qquad \quad { } + \delta_1d_2 ^ 2\int_{t\wedge \tau}^{n\wedge\tau}e^{\int_0^{s\wedge\tau}d(u)\,du}\bigl(y_n(s\wedge \tau)-y_m(s\wedge\tau)\bigr)^2\,ds\nonumber\\ & & \qquad \quad { } + \frac{1}{\lambda\delta_1}\int_{t\wedge\tau}^{n\wedge \tau}e^{\int_0^sd(u)\,du}\bigl\langle a(x(s))\bigl(z_n(s)-z_m(s)\chi_{\{s\leq m\wedge \tau\}}\bigr),\nonumber\\ & & \hspace*{186pt } z_n(s)-z_m(s)\chi_{\{s\leq m\wedge\tau\}}\bigr\rangle\,ds.\nonumber\end{aligned}\ ] ] on the other hand , by ( a.3 ) , it follows that |\bigr)^2\,ds.\nonumber\hspace*{-35pt}\end{aligned}\ ] ] take expectation and utilize ( [ 2.19])([2.21 ] ) to obtain \nonumber\hspace*{-35pt}\\ & & \quad { } + \biggl(1-\frac{1}{\lambda\delta_1}\biggr)e \biggl[\int_{t\wedge\tau}^{n\wedge \tau}e^{\int_0^sd(u)\,du}\bigl\langle a(x(s))\bigl(z_n(s)-z_m(s)\chi_{\{s\leq m\wedge \tau\}}\bigr ) , \nonumber\hspace*{-35pt}\\ & & \hspace*{207pt } z_n(s)-z_m(s)\chi_{\{s\leq m\wedge \tau\}}\bigr\rangle\,ds \biggr ] \nonumber\hspace*{-35pt}\\[-8pt]\\[-8pt ] & & \quad { } + ( \delta-\delta_1-\delta_2)d_2 ^ 2e \biggl[\int_{t\wedge\tau}^{n\wedge\tau}e^{\int_0^sd(u)\,du}\bigl(y_n(s\wedge \tau)-y_m(s\wedge \tau)\bigr)^2\,ds \biggr]\nonumber\hspace*{-35pt}\\ & & \qquad \leq e \bigl[e^{\int_0^{n\wedge\tau}d(s)\,ds } ( e[\xi|\mathcal { f}_n]-e[\xi|\mathcal { f}_m ] ) ^2 \bigr]\nonumber\hspace*{-35pt}\\ & & \qquad \quad { } + \frac{1}{\delta_2 d_2 ^ 2}e \biggl [ \int_{m\wedge \tau}^{n\wedge\tau}e^{\int_0^{s\wedge\tau}d(u)\,du}\bigl ( since the right - hand side tends to zero as , we deduce that converges to some in .furthermore , for every , converges in .we may as well assume for all .observe that for any , + \int_{\tau\wedge t}^{n\wedge\tau}e^{{(1/2)}\int_0^{s\wedge \tau}d(u)\,du}f(s , y_n(s ) , z_n(s))\,ds\nonumber\hspace*{-35pt}\\[-8pt]\\[-8pt ] & & \qquad \quad { }-\frac{1}{2}\int_{\tau\wedge t}^{n\wedge \tau}e^{{(1/2)}\int_0^{s\wedge\tau}d(u)\,du } y_n(s)d(s)\,ds\nonumber\hspace*{-35pt}\\ & & \qquad \quad { } -\int_{\tau\wedget}^{n\wedge\tau}e^{{(1/2)}\int_0^{s\wedge \tau}d(u)\,du}\langle z_n(s ) , dm(s)\rangle.\nonumber\hspace*{-35pt}\end{aligned}\ ] ] letting yields that put an application of it s formula and ( [ 2.25 ] ) yield that hence , is a solution to the bsde ( [ 2.15 ] ) proving the existence . 
to obtain the estimates ( [ 2.16 ] ) and ( [ 2.17 ] ) , we proceed to get an uniform estimate for and then pass to the limit .let be chosen as before .similar to the proof of ( [ 2.8 ] ) , by it s formula , we have |^2e^{\int_0^{n\wedge \tau}d(s)\,ds}-\int_{t\wedge\tau}^{n\wedge \tau } e^{\int_0^sd(u)\,du}|y_n(s)|^2d(s)\,ds\nonumber\\ & & \qquad \quad { } -2\int_{t\wedge\tau}^{n\wedge\tau } e^{\int_0^sd(u)\,du}d_1(s)y_n^2(s)\,ds\nonumber\\ & & \qquad \quad { } + 2\int_{t\wedge\tau}^{n\wedge \tau } e^{\int_0^sd(u)\,du}d_2|y_n(s)||z_n(s)|\,ds\nonumber\\ & & \qquad \quad { } + 2\int_{t\wedge\tau}^{n\wedge\tau } e^{\int_0^sd(u)\,du}|y_n(s)||f(s,0,0)|\,ds\nonumber\\ & & \qquad \quad { } -2\int_{t\wedge\tau}^{n\wedge \tau } e^{\int_0^sd(u)\,du}y_n(s)\langle z_n(s),dm(s)\rangle\\ & & \qquad \leq |e[\xi|\mathcal { f}_n]|^2e^{\int_0^{n\wedge \tau}d(s)\,ds}-\int_{t\wedge\tau}^{n\wedge\tau } e^{\int _ 0^sd(u)\,du}\delta d_2 ^ 2y_n^2(s)\,ds\nonumber\\ & & \qquad \quad { } + \int_{t\wedge\tau}^{n\wedge\tau } e^{\int_0^sd(u)\,du}\delta_1 d_2 ^ 2y_n^2(s)\,ds\nonumber\\ & & \qquad \quad { } + \frac{1}{\delta_1\lambda}\int_{t\wedge\tau}^{n\wedge\tau } e^{\int_0^sd(u)\,du}\langle a(x(s))z_n(s ) , z_n(s)\rangle\ , ds \nonumber\\ & & \qquad \quad { } + \int_{t\wedge\tau}^{n\wedge\tau}\delta_2d_2 ^ 2 e^{\int_0^sd(u)\,du}y_n^2(s)\,ds+\frac{1}{\delta_2 d_2 ^ 2}\int_{t\wedge \tau}^{n\wedge\tau } e^{\int_0^sd(u)\,du}|f(s,0,0)|^2\,ds\hspace*{-8pt}\nonumber\\ & & \qquad \quad { } -2\int_{t\wedge\tau}^{n\wedge\tau } e^{\int_0^sd(u)\,du}y_n(s)\langle z_n(s),dm(s)\rangle.\nonumber\end{aligned}\ ] ] recalling the choices of , and , using burkholder s inequality , we obtain from ( [ 2.27 ] ) that \nonumber\\ \qquad & & \qquad \leq e\bigl[|\xi|^2e^{\int_0^{n\wedge\tau}d(s)\,ds}\bigr]+e \biggl[\int_{0}^ { \tau}e^{\int_0^sd(u)\,du}\frac{1}{\delta_2 d_2 ^ 2}|f(s,0,0)|^2\,ds \biggr]\\ \qquad & & \qquad \quad { } + 2ce \biggl [ \biggl(\int_{0}^{n\wedge\tau } e^{2\int_0^sd(u)\,du}y_n^2(s)\langle a(x(s))z_n(s ) , z_n(s)\rangle\,ds \biggr)^{{1/2 } }\biggr]\nonumber\\ \qquad & & \qquad \leq e\bigl[|\xi|^2e^{\int_0^{n\wedge\tau}d(s)\,ds}\bigr]+e\biggl[\int_{0}^ { \tau } e^{\int_0^sd(u)\,du}\frac{1}{\delta_2 d_2 ^ 2}|f(s,0,0)|^2\,ds \biggr]\nonumber\\ \qquad & & \qquad \quad { } + \frac{1}{2}e \bigl [ \sup_{0\leq t\leq n}|y_n(t\wedge \tau)|^2e^{\int_0^{t\wedge\tau}d(s)\,ds } \bigr]\nonumber\\ \qquad & & \qquad \quad { } + c_1e \biggl [ \int_{0}^{n\wedge\tau } e^{\int_0^sd(u)\,du}\langle a(x(s))z_n(s ) , z_n(s)\rangle\,ds \biggr].\nonumber\end{aligned}\ ] ] in view of ( [ 2.27 ] ) , as the proof of ( [ 2.9 ] ) , we can show that <\infty.\ ] ] ( [ 2.29 ] ) and ( [ 2.28 ] ) together with our assumptions on and imply <\infty.\ ] ] applying fatou lemma , ( [ 2.17 ] ) follows .let be a borel measurable function .assume that is continuous in and satisfies : * , where is a measurable function on .* . * let be a bounded regular domain .define given .consider for each the following bsde : \\[-8pt ] & & { } -\int_{t\wedge\tau_d^x}^{\tau_d^x}\langle z_x(s),dm_x(s)\rangle,\nonumber\end{aligned}\ ] ] where is the martingale part of . as a consequence of theorem [ thm3.2 ], we have the following theorem . [thm3.3 ] suppose for , <\infty,\ ] ] for some and <\infty.\ ] ] the bsde ( [ 2.32 ] ) admits a unique solution .furthermore , in previous sections , will denote the diffusion process defined in ( [ 1.3 ] ) . consider the second order differential operator be a bounded domain with regular boundary ( w.r.t . 
the laplace operator ) and a measurable function satisfying [ thm4.1 ] assume ( [ 3.2 ] ) and that there exists such that <\infty.\ ] ] then there is a unique , continuous weak solution to the dirichlet boundary value problem ( [ 3.3 ] ) which is given by .\ ] ] write ,\ ] ] and .\ ] ] we know from theorem 4.3 in that is the unique , continuous weak solution to the problem so it is sufficient to show that is the unique , continuous weak solution to the following problem : by lemma 5.7 in and proposition 3.16 in , we know that belong to .let denote the resolvent operators of the generator on with dirichlet boundary condition , that is , .\ ] ] by the markov property , it is easy to see that since is strong continuous , it follows that in .this shows that and .the proof is complete .let be a borel measurable function that satisfies : * * , * where is a measurable function and are constants .consider the semilinear dirichlet boundary value problem where .[ thm4.2 ] assume <\infty,\ ] ] for some .the dirichlet boundary value problem ( [ 3.6 ] ) has a unique continuous weak solution .set .according to theorem [ thm3.3 ] , for every the following bsde : \\[-8pt ] & & { } -\int_{t\wedge\tau_d^x}^{\tau_d^x}\langle z_x(s),dm_x(s)\rangle,\nonumber\end{aligned}\ ] ] admits a unique solution .put and . by the strong markov property of and the uniqueness of the bsde ( [ 3.7 ] ) , it is easy to see that now consider the following problem : where is defined as in section [ sec2 ] . by theorem [ thm4.1 ] , problem ([ 3.8 ] ) has a unique continuous weak solution .as , it follows from the decomposition of the dirichlet process ( see ) that \\[-8pt ] & = & \varphi(x_x(\tau_d^x))+\int_{t\wedge\tau_d^x}^{\tau_d^x } f(x_x(s),y_x(s ) , z_x(s)))\,ds\nonumber\\ & & { } -\int_{t\wedge\tau_d^x}^{\tau_d^x}\bigl\langle\nabla u\bigl(x(s\wedge \tau_d^x)\bigr),dm_x(s)\bigr\rangle.\nonumber\end{aligned}\ ] ] take conditional expectation both in ( [ 3.9 ] ) and ( [ 3.7 ] ) to discover .\end{aligned}\ ] ] in particular , let to obtain . on the other hand , comparing ( [ 3.7 ] ) with ( [ 3.9 ] ) and by the uniqueness of decomposition of semimartingales , we deduce that for all . by it s isometry, we have \nonumber\\ & & \qquad = e \biggl [ \biggl(\int_{0}^{\infty}\bigl\langle\bigl(\nabla u(x(s ) ) -v_0(x_x(s))\bigr)\chi_{\{s<\tau_d^x\}},dm_x(s)\bigr\rangle \biggr)^2 \biggr]\nonumber\\[-8pt]\\[-8pt ] & & \qquad = e \biggl [ \int_{0}^{\infty}\bigl\langle a(x_x(s))\bigl(\nabla u(x(s ) ) -v_0(x_x(s))\bigr ) , \nonumber\\ & & \hspace*{122pt } \nabla u(x(s))-v_0(x_x(s ) ) \bigr\rangle \chi_{\{s<\tau_d^x\}}\,ds \biggr]=0.\nonumber\end{aligned}\ ] ] by fubini theorem and the uniform ellipticity of the matrix , we deduce that =0\ ] ] a.e . in with respect to the lebesgue measure , where $ ] .the strong continuity of the semigroup implies that a.e .returning to problem ( [ 3.8 ] ) , we see that actually is a weak solution to the nonlinear problem : suppose is another solution to the problem ( [ 3.12 ] ) . 
by the decomposition of the dirichlet process , we find that is also a solution to the bsde ( [ 3.7 ] ) .the uniqueness of the bsde implies that .in particular , .this proves the uniqueness .in this section , we study the semilinear second order elliptic pdes of the following form : where the operator is given by as in section [ sec2 ] and .consider the following conditions : [ thm5.1 ] suppose that , hold and \nonumber \\ & & \qquad < \infty\nonumber\end{aligned}\ ] ] for some , where is the diffusion generated by as in section [ sec2 ] and is the first exit time of from .then there exists a unique , continuous weak solution to equation ( [ 4.1 ] ) .set put let so that . by lemma 3.2 in ( see also ) , there exits a bounded function such that where is the zero energy part of the fukushima decomposition for the dirichlet process .furthermore , satisfies the following equation in the distributional sense : note that by sobolev embedding theorem , if we extend on .this implies that and are continuous additive functionals of in the strict sense ( see ) , and so is .thus , hence , \\[-8pt ] & & \hphantom{{}\times\exp \biggl ( } { } + \int_0^{t } \biggl(q-\frac12 ( b-\hat{b}-a\nabla v ) a^{-1}(b-\hat{b}-a\nabla v)^ * \biggr)(x^0(s))\,ds\nonumber \\ & & \hspace*{95pt } { } + \int_0^{t } \biggl(\frac{1}{2 } ( \nabla v)a(\nabla v)^ * -\langle b-\hat{b } , \nabla v\rangle \biggr)(x^0(s))\,ds \biggr).\nonumber\end{aligned}\ ] ] note that is well defined under for every . set .introduce \,{\partial\over{\partial x_i } } \\ & & { } -\langle b-\hat{b},\nabla v\rangle(x ) + { 1\over2}(\nabla v)a(\nabla v)^*(x)+q(x).\end{aligned}\ ] ] let be the diffusion process whose infinitesimal generator is given by \,{\partial\over{\partial x_i}}.\ ] ] it is known from that is absolutely continuous with respect to and where \\[-8pt ] & & \hphantom{\exp \biggl ( } { } -\int_0^{t } \biggl(\frac12 ( b-\hat{b}-a\nabla v ) a^{-1}(b-\hat{b}-a\nabla v)^ * \biggr)(x^0(s))\,ds \biggr).\nonumber\end{aligned}\ ] ] put then consider the following nonlinear elliptic partial differential equation : in view of ( [ e : zv ] ) , condition ( [ 4.0 ] ) implies that \\[-8pt ] & & \hphantom{\hat{e}_x \biggl[\exp \biggl ( } { } + \int_0^{\tau_d } \biggl(\frac{1}{2 } ( \nabla v)a(\nabla v)^ * -\langle b-\hat{b } , \nabla v\rangle \biggr)(x^0(s))\,ds\biggr ) \biggr]<\infty,\nonumber\end{aligned}\ ] ] where indicates that the expectation is taken under . from theorem [ thm4.2 ] , it follows that equation ( [ 4.2 ] ) admits a unique weak solution .set .we will verify that is a weak solution to equation ( [ 4.1 ] ) . indeed , for , since is a weak solution to equation ( [ 4.2 ] ), it follows that \over{\partial x_i}}\,{\partial[h^{-1}(x)\psi]\over { \partial x_j}}\,dx\\ & & \quad { } -\sum_{i=1}^{d}\int_d[b_i(x)-\hat{b}_i(x)-(a\nabla v)_i(x)]\,{\partial[h(x)u(x)]\over{\partial x_i}}\,h^{-1}(x)\psi \,dx\\ & & \quad { } + \int_d\langle b-\hat{b},\nabla v(x)\rangle u(x)\psi(x)\,dx \\ & & \quad { } -{1\over2}\int_d(\nabla v)a(\nabla v)^*(x)u(x)\psi \,dx-\int _ dq(x)u(x)\psi(x ) \,dx\\ & & \qquad = \int_df(x , u(x))\psi(x ) \,dx.\end{aligned}\ ] ] denote the terms on the left of the above equality , respectively , by , , , , . 
clearly , using chain rules , rearranging terms , it turns out that \,{\partial[\psi u(x)]\over{\partial x_i}}\,dx\\ & & { } -\sum_{i=1}^{d}\int_d(a\nabla v)_i(x)\,{\partial\psi\over{\partial x_i}}\,u(x ) \,dx+\sum_{i=1}^{d}\int _d(a\nabla v)_i(x)\,{\partial v \over{\partial x_i}}\,u(x)\psi \,dx.\nonumber\end{aligned}\ ] ] in view of ( [ e : vv ] ) , \,{\partial[\psi u(x)]\over{\partial x_i}}\,dx\nonumber\\[-8pt]\\[-8pt ] & & \qquad = \frac{1}{2}\sum _ { i=1}^{d}\int_d(a\nabla v)_i(x)\,{\partial[\psi u(x)]\over{\partial x_i}}\,dx.\nonumber\end{aligned}\ ] ] thus , \over{\partial x_i}}\,dx\\ & & { } -\sum_{i=1}^{d}\int_d(a\nabla v)_i(x)\,{\partial\psi\over{\partial x_i}}\,u(x ) \,dx+\sum_{i=1}^{d}\int _d(a\nabla v)_i(x)\,{\partial v \over{\partial x_i}}\,u(x)\psi \,dx.\nonumber\end{aligned}\ ] ] after cancelations , it is now easy to see that \\[-8pt ] & & { } -\sum_{i}^{d}\int_d\hat{b}\,{\partial \psi\over \partial x_i}\,u(x)\,dx-\int_dq(x)u(x)\psi(x ) \,dx\nonumber \\ & = & \int_df(x , u(x))\psi(x ) \,dx.\nonumber\end{aligned}\ ] ] since is arbitrary , we conclude that is a weak solution of equation ( [ 4.1 ] ) .suppose is a continuous weak solution to equation ( [ 4.1 ] ) .put . reversing the above process, we see that is a weak solution to equation ( [ 4.2 ] ) .the uniqueness of the solution of equation ( [ 4.1 ] ) follows from that of equation ( [ 4.2 ] ) .
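The probabilistic representation used throughout this section reduces, in the purely linear case with zero source term and generator one half the Laplacian, to the classical formula u(x) = E_x[phi(B_tau)] for the harmonic Dirichlet problem. The following sketch is only a toy illustration of that special case, not an implementation of the BSDE construction in the theorems above; the domain, boundary datum and all numerical parameters are hypothetical choices. Since the chosen boundary datum is itself harmonic, the Monte Carlo estimate can be checked against the exact value.

```python
import numpy as np

def mc_dirichlet(x0, phi, n_paths=2000, dt=2e-3, rng=None):
    """Estimate u(x0) = E_x0[ phi(B_tau) ] for the Laplace--Dirichlet problem on
    the unit disk by simulating Brownian paths until they exit the disk."""
    if rng is None:
        rng = np.random.default_rng(0)
    total = 0.0
    for _ in range(n_paths):
        x = np.array(x0, dtype=float)
        while np.dot(x, x) < 1.0:                    # still inside the disk
            x = x + np.sqrt(dt) * rng.standard_normal(2)
        x = x / np.linalg.norm(x)                    # project the overshoot onto the boundary
        total += phi(x)
    return total / n_paths

phi = lambda x: x[0] ** 2 - x[1] ** 2                # harmonic, so u = phi inside the disk
print(mc_dirichlet((0.3, 0.4), phi), phi(np.array([0.3, 0.4])))
```

The estimate should be close to the exact value 0.09 minus 0.16 = -0.07, up to Monte Carlo noise and the time-step bias of the crude exit-time scheme.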
In this paper we prove that there exists a unique solution to the Dirichlet boundary value problem for a general class of semilinear second order elliptic partial differential equations. Our approach is probabilistic: the theory of Dirichlet processes and backward stochastic differential equations plays a crucial role.
recent years witness an avalanche investigation of complex networks .complex systems in diverse fields can be described with networks , the elements as nodes and the relations between these elements as edges .the structure - induced features of dynamical systems on networks attract special attentions , to cite examples , the synchronization of coupled oscillators , the epidemic spreading and the response of networks to external stimuli .synchronization is a wide - ranging phenomenon which can be found in social , physical and biological systems .recent works show that some structure features of complex networks , such as the small - world effect and the scale - free property , can enhance effectively the synchronizabilities of identical oscillators on the networks , i.e. , synchronization can occur in a much more wide range of the coupling strength .we consider a network of coupled identical oscillators .the network structure can be represented with the adjacent matrix , whose element is and if the nodes and are disconnected and connected , respectively . denoting the state of the oscillator on the node as ,the dynamical process of the system is governed by the following equations , where governs the individual motion of the oscillator , the coupling strength and the output function .the matrix is a laplacian matrix , which reads , where is the degree of the node , i.e. , the number of the nodes connecting directly with the node .the eigenvalues of are real and nonnegative and the smallest one is zero .that is , we can rank all the possible eigenvalues of this matrix as .herein , we consider the fully synchronized state , i.e. , as for any pair of nodes and .synchronizability of the considered network of oscillators can be quantified through the eigenvalue spectrum of the laplacian matrix . herewe review briefly the general framework established in .the linear stability of the synchronized state is determined by the corresponding variational equations , the diagonalized block form of which reads , $ ] . is the different modes of perturbation from the synchronized state . for the block, we have , .the synchronized state is stable if the lyapunov exponents for these equations satisfy for .detailed investigations show that for many dynamical systems , there is a single interval of the coupling strength , in which all the lyapunov exponents are negative . in this case, the synchronized state is linearly stable if and only if . while depends on the the dynamics , the eigenratio depends only on the topological structure of the networkhence , this eigenratio represents the impacts of the network structure on the networks s synchronizability .this framework has stimulated an avalanche investigation on the synchronization processes on complex networks .it has been widely accepted as the quantity index of the synchronizability of networks . however , the eigenratio is a lyapunov exponent - based index .it can guarantee the linear stability of the synchronized state .it can not provide enough information on how the network structure impacts the dynamical process from an arbitrary initial state to the final synchronized state .how the structures of complex networks impact the synchronization is still a basic problem to be understood in detail . 
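The eigenratio discussed above can be computed directly from the combinatorial Laplacian of the network. A minimal sketch follows, assuming the numpy and networkx packages and a connected graph; the particular network constructed here is only an example.

```python
import numpy as np
import networkx as nx

def eigenratio(G):
    """Ratio of the largest to the second-smallest Laplacian eigenvalue of G.
    A smaller ratio corresponds to a wider interval of stable coupling strengths."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    ev = np.sort(np.linalg.eigvalsh(L))     # real and non-negative; ev[0] is (numerically) zero
    return ev[-1] / ev[1]

G = nx.watts_strogatz_graph(500, 6, 0.2, seed=1)
print(eigenratio(G))
```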
in this paper , by means of the random matrix theory ( rmt ) , we try to present a possible dynamical mechanism of the enhancement effect , based upon which we suggested a new dynamic - based index of the synchronizabilities of networks .the rmt was developed by wigner , dyson , mehta , and others to understand the energy levels of complex quantum systems , especially heavy nuclei .because of the complexity of the interactions , we can postulate that the elements of the hamiltonian describing a heavy nucleus are random variables drawn from a probability distribution and these elements are independent with each other .a series of remarkable predictions are found to be in agreement with the experimental data .the great successes of rmt in analyzing complex nuclear spectra has stimulated a widely extension of this theory to several other fields , such as the quantum chaos , the time series analysis , the transport in disordered mesoscopic systems , the complex networks , and even the qcd in field theory .for the complex quantum systems , the predictions represent an average over all possible interactions .the deviations from the universal predictions are the clues that can be used to identify system specific , non - random properties of the system under consideration .one of the most important concepts in rmt is the nearest neighbor level spacing ( nnls ) distribution .enormous experimental and numerical evidence tells us that if the classical motion of a dynamical system is regular , the nnls distribution of the corresponding quantum system behaves according to a poisson distribution .if the classical motion is chaotic , the nnls distribution will behave in accordance with the wigner dyson ensembles , i.e , . is the nnls .the nnls distribution of a quantum system can tell us the dynamical properties of the corresponding classical system . this fact is used in this paper to bridge the structure of a network with the dynamical characteristics of the dynamical system defined on it . from the state of the considered system , , we can construct the collective motion of the system as , where and are the phase and the amplitude of the oscillator . is the other oscillation - related parameters . describes the elastic wave on the considered network and presents the displacements at the positions at time . because of the identification of the oscillators , the individual motions should behave same except the phases and the amplitudes .the synchronization process can be described as the transition from an arbitrary initial collective state , , to the final fully synchronized state , . the probability of the transition should be the synchronizability of the considered network .the larger the transition probability , the easier for the system to achieve the fully synchronized state .the collective states are the elastic waves on the considered network .this kind of classical waves are analogous with the quantum wave of a tight - binding electron walking on the network .they obey exactly a same wave equation . in literature, this analogy is used to extend the concept of anderson localization state to the classical phenomena as elastic and optical waves . 
in this paperwe will use it to find a quantitative description of the transition probability between the collective states .the tight - binding hamiltonian of an electron walking on the network reads , where is the site energy of the oscillator , the hopping integral between the nodes and .because of the identification of the oscillators , all the site energies are same , denoted with . generally , we can set and , which leads to the relation . ranking the spectrum of as ,we denote the corresponding quantum states with .hence , the nnls distribution of the adjacent matrix can show us the dynamical characteristics of the collective motions .if the nnls obeys the poisson form , the transition probability between two eigenstates and will decrease rapidly with the increase of , and the transition occurs mainly between the nearest neighboring eigenstates .this state is called quantum regular state .if the nnls obeys wigner form , the transitions between all the states in the same chaotic regime the initial state belongs to can occur with almost same probabilities .the electron is in a quantum chaotic state .the corresponding collective states of the classical dynamical system to the quantum chaotic and regular sates are called collective chaotic and collective regular states , respectively .if the dynamical system is in a collective chaotic state , the collective motion modes in same chaotic regimes can transition between each other abruptly , while if the system is in a collective regular state only the neighboring collective motion modes can transition between each other .generally , a dynamical system may be in an intermediate state between the regular and the chaotic states , which is called soft chaotic state .the nnls distribution can be obtained by means of a standard procedure .the first step is the so - called unfolding . in the theoretical predictions for the nnls ,the spacings are expressed in units of average eigenvalue spacing .generally , the average eigenvalue spacing changes from one part of the eigenvalue spectrum to the next .we must convert the original eigenvalues to new variables , called unfolded eigenvalues , to ensure that the spacings between adjacent eigenvalues are expressed in units of local mean eigenvalue spacing , and thus facilitates comparison with analytical results .define the cumulative density function as , , where is the density of the original spectrum .dividing into the smooth term and the fluctuation term , i.e. , , the unfolded energy levels can be obtained as , if the system is in a soft chaotic state , the nnls distribution can be described with the brody form , which reads , .\ ] ] we can define the accumulative probability distribution as , .the parameter can be obtained from the linear relation as follows , = \beta lns - \beta ln\eta .\ ] ] for the special condition , the probability distribution function ( pdf ) degenerates to the poisson form and the system is in a regular state .for another condition , the pdf obeys the wigner - dyson distribution and the system is in a hard chaotic state .if the system is in an intermediate soft chaotic state , we have , .hence , from the perspective of random matrix theory , the synchronizability can be described with the parameter .the larger the value of , the easier for the system to become fully synchronized . 
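The unfolding-and-fitting procedure just described can be sketched as follows. The Brody form is used here in its standard parameterisation, with cumulative distribution F(s) = 1 - exp(-eta * s**(beta+1)), so that ln[-ln(1-F(s))] is linear in ln s with slope beta + 1; this is a hedged reconstruction, since the fitting relation printed above was partly garbled in extraction. Approximating the smooth part of the spectral staircase by a low-order polynomial is one common choice, not the only one.

```python
import numpy as np

def unfold(eigs, deg=9):
    """Map raw eigenvalues to unfolded ones with unit mean spacing, using a
    polynomial approximation of the smooth part of the cumulative staircase."""
    e = np.sort(np.asarray(eigs, dtype=float))
    staircase = np.arange(1, len(e) + 1)
    smooth = np.polynomial.Polynomial.fit(e, staircase, deg)
    return smooth(e)

def brody_beta(eigs):
    """Estimate the Brody parameter beta from the empirical spacing CDF:
    ln(-ln(1 - F(s))) is regressed on ln s, and the slope equals beta + 1."""
    s = np.diff(unfold(eigs))
    s = np.sort(s[s > 1e-12])                         # drop non-positive spacings
    F = (np.arange(1, len(s) + 1) - 0.5) / len(s)     # empirical CDF, strictly below 1
    slope, _ = np.polyfit(np.log(s), np.log(-np.log(1.0 - F)), 1)
    return slope - 1.0
```

In the setting of this paper the input eigenvalues would be those of the adjacency matrix, since the tight-binding Hamiltonian differs from it only by a constant shift and scale.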
by this way we find a possible dynamical mechanism for the enhancement effects of the network structures on the synchronization processes .in reference , the authors prove that the spectra of the erdos - renyi , the watts - strogatz(ws ) small - world , and the growing random networks ( grn ) can be described in a unified way with the brody distribution . herein , we are interested in the relation between the parameter and the eigenratio .detailed works show that is a good measure of the synchronizability of complex networks , especially the small world and scale - free networks .figure 1 shows the relation between and for ws small - world networks .we use the one - dimensional regular lattice - based model . in the regular latticeeach node is connected with its right - handed neighbors .connecting the starting and the end of the lattice , with the rewiring probability the end of each edge to a randomly selected node . in this rewiring procedure self - edge anddouble edges are forbidden .numerical results for ws small - world networks with and are presented .we can find that the brody distribution can capture the characteristics of the pdfs of the nnls very well , as shown in the panel ( a ) in fig.1 . with the increase of , the parameter increases rapidly from to , while the parameter decreases rapidly from to .hence , there exists a monotonous relation between the two parameters and . figure 2 gives the results for barabasi - albert ( ba ) scale - free networks .starting from a seed of several connected nodes , at each time step connect a new node to the existing graph with edges . the preferential probability to create an edge between the new node and an existing node is proportional to its degree , i.e. , .numerical results for ba scale - free networks with and are presented .all the pdfs of the nnls obey the brody distribution almost exactly . with the increase of , the parameter increases from to , while the parameter decreases from to .we can find also a monotonous relation between the two parameters and . for , we have .that is , rather than the `` repulsions '' or un - correlations between the levels , there are a certain `` attractiveness '' between the levels . in the construction of the ba networks with , each time only one node is added to the existing network . the resulting network is a tree - like structure without loops at all . 
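A minimal sketch of the numerical experiment described above for the WS and BA ensembles, reusing the eigenratio and Brody-fit helpers from the previous sketches; the network sizes and parameter values below are illustrative and are not the ones used for the figures.

```python
import numpy as np
import networkx as nx

# Assumes eigenratio() and brody_beta() from the sketches above are in scope.
for p in (0.01, 0.1, 0.5):
    G = nx.watts_strogatz_graph(1000, 10, p, seed=2)
    A = nx.adjacency_matrix(G).toarray().astype(float)
    print("WS p=%.2f  ratio=%.1f  beta=%.2f"
          % (p, eigenratio(G), brody_beta(np.linalg.eigvalsh(A))))

for m in (2, 4, 7):
    G = nx.barabasi_albert_graph(1000, m, seed=2)
    A = nx.adjacency_matrix(G).toarray().astype(float)
    print("BA m=%d  ratio=%.1f  beta=%.2f"
          % (m, eigenratio(G), brody_beta(np.linalg.eigvalsh(A))))
```

The monotonic relation reported in the text corresponds to the eigenratio decreasing while beta increases as p (for WS) or m (for BA) grows.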
dividing the network into subnetworks , we can find that many of them have similar structures , which leads their corresponding level - structures being almost same .because of the weak coupling between the subnetworks , the total level structure can be produced just by put all the corresponding levels together .this kind of level - structure will lead many nnls tending to zero .hence , is an extreme case induced by tree - like structure .this special kind of tree - like ba networks can not enhance the synchronization at all .in summary , by means of the nnls distribution we consider the collective dynamics in the networks of coupling identical oscillators .for the two kinds of networks , we can find the monotonous relation between the two parameters and .this monotonous relation tells us that the high synchronizability is accompanied with a high extent of collective chaos .the collective chaos may increase significantly the transition probability of the initial random state to the final synchronized state .the collective chaotic processes may be the dynamical mechanism for the enhancement impacts of network structures on the synchronizabilities .the parameter in the nnls distribution can be a much more informative measure of the synchronizability of complex networks .it reveals the information of the dynamical processes from an arbitrary initial state to the final synchronized state .it can be regarded in a certain degree as the bridge between the structures and the dynamics of complex networks .one paradox may be raised about the argument in the present paper .the wigner distribution implies a larger correlation between the eigenstates of the network than does the poisson distribution . at the same time, one can reverse the argument that wigner distribution implies level repulsion and , therefore , different frequencies of oscillation of the normal modes , and therefore no synchronization when these modes are coupled .it should be emphasized that the eigenratio and the index should be used together to capture the impacts of the network structures on the synchronization processes . represents the linear stability of the synchronized state , but it can not tell us how the final synchronized state is reached from the initial state . on the other hand , provides us a possible mechanism for this dynamical processes , but it can not tell us the transition orientation . and reflect some features of the impacts of the network structures on the synchronization processes , but there may be some new important features to be found .this work was supported by the national science foundation of china under grant no.70571074 , no.70471033 and no.10635040 .it is also supported by the specialized research fund for the doctoral program of higher education ( srfd no .20020358009 ) .one of the authors would like to thank prof .y. zhuo and j. gu in china institute of atomic energy for stimulating discussions . 1 r. albert , and a. -l .barabasi , rev .phys . * 74 * , 47(2002 ) . s. n. dorogovtsev , and j. f. f. mendes , adv . phys .* 51 * , 1079(2002 ) .m. e. j. newman , siam review * 45 * , 117(2003 ) .l. f. lago - fernndez , r. huerta , f. corbacho , and j. a. siguenza , phys .lett . * 84 * , 2758 ( 2000 ) .p. m. gade , and c. -k .hu , phys .e * 62 * , 6409 ( 2000 ) .x. f.wang , and g. chen , int .j. bifurcation chaos appl .sci . eng .* 12 * , 187(2002 ) .m. barahona , and l. m. pecora , phys .lett . * 89 * , 054101(2002 ) .p. g. lind , j. a. c. gallas , and h. j. 
herrmann , phys .e * 70 * , 056207 ( 2004 ) .f. liljeros , c. r. edling , l. a. n. amaral , h. e. stanley , and y. aberg , nature * 411 * , 907 ( 2001 ) .h. yang , f. zhao , z. li , w. zhang , and y. zhou , int .b * 18 * , 2734(2004 ) .yaneer , and i. r. epstein , proc .* 101 * , 4341(2004 ) .l. m. pecora , and t. l. carroll , phys .lett . * 80 * , 2109(1998 ) . m. barahona and l.m .pecora , phys .lett . * 89 * , 054101 ( 2002 ) . ; k. s. fink , g. johnson , t. carroll , d. mar , and l. pecora , phys . rev .e * 61 * , 5080 ( 2000 ) .t. guhr , a. mueller- groeling , and h. a. weidenmueller , phys . rep .* 299 * , 189(1998 ) .v. plerou , p. gopikrishnan , b. rosenow , l. a. n. amaral , and t. guhr , phys .e * 65 * , 066126(2002 ) .seba , phys .lett . * 91 * , 198104 - 1(2003 ) .h. yang , f. zhao , w. zhang , and z. li , physica a * 347 * , 704(2005 ) . h. yang , f. zhao , y. zhuo , x. wu , and z. li , physica a * 312 * , 23 ( 2002 ) .h. yang , f. zhao , y. zhuo , x. wu , and z. li , phys .a * 292 * , 349 ( 2002 ) .s. n. dorogovtsev , a. v. goltsev , j. f. f. mendes , and a. n. samukhin , phys . rev .e * 68 * , 046109(2003 ) .k. a. eriksen , i. simonsen , s. maslov , and k. sneppen , phys .* 90 , * 148701(2003 ) .goh , b. kahng , and d. kim , phys .e * 64 * , 051903 ( 2001 ) .i. j. farkas , i. derenyi , a .-barabasi , and t. vicsek , phys .e * 64 * , 026704(2001 ) .r. monasson , eur .j. b * 12 * , 555 ( 1999 ) .h. yang , f. zhao , l. qi , and b. hu , phys .e * 69 * , 066104(2004 ) .f. zhao , h. yang , and b. wang , phys .e * 72*,046119(2005 ) .h. yang , f. zhao , and b. wang , physica a * 364*,544(2006 ) .m.a.m . de aguiar and yaneer bar - yam , phys . rev .e * 71 * , 016106(2005 ) .w. anderson , phys . rev . * 109*,1492(1958 ) .s. john , h. sompolinsky , and m. j. stephen , phys .b * 27 * , 5592(1983 ) .s. john , h. sompolinsky , and m. j. stephen , phys .b * 28*,6358(1983 ) .s. john , h. sompolinsky , and m. j. stephen , phys .. t. a. brody , j. flores , j.b .french , p.a .mello , a. pandey , and s.s.m .wong , rev .* 53 * , 385(1981 ) .t. nishikawa , a. e. motter , y . c. lai , and f. c. hoppensteadt , phys .lett . * 91*,014101(2003 ) .h. hong , m. y. choi , and b. j. kim , phys .e * 65*,026139(2002 ) . c. zhou , a. e. motter , and j. kurths , phys .lett . * 96*,034101(2006 ) .h. guclu , g. korniss , m. a. novotny , z. toroczkai , and z. racz , * 73 * , 066115(2006 ) .d. j. watts , s. h. strogatz , nature ( london ) * 393 * , 440(1998 ) .barabasi , and a. albert , science * 286 * , 509(1999 ) .[ 1]the relation of versus for the constructed ws small - world networks .( a ) several typical results for pdf of the nnls . in the interested regionsa brody distribution can capture the characteristics very well .( b ) with the increase of the rewiring probability the eigenratio decreases rapidly .( c ) with the increase of the rewiring probability the parameter increases rapidly .( d ) the monotonous relation between the two parameters and . , title="fig : " ] [ 1]the relation of versus for the constructed ba scale - free networks . ( a ) results for pdf of the nnls . in the interested regionsa brody distribution can capture the characteristics very well .( b ) with the increase of the eigenratio decreases significantly .( c ) with the increase of the parameter increases significantly .( d ) the monotonous relation between the two parameters and . , title="fig : " ]
The random matrix theory is used to bridge the network structures and the dynamical processes defined on them. We propose a possible dynamical mechanism for the enhancement effect of network structures on synchronization processes, based upon which a dynamic-based index of the synchronizability is introduced in the present paper. The impact of network structures on the synchronizability of the identical oscillators defined on them is an important topic both for theory and for potential applications. From the viewpoint of collective motions, the synchronized state is a special elastic wave occurring on the network, while the initial state is an abruptly assigned elastic state. The synchronizability should be the transition probability between the two states. By means of the analogy between the collective state and the motion of an electron walking on the network, we can use the quantum motion of the electron to find the motion characteristics of the collective states. The random matrix theory (RMT) tells us that the nearest-neighbour level spacing distribution of a quantum system can capture the dynamical behaviour of the quantum system and of the corresponding classical system. A Poisson distribution shows that transitions can occur only between successive eigenstates, while a Wigner distribution shows that transitions can occur between any two eigenstates. A Brody distribution, intermediate between these two extreme conditions, gives a quantitative description of the transition probability. Hence, it can be used as an index to represent the synchronizability. As examples, the Watts-Strogatz (WS) small-world networks and the Barabasi-Albert (BA) scale-free networks are considered in this paper. Comparison with the widely used eigenratio index shows that this index describes the synchronizability very well. It is a dynamic-based index and can be employed as a measure of the structures of complex networks.
recent observations of type ia supernovae supported by wmap measurements of anisotropy of the angular temperature fluctuations indicate that our universe is spatially flat and accelerating . on the other hand, the power spectrum of galaxy clustering indicates that about of critical density of the universe should be in the form of non - relativistic matter ( cold dark matter and baryons ) .the remaining , almost two thirds of the critical energy , may be in the form of a component having negative pressure ( dark energy ) .although the nature of dark energy is unknown , the positive cosmological constant term seems to be a serious candidate for the description of dark energy . in this casethe cosmological constant and energy density remain constant with time and the corresponding mass density , where is the hubble constant expressed in units of and .although the cold dark matter ( cdm ) model with the cosmological constant and dust provides an excellent explanation of the snia data , the present value of is times smaller than value predicted by the particle physics model .many alternative condidates for dark energy have been advanced and some of them are in good agreement with the current observational constraints .moreover , it is a natural suggestion that -term has a dynamical nature like in the inflationary scenario .therefore , it is reasonable to consider the next simplest form of dark energy alternative to the cosmological constant for which the equation of state depends upon time in such a way that , where is a coefficient of the equation of state parametrized by the scale factor or redshift .it has been demonstrated that dynamics of such a system can be represented by one - dimensional hamiltonian flow where the overdot means differentiation with respect to the cosmological time and is a potential function of the scale factor given by where is the effective energy density which satisfies the conservation condition for example , for the model we have of course the trajectories of the system lie on the zero energy surface .hamiltonian ( [ eq:1 ] ) can be rewritten in the following form convenient for our current reconstruction of the equation of state from the potential function , namelly where , here overdot means differentiation with respect to some new reparametrized time , .for example , for mixture of non - interacting fluids potential takes the form where for -th fluid and ( similar to the quiessence model of dark energy ) . due to the hamiltonian structure of friedmann - robertson - walker dynamics , with the general form of the equation of state , the dynamicsis uniquely determined by the potential function ( or ) of the system . only for simplicity of presentationwe assume that the universe is spatially flat ( in the opposite case trajectories of the system should be considered on the energy level ) .let us note that from the potential function we can obtain the equation of state coefficient , the term has a simple interpretation as an elasticity coefficient of the potential function with respect to the scale factor .thus from the potential function both and can be unambiguously calculated it is well known in a flat frw cosmology the luminosity distance and the coordinate distance to an object at redshift are simply related as ( here and elsewhere ) . 
from equation ( [ eq:8 ] )the hubble parameter is given by } = \bigg[\frac{d}{dz}\bigg(\frac{d_{l}(z)}{1+z}\bigg)\bigg ] .\label{eq:9}\ ] ] it is crucial that formula ( [ eq:9 ] ) is purely kinematic and depends neither upon a microscopic model of matter , including the -term , nor on a dynamical theory of gravity . due to existance of such a relation it would be possible to calculate the potential function which is : ^{-2}}{2(1+z)^{2}}. \label{eq:10}\ ] ] this in turn allows us to reconstruct the potential from snia data .let us note that depends on the first derivative with respect to whereas is associated with the second derivative .let us also note that from of the potential function for a one - dimensional particle - universe moving in the configurational ( or )-space can be reconstructed from recent measurements of angular size of high - z compact radio sources compiled by gurvits _the corresponding formula is ^{-2}}{2(1+z)^{2 } } , \label{eq:10a}\ ] ] where the luminosity distance and the angular distance are related by the simple formula since the potential function is related to the luminosity function by relation ( [ eq:10 ] ) one can determine both the class of trajectories in the phase plane and the hamiltonian form as well as reconstruct the quintessence parameter provided that the luminosity function is known from observations .now we can reconstruct the form of the potential function ( [ eq:10 ] ) using a natural ansatz introduced by sahni __ . in this approach dark energy density which coincides with given as a truncated taylor series with respect to this leads to and the values of three parameters ,, can be obtained by applying a standard fitting procedure to snia observational data based on the maximum likelihood method .the potential function ( [ eq:10 ] ) written in terms of is \label{eq:13}\ ] ] or in dimensionless form , \label{eq:14}\ ] ] where , .our approach to the reconstruction of dynamics of the model is different from the standard approach in which is determined directly from the luminosity distance formula .it should be stressed out that the latter approach has an inevitable limitation because the luminosity distance dependence on is obtained through a multiple - integral relation that loses detailed information on . in our approachthe reconstruction is simpler and more information on survives ( only a single integral is required ) .our approach is also different from the concept of reconstruction of potential of scalar fields considered in the context of quintessence .the key steps of our method are the following : * 1 ) * we reconstruct the potential function for the hamiltonian dynamics of the quintessential universe from the luminosity distances of supernovas type ia ; + * 2 ) * we draw the best fit curves and confidence levels regions obtained from the statistical analysis of snia data ; + * 3 ) * we set the theoretically predicted forms of the potential functions on the confidence levels diagram ; + * 4 ) * those theoretical potential which escape from the confidence level is treated as being unfitted to observations ; + * 5 ) * we choose this potential function which lie near the best fit curve .+ our reconstruction is an effective statistical technique which can be used to compare a large number of theoretical models with observations . 
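Assuming the flat-FRW ansatz quoted above, with the dark-energy density written as a truncated Taylor series A1 + A2(1+z) + A3(1+z)^2 in units of the critical density, the expansion rate, the dimensionless potential and the effective equation of state can be reconstructed as in the following sketch. The numerical values passed in at the end are placeholders (they reduce the ansatz to the LambdaCDM case), not the best-fit values from the tables.

```python
def dark_energy_model(Om, A1, A2, A3):
    """Dimensionless quantities for the truncated-Taylor ansatz
    rho_DE / rho_c0 = A1 + A2*(1+z) + A3*(1+z)**2 in a flat FRW universe."""
    def E2(z):                                  # H(z)^2 / H0^2
        x = 1.0 + z
        return Om * x**3 + A1 + A2 * x + A3 * x**2
    def V(z):                                   # dimensionless potential -a^2 H^2 / (2 H0^2), a0 = 1
        return -E2(z) / (2.0 * (1.0 + z)**2)
    def w(z):                                   # effective equation of state of the dark component
        x = 1.0 + z
        rho = A1 + A2 * x + A3 * x**2
        return -1.0 + x * (A2 + 2.0 * A3 * x) / (3.0 * rho)
    return E2, V, w

# Hypothetical parameter values, for illustration only (LambdaCDM limit).
E2, V, w = dark_energy_model(Om=0.3, A1=0.7, A2=0.0, A3=0.0)
print(V(0.0), w(0.5))       # expect -0.5 and -1.0 in this limit
```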
instead of estimating some revelant parameters for each model separately, we choose a model - independent fitting function and perform a maximum likelihood parameter estimation for it .the obtained confidence levels can be used to discriminate between the considered models . in this paperthis technique is used to find the fitting function for the luminosity distance . the additional argument which is important when consideringthe potential is that it allows to find some modification in the friedmann equations along the `` cardassian expansion scenario '' .this proposition is very intriguing because of additional terms , which automatically cause the acceleration of the universe .these modifications come from the fundamental physics and these terms can be tested using astronomical observation of distant type ia supernovae . for this aimthe recent measurements of angular size of high - redshift compact radio sources can also be used .the important question is the reliable data available .we expect that supernovae data would improve greatly over next few years .the ongoing snap mission should gives us about 2000 type ia supernovae cases each year .this satellite mission and the next planned ones will increase the accuracy of data compared to data from the 90s . in our analysiswe use the availale data starting from the three perlmutter samples ( sample a is the complete sample of 60 supernovae , but in the analysis it is also used sample b and c in which 4 and 6 outliers were excluded , respectively ) .the fit for the sample c is more robust and this sample was accepted as the base of our consideration . for technical details of the metod the readeris referred to our previous two papers . in fig . 1 we show the reconstructed potential function obtained using the fitting values of as well as .the red line represents the potential function for the best fit values of parameters ( see tables [ resultsa ] , [ resultsc ] and [ resultspac ] ) . in each casethe coloured areas cover the confidence levels ( ) and ( ) for the potential function .the different forms of the potential function which are obtained from the theory are presented in the confidence levels . herewe consider one case , namely the cardassian model . in this casethe standard frw equation is modified by the presence of an additional term , where is the energy density of matter and radiation . for simplicitywe assume that density parameter for radiation is zero ( see table .[ pottab ] ) .the cardassian scenario is proposed as an alternative to the cosmological constant in explaining the acceleration of the universe . in this scenariothe the universe automaticaly accelerates without any dark energy component .the additional term in the friedmann equation arises from exotic physics of the early universe ( i.e. , in the brane cosmology with randall - sundrum version ) ..the forms of the potential functions in dimensionless form for two cases : model and cardassian scenario . [ cols="^,^,^,^,^ " , ] [ resultspac ]the dynamics of the considered cosmological models is governed by the dynamical system with the first integral for ( [ eq:16 ] ) .the main aim of the dynamical system theory is the investigation of the space of all solutions ( [ eq:16 ] ) for all possible initial conditions , i.e. phase space . 
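For comparison, a sketch of the same quantities in the flat Cardassian scenario, in which the extra term is commonly parameterised as (1 - Omega_m)(1+z)^(3n); the exponent used below is only an illustrative value, not a fitted one.

```python
def cardassian_E2(z, Om=0.3, n=0.2):
    """Flat Cardassian expansion rate: the rho^n term replaces dark energy."""
    x = 1.0 + z
    return Om * x**3 + (1.0 - Om) * x**(3.0 * n)

def cardassian_V(z, Om=0.3, n=0.2):
    """Dimensionless potential, same convention as in the previous sketch."""
    return -cardassian_E2(z, Om, n) / (2.0 * (1.0 + z)**2)

print(cardassian_V(0.0), cardassian_V(1.0))
```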
in the context of quintessential models with the equation of state thereexists a systematic method of reducing einstein s field equations to the form of the dynamical system ( [ eq:16 ] ) .one of the features of such a representation of dynamics is the possibility of resolving of some cosmological problems like the horizon and flatness problems in terms of the potential function .the phase space ( or state space ) is a natural visualization of the dynamics of any model .every point corresponds to a possible state of the system .the r.h.s of the system ( [ eq:16 ] ) define a vector field $ ] belonging to the tangent space .integral curves of this vector field define one - parameter group of diffeomorphisms called the phase flow . in the phase spacethe phase curves ( orbits of the group ) represent the evolution of the system whereas the critical points , are singular solutions equilibria from the physical point of view .the phase curves together with critical points constitute the phase portrait of the system .now we can define the equivalence relation between two phase portraits ( or two vector fields ) by the topological equivalence , namely two phase portraits are equivalent if there exists an orientation preserving homeomorphism transforming integral curves of both systems into each other .following the hartman - grobman theorem , near hyperbolic critical points ( , where is the appropriate eigenvalue of linearization matrix of the dynamical system ) is equivalent to its linear part in our case the linearization matrix takes the form {(x_{0},0 ) } \label{eq:18}\ ] ] classification of critical points is given in terms of eigenvalues of the linearization matrix since the eigenvalues can be determined from the characteristic equation . in our case and eigenvaluesare either real if or purely imaginary and mutually conjugated if . in the former casethe critical points are saddles and in the latter case they are centres .the advantage of representing dynamics in terms of hamiltonian ( [ eq:1 ] ) is the possibility to discuss the stability of critical points which is based only on the convexity of the potential function . in our casethe only possible critical points in a finite donain of phase space are centres or saddles .the dynamical system is said to be structurally stable if all other dynamical systems ( close to it in a metric sense ) are equivalent to it .two - dimensional dynamical systems on compact manifolds form an open and dense subsets in the space of all dynamical systems on the plane .structurally stable critical points on the plane are saddles , nodes and limit cycles whereas centres are structurally unstable .there is a widespread opinion among scientists that each physically realistic models of the universe should possess some kind of structural stability because the existence of many drastically different mathematical models , all in agreement with observations , would be fatal for the empirical method of science .basing on the reconstructed potential function one can conclude that : + * 1 ) * since the diagram of the potential function is convex up and has a maximum which corresponds to a single critical point the quantity at the critical point ( saddle point ) andthe eigenvalues of the linearization matrix at this point are real with oposite signs ; + * 2 ) * the model is structurally stable , i.e. 
, small perturbation of it do not change the structure of the trajectories in the phase plane ; + * 3 ) * since one can easily conclude from the geometry of the potential function that in the accelerating region ( ) is a decreasing function of its argument ; + * 4 ) * the reconstructed phase portrait for the system is equivalent to the portrait of the model with matter and the cosmological constant . + ) reconstructed from the potential function ( [ eq:14 ] ) for the best fitted parameters ( table [ resultsa ] , ) .the coloured domain of phase space is the domain of accelerated expansion of the universe .the red curve represents the flat model trajectory which separates the regions with negative and positive curvature . ]transformed on the compact poincar sphere .non - physical domain of phase space is marked as pink . ] by an inspection of the phase portraits we can distinguish four characteristic regions in the phase space .the boundaries of region i are formed by a separatrix coming out from the saddle point and going to the singularity and another separatrix coming out of the singularity and approaching the saddle .this region is covered by trajectories of closed and recolapsing models with initial and final singularity .the trajectories moving in region iv are also confined by a separatrix and they correspond to the closed universes contracting from the unstable de sitter node towards the stable de sitter node .the trajectories situated in region iii correspond to the models expanding towards stable de sitter node at infinity .similarly , the trajectories in the symmetric region ii represent the universes contracting from the unstable node towards the singularity .the main idea of the qualitative theory of differential equations is the following : instead of finding and analyzing an indiwidual solution of the model one investigates a space of all possible solutions .any property ( for example acceleration ) is believed to be realistic if it can be attributed to a large subsets of models within the space of all solutions or if it possesses certain stability property which is shared also by all slightly perturbed models .the possible existence of the unknown form of matter called dark energy has usually been invoked as the simplest way to explain the recent observational data of snia .however , the effects arising from the new fundamental physics can also mimic the gravitational effects of dark energy through a modification of the friedmann equation .we exploited the advantages of our method to discriminate among different dark energy models . with the independently determined density parameter of the universe ( ) we found that the current observational results require the cosmological constant in the cardassian models . on fig .[ figpotw ] we can see that both in the case of sample a ( fig . [ figpotw]*c * ) and sample c ( fig .[ figpotw]*a * ) should be close to zero . similarly if we assume that the density parameter for barionic matter is then in the case of sample c ( fig .[ figpotw]*b * ) and for sample a ( fig .[ figpotw]*d * ) .moreover , we showed ( for the sample c of perlmutter snia data ) that a simple cardassian model as a candidate for dark energy is ruled out by our analysis if for and if for at the confidence level .
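The classification of the critical point by the convexity of the potential, and the Hamiltonian flow itself, can be explored numerically as in the following sketch. It uses the LambdaCDM form of the potential (matter plus cosmological constant) as a stand-in for the reconstructed one, so the numbers are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

Om, Ol = 0.3, 0.7                                    # illustrative flat-model parameters
V  = lambda a: -0.5 * (Om / a + Ol * a**2)           # dimensionless potential
dV = lambda a: 0.5 * Om / a**2 - Ol * a

a_crit = brentq(dV, 0.1, 5.0)                        # the single critical point, dV/da = 0
d2V = (dV(a_crit + 1e-5) - dV(a_crit - 1e-5)) / 2e-5
print("a* = %.3f, V'' = %.3f -> %s"
      % (a_crit, d2V, "saddle" if d2V < 0 else "centre"))

# One trajectory of the Hamiltonian flow a' = p, p' = -dV/da on the zero-energy (flat) level.
rhs = lambda t, y: [y[1], -dV(y[0])]
a0 = 0.5
p0 = np.sqrt(-2.0 * V(a0))                           # zero-energy initial condition
sol = solve_ivp(rhs, (0.0, 3.0), [a0, p0], dense_output=True)
print(sol.y[0, -1])                                  # scale factor at the end of the integration
```

Because the potential has a single maximum, the second derivative at the critical point is negative and the point is a saddle, in agreement with the discussion above.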
We demonstrate a model-independent method of estimating the qualitative dynamics of an accelerating universe from observations of distant type Ia supernovae. Our method is based on the luminosity-distance function, optimized to fit the observed distances of supernovae, and on the Hamiltonian representation of the dynamics of the quintessential universe with a general form of the equation of state. Because of the Hamiltonian structure of FRW dynamics with this equation of state, the dynamics is uniquely determined by the potential function of the system. The effectiveness of this method in discriminating among the model parameters of the Cardassian evolution scenario is also demonstrated. Our main result is the following: restricting to the flat model with the current value of the matter density parameter, constraints at the stated confidence level on the presence of modifications of the FRW models are obtained.
consider any system of chemical reactions , in which certain molecule types catalyse reactions and where there is a pool of simple molecule types available from the environment ( a ` food source ' ) .one can then ask whether , within this system , there is a subset of reactions that is both self - sustaining ( each molecule can be constructed starting just from the food source ) and collectively autocatalytic ( every reaction is catalysed by some molecule produced by the system or present in the food set ) , .this notion of ` self - sustaining and collectively autocatalytic ' needs to be carefully formalised ( we do so below ) , and is relevant to some basic questions such as how biochemical metabolism began at the origin of life , , .a simple mathematical framework for formalising and studying such self - sustaining autocatalytic networks has been developed so - called ` raf ( reflexively - autocatalytic and f - generated ) theory ' .this theory includes an algorithm to determine whether such networks exists within a larger system , and for classifying these networks ; moreover , the theory allows us to calculate the probability of the formation of such systems within networks based on the ligation and cleavage of polymers , and a random pattern of catalysis .however , this theory relies heavily on the system being closed and finite . in certain settings , it is useful to consider polymers of arbitrary length being formed ( e.g. in generating the membrane for a protocell ) . in these and other unbounded chemical systems , interesting complications arise for raf theory , particularly where the catalysis of certain reactions is possible only by molecule types that are of greater complexity / length than the reactants or product of the reactions in question . in this paper, we extend earlier raf theory to deal with unbounded chemical reaction systems . as in some of our earlier work, our analysis ignores the dynamical aspects , which are dealt with in other frameworks , such as ` chemical organisation theory ' ; here we concentrate instead on just the pattern of catalysis and the availability of reactants . in this paper ,a _ chemical reaction system _ ( crs ) consists of ( i ) a set of molecule types , ( ii ) a set of reactions , ( iii ) a pattern of catalysis that describes which molecule(s ) catalyses which reactions , and ( iv ) a distinguished subset of called the _ food set_.we will denote a crs as a quadruple , and encode the pattern of catalysis by specifying a subset of so that precisely if molecule type catalyses reaction .see fig .[ fig1 ] for a simple example ( from ) .and seven reactions .dashed arrows indicate catalysis ; solid arrows show reactants entering a reaction and products leaving . in this crsthere are exactly four rafs ( defined below ) , namely , , , and . ] in certain applications , often consist of or at least contain a set of polymers ( sequences ) over some finite alphabet ( i.e. chains , , where ) , as in fig .[ fig1 ] ; such polymer systems are particularly relevant to rna or amino - acid sequence models of early life. reactions involving such polymers typically involve cleavage and ligation ( i.e. 
cutting and/or joining polymers ) , or adding or deleting a letter to an existing chain .notice that if no bound is put on the maximal length of the polymers , then both and are infinite for such networks , even when .in this paper we do not necessarily assume that consists of polymers , or that the reactions are of any particular type .thus , a reaction can be viewed formally as an ordered pair consisting of a multi - set of elements from ( the reactants of ) and a multi - set of elements of ( the products of ) ; but we will mostly use the equivalent and more conventional notation of writing a reaction in the form : where the s( reactants of ) and s ( products of ) are elements of , and ( e.g. and are reactions ) . in this paper , we extend our earlier analysis of rafs to the general ( finite or infinite ) case and find that certain subtleties arise that are absent in the finite case. we will mostly assume the following conditions ( a1 ) and ( a2 ) , and sometimes also ( a3 ) .* is finite ; * each reaction has a finite set of reactants , denoted , and a finite set of products , denoted ; * for any given finite set of molecule types , there are only finitely many reactions with . given a subset of , we say that a subset of molecule types is _ closed _ relative to if satisfies the property in other words , a set of molecule types is closed relative to if every molecule that can be produced from using reactions in is already present in .notice that the full set is itself closed .the _ global closure _ of relative to , denoted here as , is the intersection of all closed sets that contain ( since is closed , this intersection is well defined ) .thus is the unique minimal set of molecule types containing that is closed relative to .we can also consider a _ constructive closure _ of relative to , denoted here as , which is union of the set and the set of molecule types that can be obtained from by carrying out any finite sequence of reactions from where , for each reaction in the sequence , each reactant of is either an elements of or a product of a reaction occurring earlier in the sequence , and is a product of the last reaction in the sequence .note that always contains ( and these two sets coincide when the crs is finite ) but , for an infinite crs , can be a strict subset of , even when ( a1 ) holds . to see this ,consider the system where , , where is defined as follows : then . in this example , notice that has infinitely many reactants , which violates ( a2 ) .by contrast , when ( a2 ) holds , we have the following result .[ lem1 ] suppose that ( a2 ) holds. then .moreover , under ( a1 ) and ( a2 ) , if is countable , then this ( common ) closure of relative to is countable also .suppose the condition of lemma [ lem1 ] holds but that is not closed ; we will derive a contradiction .lack of closure means there is a molecule in which is the product of some reaction that has all its reactants in . by ( a2 ) , the set of reactants of is finite , so we may list them as , and , by the definition of , for each , either or there is a finite sequence of reactions from that generates starting from reactants entirely in and using just elements of or products of reactions appearing earlier in the sequence . by concatenating these sequences ( in any order ) and appending at the end , we obtain a finite sequence of reactions that generate from , which contradicts the assumption that is not closed . 
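For a finite reaction set, the constructive closure defined above can be computed by the obvious fixed-point iteration, as in the following sketch; the reaction names and molecule strings are toy illustrations introduced here, not data from the paper.

```python
def closure(food, reactions):
    """Constructive closure cl_R(F): the molecules reachable from the food set by
    repeatedly firing reactions whose reactants are all already available.
    `reactions` maps a reaction id to a (reactants, products) pair of sets."""
    avail = set(food)
    changed = True
    while changed:
        changed = False
        for rid, (reactants, products) in reactions.items():
            if reactants <= avail and not products <= avail:
                avail |= products
                changed = True
    return avail

# Tiny illustrative system: polymers over {a, b}, food = the two monomers.
reactions = {
    "r1": ({"a", "b"}, {"ab"}),
    "r2": ({"ab", "a"}, {"aba"}),
    "r3": ({"zz", "a"}, {"zza"}),      # never fires: 'zz' is not reachable from the food set
}
print(closure({"a", "b"}, reactions))  # {'a', 'b', 'ab', 'aba'}
```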
if follows that is closed relative to , and since it is clearly a minimal set containing that is closed relative to , it follows that . that is countable under ( a1 ) and ( a2 ) follows from the fact that any countable union of finite sets is countable . in view of lemma [ lem1 ] ,whenever ( a2 ) holds , we will henceforth denote the ( common ) closure of relative to as .* definition [ raf , and related concepts ] * suppose we have a crs , which satisfies condition ( a2 ) .an raf for is a non - empty subset of for which * for each , ; and * for each , at least one molecule type in catalyses .in words , a non - empty set of reactions forms an raf for if , for every reaction in , each reactant of and at least one catalyst of is either present in or able to be constructed from by using just reactions from within the set .an raf for is said to be a _finite raf _ or an _ infinite raf _ depending on whether or not is finite or infinite .the concept of an raf is a formalisation of a ` collectively autocatalytic set ' , pioneered by stuart kauffman and .since the union of any collection of rafs is also an raf , any crs that contains an raf necessarily contains a unique maximal raf .irrraf _ is an ( infinite or finite ) raf that is minimal i.e. it contains no raf as a strict subset .in contrast to the uniqueness of the maximal raf , a finite crs can have exponentially many irrrafs .the raf concept needs to be distinguished from the stronger notion of a _ constructively autocatalytic and f - generated _ ( caf ) set which requires that can be ordered so that all the reactants and at least one catalyst of are present in for all ( in the initial case where , we take ) .this condition essentially means that in a caf , a reaction can only proceed if one of its catalysts is already available , whereas an raf could become established by allowing one or more reactions to proceed uncatalysed ( presumably at a much slower rate ) so that later , in some chain of reactions , a catalyst for is generated , allowing the whole system to ` speed up ' .notice that although the crs in fig .[ fig1 ] has four rafs it has no caf . , and are products ) , reactions are hollow squares , and dashed arrows indicate catalysis . ]the raf concept also needs to be distinguished from the weaker notion of a _ pseudo - raf _ , which replaces condition ( ii ) with the relaxed condition : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ( ii) : for all , there exists or for some such that . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ in other words , a pseudo - raf that fails to be an raf is an autocatalytic system that could continue to persist once it exists , but it can never form from just the food set , since it is not -generated . 
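To make the preceding definitions concrete, here is a minimal Python sketch of one way to represent a finite CRS and to test whether a given non-empty subset of reactions satisfies the two RAF conditions via the constructive closure. The data layout, the helper names and the toy network are illustrative assumptions, and the toy network is not the system of fig. [ fig1 ].

```python
# Minimal sketch: a finite CRS as a map  reaction -> (reactants, products, catalysts),
# plus a check of the two RAF conditions.  All names and the toy network below are
# illustrative assumptions.

def closure(food, reactions, subset):
    """Constructive closure of the food set relative to `subset`: food molecules
    plus everything reachable by repeatedly firing reactions whose reactants are
    already available."""
    available = set(food)
    changed = True
    while changed:
        changed = False
        for r in subset:
            reactants, products, _ = reactions[r]
            if reactants <= available and not products <= available:
                available |= products
                changed = True
    return available

def is_raf(food, reactions, subset):
    """Condition (i): every reactant of every reaction in `subset` lies in the
    closure; condition (ii): each reaction has at least one catalyst there."""
    if not subset:
        return False
    cl = closure(food, reactions, subset)
    for r in subset:
        reactants, _, catalysts = reactions[r]
        if not reactants <= cl:
            return False          # not F-generated
        if not catalysts & cl:
            return False          # not reflexively autocatalytic
    return True

# A hypothetical toy CRS (not the network of fig. [ fig1 ]).
reactions = {
    "r1": ({"f1", "f2"}, {"a"}, {"b"}),
    "r2": ({"a", "f1"}, {"b"}, {"a"}),
    "r3": ({"b"}, {"c"}, {"z"}),   # its only catalyst z can never be produced
}
food = {"f1", "f2"}

print(is_raf(food, reactions, {"r1", "r2"}))        # True
print(is_raf(food, reactions, {"r1", "r2", "r3"}))  # False: r3 is never catalysed
```

In the same toy representation, the weaker pseudo-RAF notion would only ask that each reaction's reactants and some catalyst occur somewhere in the food set or among the products of the chosen reactions, without requiring reachability from the food set, while the stronger CAF notion would additionally demand an ordering of the reactions in which a catalyst is already available before each reaction is first used.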
these two alternatives notions to rafs are illustrated ( in the finite setting ) in fig .[ fig1b ] .notice that every caf is an raf and every raf is a pseudo - raf , but these containments are strict , as fig. [ fig1b ] shows . while the notion of a caf may seem reasonable , it is arguably too conservative in comparison to an raf , since a reaction can still proceed if no catalyst is present , albeit it at a much slower rate , allowing the required catalyst to eventually be produced .however relaxing the raf definition further to a pseudo - raf is problematic ( since a reaction can not proceed at all , unless all its reactants are present , and so such a system can not arise spontaneously just from ) .this , along with other desirable properties of rafs ( their formation requires only low levels of catalysis in contrast to cafs ) , suggests that rafs are a reasonable candidate for capturing the minimal necessary condition for self - sustaining autocatalysis , particularly in models of the origin of metabolism . as in the finite crs setting ,the union of all rafs is an raf , so any crs that contains an raf has a unique maximal one .it is easily seen that an infinite crs that contains an raf need not have a maximal finite raf , even under ( a1)(a3 ) , but in this case , the crs would necessarily also contain an infinite raf ( the union of all the finite rafs ) .a natural question is the following : if an infinite crs contains an infinite raf , does it also contain a finite one ?it is easily seen that even under conditions ( a1 ) and ( a2 ) , the answer to this last question is ` no ' .we provide three examples to illustrate different ways in which this can occur .this is in contrast to cafs , for which exactly the opposite holds : if a crs contains an infinite caf , then it necessarily contains a sequence of finite ones .moreover , two of the infinite rafs in the following example contain no irrrafs ( in contrast to the finite case , where every raf contains at least one irrraf ) .* example 1 : * let , and .let .we will specify particular crs s by describing , and the pattern of catalysis as follows .* has a reaction for each and is catalysed by for each .* has a reaction \rightarrow x_i) c_i ] is a maximal raf for .suppose that .then for any ] for all , and so }(f). ] . in summary , every reaction in the non - empty set ] and so ] , while }(f) ] is not a subset of }(f) ] is the set of reactions for which there is some with .notice that is monotonic and when is finite , the set can be computed in polynomial time in the size of .[ lemcom ] suppose we have a crs satisfying ( a2 ) , and with and defined as in ( [ fgeq ] ) .if is -compatible , then ] . in particular , has an raf if and only if contains a -compatible set .if is -compatible subset of , then for ] . the problem of finding a -compatible set ( if one exists ) in a general setting ( arbitrary , and , not necessarily related to chemical reaction networks ) can be solved in general polynomial time when is finite and is monotonic and computable in finite time .this provides a natural generalization of the classical raf algorithm . in , we showed how other problems ( including a toy problem in economics ) could by formulated within this more general framework .however , if we allow the set to be infinite , then monotonicity of needs to be supplemented with a further condition on . 
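In the finite case, the 'classical RAF algorithm' referred to above amounts to iterating the reduction map described above to a fixed point: repeatedly discard any reaction that has a reactant outside the closure of the food set under the currently surviving reactions, or that has no catalyst inside that closure. The following Python sketch (same illustrative data layout as in the previous snippet) is one way this reduction might look; it returns the maximal RAF, or the empty set when no RAF exists.

```python
# Illustrative sketch of the finite-case reduction: repeatedly keep only those
# reactions whose reactants, and at least one catalyst, lie in the closure of
# the food set under the currently surviving reactions, until nothing changes.

def closure(food, reactions, subset):
    available = set(food)
    changed = True
    while changed:
        changed = False
        for r in subset:
            reactants, products, _ = reactions[r]
            if reactants <= available and not products <= available:
                available |= products
                changed = True
    return available

def max_raf(food, reactions):
    """Return the maximal RAF of a finite CRS, or the empty set if none exists."""
    current = set(reactions)
    while True:
        cl = closure(food, reactions, current)
        reduced = {r for r in current
                   if reactions[r][0] <= cl and reactions[r][2] & cl}
        if reduced == current:        # fixed point reached
            return current
        current = reduced

# Reusing the hypothetical toy CRS from the previous sketch:
reactions = {
    "r1": ({"f1", "f2"}, {"a"}, {"b"}),
    "r2": ({"a", "f1"}, {"b"}, {"a"}),
    "r3": ({"b"}, {"c"}, {"z"}),
}
print(max_raf({"f1", "f2"}, reactions))   # {'r1', 'r2'}
```

Each pass either removes at least one reaction or terminates, so for a finite CRS the loop stops after at most |R| passes; a non-empty fixed point is the unique maximal RAF, and an empty one certifies that no RAF exists. This termination argument is one of the things that fails once the reaction set is allowed to be infinite, which motivates the continuity condition introduced next.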
we will consider a condition ( ` -continuity ' ) , which generalizes ( a4) , and that applies automatically when is finitewe say that is ( weakly ) -continuous if , for any nested descending chain of sets , we have : recall that an element in a partially ordered set need not have a greatest lower bound ( glb ) ; but if it does , it has a unique one . notice that when is finite , this property holds trivially , since then for the last set in the ( finite ) nested chain . for a subset of and , define to be the result of applying function iteratively times starting with .thus and for , . taking the particular interpretation of and in ( [ fgeq ] ) ,the sequence is nothing more than the sequence from ( [ cieqx ] ) .notice that the sequence is a nested decreasing sequence of subsets of , and so we may define the set : which is a ( possibly empty ) subset of ( in the setting of proposition [ infpro ] , ) . given ( finite or infinite ) sets , where is partially ordered , together with functions , it is routine to verify that the following properties hold : * the -compatible subsets of are precisely the non - empty subsets of that are fixed points of ;* if is monotonic then contains all -compatible subsets of ; in particular , if , then there is no subset of . *if is -continuous then is -compatible , provided it is non - empty ; in particular , if is monotonic and -continuous then ( by ( ii ) ) there a -compatible subset of exists if and only if is nonempty .* without the assumption that is weakly -continuous in part ( iii ) , it is possible for to fail to be -compatible when is infinite , even if is monotone .the proof of parts ( i)(iii ) proceeds exactly as in , with the addition of one extra step required to justify part ( iii ) , assuming -continuity .namely , condition ( [ glbeq ] ) ensures that is also -continuous in the sense that for any nested descending chain of sets , we have : and so . the proof of ( [ psieq ] ) from ( [ glbeq ] ) is straightforward : firstly , holds for _ any _ function , while if , then , by definition of , for all and for all and so , and for all .now , since is a glb of , we have for all ( i.e. ) and so .part ( iv ) follows directly from parts ( ii ) and ( iii ) . for part ( vi ) , consider the infinite crs in example 2 . as above , take and , for , with and defined as in ( [ fgeq ] ) .then , where however , is not -compatible , since and but this is not a subset of }(f ) = { { \mathcal f}}$ ] since . in this example , fails to be weakly -continuous , and the argument is analogous to where we showed earlier that fails to satisfy ( a4) .more precisely , for each , let , where is defined in ( [ r1eqx ] ) and where , for each reaction , is the unique catalyst of .then and so .however , and so , which differs from the glb of , namely .the examples in this paper are particularly simple indeed mostly we took the food set to consist of just a single molecule , and reactions often had only one possible catalyst . 
in realitymore ` realistic ' examples can be constructed , based on polymer models over an alphabet , however the details of those examples tends to obscure the underlying principles so we have kept with our somewhat ` toy ' examples in order that the reader can readily verify certain statements .section [ finitesec ] describes a process for determining whether an arbitrary infinite crs ( satisfying ( a1)(a3 ) ) contains a finite raf .however , from an algorithmic point of view , proposition [ finiteraf ] is somewhat limited , since the process described is not guaranteed to terminate in any given number of steps . if no further restriction is placed on the ( infinite ) crs , then it would seem difficult to hope for any sort of meaningful algorithm ; however , if the crs has a ` finite description ' ( as do our main examples above ) , then the question of the algorithmic decidability of the existence of an raf or of a finite raf arises .more precisely , suppose an infinite crs consists of ( i ) a countable set of molecule types , where we may assume ( in line with ( a1 ) ) that , for some finite value , and ( ii ) a countable set of reactions , where has a finite set of reactants , a finite set of products , and a finite or countable set of catalysts , where and are computable ( i.e. partial recursive ) set - valued functions defined on the positive integers . given this setting , a possible question for further investigationis whether ( and under what conditions ) there exists an algorithm to determine whether or not contains an raf , or more specifically a finite raf ( i.e. when is this question decidable ? ) .the author thanks the allan wilson centre for funding support , and wim hordijk for some useful comments on an earlier version of this manuscript .i also thank marco stenico ( personal communication ) for pointing out that -consistency is required for part ( iii ) of the -compatibility result above when is infinite , and for a reference to a related fixed - point result in domain theory ( theorem 2.3 in ) , from which this result can also be derived .p. dittrich , p. speroni di fenizio , chemical organisation theory .bull . math .biol . * 69 * , 11991231 ( 2007 ) p. g. higgs , n. lehman , the rna world : molecular cooperation at the origins of life .genet . * 16*(1 ) , 717 ( 2015 ) w. hordijk , m. steel , autocatalytic sets extended : dynamics , inhibition , and a generalization .. chem . * 3*:5 ( 2012 ) w. hordijk , m. steel , autocatalytic sets and boundaries . j. syst .( in press ) ( 2014 ) w. hordijk , m. steel , detecting autocatalyctic , self - sustaining sets in chemical reaction systems . j. theor . biol . * 227*(4 ) , 451461 ( 2004 ) w. hordijk , s. kauffman , m. steel , required levels of catalysis for the emergence of autocatalytic sets in models of chemical reaction systems .( special issue : origin of life 2011 ) * 12 * , 30853101 ( 2011 ) w. hordijk , m. steel , s. kauffman , the structure of autocatalytic sets : evolvability , enablement , and emergence .acta biotheor .* 60 * , 379392 ( 2012 ) s. a. kauffman , autocatalytic sets of proteins .. biol . * 119 * , 124 ( 1986 ) s. a. kauffman , the origins of order ( oxford university press , oxford 1993 ) e. mossel , m. steel , random biochemical networks and the probability of self - sustaining autocatalysis . j. theor . biol . * 233*(3 ) , 327336 ( 2005 ) j. smith , m. steel , w. hordijk , autocatalytic sets in a partitioned biochemical network. chem . * 5*:2 ( 2014 ) m. steel , w. hordijk , j. 
smith , minimal autocatalytic networks .journal of theoretical biology * 332 * : 96107 ( 2013 ) v. stoltenberg - hansen , i. lindstrm , e. r. griffor , mathematical theory of domains .cambridge tracts in theoretical computer science 22 ( cambridge university press , cambridge 1994 ) v. vasas , c. fernando , m. santos , s. kauffman , e. szathmry , evolution before genes .. dir . * 7*:1 ( 2012 ) m. villani , a. filisetti , a. graudenzi , c. damiani , t. carletti , r. serra , growth and division in a dynamic protocell model .life * 4 * , 837864 ( 2014 )
Given any finite and closed chemical reaction system, it is possible to efficiently determine whether or not it contains a 'self-sustaining and collectively autocatalytic' subset of reactions, and to find such subsets when they exist. However, for systems that are potentially open-ended (for example, when no prescribed upper bound is placed on the complexity or size/length of molecule types), the theory developed for the finite case breaks down. We investigate a number of subtleties that arise in such systems that are absent in the finite setting, and present several new results.
over the past decade or so two separate developments have occurred in computer science whose intersection promises to open a vast new area of research , an area extending far beyond the current boundaries of computer science .the first of these developments is the growing realization of how useful it would be to be able to control distributed systems that have little ( if any ) centralized communication , and to do so `` adaptively '' , with minimal reliance on detailed knowledge of the system s small - scale dynamical behavior .the second development is the maturing of the discipline of reinforcement learning ( rl ) .this is the branch of machine learning that is concerned with an agent who periodically receives `` reward '' signals from the environment that partially reflect the value of that agent s private utility function .the goal of an rl algorithm is to determine how , using those reward signals , the agent should update its action policy to maximize its utility .( until our detailed discussions below , we will use the term `` reinforcement learning '' broadly , to include any algorithm of this sort , including ones that rely on detailed bayesian modeling of underlying markov processes .intuitively , one might hope that rl would help us solve the distributed control problem , since rl is adaptive , and , in particular , since it is not restricted to domains having sufficient breadths of communication .however , by itself , conventional single - agent rl does not provide a means for controlling large , distributed systems .this is true even if the system have centralized communication .the problem is that the space of possible action policies for such systems is too big to be searched .we might imagine as a variant using a large set of agents , each controlling only part of the system .since the individual action spaces of such agents would be relatively small , we could realistically deploy conventional rl on each one .however , now we face the central question of how to map the world utility function concerning the overall system into private utility functions for each of the agents .in particular , how should we design those private utility functions so that each agent can realistically hope to optimize its function , and at the same time the collective behavior of the agents will optimize the world utility ? 
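To make this design question concrete, here is a toy Python sketch (purely illustrative, with invented names and numbers, and not the formal framework developed later in the chapter): a collection of simple learners repeatedly adjusts their actions to increase their own private utilities, and the resulting world utility depends strongly on how those private utilities were chosen.

```python
# Toy sketch of the private-utility design question.  N agents repeatedly choose
# to "participate" (1) or not (0); the world utility rewards moderate total
# participation and penalises overcrowding.  All numbers are illustrative.
import random

N, CAPACITY, ROUNDS, EPS = 20, 8, 2000, 0.1

def world_utility(actions):
    attendance = sum(actions)
    return attendance if attendance <= CAPACITY else 2 * CAPACITY - attendance

def run(private_reward):
    est = [[0.0, 0.0] for _ in range(N)]      # per-agent value estimates
    counts = [[1, 1] for _ in range(N)]
    actions = [0] * N
    for _ in range(ROUNDS):
        for i in range(N):                    # epsilon-greedy action choice
            if random.random() < EPS:
                actions[i] = random.randint(0, 1)
            else:
                actions[i] = 0 if est[i][0] >= est[i][1] else 1
        for i in range(N):                    # simple running-average value update
            a = actions[i]
            counts[i][a] += 1
            est[i][a] += (private_reward(i, actions) - est[i][a]) / counts[i][a]
    greedy = [0 if est[i][0] >= est[i][1] else 1 for i in range(N)]
    return world_utility(greedy)

random.seed(0)
# (a) "team game": each agent's private utility is the world utility itself.
print(run(lambda i, acts: world_utility(acts)))
# (b) a myopic private utility that rewards participating, regardless of crowding.
print(run(lambda i, acts: acts[i]))
```

Under assignment (b) every learner settles on participating, which drives the world utility down; under assignment (a) each learner at least receives a signal aligned with the world utility, although, as discussed later in the chapter, that shared signal becomes very noisy from any single agent's perspective as the number of agents grows.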
we use the term `` collective intelligence '' ( coin ) to refer to any pair of a large , distributed collection of interacting computational processes among which there is little to no centralized communication or control , together with a ` world utility ' function that rates the possible dynamic histories of the collection .the central coin design problem we consider arises when the computational processes run rl algorithms : how , without any detailed modeling of the overall system , can one set the utility functions for the rl algorithms in a coin to have the overall dynamics reliably and robustly achieve large values of the provided world utility ?the benefits of an answer to this question would extend beyond the many branches of computer science , having major ramifications for many other sciences as well .section [ sec : back ] discusses some of those benefits .section [ sec : lit ] reviews previous work that has bearing on the coin design problem .section [ sec : math ] section constitutes the core of this chapter .it presents a quick outline of a promising mathematical framework for addressing this problem in its most general form , and then experimental illustrations of the prescriptions of that framework . throughout, we will use italics for emphasis , single quotes for informally defined terms , and double quotes to delineate colloquial terminology .there are many design problems that involve distributed computational systems where there are strong restrictions on centralized communication ( `` we ca nt all talk '' ) ; or there is communication with a central processor , but that processor is not sufficiently powerful to determine how to control the entire system ( `` we are nt smart enough '' ) ; or the processor is powerful enough in principle , but it is not clear what algorithm it could run by itself that would effectively control the entire system ( `` we do nt know what to think '' ) .just a few of the potential examples include : \i ) designing a control system for constellations of communication satellites or for constellations of planetary exploration vehicles ( world utility in the latter case being some measure of quality of scientific data collected ) ; \ii ) designing a control system for routing over a communication network ( world utility being some aggregate quality of service measure ) \iii ) construction of parallel algorithms for solving numerical optimization problems ( the optimization problem itself constituting the world utility ) ; \iv ) vehicular traffic control , _e.g. _ , air traffic control , or high - occupancy toll - lanes for automobiles .( in these problems the individual agents are humans and the associated utility functions must be of a constrained form , reflecting the relatively inflexible kinds of preferences humans possess . ) ; \v ) routing over a power grid ; \vi ) control of a large , distributed chemical plant ; \vii ) control of the elements of an amorphous computer ; \viii ) control of the elements of a ` noisy ' phased array radar ; \ix ) compute - serving over an information grid .such systems may be best controlled with an artificial coin .however , the potential usefulness of deeper understanding of how to tackle the coin design problem extends far beyond such engineering concerns . that s because the coin design problem is an inverse problem , whereas essentially all of the scientific fields that are concerned with naturally - occurring distributed systems analyze them purely as a `` forward problem . 
''that is , those fields analyze what global behavior would arise from provided local dynamical laws , rather than grapple with the inverse problem of how to configure those laws to induce desired global behavior .( indeed , the coin design problem could almost be defined as decentralized adaptive control theory for massively distributed stochastic environments . )it seems plausible that the insights garnered from understanding the inverse problem would provide a trenchant novel perspective on those fields . just as tackling the inverse problem in the design of steam engines led to the first true understanding of the macroscopic properties of physical bodes ( aka thermodynamics ) , so may the cracking of the coin design problem may improve our understanding of many naturally - occurring coins .in addition , although the focuses of those other fields are not on the coin design problem , in that they are related to the coin design problem , that problem may be able to serve as a `` touchstone '' for all those fields. this may then reveal novel connections between the fields .as an example of how understanding the coin design problem may provide a novel perspective on other fields , consider countries with capitalist human economies .although there is no intrinsic world utility in such systems , they can still be viewed from the perspective of coins , as naturally occurring coins .for example , one can declare world utility to be a time average of the gross domestic product ( gdp ) of the country in question .( world utility per se is not a construction internal to a human economy , but rather something defined from the outside . )the reward functions for the human agents in this example could then be the achievements of their personal goals ( usually involving personal wealth to some degree ) . now in general , to achieve high world utility in a coin it is necessary to avoid having the agents work at cross - purposes .otherwise the system is vulnerable to economic phenomena like the tragedy of the commons ( toc ) , in which individual avarice works to lower world utility , or the liquidity trap , where behavior that helps the entire system when employed by some agents results in poor global behavior when employed by all agents .one way to avoid such phenomena is by modifying the agents utility functions . in the context of capitalist economies ,this kind of effect can be achieved via punitive legislation that modifies the rewards the agents receive for engaging in certain kinds of activity .a real world example of an attempt to make just such a modification was the creation of anti - trust regulations designed to prevent monopolistic practices .in designing a coin we usually have more freedom than anti - trust regulators though , in that there is no base - line `` organic '' private utility function over which we must superimpose legislation - like incentives .rather , the entire `` psychology '' of the individual agents is at our disposal when designing a coin .this obviates the need for honesty - elicitation ( ` incentive compatible ' ) mechanisms , like auctions , which form a central component of conventional economics .accordingly , coins can differ in certain crucial respects from human economies .the precise differences the subject of current research seem likely to present many insights into the functioning of economic structures like anti - trust regulators . 
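A tiny numerical illustration of the incentive-modification point above (with invented payoffs): in a two-agent 'commons' game where exploiting a shared resource dominates restraint, superimposing a legislated penalty on the exploit action changes both agents' best responses and removes the tragedy-of-the-commons outcome.

```python
# Invented payoffs for a two-agent commons game: each agent either restrains
# ("R") or exploits ("E"); exploiting is individually dominant but jointly poor.
base = {                       # (my action, other's action) -> my payoff
    ("R", "R"): 3, ("R", "E"): 0,
    ("E", "R"): 4, ("E", "E"): 1,
}

def best_response(payoff, other):
    return max(("R", "E"), key=lambda a: payoff[(a, other)])

def equilibrium(payoff):
    """Iterate simultaneous best responses (a few rounds suffice here)."""
    a = b = "R"
    for _ in range(10):
        a, b = best_response(payoff, b), best_response(payoff, a)
    return a, b

print(equilibrium(base))        # ('E', 'E'): both exploit, summed payoff 1 + 1 = 2
# A "legislated" penalty of 2 on exploitation flips the dominant strategy:
taxed = {k: v - (2 if k[0] == "E" else 0) for k, v in base.items()}
print(equilibrium(taxed))       # ('R', 'R'): both restrain, summed payoff 3 + 3 = 6
```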
to continue with this example , consider the usefulness , as far as the world utility is concerned , of having ( commodity , or especially fiat ) money in the coin .formally , from a coin perspective , the use of ` money ' for trading between agents constitutes a particular class of couplings between the states and utility functions of the various agents .for example , if one agent s ` bank account ' variable goes up in a ` trade ' with another agent , then a corresponding ` bank account ' variable in that other agent must decrease to compensate .in addition to this coupling between the agents states , there is also a coupling between their utilities , if one assume that both agents will prefer to have more money rather than less , everything else being equal .however one might formally define such a ` money ' structure , we can consider what happens if it does ( or does not ) obtain for an arbitrary dynamical system , in the context of an arbitrary world utility .for some such dynamical systems and world utilities , a money structure will improve the value of that world utility .but for the same dynamics , the use of a money structure will simultaneously induce _ low levels _ of other world utilities ( a trivial example being a world utility that equals the negative of the first one ) .this raises a host of questions , like how to formally specify the most general set of world utilities that benefits significantly from using money - based private utility functions .if one is provided a world utility that is not a member of that set , then an `` economics - like '' configuration of the system is likely to result in poor performance .such a characterization of how and when money helps improve world utilities of various sorts might have important implications for conventional human economics , especially when one chooses world utility to be one of the more popular choices for social welfare function .( see and references therein for some of the standard economics work that is most relevant to this issue . )there are many other scientific fields that are currently under investigation from a coin - design perspective. some of them are , like economics , part of ( or at least closely related to ) the social sciences .these fields typically involve rl algorithms under the guise of human agents .an example of such a field is game theory , especially game theory of bounded rational players . as illustrated in our money example , viewing such systems from the perspective of a non - endogenous world utility , _i.e. _ , from a coin - design perspective , holds the potential for providing novel insight into them .( in the case of game theory , it holds the potential for leading to deeper understanding of many - player inverse stochastic game theory . )however there are other scientific fields that might benefit from a coin - design perspective even though they study systems that do nt even involve rl algorithms .the idea here is that if we viewed such systems from an `` artificial '' teleological perspective , both in concentrating on a non - endogenous world utility and in casting the nodal elements of the system as rl algorithms , we could learn a lot about the form of the ` design space ' in which such systems live .( just as in economics , where the individual nodal elements _ are _ rl algorithms , investigating the system using an externally imposed world utility might lead to insight . 
)examples here are ecosystems ( individual genes , individuals , or species being the nodal elements ) and cells ( individual organelles in eukaryotes being the nodal elements ) . in both cases ,the world utility could involve robustness of the desired equilibrium against external perturbation , efficient exploitation of free energy in the environment , etc .the following list elaborates what we mean by a coin : \1 ) there are many processors running concurrently , performing actions that affect one another s behavior .\2 ) there is little to no centralized personalized communication , _i.e. _ , little to no behavior in which a small subset of the processors not only communicates with all the other processors , but communicates differently with each one of those other processors .any single processor s `` broadcasting '' the same information to all other processors is not precluded .\3 ) there is little to no centralized personalized control , _i.e. _ , little to no behavior in which a small subset of the processors not only controls all the other processors , but controls each one of those other processors differently .`` broadcasting '' the same control signal to all other processors is not precluded .\4 ) there is a well - specified task , typically in the form of extremizing a utility function , that concerns the behavior of the entire distributed system .so we are confronted with the inverse problem of how to configure the system to achieve the task .the following elements characterize the sorts of approaches to coin design we are concerned with here : \5 ) the approach for tackling ( 4 ) is scalable to very large numbers of processors .\6 ) the approach for tackling ( 4 ) is very broadly applicable . in particular, it can work when little ( if any ) `` broadcasting '' as in ( 2 ) and ( 3 ) is possible .\7 ) the approach for tackling ( 4 ) involves little to no hand - tailoring .\8 ) the approach for tackling ( 4 ) is robust and adaptive , with minimal need to `` get the details exactly right or else , '' as far as the stochastic dynamics of the system is concerned .\9 ) the individual processors are running rl algorithms . unlike the other elements of this list ,this one is not an _ a priori _ engineering necessity .rather , it is a reflection of the fact that rl algorithms are currently the best - understood and most mature technology for addressing the points ( 8) and ( 9 ) .there are many approaches to coin design that do not have every one of those features .these approaches constitute part of the overall field of coin design .as discussed below though , not having every feature in our list , no single one of those approaches can be extended to cover the entire breadth of the field of coin design .( this is not too surprising , since those approaches are parts of fields whose focus is not the coin design problem per se . )the rest of this section consists of brief presentations of some of these approaches , and in particular characterizes them in terms of our list of nine characteristics of coins and of our desiredata for their design . 
of the approaches we discuss , at present it is probably the ones in artificial intelligence and machine learning that are most directly applicable to coin design .however it is fairly clear how to exploit those approaches for coin design , and in that sense relatively little needs to be said about them .in contrast , as currently employed , the toolsets in the social sciences are not as immediately applicable to coin design .however , it seems likely that there is more yet to be discovered about how to exploit them for coin design .accordingly , we devote more space to those social science - based approaches here .we present an approach that holds promise for covering all nine of our desired features in section [ sec : math ] .there is an extensive body of work in ai and machine learning that is related to coin design . indeed, one of the most famous speculative works in the field can be viewed as an argument that ai should be approached as a coin design problem .much work of a more concrete nature is also closely related to the problem of coin design .as discussed in the introduction , the maturing field of reinforcement learning provides a much needed tool for the types of problems addressed by coins .because rl generally provides model - free and `` online '' learning features , it is ideally suited for the distributed environment where a `` teacher '' is not available and the agents need to learn successful strategies based on `` rewards '' and `` penalties '' they receive from the overall system at various intervals .it is even possible for the learners to use those rewards to modify _ how _ they learn .although work on rl dates back to samuel s checker player , relatively recent theoretical and empirical results have made rl one of the most active areas in machine learning .many problems ranging from controlling a robot s gait to controlling a chemical plant to allocating constrained resource have been addressed with considerable success using rl . in particular ,the rl algorithms ( which rates potential states based on a _ value function _ ) and ( which rates action - state pairs ) have been investigated extensively .a detailed investigation of rl is available in .although powerful and widely applicable , solitary rl algorithms will not perform well on large distributed heterogeneous problems in general .this is due to the very big size of the action - policy space for such problems .in addition , without centralized communication and control , how a solitary rl algorithm could run the full system at all , poorly or well , becomes a major concern .for these reasons , it is natural to consider deploying many rl algorithms rather than a single one for these large distributed problems .we will discuss the coordination issues such an approach raises in conjunction with multi - agent systems in section [ sec : mas ] and with learnability in coins in section [ sec : math ] .the field of distributed artificial intelligence ( dai ) has arisen as more and more traditional artificial intelligence ( ai ) tasks have migrated toward parallel implementation .the most direct approach to such implementations is to directly parallelize ai production systems or the underlying programming languages .an alternative and more challenging approach is to use distributed computing , where not only are the individual reasoning , planning and scheduling ai tasks parallelized , but there are _ different modules _ with different such tasks , concurrently working toward a common goal . 
in a dai, one needs to ensure that the task has been modularized in a way that improves efficiency .unfortunately , this usually requires a central controller whose purpose is to allocate tasks and process the associated results .moreover , designing that controller in a traditional ai fashion often results in brittle solutions .accordingly , recently there has been a move toward both more autonomous modules and fewer restrictions on the interactions among the modules . despite this evolution, dai maintains the traditional ai concern with a pre - fixed set of _ particular _ aspects of intelligent behavior ( _ e.g. _ reasoning , understanding , learning etc . ) rather than on their _ cumulative _ character .as the idea that intelligence may have more to do with the interaction among components started to take shape , focus shifted to concepts ( _ e.g. _ , multi - agent systems ) that better incorporated that idea . the field of multi - agent systems ( mas ) is concerned with the interactions among the members of such a set of agents , as well as the inner workings of each agent in such a set ( _ e.g. _ , their learning algorithms ) . as in computational ecologies and computational markets ( see below ) , a well - designed mas is one that achieves a global task through the actions of its components .the associated design steps involve : 1 .decomposing a global task into distributable subcomponents , yielding tractable tasks for each agent ; 2 . establishing communication channels that provide sufficient information to each of the agents for it to achieve its task , but are not too unwieldly for the overall system to sustain ; and 3 .coordinating the agents in a way that ensures that they cooperate on the global task , or at the very least does not allow them to pursue conflicting strategies in trying to achieve their tasks .step ( 3 ) is rarely trivial ; one of the main difficulties encountered in mas design is that agents act selfishly and artificial cooperation structures have to be imposed on their behavior to enforce cooperation .an active area of research , which holds promise for addressing parts the coin design problem , is to determine how selfish agents `` incentives '' have to be engineered in order to avoid the tragedy of the commons ( toc ) .( this work draws on the economics literature , which we review separately below . ) when simply providing the right incentives is not sufficient , one can resort to strategies that actively induce agents to cooperate rather than act selfishly . in such cases coordination , negotiations , coalition formation or contracting among agents may be needed to ensure that they do not work at cross purposes .unfortunately , all of these approaches share with dai and its offshoots the problem of relying excessively on hand - tailoring , and therefore being difficult to scale and often nonrobust .in addition , except as noted in the next subsection , they involve no rl , and therefore the constituent computational elements are usually not as adaptive and robust as we would like . 
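Before turning to MAS's whose agents do use RL, it may help to have in mind what such a per-agent learner looks like in its simplest form. The sketch below is generic one-step tabular Q-learning (an action-value method of the kind mentioned earlier), not code from this chapter; the `env.reset()` / `env.step()` / `env.actions` interface is an assumption of the sketch.

```python
# Generic one-step tabular Q-learning (illustrative; the environment interface
# env.reset() / env.step() / env.actions is an assumption of this sketch).
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    """Learn action values Q(state, action) from reward signals alone."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < eps:                  # explore
                action = random.choice(env.actions)
            else:                                      # exploit current estimates
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in env.actions)
            # move Q(state, action) towards the one-step bootstrapped target
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```

Each agent in a multi-agent deployment would run such an update on its own private reward stream; nothing in the update requires a model of the environment, a teacher, or explicit knowledge of the other agents.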
because it neither requires explicit modeling of the environment nor having a `` teacher '' that provides the `` correct '' actions , the approach of having the individual agents in a mas use rl is well - suited for mas s deployed in domains where one has little knowledge about the environment and/or other agents .there are two main approaches to designing such mas s : + ( i ) one has ` solipsistic agents ' that do nt know about each other and whose rl rewards are given by the performance of the entire system ( so the joint actions of all other agents form an `` inanimate background '' contributing to the reward signal each agent receives ) ; + ( ii ) one has ` social agents ' that explicitly model each other and take each others actions into account . both ( i ) and ( ii ) can be viewed as ways to ( try to ) coordinate the agents in a mas in a robust fashion .* solipsistic agents : * mas s with solipsistic agents have been successfully applied to a multitude of problems .generally , these schemes use rl algorithms similar to those discussed in section [ sec : control ] .however much of this work lacks a well - defined global task or broad applicability ( _ e.g. _ , ) .more generally , none of the work with solipsistic agents scales well .( as illustrated in our experiments on the `` bar problem '' , recounted below . )the problem is that each agent must be able to discern the effect of its actions on the overall performance of the system , since that performance constitutes its reward signal .as the number of agents increases though , the effects of any one agent s actions ( signal ) will be swamped by the effects of other agents ( noise ) , making the agent unable to learn well , if at all .( see the discussion below on learnability . ) in addition , of course , solipsistic agents can not be used in situations lacking centralized calculation and broadcast of the single global reward signal .* social agents : * mas s whose agents take the actions of other agents into account synthesize rl with game theoretic concepts ( _ e.g. _ , nash equilibrium ) .they do this to try to ensure that the overall system both moves toward achieving the overall global goal and avoids often deleterious oscillatory behavior .to that end , the agents incorporate internal mechanisms that actively model the behavior of other agents . in section [ sec : bar ] , we discuss a situation where such modeling is necessarily self - defeating .more generally , this approach usually involves extensive hand - tailoring for the problem at hand .some human economies provides examples of naturally occurring systems that can be viewed as a ( more or less ) well - performing coin .the field of economics provides much more though . both empirical economics ( _ e.g. _ , economic history , experimental economics ) and theoretical economics ( _ e.g. _ , general equilibrium theory , theory of optimal taxation ) provide a rich literature on strategic situations where many parties interact .in fact , much of the entire field of economics can be viewed as concerning how to maximize certain constrained kinds of world utilities , when there are certain ( very strong ) restrictions on the individual agents and their interactions , and in particular when we have limited freedom in setting either the utility functions of those agents or modifying their rl algorithms in any other way . 
in this section we summarize just two economic concepts , both of which are very closely related to coins , in that they deal with how a large number of interacting agents can function in a stable and efficient manner : general equilibrium theory and mechanism design .we then discuss general attempts to apply those concepts to distributed computational problems .we follow this with a discussion of game theory , and then present a particular celebrated toy - world problem that involves many of these issues .often the first version of `` equilibrium '' that one encounters in economics is that of supply and demand in single markets : the price of the market s good is determined by where the supply and demand curves for that good intersect . in cases where there is interaction among multiple markets however , even whenthere is no production but only trading , one can not simply determine the price of each market s good individually , as both the supply and demand for each good depends on the supply / demand of other goods . considering the price fluctuations across marketsleads to the concept of ` general equilibrium ' , where prices for each good are determined in such a way to ensure that all markets ` clear ' .intuitively , this means that prices are set so the total supply of each good is equal to the demand for that good .the existence of such an equilibrium , proven in , was first postulated by leon walras . a mechanism that calculates the equilibrium ( _ i.e. _ , ` market - clearing ' ) prices now bears his name : the walrasian auctioner . in general , for an arbitrary goal for the overall system , there is no reason to believe that having markets clear achieves that goal . in other words, there is no _ a priori _ reason why the general equilibrium point should maximize one s provided world utility function .however , consider the case where one s goal for the overall system is in fact that the markets clear .in such a context , examine the case where the interactions of real - world agents will induce the overall system to adopt the general equilibrium point , so long as certain broad conditions hold . then if we can impose those conditions, we can cause the overall system to behave in the manner we wish .however general equilibrium theory is not sufficient to establish those `` broad conditions '' , since it says little about real - world agents . in particular , general equilibrium theory suffers from having no temporal aspect ( _ i.e. _ , no dynamics ) and from assuming that all the agents are perfectly rational .another shortcoming of general equilibrium theory as a model of real - world systems is that despite its concerning prices , it does not readily accommodate the full concept of money . ofthe three main roles money plays in an economy ( medium of exchange in trades , store of value for future trades , and unit of account ) none are essential in a general equilibrium setting . the unit of account aspect is not needed as the bookkeeping is performed by the walrasian auctioner . since the supplies and demands are matched directly there is no need to facilitate trades , and thus no role for money as a medium of exchange . 
and finally , as the system reaches an equilibrium in one step , through the auctioner , there is no need to store value for future trading rounds .the reason that money is not needed can be traced to the fact that there is an `` overseer '' with global information who guides the system .if we remove the centralized communication and control exerted by this overseer , then ( as in a real economy ) agents will no longer know the exact details of the overall economy . they will be forced to makes guesses as in any learning system , and the differences in those guesses will lead to differences in their actions .such a decentralized learning - based system more closely resembles a coin than does a conventional general equilibrium system .in contrast to general equilibrium systems , the three main roles money plays in a human economy are crucial to the dynamics of such a decentralized system .this comports with the important effects in coins of having the agents utility functions involve money ( see background section above ) . even if there exists centralized communication so that we are nt considering a full - blown coin , if there is no centralized walras - like control , it is usually highly non - trivial to induce the overall system to adopt the general equilibrium point .one way to try to do so is via an auction .( this is the approach usually employed in computational markets see below . ) along with optimal taxation and public good theory , the design of auctions is the subject of the field of mechanism design .more generally , mechanism design is concerned with the incentives that must be applied to any set of agents that interact and exchange goods in order to get those agents to exhibit desired behavior .usually that desired behavior concerns pre - specified utility functions of some sort for each of the individual agents .in particular , mechanism design is usually concerned with incentive schemes which induce ` ( pareto ) efficient ' ( or ` pareto optimal ' ) allocations in which no agent can be made better off without hurting another agent .one particularly important type of such an incentive scheme is an auction .when many agents interact in a common environment often there needs to be a structure that supports the exchange of goods or information among those agents .auctions provide one such ( centralized ) structure for managing exchanges of goods .for example , in the english auction all the agents come together and ` bid ' for a good , and the price of the good is increased until only one bidder remains , who gets the good in exchange for the resource bid . as another example , in the dutch auction the price of a good is decreased until one buyer is willing to pay the current price .all auctions perform the same task : match supply and demand . as such , auctions are one of the ways in which price equilibration among a set of interacting agents ( perhaps an equilibration approximating general equilibrium , perhaps not ) can be achieved .however , an auction mechanism that induces pareto efficiency does not necessarily maximize some other world utility .for example , in a transaction in an english auction both the seller and the buyer benefit .they may even have arrived at an allocation which is efficient . 
however , in that the winner may well have been willing to pay more for the good , such an outcome may confound the goal of the market designer , if that designer s goal is to maximize revenue .this point is returned to below , in the context of computational economics .` computational economies ' are schemes inspired by economics , and more specifically by general equilibrium theory and mechanism design theory , for managing the components of a distributed computational system .they work by having a ` computational market ' , akin to an auction , guide the interactions among those components .such a market is defined as any structure that allows the components of the system to exchange information on relative valuation of resources ( as in an auction ) , establish equilibrium states ( _ e.g. _ , determine market clearing prices ) and exchange resources ( _ i.e. _ , engage in trades ) .such computational economies can be used to investigate real economies and biological systems .they can also be used to design distributed computational systems . for example , such computational economies are well - suited to some distributed resource allocation problems , where each component of the system can either directly produce the `` goods '' it needs or acquire them through trades with other components .computational markets often allow for far more heterogeneity in the components than do conventional resource allocation schemes .furthermore , there is both theoretical and empirical evidence suggesting that such markets are often able to settle to equilibrium states .for example , auctions find prices that satisfy both the seller and the buyer which results in an increase in the utility of both ( else one or the other would not have agreed to the sale ) .assuming that all parties are free to pursue trading opportunities , such mechanisms move the system to a point where all possible bilateral trades that could improve the utility of both parties are exhausted .now restrict attention to the case , implicit in much of computational market work , with the following characteristics : first , world utility can be expressed as a monotonically increasing function where each argument of can in turn be interpreted as the value of a pre - specified utility function for agent .second , each of those is a function of an -indexed ` goods vector ' of the non - perishable goods `` owned '' by agent .the components of that vector are , and the overall system dynamics is restricted to conserve the vector .( there are also some other , more technical conditions . ) as an example , the resource allocation problem can be viewed as concerning such vectors of `` owned '' goods . due to the second of our two conditions , one can integrate a market - clearing mechanism into any system of this sort .due to the first condition , since in a market equilibrium with non - perishable goods no ( rational ) agent ends up with a value of its utility function lower than the one it started with , the value of the world utility function must be higher at equilibrium than it was initially .in fact , so long as the individual agents are smart enough to avoid all trades in which they do not benefit , any computational market can only improve this kind of world utility , even if it does not achieve the market equilibrium .( see the discussion of `` weak triviality '' below . 
)this line of reasoning provides one of the main reasons to use computational markets when they can be applied .conversely , it underscores one of the major limitations of such markets : starting with an arbitrary world utility function with arbitrary dynamical restrictions , it may be quite difficult to cast that function as a monotonically increasing taking as arguments a set of agents goods - vector - based utilities , if we require that those be well - enough behaved that we can reasonably expect the agents to optimize them in a market setting .one example of a computational economy being used for resource allocation is huberman and clearwater s use of a double blind auction to solve the complex task of controlling the temperature of a building . in this case , each agent ( individual temperature controller ) bids to buy or sell cool or warm air .this market mechanism leads to an equitable temperature distribution in the system .other domains where market mechanisms were successfully applied include purchasing memory in an operating systems , allocating virtual circuits , `` stealing '' unused cpu cycles in a network of computers , predicting option futures in financial markets , and numerous scheduling and distributed resource allocation problems .computational economics can also be used for tasks not tightly coupled to resource allocation .for example , following the work of maes and ferber , baum shows how by using computational markets a large number of agents can interact and cooperate to solve a variant of the blocks world problem . viewed as candidate coins , all market - based computational economics fall short in relying on both centralized communication and centralized control to some degree .often that reliance is extreme .for example , the systems investigated by baum not only have the centralized control of a market , but in addition have centralized control of all other non - market aspects of the system .( indeed , the market is secondary , in that it is only used to decide which single expert among a set of candidate experts gets to exert that centralized control at any given moment ) .there has also been doubt cast on how well computational economies perform in practice , and they also often require extensive hand - tailoring in practice .finally , return to consideration of a world utility function that is a monotonically increasing function whose arguments are the utilities of the agents . in general , the maximum of such a world utility function will be a pareto optimal point .so given the utility functions of the agents , by considering all such we map out an infinite set of pareto optimal points that maximize _ some _ such world utility function .( is usually infinite even if we only consider maximizing those world utilities subject to an overall conservation of goods constraint . ) now the market equilibrium is a pareto optimal point , and therefore lies in .but it is only one element of .moreover , it is usually set in full by the utilities of the agents , in concert with the agents initial endowments .in particular , it is independent of the world utility . in general then , given the utilities of the agents and a world utility , there is no _ a priori _ reason to believe that the particular element in picked out by the auction is the point that maximizes that particular world utility .this subtlety is rarely addressed in the work on using computational markets to achieve a global goal .it need not be uncircumventable however . 
for example , one obvious idea would be to to try to distort the agents _ perceptions _ of their utility functions and/or initial endowments so that the resultant market equilibrium has a higher value of the world utility at hand .game theory is the branch of mathematics concerned with formalized versions of `` games '' , in the sense of chess , poker , nuclear arms races , and the like .it is perhaps easiest to describe it by loosely defining some of its terminology , which we do here and in the next subsection .the simplest form of a game is that of ` non - cooperative single - stage extensive - form ' game , which involves the following situation : there are two or more agents ( called ` players ' in the literature ) , each of which has a pre - specified set of possible actions that it can follow .( a ` finite ' game has finite sets of possible actions for all the players . ) in addition , each agent has a utility function ( also called a ` payoff matrix ' for finite games ) .this maps any ` profile ' of the action choices of all agents to an associated utility value for agent .( in a ` zero - sum ' game , for every profile , the sum of the payoffs to all the agents is zero . )the agents choose their actions in a sequence , one after the other .the structure determining what each agent knows concerning the action choices of the preceding agents is known as the ` information set . 'games in which each agent knows exactly what the preceding ( ` leader ' ) agent did are known as ` stackelberg games ' .( a variant of such a game is considered in our experiments below .see also . ) in a ` multi - stage ' game , after all the agents choose their first action , each agent is provided some information concerning what the other agents did .the agent uses this information to choose its next action . in the usual formulation, each agent gets its payoff at the end of all of the game s stages .an agent s ` strategy ' is the rule it elects to follow mapping the information it has at each stage of a game to its associated action .it is a ` pure strategy ' if it is a deterministic rule . if instead the agent s action is chosen by randomly sampling from a distribution , that distribution is known a ` mixed strategy ' .note that an agent s strategy concerns possible sequences of provided information , even any that can not arise due to the strategies of the other agents .any multi - stage extensive - form game can be converted into a ` normal form ' game , which is a single - stage game in which each agent is ignorant of the actions of the other agents , so that all agents choose their actions `` simultaneously '' .this conversion is acieved by having the `` actions '' of each agent in the normal form game correspond to an entire strategy in the associated multi - stage extensive - form game .the payoffs to all the agents in the normal form game for a particular strategy profile is then given by the associated payoff matrices of the multi - stage extensive form - game .a ` solution ' to a game , or an ` equilibrium ' , is a profile in which every agent behaves `` rationally '' .this means that every agent s choice of strategy optimizes its utility subject to a pre - specified set of conditions . in conventional game theorythose conditions involve , at a minimum , perfect knowledge of the payoff matrices of all other players , and often also involve specification of what strategies the other agents adopted and the like . 
in particular , a ` nash equilibrium ' is a profile where each agent has chosen the best strategy it can , _ given the choices of the other agents_ . a game may have no nash equilibria , one equilibrium , or many equilibria in the space of pure strategies . a beautiful and seminal theorem due to nash proves that every game has at least one nash equilibrium in the space of mixed strategies . ( a short worked example of pure - strategy equilibria is given below . ) there are several different reasons one might expect a game to result in a nash equilibrium . one is that it is the point that perfectly rational bayesian agents would adopt , assuming the probability distributions they used to calculate expected payoffs were consistent with one another . a related reason , arising even in a non - bayesian setting , is that a nash equilibrium provides `` consistent '' predictions , in that if all parties predict that the game will converge to a nash equilibrium , no one will benefit by changing strategies . having a consistent prediction does not ensure that all agents payoffs are maximized though . the study of small perturbations around nash equilibria from a stochastic dynamics perspective is just one example of a ` refinement ' of nash equilibrium , that is , a criterion for selecting a single equilibrium state when more than one is present . in cooperative game theory the agents are able to enter binding contracts with one another , and thereby coordinate their strategies . this allows the agents to avoid being `` stuck '' in nash equilibria that are pareto inefficient , that is , being stuck at equilibrium profiles in which all agents would benefit if only they could agree to all adopt different strategies , with no possibility of betrayal . the _ characteristic function _ of a game involves subsets ( ` coalitions ' ) of agents playing the game . for each such subset , it gives the sum of the payoffs of the agents in that subset that those agents can guarantee if they coordinate their strategies . an _ imputation _ is a division of such a guaranteed sum among the members of the coalition . it is often the case that for a subset of the agents in a coalition one imputation _ dominates _ another , meaning that under threat of leaving the coalition that subset of agents can demand the first imputation rather than the second . so the problem each agent is confronted with in a cooperative game is which set of other agents to form a coalition with , given the characteristic function of the game and the associated imputations it can demand of its partners . there are several different kinds of solution for cooperative games that have received detailed study , varying in how the agents address this problem of who to form a coalition with . some of the more popular are the ` core ' , the ` shapley value ' , the ` stable set solution ' , and the ` nucleolus ' . in the real world , the actual underlying game the agents are playing does not only involve the actions considered in cooperative game theory s analysis of coalitions and imputations . the strategies of that underlying game also involve bargaining behavior , considerations of trying to cheat on a given contract , bluffing and threats , and the like .
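to make the notion of a pure - strategy nash equilibrium concrete , the following minimal sketch enumerates such equilibria for a two - player finite game by brute force . the payoff matrices used here are the standard prisoner s dilemma values and are chosen purely for illustration ; nothing in the sketch is specific to the systems discussed in this paper .

```python
import itertools

# Payoff matrices for a hypothetical two-player, two-action game
# (the classic prisoner's dilemma, with actions 0 = cooperate, 1 = defect).
# payoff[i][a][b] is player i's payoff when player 0 plays a and player 1 plays b.
payoff = [
    [[3, 0],    # player 0's payoffs
     [5, 1]],
    [[3, 5],    # player 1's payoffs
     [0, 1]],
]

def pure_nash_equilibria(payoff):
    """Return all action profiles (a, b) from which neither player can gain
    by unilaterally deviating -- the pure-strategy Nash equilibria."""
    n0 = len(payoff[0])        # number of actions for player 0
    n1 = len(payoff[0][0])     # number of actions for player 1
    equilibria = []
    for a, b in itertools.product(range(n0), range(n1)):
        best0 = all(payoff[0][a][b] >= payoff[0][a2][b] for a2 in range(n0))
        best1 = all(payoff[1][a][b] >= payoff[1][a][b2] for b2 in range(n1))
        if best0 and best1:
            equilibria.append((a, b))
    return equilibria

print(pure_nash_equilibria(payoff))    # -> [(1, 1)]: mutual defection
```

a game such as matching pennies would return an empty list here , illustrating that pure - strategy equilibria need not exist , whereas nash s theorem guarantees at least one equilibrium once mixed strategies are allowed .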
in many respects , by concentrating on solutions for coalition formation and their relation with the characteristic function , cooperative game theory abstracts away these details of the true underlying game .conversely though , progress has recently been made in understanding how cooperative games can arise from non - cooperative games , as they must in the real world .not surprisingly , game theory has come to play a large role in the field of multi - agent systems .in addition , due to darwinian natural selection , one might expect game theory to be quite important in population biology , in which the `` utility functions '' of the individual agents can be taken to be their reproductive fitness . as it turns out , there is an entire subfield of game theory concerned with this connection with population biology , called ` evolutionary game theory ' . to introduce evolutionary game theory , consider a game in which all players share the same space of possible strategies , and there is an additional space of possible ` attribute vectors ' that characterize an agent , along with a probability distribution across that new space .( examples of attributes in the physical world could be things like size , speed , etc . )we select a set of agents to play a game by randomly sampling .those agents attribute vectors jointly determine the payoff matrices of each of the individual agents .( intuitively , what benefit accrues to an agent for taking a particular action depends on its attributes and those of the other agents . )however each agent has limited information concerning both its attribute vector and that of the other players in the game , information encapsulated in an ` information structure ' .the information structure specifies how much each agent knows concerning the game it is playing . in this context , we enlarge the meaning of the term `` strategy '' to not just be a mapping from information sets and the like to actions , but from entire information structures to actions .in addition to the distribution over attribute vectors , we also have a distribution over strategies , . a strategy is a ` population strategy ' if is a delta function about .intuitively , we have a population strategy when each animal in a population `` follows the same behavioral rules '' , rules that take as input what the animal is able to discern about its strengths and weakness relative to those other members of the population , and produce as output how the animal will act in the presence of such animals . given , a population strategy centered about , and its own attribute vector , any player in the support of has an expected payoff for any strategy it might adopt .when s payoff could not improve if it were to adopt any strategy other than , we say that is ` evolutionary stable ' .intuitively , an evolutionary stable strategy is one that is stable with respect to the introduction of mutants into the population .now consider a sequence of such evolutionary games .interpret the payoff that any agent receives after being involved in such a game as the ` reproductive fitness ' of that agent , in the biological sense .so the higher the payoff the agent receives , in comparison to the fitnesses of the other agents , the more `` offspring '' it has that get propagated to the next game . 
in the continuum - time limit , where games are indexed by a real number , this can be formalized by a differential equation . this equation specifies the derivative of evaluated for each agent s attribute vector , as a monotonically increasing function of the relative difference between the payoff of and the average payoff of all the agents . ( we also have such an equation for . ) the resulting dynamics is known as ` replicator dynamics ' , with an evolutionary stable population strategy , if it exists , being one particular fixed point of the dynamics . ( a short numerical illustration of these dynamics is given below . ) now consider removing the reproductive aspect of evolutionary game theory , and instead have each agent propagate to the next game , with `` memory '' of the events of the preceding game . furthermore , allow each agent to modify its strategy from one game to the next by `` learning '' from its memory of past games , in a bounded rational manner . the field of learning in games is concerned with exactly such situations . most of the formal work in this field involves simple models for the learning process of the agents . for example , in ` fictitious play ' , in each successive game , each agent adopts what would be its best strategy if its opponents chose their strategies according to the empirical frequency distribution of the strategies that it has encountered in the past . more sophisticated versions of this work employ simple bayesian learning algorithms , or re - inventions of some of the techniques of the rl community . typically in learning in games one defines a payoff to the agent for a sequence of games , for example as a discounted sum of the payoffs in each of the constituent games . within this framework one can study the long term effects of strategies such as cooperation and see if they arise naturally and , if so , under what circumstances . many aspects of real world games that do not occur very naturally otherwise arise spontaneously in these kinds of games . for example , when the number of games to be played is not pre - fixed , it may behoove a particular agent to treat its opponent better than it would otherwise , since it may have to rely on that other agent s treating it well in the future , if they end up playing each other again . this framework also allows us to investigate the dependence of evolving strategies on the amount of information available to the agents ; the effect of communication on the evolution of cooperation ; and the parallels between auctions and economic theory . in many respects , learning in games is even more relevant to the study of coins than is traditional game theory . however it suffers from the same major shortcoming : it is almost exclusively focused on the forward problem rather than the inverse problem . in essence , coin design is the inverse problem of game theory . the `` el farol '' bar problem and its variants provide a clean and simple testbed for investigating certain kinds of interactions among agents .
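before turning to the bar problem in detail , it is worth making the replicator dynamics described above concrete . in its standard textbook form , the replicator equation sets the growth rate of each strategy s population share equal to the difference between that strategy s expected payoff and the population - average payoff . the sketch below integrates that equation for a hypothetical two - strategy hawk - dove game ; the payoff values , initial condition and step size are illustrative assumptions only .

```python
import numpy as np

# Hypothetical hawk-dove payoff matrix (row strategy's payoff against column strategy),
# with benefit V = 5 and cost of fighting C = 10. These numbers are illustrative only.
A = np.array([[(5.0 - 10.0) / 2.0, 5.0],    # hawk vs hawk, hawk vs dove
              [0.0,               2.5]])    # dove vs hawk, dove vs dove

x = np.array([0.9, 0.1])    # initial population fractions of (hawk, dove)
dt = 0.01

# Standard replicator equation: dx_i/dt = x_i * (f_i - f_bar),
# where f_i = (A x)_i is strategy i's expected payoff and f_bar = x . f.
for _ in range(20000):
    f = A @ x
    f_bar = x @ f
    x = x + dt * x * (f - f_bar)

print(x)    # converges toward the mixed evolutionary stable state
```

for this payoff matrix the population share of hawks converges to one half , the interior evolutionary stable state ; starting the integration from other interior points leads to the same fixed point .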
in the original version of the bar problem , which arose in economics , at each time step ( each `` night '' ) , each agent needs to decide whether to attend a particular bar . the goal of the agent in making this decision depends on the total attendance at the bar on that night . if the total attendance is below a preset capacity then the agent should have attended . conversely , if the bar is overcrowded on the given night , then the agent should not attend . ( because of this structure , the bar problem with capacity set to half of the total number of agents is also known as the ` minority game ' ; each agent selects one of two groups at each time step , and those that are in the minority have made the right choice ) . the agents make their choices by predicting ahead of time whether the attendance on the current night will exceed the capacity and then taking the appropriate course of action . what makes this problem particularly interesting is that it is impossible for each agent to be perfectly `` rational '' , in the sense of correctly predicting the attendance on any given night . this is because if most agents predict that the attendance will be low ( and therefore decide to attend ) , the attendance will actually be high , while if they predict the attendance will be high ( and therefore decide not to attend ) the attendance will be low . ( in the language of game theory , this essentially amounts to the property that there are no pure strategy nash equilibria . ) alternatively , viewing the overall system as a coin , it has a prisoner s dilemma - like nature , in that `` rational '' behavior by all the individual agents thwarts the global goal of maximizing total enjoyment ( defined as the sum of all agents enjoyment and maximized when the bar is exactly at capacity ) . this frustration effect is similar to what occurs in spin glasses in physics , and makes the bar problem closely related to the physics of emergent behavior in distributed systems . researchers have also studied the dynamics of the bar problem to investigate economic properties like competition , cooperation and collective behavior , and especially their relationship to market efficiency . ( a minimal simulation of the minority game version of the problem is sketched below . ) properly speaking , biological systems do not involve utility functions and searches across them with rl algorithms . however it has long been appreciated that there are many ways in which viewing biological systems as involving searches over such functions can lead to deeper understanding of them . conversely , some have argued that the mechanisms underlying biological systems can be used to help design search algorithms . these kinds of reasoning , which relate utility functions and biological systems , have traditionally focused on the case of a single biological system operating in some external environment . if we extend this kind of reasoning to a set of biological systems that are co - evolving with one another , then we have essentially arrived at biologically - based coins . this section discusses some of the ways in which previous work in the literature bears on this relationship between coins and biology . the fields of population biology and ecological modeling are concerned with the large - scale `` emergent '' processes that govern systems consisting of many ( relatively ) simple entities interacting with one another .
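the following sketch , referred to above , simulates the minority game version of the bar problem . each agent here owns a small set of randomly drawn lookup - table strategies over the recent history of outcomes and always acts on whichever of its strategies has predicted the minority side best so far ; this particular strategy scheme , and all of the parameter values , are common but arbitrary choices made only for illustration .

```python
import random

random.seed(0)

N_AGENTS, MEMORY, N_STRATEGIES, T = 101, 4, 2, 2000

def random_strategy():
    # A strategy maps each possible history of the last MEMORY outcomes to a choice in {0, 1}.
    return [random.randint(0, 1) for _ in range(2 ** MEMORY)]

strategies = [[random_strategy() for _ in range(N_STRATEGIES)] for _ in range(N_AGENTS)]
scores = [[0] * N_STRATEGIES for _ in range(N_AGENTS)]
history = random.randint(0, 2 ** MEMORY - 1)    # recent outcomes packed into an integer

attendance = []
for t in range(T):
    # Each agent acts on whichever of its strategies has predicted best so far.
    choices = []
    for i in range(N_AGENTS):
        best = max(range(N_STRATEGIES), key=lambda s: scores[i][s])
        choices.append(strategies[i][best][history])
    n_ones = sum(choices)
    minority = 0 if n_ones > N_AGENTS / 2 else 1    # the less-crowded choice "wins"
    attendance.append(n_ones)
    # Reward every strategy (played or not) that would have picked the minority side.
    for i in range(N_AGENTS):
        for s in range(N_STRATEGIES):
            if strategies[i][s][history] == minority:
                scores[i][s] += 1
    history = ((history << 1) | minority) % (2 ** MEMORY)

mean = sum(attendance) / T
var = sum((a - mean) ** 2 for a in attendance) / T
print(mean, var)    # attendance fluctuates around N_AGENTS / 2
```

the interesting quantity is the variance of the attendance about the capacity ; how that variance scales with the ratio of the number of distinct histories to the number of agents is one of the central questions studied in the minority game literature .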
as usually cast , the `` simple entities '' are members of one or more species , and the interactions are some mathematical abstraction of the process of natural selection as it occurs in biological systems ( involving processes like genetic reproduction of various sorts , genotype - phenotype mappings , inter and intra - species competitions for resources , etc . ) .population biology and ecological modeling in this context addresses questions concerning the dynamics of the resultant ecosystem , and in particular how its long - term behavior depends on the details of the interactions between the constituent entities .broadly construed , the paradigm of ecological modeling can even be broadened to study how natural selection and self - regulating feedback creates a stable planet - wide ecological environment gaia .the underlying mathematical models of other fields can often be usefully modified to apply to the kinds of systems population biology is interested in .( see also the discussion in the game theory subsection above . )conversely , the underlying mathematical models of population biology and ecological modeling can be applied to other non - biological systems . in particular , those models shed light on social issues such as the emergence of language or culture , warfare , and economic competition .they also can be used to investigate more abstract issues concerning the behavior of large complex systems with many interacting components .going a bit further afield , an approach that is related in spirit to ecological modeling is ` computational ecologies ' .these are large distributed systems where each component of the system s acting ( seemingly ) independently results in complex global behavior .those components are viewed as constituting an `` ecology '' in an abstract sense ( although much of the mathematics is not derived from the traditional field of ecological modeling ) . 
in particular, one can investigate how the dynamics of the ecology is influenced by the information available to each component and how cooperation and communication among the components affects that dynamics .although in some ways the most closely related to coins of the current ecology - inspired research , the field of computational ecologies has some significant shortcomings if one tries to view it as a full science of coins .in particular , it suffers from not being designed to solve the inverse problem of how to configure the system so as to arrive at a particular desired dynamics .this is a difficulty endemic to the general program of equating ecological modeling and population biology with the science of coins .these fields are primarily concerned with the `` forward problem '' of determining the dynamics that arises from certain choices of the underlying system .unless one s desired dynamics is sufficiently close to some dynamics that was previously catalogued ( during one s investigation of the forward problem ) , one has very little information on how to set up the components and their interactions to achieve that desired dynamics .in addition , most of the work in these fields does not involve rl algorithms , and viewed as a context in which to design coins suffers from a need for hand - tailoring , and potentially lack of robustness and scalability .the field of ` swarm intelligence ' is concerned with systems that are modeled after social insect colonies , so that the different components of the system are queen , worker , soldier , etc .it can be viewed as ecological modeling in which the individual entities have extremely limited computing capacity and/or action sets , and in which there are very few types of entities .the premise of the field is that the rich behavior of social insect colonies arises not from the sophistication of any individual entity in the colony , but from the interaction among those entities .the objective of current research is to uncover kinds of interactions among the entity types that lead to pre - specified behavior of some sort .more speculatively , the study of social insect colonies may also provide insight into how to achieve learning in large distributed systems .this is because at the level of the individual insect in a colony , very little ( or no ) learning takes place .however across evolutionary time - scales the social insect species as a whole functions as if the various individual types in a colony had `` learned '' their specific functions .the `` learning '' is the direct result of natural selection .( see the discussion on this topic in the subsection on ecological modeling . )swarm intelligences have been used to adaptively allocate tasks in a mail company , solve the traveling salesman problem and route data efficiently in dynamic networks among others . despite this, such intelligences do not really constitute a general approach to designing coins .there is no general framework for adapting swarm intelligences to maximize particular world utility functions .accordingly , such intelligences generally need to be hand - tailored for each application . 
and after such tailoring , it is often quite a stretch to view the system as `` biological '' in any sense , rather than just a simple and _ a priori _ reasonable modification of some previously deployed system .the two main objectives of artificial life , closely related to one another , are understanding the abstract functioning and especially the origin of terrestrial life , and creating organisms that can meaningfully be called `` alive '' .the first objective involves formalizing and abstracting the mechanical processes underpinning terrestrial life . in particular ,much of this work involves various degrees of abstraction of the process of self - replication .some of the more real - world - oriented work on this topic involves investigating how lipids assemble into more complex structures such as vesicles and membranes , which is one of the fundamental questions concerning the origin of life .many computer models have been proposed to simulate this process , though most suffer from overly simplifying the molecular morphology .more generally , work concerned with the origin of life can constitute an investigation of the functional self - organization that gives rise to life . in this regard ,an important early work on functional self - organization is the _ lambda calculus _ , which provides an elegant framework ( recursively defined functions , lack of distinction between object and function , lack of architectural restrictions ) for studying computational systems .this framework can be used to develop an artificial chemistry `` function gas '' that displays complex cooperative properties .the second objective of the field of artificial life is less concerned with understanding the details of terrestrial life per se than of using terrestrial life as inspiration for how to design living systems .for example , motivated by the existence ( and persistence ) of computer viruses , several workers have tried to design an immune system for computers that will develop `` antibodies '' and handle viruses both more rapidly and more efficiently than other algorithms .more generally , because we only have one sampling point ( life on earth ) , it is very difficult to precisely formulate the process by which life emerged . by creating an artificial world inside a computer however , it is possible to study far more general forms of life .see also where the argument is presented that the richest way of approaching the issue of defining `` life '' is phenomenologically , in terms of self- scaling properties of the system .cellular automata can be viewed as digital abstractions of physical gases .formally , they are discrete - time recurrent neural nets where the neurons live on a grid , each neuron has a finite number of potential states , and inter - neuron connections are ( usually ) purely local .( see below for a discussion of recurrent neural nets . 
)so the state update rule of each neuron is fixed and local , the next state of a neuron being a function of the current states of it and of its neighboring elements .the state update rule of ( all the neurons making up ) any particular cellular automaton specifies the mapping taking the initial configuration of the states of all of its neurons to the final , equilibrium ( perhaps strange ) attractor configuration of all those neurons .so consider the situation where we have a desired such mapping , and want to know an update rule that induces that mapping .this is a search problem , and can be viewed as similar to the inverse problem of how to design a coin to achieve a pre - specified global goal , albeit a `` coin '' whose nodal elements do not use rl algorithms .genetic algorithms are a special kind of search algorithm , based on analogy with the biological process of natural selection via recombination and mutation of a genome .although genetic algorithms ( and ` evolutionary computation ' in general ) have been studied quite extensively , there is no formal theory justifying genetic algorithms as search algorithms and few empirical comparisons with other search techniques .one example of a well - studied application of genetic algorithms is to ( try to ) solve the inverse problem of finding update rules for a cellular automaton that induce a pre - specified mapping from its initial configuration to its attractor configuration . to date, they have used this way only for extremely simple configuration mappings , mappings which can be trivially learned by other kinds of systems . despite the simplicity of these mappings ,the use of genetic algorithms to try to train cellular automata to exhibit them has achieved little success .equilibrium statistical physics is concerned with the stable state character of large numbers of very simple physical objects , interacting according to well - specified local deterministic laws , with probabilistic noise processes superimposed .typically there is no sense in which such systems can be said to have centralized control , since all particles contribute comparably to the overall dynamics . aside from mesoscopic statistical physics ,the numbers of particles considered are usually huge ( _ e.g. _ , ) , and the particles themselves are extraordinarily simple , typically having only a few degrees of freedom . moreover , the noise processes usually considered are highly restricted , being those that are formed by `` baths '' , of heat , particles , and the like .similarly , almost all of the field restricts itself to deterministic laws that are readily encapsulated in hamilton s equations ( schrodinger s equation and its field - theoretic variants for quantum statistical physics ) .in fact , much of equilibrium statistical physics is nt even concerned with the dynamic laws by themselves ( as for example is stochastic markov processes ) .rather it is concerned with invariants of those laws ( _ e.g. _ , energy ) , invariants that relate the states of all of the particles . trivially then, deterministic laws without such readily - discoverable invariants are outside of the purview of much of statistical physics .one potential use of statistical physics for coins involves taking the systems that statistical physics analyzes , especially those analyzed in its condensed matter variant ( _ e.g. 
_ , spin glasses ) , as simplified models of a class of coins .this approach is used in some of the analysis of the bar problem ( see above ) .it is used more overtly in ( for example ) the work of galam , in which the equilibrium coalitions of a set of `` countries '' are modeled in terms of spin glasses .this approach can not provide a general coin framework though .in addition to the restrictions listed above on the kinds of systems it considers , this is due to its not providing a general solution to arbitrary coin inversion problems , and to its not employing rl algorithms .another contribution that statistical physics can make is with the mathematical techniques it has developed for its own purposes , like mean field theory , self - averaging approximations , phase transitions , monte carlo techniques , the replica trick , and tools to analyze the thermodynamic limit in which the number of particles goes to infinity .although such techniques have not yet been applied to coins , they have been successfully applied to related fields .this is exemplified by the use of the replica trick to analyze two - player zero - sum games with random payoff matrices in the thermodynamic limit of the number of strategies in .other examples are the numeric investigation of iterated prisoner s dilemma played on a lattice , the analysis of stochastic games by expressing of deviation from rationality in the form of a `` heat bath '' , and the use of topological entropy to quantify the complexity of a voting system studied in .other quite recent work in the statistical physics literature is formally identical to that in other fields , but presents it from a novel perspective .a good example of this is , which is concerned with the problem of controlling a spatially extended system with a single controller , by using an algorithm that is identical to a simple - minded proportional rl algorithm ( in essence , a rediscovery of rl ) .much of the theory of physics can be cast as solving for the extremization of an actional , which is a functional of the worldline of an entire ( potentially many - component ) system across all time .the solution to that extremization problem constitutes the actual worldline followed by the system . in this waythe calculus of variations can be used to solve for the worldline of a dynamic system . as an example, simple newtonian dynamics can be cast as solving for the worldline of the system that extremizes a quantity called the ` lagrangian ' , which is a function of that worldline and of certain parameters ( _ e.g. _ , the ` potential energy ' ) governing the system at hand . in this instance , the calculus of variations simply results in newton s laws .if we take the dynamic system to be a coin , we are assured that its worldline automatically optimizes a `` global goal '' consisting of the value of the associated actional .if we change physical aspects of the system that determine the functional form of the actional ( _ e.g. _ , change the system s potential energy function ) , then we change the global goal , and we are assured that our coin optimizes that new global goal . 
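the statement that a system s worldline extremizes its actional can be illustrated numerically . the sketch below discretizes the worldline of a single particle in a uniform gravitational field , holds the two endpoints fixed , and performs gradient descent on the discretized action ; the extremal path it finds is the newtonian free - fall parabola . the mass , field strength , time horizon and step sizes are arbitrary illustrative choices .

```python
import numpy as np

m, g, T, N = 1.0, 9.8, 2.0, 100     # mass, gravitational acceleration, total time, grid points
dt = T / (N - 1)
t = np.linspace(0.0, T, N)

x = np.zeros(N)                     # trial worldline; endpoints x(0) = x(T) = 0 stay clamped

def action(path):
    """Discretized action S = sum over segments of (kinetic - potential) * dt."""
    v = np.diff(path) / dt
    potential = m * g * 0.5 * (path[:-1] + path[1:])    # midpoint rule for V(x) = m g x
    return np.sum((0.5 * m * v ** 2 - potential) * dt)

lr = 0.008
for _ in range(100000):
    v = np.diff(x) / dt
    grad = np.zeros(N)
    grad[1:-1] = m * (v[:-1] - v[1:]) - m * g * dt      # dS/dx_i at the interior grid points
    x[1:-1] -= lr * grad[1:-1]                          # descend toward the extremal worldline

exact = 0.5 * g * t * (T - t)       # the newtonian trajectory with the same endpoints
print(action(x), np.max(np.abs(x - exact)))    # the discrete extremum reproduces the parabola
```

changing the potential term changes which worldline is extremal , which is the sense in which altering the actional alters the `` global goal '' that the physical system automatically optimizes .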
counter - intuitive physical systems , like those that exhibit braess paradox , are simply systems for which the `` world utility '' implicit in our human intuition is extremized at a point different from the one that extremizes the system s actional .the challenge in exploiting this to solve the coin design problem is in translating an arbitrary provided global goal for the coin into a parameterized actional . note that that actional must govern the dynamics of the physical coin , and the parameters of the actional must be physical variables in the coin , variables whose values we can modify . the field of active walker models is concerned with modeling `` walkers '' ( be they human walkers or instead simple physical objects ) crossing fields along trajectories , where those trajectories are a function of several factors , including in particular the trails already worn into the field .often the kind of trajectories considered are those that can be cast as solutions to actional extremization problems so that the walkers can be explicitly viewed as agents optimizing a private utility .one of the primary concerns with the field of active walker models is how the trails worn in the field change with time to reach a final equilibrium state .the problem of how to design the cement pathways in the field ( and other physical features of the field ) so that the final paths actually followed by the walkers will have certain desirable characteristics is then one of solving for parameters of the actional that will result in the desired worldline .this is a special instance of the inverse problem of how to design a coin . using active walker models this way to design coins , like action extremization in general , probably has limited applicability . also , it is not clear how robust such a design approach might be , or whether it would be scalable and exempt from the need for hand - tailoring .this subsection presents a `` catch - all '' of other fields that have little in common with one another except that they bear some relation to coins .an extremely well - researched body of work concerns the mathematical and numeric behavior of systems for which the probability distribution over possible future states conditioned on preceding states is explicitly provided .this work involves many aspects of monte carlo numerical algorithms , all of markov chains , and especially markov fields , a topic that encompasses the chapman - kolmogorov equations and its variants : liouville s equation , the fokker - plank equation , and the detailed - balance equation in particular .non - linear dynamics is also related to this body of work ( see the synopsis of iterated function systems below and the synopsis of cellular automata above ) , as is markov competitive decision processes ( see the synopsis of game theory above ) .formally , one can cast the problem of designing a coin as how to fix each of the conditional transition probability distributions of the individual elements of a stochastic field so that the aggregate behavior of the overall system is of a desired form . unfortunately , almost all that is known in this area instead concerns the forward problem , of inferring aggregate behavior from a provided set of conditional distributions .although such knowledge provides many `` bits and pieces '' of information about how to tackle the inverse problem , those pieces collectively cover only a very small subset of the entire space of tasks we might want the coin to perform . 
in particular ,they tell us very little about the case where the conditional distribution encapsulates rl algorithms .the technique of iterated function systems grew out of the field of nonlinear dynamics .in such systems a function is repeatedly and recursively applied to itself .the most famous example is the logistic map , for some between 0 and 4 ( so that stays between 0 and 1 ) . more generally the function along with its arguments can be vector - valued . in particular , we can construct such functions out of affine transformations of points in a euclidean plane .iterated functions systems have been applied to image data . in this casethe successive iteration of the function generically generates a fractal , one whose precise character is determined by the initial iteration-1 image . since fractals are ubiquitous in natural images , a natural idea is to try to encode natural images as sets of iterated function systems spread across the plane , thereby potentially garnering significant image compression .the trick is to manage the inverse step of starting with the image to be compressed , and determining what iteration-1 image(s ) and iterating function(s ) will generate an accurate approximation of that image . in the language of nonlinear dynamics , we have a dynamic system that consists of a set of iterating functions , together with a desired attractor ( the image to be compressed ) .our goal is to determine what values to set certain parameters of our dynamic system to so that the system will have that desired attractor .the potential relationship with coins arises from this inverse nature of the problem tackled by iterated function systems .if the goal for a coin can be cast as its relaxing to a particular attractor , and if the distributed computational elements are isomorphic to iterated functions , then the tricks used in iterated functions theory could be of use .although the techniques of iterated function systems might prove of use in designing coins , they are unlikely to serve as a generally applicable approach to designing coins . in addition , they do not involve rl algorithms , and often involve extensive hand - tuning .a recurrent neural net consists of a finite set of `` neurons '' each of which has a real - valued state at each moment in time .each neuron s state is updated at each moment in time based on its current state and that of some of the other neurons in the system .the topology of such dependencies constitute the `` inter - neuronal connections '' of the net , and the associated parameters are often called the `` weights '' of the net. the dynamics can be either discrete or continuous ( _ i.e. _ , given by difference or differential equations ) .recurrent nets have been investigated for many purposes .one of the more famous of these is associative memories .the idea is that given a pre - specified pattern for the ( states of the neurons in the ) net , there may exist inter - neuronal weights which result in a basin of attraction focussed on that pattern . if this is the case , then the net is equivalent to an associative memory , in that a complete pre - specified pattern across all neurons will emerge under the net s dynamics from any initial pattern that partially matches the full pre - specified pattern . 
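the associative - memory behavior just described can be demonstrated with a minimal hopfield - style recurrent net . the sketch below stores a single pattern in a hebbian ( outer - product ) weight matrix , corrupts a random subset of the neurons , and lets the synchronous update dynamics run to a fixed point ; the pattern , network size and corruption level are arbitrary illustrative choices .

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64
pattern = rng.choice([-1, 1], size=N)        # the pre-specified pattern to store

# Hebbian ("outer product") weights for a single stored pattern, no self-connections.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Start from a corrupted version of the pattern: flip a random subset of the neurons.
state = pattern.copy()
flip = rng.choice(N, size=N // 5, replace=False)
state[flip] *= -1

# Run the recurrent dynamics until it reaches a fixed point.
for _ in range(20):
    new_state = np.where(W @ state >= 0, 1, -1)
    if np.array_equal(new_state, state):
        break
    state = new_state

print(np.array_equal(state, pattern))        # True: the full pattern re-emerges
```

with many stored patterns the same outer - product construction works only up to a capacity limit , beyond which spurious attractors appear .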
in practice ,one wishes the net to simultaneously possess many such pre - specified associative memories .there are many schemes for `` training '' a recurrent net to have this property , including schemes based on spin glasses and schemes based on gradient descent .as can the fields of cellular automata and iterated function systems , the field of recurrent neural nets can be viewed as concerning certain variants of coins . also like those other fields though , recurrent neural nets has shortcomings if one tries to view it as a general approach to a science of coins .in particular , recurrent neural nets do not involve rl algorithms , and training them often suffers from scaling problems .more generally , in practice they can be hard to train well without hand - tailoring .packet routing in a data network presents a particularly interesting domain for the investigation of coins . in particular , with such routing : + ( i ) the problem is inherently distributed ; + ( ii ) for all but the most trivial networks it is impossible to employ global control ; + ( iii ) the routers have only access to local information ( routing tables ) ; + ( iv ) it constitutes a relatively clean and easily modified experimental testbed ; and + ( v ) there are potentially major bottlenecks induced by ` greedy ' behavior on the part of the individual routers , which behavior constitutes a readily investigated instance of the tragedy of the commons ( toc ) . many of the approaches to packet routing incorporate a variant on rl .q routing is perhaps the best known such approach and is based on routers using reinforcement learning to select the best path .although generally successful , q routing is not a general scheme for inverting a global task .this is even true if one restricts attention to the problem of routing in data networks there exists a global task in such problems , but that task is directly used to construct the algorithm .a particular version of the general packet routing problem that is acquiring increased attention is the quality of service ( qos ) problem , where different communication packets ( voice , video , data ) share the same bandwidth resource but have widely varying importances both to the user and ( via revenue ) to the bandwidth provider . determining which packet has precedence over which other packets in such casesis not only based on priority in arrival time but more generally on the potential effects on the income of the bandwidth provider . in this context ,rl algorithms have been used to determine routing policy , control call admission and maximize revenue by allocating the available bandwidth efficiently .many researchers have exploited the noncooperative game theoretic understanding of the toc in order to explain the bottleneck character of empirical data networks behavior and suggest potential alternatives to current routing schemes .closely related is work on various `` pricing''-based resource allocation strategies in congestable data networks .this work is at least partially based upon current understanding of pricing in toll lanes , and traffic flow in general ( see below ) .all of these approaches are particularly of interest when combined with the rl - based schemes mentioned just above . 
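the q - routing idea mentioned above can be sketched very simply : each router keeps , for every destination and every neighbor , an estimate of the remaining delivery time through that neighbor , and refines that estimate from the chosen neighbor s own best estimate each time it forwards a packet . the sketch below is a simplified variant of that scheme rather than a faithful reproduction of any published algorithm , and the topology , initial estimates and learning rate are illustrative assumptions .

```python
import random

random.seed(1)

# A small hypothetical network: node -> list of neighbors (unit delay per hop).
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
nodes = list(graph)
alpha = 0.1

# Q[x][d][y]: node x's estimate of the time to reach destination d via neighbor y.
Q = {x: {d: {y: 10.0 for y in graph[x]} for d in nodes} for x in nodes}

def route_packet(src, dst):
    """Greedily forward a packet, updating the Q estimates along the way; return the hop count."""
    x, hops = src, 0
    while x != dst and hops < 50:
        y = min(graph[x], key=lambda n: Q[x][dst][n])      # neighbor with the lowest estimate
        # Neighbor's own best remaining-time estimate (zero if it is the destination).
        t = 0.0 if y == dst else min(Q[y][dst].values())
        # Q-routing style update: one hop of delay plus the neighbor's estimate.
        Q[x][dst][y] += alpha * (1.0 + t - Q[x][dst][y])
        x, hops = y, hops + 1
    return hops

for episode in range(5000):
    route_packet(random.choice(nodes), random.choice(nodes))

print(route_packet(0, 4))    # after learning, packets from 0 reach 4 along a shortest (3-hop) route
```

in this toy network the learned estimates settle on shortest - hop routes ; the congestion - dependent delays that make the real problem a tragedy - of - the - commons are deliberately left out of the sketch .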
due to these factors ,much of the current research on a general framework for coins is directed toward the packet - routing domain ( see next section ) .traffic congestion typifies the toc public good problem : everyone wants to use the same resource , and all parties greedily trying to optimize their use of that resource not only worsens global behavior , but also worsens _ their own _ private utility ( _ e.g. _ , if everyone disobeys traffic lights , everyone gets stuck in traffic jams ) . indeed , in the well - known braess paradox , keeping everything else constant including the number and destinations of the drivers but opening a new traffic path can _ increase _ everyone s time to get to their destination .( viewing the overall system as an instance of the prisoner s dilemma , this paradox in essence arises through the creation of a novel ` defect - defect ' option for the overall system .) greedy behavior on the part of individuals also results in very rich global dynamic patterns , such as stop and go waves and clusters .much of traffic theory employs and investigates tools that have previously been applied in statistical physics ( see subsection above ) .in particular , the spontaneous formation of traffic jams provides a rich testbed for studying the emergence of complex activity from seemingly chaotic states .furthermore , the dynamics of traffic flow is particular amenable to the application and testing of many novel numerical methods in a controlled environment .many experimental studies have confirmed the usefulness of applying insights gleaned from such work to real world traffic scenarios .finally , there are a number of other fields that , while either still nascent or not extremely closely related to coins , are of interest in coin design : * amorphous computing : * amorphous computing grew out of the idea of replacing traditional computer design , with its requirements for high reliability of the components of the computer , with a novel approach in which widespread unreliability of those components would not interfere with the computation .some of its more speculative aspects are concerned with `` how to program '' a massively distributed , noisy system of components which may consist in part of biochemical and/or biomechanical components .work here has tended to focus on schemes for how to robustly induce desired geometric dynamics across the physical body of the amorphous computer issue that are closely related to morphogenesis , and thereby lend credence to the idea that biochemical components are a promising approach . especially in its limit of computers with very small constituent components , amorphous computing also is closely related to the fields of nanotechnology and control of smart matter ( see below ) .* control of smart matter:*. as the prospect of nanotechnology - driven mechanical systems gets more concrete , the daunting problem of how to robustly control , power , and sustain protean systems made up of extremely large sets of nano - scale devices looms more important .if this problem were to be solved one would in essence have `` smart matter '' .for example , one would be able to `` paint '' an airplane wing with such matter and have it improve drag and lift properties significantly . * morphogenesis : * how does a leopard embryo get its spots , or a zebra embryo its stripes ? 
more generally , what are the processes underlying morphogenesis , in which a body plan develops among a growing set of initially undifferentiated cells ?these questions , related to control of the dynamics of chemical reaction waves , are essentially special cases of the more general question of how ontogeny works , of how the genotype - phenotype mapping is carried out in development .the answers involve homeobox ( as well as many other ) genes . under the presumption that the functioning of such genes is at least in part designed to facilitate genetic changes that increase a species fitness , that functioning facilitates solution of the inverse problem , of finding small - scale changes ( to dna ) that will result in `` desired '' large scale effects ( to body plan ) when propagated across a growing distributed system .* self organizing systems * the concept of self - organization and self - organized criticality was originally developed to help understand why many distributed physical systems are attracted to critical states that possess long - range dynamic correlations in the large - scale characteristics of the system .it provides a powerful framework for analyzing both biological and economic systems .for example , natural selection ( particularly punctuated equilibrium ) can be likened to self - organizing dynamical system , and some have argued it shares many the properties ( _ e.g. _ , scale invariance ) of such systems .similarly , one can view the economic order that results from the actions of human agents as a case of self - organization .the relationship between complexity and self - organization is a particularly important one , in that it provides the potential laws that allow order to arise from chaos . * small worlds ( 6 degrees of separation ) : * in many distributed systems where each component can interact with a small number of `` neighbors '' ,an important problem is how to propagate information across the system quickly and with minimal overhead . on the one extremethe neighborhood topology of such systems can exist on a completely regular grid - like structure . on the other, the topology can be totally random . 
in either case, certain nodes may be effectively ` cut - off ' from other nodes if the information pathways between them are too long .recent work has investigated `` small worlds '' networks ( sometimes called 6 degrees of separation ) in which underlying grid - like topologies are `` doped '' with a scattering of long - range , random connections .it turns out that very little such doping is necessary to allow for the system to effectively circumvent the information propagation problem .* control theory : * adaptive control , and in particular adaptive control involving locally weighted rl algorithms , constitute a broadly applicable framework for controlling small , potentially inexactly modeled systems .augmented by techniques in the control of chaotic systems , they constitute a very successful way of solving the `` inverse problem '' for such systems .unfortunately , it is not clear how one could even attempt to scale such techniques up to the massively distributed systems of interest in coins .the next section discusses in detail some of the underlying reasons why the purely model - based versions of these approaches are inappropriate as a framework for coins .summarizing the discussion to this point , it is hard to see how any already extant scientific field can be modified to encompass systems meeting all of the requirements of coins listed at the beginning of section [ sec : lit ] .this is not too surprising , since none of those fields were explicitly designed to analyze coins .this section first motivates in general terms a framework that is explicitly designed for analyzing coins .it then presents the formal nomenclature of that framework .this is followed by derivations of some of the central theorems of that framework .finally , we present experiments that illustrate the power the framework provides for ensuring large world utility in a coin .what mathematics might one employ to understand and design coins ?perhaps the most natural approach , related to the stochastic fields work reviewed above , involves the following three steps : \1 ) first one constructs a detailed stochastic model of the coin s dynamics , a model parameterized by a vector .as an example , could fix the utility functions of the individual agents of the coin , aspects of their rl algorithms , which agents communicate with each other and how , etc .\2 ) next we solve for the function which maps the parameters of the model to the resulting stochastic dynamics .\3 ) cast our goal for the system as a whole as achieving a high expected value of some `` world utility '' .then as our final step we would have to solve the inverse problem : we would have to search for a which , via , results in a high value of e(world utility ) .let s examine in turn some of the challenges each of these three steps entrain : \i ) we are primarily interested in very large , very complex systems , which are noisy , faulty , and often operate in a non - stationary environment .moreover , our `` very complex system '' consists of many rl algorithms , all potentially quite complicated , all running simultaneously . 
clearly coming up with a detailed model that captures the dynamics of all of this in an accurate manner will often be extraordinarily difficult .moreover , unfortunately , given that the modeling is highly detailed , often the level of verisimilitude required of the model will be quite high .for example , unless the modeling of the faulty aspects of the system were quite accurate , the model would likely be `` brittle '' , and overly sensitive to which elements of the coin were and were not operating properly at any given time .\ii ) even for models much simpler than the ones called for in ( i ) , solving explicitly for the function can be extremely difficult .for example , much of markov chain theory is an attempt to broadly characterize such mappings .however as a practical matter , usually it can only produce potentially useful characterizations when the underlying models are quite inaccurate simplifications of the kinds of models produced in step ( i ) .\iii ) even if one can write down an , solving the associated inverse problem is often impossible in practice .\iv ) in addition to these difficulties , there is a more general problem with the model - based approach .we wish to perform our analysis on a `` high level '' .our thesis is that due to the robust and adaptive nature of the individual agents rl algorithms , there will be very broad , easily identifiable regions of space all of which result in excellent e(world utility ) , and that these regions will not depend on the precise learning algorithms used to achieve the low - level tasks ( cf . the list at the beginning of section [ sec : lit ] ) . to fully capitalize on this, one would want to be able to slot in and out different learning algorithms for achieving the low - level tasks without having to redo our entire analysis each time .however in general this would be possible with a model - based analysis only for very carefully designed models ( if at all ) .the problem is that the result of step ( 3 ) , the solution to the inverse problem , would have to concern aspects of the coin that are ( at least approximately ) invariant with respect to the precise low - level learning algorithms used . coming up with a model that has this property while still avoiding problems( i - iii ) is usually an extremely daunting challenge .fortunately , there is an alternative approach which avoids the difficulties of detailed modeling .little modeling of any sort ever is used in this alternative , and what modeling does arise has little to do with dynamics .in addition , any such modeling is extremely high - level , intented to serve as a decent approximation to almost any system having `` reasonable '' rl algorithms , rather than as an accurate model of one particular system .we call any framework based on this alternative a * descriptive framework*. in such a framework one identifies certain * salient characteristics * of coins , which are characteristics of a coin s entire worldline that one strongly expects to find in coins that have large world utility . 
under this expectation ,one makes the assumption that if a coin is explicitly modified to have the salient characteristics ( for example in response to observations of its run - time behavior ) , then its world utility will benefit .so long as the salient characteristics are ( relatively ) easy to induce in a coin , then this assumption provides a ready indirect way to cause that coin to have large world utility .an assumption of this nature is the central leverage point that a descriptive framework employs to circumvent detailed modeling . under it , if the salient characteristics can be induced with little or no modeling ( e.g. , via heuristics that are nt rigorously and formally justified ) , then they provide an indirect way to improve world utility without recourse to detailed modeling .in fact , since one does not use detailed modeling in a descriptive framework , it may even be that one does not have a fully rigorous mathematical proof that the central assumption holds in a particular system for one s choice of salient characteristics .one may have to be content with reasonableness arguments not only to justify one s scheme for inducing the salient characteristics , but for making the assumption that characteristics are correlated with large world utility in the first place .of course , the trick in the descriptive framework is to choose salient characteristics that both have a beneficial relationship with world utility and that one expects to be able to induce with relatively little detailed modeling of the system s dynamics .there exist many ways one might try to design a descriptive framework . in this subsection we present nomenclature needed for a ( very ) cursory overview of one of them .( see for a more detailed exposition , including formal proofs . )this overview concentrates on the four salient characteristics of intelligence , learnability , factoredness , and the wonderful life utility , all defined below .intelligence is a quantification of how well an rl algorithm performs .we want to do whatever we can to help those algorithms achieve high values of their utility functions .learnability is a characteristic of a utility function that one would expect to be well - correlated with how well an rl algorithm can learn to optimize it . a utility function is also factored if whenever its value increases , the overall system benefits . finally , wonderful life utility is an example of a utility function that is both learnable and factored .after the preliminary definitions below , this section formalizes these four salient characteristics , derives several theorems relating them , and illustrates in some computer experiments how those theorems can be used to help the system achieve high world utility .* 1 ) * we refer to an rl algorithm by which an individual component of the coin modifies its behavior as a * microlearning * algorithm .we refer to the initial construction of the coin , potentially based upon salient characteristics , as the coin * initialization*. we use the phrase * macrolearning * to refer to externally imposed run - time modifications to the coin which are based on statistical inference concerning salient characteristics of the running coin .* 2 ) * for convenience , we take time , , to be discrete and confined to the integers , _ z_. when referring to coin initialization , we implicitly have a lower bound on , which without loss of generality we take to be less than or equal to . 
*3 ) * all variables that have any effect on the coin are identified as components of euclidean - vector - valued * states * of various discrete * nodes*. as an important example , if our coin consists in part of a computational `` agent '' running a microlearning algorithm , the precise configuration of that agent at any time , including all variables in its learning algorithm , all actions directly visible to the outside world , all internal parameters , all values observed by its probes of the surrounding environment , etc . ,all constitute the state vector of a node representing that agent .we define to be a vector in the euclidean vector space , where the components of give the state of node at time .the component of that vector is indicated by .* observation 3.1 : * in practice , many coins will involve variables that are most naturally viewed as discrete and symbolic . in such cases , we must exercise some care in how we choose to represent those variables as components of euclidean vectors .there is nothing new in this ; the same issue arises in modern work on applying neural nets to inherently symbolic problems . in our coin framework, we will usually employ the same resolution of this issue employed in neural nets , namely representing the possible values of the discrete variable with a unary representation in a euclidean space .just as with neural nets , values of such vectors that do not lie on the vertices of the unit hypercube are not meaningful , strictly speaking .fortunately though , just as with neural nets , there is almost always a most natural way to extend the definitions of any function of interest ( like world utility ) so that it is well - defined even for vectors not lying on those vertices .this allows us to meaningfully define partial derivatives of such functions with respect to the components of , partial derivatives that we will evaluate at the corners of the unit hypercube .* 4 ) * for notational convenience , we define to be the vector of the states of all nodes at time ; to be the vector of the states of all nodes other than at time ; and to be the entire vector of the states of all nodes at all times . is infinite - dimensional in general , and usually assumed to be a hilbert space .we will often assume that all spaces over all times are isomorphic to a space , i.e. , is a cartesian product of copies of . also for notational convenience , we define gradients using -shorthand .so for example , is the vector of the partial derivative of with respect to the components of .also , we will sometimes treat the symbol `` '' specially , as delineating a range of components of .so for example an expression like `` '' refers to all components with . * 5 ) * to avoid confusion with the other uses of the comma operator, we will often use rather than to indicate the vector formed by concatenating the two ordered sets of vector components and .for example , refers to the vector formed by concatenating those components of the worldline involving node for times less than 0 with those components involving node that have times greater than 0 . *6 ) * we take the universe in which our coin operates to be completely deterministic .this is certainly the case for any coin that operates in a digital system , even a system that emulates analog and/or stochastic processes ( _ e.g. _ , with a pseudo - random number generator ) . 
more generally , this determinism reflects the fact that since the real world obeys ( deterministic ) physics , real - world system , be it a coin or something else , is , ultimately , embedded in a deterministic system .the perspective to be kept in mind here is that of nonlinear time - series analysis .a physical time series typically reflects a few degrees of freedom that are projected out of the underlying space in which the full system is deterministically evolving , an underlying space that is actually extremely high - dimensional .this projection typically results in an illusion of stochasticity in the time series . *7 ) * formally , to reflect this determinism , first we bundle all variables we are not directly considering but which nonetheless affect the dynamics of the system as components of some catch - all * environment node*. so for example any `` noise processes '' and the like affecting the coin s dynamics are taken to be inputs from a deterministic , very high - dimensional environment that is potentially chaotic and is never directly observed .given such an environment node , we then stipulate that for all such that , sets uniquely .* observation 7.1 : * when nodes are `` computational devices '' , often we must be careful to specify the physical extent of those devices .such a node may just be the associated cpu , or it may be that cpu together with the main ram , or it may include an external storage device . almost always , the border of the device will end before any external system that is `` observing '' begins .this means that since at time only knows the value of , its `` observational knowledge '' of that external system is indirect .that knowledge reflects a coupling between and , a coupling that is induced by the dynamical evolution of the system from preceding moments up to the time .if the dynamics does not force such a coupling , then has no observational knowledge of the outside world . *8) * we express the dynamics of our system by writing .( in this paper there will be no need to be more precise and specify the precise dependency of on and/or . )we define to be a set of constraint equations enforcing that dynamics , and also , more generally , fixing the entire manifold of vectors that we consider to be ` allowed ' .so is a subset of the set of all that are consistent with the deterministic laws governing the coin , _i.e. _ , that obey .we generalize this notation in the obvious way , so that ( for example ) is the manifold consisting of all vectors that are projections of a vector in .* observation 8.1 : * note that is parameterized by , due to determinism . note also that whereas is defined for any argument of the form for some ( _ i.e. _ , we can evolve any point forward in time ) , in general not all lie in .in particular , there may be extra restrictions constraining the possible states of the system beyond those arising from its need to obey the relevant dynamical laws of physics . finally , whenever trying to express a coin in terms of the framework presented here , it is a good rule to try to write out the constraint equations explicitly to check that what one has identified as the space contains all quantities needed to uniquely fix the future state of the system . *observation 8.2 : * we do not want to have be the phase space of every particle in the system .we will instead usually have consist of variables that , although still evolving deterministically , exist at a larger scale of granularity than that of individual particles ( _ e.g. 
_ , thermodynamic variables in the thermodynamic limit ) . however we will often be concerned with physical systems obeying entropy - driven dynamic processes that are contractive at this high level of granularity . examples are any of the many - to - one mappings that can occur in digital computers , and , at a finer level of granularity , any of the error - correcting processes in the electronics of such a computer that allow it to operate in a digital fashion . accordingly , although the dynamics of our system will always be deterministic , it need not be invertible . * observation 8.3 : * intuitively , in our mathematics , all behavior across time is pre - fixed . the coin is a single fixed worldline through , with no `` unfolding of the future '' as the dice underlying a stochastic dynamics get cast . this is consistent with the fact that we want the formalism to be purely descriptive , relating different properties of any single , fixed coin's history . we will often informally refer to `` changing a node's state at a particular time '' , or to a microlearner's `` choosing from a set of options '' , and the like . formally , in all such phrases we are really comparing different worldlines , with the indicated modification distinguishing those worldlines . * observation 8.4 : * since the dynamics of any real - world coin is deterministic , so is the dynamics of any component of the coin , and in particular so is any learning algorithm running in the coin , ultimately . however that does not mean that those deterministic components of the coin are not allowed to be `` based on '' , or `` motivated by '' stochastic concepts . the _ motivation _ behind the algorithms run by the components of the coin does not change their underlying nature . indeed , in our experiments below , we explicitly have the reinforcement learning algorithms that are trying to maximize private utility operate in a ( pseudo- ) probabilistic fashion , with pseudo - random number generators and the like . more generally , the deterministic nature of our framework does not preclude our superimposing probabilistic elements on top of that framework , and thereby generating a stochastic extension of our framework . exactly as in statistical physics , a stochastic nature can be superimposed on our space of deterministic worldlines , potentially by adopting a degree of belief perspective on `` what probability means '' . indeed , the macrolearning algorithms we investigate below implicitly involve such a superimposing ; they implicitly assume a probabilistic coupling between the ( statistical estimate of the ) correlation coefficient connecting the states of a pair of nodes and whether those nodes are in one another's `` effect set '' . similarly , while it does not require salient characteristics that involve probability distributions , the descriptive framework does not preclude such characteristics either . as an example , the `` intelligence '' of an agent's particular action , formally defined below , measures the fraction of alternative actions an agent could have taken that would have resulted in a lower utility value . to define such a fraction requires a measure across the space of such alternative actions , even if only implicitly . accordingly , intelligence can be viewed as involving a probability distribution across the space of potential actions . in this paper though , we concentrate on the mathematics that obtains before such probabilistic concerns are superimposed .
whereas the deterministic analysis presented here is related to game - theoretic structures like nash equilibria , a full - blown stochastic extension would in some ways be more related to structures like correlated equilibria .* 9 ) * formally , there is a lot of freedom in setting the boundary between what we call `` the coin '' , whose dynamics is determined by , and what we call `` macrolearning '' , which constitutes perturbations to the coin instigated from `` outside the coin '' , and which therefore is reflected in . as an example , in much of this paper , we have clearly specified microlearners which are provided fixed private utility functions that they are trying to maximize . in such casesusually we will implicitly take to be the dynamics of the system , microlearning and all , _ for fixed private utilities _ that are specified in .for example , could contain , for each microlearner , the bits in an associated computer specifying the subroutine that that microlearner can call to evaluate what its private utility would be for some full worldline .macrolearning overrides , and in this situation it refers ( for example ) to any statistical inference process that modifies the private utilities at run - time to try to induce the desired salient characteristics .concretely , this would involve modifications to the bits \{ } specifying each microlearner s private utility , modifications that are accounted for in , and that are potentially based on variables that are not reflected in . since does not reflect such macrolearning , when trying to ascertain based on empirical observation ( as for example when determining how best to modify the private utilities ) , we have to take care to distinguish which part of the system s observed dynamics is due to and which part instead reflects externally imposed modifications to the private utilities . more generally though , other boundaries between the coin and macrolearning - based perturbations to it are possible , reflecting other definitions of , and other interpretations of the elements of each .for example , say that under the perspective presented in the previous paragraph , the private utility is a function of some components of , components that do not include the \{}. now modify this perspective so that in addition to the dynamics of other bits , also encapsulates the dynamics of the bits \{}. having done this , we could still view each private utility as being fixed , but rather than take the bits \{ } as `` encoding '' the subroutine that specifies the private utility of microlearner , we would treat them as `` parameters '' specifying the functional dependence of the ( fixed ) private utility on the components of .in other words , formally , they constitute an extra set of arguments to s private utility , in addition to the arguments .alternatively , we could simply say that in this situation our private utilities are time - indexed , with s private utility at time determined by \{ } , which in turn is determined by evolution under . under either interpretation of private utility , any modification under to the bits specifying sutility - evaluation subroutine constitutes dynamical laws by which the parameters of s microlearner evolves in time . in this case, macrolearning would refer to some further removed process that modifies the evolution of the system in a way not encapsulated in . 
for such alternative definitions of /, we have a different boundary between the coin and macrolearning , and we must scrutinize different aspects of the coin s dynamics to infer . whatever the boundary , the mathematics of the descriptive framework , including the mathematics concerning the salient characteristics , is restricted to a system evolving according to , and explicitly does not account for macrolearning .this is why the strategy of trying to improve world utility by using macrolearning to try to induce salient characteristics is almost always ultimately based on an assumption rather than a proof . *10 ) * we are provided with some von neumann * world utility * that ranks the various conceivable worldlines of the coin .note that since the environment node is never directly observed , we implicitly assume that the world utility is not directly ( ! ) a function of its state .our mathematics will not involve alone , but rather the relationship between and various sets of * personal utilities * . intuitively , as discussed below , for many purposes such personal utilities are equivalent to arbitrary `` virtual '' versions of the private utilities mentioned above .in particular , it is only private utilities that will occur within any microlearning computer algorithms that may be running in the coin as manifested in .personal utilities are external mathematical constructions that the coin framework employs to analyze the behavior of the system .they can be involved in learning processes , but only as tools that are employed outside of the coin s evolution under , _i.e. _ , only in macrolearning .( for example , analysis of them can be used to modify the private utilities . ) * observation 10.1 : * these utility definitions are very broad . in particular , they do not require casting of the utilities as discounted sums .note also that our world utility is not indexed by . again reflecting the descriptive , worldline character of the formalism, we simply assign a single value to an entire worldline of the system , implicitly assuming that one can always say which of two candidate worldlines are preferable .so given some `` present time '' , issues like which of two `` potential futures '' , is preferable are resolved by evaluating the relevant utility at two associated points and , where the components of those points are the futures indicated , and the two points share the same ( usually implicit ) `` past '' components .this time - independence of automatically avoids formal problems that can occur with general ( _ i.e. _ , not necessarily discounted sum ) time - indexed utilities , problems like having what s optimal at one moment in time conflict with what s optimal at other moments in time .. the effects of the actions by the nodes , adn therefore whether those actions are `` optimal '' or not , depends on the future actions of the nodes .however if they too are to be `` optimal '' , according to their world - utility , those future actions will depend on _ their _ futures .so we have a potentially infinite regress of differing stipulations of what `` optimal '' actions at time entails . 
] for personal utilities such formal problems are often irrelevant however . before we begin our work , we as coin designers must be able to rank all possible worldlines of the system at hand , to have a well - defined design task . that is why world utility can not be time - indexed . however if a particular microlearner's goal keeps changing in an inconsistent way , that simply means that that microlearner will grow `` confused '' . from our perspective as coin designers , there is nothing _ a priori _ unacceptable about such confusion . it may even result in better performance of the system as a whole , in which case we would actually want to induce it . nonetheless , for simplicity , in most of this paper we will have all be independent of , just like world utility . world utility is defined as that function that we are ultimately interested in optimizing . in conventional rl it is a discounted sum , with the sum starting at time . in other words , conventional rl has a time - indexed world utility . it might seem that in this at least , conventional rl considers a case that has more generality than that of the coin framework presented here . ( it obviously has less generality in that its world utility is restricted to be a discounted sum . ) in fact though , the apparent time - indexing of conventional rl is illusory , and the time - dependent discounted sum world utility of conventional rl is actually a special case of the non - time - indexed world utility of our coin framework . to see this formally , consider any ( time - independent ) world utility that equals for some function and some positive constant with magnitude less than 1 . then for any and any and where , = sgn[\sum_{t=0}^{\infty } \gamma^t r(\underline{\zeta}'_{,t } ) - \sum_{t=0}^{\infty } \gamma^t r(\underline{\zeta}''_{,t})] for all . since utility functions are , by definition , only unique up to the relative orderings they impose on potential values of their arguments , we see that conventional rl's use of a time - dependent discounted sum world utility is identical to use of a particular time - independent world utility in our coin framework . *11 ) * as mentioned above , there may be variables in each node's state which , under one particular interpretation , represent the `` utility functions '' that the associated microlearner's computer program is trying to extremize . when there are such components of , we refer to the utilities they represent as * private utilities* . however even when there are private utilities , formally we allow the personal utilities to differ from them . the personal utility functions \{ } do not exist `` inside the coin '' ; they are not specified by components of . this separating of the private utilities from the \{ } will allow us to avoid the teleological problem that one may not always be able to explicitly identify `` the '' private utility function reflected in such that a particular computational device can be said to be a microlearner `` trying to increase the value of its private utility '' . to the degree that we can couch the theorems purely in terms of personal rather than private utilities , we will have successfully adopted a purely behaviorist approach , without any need to interpret what a computational device is `` trying to do '' .
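as a small numeric illustration of the discounted - sum argument above ( a hedged sketch in python ; the reward streams are made up , and the finite horizon stands in for the infinite sum ) , the sign of the difference of discounted sums for two futures sharing the same past is unchanged by any monotone transform , which is all a von neumann world utility is required to supply :

```python
import numpy as np

def discounted_sum(rewards, gamma=0.9):
    """time-indexed 'conventional rl' style utility: sum_t gamma^t r_t (finite horizon)."""
    t = np.arange(len(rewards))
    return float(np.sum((gamma ** t) * rewards))

past = [1.0, 0.5]                                  # shared past rewards
future_a = [0.2, 0.9, 0.9, 0.9]                    # two hypothetical futures
future_b = [1.0, 0.1, 0.1, 0.1]

u_a = discounted_sum(np.array(past + future_a))
u_b = discounted_sum(np.array(past + future_b))

# only the ordering matters; any monotone transform of the discounted sum induces the
# same ordering, so the time-indexed sum can be replaced by a single time-independent
# world utility over whole worldlines
print(np.sign(u_a - u_b), np.sign(np.tanh(u_a) - np.tanh(u_b)))
```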
despite this formal distinctionthough , often we will implicitly have in mind deploying the personal utilities onto the microlearners as their private utilities , in which case the terms can usually be used interchangeably .the context should make it clear when this is the case .we will need to quantify how well the entire system performs in terms of . to do thisrequires a measure of the performance of an arbitrary worldline , for an arbitrary utility function , under arbitrary dynamic laws .formally , such a measure is a mapping from three arguments to .such a measure will also allow us to quantify how well each microlearner performs in purely behavioral terms , in terms of its personal utility .( in our behaviorist approach , we do not try to make specious distinctions between whether a microlearner s performance is due to its level of `` innate sophistication '' , or rather due to dumb luck all that matters is the quality of its behavior as reflected in its utility value for the system s worldline . )this behaviorism in turn will allow us to avoid having private utilities explicitly arise in our theorems ( although they still arise frequently in pedagogical discussion ) .even when private utilities exist , there will be no formal need to explicitly identify some components of as such utilities . assuming a node s microlearner is competent , the fact that it is trying to optimize some particular private utility will be manifested in our performance measure s having a large value at for for that utility .the problem of how to formally define such a performance measure is essentially equivalent to the problem of how to quantify bounded rationality in game theory . some of the relevant work in game theory , for example that involving ` trembling hand equilibria ' or ` equilibria ' is concerned with refinements or modifications of nash equilibria ( see also ) . rather than a behaviorist approach , such work adopts a strongly teleological perspective on rationality . in general ,such work is only applicable to those situations where the rationality is bounded due to the precise causal mechanisms investigated in that work .most of the other game - theoretic work first models ( ! ) the microlearner , as some extremely simple computational device ( _ e.g. _ , a deterministic finite automaton ( dfa ) .one then assumes that the microlearner performs perfectly for that device , so that one can measure that learner s performance in terms of some computational capacity measure of the model ( _ e.g. 
_ , for a dfa , the number of states of that dfa ) .however , if taken as renditions of real - world computer - based microlearners never mind human microlearners the models in this approach are often extremely abstracted , with many important characteristics of the real learners absent or distorted .in addition , there is little reason to believe that any results arising from this approach would not be highly dependent on the model choice and on the associated representation of computational capacity .yet another disadvantage is that this approach concentrates on perfect , fully rational behavior of the microlearners , within their computational restrictions .we would prefer a less model - dependent approach , especially given our wish that the performance measure be based solely on the utility function at hand , , and .now we do nt want our performance measure to be a `` raw '' utility value like , since that is not invariant with respect to monotonic transformations of .similarly , we do nt want to penalize the microlearner for not achieving a certain utility value if that value was impossible to achieve not due to the microlearner s shortcomings , but rather due to and the actions of other nodes .a natural way to address these concerns is to generalize the game - theoretic concept of `` best - response strategy '' and consider the problem of how well performs _ given the actions of the other nodes_. such a measure would compare the utility ultimately induced by each of the possible states of at some particular time , which without loss of generality we can take to be 0 , to that induced by the actual state . in other words , we would compare the utility of the actual worldline to those of a set of alternative worldlines , where , and use those comparisons to quantify the quality of s performance .now we are only concerned with comparing the effects of replacing with on contributions to the utility .but if we allow arbitrary , then in and of themselves the difference between those past components of and those of can modify the value of the utility , regardless of the effects of any difference in the future components .our presumption is that for many coins of interest we can avoid this conundrum by restricting attention to those where differs from only in the internal parameters of s microlearner , differences that only at times manifest themselves in a form the utility is concerned with .( in game - theoretic terms , such `` internal parameters '' encode full extensive - form strategies , and we only consider changes to the vertices at or below the level in the tree of an extensive - form strategy . ) although this solution to our conundrum is fine when we can apply it , we do nt want to restrict the formalism so that it can only concern systems having computational algorithms which involve a clearly pre - specified set of extensive strategy `` internal parameters '' and the like .so instead , we formalize our presumption behaviorally , even for computational algorithms that do not have explicit extensive strategy internal parameters . 
since changing the internal parameters does nt affect the components of _ that the utility is concerned with _ , and since we are only concerned with changes to that affect the utility , we simply elect to not change the values of the internal parameters of at all .in other words , we leave .the advantage of this stipulation is that we can apply it just as easily whether does or does nt have any `` internal parameters '' in the first place .so in quantifying the performance of for behavior given by we compare to a set of , a set restricted to those sharing s past : , , and . since is free to vary ( reflecting the possible changes in the state of at time 0 ) while is not , , in general .we may even wish to allow in certain circumstances .( recall that may reflect other restrictions imposed on allowed worldlines besides adherence to the underlying dynamical laws , so simply obeying those laws does not suffice to ensure that a worldline lies on . ) in general though , our presumption is that as far as utility values are concerned , considering these dynamically impossible is equivalent to considering a more restricted set of with `` modified internal parameters '' , all of which are .we now present a formalization of this performance measure . given and a measure demarcating what points in we are interested in , we define the ( ) * intelligence * for node of a point with respect to a utility as follows : \cdot \delta(\underline{\zeta}'_{\;\hat{}\eta,0 } - \underline{\zeta}_{\;\hat{}\eta,0})\ ] ] where is the heaviside theta function which equals 0 if its argument is below 0 and equals 1 otherwise , is the dirac delta function , and we assume that . intuitively , measures the fraction of alternative states of which , if had been in those states at time 0 , would either degrade or not improve s performance ( as measured by ) . sometimes in practicewe will only want to consider changes in those components of that we consider as `` free to vary '' , which means in particular that those changes are consistent with and the state of the external world , .( this consistency ensures that s observational information concerning the external world is correct ; see observation 7.1 above . )such a restriction means that even though may not be consistent with and , by itself it is still consistent with ; in quantifying the quality of a particular .so we do nt compare our point to other that are physically impossible , no matter what the past is .any such restrictions on what changes we are considering are reflected implicitly in intelligence , in the measure . as an example of intelligence ,consider the situation where for each player , the support of the measure extends over all possible actions that could take that affect the ultimate value of its personal utility , .in this situation we recover conventional full rationality game theory involving nash equilibria , as the analysis of scenarios in which the intelligence of each player with respect to equals 1 . whose components need not all equal 1 .many of the theorems of conventional game theory can be directly carried over to such bounded - rational games by redefining the utility functions of the players . 
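a monte carlo sketch of the intelligence measure defined above may help fix ideas ( python ; the callables standing in for the utility , the dynamics , and the measure are hypothetical , and the final toy call is purely illustrative ) : the estimate is simply the fraction , under the measure , of alternative time-0 states of the node that do not improve the utility , with everything else held fixed .

```python
import random

def intelligence(utility, evolve, eta_state, others_state, sample_mu, n_samples=10000):
    """estimate the fraction of alternative time-0 states of the node (drawn from the
    measure mu) whose induced worldlines do no better than the actual one, holding
    the time-0 states of all other nodes, and the shared past, fixed."""
    u_actual = utility(evolve(eta_state, others_state))
    no_better = sum(
        utility(evolve(sample_mu(), others_state)) <= u_actual
        for _ in range(n_samples)
    )
    return no_better / n_samples

# toy illustration: a single scalar "action", trivial dynamics, quadratic utility
evolve = lambda a, others: a
utility = lambda w: -(w - 0.3) ** 2
print(intelligence(utility, evolve, 0.25, None, random.random))
```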
in other words ,much of conventional full rationality game theory applies even to games with bounded rationality , under the appropriate transformation .this result has strong implications for the legitimacy of the common criticism of modern economic theory that its assumption of full rationality does not hold in the real world , implications that extend significantly beyond the sonnenschein - mantel - debreu theorem equilibrium aggregate demand theorem .] as an alternative , we could for each restrict to some limited `` set of actions that actively considers '' .this provides us with an `` effective nash equilibrium '' at the point where each equals 1 , in the sense that _ as far it s concerned _ , each player has played a best possible action at such a point . as yet another alternative , we could restrict each to some infinitesimal neighborhood about , and thereby define a `` local nash equilibrium '' by having for each player . in general , competent greedy pursuit of private utility by the microlearnercontrolling node means that the intelligence of for personal utility , , is close to 1 .accordingly , we will often refer interchangeably to a capable microlearner s `` pursuing private utility '' , and to its having high intelligence for personal utility .alternatively , if the microlearner for node is incompetent , then it may even be that `` by luck '' its intelligence for some personal utility \{ } exceeds its intelligence for the different private utility that it s actually trying to maximize , .say that we expect that a particular microlearner is `` smart '' , in that it is more likely to have high rather than low intelligence .we can model this by saying that given a particular , the conditional probability that is a monotonically increasing function of .since for a given the intelligence is a monotonically increasing function of , this modelling assumption means that the probability that is a monotonically increasing function of .an alternative weaker model is to only stipulate that the probability of having a particular pair with equal to is a monotonically increasing function of .( this probability is an integral over a joint distribution , rather than a conditional distribution , as in the original model . ) in either case , the `` better '' the microlearner , the more tightly peaked the associated probability distribution over intelligence values is .any two utility functions that are related by a monotonically increasing transformation reflect the same preference ordering over the possible arguments of those functions . since it is only that ordering that we are ever concerned with , we would like to remove this degeneracy by `` normalizing '' all utility functions .in other words , we would like to reduce any equivalence set of utility functions that are monotonic transformations of one another to a canonical member of that set . to see what this means in the coin context , fix .viewed as a function from , is itself a utility function , one that is a monotonically increasing function of .( it says how well would have performed for all vectors . ) accordingly , the integral transform taking to is a ( contractive , non - invertible ) mapping from utilities to utilities . applied to any member of a utility in s equivalence set , this mapping produces the same image utility , one that is also in that equivalence set .it can be proven that any mapping from utilities to utilities that has this and certain other simple properties must be such an integral transform . 
in this sense , intelligence is the unique way of `` normalizing '' von neumann utility functions . for those conversant with game theory , it is worth noting some of the interesting aspects that ensue from this normalizing nature of intelligences . at any point that is a nash equilibrium in the set of personal utilities \{ } , all intelligences must equal 1 . since that is the maximal value any intelligence can take on , a nash equilibrium in the \{ } is a pareto optimal point in the associated intelligences ( for the simple reason that no deviation from such a can raise any of the intelligences ) . conversely , if there exists at least one nash equilibrium in the \{ } , then there is not a pareto optimal point in the \{ } that is not a nash equilibrium . now restrict attention to systems with only a single instant of time , _ i.e. _ , single - stage games . also have each of the ( real - valued ) components of each be a mixing component of an associated one of s potential strategies for some underlying finite game . then have be the associated expected payoff to . ( so the payoff to of the underlying pure strategies is given by the values of when is a unit vector in the space of s possible states . ) then we know that there must exist at least one nash equilibrium in the \{}. accordingly , in this situation the set of nash equilibria in the \{ } is identical to the set of points that are pareto optimal in the associated intelligences . ( see eq . 5 in the discussion of factored systems below . ) intelligence can be a difficult quantity to work with , unfortunately . as an example , fix , and consider any ( small region centered about some ) along with some utility , where is not a local maximum of . then by increasing the values takes on in that small region we will increase the intelligence . however in doing this we will also necessarily decrease the intelligence at points outside that region . so intelligence has a non - local character , a character that prevents us from directly modifying it to ensure that it is simultaneously high for any and all . a second , more general problem is that without specifying the details of a microlearner , it can be extremely difficult to predict which of two private utilities the microlearner will be better able to learn . indeed , even knowing the details , making that prediction can be nearly impossible . so it can be extremely difficult to determine what private utility intelligence values will accrue to various choices of those private utilities . in other words , macrolearning that involves modifying the private utilities to try to directly increase intelligence with respect to those utilities can be quite difficult . fortunately , we can circumvent many of these difficulties by using a proxy for ( private utility ) intelligence . although we expect its value usually to be correlated with that of intelligence in practice , this proxy does not share intelligence's non - local nature . in addition , the proxy does not depend heavily on the details of the microlearning algorithms used , _ i.e. _ , it is fairly independent of those aspects of . intuitively , this proxy can be viewed as a `` salient characteristic '' for intelligence . we motivate this proxy by considering having for all . if we try to actually use these \{ } as the microlearners' private utilities , particularly if the coin is large , we will invariably encounter a very bad signal - to - noise problem .
for this choice of utilities ,the effects of the actions taken by node on its utility may be `` swamped '' and effectively invisible , since there are so many other processes going into determining s value .this makes it hard for to discern the echo of its actions and learn how to improve its private utility .it also means that will find it difficult to decide how best to act once learning has completed , since so much of what s important to is decided by processes outside of immediate purview .in such a scenario , there is nothing that s microlearner can do to reliably achieve high intelligence .in addition to this `` observation - driven '' signal / noise problem , there is an `` action - driven '' one . for reasons discussed in observation 7.1 above, we can define a distribution reflecting what does / doesnt know concerning the actual state of the outside world at time 0 . if the node chooses its actions in a bayes - optimal manner , then ] in general ,this bayes - optimal node s intelligence will be less than 1 for the particular at hand , in general .moreover , the less s ultimate value ( after the application of , etc . ) depends on , the smaller the difference in these two argmax - based s , and therefore the higher the intelligence of , in general.s ultimate value to not depend on . ]we would like a measure of that captures these efects , but without depending on function maximization or any other detailed aspects of how the node determines its actions .one natural way to do this is via the * ( utility ) learnability * : given a measure restricted to a manifold , the ( ) utility learnability of a utility for a node at is : * intelligence learnability * is defined the same way , with replaced by .note that any affine transformation of has no effect on either the utility learnability or the associated intelligence learnability , .the integrand in the numerator of the definition of learnability reflects how much of the change in that results from replacing with is due to the change in s state ( the `` signal '' ) .the denominator reflects how much of the change in that results from replacing with is due to the change in the states of nodes other than ( the `` noise '' ) .so learnability quantifies how easy it is for the microlearner to discern the `` echo '' of its behavior in the utility function .our presumption is that the microlearning algorithm will achieve higher intelligence if provided with a more learnable private utility .intuitively , the ( utility ) * differential learnability * of at a point is the learnability with restricted to an infinitesimal ball about .we formalize it as the following ratio of magnitudes of a pair of gradients , one involving , and one involving : note that a particular value of differential utility learnability , by itself , has no significance . 
simply rescaling the units of will change that value . rather what is important is the ratio of differential learnabilities , at the same , for different s . such a ratio quantifies the relative preferability of those s . one nice feature of differential learnability is that unlike learnability , it does not depend on choice of some measure . this independence can lead to trouble if one is not careful however , and in particular if one uses learnability for purposes other than choosing between utility functions . for example , in some situations , the coin designer will have the option of enlarging the set of variables from the rest of the coin that are `` input '' to some node at and that therefore can be used by to decide what action to take . intuitively , doing so will not affect the rl `` signal '' for s microlearner ( the magnitude of the potential `` echo '' of s actions is not modified by changing some aspect of how it chooses among those actions ) . however it _ will _ reduce the `` noise '' , in that s microlearner now knows more about the state of the rest of the system . in the full integral version of learnability , this effect can be captured by having the support of restricted to reflect the fact that the extra inputs to at are correlated with the state of the external system . in differential learnability however this is not possible , precisely because no measure occurs in its definition . so we must capture the reduction in noise in some other fashion . one way to do so is to replace the noise term occurring in the definition of differential learnability with something more nuanced . for example , one may wish to replace it with the maximum of the dot product of with any vector , subject not only to the restrictions that and * 0 * , but also subject to the restriction that must lie in the tangent plane of at . the first two restrictions , in concert with the extra restriction that * 0 * , give the original definition of the noise term . if they are instead joined with the third , new restriction , they will enforce any applicable coupling between the state of at time 0 and the rest of the system at time 0 . solving with lagrange multipliers , we get , where is the normal to at , , and while . as a practical matter though , it is often simplest to assume that the can vary arbitrarily , independent of , so that the noise term takes the form in eq . 3 . ] alternatively , if the extra variables are being input to for all , not just at , and if `` pays attention '' to those variables for all , then by incorporating those changes into our system itself has changed , . hypothesize that at those the node is capable of modifying its actions to `` compensate '' for what ( due to our augmentation of s inputs ) now knows to be going on outside of it . under this hypothesis , those changes in those external events will have less of an effect on the ultimate value of than they would if we had not made our modification . in this situation , the noise term has been reduced , so that the differential learnability properly captures the effect of s having more inputs . another potential danger to bear in mind concerning differential learnability is that it is usually best to consider its average over a region , in particular over points with less than maximal intelligence . it is really designed for such points ; in fact , at the intelligence - maximizing , .
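to make the gradient - ratio formalization above concrete , here is a finite - difference sketch ( python with numpy ; the flat state vector , the boolean mask marking the node's own components , and the toy utility are hypothetical stand - ins for the worldline - based quantities in the text ) :

```python
import numpy as np

def differential_learnability(utility, zeta0, eta_mask, eps=1e-5):
    """ratio of the norm of the utility gradient with respect to the node's own
    time-0 components ("signal") to the norm of the gradient with respect to all
    other nodes' time-0 components ("noise"), estimated by finite differences."""
    grad = np.zeros_like(zeta0)
    for i in range(len(zeta0)):
        bumped = zeta0.copy()
        bumped[i] += eps
        grad[i] = (utility(bumped) - utility(zeta0)) / eps
    return np.linalg.norm(grad[eta_mask]) / np.linalg.norm(grad[~eta_mask])

# toy usage: a utility dominated by the other nodes is hard for this node to learn
zeta0 = np.zeros(5)
mine = np.array([True, False, False, False, False])
print(differential_learnability(lambda z: z[0] + 10.0 * z[1:].sum(), zeta0, mine))
```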
whether in its differential form or not , and whether referring to utilities or intelligence , learnability is not meant to capture all factors that will affect how high an intelligence value a particular microlearner will achieve . such an all - inclusive definition is not possible , if for no other reason than the fact that there are many such factors that are idiosyncratic to the particular microlearner used . beyond this though , certain more general factors that affect most popular learning algorithms , like the curse of dimensionality , are also not ( explicitly ) designed into learnability . learnability is not meant to provide a full characterization of performance ; that is what intelligence is designed to do . rather ( relative ) learnability is only meant to provide a _ guide _ for how to improve performance . a system that has infinite ( differential , intelligence ) learnability for all its personal utilities is said to be `` perfectly '' ( differential , intelligence ) learnable . it is straight - forward to prove that a system is perfectly learnable iff can be written as for some function . ( see the discussion below on the general condition for a system's being perfectly factored . ) with these definitions in hand , we can now present ( a portion of ) one descriptive framework for coins . in this subsection , after discussing salient characteristics in general , we present some theorems concerning the relationship between personal utilities and the salient characteristic we choose to concentrate on . we then discuss how to use these theorems to induce that salient characteristic in a coin . the starting point with a descriptive framework is the identification of `` salient characteristics of a coin which one strongly expects to be associated with its having large world utility '' . in this chapter we will focus on salient characteristics that concern the relationship between personal and world utilities . these characteristics are formalizations of the intuition that we want coins in which the competent greedy pursuit of their private utilities by the microlearners results in large world utility , without any bottlenecks , toc , `` frustration '' ( in the spin glass sense ) or the like . one natural candidate for such a characteristic , related to pareto optimality , is * weak triviality* . it is defined by considering any two worldlines and both of which are consistent with the system's dynamics ( _ i.e. _ , both of which lie on ) , where for every node , . , and require only that both of the `` partial vectors '' and obey the relevant dynamical laws , and therefore lie in . ] if for any such pair of worldlines where one `` pareto dominates '' the other it is necessarily true that , we say that the system is weakly trivial . we might expect that systems that are weakly trivial for the microlearners' private utilities are configured correctly for inducing large world utility . after all , for such systems , if the microlearners collectively change in a way that ends up helping all of them , then necessarily the world utility also rises . more formally , for a weakly trivial system , the maxima of are pareto - optimal points for the personal utilities ( although the reverse need not be true ) . as it turns out though , weakly trivial systems can readily evolve to a world utility minimum , one that often involves toc .
to see this ,consider automobile traffic in the absence of any traffic control system .let each node be a different driver , and say their private utilities are how quickly they each individually get to their destination .identify world utility as the sum of private utilities .then by simple additivity , for all and , whether they lie on or not , if it follows that ; the system is weakly trivial .however as any driver on a rush - hour freeway with no carpool lanes or metering lights can attest , every driver s pursuing their own goal definitely does not result in acceptable throughput for the system as a whole ; modifications to private utility functions ( like fines for violating carpool lanes or metering lights ) would result in far better global behavior .a system s being weakly trivial provides no assurances regarding world utility .this does not mean weak triviality is never of use .for example , say that for a set of weakly trivial personal utilities each agent can guarantee that _ regardless of what the other agents do _ , its utility is above a certain level .assume further that , being risk - averse , each agent chooses an action with such a guarantee .say it is also true that the agents are provided with a relatively large set of candidate guaranteed values of their utilities . under these circumstances, the system s being weakly trivial provides some assurances that world utility is not too low .moreover , if the overhead in enforcing such a future - guaranteeing scheme is small , and having a sizable set of guaranteed candidate actions provided to each of the agents does not require an excessively centralized infrastructure , we can actually employ this kind of scheme in practice .indeed , in the extreme case , one can imagine that every agent is guaranteed exactly what its utility would be for every one of its candidate actions .( see the discussion on general equilibrium in the background section above . ) in this situation , nash equilibria and pareto optimal points are identical , which due to weak triviality means that the point maximizing is a nash equilibrium .however in any less extreme situation , the system may not achieve a value of world utility that is close to optimal .this is because even for weakly trivial systems a pareto optimal point may have poor world utility , in general .situations where one has guarantees of lower bounds on one s utility are not too common , but they do arise .one important example is a round of trades in a computational market ( see the background section above ) . in that scenario, there is an agent - indexed set of functions \{ } and the personal utility of each agent is given by , where is the end of the round of trades .there is also a function that is a monotonically increasing function of its arguments , and world utility is given by .so the system is weakly trivial . in turn , each is determined solely by the `` allotment of goods '' possessed by , as specified in the appropriate components of . 
to be able to remove uncertainty about its future value of in this kind of system , in determining its trading actions each agent must employ some scheme like inter - agent contracts . this is because without such a scheme , no agent can be assured that if it agrees to a proposed trade with another agent that the full proposed transaction of that trade actually occurs . given such a scheme , if in each trade round each agent myopically only considers those trades that are assured of increasing the corresponding value of , then we are guaranteed that the value of the world utility is not less than the initial value . the problem with using weak triviality as a general salient characteristic is precisely the fact that the individual microlearners are greedy . in a coin , there is no system - wide incentive to replace with a different worldline that would improve everybody's private utility , as in the definition of weak triviality . rather the incentives apply to each microlearner individually and motivate the learners to behave in a way that may well hurt some of them . so weak triviality is , upon examination , a poor choice for the salient characteristic of a coin . one alternative to weak triviality follows from consideration of the stricture that we must ` expect ' a salient characteristic to be coupled to large world utility in a running real - world coin . what can we reasonably expect about a running real - world coin ? we can not assume that all the private utilities will have large values ; witness the traffic example . but we assume that if the microlearners are well - designed , each of them will be doing close to as well as it can _ given the behavior of the other nodes_. in other words , within broad limits we can assume that the system is more likely to be in than if for all , . we define a system to be * coordinated * iff for any such and lying on , . ( again , an obvious variant is to restrict , and require only that both and lie in . ) traffic systems are coordinated , in general . this is evident from the simple fact that if all drivers acted as though there were metering lights when in fact there weren't any , they would each be behaving with lower intelligence given the actions of the other drivers ( each driver would benefit greatly by changing its behavior by no longer pretending there were metering lights , etc . ) . but nonetheless , world utility would be higher . like weak triviality , coordination is intimately related to the economics concept of pareto optimality . unfortunately , there is not room in this chapter to present the mathematics associated with coordination and its variants .
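a toy numeric illustration of the traffic discussion above ( python ; the congestion model and numbers are invented for illustration only , not part of the framework ) : with private utilities that sum to the world utility the system is weakly trivial , yet the selfish assignment of drivers to routes can still give markedly worse world utility than a coordinated split .

```python
def world_utility(n_on_a, n_total=10):
    """negative total travel time: route a's time equals its load, route b's is fixed at 5."""
    n_on_b = n_total - n_on_a
    return -(n_on_a * n_on_a + n_on_b * 5)

# weak triviality: each driver's private utility is minus its own travel time, and the
# world utility is their sum, so helping every driver at once can only help the world.
# selfish play: drivers join route a until it is as slow as route b (load 5 here) ...
selfish = world_utility(5)
# ... but a coordinated split does strictly better for the system as a whole
coordinated = max(world_utility(k) for k in range(11))
print(selfish, coordinated)     # -50 versus -44
```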
we will instead discuss a third candidate salient characteristic of coins , one which like coordination ( and unlike weak triviality ) we can reasonably expect to be associated with large world utilitythis alternative fixes weak triviality not by replacing the personal utilities \{ } with the intelligences \{ } as coordination does , but rather by only considering worldlines whose difference at time 0 involves a single node .this results in this alternative s being related to nash equilibria rather than pareto optimality .say that our coin s worldline is .let be any other worldline where , and where .now restrict attention to those where at and differ only for node .if for all such = sgn[g(\underline{\zeta } ) - g(\underline{\zeta}_{,t<0 } \bullet c(\underline{\zeta}'_{,0 } ) ) ] \ ; , \ ] ] and if this is true for all nodes , then we say that the coin is * factored * for all those utilities \{ } ( at , with respect to time 0 and the utility ) . for a factored system , for any node ,_ given the rest of the system _ , if the node s state at changes in a way that improves that node s utility over the rest of time , then it necessarily also improves world utility .colloquially , for a system that is factored for a particular microlearner s private utility , if that learner does something that improves that personal utility , then everything else being equal , it has also done something that improves world utility .of two potential microlearners for controlling node ( _ i.e. _ , two potential ) whose behavior until is identical but which differ there , the microlearner that is smarter with respect to will always result in a larger , by definition of intelligence .accordingly , for a factored system , the smarter microlearner is also the one that results in better .so as long as we have deployed a sufficiently smart microlearner on , we have assured a good ( given the rest of the system ) .formally , this is expressed in the fact that for a factored system , for all nodes , one can also prove that nash equilibria of a factored system are local maxima of world utility .note that in keeping with our behaviorist perspective , nothing in the definition of factored requires the existence of private utilities .indeed , it may well be that a system having private utilities \{ } is factored , but for personal utilities \{ } that differ from the \{}. a system s being factored does mean that a change to that improves can not also hurt for some .intuitively , for a factored system , the side effects on the rest of the system of s increasing its own utility do not end up decreasing world utility but can have arbitrarily adverse effects on other private utilities .( in the language of economics , no stipulation is made that s `` costs are endogenized . '' ) for factored systems , the separate microlearners successfully pursuing their separate goals do not frustrate each other _ as far as world utility is concerned_. in addition , if is factored with respect to , then a change to that improves improves .but it may some and/or .( this is even true for a discounted sum of rewards personal utility , so long as . 
)an example of this would be an economic system cast as a single individual , , together with an environment node , where is a steeply discounted sum of rewards receives over his / her lifetime , , and , .for such a situation , it may be appropriate for to live extravagantly at the time , and `` pay for it '' later .as an instructive example of the ramifications of eq .5 , say node is a conventional computer .we want to be as high as possible , i.e. , given the state of the rest of the system at time 0 , we want computer s state then to be the best possible , as far as the resultant value of is concerned . nowa computer s `` state '' consists of the values of all its bits , including its code segment , i.e. , including the program it is running .so for a factored personal utility , if the program running on the computer is better than most others as far as is concerned , then it is also better than most other programs as far as is concerned .our task as coin designers engaged in coin initialization or macrolearning is to find such a program and such an associated .one way to approach this task is to restrict attention to programs that consist of rl algorithms with private utility specified in the bits \{ } of .this reduces the task to one of finding a private utility \{ } ( and thereby fully specifying ) such that our rl algorithm working with that private utility has high , i.e. , such that that algorithm outperforms most other programs as far as the personal utility is concerned .perhaps the simplest way to address this reduced task is to exploit the fact that for a good enough rl algorithm will be large , and therefore adopt such an rl algorithm and fix the private utility to equal . in this waywe further reduce the original task , which was to search over all personal utilities and all programs to find a pair such that both is factored with respect to and there are relatively few programs that outperform , as far as .the task is now instead to search over all private utilities \{ } such that both \{ } is factored with respect to and such that there are few programs ( _ of any sort _, rl - based or not ) that outperform our rl algorithm working on \{ } , as far as that self - same private utility is concerned .the crucial assumption being leveraged in this approach is that our rl algorithm is `` good enough '' , and the reason we want learnable \{ } is to help effect this assumption . in general though, we ca nt have both perfect learnability and perfect factoredness . as an example , say that , and that the dynamics is the identity operator : , . then if and the system is perfectly learnable , it is not perfectly factored .this is because perfect learnability requires that for some function .however any change to that improves such a will either help or , depending on the sign of .for the `` wrong '' sign of , this means the system is actually `` anti - factored '' . due to such incompatibility between perfect factoredness and perfect learnability , we must usually be content with having high degree of factoredness and high learnability . in such situations ,the emphasis of the macrolearning process should be more and more on having high degree of factoredness as we get closer and closer to a nash equilibrium . this way the system wo nt relax to an incorrect local maximum .in practice of course , a coin will often not be perfectly factored . 
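the factoredness condition above lends itself to a simple empirical check , closely related to the degree of factoredness referred to below ( a hedged python sketch ; the callables are hypothetical stand - ins , exactly as in the intelligence sketch earlier ) : sample alternative time-0 states of the node , hold everything else fixed , and record how often the personal utility and the world utility move in the same direction .

```python
import random

def sign(x):
    return (x > 0) - (x < 0)

def factoredness(g_eta, G, evolve, eta_state, others_state, sample_mu, n_samples=10000):
    """fraction of sampled single-node perturbations for which the personal utility
    g_eta and the world utility G change with the same sign; 1.0 for a perfectly
    factored pair on the sampled region."""
    base = evolve(eta_state, others_state)
    agree = 0
    for _ in range(n_samples):
        alt = evolve(sample_mu(), others_state)
        if sign(g_eta(alt) - g_eta(base)) == sign(G(alt) - G(base)):
            agree += 1
    return agree / n_samples

# toy check: a "team game" assignment (g_eta identical to G) is trivially factored
evolve = lambda a, others: a
G = lambda w: -(w - 0.5) ** 2
print(factoredness(G, G, evolve, 0.2, None, random.random))   # prints 1.0
```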
nor in practice are we always interested only in whether the system is factored at one particular point ( rather than across a region say ) .these issues are discussed in , where in particular a formal definition of of the * degree of factoredness * of a system is presented .if a system is factored for utilities , then it is also factored for any utilities where for each is a monotonically increasing function of .more generally , the following result characterizes the set of all factored personal utilities : * theorem 1 : * a system is factored at all iff for all those , , we can write for some function such that for all and associated values .( the form of the \{ } off of is arbitrary . ) * proof : * for fixed and , any change to which keeps on and which at the same time increases must increase , due to the restriction on .this establishes the backwards direction of the proof . for the forward direction , write .define this formulation of as , which we can re - express as .now since the system is factored , , so consider any situation where the system is factored , and the values of , , and are specified. then we can find _ any _ consistent with those values ( _ i.e. _ , such that our provided value of equals ) , evaluate the resulting value of , and know that we would have gotten the same value if we had found a different consistent .this is true for all .therefore the mapping is single - valued , and we can write .* qed . * by thm . 1 , we can ensure that the system is factored without any concern for , by having each .alternatively , by only requiring that does ( _ i.e. _ , does ) , we can access a broader class of factored utilities , a class that depend on .loosely speaking , for those utilities , we only need the projection of onto to be parallel to the projection of onto . given and ,there are infinitely many having this projection ( the set of such form a linear subspace of ) .the partial differential equations expressing the precise relationship are discussed in . as an example of the foregoing ,consider a ` team game ' ( also known as an ` exact potential game ' ) in which for all . such coins are factored , trivially , regardless of ; if rises , then must as well , by definition .( alternatively , to confirm that team games are factored just take in thm . 1 . ) on the other hand , as discussed below , coins with ` wonderful life ' personal utilities are also factored , but the definition of such utilities depends on .due to their often having poor learnability and requiring centralized communication ( among other infelicities ) , in practice team game utilities often are poor choices for personal utilities .accordingly , it is often preferable to use some other set of factored utilities . to present an important example ,first define the ( ) * effect set * of node at , , as the set of all components for which .define the effect set with no specification of as .( we take this latter definition to be the default meaning of `` effect set '' . )we will also find it useful to define as the set of components of the space that are not in .intuitively , s effect set is the set of all components which would be affected by a change in the state of node at time 0 .( they may or may not be affected by changes in the states of the other nodes . 
) note that the effect sets of different nodes may overlap .the extension of the definition of effect sets for times other than 0 is immediate .so is the modification to have effect sets only consist of those components that vary with with the state of node at time 0 , rather than consist of the full vectors possessing such a component .these modifications will be skipped here , to minimize the number of variables we must keep track of .next for any set of components ( ) , define as the `` virtual '' vector formed by clamping the -components of to an arbitrary fixed value .( in this paper , we take that fixed value to be for all components listed in . )consider in particular a * wonderful life * set .the value of the * wonderful life utility * ( wlu for short ) for at is defined as : in particular , the wlu for the effect set of node is , which for can be written as . we can view s effect set wlu as analogous to the change in world utility that would have arisen if node `` had never existed '' .( hence the name of this utility - cf . the frank capra movie . )note however , that is a purely `` fictional '' , counter - factual operation , in the sense that it produces a new without taking into account the system s dynamics .indeed , no assumption is even being made that is consistent with the dynamics of the system .the sequence of states the node is clamped to in the definition of the wlu need not be consistent with the dynamical laws embodied in .this dynamics - independence is a crucial strength of the wlu .it means that to evaluate the wlu we do _ not _ try to infer how the system would have evolved if node s state were set to 0 at time 0 and the system evolved from there .so long as we know extending over all time , and so long as we know , we know the value of wlu .this is true even if we know nothing of the dynamics of the system .an important example is effect set wonderful life utilities when the set of all nodes is partitioned into ` subworlds ' in such a way that all nodes in the same subworld share substantially the same effect set .in such a situation , all nodes in the same subworld will have essentially the same personal utilities , exactly as they would if they used team game utilities with a `` world '' given by .when all such nodes have large intelligence values , this sharing of the personal utility will mean that all nodes in the same subworld are acting in a coordinated fashion , loosely speaking .the importance of the wlu arises from the following results : * theorem 2 : * i ) a system is factored at all iff for all those , , we can write for some function such that for all and associated values .( the form of the \{ } off of is arbitrary . )\ii ) in particular , a coin is factored for personal utilities set equal to the associated effect set wonderful life utilities . *proof : * to prove ( i ) , first write . for all , is independent of , and so by definition of it is a single - valued function of for such .therefore for some function .accordingly , by thm .1 , for \{ } of the form stipulated in ( i ) , the system is factored .going the other way , if the system is factored , then by thm .1 it can be written as . since both and , we can rewrite this as {,t<0 } , [ \hat{}c^{eff}_{\eta}]_{\;\hat{}\eta,0 } , g(\underline{\zeta})) ] .this means that viewed as a -parameterized function from to , is a single - valued function of the components .therefore can only depend on and the non- components of .accordingly , the wlu for is just minus a term that is a function of and . 
by choosing in thm . 1 to be that difference, we see that s effect set wlu is of the form necessary for the system to be factored . *qed . * as a generalization of ( ii ) , the system is factored if each node s personal utility is ( a monotonically increasing function of ) the wlu for a set that contains . for conciseness ,except where explicitly needed , for the remainder of this subsection we will suppress the argument `` '' , taking it to be implicit .the next result concerning the practical importance of effect set wlu is the following : * theorem 3 : * let be a set containing . then * proof : * writing it out , the second term in the numerator equals 0 , by definition of effect set .dividing by the similar expression for then gives the result claimed .* qed . *so if we expect that ratio of magnitudes of gradients to be large , effect set wlu has much higher learnability than team game utility while still being factored , like team game utility . as an example , consider the case where the coin is a very large system , with being only a relatively minor part of the system ( _ e.g. _ , a large human economy with being a `` typical john doe living in peoria illinois '' ) .often in such a system , for the vast majority of nodes , how varies with will be essentially independent of the value .( for example , how gdp of the us economy varies with the actions of our john doe from peoria , illinois will be independent of the state of some jane smith living in los angeles , california . ) in such circumstances , thm .3 tells us that the effect set wonderful life utility for will have a far larger learnability than does the world utility . for any fixed ,if we change the clamping operation ( _ i.e. _ , change the choice of the `` arbitrary fixed value '' we clamp each component to ) , then we change the mapping , and therefore change the mapping . accordingly , changing the clamping operation can affect the value of evaluated at some point .therefore , by thm .3 , changing the clamping operation can affect .so properly speaking , for any choice of , if we are going to use , we should set the clamping operation so as to maximize learnability . for simplicitythough , in this paper we will ignore this phenomenon , and simply set the clamping operation to the more or less `` natural '' choice of * 0 * , as mentioned above .next consider the case where , for some node , we can write as . say it is also true that s effect set is a small fraction of the set of all components . in this caseit often true that the values of are much larger than those of , which means that partial derivatives of are much larger than those of . in such situationsthe effect set wlu is far more learnable than the world utility , due to the following results : * theorem 4 : * if for some node there is a set containing , a function , and a function , such that , then * proof : * for brevity , write and both as functions of full , just such functions that are only allowed to depend on the components of that lie in and those components that do not lie in , respectively. then the wlu for node is just . since in that second termwe are clamping all the components of that cares about , for this personal utility .so in particular .now by definition of effect set , , since does not contain .so .* qed . 
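to make the clamping operation and the wonderful life utility concrete , the following is a minimal python sketch . the dictionary representation of the worldline , the toy world utility g , and the names clamp and wlu are invented for illustration ; they are not the paper's notation , and the utility is chosen only so the example runs .

....
import numpy as np

# a toy "worldline": each node name maps to that node's state vector over time.
# the node names, dimensions and values below are purely illustrative.
zeta = {name: np.linspace(0.1, 1.0, 10) * (i + 1)
        for i, name in enumerate(["eta", "a", "b", "c"])}

def G(z):
    # made-up world utility over the whole worldline
    return sum(v.sum() ** 2 for v in z.values())

def clamp(z, sigma, value=0.0):
    """CL_sigma(z): clamp every component listed in sigma to a fixed value
    (0 here, as in the text); all other components are left untouched."""
    return {k: (np.full_like(v, value) if k in sigma else v.copy())
            for k, v in z.items()}

def wlu(z, sigma):
    """wonderful life utility for the set sigma: G(z) - G(CL_sigma(z))."""
    return G(z) - G(clamp(z, sigma))

effect_set = {"eta"}          # guessed effect set of node eta: just eta itself
print(G(zeta), wlu(zeta, effect_set))
....

note that wlu is computed purely by clamping the stored worldline ; no re - simulation of the system's dynamics is needed , which is the dynamics - independence emphasized above .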
* the obvious extensions of thm.s 3 and 4 to effect sets with respect to times other than 0 can also be proven .an important special case of thm .4 is the following : * corollary 1 : * if for some node we can write \i ) {t\ge0 } ) + g_3(\underline{\zeta}_{,t<0}) ] , then . in practice , to assure that condition ( i ) of this corollary is met might require that be a proper superset of .countervailingly , to assure that condition ( ii ) is met will usually force us to keep as small as possible .one can often remove elements from an effect set and still have the results of this section hold .most obviously , if ( , t ) but = * 0 * , we can remove ( , t ) from without invalidating our results . more generally , if there is a set such that for each component ( the chain rule term \;\cdot \ ; [ \partial_{\underline{\zeta}_{\eta,0;i } } [ c(\underline{\zeta}_{,0})]_{\eta',t}] ] .the other is only concerned with one night ; ] , where as usual `` '' indicates the component of the unary vector .since , for any fixed and , this sum just equals .since there are such terms , after taking the norm we obtain \ ; \sqrt{7(n-1})| ] . combining with the result of the previous paragraph ,our ratio is .in addition to this learnability advantage of the wl reward , to evaluate its wl reward each agent only needs to know the total attendance on the night it attended , so no centralized communication is required .finally , although the system wo nt be perfectly factored for this reward ( since in fact the effect set of s action at would be expected to extend a bit beyond ) , one might expect that it is close enough to being factored to result in large world utility .each agent keeps a seven dimensional euclidean vector representing its estimate of the reward for attending each night of the week . at the end of each week ,the component of this vector corresponding to the night just attended is proportionally adjusted towards the actual reward just received . at the beginning of the succeeding week, the agent picks the night to attend using a boltzmann distribution with energies given by the components of the vector of estimated rewards , where the temperature in the boltzmann distribution decays in time .( this learning algorithm is equivalent to claus and boutilier s independent learner algorithm for multi - agent reinforcement learning . )we used the same parameters ( learning rate , boltzmann temperature , decay rates , etc . ) for all three reward functions .( this is an _ extremely _ primitive rl algorithm which we only chose for its pedagogical value ; more sophisticated rl algorithms are crucial for eliciting high intelligence levels when one is confronted with more complicated learning problems . )figure [ fig : barfig ] presents world reward values as a function of time , averaged over 50 separate runs , for all three reward functions , for both ] .the behavior with the g reward eventually converges to the global optimum .this is in agreement with the results obtained by crites for the bank of elevators control problem .systems using the wl reward also converged to optimal performance .this indicates that for the bar problem our approximations of effects sets are sufficiently accurate , _i.e. 
_ , that ignoring the effects one agent s actions will have on future actions of other agents does not significantly diminish performance .this reflects the fact that the only interactions between agents occurs indirectly , via their affecting each others reward values .however since the wl reward is more learnable than than the g reward , convergence with the wl reward should be far quicker than with the g reward .indeed , when ] systems take 6500 weeks to converge with the g reward , which is more than _ 30 times _ worse than the time with the wl reward .in contrast to the behavior for reward functions based on our coin framework , use of the conventional ud reward results in very poor world reward values , values that deteriorated as the learning progressed .this is an instance of the toc .for example , for the case where ] .systems using the ud reward perform poorly regardless of .systems using the g reward perform well when is low . as increases however, it becomes increasingly difficult for the agents to extract the information they need from the g reward .( this problem is significantly worse for uniform . ) because of their superior learnability , systems using the wl reward overcome this signal - to - noise problem ( _ i.e. _ , because the wl reward is based on the _ difference _ between the actual state and the state where one agent is clamped , it is much less affected by the total number of agents ) . in the experiments recounted above, the agents were sufficiently independent that assuming they did not affect each other s actions ( when forming guesses for effect sets ) allowed the resultant wl reward signals to result in optimal performance . in this sectionwe investigate the contrasting situation where we have initial guesses of effect sets that are quite poor and that therefore result in bad global performance when used with wl rewards .in particular , we investigate the use of macrolearning to correct those guessed effect sets at run - time , so that with the corrected guessed effect sets wl rewards will instead give optimal performance .this models real - world scenarios where the system designer s initial guessed effect sets are poor approximations of the actual associated effect sets and need to be corrected adaptively . in these experimentsthe bar problem is significantly modified to incorporate constraints designed to result in poor when the wl reward is used with certain initial guessed effect sets . to do thiswe forced the nights actually attended by some of the agents ( followers ) to agree with those attended by other agents ( leaders ) , regardless of what night those followers `` picked '' via their microlearning algorithms .( for leaders , picked and actually attended nights were always the same . )we then had the world utility be the sum , over all leaders , of the values of a triply - indexed reward matrix whose indices are the nights that each leader - follower set attends : where is the night the leader attends in week , and and are the nights attended by the followers of leader , in week ( in this study , each leader has two followers ) .we also had the states of each node be one of the integers \{0 , 1 , ... 
, 6 } rather than ( as in the bar problem ) a unary seven - dimensional vector .this was a bit of a contrivance , since constructions like are nt meaningful for such essentially symbolic interpretations of the possible states .as elaborated below , though , it was helpful for constructing a scenario in which guessed effect set wlu results in poor performance , _ i.e. _ , a scenario in which we can explore the application of macrolearning . to see how this setup can result in poor world utility , first note that the system s dynamics is what restricts all the members of each triple to equal the night picked by leader for weekso and are both in leader s actual effect set at week whereas the initial guess for s effect set may or may not contain nodes other than .( for example , in the bar problem experiments , the guessed effect set does not contain any nodes beyond . ) on the other hand , and are defined for all possible triples ( ) .so in particular , is defined for the dynamically unrealizable triples that can arise in the clamping operation .this fact , combined with the leader - follower dynamics , means that for certain s there exist guessed effect sets such that the dynamics assures poor world utility when the associated wl rewards are used .this is precisely the type of problem that macrolearning is designed to correct . as an example , say each week only contains two nights , 0 and 1 .set and .so the contribution to when a leader picks night 1 is 1 , and when that leader picks night 0 it is 0 , independent of the picks of that leader s followers ( since the actual nights they attend are determined by their leader s picks ) .accordingly , we want to have a private utility for each leader that will induce that leader to pick night 1 .now if a leader s guessed effect set includes both of its followers ( in addition to the leader itself ) , then clamping all elements in its effect set to 0 results in an value of .therefore the associated guessed effect set wlu will reward the leader for choosing night 1 , which is what we want .( for this case wl reward equals if the leader picks night 1 , compared to reward for picking night 0 . )however consider having two leaders , and , where s guessed effect set consists of itself together with the two followers of ( rather than together with the two followers of itself ) .so neither of leader s followers are in its guessed effect set , while itself is .accordingly , the three indices to s need not have the same value .similarly , clamping the nodes in its guessed effect set wo nt affect the values of the second and third indices to s , since the values of those indices are set by s followers .so for example , if and its two followers go to night 0 in week 0 , and and its two followers go to night 1 in that week , then the associated guessed effect set wonderful life reward for for week 0 is $ ] .this equals . simply by setting can ensure that this is negative .conversely , if leader had gone to night 0 , its guessed effect wlu would have been 0 .so in this situation leader will get a greater reward for going to night 0 than for going to night 1 .in this situation , leader s using its guessed effect set wlu will lead it to make the wrong pick . to investigate the efficacy of the macrolearning , two sets of separate experiments were conducted . 
in the first one the reward matrix chosen so that if each leader is maximizing its wl reward , but for guessed effect sets that contain none of its followers , then the system evolves to world reward .so if a leader incorrectly guesses that some is its effect set even though does nt contain both of that leader s followers , and if this is true for all leaders , then we are assured of worst possible performance . in the second set of experiments , we investigated the efficacy of macrolearning for a broader spectrum of reward matrices by generating those matrices randomly .we call these two kinds of reward matrices _ worst - case _ and _ random _ reward matrices , respectively . in both cases , if it can modify the initial guessed effect sets of the leaders to include their followers , then macrolearning will induce the system to be factored .the microlearning in these experiments was the same as in the bar problem .all experiments used the wl personal reward with some ( initially random ) guessed effect set .when macrolearning was used , it was implemented starting after the microlearning had run for a specified number of weeks .the macrolearner worked by estimating the correlations between the agents selections of which nights to attend .it did this by examining the attendances of the agents over the preceding weeks .given those estimates , for each agent the two agents whose attendances were estimated to be the most correlated with those of agent were put into agent s guessed effect set . of course , none of this macrolearning had any effect on global performance when applied to follower agents , but the macrolearning algorithm can not know that ahead of time ; it applied this procedure to each and every agent in the system .figure [ fig : worstreward ] presents averages over 50 runs of world reward as a function of weeks using the worst - case reward matrix . for comparison purposes , in both plots the top curve represents the case where the followers are in their leaders guessed effect sets .the bottom curve in both plots represents the other extreme where no leader s guessed effect set contains either of its followers . in both plots ,the middle curve is performance when the leaders guessed effect sets are initially random , both with ( right ) and without ( left ) macrolearning turned on at week 500 .the performance for random guessed effect sets differs only slightly from that of having leaders guessed effect sets contain none of their followers ; both start with poor values of world reward that deteriorates with time .however , when macrolearning is performed on systems with initially random guessed effect sets , the system quickly rectifies itself and converges to optimal performance .this is reflected by the sudden vertical jump through the middle of the right plot at 500 weeks , the point at which macrolearning changed the guessed effect sets . 
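the macrolearning step described above ( estimating which agents' attendances are most correlated with a given agent's , and putting the two most correlated agents into its guessed effect set ) can be sketched as follows . the array layout , the function name and the choice of pearson correlation are assumptions made for illustration .

....
import numpy as np

def macrolearn_effect_sets(attendance, k=2):
    """attendance: (weeks x agents) array of the night each agent attended.
    for every agent, guess an effect set consisting of itself plus the k
    other agents whose attendance histories are most correlated with it."""
    corr = np.corrcoef(attendance.T)        # agents x agents correlation matrix
    np.fill_diagonal(corr, -np.inf)         # exclude each agent from its own ranking
    guessed = {}
    for agent in range(attendance.shape[1]):
        top = np.argsort(corr[agent])[-k:]  # indices of the k most correlated agents
        guessed[agent] = {agent, *top.tolist()}
    return guessed

# synthetic leader/follower data: agents 1 and 2 always copy agent 0
rng = np.random.default_rng(0)
leader = rng.integers(0, 2, size=500)
attendance = np.column_stack([leader, leader, leader,
                              rng.integers(0, 2, size=500)])
print(macrolearn_effect_sets(attendance))   # agent 0 ends up grouped with its followers
....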
by changing those guessed effect sets macrolearning results in a system that is factored for the associated wl reward function , so that those reward functions quickly induced the maximal possible world reward .figure [ fig : randomreward ] presents performance averaged over 50 runs for world reward as a function of weeks using a spectrum of reward matrices selected at random .the ordering of the plots is exactly as in figure [ fig : worstreward ] .macrolearning is applied at 2000 weeks , in the right plot .the simulations in figure [ fig : randomreward ] were lengthened from those in figure [ fig : worstreward ] because the convergence time of the full spectrum of reward matrices case was longer . in figure[ fig : randomreward ] the macrolearning resulted in a transient degradation in performance at 2000 weeks followed by convergence to the optimal . without macrolearning the system s performanceno longer varied after 2000 weeks .combined with the results presented in figure [ fig : worstreward ] , these experiments demonstrate that macrolearning induces optimal performance by aligning the agents guessed effect sets with those agents that they actually do influence the most .many distributed computational tasks can not be addressed by direct modeling of the underlying dynamics , or are at best poorly addressed that way due to robustness and scalability concerns .such tasks should instead be addressed by model - independent machine learning techniques . in particular ,reinforcement learning ( rl ) techniques are often a natural choice for how to address such tasks .when as is often the case we can not rely on centralized control and communication , such rl algorithms have to be deployed locally , throughout the system .this raises the important and profound question of how to configure those algorithms , and especially their associated utility functions , so as to achieve the ( global ) computational task .in particular we must ensure that the rl algorithms do not `` work at cross - purposes '' as far as the global task is concerned , lest phenomena like tragedy of the commons occur .how to initialize a system to do this is a novel kind of inverse problem , and how to adapt a system at run - time to better achieve such a global task is a novel kind of learning problem .we call any distributed computational system analyzed from the perspective of such an inverse problem a collective intelligence ( coin ) . as discussed in the literature review section of this chapter , there are many approaches / fields that address aspects of coins .these range from multi - agent systems through conventional economics and on to computational economics .( human economies are a canonical model of a functional coin . )they range onward to game theory , various aspects of distributed biological systems , and on through physics , active walker models , and recurrent neural nets . unfortunately, none of these fields seems appropriate as a general approach to understanding coins .after this literature review we present a mathematical theory for coins . 
we then present experiments on two test problems that validate the predictions of that theory for how best to design a coin to achieve a global computational task .the first set of experiments involves a variant of arthur s famous el farol bar problem .the second set instead considers a leader - follower problem that is hand - designed to cause maximal difficulty for the advice of our theory on how to initialize a coin .this second set of experiments is therefore a test of the on - line learning aspect of our approach to coins . in both experiments the procedures derived from our theory , procedures using only local information , vastly outperformed natural alternative approaches , even such approaches that exploited global information .indeed , in both problems , following the theory summarized in this chapter provides good solutions even when the exact conditions required by the associated theorems hold only approximately .there are many directions in which future work on coins will proceed ; it is a vast and rich area of research .we are already successfully applying our current understanding of coins , tentative as it is , to internet packet routing problems .we are also investigating coins in a more general optimization context where economics - inspired market mechanisms are used to guide some of the interactions among the agents of the distributed system .the goal in this second body of work is to parallelize and solve numerical optimization problems where the concept of an `` agent '' may not be in the natural definition of the problem .we also intend to try to apply our current coin framework to the problem of designing high - occupancy toll lanes in vehicular traffic , and to help understand the `` design space '' necessary for distributed biochemical entities like pre - genomic cells .s. bankes . exploring the foundations of artificial societies : experiments in evolving solutions to the iterated n player prisoner s dilemma . in r. brooks and p. maes , editors ,_ artificial life iv _ , pages 337342 . mit press , 1994 .a. m. bell and w. a. sethares . the el farol problem and the internet : congestion and coordination failure . in _fifth international conference of the society for computational economics _ , boston , ma , 1999 .v. s. borkar , s. jain , and g. rangarajan .collective behaviour and diversity in economic communities : some insights from an evolutionary game . in _ proceedings of the workshop on econophysics _ ,budapest , hungary , 1997 .j. boyan and m. littman .packet routing in dynamically changing networks : a reinforcement learning approach .in _ advances in neural information processing systems - 6 _ , pages 671678 .morgan kaufmann , 1994 .t. x. brown , h. tong , and s. singh . optimizing admission control while ensuring quality of service in multimedia networks via reinforcement learning . in _ advances in neural information processing systems - 11_. mit press , 1999 .d. r. cheriton and k. harty . a market approach to operating system memory allocation . in s.e .clearwater , editor , _ market - based control : a paradigm for distributed resource allocation_. world scientific , 1995 .s. p. m. choi and d. y. yeung .predictive q - routing : a memory based reinforcement learning approach to adaptive traffic control . in d.s. touretzky , m. c. mozer , and m. e. hasselmo , editors , _ advances in neural information processing systems - 8 _ , pages 945951 . mit press , 1996 . c. claus and c. 
boutilier .the dynamics of reinforcement learning cooperative multiagent systems . in _ proceedings of the fifteenth national conference on artificial intelligence _ , pages 746752 , june 1998 .r. h. crites and a. g. barto . improving elevator performance using reinforcement learning . in d.s. touretzky , m. c. mozer , and m. e. hasselmo , editors , _ advances in neural information processing systems - 8 _ , pages 10171023 . mit press , 1996 .r. das , m. mitchell , and j. p. crutchfield .a genetic algorithm discovers particle - based computation in cellular automata . in y.davidor , h .-schwefel , and r. manner , editors , _ parallel problem solving from nature iii _ , pages 344353 .verlag , 1998 .m. a. r. de cara , o. pla , and f. guinea . competition , efficiency and collective behavior in the `` el farol '' bar model .preprint cond - mat/9811162 ( to appear in european physics journal b ) , november 1998 .a. a. economides and j. a. silvester .multi - objective routing in integrated services networks : a game theory approach . in _ieee infocom 91 : proceedings of the conference on computer communication _ ,volume 3 , 1991 .j. ferber .reactive distributed artificial intelligence : principles and applications . in g.o - hare and n. jennings , editors , _ foundations of distributed artificial intelligence _ , pages 287314 .wiley , 1996 .s. forrest , a. s. perelson , l. allen , and r. cherukuri .self - nonself discrimination in a computer . in _ proceedings of the ieee symposium on research in security and privacy _, pages 202212 .ieee computer society press , 1994 .n. friedman , d. koller , and a pfeffer .structured representation of complex stochastic systems . in _ proceedings of the 15th national conference on artificial intelligence ( aaai ) , madison , wisconsin_. aaai press , 1998 .e. a. hansen , a. g. barto , and s. zilberstein .reinforcement learning for mixed open - loop and closed loop control . in _ advances in neural information processing systems - 9 _ , pages 10261032 .mit press , 1998 .b. g. horne and c. lee giles .an experimental comparison of recurrent neural networks . in g.tesauro , d. s. touretzky , and t. k. leen , editors , _ advances in neural information processing systems - 7 _ , pages 697704 . mit press , 1995 .j. hu and m. p. wellman .multiagent reinforcement learning : theoretical framework and an algorithm . in _ proceedings of the fifteenth international conference on machine learning _ ,pages 242250 , june 1998 .m. huber and r. a. grupen . learning to coordinate controllers reinforcement learning on a control basis . in _ proceedings of the 15th international conference of artificial intelligence _, volume 2 , pages 13661371 , 1997 .a. a. lazar and n. semret .design , analysis and simulation of the progressive second price auction for network bandwidth sharing .technical report 487 - 98 - 21 ( rev 2.10 ) , columbia university , april 1998 .t. s. lee , s. ghosh , j. liu , x. ge , and a. nerode . a mathematical framework for asynchronous , distributed , decision making systems with semi autonomous entities : algorithm sythesis , simulation , and evaluation . in _fourth international symposium on autonomous decentralized systems _ , tokyo , japan , 1999 .m. l. littman and j. boyan . a distributed reinforcement learning scheme for network routing . in _ proceedings of the 1993 international workshop on applications of neural networks to telecommunications _ ,pages 4551 , 1993 .p. marbach , o. mihatsch , m. schulte , and j. 
tsisiklis .reinforcement learning for call admission control and routing in integrated service networks . in _ advances in neural information processing systems - 10 _ ,pages 922928 . mit press , 1998 .m. new and a. pohorille . an inherited efficiencies model for non - genomic evolution . in _ proceedings of the 1st conference on modelling and simulation in biology , medicine and biomedical engineering _ , 1999 .j. oro , e. sherwood , j. eichberg , and d. epps .formation of phospholipids under pritive earth conditions and the role of membranes in prebiological evolution . in _ light transducing membranes , structure , function and evolution _ ,pages 121 . academic press , new york , 1978 .a. pohorille , c. chipot , m. new , and m.a .molecular modeling of protocellular functions . in l.hunter and t.e .klein , editors , _ pacific symposium on biocomputing 96 _ , pages 550569 .world scientific , 1996 .t. sandholm , k. larson , m. anderson , o. shehory , and f. tohme . anytime coalition structure generation with worst case guarantees . in _ proceedings of the fifteenth national conference on artificial intelligence_ , pages 4653 , 1998 .t. sandholm and v. r. lesser .issues in automated negotiations and electronic commerce : extending the contract net protocol . in _ proceedings of the second international conference on multi - agent systems _ , pages 328335 .aaai press , 1995 .w. a. sethares and a. m. bell .an adaptive solution to the el farol problem . in _ proceedings . of the thirty - sixth annual allerton conference on communication , control , and computing _ , allerton , il , 1998 .( invited ) .r. m. starr and m. b. stinchcombe .exchange in a network of trading posts . in k. j. arrow and g.chichilnisky , editors , _ markets , information and uncertainty : essays in economic theory in honor of kenneth arrow_. cambridge university press , 1998 .d. subramanian , p. druschel , and j. chen .ants and reinforcement learning : a case study in routing in dynamic networks . in _ proceedings of the fifteenth international conference on artificial intelligence _ , pages 832838 , 1997 .r. weiss , g. homsy , and r. nagpal .programming biological cells . in_ proceedings of the 8th international conference on architectural support for programming languages and operating systems _ , san jose , nz , 1998 .d. h. wolpert .the relationship between pac , the statistical physics framework , the bayesian framework , and the vc framework . in _ the mathematics of generalization _ , pages 117215 .wesley , 1995 .
this paper surveys the emerging science of how to design a `` collective intelligence '' ( coin ) . a coin is a large multi - agent system where : i ) there is little to no centralized communication or control . ii ) there is a provided world utility function that rates the possible histories of the full system . in particular , we are interested in coins in which each agent runs a reinforcement learning ( rl ) algorithm . the conventional approach to designing large distributed systems to optimize a world utility does not use agents running rl algorithms . rather , that approach begins with explicit modeling of the dynamics of the overall system , followed by detailed hand - tuning of the interactions between the components to ensure that they `` cooperate '' as far as the world utility is concerned . this approach is labor - intensive , often results in highly nonrobust systems , and usually results in design techniques that have limited applicability . in contrast , we wish to solve the coin design problem implicitly , via the `` adaptive '' character of the rl algorithms of each of the agents . this approach introduces an entirely new , profound design problem : assuming the rl algorithms are able to achieve high rewards , what reward functions for the individual agents will , when pursued by those agents , result in high world utility ? in other words , what reward functions will best ensure that we do not have phenomena like the tragedy of the commons , braess s paradox , or the liquidity trap ? although still very young , research specifically concentrating on the coin design problem has already resulted in successes in artificial domains , in particular in packet - routing , the leader - follower problem , and in variants of arthur s el farol bar problem . it is expected that as it matures and draws upon other disciplines related to coins , this research will greatly expand the range of tasks addressable by human engineers . moreover , in addition to drawing on them , such a fully developed science of coin design may provide much insight into other already established scientific fields , such as economics , game theory , and population biology .
the exact string matching is one of the oldest tasks in computer science .the need for it started when computers began processing text . at that timethe documents were short and there were not so many of them .now , we are overwhelmed by amount of data of various kind .the string matching is a crucial task in finding information and its speed is extremely important .the exact string matching task is defined as counting or reporting all the locations of given pattern of length in given text of length assuming , where and are strings over a finite alphabet .the first solutions designed were to build and run deterministic finite automaton ( running in space and time ) , the knuth pratt automaton ( running in space and time ) , and the boyer moore algorithm ( running in best case time and worst case time ) .there are numerous variations of the boyer moore algorithm like . in totalmore than 120 exact string matching algorithms have been developed since 1970 .modern processors allow computation on vectors of length 16 bytes in case of sse2 and 32 bytes in case of avx2 .the instructions operate on such vectors stored in special registers xmm0xmm15 ( sse2 ) and ymm0ymm15 ( avx2 ) . as one instructionis performed on all data in these long vectors , it is considered as simd ( single instruction , multiple data ) computation .in the nave approach ( shown as algorithm [ naivesearch2 ] ) the pattern is checked against each position in the text which leads to running time and space .however , it is not bad in practice for large alphabets as it performs only 1.08 comparisons on average on each character of for english text .the variable _ found _ in algorithm [ naivesearch2 ] is not quite necessary .it is presented in order to have a connection to the simd version to be introduced . like in the testing evironment of hume & sunday and the smart library , we consider the counting version of exact string matching .it can be is easily transformed into the reporting version by printing position in line [ naivesearch2-printi ] . and ( =p[j] ] , for , in parallel , in time in total . to this end, we use a primitive which , given a position in and in , compares the strings ] and returns an -bit integer such that the -th bit is set iff = s_2[k] ] with ^\alpha ] . for ,the function compares ] , i.e. , the second symbol against ] iff the -bit of _ found _ is set .we compute _ found _ iteratively , until we either compare the last symbol of or no substring has a partial match ( i.e. , the vector _ found _ becomes zero ) . then , the text is advanced by positions and the process is repeated starting at position . for a given ,the number of occurrences of is equal to the number of bits set in _ found _ and is computed using a popcount instruction . reporting all matches in line [ simdnaivesearch2-printi ] would add an time overhead , as instructions are needed to extract the positions of the bits set in _ found _ , where is the number of occurrences found .the 16-byte version of function simdcompare is implemented with sse2 intrinsic functions as follows : .... simdcompare(x , y , 16 ) x_ptr = _mm_loadu_si128(x ) y_ptr = _mm_loadu_si128(s(y,16 ) ) return _ mm_movemask_epi8(_mm_cmpeq_epi8(x_ptr ,y_ptr ) ) .... here s(y,16 ) is the starting address of 16 copies of y. 
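( the individual intrinsics used in the snippet above are described just below . ) as a language - neutral illustration of the control flow of simd - naive - search , not of its speed , the following pure - python sketch emulates the 16-wide comparison with an explicit loop and keeps the surviving alignments in the bit mask _ found _ ; the function name and the scalar clean - up loop at the end are illustrative additions .

....
def simd_naive_count(text, pattern, q=16):
    """count occurrences of pattern in text, testing q adjacent alignments
    together: one pattern position at a time, a q-bit mask 'found' keeps the
    alignments that still match, and the block is abandoned early once
    found becomes zero."""
    n, m, occ = len(text), len(pattern), 0
    i = 0
    while i + q + m - 1 <= n:                  # full blocks of q alignments
        found = (1 << q) - 1
        j = 0
        while found and j < m:
            mask = 0
            for k in range(q):                 # emulates one q-wide simdcompare
                if text[i + k + j] == pattern[j]:
                    mask |= 1 << k
            found &= mask
            j += 1
        occ += bin(found).count("1")           # popcount of surviving alignments
        i += q
    for s in range(i, n - m + 1):              # scalar clean-up for the tail
        if text[s:s + m] == pattern:
            occ += 1
    return occ

print(simd_naive_count("abracadabra", "abra"))  # -> 2
....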
the instruction ` _mm_loadu_si128(x ) ` loads 16 bytes ( = 128 bits ) starting from x to a simd register .the instruction ` _mm_cmpeq_epi8 ` compares bytewise two registers and the instruction ` _mm_movemask_epi8 ` extracts the comparison result as a 16-bit integer . for the 32-byte version ,the corresponding avx2 intrinsic functions are used . for both versions the sse4 instruction ` _mm_popcnt_u32 ` is utilized for popcount . in order to identify nonmatching positions in the text as fast as possible ,individual characters of the pattern are compared to the corresponding positions in the text in the order given by their frequency in standard text .first , the least frequent symbol is compared , then the second least frequent symbol , etc . therefore the text type should be considered and frequencies of symbols in the text type should be computed in advance from some relevant corpus of texts of the same type .hume and sunday use this strategy in the context of the boyer moore algorithm . [freqssimdnaivesearch2-printi ] [ freqsimdnaivesearch2-out ] algorithm [ freqsimdnaivesearch2 ] shows the nave approach enriched by frequency consideration . a function gives the order in which the symbols of pattern should be compared ( i.e. , , p[\pi(2)],\ldots , p[\pi(m)] ] in a position of the pattern matches to the text , compare next the position of that most unlikely matches . ] with conditional probabilities .his experiments show that the frequency is beneficial in case of texts of large alphabets like texts of natural language .computing all possible frequencies of -grams is rather complicated and the possible speed - up to optimal match is likely marginal .thus we consider only simple frequencies of individual symbols .guard test is a widely used technique to speed - up string matching .the idea is to test a certain pattern position before entering a checking loop . instead of a single guard test ,two or even three tests have been used .guard test is a representative of a general optimization technique called loop peeling , where a number of iterations is moved in front of the loop . as a result ,the loop becomes faster because of fewer loop tests .moreover , loop peeling makes possible to precompute certain values used in the moved iterations .for example , ] . in letter - based languages ,the space character is the most frequent character .we can transform to a slightly better scheme by moving first all the spaces to the end and then processing the remaining positions as for .we have selected four files of different types and alphabet sizes to run experiments on : ` bible.txt ` ( fig .[ figbible ] , table [ tab ] ) and ` e.coli.txt ` ( fig .[ figecoli ] , table [ tab ] ) taken from canterbury corpus , ` dostoevsky-thedouble.txt ` ( fig .[ figdostoyevsky ] , table [ tab ] ) , novel the double by dostoevsky in czech language taken from project gutenberg , and ` protein-hs.txt ` ( fig .[ figprotein ] , table [ tab ] ) taken from protein corpus .file ` dostoevsky-thedouble.txt ` is a concatenation of five copies of the original file to get file length similar to the other files . ) , scaledwidth=90.0% ] ) , scaledwidth=90.0% ] ) , scaledwidth=90.0% ] ) , scaledwidth=90.0% ] we have compared methods naive16 and naive32 having 16 and 32 bytes processed by one simd instruction respectively .naive16-freq and naive32-freq are their variants where comparison order given by nondecreasing probability of pattern symbols ( section [ sec ] ) . 
naive16-fixed and naive32-fixed are the variants where comparison order is fixed ( section [ sec ] ) .our methods were compared with the fastest exact string matching algorithms up to now sbndm2 , sbndm4 and epsm taken from smart library .the experiments were run on gnu / linux 3.18.12 , with x86_64 intel core i7 - 4770 cpu 3.40ghz with 16 gb ram .the computer was without any other workload and user time was measured using posix function ` getrusage ( ) ` .the average of 100 running times is reported .the accuracy of the results is about .the experiments show for both sse2 and avx2 instructions that for natural text ( ` bible.txt ` ) with the scheme of fixed frequency of comparisons improves the speed of simd - nave - search but it is further improved by considering frequencies of symbols in the text . in case of natural text with larger alphabet ( ` dostoevsky-thedouble.txt ` ) the scheme improves the speed only for avx2 instructions .the comparison based on real frequency of symbols is the bext for both sse2 and avx2 instructions . in case of small alphabets ( ` e.coli.txt ` , ` protein-hs.txt ` ) the order of comparison of symbols does not play any role ( except for ` protein-hs.txt ` and sse2 instructions ) . for files with large alphabet ( ` bible.txt ` , ` dostoevsky-thedouble.txt ` )the peeling factor gave the best results for all our algorithms except for naive16-freq and naive32-freq where was the best .the smaller the alphabet is , the less selective the bigrams or trigrams are . for file ` protein-hs.txt ` , was still good and but for dna sequences of four symbols , turned to be the best we also tested nave - search . in every run it was naturally considerably slower than simd - nave - search .frequency order and loop peeling can also be applied to nave - search . however , the speed - up was smaller than in case of simd - nave - search in our experiments .in spite of how many algorithms were developed for exact string matching , their running times are in general outperformed by the avx2 technology . the implementation of the nave search algorithm ( freq - simd - nave - search ) which uses avx2 instructions , applies loop peeling , and compares symbols in the order of increasing frequency is the best choice in general .however , previous algorithms epsm and sbndm4 have an advantage for small alphabets and long patterns . short patterns of 20 characters or less are objects of most searches in practice and our algorithm is especially good for such patterns . for texts with expected equiprobable symbols ( like in dna or protein strings ) ,our algorithm naturally works well without the frequency order of symbol comparisons .our algorithm is considerably simpler than its simd - based competitor epsm which is a combination of six algorithms .this work was done while jan holub was visiting the aalto university under the asci visitor programme ( dean s decision 12/2016 ) .s. faro and m. o. klekci .fast packed string matching for short patterns . in p. sanders and n.zeh , editors , _ proceedings of the 15th meeting on algorithm engineering and experiments , alenex 2013 _ , pages 113121 .siam , 2013 .s. faro , t. lecroq , s. borz , s. di mauro , and a. maggio .the string matching algorithms research tool . in j.holub and j. rek , editors , _ proceedings of the prague stringology conference 16 _ , pages 99113 , czech technical university in prague , czech republic , 2016. m. o. klekci .an empirical analysis of pattern scan order in pattern matching . 
in sioiong ao , leonid gelman , david w. l. hukins , andrew hunter , and a. m. korsunsky , editors , _ world congress on engineering _ , lecture notes in engineering and computer science , pages 337341 .newswood limited , 2007 .
more than 120 algorithms have been developed for exact string matching within the last 40 years . we show by experiments that the naive algorithm exploiting simd instructions of modern cpus ( with symbols compared in a special order ) is the fastest one for patterns of length up to about 50 symbols and extremely good for longer patterns and small alphabets . the algorithm compares 16 or 32 characters in parallel by applying sse2 or avx2 instructions , respectively . moreover , it uses loop peeling to further speed up the searching phase . we tried several orders for comparisons of pattern symbols and the increasing order of their probabilities in the text was the best .
probabilistic models of application domains are central to pattern recognition , machine learning , and scientific modeling in various fields .consequently , unifying frameworks are likely to be fruitful for one or more of these fields .there are also more technical motivations for pursuing the unification of diverse model types . in multiscale modeling, models of the same system at different scales can have fundamentally different characteristics ( e.g. deterministic vs. stochastic ) and yet must be placed in a single modeling framework . in machine learning ,automated search over a wide variety of model types may be of great advantage .in this paper we propose stochastic parameterized grammars ( spg s ) and their generalization to dynamical grammars ( dg s ) as such a unifying framework . to this endwe define mathematically both the syntax and the semantics of this formal modeling language .the essential idea is that there is a `` pool '' of fully specified parameter - bearing terms such as \{ , , } where and might be position vectors .a grammar can include rules such as which specify the probability per unit time , , that the macrophage ingests and destroys the bacterium as a function of the distance between their centers .sets of such rules are a natural way to specify many processes .we will map such grammars to stochastic processes in both continuous time ( section [ xref - section-92874322 ] ) and discrete time ( section [ xref - section-9287443 ] ) , and relate the two definitions ( section [ xref - section-92874423 ] ) .a key feature of the semantics maps is that they are naturally defined in terms of an algebraic _ ring _ of time evolution operators : they map operator addition and multiplication into independent or strongly dependent compositions of stochastic processes , respectively .the stochastic process semantics defined here is a mathematical , algebraic object .it is independent of any particular simulation algorithm , though we will discuss ( section [ xref - section-92871922 ] ) a powerful technique for generating simulation algorithms , and we will demonstrate ( section [ xref - section-92872143 ] ) the interpretation of certain subclasses of spg s as a logic programming language .other applications that will be demonstrated are to data clustering ( ) , chemical reaction kinetics ( section [ xref - section-92872456 ] ) , graph grammars and string grammars ( section [ xref - section-92872523 ] ) , systems of ordinary differential equations and systems of stochastic differential equations ( section [ xref - section-926213036 ] ) .other frameworks that describe model classes that may overlap with those described here are numerous and include : branching or birth - and - death processes , marked point processes , mgs modeling language using topological cell complexes , interacting particle systems , the blog probabilistic object model , adaptive mesh refinement with rewrite rules , stochastic pi - calculus , and colored petri nets .the mapping to an operator algebra of stochastic processes , however , appears to be novel .the present paper is an abbreviated summary of .consider the rewrite rule where the and denote symbols chosen from an arbitrary alphabet set of `` types '' .in addition these type symbols carry expressions for parameters or chosen from a base language defined below .the s can appear in any order , as can the s .different s and s appearing in the rule can denote the same alphabet symbol , with equal or unequal parameter values or . 
is a nonnegative function , assumed to be denoted by an expression in a base language defined below , and also assumed to be an element of a vector space of real - valued functions .informally , is interpreted as a nonnegative probability rate : the independent probability per unit time that any possible instantiation of the rule will `` fire '' if its left hand side precondition remains continuously satisfied for a small time .this interpretation will be formalized in the semantics .we now define .each term or is of type and its parameters take values in an associated ( ordered ) cartesian product set of factor spaces chosen ( possibly with repetition ) from a set of base spaces .each is a measure space with measure .particular may for example be isomorphic to the integers with counting measure , or the real numbers with lebesgue measure .the ordered choice of spaces in constitutes the type signature of type .( as an aside , polymorphic argument type signatures are supported by defining a derived type signature .for example we can regard as a subset of . )correspondingly , parameter expressions are tuples of length , such that each component is either a constant in the space , or a variable that is restricted to taking values in that same space .the variables that appear in a rule this way may be repeated any number of times in parameter expressions or within a rule , providing only that all components take values in the same space . a _ substitution _ of values for variables assigns the same value to all appearances of each variable within a rule .hence each parameter expression takes values in a fixed tuple space under any substitution .this defines the language .we now constrain the language .each nonnegative function is a probability rate : the independent probability per unit time that any particular instantiation of the rule will fire , assuming its precondition remains continuously satisfied for a small interval of time .it is a function only of the parameter values denoted by and , and not of time .each is denoted by an expression in a base language that is closed under addition and multiplication and contains a countable field of constants , dense in , such as the rationals or the algebraic numbers . is assumed to be a nonnegative - valued function in a banach space of real - valued functions defined on the cartesian product space of all the value spaces of the terms appearing in the rule , taken in a standardized order such as nondeccreasing order of type index on the left hand side followed by nondecreasing order of type index on the right hand side of the rule .provided is expressive enough , it is possible to factor within as a product = of a conditional distribution on output parameters given input parameters and a total probability rate as a function of input parameters only . with these definitions we can use a more compact notation by eliminating the s and s , which denote types , in favor of the types themselves .( the expression is called a parameterized _ term , _ which can match to a parameter - bearing _ object _ or _ term instance _ in a `` pool '' of such objects . 
)the caveat is that a particular type may appear any finite number of times , and indeed a particular parameterized term may appear any finite number of times .so we use multisets ( in which the same object may appear as the value of several different indices ) for both the lhs and rhs ( left hand side and right hand side ) of a rule : here the same object may appear as the value of several different indices under the mappings and/or .finally we introduce the shorthand notation and , and revert to the standard notation for multisets ; then we may write .in addition to the * with * clause of a rule following the lhs header , several other alternative clauses can be used and have translations into * with * clauses .for example , `` * subject to * '' is translated into `` * with * '' where is an appropriate dirac or kronecker delta function that enforces a contraint .other examples are given in .the translation of `` * solving * '' or `` * solve * '' will be defined in terms of * with * clauses in section [ xref - section-926213036 ] . as a matter of definition ,stochastic parameterized grammars do not contain * solving*/*solve * clauses , but dynamical grammars may include them .there exists a preliminary implementation of an interpreter for most of this syntax in the form of a _ mathematica _ notebook , which draws samples according to the semantics of section [ xref - section-92795243 ] below . a stochastic parameterized grammar ( spg ) consists of ( minimally ) a collection of such rules with common type set , base space set , type signature specification , and probability rate language .after defining the semantics of such grammars , it will be possible to define semantically equivalent classes of spg s that are untyped or that have richer argument languages .we provide a semantics function in terms of an operator algebra that results in a _ stochastic process _ , if it exists , or a special `` undefined '' element if the stochastic process does nt exist .the stochastic process is defined by a very high - dimensional differential equation ( the master equation ) for the evolution of a probability distribution in continuous time . on the other handwe will also provide a semantics function that results in a discrete - time stochastic process for the same grammar , in the form of an operator that evolves the probability distribution forward by one discrete rule - firing event . in each casethe stochastic process specifies the time evolution of a probability distribution over the contents of a `` pool '' of grounded parameterized terms that can each be present in the pool with any allowed multiplicity from zero to .we will relate these two alternative `` meanings '' of an spg , in continuous time and in discrete time .a state of the `` pool of term instances '' is defined as an integer - valued function : the `` copy number '' of parameterized terms that are grounded ( have no variable symbols ) , for any combination of type index and parameter value .we denote this state by the `` indexed set '' notation for such functions , . 
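a small sketch of one possible in - memory encoding of this pool state : grounded terms as ( type , parameter - tuple ) pairs mapped to copy numbers . the encoding and the concrete terms below are illustrative only , not part of the formal definition .

....
from collections import Counter

# copy numbers n_a(x) indexed by grounded parameterized terms, encoded here
# as (type, parameter-tuple) pairs; the concrete terms are invented
pool = Counter({
    ("macrophage", (0.0, 1.5)): 1,    # one macrophage instance with parameters (0.0, 1.5)
    ("bacterium",  (0.2, 1.1)): 2,    # two identical bacterium instances
})

def copy_number(pool, term_type, params):
    """n_a(x): the number of instances of the grounded term in the pool."""
    return pool[(term_type, tuple(params))]

print(copy_number(pool, "bacterium", (0.2, 1.1)))   # -> 2
....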
each type be assigned a maximum value for all , commonly ( no constraint on copy numbers ) or 1 ( so which means each term - value combination is simply present or absent ) .the state of the full system at time is defined as a probability distribution on all possible values of this ( already large ) pool state : .the probability distribution that puts all probability density on a particular pool state is denoted .for continuous - time we define the semantics of our grammar as the solution , if it exists , of the master equation , which can be written out as : and which has the formal solution . for discrete - time semantics there is an linear map which evolves unnormalized probabilities forward by one rule - firing time step .the probabilities must of course be normalized , so that after discrete time steps the probability is : which , taken over all and , defines . in both casesthe long - time evolution of the system may converge to a limiting distribution which is a key feature of the semantics , but we do not define the semantics as being only this limit even if it exists . thus semantics - preserving transformations of grammars are fixedpoint - preserving transformations of grammars but the converse may not be true .the master equation is completely determined by the _ generators _ and which in turn are simply composed from elementary operators acting on the space of such probability distributions .they are elements of the operator polynomial ring ] where the basis operators include all creation and annihilation operators . ring addition (as in equation [ xref - equation-922211931 ] or equation [ xref - equation-922212022 ] ) corresponds to independently firing processes ; ring operator multiplication ( as in equation [ xref - equation-922211956 ] ) corresponds to obligatory event co - ocurrence of the constituent events that define a process , in immediate succession , and nonnegative scalar multiplication corresponds to speeding up or slowing down a process .commutation relations between operators describe the exact extent to which the order of event occurrence matters .the operator describes the flow of probability per unit time , over an infinitesimal time interval , into new states resulting from a single rule - firing of any type .if we condition the probability distribution on a single rule having fired , setting aside the probability weight for all other possibilities , the normalized distribution is .iterating , the state of the discrete - time grammar after rule firing steps is as given by ( equation [ xref - equation-102111134 ] ) , where as before .the normalization can be state - dependent and hence dependent on , so .this is a critical distinction between stochastic grammar and markov chain models , for which .an execution algorithm is directly expressed by ( equation [ xref - equation-102111134 ] ) .an indispensible tool for studying such stochastic processes in physics is the time - ordered product expansion .we use the following form : \cdot p_{0}% \label{xref - equation-92363221}\end{gathered}\ ] ] where is a solvable or easily computable part of , so the exponentials can be computed or sampled more easily than .this expression can be used to generate feynman diagram expansions , in which denotes the number of interaction vertices in a graph representing a multi - object history .if we apply ( equation [ xref - equation-92363221 ] ) with and , we derive the well - known gillespie algorithm for simulating chemical reaction networks , which can now be applied to spg s 
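as an illustration of the gillespie algorithm just mentioned , here is a minimal stochastic - simulation sketch for parameterless rules ( the chemical - reaction special case treated below ) . the rule encoding , rate constants and species names are invented , and this is the textbook ssa rather than the paper's operator - splitting derivation .

....
import random

def gillespie(pool, rules, t_max, seed=0):
    """minimal stochastic simulation (ssa) sketch for parameterless rules.
    each rule is (lhs, rhs, k): dicts of species copy numbers consumed and
    produced, and a rate constant.  the propensity of a rule is k times the
    number of distinct ways of picking its lhs instances from the pool."""
    rng = random.Random(seed)
    t, trajectory = 0.0, [(0.0, dict(pool))]
    while t < t_max:
        props = []
        for lhs, rhs, k in rules:
            a = k
            for sp, cnt in lhs.items():
                for i in range(cnt):                 # n * (n-1) * ... falling factorial
                    a *= max(pool.get(sp, 0) - i, 0)
            props.append(a)
        total = sum(props)
        if total == 0:
            break
        t += rng.expovariate(total)                  # exponential waiting time
        r = rng.uniform(0, total)                    # roulette choice of the firing rule
        idx = 0
        while r > props[idx]:
            r -= props[idx]
            idx += 1
        lhs, rhs, _ = rules[idx]
        for sp, cnt in lhs.items():
            pool[sp] = pool.get(sp, 0) - cnt
        for sp, cnt in rhs.items():
            pool[sp] = pool.get(sp, 0) + cnt
        trajectory.append((t, dict(pool)))
    return trajectory

# usage: a + b -> c at rate 0.01, c -> a + b at rate 0.1 (illustrative only)
rules = [({"a": 1, "b": 1}, {"c": 1}, 0.01),
         ({"c": 1}, {"a": 1, "b": 1}, 0.1)]
print(gillespie({"a": 100, "b": 80, "c": 0}, rules, t_max=5.0)[-1])
....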
.however many other decompositions of are possible , one of which is used in section [ xref - section-926213036 ] below .because the operators can be decomposed in many ways , there are many valid simulation algorithms for each stochastic process .the particular formulation of the time - ordered product expansion used in ( equation [ xref - equation-92363221 ] ) has the advantage of being recursively self - applicable .thus , ( equation [ xref - equation-92363221 ] ) entails a systematic approach to the creation of novel simulation algorithms ._ proposition ._ given the stochastic parameterized grammar ( spg ) rule syntax of equation [ xref - equation-924145912 ] , ( a ) there is a semantic function mapping from any continuous - time , context sensitive , stochastic parameterized grammar via a time evolution operator to a joint probability density function on the parameter values and birth / death times of grammar terms , conditioned on the total elapsed time , .( b ) there is a semantic function mapping any discrete - time , sequential - firing , context sensitive , stochastic parameterized grammar via a time evolution operator to a joint probability density function on the parameter values and birth / death times of grammar terms , conditioned on the total discrete time defined as number of rule firings , .( c ) the short - time limit of the density conditioned on and conditioned on is equal to . proof : ( a ) : section [ xref - section-92874322 ] .( b ) : section [ xref - section-9287443 ] .( c ) equation [ xref - equation-92363221 ] ( details in , ) . given a new kind of mathematical object ( here , spg s ordg s ) it is generally productive in mathematics to consider the transformations of such objects ( mappings from one object to another or to itself ) that preserve key properties. examples include transformational geometry ( groups acting on lines and points ) and functors acting on categories . in the case of spg s ,two possibilities for the preserved property are immediately salient .first , an spg syntactic transformation could preserve the semantics either fully or just in fixed point form : . preserving the full semanticswould be required of a simulation algorithm .alternatively , an inference algorithm could preserve a joint probability distribution on unobserved and observed random variables , in the form of bayes rule , where are collections of parameterized terms that are inpuuts to , internal to , and outputs from the grammar respectively ..a number of other frameworks and formalisms can be expressed or reduced to spgs as just defined .for example , data clustering models are easily and flexibly described .we give a sampling here . given the chemical reaction network syntax define an index mapping and likewise for as a function of .then ( equation [ xref - equation-926214051 ] ) can be translated to the following equivalent grammar syntax for the multisets of parameterless terms whose semantics is the time - evolution generator \ \ \ \left[\prod\limits_{j\in \operatorname{lhs } ( r ) } a_{b ( j ) } \right ] \ \ .\ ] ] this generator is equivalent to the stochastic process model of mass - action kinetics for the chemical reaction network ( equation [ xref - equation-926214051 ] ) . consider a logic program ( e.g. in pure prolog ) consisting of horn clauses of positive literals axioms have .we can _ translate _ each such clause into a monotonic spg rule where each different literal denotes an unparameterized type with . 
since there is no * with * clause , the fule firing rates default to .the corresponding time - evolution operator is \ \ \\left[\prod \limits_{j\in \operatorname{lhs } ( r ) } n_{b ( j ) } \right]\ ] ] the semantics of the logic program is its least model or minimal interpretation .it can be computed ( knaster - tarski theorem ) by starting with no literals in the `` pool '' and repeatedly drawing all their consequences according to the logic program .this is equivalent to converging to a fixed point ) .more general clauses include negative literals on the lhs , as , or even more general cardinality constraint atoms .these constraints can be expressed in operator algebra by expanding the basis operator set beyond the basic creation and annihilation operators .finally , atoms with function symbols may be admitted using parameterized terms .graph grammars are composed of local rewrite rules for graphs ( see for example ) .we now express a class of graph grammars in terms of spg s .the following syntax introduces object identifier ( oid ) labels for each parameterized term , and allows labelled terms to point to one another through a graph of such labels .the graph is related to two subgraphs of neighborhood indices and specific to the input and output sides of a rule . like types or variables ,the label symbols appearing in a rule are chosen from an alphabet .unlike types but like variables , the label symbols denote nonnegative integer values - unique addresses or object identifiers .a graph grammar rule is of the form , for some nonnegative - integer - valued functions , , , for which , : ( compare to ( equation [ xref - equation-924145912 ] ) ) .note that the fanout of the graph is limited by .let be mutually exclusive and exhaustive , and the same for .define , , and .then the graph syntax may be translated to the following ordinary non - graph grammar rule ( where nextoid is a variable , and oidgen and null are types reserved for the translation ) : which already has a defined semantics .note that all set membership tests can be done at translation time because they do not use information that is only available dynamically during the grammar evolution .optionally we may also add a rule schema ( one rule per type , ) to eliminate any dangling pointers .strings may be encoded as one - dimensional graphs using either a singly or doubly linked list data structure .string rewrite rules are emulated as graph rewrite rules , whose semantics are defined above .this form is capable of handling many l - system grammars .there are spg rule forms corresponding to stochastic differential equations governing diffusion and transport .given the sde or equivalent langevin equation ( which specializes to a system of ordinary differential equations when ): under some conditions on the noise term the dynamics can be expressed as a fokker - planck equation for the probability distribution : let be the solution of this equation given initial condition ( with dirac delta function appropriate to the particular measure used for each component ) .then at , thus the probability rate is given by a differential operator acting on a dirac delta function . by ( equation [ xref - equation-922212022 ] )we construct the evolution generator operators , where the second order derivative terms give diffusion dynamics and also regularize and promote continuity of probability in parameter space both along and transverse to any local drift direction .calculations with such expressions are shown in . 
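to make the correspondence between such sde rules and extended - time processes concrete , the following sketch integrates a generic one - dimensional langevin equation with the euler - maruyama scheme ; an ensemble of such trajectories samples the same probability density whose evolution the fokker - planck generator above describes . the drift and diffusion functions , step size , and ensemble size below are illustrative assumptions rather than part of the grammar framework itself .

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, dt, n_steps, n_paths, rng=None):
    """Integrate dx = drift(x) dt + diffusion(x) dW with the Euler-Maruyama scheme.

    Returns an array of shape (n_paths, n_steps + 1) of sample trajectories."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.full(n_paths, float(x0))
    paths = [x.copy()]
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)   # Wiener increments
        x = x + drift(x) * dt + diffusion(x) * dW
        paths.append(x.copy())
    return np.array(paths).T

# hypothetical drift and diffusion: an Ornstein-Uhlenbeck-like process
a = lambda x: -1.5 * x                    # linear restoring drift
b = lambda x: 0.4 * np.ones_like(x)       # constant noise amplitude

paths = euler_maruyama(a, b, x0=1.0, dt=1e-3, n_steps=5000, n_paths=2000)
# a histogram of paths[:, -1] approximates the stationary Fokker-Planck density
```

in a hybrid simulation of the kind discussed in the next paragraph , a step of this form would play the role of the solvable factor between discrete rule firings .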
diffusion / drift rules can be combined with chemical reaction rules to describe reaction - diffusion systems .the foregoing approach can be generalized to encompass partial differential equations and stochastic partial differential equations .these operator expressions all correspond to natural extended - time processes given by the evolution of continuous differential equations .the operator semantics of the differential equations is given in terms of derivatives of delta functions .a special `` * solve * '' or `` * solving * ''keyword may be used to introduce such ode / sde rule clauses in the spg syntax .this syntax can be eliminated in favor of a `` * with * '' clause by using derivatives of delta functions in the rate expression , provided that such generalized functions are in the banach space as a limit of functions .if a grammar includes such de rules along with non - de rules , a solver can be used to compute in the time - ordered product for as a hybrid simulation algorithm for discontinuous ( jump ) stochastic processes combined with stochastic differential equations .the relevance of the modeling language defined here to _ artificial intelligence _ includes the following points .first , pattern recognition and machine learning both benefit foundationally from better , more descriptively adequate probabilistic domain models . as an example, exhibits hierarchical clustering data models expressed very simply in terms of spg s and relates them to recent work .graphical models are probabilistic domain models with a fixed structure of variables and their relationships , by contrast with the inherently flexible variable sets and dependency structures resulting from the execution of stochastic parameterized grammars .thus spg s , unlike graphical models , are variable - structure systems ( defined in ) , and consequently they can support compositional description of complex situations such as multiple object tracking in the presence of cell division in biological imagery .second , the reduction of many divergent styles of model to a common spg syntax and operator algebra semantics enables new possibilities for hybrid model forms .for example one could combine logic programming with probability distribution models , or discrete - event stochastic and differential equation models as discussed in section [ xref - section-926213036 ] in possibly new ways . as a third point of ai relevance , from spgprobabilistic domain models it is possible to derive _ algorithms _ for simulation ( as in section [ xref - section-92871922 ] ) and inference either by hand or automatically .of course , inference algorithms are not as well worked out yet for spg s as for graphical models .spg s have the advantage that simulation or inference algorithms could be expressed again in the form of spg s , a possibility demonstrated in part by the encoding of logic programs as spg s .since both model and algorithm are expressed as spg s , it is possible to use spg transformations that preserve relevant quantities ( section [ xref - section - emj001 ] ) as a technique for deriving such novel algorithms or generating them automatically .for example we have taken this approach to rederive by hand the gillespie simulation algorithm for chemical kinetics .this derivation is different from the one in section [ xref - section-92871922 ] . 
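to connect the operator derivation mentioned above with something executable , here is a minimal sketch of the standard gillespie direct method for a small reaction network ; this is the algorithm recovered when the time - ordered product expansion is applied with the diagonal part of the generator as the solvable piece . the two - reaction network , rate constants , and initial copy numbers are hypothetical choices made only for illustration .

```python
import numpy as np

def gillespie(x0, stoich, propensity, t_max, rng=None):
    """Gillespie direct method (SSA): stoich has one state-change vector per
    reaction, propensity maps the current state to the vector of firing rates."""
    if rng is None:
        rng = np.random.default_rng(1)
    t, x = 0.0, np.array(x0, dtype=float)
    history = [(t, x.copy())]
    while t < t_max:
        a = propensity(x)
        a_tot = a.sum()
        if a_tot <= 0.0:                      # no rule can fire any more
            break
        t += rng.exponential(1.0 / a_tot)     # exponential waiting time
        r = rng.choice(len(a), p=a / a_tot)   # which reaction fires
        x += stoich[r]
        history.append((t, x.copy()))
    return history

# hypothetical toy network:  A -> B at rate k1 * A,  B -> A at rate k2 * B
k1, k2 = 0.3, 0.1
stoich = np.array([[-1.0, +1.0], [+1.0, -1.0]])
prop = lambda x: np.array([k1 * x[0], k2 * x[1]])
trajectory = gillespie([50, 0], stoich, prop, t_max=20.0)
```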
because spg s encompass graph grammars it is even possible in principle to express families of valid spg transformations as meta - spg s .all of these points apply _ a fortiori _ to dynamical grammars as well .the relevance of the modeling language defined here to _ computational science _ includes the following points .first , as argued previously , multiscale models must encompass and unify heterogeneous model types such as discrete / continuous or stochastic / deterministic dynamical models ; this unification is provided by spg s and dg s .second , a representationally adequate computerized modeling language can be of great assistance in constructing mathematical models in science , as demonstrated for biological regulatory network models by cellerator and other cell modeling languages .dg s extend this promise to more complex , spatiotemporally dynamic , variable - structure system models such as occur in biological development .third , machine learning techniques could in principle be applied to find simplified approximate or reduced models of emergent phenomena within complex domain models . in that casethe forgoing ai arguments apply to computational science applications of machine learning as well . both for artificial intelligence and computational science , futurework will be required to determine whether the prospects outlined above are both realizable and compelling .the present work is intended to provide a mathematical foundation for achieving that goal .we have established a syntax and semantics for a probabilistic modeling language based on independent processes leading to events linked by a shared set of objects . the semantics is based on a polynomial ring of time - evolution operators .the syntax is in the form of a set of rewrite rules .stochastic parameterized grammars expressed in this language can compactly encode disparate models : generative cluster data models , biochemical networks , logic programs , graph grammars , string rewrite grammars , and stochastic differential equations among other others .the time - ordered product expansion connects this framework to powerful methods from quantum field theory and operator algebra .useful discussions with guy yosiphon , pierre baldi , ashish bhan , michael duff , sergei nikolaev , bruce shapiro , padhraic smyth , michael turmon , and max welling are gratefully acknowledged .the work was supported in part by a biomedical information science and technology initiative ( bisti ) grant ( number r33 gm069013 ) from the national institue of general medical sciences , by the national science foundation s frontiers in biological research ( fibr ) program award number ef-0330786 , and by the center for cell mimetic space exploration ( cmise ) , a nasa university research , engineering and technology institute ( ureti ) , under award number # ncc 2 - 1364 .000 mjolsness , e. ( 2005 ) ._ stochastic process semantics for dynamical grammar syntax_. uc irvine , irvine .uci ics tr # 05 - 14 , http://computableplant.ics.uci.edu/papers/#frameworks .[ stochsem05 ] mattis , d. c. , & glasser , m. l. ( 1998 ) ._ the uses of quantum field theory in diffusion - limited reactions_. reviews of modern physics , * 70 * , 9791001 .[ mattisglasser98 ] risken , h. ( 1984 ) . _ the fokker - planck equation_. berlin : springer.[riskenfp ] gillespie , d. j. , ( 1976 ) .22 , 403 - 434 .[ gillespie76 ] cenzer , d. , marek , v. w. , & remmel , j. b. ( 2005 ) ._ logic programming with infinite sets_. 
annals of mathematics and artificial intelligence , volume 44 , issue 4 , aug 2005 , pages 309 - 339 . [ remmel04 ]cuny , j. , ehrig , h. , engels , g. , & rozenberg , g. ( 1994 ) ._ graph grammars and their applications to computer science_. springer.[graphgram94 ] prusinkiewicz , p. , & lindenmeyer , a. ( 1990 ) . _ the algorithmic beauty of plants_. new york : springer - verlag.[prusinkiewiczalgb ] e. mjolsness ( 2005 ) . _variable - structure systems from graphs and grammars_. uc irvine school of information and computer sciences , irvine .uci ics tr # 05 - 09 , http://computableplant.ics.uci.edu/papers/vbl-struct_gg_tr.pdf .[ vsstr05 ] victoria gor , tigran bacarian , michael elowitz , eric mjolsness ( 2005 ) ._ tracking cell signals in fluorescent images_. computer vision methods for bioinformatics ( cvmb ) workshop at computer vision and pattern recognition ( cvpr ) , san diego .[ cvprfluor05 ] bruce e. shapiro , andre levchenko , elliot m. meyerowitz , barbara j. wold , and eric d. mjolsness ( 2003 ) ._ cellerator : extending a computer algebra system to include biochemical arrows for signal transduction simulations_. bioinformatics 19 : 677 - 678 .[ cellerator ]
we define a class of probabilistic models in terms of an operator algebra of stochastic processes , and a representation for this class in terms of stochastic parameterized grammars . a syntactic specification of a grammar is mapped to semantics given in terms of a ring of operators , so that grammatical composition corresponds to operator addition or multiplication . the operators are generators for the time - evolution of stochastic processes . within this modeling framework one can express data clustering models , logic programs , ordinary and stochastic differential equations , graph grammars , and stochastic chemical reaction kinetics . this mathematical formulation connects these apparently distant fields to one another and to mathematical methods from quantum field theory and operator algebra . accepted for : ninth international symposium on artificial intelligence and mathematics , january 2006
complex systems with interacting constituents are ubiquitous in nature and society . to understand the microscopic mechanisms of emerging statistical laws of complex systems , one records and analyzes time series of observable quantities .these time series are usually nonstationary and possess long - range power - law cross - correlations .examples include the velocity , temperature , and concentration fields of turbulent flows embedded in the same space as joint multifractal measures , topographic indices and crop yield in agronomy , temporal and spatial seismic data , nitrogen dioxide and ground - level ozone , heart rate variability and brain activity in healthy humans , sunspot numbers and river flow fluctuations , wind patterns and land surface air temperatures , traffic flows and traffic signals , self - affine time series of taxi accidents , and econophysical variables .a variety of methods have been used to investigate the long - range power - law cross - correlations between two nonstationary time series .the earliest was joint multifractal analysis to study the cross - multifractal nature of two joint multifractal measures through the scaling behaviors of the joint moments , which is a multifractal cross - correlation analysis based on the partition function approach ( mf - x - pf ) . over the past decade , detrended cross - correlation analysis ( dcca ) has become the most popular method of investigating the long - range power - law cross correlations between two nonstationary time series , and this method has numerous variants .statistical tests can be used to measure these cross correlations .there is also a group of multifractal detrended fluctuation analysis ( mf - dcca ) methods of analyzing multifractal time series , e.g. , mf - x - dfa , mf - x - dma , and mf - hxa .the observed long - range power - law cross - correlations between two time series may not be caused by their intrinsic relationship but by a common third driving force or by common external factors .if the influence of the common external factors on the two time series are additive , we can use partial correlation to measure their intrinsic relationship . to extract the intrinsic long - range power - law cross - correlations between two time series affected by common driving driving forces , we previously developed and used detrended partial cross - correlation analysis ( dpxa ) and studied the dpxa exponents of variable cases , combining the ideas of detrended cross - correlation analysis and partial correlation . in ref . 
, the dpxa method has been proposed independently , focussing on the dpxa coefficient .here we provide a general framework for the dpxa and mf - dpxa methods that is applicable to various extensions , including different detrending approaches and higher dimensions .we adopt two well - established mathematical models ( bivariate fractional brownian motions and multifractal binomial measures ) in our numerical experiments , which have known analytical expressions , and demonstrate how the ( mf-)dpxa methods is superior to the corresponding ( mf-)dcca methods .consider two stationary time series and that depend on a sequence of time series with .each time series is covered with ] , where .we calibrate the two linear regression models for and respectively , where ^{\mathrm{t}} ] , and are the vectors of the error term , and is the matrix of the external forces in the box , where is the transform of .equation ( [ eq : xy : z : rxy : betas ] ) gives the estimates and of the -dimensional parameter vectors and and the sequence of error terms , we obtain the disturbance profiles , i.e. , where .we assume that the local trend functions of and are and , respectively .the detrended partial cross - correlation in each window is then calculated , \left[r_{y , v}(k)-\widetilde{r}_{y , v}(k)\right],\ ] ] and the second - order detrended partial cross - correlation is calculated , ^{1/2}.\ ] ] if there are intrinsic long - range power - law cross - correlations between and , we expect the scaling relation , there are many ways of determining and . the local detrending functions could be polynomials , moving averages , or other possibilities . to distinguish the different detrending methods , we label the corresponding dpxa variants as , e.g. , px - dfa and px - dma .when the moving average is used as the local detrending function , the window size of the moving averages must be the same as the covering window size . to measure the validity of the dpxa method, we perform numerical experiments using an additive model for and , i.e. , where is a fractional gaussian noise with hurst index , and and are the incremental series of the two components of a bivariate fractional brownian motion ( bfbms ) with hurst indices and .the properties of multivariate fractional brownian motions have been extensively studied . in particular, it has been proven that the hurst index of the cross - correlation between the two components is this property allows us to assess how the proposed method perform .we can obtain the of and using the dcca method and the of and using the dpxa method .our numerical experiments show that .we use for theoretical or true values and for numerical estimates . in the simulations we set , , , and in the model based on eq .( [ eq : dpxa : model ] ) .three hurst indices , , and are input arguments and vary from 0.1 to 0.95 at 0.05 intervals . because and are symmetric ,we set , resulting in triplets of . the bfbms are simulated using the method described in ref . , and the fbms are generated using a rapid wavelet - based approach .the length of each time series is 65536 .for each triplet we conduct 100 simulations .we obtain the hurst indices for the simulated time series , , , , and using detrended fluctuation analysis .the average values , , , , and over 100 realizations are calculated for further analysis , which are shown in fig . [ fig : dpxa : dhxyz ] . a linear regression between the output and input hurst indices in fig . 
[ fig : dpxa : dhxyz](a)-(c ) yields , , and , suggesting that the generated fbms have hurst indices equal to the input hurst indices .figure [ fig : dpxa : dhxyz](d ) shows that when , is close to . when it is not , .figure [ fig : dpxa : dhxyz](e ) shows that . because and [ see fig .[ fig : dpxa : dhxyz](a)-(b ) ] , we verify numerically that note also that , and that is a function of , and .a simple linear regression gives which indicates that the dpxa method can be used to extract the intrinsic cross - correlations between the two time series and when they are influenced by a common factor .we calculate the average over different and then find the relative error figure [ fig : dpxa : dhxyz](f ) shows the results for different combinations of and .although in most cases we see that , when both and approach 0 , increases .when , , and when and , .for all other points of , the relative errors are less than 0.10 . in a way similar to detrended cross - correlation coefficients , we define the detrended partial cross - correlation coefficient ( or dpxa coefficient ) as as in the dcca coefficient , we also find for dpxa .the dpxa coefficient indicates the intrinsic cross - correlations between two non - stationary series .[ figure [ fig : dpxa : rho ] ( color online ) : detrended partial cross - correlation coefficients .( a ) performance of different methods by comparing three cross - correlation coefficients , and of the mathematical model in eq .( [ eq : dpxa : model ] ) .( b ) estimation and comparison of the cross - correlation levels between the two return time series ( ) and two volatility time series ( ) of crude oil and gold when including and excluding the influence of the usd index . ] we use the mathematical model in eq .( [ eq : dpxa : model ] ) with the coefficients and to demonstrate how the dpxa coefficient outperforms the dcca coefficient .the two components and of the bfbm have very small hurst indices and their correlation coefficient is , and the driving fbm force has a large hurst index .figure [ fig : dpxa : rho](a ) shows the resulting cross - correlation coefficients at different scales .the dcca coefficients between the generated and time series overestimate the true value . because the influence of on and is very strong , the behaviors of and are dominated by , and the cross - correlation coefficient is close to 1 when is small and approaches 1 when is large .in contrast , the dpxa coefficients are in good agreement with the true value . note that the dpxa method better estimates and than the dcca method , since the curve deviates more from the horizontal line than the curve , especially at large scales .
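the construction of the dpxa coefficient described above can be summarized in a few lines . the sketch below assumes a single external driving series z , an ordinary least - squares regression within each window to partial out z , and first - order polynomial detrending of the residual profiles ( a px - dfa variant with linear detrending ) ; it is meant as an illustration of the procedure , not as the authors' reference implementation .

```python
import numpy as np

def dpxa_coefficient(x, y, z, s):
    """Minimal DPXA coefficient at scale s: within each window, partial out the
    external series z from x and y by least squares, build profiles of the
    residuals, detrend them linearly, and form the ratio of the detrended
    partial cross-correlation to the two detrended variances."""
    n = (len(x) // s) * s
    x, y, z = np.asarray(x)[:n], np.asarray(y)[:n], np.asarray(z)[:n]
    t = np.arange(s)
    f_xy, f_xx, f_yy = [], [], []
    for i in range(0, n, s):
        xs, ys, zs = x[i:i + s], y[i:i + s], z[i:i + s]
        Z = np.column_stack([np.ones(s), zs])                   # constant + external force
        rx = xs - Z @ np.linalg.lstsq(Z, xs, rcond=None)[0]     # regression residuals
        ry = ys - Z @ np.linalg.lstsq(Z, ys, rcond=None)[0]
        Rx, Ry = np.cumsum(rx), np.cumsum(ry)                   # disturbance profiles
        Rx = Rx - np.polyval(np.polyfit(t, Rx, 1), t)           # linear detrending
        Ry = Ry - np.polyval(np.polyfit(t, Ry, 1), t)
        f_xy.append(np.mean(Rx * Ry))
        f_xx.append(np.mean(Rx * Rx))
        f_yy.append(np.mean(Ry * Ry))
    return np.mean(f_xy) / np.sqrt(np.mean(f_xx) * np.mean(f_yy))
```

computing this coefficient over a range of scales s gives curves analogous to those in fig . [ fig : dpxa : rho ] .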
to illustrate the method with an example from finance , we use it to estimate the intrinsic cross - correlation levels between the futures returns and the volatilities of crude oil and gold .it is well - documented that the returns of crude oil and gold futures are correlated , and that both commodities are influenced by the usd index .the data samples contain the daily closing prices of gold , crude oil , and the usd index from 4 october 1985 to 31 october 2012 .figure [ fig : dpxa : rho](b ) shows that both the dcca and dpxa coefficients of returns exhibit an increasing trend with respect to the scale , and that the two types of coefficient for the volatilities do not exhibit any evident trend . for both financial variables , fig .[ fig : dpxa : rho](b ) shows that for different scales .although this is similar to the result between ordinary partial correlations and cross - correlations , the dpxa coefficients contain more information than the ordinary partial correlations since the former indicate the partial correlations at multiple scales .an extension of the dpxa for multifractal time series , notated mf - dpxa , can be easily implemented . when mf - dpxa is implemented with dfa or dma , we notate it mf - px - dfa or mf - px - dma .the order detrended partial cross - correlation is calculated ^{1/q}\ ] ] when , and ~.\ ] ] we then expect the scaling relation according to the standard multifractal formalism , the multifractal mass exponent can be used to characterize the multifractal nature , i.e. , where is the fractal dimension of the geometric support of the multifractal measure .we use for our time series analysis .if the mass exponent is a nonlinear function of , the signal is multifractal .we use the legendre transform to obtain the singularity strength function and the multifractal spectrum to test the performance of mf - dpxa , we construct two binomial measures and from the -model with known analytic multifractal properties , and contaminate them with gaussian noise .we generate the binomial measure iteratively by using the multiplicative factors for and for .the contaminated signals are and . figures [ fig : mfpx : pmodel](a)(c ) show that the signal - to - noise ratio is of order .figures [ fig : mfpx : pmodel](d)(f ) show a power - law dependence between the fluctuation functions and the scale , in which it is hard to distinguish the three curves of .figure [ fig : mfpx : pmodel](g ) shows that for and , the function an approximate straight line and that the corresponding spectrum is very narrow and concentrated around .these observations are trivial because and are gaussian noise with the hurst indices , and the multifractal detrended cross - correlation analysis fails to uncover any multifractality . 
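for completeness , the test signals used in this experiment are straightforward to reproduce in outline : the sketch below builds a deterministic binomial ( p - model ) measure by repeated multiplicative splitting and then adds gaussian noise . the multiplicative factors , cascade depth , and noise amplitude written here are placeholder values , since the exact numbers are not recoverable from the text above ; with sufficiently strong noise , an mf - dcca analysis of the contaminated signals sees essentially only the gaussian component , as just described .

```python
import numpy as np

def binomial_measure(p, n_levels):
    """Deterministic binomial (p-model) measure after n_levels cascade steps.

    At each step every cell splits its mass between its two halves in the
    proportion p : (1 - p)."""
    m = np.array([1.0])
    for _ in range(n_levels):
        m = np.concatenate([p * m, (1.0 - p) * m])
    return m

# placeholder parameters: two measures plus additive gaussian noise
rng = np.random.default_rng(2)
mx = binomial_measure(0.3, 16)            # 2**16 points
my = binomial_measure(0.4, 16)
noise_level = 10.0                        # noise much stronger than the signal
X = mx + noise_level * np.std(mx) * rng.normal(size=mx.size)
Y = my + noise_level * np.std(my) * rng.normal(size=my.size)
```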
on the contrary , we find that and .thus the mf - dpxa method successfully reveals the intrinsic multifractal nature between and hidden in and .in summary , we have studied the performances of dpxa exponents , dpxa coefficients , and mf - dpxa using bivariate fractional brownian motions contaminated by a fractional brownian motion and multifractal binomial measures contaminated by white noise .these mathematical models are appropriate here because their analytical expressions are known .we have demonstrated that the dpxa methods are capable of extracting the intrinsic cross - correlations between two time series when they are influenced by common factors , while the dcca methods fail .the methods discussed are intended for multivariate time series analysis , but they can also be generalized to higher dimensions .we can also use lagged cross - correlations in these methods .although comparing the performances of different methods is always important , different variants of a method can produce different outcomes when applied to different systems .for instance , one variant that outperforms other variants under the setting of certain stochastic processes is not necessarily the best performing method for other systems .we argue that there are still many open questions for the big family of dfa , dma , dcca and dpxa methods .this work was partially supported by the national natural science foundation of china under grant no . 11375064 , fundamental research funds for the central universities , and shanghai financial and securities professional committee .
when common factors strongly influence two power - law cross - correlated time series recorded in complex natural or social systems , using classic detrended cross - correlation analysis ( dcca ) without considering these common factors will bias the results . we use detrended partial cross - correlation analysis ( dpxa ) to uncover the intrinsic power - law cross - correlations between two simultaneously recorded time series in the presence of nonstationarity after removing the effects of other time series acting as common forces . the dpxa method is a generalization of the detrended cross - correlation analysis that takes into account partial correlation analysis . we demonstrate the method by using bivariate fractional brownian motions contaminated with a fractional brownian motion . we find that the dpxa is able to recover the analytical cross hurst indices , and thus the multi - scale dpxa coefficients are a viable alternative to the conventional cross - correlation coefficient . we demonstrate the advantage of the dpxa coefficients over the dcca coefficients by analyzing contaminated bivariate fractional brownian motions . we calculate the dpxa coefficients and use them to extract the intrinsic cross - correlation between crude oil and gold futures by taking into consideration the impact of the us dollar index . we develop the multifractal dpxa ( mf - dpxa ) method in order to generalize the dpxa method and investigate multifractal time series . we analyze multifractal binomial measures masked with strong white noises and find that the mf - dpxa method quantifies the hidden multifractal nature while the mf - dcca method fails .
there have been many models for the evolution of the bias derived from empirical knowledge , theory , simulations and from observations which account for the growth and merging of collapsed structure .however , all of these bias fitting forms include the unknown free parameters which need to be fitted with the set of galaxy bias data and simulation .it is shown that an incorrect bias model causes a shift in measured values of cosmological parameters . thus , the accurate modeling to is prerequisite for the precision cosmology .we obtain the exact linear bias obtained from its definition and show its dependence both on cosmology and on gravity theory .we provide which can be obtained from both theory and observation .this analytic solution for the bias allows one to use it as a cosmological parameter instead of a nuisance one .the observed linear galaxy power spectrum using a fiducial model including the effects of bias and the redshift space distortions is given by p_gal^(k,z ) = b^2 p_m(k , z_0 ) ( 1 + ^2)^2 ( ) ^2 [ pgal ] , where ( ratio of the hubble parameter of the adopted fiducial model , to that of the true model , ) , ( ratio of the angular diameter distance of the true model , to that of the adopted fiducial model , ) , defining the linear bias factor , means the present matter power spectrum , the redshift space distortions ( rsd ) parameter , is defined as , and is the linear growth factor of the matter fluctuation , with meaning .if one adopts the definition of the linear bias as , then one obtains .both and are obtained from observations , and theories predict and .if one takes the derivative of with respect to , then one obtains ( _ we use for below _ ) b(k , z ) & = & ( z ) ^-1 + & = & ( z ) ^-1 [ bkz ] , where we use , , denoting the observed fractional rms in galaxy number density , , and ( under the assumption of the flat universe ) , respectively .one can refer the appendix for detail derivation .all quantities in the second equality of eq .( [ bkz ] ) are measurable from galaxy surveys .both and are measured from galaxy surveys .also can be directly measured from and .thus , one can measure the time evolution of bias if there exists enough binned data to measure .future galaxy surveys will provide the sub - percent level accuracy in measuring and will make the accurate measurement of bias possible .( [ bkz ] ) holds for any gravity theory because it is derived from its definition . from the above eq .( [ bkz ] ) , one can understand the theoretical motivation for the formulae of .if one assumes is constant , then one obtains .thus , the magnitude of is determined by the measured value of which might depend on luminosity , color , and spectral type of galaxies .however , there is no reason to believe that is time independent .thus , we regard as a time dependent observable in eq .( [ bkz ] ) .in addition , the time evolution of bias is completely determined from observations of and .we assume the form of to investigate its behavior where we assign the dependence of bias on galaxy properties into .in this case , the galaxy dependence on is absorbed in solely .the cosmological dependence on bias is represented by , , and . actually , depends on , , and the underlying gravity theory .we restrict our consideration for the linear regime and one can solve the sub - horizon solution for the to obtain the growth factor , for the given model .one can numerically solve this for given models . 
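as an illustration of the numerical route just mentioned , the sketch below integrates the standard sub - horizon linear growth equation for a flat wcdm background and returns the growth factor , the growth rate f = dlnD / dlna , and f sigma8 ( z ) , which are the ingredients entering the bias expression above once the observed quantities are supplied . the parameter values and the normalization sigma8_0 used here are illustrative assumptions .

```python
import numpy as np
from scipy.integrate import solve_ivp

def growth(omega_m0=0.3, w=-1.0, sigma8_0=0.8, a_init=1e-3):
    """Solve the sub-horizon linear growth equation for flat wCDM,
    D'' + (3/a + dlnE/da) D' - 1.5 omega_m0 / (a**5 E**2) D = 0  (primes: d/da)."""
    E2 = lambda a: omega_m0 * a**-3 + (1.0 - omega_m0) * a**(-3.0 * (1.0 + w))
    dE2_da = lambda a: (-3.0 * omega_m0 * a**-4
                        - 3.0 * (1.0 + w) * (1.0 - omega_m0) * a**(-3.0 * (1.0 + w) - 1.0))
    def rhs(a, y):
        D, Dp = y
        dlnE_da = dE2_da(a) / (2.0 * E2(a))
        return [Dp, -(3.0 / a + dlnE_da) * Dp + 1.5 * omega_m0 / (a**5 * E2(a)) * D]
    # matter-dominated initial conditions: D grows like a
    sol = solve_ivp(rhs, [a_init, 1.0], [a_init, 1.0], dense_output=True, rtol=1e-8)
    D = lambda a: sol.sol(a)[0]
    f = lambda a: a * sol.sol(a)[1] / sol.sol(a)[0]        # f = dlnD/dlna
    fsigma8 = lambda z: f(1.0 / (1.0 + z)) * sigma8_0 * D(1.0 / (1.0 + z)) / D(1.0)
    return D, f, fsigma8

D, f, fs8 = growth()
print(f(1.0), fs8(0.5))   # growth rate today and f sigma8 at z = 0.5
```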
even though we just investigate the constant dark energy equation of state , , and the dgp model in this _ letter _ , one can generalize the consideration to any model by solving numerically . in this subsection , we investigate the evolution of bias for different cosmological parameters ( and ) under general relativity ( gr ) .for the constant dark energy equation of state , , there exists the known exact analytic solution for the linear growth rate , .we adopt this solution to show both the cosmology and the astrophysics dependence on .one can generalize to the time dependent by using the numerical solution for the .we depict the dependence of on and in fig .[ fig1 ] . in the left panel of fig .[ fig1 ] , we show the evolution of for different values of fixed , , and .the dashed , solid , and dotted lines correspond to -1.2 , -1.0 , and -0.8 , respectively . as decreases , so does .this is due to the fact that if increases , then both and decrease .the difference of between models increases as increases .the difference between and is about 4.4 ( 3.5 ) % at .we also show the dependence on for the model in the right panel of fig .[ fig1 ] .the dashed , solid , and dotted lines correspond to 0.35 , 0.3 , and 0.25 , respectively for the model .as increases , so do and .thus , decreases as increases .the difference between and is about 3.8 ( 3.2 ) % at .even though we limit our consideration to the constant with the flat universe , one can generalize the investigation to the time varying and the non - flat universe by solving the sub - horizon equation numerically .also one can find the time varying model which produces the same cmb result as the constant models .we obtain the exact analytic solution for the linear bias .this solution allows one to investigate both the cosmological and astrophysical dependence on bias without any ambiguity .
from this solution , one can exactly estimate the time evolution of bias for different models .the different gravity theories provide the different bias .thus , this provides the consistent check for the cosmological dependence on the measured galaxy power spectrum for the given model .this solution can be generalized to many models including the modified gravity theories and the massive neutrino dark matter model by replacing the approximate solution used in this _letter _ with the exact sub - horizon solutions for corresponding models .these cases are under investigation .this theoretical form of bias can be measured from measurements of and from galaxy surveys if we achieve enough binned data .also a known degeneracy between the equation of state and the growth index parameter due to the evolution of can be broken due to this exact form of bias and can be used to distinguish the dark energy from the modified gravity .we would like to thank xiao - dong li and hang bae kim for useful discussion .this work were carried out using computing resources of kias center for advanced computation .we also thank for the hospitality at apctp during the program trp .one takes the derivative of using their definitions , and to obtain = [ dfsig8dz ] , where we use the sub - horizon scale equation for the growth factor , where dot means the derivative with respect to the cosmic time .thus , one obtains an interesting relation between and , _8(z ) & = & + & = & [ sigma8 ] , where we explicitly express the using the observable quantity in the second equality .thus , if one achieves enough binned data for , then one can measure at each epoch .for example , the present value of is given by _ 8 ^ 0 & = & ( ) ^-1 + & = & [ sig80 ] .the value of derived from the cmb depends on the primordial amplitude , and the spectral index , . however , the right hand side of eq .( [ sig80 ] ) depends only on the background evolution parameters , and .thus , one can the constraint and from the rsd measurement . and are degenerated in galaxy surveys , but one can break this from the above eq .( [ sig80 ] ) . if one adopts the definition of linear bias , then one obtains from the above eq .( [ dfsig8dz ] ) b^-1(z ) = [ binv ] .thus , one obtains the exact analytic solution for given by eq .( [ bkz ] ) .one can generalize as if one substitute with even for sub - horizon scales . for example , if one considers model or the massive neutrino model , then one can obtain inside horizon scales at linear regime .o. lahav _ et al ._ , mon . not .. soc . * 333 * , 961 ( 2002 ) [ arxiv : astro - ph/0112162 ] .l. clerkin , d. kirk , o. lahav , f. b. abdalla , and e. gaztanaga , [ arxiv:1405.5521 ] . j. n. fry , astrophys .j. * 461 * , l65 ( 1996 ) s. matarrese , p. coles , f. lucchin , l. moscardini , mon . not .astron . soc . *286 * , 115 ( 1997 ) [ arxiv : astro - ph/9608004 ] .m. teggmark and p. j. e. peebles , astrophys .j. * 500 * , l79 ( 1998 ) [ arxiv : astro - ph/9804067 ] .j. l. tinker _ et al ._ , astrophys .j. * 724 * , 878 ( 2010 ) [ arxiv:1001.3162 ] .s. m. croom _ et al ._ , mon . not .. soc . * 356 * , 415 ( 2005 ) [ arxiv : astro - ph/0409314 ] .s. basilakos , m. plionis , and a. pouri , phys .d * 83 * , 123525 ( 2011 ) [ arxiv:1106.1183 ] .seo and d. j. eisenstein , astrophys .j. * 598 * , 720 ( 2003 ) [ arxiv : astro - ph/0307460 ] .f. beutler _ et al ._ , mon . not .. soc . * 423 * , 3430 ( 2012 ) [ arxiv:1204.4725 ] s. lee , j. cosmol .astropart .phys.*02 * , 021 ( 2014 ) [ arxiv:1307.6619 ] .v. silveira and i. 
waga , phys . rev .d * 50 * , 4890 ( 1994 ) .s. lee and k .- w .ng , phys . rev .d * 82 * , 043004 ( 2010 ) [ arxiv:0907.2108 ] .s. lee , [ arxiv:1409.1355 ] .w. saunders , m. rowan - robinson , and a. lawrence , mon . not .. soc . * 258 * , 134 ( 1992 ) .planck collaboration ; p. a. r. ade _ et al ._ , astron . astrophys . * 571 * , 39 ( 2014 ) [ arxiv:1309.0382 ] .g. dvali , g. gabadadze , and m. porrati , phys .b * 485 * , 208 ( 2000 ) [ arxiv : hep - th/0005016 ] .s. lee and k .- w .ng , phys . lett .b * 688 * , 1 ( 2010 ) [ arxiv:0906.1643 ] .r. gannouji , b. moraes , and d. polarski j. cosmol .astropart .phys.*02 * , 034 ( 2009 ) [ arxiv:0809.3374 ] .f. simpson and j. a. peacock , phys .d * 81 * , 043512 ( 2010 ) [ arxiv:0910.3834 ] .s. lee , [ in preparation ] .
since kaiser introduced galaxies as a biased tracer of the underlying total mass field , the linear galaxies bias , appears ubiquitously both in theoretical calculations and in observational measurements related to galaxy surveys . however , the generic approaches to the galaxy density is a non - local and stochastic function of the underlying dark matter density and it becomes difficult to make the analytic form of . due to this fact , is known as a nuisance parameter and the effort has been made to measure bias free observable quantities . we provide the exact and analytic function of which also can be measured from galaxy surveys using the redshift space distortions parameters , more accurately unbiased observable . we also introduce approximate solutions for for different gravity theories . one can generalize these approximate solutions to be exact when one solves the exact evolutions for the dark matter density fluctuation of given gravity theories . these analytic solutions for make it advantage instead of nuisance .
in physics , formal simplicity is often a reliable guide to the significance of a result .the concept of weak measurement , due to aharonov and his coworkers , derives some of its appeal from the formal simplicity of its basic formulae .one can extend the basic concept to a sequence of weak measurements carried out at a succession of points during the evolution of a system , but then the formula relating pointer positions to weak values turns out to be not quite so simple , particularly if one allows arbitrary initial conditions for the measuring system .i show here that the complications largely disappear if one takes the cumulants of expected values of pointer positions ; these are related in a formally satisfying way to weak values , and this form is preserved under all measurement conditions .the goal of weak measurement is to obtain information about a quantum system given both an initial state and a final , post - selected state .since weak measurement causes only a small disturbance to the system , the measurement result can reflect both the initial and final states .it can therefore give richer information than a conventional ( strong ) measurement , including in particular the results of all possible strong measurements . to carry out the measurement , a measuring deviceis coupled to the system in such a way that the system is only slightly perturbed ; this can be achieved by having a small coupling constant . after the interaction , the pointer s position is measured ( or possibly some other pointer observable ; e.g. its momentum ) .suppose that , following the standard von neumann paradigm , , the interaction between measuring device and system is taken to be , where is the momentum of a pointer and the delta function indicates an impulsive interaction at time .it can be shown that the expectation of the pointer position , ignoring terms of order or higher , is where is the _ weak value _ of the observable given by as can be seen , ( [ qclassic ] ) has an appealing simplicity , relating the pointer shift directly to the weak value . however, this formula only holds under the rather special assumption that the initial pointer wavefunction is a gaussian , or , more generally , is real and has zero mean .when is a completely general wavefunction , i.e. is allowed to take complex values and have any mean value , equation ( [ qclassic ] ) is replaced by where , for any pointer variable , denotes the initial expected value of ; so for instance and are the means of the initial pointer position and momentum , respectively .( again , this formula ignores terms of order or higher . )equation ( [ complex - version ] ) seems to have lost the simplicity of ( [ qclassic ] ) , but we can rewrite it as where and equation ( [ firstxi ] ) is then closer to the form of ( [ qclassic ] ) . as will become clear , this is part of a general pattern .one can also weakly measure several observables , , in succession .here one couples pointers at several locations and times during the evolution of the system , taking the coupling constant at site to be small .one then measures each pointer , and takes the product of the positions of the pointers . for two observables , and in the special case where the initial pointer distributions are real and have zero mean , e.g. a gaussian , one finds ,\end{aligned}\ ] ] ignoring terms in higher powers of and . 
here is the _ sequential weak value _ defined by where is a unitary taking the system from the initial state to the first weak measurement , describes the evolution between the two measurements , and takes the system to the final state .( note the reverse order of operators in , which reflects the order in which they are applied . )if we drop the assumption about the special initial form of the pointer distribution and allow an arbitrary , then the counterpart of ( [ abmean ] ) becomes extremely complicated : see appendix , equation [ horrible ] . even the comparatively simple formula ( [ abmean ] ) is not quite ideal .by analogy with ( [ qclassic ] ) we would hope for a formula of the form , but there is an extra term . what we seek, therefore , is a relationship that has some of the formal simplicity of ( [ qclassic ] ) and furthermore preserves its form for all measurement conditions .it turns out that this is possible if we take the _ cumulant _ of the expectations of pointer positions . as we shall see in the next section ,this is a certain sum of products of joint expectations of subsets of the , which we denote by . for a set of observables, we can define a formally equivalent expression using sequential weak values , which we denote by .then the claim is that , up to order in the coupling constants ( assumed to be all of the same approximate order of magnitude ) : where is a factor dependent on the initial wavefunctions for each pointer . equation ( [ cumulant - equation ] ) holds for any initial pointer wavefunction , though different wavefunctions produce different values of .the remarkable thing is that all the complexity is packed into this one number , rather than exploding into a multiplicity of terms , as in ( [ horrible ] ) .note also that ( [ firstxi ] ) has essentially the same form as ( [ cumulant - equation ] ) since , in the case , .however , there is an extra term in ( [ firstxi ] ) ; this arises because the cumulant for is anomalous in that its terms do not sum to zero .given a collection of random variables , such as the pointer positions , the cumulant is a polynomial in the expectations of subsets of these variables ; it has the property that it vanishes whenever the set of variables can be divided into two independent subsets .one can say that the cumulant , in a certain sense , picks out the maximal correlation involving all of the variables .we introduce some notation to define the cumulant .let be a subset of the integers .we write for , where is the size of and the indices of the s in the product run over all the integers in .then the cumulant is given by where runs over all partitions of the integers and the coefficient is given by for we have , and for there is an inverse operation for the cumulant : [ anti ] to see that this equation holds , we must show that the term obtained by expanding the right - hand side is zero unless is the partition consisting of the single set . replacing each subset by the integer , this is equivalent to , where the sum is over all partitions of by subsets of sizes and the s are given by ( [ coefficients ] ) . in this sumwe distinguish partitions with distinct integers ; e.g. 
and .there are such distinct partitions with subset sizes , where is the number of s equal to , so our sum may be rewritten as , where the sum is now over partitions in the standard sense .this is times the coefficient of in thus the sum is zero except for , which corresponds to the single - set partition .if can be written as the disjoint union of two subsets and , we say the variables corresponding to these subsets are independent if for any subsets .we now prove the characteristic property of cumulants : [ indep - lemma ] the cumulant vanishes if its arguments can be divided into two independent subsets . for follows at once from ( [ q2 ] ) and ( [ indep ] ) , and we continue by induction . from ( [ anticumulant ] ) and the inductive assumption for , we have this holds because any term on the right - hand side of ( [ anticumulant ] ) vanishes when any subset of the partition includes elements of both and . using ( [ anticumulant ] ) again, this implies and by independence , .thus the inductive assumption holds for .in fact , the coefficients in ( [ cumulant ] ) are uniquely determined to have the form ( [ coefficients ] ) by the requirement that the cumulant vanishes when the variables form two independent subsets . for ,the cumulant ( [ q2 ] ) is just the covariance , , and the same is true for , namely . for , however , there is a surprise .the covariance is given by where the sums include all distinct combinations of indices , but the cumulant is which includes terms like that do not occur in the covariance .note that , if the subsets and are independent , the covariance does not vanish , since independence implies we can write the first term in ( [ 4covariance ] ) as and there is no cancelling term .however , as we have seen , the cumulant does contain such a term , and it is a pleasant exercise to check that the whole cumulant vanishes .to carry out a sequential weak measurement , one starts a system in an initial state , then weakly couples pointers at several times during the evolution of the system , and finally post - selects the system state .one then measures the pointers and finally takes the product of the values obtained from these pointer measurements .it is assumed that one can repeat the whole process many times to obtain the expectation of the product of pointer values .if one measures pointer positions , for instance , one can estimate , but one could also measure the momenta of the pointers to estimate . if the coupling for the pointer is given by , and if the individual initial pointer wavefunctions are gaussian , or , more generally , are real with zero mean , then it turns out that these expectations can be expressed in terms of sequential weak values of order or less . herethe sequential weak value of order , , is defined by where defines the evolution of the system between the measurements of and .when the are projectors , , we can write the sequential weak value as which shows that , in this case , the weak values has a natural interpretation as the amplitude for following the path defined by the .figure [ cumulant ] shows an example taken from where the path ( labelled by 1 and 2 successively ) is a route taken by a photon through a pair of interferometers , starting by injecting the photon at the top left ( with state ) and ending with post - selection by detection at the bottom right ( with final state ) . 
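numerically , a sequential weak value is just a ratio of two matrix elements , which the following sketch computes for finite - dimensional systems . for simplicity the evolution after the last coupling is absorbed into the post - selected state , and the two - step qubit example ( two projectors applied with an intervening hadamard - like evolution ) is a hypothetical illustration rather than the interferometer of figure [ cumulant ] .

```python
import numpy as np

def sequential_weak_value(ops, unitaries, psi_i, phi_f):
    """Sequential weak value  <phi_f| A_n U_n ... A_1 U_1 |psi_i> /
    <phi_f| U_n ... U_1 |psi_i>, with unitaries[k] the evolution up to the
    k-th weakly coupled observable ops[k]."""
    num, den = psi_i.copy(), psi_i.copy()
    for A, U in zip(ops, unitaries):
        num = A @ (U @ num)
        den = U @ den
    return (phi_f.conj() @ num) / (phi_f.conj() @ den)

# hypothetical two-step qubit example: project onto |0>, evolve, project onto |+>
P0 = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)
Pplus = np.outer(plus, plus.conj())
H = np.array([[1.0, 1.0], [1.0, -1.0]], dtype=complex) / np.sqrt(2.0)
psi_i = np.array([1.0, 0.0], dtype=complex)
phi_f = np.array([0.0, 1.0], dtype=complex)
print(sequential_weak_value([P0, Pplus], [np.eye(2, dtype=complex), H], psi_i, phi_f))
```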
in the last section ,the cumulant was defined for expectations of products of variables .one can define the cumulant for other entities by formal analogy ; for instance for density matrices , or hypergraphs .we can do the same for sequential weak values , defining the cumulant by ( [ cumulant ] ) with replaced by , where the arrow indicates that the indices , which run over the subset , are arranged in ascending order from right to left . for example , for , , and for there is a notion of independence that parallels ( [ indep ] ) : given a disjoint partition such that for any subsets , then we say the observables labelled by the two subsets are _weakly independent_. there is then an analogue of lemma [ indep - lemma ] : the cumulant vanishes if the are weakly independent for some subsets , . as an example of this , if one is given a bipartite system , and initial and final states that factorise as and , then observables on the - and -parts of the system are clearly weakly independent .another class of examples comes from what one might describe as a `` bottleneck '' construction , where , at some point the evolution of the system is divided into two parts by a one - dimensional projector ( the bottleneck ) and its complement , and the post - selection excludes the complementary part .then , if all the measurements before the projector belong to and all those after the projector belong to , the two sets are weakly independent .this follows because we can write where is the part of lying in the post - selected subspace . as an illustration of this ,suppose we add a connecting link ( figure [ bottleneckfig ] , `` '' ) between the two interferometers in figure [ cumulantfig ] , so , the bottleneck , is the projection onto , and post - selection discards the part of the wavefunction corresponding to the path . then measurements at ` 1 ' and ` 2 ' are weakly independent ; in fact , and .note that the same measurements are _ not _ independent in the double interferometer of figure [ cumulantfig ] , where , , and yet , surprisingly , , .consider system observables .suppose , for , are observables of the pointer , namely hermitian functions of pointer position and momentum , and the interaction hamiltonian for the weak measurement of system observable is , where is a small coupling constant ( all being assumed of the same order of magnitude ) .suppose further that the pointer observables are measured after the coupling .let be the -th pointer s initial wave - function .for any variable associated to the -th pointer , write for .we are now almost ready to state the main theorem , but first need to clarify the measurement procedure .when we evaluate expectations of products of the for different sets of pointers , for instance when we evaluate , we have a choice .we could either couple the entire set of pointers and then select the data for pointers 1 and 2 to get . or we could carry out an experiment in which we couple just pointers 1 and 2 to give .these procedures give different answers .for instance , if we couple three pointers and measure pointers 1 and 2 to get , in addition to the terms in , and we also get terms in and involving the observable .this means we get a different cumulant , depending on the procedure used . 
in what follows ,we regard each expectation as being evaluated in a separate experiment , with only the relevant pointers coupled .it will be shown elsewhere that , with the alternative definition , the theorem still holds but with a different value of the constant .[ main - theorem ] for , for any pointer observables and , and for any initial pointer wavefunctions , up to total order in the , where ( sometimes written more explicitly as ) is given by for the same result holds , but with the extra term : we use the methods of to calculate the expectations of products of pointer variables for sequential weak measurements .let the initial and final states of the system be and , respectively .consider some subset of , with .the state of the system and the pointers after the coupling of those pointers is and following post - selection by the system state , the state of the pointers is expanding each exponential , we have where are integers , means that for , and let us write ( [ sumratio ] ) as where and denotes the index set , etc .. define then set , where in the product ranges over all distinct subsets of the integers .then is an ( infinite ) weighted sum of terms where denotes the set of all the index sets that occur in .the strategy is to show that , when the size of the index set is less than , the coefficient of vanishes ; by ( [ alpha ] ) this implies that all coefficients of order less than in vanish .we then look at the index sets of size , corresponding to terms of order , and show that the relevant terms sum up to the right - hand side of ( [ main - result ] ) .but if for some x , then we also have , since .let be a partition of .we say that is a _ valid _ partition for if a. for each with , , for some , and we can associate a distinct to each .( here means the index set . ) b. for each with , , for some subset that is not in the partition , i.e. for which for any , and we can associate a distinct to each .let be the number of ways of associating a subset to each . [ vanishing ]the coefficient of in is zero if all the index sets in have a zero at some position .if we expand using ( [ cxy2 ] ) , each term in this expansion is associated with a partition of .let be a valid partition for , and let denote the partition derived from by removing from the subset that contains it , and deleting that subset if it contains only .then the following partitions include and are all valid : each partition , for contributes to the coefficient of in , and since this term has coefficient in ( [ cxy2 ] ) for partitions , and for , the sum of all contributions is zero . from equations ( [ alpha ] ) and ( [ index ] ) , the power of in the term is .this , together with the preceding lemma , implies that the lowest order non - vanishing terms in are s that have a 1 occurring once and once only in each position ; we call these _ complete lowest - degree _ terms .[ one - index - set ] the coefficient of a complete lowest - degree term in is zero unless only one of the four classes of indices in , viz . 
, , or , has non - zero terms .consider first the case where the indices in and are zero , and where both and have some non - zero indices .let be the partition whose subsets consists of the non - zero positions in index sets in , and let be some partition of the remaining integers in .suppose .then we can construct a set of partitions by mixing and ; these have the form where each is either empty or consists of some , and all the subsets are present once only in the partition .if any is eligible , all the other mixtures will also be eligible .furthermore , the set of all eligible partitions can be decomposed into non - overlapping subsets of mixtures obtained in this way .any mixture gives the same value of , which we denote simply by ; so to show that all the contributions to the coefficient of cancel , we have only to sum over all the mixtures , weighting a partition with subsets by .this gives the above argument applies equally well to the situation where and both have some non - zero indices and indices in and are zero .if the non - zero indices are present in and , we can take any eligible partition and divide each subset into two subsets and with the indices from in and those from in .all the mixtures of type ( [ mix ] ) are eligible , and they include the original partition . by the above argument , the coefficients of arising from them sum to zero .other combinations of indices are dealt with similarly .note that , for and for the index sets and , the `` mixture '' argument shows that coefficient of coming from cancels that coming from to give zero .this cancellation occurs with the cumulant ( [ 4cumulant ] ) , but not with the covariance ( [ 4covariance ] ) , where the term is absent .the only terms that need to be considered , therefore , are complete lowest - degree terms with non - zero indices only in one of the sets , , and .it is easy to calculate the coefficients one gets for such terms . consider the case of .we only need to consider the single partition whose subsets are the index sets of . for this partition , by ( [ z ] ) , ( [ x ] ) and ( [ y ] ) , from ( [ cxy2 ] ) , appears in with a coefficient .so , summing over all with indices in , one obtains .similarly , from ( [ alpha ] ) , ( [ u ] ) and ( [ v ] ) , summing over the with indices in gives the complex conjugate of .thus and together give .this corresponds to ( [ main - result ] ) , but with only the first half of as defined by ( [ xi ] ) .the rest of comes from the index sets and .however , the sum of the coefficients of for the same index set in and is zero .this is true because , for any complete lowest degree index set , the sum of coefficients for all with the indices divided in any manner between and is zero , being the number ways of obtaining that index set from times .but by lemma [ one - index - set ] , the coefficient of is zero unless the index set comes wholly from or .now ( [ z ] ) , ( [ x ] ) and ( [ y ] ) tell us that , for an index set in , and from the above argument , this appears appears in with coefficient .again , the index sets in give the complex conjugate of those in .thus we obtain the remaining half of , which proves ( [ main - result ] ) for .for the constant terms ( of order zero in ) in do not vanish , but the proof goes through if we consider instead .consider first the simplest case , where and .we take throughout this section , so . then ( [ main - result1 ] ) and ( [ xi ] ) give which we have already seen as equations ( [ firstxi ] ) and ( [ xi1 ] ) . 
if we measure the pointer momentum , so , we find which is equivalent to the result obtained in . for two variables ,our theorem for , is with the calculations in the appendix allow one to check ( [ qq ] ) and ( [ xiqq ] ) by explicit evaluation ; see ( [ explicit ] ) .note in passing that , if one writes , the cauchy - schwarz inequality implies a heisenberg - type inequality relating the pointer noise distributions of two weak measurements carried out at different times during the evolution of the system .when one or both of the in ( [ qq ] ) is replaced by the pointer momentum , we get with consider now the special case where is real with zero mean. then the very complicated expression for in ( [ horrible ] ) reduces to ,\end{aligned}\ ] ] as shown in .two further examples from are ,\\ \label{4q } \langle q_1q_2q_3q_4 \rangle&=\frac{g_1g_2g_3g_4}{8}\ re \left [ ( a_4,a_3,a_2,a_1)_w+(a_4,a_3,a_2)_w({\bar a}_1)_w+\ldots + ( a_4,a_3)_w(\overline{a_2,a_1})_w+\ldots \right].\end{aligned}\ ] ] we can use these formulae to calculate the cumulant , and thus check theorem [ main - theorem]for this special class of wavefunctions .each formula contains on the right - hand side a leading sequential weak value , but there are also extra terms , such as in ( [ 2q ] ) and in ( [ 3q ] ) .all these extra terms are eliminated when the cumulant is calculated , and we are left with ( [ main - result ] ) with .this gratifying simplification depends on the fact that the cumulant is a sum over all partitions .for instance , it does not occur if one uses the covariance instead of the cumulant . to see this ,look at the case : the term in , the covariance of pointer positions , gives rise via ( [ 4q ] ) to weak value terms like .however , ( [ 4covariance ] ) together with ( [ 2q ] ) , ( [ 3q ] ) and ( [ 4q ] ) show that has no other terms that generate any multiple of , and consequently this weak value expression can not be cancelled and must be present in .this means that there can not be any equation relating and .this negative conclusion does not apply to the cumulant , as this includes terms such as ; see ( [ 4cumulant ] ) .we have treated the interactions between each pointer and the system individually , the hamiltonian for the pointer and system being , but of course we can equivalently describe the interaction between all the pointers and the system by . for sequential measurements we implicitly assume that all the times are distinct . however , the limiting case where there is no evolution between coupling of the pointers and all the s are equal is of interest , and is the _ simultaneous _ weak measurement considered in . in this case, the state of the pointers after post - selection is given by the exponential here differs from the sequential expression in ( [ bigstate ] ) in that each term in the expansion of the latter appears with the operators in a specific order , viz .the arrow order as in ( [ 4weak ] ) , whereas in the expansion of the former the same term is replaced by a symmetrised sum over all orderings of operators .for instance , for arbitrary operators , and , the third degree terms in include , and , whose counterparts in are , respectively , , and .apart from this symmetrisation , the calculations in section [ theorem - section ] can be carried through unchanged for simultaneous measurement . 
thus if we replace the sequential weak value by the _simultaneous weak value _ where the sum on the right - hand side includes all possible orders of applying the operators , we obtain a version of theorem [ main - theorem ] for simultaneous weak measurement : likewise , relations such ( [ 2q ] ) , ( [ 3q ] ) , etc . ,hold with simultaneous weak values in place of the sequential weak values ; indeed , these relations were first proved for simultaneous measurement . from ( [ swv ] )we see that , when the operators all commute , the sequential and simultaneous weak values coincide. one important instance of this arises when the operators are applied to distinct subsystems , as in the case of the simultaneous weak measurements of the electron and positron in hardy s paradox .when the operators do not commute , the meaning of simultaneous weak measurement is not so obvious .one possible physical interpretation follows from the well - known formula and its analogues for more operators .suppose two pointers , one for and one for , are coupled alternately in a sequence of short intervals ( figure [ alternate ] , top diagram ) with coupling strength for each interval .this is an enlarged sense of sequential weak measurement in which the same pointer is used repeatedly , coherently preserving its state between couplings .the state after post - selection is from ( [ formula ] ) we deduce that this picture readily extends to more operators .one can also simulate a simultaneous measurement by averaging the results of a set of sequential measurements with the operators in all orders ; in effect , one carries out a set of experiments that implement the averaging in ( [ swv ] ) .there is then no single act that counts as simultaneous measurement , but weak measurement in any case relies on averaging many repeats of experiments in order to extract the signal from the noise . in a certain sense , therefore , sequential measurement includes and extends the concept of simultaneous measurement .however , if we wish to accomplish simultaneous measurement in a single act , then we need a broader concept of weak measurement where pointers can be re - used ; indeed , we can go further , and consider generalised weak coupling between one time - evolving system and another , followed by measurement of the second system . however , even in this case , the measurement results can be expressed algebraically in terms of the sequential weak values of the first system .lundeen and resch showed that , for a gaussian initial pointer wavefunction , if one defines an operator by then the relationship holds .they argued that can be interpreted physically as a lowering operator , carrying the pointer from its first excited state , in number state notation , to the gaussian state ( despite the fact that the pointer is not actually in a harmonic potential ) .although is not an observable , can be regarded as a prescription for combining expecations of pointer position and momentum to get the weak value . 
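The symmetrisation over operator orderings is easy to make explicit numerically. The sketch below computes a sequential weak value as the ratio of matrix elements with the operators applied in a fixed order, and the corresponding "simultaneous" value by averaging over all orderings; the 1/n! normalisation and the random matrices and states are our illustrative assumptions. For commuting operators the two values coincide, as noted above.

```python
import numpy as np
from itertools import permutations

def sequential_weak_value(ops, psi_i, psi_f):
    """ops = [A_1, ..., A_n], applied in that order (A_1 first, A_n last)."""
    v = psi_i.copy()
    for A in ops:
        v = A @ v
    return np.vdot(psi_f, v) / np.vdot(psi_f, psi_i)

def simultaneous_weak_value(ops, psi_i, psi_f):
    """Average of the sequential weak value over all operator orderings (assumed 1/n! normalisation)."""
    vals = [sequential_weak_value(list(p), psi_i, psi_f) for p in permutations(ops)]
    return sum(vals) / len(vals)

rng = np.random.default_rng(1)
def rand_herm(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

d = 3
A, B = rand_herm(d), rand_herm(d)
psi_i = rng.normal(size=d) + 1j * rng.normal(size=d)
psi_f = rng.normal(size=d) + 1j * rng.normal(size=d)

print(sequential_weak_value([A, B], psi_i, psi_f))     # order-dependent in general
print(simultaneous_weak_value([A, B], psi_i, psi_f))   # symmetrised value
```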
if instead of one takes then the even simpler relationship holds .we refer to as a generalised lowering operator .lundeen and resch also extended their lowering operator concept to simultaneous weak measurement of several observables .rephrased in terms of our generalised lowering operators defined by ( [ gaussian ] ) , their finding can be stated as this is of interest for two reasons .first , the entire simultaneous weak value appears on the right - hand side , not just its real part ; and second , the `` extra terms '' in the simultaneous analogues of ( [ 2q ] ) , ( [ 3q ] ) and ( [ 4q ] ) have disappeared .the lowering operator seems to relate directly to weak values .we can generalise these ideas in two ways .first , we extend them from simultaneous to sequential weak measurements .secondly , instead of assuming the initial pointer wavefunction is a gaussian , we allow it be arbitrary ; we do this by defining a generalised lowering operator for a gaussian , , so the above definition reduces to ( [ gaussian ] ) in this case . in general , however , will not be annihilated by and is therefore not the number state ( this state is a gaussian with complex variance ) .nonetheless , there is an analogue of theorem [ main - theorem ] in which the whole sequential weak value , rather than its real part , appears : [ lowering - theorem ] for where is given by for the same result holds , but with the extra term : put , . then ,\end{aligned}\ ] ] where we used theorem [ main - theorem ] to get the last line , and where is given by ( [ constant ] ) and by ( note the bar over that is absent in the definition of by ( [ constant ] ) ) .we want to prove , and to do this it suffices to prove that the complex conjugate of the numerator is zero , i.e. let , , , .using the definition of in ( [ xi ] ) , the above equation can be written suppose the interaction hamiltonian has the standard von neumann form , so in the definition of by equation ( [ xi ] ) .then for , since and , , so we get the even simpler result this is valid for all initial pointer wavefunctions , and therefore extends lundeen and resch s equation ( [ lr1 ] ) .it seems almost too simple : there is no factor corresponding to in equation ( [ qmean ] ) .however , a dependency on the initial pointer wavefunction is of course built into the definition of through . for is no longer true that , even with the standard interaction hamiltonian .however , if in addition , then thus for all . applying the inverse operation for the cumulant , given by propostion [ anti ] , we deduce : if , e.g. 
if the initial pointer wavefunction is real , then for this is the sequential weak value version of the result for simultaneous measurements , ( [ simul ] ) , but is more general than the gaussian case treated in .we might be tempted to try to repeat the above argument for pointer positions instead of the lowering operators by applying the anti - cumulant to both sides of ( [ main - result ] ) .this fails , however , because of the need to take the real part of the weak values ; in fact , this is one way of seeing where the extra terms come from in ( [ 2q ] ) , ( [ 3q ] ) and ( [ 4q ] ) and their higher analogues .note also that ( [ nice ] ) does not hold for general , since then different subsets of indices may have different values of .the procedure for sequential weak measurement involves coupling pointers at several stages during the evolution of the system , measuring the position ( or some other observable ) of each pointer , and then multiplying the measured values together .in it was argued that we would really like to measure the product of the values of the operators , and that this corresponds to the sequential weak value .multiplication of the values of pointer observables is the best we can do to achieve this goal .however , this brings along extra terms , such as in ( [ 2q ] ) , which are an artefact of this method of extracting information . from this perspective, the cumulant extracts the information we really want . in ,a somewhat idealised measuring device was being considered , where the pointer position distribution is real and has zero mean . when the pointer distribution is allowed to be arbitrary ,the expressions for become wildly complicated ( see for instance ( [ horrible ] ) ) . yetthe cumulant of these terms condenses into the succinct equation ( [ main - result ] ) with all the complexity hidden away in the one number .why does the cumulant have this property ? recall that the cumulant vanishes when its variables belong to two independent sets .the product of the pointer positions will include terms that come from products of disjoint subsets of these pointer positions , and the cumulant of these terms will be sent to zero , by lemma [ indep - lemma ] .for instance , with , the pointers are deflected in proportion to their individual weak values , according to ( [ firstxi ] ) , and the cumulant subtracts this component leaving only the component that arises from the -influence of the weak measurement of on that of .the subtraction of this component corresponds to the subtraction of the term from ( [ 2q ] ) . in general, the cumulant of pointer positions singles out the maximal correlation involving all the , and the theorem tells us that this is directly related to the corresponding `` maximal correlation '' of sequential weak values , , which involves all the operators .in fact , the theorem tells us something stronger : that it does not matter what pointer observable we measure , e.g. position , momentum , or some hermitian combination of them , and that likewise the coupling of the pointer with the system can be via a hamiltonian with any hermitian .different choices of and lead only to a different multiplicative constant in front of in ( [ main - result ] ) .we always extract the same function of sequential weak values , , from the system .this argues both for the fundamental character of sequential weak values and also for the key role played by their cumulants .i am indebted to j. 
berg for many discussions and for comments on drafts of this paper ; i thank him particularly for putting me on the track of cumulants .i also thank a. botero , p. davies , r. jozsa , r. koenig and s. popescu for helpful comments .a preliminary version of this work was presented at a workshop on `` weak values and weak measurement '' at arizona state university in june 2007 , under the aegis of the center for fundamental concepts in science , directed by p. davies .to calculate for arbitrary pointer wavefunctions and , we use ( [ bigstate ] ) to determine the state of the two pointers after the weak interaction , and then evaluate the expectation using ( [ expectation ] ) , keeping only terms up to order .we define
a weak measurement on a system is made by coupling a pointer weakly to the system and then measuring the position of the pointer . if the initial wavefunction for the pointer is real , the mean displacement of the pointer is proportional to the so - called weak value of the observable being measured . this gives an intuitively direct way of understanding weak measurement . however , if the initial pointer wavefunction takes complex values , the relationship between pointer displacement and weak value is not quite so simple , as pointed out recently by r. jozsa . this is even more striking in the case of sequential weak measurements . these are carried out by coupling several pointers at different stages of evolution of the system , and the relationship between the products of the measured pointer positions and the sequential weak values can become extremely complicated for an arbitrary initial pointer wavefunction . surprisingly , all this complication vanishes when one calculates the cumulants of pointer positions . these are directly proportional to the cumulants of sequential weak values . this suggests that cumulants have a fundamental physical significance for weak measurement .
when optical , electromagnetic or acoustic signals are measured , often the measurement apparatus records an intensity , the magnitude of the signal amplitude , while discarding phase information .this is the case for x - ray crystallography , many optical and acoustic systems , and also an intrinsic feature of quantum measurements .phase retrieval is the procedure of determining missing phase information from suitably chosen intensity measurements , possibly with the use of additional signal characteristics .many of these instances of phase retrieval are related to the fourier transform , but it is also of interest to study this problem from an abstract point of view , using the magnitudes of any linear measurements to recover the missing information .next to infinite dimensional signal models , the finite dimensional case has received considerable attention in the past years . in this case , the signals are vectors in a finite dimensional hilbert space and one chooses a frame to obtain for each the magnitudes of the inner products with the frame vectors , . when recovering signals , we allow for a remaining undetermined global phase factor , meaning we identify vectors in the hilbert space that differ by a unimodular factor , , in the real or complex complex case .accordingly , we associate the equivalence class =\mathbb t x = \{\omega x : |\omega|=1 \} ] .the map is well defined , because does not depend on the choice of .the metric on relevant for the accuracy of signal recovery is the quotient metric , which assigns to elements ] with representatives the distance ,[y])=\min_{|\omega|=1}\|x-\omega y\| ] .it is essential for this step that the sample values are bounded away from zero in order to achieve a unique reconstruction .there are two algorithms considered for this , phase propagation , which recovers the phase iteratively using the phase relation between sample points , and the kernel method , which computes a vector in the kernel of a matrix determined by the magnitude measurements .the error bound is first derived for phase propagation and then related to that of the kernel method .both algorithms are known to be polynomial time , either from the explicit description , or from results in numerical analysis .the nature of the main result has also been observed in simulations ; assuming an a priori bound on the magnitude of the noise results in a worst - case recovery error that grows at most inverse proportional to the signal - to - noise ratio . outside of this regime , the erroris not controlled in a linear fashion . 
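Since recovery is only up to a global phase, errors are measured in the quotient metric d([x],[y]) = min over unimodular omega of ||x - omega*y||. Expanding the norm gives the closed form sqrt(||x||^2 + ||y||^2 - 2|<x,y>|), with the minimising omega aligned with the inner product. The helper below is a minimal sketch of that distance; the function name is ours and the closed form is standard rather than quoted from the paper.

```python
import numpy as np

def quotient_distance(x, y):
    """d([x],[y]) = min_{|omega|=1} ||x - omega*y||, attained at omega = <x,y>/|<x,y>|."""
    inner = np.vdot(y, x)
    val = np.linalg.norm(x) ** 2 + np.linalg.norm(y) ** 2 - 2.0 * abs(inner)
    return np.sqrt(max(val, 0.0))
```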
to illustrate this , we include two plots of the typical behavior for the recovery error for .the range of the plots is chosen to show the behavior of the worst - case error in the linear regime and also for errors where this linear behavior breaks down .we tested the algorithm on more than 4.5 million randomly generated polynomials with norm 1 .when errors were graphed for a fixed polynomial , the linear bound for the worst - case error was confirmed , although the observed errors were many orders of magnitude less than the error bound given in this paper .a small number of polynomials we found exhibited a max - min value that is an order of magnitude smaller than that of all the other randomly generated polynomials .we chose the polynomial with the worst max - min value out of the 4.5 million that had been tried , and applied a random walk to its coefficients , with steps of decreasing size that were accepted only if the max - min value decreased .the random walk terminated at a polynomial which provided an error bound that is an order of magnitude worse than any other polynomials that had been tested before .this numerically found , local worst - case polynomial is given by .the accuracy of the coefficients displayed here is sufficient to reproduce the results initially obtained with floating point coefficients of double precision .the errors resulting for this polynomial in the linear and transition regimes are shown in figures [ fig1 ] and [ fig2 ] . as a function of the maximal noise magnitude in the linear regime.,width=345 ] as a function of the maximal noise magnitude beyond the linear regime.,width=345 ]it is instructive to follow the construction of the magnitude measurements and the recovery strategy in the absence of noise . to motivate and prepare the recovery strategy , we compare two recovery methods , phase propagation and the kernel method , a simple form of semidefinite programming , in the absence of noise and under additional non - orthogonality conditions on the input vector . if is a basis for , and such that for all from to , then we call _ full _ with respect to .we recall a well known result concerning recovery of full vectors .let be full with respect to an orthonormal basis .for any from to , we define the measurement vector as the set of measurement vectors is a frame for because it contains a basis . define the magnitude measurement map by .recovery of full vectors with measurements has been shown in , and was proven to be minimal in .we show recovery of full vectors with measurements using the measurement map with two different recovery methods .the first is called phase propagation , the second is a special case of semidefinite programming .the phase propagation method sequentially recovers the components of the vector , similar to the approach outlined in , see also . for any vector ,if is an orthonormal basis with respect to which is full , then the vector may be obtained by induction on the components of , using the values of . without loss of generality , we assume that is the standard basis , drop the subscript from and abbreviate the components of the vector by for each , and similarly for .to initialize , we let so that . 
for the inductive step with , we assume that we have constructed with the given information .we then let inserting the values for the magnitude measurements and by a fact similar to the polarization identity , iterating this , we obtain .recovery by the kernel method minimizes the values of a quadratic form subject to a norm constraint , or equivalently , computes an extremal eigenvector for an operator associated with the quadratic form .the operator we use is , with where each denotes the linear functional which is associated with the basis vector .the operator is indeed determined by the magnitude measurements . in particular , the second term in the series is computed via the polarization - like identity as in the proof of the preceding theorem , for any integer from to . by construction ,the rank of is at most equal to , because the range of is in the span of . in the next theorem ,we show that indeed the kernel of , or equivalently , the kernel of , is one dimensional , consisting of all multiples of . for any vector ,if is a basis with respect to which is full , then the null space of the operator is given by all complex multiples of .as in the preceding proof , we let denote the standard basis .thus , using the measurements provided , we may obtain the quantity .with respect to the basis , let be the left shift operator where we extend the vector with the convention .we also define the multiplication operator by the map , where again by convention we let .similarly , we define the multiplication operator by the map .note that is invertible if and only if for all from to , which is true by assumption . in terms of these operators , the operator is expressed as .then for any and , so any complex multiple of is in the null space of this operator .conversely , assume that is in the null space of .we use an inductive argument to show that for any from to , .the base case is trivial . for the inductive step , note that for any from to , thus , , and because and , we obtain we conclude that for any from to , , so the vector is a complex multiple of . since the frame vectors used for the magnitude measurementscontain an orthonormal basis , determines the norm of .this is sufficient to recover ] is the solution of the problem because the solution to the phase retrieval problem is obtained from the kernel of the linear operator , or equivalently of , we may use methods from numerical linear algebra such as a rank - revealing qr factorization to recover the equivalence class ] . for any polynomial ,the measurements determine ] by the mean value theorem and the concavity of the square root , there exists a between and ( so that ) such that because the null space of contains only multiples of , we know and thus , using concavity gives the estimate let . for any nonzero polynomial , and any , if there exists a on the unit circle such that , then an approximation can be constructed using the dirichlet kernel and the values of , such that if then for some on the unit circle let be the normalized dirichlet kernel of degree , so that for any in the unit circle . then the set of functions is orthonormal with respect to the inner product on the unit circle , and any can be interpolated as .if an error is present on each of the values , and if we let , then for any on the unit circle if , then . 
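The inductive step above can be turned into a short reconstruction routine. The sketch below assumes, for concreteness, that the magnitude measurements consist of |x_k|^2 together with |x_k + x_{k+1}|^2 and |x_k + i x_{k+1}|^2 for consecutive coordinates, one choice consistent with the polarisation-type identity used in the proof (the paper's exact measurement vectors are not reproduced here), and that x is full, i.e. has no zero coordinate. It recovers x up to the undetermined global phase; the kernel method would instead extract the one-dimensional null space of the operator assembled from the same data.

```python
import numpy as np

def measure(x):
    """Assumed noise-free measurements: |x_k|^2, |x_k + x_{k+1}|^2, |x_k + i x_{k+1}|^2."""
    diag = np.abs(x) ** 2
    plus = np.abs(x[:-1] + x[1:]) ** 2
    plusi = np.abs(x[:-1] + 1j * x[1:]) ** 2
    return diag, plus, plusi

def phase_propagation(diag, plus, plusi):
    """Sequentially recover x up to a global phase, assuming every x_k is nonzero."""
    d = len(diag)
    x = np.zeros(d, dtype=complex)
    x[0] = np.sqrt(diag[0])                                   # fixes the free global phase
    for k in range(d - 1):
        re = 0.5 * (plus[k] - diag[k] - diag[k + 1])          # Re(conj(x_k) x_{k+1})
        im = -0.5 * (plusi[k] - diag[k] - diag[k + 1])        # Im(conj(x_k) x_{k+1})
        x[k + 1] = (re + 1j * im) / np.conj(x[k])
    return x

x = np.array([1 + 1j, -2 + 0.5j, 0.3 - 1j, 2j])
xr = phase_propagation(*measure(x))
omega = np.vdot(xr, x) / abs(np.vdot(xr, x))                  # best phase alignment
print(np.linalg.norm(x - omega * xr))                         # ~0 up to rounding error
```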
thus , each of the functions , , and , are in , and using the dirichlet kernel these functions may be interpolated from the values of .the error present in the sample values means that approximating trigonometric polynomials are obtained from this interpolation , with a uniform error that is less than for any point on the unit circle .let , , and be these approximating trigonometric polynomials .we find a that satisfies the hypotheses of the theorem by a simple maximization argument on .the set is an orthonormal basis for , and is full in this basis .then because the values of , , and at the points , as well as an error that depends on , , and , with , correspond to the measurements and we know these values on the entire unit circle , we may apply either of the full vector reconstructions given earlier to obtain an approximation for .if lemma [ lem : full - induct - noise ] is applied to these measurements , and we use the equivalence of norms , and then with , and we obtain a vector of coefficients , such that for all from to , and by remark [ rem : incr ] let .then minkowski s inequality gives terms that form geometric series , to obtain a uniform error bound that only assumes bounds on the norms of the vector and on the magnitude of the noise , we use the max - min principle from section [ sec : max - min ] .this provides us with a universally valid lower bound that applies to the above theorem .[ thm : main - uni ] let , and .for any polynomial with , and any , if and , then an approximation can be reconstructed using the dirichlet kernel and the values of , such that if then for some on the unit circle by lemma [ lem : dist - bnd ] we know that there exists a on the unit circle such that the distance between any element of and any roots of any nonzero truncations of is at least . then by lemma [ lem : mag - bnd ] we know that for all from to , . thus , there exists a on the unit circle such that for all from to and we may use and in the preceding theorem .when we apply the above theorem , we get we remark that any that satisfies the claimed max - min bound does not necessarily satisfy lemma [ lem : dist - bnd ] .this means that the above theorem would benefit immediately from an improved lower bound on the minimum magnitude . as the final step for the main result, we remove the normalization condition on the input vector . since the norm of the vector enters quadratically in each component of , the dependence of the error bound on is not linear .instead , we obtain a bound on the accuracy of the reconstruction which is inverse proportional to the signal - to - noise ratio , assuming that is sufficiently small compared to .let , and .for any nonzero polynomial , and any , if and , then an approximation can be reconstructed using the dirichlet kernel and the values of , such that if then for some on the unit circle
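The interpolation step can also be sketched numerically. The routine below reconstructs a trigonometric polynomial of degree at most n from its values at the 2n+1 equispaced nodes using the normalised Dirichlet kernel; the 1/(2n+1) normalisation and the node choice are the standard ones and are assumed here rather than copied from the paper. The check interpolates |p(e^{i theta})|^2, which is a trigonometric polynomial of the same degree as p.

```python
import numpy as np

def dirichlet_interpolate(samples, t_eval, n):
    """Values at t_eval of the degree-<=n trigonometric polynomial with the given values
    at the nodes theta_j = 2*pi*j/(2n+1), j = 0..2n."""
    m = 2 * n + 1
    nodes = 2 * np.pi * np.arange(m) / m
    k = np.arange(-n, n + 1)
    def D(t):                                     # normalised Dirichlet kernel of degree n
        return np.exp(1j * np.outer(np.atleast_1d(t), k)).sum(axis=1) / m
    return np.array([np.sum(samples * D(tt - nodes)) for tt in np.atleast_1d(t_eval)])

rng = np.random.default_rng(2)
d = 5
c = rng.normal(size=d) + 1j * rng.normal(size=d)              # coefficients of p, degree d-1
p = lambda th: np.polyval(c[::-1], np.exp(1j * th))
n = d - 1
nodes = 2 * np.pi * np.arange(2 * n + 1) / (2 * n + 1)
t = rng.uniform(0, 2 * np.pi, size=4)
err = dirichlet_interpolate(np.abs(p(nodes)) ** 2, t, n) - np.abs(p(t)) ** 2
print(np.max(np.abs(err)))                                    # ~0: exact interpolation
```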
the main objective of this paper is to find algorithms accompanied by explicit error bounds for phase retrieval from noisy magnitudes of frame coefficients when the underlying frame has a low redundancy . we achieve these goals with frames consisting of vectors spanning a -dimensional complex hilbert space . the two algorithms we use , phase propagation or the kernel method , are polynomial time in the dimension . to ensure a successful approximate recovery , we assume that the noise is sufficiently small compared to the squared norm of the vector to be recovered . in this regime , the error bound is inverse proportional to the signal - to - noise ratio . upper and lower bounds on the sample values of trigonometric polynomials are a central technique in our error estimates .
complex networks have become a natural abstraction of the interactions between elements in complex systems .when the type of interaction is essentially identical between any two elements , the theory of complex networks provides with a wide set of tools and diagnostics that turn out to be very useful to gain insight in the system under study . however , there are particular cases where this classical approach may lead to misleading results , e.g. when the entities under study are related with each other using different types of relations in what is being called multilayer interconnected networks .representative examples are multimodal transportation networks where two geographic places may be connected by different transport modes , or social networks where users are connected using several platforms or different categorical layers . here , we focus our study on the transportation congestion problem in multiplex networks , where each node is univocally represented in each layer and so the interconnectivity pattern among layers becomes a one - to - one connection ( i.e. , each node in one layer is connected to the same node in the rest of the layers , thus allowing travelling elements to switch layer at all nodes ) .this representation is an excellent proxy of the structure of multimodal transportation systems in geographic areas .the particular topology of each layer is conveniently represented as a spatial network where nodes correspond to a certain coarse grain of the common geography at all layers .transportation dynamics on networks can be , in general , interpreted as the flow of elements from an origin node to a destination node .when the network is facing a number of simultaneous transportation processes , we find that many elements travel through the same node or link .this , in combination with the possible physical constraints of the nodes and links , can lead to network congestion , in which the amount of elements in transit on the network grows proportional to time .usually , to analyze the phenomenon , a discrete abstraction of the transportation dynamics in networks is used .multimodal transportation can also be mathematically abstracted as transportation dynamics on top of a multiplex structure .note that routings on the multilayer transportation system are substantially different with respect to routings on single layer transportation networks . in the multilayer case, each location of the system ( e.g. geographical location ) has different replicas that represent each entry point to the system using the different transportation media .thus , each element with the intention of traveling between locations and have the option to choose between the most appropriate media to start and end its traversal .we assume that elements traverse the network using shortest paths , so each element chooses the starting / ending media that minimizes the distance between the starting / ending locations . 
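To connect the routing model just described to a quantity one can compute, the sketch below builds the supra-graph of a node-aligned multiplex (one replica of every location per layer, with the replicas of the same location joined across layers), and estimates the onset of congestion from the largest shortest-path betweenness using the single-layer-style criterion rho_c of order tau*(N-1)/B_max. Both the zero-cost layer switching and this criterion are simplifying assumptions of the sketch, not the balance equations or onset expressions derived below, and the use of networkx and a random geometric toy multiplex is purely illustrative.

```python
import numpy as np
import networkx as nx

def congestion_onset(layer_edges, n_nodes, tau=1.0, switch_cost=0.0):
    """Rough congestion-onset estimate for a node-aligned multiplex (assumed criterion)."""
    G = nx.Graph()
    for a, edges in enumerate(layer_edges):            # intra-layer links
        for u, v in edges:
            G.add_edge((u, a), (v, a), weight=1.0)
    n_layers = len(layer_edges)
    for u in range(n_nodes):                           # inter-layer links between replicas
        for a in range(n_layers):
            for b in range(a + 1, n_layers):
                G.add_edge((u, a), (u, b), weight=switch_cost)
    bc = nx.betweenness_centrality(G, weight="weight", normalized=False)
    node_bc = np.zeros(n_nodes)                        # aggregate the replicas of each location
    for (u, a), val in bc.items():
        node_bc[u] += val
    return tau * (n_nodes - 1) / node_bc.max()

# Two small random geometric layers on the same node set (illustrative only).
rng = np.random.default_rng(3)
pts = rng.uniform(size=(50, 2))
def layer(radius):
    return [(i, j) for i in range(50) for j in range(i + 1, 50)
            if np.linalg.norm(pts[i] - pts[j]) < radius]
print(congestion_onset([layer(0.2), layer(0.35)], 50))
```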
as we will show in this work ,this `` selfish '' behavior provokes an unbalance in the load of the transportation layers inducing congestion , similarly to what is presented in the classical counterintuitive result of the braess paradox .note that in a multiplex network we can have two types of shortest paths : paths that only use a single layer ( intra - layer paths ) and paths that use more than one layer ( inter - layer paths ) .hereafter , we develop the analysis of transportation in multiplex networks , consisting of locations ( nodes per layer ) and layers , and quantify when this structure will induce congestion . to this aim , we describe , with a set of discrete time balance equations , ( one for each node at each layer ) , the increment of elements , , in the queue of each node on layer : where is the average number of elements injected at node in layer ( also called the injection rate , which can be assimilated to an external particle reservoir ) , is the average number of elements that arrive to node in layer from the adjacent links of that node ( ingoing rate ) , and ] ; these are our node locations .we then generate the first layer by adding edges between all locations and separated by a distance lower than a certain radius ] force minimum overlapping between both layers .the value of with ] does not exceed the radius of first layer.,scaledwidth=35.0% ]this work has been supported by ministerio de economa y competitividad ( grant fis2012 - 38266 ) and european comission fet - proactive projects plexmath ( grant 317614 ) .a.a . also acknowledges partial financial support from the icrea academia and the james s. mcdonnell foundation .26ifxundefined [ 1 ] ifx#1 ifnum [ 1 ] # 1firstoftwo secondoftwo ifx [ 1 ] # 1firstoftwo secondoftwo `` `` # 1'''' [ 0]secondoftwosanitize [ 0 ] + 12$12 & 12#1212_12%12[1][0] _( , , ) * * , ( ) ( ) * * , ( ) , * * , ( ) * * , ( ) * * , ( ) in link:\doibase 10.1109/asonam.2011.114 [ _ _ ] ( , ) pp . * * , ( ) * * ( ) * * , ( ) * * , ( ) * * , ( ) * * ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * ( ) * * , ( ) http://dblp.uni-trier.de/db/journals/transci/transci39.html#braessnw05 [ * * , ( ) ] in link:\doibase 10.1145/2615569.2615687 [ _ _ ] ( ) pp . ( ) * * ( )
multiplex networks are representations of multilayer interconnected complex networks where the nodes are the same at every layer . they turn out to be good abstractions of the intricate connectivity of multimodal transportation networks , among other types of complex systems . one of the most important critical phenomena arising in such networks is the emergence of congestion in transportation flows . here we prove analytically that the structure of multiplex networks can induce congestion for flows that otherwise will be decongested if the individual layers were not interconnected . we provide explicit equations for the onset of congestion and approximations that allow to compute this onset from individual descriptors of the individual layers . the observed cooperative phenomenon reminds the braess paradox in which adding extra capacity to a network when the moving entities selfishly choose their route can in some cases reduce overall performance . similarly , in the multiplex structure , the efficiency in transportation can unbalance the transportation loads resulting in unexpected congestion .
questions such as whether the universe will expand forever or eventually re - collapse and end with a big crunch , and what its shape and size may be , are among the most fundamental challenges in cosmology . regarding the former question ,it is well known that the ultimate fate of the universe is intrinsically associated with the nature of its dominant components . in the friedmann - lematre - robertson - walker ( flrw ) class of models , for instance , a universe that is dominated by a pressureless fluid ( as , e.g. , baryons and/or dark matter ) or any kind of fluid with positive pressure ( as radiation , for example ) will expand forever if its spatial geometry is euclidean or hyperbolic , or will eventually re - collapse if it is spherical .this predictable destiny for the universe , however , may be completely modified if it is currently dominated by some sort of negative - pressure dark component , as indicated by a number of independent observational results ( see , e.g. , ref . ) . in this case ,not only the dynamic but also the thermodynamic fate of the universe may be completely different , with the possibility of an eternally expanding closed model , an increasingly hot expanding universe or even a progressive rip - off of the large and small scale structure of matter ending with the occurrence of a curvature singularity , the so - called big smash .the remaining questions , concerning the shape and size of our world , go in turn beyond the scope of general relativity ( gr ) , since they have an intrinsically topological nature . in this way , approaches or answers to these questionsare ultimately associated with measurements of the _ global _ structure ( topology ) of the universe and , as a _metric theory , gr can not say much about it , leaving the global topology of the universe undetermined . over the past few years , several aspects of the cosmic topology have become topical ( see , e.g. , the review articles ref . ) , given the wealth of increasingly accurate cosmological observations , especially the recent results from the wilkinson microwave anisotropy probe ( wmap ) experiment , which have heightened the interest in the possibility of a universe with a nontrivial spatial topology . a pertinent question the reader may ask at this point is whether the current values of cosmological density parameters , which help us to answer the above first question ( associated with the ultimate fate of the universe ) , can be constrained by a possible detection of the spatial topology of the universe .our primary objective here is to address this question by focusing our attention on possible topological constraints on the density parameters associated with the baryonic / dark matter ( ) and dark energy ( ) .motivated by the best fit value for the total energy density ( level ) reported by wmap team , which includes a positively curved universe as a realistic possibility , we shall consider globally homogeneous spherical manifolds , some of which account for the suppression of power at large scales observed by wmap , and also fits the wmap temperature two - point correlation function . 
to this end , in the next section we present our basic context and prerequisites , while in the last section we discuss our main results and present some concluding remarks .within the framework of standard cosmology , the universe is described by a space - time manifold with a locally homogeneous and isotropic robertson walker ( rw ) metric \;,\ ] ] where , , or depends on the sign of the constant spatial curvature ( , respectively ) .the is usually taken to be one of the following simply - connected spaces : euclidean , spherical , or hyperbolic . however , given that the simple - connectedness of our space has not been established , our may equally well be any one of the possible quotient manifolds , where is a fixed point - free group of isometries of the covering space .thus , for example , in a universe whose geometry of the spatial section is euclidean ( ) , besides there are 6 classes of topologically distinct compact orientable that admits this geometry , while for universes with either spherical ( ) and hyperbolic ( ) spatial geometries there is an infinite number of topologically non - homeomorphic ( inequivalent ) manifolds with nontrivial topology that can be endowed with these geometries .quotient manifolds are compact in three independent directions , or compact in two or at least one independent direction . in compact manifolds , any two given points may be joined by more than one geodesic .since the radiation emitted by cosmic sources follows geodesics , the immediate observational consequence of a nontrivial detectable spatial topology of is that the sky may show multiple images of radiating sources : cosmic objects or specific correlated spots of the cosmic microwave background radiation ( cmbr ) . at very large scales ,the existence of these multiple images ( or pattern repetitions ) is a physical effect that can be used to probe the -space topology . in this work ,we use the so - called circles - in - the - sky " method ( for cosmic crystallographic methods see , e.g. , refs . ) , which relies on multiple copies of correlated circles in the cmbr maps , whose existence is clear from the following reasoning : in a space with a detectable nontrivial topology , the last scattering sphere ( lss ) intersects some of its topological images along pairs of circles of equal radii , centered at different points on the lss , with the same distribution of temperature fluctuations , .since the mapping from the lss to the night sky sphere preserves circles , these pairs of matching circles will be inprinted on the cmbr temperature fluctuations sky maps regardless of the background geometry and detectable topology . as a consequence , to observationally probe a nontrivial topology on the available largest scale , one should scrutinize the full - sky cmb maps in order to extract the correlated circles , whose angular radii and relative position of their centers can be used to determine the topology of the universe .thus , a nontrivial topology of the space section of the universe may be observed , and can be probed through the circles - in - the - sky for all locally homogeneous and isotropic universes with no assumption on the cosmological density parameters .let us now state our basic cosmological assumptions and fix some notation .in addition to the rw metric ( [ rwmetric ] ) , we assume that the current matter content of the universe is well approximated by cold dark matter ( cdm ) of density plus a cosmological constant . 
in this standard context , for nonflat spacesthe scale factor can be identified with the curvature radius of the spatial section of the universe at time , which is given by where here and in what follows the subscript denotes evaluation at present time , is the hubble constant , and is the total density at . in this way ,for nonflat spaces the distance of any point with coordinates to the origin ( in the covering space ) _ in units of the curvature radius _ , , reduces to where is an integration variable , and . throughout this paperwe shall measure the lengths in unit of curvature radius .a typical characteristic length of nonflat manifolds , which we shall use in this paper , is the so - called injectivity radius , which is defined as the radius of the smallest sphere ` inscribable ' in .an important mathematical result is that , expressed in terms of the curvature radius , is a constant ( topological invariant ) for any given spherical and hyperbolic manifolds . in this workwe shall focus our attention in globally homogeneous spherical manifolds , as presented in table [ singleaction ] ( see also its caption for more details ) .these manifolds satisfy a topological principle of homogeneity , in the sense that all points in are topologically equivalent ..the globally homogeneous spherical manifolds are of the form .the first column gives the name we use for the manifolds .the second column displays the covering groups .finally , the remaining columns present the order of the group and the injectivity radius .the cyclic and binary dihedral cases actually constitute families of manifolds , whose members are given by the different values of the integers and .the order of gives the number of fundamental polyhedra needed to fulfill the whole covering space .thus , for example , for the manifold which is the the well - known poincar dodecahedral space , the fundamental polyhedron is a regular spherical dodecahedron , of which tile the into identical cells that are copies of the fp . [ cols="^,^,^,^",options="header " , ]to investigate the extent to which a possible detection of a nontrival topology may place constraints on the cosmological density parameters , we consider here the globally homogeneous spherical manifolds . in these number of pairs of matching circles depends on the ratio of the injectivity radius to the radius of lss , which in turn depends on the density parameters ( see ref . for examples of specific estimates of this number regarding , and ) . nevertheless , if the topology of a globally homogeneous spherical manifold is detectable the correlated pairs will be antipodal , i.e. the centers of correlated circles are separated by , as shown in figure [ cinthesky1 ] .clearly the distance between the centers of each pair of the _ first _ correlated circles is twice the injectivity radius .now , a straightforward use of known trigonometric rules to the right - angled spherical triangle shown in figure [ cinthesky1 ] yields a relation between the angular radius and the angular sides and radius of the last scattering sphere , namely where is a topological invariant , whose values are given in table [ singleaction ] , and the distance of the last scattering surface to the origin in units of the curvature radius is given by ( [ redshift - dist ] ) with . 0.1 in equations ( [ cosalpha ] ) along with ( [ redshift - dist ] )give the relations between the angular radius and the cosmological density parameters and , and thus can be used to set bounds on these parameters . 
to quantify thiswe proceed in the following way .firstly , as an example , we assume the angular radius . secondly , since the measurements of the radius unavoidably involve observational uncertainties , in order to obtain very conservative results we take . and its uncertainty . ] in order to study the effect of the cosmic topology on the density parameters and , we consider the binary tetrahedral and the binary octahedral spatial topologies ( see table [ singleaction ] ) , to reanalyze with these two topological priors the constraints on these parameters that arise from the so - called _ gold _ sample of 157 sne ia , as compiled by riess _et al . _ , along with the latest chandra measurements of the x - ray gas mass fraction in 26 x - ray luminous , dynamically relaxed galaxy clusters ( spanning the redshift range ) as provided by allen _ et al ._ ( see also for details on sne ia and x - ray statistics ) .the and spatial topology is added to the conventional sne ia plus clusters data analysis as a gaussian prior on the value of , which can be easily obtained from an elementary combination of ( [ cosalpha ] ) and ( [ redshift - dist ] ) . in other words ,the contribution of the topology to is a term of the form .-0.2 cm -0.2 cm figures 2b and 2c ( central and right panels ) show the results of our statistical analysis .confidence regions 68.3% and 95.4% confidence limits ( c.l . ) in the parametric space are displayed for the above described combination of observational data . for the sake of comparison, we also show in fig .2a the plane for the conventional sne ia plus galaxy clusters analysis , i.e. , the one without the above cosmic topology assumption . by comparing both analyses ,it is clear that a nontrivial space topology reduces considerably the parametric space region allowed by the current observational data , and also breaks some degeneracies arising from the current sne ia and x - ray gas mass fraction measurements . at 95.4% c.l .our sne ia+x - ray+topology analysis provides and ( binary octahedral ) and and ( binary tetrahedral ) . concerning the above analysis it is worth emphasizing three important aspects .first , that the best - fit values depend weakly on the value used for radius of the circle .second , the uncertainty alters predominantly the area corresponding to the confidence regions , without having a significant effect on the best - fit values .third , we also notice that there is a topological degeneracy in that the same best fits and confidence regions for , e.g. , the topology , would equally arise from either or spatial topology .similarly , , and give rise to identical bounds on the density parameters .this kind of topological degeneracy passed unnoticed in refs . .finally , we emphasize that given the wealth of increasingly accurate cosmological observations , especially the recent results from the wmap , and the development of methods and strategies in the search for cosmic topology , it is reasonable to expect that we should be able to detect it . besides it importance as a major scientific achievement , we have shown through concrete examples that the knowledge of the spatial topology allows to place constraints on the density parameters associated to dark matter ( ) and dark energy ( ) .we thank cnpq for the grants under which this work was carried out .we also thank a.f.f .teixeira for the reading of the manuscript and indication of relevant misprints and omissions .v. sahni and a. starobinsky , int . j. modd * 9 * , 373 ( 2000 ) ; j.e .peebles and b. 
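To make the dependence of the circle radius on the density parameters explicit, the following sketch evaluates the distance to last scattering in curvature-radius units, chi_lss = sqrt(|Omega_k0|) * int_0^{z_lss} dz/E(z) with E(z) = sqrt(Omega_m(1+z)^3 + Omega_k(1+z)^2 + Omega_Lambda) and Omega_k = 1 - Omega_m - Omega_Lambda, and then the angular radius from cos(alpha) = tan(r_inj)/tan(chi_lss). The latter relation is our reading of the spherical-triangle identity used above (it follows from the spherical law of cosines for antipodal matched circles); radiation is neglected, z_lss = 1089 and the example density values are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def chi_lss(om_m, om_L, z_lss=1089.0):
    """Distance to last scattering in units of the curvature radius (radiation neglected)."""
    om_k = 1.0 - om_m - om_L
    E = lambda z: np.sqrt(om_m * (1 + z) ** 3 + om_k * (1 + z) ** 2 + om_L)
    integral, _ = quad(lambda z: 1.0 / E(z), 0.0, z_lss)
    return np.sqrt(abs(om_k)) * integral

def circle_radius_deg(om_m, om_L, r_inj):
    """Angular radius of antipodal matched circles, assuming cos(alpha) = tan(r_inj)/tan(chi_lss)."""
    c = np.tan(r_inj) / np.tan(chi_lss(om_m, om_L))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Poincare dodecahedral space (r_inj = pi/10) in a slightly closed model:
print(circle_radius_deg(0.28, 0.74, np.pi / 10))
```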
ratra , rev . mod .phys . * 75 * , 559 ( 2003 ) ; t. padmanabhan , phys . rep . *380 * , 235 ( 2003 ) ; j.a.s .lima , braz .j. phys .* 34 * , 194 ( 2004 ) .m. lachize - rey and j .- p .luminet , phys . rep . * 254 * , 135 ( 1995 ) ; g.d .starkman , class .quantum grav . * 15 * , 2529 ( 1998 ) ; j. levin , phys . rep . * 365 * , 251 ( 2002 ) ; m.j .rebouas and g.i .gomero , braz . j. phys .* 34 * , 1358 ( 2004 ) .astro - ph/0402324 ; m.j .rebouas , a brief introduction to cosmic topology , in _ proc .xith brazilian school of cosmology and gravitation _ , eds . m. novello and s. e. perez bergliaffa ( americal institute of physics , melville , new york , 2005 ) aip conference proceedings vol .* 782 * , p 188 ( 2005 ) .e. komatsu et al . ,* 148 * , 119 ( 2003 ) ; h.k .eriksen , f.k .hansen , a.j .banday , k.m .gorski , and p.b .lilje , astrophys .j. * 605 * , 14 ( 2004 ) ; c.j .copi , d. huterer , and g.d .starkman , phys .d * 70 * , 043515 ( 2004 ) .spergel et al . ,astrophys .j.suppl . * 148 * , 175 ( 2003 ) .m. tegmark , a. de oliveira - costa , and a.j.s .hamilton , phys .d * 68 * , 123523 ( 2003 ) ; a. de oliveira - costa , m. tegmark , m. zaldarriaga , and a. hamilton , phys .d * 69 * , 063516 ( 2004 ) ; j.r .weeks , astro - ph/0412231 ; p. bielewicz , h.k .eriksen , a.j .banday , and k.m .gorski , and p.b .lilje , astro - ph/0507186 ; k. land and j. magueijo , phys .* 95 * , 071301 ( 2005 ) .a. bernui , b. mota , m.j .rebouas , and r. tavakol , astro - ph/0511666 ; k. land and j. magueijo , mon . not .astron .357 * , 994 ( 2005 ) .luminet , j. weeks , a. riazuelo , r. lehoucq and j .- p .uzan , nature * 425 * , 593 ( 2003 ) ; n.j .cornish , d.n .spergel , g.d .starkman , and e. komatsu , phys .* 92 * , 201302 ( 2004 ) ; j. gundermann , astro - ph/0503014 ; b.f .roukema , b. lew , m. cechowska , a. marecki , and s. bajtlik , astron .astrophys . * 423 * , 821 ( 2004 ) .r. lehoucq , m. lachize - rey , and j .-p luminet , astron . astrophys . *313 * , 339 ( 1996 ) ; b.f .roukema and a. edge , _ mon . not .soc . _ * 292 * , 105 ( 1997 ) ; b.f .roukema , class .quantum grav . * 15 * , 2645 ( 1998 ) ; r. lehoucq , j .-p luminet , and j .-uzan , astron . astrophys . * 344 * , 735 ( 1999 ) ; h.v .fagundes and e. gausmann , phys .a * 238 * , 235 ( 1998 ) ; h.v .fagundes and e. gausmann , phys .a * 261 * , 235 ( 1999 ) ; j .-uzan , r. lehoucq and j .- p .luminet , astron . astrophys . * 351 * , 766 ( 1999 ) ; g.i .gomero , m.j .rebouas , and a.f.f .teixeira , int .d * 9 * , 687 ( 2000 ) ; r. lehoucq , j .-uzan , and j .-p luminet , astron . astrophys . * 363 * , 1 ( 2000 ) ; g.i .gomero , m.j .rebouas , and a.f.f .teixeira , phys .a * 275 * , 355 ( 2000 ) ; g.i .gomero , m.j .rebouas , and a.f.f .teixeira , class .quantum grav .* 18 * , 1885 ( 2001 ) ; g.i .gomero , a.f.f .teixeira , m.j .rebouas and a. bernui , intd * 11 * , 869 ( 2002 ) ; a. marecki , b. roukema , and s. bajtlik , astron . astrophys . * 435 * , 427 ( 2005 ) .gomero , m.j .rebouas and r. tavakol , class .quantum grav .* 18 * , 4461 ( 2001 ) ; g.i .gomero , m.j .rebouas , and r. tavakol , int .j. mod .a * 17 * , 4261 ( 2002 ) ; j.r .weeks , r. lehoucq , and j .-uzan , class .quantum grav .* 20 * , 1529 ( 2003 ) ; j.r .weeks , mod .phys . lett .a * 18 * , 2099 ( 2003 ) ; g.i .gomero and m.j .rebouas , phys .lett . a * 311 * , 319 ( 2003 ) ; b. mota , m.j .rebouas , and r. tavakol , class .quantum grav .* 20 * , 4837 ( 2003 ) ; b. mota , g.i .gomero , m. j. rebouas and r. tavakol , class .quantum grav .* 21 * , 3361 ( 2004 ) . 
s.w .allen , r.w .schmidt , h. ebeling , a.c .fabian , and l. van speybroeck , mon . not .soc . * 353 * , 457 ( 2004 ) .j.a.s . lima , j.v .cunha and j.s .alcaniz , phys .d * 68 * , 023510 ( 2003 ) ; j.s .alcaniz and z .- h .zhu , phys .d * 71 * , 083513 ( 2005 ) ; d. rapetti , s.w .allen and j. weller , mon . not .. soc . * 360 * , 546 ( 2005 ) .
given the wealth of increasingly accurate cosmological observations , especially the recent results from the wmap , and the development of methods and strategies in the search for cosmic topology , it is reasonable to expect that we should be able to detect the spatial topology of the universe in the near future . motivated by this , we examine to what extent a possible detection of a nontrivial topology of positively curved universe may be used to place constraints on the matter content of the universe . we show through concrete examples that the knowledge of the spatial topology allows to place constraints on the density parameters associated to dark matter ( ) and dark energy ( ) .
in recent years , there have been a considerable number of important developments in the extension of ( classical ) information - theoretic concepts to a quantum - mechanical setting .bennett and shor have surveyed this progress in the outstanding commemorative issue 19481998 of the _ ieee transactions on information theory_. in particular , they pointed out in strict analogy to the classical case , successfully studied some fifty years ago by shannon in famous landmark work that quantum data compression allows signals from a redundant quantum source to be compressed into a bulk approaching the source s ( quantum ) entropy .bennett and shor did not , however , discuss the intriguing case which arises when the specific nature of the quantum source is _unknown_. this , of course , corresponds to the classical question of _ universal _ coding or data compression ( see , ( * ? ? ?ii.e ) ) . we do address this interesting issue here , by investigating whether or not it is possible to extend to the quantum domain , recent ( classical ) seminal results of clarke and barron .they , in fact , derived various forms of asymptotic redundancy of universal data compression for parameterized families of probability distributions .their analyses provide a rigorous basis for the reference prior method in bayesian statistical analysis . for an extensive commentary on the results of clarke and barron ,see .also see , for some recent related research , as well as a discussion of various rationales that have been employed for using the ( classical ) jeffreys prior a possible quantum counterpart of which will be of interest here for bayesian purposes , cf .let us also bring to the attention of the reader that in a brief review of , the noted statistician , i. j. good , commented that clarke and barron `` have presumably overlooked the reviewer s work '' and cited , in this regard .let us briefly recall the basic setup and the results of clarke and barron that are relevant to the analyses of our paper .clarke and barron work in a noninformative bayesian framework , in which we are given a parametric family of probability densities on a space .these probability densities generate independent identically distributed random variables , which , for a fixed , we consider as producing strings of length according to the probability density of the -fold product of probability distributions .now suppose that nature picks a from , that is a joint density on the product space , the space of strings of length .on the other hand , a statistician chooses a distribution on as his best guess of .of course , there is a loss of information , which is measured by the total relative entropy , where is the _ kullback leibler divergence _ of and ( the _ relative entropy _ of with respect to ) . for finite , and for a given _ prior_ on , by a result of aitchison , the best strategy to minimize the average risk is to choose for the mixture density .this is called a _ bayes procedure _ or a _ bayes strategy_.the quantities corresponding to such a procedure that must be investigated are the _ risk _ ( _ redundancy _ ) _ of the bayes strategy _ and the _ bayes risk _ , the average of risks , . the bayes risk equals shannon s mutual information ( see ) . moreover , the bayes risk is bounded above by the _ minimax redundancy _ .in fact , by a result of gallager and davisson and leon garcia ( see for a generalization ) , for each fixed there is a prior which realizes this upper bound , i.e. 
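As a concrete instance of the quantities just defined, the sketch below evaluates the exact redundancy D(P_theta^n || M_n) of the Bayes mixture for a Bernoulli family under the Jeffreys prior Beta(1/2, 1/2), and compares it with the asymptotic value (d/2) log(n/(2*pi*e)) + (1/2) log det I(theta) - log w(theta), which for this prior collapses to (1/2) log(n/(2*pi*e)) + log(pi). That asymptotic expression is our reconstruction of the Clarke-Barron formula discussed below, and natural logarithms are used throughout, as in the paper; the Bernoulli example itself is an illustrative assumption.

```python
import numpy as np
from scipy.special import betaln
from scipy.stats import binom

def jeffreys_mixture_redundancy(theta, n):
    """Exact D(P_theta^n || M_n) for a Bernoulli(theta) source under the Jeffreys prior Beta(1/2, 1/2)."""
    k = np.arange(n + 1)
    log_p = k * np.log(theta) + (n - k) * np.log1p(-theta)       # log P_theta of a string with k ones
    log_m = betaln(k + 0.5, n - k + 0.5) - betaln(0.5, 0.5)      # log mixture probability of that string
    return np.sum(binom.pmf(k, n, theta) * (log_p - log_m))

n, theta = 10_000, 0.3
print(jeffreys_mixture_redundancy(theta, n))
print(0.5 * np.log(n / (2 * np.pi * np.e)) + np.log(np.pi))     # assumed asymptotic value
```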
, the _ maximin redundancy _ and the minimax redundancy are the same .such a prior is called _ capacity achieving _ or _least favorable_. clarke and barron investigate the above - mentioned quantities _ asymptotically _ , that is , for tending to infinity .first of all , in ( * ? ? ?* ( 1.4 ) ) , ( * ? ? ?* ( 2.1b ) ) , they show that the redundancy of the bayes strategy is asymptotically as tends to infinity .here , is the fisher information matrix the negative of the expected value of the hessian of the logarithm of the density function .( although the binary logarithm is usually used in the quantum coding literature , we employ the natural logarithm throughout this paper , chiefly to facilitate comparisons of our results with those of clarke and barron . ) for priors supported on a compact subset in the interior of the domain of parameters , the asymptotic minimax redundancy was shown to be ( * ? ? ?* ( 2.4 ) ) , , moreover ( * ? ? ?* ( 2.6 ) ) , it is _jeffreys prior _ ( with a normalizing constant ; see also ) which is the unique continuous and positive prior on which is asymptotically least favorable , i.e. , for which the asymptotic maximin redundancy achieves the value ( [ eq:3 ] ) .in particular , asymptotically the maximin and minimax redundancies are the same .in obvious contrast to classical information theory , quantum information theory directly relies upon the fundamental principles of quantum mechanics .this is due to the fact that the basic unit of quantum computing , the quantum bit " or `` qubit , '' is typically a ( two - state ) microscopic system , possibly an atom or nuclear spin or polarized photon , the behavior of which ( e.g. entanglement , interference , superposition , stochasticity , ) can only be accurately explained using the rules of quantum theory .we refer the reader to for a comprehensive introduction to these matters ( including the subjects of quantum error - correcting codes and quantum cryptography ) . here, we shall restrict ourselves to describing , in mathematical terms , the basic notions of quantum information theory , how they pertain to data compression , and in what manner they parallel the corresponding notions from classical information theory . in quantum information theory ,the role of probability densities is played by _ density matrices _ , which are , by definition , nonnegative definite hermitian matrices of unit trace , and which can be considered as operators acting on a ( finite - dimensional ) hilbert space .any probability density on a ( finite ) set , where the probability of equals , is representable in this framework by a diagonal matrix ( which is quite clearly itself , a nonnegative definite hermitian matrix with unit trace ) . given two density matrices and , the quantum counterpart of the relative entropy , that is , the _ relative entropy _ of with respect to , is ( cf . ) , where the logarithm of a matrix is defined as , with the appropriate identity matrix .( alternatively , if acts diagonally on a basis of the hilbert space by , then acts by , . )clearly , if and are diagonal matrices , corresponding to classical probability densities , then ( [ eq:5 ] ) reduces to the usual kullback leibler divergence . 
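since the remainder of the paper repeatedly evaluates this quantity for matrices that do not commute , a small numerical sketch may be useful . it is ours rather than the source's : it simply implements the definition just given ( with natural logarithms , as used throughout the paper ) and verifies the stated reduction to the kullback leibler divergence for diagonal density matrices .

```python
# A minimal sketch (not from the source) of the quantum relative entropy
# S(rho || sigma) = Tr[ rho (ln rho - ln sigma) ], in nats, plus a check that
# it reduces to the classical Kullback-Leibler divergence for diagonal matrices.
import numpy as np
from scipy.linalg import logm

def relative_entropy(rho, sigma):
    """Quantum relative entropy Tr[rho (ln rho - ln sigma)]."""
    return np.real(np.trace(rho @ (logm(rho) - logm(sigma))))

# classical probability densities embedded as diagonal density matrices
p = np.array([0.7, 0.3])
q = np.array([0.4, 0.6])
kl = np.sum(p * np.log(p / q))                              # classical KL divergence
print(relative_entropy(np.diag(p), np.diag(q)), kl)         # the two values agree

# a genuinely noncommuting example: two qubit (2x2) density matrices
rho = 0.5 * (np.eye(2) + 0.5 * np.array([[0, 1], [1, 0]]))       # x-polarized, r = 0.5
sigma = 0.5 * (np.eye(2) + 0.8 * np.array([[1, 0], [0, -1]]))    # z-polarized, r = 0.8
print(relative_entropy(rho, sigma))    # nonnegative; zero only if rho == sigma
```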
as we said earlier ,our goal is to examine the possibility of extending the results of clarke and barron to quantum theory .that is , first of all we have to replace the ( classical ) probability densities by density matrices .we are not able to proceed in complete generality , but rather we will restrict ourselves to considering the first nontrivial case , that is , we will replace by density matrices .such matrices can be written in the form , where , in order to guarantee nonnegative definiteness , the points must lie within the unit ball ( `` bloch sphere '' ) , .( the points on the bounding spherical surface , , corresponding to the _ pure states _ , will be shown to exhibit nongeneric behavior , see ( [ a5 ] ) and the respective comments in sec . [ s3 ] ( cf. ) . )such density matrices correspond , in a one - to - one fashion , to the standard ( complex ) two - level quantum systems notably , those of spin- ( electrons , protons , ) and massless spin- particles ( photons ) .these systems carry the basic units of quantum computing , the _ quantum bits_. ( if we set in ( [ eq:6 ] ) , we recover a classical binomial distribution , with the probability of `` success '' , say , being and of `` failure '' , . setting either or to zero ,puts us in the framework of real as opposed to complex quantum mechanics . )the quantum analogue of the product of ( classical ) probability distributions is the _ tensor product _ of density matrices .( again , it is easily seen that , for diagonal matrices , this reduces to the classical product . )hence , we will replace by the tensor products , where is a density matrix ( [ eq:6 ] ) .these tensor products are matrices , and can be used to compute ( _ via _ the fundamental rule that the expected value of an observable is the trace of the matrix product of the observable and the density matrix ; see ) the probability of strings of quantum bits of length .in it was argued that the quantum fisher information matrix ( requiring due to noncommutativity the computation of symmetric logarithmic derivatives ) , one must find the symmetric logarithmic derivatives ( ) satisfying and then compute the entries of ( [ eq:8 ] ) in the form ( * ? ? ?( 2 ) , ( 3 ) ) , \quad \beta , \gamma = x , y , z.\ ] ] for a well - motivated discussion of these formulas and the manner in which classical and quantum fisher information are related , see . ] ) for the density matrices ( [ eq:6 ] ) should be taken to be of the form the quantum counterpart of the jeffreys prior was , then , taken to be the normalized form ( dividing by ) of the square root of the determinant of ( [ eq:8 ] ) , that is , on the basis of the above - mentioned result of clarke and barron that the jeffreys prior yields the asymptotic common minimax and maximin redundancy , it was conjectured that its assumed quantum counterpart ( [ eq:9 ] ) would have similar properties , as well . to examine this possibility , ( [ eq:9 ] ) was embedded as a specific member ( ) of a one - parameter family of spherically - symmetric / unitarily - invariant probability densities ( i.e. , under unitary transformations of , the assigned probability is invariant ) , ( embeddings of ( [ eq:9 ] ) in other ( possibly , multiparameter ) families are , of course , possible and may be pursued in further research . in this regard ,see theorem [ t15 ] in sec .[ s3 ] . ) for , we obtain a uniform distribution over the unit ball .( this has been used as a prior over the two - level quantum systems , at least , in one study . 
) for , the uniform distribution over the spherical boundary ( the locus of the pure states ) is approached .( this is often employed as a prior , for example . ) for , a dirac distribution concentrated at the origin ( corresponding to the fully mixed state ) is approached . for a treatment in our setting that is analogous to that of clarke and barron , we average with respect to . doing so yields a one - parameter family of _ bayesian density matrices _ , , which are the analogues of the mixtures , and which exhibit highly interesting properties .now , still following clarke and barron , we have to compute the analogue of the risk , i.e. , the relative entropy .keeping the definition ( [ eq:5 ] ) in mind , this requires us to explicitly find the eigenvalues and eigenvectors of the matrices , which we do in sec .subsequently , in sec . [ s2.3 ], we determine explicitly the relative entropy of with respect to .we do this by using identities for hypergeometric series and some combinatorics .( it is also possible to obtain some of our results by making use of representation theory of .an even more general result was derived by combining these two approaches .we comment on this issue at the end of sec .[ s3 ] . ) on the basis of these results ,we then address the question of finding asymptotic estimations in sec .[ s2.4 ] and [ s2.5 ] .these , in turn , form the basis of examining to what degree the results of clarke and barron are capable of extension to the quantum domain .let us ( naively ) attempt to apply the formulas of clarke and barron ( [ eq:4 ] ) and ( [ eq:3 ] ) above to the quantum context under investigation here .we do this by setting to 3 ( the dimensionality of the unit ball which we take as ) , to ( the determinant of the quantum fisher information matrix ( [ eq:8 ] ) ) , so that is , and to .then , we obtain from the expression for the asymptotic redundancy ( [ eq:4 ] ) , where , and from the expression for the asymptotic minimax redundancy ( [ eq:3 ] ) , we shall ( in sec . [ s3 ] )compare these two formulas , ( [ eq:12 ] ) and ( [ eq:11 ] ) , with the results of sec . [ s2 ] and find some striking similarities and coincidences , particularly associated with the fully mixed state ( ) .these findings will help to support the working hypothesis of this study that there are meaningful extensions to the quantum domain of the ( commutative probabilistic ) theorems of clarke and barron .however , we find that the minimax and maximin properties of the jeffreys prior do not strictly carry over , but transfer only in an approximate sense , which is , nevertheless , still quite remarkable . in any case, we can not formally rule out the possibility that the actual global ( perhaps common ) minimax and maximin are achieved for probability distributions not belonging to the one - parameter family . in analogy to ( * ? ? ?5.2 ) , the matrices should prove useful for the _ universal _ version of schumacher data compression . schumacher s result must be considered as the quantum analogue of shannon s noiseless coding theorem ( see e.g. ( * ? ? 
?roughly , _ quantum data compression _ , as proposed by schumacher , works as follows : a ( quantum ) signal source ( sender " ) generates signal states of a quantum system , the ensemble of possible signals being described by a density operator .the signals are projected down to a dominant " subspace of , the rest is discarded .the information in this dominant subspace is transmitted through a ( quantum ) channel .the receiver tries to reconstruct the original signal by replacing the discarded information by some typical " state . the quality ( or _ faithfulness _ ) of a coding scheme is measured by the _ fidelity _ , which is by definition the overall probability that a signal from the signal ensemble that is transmitted to the receiver passes a validation test comparing it to its original ( see ( * ? ? ?what schumacher shows is that , for each and , under the above coding scheme a compression rate of qubits per signal is possible , where is the _ von neumann entropy _ of , at a fidelity of at least .( thus , the von neumann entropy is the quantum analogue of the shannon entropy , which features in shannon s classical noiseless coding theorem . indeed , as is easy to see , for diagonal matrices , corresponding to classical probability densities , the right - hand side of ( [ eq:1 ] ) reduces to the shannon entropy . )this is achieved by choosing as the dominant subspace that subspace of the quantum system which is the span of the eigenvectors of corresponding to the largest eigenvalues , with the property that the eigenvalues add up to at least .consequently , in a universal compression scheme , we propose to project blocks of signals ( qubits ) onto those `` typical '' subspaces of -dimensional hilbert space corresponding to as many of the dominant eigenvalues of as it takes to exceed a sum . for all ,the leading one of the distinct eigenvalues has multiplicity , and belongs to the ( )-dimensional ( bose einstein ) symmetric subspace .( projection onto the symmetric subspace has been proposed as a method for stabilizing quantum computations , including quantum state storage . ) for , the leading eigenvalue can be obtained by dividing the -st catalan number that is , by .( the catalan numbers `` are probably the most frequently occurring combinatorial numbers after the binomial coefficients '' . )let us point out to the reader the quite recent important work of petz and sudr .they demonstrated that in the quantum case in contrast to the classical situation in which there is , as originally shown by chentsov , essentially only one monotone metric and , therefore , essentially only one form of the fisher information there exists an infinitude of such metrics .`` the monotonicity of the riemannian metric is crucial when one likes to imitate the geometrical approach of [ chentsov ] .an infinitesimal statistical distance has to be monotone under stochastic mappings .we note that the monotonicity of is a strengthening of the concavity of the von neumann entropy .indeed , positive definiteness of is equivalent to the strict concavity of the von neumann entropy and monotonicity is much more than positivity '' .the monotone metrics on the space of density matrices are given by the operator monotone functions , such that and .for the choice , one obtains the minimal metric ( of the symmetric logarithmic derivative ) , which serves as the basis of our analysis here .`` in accordance with the work of braunstein and caves , this seems to be the canonical metric of parameter estimation theory. 
however , expectation values of certain relevant observables are known to lead to statistical inference theory provided by the maximum entropy principle or the minimum relative entropy principle when _ a priori _ information on the state is available .the best prediction is a kind of generalized gibbs state . on the manifold of those states , the differentiation of the entropy functional yields the kubo - mori / bogoliubov metric , which is different from the metric of the symmetric logarithmic derivative .therefore , more than one privileged metric shows up in quantum mechanics .the exact clarification of this point requires and is worth further studies '' .it remains a possibility , then , that a monotone metric other than the minimal one ( which corresponds to , that is ( [ eq:9 ] ) ) may yield a common global asymptotic minimax and maximin redundancy , thus , fully paralleling the classical / nonquantum results of clarke and barron .we intend to investigate such a possibility , in particular , for the kubo - mori / bogoliubov metric .in this section , we implement the analytical approach described in the introduction to extending the work of clarke and barron to the realm of quantum mechanics , specifically , the two - level systems. such systems are representable by density matrices of the form ( [ eq:6 ] ) .a composite system of independent ( unentangled ) and identical two - level quantum systems is , then , represented by the -fold tensor product . in theorem [ t1 ] of sec .[ s2.1 ] , we average with respect to the one - parameter family of probability densities defined in ( [ eq:10 ] ) , obtaining the bayesian density matrices and formulas for their entries .then , in theorem [ t2 ] of sec. [ s2.2 ] , we are able to explicitly determine the eigenvalues and eigenvectors of . using these results , in sec . [ s2.3 ] , we compute the relative entropy of with respect to .then , in sec. [ s2.4 ] , we obtain the asymptotics of this relative entropy for . in sec .[ s2.5 ] , we compute the asymptotics of the von neumann entropy ( see ( [ eq:1 ] ) ) of .all these results will enable us , in sec .[ s3 ] , to ascertain to what extent the results of clarke and barron could be said to carry over to the quantum domain .the -fold tensor product is a matrix . to refer to specific rows and columns of , we index them by subsets of the -element set .we choose to employ this notation instead of the more familiar use of binary strings , in order to have a more succinct way of writing our formulas . for convenience, we will subsequently write ] contained in both and , denoting the number of elements _ not _ in both and , denoting the number of elements not in but in , and denoting the number of elements in but not in . in symbols , \backslash ( i\cup j)},\\ n_{\notin\in}&=\v{j\backslash i},\\ n_{\in\notin}&=\v{i\backslash j}.\end{aligned}\ ] ] we consider the average of with respect to the probability density defined in ( [ eq:10 ] ) taken over the unit sphere . this average can be described explicitly as follows .[ t1 ] the average , equals the matrix } ] , so that a generic vector is } ] , the symbol denotes the standard unit vector with a 1 in the -th coordinate and 0 elsewhere , i.e. 
, } ] .then we define the vector by \backslash ( a\cup b),\ \v{y}=s - h } { \sum _ { x\subseteq a } ^{}}(-1)^{\v{x}}\ , e_{x\cup x'\cup y},\ ] ] where is the _ complement of in _ "by which we mean that if consists of the - , - , -largest elements of , , then consists of all elements of _ except for _ the - , - , -largest elements of .for example , let .then the vector is given by ( in this special case , the possible subsets of in the sum in ( [ e16 ] ) are , , , , with corresponding complements in being , , , , respectively , and the possible sets are , , . ) observe that all sets which occur as indices in ( [ e16 ] ) have the same cardinality .[ l3 ] let be integers with and let and be disjoint -element subsets of i u\cup u'\cup v u v u\subseteq a v\subseteq [ n]\backslash ( a\cup b) \v{v}=s - h ] , , which have elements in common with , and which have elements in common with .clearly , we used expression ( [ e4 ] ) with and . to determine , note first that there are possible sets which intersect in exactly elements .next , let us assume that we already made a choice for .in order to determine the number of possible sets such that has elements in common with , we have to choose elements from , for which we have possibilities , and we have to choose elements from \backslash ( i\cup a\cup b) ] be the matrix defined by where , etc ., have the same meaning as earlier , and where is a function of which is symmetric , i.e. , . then , the eigenvalues of are with respective multiplicities independent of .the above proof of theorem [ t2 ] has to be adjusted only insignificantly to yield a proof of theorem [ t6 ] .in particular , the vector as defined in ( [ e16 ] ) is an eigenvector for , for any two disjoint -element subsets and of ] be the matrix with entries given in _( [ e4])_. then , we have with as given in _( [ e14 ] ) _ , and with .before we move on to the proof , we note that theorem [ t7 ] gives us the following expression for the relative entropy of with respect to [ c8 ] the relative entropy of with respect to equals with as given in _ ( [ e14 ] ) _ , and with . .one way of determining the trace of a linear operator is to choose a basis of the vector space , \} ] .by the definition ( [ e16 ] ) of it equals \backslash ( a_p\cup b_p),\ \v{y}=s - h } { \sum _ { x\subseteq a_p } ^{}}\kern-1 cm r_{i , x\cup x'\cup y}\,(-1)^{\v{x}},\ ] ] where denotes the -entry of .( recall that is given explicitly in ( [ e2 ] ) . )now , it should be observed that we did a similar calculation already , namely in the proof of lemma [ l3 ] .in fact , the expression ( [ e34 ] ) is almost identical with the left - hand side of ( [ e19 ] ) .the essential difference is that is replaced by for all ( the nonessential difference is that are replaced by , respectively ) .therefore , we can partially rely upon what was done in the proof of lemma [ l3 ] .we distinguish between the same cases as in the proof of lemma [ l3 ] . .we do not have to worry about this case , since then lies in the span of vectors with , which is taken care of in ( [ e33 ] ) . .essentially the same arguments as those in case 2 in the proof of lemma [ l3 ] show that the term ( [ e34 ] ) vanishes for this choice of .of course , one has to use the explicit expression ( [ e2 ] ) for . .in case 3 in the proof of lemma [ l3 ] we observed that there are sets , for some and , , \backslash ( a_p\cup b_p) ] .) 
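as an aside ( not part of the original derivation ) , the eigenstructure obtained above is easy to cross - check by brute force for small n : one can average the tensor powers over a spherically symmetric prior by monte carlo and inspect the resulting eigenvalues . the sketch below uses the uniform prior over the unit ball , one member of the one - parameter family introduced earlier , purely for illustration ; the near - degenerate groups of numerical eigenvalues reflect the multiplicities derived above , up to sampling noise .

```python
# A brute-force numerical cross-check (ours, not the paper's closed-form route):
# average rho(v)^{tensor n} over a spherically symmetric prior and inspect the
# eigenvalues of the resulting "Bayesian density matrix".  The uniform prior
# over the unit ball is an assumption made purely for illustration.
import numpy as np

PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def rho(v):
    """2x2 density matrix (1/2)(I + v . sigma) for a point v inside the unit ball."""
    return 0.5 * (np.eye(2) + sum(c * s for c, s in zip(v, PAULI)))

def tensor_power(m, n):
    """n-fold tensor (Kronecker) power of a matrix."""
    out = np.array([[1.0 + 0j]])
    for _ in range(n):
        out = np.kron(out, m)
    return out

def sample_ball(rng):
    """A uniformly distributed point in the unit ball (rejection sampling)."""
    while True:
        v = rng.uniform(-1.0, 1.0, size=3)
        if v @ v <= 1.0:
            return v

def bayesian_density_matrix(n, samples=20000, seed=0):
    """Monte Carlo average of rho(v)^{tensor n} over the assumed uniform-ball prior."""
    rng = np.random.default_rng(seed)
    acc = np.zeros((2**n, 2**n), dtype=complex)
    for _ in range(samples):
        acc += tensor_power(rho(sample_ball(rng)), n)
    return acc / samples

zeta3 = bayesian_density_matrix(3)
print(np.round(np.sort(np.linalg.eigvalsh(zeta3))[::-1], 4))
# the eigenvalues cluster into a few nearly equal groups; the group sizes are
# the multiplicities of the distinct eigenvalues discussed above.
```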
of course , we do the latter step by equating the first derivative of the -dependent part in ( [ d8 ] ) with respect to to zero and solving for .it turns out that this equation takes the appealingly simple form numerically , we find this equation to have the solution , at which the asymptotic maximin redundancy assumes the value .for , on the other hand , we have for the asymptotic redundancy ( [ d8 ] ) , .again , we must , therefore , conclude that in contrast to the classical case our trial candidate ( ) for the quantum counterpart of jeffreys prior can not serve as a `` reference prior , '' in the sense introduced by bernardo . moreover , again in contrast to the classical situation we find that the minimax and the maximin are _ not _ identical ( although remarkably close ) .the two distinct priors yielding these values ( , respectively ) are themselves remarkably close , as well . since they are mixtures of product states , the matrices are classically as opposed to epr ( einstein podolsky rosen ) correlated .therefore , must not be less than the sum of the von neumann entropies of any set of reduced density matrices obtained from it , through computation of partial traces . for positive integers , ,the corresponding reduced density matrices are simply , due to the mixing ( * ? ? ?* exercise 7.10 ) . using these reduced density matrices ,one can compute _ conditional _ density matrices and quantum entropies .clarke and barron have an alternative expression for the redundancy in terms of conditional entropies , and it would be of interest to ascertain whether a quantum analogue of this expression exists .let us note that the theorem of clarke and barron utilized the uniform convergence property of the asymptotic expansion of the relative entropy ( kullback leibler divergence ) .condition 2 in their paper is , therefore , crucial .it assumes as is typically the case classically that the matrix of second derivatives , , of the relative entropy is identical to the fisher information matrix . in the quantum domain , however , in general , , where is the matrix of second derivatives of the quantum relative entropy ( [ eq:5 ] ) and is the symmetric logarithmic derivative fisher information matrix .the equality holds only for special cases .for instance , does hold if for the situation considered in this paper .the volume element of the kubo - mori / bogoliubov ( monotone ) metric is given by .this can be normalized for the two - level quantum systems to be a member ( ) of a one - parameter family of probability densities and similarly studied , it is presumed , in the manner of the family ( cf .( [ eq:10 ] ) and ( [ e7 ] ) ) analyzed here .these two families can be seen to differ up to the normalization factor by the replacement of in ( [ eq : kubo ] ) by , simply , .( these two last expressions are , of course , equal for . ) in general , the volume element of a monotone metric over the two - level quantum systems is of the form ( * ? ? ?* eq . 3.17 ) where is an operator monotone function such that and . 
for ,one recovers the volume element ( ) of the metric of the symmetric logarithmic derivative , and for , that ( ) of the kubo - mori / bogoliubov metric .( it would appear , then , that the only member of the family proportional to a monotone metric is , that is ( [ eq:9 ] ) .the maximin result we have obtained above corresponding to the solution of ( [ maximin ] ) would appear unlikely , then , to extend globally beyond the family .of course , a similar remark could be made in regard to to the minimax , corresponding to , as shown above . )while can be generated from the relative entropy ( [ eq:5 ] ) ( which is a limiting case of the -entropies ) , is similarly obtained from ( * ? ? ?* eq . 3.16 ) it might prove of interest to repeat the general line of analysis carried out in this paper , but with the use of ( [ jan ] ) rather than ( [ eq:5 ] ) .also of importance might be an analysis in which the relative entropy ( [ eq:5 ] ) is retained , but the family ( [ eq : kubo ] ) based on the kubo - mori / bogoliubov metric is used instead of .let us also indicate that if one equates the asymptotic redundancy formula of clarke and barron ( [ eq:4 ] ) ( using ) to that derived here ( [ a3 ] ) , neglecting the residual terms , solves for , and takes the square root of the result , one obtains a prior of the form ( [ monotone ] ) based on the monotone function .( let us note that the reciprocal of the related `` morozova - chentsov '' function , , in this case , is the _ exponential _mean of and , while for the minimal monotone metric , the reciprocal of the morozova - chentsov function is the _ arithmetic _ mean .it is , therefore , quite interesting from an information - theoretic point of view that these are , in fact , the only two means which furnish additive quasiarithmetic average codeword lengths .also , it appears to be a quite important , challenging question bearing upon the relationship between classical and quantum probability to determine whether or not a family of probability distributions over the bloch sphere exists , which yields as its volume element for the corresponding fisher information matrix , a prior of the form ( [ monotone ] ) with the noted . ) as we said in the introduction , ideally we would like to start with a ( suitably well - behaved ) _ arbitrary _ probability density on the unit ball , determine the relative entropy of with respect to the average of over the probability density , then find its asymptotics , and finally , among all such probability densities , find the one(s ) for which the minimax and maximin are attained . in this regard ,we wish to mention that a suitable combination of results and computations from sec .[ s2 ] with basic facts from representation theory of ( cf . for more information on that topic ) yields the following result .[ t15 ] let be a spherically symmetric probability density on the unit ball , i.e. , depends only on .furthermore , let be the average .then the eigenvalues of are with respective multiplicities and corresponding eigenspaces a ballot path from to , which were described in sec .[ s2.2 ] .the relative entropy of with respect to is given by _( [ e32 ] ) _ , with as given in _ ( [ e50])_. 
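a companion sketch ( again ours , and reusing the helper functions defined in the previous sketch ) estimates the redundancy - type quantity at issue : the relative entropy of the n - fold tensor power with respect to the bayesian average , together with its mean over the prior . the identity used , that the relative entropy of a product state with respect to the average splits into minus n times the single - copy von neumann entropy plus a cross term , follows directly from the definitions above ; the growth of the averaged quantity with n is what the asymptotic estimates of sec . [ s2.4 ] and [ s2.5 ] describe in closed form .

```python
# Numerical estimate (ours) of the average redundancy
#   E_prior[ S( rho(v)^{tensor n} || zeta_n ) ]   (in nats),
# for the same assumed uniform-ball prior as in the previous sketch.
# Requires rho(), tensor_power(), sample_ball(), bayesian_density_matrix()
# from that sketch.
import numpy as np
from scipy.linalg import logm

def average_redundancy(n, n_risk=200, seed=1):
    rng = np.random.default_rng(seed)
    zeta = bayesian_density_matrix(n)          # helper from the previous sketch
    log_zeta = logm(zeta)                      # reused across prior samples
    total = 0.0
    for _ in range(n_risk):
        v = sample_ball(rng)
        r = np.linalg.norm(v)
        lam = np.array([(1 + r) / 2, (1 - r) / 2])       # single-copy eigenvalues
        s_vn = -np.sum(lam * np.log(lam))                # von Neumann entropy of rho(v)
        rn = tensor_power(rho(v), n)
        # S(rho^{x n} || zeta_n) = -n S_vN(rho) - Tr[ rho^{x n} ln zeta_n ]
        total += -n * s_vn - np.real(np.trace(rn @ log_zeta))
    return total / n_risk

for n in (1, 2, 3, 4):
    print(n, round(average_redundancy(n), 3))
# the n-dependence of these averages is the object of the asymptotic analysis.
```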
we hope that this theorem enables us to determine the asymptotics of the relative entropy and , eventually , to find , at least within the family of spherically symmetric ( that is , unitarily - invariant ) probability densities on the unit ball , the corresponding minimax and maximin redundancies .doing so , would resolve the outstanding question of whether these two redundancies , in fact , coincide , as classical results would suggest .clarke and barron ( cf . ) have derived several forms of asymptotic redundancy for arbitrarily parameterized families of probability distributions .we have been motivated to undertake this study by the possibility that their results may generalize , in some yet not fully understood fashion , to the quantum domain of noncommutative probability .( thus , rather than probability densities , we have been concerned here with density matrices . )we have only , so far , been able to examine this possibility in a somewhat restricted manner . by this , we mean that we have limited our consideration to two - level quantum systems ( rather than -level ones , ) , and for the case , we have studied ( what has proven to be ) an analytically tractable one - parameter family of possible prior probability densities , , ( rather than the totality of arbitrary probability densities ) .consequently , our results can not be as definitive in nature as those of clarke and barron .nevertheless , the analyses presented here reveal that our trial candidate ( , that is ( [ eq:9 ] ) ) for the quantum counterpart of the jeffreys prior closely approximates those probability distributions which we have , in fact , found to yield the minimax ( ) and maximin ( ) for our one - parameter family ( ). future research might be devoted to expanding the family of probability distributions used to generate the bayesian density matrices for , as well as similarly studying the -level quantum systems ( ) .( in this regard , we have examined the situation in which , and the only density matrices considered are simply the tensor products of identical density matrices . surprisingly , for , the associated trivariate candidate quantum jeffreys prior , taken , as throughout this study , to be proportional to the volume elements of the metrics of the symmetric logarithmic derivative ( cf . ) , have been found to be _ improper _ ( nonnormalizable ) over the bloch sphere .the minimality of such metrics is guaranteed , however , only if `` the whole state space of a spin is parameterized '' . ) in all such cases , it will be of interest to evaluate the characteristics of the relevant candidate quantum jeffreys prior _ vis - - vis _ all other members of the family of probability distributions employed over the -dimensional convex set of density matrices .we have also conducted analyses parallel to those reported above , but having , _ ab initio _ , set either or to zero in the density matrices ( [ eq:6 ] ) .this , then , places us in the realm of real as opposed to complex ( standard or conventional ) quantum mechanics .( of course , setting _ both _ and to zero would return us to a strictly classical situation , in which the results of clarke and barron , as applied to binomial distributions , would be directly applicable . ) though we have on the basis of detailed computations developed strong conjectures as to the nature of the associated results , we have not , at this stage of our investigation , yet succeeded in formally demonstrating their validity . 
in conclusion , again in analogy to classical results , we would like to raise the possibility that the quantum asymptotic redundancies derived here might prove of value in deriving formulas for the _ stochastic complexity _ ( cf . ) the shortest description length of a string of _ quantum _ bits .the competing possible models for the data string might be taken to be the density matrices ( ) corresponding to different values of , or equivalently , different values of the von neumann entropy , .let , , be a family of density matrices , and let , , be a probability density on .the minimum taken over all density matrices , is achieved by . proof .we look at the difference and show that it is nonnegative .indeed , since relative entropies of density matrices are nonnegative ( * ? ? ?* bottom of p. 17 ) .christian krattenthaler did part of this research at the mathematical sciences research institute , berkeley , during the combinatorics program 1996/97 .paul slater would like to express appreciation to the institute for theoretical physics for computational support .this research was undertaken , in part , to respond to concerns ( regarding the rationale for the presumed quantum jeffreys prior ) conveyed to him by walter kohn and members of the informal seminar group he leads .the co - authors are grateful to : ira gessel for bringing them into initial contact _ via _ the internet ; to helmut prodinger and peter grabner for their hints regarding the asymptotic computations ; to a. r. bishop and an anonymous referee of ; and to the two anonymous referees of this paper itself , whose comments helped to considerably improve the presentation .j. aczl and z. darczy , _ on measures of information and their characterizations_. academic prss : new york , 1975. j. aitchison , `` goodness of prediction fit , '' _ biometrika _ , vol .3 , pp.547554 , 1975 .a. bach and a. srivastav , `` a characterization of the classical states of the quantum harmonic oscillator by means of de finetti s theorem '' _ comm . math . phys . _ , vol .3 , pp . 453462 , 1989 .a. barenco , a. berthiaume , d. deutsch , a. ekert , r. jozsa , and c. macchiavello , `` stabilisation of quantum computations by symmetrisation , '' _ siam j. comput .5 , pp . 15411547 , 1997 . h. barnum , c. a. fuchs , r. jozsa , and b. schumacher , `` general fidelity limit for quantum channels , '' _ phys .a _ , vol .54 , no . 6 , pp .47074711 , dec 1996 . e. g. beltrametti and g. cassinelli , _ the logic of quantum mechanics _ , addison - wesley : reading , 1981 . c. h. bennett , `` quantum information and computation , '' _ physics today _ ,2430 , oct .1995 . c. h. bennett and p. w. shor , `` quantum information theory , '' _ ieee trans .44 , no . 6 , pp .2724 - 2742 ( 1998 ) .j. m. bernardo , `` reference posterior distributions for bayesian inference , '' , _b _ , vol . 41 , pp . 113147 , 1979 . j. m. bernardo and a. f. m. smith , _ bayesian theory_. wiley : new york , 1994 .l. c. biedenharn and j. d. louck , _ angular momentum in quantum physics _, addison wesley : massachusetts , 1981 . s. l. braunstein and g. j. milburn , `` dynamics of statistical distance : quantum limits of two - level clocks , '' _ phys .a _ , vol .3 , pp . 18201826 , mar .n. j. cerf and c. adami , `` information theory of quantum entanglement and measurement , '' _ physica d _1 , pp . 6281 , 1998 . n. n. chentsov , _ statistical decision rules and optimal inference_. amer .soc . : providence , 1982 .b. s. 
clarke , `` implications of reference priors for prior information and for sample size , '' _173184 , march 1996 .b. s. clarke and a. r. barron , information - theoretic asymptotics of bayes methods , " _ ieee trans .inform . theory _3 , pp . 453471 , may , 1990 . b. s. clarke and a. r. barron , jeffreys prior is asymptotically least favorable under entropy risk , " _j. statist .planning and inference _ , vol .1 , pp . 3761 , aug . 1994 .b. s. clarke and a. r. barron , `` jeffreys prior yields the asymptotic minimax redundancy , '' in _ ieee - ims workshop on information theory and statistics _ , piscataway , nj : ieee , 1995 , p. 14 .r. cleve and d. p. divincenzo , `` schumacher s quantum data compression as a quantum computation , '' _ phys .4 , pp . 26362650 , oct . 1996 .l. d. davisson , universal noiseless coding , " _ ieee trans .inform . theory _it-19 , pp . 783795 , 1980 . l. davisson and a. leon garcia , a source matching approach to finding minimax codes , " _ ieee trans .inform . theory _it-26 , pp . 166174 , 1980 .a. fujiwara and h. nagaoka , quantum fisher metric and estimation for pure state models , _ phys .a _ , vol .119124 , 1995 .r. gallager , source coding with side information and universal coding , " technical report lids - p-937 , m.i.t .laboratory for information and decision systems , 1979 .g. gasper and m. rahman , _ basic hypergeometric series _, encyclopedia of mathematics and its applications 35 , cambridge university press , cambridge , 1990 .i. j. good , _ math ._ , 95k:62011 , nov . 1995 . i. j. good , `` utility of a distribution , '' _ nature _ , vol .219 , no . 5161 , p. 1392, 28 sept . 1968 . i. j. good , `` what is the use of a distribution , '' in _ multivariate analysis - ii _( p. r. krishnaiah , ed . ) .new york : academic press , 1969 , pp .d. haussler , a general minimax result for relative entropy " , _ ieee trans .inform . theory _4 , pp . 12761280 , 1997 .k. r. w. jones , `` principles of quantum inference , '' _ ann .1 , pp . 140170 , 1991 .r. jozsa and b. schumacher , a new proof of the quantum noiseless coding theorem , " _41 , no . 12 , pp .23432349 , 1994 .r. e. kass and l. wasserman , `` the selection of prior distributions by formal rules , '' _435 , pp . 13431370 , sept .e. g. larson and p. r. dukes , `` the evolution of our probability image for the spin orientation of a spin-1/2ensemble connection with information theory and bayesian statistics , '' in _ maximum entropy and bayesian methods _ ( w. t. grandy , jr . andl. h. schick , eds . ) .dordrecht : kluwer , 1991 , pp .lo , quantum coding theorem for mixed states , " _ opt .119 , pp . 552556 ,j. d. malley and j. hornstein , `` quantum statistical inference , '' _ statist ._ , vol . 8 , no .433 - 457 ( 1993 ) .s. massar and s. popescu , `` optimal extraction of information from finite quantum ensembles , '' _ phys .74 , no . 8 , pp . 12591263 , feb .t. matsushima , h. inazumi , and s. hirasawa , `` a class of distortionless codes designed by bayes decision theory , '' _ ieee trans .5 , pp . 12881293 , sept .s. g. mohanty , _ lattice path counting and applications _ , academic press , new york , 1979 .m. ohya and d. petz , _ quantum entropy and its use_. berlin : springer - verlag , 1993 .a. peres , _ quantum theory : concepts and methods_. dordrecht : kluwer , 1993 .d. petz , `` geometry of canonical correlation on the state space of a quantum system , '' _ j. math .780795 , feb .d. petz and h. 
hasegawa , `` on the riemannian metric of -entropies of density matrices , '' _ lett .38 , pp . 221225 , 1996 .d. petz and c. sudr , geometries of quantum states , " _ j. math .26622673 , june 1996 .d. petz and g. toth , `` the bogoliubov inner product in quantum statistics , '' _ lett .27 , pp . 205216 , 1993 . f. qi , `` generalized weighted mean values with two parameters , '' _ proc .a _ , vol .1978 , pp . 27232732 , 1998 .j. rissanen , fisher information and stochastic complexity , " _ ieee trans .inform . theory _1 , pp . 4047 , jan .j. rissanen , _ stochastic complexity in statistical inquiry_. world scientific : singapore , 1989 .b. schumacher , quantum coding , `` _ phys .a _ , vol .4 , pp . 27382747 , april 1995 .c. e. shannon , ' ' a mathematical theory of communication , " _ bell .27 , pp . 379423 , 623656 , july oct . 1948 .l. j. slater , _ generalized hypergeometric functions _ , cambridge university press , cambridge , 1966 .p. b. slater , applications of quantum and classical fisher information to two - level complex and quaternionic and three - level complex systems , " _ j. math6 , pp . 26822693 , june 1996 .p. b. slater , quantum fisher - bures information of two - level systems and a three - level extension , " _ j. phys .a _ , vol .l271l275 , 21 may 1996 .p. b. slater , `` the quantum jeffreys prior / bures metric volume element for squeezed thermal states and a universal coding conjecture , '' _ j. phys . a mathl601l605 , 1996 .p. b. slater , _ universal coding of multiple copies of two - level quantum systems _ ,march 1996 .n. j. a. sloane and s. plouffe , _ the encyclopaedia of integer sequences _ , academic press , san diego , 1995 .k. svozil , _ quantum algorithmic information theory _ , los alamos preprint archive , quant - ph/9510005 , 5 oct .s. verd , fifty years of shannon theory , " _ ieee trans .inform . theory _ , vol .44 , no . 6 , pp .20572078 , 1998 .x. viennot , _ une thorie combinatoire des polynmes orthogonaux generaux _ , uqam : montreal , quebec , 1983 . n. j. vilenkin and a. u. klimyk , _ representation of lie groups and special functions _ ,vol . 1 ,kluwer : dordrecht , boston , london , 1991 .a. wehrl , `` general properties of entropy , '' _ rev .2 , pp . 221260 , apr . 1978 .d. welsh , _ codes and cryptography _ , clarendon press , oxford , 1989 .r. f. werner , `` quantum states with einstein - podolsky - rosen correlations admitting a hidden - variable model , '' _ phys .40 , no . 8 , pp .42774281 , 15 oct . 1989 .
clarke and barron have recently shown that the jeffreys invariant prior of bayesian theory yields the common asymptotic ( minimax and maximin ) redundancy of universal data compression in a parametric setting . we seek a possible analogue of this result for the two - level _ quantum _ systems . we restrict our considerations to prior probability distributions belonging to a certain one - parameter family , , . within this setting , we are able to compute exact redundancy formulas , for which we find the asymptotic limits . we compare our quantum asymptotic redundancy formulas to those derived by naively applying the classical counterparts of clarke and barron , and find certain common features . our results are based on formulas we obtain for the eigenvalues and eigenvectors of ( bayesian density ) matrices , . these matrices are the weighted averages ( with respect to ) of all possible tensor products of identical density matrices , representing the two - level quantum systems . we propose a form of _ universal _ coding for the situation in which the density matrix describing an ensemble of quantum signal states is unknown . a sequence of signals would be projected onto the dominant eigenspaces of .
_ index terms _ quantum information theory , two - level quantum systems , universal data compression , asymptotic redundancy , jeffreys prior , bayes redundancy , schumacher compression , ballot paths , dyck paths , relative entropy , bayesian density matrices , quantum coding , bayes codes , monotone metric , symmetric logarithmic derivative , kubo - mori / bogoliubov metric .
research supported in part by the msri , berkeley .
transient lunar phenomena ( tlps or ltps ) are defined for the purposes of this investigation as localized ( smaller than a few hundred km across ) , transient ( up to a few hours duration , and probably longer than typical impact events - less than 1s to a few seconds ) , and presumably confined to processes near the lunar surface .how such events are manifest is summarized by cameron ( 1972 ) . in paper i ( crotts 2008 ; see also crotts 2009 ) we study the systematic behavior ( especially the spatial distribution ) of tlp observations - particularly their significant correlations with tracers of lunar surface outgassing , and we are thereby motivated to understand if this correlation is directly causal .numerous works have offered hypotheses for the physical cause of tlps ( mills 1970 , garlick et al . 1972a , b , geake & mills 1977 , cameron 1977 , middlehurst 1977 , hughes 1980 , robinson 1986 , zito 1989 , carbognani 2004 , davis 2009 ) , but we present a methodical examination of the influence of outgassing , exploring quantitatively how outgassing might produce tlps .furthermore , it seems likely that outgassing activity is concentrated in several areas , which leads one to ask how outgassing might interact with and alter the regolith presumably overlying the source of gas .reviews of similar processes exist but few integrate apollo - era data e.g. , stern ( 1999 ) , mukherjee ( 1975 ) , friesen ( 1975 ) .as the final version of this paper approached completion , several papers were published regarding the confirmed discovery of hydration of the lunar regolith .fortunately , we deal here with the special effects of water on lunar regolith and find that many of our predictions are borne out in the recently announced data. we will deal with this explicitly in 5 .several experiments from apollo indicate that gas is produced in the vicinity of the moon , even though these experiments disagree on the total rate : 1 ) lace ( lunar atmosphere composition experiment on _ apollo 17 _ ) , .1 g s over the entire lunar surface ( hodges et al . 1973 , 1974 ) ; 2 ) side ( suprathermal ion detector experiment on _ apollo 12 , 14 , 15 _ ) , g s ( vondrak et al . 1974 ) ;3 ) ccge ( cold cathode gauge experiment on _ apollo 12 , 14 , 15 _ ) , g s ( hodges et al . 1972 ) .these measurements not only vary by more than two orders of magnitude but also in assayed species and detection methods .lace results here applies only to neutral , and . by mass predominates .side results all relate to ions , and perhaps include a large contribution from molecular species ( vondrak et al . 1974 ) .ccge measures only neutral species , not easily distinguishing between them .the lace data indicate episodic outgassing on timescales of a few months or less ( hodges & hoffman 1975 ) , but resolving this into faster timescales is more ambiguous . in this discussionwe adopt the intermediate rate ( side ) , about 200 tonne y for the total production of gas , of all species , ionized or neutral .the lace is the only instrument to provide compositional ratios , which also include additional , rarer components in detail. we will use these ratios and in some cases normalize them against the side total .much of the following discussion is only marginally sensitive to the actual composition of the gas . 
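for orientation ( this is our own bookkeeping , not a figure from any of the instruments ) , the adopted side total of roughly 200 tonne per year corresponds to the following per - second and mean per - area rates , which are useful for comparison with the localized seepage rates estimated in 2 .

```python
# Simple unit bookkeeping (ours): the adopted global outgassing rate of
# ~200 tonne/yr expressed per second and as a mean flux over the lunar surface.
import numpy as np

rate_g_per_s = 200.0e6 / 3.156e7           # 200 tonne/yr  ->  ~6.3 g/s globally
r_moon_cm = 1.738e8                        # lunar radius, cm
area_cm2 = 4.0 * np.pi * r_moon_cm**2      # ~3.8e17 cm^2
mu, N_A = 20.0, 6.022e23                   # assumed mean molar mass (g/mol), Avogadro
flux = rate_g_per_s / mu * N_A / area_cm2  # mean flux, molecules cm^-2 s^-1

print(f"{rate_g_per_s:.1f} g/s globally, ~{flux:.1e} molecules cm^-2 s^-1 on average")
```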
for many components of molecular gas at the lunar surface , however , there is a significant possible contribution from cometary or meteoritic impacts , and a lesser amount from solar wind / regolith interactions . the influx of molecular gas from comets and meteorites is variously estimated , usually in the range of tonnes or tens of tonnes per year over the lunar surface ( see anders et al . 1973 , morgan & shemansky 1991 ) . cometary contributions may be sporadically greater ( thomas 1974 ) . except for h , solar wind interactions ( mukherjee 1975 ) provide only a small fraction of the molecular concentration seen at the surface ( which is only marginally detected : hoffman & hodges 1975 ) . there is still uncertainty as to what fraction of this gas is endogenous . current data do not succeed in resolving these questions , but we will return to consider them later in the context of gas seepage / regolith interactions .
in this paper we consider various effects of outgassing through the regolith , and find that the most interesting simple effect occurs when the flow is high enough to cause disruption of the regolith by an explosion to relieve pressure ( 2 ) , which we compare to fluidization . another interesting effect occurs when the gas undergoes a phase change while passing through the regolith ( 3 ) , which seems to apply only to water vapor . this leads primarily to the prediction of the likely production of subsurface ice , particularly in the vicinity of the lunar poles . these effects suggest a variety of observational / experimental approaches , which we summarize in 4 . in 5 we discuss the general implications of these findings , with specific suggestions as to how these might guide further exploration , particularly with respect to contamination by anthropogenic volatiles . we also discuss the relevance of the predictions in 4 to very recent discoveries regarding lunar regolith hydration .
first , let us make a few basic points about outgassing and the regolith . one can easily picture several modes in which outgassing volatiles might interact with regolith on the way to the surface . these modes will come into play with increasing gas flow rate and/or decreasing regolith depth , and we simply list them with mnemonic labels along with descriptions :
1 ) choke : complete blockage below the regolith , meaning that any chemistry or phase changes occur within the bedrock / megaregolith ;
2 ) seep : gas is introduced slowly into the regolith , essentially molecule by molecule ;
3 ) bubble : gas is introduced in macroscopic packets which stir or otherwise rearrange the regolith ( such as `` fluidization '' e.g. , mills 1969 ) ;
4 ) gulp : gas is introduced in packets whose adiabatic expansion deposits kinetic energy into the regolith and cools the gas , which therefore might even undergo a phase change ;
5 ) explode : gas is deposited in packets at the base of the regolith , leading to an explosion ; and
6 ) jet : gas simply flows into the vacuum at nearly the sound speed with little entrained material .
while the intermediate processes might prove interesting , the extreme cases are probably more likely to be in effect and will receive more of our attention . in fact , choking behavior might lead to explosions or geysers when the pressure blockage is released .
since these latter two processes involve primarily simple hydrodynamics ( and eventually , newtonian ballistics ), we will consider them first , and how they might relate to tlps .if outgassing occurs at a rate faster than simple percolation can sustain , and where regolith obstructs its path to the surface , the accumulation of the gas will disrupt and cause bulk motion of the intervening regolith .the outgassing can lift the regolith into a cloud in the temporary atmosphere caused by the event .the presence of such a cloud has the potential to increase the local albedo from the perspective of an outside observer due to increased reflectivity and possible mie scattering of underlying regolith .additionally , volatiles buried in the regolith layer could become entrained in this gas further changing the reflective properties of such a cloud .garlick et al .( 1972b ) describe fluidization of lunar regolith , in which dust is displaced only temporarily and/or over small distances compared to ballistic trajectories , but we will assume that we are dealing with more rapid changes .let us construct a simple model of explosive outgassing through the lunar surface .for such an event to occur , we assume a pocket of pressurized gas builds at the base of the regolith , where it is delivered by transport through the crust / megaregolith presumably via channels or cracks , or at least faster diffusion from below . given a sufficient flow rate ( which we consider below ) , gas will accumulate at this depth until its internal pressure is sufficient to displace the overlying regolith mass , or some event releases downward pressure e.g. , impact , moonquake , incipient fluidization , puncturing a seal , etc .we can estimate the minimal amount of gas alone required to cause explosive outgassing by assuming that the internal energy of the buried gas is equal to the total energy necessary to raise the overlying cone of regolith to the surface .this `` minimal tlp '' is the smallest outgassing event likely to produce potentially observable disruption at a new site , although re - eruption through thinned regolith will require less gas .we consider the outgassing event occurring in two parts illustrated in figure 1 .initially , the gas bubble explodes upward propelling regolith with it until it reaches the level of the surface ; we assume that the plug consists of a cone of regolith within 45 of the axis passing from the gas reservoir to the surface , normal to the surface . through this processthe gas and regolith become mixed , and we assume they now populate a uniform hemispherical distribution of radius m on the surface . at this point , the gas expands into the vacuum and drags the entrained regolith outward until the dust cloud reaches a sufficiently small density to allow the gas to escape freely into the vacuum and the regolith to fall eventually to the surface .we consider this to be the `` minimal tlp '' for explosive outgassing , as there is no additional reservoir that is liberated by the event beyond the minimum to puncture the regolith .one could also imagine triggering the event by other means , many of which might release larger amounts of gas other than that poised at hydrodynamical instability . 
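a back - of - the - envelope sketch of this energy balance , written by us rather than taken from the source , is given below . the regolith bulk density , subsurface temperature and mean molar mass used are assumed values , chosen to be consistent with the specific figures quoted in the next paragraph ( a 15 m regolith depth , roughly 47,000 moles and 940 kg of gas , and an 8.3 m reservoir radius ) .

```python
# A back-of-the-envelope sketch (ours) of the "minimal TLP" energy balance:
# equate the internal energy of a trapped gas pocket with the energy needed to
# lift the overlying 45-degree cone of regolith through the regolith depth.
# Density, temperature and molar mass below are ASSUMED values consistent with
# the figures quoted in the text, not values taken verbatim from the source.
import numpy as np

g_moon  = 1.62        # m s^-2
depth   = 15.0        # m, depth to the base of the regolith (from the text)
rho_reg = 1800.0      # kg m^-3, assumed regolith bulk density
T       = 265.0       # K, assumed temperature at that depth
mu      = 0.020       # kg mol^-1, assumed mean molar mass of the gas mixture
R       = 8.314       # J mol^-1 K^-1

# overlying cone, 45 degrees from vertical, so surface radius = depth
m_cone = rho_reg * (np.pi / 3.0) * depth**2 * depth      # ~6.4e6 kg
E_lift = m_cone * g_moon * depth                          # raise the cone 15 m

n_mol = E_lift / (1.5 * R * T)                            # (3/2) n R T = E_lift
m_gas = n_mol * mu                                        # kg of gas required
P     = rho_reg * g_moon * depth                          # lithostatic pressure
V     = n_mol * R * T / P                                 # ideal-gas reservoir volume
r_res = (3.0 * V / (4.0 * np.pi))**(1.0 / 3.0)            # equivalent spherical radius

print(f"cone mass  {m_cone:.2e} kg")
print(f"gas needed {n_mol:.0f} mol = {m_gas:.0f} kg")
print(f"pressure   {P/101325:.2f} atm, reservoir radius {r_res:.1f} m")
```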
for the initial conditions of the first phase of our model , we assume the gas builds up at the base of the regolith layer at a depth of 15 m ( for more discussion of this depth , see 3 ) .we set the bulk density of the regolith at g ( mckay et al .1991 ) , thereby setting the pressure at this depth at 0.45 atm .because of the violent nature of an explosive outgassing , we assume that the cone of dust displaced will be 45 from vertical ( comparable to the angle of repose for a disturbed slope of this depth : carrier , olhoeft & mendell 1991 ) .the mass of overlying regolith defined by this cone is kg . in order to determine the mass of gas required to displace this regolith cone , we equate the internal energy of this gas bubble with the potential energy ( , where m s ) required to lift the cone of regolith a height m to the surface , requiring 47,000 moles of gas .much of the gas found in outgassing events consists of , and ( see 1 ) , so we assume a mean molar mass for the model gas of g mol , hence 940 kg of gas is necessary to create an explosive outgassing event .the temperature at this depth is ( see 3 ) , consequently implying an overall volume of gas of 2400 m or a sphere 8.3 m in radius . what flow rate is needed to support this ? using fick s diffusion law , , where the gas number density is taken from above , and drops to zero through 15 m of regolith in .the diffusivity is 7.7 and 2.3 cm s for he and ar , respectively , in the knudsen flow regime for basaltic lunar soil simulant ( martin et al .1973 , where is the absolute temperature .the sticking time of the gas molecules , or heat of absorption , becomes significant if the gas is more reactive or the temperature is reduced .unfortunately we find no such numbers for real regolith , although we discuss realistic diffusivities for other gases below . ] ) , so we adopt cm s for our assumed he / ar mixture .( for any other gas mixture of this molecular weight , would likely be smaller ; below we also show that tends to be lower for real regolith . ) over the area of the gas reservoir , this implies a mass leakage rate of 2.8 g s , or % of the total side rate . with the particular approximations made about the regolith diffusivity ,this is probably near the upper limit on the leakage rate . at the surface of the regolith, this flow is spread to a particle flux of only cm , which presumably causes no directly observable optical effects .the characteristic time to drain ( or presumably fill ) the reservoir is 4 d. the second phase of the simulation models the evolution of dust shells and expanding gas with a spherically symmetric 1d simulation centered on the explosion point .the steps of the model include : 1 ) the regolith is divided into over 600 bins of different mean particle size .these bins are logarithmically spaced over the range mm to m according to the regolith particle size distribution from sample 72141,1 ( from mckay et al .the published distribution for sample 72141,1 only goes to 2 m , but other sources ( basu & molinaroli 2001 ) indicate a component extending below 2 m , so we extend our size distribution linearly from 2 m to 0 m .furthermore , we assume the regolith particles are spherical in shape and do not change in shape or size during the explosion ; 2 ) to represent the volume of regolith uniformly entrained in the gas , we create a series of 1000 concentric hemispherical shells for each of the different particle size bins ( i.e. 
, roughly 600,000 shells ) .each of these shells is now independent of each other and totally dependent on the gas - pressure and gravity for motion ; 3 ) we further assume that each regolith shell remains hemispherical throughout the simulation .explicitly , we trace the dynamics of each shell with a point particle , located initially 45 degrees up the side of the shell ; 4 ) we calculate the outward pressure of the gas exerted on the dust shells .the force from this pressure is distributed among different shells of regolith particle size weighted by the total surface area of the grains in each shell .we calculate each shell s outward acceleration and consequently integrate their equations of motion using a timestep s ; 5 ) we calculate the diffusivity of each radial shell ( in terms of the ability of the gas to move through it ) by dividing the total surface area of all dust grains in a shell by the surface area of the shell itself ( assuming grains surfaces to be spherical ) ; 6 ) starting with the largest radius shell , we sum the opacities of each shell until we reach a gas diffusive opacity of unity .gas interior to this radius can not `` see '' out of the external regolith shells and therefore remains trapped .gas outside of this unit opacity shell is assumed to escape and is dropped from the force expansion calculation .dust shells outside the unit opacity radius are now assumed to be ballistic ; 7 ) we monitor the trajectory of each dust shell ( represented by its initially 45 particle ) until it drops to an elevation angle of 30 ( when most of the gas is expanding above the particle ) , at which time this particle shell is no longer supported by the gas , and is dropped from the gas - opacity calculation ; 8) an optical opacity calculation is made to determine the ability of an observer to see the lunar surface when looking down on the cloud .we calculate the downward optical opacity ( such as from earth ) by dividing the total surface area of the dust grains in a shell by the surface area of the shell as seen from above ( ) . starting with the outmost dust shell , we sum downward - view optical opacities until we reach optical depth and to keep track the evolution of this cloud s appearance as seen from a distance ; 9 ) we return to step # 4 above and iterate another timestep , integrating again the equations of motion .we continue this algorithm until all gas is lost and all regolith has fallen to the ground .finally , when all dust has fallen out , we calculate where the regolith ejecta have been deposited . because we re representing each shell as a single point for the purposes of the equations of motion calculations , we want to do more than simply plot the location of each shell - particle on the ground to determine the deposition profile of ejected regolith .thus we create a template function for the deposition of a ballistic explosion of a single spherical shell of material . 
by applying this template to each shell - particle 's final resting location , we better approximate the total deposition of material from that shell . we then sum all of the material from the shells to determine the overall dust ejecta deposition profile . there are obvious caveats to this calculation . first , the release mechanism is undoubtedly more complex than that adopted here , but this release mode is sufficiently simple to be modeled . secondly , the diffusion constant , and therefore the minimal flow rate , might be overestimated due to the significant ( but still largely unknown ) decrease in regolith porosity with increasing depth on the scale of meters ( carrier et al . 1991 ) , plus the likelihood that the simulants used have larger particles and greater porosity than typical regolith . lastly , the regolith depth of 15 meters might be an overestimate for some of these regions , which are among the volcanically youngest and/or freshest impacts on the lunar surface . this exception does not apply to plato and its active highland vicinity , however , and aristarchus is thickly covered with apparent pyroclastic deposits which likely have different but unknown depths and diffusion characteristics . we find the results for this `` minimal tlp '' numerical model of explosive outgassing through the lunar regolith interesting in terms of the reported properties of tlps . figure 2 shows the evolution of the model explosion with time , as might be seen by an observer above , in terms of the optical depth and profiles of the model event , where is a rough measure of order unity changes in the appearance of the surface features , whereas is close to the threshold of the human eye for changes in contrast , which is how many tlps are detected ( especially the many without noticeable color change ) . in both cases the cloud at the particular threshold value expands rapidly to a nearly fixed physical extent , and maintains this size until sufficient dust has fallen out so as to prevent any part of the cloud from obscuring the surface to this degree . easily - seen effects on features ( ) last for 50 s and extend over a radius of 2 km , corresponding to 2 arcsec in diameter , resolvable by a typical optical telescope but often only marginally so . in contrast , the marginally detectable feature extends over 14 km diameter ( 7.5 arcsec ) , lasting for 90 s , but is easily resolved . this model `` minimal tlp '' is an interesting match to the reported behavior of non - instantaneous ( not s ) tlps : about 7% of duration 90 s or less , and half lasting under about 1300 s. certainly there should be selection biases suppressing reports of shorter events . most tlp reports land in an envelope between about two minutes and one hour duration , and this model event lands at the lower edge of this envelope . furthermore , most tlps , particularly shorter ones , are marginally resolved spatially , as would be the easily - detectable component of the model event . this correspondence also seems interesting , given the simplicity of our model and the state of ignorance regarding relevant parameters .
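before moving on to the appearance of the cloud , it is worth cross - checking the phase - 1 initial conditions that seed this simulation ( roughly 47,000 moles , 940 kg of gas , and a 2400 m reservoir , as quoted above ) . the short sketch below redoes the energy - balance estimate ; the bulk density , lunar gravity , lift height , and temperature used here are assumptions chosen to be consistent with the text rather than values copied from the original calculation .

```python
import math

# assumed values, chosen to be consistent with the text rather than copied from it
rho_regolith = 1800.0   # bulk regolith density [kg m^-3] (~1.8 g cm^-3)
depth        = 15.0     # depth of the gas reservoir below the surface [m]
g_moon       = 1.62     # lunar surface gravity [m s^-2]
T            = 250.0    # regolith temperature at that depth [K]
mu           = 0.020    # assumed mean molar mass of the gas mix [kg mol^-1]
R            = 8.314    # gas constant [J mol^-1 K^-1]

# lithostatic pressure at the reservoir depth (~0.45 atm for these values)
P = rho_regolith * g_moon * depth

# mass of the 45-degree cone of regolith above the reservoir (r = h = depth)
cone_mass = rho_regolith * math.pi * depth**3 / 3.0

# energy needed to lift the cone to the surface, equated with the internal
# energy (3/2) n R T of an ideal monatomic gas (appropriate for an he/ar mix)
E_lift  = cone_mass * g_moon * depth
n_moles = E_lift / (1.5 * R * T)

# implied gas mass, reservoir volume, and equivalent spherical radius
gas_mass = n_moles * mu
V_gas    = n_moles * R * T / P
r_gas    = (3.0 * V_gas / (4.0 * math.pi)) ** (1.0 / 3.0)

print(f"overburden pressure ~ {P/101325.0:.2f} atm")
print(f"cone mass           ~ {cone_mass:.2e} kg")
print(f"gas required        ~ {n_moles:.0f} mol (~{gas_mass:.0f} kg)")
print(f"reservoir           ~ {V_gas:.0f} m^3 (sphere of radius {r_gas:.1f} m)")
```

with these assumptions the sketch lands within about 5 - 10% of the quoted figures ; the residual differences simply reflect the hedged choices of density , lift height , and temperature .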
how might this dust cloud actually affect the appearance of the lunar surface ? first , the cloud should cast a shadow that will be even more observable than simple surface obscuration , blocking the solar flux from an area comparable to the region and visible in many orientations . experiments with agitation of lunar regolith ( garlick et al . 1972b ) show that the reflectance of dust is nearly always increased under fluidization , typically by about 20% and often by about 50% , depending on the particular orientation of the observer versus the light source and the cloud . similar results should be expected here for our simulated regolith cloud . these increases in lunar surface brightness would be easily observable spread over the many square kilometers indicated by our model . furthermore , because sub - micron particle sizes dominate the outer regions of the cloud , it seems reasonable to expect mie - scattering effects in these regions , with both blue and red clouds expected from different sun - earth - moon orientations . figure 3 shows the typical fall - out time of dust particles as a function of size . particles larger than m all fall out within the first few seconds , whereas after a few tens of seconds , particles are differentiated for radii capable of contributing to wavelength - dependent scattering . later in the event we should expect significant color shifts ( albeit not order - unity changes in flux ratios ) . the larger dynamical effects in the explosion cloud change rapidly over the event . half of the initially entrained gas is lost from the cloud in the first 3 s , and 99% is lost in the first 15 s. throughout the observable event , the remaining gas stays in good thermal contact with the dust , which acts as an isothermal reservoir . gas escaping the outer portions of the dust cloud does so at nearly the sound speed ( m s ) , and the outer shells of dust also contain particles accelerated to similar velocities . gas escaping after about 3 s does so from the interior of the cloud in parcels with velocities decreasing roughly inversely with time . one observable consequence of this is the expectation that much of the gas and significant dust will be launched to altitudes up to about 50 km , where it may be observed and might affect spacecraft in lunar orbit . the long - term effects of the explosion are largely contained in the initial explosion crater ( nominally 14 m in radius ) , although exactly how the ejecta ultimately settle in the crater is not handled by the model . at larger radii the model is likely to be more reliable ; figure 4 shows how much dust ejecta is deposited by the explosion as a function of radius . beyond the initial crater , the surface density of deposited material varies roughly as , so it converges rapidly with distance . inside a radius of m , the covering factor of ejecta is greater than unity ; beyond this one expects coverage to be patchy . this assumes that the crater explosion is symmetric and produces few `` rays . '' the explosion can change the reflectivity by excavating fresh material . this would be evidenced by a % drop in reflectance at wavelength nm caused by surface fe states in pyroxene and similar minerals ( adams 1974 , charette et al . ) . likewise there is an increase in reflectivity in bluer optical bands ( buratti et al . 2000 ) over hundreds of nm .
even though these photometric effects are compositionally dependent , we are interested only in differential effects : gradients over small distances and rapid changes in time .the lifetime of even these effects at 300 m radius is short , however , due to impact `` gardening '' turnover .the half - life of the ejecta layer at 300 m radius is only of order 1000 y ( from gault et al . 1974 ) , and shorter at large radius ( unless multiple explosions accumulate multiple layers of ejecta ) . at 30 m radiusthe half - life is of order 10 y. from maturation studies of the 950 nm feature ( lucey et al .2000 , 1998 ) , even at 30 m , overturn predominates over optical maturation rates ( over hundreds of my ) . the scale of outgassing in this model event , both in terms of gas release ( 1 tonne ) and timescale ( 4 d ) , are consistent with the total gas output and temporal granularity of outgassing seen in , a dominant lunar atmospheric component .the fact that this model also recovers the scale of many features actually reported for tlps lends credence to the idea that outgassing and tlps might be related to each other causally in this way , as well as circumstantially via the rn episodes and tlp geographical correlation ( paper i ) .how often such an explosive puncturing of the regolith layer by outgassing should occur is unknown , due to the uncertainty in the magnitude and distribution of endogenous gas flow to the surface , and to some degree how the regolith reacts in detail to large gas flows propagating to the surface . also , a new crater caused by explosive outgassing will change the regolith depth , its temperature structure , and eventually its diffusivity .we will not attempt here to follow the next steps in the evolution of an outgassing `` fumerole '' in this way , but are inspired to understand how regolith , its temperature profile , and gas interact , as in the next section .furthermore , such outgassing might happen on much larger scales , or might over time affect a larger area .indeed , such a hypothesis is offered for the scoured region of depleted regolith forming the ina d feature and may extend to other regions around imbrium ( schultz et al .2006 ) .our results here and in paper i bear directly on the argument of vondrak ( 1977 ) that tlps as outgassing events are inconsistent with side episodic outgassing results .the detection limits from alsep sites _ apollo 12 , 14 _ and _ 15 _ correspond to 16 - 71 tonne of gas per event at common tlp sites , particularly aristarchus .( vondrak states that given the uncertainties in gas transportation , these levels are uncertain at the level of an order of magnitude . )our `` minimal tlp event '' described above is 20 - 80 times less massive than this , however , and still visible from earth .it seems implausible that a spectrum of such events would never exceed the side limit , but it is not so obvious such a large event would occur in the seven - year alsep operations interval .also , this side limit interpretation rests crucially on alphonsus ( and ross d ) as prime tlp sites , both features which are rejected by our robust geographical tlp sieve in paper i. dust elutriation or particle segregation in a cloud agitated by a low - density gas , occurring in this model , could potentially generate large electrostatic voltages , perhaps relating to tlps ( mills 1970 ) .luminous discharges are generated in terrestrial volcanic dust clouds ( anderson et al .1965 , thomas et al . 
2007 ) .above we see dust particles remain suspended in a gas of number density to on scales of several tenths of a km to several km , a plausible venue for large voltages . in the heterogeneous lunar regolith , several predominant minerals with differing particle size may segregate under gas flow suspension and acceleration . assuming a typical particle size of m , and typical work function differences for particles of even well - defined compositions is problematic due to surface effects such as solar - wind / micrometeoritic weathering and exposed surface fe states .the following analysis suffices for two particles of different conducting composition ; a similar result arises via triboelectric interaction of two different dielectrics although the details are less understood .disturbed dust is readily charged for long periods in the lunar surface environment ( stubbs , vondrak & farell 2005 ) . ] of ev , two particles exchange charge upon contact until the equivalent of .25v is maintained , amounting to coul = 1700 e . when these particles separate to distance , their mutual capacitance becomes . for , if particles retain , voltages increase by times !such voltages can not be maintained .paschen s coronal discharge curve reaches minimum potential at 137v for ar , 156v for he , for column densities of and cm , respectively , and rises steeply for lesser column densities ( and roughly proportional to for larger ) .similar optimal are found for molecules , with minimum voltages a few times higher e.g. , 420v at cm for co , 414v for h , 410v for ch , and for other molecules cm .the visual appearance of atomic emission in high voltage discharge tubes is well know , with he glowing pink - orange ( primarily at 4471.5 and 5875.7 : reader & corliss 1980 , pearse & gaydon 1963 ) , and ar glowing violet ( from lines 4159 - 4880 ) .if this applies to tlps , the incidence of intense red emission in some tlp reports ( cameron 1978 ) argues for another gas . to 13 ( mag arcsec ) in v , compared to 3.4 at full moon , so visible sources could be faint .] ne is not an endogenous gas .common candidate molecules appear white or violet - white ( co , so ) or red ( water vapor - primarily h , which is produced in many hydrogen compounds ; ch - balmer lines plus ch bands at 390 and 431 nm ) .the initial gas density at the surface from a minimal tlp is , so initially the optimal for coronal discharge is on cm scales ( versus the initial outburst over tens of meters ) .as the tlp expands to 1 km radius , drops to , so the optimal holds over the scale of the entire cloud , likely the most favorable condition for coronal discharge .if gas kinetic energy converts to luminescence with , for instance , 2% efficiency , at this density this amounts to j m , or 100 j m , compared to the reflected solar flux of 100 w m , capable of a visible color shift for several seconds .perhaps a minimal tlp could sustain a visible coronal discharge over much of its min lifetime .these should also be observable on the nightside surface , too , since solar photoionization is seemingly unimportant in initiating the discharge , and there are additional factors to consider .referring back to scenarios ( 2 ) and ( 3 ) in 1 , the onset of fluidization ( mills 1969 ) marks the division between these two regimes of seepage and `` bubbling '' and has been studied ( siegal & gold 1973 , schumm 1970 ) . 
although laboratory test are made with coarser sieve particulates and much thinner dust layers in 1 gravity , we can scale the gas pressure needed for incipient fluidization by and thickness to find the threshold atm ( siegal & gold 1973 ) . correcting for less diffusive regolith ,this pressure estimate is likely a lower limit . below this pressure simple gas percolation likely predominates .what processes occur during `` simple '' percolation ? were it not for phase changes of venting gas within the regolith , the composition of the gas might be a weak consideration in this paper ( except for perhaps the molecular / atomic mass ) , and temperature would likely only affect seepage as in the diffusivity .water plays a special role in this study ( separate from concerns regarding resource exploitation or astrobiology ) , in that it is the only common substance encountering its triple point temperature in passing through the regolith , at least in many locations . in this case watermight not contribute to overpressure underneath the regolith leading to explosive outgassing .this would also imply that even relatively small volatile flows containing water would tend to freeze in place and remain until after the flow stops . for waterthis occurs at 0.01 , corresponding to 0.006 atm in pressure ( the pressure dropping by a factor of 10 every . ) effectively , water is the only relevant substance to behave in this fashion .the next most common substances may be large hydrocarbons such as nonane or benzene , obviously not likely abundant endogenous effluents from the interior . also h reaches its triple point , but changes radically with even modest concentrations of water .a similar statement can be made about hno , not a likely outgassing constituent .these will not behave as their pure state , either ; this leaves only h .water ( and sulfur ) has been found in significant concentration in volcanic glasses from the deep lunar interior ( saal et al .2008 , friedman et al .2009 ) , and has been liberated in large quantities in past volcanic eruptions .the measured quantities of tens of ppm imply juvenile concentrations of hundreds of ppm . from the heat flow measurements at the _ apollo 15 _ and _ 17 _ lunar surface experiment ( alsep ) sites ( langseth & keihm 1977 ) , we know that just below the surface , the stable regolith temperature is in the range of 247 - 253k ( dependent on latitude , of course ) , with gradients ( below 1 - 2 m ) of 1.2 - 1.8 deg m , which extrapolates to at m depths subsurface . 
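a one - line extrapolation of this conductive profile shows where it crosses the water triple point ; in the sketch below the surface temperature and the gradient are assumptions picked from within the quoted 247 - 253 k and 1.2 - 1.8 k per meter ranges .

```python
# extrapolate the shallow alsep temperature profile downward and find the
# depth at which it crosses the h2o triple point (273.16 K).
# the chosen surface value and gradient are assumptions within the quoted
# 247-253 K and 1.2-1.8 K/m ranges.
T_SUBSURFACE = 252.0   # K, stable temperature just below the surface (assumed)
GRADIENT     = 1.5     # K per metre below ~1-2 m depth (assumed)
T_TRIPLE     = 273.16  # K, water triple point

def temperature(depth_m: float) -> float:
    """conductive temperature at a given depth below the surface [m]."""
    return T_SUBSURFACE + GRADIENT * depth_m

# depth at which the profile reaches the triple point
z_triple = (T_TRIPLE - T_SUBSURFACE) / GRADIENT

for z in (0.0, 5.0, 10.0, 15.0):
    print(f"z = {z:5.1f} m : T = {temperature(z):6.1f} K")
print(f"triple point reached at z ~ {z_triple:.0f} m")
```

with these values the crossing falls near the 15 m regolith base adopted above , consistent with water being singled out above as the one common volatile that encounters its triple point within the regolith .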
with the exception of the outermost few centimeters, the entire regolith is below the triple point temperature and is too deep to be affected significantly by variations in heating over monthly timescales .this is an interesting depth , since in many areas the regolith is not quite this deep , as small as under a few meters near lichtenberg ( schultz & spudis 1983 ) and at the surveyor 1 site near flamsteed ( shoemaker & morris 1970 ) to depths at apollo sites ( summarized by mckay et al .1991 ) near the depths calculated above , up to probably 20 m or more in the highlands , and 40 m deep north of the south pole - aitken basin ( bart & melosh 2005 ) .presumably , the fractured megaregolith supporting the regolith likely does not contain as many small particles useful for retaining water ice , as we detail below , but it may accumulate ice temporarily .recent heat flow analyses ( saito et al .2007 ) account for longer timescale fluctuations placing the depth twice as far subsurface , increasing the lifetime of retained volatiles against sublimation accordingly ; for now we proceed with a more conventional , shorter - lived analysis .the escape of water and other volatiles into the vacuum is regulated by the state of the regolith and is presumably largely diffusive .we assume the knudsen flow regime ( low - density , non - collisional gas ) .of special importance is the measured abundance of small dust grains in the upper levels of the regolith , which perhaps pertains to depths m ( where bulk density is probably higher : carrier et al .assuming that particle distributions are self - similar in size distribution ( constant porosity ) , for random - walk diffusion out of a volume element , the diffusion time step presumably scales with the particle size , so the diffusion time . for particles of the same density ,therefore , one should compute the diffusion time by taking a -weighted average of particle sizes counted by mass , . this same moment of the distribution is relevant in 2. published size distributions measured to sufficiently small sizes include again mckay et al .( 1974 ) with m , and supplemented on smaller sizes with _ apollo 11 _ sample 10018 ( basu & molinaroli 2001 ) , which reduces the average to about 20 m .this is an overestimate because a large fraction ( 34 - 63% ) are agglutinates , which are groupings of much smaller particles .many agglutinates have large effective areas e.g. , , with values of a few up to 8 .( here is a mean radius from the center of mass to a surface element . 
) to a gas particle , the sub - particle size is more relevant than the agglutinate size , so the effective particle size of the entire sample might be much smaller , conceivably by a factor of a few .we compare this to experimental simulations , a reasonably close analogy being the sublimation of a slab of ice buried up to 0.2 m below a medium of simulant jsc mars-1 ( allen et al .1998 ) operating at and 7 mbar ( chevrier et al .2007 ) , close to lunar regolith conditions .this corresponds to the lifetime of 800 y for a 1 m thick ice layer covered by 1 m of regolith .the porosity of jsc mars-1 is 44 - 54% , depending on compactification whereas lunar soil has % at the surface , perhaps 40% at a depth of 60 cm , and slightly lower at large depths ( carrier et al .lunar soil is somewhat less diffusive by solely this measure .the mean size of jsc mars-1 is 93 m , times larger than that for _ apollo 17 _ and _ 11 _ regolith , accounting for agglutinates , so the sublimation timescale for regolith material is , very approximately , ky ( perhaps up to ky ) .other simulants are more analogous to lunar regolith , so future experiments might be more closely relevant . converting a loss rate for 1 m below the surface to 15m involves the depth ratio .farmer ( 1976 ) predicts an evaporation rate scaling as ( as opposed to the no - overburden analysis : ingersoll 1970 ) .experiments with varying depths of simulated regolith ( chevrier et al .2007 ) show that the variation in lifetime indeed goes roughly as , implying a 1 m ice slab lifetime at 15 m on the order of to y. the vapor pressure for water ice drops a factor of 10 in passing from to current temperatures of about just below the surface ( also the naked - ice sublimation rate : andreas 2007 ) , which would indicate that % of water vapor tends to stick in overlying layers ( without affecting the lifetime of the original layer , coincidentally ) .this begs the question of the preferred depth for an ice layer to form .the regolith porosity decreases significantly between zero and 1 m depth ( carrier et al .1991 ) which argues weakly for preferred formation at greater depth . at 30 m depth or more, the force of overburden tends to close off porosity .the current best limit on water abundance is from the sunrise terminator abundances from lace , which produces a number ratio of h/ with a central value of 0.014 ( with limits of 0 - 0.04 ) .this potentially indicates an actual h/ outgassing rate ratio up to 5 times higher ( hoffman & hodges 1975 ) .adopting the side rate of 7 g s in the - 44 amu mass range , and assuming most of this is ( vondrak , freeman & lindeman 1974 : given the much lower solar wind contributions of other species in this range ) , this translates to 0.1 g s of water ( perhaps up to 0.5 g s or 15 tonne y ) , in which case most of the gas must be ionized .the disagreement between side and lace is a major source of uncertainty ( perhaps due to the neutral / ionized component ambiguity ) .we discuss below that at earlier times the subsurface temperature was likely lower , but let us consider now the situation in which a source arises into pre - established regolith in recent times .we assume a planar diffusion geometry , again . in this case , we take spatial gradients over 15 m and scale the jsc mars-1 diffusivity of 1.7 cm s to 0.17 cm s for lunar regolith .since the triple - point pressure corresponds to number density , the areal particle flux density is s cm . 
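the flux density just quoted , and the sustainable patch area discussed next , follow from a one - line application of fick 's law . in the sketch below the diffusivity , the 15 m path length , and the 0.1 g per second water supply rate are taken from the text , while the triple - point number density is an assumption computed from the ideal gas law .

```python
import math

# fick's-law estimate of the water loss rate through 15 m of regolith held at
# the h2o triple point at its base, and the patch area that a 0.1 g/s water
# source could keep at that pressure.  D and the supply rate follow the text;
# the triple-point number density is an assumption from the ideal gas law.
K_B      = 1.380649e-23          # J/K
P_TRIPLE = 611.7                 # Pa, h2o triple-point pressure
T_TRIPLE = 273.16                # K
M_H2O    = 18.015e-3 / 6.022e23  # kg per molecule

D      = 0.17e-4                 # m^2/s, scaled lunar-regolith diffusivity
L      = 15.0                    # m, regolith thickness over the ice
supply = 1.0e-4                  # kg/s of water (0.1 g/s)

n_triple  = P_TRIPLE / (K_B * T_TRIPLE)   # molecules per m^3 at the base
flux      = D * n_triple / L              # molecules m^-2 s^-1 escaping
mass_flux = flux * M_H2O                  # kg m^-2 s^-1

area     = supply / mass_flux             # m^2 sustainable at the triple point
diameter = 2.0 * math.sqrt(area / math.pi)

print(f"escape flux    ~ {flux:.1e} molecules m^-2 s^-1")
print(f"sustained area ~ {area/1e6:.3f} km^2 (patch ~ {diameter:.0f} m across)")
```

this lands within a factor of about 1.5 of the 0.012 km figure given in the next paragraph ; the difference is mostly the assumed triple - point density and the rounding of the diffusivity .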
for a large outgassing site , with the same water fraction of water indicated by lace e.g. , total outgassing of 7 g s including 0.1 g s of water, this rate can maintain a total area of 0.012 km at the triple - point pressure i.e. , a 125 m diameter patch .this is much larger than the 15 m regolith depth , bearing out our assumed geometry .if this ice patch were 1 m thick , for example , the ice would need to be replenished every 4000 y. of course , this is a simple model and many complications could enter .we consider briefly the effects of latitude , change in lunar surface temperature over geological time , and the effects of aqueous chemistry on the regolith .the temperature just below the surface is legislated by the time - averaged energy flux in sunlight , so it scales according to the stefan - boltzmann law from the temperature at the equator ( ) according to .this predicts a 6k temperature drop from the equator ( at about 252k ) to the latitude of the aristarchus plateau ( ) or the most polar subsurface temperature measurement by _ apollo 15 _ , a drop to 224k at plato ( ) , for the coldest 10% of the lunar surface ( ) and for the coldest 1% ( ) . ,but there are flooded craters much higher . ]these translate into a regolith depth at the water triple - point of , 18 , 33 or 65 m deeper than at the equator , respectively , probably deeper in the latter cases than the actual regolith layer .permanently shadowed cold traps , covering perhaps 0.1% of the surface , have temperatures ( e.g. , adorjan 1970 , hodges 1980 ) .( note that the lunar south pole is a minor tlp site responsible for about 1% of robust report counts . ) since even at the equator the h triple point temperature occurs m below the surface , at increasing latitude this zone quickly moves into the megaregolith where the diffusivity is largely unknown but presumably higher ( neglecting the decrease in porosity due to compression by overburden ) . to study this, we assume a low diffusivity regolith layer 15 m deep overlying a high diffusivity layer which may contain channels directing gas quickly upward ( although perhaps not so easily horizontally ) .the diffusivity of the regolith near 0 is dominated by elastic reflection from mineral surfaces , without sticking , whereas at lower temperatures h molecules stick during most collisions ( haynes , tro & george 1992 ) .this is especially the case if the surfaces are coated with at least a few molecular layers of h molecules , of negligible mass .the sticking behavior of h molecules on water ice has been studied over most of the temperature range relevant here ( washburn et al .2003 ) ; but does depend somewhat on whether the ice is crystalline or amorphous ( speedy et al .in contrast the sticking behavior of h molecules on lunar minerals is much less well known . the lunar simulant diffusivity value above corresponds to a mean free path time of for h molecules near 0 . in contrastthe timescale for h molecules sticking on ice is ( from schorghofer & taylor [ 2007 ] and references therein ) : where is the areal density of h molecules on ice cm for density , and molecular mass .the sticking fraction varies from about 70% to 100% for to 120k .the equilibrium vapor pressure is given by 100 m . in the meantime, we should accomplish what we can from the ground .martin , r.t . ,winkler , j.l ., johnson s.w . & carrier , iii , w.d .1973 , `` measurement of conductance of apollo 12 lunar simulant taken in the molecular flow range for helium , argon , and krypton gases . 
'' unpublished report quoted in carrier et al . ( 1991 ) .

summary of proposed observations for probing lunar outgassing and tlps , with method , wavelength or band , advantages , and limitations :

- map of tlp activity : imaging monitor of the entire nearside at roughly km resolution ( optical ) . advantages : comprehensive schedulability ; more sensitive than the human eye . limitations : limited resolution .
- polarimetric study of dust : compare reflectivity in two monitors with perpendicular polarizers ( optical ) . advantages : easy to schedule ; further constrains dust behavior . limitations : requires use of two monitors .
- changes in small , active areas : ( a ) adaptive optic imaging at high resolution ( 0.95 micron , etc . ) , available `` on demand '' given good conditions , but undemonstrated , dependent on seeing , and covering only a km - scale diameter at maximum ; ( b ) `` lucky imaging '' at similar resolution , on demand given good conditions , but with a low duty cycle and dependent on seeing ; ( c ) _ hubble space telescope _ imaging , on demand given advance notice , but of limited availability and low efficiency ; ( d ) _ clementine / lro / chandrayaan-1 _ imaging , an existing or planned survey , but with limited epochs and low flexibility ; ( e ) _ lro / kaguya / chang'e-1 _ imaging at higher resolution , likewise an existing or planned survey with limited epochs and low flexibility .
- tlp spectrum : scanning spectrometer map , plus spectra taken during a tlp event ( nir and optical ) . advantages : may be the best method to find the composition and the tlp mechanism . limitations : requires an alert from the tlp imaging monitor ; limited to long events .
- regolith hydration measurement : nir hydration bands seen before vs. after a tlp in nir imaging , or a scanning spectrometer map followed by spectra taken soon after a tlp ( 2.9 , 3.4 micron ) . advantages : directly probes regolith / water chemistry ; may detect water . limitations : requires an alert from the monitor and flexible scheduling .
- relationship between tlps and outgassing : simultaneous monitoring of outgassed particles and of optical tlps . advantages : refute or confirm the tlp / outgassing correlation ; find outgassing loci . limitations : the optical monitor only covers the nearside ; more monitors would be better .
- subsurface water ice : ( a ) penetrating radar from earth ( mhz ) , directly finding subsurface ice with an existing technique , though the ice signal is easily confused with others ; ( b ) penetrating radar from lunar orbit ( mhz ) , with better resolution and probing deeper than neutron or gamma probes , though the ice signal is easily confused and the approach is more expensive ; ( c ) surface radar from lunar orbit ( ghz ) , with better resolution and able to study tlp - site surface changes , though perhaps redundant with high resolution imaging .
- high resolution tlp activity map : imagers at or near the l1 and l2 points covering the entire moon at 100 m resolution ( optical ) . advantages : map tlps with greater resolution and sensitivity over the entire moon . limitations : expensive , but could piggyback on a communications network .
- comprehensive particle map : two particle detectors in polar orbits 90 degrees apart in longitude . advantages : map outgassing events at full sensitivity . limitations : expensive ; even better response with 4 detectors .
- comprehensive map of outgas components : two mass spectrometers in adjacent polar orbits ( ions and neutrals ) . advantages : map outgassing events and find their composition . limitations : expensive ; even better with more detectors .
we follow paper i with predictions of how gas leaking through the lunar surface could influence the regolith , as might be observed via optical transient lunar phenomena ( tlps ) and related effects . we touch on several processes , but concentrate on the low and high flow rate extremes , perhaps the most likely . we model explosive outgassing for the smallest gas overpressure at the regolith base that releases the regolith plug above it . this disturbance 's timescale and affected area are consistent with observed tlps ; we also discuss other effects . for slow flow , escape through the regolith is prolonged by low diffusivity . water , found recently in deep magma samples , is unique among candidate volatiles , capable of freezing between the regolith base and surface , especially near the lunar poles . for major outgassing sites , we consider the possible accumulation of water ice . over geological time ice accumulation can evolve downward through the regolith . depending on gases additional to water , regolith diffusivity might be suppressed chemically , blocking seepage and forcing the ice zone to expand to larger areas , up to km scales , again , particularly at high latitudes . we propose an empirical path forward , wherein current and forthcoming technologies provide controlled , sensitive probes of outgassing . the optical transient / outgassing connection , addressed via earth - based remote sensing , suggests imaging and/or spectroscopy , but aspects of lunar outgassing might be more covert , as indicated above . tlps betray some outgassing , but does outgassing necessarily produce tlps ? we also suggest more intrusive techniques , from radar to in - situ probes . understanding lunar volatiles seems promising in terms of resource exploitation for human exploration of the moon and beyond , and offers interesting scientific goals in its own right . many of these approaches should be practiced in a pristine lunar atmosphere , before the significant confusing signals likely to be produced once humans return to the moon .
preprocessing and the analysis of preprocessed data are ubiquitous components of statistical inference , but their treatment has often been informal .we aim to develop a theory that provides a set of formal statistical principles for such problems under the banner of multiphase inference .the term `` multiphase '' refers to settings in which inferences are obtained through the application of multiple procedures in sequence , with each procedure taking the output of the previous phase as its input .this encompasses settings such as multiple imputation ( mi , ) and extends to other situations . in a multiphase setting , information can be passed between phases in an arbitrary form ; it need not consist of ( independent ) draws from a posterior predictive distribution , as is typical with multiple imputation .moreover , the analysis procedure for subsequent phases is not constrained to a particular recipe , such as rubin s mi combining rules ( ) .the practice of multiphase inference is currently widespread in applied statistics .it is widely used as an analysis technique within many publications any paper that uses a `` pipeline '' to obtain its final inputs or clusters estimates from a previous analysis provides an example .furthermore , projects in astronomy , biology , ecology , and social sciences ( to name a small sampling ) increasingly focus on building databases for future analyses as a primary objective .these projects must decide what levels of preprocessing to apply to their data and what additional information to provide to their users . providing all of the original dataclearly allows the most flexibility in subsequent analyses . in practice, the journey from raw data to a complete model is typically too intricate and problematic for the majority of users , who instead choose to use preprocessed output .unfortunately , decisions made at this stage can be quite treacherous .preprocessing is typically irreversible , necessitating assumptions about both the observation mechanisms and future analyses .these assumptions constrain all subsequent analyses .consequently , improper processing can cause a disproportionate amount of damage to a whole body of statistical results .however , preprocessing can be a powerful tool .it alleviates complexity for downstream researchers , allowing them to deal with smaller inputs and ( hopefully ) less intricate models. this can provide large mental and computational savings .two examples of such trade - offs come from nasa and high - throughput biology .when nasa satellites collect readings , the raw data are usually massive .these raw data are referred to as the `` level 0 '' data ( ) .the level 0 data are rarely used directly for scientific analyses .instead , they are processed to levels 1 , 2 , and 3 , each of which involves a greater degree of reduction and adjustment .level 2 is typically the point at which the processing becomes irreversible . provide an excellent illustration of this process for the atmospheric infrared sounder ( airs ) experiment .this processing can be quite controversial within the astronomical community .several upcoming projects , such as the advanced technology solar telescope ( atst ) will not be able to retain the level 0 or level 1 data ( ) .this inability to obtain raw data and increased dependence on preprocessing has transformed low - level technical issues of calibration and reduction into a pressing concern .high - throughput biology faces similar challenges .whereas reproducibility is much needed ( e.g. 
, ) , sharing raw datasets is difficult because of their sizes .the situation within each analysis is similar . confronted with an overwhelming onslaught of raw data , extensivepreprocessing has become crucial and ubiquitous .complex models for genomic , proteomic , and transcriptomic data are usually built upon these heavily - processed inputs .this has made the intricate details of observation models and the corresponding preprocessing steps the groundwork for entire fields . to many statisticians, this setting presents something of a conundrum .after all , the ideal inference and prediction will generally use a complete correctly - specified model encompassing the underlying process of interest and all observation processes . then , why are we interested in multiphase ?we focus on settings where there is a natural separation of knowledge between analysts , which translates into a separation of effort .the first analyst(s ) involved in preprocessing often have better knowledge of the observation model than those performing subsequent analyses .for example , the first analyst may have detailed knowledge of the structure of experimental errors , the equipment used , or the particulars of various protocols .this knowledge may not be easy to encapsulate for later analysts the relevant information may be too large or complex , or the methods required to exploit this information in subsequent analyses may be prohibitively intricate . hence , the practical objective in such settings is to enable the best possible inference given the constraints imposed and provide an account of the trade - offs and dangers involved . to borrow the phrasing of and , we aim for achievable practical efficiency rather than theoretical efficiency that is practically unattainable . multiphase inference currently represents a serious gap between statistical theory and practice .we typically delineate between the informal work of preprocessing and feature engineering and formal , theoretically - motivated work of estimation , testing , and so forth .however , the former fundamentally constrains what the latter can accomplish . as a result, we believe that it represents a great challenge and opportunity to build new statistical foundations to inform statistical practice .we present two examples that show both the impetus for and perils of undertaking multiphase analyses in place of inference with a complete , joint model .the first concerns microarrays , which allow the analysis of thousands of genes in parallel .we focus on expression microarrays , which measure the level of gene expression in populations of cells based upon the concentration of rna from different genes .these are typically used to study changes in gene expression between different experimental conditions . in such studies ,the estimand of interest is typically the log - fold change in gene expression between conditions .however , the raw data consist only of intensity measurements for each probe on the array , which are grouped by gene along with some form of controls .these intensities are subject to several forms of observation noise , including additive background variation and additional forms of interprobe and interchip variation ( typically modeled as multiplicative noise ) . 
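a minimal generative sketch of such an observation model , assuming the convolution form discussed next ( additive normal background plus exponentially distributed true signal , with lognormal multiplicative noise ) , is given below ; every parameter value is illustrative rather than an estimate from any real array .

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative parameters (not estimates from real data)
n_probes = 10_000
bg_mean, bg_sd = 100.0, 20.0   # additive background: Normal(bg_mean, bg_sd)
signal_scale   = 300.0         # true signal: Exponential(mean = signal_scale)
mult_sd        = 0.15          # lognormal multiplicative probe/chip noise

true_signal = rng.exponential(signal_scale, n_probes)            # latent "x"
background  = rng.normal(bg_mean, bg_sd, n_probes)                # additive noise
gain        = rng.lognormal(mean=0.0, sigma=mult_sd, size=n_probes)

observed = gain * true_signal + background                        # raw intensity "y"

# a naive preprocessing step: subtract the mean background and truncate at a
# small positive floor before log-transforming (roughly in the spirit of
# threshold-based correction); downstream analyses then see only `logged`.
corrected = np.maximum(observed - bg_mean, 1.0)
logged = np.log2(corrected)

print("raw intensity quartiles:    ", np.percentile(observed, [25, 50, 75]).round(1))
print("log2 'expression' quartiles:", np.percentile(logged, [25, 50, 75]).round(2))
```

the particular correction used here is not the point ; what matters is the shape of the pipeline : everything downstream of the final log - transformed vector has lost access to the raw intensities and to any measure of the background - correction uncertainty .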
to deal with these forms of observation noise , a wide range of background correction and normalization strategieshave been developed ( for a sampling , see , , , , , , ) .later analyses then focus on the scientific question of interest without , for the most part , addressing the underlying details of the observation mechanisms .background correction is a particularly crucial step in this process , as it is typically the point at which the analysis moves from the original intensity scale to the log - transformed scale . as a result, it can have a large effect on subsequent inferences about log - fold changes , especially for genes with low expression levels in one condition ( , ) .one common method ( mas5 ) , provided by one microarray manufacturer , uses a combination of background subtraction and truncation at a fixed lower threshold for this task ( ) .other more sophisticated techniques use explicit probability models for this de - convolution .a model with normally - distributed background variation and exponentially distributed expression levels has proven to be the most popular in this field ( , ) . unfortunately , even the most sophisticated available techniques pass only point estimates onto downstream analyses .this necessitates ad - hoc screening and corrections in subsequent analyses , especially when searching for significant changes in expression ( e.g. , ) . retaining more information from the preprocessing phases of these analyseswould allow for better , simpler inference techniques with greater power and fewer hacks .the motivation behind the current approach is quite understandable : scientific investigators want to focus on their processes of interest without becoming entangled in the low - level details of observation mechanisms .nevertheless , this separation can clearly compromise the validity of their results .the role of preprocessing in microarray studies extends well beyond background correction .normalization of expression levels across arrays , screening for data corruption , and other transformations preceding formal analysis are standard .each technique can dramatically affect downstream analyses .for instance , quantile normalization equates quantiles of expression distributions between arrays , removing a considerable amount of information .this mutes systematic errors ( ) , but it can seriously compromise analyses in certain contexts ( e.g. , mirna studies ) .another example of multiphase inference can be found in the estimation of correlations based upon indirect measurements .this appears in many fields , but astrophysics provides one recent and striking case .the relationships between the dust s density , spectral properties , and temperature are of interest in studies of star - forming dust clouds .these characteristics shed light on the mechanisms underlying star formation and other astronomical processes .several studies ( e.g. , , , , ) have investigated these relationships , finding negative correlations between the dust s temperature and spectral index . this finding is counter to previous astrophysical theory , but it has generated many alternative explanations .such investigations may , however , be chasing a phantasm .these correlations have been estimated by simply correlating point estimates of the relevant quantities ( temperature and spectral index ) based on a single set of underlying observations . 
as a result, they may conflate properties of this estimation procedure with the underlying physical mechanisms of interest .this has been noted in the field by , but the scientific debate on this topic continues . provide a particularly strong argument , using a cohesive hierarchical bayesian approach , that improper multiphase analyses have been a pervasive issue in this setting .improper preprocessing led to incorrect , negative estimates of the correlation between temperature and spectral index , according to .these incorrect estimates even appeared statistically significant with narrow confidence intervals based on standard methods . on a broader level, this case again demonstrates some of the dangers of multiphase analyses when they are not carried out properly .those analyzing this data followed an intuitive strategy : estimate what we want to work with ( and ) , then use it to estimate the relationship of interest .unfortunately , such intuition is not a recipe for valid statistical inference .multiphase inference has wide - ranging connections to both the theoretical and applied literatures .it is intimately related to previous work on multiple imputation and missing data ( ( ) , , , ) . in general, the problem of multiphase inference can be formulated as one of missing data . however , in the multiphase setting , missingness arises from the preprocessing choices made , not a probabilistic response mechanism .thus , we can leverage the mathematical and computational methods of this literature , but many of its conceptual tools need to be modified .multiple imputation addresses many of the same issues as multiphase inference and is indeed a special case of the latter .concepts such as congeniality between imputation and analysis models and self - efficiency ( ) have natural analogues and roles to play in the analysis of multiphase inference problems .multiphase inference is also tightly connected to work on the comparison of experiments and approximate sufficiency , going back to ( ) and continuing through and , among others .this literature has addressed the relationship between decision properties and the probabilistic structure of experiments , the relationship between different notions of statistical information , and notions of approximate sufficiency all of these are quite relevant for the study of multiphase inference .we view the multiphase setting as an extension of this work to address a broader range of real - world problems , as we will discuss in section [ sec : riskmonotone ] .the literature on bayesian combinations of experts also informs our thinking on multiphase procedures . 
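a stylized simulation makes the point concrete : even when the true temperature and spectral index are independent , correlating noisy per - sightline point estimates whose errors are negatively correlated ( as they typically are when both quantities are fit from the same spectrum ) produces a spurious negative sample correlation . the error structure below is invented for illustration and is not fit to any dust data .

```python
import numpy as np

rng = np.random.default_rng(1)
n_sightlines = 2_000

# true quantities: independent by construction (true correlation = 0)
temperature = rng.normal(18.0, 1.5, n_sightlines)   # K, illustrative
beta        = rng.normal(1.8, 0.2, n_sightlines)    # spectral index, illustrative

# measurement errors on the two point estimates, negatively correlated to mimic
# the temperature/spectral-index degeneracy of fitting both from one spectrum
err_cov = np.array([[1.5**2, -0.8 * 1.5 * 0.25],
                    [-0.8 * 1.5 * 0.25, 0.25**2]])
errors = rng.multivariate_normal([0.0, 0.0], err_cov, n_sightlines)

temp_hat = temperature + errors[:, 0]
beta_hat = beta + errors[:, 1]

print("corr(true T, true beta)           =",
      round(np.corrcoef(temperature, beta)[0, 1], 3))
print("corr(estimated T, estimated beta) =",
      round(np.corrcoef(temp_hat, beta_hat)[0, 1], 3))
```

the naive two - phase recipe `` estimate , then correlate '' recovers roughly this artificial negative value rather than the true zero , which is exactly the failure mode the hierarchical treatment cited above is designed to avoid .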
provides an excellent review of the field , while provides the core formalisms of interest for the multiphase setting .overall , this literature has focused on obtaining coherent ( or otherwise favorable ) decision rules when combining information from multiple bayesian agents , in the form of multiple posterior distributions .we view this as a best - case scenario , focusing our theoretical development towards the mechanics of passing information between phases .we also focus on the sequential nature of multiphase settings and the challenges this brings for both preprocessors and downstream analysts , in contrast to the more `` parallel '' or simultaneous focus of the literature mentioned above .there are also fascinating links between multiphase inference and the signal processing literature .there has been extensive research on the design of quantizers and other compression systems ; see for example .such work is often focused on practical questions , but it has also yielded some remarkable theory .in particular , the work of on the relationship between surrogate loss functions in quantizer design and -divergences suggests possible ways to develop and analyze a wide class of multiphase procedures , as we shall discuss in section [ sec : future ] .to formalize the notion of multiphase inference , we begin with a formal model for two - phase settings .the first phase consists of the data generation , collection , and preprocessing , while the second phase consists of inference using the output from the first phase .we will call the first - phase agent the `` preprocessor '' and the second - phase agent the `` downstream analyst '' . the preprocessor observes the raw data .this is a noisy realization of , variables of interest that are not directly obtainable from a given experiment , e.g. , gene expression from sequencing data , or stellar intensity from telescopic observations .we assume that the joint density of and with respect to product measure can be factored as here , encapsulates the underlying process of interest and encapsulates the observation process .we assume that is of fixed dimension in all asymptotic settings . in practice ,the preprocessor should be able to postulate a reasonable `` observation model '' , but will not always know the true `` scientific model '' .this is analogous to the mi setting , where the imputer does not know the form of the final analysis . from the original data generating process and outputs , with as missing data .the downstream analyst observes the preprocessor s output and has both and missing . ] using this model , the preprocessor provides the downstream analyst with some output , where is a ( possibly stochastic ) additional input .when is stochastic ( e.g. , an mcmc output ) , the conditional distribution is its theoretical description instead of its functional form .however , for simplicity , we will present our results when is a deterministic function of only , but many results generalize easily .given such , downstream analysts can carry out their inference procedures .figure [ fig : models ] depicts our general model setup .this model incorporates several restrictions .first , it is markovian with respect to , , and ; is conditionally independent of given ( and ) .second , the parameters governing the observation process ( ) and those governing the scientific process ( ) are distinct . in bayesian settings, we further assume that and are independent _ a priori_. 
the parameters are nuisance from the perspective of all involved ; the downstream analyst wants to draw inferences about and , and the preprocessor wants to pass forward information that will be useful for said inferences .if downstream inferences are bayesian with respect to , then ( which holds under ( [ e : model ] ) ) is sufficient for all inference under the given model and prior .hence , this conditional density is frequently of interest in our theoretical development , as is the corresponding marginalized model . we will compare results obtained with a fixed prior to those obtained in a more general setting to better understand the effects of nuisance parameters in multiphase inference .these restrictions are somewhat similar to those underlying rubin s ( ) definition of `` missing at random '' ; however , we do not have missing data mechanism ( mdm ) in this setting _ per se_. the distinction between missing and observed data ( and ) is fixed by the structure of our model . in place of mdm , we have two imposed patterns of missingness : one for the data - generating process , and one for the inference process .the first is , which creates a noisy version of the desired scientific variables . here, can be considered the missing data and the observed . for the inference process, the downstream analyst observes in place of but desires inference for based upon .hence , and are both missing for the downstream analyst .neither pattern is entirely intrinsic to the problem both are fixed by choice .the selection of scientific variables for a given marginal likelihood is a modeling decision .the selection of preprocessing is a design decision .this contrasts with the typical missing data setting , where mdm is forced upon the analyst by nature . with multiphase problems ,we seek to design and evaluate engineered missingness .thus the investigation of multiphase inference requires tools and ideas from design , inference , and computation in addition to the established theory of missing data . with this model in place , we turn to formally defining multiphase procedures .this is more subtle than it initially appears . in the mi setting , we focus on complete - data procedures for the downstream analyst s estimation and do not restrict the dependence structure between missing data and observations . in contrast , we restrict the dependence structure as in ( [ e : model ] ) , but place far fewer constraints on the analysts procedures . here , we focus our definitions and discussion on the two - phase case of a single preprocessor and downstream analyst .this provides the formal structure to describe the interface between any two phases in a chain of multiphase analyses . in our multiphase setting ,downstream analysts need not have any complete - data procedure in the sense of one for inferring from and ; indeed , they need not formally have one based only upon for inferring . we require only that they have a set of procedures for their desired inference using the quantities provided from earlier phases as inputs ( ) , not necessarily using direct observations of or .such situations are common in practice , as methods are often built around properties of preprocessed data such as smoothness or sparsity that need not hold for the actual values of . for the preprocessor ,the input is and the output is . here could consist of a vector of means with corresponding standard errors , or , for discrete , could consist of carefully selected cross - tabulations . 
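as a concrete sketch of this interface , the preprocessor below observes a noisy version of the scientific variables and emits only per - unit means and standard errors as its output ; the normal observation model and the dimensions are assumptions made for illustration .

```python
import numpy as np

rng = np.random.default_rng(2)

# scientific process: x depends on theta (here a common mean); observation
# process: y depends on x and nuisance parameters xi (per-unit noise scales).
theta = 5.0                                   # scientific parameter of interest
n_units, n_reps = 50, 8
x  = rng.normal(theta, 1.0, n_units)          # latent scientific variables
xi = rng.uniform(0.5, 3.0, n_units)           # nuisance: per-unit noise scales
y  = x[:, None] + rng.normal(0.0, xi[:, None], (n_units, n_reps))  # raw data

def preprocess(y_raw):
    """first phase: reduce the raw data to per-unit means and standard errors."""
    means = y_raw.mean(axis=1)
    ses   = y_raw.std(axis=1, ddof=1) / np.sqrt(y_raw.shape[1])
    return means, ses          # this tuple is the output; all the analyst sees

t_output = preprocess(y)
```

everything the downstream analyst can do about the scientific parameter is now a function of this tuple ; the next paragraphs describe how such an analyst adapts to richer or poorer versions of it .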
in general , clearly needs to be related to to capture inferential information , but its actual form is influenced by practical constraints ( e.g. , aggregation to lower than desired resolutions due to data storage capacity ) .for the downstream analyst , the input is and the output is an inference for . this analyst can obviously adapt .for example , suppose for each entry of .if the preprocessor provides , the analyst may simply use an unweighted mean to estimate .if the preprocessor instead gives the analyst , where contains standard errors , the latter could instead use a weighted mean to estimate .this adaptation extends to an arbitrary number of possible inputs , each of which corresponds to a set of constraints facing the preprocessor . to formalize this notion of adaptation, we first define an index set with one entry for each such set of constraints .this maps between forms of input provided by the preprocessor and estimators selected by the downstream analyst . in this way, captures the downstream analyst s knowledge of previous processing and the underlying probability model .thus , this index set plays an central role in the definition of multiphase inference problems , far beyond that of a mere mathematical formality ; it regulates the amount of mutual knowledge shared between the preprocessor and the downstream analyst .now , we turn to the estimators themselves .we start with point estimation as a foundation for a broader class of problems .testing begins with estimating rejection regions , interval estimation with estimating coverage , classification with estimating class membership , and prediction with estimating future observations and , frequently , intermediate parameters .the framework we present therefore provides tools that can be adapted for more than estimation theory .we define multiphase estimation procedures as follows : a _ multiphase estimation procedure _ is a set of estimators indexed by the set , where corresponds to the output of the first - phase method ; that is , is a family of estimators with different inputs .when clear , we will drop the subscripts and index the estimators in by their inputs .this definition provides enough flexibility to capture many practical issues with multiphase inference , and it can be iterated to define procedures for analyses involving a longer sequence of preprocessors and analysts .it also encompasses the definition of a missing data procedure used by .such procedures can not , of course , be arbitrarily constructed if they are to deliver results with general validity .hence , having defined these procedures , we will cull many of them from consideration in section [ sec : riskmonotone ] .the obvious choice of our estimand , suggested by our notation thus far , is the parameter for the scientific model , .this is very amenable to mathematical analysis and relevant to many investigations .hence , it forms the basis for our results in section [ sec : theory ] .however , for multiphase analyses , other classes of estimands may prove more useful in practice .in particular , functions of , future scientific variables , or future observations may be of interest .prediction of such quantities is a natural focus in the multiphase setting because such statements are meaningful to both the preprocessor and downstream analyst .such estimands naturally encompass a broad range of statistical problems including prediction , classification , and clustering .however , there is often a lack of mutual knowledge about , so the preprocessor 
can not expect to `` target '' estimation of in general , as we shall discuss in section [ sec : remarks ] .it is not automatic for multiphase estimation procedures to produce better results as the first phase provides more information . to obtain a sensible context for theoretical development, we must regulate the way that the downstream analyst adapts to different inputs .for instance , they should obtain better results ( in some sense ) when provided with higher - resolution information .this carries over from the mi setting ( , , , ) , where notions such as self - efficiency are useful for regulating the downstream analyst s procedures .we define a similar property for multiphase estimation procedures , but without restricting ourselves to the missing data setting .specifically , let indicate is a deterministic function of . in practice , could be a subvector , aggregation , or other summary of . a multiphase estimation procedure is _ risk monotone _ with respect to a loss function , for all pairs of outputs , implies .an asymptotic analogue of risk monotonicity is defined as would be expected , scaling the relevant risks at an appropriate rate to obtain nontrivial limits .this is a natural starting point for regulating multiphase estimation procedures ; stronger notions may be required for certain theoretical results .note that this definition does not require that `` higher - quality '' inputs necessarily lead to lower risk estimators .risk monotonicity requires only that estimators based upon a larger set of inputs perform no worse than those with strictly less information ( in a deterministic sense ) .however , risk monotonicity is actually quite tight in another sense .it requires that additional information can not be misused by the downstream analyst , imposing a strong constraint on mutual knowledge . for an example , consider the case of unweighted and weighted means . to obtain better results when presented with standard errors , the downstream analyst must know that they are being given ( the correct ) standard errors and to weight by inverse variances .this definition is related to the comparison of experiments , as explored by ( ) , but diverges on a fundamental level .our ordering of experiments , based on deterministic functions , is more stringent than that of , but they are related . indeed , our relation implies that of . in the latter work , an experiment is defined as more informative than experiment , denoted , if all losses attainable from are also attainable from .this relation is also implied when is sufficient for .our stringency stems from our broader objectives in the multiphase setting . from a decision - theoretic perspective , the partial ordering of experiments investigated by blackwell and others deal with which risks are attainable given pairs of experiments , allowing for arbitrary decision procedures .in contrast , our criterion restricts procedures based on whether such risks are actually attained , with respect to a particular loss function .this is because , in the multiphase setting , it is not generally realistic to expect downstream analysts to be capable of obtaining optimal estimators for all forms of preprocessing . the conceptually - simplest way to generate sucha procedure is to begin with a complete probability model for . under traditional asymptotic regimes ,all procedures consisting of bayes estimators based upon such a model will ( with full knowledge of the transformations involved in each and a fixed prior ) be risk monotone . 
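to make the unweighted / weighted mean example concrete , the monte carlo sketch below compares the squared - error risks of the two estimators under a toy heteroscedastic normal model ( all values illustrative ) : the estimator indexed by the richer output , which includes the standard errors , is never worse , as risk monotonicity requires .

```python
import numpy as np

rng = np.random.default_rng(3)

# toy check of risk monotonicity for the weighted/unweighted-mean example:
# y_i ~ N(theta, sigma_i^2) with known, heteroscedastic sigma_i.  the estimator
# indexed by the coarser output (means only) is the unweighted mean; the one
# indexed by the richer output (means plus standard errors) is the
# inverse-variance weighted mean.  all numerical values are illustrative.
theta  = 0.0
sigma  = rng.uniform(0.2, 3.0, size=25)      # per-observation noise scales
n_sims = 200_000

y = theta + sigma * rng.standard_normal((n_sims, sigma.size))

est_coarse = y.mean(axis=1)                          # uses the means only
w          = 1.0 / sigma**2
est_rich   = (y * w).sum(axis=1) / w.sum()           # uses means and std errors

risk_coarse = np.mean((est_coarse - theta) ** 2)
risk_rich   = np.mean((est_rich - theta) ** 2)
print(f"risk, unweighted mean (coarse input): {risk_coarse:.4f}")
print(f"risk, weighted mean   (richer input): {risk_rich:.4f}")
```

this is consistent with the claim above that estimators derived from a correctly specified probability model ( with known noise scales , the weighted mean is the mle and the posterior mean under a flat prior ) form a risk monotone procedure .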
the same is true asymptotically under the same regimes ( for squared - error loss ) for procedures consisting of mles under a fixed model . under some other asymptotic regimes , however , these principles of estimation do not guarantee risk - monotonicity ; we explore this further in section [ sec : missinfo ] . but such techniques are not the only way to generate risk monotone procedures from probability models . this is analogous to self - efficiency , which can be achieved by procedures that are neither bayesian nor mle ( , ) .
[ caption of figure [ fig : risk - monotone ] : and form the basis set of statistics . each of these has three descendants ( from and from ) . these descendants are deterministic functions of their parent , but they are not deterministic functions of any other basis statistics . given correctly - specified models for and , a risk monotone procedure can be constructed for all statistics ( ) shown here as described in the text . ]
a risk monotone procedure can be generated from any set of probability models for distinct inputs that `` span '' the space of possible inputs . suppose that an analyst has a set of probability models , all correctly specified , for , where ranges over a subset of the relevant index set . we also assume that this analyst has a prior distribution for each such basis model . these priors need not agree between models ; the analyst can build a risk - monotone procedure from an inconsistent set of prior beliefs . suppose that the inputs are not deterministic functions of each other and all other inputs can be generated as nontrivial deterministic transformations of one of these inputs . formally , we require for all distinct and , for each there exists a unique such that ( each output is uniquely descended from a single ) , as illustrated in figure [ fig : risk - monotone ] . this set can form a basis , in a sense , for the given procedure . using the given probability models with a single loss function and set of priors ( potentially different for each model ) , the analyst can derive a bayes rule under each model . for each , we require to be an appropriate bayes rule on said model . as for some function , we then have the implied , yielding the bayes rule for estimating based on , which is no less risky than . the requirement that each output derives from a unique means that each basis component has a unique line of descendants . within each line , each descendant is comparable to only a single in the sense of deterministic dependence . between these lines , such comparisons are not possible . this ensures the overall risk - monotonicity . biology provides an illustration of such bases .
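before turning to that biology illustration , here is a minimal simulation sketch of the unweighted / weighted - mean procedure discussed above . everything in it ( the normal observation model , the particular standard errors , the dictionary - based dispatch ) is our own illustrative choice rather than anything specified in the text ; it simply makes the adaptation to two input forms concrete and checks , by monte carlo , that the richer input is used no worse than the coarser one , as risk monotonicity requires .

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate(provided):
    """A toy multiphase estimation procedure: the estimator applied
    depends on which form of preprocessed input is provided."""
    if set(provided) == {"values"}:                     # coarser input
        return np.mean(provided["values"])              # unweighted mean
    if set(provided) == {"values", "ses"}:              # richer input
        w = 1.0 / provided["ses"] ** 2                  # inverse-variance weights
        return np.sum(w * provided["values"]) / np.sum(w)
    raise ValueError("input form not in the procedure's index set")

def simulate_risk(n_rep=20000, mu=1.0):
    sigma = np.array([0.2, 0.5, 1.0, 2.0, 4.0])         # known per-entry SEs (illustrative)
    err_coarse, err_rich = [], []
    for _ in range(n_rep):
        y = rng.normal(mu, sigma)
        err_coarse.append((estimate({"values": y}) - mu) ** 2)
        err_rich.append((estimate({"values": y, "ses": sigma}) - mu) ** 2)
    return np.mean(err_coarse), np.mean(err_rich)

risk_coarse, risk_rich = simulate_risk()
print(f"risk given values only  : {risk_coarse:.4f}")
print(f"risk given values + SEs : {risk_rich:.4f}")
```

because the values alone are a deterministic function of ( values , standard errors ) , a risk monotone procedure must report the second risk no larger than the first ; here the weighted mean is the best linear unbiased estimator given known variances , so the ordering holds in expectation and the simulation reproduces it up to monte carlo error .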
a wide array of methodological approacheshave been used to analyze high - throughput gene expression data .one approach , builds upon order and rank statistics ( , , ) .another common approach uses differences in gene expression between conditions or experiments , often aggregating over pathways , replicates , and so forth .each class of methods is based upon a different form of preprocessing : ranks transformations for the former , normalization and aggregation for the latter .taking procedures based on rank statistics and aggregate differences in expression as a basis , we can consider constructing a risk - monotone procedure as above .thus , the given formulation can bring together apparently disparate methods as a first step in analyzing their multiphase properties .such constructions are , unfortunately , not sufficient to generate all possible risk monotone procedures .obtaining more general conditions and constructions for risk monotone procedures is a topic for further work . by casting the examples in section [ sec : examples ] into the formal structure just established, we can clarify the practical role of each mathematical component and see how to map theoretical results into applied guidance .we also provide an example that illustrates the boundaries of the framework s utility , and another that demonstrates its formal limits .these provide perspective on the trade - offs made in formalizing the multiphase inference problem .the case of microarray preprocessing presented previously fits quite nicely into the model of section [ sec : model ] .there , corresponds to the observed probe - level intensities , corresponds to the true expression level for each gene under each condition , and corresponds to the parameters governing the organism s patterns of gene expression . in the microarraysetting , would characterize the relationship between expression levels and observed intensities , governed by .these nuisance parameters could include chip - level offsets , properties of any additive background , and the magnitudes of other sources of variation .the assumptions of a markovian dependence structure and distinct parameters for each part of the model appear quite reasonable in this case , as ( 1 ) the observation can only ( physically ) depend upon the sample preparation , experimental protocol , and rna concentrations in the sample and ( 2 ) the distributions and capture physically distinct portions of the experiment .background correction , normalization , and the reduction of observations to log - fold changes are common examples of preprocessing . as discussed previously, estimands based upon may be of greater scientific interest than those based upon .for instance , we may want to know whether gene expression changed between two treatments in a particular experiment ( a statement about ) than whether a parameter regulating the overall patterns of gene expression takes on a particular value . for the astrophysical example, the fit is similarly tidy .the raw astronomical observations correspond to , the true temperature , density , and spectral properties of each part of the dust cloud become , and the parameters governing the relationship between these quantities ( e.g. 
, their correlation ) form .the distribution governs the physical observation process , controlled by .this process typically includes the instruments response to astronomical signals , atmospheric distortions , and other earthbound phenomena .as before , the conditional independence of and given and is sensible based upon the problem structure , as is the separation of and . here corresponds to signals emitted billions or trillions of miles from earth , whereas the observation process occurs within ground- or space - based telescopes .hence , any non - markovian effects are quite implausible .preprocessing corresponds to the ( point ) estimates of temperature , density , and spectral properties from simple models of given and .the multiphase framework encompasses a broad range of settings , but it does not shed additional light on all of them . if is a many - to - one transformation of , then our framework implies that the preprocessor and downstream analyst face structurally different inference ( and missing data ) problems .this is the essence of multiphase inference , in our view .settings where is degenerate or is a one - to - one function of are boundary cases where our multiphase interpretation and framework add little . for a concrete example of these cases , consider a time - to - failure experiment , with the times of failure , . now, suppose that the experimenters actually ran the experiment in equally - sized batches .they observe each batch only until its first failure ; that is , they observe and report for each batch .subsequent analysts have access only to .this seems to be a case of preprocessing , but it actually resides at the very edge of our framework .we could take the complete observations to be and the batch minima to be .this would satisfy our markov constraint , with a singular , and hence deterministic , observation process simply selecting a particular order statistic within each batch .however , is one - to - one ; the preprocessor observes only the order statistics , as does the downstream analyst .there is no separation of inference between phases ; the same quantities are observed and missing to both the preprocessor and the downstream analyst .squeezing this case into the multiphase framework is technically valid but unproductive .the framework we present is not , however , completely generic .consider a chemical experiment involving a set of reactions .the underlying parameters describe the chemical properties driving the reactions , are the actual states of the reaction , and are the ( indirectly ) measured outputs of the reactions .the measurement process for these experiments , as described by , could easily violate the structure of our model in this case .for instance , the same chemical parameters could affect both the measurement and reaction processes , violating the assumed separation of and .even careful preprocessing in such a setting can create a fundamental incoherence .suppose the downstream analysis will be bayesian , so the preprocessor provides the conditional density of as a function of , latexmath:[ ] for arbitrary -neighborhoods of demonstrates .the result says that if we want distributed preprocessing to provide a lossless compression regardless of the actual form of the observation model , then even under the conditional independence assumption ( [ e : obsm ] ) , we must require the individual working models to _ collectively _ preserve sufficiency under the scientific model .note that preserving sufficiency for a model is a much 
weaker requirement than preserving the model itself .indeed , two models can have very different model spaces yet share the same _ form _ of sufficient statistics , as seen with i.i.d . and models , both yielding the sample average as a complete sufficient statistic .although we find this sufficiency - preserving condition quite informative about the limits of lossless distributed preprocessing , it is not a sufficient condition . as a counterexample , consider independent for , , where .for the true model , we assume as follows : , , and all variables are mutually independent .for the working model , we take as follows : independently , and with probability 1 for all . obviously is a sufficient statistic for both and because of their normality .because is _ minimally _ sufficient for , this implies that any sufficient statistic for must be sufficient for , therefore the sufficiency preserving condition holds . however , the collection of the complete sufficient statistics for under is not sufficient for under because the latter is no longer an exponential family .the trouble is caused by the failure of the working models to capture additional flexibility in the scientific model that is not controlled by its parameter .therefore , obtaining a condition that is both necessary and sufficient for lossless compression via distributed preprocessing is a challenging task . such a condition appears substantially more intricate than those presented in theorems [ thm : dsc ] and [ thm : necessary ] and may therefore be less useful as an applied guideline .below we discuss a few further subtleties .although theorem [ thm : dsc ] covers both likelihood and bayesian cases , it is important to note a subtle distinction between their general implications . in the likelihood setting ( [ e : con1 ] ) , we achieve lossless compression for all downstream analyses targeting .this allows the downstream analyst to obtain inferences that are robust to the preprocessor s beliefs about , and they are free to revise their inferences if new information about becomes available .but , the downstream analyst must address the nuisance parameter from the preprocessing step , a task a downstream analyst may not be able or willing to handle . in contrast , the downstream analyst need not worry about in the bayesian setting ( [ e : con2 ] ) .however , this is achieved at the cost of robustness .all downstream analyses are potentially affected by the preprocessors beliefs about .furthermore , because is required only to be sufficient for , it may not carry any information for a downstream analyst to check the preprocessor s assumptions about .fortunately , as it is generally logical to expect the preprocessor to have better knowledge addressing than the downstream analyst , such robustness may not be a serious concern from a practical perspective .theoretically , the trade - off between robustness and convenience is not clear - cut ; they can coincide for other types of preprocessing , as seen in section [ sec : missinfo ] below . 
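to make concrete the earlier observation in this passage that two models can share the same form of sufficient statistic while having very different model spaces , consider one standard pair ( our stand - in ; the specific models referred to in the text are not reproduced here ) : for i.i.d . data $x_1 , \dots , x_n$ , both the normal$(\theta , 1)$ and the poisson$(\theta)$ likelihoods factor through the sample total , equivalently the sample average ,
$$\prod_{i=1}^n \tfrac{1}{\sqrt{2\pi}}\,e^{-(x_i-\theta)^2/2} = \underbrace{e^{\theta\sum_i x_i - n\theta^2/2}}_{g\left(\sum_i x_i;\,\theta\right)}\ \underbrace{(2\pi)^{-n/2} e^{-\sum_i x_i^2/2}}_{h(x)} , \qquad
\prod_{i=1}^n \frac{e^{-\theta}\theta^{x_i}}{x_i!} = \underbrace{e^{-n\theta}\,\theta^{\sum_i x_i}}_{g\left(\sum_i x_i;\,\theta\right)}\ \underbrace{\Bigl(\prod_i x_i!\Bigr)^{-1}}_{h(x)} ,$$
so requiring a working model to preserve the form of the sufficient statistic constrains it far less than requiring it to reproduce the model itself .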
as discussed earlier , ( conditional ) dependencies among the observation variables across different s will generally rule out the possibility of achieving lossless compression by collecting individual sufficient statistics . this points to the importance of appropriate separation of labors when designing distributed preprocessing . in contrast , dependencies among s are permitted , at the expense of redundancy in sufficient statistics . we first consider deterministic dependencies , and for simplicity , take and constrain attention to the case of sufficiency for . suppose we have and forming a partition of , with a working model that satisfies the dsc for some . imagine we need to add a common variable to both and that is conditionally independent of given and has density , with the remaining model unchanged . however , the two researchers are unaware of the sharing of , so they set up and , with does not correspond to the scientific variable of interest . however , we notice that if we can force in , then we can recover . this forcing is not a mere mathematical trick . rather , it reflects an extreme yet practical strategy when researchers are unsure whether they share some components of their with others . the strategy is simply to retain statistics sufficient for the entire part that they may _ suspect _ to be common , which in this case means that both researchers will retain statistics sufficient for the in their entirety . mathematically , this corresponds to letting , where . it is then easy to verify that dsc holds , if we take , where . this is because when , both sides of ( [ eq : dsc ] ) are zero . when , we have ( adopting integration over functions )
$$\begin{aligned}
\int_{\eta'} \biggl[ \prod_{i=1}^2 \tilde p_{x_i'}\bigl(x_i' | \eta_i'\bigr) \biggr]\,{\mathrm{d}}p_{\eta'}\bigl({\eta}' | {\theta}\bigr)
&= \int_\eta \int_{\zeta_1} \biggl[ \prod_{i=1}^2 \tilde p_{x_i}(x_i | \eta_i)\,\delta_{\{z=\zeta_i\}} \biggr]\,{\mathrm{d}}p_{\eta}({\eta} | {\theta})\,\delta_{\{\zeta_1=\zeta_2\}}\,{\mathrm{d}}p_{z}(\zeta_1 | \theta) \\
&= \biggl[ \int_\eta \prod_{i=1}^2 \tilde p_{x_i}(x_i | \eta_i)\,{\mathrm{d}}p_{\eta}({\eta} | {\theta}) \biggr] \int_{\zeta_1} \delta_{\{\zeta_1=z\}}\,{\mathrm{d}}p_{z}(\zeta_1 | \theta) \\
&= p_{x}(x_1 , x_2 | \theta)\,p_{z}(z | \theta) = p_{x}(x | \theta) .
\end{aligned}$$
this technique of expanding to include shared parts of the allows the dsc and theorem [ thm : dsc ] to be applied to all models , not only those with distinct s . however , this construction also restricts working models to those with deterministic relationships between parts of and each . the derivation above demonstrates both the broader applications of dsc as a theoretical condition and its restrictive nature as a practical guideline . retaining sufficient statistics for both and can create redundancy . if each preprocessor observes without noise , then only one of them actually needs to retain and report their observation of . however , if each observes with independent noise , then both of their observations are required to obtain a sufficient statistic for . the noise - free case also provides a straightforward counterexample to the necessity of dsc .
assuming both preprocessors observe directly , as long as one of the copies of is retained via the use of the saturated density , the other copy can be modeled in any way and hence can be made to violate dsc without affecting their joint sufficiency for . regardless of the dependencies among the s , there is always a safe option open to the preprocessors for data reduction : retain sufficient for under . this will preserve sufficiency for under any scientific model : [ thm : safe ] if is correctly specified and satisfies ( [ e : obsm ] ) , then any collection of individual sufficient statistics with each sufficient for is jointly sufficient for in the sense of ( [ e : con1 ] ) for all . by the factorization theorem , we have for any . hence , by ( [ e : obsm ] ) ,
$$p_{t}(t | {\theta} , {\xi}) = \int_x \biggl[ \prod_{i=1}^r p_t(t_i | x_i , {\xi}_i) \biggr] {p_x}(x | {\theta})\,{\mathrm{d}}x ,$$
and substituting the individual factorizations into the integrand shows that is sufficient for , by the factorization theorem for sufficiency . theorem [ thm : safe ] provides a universal , safe strategy for sufficient preprocessing and a lower bound on the compression attainable from distributed sufficient preprocessing . as all minimal sufficient statistics for are functions of any sufficient statistic for , retaining minimal sufficient statistics for each results in less compression than any approach properly using knowledge of . however , the compression achieved relative to retaining itself may still be significant . minimal sufficient statistics for provide an upper bound on the attainable degree of compression by the same argument . achieving this compression generally requires that each preprocessor knows the true scientific model . between these bounds , the dsc ( [ eq : dsc ] ) shows a trade - off between the generality of preprocessing ( with respect to different scientific models ) and the compression achieved : the smaller the set of scientific models for which a given working model satisfies ( [ eq : dsc ] ) , the greater the potential compression from its sufficient statistics . more generally , stochastic dependence among s reduces compression and increases redundancy in distributed preprocessing . these costs are particularly acute when elements of control dependence among s , as seen in the following example where here is a column vector with s as its components , and is the usual kronecker product . if is known , then each researcher can reduce their observations to a scalar statistic and preserve sufficiency for . if is unknown , then each researcher must retain all of ( but not for ) in addition to these sums to ensure sufficiency for , because the minimal sufficient statistic for requires the computation of . thus , the cost of dependence here is additional pieces of information per preprocessor . dependence among the s forces the preprocessors to retain enough information to properly combine their individual contributions in the final analysis , downweighting redundant information . this is true even if they are interested only in efficient estimation of , leading to less reduction of their raw data and less compression from preprocessing than the independent case . from this investigation , we see that it is generally not enough for each researcher involved in preprocessing to reduce data based on even a correctly - specified model for their problem at hand . we instead need to look to other models that include each experimenter s data hierarchically , explicitly considering higher - level structure and relationships .
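stepping back to the safe strategy of theorem [ thm : safe ] , here is a minimal computational sketch . it assumes , purely for illustration , that preprocessor $i$ observes replicates $y_{ij} \sim$ normal$(x_i , \sigma_i^2)$ ; the theorem itself assumes no particular observation model , and the function names and numbers below are ours . the block summaries $( n_i , \bar y_i , \sum_j (y_{ij} - \bar y_i)^2 )$ are sufficient for $( x_i , \sigma_i )$ under that observation model , and the snippet verifies that the block likelihood , and hence any downstream likelihood obtained by integrating over a scientific model for the underlying variables , is recovered exactly from them .

```python
import numpy as np

rng = np.random.default_rng(1)

def reduce_block(y):
    """Summaries sufficient for (x_i, sigma_i) under y_ij ~ N(x_i, sigma_i^2)."""
    ybar = y.mean()
    return {"n": y.size, "mean": ybar, "ss": np.sum((y - ybar) ** 2)}

def loglik_raw(y, x, sigma):
    return (-0.5 * y.size * np.log(2 * np.pi * sigma**2)
            - 0.5 * np.sum((y - x) ** 2) / sigma**2)

def loglik_from_summary(s, x, sigma):
    # uses sum_j (y_j - x)^2 = ss + n * (ybar - x)^2
    quad = s["ss"] + s["n"] * (s["mean"] - x) ** 2
    return -0.5 * s["n"] * np.log(2 * np.pi * sigma**2) - 0.5 * quad / sigma**2

y = rng.normal(3.0, 2.0, size=50)          # one preprocessor's raw block
s = reduce_block(y)
for x, sigma in [(0.0, 1.0), (3.0, 2.0), (5.0, 0.5)]:
    assert np.isclose(loglik_raw(y, x, sigma), loglik_from_summary(s, x, sigma))
print("block likelihood recovered exactly from (n, mean, ss)")
```

this is the sense in which the strategy is safe : it commits the preprocessor to an observation model for her own block , but to no particular scientific model linking the blocks .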
however , significant reductions of the data are still possible despite these limitations .each need not be sufficient for each , nor must be sufficient for overall .this often implies that much less data need to be retained and shared than retaining sufficient statistics for each would demand .for instance , if a working model with satisfies the dsc for a given model and , then only means and covariance matrices of within each experiment need to be retained .the discussions above demonstrate the importance of involving downstream analysts in the design of preprocessing techniques .their knowledge of is extremely useful in determining what compression is appropriate , even if said knowledge is imperfect . constraining the scientific model to a broad class may be enough to guarantee effective preprocessing . for example , suppose we fix a working model and consider all scientific models that can be expressed as ( [ eq : dsc ] ) by varying the choices of .this yields a very broad class of hierarchical scientific models for downstream analysts to evaluate , while permitting effective distributed preprocessing based on the given working model.=1 practically , we see two paths to distributed preprocessing : coordination and caution .coordination refers to the downstream analyst evaluating and guiding the design of preprocessing as needed .such guidance can guarantee that preprocessed outputs will be as compact and useful as possible .however , it is not always feasible. it may be possible to specify preprocessing in detail in some industrial and purely computational settings .accomplishing the same in academic research or for any research conducted over time is an impractical goal . without such overall coordination, caution is needed .it is not generally possible to maintain sufficiency for without knowledge of the possible models unless the retained summaries are sufficient for itself .preprocessors should therefore proceed cautiously , carefully considering which scientific models they effectively exclude through their preprocessing choices .this is analogous to the oft - repeated guidance to include as many covariates and interactions as possible in imputation models ( , ) .having considered the lossless preprocessing , we now turn to more realistic but less clear - cut situations .we consider a less careful preprocessor and a sophisticated downstream analyst .the preprocessor selects an output , which may discard much information in but nevertheless preserves the identifiability of , and the downstream analyst knows enough to make the best of whatever output they are given .that is , the index set completely and accurately captures all relevant preprocessing methods .this does not completely capture all the practical constraints discussed in section [ sec : concepts ] .however , it is important to establish an upper bound on the performance of multiphase procedures before incorporating such issues .this upper bound is on the fisher information , and hence a lower bound on the asymptotic variances of estimators of .as we will see , nuisance parameters ( ) play a crucial role in these investigations .when using a lossy compression , an obvious question is how much information is lost compared to a lossless compression .this question has a standard asymptotic answer when the downstream analyst adopts an mle or bayes estimator , so long as nuisance parameters behave appropriately ( as will be discussed shortly ) .if the downstream analyst adopts some other procedures , such as an estimating 
equation , then there is no guarantee that the procedure based on is more efficient than the one based on .that is , one can actually obtain a more efficient estimator with less data when one is not using _ probabilistically principled _ methods , as discussed in detail in .therefore , as a first step in our theoretical investigations , we will focus on mles ; the results also apply to bayesian estimators under the usual regularity conditions to guarantee the asymptotic equivalence between mles and bayesian estimators .specifically , let and be the mles of based respectively on and under model ( [ e : model ] ) .we place standard regularity conditions for the joint likelihood of , assuming bounded third derivatives of the log - likelihood , common supports of the observation distributions with respect to , full rank for all information matrices at the true parameter value , and the existence of an open subset of the parameter space that contains .these conditions imply the first and second bartlett identities . however , the most crucial assumption here is a sufficient accumulation of information , indexed by an _ information size _ , to constrain the behavior of remainder terms in quadratic approximations of the relevant score functions .independent identically distributed observations and fixed - dimensional parameters would satisfy this requirement , in which case is simply the data size of , but weaker conditions can suffice ( for an overview , see ) . in general , this assumption requires that the dimension of both and are bounded as we accumulate more data , preventing the type of phenomenon revealed in . for multiphase inferences , cases where these dimensions are unbounded are common ( at least in theory ) and represent interesting settings where preprocessing can actually improve asymptotic efficiency , as we discuss shortly . to eliminate the nuisance parameter , we work with the observed fisher information matrices based on the profile likelihoods for , denoted by and respectively .let be the limit of , the so - called _ fraction of missing information _ ( see ) , as .the proof of the following result follows the standard asymptotic arguments for mles , with the small twist of applying them to profile likelihoods instead of full likelihoods .( we can also invoke the more general arguments based on decomposing estimating equations , as given in . )[ thm : missinfo ] under the conditions given above , we have asymptotically as , ^{-1 } \rightarrow f\ ] ] and ^{-1 } \rightarrow i - f.\ ] ] this establishes the central role of the fraction of missing information in determining the asymptotic efficiency of multiphase procedures under the usual asymptotic regime . as mentioned above , this is an ideal - case bound on the performance of multiphase procedures , and it is based on the usual squared - error loss ; both the asymptotic regime and amount of knowledge held by the downstream analyst are optimistic .we explore these issues below , focusing on ( 1 ) mutual knowledge and alternative definitions of efficiency , ( 2 ) the role of reparameterization , ( 3 ) asymptotic regimes and multiphase efficiency , and ( 4 ) the issue of robustness in multiphase inference . 
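before taking up those issues , here is a small worked instance of theorem [ thm : missinfo ] , in notation we introduce only for this sketch and with no nuisance parameter , so that the profile and full likelihoods coincide . take $y_1 , \dots , y_n$ i.i.d . normal$(\theta , 1)$ and let the preprocessed data be $t = (y_1 , \dots , y_m)$ with $m < n$ . then
$$\hat\theta(y) = \bar y_n , \qquad \hat\theta(t) = \bar y_m , \qquad \frac{\operatorname{Var}\bigl(\hat\theta(y)\bigr)}{\operatorname{Var}\bigl(\hat\theta(t)\bigr)} = \frac{1/n}{1/m} = \frac{m}{n} = 1 - f , \qquad f = 1 - \frac{m}{n} ,$$
so the fraction of missing information $f$ is exactly the relative loss of precision incurred by basing the analysis on $t$ rather than $y$ ; in richer models the analogous comparison holds asymptotically , matrix by matrix , which is the content of the theorem .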
in practice , downstream analysts are unlikely to have complete knowledge of . therefore , even if they were given the entire , they would not be able to produce the optimal estimator , making the value given by theorem [ thm : missinfo ] an unrealistic yardstick . nevertheless , theorem [ thm : missinfo ] suggests a direction for a more realistic standard . the classical theory of estimation focuses on losses of the form , where denotes the truth . risk based on this type of loss is a raw measure of performance , using the truth as a baseline . an alternative is regret , the difference between the risk of a given estimator and an ideal estimator ; that is , . regret is popular in the learning theory community and forms the basis for oracle inequalities . it provides a more adaptive baseline for comparison than raw risk , but we can push further . consider evaluating loss with respect to an estimator rather than the truth . for mean - squared error , this yields
$$r\bigl({\hat{{\theta}}}(t) , {\hat{{\theta}}}(y)\bigr) = \mathrm{e}\bigl\| {\hat{{\theta}}}(t) - {\hat{{\theta}}}(y) \bigr\|^2 .$$
can this provide a better baseline , and what are its properties ? for mles , behaves the same ( asymptotically ) as additive regret because theorem [ thm : missinfo ] implies that , as under the classical asymptotic regime ,
$$\begin{aligned}
r\bigl({\hat{{\theta}}}(t) , {\hat{{\theta}}}(y)\bigr) &= r\bigl({\hat{{\theta}}}(t) , {\theta}_0\bigr) - r\bigl({\hat{{\theta}}}(y) , {\theta}_0\bigr) .
\end{aligned}$$
for inefficient estimators , ( [ eq : same ] ) does not hold in general because is no longer guaranteed to be asymptotically uncorrelated with . in such cases , this is precisely the reason can be more efficient than or , more generally , there exists a constant such that is ( asymptotically ) more efficient than . in the terminology of , the estimation procedure is not _ self - efficient _ if ( [ eq : same ] ) does not hold , viewing as the complete data and as the observed data .
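the identity ( [ eq : same ] ) is easy to probe numerically . the sketch below uses the simplest possible instance , all choices ours : i.i.d . standard normal data , the raw - data mle versus the mle based on the first $m$ observations . in this instance the difference of the two estimators is exactly uncorrelated with the raw - data estimator , so the identity holds at every sample size rather than only asymptotically .

```python
import numpy as np

rng = np.random.default_rng(2)
theta0, n, m, n_rep = 0.0, 100, 25, 50000

y = rng.normal(theta0, 1.0, size=(n_rep, n))
that_Y = y.mean(axis=1)           # MLE from the raw data
that_T = y[:, :m].mean(axis=1)    # MLE from the reduced data (first m points)

d2 = (that_T - that_Y) ** 2       # squared distance between the two estimators
eT = (that_T - theta0) ** 2       # squared error of the preprocessed-data MLE
eY = (that_Y - theta0) ** 2       # squared error of the raw-data MLE

print(f"E||that_T - that_Y||^2 : {d2.mean():.4f}")
print(f"R(that_T) - R(that_Y)  : {eT.mean() - eY.mean():.4f}")
print(f"theory, 1/m - 1/n      : {1/m - 1/n:.4f}")
```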
indeed ,if , may actually be _ larger _ for a _ better _ because of the inappropriate baseline ; it is a measure of difference , not dominance , in such cases .hence , some care is needed in interpreting this measure .therefore , we can view ( [ eq : risk ] ) as a generalization of the usual notion of regret , or the relative regret if we divide it by .this generalization is appealing for the study of preprocessing : we are evaluating the estimator based on preprocessed data directly against what could be done with the complete raw data , sample by sample , and we no longer need to impose the restriction that the downstream analysts must carry out the most efficient estimation under a model that captures the actual preprocessing .this direction is closely related to the idea of strong efficiency from and , which generalizes the idea of asymptotic decorrelation beyond the simple ( but instructive ) setting covered here .such ideas from the theory of missing data provide a strong underpinning for the study of multiphase inference and preprocessing .theorem [ thm : missinfo ] also emphasizes the range of effects that preprocessing can have , even in ideal cases .consider the role that plays under different transformations of .although the eigenvalues of are invariant under one - to - one transformations of the parameters , submatrices of can change substantially .formally , if is transformed to , then the fraction of missing information for can be very different from that for .these changes mean that changes in parameterization can reallocate the fractions of missing information among resulting subparameters in unexpected and sometimes very unpleasant ways .this is true even for linear transformations ; a given preprocessing technique can preserve efficiency for and individually while performing poorly for .such issues have arisen in , for instance , the work of when attempting to characterize the behavior of multiple imputation estimators under uncongeniality . on a fundamental level ,theorem [ thm : missinfo ] is a negative result for preprocessing , at least for mles .reducing the data from to can only hinder the downstream analyst .formally , this means that ( asymptotically ) in the sense that is positive semi - definite . as a result , will dominate in asymptotic variance for any preprocessing .thus , the only justification for preprocessing appears to be pragmatic ; if the downstream analyst could not make use of for efficient inference or such knowledge could not be effectively transmitted , preprocessing provides a feasible way to obtain the inferences of interest .however , this conclusion depends crucially on the assumed behavior of the nuisance parameter .the usual asymptotic regime is not realistic for many multiphase settings , particularly with regards to . in many problems of interest, does not tend to zero as increases , preventing sufficient accumulation of information on the nuisance parameter .a typical regime of this type would accumulate observations from individual experiments , each of which brings its own nuisance parameter .such a process could describe the accumulation of data from microarrays , for instance , with each experiment corresponding to a chip with its own observation parameters , or the growth of astronomical datasets with time - varying calibration .in such a regime , preprocessing can have much more dramatic effects on asymptotic efficiency . 
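to make the reallocation point above concrete , here is a tiny numerical sketch . the two covariance matrices are invented for illustration : they stand for the asymptotic variance of an estimator of $(\theta_1 , \theta_2)$ based on the raw data and on the preprocessed data , respectively . component by component the fraction of missing information is a uniform 0.5 , yet after a linear reparameterization to the sum and the difference it becomes 2/3 for one coordinate and 0 for the other .

```python
import numpy as np

# asymptotic covariances of the estimator of (theta_1, theta_2), chosen for illustration
V_Y = np.eye(2)                          # based on the raw data Y
V_T = np.array([[2.0, 1.0],
                [1.0, 2.0]])             # based on the preprocessed data T

def per_component_fmi(V_full, V_reduced):
    """Component-wise fraction of missing information:
    1 - Var(component | full data) / Var(component | reduced data)."""
    return 1.0 - np.diag(V_full) / np.diag(V_reduced)

A = np.array([[1.0, 1.0],                # eta_1 = theta_1 + theta_2
              [1.0, -1.0]])              # eta_2 = theta_1 - theta_2

print(per_component_fmi(V_Y, V_T))                       # [0.5, 0.5]
print(per_component_fmi(A @ V_Y @ A.T, A @ V_T @ A.T))   # [0.667, 0.0]
```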
in the presence of nuisance parameters , inference based on can be more robust and even more efficient than inference based on . it is well - known that the mle can be inefficient and even inconsistent in regimes where ( going back to at least ) . bayesian methods provide no panacea either . marginalization over the nuisance parameter is appealing , but resulting inferences are typically sensitive to the prior on , even asymptotically . in many cases ( such as the canonical neyman - scott problem ) , only a minimal set of priors provide even consistent bayes estimators . careful preprocessing can , however , enable principled inference in such regimes . such phenomena stand in stark contrast to the theory of multiple imputation . in that theory , complete data inferences are typically assumed to be valid . thus , under traditional missing data mechanisms , the observed data ( corresponding to ) can not provide better inferences than . this is not necessarily true in multiphase settings . if the downstream analyst is constrained to particular principles of inference ( e.g. , mle or bayes ) , then estimators based on can provide lower asymptotic variance than those based on . this occurs , in part , because the mechanisms generating and from are less restricted in the multiphase setting compared to the traditional missing - data framework . principled inferences based on would , in the multiphase setting , generally dominate those based on either or . however , such a relationship need not hold between and without restrictions on the behavior of . we emphasize that this does not contradict the general call in to follow the probabilistically - principled methods ( such as mle and bayes recipes ) to prevent violations of self - efficiency , precisely because the well - established principles of single - phase inference may need to be `` re - principled '' before they can be equally effective in the far more complicated multiphase setting . in the simplest case , if a can be found such that it is a pivot with respect to and remains dependent upon , then sensitivity to the behavior of can be eliminated by preprocessing . in such cases , an mle or bayes rule based on can dominate that based on even asymptotically . one such example would be providing -statistics from each of a set of experiments to the downstream analyst . this clearly limits the range of feasible downstream inferences . with these -statistics , detection of signals via multiple testing ( e.g. , ) would be straightforward , but efficient combination of information across experiments could be difficult . this is a ubiquitous trade - off of preprocessing : reductions that remove nuisance parameters and improve robustness necessarily reduce the amount of information available from the data . these trade - offs must be considered carefully when designing preprocessing techniques ; universal utility is unattainable without the original data . a more subtle case involves the selection of as a `` partial pivot '' . in some settings , there exists a decomposition of as such that for some fixed and all , and the distribution of is free of for all values of . many normalization techniques used in the microarray application of section [ sec : examples ] can be interpreted in this light . these methods attempt to reduce the unbounded set of experiment - specific nuisance parameters affecting to a bounded , manageable size . for example , suppose each processor observes , . the downstream analyst wants to estimate , considering and as nuisance parameters .
in our previous notation, we have and .suppose each preprocessor reduces her data to , where is the ols estimator of based on .the distribution of each depends on but is free of .hence , is a partial pivot as defined above , with and .such pivoting techniques can allow to possess favorable properties even when is inconsistent or grossly inefficient .as mentioned before , this kind of careful preprocessing can dominate bayesian procedures in the presence of nuisance parameters when can grow with . in these regimes , informative priors on can affect inferences even asymptotically .however , reducing to so only the -part of is relevant for s distribution allows information to accumulate on , making inferences far more robust to the preprocessor s beliefs about .these techniques share a common conceptual framework : invariance .invariance has a rich history in the bayesian literature , primarily as a motivation for the construction of noninformative or reference priors ( e.g. , , , , , ) .it is fundamental to the pivotal methods discussed above and arises in the theory of partial likelihood ( ) .we see invariance as a core principle of preprocessing , although its application is somewhat different from most bayesian settings .we are interested in finding functions of the data whose distributions are invariant to subsets of the parameter , not priors invariant to reparameterization .for instance , the rank statistics that form the basis for cox s proportional hazards regression in the absence of censoring ( ) can be obtained by requiring a statistic invariant to monotone transformations of time .indeed , cox s regression based on rank statistics can be viewed as an excellent example of eliminating an infinite dimensional nuisance parameter , i.e. , the baseline hazard , via preprocessing , which retains only the rank statistics .the relationship between invariance in preprocessing , modeling , and prior formulation is a rich direction for further investigation .an interesting practical question arises from this discussion of robustness : how realistic is it to assume efficient inference with preprocessed data ?this may seem unrealistic as preprocessing is frequently used to simplify problems so common methods can be applied .however , preprocessing can make many assumptions more appropriate .for example , aggregation can make normality assumptions more realistic , normalization can eliminate nuisance parameters , and discretization greatly reduces reliance on parametric distributional assumptions altogether. it may therefore be more appropriate to assume that efficient estimators are generally used with preprocessed data than with raw data .the results and examples explored here show that preprocessing is a complex topic in even large - sample settings .it appears formally futile ( but practically useful ) in standard asymptotic regimes . under other realistic asymptotic regimes , preprocessing emerges as a powerful tool for addressing nuisance parameters and improving the robustness of inferences .having established some of the formal motivation and trade - offs for preprocessing , we discuss further extensions of these ideas into more difficult settings in section [ sec : future ] . in some cases , effective preprocessing techniques are quite apparent .if forms an exponential family with parameter or , then we have a straightforward procedure : retain a minimal sufficient statistic . 
to be precise , we mean that one of the following factorizations holds for a sufficient statistic of bounded dimension : retaining this sufficient statistic will lead to a lossless compression , assuming that the first - phase model is correct . unfortunately , such nice cases are rare . even the bayesian approach offers little reprieve . integrating with respect to a prior removes the observation model from the exponential family ; consider , for instance , a normal model with unknown variance becoming a distribution . if is approximately quadratic as a function of , then retaining its mode and curvature would seem to provide much of the information available from the data to downstream analysts . however , such intuition can be treacherous . if a downstream analyst is combining inferences from a set of experiments , each of which yielded an approximately quadratic likelihood , the individual approximations may not be enough to provide efficient inferences . approximations that hold near the mode of each experiment s likelihood need not hold away from these modes , including at the mode of the joint likelihood from all experiments . thus , remainder terms can accumulate in the combination of such approximations , degrading the final inference on . furthermore , the requirement that be approximately quadratic in is quite stringent . to justify such approximations , we must either appeal to asymptotic results from likelihood theory or confine our attention to a narrow class of observation models . unfortunately , asymptotic theory is often an inappropriate justification in multiphase settings , because grows in dimension with in many asymptotic regimes of interest , so there is no general reason to expect information to accumulate on . these issues are of particular concern as such quadratic approximations are a standard implicit justification for passing point estimates with standard errors onto downstream analysts . moving away from these cases , solutions become less apparent . no processing ( short of passing the entire likelihood function ) will preserve all information from the sample when sufficient statistics of bounded dimension do not exist . however , multiphase approaches can still possess favorable properties in such settings . we begin by considering a stubborn downstream analyst : she has her method and will not consider anything else . for example , this analyst could be dead set on using linear discriminant analysis or anova . the preprocessor has only one way to affect her results : carefully designing a particular given to the downstream analyst . such a setting is extreme . we are saying that the downstream analyst will charge ahead with a given estimator regardless of her input with neither reflection nor judgment . we investigate this setting because it maximizes the preprocessor s burden in terms of her contribution to the final estimate s quality . formally , we consider a fixed second - stage estimator ; that is , the form of its input and the function producing are fixed , but the mechanism actually used to generate is not .
could be , for example , a vector of fixed dimension .as we discuss below , admissible designs for the first - phase with a fixed second - phase method are given by a ( generalized ) bayes rule .this uses the known portion of the model to construct inputs for the second stage and assumes that any prior the preprocessor has on is equivalent to what a downstream analyst would have used in the preprocessor s position .formally , this describes all rules that are admissible among the class of procedures using a given second - stage method , following from previous complete class results in statistical decision theory ( e.g. , , ) .assume that the second - stage procedure is fixed as discussed above and we are operating under the model ( [ e : model ] ) .further assume that the preprocessor s prior on is the only such prior used in all bayes rule constructions .for , consider a smooth , strictly convex loss function . then, under appropriate regularity conditions ( e.g. , , ) , if is a smooth function of , then all admissible procedures for generating are bayes or generalized bayes rules with respect to the risk .the same holds when is restricted to a finite set .this guideline follows directly from conventional complete class results in decision theory .we omit technical details here , focusing instead on the guideline s implications . however , a sketch of its proof proceeds along the following lines .there are two ways to approach this argument : intermediate loss and geometry .the intermediate loss approach uses an intermediate loss function .this is the loss facing the preprocessor given a fixed downstream procedure . if is well - behaved , in the sense of satisfying standard conditions ( strict convexity , or a finite parameter space , and so on ) , then the proof is complete from previous results for real .similarly , if is restricted to a finite discrete set , then we face a classical multiple decision problem and can apply previous results to .these straightforward arguments cover a wide range of realistic cases , as has shown .otherwise , we must turn to a more intricate geometric argument .broadly , this construction uses a convex hull of risks generated by attainable rules .this guideline has direct bearing upon the development of inputs for machine learning algorithms , typically known as _feature engineering_. given an algorithm that uses a fixed set of inputs , it implies that using a correctly - specified observation model to design these inputs is necessary to obtain admissible inferences .thus , it is conceptually similar to `` rao - blackwellization '' over part of a probability model .however , several major caveats apply to this result .first , on a practical level , deriving such bayes rules is quite difficult for most settings of interest .second , and more worryingly , this result s scope is actually quite limited . as we discussed in section[ sec : missinfo ] , even bayesian estimators can be inconsistent in realistic multiphase regimes .however , these estimators are still admissible , as they can not be dominated in risk for particular values of the nuisance parameters .admissibility therefore is a minimal requirement ; without it , the procedure can be improved uniformly , but with it , it can still behave badly in many ways . 
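here is a small simulation sketch of the setting just described ; all modeling choices , sample sizes , and function names are ours , and no claim of optimality is made for either input . the downstream rule is frozen to be an unweighted average of whatever single number each preprocessor supplies , and the preprocessor's only lever is how that number is constructed from her block of data . designing the input with the known observation model in mind , here by passing the within - experiment mean rather than one raw observation , changes the composed risk that the guideline above is evaluated against .

```python
import numpy as np

rng = np.random.default_rng(3)

def downstream_rule(z):
    """The fixed second-stage estimator: an unweighted average of the
    single summary each preprocessor supplies."""
    return np.mean(z)

def composed_risk(make_input, theta0=2.0, n_rep=20000):
    n_per_exp = np.array([2, 5, 10, 40, 80])     # unequal experiment sizes
    err = np.empty(n_rep)
    for r in range(n_rep):
        data = [rng.normal(theta0, 1.0, size=n) for n in n_per_exp]
        z = np.array([make_input(y) for y in data])
        err[r] = (downstream_rule(z) - theta0) ** 2
    return err.mean()

def naive_input(y):        # ignores the observation model: pass one raw value
    return y[0]

def model_based_input(y):  # uses it: pass the within-experiment mean
    return y.mean()

print(f"risk with naive inputs       : {composed_risk(naive_input):.4f}")
print(f"risk with model-based inputs : {composed_risk(model_based_input):.4f}")
```

the formal guideline goes further than this comparison : among all ways of constructing the inputs , the admissible ones are ( generalized ) bayes rules with respect to this composed risk .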
finally , there is the problem of robustness . an optimal input for one downstream estimator may be a terrible input for another estimator , even if and take the same form of inputs . such considerations are central to many real - world applications of preprocessing , as researchers aim to construct databases for a broad array of later analyses . however , this result does show that engineering inputs for downstream analyses using bayesian observation models can improve overall inferences . how to best go about this in practice is a rich area for further work . as befits first steps , we are left with a few loose ends and puzzles . starting with the dsc condition ( [ eq : dsc ] ) of section [ sec : sufficiency ] , we provide a simple counterexample to its necessity . suppose we have . let independent of each other . now , let , , , where , , is a vector of signs , or for , denotes the element - wise absolute value , and denotes the hadamard product . we fix . as our working model , we posit that independently . then , we clearly have as a sufficient statistic for both and . however , the dsc does not hold for this working model . we can not write the actual joint distribution of as a marginalization of with respect to some distribution over in such a way that is sufficient for . to enforce under the working model , any such model must use to share this information . for this example , we can obtain a stronger result : no factored working model exists such that ( 1 ) is sufficient for under and ( 2 ) the dsc holds . for contradiction , assume such a working model exists . under this working model , is conditionally independent of given , so we can write . as the dsc holds for this working model , we have
$$p_{y}(y_1 , y_2 | \theta) = \int_{\eta} \biggl[ \prod_{i=1}^2 h_i\bigl(y_i^\top y_i ; g_i(\eta)\bigr) \biggr] p_{\eta}({\mathrm{d}}{\eta} | {\theta}) .$$
hence , we must have conditionally independent of given . however , this conditional independence does not hold under the true model . hence , the given working model can not both satisfy the dsc and have sufficient for each . the issue here is unparameterized dependence , as mentioned in section [ sec : sufficiency ] . the s have a dependence structure that is not captured by . thus , requiring that a working model preserves sufficiency for does not ensure that it has enough flexibility to capture the true distribution of . a weaker condition than the dsc ( [ eq : dsc ] ) that is necessary and sufficient to ensure that all sufficient statistics for are sufficient for may be possible . from sections [ sec : missinfo ] and [ sec : completeclass ] , we are left with puzzles rather than counterexamples . as mentioned previously , many optimality results are trivial without sufficient constraints . for instance , minimizing risk or maximizing fisher information naively yield uninteresting ( and impractical ) multiphase strategies : have the preprocessor compute optimal estimators , then pass them downstream . overly tight constraints bring their own issues . restricting downstream procedures to excessively narrow classes ( e.g.
, point estimates with standard errors ) limits the applied utility of resulting theory and yields little insight on the overall landscape of multiphase inference .striking the correct balance with these constraints is a core challenge for the theory of multiphase inference and will require a combination of computational , engineering , and statistical insights .as we discussed in sections [ sec : concepts ] and [ sec : theory ] , we have a deep well of questions that motivate further research on multiphase inference . these range fromthe extremely applied ( e.g. , enhancing preprocessing in astrophysical systems ) to the deeply theoretical ( e.g. , bounding the performance of multiphase procedures in the presence of nuisance parameters and computational constraints ) .we outline a few directions for this research below .but , before we look forward , we take a moment to look back and place multiphase inference within the context of broader historical debates .such `` navel gazing '' helps us to understand the connections and implications of the theory of multiphase inference . on a historical note ,the study of multiphase inference touches the long - running debate over the role of decision theory in statistics .one side of this debate , championed by wald and lehmann ( among others ) , has argued that decision theory lies at the core of statistical inference .risk - minimizing estimators and , more generally , optimal decision rules play a central role in their narrative .even subjectivists such as savage and de finetti have embraced the decision theoretic formulation to a large extent .other eminent statisticians have objected to such a focus on decisions .as noted by , fisher in particular vehemently rejected the decision theoretic formulation of statistical inference .one interpretation of fisher s objections is that he considered decision theory useful for eventual economic decision - making , but not for the growth of scientific knowledge .we believe that the study of multiphase inference brings a unifying perspective to this debate .fisher s distinction between intermediate processing and final decisions is fundamental to the problem of multiphase inference .however , we also view decision theory as a vital theoretical tool for the study of multiphase inference .passing only risk - minimizing point estimators to later analysts is clearly not a recipe for valid inference .the key is to consider the use of previously generated results explicitly in the final decision problem . in the study of multiphase inference, we do so by focusing on the separation of knowledge and objectives between agents . 
such separation between preprocessing and downstream inference maps nicely to fisher s distinction between building scientific knowledge and reaching actionable decisions .thus , we interpret fisher s line of objections to decision - theoretic statistics as , in part , a rejection of adopting a myopic single - phase perspective in multiphase settings .we certainly do not believe that our work will bring closure to such an intense historical debate .however , we do see multiphase inference as an important bridge between these competing schools of thought .we see a wide range of open questions in multiphase inference .can more systematic ways to leverage the potential of preprocessing be developed ?is it possible to create a mathematical `` warning system , '' alerting practitioners when their inferences from preprocessed data are subject to severe degradation and showing where additional forms of preprocessing are required ? and ,can multiphase inference inform developments in distributed statistical computation and massive - data inference ( as outlined below in section [ sec : computation ] ) ?all of these problems call for a shared collection of statistical principles , theory , and methods .below , we outline a few directions for the development of these tools for multiphase inference .the mechanics of passing information between phases constitute a major direction for further research .one approach leverages the fact that the likelihood function itself is always a minimal sufficient statistic .thus , a set of ( computationally ) efficient approximations to the likelihood function for could provide the foundation for a wide range of multiphase methods .many probabilistic inference techniques for the downstream model ( e.g. , mcmc samplers ) would be quite straightforward to use given such an approximation .the study of such multiphase approximations also offers great dividends for distributed statistical computation , as discussed below .we believe these approximations are promising direction for general - purpose preprocessing . however , there are stumbling blocks. first , nuisance parameters remain an issue .we want to harness and understand the robustness benefits offered by preprocessing , but likelihood techniques themselves offer little guidance in this direction .even the work of on partial likelihood focuses on the details of estimation once the likelihood has been partitioned .we would like to identify the set of formal principles underlying techniques such as partial pivoting ( to mute the effect of infinite - dimensional nuisance parameters ) , building a more rigorous understanding of the role of preprocessing in providing robust inferences .as discussed in section [ sec : missinfo ] , invariance relationships may be a useful focus for such investigations , guiding both bayesian and algorithmic developments .second , we must consider the burden placed on downstream analysts by our choice of approximation .probabilistic , model - based techniques can integrate such information with little additional development .however , it would be difficult for a downstream analyst accustomed to , say , standard regression methods to make use of a complex emulator for the likelihood function. 
the burden may be substantial for even sophisticated analysts .for instance , it could require a significant amount of effort and computational sophistication to obtain estimates of from such an approximation , and estimates of are often of interest to downstream analysts in addition to estimates of . with these trade - offs in mind and through the formal analysis of widely - applicable multiphase techniques, we can begin to establish bounds on the error properties of such techniques in a broad range of problems under realistic constraints ( in both technical and human terms ) .more general constraints , for instance , can take the form of upper bounds on the regret attainable with a fixed amount of information passed from preprocessor to downstream analyst for fixed classes of scientific models .extensions to nonparametric downstream methods would have both practical and theoretical implications . in cases where the observation model is well - specified butthe scientific model is less clearly defined , multiphase techniques can provide a useful alternative to computationally - expensive semi - parametric techniques .fusing principled preprocessing with flexible downstream inference may provide an interesting way to incorporate model - based subject - matter knowledge while effectively managing the bias - variance trade - off .the directions discussed above share a conceptual , if not technical , history with the development of congeniality ( ) .both the study of congeniality in mi and our study of multiphase inference seek to bound and measure the amount of degradation in inferences that can occur when agents attempt ( imperfectly ) to combine information . despite these similarities ,the treatment of nuisance parameters are rather different .nuisance parameters lie at the very heart of multiphase inference , defining many of its core issues and techniques . for mi ,the typical approaches have been to integrate them out in a bayesian analysis ( e.g. , ) or assume that the final analyst will handle them ( e.g. , ) .recent work by has shed new light on the role of nuisance parameters in mi , but the results are largely negative , demonstrating that nuisance parameters are often a stumbling block for practical mi inference .understanding the role of preprocessing in addressing nuisance parameters , providing robust analyses , and effectively distributing statistical inference represent further challenges beyond those pursued with mi .therefore , much remains to be done in the study of multiphase inference , both theoretical and methodological .we also see multiphase inference as a source for computational techniques , drawing inspiration from the history of mi .mi was initially developed as a strategy for handling missing data in public data releases . however , because mi separates the task of dealing with incomplete data from the task of making inferences , its use spread .it has frequently been used as a practical tool for dealing with missing - data problems where the joint inference of missing data and model parameters would impose excessive modeling or computational burdens .that is , increasingly the mi inference is carried out from imputation through analysis by a single analyst or research group .this is feasible as a computational strategy only because the error properties and conditions necessary for the validity of mi are relatively well - understood ( e.g. 
, , ). multiphase methods can similarly guide the development of efficient , statistically - valid computational strategies . once we have a theory showing the trade - offs and pitfalls of multiphase methods , we will be equipped to develop them into general computational techniques . in particular , our experience suggests that models with a high degree of conditional independence ( e.g. , exchangeable distributions for ) can often provide useful inputs for multiphase inferences , even when the true overall model has a greater degree of stochastic structure . the conditional independence structure of such models allows for highly parallel computation with first - phase procedures , providing huge computational gains on modern distributed systems compared to methods based on the joint model . for example , in , a factored model was used to preprocess a massive collection of irregularly - sampled astronomical time series . the model was sophisticated enough to account for complex observation noise , yet its independence structure allowed for efficient parallelization of the necessary computation . its output was then combined and used for population - level analyses . just as markov chain monte - carlo ( mcmc ) has produced a windfall of tools for approximate high - dimensional integration ( see for many examples ) , we believe that this type of principled preprocessing , with further theoretical underpinnings , has the potential to become a core tool for the statistical analysis of massive datasets . we would like to acknowledge support from the arthur p. dempster award and partial financial support from the nsf . we would also like to thank arthur p. dempster and stephen blyth for their generous feedback . this work developed from the inaugural winning submission for said award . we also thank david van dyk , brandon kelly , nathan stein , alex damour , and edo airoldi for valuable discussions and feedback , and steven finch for proofreading . finally , we would like to thank our reviewers for their thorough and thoughtful comments , which have significantly enhanced this paper .
preprocessing forms an oft - neglected foundation for a wide range of statistical and scientific analyses . however , it is rife with subtleties and pitfalls . decisions made in preprocessing constrain all later analyses and are typically irreversible . hence , data analysis becomes a collaborative endeavor by all parties involved in data collection , preprocessing and curation , and downstream inference . even if each party has done its best given the information and resources available to them , the final result may still fall short of the best possible in the traditional single - phase inference framework . this is particularly relevant as we enter the era of `` big data '' . the technologies driving this data explosion are subject to complex new forms of measurement error . simultaneously , we are accumulating increasingly massive databases of scientific analyses . as a result , preprocessing has become more vital ( and potentially more dangerous ) than ever before . we propose a theoretical framework for the analysis of preprocessing under the banner of multiphase inference . we provide some initial theoretical foundations for this area , including distributed preprocessing , building upon previous work in multiple imputation . we motivate this foundation with two problems from biology and astrophysics , illustrating multiphase pitfalls and potential solutions . these examples also emphasize the motivations behind multiphase analyses both practical and theoretical . we demonstrate that multiphase inferences can , in some cases , even surpass standard single - phase estimators in efficiency and robustness . our work suggests several rich paths for further research into the statistical principles underlying preprocessing . to tackle our increasingly complex and massive data , we must ensure that our inferences are built upon solid inputs and sound principles . principled investigation of preprocessing is thus a vital direction for statistical research .
due to the high frequency of the fastest internal motions in molecular systems , the discrete time step for molecular dynamics simulations must be very small ( of the order of femtoseconds ) , while the actual span of biochemical processes typically requires the choice of relatively long total simulation times ( e.g. , from microseconds to milliseconds for protein folding processes ) . in addition to this , since biologically interesting molecules ( like proteins and dna ) consist of thousands of atoms , their trajectories in configuration space are essentially chaotic , and therefore reliable quantities can be obtained from the simulation only after statistical analysis . in order to cope with these two requirements , which force the computation of a large number of dynamical steps if predictions are to be made , great efforts are being made in both hardware and software solutions . in fact , only in very recent times are simulations of interesting systems of hundreds of thousands of atoms on the millisecond scale starting to become affordable , and the main limitation of these computational techniques remains , as we mentioned , the large difference between the elemental time step used to integrate the equations of motion and the total time span needed to obtain useful information . in this context , strategies to increase the time step are very valuable . a widely used method to this end is to constrain some of the internal degrees of freedom of a molecule ( typically bond lengths , sometimes bond angles and rarely dihedral angles ) . for a verlet - like integrator , stability requires the time step to be at least about five times smaller than the period of the fastest vibration in the studied system . here is where constraints come into play . by constraining the hardest degrees of freedom , the fastest vibrational motions are frozen , and thus larger time steps still produce stable simulations . if constraints are imposed on bond lengths involving hydrogens , the time step can typically be increased by a factor of 2 to 3 ( from 1 fs to 2 or 3 fs ) . constraining additional internal degrees of freedom , such as heavy - atom bond lengths and bond angles , allows even larger time steps , but one has to be careful , since , as more and softer degrees of freedom are constrained , it becomes more likely that the physical properties of the simulated system are severely distorted . the essential ingredient in the calculation of the forces produced by the imposition of constraints is the set of so - called lagrange multipliers , and their efficient numerical evaluation is therefore of the utmost importance .
in this work , we show that the fact that many interesting biological molecules are essentially linear polymers makes it possible to calculate the lagrange multipliers in order operations ( for a molecule where constraints are imposed ) in an exact ( up to machine precision ) , non - iterative way . moreover , we provide a method to do so which is based on a clever ordering of the constraint indices , and on a recently introduced algorithm for solving linear banded systems . it is worth mentioning that , in the specialized literature , this possibility has not been considered as far as we are aware ; some works comment that solving this kind of linear problem ( or related ones ) is costly ( without giving further details ) , and some other works explicitly state that such a computation must take or operations . also , in the field of robot kinematics , many algorithms have been devised to deal with different aspects of constrained physical systems ( robots in this case ) , but none of them tackles the calculation of the lagrange multipliers themselves . this work is structured as follows . in sec . [ sec_aclm ] , we introduce the basic formalism for the calculation of constraint forces and lagrange multipliers . in sec . [ soc ] , we explain how to index the constraints in order for the resulting linear system of equations to be banded with the minimal bandwidth ( which is essential to solve it efficiently ) . we do this starting with very simple toy systems and building up complexity as we move forward towards the final discussion about dna and proteins ; this way of proceeding is intended to help the reader build the corresponding indexing for molecules not covered in this work . in sec . [ sec : numerical ] , we apply the introduced technique to a polyalanine peptide using the amber molecular dynamics package and we compare the relative efficiency of the calculation of the lagrange multipliers in the traditional way ( ) and in the new way presented here ( ) . finally , in sec . [ sec : conclusions ] , we summarize the main conclusions of this work and outline some possible future applications . if holonomic , rheonomous constraints are imposed on a classical system of atoms , and d'alembert's principle is assumed to hold , its motion is the solution of the following system of differential equations : [ sistembasico ] where ( [ newton ] ) is the modified newton's second law and ( [ constr ] ) are the equations of the constraints themselves ; are the lagrange multipliers associated with the constraints ; represents the external force acting on atom , is its euclidean position , and collectively denote the set of all such coordinates . we assume to be conservative , i.e. , to come from the gradient of a scalar potential function ; and should be regarded as the _ force of constraint _ acting on atom . also , in the above expression and in this whole document we will use the following notation for the different indices : * ( except if otherwise stated ) for atoms . * ( except if otherwise stated ) for the atoms' coordinates when no explicit reference to the atom index needs to be made . * for constraints and the rows and columns of the associated matrices .
* as generic indices for products and sums .the existence of constraints turns a system of differential equations with unknowns into a system of algebraic - differential equations with unknowns .the constraints equations in ( [ constr ] ) are the new equations , and the lagrange multipliers are the new unknowns whose value must be found in order to solve the system . if the functions are analytical , the system of equations in ( [ sistembasico ] ) is equivalent to the following one : in this new form , it exists a more direct path to solve for the lagrange multipliers : if we explicitly calculate the second derivative in eq .( [ constr2 ] ) and then substitute eq .( [ newton2 ] ) where the accelerations appear , we arrive to where we have implicitly defined [ pq ] and it becomes clear that , at each , the lagrange multipliers are actually a _ known _ function of the positions and the velocities .we shall use the shorthand and , , , and to denote the whole -tuples , as usual .now , in order to obtain the lagrange multipliers , we just need to solve this is a linear system of equations and unknowns . in the following, we will prove that the solution to it , when constraints are imposed on typical biological polymers , can be found in operations without the use of any iterative or truncation procedure , i.e. , in an exact way up to machine precision . to show this , first, we will prove that the value of the vectors and can be obtained in operations .then , we will show that the same is true for all the non - zero entries of matrix , and finally we will briefly discuss the results in , where we introduced an algorithm to solve the system in ( [ lm ] ) also in operations .it is worth remarking at this point that , in this work , we will only consider constraints that hold the distance between pairs of atoms constant , i.e. , where is a constant number , and the fact that we can establish a correspondence between constrained pairs ( ) and the constraints indices has been explicitly indicated by the notation .this can represent a constraint on : * a bond length between atoms and , * a bond angle between atoms , and , if both and are connected to through constrained bond lengths , * a principal dihedral angle involving , , and ( see for a rigorous definition of the different types of internal coordinates ) , if the bond lengths ( ) , ( ) and ( ) are constrained , as well as the bond angles ( ) and ( ) , * or a phase dihedral angle involving , , and if the bond lengths ( ) , ( ) and ( ) are constrained , as well as the bond angles ( ) and ( ) . this way to constrain degrees of freedomis called _triangularization_. if no triangularization is desired ( as , for example , if we want to constrain dihedral angles but not bond angles ) , different explicit expressions than those in the following paragraphs must be written down , but the basic concepts introduced here are equally valid and the main conclusions still hold . now , from eq .( [ sigma_generica ] ) , we obtain inserting this into ( [ pq1 ] ) , we get a simple expression for the calculation of is more involved , but it also results into a simple expression : first , we remember that the indices run as , and , and we produce the following trivial relationship : where , and are the unitary vectors along the , and axes , respectively . 
therefore , much related to eq . ( [ grad_sigma ] ) , we can compute the first derivative of the difference between the two atomic positions involved in the constraint with respect to a generic coordinate ,

\[
\frac{\partial (\vec{r}_\alpha - \vec{r}_\beta)}{\partial x_\mu} =
\left( \delta_{3\alpha-2,\mu}\,\hat{\imath} + \delta_{3\alpha-1,\mu}\,\hat{\jmath} + \delta_{3\alpha,\mu}\,\hat{k} \right)
- \left( \delta_{3\beta-2,\mu}\,\hat{\imath} + \delta_{3\beta-1,\mu}\,\hat{\jmath} + \delta_{3\beta,\mu}\,\hat{k} \right) \ ,
\]

and also the second derivative of the constraint function :

\[
\begin{aligned}
\frac{\partial^2 \sigma_{\alpha\beta}}{\partial x_\mu\,\partial x_\nu}
&= 2\left[ \left( \delta_{3\alpha-2,\mu}\,\hat{\imath} + \delta_{3\alpha-1,\mu}\,\hat{\jmath} + \delta_{3\alpha,\mu}\,\hat{k} \right)
- \left( \delta_{3\beta-2,\mu}\,\hat{\imath} + \delta_{3\beta-1,\mu}\,\hat{\jmath} + \delta_{3\beta,\mu}\,\hat{k} \right) \right] \\
&\quad \cdot \left[ \left( \delta_{3\alpha-2,\nu}\,\hat{\imath} + \delta_{3\alpha-1,\nu}\,\hat{\jmath} + \delta_{3\alpha,\nu}\,\hat{k} \right)
- \left( \delta_{3\beta-2,\nu}\,\hat{\imath} + \delta_{3\beta-1,\nu}\,\hat{\jmath} + \delta_{3\beta,\nu}\,\hat{k} \right) \right] \\
&= 2\left( \delta_{3\alpha-2,\mu}\delta_{3\alpha-2,\nu} + \delta_{3\beta-2,\mu}\delta_{3\beta-2,\nu}
- \delta_{3\alpha-2,\mu}\delta_{3\beta-2,\nu} - \delta_{3\beta-2,\mu}\delta_{3\alpha-2,\nu} \right. \\
&\quad + \delta_{3\alpha-1,\mu}\delta_{3\alpha-1,\nu} + \delta_{3\beta-1,\mu}\delta_{3\beta-1,\nu}
- \delta_{3\alpha-1,\mu}\delta_{3\beta-1,\nu} - \delta_{3\beta-1,\mu}\delta_{3\alpha-1,\nu} \\
&\quad \left. + \delta_{3\alpha,\mu}\delta_{3\alpha,\nu} + \delta_{3\beta,\mu}\delta_{3\beta,\nu}
- \delta_{3\alpha,\mu}\delta_{3\beta,\nu} - \delta_{3\beta,\mu}\delta_{3\alpha,\nu} \right) \ .
\end{aligned}
\]

taking this into the original expression for in eq . ( [ pq2 ] ) and playing with the sums and the deltas , we arrive at . now , eqs . ( [ defo ] ) , ( [ neop ] ) and ( [ neoq ] ) can be gathered together to become , where we can see that the calculation of always takes the same number of operations , independently of the number of atoms in our system , , and the number of constraints imposed on it , . therefore , calculating the whole vector in eq . ( [ lm ] ) scales like . in order to obtain an explicit expression for the entries of the matrix , we now introduce eq . ( [ grad_sigma ] ) into its definition in eq . ( [ defr ] ) : where we have used that . looking at this expression , we can see that a constant number of operations ( independent of and ) is required to obtain the value of every entry in . the terms proportional to the kronecker deltas imply that , as we will see later , in a typical biological polymer the matrix will be sparse ( actually banded if the constraints are appropriately ordered , as we describe in the following sections ) , the number of non - zero entries being actually proportional to . more precisely , the entry will only be non - zero if the constraints and share an atom . now , since both the vector and the matrix in eq . ( [ lm ] ) can be computed in operations , it only remains to be proved that the solution of the linear system of equations is also an process , but this is a well - known fact when the matrix defining the system is banded . in , we introduced a new algorithm to solve this kind of banded system which is faster and more accurate than existing alternatives . essentially , we showed that the linear system of equations , where is a matrix , is the vector of unknowns , is a given vector and is _ banded _ , i.e. , it satisfies that for known , can be directly solved up to machine precision in operations . this can be done using the following set of recursive equations for the auxiliary quantities ( see for details ) : [ coefsband ] if the matrix is symmetric ( ) , as is the case with [ see ( [ defr ] ) ] , we can additionally save about one half of the computation time just by using instead of ( [ xifinal_c ] ) . ( [ xifinal_csim ] ) can be obtained from ( [ coefsband ] ) by induction , and we recommend these expressions for the coefficients because other valid ones ( like considering , , which involve square roots ) are computationally more expensive .
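to make the banded solution step concrete , the following is a minimal python sketch ( not the implementation referenced above , and written for clarity rather than speed ) of gaussian elimination restricted to the band of a symmetric , positive - definite matrix , checked against a dense solver . the matrix values are synthetic , and the full array is stored only to keep the example short ; a production version would store the band alone , as in the recursions of ( [ coefsband ] ) .

```python
import numpy as np

def solve_banded_spd(R, rhs, m):
    """Gaussian elimination without pivoting, restricted to the band.
    R: dense (n, n) array, symmetric positive definite, semi-band width m.
    rhs: right-hand side vector.  Cost is O(m^2 n) instead of O(n^3)."""
    A, b = R.astype(float), rhs.astype(float)
    n = len(b)
    for k in range(n - 1):                       # forward elimination
        hi = min(k + m + 1, n)                   # only m rows below the pivot are non-zero
        for i in range(k + 1, hi):
            f = A[i, k] / A[k, k]
            A[i, k:hi] -= f * A[k, k:hi]         # fill-in stays inside the band
            b[i] -= f * b[k]
    x = np.zeros(n)                              # back substitution
    for i in range(n - 1, -1, -1):
        hi = min(i + m + 1, n)
        x[i] = (b[i] - A[i, i + 1:hi] @ x[i + 1:hi]) / A[i, i]
    return x

# Synthetic banded SPD system standing in for R * lambda = rhs of eq. ([lm]).
rng = np.random.default_rng(0)
n, m = 12, 2
R = np.zeros((n, n))
for i in range(n):
    R[i, i] = 2.0 * (m + 1)                      # diagonally dominant -> positive definite
    for j in range(i + 1, min(i + m + 1, n)):
        R[i, j] = R[j, i] = rng.random()
rhs = rng.random(n)

lam = solve_banded_spd(R, rhs, m)
# difference from the dense O(n^3) reference should be at machine-precision level
print(np.max(np.abs(lam - np.linalg.solve(R, rhs))))
```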
in the next sections ,we show how to index the constraints in such a way that nearby indices correspond to constraints where involved atoms are close to each other and likely participate of the same constraints .in such a case , not only will the matrix in eq .( [ lm ] ) be banded , allowing to use the method described above , but it will also have a minimal bandwidth , which is also an important point , since the computational cost for solving the linear system scales as ( when the bandwidth is constant ) .in this section we describe how to index the constraints applied to the bond lengths and bond angles of a series of model systems and biological molecules with the already mentioned aim of minimizing the computational cost associated to the obtention of the lagrange multipliers .the presentation begins by deliberately simple systems and proceeds to increasingly more complicated molecules with the intention that the reader is not only able to use the final results presented here , but also to devise appropriate indexings for different molecules not covered in this work .the main idea we have to take into account , as expressed in section [ sec_aclm ] , is to use nearby numbers to index constraints containing the same atoms .if we do so , we will obtain _ banded _ matrices .further computational savings can be obtained if we are able to reduce the number of coefficients in eqs .( [ coefsband ] ) to be calculated . in more detail , solving a linear system like ( [ lm ] ) where the is and banded with semi - band width ( i.e. , the number of non - zero entries neighbouring the diagonal in one row or column ) requires operations if is a constant .therefore , the lower the value of , the smaller the number of required numerical effort .when the semi - band width is not constant along the whole matrix , things are more complicated and the cost is always between and , depending on how the different rows are arranged . in general , we want to minimize the number of zero fillings in the process of gaussian elimination ( see for further details ) , which is achieved by not having zeros below non - zero entries .this is easier to understand with an example : consider the following matrices , where and represent different non - zero values for every entry ( i.e. , not all , nor all must take the same value , and different symbols have been chosen only to highlight the main diagonal ) : during the gaussian elimination process that is behind ( [ coefsband ] ) , in , five coefficients above the diagonal are to be calculated , three in the first row and two in the second one , because the entries below non - zero entries become non - zero too as the elimination process advances ( this is what we have called ` zero filling ' ) .on the other hand , in , which contains the same number of non - zero entries as , only three coefficients have to be calculated : two in the first row and one in the second row .whether looks like or like depends on our choice of the constraints ordering .one has also to take into account that no increase in the computational cost occurs if a series of non - zero columns is separated from the diagonal by columns containing all zeros .i.e. , the linear systems associated to the following two matrices require the same numerical effort to be solved : as promised , we start by a simple model of a biomolecule : an open linear chain without any branch . 
in this case, the atoms should be trivially numbered as in fig .[ fig : lc ] ( any other arrangement would have to be justified indeed ! ) . if we only constrain bond lengths , the fact that only consecutive atoms participate of the same constraints allows us to simplify the notation with respect to eq .( [ sigma_generica ] ) and establish the following ordering for the constraints indices : with this choice results in a tridiagonal matrix , whose only non - zero entries are those lying in the diagonal and its first neighbours .this is the only case for which an exact calculation of the lagrange multipliers exists in the literature as far as we are aware .the next step in complexity is to constrain the bond angles of the same linear chain that we discussed above .the atoms are ordered in the same way , as in fig .[ fig : lc ] , and the trick to generate a banded matrix with minimal bandwidth is to alternatively index bond length constraints with odd numbers , and bond angle constraints with even ones , where the regular pattern involving the atom indicies that participate of the same constraints has allowed again to use a lighter notation . the constraints equations in this case are respectively , and , if this indexing is used , is a banded matrix where is 3 and 4 in consecutive rows and columns .therefore , the mean is 3.5 , and the number of coefficients that have to be computed per row in the gaussian elimination process is the same because the matrix contains no zeros that are filled .a further feature of this system ( and other systems where both bond lengths and bond angles are constrained ) can be taken into account in order to reduce the computational cost of calculating lagrange multipliers in a molecular dynamics simulation : a segment of the linear chain with constrained bond lengths and bond angles is represented in fig .[ fig : angs ] , where the dashed lines correspond to the virtual bonds between atoms that , when kept constant , implement the constraints on bond angles ( assuming that the bond lengths , depicted as solid lines , are also constrained ) . due to the fact that all these distances are constant , many of the entries of will remain unchanged during the molecular dynamics simulation . as an example, we can calculate where we have used the law of cosines .the right - hand side does not depend on any time - varying objects ( such as ) , being made of only constant quantities .therefore , the value of ( and many other entries ) needs not to be recalculated in every time step , which allows to save computation time in a molecular dynamics simulation . in order to incrementally complicate the calculations, we now turn to a linear molecule with only one atom connected to the backbone , such the one displayed in figure [ fig : branched1 ] .the corresponding equations of constraint and the ordering in the indices that minimizes the bandwidth of the linear system are [ singlybranched ] where the trick this time has been to alternatively consider atoms in the backbone and atoms in the branches as we proceed along the chain .the matrix of this molecule presents a semi - band width which is alternatively 2 and 1 in consecutive rows / columns , with average and the same number of superdiagonal coefficients to be computed per row . the next molecular topology we will consider is that of an alkane ( a familiy of molecules with a long tradition in the field of constraints ) , i.e. , a linear backbone with two 1-atom branches attached to each site ( see fig . 
[fig : nalkane ] ) .the ordering of the constraints that minimizes the bandwidth of the linear system for this case is where the trick has been in this case to alternatively constrain the bond lengths in the backbone and those connecting the branching atoms to one side or the other .the resulting matrix require the calculation of 2 coefficients per row when solving the linear system . if we want to additionally constrain bond angles in a molecule with the topology in fig .[ fig : branched1 ] , the following ordering is convenient : this ordering produces 16 non - zero entries above the diagonal per each group of 4 rows in the matrix when making the calculations to solve the associated linear system .this is , we will have to calculate a mean of super - diagonal coefficients per row . when we studied the linear molecule with constrained bond lengths and bond angles , this mean was equal to , so including minimal branches in the linear chain makes the calculations just slightly longer . if we now want to add bond angle constraints to the bond length ones described in sec .[ sec : doubly_branched ] for alkanes , the following ordering produces a matrix with a low half - band width : in this case , the average number of coefficients to be calculated per row is approximately 5.7 . if we have cycles in our molecules , the indexing of the constraints is only slightly modified with respect to the open cases in the previous sections .for example , if we have a single - branch cyclic topology , such as the one displayed in fig .[ fig : rings]a , the ordering of the constraints is the following : these equations are the same as those in [ secbl ] , plus a final constraint corresponding to the bond which closes the ring .these constraints produce a matrix where only the diagonal entries , its first neighbours , and the entries in the corners ( and ) are non - zero .in this case , the associated linear system in eq .( [ lm ] ) can also be solved in operations , as we discuss in . in general , this is also valid whenever is a sparse matrix with only a few non - zero entries outside of its band , and therefore we can apply the technique introduced in this work to molecular topologies containing more than one cycle . the ordering of the constraints and the resulting linear systems for different cyclic species , such as the one depicted in fig .[ fig : rings]b , can be easily constructed by the reader using the same basic ideas . as we discussed in sec .[ sec : introduction ] , proteins are one of the most important families of molecules from the biological point of view : proteins are the nanomachines that perform most of the complex tasks that need to be done in living organisms , and therefore it is not surprising that they are involved , in one way or another , in most of the diseases that affect animals and human beings . 
given the efficiency and precision with which proteins carry out their missions , they are also being explored from the technological point of view . the applications of proteins even outside the biological realm are many if we could harness their power , and molecular dynamics simulations of great complexity and scale are being done in many laboratories around the world as a tool to understand them . ( caption of fig . [ fig : prot ] : represents the first numbered atom in each residue ( the amino nitrogen ) and is the number of atoms in the side chain . * b ) * indexing of the bond length constraints ; denotes the index of the first constraint imposed on the residue ( the n - h bond length ) and is the variable number of constraints imposed on the side chain . ) proteins present two topological features that simplify the calculation of the lagrange multipliers associated to constraints imposed on their degrees of freedom : * they are linear polymers , consisting of a backbone with short ( 17 atoms at most ) groups attached to it . this produces a banded matrix , thus allowing the solution of the associated linear problem in operations . even in the case that disulfide bridges , or any other covalent linkage that disrupts the linear topology of the molecule , exist , the solution of the problem can still be found efficiently if we recall the ideas discussed in sec . [ ring1 ] . * the monomers that typically make up these biological polymers , i.e. , the residues associated to the proteinogenic amino acids , are only 20 different molecular structures . therefore , it is convenient to write down explicitly one block of the matrix for each known monomer , and to build the matrix of any protein simply by joining together the precalculated blocks associated to the corresponding residues the protein consists of . the structure of a segment of the backbone of a protein chain is depicted in fig . [ fig : prot ] . the green spheres represent the side chains , which are the part of the amino acid residue that can differ from one monomer to the next , and which usually consist of several atoms : from 1 atom in the case of glycine to 17 in arginine or tryptophan . in fig . [ fig : prot]a , we present the numbering of the atoms , which will support the ordering of the constraints , and , in fig . [ fig : prot]b , the indexing of the constraints is presented for the case in which only bond lengths are constrained ( the bond lengths plus bond angles case is left as an exercise for the reader ) . using the same ideas and notation as in the previous sections and denoting by the block of the matrix that corresponds to a given amino acid residue , with , we have that , for the monomer detached from the rest of the chain , where the explicit non - zero entries are related to the constraints imposed on the backbone and denotes a block associated to those imposed on the bonds that belong to the different sidechains . the dimension of this matrix is and the maximum possible semi - band width is 12 for the bulkiest residues . a protein's global matrix has to be built by joining together blocks like the one above , and adding the non - zero elements related to the imposition of constraints on bond lengths that connect one residue with the next . these extra elements are denoted by and a general scheme of the final matrix is shown in fig . [ fig : proteinmatrix ] . ( caption of fig . [ fig : proteinmatrix ] : scheme of the matrix for a protein molecule with residues ; in black , we represent the potentially non - zero entries , and each large block in the diagonal is given by ( [ defr ] ) . )
the white regions in this scheme correspond to zero entries , and we can easily check that the matrix is banded . in fact , if each one of the diagonal blocks is constructed conveniently , they will contain many zeros themselves and the bandwidth can be reduced further . the size of the blocks will usually be much smaller than that of their neighbouring diagonal blocks . for example , in the discussed case in which we constrain all bond lengths , are ( or ) blocks , and the diagonal blocks' size is between ( glycine ) and ( tryptophan ) . nucleic acids are another family of very important biological molecules that can be tackled with the techniques described in this work . dna and rna , the two subfamilies of nucleic acids , consist of linear chains made up of a finite set of monomers ( called ` bases ' ) . this means that they share with proteins the two features mentioned in the previous section , and therefore the lagrange multipliers associated to the imposition of constraints on their degrees of freedom can be efficiently computed using the same ideas . it is worth mentioning that dna typically appears in the form of two complementary chains whose bases form hydrogen bonds ; since these bonds are much weaker than a covalent bond , imposing bond length constraints on them such as the ones in eq . ( [ sigma_generica ] ) would be too unrealistic for many practical purposes . ( caption of fig . [ fig : dna_constrs ] : denotes the index of the first constraint imposed on the nucleotide and is the variable number of constraints imposed on the bonds in the base . ) in fig . [ fig : dna_constrs ] , and following the same ideas as in the previous section , we propose a way to index the bond length constraints of a dna strand which produces a banded matrix of low bandwidth . green spheres represent the ( many - atom ) bases ( a , c , t or g ) , and the general path to be followed for consecutive constraint indices is depicted in the upper left corner : first the sugar ring , then the base and finally the rest of the nucleotide , before proceeding to the next one in the chain . this ordering translates into the following form for the block of corresponding to one single nucleotide detached from the rest of the chain : where is the block associated to the constraints imposed on the bonds that are contained in the base , , , , and are very sparse rectangular blocks with only a few non - zero entries in them , and the form of the diagonal blocks associated to the sugar ring and backbone constraints is the following : [ eq : r11r33 ] analogously to the case of proteins , as many blocks like the one in eq . ( [ rpartdna ] ) as nucleotides contained in a given dna strand have to be joined to produce the global matrix of the whole molecule , together with the blocks associated to the constraints on the bonds that connect the different monomers . in fig . [ fig : matrixadn ] , a scheme of this global matrix is depicted , and we can appreciate that it is indeed banded . the construction of the matrix for an rna molecule should follow the same steps and the result will be very similar . ( caption of fig . [ fig : matrixadn ] : scheme of the global matrix for a dna molecule with nucleotides ; in black , we represent the potentially non - zero entries , and each large block in the diagonal is given by ( [ rpartdna ] ) . )
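as a small illustration of the preceding discussion , the sketch below builds the sparsity pattern of the constraint matrix from the rule stated earlier ( an entry can be non - zero only when the two constraints share an atom ) and compares the semi - band width obtained with a chain - following constraint ordering against a naive one . the topology ( a short backbone with one branch atom per site ) and both orderings are toy examples chosen for brevity , not the exact indexings of the figures above .

```python
import numpy as np

def semi_band_width(constraints):
    """constraints: list of (atom_a, atom_b) pairs in the chosen order.
    R[i, j] can be non-zero only when constraints i and j share an atom,
    so the semi-band width is the largest index distance between two
    constraints that do share an atom."""
    w = 0
    for i, (a1, b1) in enumerate(constraints):
        for j in range(i + 1, len(constraints)):
            if {a1, b1} & set(constraints[j]):
                w = max(w, j - i)
    return w

# Toy molecule: a backbone of n_bb atoms (0 .. n_bb-1), each carrying one
# branch atom (n_bb .. 2*n_bb-1), with all bond lengths constrained.
n_bb = 6
backbone = [(k, k + 1) for k in range(n_bb - 1)]
branches = [(k, n_bb + k) for k in range(n_bb)]

# Chain-following ordering: alternate branch and backbone constraints.
ordered = []
for k in range(n_bb - 1):
    ordered += [branches[k], backbone[k]]
ordered.append(branches[-1])

# Naive ordering: all backbone constraints first, then all branches.
naive = backbone + branches

print("chain-following ordering:", semi_band_width(ordered))   # stays small (2 here)
print("naive ordering:          ", semi_band_width(naive))     # grows with the chain length
```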
in this section , we apply the efficient technique introduced in this work to a series of polyalanine molecules in order to calculate the lagrange multipliers when bond length constraints are imposed . we also compare our method , both in terms of accuracy and numerical efficiency , to the traditional inversion of the matrix without taking into account its banded structure . we used the code avogadro to build polyalanine chains of , 5 , 12 , 20 , 30 , 40 , 50 , 60 , 80 , 90 and 100 residues , and we chose their initial conformation to be approximately an alpha helix , i.e. , with the values of the ramachandran angles in the backbone and . next , for each of these chains , we used the molecular dynamics package amber to produce the atom positions ( ) , velocities ( ) and external forces ( ) needed to calculate the lagrange multipliers ( see sec . [ sec_aclm ] ) after a short equilibration molecular dynamics simulation . we chose to constrain all bond lengths , but our method is equally valid for any other choice , such as the more common one of constraining only the bonds that involve hydrogens . in order to produce reasonable final conformations , we repeated the following process for each of the chains : * solvation with explicit water molecules . * minimization of the solvent positions holding the polypeptide chain fixed ( 3,000 steps ) . * minimization of all atom positions ( 3,000 steps ) . * thermalization : changing the temperature from 0 k to 300 k during 10,000 molecular dynamics steps . * stabilization : 20,000 molecular dynamics steps at a constant temperature of 300 k. * measurement of , and . neutralization is not necessary , because our polyalanine chains are themselves neutral . in all calculations we used the force field described in , chose a cutoff for coulomb interactions of 10 and a time step equal to 0.002 ps , and imposed constraints on all bond lengths as mentioned . in the thermostated steps , we used langevin dynamics with a collision frequency of 1 ps . using the information obtained and the indexing of the constraints described in this work , we constructed the matrix and the vector and proceeded to find the lagrange multipliers using eq . ( [ lm ] ) . since ( [ lm ] ) is a linear problem , one straightforward way to solve it is to use traditional gauss - jordan elimination or lu factorization . but these methods have a drawback : they scale with the cube of the size of the system , i.e. , if we imposed constraints on our system ( and therefore we needed to obtain lagrange multipliers ) , the number of floating point operations that these methods would require is proportional to . however , as we showed in the previous sections , the fact that many biological molecules , and proteins in particular , are essentially linear makes it possible to index the constraints in such a way that the matrix in eq . ( [ lm ] ) is banded , and to use different techniques for solving the problem which require only floating point operations . [ table : pruebas ] comparison of numerical complexity and accuracy between a traditional gauss - jordan solver and the banded algorithm described in this work , for the calculation of the lagrange multipliers on a series of polyalanine chains as a function of their number of residues .
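before looking at the measured results , the following hedged sketch shows the kind of comparison involved ; it uses synthetic banded systems and scipy's banded cholesky solver as a stand - in for the recursive solution of sec . [ sec_aclm ] , so the absolute timings have nothing to do with the polyalanine numbers reported in table [ table : pruebas ] .

```python
import time
import numpy as np
from scipy.linalg import solveh_banded

def synthetic_system(n, m, seed=0):
    """Synthetic SPD banded matrix and right-hand side standing in for eq. ([lm])."""
    rng = np.random.default_rng(seed)
    R = np.zeros((n, n))
    for i in range(n):
        R[i, i] = 2.0 * (m + 1)
        for j in range(i + 1, min(i + m + 1, n)):
            R[i, j] = R[j, i] = rng.random()
    return R, rng.random(n)

m = 4                                             # semi-band width
for n in (250, 500, 1000, 2000):
    R, rhs = synthetic_system(n, m)

    t0 = time.perf_counter()
    lam_dense = np.linalg.solve(R, rhs)           # generic O(n^3) solver
    t_dense = time.perf_counter() - t0

    ab = np.zeros((m + 1, n))                     # upper banded storage: ab[m + i - j, j] = R[i, j]
    for j in range(n):
        for i in range(max(0, j - m), j + 1):
            ab[m + i - j, j] = R[i, j]
    t0 = time.perf_counter()
    lam_band = solveh_banded(ab, rhs)             # banded solver, O(n) work for fixed m
    t_band = time.perf_counter() - t0

    err = np.linalg.norm(lam_band - lam_dense) / np.linalg.norm(lam_dense)
    print(f"n = {n:5d}   dense {t_dense:8.4f} s   banded {t_band:8.5f} s   rel. diff {err:.1e}")
```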
in fig . [ fig : pruebas1 ] and table [ table : pruebas ] , we compare both the accuracy and the execution time of the two different methods : gauss - jordan elimination , and the banded recursive solution advocated here and made possible by the appropriate indexing of the constraints . the calculations have been run on a mac os x laptop with a 2.26 ghz intel core 2 duo processor , and the errors were measured using the normalized deviation of from , i.e. , if we denote by the solution provided by the numerical method , . from the obtained results , we can see that both methods produce an error which is very small ( close to machine precision ) , with the banded algorithm advocated in this work being slightly more accurate . regarding the computational cost , as expected , the gauss - jordan method presents an effort that approximately scales with the cube of the number of constraints ( which is approximately proportional to ) , while the banded technique allowed by the particular structure of the matrix follows a rather accurate linear scaling . when two such different behaviours meet , it is typical that there exists a range of system sizes for which the method with the worse scaling is nevertheless faster , and then , at a given system size , a crossover takes place and the better - scaling method becomes more efficient from there on . in this case , according to the results obtained , the banded technique is less time - consuming for all the explored molecules , so the crossover , if it exists at all , should occur at a very small system size . this is very relevant for any potential use of the methods introduced in this work . we have shown that , if we are dealing with typical biological polymers , whose covalent connectivity is that of essentially linear objects , the lagrange multipliers that need to be computed when constraints are imposed on their internal degrees of freedom ( such as bond lengths , bond angles , etc . ) can be obtained in steps , as long as the constraints are indexed in a convenient way and banded algorithms are used to solve the associated linear system of equations . this path has been traditionally regarded as too costly in the literature , and , therefore , our showing that it can be implemented efficiently could have profound implications for the design of future molecular dynamics algorithms . since the field of the imposition of constraints in molecular dynamics simulations is dominated by methods that cleverly ensure that the system stays exactly on the constrained subspace as the simulation proceeds , by calculating not the exact lagrange multipliers but a modification of them , we are aware that the application of the new techniques introduced here is not a direct one . however , we are confident that the low cost of the new method and its close relationship with the problem of constrained dynamics could prompt many advances , some of which are already being pursued in our group .
among the most promising lines , we can mention a possible improvement of the shake method by using the exact lagrange multipliers as a guess for the iterative procedure that constitutes its most common implementation . also , we are studying the possibility of solving the linear problems that appear either in a different implementation of shake ( mentioned in the original work too ) or in the lincs method , and which are defined by matrices that are different from , but related to , the matrix introduced in this work , and are also banded if an appropriate indexing of the constraints is used . finally , we are exploring an extension of the ideas introduced here to the calculation not only of the lagrange multipliers but also of their time derivatives , to be used in integrators of higher order than verlet . we would like to thank giovanni ciccotti for illuminating discussions and wise advice , and claudio cavasotto and isaías lans for the help with the setting up and use of amber . the numerical calculations have been performed at the bifi supercomputing facilities ; we thank all the staff there for the help and the technical assistance . this work has been supported by the grants fis2009 - 13364-c02 - 01 ( micinn , spain ) , grupo de excelencia `` biocomputación y física de sistemas complejos '' , e24/3 ( aragón region government , spain ) , araid and an ibercaja grant for young researchers ( spain ) . r . is supported by a jae predoc scholarship ( csic , spain ) . , r. ron o. dror , j. salmon , j. grossman , k. mackenzie , j. bank , c. young , b. batson , k. bowers , e. edmond chow , m. eastwood , d. ierardi , j. john l. klepeis , j. jeffrey s. kuskin , r. larson , k. kresten lindorff - larsen , p. maragakis , m. m.a . , s. piana , s. yibing , and b. towles , , in proceedings of the acm / ieee conference on supercomputing ( sc09 ) , acm press , new york , ( 2009 ) . , t. darden , t. e. cheatham , c. simmerling , w. junmei , d. r. e. , r. luo , k. m. merz , m. a. pearlman , m. crowley , r. walker , z. wei , w. bing , s. hayik , a. roitberg , g. seabra , w. kim , f. paesani , w. xiongwu , v. brozell , s. tsui , h. gohlke , y. lijiang , t. chunhu , j. mongan , v. hornak , p. guanglei , c. beroza , d. h. mathews , c. schafmeister , w. s. ross , and p. a. kollman , , university of california : san francisco ( 2006 ) .
in order to accelerate molecular dynamics simulations , it is very common to impose holonomic constraints on their hardest degrees of freedom . in this way , the time step used to integrate the equations of motion can be increased , thus allowing , in principle , longer total simulation times to be reached . the imposition of such constraints results in an additional set of equations ( the equations of constraint ) and unknowns ( their associated lagrange multipliers ) , which must be solved in one way or another at each time step of the dynamics . in this work it is shown that , due to the essentially linear structure of typical biological polymers , such as nucleic acids or proteins , the algebraic equations that need to be solved involve a matrix which is banded if the constraints are indexed in a clever way . this makes it possible to obtain the lagrange multipliers through a non - iterative procedure , which can be considered exact up to machine precision , and which takes operations instead of the usual for generic molecular systems . we develop the formalism , and describe the appropriate indexing for a number of model molecules and also for alkanes , proteins and dna . finally , we provide a numerical example of the technique in a series of polyalanine peptides of different lengths using the amber molecular dynamics package . + * keywords : * constraints , lagrange multipliers , banded systems , molecular dynamics , proteins , dna +
e. rasmusen , _ games and information : an introduction to game theory _( blackwell pub , oxford , 2001 ) , r. b. myerson , _ game theory : analysis of conflict _ ( harvard univ . press , cambridge ma , 1997 ) . in bos ,if the source is ideal , that is , there are multiple ne s with the same payoff .( the strategy set achieving the ne s are shown in table [ tab : prob3 ] . ) however , the strategies with and , where and are in the range $ ] , is a focal equilibrium point .this strategy set enables the players to converge to the ne without a concern for the choice of and .
effects of a corrupt source on the dynamics of simultaneous move strategic games are analyzed both for classical and quantum settings . the corruption rate dependent changes in the payoffs and strategies of the players are observed . it is shown that there is a critical corruption rate at which the players lose their quantum advantage , and that the classical strategies are more robust to the corruption in the source . moreover , it is understood that the information on the corruption rate of the source may help the players choose their optimal strategy for resolving the dilemma and increase their payoffs . the study is carried out in two different corruption scenarios for prisoner s dilemma , samaritan s dilemma , and battle of sexes . * _ a . introduction : _ * classical game theory has a very general scope , encompassing questions and situations that are basic to all of the social sciences . there are three main ingredients of a game which is to be a model for real life situations : the first of these is the rational players ( decision makers ) who share a common knowledge . the second is the strategy set which contains the feasible actions the players can take , and the third one is the payoff which are given to the players as their profit or benefit when they apply a specific action from their strategy set . when rational players interact in a game , they will not play dominated strategies , but will search for an equilibrium . one of the important concepts in game theory is that of nash equilibrium ( ne ) in which each player s choice of action is the best response to the actions taken by the other players . in an ne , no player can increase his payoff by unilaterally changing her action . while the existence of a unique ne makes it easier for the players to choose their action , the existence of multiple ne s avoids the sharp decision making process because the players become indifferent between them . in pure strategies , the type and the number of ne s in a game depend on the game . however , due to von neumann there is at least one ne when the same game is played with mixed strategies . classical game theory has been successfully tested in decision making processes encountered in real - life situations ranging from economics to international relations . by studying and applying the principles of game theory , one can formulate effective strategies , predict the outcome of strategic situations , select or design the best game to be played , and determine competitor behavior , as well as the optimal strategy . in recent years , there have been great efforts to apply the quantum mechanical toolbox in the design and analysis of games . as it was the same in other fields such as communication and computation , quantum mechanics introduced novel effects into game theory , too . it has proved to have the potential to affect our way of thinking when approaching to games and game modelling . using the physical scheme proposed by eisert _ et al . _ ( see fig.[fig : scheme ] ) , it has been shown in several games that the dilemma existing in the original game can be resolved by using the paradigm of quantum mechanics . it has also been shown that when one of the players chooses quantum strategies while the other is restricted to classical ones , the player with quantum strategies can always receive better payoff if they share a maximally entangled state . quantum systems are easily affected by their environment , and physical schemes are usually far from ideal in practical situations . 
therefore , it is important to study whether the advantage of the players arising from the quantum strategies and the shared entanglement survive in the presence of noise or non - ideal components in the physical scheme . in this paper , we consider a corrupt source and analyze its effect on the payoffs and strategies of the players . we search answers for the following two questions : ( i ) is there a critical corruption rate above which the players can not maintain their quantum advantage if they are unaware of the action of the noise on the source , and ( ii ) how can the players adopt their actions if they have information on the corruption rate of the source . * _ b . eisert s scheme : _ * in this physically realizable scheme the quantum version of a two - player - two - strategy classical game can be played as follows : ( a ) a referee prepares a maximally entangled state by applying an entangling operator on a product state where . the output of this entangler , which reads ] , the players can resolve their dilemma and receive the highest possible total payoff . it has also been shown that the dynamics of the games changes when the referee starts with a different initial state . for example , if the referee starts with in sd , four ne s emerge with the same constant payoff making a solution to the dilemma impossible . = 8.7 cm * _ c . corrupt source in quantum games : _ * as we have pointed out above , the initial state from which the referee prepares the entangled state is a crucial parameter in eisert s scheme . therefore , any corruption or deviation from the ideality of the source which prepares this state will change the dynamics and outcomes of the game . consequently , the analysis of situations where the source is corrupt is necessary to shed a light in understanding the game dynamics in the presence of imperfections . we consider the source model shown in fig . [ fig : corrupt ] . this model includes two identical sources constructed to prepare the states s which are the inputs to the entangler at each run of the game . these sources are not ideal and have a _ corruption rate _ , , that is , they prepare the desired state with probability while preparing the unwanted state with probability . the state prepared by these sources thus can be written as . then the combined state generated and sent to the entangler becomes . this results in a mixture of the four possible maximally entangled states , where and . this is the state on which the players will perform their unitary operators . * _ scenario i : _ * in this scenario , the players alice and bob are not aware of the corruption in the source . they assume that the source is ideal and always prepares the initial state , and hence that the output state of the entangler is always . based on this assumption , they apply the operations that is supposed to resolve their dilemma . we have analyzed pd , sd and bos according to this scenario , and compared the payoffs of the players with respect to the corruption rate . the payoff they receive when they stick to their quantum strategies are compared to the payoffs when they play the game classically . we consider the classical counterparts both with and without the presence of noise in the game . that is , the players use the same physical scheme of the quantum version with and without the corrupt source , and apply their actions by choosing their operators from the set . the results of the analysis according to this scenario are depicted in figs . [ fig : pd]-[fig : bos ] . 
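a small numerical sketch of this kind of calculation is given below for prisoner's dilemma . it assumes a standard form of the eisert scheme and a standard payoff assignment , which may differ from the exact parameters used here ; it is meant only to illustrate how the expected payoffs can be evaluated as a function of the corruption rate when both players stick to a fixed move .

```python
import numpy as np

# Toy re-implementation of the Eisert scheme for prisoner's dilemma with the
# two-source corruption model described above (scenario i).  All conventions
# below are assumptions of this sketch, not necessarily those of the original
# figures: J = (I@I + i X@X)/sqrt(2), C = identity, D = U(pi, 0), Q = i*sigma_z,
# and payoffs (3, 0, 5, 1) for alice on outcomes |00>, |01>, |10>, |11>.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
C = np.eye(2, dtype=complex)
D = np.array([[0, 1], [-1, 0]], dtype=complex)
Q = np.array([[1j, 0], [0, -1j]], dtype=complex)

J = (np.kron(I2, I2) + 1j * np.kron(X, X)) / np.sqrt(2)
payoff_alice = np.array([3.0, 0.0, 5.0, 1.0])

def expected_payoff(UA, UB, r):
    """Alice's expected payoff when each single-qubit source emits the wrong
    basis state with probability r, so the two-qubit input is a mixture of
    |00>, |01>, |10>, |11> with weights (1-r)^2, (1-r)r, r(1-r), r^2."""
    weights = [(1 - r) ** 2, (1 - r) * r, r * (1 - r), r ** 2]
    total = 0.0
    for w, ket in zip(weights, np.eye(4, dtype=complex)):
        out = J.conj().T @ np.kron(UA, UB) @ J @ ket
        total += w * payoff_alice @ (np.abs(out) ** 2)
    return total

for r in np.linspace(0.0, 1.0, 11):
    print(f"r = {r:3.1f}   (Q,Q): {expected_payoff(Q, Q, r):4.2f}"
          f"   (D,D): {expected_payoff(D, D, r):4.2f}")
```

with these assumed conventions , the ( q , q ) payoff decreases and the ( d , d ) payoff increases as the corruption rate grows , so a crossover between the quantum and classical choices appears at an intermediate corruption rate , in line with the qualitative behaviour described in the text .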
a remarkable result of this analysis is that with the introduction of the corrupt source , the players quantum advantage is no longer preserved if the corruption rate , , becomes larger than a critical corruption rate . at , the classical and quantum strategies produce equal payoffs . another interesting result is the existence of a strategy , where the payoffs of the players become constant independent of corruption rate . this strategy could be attractive for risk avoiding and/or paranoid players . for pd , which is a symmetric game , the optimal classical strategies deliver the payoffs for the actions . in the quantum version with an uncorrupt source , the players can get the optimal payoffs if they adopt the strategies . hence , the dilemma of the game is resolved and the players receive better payoffs than those obtained with classical strategies . however , as seen in fig . [ fig : pd ] , the payoffs of players with classical and quantum strategies become equal to when . if satisfies , the quantum version of the game always does better than the classical one . otherwise , the classical game is better . when the classical version of pd is played with a corrupt source , we find that with increasing corruption rate , while the payoffs for the quantum strategy decrease , those of the classical one increase . that is , if , then the players would rather apply their classical strategies than the quantum ones . this can be explained as follows : when the players apply classical operations , the game is played as if there is no entanglement in the scheme . that is , players apply their classical operators on the state prepared by the source . if the source is ideal , , they operate on the which results in an output state . referee , upon receiving this output state and making the projective measurement , delivers . on the other hand , when , the state from the source is and the output state after the players actions becomes . with this output state , referee delivers them the payoffs . thus , when the players apply the classical operator , their payoffs continuously increase from one to three with the increasing corruption rate from to . using a classical mixed strategy in the asymmetric game of sd , the players receive at the ne . in this strategy , while alice chooses from her strategies with equal probabilities , bob uses a biased randomization where he applies one of his actions , , with probability . the most desired solution to the dilemma in the game is to obtain an ne with . this is achieved when both players apply to . the dynamics of the payoffs in this game with the corrupt source when the players stick to their operators and its comparison with their classical mixed strategy are depicted in fig . [ fig : sd ] . since this game is an asymmetric one , the payoffs of the players , in general , are not equal . however , with the corrupt source it is found that their payoffs become equal at and at , where the payoffs are and , respectively . the critical corruption rate , , which denotes the transition from the quantum advantage to classical advantage regions , is the same for both players . while for increasing , monotonously decreases from two to zero , reaches its minimum of at , where it starts increasing to the value of zero at . it is worth noting that when the players apply their classical mixed strategies in this physical scheme , is always constant and independent of the corruption rate , whereas increases linearly as for . 
the payoffs of the players are compared in three cases : _ case 1 _ : ( insufficient solution ) , _ case 2 _ : ( weak solution ) , and _ case 3 _ : ( strong solution ) . in the corrupt source scenario in quantum strategies , _ case 1 _ is achieved for ,_case 2 _ is achieved for , and finally _ case 3 _ for . the remarkable result of this analysis is that although the players using quantum strategies have high potential gains , there is a large potential loss if the source is deviated from an ideal one . the classical strategies are more robust to corruption of the source . in bos , which is an asymmetric game , the classical mixed strategies , where alice and bob apply with probabilities and or vice versa , the players receive equal payoffs of . however , the dilemma is not solved due to the existence of two equivalent ne . on the other hand , when the physical scheme with quantum strategies is used the players can reach an ne where their payoffs become and if both players apply to the maximally entangled state prepared with an ideal source . the advantage of this quantum strategy to the classical mixed strategy is that in the former is higher than the latter . in the presence of corruption in the source , payoffs of the players change as shown in fig . [ fig : bos ] . with an ideal source , the payoffs reads , however for increasing corruption rate while decreases from two to one , increases from one to two . with an completely corrupt source , , the payoffs become . the reason for this is the same as explained for pd . when the quantum strategies with and without corrupt source are compared to the classical mixed strategy without noise , it is seen that the former ones always give better payoffs to the players . however , when the source becomes noisy ( corrupt ) , classical strategies become more advantageous to quantum ones with increasing corruption rate . the range of corruption rate where classical strategies are better than the quantum strategy if the players stick to their operations , are and for alice and bob , respectively . when , and these payoffs are equal to the ones received with classical mixed strategies . while independently of for classical mixed strategies , and differ when the players stick to their quantum strategies for . another interesting result for this game is that , contrary to pd and sd , the strategy discussed above always gives a constant payoff , which is better than that of the classical mixed strategy . * _ scenario ii : _ * in this scenario , the referee knows the characteristics of the corruption in the source , and inform the players on the corruption rate . then the question is whether the players can find a unique ne for a known source with corruption rate ; and if they can , does this ne resolve their dilemma in the game or not . when the corruption rate is , the state shared between the players become . then independent of what action they choose , the players receive constant payoffs determined by averaging the payoff entries .strategies which lead to ne s and the corresponding payoffs for the players in prisoner s dilemma ( pd ) if they are provided the information on the corruption rate , , of the source which prepares in the initial product state . [ tab : prob1 ] [ cols="^,^,^,^,^,^,^",options="header " , ] for the sd game . in this game , in contrast to the other two , although for there is a unique ne solving the dilemma , for the players can not find a unique ne . 
there emerges an infinite number of different strategies with equal payoffs . the players are indifferent between these strategies and can not make sharp decisions . therefore , the dilemma of the game survives , although its nature changes . when we look at some intermediate values for the corruption rate , we see that corruption rate affects bos and sd strongly . for example , when in sd , there are infinite number of strategies and ne s which have the same payoffs . these ne s are achieved when the players choose their operators as and . the same is seen in bos for which results in a payoff when the players choose and . a more detailed analysis carried out for pd with increasing in steps of in the range $ ] has revealed that the players can achieve a unique ne where their payoffs and strategies depends on the corruption rate . therefore , information on the source characteristic might help the players to reorganize their strategies . however , whether providing the players with this kind of information in a game is acceptable or not is an open question . * _ d . conclusion : _ * this study shows that the strategies to achieve ne s and the corresponding payoffs are strongly dependent on the corruption of the source . in a game with corrupt source , the quantum advantage no longer survives if the corruption rate is above a critical value . the corruption may not only cause the emergence of multiple ne s but may cause a decrease in the player s payoff , as well , even if there is a single ne . if the players are given the characteristics of the source then they can adapt their strategies ; otherwise they can either continue their best strategy assuming that the source is ideal and take the risk of losing their quantum advantage over the classical or choose a risk - free strategy , which makes their payoff independent of the corruption rate . however , in the case where players know the corruption rate and adjust their strategies , the problem is that for some games there emerge multiple ne s , therefore the dilemmas in those games survive . this study reveals the importance of the source used in a quantum game . the authors thank to dr . j. soderholm for the critical reading of the manuscript . they also acknowledge the insightful discussions with dr . f. morikoshi and dr . t. yamamoto .