additive models provide an important family of models for semiparametric regression or classification . some reasons for the success of additive models are their increased flexibility when compared to linear or generalized linear models and their increased interpretability when compared to fully nonparametric models .it is well - known that good estimators in additive models are in general less prone to the curse of high dimensionality than good estimators in fully nonparametric models .many examples of such estimators belong to the large class of regularized kernel based methods over a reproducing kernel hilbert space , see e.g. . in the last yearsmany interesting results on learning rates of regularized kernel based models for additive models have been published when the focus is on sparsity and when the classical least squares loss function is used , see e.g. , , , , , and the references therein . of course , the least squares loss function is differentiable and has many nice mathematical properties , but it is only locally lipschitz continuous and therefore regularized kernel based methods based on this loss function typically suffer on bad statistical robustness properties , even if the kernel is bounded .this is in sharp contrast to kernel methods based on a lipschitz continuous loss function and on a bounded loss function , where results on upper bounds for the maxbias bias and on a bounded influence function are known , see e.g. for the general case and for additive models .therefore , we will here consider the case of regularized kernel based methods based on a general convex and lipschitz continuous loss function , on a general kernel , and on the classical regularizing term for some which is a smoothness penalty but not a sparsity penalty , see e.g. .such regularized kernel based methods are now often called support vector machines ( svms ) , although the notation was historically used for such methods based on the special hinge loss function and for special kernels only , we refer to . in this paper we address the open question , whether an svm with an additive kernel can provide a substantially better learning rate in high dimensions than an svm with a general kernel , say a classical gaussian rbf kernel , if the assumption of an additive model is satisfied .our leading example covers learning rates for quantile regression based on the lipschitz continuous but non - differentiable pinball loss function , which is also called check function in the literature , see e.g. and for parametric quantile regression and , , and for kernel based quantile regression .we will not address the question how to check whether the assumption of an additive model is satisfied because this would be a topic of a paper of its own .of course , a practical approach might be to fit both models and compare their risks evaluated for test data .for the same reason we will also not cover sparsity .consistency of support vector machines generated by additive kernels for additive models was considered in . 
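To fix ideas about the distinction drawn in this introduction, here is a minimal NumPy sketch contrasting an additive Gaussian kernel (a sum of univariate Gaussian kernels, one per input coordinate) with the usual multivariate Gaussian RBF kernel on the same sample; the bandwidth, sample size and dimension are illustrative choices, not quantities taken from the results below.

import numpy as np

def gaussian_1d_gram(u, v, gamma):
    # univariate Gaussian kernel k_j(u, v) = exp(-gamma * (u - v)^2)
    return np.exp(-gamma * (u[:, None] - v[None, :]) ** 2)

def additive_gaussian_gram(X, Z, gamma):
    # K_add(x, z) = sum_j k_j(x_j, z_j): one univariate Gaussian kernel per coordinate
    return sum(gaussian_1d_gram(X[:, j], Z[:, j], gamma) for j in range(X.shape[1]))

def rbf_gaussian_gram(X, Z, gamma):
    # K(x, z) = exp(-gamma * ||x - z||^2), i.e. the product of the univariate kernels
    sq_dist = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq_dist)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(6, 4))          # 6 points in [-1, 1]^4, illustrative
K_additive = additive_gaussian_gram(X, X, gamma=1.0)
K_rbf = rbf_gaussian_gram(X, X, gamma=1.0)

Both Gram matrices are symmetric positive semi-definite, but the RKHS of the additive kernel consists of sums of functions of one coordinate each, which is exactly the additive hypothesis space studied in the rest of the paper.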
in this paperwe establish learning rates for these algorithms .let us recall the framework with a complete separable metric space as the input space and a closed subset of as the output space .a borel probability measure on is used to model the learning problem and an independent and identically distributed sample is drawn according to for learning .a loss function is used to measure the quality of a prediction function by the local error ._ throughout the paper we assume that is measurable , , convex with respect to the third variable , and uniformly lipschitz continuous satisfying with a finite constant ._ support vector machines ( svms ) considered here are kernel - based regularization schemes in a reproducing kernel hilbert space ( rkhs ) generated by a mercer kernel . with a shifted loss function introduced for dealingeven with heavy - tailed distributions as , they take the form where for a general borel measure on , the function is defined by where is a regularization parameter .the idea to shift a loss function has a long history , see e.g. in the context of m - estimators .it was shown in that is also a minimizer of the following optimization problem involving the original loss function if a minimizer exists : the additive model we consider consists of the _ input space decomposition _ with each a complete separable metric space and a _ hypothesis space _ where is a set of functions each of which is also identified as a map from to .hence the functions from take the additive form .we mention , that there is strictly speaking a notational problem here , because in the previous formula each quantity is an element of the set which is a subset of the full input space , , whereas in the definition of sample each quantity is an element of the full input space , where .because these notations will only be used in different places and because we do not expect any misunderstandings , we think this notation is easier and more intuitive than specifying these quantities with different symbols .the additive kernel is defined in terms of mercer kernels on as it generates an rkhs which can be written in terms of the rkhs generated by on corresponding to the form ( [ additive ] ) as with norm given by the norm of satisfies to illustrate advantages of additive models , we provide two examples of comparing additive with product kernels .the first example deals with gaussian rbf kernels .all proofs will be given in section [ proofsection ] .[ gaussadd ] let , ] let and .\ ] ] the additive kernel is given by furthermore , the product kernel is the standard gaussian kernel given by define a gaussian function on ^ 2 ] and ^s. ] .if we take all the mercer kernels to be , then ] , consisting of all square integrable functions whose partial derivatives are all square integrable , contains discontinuous functions and is not an rkhs .denote the marginal distribution of on as . under the assumption that for each and that is dense in in the -metric , it was proved in that in probability as long as satisfies and .the rest of the paper has the following structure .section [ ratessection ] contains our main results on learning rates for svms based on additive kernels . 
learning rates for quantile regressionare treated as important special cases .section [ comparisonsection ] contains a comparison of our results with other learning rates published recently .section [ proofsection ] contains all the proofs and some results which can be interesting in their own .in this paper we provide some learning rates for the support vector machines generated by additive kernels for additive models which helps improve the quantitative understanding presented in .the rates are about asymptotic behaviors of the excess risk and take the form with .they will be stated under three kinds of conditions involving the hypothesis space , the measure , the loss , and the choice of the regularization parameter .the first condition is about the approximation ability of the hypothesis space .since the output function is from the hypothesis space , the learning rates of the learning algorithm depend on the approximation ability of the hypothesis space with respect to the optimal risk measured by the following approximation error .[ defapprox ] the approximation error of the triple is defined as to estimate the approximation error , we make an assumption about the minimizer of the risk for each , define the integral operator associated with the kernel by we mention that is a compact and positive operator on . hence we can find its normalized eigenpairs such that is an orthonormal basis of and as . fix .then we can define the -th power of by this is a positive and bounded operator and its range is well - defined .the assumption means lies in this range .[ assumption1 ] we assume and where for some and each , is a function of the form with some .the case of assumption [ assumption1 ] means each lies in the rkhs .a standard condition in the literature ( e.g. , ) for achieving decays of the form for the approximation error ( [ approxerrordef ] ) is with some . herethe operator is defined by in general , this can not be written in an additive form .however , the hypothesis space ( [ additive ] ) takes an additive form .so it is natural for us to impose an additive expression for the target function with the component functions satisfying the power condition .the above natural assumption leads to a technical difficulty in estimating the approximation error : the function has no direct connection to the marginal distribution projected onto , hence existing methods in the literature ( e.g. , ) can not be applied directly .note that on the product space , there is no natural probability measure projected from , and the risk on is not defined . our idea to overcome the difficulty is to introduce an intermediate function .it may not minimize a risk ( which is not even defined ) .however , it approximates the component function well .when we add up such functions , we get a good approximation of the target function , and thereby a good estimate of the approximation error .this is the first novelty of the paper .[ approxerrorthm ] under assumption [ assumption1 ] , we have where is the constant given by the second condition for our learning rates is about the capacity of the hypothesis space measured by -empirical covering numbers . 
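For reference, the integral operator associated with a Mercer kernel and its fractional powers, on which Assumption 1 above relies, are commonly written as follows; the notation (the marginal $\rho_X$, the eigenpairs $(\lambda_i,\psi_i)$ and the exponent $r$) is ours and is only meant to match the construction sketched above, not to reproduce the paper's exact display.
\[ (L_K f)(x) = \int_X K(x,u)\, f(u)\, d\rho_X(u), \qquad f \in L^2_{\rho_X}, \]
and, writing the normalized eigenpairs of the compact positive operator $L_K$ as $\{(\lambda_i,\psi_i)\}_{i\ge 1}$,
\[ L_K^{r} f = \sum_{i\ge 1} \lambda_i^{r}\, \langle f, \psi_i\rangle_{L^2_{\rho_X}}\, \psi_i, \qquad 0 < r \le 1 . \]
Conditions of this type, requiring the target (or each additive component) to lie in the range of such a fractional power, are the usual source conditions used in this literature to bound the approximation error.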
let be a set of functions on and for every the * covering number of * with respect to the empirical metric , given by is defined as and the * -empirical covering number * of is defined as [ assumption2 ] we assume and that for some , and every , the -empirical covering number of the unit ball of satisfies the second novelty of this paper is to observe that the additive nature of the hypothesis space yields the following nice bound with a dimension - independent power exponent for the covering numbers of the balls of the hypothesis space , to be proved in section [ samplesection ] .[ capacitythm ] under assumption [ assumption2 ] , for any and , we have the bound for the covering numbers stated in theorem [ capacitythm ] is special : the power is independent of the number of the components in the additive model .it is well - known in the literature of function spaces that the covering numbers of balls of the sobolev space on the cube ^s ] , we have 2 .let ] since we can use \} ] ) .the following theorem , to be proved in section [ proofsection ] , gives a learning rate for the regularization scheme ( [ algor ] ) in the special case of quantile regression .[ quantilethm ] suppose that almost surely for some constant , and that each kernel is with for some .if assumption [ assumption1 ] holds with and has a -quantile of -average type for some ] and a positive constant such that assumption [ assumption3 ] always holds true for . if the triple satisfies some conditions , the exponent can be larger .for example , when is the pinball loss ( [ pinloss ] ) and has a -quantile of -average type for some ] and ^ 2. ] , then by taking , for any and , ( [ quantilerates ] ) holds with confidence at least . it is unknown whether the above learning rate can be derived by existing approaches in the literature ( e.g. ) even after projection .note that the kernel in the above example is independent of the sample size .it would be interesting to see whether there exists some such that the function defined by ( [ gaussfcn ] ) lies in the range of the operator .the existence of such a positive index would lead to the approximation error condition ( [ approxerrorb ] ) , see . let us now add some numerical comparisons on the goodness of our learning rates given by theorem [ mainratesthm ] with those given by .their corollary 4.12 gives ( essentially ) minmax optimal learning rates for ( clipped ) svms in the context of nonparametric quantile regression using one gaussian rbf kernel on the whole input space under appropriate smoothness assumptions of the target function .let us consider the case that the distribution has a -quantile of -average type , where , and assume that both corollary 4.12 in and our theorem [ mainratesthm ] are applicable .i.e. , we assume in particular that is a probability measure on ] ) the additive structure with each as stated in assumption [ assumption1 ] , where and , with minimal risk and additionally fulfills ( to make corollary 4.12 in applicable ) where ] , and is some user - defined positive constant independent of . forreasons of simplicity , let us fix .then ( * ? ? ?4.12 ) gives learning rates for the risk of svms for -quantile regression , if a single gaussian rbf - kernel on is used for -quantile functions of -average type with , which are of order hence the learning rate in theorem [ quantilethm ] is better than the one in ( * ? ? 
?4.12 ) in this situation , if provided the assumption of the additive model is valid .table [ table1 ] lists the values of from ( [ explicitratescz2 ] ) for some finite values of the dimension , where .all of these values of are positive with the exceptions if or .this is in contrast to the corresponding exponent in the learning rate by ( * ? ?* cor . 4.12 ) , because table [ table2 ] and figures [ figure1 ] to [ figure2 ] give additional information on the limit .of course , higher values of the exponent indicates faster rates of convergence .it is obvious , that an svm based on an additive kernel has a significantly faster rate of convergence in higher dimensions compared to svm based on a single gaussian rbf kernel defined on the whole input space , of course under the assumption that the additive model is valid .the figures seem to indicate that our learning rate from theorem [ mainratesthm ] is probably not optimal for small dimensions . however , the main focus of the present paper is on high dimensions ..[table1 ] the table lists the limits of the exponents from ( * ? ? ?* cor . 4.12 ) and from theorem [ mainratesthm ] , respectively , if the regularizing parameter is chosen in an optimal manner for the nonparametric setup , i.e. , with for and .recall that $ ] .[ cols= " > , > , > , > " , ]
Additive models play an important role in semiparametric statistics. This paper gives learning rates for regularized kernel-based methods for additive models. These learning rates compare favourably, in particular in high dimensions, to recent results on optimal learning rates for purely nonparametric regularized kernel-based quantile regression using the Gaussian radial basis function kernel, provided the assumption of an additive model is valid. Additionally, a concrete example is presented to show that a Gaussian function depending only on one variable lies in a reproducing kernel Hilbert space generated by an additive Gaussian kernel, but does not belong to the reproducing kernel Hilbert space generated by the multivariate Gaussian kernel of the same variance. * Key words and phrases. * Additive model, kernel, quantile regression, semiparametric, rate of convergence, support vector machine.
the transport properties of nonlinear non - equilibrium dynamical systems are far from well - understood .consider in particular so - called ratchet systems which are asymmetric periodic potentials where an ensemble of particles experience directed transport .the origins of the interest in this lie in considerations about extracting useful work from unbiased noisy fluctuations as seems to happen in biological systems .recently attention has been focused on the behavior of deterministic chaotic ratchets as well as hamiltonian ratchets .chaotic systems are defined as those which are sensitively dependent on initial conditions . whether chaotic or not , the behavior of nonlinear systems including the transition from regular to chaotic behavior is in general sensitively dependent on the parameters of the system .that is , the phase - space structure is usually relatively complicated , consisting of stability islands embedded in chaotic seas , for examples , or of simultaneously co - existing attractors .this can change significantly as parameters change .for example , stability islands can merge into each other , or break apart , and the chaotic sea itself may get pinched off or otherwise changed , or attractors can change symmetry or bifurcate .this means that the transport properties can change dramatically as well .a few years ago , mateos considered a specific ratchet model with a periodically forced underdamped particle .he looked at an ensemble of particles , specifically the velocity for the particles , averaged over time and the entire ensemble .he showed that this quantity , which is an intuitively reasonable definition of ` the current ' , could be either positive or negative depending on the amplitude of the periodic forcing for the system . at the same time , there exist ranges in where the trajectory of an individual particle displays chaotic dynamics .mateos conjectured a connection between these two phenomena , specifically that the reversal of current direction was correlated with a bifurcation from chaotic to periodic behavior in the trajectory dynamics .even though it is unlikely that such a result would be universally valid across all chaotic deterministic ratchets , it would still be extremely useful to have general heuristic rules such as this .these organizing principles would allow some handle on characterizing the many different kinds of behavior that are possible in such systems .a later investigation of the mateos conjecture by barbi and salerno , however , argued that it was not a valid rule even in the specific system considered by mateos .they presented results showing that it was possible to have current reversals in the absence of bifurcations from periodic to chaotic behavior .they proposed an alternative origin for the current reversal , suggesting it was related to the different stability properties of the rotating periodic orbits of the system .these latter results seem fundamentally sensible . however , this paper based its arguments about currents on the behavior of a _ single _ particle as opposed to an ensemble .this implicitly assumes that the dynamics of the system are ergodic .this is not true in general for chaotic systems of the type being considered . in particular , there can be extreme dependence of the result on the statistics of the ensemble being considered .this has been pointed out in earlier studies which laid out a detailed methodology for understanding transport properties in such a mixed regular and chaotic system . 
depending on specific parameter value , the particular system under consideration has multiple coexisting periodic or chaotic attractors or a mixture of both .it is hence appropriate to understand how a probability ensemble might behave in such a system .the details of the dependence on the ensemble are particularly relevant to the issue of the possible experimental validation of these results , since experiments are always conducted , by virtue of finite - precision , over finite time and finite ensembles .it is therefore interesting to probe the results of barbi and salerno with regard to the details of the ensemble used , and more formally , to see how ergodicity alters our considerations about the current , as we do in this paper .we report here on studies on the properties of the current in a chaotic deterministic ratchet , specifically the same system as considered by mateos and barbi and salerno .we consider the impact of different kinds of ensembles of particles on the current and show that the current depends significantly on the details of the initial ensemble .we also show that it is important to discard transients in quantifying the current .this is one of the central messages of this paper : broad heuristics are rare in chaotic systems , and hence it is critical to understand the ensemble - dependence in any study of the transport properties of chaotic ratchets .having established this , we then proceed to discuss the connection between the bifurcation diagram for individual particles and the behavior of the current .we find that while we disagree with many of the details of barbi and salerno s results , the broader conclusion still holds .that is , it is indeed possible to have current reversals in the absence of bifurcations from chaos to periodic behavior as well as bifurcations without any accompanying current reversals .the result of our investigation is therefore that the transport properties of a chaotic ratchet are not as simple as the initial conjecture .however , we do find evidence for a generalized version of mateos s conjecture .that is , in general , bifurcations for trajectory dynamics as a function of system parameter seem to be associated with abrupt changes in the current .depending on the specific value of the current , these abrupt changes may lead the net current to reverse direction , but not necessarily so .we start below with a preparatory discussion necessary to understand the details of the connection between bifurcations and current reversal , where we discuss the potential and phase - space for single trajectories for this system , where we also define a bifurcation diagram for this system . 
in the next section ,we discuss the subtleties of establishing a connection between the behavior of individual trajectories and of ensembles .after this , we are able to compare details of specific trajectory bifurcation curves with current curves , and thus justify our broader statements above , after which we conclude .the goal of these studies is to understand the behavior of general chaotic ratchets .the approach taken here is that to discover heuristic rules we must consider specific systems in great detail before generalizing .we choose the same -dimensional ratchet considered previously by mateos , as well as barbi and salerno .we consider an ensemble of particles moving in an asymmetric periodic potential , driven by a periodic time - dependent external force , where the force has a zero time - average .there is no noise in the system , so it is completely deterministic , although there is damping .the equations of motion for an individual trajectory for such a system are given in dimensionless variables by where the periodic asymmetric potential can be written in the form + \frac{1}{4 } \sin [ 4\pi ( x -x_0 ) ] \bigg ] .\ ] ] in this equation have been introduced for convenience such that one potential minimum exists at the origin with and the term .( a ) classical phase space for the unperturbed system . for , ,two chaotic attractors emerge with ( b ) ( c ) and a period four attractor consisting of the four centers of the circles with .,title="fig:",width=302 ] the phase - space of the undamped undriven ratchet the system corresponding to the unperturbed potential looks like a series of asymmetric pendula .that is , individual trajectories have one of following possible time - asymptotic behaviors : ( i ) inside the potential wells , trajectories and all their properties oscillate , leading to zero net transport . outside the wells , the trajectories either ( ii ) librate to the right or ( iii ) to the left , with corresponding net transport depending upon initial conditions .there are also ( iv ) trajectories on the separatrices between the oscillating and librating orbits , moving between unstable fixed points in infinite time , as well as the unstable and stable fixed points themselves , all of which constitute a set of negligible measure . when damping is introduced via the -dependent term in eq .[ eq : dyn ] , it makes the stable fixed points the only attractors for the system .when the driving is turned on , the phase - space becomes chaotic with the usual phenomena of intertwining separatrices and resulting homoclinic tangles .the dynamics of individual trajectories in such a system are now very complicated in general and depend sensitively on the choice of parameters and initial conditions .we show snapshots of the development of this kind of chaos in the set of poincar sections fig .( [ figure1]b , c ) together with a period - four orbit represented by the center of the circles . a broad characterization of the dynamics of the problem as a function of a parameter ( or ) emerges in a bifurcation diagram. 
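As a concrete illustration of the single-trajectory dynamics, the following sketch integrates one common dimensionless form of the damped, periodically driven ratchet, x'' + b x' + dV/dx = a cos(omega t), with V(x) proportional to -[sin(2 pi (x - x0)) + (1/4) sin(4 pi (x - x0))]; the damping b, drive amplitude a, frequency omega, offset x0 and overall normalization are illustrative placeholders rather than the exact values used in the figures.

import numpy as np

def dV_dx(x, x0=0.19, c=1.0):
    # derivative of an asymmetric ratchet potential of the assumed form
    # V(x) = -c * [sin(2*pi*(x - x0)) + 0.25 * sin(4*pi*(x - x0))]
    return -c * (2*np.pi*np.cos(2*np.pi*(x - x0)) + np.pi*np.cos(4*np.pi*(x - x0)))

def rhs(t, state, a, b, omega):
    # dimensionless equations of motion: x' = v, v' = -b*v - dV/dx + a*cos(omega*t)
    x, v = state
    return np.array([v, -b*v - dV_dx(x) + a*np.cos(omega*t)])

def integrate(state0, a, b=0.1, omega=0.67, dt=0.01, n_steps=50000):
    # fixed-step fourth-order Runge-Kutta for a single trajectory
    traj = np.empty((n_steps + 1, 2))
    traj[0] = state0
    t = 0.0
    for k in range(n_steps):
        s = traj[k]
        k1 = rhs(t, s, a, b, omega)
        k2 = rhs(t + dt/2, s + dt/2*k1, a, b, omega)
        k3 = rhs(t + dt/2, s + dt/2*k2, a, b, omega)
        k4 = rhs(t + dt, s + dt*k3, a, b, omega)
        traj[k + 1] = s + dt/6*(k1 + 2*k2 + 2*k3 + k4)
        t += dt
    return traj

trajectory = integrate(np.array([0.0, 0.0]), a=0.08)   # one trajectory at an illustrative drive amplitude

Recording the position of such a trajectory stroboscopically, once per drive period, while the amplitude a is swept is what produces the bifurcation diagram.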
this can be constructed in several different and essentially equivalent ways .the relatively standard form that we use proceeds as follows : first choose the bifurcation parameter ( let us say ) and correspondingly choose fixed values of , and start with a given value for .now iterate an initial condition , recording the value of the particle s position at times from its integrated trajectory ( sometimes we record .this is done stroboscopically at discrete times where and is an integer with the maximum number of observations made . of these , discard observations at times less than some cut - off time and plot the remaining points against .it must be noted that discarding transient behavior is critical to get results which are independent of initial condition , and we shall emphasize this further below in the context of the net transport or current .if the system has a fixed - point attractor then all of the data lie at one particular location . a periodic orbit with period ( that is , with period commensurate with the driving ) shows up with points occupying only different locations of for .all other orbits , including periodic orbits of incommensurate period result in a simply - connected or multiply - connected dense set of points . for the next value , the last computed value of at are used as initial conditions , and previously , results are stored after cutoff and so on until .that is , the bifurcation diagram is generated by sweeping the relevant parameter , in this case , from through some maximum value .this procedure is intended to catch all coexisting attractors of the system with the specified parameter range .note that several initial conditions are effectively used troughout the process , and a bifurcation diagram is not the behavior of a single trajectory .we have made several plots , as a test , with different initial conditions and the diagrams obtained are identical .we show several examples of this kind of bifurcation diagram below , where they are being compared with the corresponding behavior of the current .having broadly understood the wide range of behavior for individual trajectories in this system , we now turn in the next section to a discussion of the non - equilibrium properties of a statistical ensemble of these trajectories , specifically the current for an ensemble .the current for an ensemble in the system is defined in an intuitive manner by mateos as the time - average of the average velocity over an ensemble of initial conditions .that is , an average over several initial conditions is performed at a given observation time to yield the average velocity over the particles this average velocity is then further time - averaged ; given the discrete time for observation this leads to a second sum where is the number of time - observations made . for this to be a relevant quantity to compare with bifurcation diagrams , should be independent of the quantities but still strongly dependent on .a further parameter dependence that is being suppressed in the definition above is the shape and location of the ensemble being used .that is , the transport properties of an ensemble in a chaotic system depend in general on the part of the phase - space being sampled .it is therefore important to consider many different initial conditions to generate a current . 
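Assuming the integrator sketched above, the routine below mirrors this procedure in simplified form: it records stroboscopic positions for the bifurcation diagram (for simplicity restarting each parameter value from a fresh initial condition rather than carrying over the final state), and it computes the ensemble current as the time average of the ensemble-averaged velocity with an initial block of transients discarded. The ensemble size, Gaussian widths, number of drive periods and transient cut-off are illustrative placeholders.

def stroboscopic_positions(a, x_init=0.0, v_init=0.0, omega=0.67,
                           n_periods=300, n_discard=100, samples_per_period=200):
    # sample x once per drive period T = 2*pi/omega, discarding early transients
    T = 2*np.pi/omega
    dt = T/samples_per_period
    traj = integrate(np.array([x_init, v_init]), a, omega=omega,
                     dt=dt, n_steps=samples_per_period*n_periods)
    return traj[::samples_per_period, 0][n_discard:]

def ensemble_current(a, n_traj=100, x_width=0.05, v_width=0.05, omega=0.67,
                     n_periods=300, n_discard=100, samples_per_period=200, seed=1):
    # time average of the ensemble-averaged velocity, transients discarded
    rng = np.random.default_rng(seed)
    T = 2*np.pi/omega
    dt = T/samples_per_period
    n_keep = samples_per_period*(n_periods - n_discard)
    v_sum = np.zeros(n_keep)
    for _ in range(n_traj):
        x0 = rng.normal(0.0, x_width)        # Gaussian ensemble about a chosen centroid
        v0 = rng.normal(0.0, v_width)
        traj = integrate(np.array([x0, v0]), a, omega=omega,
                         dt=dt, n_steps=samples_per_period*n_periods)
        v_sum += traj[-n_keep:, 1]
    return (v_sum/n_traj).mean()

Sweeping a over a grid and plotting ensemble_current(a) next to the stroboscopic positions gives the kind of current-versus-bifurcation comparison analyzed in the remainder of the paper.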
the first straightforward result we show in fig .( [ figure2 ] ) is that in the case of chaotic trajectories , a single trajectory easily displays behavior very different from that of many trajectories .however , it turns out that in the regular regime , it is possible to use a single trajectory to get essentially the same result as obtained from many trajectories . further consider the bifurcation diagram in fig .( [ figure3 ] ) where we superimpose the different curves resulting from varying the number of points in the initial ensemble .first , the curve is significantly smoother as a function of for larger . even more relevant is the fact that the single trajectory data ( ) may show current reversals that do not exist in the large data .current versus the number of trajectories for ; dashed lines correspond to a regular motion with while solid lines correspond to a chaotic motion with .note that a single trajectory is sufficient for a regular motion while the convergence in the chaotic case is only obtained if the exceeds a certain threshold , .,title="fig:",width=302 ] current versus for different set of trajectories ; ( circles ) , ( square ) and ( dashed lines ) . note that a single trajectory suffices in the regular regime where all the curves match . in the chaotic regime, as increases , the curves converge towards the dashed one.,title="fig:",width=302 ] also , note that single - trajectory current values are typically significantly greater than ensemble averages .this arises from the fact that an arbitrarily chosen ensemble has particles with idiosyncratic behaviors which often average out . as our result , with these ensembles we see typical for example , while barbi and salerno report currents about times greater .however , it is not true that only a few trajectories dominate the dynamics completely , else there would not be a saturation of the current as a function of .all this is clear in fig .( [ figure3 ] ) .we note that the * net * drift of an ensemble can be a lot closer to than the behavior of an individual trajectory. it should also be clear that there is a dependence of the current on the location of the initial ensemble , this being particularly true for small , of course .the location is defined by its centroid . for , it is trivially true that the initial location matters to the asymptotic value of the time - averaged velocity , given that this is a non - ergodic and chaotic system .further , considering a gaussian ensemble , say , the width of the ensemble also affects the details of the current , and can show , for instance , illusory current reversal , as seen in figs .( [ current - bifur1],[current - bifur2 ] ) for example .notice also that in fig .( [ current - bifur1 ] ) , at and , the deviations between the different ensembles is particularly pronounced .these points are close to bifurcation points where some sort of symmetry breaking is clearly occuring , which underlines our emphasis on the relevance of specifying ensemble characteristics in the neighborhood of unstable behavior .however , why these specific bifurcations should stand out among all the bifurcations in the parameter range shown is not entirely clear . to understand how to incorporate this knowledge into calculations of the current ,therefore , consider the fact that if we look at the classical phase space for the hamiltonian or underdamped motion , we see the typical structure of stable islands embedded in a chaotic sea which have quite complicated behavior . 
in such a situation , the dynamics always depends on the location of the initial conditions .however , we are not in the hamiltonian situation when the damping is turned on in this case , the phase - space consists in general of attractors .that is , if transient behavior is discarded , the current is less likely to depend significantly on the location of the initial conditions or on the spread of the initial conditions . in particular , in the chaotic regime of a non - hamiltonian system , the initial ensemble needs to be chosen larger than a certain threshold to ensure convergence .however , in the regular regime , it is not important to take a large ensemble and a single trajectory can suffice , as long as we take care to discard the transients .that is to say , in the computation of currents , the definition of the current needs to be modified to : where is some empirically obtained cut - off such that we get a converged current ( for instance , in our calculations , we obtained converged results with ) . when this modified form is used , the convergence ( ensemble - independence ) is more rapid as a function of and the width of the intial conditions .armed with this background , we are now finally in a position to compare bifurcation diagrams with the current , as we do in the next section .our results are presented in the set of figures fig .( [ figure5 ] ) fig .( [ rev - nobifur ] ) , in each of which we plot both the ensemble current and the bifurcation diagram as a function of the parameter .the main point of these numerical results can be distilled into a series of heuristic statements which we state below ; these are labelled with roman numerals . for and , we plot current ( upper ) with and bifurcation diagram ( lower ) versus .note that there is a * single * current reversal while there are many bifurcations visible in the same parameter range.,title="fig:",width=302 ] consider fig .( [ figure5 ] ) , which shows the parameter range chosen relatively arbitrarily . in this figure , we see several period - doubling bifurcations leading to order - chaos transitions , such as for example in the approximate ranges . however , there is only one instance of current - reversal , at .note , however , that the current is not without structure it changes fairly dramatically as a function of parameter .this point is made even more clearly in fig .( [ figure6 ] ) where the current remains consistently below , and hence there are in fact , no current reversals at all .note again , however , that the current has considerable structure , even while remaining negative. for and , plotted are current ( upper ) and bifurcation diagram ( lower ) versus with .notice the current stays consistently below .,title="fig:",width=302 ] current and bifurcations versus . in ( a ) and ( b )we show ensemble dependence , specifically in ( a ) the black curve is for an ensemble of trajectories starting centered at the stable fixed point with a root - mean - square gaussian width of , and the brown curve for trajectories starting from the unstable fixed point and of width . 
in ( b ) , all ensembles are centered at the stable fixed point , the black line for an ensemble of width , brown a width of and maroon with width .( c ) is the comparison of the current without transients ( black ) and with transients ( brown ) along with the single - trajectory results in blue ( after barbi and salerno ) .the initial conditions for the ensembles are centered at with a mean root square gaussian of width .( d ) is the corresponding bifurcation diagram.,title="fig:",width=302 ] it is possible to find several examples of this at different parameters , leading to the negative conclusion , therefore , that * ( i ) not all bifurcations lead to current reversal*. however , we are searching for positive correlations , and at this point we have not precluded the more restricted statement that all current reversals are associated with bifurcations , which is in fact mateos conjecture .we therefore now move onto comparing our results against the specific details of barbi and salerno s treatment of this conjecture . in particular , we look at their figs .( 2,3a,3b ) , where they scan the parameter region .the distinction between their results and ours is that we are using _ensembles _ of particles , and are investigating the convergence of these results as a function of number of particles , the width of the ensemble in phase - space , as well as transience parameters . our data with larger yields different results in general , as we show in the recomputed versions of these figures , presented here in figs .( [ current - bifur1],[current - bifur2 ] ) .specifically , ( a ) the single - trajectory results are , not surprisingly , cleaner and can be more easily interpreted as part of transitions in the behavior of the stability properties of the periodic orbits .the ensemble results on the other hand , even when converged , show statistical roughness .( b ) the ensemble results are consistent with barbi and salerno in general , although disagreeing in several details .for instance , ( c ) the bifurcation at has a much gentler impact on the ensemble current , which has been growing for a while , while the single - trajectory result changes abruptly . note , ( d ) the very interesting fact that the single - trajectory current completely misses the bifurcation - associated spike at .further , ( e ) the barbi and salerno discussion of the behavior of the current in the range is seen to be flawed our results are consistent with theirs , however , the current changes are seen to be consistent with bifurcations despite their statements to the contrary . 
on the other hand ( f ) , the ensemble current shows a case [ in fig .( [ current - bifur2 ] ) , at of current reversal that does not seem to be associated with bifurcations .in this spike , the current abruptly drops below and then rises above it again .the single trajectory current completely ignores this particular effect , as can be seen .the bifurcation diagram indicates that in this case the important transitions happen either before or after the spike .all of this adds up to two statements : the first is a reiteration of the fact that there is significant information in the ensemble current that can not be obtained from the single - trajectory current .the second is that the heuristic that arises from this is again a negative conclusion , that * ( ii ) not all current reversals are associated with bifurcations .* where does this leave us in the search for ` positive ' results , that is , useful heuristics ?one possible way of retaining the mateos conjecture is to weaken it , i.e. make it into the statement that * ( iii ) _ most _ current reversals are associated with bifurcations . * same as fig .( [ current - bifur1 ] ) except for the range of considered.,title="fig:",width=302 ] for and , plotted are current ( upper ) and bifurcation diagram ( lower ) versus with .note in particular in this figure that eyeball tests can be misleading .we see reversals without bifurcations in ( a ) whereas the zoomed version ( c ) shows that there are windows of periodic and chaotic regimes .this is further evidence that jumps in the current correspond in general to bifurcation.,title="fig:",width=302 ] for and , current ( upper ) and bifurcation diagram ( lower ) versus .,title="fig:",width=302 ] however , a * different * rule of thumb , previously not proposed , emerges from our studies .this generalizes mateos conjecture to say that * ( iv ) bifurcations correspond to sudden current changes ( spikes or jumps)*. note that this means these changes in current are not necessarily reversals of direction .if this current jump or spike goes through zero , this coincides with a current reversal , making the mateos conjecture a special case .the physical basis of this argument is the fact that ensembles of particles in chaotic systems _ can _ have net directed transport but the details of this behavior depends relatively sensitively on the system parameters .this parameter dependence is greatly exaggerated at the bifurcation point , when the dynamics of the underlying single - particle system undergoes a transition a period - doubling transition , for example , or one from chaos to regular behavior .scanning the relevant figures , we see that this is a very useful rule of thumb . for example, it completely captures the behaviour of fig .( [ figure6 ] ) which can not be understood as either an example of the mateos conjecture , or even a failure thereof . as such, this rule significantly enhances our ability to characterize changes in the behavior of the current as a function of parameter .a further example of where this modified conjecture helps us is in looking at a seeming negation of the mateos conjecture , that is , an example where we seem to see current - reversal without bifurcation , visible in fig .( [ hidden - bifur ] ) .the current - reversals in that scan of parameter space seem to happen inside the chaotic regime and seemingly independent of bifurcation . 
however , this turns out to be a ` hidden ' bifurcation when we zoom in on the chaotic regime , we see hidden periodic windows .this is therefore consistent with our statement that sudden current changes are associated with bifurcations .each of the transitions from periodic behavior to chaos and back provides opportunities for the current to spike .however , in not all such cases can these hidden bifurcations be found .we can see an example of this in fig .( [ rev - nobifur ] ) .the current is seen to move smoothly across with seemingly no corresponding bifurcations , even when we do a careful zoom on the data , as in fig .( [ hidden - bifur ] ) .however , arguably , although subjective , this change is close to the bifurcation point .this result , that there are situations where the heuristics simply do not seem to apply , are part of the open questions associated with this problem , of course .we note , however , that we have seen that these broad arguments hold when we vary other parameters as well ( figures not shown here ) . in conclusion ,in this paper we have taken the approach that it is useful to find general rules of thumb ( even if not universally valid ) to understand the complicated behavior of non - equilibrium nonlinear statistical mechanical systems . in the case of chaotic deterministic ratchets, we have shown that it is important to factor out issues of size , location , spread , and transience in computing the ` current ' due to an ensemble before we search for such rules , and that the dependence on ensemble characteristics is most critical near certain bifurcation points .we have then argued that the following heuristic characteristics hold : bifurcations in single - trajectory behavior often corresponds to sudden spikes or jumps in the current for an ensemble in the same system .current reversals are a special case of this. however , not all spikes or jumps correspond to a bifurcation , nor vice versa .the open question is clearly to figure out if the reason for when these rules are violated or are valid can be made more concrete .a.k . gratefully acknowledges t. barsch and kamal p. singh for stimulating discussions , the reimar lst grant and financial support from the alexander von humboldt foundation in bonn . a.k.p .is grateful to carleton college for the ` sit , wallin , and class of 1949 ' sabbatical fellowships , and to the mpipks for hosting him for a sabbatical visit , which led to this collaboration .useful discussions with j .-m . rost on preliminary results are also acknowledged .p. hnggi and bartussek , in nonlinear physics of complex systems , lecture notes in physics vol .476 , edited by j. parisi , s.c .mueller , and w. zimmermann ( springer verlag , berlin , 1996 ) , pp.294 - 308 ; r.d .asturmian , science * 276 * , 917 ( 1997 ) ; f. jlicher , a. ajdari , and j. prost , rev . mod .phys . * 69 * , 1269 ( 1997 ) ; c. dring , nuovo cimento d*17 * , 685 ( 1995 ) s. flach , o. yevtushenko , and y. zolotaryuk , phys. rev .lett . * 84 * , 2358 ( 2000 ) ; o. yevtushenko , s. flach , y. zolotaryuk , and a. a. ovchinnikov , europhys .lett . * 54 * , 141 ( 2001 ) ; s. denisov et al .e * 66 * , 041104 ( 2002 )
In 84, 258 (2000), Mateos conjectured that current reversal in a classical deterministic ratchet is associated with bifurcations from chaotic to periodic regimes. This is based on the comparison of the current and the bifurcation diagram as a function of a given parameter for a periodic asymmetric potential. Barbi and Salerno, in 62, 1988 (2000), further investigated this claim and argued that, contrary to Mateos' claim, current reversals can also occur in the absence of bifurcations. Barbi and Salerno's studies are based on the dynamics of one particle rather than the statistical mechanics of an ensemble of particles moving in the chaotic system. The behavior of ensembles can be quite different, depending upon their characteristics, which leaves their results open to question. In this paper we present results from studies showing how the current depends on the details of the ensemble used to generate it, as well as conditions for convergent behavior (that is, behavior independent of the details of the ensemble). We are then able to present the converged current as a function of parameters, in the same system as Mateos as well as Barbi and Salerno. We show evidence for current reversal without bifurcation, as well as bifurcation without current reversal. We conjecture that it is appropriate to correlate abrupt changes in the current with bifurcations, rather than with current reversals, and show numerical evidence for our claims.
with significant research efforts being directed to the development of neurocomputers based on the functionalities of the brain , a seismic shift is expected in the domain of computing based on the traditional von - neumann model .the , and the ibm are instances of recent flagship neuromorphic projects that aim to develop brain - inspired computing platforms suitable for recognition ( image , video , speech ) , classification and mining problems . while boolean computation is based on the sequential fetch , decode and execute cycles , such neuromorphic computing architectures are massively parallel and event - driven and are potentially appealing for pattern recognition tasks and cortical brain simulationsto that end , researchers have proposed various nanoelectronic devices where the underlying device physics offer a mapping to the neuronal and synaptic operations performed in the brain . the main motivation behind the usage of such non - von neumann post - cmos technologies as neural and synaptic devices stems from the fact that the significant mismatch between the cmos transistors and the underlying neuroscience mechanisms result in significant area and energy overhead for a corresponding hardware implementation .a very popular instance is the simulation of a cat s brain on ibm s blue gene supercomputer where the power consumption was reported to be of the order of a few .while the power required to simulate the human brain will rise significantly as we proceed along the hierarchy in the animal kingdom , actual power consumption in the mammalian brain is just a few tens of watts . in a neuromorphic computing platform, synapses form the pathways between neurons and their strength modulate the magnitude of the signal transmitted between the neurons .the exact mechanisms that underlie the `` learning '' or `` plasticity '' of such synaptic connections are still under debate .meanwhile , researchers have attempted to mimic several plasticity measurements observed in biological synapses in nanoelectronic devices like phase change memories , memristors and spintronic devices , etc .however , majority of the research have focused on non - volatile plasticity changes of the synapse in response to the spiking patterns of the neurons it connects corresponding to long - term plasticity and the volatility of human memory has been largely ignored . as a matter of fact , neuroscience studies performed in have demonstrated that synapses exhibit an inherent learning ability where they undergo volatile plasticity changes and ultimately undergo long - term plasticity conditionally based on the frequency of the incoming action potentials .such volatile or meta - stable synaptic plasticity mechanisms can lead to neuromorphic architectures where the synaptic memory can adapt itself to a changing environment since sections of the memory that have been not receiving frequent stimulus can be now erased and utilized to memorize more frequent information .hence , it is necessary to include such volatile memory transition functionalities in a neuromorphic chip in order to leverage from the computational power that such meta - stable synaptic plasticity mechanisms has to offer .[ drawing1 ] ( a ) demonstrates the biological process involved in such volatile synaptic plasticity changes . 
during the transmission of each action potential from the pre - neuron to the post - neuron through the synapse , an influx of ionic species like and causes the release of neurotransmitters from the pre- to the post - neuron .this results in temporary strengthening of the synaptic strength .however , in absence of the action potential , the ionic species concentration settles down to its equilibrium value and the synapse strength diminishes . this phenomenon is termed as short - term plasticity ( stp ) . however ,if the action potentials occur frequently , the concentration of the ions do not get enough time to settle down to the equilibrium concentration and this buildup of concentration eventually results in long - term strengthening of the synaptic junction .this phenomenon is termed as long - term potentiation ( ltp ) . while stp is a meta - stable state and lasts for a very small time duration, ltp is a stable synaptic state which can last for hours , days or even years .a similar discussion is valid for the case where there is a long - term reduction in synaptic strength with frequent stimulus and then the phenomenon is referred to as long - term depression ( ltd ) .such stp and ltp mechanisms have been often correlated to the short - term memory ( stm ) and long - term memory ( ltm ) models proposed by atkinson and shiffrin ( fig .[ drawing1](b ) ) .this psychological model partitions the human memory into an stm and an ltm . on the arrival of an input stimulus , information is first stored in the stm . however , upon frequent rehearsal , information gets transferred to the ltm . while the `` forgetting '' phenomena occurs at a fast rate in the stm , information can be stored for a much longer duration in the ltm . in order to mimic such volatile synaptic plasticity mechanisms , a nanoelectronic device is required that is able to undergo meta - stable resistance transitions depending on the frequency of the input and also transition to a long - term stable resistance state on frequent stimulations .hence a competition between synaptic memory reinforcement or strengthening and memory loss is a crucial requirement for such nanoelectronic synapses . 
in the next section, we will describe the mapping of the magnetization dynamics of a nanomagnet to such volatile synaptic plasticity mechanisms observed in the brain .let us first describe the device structure and principle of operation of an mtj as shown in fig .[ drawing2](a ) .the device consists of two ferromagnetic layers separated by a tunneling oxide barrier ( tb ) .the magnetization of one of the layers is magnetically `` pinned '' and hence it is termed as the `` pinned '' layer ( pl ) .the magnetization of the other layer , denoted as the `` free layer '' ( fl ) , can be manipulated by an incoming spin current .the mtj structure exhibits two extreme stable conductive states the low conductive `` anti - parallel '' orientation ( ap ) , where pl and fl magnetizations are oppositely directed and the high conductive `` parallel '' orientation ( p ) , where the magnetization of the two layers are in the same direction .let us consider that the initial state of the mtj synapse is in the low conductive ap state .considering the input stimulus ( current ) to flow from terminal t2 to terminal t1 , electrons will flow from terminal t1 to t2 and get spin - polarized by the pl of the mtj .subsequently , these spin - polarized electrons will try to orient the fl of the mtj `` parallel '' to the pl .it is worth noting here that the spin - polarization of incoming electrons in the mtj is analogous to the release of neurotransmitters in a biological synapse .the stp and ltp mechanisms exhibited in the mtj due to the spin - polarization of the incoming electrons can be explained by the energy profile of the fl of the mtj .let the angle between the fl magnetization , , and the pl magnetization , , be denoted by .the fl energy as a function of has been shown in fig .[ drawing2](a ) where the two energy minima points ( and ) are separated by the energy barrier , . during the transition from the ap state to the p state , the fl has to transition from to . 
upon the receipt of an input stimulus , the fl magnetization proceeds `` uphill '' along the energy profile ( from initial point 1 to point 2 in fig .[ drawing2](a ) ) .however , since point 2 is a meta - stable state , it starts going `` downhill '' to point 1 , once the stimulus is removed .if the input stimulus is not frequent enough , the fl will try to stabilize back to the ap state after each stimulus .however , if the stimulus is frequent , the fl will not get sufficient time to reach point 1 and ultimately will be able to overcome the energy barrier ( point 3 in fig .[ drawing2](a ) ) .it is worth noting here , that on crossing the energy barrier at , it becomes progressively difficult for the mtj to exhibit stp and switch back to the initial ap state .this is in agreement with the psychological model of human memory where it becomes progressively difficult for the memory to `` forget '' information during transition from stm to ltm .hence , once it has crossed the energy barrier , it starts transitioning from the stp to the ltp state ( point 4 in fig .[ drawing2](a ) ) .the stability of the mtj in the ltp state is dictated by the magnitude of the energy barrier .the lifetime of the ltp state is exponentially related to the energy barrier .for instance , for an energy barrier of used in this work , the ltp lifetime is hours while the lifetime can be extended to around years by engineering a barrier height of .the lifetime can be varied by varying the energy barrier , or equivalently , volume of the mtj .the stp - ltp behavior of the mtj can be also explained from the magnetization dynamics of the fl described by landau - lifshitz - gilbert ( llg ) equation with additional term to account for the spin momentum torque according to slonczewski , where , is the unit vector of fl magnetization , is the gyromagnetic ratio for electron , is gilberts damping ratio , is the effective magnetic field including the shape anisotropy field for elliptic disks calculated using , is the number of spins in free layer of volume ( is saturation magnetization and is bohr magneton ) , and is the spin current generated by the input stimulus ( is the spin - polarization efficiency of the pl ) . thermal noise is included by an additional thermal field , , where is a gaussian distribution with zero mean and unit standard deviation , is boltzmann constant , is the temperature and is the simulation time step .equation [ llg ] can be reformulated by simple algebraic manipulations as , hence , in the presence of an input stimulus the magnetization of the fl starts changing due to integration of the input . however , in the absence of the input , it starts leaking back due to the first two terms in the rhs of the above equation .it is worth noting here that , like traditional semiconductor memories , magnitude and duration of the input stimulus will definitely have an impact on the stp - ltp transition of the synapse .however , frequency of the input is a critical factor in this scenario . even though the total flux through the device is same ,the synapse will conditionally change its state if the frequency of the input is high .we verified that this functionality is exhibited in mtjs by performing llg simulations ( including thermal noise ) .the conductance of the mtj as a function of can be described by , where , ( ) is the mtj conductance in the p ( ap ) orientation respectively . 
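A minimal macrospin sketch of these dynamics is given below: it takes explicit Euler steps of the LLG equation with a Slonczewski spin-torque term and a Langevin thermal field, and maps the free-layer orientation to an MTJ conductance through the usual cosine interpolation between the parallel and anti-parallel values. All material and geometry numbers, the form of the effective field and the prefactor conventions are illustrative assumptions and are not the parameters listed in Table I.

import numpy as np

q = 1.602e-19; kB = 1.381e-23; muB = 9.274e-24; mu0 = 4e-7*np.pi
gamma = 1.76e11            # gyromagnetic ratio (rad s^-1 T^-1), illustrative
alpha = 0.01               # Gilbert damping, illustrative
Ms = 8.0e5                 # saturation magnetization (A/m), illustrative
Vol = 40e-9*40e-9*1.5e-9   # free-layer volume (m^3), illustrative
Hk = 3.0e4                 # uniaxial anisotropy field (A/m), illustrative
Temp = 300.0
dt = 1e-12                 # time step (s)
Ns = Ms*Vol/muB            # number of spins in the free layer
mp = np.array([0.0, 0.0, 1.0])   # pinned-layer magnetization direction

def llg_step(m, I, rng):
    # one explicit Euler step; prefactor conventions vary in the literature
    P = 0.6                                   # spin polarization, illustrative
    Is = P*I/q                                # spin current in electrons per second
    h_th = rng.normal(size=3)*np.sqrt(2*alpha*kB*Temp/(gamma*mu0*Ms*Vol*dt))
    Heff = Hk*m[2]*np.array([0.0, 0.0, 1.0]) + h_th   # easy axis along z plus thermal field
    prec = -gamma*mu0*np.cross(m, Heff)                      # precession
    damp = -gamma*mu0*alpha*np.cross(m, np.cross(m, Heff))   # Gilbert damping
    stt = (Is/Ns)*np.cross(m, np.cross(mp, m))               # Slonczewski spin torque
    m_new = m + dt*(prec + damp + stt)/(1.0 + alpha**2)
    return m_new/np.linalg.norm(m_new)

def conductance(m, G_P=1.0/3.0e3, G_AP=1.0/6.0e3):
    # cosine interpolation between the P and AP conductances
    return G_AP + 0.5*(G_P - G_AP)*(1.0 + np.dot(m, mp))

rng = np.random.default_rng(0)
m = np.array([0.05, 0.0, -1.0]); m /= np.linalg.norm(m)    # start near the AP state
for _ in range(int(5e-9/dt)):                              # one 5 ns current pulse
    m = llg_step(m, I=150e-6, rng=rng)
print("conductance after one pulse:", conductance(m))

Repeating such pulses with different idle intervals between them, during which llg_step is called with I = 0, reproduces qualitatively the competition between relaxation back toward the AP minimum (STP) and accumulation across the energy barrier (LTP) discussed above.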
as shown in fig .[ drawing2](b ) , the mtj conductance undergoes meta - stable transitions ( stp ) and is not able to undergo ltp when the time interval of the input pulses is large ( ) .however , on frequent stimulations with time interval as , the device undergoes ltp transition incrementally .[ drawing2](b ) and ( c ) illustrates the competition between memory reinforcement and memory decay in an mtj structure that is crucial to implement stp and ltp in the synapse .we demonstrate simulation results to verify the stp and ltp mechanisms in an mtj synapse depending on the time interval between stimulations .the device simulation parameters were obtained from experimental measurements and have been shown in table i. [ table ] table i. device simulation parameters [ cols="^,^ " , ] + the mtj was subjected to 10 stimulations , each stimulation being a current pulse of magnitude and in duration . as shown in fig .[ drawing3 ] , the probability of ltp transition and average device conductance at the end of each stimulation increases with decrease in the time interval between the stimulations .the dependence on stimulation time interval can be further characterized by measurements corresponding to paired - pulse facilitation ( ppf : synaptic plasticity increase when a second stimulus follows a previous similar stimulus ) and post - tetanic potentiation ( ptp : progressive synaptic plasticity increment when a large number of such stimuli are received successively ) .[ drawing4 ] depicts such ppf ( after 2nd stimulus ) and ptp ( after 10th stimulus ) measurements for the mtj synapse with variation in the stimulation interval .the measurements closely resemble measurements performed in frog neuromuscular junctions where ppf measurements revealed that there was a small synaptic conductivity increase when the stimulation rate was frequent enough while ptp measurements indicated ltp transition on frequent stimulations with a fast decay in synaptic conductivity on decrement in the stimulation rate .hence , stimulation rate indeed plays a critical role in the mtj synapse to determine the probability of ltp transition .the psychological model of stm and ltm utilizing such mtj synapses was further explored in a memory array .the array was stimulated by a binary image of the purdue university logo where a set of 5 pulses ( each of magnitude and in duration ) was applied for each on pixel .the snapshots of the conductance values of the memory array after each stimulus have been shown for two different stimulation intervals of and respectively .while the memory array attempts to remember the displayed image right after stimulation , it fails to transition to ltm for the case and the information is eventually lost after stimulation .however , information gets transferred to ltm progressively for .it is worth noting here , that the same amount of flux is transmitted through the mtj in both cases .the simulation not only provides a visual depiction of the temporal evolution of a large array of mtj conductances as a function of stimulus but also provides inspiration for the realization of adaptive neuromorphic systems exploiting the concepts of stm and ltm .readers interested in the practical implementation of such arrays of spintronic devices are referred to ref .the contributions of this work over state - of - the - art approaches may be summarized as follows .this is the first theoretical demonstration of stp and ltp mechanisms in an mtj synapse .we demonstrated the mapping of neurotransmitter release in a 
biological synapse to the spin polarization of electrons in an mtj and performed extensive simulations to illustrate the impact of stimulus frequency on the ltp probability in such an mtj structure .there have been recent proposals of other emerging devices that can exhibit such stp - ltp mechanisms like synapses and memristors .however , it is worth noting here , that input stimulus magnitudes are usually in the range of volts ( 1.3v in and 80mv in ) and stimulus durations are of the order of a few msecs ( 1ms in and 0.5s in ) .in contrast , similar mechanisms can be exhibited in mtj synapses at much lower energy consumption ( by stimulus magnitudes of a few hundred and duration of a few ) .we believe that this work will stimulate proof - of - concept experiments to realize such mtj synapses that can potentially pave the way for future ultra - low power intelligent neuromorphic systems capable of adaptive learning .the work was supported in part by , center for spintronic materials , interfaces , and novel architectures ( c - spin ) , a marco and darpa sponsored starnet center , by the semiconductor research corporation , the national science foundation , intel corporation and by the national security science and engineering faculty fellowship .j. schemmel , j. fieres , and k. meier , in _ neural networks , 2008 .ijcnn 2008.(ieee world congress on computational intelligence ) .ieee international joint conference on_.1em plus 0.5em minus 0.4emieee , 2008 , pp .431438 .b. l. jackson , b. rajendran , g. s. corrado , m. breitwisch , g. w. burr , r. cheek , k. gopalakrishnan , s. raoux , c. t. rettner , a. padilla _et al . _ , `` nanoscale electronic synapses using phase change devices , '' _ acm journal on emerging technologies in computing systems ( jetc ) _ , vol . 9 , no . 2 , p. 12, 2013 .m. n. baibich , j. m. broto , a. fert , f. n. van dau , f. petroff , p. etienne , g. creuzet , a. friederich , and j. chazelas , `` giant magnetoresistance of ( 001 ) fe/(001 ) cr magnetic superlattices , '' _ physical review letters _ ,61 , no .21 , p. 2472, 1988 .g. binasch , p. grnberg , f. saurenbach , and w. zinn , `` enhanced magnetoresistance in layered magnetic structures with antiferromagnetic interlayer exchange , '' _ physical review b _ , vol .39 , no . 7 , p. 4828, 1989 .w. scholz , t. schrefl , and j. fidler , `` micromagnetic simulation of thermally activated switching in fine particles , '' _ journal of magnetism and magnetic materials _ , vol .233 , no . 3 , pp .296304 , 2001 .pai , l. liu , y. li , h. tseng , d. ralph , and r. buhrman , `` spin transfer torque devices utilizing the giant spin hall effect of tungsten , '' _ applied physics letters _ , vol .101 , no . 12 , p. 122404, 2012 .h. noguchi , k. ikegami , k. kushida , k. abe , s. itai , s. takaya , n. shimomura , j. ito , a. kawasumi , h. hara _et al . _ , in _ solid - state circuits conference-(isscc ) , 2015 ieee international_.1em plus 0.5em minus 0.4emieee , 2015 , pp .t. ohno , t. hasegawa , t. tsuruoka , k. terabe , j. k. gimzewski , and m. aono , `` short - term plasticity and long - term potentiation mimicked in single inorganic synapses , '' _ nature materials _ , vol .10 , no . 8 , pp . 591595 , 2011 .r. yang , k. terabe , y. yao , t. tsuruoka , t. hasegawa , j. k. gimzewski , and m. aono , `` synaptic plasticity and memory functions achieved in a wo3- x - based nanoionics device by using the principle of atomic switch operation , '' _ nanotechnology _ , vol .24 , no .38 , p. 384003
synaptic memory is considered to be the main element responsible for learning and cognition in humans . although traditionally non - volatile long - term plasticity changes have been implemented in nanoelectronic synapses for neuromorphic applications , recent studies in neuroscience have revealed that biological synapses undergo meta - stable volatile strengthening followed by a long - term strengthening provided that the frequency of the input stimulus is sufficiently high . such `` memory strengthening '' and `` memory decay '' functionalities can potentially lead to adaptive neuromorphic architectures . in this paper , we demonstrate the close resemblance of the magnetization dynamics of a magnetic tunnel junction ( mtj ) to short - term plasticity and long - term potentiation observed in biological synapses . we illustrate that , in addition to the magnitude and duration of the input stimulus , frequency of the stimulus plays a critical role in determining long - term potentiation of the mtj . such mtj synaptic memory arrays can be utilized to create compact , ultra - fast and low power intelligent neural systems .
the segmentation process as a whole can be thought of as consisting of two tasks : recognition and delineation .recognition is to determine roughly `` where '' the object is and to distinguish it from other object - like entities .although delineation is the final step for defining the spatial extent of the object region / boundary in the image , an efficient recognition strategy is a key for successful delineation . in this study ,a novel , general method is introduced for object recognition to assist in segmentation ( delineation ) tasks .it exploits the pose relationship that can be encoded , via the concept of ball scale ( b - scale ) , between the binary training objects and their associated images . as an alternative to the manual methods based on initial placement of the models by an expert in the literature ,model based methods can be employed for recognition .for example , in , the position of an organ model ( such as liver ) is estimated by its histogram . in ,generalized hough transform is succesfully extended to incorporate variability of shape for 2d segmentation problem .atlas based methods are also used to define initial position for a shape model . in , affine registration is performed to align the data into an atlas to determine the initial position for a shape model of the knee cartilage .similarly , a popular particle filtering algorithm is used to detect the starting pose of models for both single and multi - object cases . however , due to the large search space and numerous local minimas in most of these studies , conducting a global search on the entire image is not a feasible approach . in this paper, we investigate an approach of automatically recognizing objects in 3d images without performing elaborate searches or optimization .the proposed method consists of the following key ideas and components : * 1 .model building : * after aligning image data from all subjects in the training set into a common coordinate system via 7-parameter affine registration , the live - wire algorithm is used to segment different objects from subjects .segmented objects are used for the automatic extraction of landmarks in a slice - by - slice manner . from the landmark information for all objects ,a model assembly is constructed .b - scale encoding : * the b - scale value at every voxel in an image helps to understand `` objectness '' of a given image without doing explicit segmentation . for each voxel ,the radius of the largest ball of homogeneous intensity is weighted by the intensity value of that particular voxel in order to incorporate appearance ( texture ) information into the object information ( called intensity weighted b - scale : ) so that a model of the correlations between shape and texture can be built . a simple and proper way of thresholding the b - scale image yields a few largest balls remaining in the image .these are used for the construction of the relationship between the segmented training objects and the corresponding images .the resulting images have a strong relationship with the actual delineated objects .relationship between and : * a principal component system is built via pca for the segmented objects in each image , and their mean system , denoted , is found over all training images . 
has an origin and three axes .similarly the mean system , denoted , for intensity weighted b - scale images is found .finally the transformation that maps to is found .given an image to be segmented , the main idea here is to use to facilitate a quick placement of in with a proper pose as indicated in step 4 below . *hierarchical recognition : * for a given image , is obtained and its system , denoted is computed subsequently . assuming the relationship of to to be the same as of to , and assuming that offers the proper pose of in the training images , we use transformation and to determine the pose of in .this level of recognition is called coarse recognition . further refinement of the recognition can be done using the skin boundary object in the image with the requirement that a major portion of should lie inside the body region delimited by the skin boundary . moreover, a little search inside the skin boundary can be done for the fine tuning , however , since offered coarse recognition method gives high recognition rates , there is no need to do any elaborate searches .we will focus on the fine tuning of coarse recognition for future study .the finest level of recognition requires the actual delineation algorithm itself , which is a hybrid method in our case and called gc - asm ( synergistic integration of graph - cut and active shape model ) .this delineation algorithm is presented in a companion paper submitted to this symposium .a convenient way of achieving incorporation of prior information automatically in computing systems is to create and use a flexible _ model _ to encode information such as the expected _ size _ , _ shape _ , _ appearance _ , and _ position _ of objects in an image . among such information ,_ shape _ and _ appearance _ are two complementary but closely related attributes of biological structures in images , and hence they are often used to create statistical models . in particular , shape has been used both in high and low level image analysis tasks extensively , and it has been demonstrated that shape models ( such as active shape models ( asms ) ) can be quite powerful in compensating for misleading information due to noise , poor resolution , clutter , and occlusion in the images .therefore , we use asm to estimate population statistics from a set of examples ( training set ) . 
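a minimal sketch of the coarse ( one - shot ) recognition step described above is given below , assuming each "system" is stored as a homogeneous 4x4 frame built from the pca axes and the centroid of a point cloud ; the learnt pose relationship is taken to be the transform carrying the mean b - scale frame onto the mean model frame , and it is assumed to carry over to the test image . the function names and the synthetic point clouds are illustrative only , and the surrounding pipeline ( affine pre - registration , intensity weighting and thresholding of the b - scale image ) is not reproduced .

```python
import numpy as np

def pca_frame(points):
    """4x4 homogeneous frame of a 3-d point cloud: principal axes plus centroid."""
    origin = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - origin, full_matrices=False)
    frame = np.eye(4)
    frame[:3, :3] = vt.T               # columns are the three principal axes
    frame[:3, 3] = origin
    return frame

def coarse_recognition(model_pts, mean_model_frame, mean_bscale_frame, test_bscale_pts):
    """one-shot pose estimate: carry the mean model into the test image (a sketch)."""
    # learnt relationship on the training set: mean b-scale frame -> mean model frame
    learnt = mean_model_frame @ np.linalg.inv(mean_bscale_frame)
    # frame of the thresholded, intensity-weighted b-scale image of the test image
    test_frame = pca_frame(test_bscale_pts)
    # assume the same b-scale-to-model relationship holds for the test image
    est_model_frame = learnt @ test_frame
    # transformation moving the mean model from its own frame to the estimated one
    move = est_model_frame @ np.linalg.inv(mean_model_frame)
    homog = np.c_[model_pts, np.ones(len(model_pts))]
    return (homog @ move.T)[:, :3]

# toy usage with synthetic point clouds standing in for segmented objects and b-scale balls
rng = np.random.default_rng(0)
model = rng.normal(size=(300, 3)) * [30, 15, 5]
placed = coarse_recognition(model, pca_frame(model),
                            pca_frame(rng.normal(size=(200, 3)) * [40, 20, 8]),
                            rng.normal(size=(200, 3)) * [40, 20, 8] + [0, 0, 50])
print(placed.mean(axis=0))             # centroid of the placed model in the test image
```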
in order to guarantee 3d point correspondences required by asm, we build our statistical shape models combining semi - automatic methods : ( 1 ) manually selected anatomically correspondent slices by an expert , and ( 2 ) semi - automatic way of specifying key points on the shapes starting from the same anatomical locations .once step ( 1 ) is accomplished , the remaining problem turns into a problem of establishing point correspondence in 2d shapes , which is easily solved .it is extremely significant to choose correct correspondences so that a good representation of the modelled object results .although landmark correspondence is usually established manually by experts , it is time - consuming , prone to errors , and restricted to only 2d objects .because of these limitations , a semi - automatic landmark tagging method , _ equal space landmarking _ , is used to establish correspondence between landmarks of each sample shape in our experiments .although this method is proposed for 2d objects , and equally spacing a fixed number of points for 3d objects is much more difficult , we use equal space landmarking technique in pseudo-3d manner where 3d object is annotated slice by slice .let be a single shape and assume that its finite dimensional representation after the landmarking consisting of landmark points with positions , where are cartesian coordinates of the point on the shape .equal space landmark tagging for points for on shape boundaries ( contours ) starts by selecting an initial point on each shape sample in training set and equally space a fixed number of points on each boundary automatically . selecting the starting pointhas been done manually by annotating the same anatomical point for each shape in the training set .figure [ img : landmarking_abd ] shows annotated landmarks for five different objects ( skin , liver , right kidney , left kidney , spleen ) in a ct slice of the abdominal region .note that different number of landmarks are used for different objects considering their size . [ cols="^ " , ]\(1 ) the b - scale image of a given image captures object morphometric information without requiring explicit segmentation .b - scales constitute fundamental units of an image in terms of largest homogeneous balls situated at every voxel in the image .the b - scale concept has been previously used in object delineation , filtering and registration .our results suggest that their ability to capture object geography in conjunction with shape models may be useful in quick and simple yet accurate object recognition strategies .( 2 ) the presented method is general and does not depend on exploiting the peculiar characteristics of the application situation .( 3 ) the specificity of recognition increases dramatically as the number of objects in the model increases .( 4 ) we emphasize that both modeling and testing procedures are carried out on the ct data sets that are part of the clinical pet / ct data routinely acquired in our hospital .the ct data set are thus of relatively poor ( spatial and contrast ) resolution compared to other ct - alone studies with or without contrast .we expect better performance if higher resolution ct data are employed in modeling or testing .this paper is published in spie medical imaging conference - 2010 .falcao , a.x . ,udupa , j.k . ,samarasekera , s. , sharma , s. , hirsch , b.e . , and lotufo , r.a . , 1998user - steered image segmentation paradigms : live wire and live lane . graph .models image process .60 ( 4 ) , pp .233260 .kokkinos , i. 
, maragos , p. , 2009 . synergy between object recognition and image segmentation using the expectation - maximization algorithm . ieee transactions on pattern analysis and machine intelligence , vol . 31 ( 8 ) , pp . 1486 - 1501 . brejl , m. , sonka , m. , 2000 . object localization and border detection criteria design in edge - based image segmentation : automated learning from examples . ieee transactions on medical imaging , vol . 19 ( 10 ) , pp . 973 - 985 . fripp , j. , crozier , s. , warfield , s.k . , ourselin , s. , 2005 . automatic initialisation of 3d deformable models for cartilage segmentation . in proceedings of digital image computing : techniques and applications , pp .
this paper investigates , using prior shape models and the concept of ball scale ( b - scale ) , ways of automatically recognizing objects in 3d images without performing elaborate searches or optimization . that is , the goal is to place the model in a single shot close to the right pose ( position , orientation , and scale ) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image . this is achieved via the following set of key ideas : ( a ) a semi - automatic way of constructing a multi - object shape model assembly . ( b ) a novel strategy of encoding , via b - scale , the pose relationship between objects in the training images and their intensity patterns captured in b - scale images . ( c ) a hierarchical mechanism of positioning the model , in a one - shot way , in a given image from a knowledge of the learnt pose relationship and the b - scale image of the given image to be segmented . the evaluation results on a set of 20 routine clinical abdominal female and male ct data sets indicate the following : ( 1 ) incorporating a large number of objects improves the recognition accuracy dramatically . ( 2 ) the recognition algorithm can be thought of as a hierarchical framework such that quick placement of the model assembly is defined as coarse recognition and delineation itself is known as finest recognition . ( 3 ) b - scale yields useful information about the relationship between the model assembly and any given image such that the recognition results in a placement of the model close to the actual pose without doing any elaborate searches or optimization . ( 4 ) effective object recognition can make delineation most accurate .
biological aggregations such as fish schools , bird flocks , bacterial colonies , and insect swarms have characteristic morphologies governed by the group members interactions with each other and with their environment .the _ endogenous _ interactions , _i.e. _ , those between individuals , often involve organisms reacting to each other in an attractive or repulsive manner when they sense each other either directly by sound , sight , smell or touch , or indirectly via chemicals , vibrations , or other signals .a typical modeling strategy is to treat each individual as a moving particle whose velocity is influenced by social ( interparticle ) attractive and repulsive forces . in contrast , the _ exogenous _ forces describe an individual s reaction to the environment , for instance a response to gravity , wind , a chemical source , a light source , a food source , or a predator .the superposition of endogenous and exogenous forces can lead to characteristic swarm shapes ; these equilibrium solutions are the subject of our present study .more specifically , our motivation is rooted in our previous modeling study of the swarming desert locust _schistocerca gregaria _ . in some parameter regimes of our model (presented momentarily ) , locusts self - organize into swarms with a peculiar morphology , namely a bubble - like shape containing a dense group of locusts on the ground and a flying group of locusts overhead ; see figure [ fig : locust](bc ) .the two are separated by an unoccupied gap . with wind, the swarm migrates with a rolling motion .locusts at the front of the swarm fly downwards and land on the ground .locusts on the ground , when overtaken by the flying swarm , take off and rejoin the flying group ; see figure [ fig : locust](cd ) .the presence of an unoccupied gap and the rolling motion are found in real locust swarms . as we will show throughout this paper , features of swarms such as dense concentrations and disconnected components ( that is , the presence of gaps ) arise as properties of equilibria in a general model of swarming .the model of is [ eq : locusts ] which describes interacting locusts with positions .the direction of locust swarm migration is strongly correlated with the direction of the wind and has little macroscopic motion in the transverse direction , so the model is two - dimensional , _i.e. _ , where the coordinate is aligned with the main current of the wind and is a vertical coordinate . as the velocity of each insect is simply a function of position ,the model neglects inertial forces .this so - called kinematic assumption is common in swarming models , and we discuss it further in section [ sec : discretemodel ] .the first term on the right - hand side of ( [ eq : locusts ] ) describes endogenous forces ; measures the force that locust exerts on locust .the first term of describes attraction , which operates with strength over a length scale and is necessary for aggregation .the second term is repulsive , and operates more strongly and over a shorter length scale in order to prevent collisions .time and space are scaled so that the repulsive strength and length scale are unity .the second term on the right - hand side of ( [ eq : locusts ] ) describes gravity , acting downwards with strength .the last term describes advection of locusts in the direction of the wind with speed .furthermore , the model assumes a flat impenetrable ground . 
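a minimal sketch of the discrete model ( [ eq : locusts ] ) , under assumed parameter values rather than those fitted in the original study , is given below ; the ground is handled by clamping the vertical coordinate at zero and , anticipating the stipulation discussed next , by holding grounded individuals in place when their computed vertical velocity is negative .

```python
import numpy as np

# illustrative parameter values (not the constants used in the original locust study)
N, F_ATTR, L_ATTR, GRAV, WIND = 200, 0.7, 2.0, 0.2, 0.0
DT, STEPS = 0.05, 3000

def endogenous_velocity(x):
    """summed pairwise social velocity: exponential attraction of strength F_ATTR and
    range L_ATTR minus exponential repulsion of unit strength and range."""
    diff = x[:, None, :] - x[None, :, :]                    # displacement x_i - x_j
    dist = np.linalg.norm(diff, axis=-1) + 1e-12
    mag = F_ATTR * np.exp(-dist / L_ATTR) - np.exp(-dist)   # > 0 means net attraction
    np.fill_diagonal(mag, 0.0)
    return -((mag / dist)[:, :, None] * diff).sum(axis=1) / N   # social mass m = M/N with M = 1

rng = np.random.default_rng(3)
x = np.c_[rng.uniform(-2.0, 2.0, N), rng.uniform(0.0, 4.0, N)]  # (horizontal, vertical) positions
for _ in range(STEPS):
    v = endogenous_velocity(x)
    v[:, 0] += WIND                                   # advection with the wind (zero here)
    v[:, 1] -= GRAV                                   # gravity acts downward
    grounded = (x[:, 1] <= 0.0) & (v[:, 1] < 0.0)
    v[grounded] = 0.0                                 # grounded locusts with downward drift stay put
    x += DT * v
    x[:, 1] = np.maximum(x[:, 1], 0.0)                # flat impenetrable ground
print("fraction of locusts on the ground:", np.mean(x[:, 1] < 1e-6))
```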
since locusts rest and feed while grounded , their motion in that state is negligible compared to their motion in the air .thus we add to ( [ eq : locusts ] ) the stipulation that grounded locusts whose vertical velocity is computed to be negative under ( [ eq : locusts ] ) remain stationary .as mentioned above , for some parameters , ( [ eq : locusts ] ) forms a bubble - like shape .this can occur even in the absence of wind , that is , when ; see figure [ fig : locust](b ) .the bubble is crucial , for it allows the swarm to roll in the presence of wind . as discussed in , states which lack a bubble in the absence of wind do not migrate in the presence of wind .conditions for bubble formation , even in the equilibrium state arising in the windless model , have not been determined ; we will investigate this problem .some swarming models adopt a discrete approach as in our locust example above because of the ready connection to biological observations .a further advantage is that simulation of discrete systems is straightforward , requiring only the integration of ordinary differential equations .however , since biological swarms contain many individuals , the resulting high - dimensional systems of differential equations can be difficult or impossible to analyze .furthermore , for especially large systems , computation , though straightforward , may become a bottleneck .continuum models are more amenable to analysis .one well - studied continuum model is that of , a partial integrodifferential equation model for a swarm population density in one spatial dimension : the density obeys a conservation equation , and is the velocity field , which is determined via convolution with the antisymmetric pairwise endogenous force , the one - dimensional analog of a social force like the one in ( [ eq : locusts ] ) .the general model ( [ eq : introeq ] ) displays at least three solution types as identified in .populations may concentrate to a point , reach a finite steady state , or spread . in , we identified conditions on the social interaction force for each behavior to occur .these conditions map out a `` phase diagram '' dividing parameter space into regions associated with each behavior .similar phase diagrams arise in a dynamic particle model and its continuum analog .models that break the antisymmetry of ( creating an asymmetric response of organisms to each other ) display more complicated phenomena , including traveling swarms .many studies have sought conditions under which the population concentrates to a point mass . in a one - dimensional domain ,collapse occurs when the force is finite and attractive at short distances .the analogous condition in higher dimensions also leads to collapse .one may also consider the case when the velocity includes an additional term describing an exogenous force , in this case , equilibrium solutions consisting of sums of point - masses can be linearly and nonlinearly stable , even for social forces that are repulsive at short distances .these results naturally lead to the question of whether a solution can be continued past the time at which a mass concentrates . early work on a particular generalization of ( [ eq : introeq ] )suggests the answer is yes . for ( [ eq : introeq ] ) itself in arbitrary dimension, there is an existence theory beyond the time of concentration .some of the concentration solutions mentioned above are equilibrium solutions. 
however , there may be classical equilibria as well .for most purely attractive , the only classical steady states are constant in space , as shown via a variational formulation of the steady state problem . however , these solutions are non - biological , as they contain infinite mass . theredo exist attractive - repulsive which give rise to compactly - supported classical steady states of finite mass .for instance , in simulations of ( [ eq : introeq ] ) , we found classical steady state solutions consisting of compactly supported swarms with jump discontinuities at the edges of the support . in our current work , we will find equilibria that contain both classical and nonclassical components .many of the results reviewed above were obtained by exploiting the underlying gradient flow structure of ( [ eq : introeq2 ] ) .there exists an energy functional = \frac{1}{2 } \int_\mathbb{r } \int_\mathbb{r } \rho(x ) \rho(y ) q(x - y)\,dx\,dy + \int_\mathbb{r } f(x)\rho(x)\,dx,\ ] ] which is minimized under the dynamics .this energy can be interpreted as the continuum analog of the summed pairwise energy of the corresponding discrete ( particle ) model .we will also exploit this energy to find equilibrium solutions and study their stability . in this paper, we focus on equilibria of swarms and ask the following questions : * what sorts of density distributions do swarming systems make ? are they classical or nonclassical ? * how are the final density distributions reached affected by endogenous interactions , exogenous forces , boundaries , and the interplay of these ? *how well can discrete and continuum swarming systems approximate each other ? to answer these questions , we formulate a general mathematical framework for discrete , interacting swarm members in one spatial dimension , also subject to exogenous forces .we then derive an analogous continuum model and use variational methods to seek minimizers of its energy .this process involves solution of a fredholm integral equation for the density .for some choices of endogenous forces , we are able to find exact solutions . perhaps surprisingly , they are not always classical . in particular , they can involve -function concentrations of mass at the domain boundary .the rest of this paper is organized as follows . in section[ sec : formulation ] , we create the mathematical framework for our study , and derive conditions for a particular density distribution to be an equilibrium solution , and to be stable to various classes of perturbations . in sections [ sec : repulsive ] and [ sec : morse ] ,we demonstrate different types of swarm equilibria via examples . in section[ sec : repulsive ] , we focus on purely repulsive endogenous interactions .we consider a bounded domain with no exogenous forces , a half - line subject to gravitational forces , and an unbounded domain subject to a quadratic exogenous potential , modeling attraction to a light , chemical , or nutrient source .for all three situations , we find exact solutions for swarm equilibria .for the first two examples , these equilibria consist of a density distribution that is classical in the interior of the domain , but contains -functions at the boundaries . for the third example , the equilibrium is compactly supported with the density dropping discontinuously to zero at the edge of the support .for all three examples , we compare analytical solutions from the continuum framework to equilibria obtained from numerical simulation of the underlying discrete system . 
the two agree closely even for small numbers of discrete swarm members .section [ sec : morse ] is similar to section [ sec : repulsive ] , but we now consider the more complicated case of endogenous interactions that are repulsive on short length scales and attractive over longer ones ; such forces are typical for swarming biological organisms . in section [ sec : locust - ground ] , we revisit locust swarms , focusing on their bubble - like morphology as described above , and on the significance of dimensionality . in a one - dimensional model corresponding to a vertical slice of a wide locust swarm under the influence of social interactions and gravity ,energy minimizers can reproduce concentrations of locusts on the ground and a group of locusts above the ground , but there can not be a separation between the two groups. however , a quasi - two - dimensional model accounting for the influence of the swarm s horizontal extent does , in contrast , have minimizers which qualitatively correspond to the biological bubble - like swarms .consider identical interacting particles ( swarm members ) in one spatial dimension with positions .assume that motion is governed by newton s law , so that acceleration is proportional to the sum of the drag and motive forces .we will focus on the case where the acceleration is negligible and the drag force is proportional to the velocity .this assumption is appropriate when drag forces dominate momentum , commonly known in fluid dynamics as the low reynolds number or stokes flow regime . in the swarming literature , the resulting models , which are first - order in time , are known as _kinematic models have been used in numerous studies of swarming and collective behavior , including ) .we now introduce a general model with both endogenous and exogenous forces , as with the locust model ( [ eq : locusts ] ) .the endogenous forces act between individuals and might include social attraction and repulsion ; see for a discussion . for simplicity , we assume that the endogenous forces act in an additive , pairwise manner .we also assume that the forces are symmetric , that is , the force induced by particle on particle is the opposite of that induced by particle on particle .exogenous forces might include gravity , wind , and taxis towards light or nutrients .the governing equations take the form [ eq : discretesystem ] eventually we will examine the governing equations for a continuum limit of the discrete problem . to this end , we have introduced a _ social mass _ which scales the strength of the endogenous forces so as to remain bounded for . is the total social mass of the ensemble .( [ eq : vee ] ) defines the velocity rule ; is the endogenous velocity one particle induces on another , and is the exogenous velocity . from our assumption of symmetry of endogenous forces , is odd and in most realistic situations is discontinuous at the origin .each force , and , can be written as the gradient of a potential under the relatively minor assumption of integrability .as pointed out in , most of the specific models for mutual interaction forces proposed in the literature satisfy this requirement .many exogenous forces including gravity and common forms of chemotaxis do so as well . under this assumption , we rewrite ( [ eq : discretesystem ] ) as a gradient flow , where the potential is [ eq : discrete_gradient ] the double sum describes the endogenous forces and the single sum describes the exogenous forces . 
also , is the mutual interaction potential , which is even , and is the exogenous potential .the flow described by ( [ eq : discretegradient1 ] ) will evolve towards minimizers of the energy . up to now, we have defined the problem on . in order to confine the problem to a particular domain , one may use the artifice of letting the exogenous potential tend to infinity on the complement of . while this discrete model is convenient from a modeling and simulation standpoint , it is difficult to analyze . presently , we will derive a continuum analog of ( [ eq : discretesystem ] ) .this continuum model will allow us to derive equilibrium solutions and determine their stability via the calculus of variations and integral equation methods . to derive a continuum model , we begin by describing our evolving ensemble of discrete particles with a density function equal to a sum of -functions .( for brevity , we suppress the dependence of in the following discussion . )our approach here is similar to .these -functions have strength and are located at the positions of the particles : the total mass is where is the domain of the problem . using ( [ eq : deltafuncs ] ) , we write the discrete velocity in terms of a continuum velocity .that is , we require where by conservation of mass , the density obeys with no mass flux at the boundary .we now introduce an energy functional ] for nontrivial perturbations , which follows from ( [ eq : expand_w ] ) being exact . to summarize, we have obtained the following results : * equilibrium solutions satisfy the fredholm integral equation ( [ eq : fie ] ) and the mass constraint ( [ eq : mass1 ] ) . * the solution is a local and global minimizer with respect to the first class of perturbations ( those with support in ) if in ( [ eq : second_variation ] ) is positive .* the solution is a local minimizer with respect to the second ( more general zero - mass ) class of perturbations if satisfies ( [ eq : stable ] ) .if in addition is positive for these perturbations , then is a global minimizer as well . in practice, we solve the integral equation ( [ eq : fie ] ) to find candidate solutions .then , we compute to determine whether is a local minimizer .finally , when possible , we show the positivity of to guarantee that is a global minimizer .as the continuum limit replaces individual particles with a density , we need to make sure the continuum problem inherits a physical interpretation for the underlying problem .if we think about perturbing an equilibrium configuration , we note that mass can not `` tunnel '' between disjoint components of the solution . as suchwe define the concept of a multi - component swarm equilibrium .suppose the swarm s support can be divided into a set of disjoint , closed , connected components , that is we define a swarm equilibrium as a configuration in which each individual swarm component is in equilibrium , we can still define in + f(x ) = \int_{{\omega_{{{\bar \rho}}}}}q(x - y ) { { \bar \rho}}(y)~dy + f(x),\ ] ] but now in .we can now define a swarm minimizer .we say a swarm equilibrium is a swarm minimizer if for some neighborhood of each component of the swarm . 
in practicethis means that the swarm is an energy minimizer for infinitesimal redistributions of mass in the neighborhood of each component .this might also be called a lagrangian minimizer in the sense that the equilibrium is a minimizer with respect to infinitesimal lagrangian deformations of the distributions .it is crucial to note that even if a solution is a global minimizer , other multi - component swarm minimizers may still exist .these solutions are local minimizers and consequently a global minimizer may not be a global attractor under the dynamics of ( [ eq : pde ] ) .in this section we discuss the minimization problem formulated in section [ sec : formulation ] .it is helpful for expository purposes to make a concrete choice for the interaction potential . as previously mentioned , in many physical , chemical , and biological applications , the pairwise potential is symmetric. additionally , repulsion dominates at short distances ( to prevent collisions ) and the interaction strength approaches zero for very long distances .a common choice for is the morse potential with parameters chosen to describe long - range attraction and short - range repulsion . for the remainder of this section, we consider a simpler example where is the laplace distribution which represents repulsion with strength decaying exponentially in space . when there is no exogenous potential , , and when the domain is infinite , _e.g. _ , , the swarm will spread without bound .the solutions asymptotically approach the barenblatt solution to the porous medium equation as shown in .however , when the domain is bounded or when there is a well in the exogenous potential , bounded swarms are observed both analytically and numerically , as we will show .figure [ fig : repulsion_schematic ] shows solutions for three cases : a bounded domain with no exogenous potential , a gravitational potential on a semi - infinite domain , and a quadratic potential well on an infinite domain . in each case , a bounded swarm solution is observed but the solutions are not necessarily continuous and can even contain -function concentrations at the boundaries .we discuss these three example cases in detail later in this section .first , we will formulate the minimization problem for the case of the laplace potential .we will attempt to solve the problem classically ; when the solution has compact support contained within the domain we find solutions that are continuous within the support and may have jump discontinuities at the boundary of the support .however , when the boundary of the support coincides with the boundary of the domain , the classical solution may break down and it is necessary to include a distributional component in the solution .we also formulate explicit conditions for the solutions to be global minimizers .we then apply these results to the three examples mentioned above .recall that for to be a steady solution , it must satisfy the integral equation ( [ eq : fie ] ) subject to the mass constraint ( [ eq : mass1 ] ) . for to be a local minimizer , it must also satisfy ( [ eq : stable ] ) , finally , recall that for a solution to be a global minimizer , the second variation ( [ eq : second_variation ] ) must be positive .we saw that if , this is guaranteed . 
for ( [ eq : laplace ] ) , andso for the remainder of this section , we are able to ignore the issue of .any local minimizer that we find will be a global minimizer .additionally , for the remainder of this section , we restrict our attention to cases where the support of the solution is a single interval in ; in other words , the minimizing solution has a connected support .the reason that we are able to make this restriction follows from the notion of swarm minimization , discussed above .in fact , we can show that there are no multi - component swarm minimizers for the laplace potential as long as the exogenous potential is convex , that is , on . to see this ,assume we have a swarm minimizer with a at least two disjoint components .consider in the gap between two components so that .we differentiate twice to obtain note that as in . by assumption .consequently , in and so is convex upwards in the gap . also , at the endpoints of the gap .we conclude from the convexity that must be less than near one of the endpoints .this violates the condition of swarm minimization from the previous section , and hence the solution is not a swarm minimizer . since swarm minimization is a necessary condition for global minimization , we now , as discussed , restrict attention to single - component solutions. for concreteness , assume the support of the solution is ] and any mass we can find a solution to ( [ eq : fie ] ) with smooth in the interior and with a concentration at the endpoints .however , we havent yet addressed the issue of being non - negative , nor have we considered whether it is a minimizer .we next consider whether the extremal solution is a minimizer , which involves the study of ( [ eq : stable2 ] ) .we present a differential operator method that allows us to compute and deduce sufficient conditions for to be a minimizer .we start by factoring the differential operator where . applying these operators to the interaction potential , we see that substituting in ( [ eq : fie - sol2 ] ) into our definition of in ( [ eq : stable2 ] ) yields now consider applying to ( [ eq : one ] ) at a point in .we see that = { \mathcal{d}^- } [ \lambda],\ ] ] where we ve used the fact that in .if we let and let decrease to zero , the integral term vanishes and solving for yields the first half of ( [ eq : ab ] ) .a similar argument near yields the value of .assuming does not coincide with an endpoint of , we now consider the region , which is to the left of the support . again , applying to ( [ eq : fie ] ) simplifies the equation ; we can check that both the integral term and the contribution from the -functions are annihilated by this operator , from which we deduce that = 0 \qquad \rightarrow \qquad f(x)-\lambda(x ) = ce^x,\ ] ] where is an unknown constant .a quick check shows that if is continuous , then is continuous at the endpoints of so that .this in turn determines , yielding ^{x-\alpha } \qquad { \rm for } \quad x \leq \alpha.\ ] ] a similar argument near yields ^{\beta -x } \qquad { \rm for } \quad x \geq \beta.\ ] ] as discussed in section [ sec : minimizers ] , for to be a minimizer we wish for for and .a little algebra shows that this is equivalent to [ eq : mincon ] if and are both strictly inside , then ( [ eq : mincon ] ) constitutes sufficient conditions for the extremal solution to be a global minimizer ( recalling that ) . 
we may also derive a necessary condition at the endpoints of the support from ( [ eq : mincon ] ) .as increases to , we may apply lhpital s rule and this equation becomes equivalent to the condition , as expected .a similar calculation letting decrease to implies that .however , since is a density , we are looking for positive solutions . hence, either or coincides with the left endpoint of .similarly , either or coincides with the right edge of .this is consistent with the result ( [ eq : nodeltas2 ] ) which showed that -functions can not occur in the interior of . in summary, we come to two conclusions : * a globally minimizing solution contains a -function only if a boundary of the support of the solution coincides with a boundary of the domain . * a globally minimizing solution must satisfy ( [ eq : mincon ] ) .we now consider three concrete examples for and .we model a one - dimensional biological swarm with repulsive social interactions described by the laplace potential .we begin with the simplest possible case , namely no exogenous potential , and a finite domain which for convenience we take to be the symmetric interval ] with .cross - hatched boxes indicate the boundary of the domain .the solid line is the classical solution .dots correspond to the numerically - obtained equilibrium of the discrete system ( [ eq : discretesystem ] ) with swarm members .the density at each lagrangian grid point is estimated using the correspondence discussed in section ( [ sec : contmodel ] ) and pictured in figure [ fig : delta_schematic ] .each `` lollipop '' at the domain boundary corresponds to a -function of mass in the analytical solution , and simultaneously to a superposition of swarm members in the numerical simulation .hence , we see excellent agreement between the continuum minimizer and the numerical equilibrium even for this relatively small number of lagrangian points .we now consider repulsive social interactions and an exogenous gravitational potential .the spatial coordinate describes the elevation above ground .consequently , is the semi - infinite interval . then with , shown in figure [ fig : repulsion_schematic](f ) . as we know from ( [ eq : singlecomponent ] ) that the minimizing solution has a connected support , _i.e. _ , it is a single component .moreover , translating this component downward decreases the exogenous energy while leaving the endogenous energy unchanged .thus , the support of the solution must be ] which is positive , and hence the solution is globally stable . for previous calculation naively implies .since can not be negative , the minimizer in this case is a -function at the origin , namely , shown in figure [ fig : repulsion_schematic](d ) . in this case , from ( [ eq : gravitylambda ] ) and from ( [ eq : lambdaright ] ) .it follows that the first inequality follows from a taylor expansion .the second follows from our assumption . since the solution is a global minimizer . in summary, there are two cases . when , the globally stable minimizer is a -function at the origin . when there is a globally stable minimizer consisting of a -function at the origin which is the left - hand endpoint of a compactly - supported classical swarm .the two cases are shown schematically in figures [ fig : repulsion_schematic](de ) .figure [ fig : repulsion_numerics](b ) compares analytical and numerical results for the latter ( ) case with and .we use swarm members for the numerical simulation . 
the numerical ( dots ) and analytical ( line ) agree , as does the nonclassical part of the solution , pictured as the `` lollipop '' which represents a superposition of swarm members in the numerical simulation having total mass , and simultaneously a -function of mass in the analytical solution .we now consider the infinite domain with a quadratic exogenous potential well , pictured in figure [ fig : repulsion_schematic](c ) .this choice of a quadratic well is representative of a generic potential minimum , as might occur due to a chemoattractant , food source , or light source .thus where controls the strength of the potential .as we know from ( [ eq : singlecomponent ] ) that the minimizing solution is a single component .we take the support of the solution to be ] and then allow for -functions at the boundaries . once again , we will see that minimizers contain -functions only when the boundary of the support , , coincides with the boundary of the domain , . for convenience , define the differential operators and , and apply to ( [ eq : fie ] ) to obtain , \quad x \in { { \omega_{{{\bar \rho } } } } } , \label{eq : morselocal}\ ] ] where thus , we guess the full solution to the problem is obtained by substituting ( [ eq : morseansatz ] ) into ( [ eq : mass1 ] ) and ( [ eq : fie ] ) which yields where in .we begin by considering the amplitudes and of the distributional component of the solution .we factor the differential operators where and where .note that h(x - y),\ ] ] where is the heaviside function .now we apply to ( [ eq : morseintegraleq ] ) at a point in , which yields { { { { { \bar \rho}}_*}}}(y)\,dy & & \\\mbox{}+a { \mathcal{p}^-}{\mathcal{q}^-}q(x-\alpha ) = { \mathcal{p}^-}{\mathcal{q}^-}\ { \lambda -f(x)\}. \nonumber\end{aligned}\ ] ] taking the limit yields where we have used the fact that .a similar calculation using the operators and focusing near yields that eqs .( [ eq : morseboundary1 ] ) and ( [ eq : morseboundary2 ] ) relate the amplitudes of the -functions at the boundaries to the value of the classical solution there .further solution of the problem requires to be specified . in the case where , solving ( [ eq : morselocal ] ) for and solving ( [ eq : morseboundary1 ] ) and ( [ eq : morseboundary2 ] ) for and yields an equilibrium solution .one must check that the solution is non - negative and then consider the solutions stability to determine if it is a local or global minimizer . in the case where is contained in the interior of , we know that as discussed in section [ sec : absence ] .we consider this case below .suppose is contained in the interior of . then . following section [ sec : funcmin ], we try to determine when in and when , which constitute necessary and sufficient conditions for to be a global minimizer .we apply to ( [ eq : morseintegraleq ] ) at a point . the integral term and the terms arising from the functions vanish .the equation is simply .we write the solution as the two constants are determined as follows . from ( [ eq : morseintegraleq ] ) , is a continuous function , and thus we derive a jump condition on the derivative to get another equation for .we differentiate ( [ eq : morseintegraleq ] ) and determine that is continuous . however , since for , . 
substituting this result into the derivative of ( [ eq : lambdamorse ] ) and letting increase to , we find the solution to ( [ eq : lambdacontinuity ] ) and ( [ eq : lambdaprime ] ) is [ eq : kvals ] now that is known near we can compute when , at least near the left side of .taylor expanding around , we find the quadratic term in ( [ eq : lambdatayl ] ) has coefficient where the second line comes from substituting ( [ eq : morseboundary1 ] ) with and noting that the classical part of the solution must be nonnegative since it is a density . furthermore , since we expect ( this can be shown a posteriori ) , we have that the quadratic term in ( [ eq : lambdatayl ] ) is positive .a similar analysis holds near the boundary .therefore , for in a neighborhood outside of . stated differently , the solution ( [ eq : morseansatz ] ) is a swarm minimizer , that is , it is stable with respect to infinitesimal redistributions of mass . the domain is determined through the relations ( [ eq : morseboundary1 ] ) and ( [ eq : morseboundary2 ] ) , which , when , become [ eq : bcs ] in the following subsections , we will consider the solution of the continuum system ( [ eq : fie ] ) and ( [ eq : mass1 ] ) with no external potential , . we consider two cases for the morse interaction potential ( [ eq : morse ] ) : first , the catastrophic case on , for which the above calculation applies , and second , for the h - stable case on a finite domain , in which case and there are -concentrations at the boundary .exact solutions for cases with an exogenous potential , can be straightforwardly derived , though the algebra is even more cumbersome and the results unenlightening . in this case , in ( [ eq : fie ] ) and in ( [ eq : morse ] ) so that in ( [ eq : morselocal ] ) .the solution to ( [ eq : morselocal ] ) is where in the absence of an external potential , the solution is translationally invariant .consequently , we may choose the support to be an interval ] . as before , in ( [ eq : fie ] ) but now in ( [ eq : morse ] ) so that in ( [ eq : morselocal ] ) .the classical solution to ( [ eq : morselocal ] ) is where we will again invoke symmetry to assume .the minimizer will be the classical solution together with -functions on the boundary , again by symmetry , .consequently , the solution can be written as .\ ] ] substituting into the integral equation ( [ eq : fie ] ) and the mass constraint ( [ eq : mass1 ] ) will determine the constants , and .the integral operator produces modes spanned by .this produces two homogeneous , linear equations for , and .the mass constraint ( [ eq : mass1 ] ) produces an inhomogeneous one , namely an equation linear in , , and for the mass .we have the three dimensional linear system the solution is [ eq : hstablesoln ] }{2\phi } , \\ c & = & -\frac{{{\tilde \mu}}m ( 1-{{\tilde \mu}}^2 ) ( 1-{{\tilde \mu}}^2 l^2)}{\phi } , \\\lambda & = & \frac{m { { \tilde \mu}}(1-{{g}}l^2)\left [ ( 1+{{\tilde \mu}})(1+{{\tilde \mu}}l ) + ( 1-{{\tilde \mu}})(1-{{\tilde \mu}}l ) \right]}{\phi},\end{aligned}\ ] ] where for convenience we have defined for this h - stable case , which ensures that for nontrivial perturbations .this guarantees that the solution above is a global minimizer . in the limit of large domain size ,the analytical solution simplifies substantially . 
to leading order ,the expressions ( [ eq : hstablesoln ] ) become note that is exponentially small except in a boundary layer near each edge of , and therefore the solution is nearly constant in the interior of .figure [ fig : morse_numerics](b ) compares analytical and numerical results for an example case with a relatively small value of .we take total mass and set the domain half - width to be . the interaction potential parameters and .the solid line is the classical solution .dots correspond to the numerically - obtained equilibrium of the discrete system ( [ eq : discretesystem ] ) with swarm members .each `` lollipop '' at the domain boundary corresponds to a -function of mass in the analytical solution , and simultaneously to a superposition of swarm members in the numerical simulation .we now return to the locust swarm model of , discussed also in section [ sec : intro ] .recall that locust swarms are observed to have a concentration of individuals on the ground , a gap or `` bubble '' where the density of individuals is near zero , and a sharply delineated swarm of flying individuals .this behavior is reproduced in the model ( [ eq : locusts ] ) ; see figure [ fig : locust](b ) .in fact , figure [ fig : locust](c ) shows that the bubble is present even when the wind in the model is turned off , and only endogenous interactions and gravity are present . to better understand the structure of the swarm , we consider the analogous continuum problem . to further simplify the model , we note that the vertical structure of the swarm appears to depend only weakly on the horizontal direction , and thus we will construct a _ quasi - two - dimensional _ model in which the horizontal structure is assumed uniform . in particular , we will make a comparison between a one - dimensional and a quasi - two - dimensional model . 
both modelstake the form of the energy minimization problem ( [ eq : fie ] ) on a semi - infinite domain , with an exogenous potential describing gravity .the models differ in the choice of the endogenous potential , which is chosen to describe either one - dimensional or quasi - two - dimensional repulsion .the one - dimensional model is precisely that which we considered in section [ sec : grav ] .there we saw that minimizers of the one - dimensional model can reproduce the concentrations of locusts on the ground and a group of individuals above the ground , but there can not be a separation between the grounded and airborne groups .we will show below that for the quasi - two - dimensional model , this is not the case , and indeed , some minimizers have a gap between the two groups .as mentioned , the one - dimensional and quasi - two - dimensional models incorporate only endogenous repulsion .however , the behavior we describe herein does not change for the more biologically realistic situation when attraction is present .we consider the repulsion - only case in order to seek the minimal mechanism responsible for the appearance of the gap .we consider a swarm in two dimensions , with spatial coordinate .we will eventually confine the vertical coordinate to be nonnegative , since it describes the elevation above the ground at .we assume the swarm to be uniform in the horizontal direction , so that .we construct a quasi - two - dimensional interaction potential , letting and , this yields it is straightforward to show that the two - dimensional energy per unit horizontal length is given by = \frac{1}{2 } \int_{{\omega}}\int_{{\omega}}\rho(x_1 ) \rho(y_1 ) q_{2d}(x_1-y_1)\,dx_1\,dy_1 + \int_{{\omega}}f(x_1)\rho(x_1)\,dx_1,\ ] ] where the exogenous force is and the domain is the half - line .this is exactly analogous to the one - dimensional problem ( [ eq : continuum_energy ] ) , but with particles interacting according to the quasi - two - dimensional endogenous potential .similarly , the corresponding dynamical equations are simply ( [ eq : cont_velocity ] ) and ( [ eq : pde ] ) but with endogenous force . for the laplace potential ( [ eq : laplace ] ) , the quasi - two - dimensional potential is this integral can be manipulated for ease of calculation , where ( [ eq : q2db ] ) comes from symmetry , ( [ eq : q2dc ] ) comes from letting , ( [ eq : q2dd ] ) comes from letting , and ( [ eq : q2de ] ) comes from the trigonometric substitution . from an asymptotic expansion of ( [ eq : q2dd ] ), we find that for small , whereas for large , .\ ] ] in our numerical study , it is important to have an efficient method of computing values of . 
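the behaviour just described can be checked numerically ; the snippet below evaluates the quasi - two - dimensional potential by direct quadrature of the transverse integral and reports its value and slope at the origin , its steepest descent , and its exponential tail . the quadrature limits and the overall normalization are illustrative assumptions rather than a reproduction of the exact expressions in the text .

```python
import numpy as np

def q2d(z, w_max=60.0, n=24001):
    """quasi-2-d laplace potential by trapezoid quadrature of the transverse integral
    (up to the overall normalization chosen in the text)."""
    w = np.linspace(-w_max, w_max, n)
    f = np.exp(-np.sqrt(z ** 2 + w ** 2))
    return float((f.sum() - 0.5 * (f[0] + f[-1])) * (w[1] - w[0]))

z = np.linspace(0.0, 12.0, 601)
q = np.array([q2d(zi) for zi in z])
slope = np.gradient(q, z)

print("value at the origin      :", round(q[0], 4))           # finite plateau at z = 0
print("slope at the origin      :", round(slope[0], 4))       # small compared with the steepest slope
print("largest downhill slope   :", round(-slope.min(), 4))   # the maximal slope invoked later in the text
print("tail ratio q2d(9)/q2d(8) :", round(q2d(9.0) / q2d(8.0), 4))  # roughly exp(-1): exponential decay
```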
in practice ,we use ( [ eq : smallz ] ) for small , ( [ eq : largez ] ) for large , and for intermediate values of we interpolate from a lookup table pre - computed using ( [ eq : q2de ] ) .the potential is shown in figure [ fig : q2d ] .note that is horizontal at , and monotonically decreasing in .the negative of the slope reaches a maximum of the quantity plays a key role in our analysis of minimizers below .the fourier transform of can be evaluated exactly using the integral definition ( [ eq : q2dc ] ) and interchanging the order of integration of and to obtain which we note is positive , so local minimizers are global minimizers per the discussion in section [ sec : minimizers ] .we model a quasi - two - dimensional biological swarm with repulsive social interactions of laplace type and subject to an exogenous gravitational potential , .the spatial coordinate describes the elevation above ground .consequently , is the semi - infinite interval . from section [ sec : grav ] , recall that for the one - dimensional model, is a minimizer for some , corresponding to all swarm members pinned by gravity to the ground .we consider this same solution as a candidate minimizer for the quasi - two - dimensional problem . in this case, above is actually a minimizer for any mass . to see this, we can compute , since , increases away from the origin and hence is at least a swarm minimizer . in fact , if , is a global minimizer because which guarantees that is strictly increasing for as shown in figure [ fig : lambda](a ) . because it is strictly increasing , for .given this fact , and additionally , since as previously shown , is a global minimizer .this means that if an infinitesimal amount of mass is added anywhere in the system , it will descend to the origin .consequently , we believe this solution is the global attractor ( though we have not proven this ) . note that while the condition is sufficient for to be a global minimizer , it is not necessary . as alluded above ,it is not necessary that be strictly increasing , only that for .this is the case for for , where .figure [ fig : lambda](b ) shows a case when .although for , has a local minimum .in this situation , although the solution with the mass concentrated at the origin is a global minimizer , it is _ not _ a global attractor .we will see that a small amount of mass added near the local minimum of will create a swarm minimizer , which is dynamically stable to perturbations .figure [ fig : lambda](c ) shows the critical case when . in this casethe local minimum of at satisfies and .figure [ fig : lambda](d ) shows the case when and now in the neighborhood of the minimum . in this casethe solution with the mass concentrated at the origin is only a swarm minimizer ; the energy of the system can be reduced by transporting some of the mass at the origin to the neighborhood of the local minimum . when it is possible to construct a continuum of swarm minimizers .we have conducted a range of simulations for varying and have measured two basic properties of the solutions .we set and use in all simulations of the discrete system .initially , all the swarm members are high above the ground and we evolve the simulation to equilibrium . 
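a sketch of such a discrete relaxation is given below : particles interact through the quasi - two - dimensional repulsion ( whose force is obtained by numerically differentiating a tabulated potential ) , feel gravity , are confined to the half - line , and grounded particles that would otherwise sink are held fixed . the swarm size , total mass , gravity and relaxation schedule are placeholder choices , not the values behind the reported figures .

```python
import numpy as np

# placeholder parameters (illustrative, not the values used for the figures)
N, MASS, GRAV = 100, 3.0, 1.0
DT, STEPS = 0.02, 10000
M_SOCIAL = MASS / N

def q2d_table(z_max=25.0, nz=1001, w_max=50.0, nw=5001):
    """tabulate the quasi-2-d laplace potential by quadrature over the transverse direction."""
    z = np.linspace(0.0, z_max, nz)
    w = np.linspace(-w_max, w_max, nw)
    dw = w[1] - w[0]
    q = np.empty(nz)
    for i, zi in enumerate(z):                        # row by row to keep memory small
        f = np.exp(-np.sqrt(zi ** 2 + w ** 2))
        q[i] = (f.sum() - 0.5 * (f[0] + f[-1])) * dw  # trapezoid rule
    return z, q

Z_TAB, Q_TAB = q2d_table()
DQ_TAB = np.gradient(Q_TAB, Z_TAB)                    # dQ2d/dz <= 0 on the table

def social_velocity(x):
    """endogenous velocity from pairwise quasi-2-d repulsion, -d/dx_i of sum_j m Q2d(x_i - x_j)."""
    diff = x[:, None] - x[None, :]
    slope = np.interp(np.abs(diff), Z_TAB, DQ_TAB)
    force = -np.sign(diff) * slope                    # repulsion: pushes particles apart
    np.fill_diagonal(force, 0.0)
    return M_SOCIAL * force.sum(axis=1)

rng = np.random.default_rng(7)
x = rng.uniform(5.0, 15.0, N)                         # all swarm members start high above the ground
for _ in range(STEPS):
    v = social_velocity(x) - GRAV
    v[(x <= 0.0) & (v < 0.0)] = 0.0                   # grounded particles that would sink stay put
    x = np.maximum(x + DT * v, 0.0)                   # impenetrable ground at z = 0

on_ground = x < 1e-3
print("fraction of mass on the ground:", round(float(on_ground.mean()), 3))
if (~on_ground).any():
    print("airborne component spans [%.2f, %.2f]" % (x[~on_ground].min(), x[~on_ground].max()))
```

the two printed quantities correspond to the two properties of the solutions measured in the simulations described above .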
Figure [ fig : quasi2dnumerics](a) measures the mass on the ground as a percentage of the total swarm mass. The horizontal blue line indicates (schematically) that, for small enough mass, the equilibrium consists of all mass concentrated at the origin; as discussed above, this state is the global minimizer and (we believe) the global attractor. As mass is increased through the critical value, the equilibrium is a swarm minimizer consisting of a classical swarm in the air separated from the origin, and some mass concentrated on the ground. As the mass increases, the proportion of mass located on the ground decreases monotonically. Figure [ fig : quasi2dnumerics](b) visualizes the support of the airborne swarm, which exists only above the critical mass; the lower and upper data represent the coordinates of the bottom and top of the swarm, respectively. As mass is increased, the span of the swarm increases monotonically. As established above, swarm minimizers with two components exist once the mass is sufficiently large. In fact, there is a continuum of swarm minimizers with different proportions of mass in the air and on the ground. Which minimizer is obtained in simulation depends on initial conditions. Figure [ fig : lambda2 ] shows two such minimizers and the associated values of the quantity plotted there, each obtained from a different initial condition. Recalling that for a swarm minimizer this quantity is constant on each connected component of the swarm, we record its value on the grounded component and on the airborne component. In figure [ fig : lambda2](a,b), part of the mass is contained in the grounded component; in this case the two values indicate that the total energy could be reduced by transporting swarm members from the air to the ground. In contrast, in figure [ fig : lambda2](c,d), a larger share of the mass is contained in the grounded component; in this case the values indicate that the total energy could be reduced by transporting swarm members from the ground to the air. Note that, by continuity, we believe a state exists where the two values coincide, which would correspond to a global minimizer. However, this state is clearly not a global attractor and hence will not necessarily be achieved in simulation. We have demonstrated that, for sufficiently large mass, one can construct a continuum of swarm minimizers with a gap between grounded and airborne components, and that these solutions can have a lower energy than the state with the density concentrated solely on the ground. By contrast with the one-dimensional system of section [ sec : grav ], in which no gap is observed, these gap states appear to be the generic configuration for sufficiently large mass in the quasi-two-dimensional system.
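The two measurements just described can be reproduced in spirit by sweeping the total mass in the relaxation sketch above; the mass grid, particle number, initial height, and ground threshold below are placeholders rather than the values used in the paper.

# Sweep the total mass, relax from a configuration high above the ground, and record
# the ground fraction and the airborne span (cf. panels (a) and (b) of the figure).
rng = np.random.default_rng(0)
ground_fraction, airborne_span = [], []
for M in np.linspace(0.5, 8.0, 16):
    x_eq = relax_swarm(x0=10.0 + rng.random(100), mass=M)
    on_ground = x_eq < 1e-6
    ground_fraction.append(on_ground.mean())
    airborne = x_eq[~on_ground]
    airborne_span.append((airborne.min(), airborne.max()) if airborne.size else None)

Below the critical mass the airborne component is empty and all of the mass sits at the origin; above it the recorded span and ground fraction follow the monotone trends reported here.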
We conclude that dimensionality is a crucial element for the formation of the bubble-like shape of real locust swarms. In this paper we developed a framework for studying equilibrium solutions of swarming problems. We related the discrete swarming problem to an associated continuum model. This continuum model has an energy formulation which enables analysis of equilibrium solutions and their stability. We derived conditions for an equilibrium solution to be a local minimizer, a global minimizer, and/or a swarm minimizer, that is, stable to infinitesimal Lagrangian deformations of the mass. We found many examples of compactly supported equilibrium solutions, which may be discontinuous at the boundary of the support. In addition, when a boundary of the support coincides with the domain boundary, a minimizer may contain a δ-concentration there. For the case of endogenous repulsion modeled by the Laplace potential, we computed three example equilibria. On a bounded domain, the minimizer is a constant density profile with δ-functions at each end. On a half-line with an exogenous gravitational potential, the minimizer is a compactly supported linear density profile with a δ-function at the origin. In free space with an exogenous quadratic potential, the minimizer is a compactly supported inverted parabola with jump discontinuities at the endpoints. Each of the aforementioned solutions is also a global minimizer. To extend the results above, we also found analytical solutions for endogenous attractive-repulsive forces, modeled with the Morse potential. In the case that the social force was in the catastrophic statistical mechanical regime, we found a compactly supported solution whose support is independent of the total population mass. This means that, within the modeling assumptions, swarms become denser with increasing mass. For the case of an H-stable social force, there is no equilibrium solution on an infinite domain. On a finite domain, mass is partitioned between a classical solution in the interior and δ-concentrations on the boundary. We recall that for the locust model (see figure [ fig : locust ]) a concentration of locusts occurs on the ground, with a seemingly classical component above, separated by a gap. None of the one-dimensional solutions (for the Laplace and Morse potentials) discussed above contain a gap, that is, multiple swarm components that are spatially disconnected, suggesting that this configuration is intrinsically two-dimensional.
To study this configuration, we computed a quasi-two-dimensional potential corresponding to a horizontally uniform swarm. We demonstrated numerically that, for a wide range of parameters, there exists a continuous family of swarm minimizers that consist of a concentration on the ground and a disconnected, classical component in the air, reminiscent of our earlier numerical studies of a discrete locust swarm model. We believe that the analytical solutions we found provide a sampling of the rich tapestry of equilibrium solutions that manifest in the general model we have considered, and in nature. We hope that these solutions will inspire further analysis and guide future modeling efforts. CMT acknowledges support from the NSF through grants DMS-0740484 and DMS-1009633. AJB gratefully acknowledges support from the NSF through grants DMS-0807347 and DMS-0730630, and the hospitality of Robert Kohn and the Courant Institute of Mathematical Sciences. We both wish to thank the Institute for Mathematics and its Applications, where portions of this work were completed.
we study equilibrium configurations of swarming biological organisms subject to exogenous and pairwise endogenous forces . beginning with a discrete dynamical model , we derive a variational description of the corresponding continuum population density . equilibrium solutions are extrema of an energy functional , and satisfy a fredholm integral equation . we find conditions for the extrema to be local minimizers , global minimizers , and minimizers with respect to infinitesimal lagrangian displacements of mass . in one spatial dimension , for a variety of exogenous forces , endogenous forces , and domain configurations , we find exact analytical expressions for the equilibria . these agree closely with numerical simulations of the underlying discrete model.the exact solutions provide a sampling of the wide variety of equilibrium configurations possible within our general swarm modeling framework . the equilibria typically are compactly supported and may contain -concentrations or jump discontinuities at the edge of the support . we apply our methods to a model of locust swarms , which are observed in nature to consist of a concentrated population on the ground separated from an airborne group . our model can reproduce this configuration ; quasi - two - dimensionality of the model plays a critical role . swarm , equilibrium , aggregation , integrodifferential equation , variational model , energy , minimizer , locust
inflation generically predicts a primordial spectrum of density perturbations which is almost precisely gaussian . in recent yearsthe small non - gaussian component has emerged as an important observable , and will be measured with good precision by the _planck surveyor _ satellite . in the near future ,as observational data become more plentiful , it will be important to understand the non - gaussian signal expected in a wide variety of models , and to anticipate what conclusions can be drawn about early - universe physics from a prospective detection of primordial non - gaussianity . in this paper, we present a novel method for calculating the primordial non - gaussianity produced by super - horizon evolution in two - field models of inflation .our method is based on the real - space distribution of inflationary field values on a flat hypersurface , which can be thought of as a probability density function whose evolution is determined by a form of the collisionless boltzmann equation . using a cumulant representation to expand our density function around an exact gaussian, we derive ordinary differential equations which evolve the moments of this distribution .further , we show how these moments are related to observable quantities , such as the dimensionless bispectrum measured by .we present numerical results which show that this method gives good agreement with other techniques .it is not necessary to make any assumptions about the inflationary model beyond requiring a canonical kinetic term and applying the slow - roll approximation .while there are already numerous methods for computing the super - horizon contribution to , including the widely used formalism , we believe the one reported here has a number of advantages .first , this new technique is ideally suited to unraveling the various contributions to .this is because we follow the moments of the inflaton distribution directly , which makes it straightforward to identify large contributions to the skewness or other moments .the evolution equation for each moment is simple and possesses clearly identifiable source terms , which are related to the properties of the inflationary flow on field space .this offers a clear separation between two key sources of primordial non - gaussianity .one of these is the intrinsic non - linearity associated with evolution of the probability density function between successive flat hypersurfaces ; the other is a gauge transformation from field fluctuations to the curvature peturbation , .it would be difficult or impossible to observe this split within the context of other calculational schemes , such as the conventional formalism .a second advantage of our method is connected with the computational cost of numerical implementation .analytic formulas for are known in certain cases , mostly in the context of the framework , but only for very specific choices of the potential or hubble rate .these formulas become increasingly cumbersome as the number of fields increases , or if one studies higher moments . 
in the future , it seems clear that studies of complex models with many fields will increasingly rely on numerical methods .the numerical framework requires the solution to a number of ordinary differential equations which scales exponentially with the number of fields .since some models include hundreds of fields , this may present a significant obstacle .moreover , the formalism depends crucially on a numerical integration algorithm with low noise properties , since finite differences must be extracted between perturbatively different initial conditions after e - folds of evolution .thus , the background equations must be solved to great accuracy , since any small error has considerable scope to propagate . in this paperwe ultimately solve our equations numerically to determine the evolution of moments in specific models .our method requires the solution to a number of differential equations which scales at most polynomially ( or in certain cases perhaps even linearly ) with the number of fields .it does not rely on extracting finite differences , and therefore is much less susceptible to numerical noise .these advantages may be shared with other schemes , such as the numerical method recently employed by lehners & renaux - petel .a third advantage , to which we hope to return in a future publication , is that our formalism yields explicit evolution equations with source terms . from an analysis of these source terms , we hope that it will be possible to identify those physical features of specific models which lead to the production of large non - gaussianities .this paper is organized as follows . in [ sec : computing_fnl ] , we show how the non - gaussian parameter can be computed in our framework .the calculation remains in real space throughout ( as opposed to fourier space ) , which modifies the relationship between and the multi - point functions of the inflaton field .our expression for shows a clean separation between different contributions to non - gaussianity , especially between the intrinsic nonlinearity of the field evolution and the gauge transformation between comoving and flat hypersurfaces . in [ sec : transport ] , we introduce our model for the distribution of inflaton field values , which is a moment expansion " around a purely gaussian distribution .we derive the equations which govern the evolution of the moments of this distribution in the one- and two - field cases . in [ sec : numerics ] , we present a comparison of our new technique and those already in the literature . we compute numerically in several two - field models , and find excellent agreement between techniques .we conclude in [ s : conclusions ] . throughout this paper, we use units in which , and the reduced planck mass is set to unity .in this section , we introduce our new method for computing the non - gaussianity parameter .this method requires three main ingredients : a formula for the curvature perturbation , , in terms of the field values on a spatially flat hypersurface ; expressions for the derivatives of the number of e - foldings , , as a function of field values at horizon exit ; and a prescription for evolving the field distribution from horizon exit to the time when we require the statistical properties of .the first two ingredients are given in eqs . and , found at the end of [ ss : sep_universe ] and [ sec : derivative - n ] respectively .the final ingredient is discussed in [ sec : transport ] . 
once it became clear that non - linearities of the microwave background anisotropies could be detected by the wmap and _ planck _ survey satellites , many authors studied higher - order correlations of the curvature perturbation . in early work , direct calculations of a correlation function were matched to the known limit of local non - gaussianity .this method works well if isocurvature modes are absent , so that the curvature perturbation is constant after horizon exit . in the more realistic situation that isocurvature modes cause evolution on superhorizon scales , all correlation functions become time dependent .various formalisms have been employed to describe this evolution .lyth & rodrguez extended the method beyond linear order .this method is simple and well - suited to analytical calculation .rigopoulos , shellard and van tent worked with a gradient expansion , rewriting the field equations in langevin form .the noise term was used as a proxy for setting initial conditions at horizon crossing .a similar ` exact ' gradient formalism was written down by langlois & vernizzi . in its perturbative form, this approach has been used by lehners & renaux - petel to obtain numerical results .another numerical scheme has been introduced by huston & malik .what properties do we require of a successful prediction ? consider a typical observer , drawn at random from an ensemble of realizations of inflation . in any of the formalismsdiscussed above , we aim to estimate the statistical properties of the curvature perturbation which would be measured by such an observer .some realizations may yield statistical properties which are quite different from the ensemble average , but these large excursions are uninteresting unless anthropic arguments are in play .next we introduce a collection of comparably - sized spacetime volumes whose mutual scatter is destined to dominate the microwave background anisotropy on a given scale .neglecting spatial gradients , each spacetime volume will follow a trajectory in field space which is slightly displaced from its neighbors .the scatter between trajectories is determined by initial conditions set at horizon exit , which are determined by promoting the vacuum fluctuation to a classical perturbation .a correct prediction is a function of the trajectories followed by every volume in the collection , taken as a whole .one never makes a prediction for a single trajectory .each spacetime volume follows a trajectory , which we label with its position at some fixed time , to be made precise below . throughout this paper ,superscript ` ' denotes evaluation on a spatially flat hypersurface .consider the evolution of some quantity of interest , , which is a function of trajectory .if we know the distribution we can study statistical properties of such as the moment , ^m , \ ] ] where we have introduced the ensemble average of , in eqs . , stands for a collection of any number of fields .it is the which are observable quantities .. defines what we will call the exact separate universe picture .it is often convenient to expand as a power series in the field values centered on a fiducial trajectory , labelled ` fid , ' when eq .is used to evaluate the , we refer to the ` perturbative ' separate universe picture .if all terms in the power series are retained , these two versions of the calculation are formally equivalent . 
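To make the distinction concrete, a toy numerical check of the exact and perturbative pictures might look as follows; the observable X, the fiducial value, and the spread are placeholders, not a model from the text. The exact estimate samples the distribution of trajectories directly, while the perturbative estimate expands X about the fiducial trajectory to second order and takes Gaussian expectations of the power series term by term.

import numpy as np

def exact_moment(X, phi_samples, m=3):
    # 'Exact' separate-universe estimate: average over the sampled trajectories.
    values = X(phi_samples)
    return np.mean((values - values.mean()) ** m)

def perturbative_moment(X, phi_fid, sigma, h=1e-3):
    # 'Perturbative' estimate of the third central moment: expand X about the
    # fiducial trajectory, X ~ a*dphi + (b/2)*dphi^2, then use Gaussian moments
    # of dphi, giving 3*a^2*b*sigma^4 + b^3*sigma^6.
    a = (X(phi_fid + h) - X(phi_fid - h)) / (2 * h)
    b = (X(phi_fid + h) - 2 * X(phi_fid) + X(phi_fid - h)) / h**2
    return 3 * a**2 * b * sigma**4 + b**3 * sigma**6

# Toy observable and distribution (placeholders).
X = lambda phi: np.log(1.0 + phi**2)
rng = np.random.default_rng(1)
phi_fid, sigma = 10.0, 0.05
samples = phi_fid + sigma * rng.standard_normal(400_000)
exact = exact_moment(X, samples)
approx = perturbative_moment(X, phi_fid, sigma)

For a narrow distribution the two estimates agree closely; comparing them as the spread grows gives a direct feel for when the truncated expansion starts to fail.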
in unfavorable cases , however , convergence may occur slowly or not at all .this possibility was discussed in refs .although our calculation is formally perturbative , it is not directly equivalent to eq . .we briefly discuss the relation of our calculation to conventional perturbation theory in [ s : conclusions ] . by definition , the curvature perturbation measures local fluctuations in expansion history ( expressed in e - folds ) , calculated on a comoving hypersurface . in many models , the curvature perturbation is synthesized by superhorizon physics , which reprocesses a set of gaussian fluctuations generated at horizon exit . in a single - field model, only one gaussian fluctuation can be present , which we label . neglecting spatial gradients , the total curvature perturbationmust then be a function of alone . for , this can be well - approximated by where is independent of spatial position .defines the so - called `` local '' form of non - gaussianity .it applies only when quantum interference effects can be neglected , making a well - defined object rather than a superposition of operators .if this condition is satisfied , spatial correlations of may be extracted and it follows that can be estimated using the rule where we have recalled that is nearly gaussian , or equivalently that . with spatially independent , eq . strictly applies only in single - field inflation . in this caseone can accurately determine by applying eq . to a single trajectory with fixed initial conditions , as in the method of lehners & renaux - petel .where more than one field is present , may vary in space because it depends on the isocurvature modes . in this caseone must determine statistically on a bundle of adjacent trajectories which sample the local distribution of isocurvature modes .is then indispensible .following maldacena , and later lyth & rodrguez , we adopt eq . as our definition of , whatever its origin . in real space , the coefficient in eq .depends on the convention .more generally , this follows from the definition of , eq . .in fourier space , either prescription is automatically enforced after dropping disconnected contributions , again leading to eq . . to proceed , we require estimates of the correlation functions and .we first describe the conventional approach , in which ` ' denotes a flat hypersurface at a fixed initial time .the quantity denotes the number of e - foldings between this initial slice and a final comoving hypersurface , where indexes the species of light scalar fields .the local variation in expansion can be written in the fiducial picture as where .subject to the condition that the relevant scales are all outside the horizon , we are free to choose the initial time set by the hypersurface ` 'at our convenience . in the conventional approach , ` ' is taken to lie a few e - folds after our collection of spacetime volumes passes outside the causal horizon .this choice has many virtues .first , we need to know statistical properties of the field fluctuations only around the time of horizon crossing , where they can be computed without the appearance of large logarithms .second , as a consequence of the slow - roll approximation , the are uncorrelated at this time , leading to algebraic simplifications .finally , the formula subsumes a gauge transformation from the field variables to the observational variable . using eqs . 
, and, one finds that can be written to a good approximation where and for simplicity we have dropped the ` ' which indicates time of evaluation .a similar definition applies for .. is accurate up to small intrinsic non - gaussianities present in the field fluctuations at horizon exit . as a means of predicting is pleasingly compact , and straightforward to evaluate in many models .unfortunately , it also obscures the physics which determines . for this reason it is hard to infer , from eq . alone , those classes of models in which is always large or small .is dynamically allowed .see , for example , refs . . ]our strategy is quite different .we choose ` ' to lie around the time when we require the statistical properties of .the role of the formula , eq . , is then to encode _ only _ the gauge transformation between the and . in [ sec : derivative - n ] below , we show how the appropriate gauge transformation is computed using the formula . in the present sectionwe restrict our attention to determining a formula for under the assumption that the distribution of field values on ` ' is known . in [ sec : transport ] , we will supply the required prescription to evolve the distribution of field values between horizon exit and ` ' .although the are independent random variables at horizon exit , correlations can be induced by subsequent evolution .one must therefore allow for off - diagonal terms in the two - point function .remembering that we are working with a collection of spacetime volumes in real space , smoothed on some characteristic scale , we write does not vary in space , but it may be a function of the scale which characterizes our ensemble of spacetime volumes . in all but the simplest models it will vary in time .it is also necessary to account for intrinsic non - linearities among the , which are small at horizon crossing but may grow . we define likewise , should be regarded as a function of time and scale .the permutation symmetries of an expectation value such as guarantee that , for example , .when written explicitly , we place the indices of symbols such as in numerical order . neglecting a small ( ) intrinsic four - point correlation , it follows that now we specialize to a two - field model , parametrized by fields and . using eqs . , and , it follows that the two - point function of satisfies the three - point function can be written where we have identified two separate contributions , labelled ` 1 ' and ` 2 ' .the ` 1 ' term includes all contributions involving _ intrinsic _ non - linearities , those which arise from non - gaussian correlations among the field fluctuations , the ` 2 ' term encodes non - linearities arising directly from the gauge transformation to after use of eq . , can be used to extract the non - linearity parameter .this decomposes likewise into two contributions , which we shall discuss in more detail in [ sec : numerics ] . to compute in concrete models ,we require expressions for the derivatives and . 
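Once the covariance and intrinsic third moment of the field fluctuations on the final flat hypersurface are known (written here as Σ and α, as above), together with the first and second derivatives of N entering the gauge transformation, the two contributions can be assembled mechanically. The sketch below is a schematic organization of that bookkeeping, not the paper's displayed equations: it neglects the intrinsic connected four-point function and drops disconnected pieces, and the prefactor relating the real-space ratio of the third moment of ζ to the square of its second moment is convention dependent, as noted above. The value 5/18 used here is the choice that reduces, for uncorrelated Gaussian fields, to the familiar Fourier-space local-type result (6/5) fNL = N_i N_j N_ij / (N_k N_k)^2.

import numpy as np

def fnl_contributions(N1, N2, Sigma, alpha, prefactor=5.0/18.0):
    """Intrinsic ('1') and gauge ('2') contributions to fNL for
    zeta = N_i dphi_i + (1/2) N_ij dphi_i dphi_j.
    Shapes: N1 (n,), N2 (n, n), Sigma (n, n), alpha (n, n, n)."""
    zeta2 = np.einsum('i,j,ij->', N1, N1, Sigma)
    # '1': intrinsic non-Gaussianity of the field fluctuations
    skew_intrinsic = np.einsum('i,j,k,ijk->', N1, N1, N1, alpha)
    # '2': non-linearity of the gauge transformation acting on Gaussian fields
    skew_gauge = 3.0 * np.einsum('i,j,kl,ik,jl->', N1, N1, N2, Sigma, Sigma)
    return (prefactor * skew_intrinsic / zeta2**2,
            prefactor * skew_gauge / zeta2**2)

The total fNL is the sum of the two returned pieces; in the examples discussed below the gauge piece dominates.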
for generic initial and final times , these are difficult to obtain .lyth & rodrguez used direct integration , which is effective for quadratic potentials and constant slow - roll parameters .vernizzi & wands obtained expressions in a two - field model with an arbitrary sum - separable potential by introducing gaussian normal coordinates on the space of trajectories .their approach was generalized to many fields by battefeld & easther .product - separable potentials can be accommodated using the same technique .an alternative technique has been proposed by yokoyama __ .a considerable simplification occurs in the present case , because we only require the derivative evaluated between flat and comoving hypersurfaces which coincide in the unperturbed universe . for any species , andto leading order in the slow - roll approximation , the number of e - folds between the flat hypersurface ` ' and a comoving hypersurface ` ' satisfies where and are the field values evaluated on ` ' and ` , ' respectively . under an infinitesimal shift of , we deduce that obeys note that this applies for an arbitrary , which need not factorize into a sum or product of potentials for the individual species . in principle a contribution from variation of the integrand is present , which spoils a nave attempt to generalize the method of refs . to an arbitrary potential .this contribution vanishes in virtue of our supposition that ` ' and ` ' are infinitesimally separated . to compute it is helpful to introduce a quantity , which in the sum - separable case coincides with the conserved quantity of vernizzi & wands . for our specific choice of a two - field model , this takes the form where the integrals are evaluated on a single spatial hypersurface . in an -field model , one would obtain conserved quantities which label the isocurvature fields .the construction of these quantities is discussed in refs . . for sum - separable potentials one can show using the equations of motion that is conserved under time evolution to leading order in slow - roll .it is not conserved for general potentials , but the variation can be neglected for infinitesimally separated hypersurfaces . under a change of trajectory , varies according to the rules and the comoving hypersurface ` ' is defined by we are assuming that the slow - roll approximation applies , so that the kinetic energy may be neglected in comparison with the potential .therefore on ` ' we have combining eqs . , and we obtain expressions for , namely where we have defined eqs . can alternatively be derived without use of by comparing eq . with the formulas of ref . , which were derived using conventional perturbation theory .applying , we obtain to proceed , we require the second derivatives of .these can be obtained directly from , after use of eqs .. we find ^\star \cos^2 \theta + 2 \left ( \frac{v}{v_{,1 } } \right)^{\star 2 } \cos^2 \theta \\\hspace{-6 mm } \mbox { } \times \left [ \frac{v_{,11}}{v } \sin^2 \theta - \frac{v_{,1 } v_{,12}}{v v_{,2 } } \sin^4 \theta - \left ( \frac{v_{,11}}{v } - \frac{v_{,22}}{v } + \frac{v_{,2 } v_{,12}}{v v_{,1 } } \right ) \cos^2 \theta \sin^2 \theta \right]^c .\end{aligned}\ ] ] an analogous expression for can be obtained after the simultaneous exchange .the mixed derivative satisfies ^c \\ \hspace{-6 mm } \mbox { } + \cos^2 \theta \left ( \frac{v_{,2}}{v_{,1 } } - \frac{v v_{,12}}{v_{,1}^2 } \right)^c . 
\end{aligned}\ ] ] now that the calculation is complete , we can drop the superscripts ` ' and ` , ' since any background quantity is the same on either hypersurface .once this is done it can be verified that ( despite appearances ) eq .is invariant under the exchange .in this section we return to the problem of evolution between horizon exit and the time of observation , and supply the prescription which connects the distribution of field values at these two times .we begin by discussing the single - field system , which lacks the technical complexity of the two - field case , yet still exhibits certain interesting features which recur there . among these featuresare the subtle difference between motion of the statistical mean and the background field value , and the hierarchy of moment evolution equations .moreover , the structure of the moment mixing equations is similar to that which obtains in the two - field case .for this reason , the one - field scenario provides an instructive example of the techniques we wish to employ .recall that we work in real space with a collection of comparably sized spacetime volumes , each with a slightly different expansion history , and the scatter in these histories determines the microwave background anisotropy on a given angular scale . within each volumethe smoothed background field takes a uniform value described by a density function , where in this section we are dropping the superscript ` ' denoting evaluation of spatially flat hypersurfaces .our ultimate goal is to calculate the reduced bispectrum , , which describes the third moment of . in the language of probabilitythis is the skewness , which we denote .a gaussian distribution has skewness zero , and inflation usually predicts that the skew is small .for this reason , rather than seek a distribution with non - zero third moment , as proposed in ref . , we will introduce higher moments as perturbative corrections to the gaussian .such a procedure is known as a _cumulant expansion_. the construction of cumulant expansions is a classical problem in probability theory .we seek a distribution with centroid , variance , and skew , with all higher moments determined by and alone .a distribution with suitable properties is , \ ] ] where \ ] ] is a pure gaussian and denotes the hermite polynomial , for which there are multiple normalization conventions .we choose to normalize so that which implies that the leading term of is .this is sometimes called the `` probabilist s convention . ''we define expectation values by the usual rule , the probability density function in eq .has the properties , and do not depend on the approximation that is small .however , for large the density function may become negative for some values of .it then ceases to be a probability density in the strict sense .this does not present a problem in practice , since we are interested in distributions which are approximately gaussian , and for which will typically be small . moreover , our principal use of eq .is as a formal tool to extract evolution equations for each moment .for this reason we will not worry whether defines an honest probability density function in the strict mathematical sense . 
] the moments , , and may be time - dependent , so evolution of the probability density in time can be accommodated by finding evolution equations for these quantities .the density function given in eq .is well - known and has been applied in many situations .it is a solution to the problem of approximating a nearly - gaussian distribution whose moments are known .( [ e : p1d ] ) is in fact the first two terms of the _ gram charlier ` a ' series _ , also sometimes called the _hermite series_. in recent years it has found multiple applications to cosmology , of which our method is closest to that of taylor & watts .other applications are discussed in refs . . for a review of the `a ' series and related nearly - gaussian probability distributions from an astrophysical perspective , see . in this paper, we will refer to eq . and its natural generalization to higher moments as the `` moment expansion . '' in the slow - roll approximation, the field in each spacetime volume obeys a simple equation of motion where records the number of e - foldings of expansion .we refer to as the velocity field .expanding about the instantaneous centroid gives where the value of evolves with time , so each expansion coefficient is time - dependent .hence , we do not assume that the velocity field is _ globally _ well - described by a quadratic taylor expansion , but merely that it is well - described as such in the neighborhood of the instantaneous centroid .we expand the velocity field to second order , although in principle this expansion could be carried to arbitrary order .it remains to specify how the probability density evolves in time .conservation of probability leads to the transport equation eq .can also be understood as the limit of a chapman kolmogorov process as the size of each hop goes to zero .it is well known for example , from the study of starobinsky s diffusion equation which forms the basis of the stochastic approach to inflation that the choice of time variable in this equation is significant , with different choices corresponding to the selection of a temporal gauge .we have chosen to use the e - folding time , , which means that we are evolving the distribution on hypersurfaces of uniform expansion .these are the spatially flat hypersurfaces whose field perturbations enter the formulas described in [ sec : computing_fnl ] . in principle , eq .can be solved directly . in practiceit is simpler to extract equations for the moments of , giving evolution equations for , and . to achieve this , one need only resolve eq. into a hermite series of the form the hermite polynomials are linearly independent , and application of the orthogonality condition shows that the must all vanish .this leads to a hierarchy of equations , which we refer to as the moment hierarchy . at the top of the hierarchy ,the equation is empty and expresses conservation of probability .the first non - trivial equation requires and yields an evolution equation for the centroid , the first term on the right - hand side drives the centroid along the velocity field , as one would anticipate based on the background equation of motion , eq . 
.however , the second term shows that the centroid is also influenced as the wings of the probability distribution probe the nearby velocity field .this influence is not captured by the background equation of motion .if we are in a situation with , then the wings of the density function will be moving faster than the center .hence , the velocity of the centroid will be larger than one might expect by restricting attention to .accordingly , the mean fluctuation value is not following a solution to the background equations of motion .evolution equations for the variance and skew are obtained after enforcing , yielding in both equations , the first term on the right - hand sides describes how and scale as the density function expands or contracts in response to the velocity field .these terms force and to scale in proportion to the velocity field . specifically ,if we temporarily drop the second terms in each equation above , one finds that and .this precisely matches our expectation for the scaling of these quantities .hence , these terms account for the jacobians associated with infinitesimal transformations induced by the flow . for applications to inflationary non - gaussianity ,the second terms in and are more relevant .these terms describe how each moment is sourced by higher moments and the interaction of the density function with the velocity field . in the exampleabove , if we are in a situation where , the tails of the density function are moving faster than the core .this means that one tail is shrinking and the other is extending , skewing the probability density .the opposite occurs when .these effects are measured by the second term in .hence , by expanding our pdf to the third moment , and our velocity field to quadratic order , we are able to construct a set of evolution equations which include the leading - order source terms for each moment . there is little conceptually new as we move from one field to two .the new features are mostly technical in nature .our primary challenge is a generalization of the moment expansion to two fields , allowing for the possibility of correlation between the fields . with this done ,we can write down evolution equations whose structure is very similar to those found in the single - field case . the two - field system is described by a two - dimensional velocity field , defined by where again we are using the number of e - folds as the time variable .the index takes values in . while we think it is likely that our equations generalize to any number of fields , we have only explicitly constructed them for a two - field system . as will become clear below , certain steps in this construction apply only for two fields , andhence we make no claims at present concerning examples with three or more fields .the two - dimensional transport equation is = 0 .\ ] ] here and in the following we have returned to our convention that repeated species indices are summed . as in the single - field case , we construct a probability distribution which is nearly gaussian , but has a small non - zero skewness .that gives where is a pure gaussian distribution , defined by .\ ] ] in this equation , defines the center of the distribution and describes the covariance between the fields .we adopt a conventional parametrization in terms of variances and a correlation coefficient , the matrix defines two - point correlations of the fields , all skewnesses are encoded in . before defining this explicitly ,it is helpful to pause and notice a complication inherent in eqs . 
which was not present in the single - field case . to extract a hierarchy of moment evolution equations from the transport equation , eq ., we made the expansion given in and argued that orthogonality of the hermite polynomials implied the hierarchy .however , hermite polynomials of the form $ ] are _ not _ orthogonal under the gaussian measure of eq . .following an expansion analogous to eq .the moment hierarchy would comprise linear combinations of the coefficients .the problem is essentially an algebraic question of gram schmidt orthogonalization . to avoid this problemit is convenient to diagonalize the covariance matrix , introducing new variables and for which eq .factorizes into the product of two measures under which the polynomials and are separately orthogonal .the necessary redefinitions are \ ] ] and .\ ] ] a simple expression for can be given in terms of and , we now define the non - gaussian factor , which encodes the skewnesses , to be in these variables we find , but .in addition , we have in order for eq .to be useful , it is necessary to express the skewnesses associated with the physical variables in terms of and . by definition ,these satisfy after substituting for the definition of these quantities inside the expectation values in eq .we arrive at the relations the moments , and are time - dependent , but for clarity we will usually suppress this in our notation .next we must extract the moment hierarchy , which governs evolution of , , and .we expand the velocity field in a neighborhood of the instantaneous centroid according to where we have defined as in the single - field case , these coefficients are functions of time and vary with the motion of the centroid .the expansion can be pursued to higher order if desired .our construction of and implies that the two - field transport equation can be arranged as a double gauss hermite expansion , = p_g \sum_{m , n \ge 0 } c_{mn } h_m(x ) h_n(y ) = 0 .\ ] ] because the hermite polynomials are orthogonal in the measure defined by , we deduce the moment hierarchy we define the rank " of each coefficient by .we terminated the velocity field expansion at quadratic order , and our probability distribution included only the first three moments .it follows that only with rank five or less are nonzero .if we followed the velocity field to higher order , or included higher terms in the moment expansion , we would obtain non - trivial higher - rank coefficients .inclusion of additional coefficients requires no qualitative modification of our analysis and can be incorporated in the scheme we describe below . a useful feature of the expansion in eq .is that the rank- coefficients give evolution equations for the order- moments . written explicitly in components ,the expressions that result from are quite cumbersome .however , when written as field - space covariant expressions they can be expressed in a surprisingly compact form .: : the rank-0 coefficient is identically zero .this expresses the fact that the total probability is conserved as the distribution evolves . : : the rank-1 coefficients and give evolution equations for the centroid .these equations can be written in the form we remind the reader that here and below , terms like , and represent the velocity field and its derivatives evaluated at the centroid .the first term in expresses the non - anomalous motion of the centroid , which coincides with the background velocity field of eq . 
.the second term describes how the wings of the probability distribution sample the velocity field at nearby points .narrow probability distributions have small components of and hence are only sensitive to the local value of .broad probability distributions have large components of and are therefore more sensitive to the velocity field far from the centroid . : : the rank-2 coefficients , and give evolution equations for the variances and the correlation .these can conveniently be packaged as evolution equations for the matrix this equation describes the stretching and rotation of as it is transported by the velocity field .it includes a sensitivity to the wings of the probability distribution , in a manner analogous to the similar term appearing in .hence the skew acts as a source for the correlation matrix .: : the rank-3 coefficients , , and describe evolution of the moments .these are the first term describes how the moments flow into each other as the velocity field rotates and shears the coordinate frame relative to the coordinate frame .the second term describes sourcing of non - gaussianity from inhomogeneities in the velocity field and the overall spread of the probability distribution .some higher - rank coefficients in our case , those of ranks four and five are also nonzero , but do not give any new evolution equations .these coefficients measure the error " introduced by truncating the moment expansion .if we had included higher cumulants , these higher - rank coefficients would have given evolution equations for the higher moments of the probability distribution .in general , all moments of the density function will mix so it is always necessary to terminate our expansion at a predetermined order both in cumulants and powers of the field fluctuation .the order we have chosen is sufficient to generate evolution equations containing both the leading - order behavior of the moments namely , the first terms in eqs . , and and the leading corrections , given by the latter terms in these equations .at this point we put our new method into practice .we study two models for which the non - gaussian signal is already known , using the standard formula . for each casewe employ our method and compare it with results obtained using . to ensure a fair comparison , we solve numerically in both cases .our new method employs the slow - roll approximation , as described above .therefore , when using the approach we produce results both with and without slow - roll simplifications .first consider double quadratic inflation , which was studied by rigopoulos , shellard & van tent and later by vernizzi & wands .the potential is we use the initial conditions chosen in ref . , where , and the fiducial trajectory has coordinates and .we plot the evolution of in fig .[ fig1 ] , which also shows the prediction of the standard formula ( with and without employing slow roll simplifications ) .we implement the algorithm using a finite difference method to calculate the derivatives of .a similar technique was used in ref .this model yields a very modest non - gaussian signal , below unity even at its peak .if inflation ends away from the spike then is practically negligible .shows that the method of moment transport allows us to separate contributions to from the intrinsic non - gaussianity of the field fluctuations , and non - linearities of the gauge transformation to . as explained in [ ss : sep_universe ] , we denote the former and the latter , and plot them separately in fig .[ fig2 ] . 
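For reference, the finite-difference implementation used for the comparison can be sketched as follows. This is a schematic version only: the background is evolved with the slow-roll equations dphi_i/dN = -V_,i/V in reduced Planck units, inflation is ended on the surface where the slow-roll parameter built from the potential gradient reaches unity, and the masses and initial conditions are placeholders rather than the values used for figure [ fig1 ]; the final step applies the standard slow-roll estimate (6/5) fNL = N_i N_j N_ij / (N_k N_k)^2.

import numpy as np
from scipy.integrate import solve_ivp

# Placeholder double-quadratic potential; the mass ratio and field values are illustrative.
m = np.array([1.0, 9.0]) * 1e-5

def V(phi):
    return 0.5 * np.sum((m * phi) ** 2)

def dV(phi):
    return m**2 * phi

def efolds(phi0, N_max=200.0):
    """Number of e-folds from the flat slice with field values phi0 to the surface
    epsilon = (1/2)|V_,i/V|^2 = 1, using dphi_i/dN = -V_,i/V (M_pl = 1)."""
    def rhs(N, phi):
        return -dV(phi) / V(phi)
    def end_of_inflation(N, phi):
        grad = dV(phi) / V(phi)
        return 0.5 * np.dot(grad, grad) - 1.0
    end_of_inflation.terminal = True
    end_of_inflation.direction = 1.0
    sol = solve_ivp(rhs, (0.0, N_max), phi0, events=end_of_inflation,
                    rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0]

def fnl_delta_n(phi_star, h=1e-3):
    # Finite-difference derivatives of N with respect to the horizon-exit fields,
    # then the standard slow-roll estimate (6/5) fNL = N_i N_j N_ij / (N_k N_k)^2.
    n_f = len(phi_star)
    N1 = np.zeros(n_f)
    N2 = np.zeros((n_f, n_f))
    for i in range(n_f):
        e = np.zeros(n_f); e[i] = h
        N1[i] = (efolds(phi_star + e) - efolds(phi_star - e)) / (2 * h)
    for i in range(n_f):
        for j in range(n_f):
            ei = np.zeros(n_f); ei[i] = h
            ej = np.zeros(n_f); ej[j] = h
            N2[i, j] = (efolds(phi_star + ei + ej) - efolds(phi_star + ei - ej)
                        - efolds(phi_star - ei + ej) + efolds(phi_star - ei - ej)) / (4 * h * h)
    return (5.0 / 6.0) * np.einsum('i,j,ij->', N1, N1, N2) / np.dot(N1, N1) ** 2

fnl_value = fnl_delta_n(np.array([8.0, 12.0]))   # placeholder horizon-exit values

Evaluating fNL a fixed number of e-folds after horizon exit, as in the figures, would instead require stopping the integration on an earlier surface.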
inspection of this figure clearly shows that is determined by a cancellation between two much larger components .its final shape and magnitude are exquisitely sensitive to their relative phase .initially , the magnitudes of and grow , but their sum remains small .the peak in fig .[ fig1 ] arises from the peak of , which is incompletely cancelled by .it is remarkable that initially evolves in exact opposition to the gauge transformation , to which it is not obviously connected . in the double quadratic model, is always small .however , it has recently been shown by byrnes __ that a large non - gaussian signal can be generated even when slow - roll is a good approximation . the conditions for this to occur are incompletely understood , but apparently require a specific choice of potential and strong tuning of initial conditions . in figs .[ fig3][fig4 ] we show the evolution of in a model with the potential which corresponds to example a of ref .* 5 ) when we choose and initial conditions , .it is clear that the agreement is exact . in this model , is overwhelmingly dominated by the contribution from the second - order gauge transformation , , as shown in fig .[ fig4 ] .this conclusion applies equally to the other large- examples discussed in refs . , although we make no claim that this is a general phenomenon . in conclusion , figs .[ fig1 ] and [ fig3 ] show excellent agreement between our new method and the outcome of the numerical formula .these figures also compare the moment transport method and without the slow - roll approximation .we conclude that the slow - roll estimate remains broadly accurate throughout the entire evolution .non - linearities are now routinely extracted from all - sky observations of the microwave background anisotropy .our purpose in this paper has been to propose a new technique with which to predict the observable signal .present data already give interesting constraints on the skewness parameter , and over the next several years we expect that the _ planck _ survey satellite will make these constraints very stringent . it is even possible that higher - order moments , such as the kurtosis parameter will become better constrained . to meet the need of the observational community for comparison with theory , reliable estimates of these non - linear quantities will be necessary for various models of early - universe physics .a survey of the literature suggests that the ` conventional ' method , originally introduced by lyth & rodrguez , remains the method of choice for analytical study of non - gaussianity . in comparison ,our proposed moment transport method exhibits several clear differences .first , the conventional method functions best when we base the expansion on a flat hypersurface immediately after horizon exit . in our method , we make the opposite choice and move the flat hypersurface as close as possible to the time of observation . after this , the role of the formula is to provide no more than the non - linear gauge transformation between field fluctuations and the curvature perturbation .we substitute the method of moment transport to evolve the distribution of field fluctuations between horizon exit and observation .second , in integrating the transport equation one uses an expansion of the velocity field such as the one given in eqs .. this expansion is refreshed at each step of integration , so the result is related to conventional perturbative calculations in a very similar way to renormalization - group improved perturbation theory . 
in this interpretation ,derivatives of play the role of couplings . at a given order , , in the moment hierarchy, the equations for lower - order moments function as renormalization group equations for the couplings at level- , resumming potentially large terms before they spoil perturbation theory .this property is shared with any formalism such as which is non - perturbative in time evolution , but may be an advantage in comparison with perturbative methods .we also note that although is non - perturbative as a point of principle , practical implementations are frequently perturbative .for example , the method of vernizzi & wands and battefeld & easther depends on the existence of quantities which are conserved only to leading order in , and can lose accuracy after e - foldings .numerical calculations confirm that our method gives results in excellent agreement with existing techniques . as a by - product of our analysis, we note that the large non - gaussianities which have recently been observed in sum- and product - separable potentials are dominated by non - linearities from the second - order part of the gauge transformation from to .the contribution from intrinsic non - linearities of the field fluctuations , measured by the skewnesses , is negligible . in such casesone can obtain a useful formula for by approximating the field distribution as an exact gaussian .the non - gaussianity produced in such cases arises from a distortion of comoving hypersurfaces with respect to adjacent spatially flat hypersurfaces .our new method joins many well - established techniques for estimating non - gaussian properties of the curvature perturbation . in our experience , these techniques give comparable estimates of , but they do not exactly agree .each method invokes different assumptions , such as the neglect of gradients or the degree to which time dependence can be accommodated .the mutual scatter between different methods can be attributed to the theory error inherent in any estimate of .the comparison presented in [ sec : numerics ] shows that while all of these methods slightly disagree , the moment transport method gives good agreement with other established methods .dm is supported by the cambridge centre for theoretical cosmology ( ctc ) .ds is funded by stfc .dw acknowledges support from the ctc .we would like to thank chris byrnes , jim lidsey and karim malik for helpful conversations .10 t. falk , r. rangarajan , and m. srednicki , _ the angular dependence of the three - point correlation function of the cosmic microwave background radiation as predicted by inflationary cosmologies _ , _ astrophys .j. _ * 403 * ( 1993 ) l1 , [ http://xxx.lanl.gov/abs/astro-ph/9208001 [ arxiv : astro - ph/9208001 ] ] .a. gangui , f. lucchin , s. matarrese , and s. mollerach , _ the three - point correlation function of the cosmic microwave background in inflationary models _ , _ astrophys . j. _ * 430 * ( 1994 ) 447457 , [ http://xxx.lanl.gov/abs/astro-ph/9312033 [ arxiv : astro - ph/9312033 ] ] .t. pyne and s. m. carroll , _ higher - order gravitational perturbations of the cosmic microwave background _* d53 * ( 1996 ) 29202929 , [ http://xxx.lanl.gov/abs/astro-ph/9510041 [ arxiv : astro - ph/9510041 ] ] .v. acquaviva , n. bartolo , s. matarrese , and a. riotto , _ second - order cosmological perturbations from inflation _ , _ nucl .phys . _ * b667 * ( 2003 ) 119148 , [ http://xxx.lanl.gov/abs/astro-ph/0209156 [ arxiv : astro - ph/0209156 ] ] .e. komatsu and d. n. 
spergel , _acoustic signatures in the primary microwave background bispectrum _ , _ phys .* d63 * ( 2001 ) 063002 , [ http://xxx.lanl.gov/abs/astro-ph/0005036 [ arxiv : astro - ph/0005036 ] ] .f. r. bouchet and r. juszkiewicz , _ perturbation theory confronts observations : implications for the ` initial ' conditions and _ , http://xxx.lanl.gov/abs/astro-ph/9312007 [ arxiv : astro - ph/9312007 ] .p. fosalba , e. gaztanaga , and e. elizalde , _ gravitational evolution of the large - scale density distribution : the edgeworth & gamma expansions _ , http://xxx.lanl.gov/abs/astro-ph/9910308 [ arxiv : astro - ph/9910308 ] .m. sasaki and e. d. stewart , _ a general analytic formula for the spectral index of the density perturbations produced during inflation _ , _ prog .* 95 * ( 1996 ) 7178 , [ http://xxx.lanl.gov/abs/astro-ph/9507001 [ arxiv : astro - ph/9507001 ] ] .g. i. rigopoulos , e. p. s. shellard , and b. j. w. van tent , _ non - linear perturbations in multiple - field inflation _rev . _ * d73 * ( 2006 ) 083521 , [ http://xxx.lanl.gov/abs/astro-ph/0504508 [ arxiv : astro - ph/0504508 ] ] .h. r. s. cogollo , y. rodrguez , and c. a. valenzuela - toledo , _ on the issue of the series convergence and loop corrections in the generation of observable primordial non - gaussianity in slow - roll inflation .part i : the bispectrum _ , _ jcap _ * 0808 * ( 2008 ) 029 , [ http://xxx.lanl.gov/abs/0806.1546[arxiv:0806.1546 ] ] .y. rodrguez and c. a. valenzuela - toledo , _ on the issue of the series convergence and loop corrections in the generation of observable primordial non - gaussianity in slow - roll inflation .part ii : the trispectrum _ , http://xxx.lanl.gov/abs/0811.4092 [ arxiv:0811.4092 ] .choi , l. m. h. hall , and c. van de bruck , _ spectral running and non - gaussianity from slow - roll inflation in generalised two - field models _ , _ jcap _ * 0702 * ( 2007 ) 029 , [ http://xxx.lanl.gov/abs/astro-ph/0701247 [ arxiv : astro - ph/0701247 ] ] . c. gordon , d. wands , b. a. bassett , and r. maartens , _ adiabatic and entropy perturbations from inflation _ , _ phys ._ * d63 * ( 2001 ) 023506 , [ http://xxx.lanl.gov/abs/astro-ph/0009131 [ arxiv : astro - ph/0009131 ] ] .s. matarrese , l. verde , and r. jimenez , _ the abundance of high - redshift objects as a probe of non- gaussian initial conditions _ , _ astrophys. j. _ * 541 * ( 2000 ) 10 , [ http://xxx.lanl.gov/abs/astro-ph/0001366 [ arxiv : astro - ph/0001366 ] ] .l. amendola , _ the dependence of cosmological parameters estimated from the microwave background on non - gaussianity _ , _ astrophys. j. _ * 569 * ( 2002 ) 595599 , [ http://xxx.lanl.gov/abs/astro-ph/0107527 [ arxiv : astro - ph/0107527 ] ] .m. loverde , a. miller , s. shandera , and l. verde , _ effects of scale - dependent non - gaussianity on cosmological structures _ , _ jcap _ * 0804 * ( 2008 ) 014 , [ http://xxx.lanl.gov/abs/0711.4126 [ arxiv:0711.4126 ] ] .d. seery and j. c. hidalgo , _ non - gaussian corrections to the probability distribution of the curvature perturbation from inflation _ , _ jcap _ * 0607 * ( 2006 ) 008 , [ http://xxx.lanl.gov/abs/astro-ph/0604579 [ arxiv : astro - ph/0604579 ] ] .s. blinnikov and r. moessner , _expansions for nearly gaussian distributions _ , _ astron .* 130 * ( 1998 ) 193205 , [ http://xxx.lanl.gov/abs/astro-ph/9711239 [ arxiv : astro - ph/9711239 ] ] .g. i. rigopoulos , e. p. s. shellard , and b. j. w. van tent , _ quantitative bispectra from multifield inflation _rev . 
_ * d76 * ( 2007 ) 083512 , [ http://xxx.lanl.gov/abs/astro-ph/0511041 [ arxiv : astro - ph/0511041 ] ] .m. sasaki , j. valiviita , and d. wands , _ non - gaussianity of the primordial perturbation in the curvaton model _ , _ phys .rev . _ * d74 * ( 2006 ) 103003 , [ http://xxx.lanl.gov/abs/astro-ph/0607627 [ arxiv : astro - ph/0607627 ] ] .
we present a novel method for calculating the primordial non - gaussianity produced by super - horizon evolution during inflation . our method evolves the distribution of coarse - grained inflationary field values using a transport equation . we present simple evolution equations for the moments of this distribution , such as the variance and skewness . this method possesses some advantages over existing techniques . among them , it cleanly separates multiple sources of primordial non - gaussianity , and is computationally efficient when compared with popular alternatives , such as the framework . we adduce numerical calculations demonstrating that our new method offers good agreement with those already in the literature . we focus on two fields and the parameter , but we expect our method will generalize to multiple scalar fields and to moments of arbitrarily high order . we present our expressions in a field - space covariant form which we postulate to be valid for any number of fields . * keywords * : inflation , cosmological perturbation theory , physics of the early universe , quantum field theory in curved spacetime .
invariants are a popular concept in object recognition and image retrieval .they aim to provide descriptions that remain constant under certain geometric or radiometric transformations of the scene , thereby reducing the search space .they can be classified into global invariants , typically based either on a set of key points or on moments , and local invariants , typically based on derivatives of the image function which is assumed to be continuous and differentiable .the geometric transformations of interest often include translation , rotation , and scaling , summarily referred to as _ similarity _ transformations . in a previous paper , building on work done by schmid and mohr , we have proposed differential invariants for those similarity transformations , plus _ linear _ brightness change . here, we are looking at a _ non - linear _ brightness change known as _ gamma correction_. gamma correction is a non - linear quantization of the brightness measurements performed by many cameras during the image formation process .the idea is to achieve better _ perceptual _ results by maintaining an approximately constant ratio between adjacent brightness levels , placing the quantization levels apart by the _ just noticeable difference_. incidentally , this non - linear quantization also precompensates for the non - linear mapping from voltage to brightness in electronic display devices .gamma correction can be expressed by the equation where is the input intensity , is the output intensity , and is a normalization factor which is determined by the value of . for output devices , the ntsc standard specifies . for input devices like cameras ,the parameter value is just inversed , resulting in a typical value of .the camera we used , the sony 3 ccd color camera dxc 950 , exhibited . for the kodak megaplus xrc camera ][ fig : gammacorr ] shows the intensity mapping of 8-bit data for different values of .it turns out that an invariant under gamma correction can be designed from first and second order derivatives .additional invariance under scaling requires third order derivatives .derivatives are by nature translationally invariant .rotational invariance in 2-d is achieved by using rotationally symmetric operators .the key idea for the design of the proposed invariants is to form suitable ratios of the derivatives of the image function such that the parameters describing the transformation of interest will cancel out .this idea has been used in to achieve invariance under linear brightness changes , and it can be adjusted to the context of gamma correction by at least conceptually considering the _ logarithm _ of the image function . for simplicity , we begin with 1-d image functions .let be the image function , i.e. the original signal , assumed to be continuous and differentiable , and the corresponding gamma corrected function .note that is a special case of where .taking the logarithm yields with the derivatives , and .we can now define the invariant under gamma correction to be {0mm}{13 mm } & = \\ & = \end{tabular}\ ] ] the factor has been eliminated by taking derivatives , and has canceled out .furthermore , turns out to be completely specified in terms of the _ original _ image function and its derivatives , i.e. 
the logarithm actually does nt have to be computed .the notation indicates that the invariant depends on the underlying image function and location the invariance holds under gamma correction , not under spatial changes of the image function .a shortcoming of is that it is undefined where the denominator is zero .therefore , we modify to be continuous everywhere : {0mm}{8 mm } { \normalsize } & & { \normalsize if } \\ & & { \normalsize else } \\\end{tabular}\ ] ] where , for notational convenience , we have dropped the variable .the modification entails .note that the modification is just a heuristic to deal with poles .if all derivatives are zero because the image function is constant , then differentials are certainly not the best way to represent the function .if scaling is a transformation that has to be considered , then another parameter describing the change of size has to be introduced .that is , scaling is modeled here as variable substitution : the scaled version of is .so we are looking at the function where the derivatives with respect to are , , and . nowthe invariant is obtained by defining a suitable ratio of the derivatives such that both and cancel out : {0mm}{10 mm } & = \end{tabular}\ ] ] analogously to eq .( [ eq : thm12 g ] ) , we can define a modified invariant {0mm}{8 mm } { \normalsize } & & { \normalsize if cond2 } \\ & & { \normalsize else } \\\end{tabular}\ ] ] where condition cond1 is , and condition cond2 is .again , this modification entails .it is a straightforward albeit cumbersome exercise to verify the invariants from eqs .( [ eq : th12 g ] ) and ( [ eq : th123 g ] ) with an analytical , differentiable function . as an arbitrary example, we choose the first three derivatives are , , and .then , according to eq .( [ eq : th12 g ] ) , .if we now replace with a gamma corrected version , say , the first derivative becomes , the second derivative is , and the third is . if we plug these derivatives into eq .( [ eq : th12 g ] ) , we obtain an expression for which is identical to the one for above .the algebraically inclined reader is encouraged to verify the invariant for the same function .[ fig : analyex ] shows the example function and its gamma corrected counterpart , together with their derivatives and the two modified invariants .as expected , the graphs of the invariants are the same on the right as on the left .note that the invariants define a many - to - one mapping .that is , the mapping is not information preserving , and it is not possible to reconstruct the original image from its invariant representation .if or are to be computed on images , then eqs . ( [ eq : th12 g ] ) to ( [ eq : thm123 g ] ) have to be generalized to two dimensions .this is to be done in a rotationally invariant way in order to achieve invariance under similarity transformations .the standard way is to use rotationally symmetric operators .for the first derivative , we have the well known _gradient magnitude _ , defined as where is the 2-d image function , and , are partial derivatives along the x - axis and the y - axis . for the second order derivative, we can use the linear _ laplacian _ horn also presents an alternative second order derivative operator , the _quadratic variation _ since the qv is not a linear operator and more expensive to compute , we use the laplacian for our implementation . 
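the one - dimensional invariants above can be checked numerically before turning to images . since the displayed formulas did not survive extraction , the ratios used below are plausible reconstructions rather than the paper's exact expressions : the gamma - only invariant is taken as the ratio of the second to the first derivative of log f ( in which the normalization factor and gamma cancel ) , and the gamma - plus - scaling invariant as d1*d3/d2**2 of log f ( in which the scale factor cancels as well ) ; the test functions and parameter values are illustrative only .

```python
import numpy as np

# gamma-only invariant: (log f)'' / (log f)', written directly in terms of f,
# so the logarithm never has to be computed explicitly.
def invariant_12(f, dx):
    d1 = np.gradient(f, dx)                    # f'
    d2 = np.gradient(d1, dx)                   # f''
    return (d2 * f - d1 ** 2) / (d1 * f)

x = np.linspace(0.1, 1.0, 2001)
dx = x[1] - x[0]
f = 0.2 + x + 0.3 * np.sin(2 * x)              # arbitrary positive, monotone test signal
g = 0.9 * f ** 0.45                            # gamma-corrected copy, c * f**gamma
diff = invariant_12(f, dx) - invariant_12(g, dx)
print(np.max(np.abs(diff[50:-50])))            # small, up to discretisation error

# gamma-plus-scaling invariant: d1*d3/d2**2 of log f; gamma enters as
# gamma*gamma/gamma**2 and the scale factor a as a*a**3/a**4, so both cancel.
def invariant_123(func, x0, h=1e-3):
    lf = lambda t: np.log(func(t))
    d1 = (lf(x0 + h) - lf(x0 - h)) / (2 * h)
    d2 = (lf(x0 + h) - 2 * lf(x0) + lf(x0 - h)) / h ** 2
    d3 = (lf(x0 + 2 * h) - 2 * lf(x0 + h) + 2 * lf(x0 - h) - lf(x0 - 2 * h)) / (2 * h ** 3)
    return d1 * d3 / d2 ** 2

fc = lambda t: 0.2 + t + 0.3 * np.sin(2 * t)
a = 1.7
gc = lambda t: 0.9 * fc(a * t) ** 0.45          # scaled and gamma-corrected copy
print(invariant_123(fc, a * 0.6), invariant_123(gc, 0.6))   # agree to several decimal places
```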
for the third order derivative ,we can define , in close analogy with the quadratic variation , a _cubic variation _ as the invariants from eqs .( [ eq : th12 g ] ) to ( [ eq : thm123 g ] ) remain valid in 2-d if we replace with , with , and with .this can be verified by going through the same argument as for the functions .recall that the critical observation in eq .( [ eq : th12 g ] ) was that cancels out , which is the case when all derivatives return a factor .but such is also the case with the rotationally symmetric operators mentioned above .for example , if we apply the gradient magnitude operator to , i.e. to the logarithm of a gamma corrected image function , we obtain returning a factor , and analogously for , qv , and cv .a similar argument holds for eq .( [ eq : th123 g ] ) where we have to show , in addition , that the first derivative returns a factor , the second derivative returns a factor , and the third derivative returns a factor , which is the case for our 2-d operators . while the derivatives of continuous , differentiable functions are uniquely defined , there are many ways to implement derivatives for _ sampled _ functions .we follow schmid and mohr , ter haar romeny , and many other researchers in employing the derivatives of the gaussian function as filters to compute the derivatives of a sampled image function via convolution .this way , derivation is combined with smoothing .the 2-d zero mean gaussian is defined as the partial derivatives up to third order are , , , , , , , , .they are shown in fig .[ fig : gausskernels ] . we used the parameter setting and kernel size these kernels , eq .( [ eq : th12 g ] ) , for example , is implemented as at each pixel , where denotes convolution .we evaluate the invariant from eq .( [ eq : thm12 g ] ) in two different ways .first , we measure how much the invariant computed on an image without gamma correction is different from the invariant computed on the same image but with gamma correction .theoretical , this difference should be zero , but in practice , it is not .second , we compare template matching accuracy on intensity images , again without and with gamma correction , to the accuracy achievable if instead the invariant representation is used .we also examine whether the results can be improved by prefiltering .a straightforward error measure is the _ absolute error _ , where `` 0gc '' refers to the image without gamma correction , and gc stands for either `` sgc '' if the gamma correction is done synthetically via eq .( [ eq : gammacorr ] ) , or for `` cgc '' if the gamma correction is done via the camera hardware . like the invariant itself ,the absolute error is computed at each pixel location of the image , except for the image boundaries where the derivatives and therefore the invariants can not be computed reliably .[ fig : imas ] shows an example image .the sgc image has been computed from the 0gc image , with .note that the gamma correction is done _after _ the quantization of the 0gc image , since we do nt have access to the 0gc image before quantization .[ fig : accuinv ] shows the invariant representations of the image data from fig .[ fig : imas ] and the corresponding absolute errors . since , we have .the dark points in fig . 
[ fig : accuinv ] , ( c ) and ( e ) , indicate areas of large errors .we observe two error sources : * the invariant can not be computed robustly in homogeneous regions .this is hardly surprising , given that it is based on differentials which are by definition only sensitive to spatial changes of the signal .* there are outliers even in the sgc invariant representation , at points of very high contrast edges .they are a byproduct of the inherent smoothing when the derivatives are computed with differentials of the gaussian .note that the latter put a ceiling on the maximum gradient magnitude that is computable on 8-bit images .in addition to computing the absolute error , we can also compute the relative error , in percent , as then we can define the set of _ reliable points _ , relative to some error threshold , as and , the percentage of reliable points , as where is the number of valid , i.e. non - boundary , pixels in the image .[ fig : reliapts ] shows , in the first row , the reliable points for three different values of the threshold .the second row shows the sets of reliable points for the same thresholds if we gently prefilter the 0gc and cgc images . the corresponding data for the ten test images from fig .[ fig : imadb ] is summarized in table [ tab : reliaperc ] .derivatives are known to be sensitive to noise .noise can be reduced by smoothing the original data before the invariants are computed . on the other hand , derivatives should be computed as locally as possible . with these conflicting goals to be considered , we experiment with gentle prefiltering , using a gaussian filter of size =1.0 .the size of the gaussian to compute the invariant is set to =1.0 .note that and can _ not _ be combined into just one gaussian because of the non - linearity of the invariant .with respect to the set of reliable points , we observe that after prefiltering , roughly half the points , on average , have a relative error of less than 20% .gentle prefiltering consistently reduces both absolute and relative errors , but strong prefiltering does not .template matching is a frequently employed technique in computer vision . here, we will examine how gamma correction affects the spatial accuracy of template matching , and whether that accuracy can be improved by using the invariant .an overview of the testbed scenario is given in fig .[ fig : templloca ] . a small template of size , representing the search pattern ,is taken from a 0gc intensity image , i.e. without gamma correction .this query template is then correlated with the corresponding cgc intensity image , i.e. the same scene but with gamma correction switched on .if the correlation maximum occurs at exactly the location where the 0gc query template has been cut out , we call this a _ correct maximum correlation position _ , or cmcp. 
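for reference , the two - dimensional invariant representation evaluated in these experiments can be sketched with off - the - shelf gaussian derivative filters . the variant below applies the filters to the logarithm of the image , which makes the cancellation of gamma transparent , whereas the implementation described above convolves the image itself and combines the responses ; the two agree only up to the effect of the smoothing . the ratio of laplacian to gradient magnitude , the synthetic test image , sigma and the threshold are all illustrative reconstructions , since the displayed formulas did not survive extraction .

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gamma_invariant_2d(img, sigma=1.0):
    # derivatives of log(img) estimated with gaussian derivative kernels;
    # gamma correction rescales every such derivative by the same factor,
    # so the ratio of the (2nd-order) laplacian to the (1st-order)
    # gradient magnitude is unchanged.
    lg = np.log(np.clip(img.astype(float), 1e-6, None))
    gx = gaussian_filter(lg, sigma, order=(0, 1))
    gy = gaussian_filter(lg, sigma, order=(1, 0))
    gxx = gaussian_filter(lg, sigma, order=(0, 2))
    gyy = gaussian_filter(lg, sigma, order=(2, 0))
    grad = np.hypot(gx, gy)
    lap = gxx + gyy
    return lap / np.maximum(grad, 1e-12), grad

rng = np.random.default_rng(0)
img = gaussian_filter(rng.uniform(0.1, 1.0, (64, 64)), 2.0)   # smooth synthetic test image
inv_a, grad_a = gamma_invariant_2d(img)
inv_b, _ = gamma_invariant_2d(0.9 * img ** 0.45)              # gamma-corrected copy
mask = grad_a > 1e-2                                          # stay away from the poles discussed above
print(np.max(np.abs(inv_a[mask] - inv_b[mask])))              # close to zero
```

in the matching experiment described next , the intensity arrays are simply replaced by such invariant arrays when the invariant representation is used .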
the correlation function employed here is based on a normalized mean squared difference : where is an image , is a template positioned at , is the mean of the subimage of at of the same size as , is the mean of the template , and .the template location problem then is to perform this correlation for the whole image and to determine whether the position of the correlation maximum occurs precisely at .[ fig : matchtempl ] demonstrates the template location problem , on the left for an intensity image , and on the right for its invariant representation .the black box marks the position of the original template at ( 40,15 ) , and the white box marks the position of the matched template , which is incorrectly located at ( 50,64 ) in the intensity image . on the right , the matched template ( white )has overwritten the original template ( black ) at the same , correctly identified position .[ fig : correlexmpl ] visualizes the correlation function over the whole image .the white areas are regions of high correlation .the example from figs .[ fig : matchtempl ] and [ fig : correlexmpl ] deals with only _ one _ arbitrarily selected template . in order to systematically analyze the template location problem , we repeat the correlation process for all possible template locations. then we can define the _ correlation accuracy _ ca as the percentage of correctly located templates , where is the size of the template , is the set of correct maximum correlation positions , and , again , is the number of valid pixels .we compute the correlation accuracy both for unfiltered images and for gently prefiltered images , with .[ fig : corrcorrelpts ] shows the binary correlation accuracy matrices for our example image .the cmcp set is shown in white , its complement and the boundaries in black .we observe a higher correlation accuracy for the invariant representation , which is improved by the prefiltering .we have computed the correlation accuracy for all the images given in fig .[ fig : imadb ] .the results are shown in table [ tab : ca ] and visualized in fig .[ fig : correlaccuras ] .we observe the following : * the correlation accuracy ca is higher on the invariant representation than on the intensity images .* the correlation accuracy is higher on the invariant representation with gentle prefiltering , , than without prefiltering .we also observed a decrease in correlation accuracy if we increase the prefiltering well beyond .by contrast , prefiltering seems to be always detrimental to the intensity images ca .* the correlation accuracy shows a wide variation , roughly in the range 30%% for the unfiltered intensity images and 50%% for prefiltered invariant representations .similarly , the gain in correlation accuracy ranges from close to zero up to 45% . for our test images, it turns out that the invariant representation is always superior , but that does nt necessarily have to be the case . * the medians and means of the cas over all test images confirm the gain in correlation accuracy for the invariant representation . * the larger the template size , the higher the correlation accuracy , independent of the representation .a larger template size means more structure , and more discriminatory power .we have proposed novel invariants that combine invariance under gamma correction with invariance under geometric transformations . 
in a general sense , the invariants can be seen as trading off derivatives for a power - law parameter , which makes them interesting for applications beyond image processing . the error analysis of our implementation on real images has shown that , for sampled data , the invariants can not be computed robustly everywhere . nevertheless , the template matching application scenario has demonstrated that a performance gain is achievable by using the proposed invariant . bob woodham suggested that the author look into invariance under gamma correction ; his meticulous comments on this work were much appreciated . jochen lang helped with the acquisition of image data through the acme facility . d. forsyth , j. mundy , a. zisserman , c. coelho , c. rothwell , `` invariant descriptors for 3-d object recognition and pose '' , _ ieee transactions on pattern analysis and machine intelligence _ , vol . 13 , no . 10 , pp . 971 - 991 , oct . 1991 . d. pai , j. lang , j. lloyd , r. woodham , `` acme , a telerobotic active measurement facility '' , sixth international symposium on experimental robotics , sydney , 1999 . see also : http://www.cs.ubc.ca/nest/lci/acme/
_ this paper presents invariants under gamma correction and similarity transformations . the invariants are local features based on differentials which are implemented using derivatives of the gaussian . the use of the proposed invariant representation is shown to yield improved correlation results in a template matching scenario . _
developments are currently underway to promote the sensitivity of ligo and to improve its prospect for detecting gravitational waves emitted by compact object binaries . of particular interestare the detection of gravitational waves released during the inspiral and merger of binary black hole ( bbh ) systems .detection rates for bbh events are expected to be within 0.41000 per year with advanced ligo .it is important that rigorous detection algorithms be in place in order to maximize the number of detections of gravitational wave signals .the detection pipeline currently employed by ligo involves a matched - filtering process whereby signals are compared to a pre - constructed template bank of gravitational waveforms . the templates are chosen to cover some interesting region of mass - spin parameter space and are placed throughout it in such a way that guarantees some minimal match between any arbitrary point in parameter space and its closest neighbouring template .unfortunately , the template placement strategy generally requires many thousands of templates ( e.g. ) evaluated at arbitrary mass and spin ; something that can not be achieved using the current set of numerical relativity ( nr ) waveforms . to circumvent this issue, ligo exploits the use of analytical waveform families like phenomenological models or effective - one - body models .we shall focus here on the phenomenological b ( phenomb ) waveforms developed by .this waveform family describes bbh systems with varying masses and aligned - spin magnitudes ( i.e. non - precessing binaries ) .the family was constructed by fitting a parameterized model to existing nr waveforms in order to generate a full inspiral - merger - ringdown ( imr ) description as a function of mass and spin .the obvious appeal of the phenomb family is that it allows for the inexpensive construction of gravitational waveforms at arbitrary points in parameter space and can thus be used to create arbitrarily dense template banks . to optimize computational efficiency of the detection processit is desirable to reduce the number of templates under consideration .a variety of reduced bases techniques have been developed , either through singular - value decomposition ( svd ) , or via a greedy algorithm .svd is an algebraic manipulation that transforms template waveforms into an orthonormal basis with a prescription that simultaneously filters out any redundancies existing within the original bank . as a result, the number of templates required for matched - filtering can be significantly reduced . in addition, it has been shown in that , upon projecting template waveforms onto the orthonormal basis produced by the svd , interpolating the projection coefficients provides accurate approximations of other imr waveforms not included in the original template bank . in this paper , we continue to explore the use of the interpolation of projection coefficients . 
we take a novel approach that utilizes both the analytic phenomb waveform family and nr hybrid waveforms .we apply svd to a template bank constructed from an analytical waveform family to construct an orthonormal basis spanning the waveforms , then project the nr waveforms onto this basis and interpolate the projection coefficients to allow arbitrary waveforms to be constructed , thereby obtaining a new waveform approximant .we show here that this approach improves upon the accuracy of the original analytical waveform family .the original waveform family shows mismatches with the nr waveforms as high as when no extremization over physical parameters is applied ( i.e. , a measure of the faithfulness " of the waveform approximant ) , and mismatches of when maximized over total mass ( i.e. , a measure of the effectualness " of the waveform approximant ) . with our svd accuracy booster, we are able to construct a new waveform family ( given numerically ) with mismatches even without extremization over physical parameters .this paper is organized as follows .we begin in section [ sec : mbias ] where we provide definitions to important terminology used in our paper .we then compare our nr hybrid waveforms to the phenomb family and show that a mass - bias exists between the two . in section [ sec : method ]we present our svd accuracy booster applied to the case study of equal - mass , zero - spin binaries . in section [ sec:2d ]we investigate the feasibility of extending this approach to include unequal - mass binaries . we finish with concluding remarks in section [ sec : discussion ] .a gravitational waveform is described through a complex function , , where real and imaginary parts store the sine and cosine components of the wave .the specific form of depends on the parameters of the system , in our case the total mass and the mass - ratio .while is a continuous function of time , we discretize by sampling , where the sampling times have uniform spacing .we shall also whiten any gravitational waveform .this processes is carried out in frequency space via where is the ligo noise curve and is the fourier transform of .the whitened time - domain waveform , , is obtained by taking the inverse fourier transform of . in the remainder of the paper, we shall always refer to whitened waveforms , dropping the subscript `` w '' . for our purposesit suffices to take to be the initial ligo noise curve . using the advanced ligo noise curvewould only serve to needlessly complicate our approach by making waveforms longer in the low frequency domain . as a measure of the level of agreement between two waveforms , and , we will use their match , or overlap , .we define where is the standard complex inner product and the norm .we always consider the overlap maximized over time- and phase - shifts between the two waveforms .the time - maximization is indicated in , and the phase - maximization is an automatic consequence of the modulus .note that . for discrete sampling at points we have that where is the complex conjugate of . 
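as a concrete reference , the whitening and the time / phase - maximised match can be sketched in a few lines . the noise curve is left as an abstract callable ( assumed safely non - zero at the frequencies of interest ) , the sampling choices are up to the user , and the waveforms are assumed to be zero - padded enough that the circular correlation used for the time maximisation is harmless ; nothing here is specific to the initial ligo curve .

```python
import numpy as np

def whiten(h, dt, asd):
    # divide the fourier transform of the (complex) strain by the noise
    # amplitude spectral density; `asd` is assumed to be a callable
    # returning strictly positive values at the sampled frequencies.
    hf = np.fft.fft(h)
    freqs = np.fft.fftfreq(len(h), dt)
    return np.fft.ifft(hf / asd(np.abs(freqs)))

def overlap(h1, h2):
    # match between two whitened waveforms, maximised over relative time
    # shifts in integer samples (via the convolution theorem) and over an
    # overall phase (via the modulus).
    a = h1 / np.linalg.norm(h1)
    b = h2 / np.linalg.norm(h2)
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b)))
    return float(np.max(np.abs(corr)))

def mismatch(h1, h2):
    return 1.0 - overlap(h1, h2)
```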
without whitening , would need to be evaluated in the frequency domain with a weighting factor .the primary advantage of is its compatibility with formal results for the svd , which will allow us to make more precise statements below .when maximizing over time - shifts , we ordinarily consider discrete time - shifts in integer multiples of , as this avoids interpolation .after the overlap has been maximized , it is useful to speak in terms of the mismatch , , defined simply as we use this quantity throughout the paper to measure the level of disagreement between waveforms .we use numerical waveforms computed with the spectral einstein code spec .primarily , we use the 15-orbit equal - mass ( mass - ratio ) , zero - spin ( effective spin ) waveform described in . in section [ sec:2d ] , we also use unequal mass waveforms computed by .the waveforms are hybridized with a taylort3 post - newtonian ( pn ) waveform as described in at matching frequencies and for mass - ratios and , respectively .taylort4 at 3.5pn order is known to match nr simulations exceedingly well for equal - mass , zero - spin bbh systems ( see also fig . 9 of ) .for , a taylort3 hybrid is very similar to a taylort4 hybrid , cf.figure 12 of .the mismatch between taylort3 and taylort4 hybrids is below at , dropping to below for , and for .these mismatches are significantly smaller than mismatches arising in the study presented here , so we conclude that our results are not influenced by the accuracy of the utilized pn - nr hybrid waveform . for higher mass - ratios ,the pn - nr hybrids have a larger error due to the post - newtonian waveform .the error - bound on the hybrids increases with mass - ratio , however , is mitigated in our study here , because we use the hybrids only for total mass of , where less of the post - newtonian waveform is in band .because nr simulations are not available for arbitrary mass ratios , we will primarily concentrate our investigation to the equal - mass and zero - spin nr hybrid waveforms described above .the full imr waveform can be generated at any point along the line through a simple rescaling of amplitude and phase with total mass of the system .despite such a simple rescaling , the line lies orthogonal to lines of constant chirp mass , therefore tracing a steep gradient in terms of waveform overlap , and encompassing a large degree of waveform structure .since our procedure for constructing an orthonormal basis begins with phenomb waveforms , let us now investigate how well these waveforms model the nr waveforms to be interpolated . for this purpose , we adopt the notation and to represent nr and phenomb waveforms of total mass , respectively .we quantify the faithfulness of the phenomb family by computing the mismatch ] is a minimum .the result of this process is shown by the solid line in the top panel of figure [ figure : bias ] . allowing for a mass bias significantlyreduces the mismatch for .the mass that minimizes mismatch is generally smaller than the mass of our nr `` signal '' waveform , over almost all of the mass range considered .apparently , phenomb waveforms are systematically underestimating the mass of the `` true '' nr waveforms , at least along the portion of parameter space considered here . the solid line in the bottom panel of figure [ figure : bias ] plots the relative mass - bias , .at this value is , and it rises to just above for . and . a more comprehensive minimization over mass , mass ratio , and effective spin might change this result . 
]it is useful to consider how this mass bias compares to the potential parameter estimation accuracy in an early detection . for a signal with a matched - filter signal - to - noise ratio ( snr ) of 8 characteristic of early detection scenarios template / waveform mismatches will influence parameter estimation when the mismatch is . placing a horizontal cut on the top panel of figure [ figure : bias ] at , we see that for phenomb waveform errors have no observational consequence ; for a phenomb waveform with the wrong mass will be the best match for the signal . for the missmatch between equal - mass phenomb waveforms and nr ( when optimizing over mass )grows to .optimization over mass - ratio will reduce this mismatch , but we have not investigated to what degree .we aim to construct an orthonormal basis via the svd of a bank of phenomb template waveforms , and then interpolate the coefficients of nr waveforms projected onto this basis to generate a waveform family with improved nr faithfulness .the first step is to construct a template bank of phenomb waveforms , with attention restricted to equal - mass , zero - spin binaries .an advantage of focusing on the line is that template bank construction can be simplified by systematically arranging templates in ascending order by total mass . with this arrangementwe define a template bank to consist of phenomb waveforms , labelled ( ) , with and with adjacent templates satisfying the relation : where is the desired overlap between templates and is some accepted tolerance in this value .the template bank is initiated by choosing a lower mass bound and assigning .successive templates are found by sequentially moving toward higher mass in order to find waveforms satisfying until some maximum mass is reached . throughout each trial , overlap between waveformsis maximized continuously over phase shifts and discretely over time shifts . for template bank constructionwe choose to refine the optimization over time by considering shifts in integer multiples of .we henceforth refer to our fiducial template bank which employs the parameters , , , and . the lower mass boundwas chosen in order to obtain a reasonably sized template bank containing waveforms ; pushing downward to results in more than doubling the number of templates .template waveforms each have a duration of and are uniformly sampled at ( a sample frequency of ) . 
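the template placement just described reduces , in code , to a short greedy loop . the waveform generator , the mass step and the simple stopping rule are assumptions ( the paper requires the adjacent overlap to equal the target within a tolerance , which a bisection refinement of the accepted mass would provide ) , and `overlap` is the time / phase - maximised match sketched earlier .

```python
def build_1d_bank(make_waveform, m_min, m_max, target=0.97, dm=1e-3):
    # greedy placement along the total-mass line: starting from m_min, step
    # upward in mass until the overlap with the previously accepted template
    # first drops below `target`, accept that mass, and repeat up to m_max.
    # `make_waveform(m)` is assumed to return a whitened waveform.
    masses = [m_min]
    current = make_waveform(m_min)
    m = m_min
    while m < m_max:
        m += dm
        trial = make_waveform(m)
        if overlap(current, trial) < target:
            masses.append(m)
            current = trial
    return masses
```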
of memory is required to store this template bank using double - precision waveforms .the next step is to transform the template waveforms into an orthonormal basis .following the presentation in , this is achieved by arranging the templates into the rows of a matrix and factoring through svd to obtain where and are orthogonal matrices and is a diagonal matrix whose non - zero elements along the main diagonal are referred to as singular values .the svd for is uniquely defined as long as the singular values are arranged in descending order along the main diagonal of .the end result of is to convert the complex - valued templates into real - valued orthonormal basis waveforms .the basis waveform , , is stored in the row of , and associated with this mode is the singular value , , taken from the element along the main diagonal of .one of the appeals of svd is that the singular values rank the basis waveforms with respect to their ability to represent the original templates .this can be exploited in order to construct a reduced basis that spans the space of template waveforms to some tolerated mismatch .for instance , suppose we choose to reduce the basis by considering only the first basis modes while discarding the rest .template waveforms can be represented in this reduced basis by expanding them as the sum where are the complex - valued projection coefficients , the prime in is used to stress that the reduced basis is generally unable to fully represent the original template .we are guaranteed from to completely represent the template . ]it was shown in that the mismatch expected from reducing the basis in this way is given , can be inverted to determine the number of basis waveforms , , required to represent the original templates for some expected mismatch .provides a useful estimate to the mismatch in represeting templates from a reduced svd basis . in order to investigate its accuracy ,however , we should compute the mismatch explicitly for each template waveform . using the orthonormality condition , it is easy to show from that the mismatch between the template and its projection can be expressed in terms of the projection coefficients : this quantity is minimized continuously over phase and discretely over time shifts in integer multiples of .choosing , predicts that of the basis waveforms from our fiducial template bank are required to represent the templates to the desired accuracy . in figure[ figure : rec ] we compare the expected mismatch of to the actual mismatches computed from for each phenomb waveform in the template bank . 
the open squares in this plot show that the actual template mismatch has a significant amount of scatter about ,but averaged over a whole remains well bounded to the expected result .the phenomb template waveforms can thus be represented to a high degree from a reasonably reduced svd basis .we are of course more interested in determining how well nr waveforms can be represented by the same reduced basis of phenomb waveforms .since nr and phenomb waveforms are not equivalent , can not be used to estimate the mismatch obtained when projecting nr waveforms onto the reduced basis .we must therefore compute their representation mismatch explicitly .a general waveform , , can be represented by the reduced basis in analogy to by expressing it as the sum : where .as before , the represented waveform will in general be neither normalized nor equivalent to the original waveform .the mismatch between them is where we remind the reader that we always minimize over continuous phase shifts and discrete time shifts of the two waveforms . in figure [ figure :rec ] we use open circles to plot the representation mismatch of nr waveforms evaluated at the same set of masses from which the phenomb template bank was constructed .we see that nr waveforms can be represented in the reduced basis with a mismatch less than over most of the template bank boundary .this is about a factor of five improvement in what can be achieved by using phenomb waveforms optimized over mass . since nr waveforms were not originally included in the template bank , and because a mass - bias exists between the phenomb waveforms which were included , we can expect that the template locations have no special meaning to nr waveforms .this is evident from the thin dashed line which traces the nr representation mismatch for masses evaluated between the discrete templates .this line varies smoothly across the considered mass range and exhibits no special features at the template locations .this is in contrast to the thin solid line which traces phenomb representation mismatch evaluated between templates .in this case , mismatch rises as we move away from one template and subsequently falls back down as the next template is approached .the representation tolerance of the svd is a free parameter , which so far , we have constrained to be .when this tolerance is varied , we observe the following trends : ( i ) phenomb representation mismatch generally follows ; ( ii ) nr representation mismatch follows at first and then _ saturates _ to a minimum as the representation tolerance is continually reduced .these trends are observed in figure [ figure : trun ] where we plot nr and phenomb representation mismatch averaged over the mass boundary of the template bank evaluated both at and between templates .the saturation in nr representation mismatch occurs when the reduced basis captures all of the nr waveform structure contained within the phenomb basis . 
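the reduced basis itself is a few lines of numpy . the real - valued packing ( concatenating real and imaginary parts ) and the truncation rule below ( keep enough singular values that the discarded squared singular values , divided by twice the number of unit - norm templates , stay below the tolerance ) are standard choices and only a reconstruction of the elided formula ; the representation mismatch returned here also omits the time / phase maximisation described above .

```python
import numpy as np

def reduced_basis(templates, tol=1e-5):
    # rows of A are unit-norm whitened templates, packed as real vectors;
    # the SVD orders the basis vectors by importance and the smallest basis
    # consistent with the expected representation mismatch `tol` is kept.
    A = np.vstack([np.concatenate([t.real, t.imag]) for t in templates])
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    tail = np.concatenate([np.cumsum((s ** 2)[::-1])[::-1], [0.0]])  # sum of s_k**2 for k >= j
    n_keep = int(np.argmax(tail <= 2.0 * tol * len(templates)))
    return Vt[:n_keep], s

def represent(h, basis):
    # project a whitened waveform onto the reduced basis; for a unit-norm
    # waveform the representation mismatch (without time/phase maximisation)
    # is 1 - |projection|.
    v = np.concatenate([h.real, h.imag])
    coeffs = basis @ v
    return coeffs, 1.0 - np.linalg.norm(coeffs) / np.linalg.norm(v)
```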
reducing the representation tolerance further hits a point of diminishing returns , as the increased computational cost associated with a larger basis outweighs the benefit of marginally improving the nr match . we now wish to examine the possibility of using the reduced svd basis of phenomb template waveforms to construct a new waveform family with improved nr representation . the new waveform family would be given by a numerical interpolation of the projection coefficients of nr waveforms expanded onto the reduced basis . here we test this using the fiducial template bank and reduced basis described above . the approach is to sample nr projection coefficients , , at some set of locations , , and then perform an interpolation to obtain the continuous function that can be evaluated for arbitrary . the accuracy of the interpolation scheme is maximized by finding the space for which are smooth functions of . it is reasonable to suppose that the projection coefficients will vary on a scale similar to that over which the waveforms themselves vary . hence , a suitable space to sample along is the space of constant waveform overlap . we define this to be the space . to begin this calculation , it is useful to consider the square of the overlap , where we have dropped the explicit mass - dependence and subscripts for convenience . using and taylor - expanding the right - hand side of to second order in , we find . to second order in , the mismatch is therefore . we note that the right - hand side of can be written as , where is the part of orthogonal to ; however , for simplicity , we proceed by dropping the last term in . using , this gives a bound of the form \[ \mathrm{mismatch} \le \frac{1}{2}\sum_{k=1}^{n'} \left|r_k(m)\right|^2 + \frac{1}{2}\sum_{k=n'+1}^{2n}\left|\mu_k(m)\right|^2 + \frac{1}{2}|{\bf h}_\perp|^2 . \] we thus see three contributions to the total mismatch : ( i ) the interpolation error , ; ( ii ) the truncation error from the discarded waveforms of the reduced basis , ; ( iii ) the failure of the svd basis to represent the nr waveform , . the sum of the last two terms , which together make up the representation error , is traced by the dashed line in figure [ figure : rec ] . the goal for our new waveform family is to have an interpolation error that is negligible compared to the representation error . to remove the mass - dependence of the interpolation error in , we introduce the maximum interpolation error of each mode , ; this allows the bound to place an upper limit on the error introduced by interpolation . figure [ figure : rk ] plots as a function of mode number as well as the cumulative sum . the data pertains to an interpolation performed using chebyshev polynomials on the reduced svd basis containing the first of the waveforms . in this case , we find the interpolation error to be largely dominated by the lowest - order modes and also partially by the highest - order modes . interpolated coefficients for various modes are plotted in figure [ figure : interpcoeff ] and help to explain the features seen in figure [ figure : rk ] . in the first place , interpolation becomes increasingly difficult for higher - order modes due to their increasing complexity . this problem is mitigated by the fact that high - order modes are less important for representing waveforms , as evidenced by the diminishing amplitude of their projection coefficients . although low - order modes are much smoother and thus easier to interpolate , their amplitudes are considerably larger , meaning that interpolation errors are amplified with respect to high - order modes .
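in the real - valued convention of the sketch above , the interpolation step then amounts to fitting one chebyshev series per retained basis vector . the degree , the sampling in mass ( the paper samples at constant - overlap spacing ) , and the assumption that the input waveforms are consistently time - and phase - aligned are choices made for illustration only .

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def fit_coefficient_series(masses, coeffs, deg=10):
    # coeffs[i, k] is the k-th projection coefficient of the nr waveform
    # sampled at masses[i]; each coefficient is fitted by a chebyshev
    # series in total mass.
    return [C.chebfit(masses, coeffs[:, k], deg) for k in range(coeffs.shape[1])]

def interpolated_waveform(m, series, basis, n):
    # evaluate the fitted series at total mass m and resum the reduced
    # basis; n is the number of complex samples per waveform.
    mu = np.array([C.chebval(m, c) for c in series])
    v = mu @ basis
    return v[:n] + 1j * v[n:]
```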
summarizes the three components adding to the final mismatch of our interpolated waveform family .their total contribution can be computed directly from the interpolated coefficients in a manner similar to : = 1 - \sqrt{\sum_{k=1}^{n^\prime } \mu_k^\prime(m ) \mu_k^{\prime^*}(m)}. \label{eq : interpolationmismatch3}\ ] ] in the case of perfect interpolation for which , and reduce to and respectively , and the total mismatch is simply the representation error of the reduced basis . in figure[ figure : interperror ] open circles show the total mismatch between our interpolated waveform family and the true nr waveforms for various masses . also plotted is the nr representation error without interpolation and the mismatch between nr and phenomb waveforms minimized over mass .we see that interpolation introduces only small additional mismatch to the interpolated waveform family , and remains well below the optimized nr - phenomb mismatch .this demonstrates the efficacy of using svd coupled to nr waveforms to generate a _ faithful _ waveform family with improved accuracy over the _phenomb family that was originally used to create templates .this represents a general scheme for improving phenomenological models and presents an interesting new opportunity to enhance the matched - filtering process employed by ligo .so far , we have focused on the total mass axis of parameter space .as already discussed , this served as a convenient model - problem , because the nr waveform can be rescaled to any total mass , so that we are able to compare against the `` correct '' answer .the natural extension of this work is to expand into higher dimensions where nr waveforms are available only at certain , discrete mass - ratios . in this sectionwe consider expanding our approach of interpolating nr projection coefficients from a two - dimensional template bank containing unequal - mass waveforms .we compute a template bank of phenomb waveforms covering mass - ratios from 1 to 6 and total masses .this mass range is chosen to facilitate comparison with previous work done by .for the two - dimensional case the construction of a template bank is no longer as straightforward as before due to the additional degree of freedom associated with varying .one method that has been advanced for this purpose is to place templates hexagonally on the waveform manifold . using this procedure we find templates are required to satisfy a minimal match of 0.97 . following the waveform preparation of ,templates are placed in the rows of a matrix with real and imaginary components filled in alternating fashion where the whitened waveforms are arranged in such a way that their peak amplitudes are aligned .the waveforms are sampled for a total duration of with uniform spacing so that of memory is required to store the contents of if double precision is desired .application of transforms the 16 complex - valued waveforms into 32 real - valued orthonormal basis waveforms .the aim is to sample the coefficients of nr waveforms projected onto the svd basis of phenomb waveforms using mass - ratios for which nr data exists , and then interpolate amongst these to construct a numerical waveform family that can be evaluated for arbitrary parameters .this provides a method for evaluating full imr waveforms for mass - ratios that have presently not been simulated . 
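the two - dimensional analogue of the coefficient interpolation , summarized in the next paragraph , could be sketched with a tensor - product spline over total mass and mass - ratio ; the spline type , the grid and the phase - smoothed , real - packed storage of the coefficients are assumptions made only for illustration .

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def fit_coefficients_2d(masses, ratios, coeffs):
    # coeffs[i, j, k]: k-th (phase-smoothed, real-packed) projection
    # coefficient of the nr waveform at total mass masses[i] and mass-ratio
    # ratios[j]; one tensor-product spline per basis vector (cubic by
    # default, so at least four samples per axis are needed).
    return [RectBivariateSpline(masses, ratios, coeffs[:, :, k])
            for k in range(coeffs.shape[2])]

def evaluate_coefficients_2d(m, q, splines):
    # interpolated coefficients at an arbitrary (total mass, mass-ratio)
    return np.array([s(m, q)[0, 0] for s in splines])
```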
to summarize , we take some nr waveform , , or total mass and mass - ratio , and project it onto the basis waveform in order to obtain we apply some two - dimensional interpolation scheme on to construct continuous functions that can be evaluated for arbitrary values of and bounded by the regions of the template bank .the interpolated waveform family is given numerically by the form : as before , the interpolation process works best if we can develop a scheme for which the projection coefficients are smoothly varying functions of and .following the procedure described in , the complex phase of the first mode is subtracted from all modes : }\mu_k(m , q ) .\label{eq : musmooth}\ ] ] to motivate why might be useful , let us consider modifying the phenomb waveform family with a parameter - dependent complex phase : when constructing a template bank , or when using a template bank , such a complex phase is irrelevant , because the waveforms are always optimized over a phase - shift .however , will appear in the projection coefficients , , therefore , if one had chosen a function with fine - scale structure , this structure would be inherited by the projection coefficients . for traditional uses of waveformfamilies the overall complex phase is irrelevant , and therefore , little attention may have been paid to how it varies with parameters .the transformation removes the ambiguity inherent in by choosing it such that .this choice ties the complex phase to the physical variations of the coefficient , and does therefore eliminate all unphysical phase - variations on finer scales . in the leftmost panels of figure[ figure : coeff2d ] we plot the real part of the smoothed coefficients for phenomb waveforms projected onto the basis modes and .the middle panels show the same thing except using the nr waveforms evaluated at the set of mass - ratios = \{1 , 2 , 3 , 4 , 6 } for which we have simulated waveforms .obviously , the refinement along the axis is much finer for the phenomb waveforms since they can be evaluated for arbitrary mass - ratio , whereas we are limited to sampling at only 5 discrete mass - ratios for nr waveforms . for comparison purposes ,the rightmost panels of figure [ figure : coeff2d ] show the phenomb projection coefficients coarsened to the same set of mass - ratios for which the nr waveforms are restricted to .we find the same general behaviour as before that low - order modes display the smoothest structure , while high - order modes exhibit increasing complexity . a plausible interpolation scheme would be to sample for nr waveforms of varying mass for constant mass ratio ( i.e. as we have done previously ) and then stitch these together across the axis .since the projection coefficients in figure [ figure : coeff2d ] show sinusoidal structure they must be sampled with at least the nyquist frequency along both axes .however , looking at the middle and rightmost panels it appears as though this is not yet possible given the present set of limited nr waveforms .at best the 5 available mass - ratios are just able to sample at the nyquist frequency along the axis for high - order modes . in order to achieve a reasonable interpolation from these projectioncoefficients the current nr data thus needs to be appended with more mass - ratios .based on the left panels of figure [ figure : coeff2d ] a suitable choice would be to double the current number of mass - ratios to include = \{1.5 , 2.5 , 3.5 , 4.5 , 5 , 5.5}. 
hence , though it is not yet practical to generate an interpolated waveform family using the svd boosting scheme applied to nr waveforms , the possibility remains open as more nr waveforms are generated . we have shown that svd can be used to improve the representation of nr waveforms from a phenomb template bank . a reasonably reduced svd basis was able to reduce mismatch by a factor of five compared to phenomb waveforms optimized over mass . there was also no mass - bias associated with the svd basis and therefore no optimization over physical parameters was required . this occurs because svd unifies a range of waveform structure over an extended region of parameter space so that any biases become blended into its basis . svd therefore represents a generalized scheme through which phenomenological waveform families can be de - biased and enhanced for use as matched - filter templates . we were able to calibrate an svd basis of phenomb templates against nr waveforms in order to construct a new waveform family with improved accuracy . this was accomplished by interpolating the coefficients of nr waveforms projected onto the phenomb basis . only marginal error was introduced by the interpolation scheme and the new waveform family provided a more faithful representation of the `` true '' nr signal compared to the original phenomb model . this was shown explicitly for the case of equal - mass , zero - spin binaries . we proceeded to investigate the possibility of extending this approach to phenomb template banks containing unequal - mass waveforms . at present , however , this method is not yet feasible since the current number of mass - ratios covered by nr simulations is unable to sample the projection coefficients at the nyquist frequency . this method will improve as more nr waveforms are simulated and should be sufficient if the current sampling rate of mass - ratios were to double . we thank ilana macdonald for preparing the hybrid waveforms used in this study . kc , jde and hpp gratefully acknowledge the support of the natural sciences and engineering research council of canada , the canada research chairs program , the canadian institute for advanced research , and industry canada and the province of ontario through the ministry of economic development and innovation . dk gratefully acknowledges the support of the max planck society .
matched - filtering for the identification of compact object mergers in gravitational - wave antenna data involves the comparison of the data stream to a bank of template gravitational waveforms . typically the template bank is constructed from phenomenological waveform models since these can be evaluated for an arbitrary choice of physical parameters . recently it has been proposed that singular value decomposition ( svd ) can be used to reduce the number of templates required for detection . as we show here , another benefit of svd is its removal of biases from the phenomenological templates along with a corresponding improvement in their ability to represent waveform signals obtained from numerical relativity ( nr ) simulations . using these ideas , we present a method that calibrates a reduced svd basis of phenomenological waveforms against nr waveforms in order to construct a new waveform approximant with improved accuracy and faithfulness compared to the original phenomenological model . the new waveform family is given numerically through the interpolation of the projection coefficients of nr waveforms expanded onto the reduced basis and provides a generalized scheme for enhancing phenomenological models .
the origin - destination ( od ) matrix is important in transportation analysis .the matrix contains information on the number of travellers that commute or the amount of freight shipped between different zones of a region .the od matrix is difficult and often costly to obtain by direct measurements / interviews or surveys , but by using incomplete traffic counts and other available information one may obtain a reasonable estimate .a particular application of the od matrix estimation is in the area of public transport . in order to improve their service ,the responsible managers are looking for on - going evaluation of the passenger flow and the reasons that would influence this flow .this is typically the case for the city rail , sydney bus and sydney ferry organisations , which handle the public transport in the region around the city of sydney , australia .cityrail and co are handling a large number of stations ( wharfs , bus stops ) for trains ( buses and ferries ) across the state . they carry thousands of passengers every day , and periodically optimise the time - table schedule to best meet the changing demand .+ + an ideal optimization of the schedule would consider the resources in trains , drivers , stations and passengers .while the primary informations ( trains , drivers , stations ) are known to cityrail and co , the number of passenger on each train between each station can not be deduced easily given their current passenger flow data collection processes .+ + various approaches to estimating the od matrix using traffic counts have been developed and tested using traffic counts , or road traffic flows , .most of the papers in the literature solve this problem by postulating a general model for the trip distribution , for example a gravity type model , which aims at introducing a prior knowledge on the traffic flows and assigning a cost to each journey .then the inference is produced to estimate the parameters of this model .all these papers _ are not passengers oriented_. + most of the work relating to od matrix estimation are based on passengers observations assuming the knowledge of where the people get in and out of the public transport .lo et al developed a framework centred on the passenger choice , which they called the random link choice , and model this to obtain a maximum likelihood estimator .nandi et al applied a strategy centred on a fixed cost per person per kilometre assumption on the air - route network of india and provide some comparisons with the real data .+ when the information is not available ( for example we have no data on when passengers get off the bus ) , kostakos offers to use a wireless detection of the passengers trips , and lundgren and peterson s model is based on a target od - matrix previously defined . however , none of the cited work considered using survey data . indeed , if no complete information is available about the passengers destinations , the simplest solution is to use an appropriate survey to estimate destination information . furthermore , what characteristics of the survey are required for the estimation to be accurate ?bierliaire and toint introduces a structure - based estimation of the origin - destination matrix based on parking surveys . in their article , they used the parking surveys to infer an a priori estimate of the od matrix , and they used this prior in coordination with the partial observations of the network flows to derive a generalized least square estimator of the od matrix . 
despite its novelty , this article assume that the behaviour of car - user and public transport users are the same , at least regarding their respective od matrix . given that the public transport network topology is often different from the road network topology , one may doubt the accuracy of this assumption .moreover , they just use the partial structure extracted from the surveys .+ the purpose of this paper is then to develop an estimation procedure for the origin - destination matrix based on the ticket records available for the transport network and/or on previous surveys . unlike the article from bierliaire ,we use survey data collection from public transport users , and estimate the approximate whole matrix structure through the estimation of its eigenvectors .we propose a robust version of the estimator to avoid biases induced by the survey .we also construct a regression estimation procedure that accounts for the influence of exogenous variable such as the weather conditions or the time of the year .+ we first briefly present the passenger model , and then move on to outlining the observations model . in section [ sec : om ] , we explain how the measurements are obtained , and what measurements error should be expected . in section[ sec : mam ] , we explain the assumptions we make on the measurements , and how this affects our estimation procedure .we present in section [ sec : est ] the maximum likelihood ( ml ) estimation procedure , by providing a system of equation to be solved , for deriving estimators .we improve on this ml estimation to make it robust to survey biases in section [ sec : rob ] . finally , we present a simulation example and an application to a real world case in section [ sec : app ] .we finally comment on the results and outline some future research opportunities .let be the matrix of passengers number between the stations in the rail network over time period so that is the number of passengers who depart from station and arrive at station at time period .given that there is an obvious time dependency here , denoted by the period in which the commuting occur ( for example a day ) .the purpose of this work is to provide an estimation of given the observations specified in section [ sec : om ] .the observations provided about the passengers are very different , and only considering them all allow a direct estimation of .we list in the subsections [ om - casual],[om - deparr ] and [ om - regular ] the different kind of observations .a casual commuter is defined as a single or return journey that is not repeated regularly ( e.g. daily ) .typically , people going to a once - in - a - year event will buy their ticket for that trajectory and will probably return on the same day . accordingly for single and day return tickets , we have complete information under the assumption that they take the next train after purchasing their ticket and that they take the shortest route .let be that matrix of measurements .each journey between major stations , the passenger has to validate his ticket through the machines at the entrance of the station , and do it again at the exit . between minor stationswe assume they take the next train to arrive at the station they purchased their ticket at and assume they take the trio planners recommended route for that time .two scenarios are considered . in the first one , ( called ) , every station in the network have these machines . in the second case ( called ) only major stations have these machines . 
in any case , let us call the vector corresponding to the departures at the stations , and the vector of arrivals . fortunately we can have regular passengers with specific departure and destination , and this matrix will be denoted , where the rows stand for the departure stations and the columns for the arrival stations . this matrix is observed , and assumed distributed according to a poisson probability function with mean . + the main part of the information , however , remains unknown . indeed most of the passengers will probably have a zone ticket for a period of time , from 1 week to 1 year . the nature of these tickets makes the stations of departure and arrival unknown , and this is the main challenge of this paper . let us call the matrix of zone passenger numbers . + to make a proper statistical inference , we need two assumptions : * the traveller will act independently of the validity duration of his ticket ; * the regular traveller commits to a return journey on each working day . the observations linked to this model are twofold . for major stations , we have the total number of passengers that crossed the boom gates , in and out . for stations without boom gates , the observations have to be estimated using a survey . we also have access to the total number of people with a valid zone ticket at time ( e.g. the day of the analysis ) , denoted . in the end , the total number of regular passengers at the time period will be denoted , and we have . given these very different observations , we need a good - fitting model based on reasonable assumptions . sections [ mam - gm ] , [ mam - cm ] , [ mam - da ] and [ mam - rm ] present these assumptions for each parameter in our model . recall that is a matrix of counts ; the main assumption on that matrix is that the number of passengers is the sum of the casual passengers ( ) plus the regular passengers ( ) plus a matrix accounting for unusually big events such as major sporting events or large concerts ( called ) . the casual commuter journeys could be assumed to be poisson distributed , i.e. is supposed to be drawn from a poisson distribution whose parameter belongs to the matrix , where is the matrix of means for the counts . + however , the variance of the counts is not expected to be equal to their mean , and so the poisson counts assumption may be unrealistic . therefore , we decided to use a negative binomial regression model for , which can be over - dispersed in order to better describe the distribution of the counts . we specify that is distributed according to a gamma distribution , . for the sake of simplicity , let be distributed as negative binomial with parameters and ( we will denote ) . according to the definition of the measurements , the following relationships hold : where and are the vectors of the total number of departures and arrivals at each station during time period . for the same reasons as described for the casual matrix , we will use a negative binomial distribution to model the uncertainty around the regular travellers ' information . however , unlike the casual commuter , we do not suspect an over - dispersion but an under - dispersion , so that , and . let and be the expectations of and . when the model is well defined , the estimation procedure is computationally straightforward , e.g. between major stations where we have complete information on arrivals and departures . this means that , in practice , the accuracy of the maximum likelihood estimation method depends on efficiently solving the optimization problem .
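the over - dispersed casual - count model can be made concrete with a small poisson - gamma simulation ; the mean , dispersion and seed below are illustrative only .

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_casual_counts(mean, shape, n_days):
    # negative binomial counts as a poisson-gamma mixture: the poisson
    # intensity is gamma distributed with the given shape, which inflates
    # the variance above the mean (variance = mean + mean**2 / shape).
    lam = rng.gamma(shape, mean / shape, size=n_days)
    return rng.poisson(lam)

counts = simulate_casual_counts(mean=20.0, shape=5.0, n_days=100_000)
print(counts.mean(), counts.var())     # roughly 20 and 20 + 20**2 / 5 = 100
```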
in this section ,the stationary model parameters are estimated from the data .since the process is unlikely to be stationary , we present a second option ( section [ est : reg ] ) , a multivariate spatio - temporal model that we expect to fit the data better .the estimation procedure will be carried out in well - defined steps .if we ignore the time dependence , the successive observations can be considered independent , identical random counts from negative binomial or poisson distribution .this means that simple maximum likelihood estimation should work well , especially for large sample sizes .we observe for several realizations . given no space - time dependencies we assume that is independently distributed as .the likelihood is then , \end{aligned}\ ] ] where stands for one element of the matrix .we thus can estimate the parameters through , despite the absence of closed form solution to this problem , the optimization algorithms can quickly lead to a global maximum .unfortunately we do nt have complete information for those with weekly , monthly , quarterly or anuual tickets ( long - term tickets ) .we have information of the times they enter and departs at major stations but we do nt have complete information for the long term tickets either to or from minor stations .our assumption here is that only a proportion of the people will travel on day , where . is an additional parameter that reflects the passengers habit .it does exist because when performing the estimation , one may find a bigger estimation of travellers than what is observed .some of the difference is due to the randomness of , but it might also be explained by the fact that travellers with prepaid long term tickets will not necessary travel each of the working day of the week .+ however , we may provide the same estimation for the parameters as we did in the previous section , that is , where stands for the same likelihood function as above .+ this leads us to the final estimation , the contribution this paper makes to the literature .the aim is to estimate the matrix with the available departure and arrival data .the first step is to estimate the general shape of the matrix .the problem is to achieve this in a simple way given that is to be estimated with parameters , and only equations .the following paragraph presents an elegant solution to this problem .+ recall as the expectation of .it is assumed symmetric , we can diagonalize it , so that , where is a projection matrix of eigenvectors of and is a diagonal matrix , with terms equal to the respective eigenvalues . therefore ,if the structure of is known ( i.e. 
the eigenvectors are known ) and constant , then we have reduced the problem to solving a system of unknown parameters with equations .[ eq : odeq ] and the previous estimations , we have the following system , where and are obtained by simple subtraction .the probability density function of the observations can then be written , \quad p \big ( y_{di}^t \vert r_z , p_{rz } \big ) & \sim & \mathcal{nb } ( \sum_j r^{ij}_z , p_{rz})\end{aligned}\ ] ] where and .according to this equation , we then have likelihood equations ( ] , \end{aligned}\ ] ] _ most of the optimization algorithms that deal with the constraint require an initialization which belong to the constrained space .one could be tempted to address as a starting point the mean value of the observations , according to the one - dimensional ( ) result .however , it is very unlikely that this initial point will satisfy the constraints .therefore , the best choice so far seems to be the diagonal elements of the matrix , given that they naturally fill * constraint 1 * and * constraint 2*. + + the complete optimization program therefore becomes , with the initial value .this optimization program can be replaced by an explicit expression of the estimator , subject to some constraints stated in [ ann : a ] .the main constraint is the poisson distribution assumption , so that we have , + * proposition 1 : * _ assume that , then ^{-1 } \bar{y}\end{aligned}\ ] ] where is the matrix of estimated eigenvalues of . _+ if now we consider a gaussian likelihood instead of poisson , the following maximum likelihood estimator is found , + * proposition 2 : * _ assume that , then where is the matrix of estimated eigenvalues of . _ + the proofs of propositions 1 and 2 are presented in [ ann : a ] .we can also derive the follwing theorem , that ensures us of the quality of the estimation , + * theorem 1 * + assume that {\ a.s.\ } p ] stands for the scale of the survey , ] . then , and we have , where stands for the probability density function of .+ the first integral decreases towards as grows to infinity according to eq .[ eq : as ] .the argument for the second integral is the following . according to the assumption of strong convergence of , towards the dirac function as goes to infinity . being strictly positive , this ends the proof . . [[ calculation - in - case - of - poisson - regression - and - log - link - function ] ] * calculation in case of poisson regression ( and log link function ) * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + _ the beginning of the reasoning is similar to the previous one .then , if we assume that exogenous variables have impacts on the number of passengers , we can write , where are symmetric matrices reflecting the intercept ( ) for baseline commuter flows and the variable influences ( ) for changes in commuter flows from known daily influences . moreover , we assume that the same diagonalization ( meaning with the same eigenvectors ) can be applied , which lead us to , therefore , will be distributed according to a poisson distribution with the following parameter , where the parameters to be estimated are , which means we have to estimate parameters . + the probability of one observationcan then be written , which gives the following log - likelihood , therefore , to obtain the final system of equation , we need to calculate the derivatives of the log - likelihood with respect to each parameter . _
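The eigen-decomposition device introduced above, treating the eigenvectors as known and constant and solving only for the eigenvalues, can be illustrated numerically. In the sketch below a symmetric expectation matrix for the zone-ticket flows is built synthetically, its eigenvectors are held fixed, and the eigenvalues are recovered from station departure totals averaged over many days, in the spirit of Proposition 1. All matrices, sample sizes and noise levels are invented; this is a schematic illustration, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical symmetric mean OD matrix for the zone-ticket (regular) passengers
base = rng.uniform(5.0, 40.0, size=(4, 4))
r_z = (base + base.T) / 2.0
lam_true, q = np.linalg.eigh(r_z)   # eigenvectors q are taken as known and constant

# observed departure totals: Poisson draws around the row sums, averaged over many days
n_days = 2000
row_sums = r_z.sum(axis=1)
y_bar = rng.poisson(row_sums, size=(n_days, 4)).mean(axis=0)

# with q fixed, the expected row sums are linear in the eigenvalues:
#   E[D_i] = sum_k q[i, k] * (sum_j q[j, k]) * lam_k
# so the n(n+1)/2 unknowns of a symmetric matrix reduce to n eigenvalues
design = q * q.sum(axis=0)
lam_hat, *_ = np.linalg.lstsq(design, y_bar, rcond=None)

print("true eigenvalues     :", np.round(lam_true, 2))
print("estimated eigenvalues:", np.round(lam_hat, 2))
```

Note the identifiability caveat: an eigenvalue whose eigenvector has entries summing to (nearly) zero contributes (almost) nothing to the totals and is therefore poorly determined from marginal counts alone, which is one reason the constrained formulation and the additional survey information matter in practice.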
the estimation of the number of passengers making the same journey is a common problem for public transport authorities . this problem is also known as the origin - destination ( od ) estimation problem and it has been widely studied for the past thirty years . however , the theory is missing when the observations are not limited to the passenger counts but also include station surveys . + our aim is to provide a solid framework for the estimation of an od matrix when only a portion of the journey counts is observable . + our method consists of a statistical estimation technique for the od matrix when we have the sum - of - row counts and survey - based observations . our technique differs from previous studies in that it does not need a prior od matrix , which can be hard to obtain . instead , we model the passengers ' behaviour through the survey data , and use the diagonalization of the partial od matrix to reduce the parameter space and derive a consistent global od matrix estimator . we demonstrate the robustness of our estimator and apply it to several examples showcasing the proposed models and approach . we highlight how other sources of data can be incorporated in the model , such as explanatory variables ( e.g. rainfall , indicator variables for major events , etc . ) , and how inference can be made in a principled , non - heuristic way . constrained maximum likelihood estimation , eigenvectors , counts estimation
a fair number of astronomers and astronomy students have a physical challenge .it is our responsibility to learn the basics of accessibility to be able to help our library patrons to gain access to things that they need for their studies and work .astronomy is often seen as a very visual science .after all , its origins lie in looking at the skies .hence , it is a common belief that you need to use your sight to be able to study astronomy .this is strictly not true . in reality, we have been using assistive technologies telescopes , sensors , computers for a long time now to gain access to data that the human eye does not see unaided .visual information is coming to us as large streams of bytes .the modern astronomer is hardly bound by physical limitations .one can produce solid research sitting comfortably in front of one s personal computer .there are many examples of physically challenged individuals who have made successful careers in science .those who have seen the movie _ contact _ based on carl sagan s novel are familiar with the blind astronomer who is listening to radio signals instead of watching them on the screen .his character is based on a real scientist , dr . d. kent cullers .there are other success stories in fact , too many to enumerate here .but , you ask , is nt the sheer amount of information a major hindrance to those who can not browse it easily ? yes , it is to some degree .electronic textual materials provide both a possibility and a challenge for those with low vision . in theory , it is possible for almost anyone to access online information , but in practice , this requires know - how and proper tools .plenty of assistive technologies exist to overcome hindrances .the daisy standard for digital talking books has been an important tool for making electronic texts easy to browse .not all hindrances are in the visual domain .imagine an elderly astronomer who has the full use of his or her intelligence , but whose hands are shaking , and who might have some difficulty with pointing a mouse when navigating a webpage and filling out search forms . it is a challenging task for librarians and information specialists to make our services and search forms accessible to people with a diversity of abilities so that they can do the research necessary for building careers as active contributors in their chosen fields of research .but what does accessibility look like ?there is a pervasive myth that it looks boring .this is strictly not true .accessible design should be functional enough , not just pretty . with proper html code and other techniques , we can make the text compliant with technological aids .if the html coding is poor , a document may be impossible to open with such aids or it could be impossible to navigate the text .the author of this paper was involved with an university - wide accessibility project that was undertaken by the university of helsinki in 20052006 , with a follow up in 20082009 .it was recognized that accessibility must cover not only our physical surroundings , but also the online environment as well . in spring 2009, we noticed that the new national online system for applying for university education was not accessible to blind students .the system was provided by the finnish ministry of education , and we challenged them to fix it . 
to our big surprise , they did , working in collaboration with us and the finnish federation of the visually impaired .figure 1 shows a page from the application system .it looks exactly the same both before and after accessibility changes were made .differences can be seen on the coding level , but otherwise one can not tell the old version from the new one by visual inspection alone . the change has resulted in a major functional improvement . the old version could not even be opened with assistive technology , and blind students could not use it . now they can .accessibility needs some muscle to drive it .it is not just about good people doing good deeds it is also about ensuring that everyone has access to things that matter to them .we need guidelines and standards , preferably with legislation to back them up . in the united states , section 508 of the rehabilitation actregulates purchases made with federal funding .it is about `` access to and use of information and data that is comparable to that provided to others . ''a market for accessible products helps big publishers to take accessibility into account . when a publisher has a large enough number of customers who need to buy accessible products , they will be motivated to sell accessible products .we also need strong standards . the world wide consortium has updated its web content accessibility guidelines ( wcag ) version 2 dates back to 2008 .this new version of wcag is meant to be a practical tool , evidenced by its three levels of accessibility : * a : minimum * aa : medium * aaa : as accessible as possible you will find a good wcag2 checklist online .the ideal thing to do would be to make your website as accessible as possible , but in practice you need to read the guidelines and identify the accessibility level best suited to serving your users .let s look at a concrete example by applying an a - level guideline to an existing search form .the guideline states : `` form inputs have associated text labels or , if labels can not be used , a descriptive title attribute . ''let s look at a part of an ads search form with its original coding .this piece of code is from the section which requires an object for selection . 0.2 in _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `< input name = obj_req value = yes type = checkbox > require object for selection ` _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 0.2 in let s add some more coding ( in boldface ) . rather than just a checkbox , we now have a _ text label_. 0.2 in _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ` < input id = obj_req name = obj_req value = yes type = checkbox > < label for = obj_req > require object for selection</label > ` _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 0.2 in figure 2 shows what has changed . 
the text label in question has been highlighted .it is no longer necessary to hit the small checkbox it is enough if you just click the associated text .this makes the box much easier to check .you can do clever things with html .there are however many other formats to consider : pdf , flash , and office products , to name just a few .no matter what the material at hand , it needs structure above all else . otherwise , a blind person who tries to read a text has to read everything from beginning to end and is not able to navigate to a chapter or a footnote .even pdf which used to be an accessibility nightmare can now boast of a structure to make it more accessible it s called tagged pdf . as a general guideline ,no matter what kind of document you are writing , you will need to stick to structure .do you use subtitles that are bold and in a different font ?please , use proper titles instead and use styles to control the fonts and such .let s take a peek at an html page that has structure .there are tools to make the structure visible .the box in figure 3 has been done with a wave toolbar .this example is taken from _planetary and space science_. a good amount of structure has been revealed .the html structure of _ earth , moon and planets _ , shows next to nothing .its only structure is a references header , `` h2 references . ''there is no subtitle structure at all that you can jump to .most publishers make their electronic materials available in pdf format .usually , those files are without any structure .figure 4 shows the acrobat reader results of an accessibility quick check there is no structure .what is the current situation with different astronomy publishers and journals ?table 1 shows accessibility elements for a selection of publishers based on inspection of a few papers published in 2009 by university of helsinki astronomers .we asked some questions about the basic properties of each paper .is there html fulltext ?does it have structure ? and does the pdf have structure ? if not , are there at least pdf bookmarks ?you can see that these results leave a lot to hope for .the only consistently good results are from _ planetary and space science _ , which is published by elsevier .unfortunately , however , not all elsevier products are equally accessible .llcccc title & publisher & html & html & pdf & + & & fulltext & structure & structure & bookmarks + astronomy & astrophysics & edp sciences & yes & ok & no & yes + astrophysical journal & iop & yes & none & no & yes + monthly notices r.a.s . & wiley & yes & none & no & no + astron .nachrichten & wiley & no & & no & no + planetary space sci . &elsevier & yes & ok & yes & yes + earth , moon & planets & springer & yes & none & no & yes + elsevier was the winner of this brief check .it has been making some efforts to increase accessibility of its products , which sets a good example for other big publishers . have inspected the overall accessibility compliance and practices of major database vendors , elsevier included . 
even if major publishers are making some progress , it is not enough .there are also smaller publishers , and beyond that there are institutes and libraries producing their own online materials or making their own search forms .many of them are unaware of current accessibility standards .standards can seem difficult to apply .but really , they are easy to follow if we make the guidelines clear enough so that everyone can understand and use them .remember that new technologies are taken into use all the time .we will be constantly facing new challenges to make them accessible , but they will also bring new possibilities with them .there is one last thing that you need to be aware of do nt forget about copyright .it is not a given fact that a library can freely distribute electronic material to a patron who could then read it on a personal computer or some other device .the copyright laws in different countries vary surprisingly on this point . moreover ,even when the right to access is written into a law , thus making special exceptions to copyright for disabled persons , a license agreement between a library and a publisher might take this right away for particular electronic materials or products . a publisher ora consortium will not allow you to do things that are not specifically stated in the signed agreement .please always remember to check the accessibility options in agreements you sign . to give an example ,the current finnish national electronic library ( finelib ) consortium agreement with elsevier specifies that `` coursepacks in nonelectronic , non - print perceptible form ( e.g. braille ) may be offered for [ the ] visually impaired . ''this is not , however , how visually impaired users would like to use the materials .this is a standard clause that should be modified to meet real needs .unfortunately , when the consortium was formed , this clause did not receive the proper attention it should have .practically everyone who lives long enough has to face physical challenges at some point .an astronomer who is able - bodied today could have accessibility issues tomorrow .we can not expect that she or he is willing to give up practicing science . in her essay _ the blind astronomer _ , the new zealand astronomer tracy farr eloquently describes the changes brought by the gradual loss of her vision . with a different approach to looking at the research data, she can continue to access the universe : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ i am freeing myself from the fixedness of the seen . 
with my mindopen to the universe , i hear the heavens ebb and flow as music .it is the incomprehensibly wonderful revelation of music first heard after only ever having seen black spots and lines on a white page .as my ears open and my eyes close , i hear the planets dance ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
making online resources more accessible to physically challenged library users is a topic deserving informed attention from astronomy librarians . recommendations like wcag 2.0 standards and section 508 , in the united states , have proven valuable , and some vendors are already making their products compliant with them . but what about the wide variety of databases and other resources produced by astronomy information professionals themselves ? few , if any , of these are currently compliant with accessibility standards . here we discuss some solutions to these accessibility challenges .
in adaptive control and recursive parameter estimation one often needs to adjust recursively an estimate of a vector , which comprises constant but unknown parameters , using measurements of a quantity here is a vector of known data , often called the regressor , and is a measurement error signal .the goal of tuning is to keep both the estimation error and the parameter error as small as possible .there are several popular methods for dealing with the problem above , for instance least - squares .maybe the most straightforward involve minimizing the prediction error via gradient - type algorithms of the form : where is a constant , symmetric , positive - definite gain matrix .let us define and analyze differential equations and , which under the assumption that is identically zero read : the nonnegative function has time derivative hence inspection of the equation above reveals that is limited in time , thus , and also that the error ( norms are taken on the interval where all signals are defined ) .these are the main properties an algorithm needs in order to be considered a suitable candidate for the role of a tuner in an adaptive control system .often or something similar is also a desirable property . to obtain the latter , normalized algorithms can be used ; however , the relative merits of normalized versus unnormalized tuners are still somewhat controversial .another alternative is to use a time - varying , as is done in least - squares tuning . in [ sec : acceleration ] we present a tuner that sets the second derivative of , and in [ sec : covariance ] the effects of a white noise on the performance of the two algorithms are compared .then we show some simulations and make concluding remarks .classical tuners are such that the _ velocity _ of adaptation ( the first derivative of the parameters ) is set proportional to the regressor and to the prediction error .we propose to set the _ acceleration _ of the parameters : notice that the the formula above is implementable ( using integrators ) if measurement error is absent , because the unknown appears only in scalar product with .choose another function of lyapunovian inspiration : taking derivatives along the trajectories of gives integrating we obtain which leads immediately to the desired properties : the slow variation property follows without the need for normalization , and now we obtain instead of as before. we might regard as a modified error , which can be used in the stability analysis of a detectable or `` tunable '' adaptive system via an output - injection argument ; see .a generalization of is with and constant , symmetric , positive - definite matrices such that and .the properties of tuner , which can be obtained using the positive - definite function in the same manner as before , are now consider the effects on the expected value and covariance of of the presence of a measurement error .the assumptions are that is a white noise with zero average and covariance and that are given , deterministic data . for comparison purposes , first consider what happens when the conventional tuner is applied to in the presence of measurement error : the solution to the equation above can be written in terms of s state transition matrix as follows hence because by assumption . 
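As a sanity check of the two properties used above, boundedness of the parameter error and square-integrability of the prediction error, the following sketch integrates the classical velocity tuner with unit gain on a noise-free two-parameter example. The regressor signal and the true parameter values are invented for the illustration.

```python
import numpy as np

dt, T = 1e-3, 20.0
steps = int(T / dt)
theta_star = np.array([1.0, -2.0])   # unknown "true" parameters (invented)
theta_hat = np.zeros(2)

def phi(t):
    # a persistently exciting two-dimensional regressor
    return np.array([np.sin(t), np.cos(0.5 * t)])

v_trace, int_e2 = [], 0.0
for k in range(steps):
    p = phi(k * dt)
    e = p @ (theta_hat - theta_star)       # prediction error (no measurement noise)
    theta_hat = theta_hat - dt * p * e     # theta_hat_dot = -phi * e, unit gain
    v_trace.append(np.sum((theta_hat - theta_star) ** 2))
    int_e2 += e * e * dt

print("V at start / end :", v_trace[0], v_trace[-1])            # V should not increase
print("integral of e^2  :", int_e2, "<= V(0)/2 =", 0.5 * np.sum(theta_star ** 2))
```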
herethe notation , denoting the expectation with respect to the random variable , is used to emphasize that the stochastic properties of are not under consideration .the conclusion is that will converge to zero in average as fast as does .the well - known persistency of excitation conditions on are sufficient for the latter to happen . to study the second moment of the parameter error , write covariance of can be written as the sum of four terms .the first is deterministic .the second term because has zero mean , and the third term is likewise zero .the fourth term where fubini s theorem and the fact were used .performing the integration and adding the first and fourth terms results in this equation can be given the following interpretation : for small , when is close to the identity , the covariance of remains close to , the outer product of the error in the initial guess of the parameters with itself .as , which will happen if is persistently exciting , tends to .this points to a compromise between higher convergence speeds and lower steady - state parameter error , which require respectively larger and smaller values of the gain .algorithms that try for the best of both worlds parameter convergence in the mean - square sense often utilize time - varying , decreasing gains ; an example is the least - squares algorithm .we shall now attempt a similar analysis for the acceleration tuner applied to , which results in the differential equation let where , , each is a function of unless otherwise noted , and the dot signifies derivative with respect to the first argument . if , following the same reasoning used for the velocity tuner , one concludes that and that however the properties of the acceleration and velocity tuners are not yet directly comparable because the right - hand side of does not lend itself to immediate integration . to obtain comparable results, we employ the ungainly but easily verifiable formula , ' '' '' valid for arbitrary scalars and , and make the [ [ simplifying - assumption ] ] simplifying assumption : + + + + + + + + + + + + + + + + + + + + + + + + for , and 3 , , where are scalars and is the identity matrix . premultiplying by ] , integrating from 0 to , and using the simplifying assumption gives formula . ' '' '' taking in , results positive - semidefinite , therefore the combination of and shows that can be increased without affecting s steady - state covariance . on the other hand , to decrease the covariance we need to increase , which roughly speaking means increasing damping in . since and can be increased without affecting the stability properties shown in [ sec : acceleration ] , a better transient steady - state performance compromise might be achievable with the acceleration tuner than with the velocity tuner , at least in the case when , , and are `` scalars . ''notice that by construction .[ [ approximate - analysis ] ] approximate analysis : + + + + + + + + + + + + + + + + + + + + + + the derivation of inequality does not involve any approximations , and therefore provides an upper bound on , valid independently of .a less conservative estimate of the integral in can be obtained by replacing by its average value in the definition of in .this approximation seems reasonable because appears inside an integral , but calls for more extensive simulation studies . 
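The qualitative conclusion for the velocity tuner, namely that a larger adaptation gain speeds convergence but inflates the steady-state spread of the parameter error under measurement noise, can be checked in the scalar case with a constant regressor, where the error becomes an Ornstein-Uhlenbeck process with stationary variance gR/2. This scalar special case and the Euler-Maruyama discretisation below are only a rough numerical check; the gain values, noise intensity and step size are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, R, dt = 1.0, 0.5, 1e-3          # constant regressor, noise intensity, step size
steps, burn = 400_000, 100_000
noise = rng.standard_normal(steps)

for g in (0.5, 1.0, 2.0):
    x, samples = 1.0, []
    for k in range(steps):
        # theta_tilde_dot = -g * phi * (phi * theta_tilde + white measurement noise)
        x += -g * phi * phi * x * dt - g * phi * np.sqrt(R * dt) * noise[k]
        if k >= burn:
            samples.append(x)
    print(f"gain g = {g}: empirical variance {np.var(samples):.4f}, g*R/2 = {g * R / 2:.4f}")
```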
to obtain a useful inequality ,we require ; namely , using the schur complement or , using the simplifying assumption and substituting by its approximation suppose further that .looking for the least conservative estimate , we pick , the least value of that keeps .thus with \bar{m}_1 \left[\begin{smallmatrix}{\phi}^\top_{11}(t,0 ) \\ { \phi}^\top_{12}(t,0 ) \end{smallmatrix}\right]}{4m_1 ^ 2 m_2m_3r(1+\mu_2 ) -r}.$ ] taking we repeat the previous , exact result . for large positive values of first term of the right - hand side of tends to , which indicates that the steady - state covariance of the parameter error decreases when the signal increases in magnitude , and that it can be made smaller via appropriate choices of the gains and .the situation for the accelerating tuner is hence much more favorable than for the conventional one .the simulations in this section compare the behavior of the accelerating tuner with those of the gradient tuner and of a normalized gradient one .all simulations were done in open - loop , with the regressor a two - dimensional signal , and without measurement noise .figure [ fig : step ] shows the values of and respectively when is a two - dimensional step signal . in figure[ fig : sin ] the regressor is a sinusoid , in figure [ fig : sia ] an exponentially increasing sinusoid , and in figure [ fig : prb ] a pseudorandom signal generated using matlab .no effort was made to optimize the choice of gain matrices ( , , and were all chosen equal to the identity ) , and the effect of measurement noise was not considered .the performance of the accelerating tuner is comparable , and sometimes superior , to that of the other tuners .= 2.5 in = 2.5 in = 2.5 in = 2.5 in = 2.5 in = 2.5 in = 2.5 in = 2.5 inother ideas related to the present one are replacing the integrator in with a positive - real transfer function , and using high - order tuning ( ) .high - order tuning generates as outputs as well as its derivatives up to a given order ( in this sense we might consider the present algorithm a second - order tuner ) , but unlike the accelerating tuner requires derivatives of up to that same order .we expect that accelerating tuners will find application in adaptive control of nonlinear systems and maybe in dealing with the topological incompatibility known as the `` loss of stabilizability problem '' in the adaptive control literature .the stochastic analysis in [ sec : covariance ] indicates that the performance and convergence properties of the accelerating tuner , together with its moderate computational complexity , may indeed make it a desirable tool for adaptive filtering applications .it seems that a better transient steady - state performance compromise is achievable with the accelerating tuner than with the velocity tuner . to verify this conjecture , a study of convergence properties of the accelerating tuner and their relation with the persistence of excitation conditionsis in order , as well as more extensive simulations in the presence of measurement noise .
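For readers who wish to reproduce the flavour of the simulations described above without the original code, the sketch below integrates the first-order (velocity) tuner alongside a generic damped second-order tuner on a sinusoidal-regressor example. The second-order law used here, theta_ddot = -q1*theta_dot - q2*phi*e, is only a stand-in with invented gains; it is not claimed to be the accelerating tuner analysed in this paper, whose exact form is not reproduced in this text.

```python
import numpy as np

dt, T = 1e-3, 40.0
steps = int(T / dt)
theta_star = np.array([1.0, -2.0])

def phi(t):
    return np.array([np.sin(t), np.cos(2.0 * t)])   # sinusoidal regressor

th1 = np.zeros(2)                    # first-order (velocity) tuner estimate
th2, th2_dot = np.zeros(2), np.zeros(2)  # second-order stand-in tuner state
q1, q2 = 2.0, 4.0                    # invented damping and adaptation gains

for k in range(steps):
    p = phi(k * dt)

    e1 = p @ (th1 - theta_star)
    th1 += -dt * p * e1              # velocity tuner update

    e2 = p @ (th2 - theta_star)
    th2_ddot = -q1 * th2_dot - q2 * p * e2   # generic damped second-order flow
    th2 += dt * th2_dot
    th2_dot += dt * th2_ddot

print("final parameter error, velocity tuner   :", np.linalg.norm(th1 - theta_star))
print("final parameter error, 2nd-order tuner  :", np.linalg.norm(th2 - theta_star))
```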
we propose a tuner , suitable for adaptive control and ( in its discrete - time version ) adaptive filtering applications , that sets the second derivative of the parameter estimates rather than the first derivative as is done in the overwhelming majority of the literature . comparative stability and performance analyses are presented . * key words : * adaptive control ; parameter estimation ; adaptive filtering ; covariance analysis .
the experimental data used in this paper were collected by the forward looking radar of the us army research laboratory . that radar was built for detection and possible identification of shallow explosive - like targets .since targets are three dimensional objects , one needs to measure a three dimensional information about each target .however , the radar measures only one time dependent curve for each target , see figure 5 .therefore , one can hope to reconstruct only a very limited information about each target .so , we reconstruct only an estimate of the dielectric constant of each target . for each target ,our estimate likely provides a sort of an average of values of its spatially distributed dielectric constant .but even this information can be potentially very useful for engineers .indeed , currently the radar community is relying only on the energy information of radar images , see , e.g. .estimates of dielectric constants of targets , if taken alone , can not improve the current false alarm rate .however , these estimates can be potentially used as an additional piece of information .being combined with the currently used energy information , this piece of the information might result in the future in new classification algorithms , which might improve the current false alarm rate .an inverse medium scattering problem ( imsp ) is often also called a coefficient inverse problem ( cip ) .imsps / cips are both ill - posed and highly nonlinear .therefore , an important question to address in a numerical treatment of such a problem is : _ how to reach a sufficiently small neighborhood of the exact coefficient without any advanced knowledge of this neighborhood ? _ the size of this neighborhood should depend only on the level of noise in the data and on approximation errors .we call a numerical method , which has a rigorous guarantee of achieving this goal , _ globally convergent method _ ( gcm ) . in this paperwe develop analytically a new globally convergent method for a 1-d inverse medium scattering problem ( imsp ) with the data generated by multiple frequencies .in addition to the analytical study , we test this method numerically using both computationally simulated and the above mentioned experimental data . 
first , we derive a nonlinear integro - differential equation in which the unknown coefficient is not present ._ element _ of this paper is the method of the solution of this equation .this method is based on the construction of a weighted least squares cost functional .the key point of this functional is the presence of the carleman weight function ( cwf ) in it .this is the function , which is involved in the carleman estimate for the underlying differential operator .we prove that , given a closed ball of an arbitrary radius with the center at in an appropriate hilbert space , one can choose the parameter of the cwf in such a way that this functional becomes strictly convex on that ball .the existence of the unique minimizer on that closed ball as well as convergence of minimizers to the exact solution when the level of noise in the data tends to zero are proven .in addition , it is proven that the gradient projection method reaches a sufficiently small neighborhood of the exact coefficient if its starting point is an arbitrary point of that ball .the size of that neighborhood is proportional to the level of noise in the data .therefore , since restrictions on are not imposed in our method , then this is a _ globally convergent _ numerical method .we note that in the conventional case of a non convex cost functional a gradient - like method converges to the exact solution only if its starting point is located in a sufficiently small neighborhood of this solution : this is due to the phenomenon of multiple local minima and ravines of such functionals . unlike previously developed globally convergent numerical methods of the first type for cips ( see this section below ) , the convergence analysis for the technique of the current paper does not impose a smallness condition on the interval of the variations of the wave numbers .the majority of currently known numerical methods of solutions of nonlinear ill - posed problems use the nonlinear optimization . in other words ,a least squares cost functional is minimized in each problem , see , e.g. chavent , engl , gonch1,gonch2 . however , the major problem with these functionals is that they are usually non convex .figure 1 of the paper scales presents a numerical example of multiple local minima and ravines of non - convex least squares cost functionals for some cips .hence , convergence of the optimization process of such a functional to the exact solution can be guaranteed only if a good approximation for that solution is known in advance .however , such an approximation is rarely available in applications .this prompts the development of globally convergent numerical methods for cips , see , e.g. .the first author with coauthors has proposed two types of gcm for cips with single measurement data .the gcm of the first type is reasonable to call the tail functions method `` . this development has started from the work and has been continued since then , see , e.g. and references cited therein . in this case, on each step of an iterative process one solves the dirichlet boundary value problem for a certain linear elliptic pde , which depends on that iterative step .the solution of this pde allows one to update the unknown coefficient first and then to update a certain function , which is called the tail function '' . 
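To fix ideas about the gradient projection iteration invoked above, the following sketch minimises a strictly convex quadratic over a closed ball B(R) by repeatedly stepping along the negative gradient and projecting back onto the ball. It assumes nothing about the specific weighted functional constructed later in the paper; the quadratic, the radius and the step size are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# strictly convex model functional J(x) = 0.5 * x^T A x - b^T x
n = 5
m = rng.normal(size=(n, n))
a = m @ m.T + n * np.eye(n)          # symmetric positive definite
b = rng.normal(size=n)

R = 2.0                              # radius of the ball B(R) centred at the origin

def project_onto_ball(x, radius):
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def grad_J(x):
    return a @ x - b

# gradient projection: x_{k+1} = P_B( x_k - gamma * J'(x_k) )
gamma = 1.0 / np.linalg.norm(a, 2)   # step below 1/L, L = largest eigenvalue of a
x = project_onto_ball(rng.normal(size=n) * 5.0, R)   # arbitrary starting point in B(R)
for _ in range(500):
    x = project_onto_ball(x - gamma * grad_J(x), R)

print("iterate norm :", np.linalg.norm(x))
print("gradient norm:", np.linalg.norm(grad_J(x)))
```

Strict convexity on the ball is exactly what guarantees that this iteration, started from an arbitrary point of B(R), converges to the unique constrained minimiser rather than to a spurious local minimum.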
the convergence theorems for this method impose a smallness condition on the interval of the variation of either the parameter of the laplace transform of the solution of a hyperbolic equation or of the wave number in the helmholtz equation .recall that the method of this paper does not impose the latter assumption .in this paper we present a new version of the gcm of the second type . in any version of the gcm of the second typea weighted cost functional with a cwf in it is constructed .the same properties of the global strict convexity and the global convergence of the gradient projection method hold as the ones indicated above .the gcm of the second type was initiated in klib95,klib97,kt with a recently renewed interest in .the idea of any version of the gcm of the second type has direct roots in the method of , which is based on carleman estimates and which was originally designed in only for proofs of uniqueness theorems for cips , also see the recent survey in . another version of the gcm with a cwf in it was recently developed in bau1 for a cip for the hyperbolic equation where is the unknown coefficient .this gcm was tested numerically in . in bau1,bau2 non - vanishing conditionsare imposed : it is assumed that either or or in the entire domain of interest .similar assumptions are imposed in for the gcm of the second type . on the other hand , we consider in the current paper ,so as in , the fundamental solution of the corresponding pde .the differences between the fundamental solutions of those pdes and solutions satisfying non - vanishing conditions cause quite significant differences between klib95,klib97,kt , ktsiap and of corresponding versions of the gcm of the second type .recently , the idea of the gcm of the second type was extended to the case of ill - posed cauchy problems for quasilinear pdes , see the theory in klquasi and some extensions and numerical examples in bakklkosh , klkosh .cips of wave propagation are a part of a bigger subfield , inverse scattering problems ( isps ) .isps attract a significant attention of the scientific community . in thisregard we refer to some direct methods which successfully reconstruct positions , sizes and shapes of scatterers without iterations .we also refer to for some other isps in the frequency domain . in addition, we cite some other numerical methods for isps considered in . as to the cips with multiple measurement , i.e. the dirichlet - to - neumann map data , we mention recent works and references cited therein , where reconstruction procedures are developed , which do not require a priori knowledge of a small neighborhood of the exact coefficient . in section 2 we state our inverse problem .in section 3 we construct that weighted cost functional . in section 4we prove the main property of this functional : its global strict convexity . in section 5we prove the global convergence of the gradient projection method of the minimization of this functional .although this paper is mostly an analytical one ( sections 3 - 5 ) , we complement the theory with computations . in section 6we test our method on computationally simulated data . 
in section 7we test it on experimental data .concluding remarks are in section 8 .let the function be the spatially distributed dielectric constant of the medium .we assume that the source position for brevity , we do not indicate below dependence of our functions on consider the 1-d helmholtz equation for the function , be the solution of the problem ( [ 2.4 ] ) , ( [ 2.6 ] ) for the case then interest is in the following inverse problem : * inverse medium scattering problem ( imsp)*. _ let _\subset \left ( 0,\infty \right ) ] in addition , uniqueness of our imsp was proven in klibloc .also , the following asymptotic behavior of the function takes place : \left ( 1+% \widehat{u}\left ( x , k\right ) \right ) , k\rightarrow \infty , \forall x\in % \left [ 0,1\right ] , \label{2.10}\]] given ( [ 2.9 ] ) and ( [ 2.10 ] ) we now can uniquely define the function as in .the difficulty here is in defining since this number is usually defined up to the addition of where is an integer . for sufficiently large values of define the function using ( [ 2.60 ] ) , ( [ 2.100 ] ) , ( [ 2.10 ] ) and ( [ 2.1000 ] ) as , for sufficiently large , eliminates the above mentioned ambiguity .suppose that the number is so large that ( [ 2.12 ] ) is true for then is defined as in ( [ 2.11 ] ) . as to not large values of , we define the function ( [ 2.11]) as ( [ 2.9 ] ) , \forall \xi > 0. ] consider the function and its , where hence, the function , which we call the tail function " , and this function is unknown, let note that since for then equation ( [ 2.4 ] ) and the first condition ( [ 2.6 ] ) imply that for hence , ( [ 2.60 ] ) and ( [ 2.100 ] ) imply that for it follows from ( [ 2.4 ] ) , ( [ 2.60 ] ) , ( [ 2.100])([2.160 ] ) , ( [ 2.15 ] ) and ( [ 2.150 ] ) that ( [ 2.15 ] ) , ( [ 2.150 ] ) , ( [ 3.0 ] ) and ( [ 3.3 ] ) , we obtain ( [ 3.5 ] ) with respect to and use ( [ 3.0])-([3.4 ] ) .we obtain , k\in % \left [ \underline{k},\overline{k}\right ] .\label{3.70}\ ] ] we have obtained an integro - differential equation ( [ 3.6 ] ) for the function with the overdetermined boundary conditions ( [ 3.7 ] ) . the tail function is also unknown .first , we will approximate the tail function .next , we will solve the problem ( [ 3.6 ] ) , ( [ 3.7 ] ) for the function . to solve this problem, we will construct the above mentioned weighted cost functional with the cwf in it , see ( [ 3.00 ] ) .this construction , combined with corresponding analytical results , is the _ central _ part of our paper .thus , even though the problem ( [ 3.6])-([3.70 ] ) is the same as the problem ( 65 ) , ( 66 ) in , the numerical method of the solution of the problem ( [ 3.6])-([3.70 ] ) is _ radically _ different from the one in . 
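To convey how a Carleman-type weight redistributes the influence of different points in such a least-squares functional, the snippet below evaluates exponential weights of the form exp(-2*lambda*x) on a discretisation of [0, 1] and reports how much of the total weight is carried by the region near x = 0, where the data are prescribed. The exponential form and the values of lambda are illustrative assumptions; the precise weight used in the paper is not reproduced here.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]

for lam in (1.0, 3.0, 10.0, 30.0):
    w = np.exp(-2.0 * lam * x)              # Carleman-type weight, assumed form
    near = np.sum(w[x <= 0.2]) * dx         # weight carried by the region near x = 0
    total = np.sum(w) * dx
    print(f"lambda = {lam:5.1f}: fraction of weight in [0, 0.2] = {near / total:.3f}")
```

The output shows the weight concentrating near the measurement point as lambda grows, which is the numerical reason, discussed later in the paper, for choosing a moderate value of lambda in computations even though the convexity theory allows it to be arbitrarily large.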
now , suppose that we have obtained approximations for both functions and .then we obtain the unknown coefficient via backwards calculations .first , we calculate the approximation for the function via ( 3.1 ) and ( [ 3.2 ] ) .next , we calculate the function via ( [ 3.5 ] ) .we have learned from our numerical experience that the best value of to use in ( [ 3.5 ] ) for the latter calculation is the approximation for the tail function is done here the same way as the approximation for the so - called first tail function " in section 4.2 of .however , while tail functions are updated in , we are not doing such updates here .it follows from ( [ 2.100])-([2.110 ] ) and ( [ 3.0])-([3.2 ] ) that there exists a function ] and } \leq c\left\vert f\right\vert _ { h^{2}\left ( 0,1\right ) } , \forall f\in h^{2}\left ( 0,1\right ) , \label{3.130}\ ] ] where is a generic constant theorem 3.1 is a reformulation of theorem 4.2 of . * theorem 3.1 . *_ let the function _ _ _ satisfying conditions ( [ 2.1])-([2.2 ] ) be the exact solution of our imsp with the noiseless data _ _ ] _ _ _ _ _ _ _ _ is a sufficiently small number , which characterizes the level of the error in the boundary data .let in ( [ 3.12 ] ) _ _ _ _ let the function __ _ _ be the minimizer of the functional ( [ 3.12 ] ) on the set of functions _ _ _ _ defined in ( [ 3.13 ] ) .then there exists a constant _ _ only on _ _ _ _ and _ _ _ _ such that _ _ } \leq c\left\vert v_{\alpha \left ( \delta \right ) } \left ( x\right ) -v^{\ast } \left ( x,% \overline{k}\right ) \right\vert _ { h^{2}\left ( 0,1\right ) } \leq c_{1}\delta .\label{3.15}\ ] ] * remark 3.1*. we have also tried to consider two terms in the asymptotic expansion for in ( [ 3.8 ] ) : the second one with this resulted in a nonlinear system of two equations .we have solved it by via minimizing an analog of the functional of section 3.3 .however , the quality of resulting images deteriorated as compared with the above function in addition , we have tried to iterate with respect to the tail function .however , the quality of resulting images has also deteriorated .consider the function satisfying ( [ 3.6])-(3.70 ) . in sections 5.2 and 5.3 we use lemma 2.1 and theorem 2.1 of bakklkosh . to apply theorems, we need to have zero boundary conditions at hence , we introduce the function , replace in ( [ 3.6 ] ) with then ( [ 3.6 ] ) , ( [ 3.7 ] ) and ( [ 3.16 ] ) and ( [ 3.170 ] ) imply that introduce the hilbert space of pairs of real valued functions as ^{1/2}<\infty% \end{array}% \right\ } .\label{3.19}\]]here and below based on ( [ 3.17 ] ) and ( [ 3.18 ] ) , we define our weighted cost functional as be an arbitrary number .let be the closure in the norm of the space of the open set of functions defined as * minimization problem*. _ minimize the functional _ _ _ on the set _ _ * remark 3.1*. the analytical part of this paper below is dedicated to this minimization problem . since we deal with complex valued functions , we consider below as the functional with respect to the 2-d vector of real valued functions thus , even though we the consider complex conjugations below , this is done only for the convenience of writing . below ] _ depending only on listed parameters and a generic constant _ _ _ , such that for all _ _ functional _ _ _ _ is strictly convex on _ _ _ _i.e. 
for all _ _ _ _ _ _ * proof .* everywhere below in this paper } , r\right ) > 0 ] so large that then , using ( [ 3.35 ] ) and ( [ 3.36 ] ) , we obtain with a new generic constant for all theorem 4.1 , we establish in this section the global convergence of the gradient projection method of the minimization of the functional as to some other versions of the gradient method , they will be discussed in follow up publications .first , we need to prove the lipschitz continuity of the functional with respect to .* theorem 5.1*. _let conditions of theorem 3.1 hold .then the functional _ _ _ is lipschitz continuous on the closed ball _ _ _ _ in other words,__ * proof*. consider , for example the first line of ( [ 3.27 ] ) for and denote it we define similarly .both these expressions are linear with respect to denote we have \label{5.2}\]] h^{\prime } .\]]it is clear from ( [ 3.17 ] ) that hence , using ( [ 3.35 ] ) , ( [ 5.2 ] ) and cauchy - schwarz inequality , we obtain rest of the proof of ( [ 5.1 ] ) is similar . theorem 5.2 claims the existence and uniqueness of the minimizer of the functional on the set * theorem 5.2*. _ let conditions of theorem 4.1 hold . then for every __ there exists unique minimizer _ _ _ _ of the functional _ _ _ _ on the set __ _ _ furthermore,__ \geq 0,\forall y\in \overline{b\left ( r\right ) } . \label{5.3}\ ] ] * proof*. this theorem follows immediately from the above theorem 4.1 and lemma 2.1 of . let be the operator of the projection of the space on the closed ball let and let be an arbitrary point of .consider the sequence of the gradient projection method, * theorem 5.3 . *_ let conditions of theorem 4.1 hold . then for every _ __ there exists a sufficiently small number __ } , \left\vert p_{1}\right\vert _ { c\left [ \underline{k},\overline{k}\right ] } , r,\lambda \right ) \in \left ( 0,1\right ) ] seems to be the optimal one , and we indeed observed this in our computations .hence , we choose for our study and .we note that even though the above theory of the choice of the tail function works only for sufficiently large values of the notion sufficiently large " is relative , see , e.g. 
( [ 6.20 ] ) .besides , it is clear from section 7 that we actually work in the gigahertz range of frequencies , and this can be considered as the range of large frequencies in physics ., scaledwidth=40.0% ] next , having the values of , we calculate the function in ( [ 2.8 ] ) and introduce the random noise in this function and are random numbers , uniformly distributed on .the next important question is about the choice of an optimal parameter indeed , even though theorem 4.1 says that the functional is strictly convex on the closed ball for all in fact , the larger is , the less is the influence on of those points which are relatively far from the point where the data are given .hence , we need to choose such a value of which would provide us satisfactory images of inclusions , whose centers are as in ( [ 6.2 ] ) : ] as = \left [ \underline{k},\overline{k}\right ] .\label{7.1}\]]the considerations for the choice ( [ 7.1 ] ) were similar with ones for the case of simulated data in section 6.2 .we had experimental data for total five targets .the background was air in the case of targets placed in air with and it was sand with ] of wave numbers .the method is based on the construction of a weighted cost functional with the carleman weight function in it .the main new theoretical result of this paper is theorem 4.1 , which claims the strict convexity of this functional on any closed ball for any radius , as long as the parameter of this functional is chosen appropriately. global convergence of the gradient method of the minimization of this functional to the exact solution is proved .numerical testing of this method on both computationally simulated and experimental data shows good results .h. ammari , y. t. chow , and j. zou , _ phased and phaseless domain reconstructions in the inverse scattering problem via scattering coefficients _ , siam journal on applied mathematics , 76 ( 2016 ) , pp . 10001030 .a. b. bakushinskii , m. v. klibanov , and n. a. koshev , _ carleman weight functions for a globally convergent numerical method for ill - posed cauchy problems for some quasilinear pdes _ ,nonlinear analysis : real world applications , 34 ( 2017 ) , pp .201224 .m. v. klibanov , n. a. koshev , j. li , and a. g. yagola , _ numerical solution of an ill - posed cauchy problem for a quasilinear parabolic equation using a carleman weight function _ ,journal of inverse and ill - posed problems , 24 ( 2016 ) , pp .761776 .m. v. klibanov , d .-nguyen , l. h. nguyen , and h. liu , _ a globally convergent numerical method for a 3d coefficient inverse problem with a single measurement of multi - frequency data _ , ( 2016 ) , https://arxiv.org/abs/1612.04014 .m. v. klibanov , l. h. nguyen , a. sullivan , and l. nguyen , _ a globally convergent numerical method for a 1-d inverse medium problem with experimental data _ , inverse problems and imaging , 10 ( 2016 ) , pp .10571085 .a. v. kuzhuget , l. beilina , m. v. klibanov , a. sullivan , l. nguyen , and m. a. fiddy , _ blind backscattering experimental data collected in the field and an approximately globally convergent inverse algorithm _ , inverse problems , 28 ( 2012 ) , p. 095007 .a. v. kuzhuget , l. beilina , m. v. klibanov , a. sullivan , l. nguyen , and m. a. fiddy , _ quantitative image recovery from measured blind backscattered data using a globally convergent inverse method _ , ieee transactions on geoscience and remote sensing , 51 ( 2013 ) , pp .29372948 .nguyen , m. v. klibanov , l. h. nguyen , a. e. kolesov , m. a. fiddy , and h. 
liu , _ numerical solution of a coefficient inverse problem with multi - frequency experimental raw data by a globally convergent algorithm _ , ( 2016 ) , https://arxiv.org/abs/1609.03102 .l. nguyen , d. wong , m. ressler , f. koenig , b. stanton , g. smith , j. sichina , and k. kappra , _ obstacle avoidance and concealed target detection using the army research lab ultra - wideband synchronous impulse reconstruction ( uwb sire ) forward imaging radar _ , 2007 , p. 65530h .m. sini and n. t. thnh , _ regularized recursive newton - type methods for inverse scattering problems using multifrequency measurements _ , esaim : mathematical modelling and numerical analysis , 49 ( 2015 ) , pp. 459480 .n. t. thnh , l. beilina , m. v. klibanov , and m. a. fiddy , _ imaging of buried objects from experimental backscattering time - dependent measurements using a globally convergent inverse algorithm _ , siam journal on imaging sciences , 8 ( 2015 ) , pp .757786 . _ dielectric constant table _ , https://www.honeywellprocess.com/library/marketing/tech-specs/dielectric constant table.pdf[https://www.honeywellprocess.com/library/marketing/tech-specs/dielectric constant table.pdf ]
a new numerical method is proposed for a 1-d inverse medium scattering problem with multi - frequency data . this method is based on the construction of a weighted cost functional . the weight is a carleman weight function ( cwf ) . in other words , this is the function which is present in the carleman estimate for the underlying differential operator . the presence of the cwf makes this functional strictly convex on any a priori chosen ball with the center at in an appropriate hilbert space . convergence of the gradient minimization method to the exact solution starting from any point of that ball is proven . computational results for both computationally simulated and experimental data show a good accuracy of this method . * key words * : global convergence , coefficient inverse problem , multi - frequency data , carleman weight function . * 2010 mathematics subject classification : * 35r30 .
in a database containing a solution of the 3d incompressible navier - stokes ( ns ) equations is presented .the equations were solved numerically with a standard pseudo - spectral simulation in a periodic domain , using a real space grid of grid points .a large - scale body force drives a turbulent flow with a taylor microscale based reynolds number . out of this solution , snapshots were stored , spread out evenly over a large eddy turnover time .more on the simulation and on accessing the data can be found at http://turbulence.pha.jhu.edu . in practical terms, we have easy access to the turbulent velocity field and pressure at every point in space and time .one usual way of visualising a turbulent velocity field is to plot vorticity isosurfaces see for instance the plots from .the resulting pictures are usually very `` crowded '' , in the sense that there are many intertwined thin vortex tubes , generating an extremely complex structure .in fact , the picture of the entire dataset from looks extremely noisy and it is arguably not very informative about the turbulent dynamics . in this work ,we follow a different approach .first of all , we use the alternate quantity first introduced in .secondly , the tool being used has the option of displaying data only inside clearly defined domains of 3d space .we can exploit this facility to investigate the multiscale character of the turbulent cascade . because vorticity is dominated by the smallest available scales in the velocity, we can visualize vorticity at scale by the curl of the velocity box - filtered at scale .we follow a simple procedure : * we filter the velocity field , using a box filter of size , and we generate semitransparent surfaces delimitating the domains where ; * we filter the velocity field , using a box filter of size , and we generate surfaces delimitating the domains where , but only if these domains are contained in one of the domains from ; and this procedure can be used iteratively with several scales ( we use at most 3 scales , since the images become too complex for more levels ) .additionally , we wish sometimes to keep track of the relative orientation of the vorticity vectors at the different scales . for this purposewe employ a special coloring scheme for the isosurfaces : for each point of the surface , we compute the cosine of the angle between the filtered vorticity and the filtered vorticity : the surface is green for , yellow for and red for , following a continuous gradient between these three for intermediate values .the opening montage of vortex tubes is very similar to the traditional visualisation : a writhing mess of vortices . upon coarse - graining, additional structure is revealed .the large - scale vorticity , which appears as transparent gray , is also arranged in tubes . as a next step ,we remove all the fine - scale vorticity outside the large - scale tubes . the color scheme for the small - scale vorticityis that described earlier , with green representing alignment with the large - scale vorticity and red representing anti - alignment .clearly , most of the small - scale vorticity is aligned with the vorticity of the large - scale tube that contains it .we then remove the fine - grained vorticity and pan out to see that the coarse - grained vortex tubes are also intricately tangled and intertwined . introducing a yet larger scale , we repeat the previous operations. 
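A minimal numerical sketch of the filtering and colouring procedure described above is given below: the velocity field is box-filtered at two widths with periodic boundaries, the vorticity at each scale is obtained as the curl of the filtered field, and the cosine of the angle between the two vorticity vectors provides the alignment value used for the colour map. The grid size, filter widths and threshold are illustrative, and a random field stands in for an actual cutout from the database.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)

def curl(u, dx):
    """Curl of a 3-component field on a uniform grid (central finite differences)."""
    ux, uy, uz = u
    duxdy, duxdz = np.gradient(ux, dx, axis=(1, 2))
    duydx, duydz = np.gradient(uy, dx, axis=(0, 2))
    duzdx, duzdy = np.gradient(uz, dx, axis=(0, 1))
    return np.stack([duzdy - duydz, duxdz - duzdx, duydx - duxdy])

def box_filter(u, width):
    # periodic box filter applied separately to each velocity component
    return np.stack([uniform_filter(c, size=width, mode="wrap") for c in u])

# stand-in random field; in practice u would be a cutout downloaded from the database
n = 64
dx = 2.0 * np.pi / n
u = rng.normal(size=(3, n, n, n))

omega_small = curl(box_filter(u, 3), dx)    # vorticity at the finer scale
omega_large = curl(box_filter(u, 15), dx)   # vorticity at the coarser scale

dot = np.sum(omega_small * omega_large, axis=0)
norms = np.linalg.norm(omega_small, axis=0) * np.linalg.norm(omega_large, axis=0) + 1e-12
cos_theta = dot / norms                     # alignment value used for the colour map

# restrict attention to the interior of the strong coarse-scale structures
mag_large = np.linalg.norm(omega_large, axis=0)
inside = mag_large > 1.5 * mag_large.mean()
print("mean alignment inside strong large-scale vorticity:", cos_theta[inside].mean())
```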
the relative orientation properties of the vorticity at these two scales is similar to that observed earlier .next we visualize the vortex structures at all three scales simultaneously , one inside the other .it is clear that the small vortex tubes are transported by the larger tubes that contain them .however , this is not just a passive advection .the small - scale vortices are as well being distorted by the large - scale motions . to focus on this more clearly, we now render just the two smallest scales .one can observe the small - scale vortex tubes being both stretched and twisted by the large - scale motions .the stretching of small vortex tubes by large ones was suggested by orszag and borue as being the basic mechanism of the turbulent energy cascade .as the small - scale tubes are stretched out , they are `` spun up '' and gain kinetic energy . here , this phenomenon is clearly revealed .the twisting of small - scale vortices by large - scale screw motions has likewise been associated to helicity cascade .the video thus allows us to view the turbulent cascade in progress .next we consider the corresponding view with three levels of vorticity simultaneously .since the ratio of scales is here 1:15:49 we are observing less than two decades of the turbulent cascade .one must imagine the complexity of a very extended inertial range with many scales of motion .not all of the turbulent dynamics is tube within tube . in our last scenewe visualize in the right half domain all the small - scale vortices , and in the left domain only the small - scale vortices inside the larger scale ones . in the right half ,the viewer can observe stretching of the small - scale vortex structures taking place externally to the large - scale tubes .the spin - up of these vortices must contribute likewise to the turbulent energy cascade .6ifxundefined [ 1 ] ifx#1 ifnum [ 1 ] # 1firstoftwosecondoftwo ifx [ 1 ] # 1firstoftwo secondoftwo `` `` # 1'''' [ 0]secondoftwosanitize [ 0 ] + 12$12 & 12#1212_12%12[1][0] link:\doibase 10.1080/14685240802376389 [ * * ( ) , 10.1080/14685240802376389 ] * * ( ) , in http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1592886[__ ] ( ) p. _ _ , ( ) link:\doibase 10.1017/s0022112097008306 [ * * , ( ) ] http://journals.cambridge.org / production / action / cjogetfulltext?fulltextid=4% 00523 [ * * , ( ) ]
the jhu turbulence database can be used with a state of the art visualisation tool to generate high quality fluid dynamics videos ( anc / dfdsubmissionquarterres.mpg ) . in this work we investigate the classical idea that smaller structures in turbulent flows , while engaged in their own internal dynamics , are advected by the larger structures . they are not advected undistorted , however . we see instead that the small scale structures are sheared and twisted by the larger scales . this illuminates the basic mechanisms of the turbulent cascade .
the open connectome project ( located at http://openconnecto.me ) aims to annotate all the features in a 3d volume of neural em data , connect these features , and compute a high resolution wiring diagram of the brain , known as a connectome . it is hoped that such work will help elucidate the structure and function of the human brain . the aim of this work is to automatically annotate axoplasmic reticula , since it is extremely time consuming to hand - annotate them . specifically , the objective is to achieve an operating point with high precision , to enable robust contextual inference . there has been very little previous work towards this end . axoplasmic reticula are present only in axons , indicating the identity of the surrounding process and informing automatic segmentation . the brain data we are working with was color corrected using gradient - domain image - stitching techniques to adjust contrast through the slices . we use this data as the testbed for running our filters and annotating axoplasmic reticula . the bilateral filter is a non - linear filter consisting of one 2d gaussian kernel , which decays with spatial distance , and one 1d gaussian kernel , which decays with pixel intensity difference : \[ \mathrm{bf}[i]_p = \frac{1}{w_p}\sum_{q\in s}g_{\sigma_{s}}(\|p - q\|)\,g_{\sigma_{r}}(i_p - i_q)\,i_q , \] where \( w_p = \sum_{q\in s}g_{\sigma_{s}}(\|p - q\|)\,g_{\sigma_{r}}(i_p - i_q) \) is the normalization factor . this filter smooths the data by averaging over neighboring pixels while preserving edges , and consequently important detail , by not averaging over pixels with large intensity difference . applying this filter accentuates features like axoplasmic reticula in our data . even with a narrow gaussian in the intensity domain , the bilateral filter causes some color bleeding across edges . we try to undo this effect through laplacian sharpening . the laplacian filter computes the difference between the intensity at a pixel and the average intensity of its neighbors . therefore , adding a laplacian filtered image to the original image results in an increase in intensity where the average intensity of the surrounding pixels is less than that of the center pixel , an intensity drop where the average is greater , and no change in areas of constant intensity . hence , we use the 3x3 laplacian filter to highlight edges around dark features such as axoplasmic reticula . we use a morphological region growing algorithm on our filtered data to locate and annotate axoplasmic reticula . we implement this by iterating over the filtered image and looking for dark pixels , where a dark pixel is defined as a pixel with value less than a certain specified threshold . when a dark pixel is found , we check its 8-neighborhood to determine if the surrounding pixels are also below the threshold . then , we check the pixels surrounding these , and we do this until we find only high intensity pixels , or until we grow larger than the diameter of an axoplasmic reticulum . the thresholds we use in our algorithm are biologically motivated and tuned empirically . finally , we track our annotations through the volume to verify their correctness and identify axoplasmic reticula that were missed initially .
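before turning to the tracking step, the sketch below illustrates the two filtering stages just described: a brute-force bilateral filter and the 3x3 laplacian sharpening. it is a simplified, unoptimized rendition for a single 2d slice; the window radius and sigma values are placeholders, not the thresholds used in our pipeline.

```python
import numpy as np
from scipy.ndimage import convolve

def bilateral_filter(img, sigma_s=3.0, sigma_r=25.0, radius=5):
    """brute-force bilateral filter of a 2d slice (see the equation above)."""
    img = img.astype(float)
    out = np.zeros_like(img)
    pad = np.pad(img, radius, mode='reflect')
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g_s = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma_s ** 2))   # spatial kernel
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            g_r = np.exp(-(win - img[i, j]) ** 2 / (2.0 * sigma_r ** 2))  # range kernel
            w = g_s * g_r
            out[i, j] = np.sum(w * win) / np.sum(w)
    return out

def laplacian_sharpen(img):
    """add a 3x3 laplacian response: a pixel darker than its neighbourhood
    becomes darker still, accentuating small dark features and their edges."""
    kernel = np.array([[0, -1, 0],
                       [-1, 4, -1],
                       [0, -1, 0]], dtype=float)
    return img.astype(float) + convolve(img.astype(float), kernel, mode='reflect')
```

the morphological region growing then operates on the sharpened slice, seeding from pixels below the dark threshold.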
for each slice , we traverse the annotations and check if an axoplasmic reticulum is present in the corresponding xy - location ( with some tolerance ) in either of the adjacent slices .if a previously annotated axoplasmic reticulum object is present , we confirm the existing annotation .otherwise , the adjacent slice locations are checked for axoplasmic reticula with a less restrictive growing algorithm , and new annotations are added in the corresponding slice . if no axoplasmic reticulum object is found in either of the adjacent slices , then we assume the annotation in the current slice to be incorrect , and delete it .we qualitatively evaluated our algorithm on 20 slices from the kasthuri11 dataset , and quantitatively compared our results against ground truth from a neurobiologist .our algorithm annotates axoplasmic reticulum objects with 87% precision , and 52% recall .these numbers are approximate since there is inherent ambiguity even among expert annotators .our current algorithm is designed to detect transverally sliced axoplasmic reticula . in future work , we plan to extend our morphological region growing algorithm to also find dilated axoplasmic reticula , and to incorporate a more robust tracking method such as kalman or particle filtering .additionally , our algorithm can be adapted to annotate other features in neural em data , such as mitochondria , by modifying the morphological region growing algorithm .
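a stripped-down version of this cross-slice tracking step can be written as follows. it assumes annotations are stored per slice as lists of (x, y) centroids and uses a simple distance tolerance; in the actual pipeline, the fallback is a less restrictive region-growing pass rather than immediate deletion, which is only indicated by a comment here.

```python
def track_annotations(annotations, tol=3):
    """confirm or discard per-slice annotations by checking adjacent slices.

    annotations: dict mapping slice index z -> list of (x, y) centroids.
    returns a dict of the same shape containing only confirmed annotations.
    """
    def has_match(points, x, y):
        return any(abs(px - x) <= tol and abs(py - y) <= tol for (px, py) in points)

    confirmed = {}
    for z, points in annotations.items():
        kept = []
        for (x, y) in points:
            neighbours = annotations.get(z - 1, []) + annotations.get(z + 1, [])
            if has_match(neighbours, x, y):
                kept.append((x, y))
            # else: re-run a less restrictive region-growing pass at (x, y)
            # in the adjacent slices before deciding to delete the annotation
        confirmed[z] = kept
    return confirmed
```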
* _ abstract _ : * in this paper , we present a new pipeline which automatically identifies and annotates axoplasmic reticula , which are small subcellular structures present only in axons . we run our algorithm on the kasthuri11 dataset , which was color corrected using gradient - domain techniques to adjust contrast . we use a bilateral filter to smooth out the noise in this data while preserving edges , which highlights axoplasmic reticula . these axoplasmic reticula are then annotated using a morphological region growing algorithm . additionally , we perform laplacian sharpening on the bilaterally filtered data to enhance edges , and repeat the morphological region growing algorithm to annotate more axoplasmic reticula . we track our annotations through the slices to improve precision , and to create long objects to aid in segment merging . this method annotates axoplasmic reticula with high precision . our algorithm can easily be adapted to annotate axoplasmic reticula in different sets of brain data by changing a few thresholds . the contribution of this work is the introduction of a straightforward and robust pipeline which annotates axoplasmic reticula with high precision , contributing towards advancements in automatic feature annotations in neural em data . + 2
over the past 70 years , there have been multiple attempts to dynamically model the movement of polymer chains with brownian dynamics , which have more recently been used as a model for dna filament dynamics .one of the first and simplest descriptions was given as the rouse model , which is a bead - spring model , where the continuous filament is modelled at a mesoscopic scale with beads connected by springs .the only forces exerted on beads are spring forces from adjacent springs , as well as gaussian noise .hydrodynamic forces between beads and excluded volume effects are neglected in the model in favour of simplicity and computational speed , but the model manages to agree with several properties of polymer chains from experiments .other models exist , for example the zimm model introduces hydrodynamic forces between beads , or bending potentials can be introduced to form a wormlike chain and give a notion of persistence length , see , for example , review article or books on this subject .most of the aforementioned models consider the filament on only a single scale . in some applications ,a modeller is interested in a relatively small region of a complex system .then it is often possible to use a hybrid model which is more accurate in the region of interest , and couple this with a model which is more computationally efficient in the rest of the simulated domain .an application area for hybrid models of polymer chains is binding of a protein to the dna filament , which we study in this paper . the model which we have created uses rouse dynamics for a chain of dna , along with a freely diffusing particle to represent a binding protein .as the protein approaches the dna , we increase the resolution in the nearby dna filament to increase accuracy of our simulations , whilst keeping them computationally efficient .in this paper we use the rouse model for analysis due to its mathematical tractability and small computational load .such a model is applicable to modelling dna dynamics when we consider relatively low resolutions , when hydrodynamic forces are negligible and persistence length is significantly shorter than the kuhn length between each bead .the situation becomes more complicated when we consider dna modelling at higher spatial resolutions .inside the cell nucleus , genetic information is stored within strands of long and thin dna fibres , which are separated into chromosomes .these dna fibres are folded into structures related to their function .different genes can be enhanced or inhibited depending upon this structure .folding also minimises space taken up in the cell by dna , and can be unfolded when required by the cell for different stages in the cell cycle or to alter gene expression .the folding of dna occurs on multiple scales . on a microscopic scale ,dna is wrapped around histone proteins to form the nucleosome structure .this in turn gets folded into a chromatin fibre which gets packaged into progressively higher order structures until we reach the level of the entire chromosome .the finer points of how the nucleosome packing occurs on the chromatin fibre and how these are then packaged into higher - order structures is still a subject of much debate , with long - held views regarding mesoscopic helical fibres becoming less fashionable in favour of more irregular structures in vivo . 
in the most compact form of chromatin ,many areas of dna are not reachable for vital reactions such as transcription .one potential explanation to how this is overcome by the cell is to position target dna segments at the surface of condensed domains when it is needed , so that transcription factors can find expressed genes without having to fit into these tightly - packed structures .this complexity is not captured by the multiscale model of protein binding presented in this paper . however ,if one uses the developed refinement of the rouse model together with a more detailed modelling approach in a small region of dna next to the binding protein , then such a hybrid model can be used to study the effects of microscopic details on processes over system - level spatial and temporal scales .when taking this multiscale approach , it is necessary to understand the error from including the less accurate model in the hybrid model and how the accuracy of the method depends on its parameters .these are the main questions studied in this paper .the rest of the paper is organized as follows . in section [ secmrbs ] , we introduce a multi - resolution bead - spring model which generalizes the rouse model . we also introduce a discretized version of this model which enables the use of different timesteps in different spatial regions . in section [ section3 ], we analyze the main properties of the multi - resolution bead - spring model .we prove two main lemmas giving formulas for the diffusion constant and the end - to - end distance .we also study the appropriate choice of timesteps for numerical simulations of the model and support our analysis by the results of illustrative computer simulations .our main application area is studied in section [ section4 ] where we present and analyze a dna binding model .we develop a method to increase the resolution in existing segments on - the - fly using the metropolis - hastings algorithm . in section [ secdiscussion ] ,we conclude our paper by discussing possible extensions of the presented multiscale approach ( by including more detailed models of dna dynamics ) and other multiscale methods developed in the literature .we generalize the classical rouse bead - spring polymer model to include beads of variable sizes and springs with variable spring constants . in definition[ defmrbs ] , we formulate the evolution equation for this model as a system of stochastic differential equations ( sdes ). 
we will also introduce a discretized version of this model in algorithm [ algoneiter ] , which will be useful in sections [ section3 ] and [ section4 ] where we use the multi - resolution bead - spring model to develop and analyze multiscale models for dna dynamics .[ defmrbs ] let be a positive integer .a multi - resolution bead - spring polymer of size consists of a chain of beads of radius , for , connected by springs which are characterized by their spring constants , for .the positions ] is a wiener process , is absolute temperature , is boltzmann s constant and we assume that each spring constant can be equivalently expressed in terms of the corresponding kuhn length by we assume that the behaviour of boundary beads ( for and ) is also given by equation simplified by postulating and _ [ figmrbeadspring ] ] in figure [ figmrbeadspring ] , we schematically illustrate a multi - resolution bead - spring polymer for .the region between the -th and the -th bead is described with the highest resolution by considering smaller beads and springs with larger spring constants ( or equivalently with smaller kuhn lengths ) .the scalings of different parameters in definition [ defmrbs ] are chosen so that we recover the classical rouse model if we assume and . then equation ( [ sdedef ] ) simplifies to where , and we again define and in equations for boundary beads . in the polymer physics literature , the rouse model ( [ sderouse ] ) is equivalently written as where random thermal noises exerted on the beads from brownian motion are characterized by the moments where and . for the remainder of this paper, we will use the sde notation as given in ( [ sderouse ] ) , because we will often study numerical schemes for simulating polymer dynamics models .the simplest discretization of ( [ sderouse ] ) is given by the euler - maruyama method , which uses the finite timestep and calculates the position vector of the -th bead , , at discretised time by for , where is normally distributed random variable with zero mean and unit variance ( i.e. ) for . in order to discretize the multi - resolution bead - spring model ,we allow for variable timesteps .[ defvartimestep ] let and let , be positive integers such that or for .let us assume that at least one of the values of is equal to 1 .we define for and we call a timestep associated with the -th spring .definition [ defvartimestep ] specifies that all timesteps must be integer multiples of the smallest timestep .the timesteps associated with two adjacent springs are also multiples of each other .the time evolution of the multi - resolution bead - spring model is computed at integer multiples of .one iteration of the algorithm is shown in algorithm [ algoneiter ] .the position of the -th bead is updated at integer multiples of by calculating the random displacement due to brownian motion , with displacement caused by springs attached to the bead also updated at integer multiples of the timesteps associated with each spring , i.e. or considering the situation that all beads , springs and timesteps are the same , then one can easily deduce the following result .update positions of internal beads which are connected to two springs : + update of the first bead : + update of the last bead : + [ lemconsnum ] let , , and be positive constants and be an integer .consider a multi - resolution bead - spring polymer of size with , , for , and , for .let the timesteps associated with each spring be equal to , i.e. 
in definition then algorithm is equivalent to the euler - maruyama discretization of the rouse model given as equation .lemma [ lemconsnum ] shows that the multi - resolution bead - spring model is a generalization of the rouse model . in the next section, we will study properties of this model which will help us to select the appropriate parameter values for this model and use it in multiscale simulations of dna dynamics .we have formulated a multiscale rouse model which varies the kuhn lengths throughout the filament , but we would like to keep properties of the overall filament constant regardless of the resolution regime being considered for the filament .we consider a global statistic for the system to be _ consistent _ if the expected value of the statistic is invariant to the resolution regime being considered for the filament .we consider the _ self diffusion constant _ and _ root mean squared ( rms ) end - to - end distance _ as two statistics we wish to be consistent in our system , which can be ensured by varying the bead radius and the number of beads respectively .the precise way to vary these properties will be explored in this section .the _ self diffusion constant _ is defined as where is the _ centre of mass _ of the polymer chain at time , which is defined by definition ( [ eq : com ] ) is an extension to the definition given by doi and edwards for the centre of mass of a continuous chain on only one scale . if all beads have the same radius ( i.e. if for ) , then equation ( [ eq : com ] ) simplifies to the centre of mass definition for the classical rouse model . in this case , the self diffusion constant is given by where is the number of beads .this result explains the , on the face of it , counterintuitive scaling of equation ( [ eq : com ] ) with .if we suppose that each bead had the same density , then the mass of each bead would be proportional to its volume , i.e. to . however , in definition ( [ eq : com ] ) , we have used weights instead of because beads do not represent physical bead objects like nucleosomes , but representations of the filament around it , so the bead radius scales with the amount of surrounding filament , which is linear in bead radius in this formulation .if we consider dna applications , we could imagine each bead as a tracker for individual base pairs at intervals of , say , thousands of base pairs away from each other along the dna filament .the filament in the model is then drawn between adjacent beads .this linear scaling with can also be confirmed using equation ( [ eq : rousediff ] ) for the classical rouse model .if we describe the same polymer using a more detailed model consisting of twice as many beads ( i.e. if we change to ) , then we have to halve the bead radius ( i.e. change to ) to get a polymer model with the same diffusion constant ( [ eq : rousediff ] ) .in particular , the mass of a bead scales with ( and not with ) . in the next lemma, we extend result ( [ eq : rousediff ] ) to a general multi - resolution bead - spring model .[ lemdg ] let us consider a multi - resolution bead - spring polymer of size and a set of timesteps associated with each spring satisfying the assumptions of definitions and .then the self diffusion constant of the polymer evolution described by algorithm is given by algorithm describes one iteration of our numerical scheme . multiplying the steps corresponding to the -th bead by and summing over all beads ,we obtain how changes during one timestep . 
since ,tension terms cancel after summation and the evolution rule for simplifies to where ] , where we used the fact that the sum of normally distributed random variables is again normally distributed . dividing equation ( [ rgequation2 ] ) by , we obtain .\ ] ] using definition ( [ dgdef ] ) , we obtain ( [ eq : sdc ] ) . the formula ( [ eq : sdc ] ) is a generalization of equation ( [ eq : rousediff ] ) obtained for the rouse model .it is invariant to the resolutions provided that the mass of the filament remains constant through selection of the number of beads and bead radius , therefore the self diffusion constant is consistent .we define the _ end - to - end vector _ from one end of the filament to the other .an important statistic to consider related to this is the _ root mean squared ( rms ) end - to - end distance _ of the filament .the expected value of the long - time limit of the rms end - to - end distance , denoted , for the classical rouse model is given by we generalize this result in the following lemma .[ lemrms ] let us consider a multi - resolution bead - spring polymer of size satisfying the assumptions of definition .then and the long - time limit of the rms end - to - end distance is given by equations ( [ sdedef ] ) describe a system of linear sdes .however , the sdes corresponding to different spatial dimensions are not coupled .we therefore restrict our investigation to the behaviour of the first coordinates of each vector in ( [ rmsbond ] ) .let us arrange the differences of the first coordinates of subsequent beads into the -dimensional vector .\ ] ] then sdes ( [ sdedef ] ) can be rewritten to the system of sdes for in the matrix form where is a three - diagonal matrix given by is a two - diagonal matrix given by and is -dimensional noise vector ^t . ] of the protein from the middle bead of the filament .we estimate as a fraction of simulations which end up with the protein bound to dna .each data point in figure [ fig : transcriptionresults ] represents the value of estimated from independent realizations of the process . if , then the protein is immediately bound to dna , i.e. for .if , then the probability of binding is nonzero , because the initial placement , , is the distance of the protein from the centre of the filament . in particular , the minimum distance from protein to filament is less than or equal to the initial placement distance , , and the simulations ( with the possibility of binding ) take place even if ., depending on starting distance , , from the filament for the single - scale ( black points ) and omr ( blue line ) models .error bars give a 95% confidence interval based on the wilson score interval for binomial distributions ._ [ fig : transcriptionresults ] ] due to computational constraints of the single - scale model we consider a selection of initial distances at points m , ( black points ) , where error bars give a 95% confidence interval based on the wilson score interval for binomial distributions .we run simulations for more initial distances , m , ( blue line ) , using the computationally efficient omr model and present our results as the blue line in figure [ fig : transcriptionresults ] .we see that is very similar between the single - scale and omr models .the model also succeeds in reducing computational time . 
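for orientation, a single-scale version of the binding experiment can be sketched in a few lines: a rouse chain evolved with the euler-maruyama scheme and a freely diffusing protein that binds when it comes within a fixed radius of any bead. the parameter values below are placeholders rather than the ones in table [tab:transcriptionparmas], and the multi-resolution bookkeeping of algorithm [algoneiter] is deliberately omitted.

```python
import numpy as np

def binding_event(n_beads=16, b=1.0, diff=1.0, dt=1e-4, n_steps=100_000,
                  start_dist=1.0, bind_radius=0.1, seed=None):
    """return True if the protein binds to the rouse chain within the time limit."""
    rng = np.random.default_rng(seed)
    k_over_zeta = 3.0 * diff / b ** 2          # rouse spring constant over friction
    # random-walk initial configuration with kuhn length b
    beads = np.cumsum(rng.normal(0.0, b / np.sqrt(3.0), size=(n_beads, 3)), axis=0)
    protein = beads[n_beads // 2] + np.array([start_dist, 0.0, 0.0])
    amp = np.sqrt(2.0 * diff * dt)

    for _ in range(n_steps):
        left = np.vstack([beads[:1], beads[:-1]])      # free-end boundary beads
        right = np.vstack([beads[1:], beads[-1:]])
        beads = beads + k_over_zeta * (left - 2.0 * beads + right) * dt \
                      + amp * rng.normal(size=beads.shape)
        protein = protein + amp * rng.normal(size=3)
        if np.min(np.linalg.norm(beads - protein, axis=1)) < bind_radius:
            return True
    return False

# the binding probability is then estimated as the fraction of realizations that
# end bound, e.g. np.mean([binding_event(seed=s) for s in range(1000)])
```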
for simulations with the protein starting from the middle bead , with parameters given in table [ tab : transcriptionparmas ] ,the omr model represented a 3.2-times speedup compared to the detailed model , with only a 3-times resolution difference .we expect for larger resolution differences to see greater improvements in speed .in this paper we have extended basic filament modelling techniques to multiple scales by developing omr methods .we have presented an mcmc approach for increasing the resolution along a static filament segment , as well as an extension to the rouse model to dynamically model a filament which considers multiple scales .the bead radius , as well as the number of beads associated with each resolution , is altered to maintain consistency with the end - to - end distance and diffusion of a filament across multiple scales , as well as the timestep to ensure numerical convergence .we have then illustrated the omr methodology using a simple model of protein binding to a dna filament , in which the omr model gave similar results to the single - scale model .we have also observed a 3.2-times speed - up in computational time on a model which considers only a 3-times increase in resolution , which illustrates the use of the omr approach as a method to speed up simulations whilst maintaining the same degree of accuracy as the more computationally intensive single - scale model .the speed - up in computational time could be further increased by replacing brownian dynamics based on time - discretization ( [ eq : particlediffuse ] ) by event - based algorithms such as the fpkmc ( first passage kinetic monte carlo ) and gfrd ( green s function reaction dynamics ) methods .when considering the zooming out of the dna binding model , note that it is generally possible to zoom in and out repetitively , as long as the dynamics are such that we can generate a high resolution structure independent from the previous one ( i.e. , once we zoom out , the microscopic structure is completely forgotten ) . however , particularly in the case of chromatin , histone modification and some dna - binding proteins may act as long - term memory at a microscopic scale below the scales currently considered . to reflect the effect of the memory , some properties of the microscopic structure should be maintained even after zooming out .fractal dimension may serve as a candidate of indices , which can be also estimated in living cells by single - molecule tracking experiments .the omr method could be applied to modern simulations of dna and other biological polymers which use the rouse model in situations where certain regions of the polymer require higher resolutions than other regions .the model considered in this report uses rouse dynamics , which is moderately accurate given its simplicity , but as we zoom in further towards a binding site , then we will need to start to consider hydrodynamic forces and excluded volume effects acting between beads .models which include hydrodynamic interactions such as the zimm model have previously been used to look at filament dynamics .therefore it is of interest to have a hybrid model which uses the rouse model in low resolutions and the zimm model in high resolutions . 
the combination of different dynamical models might give interesting results regarding hierarchical structures forming as we move between resolutions .as we go into higher resolutions , strands of dna can be modelled as smooth , unlike the fjc model where angles between beads are unconstrained .the wormlike chain model of kratky and porod , implemented via algorithm by hagermann and zimm , gives a non - uniform probability distribution for the angles between each bead .allison then implements the zimm model dynamics on top of the static formulation to give bending as well as stretching forces .another interesting open multiscale problem is to implement this at higher resolutions , with the rouse model at lower resolutions , in order to design a hybrid model . to introduce even more realism, we would see individual histones and consider forces between these as in the model of rosa and everaers which includes lennard - jones and fene forces between beads .as we approach an atomistic level , it may be interesting to consider a molecular dynamics approach to modelling the dna filament .coarser brownian dynamics models can be estimated from molecular dynamics models either analytically or numerically , depending on the complexity of the molecular dynamics model .a variety of structure - based coarse - grained models have been used for chromatin ( e.g. ) , also with transcription factors .multiscale modelling techniques ( e.g. with iterative coarse - graining ) , as well as adaptive resolution models ( e.g. for solvent molecules ) , have been developed .we expect these studies will connect with polymer - like models at a certain appropriate length and time scale . on top of this , models for the target searching process by proteins such as transcription factors could be improved ( for example , by incorporating facilitated diffusion under crowded environment ) .the need for developing and analyzing multiscale models of dna which use one of the above detailed simulation approaches for small parts of the dna filament is further stimulated by recent experimental results .chromosome conformation capture ( 3c)-related techniques , particularly at a genome - wide level using high - throughput sequencing ( hi - c ) , provide the three - dimensional structure of the chromosomes in an averaged manner .moreover , recent imaging techniques have enabled us to observe simultaneously the motion and transcription of designated gene loci in living cells .simulated processes could be compared with such experimental results .recent hi - c experiments also revealed fine structures such as loops induced by dna - binding proteins . to develop more realistic models ,information about the binding sites for these proteins may be utilized when we increase the resolution in our scheme .s. shinkai , t. nozaki , k. maeshima , and y. togashi .dynamic nucleosome movement provides structural information of topological chromatin domains in living human cells .biorxiv doi:10.1101/059147 , 2016 .
a multi - resolution bead - spring model for polymer dynamics is developed as a generalization of the rouse model . a polymer chain is described using beads of variable sizes connected by springs with variable spring constants . a numerical scheme which can use different timesteps to advance the positions of different beads is presented and analyzed . the position of a particular bead is only updated at integer multiples of the timesteps associated with its connecting springs . this approach extends the rouse model to a multiscale model on both spatial and temporal scales , allowing simulations of localized regions of a polymer chain with high spatial and temporal resolution , while using a coarser modelling approach to describe the rest of the polymer chain . a method for changing the model resolution on - the - fly is developed using the metropolis - hastings algorithm . it is shown that this approach maintains key statistics of the end - to - end distance and diffusion of the polymer filament and makes computational savings when applied to a model for the binding of a protein to the dna filament . polymer dynamics , dna , rouse model , brownian dynamics , multiscale modelling 60h10 , 60j70 , 82c31 , 82d60 , 92b99
in the last years wireless communication systems coped with the problem of delivering reliable information while granting high throughput . this problem has often been faced resorting to channel codes able to correct errors even at low signal to noise ratios .as pointed out in table i in , several standards for wireless communications adopt binary or double binary turbo codes and exploit their excellent error correction capability .however , due to the high computational complexity required to decode turbo codes , optimized architectures ( e.g. , ) have been usually employed . moreover, several works addressed the parallelization of turbo decoder architectures to achieve higher throughput .in particular , many works concentrate on avoiding , or reducing , the collision phenomenon that arises with parallel architectures ( e.g. ) .although throughput and area have been the dominant metrics driving the optimization of turbo decoders , recently , the need for flexible systems able to support different operative modes , or even different standards , has changed the perspective . in particular ,the so called software defined radio ( sdr ) paradigm made flexibility a fundamental property of future receivers , which will be requested to support a wide range of heterogeneous standards .some recent works ( e.g. , , ) deal with the implementation of application - specific instruction - set processor ( asip ) architectures for turbo decoders . in order to obtain architectures that achieve both high throughput and flexibilitymulti - asip is an effective solution .thus , together with flexible and high throughput processing elements , a multi - asip architecture must feature also a flexible and high throughput interconnection backbone . to that purpose, the network - on - chip ( noc ) approach has been proposed to interconnect processing elements in turbo decoder architectures designed to support multiple standards , , , , , . in addition , noc based turbo decoder architectures have the intrinsic feature of adaptively reducing the communication bandwidth by the inhibition of unnecessary extrinsic information exchange .this can be obtained by exploiting bit - level reliability - based criteria where unnecessary iterations for reliable bits are avoided . in , , ring ,chordal ring and random graph topologies are investigated whereas in previous works are extended to mesh and toroidal topologies .furthermore , in butterfly and benes topologies are studied , and in binary de - bruijn topologies are considered . however , none of these works presents a unified framework to design a noc based turbo decoder , showing possible complexity / performance trade - offs .this work aims at filling this gap and provides two novel contributions in the area of flexible turbo decoders : i ) a comprehensive study of noc based turbo decoders , conducted by means of a dedicated noc simulator ; ii ) a list of obtained results , showing the complexity / performance trade - offs offered by different topologies , routing algorithms , node and asip architectures .the paper is structured as follows : in section [ sec : system_analysis ] the requirements and characteristics of a parallel turbo decoder architecture are analyzed , whereas in section [ sec : noc ] noc based approach is introduced .section [ sec : topologies ] summarizes the topologies considered in previous works and introduces generalized de - bruijn and generalized kautz topologies as promising solutions for noc based turbo decoder architectures . 
in section [ sec : ra ] three main routing algorithms are introduced , whereas in section [ sec : tnoc ] the turbo noc framework is described .section [ sec : routing_algo_arch ] describes the architecture of the different routing algorithms considered in this work , section [ sec : results ] presents the experimental results and section [ sec : concl ] draws some conclusions .a parallel turbo decoder can be modeled as processing elements that need to read from and write to memories . each processing element ,often referred to as soft - in - soft - out ( siso ) module , performs the bcjr algorithm , whereas the memories are used for exchanging the extrinsic information among the sisos .the decoding process is iterative and usually each siso performs sequentially the bcjr algorithm for the two constituent codes used at the encoder side ; for further details on the siso module the reader can refer to . as a consequence , each iteration is made of two half iterations referred to as interleaving and de - interleaving . during one half iteration the extrinsic information produced by siso at time ( ) is sent to the memory at the location , where and are functions of and derived from the permutation law ( or interleaver ) employed at the encoder side .thus , the time required to complete the decoding is directly related to the number of clock cycles necessary to complete a half iteration . without loss of generality , we can express the number of cycles required to complete a half iteration ( ) as where is the total number of trellis steps in a data frame , is the number of trellis steps processed by each siso , is the siso output rate , namely the number of trellis steps processed by a siso in a clock cycle , and is the interconnection structure latency .thus , the decoder throughput expressed as the number of decoded bits over the time required to complete the decoding process is where is the clock frequency , is the number of iterations , for binary codes and for double binary codes . when the interconnection structure latency is negligible with respect to the number of cycles required by the siso , we obtain thus , to achieve a target throughput and satisfactory error rate performance , a proper number of iterations should be used .the minimum ( ) to satisfy with iterations can be estimated from ( [ eq : tapprox ] ) for some asip architectures available in the literature .if we consider , as in , , ranges in [ 5 , 37 ] to achieve mb / s ( see table [ tab : pasip ] ) .it is worth pointing out that the values in table [ tab : pasip ] represent the average numbers of cycles required by the siso to update the soft information of one bit ( see table vi in and table i in ) .moreover , strongly depends on the internal architecture of the siso and in general tends to increase with the code complexity . 
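reading the relations above as n_hi = n/(p*r) + l_int cycles per half iteration and t = k*n*f_clk/(2*n_it*n_hi) for the throughput, the minimum parallelism degree needed to reach a target throughput can be estimated with a few lines of code. this closed-form reading and the example numbers are our own interpretation of the expressions sketched above, not values taken from the cited architectures.

```python
def min_parallelism(target_bps, n_trellis, rate, n_iter, latency, f_clk, k_bits=1):
    """smallest number of siso processors p reaching a target throughput.

    n_trellis : trellis steps per frame (n)    rate    : steps per siso per cycle (r)
    n_iter    : decoding iterations            latency : interconnect latency (cycles)
    f_clk     : clock frequency in hz          k_bits  : 1 binary, 2 double-binary
    assumes n_hi = n_trellis/(p*rate) + latency and
            throughput = k_bits*n_trellis*f_clk / (2*n_iter*n_hi).
    """
    for p in range(1, n_trellis + 1):
        n_hi = n_trellis / (p * rate) + latency
        throughput = k_bits * n_trellis * f_clk / (2.0 * n_iter * n_hi)
        if throughput >= target_bps:
            return p
    return None   # target not reachable even with one trellis step per siso

# example call with purely hypothetical numbers: double-binary code, 2400 trellis
# steps, r = 0.5, 8 iterations, 32-cycle latency, 200 mhz clock, 100 mb/s target:
# min_parallelism(100e6, 2400, 0.5, 8, 32, 200e6, k_bits=2)
```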
as a consequence ,several conditions can further increase , namely 1 ) interconnection structures with larger ; 2 ) higher values ; 3 ) higher ; 4 ) higher ; 5 ) lower clock frequency .thus , we consider as relevant for investigation a slightly wider range for : ..parallelism degree required to obtain mb / s for with some asip architectures available in the literature [ cols="^,^,^,^,^,^,^ " , ] the area and the percentage are not really zero , but they are negligible compared with the i m and lm contribution to the total area .the most important conclusions that can be derived from results in table [ tab : wimax_results ] and [ tab : mhoms_results ] are : 1 .the asp - ft routing algorithm is the best performing solution both in terms of throughput and area when .2 . the routing memory overhead of the asp - ft algorithm ( see fig .[ fig : node ] ( b ) ) becomes relevant as decreases and ssp solutions become the best solutions mainly for and .3 . in most cases topologies with =4 achieve higher throughput with lower complexity overhead than topologies with =2 when .4 . in most cases , generalized de - bruijn and generalized kautz topologies are the best performing topologies . as a significant example , in fig .[ fig : r1_asp - ft ] , we show the experimental results obtained with and asp - ft routing algorithm for the wimax interleaver with ( a ) and the circular shifting interleaver with ( b ) .each point represents the throughtput and the area obtained for a certain topology with a certain parallelism degree .results referred to the same value are bounded into the same box and a label is assigned to each point to highlight the corresponding topology , namely topologies are identified as r - ring , h - honeycomb , t - toroidal mesh , k - generalized kautz with the corresponding value ( k2 , k3 , k4 ) .as it can be observed , generalized kautz topologies with ( k4 ) are always the best solutions to achieve high throughput with minimum area overhead . in fig .[ fig : tar_tot ] significant results extracted from table [ tab : wimax_results ] and [ tab : mhoms_results ] are shown in graphical form .in particular , for the asp - ft routing algorithm is the best solution , whereas for ssp routing algorithms , implemented as in fig .[ fig : node ] ( c ) , tend to achieve the same performance as the asp - ft routing algorithm with lower complexity overhead ( see fig . [fig : tar_tot ] ( a ) and ( b ) for the wimax interleaver , and fig .[ fig : tar_tot ] ( c ) and ( d ) for the circular shifting interleaver , ) .an interesting phenomenon that arises increasing the interleaver size is the performance saturation that can be observed in the table [ tab : mhoms_results ] for topologies , namely the throughput tends to saturate and increasing has the effect of augmenting the area with a negligible increase or even with a decrease of throughput . 
as an example , the generalized kautz topology with and asp - ft routing algorithm achieves more than 180 mb / s with , , .however , the solution with the smallest area is the one obtained with .the throughput flattening of low topologies can be explained by observing that high values of tend to saturate the network .furthermore , high values of lengthen the input fifos as highlighted in table [ tab : percentage ] , where the total area of the network is given as the breakdown of the building blocks , namely the input fifos , the crossbars ( cb ) , the output registers , the routing algorithm / memory ( ra / m ) , the identifier memory ( i m ) and the location memory ( lm ) is given for some significant cases : the highest throughput ( light - gray ) , the highest area ( mid - gray ) , and lowest area ( dark - gray ) points for each value in table [ tab : mhoms_results ] .in this work a general framework to design network on chip based turbo decoder architectures has been presented .the proposed framework can be adapted to explore different topologies , degrees of parallelism , message injection rates and routing algorithms .experimental results show that generalized de - bruijn and generalized kautz topologies achieve high throughput with a limited complexity overhead .moreover , depending on the target throughput requirements different parallelism degrees , message injection rates and routing algorithms can be used to minimize the network area overhead .a. giulietti , l. v. der perre , and m. strum , `` parallel turbo coding interleavers : avoiding collisions in accesses to storage elements , '' _ iet electronics letters _ , vol .38 , no . 5 , pp . 232234 , feb 2002 .m. j. thul , f. gilbert , and n. wehn , `` optimized concurrent interleaving architecture for high - throughput turbodecoding , '' in _ ieee international conference on electronics , circuits and systems _ , 2002 , pp . 10991102 .c. neeb , m. j. thul , and n. wehn , `` network - on - chip - centric approach to interleaving in high throughput channel decoders , '' in _ ieee international symposium on circuits and systems _, 2005 , pp . 17661769 .h. moussa , o. muller , a. baghdadi , and m. .jezequel , `` butterfly and benes - based on - chip communication networks for multiprocessor turbo decoding , '' in _ design , automation and test in europe conference and exhibition _, 2007 , pp . 654659 .s. benedetto , d. divsalar , g. montorsi , and f. pollara , `` soft - input soft - output modules for the construction and distributed iterative decoding of code networks , '' _european transactions on telecommunications _ , vol . 9 , no . 2 , pp . 155172 , mar / apr 1998 .o. muller , a. baghdadi , and m. jezequel , `` asip - based multiprocessor soc design for simple and double binary turbo decoding , '' in _ design , automation and test in europe conference and exhibition _, 2006 , pp . 13301335 .o. muller , a. baghdadi , and m. jezequel , `` exploring parallel processing levels for convolutional turbo decoding , '' in _ ieee international conference on information and communication technologies : from theory to applications _, 2006 , pp .
this work proposes a general framework for the design and simulation of network on chip based turbo decoder architectures . several parameters in the design space are investigated , namely the network topology , the parallelism degree , the rate at which messages are sent by processing nodes over the network and the routing strategy . the main results of this analysis are : i ) the most suited topologies to achieve high throughput with a limited complexity overhead are generalized de - bruijn and generalized kautz topologies ; ii ) depending on the throughput requirements different parallelism degrees , message injection rates and routing algorithms can be used to minimize the network area overhead .
monte carlo methods appeared about sixty years ago with the need to evaluate numerical values for various complex problems .these methods evolved and were applied early to quantum problems , thus putting within reach exact numerical solutions to non - trivial quantum problems .many improvements of these methods followed , avoiding critical slowing down near phase transitions and allowing to work directly in the continuous imaginary time limit . in recent years ,interest in methods that work in the canonical ensemble with global updates yet allow access to green functions has intensified .however , a method that works well for a given hamiltonian often needs major modifications for another .for example , the addition of a 4-site ring exchange term in the bosonic hubbard model required special developments for a treatment by the stochastic series expansion algorithm , as well as by the wordline algorithm .this can result in long delays .it is , therefore , advantageous to have at one s disposal an algorithm that can be applied to a very wide class of hamiltonians without requiring any changes . in a recent publication ,the stochastic green function ( sgf ) algorithm was presented , which meets this goal .the algorithm can be applied to any lattice hamiltonian of the form where is diagonal in the chosen occupation number basis and has only positive matrix elements .this includes all kinds of systems that can be treated by other methods presented in ref. , for instance bose - hubbard models with or without a trap , bose - fermi mixtures in one dimension , heisenberg models ... in particular hamiltonians for which the non - diagonal part is non - trivial ( the eigen - basis is unknown ) are easily treated , such as the bose - hubbard model with ring exchange , or multi - species hamiltonians in which a given species can be turned into another one ( see eq.([twospecies ] ) and fig .[ density ] and [ momentum ] for a concrete example ) .systems for which it is not possible to find a basis in which is diagonal and has only positive matrix elements are said to have a `` sign problem '' , which usually arises with fermionic and frustrated systems . as other qmc methods , the sgf algorithm does not solve this problem . the algorithm allows to measure several quantities of interest , such as the energy , the local density , local compressibility , density - density correlation functions ... in particular the winding is sampled and gives access to the superfluid density .equal - time n - body green functions are probably the most interesting quantities that can be measured by the algorithm , by giving access to momentum distribution functions which allow direct comparisons with experiments .all details on measurements are given in ref. .in addition the algorithm has the property of being easy to code , due in part to a simple update scheme in which all moves are accepted with a probability of 1 . 
despite of such generality and simplicity ,the algorithm might suffer from a reduced efficiency , compared to other algorithms in situations where they can be applied .the purpose of this paper is to present a `` directed '' update scheme that ( i ) keeps the simplicity and generality of the original sgf algorithm , and ( ii ) enhances its efficiency by improving the sampling over the imaginary time axis .while the sgf algorithm is not intended to compete with the speed of other algorithms , the improvment resulting from the directed update scheme is remarkable ( see section v ) .but what makes the strength of the sgf method is that it allows to simulate hamiltonians that can not be treated by other methods or that would require special developments ( see eq.([twospecies ] ) for a concrete example ) .the paper is organized as follows : we introduce in section ii the notations and definitions used in ref. . in section iii , we propose a simplification of the update scheme used in the original sgf algorithm , and determine how to satisfy detailed balance .a generalization of the simplified update scheme is presented in section iv , which constitutes the directed updated scheme .finally section v shows how to determine the introduced optimization parameters , and presents some tests of the algorithm and a comparison with the original version .in this section , we recall the expression of the `` green operator '' introduced in the sgf algorithm , and the extended partition function which is considered . although not required for understanding this paper , we refer the reader to ref. for full details on the algorithm . as many qmc algorithms ,the sgf algorithm samples the partition function the algorithm has the property of working in the canonical ensemble . in order to define the green operator ,we first define the `` normalized '' creation and annihilation operators , where and are the usual creation and annihilation operators of bosons , and is the number operator . from ( [ normalizedoperators ] )one can show the following relations for any state in the occupation number representation , with the particular case .appart from this exception , the operators and change a state by respectively creating and annihilating one particle , but they do not change the norm of the state . using the notation to denote two subsets of site indices and with the constraint that all indices in subset are different from the indices in subset ( but several indices in one subset may be equal ) , we define the green operator by where is a matrix that depends on the application of the algorithm . in order to sample the partition function ( [ partitionfunction ] ) , an extended partition function is considered by breaking up the propagator , and introducing the green operator between the broken parts , defining the time dependant operators and , and working in the occupation number basis in which is diagonal , the extended partition function takes the form where the sum implicitly runs over complete sets of states . we will systematically use the labels and to denote the states appearing on the left and the right of the green operator , and use the notation to denote the diagonal energy .we will also denote by and the time indices of the operators appearing on the left and the right of . 
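as a toy illustration of the `normalized' operators defined earlier in this section, occupation-number states can be represented as tuples of site occupations; the operators then change the occupation of a site by one with amplitude 1, the only exception being the annihilation operator acting on an empty site, which destroys the state. the representation below is purely illustrative and independent of any actual sgf implementation.

```python
def normalized_create(state, i):
    """R_i^dagger: add one particle on site i with amplitude 1."""
    new = list(state)
    new[i] += 1
    return 1.0, tuple(new)

def normalized_annihilate(state, i):
    """R_i: remove one particle on site i with amplitude 1;
    annihilates the state when site i is empty (the single exception)."""
    if state[i] == 0:
        return 0.0, None
    new = list(state)
    new[i] -= 1
    return 1.0, tuple(new)

# contrast with the usual bosonic operators, whose matrix elements would be
# sqrt(n_i + 1) and sqrt(n_i) instead of 1, thereby changing the norm of the state
```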
as a result ,the extended partition function is a sum over all possible configurations , each being determined by a set of time indices and a set of states , , , , .the algorithm consists in updating those configurations by making use of the green operator . assuming that the green operator is acting at time , it can `` create '' a operator ( that is to say a operator can be inserted in the operator string ) at the same time , thus introducing a new intermediate state , then it can be shifted to a different time . while shifting , any operator encountered by the green operator is `` destroyed '' ( that is to say removed from the operator string ) . assuming a left ( or right ) move , creating an operator will update the state ( or ) ,while destroying will update the state ( or ) .when a diagonal configuration of the green operator occurs , , such a configuration associated to the extended partition function ( [ extendedpartitionfunction ] ) is also a configuration associated to the partition function ( [ partitionfunction ] ) .measurements can be done when this occurs ( see ref. for details on measurements ) .next section presents a simple update scheme that meets the requirements of ergodicity and detailed balance .before introducing the directed update , we start by simplifying the update scheme used in the original sgf algorithm. we will assume in the following that a left move of the green operator is chosen .in the original version , the green operator can choose to create or not on its right a operator at time .then a time shift to the left is chosen for the green operator with an exponential distribution in the range .if an operator is encountered while shifting the green operator , then the operator is destroyed and the move stops there . as a result , four possible situations can occur during one move : 1 . no creation , shift , no destruction .2 . creation , shift , no destruction .3 . no creation , shift , destruction .4 . creation , shift , destruction .it appears that the first possibility `` no creation , no destruction '' is actually useless , since no change is performed in the operator string . the idea is to get rid of this possibility by forcing the green operator to destroy an operator if no creation is chosena further simplification can be done by noticing that the last possibility `` creation , destruction '' is not necessary for the ergodicity of the algorithm , and can be avoided by restricting the range of the time shift after having created an operator .therefore we replace the original update scheme by the following : we assume that the green operator is acting at time and that the operator on its left is acting at time . the green operator chooses to create or not an operator on its right at time .if creation is chosen , then a time shift of the green operator is chosen to the left in the range , with the probability distribution defined below .if no creation is chosen , then the green operator is directly shifted to the operator on its left at time , and the operator is destroyed . as a resultonly two possibilities have to be considered : 1 .creation , shift .2 . shift , destruction .figure [ simplfiedupdatescheme ] shows the associated organigram .section iii.b explains how detailed balance can be satisfied with this simplified update scheme . when updating the configurations according to the chosen update scheme , we need to generate different transitions from initial to final states with probabilities that satisfy detailed balance . 
in this sectionwe propose a choice for these probabilities , and determine the corresponding acceptance factors .we denote the probability of the initial ( final ) configuration by ( ) .we denote by the probability of the transition from configuration to configuration , and by the probability of the reverse transition .finally we denote by the acceptance rate of the transition from to , and by the acceptance rate of the reverse transition .the detailed balance can be written as we will make use of the metropolis solution , with we will use primed ( non - primed ) labels for states and time indices to denote final ( initial ) configurations .we consider here the case where a left move is chosen , an operator is created on the right of the green operator at time , and a new state is chosen .then a time shift to the left is chosen for the green operator in the range .it is important to note that and correspond to the time indices of the operators appearing on the left and the right of the green operator after the new operator has been inserted , that is to say at the moment where the time shift needs to be performed .thus we have and .the probability of the initial configuration is the boltzmann weight appearing in the extended partition function ( [ extendedpartitionfunction ] ) : the probability of the final configuration takes the form : it is important here to realize that the green operator only inserted on its right the operator , before being shifted from to .therefore we have the equalities , , , and .the probability of the transition from the initial configuration to the final configuration is the probability of a left move , times the probability of a creation , times the probability to choose the new state , times the probability to shift the green operator by , knowing that the states on the left and the right of the green operator at the moment of the shift are and : the probability of the reverse transition is simply the probability of a right move , times the probability of no creation , : \ ] ] from the original version of the sgf algorithm , we know that choosing the time shift with an exponential distribution is a good choice , because it cancels the exponentials appearing in the probabilities of the initial ( [ initial ] ) and final ( [ final ] ) configurations , avoiding exponentially small acceptance factors .however a different normalization must be used here , since the time shift is chosen in the range instead of .the suitable solution is : it is straightforward to check that the above probability is correctly normalized and well - defined for any real value of , the particular case reducing to the uniform distribution ( note that is always a positive number ) . 
for the probability to choose the new state ,the convenient solution is the same as in the original version : putting everything together , the acceptance factor ( [ metropolis2 ] ) becomes \big[1-e^{-(\tau_l^\prime-\tau_r^\prime)(v_r^\prime - v_l^\prime)}\big]}{v_r^\prime - v_l^\prime},\end{aligned}\ ] ] where we have used the notation to emphasize that this acceptance factor corresponds to a creation .it is also important for the remaining of this paper to note that is written as a quantity that depends on the initial configuration , times a quantity that depends on the final configuration .we consider here the case where a left move is chosen , and the operator on the left of the green operator is destroyed .this move corresponds to the inverse of the above `` creation , shift '' move .thus , the corresponding acceptance factor is obtained by inverting the acceptance factor , exchanging the initial time and final time , and switching the direction .however represents an absolute time shift , so and do not have to be exchanged .we get \big[1-e^{-(\tau_l-\tau_r)(v_l - v_r)}\big ] } \\ & \times & \frac{\big\langle\psi_l^\prime\big|\hat\mathcal g\big|\psi_r^\prime\big\rangle p(\rightarrow^\prime)p_\rightarrow^\dagger(\tau^\prime)}{\big\langle\psi_l^\prime\big|\hat\mathcal t\hat\mathcal g\big|\psi_r^\prime\big\rangle},\end{aligned}\ ] ] which is written as a quantity that depends on the initial configuration , times a quantity that depends on the final configuration .we will use here the short notation , , and to denote respectively the quantities , , and . as in ref . , we have some freedom for the choice of the probabilities of choosing a left or right move , and , and the probabilities of creation and .a suitable choice for those probabilities can be done in order to accept all moves , resulting in an appreciable simplification of the algorithm . for this purpose ,we impose the acceptance factor ( or ) to be equal to the acceptance factor ( or ) .this allows to determine the probabilities and , and the acceptance factors and take the form with finally we can impose the acceptance factors and to be equal .this implies defining , we are left with a single acceptance factor , which is independent of the chosen direction , and independent of the nature of the move ( creation or destruction ) .thus all moves can be accepted by making use of a proper reweighting , as explained in ref .the appendix shows how to generate random numbers with the appropriate exponential distribution ( [ exponentialdistribution ] ) . 
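for completeness, a time shift with the exponential distribution above can be drawn by inverse-transform sampling; the snippet below is a generic sketch in which lam stands for the difference of diagonal energies and delta for the allowed time range, and it is not the appendix's exact routine.

```python
import numpy as np

def sample_time_shift(delta, lam, rng=None):
    """draw t in [0, delta) with density p(t) proportional to exp(-lam * t).

    works for any real lam; the lam -> 0 limit reduces to the uniform
    distribution, as noted above.
    """
    if rng is None:
        rng = np.random.default_rng()
    u = rng.random()
    if abs(lam * delta) < 1e-12:          # near-degenerate case: uniform limit
        return u * delta
    return -np.log(1.0 - u * (1.0 - np.exp(-lam * delta))) / lam
```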
although the above simplified update scheme works , it turns out to have a poor efficiency .this is because of a lack of `` directionality '' : the green operator has , in average , a probability of to choose a left move or a right move .therefore the green operator propagates along the operator string like a `` drunk man '' , with a diffusion - like law .the basic creation and destruction processes correspond to the steps of the random walk .this suggests that the efficiency of the update scheme can be improved if one can force the green operator to move in the same direction for several iterations .next section presents a modified version of the simplified update scheme , which allows to control the mean length of the steps of the random walk , that is to say the mean number of creations and destructions in a given direction .the proposed directed update scheme can be considered analogous to the `` directed loop update '' used in the stochastic series expansion algorithm , which prevents a worm from going backwards .however the connection should not be pushed too far .indeed the picture of a worm whose head is evolving both in space and imaginary time accross vertices is obvious in a loop algorithm . in such algorithm ,a creation ( or an annihilation ) operator which is represented by the head of a worm is propagated both in space and imaginary time , while an annihilation ( or a creation ) operator represented by the tail of the worm remains at rest .the loop ends when the head of the worm bites the tail .such a worm picture is not obvious in the sgf algorithm : instead of single creation or annihilation operators , it is the full green operator over the whole space that is propagated only in imaginary time .this creates open worldlines , thus introducing discontinuities .these discontinuities increase or decrease while propagating in imaginary time .all open ends of the worldlines are localized at the same imaginary time index .therefore it is actually not possible to draw step by step a worm whose head is evolving in space and imaginary time until it bites its tail .we present in this section a directed update scheme which is obtained by modifying slightly the simplified update scheme , thus keeping the simplicity and generality of the algorithm . assuming that a left move is chosen, the green operator chooses between starting the move by a creation or a destruction . after having created ( or destroyed ) an operator , the green operator can choose to keep moving in the same direction and destroy ( or create ) with a probability ( or ) , or to stop .if it keeps moving , then a destruction ( or creation ) occurs , and the green operator can choose to keep moving and create ( or destroy ) with a probability ( or ) ... and so on , until it decides to stop . if the last action of the move is a creation , then a time shift is chosen .the organigram is represented in figure [ directedupdatescheme ] . in order to satisfy detailed balance , in addition to the acceptance factors and , we need to determine new acceptance factors of the form and .we first determine the new expressions of and resulting from the directed update scheme . 
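schematically, one directed move as just described alternates creations and destructions in a fixed direction until the green operator decides to stop, a time shift being performed only when the last action is a creation. the skeleton below illustrates this control flow only; the keep-moving probabilities and the actual create/destroy/shift operations are placeholders for the quantities defined in the text.

```python
import random

def directed_move(p_keep_destroy, p_keep_create, start_with_creation):
    """skeleton of one directed move of the green operator.

    after a creation the operator keeps moving (and destroys next) with
    probability p_keep_destroy; after a destruction it keeps moving (and
    creates next) with probability p_keep_create; otherwise it stops.
    a time shift closes the move whenever the last action is a creation.
    """
    actions = []
    create_next = start_with_creation
    while True:
        actions.append("create" if create_next else "destroy")
        p_keep = p_keep_destroy if create_next else p_keep_create
        if random.random() >= p_keep:            # decide to stop the move
            break
        create_next = not create_next            # alternate create/destroy
    if actions[-1] == "create":
        actions.append("time_shift")
    return actions
```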
for ,the previous probability has to be multiplied by the probability to stop the move after having created , .the previous probability has to be multiplied by the probability to stop the move after having destroyed , .we get for and the new expressions : }{\big\langle\psi_l\big|\hat\mathcal g\big|\psi_r\big\rangle p(\leftarrow)p_\leftarrow^\dagger(\tau ) } \\ & \times & \frac{p(\rightarrow^\prime)\big[1-p_\rightarrow^\dagger(\tau^\prime)\big]\big[1-e^{-(\tau_l^\prime-\tau_r^\prime)(v_r^\prime - v_l^\prime)}\big]}{\big[1-p_\leftarrow^{kd}(\tau^\prime)\big]\big(v_r^\prime - v_l^\prime\big ) } \\\nonumber q_\leftarrow^d & = & \frac{\big[1-p_\rightarrow^{kd}(\tau)\big]\big(v_l - v_r\big)}{p(\leftarrow)\big[1-p_\leftarrow^\dagger(\tau)\big]\big[1-e^{-(\tau_l-\tau_r)(v_l - v_r)}\big ] } \\ & \times & \frac{\big\langle\psi_l^\prime\big|\hat\mathcal g\big|\psi_r^\prime\big\rangle p(\rightarrow^\prime)p_\rightarrow^\dagger(\tau^\prime)}{\big\langle\psi_l^\prime\big|\hat\mathcal t\hat\mathcal g\big|\psi_r^\prime\big\rangle\big[1-p_\leftarrow^{kc}(\tau^\prime)\big]},\end{aligned}\ ] ] we consider here the case where a left move is chosen , an operator is created on the right of the green operator , and a new state is chosen .then the operator on the left of the green operator is destroyed . using the superscripts to denote intermediate configurations between initial and final configurations ,the sequence is the following 1 . 2 . 3 . , where we have , , , and .the probability of the transition from the initial configuration to the final configuration is the probability to choose a left move , times the probability to create an operator at time , times the probability to choose the new state , times the probability to keep moving and destroy , times the probability to stop the move after having destroyed : \ ] ] the probability of the reverse move is exactly symmetric : \ ] ] it is important to notice that , when in the intermediate configuration , the time of the operator to the left of the green operator is equal to , and the time of the operator to the right of the green operator is equal to .thus the acceptance factor takes the form }{\big\langle\psi_l\big|\hat\mathcal g\big|\psi_r\big\rangle p(\leftarrow)p_\leftarrow^\dagger(\tau ) } \\\nonumber & \times & \frac{e^{-\big(\tau_l^a-\tau_r^a\big)v_r^a}p_\rightarrow^{kd}(a)}{e^{-\big(\tau_l^a-\tau_r^a\big)v_l^a}p_\leftarrow^{kd}(a ) } \\ & \times & \frac{\big\langle\psi_l^\prime\big|\hat\mathcal g\big|\psi_r^\prime\big\rangle p(\rightarrow^\prime)p_\rightarrow^\dagger(\tau^\prime)}{\big\langle\psi_l^\prime\big|\hat\mathcal t\hat\mathcal g\big|\psi_r^\prime\big\rangle\big[1-p_\leftarrow^{kc}(\tau^\prime)\big]},\end{aligned}\ ] ] and is written as a quantity that depends on the initial configuration , times a quantity that depends on the intermediate configuration , times a quantity that depends on the final configuration .it is useful for the remaining of the paper to define the intermediate acceptance factor , we consider here the case where a left move is chosen , the operator on the left of the green operator is destroyed , then an operator is created on its right , and a new state is chosen . finally a time shift is chosen .the sequence of configurations is the following 1 . 2 . 3 . 
, where we have , and .the probability of the transition from the initial configuration to the final configuration is the probability to choose a left move , times the probability of no creation , times the probability to keep moving and create , times the probability to choose the new state , times the probability to stop the move after having destroyed , times the probability to shift the green operator by : \leftarrow^{kc}(a)p_\leftarrow(\psi_r^\prime ) \\ & \times & \big[1-p_\leftarrow^{kd}(\tau^\prime)\big]p_\leftarrow^{l^\prime r^\prime}(\tau^\prime-\tau_r^\prime)\end{aligned}\ ] ]the probability of the reverse move is exactly symmetric : \rightarrow^{kc}(a)p_\rightarrow(\psi_l ) \\ & \times & \big[1-p_\rightarrow^{kd}(\tau)\big]p_\rightarrow^{lr}(\tau_l-\tau)\end{aligned}\ ] ] the acceptance factor takes the form \big(v_l - v_r\big)}{p(\leftarrow)\big[1-p_\leftarrow^\dagger(\tau)\big]\big[1-e^{-(\tau_l-\tau_r)(v_l - v_r)}\big ] } \\\nonumber & \times & \frac{\big\langle\psi_l^a\big|\hat\mathcal g\hat\mathcal t\big|\psi_r^a\big\ranglep_\rightarrow^{kc}(a)}{\big\langle\psi_l^a\big|\hat\mathcal t\hat\mathcal g\big|\psi_r^a\big\rangle p_\leftarrow^{kc}(a ) } \\ & \times & \frac{p(\rightarrow^\prime)\big[1-p_\rightarrow^\dagger(\tau^\prime)\big]\big[1-e^{-(\tau_l^\prime-\tau_r^\prime)(v_r^\prime - v_l^\prime)}\big]}{\big[1-p_\leftarrow^{kd}(\tau^\prime)\big]\big(v_r^\prime - v_l^\prime\big)},\end{aligned}\ ] ] and is written as a quantity that depends on the initial configuration , times a quantity that depends on the intermediate configuration , times a quantity that depends on the final configuration .it is useful for the remaining of the paper to define the intermediate acceptance factor , we consider here the case where a left move is chosen , an operator is created on the right of the green operator , then the operator on its left is destroyed , then a second operator is created on its right .finally , a time shift of the green operator is performed .the sequence of configurations is the following 1 . 2 . 3 . 4 . , considering the intermediate configurations and between the intial and final configurations , it is easy to show that the corresponding acceptance factor can be written we consider here the case where a left move is chosen , the operator on the left of the green operator is destroyed , then an operator is created on its right . finally a second operator on the left of green operator is destroyed .the sequence of configurations is the following 1 . 2 . 3 . 4 . , considering the intermediate configurations and between the intial and final configurations , it is easy to show that the corresponding acceptance factor can be written it is straighforward to show that the acceptance factors of the form , , ( or , , ) can be expressed as products of the acceptance factor ( or ) and the intermediate factors and . in the same manner , the acceptance factors of the form , , ( or , , ) can be expressed as products of the acceptance factor ( or ) and the intermediate factors and . hereagain it is possible to take advantage of the freedom that we have for the choice of the probabilities , , , and ( or , , , and ) .a proper choice of these probabilities can be done in order to allow us to accept all moves , simplicity and generality being the leitmotiv of the sgf algorithm . 
for this purpose, we impose to all acceptance factors corresponding to left ( or right ) moves to be equal .this requires the intermediate acceptance factors and ( or and ) to be equal to 1 .this is realized if where and are optimization parameters belonging to . by tuning these parameters , the mean length of the steps of the green operatorcan be controlled .note that we have explicitly excluded from the allowed values for these optimization parameters .this is necessary for the green operator to have a chance to end in a diagonal configuration , .indeed , the choice would systematically lead to values of for the probabilities and for diagonal configurations .therefore the green operator would never stop in a diagonal configution , and no measurement could be done .it is important here to note that the quantities , , and are evaluated between the states on the left and the right of the green operator that are present at the moment where those quantities are needed , as well as for the times indices and and the potentials and .all acceptance factors corresponding to a given direction of propagation become equal if we choose for the creation probabilities : (v_l - v_r)}{\big[1-p_\rightarrow^{kc}\big]\big[1-e^{-(\tau_l-\tau_r)(v_l - v_r)}\big ] } } \\ & & p_\rightarrow^\dagger(\tau)=\frac{\big\langle\hat\mathcal t\hat\mathcal g\big\rangle}{\big\langle\hat\mathcal t\hat\mathcal g\big\rangle+\big\langle\hat\mathcal g\big\rangle\frac{\big[1-p_\leftarrow^{kd}\big](v_r - v_l)}{\big[1-p_\leftarrow^{kc}\big]\big[1-e^{-(\tau_l-\tau_r)(v_r - v_l)}\big]}},\end{aligned}\ ] ] finally , all acceptances factors become independant of the direction of propagation if we choose and with \frac{\big\langle\hat\mathcal g\hat\mathcal t\big\rangle}{\big\langle\hat\mathcal g\big\rangle}+\frac{\big[1-p_\rightarrow^{kd}\big](v_l - v_r)}{\big[1-e^{-(\tau_l-\tau_r)(v_l - v_r)}\big ] } \\ r_\rightarrow(\tau)=\big[1-p_\leftarrow^{kc}\big]\frac{\big\langle\hat\mathcal t\hat\mathcal g\big\rangle}{\big\langle\hat\mathcal g\big\rangle}+\frac{\big[1-p_\leftarrow^{kd}\big](v_r - v_l)}{\big[1-e^{-(\tau_l-\tau_r)(v_r - v_l)}\big]}.\end{aligned}\ ] ] as a result all moves can be accepted again , ensuring the maximum of simplicity of the algorithm .we still have some freedom for the choice of the optimization parameters and .this is discussed in next section .from the central limit theorem , we know that the errorbar associated to any measured quantity must decrease as the square root of the number of measurements , or equivalently , the square root of the time of the simulation. therefore it makes sense to define the efficiency of a qmc algorithm by where represents the set of all optimization parameters of the algorithm , is the measured quantity of interest , is the time of the simulation , and is the errorbar associated to the measured quantity .this definition ensures that is independent of the time of the simulation . as a result ,the larger the more efficient the algorithm . in the present casewe have , while for the original sgf algorithm .it is useful here to realize that , by symmetry , the mean values of and ( and and ) must be equal .therefore we define and .it seems reasonable to impose a condition of uniform sampling , .this condition can be satisfied by adjusting dynamically the values of and during the thermalization process . 
for this purposewe introduce a new optimization parameter and apply the following algorithm from time to time while thermalizing ( we start with ) : thus we are left with the optimization parameter . in order to determine the optimal value ,we have considered 2 different hamiltonians and , and evaluated the efficiency of the algorithm while scanning .the first hamiltonian we have considered describes free hardcore bosons and is exactly solvable , where the sum runs over pairs of first neighboring sites and is the hopping parameter .the second hamiltonian is highly non - trivial and describes a mixture of atoms and diatomic molecules , with a special term allowing conversions between the two species , where and ( and ) are the creation and annihilation operators of atoms ( molecules ) , , , , , and are respectively the hopping parameter of atoms , the hopping parameter of molecules , the atomic onsite interaction parameter , the molecular onsite interaction parameter , and the inter - species interaction parameter .the conversion term is tunable via the parameter and does not conserve the number of atoms or the number of molecules. however the total number of particles is conserved and is the canonical constraint .the parameter allows to control the ratio between the number of atoms and molecules .the application of the sgf algorithm to the hamiltonian ( [ twospecies ] ) is described in details in ref. .the changes coming with the directed update scheme are completely independent of the chosen hamiltonian .the following table shows the mean number of creations and destructions in one step , , and the relative efficiency of the algorithm applied to at half filling , for which we have measured the energy , the superfluid density , and the number of particles in the zero momentum state : .relative efficiency of the algorithm applied to at half filling for the energy , the superfluid density , and the number of particles in the zero momentum state . [ cols="^,^,^,^,^",options="header " , ] while the best value of depends on the hamiltonian which is considered and the measured quantity , it appears that a good compromise is to choose between and .the improvment of the efficiency is remarkable . in the following ,we illustrate the applicability of the algorithm to problems with non - uniform potentials , by adding a parabolic trap to the hamiltonian ( [ twospecies ] ) : the parameters and allow to control the curvature of the trap associated to atoms and molecules , respectively , and is the number of lattice sites .the inclusion of this term in the algorithm is trivial since only the values of the diagonal energies and are changed .figures ( [ density ] ) and ( [ momentum ] ) show the density profiles and momentum distribution functions obtained for a system with lattice sites initially loaded with atoms and no molecules , and the parameters , , , , , , , , , and .the presented results have been obtained by performing updates for thermalization , and updates with measurements ( an update is to be understood as the occurence of a diagonal configuration ) .the time of the simulation is about 8 hours on a cheap 32 bits laptop with 1ghz processor , with an implementation of the algorithm involving dynamical structures with pointers ( see ref. ) . ) to the hamiltonian ( [ twospecies ] ) .the errorbars are smaller than the symbol sizes , and are the biggest in the neighborhood of site indices 23 and 47 where they equal the size of the symbols . 
, scaledwidth=45.0% ] ) to the hamiltonian ( [ twospecies ] ) .the errorbars are smaller than the symbol sizes , and are the biggest for where they equal the size of the symbols ., scaledwidth=45.0% ]we have presented a directed update scheme for the sgf algorithm , which has the properties of keeping the simplicity and generality of the original algorithm , and improves significantly its efficiency .i would like to express special thanks to peter denteneer for useful suggestions .this work is part of the research program of the `` stichting voor fundamenteel onderzoek der materie ( fom ) , '' which is financially supported by the `` nederlandse organisatie voor wetenschappelijk onderzoek ( nwo ) . ''we describe here how to generate numbers with the appropriate exponential distribution ( [ exponentialdistribution ] ) . assuming that we have at our disposal a uniform random number generator that generates a random variable with the distribution for , we would like to find a function such that the random variable is generated with the distribution where and are the parameters of the exponential distribution .because of the relation , the probability to find in the range must be equal to the probability to find in the range .this implies the condition with .thus we have taking the anti - derivative with respect to on both sides of the equation , we get where is a constant . this constant and the correct sign are determined by imposing the conditions and . as a result , if is a realization of , then a realization of is given by .\ ] ] 10 nicholas metropolis and s. ulam , journal of the american statistical association , number 247 , volume 44 ( 1949 ) .handscomb , proc .58 , 594 ( 1962 ) .kalos , phys .128 , 1791 ( 1962 ) .r. blankenbecler , d.j .scalapino and r.l .sugar , phys .d 24 , 2278 ( 1981 ) . g.g .batrouni and r.t .scalettar , phys .b * 46 * , 9051 ( 1992 ) . w. von der linden , phys . rep .220 , 53 ( 1992 ) .evertz , g. lana and m. marcu , phys .70 , 875 - 879 ( 1993 ) .ceperley , rev .67 , 279 ( 1995 ) .beard and u .- j .wiese , phys .77 5130 ( 1996 ) .`` quantum monte carlo methods in physics and chemistry '' , ed . m.p . nightingale and c.j .umrigar , nato science series c 525 , kluwer academic publishers , dordrecht , ( 1999 ) .sandvik , j. phys .a * 25 * , 3667 ( 1992 ) ; phys . rev .b * 59 * , 14157 ( 1999 ) . n.v .prokofev , b.v .svistunov , and i.s .tupitsyn , jetp lett . * 87 * , 310 ( 1998 ) .m. rigol , a. muramatsu , g.g .batrouni , and r.t .scalettar , phys .lett . * 91 * , 130403 ( 2003 ) .k. van houcke , s.m.a .rombouts , and l. pollet , phys .e * 73*,056703 ( 2006 ) .rousseau , phys .e * 77 * , 056705 ( 2008 ) .sandvik , s. daul , r.r.p .singh , and d.j .lett . * 89 * , 247201 ( 2002 ) .rousseau , r.t .scalettar , and g.g .batrouni , phys .b * 72 * , 054524 ( 2005 ) .n. metropolis , a.w .rosenbluth , m.n .metropolis , a.h .teller , and e. teller , j. chem . phys .* 21 * , 1087 ( 1953 ) .olav f. syljuasen , anders w. sandvik , phys .e * 66 * , 046701 ( 2002 ) .rousseau and p.j.h .denteneer , phys .a * 77 * , 013609 ( 2008 ) .
in a recent publication we have presented the stochastic green function ( sgf ) algorithm , which has the properties of being general and easy to apply to any lattice hamiltonian of the form , where is diagonal in the chosen occupation number basis and has only positive matrix elements . we propose here a modified version of the update scheme that keeps the simplicity and generality of the original sgf algorithm , and enhances significantly its efficiency .
model selection is an important problem in many areas including machine learning .if a proper model is not selected , any effort for parameter estimation or prediction of the algorithm s outcome is hopeless .given a set of candidate models , the goal of model selection is to select the model that best approximates the observed data and captures its underlying regularities .model selection criteria are defined such that they strike a balance between the _ goodness - of - fit ( gof ) _ , and the _ generalizability _ or _complexity _ of the models .goodness - of - fit measures how well a model capture the regularity in the data .generalizability / complexity is the assessment of the performance of the model on unseen data or how accurately the model fits / predicts the future data .models with higher complexity than necessary can suffer from overfitting and poor generalization , while models that are too simple will underfit and have low gof .cross - validation , bootstrapping , akaike information criterion ( aic ) , and bayesian information criterion ( bic ) , are well known examples of traditional model selection . in re - sampling methods such as cross - validation and bootstraping , the generalization error of the modelis estimated using monte carlo simulation .in contrast with re - sampling methods , the model selection methods like aic and bic do not require validation to compute the model error , and are computationally efficient . in these proceduresan _ information criterion _ is defined such that the generalization error is estimated by penalizing the model s error on observed data .a large number of information criteria have been introduced with different motivations that lead to different theoretical properties .for instance , the tighter penalization parameter in bic favors simpler models , while aic works better when the dataset has a very large sample size .kernel methods are strong , computationally efficient analytical tools that are capable of working on high dimensional data with arbitrarily complex structure .they have been successfully applied in wide range of applications such as classification , and regression . in kernel methods ,the data are mapped from their original space to a higher dimensional feature space , the reproducing kernel hilbert space ( rkhs ) .the idea behind this mapping is to transform the nonlinear relationships between data points in the original space into an easy - to - compute linear learning problem in the feature space .for example , in kernel regression the response variable is described as a linear combination of the embedded data .any algorithm that can be represented through dot products has a kernel evaluation .this operation , called kernelization , makes it possible to transform traditional , already proven , model selection methods into stronger , corresponding kernel methods .the literature on kernel methods has , however , mostly focused on kernel selection and on tuning the kernel parameters , but only limited work being done on kernel - based model selection . in this study , we investigate a kernel - based information criterion for ridge regression models . in kernel ridge regression ( krr ) , tuning the ridge parameters to find the most predictive subspace with respect to the data at hand and the unseen data is the goal of the kernel model selection criterion . 
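as a concrete reminder of the classical criteria mentioned above, one common gaussian-likelihood form of aic and bic for a regression model with k free parameters and residual sum of squares rss is sketched below; equivalent formulations differ by additive constants and do not change the ranking of candidate models.

```python
import numpy as np

def aic_bic(rss, n, k):
    """aic and bic under a gaussian likelihood with unknown noise variance.

    both criteria trade goodness-of-fit (the rss term) against model
    complexity, bic penalizing the number of parameters more heavily
    when the sample size n is large.
    """
    gof = n * np.log(rss / n)
    return gof + 2 * k, gof + k * np.log(n)
```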
in classical model selection methods the performance of the model selection criterion is evaluated theoretically by providing a consistency proof where the sample size tends to infinity and empirically through simulated studies for finite sample sizes .other methods investigate a probabilistic upper bound of the generalization error .proving the consistency properties of the model selection in _ kernel model selection _ is challenging .the proof procedure of the classical methods does not work here .some reasons for that are : the size of the model to evaluate problems such as under / overfitting is not apparent ( for data points of dimension , the kernel is , which is independent of ) and asymptotic probabilities of generalization error or estimators are hard to compute in rkhs .researchers have kernelized the traditional model selection criteria and shown the success of their kernel model selection empirically .kobayashi and komaki extracted the kernel - based regularization information criterion ( kric ) using an eigenvalue equation to set the regularization parameters in kernel logistic regression and support vector machines ( svm ) .rosipal et al . developed covariance information criterion ( cic ) for model selection in kernel principal component analysis , because of its outperformed results compared to aic and bic in orthogonal linear regression .demyanov et al . , provided alternative way of calculating the likelihood function in akaike information criterion ( aic , and bayesian information criterion ( bic , ) , and used it for parameter selection in svms using the gaussian kernel . as pointed out by van emden , a desirable model is the one with the fewest dependent variables . thus defining a complexity term that measures the interdependency of model parameters enables one to select the most desirable model . in this study , we define a novel variable - wise variance and obtain a complexity measure as the additive combination of kernels defined on model parameters . formalizing the complexity term in this way effectively captures the interdependency of each parameter of the model .we call this novel method _ kernel - based information criterion ( kic)_. model selection criterion in gaussian process regression ( gpr ; ) , and kernel - based information complexity ( icomp ; ) resemble kic in using a covariance - based complexity measure .however , the methods differ because these complexity measures capture the interdependency between the data points rather than the model parameters .although we can not establish the consistency properties of kic theoretically , we empirically evaluate the efficiency of kic both on synthetic and real datasets obtaining state - of - the - art results compared to leave - one - out - cross - validation ( loocv ) , kernel - based icomp , and maximum log marginal likelihood in gpr .the paper is organized as follows . in section [ sec : krr ] , we give an overview of kernel ridge regression .kic is described in detail in section [ sec : kic ] .section [ sec : om ] is provides a brief explanation of the methods to which kic is compared , and in section [ sec : exp ] we evaluate the performance of kic through sets of experiments .in regression analysis , the regression model of the form : where can be either a linear or non - linear function . 
in linear regression we have , , where is an observation vector ( response variable ) of size , is a full rank data matrix of independent variables of size , and , is an unknown vector of regression parameters , where denotes the transposition .we also assume that the error ( noise ) vector is an -dimensional vector whose elements are drawn i.i.d , , where is an -dimensional identity matrix and is an unknown variance .the regression coefficients minimize the squared errors , , between estimated function , and target function .when , the problem is ill - posed , so that some kind of regularization , such as tikhanov regularization ( ridge regression ) is required , and the coefficients minimize the following optimization problem where is the regularization parameter .the estimated regression coefficients in ridge regression are : in _ kernel _ ridge regression ( krr ) , the data matrix is non - linearly transformed in rkhs using a feature map .the estimated regression coefficients based on are : where is the kernel matrix .equation [ eq : theta ] does not obtain an explicit expression for because of ( the kernel trick enables one to avoid explicitly defining that could be numerically intractable if computed in rkhs , if known ) , thus a ridge estimator is used ( e.g. ) that excludes : using in the calculation of krr is similar to regularizing the regression function instead of the regression coefficients , where the objective function is : and denotes the relevant rkhs . for , and have : where is the kernel function , and .the main contribution of this study is to introduce a new kernel - based information criterion ( kic ) for the model selection in kernel - based regression . according to equation kic balances between the goodness - of - fit and the complexity of the model .gof is defined using a log - likelihood - based function ( we maximize penalized log likelihood ) and the complexity measure is a function based on the covariance function of the parameters of the model . in the next subsections we elaborate on these terms .the definition of van emden for the complexity measure of a random vector is based on the interactions among random variables in the corresponding covariance matrix .a desirable model is the one with the fewest dependent variables .this reduces the information entropy and yields lower complexity . in this paperwe focus on this definition of the complexity measures . considering a -variate normal distribution , the complexity of a covariance matrix , , is given by the shannon s entropy , where , are the marginal and the joint entropy , and is the diagonal element of . 
if and only if the covariates are independent .the complexity measure in equation changes with orthonormal transformations because it is dependent on the coordinates of the random variable vectors .to overcome these drawbacks , bozodgan and haughton introduced icomp information criterion with a complexity measure based on the maximal covariance complexity , which is an upper bound on the complexity measure in equation : this complexity measure is proportional to the estimated arithmetic ( ) and geometric mean ( ) of the eigenvalues of the covariance matrix .larger values of , indicates higher dependency between random variables , and vice versa .zhang introduced a kernel form of this complexity measure , that is computed on kernel - based covariance of the ridge estimator : the complexity measure in gaussian process regression ( gpr ; ) is defined as , a concept from the joint entropy ( as shown in equation [ eq : complexity ] ) .in contrast to icomp and gpr , the complexity measure in kic is defined using the hilbert - schmidt ( hs ) norm of the covariance matrix , . minimizing this complexity measure obtains a model with more independent variables . in the next sections ,we explain in detail how to define the needed variable - wise variance in the complexity measure , and the computation of the complexity measure .+ in kernel - based model selection methods such as icomp , and gpr , the complexity measure is defined on a covariance matrix that is of size for of size .the idea behind this measure is to compute the interdependency between the model parameters , which independent of the number of the model parameters . in the other words ,the concept of the size of the model is hidden because of the definition of a kernel . to have a complexity measure that depends on , we introduce variable - wise variance using an additive combination of kernels for each parameter of the model .let be the parameter vector of the kernel ridge regression : where and , and the solution of krr is given by .the quantity = \sigma^2 \operatorname{tr}[k(k+\alpha i)^{-2 } ] ] , and =e[k(\cdot , y)] ] , and ] .we compared kic with loocv , kernel - based icomp , and maximum log of marginal likelihood in gpr ( abbreviated as gpr ) to find the optimal ridge regressors .the reason to compare kic with icomp and gpr is that in all of these methods the complexity measure computes the interdependency of model parameters as a function of covariance matrix in different ways .loocv is a standard and commonly used methods for model selection .* loocv : * re - sampling model selection methods like cross - validation is time consuming . for instance , the leave - one - out - cross - validation ( loocv ) has the computational cost of the number of parameter combinations ( is the processing time of the model selection algorithm ) for training samples . to have cross - validation methods with faster processing time ,the closed form formula for the risk estimators of the algorithm under special conditions are provided .we consider the kernel - based closed form of loocv for linear regression introduced by : ^{-1}[i - h]y\|_2 ^ 2}{n}\end{aligned}\ ] ] where is the hat matrix . *maximizing the log of marginal likelihood ( gpr ) * is a kernel - based regression method . 
for a given training set , and ,a multivariate gaussian distribution is defined on any function such that , , where is a kernel .marginal likelihood is used as the model selection criterion in gpr , since it balances between the lack - of - fit and complexity of a model . maximizing the log of marginal likelihood obtains the optimal parameters for model selection .the log of marginal likelihood is denoted as : where denotes the model s fit , , denotes the complexity , and is a normalization constant . without loss of generality in this paper gpr means the model selection criterion is used in gpr .* icomp : * the kernel - based icomp introduced in is an information criterion to select the models and is defined as , where , and elaborated in equations [ eq : cicomp ] , and [ eq : sigmaicomp ] .in this section we evaluate the performance of kic on synthetic , and real datasets , and compare with competing model selection methods .kic was first evaluated on the problem of approximating from a set of 100 points sampled at regular intervals in $ ] . to evaluate robustness to noise , normal random noisewas added to the function at two noise - to - signal ( nsr ) ratios : , and .figure [ sinc ] shows the sinc function and the perturbed datasets .the following experiments were conducted : ( 1 ) shows how kic balances between gof and complexity , ( 2 ) shows how kic and mse on training sets change when the sample size and the level of noise in the data change ( 3 ) investigates the effect of using different kernels , and ( 4 ) evaluates the consistency of kic in parameter selection .all experiments were run 100 times using randomly generated datasets , and corresponding test sets of size 1000 . * experiment 1 . * the effect of on complexity , lack - of - fit and kic values was measured by setting , with krr models being generated using a gaussian kernel with different standard deviations , , computed over the 100 data points .the results are shown in figure [ co_la_kic ] . the model generated with overfits , because it is overly complex , while gives a simpler model that underfits .as the ridge parameter increases , the model complexity decreases while the goodness - of - fit is adversely affected .kic balances between these two terms , which yields a criterion to select a model that has good generalization , as well as goodness of fit to the data .* experiment 2 . * the influence of training sample size was investigated by comparing sample sizes , , of 50 , and 100 , for a total of four sets of experiments : ( ) : ( ) , ( ) , ( ) , ( ) .the gaussian kernel was used with . the kic value and mean squared error ( mse , ) , for different is shown in figure [ kic - mse ] .the data with nsr= has larger mse values , and larger error bars , and consequently larger kic values compared to data with nsr= . in both cases , kic and mse change with similar profiles with respect to .the noise and the sample size have no effect on kic for selecting the best model ( parameter ) .* experiment 3 .* the effect of using a gaussian kernel , , versus the cauchy kernel , , was investigated , where , and in the computation of the kernel - based model selection criteria icomp , kic , gpr , and loocv . the results are reported in figures [ gaussian kernel ] and [ cauchy kernel ] .the graphs show box plots with markers at , and of the empirical distributions of mse values . 
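for concreteness, a sketch of the two kernels used in this comparison and of the closed-form leave-one-out residuals of kernel ridge regression is given below; the parameterizations follow the usual conventions and may differ from the exact constants used in the experiments, so the snippet should be read as illustrative rather than as the experimental code.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    # k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def cauchy_kernel(X, Y, sigma):
    # k(x, y) = 1 / (1 + ||x - y||^2 / sigma^2)
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return 1.0 / (1.0 + d2 / sigma ** 2)

def loocv_mse(K, y, alpha):
    """closed-form leave-one-out error for kernel ridge regression.

    with hat matrix H = K (K + alpha I)^{-1}, the leave-one-out residuals
    are (I - diag(H))^{-1} (I - H) y, so no model has to be refitted.
    """
    n = len(y)
    H = K @ np.linalg.solve(K + alpha * np.eye(n), np.eye(n))
    residuals = (np.eye(n) - H) @ y
    loo = residuals / (1.0 - np.diag(H))
    return np.mean(loo ** 2)
```

scanning alpha (and the kernel width sigma) over a grid and keeping the pair with the smallest returned value reproduces the loocv selection rule used as a baseline in the experiments.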
as expected , the mse of all methods is larger when nsr is high , , and smaller for the larger of the two training sets ( 100 samples ) .loocv , icomp , and kic performed comparably , and better than gpr using a gaussian kernel for data with nsr . in the other cases , the best results ( smallest mse ) was achieved by kic .all methods have smaller mse values using the gaussian kernel versus the cauchy kernel .gpr with the cauchy kernel obtains results comparable with kic , but with a standard deviation close to zero .* experiment 4 .* we assessed the consistency of selecting / tuning the parameters of the models in comparison with loocv .we considered four experiment of sample size , , and nsr .the parameters to tune or select are , and for the gaussian kernel .the frequency of selecting the parameters are shown in figure [ loocv ] for loocv , and in figure [ kic_frequency ] for kic .the more concentrated frequency shows the more consistent selecting criterion .the diagrams show that kic is more consistent in selecting the parameters rather than loocv .loocv is also sensitive to sample size .it provides a more consistent result for benchmarks with samples .+ we used three benchmarks selected from the delve datasets ( www.cs.toronto.edu/~delve/data ) : ( 1 ) abalone dataset ( 4177 instances , 7 dimensions ) , ( 2 ) kin - family of datasets ( 4 datasets ; 8192 instances , 8 dimensions ) , and ( 3 ) puma - family of datasets ( 4 datasets ; 8192 instances , 8 dimensions ) . for the abalone dataset , the task is to estimate the age of abalones .we used normalized attributes in range [ 0,1 ] .the experiment is repeated 100 times to obtain the confidence interval . in each trial100 samples were selected randomly as the training set and the remaining 4077 samples as the test set .the kin - family and puma - family datasets are realistic simulations of a robot arm taking into consideration combinations of attributes such as whether the arm movement is nonlinear ( n ) or fairly linear ( f ) , and whether the level of noise ( unpredictability ) in the data is : medium ( m ) , or high ( h ) .the kin - family includes : kin-8fm , kin-8fh , kin-8 nm , kin-8nh datasets , and the puma - family contains : puma-8fm , puma-8fh , puma-8 nm , and puma-8nh datasets . in the kin - family of datasets , having the angular positions of an 8-link robot arm , the distance of the end effector of the robot arm from a starting position is predicted . the angular position of a link of the robot armis predicted given the angular positions , angular velocities , and the torques of the links .we compared kic_1 ( [ eq : kic1 ] ) , kic_2 ( [ eq : kic2 ] ) , and kic with loocv , icomp , and gpr on the three datasets .the results are shown as box - plots in figures [ abalone ] , [ kin - family ] , and [ puma - family ] for abalone , kin - family , and puma - family datasets , respectively .the best results across all three datasets were achieved using kic , and the second best results were for loocv . for the abalone dataset , comparable results were achieved for kic and loocv , that are better than icomp , and the smallest mse value obtained by sgpr .kic_1 , and kic_2 had similar mse values , which are larger than for the other methods . 
for the kin - family datasets , except for kin-8fm , kic gets better results than gpr , icomp , and loocv .kic_1 , and kic_2 obtain better results than gpr , and loocv for kin-8fm , and kin-8 nm , which are datasets with medium level of noise , but larger mse value for datasets with high noise ( kin-8fh , and kin-8nh ) . for the puma - family datasets ,kic got the best results on all datasets except for on puma-8 nm , where the smallest mse was achieved by loocv .the result of kic is comparable to icomp and better than gpr for puma-8 nm dataset .for puma-8fm , puma-8fh , and puma-8nh , although the median of mse for loocv and gpr are comparable to kic , kic has a more significant mse ( smaller interquartile in the box bots ) .the median mse value for kic_1 , and kic_2 are closer to the median mse values of the other methods on puma-8fm , and puma-8 nm , where the noise level is moderate compared to puma-8fh , and puma-8nh , where the noise level is high . the sensitivity of kic_1 , and kic_2 to noise is due to the existence of variance in their formula .kic_2 has a larger interquartile of mse than kic_1 in datasets with high noise , which highlights the effect of in its formula ( equation [ eq : kic2 ] ) rather than in equation .we introduced a novel kernel - based information criterion ( kic ) for model selection in regression analysis . the complexity measure in kicis defined on a variable - wise variance which explicitly computes the interdependency of each parameter involved in the model ; whereas in methods such as kernel - based icomp and gpr , this interdependency is defined on a covariance matrix , which obscures the true contribution of the model parameters .we provided empirical evidence showing how kic outperforms loocv ( with kernel - based closed form formula of the estimator ) , kernel - based icomp , and gpr , on both artificial data and real benchmark datasets : abalon , kin family , and puma family . in these experiments ,kic efficiently balances the goodness of fit and complexity of the model , is robust to noise ( although for higher noise we have larger confidence interval as expected ) and sample size , is consistent in tuning / selecting the ridge and kernel parameters , and has significantly smaller or comparable mean squared values with respect to competing methods , while yielding stronger regressors .the effect of using different kernels was also investigated since the definition of a proper kernel plays an important role in kernel methods .kic had superior performance using different kernels and for the proper one obtains smaller mse .this work was funded by fnsnf grants ( p1tip2_148352 , pbtip2_140015 ) .we want to thank arthur gretton , and zoltn szab for the fruitful discussions .
this paper introduces kernel - based information criterion ( kic ) for model selection in regression analysis . the novel kernel - based complexity measure in kic efficiently computes the interdependency between parameters of the model using a variable - wise variance and yields selection of better , more robust regressors . experimental results show superior performance on both simulated and real data sets compared to leave - one - out cross - validation ( loocv ) , kernel - based information complexity ( icomp ) , and maximum log of marginal likelihood in gaussian process regression ( gpr ) .
this work on pricing american options under proportional transaction costs goes back to the seminal discovery by that to hedge against a buyer who can exercise the option at any ( ordinary ) stopping time , the seller must in effect be protected against all mixed ( randomised ) stopping times .this was followed by , who established a non - constructive dual representation for the set of strategies superhedging the seller s ( though not the buyer s ) position in an american option under transaction costs .efficient iterative algorithms for computing the upper and lower hedging prices of the option , the hedging strategies , optimal stopping times as well as dual representations for both the seller and the buyer of an american option under transaction costs were developed by in a model with two assets , and in a multi - asset model .all these approaches take it for granted that the buyer can only exercise the option instantly , at an ordinary stopping time of his choosing . by contrast , in the present paper we allow the buyer the flexibility to exercise an american option gradually , rather than all at a single time instance . though it would be difficult in practice to exercise a fraction of an option contract and to hold on to the reminder to exercise it later, the holder of a large portfolio of options may well choose to exercise the individual contracts on different dates if that proves beneficial .does this ability to exercise gradually affect the pricing bounds , hedging strategies and optimal stopping times for the buyer and/or seller ?perhaps surprisingly , the answer to this question is yes , it does in the presence of transaction costs .gradual exercise turns out to be linked to another feature , referred to as deferred solvency , which will also be studied here .if a temporary loss of liquidity occurs in the market , as reflected by unusually large bid - ask spreads , agents may become insolvent .being allowed to defer closing their positions until liquidity is restored might enable them to become solvent once again .this gives more leeway when constructing hedging strategies than the usual requirement that agents should remain solvent at all times . was the first to explore the consequences of gradual exercise and deferred solvency using a model with a single risky asset as a testing ground . in the present paperthese ideas are developed in a systematic manner and extended to the much more general setting of the multi - asset market model with transaction costs due to ; see also and .pricing and hedging for the seller of an american option under transaction costs is a convex optimisation problem irrespective of whether instant or gradual exercise is permitted .however , this is not so for the buyer . in this case one has to tackle a non - convex optimisation problem for options that can only be exercised instantly .a very interesting consequence of gradual exercise is that pricing and hedging becomes a convex optimisation problem also for the buyer of an american option , making it possible to deploy convex duality methods .the convexity of the problem also makes it much easier to implement the pricing and hedging algorithms numerically .we will make use of this new opportunity in this paper .the paper is organised as follows .section [ sect - multi - curr - mod ] recalls the general setting of kabanov s multi - asset model with transaction costs . 
in section [sect : inst - versus - grad - exe ] the hedging strategies for the buyer and seller and the corresponding option prices under gradual exercise are introduced and compared with the same notions under instant exercise. a toy example is set up to demonstrate that it is easier to hedge an option and that the bid - ask spread of the option prices can be narrower under gradual exercise as compared to instant exercise . in section [sect : seller ] the seller s case is studied in detail .the notion of deferred solvency is first discussed and linked in proposition [ prop : am : seller : immediate - ultimate ] with the hedging problem for the seller of an american option with gradual exercise .the sets of seller s hedging portfolios are then constructed and related to the ask price of the option under gradual exercise and to a construction of a seller s hedging strategy realising the ask price ; see theorem [ prop : seller : zau0=initial - endowments ] .a dual representation of the seller s price is established in theorem [ thm : ask - price - representation ] .the toy example is revisited to illustrate the various constructions and results for the seller .section [ sect : buyer ] is devoted to the buyer s case .buyer s hedging portfolios and strategies are constructed and used to compute the bid price of the option ; see theorem [ prop:2012 - 07 - 26:hedging - construct ] .finally , the dual representation for the buyer is explored in theorem [ th : bu - buyer ] .once again , the toy example serves to illustrate the results .a numerical example with three assets can be found in section [ sec : num - example ] . some conclusions and possible further developments and ramifications are touched upon in section [ sect : conclusions ] . technical information and proofsare collected in the appendix .let be a filtered probability space .we assume that is finite , , and for all . for each let be the collection of atoms of , called the _ nodes _ of the associated tree model .a node is said to be a _successor _ of a node if . for each denote the collection of successors of any given node by . for each let be the collection of -measurable -valued random variables .we identify elements of with functions on whenever convenient .we consider the discrete - time currency model introduced by and developed further by and among others .the model contains assets or currencies . at each trading date and for each one unit of asset can be obtained by exchanging units of asset .we assume that the exchange rates are -measurable and for all and .we say that a portfolio is can be _ exchanged _ into a portfolio at time whenever there are -measurable random variables , such that for all where represents the number of units of asset received as a result of exchanging some units of asset . the _ solvency cone _ is the set of portfolios that are _ solvent _ at time , i.e. the portfolios at time that can be exchanged into portfolios with non - negative holdings in all assets .it is straightforward to show that is the convex cone generated by the canonical basis of and the vectors for , and so is a polyhedral cone , hence closed .note that contains all the non - negative elements of .a _ trading strategy _ is a predictable -valued process with final value and initial endowment . for each the portfolio held from time to time .let be the set of trading strategies .we say that is a _ self - financing _strategy whenever for all . note that no implicitly assumed self - financing condition is included in the definition of . 
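in the special case of two currencies, membership of the solvency cone admits a particularly simple description: a position can be exchanged into a portfolio with non-negative holdings if and only if its value is non-negative at both the bid and the ask exchange rate. the helper below is a minimal sketch of this two-asset case only; in the general d-asset model solvency is checked through the exchange variables, e.g. by linear programming.

```python
def is_solvent_two_currencies(x_foreign, x_domestic, bid, ask):
    """two-currency special case of the solvency cone.

    a position (x_foreign, x_domestic) is solvent iff
        bid * x_foreign + x_domestic >= 0  and
        ask * x_foreign + x_domestic >= 0,
    i.e. its value is non-negative under both extreme exchange rates.
    """
    return (bid * x_foreign + x_domestic >= 0.0 and
            ask * x_foreign + x_domestic >= 0.0)
```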
a trading strategy is an _ arbitrage opportunity _ if it is self - financing , and there is a portfolio with non - negative holdings in all assets such that .this notion of arbitrage was considered by , and its absence is formally different but equivalent to the weak no - arbitrage condition introduced by .[ th:2012 - 10 - 03:ftap ] the model admits no arbitrage opportunity if and only if there exists a probability measure equivalent to and an -valued -martingale such that where is the polar of ; see ( [ eq:2012 - 09 - 20:aast ] ) in the appendix .we denote by the set of pairs satisfying the conditions in theorem [ th:2012 - 10 - 03:ftap ] , and by the set of pairs satisfying the conditions in theorem [ th:2012 - 10 - 03:ftap ] but with absolutely continuous with respect to ( and not necessarily equivalent to ) .we assume for the remainder of this paper that the model admits no arbitrage opportunities , i.e. . in place of a pair can equivalently use the so - called _ consistent price process _ ; see .we also define for any in the absence of arbitrage is a non - empty compactly -generated polyhedral cone for all ( * ? ? ?* remark 2.2 ) , which means that .( for the definition of a compactly -generated cone , see appendix [ subsect : comp - gen - cones ] . )the payoff of an american option in the model with underlying currencies is , in general , an -valued adapted process .the seller of the american option is obliged to deliver , and the buyer is entitled to receive the portfolio of currencies at a stopping time chosen by the buyer . here denotes the family of stopping times with values in .this is the usual setup in which the option is exercised _ instantly _ at a stopping time .american options with the provision for instant exercise in the multi - currency model under proportional transaction costs have been studied by , who established a non - constructive characterisation of the superhedging strategies for the option seller only , and by , who provided computationally efficient iterative constructions of the ask and bid option prices and the superhedging strategies for both the option seller and buyer . in the present paperwe relax the requirement that the option needs to be exercised instantly at a stopping time .instead , we allow the buyer to exercise _ gradually _ at a mixed stopping time .( for the definition of mixed stopping times , see appendix [ sect : mixed - stop - times ] . )if the buyer chooses to exercise the option gradually according to a mixed stopping time , then the seller of the american option will be obliged to deliver , and the buyer will be entitled to receive the fraction of the portfolio of currencies at each time . the question then arises whether or not it would be more beneficial for the buyer to exercise the option gradually rather than instantly ? what will be the optimal mixed stopping time for the buyer ?how should the seller hedge against gradual exercise ?are the ask ( seller s ) and bid ( buyer s ) option prices and hedging strategies affected by gradual exercise as compared to instant exercise ? in the case of instant exercise the seller of an american option needs to hedge by means of a trading strategy against all ordinary stopping times chosen by the buyer . the trading strategy needs to be self - financing up to time and to allow the seller to remain solvent on delivering the portfolio at time , for any . 
hence the family of seller s superhedging strategiesis defined as and the _ ask price _ ( _ seller s price _ ) of the option in currency is this is the smallest amount in currency needed to superhedge a short position in . on the other hand , the buyer of an american option can select both a stopping time and a trading strategy .the trading strategy needs to be self - financing up to time and to allow the buyer to remain solvent on receiving the portfolio at time .thus , the family of buyer s superhedging strategies is defined as and the _ bid price _ ( _ buyer s price _ ) of the option in currency is this is the largest amount in currency that the buyer can raise using the option as surety . for american options with instant exercise , iterative constructions of the ask and bid option prices and and the corresponding seller s and buyer s superhedging strategies from and were established by .when the buyer is allowed to exercise gradually , the seller needs to follow a suitable trading strategy to hedge his exposure .since the seller can react to the buyer s actions , this strategy may in general depend on the mixed stopping time followed by the buyer , and will be denoted by . in other words, we consider a function . at each time the seller will be holding a portfolio and will be obliged to deliver a fraction of the payoff .he can then rebalance the remaining portfolio into in a self - financing manner , so that self - financing and superhedging conditions have merged into one .we call ( [ eq : seller - self - fin - superhedge ] ) the _ rebalancing _ condition .when creating the portfolio at time , the seller can only use information available at that time .this includes , but the seller has no way of knowing the future values that will be chosen by the buyer .the trading strategies that can be adopted by the seller are therefore restricted to those satisfying the _ non - anticipation _condition in particular , the initial endowment of the trading strategy is the same for all .we denote this common value by .we define the family of seller s superhedging strategies against gradual exercise by and the corresponding _ ask price _ ( _ seller s price _ ) of the option in currency by this is the smallest amount in currency that the seller needs to superhedge a short position in the american option when the buyer is allowed to exercise gradually . on the other hand ,the buyer is able to select both a mixed stopping time and a trading strategy , and will be taking delivery of a fraction of the payoff at each time . 
because the choice of the mixed stopping time is up to the buyer , the trading strategy needs to be good just for the one chosen stopping time , and does not need to be considered as a function of , in contrast to the seller s case .the _ rebalancing _condition needs to be satisfied .hence , the family of superhedging strategies for the buyer of an american option with gradual exercise is defined as and the corresponding _ bid price _ ( _ buyer s price _ ) of the option in currency is this is the largest amount in currency that can be raised using the option as surety by a buyer who is able to exercise gradually .[ exl : new]we consider a toy example with two assets , a foreign currency ( asset 1 ) and domestic currency ( asset 2 ) in a two - step binomial tree model with the following bid / ask foreign currency prices in each of the four scenarios in :{|c|cc|cc|cc|}\hline & & & & & & \\\hline & & & & & & \\\cline{6 - 7} & & & & & & \\\cline{4 - 7} & & & & & & \\\cline{6 - 7} & & & & & & \\\hline \end{tabular}\ ] ] note there are only two nodes with a non - trivial bid / ask spread , namely the ` up ' node and the ` up - up ' node . the corresponding exchange rates are {cc}\pi_{t}^{11 } & \pi_{t}^{12}\\ \pi_{t}^{21 } & \pi_{t}^{22}\end{array } \right ] = \left [ \begin{array } [ c]{cc}1 & 1/s_{t}^{\mathrm{b}}\\ s_{t}^{\mathrm{a } } & 1 \end{array } \right ] .\ ] ] in this model we consider an american option with the following payoff process :{|c|c|c|c|}\hline & & & \\\hline & & & \\\cline{4 - 4} & & & \\\cline{3 - 4} & & & \\\cline{4 - 4} & & & \\\hline \end{tabular}\ ] ] in the case when the option can only be exercised instantly , using the algorithms of we can compute the bid and ask prices of the option in the domestic currency to be now consider given by{|c|c|c|c|}\hline & & & \\\hline & & & \\ & & & \\\cline{4 - 4} & & & \\ & & & \\\hline \end{tabular}\ ] ] for any .also consider and such that{|c|c|c|c|c|c|c|}\hline & & & & & & \\\hline & & & & & & \\ & & & & & & \\\cline{4 - 4}\cline{6 - 7} & & & & & & \\ & & & & & & \\\hline \end{tabular}\ ] ] we can verify that and .the existence of these strategies means that this example demonstrates that the seller s and buyer s prices under gradual exercise may differ from their respective counterparts under instant exercise . it demonstrates the need to revisit and investigate the pricing and superhedging results in the case when the instant exercise provision is relaxed and replaced by gradual exercise .we have seen in example [ exl : new ] that the seller s price may be higher than .the reason is that an option seller who follows a hedging strategy is required to be instantly solvent upon delivering the payoff at the stopping time when the buyer has chosen to exercise the option .meanwhile , a seller who follows a strategy will be able to continue rebalancing the strategy up to the time horizon as long as a solvent position can be reached eventually .being able to defer solvency in this fashion allows more flexibility for the seller , resulting in a lower seller s price . 
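to make the effect of gradual exercise concrete, the sketch below accumulates, along a single scenario, the portfolio actually handed over to a buyer who exercises according to a mixed stopping time; instant exercise at an ordinary stopping time is recovered by putting the whole weight on a single date. array shapes and names are illustrative only.

```python
import numpy as np

def delivered_payoff(chi, xi):
    """portfolio delivered along one scenario under gradual exercise.

    the fraction chi[t] of the payoff xi[t] is handed over at date t,
    with chi[t] >= 0 and sum_t chi[t] == 1 (a mixed stopping time
    evaluated along the scenario).
    """
    chi = np.asarray(chi, dtype=float)          # shape (T + 1,)
    xi = np.asarray(xi, dtype=float)            # shape (T + 1, d)
    assert np.all(chi >= 0.0) and np.isclose(chi.sum(), 1.0)
    return (chi[:, None] * xi).sum(axis=0)      # a portfolio in R^d

# instant exercise at date tau is the special case where chi is the
# indicator of tau, so delivered_payoff(chi, xi) reduces to xi[tau].
```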
On the other hand, it might appear that a seller who hedges against gradual exercise (against mixed stopping times) would have a harder task to accomplish than someone who only needs to hedge against instant exercise (ordinary stopping times). However, this turns out not to be a factor affecting the seller's price, as we shall see in Proposition [prop:am:seller:immediate-ultimate]. These considerations indicate that the notion of solvency needs to be relaxed. We say that a portfolio satisfies the _deferred solvency_ condition at a given time if it can be exchanged into a solvent portfolio by the time horizon without any additional investment, i.e. if there is a sequence of self-financing portfolio exchanges that ends in a solvent position. We call such a sequence a _liquidation strategy_ starting from the given portfolio at the given time. The set of portfolios satisfying the deferred solvency condition at a given time is a cone. We call it the _deferred solvency cone_.

In Example [exl:new], the portfolio holding the stated amounts of domestic and foreign currency is insolvent at the `up' node at time 1. It does, however, satisfy the deferred solvency condition at that node; the bid-ask spread at the `up' node is the large interval [3, 9]. For this example, the sets $\mathcal{Z}_t^{\mathrm{ad}}$ and $\mathcal{Z}_t^{\mathrm{bd}}$ and the accompanying data $(\hat{\mathbb{Q}}, \hat{S}_t, \hat{\chi}_t)$ are listed scenario by scenario in the following tables.

scenario | $\mathcal{Z}_0^{\mathrm{ad}}$ | $\mathcal{Z}_1^{\mathrm{ad}}$ | $\mathcal{Z}_2^{\mathrm{ad}}$
ω_1 | 5x^1 + x^2 ≥ 5 | 8x^1 + x^2 ≥ 8, 4x^1 + x^2 ≥ 0 | 8x^1 + x^2 ≥ 8, 4x^1 + x^2 ≥ 0
ω_2 | 5x^1 + x^2 ≥ 5 | 8x^1 + x^2 ≥ 8, 4x^1 + x^2 ≥ 0 | 4x^1 + x^2 ≥ 0
ω_3 | 5x^1 + x^2 ≥ 5 | 2x^1 + x^2 ≥ 0 | 3x^1 + x^2 ≥ 0
ω_4 | 5x^1 + x^2 ≥ 5 | 2x^1 + x^2 ≥ 0 | x^1 + x^2 ≥ 0

scenario | $\hat{\mathbb{Q}}$ | $\hat{S}_0$ | $\hat{S}_1$ | $\hat{S}_2$ | $\hat{\chi}_0$ | $\hat{\chi}_1$ | $\hat{\chi}_2$
ω_1 | 1 | (5,1) | (4,1) | (8,1) | 0 | 3/4 | 1/4
ω_2 | 0 | (5,1) | (4,1) | (4,1) | 0 | 3/4 | 1/4
ω_3 | 0 | (5,1) | (2,1) | (3,1) | 0 | 0 | 1
ω_4 | 0 | (5,1) | (2,1) | (1,1) | 0 | 0 | 1

scenario | $\mathcal{Z}_0^{\mathrm{bd}}$ | $\mathcal{Z}_1^{\mathrm{bd}}$ | $\mathcal{Z}_2^{\mathrm{bd}}$
ω_1 | 5x^1 + x^2 ≥ -3 | 8x^1 + x^2 ≥ -8, 6x^1 + x^2 ≥ -4, 4x^1 + x^2 ≥ -4 | 8x^1 + x^2 ≥ -8, 4x^1 + x^2 ≥ 0
ω_2 | 5x^1 + x^2 ≥ -3 | 8x^1 + x^2 ≥ -8, 6x^1 + x^2 ≥ -4, 4x^1 + x^2 ≥ -4 | 4x^1 + x^2 ≥ 0
ω_3 | 5x^1 + x^2 ≥ -3 | 2x^1 + x^2 ≥ 0 | 3x^1 + x^2 ≥ 0
ω_4 | 5x^1 + x^2 ≥ -3 | 2x^1 + x^2 ≥ 0 | x^1 + x^2 ≥ 0

scenario | $\hat{\mathbb{Q}}$ | $\hat{S}_0$ | $\hat{S}_1$ | $\hat{S}_2$ | $\hat{\chi}_0$ | $\hat{\chi}_1$ | $\hat{\chi}_2$
ω_1 | 1 | (5,1) | (5,1) | (5,1) | 0 | 1/2 | 1/2
ω_2 | 0 | (5,1) | (5,1) | (4,1) | 0 | 1/2 | 1/2
ω_3 | 0 | (5,1) | (2,1) | (3,1) | 0 | 0 | 1
ω_4 | 0 | (5,1) | (2,1) | (1,1) | 0 | 0 | 1

, and such that 4. [item:prop:seller:dual:3] for every and we have and for each there exist , and such that by and Proposition [prop:seller:dual], there exist and
for all such that this completes the inductive step .also define for all then , are also satisfied when .the mixed stopping time is defined by setting and it is straightforward to show by induction that for all .moreover , since , we have observe also that for all , where is defined by ( [ eq:2013 - 07 - 13-chi - star ] ) .it then follows from , and that for all we now show by backward induction that for all at time the result is trivial because .suppose now that ( [ eq : seller : dual - opt:5 ] ) holds for some .then , by the tower property of conditional expectation , and , by , the predictability of , and , this concludes the inductive step. we also show by backward induction that for all at time suppose now that ( [ eq : seller : dual - opt:10 ] ) holds for some . then by , and the tower property of conditional expectation , we have this concludes the inductive step . by proposition [ prop:20130727:pi - ag - dual ] , a stopping time and a pair be constructed such that to establish the reverse inequality we prove by backward induction that for any , and when , since and .now fix any , and suppose that then , by the tower property of conditional expectation , and since and , it follows that which proves ( [ eq : reverse - ineq - dual repr - seller ] ) . the construction in the proof of theorem [prop : seller : zau0=initial - endowments ] with initial portfolio yields a strategy . for any and we have , and therefore ( [ eq : reverse - ineq - dual repr - seller ] ) with yields it follows that the set is clearly polyhedral with recession cone . for proceed by induction .suppose that is polyhedral and its recession cone is .then is polyhedral and its recession cone is ( * ? ? ?* corollary 8.3.3 ) .being polyhedral , is the convex hull of a finite set of points and directions , and its recession cone is the convex hull of the origin and the directions in .the set is polyhedral ( * ? ? ?* corollary 19.3.2 ) and hence it is the convex hull of a finite set of points and directions .since the cone can be written as the convex hull of the origin and a finite number of directions , it is possible to write as the convex hull of a finite set of points , all in , and a finite set of directions .these directions are exactly the directions in and , i.e. the directions in and .thus the recession cone of is since by ( [ eq : qt - recursive ] ) .this means that the set is closed and its recession cone is ( * ? ? ?* corollary 9.8.1 ) .moreover , since and are polyhedral , it follows that is polyhedral ( * ? ? ?* theorem 19.6 ) , which means that is polyhedral , concluding the inductive step .the proof is by backward induction .since , from we have it immediately follows that on the set .on the set we have because is a cone , and therefore * on the set we have and therefore so that since it follows that on . * on the set we have because by .there are two further possibilities . * * on we have and therefore * * on we have and therefore as claimed . in view of proposition[ prop:2012 - 07 - 26:hedging - construct - converse ] , to verify ( [ eq:2012 - 07 - 26:constr - equivalence ] ) it is sufficient to show that for every there exists a pair such that . 
to this end , define and .suppose by induction that for some we have constructed predictable sequences and such that and because of , there exists an -measurable random variable such that and equations and then give where follows from the fact that is a convex cone .this means there exists a random variable such that put .then , which concludes the inductive step .now define the mixed stopping time by we also put .we have constructed and such that and finally , we construct such that and . by the definition of the deferred solvency cones , for each there is a liquidation strategy starting from at time .we put which means that for each , with , completing the proof of ( [ eq:2012 - 07 - 26:constr - equivalence ] ) . next , if follows from ( [ eq:2012 - 07 - 26:constr - equivalence ] ) that by proposition [ prop:2012 - 09 - 19:zt - closed ] , is polyhedral , hence closed . as a result, the set is also closed .it is non - empty and bounded above because for any large enough , and for any small enough .this means that the supremum is attained .it follows that , so we know that a strategy can be constructed such that .theorem [ prop:2012 - 07 - 26:hedging - construct ] gives the maximum is attained , so .the strategy constructed by the method in the proof of theorem [ prop:2012 - 07 - 26:hedging - construct ] from the initial portfolio therefore realises the supremum in ( [ eq : buyer - bid - price - gradual ] ) .we write this supremum as a maximum , ,\end{aligned}\ ] ] and apply proposition [ prop : am - eur ] , which gives \\ & = \max_{\chi\in\mathcal{x } } \left[-p^\mathrm{a}_{j}(-\xi_\chi)\right],\end{aligned}\ ] ] where is the ask ( seller s ) price in currency of a european option with expiry time and payoff as defined in appendix [ sect : eur - opt ] .we can now apply lemma [ lem : eur - ask - price - dual - repr ] to write for any , since is a martingale under , we have this means that proving ( [ eq : pi - bu ] ) .we know that realises the supremum in ( [ eq : buyer - bid - price - gradual ] ) , and therefore the above maxima over are attained at .a pair such that can be constructed by the method of ( * ? ? ?* proposition 5.3 ) for the european option with payoff , completing the proof .we recall a result for european options in the market model with assets under transaction costs .this is needed in the proof of the dual representation for the bid price of an american option .a european option obliges the seller ( writer ) to deliver a portfolio at time .the set of strategies superhedging the seller s position is given as and the _ ask price _ ( _ seller s price _ ) of such an option in currency is the following result can be found in ( * ? ? ?* section 4.3.1 ) .[ lem : eur - ask - price - dual - repr ] the ask price in currency of a european option can be represented as moreover , a pair such that can be constructed algorithmically .roux , a. zastawniak , t. 2009 , american options under proportional transaction costs : pricing , hedging and stopping algorithms for long and short positions , _ acta applicandae mathematicae _ * 106 * , 199228 .
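As a small numerical illustration of the deferred solvency condition used above, the following sketch checks a portfolio at a node with a wide bid-ask spread. It only tries the simplest liquidation strategy, holding the portfolio unchanged until it can be liquidated, which suffices here but is not the most general strategy allowed by the definition; the tree, the quotes and the portfolio are hypothetical stand-ins loosely modelled on the `up' subtree of Example [exl:new] (spread [3, 9] now, tighter quotes afterwards).

```python
def solvent(x, bid, ask):
    """Two-asset solvency check: (x1 foreign, x2 domestic) liquidates to >= 0."""
    x1, x2 = x
    return (x1 * bid + x2 >= 0) if x1 >= 0 else (x1 * ask + x2 >= 0)

def deferred_solvent(x, node, quotes, successors):
    """Sufficient check of deferred solvency: hold x unchanged and liquidate at
    the first node (on every path) where it has become solvent."""
    if solvent(x, *quotes[node]):
        return True
    kids = successors.get(node, [])
    if not kids:                       # time horizon reached without solvency
        return False
    return all(deferred_solvent(x, k, quotes, successors) for k in kids)

quotes = {"u": (3.0, 9.0), "uu": (8.0, 8.0), "ud": (4.0, 4.0)}   # hypothetical (bid, ask)
successors = {"u": ["uu", "ud"]}
x = (1.0, -4.0)                        # long 1 foreign, short 4 domestic
print(solvent(x, *quotes["u"]))                       # False: 1 * 3 - 4 < 0
print(deferred_solvent(x, "u", quotes, successors))   # True: liquidation succeeds one step later
```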
american options in a multi - asset market model with proportional transaction costs are studied in the case when the holder of an option is able to exercise it gradually at a so - called mixed ( randomised ) stopping time . the introduction of gradual exercise leads to tighter bounds on the option price when compared to the case studied in the existing literature , where the standard assumption is that the option can only be exercised instantly at an ordinary stopping time . algorithmic constructions for the bid and ask prices and the associated superhedging strategies and optimal mixed stoping times for an american option with gradual exercise are developed and implemented , and dual representations are established .
consider a network which evolves under the removal and addition of vertices . in each unit of timewe add vertex and remove vertices .removal of a vertex also implies that all the edges incident on that vertex vanish and consequently the degree of vertices at the end of those edges decrease . here can be interpreted as the ratio of vertices removed to those added , so represents a growing network , a shrinking one , while implies vertex turnover but fixed network size .the equations to follow represent the completely general case .however , for the purposes of this paper we will specialize to networks of constant size as we assume that the network already exists and we would like to preserve its original structure , by balancing the rate of attack against the rate of repair .let be the fraction of nodes in the network that at a given time have degree . by definitionthen it has the normalization : in addition to this we would like to have freedom over the degree of the incoming vertex .let be the probability distribution governing this , with the constraint .we also have to consider how a newly arriving vertex chooses to attach to other vertices extant in the network and how a vertex is removed from the same .let be the probability that a given edge from a new node is connected to a node of degree , multiplied by the total number of nodes .then is the probability that an edge from a new node is connected to some node of degree .similarly , let be the probability that a given node with degree fails or is attacked during one node removal also multiplied by .then is the total probability to remove a node with degree during one node removal .note that the introduction of the deletion kernel is what sets our model apart from previous models describing the network evolution process .since each newly attached edge goes to some vertex with degree , we have the following normalization conditions : armed with the given definitions and building on the work done previously by , we are now in a position to write down a rate equation governing the evolution of the degree distribution . for a network of nodes at a given unit of time ,the total number of nodes with degree is .after one unit of time we add one vertex and take away vertices , so the number is , where is the new value of .therefore we have , where is the conditional probability of following an edge from a node of degree and reaching a node of degree .alternatively , it is the degree distribution of nodes at the end of an edge emanating from a node of degree .note that and are always zero , and for an uncorrelated network , .the terms involving describe the flow of vertices with degree to and to as a consequence of edges gained due to the addition of new vertices .the first two terms involving describes the flow of vertices with degree to and to as vertices lose edges as a result of losing neighbors .the term represents the direct removal of a node of degree at rate .finally represents the addition of a vertex with degree .processes where vertices gain or lose two or more edges vanish in the limit of large and are not included in eq . .the rate equation described above presents a formidable challenge due to the appearance of from the terms representing deleted edges from lost neighbors .rate equations for recovery schemes based on edge rewiring are slightly easier to deal with . 
upon failure , all edges connected to that nodeare rewired so that the degrees of the deleted node s neighbors do not change , and this term does not appear .the specific case of preferential failure in power - law networks was considered previously in this context by . however , this recovery protocol can only be used on strictly growing networks , because a network of constant size would become dense under its application .moreover , it is dependent on the power - law structure of the network .the methods described here are general and are applicable to arbitrary degree distributions .apart from edge rewiring , the special case of random deletion also leads to a significant simplification .uniform deletion amounts to setting .doing so , then leads to the following , which renders eq .independent of and thus independent of any degree - degree correlations .random deletion hence closes equation for , enabling us to seek a solution for the degree distribution for a given and . with non - uniform deletion ,the degree distribution depends on a two - point probability distribution , and as we shall see in section [ sec : correlations ] , the two - point probability distribution will depend on the three - point probability distribution and so on .this hierarchy of distributions , where the -point distribution depends on the -point distribution , is not closed under non - uniform failure and hence it is difficult to seek an exact solution for the degree distribution .nevertheless , in the following , we demonstrate a method that allows us to navigate our way around this problem . as mentioned before , for the purposes of this paper we will be interested in a network of constant size , where the rate of attack is compensated by the rate of repair . assuming that the network reaches ( or already is ) a stationary distribution and does not possess degree - degree correlations , we set and can further simplify eq . .let be the mean degree of nodes removed from the network ( i.e. ) , and the mean degree of the original degree distribution .then we have , the evolution process , specifically non - uniform removal of nodes , can and in many cases will introduce degree - degree correlations into our networks . in order to confront this issue, we will first find choices for and that satisfy the solutions to the rate equation , for a given , in a network that is uncorrelated .we will then demonstrate that a special subset of those solutions for and is an uncorrelated fixed point of the rate equation for the degree - degree correlations .this opens up the possibility , that a network that initially has no degree - degree correlations will not develop correlations from the evolution process .although the rate equation described in eq .is fairly complicated , it is a relatively straightforward exercise to determine the relation between edges added to those removed . multiplying eq . by , summing over and rearranging yields .this equation is simple to interpret .since the network has a constant fixed - point degree distribution , the average degree of the network remains constant , and therefore edges are removed and added at the the same rate .in this section we describe our method under which networks can recover from various forms of attack . 
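As a point of reference for what follows, the constant-size turnover process analysed above is easy to simulate directly. The sketch below uses random (uniform) deletion, the case in which the rate equation closes, together with uniform attachment of a replacement node whose degree is drawn from a fixed distribution; all parameters are illustrative, and the code merely tracks the empirical degree distribution rather than trying to preserve a prescribed one.

```python
import random
from collections import Counter

def turnover(n=1000, steps=10000, pi=(0.0, 0.2, 0.5, 0.2, 0.1)):
    """Constant-size turnover: delete a uniformly random node, then add a node
    whose degree j ~ pi and whose edges go to uniformly chosen existing nodes."""
    adj = {v: set() for v in range(n)}
    for _ in range(n):                                   # simple seed graph, mean degree ~ 2
        a, b = random.sample(range(n), 2)
        adj[a].add(b); adj[b].add(a)
    next_id = n
    for _ in range(steps):
        victim = random.choice(list(adj))                # uniform deletion kernel
        for u in adj.pop(victim):
            adj[u].discard(victim)
        j = random.choices(range(len(pi)), weights=pi)[0]
        adj[next_id] = set()
        for u in random.sample(list(adj.keys() - {next_id}), j):
            adj[next_id].add(u); adj[u].add(next_id)     # uniform attachment kernel
        next_id += 1
    return Counter(len(nb) for nb in adj.values())

hist = turnover()
total = sum(hist.values())
print({k: round(hist[k] / total, 3) for k in sorted(hist)})   # empirical p_k
```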
The types of attack we consider are those generally studied by most authors (though in static networks), namely preferential and targeted attacks. Random failures are the most widely studied schemes in both static and evolving networks, because they lend themselves to relatively simple analysis. Such failures may represent, say, the disruption of power lines or transformers in a power grid owing to extraneous factors such as weather. However, the functionality of most networks often depends on the performance of high-degree nodes, so non-uniform attack schemes focus on these. For example, in a peer-to-peer network a high-degree node could be a central user holding large amounts of data. High degree could also indicate the load carried by a node during its operation, or the public visibility of a person in a social network. It is reasonable to assume that a malicious entity such as a computer virus is more likely to strike these important nodes. Previous authors have employed this removal strategy (among others) on a variety of simulated and real networks and have found it to be highly effective in disrupting the structure of the attacked network.

[Figure: degree distribution of a network with the stated mean degree under preferential attack and uniform attachment.]

We simulate these kinds of attacks using preferential failure, which samples nodes in proportion to their number of connections, and through an outright attack on the highest-degree nodes, represented by a deletion kernel involving the Heaviside step function. Our method of compensation involves control over two processes: first, the newly incoming or repaired vertex chooses a degree for itself, drawn from some distribution; second, this vertex decides how to attach to the other vertices in the network, governed by the attachment kernel. Our goal here is to solve for the attachment kernel that preserves the original degree distribution, subject to a given deletion kernel. We will assume that the final network is uncorrelated and work with the corresponding rate equation, keeping in mind that an arbitrary choice of deletion and attachment kernels is probably not consistent with that assumption. Introducing the cumulative distributions for the attacked and the newly added vertices, we sum the rate equation over degrees, making use of the steady-state condition. This leads to a relation which, after dividing both sides by the appropriate factor, yields an expression for the attachment kernel (eq. [eq:genattachment]).

[Figure: degree distribution of a network under high-degree attack and uniform attachment.]

Equation [eq:genattachment] represents the set of possible solutions for the attachment kernel that will lead to the desired degree distribution, given that the final network is uncorrelated. The correct choice of solution from this set must obey the consistency condition that, when it is inserted into the rate equation for the degree-degree correlations, the correlations vanish. (A short simulation sketch of the attack kernels and of the repair step is given below.)
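The deletion kernels just described, together with the repair step derived in the next section (draw the degree of each replacement node in proportion to the product of the deletion kernel and the degree distribution, then attach its edges according to the excess degree distribution), can be put into a short simulation. The following sketch is illustrative only: the seed graph and parameters are toy choices, the current empirical degree distribution is used in place of the original one, and attaching to uniformly chosen edge ends (probability proportional to degree) stands in for excess-degree attachment.

```python
import random
from collections import Counter

def preferential_victim(adj):
    """Preferential failure: remove a node with probability proportional to its degree."""
    nodes = list(adj)
    return random.choices(nodes, weights=[len(adj[v]) for v in nodes])[0]

def targeted_victim(adj, kmin):
    """Targeted attack: remove a node of degree >= kmin (Heaviside-type kernel)."""
    top = [v for v in adj if len(adj[v]) >= kmin]
    return random.choice(top) if top else preferential_victim(adj)

def repair(adj, psi, new_id):
    """Add a replacement node: degree ~ psi_k * p_k, edges to random edge ends."""
    hist = Counter(len(nb) for nb in adj.values())      # empirical p_k
    weights = {k: psi(k) * hist[k] for k in hist}
    if not any(weights.values()):
        weights = dict(hist)                            # fallback: plain p_k
    j = random.choices(list(weights), weights=list(weights.values()))[0]
    stubs = [v for v, nb in adj.items() for _ in nb]    # one entry per edge end
    targets = set()
    while stubs and len(targets) < min(j, len(adj)):
        targets.add(random.choice(stubs))
    adj[new_id] = set(targets)
    for u in targets:
        adj[u].add(new_id)

# toy run: preferential attack (psi_k = k) balanced by repair
adj = {v: set() for v in range(500)}
for _ in range(1500):                                   # simple random seed graph
    a, b = random.sample(range(500), 2)
    adj[a].add(b); adj[b].add(a)
for step in range(2000):
    v = preferential_victim(adj)
    for u in adj.pop(v):
        adj[u].discard(v)
    repair(adj, psi=lambda k: k, new_id=500 + step)
print(sorted(Counter(len(nb) for nb in adj.values()).items()))
```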
in section [ sec : correlations ] , we will show that the following _ ansatz _ chosen from the above set is such a choice : equation was previously derived by for the case of random deletion .here we posit that it works more generally for the case of non - uniform attack when our initial network is uncorrelated ( with some caveats that will be explained shortly ) .the choice of makes intuitive sense because the quantity is the probability distribution governing the number of edges belonging to a node , reached by following a randomly chosen edge to one of its ends , _ not including _ the edge that was followed .this is one less than the total degree of the node and is also referred to as the _ excess _ degree distribution .note that in our model we specify the degree of incoming nodes .therefore the appearance of the excess degree distribution is a signature of an uncorrelated network , implying the newly arriving edges are being introduced in an uncorrelated fashion .there are basically two conditions for the existence of a solution given by eq . ; must be a valid probability distribution , and must be finite .these are not very stringent conditions and are typically satisfied by most degree distributions . in other words , barring some pathological cases , it is always possible to find a solution of the form of eq . .there is an additional consideration , the deletion process may lead to nodes of degree zero in a network that originally did not have any such nodes . while the fraction of such nodes is vanishingly small for networks with say , poisson degree distributions , they may be non - trivial for power - law networks .as such , it is important to set ( the probability to attach to a node of degree zero ) to a generous value in order to reconnect these nodes to the network .we are now in a position to effect our repair on the network .given the original degree distribution and the form of the attack , eq .gives us the precise recipe for recovering the degree distribution .we need to sample the degrees of the newly introduced nodes in proportion to the product of the deletion kernel _ and _ the degree distribution , and then attach these edges in proportion to the excess degree distribution of the network . to test our repair method , we provide four examples for initially uncorrelated networks with nodes generated using the _ configuration model _ . in the configuration model ,only the degrees of vertices are specified , apart from this sole constraint the connections between vertices are made at random .nodes ) with under targeted attack using from eq . after setting .,width=302 ]the simulation results show the initial degree distribution and the compensated one subject to two types of attacks on poissonian networks with degree distribution given by , in fig .[ fig : poissonpref ] we show the resulting degree distribution where nodes were attacked preferentially , i.e. 
, while in fig .[ fig : poissontheta ] we show the case for targeted attack only on high degree nodes represented by where is the _ minimum _ degree of the node attacked .the degrees of newly added nodes were chosen from the distribution with the attachment kernel set to one , corresponding to the solution of equation after substituting in the appropriate .the data points in all the figures are averaged over multiple realizations of the network each subject to iterations of addition and deletion .the points along with corresponding error bars represent the final degree distribution , whereas the solid line represents the initial network . as the figures show , the final networks are in excellent agreement with the initial degree distribution. nodes ) with exponent and exponential cutoff , under preferential attack using from eq . after setting .,width=302 ]we employ the same attack kernels , and a targeted attack only on high degree nodes represented by on two other examples .our first example network has links distributed according to a power - law with an exponential cutoff , is a normalization constant which in this case is , where the function is the poly - logarithm function defined as : the exponential cut - off has been introduced for three reasons .first , many real world networks appear to show this cutoff and second , it renders the distribution normalizable for ranges of the exponent .finally , for a pure power - law network it is in principle possible to assign a degree to a node that is greater than the system size .the exponential cutoff ensures that the probability for this to happen is vanishingly small . in the other examples that we consider, the functional form of the distribution already ensures this property .the second network has an exponential distribution given by , fig .[ fig : exptheta ] shows the results for the exponentially distributed network ( ) undergoing targeted attack . in fig .[ fig : powerpref ] we show the resulting degree distribution for the power - law network ( and ) where nodes were attacked preferentially .both figures indicate the initial and final networks are in excellent agreement . at this point , aside from the technical details , it is worth reminding ourselves of the big picture .we have demonstrated above that if a network with a certain degree structure is subjected to an attack that aims to destabilize that structure , one can recover the same , by manipulating the rules by which vertices are introduced to the network .the rules that we employ in our repair method are dependent on the types of attacks that our networks are subject to . in the following section we give a detailed justification of the employment of our method . in order for our results from the previous sections to be valid, we must demonstrate that our initially uncorrelated networks remain uncorrelated under our repair scheme . 
to accomplish this, we will define a rate equation for the degree - degree correlations and demonstrate that the uncorrelated network is a fixed point of this equation .our rate equation will describe the evolution of the expected number of edges in the network with ends of degree and .let the expected number of such edges in the network be , where , and is the probability that a randomly selected edge has degree at one end and degree in the other .the expected number of edges after one time step where we add and take away edges is then , e'_{l , k } = m e_{l , k } + \delta , \label{eq : edgerate1}\ ] ] where represents all other edge addition and removal processes .we have already established that in the steady state case , irrespective of the degree distribution , so our goal is equivalent to showing that is equal to zero for an uncorrelated network generated / repaired with our special choices of and . as a result , implying that the degree - degree correlations ( if any ) remain constant over time .we will assume that our network is locally tree - like , something which holds true for most random graphs .in addition we will only consider processes out to second nearest - neighbors of a node .these assumptions allows us to avoid including terms in the rate equation representing removal of nodes with neighbors that are connected to each other .nevertheless , there are a large number of remaining processes that we will need to consider .to start things off , note that the rate equation is symmetric in the indices and .any process that contributes to changing while holding constant also contributes to changing while holding constant .we can therefore consider contributions to from , and and add on the corresponding symmetric terms at the end .the first process we need to take into account is a direct addition of a node of degree .this contributes two flows to the rate equation , and .similarly , the direct deletion of a node of degree contributes and .next , we will have to take into account second nearest - neighbor processes .we can be certain that these terms are of the same order by merely counting the number of unsummed probability distributions that go into each process .there will be two terms for the attachment process representing the situation where a new node of any degree attaches to a node of degree or , that was previously attached to a node of degree .these terms are and .similarly there are two removal processes , where a node of any degree that is removed from the network was previously attached to a node of degree or that has neighbor(s ) of degree .unfortunately these terms introduce three - point correlations into the rate equation .analogous to methods employed in similar hierarchy problems , we use a moment - closure approximation to represent these processes as a product of two two - point correlations in the following manner , adding all of these terms together our final equation for is , in addition to terms where and are interchanged . 
after inserting the appropriate and from eq .along with the uncorrelated solution , it can be shown that , according to eq ., there exist a set of solutions such that an initially uncorrelated network will not develop any degree - degree correlations as a consequence of the evolution process .the attachment kernel that was employed in the network evolution process , described in section [ sec : designattachment ] , was a subset of these solutions .this allowed the repair method to be employed by maintaining negligible correlations in the network .one must point out , that we have not explicitly demonstrated the stability of the uncorrelated solution to perturbations .for example fluctuations in or in the number of edges may drive the network away from the uncorrelated steady - state . an analytical approach to determine this , say using linear stability analysis is difficult , due to the numerous related probability distributions involved .so instead we resort to a numerical approach .we measured the pearson correlation coefficient between the degrees of nodes at both ends of an edge for all our model networks . for the poisson and exponential cases, the correlations remained negligible during the evolution process .on the other hand , the power - law network developed non - trivial correlations .we have not been able to determine whether the appearance of these correlations was due to finite - size effects , or instability in the uncorrelated solution , or to some other cause .the results show that the agreement between the initial and final degree distributions is very good , and it seems that in this particular case , the correlations did not demonstrate a significant effect on the final state of the network .in this paper , we have shown how to preserve a network s degree distribution from various forms of attack or failures by allowing it to adapt via the simple manipulation of rules that govern the introduction of nodes and edges .we based our analysis on a rate equation describing the evolution of the network under arbitrary schemes of addition and deletion .in addition to choosing the degree of incoming nodes , we allow ourselves to choose how nodes attach to the existing network . to deal with the special case of non - uniform deletionwe have introduced a rate equation for the evolution of degree - degree correlations and have used that in combination with the equation for the degree distribution to come to our solution .we have provided examples of the applicability of this method using a combination of analytical techniques and numerical simulations on a variety of degree distributions , yielding excellent results in each case .the structure of many networks in the real world is crucially related to their performance .many authors have seized on the fact that technological networks such as the internet and peer - to - peer networks are power - law in nature , and have used this to design efficient search schemes among other things .loss of structural properties of these networks then lead to severe constraints on their performance .recent empirical studies have suggested that node removal , for example , in the world wide web , is typically non - uniform in nature . in view of this , it is crucial for researchers to come up with effective solutions to try and manage these types of disruptions . 
to the best of our knowledge, there is a considerable gap in understanding the non - uniform deletion process of nodes and edges and corresponding methods to deal with them .this paper begins to address this gap .it must be pointed out that the methods we have described depends crucially on the assumption of negligible correlations as the network evolves . curiously enough , in our example power - law network , we were able to get very good agreement between the initial and final degree distributions , in spite of the appearance of non - trivial correlations .it will certainly be interesting to see if our methods can be extended to the case of networks with strong correlations , and other metrics describing network structure .perhaps it is possible to directly confront the rate equation for the degree - degree correlations , although this seems a difficult prospect at the moment .the idea of preserving the structure of networks from attacks by allowing it to react in real - time is a relatively nascent one and the authors look forward to more developments in this area .
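The numerical check of degree-degree correlations described above, the Pearson correlation coefficient between the degrees at the two ends of an edge, takes only a few lines. The sketch below works on the same adjacency-dictionary representation used in the earlier sketches; the star-graph call is just a sanity check (the coefficient is undefined for regular graphs, where both standard deviations vanish).

```python
import math

def degree_assortativity(adj):
    """Pearson correlation between the degrees at the two ends of an edge;
    each undirected edge contributes both orientations."""
    xs, ys = [], []
    for v, nbrs in adj.items():
        for u in nbrs:
            xs.append(len(adj[v])); ys.append(len(adj[u]))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(round(degree_assortativity(star), 3))   # -1.0: the hub attaches only to leaves
```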
there has been a considerable amount of interest in recent years on the robustness of networks to failures . many previous studies have concentrated on the effects of node and edge removals on the connectivity structure of a _ static _ network ; the networks are considered to be static in the sense that no compensatory measures are allowed for recovery of the original structure . real world networks such as the world wide web , however , are not static and experience a considerable amount of turnover , where nodes and edges are both added and deleted . considering degree - based node removals , we examine the possibility of preserving networks from these types of disruptions . we recover the original degree distribution by allowing the network to react to the attack by introducing new nodes and attaching their edges via specially tailored schemes . we focus particularly on the case of non - uniform failures , a subject that has received little attention in the context of evolving networks . using a combination of analytical techniques and numerical simulations , we demonstrate how to preserve the _ exact _ degree distribution of the studied networks from various forms of attack . recent years have witnessed a substantial amount of interest within the physics community in the properties of networks . techniques from statistical physics coupled with the widespread availability of computing resources have facilitated studies ranging from large scale empirical analysis of the worldwide web , social networks , biological systems , to the development of theoretical models and tools to explore the various properties of these systems . a relatively large body of work has been devoted to the study of degree distributions of networks , focusing both on their measurement , and formulation of theories to explain their emergence and their effects on various properties such as resilience and percolation . these studies are mostly aimed at networks in the real world that evolve naturally , in the sense that they are driven by dynamical processes not under our control . representative examples being social , biological networks and information networks like the world wide web , which though manmade , grows in a distributed fashion . there are however different classes of infrastructure related networks such as the transportation and power grids , communication networks such as the telephone and internet , that evolve under the direction of a centrally controlled authority . in addition to these is a relatively new class of networks which fall in between these two types , the classic example being peer - to - peer file - sharing networks . these networks grow in a collaborative , distributed fashion , so that we have no direct influence over their structure . however , we can manipulate some of the rules by which these form , giving us a limited but potentially useful influence over their properties . it is a well established fact , that the structure of such networks is directly related to their performance . in view of this , a certain degree of effort has been made to tailor these _ designer _ networks towards structures that optimize certain properties such as robustness to removal of nodes and efficient information transfer among other things . these networks typically experience a significant amount of vertex / edge turnover , with users joining and leaving the network voluntarily , possible failures of key components and resources , or intentional attacks such as denial of service . 
these factors can lead to severe disruption of the network structure and as a result , loss of its key properties . in the face of this , it is natural to extend our analysis to the effects of these failures / attacks and use our limited control to attempt to adaptively restore the original structure of these networks . previous work has focused on the effects of disruption on static networks , where authors have studied the connectivity structure under the random / targeted removal of nodes and edges . the network is considered static in that no compensatory measures , such as the introduction of new edges or nodes , are permitted . the effect of these removals have been measured against the existence of the _ giant component _ : the largest set of vertices in the network of o( ) , where is the number of nodes , that are connected to each other by at least one path . a representative example can be found in the paper by albert _ et al _ , where they studied the size of the giant component of scale free networks such as the internet , under simulated random failures and targeted attacks on high degree nodes . one of the interesting things they found was that , while these networks were remarkably robust to random failures , they were extremely fragile to targeted attacks . this emphasizes the importance of non - uniform removal strategies . unlike in the static case , the networks considered in this paper evolve in time with sustained node and edge removals . the network is allowed to react to these disruptions via the introduction of new nodes and edges , chosen to be attached in a manner such that the network retains it original form , at least in terms of the degree distribution . such models , conventionally referred to in the literature as _ reactive networks _ have been discussed before , see for instance . here we assume that the designers of the network are only aware of the statistical properties of the removed nodes and have no ability to influence the existing network beyond the introduction of new nodes or reattachment of those removed . consequently they have two processes under their control to compensate for the attack . the first is the degree of the introduced vertices and the second is the process by which a newly introduced vertex chooses to attach to a previously extant vertex on the network . failure is thus compensated by adding nodes and edges chosen from an appropriate degree distribution and attaching them to the network via specially tailored schemes . note that in our model , one can re - introduce nodes that have been removed or introduce completely new sets of nodes . the former case could be indicative of say a computer in a peer - to - peer network that loses its connection , and would like to reconnect . the latter could represent the permanent loss of web - pages from the world wide web and the introduction of a new web - page . we use the attachment kernel of krapivsky and redner , to simulate the introduction of nodes and edges , and via the introduction of a deletion kernel we analyze the interesting and neglected case of non - uniform deletion . a variety of models have been proposed to simulate network evolution and growth where vertices are both added and deleted , but these have concentrated on the relatively simple case of uniform deletion . we will show that under uniform failures , the appearance of degree - degree correlations , that typically arise as a result of growth processes , as discussed in , can be neglected . 
previous models have taken advantage of precisely this fact to circumvent the difficulty of dealing with degree - degree correlations . for the case of non - uniform deletion , correlations can not be ignored . in this paper we confront this issue by demonstrating how to preserve an initially uncorrelated network throughout the evolution process with the introduction of an additional rate equation for the degree - degree correlations . we give analytical results and numerical simulations for a variety of degree distributions under various forms of attack . in all the cases that we study , we recover the _ exact _ degree distributions .
information - theoretic research on capacity and coding for write - limited memory originates in , , and . in ,the authors consider a model of write - once memory ( wom ) .in particular , each memory cell can be in state either 0 or 1 .the state of a cell can go from 0 to 1 , but not from 1 back to 0 later .these write - once bits are called _ wits_. it is shown that , the efficiency of storing information in a wom can be improved if one allows multiple rewrites and designs the storage / rewrite scheme carefully .multilevel flash memory is a storage technology where the charge level of any cell can be easily increased , but is difficult to decrease .recent multilevel cell technology allows many charge levels to be stored in a cell .cells are organized into blocks that contain roughly cells .the only way to decrease the charge level of a cell is to erase the whole block ( i.e. , set the charge on all cells to zero ) and reprogram each cell .this takes time , consumes energy , and reduces the lifetime of the memory .therefore , it is important to design efficient rewriting schemes that maximize the number of rewrites between two erasures , , , .the rewriting schemes increase some cell charge levels based on the current cell state and message to be stored . in this paper, we call a rewriting scheme a _ modulation code_. two different objective functions for modulation codes are primarily considered in previous work : ( i ) maximizing the number of rewrites for the worst case and ( ii ) maximizing for the average case . asfinucane et al . mentioned , the reason for considering average performance is the averaging effect caused by the large number of erasures during the lifetime of a flash memory device .our analysis shows that the worst - case objective and the average case objective are two extreme cases of our optimization objective .we also discuss under what conditions each optimality measure makes sense . in previous work ( e.g. , ) ,many modulation codes are shown to be asymptotically optimal as the number of cell - levels goes to infinity .but the condition that can not be satisfied in practical systems . therefore , we also analyze asymptotically optimal modulation codes when is only moderately large using the results from load - balancing theory .this suggests an enhanced algorithm that improves the performance of practical system significantly .theoretical analysis and simulation results show that this algorithm performs better than other asymptotically optimal algorithms when is moderately large .the structure of the paper is as follows .the system model and performance evaluation metrics are discussed in section [ sec : optimality - measure ] .an asymptotically optimal modulation code , which is universal over arbitrary i.i.d .input distributions , is proposed in section [ sub : another - rewriting - algorithm ] .the storage efficiency of this asymptotically optimal modulation code is analyzed in section [ sec : an - enhanced - algorithm ] .an enhanced modulation code is also presented in section [ sec : an - enhanced - algorithm ] .the storage efficiency of the enhanced algorithm is also analyzed in section [ sec : an - enhanced - algorithm ] .simulation results and comparisons are presented in section [ sec : simulation - results ] .the paper is concluded in section [ sec : conclusion ] .flash memory devices usually rely on error detecting / correcting codes to ensure a low error rate . 
so far, practical systems tend to use bose - chaudhuri - hocquenghem ( bch ) and reed - solomon ( rs ) codes .the error - correcting codes ( ecc s ) are used as the outer codes while the modulation codes are the inner codes . in this paper , we focus on the modulation codes and ignore the noise and the design of ecc for now .let us assume that a block contains -level cells and that cells ( called an -cell ) are used together to store -ary variables ( called a -variable ) .a block contains -cells and the -variables are assumed to be i.i.d . random variables .we assume that all the -variables are updated together randomly at the same time and the new values are stored in the corresponding -cells .this is a reasonable assumption in a system with an outer ecc .we use the subscript to denote the time index and each rewrite increases by 1 .when we discuss a modulation code , we focus on a single -cell .( the encoder of the modulation code increases some of the cell - levels based on the current cell - levels and the new value of the -variable . )remember that cell - levels can only be increased during a rewrite .so , when any cell - level must be increased beyond the maximum value , the whole block is erased and all the cell levels are reset to zero .we let the maximal allowable number of block - erasures be and assume that after block erasures , the device becomes unreliable .assume the -variable written at time is a random variable sampled from the set with distribution .for convenience , we also represent the -variable at time in the vector form as where denotes the set of integers modulo .the cell - state vector at time is denoted as and denotes the charge level of the -th cell at time when we say we mean for since the charge level of a cell can only be increased , continuous use of the memory implies that an erasure of the whole block will be required at some point .although writes , reads and erasures can all introduce noise into the memory , we neglect this and assume that the writes , reads and erasures are noise - free .consider writing information to a flash memory when encoder knows the previous cell state the current -variable , and an encoding function that maps and to a new cell - state vector .the decoder only knows the current cell state and the decoding function that maps the cell state back to the variable vector .of course , the encoding and decoding functions could change over time to improve performance , but we only consider time - invariant encoding / decoding functions for simplicity .the idea of designing efficient modulation codes jointly to store multiple variables in multiple cells was introduced by jiang . in previous work on modulation codesdesign for flash memory ( e.g. , , , ) , the lifetime of the memory ( either worst - case or average ) is maximized given fixed amount of information per rewrite .improving storage density and extending the lifetime of the device are two conflicting objectives .one can either fix one and optimize the other or optimize over these two jointly .most previous work ( e.g. , ) takes the first approach by fixing the amount of information for each rewrite and maximizing the number of rewrites between two erasures . 
in this paper, we consider the latter approach and our objective is to maximize the total amount of information stored in the device until the device dies .this is equivalent to maximizing the average ( over the -variable distribution ) amount of information stored per cell - level , where is the amount of information stored at the -th rewrite , is the number of rewrites between two erasures , and the expectation is over the -variable distribution .we also call as _ storage efficiency_. in previous work on modulation codes for flash memory , the number of rewrites of an -cell has been maximized in two different ways . the authors in consider the worst case number of rewrites and the authors in consider the average number of rewrites . as mentioned in , the reason for considering the average case is due to the large number of erasures in the lifetime of a flash memory device .interestingly , these two considerations can be seen as two extreme cases of the optimization objective in ( [ eq : opt ] ) .let the -variables be a sequence of i.i.d .random variables over time and all the -cells .the objective of optimization is to maximize the amount of information stored until the device dies .the total amount of information stored in the device - cell changes to the same value , should it count as stored information ? should this count as a rewrite ?this formula assumes that it counts as a rewrite , so that values ( rather than ) can be stored during each rewrite .] can be upper - bounded by where is the number of rewrites between the -th and the -th erasures .note that the upper bound in ( [ eq : total_info_ub ] ) is achievable by uniform input distribution , i.e. , when the input -variable is uniformly distributed over , each rewrite stores bits of information .due to the i.i.d .property of the input variables over time , s are i.i.d .random variables over time . since s are i.i.d . over time, we can drop the subscript .since , which is the maximum number of erasures allowed , is approximately on the order of , by the law of large numbers ( lln ) , we have \log_{2}(l).\ ] ] let the set of all valid encoder / decoder pairs be where implies the charge levels are element - wise non - decreasing .this allows us to treat the problem as the following equivalent problem \log_{2}(l).\label{eq : opt2 - 1}\ ] ] denote the maximal charge level of the -th -cell at time as .note that time index is reset to zero when a block erasure occurs and increased by one at each rewrite otherwise .denote the maximal charge level in a block at time as which can be calculated as define as the time when the -th -cell reaches its maximal allowed value , i.e. , .we assume , perhaps naively , that a block - erasure is required when any cell within a block reaches its maximum allowed value .the time when a block erasure is required is defined as it is easy to see that =ne\left[t\right], ] is equivalent to maximizing .so the optimization problem ( [ eq : opt2 - 1 ] ) can be written as the following optimization problem .\label{eq : opt3}\ ] ] under the assumption that the input is i.i.d .over all the -cells and time indices , one finds that the s are i.i.d . 
random variables .let their common probability density function ( pdf ) be it is easy to see that is the minimum of i.i.d .random variables with pdf therefore , we have where is the cumulative distribution function ( cdf ) of , the optimization problem ( [ eq : opt3 ] ) becomes =\max_{f , g\in\mathcal{q}}\int nf_{t}(x)\left(1-f_{t}(x)\right)^{n-1}x\mbox{d}x.\label{eq : opt}\ ] ] note that when the optimization problem in ( [ eq : opt ] ) simplifies to .\label{eq : opt2}\ ] ] this is essentially the case that the authors in consider .when the whole block is used as one -cell and the number of erasures allowed is large , optimizing the average ( over all input sequences ) number of rewrites of an -cell is equivalent to maximizing the total amount of information stored the analysis also shows that the reason we consider average performance is not only due to the averaging effect caused by the large number of erasures .one other important assumption is that there is only one -cell per block .the other extreme is when in this case , the pdf tends to a point mass at the minimum of and the integral approaches the minimum of .this gives the worst case stopping time for the programming process of an -cell .this case is considered by .our analysis shows that we should consider the worst case when even though the device experiences a large number of erasures .so the optimality measure is not determined only by , but also by when and are large , it makes more sense to consider the worst case performance . when , it is better to consider the average performance . when is moderately large, we should maximize the number of rewrites using ( [ eq : opt ] ) which balances the worst case and the average case .when is moderately large , one should probably focus on optimizing the function in ( [ eq : opt ] ) , but it is not clear how to do this directly .so , this remains an open problem for future research .instead , we will consider a load - balancing approach to improve practical systems where is moderately large .if we assume that there is only one variable changed each time , the average amount of information per cell - level can be bounded by because there are possible new values .since the number of rewrites can be bounded by we have if we allow arbitrary change on the -variables , there are totally possible new values .it can be shown that for fixed and , the bound in ( [ eq : storage_efficiency_bound ] ) suggests using a large can improve the storage efficiency .this is also the reason jointly coding over multiple cells can improve the storage efficiency . since optimal rewriting schemes only allow a single cell - level to increase by one during each rewrite , decodability implies that for the first case and for the second case .therefore , the bounds in ( [ eq : storage_efficiency_bound2 ] ) and ( [ eq : storage_efficiency_bound ] ) also require large to improve storage efficiency .the upper bound in ( [ eq : storage_efficiency_bound ] ) grows linearly with while the upper bound in ( [ eq : storage_efficiency_bound2 ] ) grows logarithmically with .therefore , in the remainder of this paper , we assume an arbitrary change in the -variable per rewrite and , i.e. 
, the whole block is used as an -cell , to improve the storage efficiency .this approach implicitly trades instantaneous capacity for future storage capacity because more cells are used to store the same number of bits , but the cells can also be reused many more times .note that the assumption of might be difficult for real implementation , but its analysis gives an upper bound on the storage efficiency . from the analysis above with , we also know that maximizing is equivalent to maximize the average number of rewrites .in , modulation codes are proposed that are asymptotically optimal ( as goes to infinity ) in the average sense when . in this section ,we introduce a modulation code that is asymptotically optimal for arbitrary input distributions and arbitrary and . this rewriting algorithm can be seen as an extension of the one in .the goal is , to increase the cell - levels uniformly on average for an arbitrary input distribution .of course , decodability must be maintained .the solution is to use common information , known to both the encoder ( to encode the input value ) and the decoder ( to ensure the decodability ) , to randomize the cell index over time for each particular input value .let us assume the -variable is an i.i.d .random variable over time with arbitrary distribution and the -variable at time is denoted as the output of the decoder is denoted as we choose and let the cell state vector at time be , where is the charge level of the -th cell at time at , the variables are initialized to , and . the decoding algorithm is described as follows .* step 1 : read cell state vector and calculate the norm .* step 2 : calculate and the encoding algorithm is described as follows .* step 1 : read cell state and calculate and as above . if then do nothing . *step 2 : calculate and * step 3 : increase the charge level of the -th cell by 1 . for convenience , in the rest of the paper , we refer the above rewriting algorithm as `` self - randomized modulation code '' .the self - randomized modulation code achieves at least rewrites with high probability , as for arbitrary and i.i.d. input distribution .therefore , it is asymptotically optimal for random inputs as .[ sketch of proof ] the proof is similar to the proof in .since exactly one cell has its level increased by 1 during each rewrite , is an integer sequence that increases by 1 at each rewrite .the cell index to be written is randomized by adding the value .this causes each consecutive sequence of rewrites to have a uniform affect on all cell levels . as ,an unbounded number of rewrites is possible and we can assume .consider the first steps , the value is as even as possible over for convenience , we say there are s at each value , as the rounding difference by 1 is absorbed in the term .assuming the input distribution is .for the case that , the probability that is for .therefore , has a uniform distribution over .since inputs are independent over time , by applying the same chernoff bound argument as , it follows that the number of times is at most with high probability ( larger than ) for all . 
summing over , we finish the proof .notice that the randomizing term a deterministic term which makes look _ random _ over time in the sense that there are equally many terms for each value .moreover , is known to both the encoder and the decoder such that the encoder can generate `` uniform '' cell indices over time and the decoder knows the accumulated value of , it can subtract it out and recover the data correctly .although this algorithm is asymptotically optimal as , the maximum number of rewrites can not be achieved for moderate .this motivates the analysis and the design of an enhanced version of this algorithm for practical systems in next section .a self - randomized modulation code uses cells to store a -variable .this is much larger than the used by previous asymptotically optimal algorithms because we allow the -variable to change arbitrarily .although this seems to be a waste of cells , the average amount of information stored per cell - level is actually maximized ( see ( [ eq : storage_efficiency_bound2 ] ) and ( [ eq : storage_efficiency_bound ] ) ) .in fact , the definition of asymptotic optimality requires if we allow arbitrary changes to the -variable .we note that the optimality of the self - randomized modulation codes is similar to the weak robust codes presented in .we use cells to store one of possible messages .this is slightly worse than the simple method of using .is it possible to have self - randomization using only cells ?a preliminary analysis of this question based on group theory indicates that it is not .thus , the extra cell provides the possibility to randomize the mappings between message values and the cell indices over time .while asymptotically optimal modulation codes ( e.g. , codes in , , , and the self - randomized modulation codes described in section [ sec : another - rewriting - algorithm ] ) require , practical systems use values between and . compared to the number of cells ,the size of is not quite large enough for asymptotic optimality to suffice . in other words , codes that are asymptotically optimalmay have significantly suboptimal performance when the system parameters are not large enough .moreover , different asymptotically optimal codes may perform differently when is not large enough . therefore , asymptotic optimality can be misleading in this case . in this section ,we first analyze the storage efficiency of self - randomized modulation codes when is not large enough and then propose an enhanced algorithm which improves the storage efficiency significantly .before we analyze the storage efficiency of asymptotically optimal modulation codes for moderately large , we first show the connection between rewriting process and the load - balancing problem ( aka the balls - into - bins or balls - and - bins problem ) which is well studied in mathematics and computer science .basically , the load - balancing problem considers how to distribute objects among a set of locations as evenly as possible .specifically , the balls - and - bins model considers the following problem .if balls are thrown into bins , with each ball being placed into a bin chosen independently and uniformly at random , define the _ load _ as the number of balls in a bin , what is the maximal load over all the bins ? based on the results in theorem 1 in , we take a simpler and less accurate approach to the balls - into - bins problem and arrive at the following theorem . [ thm : random_loading]suppose that balls are sequentially placed into bins . 
each time a bin is chosen independently and uniformly at random .the maximal load over all the bins is and : ( ) if the maximally loaded bin has balls , and , with high probability ( ) as ( ) if , the maximally loaded bin has balls , , with high probability ( ) as ( ) if the maximally loaded bin has , , and , with high probability ( ) as denote the event that there are at least balls in a particular bin as . using the union bound over all subsets of size it is easy to show that the probability that occurs is upper bounded by using stirling s formula , we have . then can be further bounded by if , substitute to the rhs of ( [ eq : maxload_ub ] ) , we have denote the event that all bins have at most balls as . by applying the union bound , it is shown that since we finish the proof for the case of if , substitute to the rhs of ( [ eq : maxload_ub ] ) , we have by applying the union bound , we finish the proof for the case of if substitute to the rhs of ( [ eq : maxload_ub ] ) , we have where by applying the union bound , it is shown that since we finish the proof for the case of note that theorem [ thm : random_loading ] only shows an upper bound on the maximum load with a simple proof .more precise results can be found in theorem 1 of , where the exact order of is given for different cases .it is worth mentioning that the results in theorem 1 of are different from theorem [ thm : random_loading ] because theorem 1 of holds with probability while theorem [ thm : random_loading ] holds with probability ( ) .the asymptotic optimality in the rewriting process implies that each rewrite only increases the cell - level of a cell by 1 and all the cell - levels are fully used when an erasure occurs .this actually implies .since is usually a large number and is not large enough in practice , the theorem shows that , when is not large enough , asymptotic optimality is not achievable .for example , in practical systems , the number of cell - levels does not depend on the number of cells in a block .therefore , rather than only roughly charge levels can be used as if is a small constant which is independent of . in practice, this loss could be mitigated by using writes that increase the charge level in multiple cells simultaneously ( instead of erasing the block ) .[ thm : gamma1]the self - randomized modulation code has storage efficiency when and when as goes to infinity with high probability ( i.e. , ) . consider the problem of throwing balls into bins and let the r.v . be the number of balls thrown into bins until some bin has more than balls in it . while we would like to calculate , we show the loss factor for random loading with 1 and 2 random choices as comparison .note that does not take the amount of information per cell - level into account .results in fig .[ flo : fig2 ] show that the self - randomized modulation code has the same with random loading with 1 random choice and the load - balancing modulation code has the same with random loading with 2 random choices .this shows the optimality of these two modulation codes in terms of ball loading .( figure captions : loss factor with , and 1000 erasures [ flo : fig2 ] ; loss factor with , and 1000 erasures [ flo : fig4 ] ; storage efficiency [ fig : fig5 ] ; storage efficiency [ fig : fig6 ] . ) we also provide the simulation results for random loading with 1 random choice and the codes designed in , which we denote as flm-( ) algorithm , in fig .[ flo : fig4 ] .
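the balls - into - bins connection can also be checked empirically . the sketch below simulates writes into cells until the selected cell is already full , once with a single random choice ( plain random loading ) and once with the least - loaded of two random choices ( the idea behind the load - balancing code discussed later ) , and reports the fraction of the total capacity actually used ; all parameter values are illustrative assumptions .

```python
import random

def writes_until_blocked(num_cells, max_level, choices):
    """Place balls (writes) one at a time; each write picks the least-loaded
    of `choices` uniformly random cells and stops when that cell is full."""
    levels = [0] * num_cells
    writes = 0
    while True:
        picks = [random.randrange(num_cells) for _ in range(choices)]
        j = min(picks, key=lambda i: levels[i])
        if levels[j] >= max_level:
            return writes
        levels[j] += 1
        writes += 1

def avg_efficiency(num_cells, max_level, choices, trials=100):
    cap = num_cells * max_level
    runs = [writes_until_blocked(num_cells, max_level, choices)
            for _ in range(trials)]
    return sum(runs) / (trials * cap)

for q in (4, 16, 64):                      # illustrative cell-level counts
    print(q,
          round(avg_efficiency(256, q, choices=1), 3),   # random loading
          round(avg_efficiency(256, q, choices=2), 3))   # two random choices
```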
from results shown in fig .[ flo : fig4 ] , we see that the flm-( ) algorithm has the same loss factor as random loading with 1 random choice .this can be actually seen from the proof of asymptotic optimality in as the algorithm transforms an arbitrary input distribution into an uniform distribution on the cell - level increment .note that flm algorithm is only proved to be optimal when 1 bit of information is stored .so we just compare the flm algorithm with random loading algorithm in this case .[ fig : fig5 ] and fig .[ fig : fig6 ] show the storage efficiency for these two modulation codes .[ fig : fig5 ] and fig .[ fig : fig6 ] show that the load - balancing modulation code performs better than self - randomized modulation code when is large .this is also shown by the theoretical analysis in remark [ rem : if is ] .in this paper , we consider modulation code design problem for practical flash memory storage systems . the storage efficiency , or average ( over the distribution of input variables ) amount of information per cell - level is maximized . under this framework, we show the maximization of the number of rewrites for the the worst - case criterion and the average - case criterion are two extreme cases of our optimization objective .the self - randomized modulation code is proposed which is asymptotically optimal for arbitrary input distribution and arbitrary and , as the number of cell - levels .we further consider performance of practical systems where is not large enough for asymptotic results to dominate .then we analyze the storage efficiency of the self - randomized modulation code when is only moderately large .then the load - balancing modulation codes are proposed based on the power of two random choices .analysis and numerical simulations show that the load - balancing scheme outperforms previously proposed algorithms .
in this paper , we consider modulation codes for practical multilevel flash memory storage systems with cell levels . instead of maximizing the lifetime of the device , we maximize the average amount of information stored per cell - level , which is defined as storage efficiency . using this framework , we show that the worst - case criterion and the average - case criterion are two extreme cases of our objective function . a self - randomized modulation code is proposed which is asymptotically optimal , as , for an arbitrary input alphabet and i.i.d . input distribution . in practical flash memory systems , the number of cell - levels is only moderately large . so the asymptotic performance as may not tell the whole story . using the tools from load - balancing theory , we analyze the storage efficiency of the self - randomized modulation code . the result shows that only a fraction of the cells are utilized when the number of cell - levels is only moderately large . we also propose a load - balancing modulation code , based on a phenomenon known as `` the power of two random choices '' , to improve the storage efficiency of practical systems . theoretical analysis and simulation results show that our load - balancing modulation codes can provide significant gain to practical flash memory storage systems . though pseudo - random , our approach achieves the same load - balancing performance , for i.i.d . inputs , as a purely random approach based on the power of two random choices .
we present a detailed study on the nature of biases in network sampling strategies to shed light on how best to sample from networks .a _ network _ is a system of interconnected entities typically represented mathematically as a graph : a set of vertices and a set of edges among the vertices .networks are ubiquitous and arise across numerous and diverse domains . for instance , many web - based social media , such as online social networks , produce large amounts of data on interactions and associations among individuals .mobile phones and location - aware devices produce copious amounts of data on both communication patterns and physical proximity between people . in the domain of biology also , from neurons to proteins to food webs , there is now access to large networks of associations among various entities and a need to analyze and understand these data . with advances in technology ,pervasive use of the internet , and the proliferation of mobile phones and location - aware devices , networks under study today are not only substantially larger than those in the past , but sometimes exist in a decentralized form ( e.g. the network of blogs or the web itself ) . for many networks ,their global structure is not fully visible to the public and can only be accessed through `` crawls '' ( e.g. online social networks ) .these factors can make it prohibitive to analyze or even access these networks in their entirety .how , then , should one proceed in analyzing and mining these network data ?one approach to addressing these issues is _ sampling _ : inference using small subsets of nodes and links from a network . from epidemiological applications to web crawling and p2p search ,network sampling arises across many different settings . in the present work ,we focus on a particular line of investigation that is concerned with constructing samples that match critical structural properties of the original network .such samples have numerous applications in data mining and information retrieval . in , for example , structurally - representative samples were shown to be effective in inferring network protocol performance in the larger network and significantly improving the efficiency of protocol simulations . in section [ sec : applications ] , we discuss several additional applications . although there have been a number of recent strides in work on network sampling ( e.g. ) , there is still very much that requires better and deeper understanding . moreover , many networks under analysis , although treated as complete , are , in fact , _ samples _ due to limitations in data collection processes .thus , a more refined understanding of network sampling is of general importance to network science . towards this end, we conduct a detailed study on _ network sampling biases_. there has been a recent spate of work focusing on _ problems _ that arise from network sampling biases including how and why biases should be avoided .our work differs from much of this existing literature in that , for the first time in a comprehensive manner , we examine network sampling bias as an _ asset to be exploited_. 
we argue that biases of certain sampling strategies can be advantageous if they `` push '' the sampling process towards inclusion of specific properties of interest .our main aim in the present work is to identify and understand the connections between specific sampling biases and specific definitions of structural representativeness , so that these biases can be leveraged in practical applications .* summary of findings .* we conduct a detailed investigation of network sampling biases .we find that bias towards high _ expansion _( a concept from expander graphs ) offers several unique advantages over other biases such as those toward high degree nodes .we show both empirically and analytically that such an expansion bias `` pushes '' the sampling process towards new , undiscovered clusters and the discovery of wider portions of the network .in other analyses , we show that a simple sampling process that selects nodes with many connections from those already sampled is often a reasonably good approximation to directly sampling high degree nodes and locates well - connected ( i.e. high degree ) nodes significantly faster than most other methods .we also find that the breadth - first search , a widely - used sampling and search strategy , is surprisingly among the most dismal performers in terms of both discovering the network and accumulating critical , well - connected nodes . finally , we describe ways in which some of our findings can be exploited in several important applications including disease outbreak detection and market research .a number of these aforementioned findings are surprising in that they are in stark contrast to conventional wisdom followed in much of the existing literature ( e.g. ) .not surprisingly , network sampling arises across many diverse areas . here, we briefly describe some of these different lines of research .* network sampling in classical statistics . * the concept of sampling networks first arose to address scenarios where one needed to study hidden or difficult - to - access populations ( e.g. illegal drug users , prostitutes ) . for recent surveys, one might refer to .the work in this area focuses almost exclusively on acquiring unbiased estimates related to variables of interest attached to each network node . the present work , however , focuses on inferring properties related to the _ network itself _( many of which are not amenable to being fully captured by simple attribute frequencies ) .our work , then , is much more closely related to _ representative subgraph sampling_. * representative subgraph sampling .* in recent years , a number of works have focused on _ representative subgraph sampling _ : constructing samples in such a way that they are condensed representations of the original network ( e.g. ) .much of this work focuses on how best to produce a `` universal '' sample representative of _ all _ structural properties in the original network .by contrast , we subscribe to the view that no single sampling strategy may be appropriate for all applications .thus , our aim , then , is to better understand the _ biases _ in specific sampling strategies to shed light on how best to leverage them in practical applications .* unbiased sampling .* there has been a relatively recent spate of work ( e.g. ) that focuses on constructing uniform random samples in scenarios where nodes can not be easily drawn randomly ( e.g. 
settings such as the web where nodes can only be accessed through crawls ) .these strategies , often based on modified random walks , have been shown to be effective for various frequency estimation problems ( e.g. inferring the proportion of pages of a certain language in a web graph ) . however , as mentioned above , the present work focuses on using samples to infer structural ( and functional ) properties of the _ network itself_. in this regard , we found these unbiased methods to be less effective during preliminary testing .thus , we do not consider them and instead focus our attention on other more appropriate sampling strategies ( such as those mentioned in _ representative subgraph sampling _ ) . * studies on sampling bias .* several studies have investigated _ biases _ that arise from various sampling strategies ( e.g. ) .for instance , showed that , under the simple sampling strategy of picking nodes at random from a scale - free network ( i.e. a network whose degree distribution follows the power law ) , the resultant subgraph sample will _ not _ be scale - free .the authors of showed the converse is true under traceroute sampling .virtually all existing results on network sampling bias focus on its negative aspects .by contrast , we focus on the _ advantages _ of certain biases and ways in which they can be exploited in network analysis .* property testing .* work on sampling exists in the fields of combinatorics and graph theory and is centered on the notion of _ property testing _ in graphs .properties such as those typically studied in graph theory , however , may be less useful for the analysis of _ real - world _ networks ( e.g. the exact meaning of , say , -colorability within the context of a social network is unclear ) .nevertheless , theoretical work on property testing in graphs is excellently surveyed in .* other areas . *decentralized search ( e.g. searching unstructured p2p networks ) and web crawling can both be framed as network sampling problems , as both involve making decisions from subsets of nodes and links from a larger network .indeed , network sampling itself can be viewed as a problem of information retrieval , as the aim is to seek out a subset of nodes that either individually or collectively match some criteria of interest .several of the sampling strategies we study in the present work , in fact , are graph search algorithms ( e.g. breadth - first search ) .thus , a number of our findings discussed later have implications for these research areas ( e.g. see ) . for reviews on decentralized search both in the contexts of complex networks and p2p systems, one may refer to and , respectively . for examples of connections between web crawling and network sampling ,see .we now briefly describe some notations and definitions used throughout this paper .[ defn : network ] is a _ network _ or _ graph _ where is set of vertices and is a set of edges .[ defn : sample ] a _ sample _ is a subset of vertices , .[ defn : neighborhood ] is the _ neighborhood _ of if .[ defn : inducedsubgraph ] is the _ induced subgraph _ of based on the sample if where the vertex set is and the edge set is .the induced subgraph of a sample may also be referred to as a _ subgraph sample_. 
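definitions [ defn : neighborhood ] and [ defn : inducedsubgraph ] translate directly into code . the sketch below uses a plain adjacency - set dictionary as a toy representation ( an assumption , not one of the paper 's data sets ) and reads the neighborhood as the nodes adjacent to the sample but outside it .

```python
def neighborhood(graph, sample):
    """N(S): nodes adjacent to the sample but not in it (Definition 3)."""
    neighbors = set()
    for v in sample:
        neighbors |= graph[v]
    return neighbors - sample

def induced_subgraph(graph, sample):
    """Induced subgraph of the sample (Definition 4)."""
    return {v: graph[v] & sample for v in sample}

# toy undirected graph as an adjacency-set dictionary
graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
sample = {1, 3}
print(neighborhood(graph, sample))      # {2, 4}
print(induced_subgraph(graph, sample))  # {1: {3}, 3: {1}}
```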
we study sampling biases in a total of twelve different networks : a power grid ( powergrid ) , a wikipedia voting network ( wikivote ) , a pgp trust network ( pgp ) , a citation network ( hepth ) , an email network ( enron ) , two co - authorship networks ( condmat and astroph ) , two p2p file - sharing networks ( gnutella04 and gnutella31 ) , two online social networks ( epinions and slashdot ) , and a product co - purchasing network ( amazon ) .these datasets were chosen to represent a rich set of diverse networks from different domains .this diversity allows a more comprehensive study of network sampling and thorough assessment of the performance of various sampling strategies in the face of varying network topologies .table [ tab : datasets ] shows characteristics of each dataset .all networks are treated as undirected and unweighted .( table [ tab : datasets ] : network properties . key : n = # of nodes , d = density , pl = characteristic path length , cc = local clustering coefficient , ad = average degree . ) in the present work , we focus on a particular class of sampling strategies , which we refer to as _ link - trace sampling_. in _ link - trace sampling _ , the next node selected for inclusion into the sample is always chosen from among the set of nodes directly connected to those already sampled . in this way ,sampling proceeds by tracing or following links in the network . this concept can be defined formally .[ defn : linktracesampling ] given an integer and an initial node ( or seed ) to which is initialized ( i.e. ) , a _ link - trace sampling _ algorithm , , is a process by which nodes are iteratively selected from among the current neighborhood and added to until ._ link - trace sampling _ may also be referred to as _ crawling _( since links are `` crawled '' to access nodes ) or viewed as _ online _ sampling ( since the network reveals itself iteratively during the course of the sampling process ) .the key advantage of sampling through link - tracing , then , is that complete access to the network in its entirety is _ not _ required .this is beneficial for scenarios where the network is either large ( e.g. an online social network ) , decentralized ( e.g. an unstructured p2p network ) , or both ( e.g. the web ) .as an aside , notice from definition [ defn : linktracesampling ] that we have implicitly assumed that the neighbors of a given node can be obtained by visiting that node during the sampling process ( i.e. is known ) .this , of course , accurately characterizes most real scenarios .for instance , neighbors of a web page can be gleaned from the hyperlinks on a visited page and neighbors of an individual in an online social network can be acquired by viewing ( or `` scraping '' ) the friends list .having provided a general definition of _ link - trace sampling _ , we must now address _ which _ nodes in should be preferentially selected at each iteration of the sampling process .this choice will obviously directly affect the properties of the sample being constructed .we study seven different approaches - all of which are quite simple yet , at the same time , ill - understood in the context of real - world networks . *breadth - first search ( bfs ) . * starting with a single seed node , the bfs explores the neighbors of visited nodes . at each iteration, it traverses an unvisited neighbor of the _ earliest _ visited node .
in both and , it was empirically shown that bfs is biased towards high - degree and high - pagerank nodes .bfs is used prevalently to crawl and collect networks ( e.g. ) . * depth - first search ( dfs ) .* dfs is similar to bfs , except that , at each iteration , it visits an unvisited neighbor of the most _ recently _ visited node .* random walk ( rw ) .* a random walk simply selects the next hop uniformly at random from among the neighbors of the current node . * forest fire sampling ( ffs ) .* ffs , proposed in , is essentially a probabilistic version of bfs . at each iteration of a bfs - like process ,a neighbor is only explored according to some `` burning '' probability . at , ffs is identical to bfs .we use , as recommended in . * degree sampling ( ds ) . *the ds strategy involves greedily selecting the node with the highest degree ( i.e. number of neighbors ) .a variation of ds was analytically and empirically studied as a p2p search algorithm in .notice that , in order to select the node with the highest degree , the process must know for each .that is , knowledge of is required at each iteration . as noted in , this requirement is acceptable for some domains such as p2p networks and certain social networks .the ds method is also feasible in scenarios where 1 ) one is interested in efficiently `` downsampling '' a network to a connected subgraph , 2 ) a crawl is repeated and history of the last crawl is available , or 3 ) the proportion of the network accessed to construct a sample is less important .* sec ( sample edge count ) .* given the currently constructed sample , how can we select a node with the highest degree _ without _ having knowledge of ?the sec strategy tracks the links from the currently constructed sample to each node and selects the node with the most links from .in other words , we use the degree of in the induced subgraph of as an approximation of the degree of in the original network .similar approaches have been employed as part of web crawling strategies with some success ( e.g. ) .* xs ( expansion sampling ) . *the xs strategy is based on the concept of expansion from work on expander graphs and seeks to greedily construct the sample with the maximal expansion : , where is the desired sample size . at each iteration ,the next node selected for inclusion in the sample is chosen based on the expression : like the ds strategy , this approach utilizes knowledge of . in sections [ sec : rep.reach ] and [ sec :biases.xs ] , we will investigate in detail the effect of this expansion bias on various properties of constructed samples .what makes one sampling strategy `` better '' than another ? in computer science , `` better '' is typically taken to be structural _ representativeness _ ( e.g. see ) .that is , samples are considered better if they are more representative of structural properties in the original network .there are , of course , numerous structural properties from which to choose , and , as correctly observed by ahmed et al . , it is not always clear which should be chosen . rather than choosing arbitrary structural properties as measures of representativeness , we select specific measures of representativeness that we view as being potentially useful for real applications .we divide these measures ( described below ) into three categories : degree , clustering , and reach . 
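for concreteness , the growth loop of definition [ defn : linktracesampling ] and hedged versions of three of the selection rules ( ds , sec and xs ) can be written in a few lines , reusing the toy graph and the `neighborhood` helper from the previous sketch ; ties are broken arbitrarily , and the bfs / dfs / ffs variants , which additionally track visit order , are omitted for brevity .

```python
import random

def link_trace_sample(graph, seed, k, pick_next):
    """Grow a sample by link tracing (Definition 5): pick_next chooses one
    node from the current neighborhood at each iteration."""
    sample = {seed}
    while len(sample) < k:
        frontier = neighborhood(graph, sample)
        if not frontier:                      # no reachable nodes remain
            break
        sample.add(pick_next(graph, sample, frontier))
    return sample

def pick_ds(graph, sample, frontier):
    # degree sampling: highest true degree (assumes N(v) is observable)
    return max(frontier, key=lambda v: len(graph[v]))

def pick_sec(graph, sample, frontier):
    # sample edge count: most links back into the current sample
    return max(frontier, key=lambda v: len(graph[v] & sample))

def pick_xs(graph, sample, frontier):
    # expansion sampling: brings in the most not-yet-discovered nodes
    covered = sample | neighborhood(graph, sample)
    return max(frontier, key=lambda v: len(graph[v] - covered))

seed = random.choice(list(graph))
print(link_trace_sample(graph, seed, k=3, pick_next=pick_xs))
```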
for each sampling strategy, we generate 100 samples using randomly selected seeds , compute our measures of representativeness on each sample , and plot the average value as sample size grows .( standard deviations of computed measures are discussed in section [ sec : rep.seedsensitivity ] .applications for these measures of representativeness are discussed later in section [ sec : applications ] . ) due to space limitations and the large number of networks evaluated , for each evaluation measure , we only show results for two datasets that are illustrative of general trends observed in all datasets . however , full results are available as supplementary material .the degrees ( numbers of neighbors ) of nodes in a network is a fundamental and well - studied property .in fact , other graph - theoretic properties such as the average path length between nodes can , in some cases , be viewed as byproducts of degree ( e.g. short paths arising from a small number of highly - connected hubs that act as conduits ) .we study two different aspects of degree ( with an eye towards real - world applications , discussed in section [ sec : applications ] ) .* degree distribution similarity ( distsim ) .* we take the degree sequence of the sample and compare it to that of the original network using the two - sample kolmogorov - smirnov ( k - s ) d - statistic , a distance measure .our objective here is to measure the agreement between the two degree distributions in terms of both shape and location .specifically , the d - statistic is defined as , where is the range of node degrees , and and are the cumulative degree distributions for and , respectively .we compute the distribution similarity by subtracting the k - s distance from one .* hub inclusion ( hubs ) .* in several applications , one cares less about matching the _ overall _ degree distribution and more about accumulating the highest degree nodes into the sample quickly ( e.g. immunization strategies ) .for these scenarios , sampling is used as a tool for information retrieval .here , we evaluate the extent to which sampling strategies accumulate hubs ( i.e. high degree nodes ) quickly into the sample . as sample size grows, we track the proportion of the top nodes accumulated by the sample . for our tests ,we use .figure [ fig : rep.degree ] shows the _ degree distribution similarity _ ( distsim ) and _ hub inclusion _ ( hubs ) for the slashdot and enron datasets .note that the sec and ds strategies , both of which are biased to high degree nodes , perform best on _ hub inclusion _ ( as expected ) , but are the _ worst _ performers on the distsim measure ( which is also a direct result of this bias ) .( the xs strategy exhibits a similar trend but to a slightly lesser extent . ) on the other hand , strategies such as bfs , ffs , and rw tend to perform better on distsim , but worse on hubs .for instance , the ds and sec strategies locate the majority of the top 100 hubs with sample sizes less than in some cases .bfs and ffs require sample sizes of over ( and the performance differential is larger when locating hubs ranked higher than ) .more importantly , no strategy performs best on _ both _ measures .this , then , suggests a tension between goals : constructing small samples of the most well - connected nodes is in conflict with producing small samples exhibiting representative degree distributions . 
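both degree - based measures are straightforward to compute from a sample and the full graph . the sketch below implements the two - sample k - s d - statistic over the integer degree range and the fraction of the top - k hubs captured by the sample , reusing the toy graph and sampler from the sketches above ; the choice of k is illustrative .

```python
import numpy as np

def ks_distance(deg_sample, deg_full):
    """Two-sample K-S D-statistic between the two degree distributions."""
    xs = np.arange(0, max(max(deg_sample), max(deg_full)) + 1)
    cdf_s = np.searchsorted(np.sort(deg_sample), xs, side="right") / len(deg_sample)
    cdf_f = np.searchsorted(np.sort(deg_full), xs, side="right") / len(deg_full)
    return float(np.max(np.abs(cdf_s - cdf_f)))

def hub_inclusion(graph, sample, top_k):
    """Fraction of the top_k highest-degree nodes captured by the sample."""
    hubs = sorted(graph, key=lambda v: len(graph[v]), reverse=True)[:top_k]
    return len(set(hubs) & set(sample)) / top_k

sample = link_trace_sample(graph, seed=1, k=3, pick_next=pick_sec)
deg_full = [len(graph[v]) for v in graph]
deg_sample = [len(graph[v]) for v in sample]
print(1.0 - ks_distance(deg_sample, deg_full))   # distribution similarity
print(hub_inclusion(graph, sample, top_k=2))
```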
more generally , when selecting sample elements , choices resulting in gains for one area can result in losses for another .thus , these choices must be made in light of how samples will be used - a subject we discuss in greater depth in section [ sec : applications ] .we conclude this section by briefly noting that the trend observed for sec seems to be somewhat dependent upon the quality and number of hubs actually present in a network ( relative to the size of the network , of course ) .that is , sec matches ds more closely as degree distributions exhibit longer and denser tails ( as shown in figure [ fig : rep.dd ] ) .we will revisit this in section [ sec : biases.sec ] .( other strategies are sometimes affected similarly , but the trend is much less consistent . ) in general , we find sec best matches ds performance on many of the social networks ( as opposed to technological networks such as the powergrid with few `` good '' hubs , lower average degree , and longer path lengths ) .however , further investigation is required to draw firm conclusions on this last point .+ -0.01 in -0.15 in + -0.01 in -0.15 in many real - world networks , such as social networks , exhibit a much higher level clustering than what one would expect at random .thus , clustering has been another graph property of interest for some time . here ,we are interested in evaluating the extent to which samples exhibit the level of clustering present in the original network .we employ two notions of clustering , which we now describe. * local clustering coefficient ( ccloc ) . *the local clustering coefficient of a node captures the extent to which the node s neighbors are also neighbors of each other .formally , the local clustering coefficient of a node is defined as where is the degree of node and is the number of links among the neighbors of .the average local clustering coefficient for a network is simply . * global clustering coefficient ( ccglb ) . *the global clustering coefficient is a function of the number of triangles in a network .it is measured as the number of closed triplets divided by the number of connected triples of nodes .results for clustering measures are less consistent than for other measures .overall , dfs and rw strategies appear to fare relatively better than others .we do observe that , for many strategies and networks , estimates of clustering are initially higher - than - actual and then gradually decline ( see figure [ fig : rep.clustering ] ) .this agrees with intuition .nodes in clusters should intuitively have more paths leading to them and will , thus , be encountered earlier in a sampling process ( as opposed to nodes not embedded in clusters and located in the periphery of a network ) .this , then , should be taken into consideration in applications where accurately matching clustering levels is important .+ -0.01 in -0.15 in we propose a new measure of representativeness called _network reach_. as a newer measure , _ network reach _ has obviously received considerably less attention than degree and clustering within the existing literature , but it is , nevertheless , a vital measure for a number of important applications ( as we will see in section [ sec : applications ] ) ._ network reach _ captures the extent to which a sample _ covers _ a network . 
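before turning to the reach measures , the two clustering measures defined above are also easy to compute for the adjacency - set representation used in the earlier sketches ( links among a node 's neighbors for the local coefficient , closed triplets over connected triples for the global one ) :

```python
def local_clustering(graph, v):
    """c_v = 2 * L_v / (k_v * (k_v - 1)): links among the neighbors of v."""
    nbrs = list(graph[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in graph[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

def average_local_clustering(graph, nodes=None):
    nodes = list(graph) if nodes is None else list(nodes)
    return sum(local_clustering(graph, v) for v in nodes) / len(nodes)

def global_clustering(graph):
    """Closed triplets (3 per triangle) divided by connected triples."""
    closed = sum(1 for v in graph for a in graph[v] for b in graph[v]
                 if a < b and b in graph[a])
    triples = sum(len(graph[v]) * (len(graph[v]) - 1) // 2 for v in graph)
    return closed / triples if triples else 0.0

print(average_local_clustering(graph), average_local_clustering(graph, sample))
print(global_clustering(graph))
```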
intuitively , for a sample to be truly representative of a large network, it should consist of nodes from diverse portions of the network , as opposed to being relegated to a small `` corner '' of the graph .this concept will be made more concrete by discussing in detail the two measures of _ network reach _ we employ : _ community reach _ and the _ discovery quotient_. * community reach ( cnm and rak ) .* many real - world networks exhibit what is known as _ community structure_. a _ community _ can be loosely defined as a set of nodes more densely connected among themselves than to other nodes in the network .although there are many ways to represent community structure depending on various factors such as whether or not overlapping is allowed , in this work , we represent community structure as a _ partition _ : a collection of disjoint subsets whose union is the vertex set . under this representation ,each subset in the partition represents a community .the task of a community detection algorithm is to identify a partition such that vertices within the same subset in the partition are more densely connected to each other than to vertices in other subsets . for the criterion of _community reach _ , a sample is more representative of the network if it consists of nodes from more of the communities in the network .we measure _ community reach _ by taking the number of communities represented in the sample and dividing by the total number of communities present in the original network . sincea community is essentially a cluster of nodes , one might wonder why we have included _ community reach _ as a measure of _ network reach _ , rather than as a measure of _clustering_. the reason is that we are slightly less interested in the structural details of communities detected here .rather , our aim is to assess how `` spread out '' a sample is across the network .since community detection is somewhat of an inexact science ( e.g. see ) , we measure _ community reach _ with respect to two separate algorithms .we employ both the method proposed by clauset et al . in (denoted as cnm ) and the approach proposed by raghavan et al . in ( denoted as rak ) .essentially , for our purposes , we are defining communities simply as the output of a community detection algorithm .* discovery quotient ( dq ) . *an alternative view of _ network reach _ is to measure the proportion of the network that is _ discovered _ by a sampling strategy .the number of nodes discovered by a strategy is defined as .the _ discovery quotient _ is this value normalized by the total number of nodes in a network : .intuitively , we are defining the _ reach _ of a sample here by measuring the extent to which it is one hop away from the rest of the network . as we will discuss in section [ sec : applications ] , samples with high _ discovery quotients _ have several important applications .note that a simple greedy algorithm for coverage problems such as this has a well - known sharp approximation bound of .however , link - trace sampling is restricted to selecting subsequent sample elements from the current neighborhood at each iteration , which results in a much smaller search space .thus , this approximation guarantee can be shown not to hold within the context of link - trace sampling .as shown in figure [ fig : rep.reach ] , the xs strategy displays the overwhelmingly best performance on all three measures of _ network reach_. 
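both notions of network reach reduce to a few set operations once the sample and a partition are available . the sketch below assumes the partition is given as a node - to - community mapping produced by any detection algorithm ( the toy labels are made up ) and computes the discovery quotient as the discovered fraction of nodes , reusing the helpers from the sketches above .

```python
def community_reach(sample, partition):
    """Fraction of communities touched; `partition` maps node -> community."""
    return len({partition[v] for v in sample}) / len(set(partition.values()))

def discovery_quotient(graph, sample):
    """|S ∪ N(S)| / |V|: fraction of the network discovered by the sample."""
    return len(set(sample) | neighborhood(graph, sample)) / len(graph)

partition = {1: "a", 2: "a", 3: "a", 4: "b", 5: "b"}    # toy labels
print(community_reach(sample, partition))
print(discovery_quotient(graph, sample))
```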
we highlight several observations here .first , the extent to which the xs strategy outperforms all others on the rak and cnm measures is quite striking .we posit that the expansion bias of the xs strategy `` pushes '' the sampling process towards the inclusion of new communities not already seen ( see also ) . in section [ sec : biases.xs ] , we will analytically examine this connection between expansion bias and _ community reach_. on the other hand , the sec method appears to be among the least effective in reaching different communities or clusters .we attribute this to the fact that sec preferentially selects nodes with many connections to nodes already sampled .such nodes are likely to be members of clusters already represented in the sample .second , on the dq measure , it is surprising that the ds strategy , which explicitly selects high degree nodes , often fails to even come close to the xs strategy .we partly attribute this to an overlap in the neighborhoods of well - connected nodes . by explicitly selecting nodes that contribute to _ expansion _ , the xs strategy is able to discover a much larger proportion of the network in the same number of steps - in some cases , by actively sampling comparatively _ lower _ degree nodes .finally , it is also surprising that the bfs strategy , widely used to crawl and explore online social networks ( e.g ) and other graphs ( e.g. ) , performs quite dismally on all three measures . in short , we find that nodes contributing most to the expansion of the sample are unique in that they provide specific and significant advantages over and above those provided by nodes that are simply well - connected and those accumulated through standard bfs - based crawls .these and previously mentioned results are in contrast to the conventional wisdom followed in much of the existing literature ( e.g. ) .+ -0.01 in + -0.01 in -0.15 in as described , link - trace sampling methods are initiated from randomly selected seeds .this begs the question : how sensitive are these results to the seed supplied to a strategy ?figure [ fig : std ] shows the standard deviation of each sampling strategy for both _hub inclusion _ and _ network reach _ as sample size grows .we generally find that methods with the most explicit biases ( xs , sec , ds ) tend to exhibit the least seed sensitivity and variability , while the remaining methods ( bfs , dfs , ffs , rw ) exhibit the most .this trend is exhibited across all measures and all datasets .let us briefly summarize two main observations from section [ sec : rep ] .we saw that the xs strategy dramatically outperformed all others in accumulating nodes from many different communities .we also saw that the sec strategy was often a reasonably good approximation to directly sampling high degree nodes and locates the set of most well - connected nodes significantly faster than most other methods . here , we turn our attention to analytically examining these observed connections .we begin by briefly summarizing some existing analytical results . *random walks ( rw ) .* there is a fairly large body of research on random walks and markov chains ( see for an excellent survey ) .a well - known analytical result states that the probability ( or _ stationary _ probability ) of residing at any node during a random walk on a connected , undirected graph converges with time to , where is the degree of node .in fact , the _ hitting time _ of a random walk ( i.e. 
the expected number of steps required to reach a node beginning from any node ) has been analytically shown to be directly related to this stationary probability .random walks , then , are naturally biased towards high degree ( and high pagerank ) nodes , which provides some theoretical explanation as to why rw performs slightly better than other strategies ( e.g. bfs ) on measures such as _ hub inclusion_. however , as shown in figure [ fig : rep.degree ] , it is nowhere near the best performers .thus , these analytical results appear only to hold in the limit and fail to predict actual sampling performance . * degree sampling ( ds ) . * in studying the problem of searching peer - to - peer networks , adamic et al . proposed and analyzed a greedy search strategy very similar to the ds sampling method .this strategy , which we refer to as a degree - based walk , was analytically shown to quickly find the highest - degree nodes and quickly cover large portions of scale - free networks .thus , these results provide a theoretical explanation for performance of the ds strategy on measures such as _ hub inclusion _ and the _ discovery quotient_. * other results . * as mentioned in section [ sec : relatedwork ] , to the best of our knowledge , much of the other analytical results on sampling bias focus on _ negative _ results .thus , these works , although intriguing , may not provide much help in the way of explaining _ positive _ results shown in section [ sec : rep ] .+ we now analyze two methods for which there are little or no existing analytical results : xs and sec . a widely used measure for the `` goodness '' or the strength of a community in graph clustering and community detection is _ conductance _ , which is a function of the fraction of total edges emanating from a sample ( lower values mean stronger communities ) : where are entries of the adjacency matrix representing the graph and , which is the total number of edges incident to the node set .it can be shown that , provided the conductance of communities is sufficiently low , sample expansion is directly affected by community structure . consider a simple random graph model with vertex set and a community structure represented by partition where .let and be the number of each node s edges pointing within and outside the node s community , respectively .these edges are connected uniformly at random to nodes either within or outside a node s community , similar to a configuration model ( e.g. , ) .note that both and are related directly to conductance .when conductance is lower , is smaller is , the total number of edges incident to is , and and are random variables denoting the inward and outward edges , respectively , of each node ( as opposed to constant values ) .then , and .if , then .( in this example , the expectations are over nodes in only . ) ] as compared to .the following theorem expresses the link between expansion and _ community reach _ in terms of these inward and outward edges .[ thm : xsbias ] let be the current sample , be a new node to be added to , and be the size of s community . 
if , then the expected expansion of is higher when is in a new community than when is in a current community .let be the expected value for when is in a new community and let be the expected value when not .we compute an upper bound on and a lower bound on .+ deriving : assume is affiliated with a current community already represented by at least one node in .since we are computing an upper bound on , we assume there is exactly one node from within s community , as this is the minimum for s community to be a _current _ community . by the linearity of expectations , the upper bound on is , where the term is the expected number of nodes in s community that are both linked to _ and _ in the set .+ deriving : assume belongs to a new community not already represented in .( by definition , no nodes in will be in s community . ) applying the linearity of expectations once again , the lower bound on is , where the term is the expected number of nodes in s community that are both linked to _ and _ already in . + solving for , if , then . theorem [ thm : xsbias ] shows analytically the link between expansion and community structure - a connection that , until now , has only been empirically demonstrated .thus , a theoretical basis for performance of the xs strategy on _ community reach _ is revealed .recall that the sec method uses the degree of a node in the induced subgraph as an estimation for the degree of in . in section [ sec : rep ], we saw that this choice performs quite well in practice . here , we provide theoretical justification for the sec heuristic .consider a random network with some arbitrary expected degree sequence ( e.g. a power law random graph under the so - called model ) and a sample .let be a function that returns the expected degree of a given node in a given random network ( see for more information on _ expected _ degree sequences ) .then , it is fairly straightforward to show the following holds . [prop : secbias ] for any two nodes , + if , then .the probability of an edge between any two nodes and in g is where .let . then , since only when , the proposition holds .combining proposition [ prop : secbias ] with analytical results from ( described in section [ sec : biases.existing ] ) provides a theoretical basis for observed performance of the sec strategy on measures such as _ hub inclusion_. finally , recall from section [ sec : rep.degree.results ] that the extent to which sec matched the performance of ds on hubs seemed to partly depend on the tail of degree distributions .proposition [ prop : secbias ] also yields insights into this phenomenon .longer and denser tails allow for more `` slack '' when deviating from these expectations of random variables ( as in real - world link patterns that are not purely random ) .we now briefly describe ways in which some of our findings may be exploited in important , real - world applications .although numerous potential applications exist , we focus here on three areas : 1 ) outbreak detection 2 ) landmarks and graph exploration 3 ) marketing .what is the most effective and efficient way to predict and prevent a disease outbreak in a social network ?in a recent paper , christakis and fowler studied outbreak detection of the h1n1 flu among college students at harvard university .previous research has shown that well - connected ( i.e. 
high degree ) people in a network catch infectious diseases earlier than those with fewer connections .thus , _ monitoring _these individuals allows forecasting the progression of the disease ( a boon to public health officials ) and _ immunizing _ these well - connected individuals ( when immunization is possible ) can prevent or slow further spread .unfortunately , identifying well - connected individuals in a population is non - trivial , as access to their friendships and connections is typically not fully available . and , collecting this information is time - consuming , prohibitively expensive , and often impossible for large networks .matters are made worse when realizing that most existing network - based techniques for immunization selection and outbreak detection assume full knowledge of the global network structure ( e.g. ) .this , then , presents a prime opportunity to exploit the power of _sampling_. to identify well - connected students and predict the outbreak , christakis and fowler employed a sampling technique called _ acquaintance sampling _ ( acq ) based on the so - called friendship paradox .the idea is that random neighbors of randomly selected nodes in a network will tend to be highly - connected .christakis and fowler , therefore , sampled random friends of randomly selected students with the objective of constructing a sample of highly - connected individuals . based on our aforementioned results ,we ask : can we do better than this acq strategy ? in previous sections , we showed empirically and analytically that the sec method performs exceedingly well in accumulating hubs .( it also happens to require less information than ds and xs , the other top performers . )figure [ fig : outdet ] shows the sample size required to locate the top - ranked well - connected individuals for both sec and acq .the performance differential is quite remarkable , with the sec method faring overwhelmingly better in quickly zeroing in on the set of most well - connected nodes . aside from its superior performance , sec has one additional advantage over the acq method employed by christakis and fowler .the acq method assumes that nodes in can be selected uniformly at random .it is , in fact , dependent on this .( acq , then , is _ not _ a link - trace sampling method . ) by contrast , sec , as a pure link - trace sampling strategy , has no such requirement and , thus , can be applied in realistic scenarios for which acq is unworkable .-0.15 in recall from section [ sec : rep.reach ] that a community in a network is a cluster of nodes more densely connected among themselves than to others . 
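returning to the outbreak - detection setting above , acquaintance sampling itself is only a few lines : pick a node uniformly at random , then keep one of its neighbors . on the toy graph from the earlier sketches the friendship - paradox effect is weak , but the snippet shows the mechanism that biases random neighbors toward higher degree .

```python
import random

def acquaintance_sample(graph, k):
    """ACQ: repeatedly pick a random node, then one random neighbor of it."""
    picked = set()
    nodes = list(graph)
    while len(picked) < k:
        v = random.choice(nodes)
        if graph[v]:
            picked.add(random.choice(list(graph[v])))
    return picked

mean_deg = lambda nodes: sum(len(graph[v]) for v in nodes) / len(nodes)
print(mean_deg(random.sample(list(graph), 3)))     # random nodes
print(mean_deg(acquaintance_sample(graph, 3)))     # random neighbors
```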
identifying communities is important , as they often correspond to real social groups , functional groups , or similarity ( both demographic and not ) .the ability to easily construct a sample consisting of members from diverse groups has several important applications in marketing .marketing surveys often seek to construct stratified samples that collectively represent the diversity of the population .if the attributes of nodes are not known in advance , this can be challenging .the xs strategy , which exhibited the best _ community reach _ , can potentially be very useful here .moreover , it has the added power of being able to locate members from diverse groups with absolutely no _ a priori _ knowledge of demographics attributes , social variables , or the overall community structure present in the network .there is also recent evidence to suggest that being able to construct a sample from many different communities can be an asset in effective word - of - mouth marketing .this , then , represents yet another potential marketing application for the xs strategy . _ landmark - based methods _represent a general class of algorithms to compute distance - based metrics in large networks quickly .the basic idea is to select a small sample of nodes ( i.e. the landmarks ) , compute offline the distances from these landmarks to every other node in the network , and use these pre - computed distances at runtime to approximate distances between pairs of nodes .as noted in , for this approach to be effective , landmarks should be selected so that they _ cover _significant portions of the network . based on our findings for_ network reach _ in section [ sec : rep.reach ] , the xs strategy overwhelmingly yields the best _ discovery quotient _ and covers the network significantly better than any other strategy .thus , it represents a promising landmark selection strategy .our results for the _ discovery quotient _ and other measures of _ network reach _ also yield important insights into how graphs should best be explored , crawled , and searched .as shown in figure [ fig : rep.reach ] , the most prevalently used method for exploring networks , bfs , ranks low on measures of _ network reach_. this suggests that the bfs and its pervasive use in social network data acquisition and exploration ( e.g. see ) should possibly be examined more closely .we have conducted a detailed study on sampling biases in real - world networks . in our investigation , we found the bfs , a widely - used method for sampling and crawling networks , to be among the worst performers in both discovering the network and accumulating critical , well - connected hubs .we also found that sampling biases towards high expansion tend to accumulate nodes that are uniquely different from those that are simply well - connected or traversed during a bfs - based strategy .these high - expansion nodes tend to be in newer and different portions of the network not already encountered by the sampling process .we further demonstrated that sampling nodes with many connections from those already sampled is a reasonably good approximation to sampling high degree nodes .finally , we demonstrated several ways in which these findings can be exploited in real - world application such as disease outbreak detection and marketing . 
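as a concrete illustration of the landmark approach mentioned above , the sketch below precomputes hop distances from a few landmarks with bfs and bounds the distance between two nodes by routing through the best landmark ; it reuses the toy graph from the earlier sketches , and the landmark choice is arbitrary rather than xs - selected .

```python
from collections import deque

def bfs_distances(graph, source):
    """Hop distances from `source` to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def landmark_estimate(landmark_dists, u, v):
    """Upper bound on d(u, v) via the best landmark: min_l d(u,l) + d(l,v)."""
    return min(d[u] + d[v] for d in landmark_dists if u in d and v in d)

landmarks = [3, 5]                                  # arbitrary toy landmarks
landmark_dists = [bfs_distances(graph, l) for l in landmarks]
print(landmark_estimate(landmark_dists, 1, 5))      # estimate of d(1, 5)
```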
for future work , we intend to investigate ways in which the top - performing sampling strategies can be enhanced for even wider applicability .one such direction is to explore the effects of alternating or combining different biases within a single sampling strategy .
from social networks to p2p systems , network sampling arises in many settings . we present a detailed study on the nature of biases in network sampling strategies to shed light on how best to sample from networks . we investigate connections between specific biases and various measures of structural representativeness . we show that certain biases are , in fact , beneficial for many applications , as they `` push '' the sampling process towards inclusion of desired properties . finally , we describe how these sampling biases can be exploited in several , real - world applications including disease outbreak detection and market research . [ data mining ]
set theory was proposed with the intended use to the fields of pattern classification and information processing [ 1 ] .indeed , it has attracted many researchers , and their applications to real - life problems are of a great significance .simpson [ 2 ] presented the fuzzy min max neural network ( fmm ) , which makes the soft decisions to organize hyperboxes by its degree of belongingness to a particular class , which is known as a membership function .hyperbox is a convex box , completely represented by min and max points .fmm classification results are completely characterized with the help of a membership function .along with this elegant proposal , [ 2 ] also presented the characteristics for a good classifier , among which , nonlinear separability , overlapping classes and tuning parameters have proved to be of a great interest to a research community .simpson also presented a clustering approach using fmm in [ 3 ] .but many problems in real - life require both classification and clustering . to address this issue ,gfmm [ 4 ] brought this generality . besides generality, the more significant contribution has proved to be modification to the membership function .the presented membership function computes the belongingness to the hyperbox so that the membership value decreases uniformly as we move away from the hyperbox .another weakness of fmm was the patterns belonging to overlapped region , where the rate of misclassification is considerably high .the tuning parameter , theta ( ) , which controls the size of a hyperbox , has a great impact on this overlapped region .smaller theta values produce less overlaps producing high training accuracy , but the efficacy of the network gets compromised , and for larger theta values , accuracy gets decreased .multiple approaches were presented to tackle this problem .earlier , the process of contraction [ 1][4 ] was employed , which used to eliminate all the overlapping regions .this method had the intrinsic problem of representing patterns not belonging to any of the hyperbox , in turn lessening the accuracy .exclusion / inclusion fuzzy classification ( hefc ) network was introduced in [ 5 ] , which further reduced the number of hyperboxes and increased the accuracy .inclusion hyperboxes were used to represent patterns belonging to the same class , while exclusion hyperboxes were used to denote the overlapped region , treated as if it is a hyperbox .this notion is used as it is in almost all the newly introduced models [ 6][7][8][9 ] .fuzzy min - max neural network classifier with compensatory neurons ( fmcn ) was acquainted in [ 7 ] .authors categorized the overlap into three parts , namely , full containment , partial overlap and no overlap , and then a new membership function to accommodate belongingness based on the compensation value .authors also analyzed that neatly taking care of overlapped region automatically brings the insensitivity to the hyperbox size parameter , .data core based fuzzy min - max neural network ( dcfmn ) [ 8 ] further improved upon fmcn .authors eliminated the need of overlap categorization .they also suggest a new membership function based on noise , geometric center and data cores of the hyperbox .wherein dcfmn improved the accuracy in few cases , there are some serious drawbacks .* * dcfmn introduces two new user controlled variables , and . 
is used to suppress the influence of the noise and is used to control the descending speed of the membership function .these two variables greatly impact the performance of the model and naturally , defining their values is a tedious job .* there exists an underlying assumption that noise within all the hyperboxes is similar , which may not be true .moreover , the sequence of the training exemplars plays a role as well .* mlf conveys that this membership function is not always preferred , in that , it does not work well for high percentage of samples belonging to overlapped area .multi - level fuzzy min max neural network ( mlf ) [ 9 ] addresses the problem of overlapped region with an elegant approach .it uses separate levels for overlapping regions , and monotonically decreases the hyperbox size ( ) . for most cases , mlf produces 100% training accuracy .though mlf achieves a significant milestone , entertaining testing accuracy is rather more important than training accuracy , as it greatly sways the usage of the algorithm in practical scenarios . in this brief , we identify and define a new boundary region , where misclassification rate is substantial . to the best of our knowledge, this kind of approach is presented for the first time , at least we did not come across any similar published work .hence we propose a method , based on data centroids , to evidentially prove that handling this newly introduced area of confusion between hyperboxes of different classes significantly increases the testing accuracy . the paper is organized as follows .mlf is reviewed in section ii .we introduced d - mlf algorithm in section iii .an illustrative example and comparative results of d - mlf with mlf model are presented in section iv and v , respectively .finally , conclusion is given in section vi .multi - level fuzzy min max neural network ( mlf ) is a classifier which efficiently caters misclassification of patterns belonging to overlapped region by maintaining a tree structure , which is a homogeneous tree [ 9 ] . in mlf training phase, exemplars are continuously recurred to form the hyperboxes and overlaps , each recursion resulting in one level .this recursive procedure is carried till the predefined maximum depth or till overlap exists .hyperbox expansion , based on hyperbox size controlling parameter ( ) , is validated using equation ( 1 ) and expansion is carried out by equation ( 2 ) . where , and are min point and max point of hyperbox _ b _ respectively , is the dimension of pattern _ a _ and _ d _ is the number of dimensions . also , prior to each recursion , is updated using equation ( 3 ) where , and thetas for next level and previous level , respectively and , being the value between 0 and 1 , ensures that size of hyperbox in overlapped region is less than its previous level . 
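the expansion test and update of equations (1)-(3) can be sketched in python/numpy as follows. the exact form of the size test (a sum over dimensions versus a per-dimension bound) varies between fuzzy min-max variants, so the version below, together with all variable names, should be read as an illustrative assumption rather than the authors' implementation.

```python
import numpy as np

def can_expand(v, w, a, theta):
    """Expansion test in the spirit of eq. (1): the hyperbox, stretched to
    include pattern a, must not exceed the size bound theta (here the
    common 'sum over dimensions' form is used)."""
    d = a.shape[0]
    return np.sum(np.maximum(w, a) - np.minimum(v, a)) <= d * theta

def expand(v, w, a):
    """Expansion update of eq. (2): stretch min point v and max point w."""
    return np.minimum(v, a), np.maximum(w, a)

def next_level_theta(theta_prev, phi):
    """Eq. (3): shrink the size bound for the next level, with 0 < phi < 1
    so that hyperboxes in overlapped regions are smaller than before."""
    return phi * theta_prev

# toy usage: one hyperbox (v, w) and one new pattern a
v, w = np.array([0.2, 0.2]), np.array([0.4, 0.3])
a = np.array([0.45, 0.25])
if can_expand(v, w, a, theta=0.3):
    v, w = expand(v, w, a)
```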
in the testing phase, overlap regions are first traversed recursively , to discover appropriate subnet to which a test pattern belongs to .thence , in that level , a class of hyperbox having highest membership value with the hyperboxes in the discovered subnet , is selected as a predicted class .mlf is able to achieve higher accuracy rates than previous fmm methods .this is due to an elegant treatment to the boundary region a confusion area .but , after training , there exists a room for yet another boundary .the region where membership function generates very close by values , it becomes difficult to assign a class with high degree of assurance .as per our experiments , mlf , and all the previous classifiers , do not perform well in this area . hence , a definition of this new region , and a methodology to solve it is proposed .in this section , we give details about a newly proposed algorithm , specifically , we define a new boundary region generated due to trained network and propose a solution to correctly classify test patterns belonging to it ._ figure 1 _ describes the d - mlf structure , each node in s contains two segments , hyperboxes segment ( hbs ) and overlapped segment ( ols ) .hbs represents hyperboxes generated in that level , whereas ols represents overlaps in that level .along with hyperbox information , data centroid ( dc ) ._ figure 2 _ shows the area of confusion considered by mlf and d - mlf .we introduce a boundary region that exists between any two hyperboxes , where , according to our experiments , the rate of misclassification is comparatively high . in the proposed method ,the recommendations of mlf are intact , in addition to it , we use distance with the data centroids to improve a classification rate in the anew boundary region .similar to the mlf learning procedure , d - mlf maintains using hbs and ols structures .first , all the patterns are passed through , resulting in creation and expansion of hyperboxes using equations ( 1 ) and ( 2 ) .then each hyperbox is checked with the rest of hyperboxes to detect the overlap using equation ( 4 ) . where and are the max points and and are the min points of the two hyperboxes , among which overlap is tested . moreover , d - mlf adds a new step at the learning phase , known as data centroid ( dc ) computation , where dc of all input patterns belonging to each hyperbox is maintained in the hbs .dc is computed as follows : where is the data centroid of the hyperbox , is number of patterns belonging to hyperbox and is the pattern in hyperbox .if there exists an overlap , patterns belonging to the overlapped region are again sent to training procedure , where hbs and ols creation takes place for the next level .this process of recursion is followed afterward to train all the patterns . 
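a minimal sketch of the two bookkeeping steps just described, the overlap test of equation (4) and the data-centroid computation of equation (5); the axis-aligned overlap test below is the usual form, and the names are chosen only for illustration.

```python
import numpy as np

def data_centroid(patterns):
    """Eq. (5): the data centroid of a hyperbox is the mean of the
    training patterns it contains (patterns: array of shape [n, d])."""
    return patterns.mean(axis=0)

def hyperboxes_overlap(v1, w1, v2, w2):
    """Eq. (4), in the usual axis-aligned form: two hyperboxes overlap
    only if their intervals intersect along every dimension."""
    return bool(np.all(np.minimum(w1, w2) > np.maximum(v1, v2)))

# toy usage
pts = np.array([[0.10, 0.20], [0.20, 0.30], [0.15, 0.25]])
dc = data_centroid(pts)                                  # -> [0.15, 0.25]
print(hyperboxes_overlap(np.array([0.1, 0.1]), np.array([0.3, 0.3]),
                         np.array([0.25, 0.2]), np.array([0.5, 0.5])))  # True
```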
due to the computation of ols and the process of finding patterns belonging to ols, d-mlf and mlf are not single-pass algorithms. in general, given n overlaps in the first level, the entire training data has to be traversed n times. thereafter, in the subsequent stages, the data belonging to an overlapped region is traversed a number of times on the order of the number of overlaps in that region. this is a novel finding, and contradictory to what the mlf authors have mentioned [9]. note that the patterns belonging to an overlapped region are not part of the dc computation. this step makes sure that training patterns voting for more than one class are omitted from the final decision making. the training procedure can be summarized by the following abridged listing:

  net = d-mlf-train(net, theta)
    for each training sample:
      if the sample falls in (or expands) an existing hyperbox h:
        h.centroid += sample ; h.membercount += 1
      else:
        create a new hyperbox h ; h.centroid = sample ; h.membercount = 1
    for each overlap region i between hyperboxes of different classes:
      sdata = samples which inhabit region i
      for each such sample s belonging to hyperbox hi: hi.centroid -= s ; hi.membercount -= 1
      create an overlap box for region i and add it to ols
      subnet = d-mlf-train(sdata, reduced theta) ; link the overlap box to subnet
    the stored centroid of each hyperbox is finally normalized as h.centroid / h.membercount ; return null

the original mlf used a decision making based on the subnet's decision. the selected subnet need not be a leaf node in the tree. we do not alter this model, but rather enhance the process by which the subnet makes its choice. the membership function mentioned in equation (11) is used against the overlap boxes. after recursively traversing the ols, the appropriate subnet to which the test pattern belongs is discovered. the membership function of equation (6) is then used to compute the membership with the hyperboxes within the selected subnet, b_j(a_h) = \min_{i=1,\dots,d}\Big(\min\big([\,1 - f(a_h^i - w_i^j,\gamma_i)\,],\ [\,1 - f(v_i^j - a_h^i,\gamma_i)\,]\big)\Big), \qquad f(x,\gamma)=\begin{cases} 1 & if\ x\gamma > 1 \\ x & if\ 0 \le x\gamma \le 1 \\ 0 & if\ x\gamma < 0 \end{cases} where b_j(a_h) represents the belongingness of sample a_h to hyperbox j, w_i^j and v_i^j are the max and min points of the hyperbox along dimension i (so the arguments of f are the differences of the sample with the max and min points), and \gamma_i is a tuning parameter that controls the fuzziness. within these membership values, the hyperboxes with the highest two values are selected to define a boundary. the medial region of these two hyperboxes, controlled by a user-specified percentage parameter, is treated as the boundary region. at this point, it is necessary to check whether the test pattern belongs to the boundary region. we define the incident angles between the test pattern and the two hyperboxes, and an inclusion value is evaluated from these angles. based on the inclusion value, the output class is chosen. if the pattern lies outside the defined boundary, we simply follow the path of mlf and classify the pattern based on the maximum membership value, which is already computed. if the pattern belongs to the boundary region, the euclidean distance [10] between the test pattern and the data centroids of the selected hyperboxes is computed. hence, centered on the inclusion value, the output of the network is either the class of the hyperbox with the maximum membership among all the hyperboxes, or the class of the nearer of the two topmost hyperboxes as measured by distance to their data centroids. the class membership for the test sample in a subnet is propagated through the edge between the subnet and the corresponding overlap box, which enables that subnet whenever the test sample lies in that overlap box.
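the membership function and the centroid-based tie-breaking just described can be sketched as follows. the inclusion-value test based on incident angles is replaced here by a simple membership-gap test for brevity, so the boundary check, like every variable name below, is an assumption made only for illustration.

```python
import numpy as np

def ramp(x, gamma):
    """Ramp function f(x, gamma) used inside the membership function."""
    xg = x * gamma
    return np.where(xg > 1.0, 1.0, np.where(xg < 0.0, 0.0, x))

def membership(a, v, w, gamma):
    """Membership b_j(a): minimum over dimensions of the two one-sided
    ramp terms, so the value decays as the sample moves away from the box."""
    terms = np.minimum(1.0 - ramp(a - w, gamma), 1.0 - ramp(v - a, gamma))
    return float(terms.min())

def dmlf_decide(a, boxes, gamma, boundary_frac=0.05):
    """boxes: list of dicts with keys 'v', 'w', 'dc', 'cls' (>= 2 boxes).
    Pick the two best hyperboxes; if their memberships are within the
    user-set boundary fraction, fall back to distance to data centroids."""
    mv = np.array([membership(a, b['v'], b['w'], gamma) for b in boxes])
    i1, i2 = np.argsort(mv)[::-1][:2]
    in_boundary = (mv[i1] - mv[i2]) <= boundary_frac * mv[i1]
    if in_boundary:
        d1 = np.linalg.norm(a - boxes[i1]['dc'])
        d2 = np.linalg.norm(a - boxes[i2]['dc'])
        return boxes[i1]['cls'] if d1 <= d2 else boxes[i2]['cls']
    return boxes[i1]['cls']
```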
the output of the ols is given by equation (10), where the sum runs over the overlap boxes in the ols and the membership function of each overlap box for the test sample is given by equation (11); the distance-based output is given by equation (12), where the euclidean distance between the sample and the data centroid of each of the topmost hyperboxes is computed using equation (13). the testing procedure can be summarized by the following abridged listing:

  out = d-mlf-test(net, sample)
    if the sample falls in an overlap box: out = d-mlf-test(corresponding subnet, sample) ; return out
    mv = [] ; for each hyperbox h in the current subnet: mv += membership(sample, h)
    select the two hyperboxes h1, h2 with the highest values in mv
    if the sample lies in the boundary region between h1 and h2:
      d = eudistance(sample, h1.dc, h2.dc) ; out = class of the hyperbox with min(d)
    else:
      out = class of the hyperbox with max(mv)

in this illustration, we describe the effectiveness of the proposed model, clearly pointing out the identification and handling of the stated area of confusion. _ figure 3 _ illustrates the 2-dimensional data space. we consider 14 data samples for training and 6 data samples for testing. the hyperbox size parameter theta is fixed at 0.3 and the boundary parameter is fixed at 5%. both mlf and d-mlf create two hyperboxes at the first layer. d-mlf also computes a data centroid (dc) for each of the two hyperboxes. patterns which do not belong to the boundary region are classified correctly by mlf, but when it comes to the boundary region it fails to classify the patterns correctly, whereas the proposed d-mlf works better in the boundary region as well, since its decision making is not based solely on the membership value but also considers the data centroids. it can be noted that the patterns in the above example are not uniformly spread out, which is a very common scenario in real-world data. it occurs because of factors such as outliers, the temporal nature of some variables, and so on. due to them, most of the time the patterns within the overall data, and in the case of the fuzzy min-max hierarchy within the hyperboxes, will not be spread evenly across all dimensions. as demonstrated above, our proposed method treats such data elegantly, without many modifications to the state of the art. the performance of the proposed method (d-mlf) is studied on the basis of the classification rate. various experiments were carried out to test d-mlf on standard datasets such as iris, glass, wine, wisconsin breast cancer (wbc), wisconsin diagnostic breast cancer (wdbc) and ionosphere, obtained from the uci repository of machine learning databases [11]. in these experiments the hyperbox size parameter theta was chosen as 0.2, 0.5 and 0.9, so as to perform the measurements across the spectrum: as the size of the hyperbox increases, the number of overlaps increases, and so does the misclassification rate. we split the data evenly for training and testing, and the average results over 100 experiments are reported, with the training and testing data chosen randomly in each iteration. _ table 1 _ shows the results; we compare our results to the mlf method, as it has already been proven to perform better than the previously proposed fmm methods [9]. [ table 1 : results ( table body not reproduced here ) ] in this brief, we introduced a new boundary region and a distance-based mlf classification method to handle patterns belonging to that boundary region.
a data centroid based method, d-mlf, minimizes the significance of outliers and similar sources of error in decision making. it has been evidentially shown that the proposal outperforms all the previously proposed fmm methods. more importantly, we have proposed a model suited to real-world data, extending the state of the art. d-mlf should be useful in a wide range of application areas such as security, natural language processing and biomedical reasoning.

[1] l. a. zadeh, fuzzy sets, information and control, vol. 8, no. 3, pp. 338-353, 1965.
[2] p. k. simpson, fuzzy min-max neural networks - part 1: classification, ieee trans. neural networks, vol. 3, no. 5, pp. 776-786, sep. 1992.
[3] p. k. simpson, fuzzy min-max neural networks - part 2: clustering, ieee trans. fuzzy systems, vol. 1, no. 1, pp. 32-45, 1993.
[4] b. gabrys and a. bargiela, general fuzzy min-max neural network for clustering and classification, ieee trans. neural networks, vol. 11, pp. 769-783, 2000.
[5] a. bargiela, w. pedrycz, and m. tanaka, an inclusion/exclusion fuzzy hyperbox classifier, int. j. knowledge-based intell. eng. syst., vol. 8, no. 2, pp. 91-98, 2004.
[6] a. rizzi, m. panella, and f. m. f. mascioli, adaptive resolution min-max classifiers, ieee trans. neural netw., vol. 13, no. 2, pp. 402-414, mar. 2002.
[7] a. v. nandedkar and p. k. biswas, a fuzzy min-max neural network classifier with compensatory neuron architecture, ieee trans. neural netw., vol. 18, no. 1, pp. 42-54, jan. 2007.
[8] h. zhang, j. liu, d. ma, and z. wang, data-core-based fuzzy min-max neural network for pattern classification, ieee trans. neural netw., vol. 22, no. 12, pp. 2339-2352, dec. 2011.
[9] r. davtalab, m. h. dezfoulian, and m. mansourizade, multi-level fuzzy min-max neural network classifier, ieee trans. neural netw. learn. syst., vol. 25, no. 3, pp. 470-481, mar. 2014.
[10] w. bezdel and h. j. chandler, results of an analysis and recognition of vowels by computer using zero-crossing data, proc., pp. 2060-2066, nov.
[11] k. bache and m. lichman, uci machine learning repository, school of information and computer sciences, univ. of california, irvine, ca, usa, 2013. [online]. available: http://archive.ics.uci.edu/ml
recently , a multi - level fuzzy min max neural network ( mlf ) was proposed , which improves the classification accuracy by handling an overlapped region ( area of confusion ) with the help of a tree structure . in this brief , an extension of mlf is proposed which defines a new boundary region , where the previously proposed methods mark decisions with less confidence and hence misclassification is more frequent . a methodology to classify patterns more accurately is presented . our work enhances the testing procedure by means of data centroids . we exhibit an illustrative example , clearly highlighting the advantage of our approach . results on standard datasets are also presented to evidentially prove a consistent improvement in the classification rate . hyperbox , fuzzy min - max , data centroids , neural networks , neurofuzzy , classification , machine learning .
collisionless shocks are widely thought to be effective accelerators of energetic , nonthermal particles ( hereafter cosmic - rays or crs ) .those particles play central roles in many astrophysical problems .the physical basis of the responsible diffusive shock acceleration ( dsa ) process is now well established through in - situ measurements of heliospheric shocks and through analytic and numerical calculations . while test particle dsa model treatments are relatively well developed ; e.g. , , it has long been recognized that dsa is an integral part of collisionless shock physics and that there are substantial and highly nonlinear backreactions from the crs to the bulk flows and to the mhd wave turbulence mediating the cr diffusive transport ( see , for example , and references therein ) .most critically , the crs can capture a large fraction of the kinetic energy dissipated across such transitions .as they diffuse upstream the crs form a pressure gradient that decelerates and compresses the entering flow inside a broad shock precursor .that , in turn , can lead to greatly altered full shock jump conditions , especially if the most energetic crs , which can have very large scattering lengths , escape the system and carry significant energy with them .also in response to the momentum dependent scattering lengths and flow speed variations through the shock precursor the cr momentum distribution will take on different forms than in a simple discontinuity .effective analytic ( e.g. , ) and numerical ( e.g. , ) methods have been developed that allow one to compute steady - state modified shock properties given an assumed diffusion behavior . on the other hand , as the cr particle population evolves in time during the formation of such a shock the shock dynamics and the cr - scattering wave turbulence evolve as well . for dynamically evolving phenomena , such as supernova remnants , the time scale for shock modification can be comparable to the dynamical time scales of the problem .the above factors make it essential to be able to include both nonlinear and time dependent effects in studies of dsa .generally , numerical simulations are called for .full plasma simulations offer the most complete time dependent treatments of the associated shock microphysics , but are far too expensive to follow the shock evolution over the time , length and energy scales needed to model astrophysical cr acceleration .the most powerful alternative approach utilizes continuum methods , with a kinetic equation for each cr component combined with suitably modified compressible fluid dynamical equations for the bulk plasma ( see 2 below ) . by extending that equation set to include relevant wave action equations for the wave turbulence that mediates cr transport ,a self - consistent , closed system of equations is possible ( e.g. 
, ) ) .continuum dsa simulations of the kind just described are still quite challenging and expensive even with only one spatial dimension .the numerical difficulty derives especially from the very large range of cr momenta that must be followed , which usually extends to hundreds of gev / c or beyond on the upper end and down to values close to those of the bulk thermal population , with nonrelativistic momenta .the latter are needed in part to account for `` injection '' of crs due to incomplete thermalization that is characteristic of collisionless shocks .one computational constraint comes from the fact that cr resonant scattering lengths from mhd turbulence , , are generally expected to be increasing functions of particle rigidity , .the characteristic length coupling the crs of a given momentum , , to the bulk flow and defining the width of the modified shock precursor is the so - called diffusion length , , where is the cr particle speed , and is the bulk flow speed into the shock .one must spatially resolve the modified shock transition for the entire range of in order to capture the physics of the shock formation and the spatial diffusion of the crs , in particular .the relevant typically spans several orders of magnitude , beginning close to the dissipation length of the thermal plasma , which defines the thickness of the classical , `` viscous '' gas shock , also called the `` subshock '' in modified structure .this resolution requirement generally leads to very fine spatial grids in comparison to the `` outer scale '' of the problem , which must exceed the largest .two approaches have been applied successfully so far to manage this constraint in dsa simulations .berezhko and collaborators developed a method that normalizes the spatial variable by at each momentum value of interest during solution of the cr kinetic equation .this approach establishes an spatial grid that varies economically in tune with .derived cr distribution properties at different momenta can be combined to estimate feed - back on the bulk flow at appropriate scales .the method was designed for use with cr diffusion properties known _ a priori_. it is not readily applied to cr diffusion behaviors incorporating arbitrary , nonlinear feedback between the waves and the crs . as an alternative that can accommodate those latter diffusion properties ,kang have implemented diffusive cr transport into a multi - level adaptive mesh refinement ( amr ) environment .the benefit of amr in this context comes from the feature that the highest resolutions are only necessary very close to the subshock , which can still be treated as a discontinuity satisfying standard rankine - hugoniot relations . by efficient use of spatial gridding both of these computational strategiescan greatly reduce the cost of time dependent dsa simulations .on the other hand , the above methods do not directly address the principal computational cost in such simulations , so they remain much more costly compared to purely hydrodynamic or mhd simulations .this is because the dependence of on cr momentum , , adds a physical dimension to the problem . 
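the dynamic range argument can be made concrete with a short numerical sketch; the bohm-like scaling, the normalization kappa0 = 1 and the shock speed u_s are all assumed values chosen only to illustrate how strongly the diffusion length l_d(p) = kappa(p)/u_s stretches across the momentum range.

```python
import numpy as np

# Bohm-like diffusion: kappa(p) ~ kappa0 * p * v(p), with momentum in units
# of m*c and particle speed v(p) = p / sqrt(1 + p^2) (so v -> c at large p).
def kappa_bohm(p, kappa0=1.0):
    v = p / np.sqrt(1.0 + p**2)
    return kappa0 * p * v

u_s = 0.01                       # shock speed in units of c (assumed)
p = np.logspace(-2, 5, 8)        # momenta spanning seven decades
l_d = kappa_bohm(p) / u_s        # diffusion length l_d(p) = kappa(p)/u_s

# the ratio of the largest to the smallest diffusion length sets the
# dynamic range a uniform spatial grid would have to resolve
print(l_d.max() / l_d.min())     # ~1e9 for this momentum range
```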
in practice ,the spatial evolution of the kinetic equation for each cr constituent must be updated over the entire spatial grid at multiple momentum values ; say , .the value of is usually large , since the spanned range of cr momentum is typically several orders of magnitude .physically , crs propagate in momentum space during dsa in response to adiabatic compression in the bulk flow , sometimes by momentum diffusion ( see , for example , equation [ dce ] below ) , or because of various irreversible energy loss mechanisms , such as coulomb or radiative losses .the associated evolution rates for depend on the process , but generally depend on . the conventional approach to evolving approximates through low order finite differences in ( e.g. , ) . experience has shown that converged solutions of usingsuch methods require . in that case , for example , a mere five decades of momentum coverage requires more than 100 grid points in . since spatial update of the kinetic equation at each momentum grid point requires computational effort comparable to that for any of the accompanying hydrodynamical equations ( e.g. , the mass continuity equation ) , cr transport then dominates the computational effort by a very large factor , commonly exceeding an order of magnitude . an attractive alternative approach to evolving the kinetic equation replaces by its integral moments over a discrete set of finite momentum volumes , in which case is replaced by evaluated at the boundaries of those volumes .the method we outline here follows that strategy . because is relatively smooth , simple subvolume modelscan effectively be applied over moderately large momentum volumes .we have found this method to give accurate solutions to the evolution of with an order of magnitude fewer momentum bins than needed in our previous finite difference calculations .the computational effort to evolve the cr population is thereby reduced to a level comparable to that for the hydrodynamics . in recognition of its distinctive features we refer to the method as `` coarse - grained momentum finite volume '' or `` cgmv '' .it extends related ideas introduced in , and for test particle cr transport . those previous presentations ,while satisfactorily following cr transport in many large - scale , smooth flows , did not include spatial or momentum diffusion , so could not explicitly follow evolution of during dsa . instead ,analytic , test - particle solutions for were applied at shock jumps . herewe extend the cgmv method so that it can be applied to the treatment of fully nonlinear cr modified shocks .we outline the basic cgmv method and its implementation in eulerian hydrodynamics codes in 2 .several tests are discussed in 3 , and our conclusions are presented in 4 .the standard diffusion - convection form of the kinetic equation describing the evolution of the isotropic cr distribution function , , can be written in one spatial dimension as ( e.g. , ) . where is the bulk flow speed , , is the spatial diffusion coefficient , is the momentum diffusion coefficient , and is a representative source term .we henceforth express particle momentum in units of , where is the particle mass .the first rhs term in equation ( [ dce ] ) represents `` momentum advection '' in response to adiabatic compression or expansion . 
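to fix ideas, here is a deliberately crude sketch of how an equation of this form can be advanced numerically with operator splitting: spatial advection, spatial diffusion and the momentum-advection term are applied in sequence. first-order differencing, periodic boundaries and the omission of momentum diffusion and sources are simplifications of this sketch, not features of the scheme used in the paper.

```python
import numpy as np

def dce_step(f, u, kappa, x, p, dt):
    """One crude, explicit, operator-split update of the 1-d
    diffusion-convection equation for f(x, p): spatial advection by u,
    spatial diffusion with kappa(p), and momentum 'advection'
    ~ (1/3)(du/dx) p df/dp. f has shape (nx, n_p); purely illustrative,
    first-order, periodic in x, no limiters."""
    dx = x[1] - x[0]
    dlnp = np.log(p[1]) - np.log(p[0])          # uniform log-momentum grid
    dudx = np.gradient(u, dx)

    # spatial advection (upwind, assuming u > 0 everywhere)
    f = f - dt * u[:, None] * (f - np.roll(f, 1, axis=0)) / dx

    # spatial diffusion; kappa depends on momentum only in this sketch
    lap = (np.roll(f, -1, axis=0) - 2.0 * f + np.roll(f, 1, axis=0)) / dx**2
    f = f + dt * kappa[None, :] * lap

    # momentum advection: df/dt = (1/3)(du/dx) df/dlnp, since p df/dp = df/dlnp
    dfdlnp = (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2.0 * dlnp)
    f = f + dt * (dudx[:, None] / 3.0) * dfdlnp
    return f
```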
for simplicity of presentation equation [ dce ]neglects for now propagation of the scattering turbulence with respect to the bulk plasma , which can be a significant influence when the sonic and alfvnic mach numbers of the flow are comparable .although it is numerically straightforward to include this effect , the details are somewhat complex , so we defer that to a follow - up work focussed on cr transport in mhd shocks . full solution of the problem at hand requires simultaneous evolution of the hydrodynamical flow , as well as the diffusion coefficients , and . again postponing full mhd , the added equations to be solved are the standard gasdynamic equations with cr pressure included .expressed in conservative , eulerian formulation for one dimensional plane - parallel geometry , they are where and are the isotropic gas and the cr pressure , respectively , is the total energy density of the gas per unit mass and the rest of the variables have their usual meanings .the injection energy loss term , , accounts for the energy of the suprathermal particles transferred at low energy to the crs . as usual , cr inertia is neglected in such computations , since the mass fraction of the crs is generally tiny .we note for completeness that can be computed from using the expression in the simulations described below we set the particle mass , , for convenience . as mentioned in [ intro ] , the momentum advection and diffusion terms in equation [ dce ] typically require when using low order finite difference methods in the momentum coordinate .the resulting large number of grid points in makes finding the solution of equation [ dce ] the dominate effort in simulations of dsa . on the other hand , previous studies of dsa as well as direct observations of crs in different environmentshave shown that is commonly well described by the form , where , is a slowly varying function of .thus , we may expect a piecewise powerlaw form to provide an efficient and accurate , two - parameter subgrid model for .two moments of are sufficient to recover the subgrid model parameters .we find it convenient to use and the first of these moments , , is proportional to the spatial number density of crs in the momentum bin ] ( ) .the simulation represented in fig .2 included the momentum range = [ 2\times 10^{-4 } , 2.4\times 10 ^ 5] ] , where is the smallest momentum that can leak upstream ( see equation 15 ) . in this case a bohm - type diffusion model with , is adopted and the tl injection parameter , is used .the crash test was significantly more computationally demanding than the tvd - cr tests .note first that in the crash simulation the value of , while in the previous examples shown in figs . 1 - 3 .consequently , is about seven times greater in the current case , and the nominal physical scale of the precursor and its formation timescale are similarly lengthened .in addition , the stronger momentum dependence of bohm diffusion coefficient means that the precursor width expands more strongly as increases. 
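as an aside, the two-parameter power-law subgrid model introduced above can be illustrated with a small round-trip sketch. the precise pair of bin moments adopted in the paper is not spelled out here, so the sketch assumes one natural choice, 4*pi*int p^2 f dp and 4*pi*int p^3 f dp over a bin; the slope is then recovered with a scalar root find.

```python
import numpy as np
from scipy.optimize import brentq

def bin_integral(pl, pr, q, k):
    """Integral over [pl, pr] of p**k * (p/pl)**(-q) dp (assumes k+1 != q)."""
    r = pr / pl
    return pl**(k + 1) * (r**(k + 1 - q) - 1.0) / (k + 1 - q)

def reconstruct_powerlaw(n_i, g_i, pl, pr):
    """Recover (f_i, q) of the subgrid model f(p) = f_i * (p/pl)**(-q)
    from two bin moments assumed here to be
      n_i = 4*pi * int p^2 f dp   and   g_i = 4*pi * int p^3 f dp."""
    target = g_i / n_i
    ratio = lambda q: bin_integral(pl, pr, q, 3) / bin_integral(pl, pr, q, 2) - target
    q = brentq(ratio, 0.01, 9.99)   # bracket endpoints away from the removable poles
    f_i = n_i / (4.0 * np.pi * bin_integral(pl, pr, q, 2))
    return f_i, q

# round-trip check with a known slope q = 4.5 on one coarse bin
pl, pr, q_true, f_true = 1.0, 10.0, 4.5, 2.0
n = 4 * np.pi * f_true * bin_integral(pl, pr, q_true, 2)
g = 4 * np.pi * f_true * bin_integral(pl, pr, q_true, 3)
print(reconstruct_powerlaw(n, g, pl, pr))   # ~ (2.0, 4.5)
```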
the associated time rate of increase in is , however , slower , so that the shock must evolve longer to reach a given .these factors substantially increase the size of the physical domain needed to reach a given .4 shows the early evolution of this cr - modified shock for as computed with both the fd and the cgmv methods .the spatial domain for this simulation is [ 0,20 ] .the base spatial grid included zones , giving .since it is necessary to resolve structures near the subshock on scales of the diffusion length for freshly injected , suprathermal crs , the amr feature of the crash code is utilized .the fd simulation is carried out with 7 refined grid levels ; four levels of refined grid are applied in the cgmv simulation .240 momentum points ( )are used in the fd simulation , while the cgmv simulation includes 20 momentum bins ( ) . the time step for each refinement level , ,is determined by a standard courant condition , that is , .although the crank - nicholson scheme is stable with an arbitrary time step , the diffusion convection equation is solved with the time step smaller than ) to maintain good accuracy in the momentum space advection ( _ i.e. , _ ) . with , the required time step is smaller by a factor of three or so than the hydrodynamic time step in the fd simulation . consequently ,the fd diffusion convection solver is typically subcycled about 3 times with for each hydrodynamic time step . because of the much larger , subcycling is not necessary in the cgmv simulation .that adds another relative economy to the cgmv calculation . at the end ofthis simulation , , the modified shock structure is approaching a dynamical equilibrium in the sense that the postshock values of , and will not change much at later times .since this shock is weaker than the mach 40 shocks examined earlier modifications are more moderate . on the other hand ,as expected from the stronger momentum dependence of , the shock precursor broadens much more quickly in the present case .the cutoff in the cr distribution has reached roughly by .longer term evolution of this shock will be addressed below .the agreement between the fd and cgmv solutions shown in fig .4 is good , although not as close as it was in the examples illustrated in fig . 1 and fig .2 . the more apparent distinctions between the two solutions in the present case come from effective differences in the application of the tl injection model with bohm diffusion in fd and cgmv methods . recall that the cgmv scheme applies the diffusion coefficients averaged across the momentum bins ( see equations [ kni ] , [ kgi ] ) .the bohm diffusion model has a very steep momentum dependence for nonrelativistic particles ; namely , . at low momenta where injection takes placethe averaging increases the effective diffusion coefficient , and , thus , the leakage flux of suprathermal particles , leading to higher injection rate compared to the fd scheme for the same tl model parameters .consequently , the distribution function in the second bin at is slightly higher in the cgmv scheme , as evident in figs .note that is anchored on the tail of maxwellian distribution .the cgmv solutions accordingly show slightly more efficient cr acceleration than the fd solutions at early times . 
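the bin-averaging of the diffusion coefficient referred to above (equations [kni], [kgi]) can be illustrated as follows; the particular power-law weight and the bohm normalization are stand-ins, since the exact averaging used by the authors is not reproduced here. the example shows why the averaged coefficient sits well above the value at the lower bin edge when kappa varies steeply at non-relativistic momenta, which is the source of the slightly higher injection rate noted above.

```python
import numpy as np

def kappa_bohm(p, kappa0=1.0):
    """Bohm-like diffusion, kappa ~ p * v(p), with momenta in units of m*c."""
    return kappa0 * p * p / np.sqrt(1.0 + p**2)

def bin_averaged_kappa(pl, pr, q, weight_power=2, npts=200):
    """kappa averaged over the bin [pl, pr] with a power-law weight
    w(p) = p**weight_power * (p/pl)**(-q); a stand-in for eqs. [kni]/[kgi]."""
    p = np.logspace(np.log10(pl), np.log10(pr), npts)
    w = p**weight_power * (p / pl)**(-q)
    return np.trapz(kappa_bohm(p) * w, p) / np.trapz(w, p)

# at non-relativistic momenta kappa varies steeply (~p^2), so the bin
# average sits roughly an order of magnitude above the lower-edge value:
print(kappa_bohm(0.01), bin_averaged_kappa(0.01, 0.1, q=4.0))
```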
in this test about 5 % greater in the cgmv simulation at .since the cgmv scheme can be implemented with nonuniform momentum bins , such differences could be reduced by making the momentum bins smaller at low momentum in instances where the details relating to the injection rate were important .we show in fig .5 the evolution of this same shock extended to , as computed with the cgmv method .this simulation is computed on the domain [ 0,800 ] , spanned by a base spatial grid of zones , giving .we also included 7 refined grid levels at the subshock , giving .this grid spacing is insufficient for convergence at the injection momentum , , so that the very early evolution is somewhat slower than in the simulations shown in fig .however , once shock modification becomes strong evolution becomes roughly self - similar , as pointed out previously .the time asymptotic states do not depend sensitively on the early injection history .the self - similar behavior results with bohm diffusion from a match between the upstream and downstream extensions of the cr population .one also sees from the form of the distribution function in fig .5 that the postshock gas temperature has stabilized , while the previously - explained concave form to the cr distribution is better developed than it was at earlier times .this simulation illustrates nicely the relative efficiency of the cgmv scheme .the equivalent fd simulation would be very much more expensive , because this model requires a long execution time and a large spatial domain .with bohm diffusion for ultrarelativistic crs , so that the scale of the precursor , . at the same timethe peak in the cr momentum distribution extends relatively slowly , with .the required spatial grid is , thus , 40 times longer than for the shorter simulation illustrated in fig .the simulated time interval in the extended simulation was 50 times longer .together those increase the total computational time by a factor 2000 .the fd calculation with $ ] to took about 2 cpu days on our fastest available processor , so the extended simulation would have been unrealistic using the fd method .the extended cgmv simulation , however , required only about 10 times the effort of the shorter fd simulation , clearly demonstrating the efficiency of the cgmv scheme . 
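the quoted cost comparison follows from simple scaling bookkeeping, sketched below with the numbers given in the text (the per-step cost is assumed to scale linearly with the number of zones).

```python
# rough bookkeeping behind the cost estimate quoted above (illustrative only)
domain_factor = 800 / 20        # the extended run uses a 40x longer spatial domain
time_factor = 50                # and is evolved ~50x longer
fd_extra_cost = domain_factor * time_factor
print(fd_extra_cost)            # -> 2000.0, i.e. ~2000x the short fd run

fd_short_cpu_days = 2                       # quoted cost of the short fd run
print(fd_short_cpu_days * fd_extra_cost)    # ~4000 cpu days for an fd long run
print(10 * fd_short_cpu_days)               # ~20 cpu days for the cgmv long run
```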
this speed - up is a result of combination of several factors : 20 times larger grid spacing , no need for subcycling for the diffusion convection solver , and , of course , a smaller number of momentum bins .detailed time dependent simulations of nonlinear cr shock evolution are very expensive if one allows for inclusion of arbitrary , self - consistent and possibly time dependent spatial diffusion , as well as various other momentum dependent transport processes .the principal computational cost in such calculations is typically the cr transport itself , and , in self - consistent calculations , the analogous transport of the mhd wave turbulence that mediates cr transport .tracking these behaviors requires adding at least one physical dimension to the simulations compared to the associated hydrodynamical calculations , since the collisionless media involved are sensitive to the phase space configurations of the particles and waves .particle kinetic equations ( commonly the so - called diffusion convection equation ) provide a straightforward approach to addressing this problem and can be coupled conveniently with hydrodynamical equations that track mass and bulk momentum and energy effectively .momentum derivatives of the cr distribution function in the diffusion convection equation are most frequently handled by finite differences . although it is simple , that approach requires moderately fine resolution in momentum space .that is a primary reason that such calculations are costly .here we introduce a new scheme to solve the diffusion convection equation based on finite volumes in momentum space with a momentum bin spacing as much as an order of magnitude larger than that of the usual finite difference scheme .we demonstrate that this coarse grained momentum finite volume ( cgmv ) method can be used successfully to model the evolution of strong , cr - modified shocks at much lower computational cost than the finite difference approach .the computation efficiency is greatly increased , not only because the number of momentum bins is smaller , but also because the required spatial grid spacing is less demanding due to the coarse - grained averaging of the diffusion coefficient used in the cgmv method .in addition , larger momentum bin size can eliminate the need of subcycling of the diffusion convection solver that can be necessary in some instances using finite differences in momentum .thus , the combination of the cgmv scheme with amr techniques as developed in our crash code , for example , should allow more detailed modeling of the diffusive shock acceleration process with a strongly momentum dependent diffusion model such as bohm diffusion , or self - consistent treatments of cr diffusion and wave turbulence transport .twj is supported by nsf grant ast03 - 07600 , by nasa grants nag5 - 10774 , nng05gf57 g and by the university of minnesota supercomputing institute .hk was supported by kosef through the astrophysical research center for the structure and evolution of cosmos ( arcsec ) . .red lines and stars were obtained using the new cgmv scheme with .the solutions are almost indistinguishable .a pre - existing cr population , , corresponding to the upstream cr pressure , is included , without fresh injection at the shock ( ) . ] at for the same shock as shown in fig 2 .the different curves represent results computed using the fd scheme and three different momentum resolutions with the cgmv scheme .bottom : the cr distribution function at the shock from the same simulations . 
[ figure caption fragment : the heavy dashed lines show the solution obtained with the conventional finite difference scheme using 240 momentum points ; the red solid lines and x symbols show the cgmv solutions at two output times ( the later one at 1000 ) obtained with 20 momentum bins . ]
we have developed a new , very efficient numerical scheme to solve the cr diffusion convection equation that can be applied to the study of the nonlinear time evolution of cr modified shocks for arbitrary spatial diffusion properties . the efficiency of the scheme derives from its use of coarse - grained finite momentum volumes . this approach has enabled us , using momentum bins spanning nine orders of magnitude in momentum , to carry out simulations that agree well with results from simulations of modified shocks carried out with our conventional finite difference scheme requiring more than an order of magnitude more momentum points . the coarse - grained , cgmv scheme reduces execution times by a factor approximately half the ratio of momentum bins used in the two methods . depending on the momentum dependence of the diffusion , additional economies in required spatial and time resolution can be utilized in the cgmv scheme , as well . these allow a computational speed - up of at least an order of magnitude in some cases . ,
real time operation of the power grid and computation of electricity prices require accurate estimation of its structure and critical state variables .remote terminal units ( rtus ) transmit measurements collected from different grid components to the central control center for state estimation and subsequent use in analyzing grid stability .the collected measurements can be broadly classified into two kinds : meter readings and breaker statuses .the breaker statuses on transmission lines help create the current operational topology of the grid .the meter readings , comprising of line flow and bus power injection measurements , are then used to estimate the state variables over the estimated topology . in a practicalsetting , the collected measurements suffer from noise , that get added at source or during communication to the control center .the affect of such noise is minimized through placement of redundant / additional meters and use of suitable bad - data detection and correction techniques at the estimator .cyber - attacks on the power grid refer to corruption of measurements ( meter readings and breaker statuses ) by an adversary , aimed at changing the state estimation output , without getting detected by the estimator s checks .the viability of such attacks has in fact been demonstrated through controlled experiments like the aurora attack in department of energy s idaho laboratory and gps spoofing attack on phasor measurement units ( pmus ) .past literature on cyber - attacks have generally looked at adversaries that change meter data ( and not breaker statuses ) to affect state estimation .such data attacks involving injection of malicious data into meters were first analyzed in . using a dc power flow model for state estimation , the authors of provide an attack design using projection matrices . following this ,several approaches have been discussed to study hidden attacks under different operating conditions .these include mixed integer programming , heuristic based detection , sparse recovery using relaxation , graph - cut based construction for systems with phasor measurement units ( pmus ) among others .the possible economic ill - affects of such hidden data attacks on power markets are presented in . in a recent paper ,the authors investigates hidden attacks under the more general and potent regime of topology data ( breaker statuses ) and meter data corruption .all of these cited work on data alone or topology and data attacks , however , require changing floating point meter measurements in real time .the practicality of this is questionable as significant resources are required to synchronize the changes at multiple meters . in this paper, we focus on hidden attacks that primarily operate through changes in breaker statuses . 
herethe adversary changes the statuses of a few operational breakers from ( closed ) to ( open ) , as well as jams ( blocks the communication ) of flow measurements on a subset of transmission lines in the grid .however , the adversary does not modify any meter reading to an arbitrary value .we term these attacks as breaker - jammer attacks .note that breaker statuses , unlike meter readings , are binary in nature and fluctuate with lower frequency .they are thus easier to change , even by adversaries with limited resources .jamming measurements , through jammers or by destruction of communication apparatus , is technologically less intensive than corrupting meter measurements .in fact , jamming does not raise a major alarm as measurement loss due random communication drops occurs under normal circumstances .the breaker - jammer attack model was introduced by the authors for grids with a specific meter configuration requiring sufficient line flow measurements in .this work generalizes the framework to any grid with line flow and injection meters and uses a novel graph - coloring analysis to determine the optimal hidden attack .our graph coloring based analysis is in principal similar to which studies standard data attacks as a graph partitioning problem .however , the similarly ends there as our attack model does not use corruption of meter readings .instead breaker status changes and line flow jams provide a different set of necessary and sufficient conditions for feasible attacks .the surprising revelation of our analysis is that under normal operating conditions , a single breaker status change ( with the necessary flow measurement jamming ) is sufficient to create an undetectable attack .in fact , we show that if a hidden attack can be constructed by changing the status of a set of breakers , then a hidden attack using only one break status change exists as well .this is significant as the adversary can focus on jamming the necessary flow measurements , after selecting a breaker to attack .further , our attack design does not depend on the current system state or transmission line parameter values , and has low information requirements .the rest of this paper is organized as follows .we present the system model used in generalized state estimation and describe the attack model in the next section . the graph coloring approach to determine the necessary and sufficient conditions for a hidden attack and elucidating examples are discussed in section [ sec : coloring ] .the design of the optimal hidden attack is discussed in section [ sec : design ] along with simulations on ieee test cases . finally , concluding remarks and future directions of work are presented in section [ sec : conclusion ] .first , we provide a brief description of the notation used .we represent the current operational structure of the grid by graph where denotes the set of buses / nodes of size and denotes the set of operational edges of size .the set of binary breakers statuses for the edges is denoted by the diagonal matrix of size .we assume that all lines to be initially operational ( is identity matrix ) and ignore any non - operation line for ease of notation .the edge to node incidence matrix is denoted by of dimension .each operational edge between nodes and has a corresponding row in , where . 
denotes the standard basis vector in with one at the location .the direction of flow on edge is taken to be from to , without any loss of generality .we consider the dc power flow model for state estimation in this paper .the state variables in this model are the bus phase angles , denoted by the vector .the set of measurements is denoted by the vector . hereline flow measurements are included in and bus injection measurements are included in .state estimation in the power grid relies on the breaker statuses in for topology estimation and then uses the meter measurements for estimating the state vector .the relation between and in the dc model is given by where is the zero mean gaussian noise vector with covariance matrix . is the measurement matrix and depends on the grid structure and susceptance of transmission lines .let the entry in corresponds to the flow measurement on line .then ( the row in ) is given by = b_{ab}m_{ab } \label{flow}\end{aligned}\ ] ] with the non - zero values at the and locations respectively . is the susceptance of the line . on the other hand ,if the entry corresponds to an injection measurement at node , we have . in matrix form , ignoring measurement noise , we can write equations for received measurements as is the diagonal matrix of susceptances of lines in .we arrange the rows in such that the top rows represent the lines with flow measurements .matrix , comprising of the top rows of a identity matrix , selects these measured flows .for ease of notation and analysis in later sections , we pad trailing zeros to vector and make it of length .similarly , we pad trailing all - zero rows to to make it a diagonal square matrix of dimension . on the other hand consists of the columns of that correspond to the nodes with injection measurements .the optimal state vector estimate is given by minimizing the residual .if the minimum residual does not satisfy a tolerance threshold , bad - data detection flags turn on and data correction is done by the estimator .the overall scheme of topology and state estimation processes followed by bad - data detection and correction is called generalized state estimation ( gse ) as illustrated in figure [ estimator ] .[ estimator ] * attack model : * we assume that the adversary is agnostic and has no information on the current system state or line susceptance matrix . for attack ,the adversary changes the breaker statuses on some lines . the new breaker status matrix , after attack ,is denoted by where diagonal matrix has a value of for attacked breakers .similarly , the available flow measurements after jamming are represented by , with diagonal matrix having a value of corresponding to jammed flows .let the new state vector estimated after the breaker - jammer attack be denoted by , where denotes the change .note that if the flow measurement on a line is not jammed , its value remains the same following the attack . using ( [ flowmat ] ), we have it follows immediately that if the breaker status on the line with flow measurement is changed ( ) , to avoid detection , its flow measurement needs to be jammed as well ( ) .thus , consider the injection measurements ( ) now , which are not changed during the attack .the breaker attack leads to removal of lines marked as open from equation ( [ injmat ] ) , resulting in the following modification . 
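before analyzing this modification, a toy numerical sketch of the dc measurement model and the least-squares estimator described above may help fix notation; the 3-bus network, its susceptances, the meter placement, the noise level and the use of equal weights are all illustrative choices rather than anything prescribed here.

```python
import numpy as np

# toy 3-bus, 3-line network (lines: 1-2, 2-3, 1-3), susceptances b_e
b = np.array([10.0, 8.0, 5.0])
M = np.array([[ 1, -1,  0],      # one row per line: e_a - e_b
              [ 0,  1, -1],
              [ 1,  0, -1]], dtype=float)

H_flow = np.diag(b) @ M                  # flow measurement rows: b_ab (e_a - e_b)
H_inj  = M.T @ np.diag(b) @ M            # injection rows = weighted Laplacian rows
H = np.vstack([H_flow, H_inj[[0, 2]]])   # flows on all lines + injections at buses 1, 3

theta_true = np.array([0.0, -0.05, -0.12])    # bus 1 taken as the reference bus
z = H @ theta_true + 1e-3 * np.random.randn(H.shape[0])

# least-squares estimate (equal weights here) on the reduced system with the
# reference angle removed, followed by the bad-data residual check
Hr = H[:, 1:]
theta_hat = np.linalg.lstsq(Hr, z, rcond=None)[0]
residual = np.linalg.norm(z - Hr @ theta_hat)
print(theta_hat, residual)               # small residual -> no bad-data flag
```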
equation ( [ injcond ] ) thus states that after the breaker - jammer attack , for each injection measurement , the sum of original flows contributed by lines with attacked breakers ( left side ) needs to be accommodated by changes in estimated flows on lines ( connected to the same bus ) whose breakers are intact but actual flow measurements are not received ( right side ) . finally , for unique state estimation following the adversarial attack ( with one bus considered reference bus with phase angle ) we need the necessary conditions for a successful breaker - jammer attack that results in a change in estimated state vector consists of equations ( [ flowcond ] ) , ( [ breakjam ] ) , ( [ injcond ] ) , and ( [ rank ] ) . in the next section ,we describe a graph coloring based analysis of the necessary and sufficient conditions and use it to discuss design of optimal attacks of our regime .for our graph coloring based analysis , we use the following coloring scheme : _ for any change in the estimated state vector , neighboring buses with same value in are given same color ._ using this , we now discuss a permissible graph coloring corresponding to the requirements of a feasible attack discussed in the previous section .equation ( [ flowcond ] ) states that if the flow on line between buses and is not jammed , ( same color in our scheme ) .thus , _ * a set of buses connected through lines with available flow measurements ( not jammed ) has the same color . * _this implies that the grid buses , following a feasible attack , can be divided into groups , each group having a distinct color .the lines between buses of different groups do not carry any flow measurement or are jammed by the adversary .a test example is illustrated in figure [ fig : graphcoloring ] .observe the buses with injection measurements , that are not corrupted by the adversary .for an interior bus , ( all neighboring nodes have the same color as itself ) , the right side of equation ( [ injcond ] ) equates to zero .the left side becomes equal to zero , under normal operating conditions , if breakers on lines connected to bus are not attacked .thus , we have _ * a feasible graph coloring has lines with attacked breakers connected to boundary buses . * _ a boundary bus is one that has neighboring buses of colors distinct from itself . bus system with flow measurements on all lines and injection measurements at buses , and . the blue ,green and black buses are divided into groups and have same value of change in estimated state vector .the dotted red lines represent jammed lines , solid black lines represent operational lines .the grey lines with red bars represent the lines and with attacked breakers.,scaledwidth=42.0%,scaledwidth=30.0% ] [ fig : graphcoloring ] now , consider the injection meter installed on any boundary bus .such buses can exist in two configurations : a ) connected to lines with attacked breaker ( see bus in figure [ fig : graphcoloring ] ) or b ) connected to only lines with correct breaker statuses ( node if line did not have a breaker attack ) . in either case , using ( [ injcond ] ) , we have : _ * each injection measurement placed at a boundary bus provides one constraint relating the values of for neighboring differently colored buses . * _ for further analysis , we now use the coloring constraints highlighted in bold above to construct a reduced grid graph from as follows : * 1*. 
in each colored group , club boundary buses without injection measurements with all interior buses into one supernode of that color . make boundary buses with injection measurements into supernodes with the same color .connect supernodes of same color with artificial lines of zero susceptance .* 2*. for each line with intact breaker between two buses of different colors , create a line of same impedance between their corresponding supernodes. remove supernodes connected only to other supernodes of same color .* 3*. make injection measurements on supernodes equal to the sum of original flows on lines with attacked breakers connected to them ( positive for inflow , negative for outflow ) .if no incident line has attacked breaker , make the injection equal to . .the blue , green and black solid circles represent super nodes for buses , and respectively .the flow on the dotted red lines are not measured after attack .the grey lines with red bars represent lines with attacked breakers , that influence the injections at supernodes and .,scaledwidth=33.0%,scaledwidth=22.0% ] [ fig : supernode ] figure [ fig : supernode ] illustrates the reduced graph construction for the example in figure [ fig : graphcoloring ] .note that in the reduced graph , original lines between buses of same color are removed .the included lines exist between buses of different colors and have jammed or unavailable flow measurements .similarly , injection measurement relation ( [ injcond ] ) at interior nodes are trivially satisfied by and are ignored .the reduced system , thus , only includes constraints from boundary injection measurements that are similar in form to equation ( [ injcond ] ) as shown below : here , and are supernodes of different colors . the numeric value for the color of supernode is given by ( not the entry in ) . and are the susceptance matrix and edge set corresponding to the reduced graph . denotes the injection measurement on supernode with value given by step in the reduced graph construction .note that equation ( [ reducedinjcond ] ) for the injection measurements involves rows of the susceptance weighted laplacian matrix for .a unique solution of for in turn provides a uniquely estimated in after the adversarial attack .we now look at condition ( [ rank ] ) , necessary for unique state estimation after a feasible adversarial attack in terms of graph coloring .the reduced graph greatly simplifies our analysis here .first , it is clear that each color must have at least one supernode or a neighboring supernode ( of different color ) with injection measurement .otherwise the value of for that color will not be in any injection constraint .this goes against uniqueness of state estimation .note that the number of degrees of freedom in ( representing distinct values in ) is one less than the number of colors as one color denotes the reference phase change of .using , we prove the following result regarding permissible graph coloring for unique estimation .[ oneless ] following a breaker - jammer attack , the number of injection measurements at the boundary buses should be one less than the number of distinct colors in the grid buses .let the number of colored groups be .then the number of independent entries in is ( one entry being ) .the total number of linear constraints involving the numeric values in is equal to the number of injection measurements at the supernodes in . 
for unique state estimation ,the number of injection measurements should thus be greater than or equal to .we now show that exactly injection measurements are needed to get a solution to state estimation .consider the reduced graph .for real valued line susceptances and for cases where the supernodes having injection measurements do not form a closed ring with no additional branches ( see figure [ fig : supernode ] ) , the rank of rows is and we have unique state estimation . if the reduced graph contains a closed ring of supernodes with injection measurements , then the measurements will represent the entire susceptance weighted graph laplacian of the ring , that is rank deficient .however , the real valued entries in that exist on the right side of ( [ reducedinjcond ] ) and are derived from flows on lines with attacked breakers , will not cancel out under normal operating conditions .further , the adversary designing the attack is unaware of the current system state and will be able to determine if they do . hence the injections measurements constraints will be linearly independent ( the adversary will expect this under normal operations ) .this gives an unique and for a distinct colored grid graph .to summarize , the highlighted statements and theorem [ oneless ] provide the necessary and sufficient conditions for a feasible breaker - jammer attack under our graph - coloring scheme . in the next section , we show that the graph coloring approach proves a surprising result that simplifies the design of an optimal attack .we call an breaker - jammer feasible attack optimal if it requires minimum number of breaker status changes ( considering the fact that doing so is significantly more resource draining than measurement jamming ) . if multiple attacks are possible using the minimum number of breaker changes , we select as optimal the attack that requires the least number of flow measurement jams . using the reduced graph , we present the following result for the minimum number of breaker changes needed for a feasible attack under normal operating conditions ( non - zero real - valued bus susceptances and line flows that are distinct for different grid elements ) .[ oneenough ] if a feasible attack can be designed with breaker status changes , then a feasible attack exists such that all but one breaker statuses are changed back to their original operational state ( ) , while keeping their line flow measurements jammed .construct the reduced graph with its colored supernodes for the feasible attack with breakers and necessary flow measurement jams .let the number of colors in state estimation change be .the length of is then . by theorem [ oneless ] , there are injection measurements at the supernodes that provide constraint equations listed in ( [ reducedinjcond ] ) .if we revert the breaker status of an attacked line back to while keeping its flow measurement jammed , the only change in any constraint equation ( [ reducedinjcond ] ) involving that line will be that the injection measurement on the incident node ( entry in ) will become .since all but one breakers are brought back to the operational state , at least one injection measurement in will still remain non - zero and the constraint equations will still have linear inndependence .thus , state estimation will result in a different but non - zero , leading to a feasible attack .for example , consider the case in figure [ fig : graphcoloring ] where two breaker statuses are attacked . 
if the breaker status on line is changed back to while keeping the flow measurement jammed , the new reduced graph that will be derived is given in figure [ fig : supernode1 ] . as mentioned in theorem[ oneenough ] , the coloring scheme is still feasible and a non - zero change in state estimation results .bus case given in figure [ fig : graphcoloring ] , but with line being changed to a dotted red line .the blue , green and black solid circles represent super nodes for buses , and respectively . the flow on the dotted red lines are not received .the only grey line with red bar represents the line with an attacked breaker.,scaledwidth=33.0%,scaledwidth=22.0% ] [ fig : supernode1 ] this is a very significant result and simplifies the search for an optimal attack greatly . since one breaker change is sufficient , the adversary can select each line in turn ( iterations ) , attack its breaker ( change the corresponding entry in diagonal to ) and determine the flow measurements that need to be jammed ( given by diagonal ) to conduct a feasible attack . the breaker change that requires the minimum number of measurement jams ( or maximally sparse ) will then give the optimal attack .the selection of jammed measurements , after fixing , is formulated as ( [ opt_attack ] ) . this is simplified in formulation ( [ opt_attack1 ] ) where the jammed measurements ( with on diagonal of ) are given by the non - zero entries in . relaxation can be used to approximately solve ( [ opt_attack1 ] ) . since the adversary has no access to the actual state vector , a random non - zero , unavailable line susceptance are replaced with distinct real values .these replacements , under normal conditions , do not affect the optimal solution as they preserve the linear independence of injection constraints given in ( [ reducedinjcond ] ) . the rank constraint ( [ rank ] ) is not included in the optimization framework and can be checked manually after determining , for consistency . +* experiments : * we simulate our attack model on ieee and bus test systems and present averaged findings in figure [ fig : topologyplot ] . for each test system considered , we place flow measurements on all lines and injection measurements on a fraction of buses , selected randomly . to design a feasible attack involving a line , we change its breaker status and solve problem ( [ opt_attack1 ] ) to jam flows measurements to prevent detection .this is repeated for each line to determine the optimal attack . 
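the line-by-line search just described is straightforward to organize in code. the following python sketch (an illustration, not the authors' implementation) shows the outer sweep over candidate breakers together with an l1-relaxed inner problem solved with the cvxpy modelling package; the matrices `M`, `C`, `d` that are meant to encode the consistency conditions of ( [ opt_attack1 ] ) for a given line are assumed to be supplied by the caller and are placeholder names, not notation taken from the text.

```python
import numpy as np
import cvxpy as cp

def jams_for_breaker(M, C, d, tol=1e-6):
    """l1-relaxed inner problem for one candidate breaker change.

    M, C, d are assumed to encode the consistency conditions of
    (opt_attack1) for that line; their construction from the reduced graph
    is not reproduced here.  The nonzero entries of M @ x are taken to
    indicate which flow measurements have to be jammed.
    """
    x = cp.Variable(M.shape[1])
    prob = cp.Problem(cp.Minimize(cp.norm1(M @ x)), [C @ x == d])
    prob.solve()
    return np.flatnonzero(np.abs(M @ x.value) > tol)

def optimal_single_breaker_attack(candidates):
    """Outer search: one breaker change per candidate line, keep the
    sparsest jam set.  `candidates` maps each line to its (M, C, d) data,
    mirroring the line-by-line iteration described in the text."""
    best_line, best_jams = None, None
    for line, (M, C, d) in candidates.items():
        jams = jams_for_breaker(M, C, d)
        if best_jams is None or len(jams) < len(best_jams):
            best_line, best_jams = line, jams
    return best_line, best_jams
```

the candidate whose jam set is smallest is then the optimal single-breaker attack.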
in figure[ fig : topologyplot ] , note that the average number of flow measurements jammed increases with the number of injection measurements .this happens due to an increase in the number of injection constraints that require more measurement jams .[ fig : topologyplot ]in this paper , we study topology based cyber - attacks on power grids where an adversary changes the breaker statuses of operational lines and marks them as open .the adversary also jams flow measurements on certain lines to prevent detection at the state estimator .the attack framework is novel as it does not involve any injection of corrupted data into meters or knowledge of system parameters and current system state .using lesser information and resource overhead than traditional data attacks , our attack regime explores attacks on systems where all meter data are protected from external manipulation .we discuss necessary and sufficient conditions for the existence of feasible attacks through a new graph - coloring approach .the most important result arising from our analysis is that optimal topology based attacks exist that require a single breaker status change .finally , we discuss an optimization framework to select flow measurements that are jammed to prevent detection of the optimal attack .its efficacy is presented through simulations on ieee test cases .designing protection schemes for our attack model is the focus of our current work .1 a. l. ott , experience with pjm market operation , system design , and implementation " , _ ieee trans .power syst .18 , no . 2 , 2003 .a. abur and a. g. expsito , power system state estimation : theory and implementation " , new york : marcel dekker , 2004 .d. shepard , t. humphreys , and a. fansler , evaulation of the vulnerability of phasor measurement units to gps spoofing " , _ international conference on critical infrastructure protection _y. liu , p. ning , and m. k. reiter , false data injection attacks against state estimation in electric power grids " , _ proc .commun . security _o. vukovic , k. c. sou , g. dan , and h. sandberg , network - aware mitigation of data integrity attack on power system state estimation " , _ ieee journal on selected areas in communications _ , vol .30 , no . 6 , 2012 .o. kosut , l. jia , r. j. thomas , and l. tong , limiting false data attacks on power system state estimation " , _ proc .t. kim and v. poor , strategic protection against data injection attacks on power grids " , _ ieee trans .smart grid _ , vol .2 , no . 2 , 2011 .d. deka , r. baldick , and s. vishwanath , data attack on strategic buses in the power grid : design and protection " , _ ieee pes general meeting _ , 2014 .l. xie , y. mo , and b. sinopoli , false data injection attacks in electricity markets " , _ proc .ieee smartgridcomm _ , 2010 .j. kim and l. tong , on topology attack of a smart grid : undetectable attacks and countermeasures " , _ ieee j. select .areas commun .31 , no . 7 , 2013 .d. deka , r. baldick , and s. vishwanath , attacking power grids with secure meters : the case for using breakers and jammers " , _ ieee infocom ccses workshop _ , 2014 .a. giani , e. bitar , m. garcia , m. mcqueen , p. khargonekar , and k. poolla , smart grid data integrity attacks " , _ ieee trans .on smart grid _ , vol .4 , no . 3 , 2013 . a. abur and a. g. exposito , _ power system state estimation : theory and implementation _ , crc , 2000 . power system test archive " , http://www.ee.washington.edu/research/pstca .
a coordinated cyber-attack on grid meter readings and breaker statuses can lead to incorrect state estimation that can subsequently destabilize the grid. this paper studies cyber-attacks by an adversary that changes breaker statuses on transmission lines to affect the estimation of the grid topology. the adversary, however, is incapable of changing the value of any meter data and can only block recorded measurements on certain lines from being transmitted to the control center. the proposed framework, with limited resource requirements compared to standard data attacks, thus extends the scope of cyber-attacks to grids secure from meter corruption. we discuss necessary and sufficient conditions for feasible attacks using a novel graph-coloring based analysis and show that an optimal attack requires a breaker status change at only one transmission line. the potency of our attack regime is demonstrated through simulations on ieee test cases.
recently a new method for analyzing multifractal functions was introduced .it exploits the fact that the fractional derivative of order ( denoted here by ) of has , for a suitable range of , a power - law tail in its cumulative probability the exponent is the unique solution of the equation where is the scaling exponent associated to the behavior at small separations of the structure function of order , i.e. .it was also shown that the actual observability of the power - law tail when multifractality is restricted to a finite range of scales is controlled by how much departs from linear dependence on . the larger this departurethe easier it is to observe multifractality .so far the theory of such power - law tails has been developed only for synthetic random functions , in particular the random multiplicative process for which kesten - type maps and large deviations theory can be used .it is our purpose here to test the fractional derivative method for invariant measures of dissipative dynamical systems , in particular for the feigenbaum invariant measure which appears at the accumulation point of the period doubling cascade where the orbit has period .its multifractality was proven rigorously in ref . using a thermodynamic formalism . for the feigenbaum measure allscaling exponents can be determined with arbitrary accuracy .there is an important difference in the way one processes functions and invariant measures to determine their multifractal properties and in particular the spectrum of singularities , usually denoted for functions and for measures . for a function one uses the moments or the pdfs of the increments to determine the scaling exponents , whereas for an invariant measure one works with integrals over intervals or boxes of different sizes . in the one - dimensional casethe two approaches become equivalent by introducing the cumulative distribution function hence we shall apply the fractional derivative method to the integral of the invariant measure .the organization of the paper is the following .section [ s : thermo ] is devoted to the thermodynamic formalism for the feigenbaum attractor . in section [ ss :formalism ] , we recall the method used in ref . . in section [ ss : connection ]we show how this formalism , based on the study of the geometrical properties of the attractor , is actually connected to the standard multifractal formalism which focusses on the statistical properties of the invariant measure . to the best of our knowledgethe exact relation between the two formalisms is discussed here for the first time .then , in section [ ss : numericalfreeenergy ] we calculate numerically the free energy and accordingly the scaling exponents for the integral of the invariant measure ; this is done by a very accurate transfer - matrix - based method .fractional derivatives are discussed in section [ s : fraclap ] . in section [ ss : fraclap_pheno ]we briefly recall the phenomenology of power - law tails in the distribution of fractional derivatives and the limits on observability .the fractional derivative analysis of the feigenbaum measure is presented in section [ ss : fraclap_numerics ] .concluding remarks are made in section [ s : concl ] .in this section we give a brief description of the thermodynamic formalism for the invariant measure of the feigenbaum map ( see ref . for the mathematical details ) and show how one can use it in order to study the multifractal properties of the hlder exponents . 
by feigenbaum attractor we understand the attractor of the one - dimensional mapping \to [ 0,1] ] and the first few terms in the power series expansion are the value of the universal constant which is the inverse of the feigenbaum scaling constant is approximately equal to .an attractor for the map can be constructed in the following way . for each define a collection of intervals of level : , \nonumber \\ & & \delta^{(n)}_i = g^{(i)}(\delta^{(n)}_0 ) \equiv \underbrace{g \circ g \circ \cdots \circ g}_{i } ( \delta_0^{(n ) } ) \quad ( 1 \leq i\leq 2^n-1 ) .\label{delta}\end{aligned}\ ] ] the following properties of the intervals are easy consequences of the doubling equation ( [ g ] ) : ( a ) intervals are pairwise disjoint .( b ) .( c ) each interval of level contains exactly two intervals of level , and .( d ) , where denotes the length of the interval .the first three levels of the intervals are shown in fig .[ f : dynamicalpartition ] .the feigenbaum cvitanovi map and the first three levels of the partitions . for used the expansion ( [ g1 ] ) , introduced in ref . up to . ] 65 10 dynamical partitions the properties above imply that it is natural to use a dyadic representation for the intervals .let , where .then we can use a sequence as a symbolic coding for intervals : .now we can define the feigenbaum attractor the set is isomorphic to the set of all infinite dyadic sequences .such sequences can be considered as a symbolic coordinate system on . in this new coordinate systemthe map acts as the dyadic addition of the sequence .notice that topologically is a cantor set .it is easy to see that is indeed an attractor for all but countably many initial points ] }\leq c_1 \ , .\ ] ] the condition corresponding to intervals with odd s plays only a technical role and it is not essential for our further analysis since the odd intervals contain information about the lengths of the even ones .indeed , it is very easy to see that for every odd the intervals and have lengths of the same order .we next introduce a parameter ( inverse temperature ) and define the partition function \label{part}\ ] ] and the free energy it immediately follows from ( [ part ] ) and ( [ free ] ) that .\ ] ] in the thermodynamic limit the probability distributions \label{gibbs}\ ] ] tend to a limiting distribution which can be considered as a gibbs measure with the potential , inverse temperature and the boundary condition .this gibbs distribution generates the probability measure on which is the part of the whole attractor corresponding to intervals with odd numbers .we shall denote this gibbs measure on by .notice that corresponds to a unique invariant measure and gives a conditional distribution corresponding to lebesgue measure on ] .we have \ , \sim \, z_n(\beta ) \ , \sim \ , \exp[f(\beta)n]\ ] ] which gives \ , .\ ] ] using ( [ n ] ) we can find the hausdorff dimension of the set of points which are typical with respect to the measure . since \ ] ] we conclude that which immediately implies the hausdorff dimension of the whole attractor is equal to the maximum of over all . 
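as a concrete illustration of the construction above, the short python sketch below builds the level- n intervals of the dynamical partition and evaluates the partition function and the finite- n free energy. it assumes the normalization delta_0^(n) = [0, lambda^n] and a user-supplied approximation of g (for instance the truncated expansion ( [ g1 ] )); both the normalization and the choice of g are illustrative assumptions and are not taken verbatim from the text.

```python
import numpy as np

def level_intervals(g, lam, n):
    """Level-n intervals of the dynamical partition.

    Assumes Delta_0^(n) = [0, lam**n] and that Delta_i^(n) is the i-fold
    image of Delta_0^(n) under g, with g monotone on every interval it is
    applied to (true for the short intervals occurring here).  Both points
    follow the construction sketched above, but the constants are assumed.
    """
    a, b = 0.0, lam ** n
    intervals = [(a, b)]
    for _ in range(2 ** n - 1):
        a, b = sorted((g(a), g(b)))
        intervals.append((a, b))
    return intervals

def partition_function(g, lam, n, beta):
    """Z_n(beta) over the odd-numbered intervals, together with the
    finite-n free energy log Z_n / n (normalization chosen for illustration)."""
    lengths = np.array([b - a for a, b in level_intervals(g, lam, n)])
    z = np.sum(lengths[1::2] ** beta)   # odd indices i = 1, 3, 5, ...
    return z, np.log(z) / n
```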
let be the unique solution of the equation .it is easy to see that .the integral of the feigenbaum invariant measure calculated with bins of uniform length in ] .this means that under dynamics given by the map any initial absolutely continuous distribution on ] into subintervals of length .then it follows from ( [ hjkps1 ] ) that another characteristic of a multifractal measure is given by its spectrum of dimensions which is just the legendre transform of : \ , .\ ] ] the dual legendre relation allows one to find from : \ , .\ ] ] we next find a correspondence between the pair and the pair of thermodynamic functions .we shall show that where is an inverse function to the free energy . to derive the first relation we consider the dynamical partition andassume that but .for each define notice that the asymptotic behavior of depends only on asymptotic scalings of smaller elements of the dynamical partitions inside .the thermodynamic formalism constructed above implies that asymptotically those scalings are completely determined by the potential and hence they do not depend on .rescaling the invariant measure inside by a factor we conclude that where is a total number of the intervals inside . taking the sum over and using ( [ free1 ] ) we have \ , .\end{aligned}\ ] ] this together with ( [ hjkps11 ] ) immediately gives =2^p\ ] ] which implies the first relation in ( [ hjkps3 ] ) .we next show that the second relation holds . using ( [ hjkps2 ] )we have =\inf_p \ [ \alphap - ( -f^{-1}(p\ln 2)]=\inf_z \\left[\frac{\alpha}{\ln 2}z + f^{-1}(z)\right ] \nonumber \\ & = & \inf_\beta \\left[\frac{\alpha}{\ln 2}f(\beta ) + \beta \right ] = \frac{\alpha}{\ln 2}\inf_\beta\ \left[\frac{\ln 2}{\alpha}\beta + f(\beta)\right ] \ , .\end{aligned}\ ] ] it is easy to see that the extremum in ( [ hjkps8 ] ) corresponds to which implies finally we express the scaling exponents for the structure functions through the thermodynamic characteristics . the exponent is defined by the scaling relation in terms of the integral of the invariant measure .let be a partition of ] and ] and ] ; they are plotted in fig .[ f : feigen - zetap](b ) .the exponents are then obtained by a least square fit of the structure functions over the range . with this number of bins , the quality of the fit begins to somewhat deteriorate beyond , but otherwise there is rather good agreement between the two methods of determining .note that the `` -intercept '' of the graph of , namely , which is the codimension of the support of the invariant measure , is positive and its numerical value is slightly under one half .this will be important in the sequel .in this section we briefly recall the phenomenological approach to multifractality via fractional derivatives and adapt it to a multifractal measure .we therefore work , not with the measure itself , but with its integral .singularity exponents may be viewed as local hlder exponents of , i.e. , for .we turn to fractional derivatives of order defined , as in ref . , as the multiplication in the fourier space by by ( see ref . for precise definition ) .an isolated non - oscillatory singularity with exponent at a point implies if , as we shall assume hereafter , the exponent is negative , the fractional derivative can become arbitrarily large and thus contributes to the tail - behavior of the probability .a key assumption in the phenomenology is that this argument can be carried over to non - isolated multifractal singularities , provided we take all types of singularities into account . 
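for the structure-function route to the exponents just described, a minimal numerical sketch is the following; the sampling grid, the set of lags and the fitting range are left to the caller and are not the values used in the text.

```python
import numpy as np

def structure_exponents(F, ps, scales):
    """Least-squares estimate of the exponents zeta_p from the increments of
    the integrated measure F (sampled on a uniform grid).

    For each p, S_p(l) = mean |F(x + l) - F(x)|**p is computed over the
    integer lags in `scales`, and zeta_p is the slope of log S_p versus
    log l, mirroring the fitting procedure described above.
    """
    zetas = []
    for p in ps:
        logS = []
        for l in scales:
            incr = np.abs(F[l:] - F[:-l])
            logS.append(np.log(np.mean(incr ** p)))
        slope = np.polyfit(np.log(scales), logS, 1)[0]
        zetas.append(slope)
    return np.array(zetas)
```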
for the feigenbaum invariant measure , we know the hausdorff dimension of the set of points having a singularity with exponent . assuming that we can also use as a covering dimension , we can express the probability to have a singularity of exponent contributing a fractional derivative of order which exceeds ( in absolute value ) a given large value , that is we require in terms of the codimension of the set , the probability to satisfy ( [ e : y - x ] ) is written as here is the spatial dimension ( ) .taking now into account the singularities with all possible exponents , the tail of the cumulative probability of the fractional derivative of order is given , to the leading order , by the following power law an easy calculation shows that corresponding to the infimum in ( [ e : inf ] ) satisfies which immediately gives .on the other hand , we know that here the infimum is given by an satisfying the very same relation .hence , using ( [ alpha_star ] ) , we get where the second relation follows from .the geometrical interpretation of this equation is that the ( negative ) exponent of the power - law tail for the fractional derivative of order is the -value of the intersection of the graph of and of a straight line of slope through the origin . as shown in ref . , in the presence of the finite range of scaling , the power - law tail ( [ e : cprob ] ) emerges only if the multifractality is sufficiently strong .this strength is given by the multifractality parameter , a measure of how strongly the data depart from being self similar ( which would imply ) : where . it was shown that observability of the power - law requires a sufficiently large value for the product , where is the number of octaves over which the data present multifractal scaling . in practiceit was found in ref . that for example , fully - developed turbulence velocity data have typical values of the order of , thereby requiring a monstrous inertial range of about 300 octaves for observability of power - law tails .as we shall see , the situation is much more favorable for the feigenbaum invariant measure . before turning to numerical questions ,we comment on an issue raised by an anonymous referee who worried about the nonlocal character of the fractional derivative and wrote in essence that our approach makes sense , strictly speaking , only for ( statistically ) translationally invariant in space systems : otherwise , if the system consists of components whose `` fractal properties '' are rather different the results will be smeared out .our feeling about such matters is summarized as follows .first one can observe that , of course , the attractor for the feigenbaum map is not homogeneous ( translation - invariant ) but after zooming in it becomes increasingly so ; the fractional derivative is not a local operator but the tail of its pdf is likely to be dominated by strongly localized events .second , a more technical observation .the idea of the multi - fractal analysis is based on the fact that the dynamics of a system determines a variety of scales .it is important that these scales do not depend on a particular place in the phase space . on the contrary , they are present and `` interact '' with each other everywhere . in the case of the feigenbaum attractorthe scales depend on a symbolic location in a system of partitions . in physical systems , like homogeneous turbulence , such partitions are difficult to define rigorously . 
however , the invariance with respect to the space coordinate is still present and forms a basis for applicability of the multifractal calculus .the phenomenological arguments presented in the previous section suggest that we should find power - law tails in the cumulative probability for fractional derivatives of for suitable orders .inspection of fig .[ f : feigen - zetap ] indicates that should be between the minimum slope of the graph and unity .the value is of course not a fractional order but , as we shall see , it is associated with a power - law tail of exponent minus one .. ( putting into the partition function , we have . ) ] the minimum slope can be easily found .indeed , takes large values when is large negative . in this case the main contribution to from the shortest interval of the partition with the length of the order of .hence , in the limit .this gives the following lower bound of the differentiation order : we have already observed that , because the graph does not pass through the origin , substantial values can be expected for the multifractality parameter .the actual values of , associated to values of ranging from to 3 by increments of are shown in table [ t : fzetap ] , together with the number of scaling octaves needed determined by ( cf .( [ e : criterion ] ) ) ..[t : fzetap ] for the feigenbaum invariant measure we show the scaling exponents , the corresponding inverse temperature s , the multifractality parameter and the number of scaling octaves needed . [ cols="^,^,^,^,^,^,^",options="header " , ] in practice , on a 32 bit machine , we are limited to about 25 octaves of dynamical range in resolution over the interval $ ]. this should be enough to observe power - law tails .cumulative probabilities of absolute values of fractional derivatives of various orders for the feigenbaum invariant measure .each function displays a power - law tail with an exponent fairly close to the predicted value .insets : corresponding graphs . ( a ) .( b ) .( c ) .( d ) .( e ) . 
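a minimal sketch of the numerical procedure used below for the fractional derivative and its tail exponent is the following; it assumes that the fractional derivative of order alpha acts as multiplication by |k|^alpha in fourier space and that a hann window and rank ordering are used, with all tuning choices (window, number of retained extreme values, fitting range) left as illustrative defaults rather than the values used in the text.

```python
import numpy as np

def fractional_derivative(F, alpha):
    """Fractional derivative of order alpha of a sampled signal F, taken here
    as multiplication by |k|**alpha in Fourier space.  A Hann window is
    applied first because F is not periodic; the sampling interval is 1."""
    w = np.hanning(len(F))
    Fhat = np.fft.fft(F * w)
    k = 2.0 * np.pi * np.fft.fftfreq(len(F))
    return np.real(np.fft.ifft(np.abs(k) ** alpha * Fhat))

def tail_exponent(samples, n_tail=2000):
    """Rank-ordering estimate of the power-law exponent of the cumulative
    probability of |samples|: for the n_tail largest values the cumulative
    probability is rank/N, and the exponent is the log-log slope."""
    x = np.sort(np.abs(samples))[::-1][:n_tail]
    ranks = np.arange(1, n_tail + 1) / len(samples)
    return np.polyfit(np.log(x), np.log(ranks), 1)[0]   # roughly -p*
```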
indeed, fig. [f:feigen-cump] shows five instances of cumulative probabilities of fractional derivatives with power-law tails, corresponding to the values of the exponent listed in table [t:fzetap]. the corresponding order of differentiation ranges between and . (if the order is too small, e.g. for , no power-law tail is observed.) since the function which we are analyzing is not periodic, we resort to the hann windowing technique employed previously in ref. (section 13.4). also, we use rank ordering to avoid binning. the power-law behavior observed is consistent with the phenomenological theory presented in section [ss:fraclap_pheno], the residual discrepancies being due to the resolution of bins. we have found solid numerical evidence for the presence of power-law tails in the cumulative distribution of fractional derivatives for the integral of the invariant measure of the feigenbaum map. furthermore the exponents measured are consistent with those predicted by phenomenological arguments from the spectrum of singularities. since we have a fairly deep understanding of the structure of the attractor, thanks in particular to the thermodynamic formalism, a reasonable goal may be to actually prove the results. the main difficulty is that the operation of fractional derivative is non-local. however, we believe that a rigorous analysis here is still possible due to the quite simple spectral structure of the dynamical system corresponding to the feigenbaum attractor. we are grateful to rahul pandit for useful remarks. computational resources were provided by the yukawa institute (kyoto). this research was supported by the european union under contract hprn-ct-2000-00162 and by the indo-french centre for the promotion of advanced research (ifcpar 2404-2). vul, ya.g. sinai and k.m. khanin, feigenbaum universality and the thermodynamic formalism, _russian math. surveys_ *39*:1 - 40 (1984). g. parisi and u. frisch, on the singularity structure of fully developed turbulence, in _turbulence and predictability in geophysical fluid dynamics_, proceedings of the international school of physics enrico fermi, jun. 14 - 24, 1983, varenna, italy, m. ghil, r. benzi and g. parisi, eds., pp. 84 - 87, north holland (1985).
it is shown that fractional derivatives of the (integrated) invariant measure of the feigenbaum map at the onset of chaos have power-law tails in their cumulative distributions, whose exponents can be related to the spectrum of singularities. this is a new way of characterizing multifractality in dynamical systems, so far applied only to multifractal random functions (frisch and matsumoto, _j. stat. phys._ *108*:1181, 2002). the relation between the thermodynamic approach (vul, sinai and khanin, _russian math. surveys_ *39*:1, 1984) and that based on singularities of the invariant measures is also examined. the theory for fractional derivatives is developed from a heuristic point of view and tested by very accurate simulations. _j. stat. phys., in press._ *keywords:* chaotic dynamics, multifractals, thermodynamic formalism.
the primary goal of tomography is to determine the internal structure of an object without cutting it , namely using data obtained by methods that leave the object under investigation undamaged .these data can be obtained by exploiting the interaction between the object and various kinds of probes including x - rays , electrons , and many others .after its interaction with the object under investigation , the probe is detected to produce what we call a projected distribution or tomogram , see fig .[ fig : profiles ] .tomography is a rapidly evolving field for its broad impact on issues of fundamental nature and for its important applications such as the development of diagnostic tools relevant to disparate fields , such as engineering , biomedical and archaeometry .moreover , tomography can be a powerful tool for many reconstruction problems coming from many areas of research , such as imaging , quantum information and computation , cryptography , lithography , metrology and many others , see fig .[ fig : tomography ] . from the mathematical point of viewthe reconstruction problem can be formulated as follows : one wants to recover an unknown function through the knowledge of an appropriate family of integral transforms .it was proved by j. radon that a smooth function on can be determined explicitly by means of its integrals over the lines in .let denote the integral of along the line ( tomogram ). then where is the laplacian on , and its square root is defined by fourier transform ( see theorem [ thm : inversioneformula ] ) .we now observe that the formula above has built in a remarkable duality : first one integrates over the set of points in a line , then one integrates over the set of lines passing through a given point .this formula can be extended to the -dimensional case by computing the integrals of the function on all possible hyperplanes .this suggests to consider the transform defined as follows .if is a function on then is the function defined on the space of all possible -dimensional planes in such that , given a hyperplane , the value of is given by the integral of along .the function is called _ radon transform _ of .there exist several important generalizations of the radon transform by john , gelfand , helgason and strichartz .more recent analysis has been boosted by margarita and volodya manko and has focused on symplectic transforms , on the deep relationship with classical systems and classical dynamics , on the formalism of star product quantization , and on the study of marginals along curves that are not straight lines . in quantum mechanicsthe radon transform of the wigner function was considered in the tomographic approach to the study of quantum states and experimentally realized with different particles and in diverse situations . 
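as a numerical illustration of the two-dimensional case, the sketch below computes tomograms of a test density and inverts them by filtered back projection, the discrete counterpart of applying the dual transform followed by the one-dimensional filter in the inversion formula. it uses the scikit-image library, which is an assumption made here for convenience and is not referred to in the text.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# Forward problem: tomograms (sinogram) of a test density f.
f = shepp_logan_phantom()                       # standard test image
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(f, theta=theta)                # line integrals of f

# Inverse problem: filtered back projection, i.e. back projection of the
# tomograms after a one-dimensional ramp filtering of each projection.
f_rec = iradon(sinogram, theta=theta)
print("max reconstruction error:", np.max(np.abs(f_rec - f)))
```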
fora review on the modern mathematical aspects of classical and quantum tomography see .good reviews on recent tomographic applications can be found in and in , where particular emphasis is given on maximum likelihood methods , that enable one to extract the maximum reliable information from the available data can be found .as explained above , from the mathematical point of view , the internal structure of the object is described by an unknown function ( density ) , that is connected via an operator to some measured quantity ( tomograms ) .the tomographic reconstruction problem can be stated as follows : for given data , the task is to find from the operator equation .there are many problems related to the implementation of effective tomographic techniques due to the instability of the reconstruction process .there are two principal reasons of this instability .the first one is the ill - posedness of the reconstruction problem : in order to obtain a satisfactory estimate of the unknown function it is necessary an extremely precise knowledge of its tomograms , which is in general physically unattainable .the second reason is the discrete and possibly imperfect nature of data that allows to obtain only an approximation of the unknown function .the first question is whether a partial information still determines the function uniquely .a negative answer is given by a theorem of smith , solomon and wagner , that states : `` a function with compact support in the plane is uniquely determined by any infinite set , but by no finite set of its tomograms '' .therefore , it is clear that one has to abandon the request of uniqueness in the applications of tomography .thus , due to the ill - posedness of reconstruction problem and to the loss of uniqueness in the inversion process , a regularization method has to be introduced to stabilize the inversion .a powerful approach is the introduction of a mumford - shah ( ms ) functional , first introduced in a different context for image denoising and segmentation .the main motivation is that , in many practical applications , one is not only interested in the reconstruction of the density distribution , but also in the extraction of some specific features or patterns of the image .an example is the problem of the determination of the boundaries of inner organs . by minimizing the ms functional, one can find not only ( an approximation of ) the function but also its sharp contours .very recently a ms functional for applications to tomography has been introduced in the literature .some preliminary results in this context are already available but there are also many interesting open problems and promising results in this direction , as we will try to explain in the second part of this article . the article is organized as follows .section [ sec : radon ] contains a short introduction to the radon transform , its dual map and the inversion formula .section [ sec : ill ] is devoted to a brief discussion on the ill - posedness of the tomographic reconstruction and to the introduction of regularization methods . in section[ sec : ms ] a ms functional is applied to tomography as a regularization method . 
in particular , in subsection [ subsec : ms ] the piecewise constant model and known results are discussed together with a short list of some interesting open problems .finally , in section [ sec:3dinterpretation ] we present an electrostatic interpretation of the regularization method based on the ms functional , which motivates us to introduce an improved regularization method , based on the blake - zisserman functional , as a relaxed version of the previous one .consider a body in the plane , and consider a beam of particles ( neutrons , electrons , x - rays , etc . ) emitted by a source .assume that the initial intensity of the beam is .when the particles pass through the body they are absorbed or scattered and the intensity of the beam traversing a length decreases by an amount proportional to the density of the body , namely so that a detector placed at the exit of the body measures the final intensity and then from one can record the value of the density integrated on a line . if another ray with a different direction is considered , with the same procedure one obtains the value of the integral of the density on that line .the mathematical model of the above setup is the following : given a smooth function on the plane , , and a line , consider its tomogram , given by where is the euclidean measure on the line . in this way, we have defined an operator that maps a smooth function on the plane into a function on , the manifold of the lines in .we ask the following question : if we know the family of tomograms , can we reconstruct the density function ?the answer is affirmative and in the following we will see how to obtain this result .let us generalize the above definitions to the case of an -dimensional space .let be a function defined on , integrable on each hyperplane in and let be the manifold of all hyperplanes in .the radon transform of is defined by eq .( [ sharp lambda ] ) , where is the euclidean measure on the hyperplane .thus we have an operator , the _ radon transform _ , that maps a function on into a function on , namely .its dual transform , also called _ back projection operator _ , associates to a function on the function on given by where is the unique probability measure on the compact set which is invariant under the group of rotations around . using its signed distance from the origin and a unit vector perpendicular to .,scaledwidth=45.0% ] let us consider the following covering of where is the unit sphere in .thus , the equation of the hyperplane is with denoting the euclidean inner product of .[ fig : radon nd ] . observe that the pairs are mapped into the same hyperplane . therefore ( [ double covering ] ) is a double covering of .thus has a canonical manifold structure with respect to which this covering mapping is differentiable .we identify continuous ( differentiable ) functions on with continuous ( differentiable ) functions on satisfying .we will momentarily work in the schwartz space of complex - valued rapidly decreasing functions on .in analogy with we define as the space of functions on which for any integers , any multiindex , and any differential operator on satisfy the space is then defined as the set of satisfying .now we want to obtain an inversion formula , namely we want to prove that one can recover a function on from the knowledge of its radon transform . 
in order to get this resultwe need a preliminary lemma , whose proof can be found in , which suggests an interesting physical interpretation .[ sharp flat ] let and , , .then where depends only on the dimension , and denotes the convolution product , is the potential at generated by the charge distribution .,scaledwidth=45.0% ] a physical interpretation of lemma [ sharp flat ] is the following : if is a charge distribution , then the potential at the point generated by that charge is exactly , see fig . [fig : potential ] .notice , however , that the potential of a point charge scales always as the inverse distance _ independently _ of the dimension , and thus it is coulomb only for .the only dependence on is in the strength of the elementary charge .this fact is crucial : indeed , the associated poisson equation involves an -dependent ( fractional ) power of the laplacian , which appears in the inversion formula for the radon transform .[ thm : inversioneformula ] let .then where , with , is a pseudodifferential operator whose action is where is the fourier transform of , the proof of theorem [ thm : inversioneformula ] can be found in .equation ( [ inversion formula0 ] ) says that , modulo the final action of , the function can be recovered from its radon transform by the application of the dual mapping : first one integrates over the set of points in a hyperplane and then one integrates over the set of hyperplanes passing through a given point .explicitly we get which has the following remarkable interpretation .note that if one fixes a direction , then the function is constant on each plane perpendicular to , i.e. it is a ( generalized ) plane wave .therefore , eq . ( [ inversion formula ] ) gives a representation of in terms of a continuous superposition of plane waves .a well - known analogous decomposition is given by fourier transform .when , one recovers the inversion formula ( [ eq : radoninversion ] ) originally found by radon .we have defined the radon transform of any function as .the following theorem contains the characterization of the range of the radon linear operator and the extension of to the space of square integrable functions .[ th : radonbijection ] the radon transform is a linear one - to - one mapping of onto , where the space is defined as follows : if and only if and for any integer the integral is a homogeneous polynomial of degree in .moreover , the radon operator can be extended to a continuous operator from and . in medical imaging ,computerized tomography is a widely used technique for the determination of the density of a sample from measurements of the attenuation of x - ray beams sent through the material along different angles and offsets .the measured data are connected to the density via the radon transform . 
to compute the density distribution the equation has to be inverted .unfortunately it is a well known fact that is not continuously invertible on , and this imply that the problem of inversion is ill - posed .for this reason , regularization methods have to be introduced to stabilize the inversion in the presence of data noise .we discuss ill - posed problems only in the framework of linear problems in hilbert spaces .let be hilbert spaces and let be a linear bounded operator from into .the problem is called well - posed by hadamard ( 1932 ) if it is uniquely solvable for each and if the solution depends continuously on .otherwise , ( [ prob : inverseproblem ] ) is called ill - posed .this means that for an ill - posed problem the operator either does not exist , or is not defined on all of , or is not continuous .the practical difficulty with an ill - posed problem is that even if it is solvable , the solution of need not be close to the solution of if is close to . in general is not a continuous operator . to restore continuitywe introduce the notion of a regularization of .this is a family of linear continuous operators which are defined on all and for which on the domain of .obviously as if is not bounded .with the help of a regularization we can solve ( [ prob : inverseproblem ] ) approximately in the following sense . let be an approximation to such that .let be such that , as , then , as , hence , is close to if is close to . the number is called a _ regularizationparameter_. determining a good regularization parameter is one of the crucial points in the application of regularization methods .there are several methods for constructing a regularization as the truncated singular value decomposition , the method of tikhonov - phillips or some iterative methods . in the following section we present a regularization method based on the minimization of a mumford - shah type functional .in many practical applications one is not only interested in the reconstruction of the density distribution but also in the extraction of some specific features within the image which represents the density distribution of the sample .for example , the planning of surgery might require the determination of the boundaries of inner organs like liver or lung or the separation of cancerous and healthy tissue .segmenting a digital image means finding its _ homogeneous regions _ and its _ edges _ , or _boundaries_. of course , the homogeneous regions are supposed to correspond to meaningful parts of objects in the real world , and the edges to their apparent contours .the mumford - shah variational model is one of the principal models of image segmentation .it defines the segmentation problem as a joint smoothing / edge detection problem : given an image , one seeks simultaneously a `` piecewise smoothed image '' with a set of abrupt discontinuities , the `` edges '' of .the original mumford - shah functional , is the following : where * is an open set ( _ screen _ ) ; * is a closed set ( _ set of edges _ ) ; * ( _ cartoon _ ) ; * denotes the distributional gradient of ; * is the datum ( _ digital image _ ) ; * are parameters ( _ tuning parameters _ ) ; * denotes the -dimensional hausdorff measure .the squared distance in ( [ defn : jms ] ) plays the role of a fidelity term : it imposes that the cartoon approximate the image . the second term in the functional imposes that the cartoon be piecewise smooth outside the edge set . 
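two of the regularization strategies just mentioned are easy to state for a discretized operator (a matrix obtained, for example, by sampling the radon transform on a grid of offsets and angles). the sketch below is only illustrative; choosing the regularization parameter gamma and the truncation index r is exactly the delicate point discussed above.

```python
import numpy as np

def tikhonov(A, g, gamma):
    """Tikhonov-Phillips regularization of A x = g:
    minimize ||A x - g||^2 + gamma ||x||^2,
    i.e. x = (A^T A + gamma I)^{-1} A^T g."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + gamma * np.eye(n), A.T @ g)

def truncated_svd(A, g, r):
    """Truncated-SVD regularization: keep only the r largest singular values,
    discarding the components that inversion would amplify the most."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U.T @ g)[:r] / s[:r]
    return Vt[:r].T @ coeffs
```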
in other wordthis term favors sharp contours rather than zones where a thin layer of gray is used to pass smoothly from white to black or viceversa .finally the third term in the functional imposes that the contour be `` small '' and as smooth as possible .what is expected from the minimization of this functional is a sketchy , cartoon - like version of the given image together with its contours .see fig .[ fig : cartooneye ] . the minimization of the functional represents a compromise between accuracy and segmentation .the compromise depends on the tuning parameters and which have different roles .the parameter determines how much the cartoon can vary , if is small some variations of are allowed , while as increases tends to be a piecewise constant function .the parameter represents a scale parameter of the functional and measure the amount of contours : if is small , a lot of edges are allowed and we get a fine segmentation . as increases , the segmentation gets coarser . for more details on the modelsee the original paper , and the book . ) .center : contours of the image in the mumford - shah model ( edges ) .right : piecewise smooth function approximating the image ( cartoon ) .,scaledwidth=45.0% ] the minimization of the functional in ( [ defn : jms ] ) is performed among the admissible pairs such that is closed and .it is worth noticing that in this model there are two unknowns : a scalar function and the set of its discontinuities .for this reason this category of problems is often called `` free discontinuities problem '' .existence of minimizers of the functional in ( [ defn : jms ] ) was proven by de giorgi , carriero , leaci in in the framework of bounded variation functions without cantor part ( space sbv ) introduced by ambrosio and de giorgi in .further regularity properties for optimal segmentation in the mumford - shah model were shown in . herewe present a variation of the ms functional , adapted to the inversion problem of the radon transform .more precisely , we consider a regularization method that quantifies the edge sets together with images , i.e. a procedure that gives simultaneously a reconstruction and a segmentation of ( assumed to be supported in ) directly from the measured tomograms , based on the minimization of the mumford - shah type functional the only difference between the functionals and is the first term , i.e. the fidelity term , that ensures that the reconstruction for is close enough to a solution of the equation , whereas the other terms play exactly the same role explained for the functional .as explained above , in addition to the reconstruction of the density , we are interested in the reconstruction of its singularity set , i.e. 
the set of points where the solution is discontinuous .the main difference with respect to the standard mumford - shah functional ( [ defn : jms ] ) is that we have to translate the information about the set of sharp discontinuities of ( and hence on the space of the radon transform ) into information about the strong discontinuities of .here we will review the results obtained by ramlau and ring concerning the minimization of ( [ def : msfunctional ] ) restricted to piecewise constant functions , and then consider some interesting open problems .for medical applications , it is often a good approximation to restrict the reconstruction to densities that are constant with respect to a partition of the body , as the tissues of inner organs , bones , or muscles have approximately constant density .we introduce the space as the space of piecewise constant functions that attain at most different function values , where is an open and bounded subset of . in other words ,each is a linear combination of characteristic functions of sets which satisfy we assume that the s are open relatively to and we set for the boundary of with respect to the topology relative to the open domain .in this situation the edge set will be given by the union of the boundaries of s .for technical reasons it is necessary to assume a _ nondegeneracy _condition on the admissible partitions of : for some , for all , where denotes the lebesgue measure on .it turns out to be convenient to split the information encoded in a typical function , into a `` geometrical '' part described by the -tuple of pairwise disjoint sets which cover up to a set of measure zero and a `` functional '' part given by the -tuple of values .we also use the notation , for the boundaries of . as usualwhen dealing with inverse problems , we have to assume that the data are not exactly known , but that we are only given noisy measured tomograms of a ( hypothetical ) exact data set with .if we restrict the functional ( [ def : msfunctional ] ) to functions in we obtain that the second term ( involving the derivatives of ) disappears , therefore it remains to minimize the functional over , with respect to the functional variable ( a vector of components ) and the geometric variable ( a partition of the domain with at most distinct regions satisfying the non degeneracy condition ( [ nondegeneracy - cond ] ) ) .so the problem is to find such that where it is clear that will depend on the regularization parameter and on the error level .now we can state the results concerning the functional in ( [ functional : jbeta ] ) .there are several technical details necessary for the precise statement and proof of the theorems , for which we refer to the original paper . herewe will give a simplified version of the theorems with the purpose of explain the main goal , without too many technical details .the first result is about the existence of minimizers of the functional in ( [ functional : jbeta ] ). 
for all there exists a minimizer of the functional in ( [ functional : jbeta ] ) , with .the second result regards the stable dependence of the minimizers of the functional in ( [ functional : jbeta ] ) on the error level .let be a sequence of functions in and let .for all , let denote the minimizers of the functional with initial data .if in , as , then there exists a subsequence of such that as , and is a minimizer of with initial data .moreover , the limit of each convergent subsequence of is a minimizer of with initial data .finally the last theorem is a regularization result .let be given , and let .assume we have noisy data with .let us choose the parameter satisfying the conditions and as . for any sequence ,let denote the minimizers of the functional with initial data and regularization parameter .then there exists a convergent subsequence of .moreover , for every convergent subsequence with limit the function is a solution of the equation with a minimal perimeter .moreover if is the unique solution of this equation then the whole sequence converges when .finally , let us list some open problems in this context : * is the nondegeneracy condition ( [ nondegeneracy - cond ] ) necessary ? * can one find an a priori optimal value for the number of different values ? *is it possible to give an a priori estimate on the -norm of the solution ( maximum principle ) ? * and finally , it would be very important for applications to prove the existence of minimizers of the functional not restricted to piecewise constant functions .we observe that all these problems are quite natural , and have been completely solved in the case of the standard mumford - shah functional in ( [ defn : jms ] ) , see e.g. .in this section we restrict our attention to the -dimensional case .we propose an electrostatic interpretation of the regularization method based on the functional discussed in the previous section .the intent is to give a physical explanation of the fidelity term in the functional ( [ def : msfunctional ] ) , that provide the intuition for an improved regularization method . for , the inversion formula ( [ inversion formula0 ] ) and the electrostatic identity ( [ eq : electid ] ) particularize , respectively , as follows : for all one gets and where and is a constant .we present two preliminary lemmas .[ lemma : normrf ] for all real valued one has we know that , therefore ( x)\ , { \mathcal{i}}({\mathcal{r}}{f } ) ( x ) \ ; \textrm{d}x \nonumber \\ & & = \frac{1}{2(2\pi)^2 } \int_{{\mathbb{r}}^3}| \nabla { \mathcal{i}}({\mathcal{r}}{f } ) ( x)|^2\ ; \textrm{d}x .\nonumber\end{aligned}\ ] ] [ lemma : electricfield ] for all real valued define and then where we used the inversion formula ( [ inversionformula_again ] ) .now we consider a measured tomogram and let us assume that for some . by lemma [ lemma : normrf]-[lemma : electricfield ]it follows immediately that the fidelity term can be rewritten as follows : where are the corresponding electric fields , while are the corresponding potentials . 
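to make the role of the two unknowns in the piecewise constant model concrete: once the partition is fixed, the data-fidelity part of the functional is an ordinary least-squares problem in the values. the python sketch below shows only this "functional" half of the problem; updating the partition itself (the geometric variable) is the genuinely hard part and is not sketched. the names and the discretization are illustrative assumptions.

```python
import numpy as np

def optimal_values(R_chis, g_delta):
    """For a *fixed* partition Omega_1, ..., Omega_N, the fidelity term is
    quadratic in the values c = (c_1, ..., c_N): with A holding the tomograms
    of the indicator functions R(chi_{Omega_j}) as columns, the optimal
    values solve min_c ||A c - g_delta||^2 (the perimeter term does not
    depend on c).  R_chis is a list of tomograms of the indicators,
    g_delta the measured (noisy) data."""
    A = np.column_stack([r.ravel() for r in R_chis])
    c, *_ = np.linalg.lstsq(A, g_delta.ravel(), rcond=None)
    return c
```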
with respect to the standard mumford - shah functional in ( [ defn : jms ] ) , the new fidelity term in the functional in ( [ def : msfunctional ] ) controls the distance between the radon transform of and the tomographic data .the relevant difference with respect to the original functional is that the function and its radon transform are defined in different spaces .let us try to interpret the fidelity term from a physical point of view .a key ingredient for this goal is the electrostatics formulation of the radon transform .this formulation can be summarized as follows : if we consider , in dimension , a function , we can think at it as a charge distribution density ; if we apply to first the radon operator and then its adjoint we obtain , up to a constant , the electrostatic potential generated by the charge distribution .this formulation can be stated in any dimension : the difference with general potential theory in dimension is that , in tomography , the potential produced by a point charge always scales like , which is the case of electrostatic potential only in dimension . from the electrostatic formulation of the radontransform we can prove that the fidelity term in the functional actually imposes that the electric field produced by the charge distribution must be close to the `` measured electric field '' . therefore we conclude that the term is a fidelity term in this weaker sense . using this property based on the electrostatic interpretation of the tomographic reconstruction, we can try to minimize some appropriate functionals in the new variables ( electric field ) or ( electric potential ) and then compute the corresponding ( charge density ) .we manipulate the functional as follows : where is a new functional depending on a vector function and on a set , and we used the fact that , since is conservative .we observe that the functional is a second order functional for a vector field in which appears the measure of the set that is the set of discontinuities of and thus is the set of discontinuities of . in the functional recognize some similarities with a famous second - order free - discontinuity problem : the blake - zisserman model .this model is based on the minimization of the blake - zisserman functional among admissible triplets , where * is an open set ; * are closed sets ; * is the set of discontinuities of ( jump set ) , and the the set of discontinuities of ( crease set ) ; * , is a scalar function ; * denotes the distributional laplacian of ; * is the datum ( grey intensity levels of the given image ) ; * are parameters ; * denotes the -dimensional hausdorff measure .the blake - zisserman functional allows a more precise segmentation than the mumford - shah functional in the sense that also the curvature of the edges of the original picture is approximated . on the other hand ,minimizers may not always exist , depending on the values of the parameters and on the summability assumption on .we refer to for motivation and analysis of variational approach to image segmentation and digital image processing .in particular see for existence of minimizer results and for a counterexample to existence and for results concerning the regularity of minimizers .equation ( [ eqnaeeay_manipulation ] ) implies that the functional can be rewritten in terms of the vector field and of the discontinuities set of , i.e. the set of creases of , using the terminology of the blake - zisserman model . 
the fact that in the functional the discontinuities set of is not present depends on the fact that we are assuming that the charge density in the functional do not concentrate on surfaces or on lines .if we admit concentrated charge layers we can consider the blake - zisserman model for the vector function as a relaxed version of the mumford - shah model for the charge .in other words we propose to investigate the connections between minimizers of and minimizers of the higher order functional : with the additional constraint .the main advantage of this approach is that the functional is a purely differential functional , while the functional is an integro - differential one .we expect that some results about the blake - zisserman model that could be rephrased into tomographic terms would provide immediately new results in tomography .conversely all the peculiar tomographic features as the intrinsic vector nature of the variable , the fact that its support can not be bounded and the extra - constraint , motivate new research directions in the study of free - discontinuities problems .for example , an interesting result in this context would be the determination of a good hypothesis on the datum that ensure that the charge density do not concentrate .we conclude this section with some comments : * we proved that the measured data are actually the measured electric field produced by the unknown charge density , so the term in the functional is a fidelity term in a weak sense .* the problem of the reconstruction of the charge can be rephrased into a reconstruction problem for the electric field .the electric field is an irrotational vector field , so the new minimization problem is actually a constrained minimization . in order to avoid this constraintone could reformulate the reconstruction problem in terms of the electric potential ( ) obtaining a third - order functional in which the fidelity term is where the potentials are given by ( [ eq : potentials ] ) . * all thisconsiderations hold true in dimension . in a generic dimension the situation is quite different because the inversion formula for the radon transform involves a ( possibly fractional ) power of the laplacian . in this casethe electrostatic description of tomography given in this section fails . in order to restore it, it is necessary to consider another radon - type transform which involves integrals of over linear manifolds with codimension such that , i.e. , see e.g. .we thank g. devillanova , g. florio and f. maddalena for for helpful discussions . this work was supported by `` fondazione cassa di risparmio di puglia '' and by the italian national group of mathematical physics ( gnfm - indam ) .
in this article we present a review of the radon transform and the instability of the tomographic reconstruction process . we show some new mathematical results in tomography obtained by a variational formulation of the reconstruction problem based on the minimization of a mumford - shah type functional . finally , we exhibit a physical interpretation of this new technique and discuss some possible generalizations . _ keywords _ : radon transform ; integral geometry ; image segmentation ; calculus of variations .
reciprocity , which was first found by lorentz at the end of 19th century , has a long history and has been derived in several formalisms .there are two typical reciprocal configurations in optical responses as shown in fig .the configurations in figs .[ fig1](a ) and [ fig1](b ) are transmission reciprocal and those in figs .[ fig1](a ) and [ fig1](c ) are reflection reciprocal . as shown in fig .[ fig1 ] , we denote transmittance by and reflectance by ; the suffice k and stand for incident wavenumber vector and angle , respectively .the reciprocal configurations are obtained by symmetry operations on the incident light of the wavenumber vector : ( ) or ( ) .reciprocity on transmission means that , and that on reflection is expressed as , which is not intuitively obvious and is frequently surprising to students .the most general proof was published by petit in 1980, where reciprocal reflection as shown in fig .[ fig1 ] is derived for asymmetric gratings such as an echelette grating . on the basis of the reciprocal relation for the solutions of the helmholtz equation, the proof showed that reciprocal reflection holds for periodic objects irrespective of absorption .it seems difficult to apply the proof to transmission because it would be necessary to construct solutions of maxwell equations that satisfy the boundary conditions at the interfaces of the incident , grating , and transmitted layers .the history of the literature on reciprocal optical responses has been reviewed in ref . since the 1950s , scattering problems regarding light , elementary particles , and so onhave been addressed by using scattering matrix ( s - matrix ) . in the studies employing the s - matrix ,it is assumed that there is no absorption by the object .the assumption leads to the unitarity of the s - matrix and makes it possible to prove reciprocity .the reciprocal reflection of lossless objects was verified in this formalism. in this paper we present a simple , direct , and general derivation of the reciprocal optical responses for transmission and reflection relying only on classical electrodynamics .we start from the reciprocal theorem described in sec .[ thm ] and derive the equation for zeroth order transmission and reflection coefficients in sec . [ proof ] .the equation is essential to the reciprocity .a numerical and experimental example of reciprocity is presented in sec .[ example ] . the limitation and break down of reciprocal optical responses are also discussed .the reciprocal theorem has been proved in various fields , such as statistical mechanics , quantum mechanics , and electromagnetism. here we introduce the theorem for electromagnetism . when two currents exist as in fig .[ fig2 ] and the induced electromagnetic ( em ) waves travel in linear and locally responding media in which and , then equation is the reciprocal theorem in electromagnetism .the proof shown in ref .exploits plane waves and is straightforward . equation ( [ reci ] ) is valid even for media with losses . the integrands take non - zero values at the position where currents exist , that is , . 
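for completeness, the reciprocal theorem referred to above can be stated in its standard form (assuming linear, locally responding media with symmetric permittivity and permeability tensors, which covers the media considered here):

$$
\int_{V} \mathbf{j}_1(\mathbf{r})\cdot\mathbf{E}_2(\mathbf{r})\, d^3 r \;=\; \int_{V} \mathbf{j}_2(\mathbf{r})\cdot\mathbf{E}_1(\mathbf{r})\, d^3 r ,
$$

where $\mathbf{E}_i$ is the field radiated by the current $\mathbf{j}_i$ at the common frequency; the integrands are non-zero only where the currents sit, which is the property exploited in the derivation that follows.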
the theorem indicates the reciprocity between the two current sources ( ) and the induced em waves which are observed at the position of the other source ( ) .in this section , we apply the reciprocal theorem to optical responses in both transmission and reflection configurations .first , we define the notation used in the calculations of the integrals in eq .( [ reci ] ) .an electric dipole oscillating at the frequency emits dipole radiation , which is detected in the far field .when a small dipole along the axis is located at the origin , it is written as and , where denotes the unit vector along the axis and the magnitude of the dipole .the dipole in vacuum emits radiation , which in the far field is where polar coordinates ( , , ) are used , a unit vector is given by , and . because the dipole is defined by and conservation of charge density is given by , we obtain the current associated with the dipole : consider two arrays of dipoles ( long but finite ) in the plane as shown in fig .the two arrays have the same length , and the directions are specified by normalized vectors ( ) and . in this case , the current is . if the dipoles coherently oscillate with the same phase , then the emitted electric fields are superimposed and form a wave front at a position far from the array in the plane as drawn in fig . [ fig3 ] .the electric field vector of the wave front , , satisfies and travels with wavenumber vector .thus , if we place the dipole arrays far enough from the object , the induced em waves become slowly decaying incident plane waves in the plane to a good approximation .the arrays of dipoles have to be long enough to form the plane wave . for the transmission configuration, we calculate ( and ) . figure [ fig3 ] shows a typical transmission configuration , which includes an arbitrary periodic object asymmetric along the axis .the relation between the current , the direction of the dipole , and the wavenumber vector of the wave front is summarized as and .it is convenient to expand the electric field into a fourier series for the calculation of periodic sources : where is the fourier coefficient of , ( ) , and is the periodicity of the object along the axis ( see fig .[ fig3 ] ) .the component is expressed in homogeneous media in vacuum as , where the signs correspond to the directions along the axis .when the dipole array is composed of sufficiently small and numerous dipoles , the integration can be calculated to good accuracy as where . to ensure that the integration is proportional to , the array of dipoles has to be longer than : where is the least common multiple of the diffraction channels which are open at the frequency .this condition would usually be satisfied when forms a plane wave . by permutating 1 and 2 in eq .( [ j1e2 ] ) , we obtain . equation ( [ j1e2 ] ) and the reciprocal theorem in eq .( [ reci ] ) lead to the equation each electric vector ( ) is observed at the position where there is another current ( ) .the integral in eq .( [ reci ] ) is reduced to eq .( [ j1e2 ] ) which is expressed only by the zeroth components of the transmitted electric field .the reciprocity is thus independent of higher order harmonics , which are responsible for the modulated em fields in structured objects . when there is no periodic object in fig .[ fig3 ] , a similar relation holds : the transmittance is given by from eqs .( [ e_reci])([t_reci ] ) , we finally reach the reciprocal relation . 
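a hedged numerical sketch (not the s-matrix code used later in the paper, and all layer parameters below are illustrative) can make the specular analogue of this result tangible: a transfer-matrix calculation at normal incidence for an asymmetric, absorbing planar stack in vacuum shows that the transmittance is the same for light incident from either side, while the reflectance generally is not.

```python
# transfer-matrix demo of transmission reciprocity for a lossy asymmetric stack.
import numpy as np

def layer_matrix(n, d, lam):
    """characteristic matrix of a homogeneous layer at normal incidence."""
    delta = 2 * np.pi * n * d / lam
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmittance_reflectance(layers, lam, n_in=1.0, n_out=1.0):
    """T and R of a stack [(n, d), ...] between real media n_in and n_out."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    denom = n_in * M[0, 0] + n_in * n_out * M[0, 1] + M[1, 0] + n_out * M[1, 1]
    t = 2.0 * n_in / denom
    r = (n_in * M[0, 0] + n_in * n_out * M[0, 1] - M[1, 0] - n_out * M[1, 1]) / denom
    return (n_out / n_in) * abs(t) ** 2, abs(r) ** 2

lam = 600e-9
stack = [(1.5 + 0.0j, 120e-9),     # dielectric film
         (0.25 + 3.0j, 40e-9),     # absorbing, metal-like film
         (2.1 + 0.0j, 200e-9)]     # a different dielectric -> asymmetric stack

T_fwd, R_fwd = transmittance_reflectance(stack, lam)
T_bwd, R_bwd = transmittance_reflectance(stack[::-1], lam)  # incidence from the other side
print(T_fwd, T_bwd)   # equal: reciprocity of transmission
print(R_fwd, R_bwd)   # generally different for a lossy, asymmetric stack
```

this toy case contains no diffraction; the proof in the text extends the same statement to the zeroth diffraction order of an arbitrary periodic, absorbing object.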
the feature of the proof that is independent of the detailed evaluation of and therefore makes the proof simple and general .the proof can be extended to two - dimensional periodic structure by replacing the one - dimensional periodic structure in fig .[ fig3 ] by two - dimensional one .although we have considered periodic objects , the proof can also be extended to non - periodic objects . to do this extension , eq .( [ e_expand ] ) has to be expressed in the general form , and a more detailed calculation for is required .reciprocity for transmission thus holds irrespective of absorption , diffraction , and scattering by objects . in fig .[ fig3 ] the induced electric fields are polarized in the plane . the polarization is called tm polarization in the terminology of waveguide theory and is also often called polarization . for te polarization ( which is often called polarization ) for which has a polarization parallel to the axis ,the proof is similar to what we have described except that the dipoles are aligned along the axis .reciprocal reflection is also shown in a similar way .the configuration is depicted in fig .the two sources have to be located to satisfy the mirror symmetry about the axis .the calculation of leads to the reciprocal relation for reflectance .note that in eq .( [ e0_reci ] ) has to be evaluated by replacing the periodic object by a perfect mirror .an example of reciprocal optical response is shown here . figure [ fig5](a )displays the structure of the sample and reciprocal transmission configuration .the sample consists of periodic grooves etched in metallic films of au and cr on a quartz substrate .the periodicity is 1200 nm , as indicated by the dotted lines in fig .[ fig5](a ) .the unit cell has the structure of au : air : au : air = 3:1:4:5 .the thickness of au , cr , and quartz is 40 nm , 5 nm , and 1 mm , respectively .the structure is obviously asymmetric about the axis .the profile was modeled from an afm image of the fabricated sample .figure [ fig5](b ) shows our numerical results .the incident light has and tm polarization ( the electric vector is in the plane ) .the numerical calculation was done with an improved s - matrix method the permittivities of gold and chromium were taken from refs . and ; the permittivity of quartz is well known to be 2.13 . in the numerical calculation, the incident light is taken to be a plane wave , and harmonics up to in eq .( [ e_expand ] ) were used , which is enough to obtain accurate optical responses .the result indicates that transmission spectra ( lower solid line ) are numerically the same in the reciprocal configurations , while reflection ( upper solid line ) and absorption ( dotted line ) spectra show a definite difference .the absorption is plotted along the left axis .the difference implies that surface excitations are different on each side and absorb different numbers of photons . nonetheless , the transmission spectra are the same for incident wavenumber vectors and . experimental transmission spectraare shown in fig .[ fig5](c ) and are consistent within experimental error .reciprocity is thus confirmed both numerically and experimentally .there have a few experiments on reciprocal transmission ( see references in ref . ) . 
in comparison with these results , fig .[ fig5](c ) shows the excellent agreement of reciprocal transmission and is the best available experimental evidence supporting reciprocity .we note that transmission spectra in figs .[ fig5](b ) and [ fig5](c ) agree quantitatively above 700 nm .on the other hand , they show a qualitative discrepancy below 700 nm .the result could come from the difference between the modeled profile in fig .[ fig5](a ) and the actual profile of the sample .the dip at 660 nm stems from a surface plasmon at the metal - air interface , so that the measured transmission spectra would be affected significantly by the surface roughness and the deviation from the modeled structure .as described in sec . [ thm ] , the reciprocal theorem assumes that all media are linear and show local response .logically , it can happen that the reciprocal optical responses do not hold for nonlinear or nonlocally responding media .reference discusses an explicit difference of the transmittance for a reciprocal configuration in a nonlinear optical crystal of knbo:mn .the values of the transmittance deviate by a few tens of percent in the reciprocal configuration .the crystal has a second - order response such that .the break down of reciprocity comes from the nonlinearity .does reciprocity also break down in nonlocal media ?in nonlocal media the induction * d * is given by .although a general proof for this case has not been reported to our knowledge , it has been shown that reciprocity holds in a particular stratified structure composed of nonlocal media. in summary , we have presented an elementary and heuristic proof of the reciprocal optical responses for transmittance and reflectance . when the reciprocal theorem in eq .( [ reci ] ) holds , the reciprocal relations come from geometrical configurations of light sources and observation points , and are independent of the details of the objects .transmission reciprocity has been confirmed both numerically and experimentally .we thank s. g. tikhodeev for discussions .one of us ( m. i. ) acknowledges the research foundation for opto - science and technology for financial support , and the information synergy center , tohoku university for their support of the numerical calculations . 20 r. j. potton,``reciprocity in optics , '' rep .. phys . * 67 * , 717754 ( 2004 ) .r. petit , `` a tutorial introduction , '' in _ electromagnetic theory of gratings _ , edited by r. petit ( springer , berlin , 1980 ) , p. 1 .n. a. gippius , s. g. tikhodeev , and t. ishihara , `` optical properties of photonic crystal slabs with an asymmetric unit cell , '' physb * 72 * , 045138 - 17 ( 2005 ). l. d. landau , e. m. lifshitz , and l. p. pitaevskii , _ electrodynamics of continuous media _ ( pergamon press , ny , 1984 ) , 2nd ed .j. d. jackson , _ classical electrodynamics _( john wiley & sons , nj , 1999 ) , 3rd ed . s. g. tikhodeev , a. l. yablinskii , e. a. muljarov , n. a. gippius , and t. ishihara , `` quasiguided modes and optical properties of photonic crystal slabs , '' phys . rev .b * 66 * , 045102 - 117 ( 2002 ) .l. li , `` use of fourier series in the analysis of discontinuous periodic structures , '' j. opt .a , * 13 * , 18701876 ( 1996 ) .p. b. johnson and r. w. christy , `` optical constants of the noble metals , '' phys .b * 6 * , 43704379 ( 1972 ) . p. b. johnson and r. w. christy , `` optical constants of transition metals : ti , v , cr , mn , fe , co , ni , and pd , '' phys .b * 9 * , 50565070 ( 1974 ) . m. z. zha and p. 
gnter , `` nonreciprocal optical transmission through photorefractive knbo:mn , '' opt* 10 * , 184186 ( 1985 ). h. ishihara , `` appearance of novel nonlinear optical response by control of excitonically resonant internal field , '' in _ proceedings of 5th symposium of japanese association for condensed matter photophysics _( 1994 ) , pp .287281 ( in japanese ) .reciprocal configurations . ( a ) and ( b ) show reciprocal configurations for transmission . in ( a ) denotes transmittance for incident wavenumber vector . in ( b ) is defined similarly .the reciprocal relation is .( a ) and ( c ) are reciprocal for reflection . in ( a ) is reflectance for incident wavenumber vector and in ( c ) for .the reciprocal relation is .,width=283 ] schematic drawing of reciprocal configuration for transmission .the object has an arbitrary periodic structure , which is asymmetric along the axis .currents induce electric fields ( ).,width=245 ] schematic configuration for reciprocal reflection .the object has an arbitrary periodic structure , which consists of asymmetric unit cells .the currents yield electric fields ( ).,width=264 ] ( a ) schematic drawing of metallic grating profile modeled from afm images .the periodicity is 1200 nm .the dotted lines show the unit cells in which the ratio is au : air : au : air = 3:1:4:5 .the thickness of au , cr , and the quartz substrate is 40 nm , 5 nm , and 1 mm , respectively .( b ) numerically calculated spectra for 10 incidence of ( upper panel ) and ( lower panel ) of tm polarization . in both panels the reflectance ( upper solid line ) and absorption ( dotted line ) are plotted using the left axis , while the transmittance ( lower solid line ) uses the right axis .( c ) measured transmittance spectra , corresponding to the transmittance spectra in ( b).,width=264 ]
we present an elementary proof concerning reciprocal transmittances and reflectances . the proof is direct , simple , and valid for diverse objects that may be absorptive and induce diffraction and scattering , as long as the objects respond linearly and locally to electromagnetic waves . the proof enables students who understand the basics of classical electrodynamics to grasp the physical basis of reciprocal optical responses . in addition , we show an example that demonstrates the reciprocal response both numerically and experimentally .
turbulence is a key element of the dynamics of astrophysical fluids , including those of interstellar medium ( ism ) , clusters of galaxies and circumstellar regions .the realization of the importance of turbulence induces sweeping changes , for instance , in the paradigm of ism .it became clear , for instance , that turbulence affects substantially star formation , mixing of gas , transfer of heat .observationally it is known that the ism is turbulent on scales ranging from aus to kpc ( see armstrong et al 1995 , elmegreen & scalo 2004 ) , with an embedded magnetic field that influences almost all of its properties .the issue of quantitative descriptors that can characterize turbulence is not a trivial one ( see discussion in lazarian 1999 and ref . therein ) .one of the most widely used measures is the turbulence spectrum , which describes the distribution of turbulent fluctuations over scales .for instance , the famous kolmogorov model of incompressible turbulence predicts that the difference in velocities at different points in turbulent fluid increases on average with the separation between points as a cubic root of the separation , i.e. . in terms of direction - averaged energy spectrumthis gives the famous kolmogorov scaling , where is a _3d _ energy spectrum defined as the fourier transform of the correlation function of velocity fluctuations .note that in this paper we use to denote averaging procedure .quantitative measures of turbulence , in particular , turbulence spectrum , became important recently also due to advances in the theory of mhd turbulence .as we know , astrophysical fluids are magnetized , which makes one believe that the correspondence should exist between astrophysical turbulence and mhd models of the phenomenon ( see vazquez - semadeni et al . 2000 , mac low & klessen 2004 , bellesteros - paredes et al .2007 , mckee & ostriker 2007 and ref . therein ) . in fact , without observational testing, the application of theory of mhd turbulence to astrophysics could always be suspect .indeed , from the point of view of fluid mechanics astrophysical turbulence is characterized by huge reynolds numbers , , which is the inverse ratio of the eddy turnover time of a parcel of gas to the time required for viscous forces to slow it appreciably .for we expect gas to be turbulent and this is exactly what we observe in hi ( for hi ) .in fact , very high astrophysical and its magnetic counterpart magnetic reynolds number ( that can be as high as ) present a big problem for numerical simulations that can not possibly get even close to the astrophysically - motivated numbers .the currently available 3d simulations can have and up to . both scale as the size of the box to the first power , while the computational effort increases as the fourth power ( 3 coordinates + time ) , so the brute force approach can not begin to resolve the controversies related , for example , to ism turbulence .we expect that observational studies of turbulence velocity spectra will provide important insights into ism physics . even in the case ofmuch more simple oceanic ( essentially incompressible ) turbulence , studies of spectra allowed to identify meaningful energy injection scales . 
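for reference, the kolmogorov scaling quoted earlier in this section can be written compactly (a standard relation, restated here for convenience):

$$
\langle |\delta v(\ell)|^{2}\rangle \;\propto\; \ell^{2/3}
\quad\Longleftrightarrow\quad
E(k)\;\propto\;k^{-5/3},
\qquad \text{since}\quad
\langle |\delta v(\ell)|^{2}\rangle \;\sim\; \int_{1/\ell}^{\infty} E(k)\, dk \;\propto\; \ell^{2/3}.
$$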
in interstellar , intra - cluster medium , in addition to that , we expect to see variations of the spectral index arising from the variations of the degree of compressibility , magnetization , interaction of different interstellar phases etc .how to get the turbulence spectra from observations is a problem of a long standing . while density fluctuations are readily available through both interstellar scincillations and studies of column density maps , the more coveted velocity spectra have been difficult to obtain reliably until very recently .turbulence is associated with fluctuating velocities that cause fluctuations in the doppler shifts of emission and absorption lines .observations provide integrals of either emissivities or opacities , both proportional to the local densities , at each velocity along the line of sight .it is far from trivial to determine the properties of the underlying turbulence from the observed spectral line .centroids of velocity ( munch 1958 ) have been an accepted way of studying turbulence , although it was not clear to when and to what extend the measure really represents the velocity .recent studies ( lazarian & esquivel 2003 , henceforth le03 , esquivel & lazarian 2005 , ossenkopf et al 2006 , esquivel et al .2007 ) have showed that the centroids are not a good measure for supersonic turbulence , which means that while the results obtained for hii regions ( odell & castaneda 1987 ) are probably ok , those for molecular clouds are unreliable .an important progress in analytical description of the relation between the spectra of turbulent velocities and the observable spectra of fluctuations of spectral intensity was obtained in lazarian & pogosyan ( 2000 , henceforth lp00 ) .this description paved way to two new techniques , which were later termed velocity channel analysis ( vca ) and velocity coordinate spectrum ( vcs ) .the techniques provide different ways of treating observational data in position - position - velocity ( ppv ) data cubes . while vca is based on the analysis of channel maps , which are the velocity slices of ppv cubes , the vcs analyses fluctuations along the velocity direction . if the slices have been used earlier for turbulence studies , although the relation between the spectrum of intensity fluctuations in the channel maps and the underlying turbulence spectrum was unknown , the analysis of the fluctuations along the velocity coordinate was initiated by the advent of the vcs theory . with the vca and the vcs one can relate both observations and simulations to _ turbulence theory_. for instance , the aforementioned turbulence indexes are very informative, e.g. velocity indexes steeper than the kolmogorov value of are likely to reflect formation of shocks , while shallower indexes may reflect scale - dependent suppression of cascading ( see beresnyak & lazarian 2006 and ref . therein ) . by associating the variations of the index with different regions of ism , e.g. with high or low star formation, one can get an important insight in the fundamental properties of ism turbulence , its origin , evolution and dissipation .the absorption of the emitted radiation was a concern of the observational studies of turbulence from the very start of the work in the field ( see discussion in munch 1999 ) .a quantitative study of the effects of the absorption was performed for the vca in lazarian & pogosyan ( 2004 , henceforth lp04 ) and for the vcs in lazarian & pogosyan ( 2006 , henceforth lp06 ) . 
in lp06it was stressed that absorption lines themselves can be used to study turbulence .indeed , the vcs is a unique technique that does not require a spatial coverage to study fluctuations .therefore individual point sources sampling turbulent absorbing medium can be used to get the underlying turbulent spectra .however , lp06 discusses only the linear regime of absorption , i.e. when the absorption lines are not saturated .this substantially limits the applicability of the technique .for instance , for many optical and uv absorption lines , e.g. mg ii , sii , siii the measured spectra show saturation .this means that a part of the wealth of the unique data obtained e.g. by hst and other instruments can not be handled with the lp06 technique .the goal of this paper is to improve this situation .in particular , in what follows , we develop a theoretical description that allows to relate the fluctuations of the absorption line profiles and the underlying velocity spectra in the saturated regime .below , in 2 we describe the setting of the problem we address , while our main derivations are in 3 .the discussion of the new technique of turbulence study is provided in 4 , while the summary is in 5 .while in our earlier publications ( lp00 , lp04 , lp06 ) concentrated on emission line , in particular radio emission lines , e.g. hi and co , absorption lines present the researchers with well defined advantages .for instance , they allow to test turbulence with a pencil beam , suffer less from uncertainties in path length .in fact , studies of absorption features in the spectra of stars have proven useful in outlining the gross features of gas kinematics in milky way .recent advances in sensitivity and spectral resolution of spectrographs allow studies of turbulent motions . among the available techniques , vcs is the leading candidate to be used with absorption lines .indeed , it is only with extended sources that the either centroid or vca studies are possible . at the same time , vcs makes use not of the spatial , but frequency resolution .thus , potentially , turbulence studies are possible if absorption along a single line is available . in reality, information along a few lines of sight , as it shown in fig 1 is required to improve the statistical accuracy of the measured spectrum . using the simulated data sets chepurnov & lazarian ( 2006ab ) experimentally established that the acceptable number of lines ranges from 5 to 10 . for weak absorption, the absorption and emission lines can be analyzed in the same way , namely , the way suggested in lp06 .for this case , the statistics to analyse is the squared fourier transform of the doppler - shifted spectral line , irrespectively of the fact whether this is an emission or an absorption spectral line .such a `` spectrum of spectrum '' is not applicable for saturated spectral lines , which width is still determined by the doppler broadening .it is known ( see spitzer 1978 ) that this regime corresponds to the optical depth ranging from 10 to . the present paper will concentrate on this regime larger than the line width is determined by atomic constants and therefore it does not carry information about turbulence . 
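as a concrete illustration of how a handful of sightlines can be combined, the following is a minimal estimator sketch, under our own illustrative assumptions (array names, clipping threshold and normalization are not prescriptions from the text); it works with the optical depth, i.e. the logarithm of the absorbed intensity, as advocated later in this paper.

```python
# average "spectrum of spectrum" over the 5-10 sightlines quoted above.
import numpy as np

def vcs_spectrum(profiles, I0=1.0):
    """average squared fourier transform of tau(v) over the available sightlines."""
    spectra = []
    for I in profiles:
        tau = -np.log(np.clip(I / I0, 1e-6, None))   # optical depth from intensity
        tau = tau - tau.mean()                        # remove the mean before the fft
        spectra.append(np.abs(np.fft.rfft(tau)) ** 2)
    return np.mean(spectra, axis=0)

# e.g. P_kv = vcs_spectrum([I1, I2, I3, I4, I5]) for five absorption spectra measured
# towards different point sources behind the same turbulent volume.
```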
] .consider the problem in a more formal way .intensity of the absorption line at frequency is given as where is the optical depth .in the limit of vanishing intrinsic width of the line , the frequency spread of is determined solely by the doppler shift of the absorption frequency from moving atoms .the number density of atoms along the line of sight moving at required velocity is where is the thermal distribution centered at every point at the local mean velocity that is determined by the sum of turbulent and regular flow at that point .this is the density in ppv coordinate that we introduced in lp00 , so .the intrinsic line width is accounted for by the convolution or , in more detail , with intrinsic profile given by the lorenz form , the inner integral gives the shifted voigt profile so we have another representation we clearly see from eq .( [ eq : tau_h ] ) that the line is affected both by doppler shifts and atomic constants .the optical depth as a function of frequency contains fluctuating component arising from turbulent motions and associated density inhomogeneities of the absorbers .statistics of optical depth fluctuations along the line of sight therefore carries information about turbulence in ism .the optical depth is determined by the density of the absorbers in the ppv space , . in our previous workwe have studied statistical properties of in the context of emission lines , using both structure function and power spectrum formalisms .absorption lines demonstrate several important differences that warrant separate study .firstly , our ability to recover the optical depth from the observed intensity depends on the magnitude of the absorption as well as sensitivity of the instrument and the level of measurement noise . for lines with low optical depth we can in principle measure the optical depth throughout the whole line . at higher optical depths ,the central part of the line is saturated below the noise level and the useful information is restricted to the wings of the line .this is the new regime that is the subject of this paper. in this regime the data is available over a window of frequencies limited to velocities high enough so that but not as high as to have lorentz tail define the line .higher the overall optical depth , narrower are the wings ( following spitzer , at the wings are totally dominated by lorentz factor ) .we shall denote this window by where is the velocity that the window is centered upon ( describing frequency position of the wing ) and is the wing width .it acts as a mask on the `` underlying '' data secondly , fluctuations in the wings of a line are superimposed on the frequency dependent wing profile . in other words , the statistical properties of the optical depth are inhomogeneous in this frequency range , with frequency dependent statistical mean value . 
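the construction above can be mimicked with a hedged toy model (all numbers below are illustrative assumptions, not values from the paper): absorbers are placed along the line of sight with turbulent velocities, each contributing a shifted voigt profile to the optical depth, and the usable wing "window" is the set of channels where the line is neither saturated nor noise dominated.

```python
# toy synthesis of a saturated absorption line and its usable wings.
import numpy as np
from scipy.special import voigt_profile   # voigt = gaussian (thermal) x lorentzian (natural)

rng = np.random.default_rng(0)
v = np.linspace(-30.0, 30.0, 2048)          # velocity grid [km/s]
v_turb = rng.normal(0.0, 5.0, size=400)     # turbulent LOS velocities of absorbers [km/s]
sigma_th, gamma = 1.0, 0.05                 # thermal width and natural width [km/s]
alpha = 1.0                                 # absorption strength per absorber (sets peak tau)

tau = alpha * sum(voigt_profile(v - vi, sigma_th, gamma) for vi in v_turb)
I = np.exp(-tau)                            # transmitted intensity (source normalized to 1)

# wing window: keep only velocities where tau is measurable but not saturated.
wing = (tau > 0.1) & (tau < 3.0)
print(tau.max(), wing.sum(), "channels usable in the wings")
```

the peak optical depth here is a few tens, i.e. a line in the saturated regime discussed above, and the boolean `wing` plays the role of the mask introduced in the text.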
while _ fluctuations _ of the optical depth that have origin in the turbulence can still be assumed to be statistically homogeneous , the mean profile of a wing must be accounted for .what statistical descriptors one should chose in case of line of sight velocity data given over limited window ?primary descriptors of a random field , here , are the ensemble average product of the values of the field at separated points the two point correlation function and , reciprocally , the average square of the amplitudes of its ( fourier ) harmonics decomposition the power spectrum in practice these quantities are measurable if one can replace ensemble average by averaging over different positions which relies on some homogeneity properties of stochastic process .we assume that underlying turbulence is homogeneous and isotropic .this does not make the optical depth to be statistically homogeneous in the wings of the line , but allows to introduce the fluctuations of on the background of the mean profile , , which are ( lp04 ) homogeneous correlation function depends only on a point separation and amplitudes of distinct fourier harmonics are independent .the obvious relations are although mathematically the power spectrum is just a fourier transform of the correlation function which of them is best estimated from data depends on the properties of the signal and the data .the power spectrum carries information which is localized to a particular scale and as such is insensitive to processes that contribute outside the range of scales of interest , in particular to long - range smooth variations . on the other hand , determination of fourier harmonicsis non - local in configuration space and is sensitive to specifics of data sampling the finite window , discretization , that all lead to aliasing of power from one scales to another .the issue is severe if the aliased power is large .conversely , the correlation function is localized in configuration space and can be measured for non - uniformly sampled data . however, at each separation it contains contribution from all scales and may mix together the physical effects from different scales . in particular , is not even defined for power lawspectra with index ( for one dimensional data ) .. ] this limitation is relieved if one uses the structure function instead , which is well defined for . the structure function can be thought of as regularized version of the correlation function that is related to the power spectrum in the same way as the correlation function , if one excludes the mode .velocity coordinate spectrum studies of lp06 demonstrated that the expected one dimensional spectrum of ppv density fluctuations along velocity coordinate that arise from turbulent motions is where is the index of line - of - sight component of the velocity structure function . for kolmogorov turbulence and for turbulent motions dominated by shocks .these spectra are steep which makes the direct measurement of the structure functions impractical ( although for the structure function can be defined ) . 
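for reference, the standard relations between the descriptors used above can be restated compactly (one-dimensional conventions; nothing new is claimed here):

$$
\xi_{\tau}(v)=\langle \delta\tau(v_1)\,\delta\tau(v_1+v)\rangle ,
\qquad
P_{\tau}(k_v)=\int dv\;\xi_{\tau}(v)\,e^{-i k_v v} ,
\qquad
D_{\tau}(v)=2\left[\xi_{\tau}(0)-\xi_{\tau}(v)\right] ,
$$

so that the structure function is related to the spectrum in the same way as the correlation function with the $k_v=0$ mode removed; for a power-law spectrum $P_{\tau}(k_v)\propto k_v^{-\alpha}$ the ordinary structure function reflects the spectral slope only for $\alpha<3$, which is why higher-order structure functions are introduced below for the steep vcs spectra.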
at the same time , in our present studies we deal with a limited range of data in the wings of the absorption lines , which complicates the direct measurements of the power spectrum .below we first describe the properties of the power spectrum in this case , and next develop the formalism of higher order structure functions .let us derive the power spectrum of the optical depth fluctuations , here is wave number reciprocal to the velocity ( frequency ) separation between two points on the line - of - sight and angular brackets denote an ensemble averaging .. here we restrict ourselves to diagonal terms only . ]fourier transform of the eq .( [ eq : tau_h ] ) with respect to velocity is and the power spectrum } w(k_v - k_v^\prime ) w^*(k_v - k_v^{\prime\prime } ) \right\rangle \nonumber\end{aligned}\ ] ] which is useful to express using average velocity and velocity difference , as well as correspondent variables for the wave numbers and , as the fluctuating , random quantities , over which the averaging is performed are the density and the line - of - sight component of the velocity of the absorbers , varying along the line of sight . in our earlier papers ( see lp00 , lp04 ) we argued that in many important cases they can be considered as uncorrelated between themselves , so that } ~ ~ , \label{eq : maxprof_average}\ ] ] where is the correlation function of the density of the absorbers and is the structure function of their line - of - sight velocity due to turbulent motions . expected to saturate at the value for separations of the size of the absorbing cloud .the dependence of and only on spatial separation between a pair of absorbers reflects the assumed statistical homogeneity of the turbulence model .introducing and performing integration over one obtains where , and symmetrized window is defined in the appendix .if one has the whole line available for analysis , the masking window will be flat with -function like fourier transform .the combination of the windows in the power spectrum will translate to and masking the data has the effect of aliasing modes of the large scales that exceed the available data range , to shorter wavelength .this is represented by the convolution with fourier image of the mask .secondary effect is the contribution of the modes with different wave numbers to the diagonal part of the power spectrum .this again reflects the situation that different fourier components are correlated in the presence of the mask . 
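the window and aliasing effects described above are easy to reproduce numerically; the following is a purely illustrative sketch (synthetic signal, gaussian mask, and the chosen slope are our own assumptions) showing that the spectrum measured in a masked "wing" is flattened at wavenumbers below roughly the inverse wing width.

```python
# power aliasing from a gaussian wing mask applied to a steep-spectrum signal.
import numpy as np

rng = np.random.default_rng(1)
N = 4096
k = np.fft.rfftfreq(N, d=1.0 / N)          # wavenumbers on a unit-length velocity axis
k[0] = 1.0                                  # avoid division by zero for the k=0 mode
amp = k ** (-11.0 / 6.0)                    # amplitudes for a steep P(k) ~ k^{-11/3}
amp[0] = 0.0                                # no mean component
phases = np.exp(2j * np.pi * rng.random(k.size))
signal = np.fft.irfft(amp * phases, n=N)

v = np.linspace(-0.5, 0.5, N)
P_full = np.abs(np.fft.rfft(signal)) ** 2   # spectrum from the whole "line"
for Delta in (0.5, 0.1, 0.02):              # progressively narrower wings
    w = np.exp(-v ** 2 / (2 * Delta ** 2))  # gaussian mask centred on the wing
    P_wing = np.abs(np.fft.rfft(signal * w)) ** 2
    # the wing spectrum follows the underlying slope only for k_v well above ~1/Delta;
    # below that the power aliased by the mask dominates (cf. the figure discussed below)
    print(Delta, P_wing[1], P_wing[200])
```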
to illustrate the effects of the mask ,let us assume that we select the line wing with the help of a gaussian mask centered in the middle of the wing at that gives all integrals can then be carried out to obtain } { \sqrt{(d^+ + 2 \delta^2)(d^- + 2 \delta^2 ) } } \exp\left[-\frac{\delta^2 d^-}{d^- + 2 \delta^2 } k_v^2\right ] \\ & & \times \exp\left[\frac{2 a^2}{d^-+2 \delta^2}\right ] \left\ { \exp\left[\frac{-4 a \delta^2 k_v}{d^- + 2\delta^2}\right ] \mathrm{erfc } \left[\frac{\sqrt{2}(a-\delta^2 k_v)}{\sqrt{d^-+2\delta^2 } } \right ] + ( k_v \to -k_v ) \right\ } \nonumber\end{aligned}\ ] ] the following limits ( taking ) are notable : \nonumber \\ a \to 0 & : & \\ p(k_v ) & \propto & \alpha(\nu_0)^2 s \int_0^s dz ( s - z ) \xi(z ) \frac{\delta^2 \exp\left[-\frac{1}{4}\frac{v_1 ^ 2}{d^+ + \delta^2}\right ] } { \sqrt{(d^+ + 2 \delta^2)(d^- + 2 \delta^2 ) } } \exp\left[-\frac{\delta^2 d^-}{d^- + 2 \delta^2 } k_v^2\right ] \nonumber\end{aligned}\ ] ] the last expression particularly clearly demonstrates the effect of the window , which width in case of the line wing is necessarily necessarily limited by .the power spectrum is corrupted at scales , but still maintains information about turbulence statistics for .indeed , in our integral representation the power spectrum at is determined by the linear scales such that which translates into .thus , if over all scales defining power at one has and there is no significant power aliasing . varies little , in the interval . ] for intermediate scales there is a power aliasing as numerical results demonstrate in figure [ fig : spectrum ] ., , , are dimensionless , in the units of , the variance of the turbulent velocity at the scale of the cloud .intrinsic line broadening is neglected .only the effect of the turbulent motions and not spatial inhomogeneity of the absorbers is taken into account .the underlying scaling of the turbulent velocities is kolmogorov , .the left panel illustrates the power aliasing due to finite width of the window .the power spectrum is plotted , from top to bottom , for , i.e the widths of the wing ranges from the complete line to one - tenth of the line width .the straight line shows the power law expected under ideal observational circumstances .one finds that for the ideal gaussian mask the underlying spectrum is recovered for .the right panel shows the modification of the spectrum due to thermal broadening , which is taken at the level .thermal effects must be accounted for for .,title="fig : " ] , , , are dimensionless , in the units of , the variance of the turbulent velocity at the scale of the cloud .intrinsic line broadening is neglected .only the effect of the turbulent motions and not spatial inhomogeneity of the absorbers is taken into account .the underlying scaling of the turbulent velocities is kolmogorov , .the left panel illustrates the power aliasing due to finite width of the window .the power spectrum is plotted , from top to bottom , for , i.e the widths of the wing ranges from the complete line to one - tenth of the line width .the straight line shows the power law expected under ideal observational circumstances .one finds that for the ideal gaussian mask the underlying spectrum is recovered for .the right panel shows the modification of the spectrum due to thermal broadening , which is taken at the level .thermal effects must be accounted for for .,title="fig : " ] doppler broadening described by incorporates both turbulent and thermal effects .thermal effects are especially important in case of 
narrow line wings , since the range of the wavenumbers relatively unaffected by both thermal motions and the mask is limited and exists only for relatively wide wings . for narrower wingsthe combined turbulent and thermal profile must be fitted to the data , possibly determining the temperature of the absorbers at the same time .this recipe is limited by the assumption that the temperature of the gas is relatively constant for the absorbers of a given type .we should note that the gaussian window provides one of the ideal cases , limiting the extend of power aliasing since the window fourier image falls off quickly .one of the worst scenarios is represented by sharp top hat mask , which fourier image falls off only as spreading the power from large scales further into short scales . for steep spectra that we have in vcs studies all scales may experience some aliasing .this argues for extra care while treating the line wings through power spectrum or for use of alternative approaches .second order structure function provides an alternative to power spectrum measurement in case of steep spectra with the data limited to the section of the lines .the second order structure function of the fluctuations of the optical depth can be defined as it represents additional regularization of the correlation function beyond the ordinary structure function is proportional to three dimensional velocity space density structure function at zero angular separations , discussed in lp06 . using the results of lp06 for we obtain \nonumber \\ & \propto & \frac{{\bar \rho}^2 s^2 } { d_z(s ) } \frac{1}{m } \left(\frac{r_0}{s}\right)^\gamma \left [ \hat v^{2p } \gamma(-p ) \left(2^{p-1}-2^{1-p}\right ) + \frac{2^{4 p-6}}{p-2 } \hat v^4 + o ( \hat v^6 ) \right ] \label{eq : d_vv}\end{aligned}\ ] ] where and is the correlation index that describes spatial inhomogeneities of the absorbers .to shorten intermediate formulas , the dimensionless quantities , , are introduced .the first term in the expansion contains information about the underlying field , while the power law series represent the effect of boundary conditions at the cloud scale .in contrast to ordinary correlation function they are not dominant until , i.e for the second order structure function is well defined .when turbulent motions provide the dominant contribution to optical depth fluctuations , , we see that measuring the one can recover the turbulence scaling index if , which includes both interesting cases of kolmogorov turbulence and shock - dominated motions .this condition is replaced by if the density fluctuations , described by correlation index , are dominant . at sufficiently small scalesthe second order structure function has the same scaling as the first order one a practical issue of measuring the structure functions directly in the wing of the line is to take into account the line profile .the directly accessible is related to the structure function of the fluctuations as where the mean profile of the optical depth is related to the mean profile of ppv density given in the appendix b of lp06 . at small separations , the correction to the structure function due to mean profile behaves as and is subdominant .the price one pays when utilizing higher - level structure function is their higher sensitivity to the noise in the data . 
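a hedged estimator sketch for the second-order structure function discussed above can be built from second differences of the optical depth along the velocity coordinate (the normalization convention of the paper may differ from this plain second difference by a constant factor):

```python
# second-order structure function estimator from second differences of tau(v).
import numpy as np

def second_order_structure_function(tau, lags):
    """mean squared second difference of tau for each lag (in channels)."""
    dd = []
    for m in lags:
        d2 = tau[2 * m:] - 2.0 * tau[m:-m] + tau[:-2 * m]
        dd.append(np.mean(d2 ** 2))
    return np.array(dd)

# usage on an optical-depth profile sampled on a uniform velocity grid:
# lags = np.arange(1, 64); DD = second_order_structure_function(tau, lags)
# uncorrelated channel noise adds a lag-independent offset proportional to <n^2>
# (see the noise discussion that follows), which should be estimated and subtracted
# before fitting a power law to DD.
```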
while correlation function itself is not biased by the noise except at at zero separations ( assuming noise is uncorrelated [\tau(v_1+v)+n(v_1+v)]\right\rangle = \xi_\tau(v ) + \langle n^2 \rangle \delta(v)\ ] ] already the structure function is biased by the noise which contributes to all separations ^ 2\right\rangle = d_\tau(v ) + 2 \langle n^2 \rangle\ ] ] this effect is further amplified for the second order structure function ^ 2\right\rangle = dd_\tau(v ) + 3 \langle n^2 \rangle\ ] ] the error in the determination of the structures function of higher order due to noise also increases .structure functions and power spectra are used interchangeably in the theory of turbulence ( see monin & yaglom 1975 ) .however , complications arise when spectra are `` extremely steep '' , i.e. the corresponding structure function of fluctuation grows as , . for such random fields , one can not use ordinary structure functions , while the one dimensional fourier transforms that is employed in vcs corresponds to the power spectrum of is well defined . as a rule , one does not have to deal with so steep spectra in theory of turbulence ( see , however , cho et al . 2002 and cho & lazarian 2004 ) . within the vcs ,such `` extremely steep '' spectra emerge naturally , even when the turbulence is close to being kolmogorov .this was noted in lp06 , where the spectral approach was presented as the correct one to studying turbulence using fluctuations of intensity along v - coordinate .the disadvantage of the spectral approach is when the data is being limited by a non - gaussian window function .then the contributions from the scales determined by the window function may interfere in the obtained spectrum at large .an introduction of an additional more narrow gaussian window function may mitigate the effect , but limits the range of for which turbulence can be studied .thus , higher order structure functions ( see the subsection above ) , is advantageous for the practical data handling . in terms of the vca theory , we used mostly spectral description in lp00 , while in lp04 , dealing with absorption , we found advantageous to deal with real rather than fourier space . in doing so, however , we faced the steepness of the spectrum along the v - coordinate and provided a transition to the fourier description to avoid the problems with the `` extremely steep '' spectrum . naturally , our approach of higher order structure functions is applicable to dealing with the absorption within the vca technique .in the paper above we have discussed the application of vcs to strong absorption lines .the following assumption were used .first of all , considering the radiative transfer we neglected the effects of stimulated emission .this assumption is well satisfied for optical or uv absorption lines ( see spitzer 1979 ) .then , we assumed that the radiation is coming from a point source , which is an excellent approximation for the absorption of light of a star or a quasar .moreover , we disregarded the variations of temperature in the medium. 
within our approach the last assumption may be most questionable .indeed , it is known that the variations of temperature do affect absorption lines .nevertheless , our present study , as well as our earlier studies , prove that the effects of the variations of density are limited .it is easy to see that the temperature variations can be combined together with the density ones to get effective renormalized `` density '' which effects we have already quantified .our formalism can also be generalized to include a more sophisticated radiative transfer and the spatial extend of the radiation source . in the latter casewe shall have to consider both the case of a narrow and a broad telescope beam , the way it has been done in lp06 .naturally , the expressions in lp06 for a broad beam observations can be straightforwardly applied to the absorption lines , substituting the optical depth variations instead of intensities .the advantage of the extended source is that not only vcs , but also vca can be used ( see deshpande et al .2000 ) . as a disadvantage of an extended source is the steepening of the observed v - coordinate spectrum for studies of unresolved turbulence .this , for instance , may require employing even higher order structure functions , if one has to deal with windows arising from saturation of the absorption line . in lp06we have studied the vcs technique in the presence of absorption and formulated the criterion for the fluctuations of intensity to reliably reflect the fluctuations in turbulent velocities . in this paper , however , we used the logarithms of intensities and showed that this allows turbulence studies beyond the regime , at which fluctuations of intensity would be useful .the difficulty of such an approach is the uncertainty of the base level of the signal .taking logarithm is a non - linear operation that may distort the result , if the base level of signal is not accounted for properly .however , the advantage of the approach that potentially it allows studies of velocity turbulence , when the traditional vca and vcs fail .further research should clarify the utility of this approach .the study of turbulence using the modified vcs technique above should be reliable for optical depth up to .for this range of optical depth , the line width is determined by doppler shifts rather than the atomic constants .while formally the entire line profile provides information about the turbulence , in reality , the flat saturated part of the profile will contain only noise and will not be useful for any statistical study .thus , the wings of the lines will contain signal .as several absorption lines can be available along the same line of sight , this allows to extend the reliability of measurements combining them together .we believe that piecewise analyses of the wings belonging to different absorption lines is advantageous .the actual data analysis may employ fitting the data with models , that , apart from the spectral index , specify the turbulence injection scale and velocity dispersion , as this is done in chepurnov et al .( 2006 ) .note , that measurements of turbulence in the same volume using different absorption lines can provide complementary information .formally , if lines with weak absorption , i.e. are available , there is no need for other measurements .however , in the presence of inevitable noise , the situation may be far from trivial .naturally , noise of a constant level , e.g. 
instrumental noise , will affect more weak absorption lines .the strong absorption lines , in terms of vcs sample turbulence only for sufficiently large .this limits the range of turbulent scales that can be sampled with the technique .however , the contrast that is obtained with the strong absorption lines is higher , which provides an opportunity of increasing signal to noise ratio for the range of that is sampled by the absorption lines .if , however , a single strong absorption line is used , an analogy with a two dish radio interferometer is appropriate .every dish of the radio interferometer samples spatial frequencies in the range approximately $ ] , where is the operational wavelength , is the diameter of the dish .in addition , the radio interferometer samples the spatial frequency , where is the distance between the dishes . similarly , a strong absorption line provides with the information on turbulent velocity at the largest spatial scale of the emitting objects , as well as the fluctuation corresponding to the scales . in lp06 we concentrated on obtaining asymptotic regimes for studying turbulence .at the same time in chepurnov et al .( 2006 ) fitting models of turbulence to the data was attempted . in the latter approach non - power law observed spectracan be used , which is advantageous for actual data , for which the range of scales in is rather limited .indeed , for hi with the injection velocities of 10 km / s and the thermal velocities of 1 km / s provides an order of magnitude of effective `` inertial range '' . correcting for thermal velocities one can increase this range by a factor , which depends on the signal to noise ratio of the data .using heavier species rather than hydrogen one can increase the range by a factor .this may or may not be enough for observing good asymptotics .we have seen in [ ] that for absorption lines the introduction of windows determined by the width of the line wings introduces additional distortions of the power spectrum .however , this is not a problem if , instead of asymptotics , fitting of the model is used .compared to the models used in chepurnov et al .( 2006 ) the models for absorption lines should also have to model the window induced by the absorption .the advantage is , however , that absorption lines provide a pure pencil beam observations .formally , there exists an extensive list of different tools to study turbulence that predated our studies ( see lazarian 1999 and ref . therein ) .however , a closer examination shows that this list is not as impressive as it looks .moreover , our research showed that some techniques may provide confusing , if not erroneous , output , unless theoretical understanding of what they measure is achieved .for instance , we mentioned in the introduction an example of the erroneous application of velocity centroids to supersonic molecular cloud data .note , that clumps and shell finding algorithms would find a hierarchy of clumps / shells for synthetic observations obtained with _incompressible _ simulations .this calls for a more cautious approach to the interpretation of the results of some of the accepted techniques . for instance , the use of different wavelets for the analysis of data is frequently treated in the literature as different statistical techniques of turbulence studies ( gill & henriksen 1990 , stutzki et al .1998 , cambresy 1999 , khalil et al . 2006 ) , which creates an illusion of an excessive wealth of tools and approaches . 
in reality, while fourier transforms use harmonics of , wavelets use more sophisticated basis functions , which may be more appropriate for problems at hand . in our studieswe also use wavelets both to analyze the results of computations ( see kowal & lazarian 2006a ) and synthetic maps ( ossenkopf et al . 2006 , esquivel et al .2007 ) , along with or instead of fourier transforms or correlation functions .wavelets may reduce the noise arising from inhomogeneity of data , but we found in the situations when correlation functions of centroids that we studied were failing as the mach number was increasing , a popular wavelet ( -variance ) was also failing ( cp .esquivel & lazarian 2005 , ossenkopf et al .2006 , esquivel et al .2007 ) . while in waveletsthe basis functions are fixed , a more sophisticated technique , principal component analysis ( pca ) , chooses basis functions that are , in some sense , the most descriptive . nevertheless , the empirical relations obtained with pca for extracting velocity statistics provide , according to padoan et al .( 2006 ) , an uncertainty of the velocity spectral index of the order ( see also brunt et al .2003 ) , which is too large for testing most of the turbulence theories .in addition , while our research in lp00 shows that for density spectra , for both velocity and density fluctuations influence the statistics of ppv cubes , no dependencies of ppv statistics on density have been reported so far in pca studies .this also may reflect the problem of finding the underlying relations empirically with data cubes of limited resolution .the latter provides a special kind of shot noise , which is discussed in a number of papers ( lazarian et al .2001 , esquivel et al .2003 , chepurnov & lazarian 2006a ) ._ spectral correlation function ( scf ) _( see rosolowsky et al .1999 for its original form ) is another way to study turbulence .further development of the scf technique in padoan et al .( 2001 ) removed the adjustable parameters from the original expression for the scf and made the technique rather similar to vca in terms of the observational data analysis .indeed , both scf and vca measure correlations of intensity in ppv `` slices '' ( channel maps with a given velocity window ) , but if scf treats the outcome empirically , the analytical relations in lazarian & pogosyan ( 2000 ) relate the vca measures to the underlying velocity and density statistics .mathematically , scf contains additional square roots and normalizations compared to the vca expressions .those make the analytical treatment , which is possible for simpler vca expressions , prohibitive .one might speculate that , similar to the case of conventional centroids and not normalized centroids introduced in lazarian & esquivel ( 2003 ) , the actual difference between the statistics measured by the vca and scf is not significant .in fact , we predicted several physically - motivated regimes for vca studies .for instance , slices are `` thick '' for eddies with velocity ranges less than and `` thin '' otherwise .vca relates the spectral index of intensity fluctuations within channel maps to the thickness of the velocity channel and to the underlying velocity and density in the emitting turbulent volume . in the vcathese variations of indexes with the thickness of ppv `` slice '' are used to disentangle velocity and density contributions .we suspect that similar thick " and thin " slice regimes should be present in the scf analysis of data , but they have not been reported yet . 
while the vca can be used for all the purposes the scf is used ( e.g. for an empirical comparisons of simulations and observations ) ,the opposite is not true . in fact , padoan et al .( 2004 ) stressed that vca eliminates errors inevitable for empirical attempts to calibrate ppv fluctuations in terms of the underlying 3d velocity spectrum ._ vcs _ is a statistical tool that uses the information of fluctuations along the velocity axis of the ppv . among all the tools that use spectral data , including the vca , it is unique , as it _ does not _ require spatial resolution .this is why , dealing with the absorption lines , where good spatial coverage is problematic , we employed the vcs . potentially , having many sources sampling the object one can create ppv cubes and also apply the vca technique .however , this requires very extended data sets , while for the vcs sampling with 5 or 10 sources can be sufficient for obtaining good statistics ( chepurnov & lazarian 2006a ) .we feel that dealing with the ism turbulence , it is synergetic to combine different approaches .for the wavelets used their relation with the underlying fourier spectrum is usually well defined .therefore the formulation of the theory ( presented in this work , as well as , in our earlier papers in terms of the fourier transforms ) in terms of wavelets is straightforward . at the same time, the analysis of data with the wavelets may be advantageous , especially , in the situations when one has to deal with window functions .in the paper above we have shown that * studies of turbulence with absorption lines are possible with the vcs technique if , instead of intensity , one uses the logarithm of the absorbed intensity , which is equivalent to the optical depth .* in the weak absorption regime , i.e. when the optical depth at the middle of the absorption line is less than unity , the analysis of the coincides with the analysis of intensities of emission for ideal resolution that we discussed in lp06 . * in the intermediate absorption retime , i.e. when the optical depth at the middle of the absorption line is larger than unity , but less than , the wings of the absorption line can be used for the analysis .the saturated part of the line is expected to be noise dominated . *the higher the absorption , the less the portion of the spectrum corresponds to the wings available for the analysis . in terms of the mathematical settingthis introduces and additional window in the expressions for the vcs analysis .however , the contrast of the small scale fluctuations increases with the decrease of the window . * for strong absorption regime ,the broadening is determined by lorentzian wings of the line and therefore no information on turbulence is available .following eqns .( [ eq : ptau_gen],[eq : maxprof_average ] ) the power spectrum of the optical depth is where and while and . since the mask is real , . 
to deal with absolute values in the lorentz transform ,we split integration regions in quadrants i , ii , iii and iv .integration over quadrants iii and iv can be folded into integration over regions i and ii respectively by substitution .writing out only integration over \nonumber \\ ii+iv & : & \int_0^{\infty } d k_v^\prime \int_{-\infty}^0 \!\ !d k_v^{\prime\prime } e^{-\frac{1}{2}{k_v^+}^2 d^- } e^{-\frac{1}{4}{k_v^-}^2 d^+ } e^ { -a k_v^- } \\ & & \times \left [ w\left(k_v - k_v^+-\frac{k_v^-}{2}\right ) w\left(k_v^+ -k_v -\frac{k_v^-}{2}\right ) + w\left(k_v^+ + k_v + \frac{k_v^-}{2}\right ) w\left(-k_v - k_v^+ + \frac{k_v^-}{2}\right ) \right ] \nonumber\end{aligned}\ ] ] changing variables of integration to and \nonumber \\ ii+iv & : & \int_{-\infty}^\infty d k_v^+ e^{-\frac{1}{2}{k_v^+}^2 d^- } \int_{|2 k_v^+|}^\infty \!\ ! d k_v^- e^{-\frac{1}{4}{k_v^-}^2 d^+ } e^ { -a k_v^- } \\ & & \times \left [ w\left(k_v - k_v^+-\frac{k_v^-}{2}\right ) w\left(k_v^+ -k_v -\frac{k_v^-}{2}\right ) + w\left(k_v + k_v^+ + \frac{k_v^-}{2}\right ) w\left(-k_v - k_v^+ + \frac{k_v^-}{2}\right ) \right ] \nonumber\end{aligned}\ ] ] at the end , the integrals can be combined into the main contribution and the correction that manifests itself only when lorentz broadening is significant . where symmetrized window is the final expression for is then armstrong , j. w. , rickett , b. j. , & spangler , s. r. 1995 , apj , 443 , 209 ballesteros - paredes , j. , klessen , r. , mac low , m. & vasquez - semadeni , e. 2006 , in `` protostars and planets v '' , reipurth , d. jewitt , and k. keil ( eds . ) , university of arizona press , tucson , 951 pp ., 2007 . , p.63 - 80 cho , j. , & lazarian , a. 2003 , , 345 , 325 cho , j. , & lazarian , a. 2004 , , 615 , l41 cho , j. , & lazarian , a. 2005 , theoretical and computational fluid dynamics , 19 , 127 cho , j. , lazarian , a. , honein , a. , knaepen , b. , kassinos , s. , & moin , p. 2003 , apj , 589 , l77 esquivel , a. , lazarian , a. , pogosyan , d. , & cho , j. 2003 , mnras , 342 , 325 falgarone , e. 1999 , in _ interstellar turbulence _ , ed .by j. franco , a. carraminana , cup , ( henceforth _ interstellar turbulence _ ) p.132 lazarian , a. , pogosyan , d. , & esquivel , a. 2002 , in asp conf .276 , seeing through the dust , ed .r. taylor , t. l. landecker , & a. g. willis ( san francisco : asp),182 lazarian , a. , pogosyan , d. , vzquez - semadeni , e. , & pichardo , b. 2001 , , 555 , 130 lazarian , a. , vishanic , e. , cho , j. 2004 , , 603 , 180 lazarian , a. & yan , h. 2004 , in `` astrophysical dust '' eds .a. witt & b. draine , aps , v. 309 , p.479 maron , j. & goldreich , p. 2001 , apj , 554 , 1175 monin , a.s . & yaglom , a. m. 1975 , statistical fluid mechanics : mechanics of turbulence , vol . 2 ( cambridge : mit press ) munch , g. 1999 , in `` interstellar turbulence '' , eds .j. franco and a. carraminana , cup , p. 1munch , g. 1958 , rev ., 30 , 1035 narayan , r. , & goodman , j. 1989 , mnras , 238 , 963 pudritz , r. e. 2001 , from darkness to light : origin and evolution of young stellar clusters , asp , vol . 243 .eds t. montmerle and p. andre .san francisco , p.3 spangler , s.r . , & gwinn , c.r . 1990 ,apj , 353 , l29 stanimirovi , s. , & lazarian , a. , 2001 , , 551 , 53 stutzki , j. 2001 , astrophysics and space science supplement , 277 , 39 sunyaev , r.a . ,norman , m.l . , & bryan , g.l .2003 , astronomy letters , 29 , 783 von hoerner , s. 1951 , zeitschrift fr astrophysics , 30 , 17 wilson , o.c . ,munch , g. 
, flather , e.m ., & coffeen , m.f .1959 , apjs , 4 , 199
we continue our work on developing techniques for studying turbulence with spectroscopic data . we show that doppler - broadened absorption spectral lines , in particular saturated absorption lines , can be used within the framework of the earlier - introduced technique termed the velocity coordinate spectrum ( vcs ) . the vcs relates the statistics of fluctuations along the velocity coordinate to the statistics of turbulence , and thus it does not require spatial coverage by sampling directions in the plane of the sky . we consider lines with different degrees of absorption and show that for lines of optical depth less than one , our earlier treatment of the vcs developed for spectral emission lines is applicable , if the optical depth is used instead of intensity . this amounts to correlating the logarithms of absorbed intensities . for larger optical depths and saturated absorption lines , we show that the amount of information that one can use is , inevitably , limited by noise . in practical terms , this means that only the wings of the line are available for the analysis . in terms of the vcs formalism , this results in introducing an additional window , whose size decreases with the increase of the optical depth . as a result , strongly saturated absorption lines carry information only about the small scale turbulence . nevertheless , the contrast of the fluctuations corresponding to the small scale turbulence increases with the increase of the optical depth , which provides advantages for studying turbulence by combining lines with different optical depths . we show that , eventually , at very large optical depths the lorentzian profile of the line becomes important and extracting information on velocity turbulence becomes impossible . by combining different absorption lines one can perform tomography of the turbulence in the interstellar gas in all its complexity .
threshold nets are obtained by assigning a weight , from a distribution , to each of nodes and connecting any two nodes and whose combined weights exceed a certain threshold , : .threshold nets can be produced of ( almost ) arbitrary degree distributions , including scale - free , by judiciously choosing the weight distribution and the threshold , and they encompass an astonishingly wide variety of important architectures : from the star graph ( a simple cartoon " model of scale - free graphs consisting of a single hub ) with its low density of links , , to the complete graph .studied extensively in the graph - theoretical literature , they have recently come to the attention of statistical and non - linear physicists due to the beautiful work of hagberg , swart , and schult . , and ( b ) its box representation , highlighting modularity .nodes are added one at a time from bottom to top , s on the left and s on the right.,scaledwidth=35.0% ] hagberg _ et al_. , exploit the fact that threshold graphs may be more elegantly encoded by a two - letter sequence , corresponding to two types of nodes , and .as new nodes are introduced , according to a prescribed sequence , nodes of type connect to none of the existing nodes , while nodes of type connect to all of the nodes , of either type : and . in fig .[ graph_box](a ) we show an example of the threshold graph obtained from the sequence .note the _ modular _ structure of threshold graphs : a subsequence of consecutive s gives rise to a -clique , while nodes in a subsequence of s connect to nodes thereafter , but not among one another .we highlight this modularity with a diagram of boxes ( similar to ) : oval boxes enclose nodes of type , that are not connected among themselves , while rectangular boxes enclose -cliques of -nodes .a link between two boxes means that all of the nodes in one box are connected to all of the nodes in the other , fig .[ graph_box](b ) . given the sequence of a threshold net ,there exist fast algorithms to compute important structural benchmarks , besides its modularity , such as degree distribution , triangles , betweenness centrality , and the spectrum and eigenvectors of the graph laplacian .the latter are a crucial determinant of dynamics and synchronization and have applications to graph partitioning and mesh processing .perhaps more importantly , it becomes thus possible to _ design _ threshold nets with a particular degree distribution , spectrum of eigenvalues , etc ., . despite their malleability ,threshold nets are limited in some obvious ways , for example their diameter is 1 or 2 , regardless of the number of nodes .our idea consists of studying the broader class of nets that can be constructed from a sequence ( formed from two or more letters ) by deterministic rules of connectivity on their own right .it is truly this property that gives the nets all their desired attributes : modularity ( as in everyday life complex nets ) , easily computable structural measures including the possibility of design and a high degree of compressibility . 
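the two constructions just described , the weight - and - threshold definition and the two - letter sequence encoding , are both easy to write down in code . the python sketch below builds a threshold net from random weights and , separately , builds a net from a letter sequence under the rule stated above ( each new node of type b connects to all nodes already present , each new node of type a to none ) ; the weight distribution , the threshold value and the example sequence are arbitrary choices made only for illustration .

```python
import numpy as np

def threshold_graph(weights, theta):
    """adjacency matrix of the threshold net: i ~ j iff w_i + w_j >= theta."""
    w = np.asarray(weights, dtype=float)
    adj = (w[:, None] + w[None, :]) >= theta
    np.fill_diagonal(adj, False)
    return adj

def sequence_graph(sequence):
    """adjacency matrix of the two-letter sequence net under the threshold rule:
    each new 'b' connects to all nodes already present, each new 'a' to none."""
    n = len(sequence)
    adj = np.zeros((n, n), dtype=bool)
    for j, letter in enumerate(sequence):
        if letter == 'b':
            adj[j, :j] = adj[:j, j] = True
    return adj

# arbitrary example values, chosen only to exercise the two constructions
rng = np.random.default_rng(3)
weights = rng.uniform(0, 1, size=8)
adj_w = threshold_graph(weights, theta=1.0)
adj_s = sequence_graph("aababbab")
print("degrees (weight construction):", adj_w.sum(axis=1))
print("degrees (sequence construction):", adj_s.sum(axis=1))
```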
roughly speaking, each additional letter to the alphabet allows for an increase of one link in the nets diameter , so that the three - letter nets possess diameter 3 or 4 ( some of the new types of two - letter nets have diameter 3 ) .this modest increase is very significant , however , in view of the fact that the diameter of many everyday life complex nets is not much larger than that .sequence nets gain us much latitude in the types of nets that can be described in this elegant fashion , while retaining much of the analytical appeal of threshold nets .another unusual property of sequence nets is that any ensemble of sequence nets admits a natural ordering ; simply list them alphabetically according to their sequences .one may use this ordering for exploring eigenvalues and other structural properties of sequence nets . in this paper, we make a first stab at the general class of _ sequence nets_. in section [ two - letter ] we explore systematically all of the possible rules for creating connected sequence nets from a two - letter alphabet . applying symmetry arguments, we find that threshold nets are only one of three equivalence classes , characterized by the highest level of symmetry .we then discuss the remaining two classes , showing that also then there is a high degree of modularity and that various structural properties can be computed easily .curiously , the new classes of two - letter sequence nets can be related to a generalized form of threshold nets , where the difference , rather than the sum of the weights , is the one compared to the threshold . in section[ three - letter ] we derive all possible forms of connected three - sequence nets . symmetry arguments lead us to the discovery of 30 distinct equivalence classes . among these classes ,we identify a natural extension of threshold nets to three - letter sequence nets . despite the enlarged alphabet , 3-letter sequence nets do retain many of the desirable properties of threshold and 2-letter sequence nets .we also show that at least some of the 3-letter sequence nets can be mapped into threshold nets with _ two _ thresholds , instead of one . we conclude with a summary and discussion of open problems in section [ conclude ] .consider graphs that can be constructed from sequences of the two letters and .we can represent any possible rule by a matrix * r * whose elements indicate whether nodes of type connect to nodes of type : if the nodes connect , and 0 otherwise ( stands for , respectively ) .[ graph_box ] gives an example of the graph obtained from the sequence , applying the _ threshold _ rule .since each element can be or independently of the others , there are possible rules .we shall disregard , however , the four rules that fail to connect between and , for they yield simple _ disjoint _ graphs of the two types of nodes : yields isolated nodes only , yields one complete graph of type and one of type , yields a complete graph of type and isolated nodes of type , etc . 
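the rule matrix just described is easy to turn into a generic constructor . in the sketch below the convention ( which is our reading of the threshold example above ) is that a new node of type x is connected to every existing node of type y exactly when r[(x , y)] = 1 ; with a constructor in hand it is also trivial to enumerate all 16 candidate rules and discard the four that never connect a - nodes to b - nodes . the example sequence is arbitrary .

```python
from itertools import product

def sequence_net(sequence, rule):
    """edges of the net built from `sequence` (a string over 'ab') and `rule`,
    a dict like {('a','b'): 1, ...}: a new node of type x is linked to every
    earlier node of type y whenever rule[(x, y)] == 1."""
    edges = set()
    for j, x in enumerate(sequence):
        for i, y in enumerate(sequence[:j]):
            if rule[(x, y)]:
                edges.add((i, j))
    return edges

def all_two_letter_rules():
    """all 16 candidate rules; the four with rule[(a,b)] = rule[(b,a)] = 0
    never connect the two node types and give disjoint graphs."""
    keys = list(product("ab", repeat=2))
    for bits in product((0, 1), repeat=4):
        rule = dict(zip(keys, bits))
        mixed = rule[('a', 'b')] or rule[('b', 'a')]
        yield rule, mixed

threshold_rule = {('a', 'a'): 0, ('a', 'b'): 0, ('b', 'a'): 1, ('b', 'b'): 1}
print(sequence_net("aababb", threshold_rule))          # arbitrary example sequence
print(sum(1 for _, mixed in all_two_letter_rules() if mixed), "rules connect a to b")
```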
applied to the sequence ( a ) , and from applied to the reverse - inverted sequence ( b ) , are identical.,scaledwidth=35.0% ] the list of remaining rules can be shortened further by considering two kinds of symmetries : ( a ) permutation , and ( b ) time reversal ._ permutation _ is the symmetry obtained by permuting between the two types of nodes , .thus , a permuted rule ( and ) acting on a permuted sequence ( ) yields back the original graph ._ time reversal _ is the symmetry obtained by reversing the arrows ( time " ) in the connectivity rules , or taking the transpose of .the transposed rule acting on the reversed sequence yields back the original graph .the two symmetry operations are their own inverse and they form a symmetry group . in particular , one may combine the two symmetries : a rule with applied on a reversed sequence with inverted types yields back the original graph , see fig .[ time_reversal ] .all of the four rules are equivalent and generate threshold graphs . is the rule for threshold graphs exploited by hagberg et al . , , and is equivalent to it by permutation . is obtained from by time reversal and permutation ( fig .[ time_reversal ] ) , and is obtained from by time reversal .the two rules are equivalent , by either permutation or time reversal , and generate non - trivial bipartite graphs that are different from threshold nets ( fig . [ abgraphs ] ) .the rule generates complete bipartite graphs .however , the complete bipartite graph can also be produced by applying to the sequence of s followed by s , so the rule is a `` degenerate '' form of .one could see that this is the case at the outset , because of the symmetrical relations , : these render the ordering of the s and s in the graph s sequence irrelevant . by the same principle , and are degenerate forms of and , respectively .they yield threshold graphs with segregated sequences of s and s. the two rules are equivalent , by either permutation or time reversal , and generate non - trivial graphs different from threshold graphs and graphs produced by ( fig . [ abgraphs ] ) . finally ,the rule is a degenerate form of ( or ) and yields only complete graphs ( which are threshold graphs , so is subsumed also in ) ., applying rules ( a ) , ( b ) , and ( c ) . note the figure - background symmetry of ( a ) and ( c ) : the graphs are the inverse , or complement of one another ( see text ) .the inverse of the threshold graph ( b ) is also a ( two - component ) threshold graph , obtained from the same sequence and applying the rule ( s complement).,scaledwidth=47.0% ] to summarize , , , and are the only two - letter rules that generate different classes of non - trivial connected graphs .there is yet another amusing type of symmetry : applying and to the same sequence yields _ complement _ , or _inverse _ graphs nodes are adjacent in the inverse graph if and only if they are _ not _ connected in the original graph .the figure - background symmetry manifest in the rules and ( ) is also manifest in the graphs they produce ( fig .[ abgraphs]a , c ) . on the other hand ,the inverse of threshold graphs are also threshold graphs . also , the complement of a threshold rule applied to the complement ( inverted ) sequence yields back the original graph . in this sense , threshold graphs have maximal symmetry .-graphs are typically less dense , and -graphs are typically denser than threshold graphs. 
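the figure - background symmetry noted above has a simple mechanical consequence that is easy to verify : if every entry of a rule matrix is flipped , the graph generated from the same creation sequence is exactly the complement of the original one , because each pair of nodes is linked under exactly one of the two rules . a short check of this follows , with an arbitrary example sequence and the same rule convention as before ( a new node of type x links to earlier nodes of type y when r[(x , y)] = 1 ) .

```python
from itertools import product

def sequence_net(sequence, rule):
    """edge set of the sequence net: a new node of type x is linked to every
    earlier node of type y exactly when rule[(x, y)] == 1."""
    return {(i, j) for j, x in enumerate(sequence)
            for i, y in enumerate(sequence[:j]) if rule[(x, y)]}

def complement_rule(rule):
    return {key: 1 - value for key, value in rule.items()}

sequence = "abbaabab"   # arbitrary example sequence
all_pairs = {(i, j) for i in range(len(sequence)) for j in range(i + 1, len(sequence))}

for bits in product((0, 1), repeat=4):
    rule = dict(zip(product("ab", repeat=2), bits))
    g = sequence_net(sequence, rule)
    g_complement = sequence_net(sequence, complement_rule(rule))
    # the two edge sets are disjoint and together cover every pair of nodes
    assert g | g_complement == all_pairs and not (g & g_complement)
print("flipping every rule entry always yields the complement graph")
```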
possible connections between nodes of type and .( b ) three equivalent representations of the threshold rule .the second and third diagram are obtained by label permutation and time - reversal , respectively .( c ) diagrams for and .note how they complement one another to the full set of connections in part ( a).,scaledwidth=25.0% ] the connectivity rules have an additional useful interpretation as directed graphs , where the nodes represent the letters of the sequence alphabet , a directed link , e , g ., from to indicates the rule , and a connection of a type to itself is denoted by a self - loop ( fig .[ graph_notation ] ) .because the rules are the same under permutation of types , there is no need to actually label the nodes : all graph isomorphs represent the same rule .likewise , time - reversal symmetry means that graphs with inverted arrows are equivalent as well .note that the direction of self - loops is irrelevant in this respect , so we simply take them as undirected .we shall make use of this notation , extensively , for the analysis of 3-letter sequence nets in section [ three - letter ] .a very special property of sequence nets is the fact that any arbitrary ensemble of such nets possesses a natural ordering , simply listing the nets alphabetically according to their sequences .in contrast , think for example of the ensemble of erds - rnyi random graphs of nodes , where links are present with probability : there is no natural way to order the graphs in the ensemble . plotting a structural property against the alphabetical ordering of the ensemble reveals some inner structure of the ensemble itself , yielding new insights into the nature of the nets . as an example , in fig .[ eigs_2threshold ] we show , the second smallest eigenvalue , for the ensemble of connected threshold nets containing nodes ( there are graphs in the ensemble , since their sequences must all start with the letter ) .notice the beautiful pattern followed by the eigenvalues plotted in this way , which resembles a fractal , or a cayley tree : the values within the first half of the graphs in the -axis repeat in the second half , and the pattern iterates as we zoom further into the picture .nodes , plotted against their alphabetical ordering.,scaledwidth=45.0% ] structural properties of the new classes of two - letter sequence nets , and , are as easily derived as for threshold nets .here we focus on alone , which forms a subset of bipartite graphs .the analysis for is very similar and often can be trivially obtained from the complementary symmetry of the two classes .all connected sequence nets in the class must begin with the letter and end with the letter .a sequence of this sort may be represented more compactly by the numbers of s and s in the alternating layers , .we assume that there are nodes and layers ( is even ) .we also use the notation and for the total number of s and s , as well as and likewise for . finally , since all the nodes in a layer have identical properties we denote any in the -th layer by and any in the -th layer by . with this notation in mindwe proceed to discuss several structural properties . :since s connect only to subsequent s ( and s only to preceding s ) the degree of the nodes is given by : there are no triangles in nets so the clustering of all nodes is zero . 
:every is connected to the last , so the distance between any two s is 2 .every is connected to the first in the sequence , so the distance between any two s is also 2 .the distance between and is 1 if ( they connect directly ) , and 3 if ( links to , that links to , that links to ) . : because of the time - reversal symmetry between and , it suffices to analyze nodes only .the result for can then be obtained by simply reversing the creation sequence and permuting the letters .the vertex betweenness of a node is defined as : where is the number of shortest paths from node to ( ) , excluding the cases that or . is the number of shortest paths from to that goes through .the factor appears for undirected graphs since each pair is counted twice in the summation .the betweenness of s can be calculated from lower layers to higher layers recursively . in the first b - layer and for . the second term on the rhs accounts for the shortest paths from layer to itself and all previous layers of , and the third term corresponds to paths from to to ( ) to .although this recursion can be solved explicitly it is best left in this form , as it thus highlights the fact that the betweenness centrality increases from one layer to the next . in other words ,the networks are _ modular _ , where each additional -layer dominates all the layers below . :unlike threshold nets , for nets the eigenvalues are _ not _ integer , and there seems to be no easy way to compute them .instead , we focus on the second smallest and largest eigenvalues , and , alone , for their important dynamical role : the smaller the ratio the more susceptible the network is to synchronization .consider first .for it is easy to show that both the _ vertex _ and _ edge connectivity _ are equal to .then , following an inequality in , the upper bound seems stricter and is a reasonable approximation to ( see fig . [ l2bounds ] ) .nets with against their alphabetical ordering ( solid curve ) , and their upper and lower bounds ( broken lines).,scaledwidth=40.0% ] for , using theorem 2.2 of one can derive the bounds but they do not seems very useful , numerically . playing with various structural properties of the nets , plotted against their alphabetical ordering ,we have stumbled upon the approximation where is the average degree of the graph , see fig . [ l2approx ] .the approximation is exact for bipartite _ complete _ graphs ( ) and the relative error increases slowly with ; it is roughly at 10% for . 
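the alphabetical - ordering experiments described in the last two paragraphs are easy to reproduce by brute force for small sizes : enumerate creation sequences , build each threshold net , and record the second smallest eigenvalue of its graph laplacian . the sketch below does this for the threshold rule ( new 'b' nodes link to all earlier nodes ) ; the conventions that connected sequences start with 'a' and end with 'b' follow the discussion in the text , while the network size is kept tiny purely for speed .

```python
import numpy as np
from itertools import product

def threshold_adjacency(sequence):
    """adjacency matrix under the threshold rule: each new 'b' links to all earlier nodes."""
    n = len(sequence)
    adj = np.zeros((n, n))
    for j, letter in enumerate(sequence):
        if letter == 'b':
            adj[j, :j] = adj[:j, j] = 1.0
    return adj

def lambda2(adj):
    """second smallest eigenvalue of the graph laplacian l = d - a."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

n = 8
ensemble = []
for tail in product("ab", repeat=n - 1):
    seq = "a" + "".join(tail)       # fix the first letter, as in the text
    if seq.endswith("b"):           # connected threshold nets end with 'b'
        ensemble.append((seq, lambda2(threshold_adjacency(seq))))
ensemble.sort()                      # alphabetical ordering of the ensemble
for seq, l2 in ensemble[:6]:
    print(seq, round(float(l2), 3))
```

plotting the recorded eigenvalues against their position in the sorted list reproduces , for small sizes , the nested pattern described above .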
nets with against their alphabetical ordering ( solid curve ) , and its approximated value ( broken line).,scaledwidth=40.0% ] in it was shown that threshold graphs have a mapping to a sequence net , with a unique sequence ( under the threshold rule " ) ; and conversely , for any -sequence net there exists a set of weights of the nodes ( not necessarily unique ) , such that connecting any two nodes that satisfy reproduces the sequence net .here we establish a similar relation between - ( or - ) sequence nets and a different kind of threshold net , where connectivity is decided by the difference rather than the sum of the weights .we begin with the mapping of a weighted set of nodes to a -sequence net .let a set of nodes have weights ( ) , taken from some probability density , and we assume , without loss of generality .denote nodes with as type and nodes with as type .finally , connect any two nodes and that satisfy .the resulting graph can be constructed by a unique sequence under the rule , obtained as follows .for convenience , rewrite the set of weights as where the first weights correspond to -nodes and the rest to -nodes . denote the creation sequence by and determine the by the algorithm ( in pseudo - code ) : set , for , do : 0.4 cm if 0.8 cm set and 0.4 cm else 0.8 cm set and end .it is understood that if the are exhausted before the end of the loop , the remainder -nodes are automatically affixed to the end of the sequence ( and similarly for the ) .for example , using this algorithm we find that the difference - threshold " graph resulting from the set of weights ,2,3,5,7,16,17,20 and , can be reproduced from the sequence , with the rule .consider now the converse problem : given a graph created from the sequence with the rule , we derive a ( non - unique ) set of weights such that connecting any two nodes with results in the same graph .rewrite first the creation sequence into its compact form , and assign weights for nodes in layer , weights for nodes in layer , and set the threshold at .for example , the sequence has a compact representation , with layers , so the three s in layer have weights , the two s in layer have weights , the two s in layer have weights , and the single in layer has weight .the weights , with connection threshold , reproduce the original graph .sequence graphs obtained from the rule can be also mapped to difference threshold graphs in exactly the same way , only that the criterion for connecting two nodes is then , instead of , as for .the mapping of sequence nets to generalized threshold graphs may be helpful in the analysis of some of their properties , for example , for finding the _ isoperimetric number _ of a sequence graph .with a three - letter alphabet , , there are at the outset possible rules .again , these can be reduced considerably , due to symmetry .because the rule matrix has 9 entries ( an odd number ) no rule can be identical to its complement .thus , we can limit ourselves to rules with no more than 4 non - zero entries and apply symmetry arguments to reduce their space at the very end we can then add the complements of the remaining rules . in fig .[ 3nets ] we list all possible three - letter rules with two , three , and four interactions .rules that lead to disconnected graphs , and symmetric rules ( by label permutation or time - reversal ) have been omitted from the figure . 
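before following the text into the three - letter case , the two - letter weight - to - sequence correspondence described above can be explored with a small test harness . since the exact inequality and the bookkeeping of the printed algorithm are partly lost in the extracted text , the snippet below does not reproduce the paper's algorithm ; it only provides the generic machinery for testing such a mapping : build one graph from weights with a candidate connection predicate , build another from a candidate creation sequence and rule , and compare edge sets . the weights , threshold , predicate , sequence and rule are all placeholders .

```python
def graph_from_weights(weights, connect):
    """edge set obtained by applying the predicate `connect(w_i, w_j)` to all pairs."""
    n = len(weights)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if connect(weights[i], weights[j])}

def graph_from_sequence(sequence, rule):
    """edge set of the sequence net: new node of type x links to earlier y iff rule[(x, y)]."""
    return {(i, j) for j, x in enumerate(sequence)
            for i, y in enumerate(sequence[:j]) if rule[(x, y)]}

# hypothetical check of a weights <-> sequence correspondence; the predicate,
# the threshold and the candidate sequence are placeholders, not the paper's mapping
theta = 1.0
weights = [-1.6, -0.9, -0.2, 0.3, 1.1, 2.0]
sequence = "aababb"                                    # candidate creation sequence
rule = {('a', 'a'): 0, ('a', 'b'): 0, ('b', 'a'): 1, ('b', 'b'): 0}   # the 'ba' rule
g_w = graph_from_weights(weights, lambda wi, wj: abs(wi - wj) >= theta)
g_s = graph_from_sequence(sequence, rule)
print("from weights :", sorted(g_w))
print("from sequence:", sorted(g_s))   # compare, or search over sequences for a match
```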
and ) , and rules 3 , 12 , 13 , and 14 are degenerate cases of rules 2 , 6 , 7 , and 6 , respectively .this leaves us with fifteen distinct three - letter rules ( underlined ) , and their fifteen complements , for a total of 30 different classes of three - letter sequence nets.,scaledwidth=40.0% ] rule 2 is in fact not new : identifying nodes of type and ( as marked in rule 1 of the figure ) we can easily see that the rule is identical to the two - letter rule 8 . in the same fashion ,rule 7 is the same as the two - letter threshold rule 4 .rule 3 is a degenerate form of 2 : because of the double connection and , the order at which and appear in the sequence relative to one another is inconsequential .( on the other hand , the order of the s relative to s _ is _ important , since s connect only to those s that appear earlier in the sequence . )then , given a sequence one can rearrange it by moving all the s to the end of the list .if we now apply 2 , and , then we get the same graph as from the original sequence under the rule 3 .the same consideration applies to rules , and , that are degenerate forms of 6 , 7 and 8 ( or 6 ) , respectively .we are thus left with only 15 distinct rules with fewer than 5 connections . to these one should add their complements , for a total of 30 distinct three - letter rules .note the resemblance of , , and to two - letter threshold nets .seems like a particularly symmetrical generalization and we will focus on it in much of our discussion below .while one can easily establish wether a graph is connected or not , _ a posteriori _ , with a burning algorithm that requires steps , it is useful to have shortcut rules that tell us how to avoid bad sequences at the outset : knowing that two - letter threshold graphs are connected if and only if their sequence ends with , deals with the question most effectively .analogous criteria exist for three - letter sequence graphs but they are a bit more complicated .for example , three - letter sequences interpreted with lead to connected graphs if and only if they satisfy : _ ( 1 ) the first a and the first c in the sequence appear before the last b. ( 2 ) the sequence does not start with b_. ( we assume that the sequence contains all three letters . ) for 1the requirements are : _ ( 1 ) the first a in the sequence must appear after the first b. ( 2 ) the last c in the sequence must appear before the last b. ( 3 ) the last a in the sequence must appear after the first c , and there ought to be at least one b between the two ._ similar criteria exist for all other three - letter rules and can be found by inspection .structural properties of three - letter sequence nets are analyzed as easily as those of two - letter nets , here we list , as an example , a few basic attributes of sequence nets .we use a notation similar to that of section [ new_classes ] . : and nodes form complete subgraphs , while nodes connect to all preceding s and s .thus the degree of the nodes are : : since the nodes make a subset complete graph , and likewise for , .the s do not connect among themselves , but they all connect to the nodes in the first layer ( which does not consist of s ) , so .for the distance of nodes from , we have where is the index of the first -layer and is the index of the last -layer .the first line follows since s are directly connected to preceding s and s .the second , and third and fourth lines are illustrated in fig .[ distance]a and b , respectively .the distance follows the very same pattern . 
finally , inspecting all different cases one finds in nets .( a ) if and the first is below the distance is 2 .( b ) if the first is above , then the first must be below ( ca nt start the sequence ) ; in that case if is below the last the distance is 3 , and otherwise the distance is 4 .only the relevant parts of the complete net are shown.,scaledwidth=40.0% ] : we have found no obvious way to compute the eigenvalues , despite the similarities between nets and two - letter threshold nets .however , plots of the eigenvalues against the alphabetical ordering of the nets once again reveals intriguing fractal patterns , and one can hope that these might be exploited at the very least to produce good bounds and approximations . in fig .[ r_r18 ] we plot the ratio for nets with against their alphabetical ordering .the -axis includes sequences of nets that are not connected : in this case and synchronization is not possible .these cases show as gaps in the plot , for example , the big gap in the center corresponds to disconnected sequences that start with the letter ( see section [ connect ] ) . for nets consisting of nodes , against their alphabetical ordering .note the gap near the center , which corresponds to sequences of disconnected graphs .note also the mirror symmetry this is due to the mirror symmetry of the rule itself.,scaledwidth=40.0% ] some of the three - letter sequence nets can be mapped to generalized forms of threshold nets .for example , the following scheme yields a _two_-threshold net , equivalent to three - letter sequence nets generated by the rule .let the nodes be assigned weights , from a random distribution , and connect any two nodes and that satisfy or . identifying nodes with weight with , nodes with with , and nodes with with , we see that all s connect to one another and all s connect to one another but the s do not , and s and s do not connect ; nodes of type and may or may not connect , and likewise for nodes of type and . to reflect the actual connections , the nodes of type and may be arranged in a sequence according to the algorithm in , for the threshold rule .also the nodes of type and may be arranged in a sequence , to reflect the actual connections , with the very same algorithm . because there are no connections between and the two results may be trivially merged .note , however , that once the - sequence is established the order of the s is set , so the direction of connections between and ( or ) is _ not _ arbitrary . in our example , the mapping is possible to but not to .we have introduced a new class of nets , sequence nets , obtained from a sequence of letters and fixed rules of connectivity .two - letter sequence nets contain threshold nets , and in addition two newly discovered classes .the class can be mapped to a difference - threshold " net , where nodes and are connected if their weights difference satisfies . this type of net may be a particularly good model for social nets , where the weights might measure political leaning , economical status , number of offspring , etc . 
, and agents tend to associate when they are closer in these measures . we have shown that the structural properties of the new classes of two - letter sequence nets can be analyzed with ease , and we have introduced an ordering in ensembles of sequence nets that is useful in visualizing and studying their various attributes . we have fully classified 3-letter sequence nets , and looked at a few examples , showing that they too can be analyzed simply . the diameter of sequence nets grows linearly with the number of letters in the alphabet , and for a 3-letter alphabet it is already 3 or 4 , comparable to many everyday life complex nets . realistic diameters might be achieved with a modest expansion of the alphabet . there remain numerous open questions : applying symmetry arguments we have managed to reduce the class of 3-letter nets to just 30 types , but we have not ruled out the possibility that some overlooked symmetry might reduce the list further ; the question of which sequences lead to connected nets can be studied by inspection for small alphabets , but we have no comprehensive approach to solve the problem in general ; we have shown how to map sequence nets to generalized types of threshold nets in some cases , but is such a mapping always possible ? is there a systematic way to find such mappings for any sequence rule ? what kinds of nets would result if the connectivity rules applied only to the preceding letters , instead of to _ all _ preceding letters ? etc . we hope to tackle some of these questions in future work .
we study a new class of networks , generated by sequences of letters taken from a finite alphabet consisting of letters ( corresponding to types of nodes ) and a fixed set of connectivity rules . recently , it was shown how a binary alphabet might generate threshold nets in a similar fashion [ hagberg et al . , phys . rev . e 74 , 056116 ( 2006 ) ] . just like threshold nets , sequence nets in general possess a modular structure reminiscent of everyday life nets , and are easy to handle analytically ( i.e. , calculate degree distribution , shortest paths , betweenness centrality , etc . ) . exploiting symmetry , we make a full classification of two- and three - letter sequence nets , discovering two new classes of two - letter sequence nets . the new sequence nets retain many of the desirable analytical properties of threshold nets while yielding richer possibilities for the modeling of everyday life complex networks more faithfully .
variable selection methods based on penalty theory have received great attention in high - dimensional data analysis .a principled approach is due to the lasso of , which uses the -norm penalty . also pointed out that the lasso estimate can be viewed as the mode of the posterior distribution .indeed , the penalty can be transformed into the laplace prior . moreover , this prior can be expressed as a gaussian scale mixture .this has thus led to bayesian developments of the lasso and its variants .there has also been work on nonconvex penalization under a parametric bayesian framework . derived their local linear approximation ( lla ) algorithm by combining the expectation maximization ( em ) algorithm with an inverse laplace transform .in particular , they showed that the penalty with can be obtained by mixing the laplace distribution with a stable density .other authors have shown that the prior induced from a penalty , called the nonconvex log penalty and defined in equation ( [ eqn : logp ] ) below , has an interpretation as a scale mixture of laplace distributions with an inverse gamma mixing distribution .recently , extended this class of laplace variance mixtures by using a generalized inverse gaussian mixing distribution .related methods include the bayesian hyper - lasso , the horseshoe model and the dirichlet laplace prior . in parallel ,nonparametric bayesian approaches have been applied to variable selection .for example , in the infinite gamma poisson model negative binomial processes are used to describe non - negative integer valued matrices , yielding a nonparametric bayesian feature selection approach under an unsupervised learning setting .the beta - bernoulli process provides a nonparametric bayesian tool in sparsity modeling . additionally , proposed a nonparametric approach for normal variance mixtures and showed that the approach is closely related to lvy processes .later on , constructed sparse priors using increments of subordinators , which embeds finite dimensional normal variance mixtures in infinite ones .thus , this provides a new framework for the construction of sparsity - inducing priors .specifically , discussed the use of -stable subordinators and inverted - beta subordinators for modeling joint priors of regression coefficients . the connection of two nonconvex penalty functions , which are referred to as log and exp and defined in equations ( [ eqn : logp ] ) and ( [ eqn : exp ] ) below , with the laplace transforms of the gamma and poisson subordinators .a subordinator is a one - dimensional lvy process that is almost surely non - decreasing . 
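the connection mentioned in the last sentence above , between the log and exp penalties and the laplace transforms of the gamma and poisson subordinators , can be checked numerically in a few lines . in the sketch below the scale parameters are simply set to one ( an assumption made only for the illustration ; the paper's own parametrization with its regularization parameters is richer ) : a gamma subordinator at time t then has laplace transform exp(-t * log(1 + s)) , and a poisson subordinator has laplace transform exp(-t * (1 - exp(-s))) , which is what the monte carlo estimates reproduce .

```python
import numpy as np

rng = np.random.default_rng(0)
t, n_samples = 2.5, 200_000
s_grid = np.array([0.1, 0.5, 1.0, 2.0, 5.0])

# gamma subordinator at "time" t (unit scale): T ~ gamma(shape=t, scale=1)
gamma_draws = rng.gamma(shape=t, scale=1.0, size=n_samples)
# poisson subordinator at "time" t (unit jumps): T ~ poisson(t)
poisson_draws = rng.poisson(lam=t, size=n_samples)

for s in s_grid:
    mc_gamma = np.exp(-s * gamma_draws).mean()
    mc_poisson = np.exp(-s * poisson_draws).mean()
    log_penalty = np.log(1.0 + s)          # bernstein function behind the log penalty
    exp_penalty = 1.0 - np.exp(-s)         # bernstein function behind the exp penalty
    print(f"s={s:4.1f}  gamma: {mc_gamma:.4f} vs {np.exp(-t * log_penalty):.4f}"
          f"   poisson: {mc_poisson:.4f} vs {np.exp(-t * exp_penalty):.4f}")
```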
in this paperwe further study the application of subordinators in bayesian nonconvex penalization problems under supervised learning scenarios .differing from the previous treatments , we model latent shrinkage parameters using subordinators which are defined as stochastic processes of regularization parameters .in particular , we consider two families of compound poisson subordinators : continuous compound poisson subordinators based on a gamma random variable and discrete compound poisson subordinators based on a logarithmic random variable .the corresponding lvy measures are generalized gamma and poisson measures , respectively .we show that both the gamma and poisson subordinators are limiting cases of these two families of the compound poisson subordinators .since the laplace exponent of a subordinator is a bernstein function , we have two families of nonconvex penalty functions , whose limiting cases are the nonconvex log and exp . additionally , these two families of nonconvex penalty functions can be defined via composition of log and exp , while the continuous and discrete compound poisson subordinators are mixtures of gamma and poisson processes .recall that the latent shrinkage parameter is a stochastic process of the regularization parameter .we formulate a hierarchical model with multiple regularization parameters , giving rise to a bayesian approach for nonconvex penalization . to reduce computational expenses, we devise an ecme ( for expectation / conditional maximization either " ) algorithm which can adaptively adjust the local regularization parameters in finding the sparse solution simultaneously .the remainder of the paper is organized as follows .section [ sec : levy ] reviews the use of lvy processes in bayesian sparse learning problems . in section [ sec : gps ] we study two families of compound poisson processes . in section [ sec : blrm ] we apply the lvy processes to bayesian linear regression and devise an ecme algorithm for finding the sparse solution .we conduct empirical evaluations using simulated data in section [ sec : experiment ] , and conclude our work in section [ sec : conclusion ] .our work is based on the notion of bernstein and completely monotone functions as well as subordinators .let with .the function is said to be completely monotone if for all and bernstein if for all . roughly speaking , a _subordinator _ is a one - dimensional lvy process that is non - decreasing almost surely .our work is mainly motivated by the property of subordinators given in lemma [ lem : subord ] .[ lem : subord ] if is a subordinator , then the laplace transform of its density takes the form where is the density of and , defined on , is referred to as the _ laplace exponent _ of the subordinator and has the following representation \nu ( d u).\ ] ] here and is the lvy measure such that .conversely , if is an arbitrary mapping from given by expression ( [ eqn : psi ] ) , then is the laplace transform of the density of a subordinator .it is well known that the laplace exponent is bernstein and the corresponding laplace transform is completely monotone for any . moreover ,any function , with , is a bernstein function if and only if it has the representation as in expression ( [ eqn : psi ] ) .clearly , as defined in expression ( [ eqn : psi ] ) satisfies . 
as a result , is nonnegative , nondecreasing and concave on .we are given a set of training data , where the are the input vectors and the are the corresponding outputs .we now discuss the following linear regression model : where , ^t ] is an improper prior of . additionally , is a poisson subordinator .specifically , is a poisson distribution with intensity taking values on the set .that is , which we denote by .in this section we explore the application of compound poisson subordinators in constructing nonconvex penalty functions .let be a sequence of independent and identically distributed ( i.i.d . )real valued random variables with common law , and let be a poisson process with intensity that is independent of all the . then , for , follows a compound poisson distribution with density ( denoted ) , and hence is called a compound poisson process .a compound poisson process is a subordinator if and only if the are nonnegative random variables .it is worth pointing out that if is the poisson subordinator given in expression ( [ eqn : possion ] ) , it is equivalent to saying that follows .we particularly study two families of nonnegative random variables : nonnegative continuous random variables and nonnegative discrete random variables .accordingly , we have continuous and discrete compound poisson subordinators .we will show that both the gamma and poisson subordinators are limiting cases of the compound poisson subordinators . in the first family a gamma random variable .in particular , let and the be i.i.d . from the distribution , where , and .the compound poisson subordinator can be written as follows the density of the subordinator is then given by we denote it by .the mean and variance are respectively .the laplace transform is given by where is a bernstein function of the form .\ ] ] the corresponding lvy measure is given by notice that is a gamma measure for the random variable .thus , the lvy measure is referred to as a generalized gamma measure .the bernstein function was studied by for survival analysis . however, we consider its application in sparsity modeling .it is clear that for and satisfies the conditions and .also , is a nonnegative and nonconvex function of on , and it is an increasing function of on .moreover , is continuous w.r.t . but nondifferentiable at the origin .this implies that can be treated as a sparsity - inducing penalty .we are interested in the limiting cases that and .[ pro : first ] let , and be defined by expressions ( [ eqn : first_tt ] ) , ( [ eqn : first ] ) and ( [ eqn : first_nu ] ) , respectively. then 1 . and ; 2 . and ; 3 . and .this proposition can be obtained by using direct algebraic computations .proposition [ pro : first ] tells us that the limiting cases yield the nonconvex log and exp functions .moreover , we see that converges in distribution to a gamma random variable with shape and scale , as , and to a poisson random variable with mean , as .it is well known that degenerates to the log function .here we have shown that approaches to exp as .we list another special example in table [ tab : exam ] when .we refer to the corresponding penalty as a _ linear - fractional _ ( lfr ) function . 
for notational simplicity ,we respectively replace and by and in the lfr function .the density of the subordinator for the lfr function is given by we thus say each follows a squared bessel process without drift , which is a mixture of a dirac delta measure and a randomized gamma distribution .we denote the density of by .lllll & bernstein functions & lvy measures & subordinators & priors + log & & & & proper + exp & ] & & & improper + + + in the second case , we consider a family of discrete compound poisson subordinators .particularly , is discrete and takes values on . and it is defined as logarithmic distribution , where and , with probability mass function given by moreover , we let have a poisson distribution with intensity , where . then is distributed according to a negative binomial ( nb ) distribution .the probability mass function of is given by which is denoted as .we thus say that follows an nb subordinator .let and .it can be verified that has the same mean and variance as the distribution .the corresponding laplace transform then gives rise to a new family of bernstein functions , which is given by .\ ] ] we refer to this family of bernstein functions as _ compound exp - log _ ( cel ) functions .the first - order derivative of w.r.t . is given by the lvy measure for is given by the proof is given in appendix 1 .we call this lvy measure a_ generalized poisson measure _ relative to the generalized gamma measure . like , can define a family of sparsity - inducing nonconvex penalties . also , for , and satisfies the conditions , and .we present a special cel function as well as the corresponding and in table [ tab : exam ] , where we replace and by and for notational simplicity .we now consider the limiting cases .[ pro:8 ] assume is defined by expression ( [ eqn : second_nu ] ) for fixed and .then we have that 1 . and . and .3 . and .4 . and notice that this shows that converges to , as .analogously , we obtain the second part of proposition [ pro:8]-(d ) , which implies that as , converges in distribution to a gamma random variable with shape parameter and scale parameter . an alternative proof is given in appendix 2 .proposition [ pro:8 ] shows that degenerates to exp as , while to log as .this shows an interesting connection between in expression ( [ eqn : first ] ) and in expression ( [ eqn : second ] ) ; that is , they have the same limiting behaviors .we note that for , \ ] ] which is a composition of the log and exp functions , and that \ ] ] which is a composition of the exp and log functions .in fact , the composition of any two bernstein functions is still bernstein .thus , the composition is also the laplace exponent of some subordinator , which is then a mixture of the subordinators corresponding to the original two bernstein functions .this leads us to an alternative derivation for the subordinators corresponding to and .that is , we have the following theorem whose proof is given in appendix 3 .[ thm : poigam ] the subordinator associated with is distributed according to the mixture of distributions with mixing , while associated with is distributed according to the mixture of distributions with mixing . additionally , the following theorem illustrates a limiting property of the subordinators as approaches 0 .[ thm : limit ] let be a fixed constant on ] or , then converges in probability to , as .2 . if where \ ] ] or , then converges in probability to , as .the proof is given in appendix 4 . 
since converges in probability to " implies converges in distribution to , " we have that finally , consider the four nonconvex penalty function given in table [ tab : exam ] .we present the following property .that is , when and for any fixed , we have \leq\frac{s}{\gamma s { + } 1 } \leq\frac{1}{\gamma } [ 1 { - } \exp ( { - } \gamma s ) ] \leq\frac { 1}{\gamma } \log\big({\gamma } s { + } 1 \big ) \leq s,\ ] ] with equality only when .the proof is given in appendix 5 .this property is also illustrated in figure [ fig : penalty ] .in table [ tab : exam ] with and . ]we apply the compound poisson subordinators to the bayesian sparse learning problem given in section [ sec : levy ] . defining , we rewrite the hierarchical representation for the joint prior of the under the regression framework . that is , we assume that & \stackrel{ind}{\sim } & l(b_j|0 , \sigma ( 2\eta_j)^{-1 } ) , \\f_{t^{*}(t_j)}(\eta_j ) & { \propto } & \eta_j^{-1 } f_{t(t_j)}(\eta_j),\end{aligned}\ ] ] which implies that the joint marginal pseudo - prior of the s is given by we will see in theorem [ thm : poster ] that the full conditional distribution is proper .thus , the maximum _ a posteriori _ ( map ) estimate of is based on the following optimization problem : clearly , the s are local regularization parameters and the s are latent shrinkage parameters . moreover ,it is interesting that ( or ) is defined as a subordinator w.r.t . .the full conditional distribution is conjugate w.r.t .the prior , which is .specifically , it is an inverse gamma distribution of the form .\ ] ] in the following experiment , we use an improper prior of the form ( i.e. , ) .clearly , is still an inverse gamma distribution in this setting . additionally , based on \prod_{j=1}^p \exp(-\frac { \eta _ j}{\sigma } |b_j|)\vadjust{\eject}\ ] ] andthe proof of theorem [ thm : poster ] ( see appendix 6 ) , we have that the conditional distribution is proper .however , the absolute terms make the form of unfamiliar .thus , a gibbs sampling algorithm is not readily available and we resort to an em algorithm to estimate the model .notice that if is proper , the corresponding normalizing constant is given by d |b_j|= 2 \int_{0}^{\infty } \exp\big [ -t_j \psi\big(\frac{|b_j| } { \sigma } \big ) \big ] d ( |b_j|/\sigma),\ ] ] which is independent of . also , the conditional distribution is independent of the normalizing term .specifically , we always have that which is proper .as shown in table [ tab : exam ] , except for log with which can be transformed into a proper prior , the remaining bernstein functions can not be transformed into proper priors . in any case ,our posterior computation is directly based on the marginal pseudo - prior .we ignore the involved normalizing term , because it is infinite if is improper and it is independent of if is proper .given the estimates of in the e - step of the em algorithm , we compute p(\eta_j|b_j^{(k ) } , \sigma ^{(k ) } , t_j ) } d \eta_j + \log p(\sigma ) \\ & \propto-\frac{n+\alpha_{\sigma}}{2 } \log\sigma{- } \frac{\|{\bf y } { -}{\bf x}{\bf b}\|_2 ^ 2 + \beta_{\sigma}}{2 \sigma } - ( p+1 ) \log \sigma \\ & \quad- \frac{1 } { \sigma } \sum_{j=1}^p here we omit some terms that are independent of parameters and .in fact , we only need to compute in the e - step . considering that and taking the derivative w.r.t . on both sides of the above equation , we have that the m - step maximizes w.r.t. . 
in particular , it is obtained that : the above em algorithm is related to the linear local approximation ( lla ) procedure .moreover , it shares the same convergence property given in and .subordinators help us to establish a direct connection between the local regularization parameters s and the latent shrinkage parameters s ( or ) .however , when we implement the map estimation , it is challenging how to select these local regularization parameters .we employ an ecme ( for expectation / conditional maximization either " ) algorithm for learning about the s and s simultaneously . for this purpose ,we suggest assigning gamma prior , namely , because the full conditional distribution is also gamma and given by \sim\ga\big(\alpha_{t } , 1/[\psi(|b_j|/\sigma ) + \beta_{t}]\big).\ ] ] recall that we here compute the full conditional distribution directly using the marginal pseudo - prior , because our used bernstein functions in table [ tab : exam ] can not induce proper priors . however ,if is proper , the corresponding normalizing term would rely on . as a result ,the full conditional distribution of is possibly no longer gamma or even not analytically available . figure [ fig : graphal0]-(a ) depicts the hierarchical model for the bayesian penalized linear regression , and table [ tab : alg ] gives the ecme procedure where the e - step and cm - step are respectively identical to the e - step and the m - step of the em algorithm , with .the cme - step updates the s with in order to make sure that , it is necessary to assume that . in the following experiments , we set .we conduct experiments with the prior for comparison .this prior is induced from the -norm penalty , so it is a proper specification .moreover , the full conditional distribution of w.r.t .its gamma prior is still gamma ; that is , \sim\ga\big({\alpha_t}{+}2 , \ ; 1/({\beta_t } { + } \sqrt{|b_j|/\sigma})\big).\ ] ] thus , the cme - step for updating the s is given by the convergence analysis of the ecme algorithm was presented by , who proved that the ecme algorithm retains the monotonicity property from the standard em .moreover , the ecme algorithm based on pseudo - priors was also used by . .the basic procedure of the ecme algorithm [ cols= " < , < " , ] our analysis is based on a set of simulated data , which are generated according to .in particular , we consider the following three data models small , " medium " and large ." data s : : : , , , and is a matrix with on the diagonal and on the off - diagonal .data m : : : , , has non - zeros such that and , and .data l : : : , , , and ( five blocks ) . for each data model ,we generate data matrices such that each row of is generated from a multivariate gaussian distribution with mean and covariance matrix , , or .we assume a linear model with multivariate gaussian predictors and gaussian errors .we choose such that the signal - to - noise ratio ( snr ) is a specified value . following the setting in , we use in all the experiments .we employ a standardized prediction error ( spe ) to evaluate the model prediction ability .the minimal achievable value for spe is .variable selection accuracy is measured by the correctly predicted zeros and incorrectly predicted zeros in .the snr and spe are defined as for each data model , we generate training data of size , very large validation data and test data , each of size . for each algorithm ,the optimal global tuning parameters are chosen by cross validation based on minimizing the average prediction errors . 
with the model computed on the training data , we compute spe on the test data .this procedure is repeated times , and we report the average and standard deviation of spe and the average of zero - nonzero error .we use `` '' to denote the proportion of correctly predicted zero entries in , that is , ; if all the nonzero entries are correctly predicted , this score should be .we report the results in table [ tab : toy2 ] .it is seen that our setting in figure [ fig : graphal0]-(a ) is better than the other two settings in figures [ fig : graphal0]-(b ) and ( c ) in both model prediction accuracy and variable selection ability .especially , when the size of the dataset takes large values , the prediction performance of the second setting becomes worse .the several nonconvex penalties are competitive , but they outperform the lasso .moreover , we see that log , exp , lfr and cel slightly outperform .the penalty indeed suffers from the problem of numerical instability during the em computations . as we know, the priors induced from lfr , cel and exp as well as log with are improper , but the prior induced from is proper .the experimental results show that these improper priors work well , even better than the proper case . vs. on data s " and data m " where is the permutation of such that . ]recall that in our approach each regression variable corresponds to a distinct local tuning parameter .thus , it is interesting to empirically investigate the inherent relationship between and .let be the estimate of obtained from our ecme algorithm ( alg 1 " ) , and be the permutation of such that . figure [ fig : tb1 ] depicts the change of vs. with log , exp , lfr and cel on data s " and data m. " we see that is decreasing w.r.t .moreover , becomes 0 when takes some large value .a similar phenomenon is also observed for data l. 
" this thus shows that the subordinator is a powerful bayesian approach for variable selection .in this paper we have introduced subordinators into the definition of nonconvex penalty functions .this leads us to a bayesian approach for constructing sparsity - inducing pseudo - priors .in particular , we have illustrated the use of two compound poisson subordinators : the compound poisson gamma subordinator and the negative binomial subordinator .in addition , we have established the relationship between the two families of compound poisson subordinators .that is , we have proved that the two families of compound poisson subordinators share the same limiting behaviors .moreover , their densities at each time have the same mean and variance .we have developed the ecme algorithms for solving sparse learning problems based on the nonconvex log , exp , lfr and cel penalties .we have conducted the experimental comparison with the state - of - the - art approach .the results have shown that our nonconvex penalization approach is potentially useful in high - dimensional bayesian modeling .our approach can be cast into a point estimation framework .it is also interesting to fit a fully bayesian framework based on the mcmc estimation .we would like to address this issue in future work .consider that & = \log\big[1-\frac{1}{1{+}\rho } \exp(-\frac{\rho}{1{+}\rho } \gamma s)\big ] - \log\big[1-\frac{1}{1{+}\rho}\big ] \\ & = \sum_{k=1}^{\infty } \frac{1}{k ( 1{+}\rho)^k } \big[1- \exp\big ( { -}\frac{\rho}{1{+}\rho } k \gamma s\big)\big ] \\ & = \sum_{k=1}^{\infty } \frac{1}{k ( 1{+}\rho)^k } \int_{0}^{\infty } ( 1- \exp(- u s ) ) \delta_{\frac{\rho k \gamma}{1{+}\rho}}(u ) d u.\end{aligned}\ ] ] we thus have that .we here give an alternative proof of proposition [ pro:8]-(d ) , which is immediately obtained from the following lemma .let take discrete value on and follow negative binomial distribution . if converges to a positive constant as , converges in distribution to a gamma random variable with shape and scale .since we have that notice that and this leads us to similarly , we have that a mixture of with mixing .that is , letting , and , we have that we now consider a mixture of with which is .let , , and .thus , =1 ] . as a result , for . 
as for , it is directly obtained from that since = \frac{\gamma}{\exp(\gamma s ) } - \frac{\gamma}{1+\gamma s}<0 ] .it then follows the propriety of because \prod _ { j=1}^p \exp\big ( { - } t_j \psi\big(\frac{|b_j| } { \sigma }\big ) \big)\leq\exp\big [ { - } \frac{1}{2 \sigma } \|{\bf y}- { \bf x}{\bf b}\|_2 ^ 2 \big].\ ] ] we now consider that \prod_{j=1}^p \exp\big(-t_j \psi \big(\frac{|b_j| } { \sigma } \big ) \big).\ ] ] let { \bf y}$ ] .since the matrix is positive semidefinite , we obtain .based on expression ( [ eqn : pf01 ] ) , we can write \varpropto n({\bf b}|{\bf z } , \sigma({\bf x}^t { \bf x})^{+ } ) { \iga}(\sigma|\frac{\alpha_{\sigma } { + } n{+}2p{-}q}{2 } , \nu{+ } \beta_{\sigma}).\ ] ] subsequently , we have that d { \bf b } d \sigma } < \infty,\ ] ] and hence , \prod_{j=1}^p\exp\big(-t_j \psi\big(\frac{|b_j| } { \sigma } \big ) \big ) d { \bf b}d \sigma } < \infty.\ ] ] therefore is proper .thirdly , we take } { \sigma ^{\frac{n+\alpha_{\sigma}+2p}{2 } + 1 } } \prod_{j=1}^p \big\{\exp \big({-}t_j \psi\big(\frac{|b_j| } { \sigma } \big ) \big ) \frac { t_j^{{\alpha_t}{- } 1 } \exp({- } { \beta_t } t_j)}{\gamma({\alpha_t } ) } \big\ } \\ & \triangleq f({\bf b } , \sigma , { \bf t}).\end{aligned}\ ] ] in this case , we compute } { \sigma^{\frac{n+\alpha_{\sigma}+2p}{2 } + 1 } } \prod _ { j=1}^p \frac{1 } { \big({\beta_t } { + } \psi\big(\frac{|b_j| } { \sigma } \big ) \big)^{{\alpha_t } } } d { \bf b}d \sigma}.\ ] ] similar to the previous proof , we also have that because . as a result , is proper .finally , consider the setting that .that is , and . in this case , if , we obtain and . as a result, we use the inverse gamma distribution .thus , the results still hold .polson , n. g. and scott , j. g. ( 2010 ) .`` shrink globally , act locally : sparse bayesian regularization and prediction . '' in bernardo , j. m. , bayarri , m. j. , berger , j. o. , dawid , a. p. , heckerman , d. , smith , a. f. m. , and west , m. ( eds . ) , _ bayesian statistics 9_. oxford university press .the authors would like to thank the editors and two anonymous referees for their constructive comments and suggestions on the original version of this paper .the authors would especially like to thank the associate editor for giving extremely detailed comments on earlier drafts .this work has been supported in part by the natural science foundation of china ( no . 61070239 ) .
in this paper we discuss bayesian nonconvex penalization for sparse learning problems . we explore a nonparametric formulation for latent shrinkage parameters using subordinators , which are one - dimensional lévy processes . we particularly study a family of continuous compound poisson subordinators and a family of discrete compound poisson subordinators . we exemplify four specific subordinators : gamma , poisson , negative binomial and squared bessel subordinators . the laplace exponents of the subordinators are bernstein functions , so they can be used as sparsity - inducing nonconvex penalty functions . we exploit these subordinators in regression problems , yielding a hierarchical model with multiple regularization parameters . we devise ecme ( expectation / conditional maximization either ) algorithms to simultaneously estimate regression coefficients and regularization parameters . empirical evaluation on simulated data shows that our approach is feasible and effective in high - dimensional data analysis .
synchronization in chaotic systems is a surprising phenomenon , which recently received a lot of attention , see e.g. .even though the heuristic theory and the classification of the synchronization phenomena are well studied and reasonably well understood , a mathematically rigorous theory is still lacking . generally speaking, a standard difficulty lies in the fact that the phenomenon involves the dynamics of non - uniformly chaotic systems , typically consisting of different sub - systems , whose long - time behavior depends crucially on the sign of the `` central '' lyapunov exponents , i.e. of those exponents that are zero in the case of zero coupling , and become possibly non - trivial in the presence of interactions among the sub - systems .the mathematical control of such exponents is typically very hard .progress in their computation is a fundamental preliminary step for the construction of the srb measure of chains or lattices of chaotic flows , which may serve as toy models for extensive chaotic systems out - of - equilibrium ( i.e. they may serve as standard models for non - equilibrium steady states in non - equilibrium statistical mechanics ) . in a previous paper , we introduced a simple model for phase synchronization in a three - dimensional system consisting of the suspension flow of arnold s cat map coupled with a clock .the coupling in was unidirectional , in the sense that it did not modify the suspension flow , but only the clock motion .notwithstanding its simplicity , the model has a non - trivial behavior : in particular , it exhibits phase locking and in we constructed the corresponding attractive invariant manifold via a convergent expansion .however , because of unidirectionality , the lyapunov spectrum in was very simple : the `` longitudinal '' exponents ( i.e. , those corresponding to the motion on the invariant manifold ) coincided with the unperturbed ones , and the central exponent was expressed in the form of a simple integral of the perturbation over the manifold . in this paper, we extend the analysis of to a simple bidirectional model , for which the lyapunov spectrum is non - trivial , and we show how to compute it in terms of a modified expansion , which takes the form of a decorated tree expansion discussed in detail in the following . the model is defined as follows. take arnold s cat map and denote by and the eigenvalues and eigenvectors , respectively , of : with , so that are normalized .we let the suspension flow of arnold s cat be defined as , with , if .formally , is the solution to the following differential equation instead of , but throughout the paper we only used the fact that at all times the variable jumped abruptly from to , and besides these discontinuities the flow was smooth .therefore , all the results and statements of are correct , modulo this re - interpretation of the flow equation ( * ? ? ?* ( 2.1 ) ) , where should be replaced by . 
] on : x=(t)(s ) x , [ 1.susf]where is the -periodic delta function such that for all .the model of interest is obtained by coupling the suspension flow of arnold s cat map with a clock by a regular perturbation , so that on the evolution equation is +\varepsilon f(x , w , t ) , & \\\dot{w}=1+\varepsilon g(x , w , t ) , \end{cases}\ ] ] where and , are -periodic in their arguments .for the motions of and are independent .therefore , the relative phase mod among the two flows is arbitrary .if and if the interaction is dissipative ( in a suitable sense , to be clarified in a moment ) , then the phases of the two sub - systems can lock , so that the limiting motion in the far future takes place on an attractor of dimension smaller than 3 , for all initial data in an open neighborood of the attractor . in , we explicitly constructed such an attractor in terms of a convergent power series expansion in , for and a special class of dissipative functions . in this paper , we generalize the analysis of to .our first result concerns the construction of the attractive invariant manifold for .[ prop:1 ] let be the flow on associated with the dynamics , with and analytic in their arguments .set and assume there exists such that and , independently of .then there are constants such that for there exist a homemorphism and a continuous function , both hlder - continuous of exponent , such that the surface is invariant under the poincar map and the dynamics of on is conjugated to that of on , i.e. the proof of this theorem is constructive : it provides an explicit algorithm for computing the generic term of the perturbation series of with respect to , it shows how to estimate it and how to prove convergence of the series . as a by - product, we show that the invariant manifold is holomorphic in in a suitable domain of the complex plane , whose boundary contains the origin .the construction also implies that is an attractor .we denote by its basin of attraction and by an arbitrary open neighborood of contained in such that , with the lesbegue measure on .in addition to the construction of the invariant surface , in this paper we show how to compute the invariant measure on the attractor and the lyapunov spectrum , in terms of convergent expansions .more precisely , let be the lesbegue measure restricted to , i.e. , denoting by the characteristic function of , , for all measurable .the `` natural '' invariant measure on the attractor , , is defined by for all continuous functions and -a.e . , where .the limiting measure is supported on and such that . on the attractor , -a.e .point defines a dynamical base , i.e. 
a decomposition of the tangent plane as , such that the constants of motion are the lyapunov exponents , and we suppose them ordered as ; in the following we shall call the _ central _ lyapunov exponent .our second main result is the following .[ prop:2 ] there exists such that the following is true .let be an hlder continuous function on .then is hlder continuous in , for .if is analytic , then is analytic in , for and a suitable -dependent constant .moreover , the lyapunov exponents , , are analytic in for .in particular , the central lyapunov exponent is negative : , while .the paper is organized as follows .theorem [ prop:1 ] is proved in section [ sec:2 ] below .the proof follows the same strategy of : ( 1 ) we first write the equations for the invariant surface and solve them recursively at all orders in ; ( 2 ) then we express the result of the recursion ( which is not simply a power series in ) in terms of tree diagrams ( planar graphs without loops ) ; trees with nodes are proportional to times a _ tree value _ , which is also a function of ; ( 3 ) finally , using the tree representation , we derive an upper bound on the tree values . the fact that the dissipation is small , of order , produces bad factors in the bounds of the tree values , for some depending on the treetherefore , we need to show that for any tree is smaller than a fraction of , if is the number of nodes in the tree . this is proved by exhibiting suitable cancellations , arising from the condition .theorem [ prop:2 ] is proved in section [ sec:3 ] .the proof adapts the tree expansion to the computation of the local lyapunov exponents on the invariant surface , in the spirit of ( * ? ? ?* chapter 10 ) and .the positive local lyapunov exponent plays the role of the gibbs potential for the invariant measure .therefore , given a convergent expansion for , can be constructed by standard cluster expansion methods , as in ( * ? ? ?* chapter 10 ) .finally , can be expressed as averages of the local exponents over the stationary distribution . in section [ sec:4 ], we present some numerical evidences for a fractal to non - fractal transition of the invariant manifold , and formulate some conjectures .in this section , we define the equations for the invariant manifold , by introducing a conjugation that maps the dynamics restricted to the attractor onto the unperturbed one .the conjugation is denoted by , with where is the identity in .let and be the initial conditions at time .we will look for a solution to of the form for , with boundary conditions the evolution equation for will be written by `` expanding the vector field at first order in and at zeroth order in '' , i.e. as where and g(,t):=g(s h()+a(,t),w_0+t+u(,t),t)-_0(,t)- _ 1(,t)u(,t).[2.g ] the logic in the rewriting is that the ( linear ) approximate dynamics obtained by neglecting is dissipative , with contraction rate proportional to , thanks to the second condition in : this will allow us to control the full dynamics as a perturbation of the approximate one .the approximation obtained by neglecting is the simplest one displaying dissipation . 
in principlewe could have expanded the dynamics at first order both in and in , but the result would be qualitatively the same .we now set and fix such that ; then we obtain the equation for , if expressed in terms of , gives , after integration , for , these give [ eq:2.7 ] it is useful to introduce an auxiliary parameter , to be eventually set equal to , and rewrite and as [ eq:2.8 ] the idea is to first consider as a parameter independent of , then write the solution in the form of a power series in , with coefficients depending on , and finally show that the ( -dependent ) radius of convergence of the series in behaves like , , at small : this implies that we will be able to take without spoiling the summability of the series . summarizing, we will look for a solution of in the form with [ eq:2.11 ] and : , with and as in ; is defined in ; must be eventually set equal to .the solution to - is looked for in the form of a power series expansion in ( at fixed , in the sense explained after ) .therefore , we write [ eq:2.10 ] and insert these expansions into . by the analyticity assumption on and , we may expand ( defining and ) where in the first sum denotes the constraint , and and similarly for ( recall that and are the eigenvalues and eigenvectors of ) ; here and henceforth we are denoting by the standard scalar product in .moreover where and . for future reference, we note since now that the analyticity of and yields , by the cauchy inequality , for some constant , uniformly in ( here ). define , , and , with ) . setting and plugging into, we find for [ eq:2.12 ] introduce the notation where . here and henceforth , if , the product should be interpreted as 1 , and similarly for the other products in the case that and/or . then , defining , we find , for , [ eq:2.13 ] where .we now want to bound the generic term in the series originating from the recursive equations ; the goal is to show that the -th order is bounded proportionally to , with and . we find convenient to represent graphically the coefficients in in terms of rooted trees ( or simply trees , in the following ) as in . we refer to ( * ? ? ?* section v ) for the definition of trees and notations . with respect to the trees in in the present casethere are four _ types _ of nodes .we use the symbols , , and , calling them nodes of type , , and , respectively : they correspond to contributions to , respectively .the constraint forbids a node of type 0 or 1 to be immediately preceded by exactly one node of these two types .recall that a _ tree _ is a partially ordered set of nodes and lines ; the partial ordering relation is denoted by and each line will be drawn as an arrow pointing from the node it exits to the node it enters .we call the set of nodes and the set of lines of the tree . as in we denote by the node such that for any node : will be called the _ special node _ and the line exiting will be called the _ root line_. the root line can be imagined to enter a further point , called the root , which , however , is not counted as a node . with each nodewe associate a label to denote its type and a time variable ] is the integer part .implies that the radius of convergence of the series is bounded by .therefore , we can take . hlder - continuity of and can be proved mutatis mutandis like in .this completes the proof of theorem [ prop:1 ] .in order to compute the lyapunov exponents , we need to understand how the vectors on the tangent space evolve under the interacting dynamics . 
to this purpose , we set , rewrite as , with , and write the dynamics on the tangent space as follows : + \e \boldsymbol{\partial f}({\boldsymbol{x}},t)\;,\ ] ] where , is the solution to found in section [ sec:2 ] , are the projections into the unperturbed eigendirections , are the corresponding unperturbed eigenvalues , and is the jacobian matrix of .integration of gives the tangent map .we denote by the solution to with initial condition at ( started at ) . for $ ]we obtain we look for a conjugation , where is the identity in and is a matrix , such that , by setting for a suitable matrix , to be determined , one has , that is of course only the part of involving the tangent dynamics has still to be solved , so we study the conjugation equation the matrix will be taken to be diagonal in the basis , where , and .then in the basis one has while the matrix takes diagonal form , with values along the main diagonal . from now on we shall use this basis , and implicitly assume that the indices , run over the values , unless stated otherwise . by setting , we obtain from note that , given a solution of , then also is a solution with replaced with , where are non - zero functions from to .therefore , with no loss of generality , we can require the diagonal elements of to vanish : hence will be looked for as an off - diagonal matrix .we look for the conjugation in the form - , with equations and give where can be computed iteratively via as & & -.8truecm ds_^(())= +_n1^n_0^d_1 ( s^_1_(()),_1 ) _ 0^_1d_2(s^_2_(()),_2 ) + & & 1.15truecm _ 0^_n-1d_n(s^_n_(()),_n):=(+m(,)).[eq:3.9bis]at first order in , gives ( with ) in particular , setting and recalling that is off - diagonal , we find while , for , we have to solve recursively for , the result being ( if ) : [ eq:3.11 ] in order to compute the higher orders , we insert in the left side of , thus getting where .note that , according to , is expressed as a series of iterated integrals of , where is analytic in its argument .therefore , the power series expansion in of can be obtained ( and its -th order coefficient can be bounded ) by using the corresponding expansions for the components of ; here the functions , , and are as in with the coefficients given by and bounded as in .we write by using the very definition of and the bounds , it is straightforward to prove that ^(n ) ( ) : = _ i , j \{+,-,3 } | _ i , j ( ) | c_4^n^-[(2n-1)/3][e3.14]for a suitable .now , if , the diagonal part of gives while the off - diagonal part can be solved in a way similar to , i.e. , if , [ eq:3.16 ] , \label{eq:3.16a } \\k_{\alpha,3}^{(n)}(\varphi ) & = -\a\l_\a^{-1}\sum_{m\in\mathbb z_\a}\l_\a^{-m } \big [ \mathfrak{m}_{\alpha,3}^{(n)}(s^m\f ) + \mathcal q_{\a,3}^{(n)}(s^m\f ) \bigr ] , \label{eq:3.16b } \\k_{3,-\alpha}^{(n)}(\varphi ) & = - \a\sum_{m\in\mathbb z_{\a}}\l_{-\a}^{m } \big[\mathfrak{m}_{3,-\alpha}^{(n)}(s^m\f)\l_{-\a } + \mathcalq_{3,-\a}^{(n)}(s^m\f ) \bigr ] , \label{eq:3.16c}\end{aligned}\ ] ] where we have set in the simple case that , , while thus recovering the formula for given in ( * ? ? ?* section vii ) . 
in figure[ fig : rappresentazione - grafica - di gamma^n ] and [ fig : rappresentazione - grafica - di k^n ] we give a graphical representation of and , respectively .the representation of and is the same as in figure [ fig : rappresentazione - grafica - di k^n ] , simply with the labels replaced by and , respectively .=0.5cm=1.0 cm & & & & & & & & * + [ o][f**:black ] + _ i^(n)()=@<-[r ] ^-ii ^(1.24)(n ) & * + [ f**:black ] & = @<-[r]^-ii _ ( 0.75)v_0 & * + [ f**:black]@ < [ r]^<<^(0.6)ii ^(1.4)(n ) & * + [ o][f**:black ] & + & @<-[r]^-ii _ ( 0.80)v_0 & * + [ f**:black ] @ < [ ru]^-ij_(1.2)(n_1)@<-[rd]^(0.2)^-ji^(1.2)(n_2 ) & + & & & & & & & & * + [ f**:white ] =0.5cm=1.0 cm & & & & & & & & * + [ o][f**:black ] + k_i , j^(n)()=@<-[r ] ^-i j ^(1.24)(n ) & * + [ f**:white ] & = @<-[r]^-i j _ ( 0.75)v_0 & * + [ f**:white]@ < [ r]^<<^(0.7)i j ^(1.4)(n ) & * + [ o][f**:black ] & + & @<-[r]^-i j _ ( 0.80)v_0 & * + [ f**:white ] @ < [ ru]^-i j_(1.2)(n_1 ) @<-[rd]^(0.2)^(0.6)j j^(1.2)(n_2 ) & + & & & & & & & & * + [ f**:white ] + & & * + [ f**:white ] + + @<-[r]^-i j _ ( 0.80)v_0 & * + [ f**:white ] @<-[ru]^-i j_(1.2)(n_1 ) @<-[rd]^(0.25)^(0.6)j j^(1.2)(n_2 ) & + & & * + [ f**:black ] to iterate the graphical construction and provide a tree representation for both and , we need a few more definitions . we identify three types of _ principal nodes _ , that we call of type , and , and represent graphically , respectively , by , and . with any such node ,we associate a label , to denote its type , and two labels , which will be drawn superimposed to the line exiting ; if is of type , then , while if is of type , then .a node is of type if and only if it is an end - node .furthermore , with each node with , we associate a label , while we set for all nodes with ; with each node with , we associate a label such that either or ( recall that if is a nodes of type then , so that either or are ) , and a label ; if is of type or we define .if denotes the number of lines entering and the number of lines of type entering , we have the constraints and .moreover : ; .if and is the node immediately preceding on has and . if , let be the two nodes immediately preceding ; if , with no loss of generality we assume that is of type ( so that is of type ) ; if , with no loss of generality we assume that is of type ( so that is of type ) ; in both cases we impose the constraints that , and .denoting by the node immediately following , we set , and we are finally ready to define the _ node factors _ associated with the nodes : then , by iterating the graphical representation in figures [ fig : rappresentazione - grafica - di gamma^n ] and [ fig : rappresentazione - grafica - di k^n ] , we end up with trees like that in figure [ fig : esempi - di - alberi 1 e 2 ] for ; note that the end - nodes are all of type . if the only difference is that the special node is of type . with the definitions above ,we denote by the set of labelled trees such that , , and the constraints and properties described above .then it is straightforward to prove by induction that where given , we let be the family of labelled trees differing from just by the choice of the labels . then , using , it is easy to see that },\ ] ] which immediately implies that },\ ] ] for a suitable constant . therefore , the radius of convergence in of the series for and is proportional to , which allows us to fix eventually .the lyapunov exponents are the time average of the quantities . 
however , if denotes the restriction of on the attractor , the dynamical system is conjugated to an asonov system and hence it is ergodic : therefore time - averaged observables are -independent .furthermore , there exists a unique srb measure such that the measure can be computed by reasoning as in ( * ? ? ?* chapter 10 ) .let be a markov partition for on and set .call the symbolic code induced by the markov partition and denote by the symbolic representation of a point , i.e. . then the expansion rate of along the unstable manifold of is , where with denoting the shift map and . if denotes the gibbs distribution for the energy function ( see ( * ? ? ?* chapter 5 ) ) , then the srb distribution for the system is and can be computed accordingly ( see ( * ? ? ?* chapter 6 ) ) .moreover , by construction , is analytic in and hlder - continuous in .therefore , for any hlder - continuous function , the expectation value is hlder - continuous in for .if is analytic in , then there exists a positive constant , depending on , such that is analytic for .in particular the lyapunov exponents are analytic in and , from , one finds this completes the proof of theorem [ prop:2 ] .in this section , we discuss informally some of the consequences of our main theorem , and formulate a conjecture about the transition from fractal to smooth(er ) behavior , which is suggested by our result . from theorem [ prop:1 ] we know that the surface of the attractor is h continuous , but we do not have any control on its possible differentiability .this means that our attractor may be fractal , and we actually expect this to be the case for positive and small enough .an analytic estimate of the fractal dimension of the attractor in terms of the lyapunov exponents is provided by the _ lyapunov dimension _ , which is defined as follows .consider an ergodic dynamical system admitting an srb measure on its attractor , and let be its lyapunov exponents , counted with their multiplicities .then , where is the largest integer such that .the _ kaplan - yorke conjecture _ states that coincides with the hausdorff dimension of the attractor ( also known as the _ information dimension _ , see , e.g. , ( * ? ? ?* chapt.5.5.3 ) for a precise definition ) . in this section ,we take as a heuristic estimate of the fractal dimension of the attractor , without worrying about the possible validity of the conjecture ( which has been rigorously proven only some special cases , see e.g. ) .specializing the expression of to our context , we find that , for sufficiently small , d_l=2+=3++r_2(),[eq : fract]where is the taylor remainder of order 2 in , which is computable explicitly in terms of the convergent expansion derived in the previous sections .note that , so that is smaller than 3 ( as desired ) and is decreasing in , for small .therefore , combined with the kalpan - yorke conjecture , ( [ eq : fract ] ) suggests that the attractor is fractal for small , and its fractal dimension decreases ( as expected ) by increasing the strength of the dissipative interaction .it is now tempting to extrapolate to larger values of ( possibly beyond the range of validity of theorem [ prop:2 ] ) , up to the point where , possibly , the relative ordering of and changes . 
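the lyapunov ( kaplan - yorke ) dimension entering the estimate above is easy to evaluate once the exponents are known . the following sketch is illustrative only : the numerical exponents fed to it are stand - ins chosen by us ( an expanding exponent equal to that of the unperturbed cat map , a small negative central exponent of order epsilon , and its contracting partner ) , not values computed from the model .

```python
import numpy as np

def kaplan_yorke_dimension(exponents):
    """lyapunov (kaplan-yorke) dimension from a list of lyapunov exponents:
    d_l = k + (lambda_1 + ... + lambda_k) / |lambda_{k+1}|, where k is the
    largest index such that the partial sum lambda_1 + ... + lambda_k >= 0."""
    lam = np.sort(np.asarray(exponents, dtype=float))[::-1]   # descending order
    csum = np.cumsum(lam)
    nonneg = np.where(csum >= 0)[0]
    if len(nonneg) == 0:
        return 0.0                       # even lambda_1 < 0: point attractor
    k = nonneg[-1] + 1                   # number of exponents in the sum
    if k == len(lam):
        return float(len(lam))           # no contracting direction left over
    return k + csum[k - 1] / abs(lam[k])

# illustrative numbers only (not the exponents of the model above)
lam_plus = np.log((3 + np.sqrt(5)) / 2)  # expanding exponent of the cat map
for eps in (0.01, 0.05, 0.1):
    print(eps, kaplan_yorke_dimension([lam_plus, -0.5 * eps, -lam_plus]))
```

with these stand - in values the dimension decreases smoothly from 3 as the magnitude of the central exponent grows , which is the qualitative behavior described in the text .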
in the simple case that ( which is the case considered in ) , the lyapunov exponents are independent of : .therefore , on the basis of , we conjecture that by increasing the hausdorff dimension of the attractor decreases from to until reaches the critical value , where .formally , this critical point is (higher orders ) , the higher orders being computable via the expansion described in the previous sections . for , we expect the attractor to be a smooth manifold of dimension two .the transition is illustrated in fig.[fig1 ] and [ fig2 ] for the simple case that and , in which case the expected critical point is .( 100,30 ) ( 0,0 ) and .a ) : .b ) : .,title="fig:",width=264 ] ( 50,0 ) and .a ) : .b ) : .,title="fig:",width=264 ] ( 100,30 ) ( 0,0 ) and as in fig .a ) : .b ) : .,title="fig:",width=264 ] ( 50,0 ) and as in fig .a ) : .b ) : .,title="fig:",width=264 ] if , on the basis of numerical simulations , the attractor does not seem to display a transition from a fractal set to a smooth manifold .still , for suitable choices of , we expect the attractor to display a `` first order phase transition '' , located at the value of where , to be called again . at ,the derivative of the hausdorff dimension of the attractor with respect to is expected to have a jump .a possible scenario is that the attractor is fractal both for and for , but it is `` smoother '' at larger values of , in the sense that its closure may be a regular , smooth , manifold of dimension two .an illustration of this smoothing " mechanism is in fig.[fig3 ] .( 100,30 ) ( 0,0 ) .a ) : .b ) : .,title="fig:",width=264 ] ( 50,0 ) .a ) : .b ) : .,title="fig:",width=264 ] it would be interesting to investigate the nature of this transition in a more quantitative way , by comparing a numerical construction of the attractor with the theory proposed here , obtained by extrapolating the convergent expansion described in this paper to intermediate values of .such a comparison goes beyond the purpose of this paper , and we postpone the discussion of this issue to future research .one has for and for . given a tree of order one proceed by induction .assume that for all the trees of order and consider a tree of order .let be the special node of , and call the subtrees entering , with . if is a node of type 1 , then , so that the bound follows for . if , then the node preceding can not be of type 1 .call the subtrees entering , with . then one has for all . finally if is not a node of type 1 , then it has not to be counted and the argument follows by using the inductive bounds for the subtrees entering .moreover the bound in lemma [ lem:2.1 ] is optimal .indeed there are trees of order such that .define recursively the level of a node by setting if is an end - node and if at least one line entering exits a node with level . then consider a tree in which all nodes except the end - nodes are circles ( thais is of type 1 or 2 ) and have two entering lines except those with level which have only one entering line; see figure [ fig:3.8 ] for an example with ( means we can have any kind of square node ) . for such treesone has , where , is the number of internal nodes and is the number of end - nodes . 
hence .=0.5 cm & & & * + [ o][f]@<-[r ] & * + < 3.0pt>[f**:black]+[f**:white ] + & & * + [ o][f]@<-[ru]@<-[rd ] & & + & & & * + [ o][f]@<-[r ] & * + < 3.0pt>[f**:black]+[f**:white ] + @<-[r ] & * + [ o][f]@<-[ruu]@<-[rdd ] & & & + & & & * + [ o][f]@<-[r ] & * + <3.0pt>[f**:black]+[f**:white ] + & & * + [ o][f]@<-[ru]@<-[rd ] & & + & & & * + [ o][f]@<-[r ] &* + < 3.0pt>[f**:black]+[f**:white ] 10 adkmz a. arenas , a. daz - guilera , j. kurths , y. moreno , ch .zhou , _ synchronization in complex networks _ phys .rep . * 469 * ( 2008 ) , no . 3 , 93 - 153 .blech i.i .blekhman , _ synchronization in science and technology _ , asme press , new york , 1988 .bkovz s. boccaletti , j. kurths , g. osipov , d.l .valladares , c.s .zhou , _ the synchronization of chaotic systems _ , phys .* 366 * ( 2002 ) , no . 1 - 2 , 1 - 101 .bfg04 f. bonetto , p. falco , a. giuliani , _ analyticity of the srb measure of a lattice of coupled anosov diffeomorphisms of the torus _ , j. math* 45 * ( 2004 ) , no . 8 , 3282 - 3309. eck - ruelle j .-eckmann , d. ruelle , _ ergodic theory of chaos and strange attractors _ , rev .modern phys .* 57 * ( 1985 ) , no . 3 , 617 - 656 .farmer - ott - yorke j.d .farmer , e. ott , j.a .yorke , _ the dimension of chaotic attractors _ , phys .d * 7 * ( 1983 ) , no . 1 - 3 , 153 - 180 . g. gallavotti , _ foundations of fluid dynamics _ , springer - verlag , berlin heidelberg , 2002. gbg g. gallavotti , f. bonetto , g. gentile , _ aspects of the ergodic , qualitative and statistical theory of motion _ , springer , berlin , 2004 .ggg g. gallavotti , g. gentile , a. giuliani , _ resonances within chaos _ ,chaos * 22 * , 026108 ( 2012 ) , 6 pages .gonzalez j.m .gonzlez - miranda , _ synchronization and control of chaos .an introduction for scientists and engineers _ , imperial college press , london , 2004 .kaplan - yorke l. kaplan , j.a .yorke , _ chaotic behavior of multidimensional difference equations _, functional differential equations and approximation of fixed points , lecture notes in mathematics 730 , 204 - 227 , eds .peitgen , h .- o .walther , springer , berlin , 1979 .pcjmh l. pecora , th.l .carroll , g.a .johnson , d.j .mar , j.f .heagy , _ fundamentals of synchronization in chaotic systems , concepts , and applications _ , chaos * 7 * ( 1997 ) , no . 4 , 520 - 543 .a. pikovsky , m. rosenblum , j. kurths , _ synchronization .a universal concept in nonlinear sciences _ , cambridge university press , cambridge , 2001 .
we consider a three - dimensional chaotic system consisting of the suspension of arnold s cat map coupled with a clock via a weak dissipative interaction . we show that the coupled system displays a synchronization phenomenon , in the sense that the relative phase between the suspension flow and the clock locks to a special value , thus making the motion fall onto a lower dimensional attractor . more specifically , we construct the attractive invariant manifold , of dimension smaller than three , using a convergent perturbative expansion . moreover , we compute the lyapunov exponents , including notably the central one , via convergent series . the result generalizes a previous construction of the attractive invariant manifold in a similar but simpler model . the main novelty of the current construction lies in the computation of the lyapunov spectrum , which consists of non - trivial analytic exponents . some conjectures about a possible smoothing transition of the attractor as the coupling is increased are also discussed . * _ keywords : _ * partially hyperbolic systems ; anosov systems ; synchronization ; phase - locking ; lyapunov exponents ; fractal attractor ; srb measure ; tree expansion ; perturbation theory .
in high energy physics ( hep ) , unfolding ( also called unsmearing ) is a general term describing methods that attempt to take out the effect of smearing resolution in order to obtain a measurement of the true underlying distribution of a quantity .typically the acquired data ( distorted by detector response , inefficiency , etc . )are binned in a histogram .the result of some unfolding procedure is then a new histogram with estimates of the true mean bin contents prior to smearing and inefficiency , along with some associated uncertainties .it is commonly assumed that such unfolded distributions are useful scientifically for comparing data to one or more theoretical predictions , or even as quantitative measurements to be propagated into further calculations .since an important aspect of the scientific enterprise is to test hypotheses , we can ask : `` should unfolded histograms be used to test hypotheses ? '' if the answer is yes , then one can further ask if there are limitations to the utility of testing hypotheses using unfolded histograms . if the answer is no , then the rationale for unfolding would seem to be limited .in this note we illustrate an approach to answering the title question with a few variations on a toy example that captures some of the features of real - life unfolding problems in hep .the goal of the note is to stimulate more interest in exploring what one of us ( rc ) has called a _ bottom - line test _ for an unfolding method : _ if the unfolded spectrum and supplied uncertainties are to be useful for evaluating which of two models is favored by the data ( and by how much ) , then the answer should be materially the same as that which is obtained by smearing the two models and comparing directly to data without unfolding _this is a different emphasis for evaluating unfolding methods than that taken in studies that focus on intermediate quantities such as bias and variance of the estimates of the true mean contents , and on frequentist coverage of the associated confidence intervals .while the focus here is on comparing two models for definiteness , the basic idea of course applies to comparing one model to data ( i.e. , goodness of fit ) , and to more general hypothesis tests .recently zech has extended the notion of the bottom - line test to parameter estimation from fits to unfolded data , and revealed failures in the cases studied , notably in fits to the width of a peak .we adopt the notation of the monograph _ statistical data analysis _ by glen cowan ( suppressing for simplicity the background contribution that he calls ) : is a continuous variable representing the _ true _ value of some quantity of physical interest ( for example momentum ) .it is distributed according to the pdf . is a continuous variable representing the _ observed _ value of the same quantity of physical interest , after detector smearing effects and loss of events ( if any ) due to inefficiencies . is the resolution function of the detector : the conditional pdf for observing , given that the true value is ( and given that it was observed somewhere ) . 
contains the expectation values of the bin contents of the _ true _( unsmeared ) histogram of ; contains the bin contents of the _ observed _ histogram ( referred to as the _ smeared histogram _ , or occasionally as the _ folded _histogram ) of in a single experiment ; contains the expectation values of the bin contents of the _ observed _ ( smeared ) histogram of , including the effect of inefficiencies : ] .the estimate of provided by an unfolding algorithm is .thus we have as discussed by cowan and noted above , includes the effect of the efficiency , i.e. , the effect of events in the true histograms not being observed in the smeared histogram .the only efficiency effect that we consider here is that due to events being smeared outside the boundaries of the histogram .( that is , we do not consider an underflow bin or an overflow bin . )the response matrix depends on the resolution function and on ( unknown ) true bin contents ( and in particular on their true densities _ within _ each bin ) , and hence is either known only approximately or as a function of assumptions about the true bin contents .the numbers of bins and need not be the same .( is often suggested , while leaves the system of equations under - determined . ) for the toy studies discussed here , we set , so that is a square matrix that typically has an inverse . in the smeared space , we take the observed counts to be independent observations from the underlying poisson distributions : the unfolding problem is then to use and as inputs to obtain estimates of , and to obtain the covariance matrix of these estimates ( or rather an estimate of , ) , ideally taking in account uncertainty in .when reporting unfolded results , authors report , ideally along with .( if only a histogram of with `` error bars '' is displayed , then only the diagonal elements of are communicated , further degrading the information . )the `` bottom line test '' of an application of unfolding is then whether hypothesis tests about underlying models that predict can obtain meaningful results if they take as input and .for the null hypothesis , we consider the continuous variable to be distributed according the true pdf where is known , and is a normalization constant . for the alternative hypothesis , we consider to be distributed according the true pdf where is the same as in the null hypothesis , and where is a pdf that encodes a departure from the null hypothesis . in this note, we assume that both and are known , and lead to potentially significant departures from the null hypothesis at large .the constant controls the level of such departures .figure [ truepdfs ] displays the baseline pdfs that form the basis of the current study , for which we take to be a normalized gamma distribution , and .is represented by , shown in red .the alternative hypothesis has an additional component shown in dashed blue , with the sum in solid blue ., title="fig:",scaledwidth=49.0% ] is represented by , shown in red .the alternative hypothesis has an additional component shown in dashed blue , with the sum in solid blue ., title="fig:",scaledwidth=49.0% ] for each hypothesis , the true bin contents are then each proportional to the integral of the relevant over each bin . for both hypotheses ,we take the smearing of to be the gaussian resolution function , where is known . for baseline plots, we use the values shown in table [ baseline ] , and the study the effect of varying one parameter at a time . 
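the toy model just described can be generated in a few lines . in the sketch below the gamma parameters , the form and amplitude of the extra component of the alternative hypothesis , the expected event count and the smearing width are assumptions standing in for the baseline values of table [ baseline ] , chosen only for illustration .

```python
import numpy as np

rng = np.random.default_rng(42)

# assumed stand-ins for the baseline parameters (not the paper's exact values)
GAMMA_SHAPE, GAMMA_SCALE = 2.0, 1.5       # null-hypothesis gamma density
EXTRA_LOC, EXTRA_SCALE, EXTRA_AMPL = 7.0, 1.0, 0.1   # extra component of h1
N_EXPECTED = 10_000                        # mean total number of events
SIGMA = 0.5                                # gaussian resolution, half bin width
EDGES = np.linspace(0.0, 10.0, 11)         # 10 bins of width 1 on [0, 10]

def sample_true(n, alternative=False):
    """draw true values x under h0 (gamma) or h1 (gamma plus extra component)."""
    x = rng.gamma(GAMMA_SHAPE, GAMMA_SCALE, size=n)
    if alternative:
        extra = rng.random(n) < EXTRA_AMPL
        x[extra] = rng.normal(EXTRA_LOC, EXTRA_SCALE, size=extra.sum())
    return x

def smear(x, sigma=SIGMA, lo=0.0, hi=10.0):
    """gaussian smearing; events smeared outside [lo, hi] are lost."""
    y = x + rng.normal(0.0, sigma, size=len(x))
    return y[(y >= lo) & (y <= hi)]

n_events = rng.poisson(N_EXPECTED)
x_true = sample_true(n_events, alternative=False)
y_obs = smear(x_true)
n_obs, _ = np.histogram(y_obs, bins=EDGES)   # observed smeared counts n_i
print(n_obs, n_obs.sum(), "of", n_events, "events retained")
```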
for both and , we consider histograms with 10 bins of width 1 spanning the interval [ 0,10 ] .the default is half this bin width .the quantities , , and are then readily computed as in ref .figure [ histos ] displays and ( in solid histograms ) , while fig .[ responsepurity ] displays the response matrix as well as the source bin of events that are observed in each bin . in each simulated experiment ,the total number of events is sampled from a poisson distribution with mean given in table [ baseline ] ..values of parameters used in the baseline unfolding examples [ cols="<,<,<",options="header " , ] and smeared , for ( left ) the null hypothesis and ( right ) the alternative hypothesis .data points : in mc simulation a set of true points is chosen randomly and then smeared to be the set .the three points plotted in each bin are then the bin contents when and are binned , followed by the unfolded estimate for bin contents . ,title="fig:",scaledwidth=49.0% ] and smeared , for ( left ) the null hypothesis and ( right ) the alternative hypothesis .data points : in mc simulation a set of true points is chosen randomly and then smeared to be the set .the three points plotted in each bin are then the bin contents when and are binned , followed by the unfolded estimate for bin contents . , title="fig:",scaledwidth=49.0% ] for default parameter values in table [ baseline ] .( right ) for each bin in the measured value , the fraction of events that come from that bin ( dominant color ) and from nearby bins ., title="fig:",scaledwidth=49.0% ] for default parameter values in table [ baseline ] .( right ) for each bin in the measured value , the fraction of events that come from that bin ( dominant color ) and from nearby bins ., title="fig:",scaledwidth=49.0% ] boundary effects at the ends of the histogram are an important part of a real problem . in our simplified toy problems ,we use the same smearing for events near boundaries as for all events ( hence not modeling correctly some physical situations where observed values can not be less than zero ) ; events that are smeared to values outside the histogram are considered lost and contribute to the inefficiencies included in .these toy models capture some important aspects of real problems in hep . for example, one might be comparing event generators for top - quark production in the standard model .the variable might be the transverse momentum of the top quark , and the two hypotheses might be two calculations , one to higher order. another real problem might be where represents transverse momentum of jets , the null hypothesis is the standard model , and the alternative hypothesis is some non - standard - model physics that turns on at high transverse momentum .( in this case , it is typically not the case that amplitude of additional physics is known . )in a typical search for non - standard - model physics , the hypothesis test of vs. is formulated in the smeared space , i.e. , by comparing the histogram contents to the mean bin contents predicted by the true densities under each hypothesis combined with the resolution function and any efficiency losses .the likelihood for the null hypothesis is the product over bins of the poisson probability of obtaining the observed bins counts : where the are taken from the null hypothesis prediction .likelihoods for other hypotheses , such as , are constructed similarly . 
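the response matrix and the poisson likelihood of the smeared bins can be computed along the following lines . this is a minimal sketch : the uniform - within - bin approximation for the true density ( the dependence on the assumed true density noted earlier ) and the bin edges and smearing width are our assumptions .

```python
import numpy as np
from scipy.stats import norm, poisson

EDGES = np.linspace(0.0, 10.0, 11)   # 10 bins of width 1 on [0, 10]
SIGMA = 0.5                          # assumed gaussian resolution

def response_matrix(edges=EDGES, sigma=SIGMA, n_sub=50):
    """r[i, j] = p(observed in bin i | true in bin j), including the loss of
    events smeared outside [edges[0], edges[-1]]; the true density inside each
    bin is approximated as uniform (an assumption)."""
    nbins = len(edges) - 1
    r = np.zeros((nbins, nbins))
    for j in range(nbins):
        # sub-sample the true bin to integrate over the assumed uniform density
        xs = np.linspace(edges[j], edges[j + 1], n_sub)
        for i in range(nbins):
            p = norm.cdf(edges[i + 1], loc=xs, scale=sigma) - \
                norm.cdf(edges[i], loc=xs, scale=sigma)
            r[i, j] = p.mean()
    return r

def poisson_loglik(n_obs, mu, r):
    """log-likelihood of observed counts n for true means mu, with nu = r @ mu."""
    nu = r @ mu
    return poisson.logpmf(n_obs, nu).sum()

R = response_matrix()
print("efficiencies (column sums):", R.sum(axis=0).round(3))
```

the column sums of the matrix are the efficiencies : they drop below one in the edge bins , where events can smear out of the histogram , as described above .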
for testing goodness of fit, it can be useful to use the observed data to construct a third hypothesis , , corresponding the _ saturated model _ , which sets the predicted mean bin contents to be exactly those observed .thus is the upper bound on for any hypothesis , given the observed data .the negative log - likelihood ratio is a goodness - of - fit test statistic that is asymptotically distributed as a chisquare distribution if is true .similarly one has for testing .an alternative ( in fact older ) goodness - of - fit test statistic is pearson s chisquare , yet another alternative , generally less favored , is known as neyman s chisquare , ref . argues that eqn .[ baker ] is the most appropriate gof statistic for poisson - distributed histograms , and we use it as our reference point in the smeared space .figure [ nullgofsmeared ] shows the distributions of and , and their difference , for histograms generated under .both distributions follow the expected distribution with 10 degrees of freedom ( dof ) .in contrast , the histogram of ( figure [ nullgofsmeared ] ( bottom left ) ) has noticeable differences from the theoretical curve ., in the smeared space with default value of gaussian , histograms of the gof test statistics : ( top left ) , ( top right ) , and ( bottom left ) .the solid curves are the chisquare distribution with 10 dof .( bottom right ) histogram of the event - by - event difference in the two gof test statistics and . , title="fig:",scaledwidth=49.0% ] , in the smeared space with default value of gaussian , histograms of the gof test statistics : ( top left ) , ( top right ) , and ( bottom left ) .the solid curves are the chisquare distribution with 10 dof .( bottom right ) histogram of the event - by - event difference in the two gof test statistics and . , title="fig:",scaledwidth=49.0% ] , in the smeared space with default value of gaussian , histograms of the gof test statistics : ( top left ) , ( top right ) , and ( bottom left ) .the solid curves are the chisquare distribution with 10 dof .( bottom right ) histogram of the event - by - event difference in the two gof test statistics and ., title="fig:",scaledwidth=49.0% ] , in the smeared space with default value of gaussian , histograms of the gof test statistics : ( top left ) , ( top right ) , and ( bottom left ) .the solid curves are the chisquare distribution with 10 dof .( bottom right ) histogram of the event - by - event difference in the two gof test statistics and . ,title="fig:",scaledwidth=49.0% ] for testing vs. , a suitable test statistic is the likelihood ratio formed from the probabilities of obtaining bin contents under each hypothesis : where the second equality follows from eqn .[ baker ] .figure [ lambdah0h1 ] shows the distribution of for events generated under and for events generated under , using the default parameter values in table [ baseline ] . for events generated under ( in blue ) and ( in red ) . ,scaledwidth=49.0% ] we would assert that these results obtained in the smeared space are the `` right answers '' for chisquare - like gof tests of and ( if desired ) , and in particular for the likelihood - ratio test of vs in fig . [ lambdah0h1 ] . given a particular observed data set ,such histograms can be used to calculate -values for each hypothesis , simply by integrating the appropriate tail of the histogram beyond the observed value of the relevant likelihood ratio . 
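the three goodness - of - fit statistics , and the simple - vs - simple statistic built from them , can be written compactly . the sketch below treats the n = 0 bins with explicit conventions ( a contribution 2 nu_i to the likelihood - ratio statistic , which follows from the formula , and a substituted unit denominator for neyman s chisquare , which is our own convention ) ; the demo counts are invented .

```python
import numpy as np

def baker_cousins(n, nu):
    """-2 ln lambda w.r.t. the saturated model for poisson bins:
    2 * sum( nu_i - n_i + n_i * ln(n_i / nu_i) ); n_i = 0 bins contribute 2*nu_i."""
    n = np.asarray(n, dtype=float)
    nu = np.asarray(nu, dtype=float)
    term = nu - n
    pos = n > 0
    term[pos] += n[pos] * np.log(n[pos] / nu[pos])
    return 2.0 * term.sum()

def pearson_chi2(n, nu):
    return np.sum((np.asarray(n, dtype=float) - nu) ** 2 / nu)

def neyman_chi2(n, nu):
    n = np.asarray(n, dtype=float)
    # unit denominator for empty bins is an assumed convention
    return np.sum((n - nu) ** 2 / np.where(n > 0, n, 1.0))

def delta_stat(n, nu0, nu1):
    """-2 ln [ p(n | h0) / p(n | h1) ], the simple-vs-simple test statistic:
    the saturated-model terms cancel in the difference."""
    return baker_cousins(n, nu0) - baker_cousins(n, nu1)

n_demo = np.array([120, 95, 87, 60, 41, 30, 22, 14, 9, 5])
nu_demo = np.array([118.0, 99.0, 82.0, 63.0, 44.0, 31.0, 20.0, 13.0, 8.0, 5.0])
print(baker_cousins(n_demo, nu_demo),
      pearson_chi2(n_demo, nu_demo),
      neyman_chi2(n_demo, nu_demo))
```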
in frequentist statistics , such -valuesare typically the basis for inference , especially for the simple - vs - simple hypothesis tests considered here .( of course there is a vast literature questioning the foundations of using -values , but in this note we assume that they can be useful , and are interested in comparing ways to compute them . )we compare , , , and the generalization of eqn .[ chisq ] including correlations in various contexts below . for poisson - distributed data , arguments in favor of when it is available are in ref . . in the usual gof test with ( uncorrelated )estimates having _ gaussian _ densities with standard deviations , one would commonly have although not usually mentioned , this is equivalent to a likelihood ratio test with respect to the saturated model , just as in the poisson case .the likelihood is where for one has predicted by , and for the saturated model , one has .thus and hence ( it is sometimes said loosely and incorrectly that for the gaussian model , , but clearly the ratio is necessary to cancel the normalization factor . ) there is also a well - known connection between the usual gaussian of eqn .[ chisq ] and pearson s chisquare in eqn .[ pearson ] : since the variance of a poisson distribution is equal to its mean , a naive derivation of eqn .[ pearson ] follows immediately from eqn .[ chisq ] .if one further approximates by the estimate , then one obtains neyman s chisquare in eqn .[ neyman ] .if one unfolds histograms and then compares the unfolded histograms to ( never smeared ) model predictions , even informally , then one is implicitly assuming that the comparison is scientifically meaningful .for this to be the case , we would assert that the results of comparisons should not differ materially from the `` right answers '' obtained above in the smeared space . herewe explore a few test cases . given the observed histogram contents , the likelihood function for the unknown follows from eqn .[ poisprob ] and leads to the maximum likelihood ( ml ) estimates , i.e. , one might then expect that the ml estimates of the unknown means can be obtained by substituting for in eqn . [ nurmu ] .if is a square matrix , as assumed here , then this yields these are indeed the ml estimates of as long as is invertible and the estimates are positive , which is generally the case in the toy problem studied here .the covariance matrix of the estimates in terms of and is derived in ref . : where .since the true values are presumed unknown , it is natural to substitute the estimates from eqn . [ nun ] , thus obtaining an estimate .consequences of this approximation are discussed below . in all cases ( even when matrix inversion fails ) , the ml estimates for can be found to desired precision by the iterative method variously known as expectation maximization ( em ) , lucy - richardson , or ( in hep ) the iterative method of dagostini . because the title of ref . 
mentions bayes theorem , in hep the em method is unfortunately ( and wrongly ) referred to as `` bayesian '' , even though it is a fully frequentist algorithm .as discussed by cowan , the ml estimates are unbiased , but the unbiasedness can come at a price of large variance that renders the unfolded histogram unintelligible to humans .therefore there is a vast literature on `` regularization methods '' that reduce the variance at the price of increased bias , such that the mean - squared - error ( the sum of the bias squared and the variance ) is ( one hopes ) reduced .the method of regularization popularized in hep by dagostini ( and studied for example by bohm and zech ) is simply to stop the iterative em method before it converges to the ml solution .the estimates then retain some memory of the starting point of the solution ( typically leading to a bias ) and have lower variance .the uncertainties ( covariance matrix ) also depend on when the iteration stops . our studies in this note focus on the ml and truncated iterative em solutions , anduse the em implementation ( unfortunately called roounfoldbayes ) in the roounfold suite of unfolding tools .this means that for the present studies , we are constrained by the policy in roounfold to use the `` truth '' of the training sample to be the starting point for the iterative em method ; thus we have not studied convergence starting from , for example , a uniform distribution .useful studies of the bias of estimates are thus not performed .other popular methods in hep include variants of tikhonov regularization , such as `` svd '' method advocated by hocker and kartvelishvili , and the implementation included in tunfold .the relationship of these methods to those in the professional statistics literature is discussed by kuusela .figure [ histos ] shows ( in addition to the solid histograms mentioned above ) three points with error bars plotted in each bin , calculated from a particular set of simulated data corresponding to one experiment .the three points are the bin contents when the sampled values of and are binned , followed by that bin s components of the set of unfolded estimates .figure [ matricesinvert](left ) shows the covariance matrix for the estimates obtained for the same particular simulated data set , unfolded by matrix inversion ( eqn .[ nurmuinv ] ) to obtain the ml estimates .figure [ matricesinvert ] ( right ) shows the corresponding correlation matrix with elements .figure [ matricesiterative ] shows the corresponding matrices obtained when unfolding by the iterative em method with default number of iterations . for the ml solution ,adjacent bins are negatively correlated , while for the em solution with default ( 4 ) iterations , adjacent bins are positively correlated due to the implicit regularization . for unfolded estimates , as provided by the ml estimates ( matrix inversion ) .( right ) the correlation matrix corresponding to , with elements . , title="fig:",scaledwidth=49.0% ] for unfolded estimates , as provided by the ml estimates ( matrix inversion ) .( right ) the correlation matrix corresponding to , with elements . ,title="fig:",scaledwidth=49.0% ] for unfolded estimates , as provided by the default iterative em method .( right ) the correlation matrix corresponding to , with elements . ,title="fig:",scaledwidth=49.0% ] for unfolded estimates , as provided by the default iterative em method .( right ) the correlation matrix corresponding to , with elements . 
,title="fig:",scaledwidth=49.0% ] figure [ converge ] shows an example of the convergence of iterative em unfolding to the ml solution for one simulated data set . on the left is the fractional difference between the em and ml solutions , for each of the ten histogram bins , as a function of the number of iterations , reaching the numerical precision of the calculation . on the rightis the covariance matrix after a large number of iterations , showing convergence to that obtained by matrix inversion in fig .[ matricesinvert](left ) ., title="fig:",scaledwidth=49.0% ] ( left ) ., title="fig:",scaledwidth=49.0% ]although the ml solution for may be difficult for a human to examine visually , if the covariance matrix is well enough behaved , then a computer can readily calculate a chisquare gof test statistic in the unfolded space by using the generalization of eqn .[ chisq ] , namely the usual formula for gof of gaussian measurements with correlations , if unfolding is performed by matrix inversion ( when equal to the ml solution ) , then substituting from eqn .[ nurmuinv ] , from eqn .[ nurmu ] , and from eqn .[ covmu ] , yields so for as assumed by cowan , this calculated in the unfolded space is equal to pearson s chisquare ( eqn .[ pearson ] ) in the smeared space .if however one substitutes for as in eqn .[ nun ] , then in the unfolded space is equal to neyman s chisquare in the smeared space !this is the case in the implementation of roounfold that we are using , as noted below in the figures . for events unfolded with the ml estimates ,figure [ nulgofunfoldedinvert ] ( top left ) shows the results of such a gof test with respect to the null hypothesis using same events used in fig .[ nullgofsmeared ] . as foreseen , the histogram is identical ( apart from numerical artifacts ) with the histogram of in fig .[ nullgofsmeared ] ( bottom left ) .figure [ nulgofunfoldedinvert ] ( top right ) show the event - by - event difference of and pearson s in the smeared space , and figure [ nulgofunfoldedinvert ] ( bottom ) is the difference with respect to in the smeared space .figure [ nulgofunfolded ] shows the same quantities calculated after unfolding using the iterative em method with default iterations . 
for these tests using ml unfolding ,the noticeable difference between the gof test in the smeared space with that in the unfolded space is directly traced to the fact that the test in the unfolded space is equivalent to in the smeared space , which is an inferior gof test compared to the likelihood ratio test statistic .it seems remarkable that , even though unfolding by matrix inversion would appear not to lose information , in practice the way the information is used ( linearizing the problem via expressing the result via a covariance matrix ) already results in some failures of the bottom - line test of gof .this is without any regularization or approximate em inversion .that tests for compatibility with in the unfolded space , for the same events generated under as those used in the smeared - space test of fig .[ nullgofsmeared ] .( top right ) for these events , histogram of the difference between in the unfolded space and in the smeared space .( bottom ) for these events , histogram of the difference between in the unfolded space and the gof test statistic in the smeared space ., title="fig:",scaledwidth=49.0% ] that tests for compatibility with in the unfolded space , for the same events generated under as those used in the smeared - space test of fig .[ nullgofsmeared ] .( top right ) for these events , histogram of the difference between in the unfolded space and in the smeared space .( bottom ) for these events , histogram of the difference between in the unfolded space and the gof test statistic in the smeared space ., title="fig:",scaledwidth=49.0% ] that tests for compatibility with in the unfolded space , for the same events generated under as those used in the smeared - space test of fig .[ nullgofsmeared ] .( top right ) for these events , histogram of the difference between in the unfolded space and in the smeared space .( bottom ) for these events , histogram of the difference between in the unfolded space and the gof test statistic in the smeared space . , title="fig:",scaledwidth=49.0% ] , here calculated after unfolding using the iterative em method with default ( four ) iterations . ,title="fig:",scaledwidth=49.0% ] , here calculated after unfolding using the iterative em method with default ( four ) iterations . ,title="fig:",scaledwidth=49.0% ] , here calculated after unfolding using the iterative em method with default ( four ) iterations ., title="fig:",scaledwidth=49.0% ] for the histogram of each simulated experiment , the gof statistic is calculated with respect to the prediction of and also with respect to the prediction of .the difference of these two values , , is then a test statistic for testing vs. , analogous to the test statistic .figure [ delchi ] shows , for the same events as those used in fig .[ lambdah0h1 ] , histograms of the test statistic in the unfolded space for events generated under and under , with calculated using and using . for the default problem studied here ,the dependence on is not large .thus unless otherwise specified , all other plots use calculated under . , histogram of the test statistic in the unfolded space , for events generated under ( in blue ) and ( in red ) , with calculated using .( right ) for the same events , histograms of the test statistic in the unfolded space , with calculated using . 
,title="fig:",scaledwidth=49.0% ] , histogram of the test statistic in the unfolded space , for events generated under ( in blue ) and ( in red ) , with calculated using .( right ) for the same events , histograms of the test statistic in the unfolded space , with calculated using . , title="fig:",scaledwidth=49.0% ] figure [ deldel ] shows , for the events in figs .[ lambdah0h1 ] and in [ delchi ] , histograms of the event - by - event difference of and .the red curves correspond to events generated under , while the blue curves are for events generated under .the unfolding method is ml on the left and iterative em on the right .this is an example of a _ bottom - line test _ : does one obtain the same answers in the smeared and unfolded spaces ? there are differences apparent with both unfolding techniques .since the events generated under both and are shifted in the same direction , the full implications are not immediately clear .thus we turn to roc curves or equivalent curves from neyman - pearson hypothesis testing . and in [ delchi](left ) , histogram of the event - by - event difference of and . in the left histogram , ml unfoldingis used , while in the right histogram , iterative em unfolding is used . ,title="fig:",scaledwidth=49.0% ] and in [ delchi](left ) , histogram of the event - by - event difference of and . in the left histogram , ml unfoldingis used , while in the right histogram , iterative em unfolding is used . ,title="fig:",scaledwidth=49.0% ] we can investigate the effect of the differences apparent in fig . [ deldel ] by using the language of neyman - pearson hypothesis testing , in which one rejects if the value of the test statistic ( in the smeared space , or in the unfolded space ) is above some critical value .the type i error probability is the probability of rejecting when it is true , also known as the `` false positive rate '' .the type ii error probability is the probability of accepting ( not rejecting ) when it is false .the quantity is the _ power _ of the test , also known as the `` true positive rate '' .the quantities and thus follow from the cumulative distribution functions ( cdfs ) of histograms of the test statistics . in classification problems outside hepis it common to make the roc curve of true positive rate vs. the false positive rate , as shown in fig .figure [ alphabeta ] shows the same information in a plot of vs. , i.e. , with the vertical coordinate inverted compared to the roc curve .figure [ alphabetaloglog ] is the same plot as fig .[ alphabeta ] , with both axes having logarithmic scale .the result of this `` bottom line test '' does not appear to be dramatic in this first example , and appear to be dominated by the difference between the poisson - based and already present in the ml unfolding solution , rather than by the additional differences caused by truncating the em solution .unfortunately no general conclusion can be drawn from this observation , since as mentioned above the em unfolding used here starts from the true distribution as the first estimate .it is of course necessary to study other initial estimates . 
( figure caption : for the events in figs . [ lambdah0h1 ] and [ delchi ] ( left ) , roc curves for classification performed in the smeared space ( blue curve ) and in the unsmeared space ( red curve ) . ( left ) unfolding by ml , and ( right ) unfolding by iterative em . )

( figure caption : for the events in figs . [ lambdah0h1 ] and [ delchi ] ( left ) , plots of vs. , for classification performed in the smeared space ( blue curve ) and in the unsmeared space ( red curve ) . ( left ) unfolding by ml , and ( right ) unfolding by iterative em . )

( figure caption : vs. as in fig . [ alphabeta ] , here with logarithmic scale on both axes . )

with the above plots forming a baseline , we can ask how some of the above plots vary as we change the parameters in table [ baseline ] . figure [ sigmaparam ] shows , as a function of the gaussian smearing parameter , the variation of the gof results shown for in 1d histograms in figs . [ nulgofunfoldedinvert ] ( top left ) and [ nulgofunfoldedinvert ] ( bottom ) . the events are generated under .

( figure caption : variation with the gaussian used in smearing ( vertical axis ) . the horizontal axes are the same as those in the 1d histograms in figs . [ nulgofunfoldedinvert ] ( top left ) and [ nulgofunfoldedinvert ] ( bottom ) , namely in the unfolded space , and the difference with respect to in the smeared space , for gof tests with respect to using events generated under . )

figure [ deldelsigma ] shows the variation of the 1d histogram in fig . [ deldel ] with the gaussian used in smearing , for both ml and em unfolding .

( figure caption : ( left ) variation with the gaussian used in smearing ( vertical axis ) of the 1d histogram in fig . [ deldel ] of the event - by - event difference of and . ( right ) the same quantity for iterative em unfolding . )

figures [ deldelb ] and [ deldelbiter ] show , for ml and em unfolding respectively , the result of the bottom - line test of fig . [ deldel ] as a function of the amplitude of the extra term in in eqn . [ altp ] .
( figure caption : the bottom - line test of fig . [ deldel ] as a function of the amplitude of the extra term in in eqn . [ altp ] , for ( left ) derived from and ( right ) derived from ; for ml unfolding . )

( figure caption : the same , for iterative em unfolding . )

figure [ deldelnummeas ] shows , for ml and em unfolding , the result of the bottom - line test of fig . [ deldel ] as a function of the mean number of events in the histogram of .

( figure caption : the bottom - line test as a function of the number of events in the histogram of , for ( left ) ml unfolding and ( right ) iterative em unfolding . )

figure [ deldelreg ] shows , for iterative em unfolding , the result of the bottom - line test of fig . [ deldel ] as a function of the number of iterations .

( figure caption : the bottom - line test as a function of the number of iterations , with ( left ) linear vertical scale and ( right ) logarithmic vertical scale . )

this note illustrates in detail some of the differences that can arise with respect to the smeared space when testing hypotheses in the unfolded space . as the note focuses on a particularly simple hypothesis test , and looks only at the ml and em solutions , no general conclusions can be drawn , apart from claiming the potential usefulness of the `` bottom - line tests '' . even within the limitations of the roounfold software used here ( in particular that the initial estimate for iterating is the presumed truth ) , we see indications of the dangers of testing hypotheses after unfolding . perhaps the most interesting thing to note thus far is that unfolding by matrix inversion ( and hence no regularization ) yields , in the implementation studied here , a generalized test statistic that is identical to in the smeared space , which is intrinsically inferior to . the potentially more important issue of bias due to regularization affecting the bottom - line test remains to be explored . such issues should be kept in mind , even in informal comparisons of unfolded data to predictions from theory .
for quantitative comparison ( including the presumed use of unfolded results to evaluate predictions in the future from theory ), we believe that extreme caution should be exercised , including performing the bottom - line - tests with various departures from expectations .this applies to both gof tests of a single hypothesis , and comparisons of multiple hypotheses .more work is needed in order to gain experience regarding what sort of unfolding problems and unfolding methods yield results that give reasonable performance under the bottom - line - test , and which cases lead to bad failures .as often suggested , reporting the response matrix along with the smeared data can facilitate comparisons with future theories in the folded space , in spite of the dependence of on the true pdfs .we are grateful to pengcheng pan , yan ru pei , ni zhang , and renyuan zhang for assistance in the early stages of this study .rc thanks the cms statistics committee and gnter zech for helpful discussions regarding the bottom - line test .this work was partially supported by the u.s .department of energy under award number de sc0009937 .louis lyons , `` unfolding : introduction , '' in proceedings of the phystat 2011 workshop on statistical issues related to discovery claims in search experiments and unfolding , edited by h.b .prosper and l. lyons , ( cern , geneva , switzerland , 17 - 20 january 2011 ) + https://cds.cern.ch/record/1306523 ( see end of section 5 . )olive et al .( particle data group ) , chin .c * 38 * 090001 ( 2014 ) and 2015 update .the likelihood - ratio gof test with saturated model is eqn .the test for gaussian data with correlations is eqn .pearson s is eqn .38.48 .mikael kuusela , `` introduction to unfolding in high energy physics , '' lecture at advanced scientific computing workshop , eth zurich ( july 15 , 2014 ) + http://mkuusela.web.cern.ch/mkuusela/eth_workshop_july_2014/slides.pdf t. adye , `` unfolding algorithms and tests using roounfold , '' , in proceedings of the phystat 2011 workshop on statistical issues related to discovery claims in search experiments and unfolding , edited by h.b . prosper and l. lyons , ( cern , geneva , switzerland , 17 - 20 january 2011 ) https://cds.cern.ch/record/1306523 , p. 313 .we used version 1.1.1 from + http://hepunx.rl.ac.uk/~adye/software/unfold/roounfold.html , accessed dec .
in many analyses in high energy physics , attempts are made to remove the effects of detector smearing in data by techniques referred to as `` unfolding '' histograms , thus obtaining estimates of the true values of histogram bin contents . such unfolded histograms are then compared to theoretical predictions , either to judge the goodness of fit of a theory , or to compare the abilities of two or more theories to describe the data . when doing this , even informally , one is testing hypotheses . however , a more fundamentally sound way to test hypotheses is to smear the theoretical predictions by simulating detector response and then comparing to the data without unfolding ; this is also frequently done in high energy physics , particularly in searches for new physics . one can thus ask : to what extent does hypothesis testing after unfolding data materially reproduce the results obtained from testing by smearing theoretical predictions ? we argue that this `` bottom - line - test '' of unfolding methods should be studied more commonly , in addition to common practices of examining variance and bias of estimates of the true contents of histogram bins . we illustrate bottom - line - tests in a simple toy problem with two hypotheses .
understanding and characterizing general features of the dynamics of open quantum systems is of great importance to physics , chemistry , and biology . the non - markovian character is one of the most central aspects of an open quantum process , and has attracted increasing attention . markovian dynamics of quantum systems is described by a quantum dynamical semigroup , and is often taken as an approximation of realistic circumstances under some very strict assumptions . meanwhile , exact master equations , which describe the non - markovian dynamics , are complicated . based on the infinitesimal divisibility in terms of quantum dynamical semigroups , wolf _ et al . _ provided a model - independent way to study non - markovian features . later , in the intuitive picture of a backward information flow leading to an increase of distinguishability under intermediate dynamical maps , breuer , laine , and piilo ( blp ) proposed a measure of the degree of non - markovian behavior based on the monotonicity of the trace distance under quantum channels , as shown in fig . [ fig : sketch ] . the blp non - markovianity has been widely studied , and applied in various models .

( figure caption : ( color online ) sketch of the information flow picture for non - markovianity . according to this scenario , the loss of distinguishability of the system s states indicates information flow from the system to the reservoir . if the dynamics is markovian , the information flow is always outward , represented by the green thick arrow . non - markovian behavior occurs when there is inward information flow , represented by the orange thin arrow , bringing some distinguishability back to the system . )

unlike for classical stochastic processes , the non - markovian criteria for quantum processes are non - unique , and even controversial . first , the non - markovian criteria from infinitesimal divisibility and from backward information flow are not equivalent . second , several other non - markovianity measures , based on different mechanisms such as the monotonicity of correlations under local quantum channels , have been introduced . third , even in the framework of backward information flow , the trace distance is not the unique monotone distance for the distinguishability between quantum states . other monotone distances on the space of density operators can be found in ref . , and the statistical distance is another widely used one . different distances should not be expected to give the same non - markovian criteria . the inconsistency among the various non - markovianity measures reflects different dynamical properties . in this paper , we show that the blp non - markovianity can not reveal the infinitesimal non - divisibility of quantum processes caused by the non - unital part of the dynamics . besides non - markovianity , `` non - unitality '' is another important dynamical property , which is necessary for the purity to increase under quantum channels and for the creation of quantum discord in two - qubit systems under local quantum channels . in the same spirit as the blp non - markovianity , we define a measure of the non - unitality .
as blpnon - markovianity is the most widely used measure on non - markovianity , we also provide a measure on the non - unital non - markovianity , which can be conveniently used as a supplement to the blp measure , when the quantum process is non - unital .we also give an example to demonstrate an extreme case , where the blp non - markovianity vanishes while the quantum process is not infinitesimal divisible .this paper is organized as follows . in sec .[ review ] , we give a brief review on the representation of density operators and quantum channels with hermitian orthonormal operator basis , and various measures on non - markovianity . in sec .[ sec : non - unital - nm ] , we investigate the non - unitality and the non - unital non - markovianity and give the corresponding quantitative measures respectively . in sec .[ sec : example ] , we apply the non - unital non - markovianity measure on a family of quantum processes , which are constructed from the generalized amplitude damping channels . section [ sec : conclusion ] is the conclusion .the states of a quantum system can be described by the density operator , which is positive semidefinite and of trace one .quantum channels , or quantum operations , are completely positive and trace - preserving ( cpt ) maps from density operators to density operators , and can be represented by kraus operators , choi - jamiokowski matrices , or transfer matrices . in this work ,we use the hermitian operator basis to express operators and represent quantum channels .let be a complete set of hermitian and orthonormal operators on complex space , i.e. , satisfies and .any operator on can be express by a column vector through with .every is real if is hermitian . in the meantime, any quantum channel can be represented by ] for any local quantum channel , where is an appropriate measure for the correlations in the bipartite states , including entanglement entropy and the mutual information .the corresponding measures on non - markovianity are given and discussed in refs .the non - markovianity measure is available to capture the non - markovian behavior of the unital aspect of the dynamics .but for the non - unital aspect , it is not capable . to show this, we use the hermitian orthonormal operator basis to express states and quantum channels . utilizing eq .( [ eq : rho ] ) , the trace distance between two states and is given by \cdot\bm{\lambda}\big|.\end{aligned}\ ] ] therefore , for the two evolving states , we get \cdot\bm{\lambda}\big| , \label{eq : dis}\ ] ] where , are initial states of the system . from this equationone can see that the trace distance between any two evolved states is irrelevant to the non - unital part of the time evolution . then if there are two quantum channels , whose affine maps are and , respectively , the characteristic of trace distance between the evolving states from any two initial states can not distinguish these two channels .more importantly , may cause the non - divisibility of the quantum process , and this can not be revealed by . on the other hand , the non - unital part has its own physical meaning : is necessary for the increasing of the purity .in other words , besides the non - markovian feature , the non - unitality is another kind of general feature of quantum processes . in analogy to the definition of blp non - markovianity , we defined the following measure on the degree of the non - unitality of a quantum process : } { \mathrm{d}t } \right| \mathrm{d}t,\ ] ] where is the initial state . 
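for a single qubit , the insensitivity of the trace distance to the non - unital ( translation ) part of the affine map is easy to see numerically . in the sketch below the contraction matrix m and the shift vector t are arbitrary illustrative choices rather than any particular physical channel :

```python
import numpy as np

# pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def rho_from_bloch(r):
    """density matrix of a qubit with bloch vector r (|r| <= 1)."""
    return 0.5 * (np.eye(2) + sum(ri * s for ri, s in zip(r, paulis)))

def trace_distance(rho1, rho2):
    """half the trace norm of the difference of two density matrices."""
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho1 - rho2)))

# affine map on bloch vectors: r -> m @ r + t  (m chosen contractive enough
# that all images stay inside the bloch ball)
m = 0.4 * np.array([[0.8, 0.1, 0.0],
                    [0.0, 0.7, 0.2],
                    [0.1, 0.0, 0.9]])
r1 = np.array([0.3, 0.2, 0.5])
r2 = np.array([-0.4, 0.1, -0.2])

for t in [np.zeros(3), np.array([0.1, -0.05, 0.2])]:   # unital vs. non-unital shift
    d = trace_distance(rho_from_bloch(m @ r1 + t), rho_from_bloch(m @ r2 + t))
    print(t, d)   # the distance is the same for both shifts: the translation drops out
```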
obviously , vanishes if .since the non - unital aspect of the dynamics , which is not revealed by the trace distance , has its own speciality , we aim to measure the effect of non - unitality on non - markovian behavior .however , a perfect separation of the non - unital aspect from the total non - markovianity may be infeasible .therefore we require a weak version for measuring non - unital non - markovianity to satisfy the following three conditions : ( i ) vanishes if is infinitesimal divisible , ( ii ) vanishes if is unital , ( iii ) should be relevant to .based on these conditions , we introduce the following measure where with is the set of the trajectory states which evolve from the maximally mixed state , and ,\ ] ] with an appropriate distance which will be discussed below .the first condition is guaranteed if we require that is monotone under any cpt maps , i.e. , \leq d(\rho_{1},\rho_{2}) ] , or its symmetric version , is another qualified candidate for the distance . noting that when the support of is not within the support of , namely , , will be infinite , so in such cases , quantum relative entropy will bring singularity to the measure of non - markovianity .also , hellinger distance is qualified .although all of these distances are monotone under cpt maps , they may have different characteristics in the same dynamics , see ref . .the difference between non - unital non - markovian measure defined by eq .( [ eq : definition_nun ] ) and the blp - type measures , including those which use other alternative distances , is the restriction on the pairs of initial states . comparing with the blp - type measures relying on any pair of initial states , the non - unital non - markovianity measure only relies on the pairs consisting of the maximally mixed state and its trajectory states . on one hand, this restriction makes the non - unital non - markovianity measure vanish when the quantum processes are unital , no matter they are markovian or non - markovian ; on the other hand , this restriction reflects that non - unital non - markovianity measure reveals only a part of information concerning the non - markovian behaviors .to illustrate the non - unital non - markovian behavior , we give an example in this section .we use the generalized amplitude damping channel ( gadc ) as a prototype to construct a quantum process .the gadc can be described by with the kraus operators given by where and are real parameters .note that for any ] , the corresponding is a quantum channel .for a two - level system , the hermitian orthonormal operator basis can be chosen as , where is the vector of pauli matrices . with the decomposition in eq .( [ eq : rho ] ) , the affine map for the bloch vector is given by , where the gadc is unital if and only if or .when , , the map is identity .a quantum process can be constructed by making the parameter and to be dependent on time . for simplicity , we take and , where is a constant real number . this is a legitimate quantum process , because is a quantum channel for every , and is the identity map .first , let us consider the for this quantum process .for any two initial states and , we have the trace distance & = \frac{1}{2}\mathrm{tr}\left|m(\mathcal{e}_{t})[\mathbf{r}(\rho_{1})-\mathbf{r}(\rho_{2})]\cdot\frac{\bm{\sigma}}{\sqrt{2}}\right|\nonumber \\ & = \frac{1}{\sqrt{2}}\left|m(\mathcal{e}_{t})[\mathbf{r}(\rho_{1})-\mathbf{r}(\rho_{2})]\right|,\end{aligned}\ ] ] where is the euclidean length of the vector , and we used the equality for pauli matrices . 
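before carrying on with the explicit time dependence , a small numerical sketch of the generalized amplitude damping channel just introduced . the parameter names gamma ( damping strength ) and p ( mixing weight ) are assumptions , since the corresponding symbols are not rendered in the text above ; the code only checks trace preservation and illustrates the non - unitality :

```python
import numpy as np

def gadc_kraus(gamma, p):
    """standard kraus operators of the generalized amplitude damping channel."""
    k0 = np.sqrt(p) * np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
    k1 = np.sqrt(p) * np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    k2 = np.sqrt(1.0 - p) * np.array([[np.sqrt(1.0 - gamma), 0.0], [0.0, 1.0]])
    k3 = np.sqrt(1.0 - p) * np.array([[0.0, 0.0], [np.sqrt(gamma), 0.0]])
    return [k0, k1, k2, k3]

def apply_channel(kraus, rho):
    return sum(k @ rho @ k.conj().T for k in kraus)

gamma, p = 0.3, 0.7
kraus = gadc_kraus(gamma, p)

# trace preservation: sum_i k_i^dagger k_i equals the identity
print(np.allclose(sum(k.conj().T @ k for k in kraus), np.eye(2)))   # True

# non-unitality: the maximally mixed state is not a fixed point unless
# p = 1/2 or gamma = 0 (when gamma = 0 the channel is the identity map)
rho_mix = 0.5 * np.eye(2)
print(apply_channel(kraus, rho_mix))   # diagonal entries differ from 1/2
```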
denoting by , we get =\frac{e^{-t/2}}{\sqrt{2}}\sqrt{x^{2}+y^{2}+e^{-t}z^{2}},\label{eq : gadc_tr_distance}\ ] ] which implies \leq0 ] . with the expressions and , it is =\frac{e^{-t}}{2}\left|\cos2\omega\tau\right|(1-e^{-\tau}).\label{eq : gadc_nonunital}\ ] ] in fig .[ fig : db_dtr](a ) , we can see that while the trace distance between the evolving states and monotonously decreases with the time , the bures distance increases during some intermediate time intervals . from eq .( [ eq : gadc_nonunital ] ) , one can see although $ ] depends on , it does not depend on . actually , from eq .( [ eq : gadc_tr_distance ] ) one could find that for any two initial states , the trace distance between the evolving states is independent on . in this sense ,the blp non - markovianity treats a family of quantum processes , which only differ with , as the same one .meanwhile , reveals the effects of on the infinitesimal non - divisibility and is capable of measuring it . in order to compare with bhp measure, we also calculate the defined by eq .( [ eq : gt ] ) .we get \ ] ] with the mediate dynamical maps with infinitesimal are not completely positive when . from fig .[ fig : db_dtr](b ) , we can see that the increasing of the bures distance occurs in the regimes where , which coincides with the monotonicity of bures distance under cpt maps .in conclusion , we have shown that the measure for non - markovianity based on trace distance can not reveal the infinitesimal non - divisibility caused by the non - unital part of the dynamics . in order to reflect effects of the non - unitality, we have constructed a measure on the non - unital non - markovianity , and also defined a measure on the non - unitality , in the same spirit as blp non - markovianity measure . like non - markovianity, the non - unitality is another interesting feature of the quantum dynamics . with the development of quantum technologies , we need novel theoretical approaches for open quantum systems .it is expected that some quantum information methods would help us to understand some generic features of quantum dynamics .we hope this work may draw attention to study more dynamical properties from the informational perspective .this work was supported by nfrpc through grant no .2012cb921602 , the nsfc through grants no .11025527 and no . 10935010 andnational research foundation and ministry of education , singapore ( grant no .wbs : r-710 - 000 - 008 - 271 ) .m. m. wolf , quantum channels & operations guided tour , http://www-m5.ma.tum.de/ foswiki/ pub/ m5 /allgemeines/michaelwolf/ qchannellecture.pdf[http://www-m5.ma.tum.de/ foswiki/ pub/ m5 /allgemeines/ michaelwolf/ qchannellecture.pdf ]
the trace distance is able to capture the dynamical information of the unital aspect of a quantum process . however , it can not reflect the non - unital part . consequently , the non - divisibility originating from the non - unital aspect can not be revealed by the corresponding measure based on the trace distance . we provide a measure of the non - unital non - markovianity of quantum processes , which is a supplement to the breuer - laine - piilo ( blp ) non - markovianity measure . a measure of the degree of non - unitality is also provided .
the classification problem is one of the most important tasks in time series data mining . the well - known 1-nearest neighbor ( 1-nn ) classifier with the dynamic time warping ( dtw ) distance is one of the best classifiers for time series data , among other approaches such as the support vector machine ( svm ) , the artificial neural network ( ann ) , and the decision tree . for 1-nn classification , selecting an appropriate distance measure is crucial ; however , the selection criterion still depends largely on the nature of the data itself , especially for time series data . though the euclidean distance is commonly used to measure the dissimilarity between two time series , it has been shown that the dtw distance is more appropriate and produces more accurate results . the sakoe - chiba band ( s - c band ) was originally proposed to speed up the dtw calculation and was later introduced as a dtw global constraint . the s - c band was first implemented for the speech community , and the width of the global constraint was fixed to be 10% of the time series length . however , recent work reveals that the classification accuracy depends greatly on this global constraint ; the size of the constraint depends on the properties of the data at hand . to determine a suitable size , all possible widths of the global constraint are tested , and the band with the maximum training accuracy is selected . the ratanamahatana - keogh band ( r - k band ) has been introduced to generalize the global constraint model , represented by a one - dimensional array . the size of the array and the maximum constraint value are limited to the length of the time series . the main feature of the r - k band is its multiple bands , where each band represents one class of data . unlike the single s - c band , these multiple r - k bands can each be adjusted as needed according to the warping paths of their own classes . although the r - k band allows great flexibility to adjust the global constraint , a learning algorithm is needed to discover the best set of r - k bands . in the original work on the r - k band , a hill climbing search algorithm with two heuristic functions ( accuracy and distance metrics ) was proposed . the search algorithm climbs through the space by trying to increase / decrease specific parts of the bands until terminal conditions are met . however , this learning algorithm still suffers from an overfitting phenomenon since an accuracy metric is used as a heuristic function to guide the search . to solve this problem , we propose two new learning algorithms , i.e. , band boundary extraction and iterative learning . the band boundary extraction method first obtains the maximum , mean , and mode of the warping path positions on the dtw distance matrix , and in the iterative learning , the band s structure is adjusted in each round of the iteration according to a silhouette index . we run both algorithms and select the band that gives the better result . in the prediction step , the 1-nn classifier using the dynamic time warping distance with this discovered band is used to classify unlabeled data . note that a lower bound , lb_keogh , is also used to speed up our 1-nn classification . the rest of this paper is organized as follows . section 2 gives some important background for our proposed work . in section 3 , we introduce our approach , the two novel learning algorithms . section 4 contains an experimental evaluation including some examples of each dataset . finally , we conclude this paper in section 5 . our novel learning algorithms are based on four major fundamental concepts , i.e.
, dynamic time warping ( dtw ) distance , sakoe - chiba band ( s - c band ) , ratanamahatana - keogh band ( r - k band ) , and silhouette index , which are briefly described in the following sections .dynamic time warping ( dtw ) distance is a well - known similarity measure based on shape .it uses a dynamic programming technique to find all possible warping paths , and selects the one with the minimum distance between two time series . to calculate the distance, it first creates a distance matrix , where each element in the matrix is a cumulative distance of the minimum of three surrounding neighbors .suppose we have two time series , a sequence of length ( ) and a sequence of length ( ) .first , we create an -by- matrix , where every ( ) element of the matrix is the cumulative distance of the distance at ( ) and the minimum of three neighboring elements , where and .we can define the ( ) element , , of the matrix as : where is the squared distance of and , and is the summation of and the the minimum cumulative distance of three elements surrounding the ( ) element .then , to find an optimal path , we choose the path that yields a minimum cumulative distance at ( ) , which is defined as : where is a set of all possible warping paths , is ( ) at element of a warping path , and is the length of the warping path . in reality , dtw may not give the best mapping according to our need because it will try its best to find the minimum distance .it may generate the unwanted path .for example , in figure [ flo : dtw1 ] , without global constraint , dtw will find its optimal mapping between the two time series .however , in many cases , this is probably not what we intend , when the two time series are expected to be of different classes .we can resolve this problem by limiting the permissible warping paths using a global constraint .two well - known global constraints , sakoe - chiba band and itakura parallelogram , and a recent representation , ratanamahatana - keogh band ( r - k band ) , have been proposed , figure [ flo : dtw2 ] shows an example for each type of the constraints . [cols="^,^ " , ] [ flo : result ]in this work , we propose a new efficient time series classification algorithm based on 1-nearest neighbor classification using the dynamic time warping distance with multi r - k bands as a global constraint .to select the best r - k band , we use our two proposed learning algorithms , i.e. , band boundary extraction algorithm and iterative learning .silhouette index is used as a heuristic function for selecting the band that yields the best prediction accuracy .the lb_keogh lower bound is also used in data prediction step to speed up the computation .we would like to thank the scientific parallel computer engineering ( space ) laboratory , chulalongkorn university for providing a cluster we have used in this contest . 1 fumitada itakura .minimum prediction residual principle applied to speech recognition ., 23(1):6772 , 1975 .eamonn j. keogh and chotirat ann ratanamahatana .exact indexing of dynamic time warping ., 7(3):358386 , 2005 .alex nanopoulos , rob alcock , and yannis manolopoulos .feature - based classification of time - series data ., pages 4961 , 2001 .chotirat ann ratanamahatana and eamonn j. keogh . making time - series classification more accurate using learned constraints . in _ proceedings of the fourth siam international conference on data mining ( sdm 2004 ) _ , pages 1122 , lake buena vista ,fl , usa , april 22 - 24 2004 .chotirat ann ratanamahatana and eamonn j. 
keogh .three myths about dynamic time warping data mining . in _ proceedings of 2005siam international data mining conference ( sdm 2005 ) _ ,pages 506510 , newport beach , cl , usa , april 21 - 23 2005 .juan jos rodrguez and carlos j. alonso .interval and dynamic time warping - based decision trees . in _ proceedings of the 2004 acm symposium on applied computing ( sac 2004 ) _ , pages 548552 , nicosia , cyprus , march 14 - 17 2004 .peter rousseeuw .silhouettes : a graphical aid to the interpretation and validation of cluster analysis . , 20(1):5365 , 1987 .hiroaki sakoe and seibi chiba .dynamic programming algorithm optimization for spoken word recognition ., 26(1):4349 , 1978 .yi wu and edward y. chang .distance - function design and fusion for sequence data . in _ proceedings of the 2004 acmcikm international conference on information and knowledge management ( cikm 2004 ) _ , pages 324333 , washington , dc , usa , november 8 - 13 2004 .
the 1-nearest neighbor classifier with the dynamic time warping ( dtw ) distance is one of the most effective classifiers in the time series domain . since the global constraint was introduced in the speech community , many global constraint models have been proposed , including the sakoe - chiba ( s - c ) band , the itakura parallelogram , and the ratanamahatana - keogh ( r - k ) band . the r - k band is a general global constraint model that can effectively represent any global constraint of arbitrary shape and size . however , we need a good learning algorithm to discover the most suitable set of r - k bands , and the current r - k band learning algorithm still suffers from an overfitting phenomenon . in this paper , we propose two new learning algorithms , i.e. , a band boundary extraction algorithm and an iterative learning algorithm . the band boundary extraction is calculated from the bound of all possible warping paths in each class , and the iterative learning is adapted from the original r - k band learning . we also use a silhouette index , a well - known clustering validation technique , as a heuristic function , and the lower bound function , lb_keogh , to enhance the prediction speed . twenty datasets from the workshop and challenge on time series classification , held in conjunction with sigkdd 2007 , are used to evaluate our approach .
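as a concrete illustration of the banded dtw recursion described in the background section of this paper , the sketch below implements the cumulative - distance recursion with a sakoe - chiba constraint . the band half - width r and the toy sequences are illustrative assumptions ; an actual classifier would wrap this distance in a 1-nn loop , typically with the lb_keogh lower bound as a pre - filter :

```python
import numpy as np

def dtw_distance(a, b, r=None):
    """dtw distance with an optional sakoe-chiba band of half-width r (in samples)."""
    n, m = len(a), len(b)
    if r is None:
        r = max(n, m)                         # no constraint
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        lo, hi = max(1, i - r), min(m, i + r)
        for j in range(lo, hi + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # cumulative distance: local cost plus the best of the three neighbors
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return np.sqrt(d[n, m])

x = np.sin(np.linspace(0.0, 2.0 * np.pi, 60))
y = np.sin(np.linspace(0.0, 2.0 * np.pi, 60) + 0.4)   # phase-shifted copy
print(dtw_distance(x, y))          # unconstrained warping
print(dtw_distance(x, y, r=6))     # roughly a 10% band, as in the s-c band
```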
given competing mathematical models to describe a process , we wish to know whether our data is compatible with the candidate models . often comparing models requires optimization and fitting time course data to estimate parameter values and then applying an information criterion to select a ` best ' model .however sometimes it is not feasible to estimate the value of these unknown parameters ( e.g. large parameter space , nonlinear objective function , nonidentifiable etc ) .the parameter problem has motivated the growth of fields that embrace a parameter - free flavour such as chemical reaction network theory and stoichiometric theory .however many of these approaches are limited to comparing the behavior of models at steady - state .inspired by techniques commonly used in applied algebraic geometry and algebraic statistics , methods for discriminating between models without estimating parameters has been developed for steady - state data , applied to models in wnt signaling , and then generalized to only include one data point .briefly , these approaches characterize a model in only observable variables using techniques from computational algebraic geometry and tests whether the steady - state data are coplanar with this new characterization of the model , called a _ steady - state invariant _ .notably the method does nt require parameter estimation , and also includes a statistical cut - off for model compatibility with noisy data . here, we present a method for comparing models with _ time course data _ via computing a _differential invariant_. we consider models of the form and where is a known input into the system , , is a known output ( measurement ) from the system , , are species variables , , is the unknown parameter vector , and the functions are rational functions of their arguments .the dynamics of the model can be observed in terms of a time series where is the input at discrete points and is the output . in this setting , we aim to characterize our ode models by eliminating variables we can not measure using differential elimination from differential algebra . from the elimination , we form a differential invariant , where the differential monomials have coefficients that are functions of the parameters .we obtain a system of equations in 0,1 , and higher order derivatives and we write this implicit system of equations as , , and call these the input - output equations our _ differential invariants_. specifically , we have equations of the form : where are rational functions of the parameters and are differential monomials , i.e. monomials in .we will see shortly that in the linear case , is a linear differential equation . for non - linear models , is nonlinear .if we substitute into the differential invariant available data into the observable monomials for each of the time points , we can form a linear system of equations ( each row is a different time point ) .then we ask : does there exist a such that .if of course we are guaranteed a zero trivial solution and the non - trivial case can be determined via a rank test ( i.e. , svd ) and can perform the statistical criterion developed in with the bound improved in , but for there may be no solutions .thus , we must check if the linear system of equations is consistent , i.e. 
has one or infinitely many solutions .assuming measurement noise is known , we derive a statistical cut - off for when the model is incompatible with the data .however suppose that one does not have data points for the higher order derivative data , then these need to be estimated .we present a method using gaussian process regression ( gpr ) to estimate the time course data using a gpr . since the derivative of a gp is also gp , so we can estimate the higher order derivative of the data as well as the measurement noise introduced and estimate the error introduced during the gpr ( so we can discard points with too much gpr estimation error ) .this enables us to input derivative data into the differential invariant and test model compatibility using the solvability test with the statistical cut - off we present .we showcase our method throughout with examples from linear and nonlinear models .we now give some background on differential algebra since a crucial step in our algorithm is to perform differential elimination to obtain equations purely in terms of input variables , output variables , and parameters .for this reason , we will only give background on the ideas from differential algebra required to understand the differential elimination process .for a more detailed description of differential algebra and the algorithms listed below , see . inwhat follows , we assume the reader is familiar with concepts such as _ rings _ and _ ideals _ , which are covered in great detail in .a ring is said to be a _ differential ring _if there is a derivative defined on and is closed under differentiation .differential ideal _ is an ideal which is closed under differentiation .a useful description of a differential ideal is called a _differential characteristic set _ , which is a finite description of a possibly infinite set of differential polynomials .we give the technical definition from : let be a set of differential polynomials , not necessarily finite .if is an auto - reduced set , such that no lower ranked auto - reduced set can be formed in , then is called a _differential characteristic set_. a well - known fact in differential algebra is that differential ideals need not be finitely generated .however , a radical differential ideal is finitely generated by the _ ritt - raudenbush basis theorem _ .this result gives rise to ritt s pseudodivision algorithm ( see below ) , allowing us to compute the differential characteristic set of a radical differential ideal .we now describe various methods to find a differential characteristic set and other related notions , and we describe why they are relevant to our problem , namely , they can be used to find the _ input - output equations_. 
consider an ode system of the form and for with and rational functions of their arguments .let our differential ideal be generated by the differential polynomials obtained by subtracting the right - hand - side from the ode system to obtain and for .then a differential characteristic set is of the form : the first terms of the differential characteristic set , , are those terms independent of the state variables and when set to zero form the _ input - output equations _ : specifically , the input - output equations are polynomial equations in the variables with rational coefficients in the parameter vector .note that the differential characteristic set is in general non - unique , but the coefficients of the input - output equations can be fixed uniquely by normalizing the equations to make them monic .we now discuss several methods to find the input - output equations . the first method ( ritt s pseudodivision algorithm )can be used to find a differential characteristic set for a radical differential ideal .the second method ( rosenfeldgroebner ) gives a representation of the radical of the differential ideal as an intersection of regular differential ideals and can also be used to find a differential characteristic set under certain conditions .finally , we discuss grbner basis methods to find the _ input - output equations_. a differential characteristic set of a prime differential ideal is a set of generators for the ideal . an algorithm to find a differential characteristic set of a radical ( in particular , prime ) differential ideal generated by a finite set of differential polynomals is called ritt s pseudodivision algorithm .we describe the process in detail below , which comes from the description in .note that our differential ideal as described above is a prime differential ideal .let be the leader of a polynomial , which is the highest ranking derivative of the variables appearing in that polynomial .a polynomial is said to be of _ lower rank _ than if or , whenever , the algebraic degree of the leader of is less than the algebraic degree of the leader of .a polynomial is _ reduced with respect to a polynomial _ if contains neither the leader of with equal or greater algebraic degree , nor its derivatives .if is not reduced with respect to , it can be reduced by using the pseudodivision algorithm below . 1 .if contains the derivative of the leader of , is differentiated times so its leader becomes .2 . multiply the polynomial by the coefficient of the highest power of ; let be the remainder of the division of this new polynomial by with respect to the variable .then is reduced with respect to .the polynomial is called the _ pseudoremainder _ of the pseudodivision .the polynomial is replaced by the pseudoremainder and the process is iterated using in place of and so on , until the pseudoremainder is reduced with respect to .this algorithm is applied to a set of differential polynomials , such that each polynomial is reduced with respect to each other , to form an auto - reduced set .the result is a differential characteristic set . 
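the algebraic core of a single pseudodivision step ( step 2 above ) is available in sympy as the pseudo - remainder prem . the toy polynomials below are illustrative ; a full implementation of ritt s algorithm would also need the differentiation step and the ranking bookkeeping described above :

```python
import sympy as sp

x = sp.symbols('x')
a, b, c = sp.symbols('a b c')

# g plays the role of the polynomial whose leader is x (degree 2 in x);
# f contains that leader with higher degree and must be reduced with respect to g
f = a * x**3 + b * x + 1
g = c * x**2 + x + b

rem = sp.prem(f, g, x)    # pseudo-remainder of the division of f by g w.r.t. x
print(sp.expand(rem))
print(sp.degree(rem, x) < sp.degree(g, x))   # True: rem is reduced with respect to g
```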
using the differentialalgebra package in maple, one can find a representation of the radical of a differential ideal generated by some equations , as an intersection of radical differential ideals with respect to a given ranking and rewrites a prime differential ideal using a different ranking .specifically , the rosenfeldgroebner command in maple takes two arguments : sys and r , where sys is a list of set of differential equations or inequations which are all rational in the independent and dependent variables and their derivatives and r is a differential polynomial ring built by the command differentialring specifying the independent and dependent variables and a ranking for them . then rosenfeldgroebner returns a representation of the radical of the differential ideal generated by sys , as an intersection of radical differential ideals saturated by the multiplicative family generated by the inequations found in systhis representation consists of a list of regular differential chains with respect to the ranking of r. note that rosenfeldgroebner returns a differential characteristic set if the differential ideal is prime . finally ,both algebraic and differential grbner bases can be employed to find the input - output equations . to use an algebraic grbner basis , one can take a sufficient number of derivatives of the model equations and then treat the derivatives of the variables as indeterminates in the polynomial ring in , , , ... , , , , ... , , , , ... , etc .then a grbner basis of the ideal generated by this full system of ( differential ) equations with an elimination ordering where the state variables and their derivatives are eliminated first can be found .details of this approach can be found in .differential grbner bases have been developed by carr ferro , ollivier , and mansfield , but currently there are no implementations in computer algebra systems .we now discuss how to use the differential invariants obtained from differential elimination ( using ritt s pseudodivision , differential groebner bases , or some other method ) for model selection / rejection .recall our input - output relations , or differential invariants , are of the form : the functions are differential monomials , i.e. monomials in the input / output variables , , , etc , and the functions are rational functions in the unknown parameter vector . in order to uniquely fix the rational coefficients to the differential monomials , we normalize each input / output equation to make it monic . in other words, we can re - write our input - output relations as : here is a differential polynomial in the input / output variables , , , etc .if the values of , , , etc , were known at a sufficient number of time instances , then one could substitute in values of and at each of these time instances to obtain a linear system of equations in the variables .first consider the case of a single input - output equation .if there are unknown coefficients , we obtain the system : we write this linear system as , where is an by matrix of the form : is the vector of unknown coefficients ^t ] . for the case of multiple input - output equations , we get the following block diagonal system of equations : where is a by matrix . for noise - free ( perfect ) data , this system should have a unique solution for . in other words ,the coefficients of the input - output equations can be uniquely determined from enough input / output data .the main idea of this paper is the following . 
given a set of candidate models , we find their associated differential invariants and then substitute in values of , etc , at many time instances , thus setting up the linear system for each model .the solution to should be unique for the correct model , but there should be no solution for each of the incorrect models .thus under ideal circumstances , one should be able to select the correct model since the input / output data corresponding to that model should satisfy its differential invariant .likewise , one should be able to reject the incorrect models since the input / output data should not satisfy their differential invariants .however , with imperfect data , there could be no solution to even for the correct model .thus , with imperfect data , one may be unable to select the correct model . on the other hand , if there is no solution to for each of the candidate models , then the goal is to determine how `` badly '' each of the models fail and reject models accordingly .we now describe criteria to reject models .let and consider the linear system where .note , in our case , , so is just the vector . here , we study the solvability of under ( a specific form of ) perturbation of both and .let and denote the perturbed versions of and , respectively , and assume that and depend only on and , respectively .our goal is to infer the _ unsolvability _ of the unperturbed system from observation of and only .we will describe how to detect the rank of an augmented matrix , but first introduce notation .the singular values of a matrix will be denoted by ( note that we have trivially extended the number of singular values of from to . ) the rank of is written .the range of is denoted . throughout, refers to the euclidean norm .the basic strategy will be to assume as a null hypothesis that has a solution , i.e. , , and then to derive its consequences in terms of and .if these consequences are not met , then we conclude by contradiction that is unsolvable . in other words, we will provide _ sufficient but not necessary _ conditions for to have no solution , i.e. , we can only reject ( but not confirm ) the null hypothesis .we will refer to this procedure as _ testing _ the null hypothesis .we first collect some useful results .the first , weyl s inequality , is quite standard .let .then weyl s inequality can be used to test using knowledge of only .let and assume that .then [ cor : weyl - rank ] therefore , if is not satisfied , then .assume the null hypothesis .then , so ) = \operatorname{rank}(a ) \leq \min ( m , n) ] .but we do not have access to ] . under the null hypothesis , )\leq \| [ \tilde{a } - a , \tilde{b } - b ] \| \leq \| \tilde{a } - a \| + \| \tilde{b } - b \| .\label{eqn : augmented - sigma } \end{aligned}\ ] ] [ thm : augmented - matrix ] apply corollary [ cor : weyl - rank ] . in other words , if does not hold , then has no solution .this approach can fail to correctly reject the null hypothesis if is ( numerically ) low - rank . as an example , suppose that and let consist of a single vector ( ) . then ) \leq n ] ( or is small ) .assuming that and are small , ) ] .however , we can only establish lower bounds on the matrix rank ( we can only tell if a singular value is `` too large '' ) , so this is not feasible in practice .an alternative approach is to consider only _ numerical _ ranks obtained by thresholding . 
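in practice the test above amounts to comparing the smallest singular value of the augmented matrix [ a~ , b~ ] against a noise - dependent threshold . the sketch below is schematic : the monic invariant , the number of time points , the noise level , and the crude frobenius - norm surrogate for the perturbation bound are all illustrative assumptions ( the sharper probabilistic bound is developed in what follows ) :

```python
import numpy as np

rng = np.random.default_rng(2)

# suppose the invariant has the monic form  y'' + c1*y' + c2*y = 0  (two unknown
# coefficients) and noisy estimates of y, y', y'' are available at 50 time points;
# the data below satisfy it exactly with c1 = 3, c2 = 2 before noise is added
t = np.linspace(0.0, 5.0, 50)
y = 2.0 * np.exp(-t) - np.exp(-2.0 * t)
yp = -2.0 * np.exp(-t) + 2.0 * np.exp(-2.0 * t)
ypp = 2.0 * np.exp(-t) - 4.0 * np.exp(-2.0 * t)

eps = 1e-3
a_tilde = np.column_stack([yp, y]) + eps * rng.standard_normal((t.size, 2))
b_tilde = -ypp + eps * rng.standard_normal(t.size)   # monic term moved to the rhs

# smallest singular value of the augmented matrix [a_tilde, b_tilde]
sigma_min = np.linalg.svd(np.column_stack([a_tilde, b_tilde]),
                          compute_uv=False)[-1]

# under the null hypothesis (a x = b solvable) this is bounded by the size of the
# perturbation; a crude expected-frobenius-norm surrogate for that bound:
bound = eps * (np.sqrt(t.size * 2) + np.sqrt(t.size))
print(sigma_min, bound)
print("reject" if sigma_min > bound else "cannot reject")   # correct model survives
```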
how to choose such a threshold , however , is not at all clear and can be a very delicate matter especially if the data have high dynamic range .the theorem is uninformative if since then ) = \sigma_{n + 1 } ( \tilde{a } , \tilde{b } ) = 0 ] in [ sec : augmented - matrix ] . then since where we have made explicit the dependence of both sides on the same underlying random mechanism , the ( cumulative ) distribution function of must dominate that of , i.e. , thus , [ eqn : prob - tau ] note that if , e.g. , ( i.e. , if were known exactly ) , then simplifies to just .using , we can associate a -value to any given realization of by referencing upper tail bounds for quantities of the form . recall that under the null hypothesis . in a classical statistical hypothesis testing framework , we may therefore reject the null hypothesis if is at most , where is the desired significance level ( e.g. , ) .we now turn to bounding , where we will assume that .this can be done in several ways .one easy way is to recognize that where is the frobenius norm , so but has a chi distribution ) . ] with degrees of freedom .therefore , however , each inequality in can be quite loose : the first is loose in the sense that while the second in that but a slightly better approach is to use the inequality where and denote the row and column , respectively , of .the term can then be handled using a chi distribution via as above or directly using a concentration bound ( see below ) .variations on this undoubtedly exist . here , we will appeal to a result by tropp .the following is from 4.3 in .let , where each .then for any , [ thm : hadamard - gaussian ] the bound for can then be computed as follows .let so that . then by theorem [ thm : hadamard - gaussian ] , \ , dt , \end{aligned}\ ] ] where and are the `` variance '' parameters in the theorem for and , respectively .the term in parentheses simplifies to \\ & = \frac{1}{\sigma_{a}^{2 } \sigma_{b}^{2 } } \left [ ( \sigma_{a}^{2 } + \sigma_{b}^{2 } ) \left ( t - \frac{\sigma_{a}^{2}}{\sigma_{a}^{2 } + \sigma_{b}^{2 } } x \right)^{2 } + \sigma_{a}^{2 } \left ( 1 - \frac{\sigma_{a}^{2}}{\sigma_{a}^{2 } + \sigma_{b}^{2 } } \right ) x^{2 } \right]\\ & = \frac{1}{\sigma_{a}^{2 } \sigma_{b}^{2 } } \left [ ( \sigma_{a}^{2 } + \sigma_{b}^{2 } ) \left ( t - \frac{\sigma_{a}^{2}}{\sigma_{a}^{2 } + \sigma_{b}^{2 } } x \right)^{2 } + \frac{\sigma_{a}^{2 } \sigma_{b}^{2}}{\sigma_{a}^{2 } + \sigma_{b}^{2 } } x^{2 } \right]\\ & = \frac{\sigma_{a}^{2 } + \sigma_{b}^{2}}{\sigma_{a}^{2 } \sigma_{b}^{2 } } \left ( t - \frac{\sigma_{a}^{2}}{\sigma_{a}^{2 } + \sigma_{b}^{2 } } x \right)^{2 } + \frac{x^{2}}{\sigma_{a}^{2 } + \sigma_{b}^{2 } } \end{aligned}\ ] ] on completing the square .therefore , \int_{0}^{x } \exp \left [ -\frac{1}{2 } \left ( \frac{\sigma_{a}^{2 } + \sigma_{b}^{2}}{\sigma_{a}^{2 } \sigma_{b}^{2 } } \right ) \left ( t - \frac{\sigma_{a}^{2}}{\sigma_{a}^{2 } + \sigma_{b}^{2 } } x \right)^{2 } \right ] dt .\end{aligned}\ ] ] now set so that the integral becomes dt = \int_{0}^{x } \exp \left [ -\frac{(t - \alpha x)^{2}}{2 \sigma^{2 } } \right ] dt .\end{aligned}\ ] ] the variable substitution then gives dt = \sigma \int_{-\alpha x / \sigma}^{(1 - \alpha ) x / \sigma } e^{-u^{2}/2 } \ , du = \sqrt{2 \pi } \sigma \left [ \phi \left ( \frac{(1 - \alpha ) x}{\sigma } \right ) - \phi \left ( -\frac{\alpha x}{\sigma } \right ) \right ] , \end{aligned}\ ] ] where is the standard normal distribution function .thus , \exp \left [ -\frac{1}{2 } \left ( \frac{x^{2}}{\sigma_{a}^{2 } + 
\sigma_{b}^{2 } } \right ) \right ] .\label{eqn : p1 } \end{aligned}\ ] ] a similar ( but much simpler ) analysis yields next present a method for estimating higher order derivatives and the estimation error using gaussian process regression and then apply the differential invariant method to both linear and nonlinear models in the subsequent sections . a gaussian process ( gp ) is a stochastic process , where is a mean function and a covariance function .gps are often used for regression / prediction as follows .suppose that there is an underlying deterministic function that we can only observe with some measurement noise as , where for the dirac delta .we consider the problem of finding in a bayesian setting by assuming it to be a gp with prior mean and covariance functions and , respectively .then the joint distribution of ^{{\mathsf{t}}} ] and ^{{\mathsf{t}}} ] is the conditional distribution of given is also gaussian : where are the posterior mean and covariance , respectively .this allows us to infer on the basis of observing .the diagonal entries of are the posterior variances and quantify the uncertainty associated with this inference procedure .equation provides an estimate for the function values .what if we want to estimate its derivatives ?let for some covariance function . then by linearity of differentiation .thus , \hat{x } ( \boldsymbol{t } ) \cr\- x(\boldsymbol{s } ) \cr x'(\boldsymbol{s } ) \cr \vdots \cr x^{(n ) } ( \boldsymbol{s } ) \cr \end{pmat } \sim { \mathcal{n}}\left ( \begin{pmat}[{. } ] \mu_{{\text{prior } } } ( \boldsymbol{t } ) \cr\- \mu_{{\text{prior } } } ( \boldsymbol{s } ) \cr \mu_{{\text{prior}}}^{(1 ) } ( \boldsymbol{s } ) \cr \vdots\cr \mu_{{\text{prior}}}^{(n ) } ( \boldsymbol{s } ) \cr \end{pmat } , \begin{pmat}[{| ... 
} ] \sigma_{{\text{prior } } } ( \boldsymbol{t } , \boldsymbol{t } ) + \sigma^{2 } ( \boldsymbol{t } ) i & \sigma_{{\text{prior}}}^{{\mathsf{t } } } ( \boldsymbol{s } , \boldsymbol{t } ) & \sigma_{{\text{prior}}}^{(1,0),{\mathsf{t } } } ( \boldsymbol{s } , \boldsymbol{t } ) & \cdots & \sigma_{{\text{prior}}}^{(n,0 ) , { \mathsf{t } } } ( \boldsymbol{s } , \boldsymbol{t } ) \cr\- \sigma_{{\text{prior } } } ( \boldsymbol{s } , \boldsymbol{t } ) & \sigma_{{\text{prior } } } ( \boldsymbol{s } , \boldsymbol{s } ) & \sigma_{{\text{prior}}}^{(1,0 ) , { \mathsf{t } } } ( \boldsymbol{s } , \boldsymbol{s } ) & \cdots & \sigma_{{\text{prior}}}^{(n,0 ) , { \mathsf{t } } } ( \boldsymbol{s } , \boldsymbol{s } ) \cr \sigma_{{\text{prior}}}^{(1,0 ) } ( \boldsymbol{s } , \boldsymbol{t } ) & \sigma_{{\text{prior}}}^{(1,0 ) } ( \boldsymbol{s } , \boldsymbol{s } ) & \sigma_{{\text{prior}}}^{(1,1 ) } ( \boldsymbol{s } , \boldsymbol{s } ) & \cdots & \sigma_{{\text{prior}}}^{(n,1 ) , { \mathsf{t } } } ( \boldsymbol{s } , \boldsymbol{s } ) \cr \vdots & \vdots & \vdots & \ddots & \vdots \cr \sigma_{{\text{prior}}}^{(n,0 ) } ( \boldsymbol{s } , \boldsymbol{t } ) & \sigma_{{\text{prior}}}^{(n,0 ) } ( \boldsymbol{s } , \boldsymbol{s } ) & \sigma_{{\text{prior}}}^{(n,1 ) } ( \boldsymbol{s } , \boldsymbol{s } ) & \cdots & \sigma_{(n , n ) } ( \boldsymbol{s } , \boldsymbol{s } ) \cr \end{pmat } \right ) , \end{aligned}\ ] ] where is the prior mean for and .this joint distribution is exactly of the form .an analogous application of then yields the posterior estimate of for all .alternatively , if we are interested only in the posterior variances of each , then it suffices to consider each block independently : the cost of computing can clearly be amortized over all .we now consider the specific case of the squared exponential ( se ) covariance function , \end{aligned}\ ] ] where is the signal variance and is a length scale .the se function is one of the most widely used covariance functions in practice .its derivatives can be expressed in terms of the ( probabilists ) hermite polynomials ( these are also sometimes denoted ) .the first few hermite polynomials are , , and .we need to compute the derivatives .let so that . then and .therefore , the gp regression requires us to have the values of the hyperparameters , , and . in practice , however , these are hardly ever known . in the examples below, we deal with this by estimating the hyperparameters from the data by maximizing the likelihood .we do this by using a nonlinear conjugate gradient algorithm , which can be quite sensitive to the initial starting point , so we initialize multiple runs over a small grid in hyperparameter space and return the best estimate found .this increases the quality of the estimated hyperparameters but can still sometimes fail .we showcase our method on competing models : linear compartment models ( 2 and 3 species ) , lotka - volterra models ( 2 and 3 species ) and lorenz .as the linear compartment differential invariants were presented in an earlier section , we compute the differential invariants of the lotka - volterra and lorenz using rosenfeldgroebner .we simulate each of these models to generate time course data , add varying levels of noise , and estimate the necessary higher order derivatives using gp regression . as described in the earlier section , we require the estimation of the higher order derivatives to satisfy a negative log likelihood value , otherwise the gp fit is not ` good ' . 
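a minimal numerical sketch of the posterior for the first derivative under a squared - exponential prior , following the block formulas above . the hyperparameters are fixed by hand here rather than fitted by maximum likelihood , and the test function is an arbitrary illustration :

```python
import numpy as np

def k_se(s, t, sf2, ell):
    """squared-exponential covariance k(s, t)."""
    return sf2 * np.exp(-0.5 * (s[:, None] - t[None, :]) ** 2 / ell ** 2)

def k_se_10(s, t, sf2, ell):
    """d/ds k(s, t): covariance between f'(s) and f(t)."""
    return -(s[:, None] - t[None, :]) / ell ** 2 * k_se(s, t, sf2, ell)

def k_se_11(s, t, sf2, ell):
    """d^2/(ds dt) k(s, t): covariance between f'(s) and f'(t)."""
    d = (s[:, None] - t[None, :]) / ell
    return (1.0 - d ** 2) / ell ** 2 * k_se(s, t, sf2, ell)

rng = np.random.default_rng(3)
t_obs = np.linspace(0.0, 4.0, 40)
sigma_n = 0.05
y_obs = np.sin(t_obs) + sigma_n * rng.standard_normal(t_obs.size)

sf2, ell = 1.0, 1.0                        # assumed hyperparameters
s = np.linspace(0.2, 3.8, 25)              # points where the derivative is wanted

kyy = k_se(t_obs, t_obs, sf2, ell) + sigma_n ** 2 * np.eye(t_obs.size)
alpha = np.linalg.solve(kyy, y_obs)

k10 = k_se_10(s, t_obs, sf2, ell)          # cross-covariance of f'(s) with the data
mean_deriv = k10 @ alpha
cov_deriv = k_se_11(s, s, sf2, ell) - k10 @ np.linalg.solve(kyy, k10.T)
sd_deriv = np.sqrt(np.clip(np.diag(cov_deriv), 0.0, None))

print(np.max(np.abs(mean_deriv - np.cos(s))))   # error of the derivative estimate
print(sd_deriv[:3])                             # pointwise posterior uncertainty
```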
in some cases ,this can be remedied by increase the number of data points . using the estimated gp regression data, we test each of the models using the differential invariant method on other models .[ ex : lv2 ] the two species lotka - volterra model is : where and are variables , and are parameters .we assume only is observable and perform differential elimination and obtain our differential invariant in terms of only : [ ex : lv3 ] by including an additional variable , the three species lotka - volterra model is : assuming only is observable .after differential elimination , the differential invariant is : [ ex : lor ] another three species model , the lorenz model , is described by the system of equations : we assume only is observable , perform differential elimination , and obtain the following invariant : [ ex : lc2 ] a linear 2-compartment model without input can be written as : where and are variables , and are parameters .we assume only is observable and perform differential elimination and obtain our differential invariant in terms of only : [ ex : lc3 ] the linear 3-compartment model without input is : where are variables , and are parameters . we assume only is observable andperform differential elimination and obtain our differential invariant in terms of only : by assuming in examples 6.16.5 represents the same observable variable , we apply our method to data simulated from each model and perform model comparison .the models are simulated and 100 time points are obtained variable in each model .we add different levels of gaussian noise to the simulated data , and then estimate the higher order derivatives from the data .for example , during our study we found that for some parameters of the lotka - volterra three species model , e.g. ] and initial condition ] and initial condition ] and initial condition ] and initial condition $ ] . ]we have demonstrated our model discrimination algorithm on various models . in this section ,we consider some other theoretical points regarding differential invariants . notethat we have assumed that the parameters are all unknown and we have not taken any possible algebraic dependencies among the coefficients into account .this latter point is another reason our algorithm only concerns model rejection and not model selection .thus , each unknown coefficient is essential treated as an independent unknown variable in our linear system of equations .however , there may be instances where we d like to consider incorporating this additional information .we first consider the effect of incorporating known parameter values . in , an explicit formula for the input - output equations for linear models was derived .in particular , it was shown that all linear models corresponding to strongly connected graphs with at least one leak and having the same input and output compartments will have the same differential polynomial form of the input - output equations .for example , a linear 2-compartment model with a single input and output in the same compartment and corresponding to a strongly connected graph with at least one leak has the form : thus , our model discrimination method would not work for two distinct linear 2-compartment models with the above - mentioned form . in order to discriminate between two such models , we need to take other information into account , e.g. 
known parameter values .consider the following two linear 2-compartment models : whose corresponding input - output equations are of the form : notice that both of these equations are of the above - mentioned form , i.e. both 2-compartment models have a single input and output in the same compartment and correspond to strongly connected graphs with at least one leak . in the first model , there is a leak from the first compartment and an exchange between compartments and . in the second model, there is a leak from the second compartment and an exchange between compartments and .assume that the parameter is known .in the first model , this changes our invariant to : in the second model , our invariant is : in this case , the right - hand sides of the two equations are the same , but the first equation has two variables ( coefficients ) while the second equation has three variables ( coefficients ) .thus , if we had data from the second model , we could try to reject the first model ( much like the 3-compartment versus 2-compartment model discrimination in the examples below ) .in other words , a vector in the span of and for may not be in the span of and only .we next consider the effect of incorporating coefficient dependency relationships . while we can not incorporate the polynomial algebraic dependency relationships among the coefficients in our linear algebraic approach to model rejection, we can include certain dependency conditions , such as certain coefficients becoming known constants .we have already seen one way in which this can happen in the previous example ( from known nonzero parameter values ) .we now explore the case where certain coefficients go to zero . from the explicit formula for input - output equations from , we get that a linear model without any leaks has a zero term for the coefficient of . thus a linear 2-compartment model with a single input and output in the same compartment and corresponding to a strongly connected graph without any leakshas the form : thus to discriminate between two distinct linear 2-compartment models , one with leaks and one without any leaks , we should incorporate this zero coefficient into our invariant .consider the following two linear 2-compartment models : whose corresponding input - output equations are of the form : in the first model , there is a leak from the first compartment and an exchange between compartments and . in the second model, there is an exchange between compartments and and no leaks .thus , our invariants can be written as : again , the right - hand sides of the two equations are the same , but the first equation has three variables ( coefficients ) while the second equation has two variables ( coefficients ) .thus , if we had data from the first model , we could try to reject the second model .in other words , a vector in the span of and for may not be in the span of and only .finally , we consider the identifiability properties of our models .if the number of parameters is greater than the number of coefficients , then the model is unidentifiable . 
on the other hand ,if the number of parameters is less than or equal to the number of coefficients , then the model could possibly be identifiable .clearly , an identifiable model is preferred over an unidentifiable model .we note that , in our approach of forming the linear system from the input - output equations , we could in theory solve for the coefficients and then solve for the parameters from these known coefficient values if the model is identifiable .however , this is not a commonly used method to estimate parameter values in practice .as noted above , the possible algebraic dependency relationships among the coefficients are not taken into account in our linear algebra approach .this means that there could be many different models with the same differential polynomial form of the input - output equations .if such a model can not be rejected , we note that an identifiable model satisfying a particular input - output relationship is preferred over an unidentifiable one satisying the same form of the input - output relations , as we see in the following example .consider the following two linear 2-compartment models : whose corresponding input - output equations are of the form : in the first model , there is a leak from the first compartment and an exchange between compartments and . in the second model ,there are leaks from both compartments and an exchange between compartments and .thus , both models have invariants of the form : since the first model is identifiable and the second model is unidentifiable , we prefer to use the form of the first model if the model s invariant can not be rejected .after performing this differential algebraic statistics model rejection , one has already obtained the input - output equations to test structural identifiability . in a sense, our method extends the current spectrum of potential approaches for comparing models with time course data , in that one first can reject incompatible models , then test structural identifiability of compatible models using input - output equations obtained from the differential elimination , infer parameter values of the admissible models , and apply an information criterion model selection method to assert the best model .notably the presented differential algebraic statistics method does not penalize for model complexity , unlike traditional model selection techniques .rather , we reject when a model can not , for any parameter values , be compatible with the given data .we found that simpler models , such as the linear 2 compartment model could be rejected when data was generated from a more complex model , such as the three species lotka - volterra model , which elicits a wider range of behavior . on the other hand ,more complex models , such as the lorenz model , were often not rejected , from data simulated from less complex models . in futureit would be helpful to better understand the relationship between differential invariants and dynamics .we also think it would be beneficial to investigate algebraic properties of sloppiness .we believe there is large scope for additional parameter - free coplanarity model comparison methods. 
it would be beneficial to explore which algorithms for differential elimination can handle larger systems , and whether this area could be extended .the authors acknowledge funding from the american institute of mathematics ( aim ) where this research commenced .the authors thank mauricio barahona , mike osborne , and seth sullivant for helpful discussions .we are especially grateful to paul kirk for discussions on gps and providing his gp code , which served as an initial template to get started .nm was partially supported by the david and lucille packard foundation .hah acknowledges funding from ams simons travel grant ,epsrc fellowship ep / k041096/1 and mph stumpf leverhulme trust grant . c. aistleitner , _ relations between grbner bases , differential grbner bases , and differential characteristic sets _ , masters thesis , johannes kepler universitt , 2010 . h. akaike , _ a new look at the statistical model identification _ , ieee trans .automat . control , * 19 * ( 1974 ) , pp . 716723 .f. boulier , _ differential elimination and biological modelling _ , radon series comp .math . , * 2 * ( 2007 ) , pp . 111 - 139. f. boulier , d. lazard , f. ollivier , m. petitot , _ representation for the radical of a finitely generated differential ideal _ , in : issac 95 : proceedings of the 1995 international symposium on symbolic and algebraic computation , pp 158 - 166 .acm press , 1995 .g. carr ferro , em grbner bases and differential algebra , in l. huguet and a. poli , editors , proceedings of the 5th international symposium on applied algebra , algebraic algorithms and error - correcting codes , volume 356 of lecture notes in computer science , pp .131 - 140 .springer , 1987 .clarke , _ stoichiometric network analysis _ , cell biophys ., 12 ( 1988 ) , pp .d. cox , j. little , and donal oshea , _ ideals , varieties , and algorithms _ , springer , new york , 2007 .c. conradi , j. saez - rodriguez , e.d .gilles , j. raisch , _ using chemical reaction network theory to discard a kinetic mechanism hypothesis _ , iee proc .152 ( 2005 ) , pp .s. diop , _ differential algebraic decision methods and some applications to system theory _ ,* 98 * ( 1992 ) , pp . 137 - 161 .m. drton , b. sturmfels , s. sullivant , _ lectures on algebraic statistics _ , oberwolfach seminars ( springer , basel ) vol. 39 . 2009 .m. feinberg , _ chemical reaction network structure and the stability of complex isothermal reactors i . the deficiency zero and deficiency one theorems _ , chem ., * 42 * ( 1987 ) , pp . 22292268. m. feinberg , _ chemical reaction network structure and the stability of complex isothermal reactors ii .multiple steady states for networks of deficiency one _ , chem ., * 43 * ( 1988 ) , pp . 125. k. forsman , _ constructive commutative algebra in nonlinear control theory _ , phd thesis , linkping university , 1991 .o. golubitsky , m. kondratieva , m. m. maza , and a. ovchinnikov , _ a bound for the rosenfeld - grbner algorithm _ , j. symbolic comput . ,* 43 * ( 2008 ) , pp . 582 - 610 .e. gross , h.a .harrington , z. rosen , b. sturmfels , _ algebraic systems biology : a case study for the wnt pathway _ , bull .biol . , * 78*(1 ) ( 2016 ) , pp . 21 - 51 .e. gross , b. davis , k.l .ho , d. bates , h. harrington , _ numerical algebraic geometry for model selection _ , submitted .j. gunawardena , _ distributivity and processivity in multisite phosphorylation can be distinguished through steady - state invariants _ , biophys .j. 
, 93 ( 2007 ) , pp .gutenkunst , j.j .waterfall , f.p .casey , k.s .brown , c.r .myers , j.p .sethna , _ universally sloppy parameter sensitivities in systems biology models _ , plos comput .biol . , 3 ( 2007 ) ,harrington , k.l . ho , t. thorne , m.p.h .stumpf , _ parameter - free model discrimination criterion based on steady - state coplanarity _ , proc ., * 109*(39 ) ( 2012 ) , pp . 1574615751 .i. kaplansky , _ an introduction to differential algebra _ , hermann , paris , 1957 .e. r. kolchin , _ differential algebra and algebraic groups _ , pure appl . math ., * 54 * ( 1973 ) . l. ljung and t. glad , _ on global identifiability for arbitrary model parameterization _ , automatica , * 30*(2 ) ( 1994 ) , pp .265 - 276 .maclean , z. rosen , h.m .byrne , h.a .harrington , _ parameter - free methods distinguish wnt pathway models and guide design of experiments _ , proc ., * 112*(9 ) ( 2015 ) , pp . 26522657 .e. mansfield , _ differential grbner bases _ , phd thesis , university of sydney , 1991 . a.k .manrai , j. gunawardena , _ the geometry of multisite phosphorylation _ ,j. , * 95 * ( 2008 ) , pp . 55335543 .maple documentation .url http://www.maplesoft.com/support/help/maple/view.aspx?path=differentialalgebra n. meshkat , c. anderson , and j. j. distefano iii , _ alternative to ritt s pseudodivision for finding the input - output equations of multi - output models _ , math biosci . ,* 239 * ( 2012 ) , pp . 117 - 123 .n. meshkat , s. sullivant , and m. eisenberg , _ identifiability results for several classes of linear compartment models _ , bull . math . biol . ,* 77 * ( 2015 ) , pp . 1620 - 1651 .f. ollivier , _ le probleme de lidentifiabilite structurelle globale : etude theoretique , methodes effectives and bornes de complexite _ , phd thesis , ecole polytechnique , 1990 .f. ollivier , _ standard bases of differential ideals_. in s. sakata , editor , proceedings of the 8th international symposium on applied algebra , algorithms , and error - correcting codes , volume 508 of lecture notes in computer science , pp .304 - 321 .springer , 1991 .orth , i. thiele , b. .palsson , _ what is flux balance analysis ? _ nature biotechnol . ,* 28 * ( 2010 ) , pp .rasmussen , c.k.i .williams , _gaussian processes for machine learning_. the mit press : cambridge , 2006 .j. f. ritt , _ differential algebra _ ,dover ( 1950 ) .m. p. saccomani , s. audoly , and l. dangi , _ parameter identifiability of nonlinear systems : the role of initial conditions _ ,automatica * 39 * ( 2003 ) , pp . 619 - 632 .user - friendly tail bounds for sums of random matrices . found .12 : 389434 , 2012 .inequalities for the singular values of hadamard products .siam j. matrix anal .18 ( 4 ) : 10931095 , 1997 .
We present a method for rejecting competing models from noisy time-course data that does not rely on parameter inference. First we characterize ordinary differential equation models in terms of measurable variables only, using differential algebra elimination. Next we extract additional information from the given data using Gaussian process regression (GPR) and then transform the differential invariants. We develop a test using linear algebra and statistics to reject transformed models with the given data in a parameter-free manner. This algorithm exploits the information about transients that is encoded in the model's structure. We demonstrate the power of this approach by discriminating between different models from mathematical biology.

Keywords: model selection, differential algebra, algebraic statistics, mathematical biology
in 2011 , i described a timing sequencer and related laser lab instrumentation based on 16-bit microcontrollers and a homemade custom keypad / display unit. since then , two new developments have enabled a far more powerful approach : the availability of high - performance 32-bit microcontrollers in low - pin - count packages suitable for hand assembly , and the near - ubiquitous availability of tablets with high - resolution touch - screen interfaces and open development platforms .this article describes several new instrument designs tailored for research in atomic physics and laser spectroscopy .each utilizes a 32-bit microcontroller in conjunction with a usb interface to an android tablet , which serves as an interactive user interface and graphical display .these instruments are suitable for construction by students with some experience in soldering small chips , and are programmed using standard c code that can easily be modified .this offers both flexibility and educational opportunities .the instruments can meet many of the needs of a typical optical research lab : event sequencing , ramp and waveform generation , precise temperature control , high - voltage pzt control for micron - scale optical alignment , diode laser current control , rf frequency synthesis for modulator drivers , and dedicated phase - sensitive lock - in detection for frequency locking of lasers and optical cavities .the 32-bit processors have sufficient memory and processing power to allow interrupt - driven instrument operation concurrent with usage of a real - time graphical user interface .the central principle in designing these instruments has been to keep them as simple and self - contained as possible , but without sacrificing performance .with simplicity comes small size , allowing control instrumentation to be co - located with optical devices for example , an arbitrary waveform synthesizer could be housed directly in a diode laser head , or a lock - in amplifier could fit in a small box together with a detector . as indicated in fig .[ systemoverview ] , each instrument is based on a commodity - type 32-bit microcontroller in the microchip pic32 series , and can be controlled by an android app designed for a 7 `` or 8 '' tablet .an unusual feature is that the tablet interface is fully interchangeable , using a single app to communicate with any of a diverse family of instruments as described in sec .[ subsec : usb ] .further , all of the instruments are fully functional even when the external interface is removed .when the operating parameters are modified , the values are stored in the microcontroller program memory , so that these new values will be used even after power has been disconnected and reconnected .the usb interface also allows connection to an external pc to provide centralized control .( color online ) block diagram of a microcontroller - based instrument communicating with an android tablet via usb .a tablet app , microcontroller , uploads parameter values and their ranges from the instrument each time the usb interface cable is connected . 
]four printed - circuit boards ( pcbs ) have so far been designed .one , the labint32 board described in section [ sec : labint ] , is a general - purpose laboratory interface specifically designed for versatility .the others are optimized for special purposes , as described in section [ sec : specialpurpose ] .the pcbs use a modular layout based in part on the daughter boards `` described in sec .[ subsec : daughterboards ] .they range from simple interface circuits with just a handful of components to the relatively sophisticated wvfm32 board , which uses the new analog devices ad9102 or ad9106 waveform generation chips to support a flexible voltage - output arbitrary waveform generator and direct digital synthesizer ( dds ) .it measures 1.5''.8 " , much smaller than any comparable device known to the author .further details on these designs , including circuit board layout files and full source code for the software , are available on my web page at the university of connecticut. designing the new instrumentation i considered several design approaches .one obvious method is to use a central data bus , facilitating inter - process communication and central control .apart from commercial systems using labview and similar products , some excellent homemade systems of this type have been developed , including an open - source project supported by groups at innsbruck and texas. this approach is best suited to labs that maintain a stable long - term experimental configurations of considerable complexity , such as the apparatus for bose - einstein condensation that motivated the innsbruck / texas designs .as already mentioned , the approach used here is quite different , intended primarily for smaller - scale experiments or setups that evolve rapidly , where a flexible configuration is more important than providing full central control from a single console .the intent is that most lab instruments will operate as autonomous devices , although a few external synchronization and control signals are obviously needed to set the overall sequence of an experiment .these can come either from a central lab computer or , for simple setups , from one of the boards described here , set up as an event sequencer and analog control generator .this approach is consistent with our own previous work and with recent designs from other small laser - based labs. once having decided on decentralized designs using microcontrollers , there are still at least three approaches : organized development platforms , compact development boards , or direct incorporation of microcontroller chips into custom designs .numerous development platforms are now available , ranging from the hobbyist - oriented arduino and raspberry pi to more engineering - based solutions. however , these approaches were ruled out because they increase the cost , size , and complexity of an instrument . for simple hardware - oriented tasks requiring rapid and repeatable responses , a predefined hardware interfacing configuration andthe presence of an operating system can be more of a hindrance than a help .initially it seemed attractive to use a compact development card to simplify design and construction .my initial design efforts used the simple and affordable mini-32 development card from mikroelektronika, which combines an 80 mhz microchip pic32mx534f064h processor with basic support circuitry and a usb connector .this board was used to construct a ramp generator and event sequencer very similar in design to an earlier 16-bit version. 
while successful , this approach entailed numerous inconveniences : the microcontroller program and ram memories are too small at 64 kb and 16 kb , the oscillator crystal is not a thermally stabilized txco type , the usb interface requires extensive modification to allow host - mode operation , and the 80 mhz instruction rate is somewhat compromised by mandatory wait states and interrupt latency . finally ,certain microcontroller pins that are essential for research lab use , such as the asynchronous timing input t1ck , are assigned for other purposes on the mini-32 , requiring laborious cutting and resoldering of traces .tests of the event sequencer yielded reasonably good results : the maximum interrupt event rate of 1.5 mhz is about twice as fast as the 16-bit design operating at 20 mhz , although the typical interrupt latency of 400 ns is not very different . nevertheless , it became evident that the effort in using preassembled development boards outweighs the advantages .( color online ) photograph of the 5``.25 '' labint32 pcb .it includes a wvfm32 daughter board with required support circuitry , and a dual 16-bit dac with one output connected to a card - edge sma connector . ] instead , the designs described here use low - pin - count chips in the microchip pic32mx250 series that are directly soldered to the circuit boards , as can be seen in fig .[ labintphoto ] .these microcontrollers , even though they are positioned as basic commodity - type devices by the manufacturer , have twice the memory of the mini-32 processor and can operate at 40 mhz without wait states. they feature software - reassignable pins that increase interfacing flexibility , as described in sec . [ subsec : modular ] . while the reduced 40 mhz speed is a consideration for event sequencing, it does not impact the performance of any of the other instruments described here , and the absence of wait states during memory access is partially compensatory . the processor clock and other timing references are derived from miniature temperature - compensated crystal oscillators in the fox electronics fox924b series , which are small , inexpensive , and accurate within 2.5 parts per million .ease of construction is a major consideration for circuits used in an academic research lab . to facilitate this ,the easily - mounted 28-pin pic32mx250f128b microcontroller is used where possible , and a 44-pin variant when more extensive interfacing is needed .the basic support circuitry for the controller is laid out to allow hand soldering , as is other low - frequency interface circuitry .nevertheless , all of the pcbs include at least a few surface - mounted chips that are more easily mounted using hot - air soldering methods .we have obtained very good results using solder paste and a light - duty hot - air station. 
for rf circuits the hot - air method is unfortunately a necessity , because modern rf chips commonly use compact flat packages such as the qfn-32 , with closely - spaced pins located underneath the chip .construction can also be made easier by including a full solder mask on the pcb , greatly reducing the incidence of accidental solder bridges between adjacent pins .these masks are available for a modest extra fee from most pcb fabricators , and their additional services usually also include printed legends that can conveniently label the component layout .( color online ) typical screen view of the microcontroller app on a google nexus 7 tablet .when a parameter is selected on the scrollable list at the upper left , its value can be adjusted either with a pop - up keypad or with the two slider bars .the strip - chart graph shows in yellow the output voltage produced by a temperature controller card , and in blue the temperature offset from the set point ( 25 units 1 mk ) . ]as previously described , a user interface to a commodity - type tablet is very appealing because it offers a fast , responsive high - resolution graphical touch - screen interface that requires no specialized instrumentation or construction .although rf communication with a tablet is possible using bluetooth or wi - fi protocols , a usb interface is a better choice for lab instruments because it avoids the need for extra circuitry , and it avoids the proliferation of multiple rf - based devices operating in a limited space .an interface based on an open - source development environment is important , so that programs on both the tablet and the microcontroller can be freely modified for individual research needs .fortunately the android operating system provides such a resource , the android open accessory ( aoa ) protocol. for this reason , the programs described here were developed for the widely available google nexus 7 android tablet , which offers a 1280 display and up to 32 gb of memory , with a fast quad - core processor .the microcontroller programs use the aoa protocol mainly to transfer five - byte data packets consisting of a command byte plus a 32-bit integer .they also support longer data packets in the microcontroller - to - tablet direction for displaying text strings and graphics .an important consideration is that the usb interface at the microcontroller end of the link must operate in host mode because many tablets , including the google nexus 7 , support only device - mode operation .an additional consideration is that for extended operation of the graphical display , a continuous charging current must be provided .the only way to charge most tablets is via the usb connector , and charging concurrent with communication is only possible if the tablet operates in usb device mode . on the other hand , it is important that the microcontroller usb interface also be capable of device - mode operation , because when control by an external personal computer is desired , the pc will support only host - mode operation . for this reason , the full usbon - the - go ( otg ) protocol has been implemented in hardware , allowing dynamic host - vs - device switching .presently the microcontroller software supports only host - mode operation with a tablet interface , but extension to a pc interface would require only full incorporation of the usb otg sample code available from microchip. 
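As a concrete illustration of the five-byte packet format described above (one command byte followed by a 32-bit integer), the fragment below packs and parses such a packet. The little-endian payload order and the function names are assumptions of this sketch; the actual firmware and app may order the bytes differently.

```c
#include <stdint.h>

#define PKT_LEN 5  /* 1 command byte + 4 payload bytes */

/* Pack a command byte and a signed 32-bit value into a 5-byte buffer
 * (least-significant payload byte first; byte order is an assumption). */
static void pkt_pack(uint8_t buf[PKT_LEN], uint8_t cmd, int32_t value)
{
    uint32_t v = (uint32_t)value;
    buf[0] = cmd;
    buf[1] = (uint8_t)(v & 0xFF);
    buf[2] = (uint8_t)((v >> 8)  & 0xFF);
    buf[3] = (uint8_t)((v >> 16) & 0xFF);
    buf[4] = (uint8_t)((v >> 24) & 0xFF);
}

/* Recover the command byte and 32-bit value from a received packet. */
static void pkt_unpack(const uint8_t buf[PKT_LEN], uint8_t *cmd, int32_t *value)
{
    *cmd   = buf[0];
    *value = (int32_t)((uint32_t)buf[1]         |
                       ((uint32_t)buf[2] << 8)  |
                       ((uint32_t)buf[3] << 16) |
                       ((uint32_t)buf[4] << 24));
}
```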
a more subtle hardware consideration is that both of the tablets i have so far examined , the google nexus 7 and archos 80 g9 , use internal switching power supplies that present a rapidly shifting load to the 5 v charging supply . in initial designs , the 5 v power supply on the microcontroller pcb was unable to accommodate the rapidly switched load , causing fluctuations of mv which then propagated to some of the analog signal lines . a good solution is to provide a separate regulator for the usb charging supply , operating directly from the same 6v input power that powers the overall circuit card . with this designthere is no measurable effect on the 5 v and 3.3 v power supplies used to power chips on the main circuit board .a single android app , microcontroller , supports all of the instruments described here by using a flexible user interface based on a scrolling parameter list that is updated each time a new usb connection is established .it was developed in java using the android software development kit , for which extensive documentation is available. the app is available on my web page, both as java source code and in compiled form . as shown in figs .[ systemoverview ] and [ screenshot ] , the app displays a parameter list with labels and ranges specific to the application .several check boxes and status indicators are also available , also with application - specific labels . once the user selects a parameter by touching it , its value can be changed using either a pop - up keypad or the coarse and fine sliders visible in fig .[ screenshot ] .the remainder of the display screen is reserved for real - time graphics displayed using the open - source achartengine package, , and can show plots of data values , error voltages from locking circuits , and similar information .the graphics area can be fully updated at rates up to about 15 hz .while certain tasks will eventually require their own specialized android apps to offer full control , particularly arbitrary waveform generation and diode laser frequency locking , the one - size - fits all solution offered by the microcontroller app still works surprisingly well as a starting point . for a majority of the instruments described here , it is also quite satisfactory as a permanent user interface .although this paper mentions seven distinct instruments , they are accommodated using only four pcbs , all of which share numerous design elements as well as a common usb tablet interface .multiple instruments can also share a single tablet for user interfacing because it needs to be connected only when user interaction is needed , a major advantage of this design approach .another common design element is a 5-pin programming header included on each pcb that allows a full program to be loaded in approximately 10 - 20 seconds using an inexpensive microchip pickit 3 programmer .the programs are written in c and are compiled and loaded to the programmer using the free version of the microchip xc32 compiler and the mplab x environment. the pic32mx250 processor family further enhances design flexibility by providing numerous software - reassignable i / o pins , so that a given pin on a card - edge interface terminal might be used as a timer output by one program , a digital input line by another , and a serial communication output by a third . 
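One way the firmware side of this shared interface could be organized is as a static table of parameter descriptors, each carrying a label, limits, and the current value, which is streamed to the tablet when a connection is established and updated when a slider or keypad command arrives. The structure, field names, and example values below are illustrative assumptions; the published source code may organize this differently.

```c
#include <stdint.h>

/* Descriptor for one user-adjustable parameter, as uploaded to the tablet. */
typedef struct {
    const char *label;   /* text shown in the scrolling list           */
    int32_t     min;     /* lower limit accepted from keypad or slider */
    int32_t     max;     /* upper limit                                */
    int32_t     value;   /* current value, also saved to program flash */
} param_t;

/* Example table for a temperature-controller build (values are made up). */
static param_t params[] = {
    { "Set point (mK)",      15000, 45000, 25000 },
    { "Proportional gain",       0,  4095,   200 },
    { "Integral gain",           0,  4095,    50 },
    { "Output limit (mV)",       0,  5000,  2500 },
};

enum { N_PARAMS = sizeof(params) / sizeof(params[0]) };

/* Clamp and store a new value received over USB; returns the stored value. */
int32_t param_update(unsigned idx, int32_t requested)
{
    if (idx >= N_PARAMS) return 0;
    if (requested < params[idx].min) requested = params[idx].min;
    if (requested > params[idx].max) requested = params[idx].max;
    params[idx].value = requested;
    return requested;
}
```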
to avoid repetitive layout work and to further enhance flexibility ,several commonly used circuit functions have been implemented on small daughter boards " as described in sec .[ subsec : daughterboards ] .two of these daughter boards are visible on the general - purpose lab interface shown in fig .[ labintphoto ] , as is an unpopulated additional slot. some of these daughter boards simply offer routine general - purpose functionality , such as usb power switching , while others offer powerful signal generation and processing capabilities . with the exception of the 1``.8 '' usb interface board , the daughter boards measure 1.5``.8 '' , and share a common 20-pin dip connector formed by two rows of square - pin headers .the power supply and spi lines are the same for all of the boards , while the other pins are allocated as needed . these connectors can be used as a convenient prototyping area for customizing interface designs after the circuit boards have been constructed , by wire - wrapping connections to the square pins .as already mentioned , the lab interface ( labint32 ) pcb was designed to allow a multitude of differing applications by providing hardware support for up to two interchangeable daughter boards , as well as powerful on - board interfacing capabilities . as shown in figs .[ labintphoto ] and [ labintschematic ] , the core of the design is a pic32mx250f128d microcontroller in a 44-pin package .this provides enough interface pins to handle a wide variety of needs , particularly considering that many of them are software - assignable .several card - edge connectors and jacks provide access to numerous interface pins and signals , including an 8-bit digital i / o interface , of which six bits are tolerant of 5 v logic levels .two of the connectors are designed to support an optional rotary shaft encoder and serial interface as described in sec .[ subsec : currentctrl ] .the board operates from a single 6 v , 0.5 a power module but contains several on - board supplies and regulators .these provide the 3.3 v and 5 v power required for basic operation , as well as optional supplies at -5 v and v for op amps , analog conversion , and rf signal generation .these optional supplies are small switching power supplies that operate directly from the 6 v input power , so that they do not impose switching transients on the 5 v supply as mentioned in sec .[ subsec : usb ] .there are also provisions on the main board for three particularly useful interface components : a dual 16-bit voltage - output dac with a buffered precision 2.5 v reference ( analog devices ad5689r ) , a robust instrumentation amplifier useful for input signal amplification or level shifting ( ad8226 ) , and a 1024-position digital potentiometer ( ad5293 - 20 ) that can provide computer - based adjustment of any signal controllable by a 20 k resistor , up to a bandwidth limit of about 100 khz .presently there are two demonstration - type programs available for the microcontroller on the labint card .one uses the on - board 16-bit dac to provide a high - resolution analog ramp with parameters supplied by the tablet interface .the other operates with the wvfm32 daughter board to provide a synthesized complex waveform with data output rates up to 96 mhz , as described in the next section .the simplest of the daughter boards , the tiny 1``.6 '' usb32 board , is used on all of the pcbs .it simply provides the power and switching logic for a usb otg host / device interface , by use of a 0.5 a regulator and a tps2051b power 
switch .it includes a micro usb a / b connector , which is inconveniently small for soldering but is necessary because it is the only connector type that is approved both for host - mode connections to tablets and device - mode connection to external computers. as part of the usb otg standard , an internal connection in the usb cable is used to distinguish the a ( host ) end from the b ( device ) end .the remainder of the daughter boards are slightly larger at 1.5``.8 '' , and they all share a common 20-pin dip connector as described in sec .[ subsec : modular ] .they can be used interchangeably on the labint32 pcb or for specific purposes on other pcbs , as for the dac32 daughter board needed by the tempctrl card .this section describes the wvfm32 daughter board in detail , and briefly describes three others .* wvfm32 * the wvfm32 daughter board , whose schematic is shown in fig .[ wvfm32 ] , benefits from the simplicity of a direct spi interface and provides an extremely small but highly capable instrument .it combines the remarkable analog devices ad9102 ( or ad9106 ) waveform generation chip with a fast dc - coupled differential amplifier ( two for the ad9106 ) , along with a voltage regulator and numerous decoupling capacitors necessitated by the bandwidth of about 150 - 200 mhz .the ad9102/06 provides both arbitrary waveform generation from a 4096-word internal memory and direct digital synthesis ( dds ) of sine waves , with clock speeds that can range from single - step to 160 mhz .when it is used on the labint32 pcb , a fast complementary clock generator is not available , but the programmable refclko output of the pic32 microcontroller works very well for moderate - frequency output waveforms after it is conditioned by the simple passive network shown near the center of fig .[ labintschematic ] .the refclko output can be clocked at up to 40 mhz using the pic32 system clock or at up to 96 mhz using the internal usb pll clock. even though the pic32 output pins are not specified for operation above 40 mhz , the 96 mhz clock seems to work well .the differential buffer amplifiers , ad8129 or ad8130 , can drive a terminated 50 ohm line with an amplitude of .5 v. at full bandwidth the rms output noise level is approximately 1 mv , or 1 part in 5000 of the full - scale output range .the dac switching transients were initially very large at mv , but after improving the ground connection between the complementary output sampling resistors ( r5 and r6 in fig .[ wvfm32 ] ) , the transients were reduced to 6 mv pulses about 60 ns in duration , and they alternate in sign so that the average pulse area is nearly zero .the large - signal impulse response was measured using an ad8129 to drive a 1 m , 50 cable , by setting up the ad9102 waveform generator to produce a step function .the shape of the response function is nearly independent of the step size for 15 v steps .the output reaches 0.82 v after 4 ns , approaching the limits of the 100 mhz oscilloscope used for the measurement , demonstrating that the circuit approaches its design bandwidth of mhz .however , it exhibits a slight shoulder after 4 ns , taking nearly 8 ns to reach 90% of full output and then reaching 100% at ns . 
after this initial rise ,the output exhibits slight ringing at the % level with a period of about 100 ns , damping out in about three cycles to reach the noise level .this ringing is caused at least in part by the response of the v regulators to the sudden change in current on the output line , and is not observed with smaller steps of .1 v. \2 .* dac32 * the dac32 daughter board is a straightforward design that includes one or two of the same ad5689r dac chips described in section [ sec : labint ] , providing up to four 16-bit dac outputs , together with an uncommitted dual op amp .the op amps have inputs and outputs accessible on the 20-pin dip connector , and can be used in combination with the dacs or separately . \3 . * lockin * the lockin daughter board does just what its name implies .it realizes a simple but complete lock - in amplifier , with a robust adjustable - gain instrumentation amplifier ( ad8226 or ad8422 ) driving an ad630 single - chip lock - in amplifier that works well up to about 100 khz .the output is amplified and filtered , then digitized by an ad7940 14-bit adc .the performance is determined mainly by the ad630 specifications , except that the instrumentation amplifier determines the input noise level and the common - mode rejection ratio . when used on the labint32 pcb , the digital potentiometer on the main boardcan be tied to this daughter board to allow computer - adjustable gain on the input amplifier. \4 . * adc32 * the adc32 board is still in the design stage .it will use an ad7687b 16-bit adc , together with a robust adg5409b multiplexer and an intersil isl28617fvz differential amplifier , to provide a flexible high - resolution analog - to - digital converter supporting four fully differential inputs .in addition to offering higher resolution than the built - in 10-bit adcs on the pic32 microcontroller , it offers a much wider input voltage range and considerable protection against over - voltage conditions .( color online ) major elements of the precision temperature controller .high - voltage pa340cc op amps can be substituted for the opa548 high - current drivers for use as a dual pzt driver . ] the temp32 pcb , shown in block form in fig .[ tempctrl ] , uses a 28-pin pic32mx250f128b on a card optimized specifically for low - bandwidth analog control , with three separate ground planes for digital logic , signal ground , and analog power ground . while simple enough to be used for general - purpose temperature control , the board was designed to allow the very tight control needed for single - mode distributed bragg reflector ( dbr ) lasers , for which a typical temperature tuning coefficient of 25 ghz / c necessitates mk - level control for mhz - level laser stability. as shown in fig .[ tempctrl ] , a divider formed by a thermistor and a 5 ppm / c precision resistor provides the input to a 22-bit adc .the microchip mcp3550 - 60 is a low - cost sigma - delta " adc that provides very high accuracy and excellent rejection of 60 hz noise at low data rates ( 15 hz ) .a 2.5 v precision reference is used both for the thermistor divider and to set the full - scale conversion range of the adc , making the results immune to small reference fluctuations .no buffering is required for the thermistor , although if a different sensor were used a low - noise differential amplifier might be desirable. 
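The ratiometric thermistor readout just described lends itself to a short conversion routine: because the same 2.5 V reference drives both the divider and the ADC, the digitized code depends only on the resistance ratio, which can then be converted to a temperature with the usual B-parameter (beta) model. The divider orientation, the full-scale normalization, and the thermistor constants in this sketch are assumptions for illustration only.

```c
#include <math.h>
#include <stdint.h>

#define ADC_FULL_SCALE ((double)(1u << 22)) /* 22-bit converter (normalization assumed) */
#define R_FIXED        10000.0              /* precision divider resistor, ohms (assumed) */
#define R_25           10000.0              /* thermistor resistance at 25 C, ohms        */
#define BETA           3900.0               /* thermistor B constant, kelvin (assumed)    */
#define T_25           298.15               /* 25 C in kelvin                             */

/* Convert a raw ratiometric ADC code to temperature in kelvin.
 * The thermistor is assumed to be the bottom leg of the divider, so
 * code/full_scale = R_th / (R_th + R_FIXED); the reference voltage cancels. */
double thermistor_temperature(uint32_t code)
{
    double ratio = (double)code / ADC_FULL_SCALE;      /* 0 ... 1            */
    double r_th  = R_FIXED * ratio / (1.0 - ratio);    /* thermistor, ohms   */
    /* B-parameter model: 1/T = 1/T25 + (1/B) * ln(R/R25).                   */
    return 1.0 / (1.0 / T_25 + log(r_th / R_25) / BETA);
}
```

The resulting temperature, or equivalently the raw offset from the set point in ADC units, is then the error signal on which the control loop acts.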
the microcontroller program implements a pid ( proportional - integral - differential ) controller using integer arithmetic , with several defining and constraining parameters that can be optimized via the tablet interface .the output after iteration is determined by the error , gain factors , and , the sampling frequency , and a scale factor that allows the full 16-bit range of the output dac to be used : this output is sent a dac32 daughter board , then amplified by an opa548 power op amp capable of driving 60 v or 3 a. the separate analog power ground plane for the output section of the pcb is connected to the analog signal ground plane only at a single point . with a conventional 10 k thermistor , the single - measurement rms noise level is approximately 7 adc units , corresponding to about 0.3 mk near room temperature . assuming a bandwidth of about 1 hz for heating or cooling a laser diode or optical crystal , the time - averaged noise level and accuracy can exceed 0.1 mk , adequate for most purposes in a typical laser - based research lab .the temp32 circuit board can alternatively be used as a dual 350-v pzt controller for laser spectrum analyzers or other micron - scale adjustments . to accomplish this ,the adc is omitted and the output op amps are substituted with apex pa340cc high - voltage op amps , using a simple adapter pcb that accommodates the changed pin - out . the freqsynth32 pcb , presently in the testing phase , is intended to provide accurate high - frequency rf signals for applications such as driving acousto - optic modulators .it supports up to two adf4351 ultra - broadband frequency synthesizers , which can produce far higher frequencies than dds synthesizers .these pll - based devices include internal voltage - controlled oscillators and output dividers , allowing self - contained rf generation from 35 - 4000 mhz . as shown in fig .[ freqsynth ] , an rf switch allows ns - timescale switching between the two synthesizers , or if one is turned off , it allows fast on - off switching .signal conditioning includes a low - pass filter to eliminate harmonics from the adf4351 output dividers , as well as a broadband amplifier and digital attenuator that provide an output level adjustable from about -15 dbm to + 16 dbm .the output can drive higher - power amplifier modules such as the rfhic rfc1g21h4 - 24 , which provides up to 4w in the range 20 - 1000 mhz .it is a challenge to work over such a broad frequency range .although an impedance - matched stripline design was not attempted because this would require a pcb substrate thinner than the 0.062 " norm , considerable attention has been paid to keeping the rf transmission path short , wide , and guarded " from radiative loss by numerous vias connecting the front and back ground planes on the pcb .the mpl_interface pcb , described more fully on my web page, is designed as a single - purpose interface to a laser diode current driver compatible with the mpl series from wavelength electronics . however , this circuit may be of more general interest for two reasons .first , it allows control and readout of devices with a ground reference level that can float in a range of v , with 13 - 16 bit accuracy .second , its control program includes full support for a rotary shaft encoder ( bourns em14a0d - c24-l064s ) and a simple serial lcd display ( sparkfun lcd-09067 ) , allowing the laser current to be adjusted and displayed without the usb tablet interface . 
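Rotary encoders of the type mentioned above are commonly read with a small quadrature state machine polled at a fixed rate, for instance from a timer interrupt. The lookup-table approach below is a generic illustration and an assumption about how such support could be written, not a transcription of the published control program.

```c
#include <stdint.h>

/* Transition table for a two-bit (A,B) quadrature encoder, indexed by
 * (previous_state << 2) | new_state, giving -1, 0, or +1 counts.
 * Invalid transitions (both bits changing at once) count as 0.        */
static const int8_t quad_table[16] = {
     0, +1, -1,  0,
    -1,  0,  0, +1,
    +1,  0,  0, -1,
     0, -1, +1,  0
};

static uint8_t prev_ab;   /* last sampled (A<<1)|B      */
static int32_t position;  /* accumulated encoder count  */

/* Call at a fixed rate (e.g. from a timer interrupt) with the current
 * logic levels of the encoder's A and B outputs.                       */
void encoder_poll(uint8_t a, uint8_t b)
{
    uint8_t ab = (uint8_t)((a << 1) | b);
    position += quad_table[(prev_ab << 2) | ab];
    prev_ab   = ab;
}
```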
the same encoder and display could also be attached to the labint32 pcb using jacks provided for this purpose .a significant portion of the research needs of a typical laser spectroscopy or atomic physics laboratory can be met by the four pcbs described in secs . [sec : labint ] and [ sec : specialpurpose ] , together with appropriate software and the daughter boards in sec .[ subsec : daughterboards ] .the advantages of this approach include simple and accessible modular designs , a user interface to an android tablet with interactive high - resolution graphics , and easily reconfigurable software .the circuit designs are intended for in - house construction , reducing expenses and allowing valuable educational opportunities for students , while still offering the high performance expected of a specialized research instrument .most of the pcbs can be hand - soldered , although a hot - air soldering station is required for the two rf circuits ( wvfm32 and freqsynth32 ) .full design information and software listings are available at my website. apart from these general considerations , these instruments offer some unusual and valuable capabilities .one is the single shared android app that provides a full graphical interface to numerous different devices .when the tablet is removed after adjusting the operating parameters , the microcontroller stores the updated parameter values and the instrument will continue to use them indefinitely .another is the very small size of the wvfm32 waveform generator , which takes advantage of a simple direct interface connection to a microcontroller to provide voltage - output arbitrary waveform generation and dds on a 1.5``.8 '' pcb .up to two of these pcbs can be mounted on a labint32 general - purpose interface card , itself measuring only 5``.25 '' , and only a single semi - regulated 6v , 0.5a power supply is required .similarly , the dac32 and lockin daughter boards share the same small footprint , facilitating control instrumentation that can fit inside the device being controlled .the primary usage of these instruments in our own laboratory is to control several diode lasers and to provide flexible control of numerous frequency modulators needed for research on optical polychromatic forces on atoms and molecules. although the available circuits and software reflect this focus , most of these instruments can be used for diverse applications in their present form , and all can be modified readily for special needs .+ e. e. eyler , rev .instrum . * 82 * , 013105 ( 2011 ) .e. e. eyler , web page at http://www.phys.uconn.edu/~eyler/microcontrollers/ , _ microcontroller designs for atomic , molecular , and optical physics laboratories _, university of connecticut physics dept .p. e. gaskell , j. j. thorn , s. alba , and d. a. steck , rev .instrum . * 80 * , 115103 ( 2009 ) .t. meyrath and f. schreck , a laboratory control system for cold atom experiments , atom optics laboratory , center for nonlinear dynamics and department of physics , university of texas at austin , http://www.strontiumbec.com/control/control.html , 2013 .see , for example , m. ugray , j. e. atfield , t. g. mccarthy , and r. c. shiell , rev .instrum . * 77 * , 113109 ( 2006 ) .d. sandys , life after pi , digi - key corporation , june 14 , 2013 , available at http://www.digikey.com/us/en/techzone/microcontroller/resources/articles/life-after-pi.html .mikroelektronika corp .viegradska 1a , 11000 belgrade , serbia .see http://www.mikroe.com / mini / pic32/. 
faster 50 mhz versions are also described in the data sheets , but were not yet widely available as of july 2013 .see pic32mx1xx/2xx data sheet , document ds61168e , 2012 , microchip technology inc ., 2355 west chandler blvd . , chandler , arizona , http://www.microchip.com .aoyue968a+ , available from sra soldering products , foxboro , ma . to mount surface - mount chips a thin layer of solder paste with fluxshould be applied to the pcb pads ; we have obtained good results using chipquik model smd291ax .links to information and software for usb interfacing with the aoa protocol can be found at http://source.android.com/accessories/custom.html .part of the microchip application libraries , microchip technology inc ., 2355 west chandler blvd ., chandler , arizona .see http://www.microchip.com/pagehandler/en-us/technology/usb/gettingstarted.html ( 2013 ) .w - m lee , _ beginning android application development _ , wiley publishing , indianapolis , 2011 .z. mednieks , l. dornin , g. blake meike , and m. nakamura , _ programming android _ ,oreilly media , sebastopol , ca , 2011 . android software development kit ( sdk ) , http://developer.android.com/sdk/index.html , 2013 . the free achartengine library for android , written in java , is available at http://code.google.com / p / achartengine/. mplab xc32 compiler , free version , microchip technology inc ., 2355 west chandler blvd . ,chandler , arizona , http://www.microchip.com / pagehandler / en_us / devtools / mplabxc/. version 1.20 was used for this work . on - the - go and embedded host supplement to the usb revision 2.0 specification , rev .2.0 version 1.1a , july 27 , 2012 , available for download from http://www.usb.org / developers / docs/. _ pic32 family reference manual _ , microchip technology inc .available for download ( by chapter ) at http://www.microchip.com ) .j. spencer , photodigm corp ., _ tunable laser diode absorption spectroscopy ( tldas ) with dbr lasers _ ,aug . 4 , 2011 , available at http://photodigm.com/blog/bid/62359 .j. horn and g. gleason , weigh scale applications for the mcp3551 , _ microchip application note an1030 _ , microchip technology inc .m. a. chieda and e. e. eyler , phys .rev . a * 86 * , 053415 ( 2012 ) .s. e. galica , l. aldridge , and e. e. eyler , to be published .
Several high-performance lab instruments suitable for manual assembly have been developed using low-pin-count 32-bit microcontrollers that communicate with an Android tablet via a USB interface. A single Android tablet app accommodates multiple interface needs by uploading parameter lists and graphical data from the microcontrollers, which are themselves programmed with easily modified C code. The hardware design of the instruments emphasizes low chip counts and is highly modular, relying on small "daughter boards" for special functions such as USB power management, waveform generation, and phase-sensitive signal detection. In one example, a daughter board provides a complete waveform generator and direct digital synthesizer that fits on a 1.5" x 0.8" circuit card.
where river water meets the sea , an enormous amount of energy is dissipated as a result of the irreversible mixing of fresh and salt water .the dissipated energy is about 2 kj per liter of river water , _i.e. _ equivalent to a waterfall of 200 m .it is estimated that the combined power from all large estuaries in the world could take care of approximately 20% of today s worldwide energy demand .extracting or storing this energy is therefore a potentially serious option that our fossil - fuel burning society may have to embrace in order to become sustainable .however , interesting scientific and technical challenges are to be faced . so far pressure - retarded osmosis ( pro ) andreverse electrodialysis ( red ) have been the two main and best - investigated techniques in this field of so - called `` blue energy '' , or salinity - gradient energy . in pro the osmotic pressure difference across a semi - permeable membrane is used to create a pressurised solution from incoming fresh and salt water , which is able to drive a turbine . in red stacks of alternating cation- and anion - exchange membranes are used to generate an electric potential difference out of a salinity gradient .these techniques enable the generation of ( electrical ) work at the expense of the mixing of streams with different salinity .actually , pro and red can be thought of as the inverse processes of reverse osmosis and electrodialyses , where one has to supply ( electrical ) work in order to separate an incoming salt - water stream in a saltier and a fresher stream . +the applicability of pro and red are currently being explored : a 1 - 2 kw prototype plant based on pro was started up in 2009 in norway , and a 5 kw red device is planned to be upscaled to a 50 kw demonstration project in the netherlands .interestingly , the bottleneck to large - scale applications of both these techniques is often _ not _ the available fuel there is a lot of fresh and salt water but rather the very large membranes that are required to operate at commercially interesting power outputs .tailoring such membranes with a very high transport capacity and minimal efficiency losses due to biofouling requires advanced membrane technology .recently , however , a solid - state device _ without _ membranes was constructed by brogioli , who directly extracts energy from salinity differences using porous carbon electrodes immersed in an aqueous electrolyte . due to the huge internal surface of porous carbon , of the order of m per gram of carbon, the capacitance of a pair of electrolyte - immersed porous carbon electrodes can be very large , allowing for large amounts of ionic charge to be stored in the diffuse part of the double layers of the electrolytic medium inside the pores .in fact , although the energy that is stored in the charged state of such large - area electrodes is somewhat lower than that in modern chargeable batteries , the power uptake and power delivery of these ultracapacitors is comparable or even larger .the capacitance of these devices not only scales with the contact area between the electrode and the electrolyte , but also with the inverse distance between the electronic charge on the electrode and the ionic charge in the diffuse part of the double layer , i.e. the capacitance increases with the inverse of the thickness of the ionic double layer . 
as a consequence ,the capacitance increases with increasing salinity , or , in other words , the potential increases at fixed electrode charge upon changing the medium from salt to fresh water .this variability of the capacity was used by brogioli , and also more recently by brogioli _ _ et al.__ , to extract electric work from salinity gradients without membranes .although sales _ et al . _ showed that the combination of membranes and porous electrodes has some desirable advantages , we will focus here on brogioli s experiment .the key concept of ref. is a four - stage cycle abcda of a pair of porous electrodes , together forming a capacitor , such that 1 .the two electrodes , immersed in sea water , are charged up from an initial state a with low initial charges to a state b with higher charges ; 2 .the salt water environment of the two electrodes is replaced by fresh water at fixed electrode charges , thereby increasing the electrostatic potential of the electrodes from to ; 3 .the two highly charged electrodes , now immersed in fresh water in state c , are discharged back to in state d , and finally 4 .the fresh water environment of the electrodes is replaced by salt water again , at fixed electrode charges , thereby lowering the electrode potentials to their initial values in state a. this cycle , during which a net transport of ions from salt to fresh water takes place , renders the salt water fresher and the fresh water saltier although only infinitessimally so if the reservoir volumes are infinitely large . as a consequence , the ionic entropy has increased after a cycle has been completed , and the associated free - energy reduction of the combined device and the two electrolyte reservoirs equals the electric work done by the device during the cycle , as we will see in more detail below .brogioli extrapolates an energy production of 1.6 kj per liter of fresh water in his device , equivalent to a waterfall of 160 m , quite comparable to current membrane - based techniques .these figures are promising in the light of possible future large - scale blue - energy extraction . together with the large volume of fresh and salt water at the river mouths of this planet, they also put an interesting and blessing twist to bob evans quotes at the beginning of this article .below we investigate the ( free ) energy and the performed work of electrolyte - immersed supercapacitors within a simple density functional that gives rise to a modified poisson - boltzmann ( pb ) equation for the ionic double layers . 
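As a rough orientation for the figures of about 2 kJ per liter of river water and 1.6 kJ per liter quoted above, consider ideal monovalent ions, an effectively infinite sea reservoir at concentration $\rho_s$, and a volume $V$ of river water at $\rho_f$. Reversibly transferring ion pairs from the sea into the fresh volume, each pair delivering at most $2k_BT\ln(\rho_s/\rho)$ of work when the fresh compartment has reached concentration $\rho$, gives a total extractable work
\[
W=\int_{\rho_f}^{\rho_s}2k_BT\,\ln\!\Bigl(\frac{\rho_s}{\rho}\Bigr)\,V\,\mathrm{d}\rho
 =2k_BTV\Bigl[\rho_s-\rho_f\Bigl(1+\ln\frac{\rho_s}{\rho_f}\Bigr)\Bigr].
\]
Taking $\rho_s\simeq 0.5$ M, $\rho_f\simeq 0.01$ M, and $T\simeq 300$ K (assumed values for the estimate) yields $W/V\approx 2$ kJ per liter of river water, consistent with the figures above; a working device recovers somewhat less.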
by seeking analogies with the classic carnot cycle for heat engines with their maximum efficiency to convert heat into mechanical work given the two temperatures of the heat baths , we consider modifications of brogioli s cycle that may maximise the conversion efficiency of ionic entropy into electric work given the two reservoir salt concentrations .our modification does _ not _ involve the trajectories ab and cd of the cycle where the ( dis)charging electrodes are in diffusive contact with an electrolytic reservoir with the inhomogeneously distributed salt ions `` properly '' treated grand - canonically as often advocated by bob evans .in fact , we will argue that the grand - canonical trajectories ab and cd at constant ionic chemical potential are the analogue of the isotherms in the carnot cycle .rather we consider to modify the constant - charge trajectories bc and da ( which correspond to isochores in a heat - engine as we will argue ) by a continued ( dis)charging process of the electrodes at a constant number of ions ( which corresponds to an adiabatic ( de)compression in the heat engine ) .in other words , we propose to disconnect the immersed electrodes from the ion reservoirs in bc and da , treating the salt ions canonically while ( dis)charging the electrodes , thereby affecting the ion adsorption and hence the bulk concentration from salty to fresh ( bc ) and _ vice versa _ ( da ) .finally , we will consider a ( dis)charging cycle in the ( realistic ) case of a finite volume of available fresh water , such that the ion exchange process renders this water brackish ; the heat - engine analogue is a temperature rise of the cold bath due to the uptake of heat .+ similar cycles were already studied theoretically by biesheuvel , although not in this context of osmotic power but its reverse , capacitive desalination .the `` switching step '' in biesheuvel s cycle , where the system switches from an electrolyte with a low salt concentration to an electrolyte with a higher salt concentration , appears to be somewhat different from our proposal here , e.g. without a direct heat - engine analogue .we consider two electrodes , one carrying a charge and the other a charge .the electrodes , which can charge and discharge by applying an external electric force that transports electrons from one to the other , are both immersed in an aqueous monovalent electrolyte of volume at temperature .we denote the number of cations and anions in the volume by and , respectively .global charge neutrality of the two electrodes and the electrolyte in the volume is guaranteed if .if the two electrodes are separated by a distance much larger than the debye screening length a condition that is easily met in the experiments of ref. then each electrode and its surrounding electrolyte will be separately electrically neutral such that , where is the proton charge and where we assume without loss of generality .note that this `` local neutrality '' can only be achieved provided , where the extreme case corresponds to an electrode charge that is so high that all anions in the volume are needed to screen the positive electrode and all cations to screen the negative one . for ,which we assume from now on , we can use and as independent variables of a neutral system of the positive electrode immersed in an electrolyte of volume at temperature , the helmholtz free energy of which is denoted by . 
at fixed volume and temperaturewe can write the differential of the free energy of the positive electrode and its electrolyte environment as with the average of the ionic chemical potentials and the electrostatic potential of the electrode .the last term of eq.([df ] ) is the electric work _done on _ the system if the electrode charge is increased by at fixed , and hence the electrostatic work _ done by _ the electrode system is . given that is a state function , such that for any cycle , the total work _done by _ the system during a ( reversible ) cycle equals in order to be able to _ calculate _we thus need explicit cycles _ and _ the explicit equations - of - state and/or , for which we will use a simple density functional theory to be discussed below . however , before performing these explicit calculations a general statement can be made , because there is an interesting analogy to be made with mechanical work _ done by _ a fixed amount of gas at pressure that cyclically changes its volume and entropy ( by exchanging heat ) . in that casethe differential of the thermodynamic potential reads with a state function denoting the internal energy . since then find .if the exchange of heat takes place between two heat baths at given high and low temperatures and , it is well known that the most - efficient cycle the cycle that produces the maximum work per adsorbed amount of heat from the hotter bath is the carnot cycle with its two isothermal and two adiabatic ( de-)compressions . if we transpose all the variables from the gas performing mechanical work to the immersed electrodes performing electric work , we find , , , , and , where all pairs preserve the symmetry of being both extensive or both intensive .the analogue of high and low temperatures are thus high and low ionic chemical potentials and ( corresponding to sea and river water , respectively ) , the analogue of the isothermal volume change is thus the ( dis)charging at constant , and the analogue of an adiabatic volume change is ( dis)charging at constant .therefore , the analogue of the most efficient gas cycle is the electric cycle consisting of ( grand)canonical ( dis)charging processes .indeed , the trajectories ( ab ) and ( cd ) of the experimental cycle of ref. , as discussed in section i , are of a grand - canonical nature with the electrode in contact with a salt reservoir during the ( dis)charging. however , the processes ( bc ) and ( da ) take place at constant , i.e. they are equivalent to isochores in a gas cycle , instead of adiabats .efficiency is thus to be gained , at least in principle , by changing bc and da into canonical charging processes . whether this is experimentally easily implementable is , at this stage for us , an open question that we will not answer here . for the most efficient cycles , which are schematically shown in fig.[fig : carnotcompare ] in the and the representation, we can easily calculate the work performed during a cycle . for the mechanical work of the gas onefinds , with the temperature difference and the entropy that is exchanged between the heat baths during the isothermal compression and decompression .the analogue for the work delivered by the electrode is given by , with and the number of exchanged ions between the reservoirs during the grand - canonical ( dis)charging processes .this result also follows directly from eq.([w ] ) . below we will calculate and hence from a microscopic theorymoreover , we will also consider several other types of cycles . 
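As a quick numerical illustration of the mapping just made, the work of the ideal Carnot-like cycle per transported ion is simply the difference of the ideal-solution chemical potentials of the two baths. The reservoir concentrations below are our own assumptions, not values taken from the experiments.

```python
import numpy as np

kT = 1.38e-23 * 293.0       # thermal energy [J]
c_H, c_L = 0.5, 0.02        # sea / river salt concentration [mol/l], assumed

# Ideal-solution (average ionic) chemical potential difference between the baths;
# the reference concentration drops out of the difference.
dmu = kT * np.log(c_H / c_L)            # [J] per transported ion

print(f"work per transported ion: {dmu/kT:.2f} kT  "
      f"({dmu*6.022e23/1e3:.2f} kJ/mol)")

# Analogue of the Carnot result W = (T_H - T_L) * dS: here W = (mu_H - mu_L) * dN,
# with dN the number of ions exchanged along the two grand-canonical branches.
dN_per_area = 1.0e18                    # ions exchanged per m^2 per cycle, assumed
print(f"work per cycle and unit area: {dmu*dN_per_area*1e3:.2f} mJ/m^2")
```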
in the context of the thermodynamics that we discuss here , it is also of interest to analyse the `` global '' energy flow that gives rise to the work that the immersed porous electrodes deliver per ( reversible ) cycle .for this analysis it is crucial to realise that the device and the two salt reservoirs at chemical potentials and are considered to be at constant temperature throughout , which implies that they are thermally coupled to a heat bath ( that we call the `` atmosphere '' here for convenience ) at temperature .we will show that with every completed cycle , during which ions are being transported from the sea to the river water , a net amount of heat flows from the atmosphere to the two salt reservoirs , and that in the limit that the ion clouds do not store potential energy due to multi - particle interactions .this may at first sight contradict kelvin s statement of the second law ( `` no process is possible whose sole result is the complete conversion of heat into work '' ) , but one should realise that the cycle _ also _ involves the transport of ions from the sea to the river ; the word `` sole '' in kelvin s statement is thus crucial , of course .the analysis is based on the entropy changes , and of the device , the highly - concentrated salt reservoir and the one with low salt concentration , respectively , upon the completion of a cycle .given that the device returns to its initial state after a complete cycle , its entropy change vanishes and .this implies that the device , at its fixed temperature , does not adsorb or desorb any net amount of heat . during a cyclethe `` river '' gains ions , and hence its ( helmholtz or gibbs ) free energy changes by , while the `` sea '' loses ions such that .now the basic identity implies that and , where is the average energy ( or enthalpy if denotes the gibbs free energy ) per particle .we assume to be independent of density , which physically corresponds to the case that there are no multi - particle contributions to the internal energy of the reservoirs , as is the case for hard - core systems or ions treated within poisson - boltzmann theory as ideal gases in a self - consistent field .the total energy in the reservoirs therefore remains constant during mixing , such that the entropy changes of the salt reservoirs are and . as a consequence of the global preservation of entropy in the reversible cycle ,the ion exchange actually drives a heat exchange whereby the sea extracts a net amount of heat from the atmosphere , while the river dumps a net amount of heat into the atmosphere .of course the transport of ions itself is also accompanied with a heat exchange in between the reservoirs , the only relevant flow is therefore the net flow of heat out of the atmosphere , which is . the energy flow and the particle flow of the device and reservoirs are tentatively illustrated in fig .[ fig : flows ] , where one should realise that the distribution of the heat flow from the atmosphere into the sea ( ) and the river ( ) depends on the heat - flow from river to sea or _ vice versa _ , which we have not considered here in any detail ; _ only _ the net heat flow is fixed by global thermodynamic arguments .this identification of with would have the interesting implication that the conversion of this work into heat again , e.g. 
by using it to power a laptop , would _ not _ contribute to ( direct ) global warming since the released heat has previously been taken out of the atmosphere .it is not clear to us , however , to what extent this scenario is truly realistic and relevant , given that rivers , seas , and the atmosphere are generally _ not _ in thermal equilibrium such that other heat flows are to be considered . in this studywe do not consider the heat fluxes at all , and just consider systems that are small enough for the temperature to be fixed .in order to calculate and of a charged electrode immersed in an electrolyte of volume , we need a microscopic model of the electrode and the electrolyte .we consider a positively charged porous electrode with a total pore volume , total surface area , and typical pore size .we write the total charge of the positive electrode as with the number of elementary charges per unit area .the negative electrode is the mirror image with an overall minus sign for charge and potential , see also fig.[fig : electrodes ] .the volume of the electrolyte surrounding this electrode is , with the volume of the electrolyte outside the electrode .the electrolyte consists of ( i ) water , viewed as a dielectric fluid with dielectric constant at temperature , ( ii ) an ( average ) number of anions with a charge and ( iii ) an ( average ) number of cations with a charge . the finite pore size inside the electrodes is taken into account here only qualitatively by regarding a geometry of two laterally unbounded parallel half - spaces representing the solid electrode , both with surface charge density , separated by a gap of thickness filled with the dielectric solvent and an inhomogeneous electrolyte characterised by concentration profile . here is the cartesian coordinate such that the charged planes are at and .the water density profile is then , within a simple incompressibility approximation with a molecular volume that is equal for water and the ions , given by .if the electrolyte in the gap is in diffusive contact with a bulk electrolyte with chemical potentials and of the cations and anions , we can write the variational grand - potential as a functional ] , which we checked to be sufficient for all values of that we considered . throughout the remainder of this text we set with nm , which restricts the total local ion concentration to a physically reasonable maximum of 10 m. the bjerrum length of wateris set to nm .we first consider a positive electrode immersed in a huge ( ) ionic bath at a fixed salt concentration , such that the ions can be treated grand - canonically . in fig .[ fig : potchargerel ] we plot ( a ) the electrode potential and ( b ) the total ion adsorption , both as a function of the electrode charge number density , for three reservoir salt concentrations , 10 , and 100 mm from top to bottom , where the full curves represent the full theory with pore size nm , the dashed curves the infinite pore limit , and the dotted curve the analytic gouy - chapman expressions ( for and ) of eqs.([gc ] ) and ( [ psigc ] ) . 
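For reference, the Gouy-Chapman expressions entering this comparison follow from the standard relation between the surface charge density and the surface potential of a single charged plate in contact with a 1:1 reservoir. The sketch below evaluates this relation in the reduced units used in the text; it contains no steric correction and no confinement, so it should only be trusted at low charge densities, as discussed next. The Bjerrum length and the concentrations are assumed values.

```python
import numpy as np

# Standard Gouy-Chapman relation for a single charged plate against a 1:1
# electrolyte reservoir (no steric correction, no confinement): sigma in
# elementary charges per nm^2, potentials in units of kT/e.
lambda_B = 0.72        # Bjerrum length of water at room temperature [nm], assumed

def debye_length(c_molar):
    """Debye length [nm]; c in mol/l, converted to nm^-3."""
    rho = c_molar * 6.022e23 * 1e-24          # ions of each species per nm^3
    return 1.0 / np.sqrt(8.0 * np.pi * lambda_B * rho)

def gc_potential(sigma, c_molar):
    """Dimensionless surface potential e*psi_0/kT as a function of the
    surface charge density sigma [e/nm^2] (Grahame / Gouy-Chapman relation)."""
    lam_D = debye_length(c_molar)
    return 2.0 * np.arcsinh(2.0 * np.pi * lambda_B * lam_D * sigma)

for c in (1e-3, 1e-2, 1e-1):
    for sigma in (0.1, 1.0, 3.0):
        psi = gc_potential(sigma, c)
        print(f"c = {c:5.3f} M, sigma = {sigma:3.1f} e/nm^2 -> "
              f"psi_0 = {psi:5.2f} kT/e = {psi*25.3:6.1f} mV")
```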
the first observation in fig .[ fig : potchargerel](a ) is that gc theory breaks down at surface charge densities beyond nm , where steric effects prevent too dense a packing of condensing counterions such that the actual surface potential rises much more strongly with than the logarithmic increase of gc theory ( see eq.([psigc ] ) ) .this rise of the potential towards v may induce ( unwanted ) electrolysis in experiments , so charge densities exceeding , say , 5 nm should perhaps be avoided .a second observation is that the finite pore size hardly affects the relation for nm , provided the steric effects are taken into account .the reason is that the effective screening length is substantially smaller than in these cases due to the large adsorption of counterions in the vicinity of the electrode .a third observation is that the full theory predicts , for the lower salt concentrations and 10 mm , a substantially larger at low , the more so for lower .this is due to the finite pores size , which is _ not _ much larger than in these cases , such that the ionic double layers must be distorted : by increasing a donnan - like potential is generated in the pore that attracts enough counterions to compensate for the electrode charge in the small available volume .interestingly , steric effects do _ not _ play a large role for in fig . [fig : potchargerel](b ) , as the full curves of the full theory with are indistinguishable from the full theory with .the finite pore size appears to be more important for , at least at first sight , at low , where appears substantially lower than from the full calculation in the finite pore .however , this is in the linear regime where the adsorption is so small that only the logarithmic scale reveals any difference ; in the nonlinear regime at high all curves for coincide and hence the gc theory is accurate to describe the adsorption .we now consider the ( reversible ) - cycle abcda shown in fig.[fig : cyclus](a ) , for an electrode with pore sizes nm that operates between two salt reservoirs at high and low salt salt concentrations m ( sea water ) and m ( river water ) , respectively , such that .for simplicity we set such that the total electrolyte volume equals the pore volume . the trajectory ab represents the charging of the electrode from an initial charge density nm to a final charge density nm at , which involves an increase in the number of ions per unit area nm using eqs.([gam ] ) and ( [ nrions ] ) which we calculate numerically with ( [ gampm ] ) .the trajectory bc is calculated using the fixed number of particles in state , , calculating a lower and lower value for for increasing s using eq.([nrions ] ) until at nm .then the discharging curve cd , at fixed is traced from surface charges down to nm for which , i.e. the discharging continues until the number of expelled ions equals their uptake during the charging process ab . 
the final trajectory , da , is characterised by the fixed number of particles in state d ( which equals that in a ) , and is calculated by numerically finding higher and higher -values from eq.([nrions ] ) for surface charges decreasing from to , where at such that the loop is closed .note that all four trajectories involve numerical solutions of the modified poisson - boltzmann problem and some root - finding to find the state points of interest , and that the loop is completely characterised by , , , and .fig.[fig : cyclus](b ) shows the concentration profiles of the anions ( full curve ) and cations ( dashed curves ) in the states a , b , c , and d , ( i ) showing an almost undisturbed double layer in a and b that reaches local charge neutrality and a reservoir concentration in the center of the pore , ( ii ) an increase of counterions at the expense of a decrease of coions in going from b to c by a trade off with the negative electrode , accompanied by the saturation of counterion concentration at 10 m close to the electrode in state c and the ( almost ) complete absence of co - ions in the low - salt states c and d , and ( iii ) the trading of counterions for coions from d to a at fixed overall ion concentration .the work done during the cycle abcda follows from either the third or the fourth term of eq.([w ] ) , yielding nm or , equivalently , for the present set of parameters .the enclosed area of the cycle abcda in fig .[ fig : cyclus ] corresponds to the amount of extracted work ( up to a factor ) , and equals the net decrease of free energy of the reservoirs . in order to compare the presently proposed type of cycle abcda with the type used in the experiments of brogioli , where `` isochores '' at constant rather than `` adiabats '' at constant were used to transit between the two salt baths , we also numerically study the dashed cycle abcda of fig.[fig : cyclus](a ) .this cycle has exactly the same trajectory ab characterised by as before .state point c at and has , however , a much smaller number of ions than in state b and c , because its surface charge .trajectory cd at fixed is quite similar to cd but extends much further down to , where the number of ions in d is even further reduced to the minimum value in the cycle nm .finally , at fixed the number of ions increases up to by gradually increasing from to .so also this cycle is completely determined by , , , and .the electric work done during the cycle abcda follows from eq.(2 ) and reads nm , which is equivalent to where is the number of ions that was exchanged between the two reservoirs during the cycle . clearly , , i.e. the brogioli - type cycle with the `` isocharges '' bc and da produces more work than the presently proposed abcda cycle with canonical trajectories bc and da . however , the efficiency of abcda , defined as , indeed exceeds the efficiency of the abcda cycle . thisis also illustrated in fig.[fig : cyclus2 ] , where the two cycles abcda ( a ) and abcda ( b ) are shown in the - representation . 
whereas the total area of ( b ) is larger than that of ( a ) , so according to eq.([w ] ) ,the larger spread in compared to renders the efficiency of ( b ) smaller .the work is therefore less than the decrease of the free energy of the reservoirs combined .the hatched area of fig.[fig : cyclus2](b ) denotes the work that could have been done with the number of exchanged ions , if a cycle of the type abcda had been used .the fact that while proves to be the case for all charge densities and for which we calculated ( of an abcda - type cycle ) and ( of an abcda - type cycle ) , at the same reservoirs and and the same pore size as above .this is illustrated in table 1 , which lists and per unit area and per transported ion for several choices of and .the data of table 1 shows that by up to a factor 2 , while by up to a factor of three for , and a factor 8 for .we thus conclude that the choice for a particular cycle to generate electric work depends on optimization considerations ; our results show that maximum work or maximum efficiency do not necessarily coincide .table 1 not only shows the work per area and per ion , but in the last column also with , _ i.e. _ the work per charge that is put on the electrode during the charging of trajectory ab .interestingly , in these units the work is comparable to provided , as also follows from gouy - chapman theory for highly charged surfaces .note that the work per transported charge does _ not _ equal the amount of performed work per transported ion as is typically much larger than .nevertheless , the fact that gives us a handle to link our results with the experiments of brogioli . during the experiment , the charge on the electrodes varies by , such that one arrives at an expected work of 6 per electrode .this agrees reasonably well with the obtained value of 5 out of the entire system .unfortunately , the relation between the electrostatic potential and the charge in the experiments differs significantly from that of our theory by at least hundreds of millivolts ; at comparable electrostatic potentials the charge density in brogioli s experiments is almost two orders of magnitude smaller than our theoretical estimates .therefore a qualitative comparison with the brogioli - cycle is at this point very hard .the relatively low experimental charge densities clarify the lower amount of work produced per gram of electrode , which was noted earlier in the text . including the stern layer may be a key ingredient that is missing in the present analysis ..the work and of cycles abcda and abcda , respectively , as illustrated in figs.[fig : cyclus](a ) and [ fig : cyclus2 ] , for several choices of surface charges and in states a and b , for systems operating between electrolytes with high hand low salt concentrations m and m , for electrodes with pore size nm .we converted and to room temperature thermal energy units , and not only express them per unit electrode area but also per exchanged number of ions and during the two cycles , respectively .note that is a property of the two reservoirs , not of the charge densities of the cycle .also note that depends on the volume of electrolyte outside the electrodes , here we successively give values for the optimal situation as well as for the situation . [ cols="^,^,^,^,^,^,^",options="header " , ]of course many more cycles are possible . 
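The trade-off between total work and work per transported ion can be illustrated without solving the modified PB equation by an intentionally crude Donnan-level caricature of the pore: ideal ions in a slit with a single uniform potential enforcing charge neutrality, no steric term and no diffuse-layer structure. This is not the functional used in this paper; it is only meant to show, with assumed parameter values, how the Carnot-like cycle (two grand-canonical and two canonical branches) and the Brogioli-like cycle (two grand-canonical branches and two charge-conserving flushes) differ in total work and in work per exchanged ion.

```python
import numpy as np
from scipy.integrate import quad

# Crude but thermodynamically consistent "Donnan" caricature of one porous
# electrode: ideal 1:1 ions in a slit of width H with a uniform potential that
# enforces charge neutrality.  No steric term, no diffuse-layer structure,
# hence NOT the modified-PB functional of the text -- illustration only.
# Units: lengths in nm, sigma in e/nm^2 (per surface, two surfaces per pore),
# concentrations in nm^-3, energies in kT, potentials in kT/e.

H = 2.0                                   # pore width [nm], assumed
molar = 6.022e23 * 1e-24                  # mol/l -> nm^-3
c_H, c_L = 0.5 * molar, 0.02 * molar      # sea / river concentrations, assumed

def psi(sigma, c):
    """Donnan potential of the pore (= electrode potential in this caricature)."""
    return np.arcsinh(sigma / (c * H))

def ions(sigma, c):
    """Total number of ions per unit pore area in contact with a reservoir at c."""
    return 2.0 * np.sqrt((c * H) ** 2 + sigma ** 2)

def conc_at_fixed_N(sigma, N):
    """Reservoir concentration a closed-off pore with N ions would be in
    equilibrium with at charge sigma (canonical branch)."""
    return np.sqrt((N / 2.0) ** 2 - sigma ** 2) / H

def work(branches):
    """W = -oint Phi dQ with dQ = 2 d(sigma) (two surfaces), in kT per nm^2."""
    return -2.0 * sum(quad(f, a, b)[0] for f, a, b in branches)

sigma_A, sigma_B = 0.5, 2.0               # charge densities in states A and B, assumed
N_A, N_B = ions(sigma_A, c_H), ions(sigma_B, c_H)

# Carnot-like cycle: grand-canonical AB (sea), canonical BC, grand-canonical CD
# (river), canonical DA.
sigma_C = np.sqrt((N_B / 2.0) ** 2 - (c_L * H) ** 2)
sigma_D = np.sqrt((N_A / 2.0) ** 2 - (c_L * H) ** 2)
W_carnot = work([
    (lambda s: psi(s, c_H),                     sigma_A, sigma_B),
    (lambda s: psi(s, conc_at_fixed_N(s, N_B)), sigma_B, sigma_C),
    (lambda s: psi(s, c_L),                     sigma_C, sigma_D),
    (lambda s: psi(s, conc_at_fixed_N(s, N_A)), sigma_D, sigma_A),
])
dN_carnot = N_B - N_A        # ions moved from sea to river (AB uptake = CD release)

# Brogioli-like cycle: same AB, switch to river at fixed charge, discharge back
# to sigma_A in river water, switch back to sea at fixed charge.
W_brogioli = work([
    (lambda s: psi(s, c_H), sigma_A, sigma_B),
    (lambda s: psi(s, c_L), sigma_B, sigma_A),
])
# ions delivered to the river per cycle, counting the discharge leg and the two
# flushing steps at fixed charge:
dN_brogioli = ions(sigma_B, c_H) - ions(sigma_A, c_L)

print(f"Carnot-like  : W = {W_carnot:.3f} kT/nm^2, "
      f"W/dN = {W_carnot/dN_carnot:.3f} kT  (ln(c_H/c_L) = {np.log(c_H/c_L):.3f})")
print(f"Brogioli-like: W = {W_brogioli:.3f} kT/nm^2, "
      f"W per exchanged ion = {W_brogioli/dN_brogioli:.3f} kT")
```

For the parameter values chosen here the caricature reproduces the qualitative picture discussed above: the Brogioli-like cycle delivers somewhat more work in total, while the Carnot-like cycle extracts the maximal work per exchanged ion, namely k_BT ln(c_H/c_L), as it must for any thermodynamically consistent model with two grand-canonical and two canonical branches.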
the two cycles abcda and abcda consideredso far generate electric work out of the mixing of two very large reservoirs of salt and fresh water , taking up ions from high - salt water and releasing them in fresh water .due to the large volume of the two reservoirs the ionic chemical potentials and , and hence the bulk salt concentrations and in the reservoirs , do not change during this transfer of a finite number of ions during a cycle .however , there could be relevant cases where the power output of an osmo - electric device is limited by the finite inflow of fresh water , which then becomes brackish due to the mixing process ; usually there is enough sea water to ignore the opposite effect that the sea would become less salty because of ion drainage by a cycle . in other words , the volume of fresh water can not always be regarded as infinitely large while the salt water reservoir is still a genuine and infinitely large ion bath .the cycle with a limited fresh water supply is equivalent to a heat - engine that causes the temperature of its cold `` bath '' to rise due to the release of rest heat from a cycle , while the hot heat bath does not cool down due to its large volume or heat capacity .here we describe and quantify a cycle abca that produces electric work by reversibly mixing a finite volume of fresh water with a reservoir of salt water .we consider a finite volume of fresh water with a low salt concentration m , such that the number of ions in this compartment equals .this fresh water is assumed to be available at the beginning of a ( new ) cycle ; its fate at the end of the cycle is to be as salty as the sea by having taken up ions from the electrode ( which received them from the sea ) , with m the salt concentration in the sea .the cycle , which is represented in fig .[ fig : newcycle ] , starts with the electrodes connected to a large volume of sea water at concentration , charged up in state a at a charge density nm . during the first part ab of the cycle ,the electrodes are further charged up until the positive one has taken up ions in its pores , which fixes the surface charge nm in state b. then the electrodes are to be disconnected from the sea , after which the charging proceeds in trajectory bc such that the increasing ion adsorption at a fixed total ion number reduces the salt chemical potential down to ( and hence the salt concentration far from the electrode surface down to ) at nm in state c. the system can then be reversibly coupled to the finite compartment of initially fresh water , after which the discharging process ca takes place such that the released ions cause the fresh water to become more salty , reaching a charge density when the salt concentration in the compartment of volume equals .the cycle can then be repeated by replacing the compartment by fresh water again .the relation between the surface potential , the charge density on the electrodes , and the ion reservoir concentration or the ion number , was numerically calculated using the modified pb - equation ( [ modpb ] ) with bc s ( [ bc1 ] ) and ( [ bc2 ] ) , combined with the adsorption relation ( [ gam ] ) , with the same parameters , nm , nm , and as before . 
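Before quoting the result of the full calculation, an order-of-magnitude check that needs no electrode model at all may be useful: if both reservoirs are treated as ideal solutions, the maximum reversible work obtainable by letting a volume V of river water at concentration c_L take up ions from an effectively infinite sea at c_H until it reaches c_H is W = 2 k_B T V [c_H - c_L - c_L ln(c_H/c_L)]. This is our own ideal-gas bookkeeping, written down only as a plausibility check; whether it coincides term by term with eq. ([newwork]) below cannot be verified from the extracted text. With illustrative concentrations it gives a few kJ per litre, the order of magnitude quoted in this paper and in the membrane literature.

```python
import numpy as np

R, T = 8.314, 293.0          # gas constant [J/(mol K)] and temperature [K]

def max_work_per_litre(c_H, c_L):
    """Reversible work [kJ] per litre of river water (concentration c_L, mol/l)
    brought up to the sea concentration c_H by ion uptake from an infinite sea,
    assuming ideal-solution free energies for both reservoirs (own derivation):
    W/V = 2 R T (c_H - c_L - c_L ln(c_H / c_L))."""
    return 2.0 * R * T * (c_H - c_L - c_L * np.log(c_H / c_L)) / 1e3

print(max_work_per_litre(0.5, 0.02))     # roughly 2 kJ per litre (assumed concentrations)
print(max_work_per_litre(0.5, 0.005))    # nearly ideal fresh water
```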
the enclosed area in fig .[ fig : newcycle ] gives , using eq.([w ] ) , the net amount of ( reversible ) work performed during a cycle , which again equals the decrease of the free energy of the salt water reservoir and fresh water volume combined .in fact , this work can be calculated analytically as , \label{newwork}\ ] ] where .this result agrees with the prediction by pattle for very small . for the parameters of the cycle discussed here, we find kj per liter of fresh water , or nm .the figures show that the amount of work per ion that is transported is typically smaller than what we found for the carnot - like cycle , of course .+ we may compare this reversible cycle with the one proposed by biesheuvel for the reverse process , which is called desalination .this cycle is very similar to ours , except that biesheuvel s switching step from sea - to river water and v.v .is actually an iso- trajectory instead of our iso- tracjectory .this iso - adsorption trajectory does not seem to have a reversible heat engine analogue , as the degree of reversibility depends on the extent to which the electrolyte can be drained out of the micropores before . nevertheless , we find agreement with the work that must be provided in the case of only a relatively small output volume of fresh water , and the expression found by biesheuvel exactly equals eq .( [ newwork ] ) .the point we would like to stress is that irreversible mixing during the switching step can be prevented by introducing a canonical(iso- ) part into the cycle which enables the system to adapt to a new salt concentration in a time - reversible fashion , such that maximal efficiency is preserved .although substantial attempts to extract renewable energy from salinity gradients go back to the 1970 s , there is considerable recent progress in this field stemming from the availability of high - quality membranes and large - area nanoporous electrodes with which economically interesting yields of the order of 1 kj per liter of fresh water can be obtained equivalent to a waterfall of one hundred meter . the key concept in the recent experiments of brogioli is to cyclically ( dis)charge a supercapacitor composed of two porous electrodes immersed in sea ( river ) water . in this articlewe have used a relatively simple density functional , based on mean - field electrostatics and a lattice - gas type description of ionic steric repulsions , to study the relation between the electrode potential , the electrode surface charge density , the ion adsorption , the ion chemical potential , and the total number of ions in a ( slit - like ) pore of width that should mimic the finite pores of the electrodes . with this microscopic information at hand , we have analysed several cycles of charging and discharging electrodes in sea and river water . 
by making an analogy with heat engines , for which the most - efficient cycle between two heat baths at fixed temperatures is the carnot cycle with isothermal and adiabatic ( de)compressions , we considered cycles composed of iso and iso ( dis)charging processes of the electrodes .we indeed found that these cycles are maximally efficient in the sense that the work per ` consumed ' ion that is transported from the sea to the river water during this cycle is optimal , given the salt concentrations in the river- and sea water .however , although the cycles used by brogioli , with two iso and two iso trajectories ( where the latter are analogous to isochores in the heat - engine ) are less efficient per transported ion , the total work of a `` brogioli - cycle '' is larger , at least when comparing cycles that share the iso charging in the sea water trajectory .we find , for electrode potentials mv and electrode charge densities nm in electrolytes with salt concentrations m ( sea water ) and m ( river water ) , typical amounts of delivered work of the order of several per transported ion , which is equivalent to several per nm of electrode area or several kj per liter of consumed fresh water .+ our calculations on the brogioli type of cycle agree with experiments regarding the amount of performed work per cycle with respect to the variance in the electrode charge during ( dis- ) charging ; each unit charge is responsible for an amount of work that is given by the difference in chemical potential between the two reservoirs .however , the experimental data concerning the electrostatic potential could _ not _ be mapped onto our numerical data .this could very well be due to the fact that the pore size in the experiments by brogioli is very small such that ion desolvation , ion polarisability , and image charge effects may be determining the relation between the surface charge and electrostatic potential .models which go beyond the present mean - field description are probably required for a quantitative description of this regime .another ingredient in a more detailed description must involve the finite size of the ions combined with the microscopic roughness of the carbon .the ions in the solvent and the electrons ( holes ) in the electrode material can not approach infinitely close , and the resulting charge free zone can be modeled by a stern capacitance .standard gouy - chapman - stern ( gcs ) theory has successfully been applied to fit charge - voltage curves for porous carbon capacitive cells within the context of osmo - electrical and capacitive desalination devices .extensions to gcs theory are currently being developed which include finite pore sizes , in order to obtain a physically realistic and simultaneously accurate model of the stern layer within this geometry .+ throughout this work we ( implicitly ) assumed the cycles to be reversible , which implies that the electrode ( dis)charging is carried out sufficiently slowly for the ions to be in thermodynamic equilibrium with the instantaneous external potential imposed by the electrodes .this reversibility due to the slowness of the charging process has the advantage of giving rise to optimal conversion from ionic entropy to electric work in a given cycle .however , if one is interested in optimizing the _ power _ of a cycle , _ i.e. 
_ the performed work per unit time , then quasistatic processes are certainly not optimal because of their inherent slowness .heuristically one expects that the optimal power would result from the trade - off between reversibility ( slowness ) to optimize the work per cycle on the one hand , and fast electronic ( dis)charging processes of the electrodes and fast fluid exchanges on time scales below the relaxation time of the ionic double layers on the other .an interesting issue is the diffusion of ions into ( or out of ) the porous electrode after switching on ( or off ) the electrode potential .ongoing work in our group employs dynamic density functional theory to find optimal - power conditions for the devices and cycles studied in this paper , e.g. focussing on the delay times between the electrode potential and the ionic charge cloud upon voltage ramps .+ the recovery of useful energy from the otherwise definite entropy increase at estuaries , which may be relevant because our planet is so full of water , is just one example where one can directly build on bob evans fundamental work on ( dynamic ) density functional theory , inhomogeneous liquids , electrolytes , interfaces , and adsorption .it is a great pleasure to dedicate this paper to bob evans on the occasion of his 65th birthday .rvr had the privilege of being a postdoctoral fellow in bob s group in the years 1997 - 1999 in bristol , where he experienced an unsurpassed combination of warm hospitality , unlimited scientific freedom , and superb guidance on _ any _ aspect of scientific and ( british ) daily life .bob s words `` ren , you look like a man who needs a beer ! '' when entering the post - doc office in the late afternoon , which usually meant that he was ready to discuss physics after his long day of lecturing and administration , trigger memories of evening - long pub - discussions and actual pencil - and - paper calculations on hard - sphere demixing , like - charge attraction , liquid - crystal wetting , poles in the complex plane , or hydrophobic interactions , with ( long ) intermezzos of analyses of , say , bergkamp s qualities versus those of beckham .even though not all of this ended up in publications , bob s input , explanations , historic perspective , and style contained invaluable career - determining elements for a young postdoc working with him .rvr is very grateful for all this and more .we wish bob , and of course also margaret , many happy and healthy years to come .+ we thank marleen kooiman and maarten biesheuvel for useful discussions .this work was financially supported by an nwo - echo grant .
a huge amount of entropy is produced at places where fresh water and seawater mix , for example at river mouths . this mixing process is a potentially enormous source of sustainable energy , provided it is harnessed properly , for instance by a cyclic charging and discharging process of porous electrodes immersed in salt and fresh water , respectively [ d. brogioli , phys . rev . lett . * 103 * , 058501 ( 2009 ) ] . here we employ a modified poisson - boltzmann free - energy density functional to calculate the ionic adsorption and desorption onto and from the charged electrodes , from which the electric work of a cycle is deduced . we propose optimal ( most efficient ) cycles for two given salt baths involving two canonical and two grand - canonical ( dis)charging paths , in analogy to the well - known carnot cycle for heat - to - work conversion from two heat baths involving two isothermal and two adiabatic paths . we also suggest a slightly modified cycle which can be applied in cases where the stream of fresh water is limited .
+ bob evans about water ( 1998 ) .
+ bob evans about ionic criticality ( 1998 ) .
we consider the following sequence space model where are the coefficients of a signal and the noise has a diagonal covariance matrix .this heterogeneous model may appear in several frameworks where the variance is fluctuating , for example in heterogeneous regression , coloured noise , fractional brownian motion models or statistical inverse problems , for which the general literature is quite exhaustive .the goal is to estimate the unknown parameter by using the observations .model selection is a core problem in statistics .one of the main reference in the field dates back to the aic criterion , but there has been a huge amount of papers on this subject ( e.g. , ) .model selection is usually linked to the choice of a penalty and its precise choice is the main difficulty in model selection both from a theoretical and a practical perspective .there is a close relationship between model selection and thresholding procedures , which is addressed e.g. in .the idea is that the search for a `` good penalty '' in model selection is indeed very much related to the choice of a `` good threshold '' in wavelet procedures .there exists also a fascinating connection between the false discovery rate control ( fdr ) and both thresholding and model selection , as studied in , which will become apparent later in our paper .our main modeling assumption is that the parameter of interest is sparse .sparsity is one of the leading paradigms nowadays and signals with a sparse representation in some basis ( for example wavelets ) or functions with sparse coefficients appear in many scientific fields ( see among many others ) . in this paper, we consider the sequence space model with heterogeneous errors .our goal is then to select among a family of models the best possible one , by use of a data - driven selection rule . in particular, one has to deal with the special heterogeneous nature of the observations , and the choice of the penalty must reflect this .the heterogenous case is much more involved than the direct ( homogeneous ) model .indeed , there is no more symmetry inside the stochastic process that one needs to control , since each empirical coefficient has its own variance . the problem andthe penalty do not only depend on the number of coefficients that one selects , but also on their position .this also appears in the minimax bounds where the coefficients in the least favourable model will go to the larger variances . by a careful and explicit choice of the penalty , however , we are able to select the correct coefficients and get a sharp non - asymptotic control of the risk of our procedure .results are also obtained for full model selection and a fdr - type control on a family of thresholds . in the case of known sparsity , we consider a non - adaptive threshold estimator and obtain a minimax upper bound .this estimator exactly attains the lower bound and is then minimax .using our model selection approach , the procedure is almost minimax ( up to a factor 2 ) .moreover , the procedure is fully adaptive .indeed , the sparsity is unknown and we obtain an explicit penalty , valid in the mathematical proofs and directly applicable in simulations .the paper is organized as follows . in the following subsection [ sec : exa ] ,we give examples of problems where our heterogeneous model appears .section [ sec : sel ] contains the data - driven procedure and a general result . 
in section [ sec : spa ] , we consider the sparsity assumptions and obtain theorems for the full subset selection and thresholding procedures .section [ sec : low ] and [ sec : upp ] are concerned with minimax lower and upper bounds . in section [ sec: num ] , we present numerical results for the finite - sample properties of the methods .consider first a model of heterogeneous regression where are i.i.d .standard gaussian , but their variance are fluctuating depending on the design points and is some spiky unknown function . in this model . by spiky functionwe mean that is zero apart from a small subset of all design points .these signals are frequently encountered in applications ( though rarely modeled in theoretical statistics ) , e.g. when measuring absorption spectra in physical chemistry ( i.e. rare well - localised and strong signals ) or jumps in log returns of asset prices ( i.e. log - price increments which fluctuate at low levels except when larger shocks occur ) .often in applications coloured noise models are adequate .let us consider here the problem of estimating an unknown function observed with a noise defined by some fractional brownian motion , ,\ ] ] where is an unknown function in , =0 , is the noise level and is a fractional brownian motion , defined by ( see ) , where is a brownian motion , , is the gamma function . the fractional brownian motion also appears in econometric applications to model the long - memory phenomena , e.g. in .the model ( [ mod ] ) is close to the standard gaussian white noise model , which corresponds to the case . here, the behaviour of the noise is different .we are not interested in the fractional brownian motion itself , but we want to estimate the unknown function based on the noisy data , as in .a very important point is linked with the definition of the fractional integration operator . in this framework , if the function is supposed to be , then the natural way is to consider the periodic version of fractional integration ( given in ( [ frac ] ) ) , such that and thus ( see p.135 in ) , by integration and projection on the cosine ( or sine ) basis and using ( [ eigen ] ) , one obtains the sequence space model ( as in ) , where are independent with , where and .consider the following framework of a general inverse problem where is a known injective compact linear bounded operator , an unknown -dimensional function , is a gaussian white noise and the noise level .we will use here the framework of singular values decomposition ( svd ) , see e.g. .denote by the eigenfunctions of the operator associated with the strictly positive eigenvalues . remark that any function may be decomposed in this orthonormal basis as , where .let be the normalized image basis by projection and division by the singular values , we may obtain the empirical coefficients we then obtain a model in the sequence space ( see ) with and .we consider the sequence space model for coefficients of an unknown -function with respect to an orthornormal system .the estimator over an arbitrary large , but finite index set is then defined by where the empirical version of is defined as we write and for the cardinality of .let us write for the covariance matrix of the restricted to the indices for which , i.e. with .by we denote the operator norm , i.e. the largest absolute eigenvalue . 
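To fix ideas before the selection rules are introduced, the sketch below simulates data from the heteroscedastic sequence model and evaluates, for an arbitrary candidate subset, the penalized empirical risk that is minimised in what follows. The displayed formulas of the paper did not survive extraction, so the penalty is hard-coded in the form that appears in the proofs, pen(h) = sum_j 4 sigma_(j)^2 (log(en/j) + j^{-1} log_+(n |Sigma|)), with sigma_(j)^2 the j-th largest selected variance and |Sigma| the operator norm of the (diagonal) covariance; the constants and all parameter values below are assumptions to be checked against the paper's own display.

```python
import numpy as np

rng = np.random.default_rng(0)

# heteroscedastic sequence model y_i = f_i + sigma_i * xi_i with a sparse f
# (n, the sparsity and the variance profile below are assumed, for illustration)
n = 1000
sigma = 0.5 + 2.0 * np.arange(1, n + 1) / n
support = rng.choice(n, size=30, replace=False)
f = np.zeros(n)
f[support] = 6.0 * sigma[support] * rng.choice([-1.0, 1.0], size=30)
y = f + sigma * rng.standard_normal(n)

def penalty(sigma2_selected, n, sigma2_op):
    """pen(h) = 4 * sum_j s_(j)^2 (log(e n / j) + log_+(n |Sigma|) / j), with the
    selected variances s_(j)^2 in decreasing order (form taken from the proofs;
    constants should be checked against the paper)."""
    s = np.sort(np.asarray(sigma2_selected))[::-1]
    j = np.arange(1, s.size + 1)
    return 4.0 * np.sum(s * (np.log(np.e * n / j) + max(np.log(n * sigma2_op), 0.0) / j))

def penalized_risk(y, sigma2, selected):
    """Empirical risk of the projection estimator on 'selected' (up to a constant
    not depending on the subset) plus the penalty; for a diagonal covariance the
    operator norm is simply the largest variance."""
    sel = np.asarray(list(selected))
    return -np.sum(y[sel] ** 2) + penalty(sigma2[sel], len(y), sigma2.max())

# example: compare the true support with the 30 largest observations in absolute value
print(penalized_risk(y, sigma ** 2, support),
      penalized_risk(y, sigma ** 2, np.argsort(-np.abs(y))[:30]))
```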
the random elements take values in the sample space .we now consider an arbitrary family of borel - measurable data - driven subset selection rules .define an estimator by minimizing in the family the penalized empirical risk : with the penalty where denotes the -th largest value among and .remark that is defined in an equivalent way by where then , define the data - driven estimator the next lemma shows that one has an explicit risk hull , a concept introduced in full detail in .[ th : hull ] the function with the penalty from is a risk hull , i.e. we have recall and introduce the stochastic term remark that such that follows from let us write and let denote the inverse rank of in ( e.g. , if such that note that for any enumeration of by monotonicity : holds .we therefore obtain with the inverse order statistics and ( i.e. etc . ) of and , respectively , { \leqslant}\operatorname{{\mathbf e}}\big[\sum_{j=1}^n\sigma_{(j)}^2\big(\zeta_{(j)}^2 - 2(\log(ne / j)+j^{-1}\log_+(n{\lvert \sigma \rvert}))\big)_+ \big].\ ] ] it remains to evaluate ] .let us consider the intuitive version of sparsity by assuming a small proportion of nonzero coefficients ( cf . ) , i.e. the family where denotes the maximal proportion of nonzero coefficients . throughout, we assume that this proportion is such that asymptotically the goal here is to study the accuracy of the full model selection over the whole family of estimators .each coefficient may be chosen to be inside or outside the model .let us consider the case where denotes all deterministic subset selections , [ th : ms ] let be the data - driven rule defined in ( [ hstar ] ) with as in ( [ hms ] ) .we have , for , uniformly over , in particular , if ( i.e. , any polynomial growth for is admissible ) and , then we obtain for the right - hand side in theorem [ th : oracle ] can be bounded by considering the oracle such that +\omega_\delta & { \leqslant}(1+\delta)2pen(h^f)+\omega_\delta.\end{aligned}\ ] ] we will use the following inequality , as , by comparison with the integral . since , we obtain that as . on the other hand , we have we use which shows choosing such that , e.g. 
, we thus find , as , using theorem [ th : oracle ] , equation ( [ omega_del ] ) we have ( [ bound1 ] ) .moreover , using the bounds on and we obtain ( [ bound1b ] ) .consider now a family of threshold estimators .the problem is to study the data - driven selection of the threshold .let us consider the case where denotes the threshold selection rules with arbitrary threshold values note that consists of different subset selection rules only and can be implemented efficiently using the order statistics of .[ th : tr ] let be the data - driven rules defined in ( [ hstar ] ) with as in ( [ htr ] ) .if , then we have , for , uniformly over assuming for the growth bounds with a second condition always checked if , this inequality simplifies to let us now evaluate the right - hand side of the oracle inequality in theorem [ th : oracle ] for the threshold selection rules with arbitrary threshold values defined in ( [ htr ] ) .given an oracle parameter ( to be determined below ) , we set .we obtain with denoting the ( inverse ) rank of the coefficient with index among \\ & { \leqslant}\mathbf{e}_f\big [ \sum_{\lambda\in\lambda}\big({\bf 1}({\lvert x_\lambda \rvert}{\leqslant}\tau_\lambda)f_\lambda^2- { \bf 1}({\lvert x_\lambda \rvert}>\tau_\lambda ) ( x_\lambda^2-f_\lambda^2)\\ & \qquad\qquad + 4\sigma_\lambda^2 { \bf 1}({\lvert x_\lambda \rvert}>\tau_\lambda)(\log(en / r_\lambda ) + r_\lambda^{-1}\log_+(n{\lvert \sigma \rvert}))\big)\big].\end{aligned}\ ] ] let us first show that ] applied to and deterministic yields &{\leqslant}c_\lambda+\int_{c_\lambda}^\infty p({\lvert \xi_\lambda \rvert}{\geqslant}\sqrt{z}-\tau_\lambda)\,dz { \leqslant}c_\lambda+2e^{-(\sqrt{c_\lambda}-\tau_\lambda)^2/(2\sigma_\lambda^2)}.\end{aligned}\ ] ] in order to ensure whenever , we are lead to choose in the sequel we bound simply by in the case . then using again the bound on sums of logarithms ( [ sumlog ] ) and as well as the concavity of for bounding the sum of exponentials, we obtain that over the signal part satisfies \\ & { \leqslant}\sum_{\lambda\in\lambda , f_\lambda\not=0}(c_\lambda+2e^{-(\sqrt{c_\lambda}-\tau_\lambda)^2/(2\sigma_\lambda^2 ) } ) { \leqslant}n\gamma_n(c_n{\lvert \sigma_{h_f } \rvert}+2 e^{-(c_n-(t^0)^2)/2}),\,\end{aligned}\ ] ] where owing to we even have \nonumber\\ & { \leqslant}{\lvert \sigma_{h_f } \rvert}n\gamma_n c_n(1+o(1)).\label{sig2}\end{aligned}\ ] ] on the other hand , for the non - signal part , we introduce and we use the large deviation bound : =n p({\lvert \xi_\lambda \rvert}>\tau_\lambda){\leqslant}2n(t^0)^{-1}e^{-(t^0)^2/2}. 
\ ] ] again by considering worst case permutations instead of the ranks , using ( [ sumlog ] ) and by jensen s inequality for the concave functions we infer : \\ & { \leqslant}4{\lvert \sigma \rvert}\mathbf{e}_f\left[\sum_{\lambda\in\lambda } { \bf 1}({\lvert \xi_\lambda \rvert}>\tau_\lambda)(\log(en / r_\lambda)+r_\lambda^{-1}\log_+(n{\lvert \sigma \rvert}))\right]\\ & { \leqslant}4{\lvert \sigma \rvert}\mathbf{e}\left[\sum_{j=1}^{n_\tau}(\log(en / j)+j^{-1}\log_+(n{\lvert \sigma \rvert}))\right]\\ & { \leqslant}4{\lvert \sigma \rvert}\mathbf{e}\left[(n_\tau\log(en / n_\tau)+\log(n_\tau)\log_+(n{\lvert \sigma \rvert}))\right ] ( 1+o(1))\\ & { \leqslant}4{\lvert \sigma \rvert } ( 2n ( t^0)^{-1 } e^{-(t^0)^2/2}(1+t_0 ^ 2/2)+(\log n-(t^0)^2/2)\log_+(n{\lvert \sigma \rvert}))(1+o(1))\\ & { \leqslant}2{\lvert \sigma \rvert}(2n e^{-(t^0)^2/2}t^0+(2\log n-(t^0)^2)\log_+(n{\lvert \sigma \rvert}))(1+o(1)).\end{aligned}\ ] ] for the chosen , the total bound over is thus , by ( [ sig2 ] ) , ( [ eq42 ] ) and by definition of in ( [ c_n ] ) , this yields the asserted general bound and inserting the bound for gives directly the second bound . _heterogeneous case ._ one may compare the method and its accuracy with other results in related frameworks .for example , considers a very close framework of model selection in inverse problems by using the svd approach .this results in a noise which is heterogeneous and diagonal . study the related topic of inverse problems and wavelet vaguelette decomposition ( wvd ) , built on .the framework in is more general than ours .however , this leads to less precise results . in all their results , there exist universal constants which are not really controlled .this is even more important for the constants inside the method , for example in the penalty .our method contains an explicit penalty .it is used in the mathematical results and also in simulations without additional tuning .a possible extension of our method to the dependent wvd case does not seem straight - forward ._ homogeneous case ._ let us compare with other work for the homogeneous setting .there exist a lot of results in this framework , see e.g. .again those results contain universal constants , not only in the mathematical results , but even inside the methods .for example , constants in front of the penalty , but also inside the fdr technique , with an hyper - parameter which has to be tuned .the perhaps closest paper to our work is in the homogeneous case .our penalty is analogous to `` twice the optimal '' penalty considered in .this is due to difficulties in the heterogenous case , where the stochastic process that one needs to control is much more involved in this setting .indeed , there is no more symmetry inside this stochastic process , since each empirical coefficient has its own variance . the problem andthe penalty do not only depend on the number of coefficients that one selects , but also on their position .this leads to a result , where one gets a constant in .the potential loss of the factor 2 in the heterogeneous framework might possibly be avoidable in theory , but in simulations the results seem comparably less sensitive to this factor than to other modifications , e.g. 
to how many data points , among the non - zero coefficients , are close to the critical threshold level , which defines some kind of effective sparsity of the problem ( often muss less than ) .this effect is not treated in the theoretical setup in all of the fdr - related studies , where implicitly a worst case scenario of the coefficients magnitude is understood .[ th : lower ] for any estimator based on observations we have the minimax lower bound { \geqslant}\sup_{\alpha_n\in s_\lambda(n\gamma_n , c_n ) } 2\big(1+o(1)\big)\big(\sum_{\lambda\in\lambda } \sigma_\lambda ^2\alpha_{\lambda , n}\log(\alpha_{\lambda , n}^{-1})\big)\ ] ] for some where ^\lambda\,|\,\sum_\lambda\alpha_\lambda{\leqslant}r(1-c)\} ] and the bayes risk the expectation of the conditional variance , which is calculated as =\operatorname{{\mathbf e}}[f_\lambda ^2]-\operatorname{{\mathbf e}}[\operatorname{{\mathbf e}}[f_\lambda |x_\lambda ] ^2 ] = \mu_{\lambda , n}^2\big(\alpha_{\lambda , n}-\int \frac{\alpha_{\lambda , n}^2{\varphi}_{\mu_{\lambda , n},\sigma_\lambda ^2}(x)^2 } { ( 1-\alpha_{\lambda , n}){\varphi}_{0,\sigma_\lambda ^2}(x)+ \alpha_{\lambda , n}{\varphi}_{\mu_{\lambda , n},\sigma_\lambda ^2}(x)}\,dx\big).\ ] ] the integral can be transformed into an expectation with respect to and bounded by jensen s inequality : \\ & \qquad { \leqslant}\alpha_{\lambda , n } \big(1+\alpha_{\lambda , n}^{-1}(1-\alpha_{\lambda , n})\operatorname{{\mathbf e}}[\exp(\sigma_\lambda ^{-1}z-\mu_{\lambda , n}^2/(2\sigma_\lambda ^2))]\big)^{-1 } \\ & \qquad = \alpha_{\lambda , n } \big(1+\alpha_{\lambda , n}^{-1}(1-\alpha_{\lambda , n})\exp((1-\mu_{\lambda , n}^2)/(2\sigma_\lambda ^2))\big)^{-1}.\end{aligned}\ ] ] since uniformly , we just select such that {\geqslant}2\sigma_\lambda ^2\alpha_{\lambda , n}(1-(\log c_n^{-1})^{-1/2})\log(\alpha_{\lambda , n}^{-1 } ) ( 1-((1+(1-\alpha_{\lambda , n})\alpha_{\lambda ,n}^{-(\log c_n^{-1})^{-1/2}}e^{1/(2\sigma_\lambda ^2)}))^{-1}).\ ] ] noting uniformly over , the overall bayes risk is hence uniformly lower bounded by the supremum at is attained for where is such that holds , provided for all . the latter condition is fulfilled if .alternatively , we may write and the entropy expression becomes where the ] and increasing noise level for .the inner black diagonal lines indicate the sparse threshold ( with oracle value of ) and the outer diagonal lines the universal threshold .the non - blue points depict noisy observations .observations included in the adaptive full subset selection estimator are coloured green , while those included for the adaptive threshold estimator are the union of green and yellow points ( in fact , for this sample the adaptive thresholding selects all full subset selected points ) , the discarded observations are in magenta. we have run 1000 monte carlo experiments for the parameters , in the sparse ( ) and dense ( ) case . 
in figure [ fig2 ]the first 100 relative errors are plotted for the different estimation procedures in the dense case .the errors are taken as a quotient with the sample - wise oracle threshold value applied to the renormalised .therefore only the full subset selection can sometimes have relative errors less than one .table [ tab1 ] lists the relative monte carlo errors for the two cases .the last column reports the relative error of the oracle procedure with that discards all observations with ( not noticing the model selection complexity ) .the simulation results are quite stable for variations of the setup .altogether the thresholding works globally well .the ( approximate ) full subset selection procedure ( see below for the greedy algorithm used ) is slightly worse and exhibits a higher variability , but is still pretty good . by construction , in the dense casethe oracle sparse threshold works better than the universal threshold , while the universal threshold works better in very sparse situations .the reason why the sparse threshold even with a theoretical oracle choice of does not work so well is that the entire theoretical analysis is based upon potentially most difficult signal - to - noise ratios , that is coefficients of the size of the threshold or the noise level . here , however , the effective sparsity is larger ( i.e. , effective is smaller ) because the uniformly generated non - zero coefficients can be relatively small especially at indices with high noise level , see also figure [ fig1 ] .let us briefly describe how the adaptive full subset selection procedure has been implemented .the formula attributes to each selected coefficient the individual penalty with the inverse rank of .due to all coefficients with are included into in an initial step .then , iteratively is extended to by including all coefficients with the iteration stops when no further coefficients can be included .the estimator at this stage definitely contains all coefficients also taken by . in a second iterationwe now add in a more greedy way coefficients that will decrease the total penalized empirical risk . including a new coefficient ,adds to the penalized empirical risk the ( positive or negative ) value here , is to be understood as the rank at when setting .consequently , the second iteration extends each time by one coefficient for which the displayed formula gives a negative value until no further reduction of the total penalized empirical risk is obtainable .this second greedy optimisation does not necessarily yield the optimal full subset selection solution , but most often in practice it yields a coefficient selection with a significantly smaller penalized empirical risk than the adaptive threshold procedure .the numerical complexity of the algorithm is of order due to the second iteration in contrast to the exponential order when scanning all possible subsets .a more refined analysis of our procedure would be interesting , but might have minor statistical impact in view of the good results for the straight - forward adaptive thresholding scheme .the authors would like to thank iain johnstone , debashis paul and thorsten dickhaus for interesting discussions .m. rei gratefully acknowledges financial support from the dfg via research unit for1735 _ structural inference in statistics_. massart p. ( 2007 ) . 
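As a concrete companion to the greedy search described in the numerical section above, a compact implementation might look as follows. It follows one reading of that description: a first pass that keeps adding every coefficient whose squared observation exceeds its individual penalty term at its would-be rank, and a second pass that adds one coefficient at a time whenever doing so lowers the total penalized empirical risk, rank shifts of the other coefficients included. The penalty form and constants are the same assumptions as in the earlier sketch, clarity is preferred over the O(n^2) bookkeeping mentioned in the text, and none of this is the authors' reference code.

```python
import numpy as np

def pen_terms(sig2_selected, n, sig2_op):
    """Individual penalty terms 4*s_(j)^2*(log(e*n/j) + log_+(n*|Sigma|)/j) for the
    selected variances in decreasing order (assumed form, as in the earlier sketch)."""
    s = np.sort(np.asarray(sig2_selected))[::-1]
    j = np.arange(1, s.size + 1)
    return 4.0 * s * (np.log(np.e * n / j) + max(np.log(n * sig2_op), 0.0) / j)

def total_penalty(sig2, subset, n, sig2_op):
    return pen_terms(sig2[list(subset)], n, sig2_op).sum() if subset else 0.0

def greedy_subset_selection(y, sig2):
    """Two-stage greedy approximation of the full subset selection rule."""
    n, sig2_op = len(y), sig2.max()        # diagonal covariance: operator norm = max variance
    y2 = y ** 2
    log_plus = max(np.log(n * sig2_op), 0.0)

    def term(lam, rank):                   # individual penalty term of lam at a given rank
        return 4.0 * sig2[lam] * (np.log(np.e * n / rank) + log_plus / rank)

    # stage 1: keep including every coefficient that beats its individual term
    S = set(np.flatnonzero(y2 >= 4.0 * sig2 * (np.log(np.e * n) + log_plus)))
    changed = True
    while changed:
        changed = False
        for lam in range(n):
            if lam in S:
                continue
            rank = 1 + sum(sig2[m] > sig2[lam] for m in S)
            if y2[lam] >= term(lam, rank):
                S.add(lam)
                changed = True

    # stage 2: add one coefficient at a time whenever it lowers the total
    # penalized empirical risk (including the rank shifts it causes)
    while True:
        base = total_penalty(sig2, S, n, sig2_op)
        deltas = {lam: total_penalty(sig2, S | {lam}, n, sig2_op) - base - y2[lam]
                  for lam in range(n) if lam not in S}
        best = min(deltas, key=deltas.get) if deltas else None
        if best is None or deltas[best] >= 0:
            return np.array(sorted(S))
        S.add(best)

# demonstration on synthetic data from the heteroscedastic model (assumed parameters)
rng = np.random.default_rng(0)
n = 400
sig2 = (0.5 + 2.0 * np.arange(1, n + 1) / n) ** 2
support = rng.choice(n, size=20, replace=False)
f = np.zeros(n)
f[support] = 6.0 * np.sqrt(sig2[support]) * rng.choice([-1.0, 1.0], size=20)
y = f + np.sqrt(sig2) * rng.standard_normal(n)

S_hat = greedy_subset_selection(y, sig2)
print(f"selected {S_hat.size} coefficients, "
      f"{np.intersect1d(S_hat, support).size} of them from the true support")
```

The adaptive threshold rule can be evaluated with the same criterion by restricting the candidate subsets to the n nested sets obtained by ordering the observations in absolute value, which is one way to implement the comparison reported in the simulations.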
we consider a gaussian sequence space model where the noise has a diagonal covariance matrix. we consider the situation where the parameter vector is sparse. our goal is to estimate the unknown parameter by a model selection approach. the heterogeneous case is much more involved than the direct model. indeed, there is no longer any symmetry inside the stochastic process that one needs to control, since each empirical coefficient has its own variance. the problem and the penalty do not depend only on the number of coefficients that one selects, but also on their positions. this appears also in the minimax bounds, where the worst coefficients will go to the larger variances. however, with a careful and explicit choice of the penalty we are able to select the correct coefficients and get a sharp non-asymptotic control of the risk of our procedure. some simulation results are provided.
in order to formally handle (specify and prove) some properties of prolog execution, we needed above all a definition of a port. a port is perhaps the single most popular notion in prolog debugging, but theoretically it appears still rather elusive. the notion stems from the seminal article of l. byrd, which identifies four different types of control flow in a prolog execution, as movements in and out of procedure _boxes_ via the four _ports_ of these boxes: * _call_, entering the procedure in order to solve a goal, * _exit_, leaving the procedure after a success, i.e. a solution for the goal is found, * _fail_, leaving the procedure after the failure, i.e. there are no (more) solutions, * _redo_, re-entering the procedure, i.e. another solution is sought for. in this work, we present a formal definition of ports, which is a calculus of execution states, and hence provide a formal model of pure prolog execution, s:pp. our approach is to define ports by virtue of their effect, as _port transitions_. a port transition relates two _events_. an event is a state in the execution of a given query with respect to a given prolog program. there are two restrictions we make: 1. the program has to be pure 2. the program shall first be transformed into a canonical form. the first restriction concerns only the presentation in this paper, since our model has been prototypically extended to cover the control flow of full standard prolog, as given in . the canonical form we use is the common single-clause representation. this representation is arguably `near enough' to the original program; the only differences concern the head unification (which is now delegated to the body) and the choices (which are now uniformly expressed as disjunction). first we define the canonical form, into which the original program has to be transformed. such a syntactic form appears as an intermediate stage in defining clark's completion of a logic program, and is used in logic program analysis. however, we are not aware of any consensus upon the name for this form. some of the names in the literature are _single-clausal form_ and _normalisation of a logic program_. here we use the name _canonical form_, partly on the grounds of our imposing a transformation on if-then as well (this additional transformation is of no interest in the present paper, which has to do only with pure prolog, but we state it for completeness). [def:canon] we say that a predicate is in the canonical form, if its definition consists of a single clause. here is a ``canonical body'', of the form , and is a ``canonical head'', i.e. are distinct variables not appearing in . further, is a disjunction of canonical bodies (possibly empty), is a conjunction of goals (possibly empty), and is a goal (for facts: true). additionally, each if-then goal must be part of an if-then-else (like ). for the following program

q(a, b).
q(Z, c) :- r(Z).
r(c).

we obtain as canonical form

q(X, Y) :- X = a, Y = b, true ; X = Z, Y = c, r(Z).
r(X) :- X = c, true.

having each predicate represented as one clause, and bearing in mind the box metaphor above, we identified some elementary execution steps. for simplicity we first disregard variables.
the following table should give some intuition about the idea. the symbols in this table serve to identify the appropriate redo-transition, depending on the exit-transition. transitions are deterministic, since the rules do not overlap. [fig:port:intuit]

[def:rules] in the rules below an event is written as $p\;G\;\langle\tfrac{\Sigma}{U}\rangle$, where $p$ is a port, $G$ the current goal, $U$ the ancestor stack (a-stack) and $\Sigma$ the environment stack (b-stack); $\bullet$ denotes pushing onto a stack, and the tags $\mathsf{1}/G$, $\mathsf{2}/G$ record which member of the goal $G$ is currently being tried.

\[\begin{aligned}
\intertext{conjunction}
\mathit{call}\;(A,B)\;\langle\tfrac{\Sigma}{U}\rangle &\;\rightarrowtriangle\; \mathit{call}\;A\;\langle\tfrac{\Sigma}{\mathsf{1}/(A,B)\,\bullet\,U}\rangle &&\text{(s:conj:1)}\\
\mathit{exit}\;A'\;\langle\tfrac{\Sigma}{\mathsf{1}/(A,B)\,\bullet\,U}\rangle &\;\rightarrowtriangle\; \mathit{call}\;B''\;\langle\tfrac{\Sigma}{\mathsf{2}/(A,B)\,\bullet\,U}\rangle , \text{ with } B''\mathrel{:=}\mathrm{substof}(\Sigma)(B) &&\text{(s:conj:2)}\\
\mathit{fail}\;A'\;\langle\tfrac{\Sigma}{\mathsf{1}/(A,B)\,\bullet\,U}\rangle &\;\rightarrowtriangle\; \mathit{fail}\;(A,B)\;\langle\tfrac{\Sigma}{U}\rangle &&\text{(s:conj:3)}\\
\mathit{exit}\;B'\;\langle\tfrac{\Sigma}{\mathsf{2}/(A,B)\,\bullet\,U}\rangle &\;\rightarrowtriangle\; \mathit{exit}\;(A,B)\;\langle\tfrac{\Sigma}{U}\rangle &&\text{(s:conj:4)}\\
\mathit{fail}\;B'\;\langle\tfrac{\Sigma}{\mathsf{2}/(A,B)\,\bullet\,U}\rangle &\;\rightarrowtriangle\; \mathit{redo}\;A\;\langle\tfrac{\Sigma}{\mathsf{1}/(A,B)\,\bullet\,U}\rangle &&\text{(s:conj:5)}\\
\mathit{redo}\;(A,B)\;\langle\tfrac{\Sigma}{U}\rangle &\;\rightarrowtriangle\; \mathit{redo}\;B\;\langle\tfrac{\Sigma}{\mathsf{2}/(A,B)\,\bullet\,U}\rangle &&\text{(s:conj:6)}\\
\intertext{disjunction}
\mathit{call}\;(A;B)\;\langle\tfrac{\Sigma}{U}\rangle &\;\rightarrowtriangle\; \mathit{call}\;A\;\langle\tfrac{\Sigma}{\mathsf{1}/(A;B)\,\bullet\,U}\rangle &&\text{(s:disj:1)}\\
\mathit{fail}\;A\;\langle\tfrac{\Sigma}{\mathsf{1}/(A;B)\,\bullet\,U}\rangle &\;\rightarrowtriangle\; \mathit{call}\;B\;\langle\tfrac{\Sigma}{\mathsf{2}/(A;B)\,\bullet\,U}\rangle &&\text{(s:disj:2)}\\
\mathit{fail}\;B\;\langle\tfrac{\Sigma}{\mathsf{2}/(A;B)\,\bullet\,U}\rangle &\;\rightarrowtriangle\; \mathit{fail}\;(A;B)\;\langle\tfrac{\Sigma}{U}\rangle &&\text{(s:disj:3)}\\
\mathit{exit}\;A\;\langle\tfrac{\Sigma}{\mathsf{1}/(A;B)\,\bullet\,U}\rangle &\;\rightarrowtriangle\; \mathit{exit}\;(A;B)\;\langle\tfrac{or(A,\,\mathsf{1}/(A;B))\,\bullet\,\Sigma}{U}\rangle &&\text{(s:disj:4)}\\
\mathit{exit}\;B\;\langle\tfrac{\Sigma}{\mathsf{2}/(A;B)\,\bullet\,U}\rangle &\;\rightarrowtriangle\; \mathit{exit}\;(A;B)\;\langle\tfrac{or(B,\,\mathsf{2}/(A;B))\,\bullet\,\Sigma}{U}\rangle &&\text{(s:disj:5)}\\
\mathit{redo}\;(A;B)\;\langle\tfrac{or(C,\,n/(A;B))\,\bullet\,\Sigma}{U}\rangle &\;\rightarrowtriangle\; \mathit{redo}\;C\;\langle\tfrac{\Sigma}{n/(A;B)\,\bullet\,U}\rangle &&\text{(s:disj:6)}\\
\intertext{true}
\mathit{call}\;\mathsf{true}\;\langle\tfrac{\Sigma}{U}\rangle &\;\rightarrowtriangle\; \mathit{exit}\;\mathsf{true}\;\langle\tfrac{\Sigma}{U}\rangle &&\text{(s:true:1)}\\
\mathit{redo}\;\mathsf{true}\;\langle\tfrac{\Sigma}{U}\rangle &\;\rightarrowtriangle\; \mathit{fail}\;\mathsf{true}\;\langle\tfrac{\Sigma}{U}\rangle &&\text{(s:true:2)}\\
\intertext{fail}
\mathit{call}\;\mathsf{fail}\;\langle\tfrac{\Sigma}{U}\rangle &\;\rightarrowtriangle\; \mathit{fail}\;\mathsf{fail}\;\langle\tfrac{\Sigma}{U}\rangle &&\text{(s:fail)}\\
\intertext{explicit unification}
\mathit{call}\;t_1{=}t_2\;\langle\tfrac{\Sigma}{U}\rangle &\;\rightarrowtriangle\;
\begin{cases}
\mathit{exit}\;t_1{=}t_2\;\langle\tfrac{\sigma\,\bullet\,\Sigma}{U}\rangle , & \text{if } \mathrm{mgu}(t_1,t_2)=\sigma\\
\mathit{fail}\;t_1{=}t_2\;\langle\tfrac{\Sigma}{U}\rangle , & \text{otherwise}
\end{cases} &&\text{(s:unif:1)}\\
\mathit{redo}\;t_1{=}t_2\;\langle\tfrac{\sigma\,\bullet\,\Sigma}{U}\rangle &\;\rightarrowtriangle\; \mathit{fail}\;t_1{=}t_2\;\langle\tfrac{\Sigma}{U}\rangle &&\text{(s:unif:2)}\\
\intertext{user-defined atomary goal $G_a$}
\mathit{call}\;G_a\;\langle\tfrac{\Sigma}{U}\rangle &\;\rightarrowtriangle\;
\begin{cases}
\mathit{call}\;\sigma(B)\;\langle\tfrac{\Sigma}{G_a\,\bullet\,U}\rangle , & \text{if } H\mathrel{:\!-}B \text{ is a fresh renaming of a clause in } \mit\pi ,\\
 & \quad \mathrm{mgu}(G_a,H)=\sigma \text{ and } \sigma(G_a)=G_a\\
\mathit{fail}\;G_a\;\langle\tfrac{\Sigma}{U}\rangle , & \text{otherwise}
\end{cases} &&\text{(s:atom:1)}\\
\mathit{exit}\;B\;\langle\tfrac{\Sigma}{G_a\,\bullet\,U}\rangle &\;\rightarrowtriangle\; \mathit{exit}\;G_a\;\langle\tfrac{by(B,\,G_a)\,\bullet\,\Sigma}{U}\rangle &&\text{(s:atom:2)}\\
\mathit{fail}\;B\;\langle\tfrac{\Sigma}{G_a\,\bullet\,U}\rangle &\;\rightarrowtriangle\; \mathit{fail}\;G_a\;\langle\tfrac{\Sigma}{U}\rangle &&\text{(s:atom:3)}\\
\mathit{redo}\;G_a\;\langle\tfrac{by(B,\,G_a')\,\bullet\,\Sigma}{U}\rangle &\;\rightarrowtriangle\; \mathit{redo}\;B\;\langle\tfrac{\Sigma}{G_a'\,\bullet\,U}\rangle &&\text{(s:atom:4)}
\end{aligned}\]

[spec] about the event: * _current goal_ is a generalization of _selected literal_: rather than focusing upon single literals, we focus upon goals.
* _ancestor_ of a goal is defined in a disambiguating manner, via _tags_. * the notion of _environment_ is generalized, to contain the following _bets_: 1. variable bindings, 2. choices taken (or-branches), 3. used predicate definitions. the environment is represented by one stack, storing each bet as soon as it is computed. for an event to represent the state of pure prolog execution, one environment and one ancestor stack suffice here. about transitions: * the port transition relation is functional. the same holds for its converse, if restricted to _legal events_, i.e. events that can be reached from an _initial event_ (defined below). * this uniqueness of legal derivations enables _forward and backward_ derivation steps, in the spirit of byrd's article. * _modularity_ of derivation: the execution of a goal can be abstracted (see the examples below). notice the same a-stack. by _atom_ or _atomary goal_ we denote only user-defined predications. so goals like $\mathsf{true}$ or $t_1{=}t_2$ shall not be considered atoms. the most general unifiers are chosen to be idempotent, i.e. $\sigma\sigma=\sigma$. the names $or(\cdot,\cdot)$ and $by(\cdot,\cdot)$ should only suggest what the argument is related to; the actual retrieval is determined by the tags $\mathsf{1}$ and $\mathsf{2}$, saying that respectively the first or the second member is currently being tried. for example, the rule (s:conj:1) states that the call of a conjunction $(A,B)$ leads to the call of $A$ with immediate ancestor $\mathsf{1}/(A,B)$. this kind of add-on mechanism is necessary to be able to correctly handle a query whose two conjuncts are identical, where retrieval by unification would get stuck on the first conjunct. note the requirement $\sigma(G_a)=G_a$ in (s:atom:1). since the clauses are in canonical form, unifying the head of a clause with a goal could do no more than rename the goal. since we do not need a renaming of the goal, we may fix the mgu to just operate on the clause. [logupdate:more] observe how (s:atom:2) and (s:atom:4) serve to implement the _logical update view_ of lindholm and o'keefe, saying that the definition of a predicate shall be fixed at the time of its call. this is further explained in the following remark. although we memorize the used predicate definition _on exit_, the definition will be unaffected by exit bindings, because _bindings are applied lazily_: instead of ``eagerly'' applying any bindings as they occur (e.g. in resolution or in read), we chose to do this only in conjunction (in rule (s:conj:2)) and nowhere else. due to these rules, the exit bindings shall not affect the memorized predicate definition. also, lazy bindings enable a less `jumpy' trace. a jumpy trace can be illustrated by the following exit event (assuming we applied bindings eagerly): an exit of a goal of the form $append(\ldots,b,[o|b])$, whose a-stack still records the ancestors $\mathsf{2}/([i|b]{=}[i|b]),\;append([\,],b,b)\,\bullet\,U$. the problem consists in exiting the goal $append([\,],b,b)$ with such an event, the latter of course being no instance of the former. by means of lazy binding, we avoid the jumpiness, and at the same time make memoing definitions on exit possible.
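as a small illustration of the rules in [def:rules] (this worked example is ours, not from the original text), consider the execution of the query $\mathsf{true},\mathsf{fail}$ starting from an arbitrary event $\mathit{call}\;(\mathsf{true},\mathsf{fail})\;\langle\tfrac{\Sigma}{U}\rangle$:
\[\begin{aligned}
\mathit{call}\;(\mathsf{true},\mathsf{fail})\;\langle\tfrac{\Sigma}{U}\rangle
&\;\rightarrowtriangle\;\mathit{call}\;\mathsf{true}\;\langle\tfrac{\Sigma}{\mathsf{1}/(\mathsf{true},\mathsf{fail})\,\bullet\,U}\rangle &&\text{(s:conj:1)}\\
&\;\rightarrowtriangle\;\mathit{exit}\;\mathsf{true}\;\langle\tfrac{\Sigma}{\mathsf{1}/(\mathsf{true},\mathsf{fail})\,\bullet\,U}\rangle &&\text{(s:true:1)}\\
&\;\rightarrowtriangle\;\mathit{call}\;\mathsf{fail}\;\langle\tfrac{\Sigma}{\mathsf{2}/(\mathsf{true},\mathsf{fail})\,\bullet\,U}\rangle &&\text{(s:conj:2)}\\
&\;\rightarrowtriangle\;\mathit{fail}\;\mathsf{fail}\;\langle\tfrac{\Sigma}{\mathsf{2}/(\mathsf{true},\mathsf{fail})\,\bullet\,U}\rangle &&\text{(s:fail)}\\
&\;\rightarrowtriangle\;\mathit{redo}\;\mathsf{true}\;\langle\tfrac{\Sigma}{\mathsf{1}/(\mathsf{true},\mathsf{fail})\,\bullet\,U}\rangle &&\text{(s:conj:5)}\\
&\;\rightarrowtriangle\;\mathit{fail}\;\mathsf{true}\;\langle\tfrac{\Sigma}{\mathsf{1}/(\mathsf{true},\mathsf{fail})\,\bullet\,U}\rangle &&\text{(s:true:2)}\\
&\;\rightarrowtriangle\;\mathit{fail}\;(\mathsf{true},\mathsf{fail})\;\langle\tfrac{\Sigma}{U}\rangle &&\text{(s:conj:3)}
\end{aligned}\]
so the query fails after the conjunction has backtracked once into its first member, exactly as the byrd box picture suggests.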
to ensure that the trace of a query execution shows the correct bindings, an event shall be printed only after the current substitution has been applied to it. a perhaps more important collateral advantage of lazy binding is that a successful derivation (see below) can always be abstracted as follows: even if the goal happened to get further instantiated in the course of this derivation, the instantiation will be reflected in the b-stack but not in the goal itself. let a program be given. the _port transition relation_ wrt the program is defined in [def:rules]. the converse relation shall be denoted accordingly. if there is a transition from one event to another, we say that the former _leads to_ the latter. an event can be _entered_, if some event leads to it. an event can be _left_, if it leads to some event. the relation is functional, i.e. for each event there can be at most one event it leads to. the premisses of the transition rules are mutually disjunct, i.e. there are no critical pairs. [ex:rel:converse] the converse of the port transition relation is not functional, since there may be more than one event leading to the same event. we could have prevented the ambiguous situation above and made the converse relation functional as well, by giving natural conditions on redo-transitions for atomary goals and unification. however, further down it will be shown that, for events that are _legal_, the converse relation is functional anyway. let a program and two events be given. a -derivation of one event from another, written as , is a path between them in the port transition relation wrt the program. we say that the former can be _reached_ from the latter. an _initial event_ is any event of the form $\mathit{call}\;G$ with empty stacks, where $G$ is a goal. the goal of an initial event is called a _top-level goal_, or a _query_. let a program be given. if a derivation starts from an initial event, then we say that it is a _legal -derivation_, its events are _legal -events_, and it is a -_execution_ of the query. a legal event is a _final_ event wrt a program, if there is no transition from it wrt that program. if an event has a tagged goal on top of its a-stack, then we say that this goal is the _parent_ of the event's goal. the function is defined as follows: and analogously for disjunction. let an event with some port be given. if the port is one of call, redo, then it is a _push_ event. if the port is one of exit, fail, then it is a _pop_ event. [lem:finalevent] if a legal pop event has a nonempty a-stack, then it can be left. according to the rules (see also appendix [appendix:leave]), the possibilities to leave an exit event are listed there. these rules state that it is always possible to leave an exit event, save for the following two restrictions: the parent goal may not be true, fail or a unification; and if the parent goal is a disjunction, then the goal of the exit event has to be the recorded disjunct itself, i.e. it is not possible to leave the event if its goal is a proper instance of the recorded first disjunct (and similarly for the second disjunct). the first restriction is void, since a parent can not be true, fail or a unification anyway, according to the rules. it remains to show that the second restriction is also void, i.e. a legal exit event necessarily has the required property. looking at the rules for entering an exit event, we note that the goal part of an exit event either comes from the a-stack, or is true or a unification. the latter two possibilities we may exclude, because an exit event with goal true can only be derived from a call of true, which can not be reached in this situation; similarly for unification. so the goal part of a legal exit event must come from the a-stack. the elements of the a-stack originate from call/redo events, and they have the required property. in conclusion, we can always leave a legal exit event with a nonempty a-stack. similarly for a fail event.
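the functionality of the transition relation can also be checked mechanically. the following is a small interpreter sketch (ours, in python, and only for the propositional fragment built from true, fail, conjunction and disjunction; since such goals bind no variables, the b-stack records only the or-entries of rules (s:disj:4)-(s:disj:6)):

def step(port, goal, a, b):
    # return the unique successor event of (port, goal, a-stack, b-stack),
    # or None if the event is final; goals are tuples ("true",), ("fail",),
    # ("and", A, B) or ("or", A, B); the a-stack holds tagged ancestors (n, parent)
    kind = goal[0]
    if port == "call":
        if kind == "true":
            return "exit", goal, a, b                        # s:true:1
        if kind == "fail":
            return "fail", goal, a, b                        # s:fail
        return "call", goal[1], [(1, goal)] + a, b           # s:conj:1 / s:disj:1
    if port == "redo":
        if kind == "true":
            return "fail", goal, a, b                        # s:true:2
        if kind == "and":
            return "redo", goal[2], [(2, goal)] + a, b       # s:conj:6
        if kind == "or":
            n, parent = b[0]                                 # s:disj:6
            return "redo", parent[n], [(n, parent)] + a, b[1:]
    if port in ("exit", "fail"):
        if not a:
            return None                                      # final event
        (n, parent), rest = a[0], a[1:]
        if parent[0] == "and":
            if port == "exit" and n == 1:
                return "call", parent[2], [(2, parent)] + rest, b   # s:conj:2 (no bindings here)
            if port == "exit":
                return "exit", parent, rest, b                      # s:conj:4
            if n == 1:
                return "fail", parent, rest, b                      # s:conj:3
            return "redo", parent[1], [(1, parent)] + rest, b       # s:conj:5
        if parent[0] == "or":
            if port == "exit":
                return "exit", parent, rest, [(n, parent)] + b      # s:disj:4 / s:disj:5
            if n == 1:
                return "call", parent[2], [(2, parent)] + rest, b   # s:disj:2
            return "fail", parent, rest, b                          # s:disj:3
    return None

def show(g):
    # pretty-print a goal term
    if g[0] in ("true", "fail"):
        return g[0]
    sep = "," if g[0] == "and" else ";"
    return "(" + show(g[1]) + sep + show(g[2]) + ")"

# trace the query (true ; fail), fail -- it fails after retrying the disjunction
event = ("call", ("and", ("or", ("true",), ("fail",)), ("fail",)), [], [])
while event is not None:
    port, goal, a, b = event
    print(port, show(goal))
    event = step(port, goal, a, b)

each event has at most one successor, mirroring the functionality of the relation; the printed ports reproduce the byrd-style trace call, call, call, exit, exit, call, fail, redo, redo, fail, call, fail, fail, fail.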
[lem:uniq] if an event is legal, then it can have only one legal predecessor, and only one successor. in case it is non-initial, there is exactly one legal predecessor; in case it is non-final, there is exactly one successor. the successor part follows from the functionality of the transition relation. looking at the rules, we note that only two kinds of events may have more than one predecessor: fail events of a unification and fail events of an atomary goal. let a legal fail event of a unification be given. its predecessor may have been the corresponding call event, on the condition that the two terms have no mgu (rule (s:unif:1)), or it could have been the corresponding redo event (rule (s:unif:2)). in the latter case, the redo event must be a legal event, so its b-stack had to be derived. the only rule able to derive such a b-stack is (s:unif:1), on the condition that the previous event was the call event and the terms did unify. hence, there can be only one legal predecessor, depending solely on the two terms. by a similar argument we can prove that a fail event of an atomary goal can have only one legal predecessor. this concludes the proof of functionality of the converse relation, if restricted to the set of legal events. as a notational convenience, all the events which are not final and do not lead to any further events by means of transitions with respect to the given program, are said to lead to the _impossible event_, written as . analogously for events that are not initial events and can not be entered. in particular, and with respect to any program. some impossible events are: , (can not be entered, non-initial), and (can not be left, non-final). [lem:illegal] if , then is not legal. if , then is not legal. let . if it is legal, then, because of the uniqueness of the transition, has to be legal as well. [lem:uptodate] for a legal call event it holds that the substitutions from the b-stack are _already applied_ to the goal to be called. in other words, the goal of any legal call event is up-to-date relative to the current substitution. notice that this property holds only for call events. concatenation of stacks we denote by . concatenating to both stacks of an event we denote by : if , then . let a program be given and let the port be one of . if there is a legal -derivation, then for every a-stack and every b-stack such that the extended event is a legal event, the correspondingly extended derivation is also a legal -derivation. observe that our rules (with the exception of (s:conj:2)) refer only to the existence of the top element of some stack, never to the emptiness of a stack. since the top element of a stack can not change after appending another stack to it, it is possible to emulate each of the original derivation steps using the `new' stacks. it remains to consider the rule (s:conj:2), which applies the whole current substitution upon the second conjunct. first note that any variables in a legal derivation stem either from the top-level goal or are fresh. according to lemma [lem:uptodate], a call event is always up-to-date, i.e. the current substitution has already been applied to the goal. the most general unifiers may be chosen to be idempotent, so a multiple application of a substitution amounts to a single application. hence, if the extended event is a legal event, the substitutions of the appended b-stack do not affect any variables of the original derivation. uniqueness and modularity of legal port derivations allow us to succinctly define some traditional notions. [def:success] a goal is said to terminate wrt a program, if there is a -derivation from its initial event ending in an exit or fail event of that goal. in case of exit, the derivation is _successful_, otherwise it is _failed_.
in a failed derivation, no answer substitution is computed. in a successful derivation, the substitution of the final event, restricted upon the variables of the query, is called the _computed answer substitution_ for the query. uniqueness of legal derivation steps enables _forward and backward_ derivation steps, in the spirit of byrd's article. push events (call, redo) are more amenable to forward steps, and pop events (exit, fail) are more amenable to backward steps. we illustrate this by a small example. if the events on the left-hand sides are legal, the following are legal derivations (for appropriate stacks): the first statement claims: if the exit event is legal, then it was reached via the first member. without inspecting the b-stack, in general it is not known whether a disjunction succeeded via its first or via its second member. but in this particular disjunction, the second member can not succeed: assume there are some events with such a derivation. according to the rules: so, according to the lemma above, the event in question is not a legal event, which proves the claim. similarly, the non-legal derivation proves the second statement. modularity of legal derivations enables _abstracting the execution_ of a goal, like in the following example. assume that a goal succeeds, i.e. . then we have the following legal derivation: if it fails, then we have: in this paper we give a simple mathematical definition s:pp of the 4-port model of pure prolog. some potential for formal verification of pure prolog has been outlined. there are two interesting directions for future work in this area: (1) formal specification of the control flow of _full standard prolog_ (currently we have a prototype for this, within the 4-port model), (2) formal specification and proof of some non-trivial program properties, like adequacy and non-interference of a practical program transformation. concerning attempts to formally define the 4-port model, we are aware of only a few previous works. one is a graph-based model of tobermann and beckstein, who formalize the graph traversal idea of byrd, defining the notion of a _trace_ (of a given query with respect to a given program) as a path in a trace graph. the ports are quite lucidly defined as hierarchical nodes of such a graph. however, even for a simple recursive program and a ground query, with a finite sld-tree, the corresponding trace graph is infinite, which limits its applicability. another model of the byrd box is the continuation-based approach of jahier, ducassé and ridoux. there is also a stack-based attempt in , but although it provides for some parametrizing, it suffers essentially the same problem as the continuation-based approach, and also the prototypical implementation of the tracer given in , taken as a specification of prolog execution: in these three attempts, a port is represented by some semantic action (e.g. writing of a message), instead of a formal method. therefore it is not clear how to use any of these models to prove some port-related assertions. in contrast to the few specifications of the byrd box, there are many more general models of pure (or even full) prolog execution. due to space limitations we mention here only some models directly relevant to s:pp; for a more comprehensive discussion see e.g. . comparable to our work are the stack-based approaches. stärk gives in , as a side issue, a simple operational semantics of pure logic programming. a state of execution is a stack of frame stacks, where each frame consists of a goal (ancestor) and an environment. in comparison, our state of execution consists of exactly one environment and one ancestor stack.
the seminal paper of jones and mycroft was the first to present a stack-based model of execution, applicable to pure prolog with cut added. it uses a sequence of frames. in these stack-based approaches (including our previous attempt), there is no _modularity_, i.e. it is not possible to abstract the execution of a subgoal. many thanks for helpful comments are due to anonymous referees. lawrence byrd. understanding the control flow of prolog programs. in s. a. tärnlund, editor, _proc. of the 1980 logic programming workshop_, pages 127-138, debrecen, hungary, 1980. also as d.a.i. research paper no. 151. p. deransart, a. ed-dbali, and l. cervoni. . springer-verlag, 1996. e. jahier, m. ducassé, and o. ridoux. specifying byrd's box model with a continuation semantics. in _proc. of the wlpe99, las cruces, nm_, volume 30 of _entcs_. elsevier, 2000. http://www.elsevier.nl/locate/entcs/volume30.html. n. d. jones and a. mycroft. stepwise development of operational and denotational semantics for prolog. in _proc. of the 1st int. symposium on logic programming (slp84)_, pages 281-288, atlantic city, 1984. m. kulaš and c. beierle. defining standard prolog in rewriting logic. in k. futatsugi, editor, _proc. of the 3rd int. workshop on rewriting logic and its applications (wrla 2000), kanazawa_, volume 36 of _entcs_. elsevier, 2001. http://www.elsevier.nl/locate/entcs/volume36.html. a. king and l. lu. a backward analysis for constraint logic programs. , 2(4):517-547, 2002. m. kulaš. a rewriting prolog semantics. in m. leuschel, a. podelski, c. r. ramakrishnan and u. ultes-nitsche, editors, _proc. of the cl 2000 workshop on verification and computational logic (vcl 2000), london_, 2000. t. lindgren. control flow analysis of prolog (extended remix). technical report 112, uppsala university, 1995. http://www.csd.uu.se/papers/reports.html. t. lindholm and r. a. o'keefe. efficient implementation of a defensible semantics for dynamic prolog code. in _proc. of the 4th int. conference on logic programming (iclp87)_, pages 21-39, melbourne, 1987. robert f. stärk. the theoretical foundations of lptp (a logic program theorem prover). , 36(3):241-269, 1998. source distribution http://www.inf.ethz.ch/staerk/lptp.html. g. tobermann and c. beckstein. what's in a trace: the box model revisited. in _proc. of the 1st int. workshop on automated and algorithmic debugging (aadebug93), linköping_, volume 749 of _lncs_. springer-verlag, 1993.
[appendix:leave] for reference, the rules of [def:rules] are grouped here by the event that is being left.

leaving a call event: rules (s:conj:1), (s:disj:1), (s:true:1), (s:fail), (s:unif:1), (s:atom:1).

leaving a redo event: rules (s:conj:6), (s:disj:6), (s:true:2), (s:unif:2), (s:atom:4).

leaving an exit event: rules (s:conj:2), (s:conj:4), (s:disj:4),
\ensuremath{{{\ensuremath{\mathit{exit}}}}\mathinner{{\ensuremath{\mathsf{{{{\ensuremath{\mathit{b}}}}}}}}}\ifempty{{{{\ensuremath{\mathsf{2}}}}}/{{{\ensuremath{\mathit{a}}}}};{{{\ensuremath{\mathit{b}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{u}}}}{{\ensuremath{\mathbb{\sigma}}}}}{}{{\langle\textstyle\frac{{{\ensuremath{\mathit{{{\ensuremath{\mathbb{\sigma}}}}}}}}}{{{\ensuremath{\mathsf{{{{\ensuremath{\mathsf{2}}}}}/{{{\ensuremath{\mathit{a}}}}};{{{\ensuremath{\mathit{b}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{u}}}}}}}}}\rangle } } } } & { \ensuremath{\,\rightarrowtriangle\,}}{\ensuremath{{{\ensuremath{\mathit{exit}}}}\mathinner{{\ensuremath{\mathsf{{{{\ensuremath{\mathit{a}}}}};{{{\ensuremath{\mathit{b}}}}}}}}}\ifempty{{{\ensuremath{\mathbb{u}}}}{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{{{{\ensuremath{\mathit{b}}}}}}}}},{{\ensuremath{\mathsf{({{{\ensuremath{\mathsf{2}}}}}/{{{\ensuremath{\mathit{a}}}}};{{{\ensuremath{\mathit{b}}}}})}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{\sigma}}}}}{}{{\langle\textstyle\frac{{{\ensuremath{\mathit{{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{{{{\ensuremath{\mathit{b}}}}}}}}},{{\ensuremath{\mathsf{({{{\ensuremath{\mathsf{2}}}}}/{{{\ensuremath{\mathit{a}}}}};{{{\ensuremath{\mathit{b}}}}})}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{\sigma}}}}}}}}}{{{\ensuremath{\mathsf{{{\ensuremath{\mathbb{u}}}}}}}}}\rangle } } } } { \tag{s : disj:5}}\\ { \ensuremath{{{\ensuremath{\mathit{exit}}}}\mathinner{{\ensuremath{\mathsf{{{{\ensuremath{\mathit{b}}}}}}}}}\ifempty{{\ensuremath{{{\ensuremath{\mathit{g}}}}_{{\ensuremath{\mathit{a}}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{u}}}}{{\ensuremath{\mathbb{\sigma}}}}}{}{{\langle\textstyle\frac{{{\ensuremath{\mathit{{{\ensuremath{\mathbb{\sigma}}}}}}}}}{{{\ensuremath{\mathsf{{\ensuremath{{{\ensuremath{\mathit{g}}}}_{{\ensuremath{\mathit{a}}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{u}}}}}}}}}\rangle } } } } & { \ensuremath{\,\rightarrowtriangle\,}}{\ensuremath{{{\ensuremath{\mathit{exit}}}}\mathinner{{\ensuremath{\mathsf{{\ensuremath{{{\ensuremath{\mathit{g}}}}_{{\ensuremath{\mathit{a}}}}}}}}}}\ifempty{{{\ensuremath{\mathbb{u}}}}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{{{{\ensuremath{\mathit{b}}}}}}}}},{{\ensuremath{\mathsf{{\ensuremath{{{\ensuremath{\mathit{g}}}}_{{\ensuremath{\mathit{a}}}}}}}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{\sigma}}}}}{}{{\langle\textstyle\frac{{{\ensuremath{\mathit{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{{{{\ensuremath{\mathit{b}}}}}}}}},{{\ensuremath{\mathsf{{\ensuremath{{{\ensuremath{\mathit{g}}}}_{{\ensuremath{\mathit{a}}}}}}}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{\sigma}}}}}}}}}{{{\ensuremath{\mathsf{{{\ensuremath{\mathbb{u}}}}}}}}}\rangle } } } } { \tag{s : atom:2}}\\ \intertext{leaving a fail event } { \ensuremath{{{\ensuremath{\mathit{fail}}}}\mathinner{{\ensuremath{\mathsf{{{{\ensuremath{\mathit{a}}}}}'}}}}\ifempty{{{{\ensuremath{\mathsf{1}}}}}/{{{\ensuremath{\mathit{a}}}}},{{{\ensuremath{\mathit{b}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{u}}}}{{\ensuremath{\mathbb{\sigma}}}}}{}{{\langle\textstyle\frac{{{\ensuremath{\mathit{{{\ensuremath{\mathbb{\sigma}}}}}}}}}{{{\ensuremath{\mathsf{{{{\ensuremath{\mathsf{1}}}}}/{{{\ensuremath{\mathit{a}}}}},{{{\ensuremath{\mathit{b}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{u}}}}}}}}}\rangle } } } } & { 
\ensuremath{\,\rightarrowtriangle\,}}{\ensuremath{{{\ensuremath{\mathit{fail}}}}\mathinner{{\ensuremath{\mathsf{{{{\ensuremath{\mathit{a}}}}},{{{\ensuremath{\mathit{b}}}}}}}}}\ifempty{{{\ensuremath{\mathbb{u}}}}{{\ensuremath{\mathbb{\sigma}}}}}{}{{\langle\textstyle\frac{{{\ensuremath{\mathit{{{\ensuremath{\mathbb{\sigma}}}}}}}}}{{{\ensuremath{\mathsf{{{\ensuremath{\mathbb{u}}}}}}}}}\rangle } } } } { \tag{s : conj:3}}\\ { \ensuremath{{{\ensuremath{\mathit{fail}}}}\mathinner{{\ensuremath{\mathsf{{{{\ensuremath{\mathit{b}}}}}'}}}}\ifempty{{{{\ensuremath{\mathsf{2}}}}}/{{{\ensuremath{\mathit{a}}}}},{{{\ensuremath{\mathit{b}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{u}}}}{{\ensuremath{\mathbb{\sigma}}}}}{}{{\langle\textstyle\frac{{{\ensuremath{\mathit{{{\ensuremath{\mathbb{\sigma}}}}}}}}}{{{\ensuremath{\mathsf{{{{\ensuremath{\mathsf{2}}}}}/{{{\ensuremath{\mathit{a}}}}},{{{\ensuremath{\mathit{b}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{u}}}}}}}}}\rangle } } } } & { \ensuremath{\,\rightarrowtriangle\,}}{\ensuremath{{{\ensuremath{\mathit{redo}}}}\mathinner{{\ensuremath{\mathsf{{{{\ensuremath{\mathit{a}}}}}}}}}\ifempty{{{{\ensuremath{\mathsf{1}}}}}/{{{\ensuremath{\mathit{a}}}}},{{{\ensuremath{\mathit{b}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{u}}}}{{\ensuremath{\mathbb{\sigma}}}}}{}{{\langle\textstyle\frac{{{\ensuremath{\mathit{{{\ensuremath{\mathbb{\sigma}}}}}}}}}{{{\ensuremath{\mathsf{{{{\ensuremath{\mathsf{1}}}}}/{{{\ensuremath{\mathit{a}}}}},{{{\ensuremath{\mathit{b}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{u}}}}}}}}}\rangle } } } } { \tag{s : conj:5}}\\ { \ensuremath{{{\ensuremath{\mathit{fail}}}}\mathinner{{\ensuremath{\mathsf{{{{\ensuremath{\mathit{a}}}}}}}}}\ifempty{{{{\ensuremath{\mathsf{1}}}}}/{{{\ensuremath{\mathit{a}}}}};{{{\ensuremath{\mathit{b}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{u}}}}{{\ensuremath{\mathbb{\sigma}}}}}{}{{\langle\textstyle\frac{{{\ensuremath{\mathit{{{\ensuremath{\mathbb{\sigma}}}}}}}}}{{{\ensuremath{\mathsf{{{{\ensuremath{\mathsf{1}}}}}/{{{\ensuremath{\mathit{a}}}}};{{{\ensuremath{\mathit{b}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{u}}}}}}}}}\rangle } } } } & { \ensuremath{\,\rightarrowtriangle\,}}{\ensuremath{{{\ensuremath{\mathit{call}}}}\mathinner{{\ensuremath{\mathsf{{{{\ensuremath{\mathit{b}}}}}}}}}\ifempty{{{{\ensuremath{\mathsf{2}}}}}/{{{\ensuremath{\mathit{a}}}}};{{{\ensuremath{\mathit{b}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{u}}}}{{\ensuremath{\mathbb{\sigma}}}}}{}{{\langle\textstyle\frac{{{\ensuremath{\mathit{{{\ensuremath{\mathbb{\sigma}}}}}}}}}{{{\ensuremath{\mathsf{{{{\ensuremath{\mathsf{2}}}}}/{{{\ensuremath{\mathit{a}}}}};{{{\ensuremath{\mathit{b}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{u}}}}}}}}}\rangle } } } } { \tag{s : disj:2}}\\ { \ensuremath{{{\ensuremath{\mathit{fail}}}}\mathinner{{\ensuremath{\mathsf{{{{\ensuremath{\mathit{b}}}}}}}}}\ifempty{{{{\ensuremath{\mathsf{2}}}}}/{{{\ensuremath{\mathit{a}}}}};{{{\ensuremath{\mathit{b}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{u}}}}{{\ensuremath{\mathbb{\sigma}}}}}{}{{\langle\textstyle\frac{{{\ensuremath{\mathit{{{\ensuremath{\mathbb{\sigma}}}}}}}}}{{{\ensuremath{\mathsf{{{{\ensuremath{\mathsf{2}}}}}/{{{\ensuremath{\mathit{a}}}}};{{{\ensuremath{\mathit{b}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{u}}}}}}}}}\rangle } } } } & { 
\ensuremath{\,\rightarrowtriangle\,}}{\ensuremath{{{\ensuremath{\mathit{fail}}}}\mathinner{{\ensuremath{\mathsf{{{{\ensuremath{\mathit{a}}}}};{{{\ensuremath{\mathit{b}}}}}}}}}\ifempty{{{\ensuremath{\mathbb{u}}}}{{\ensuremath{\mathbb{\sigma}}}}}{}{{\langle\textstyle\frac{{{\ensuremath{\mathit{{{\ensuremath{\mathbb{\sigma}}}}}}}}}{{{\ensuremath{\mathsf{{{\ensuremath{\mathbb{u}}}}}}}}}\rangle } } } } { \tag{s : disj:3}}\\ { \ensuremath{{{\ensuremath{\mathit{fail}}}}\mathinner{{\ensuremath{\mathsf{{{{\ensuremath{\mathit{b}}}}}}}}}\ifempty{{\ensuremath{{{\ensuremath{\mathit{g}}}}_{{\ensuremath{\mathit{a}}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{u}}}}{{\ensuremath{\mathbb{\sigma}}}}}{}{{\langle\textstyle\frac{{{\ensuremath{\mathit{{{\ensuremath{\mathbb{\sigma}}}}}}}}}{{{\ensuremath{\mathsf{{\ensuremath{{{\ensuremath{\mathit{g}}}}_{{\ensuremath{\mathit{a}}}}}}{\ensuremath{\mathop{\bullet}}}{{\ensuremath{\mathbb{u}}}}}}}}}\rangle } } } } & { \ensuremath{\,\rightarrowtriangle\,}}{\ensuremath{{{\ensuremath{\mathit{fail}}}}\mathinner{{\ensuremath{\mathsf{{\ensuremath{{{\ensuremath{\mathit{g}}}}_{{\ensuremath{\mathit{a}}}}}}}}}}\ifempty{{{\ensuremath{\mathbb{u}}}}{{\ensuremath{\mathbb{\sigma}}}}}{}{{\langle\textstyle\frac{{{\ensuremath{\mathit{{{\ensuremath{\mathbb{\sigma}}}}}}}}}{{{\ensuremath{\mathsf{{{\ensuremath{\mathbb{u}}}}}}}}}\rangle } } } } { \tag{s : atom:3 } } \ ] ] spec :assume the following program : }}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } { } { \ifempty{\{one(x , y){\ensuremath{\mathop{\bullet}}}{}1/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } \colorbox{svetlosivo}{{{\ensuremath{\mathsf{\{one(x , y){\ensuremath{\mathop{\bullet}}}{}1/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } \ifempty{\{{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } { { \ensuremath{\mathit{\{{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } ] }}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } { } { \ifempty{\{2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } \colorbox{svetlosivo}{{{\ensuremath{\mathsf{\{2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } \ifempty{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } { { \ensuremath{\mathit{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , 
y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } ] }}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } { } { \ifempty{\{(1/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b){\ensuremath{\mathop{\bullet}}}{}two(1,y){\ensuremath{\mathop{\bullet}}}{}2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } \colorbox{svetlosivo}{{{\ensuremath{\mathsf{\{(1/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b){\ensuremath{\mathop{\bullet}}}{}two(1,y){\ensuremath{\mathop{\bullet}}}{}2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } \ifempty{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } { { \ensuremath{\mathit{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } ] }}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } { } { \ifempty{\{two(1,y){\ensuremath{\mathop{\bullet}}}{}2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } \colorbox{svetlosivo}{{{\ensuremath{\mathsf{\{two(1,y){\ensuremath{\mathop{\bullet}}}{}2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } \ifempty{\{{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}a}}}},{{\ensuremath{\mathsf{(1/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / a}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } { { 
\ensuremath{\mathit{\{{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}a}}}},{{\ensuremath{\mathsf{(1/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / a}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } ] }}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } { } { \ifempty{\{post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } \colorbox{svetlosivo}{{{\ensuremath{\mathsf{\{post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } \ifempty{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(y{\ensuremath{\mathord=}}{}a;y{\ensuremath{\mathord=}}{}b)}}}},{{\ensuremath{\mathsf{two(1,y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}a}}}},{{\ensuremath{\mathsf{(1/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / a}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } { { \ensuremath{\mathit{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(y{\ensuremath{\mathord=}}{}a;y{\ensuremath{\mathord=}}{}b)}}}},{{\ensuremath{\mathsf{two(1,y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}a}}}},{{\ensuremath{\mathsf{(1/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / a}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } ] }}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } { } { \ifempty{\{2/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } 
\colorbox{svetlosivo}{{{\ensuremath{\mathsf{\{2/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } \ifempty{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(one(x , y),two(x , y))}}}},{{\ensuremath{\mathsf{post(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(y{\ensuremath{\mathord=}}{}a;y{\ensuremath{\mathord=}}{}b)}}}},{{\ensuremath{\mathsf{two(1,y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}a}}}},{{\ensuremath{\mathsf{(1/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / a}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } { { \ensuremath{\mathit{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(one(x , y),two(x , y))}}}},{{\ensuremath{\mathsf{post(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(y{\ensuremath{\mathord=}}{}a;y{\ensuremath{\mathord=}}{}b)}}}},{{\ensuremath{\mathsf{two(1,y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}a}}}},{{\ensuremath{\mathsf{(1/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / a}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } ] }}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } { } { \ifempty{\{1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } \colorbox{svetlosivo}{{{\ensuremath{\mathsf{\{1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } \ifempty{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(one(x , y),two(x , y))}}}},{{\ensuremath{\mathsf{post(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(y{\ensuremath{\mathord=}}{}a;y{\ensuremath{\mathord=}}{}b)}}}},{{\ensuremath{\mathsf{two(1,y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}a}}}},{{\ensuremath{\mathsf{(1/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / a}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , 
y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } { { \ensuremath{\mathit{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(one(x , y),two(x , y))}}}},{{\ensuremath{\mathsf{post(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(y{\ensuremath{\mathord=}}{}a;y{\ensuremath{\mathord=}}{}b)}}}},{{\ensuremath{\mathsf{two(1,y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}a}}}},{{\ensuremath{\mathsf{(1/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / a}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } ] }}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } { } { \ifempty{\{2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } \colorbox{svetlosivo}{{{\ensuremath{\mathsf{\{2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } \ifempty{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(y{\ensuremath{\mathord=}}{}a;y{\ensuremath{\mathord=}}{}b)}}}},{{\ensuremath{\mathsf{two(1,y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}a}}}},{{\ensuremath{\mathsf{(1/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / a}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } { { \ensuremath{\mathit{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(y{\ensuremath{\mathord=}}{}a;y{\ensuremath{\mathord=}}{}b)}}}},{{\ensuremath{\mathsf{two(1,y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}a}}}},{{\ensuremath{\mathsf{(1/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / a}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , 
y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } ] }}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } { } { \ifempty{\{(1/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b){\ensuremath{\mathop{\bullet}}}{}two(1,y){\ensuremath{\mathop{\bullet}}}{}2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } \colorbox{svetlosivo}{{{\ensuremath{\mathsf{\{(1/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b){\ensuremath{\mathop{\bullet}}}{}two(1,y){\ensuremath{\mathop{\bullet}}}{}2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } \ifempty{\{{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / a}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } { { \ensuremath{\mathit{\{{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / a}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } ] }}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } { } { \ifempty{\{(2/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b){\ensuremath{\mathop{\bullet}}}{}two(1,y){\ensuremath{\mathop{\bullet}}}{}2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } \colorbox{svetlosivo}{{{\ensuremath{\mathsf{\{(2/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b){\ensuremath{\mathop{\bullet}}}{}two(1,y){\ensuremath{\mathop{\bullet}}}{}2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } \ifempty{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } { { \ensuremath{\mathit{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , 
y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } ] }}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } { } { \ifempty{\{two(1,y){\ensuremath{\mathop{\bullet}}}{}2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } \colorbox{svetlosivo}{{{\ensuremath{\mathsf{\{two(1,y){\ensuremath{\mathop{\bullet}}}{}2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } \ifempty{\{{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}b}}}},{{\ensuremath{\mathsf{(2/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / b}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } { { \ensuremath{\mathit{\{{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}b}}}},{{\ensuremath{\mathsf{(2/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / b}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } ] }}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } { } { \ifempty{\{post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } \colorbox{svetlosivo}{{{\ensuremath{\mathsf{\{post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } 
\ifempty{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(y{\ensuremath{\mathord=}}{}a;y{\ensuremath{\mathord=}}{}b)}}}},{{\ensuremath{\mathsf{two(1,y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}b}}}},{{\ensuremath{\mathsf{(2/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / b}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } { { \ensuremath{\mathit{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(y{\ensuremath{\mathord=}}{}a;y{\ensuremath{\mathord=}}{}b)}}}},{{\ensuremath{\mathsf{two(1,y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}b}}}},{{\ensuremath{\mathsf{(2/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / b}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } ] }}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } { } { \ifempty{\{2/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } \colorbox{svetlosivo}{{{\ensuremath{\mathsf{\{2/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } \ifempty{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(one(x , y),two(x , y))}}}},{{\ensuremath{\mathsf{post(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(y{\ensuremath{\mathord=}}{}a;y{\ensuremath{\mathord=}}{}b)}}}},{{\ensuremath{\mathsf{two(1,y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}b}}}},{{\ensuremath{\mathsf{(2/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / b}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } { { \ensuremath{\mathit{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(one(x , y),two(x , y))}}}},{{\ensuremath{\mathsf{post(x , 
y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(y{\ensuremath{\mathord=}}{}a;y{\ensuremath{\mathord=}}{}b)}}}},{{\ensuremath{\mathsf{two(1,y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}b}}}},{{\ensuremath{\mathsf{(2/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / b}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } ] }}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } { } { \ifempty{\{1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } \colorbox{svetlosivo}{{{\ensuremath{\mathsf{\{1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } \ifempty{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(one(x , y),two(x , y))}}}},{{\ensuremath{\mathsf{post(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(y{\ensuremath{\mathord=}}{}a;y{\ensuremath{\mathord=}}{}b)}}}},{{\ensuremath{\mathsf{two(1,y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}b}}}},{{\ensuremath{\mathsf{(2/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / b}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } { { \ensuremath{\mathit{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(one(x , y),two(x , y))}}}},{{\ensuremath{\mathsf{post(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(y{\ensuremath{\mathord=}}{}a;y{\ensuremath{\mathord=}}{}b)}}}},{{\ensuremath{\mathsf{two(1,y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}b}}}},{{\ensuremath{\mathsf{(2/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / b}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } ] 
}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } { } { \ifempty{\{2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } \colorbox{svetlosivo}{{{\ensuremath{\mathsf{\{2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } \ifempty{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(y{\ensuremath{\mathord=}}{}a;y{\ensuremath{\mathord=}}{}b)}}}},{{\ensuremath{\mathsf{two(1,y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}b}}}},{{\ensuremath{\mathsf{(2/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / b}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } { { \ensuremath{\mathit{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{(y{\ensuremath{\mathord=}}{}a;y{\ensuremath{\mathord=}}{}b)}}}},{{\ensuremath{\mathsf{two(1,y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{or({{\ensuremath{\mathsf{y{\ensuremath{\mathord=}}{}b}}}},{{\ensuremath{\mathsf{(2/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / b}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } ] }}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } { } { \ifempty{\{(2/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b){\ensuremath{\mathop{\bullet}}}{}two(1,y){\ensuremath{\mathop{\bullet}}}{}2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } \colorbox{svetlosivo}{{{\ensuremath{\mathsf{\{(2/(y{\ensuremath{\mathord=}}{}a);y{\ensuremath{\mathord=}}{}b){\ensuremath{\mathop{\bullet}}}{}two(1,y){\ensuremath{\mathop{\bullet}}}{}2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , 
y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } \ifempty{\{{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / b}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } { { \ensuremath{\mathit{\{{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{y\mathord / b}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } ] }}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } { } { \ifempty{\{two(1,y){\ensuremath{\mathop{\bullet}}}{}2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } \colorbox{svetlosivo}{{{\ensuremath{\mathsf{\{two(1,y){\ensuremath{\mathop{\bullet}}}{}2/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } \ifempty{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } { { \ensuremath{\mathit{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } ] }}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } { } { \ifempty{\{1/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } \colorbox{svetlosivo}{{{\ensuremath{\mathsf{\{1/one(x , y),two(x , y){\ensuremath{\mathop{\bullet}}}{}post(x , y){\ensuremath{\mathop{\bullet}}}{}1/post(x , y),fail{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } \ifempty{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\}}{}{{{,\ , { } } } { { \ensuremath{\mathit{\{{{{\ensuremath{\mathit{by({{\ensuremath{\mathsf{x{\ensuremath{\mathord=}}{}1}}}},{{\ensuremath{\mathsf{one(x , y)}}}})}}}}}{\ensuremath{\mathop{\bullet}}}{}{\colorbox{sivo}{\bf\color{white}{\mathversion{bold}[{\ensuremath{x\mathord/1}}]}}}{\ensuremath{\mathop{\bullet}}}{}{{{\ensuremath{\mathit{nil}}}}}\ } } } } } } } ]
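To make the port behaviour concrete, the following small Python sketch (a hypothetical illustration, not the machine defined above) runs the example query with generators and prints Byrd-box style call/exit/redo/fail events; the clause bodies are those of the example program, while the function names and the state representation are assumptions made only for this sketch.

def ports(name, solutions, depth=0):
    # Wrap a solution generator so that advancing it reports the four ports.
    pad = "  " * depth
    print(f"{pad}call {name}")
    while True:
        try:
            env = next(solutions)
        except StopIteration:
            print(f"{pad}fail {name}")
            return
        print(f"{pad}exit {name}  {env}")
        yield env
        print(f"{pad}redo {name}")

def unify(env, var, val):
    # Toy unification of a variable against a constant.
    if var in env:
        if env[var] == val:
            yield env
    else:
        yield {**env, var: val}

def one(env, d):      # one(X,Y) :- X = 1.
    yield from ports("X=1", unify(env, "X", 1), d)

def two(env, d):      # two(X,Y) :- ( Y = a ; Y = b ).
    yield from ports("Y=a", unify(env, "Y", "a"), d)
    yield from ports("Y=b", unify(env, "Y", "b"), d)

def post(env, d):     # post(X,Y) :- one(X,Y), two(X,Y).
    for e1 in ports("one(X,Y)", one(env, d + 1), d):
        yield from ports("two(X,Y)", two(e1, d + 1), d)

def query(env):       # ?- post(X,Y), fail.
    for e in ports("post(X,Y)", post(env, 1), 0):
        yield from ports("fail", iter(()), 0)

for _ in query({}):
    pass

Running the sketch prints the same succession of events as the trace summarised above: the bindings x/1 and y/a are produced, the trailing fail forces a redo, the second disjunct yields y/b, and finally every goal fails.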
A simple mathematical definition of the 4-port model for pure Prolog is given. The model combines the intuition of ports with a compact representation of execution state. Forward and backward derivation steps are possible. The model satisfies a modularity claim, making it suitable for formal reasoning.
Today, with a vast amount of publications being produced in every discipline of scientific research, it can be rather overwhelming to select good quality work that is enriched with original ideas and relevant to the scientific community. More often than not, publications of this type are discovered through the citation mechanism. The number of citations a paper receives is believed to be an estimate of its scientific credibility, though this should not be taken too literally, since some publications may have gone unnoticed or have been forgotten about over time. Knowledge of how many times their publications are cited can be seen as good feedback for the authors, which brings about an unspoken demand for the statistical analysis of citation data. One of the impressive empirical studies on the citation distribution of scientific publications showed that the distribution has a power-law form with exponent . The power-law behaviour in this complex system is a consequence of highly cited papers being more likely to acquire further citations. This was identified as a _preferential attachment_ process in . The citation distribution of scientific publications is well studied, and there exist a number of network models to mimic its complex structure as well as empirical results to confirm predictions. However, they seem to concentrate on the total number of citations without giving information about the issuing publications. Scientific publications belonging to a particular research area do not restrict their references to that discipline only; they form bridges by comparing or confirming findings in other research fields. For instance, most _small world network models_ presented in statistical mechanics reference a sociometry article which presents the studies of Milgram on the small world problem. This is the type of process which we will investigate with a simple model that considers only two research areas and referencing within and across each other. The consideration of cross linking also makes the model applicable to _the web of human sexual contacts_, where the interactions between males and females can be thought of as two coupled growing networks. This paper is organized as follows: in the following section the model is defined and analyzed with a rate equation approach; in the final section discussions and comparisons of the findings with existing data are presented.

One can visualize the proposed model with the aid of Fig. ([coupled]), which attempts to illustrate the growth mechanism. We build the model on the following considerations.
Initially, both networks A and B contain a set of seed nodes, with no cross-links between the nodes of the two networks. At each time step two new nodes with no incoming links, one belonging to network A and the other to B, are introduced simultaneously. The new node joining A with m_a outgoing links attaches a fraction p_{aa} of its links to pre-existing nodes in A and a fraction p_{ab} of them to pre-existing nodes in B. A similar process takes place when a new node joins B, where the new node has m_b outgoing links, of which a fraction p_{bb} goes to nodes in B and the complementary fraction p_{ba} goes to A. The attachments to nodes in either network are preferential, and the rate of acquiring a link depends on the number of connections and on the initial attractiveness of the pre-existing nodes. We define n_{a}(k_{a},t) as the average number of nodes with a total of k_{a} connections, counting both the incoming intra-links and the incoming cross-links, in network A at time t. Similarly, n_{b}(k_{b},t) is the average number of nodes with k_{b} connections at time t in network B. Notice that the indices are discriminative and the order in which they are used is important, as they indicate the direction in which the links are made. Furthermore, we also define n_{aa}(k_{aa},t) and n_{bb}(k_{bb},t), the average number of nodes with k_{aa} and k_{bb} incoming intra-links in A and B respectively. Finally, we also have n_{ba}(k_{ba},t) and n_{ab}(k_{ab},t) to denote the average number of nodes in A and B with k_{ba} and k_{ab} incoming cross-links. To keep this paper less cumbersome we will only analyse the time evolution of network A and apply our results to network B. In addition, we only need to give the time evolution of n_{a}(k_{aa},k_{ba},t), defined as the joint distribution of intra-links and cross-links; using this distribution we can find all the other distributions mentioned earlier. The time evolution of n_{a}(k_{aa},k_{ba},t) can be described by a rate equation \frac{\partial n_{a}(k_{aa},k_{ba},t)}{\partial t}=\frac{1}{M_{A}(t)}\{p_{aa}m_{a}[(k_{aa}+k_{ba}-1+a)n_{a}(k_{aa}-1,k_{ba},t)-(k_{aa}+k_{ba}+a)n_{a}(k_{aa},k_{ba},t)]+p_{ba}m_{b}[(k_{aa}+k_{ba}-1+a)n_{a}(k_{aa},k_{ba}-1,t)-(k_{aa}+k_{ba}+a)n_{a}(k_{aa},k_{ba},t)]\}+\delta_{k_{aa}0}\delta_{k_{ba}0}. The form of eq. ([na]) is very similar to the one used in . In that model the rate of creating links depends on the out-degree of the issuing nodes and the in-degree of the target nodes. Here we are concerned with two different types of in-degrees, namely the intra- and cross-links of the nodes.
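As a quick illustration of this growth rule, a minimal simulation sketch is given below; the parameter values, the seed size and the bookkeeping are illustrative assumptions rather than anything prescribed in the text.

import random
from collections import Counter

def grow(steps=20000, m_a=3, m_b=3, p_aa=0.75, p_bb=0.75, a=3.0, b=3.0, seed=1):
    random.seed(seed)
    deg = {"A": [0, 0, 0], "B": [0, 0, 0]}   # total in-degree of every node
    stubs = {"A": [], "B": []}               # one entry per incoming link received
    attract = {"A": a, "B": b}

    def attach(net):
        # linear preferential attachment shifted by the initial attractiveness:
        # a target is chosen with probability proportional to (in-degree + attractiveness)
        n_nodes, n_links = len(deg[net]), len(stubs[net])
        if random.uniform(0, n_links + attract[net] * n_nodes) < n_links:
            target = random.choice(stubs[net])
        else:
            target = random.randrange(n_nodes)
        deg[net][target] += 1
        stubs[net].append(target)

    for _ in range(steps):
        deg["A"].append(0)                   # two new nodes enter simultaneously
        deg["B"].append(0)
        for _ in range(m_a):                 # links issued by the new A-node
            attach("A" if random.random() < p_aa else "B")
        for _ in range(m_b):                 # links issued by the new B-node
            attach("B" if random.random() < p_bb else "A")
    return deg

deg = grow()
hist = Counter(deg["A"])
for k in sorted(hist)[:12]:                  # head of the in-degree distribution of A
    print(k, hist[k] / len(deg["A"]))

The stub list makes the shifted linear choice exact: with probability proportional to the total number of links a target is drawn proportionally to its in-degree, and uniformly at random otherwise, which reproduces an attachment rate proportional to the in-degree plus the attractiveness.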
on the right hand side of eq .( [ na ] ) the terms in first square brackets represent the increase in the number of nodes with links when a node with intra - links acquires a new intra - link and if the node already has links this leads to reduction in the number .similarly , for the second square brackets where the number of nodes with links changes due to the incoming cross - links .the final term accounts for the continuous addition of new nodes with no incoming links , each new node could be thought of as the new publication in a particular research discipline .the normalization factor sum of all degrees is defined as we limit ourself to the case of preferential linear attachment rate shifted by , the initial attractiveness of nodes in , which ensures that there is a nonzero probability of any node acquiring a link .the nature of lets one to obtain , as where is the average total in - degree in network .( [ mat ] ) implying that is linear in time .similarly , it is easy to show that is also linear function of time .we use these relations in eq .( [ na ] ) to obtain the time independent recurrence relation {a}(k_{aa},k_{ba})\nonumber\\ = p_{aa}m_{a}(k_{aa}+k_{ba}+a-1)n_{a}(k_{aa}-1,k_{ba})\nonumber\\ + p_{ba}m_{b}(k_{aa}+k_{ba}+a-1)n_{a}(k_{aa},k_{ba}-1)\nonumber\\ + ( a+<m_{a}>)\delta_{k_{aa}0}\delta_{k_{ba}0}.\end{aligned}\ ] ] the expression in eq .( [ arec ] ) does not simplify however , it lets us to obtain the total in - degree distribution writing and since then satisfies {a}(k_{a})=<m_{a}>(k_{a}+a-1)n_{a}(k_{a}-1 ) \nonumber\\ + ( a+<m_{a}>)\delta_{k_{a}0}.\end{aligned}\ ] ] solving eq .( [ narec ] ) for yields , with as eq .( [ nagamma ] ) gives the asymptotic behaviour of the total in - degree distribution in which is a power - law form with an exponent that only depends on the average total in - degree and the initial attractiveness of the nodes .similarly , we can write the total in - degree distribution in network for the asymptotic limit of as again , the exponent depends upon the initial attractiveness of nodes and the average total incoming links .we now move on to analyse , the distribution of the average number of nodes with intra - links in network . in citation networkone can think of these links being issued from the same subject class as the receiving nodes and in the case of human sexual contact network , they represent the homosexual interactions . since which can also be written as , a linear function of time . 
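As a quick numerical sanity check on the total in-degree distribution before the same manipulation is carried over to the intra- and cross-link distributions, one can iterate the recurrence directly. The left-hand bracket of eq. ([narec]) is lost in this copy, so the sketch assumes the standard rate-equation form [a + <m_a> + <m_a>(k_a + a)] n_a(k_a) on the left-hand side; with that assumption the ratio n_a(k)/n_a(k-1) behaves like 1 - (2 + a/<m_a>)/k for large k, i.e. a power law whose exponent depends only on the average total in-degree and the attractiveness, as stated above.

```python
import numpy as np

# Iterate the (assumed) time-independent recurrence
#   [a + m + m*(k + a)] n(k) = m*(k + a - 1) n(k-1) + (a + m) delta_{k,0}
# and measure the tail slope of n(k) in log-log coordinates.

def indegree_distribution(m_avg=3.0, a=1.0, kmax=200000):
    n = np.zeros(kmax)
    n[0] = (a + m_avg) / (a + m_avg + m_avg * a)     # k = 0 term from the delta source
    for k in range(1, kmax):
        n[k] = m_avg * (k + a - 1.0) * n[k - 1] / (a + m_avg + m_avg * (k + a))
    return n

m_avg, a = 3.0, 1.0
n = indegree_distribution(m_avg, a)
lo, hi = 10_000, 100_000
slope = (np.log(n[hi]) - np.log(n[lo])) / (np.log(hi) - np.log(lo))
print("fitted tail slope :", round(slope, 3))
print("expected -(2+a/m):", -(2.0 + a / m_avg))
```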
then summing eq .( [ arec ] ) over all possible values of we get {aa}(k_{aa})+<m_{a } > \sum_{k_{ba}=0}^{\infty}k_{ba}n_{a}(k_{aa},k_{ba})\nonumber\\ = p_{aa}m_{a}(k_{aa}+a-1)n_{aa}(k_{aa}-1)+p_{aa}m_{a } \sum_{k_{ba}=0}^{\infty}k_{ba}n_{a}(k_{aa}-1,k_{ba})\nonumber\\ + p_{ba}m_{b}(k_{aa}+a)n_{aa}(k_{aa})+p_{ba}m_{b } \sum_{k_{ba}=0}^{\infty}(k_{ba}-1)n_{a}(k_{aa},k_{ba}-1)\nonumber\\ + ( a+<m_{a}>)\delta_{k_{aa}0}.\end{aligned}\ ] ] for large eq .( [ aarec ] ) reduces to {aa}(k_{aa})=\nonumber\\ p_{aa}m_{a}(k_{aa}+a-1)n_{aa}(k_{aa}-1)+(a+<m_{a}>)\delta_{k_{aa}0}.\end{aligned}\ ] ] iterating former relation for yields where in the asymptotic limit as eq .( [ aagamma ] ) has a power - law form that depends upon both and the coupling parameter .similarly , the time independent recurrence relation for has the same form as eq .( [ aarec ] ) with the only difference being the parameters .therefore we will simply give the power - law distribution where the other coupling parameter is revealed in the exponent .finally , the distribution of average number of nodes with incoming cross - links in can be found by summing over for all its intra - links as before is also linear in time . when the cross links are large enough , then from eq .( [ arec ] ) we obtain where in the asymptotic limit as the distribution has a power - law form and similarly for the network as unlike the case in intra - links , here the exponents are inversely proportional to the coupled parameters and respectively .for the sake of simplicity , we set the number of outgoing links of the new nodes in either networks to be the same , i.e. . furthermore taking the rate of cross linking to be and the rate of intra linking , consequently we have , and as the coupling parameter . in the weak coupling case ,the cross linking is negligibly small i.e. then the power - law exponent of the intra - link distribution is equal to total link distribution .this gives a solution obtained in and when we recover the exponent , the empirical findings in .thus , varying in yields any values of between and . on the contrary ,the exponent of cross - link distribution decreases from to , as increases from to .taking gives supposing , which seems reasonable for consideration of citation networks , we find that and .the former result coincides with the distribution of connectivities for the electric power grid of southern california . where the system is small and the local interactions is of importance hence there seems to be some analogy to the intra - linking process .for the latter , as far as we are aware there is none empirical studies present in the published literature . now , consider the web of human sexual contacts .if we let to represent males and females that is and then are the power - law exponents of the degree distributions of the sexes .where and denote the male and female attractiveness respectively and usually is considered . by setting , and that is , cross links are predominant then as in we obtain for males and for females .the exponents and have been observed for the cumulative distributions in empirical study .the model we studied here seems to have the flexibility to represent variety of complex systems .we would like to thank the china scholarship council , epsrc for their financial support and geoff rodgers for useful discussions .
we introduce and solve a model of two coupled networks growing simultaneously . the dynamics of the networks are governed by the arrival of new network elements ( nodes ) that attach preferentially to pre - existing nodes in both networks . the model segregates the links in the networks as intra - links , cross - links and mix - links . the corresponding degree distributions of these links are found to be power - laws whose exponents involve coupled parameters for intra- and cross - links . in the weak coupling case the model reduces to a simple citation network , while in the strong coupling case it mimics the mechanism of _ the web of human sexual contacts_.
the impossibility of superluminal communication through the use of quantum entanglement has already been vividly discussed in the past , see for example .recently this topic has re - entered the stage of present research in the context of quantum cloning : the no - signalling constraint has been used to derive upper bounds for the fidelity of cloning transformations .as the connection between approximate cloning and no - signalling is still widely debated , we aim at clarifying in this paper the quantum mechanical principles that forbid superluminal communication , and at answering the question whether they are the same principles that set limits to quantum cloning .our scenario throughout the paper for the attempt to transmit information with superluminal speed is the well - known entanglement - based communication scheme .the idea is the following : two space - like separated parties , say alice and bob , share an entangled state of a pair of two - dimensional quantum systems ( qubits ) , for example the singlet state .alice encodes a bit of information by choosing between two possible orthogonal measurement bases for her qubit and performing the corresponding measurement . by the reduction postulate , the qubit at bob s side collapses into a pure state depending on the result of the measurement performed by alice .if a perfect cloning machine were available , bob could now generate an infinite number of copies of his state , and therefore would be able to determine his state with perfect accuracy , thus knowing what basis alice decided to use . in this way, transfer of information between alice and bob would be possible .in particular , if they are space - like separated , information could be transmitted with superluminal speed .the same transfer of information could evidently also be obtained if it were possible to determine the state of a single quantum system with perfect accuracy , which is also impossible .one might ask the question whether approximate cloning allows superluminal communication : with imperfect cloning bob can produce a number of imperfect copies , and thus get some information about his state . but this information is never enough to learn alice s direction of measurement .this has been shown in ref . for a specific example .more generally , as we will show in this paper , the reason is that _ no _ local linear transformation can lead to transmission of information through entanglement , but any cloning operation consistent with quantum mechanics has to be linear .the fact that non - locality of quantum entanglement can not be used for superluminal communication , has been phrased as `` peaceful coexistence '' between quantum mechanics and relativity , a much - cited expression . herewe emphasize that this consistency is not a coincidence , but a simple consequence of linearity and completeness of quantum mechanics .our arguments go beyond previous work , as we consider the most general evolution on alice s and bob s side in the form of local maps .recently , this consistency has been exploited in order to devise new methods to derive bounds or constraints for quantum mechanical transformations . however , in this paper we will show that the principles underlying the impossibility of 1 ) superluminal signalling and 2 ) quantum cloning beyond the optimal bound allowed by quantum mechanics , are not the same . 
in particular , the impossibility of information transfer by means of quantum entanglement is due only to linearity and preservation of trace of local operations .in this section we want to show how the impossibility of superluminal communication arises by assuming only completeness and linearity of local maps on density operators .we consider the most general scenario where alice and bob share a global quantum state of two particles and are allowed to perform any local map , which we denote here with and , respectively .the local map can be any local transformation , including a measurement averaged over all possible outcomes ( which , in fact , can not be known by the communication partner ) . alice can choose among different local maps in order to encode the message `` '' that she wishes to transmit , namely she encodes it by performing the transformation on her particle .bob can perform a local transformation on his particle ( e.g. cloning ) and then a local measurement to decode the message ( is a povm ) .the impossibility of superluminal communication in the particular case where bob performs only a measurement has been demonstrated in ref . . herewe follow a more general approach , discussing the roles of `` completeness '' and linearity of any local map involved . by `` completeness '' of a map we mean that the trace is preserved under its action , namely \equiv { \mbox{tr}}[\rho_a]\ ] ] for any .linearity of the map on trace - class operators of the form , allows to extend the completeness condition to the whole hilbert space , namely \equiv { \mbox{tr}}[\rho_{ab}]\;,\ ] ] and analogously for the partial trace \equiv { \mbox{tr}}_a[\rho_{ab } ] \label{part}\;,\ ] ] on bob s side , only linearity without completeness is needed for the local map , leading to the equality = { \mbox{}}\,{\mbox{tr}}_a[{\mbox{}}\otimes{\mbox{}}(\rho_{ab})]\;.\label{gcomp}\ ] ] as we will show in the following , the above equations are the fundamental ingredients and the only requirements for local maps to prove the impossibility of superluminal communication .we will now compute the conditional probability that bob records the result when the message was encoded by alice : \;.\ ] ] by exploiting eqs .( [ gcomp ] ) and ( [ part ] ) we have ) ] \nonumber \\&= & { \mbox{tr}}_b[\pi_r\,{\mbox{}}\ , ( { \mbox{tr}}_a[\rho_{ab}])]\equiv p(r ) \;.\label{gcomp2}\end{aligned}\ ] ] the conditional probability is therefore independent of the local operation that alice performed on her particle , and therefore the amount of transmitted information vanishes .note that the speed of transmission does not enter in any way , i.e. 
_ any _ transmission of information is forbidden , in particular superluminal transmission .we want to stress that this result holds for all possible linear local operations that alice and bob can perform , and also for any joint state .in particular , it holds for any kind of linear cloning transformation performed at bob s side ( notice that ideal cloning is a non - linear map ) .notice also that any operation that is physically realizable in standard quantum mechanics ( completely positive map ) is linear and complete , and therefore it does not allow superluminal communication .we also emphasize here that the `` peaceful coexistence '' between quantum mechanics and relativity is automatically guaranteed by the linearity and completeness of any quantum mechanical process .actually , as shown in the diagram [ maps ] , the set of local quantum mechanical maps is just a subset of the local maps that do not allow superluminal communication . .5truecm= .8 .5truecm in the next section we will show how superluminal communication could be achieved if one would give up the linearity requirement for the local maps , by discussing some explicit examples .our examples are based on the scenario where alice and bob share an entangled state of two qubits and alice performs a projection measurement with her basis oriented along the direction .the final state of bob , who does not know the result of the measurement , is given by where denote the probabilities that alice finds her qubit oriented as , and are the corresponding final density operators at bob s side after he performed his local transformation .notice that the evolved state of bob , as in the following examples , can be a joint state of a composite system with more than one qubit .if the information is encoded in the choice of two possible different orientations and of the measurement basis , the impossibility of superluminal communication corresponds to the condition for all choices of and . in the following section we give some explicit examples of local maps on bob s side . notice that we will intentionally leave the ground of quantum mechanics ( an explicit example of a superluminal communication scheme based on the use of non - linear evolutions is also given in ref . ) .\(1 ) _ example of a linear , non - positive cloning transformation which does not allow superluminal communication : _ the evolved state at bob s side after his transformation is a state of two qubits given by \label{rhoout1}\end{aligned}\ ] ] where is the bloch vector which is cloned and is the shrinking factor .the above map is non - positive for .this is the case , for instance , for and .such a transformation violates the upper bound of the universal quantum cloner but , as this is a linear transformation , eq .( [ gcomp2 ] ) holds .therefore the cloning is `` better '' than the optimal one , and the no - signalling condition ( [ nsc ] ) is still fulfilled .this means that we can go beyond the laws of quantum mechanics ( complete positivity ) without necessarily creating the possibility of superluminal communication .\(2 ) _ example of non - linear , positive or non - positive cloning transformation which does allow superluminal communication : _ consider bob s transformation \ , \label{rhoout2}\end{aligned}\ ] ] where denotes a function of the component of the bloch vector , which is such that this map acts non - linearly on a convex combination of density matrices . 
for odd functions ,namely one does not violate the no - signalling condition for a maximally entangled state because taking it follows that does not depend on , whereas for even non - constant functions one does .however , for odd functions the no - signalling condition is in general violated for partially entangled pure states , i.e. in eq .( [ final ] ) .it is interesting to see that in this non - physical case superluminal communication is achieved when sharing less than maximal entanglement .depending on the value of the parameter this map can be positive or non - positive .examples of non - positive maps can for instance be found by violating the condition ( compare with previous example ) .\(3 ) _ example of a non - linear , positive cloning transformation which does allow superluminal communication : _ consider where is orthogonal to .the no - signalling condition ( [ nsc ] ) for two different choices of basis and with equiprobable outcomes is violated because which holds for any value .it is then possible to devise a measurement procedure that distinguishes between the left and right hand side of eq .( [ dist ] ) , thus allowing to transmit information faster than light .in order to illustrate this we give an explicit example with .let us denote the right hand side of equation ( [ nonlinear ] ) as .we choose and and a povm measurement on the clones given by the operators and , which are the projectors over the subspaces spanned by and , respectively . with this measurement the probabilities for outcome 0 and 1 depend on alice s choice of measurement basis .we denote as the probability that bob finds outcome 0 , if alice measured in the basis , and arrive at =0 \;,\nonumber \\ p(1|\psi)&=&1-p(0|\psi)=1 \;.\end{aligned}\ ] ] analogously , for the other choice of alice s basis one has =\frac{1}{2 } \;,\nonumber \\p(1|\phi)&=&1-p(0|\phi)=\frac{1}{2 } \;.\end{aligned}\ ] ] therefore , we can distinguish between the two different choices of bases .note that , when giving up the constraint of linearity , one could send signals superluminally even for fidelities smaller than those of optimal quantum cloning .similar arguments hold for the transformation have shown that the `` peaceful coexistence '' between quantum mechanics and relativity is automatically guaranteed by the linearity and completeness ( i.e. trace - preserving property ) of any quantum mechanical process : hence , any approximate optimal quantum cloning , as a particular case of a linear trace - preserving map , can not lead to signalling . for the sake of illustration , in figure [ maps ]we summarize the set of local maps .this set is divided into linear and non - linear maps .any linear trace - preserving map forbids superluminal signalling .reversely , the no - signalling condition implies only linearity , as shown in refs. and .the positive maps contain the linear maps allowed by quantum mechanics ( qm ) , namely the completely positive trace - preserving maps . both trace - preservation and positivity crucial for quantum mechanics are not implied by the no - signalling constraint .in particular , positivity seems to be unrelated with no - signalling .hence , there is room for maps that go beyond quantum mechanics , but still preserve the constraint of no - superluminal signalling , and example 1 ) above shows that this is the case . 
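Both sides of the argument can be made concrete with a few lines of linear algebra. The first part of the sketch below applies randomly generated trace-preserving (Kraus) channels on Alice's side of a singlet and shows that Bob's outcome probabilities never move, which is eq. ([gcomp2]) in action. The second part is a generic toy, not the explicit map of example 3 (whose formula is not reproduced in this copy): if Bob could act branch-by-branch with an ideal cloner, an operation that is non-linear on the ensemble, his two-copy statistics would depend on Alice's choice of basis.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- part 1: linear, trace-preserving maps on Alice's side cannot signal ---

def random_channel(n_kraus=3, dim=2):
    """Random CPTP map on one qubit, as a list of Kraus operators."""
    A = rng.normal(size=(n_kraus * dim, dim)) + 1j * rng.normal(size=(n_kraus * dim, dim))
    Q, _ = np.linalg.qr(A)              # isometry: sum_k K_k^dag K_k = identity
    return [Q[k * dim:(k + 1) * dim, :] for k in range(n_kraus)]

def apply_on_alice(kraus, rho):
    I2 = np.eye(2)
    return sum(np.kron(K, I2) @ rho @ np.kron(K, I2).conj().T for K in kraus)

psi = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)   # singlet
rho = np.outer(psi, psi.conj())
proj0_bob = np.kron(np.eye(2), np.diag([1.0, 0.0]))                      # |0><0| on Bob

for trial in range(3):
    rho_after = apply_on_alice(random_channel(), rho)
    p0 = np.real(np.trace(proj0_bob @ rho_after))
    print(f"random Alice channel {trial}: P(Bob sees 0) = {p0:.6f}")     # always 0.5

# --- part 2: a branch-wise (hence non-linear) operation would signal ------

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ketp, ketm = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)

def branchwise_clone(branches):
    """Average two-copy state if each conditional state |b> became |b>|b>."""
    rho2 = np.zeros((4, 4))
    for prob, b in branches:
        bb = np.kron(b, b)
        rho2 += prob * np.outer(bb, bb)
    return rho2

# Alice measuring the singlet in {|0>,|1>} leaves Bob with |1> or |0>;
# measuring in {|+>,|->} leaves him with |-> or |+>, each with probability 1/2.
rho_z = branchwise_clone([(0.5, ket0), (0.5, ket1)])
rho_x = branchwise_clone([(0.5, ketp), (0.5, ketm)])

P00 = np.zeros((4, 4)); P00[0, 0] = 1.0          # projector onto |00> of the clones
print("P(00) if Alice used the z basis:", np.trace(P00 @ rho_z))   # 0.50
print("P(00) if Alice used the x basis:", np.trace(P00 @ rho_x))   # 0.25
```

The two single-copy ensembles in the second part are both the maximally mixed state, so no linear map can tell them apart; only the unphysical branch-wise action opens the gap between 0.50 and 0.25.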
from what we have seenwe can conclude that any bound on a cloning fidelity can not be derived from the no - signalling constraint alone , but only in connection with other quantum mechanical principles : example 3 ) shows how the cloning fidelity is unrelated to the no - signalling condition .quantum mechanics as a complete theory , however , naturally guarantees no - signalling , and obviously gives the correct known upper bounds on quantum cloning .we thank c. fuchs , g. c. ghirardi , l. hardy and a. peres for fruitful discussions .db acknowledges support by the esf programme qit , and from deutsche forschungsgemeinschaft under sfb 407 and schwerpunkt qiv .the theoretical quantum optics group of pavia acknowledges the european network equip and cofinanziamento 1999 `` quantum information transmission and processing : quantum teleportation and error correction '' for partial support .99 g. ghirardi , a. rimini , and t. weber , lett .nuovo cimento * 27 * , 293 ( 1980 ) .n. herbert , found .* 12 * , 1171 ( 1982 ) . w. k. wootters , w. h. zurek , nature * 299 * , 802 ( 1982 ) .g. ghirardi , r. grassi , a. rimini and t. weber , europhys . lett . * 6 * , 95 ( 1988 ) .h. scherer and p. busch , phys .a * 47 * , 1647 ( 1993 ) .g. svetlichny , found .* 28 * , 131 ( 1998 ) .a. peres , phys .a * 61 * , 022117 ( 2000 ) .n. gisin , phys .a * 242 * , 1 ( 1998 ) .l. hardy and d.d . song , phys .a * 259 * , 331 ( 1999 ) .s. ghosh , g. car and a. roy , phys .a * 261 * , 17 ( 1999 ) .pati , quant - ph/9908017 .g. m. dariano and h. p. yuen , phys .lett . * 76 * 2832 ( 1996 ) .p. busch , in _ potentiality , entanglement and passion - at - a - distance : quantum mechanical studies for abner shimony _ , eds .cohen , m.a .horne , j. stachel , kluwer , dordrecht , 1997 ( quant - ph/9604014 ) .this problem has been posed to us by g. c. ghirardi . c. simon , g. weihs and a. zeilinger , acta phys .slovaca * 49 * , 755 ( 1999 ) .a. shimony , in _ foundations of quantum mechanics in the light of new technology _ , ed .s. kamefuchi , phys .japan , tokyo , 1983 .n. gisin and s. massar , phys .lett . * 79 * , 2153 ( 1997 ) .d. bru , d. divincenzo , a. ekert , c. fuchs , c. macchiavello and j. smolin , phys .a * 57 * , 2368 ( 1998 ) .r. werner , phys .rev . a*58 * , 1827 ( 1998 ) .d. bru , a. ekert and c. macchiavello , phys .lett . * 81 * , 2598 ( 1998 ) .l. duan , g. guo , phys .. lett . * 80 * , 4999 ( 1998 ) . c. w. helstrom , _ quantum detection and estimation theory _ , academic press , new york , 1976 .a. peres , _ quantum theory : concepts and methods _ , kluwer , dordrecht , 1993 .we refer to the map as complete as synonymous of trace - preserving since generally alice s map is a measurement , and summing over all possible outcomes ( i.e. for the completeness of the measurement ) leads to a linear trace - preserving map ( non linear state reduction maps are always linear on average ) .a. peres , private communication .n. gisin , helv . phys .acta * 62 * , 363 ( 1989 ) .n. gisin , phys .lett . a * 143 * , 1 ( 1990 ) .v. buek and m. hillery , phys .a * 54 * , 1844 ( 1996 ) .
we show that the non - locality of quantum mechanics can not lead to superluminal transmission of information , even if the most general local operations are allowed , as long as they are linear and trace preserving . in particular , no quantum mechanical approximate cloning transformation allows signalling . on the other hand , the no - signalling constraint on its own is not sufficient to prevent a transformation from surpassing the known cloning bounds . we illustrate these concepts with some examples .
one of the most important motivations of these series of conferences is to promote vigorous interaction between statisticians and astronomers .the organizers merit our admiration for bringing together such a stellar cast of colleagues from both fields . in this third edition ,one of the central subjects is cosmology , and in particular , statistical analysis of the large - scale structure in the universe .there is a reason for that the rapid increase of the amount and quality of the available observational data on the galaxy distribution ( also on clusters of galaxies and quasars ) and on the temperature fluctuations of the microwave background radiation .these are the two fossils of the early universe on which cosmology , a science driven by observations , relies . herewe will focus on one of them the galaxy distribution .first we briefly review the redshift surveys , how they are built and how to extract statistically analyzable samples from them , considering selection effects and biases .most of the statistical analysis of the galaxy distribution are based on second order methods ( correlation functions and power spectra ) .we comment them , providing the connection between statistics and estimators used in cosmology and in spatial statistics .special attention is devoted to the analysis of clustering in fourier space , with new techniques for estimating the power spectrum , which are becoming increasingly popular in cosmology .we show also the results of applying these second - order methods to recent galaxy redshift surveys .fractal analysis has become very popular as a consequence of the scale - invariance of the galaxy distribution at small scales , reflected in the power - law shape of the two - point correlation function .we discuss here some of these methods and the results of their application to the observations , supporting a gradual transition from a small - scale fractal regime to large - scale homogeneity .the concept of lacunarity is illustrated with some detail .we end by briefly reviewing some of the alternative measures of point statistics and structure functions applied thus far to the galaxy distribution : void probability functions , counts - in - cells , nearest neighbor distances , genus , and minkowski functionals .cosmological datasets differ in several respects from those usually studied in spatial statistics .the point sets in cosmology ( galaxy and cluster surveys ) bear the imprint of the observational methods used to obtain them .the main difference is the systematically variable intensity ( mean density ) of cosmological surveys .these surveys are usually magnitude - limited , meaning that all objects , which are brighter than a pre - determined limit , are observed in a selected region of the sky .this limit is mainly determined by the telescope and other instruments used for the program .apparent magnitude , used to describe the limit , is a logarithmic measure of the observed radiation flux .it is usually assumed that galaxies at all distances have the same ( universal ) luminosity distribution function .this assumption has been tested and found to be in satisfying accordance with observations . 
as the observed flux from a galaxy is inversely proportional to the square of its distance, we can see at larger distances only a bright fraction of all galaxies .this leads directly to the mean density of galaxies that depends on their distance from us .one can also select a distance limit , find the minimum luminosity of a galaxy , which can yet be seen at that distance , and ignore all galaxies that are less luminous .such samples are called volume - limited .they are used for some special studies ( typically for counts - in - cells ) , but the loss of hard - earned information is enormous .the number of galaxies in volume - limited samples is several times smaller than in the parent magnitude - limited samples .this will also increase the shot ( discreteness ) noise .in addition to the radial selection function , galaxy samples also are frequently subject to angular selection .this is due to our position in the galaxy we are located in a dusty plane of the galaxy , and the window in which we see the universe , also is dusty .this dust absorbs part of galaxies light , and makes the real brightness limit of a survey dependent on the amount of dust in a particular line - of - sight .this effect has been described by a law ( is the galactic latitude ) ; in reality the dust absorption in the galaxy is rather inhomogeneous .there are good maps of the amount of galactic dust in the sky , the latest maps have been obtained using the cobe and iras satellite data .edge problems , which usually affect estimators in spatial statistics , also are different for cosmological samples .the decrease of the mean density towards the sample borders alleviates these problems .of course , if we select a volume - limited sample , we select also all these troubles ( and larger shot noise ) . from the other side ,edge effects are made more prominent by the usual observing strategies , when surveys are conducted in well - defined regions in the sky .thus , edge problems are only partly alleviated ; maybe it will pay to taper our samples at the side borders , too ? some of the cosmological surveys have naturally soft borders .these are the all - sky surveys ; the best known is the iras infrared survey , dust is almost transparent in infrared light .the corresponding redshift survey is the pscz survey , which covers about 85% of the sky .a special follow - up survey is in progress to fill in the remaining galactic zone - of - avoidance region , and meanwhile numerical methods have been developed to interpolate the structures seen in the survey into the gap .another peculiarity of galaxy surveys is that we can measure exactly only the direction to the galaxy ( its position in the sky ) , but not its distance .we measure the radial velocity ( or redshift , is the velocity of light ) of a galaxy , which is a sum of the hubble expansion , proportional to the distance , and the dynamical velocity of the galaxy , .thus we are differentiating between redshift space , if the distances simply are determined as , and real space .the real space positions of galaxies could be calculated if we exactly knew the peculiar velocities of galaxies ; we do not .the velocity distortions can be severe ; well - known features of redshift space are fingers - of - god , elongated structures that are caused by a large radial velocity dispersion in massive clusters of galaxies .the velocity distortions expand a cluster in redshift space in the radial direction five - ten times . 
for large - scale structures the situation is different ,redshift distortions compress them .this is due to the continuing gravitational growth of structures .these differences can best be seen by comparing the results of numerical simulations , where we know also the real - space situation , in redshift space and in real space .the last specific feature of the cosmology datasets is their size .up to recent years most of the datasets have been rather small , of the order of objects ; exceptions exist , but these are recent .such a small number of points gives a very sparse coverage of three - dimensional survey volumes , and shot noise has been a severe problem .this situation is about to change , swinging to the other extreme ; the membership of new redshift surveys already is measured in terms of ( 160,000 for the 2df survey , quarter of a million planned ) and million - galaxy surveys are on their way ( the sloan survey ) . more information about these surveys can be found in their web pages : _http:/-2pt / www.mso.anu.edu.au/2dfgrs/ _ for the 2df survey and _ http:/-2pt / www.sdss.org/ _ for the sloan survey .this huge amount of data will force us to change the statistical methods we use .nevertheless , the deepest surveys ( e.g. , distant galaxy cluster surveys ) will always be sparse , so discovering small signals from shot - noise dominated data will remain a necessary art .there are several related quantities that are second - order characteristics used to quantify clustering of the galaxy distribution in real or redshift space .the most popular one in cosmology is the two - point correlation function , .the infinitesimal interpretation of this quantity reads as follows : dv_2\ ] ] is the joint probability that in each one of the two infinitesimal volumes and , with separation vector , lies a galaxy . here is the mean number density ( intensity ) . assuming that the galaxy distribution is a homogeneous ( invariant under translations ) and isotropic ( invariant under rotations ) point process, this probability depends only on . in spatial statistics , other functions related with commonly used : where is the second - order intensity function , is the pair correlation function , also called the radial distribution function or structure function , and is the conditional density proposed by .different estimators of have been proposed so far in the literature , both in cosmology and in spatial statistics .the main differences are in correction for edge effects .comparison of their performance can be found in several papers .there is clear evidence that is well described by a power - law at scales mpc where is the hubble constant in units of 100 km s : with and mpc .this scaling behavior is one of the reasons that have lead some astronomers to describe the galaxy distribution as fractal .a power - law fit for permits to define the correlation dimension .the extent of the fractal regime is still a matter of debate in cosmology , but it seems clear that the available data on redshift surveys indicate a gradual transition to homogeneity for scales larger than 1520 mpc . 
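In practice, estimates like the power-law fit quoted above come from pair counts. The following minimal sketch uses the simple (DD/RR)-type estimator for points in a cubic box; the box size, the bins and the Poisson stand-in for the data are placeholders, and real survey work needs the selection function, edge corrections and better estimators (Landy-Szalay and relatives).

```python
import numpy as np
from scipy.spatial import cKDTree

# Bare mechanics of a two-point correlation function estimate in a cubic box:
# count data-data and random-random pairs in distance bins and form
# xi(r) = (DD/RR) * (Nr/Nd)^2 - 1.

rng = np.random.default_rng(42)

def pair_counts(points, edges):
    tree = cKDTree(points)
    cum = tree.count_neighbors(tree, edges)   # cumulative pair counts within r
    return np.diff(cum) / 2.0                 # binned, each pair counted twice

box = 100.0                                       # Mpc/h, say
data = rng.uniform(0.0, box, size=(5000, 3))      # replace with galaxy positions
randoms = rng.uniform(0.0, box, size=(20000, 3))  # random catalogue, same geometry

edges = np.logspace(-0.5, 1.3, 15)                # roughly 0.3 ... 20 Mpc/h
dd = pair_counts(data, edges)
rr = pair_counts(randoms, edges)
xi = dd / rr * (len(randoms) / len(data)) ** 2 - 1.0

r_mid = np.sqrt(edges[:-1] * edges[1:])
for r, x in zip(r_mid, xi):
    print(f"r = {r:6.2f}   xi = {x:+.3f}")
# for the uniform Poisson stand-in xi is consistent with zero at all r
```

Substituting a volume-limited sample for the data array, and a random catalogue built with the survey geometry and selection function, turns this into a crude but usable estimator.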
moreover , in a fractal point distribution , the correlation length increases with the radius of the sample because the mean density decreases .this simple prediction of the fractal interpretation is not supported by the data , instead remains constant for volume - limited samples with increasing depth .several versions of the volume integral of the correlation function are also frequently used in the analysis of galaxy clustering .the most extended one in spatial statistics is the so - called ripley -function although in cosmology it is more frequent to use an expression which provides directly the average number of neighbors an arbitrarily chosen galaxy has within a distance , or the average conditional density again a whole collection of estimators are used to properly evaluate these quantities .pietronero and coworkers recommend to use only minus estimators to avoid any assumption regarding the homogeneity of the process . in these estimators , averages of the number of neighbors within a given distance are taken only considering as centers these galaxies whose distances to the border are larger than .however , caution has to be exercised with this procedure , because at large scales only a small number of centers remain , and thus the variance of the estimator increases .integral quantities are less noisy than the corresponding differential expressions , but obviously they do contain less information on the clustering process due the fact that values of and for two different scales and are more strongly correlated than values of and .scaling of provides a smoother estimation of the correlation dimension . if scaling is detected for partition sums defined by the moments of order of the number of neighbors the exponents are the so - called generalized or multifractal dimensions .note that for , is an estimator of and therefore for is simply the correlation dimension .if different kinds of cosmic objects are identified as peaks of the continuous matter density field at different thresholds , we can study the correlation dimension associated to each kind of object .the multiscaling approach associated to the multifractal formalism provides a unified framework to analyze this variation .it has been shown that the value of corresponding to rich galaxy clusters ( high peaks of the density field ) is smaller than the value corresponding to galaxies ( within the same scale range ) as prescribed in the multiscaling approach .finally we want to consider the role of lacunarity in the description of the galaxy clustering . in fig .[ lacun ] , we show the space distribution of galaxies within one slice of the las campanas redshift survey , together with a fractal pattern generated by means of a rayleigh - lvy flight . both have the same mass - radius dimension , defined as the exponent of the power - law that fits the variation of mass within concentric spheres centered at the observer position . the best fitted value for both point distributions is as shown in the left bottom panel of fig .[ lacun ] .the different appearance of both point distributions is a consequence of the different degree of lacunarity .have proposed to quantify this effect by measuring the variability of the prefactor in eq .[ mrr ] , the result of applying this lacunarity measure is shown in the right bottom panel of fig .[ lacun ] .the visual differences between the point distributions are now well reflected in this curve . 
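The mass-radius relation of eq. ([mrr]) and the partition sums behind the generalized dimensions can be estimated from the same neighbour counts. The sketch below uses a minus-style choice of centres (only points far enough from the boundary) on toy Poisson data, for which the fitted mass-radius dimension should come out close to 3; the sample sizes and radii are arbitrary choices.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
box = 100.0
pts = rng.uniform(0.0, box, size=(20000, 3))      # replace with galaxy positions
tree = cKDTree(pts)

radii = np.logspace(0.0, 1.0, 8)                  # 1 ... 10 in box units
rmax = radii[-1]
# minus-style estimator: keep centres at least rmax away from every face
inner = pts[np.all((pts > rmax) & (pts < box - rmax), axis=1)]

q = 3.0
mean_counts, zq = [], []
for r in radii:
    counts = np.array([len(tree.query_ball_point(c, r)) - 1 for c in inner])
    mean_counts.append(counts.mean())             # <N(<r)>, the mass-radius relation
    zq.append(np.mean(counts ** (q - 1.0)))       # partition sum Z(q, r)

d_mass = np.polyfit(np.log(radii), np.log(mean_counts), 1)[0]
tau_q = np.polyfit(np.log(radii), np.log(zq), 1)[0]
print("mass-radius (correlation) dimension:", round(d_mass, 2))
print("generalized dimension D_q for q=3  :", round(tau_q / (q - 1.0), 2))
# note: at small r shot noise biases the q>1 moments; real analyses
# restrict the fit to the scaling range
```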
curves in the lower left panel , but the lacunarity curves ( in the lower right panel ) differ considerably .the solid lines describe the galaxy distribution , dotted lines the model results . from . ] the current statistical model for the main cosmological fields ( density , velocity , gravitational potential ) is the gaussian random field .this field is determined either by its correlation function or by its spectral density , and one of the main goals of spatial statistics in cosmology is to estimate those two functions . in recent years the power spectrum has attracted more attention than the correlation function .there are at least two reasons for that the power spectrum is more intuitive physically , separating processes on different scales , and the model predictions are made in terms of power spectra .statistically , the advantage is that the power spectrum amplitudes for different wavenumbers are statistically orthogonal : here is the fourier amplitude of the overdensity field at a wavenumber , is the matter density , a star denotes complex conjugation , denotes expectation values over realizations of the random field , and is the three - dimensional dirac delta function .the power spectrum is the fourier transform of the correlation function of the field .estimation of power spectra from observations is a rather difficult task . up to nowthe problem has been in the scarcity of data ; in the near future there will be the opposite problem of managing huge data sets .the development of statistical techniques here has been motivated largely by the analysis of cmb power spectra , where better data were obtained first , and has been parallel to that recently .the observed samples can be modeled by an inhomogeneous point process ( a gaussian cox process ) of number density : where is the dirac delta - function . asgalaxy samples frequently have systematic density trends caused by selection effects , we have to write the estimator of the density contrast in a sample as where is the selection function expressed in the number density of objects .the estimator for a fourier amplitude ( for a finite set of frequencies ) is where is a weight function that can be selected at will .the raw estimator for the spectrum is and its expectation value where is the window function that also depends on the geometry of the sample volume .symbolically , we can get the estimate of the power spectra by inverting the integral equation where denotes convolution , is the raw estimate of power , and is the ( constant ) shot noise term . in general , we have to deconvolve the noise - corrected raw power to get the estimate of the power spectrum .this introduces correlations in the estimated amplitudes , so these are not statistically orthogonal any more . a sample of a characteristic spatial size creates a window function of width of , correlating estimates of spectra at that wavenumber interval . 
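Spherical-shell averaging and optimal weighting are discussed next; before that, the bare mechanics of the raw estimator can be sketched for points in a periodic cubic box with a constant selection function. Only the gridding, the normalisation, the shot-noise subtraction and a crude shell average are kept; window deconvolution and the grid-assignment correction are ignored, and all sizes are placeholder choices.

```python
import numpy as np

def power_spectrum(points, box, ngrid=64, nbins=20):
    n_mean = len(points) / box**3
    # nearest-grid-point assignment of the points
    idx = np.floor(points / box * ngrid).astype(int) % ngrid
    grid = np.zeros((ngrid,) * 3)
    np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    cell_vol = (box / ngrid) ** 3
    delta = grid / (n_mean * cell_vol) - 1.0          # density contrast on the grid

    dk = np.fft.fftn(delta) * cell_vol                # approximates the continuum FT
    pk_raw = np.abs(dk) ** 2 / box**3
    shot = 1.0 / n_mean                               # Poisson shot-noise term

    k1d = 2.0 * np.pi * np.fft.fftfreq(ngrid, d=box / ngrid)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing='ij')
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)

    edges = np.linspace(2.0 * np.pi / box, np.pi * ngrid / box, nbins + 1)
    which = np.digitize(kmag.ravel(), edges)
    pk = np.array([pk_raw.ravel()[which == i].mean() for i in range(1, nbins + 1)])
    return 0.5 * (edges[:-1] + edges[1:]), pk - shot

rng = np.random.default_rng(3)
box = 500.0                                           # Mpc/h, say
pts = rng.uniform(0.0, box, size=(30000, 3))          # replace with survey positions
k, pk = power_spectrum(pts, box)
for ki, pi in zip(k, pk):
    print(f"k = {ki:.4f}   P(k) = {pi:10.1f}")
# for an unclustered Poisson sample the result is consistent with zero at low k
# (the uncorrected grid assignment suppresses power near the Nyquist frequency)
```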
as the cosmological spectraare usually assumed to be isotropic , the standard method to estimate the spectrum involves an additional step of averaging the estimates over a spherical shell $ ] of thickness in wavenumber space .the minimum - variance requirement gives the fkp weight function : and the variance is where is the number of coherence volumes in the shell .the number of independent volumes is twice as small ( the density field is real ) .the coherence volume is .as the data sets get large , straight application of direct methods ( especially the error analysis ) becomes difficult .there are different recipes that have been developed with the future data sets in mind .a good review of these methods is given in .the deeper the galaxy sample , the smaller the coherence volume , the larger the spectral resolution and the larger the wavenumber interval where the power spectrum can be estimated .the deepest redshift surveys presently available are the pscz galaxy redshift survey ( 15411 redshifts up to about , see ) , the abell / aco rich galaxy cluster survey , 637 redshifts up to about 300 ) , and the ongoing 2df galaxy redshift survey ( 141400 redshifts up to ) .the estimates of power spectra for the two latter samples have been obtained by the direct method .[ 2dfpower ] shows the power spectrum for the 2df survey .the covariance matrix of the power spectrum estimates in fig .[ 2dfpower ] was found from simulations of a matching gaussian cox process in the sample volume .the main new feature in the spectra , obtained for the new deep samples , is the emergence of details ( wiggles ) in the power spectrum .while sometime ago the main problem was to estimate the mean behaviour of the spectrum and to find its maximum , now the data enables us to see and study the details of the spectrum .these details have been interpreted as traces of acoustic oscillations in the post - recombination power spectrum .similar oscillations are predicted for the cosmic microwave background radiation fluctuation spectrum .the cmb wiggles match the theory rather well , but the galaxy wiggles do not , yet . the probability that a randomly placed sphere of radius contains exactly galaxies is denoted by . in particular , for , is the so - called void probability function , related with the empty space function or contact distribution function , more frequently used in the field of spatial statistics , by .the moments of the counts - in - cells probabilities can be related both with the multifractal analysis and with the higher order -point correlation functions . in spatial statistics ,different quantities based on distances to nearest neighbors have been introduced to describe the statistical properties of point processes . is the distribution function of the distance of a given point to its nearest neighbor .it is interesting to note that is just the distribution function of the distance from an arbitrarily chosen point in not being an event of the point process to a point of the point process ( a galaxy in the sample in our case ) .the quotient introduced by is a powerful tool to analyze point patterns and has discriminative power to compare the results of -body models for structure formation with the real distribution of galaxies .one very popular tool for analysis of the galaxy distribution is the genus of the isodensity surfaces . 
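Before turning to the genus statistic defined next, the distance-based quantities just listed, the void probability function, the nearest-neighbour distribution and their quotient, can be estimated in a few lines. Edge corrections are ignored here and the Poisson stand-in for the data is a placeholder.

```python
import numpy as np
from scipy.spatial import cKDTree

# F(r) is estimated from distances of random test points to the nearest galaxy
# (P0(r) = 1 - F(r) is the void probability function), G(r) from
# galaxy-to-nearest-galaxy distances, and J(r) = (1 - G) / (1 - F).
# For a Poisson process J(r) = 1.

rng = np.random.default_rng(11)
box = 100.0
gal = rng.uniform(0.0, box, size=(5000, 3))       # replace with galaxy sample
tree = cKDTree(gal)

test = rng.uniform(0.0, box, size=(20000, 3))     # random test points for F
d_empty, _ = tree.query(test, k=1)                # distance to nearest galaxy
d_nn, _ = tree.query(gal, k=2)                    # k=2: first neighbour is the point itself
d_nn = d_nn[:, 1]

for r in (1.0, 2.0, 4.0, 8.0):
    F = np.mean(d_empty <= r)                     # empty-space (contact) distribution
    G = np.mean(d_nn <= r)                        # nearest-neighbour distribution
    P0 = 1.0 - F                                  # void probability function
    J = (1.0 - G) / (1.0 - F) if F < 1.0 else np.nan
    print(f"r = {r:4.1f}   P0 = {P0:.3f}   G = {G:.3f}   J = {J:.3f}")
# clustered data give J < 1, regular (repulsive) patterns give J > 1
```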
to define this quantity ,the point process is smoothed to obtain a continuous density field , the intensity function , by means of a kernel estimator for a given bandwidth .then we consider the fraction of the volume which encompasses those regions having density exceeding a given threshold .the boundary of these regions specifies an isodensity surface .the genus of a surface is basically the number of holes minus the number of isolated regions plus 1 .the genus curve shows the variation of with or for a given window radius of the kernel function . an analytical expression for this curveis known for gaussian density fields .it seems that the empirical curve calculated from the galaxy catalogs can be reasonably well fitted to a gaussian genus curve for window radii varying within a large range of scales . a very elegant generalization of the previous analysis to a larger family of morphological characteristics of the point processes is provided by the minkowski functionals .these scalar quantities are useful to study the shape and connectivity of a union of convex bodies . they are well known in spatial statistics and have been introduced in cosmology by . on a clustered point process , minkowski functionals are calculated by generalizing the boolean grain model into the so - called germ - grain model .this coverage process consists in considering the sets for the diagnostic parameter , where represents the galaxy positions and is a ball of radius centered at point .minkowski functionals are applied to sets when varies . in are four functionals : the volume , the surface area , the integral mean curvature , and the euler - poincar characteristic , related with the genus of the boundary of by .application of minkowski functionals to the galaxy cluster distribution can be found in .these quantities have been used also as efficient shape finders by .this work was supported by the spanish mcyt project aya2000 - 2045 and by the estonian science foundation under grant 2882 .enn saar is grateful for the invited professor position funded by the vicerrectorado de investigacin de la universitat de valncia . peacock j a , cole s , norberg p , baugh c m , bland - hawthorn j , bridges t , cannon r d , colless m , collins c , couch w , dalton g , deeley k , propris r d , driver s p , efstathiou g , ellis r s , frenk c s , glazebrook k , jackson c , lahav o , lewis i , lumsden s , maddox s , percival w j , peterson b a , price i , sutherland w taylor k 2001 _ nature _ * 410 * , 169173 .percival w j , baugh c m , bland - hawthorn j , bridges t , cannon r , cole s , colless m , collins c , couch w , dalton g , propris r d , driver s p , efstathiou g , ellis r s , frenk c s , glazebrook k , jackson c , lahav o , lewis i , lumsden s , maddox s , moody s , norberg p , peacock j a , peterson b a , sutherland w taylor k 2001 .astro - ph/0105252 , submitted to mon . not .soc .saunders w ballinger b e 2000 _ in _r. c kraan - korteweg , p. a henning h andernach , eds , ` the hidden universe , asp conference series ' astronomical society of the pacific , san francisco .astro - ph/0005606 , in press .saunders w , sutherland w j , maddox s j , keeble o , oliver s j , rowan - robinson m , mcmahon r g , efstathiou g p , tadros h , white s d m , frenk c s , carramiana a hawkins m r s 2000 _ mon . not .soc . 
_ * 317 * , 55 - 64 . schmoldt i m , saar v , saha p , branchini e , efstathiou g p , frenk c s , keeble o , maddox s , mcmahon r , oliver s , rowan - robinson m , saunders w , sutherland w j , tadros h , white s d m 1999 _ astron . j. _ * 118 * , 1146 - 1160 .
in this introductory talk we will establish connections between the statistical analysis of galaxy clustering in cosmology and recent work in mainstream spatial statistics . the lecture will review the methods of spatial statistics used by both sets of scholars , having in mind the cross - fertilizing purpose of the meeting series . special topics will be : description of the galaxy samples , selection effects and biases , correlation functions , nearest neighbor distances , void probability functions , fourier analysis , and structure statistics .
we are interested in the following nonconvex semidefinite programming problem : where is convex , is a nonempty , closed convex set in and ( ) are nonconvex matrix - valued mappings and smooth .the notation means that is a symmetric negative semidefinite matrix .optimization problems involving matrix - valued mapping inequality constraints have large number of applications in static output feedback controller design and topology optimization , see , e.g. . especially , optimization problems with bilinear matrix inequality ( bmi ) constraints have been known to be nonconvex and np - hard .many attempts have been done to solve these problems by employing convex semidefinite programming ( in particular , optimization with linear matrix inequality ( lmi ) constraints ) techniques .the methods developed in those papers are based on augmented lagrangian functions , generalized sequential semidefinite programming and alternating directions .recently , we proposed a new method based on convex - concave decomposition of the bmi constraints and linearization technique .the method exploits the convex substructure of the problems .it was shown that this method can be applied to solve many problems arising in static output feedback control including spectral abscissa , , and mixed synthesis problems . in this paper, we follow the same line of the work in to develop a new local optimization method for solving the nonconvex semidefinite programming problem . the main idea is to approximate the feasible set of the nonconvex problem by a sequence of inner positive semidefinite convex approximation sets .this method can be considered as a generalization of the ones in .0.1 cm _ contribution ._ the contribution of this paper can be summarized as follows : * we generalize the inner convex approximation method in from scalar optimization to nonlinear semidefinite programming .moreover , the algorithm is modified by using a _ regularization technique _ to ensure strict descent .the advantages of this algorithm are that it is _ very simple to implement _ by employing available standard semidefinite programming software tools and _ no globalization strategy _ such as a line - search procedure is needed .* we prove the convergence of the algorithm to a stationary point under mild conditions .* we provide two particular ways to form an overestimate for bilinear matrix - valued mappings and then show many applications in static output feedback . 0.1 cm _ outline ._ the next section recalls some definitions , notation and properties of matrix operators and defines an inner convex approximation of a bmi constraint .section [ sec : alg_and_conv ] proposes the main algorithm and investigates its convergence properties .section [ sec : app ] shows the applications in static output feedback control and numerical tests .some concluding remarks are given in the last section .in this section , after given an overview on concepts and definitions related to matrix operators , we provide a definition of inner positive semidefinite convex approximation of a nonconvex set .let be the set of symmetric matrices of size , , and resp ., be the set of symmetric positive semidefinite , resp ., positive definite matrices . for given matrices and in , the relation ( resp . , )means that ( resp . , ) and ( resp . , ) is ( resp . , ) .the quantity is an inner product of two matrices and defined on , where is the trace of matrix . 
for a given symmetric matrix , denotes the smallest eigenvalue of .[ de : psd_convex] a matrix - valued mapping is said to be positive semidefinite convex ( _ psd - convex _ ) on a convex subset if for all $ ] and , one has if holds for instead of for then is said to be _ strictly psd - convex _ on . in the opposite case , is said to be _ psd - nonconvex_. alternatively , if we replace in by then is said to be psd - concave on .it is obvious that any convex function is psd - convex with .a function is said to be _ strongly convex _ with parameter if is convex .the notation denotes the subdifferential of a convex function . for a given convex set , if and if denotes the normal cone of at .the derivative of a matrix - valued mapping at is a linear mapping from to which is defined by for a given convex set , the matrix - valued mapping is said to be differentiable on a subset if its derivative exists at every .the definitions of the second order derivatives of matrix - valued mappings can be found , e.g. , in .let be a linear mapping defined as , where for .the adjoint operator of , , is defined as for any . finally ,for simplicity of discussion , throughout this paper , we assume that all the functions and matrix - valued mappings are _ twice differentiable _ on their domain .let us first describe the idea of the inner convex approximation for the scalar case .let be a continuous nonconvex function .a convex function depending on a parameter is called a convex overestimate of w.r.t .the parameterization if and for all . let us consider two examples .0.1 cm _ example 1 ._ let be a continuously differentiable function and its gradient is lipschitz continuous with a lipschitz constant , i.e. for all .then , it is well - known that . therefore , for any we have with .moreover , for any . we conclude that is a convex overestimate of w.r.t the parameterization .now , if we fix and find a point such that then .consequently if the set is nonempty , we can find a point such that .the convex set is called an inner convex approximation of . 0.1 cm _ example 2 ._ we consider the function in .the function is a convex overestimate of w.r.t .the parameterization provided that .this example shows that the mapping is not always identity .let us generalize the convex overestimate concept to matrix - valued mappings .[ def : over_relaxation ] let us consider a psd - nonconvex matrix mapping .a psd - convex matrix mapping is said to be a psd - convex overestimate of w.r.t .the parameterization if and for all and in . let us provide two important examples that satisfy definition [ def : over_relaxation ] . _ example 3 ._ let be a bilinear form with , and arbitrarily , where and are two matrices .we consider the parametric quadratic form : one can show that is a psd - convex overestimate of w.r.t .the parameterization . indeed , it is obvious that .we only prove the second condition in definition [ def : over_relaxation ] .we consider the expression . 
by rearranging this expression, we can easily show that .now , since , by , we can write : note that .therefore , we have for all and ._ example 4 ._ let us consider a psd - noncovex matrix - valued mapping , where and are two psd - convex matrix - valued mappings .now , let be differentiable and be the linearization of at .we define .it is not difficult to show that is a psd - convex overestimate of w.r.t .the parametrization .[ re : nonunique_of_bmi_app ] _ example 3 _ shows that the `` lipschitz coefficient '' of the approximating function is .moreover , as indicated by _ examples _ 3 and 4 , the psd - convex overestimate of a bilinear form is not unique . in practice , it is important to find appropriate psd - convex overestimates for bilinear forms to make the algorithm perform efficiently .note that the psd - convex overestimate of in _ example 3 _ may be less conservative than the convex - concave decomposition in since all the terms in are related to and rather than and .let us recall the nonconvex semidefinite programming problem .we denote by the feasible set of and the relative interior of , where is the relative interior of .first , we need the following fundamental assumption . [ as : a1 ] the set of interior points of is nonempty .then , we can write the generalized kkt system of as follows : any point with is called a _ kkt point _ of , where is called a _ stationary point _ and called the corresponding lagrange multiplier .the main step of the algorithm is to solve a convex semidefinite programming problem formed at the iteration by using inner psd - convex approximations .this problem is defined as follows : here , is given and the second term in the objective function is referred to as a regularization term ; is the parameterization of the convex overestimate of .let us define by the solution mapping of [ eq : convx_subprob ] depending on the parameters .note that the problem [ eq : convx_subprob ] is convex , is multivalued and convex .the feasible set of [ eq : convx_subprob ] is written as : the algorithm for solving starts from an initial point and generates a sequence by solving a sequence of convex semidefinite programming subproblems [ eq : convx_subprob ] approximated at .more precisely , it is presented in detail as follows .[ alg : a1 ] * initialization . *determine an initial point .compute for . choose a regularization matrix .set .* iteration ( ) * perform the following steps : * _ step 1 . _ for given , if a given criterion is satisfied then terminate . *_ solve the convex semidefinite program [ eq : convx_subprob ] to obtain a solution and the corresponding lagrange multiplier . *_ update , the regularization matrix ( if necessary ) .increase by and go back to step 1 . ** the core step of algorithm [ alg : a1 ] is step 2 where a general convex semidefinite program needs to be solved . in practice , this can be done by either implementing a particular method that exploits problem structures or relying on standard semidefinite programming software tools .note that the regularization matrix can be fixed at , where is sufficiently small and is the identity matrix .since algorithm [ alg : a1 ] generates a feasible sequence to the original problem and this sequence is strictly descent w.r.t .the objective function , _ no globalization strategy _ such as line - search or trust - region is needed .we first show some properties of the feasible set defined by . for notational simplicity , we use the notation . 
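Before the feasible-set properties are stated, it is easy to check definition [def:over_relaxation] numerically for a bilinear term. The explicit formulas of example 3 are not reproduced in this copy, so the sketch assumes one standard choice consistent with the proof outline (all correction terms depend on X - Xb and Y - Yb): B(X,Y) = X^T Y + Y^T X, with Q equal to the linearization of B at (Xb, Yb) plus (X - Xb)^T (X - Xb) + (Y - Yb)^T (Y - Yb). Then Q - B = (X - Xb - Y + Yb)^T (X - Xb - Y + Yb) is positive semidefinite and Q = B at the reference point.

```python
import numpy as np

# Numerical check of the overestimate property for the assumed bilinear form.

rng = np.random.default_rng(5)

def B(X, Y):
    return X.T @ Y + Y.T @ X

def Q(X, Y, Xb, Yb):
    lin = Xb.T @ Y + Y.T @ Xb + X.T @ Yb + Yb.T @ X - Xb.T @ Yb - Yb.T @ Xb
    return lin + (X - Xb).T @ (X - Xb) + (Y - Yb).T @ (Y - Yb)

p, q = 4, 3
for _ in range(100):
    X, Y, Xb, Yb = (rng.normal(size=(p, q)) for _ in range(4))
    gap = Q(X, Y, Xb, Yb) - B(X, Y)
    assert np.linalg.eigvalsh(gap).min() > -1e-10           # Q overestimates B
    assert np.allclose(Q(Xb, Yb, Xb, Yb), B(Xb, Yb))          # equality at the point
print("overestimate property verified on 100 random samples")
```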
[le : feasible_set ] let be a sequence generated by algorithm [ alg : a1 ] . then : * the feasible set for all . * it is a feasible sequence , i.e. .* . * for any , it holds that : where is the strong convexity parameter of . for a given , we have and for . thus if then , the statement a ) holds .consequently , the sequence is feasible to which is indeed the statement b ) .since is a solution of [ eq : convx_subprob ] , it shows that .now , we have to show it belongs to . indeed , since by definition [ def: over_relaxation ] for all , we conclude .the statement c ) is proved .finally , we prove d ) . since is the optimal solution of [ eq : convx_subprob ], we have for all .however , we have due to c ) . by substituting in the previous inequalitywe obtain the estimate d ) .now , we denote by the lower level set of the objective function .let us assume that is continuously differentiable in for any .we say that the _ robinson qualification _condition for [ eq : convx_subprob ] holds at if for . in order to prove the convergence of algorithm [ alg : a1] , we require the following assumption . [ as : a2 ] the set of kkt points of is nonempty . for a given , the matrix - valued mappings are continuously differentiable on .the convex problem [ eq : convx_subprob ] is solvable and the robinson qualification condition holds at its solutions .we note that if algorithm 1 is terminated at the iteration such that then is a stationary point of .[ th : convergence ] suppose that assumptions a.[as : a1 ] and a.[as : a2 ] are satisfied .suppose further that the lower level set is bounded .let be an infinite sequence generated by algorithm [ alg : a1 ] starting from .assume that .then if either is strongly convex or for then every accumulation point of is a kkt point of . moreover ,if the set of the kkt points of is finite then the whole sequence converges to a kkt point of .first , we show that the solution mapping is _closed_. indeed , by assumption a.[as : a2 ] , [ eq : convx_subprob ] is feasible .moreover , it is strongly convex .hence , , which is obviously closed .the remaining conclusions of the theorem can be proved similarly as ( * ? ? ?* theorem 3.2 . ) by using zangwill s convergence theorem of which we omit the details here .[ rm : conclusions ] note that the assumptions used in the proof of the closedness of the solution mapping in theorem [ th : convergence ] are weaker than the ones used in ( * ? ? ?* theorem 3.2 . ) .in this section , we present some applications of algorithm [ alg : a1 ] for solving several classes of optimization problems arising in static output feedback controller design .typically , these problems are related to the following linear , time - invariant ( lti ) system of the form : where is the state vector , is the performance input , is the input vector , is the performance output , is the physical output vector , is state matrix , is input matrix and is the output matrix . 
by using a static feedback controller of the form with ,we can write the closed - loop system as follows : the stabilization , , optimization and other control problems of the lti system can be formulated as an optimization problem with bmi constraints .we only use the psd - convex overestimate of a bilinear form in _ example 3 _ to show that algorithm [ alg : a1 ] can be applied to solving many problems ins static state / output feedback controller design such as : * sparse linear static output feedback controller design ; * spectral abscissa and pseudospectral abscissa optimization ; * optimization ; * optimization ; * and mixed synthesis .these problems possess at least one bmi constraint of the from , where , where and are matrix variables and is a affine operator of matrix variable . by means of _ example3 _ , we can approximate the bilinear term by its psd - convex overestimate . then using schur s complement to transform the constraint of the subproblem [ eq : convx_subprob ] into an lmi constraint .note that algorithm [ alg : a1 ] requires an interior starting point . in this work ,we apply the procedures proposed in to find such a point .now , we summary the whole procedure applying to solve the optimization problems with bmi constraints as follows : [ scheme : a1 ] + _ step 1 . _find a psd - convex overestimate of w.r.t .the parameterization for ( see _ example 1 _ ) .+ _ step 2 ._ find a starting point ( see ) .+ _ step 3 ._ for a given , form the convex semidefinite programming problem [ eq : convx_subprob ] and reformulate it as an optimization with lmi constraints .+ _ step 4 . _apply algorithm [ alg : a1 ] with an sdp solver to solve the given problem .now , we test algorithm [ alg : a1 ] for three problems via numerical examples by using the data from the comp library .all the implementations are done in matlab 7.8.0 ( r2009a ) running on a laptop intel(r ) core(tm)i7 q740 1.73ghz and 4 gb ram .we use the yalmip package as a modeling language and sedumi 1.1 as a sdp solver to solve the lmi optimization problems arising in algorithm [ alg : a1 ] at the initial phase ( phase 1 ) and the subproblem [ eq : convx_subprob ] .the code is available at http://www.kuleuven.be/optec/software/bmisolver .we also compare the performance of algorithm [ alg : a1 ] and the convex - concave decomposition method ( ccdm ) proposed in in the first example , i.e. the spectral abscissa optimization problem . in the second example, we compare the -norm computed by algorithm [ alg : a1 ] and the one provided by hifoo and penbmi .the last example is the mixed synthesis optimization problem which we compare between two values of the -norm level .we consider an optimization problem with bmi constraint by optimizing the spectral abscissa of the closed - loop system as : here , matrices , and are given .matrices and and the scalar are considered as variables .if the optimal value of is strictly positive then the closed - loop feedback controller stabilizes the linear system . by introducing an intermediate variable , the bmi constraint in the second line ofcan be written .now , by applying scheme [ scheme : a1 ] one can solve the problem by exploiting the sedumi sdp solver . 
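To make Steps 3 and 4 of the scheme concrete, the following sketch builds one convexified subproblem for a stabilization-type BMI in Python with cvxpy (the numerical experiments above use YALMIP and SeDuMi instead; the solver choice here is purely illustrative). For simplicity the sketch uses the convex-concave splitting of the bilinear term from the earlier decomposition method, not the potentially less conservative overestimate of Example 3, whose exact parameterization is not reproduced here, and it treats the decay rate alpha as a fixed parameter rather than as a variable coupled through an intermediate variable. All symbol names and the proximal weight rho are assumptions of this sketch.

```python
# Sketch (not the paper's exact formulation): one inner convex SDP subproblem
# for the BMI  (A + B K)^T P + P (A + B K) + 2*alpha*P << 0,  P >> 0,
# built at the iterate (P_k, K_k).  The bilinear term P F + F^T P, F = A + B K,
# is split as 0.5*[(P+F)^T(P+F) - (P-F)^T(P-F)], the concave part is linearised
# at the iterate, and the remaining convex quadratic block is pushed into an
# LMI by a Schur complement, as described in the text.
import numpy as np
import cvxpy as cp

def convex_subproblem(A, B, P_k, K_k, alpha=0.0, rho=1e-3, eps=1e-6):
    n, m = B.shape
    P = cp.Variable((n, n), symmetric=True)
    K = cp.Variable((m, n))

    F, F_k = A + B @ K, A + B @ K_k            # closed-loop matrices (affine in K)
    Z, Z_k = P - F, P_k - F_k
    # tangent of the concave part -0.5*Z^T Z at Z_k (gives a matrix overestimate)
    lin = -0.5 * (Z_k.T @ Z + Z.T @ Z_k - Z_k.T @ Z_k)
    # affine block of the overestimated BMI; eps*I enforces strictness
    L = lin + 2 * alpha * P + eps * np.eye(n)

    W = (P + F) / np.sqrt(2.0)                 # 0.5*(P+F)^T(P+F) = W^T W
    # Schur complement:  W^T W + L << 0   <=>   [[-L, W^T], [W, I]] >> 0
    M = cp.bmat([[-L, W.T], [W, np.eye(n)]])
    constraints = [P >> eps * np.eye(n), 0.5 * (M + M.T) >> 0]  # explicit symmetrisation

    # proximal (regularisation) term keeps the new point close to the iterate
    obj = cp.Minimize(rho * (cp.sum_squares(P - P_k) + cp.sum_squares(K - K_k)))
    cp.Problem(obj, constraints).solve(solver=cp.SCS)
    return P.value, K.value
```

Any feasible point of this subproblem is feasible for the original BMI, since the linearised concave part lies above the true quadratic; this is exactly the inner-approximation property exploited by the algorithm.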
In order to obtain a strict descent direction, we regularize the subproblem [eq:convx_subprob] by adding quadratic regularization terms to its objective. Algorithm [alg:a1] is terminated if one of the following conditions is satisfied:

* the subproblem [eq:convx_subprob] encounters a numerical problem;
* a prescribed accuracy criterion on the iterates is met;
* the maximum number of iterations is reached;
* or the objective function is not significantly improved over two successive iterations.

We test Algorithm [alg:a1] on several problems from the COMP library and compare our results with those reported for the convex-concave decomposition method (CCDM) in .

Table: computational results for the test problems from the COMP library.

Here, the quoted values are the norms of the closed-loop systems obtained with the static output feedback controller. For the first value of the norm level, the computational results show that Algorithm [alg:a1] satisfies the required condition for all the test problems. The problems AC11 and AC12 run into numerical difficulties that Algorithm [alg:a1] cannot resolve, while for the second value of the norm level some problems are reported infeasible; these are denoted by "-" in the table. The norm constraint is active for three problems, among them AC11 and NN8.

We have proposed a new iterative procedure to solve a class of nonconvex semidefinite programming problems. The key idea is to locally approximate the nonconvex feasible set of the problem by an inner convex set. The convergence of the algorithm to a stationary point is investigated under standard assumptions. We limit our applications to optimization problems with BMI constraints and provide a particular way to compute the inner psd-convex approximation of a BMI constraint. Many applications in static output feedback controller design have been shown and numerical examples have been presented. Note that this method can be extended to more general nonconvex SDP problems for which an inner psd-convex approximation of the feasible set can be found; this is also our future research direction.
In this work, we propose a new local optimization method to solve a class of nonconvex semidefinite programming (SDP) problems. The basic idea is to approximate the feasible set of the nonconvex SDP problem by inner positive semidefinite convex approximations via a parameterization technique. This leads to an iterative procedure to search for a local optimum of the nonconvex problem. The convergence of the algorithm is analyzed under mild assumptions. Applications in static output feedback control are benchmarked, and numerical tests are implemented based on data from the COMP library.
the collective dynamics of complex distributed systems often can be usefully described in terms of a superposition of rate processes or frequencies which determine the changes in macroscopically measurable variables as energy flows through the system ; that is , a dynamical model expressed as a system of coupled ordinary differential equations in a few averaged state variables or mode coefficients and several , independently tunable , parameters that represent physical properties or external controls .this type of reduced ( or low - order or low - dimensional ) modelling averages over space , mode spectrum structure , single - particle dynamics and other details , but the payoff lies in its amenity to sophisticated analytic theory and methods that enable us to track important qualitative features in the collective dynamics , such as singularities , bifurcations , and stability changes , broadly over the parameter space . motivated by the need for improved guidance and control of the ( mostly bad ) behaviour of fusion plasmas in magnetic containers ,i elaborate in this work a case study in bifurcation and stability analysis in which reduced dynamical system modelling yields new global and predictive information about gradient driven turbulence flow energetics that is complementary to direct numerical simulation and can guide experimental design . reduced dynamical models are powerful tools for describing and analysing complex systems such as turbulent plasmas and fluids , primarily because they are supported by well - developed mathematics that gives qualitative and global insight , such as singularity , bifurcation , stability , and symmetry theory . in principleone can map analytically the bifurcation structure of the entire phase and parameter space of a reduced dynamical system , but this feat is not possible for an infinite - dimensional system , or partial differential equations , and not practicable for systems of high order .the usefulness of such models seems to be no coincidence , too : in turbulent systems generally , which in detail are both complex and complicated , the dynamics seems to take place in a low - dimensional subspace .it seems paradoxical that enthusiasm for low - dimensional modelling and qualitative analysis of fluid and plasma systems has paced the ever larger direct numerical simulations of their flow fields .this is an exemplar of how the simplexity and complicity juxtaposition can work well : these methods affirm each other , for both have common ground in the universal conservation equations for fluid flow ( as well as separate bases in mathematics and computational science ) . developments in one feed developments in the other .reduced dynamical models can give insights into the physics and dynamics of a system in a way that is complementary to brute - force numerical simulations of the detailed , spatially distributed models from which they are derived .in practice this complementarity means that low - order models ( which capture few or no spatial modes ) can be used to channel information gleaned from the generic , qualitative structure of the parameter space attractors , critical points of onset , stability properties , and so on to numerical simulations ( which contain all spatial modes but , on their own , bring little physical understanding ) , giving them purpose and meaning . 
in turnthe fluid simulations serve as virtual experiments to validate the low - order approach .it is reasonable , therefore , to assert that improved low - dimensional dynamical models for plasmas and fluids could provide numerical experimenters with new and interesting challenges that will continue to push the limits of computational science and technology .fusion plasmas in magnetic containers , such as those in tokamak or stellarator experiments , are strongly driven nonequilibrium systems in which the kinetic energy of small - scale turbulent fluctuations can drive the formation of large - scale coherent structures such as shear and zonal flows .this inherent tendency to self - organize is a striking characteristic of flows where lagrangian fluid elements see a predominantly two - dimensional velocity field , and is a consequence of the inverse energy cascade .the distinctive properties of quasi two - dimensional fluid motion are the basis of natural phenomena such as zonal and coherent structuring of planetary flows , but are generally under - exploited in technology . in plasmas the most potentially useful effect of two - dimensional fluid motion is suppression of high wavenumber turbulence that generates cross - field transport fluxes and degrades confinement .suppression of turbulent transport can manifest temporally as a spontaneous and more - or - less abrupt enhancement of sheared poloidal or zonal flows and concomitant damping of density fluctuations , and spatially as the rapid development of a localized transport barrier or steep density gradient .the phenomenon is often called low- to high - confinement ( l h ) transitions and has been the subject of intensive experimental , _ in numero , _ and theoretical and modelling investigations since the 1980s .the large and lively primary literature on reduced dynamical models for confinement transitions and associated oscillations in plasmas represents a sort of consensus on the philosophy behind qualitative analysis , if not on the details of the models themselves .what motivates this approach is the predictive power that a unified , low - order description of the macroscopic dynamics would have in the management of confinement states . since it is widely acknowledged that control of turbulent transport is crucial to the success of the world - wide fusion energy program it is important to develop predictive models for efficient management of access to , and sustainment of , high confinement rgimes .for example , if one plans to maintain a high confinement state at a relatively low power input by exploiting the hysteresis in the transition it would be useful , not to mention cheaper , to know in advance which parameters control the shape and extent of hysteresis , or whether it can exist at all in the operating space of a particular system , or whether a transition will be oscillatory . however , it has been shown that many of the models in the literature are structurally flawed . they often contain pathological or persistent degenerate ( higher order ) singularities .an associated issue is that of overdetermination , where near a persistent degenerate singularity there may be more defining equations than variables .consequently much of the discussion in the literature concerning confinement transitions is qualitatively wrong .* such models can not possibly have predictive power*. 
the heart of the matter lies in the mapping between the bifurcation structure and stability properties of a dynamical model and the physics of the process it is supposed to represent : if we probe this relationship we find that degenerate singularities ought to correspond to some essential physics ( such as fulfilling a symmetry - breaking imperative , or the onset of hysteresis ) , or they are pathological . in the first casewe can usually unfold the singularity in a physically meaningful way ; in the other case we know that something is amiss and we should revise our assumptions . degeneratesingularities are good because they provide opportunities to improve a model and its predictive capabilities , but bad when they are not recognized as such .[ [ section ] ] the literature on confinement transitions has two basic strands : ( 1 ) transitions are an internal , quasi two - dimensional flow , phenomenon and occur spontaneously when the rate of upscale transfer of kinetic energy from turbulence to shear and zonal flows exceeds the nonlinear dissipation rate ; ( 2 ) transitions are due to nonambipolar ion orbit losses near the plasma edge , the resulting electric field providing a torque which drives the poloidal shear flow nonlinearly .these two different views of the physics behind confinement transitions are smoothly reconciled for the first time in this work .a systematic methodology for characterizing the equilibria of dynamical systems involves finding and classifying high - order singularities then perturbing around them to explore and map the bifurcation landscape .broadly , this paper is about applying singularity theory as a diagnostic tool while an impasto picture of confinement transition dynamics is compounded .the bare - bones model is presented in section [ two ] in section [ three ] the global consequences of local symmetry - breaking are explored , leading to the discovery of an organizing centre and trapped degenerate singularities .this leads in to section [ four ] where i unfold a trapped singularity smoothly by introducing another layer that models the neglected physics of downscale energy transfer .section [ five ] follows the qualitative changes to the bifurcation and stability structure that are due to potential energy dissipative losses . in section [ six ]the unified model is presented , in which is included a direct channel between gradient potential energy and shear flow kinetic energy .the results and conclusions are summarized in section [ seven ]in the edge region of a plasma confinement experiment such as a tokamak or stellarator potential energy is stored in a steep pressure gradient which is fed by a power source near the centre .gradient potential energy is converted to turbulent kinetic energy , which is drawn off into stable shear flows , with kinetic energy , and dissipation channels .the energetics of this simplest picture of confinement transition dynamics are schematized in fig .[ fig1](a ) .( nomenclature for the quantities here and in the rest of the paper is defined in table [ tab1 ] . )energy transfer diagrams for the gradient - driven plasma turbulence shear flow system .annotated arrows denote rate processes ; curly arrows indicate dissipative channels , straight arrows indicate inputs and transfer channels between the energy - containing subsystems. see text for explanations of each subfigure . 
] a skeleton dynamical system for this overall process can be written down directly from fig .[ fig1](a ) by inspection : the power input is assumed constant and the energy transfer and dissipation rates generally may be functions of the energy variables .a more physics - based derivation of this system was outlined in ball ( 2002)ball:2002 , in which averaged energy integrals were taken of momentum and pressure convection equations in slab geometry , using an electrostatic approximation to eliminate the dynamics of the magnetic field energy .equations [ e1 ] are fleshed out by substituting specific rate - laws for the general rate expressions on the right hand sides : [ e2 ] where .the rate expressions in eqs [ e2 ] were derived in sugama and horton ( 1995)sugama:1995 and ball ( 2002)ball:2002 from semi - empirical arguments or given as ansatzes .( rate - laws for bulk dynamical processes are not usually derivable purely from theory , and ultimately must be tested against experimental evidence . ) the rest of this paper is concerned with the character of the equilibria of eqs [ e2 ] and modifications and extensions to this system .we shall study the type , multiplicity , and stability of attractors , interrogate degenerate or pathological singularities where they appear , and classify and map the bifurcation structure of the system . in doing thiswe shall attempt to answer questions such as : are eqs [ e2 ] or modified versions a good that is , predictive model of the system ? does the model adequately reflect the known phenomenology of confinement transitions in fusion plasmas ? what is the relationship between the bifurcation properties of the model and the physics of confinement transitions ?the equilibrium solutions of eqs [ e2 ] are shown in the bifurcation diagrams of fig .[ fig2 ] , where the shear flow is chosen as the state variable and the power input is chosen as the principal bifurcation or control parameter .( in these and subsequent bifurcation diagrams stable equilibria are indicated by solid lines and unstable equilibria are indicated by dashed lines . ) several bifurcation or singular points are evident .the four points in ( a ) annotated by asterisks , where the stability of solutions changes , are hopf bifurcations to limit cycles , which are discussed in section [ three - two ] . on the line the singularity * p * is found to satisfy the defining and non - degeneracy conditions for a pitchfork , where is the bifurcation equation derived from the zeros of eqs [ e2 ] , represents the chosen state variable , represents the chosen control or principal bifurcation parameter , and the subscripts denote partial derivatives . in the qualitatively different bifurcation diagrams ( a ) and ( b ) the dissipative parameter is relaxed either side of the critical value given in ( c ) , where the perfect , twice - degenerate pitchfork is represented .thus for ( a poorly dissipative system ) the turning points in ( a ) appear and the system may also show oscillatory behaviour .the dynamics are less interesting for ( a highly dissipative system ) as in ( b ) because the turning points , and perhaps also the hopf bifurcations , can not occur .however , * p * is persistent through variations in or any other parameter in eqs [ e2 ] .( this fact was not recognized in some previous models for confinement transitions , where such points were wrongly claimed to represent second - order phase transitions . 
)typically the pitchfork is associated with a fragile symmetry in the dynamics of the modelled physical system .the symmetry in this case is obvious from fig .[ fig2 ] : in principle the shear flow can be in either direction equally . in real life ( or _ in numero _ ) , experiments are always subject to perturbations that determine a preferred direction for the shear flow , and the pitchfork is inevitably dissolved . in this casethe perturbation is an effective force or torque from any asymmetric shear - inducing mechanism , such as friction with neutrals in the plasma or external sources , and acts as a shear flow driving rate . assuming this rate to be small and independent of the variables over the characteristic timescales for the other rate processes in the system, we may revise the shear flow evolution eq .[ e2c ] as where the symmetry - breaking term models the shear flow drive .the corresponding energy transfer schematic is shown in fig . [ fig1](b ) .the pitchfork * p * in fig . [ fig2 ] ( c ) can now be obtained exactly by applying the conditions ( [ e3 ] ) to the zeros of eqs [ e2a ] , [ e2b ] , and [ e4 ] , with [ e2d ] , and with and : the other singularity * t * on satisfies the defining and non - degeneracy conditions for a transcritical bifurcation , it is once - degenerate and also requires the symmetry - breaking parameter for exact definition . a bifurcation diagramwhere * p * is fully unfolded , that is , for and , is shown in fig .this diagram is rich with information that speaks of the known and predicted dynamics of the system and of ways in which the model can be improved further , and which can not be inferred or detected from the degenerate bifurcation diagrams of fig .it is worthwhile to step through fig .[ fig3 ] in detail , with the energy schema fig .[ fig1 ] ( b ) at hand .let us begin on the stable branch at .here the pressure gradient is being charged up , the gradient potential energy is feeding the turbulence , and the shear flow is small but positive because the sign of the perturbation is positive . as the power input increased quasistatically the shear flow begins to grow but at the turning point , where solutions become unstable , there is a discontinuous transition to the upper stable branch in and , ( a ) and ( b ) , and to the lower stable branch in , ( c ) . at the given value of hysteresis is evident : if we backtrack a little the back transition takes place at a lower value of .continuing along the upper stable branch in we encounter another switch in stability ; this time at a hopf bifurcation to stable period one limit cycles .( in this and other diagrams the amplitude envelopes of limit cycle branches are marked by large solid dots . ) from the amplitude envelope we see that the oscillations grow as power is fed to the system then are extinguished rather abruptly , at a second hopf bifurcation where the solutions regain stability .the shear flow decreases toward zero as the pressure - dependent anomalous viscosity , the second term in eq .[ e2d ] , takes over to dissipate the energy at high power input .the system may also be evolved to an equilibrium on the antisymmetric , branch in fig .[ fig3 ] ( a ) , by choosing initial conditions appropriately or a large enough kick . however , if the power input then falls below the turning point at we see an interesting phenomenon : the shear flow spontaneously reverses direction. 
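As an aside before following this transient further: the defining and non-degeneracy conditions invoked above for the pitchfork and the transcritical point can be verified numerically for any scalar bifurcation function g(x, lambda) extracted from the zeros of the model. The sketch below uses centred finite differences together with the standard recognition conditions of singularity theory, which are assumed here to coincide with conditions ([e3]) and ([trans]); the step size and tolerances are arbitrary choices.

```python
# Illustrative check of fold / pitchfork / transcritical recognition conditions
# for a scalar bifurcation function g(x, lam), via centred finite differences.
# The conditions coded here are the standard singularity-theory ones and are
# assumed to match eqs ([e3]) and ([trans]); tolerances are arbitrary.
import numpy as np

def _d(g, x, lam, ix=0, il=0, h=1e-4):
    """Mixed partial d^(ix+il) g / dx^ix dlam^il by nested central differences."""
    if ix > 0:
        return (_d(g, x + h, lam, ix - 1, il, h) - _d(g, x - h, lam, ix - 1, il, h)) / (2 * h)
    if il > 0:
        return (_d(g, x, lam + h, ix, il - 1, h) - _d(g, x, lam - h, ix, il - 1, h)) / (2 * h)
    return g(x, lam)

def classify(g, x, lam, tol=1e-6):
    gv, gx, gl = _d(g, x, lam), _d(g, x, lam, 1, 0), _d(g, x, lam, 0, 1)
    if max(abs(gv), abs(gx)) > tol:
        return "regular point (g or g_x nonzero)"
    if abs(gl) > tol:
        return "fold / turning point: g = g_x = 0, g_lam != 0"
    gxx, gxl, gll = _d(g, x, lam, 2, 0), _d(g, x, lam, 1, 1), _d(g, x, lam, 0, 2)
    gxxx = _d(g, x, lam, 3, 0)
    if abs(gxx) < tol and abs(gxxx) > tol and abs(gxl) > tol:
        return "pitchfork: g = g_x = g_xx = g_lam = 0, g_xxx != 0, g_xlam != 0"
    if abs(gxx) > tol and gxl ** 2 - gxx * gll > tol:
        return "transcritical: g = g_x = g_lam = 0, indefinite second-order part"
    return "degenerate singularity of higher codimension"

# quick self-test on the normal forms x^3 - lam*x (pitchfork) and x^2 - lam^2
print(classify(lambda x, l: x ** 3 - l * x, 0.0, 0.0))
print(classify(lambda x, l: x ** 2 - l ** 2, 0.0, 0.0))
```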
the transient would nominally take the system toward the nearest stable attractor , the lower branch , but since it would then be sitting very close to the lower turning point small stochastic fluctuations could easily induce the transition to the higher branch .see the inset zoom - in over this region in ( a ) .here is an example of a feature that is unusual in bifurcation landscapes , a domain over which there is fivefold multiplicity comprising three stable and two unstable equilibria .two more examples of threefold stable domains will be shown in section [ five ] the same equilibria depicted using and as dynamical variables in ( b ) and ( c ) are annotated to indicate whether they correspond to the or domain . for claritythe amplitude envelopes of the limit cycle solutions are omitted from ( b ) and ( c ) . in the remainder of this paperi concentrate on the branches and ignore the domain .now we approach the very heart of the model , the organizing centre ; strangely enough via the branch of _ unstable _ solutions that is just evident in fig .[ fig3 ] ( a ) and ( b ) in the top left - hand corner and ( c ) in the lower left hand corner .the effects of symmetry - breaking are more far - reaching than merely providing a local universal unfolding of the pitchfork , for this branch of equilibria was trapped as a singularity at for .the organizing centre itself , described as a metamorphosis in ball ( 2002)ball:2002 , can be encountered by varying .the sequence in fig .[ fig4 ] tells the story visually .the `` new '' unstable branch develops a hopf bifurcation at the turning point .( strictly speaking , this is a degenerate hopf bifurcation , called dze , where a pair of complex conjugate eigenvalues have zero real and imaginary components . ) as is tuned up to 0.08 ( a ) a segment of stable solutions becomes apparent as the hopf bifurcation moves away from the turning point ; the associated small branch of limit cycles can also just be seen . at , the metamorphosis , the `` new '' and `` old '' branches exchange arms , ( b ) .the metamorphosis satisfies the conditions ( [ trans ] ) and is therefore an unusual , non - symmetric , transcritical bifurcation .it signals a profound change in the _ type _ of dynamics that the system is capable of . for transition must still occur at the lower limit point , but there is no classical hysteresis , ( c ) and ( d ) . in factclassical hysteresis is ( locally ) forbidden by the non - degeneracy condition in eqs [ trans ] .various scenarios are possible in this rgime , including a completely non - hysteretic transition , a forward transition to a stable steady state and a back transition from a large period limit cycle , or forward and back transitions occurring to and from a limit cycle .the symmetry - broken model comprises eqs [ e2a ] , [ e2b ] , and [ e4 ] , with [ e2d ] .the bifurcation structure , some of which is depicted in figs [ fig3 ] and [ fig4 ] , predicts various behaviours : * shear flow suppression of turbulence ; * smooth , hysteretic , non - hysteretic , and oscillatory transitions ; * spontaneous and kicked reversals in direction of shear flow ; * saturation then decrease of the shear flow with power input due to pressure - dependent anomalous viscosity ; * a metamorphosis of the dynamics through a transcritical bifurcation . a critical appraisal of experimental evidence that supports the qualitative structure of this model is given in ball ( 2004)ball:2004a . 
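One way to probe these predicted behaviours numerically is to integrate a low-order system of this type directly and sweep the power input. The sketch below does this for a three-reservoir caricature of fig. 1(b), with gradient potential energy, turbulence kinetic energy, shear-flow kinetic energy, a constant drive and a small symmetry-breaking shear-flow drive. The bilinear transfer rates and the quadratic turbulence damping used here are placeholders rather than the rate laws of eqs [e2]; only the anomalous-viscosity damping follows the form quoted later in the text for eq. ([e2d]). Parameter values and initial conditions are arbitrary.

```python
# Illustrative integration of a three-reservoir energy-transfer model of the
# kind sketched in fig. 1(b): gradient potential energy P, turbulence kinetic
# energy N, shear-flow kinetic energy F, constant power input q and a small
# symmetry-breaking shear-flow drive phi.  The transfer rates gamma*P*N and
# alpha*N*F and the damping beta*N**2 are placeholder forms, NOT eqs [e2];
# mu(P, N) = b*P**(-1.5) + a*P*N follows the form quoted for eq. ([e2d]).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, q, gamma, alpha, beta, a, b, phi):
    P, N, F = y
    mu = b * P ** (-1.5) + a * P * N           # pressure-dependent anomalous viscosity
    dP = q - gamma * P * N                     # drive minus transfer to turbulence
    dN = gamma * P * N - alpha * N * F - beta * N ** 2
    dF = alpha * N * F - mu * F + phi          # transfer in, viscous loss, small drive
    return [dP, dN, dF]

# sweep the power input and record the long-time shear flow: a brute-force
# stand-in for a bifurcation diagram (parameter values are arbitrary)
pars = dict(gamma=1.0, alpha=1.0, beta=0.5, a=0.1, b=0.05, phi=1e-3)
for q in np.linspace(0.05, 2.0, 8):
    sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.1, 0.1], method="LSODA",
                    args=(q, *pars.values()), rtol=1e-6, atol=1e-9)
    print(f"q = {q:4.2f}   F(end) = {sol.y[2, -1]:.4f}")
```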
with the exception of the last itemall of the above dynamics have been observed in magnetically contained fusion plasma systems .the model would therefore seem to be a `` good '' and `` complete '' one , in the sense of being free of pathological or persistent degenerate singularities and reflecting observed behaviours . however , there are several outstanding issues that suggest the model is still incomplete .one issue arises as a gremlin in the bifurcation structure that makes an unphysical prediction , another comes from a thermal diffusivity term that was regarded as negligible in previous work on this model .a third issue arises from the two strands in literature on the physics of confinement transitions : the model as it stands does not describe confinement transitions due to a nonlinear electric field drive .the first issue of incompleteness concerns a pathology in the bifurcation structure of the model , implying infinite growth of shear flow as the power input _falls_. before we pinpoint the culprit singularity , it is illuminating to evince the physical or unphysical situation through a study of the role of the thermal capacitance parameter , which regulates the contribution of the pressure gradient dynamics , eq .[ e2a ] , to the oscillatory dynamics of the system .conveniently , we can use as a second parameter to examine the stability of steady - state solutions around the hopf bifurcations in fig .[ fig4 ] without quantitative change .the machinery for this study consists of the real - time equations [ e2a ] , [ e2b ] , and [ e4 ] recast in `` stretched time '' , [ e6 ] and the two - parameter locus of hopf bifurcations in the real - time system shown in fig .we consider two cases .the curve is the locus of the hopf bifurcations in fig .[ fig4](c ) over variations in . ] 1 .the high - capacitance rgime : + the maximum on the curve in fig .[ fig5 ] marks a degenerate hopf bifurcation , a dze point . herethe two hopf bifurcations at lower in fig . [ fig4](c ) merge and are extinguished as increases .( this merger through a dze has obviously occurred in fig .[ fig4](d ) , where is varied rather than . ) the surviving upper hopf bifurcation moves to higher as increases further .+ in this high - capacitance rgime the dynamics becomes quasi one - dimensional on the stretched timescale . to see this formally ,define and multiply the stretched - time equations [ e6b ] and [ e6c ] through by . in the limit the kinetic energy variables are slaved to the pressure gradient ( or potential energy ) dynamics .switching back to real time and multiplying eq .[ e2a ] through by , for .the kinetic energy subsystems see the potential energy as a constant , `` infinite source '' .it is conjectured that the surviving upper hopf bifurcation moves toward and for the dynamics becomes largely oscillatory in real time , with energy simply sloshing back and forth between the turbulence and the shear flow .2 . the low - capacitance rgime : + the minimum on the curve in fig .[ fig5 ] marks another degenerate hopf bifurcation , also occurring at a dze point . herethe two hopf bifurcations at higher in fig . 
[ fig4](c ) merge and are extinguished as decreases .the surviving hopf bifurcation moves to lower and higher as decreases further .this scenario is illustrated in fig .[ trap4a ] , where the steady - state curve and limit cycle envelope are roughly sketched in a decreasing sequence .+ as is decreased further than the minimum in fig .[ fig5 ] the remaining hopf bifurcation slides up the steady - state curve , which becomes stable toward unrealistically high and low . ]+ in this low - capacitance rgime the dynamics also becomes quasi one - dimensional , and as the conjectured fate of the surviving hopf bifurcation is a double zero eigenvalue trap at . to see why this can be expected ,consider again the stretched - time system , eqs [ e6 ] .for we have and . on the stretched timescalethe potential energy subsystem sees the kinetic energy subsystems as nearly constant , and . reverting to real time , as we have ; the potential energy is reciprocally slaved to the kinetic energy dynamics .+ the anomaly in this low - capacitance picture is that , as the power input ebbs , the shear flow can grow quite unrealistically . with diminishing the hopf bifurcation moves upward along the curve , the branch of limit cycles shrinks , and the conjugate pair of pure imaginary eigenvalues approaches zero .it would seem , therefore , that some important physics is still missing from the model .what is not shown in figs [ fig3 ] and [ fig4 ] ( because a log scale is used for illustrative purposes ) is a highly degenerate branch of equilibria that exists at where and ; it is shown in fig .[ fig7](a ) . for is a trapped degenerate turning point , annotated as s4 , where the `` new '' branch crosses the branch .the key to its release ( or unfolding ) lies in recognizing that kinetic energy in large - scale structures inevitably feeds the growth of turbulence at smaller scales , as well as vice versa . in a flow where lagrangian fluid elements locally experience a velocity field that is predominantly two - dimensionalthere will be a strong tendency to upscale energy transfer ( or inverse energy cascade , see kraichnan and montgomery ( 1980)kraichnan:1980 ) , but the net rate of energy transfer to high wavenumber ( or kolmogorov cascade , see ball ( 2004)ball:2004a ) is not negligible .what amounts to an ultraviolet catastrophe in the physics when energy transfer to high wavenumber is neglected maps to a trapped degenerate singularity in the mathematical structure of the model .the trapped singularity s4 may be unfolded smoothly by including a simple , conservative , back - transfer rate between the shear flow and turbulent subsystems : [ e7 ] the model now consists of eqs [ e7 ] and [ e2a ] , with [ e2d ] , and the corresponding energy transfer schematic is fig .[ fig1](c ) .the back- transfer rate coefficient need not be identified with any particular animal in the zoo of plasma and fluid instabilities , such as the kelvin - helmholtz instability ; it is simply a lumped dimensionless parameter that expresses the inevitability of energy transfer to high wavenumber .the manner and consequences of release of the turning point s4 can be appreciated from fig .[ fig7](b ) , from which we learn a salutary lesson : unphysical equilibria and singularities should not be ignored .the unfolding of s4 creates a maximum in the shear flow , and ( apparently ) a _ fourth _ hopf bifurcation is released from a trap at infinity . 
at the given values of the other parametersthis unfolding of s4 has the effect of forming a finite - area isola of steady - state solutions , but it is important to visualize this ( or , indeed , any other ) bifurcation diagram as a slice of a three - dimensional surface of steady states , where the third coordinate is another parameter .( isolas of steady - state solutions were first reported in the chemical engineering literature , where nonlinear dynamical models typically include a thermal or chemical autocatalytic reaction rate . ) in fig .[ fig8 ] we see two slices of this surface , prepared in order to demonstrate that the metamorphosis identified in section [ 3.3 ] is preserved through the unfolding of s4 . herethe other turning points are labelled s1 , s2 , and s3 . walking through fig .[ fig8 ] we make the forward transition at s1 and progress along this branch through the onset of an a limit cycle rgime , as in fig .[ fig3 ] . for obvious reasonswe now designate this segment as the _ intermediate _ shear flow branch , and the isola or peninsula as the _ high _ shear flow branch . in( a ) a back - transition occurs at s2 .the system can only reach a stable attractor on the isola by a transient , either a non - quasistatic jump in a second parameter or an evolution from initial conditions within the appropriate basin of attraction . in ( b ) as we make our quasistatic way along the intermediate branch with diminishing the shear flow begins to grow , then passes through a second oscillatory domain before reaching a maximum and dropping steeply ; the back transition in this case occurs at s4 .in the model so far the only outlet channel for the potential energy is conversion to turbulent kinetic energy , given by the conservative transfer rate .however , in a driven dissipative system such as a plasma other conduits for gradient potential energy may be significant .the cross - field thermal diffusivity , a neoclassical transport quantity is often assumed to be negligible in the strongly - driven turbulent milieu of a tokamak plasma , but here eq . [ e2 ] is modified to include explicitly a linear `` infinite sink '' thermal energy dissipation rate : following thyagaraja et al .( 1999)thyagaraja:1999 is taken as as a lumped dimensionless parameter and the rate term as representing all non - turbulent or residual losses such as neoclassical and radiative losses .the model now consists of eqs [ e7 ] and [ e8 ] , with [ e2d ] and the corresponding energy schematic is fig . [fig1](d ) .this simple dissipative term has profound effects on the bifurcation structure of the model , and again the best way to appreciate them is through a guided walking tour of the bifurcation diagrams . in fig .[ fig9 ] the series of bifurcation diagrams has been computed for increasing values of and a connected slice of the steady state surface ( i.e. , using a set of values of the other parameters for which the metamorphosis has already occurred ) .a qualitative change is immediately apparent , which has far - reaching consequences : for the two new turning points s5 and s6 appear , born from a local cusp singularity that was trapped at .overall , from ( a ) to ( e ) we see that s1 does not shift significantly but that the peninsula becomes more tilted and shifts to higher , but let us begin the walk at s1 in ( b ) . 
here , as in fig .[ fig8 ] , the transition occurs to an intermediate shear flow state and further increments of take the system through an oscillatory rgime .but the effect of decreasing is radically different : at s6 a discontinuous transition occurs to a high shear flow state on the stable segment of the peninsula . from this pointwe may step forward through the shear flow maximum and fall back to the intermediate branch at s5 .we see that over the range of between s5 and s6 the system has five steady states , comprising three stable interleaved with two unstable steady states . as in fig .[ fig8](b ) a back transition at low occurs at s4 .the tristable rgime in ( b ) has disappeared in ( c ) in a surprisingly mundane way : not through a singularity but merely by a shift of the peninsula toward higher . butthis shift induces a _ different _ tristable rgime through the creation of s7 and s8 at another local cusp singularity . in ( d )s4 and s7 have been annihilated at yet another local cusp singularity .it is interesting and quite amusing to puzzle over the 2-parameter projection of these turning points s1 , s8 , s7 , and s4 followed over , it is given in fig .[ fig9c-2par ] .the origins of the three local cusps can be read off the diagram , keeping in mind that the crossovers are a _trompe de loeil _ : they are nonlocal .the turning points s1 , s8 , s7 , and s4 are followed over . , , , , , , , . ] at s5 in fig .[ fig9](c ) , ( d ) , and ( e ) the system transits to a limit cycle , rather than to a stable intermediate steady state . shown in fig .[ fig10 ] are the bifurcation diagrams in and corresponding to fig .[ fig9](e ) .the pressure gradient jumps at s1 because the power input exceeds the distribution rates , and oscillatory dynamics between the energy subsystems sets in abruptly at s5 .the turbulence is enormously suppressed due to uptake of energy by the shear flow , but rises again dramatically with this hard onset of oscillations .the early theoretical work on confinement transitions attempted to explain edge l h transitions exclusively in terms of the electric field driving torque created by nonambipolar ion orbit losses , with no coupling to the internal dynamics of energy transfers from the potential energy reservoir in the pressure gradient .the electric field is bistable , hence the transition to a high shear flow , or high confinement , rgime is discontinuous and hysteretic .although there are many supporting experiments , this exposition of the physics behind confinement transitions is incomplete because it can not explain shear flow suppression of turbulence a well - known characteristic of l h transitions . herethis `` electric field bifurcation '' physics is treated as a piece of a more holistic physical picture and a simple model for the rate of shear flow generation due to this physics is used to create a unified dynamical model for confinement transitions . following the earlier authors this rateis given as $ ] , which simply says that the rate at which ions are preferentially lost , and hence flow is generated , is proportional to a collision frequency times the fraction of those collisions that result in ions with sufficient energy to escape .the form of the energy factor assumes an ion distribution that is approximately maxwellian and , analogous to an activation energy , is proportional to the square of the critical escape velocity . 
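Purely to illustrate the activation-like character of this loss rate, and with constants and numerical factors normalised to 1 as in the heuristic treatment above, one can tabulate a schematic rate of the form nu*exp(-E_crit/T), with the temperature proxied by the stored gradient energy. The symbols and normalisation below are placeholders of this sketch, not the expression used in the model.

```python
# Schematic activation-type rate for nonambipolar ion orbit loss: a collision
# frequency times a Boltzmann-like factor, with the temperature proxied by the
# pressure-gradient energy P (constants normalised to 1; purely illustrative).
import numpy as np

def orbit_loss_rate(P, nu=1.0, E_crit=1.0):
    # E_crit plays the role of an activation energy ~ square of the critical escape velocity
    return nu * np.exp(-E_crit / P)

for P in (0.2, 0.5, 1.0, 2.0, 5.0):
    print(f"P = {P:3.1f}   rate = {orbit_loss_rate(P):.4f}")
```

The steep switch-on of this rate with increasing stored energy is what allows it to dominate the shear-flow drive at high pressure gradient or low critical escape velocity.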
in this form of the rate expressioni have explicitly included the temperature - dependence of , through , which couples it to the rest of the system .if is high the rate is highly temperature ( pressure gradient ) sensitive .( for heuristic purposes constant density is assumed , constants and numerical factors are normalized to 1 , and the relatively weak temperature dependence of the collision rate is ignored . ) for convenience the equations for the unified model are gathered together : [ e9 ] \label{e8c}\\ & \mu(p , n ) = bp^{-3/2 } + a pn .\tag{\ref{e2d}}\end{aligned}\ ] ] the corresponding energy schematic is fig . [ fig1](e ) where it is seen that is a competing potential energy conversion channel , that can dominate the dynamics when the critical escape velocity is low or the pressure is high .this is exactly what we see in the bifurcation diagrams , fig .[ fig11 ] .overall , the effect of this contribution to shear flow generation from the ion orbit loss torque is to elongate and flatten the high shear flow peninsula .the hopf bifurcations that are starred in ( a ) , where the contribution is relatively small , have disappeared in ( b ) at a dze singularity .what this means is that as begins to take over * there is no longer a practicably accessible intermediate branch * , as can be seen in ( c ) where the intermediate branch is unstable until the remaining hopf bifurcation is encountered at extremely high .locally , in the transition region , as becomes significant the bifurcation diagram begins to look more like the simple s - shaped , cubic normal form schematics with classical hysteresis presented by earlier authors .however , this * unified * model accounts for shear flow suppression of the turbulence ( d ) , whereas theirs could not .the generation of stable shear flows in plasmas , and the associated confinement transitions and oscillatory behaviour in tokamaks and stellarators , is regulated by reynolds stress decorrelation of gradient - driven turbulence and/or by an induced bistable radial electric field .* these two mechanisms are smoothly unified by the first smooth road through the singularity and bifurcation structure of a reduced dynamical model for this system*. the model is constructed self - consistently , beginning from simple rate - laws derived from the basic pathways for energy transfer from pressure gradient to shear flows .it is iteratively strengthened by finding the singularities and allowing them to `` speak for themselves '' , then matching up appropriate physics to the unfoldings of the singularities .the smooth road from turbulence driven to electric field driven shear flows crosses interesting territory : * hysteresis is possible in both rgimes and is governed by different physics . * a metamorphosis of the dynamics is encountered , near which hysteretic transitions are forbidden .the metamorphosis is a robust organizing centre of codimension 1 , even though there are singularities of higher codimension in the system .* oscillatory and tristable domains are encountered . * to travel the smooth road several obstacles are successively negotiated in physically meaningful ways :a pitchfork is dissolved , simultaneously releasing a branch of solutions from a singular trap at infinity , a singularity is released from a trap at zero power input , and another is released from a trap at zero thermal diffusivity . 
in particular , these results suggest strategies for controlling access to high confinement states and manipulating oscillatory behaviour in fusion experiments .more generally i have shown that low - dimensional models have a useful role to play in the study of one of the most formidable of complex systems , a strongly driven turbulent plasma .having survived such a trial - by - ordeal , the methodology is expected to continue to develop as a valuable tool for taming this and other complex systems ..glossary of nomenclature [ cols= " < , < " , ]this work is supported by the australian research council .i thank the referees for helpful comments that have resulted in a better paper , and for their positive endorsements .diamond , p. h. , shapiro , v. , shevchenko , v. , kim , y. b. , rosenbluth , m. n. , carreras , b. a. , sidikman , k. , lynch , v. e. , garcia , l. , terry , p. w. , and sagdeev , r. z. ( 1992 ) .self - regulated shear flow turbulence in confined plasmas : basic concepts and potential applications to the l transition ., 2:97113 .fujisawa , a. , iguchi , h. , minami , t. , yoshimura , y. , tanaka , k. , itoh , k. , sanuki , h. , lee , s. , kojima , m. , itoh , s .-i . , yokoyama , m. , kado , s. , okamura , s. , akiyama , r. , ida , k. , isobe , m. , and s. nishimura , m. ( 2000 ). experimental study of the bifurcation nature of the electrostatic potential of a toroidal helical plasma ., 7(10):41524183 .
A case study in bifurcation and stability analysis is presented, in which reduced dynamical system modelling yields substantial new global and predictive information about the behaviour of a complex system. The first smooth pathway, free of pathological and persistent degenerate singularities, is surveyed through the parameter space of a nonlinear dynamical model for the energetics of gradient-driven turbulence and shear flows in magnetized fusion plasmas. Along the route various obstacles and features are identified and treated appropriately. An organizing centre of low codimension is shown to be robust, several trapped singularities are found and released, and domains of hysteresis, threefold stable equilibria, and limit cycles are mapped. Characterization of this rich dynamical landscape unifies previously disparate models for plasma confinement transitions, supplies valuable intelligence on the key issue of shear flow suppression of turbulence, and suggests targeted strategies for experimental design, control and optimization.
The long-time dynamics of biological evolution have recently attracted considerable interest among statistical physicists, who find in this field new and challenging interacting nonequilibrium systems. An example is the Bak-Sneppen model, in which interacting species are the basic units, and less "fit" species change by "mutations" that trigger avalanches that may lead to a self-organized critical state. However, in reality both mutations and natural selection act on _individual organisms_, and it is desirable to develop and study models in which this is the case. One such model was recently introduced by Hall, Christensen, and coworkers. To enable very long Monte Carlo (MC) simulations of the evolutionary behavior, we have developed a simplified version of this model, for which we here present preliminary results.

The model consists of a population of individuals with a haploid genome of binary genes, so that the total number of potential genomes is two raised to the genome length. The short genomes we have been able to study numerically should be seen as coarse-grained representations of the full genome. We thus consider each different bit string as a separate "species," in the rather loose sense that this term is used for haploid organisms. In our simplified model the population evolves asexually in discrete, nonoverlapping generations; the population of species $i$ in generation $t$ is $n_i(t)$, and the total population is $n_{\rm tot}(t)$. In each generation, the probability that an individual of species $i$ produces offspring before it dies is given by the reproduction probability of eq. ([eq:p]), while it dies without offspring with the complementary probability. The Verhulst factor in eq. ([eq:p]), which prevents $n_{\rm tot}(t)$ from diverging, represents an environmental "carrying capacity" due to limited shared resources.

The time-independent interaction matrix expresses pair interactions between different species, such that each element gives the effect of the population density of one species on another. Reciprocal pairs of elements that are both positive represent symbiosis or mutualism, pairs that are both negative represent competition, while pairs of opposite signs represent predator-prey relationships. To concentrate on the effects of interspecies interactions, we follow earlier work in letting the diagonal, intraspecies elements vanish. As in that work, the off-diagonal elements of the interaction matrix are randomly and uniformly distributed over an interval containing both negative and positive values. A convenient measure of the diversity of the community is based on the information-theoretical entropy (known in ecology as the Shannon-Weaver index), $S(t) = -\sum_i \left[ n_i(t)/n_{\rm tot}(t) \right] \ln \left[ n_i(t)/n_{\rm tot}(t) \right]$.

Fig. [fig:fig1](b): species index vs time; symbols are plotted in black, red, green, and yellow.

A second figure shows the power spectrum averaged over long runs of many generations each; the model parameters are those given in the text and used in fig. [fig:fig1]. The 1/f-like spectrum is indicative of very long-time correlations and a wide distribution of QSS lifetimes.
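A compact sketch of one possible implementation of a single generation of this birth-death scheme is given below. The logistic form of the reproduction probability with a Verhulst penalty, the number of offspring per reproducing individual, the per-gene mutation probability, the interaction interval [-1, 1], and the shorter genome (chosen to keep the interaction matrix small in memory) are all assumptions of the sketch; the exact expression of eq. ([eq:p]) and the parameter values of the study are not reproduced here.

```python
# Sketch of the simplified individual-based model: binary genomes as integers,
# nonoverlapping generations, logistic reproduction probability (assumed form)
# with a Verhulst penalty, and per-gene mutations in the offspring.
import numpy as np

rng = np.random.default_rng(1)
L, N0, F, mu = 10, 2000, 4, 1e-3          # genome length, carrying capacity, offspring, mutation prob (all assumed)
n_genomes = 2 ** L
J = rng.uniform(-1.0, 1.0, (n_genomes, n_genomes))   # off-diagonal interactions; interval assumed
np.fill_diagonal(J, 0.0)                              # no intraspecies interaction

pop = {int(rng.integers(n_genomes)): 100}             # genome (bit string as int) -> population

def generation(pop):
    N_tot = sum(pop.values())
    new_pop = {}
    for g, n in pop.items():
        # interaction term felt by species g, minus the Verhulst penalty
        delta = sum(J[g, h] * m for h, m in pop.items()) / N_tot - N_tot / N0
        p_repro = 1.0 / (1.0 + np.exp(-delta))        # assumed logistic form of eq. [eq:p]
        parents = rng.binomial(n, p_repro)            # parents have F offspring each, then die
        for _ in range(parents * F):
            child = g
            for bit in range(L):                      # per-gene mutation of each offspring
                if rng.random() < mu:
                    child ^= 1 << bit
            new_pop[child] = new_pop.get(child, 0) + 1
    return new_pop

for t in range(200):
    if not pop:
        break
    pop = generation(pop)
    if t % 50 == 0:
        N = int(sum(pop.values()))
        rho = np.array(list(pop.values()), dtype=float) / N
        diversity = np.exp(-np.sum(rho * np.log(rho)))   # e^S, one common diversity measure
        print(f"t = {t:3d}   N_tot = {N:5d}   diversity = {diversity:6.2f}")
```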
We present long Monte Carlo simulations of a simple model of biological macroevolution in which births, deaths, and mutational changes in the genome take place at the level of individual organisms. The model displays punctuated equilibria and flicker noise with a 1/f-like power spectrum, consistent with some current theories of evolutionary dynamics.
the need for the efficient use of the scarce spectrum in wireless applications has led to significant interest in the analysis of cognitive radio systems .one possible scheme for the operation of the cognitive radio network is to allow the secondary users to transmit concurrently on the same frequency band with the primary users as long as the resulting interference power at the primary receivers is kept below the interference temperature limit .note that interference to the primary users is caused due to the broadcast nature of wireless transmissions , which allows the signals to be received by all users within the communication range .note further that this broadcast nature also makes wireless communications vulnerable to eavesdropping .the problem of secure transmission in the presence of an eavesdropper was first studied from an information - theoretic perspective in where wyner considered a wiretap channel model . in ,the secrecy capacity is defined as the maximum achievable rate from the transmitter to the legitimate receiver , which can be attained while keeping the eavesdropper completely ignorant of the transmitted messages .later , wyner s result was extended to the gaussian channel in .recently , motivated by the importance of security in wireless applications , information - theoretic security has been investigated in fading multi - antenna and multiuser channels .for instance , cooperative relaying under secrecy constraints was studied in . in , for amplify and forwad relaying scheme , not having analytical solutions for the optimal beamforming design under both total and individual power constraints , an iterative algorithm is proposed to numerically obtain the optimal beamforming structure and maximize the secrecy rates .although cognitive radio networks are also susceptible to eavesdropping , the combination of cognitive radio channels and information - theoretic security has received little attention .very recently , pei _ et al ._ in studied secure communication over multiple input , single output ( miso ) cognitive radio channels . in this work , finding the secrecy - capacity - achieving transmit covariance matrix under joint transmit and interference power constraints is formulated as a quasiconvex optimization problem . in this paper , we investigate the collaborative relay beamforming under secrecy constraints in the cognitive radio network .we first characterize the secrecy rate of the amplify - and - forward ( af ) cognitive relay channel .then , we formulate the beamforming optimization as a quasiconvex optimization problem which can be solved through convex semidefinite programming ( sdp ) .furthermore , we propose two sub - optimal null space beamforming schemes to reduce the computational complexity .we consider a cognitive relay channel with a secondary user source , a primary user , a secondary user destination , an eavesdropper , and relays , as depicted in figure [ fig : channel ] .we assume that there is no direct link between and , and , and and .we also assume that relays work synchronously to perform beamforming by multiplying the signals to be transmitted with complex weights .we denote the channel fading coefficient between and by , the fading coefficient between and by , and by and the fading coefficient between and by . 
in this model, the source tries to transmit confidential messages to with the help of the relays on the same band as the primary user s while keeping the interference on the primary user below some predefined interference temperature limit and keeping the eavesdropper ignorant of the information .it s obvious that our channel is a two - hop relay network . in the first hop, the source transmits to relays with power =p_s ] .there are two kinds of power constraints for relays .first one is a total relay power constraint in the following form : where ^t$ ] and is the maximum total power . and denote the transpose and conjugate transpose , respectively , of a matrix or vector . in a multiuser network such as the relay system we study in this paper , it is practically more relevant to consider individual power constraints as wireless nodes generally operate under such limitations. motivated by this , we can impose or equivalently where denotes the element - wise norm - square operation and is a column vector that contains the components . is the maximum power for the relay node .the received signals at the destination and eavesdropper are the superposition of the messages sent by the relays .these received signals are expressed , respectively , as where and are the gaussian background noise components with zero mean and variance , at and , respectively .it is easy to compute the received snr at and as where denotes the mutual information .the interference at the primary user is latexmath:[\ ] ] where superscript denotes conjugate operation .then , the received snr at the destination and eavesdropper , and the interference on primary user can be written , respectively , as with these notations , we can write the objective function of the optimization problem ( i.e. , the term inside the logarithm in ( [ srate ] ) ) as if we denote , , define , and employ the semidefinite relaxation approach , we can express the beamforming optimization problem as the optimization problem here is similar to that in .the only difference is that we have an additional constraint due to the interference limitation .thus , we can use the same optimization framework . the optimal beamforming solution that maximizes the secrecy rate in the cognitive relay channelcan be obtained by using semidefinite programming with a two dimensional search for both total and individual power constraints . for simulation, one can use the well - developed interior point method based package sedumi , which produces a feasibility certificate if the problem is feasible , and its popular interface yalmip .it is important to note that we should have the optimal to be of rank - one to determine the beamforming vector .while proving analytically the existence of a rank - one solution for the above optimization problem seems to be a difficult task , we would like to emphasize that the solutions are rank - one in our simulations .thus , our numerical result are tight . also ,even in the case we encounter a solution with rank higher than one , the gaussian randomization technique is practically proven to be effective in finding a feasible , rank - one approximate solution of the original problem .details can be found in .obtaining the optimal solution requires significant computation . to simplify the analysis, we propose suboptimal null space beamforming techniques in this section .we choose to lie in the null space of . 
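Under the amplify-and-forward model just described (relay m forwards its received signal scaled by the beamforming weight w_m), the received SNRs, the interference at the primary user, the per-relay transmit powers, and the resulting secrecy rate can be computed as in the sketch below. The symbol names, the common noise variance sigma^2 at all receivers, and the 1/2 pre-log factor for the two-hop transmission are conventions assumed here rather than taken from the expressions of the text.

```python
# Sketch of the quantities entering the AF secrecy-rate optimisation, under the
# usual amplify-and-forward model implied by the text: relay m receives
# g_m*sqrt(Ps)*x + noise and forwards it scaled by the beamforming weight w_m.
import numpy as np

def af_quantities(w, g, h_d, h_e, h_p, Ps=1.0, sigma2=1.0):
    """w: relay weights; g: source->relay; h_d/h_e/h_p: relay->destination/eavesdropper/primary."""
    def snr(h):
        signal = Ps * abs(np.sum(h * g * w)) ** 2            # coherent two-hop signal power
        fwd_noise = sigma2 * np.sum(np.abs(h * w) ** 2)      # relay noise forwarded to the receiver
        return signal / (fwd_noise + sigma2)                 # plus the receiver's own noise
    snr_d, snr_e = snr(h_d), snr(h_e)
    interference = Ps * abs(np.sum(h_p * g * w)) ** 2 + sigma2 * np.sum(np.abs(h_p * w) ** 2)
    secrecy_rate = 0.5 * max(0.0, np.log2(1.0 + snr_d) - np.log2(1.0 + snr_e))
    relay_powers = np.abs(w) ** 2 * (Ps * np.abs(g) ** 2 + sigma2)   # per-relay transmit powers
    return secrecy_rate, interference, relay_powers

# toy usage: M = 5 relays, Rayleigh-fading coefficients, total relay power P_T
rng = np.random.default_rng(0)
M, P_T = 5, 2.0
cn = lambda k: (rng.standard_normal(k) + 1j * rng.standard_normal(k)) / np.sqrt(2.0)
g, h_d, h_e, h_p, w = cn(M), cn(M), cn(M), cn(M), cn(M)
w *= np.sqrt(P_T / np.sum(np.abs(w) ** 2 * (np.abs(g) ** 2 + 1.0)))  # meet the total power constraint
print(af_quantities(w, g, h_d, h_e, h_p))
```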
with this assumption ,we eliminate s capability of eavesdropping on .mathematically , this is equivalent to , which means is in the null space of .we can write , where denotes the projection matrix onto the null space of .specifically , the columns of are orthonormal vectors which form the basis of the null space of . in our case, is an matrix .the total power constraint becomes .the individual power constraint becomes under the above null space beamforming assumption , is zero .hence , we only need to maximize to get the highest achievable secrecy rate . is now expressed as the interference on the primary user can be written as defining , we can express the optimization problem as this problem can be easily solved by semidefinite programming with bisection search . in this section ,we choose to lie in the null space of and .mathematically , this is equivalent to requiring , and .we can write , where denotes the projection matrix onto the null space of and .specifically , the columns of are orthonormal vectors which form the basis of the null space . in our case, is an matrix .the total power constraint becomes .the individual power constraint becomes .with this beamforming strategy , we again have .moreover , the interference on the primary user is now reduced to which is the sum of the forwarded additive noise components present at the relays .now , the optimization problem becomes again , this problem can be solved through semidefinite programming . with the following assumptions ,we can also obtain a closed - form characterization of the beamforming structure .since the interference experienced by the primary user consists of the forwarded noise components , we can assume that the interference constraint is inactive unless is very small . with this assumption, we can drop this constraint .if we further assume that the relays operate under the total power constraint expressed as , we can get the following closed - form solution : where is the largest generalized eigenvalue of the matrix pair . and positive definite matrix , is referred to as a generalized eigenvalue eigenvector pair of if satisfy .] hence , the maximum secrecy rate is achieved by the beamforming vector where is the eigenvector that corresponds to and is chosen to ensure .the discussion in section [ sec : op ] can be easily extended to the case of more than one primary user in the network .each primary user will introduce an interference constraint which can be straightforwardly included into ( [ optimal ] ) .the beamforming optimization is still a semidefinite programming problem . on the other hand ,the results in section [ sec : op ] can not be easily extended to the multiple - eavesdropper scenario . in this case , the secrecy rate for af relaying is , where the maximization is over the rates achieved over the links between the relays and different eavesdroppers .hence , we have to consider the eavesdropper with the strongest channel . 
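The closed-form null-space solution described above can be reconstructed, under the stated assumptions (interference constraint inactive, total relay power constraint, signal components at the eavesdropper and the primary user forced to zero), as a generalized eigenvector computation. The matrix pair built below follows from the AF signal model used in the previous sketch; it is a reconstruction under those assumptions rather than the paper's own expression.

```python
# Reconstruction (under stated assumptions) of the closed-form null-space
# beamformer: null the coherent signal seen by the eavesdropper and the primary
# user, then maximise the destination SNR under the total relay power
# constraint via a generalised eigenvector.
import numpy as np
from scipy.linalg import null_space, eigh

def bnep_beamformer(g, h_d, h_e, h_p, Ps=1.0, sigma2=1.0, P_T=2.0):
    c_d, c_e, c_p = h_d * g, h_e * g, h_p * g          # effective two-hop channels
    Pi = null_space(np.vstack([c_e, c_p]))             # orthonormal basis, shape (M, M-2)
    A = Ps * np.outer(np.conj(c_d), c_d)               # numerator:  Ps * |c_d^T w|^2 = w^H A w
    D_w = np.diag(Ps * np.abs(g) ** 2 + sigma2)        # per-relay transmit power weights
    B = sigma2 * np.diag(np.abs(h_d) ** 2) + (sigma2 / P_T) * D_w   # denominator at full power
    A_r, B_r = Pi.conj().T @ A @ Pi, Pi.conj().T @ B @ Pi
    vals, vecs = eigh(A_r, B_r)                        # generalised Hermitian eigenproblem
    v = vecs[:, -1]                                    # eigenvector of the largest eigenvalue
    w = Pi @ v
    w *= np.sqrt(P_T / np.real(w.conj() @ D_w @ w))    # activate the total power constraint
    return w

rng = np.random.default_rng(1)
M = 6
cn = lambda k: (rng.standard_normal(k) + 1j * rng.standard_normal(k)) / np.sqrt(2.0)
g, h_d, h_e, h_p = cn(M), cn(M), cn(M), cn(M)
w = bnep_beamformer(g, h_d, h_e, h_p)
print("residual leakage:", abs(np.sum(h_e * g * w)), abs(np.sum(h_p * g * w)))
```

The printed residual leakage terms should be at machine precision, confirming that the forwarded signal vanishes at both the eavesdropper and the primary receiver, so only the forwarded relay noise contributes to the interference, as stated above.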
in this scenario ,the objective function can not be expressed in the form given in ( [ srate ] ) and the optimization framework provided in section [ sec : op ] does not directly apply to the multi - eavesdropper model .however , the null space beamforming schemes discussed in section [ sec : null ] can be extended to the case of multiple primary users and eavesdroppers under the condition that the number of relay nodes is greater than the number of eavesdroppers or the total number of eavesdroppers and primary users depending on which null space beamforming is used .the reason for this condition is to make sure the projection matrix exists .note that the null space of channels in general has the dimension where is the number of relays .we assume that , are complex , circularly symmetric gaussian random variables with zero mean and variances , , and respectively . in this section, each figure is plotted for fixed realizations of the gaussian channel coefficients .hence , the secrecy rates in the plots are instantaneous secrecy rates . in fig .[ fig:1 ] , we plot the optimal secrecy rates for the amplify - and - forward collaborative relay beamforming system under both individual and total power constraints . we also provide , for comparison , the secrecy rates attained by using the suboptimal beamforming schemes .the fixed parameters are , , and . since af secrecy rates depend on both the source and relay powers , the rate curves are plotted as a function of .we assume that the relays have equal powers in the case in which individual power constraints are imposed , i.e. , .it is immediately seen from the figure that the suboptimal null space beamforming achievable rates under both total and individual power constraints are very close to the corresponding optimal ones .especially , they are nearly identical in the high snr regime , which suggests that null space beamforming is optimal at high snrs . thus , null space beamforming schemes are good alternatives as they are obtained with much less computational burden .moreover , we interestingly observe that imposing individual relay power constraints leads to small losses in the secrecy rates . in fig .[ fig:11 ] , we change the parameters to , and . in this case , channels between the relays and the eavesdropper and between the relays and the primary - user are on average stronger than the channels between the relays and the destination .we note that beamforming schemes can still attain good performance and we observe similar trends as before . 
in fig .[ fig:2 ] , we plot the optimal secrecy rate and the secrecy rates of the two suboptimal null space beamforming schemes ( under both total and individual power constraints ) as a function of the interference temperature limit .we assume that .it is observed that the secrecy rate achieved by beamforming in the null space of both the eavesdropper s and primary user s channels ( bnep ) is almost insensitive to different interference temperature limits when since it always forces the signal interference to be zero regardless of the value of .it is further observed that beamforming in the null space of the eavesdropper s channel ( bne ) always achieves near optimal performance regardless the value of under both total and individual power constraints .in this paper , collaborative relay beamforming in cognitive radio networks is studied under secrecy constraints .optimal beamforming designs that maximize secrecy rates are investigated under both total and individual relay power constraints .we have formulated the problem as a semidefinite programming problem and provided an optimization framework .in addition , we have proposed two sub - optimal null space beamforming schemes to simplify the computation .finally , we have provided numerical results to illustrate the performances of different beamforming schemes .a. wyner `` the wire - tap channel , '' _ bell .syst tech .j _ , vol.54 , no.8 , pp.1355 - 1387 , jan 1975 . i. csiszar and j. korner `` broadcast channels with confidential messages , '' _ ieee trans .inform . theory _ , vol.it-24 , no.3 , pp.339 - 348 , may 1978 .v. nassab , s. shahbazpanahi , a. grami , and z .- q .luo , `` distributed beamforming for relay networks based on second order statistics of the channel state information , '' _ ieee trans . on signal proc .56 , no 9 , pp . 4306 - 4316 ,g. zheng , k. k. wong , a. paulraj , and b. ottersten , `` robust collaborative - relay beamforming , '' _ ieee trans . on signal proc .57 , no . 8 , aug .2009 z - q luo , wing - kin ma , a.m .- c .so , yinyu ye , shuzhong zhang `` semidefinite relaxation of quadratic optimization problems '' _ ieee signal proc . magn .3 , may 2010 j. lofberg , `` yalmip : a toolbox for modeling and optimization in matlab , '' _ proc .the cacsd conf ._ , taipei , taiwan , 2004 . s. boyd and l. vandenberghe , convex optimization .cambridge , u.k .: cambridge univ . press , 2004 .
in this paper , a cognitive relay channel is considered , and amplify - and - forward ( af ) relay beamforming designs in the presence of an eavesdropper and a primary user are studied . our objective is to optimize the performance of the cognitive relay beamforming system while limiting the interference in the direction of the primary receiver and keeping the transmitted signal secret from the eavesdropper . we show that , under both total and individual power constraints , the problem becomes a quasiconvex optimization problem which can be solved by interior point methods . we also propose two sub - optimal null space beamforming schemes which can be obtained in a computationally more efficient way . _ index terms : _ amplify - and - forward relaying , cognitive radio , physical - layer security , relay beamforming .
sigmoidal input - output response modules are very well - conserved in cell signaling networks that might be used to implement binary responses , a key element in cellular decision processes .additionally , sigmoidal modules might be part of more complex structures , where they can provide the nonlinearities which are needed in a broad spectrum of biological processes [ 1,2 ] , such as multistability [ 3,4 ] , adaptation [ 5 ] , and oscillations [ 6 ] .there are several molecular mechanisms that are able to produce sigmoidal responses such as inhibition by a titration process [ 7,8 ] , zero - order ultrasensitivity in covalent cycles [ 9,10 ] , and multistep activation processes - like multisite phosphorylation [ 11 - 13 ] or ligand binding to multimeric receptors [ 14 ] .sigmoidal curves are characterized by a sharp transition from low to high output following a slight change of the input .the steepness of this transition is called ultrasensitivity [ 15 ] . in general, the following operational definition of the hill coefficient may be used to calculate the overall ultrasensitivity of sigmoidal modules : where ec10 and ec90 are the signal values needed to produce an output of 10% and 90% of the maximal response .the hill coefficient quantifies the steepness of a function relative to the hyperbolic response function which is defined as not ultrasensitive and has ( i.e. an 81-fold increase in the input signal is required to change the output level from 10% to 90% of its maximal value ) .functions with need a smaller input fold increase to produce such output change , and are thus called ultrasensitive functions .global sensitivity measures such the one described by eq .1 do not fully characterize s - shaped curves , y(x ) , because they average out local characteristics of the analyzed response functions . instead , these local features are well captured by the logarithmic gain or response coefficient measure [ 16 ] defined as : equation 2 provides local ultrasensitivity estimates given by the local polynomial order of the response function . equation 2 provides local ultrasensitivity estimates given by the local polynomial order of the response function .mitogen activated protein ( map ) kinase cascades are a well - conserved motif .they can be found in a broad variety of cell fate decision systems involving processes such as proliferation , differentiation , survival , development , stress response and apoptosis [ 17 ] .they are composed of a chain of three kinases which sequentially activate one another , through single or multiple phosphorylation events .a thoughtful experimental and mathematical study of this kind of systems was performed by ferrell and collaborators , who analyzed the steady - state response of a mapk cascade that operates during the maturation process in xenopus oocytes [ 18 ] .they developed a biochemical model to study the ultrasensitivity displayed along the cascade levels and reported that the combination of the different ultrasensitive layers in a multilayer structure produced an enhancement of the overall system s global ultrasensitivity [ 18 ] .in the same line , brown et al .[ 19 ] showed that if the dose - response curve , f(x ) , of a cascade could be described as the mathematical composition of functions , fisi , that described the behavior of each layer in isolation ( i.e , then the local ultrasensitivity of the different layers combines multiplicatively : . 
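the two descriptors defined above (the operational hill coefficient of eq. 1 and the local response coefficient of eq. 2) can be computed directly from a sampled dose-response curve; the short sketch below does this for a hill-type curve, for which the recovered global coefficient should equal the nominal hill exponent. function names are illustrative only.

```python
import numpy as np

def hill(x, ec50, n):
    return x**n / (ec50**n + x**n)

def global_hill_coefficient(x, y):
    """operational hill coefficient, eq. (1): log(81) / log(ec90 / ec10)."""
    y = y / y.max()                              # normalize to the maximal response
    ec10, ec90 = np.interp([0.1, 0.9], y, x)     # inputs giving 10% and 90% output
    return np.log(81.0) / np.log(ec90 / ec10)

def response_coefficient(x, y):
    """local logarithmic gain, eq. (2): d log(y) / d log(x)."""
    return np.gradient(np.log(y), np.log(x))

x = np.logspace(-3, 3, 20001)
y = hill(x, ec50=1.0, n=2.0)
print(global_hill_coefficient(x, y))             # ~2 for an ideal hill curve
print(response_coefficient(x, y).max())          # local gain approaches n at low input
```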
in connection with this result ,ferrell showed for hill - type modules of the form where the parameter ec50 corresponds to the value of input that elicits half - maximal output , and nh is the hill coefficient ) , that the overall cascade global ultrasensitivity had to be less than or equal to the product of the global ultrasensitivity estimations of each cascade s layer , i.e [ 20 ] . hill functions of the form given by eq .3 are normally used as empirical approximations of sigmoidal dose - response curves , even in the absence of any mechanistic foundation [ 2 ] .however , it is worth noting that for different and more specific sigmoidal transfer functions , qualitatively different results could have been obtained .in particular , a supra - multiplicative behavior ( the ultrasensitivity of the combination of layers is higher than the product of individual ultrasensitivities ) might be observed for left - ultrasensitive response functions , i.e. functions that are steeper to the left of the ec50 than to the right [ 21 ] ) . in this case , the boost in the ultrasensitivity emerges from a better exploitation of the ultrasensitivity `` stored '' in the asymmetries of the dose - response functional form ( see [ 21 ] for details ) .as modules are embedded in bigger networks , constraints in the range of inputs that each module would receive ( as well as in the range of outputs that the network would be able to transmit ) could arise .we formalized this idea in a recent publication introducing the notion of dynamic range constraint of a module s dose - response function .the later concept is a feature inherently linked to the coupling of modules in a multilayer structure , and resulted a significant element to explain the overall ultrasensitivity displayed by a cascade [ 21 ] . besides dynamic range constraint effects sequestration - i.e., the reduction in free active enzyme due to its accumulation in complex with its substrate- is another relevant process inherent to cascading that could reduce the cascade s ultrasensitivity [ 22 - 24 ] .moreover , sequestration may alter the qualitative features of any well - characterized module when integrated with upstream and downstream components , thereby limiting the validity of module - based descriptions [ 25 - 27 ] .all these considerations expose the relevance of studying the behavior of modular processing units embedded in their physiological context .although there has been significant progress in the understanding of kinase cascades , how the combination of layers affects the cascade s ultrasensitivity remains an open - ended question for the general case .sequestration and dynamic range constraints not only contribute with their individual complexity , but also usually occur together , thus making it more difficult to identify their individual effective contribution to the system s overall ultrasensitivity . in the present work ,we have developed a method to describe the overall ultrasensitivity of a molecular cascade in terms of the effective contribution of each module .in addition , said method allows us to disentangle the effects of sequestration and dynamic range constraints .we used our approach to analyze a recently presented synthetic mapk cascade experimentally engineered by oshaughnessy et al .[ 28 ] . 
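as a small numerical check of the two results quoted above (brown et al.'s multiplicative combination of local ultrasensitivities, and ferrell's bound for the global hill coefficient of composed hill functions), the sketch below composes two hill modules; the parameter values are arbitrary.

```python
import numpy as np

def hill(x, ec50, n):
    return x**n / (ec50**n + x**n)

def hill_coeff(x, y):                            # global descriptor, eq. (1)
    y = y / y.max()
    ec10, ec90 = np.interp([0.1, 0.9], y, x)
    return np.log(81.0) / np.log(ec90 / ec10)

def hill_local_gain(x, ec50, n):                 # d log(hill)/d log(x), analytic
    return n * ec50**n / (ec50**n + x**n)

x  = np.logspace(-4, 4, 100001)
f1 = hill(x, 1.0, 2.0)                           # layer 1, nominal hill coefficient 2
F  = hill(f1, 0.5, 3.0)                          # layer 2 (nominal 3) fed by layer 1

# brown et al.: local gains multiply, R_F(x) = R_f2(f1(x)) * R_f1(x)
R_numeric = np.gradient(np.log(F), np.log(x))
R_chain   = hill_local_gain(f1, 0.5, 3.0) * hill_local_gain(x, 1.0, 2.0)
i = np.searchsorted(x, 0.7)                      # an arbitrary interior input
print(R_numeric[i], R_chain[i])                  # the two local gains agree

# ferrell: the composed global hill coefficient is bounded by the product
print(hill_coeff(x, F), "<=", 2.0 * 3.0)
```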
using a synthetic biology approach oshaughnessy et al .[ 28 ] constructed an isolated mammalian mapk cascade ( a raf - mek - erk system ) in yeast and analyzed its information processing capabilities under different rather well - controlled environmental conditions .they made use of a mechanistic mathematical description to account for their experimental observations .their model was very similar in spirit to huang - ferrell s with two important differences : a ) no phosphatases were included , and b ) the creation and degradation of all species was explicitly taken into account .interestingly , they reported that the multilayer structure of the analyzed cascades can accumulate ultrasensitivity supra - multiplicatively , and suggested that cascading itself and not any other process ( such as multi - step phosphorylation , or zero - order ultrasensitivity ) was at the origin of the observed ultrasensitivity .they called this mechanism , de - novo ultrasensitivity generation .as we found the proposed mechanism a rather appealing and unexpected way of ultrasensitivity generation , we wanted to further characterize it within our analysis framework .in particular , we reasoned that the methodology and concepts introduced in the present contribution were particularly well - suited to understand the mechanisms laying behind the ultrasensitivity behavior displayed by oshaughnessy cascade model .the paper was organized as follows .first , we presented a formal connection between local and global descriptors of a module s ultrasensitivity for the case of a cascade system composed of units .we then introduced the notion of hill input s working range in order to analyze the contribution to the overall system s ultrasensitivity of a module embedded in a cascade .next , we presented the oshaughnessy cascade analysis , in order to show the insights that might be gained using the introduced concepts and analysis methodologies .we concluded by presenting a summarizing discussion after which conclusions were drawn .the concept of ultrasensitivity describes a module s ability to amplify small changes in input values into larger changes in output values .it is customary to quantify and characterize the extent of the amplification both globally , using the hill coefficient defined in equation 1 , and locally , using the response coefficient , r(i ) , as a function of the module s input signal i ( equation 2 ) , we found a simple relationship between both descriptions considering the logarithmic amplification coefficient , defined as : describes ( in a logarithmic scale ) the change produced in the output when the input varies from a to b values .for instance , for an hyperbolic function evaluated between the inputs that resulted in 90% and 10% of the maximal output . in this case , the two considered input levels delimited the input range that should be considered for the estimation of the respective hill coefficient . , we called this input interval : the hill input s working range ( see fig 1a - b ) . ) , and the `` hill working range '' is the input range relevant for the calculation of the system s . in hill functions , inputs values much smaller than its ec50 produce local sensitivities around its hill coefficient.schematic response function diagrams for the composition of two hill type ultrasensitive modules ( d - e ) .the and are the input values that take the `` i '' module when the last module ( the second one ) reach the % 10 and % 90 of it maximal output ( ) . 
when (d ) equals the maximum output of module 2 in isolation , thus , and match the ec10 and ec90 of module 2 in isolation .also the hill working range of module 1 is located in the input region below ec501 . on the other hand , when ( e ) is less than the maximum output of module 2 in isolation , thus , and differ from the ec10 and ec90 of module 2 in isolation . in this casethe modules-1 hill working range tends to be centered in values higher than its ec50 , this will depends on modules-2 ultrasensitivity ( see supplementary materials [ sm1]),scaledwidth=100.0% ] [ fig1 ] taking into account eq . ( 4 ) , the parameter could be rewritten as follows , consequently , the hill coefficient could be interpreted as the ratio of the logarithmic amplification coefficients of the function of interest and an hyperbolic function , evaluated in the corresponding hill input s working range .it is worth noting that the logarithmic amplification coefficient that appeared in equation 5 equaled the slope of the line that passed through the points and in a log - log scale .thus , it was equal to the average response coefficient calculated over the interval ] , with ( see fig 1d - e ) .the factor in equation 8 was formally equivalent to the hill coefficient of layer-2 but , importantly , now it was calculated using layer-1 hill input s working range limits , x101 and x901 , instead of the hill working range limits of layer-1 in isolation , and . on the other hand , was the amplification factor that logarithmically scaled the range ] . in this context , we dubbed the coefficient : effective ultrasensitivity coefficient of layer - i , as it was associated to the effective contribution of layer - i to the system s overall ultrasensitivity . for the more general case of a cascade of modules we found that : this last equation , which hold exactly , showed a very general result . for the general case , the overall of a cascade could be understood as a multiplicative combination of the of each module .the connection between the global and local ultrasensitivity descriptors , provided by equation 9 , proved to be a useful tool to analyze ultrasensitivity in cascades , as it allowed assessing the effective contribution of each module to the system s overall ultrasensitivity . according to this equation , hill input s working range designated regions of inputs over which the mean local - ultrasensitivity value was calculated for each cascade level in order to set the system s .it was thus a significant parameter to characterize the overall ultrasensitivity of multilayer structures . in figures 1d-1e we illustrated , for the case of two composed hill functions , to what extent the actual location of these relevant intervals depended on the way in which the cascade layers were coupled .the ratio between and played a key role at this respect ( see fig 1 ) .we showed how this parameter sets the hill working ranges for the case of modules presenting different dose - response curves in sup mat .importantly this analysis highlighted the impact of the detailed functional form of a module s response curve on the overall system s ultrasensitivity in cascade architectures .local sensitivity features of the involved transfer functions were of the uttermost importance in this kind of setting and could be at the core of non - trivial phenomenology . 
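the factorization expressed by equations (5)-(9) can be made concrete for two composed hill modules. in the sketch below the hyperbolic reference value log 9 / log 81 = 1/2 is kept as a single overall normalization factor, which is a simplification of the effective-coefficient bookkeeping of equations (8)-(9); it is meant only to show how the global hill coefficient factorizes into per-layer logarithmic amplification coefficients evaluated over the hill input's working ranges set by the composed response. parameter values are arbitrary.

```python
import numpy as np

def hill(x, ec50, n):
    return x**n / (ec50**n + x**n)

def log_amplification(y_b, y_a, b, a):           # A_{a,b} = log(y(b)/y(a)) / log(b/a)
    return np.log(y_b / y_a) / np.log(b / a)

x  = np.logspace(-4, 4, 100001)
f1 = lambda s: hill(s, ec50=1.0, n=2.0)
f2 = lambda o: hill(o, ec50=0.3, n=3.0)          # ec50_2 below the maximal output of layer 1
F  = f2(f1(x))

Fn = F / F.max()
x10, x90 = np.interp([0.1, 0.9], Fn, x)          # hill input's working range of layer 1
o10, o90 = f1(x10), f1(x90)                      # induced working range of layer 2

A1    = log_amplification(f1(x90), f1(x10), x90, x10)
A2    = log_amplification(f2(o90), f2(o10), o90, o10)
A_hyp = np.log(9.0) / np.log(81.0)               # hyperbolic reference, = 1/2

nH_direct = np.log(81.0) / np.log(x90 / x10)     # global hill coefficient of the cascade
print(nH_direct, A1 * A2 / A_hyp)                # identical: product of per-layer
                                                 # amplifications over their working ranges
```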
for example a dose - response module with larger local ultrasensitivities than its overall global value might contribute with more ultrasensitivity to the system than the function s own ( see sup.mat .[ sm1 ] ) . in order to disentangle the different factors contributing to a cascade overall ultrasensitivity , we simultaneously considered three approximations of the system under study ( see fig 2 ) . in a first step ( fig 2a ) we numerically computed the transfer function , , of each module in isolation and calculated the respective hill coefficients . )can be obtain by the mathematical composition of each module s transfer functions acting in isolation ( b ) . when the sequestration effect is taken into account, the layers embedded in the map kinase cascade may have a different dose - response curve from the isolated case ( c).,scaledwidth=100.0% ] [ fig2 ] we then studied the mathematical composition , , of the isolated response functions ( fig 2b ) : represented the transfer function of the kinase cascade when sequestration effects were completely neglected .following equation 9 effective ultrasensitivities , could be estimated and compared against the global ultrasensitivity that each module displayed in isolation , .thus , this second step aimed to specifically analyze to what extent the existence of _ hill s input working ranges _ impinged on ultrasensitivity features display by cascade arrangement of layers ( fig 2b ) .finally , in a third step , response functions were obtained considering a mechanistic model of the cascade .effective ultrasensitivity coefficients , , could then be estimated for each module in order to asses for putative sequestration effects that could take place in the system ( fig 2c ) . a sketch of the oshaughnessy et al .mapk cascade is shown in fig 3a . in our analysis , we defined the output of a module and the input to the next one as the total active form of a species , including complexes with the next layer s substrates .however , we excluded complexes formed by same layer components ( such as a complex between the phosphorylated kinase and its phosphatase ) , since these species are `` internal '' to each module . by doing this ,we are able to consistently identify layers with modules ( the same input / output definition was used by ventura _ et al _ [ 25 ] ) .[ fig3 ] in order to study the contribution of each layer to the system s ultrasensitivity , we proceeded to numerically compute the transfer function of each module in isolation and then calculate their respective ( column 2 in table 1 ) ..[tab : table1]*3-step analysis of oshaughnessy _ et al ._ dual - step cascade .* + column 1 : hill coefficients of active species dose - response curves ( respect to estradiol ) .column 2 : hill coefficients of the modules in .column 3 : effective ultrasensitivities of the system given by the composition of the modules in isolation , which is represented by ( equation [ 6 ] ) where no sequestration effects are present .column 4 : effective ultrasensitivities in the original cascade . [ cols="<,<,<,<,<",options="header " , ] we verified that the hill function is not contributing as much ultrasensitivity as the original system s raf - mek module .the reason is that even the dose - response of active mek and the hill function appear to be similar , there are strong dissimilarities in their local ultrasensitivity behavior ( see fig 5b ) .this is particularly true for low input values , where the hill s input working range is located . 
in this region ,the active mek curve presents local ultrasensitivity values larger than the hill function counterpart , thus the replacement by a hill function produce a reduction in the hill coefficient in this way , despite the high - quality of the fitting adjustment residual standard error=2.6 ) , the hill function approximation introduced non - trivial alterations in the system s ultrasensitivity as a technical glitch .there have been early efforts to interpret cascade system - level ultrasensitivity out of the sigmoidal character of their constituent modular components .usually they have focused either on a local or a global characterization of ultrasensitivity features .for instance , brown et al . [ 19 ] had shown that the local system s ultrasensitivity in cascades equals the product of the local ultrasensitivity of each layer . in turn , from a global ultrasensitivity perspective , ferrell [ 20 ] pointed out that in the composition of two hill functions , the hill coefficient results equal or less than the product of the hill coefficient of both curves ( ) . in this contributionwe have found a mathematical expression ( equation 6 ) that linked both , the local and global ultrasensitivity descriptors in a fairly simple way . moreoverwe could provide a generalized result to handle the case of a linear arrangement of an arbitrary number of such modules ( equation 9 ) .noticeably , within the proposed analysis framework , we could decompose the overall global ultrasensitivity in terms of a product of single layer effective ultrasensitivities .these new parameters were calculated as local - ultrasensitivity values averaged over meaningful working ranges ( dubbed _ hill s input working ranges _ ) , and permitted to assess the effective contribution of each module to the system s overall ultrasensitivity .of course , the reason why we could state an exact general equation for a system - level feature in terms of individual modular information was that in fact system - level information was used in the definition of the _ hill s input working ranges _ that entered equation 9 .the specific coupling between ultrasensitive curves , set the corresponding _hill s input working ranges _ , thus determining the effective contribution of each module to the cascade s ultrasensitivity. this process , which we called _hill s input working range setting _ , has already been noticed by several authors [ 20 , 29,30 , 23 ] , but as far as we know this was the first time that a mathematical framework , like the one we present here , has been proposed for it .the value of the obtained expression ( equation 9 ) resides in the fact that not only it captured previous results , like ferrell s inequality , but also that it threw light about the mechanisms involved in the ultrasensitivity generation .for instance , the existence of supramultiplicative behavior in signaling cascades have been reported by several authors [ 23 , 28 ] but in many cases the ultimate origin of supramultiplicativity remained elusive .our framework naturally suggested a general scenario where supramultiplicative behavior could take place .this could occur when , for a given module , the corresponding _s input working range _ was located in an input region with local ultrasensitivities higher than the global ultrasensitivity of the respective dose - response curve . 
in order to study how multiple ultrasensitive modules combined to produce an enhancement of the system s ultrasensitivity , we have developed an analysis methodology that allowed us to quantify the effective contribution of each module to the cascade s ultrasensitivity and to determine the impact of sequestration effects in the system s ultrasensitivity .this method was particularly well suited to study the ultrasensitivity in map kinase cascades .we used our methodology to revisit oshaughnessy et al .tunable synthetic mapk system [ 28 ] in which they claim to have found a new source of ultrasensitivity called : _ de novo _ ultrasensitivity generation .they explained this new effect in terms of the presence of intermediate elements in the kinase - cascade architecture .we started analyzing the mapk cascade .we found that sequestration was not affecting the system s ultrasensitivity and that the overall sub - multiplicative behavior was only due to a re - setting of the hill input s working range for the first and second levels of the cascade .then , to investigate the origin of the claimed _ de novo _ ultrasensitivity generation mechanism , we applied our framework on the single - step phosphorylation cascade .we found that the system s ultrasensitivity in the single - step cascade came only from the contribution of the last module , which behaved as a goldbeter - koshland unit with kinases working in saturation and phosphatases in a linear regime .therefore the ultrasensitivity in its single - step cascade was not generated by the cascading itself , but by the third layer , which itself was actually ultrasensitive .finally we analyzed the auxiliary model considered by oshaughnessy et al .in which the raf and mek layers were replaced by a hill function that is coupled to the erk layer . in this case , , even the original estradiol - mek input - output response curve could be fairly well fitted and global ultrasensitivity features were rather well captured , the replacement by a hill function produce a strong decrease in the systems ultrasensitivity .we found that the functional form of the hill function failed to reproduce original local ultrasensitivity features that were in fact the ones that , due to the particular hill working range setting acting in this case , were responsible for the overall systems ultrasensitivity behavior .the analyzed case was particularly relevant , as provided an illustrative example that warned against possible technical glitches that could arise as a consequence of the inclusion of approximating functions in mapk models .the study of signal transmission and information processing inside the cell has been , and still is , an active field of research . in particular , the analysis of cascades of sigmoidal modules has received a lot of attention as they are well - conserved motifs that can be found in many cell fate decision systems . in the present contribution we focused on the analysis of the ultrasensitive character of this kind of molecular systems .we presented a mathematical link between global and local characterizations of the ultrasensitive nature of a sigmoidal unit and generalized this result to handle the case of a linear arrangement of such modules . in this way, the overall system ultrasensitivity could define in terms of the effective contribution of each cascade tier . 
based on our finding , we proposed a methodological procedure to analyzed cascade modular systems , in particular mapk cascades .we used our methodology to revisit oshaughnessy et al .tunable synthetic mapk system [ 28 ] .in which they claim to find a new source of ultrasensitivity called : _ de novo _ ultrasensitivity generation , which they explained in terms of the presence of intermediate elements in the kinase - cascade architecture . with our frameworkwe found that the ultrasensitivity did not come from a cascading effect but from a ` hidden ' first - order ultrasensitivity process in the one of the cascade s layer . from a general perspective, our framework serves to understand the origin of ultrasensitivity in multilayer structures , which could be a powerful tool in the designing of synthetic systems .in particular , in ultrasensitive module designing , our method can be used to guide the tuning of both the module itself and the coupling with the system , in order to set the working range in the region of maximal local ultrasensitivity .\1 . ferrellje , ha sh .ultrasensitivity part iii : cascades , bistable switches , and oscillators .trends in biochemical sciences .2014 dec 31;39(12):612 - 8 .zhang q , sudin bhattacharya and melvin e. andersen ultrasensitive response motifs : basic amplifiers in molecular signalling networks open biol .2013 3 130031 + 3 .angeli d , ferrell j e and sontag e d detection of multistability , bifurcations , and hysteresis in a large class of biological positive - feedback systems .pnas 2004 101 ( 7 ) 1822 - 7 + 4 .ferrell j e and xiong w bistability in cell signaling : how to make continuous processes discontinuous , and reversible processes irreversible chaos 2001 11 ( 1 ) 227 - 36 + 5 .srividhya j , li y , and pomerening jr open cascades as simple solutions to providing ultrasensitivity and adaptation in cellular signaling phys biol .2011 8(4):046005 6 .kholodenko b n negative feedback and ultrasen- sitivity can bring about oscillations in the mitogen - acti- vated protein kinase cascades .eur j biochem 2000 267 15831588 .buchler n e and louis m. , molecular titration and ultrasensitivity in regulatory networks .2008384 1106 - 19 .buchler n e and cross f r. 
, protein sequestration generates a flexible ultrasensitive response in a genetic network .mol syst biol .20095 272 + 9 .goldbeter a and koshland d e an amplified sensitivity arising from covalent modification in biological systems pnas 1981 78 11 6840 - 6844 + 10 .ferrell j e and ha s h ultrasensitivity part ii : multisite phosphorylation , stoichiometric inhibitors , and positive feedback trends in biochemical sciences 2014 39 ( 11 ) : 556569 + 11 .ferrell j e tripping the switch fantastic : how a protein kinase cascade can convert graded inputs into switch - like outputs trends biochem .1996 21 460466 + 12 .n i , hoek j b , and kholodenko b n signaling switches and bistability arising from multisite phosphorylation in protein kinase cascades journal cell biology 2004 164 ( 3):353 + 13 .gunawardena j multisite protein phosphorylation makes a good threshold but can be a poor switch .pnas 2005 102 41 1461714622 + 14 .rippe k , analysis of protein - dna binding in equilibrium , b.i.f .futura 199712 20 - 26 + 15 .ferrell j e and ha s h ultrasensitivity part i : michaelian responses and zero - order ultrasensitivity trends in biochemical sciences 2014 39 ( 11 ) : 496503 + 16 .kholodenko b n , hoek j b , westerhoff h v and brown g c quantification of information transfer via cellular signal transduction pathways , febs letters 1997 414 430 - 4 + 17 .keshet y and seger r.the map kinase signaling cascades : a system of hundreds of components regulates a diverse array of physiological functions methods mol biol 2010 661 3 - 38 + 18 .huang c - y f and ferrell j e ultrasensitivity in the mitogen - activated protein kinase cascade proc . natl .1996 93 10078 - 10083 + 19 .brown gc , hoek j b , kholodenko b n , why do protein kinase cascades have more than one level ? , trends biochem sci .1997 22 ( 8):288 . + 20 .ferrell j e how responses get more switch - like as you move down a protein kinase cascade trends biochem sci. 1997 22 ( 8):288 - 9 .altszyler e , ventura a , colman - lerner a , chernomoretz a. impact of upstream and downstream constraints on a signaling module s ultrasensitivity . physical biology .2014 oct 14;11(6):066003 .bluthgen n , bruggeman f j , legewie s , herzel h , westerhoff h v and kholodenko b n effects of sequestration on signal transduction cascades .febs journal 2006 273 895 - 906 + 23 .e and slepchenko b m on sensitivity amplification in intracellular signaling cascades phys . biol .2008 5 036004 - 12 + 24 .wang g , zhang m. tunable ultrasensitivity : functional decoupling and biological insights . 
scientific reports .2016 6 20345 + 25 .ventura a c , sepulchre j a and merajver s d a hidden feedback in signaling cascades is revealed , plos comput biol .2008 4(3):e1000041 + 26 .del vecchio d , ninfa a and sontag e modular cell biology : retroactivity and insulation .mol sys biol 2008 4 161 p161 .ventura a c , jiang p , van wassenhove l , del vecchio d , merajver s d , and ninfa a j signaling properties of a covalent modification cycle are altered by a downstream target pnas 2010 107 ( 22 ) 10032 - 10037 + 28 .oshaughnessy e c , palani s , collins j j , sarkar c a tunable signal processing in synthetic map kinase cascades cell 2011 7 144(1):119 - 31 + 29 .bluthgen n and herzel h map - kinase - cascade : switch , amplifier or feedback controller ?2nd workshop on computation of biochemical pathways and genetic networks - berlin : logos - verlag 2001 55 - 62 + 30 .bluthgen n and herzel h how robust are switches in intracellular signaling cascades journal of theoretical biology 2003 225 293 - 300 +the _ hill s input working range _ delimits the region of inputs over which the mean local - ultrasensitivity value is calculated ( equation 9 ) .it is thus a significant parameter to get insights about the overall ultrasensitivity of multilayer structures . in what follows ,we show that the actual location of this relevant interval depends on the way in which cascade layers are coupled .let s start by considering two coupled ultrasensitive modules .two different regimes could be identified depending whether the upstream module s maximum output was or was nt large enough to fully activate the downstream unit : a. in the first case i.e. when ( see fig 1d ) , and are equal to the and levels respectively .therefore , when coupled to module-1 , the hill input s working range of module-2 would not differ from the isolated case , and would equal the hill coefficient of this module acting in isolation : i.e .in addition , it can be seen that the hill input s working range of module-1 tends to be located at the low input - values region for increasing levels of the ratio . in this region the response coefficient of the hill functions achieve the highest values , with ( see fig 1c ) , thus , when calculating the average logarithmic gain , , we would obtain .finally , following equation 8 we get .it can be seen that the cascade behaves multiplicatively in this regime , which is consistent with ferrell s results [ 1 ] b. when the upstream module s maximal output is not enough to fully activate the downstream module , i.e. we will have different behaviors depending on module-2 ultrasensitivity : first let s see what happens in a case in which module-2 dose - response has , thus displaying a linear behavior at low input values ( see fig 6 ) .[ figs1 ] the linearity produces that and ( and of the linear curve ) match the % 10 and % 90 of , thus and coincide with and centering the hill working range around .furthermore , as a result of applying equation 4 , the system s behavior lies on module-1 and shows a multiplicative behavior , given the linearity of module-2 . on the other hand , it can be seen that if , then module-2 dose response has a power - law behavior at low input values ( see fig 6 ) . 
in this case , the non - linearity produces a shift in modules-2 working range toward higher values , which centers modules-1 hill working range in input values higher than .furthermore , given that decreases with , the modules-1 working range shift produces , then , finally we get that if and , the system shows a submultiplicative behavior ( consistent with ferrell s results [ 20 ] ) , which arises from a setting of modules-1 working range in a region with low local ultrasensitivity .although we show that the submultiplicativity occurs in the limit of , the same argument is still valid for . of course, the ultimate consequences in the coupling of two ultrasensitive modules will depend on the particular mathematical details of the transfer functions under consideration . in this way, a completely qualitatively different behavior could be found for a system composed of two modules characterized by golbeter - koshland , gk , response functions [ 2 ] .gk functions appear in the mathematical characterization of covalent modification cycles ( such as phosphorylation - dephosphorylation ) , ubiquitous in cell signaling , operating in saturation . for cases where the phosphatases , but not the kinases , work in saturation , gk functions present input regions with response coefficients higher than their overall ( see fig 2a - c ) .their _ hill input s working range _ are thus located in the region of greatest local ultrasensitivity , these functions are able to contribute with more effective ultrasensitivity than their global ultrasensitivity .therefore , cascades involving gk functions may exhibit supra - multiplicative behavior . for this kind of systems, fig 2d shows that , under regime ( a ) ( when the modules-2 ec50 is much lower than the gk maximal output level , o1 max ) the modules-1 hill input s working range is set in its linear regime ( ) , and the gk function does not contribute to the overall system s ultrasensitivity . on the other hand , fig 2e shows that the to relation can be tuned in order to set modules-1 hill input s working range in its most ultrasensitive region , producing an effective ultrasensitivity contribution , , even larger than the ultrasensitivity of the gk curve in isolation ( i.e. ) , resulting in supra - multiplicative behavior .our analysis highlights the impact of the detailed functional form of a module s response curve on the overall system s ultrasensitivity in cascade architectures .local sensitivity features of the involved transfer functions are of the uttermost importance in this kind of setting and could be at the core of non - trivial phenomenology .ferrell j e how responses get more switch - like as you move down a protein kinase cascade trends biochem sci .1997 22 ( 8):288 - 9 .+ 2 . goldbeter a and koshland d e an amplified sensitivity arising from covalent modification in biological systems pnas 1981 78 11 6840 - 6844in fig 7a can be appreciated that sequestration effects were actually negligible for the mapkk and mapk layers , given that the input - output relation of the composition of isolated functions ( non - seq ) and embedded modules ( seq ) coincided . only for the mapkkk layer ,sequestration effects produced a shift between both curves .noticeably , the corresponding hill working ranges changed accordingly , and the resulting overall ultrasensitivity did not get affected at all . 
hence we could finally conclude that in this particular system , even sequestration effects existed , the overall sub - multiplicative behavior was only due to a resetting of the _ hill input s working range _ for the first and second levels of the cascade . and of each layer of the original cascade ,while the red solid vertical lines show the and of each layers in the system given by the composition of the modules in isolation ( , see eq [ 6 ] ) .it worth noting that the response curves that each module sustain in the non - sequestration scenario ( panels b - c ) will coincide with the isolated curves , with the exception that are limited in the spanned input region.,scaledwidth=100.0% ] [ figs2 ]the goldbeter - koshland function [ 1 ] is defined as in order to center the g - k function , we multiply the independent variable for a scale factor , , where was set in order to make the ec50 of g - k function coincides with the ec50 of erkpp curve
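the explicit formula of the goldbeter-koshland function did not survive extraction above; the sketch below reproduces what is, to the best of our reading, the standard closed form, purely for illustration, together with a check that for a saturated phosphatase and an unsaturated kinase the curve has input regions whose local logarithmic gain exceeds its global hill coefficient - the property invoked in the supra-multiplicativity discussion. parameter values (and the rescaling mentioned above) are arbitrary stand-ins.

```python
import numpy as np

def goldbeter_koshland(v1, v2, J1, J2):
    """standard closed form for the steady-state modified fraction."""
    B = v2 - v1 + J1 * v2 + J2 * v1
    return 2.0 * v1 * J2 / (B + np.sqrt(B**2 - 4.0 * (v2 - v1) * v1 * J2))

u = np.logspace(-2, 3, 50001)                        # kinase/phosphatase activity ratio
G = goldbeter_koshland(u, 1.0, J1=2.0, J2=0.01)      # phosphatase saturated, kinase not

Gn = G / G.max()
ec10, ec90 = np.interp([0.1, 0.9], Gn, u)
nH_global  = np.log(81.0) / np.log(ec90 / ec10)
R_local    = np.gradient(np.log(G), np.log(u))
print(nH_global, R_local.max())                      # the local gain exceeds the global nH
```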
ultrasensitive response motifs , which are capable of converting graded stimuli into binary responses , are very well - conserved in signal transduction networks . although it has been shown that a cascade arrangement of multiple ultrasensitive modules can produce an enhancement of the system s ultrasensitivity , how the combination of layers affects the cascade s ultrasensitivity remains an open - ended question for the general case . here we have developed a methodology that allowed us to quantify the effective contribution of each module to the overall cascade s ultrasensitivity and to determine the impact of sequestration effects on the overall system s ultrasensitivity . the proposed analysis framework provided a natural link between global and local ultrasensitivity descriptors and was particularly well - suited to study the ultrasensitivity in map kinase cascades . we used our methodology to revisit oshaughnessy et al . s tunable synthetic mapk cascade , in which they claim to have found a new source of ultrasensitivity : ultrasensitivity generated de novo , which arises from the cascade structure itself . in this respect , we showed that the ultrasensitivity of their single - step cascade did not come from a cascading effect but from a ` hidden ' first - order ultrasensitivity process in one of the cascade s layers . our analysis also highlighted the impact of the detailed functional form of a module s response curve on the overall system s ultrasensitivity in cascade architectures . local sensitivity features of the involved transfer functions were found to be of the utmost importance in this kind of setting and could be at the core of non - trivial phenomenology associated with ultrasensitive motifs .
the self - avoiding walk ( saw ) model is an important model in statistical physics .it models the excluded - volume effect observed in real polymers , and exactly captures universal features such as critical exponents and amplitude ratios .it is also an important model in the study of critical phenomena , as it is the limit of the -vector model , which includes the ising model ( ) as another instance .indeed , one can straightforwardly simulate saws in the infinite volume limit , which makes this model particularly favorable for the calculation of critical parameters .exact results are known for self - avoiding walks in two dimensions and for ( mean - field behavior has been proved for ) , but not for the most physically interesting case of .the pivot algorithm is a powerful and oft - used approach to the study of self - avoiding walks , invented by lal and later elucidated and popularized by madras and sokal .the pivot algorithm uses pivot moves as the transitions in a markov chain which proceeds as follows . from an initial saw of length , such as a straight rod ,new -step walks are successively generated by choosing a site of the walk at random , and attempting to apply a lattice symmetry operation , or pivot , to one of the parts of the walk ; if the resulting walk is self - avoiding the move is accepted , otherwise the move is rejected and the original walk is retained . thus a markov chain is formed in the ensemble of saws of fixed length ; this chain satisfies detailed balance and is ergodic , ensuring that saws are sampled uniformly at random .one typical use of the pivot algorithm is to calculate observables which characterize the size of the saws : the squared end - to - end distance , the squared radius of gyration , and the mean - square distance of a monomer from its endpoints . to leading orderwe expect the mean values of these observables over all saws of steps , with each saw is given equal weight , to be ( ) , with a universal critical exponent .for -step saws , the implementation of the pivot algorithm due to madras and sokal has estimated mean time per attempted pivot of on and on ; performance was significantly improved by kennedy to and respectively . in this article, we give a detailed description of a new data structure we call the saw - tree .this data structure allows us to implement the pivot algorithm in a highly efficient manner : we present a heuristic argument that the mean time per attempted pivot is on and , and numerical experiments which show that for walks of up to steps the algorithmic complexity is well approximated by .this improvement enables the rapid simulation of walks with many millions of steps . in a companion article , we describe the algorithm in general terms , and demonstrate the power of the method by applying it to the problem of calculating the critical exponent for three - dimensional self - avoiding walks . 
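as a point of reference for the implementations discussed below, the elementary pivot move itself can be written down in a few lines. the sketch below is a deliberately naive hash-set implementation in the spirit of madras and sokal (square lattice, o(n) work per accepted pivot), included only to make the markov-chain step concrete; it is not the saw-tree method developed in this article.

```python
import random

# the eight lattice symmetries of Z^2 as 2x2 integer matrices, identity excluded
SYMMETRIES = [((0, -1), (1, 0)), ((-1, 0), (0, -1)), ((0, 1), (-1, 0)),
              ((1, 0), (0, -1)), ((-1, 0), (0, 1)), ((0, 1), (1, 0)),
              ((0, -1), (-1, 0))]

def apply_sym(g, v):
    return (g[0][0] * v[0] + g[0][1] * v[1], g[1][0] * v[0] + g[1][1] * v[1])

def pivot_attempt(walk):
    """one attempted pivot on a list of lattice sites; returns the updated walk."""
    n = len(walk)
    k = random.randrange(n)                       # random pivot site
    g = random.choice(SYMMETRIES)                 # random non-identity symmetry
    pivot = walk[k]
    head = walk[:k + 1]
    occupied = set(head)
    new_tail = []
    for site in walk[k + 1:]:
        d = (site[0] - pivot[0], site[1] - pivot[1])
        image = apply_sym(g, d)
        image = (pivot[0] + image[0], pivot[1] + image[1])
        if image in occupied:                     # self-intersection: reject the move
            return walk
        occupied.add(image)
        new_tail.append(image)
    return head + new_tail                        # accept the move

walk = [(i, 0) for i in range(100)]               # straight-rod initial saw
for _ in range(10000):
    walk = pivot_attempt(walk)
```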
thus farthe saw - tree has been implemented for , , and , but it can be straightforwardly adapted to other lattices and the continuum , as well as polymer models with short - range interactions .other possible extensions would be to allow for branched polymers , confined polymers , or simulation of polymers in solution .we intend to implement the saw - tree and associated methods as an open source software library for use by researchers in the field of polymer simulation .madras and sokal demonstrated , through strong heuristic arguments and numerical experiments , that the pivot algorithm results in a markov chain with short integrated autocorrelation time for global observables .the pivot algorithm is far more efficient than markov chains which utilize local moves ; see for detailed discussion . the implementation of the pivot algorithm by madras and sokal utilized a hash table to record the location of each site of the walk .they showed that for -step saws the probability of a pivot move being accepted is , with dimension - dependent but close to zero ( ) .as accepted pivots typically result in a large change in global observables such as , this leads to the conclusion that the pivot algorithm has integrated autocorrelation time , with possible logarithmic corrections .in addition , they argued convincingly that the cpu time per successful pivot is for their implementation . throughout this article we work with the mean time per attempted pivot , , which for the madras and sokal implementation is .madras and sokal argued that per successful pivot is best possible because it takes time to merely write down an -step saw .kennedy , however , recognized that it is _ not _ necessary to write down the saw for each successful pivot , and developed a data structure and algorithm which cleverly utilized geometric constraints to break the barrier . in this paper, we develop methods which further improve the use of geometric constraints to obtain a highly efficient implementation of the pivot algorithm .we have efficiently implemented the pivot algorithm via a data structure we call the saw - tree , which allows rapid monte carlo simulation of saws with millions of steps .this new implementation can also be adapted to other models of polymers with short - range interactions , on the lattice and in the continuum , and hence promises to be widely useful .the heart of our implementation of the algorithm involves performing intersection tests between `` bounding boxes '' of different sub - walks when a pivot is attempted . in generated large samples of walks with up to steps , but for the purpose of determining the complexity of our algorithm we have also generated smaller samples of walks of up to steps . for , the mean number of intersection tests needed per attempted pivot is remarkably low : 39 for , 158 for , and 449 for . in sec .[ sec : complexity ] we present heuristic arguments for the asymptotic behavior of the mean time per attempted pivot for -step saws , , and test these predictions with computer experiments for .we summarize our results in table [ tab : performance ] ; note that indicates is bounded above by asymptotically , indicates dominates , indicates dominates , and indicates bounds both above and below . for comparison , we also give the algorithmic complexity of the implementations of madras and sokal , and kennedy . 
in sec .[ sec : complexityhighdim ] , we develop an argument for the complexity of our algorithm on ; this same argument leads to an estimate for the performance of the implementation of madras and sokal on .we do not know the complexity of kennedy s implementation for and with , but we suspect it is with , with possible logarithmic corrections .ccccc lattice & madras and sokal & kennedy & + & & & predicted & observed ' '' ''+ & & & & ' '' '' + & & & & + & & ? & & + , & & ? & & ? ' '' '' + our implementation is also fast in practice : for simulations of walks of length on , our implementation is almost 400 times faster when compared with kennedy s , and close to four thousand times faster when compared with that of madras and sokal .we have measured for each implementation over a wide range of on , , and , and report these results in sec .[ sec : comparison ] . in sec .[ sec : implementation ] , we give a detailed description of the saw - tree data structure and associated methods which are required for implementing the pivot algorithm . in sec .[ sec : complexity ] we present heuristic arguments that for self - avoiding walks on and is , and numerical evidence which shows that for walks of up to steps is for and for .we also discuss the behavior of our implementation for higher dimensions . in sec .[ sec : initialization ] we discuss initialization of the markov chain , including details of how many data points are discarded .we also explain why it is highly desirable to have a procedure such as * pseudo_dimerize * for initialization ( pseudo - code in sec .[ sec : implementationhigh ] ) when studying very long walks , and show that the expected running time of * pseudo_dimerize * is . in sec .[ sec : autocorrelation ] we discuss the autocorrelation function for the pivot algorithm , and show that the batch method for estimating confidence intervals is accurate , provided the batch size is large enough .this confirms the accuracy of the confidence intervals for our data published in . finally , in sec .[ sec : comparison ] we compare the performance of our implementation with previous implementations of the pivot algorithm .we show that the saw - tree implementation is not only dramatically faster for long walks , it is also faster than the other implementations for walks with as few as 63 steps .self - avoiding walks ( saws ) are represented as binary trees ( see e.g. ) via a recursive definition ; we describe here the saw - tree data structure and associated methods using pseudo - code .these methods can be extended to include translations , splitting of walks , joining of walks , and testing for intersection with surfaces . indeed , for saw - like models ( those with short range interactions ), it should be possible to implement a wide variety of global moves and tests for saws of steps in time or better .it is also possible to parallelize code by , for example , performing intersection testing for a variety of proposed pivot moves simultaneously .parallelization of the basic operations is also possible , but would be considerably more difficult to implement .in this section we give precise pseudo - code definitions of the data structure and algorithms . for reference , r - trees and bounding volume hierarchies ( see e.g. ) are data structures which arise in the field of computational geometry which are related to the saw - tree . 
for self - avoiding walks ,the self - avoidance condition is enforced on sites rather than bonds , and this means that the saw - tree is naturally defined in terms of sites .this representation also has the advantage that the basic objects , sites , have physical significance as they correspond to the monomers in a polymer .the only consequences of this choice are notational : a saw - tree of sites has steps .we adopt this notation for the remainder of this section . when discussing the complexity of various algorithms we will still use rather than in order to be consistent with the companion article and other sections of the present work .an -site saw on is a mapping with for each ( denotes the euclidean norm of ) , and with for all .saws may be either rooted or unrooted ; our convention is that the saws are rooted at the site which is at ( unit vector in the first coordinate direction ) , i.e .this convention simplifies some of the algebra involved in merging sub - walks , and is represented visually , e.g. in fig .[ fig : example ] , by indicating a dashed bond from the origin to the first site of the walk .( 0,0 ) circle ( 3pt ) ; ( 1,0 ) circle ( 3pt ) ; ( 1,1 ) circle ( 3pt ) ; ( 2,1 ) circle ( 3pt ) ; ( 3,1 ) circle ( 3pt ) ; ( 3,0 ) circle ( 3pt ) ; ( 0,0 ) ( 1,0 ) ; ( 1,0 ) ( 1,1 ) ( 2,1 ) ( 3,1 ) ( 3,0 ) ; we denote the group of symmetries of as , which corresponds to the dihedral group for , and the octahedral group for .this group acts on coordinates by permuting any of the coordinate directions ( choices ) , and independently choosing the orientation of each of these coordinates ( choices ) ; thus has elements .the group of lattice symmetries for therefore has 48 elements , and we use all of them except the identity as potential pivot operations ; other choices are possible. we can represent the symmetry group elements as orthogonal matrices , and the symmetry group elements act on the coordinates written as column vectors .we also define the ( non - unique ) pivot sequence representation of a self - avoiding random walk on as a mapping from the integers to , .the sequence elements represent changes in the symmetry operator from site to site , while represent absolute symmetry operations , i.e. relative to the first site of the walk. we can relate this to the previous definition of a self - avoiding walk in terms of sites via the recurrence relations with , and initial conditions , , and . 
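to make the symmetry-group bookkeeping concrete, the 48 elements of the octahedral group can be enumerated as signed permutation matrices, and the action of a symmetry about a chosen pivot site can be applied directly to the coordinates of a sub-walk. the sketch below is illustrative only (in particular, self-avoidance of the pivoted walk is not checked).

```python
from itertools import permutations, product
import numpy as np

def octahedral_group(d=3):
    """all 2^d * d! signed permutation matrices (48 for d = 3)."""
    mats = []
    for perm in permutations(range(d)):
        for signs in product((1, -1), repeat=d):
            m = np.zeros((d, d), dtype=int)
            for row, (col, s) in enumerate(zip(perm, signs)):
                m[row, col] = s
            mats.append(m)
    return mats

G = octahedral_group()
print(len(G))                                        # 48 lattice symmetries of Z^3

def pivot_about(sites, k, g):
    """apply symmetry g to all sites after index k, using site k as the pivot."""
    p = sites[k]
    return sites[:k + 1] + [p + g @ (s - p) for s in sites[k + 1:]]

sites = [np.array([i + 1, 0, 0]) for i in range(5)]  # a straight 5-site rod
proposal = pivot_about(sites, 2, G[7])               # some non-identity element
# self-avoidance of the proposal would still have to be checked before acceptance
```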
as noted by madras and sokal ( footnote 10 , p132 in ) , for the pivot sequence representation it is possible to perform a pivot of the walk in time by choosing a site uniformly at random , and multiplying by a ( random ) symmetry group element .however , the pivot sequence representation does no better than the hash table implementation of madras and sokal if we wish to determine if this change results in a self - intersection , or if we wish to calculate global observables such as for the updated walk .forgetting for the moment the self - avoidance condition , and using the fact that has elements , we see that for random walks of sites there are possible pivot sequences , while there are only random walks .this suggests that each random walk is represented by pivot sequences .this can be derived directly by noting that given a pivot sequence , we can insert a pivot which preserves the vector , between two elements and as follows without altering the walk .the number of symmetry group elements which preserve is , and there are locations where these symmetry group elements can be inserted , leading to equivalent pivot representations for a random walk of sites . for , given the recurrence relations in eqs .[ eq : qrecurrence ] and [ eq : omegarecurrence ] only fix one of the two non - zero elements in , leaving the choice of sign for the other non - zero element free . for our example walk we have we give three of the 16 equivalent choices for the pivot representation of , the first involving only proper rotations , the second with improper rotations for with , and the third with proper and improper rotations alternating : the non - uniqueness of the pivot representation for saws is due to the fact that the monomers ( occupied sites ) are invariant under the symmetry group , i.e. it is not possible to distinguish the different orientations of a single site .the non - uniqueness is of no practical concern , but perhaps hints that it may be possible to derive a more succinct and elegant representation of walks than the mapping to defined here .the merge operation is the fundamental operation on saws which allows for the binary tree data structure we call the saw - tree .this is related to the concatenation operation defined , for example , in sec .1.2 of ; for concatenation the number of bonds is conserved , whereas for the merge operation the number of sites is preserved . merging two saws with and sitesrespectively results in a saw with sites .it is convenient to also include a pivot operation , , when merging the walks , and the result of merging two walks and is the merge operation is represented visually in fig .[ fig : merge ] . 
to merge two sub - walks ,pin the open circle of the left - hand sub - walk to the origin , and then pin the open circle of the right - hand sub - walk to the tail end of the left - hand sub - walk .finally , apply the symmetry to the right - hand sub - walk , using the second pin as the pivot .( 0,0 ) circle ( 3pt ) ; ( 1,0 ) circle ( 3pt ) ; ( 1,1 ) circle ( 3pt ) ; ( 0,0 ) ( 1,0 ) ; ( 1,0 ) ( 1,1 ) ; ( 1.5,0 ) node[q ] ; ( 2,0 ) circle ( 3pt ) ; ( 3,0 ) circle ( 3pt ) ; ( 3,-1 ) circle ( 3pt ) ; ( 2,0 ) ( 3,0 ) ; ( 3,0 ) ( 3,-1 ) ; ( 3.5,0.0 ) node ; ( 4,0 ) circle ( 3pt ) ; ( 5,0 ) circle ( 3pt ) ; ( 5,1 ) circle ( 3pt ) ; ( 6,1 ) circle ( 3pt ) ; ( 6,0 ) circle ( 3pt ) ; ( 4,0 ) ( 5,0 ) ; ( 5,0 ) ( 6,1 ) ( 6,0 ) ; ( 0,0 ) circle ( 3pt ) ; ( 1,0 ) circle ( 3pt ) ; ( 1,1 ) circle ( 3pt ) ; ( 0,1 ) circle ( 3pt ) ; ( 0,0 ) ( 1,0 ) ; ( 1,0 ) ( 1,1 ) ( 0,1 ) ; ( 1.5,0.5 ) node[q ] ; ( 1.5,0.58 ) node ; ( 2,0 ) circle ( 3pt ) ; ( 3,0 ) circle ( 3pt ) ; ( 3,1 ) circle ( 3pt ) ; ( 3,2 ) circle ( 3pt ) ; ( 2,0 ) ( 3,0 ) ; ( 3,0 ) ( 3,2 ) ; ( 3.5,0.5 ) node ; ( 5,0 ) circle ( 3pt ) ; ( 6,0 ) circle ( 3pt ) ; ( 6,1 ) circle ( 3pt ) ; ( 5,1 ) circle ( 3pt ) ; ( 4,1 ) circle ( 3pt ) ; ( 4,0 ) circle ( 3pt ) ; ( 4,-1 ) circle ( 3pt ) ; ( 5,0 ) ( 6,0 ) ; ( 6,0 ) ( 4,-1 ) ; here we define various quantities which are necessary for implementing our data structure and for calculating observables such as the mean - square end - to - end distance , .we first define various quantities which will be used to calculate observables which measure the size of a walk : a bounding box of a walk is a convex shape which completely contains the walk .the obvious choice of shape for is the rectangular prism with faces formed from the coordinate planes , , with the constants chosen so that the faces of the prism touch the walk , i.e. the bounding box has minimum extent .other choices are possible , e.g. other planes can be used such as , , and have the advantage of matching the shape of the walk more closely , but at the expense of more computational overhead and memory consumption . with closer fitting bounding boxes ,fewer intersection tests need to be performed to ascertain whether two walks intersect . however , in practice , the coordinate plane rectangular prism implementation was fastest on our computer hardware ( by a narrow margin ) , and has the benefit that it is straightforward to implement .the choice of bounding box for continuum models is not as obvious ; possibilities include spheres and oriented rectangular prisms .we note that the choice of bounding box shape determines the maximum number of sites , , a saw can have so that it is guaranteed that its bounding box contains the sites of the saw and no others .suppose we are given two saws for which the bounding boxes overlap : if each of the walks has or fewer sites , we can be certain that the two walks intersect , while if at least one of the walks has more than sites , it may be that the walks do not intersect .the value of determines the cut - off for intersection testing for the function * intersect * in sec .[ sec : implementationuser ] . for with ,the bounding box with faces formed from the coordinate planes leads to the maximum number of sites being two , as there are counter - examples with three sites ( e.g. the bounding box of also contains ) . 
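before continuing with bounding boxes, it may help to restate the merge operation just described on explicit site lists. the sketch below is purely illustrative: the saw-tree never stores sites explicitly, the rooting convention (pin at the origin, first site at the unit vector in the first coordinate direction) is assumed as above, and the function name is made up.

```python
import numpy as np

# naive merge of two walks given as explicit site lists: the right-hand walk
# is pinned to the last site of the left-hand walk and the symmetry q is
# applied to it about that pin.
def merge_sites(w1, q, w2):
    w1 = [np.asarray(s) for s in w1]
    pin = w1[-1]                        # tail end of the left-hand walk
    return w1 + [pin + q @ np.asarray(s) for s in w2]

w1 = [(1, 0), (1, 1), (2, 1)]           # 3-site left-hand walk
w2 = [(1, 0), (2, 0), (2, -1)]          # 3-site right-hand walk
rot90 = np.array([[0, -1], [1, 0]])
print(merge_sites(w1, rot90, w2))       # a 6-site walk ending at (3, 3)
```

in particular, the end-to-end vector of the merged walk is the end-to-end vector of the left-hand walk plus the rotated end-to-end vector of the right-hand walk, which is the kind of identity that lets the saw-tree carry out merges on aggregate data without touching individual sites.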
for the bounding box with the faces being the coordinate planes and ,the maximum number of sites is three ( as the bounding box of also contains ) .it is possible to push this one step further so that the maximum number of sites is four , but five is not possible as we can see that in fig .[ fig : example ] has five sites , and an unvisited site on its convex hull , which must also therefore be interior to any bounding box .we write bounding boxes as a product of closed intervals , in the form ] , and ] is considered empty if . if any interval is empty , then the corresponding bounding box is also empty as it contains no interior sites .a quantity associated with the bounding box which we will find useful is the sum of the dimensions of the bounding box , * perim*. if $ ] , then we define for ( in fig .[ fig : example ] ) we have the following values for the various parameters : \times[0,1 ] ; \\\mathbf{x}_{\mathrm{e}}(\omega_a ) & = ( 3,0 ) ; \\ \mathbf{x}(\omega_a ) & = ( 1,0 ) + ( 1,1 ) + ( 2,1 ) + ( 3,1 ) + ( 3,0 ) \nonumber \\ & = ( 10,3);\\ x_2(\omega_a ) & = ( 1,0)\cdot(1,0 ) + ( 1,1)\cdot(1,1 ) + ( 2,1)\cdot(2,1 ) \nonumber \\ & \mathrel{\phantom{= } } + ( 3,1)\cdot(3,1 ) + ( 3,0)\cdot(3,0 ) \nonumber \\ & = 1 + 2 + 5 + 10 + 9 \nonumber \\ & = 27.\end{aligned}\ ] ] the observables , with , may be straightforwardly calculated from , , and .we give expressions for with , and note that higher euclidean - invariant moments can be obtained via ( ) ( these moments are calculated for in and for in ) .in addition we introduce another observable , , which measures the mean - square deviation of the walk from the endpoint . \nonumber \\ & = \frac{1}{2 } + \frac{1}{2}\mathbf{x}_{\mathrm{e } } \cdot \mathbf{x}_{\mathrm{e } } - \frac{1}{n } \hat{\mathbf{x}}_1 \cdot \mathbf{x } - \frac{1}{n } \mathbf{x}_{\mathrm{e } } \cdot \mathbf{x } + \frac{1}{n } x_2 \\ \mathcal{r}_{\mathrm{m}}^2 & = \frac{1}{n}\sum_{i=0}^{n-1 } |\omega(i)-\omega(n-1)|^2 \nonumber \\ & = \mathbf{x}_{\mathrm{e } } \cdot \mathbf{x}_{\mathrm{e } } - \frac{2}{n } \mathbf{x}_{\mathrm{e } } \cdot \mathbf{x } + \frac{1}{n } x_2 \label{eq : calrm}\end{aligned}\ ] ] in , we chose to calculate rather than , as it has a slightly simpler expression , and relied on the identity . compared with , has larger variance but smaller integrated autocorrelation time ( for the pivot algorithm ) . before performing the computational experiment in , we believed that given the same number of pivot attempts the confidence intervals for and would be comparable .we have since confirmed that working directly with results in a standard error which is of the order of 17% smaller for , an amount which is not negligible ; in future experiments we will calculate directly . herefollow some comments to aid in the interpretation of the pseudo - code description of the saw - tree data structure and associated algorithms .* all calls are by value , following the c programming language convention .data structures are passed to methods via pointers .* pointers : the walk is a data structure whose member variables can be accessed via pointers , e.g. the vector for the end - to - end distance for the walk is .the left - hand sub - walk of is indicated by , and the right - hand sub - walk by .this notation is further extended by indicating for the left - hand sub - walk of , for the right - hand sub - walk of , etc .. 
* suggestive notation for member variables used to improve readability ; all quantities , such as `` '' ( the end - to - end vector ) must correspond to a particular walk .e.g. , ( i.e. , superscript indicates that is the end - to - end vector for the left sub - walk ) , , . *variables with subscript are used for temporary storage only .* comments are enclosed between the symbols / * and * / following the c convention .* boolean negation is indicated via the symbol `` ! '' , e.g. ! true = false .the key insight which has enabled a dramatic improvement in the implementation of the pivot algorithm is the recognition that _ sequences _ of sites and pivots can be replaced by _binary trees_. the leaves of the tree are individual sites of the walk , and thus encode no information , while each of the ( internal ) nodes of the tree contain aggregate information about all sites which are below them in the tree .we call this data structure the saw - tree , which may be defined recursively : a saw - tree of sites either has and is a leaf , or has a left child saw - tree with sites , and a right child saw - tree with the remaining sites .our implementation of the saw - tree node is introduced in table [ tab : definition ] .a saw - tree consists of one or more saw - tree nodes in a binary tree structure ; the pointers and allow traversal from the root of the tree to the leaves , while allows for traversal from the leaves of the tree to the root .saw - trees are created by merging other saw - trees , with a symmetry operation acting on the right - hand walk . in particular ,any _ internal node _ may be expressed in terms of its left child , a symmetry operation , and its right child via a merge operation : lll + type & name & description ' '' '' + _ integer _ & & number of sites ' '' '' + _ saw - tree ptr _ & & parent + _ saw - tree ptr _ & & left - hand sub - walk + _ saw - tree ptr _ & & right - hand sub - walk + _ matrix _ & q & symmetry group element + _ vector _ & & + _ vector _ & & + _ integer _ & & + _ bounding box _ & & convex region ' '' '' + the leaves of the saw - tree correspond to sites in a saw , and are thus labeled from 0 to .a binary tree with leaves has internal nodes , and we label these nodes from 1 to , so that the symmetry is to the left of . the symmetry is not part of the saw - tree as it is applied to the whole walk , and thus can not be used in a merge operation . forsome applications it may be necessary to keep track of , e.g. when studying polymers in a confined region , but in this was not necessary . assume that the end - to - end vectors , , and symmetry group elements , , for a saw - tree and its left and right children are given .if we know the location of the anchor site of the parent node , , along with the overall _ absolute _ symmetry group element being applied to the walk , we can then find the same information for the left and right children as follows : thus can be determined for any site by iteratively performing this calculation while following the ( unique ) path from the root of the saw - tree to the appropriate leaf . n.b .: must be updated before .we give explicit examples of saw - trees in appendix [ sec : examplesawtrees ] . 
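a toy sketch of a saw-tree node and of the o(1) composition of aggregate data under a merge is given below. field names follow the member table only loosely, the bounding box is kept as per-coordinate [min, max] intervals, and the merge semantics assumed are the ones described above (right-hand child pinned to the end of the left-hand child and rotated by q); this is an illustration, not the article's implementation. the example at the end builds the example walk of fig. [fig:example] as a chain of merges and recovers the aggregate values (10,3) and 27 quoted earlier.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class Node:
    nsites: int                  # number of sites below this node
    q: np.ndarray                # symmetry element applied to the right child
    x_e: np.ndarray              # end-to-end vector
    x_sum: np.ndarray            # sum of the site vectors
    x2: float                    # sum of the squared site norms
    bbox: np.ndarray             # bounding box, shape (d, 2) of [min, max]
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def leaf(d=2):
    # a single site at the unit vector in the first coordinate direction
    e1 = np.zeros(d)
    e1[0] = 1.0
    return Node(1, np.eye(d), e1, e1.copy(), 1.0, np.stack([e1, e1], axis=1))

def merge_nodes(l, r, q):
    # o(1) update of the aggregate data; no individual site is ever touched
    x_e = l.x_e + q @ r.x_e
    x_sum = l.x_sum + r.nsites * l.x_e + q @ r.x_sum
    x2 = (l.x2 + r.x2 + r.nsites * float(l.x_e @ l.x_e)
          + 2.0 * float(l.x_e @ (q @ r.x_sum)))
    boxed = q @ r.bbox           # exact for signed permutation matrices
    lo = np.minimum(l.bbox[:, 0], l.x_e + boxed.min(axis=1))
    hi = np.maximum(l.bbox[:, 1], l.x_e + boxed.max(axis=1))
    return Node(l.nsites + r.nsites, q, x_e, x_sum, x2,
                np.stack([lo, hi], axis=1), left=l, right=r)

# build the example walk (1,0),(1,1),(2,1),(3,1),(3,0) as a left-leaning chain of merges
I, R = np.eye(2), np.array([[0., -1.], [1., 0.]])   # identity and 90-degree rotation
w = leaf()
for sym in (R, I, I, R.T):       # each merge appends one site in the direction sym . e_1
    w = merge_nodes(w, leaf(), sym)
print(w.x_sum, w.x2)             # [10.  3.] 27.0, matching the values quoted above
print(w.bbox)                    # x in [1, 3], y in [0, 1]
n = w.nsites                     # mean-square deviation from the endpoint, from aggregates only
print(w.x_e @ w.x_e - 2.0 / n * (w.x_e @ w.x_sum) + w.x2 / n)   # 2.4
```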
in fig .[ fig : sawtree_sequence ] , we give a saw - tree representation of a saw with sites which is precisely equivalent to the pivot sequence representation .we also give two equivalent representations of ( shown in fig .[ fig : example ] ) in figs .[ fig : sawtree_exampleaa ] and [ fig : sawtree_exampleab ] .conceptually we distinguish single - site walks ( individual sites ) , which reside in the leaves of the tree , from multi - site walks .in particular , the symmetry group element of a single site has no effect , and in the case where all monomers are identical then all single sites are identical . if the saw - tree structure remains fixed it is not possible to rotate part of the walk by updating a single symmetry group element , in contrast to the pivot sequence representation .this is because when we change a symmetry group element in a given node , it only alters the position of sites which are in the right child of the node . to rotate the part of the walk with sites labeled and greater , we choose the internal node of the saw - tree from the left .we then need to alter the symmetry group element of this node , and also all nodes which are above and to the right of it in the saw - tree . if we select a random node then it will likely be near the leaves of the tree , and assuming that the saw - tree is balanced this means that on average symmetry group elements will need to be altered . however , we note that the root node at the top of the tree has no parents , and therefore only one symmetry group element needs to be altered to rotate the right - hand part of the walk in this case . by utilizing tree - rotation operations , which alter the structure of the treewhile preserving node ordering , it is possible to move the node to the root of the saw - tree .once this has been done , it _ is _ then possible to implement a rotation of part of the walk by updating a single symmetry group element . on average , of these tree - rotation operations are required .binary trees are a standard data structure in computer science . by requiring trees to be balanced , i.e. so that the height of a tree with nodes is bounded by a constant times , optimal bounds can be derived for operations such as insertion and deletion of nodes from the tree .we refer the interested reader to sedgewick for various implementations of balanced trees , such as red - black balanced trees .we have the advantage that our saw - tree is , essentially , static , which means that we can make it perfectly balanced without the additional overhead of maintaining a balanced tree .included in this subsection are the primitive operations , which would generally not be called from the main program .left and right tree - rotations are modified versions of standard tree operations ; for binary trees , only ordering needs to be preserved , while for saw - trees the sequence of sites needs to be preserved , which means that symmetry group elements and other variables need to be modified .* procedure : * [ cols="<,^ , < " , ] child node[leaf ] child node[leaf ] ;0 * acknowledgments * i thank ian enting , tony guttmann , gordon slade , alan sokal , and two anonymous referees for useful comments on the manuscript .i would also like to thank an anonymous referee for comments on an earlier version of this article which led to deeper consideration of the algorithmic complexity of * shuffle_intersect*. 
i am grateful to tom kennedy for releasing his implementation of the pivot algorithm under the gnu gplv2 licence .computations were performed using the resources of the victorian partnership for advanced computing ( vpac ) . financial support from the australian research council is gratefully acknowledged .0 lawler , g.f . ,schramm , o. , werner , w. : on the scaling limit of planar self - avoiding walk . in : fractal geometry and applications : a jubilee of benoit mandelbrot , part 2 .pure math .339364 . am ., providence ( 2004 ) sergio caracciolo , anthony j. guttmann , iwan jensen , andrea pelissetto , andrew n. rogers , and alan d. sokal , _ correction - to - scaling exponents for two - dimensional self - avoiding walks _ , j. stat .* 120 * ( 2005 ) , 10371100 .james t. klosowski , martin held , joseph s. b. mitchell , henry sowizral , and karel zikan , _ efficient collision detection using bounding volume hierarchies of -dops _ , ieee t. vis .gr . * 4 * ( 1998 ) ,2136 .gregory f. lawler , oded schramm , and wendelin werner , _ on the scaling limit of planar self - avoiding walk _, fractal geometry and applications : a jubilee of benoit mandelbrot , part 2 .pure math .soc . , providence , 2004 , pp .
the pivot algorithm for self - avoiding walks has been implemented in a manner which is dramatically faster than previous implementations , enabling extremely long walks to be efficiently simulated . we explicitly describe the data structures and algorithms used , and provide a heuristic argument that the mean time per attempted pivot for -step self - avoiding walks is for the square and simple cubic lattices . numerical experiments conducted for self - avoiding walks with up to 268 million steps are consistent with behavior for the square lattice and behavior for the simple cubic lattice . our method can be adapted to other models of polymers with short - range interactions , on the lattice or in the continuum , and hence promises to be widely useful . 0 + + * keywords * self - avoiding walk ; polymer ; monte carlo ; pivot algorithm 0 0
being able to read news from other countries and written in other languages allows readers to be better informed .it allows them to detect national news bias and thus improves transparency and democracy .existing online translation systems such as _ google translate _ and _ _ bing translator _ _ are thus a great service , but the number of documents that can be submitted is restricted ( google will even entirely stop their service in 2012 ) and submitting documents means disclosing the users interests and their ( possibly sensitive ) data to the service - providing company . for these reasons , we have developed our in - house machine translation system onts .its translation results will be publicly accessible as part of the europe media monitor family of applications , , which gather and process about 100,000 news articles per day in about fifty languages .onts is based on the open source phrase - based statistical machine translation toolkit moses , trained mostly on freely available parallel corpora and optimised for the news domain , as stated above .the main objective of developing our in - house system is thus not to improve translation quality over the existing services ( this would be beyond our possibilities ) , but to offer our users a rough translation ( a `` gist '' ) that allows them to get an idea of the main contents of the article and to determine whether the news item at hand is relevant for their field of interest or not .a similar news - focused translation service is `` found in translation '' , which gathers articles in 23 languages and translates them into english .`` found in translation '' is also based on moses , but it categorises the news after translation and the translation process is not optimised for the news domain .europe media monitor ( emm ) gathers a daily average of 100,000 news articles in approximately 50 languages , from about 3,400 hand - selected web news sources , from a couple of hundred specialist and government websites , as well as from about twenty commercial news providers .it visits the news web sites up to every five minutes to search for the latest articles .when news sites offer rss feeds , it makes use of these , otherwise it extracts the news text from the often complex html pages .all news items are converted to unicode .they are processed in a pipeline structure , where each module adds additional information .independently of how files are written , the system uses utf-8-encoded rss format . inside the pipeline , different algorithmsare implemented to produce monolingual and multilingual clusters and to extract various types of information such as named entities , quotations , categories and more .onts uses two modules of emm : the named entity recognition and the categorization parts . named entity recognition ( ner ) is performed using manually constructed language - independent rules that make use of language - specific lists of trigger words such as titles ( president ) , professions or occupations ( tennis player , playboy ) , references to countries , regions , ethnic or religious groups ( french , bavarian , berber , muslim ) , age expressions ( 57-year - old ) , verbal phrases ( deceased ) , modifiers ( former ) and more .these patterns can also occur in combination and patterns can be nested to capture more complex titles , . in order to be able to cover many different languages , no other dictionaries and no parsers or part - of - speech taggers are used . 
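as a toy illustration of the trigger-word idea (and only of the idea: the real patterns are hand-crafted, language-specific, can be nested, and do not rely on capitalisation alone), a single simplified pattern might look as follows; the word lists and names are made up.

```python
import re

# a much simplified trigger-word rule: an optional modifier, a title or
# profession from a language-specific list, then capitalised tokens taken as
# the name.  this is illustrative only, not the emm rule set.
MODIFIERS = r"(?:former|then)"
TRIGGERS = r"(?:president|prime minister|tennis player|spokeswoman)"
NAME = r"[A-Z][a-z]+(?:\s+[A-Z][a-z]+)+"
PATTERN = re.compile(rf"(?:{MODIFIERS}\s+)?{TRIGGERS}\s+({NAME})")

text = "The former president Jacques Chirac met prime minister Jose Socrates."
print(PATTERN.findall(text))    # ['Jacques Chirac', 'Jose Socrates']
```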
to identify which of the names newly foundevery day are new entities and which ones are merely variant spellings of entities already contained in the database , we apply a language - independent name similarity measure to decide which name variants should be automatically merged , for details see .this allows us to maintain a database containing over 1,15 million named entities and 200,000 variants .the major part of this resource can be downloaded from http://langtech.jrc.it/jrc-names.html all news items are categorized into hundreds of categories .category definitions are multilingual , created by humans and they include geographic regions such as each country of the world , organizations , themes such as natural disasters or security , and more specific classes such as earthquake , terrorism or tuberculosis , articles fall into a given category if they satisfy the category definition , which consists of boolean operators with optional vicinity operators and wild cards .alternatively , cumulative positive or negative weights and a threshold can be used .uppercase letters in the category definition only match uppercase words , while lowercase words in the definition match both uppercase and lowercase words .many categories are defined with input from the users themselves .this method to categorize the articles is rather simple and user - friendly , and it lends itself to dealing with many languages , .in this section , we describe our statistical machine translation ( smt ) service based on the open - source toolkit moses and its adaptation to translation of news items . * which is the most suitable smt system for our requirements ? * the main goal of our system is to help the user understand the content of an article .this means that a translated article is evaluated positively even if it is not perfect in the target language .dealing with such a large number of source languages and articles per day , our system should take into account the translation speed , and try to avoid using language - dependent tools such as part - of - speech taggers . inside the moses toolkit, three different statistical approaches have been implemented : _ phrase based statistical machine translation _ ( pbsmt ) , _ hierarchical phrase based statistical machine translation _ and _ syntax - based statistical machine translation _ . to identify the most suitable system for our requirements , we run a set of experiments training the three models with europarl v4 german - english and optimizing and testing on the news corpus . for all of them, we use their default configurations and they are run under the same condition on the same machine to better evaluate translation time . for the syntax model we use linguistic information only on the target side . according to our experiments , in terms of performance the hierarchical model performs better than pbsmt and syntax ( 18.31 , 18.09 , 17.62 bleu points ) , but in terms of translation speed pbsmt is better than hierarchical and syntax ( 1.02 , 4.5 , 49 second per sentence ) .although , the hierarchical model has the best bleu score , we prefer to use the pbsmt system in our translation service , because it is four times faster . 
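returning for a moment to the categorisation step, the case-matching convention quoted above can be illustrated in a couple of lines (boolean, vicinity and wildcard operators are omitted; this is a simplification for illustration, not the emm matcher).

```python
# simplified reading of the rule: an uppercase term in a category definition
# must match an uppercase word exactly, a lowercase term matches either case.
def term_matches(term, word):
    if term.isupper():
        return term == word
    return term.lower() == word.lower()

print(term_matches("AIDS", "AIDS"), term_matches("AIDS", "aids"))   # True False
print(term_matches("earthquake", "Earthquake"))                     # True
```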
* which training data can we use ?* it is known in statistical machine translation that more training data implies better translation .although , the number of parallel corpora has been is growing in the last years , the amounts of training data vary from language pair to language pair .to train our models we use the freely available corpora ( when possible ) : europarl , jrc - acquis , dgt - tm , opus , se - times , tehran english - persian parallel corpus , news corpus , un corpus , czeng0.9 , english - persian parallel corpus distributed by elra and two arabic - english datasets distributed by ldc .this results in some language pairs with a large coverage , ( more than 4 million sentences ) , and other with a very small coverage , ( less than 1 million ) .the language models are trained using 12 model sentences for the content model and 4.7 million for the title model .both sets are extracted from english news . for less resourced languages such as farsi and turkish, we tried to extend the available corpora . for farsi, we applied the methodology proposed by , where we used a large language model and an english - farsi smt model to produce new sentence pairs .for turkish we added the movie subtitles corpus , which allowed the smt system to increase its translation capability , but included several slang words and spoken phrases . * how to deal with named entities in translation ?* news articles are related to the most important events .these names need to be efficiently translated to correctly understand the content of an article . from an smt point of view, two main issues are related to named entity translation : ( 1 ) such a name is not in the training data or ( 2 ) part of the name is a common word in the target language and it is wrongly translated , e.g. the french name `` bruno le maire '' which risks to be translated into english as `` bruno mayor '' . to mitigate both the effects we use our multilingual named entity database . in the source language , each news item is analysed to identify possible entities ; if an entity is recognised , its correct translation into english is retrieved from the database , and suggested to the smt system enriching the source sentence using the xml markup option in moses .this approach allows us to complement the training data increasing the translation capability of our system . * how to deal with different language styles in the news ?news title writing style contains more gerund verbs , no or few linking verbs , prepositions and adverbs than normal sentences , while content sentences include more preposition , adverbs and different verbal tenses .starting from this assumption , we investigated if this phenomenon can affect the translation performance of our system .we trained two smt systems , and , using the europarl v4 german - english data as training corpus , and two different development sets : one made of content sentences , news commentaries , and the other made of news titles in the source language which were translated into english using a commercial translation system . with the same strategy we generated also a title test set .the used a language model created using only english news titles .the news and title test sets were translated by both the systems . 
although the performance obtained translating the news and title corpora are not comparable , we were interested in analysing how the same test set is translated by the two systems .we noticed that translating a test set with a system that was optimized with the same type of data resulted in almost 2 blue score improvements : title - testset : 0.3706 ( ) , 0.3511 ( ) ; news - testset : 0.1768 ( ) , 0.1945 ( ) .this behaviour was present also in different language pairs . according to these results we decided to use two different translation systems for each language pair , one optimized using title data and the other using normal content sentences .even though this implementation choice requires more computational power to run in memory two moses servers , it allows us to mitigate the workload of each single instance reducing translation time of each single article and to improve translation quality .to evaluate the translation performance of onts , we run a set of experiments where we translate a test set for each language pair using our system and google translate .lack of human translated parallel titles obliges us to test only the content based model .for german , spanish and czech we use the news test sets proposed in , for french and italian the news test sets presented in , for arabic , farsi and turkish , sets of 2,000 news sentences extracted from the arabic - english and english - persian datasets and the se - times corpus . for the other languages we use 2,000 sentences which are not news but a mixture of jrc - acquis , europarl and dgt - tm data . it is not guarantee that our test sets are not part of the training data of google translate .each test set is translated by google translate - translator toolkit , and by our system .bleu score is used to evaluate the performance of both systems .results , see table [ results ] , show that google translate produces better translation for those languages for which large amounts of data are available such as french , german , italian and spanish .surprisingly , for danish , portuguese and polish , onts has better performance , this depends on the choice of the test sets which are not made of news data but of data that is fairly homogeneous in terms of style and genre with the training sets .the impact of the named entity module is evident for arabic and farsi , where each english suggested entity results in a larger coverage of the source language and better translations .for highly inflected and agglutinative languages such as turkish , the output proposed by onts is poor .we are working on gathering more training data coming from the news domain and on the possibility of applying a linguistic pre - processing of the documents . .[results ] automatic evaluation . [ cols="<,^,^ " , ]the translation service is made of two components : the connection module and the moses server .the connection module is a servlet implemented in java .it receives the rss files , isolates each single news article , identifies each source language and pre - processes it .each news item is split into sentences , each sentence is tokenized , lowercased , passed through a statistical compound word splitter , , and the named entity annotator module . for language modellingwe use the kenlm implementation , . according to the language , the correct moses servers , title and content ,are fed in a multi - thread manner .we use the multi - thread version of moses . 
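the per-article flow just described (split into sentences, tokenise, lowercase, annotate entities, route titles and content to separate moses back-ends) can be summarised in a schematic sketch. everything below is a placeholder: the real connection module is a java servlet, and the tokenizer, compound splitter, entity annotator and moses servers are separate components that are only mimicked here.

```python
import re

# naive stand-ins for the real components, used only to make the flow concrete
def split_sentences(text):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def tokenize(sentence):
    return " ".join(re.findall(r"\w+|[^\w\s]", sentence))

def mark_entities(sentence, entity_db):
    # wrap known entities with their english form, in the spirit of the
    # xml markup option mentioned above (tag name chosen arbitrarily here)
    for surface, english in entity_db.items():
        sentence = sentence.replace(surface, f'<ne translation="{english}">{surface}</ne>')
    return sentence

class EchoServer:                       # placeholder for a moses server
    def translate(self, sentence):
        return sentence

def translate_article(article, entity_db, title_server, content_server):
    prep = lambda s: mark_entities(tokenize(s).lower(), entity_db)
    titles = [title_server.translate(prep(s)) for s in split_sentences(article["title"])]
    content = [content_server.translate(prep(s)) for s in split_sentences(article["content"])]
    return titles, content

article = {"title": "bruno le maire rencontre angela merkel.",
           "content": "le ministre a parle hier. il rentre demain."}
entity_db = {"le maire": "Le Maire", "angela merkel": "Angela Merkel"}
print(translate_article(article, entity_db, EchoServer(), EchoServer()))
```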
when all the sentences of each article are translated , the inverse process is run : they are detokenized , recased , and untranslated / unknown words are listed . the translated title and content of each articleare uploaded into the rss file and it is passed to the next modules .the full system including the translation modules is running in a 2xquad - core with intel hyper - threading technology processors with 48 gb of memory .it is our intention to locate the moses servers on different machines .this is possible thanks to the high modularity and customization of the connection module . at the moment , the translation models are available for the following source languages : arabic , czech , danish , farsi , french , german , italian , polish , portuguese , spanish and turkish . our translation service is currently presented on a demo web site , see figure [ fig::demo ] , which is available at http://optima.jrc.it / translate/. news articles can be retrieved selecting one of the topics and the language .all the topics are assigned to each article using the methodology described in [ cat ] .these articles are shown in the left column of the interface .when the button `` translate '' is pressed , the translation process starts and the translated articles appear in the right column of the page .the translation system can be customized from the interface enabling or disabling the named entity , compound , recaser , detokenizer and unknown word modules .each translated article is enriched showing the translation time in milliseconds per character and , if enabled , the list of unknown words .the interface is linked to the connection module and data is transferred using rss structure .in this paper we present the optima news translation system and how it is connected to europe media monitor application .different strategies are applied to increase the translation performance taking advantage of the document structure and other resources available in our research group .we believe that the experiments described in this work can result very useful for the development of other similar systems .translations produced by our system will soon be available as part of the main emm applications .the performance of our system is encouraging , but not as good as the performance of web services such as google translate , mostly because we use less training data and we have reduced computational power . on the other hand , our in - house system can be fed with a large number of articles per day and sensitive data without including third parties in the translation process .performance and translation time vary according to the number and complexity of sentences and language pairs . the domain of news articles dynamically changes according to the main events in the world , while existing parallel data is static and usually associated to governmental domains .it is our intention to investigate how to adapt our translation system updating the language model with the english articles of the day .the authors thank the jrc s optima team for its support during the development of onts . c. callison - burch , and p. koehn and c. monz and k. peterson and m. przybocki and o. zaidan . 2009 . .proceedings of the joint fifth workshop on statistical machine translation and metricsmatr , pages 1753 .uppsala , sweden .p. koehn and f. j. och and d. 
marcu .proceedings of the 2003 conference of the north american chapter of the association for computational linguistics on human language technology , pages 4854 .edmonton , canada .p. koehn and h. hoang and a. birch and c. callison - burch and m. federico and n. bertoldi and b. cowan and w. shen and c. moran and r. zens and c. dyer and o. bojar and a. constantin and e. herbst 2007 . .proceedings of the annual meeting of the association for computational linguistics , demonstration session , pages 177180 .columbus , oh , usa . r. steinberger and b. pouliquen and a. widiger and c. ignat and t. erjavec and d. tufi and d. varga .proceedings of the 5th international conference on language resources and evaluation , pages 21422147 .genova , italy .m. turchi and i. flaounas and o. ali and t. debie and t. snowsill and n. cristianini .proceedings of the european conference on machine learning and knowledge discovery in databases , pages 746749 .bled , slovenia .
we propose a real - time machine translation system that allows users to select a news category and to translate the related live news articles from arabic , czech , danish , farsi , french , german , italian , polish , portuguese , spanish and turkish into english . the moses - based system was optimised for the news domain and differs from other available systems in four ways : ( 1 ) news items are automatically categorised on the source side , before translation ; ( 2 ) named entity translation is optimised by recognising and extracting them on the source side and by re - inserting their translation in the target language , making use of a separate entity repository ; ( 3 ) news titles are translated with a separate translation system which is optimised for the specific style of news titles ; ( 4 ) the system was optimised for speed in order to cope with the large volume of daily news articles .
biometric authentication systems are becoming prevalent in access control and in consumer technology .in such systems , the user submits their user name and his / her biometric sample , which is compared to the stored biometric template associated with this user name ( one - to - one matching ) .the popularity of biometric - based systems stems from a popular belief that such authentication systems are more secure and user friendly than systems based on passwords . at the same time, the use of such systems raises concerns about the security and privacy of the stored biometric data . unlike passwords , replacing a compromised biometric trait is impossible , since biometric traits ( e.g. , face , fingerprint , and iris ) are considered to be unique .therefore , the security of biometric templates is an important issue when considering biometric based systems .moreover , poor protection of the biometric templates can have serious privacy implications on the user , as discussed in previous work .various solutions have been proposed for protecting biometric templates ( e.g , ) .the most prominent of them are secure sketch and fuzzy extractors .unfortunately , these solutions are not well adopted in practice .the first reason for this is the tradeoff between security and usability due to the degradation in recognition rates .the second reason is related to the use of tokens that are required for storing the helper data , thus affecting usability .finally , these mechanisms rely on assumptions which are hard to verify ( e.g. , the privacy guarantees of secure sketch assume that the biometric trait is processed into an almost full entropy string ) . in this workwe propose a different approach for protecting biometric templates called _ honeyfaces_. in this approach , we hide the real biometric templates among a very large number of synthetic templates that are indistinguishable from the real ones .thus , identifying real users in the system becomes a very difficult ` needle in a haystack ' problem . at the same time , honeyfaces does not require the use of tokens nor does it affect recognition rate ( compared to a system that does not provide any protection mechanism ) .furthermore , it can be integrated with other privacy solutions ( e.g. , secure sketch ) , offering additional layers of security and privacy . for the simplicity of the discussion , let us assume that all biometric templates ( real and synthetic ) are stored in a _biometric `` password file''_. our novel approach enables the size of this file to be increased by several orders of magnitudes .such inflation offers a 4-tier defense mechanism for protecting the security and privacy of biometric templates with no usability overhead .namely , honeyfaces : * reduces the risk of the biometric password file leaking ; * increases the probability that such a leak is detected online ; * allows for post - priori detection of the ( biometric ) password file leakage ; * protects the privacy of the biometrics in the case of leakage ; in the following we specify how this mechanism works and its applications in different settings . the very large size of the `` password file '' improves the * resilience of system against its exfiltration*. 
we show that one can inflate a system with 270 users ( 180 kb `` password file '' ) into a system with up to users ( 56.6 tb `` password file '' ) .obviously , exfiltrating such a huge amount of information is hard .moreover , by forcing the adversary to leak a significantly larger amount of data ( due to the inflated file ) he either needs significantly more time , or has much higher chances of being caught by intrusion detection systems .thus , the file inflation facilitates in * detecting the leakage * while it happens .the advantages of increasing the biometric `` password file '' can be demonstrated in networks whose outgoing bandwidth is very limited , such as air - gap networks ( e.g. , those considered in ) . such networks are usually deployed in high - security restricted areas , and thus are expected to employ biometric authentication , possibly in conjunction with other authentication mechanisms .once an adversary succeeds in infiltrating the network , he usually has a very limited bandwidth for exfiltration , typically using a physical communication channel of limited capacity ( with a typical bandwidth of less than 1 kbit / sec ) . in such networks , inflating the size of the database increases the resilience against exfiltration of the database .namely , exfiltrating 180 kb of information ( the size of a biometric `` password file '' in a system with 270 users ) takes a reasonable time even in low bandwidth channels compared with 56.6 tb ( the size of the inflated biometric `` password file '' ) , which takes more than 5.2 days for exfiltration in 1 gbit / sec , 14.4 years in 1 mbit / sec , or about 14,350 years from an air - gaped network at the speed of 1 kbit / sec . similarly to honeywords ,the fake accounts enable * detection of leaked files*. 
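before turning to leak detection, the transfer times quoted above can be reproduced directly from the file size and the link rates (taking 56.6 tb as decimal terabytes, the rates as decimal bits per second, and 365-day years):

```python
# sanity check of the exfiltration-time figures quoted above
size_bits = 56.6e12 * 8
for rate, label in [(1e9, "1 gbit/s"), (1e6, "1 mbit/s"), (1e3, "1 kbit/s")]:
    seconds = size_bits / rate
    print(f"{label}: {seconds / 86400:,.1f} days = {seconds / (365 * 86400):,.1f} years")
# -> roughly 5.2 days, 14.4 years and about 14,350 years, as stated above
```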
namely , by using two - server authentication settings , each authentication query is first sent to the server that contains the inflated password file .once the first server authenticates the user , it sends a query to the second server that contains only the legitimate accounts , thus detecting whether a fake account was invoked with the `` correct '' credentials .this is a clear evidence that despite the hardness of exfiltration , the password file ( or a part of it ) was leaked .all the above guarantees heavily rely on the inability of the adversary to isolate the real users from the fake ones .we show that this task is nearly impossible in various adversarial settings ( when the adversary has obtained access to the password file ) .we also show that running membership queries to identify a real user by matching a facial image from an external source to the biometric `` password file '' is computationally infeasible .we analyze the robustness of the system in the worst case scenario in which the adversary has the facial images of all users except one and he tries to locate the unknown user among the synthetic faces .we show that the system protects the privacy of the users in this case too .to conclude , honeyfaces * protects the biometric templates of real users * in all settings that can be protected .the addition of a large number of synthetic faces may raise a concern about the degradation of the authentication accuracy .however , we show that this is not the case .the appearance of faces follows a multivariate gaussian distribution , which we refer to in this article as _ face - space _ , the parameters of which are learned from a set of real faces , including the faces of the system users .we sample synthetic faces from the same generative model constraining them to be at a certain distance from real and other synthetic faces .we selected this distance to be sufficiently large that new samples of real users would not collide with the synthetic ones .even though such a constraint limits the number of faces the system could produce , the number remains very large . using a training set of 500 real faces to build the generative face model, we successfully created synthetic faces .our honeyfaces system requires a method for generating synthetic faces which satisfies three requirements : * the system should be able to generate a ( very ) large number of unique synthetic faces .* these synthetic faces should be indistinguishable from real faces .* the synthetic faces should not affect the authentication accuracy of real users. 
these requirements ensure that the faces of the real users can hide among the synthetic ones , without affecting recognition accuracy .there are two lines of research related to the ideas introduced in this paper .one of them is honeyobjects , discussed in section [ sec : sub : sub : honeyobjects ] .the second one , discussed in section [ sec : sub : sub : biometric_synthesis ] , is the synthesis of biometric traits .honeyobjects are widely used in computer security .the use of honeypot users ( fake accounts ) is an old trick used by system administrators .login attempts to such accounts are a strong indication that the password file has leaked .later , the concept of honeypots and honeynets was developed .these tools are used to lure adversaries into attacking decoy systems , thus exposing their tools and strategies .honeypots and honeynets became widely used and deployed in the computer security world , and play an important role in the mitigation of cyber risks .recently , juels and rivest introduced honeywords , a system offering decoy passwords in addition to the correct one . a user first authenticates to the main server using a standard password - based authentication in which the server can keep track of the number of failed attempts .once one of the stored passwords is used , the server passes the query to a second server which stores only the correct password .identification of the use of a decoy password by the second server , suggests that the password file has leaked .obviously , just like in honeypots and honeynets , one needs to make sure that the decoy passwords are sampled from the same space as the real passwords ( or from a space as close as possible ) .to this end , there is a need to model passwords correctly , a non - trivial task , which was approached in several works .interestingly , we note that modeling human faces was extensively studied and very good models exist ( see the discussion in section [ sec : sub : generating ] ) . in honeywordsit is a simple matter to change a user s password once if it has been compromised .clearly it is not practicable to change an individual s facial appearance .thus , when biometric data is employed , the biometric `` password file '' itself should be protected .honeyfaces protects the biometric data by inflating the `` password file '' such that it prevents leaks , which is a significant difference between honeywords and honeyfaces .another decoy mechanism suggested recently , though not directly related to our work , is honey encryption .this is an encryption procedure which generates ciphertexts that are decrypted to different ( yet plausible ) plaintexts when decrypted under one of a few wrong keys , thus making ciphertext - only exhaustive search harder .artificial biometric data are understood as biologically meaningful data for existing biometric systems .biometric data synthesis was suggested for different biometric traits , such as faces ( e.g. , ) , fingerprints ( e.g , ) and iris ( e.g. 
, ) .the main application of biometrics synthesis has been augmenting training sets and validation of biometric identification and authentication systems ( see for more information on synthesis of biometrics ) .synthetic faces are also used in animation , facial composite construction , and experiments in cognitive psychology .making realistic synthetic biometric traits has been the main goal of all these methods .however , the majority of previous work did not address the question of distinguishing the synthetic samples from the real ones .the work in iris synthesis analyses the quality of artificial samples by clustering synthetic , real , and non - iris images into two clusters iris / non - iris .such a problem definition is obviously sub - optimal for measuring indistinguishability .supervised learning using real and synthetic data labels has much better chances of success in separating between real and synthetic samples than unsupervised clustering ( a weaker learning algorithm ) into iris / non - iris groups .these methods also used recognition experiments , in which they compare the similarity of the associated parameters derived from real and synthetic inputs .again , this is an indirect comparison that shows the suitability of the generation method for evaluating the quality of the recognition algorithm , but it is not enough for testing the indistinguishability between real and synthetic samples . in fingerprints , it was shown that synthetic samples generated by different methods could be distinguished from the real ones with high accuracy .subsequent methods for synthesis showed better robustness against distinguishing attacks that use statistical tests based on .several methods for synthetic facial image generation provide near photo - realistic representations , but to the best of our knowledge , the question of indistinguishability between real and synthetic faces has not been addressed before .section [ sec : sub : generating ] describes , with justification , the method we use for generating honeyfaces . in section[ sec : system ] we present our setup for employing honeyfaces in a secure authentication system. the privacy analysis of honeyfaces , discussed in section [ sec : privacyanalysis ] , shows that the adversary can not obtain private biometric information from the biometric `` password file '' .section [ sec : securityanalysis ] analyses the additional security offered by inflating the `` password file '' .we conclude the paper in section [ sec : summary ] .biometric systems take a raw sample ( usually an image ) and process it to extract features or a representation vector , robust ( as much as possible ) to changes in sampling conditions . in the honeyfaces system, we have an additional requirement the feature space should allow sampling of artificial `` outcomes '' ( faces ) in _ large numbers_. these synthetic faces will be used as the passwords of the fake users .different models have been proposed for generating and representing faces including , active appearance models , 3d deformable models , and convolutional neural networks .such models have been used in face recognition , computer animation , facial composite construction ( an application in law enforcement ) , and experiments in cognitive psychology . 
among these modelswe choose the active appearance model for implementing the honeyfaces concept .an active appearance model is a parametric statistical model that encodes facial variation , extracted from images , with respect to a mean face .this work has been extended and improved in many subsequent papers ( e.g. , ) . in this contextthe word ` active ' refers to fitting the appearance model ( am ) to an unknown face to subsequently achieve automatic face recognition .am can also be used with random number generation to create plausible , yet completely synthetic , faces .these models achieve near photo - realistic representations that preserve identity , although are less effective at modeling hair and finer details , such as birth marks , scars , or wrinkles which exhibit little or no spatial correspondence between individuals .our choice of using the am for honeyfaces is motivated by two reasons : 1 ) the representation of faces within an am is consistent with human visual perception and hence also consistent with the notion of face - space .in particular , perceptual similarity of faces is correlated with distance in am space .2 ) am is a well understood model used previously in face synthesis ( e.g. ) .alternative face models may also be considered , provided a sufficient number of training images ( as the functions of the representation length ) is available to adequately model the facial variation within the population of real faces .recent face recognition technology uses deep learning ( dl ) methods as they provide very good representations for verification .however , the image reconstruction quality from dl representation is still far from being satisfactory for our application .ams describe the variation contained within the training set of faces , used for its construction .given that this set spans all variations associated with identity changes , the am provides a good approximation to any desired face .this approximation is represented by a point ( or more precisely , localized contiguous region ) within the face - space , defined by the _ am _ _ coefficients_. the distribution of am coefficients of faces belonging to the same ethnicity are well approximated by an independent , multivariate , gaussian probability density function ( for example , see figure [ fig : distr_ex ] that presents the distribution of the first 21 am coefficients for a face - space constructed from 500 faces . )new instances of facial appearance , the synthetic faces , can be obtained by randomly sampling from such a distribution . for simplicity , hereafter we assume that faces belong to a single ethnicity . to accommodate faces from different ethnic backgrounds ,the same concept could be used with the mixture of gaussians distribution .we follow the procedure for am construction , proposed in .the training set of facial images , taken under the same viewing conditions , is annotated using a point model that delineates the face shape and the internal facial features . in this process, 22 landmarks are manually placed on each facial image .based on these points , 190 points of the complete model are determined ( see for details ) . for each face , landmark coordinates are concatenated to form a shape vector , . the data is then centered by subtracting the mean face shape , , from each observation . 
the shape principle components are derived from the set of mean subtracted observations ( arranged as columns ) using pca .the synthesis of a face shape ( denoted by ) from the _ shape model _ is done as follows , where is a vector in which the first elements are normally distributed parameters that determine the linear combination of shape principal components and the remaining elements are equal to zero .we refer to as the _ shape coefficients_. before deriving the texture component of the am , training images must be put into correspondence using non - rigid shape alignment procedure . each shape normalized and centered rgb image of a training face is then rearranged as a vector .such vectors for all training faces form a matrix which is used to compute the texture principle components by applying pca . a face texture ( denoted by )is reconstructed from the _ texture model _ as follows , where are the _ texture coefficients _ which are also normally distributed and is the mean texture . the final model is obtained by a pca on the concatenated shape and texture parameter vectors .let denote the principal components of the concatenated space .the am coefficients ( ) are obtained from the corresponding shape ( ) and texture ( ) as follows , \equiv q^t\left [ \begin{array}{c } w p_s^t(x-\bar{x})\\ p_g^t(g-\bar{g})\\ \end{array } \right]\ ] ] where is a scalar that determines the weight of shape relative to texture .figure [ fig_amm_example ] illustrates the shape vector ( center image ) and the shape free texture vector ( on the right ) used to obtain the am coefficients .am coefficients of a * real face * are obtained by projecting its shape and texture onto the shape and texture principal components correspondingly and then combining the shape and texture parameters into a single vector and projecting it onto the am principal components . in order to create the * synthetic faces * , we first estimate a -dimensional gaussian distribution of the am coefficients using the training set of real faces . then am coefficients of synthetic facesare obtained by directly sampling from this distribution , discarding the samples beyond standard deviations .such that all training samples are within standard deviations from the mean . ]theoretically , the expected distance between the samples from am distribution to its center is about standard deviation units .we observed that the distance of real faces from the center is indeed close to standard deviation units .in other words , am coefficients are most likely to lie on the surface of an -dimensional ellipsoid with radii , where . hence to sample synthetic faces, we use the following process : sample from a -dimensional gaussian , normalize to the unit length and multiply coordinate - wise by . to minimize the differences between the am representations of real and synthetic faces , we apply the same normalization process to the am coefficients of the real faces as well .the biometric `` password file '' of the honeyfaces system is composed of records , containing the am coefficients of either real or synthetic faces .the coefficients are sufficient for the authentication process without reconstructing the face .however , we use reconstructed faces in our privacy and security analysis , thus in the following , we show how to reconstruct faces from their corresponding am coefficients . 
first , the shape and texture coefficients are obtained from the am coefficients as follows , and , where ^t$ ] is the am basis .then the texture and shape of the face are obtained via eq .( [ eq : shape_bit ] ) and ( [ eq : texture_bit ] ) .finally , the texture is warped onto the shape , resulting in a facial image . figure [ fig : face_samples ] shows several examples of reconstructed real faces and synthetic faces , sampled from the estimated distribution of am coefficients .to prevent exfiltration and protect privacy of the users , we create a very large number of synthetic faces .these faces can be incorporated in the authentication system in different ways .for example , the honeywords method stores a list of passwords ( one of which is correct and the rest are fake ) per account . in our settings , boththe number of synthetic faces and the ratio of synthetic to real faces should be large .thus , the configuration , in which the accounts are created solely for real users , requires a very large number of synthetic faces to be attached to each account .hence , in such an implementation , during the authentication process , a user s face needs to be compared to a long list of candidates ( all fake faces stored with the user name ) .this would increase the authentication time by a factor equal to the synthetic - to - real ratio , negatively affecting the usability of the system and leading to an undesirable trade off between privacy and usability .another alternative is creating many fake accounts with a single face as a password . this does not change the authentication time of the system ( compared to a system with no fake accounts ) .since most real systems have very regular user names ( e.g. , the first letter of the given name followed by the family name ) , it is quite easy to generate fake accounts following such a convention . as we show in section [ sec : sub : blowup ] , this allows inflating the password file to more than 56.6 tb ( when disregarding the storage of user names ) .one can also consider a different configuration , aimed to fool an adversary that knows the correct user names , but not the real biometrics .specifically , we can store several faces in each account ( instead of only one ) in addition to the fake accounts ( aimed at adversaries without knowledge of user names ) .the faces associated with a fake account are all synthetic .the faces associated with a real account include one real face of that user and the rest are synthetic one .in such a configuration the authentication time does not increase significantly , but the total size of the `` biometric data '' and the ratio of real - to - synthetic faces remains large .moreover , the adversary that knows the user name still needs to identify the real face among several synthetic faces . 
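to make the generation procedure described above concrete, the sketch below draws candidate coefficient vectors on the 7-standard-deviation ellipsoid (gaussian draw, normalisation to unit length, coordinate-wise scaling) and keeps a candidate only if its l2 distance to every previously accepted face is large enough. the per-coordinate standard deviations are toy values, the linear scan over accepted faces is for clarity only, and the separation threshold used in the example matches the value reported in the implementation details below; this is an illustration, not the system's code.

```python
import numpy as np

def sample_on_ellipsoid(sigmas, rng):
    # gaussian draw, normalised to unit length, scaled by 7 standard deviations
    z = rng.normal(size=sigmas.shape)
    return 7.0 * sigmas * z / np.linalg.norm(z)

def generate_decoys(n_wanted, sigmas, real_faces, min_dist, rng):
    accepted = [np.asarray(f, dtype=float) for f in real_faces]
    decoys = []
    while len(decoys) < n_wanted:
        cand = sample_on_ellipsoid(sigmas, rng)
        if all(np.linalg.norm(cand - f) >= min_dist for f in accepted):
            accepted.append(cand)
            decoys.append(cand)
    return decoys

rng = np.random.default_rng(0)
sigmas = np.full(80, 700.0)                 # 80 coefficients with toy std devs
real = [sample_on_ellipsoid(sigmas, rng) for _ in range(5)]
decoys = generate_decoys(100, sigmas, real, min_dist=4800.0, rng=rng)
print(len(decoys), round(float(np.linalg.norm(decoys[0] / sigmas))))   # 100 7
```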
in this work we implemented and analyzed the configuration in which real and decoy users have an account with a single passwordthe majority of the users are fake in order to hide the real ones .each user ( both real and fake ) has an account name and a password composed of 80 am coefficients .these coefficients are derived from the supplied facial image for real users or artificially generated for decoy ones .the number of training subjects for the face - space construction should be larger than the number of system users .this provides a better modeling of the facial appearance , allowing a large number of synthetic faces to be created , and protecting the privacy of system s users as discussed in section [ sec : privacyanalysis ] .we used a set of 500 subjects to train the am .270 of them were the users of the honeyfaces system .all images in the training set were marked with manual landmarks using a tool similar to the one used in .we computed a 50-dimensional shape model and a 350-dimensional texture model as described in section [ sec : sub : generating ] and we reduced the dimension of the am parameters to 80 .we note that this training phase is done once , and needs to contain the users of the system mainly for optimal authentication rates .however , as we later discuss in section [ sec : sub : facespace ] , extracting biometric information of real users from the face - space is infeasible .all representations used in the system were normalized to unit norm and then multiplied by 7 standard deviations .this way we forced all samples ( real and synthetic ) to have the same norm , making the distribution of distances of real and synthetic faces very similar to each other ( see figure [ fig : distr_ex ] ) .we used the resulting face - space to generate synthetic faces .we discarded synthetic faces that fall closer than a certain distance from the real or previously created synthetic faces .the threshold on the distance between faces of different identities was set to 4,800 , thereby minimizing the discrepancy between the distance distributions of real and synthetic faces .this minimum separation distance prevents collisions between faces and thus the addition of synthetic faces does not affect the authentication accuracy of the original system ( prior to inflation ) .the process of synthetic face generation is very efficient and takes only seconds on average using matlab .using 500 training faces we were able to create about synthetic faces , with sufficient distance from each other .we strongly believe that more faces can be generated ( especially if the size of the training set is increased ) , but faces that occupy 56.6 tb seems sufficient for proof of concept .the authentication process of most biometric systems is composed of the user supplying the user name and her or his facial image .this image ( hereafter the test image ) is aligned to conform with the reference image stored for that user .after the registration , the distance between the test and reference templates are computed and compared to some predefined threshold . 
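two operations from this section lend themselves to a compact sketch : inflating the file with synthetic coefficient vectors that keep a minimum separation from all existing ones , and the distance - threshold comparison used at authentication time . this is a naive illustration , not the authors' matlab implementation ; the 4,800 separation comes from the text , while the acceptance threshold is left as a parameter ( its concrete operating point is given in the next section ) .

```python
import numpy as np

def inflate_password_file(sample_fn, real_coeffs, n_synthetic, min_dist=4800.0):
    # accept a candidate only if it lies at least min_dist (L2) from every real
    # face and every previously accepted synthetic face; a brute-force scan per
    # candidate is enough for a proof of concept
    accepted = []
    pool = [np.asarray(f, dtype=float) for f in real_coeffs]
    while len(accepted) < n_synthetic:
        cand = sample_fn()
        if np.linalg.norm(np.asarray(pool) - cand, axis=1).min() >= min_dist:
            accepted.append(cand)
            pool.append(cand)
    return np.asarray(accepted)

def verify_attempt(test_coeffs, stored_coeffs, threshold):
    # after aligning the test image to the reference shape and projecting it to
    # AM coefficients, authentication reduces to an L2 comparison
    return float(np.linalg.norm(test_coeffs - stored_coeffs)) < threshold
```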
to find the registration between the test and the reference facial templates in our system , we first reconstruct the facial shape of the corresponding subject in the database from the am coefficients ( as shown in section [ sec : sub : recon ] ) .we then run an automatic landmark detector on the test image ( using face++ landmark detector ) and use these landmarks and the corresponding locations in the reference shape to find the scaling , rotation , and translation transformations between them .then we apply this transformation to the reference shape to put it into correspondence with the coordinate frame of the test image . the am coefficients of the test imageare computed using the transformed reference shape and the test image itself ( as shown in section [ sec : sub : coefficients ] ) and then compared to the stored am coefficients the password , using the l2 norm .the threshold on the l2 distance was set to 3,578 which corresponds to 0.01% of far .note that the threshold is smaller than the distance between the faces ( 4,800 ) used for synthetic face generation . figure [ fig : lmk_samples ] illustrates the authentication process for the genuine and imposter attempts .we ran 270 genuine attempts , comparing the test image with the corresponding reference image , and about 4,200,000 impostor attempts ( due to the access to face++ ) . for a threshold producing an far of 0.01% ,our system showed the true acceptance rate ( 100-frr ) of 93.33% .figure [ fig : roc ] shows the corresponding roc curve .our tests showed no degradation in frr / far after the inclusion of synthetic faces .finding landmarks in a test image , using the face++ landmark detector , takes 1.42 seconds per subject on average .we note that the implementation of the landmark detector is kept at the face++ server and thus the reported times include network communications .running the detector locally will significantly reduce the running time .obtaining the am coefficients of a test image and comparing them to those of the target identity in the database takes additional 0.53 seconds on average .this brings us to a total of 1.95 seconds ( on average ) for a verification attempt .the system was implemented and tested in matlab r2014b 64-bit on windows 7 , in 64-bit os environment with intel s i7 - 4790 3.60ghz cpu and 16 gb ram .local implementation that uses c is expected to improve the running times significantly ( though not faster than 1 ms ) .our privacy analysis targets an adversary with access to the inflated biometric `` password file '' , and is divided into three cases . the first scenario , discussed in section [ sec : sub : noprior ] ,is an adversary who has no prior knowledge about the users of the system .such an adversary tries to identify the real users out of the fake ones .the second scenario , discussed in section [ sec : sub : out_source ] , concerns an adversary that tries to achieve the same goal , but has access to a comprehensive , external source of facial images that adequately represents the world wide variation ( population ) in facial appearance but does not know who the users are .the last scenario assumes that the adversary obtained the biometric data of all but one out of the system s users , and wishes to use this to find the biometrics of the remaining user .we discuss this case in section [ sec : sub : facespace ] .we first discuss the scenario in which the adversary has the full database ( e.g. 
, after breaking into the system ) and wishes to identify the real users but has no prior knowledge concerning the real users .more explicitly , this assumption means that the adversary does not have a candidate list and their biometrics , to check if they are in the database .an inflated password file is a file that contains facial templates , of which correspond to real faces and remaining are synthetic faces sampled from the same face - space as the real faces .a simulated password file is a file that contains facial templates , all of which are synthetic faces sampled from the same face - space .an adversary that can distinguish between an inflated password file and a simulated password file , can be transformed into an adversary that extracts all the real users .similarly , an adversary that can extract real users from a password file can be used for distinguishing between inflated and simulated password files .we start with the simpler case transforming an adversary that can extract the real faces into a distinguisher between the two files .the reduction is quite simple .if the adversary can extract real faces out of the password file ( and even only a single real face ) , we just give it the password file we have received . if the adversary succeeds in extracting any face out of it , we conclude that we received the inflated password file .otherwise , we conclude that we received a simulated password file .it is easy to see that the running time of the distinguishing attack and its success rate are exactly the same as that of the original extraction adversary .now , assume that we are given an adversary that can distinguish between an inflated password file and a simulated one with probability .we start by recalling that the advantage of distinguishing between two simulated ones is necessarily zero .hence , one can generate a hybrid argument , of replacing one face at a time in the file .when we replace a synthetic face with a different synthetic face , we have not changed the distribution of the file .thus , the advantage drops only when we replace a real face with a synthetic face , which suggests that if there are real users in the system , and total users in the system , we can succeed in identifying at least one of the real users of the system with probability greater than or equal to and running time of at most times the running time of the distinguishing adversary .[ cor_1 ] if the distributions of the inflated password file and the simulated password file are statistically indistinguishable , an adversary with no prior knowledge ( of either user s biometrics or user names ) can not identify the real users . theoretically , synthetic and real faces are sampled from the same distribution and thus are indistinguishable according to corollary [ cor_1 ] . however , in practice , synthetic faces are sampled from a parametric distribution which is estimated from real faces . the larger the set of faces , used to estimate the distribution , the closer these distributions will be . 
in practice, the number of training faces is limited which could introduce some deviations between the distributions .our following analysis shows that these deviations are too small to distinguish between the distributions of real and synthetic faces either by statistical tests or by human observers .the first part of the analysis performs a statistical test of the am coefficients of the real and the synthetic faces and shows that these distributions are indeed very close to each other .the second part studies the distribution of mutual distances among real and synthetic faces and reaches the same conclusion .finally , we perform a human experiment on the reconstructed and simulated faces , showing that even humans can not distinguish between them .the am coefficients are well approximated by a gaussian distribution in all dimensions .therefore , sampling am coefficients for synthetic faces from the corresponding distribution is likely to produce representations that can not be distinguished by standard hypothesis testing from real identities .the examples of real and synthetic distributions for the first 21 dimensions are depicted in figure [ fig : distr_ex ] and the following analysis verifies this statement .first , we show that coefficients of real and synthetic faces can not be reliably distinguished based on two sample kolmogorov smirnov ( ks ) test . to this end, we sampled a subset of 500 synthetic samples from 80-dimensional am and we compare it to the 500 vectors of coefficients of training images .we ran the ks test on these two sets for each of the 80 dimensions and recorded the result of the hypothesis test and the corresponding p - value .we repeated this test 50 times , varying the set of synthetic faces .the ks tests supported the hypothesis that the two samples come from the same distributions in 98.72% of the cases with a mean p - value 0.6 ( over 50 runs and 80 components , i.e. , 4000 tests ) .these results show that am coefficients of real and synthetic faces are indistinguishable using a two - sample statistical test .we analyzed the distributions of distances between the real faces , synthetic ones , and a mixture of both .figure [ fig : dist_distr ] shows that these distributions , both in the case of euclidean distances and in the case of angular distances , are very close .hence , the statistical distance between them is negligible , suggesting that attacks trying to use mutual distances are expected to be ineffective .we conducted a human experiment containing two steps .in the first step , the participants were shown a real face , not used in the experiment , and its reconstruction . in the second step of the experiment ,each participant was presented with the same set of 16 faces ( 11 of which were synthetic and 5 of which were real ) and was asked to classify them as real or fake .we also allowed the users to avoid answering in the case of uncertainty or fatigue .the 11 synthetic faces were chosen at random from all the synthetic faces we generated , and the 5 real ones were chosen at random from the 500 real faces . for the real faces , we computed the am coefficients for each real image and then used the method described in section [ sec : sub : recon ] to generate the real faces and synthetic faces from the model .examples of real and synthetic faces are provided in the second and third rows , respectively , of figure [ fig : face_samples ] . 
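the per - dimension kolmogorov smirnov analysis described above can be reproduced with a few lines of scipy . the sketch assumes the coefficients are given as ( n , 80 ) arrays and , as in the text , would be repeated over several random subsets of synthetic faces .

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_indistinguishability(real_coeffs, synth_coeffs, alpha=0.05):
    # two-sample KS test per AM dimension: returns the fraction of dimensions
    # for which the same-distribution hypothesis is retained, and the mean p-value
    pvals = np.array([ks_2samp(real_coeffs[:, j], synth_coeffs[:, j]).pvalue
                      for j in range(real_coeffs.shape[1])])
    return float(np.mean(pvals > alpha)), float(pvals.mean())
```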
out of 179 answerswe have received , 97 were correct , showing a success rate of 54.19% .the fake faces received 120 answers , of which 66 were correct ( 55% ) .the real faces received 59 answers , of which 31 were correct ( 52.5% ) .our analysis shows that the answers for each face are distributed very similarly to the outcome of a binomial random variable with a probability of success at each trial of 0.5 .next , we analyze the case where an adversary has access to the inflated `` password file '' and to an extensive external source of facial images ( e.g. the internet ) .we consider two attack vectors : the first tries to use membership queries with random facial images to match real users of the system , the second attempts to distinguish between real and synthetic faces using a training process on a set of real facial images unrelated to the users of the system .an adversary could use a different source of facial images to try and run a membership query against the honeyfaces system to obtain the biometric of the real users . to match a random image from an external source of facial images, the adversary must run the authentication attempt with all users of the system ( including the fake ones ) .our experiments show that the current implementation takes about 2 seconds per authentication attempt ( mostly due to the landmarking via face++ ) . even under the unrealistic assumption that the authentication time could be reduced to 1 ms, it would take about seconds ( slightly more than 2 cpu years ) , to run the matching of a single facial image against fake faces .we note that one can not use a technique to speed up this search and comparison ( such as kd - trees ) as the process of comparison of faces requires aligning them ( based on the landmarks ) , which can not be optimized ( to the best of our knowledge ) .one can try to identify the membership of a person in the system by projecting his / her image onto the face - space of the system and analyzing the distance from the projection to the image itself .if the face - space was constructed from system users only , a small distance could reveal the presence of the person in the face - space .such an attack can be easily avoided by building the face - space from a sufficiently large ( external ) source of faces .such a face - space approximates many different appearances ( all combinations of people in the training set ) and thus people unrelated to the users of the system will also be close to the face - space . we conclude that a membership attack to obtain the real faces from the data base is impractical . the task of the adversary who obtained the inflated `` biometric password file '' is to distinguish the real faces from the synthetic ones .he can consider using a classifier that was trained to separate real faces from the fake ones . to this endthe adversary needs to construct a training set of real and synthetic faces .synthetic faces can be generated using the system s face - space .however , the real faces of the system are unavailable to the adversary .one way an adversary might approach this problem is by employing a different set of real faces ( a substitute set ) to construct the face - space .he then can create a training set by generating synthetic faces using that space and reconstructing the real faces from the substitute set following the algorithms described in section [ sec : sub : generating ] . 
a trained classifier could then be used to classify the faces in the biometric `` password file '' . the substitute training set is likely to have different characteristics from the original one . the adversary could try to combine the system 's face - space with the substitute set in an attempt to improve the similarity of the training set to the biometric `` password file '' . then , the adversary can construct the training set of real faces by projecting the images from the substitute set on the mixed face - space and reconstructing them as described in section [ sec : sub : recon ] . to create a training set of synthetic faces , the adversary can either use the mixed face - space or the system 's face - space . deep learning and , in particular , convolutional neural networks ( cnn ) have shown close to human performance in face verification . it is a common belief that the success of the cnn in recognition tasks is due to its ability to extract good features . moreover , it was shown that cnn features can be successfully transferred to perform recognition tasks in similar domains ( e.g. ) . such techniques are referred to as fine tuning or transfer learning . they proceed by replacing the upper layers of the fully trained dl network ( that solves a related classification problem ) by layers that fit the new recognition problem ( the new layers are initialized randomly ) . the updated network is then trained on the new classification problem with the smaller data set . note that most of the network does not require training , only slight tuning to fit the new classification task , and the last layer can be well trained using good cnn features and a smaller data set . following this strategy , we took the vgg - face deep network that was trained to recognize 2,622 subjects and applied the transfer learning method to train a dl network to classify between real and synthetic faces . to this end , we replaced the last fully connected layer of size 2,622 with a fully connected layer of size 2 and trained this new architecture in the following settings .
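the three experimental settings are listed next ; they all share the same network surgery , sketched below with a torchvision vgg-16 standing in for the vgg - face network . the layer layout and the decision to freeze the backbone are assumptions of this sketch , not a description of the authors' training code .

```python
import torch.nn as nn
from torchvision import models

def build_real_vs_synthetic_classifier():
    # replace the final identity-classification layer (2,622 classes in
    # vgg-face) with a randomly initialized 2-way real/synthetic head; only
    # the new head (and, optionally, the top layers) is then trained on the
    # adversary's substitute training set with a standard cross-entropy loss
    net = models.vgg16(weights=None)       # load vgg-face weights here in practice
    for p in net.parameters():
        p.requires_grad = False
    net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, 2)
    return net
```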
in all experiments we split the training set for training and validation of the network .then we applied the trained network on a subset of system s data set to classify the images into real and synthetic .the subset included all real faces and a subset of the synthetic faces ( same size as real set to balance the classification results ) .setting 1 : a face - space was constructed from 500 faces belonging to the substitute set .the training set included 400 reconstructed real faces and 400 synthetic faces , generated using the substitute face - space .the validation set included 100 reconstructed real faces and 100 synthetic faces from the same domain , not included in the training set .the results on the substitute validation set showed that the dl network classifies 62.5% of faces correctly .the results on the system s set dropped to 53.33% , which is close to random .setting 2 : a face - space was constructed by combining the system s face - space with the substitute set .the training set included 400 real faces projected and reconstructed using the mixed face - space and 400 synthetic faces , generated using the mixed face - space .the validation set included 100 reconstructed real faces and 100 synthetic faces from the same domain , not included in the training set .the results on the validation set showed good classification : 75% of synthetic faces were classified as synthetic and 93% of real faces were classified as real .however , the same network classified all faces of the system s face set as synthetic .this result shows that using a mixed face - space to form a training set is not effective .the prime reason for this is the artifacts in synthetic images due to variation in viewing conditions between the sets .setting 3 : the real training and validation sets were the same as in setting 2 .the synthetic training and validation sets were formed by generating synthetic faces using system s face - space . herethe classifier was able to perfectly classify the validation set , but it classified all system s faces as synthetic .this shows that using real and synthetic faces from different face - spaces introduces even more differences between them , which do not exist in system s biometric `` password file '' .to conclude , the state - of - the - art deep learning classifier showed accuracy of 53.33% in distinguishing between real and synthetic faces in the system s biometric `` password file '' .this result is close to random guessing .an adversary who obtains the facial images of all but one of the real users of the system can try and use it for extracting information about the remaining user from the password file .if the training set used for constructing the face - space contains only the users of the system , the following simple attack will work : recall that the authentication procedure requires removing the mean face from the facial image obtained in the authentication process .thus , the mean of all faces in the training set is stored in the system .the adversary can find the last user by computing the mean of the users he holds and solving a simple linear equation . 
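the linear - equation attack mentioned at the end of this section is simple enough to spell out . the sketch assumes the stored mean was computed over exactly the n system users and that the adversary holds the other n - 1 coefficient vectors .

```python
import numpy as np

def recover_missing_user(stored_mean, known_faces):
    # n * mean = sum(known) + unknown   =>   unknown = n * mean - sum(known)
    known = np.asarray(known_faces, dtype=float)
    n = known.shape[0] + 1
    return n * stored_mean - known.sum(axis=0)
```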
to mitigate this attack , and to allow better modeling of the facial appearance, the training set should contain a significant amount of training faces that are not users of the system .note that these additional faces must be discarded after the face - space is constructed .assuming that the training set for the face - space construction was not limited to a set of system users ( as is the case in our implementation ) , the adversary could try the following attack .create face - spaces by adding each unknown face from the biometric `` password file '' in turn to the real faces that are in the possession of the adversary . is equal to the number of synthetic faces in the biometric `` password file '' plus one real face .then compare these face - spaces to the one stored in the system ( using statistical distance between the distributions ) .such comparison provides a ranking of unknown faces to be the real face .if the attack is effective , we expect the face - space including the user to be highly ranked ( i.e. , to appear in a small percentile ) .however , if the distribution of the rankings associated with the face - space including the real face over random splits of known and 1 unknown face is ( close to ) uniform , then we can conclude that the adversary does not gain any information about the last user using this attack . in our implementation of the attack , we assume that the adversary knows 269 faces of real users and he tries to identify the last real user among the synthetic ones .running the attack with all synthetic faces is time consuming . to get statistics of rankings we can use a much smaller subset of synthetic faces .specifically , we used 100 synthetic faces and ran the experiment over 100 randomized splits into 269 known and 1 unknown faces .figure [ fig : dist_rankings ] shows the histogram of rankings associated with the face - space including the last real user in 100 experiments .the histogram confirms that the distribution of rankings is indeed uniform , which renders the attack ineffective .an alternative approach , that the adversary may take , is to analyze the effects of a single face on the face - space distribution .however , our experiments show that the statistical distances between neighboring distributions ( i.e. , generated from training sets that differ by a single face ) are insignificant . specifically , the average statistical distance between the distribution estimated from the full training set ( of 500 real faces ) and all possible sets of 499 faces ( forming 500 neighboring sets , each composed of a different subset of 499 faces ) is and the maximal distance is .these distances are negligible compared to the standard deviations of the face - space gaussians ( the largest std is 6,473.7 and the smallest is 304.1717 ) .these small differences suggest that one can use differential privacy mechanisms with no ( or marginal ) usability loss ( for example , by using ideas related to ) to mitigate attacks that rely on prior knowledge of the system s users .we leave the implementation and evaluation of this mechanism for future research . 
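a simplified version of the ranking experiment described above is sketched below . it replaces the full face - space construction by a per - dimension gaussian refit , so it is only a stand - in for the actual procedure , and the statistical - distance function is left abstract .

```python
import numpy as np

def rank_candidates(known_real, candidates, system_distance_fn):
    # for each unknown face, refit the coefficient distribution on the known
    # real faces plus that candidate and score its distance to the distribution
    # stored by the system; a (near-)uniform rank of the true user over random
    # splits, as reported above, means the attack yields no information
    scores = []
    for f in candidates:
        sample = np.vstack([np.asarray(known_real), np.asarray(f)[None, :]])
        scores.append(system_distance_fn(sample.mean(axis=0), sample.std(axis=0)))
    return np.argsort(scores)    # candidate indices, most to least suspicious
```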
to conclude , the honeyfaces system protects the privacy of users even in the extreme case when the adversary learned all users but one , assuming that the training set for constructing the face - space contains a sufficiently large set of additional faces . we now discuss the various scenarios in which honeyfaces improve the security of biometric data . we start by discussing the scenario of limited outgoing bandwidth networks ( such as air - gapped networks ) , and showing the effects of the increased file size on the exfiltration times . we follow by discussing the effect honeyfaces has on the detection of the exfiltration process . we conclude the analysis of the security offered by our solution with the scenario of partial exposure of the database . the time needed to exfiltrate a file is easily determined by the size of the file to be exfiltrated and the bandwidth . when the exfiltration bandwidth is very slow ( e.g. , in the air - gap networks studied in ) , a 640-byte representation of a face ( or 5,120-bit one ) takes between 5 seconds ( at a 1,000 bits per second rate ) and 51 seconds ( at the more realistic 100 bits per second rate ) . hence , leaking even a 1 gbyte database takes between 92.6 and 926 days ( assuming full bandwidth , and no need for synchronization or error correction overhead ) . the size of the password file can be inflated to contain all the faces we created , resulting in a 56.6 tbytes file size ( whose leakage would take about 14,350 years at the faster speed ) . a possible way to decrease the file size is to compress the file . our experiments show that linux 's zip version 3.0 could squeeze the password file by only 4% . it is highly unlikely that one could devise a compression algorithm that succeeds in compressing significantly more . in other words , compressing the face file reduces the number of days to exfiltrate 1 gbyte to 88.9 days ( at the faster speed ) . one can consider a lossy compression algorithm , for example by using only the coefficients associated with the principal components ( carrying most information ) . we show in section [ sec : sub : rates ] that this approach requires using many coefficients for identification . hence , we conclude that if the bandwidth is limited , exfiltration of the full database within an acceptable time limit is infeasible . the improved leakage detection stems from two possible defenses : the use of intrusion detection systems ( and data loss prevention products ) and the use of a two - server setting as in honeywords . intrusion detection systems , such as snort , monitor the network for suspicious activities . for example , a high outgoing rate of dns queries may suggest an exfiltration attempt and raise an alarm . similar exfiltration attempts can also increase the detection of data leakage ( such as an end machine which changes its http footprint and starts sending a large amount of information to some external server ) . hence , an adversary who does not take these tools into account is very likely to get caught .
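the bandwidth arithmetic used above reduces to a one - line computation ; the example values reproduce the figures quoted in the text .

```python
def exfiltration_days(file_bytes, bits_per_second):
    # days needed to leak a file over a covert channel, ignoring synchronization
    # and error-correction overhead
    return file_bytes * 8 / bits_per_second / 86400

# exfiltration_days(1e9, 1000)            -> ~92.6 days   (1 GByte at 1,000 bit/s)
# exfiltration_days(1e9, 100)             -> ~926 days    (1 GByte at 100 bit/s)
# exfiltration_days(56.6e12, 1000) / 365  -> ~14,350 years (the 56.6 TByte inflated file)
```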
on the other hand , an adversary who tries to `` lay low ''is expected to have a reduced exfiltration rate , preventing quick leakage and returning to the scenario discussed in the previous section .the use of honeyfaces also allows for a two - server authentication setting similarly to honeywords .the first server uses a database composed of the real and the synthetic faces .after a successful login attempt is made into this system , a second authentication query is sent to the second server , which holds only the real users of the system . a successful authentication to the first server that uses a fake account , is thus detected at the second server , raising an alarm .we showed that exfiltrating the entire password file in acceptable time is infeasible if the bandwidth is limited .hence , the adversary can decide to pick one of two approaches ( or combine them ) when trying to exfiltrate the file either leak only partial database ( possibly with an improved ratio of real to synthetic faces ) , or to leak partial representations such as the first 10 am coefficients out of 80 per user . as we showed in the privacy analysis ( section [ sec : privacyanalysis ] ) , statistical tests or machine learning methods fail to identify the real faces among the synthetic ones . using membership queries to find real faces in the database is computationally infeasible without prior knowledge of the real user names .we conclude that reducing the size of the data set by identifying the real users or significantly improving the real to synthetic ratio is impossible . the second option is to leak a smaller number of the coefficients ( a partial representation ) .leaking a smaller number of coefficients can be done faster than the entire record , and allow the adversary to run on his system ( possibly with greater computational power ) , any algorithm he wishes for the identification of the real users . in the following ,we show that partial representations ( that significantly decrease the size of the data set ) do not provide enough information for successful membership queries .we experimented with 10 coefficients ( i.e. , assume that the adversary leaked the first 10 am coefficients of all users ) .as the adversary does not know the actual threshold for 10 coefficients , he can try and approximate this value using the database .our proposed method for this estimation is based on computing the distance distribution for 30,000 faces from the database , and setting a threshold for authentication corresponding to the 0.01% `` percentile '' of the mutual distances .we then take test sets of real users faces and of outsiders faces and for each face from these sets , computed the minimal distance from this face to all the faces in the reduced biometric password file .we assume that if this distance is smaller than the threshold , then the face was in the system , otherwise we conclude that the face was not in it .our experiments show that for the 0.01% threshold , 98.90% of the outsider set and 99.26% of the real users were below the threshold .in other words , there is almost no difference between the chance of determining that a user of the system is indeed a user vs. 
determining that an outsider is a user of the system .this supports the claim that 10 coefficients are insufficient to distinguish between real users and outsiders .we also used a smaller threshold which tried to maximize the success rate of an outsider to successfully match to a real face .for this smaller threshold , 71.08% of the outsiders were below it compared with 74.07% of the real users . to further illustrate the effects of partial representation on the reconstructed face , we show in figure [ fig : grad_rec ] the reconstruction of faces from 80 , 30 , and 10 coefficients , compared with the real face .as can be seen , faces reconstructed from 30 coefficients are somewhat related to the original face , but faces reconstructed from 10 , bare little resemblance to the original .although it is possible to match a degraded face to the corresponding original when a small number of faces are shown ( figure [ fig : grad_rec ] ) , visual matching is impossible among faces .thus , an adversary wishing to leak partial information about an image , needs to leak more than 10 coefficients . to conclude , exfiltrating even a partial set of faces ( or parts of the records ) does not constitute a plausible attack vector against the honeyfaces system .in this paper we explored the use of synthetic faces for increasing the security and privacy of face - based authentication schemes .we have proposed a new mechanism for inflating the database of users ( honeyfaces ) which guarantees users privacy with no usability loss .furthermore , honeyfaces offers improved resilience against exfiltration ( both the exfiltration itself and its detection ) .we also showed that this mechanism does not interfere with the basic authentication role of the system and that the idea allows the introduction of a two - server authentication solution as in honeywords .future work can explore the application of the honeyfaces idea to other biometric traits ( such as iris and fingerprints ) .we believe that due to the similar nature of iris codes ( that also follow multi - dimensional gaussian distribution ) , the application of the concept is going to be quite straightforward .the funds received under the binational uk engineering and physical sciences research council project ep / m013375/1 and israeli ministry of science and technology project 3 - 11858 , `` improving cyber security using realistic synthetic face generation '' allowed this work to be carried out .j. l. araque , m. baena , b. e. chalela , d. navarro , and p. r. vizcaya .synthesis of fingerprint images . in _ pattern recognition , 2002 . proceedings .16th international conference on _ , volume 2 , pages 422425 .ieee , 2002 .v. blanz and t. vetter .a morphable model for the synthesis of 3d faces . in _ proceedings of the 26th annual conference on computer graphics and interactive techniques _ , pages 187194 .acm press / addison - wesley publishing co. , 1999 .j. cui , y. wang , j. huang , t. tan , and z. sun . an iris image synthesis method based on pca and super - resolution . in _ 17th international conference on pattern recognition , icpr 2004 ,cambridge , uk , august 23 - 26 , 2004 ._ , pages 471474 , 2004 .j. donahue , y. jia , o. vinyals , j. hoffman , n. zhang , e. tzeng , and t. darrell .decaf : a deep convolutional activation feature for generic visual recognition . in _ international conference in machine learning ( icml ) _ , 2014 .g. j. edwards , t. f. cootes , and c. j. taylor . 
.in _ computer vision - eccv98 , 5th european conference on computer vision , freiburg , germany , june 2 - 6 , 1998 , proceedings , volume ii _ , pages 581595 , 1998 . c. imdahl , s. huckemann , and c. gottschlich . towards generating realistic synthetic fingerprint images . in _ 9th international symposium on image and signal processing and analysis , ispa 2015 , zagreb , croatia , september 7 - 9 , 2015 _ , pages 7882 , 2015 .a. juels and m. wattenberg . .in j. motiwalla and g. tsudik , editors , _ ccs 99 , proceedings of the 6th acm conference on computer and communications security , singapore , november 1 - 4 , 1999 ._ , pages 2836 .acm , 1999 .m. weir , s. aggarwal , b. de medeiros , and b. glodek . .in _ 30th ieee symposium on security and privacy ( s&p 2009 ) , 17 - 20 may 2009 , oakland , california , usa _ , pages 391405 .ieee computer society , 2009 .s. n. yanushkevich , v. p. shmerko , a. stoica , p. s. p. wang , and s. n. srihari .introduction to synthesis in biometrics . in _ image pattern recognition - synthesis and analysis in biometrics _ , pages 530 .l. zhang , l. lin , x. wu , s. ding , and l. zhang .end - to - end photo - sketch generation via fully convolutional representation learning . in _ proceedings of the 5th acm on international conference on multimedia retrieval _ , pages 627634 .acm , 2015 .
one of the main challenges faced by biometric - based authentication systems is the need to offer secure authentication while maintaining the privacy of the biometric data . previous solutions , such as secure sketch and fuzzy extractors , rely on assumptions that can not be guaranteed in practice , and often affect the authentication accuracy . in this paper , we introduce honeyfaces : the concept of adding a large set of synthetic faces ( indistinguishable from real ) into the biometric `` password file '' . this password inflation protects the privacy of users and increases the security of the system without affecting the accuracy of the authentication . in particular , privacy for the real users is provided by `` hiding '' them among a large number of fake users ( as the distributions of synthetic and real faces are equal ) . in addition to maintaining the authentication accuracy , and thus not affecting the security of the authentication process , honeyfaces offer several security improvements : increased exfiltration hardness , improved leakage detection , and the ability to use a two - server setting like in honeywords . finally , honeyfaces can be combined with other security and privacy mechanisms for biometric data . we implemented the honeyfaces system and tested it with a password file composed of 270 real users . the `` password file '' was then inflated to accommodate up to users ( resulting in a 56.6 tb `` password file '' ) . at the same time , the inclusion of additional faces does not affect the true acceptance rate or false acceptance rate which were 93.33% and 0.01% , respectively . biometrics ( access control ) , face recognition , privacy
in 1988 , a. barkai and c.d mcquaid reported a novel observation in population ecology while studying benthic fauna in south african shores : a predator - prey role reversal between a decapod crustacean and a marine snail . specifically , in malgas island , the rock lobster _ jasus lalandii _ preys on a type of whelk , _ burnupena papyracea _ . as could be easily expected , the population density of whelks soared upon extinction of the lobsters in a nearby island ( marcus island , just four kilometers away from malgas ) . however , in a series of very interesting controlled ecological experiments , barkai and mcquaid reintroduced a number of _ jasus lalandii _ in marcus island , to investigate whether the equilibrium observed in the neighboring malgas island could be restored . the results were simply astounding : `` the result was immediate . the apparently healthy rock lobsters were quickly overwhelmed by large number of whelks . several hundreds were observed being attacked immediately after release and a week later no live rock lobsters could be found at marcus island . '' surprisingly , and despite observations such as the report in , theoretical population biology has largely ignored the possibility of predators and preys switching their roles . of importance , the paper of barkai and mcquaid suggests the existence of a threshold control parameter responsible for switching the dynamics between ( a ) a classical predator - prey system with sustained or decaying oscillations , and ( b ) a predator ( the former prey ) driving its present - day prey to local extinction . it is worth noting there are some papers in the literature describing ratio - dependent predation ( see , for example and ) , but they are not related to the possibility of role - reversals .
on the other hand ,the likelihood of changing ecological roles as a result of density dependence has already been documented for the case of mutualism by breton and , in 1998 , hernndez made an interesting effort to build a mathematical scheme capable of taking into account the possible switches among different possible ecological interactions .so , to the best of our knowledge , there are no theoretical studies supported by field evidence specifically addressing predator - prey role - reversals yet .predator - prey systems are generally modeled by adopting one of the many variations of the classical lotka - volterra model : where denotes the intrinsic preys rate of growth , corresponds to the rate of predation upon preys , stands for the predators death rate in absence of preys , and represents the benefit of predators due to the encounters with preys .our goal is to assess whether modeling the role - reversal behavior observed by barkai & mcquaid is possible , when adopting appropriate parameters and assumptions .for instance , if one considers quadratic density dependence in the preys as well as in the predators , non - constant rates of consumption of preys by the predators , and the profiting of predators by the existence of preys , then it is possible to suggest the following system : where represents the intrinsic growth rate of the prey in the absence of predators , the carrying capacity of the prey s habitat , the rate of preys consumption by the population of predators , the predators decay rate in the absence of preys , the intraspecific rate of competition among predators and , finally , the factor of predator s profiting from preys . the ratio is then the fraction of prey biomass that is actually converted into predator biomass .the latter should remain constant , since the fraction of preys biomass converted to predators biomass is a physiological parameter , rather than a magnitude depending on demographical variables .thus , a particular case of system ( [ e1 ] ) in the appropriate rescaled variables is : where all the parameters are positive and .in fact , all of the parameters have a relevant ecological interpretation : is the normalized intrinsic growth rate of the species with density , is a measure of the damage intensity of the second species on the first one , is the normalized rate of predators decay and is the benefit ( damage ) the second population gets from the first one .note the crucial role played by the interaction term , where stands for the first population threshold to switch from being prey to predator .the horizontal nullcline of the system of equations ( [ e2 ] ) , that is =0 ] , also has two branches : the horizontal axis and which is a parabola with , attaining its maximum at , the value of which is this term is positive if and only if . the zeros , and of equation ( [ e4 ] ) are given by the latter are real numbers if and only if . 
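the displayed equations of this section were lost in extraction ; the following is a hedged reconstruction . the first pair is the classical lotka - volterra model with the four parameters listed above ( the symbols are generic placeholders , since the original notation is not recoverable ) ; the second is the rescaled system ( [ e2 ] ) as it can be read off the jacobian matrix printed further below , so it should be checked against the original paper .

```latex
% classical lotka-volterra model (placeholder symbols):
%   \alpha : intrinsic prey growth rate,  \beta : rate of predation,
%   \gamma : predator death rate,         \delta : predators' benefit from encounters
\begin{align}
  \dot{u} &= \alpha\,u - \beta\,u v , &
  \dot{v} &= -\gamma\,v + \delta\,u v .
\end{align}

% rescaled role-reversal system (e2), reconstructed from the jacobian given in the text:
\begin{align}
  \dot{x} &= b\,x(1-x) - c\,x(k-x)\,y , &
  \dot{y} &= -e\,y(1+y) + f\,x(k-x)\,y .
\end{align}
```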
the rate of change of is then to analyze the system while keeping in mind the ecological interpretation of the variables and parameters , we will now consider the left branch of the horizontal nullcline ( [ e3 ] ) , with and the region of the phase plane of the system of equations ( [ e2 ] ) defined as the system of equations ( [ e2 ] ) has the equilibria : , plus those states of the system stemming from the intersection of the nullclines and in the region .such equilibria are defined by the in the interval satisfying the identity or , equivalently , the that are roots of the third order polynomial where , , and .the calculation of the nontrivial equilibria of ( [ e2 ] ) follows from the determination of the roots of ( [ e6 ] ) .consequently , due to the qualitative behavior of the functions and on , we are faced with the following possibilities : 1 .the nontrivial branches of the nullclines do not intersect each other in the region of interest .in such a case , the system ( [ e2 ] ) has just two equilibria : and in .figure 2a shows the relative position of the nullclines in this case , and figure 3a the phase portrait of the system . + for fixed positive values of and , and such that one can see that both nullclines become closer with increasing values of .+ , both nullclines touch tangentially .further changes in the parameter lead to a saddle - node bifurcation , and to the two transversal intersections depicted in figure 1.,title="fig:",width=230 ] [ fig:1a ] + , both nullclines touch tangentially .further changes in the parameter lead to a saddle - node bifurcation , and to the two transversal intersections depicted in figure 1.,title="fig:",width=230 ] [ fig:1b ] + + 2 . the nullclines and touch each other tangentially at the point in the region .again , figure 2b shows the relative position of the nullclines in this case , and figure 3b the phase portrait of the system .in such a case , in addition to satisfy ( [ e5 ] ) , must also satisfy the condition _ i.e. _ , + + if one assumes the existence of satisfying ( [ e5 ] ) , the required extra condition ( [ e7 ] ) imposes the restriction on , due to the positiveness of its left hand side .moreover , from a geometrical interpretation of ( [ e7 ] ) it follows that : * \(i ) if + there is not any such that . *\(ii ) if the condition ( [ e7 ] ) is satisfied just at . *\(iii ) if there exists exactly one value , , of such that the equality ( [ e7 ] ) holds .+ in any case , the point is a _ non - hyperbolic _ equilibrium of the system of equations ( [ e2 ] ) .in fact , the proof that a tangential contact of the nullclines results in a point where the determinant of the jacobian matrix of the system vanishes follows immediately , implying that at least one of its eigenvalues is zero .the nullclines intersect each other transversally at two points , and , belonging the region . for reference, please refer to figure 1 . in this casethe system of equations ( [ e2 ] ) has two extra equilibria which arise from the _ bifurcation _ of . + here ,if in addition to choosing the parameters , and such that , we select the rest of them such that : * \(i ) , i.e. , + >\frac{b}{c}(2-k),\ ] ] + guaranteeing the existence of the equilibria and above mentioned .moreover , the coordinates of these points satisfy , , with . here* \(ii ) , i.e. 
, =\frac{b}{c}(2-k).\ ] ] + here we have with and .meanwhile , .part of the local analysis of the system of equations ( [ e2 ] ) is based on the linear approximation around its equilibria .thus , we calculate the jacobian matrix of the system ( [ e2 ] ) : {(x , y ) } = \left[\begin{array}{cc } b(1 - 2x)-cy(k-2x ) & -cx(k - x)\\ fy(k-2x ) & -e-2ey+fx(k - x)\end{array } \right].\ ] ] by a straightforward calculation , we obtain the eigenvalues of the jacobian matrix ( [ e9 ] ) at the point .these are : and .hence , is saddle point of the system ( [ e2 ] ) , for all positive parameter values . by carrying out similar calculations we obtain the corresponding eigenvalues of matrix ( [ e9 ] ) at , which are : and .the restriction on implies that .therefore , is an asymptotically stable node for all the positive parameter values appearing in system ( [ e2 ] ) .now we carry out the local analysis of ( [ e2 ] ) .we notice two cases , depending on the relative position of the nullclines : * case 1 . * the main branches ( [ e3 ] ) and ( [ e4 ] ) of the nullclines do not intersect on . here , any trajectory of system ( [ e2 ] ) starting at the initial condition with positive and tends to the equilibria as time goes to infinity .thus , the region is the basin of attraction of . invariably , the species with density vanishes , implying non coexistence among the interacting species .meanwhile , the other species approach the associated carrying capacity .* the nullclines intersect each other at the points and , where none is tangential . here and satisfy and .in a neighborhood of and , the functions and satisfy the implicit function theorem .in particular , each one of the identities and define a function there .actually , these are and given in ( [ e3 ] ) and ( [ e4 ] ) , respectively .their derivative at with is calculated as follows by using these equalities , we can state the following proposition . * proposition 1 . * _ the equilibrium is not a saddle point .meanwhile , the equilibrium is a saddle point for all the parameter values ._ a proof of this proposition and some remarks can be found in appendix a. as we have already shown , system ( [ e2 ] ) has four equilibrium points .these are illustrated in figure 4 .the origin is a saddle point with the horizontal and vertical axis as its unstable and stable manifolds . and are , respectively , a node and a saddle for parameter values after the bifurcation , and is a stable node .the stable manifold of the saddle point is a separatrix dividing the phase space in two disjoint regions : the set of initial conditions going to , and the complement with points going to .moreover , our numerical solution shows the existence of an homoclinic trajectory starting and ending in the saddle point .thus , we have a bistable system . ) . from left to right: is a stable node , is a saddle and is another stable node . the heteroclinic trajectory joining the saddle point to the stable node is easily identified .the stable manifold is a separatrix between the basin of attraction of and .,width=453,height=359 ] the bistability of system ( [ e2 ] ) has an interesting ecological interpretation :the coexistence of the interacting species occurs whenever the initial population densities are located in the region above the saddle point unstable manifold . 
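a minimal numerical illustration of the bistability discussed above , using scipy and the right - hand side reconstructed earlier from the jacobian ( see the hedged reconstruction of system ( [ e2 ] ) ) . the parameter values and initial conditions are placeholders , not the authors' , since the original values were lost in extraction .

```python
import numpy as np
from scipy.integrate import solve_ivp

def role_reversal_rhs(t, state, b, c, e, f, k):
    # the interaction term x*(k - x)*y changes sign at the threshold density
    # x = k, switching which of the two species benefits from the encounters
    x, y = state
    dx = b * x * (1 - x) - c * x * (k - x) * y
    dy = -e * y * (1 + y) + f * x * (k - x) * y
    return [dx, dy]

# two trajectories started on opposite sides of a putative separatrix;
# replace the placeholder parameters with the values from the original paper
pars = (1.0, 2.0, 0.2, 2.5, 0.7)        # (b, c, e, f, k) -- illustrative only
trajectories = [solve_ivp(role_reversal_rhs, (0.0, 300.0), [0.5, y0],
                          args=pars, rtol=1e-8) for y0 in (0.4, 0.02)]
```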
in this case, both populations evolve towards the attractor .on the other hand , if the initial population densities are below the separatrix , the population densities evolve towards the equilibria implying the non - coexistence of the species and , invariably , the species with population density vanishes .the heteroclinic trajectory of system ( [ e2 ] ) connecting the saddle ( ) with the node ( or focus , depending on the set of parameters ) , in addition to the coexistence of the species , also tells us that this occurs by the transition from one equilibrium to another as time increases .to describe more accurately our role - reversal system , we extended our model of system ( [ e2 ] ) to incorporate the spatial variation of the population densities . here ,if we denote by and the population density of the whelks and lobsters at the point at time , the resulting model is : where the subscript in and denotes the partial derivative with respect the time , and is the laplacian operator . here , , correspond to the diffusivity of the species with density and , i.e. that of whelks and lobsters , respectively .it is worth noting that the original variables have been rescaled , but still denote population densities .we then proceeded to construct numerical solutions of the system ( 10 ) in three different domains : a circle with radius 2.2 length units ( lu ) , an annulus defined by concentric circles of radii 2.2 lu and 1 lu , and a square with side length of 4.6 lu .all domains were constructed to depict similar distances between malgas island and marcus island ( roughly 4 km ) . in the first one , the annular domain, we try to mimic the island habitat of whelks and lobsters as a concentric domain .the other two domains are used to confirm the pattern formation characteristic of excitable media , and to reject any biases from the shape of the boundaries . to obtain numerical solutions of all spatial cases, we used the finite element method with adaptive time - stepping , and assumed zero - flux boundary conditions .accordingly , we discretized all spatial domains by means of delaunay triangulations , until a maximal side length of 0.17 was obtained .the latter defines the approximation error of the numerical scheme .we attempted to describe two entirely different situations by using a single set of kinetic parameters : that of malgas island , where both species co - exist , and marcus island , where whelks soar and lobsters become extinct .the only difference between these two cases was the initial conditions used .aside , one could intuitively assume whelks motion to be very slow , or even negligible in comparison to that of lobsters .however , it is worth considering how slow , and whether fluid motion could aftect this speed . while there is no data specific to _ jasus lalandii _ and _ burnupena papyracea _ in islands of the saldanha bay , data of similar species can be found in the literature .for instance , a related rock - lobster species , _ jasus edwardii _ has been found to move at a rate of 5 - 7 km / day .in contrast , whelks within the superfamily _ buccinoidea _ have been found to move towards food at rates between 50 and 220 meters / day ( see and ) .importantly , predation by whelks remains seemingly unaffected by variations in water flow . 
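before the specific diffusivities and initial conditions are given below , a minimal finite - difference sketch of the spatial model on a square with zero - flux boundaries . the paper itself uses an adaptive finite - element solver on circular , annular and square domains , so this is only a qualitative stand - in with placeholder step sizes and abstract reaction terms f and g .

```python
import numpy as np

def simulate_rd(f, g, Du, Dv, u0, v0, dx=0.05, dt=1e-3, steps=20000):
    # explicit euler in time, 5-point laplacian in space, zero-flux (neumann)
    # boundaries implemented by edge padding; u = whelks, v = lobsters
    u, v = u0.astype(float).copy(), v0.astype(float).copy()

    def lap(a):
        p = np.pad(a, 1, mode='edge')
        return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * a) / dx**2

    for _ in range(steps):
        u, v = (u + dt * (Du * lap(u) + f(u, v)),
                v + dt * (Dv * lap(v) + g(u, v)))
    return u, v
```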
by putting these findings together ,we argue a reasonable model need not incorporate influences from shallow water currents , and would assume whelks to move toward ` bait ' at a speed roughly one order of magnitude smaller than that of lobsters .thus , we opted for a two - dimensional habitat , and one order of magnitude difference between the non - dimensional isotropic diffusion rates ( and ) .aside , our choice of reaction parameters was : , , , , and .regarding initial conditions , we adopted the following scenarios , representing the different scenarios of weighted biomass : 1 .malgas island : the initial density of whelks at each element was drawn from a uniform distribution 0.1 * u(0.25 , 0.05 ) , and that of lobsters from u(0.25 , 0.05 ) .marcus island : the initial density of whelks at each element was drawn from a uniform distribution u(0.25 , 0.05 ) , and that of lobsters from 0.1 * u(0.25 , 0.05 ) .results are shown in figure 5 , corresponding to averaged densities of whelks and lobsters in the three different spatial domains , respectively .simulations in an annular domain can be found in the supplementary material .interestingly , changes in density are usually accompanied with wave - like spatial transitions in each species density .examples of this spatial transient patterns can be found in figures 6 and 7 , for annular and rectangular domains in malgas island and marcus island , respectively .we have modeled a well documented case of role - reversal in a predator - prey interaction . our model pretends to capture the essential ecological factors within the study of barkai and mcquaid , who did an extraordinary field work and meticulously reported this striking role - reversal phenomenon happening between whelks and lobsters in the saldanha bay . the analysis of our model and corresponding numerical solutions clearly predict the coexistence of both populations and the switching of roles between the once denoted predators and preys . here , the coexistence scenario corresponds to the case when lobsters predate upon whelks , and role - reversal corresponds to the case when whelks drive the population of lobsters to extinction , as observed by barkai and mcquaid in the field .moreover , by introducing spatial variables and letting both populations diffuse within a spatial domain , we obtain patterns that are characteristic of excitable media . of particular interest is the upper row of figure 6 , where self - sustained waves travel in the annular region .the latter is not entirely surprising , as the ordinary differential equation model in which the spatial case was based shows bistability .nevertheless , our findings are novel in that , to the best of our knowledge , there are no reports of ecological interactions behaving as excitable media .pm was supported by unam - in107414 funding , and wishes to thank oist hospitality during last stages of this work .tml was supported by oist funding .* proof . *first we prove the second part of our proposition .at we have implying that and have the same sign at but with and , then and . 
by using the above calculations we obtain and then , the determinant of the jacobian matrix of the system ( [ e2 ] ) at for the proof of the first part of the proposition we follow a similar sign analysis as we did previously , by considering that , and that at the inequality holds where both derivatives are positive .thus , given that and the inequalities given that its discriminant[multiblock footnote omitted ] , ( [ e11 ] ) is a quadratic equation of hyperbolic type . in orderthe see more details of such quadratic , we calculate its gradient .this is the zero vector at the point \left[\frac{\partial^2 trj}{\partial y^2}\right]-\left\{\frac{\partial^2 trj}{\partial y\partial x}\right\}^2=-4c^2<0.\ ] ] therefore is a saddle point of the surface ( [ e10 ] ) .the value of $ ] at is s. s. powers , j. n. kittinger , ( 2002 ) , hydrodynamic mediation of predator - prey interactions : differential patterns of prey susceptibility and predator success explained by variation in water flow .journal of experimental marine biology and ecology , 85 : 245 - 257 .
predator-prey relationships are among the most studied interactions in population ecology. however, little attention has been paid to the possibility of role exchange between the species once identified as predator and prey, despite firm field evidence of such phenomena in nature. in this paper, we build a model capable of reproducing the main phenomenological features of one reported predator-prey role-reversal system, and present results for both the homogeneous and the spatially explicit cases. we find that, depending on the choice of parameters, our role-reversal dynamical system exhibits excitable-like behaviour, generating waves of species concentrations that propagate through space.
starting from the first telescopic sunspot observations by david and johannes fabricius , galileo galilei , thomas harriot and christoph scheiner , the 400-year sunspot record is one of the longest directly recorded scientific data series , and forms the basis for numerous studies in a wide range of research such as , e.g. , solar and stellar physics , solar - terrestrial relations , geophysics , and climatology . during the 400-year interval, sunspots depict a great deal of variability from the extremely quiet period of the maunder minimum to the very active modern time .the sunspot numbers also form a benchmark data series , upon which virtually all modern models of long - term solar dynamo evolution , either theoretical or ( semi)empirical , are based .accordingly it is important to review the reliability of this series , especially since it contains essential uncertainties in the earlier part .the first sunspot number series was introduced by rudolf wolf who observed sunspots from 1848 until 1893 , and constructed the monthly sunspot numbers since 1749 using archival records and proxy data .sunspot activity is dominated by the 11-year cyclicity , and the cycles are numbered in wolf s series to start with cycle # 1 in 1755 . when constructing his sunspot series wolf interpolated over periods of sparse or missing sunspot observations using geomagnetic proxy data , thus losing the actual detailed temporal evolution of sunspots .sunspot observations were particularly sparse in the 1790 s , during solar cycle # 4 which became the longest solar cycle in wolf s reconstruction with an abnormally long declining phase ( see fig .[ fig : wsn]a ) .the quality of wolf s sunspot series during that period has been questioned since long .based on independent auroral observations , it was proposed by elias loomis already in 1870 that one small solar cycle may have been completely lost in wolf s sunspot reconstruction in the 1790 s , being hidden inside the interpolated , exceptionally long declining phase of solar cycle # 4 .this extraordinary idea was not accepted at that time .a century later , possible errors in wolf s compilation for the late 18th century have been emphasized again based on detailed studies of wolf s sunspot series .recently , a more extensive and consistent sunspot number series ( fig .[ fig : wsn]b ) , the group sunspot numbers ( gsn ) , was introduced by , which increases temporal resolution and allows to evaluate the statistical uncertainty of sunspot numbers .we note that the gsn series is based on a more extensive database than wolf s series and explicitly includes all the data collected by wolf .however , it still depicts large data gaps in 17921794 ( this interval was interpolated in wolf s series ) . based on a detailed study of the gsn series, revived loomis idea by showing that the lost cycle ( a new small cycle started in 1793 , which was lost in the conventional wolf sunspot series ) agrees with both the gsn data ( fig .[ fig : wsn]b ) and indirect solar proxies ( aurorae ) and does not contradict with the cosmogenic isotope data .the existence of the lost cycle has been disputed by based on data of cosmogenic isotope and sunspot numbers . 
however , as argued by , the lost cycle hypothesis does not contradict with sunspots or cosmogenic isotopes and is supported by aurorae observations .using time series analysis of sparse sunspot counts or sunspot proxies , it is hardly possible to finally verify the existence of the lost solar cycle .therefore , the presence of the lost cycle has so far remained as an unresolved issue . herewe analyze newly restored original solar drawings of the late 18th century to ultimately resolve the old mystery and to finally confirm the existence of the lost cycle .most of wolf s sunspot numbers in 1749 - 1796 were constructed from observations by the german amateur astronomer johann staudacher who not only counted sunspots but also drew solar images in the second half of the 18th century ( see an example in fig . [fig : staud ] ) .however , only sunspot counts have so far been used in the sunspot series , but the spatial distribution of spots in these drawings has not been analyzed earlier .the first analysis of this data , which covers the lost cycle period in 1790 s , has been made only recently using staudacher s original drawings .additionally , a few original solar disc drawings made by the irish astronomer james archibald hamilton and his assistant since 1795 have been recently found in the archive of the armagh observatory .after the digitization and processing of these two sets of original drawings , the location of individual sunspots on the solar disc in the late 18th century has been determined .this makes it possible to construct the sunspot butterfly diagram for solar cycles # 3 and 4 ( fig .[ fig : wsn]c ) , which allows us to study the existence of the lost cycle more reliably than based on sunspot counts only . despite the good quality of original drawings, there is an uncertainty in determining the actual latitude for some sunspots ( see for details ) .this is related to the limited information on the solar equator in these drawings .the drawings which are mirrored images of the actual solar disc as observed from earth , can not be analyzed by an automatic prodecure adding the heliographic grid .therefore , special efforts have been made to determine the solar equator and to place the grid of true solar coordinates for each drawing ( see fig .[ fig : staud ] ) . depending on the information available for each drawing, the uncertainty in defining the solar equator , , ranges from almost 0 up to a maximum of 15 .the latitude error of a sunspot , identified to appear at latitude , can be defined as where is the angular uncertainty of the solar equator in the respective drawing , and is the angular distance between the spot and the solar disc center .accordingly , the final uncertainty can range from 0 ( precise definition of the equator or central location of the spot ) up to 15 .we take the uncertain spot location into account when constructing the semiannually averaged butterfly diagram as follows .let us illustrate the diagram construction for the second half - year ( jul - dec ) of 1793 ( fig .[ fig : dist ] ) . during this periodthere were only two daily drawings by staudacher with the total of 8 sunspots : two spots on august 6th , which were located close to the limb near the equator , and six spots on november 3rd , located near the disc center at higher latitudes .the uncertainty in definition of the equator was large ( ) for both drawings . because of the near - limb location ( large ) of the first two spots , the error of latitude definition ( eq . 
[ eq : delta ] ) is quite large .the high - latitude spots of the second drawing are more precisely determined because of the central location of the spots . the latitudinal occurrence of these eight spots and their uncertainties are shown in fig .[ fig : dist ] as stars with error bars .the true position of a spot is within the latitudinal band , where is regarded as an observational error and as the formal center of the latitudinal band .accordingly , when constructing the butterfly diagram , we spread the occurrence of each spot within this latitudinal band with equal probability ( the use of other distribution does not affect the result ) .finally , the density of the latitudinal distribution of spots during the analyzed period is computed as shown by the histogram in fig .[ fig : dist ] .this density is the average number of sunspots occurring per half - year per 2 latitudinal bin .each vertical column in the final butterfly diagram shown in fig .[ fig : wsn]c is in fact such a histogram for the corresponding half - year . typically , the sunspots of a new cycle appear at rather high latitudes of about 2030 .this takes place around the solar cycle minimum .later , as the new cycle evolves , the sunspot emergence zone slowly moves towards the solar equator .this recurrent `` butterfly''-like pattern of sunspot occurrence is known as the _ sprer law _ and is related to the action of the solar dynamo ( see , e.g. * ? ? ?it is important that the systematic appearance of sunspots at high latitudes unambiguously indicates the beginning of a new cycle and thus may clearly distinguish between the cycles .one can see from the reconstructed butterfly diagram ( fig .[ fig : wsn]c ) that the sunspots in 17931796 appeared dominantly at high latitudes , clearly higher than the previous sunspots that belong to the late declining phase of the ending solar cycle # 4 .thus , a new `` butterfly '' wing starts in late 1793 , indicating the beginning of the lost cycle .since sunspot observations are quite sparse during that period , we have performed a thorough statistical test as follows .the location information of sunspot occurrence on the original drawings during 17931796 ( summarized in table [ tab1 ] ) allows us to test the existence of the lost cycle .the observed sunspot latitudes were binned into three categories : low ( ) , mid- ( ) and high latitudes ( ) , as summarized in column 2 of table [ tab2 ] .we use all available data on latitude distribution of sunspots since 1874 covering solar cycles 12 through 23 ( the combined royal greenwich observatory ( 18741981 ) and usaf / noaa ( 19812007 ) sunspot data set : http://solarscience.msfc.nasa.gov/greenwch.shtml ) as the reference data set .we tested first if the observed latitude distribution of sunspots ( three daily observations with low - latitude spots , one with mid - latitude and three with only high - latitude spots , see table [ tab2 ] ) is consistent with a late declining phase ( d - scenario , i.e. the period 17931796 corresponds to the extended declining phase of cycle # 4 ) or with the early ascending phase ( a - scenario , i.e. 
, the period 17931796 corresponds to the ascending phase of the lost cycle ) .we have selected two subsets from the reference data set : d - subset corresponding to the declining phase which covers three last years of solar cycles 12 through 23 and includes in total 11235 days when 33803 sunspot regions were observed ; and a - subset corresponding to the early ascending phase which covers 3 first years of solar cycles 13 through 23 and includes 10433 days when 47096 regions were observed .first we analyzed the probability to observe sunspot activity of each category on a randomly chosen day .for example , we found in the d - subset 4290 days when sunspots were observed at low latitudes below 8 .this gives the probability ( see first line , column 3 in table [ tab2 ] ) to observe such a pattern on a random day in the late decline phase of a cycle .similar probabilities for the other categories in table [ tab2 ] have been computed in the same way . next we tested whether the observed low - latitude spot occurrence ( three out of seven daily observations ) corresponds to declining / ascending phase scenario . the corresponding probability to observe events ( low - latitude spots ) during trials ( observational days )is given as where is the probability to observe the event at a single trial , and is the number of possible combinations .we assume here that the results of individual trials are independent on each other , which is justified by the long separation between observational days .thus , the probability to observed three low - latitude spots during seven random days is and 0.07 for d- and a - hypotheses , respectively .the corresponding probabilities are given in the first row , columns 56 of table [ tab2 ] .the occurrence of three days with low latitude activity is quite probable for both declining and ascending phases .thus , this criterion can not distinguish between the two cases .the observed mid - latitude spot occurrence ( one out of seven daily observations ) is also consistent with both d- and a - scenarios .the corresponding confidence levels ( 0.06 and 0.22 , respectively , see the second row , columns 56 of table [ tab2 ] ) do not allow to select between the two hypotheses .next we tested the observed high - latitude spot occurrence ( three out of seven observations ) in the d / a - scenarios ( the corresponding probabilities are given in the third row of table [ tab2 ] ) . the occurrence of three days with high - latitude activity is highly improbable during a late declining phase ( d - scenario ) .thus , the hypothesis of the extended cycle # 4 is rejected at the level of .the a - scenario is well consistent ( confidence 0.26 ) with the data .thus , the observed high - latitude sunspot occurrence clearly confirms the existence of the lost cycle .we also noticed that sunspots tend to appear in northern hemisphere ( 13 out of 16 observed sunspots appeared in the northern hemisphere ) . despite the rather small number of observations ,the statistical significance of asymmetry is quite good ( confidence level 99% ) , i.e. it can be obtained by chance with the probability of only 0.01 , in a purely symmetric distribution .nevertheless , more data are needed to clearly evaluate the asymmetry .thus , a statistical test of the sunspot occurrence during 17931796 confirms that : * the sunspot occurrence in 17931796 contradicts with a typical latitudinal pattern in the late declining phase of a normal solar cycle ( at the significance level of ) . 
*the sunspot occurrence in 17931796 is consistent with a typical ascending phase of the solar cycle , confirming the start of the lost solar cycle .we note that it has been shown earlier , using the group sunspot number , that the sunspot number distribution during 17921793 was statistically similar to that in the minimum years of a normal solar cycle , but significantly different from that in the declining phase . *the observed asymmetric occurrence of sunspots during the lost cycle is statistically significant ( at the significance level of 0.01 ) .therefore , the sunspot butterfly diagram ( fig .[ fig : wsn]c ) unambiguously proves the existence of the lost cycle in the late 18th century , verifying the earlier evidence based on sunspot numbers and aurorae borealis .an additional cycle in the 1790 s changes cycle numbering before the dalton minimum , thus verifying the validity of the _ gnevyshev - ohl _ rule of sunspot cycle pairing and the related 22-year periodicity in sunspot activity throughout the whole 400-year interval . another important consequence of the lost cycle is that , instead of one abnormally long cycle # 4 ( min - to - min length .5 years according to gsn ) there are two shorter cycles of about 9 and 7 years ( see fig . [fig : wsn]d ) .note also that some physical dynamo models even predict the existence of cycles of small amplitude and short duration near a grand minimum .the cycle # 4 ( 17841799 in gsn ) with its abnormally long duration dominates empirical studies of relations , e.g. , between cycle length and amplitude .replacing an abnormally long cycle # 4 by one fairly typical and one small short cycle changes empirical relations based on cycle length statistics .this will affect , e.g. , predictions of future solar activity by statistical or dynamo - based models , and some important solar - terrestrial relations .the lost cycle starting in 1793 depicts notable hemispheric asymmetry with most sunspots of the new cycle occurring in the northern solar hemisphere ( fig .[ fig : wsn]c ) .this asymmetry is statistically significant at the confidence level of 99% . a similar , highly asymmetric sunspot distribution existed during the maunder minimum of sunspot activity in the second half of the 17th century .however , the sunspots during the maunder minimum occurred preferably in the southern solar hemisphere , i.e. , opposite to the asymmetry of the lost cycle .this shows that the asymmetry is not constant , contrary to some earlier models involving the fossil solar magnetic field .interestingly , this change in hemispheric asymmetry between the maunder and dalton minimum is in agreement with an earlier , independent observation , based on long - term geomagnetic activity , that the north - south asymmetry oscillates at the period of about 200 - 250 years .concluding , the newly recovered spatial distribution of sunspots of the late 18th century conclusively confirms the existence of a new solar cycle in 17931800 , which has been lost under the preceding , abnormally long cycle compiled by rudolf wolf when interpolating over the sparse sunspot observations of the late 1790 s .this letter brings the attention of the scientific community to the need of revising the sunspot series in the 18th century and the solar cycle statistics .this emphasizes the need to search for new , yet unrecovered , solar data to restore details of solar activity evolution in the past ( e.g. , * ? ? 
? ) . the new cycle revises the long-held sunspot number series, restoring its cyclic evolution in the 18th century and modifying the statistics of all solar-cycle-related parameters. the northern dominance of sunspot activity during the lost cycle suggests that hemispheric asymmetry is typical during grand minima of solar activity, and gives independent support for a systematic, century-scale oscillating pattern of solar hemispheric asymmetry. these results have immediate practical and theoretical consequences, e.g., for predicting future solar activity and understanding the action of the solar dynamo. we are grateful to dr. john butler from armagh observatory for his help with finding the old notes of hamilton's data. support from the academy of finland and the finnish academy of sciences and letters (väisälä foundation) is acknowledged.
because of the lack of reliable sunspot observations, the quality of the sunspot number series is poor in the late 18th century, leading to the abnormally long solar cycle (1784-1799) before the dalton minimum. using the newly recovered solar drawings by the 18th-19th century observers staudacher and hamilton, we construct the solar butterfly diagram, i.e. the latitudinal distribution of sunspots, in the 1790s. the sudden, systematic occurrence of sunspots at high solar latitudes in 1793-1796 unambiguously shows that a new cycle started in 1793, which was lost in the traditional wolf sunspot series. this finally confirms the existence of the lost cycle that had been proposed earlier, thus resolving an old mystery. this letter draws the attention of the scientific community to the need to revise the sunspot series in the 18th century. the presence of a new short, asymmetric cycle implies changes and constraints for sunspot cycle statistics, solar activity predictions, and solar dynamo theories, as well as for solar-terrestrial relations.
coordinated networks of mobile robots are already in use for environmental monitoring and warehouse logistics . in the near future, autonomous robotic teams will revolutionize transportation of passengers and goods , search and rescue operations , and other applications .these tasks share a common feature : the robots are asked to provide service over a space .one question which arises is : when a group of robots is waiting for a task request to come in , how can they best position themselves to be ready to respond ?the distributed _ environment partitioning problem _ for robotic networks consists of designing individual control and communication laws such that the team divides a large space into regions .typically , partitioning is done so as to optimize a cost function which measures the quality of service provided over all of the regions . _ coverage control _ additionally optimizes the positioning of robots inside a region as shown in fig. [ fig : cover_example ] .this paper describes a distributed partitioning and coverage control algorithm for a network of robots to minimize the expected distance between the closest robot and spatially distributed events which will appear at discrete points in a non - convex environment .optimality is defined with reference to a relevant `` multicenter '' cost function . as with all multirobot coordination applications ,the challenge comes from reducing the communication requirements : the proposed algorithm requires only short - range gossip " communication , i.e. , asynchronous and unreliable communication between nearby robots .territory partitioning and coverage control have applications in many fields . in cyber - physical systems , applications include automated environmental monitoring , fetching and delivery , construction , and other vehicle routing scenarios . more generally , coverage of discrete sets is also closely related to the literature on data clustering and -means , as well as the facility location or -center problem . partitioning of graphs is its own field of research , see for a survey .territory partitioning through local interactions is also studied for animal groups , see for example .a broad discussion of algorithms for partitioning and coverage control in robotic networks is presented in which builds on the classic work of lloyd on optimal quantizer selection through centering and partitioning . "the lloyd approach was first adapted for distributed coverage control in . since this beginning ,similar algorithms have been applied to non - convex environments , unknown density functions , equitable partitioning , and construction of truss - like objects .there are also multi - agent partitioning algorithms built on market principles or auctions , see for a survey .while lloyd iterative optimization algorithms are popular and work well in simulation , they require synchronous and reliable communication among neighboring robots . as robots with adjacent regions may be arbitrarily far apart , these communication requirements are burdensome and unrealistic for deployed robotic networks . 
in response to this issue , in the authors have shown how a group of robotic agents can optimize the partition of a convex bounded set using a lloyd algorithm with gossip communication .a lloyd algorithm with gossip communication has also been applied to optimizing partitions of non - convex environments in , the key idea being to transform the coverage problem in euclidean space into a coverage problem on a graph with geodesic distances .distributed lloyd methods are built around separate partitioning and centering steps , and they are attractive because there are known ways to characterize their equilibrium sets ( the so - called centroidal voronoi partitions ) and prove convergence .unfortunately , even for very simple environments ( both continuous and discrete ) the set of centroidal voronoi partitions may contain several sub - optimal configurations .we are thus interested in studying ( discrete ) gossip coverage algorithms for two reasons : ( 1 ) they apply to more realistic robot network models featuring very limited communication in large non - convex environments , and ( 2 ) they are more flexible than typical lloyd algorithms meaning they can avoid poor suboptimal configurations and improve performance .there are three main contributions in this paper .first , we present a discrete partitioning and coverage optimization algorithm for mobile robots with unreliable , asynchronous , and short - range communication .our algorithm has two components : a _ motion protocol _ which drives the robots to meet their neighbors , and a _ pairwise partitioning rule _ to update territories when two robots meet .the partitioning rule optimizes coverage of a set of points connected by edges to form a graph .the flexibility of graphs allows the algorithm to operate in non - convex , non - polygonal environments with holes .our graph partition optimization approach can also be applied to non - planar problems , existing transportation or logistics networks , or more general data sets .second , we provide an analysis of both the convergence properties and computational requirements of the algorithm . by studying a dynamical system of partitions of the graph s vertices , we prove that almost surely the algorithm converges to a pairwise - optimal partition in finite time .the set of pairwise - optimal partitions is shown to be a proper subset of the well - studied set of centroidal voronoi partitions .we further describe how our pairwise partitioning rule can be implemented to run in anytime and how the computational requirements of the algorithm can scale up for large domains and large teams .third , we detail experimental results from our implementation of the algorithm in the player / stage robot control system .we present a simulation of 30 robots providing coverage of a portion of a college campus to demonstrate that our algorithm can handle large robot teams , and a hardware - in - the - loop experiment conducted in our lab which incorporates sensor noise and uncertainty in robot position . 
through numerical analysiswe also show how our new approach to partitioning represents a significant performance improvement over both common lloyd - type methods and the recent results in .the present work differs from the gossip lloyd method in three respects .first , while focuses on territory partitioning in a convex continuous domain , here we operate on a graph which allows our approach to consider geodesic distances , work in non - convex environments , and maintain connected territories .second , instead of a pairwise lloyd - like update , we use an iterative optimal two - partitioning approach which yields better final solutions .third , we also present a motion protocol to produce the sporadic pairwise communications required for our gossip algorithm and characterize the computational complexity of our proposal .preliminary versions of this paper appeared in and .compared to these , the new content here includes : ( 1 ) a motion protocol ; ( 2 ) a simplified and improved pairwise partitioning rule ; ( 3 ) proofs of the convergence results ; and ( 4 ) a description of our implementation and a hardware - in - the - loop experiment .in section [ sec : prelim ] we review and adapt coverage and geometric concepts ( e.g. , centroids , voronoi partitions ) to a discrete environment like a graph .we formally describe our robot network model and the discrete partitioning problem in section [ sec : algorithm ] , and then state our coverage algorithm and its properties .section [ sec : convergence ] contains proofs of the main convergence results . in section [ sec : results ] we detail our implementation of the algorithm and present experiments and comparative analysis .some conclusions are given in section [ sec : conclusion ] . in our notation, denotes the set of non - negative real numbers and the set of non - negative integers .given a set , denotes the number of elements in .given sets , their difference is . a set - valued map , denoted by , associates to an element of a subset of .we are given a team of robots tasked with providing coverage of a finite set of points in a non - convex and non - polygonal environment . in this sectionwe translate concepts used in coverage of continuous environments to graphs .let be a finite set of points in a continuous environment .these points represent locations of interest , and are assumed to be connected by weighted edges .let be an ( undirected ) weighted graph with edge set and weight map ; we let be the weight of edge .we assume that is connected and think of the edge weights as distances between locations .[ rem : discretization ] for the examples in this paper we will use a coarse _ occupancy grid map _ as a representation of a continuous environment . in an occupancy grid , each grid cell is either free space or an obstacle ( occupied ) . to form a weighted graph , each free cell becomes a vertex and free cells are connected with edges if they border each other in the grid .edge weights are the distances between the centers of the cells , i.e. , the grid resolution .there are many other methods to discretize a space , including triangularization and other approaches from computational geometry , which could also be used . 
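to make the construction in the remark above concrete, the following minimal python sketch builds the weighted graph from a coarse occupancy grid exactly as described: every free cell becomes a vertex, free cells that border each other in the grid are joined by an edge, and each edge weight equals the grid resolution. the function names, the grid encoding (0 for free space, 1 for an obstacle) and the adjacency representation are illustrative assumptions of this sketch, not part of the authors' implementation.

```python
def grid_to_graph(grid, resolution=1.0):
    """grid[r][c] == 0 marks free space, 1 an obstacle.
    Returns (vertices, edges): edges maps each vertex to {neighbor: weight}."""
    rows, cols = len(grid), len(grid[0])
    vertices = [(r, c) for r in range(rows) for c in range(cols) if grid[r][c] == 0]
    edges = {v: {} for v in vertices}
    for (r, c) in vertices:
        for nb in ((r + 1, c), (r, c + 1)):      # 4-connectivity: bordering free cells
            if nb in edges:
                edges[(r, c)][nb] = resolution   # edge weight = distance between cell centers
                edges[nb][(r, c)] = resolution
    return vertices, edges

# small example: a 3x4 grid with a single obstacle cell
grid = [[0, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
V, E = grid_to_graph(grid, resolution=0.3)
```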
in any weighted graph is a standard notion of distance between vertices defined as follows .a _ path _ in is an ordered sequence of vertices such that any consecutive pair of vertices is an edge of .the _ weight of a path _ is the sum of the weights of the edges in the path .given vertices and in , the _ distance _ between and , denoted , is the weight of the lowest weight path between them , or if there is no path .if is connected , then the distance between any two vertices in is finite . by convention , if .note that , for any .we will be partitioning into connected subsets or regions which will each be covered by an individual robot .to do so we need to define distances on induced subgraphs of .given , the _ subgraph induced by the restriction of to _ , denoted by , is the graph with vertex set equal to and edge set containing all weighted edges of where both vertices belong to . in other words ,we set .the induced subgraph is a weighted graph with a notion of distance between vertices : given , we write note that we define a _ connected subset of _ as a subset such that and is connected .we can then partition into connected subsets as follows .[ def : conpartitions ] given the graph we define a _ connected of _ as a collection of subsets of such that 1 . ; 2 . if ; 3 . for all ; and 4 . is connected for all .let to be the set of connected of . property ( ii ) implies that each element of belongs to just one , i.e. , each location in the environment is covered by just one robot .notice that each induces a connected subgraph in .in subsequent references to we will often mean , and in fact we refer to as the _ dominance subgraph _ or _ region _ of the -th robot at time . among the ways of partitioning ,there are some which are worth special attention . given a vector of distinct points ,the partition is said to be a _ voronoi partition of q generated by c _ if , for each and all , we have and , .note that the voronoi partition generated by is not unique since how to apportion tied vertices is unspecified . for our gossip algorithmswe need to introduce the notion of adjacent subgraphs .two distinct connected subgraphs , are said to be _ adjacent _ if there are two vertices , belonging , respectively , to and such that .observe that if and are adjacent then is connected .similarly , we say that robots and are adjacent or are neighbors if their subgraphs and are adjacent .accordingly , we introduce the following useful notion . for , we define the _ adjacency graph _ between regions of partition as , where if and are adjacent .note that is always connected since is .we define three coverage cost functions for graphs : , , and .let the _ weight function _ assign a relative weight to each element of .the _ one - center function _ gives the cost for a robot to cover a connected subset from a vertex with relative prioritization set by : a technical assumption is needed to solve the problem of minimizing : we assume from now on that a _ total order _ relation , , is defined on , i.e. , that . with this assumptionwe can deterministically pick a vertex in which minimizes as follows .[ def : centroid ] let be a totally ordered set , and let .we define the set of generalized centroids of as the set of vertices in which minimize , i.e. , further , we define the map as .we call the _ generalized centroid _ of . 
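the objects just defined - geodesic distances inside an induced subgraph, the one-center cost, and the generalized centroid with its deterministic tie-breaking - can be computed directly. the sketch below uses the same assumed adjacency representation as the previous example and dijkstra's algorithm for one-to-all distances (plain breadth-first search suffices when all edge weights are equal, as for an occupancy grid); the helper names, the uniform default weights, and the use of ordinary tuple ordering as the total order on vertices are our own illustrative choices.

```python
import heapq

def distances_in(A, edges, source):
    """One-to-all geodesic distances from `source`, restricted to the vertex subset A."""
    dist = {v: float('inf') for v in A}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale heap entry
        for v, w in edges[u].items():
            if v in dist and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def one_center_cost(A, edges, h, phi):
    """Weighted sum of distances from candidate center h to every vertex of A."""
    dist = distances_in(A, edges, h)
    return sum(dist[k] * phi.get(k, 1.0) for k in A)

def centroid(A, edges, phi):
    """Vertex of A minimizing the one-center cost; ties broken by the total order on vertices."""
    return min(sorted(A), key=lambda h: one_center_cost(A, edges, h, phi))
```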
in subsequent usewe drop the word generalized " for brevity .note that with this definition the centroid is well - defined , and also that the centroid of a region always belongs to the region . with a slight notational abuse ,we define as the map which associates to a partition the vector of the centroids of its elements .we define the _ multicenter function _ to measure the cost for robots to cover a connected -partition from the vertex set : we aim to minimize the performance function with respect to both the vertices and the partition .we can now state the coverage cost function we will be concerned with for the rest of this paper .let be defined by in the motivational scenario we are considering , each robot will periodically be asked to perform a task somewhere in its region with tasks appearing according to distribution .when idle , the robots would position themselves at the centroid of their region . by partitioning as to minimize , the robot team would minimize the expected distance between a task and the robot which will service it .we introduce two notions of optimal partitions : centroidal voronoi and pairwise - optimal .our discussion starts with the following simple result about the multicenter cost function .[ prop : optimal - for - hgeneric ] let and .if is a voronoi partition generated by and is such that , then the second inequality is strict if any . proposition [ prop : optimal - for - hgeneric ] implies the following necessary condition : if minimizes , then and must be a voronoi partition generated by .thus , has the following property as an immediate consequence of proposition [ prop : optimal - for - hgeneric ] : given , if is a voronoi partition generated by then this fact motivates the following definition . is a _centroidal voronoi partition _ of if there exists a such that is a voronoi partition generated by and .the set of _ pairwise - optimal partitions _ provides an alternative definition for the optimality of a partition : a partition is pairwise - optimal if , for every pair of adjacent regions , one can not find a better two - partition of the union of the two regions .this condition is formally stated as follows . is a _ pairwise - optimal partition _ if for every , the following proposition states that the set pairwise - optimal partitions is in fact a subset of the set of centroidal voronoi partitions .the proof is involved and is deferred to appendix [ sec : appendix_c ] .see fig .[ fig : voronoi ] for an example which demonstrates that the inclusion is strict .[ prop : optpair ] let be a _pairwise - optimal partition_. then is also a _ centroidal voronoi partition_. for a given environment , a pair made of a centroidal voronoi partition and the corresponding vector of centroids is locally optimal in the following sense : can not be reduced by changing either or independently .a pairwise - optimal partition achieves this property and adds that for every pair of neighboring robots , there does not exist a two - partition of with a lower coverage cost . 
in other words , positioning the robots at the centroids of a centroidal voronoi partition ( locally ) minimizes the expected distance between a task appearing randomly in according to relative weights and the robot who owns the vertex where the task appears .positioning at the centroids of a pairwise - optimal partition improves performance by reducing the number of sub - optimal solutions which the team might converge to .we aim to partition among robotic agents using only asynchronous , unreliable , short - range communication . in section [ sec : model ]we describe the computation , motion , and communication capabilities required of the team of robots , and in section [ sec : problemformulation ] we formally state the problem we are addressing . in section [ sec : algorithm ] we propose our solution , the _ discrete gossip coverage algorithm _ , and in [ sec : illustrative ] we provide an illustration . in sections[ sec : convprop ] and [ sec : computation ] we state the algorithm s convergence and complexity properties .our discrete gossip coverage algorithm requires a team of robotic agents where each agent has the following basic computation and motion capabilities : 1 .agent knows its unique identifier ; 2 .agent has a processor with the ability to store and perform operations on subgraphs of ; and 3 .agent can determine which vertex in it occupies and can move at speed along the edges of to any other vertex in .the localization requirement in ( c3 ) is actually quite loose .localization is only used for navigation and not for updating partitions , thus limited duration localization errors are not a problem . the robotic agents are assumed to be able to communicate with each other according to the _ range - limited gossip communication model _ which is described as follows : 1 . given a communication range , when any two agents reside for some positive duration at a distance , they communicate at the sample times of a poisson process with intensity .recall that an homogeneous poisson process is a widely - used stochastic model for events which occur randomly and independently in time , where the expected number of events in a period is .[ rem : comm ] ( 1 ) this communication capability is the minimum necessary for our algorithm , any additional capability can only reduce the time required for convergence .for example , it would be acceptable to have intensity depend upon the pairwise robot distance in such a way that for .( 2)we use distances in the graph to model limited range communication .these graph distances are assumed to approximate geodesic distances in the underlying continuous environment and thus path distances for a diffracting wave or moving robot .assume that , for all , each agent maintains in memory a connected subset of environment .our goal is to design a distributed algorithm that iteratively updates the partition while solving the following optimization problem : subject to the constraints imposed by the robot network model with range - limited gossip communication from section [ sec : model ] . in the design of an algorithm for the minimization problemthere are two main questions which must be addressed .first , given the limited communication capabilities in ( c4 ) , how should the robots move inside to guarantee frequent enough meetings between pairs of robots ?second , when two robots are communicating , what information should they exchange and how should they update their regions ? 
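as a small illustration of the range-limited gossip communication model described above, the sketch below draws the next communication instant for each pair of robots currently within range: since each pair communicates at the sample times of a poisson process, its waiting time is exponential, and the first pair to communicate is the one with the smallest draw. the function name, the data layout, and the numerical intensity are assumptions of this sketch.

```python
import random

def next_meeting(pairs_in_range, intensity, rng=random):
    """pairs_in_range: iterable of (i, j) robot pairs currently closer than the
    communication range.  Returns (pair, delay) for the earliest communication,
    drawing each pair's waiting time from an exponential with rate `intensity`."""
    delays = {pair: rng.expovariate(intensity) for pair in pairs_in_range}
    pair = min(delays, key=delays.get)
    return pair, delays[pair]

# example: three candidate pairs, on average one event every 2 s per pair
print(next_meeting([(1, 2), (2, 3), (1, 4)], intensity=0.5))
```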
in this section we introduce the _ discrete gossip coverage algorithm _ which, following these two questions, consists of two components: 1. the _ random destination & wait motion protocol _ ; and 2. the _ pairwise partitioning rule _ . the concurrent implementation of the random destination & wait motion protocol and the pairwise partitioning rule determines the evolution of the positions and dominance subgraphs of the agents, as we now formally describe. we start with the random destination & wait motion protocol.

* random destination & wait motion protocol *

each agent determines its motion by repeatedly performing the following actions: agent samples a _ destination vertex _ from a uniform distribution over its dominance subgraph; agent moves to vertex through the shortest path in connecting the vertex it currently occupies and ; and agent waits at for a duration .

if agent is moving from one vertex to another we say that agent is in the _ moving _ state, while if agent is waiting at some vertex we say that it is in the _ waiting _ state. the motion protocol is designed to ensure frequent enough communication between pairs of robots. in general, any motion protocol can be used which meets this requirement, so could select from the boundary of or use some heuristic non-uniform distribution over . if any two agents and reside in two vertices at a graphical distance smaller than for some positive duration, then at the sample times of the corresponding communication poisson process the two agents exchange sufficient information to update their respective dominance subgraphs and via the pairwise partitioning rule.

* pairwise partitioning rule *

assume that at time , agent and agent communicate. without loss of generality assume that . let and denote the current dominance subgraphs of and , respectively. moreover, let denote the time instant just after . then, agents and perform the following tasks: agent transmits to agent and vice versa; initialize , , ; compute and an ordered list of all pairs of vertices in ; compute the sets ; * if * * then * .

some remarks are now in order. (1) the pairwise partitioning rule is designed to find a minimum cost two-partition of . more formally, if list and sets and for are defined as in the pairwise partitioning rule, then and are an optimal two-partition of . (2) while the loop in steps 4-7 must run to completion to guarantee that and are an optimal two-partition of , the loop is designed to return an intermediate sub-optimal result if need be. if and change, then will decrease, and this is enough to ensure eventual convergence. (3) we make a simplifying assumption in the pairwise partitioning rule that, once two agents communicate, the application of the partitioning rule is instantaneous. we discuss the actual computation time required in section [sec:computation] and some implementation details in section [sec:results]. (4) notice that simply assigning to and to can cause the robots to "switch sides" in . while convergence is guaranteed regardless, switching may be undesirable in some applications. in that case, any smart matching of and to and may be inserted. (5) agents who are not adjacent may communicate, but the partitioning rule will not change their regions.
indeed , in this case and will not change from and .some possible modifications and extensions to the algorithm are worth mentioning . in casethe robots have heterogeneous dynamics , line 5 can be modified to consider per - robot travel times between vertices .for example , could be replaced by the expected time for robot to travel from to while would consider robot . herewe focus on partitioning territory , but this algorithm can easily be combined with methods to provide a service in as in .the agents could split their time between moving to meet their neighbors and update territory , and performing requested tasks in their region .the simulation in fig .[ fig : sim_four ] shows four robots partitioning a square environment with obstacles where the free space is represented by a grid . in the initial partition shown in the left panel ,the robot in the top right controls most of the environment while the robot in the bottom left controls very little .the robots then move according to the random destination & wait motion protocol , and communicate according to range - limited gossip communication model with ( four edges in the graph ) .the first pairwise territory exchange is shown in the second panel , where the bottom left robot claims some territory from the robot on the top left .a later exchange between the two robots on the top is shown in the next two panels .notice that the cyan robot in the top right gives away the vertex it currently occupies .in such a scenario , we direct the robot to follow the shortest path in to its updated territory before continuing on to a random destination .after 9 pairwise territory exchanges , the robots reach the pairwise - optimal partition shown at right in fig . [fig : sim_four ] .the expected distance between a random vertex and the closest robot decreases from down to .the strength of the discrete gossip coverage algorithm is the possibility of enforcing that a partition will converge to a pairwise - optimal partition through pairwise territory exchange . in theorem[ th : main ] we summarize this convergence property , with proofs given in section [ sec : convergence ] .[ th : main ] consider a network of robotic agents endowed with computation and motion capacities ( c1 ) , ( c2 ) , ( c3 ) , and communication capacities ( c4 ) .assume the agents implement the _ discrete gossip coverage algorithm _ consisting of the concurrent implementation of the _ random destination & wait motion protocol _ and the _ pairwise partitioning rule_. then , a. [ item : well - posedness ] the partition remains connected and is described by and b. [ item : convergence ] converges almost surely in finite time to a pairwise - optimal partition . by definition ,a pairwise - optimal partition is optimal in that can not be improved by changing only two regions in the partition . for simplicitywe assume uniform robot speeds , communication processes , and waiting times .an extension to non - uniform processes would be straightforward . in this subsectionwe explore the computational requirements of the discrete gossip coverage algorithm , and make some comments on implementation .cost function is the sum of the distances between and all other vertices in .this computation of one - to - all distances is the core computation of the algorithm . 
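to make the role of these one-to-all distance computations explicit, the following sketch mirrors the logic of the pairwise partitioning rule for two adjacent regions: it forms the union of the two dominance subgraphs, sweeps over candidate pairs of vertices, splits the union between each candidate pair by geodesic distance inside the union, and keeps the cheapest two-partition found so far, so the sweep can be truncated at any time as in remark (2) above. it reuses the hypothetical helpers distances_in, one_center_cost and centroid from the earlier sketches; the iteration order and tie-breaking are simplifications of ours, not the authors' exact rule.

```python
from itertools import combinations

def pairwise_partitioning_rule(P_i, P_j, edges, phi):
    """Return a (no worse) two-partition of the union of two adjacent regions."""
    U = P_i | P_j
    best_cost = (one_center_cost(P_i, edges, centroid(P_i, edges, phi), phi) +
                 one_center_cost(P_j, edges, centroid(P_j, edges, phi), phi))
    best_split = (P_i, P_j)
    for a, b in combinations(sorted(U), 2):          # candidate pair of centers in U
        da = distances_in(U, edges, a)               # one-to-all distances: the core computation
        db = distances_in(U, edges, b)
        Pa = {v for v in U if da[v] <= db[v]}        # split U by geodesic distance, ties to a
        Pb = U - Pa
        cost = one_center_cost(Pa, edges, a, phi) + one_center_cost(Pb, edges, b, phi)
        if cost < best_cost:
            best_cost, best_split = cost, (Pa, Pb)
    return best_split
```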
for most graphs of interestthe total number of edges is proportional to , so we will state bounds on this computation in terms of .computing one - to - all distances requires one of the following : * if all edge weights in are the same ( e.g. , for a graph from an occupancy grid ) , a breadth - first search approach can be used which requires in time and memory ; * otherwise , dijkstra s algorithm must be used which requires in time and in memory .let be the time to compute one - to - all distances in , then computing requires in time .[ prop : computation ] the motion protocol requires in memory , and in computation time .the partitioning rule requires in communication bandwidth between robots and , in memory , and can run in any time .we first prove the claims for the motion protocol .step 2 is the only non - trivial step and requires finding a shortest path in , which is equivalent to computing one - to - all distances from the robot s current vertex .hence , it requires in time and in memory .we now prove the claims for the partitioning rule . in step 1 , robots and transmit their subgraphs to each other , which requires in communication bandwidth . for step 3 , the robots determine , which requires in memory to store .step 4 is the start of a loop which executes times , affecting the time complexity of steps 5 , 6 and 7 .step 5 requires two computations of one - to - all distances in which each take .step 6 involves four computations of over different subsets of , however those for and can be stored from previous computation . since and are strict subsets of , step 5 takes longer than step 6 .step 7 is trivial , as is step 8 .the total time complexity of the loop is thus .however , the loop in steps 4 - 7 can be truncated after any number of iterations . while it must run to completion to guarantee that and are an optimal two - partition of , the loop is designed to return an intermediate sub - optimal result if need be .if and change , then will decrease .our convergence result will hold provided that all elements of are eventually checked if and do not change .thus , the partitioning rule can run in any time with each iteration requiring .all of the computation and communication requirements in proposition [ prop : computation ] are independent of the number of robots and scale with the size of a robot s partition , meaning the discrete gossip coverage algorithm can easily scale up for large teams of robots in large environments .this section is devoted to proving the two statements in theorem [ th : main ] .the proof that the pairwise partitioning rule maps a connected -partition into a connected -partition is straightforward .the proof of convergence is more involved and is based on the application of lemma [ lem : finite - lasalle ] in appendix [ sec : appendix_a ] to the discrete gossip coverage algorithm .lemma [ lem : finite - lasalle ] establishes strong convergence properties for a particular class of set valued maps ( set - valued maps are briefly reviewed in appendix [ sec : appendix_a ] ) .we start by proving that the pairwise partitioning rule is well - posed in the sense that it maintains a connected partition .to prove the statement we need to show that satisfies points ( i ) through ( iv ) of definition [ def : conpartitions ] . 
from the definition of the pairwise partitioning rule , we have that and .moreover , since and , it follows that and .these observations imply the validity of points ( i ) , ( ii ) , and ( iii ) for .finally , we must show that and are connected , i.e. , also satisfies point ( iv ) .to do so we show that , given , any shortest path in connecting to completely belongs to .we proceed by contradiction .let denote a shortest path in connecting to and let us assume that there exists such that .for to be in means that .this implies that this is a contradiction for .similar considerations hold for .the rest of this section is dedicated to proving convergence .our first step is to show that the evolution determined by the discrete gossip coverage algorithm can be seen as a set - valued map . to this end , for any pair of robots , , we define the map by where and .if at time the pair and no other pair of robots perform an iteration of the pairwise partitioning rule , then the dynamical system on the space of partitions is described by we define the set - valued map as observe that can then be rewritten as .the next two propositions state facts whose validity is ensured by lemma [ lemma : onmotionprotocol ] of appendix [ sec : appendix_b ] which states a key property of the random destination & wait motion protocol .[ prop : tk ] consider robots implementing the discrete gossip coverage algorithm .then , there almost surely exists an increasing sequence of time instants such that for some .the proof follows directly from lemma [ lemma : onmotionprotocol ] which implies that the time between two consecutive pairwise communications is almost surely finite .the existence of time sequence allows us to to express the evolution generate by the discrete gossip coverage algorithm as a discrete time process .let and ,then where is defined as in . given , let denote the information which completely characterizes the state of discrete gossip coverage algorithm just after the -th iteration of the partitioning rule , i.e. , at time .specifically , contains the information related to the partition , the positions of the robots at , and whether each robot is in the _ waiting _ or _ moving _ state at .the following result characterizes the probability that , given , the -th iteration of the partitioning rule is governed by any of the maps , .[ prop : pi ] consider a team of robots with capacities ( c1 ) , ( c2 ) , ( c3 ) , and ( c4 ) implementing the discrete gossip coverage algorithm .then , there exists a real number , such that , for any and \geq \bar{\pi}.\ ] ] assume that at time one pair of robots communicates .given a pair , we must find a lower bound for the probability that is the communicating pair .since all the poisson communication processes have the same intensity , the distribution of the chance of communication is uniform over the pairs which are `` able to communicate , '' i.e. , closer than to each other .thus , we must only show that has a positive probability of being able to communicate at time , which is equivalent to showing that is able to communicate for a positive fraction of time with positive probability .the proof of lemma [ lemma : onmotionprotocol ] implies that with probability at least any pair in is able to communicate for a fraction of time not smaller than where and are defined in the proof of lemma [ lemma : onmotionprotocol ] .hence the result follows . 
the property in proposition [ prop : pi ]can also be formulated as follows .let be the stochastic process such that is the communicating pair at time .then , the sequence of pairs of robots performing the partitioning rule at time instants can be seen as a realization of the process , which satisfies \geq \bar{\pi}\ ] ] for all .next we show that the cost function decreases whenever the application of from changes the territory partition .this fact is a key ingredient to apply lemma [ lem : finite - lasalle ] .[ lemma : tdecr ] let and let . if , then . without loss of generality assume that is the pair executing the pairwise partitioning rule .then according to the definition of the pairwise partitioning rule we have that if , , then from which the statement follows .we now complete the proof of the main result , theorem [ th : main ] .note that the algorithm evolves in a finite space of partitions , and by theorem [ th : main ] statement ( [ item : well - posedness ] ) , the set is strongly positively invariant .this fact implies that assumption ( i ) of lemma [ lem : finite - lasalle ] is satisfied . from lemma [ lemma : tdecr ]it follows that assumption ( ii ) is also satisfied , with playing the role of the function .finally , the property in is equivalent to the property of _ persistent random switches _ stated in assumption ( iii ) of lemma [ lem : finite - lasalle ] , for the special case .hence , we are in the position to apply lemma [ lem : finite - lasalle ] and conclude convergence in finite - time to an element of the intersection of the equilibria of the maps , which by definition is the set of the pairwise - optimal partitions .to demonstrate the utility and study practical issues of the discrete gossip coverage algorithm , we implemented it using the open - source player / stage robot control system and the boost graph library ( bgl ) .all results presented here were generated using player 2.1.1 , stage 2.1.1 , and bgl 1.34.1 . to compute distances in uniform edge weight graphs we extended the bgl breadth - first search routine with a distance recorder event visitor . to evaluate the performance of our gossip coverage algorithm with larger teams , we tested 30 simulated robots partitioning a map representing a portion of campus at the university of california at santa barbara .as shown in fig .[ fig : large_sim ] , the robots are tasked with providing coverage of the open space around some of the buildings on campus , a space which includes a couple open quads , some narrower passages between buildings , and a few dead - end spurs . for this large environmentthe simulated robots are on a side and can move at .each territory cell is . in this simulationwe handle communication and partitioning as follows .the communication range is set to ( 10 edges in the graph ) with .the robots wait at their destination vertices for .this value for was chosen so that on average one quarter of the robots are waiting at any moment .lower values of mean the robots are moving more of the time and as a result more frequently miss connections , while for higher the robots spend more time stationary which also reduces the rate of convergence . with the goal of improving communication , we implemented a minor modification to the motion protocol : each robot picks its random destination from the cells forming the open boundary is the set of vertices in which are adjacent to at least one vertex owned by another agent . ] of its territory . 
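a minimal sketch of this modified destination choice, under the same assumed territory-as-vertex-set representation as the earlier sketches: the robot samples its next destination uniformly from the open boundary of its region, i.e. the vertices it owns that are adjacent in the graph to a vertex owned by another robot, falling back to the whole region if the boundary is empty. names are illustrative, not the authors' code.

```python
import random

def open_boundary(P_i, edges):
    """Vertices of P_i with at least one graph neighbor outside P_i."""
    return [v for v in P_i if any(nb not in P_i for nb in edges[v])]

def pick_destination(P_i, edges, rng=random):
    boundary = open_boundary(P_i, edges)
    return rng.choice(boundary if boundary else sorted(P_i))
```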
in our implementation, the full partitioning loop may take seconds for the largest initial territories in fig. [fig:large_sim]. we chose to stop the loop after a quarter second for this simulation to verify the anytime computation claim. the 30 robots start clustered in the center of the map between engineering ii and broida hall, and an initial voronoi partition is generated from these starting positions. this initial partition is shown on the left in fig. [fig:large_sim] with the robots positioned at the centroids of their starting regions. the initial partition has a cost of . the team spends about 27 minutes moving and communicating according to the discrete gossip coverage algorithm before settling on the final partition on the right of fig. [fig:large_sim]. the coverage cost of the final equilibrium improved by to . visually, the final partition is also dramatically more uniform than the initial condition. this result demonstrates that the algorithm is effective for large teams in large non-convex environments. fig. [fig:large_sim_cost] shows the evolution of during the simulation. the largest cost improvements happen early, when the robots that own the large territories on the left and right of the map communicate with neighbors with much smaller territories. these big territory changes then propagate through the network as the robots meet and are pushed and pulled towards a lower-cost partition. we conducted an experiment to test the algorithm using three physical robots in our lab, augmented by six simulated robots in a synthetic environment extending beyond the lab. our lab space is on a side and is represented by the upper left portion of the territory maps in fig. [fig:experiment]. the territory graph loops around a center island of desks. we extended the lab space through three connections into a simulated environment around the lab, producing a environment. the map of the environment was specified with a bitmap which we overlaid with a resolution occupancy grid representing the free territory for the robots to cover. the result is a lattice-like graph with all edge weights equal to . the resolution was chosen so that our physical robots would fit easily inside a cell. additional details of our implementation are as follows. we use erratic mobile robots from videre design, as shown in fig. [fig:robot]. the vehicle platform has a roughly square footprint, with two differential drive wheels and a single rear caster. each robot carries an onboard computer with a 1.8 ghz core 2 duo processor, 1 gb of memory, and 802.11g wireless communication.
for navigation and localization , each robotis equipped with a hokuyo urg-04lx laser rangefinder .the rangefinder scans points over at with a range of meters .our mixed physical and virtual robot experiments are run from a central computer which is attached to a wireless router so it can communicate with the physical robots .the central computer creates a simulated world using stage which mirrors and extends the real space in which the physical robots operate .the central computer also simulates the virtual members of the robot team .these virtual robots are modeled off of our hardware : they are differential drive with the same geometry as the erratic platform and use simulated hokuyo urg-04lx rangefinders .we use the ` amcl ` driver in player which implements adaptive monte - carlo localization .the physical robots are provided with a map of our lab with a resolution and told their starting pose within the map .we set an initial pose standard deviation of in position and in orientation , and request localization updates using of the sensor s range measurements for each change of in position or in orientation reported by the robot s odometry system .we then use the most likely pose estimate output by ` amcl ` as the location of the robot .for simplicity and reduced computational demand , we allow the virtual robots access to perfect localization information .each robot continuously executes the random destination & wait motion protocol , with navigation handled by the ` snd ` driver in player which implements smooth nearness diagram navigation .for ` snd ` we set the robot radius parameter to , obstacle avoidance distance to , and maximum speeds to and .the ` snd ` driver is a local obstacle avoidance planner , so we feed it a series of waypoints every couple meters along paths found in .we consider a robot to have achieved its target location when it is within and it will then wait for . for the physical robots the motion protocol and navigation processes run on board , while there are separate threads for each virtual robot on the central computer .as the robots move , a central process monitors their positions and simulates the range - limited gossip communication model between both real and virtual robots .we set and .these parameters were chosen so that the robots would be likely to communicate when separated by at most four edges , but would also sometimes not connect despite being close .when this process determines two robots should communicate , it informs the robots who then perform the pairwise partitioning rule .our pairwise communication implementation is blocking : if robot is exchanging territory with , then it informs the match making process that it is unavailable until the exchange is complete .the results of our experiment with three physical robots and six simulated robots are shown in figs .[ fig : experiment ] and [ fig : exp_cost ] . the left column in fig .[ fig : experiment ] shows the starting positions of the team of robots , with the physical robots , labeled 1 , 2 , and 3 , lined up in a corner of the lab and the simulated robots arrayed around them .the starting positions are used to generate the initial voronoi partition of the environment .the physical robots own the orange , blue , and lime green territories in the upper left quadrant .we chose this initial configuration to have a high coverage cost , while ensuring that the physical robots will remain in the lab as the partition evolves . 
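the central match - making process described above can be organized as in the following sketch . this is a hedged illustration rather than the code used in the experiment : the euclidean range check , the per - tick connection probability , and all parameter names are assumptions , since the exact communication range and rate values are not recoverable from the text here .

```python
import itertools
import random

def euclidean(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def matchmaker_step(positions, busy, comm_range, p_connect, rng=random):
    """One tick of a central match-maker for range-limited gossip communication.

    positions  : dict robot -> (x, y) position
    busy       : set of robots currently performing a blocking territory exchange
    comm_range : range threshold (placeholder for the stripped parameter value)
    p_connect  : per-tick connection probability, so nearby robots sometimes
                 miss each other, as in the experiment
    Returns the pairs that are told to run the pairwise partitioning rule.
    """
    matched = []
    for i, j in itertools.combinations(sorted(positions), 2):
        if i in busy or j in busy:
            continue
        if euclidean(positions[i], positions[j]) > comm_range:
            continue
        if rng.random() < p_connect:
            busy.update([i, j])   # blocking: unavailable until the exchange completes
            matched.append((i, j))
    return matched
```

once a matched pair finishes exchanging territory , both robots would be removed from ` busy ` by the same process .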
in the middle column , robots1 and 2 have met along their shared border and are exchanging territory . in the territory map , the solid red line indicates 1 and 2 are communicating and their updated territories are drawn with solid orange and blue , respectively .the camera view confirms that the two robots have met on the near side of the center island of desks . the final partition at right in fig .[ fig : experiment ] is reached after minutes .all of the robots are positioned at the centroids of their final territories .the three physical robots have gone from a cluster in one corner of the lab to a more even spread around the space . .the total cost is shown above in black , while for each robot is shown below in the robot s color . ][ fig : exp_cost ] shows the evolution of the cost function as the experiment progresses , including the costs for each robot .as expected , the total cost never increases and the disparity of costs for the individual robots shrinks over time until settling at a pairwise - optimal partition . in this experimentthe hardware challenges of sensor noise , navigation , and uncertainty in position were efficiently handled by the ` amcl ` and ` snd ` drivers .the coverage algorithm assumed the role of a higher - level planner , taking in position data from ` amcl ` and directing ` snd ` .by far the most computationally demanding component was ` amcl ` , but the position hypotheses from ` amcl ` are actually unnecessary : our coverage algorithm only requires knowledge of the vertex a robot occupies .if a less intensive localization method is available , the algorithm could run on robots with significantly lower compute power . in this subsectionwe present a numerical comparison of the performance of the discrete gossip coverage algorithm and the following two lloyd - type algorithms .this method is from and , we describe it here for convenience . at each discrete timeinstant , each robot performs the following tasks : ( 1 ) transmits its position and receives the positions of all adjacent robots ; ( 2 ) computes its voronoi region based on the information received ; and ( 3 ) moves to .this method is from .it is a gossip algorithm , and so we have used the same communication model and the random destination & wait motion protocol to create meetings between robots . say robots and meet at time , then the pairwise lloyd partitioning rule works as follows : ( 1 ) robot transmits to and vice versa ; ( 2 ) both robots determine ; ( 3 ) robot sets to be its voronoi region of based on and , and does the equivalent . for both lloyd algorithms we use the same tie breaking rule when creating voronoi regions as is present in the pairwise partitioning rule :ties go to the robot with the lowest index .our first numerical result uses a monte carlo probability estimation method from to place probabilistic bounds on the performance of the two gossip algorithms .recall that the chernoff bound describes the minimum number of random samples required to reach a certain level of accuracy in a probability estimate from independent bernoulli tests . 
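as a concrete illustration of this bound , the snippet below evaluates the standard additive hoeffding / chernoff sample - size bound . the accuracy and confidence values shown are purely illustrative , and the constants of the exact bound used in the study may differ , so the printed sample sizes should not be read as reproducing the figure of 116 samples quoted below .

```python
import math

def chernoff_sample_size(accuracy, confidence):
    """Minimum number of i.i.d. Bernoulli samples so that the empirical probability
    is within `accuracy` of the true probability with the given confidence
    (additive Hoeffding/Chernoff bound)."""
    delta = 1.0 - confidence
    return math.ceil(math.log(2.0 / delta) / (2.0 * accuracy ** 2))

# illustrative values only -- not the parameters used in the paper
print(chernoff_sample_size(accuracy=0.10, confidence=0.95))  # 185
print(chernoff_sample_size(accuracy=0.15, confidence=0.99))  # 118
```

as the code shows , the required number of simulations grows quadratically as the accuracy requirement tightens but only logarithmically as the confidence requirement tightens .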
for an accuracy and confidence ,the number of samples is given by for and , at least 116 samples are required .figure [ fig : bad_start ] shows both the initial territory partition of the extended laboratory environment used and also a histogram of the final results for the following monte carlo test .the environment and robot motion models used are described in section [ sec : implementation ] .starting from the indicated initial condition , we ran 116 simulations of both gossip algorithms .the randomness in the test comes from the sequence of pairwise communications .these sequences were generated using : ( 1 ) the random destination & wait motion protocol with sampled uniformly from the open boundary of and ; and ( 2 ) the range - limited gossip communication model with and . the cost of the initial partition in fig .[ fig : bad_start ] is , while the best known partition for this environment has a cost of just under .the histogram in fig .[ fig : bad_start ] shows the final equilibrium costs for 116 simulations of the discrete gossip coverage algorithm ( black ) and the gossip lloyd algorithm ( gray ) .it also shows the final cost using the decentralized lloyd algorithm ( red dashed line ) , which is deterministic from a given initial condition .the histogram bins have a width of and start from . for the discrete gossip coverage algorithm , out of trials reach the bin containing the best known partition and the mean final cost is .the gossip lloyd algorithm reaches the lowest bin in only of trials and has a mean final cost of .the decentralized lloyd algorithm settles at .our new gossip algorithm requires an average of pairwise communications to reach an equilibrium , whereas gossip lloyd requires .based on these results , we can conclude with confidence that there is at least an probability that 9 robots executing the discrete gossip coverage algorithm starting from the initial partition shown in fig .[ fig : bad_start ] will reach a pairwise - optimal partition which has a cost within of the best known cost .we can further conclude with confidence that the gossip lloyd algorithm will settle more than above the best known cost at least of the time starting from this initial condition . comparing discrete gossip coverage algorithm ( black bars ) , gossip lloyd algorithm ( gray bars ) , anddecentralized lloyd algorithm ( red dashed line ) .for the gossip algorithms , 116 simulations were performed with different sequences of pairwise communications .the decentralized lloyd algorithm is deterministic given an initial condition so only one final cost is shown .the initial cost for each test is drawn with the green dashed line . ]figure [ fig : multi_compare ] compares final cost histograms for different initial conditions for the same environment and parameters as described above .each initial condition was created by selecting unique starting locations for the robots uniformly at random and using these locations to generate an initial voronoi partition . the initial cost for each testis shown with the green dashed line . 
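the random initial partitions used in these tests can be generated with a multi - source breadth - first search : each robot is assigned a distinct starting vertex chosen uniformly at random , and every vertex is then given to the robot whose start is closest in the graph , with ties going to the lowest robot index as in the pairwise partitioning rule . the python sketch below is an illustration under these assumptions , not the authors code .

```python
import random
from collections import deque

def random_voronoi_partition(adjacency, robots, rng=random):
    """Assign each vertex to the robot with the nearest randomly chosen start
    (multi-source BFS on a uniform-edge-weight graph)."""
    starts = dict(zip(sorted(robots), rng.sample(sorted(adjacency), len(robots))))
    owner, queue = {}, deque()
    # starts are enqueued in increasing robot index, so an equidistant vertex is
    # claimed by the lower-indexed robot, mirroring the tie-breaking rule above
    for robot in sorted(robots):
        owner[starts[robot]] = robot
        queue.append(starts[robot])
    while queue:
        v = queue.popleft()
        for u in adjacency[v]:
            if u not in owner:        # the first wave to arrive claims the vertex
                owner[u] = owner[v]
                queue.append(u)
    return starts, owner
```

the same breadth - first routine can also serve as the baseline voronoi computation inside a lloyd - type step when all edge weights are uniform .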
in 9 out of 10 teststhe discrete gossip coverage algorithm reaches the histogram bin with the best known partition in at least of trials .the two lloyd methods get stuck in sub - optimal centroidal voronoi partitions more than away from the best known partition in more than half the trials in 7 of 10 tests .we have presented a novel distributed partitioning and coverage control algorithm which requires only unreliable short - range communication between pairs of robots and works in non - convex environments . the classic lloyd approach to coverage optimization involves iteration of separate centering and voronoi partitioning steps . for gossip algorithms , however , this separation is unnecessary computationally and we have shown that improved performance can be achieved without it .our new discrete gossip coverage algorithm provably converges to a subset of the set of centroidal voronoi partitions which we labeled pairwise - optimal partitions . through numerical comparisons we demonstrated that this new subset of solutions avoids many of the local minima in whichlloyd - type algorithms can get stuck .our vision is that this partitioning and coverage algorithm will form the foundation of a distributed task servicing setup for teams of mobile robots .the robots would split their time between servicing tasks in their territory and moving to contact their neighbors and improve the coverage of the space .our convergence results only require sporadic improvements to the cost function , affording flexibility in robot behaviors and capacities , and offering the ability to handle heterogeneous robotic networks . in the bigger picture ,this paper demonstrates the potential of gossip communication in distributed coordination algorithms .there appear to be many other problems where this realistic and minimal communication model could be fruitfully applied .given a set , a set - valued map is a map which associates to an element a subset a set - valued map is non - empty if for all .given a non - empty set - valued map , an evolution of the dynamical system associated to is a sequence where for all a set is _ strongly positively invariant _ for if for all .[ lem : finite - lasalle ] let be a finite metric space . given a collection of maps , define the set - valued map by . given a stochastic process ,consider an evolution of satisfying assume that : 1 .there exists a set that is strongly positively invariant for ; 2 .there exists a function such that , for all and ; and 3 .there exist and such that , for all and , there exists such that \geq p. ] our goal is to lower bound the probability that and will communicate within the interval . to doso we construct _ one _ sequence of events of positive probability which enables such communication .consider the following situation : is in the _ moving _ state and needs time to reach its destination , whereas robot is in the _ waiting _ state at vertex and must wait there for time .we denote by ( resp . ) the time needed for ( resp . ) to travel from ( resp . ) to ( resp . ) . let be the event such that performs the following actions in without communicating with any robot : next , we lower bound the probability that event occurs . recall the definition of from sec . 
[sec : model ] .since a robot can have at most neighbors , the probability that ( i ) of happens is lower bounded by for ( ii ) , the probability that chooses is , which is lower bounded by .then , in order to spend at least at , must choose for consecutive times .finally , the probability that during this interval will not communicate with any robot other than is lower bounded by the probability that ( ii ) occurs is thus lower bounded by combining the bounds for ( i ) and ( ii ) , it follows that \geq \bigl(\tfrac{1}{{\left|q\right|}}\bigr)^{\lceil \frac{\delta}{\tau } \rceil } e^{-{{\lambda_{\textup{comm}}}}(\delta+\tau ) n}.\ ] ] the same lower bound holds for $ ] , meaning that &={\mathbb{p}}\left[e_{i}\right]\ , { \mathbb{p}}\left[e_{j}\right ] \geq \bigl(\tfrac{1}{{\left|q\right|}}\bigr)^{2 \lceil \frac{\delta}{\tau } \rceil } e^{-2 { { \lambda_{\textup{comm}}}}(\delta+\tau ) n}.\end{aligned}\ ] ] if event occurs , then robots and will be at adjacent vertices for an amount of time during the interval equal to since and are no more than , we can conclude that and will be within for at least . conditioned on occurring , the probability that and communicate in is lower bounded by .a suitable choice for from the statement of the lemma is thus it can be shown that this also constitutes a lower bound for the other possible combinations of initial states : robot is _ waiting _ and robot is _ moving _ ; robots and are both _ moving _ ; and robots and are both _waiting_. to create a contradiction , assume that is a pairwise - optimal partition but not a centroidal voronoi partition .in other words , there exist components and in and an element of one component , say , such that choose such that for all let be a shortest path in connecting to and let be the first element of the path starting from which is not in .let be such that . in the first case, we again have a contradiction using the same logic above with in place of . in the second case, we must further consider whether there exists a such that every vertex in is also in .if there is not such a path , then and we again have a contradiction as above . if there is such a path , then we can instead repeat this analysis using using in place of and considering the path formed by this and the vertices in after . since the next vertex playing the role of be closer to , we will eventually find a vertex which creates a contradiction .r. smith , j. das , h. heidarsson , a. pereira , f. arrichiello , i. cetnic , l. darjany , m .- e .garneau , m. howard , c. oberg , m. ragan , e. seubert , e. smith , b. stauffer , a. schnetzer , g. toro - farmer , d. caron , b. jones , and g. sukhatme , `` usc cinaps builds bridges , '' _ ieee robotics & automation magazine _ , vol .17 , no . 1 ,pp . 2030 , 2010 .s. yun , m. schwager , and d. rus , `` coordinating construction of truss structures using distributed equal - mass partitioning , '' in _ international symposium on robotics research _ , ( lucerne , switzerland ) , aug .2009 .l. c. a. pimenta , v. kumar , r. c. mesquita , and g. a. s. pereira , `` sensing and coverage for a network of heterogeneous robots , '' in _ ieee conf . on decision and control_ , ( cancn , mxico ) , pp . 39473952 , dec .2008 .r. cortez , h. tanner , and r. lumia , `` distributed robotic radiation mapping , '' in _ experimental robotics _( o. khatib , v. kumar , and g. pappas , eds . ) , vol .54 of _ springer tracts in advanced robotics _ , pp . 147156 , springer , 2009 .f. bullo , r. carli , and p. 
frasca , `` gossip coverage control for robotic networks : dynamical systems on the the space of partitions , '' _ siam journal on control and optimization _ , augavailable at http://motion.me.ucsb.edu/pdf/2008u-bcf.pdf .j. w. durham , r. carli , p. frasca , and f. bullo , `` discrete partitioning and coverage control with gossip communication , '' in _ asme dynamic systems and control conference _ , ( hollywood , ca , usa ) , pp . 225232 ,j. w. durham , r. carli , and f. bullo , `` pairwise optimal coverage control for robotic networks in discretized environments , '' in _ ieee conf . on decision and control_ , ( atlanta , ga , usa ) , pp . 72867291 , dec .2010 .b. gerkey , r. t. vaughan , and a. howard , `` the player / stage project : tools for multi - robot and distributed sensor systems , '' in _ int .conference on advanced robotics _ , ( coimbra , portugal ) , pp .317323 , june 2003 .
we propose distributed algorithms to automatically deploy a team of mobile robots to partition and provide coverage of a non - convex environment . to handle arbitrary non - convex environments , we represent them as graphs . our partitioning and coverage algorithm requires only short - range , unreliable pairwise `` gossip '' communication . the algorithm has two components : ( 1 ) a motion protocol to ensure that neighboring robots communicate at least sporadically , and ( 2 ) a pairwise partitioning rule to update territory ownership when two robots communicate . by studying an appropriate dynamical system on the space of partitions of the graph vertices , we prove that territory ownership converges to a pairwise - optimal partition in finite time . this new equilibrium set represents improved performance over common lloyd - type algorithms . additionally , we detail how our algorithm scales well for large teams in large environments and how the computation can run as an anytime algorithm with limited resources . finally , we report on large - scale simulations in complex environments and hardware experiments using the player / stage robot control system .
reachability for continuous and hybrid systems has been an important topic of research in the dynamics and control literature .numerous problems regarding safety of air traffic management systems , , flight control , , ground transportation systems , , etc . have been formulated in the framework of reachability theory . in most of these applicationsthe main aim was to design suitable controllers to steer or keep the state of the system in a `` safe '' part of the state space .the synthesis of such safe controllers for hybrid systems relies on the ability to solve target problems for the case where state constraints are also present .the sets that represent the solution to those problems are known as capture basins .one direct way of computing these sets was proposed in , , and was formulated in the context of viability theory . following the same approach , the authors of , formulated viability , invariance and pursuit - evasion gaming problems for hybrid systems and used non - smooth analysis tools to characterize their solutions .computational tools to support this approach have been already developed by .an alternative , indirect way of characterizing such problems is through the level sets of the value function of an appropriate optimal control problem . by using dynamic programming , for reachability / invariant / viability problems without state constraints ,the value function can be characterized as the viscosity solution to a first order partial differential equation in the standard hamilton - jacobi form , , and .numerical algorithms based on level set methods have been developed by , , have been coded in efficient computational tools by , and can be directly applied to reachability computations . in the case where state constraints are also present , this target hitting problem is the solution to a reach - avoid problem in the sense of .the authors of , developed a reach - avoid computation , whose value function was characterized as a solution to a pair of coupled variational inequalities . in , , the authors proposed another characterization , which involved only one hamilton - jacobi type partial differential equation together with an inequality constraint .these methods are hampered from a numerical computation point of view by the fact that the hamiltonian of the system is in general discontinuous . in , a scheme based on ellipsoidal techniques so as to compute reachable sets for control systems with constraints on the statewas proposed .this approach was restricted to the class of linear systems . in ,this approach was extended to a list of interesting target problems with state constraints .the calculation of a solution to the equations proposed in , is in general not easy apart from the case of linear systems , where duality techniques of convex analysis can be used . 
in this paperwe propose a new framework of characterizing reach - avoid sets of nonlinear control systems as the solution to an optimal control problem .we consider the case where we have competing inputs and hence adopt the gaming formulation proposed in .we first restrict our attention to a specific reach - avoid scenario , where the objective of the control input is to make the states of the system hit the target at the end of our time horizon and without violating the state constraints , while the disturbance input tries to steer the trajectories of the system away from the target .we then generalize our approach to the case where the controller aims to steer the system towards the target not necessarily at the terminal , but at some time within the specified time horizon .both problems could be treated as pursuit - evasion games , and for a worst case setting we define a value function similar to and prove that it is the unique continuous viscosity solution to a quasi - variational inequality of a form similar to , .the advantage of this approach is that the properties of the value function and the hamiltonian ( both of them are continuous ) enable us using existing tools to compute the solution of the problem numerically . to illustrate our approach, we consider a reach - avoid problem that arises in the area of air traffic management , in particular the problem of collision avoidance in the presence of 4d constraints , called target windows .target windows ( tw ) are spatial and temporal constraints and form the basis of the cats research project , whose aim is to increase punctuality and predictability during the flight . in reachability approach of encoding tw constraints was proposed .we adopt this framework and consider a multi - agent setting , where each aircraft should respect its tw constraints while avoiding conflict with other aircraft in the presence of wind . since both control and disturbance inputs ( in our case the wind ) are present , this problem can be treated as a pursuit - evasion differential game with state constraints , which are determined dynamically by performing conflict detection . in sectionii we pose two reach - avoid problems for continuous systems with competing inputs and state constraints , and formulate them in the optimal control framework .section iii provides the characterization of the value functions of these problems as the viscosity solution to two variational inequalities . in sectioniv we present an application of this approach to a two aircraft collision avoidance scenario with realistic data .finally , in section v we provide some concluding remarks and directions for future work .consider the continuous time control system , and an arbitrary time horizon . with , , , and .let } ] denote the set of lebesgue measurable functions from the interval ] for all ] and } ] this solution will be denoted as let be a bound such that for all and } ] such that for all ] , then (\tau)=\gamma[\hat{v}](\tau) ] .we then use } ] . 
to answer this question on needs to determine whether there exists a choice of } ] , the trajectory satisfies and for all ] } } \sup_{v(\cdot ) \in\mathcal{v}_{[t , t ] } } \max \{l(\phi(t , t , x , u(\cdot),v(\cdot ) ) ) , \max_{\tau \in [ t , t ] } h(\phi(\tau , t , x , u(\cdot),v(\cdot ) ) ) \}.\ ] ] can be thought of as the value function of a differential game , where is trying to minimize , whereas is trying to maximize the maximum between the value attained by at the end of the time horizon and the maximum value attained by along the state trajectory over the horizon ] .equivalently , there exists a strategy } ] , } h(\phi(\tau , t , x,\gamma(\cdot),v(\cdot ) ) ) \ } \leq 0 ] such that for all } ] . or in other words, there exists a } ] , and for all ~ \phi(\tau , t , x,\gamma(\cdot),v(\cdot ) ) \notin a ] , and without passing through the set until they hit .in other words , we would like to determine the set } , ~\forall v(\cdot ) \in \mathcal{v}_{[t , t ] } , \\ & \exists \tau_1 \in [ t , t],~ ( \phi(\tau_1,t , x,\gamma(\cdot),v(\cdot ) ) \in r ) \land ( \forall \tau_2 \in [ t,\tau_1 ] , ~ \phi(\tau_2,t , x,\gamma(\cdot),v(\cdot ) ) \notin a ) \}. \nonumber\end{aligned}\ ] ] based on , define the augmented input as \in \textit{u } \times [ 0,1] ] the pseudo - time variable \rightarrow [ t , t] ] . based on the analysis of , equation implies that the trajectory of the augmented system visits only the subset of the states visited by the trajectory of the original system in the time interval ] , . _ the proof of this proposition is given in appendix a.we first establish the consequences of the principle of optimality for . +* lemma 1 . * _ for all ] : _ } } \sup_{v(\cdot)\in \mathcal{v}_{[t , t+\alpha ] } } \big [ \max \big \ { \max_{\tau \in [ t , t+\alpha ] } h(\phi(\tau , t , x , u(\cdot ) ) ) , v(\phi(t+\alpha , t , x , u(\cdot)),t+\alpha ) \big \ } \big].\ ] ] _ moreover , for all , ~ v(x , t ) \geq h(x) ] : _ the proof of this lemma is given in appendix b. we now introduce the hamiltonian , defined by * lemma 3 . * _ there exists a constant such that for all , and all _ : the proof of this fact is straightforward ( see , for details ) .we are now in a position to state and prove the following theorem , which is the main result of this section . + * theorem 1 . *_ is the unique viscosity solution over ] by (\tau ) = g(v(\tau)) ] .it is easy to see that is now non - anticipative and hence } ] and all such that , (\cdot),v(\cdot ) ) <-\theta < 0.\ ] ] by continuity , there exists such that (\cdot),v(\cdot))-x_0|^2 + ( t - t_0)^2 < \delta_2 ] . 
therefore , for all } ] be such that } h(\phi(\tau , t_0,x_0,\gamma(\cdot),v(\cdot))).\ ] ] * case 1.1 : * if ] such that } h(\phi(\tau , t_0,x_0,\gamma(\cdot),v(\cdot ) ) ) , v(\phi(\tau_0,t_0,x_0,\gamma(\cdot),v(\cdot)),\tau_0 ) \big \ } + \epsilon,\ ] ] and set .since for all we have that } h(\phi(\tau , t_0,x_0,\gamma(\cdot),\hat{v}(\cdot ) ) ) = h(\phi(\tau_0,t_0,x_0,\gamma(\cdot),\hat{v}(\cdot ) ) ) \leq v(\phi(\tau_0,t_0,x_0,\gamma(\cdot),\hat{v}(\cdot)),\tau_0).\ ] ] hence since holds for all } ] since by lemma 1 } } \max \big \ { & \max_{\tau \in [ t_0,t_0+\delta_3 ] } h(\phi(\tau , t_0,x_0,\gamma(\cdot),v(\cdot ) ) ) , \\&v(\phi(t_0 + \delta_3,t_0,x_0,\gamma(\cdot),v(\cdot)),t_0 + \delta_3 ) \big \ } , \end{aligned}\ ] ] then if } } v(\phi(t_0 + \delta_3,t_0,x_0,\gamma(\cdot),v(\cdot)),t_0 + \delta_3 ) , \ ] ] we can choose } ] such that } h(\phi(\tau , t_0,x_0,\gamma(\cdot),\hat{v}(\cdot ) ) ) + \epsilon,\ ] ] or equivalently , since .based on our initial hypothesis that , there exists a such that .if we take we establish a contradiction . +* consider an arbitrary and a smooth such that has a local minimum at .then , there exists such that for all with we would like to show that since it suffices to show that .this implies that for all there exists a such that for the sake of contradiction assume that there exists such that for all there exists such that since is smooth , there exists such that for all with hence , following , for and any } ] .therefore , for all } ] such that } } \big [ \max \big \ { \max_{\tau \in [ t_0,t_0+\delta_3 ] } h(\phi(\tau ,t_0,x_0,\hat{\gamma}(\cdot),v(\cdot ) ) ) , \\&v(\phi(t_0 + \delta_3,t_0,x_0,\hat{\gamma}(\cdot),v(\cdot)),t_0 + \delta_3 ) \big \ }\big ] - \frac{\delta_3 \theta}{2 } \\ & \geq \max \big \ { \max_{\tau \in [ t_0,t_0+\delta_3 ] } h(\phi(\tau , t_0,x_0,\hat{\gamma}(\cdot),v(\cdot ) ) ) , v(\phi(t_0 + \delta_3,t_0,x_0,\hat{\gamma}(\cdot),v(\cdot)),t_0 + \delta_3 ) \big \ } - \frac{\delta_3 \theta}{2 } \\ & \geq v(\phi(t_0 + \delta_3,t_0,x_0,\hat{\gamma}(\cdot),v(\cdot)),t_0 + \delta_3 ) - \frac{\delta_3 \theta}{2}.\end{aligned}\ ] ] the last statement establishes a contradiction , and completes the proof .consider the value function defined in the previous section .the following theorem proposes that is the unique viscosity solution of another variational inequality . + * theorem 2 . * _ \rightarrow \mathbb{r} ] , where . the angle that each segment forms with the axis and the flight path angle that it forms with the horizontal planeare shown in fig .1 . the discrete state stores the segment of the flight plan that the aircraft is currently in , and for we can define where is the length of the projection of its segment on the horizontal plane . assume perfect lateral tracking and set denote the the part of each segment covered on the horizontal plane ( see fig .1 ) . based on our assumption that each aircraft has constant heading angle at each segment , its and coordinates can be computed by : [ fig : subfigureexample ] to approximate accurately the physical model , the flight path angle , which is the angle that the aircraft forms with the horizontal plane , is a control input fixed according to the angle that the segment forms with the horizontal plane .if the aircraft will be cruising at that segment , whereas if it is positive or negative it will be climbing ) ] respectively .the speed of each aircraft apart from its type depends also on the altitude . 
at each flight levelthere is a nominal airspeed that aircraft tend to track , giving rise to a function .the dependence on the flight path angle indicates the discrete mode i.e. cruise , climb , descent , that an aircraft could be . for our simulations, we have assumed that at every level the airspeed could vary within of the nominal one ; this is restricted by the control input ] . * discrete states . * initial states . *control inputs ^t\in \left[-1,1\right ] \times \left[-\overline{\gamma_j}^p,\overline{\gamma_j}^p\right] ] * vector field . * domain .* guards . *reset map .apart from , the other two continuous states are the altitude , and the time .the last equation was included in order to track the tw temporal constraints . as stated above, is the flight path angle and is the wind speed , which acts as a bounded disturbance with , and for our simulations we used .since the flight path angle does not exceed 5 , for simplification we can assume that and .target windows represent spatial and temporal constraints that aircraft should respect .following , we assume that tw are located on the surface area between two air traffic control sectors .based on the structure of those sectors , the tw are either adjacent or superimposed ( fig .3 ) , and for simplicity we assume that there is a way point centered in the middle of each tw . our objective is to compute the set of all initial states at time for which there exists a non - anticipative control strategy , that despite the wind input can lead the aircraft inside the tw constraint set at least once within its time and space window , while avoiding conflict with the other aircraft . in air traffic, conflict refers to the loss of minimum separation between two aircraft .each aircraft is surrounded by a protected zone , which is generally thought of as a cylinder of radius 5nmi and height 2000 ft centered at the aircraft .if this zone is violated by another aircraft , then a conflict is said to have occurred . to achieve this goal, we adopt another simplification introduced in ; we eliminate time from the state equations , and perform a two - stage calculation .we define the spatial constraints of a tw centered at the way point as ) ] if the tw is superimposed .let also ] . but this set is the set , which was shown in section iii to be the zero sublevel set of , which is the solution to the following partial differential equation the terminal condition was chosen to be the signed distance to the set , and the avoid set is characterized by .this function represents the area where a conflict might occur , and it is computed online by performing conflict detection ( see appendix c ) . + * stage 2 : * compute the set of all states that start at time and for every wind can reach the set at time , while avoiding conflict with other aircraft . 
based on the analysis of section ii , this is the set , that can be computed by solving with terminal condition .the set is defined as whereas depends once again on the obstacle function .[ fig : subfigureexample ] the simulations for each aircraft are running in parallel , so at every instance , we have full knowledge of the backward reachable sets of each aircraft .based on that , algorithm 1 of appendix c describes the implemented steps for the reach - avoid computation .consider now the case where we have two aircraft each one with a tw , whose flight plans intersect , and they enter the same air traffic sector with a difference .4a depicts the two flight plans and fig .4b the projection of the flight plans on the horizontal plane .the target windows are centered at the last way point of each flight plan .the result of the two - stage backward reachability computation with tw as terminal sets is depicted in fig .the tubes at this figure include all the states that each aircraft could be , and reach its tw .we should also note that the tubes are the union of the corresponding sets .these sets at a specific time instance , would include all the states that could start at that time and reach the tw at the end of the horizon .5b is the projection of these tubes on the horizontal plane .as it was expected , the - projection coincides with the projection of the flight plans on the horizontal plane .this is reasonable , since in the hybrid model we assumed constant heading angle at each segment .moreover , based on the speed - altitude profiles , aircraft fly faster at higher levels , so at those altitudes there are more states that can reach the target .[ fig : subfigureexample ] we can repeat the previous computation , but now checking at every time if the sets , in the sense described before , satisfy the minimum separation standards .that way , the time and the points of each set where a conflict might occur , can be detected .the result of this calculation is illustrated in fig .the `` hole '' that is now around the intersection area of fig .5a represents the area where the two aircraft might be in conflict .[ fig : subfigureexample ] now that we managed to perform conflict detection , we are in a position to compute at every instance the obstacle function . since the conflict does not occur within the time interval of the tw , the set of the initial states that an aircraft could start and reach the set at time , while avoiding conflict with the other aircraft , should be computed .once the aircraft hits , it can also reach the tw within its time constraints . to obtain the solution to this reach - avoid problem, the variational inequality should be solved .if the conflict had occurred in ] .this in turn implies that there exists } ] or there exists ] .consider now the implications of .equation implies that there exists a } ] , and so also for , we can define (\cdot) ] such that and for all ~ \phi(\tau_2,t , x , u(\cdot),\hat{v}(\cdot ) ) \notin a ] .for we have that since , we showed before that , i.e. .so from we have that .since (\cdot) ] there exists ] . since we showed that for all , ~ \phi(\tau , t , x , u(\cdot),\hat{v}(\cdot ) ) \notin a ] if ] we have that .since in case 1.1 , was shown to be non - anticipative , we have a contradiction . + * part 2 . 
*next , we show that .consider such that and assume for the sake of contradiction that .then for all for all } ] such that for all ] such that .following the analysis of , consider that the strategy } ] , and choose the that corresponds to that strategy . in , was proven that the set of states visited by the augmented trajectory is a subset of the states visited by the original one .we therefore have that for all ] (\cdot),\hat{v}(\cdot ) ) \in a \longrightarrow \tilde{\phi}(\tau_2 , x , t,\tilde{\gamma}[\hat{v}](\cdot),\hat{v}(\cdot ) ) \in a.\ ] ] by we conclude that there exists a such that either for all ] (\cdot),\hat{v}(\cdot ) ) ) > \delta > 0.\ ] ] since , then for all there exists a non - anticipative strategy } ] .hence for all } ] and for all ] .for the last argument implies that (\cdot),\hat{v}(\cdot ) ) ) \leq \epsilon,\ ] ] and there exists ] such that } } \big [ \max \big \ { & \max_{\tau \in [ t , t+\alpha ] } h(\phi(\tau , t , x,\gamma_1(\cdot),v_1(\cdot ) ) ) , \\&v(\phi(t+\alpha , t , x,\gamma_1(\cdot),v_1(\cdot)),t+\alpha ) \big \ } \big ] - \epsilon , \nonumber\end{aligned}\ ] ] similarly , choose } ] we can define } ] such that for all and for all ] by (\tau)= \left\ { \begin{array}{rl } \gamma_1[v_1](\tau ) & \text{if } \tau \in [ t , t+\alpha)\\ \gamma_2[v_2](\tau ) & \text{if } \tau \in [ t+\alpha , t ] .\end{array } \right.\ ] ] it easy to see that } \rightarrow \mathcal{u}_{[t , t]} ] .+ hence , } } \sup_{v_2(\cdot)\in \mathcal{v}_{[t+\alpha , t ] } } \max \big \{ \max_{\tau \in [ t , t+\alpha ] } h(\phi(\tau , t , x,\gamma_1(\cdot),v_1(\cdot ) ) ) , \\&l(\phi(t , t+\alpha,\phi(t+\alpha , t , x,\gamma_1(\cdot),v_1(\cdot)),\gamma_2(\cdot),v_2(\cdot))),\\ & \max_{\tau \in [ t+\alpha , t ] } h(\phi(\tau , t+\alpha,\phi(t+\alpha , t , x,\gamma_1(\cdot),v_1(\cdot)),\gamma_2(\cdot),v_2(\cdot ) ) ) \big \ } - 2\epsilon\\ & \geq \sup_{v(\cdot)\in \mathcal{v}_{[t , t]}}\max \big \ { l(\phi(t , t , x,\gamma(\cdot),v(\cdot ) ) ) , \max_{\tau \in [ t , t ] } h(\phi(\tau , t , x,\gamma(\cdot),v(\cdot ) ) ) \big \ } - 2\epsilon \\ & \geq v(x , t)-2\epsilon.\end{aligned}\ ] ] therefore , .+ * case 2 : * .fix and choose now } ] such that } h(\phi(\tau , t , x,\gamma(\cdot),v_1(\cdot ) ) ) , v(\phi(t+\alpha , t , x,\gamma(\cdot),v_1(\cdot)),t+\alpha ) \big \ } + \epsilon.\ ] ] let for all and for all ] to be the restriction of the non - anticipative strategy over ] , we define (\tau ) = \gamma[\hat{v}](\tau) ] such that } h(\phi(\tau , t+\alpha,\phi(t+\alpha , t , x,\gamma(\cdot),v_1(\cdot)),\gamma'(\cdot),v_2(\cdot ) ) ) \big \ } + \epsilon .\nonumber\end{aligned}\ ] ] we can define \end{array } \right.\ ] ] therefore , from ( [ eq:8 ] ) and ( [ eq:9 ] ) } h(\phi(\tau , t , x,\gamma(\cdot),v(\cdot ) ) ) \big \ } + 2\epsilon , \nonumber\ ] ] which together with ( [ eq:7 ] ) implies . since , and bounded , is also bounded . for the second part fix and ] such that } } \max_{\tau \in [ t , t ] } \max \{l(\phi(t , t,\hat{x},\hat{\gamma}(\cdot),v(\cdot ) ) ) , h(\phi(\tau , t,\hat{x},\hat{\gamma}(\cdot),v(\cdot ) ) ) \ } - \epsilon.\ ] ] by definition } } \max_{\tau \in [ t , t ] } \max \{l(\phi(t , t , x,\hat{\gamma}(\cdot),v(\cdot ) ) ) , h(\phi(\tau , t , x,\hat{\gamma}(\cdot),v(\cdot ) ) ) \}.\ ] ] we can choose } ] : where is the lipschitz constant of .by the gronwall - bellman lemma , there exists a constant such that for all ] be such that } h(\phi(\tau , t , x,\hat{\gamma}(\cdot),\hat{v}(\cdot))).\ ] ] then * case 1 .* * case 2 . * in any case . 
the same argument with the roles of , reversed establishes that . since is arbitrary , finally consider and ] such that } } \max_{\tau \in [ t , t ] } \max \{l(\phi(t , t , x,\gamma(\cdot),v(\cdot ) ) ) , h(\phi(\tau , t , x,\gamma(\cdot),v(\cdot ) ) ) \ } - \epsilon \\& \geq \max_{\tau \in [ t , t ] } \max \{l(\phi(t , t , x,\gamma(\cdot),v(\cdot ) ) ) , h(\phi(\tau , t , x,\gamma(\cdot),v(\cdot ) ) ) \ } - \epsilon\end{aligned}\ ] ] by definition , } } \max_{\tau \in [ \hat{t},t ] } \max \{l(\phi(t,\hat{t},x,\hat{\gamma}(\cdot),v(\cdot ) ) ) , h(\phi(\tau,\hat{t},x,\hat{\gamma}(\cdot),v(\cdot ) ) ) \}.\ ] ] so we can choose } ] is the restriction of over ] , we define (\tau ) = \gamma[v](\tau) ] we have that . } \max \{l(\phi(t , t , x,\gamma(\cdot),v(\cdot ) ) ) , h(\phi(\tau , t , x,\gamma(\cdot),v(\cdot ) ) ) \}\\ & - \max_{\tau \in [ \hat{t},t ] } \max \{l(\phi(t,\hat{t},x,\hat{\gamma}(\cdot),\hat{v}(\cdot ) ) ) , h(\phi(\tau,\hat{t},x,\hat{\gamma}(\cdot),\hat{v}(\cdot ) ) ) \ } - 2\epsilon.\end{aligned}\ ] ] * case 1 .* } h(\phi(\tau,\hat{t},x,\hat{\gamma}(\cdot),\hat{v}(\cdot))) ] } \max \{l(\phi(t , t , x,\gamma(\cdot),v(\cdot ) ) ) , h(\phi(\tau , t , x,\gamma(\cdot),v(\cdot ) ) ) \ } \\&-\max_{\tau \in [ \hat{t},t ] } h(\phi(\tau,\hat{t},x,\hat{\gamma}(\cdot),\hat{v}(\cdot ) ) ) - 2\epsilon \\ & \geq \max_{\tau \in [ t , t ] } h(\phi(\tau , t , x,\gamma(\cdot),v(\cdot ) ) ) -\max_{\tau \in [ \hat{t},t ] } h(\phi(\tau,\hat{t},x,\hat{\gamma}(\cdot),\hat{v}(\cdot ) ) ) - 2\epsilon.\end{aligned}\ ] ] let $ ] be such that } h(\phi(\tau,\hat{t},x,\hat{\gamma}(\cdot),\hat{v}(\cdot))).\ ] ] then } h(\phi(\tau , t , x,\gamma(\cdot),v(\cdot ) ) ) -h(\phi(\tau_0,\hat{t},x,\hat{\gamma}(\cdot),\hat{v}(\cdot ) ) ) - 2\epsilon \\ & \geq h(\phi(\tau_0,t , x,\gamma(\cdot),v(\cdot ) ) ) -h(\phi(\tau_0,\hat{t},x,\hat{\gamma}(\cdot),\hat{v}(\cdot ) ) ) - 2\epsilon \\ & = h(\phi(\tau_0,t , x,\gamma(\cdot),v(\cdot ) ) ) -h(\phi(\tau_0+t-\hat{t},t , x,\gamma(\cdot),v(\cdot ) ) ) - 2\epsilon \\ & \geq -c_h c_f |\tau_0-\tau_0-t+\hat{t}|-2\epsilon \\ & = -c_h c_f |\hat{t}-t|-2\epsilon,\end{aligned}\ ] ] where is the lipschitz constant of . in any case we havethat a symmetric argument shows that , and since is arbitrary this concludes the proof .[ [ section-2 ] ] the following algorithm summarizes the steps of the reach - avoid computation described in section iv .for simplicity , we have assumed that the tw do not overlap .[ alg : scpf ] * initialization*. set : + for , + for , + * * , + * * .* while is in the sector * * if * + solve .* for all in the sector * + . * for all * + define such that is a box .+ let , and , + , + .+ * else * + .+ * end for * + * end for * + * *. + * else if * + solve .+ repeat steps with instead of .+ * end if * + * end while *research was supported by the european commission under the project cats , fp6-tren-036889 . c. tomlin , g. pappas , and s. sastry , `` conflict resolution for air traffic management : a study in multiagent hybrid systems , '' _ ieee transactions on automatic control _ , vol .43 , no . 4 , pp . 509521 , 1998 .j. p. aubin , j. lygeros , m. quincampoix , s. sastry , and n. seube , `` impulse differential inclusions : a viability approach to hybrid systems , '' _ ieee transactions on automatic control _ , vol .47 , no . 1 , pp . 220 , 2002 .p. cardaliaguet , m. quincampoix , and p. saint - pierre , `` set valued numerical analysis for optimal control and differential games , '' _ in m.bardi , t.raghaven , and t.papasarathy ( eds . 
) annals of the international society of dynamic games _ , pp . 177247 , 1999 .l. evans and p. souganidis , `` differential games and representation formulas for solutions of hamilton - jacobi - isaacs equations , '' _ indiana university of mathematics journal _ , vol .33 , no . 5 ,pp . 773797 , 1984 .i. mitchell , a. m. bayen , and c. tomlin , `` validating a hamilton jacobi approximation to hybrid reachable sets , '' _ in m.di.benedetto and a.sangiovanni-vincentelli ( eds . ) hybrid systems : computation and control springer verlag _ , pp .418432 , 2001 . i.mitchell and c. tomlin , `` level set methods for computations in hybrid systems , '' _ in m.di.benedetto and a.sangiovanni-vincentelli ( eds . ) hybrid systems : computation and control springer verlag _ , pp .310323 , 2000 . c. tomlin ,_ hybrid control of air traffic management systems_.1em plus 0.5em minus 0.4emuniversity of california , berkeley : ph.d .dissertation , department of electrical engineering and computer sciences , 1998 . `` sesar definition phase - deliverable d1 - air transport framework - the current situation , '' july 2006 .[ online ] .available : http://www.eurocontrol.int / sesar / public / standard_page / documentation.htm% l[http://www.eurocontrol.int / sesar / public / standard_page / documentation.htm% l ] i. lymperopoulos , j. lygeros , a. lecchini , w. glover , and j. maciejowski , `` a stochastic hybrid model for air traffic management processes , '' _ university of cambridge , department of engineering , technical report _ , vol .aut07 - 15 , 2007 .
a new framework for formulating reachability problems with competing inputs , nonlinear dynamics and state constraints as optimal control problems is developed . such reach - avoid problems arise in , among others , the study of safety problems in hybrid systems . earlier approaches to reach - avoid computations are either restricted to linear systems , or face numerical difficulties due to possible discontinuities in the hamiltonian of the optimal control problem . the main advantage of the approach proposed in this paper is that it can be applied to a general class of target - hitting continuous dynamic games with nonlinear dynamics , and has very good properties in terms of its numerical solution , since the value function and the hamiltonian of the system are both continuous . the performance of the proposed method is demonstrated by applying it to a two - aircraft collision avoidance scenario under target window constraints and in the presence of wind disturbance . target windows are a novel concept in air traffic management ; they represent spatial and temporal constraints that the aircraft have to respect to meet their schedule .
twitter and other social media have become important communication channels for the general public .it is thus not surprising that various stakeholder groups in science also participate on these platforms .scientists , for instance , use twitter for generating research ideas and disseminating and discussing scientific results .many biomedical practitioners use twitter for engaging in continuing education ( e.g. , journal clubs on twitter ) and other community - based purposes .policy makers are active on twitter , opening lines of discourse between scientists and those making policy on science .quantitative investigations of scholarly activities on social media often called altmetrics can now be done at scale , given the availability of apis on several platforms , most notably twitter .much of the extant literature has focused on the comparison between the amount of online attention and traditional citations collected by publications , showing low levels of correlation .such low correlation has been used to argue that altmetrics provide alternative measures of impact , particularly the broader impact on the society , given that social media provide open platforms where people with diverse backgrounds can engage in direct conversations without any barriers .however , this argument has not been empirically grounded , impeding further understanding of the validity of altmetrics and the broader impact of articles . a crucial step towards empirical validation ofthe broader impact claim of altmetrics is to identify scientists on twitter , because altmetric activities are often assumed to be generated by the public " rather than scientists , although it is not necessarily the case . to verify this, we need to be able to identify scientists and non - scientists .although there have been some attempts , they suffer from a narrow disciplinary focus and/or small scale .moreover , most studies use purposive sampling techniques , pre - selecting candidate scientists based on their success in other sources ( e.g. , highly cited in web of science ) , instead of organically finding scientists from the twitter platform itself . such reliance on bibliographic databases binds these studies to traditional citation indicators and thus introduces bias . for instance , this approach overlooks early - career scientists and favors certain disciplines . here we present the first large - scale and systematic study of scientists across many disciplines on twitter . as our method does not rely on external bibliographic databases and is capable of identifying any user types that are captured in twitter list , it can be adapted to identify other types of stakeholders , occupations , and entities .we study the demographics of the set of scientists in terms of discipline and gender , finding over - representation of social scientists , under - representation of mathematical and physical scientists , and a better representation of women compared to the statistics from scholarly publishing .we then analyze the sharing behaviors of scientists , reporting that only a small portion of shared urls are science - related .finally , we find an assortative mixing with respect to disciplines in the follower , retweet , and mention networks between scientists .our study serves as a basic building block to study scholarly communication on twitter and the broader impact of altmetrics .we classify current literature into two main categories , namely _ product_- vs. 
_ _ producer-__centric perspectives .the former examines the sharing of scholarly papers in social media and its impact , the latter focuses on who generates the attention . * product - centric perspective . *priem and costello formally defined twitter citations as direct or indirect links from a tweet to a peer - reviewed scholarly article online " and distinguished between first- and second - order citations based on whether there is an intermediate web page mentioning the article .the accumulation of these links , they argued , would provide a new type of metric , coined as altmetrics , " which could measure the broader impact beyond academia of diverse scholarly products .many studies argued that only a small portion of research papers are mentioned on twitter .for instance , a systematic study covering million papers indexed by both pubmed and web of science found that only of them have mentions on twitter , yet this is much higher than other social media metrics except mendeley .the coverages vary across disciplines ; medical and social sciences papers that may be more likely to appeal to a wider public are more likely to be covered on twitter .mixed results have been reported regarding the correlation between altmetrics and citations .a recent meta - analysis showed that the correlation is negligible ( ) ; however , there is dramatic differences across studies depending on disciplines , journals , and time window .* producer - centric perspective .* survey - based studies examined how scholars present themselves on social media . a large - scale survey with more than responses conducted by _ nature _ in revealed that more than were aware of twitter , yet only were regular users . a handful of studies analyzed how twitter is used by scientists .priem and costello examined scholars to study how and why they share scholarly papers on twitter .an analysis of emergency physicians concluded that many users do not connect to their colleagues while a small number of users are tightly interconnected .holmberg and thelwall selected researchers in disciplines and found clear disciplinary differences in twitter usages , such as more retweets by biochemists and more sharing of links for economists . note that these studies first selected scientists outside of twitter and then manually searched their twitter profiles .two limitations thus exist for these studies .first , the sample size is small due to the nature of manual searching .second , the samples are biased towards more well - known scientists .one notable exception is a study by hadgu and jschke , who presented a supervised learning based approach to identifying researchers on twitter , where the training set contains users who were related to some computer science conference handles .although this study used a more systematic method , it still relied on the dblp , an external bibliographic dataset for computer science , and is confined in a single discipline .defining science and scientists is a herculean task and beyond the scope of this paper .we thus adopt a practical definition , turning to the standard occupational classification ( soc ) system ( http://www.bls.gov/soc/ ) released by the bureau of labor statistics , united states department of labor .we use soc because not only it is a practical and authoritative guidance for the definition of scientists but also many official statistics ( e.g. 
, total employment of social scientists ) are released according to this classification system .soc is a hierarchical system that classifies workers into major occupational groups , among which we are interested in two , namely ( 1 ) computer and mathematical occupations ( code 15 - 0000 ) and ( 2 ) life , physical , and social science occupations ( code 19 - 0000 ) . from the two groups , we compile scientist occupations ( supporting table s1 ) .although authoritative , the soc does not always meet our intuitive classifications of scientists .for instance , biologists " is not presented in the classification .we therefore consider another source wikipedia to augment the set of scientist occupations . in particular , we add the occupations listed at http://en.wikipedia.org/wiki/scientist#by_field .we then compile a list of scientist titles from the two sources .this is done by combining titles from soc , wikipedia , and illustrative examples under each soc occupation .we also add two general titles : scientists " and researchers . " for each title , we consider its singular form and the core disciplinary term .for instance , for the title clinical psychologists , " we also consider clinical psychologist , " psychologists , " and psychologist ." we assemble a set of scientist titles using this method .our method of identifying scientists is inspired by a previous study that used twitter _lists _ to identify user expertise .a twitter _ list _ is a set of twitter users that can be created by any twitter user .the creator of a list needs to provide a name and optional description .although the purpose of lists is to help users organize their subscriptions , the names and descriptions of lists can be leveraged to infer attributes of users in the lists .imagine a user creating a list called economist " and putting http://twitter.com/betseystevenson[ ] in it ; this signals that may be an economist . if is included in numerous lists all named economist , " which means that many independent twitter users classify her as an economist , it is highly likely that is indeed an economist .this is illustrated in fig [ fig : list - name - wordcloud ] where the word cloud of the names of twitter lists containing is shown .we can see that economist " is a top word frequently appeared in the titles , signaling the occupation of this user .in other words , we crowdsource " the identity of each twitter user . .] in principle , we could use twitter s ` memberships ` api ( https://dev.twitter.com/rest/reference/get/lists/memberships ) , for each user , to get all the lists containing this user , and then infer whether this user is a scientist by analyzing the names and descriptions of these lists .however , this method is highly infeasible , because ( 1 ) most users are not scientists , ( 2 ) the distribution of listed counts is right - skewed : lady gaga , for example , is listed more than times ( https://www.electoralhq.com/twitter-users/most-listed ) , and ( 3 ) twitter api has rate limits .we instead employ a previously introduced list - based snowball sampling method that starts from a given initial set of users and expands to discover more .we improve this approach by more systematically obtaining the job title lexicon and the seed user set ( supporting text ) .we use the snowball sampling ( breadth - first search ) on twitter lists .we first identify seed users ( supporting text ) and put them into a queue . 
for each public user in the queue , we get all the lists in which the user appears , using the twitter ` memberships ` api . then , for each public list in the resulting subset of lists whose name contains at least one scientist title , we get its members using the twitter ` members ` api ( https://dev.twitter.com/rest/reference/get/lists/members ) and put those who have not been visited into the queue . the two steps are repeated until the queue is empty , which completes the sampling process . note that to remove many organizations and anonymous users as well as to speed up the sampling , we only consider users whose names contain spaces . we acknowledge that this may drop many users with non - english names or the ones who do not disclose their names in a standard way . also note that this procedure is inherently blind towards those scientists who are not listed . from the sampling procedure , we get users appearing in lists whose names contain scientist titles . to increase the precision of our method , the final dataset contains those users whose profile descriptions also contain scientist titles . a total number of users are found . table [ tab : top - listed ] shows the top scientists based on the number of lists whose names contain scientist titles , suggesting that our sampling method can identify scientists in diverse disciplines . these top scientists have very different levels of popularity , with the most followed scientist neil degrasse tyson attracting more than million followers while miles kimball has thousand followers . [ table [ tab : top - listed ] : top 30 scientists based on the number of twitter lists whose names contain scientist titles . ] m. gabielkov , a. rao , and a. legout . studying social networks at scale : macroscopic anatomy of the twitter social graph . in _ proc . of the 2014 acm international conference on measurement and modeling of computer systems _ , pages 277 - 288 , 2014 . n. k. sharma , s. ghosh , f. benevenuto , n. ganguly , and k. gummadi . inferring who - is - who in the twitter social network . in _ proc . of the 2012 acm workshop on online social networks _ , pages 55 - 60 , 2012 .
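a compact sketch of the list - based snowball sampling loop described above is given below . the two helper functions stand in for the twitter ` memberships ` and ` members ` endpoints mentioned in the text and are assumed wrappers rather than real api signatures ; rate limiting , pagination , the public - account checks , and error handling are omitted .

```python
from collections import deque

def is_scientist_list(list_name, scientist_titles):
    name = list_name.lower()
    return any(title in name for title in scientist_titles)

def snowball_sample(seed_users, scientist_titles,
                    get_membership_lists, get_list_members):
    """Breadth-first expansion over Twitter lists.

    get_membership_lists(user) -> iterable of (list_id, list_name) containing `user`
    get_list_members(list_id)  -> iterable of member user ids
    Both are assumed wrappers around the `memberships` and `members` endpoints.
    """
    visited = set(seed_users)
    queue = deque(seed_users)
    candidates = set()
    while queue:
        user = queue.popleft()
        for list_id, list_name in get_membership_lists(user):
            if not is_scientist_list(list_name, scientist_titles):
                continue
            # users appearing in a qualifying list become candidates
            candidates.add(user)
            for member in get_list_members(list_id):
                candidates.add(member)
                if member not in visited:
                    visited.add(member)
                    queue.append(member)
    return candidates
```

the name filter ( keeping only accounts whose names contain spaces ) and the profile - description filter described above would then be applied on top of the returned candidate set .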
metrics derived from twitter and other social media , often referred to as altmetrics , are increasingly used to estimate the broader social impacts of scholarship . such efforts , however , may produce highly misleading results , as the entities that participate in conversations about science on these platforms are largely unknown . for instance , if altmetric activities are generated mainly by scientists , do they really capture broader social impacts of science ? here we present a systematic approach to identifying and analyzing scientists on twitter . our method can easily be adapted to identify other stakeholder groups in science . we investigate the demographics , sharing behaviors , and interconnectivity of the identified scientists . our work contributes to the literature both methodologically and conceptually : we provide new methods for disambiguating and identifying particular actors on social media and for describing the behaviors of scientists , thus providing foundational information for the construction and use of indicators based on social media metrics .
* power - law fit for overall flight . *first , we fit the flight length distribution of the geolife and nokia mdc datasets regardless of transportation modes ( see methods section ) .we fit truncated power - law , lognormal , power - law and exponential distribution ( see supplementary table s1 ) .we find that the overall flight length ( ) distributions fit a truncated power - law with exponent as 1.57 in the geolife dataset ( ) and 1.39 in the nokia mdc dataset ( ) ( fig .[ fig : all ] ) , better than other alternatives such as power - law , lognormal or exponential . figure .[ fig : all ] illustrates the pdfs and their best fitted distributions according to akaike weights .the best fitted distribution ( truncated power - law ) is represented as a solid line and the rest are dotted lines .we use logarithm bins to remove tail noises .our result is consistent with previous research ( ) , and the exponent is close to their results .we show the akaike weights for all fitted distributions in the supplementary table s2 .the akaike weight is a value between 0 and 1 .the larger it is , the better the distribution is fitted .the akaike weights of the power - law distributions regardless of transportation modes are 1.0000 in both datasets .the p - value is less than 0.01 in all our tests , which means that our results are very strong in terms of statistical significance .note that here the differences between fitted distributions are not remarkable as shown in the fig .[ fig : all ] , especially between the truncated power - law and the lognormal distribution .we use the loglikelihood ratio to further compare these two candidate distributions .the loglikelihood ratio is positive if the data is more likely in the power - law distribution , and negative if the data is more likely in the lognormal distribution .the loglikelihood ratio is 1279.98 and 3279.82 ( with the significance value ) in the geolife and the nokiamdc datasets respectively , indicating that the data is much better fitted with the truncated power - law distribution .* lognormal fit for single transportation mode .* however , the distribution of flight lengths in each single transportation mode is not well fitted with the power - law distribution . instead ,they are better fitted with the lognormal distribution ( see supplementary table s2 ) .all the segments of each transportation flight length are best approximated by the lognormal distribution with different parameters . in fig .[ fig : geolife ] and supplementary fig .s1 , we represent the flight length distributions of walk / run , bike , subway / train and car / taxi / bus in the geolife and the nokia mdc dataset correspondingly . the best fitted distribution ( lognormal )is represented as a solid line and the rest are dotted lines .table [ tab : parameters ] shows the fitted parameter for all the distributions ( in the truncated power - law , and in the lognormal ) .we can easily find that the is increasing over these transportation modes ( walk / run , bike , car / taxi / bus and subway / train ) , identifying an increasing average distance . 
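the model - selection step described above ( fitting candidate distributions and comparing them by akaike weights and loglikelihood ratios ) can be illustrated with the `powerlaw` python package . the snippet below is a sketch , not the code behind the reported fits , and it assumes `flight_lengths` holds the extracted flight lengths .

```python
import numpy as np
import powerlaw

# flight_lengths: 1-d array of flight lengths in meters, assumed already extracted
flight_lengths = np.loadtxt("flights.txt")   # hypothetical input file

fit = powerlaw.Fit(flight_lengths)           # estimates xmin and the tail parameters
print("xmin =", fit.xmin)
print("truncated power-law alpha =", fit.truncated_power_law.alpha)

# loglikelihood ratio R: positive favours the first candidate, negative the second
for alternative in ("lognormal", "power_law", "exponential"):
    R, p = fit.distribution_compare("truncated_power_law", alternative)
    print("truncated_power_law vs %-11s R = %8.2f  p = %.3f" % (alternative, R, p))
```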
compared to walk / run , bike or car / taxi / bus ,the flight distribution in subway / train mode is more right - skewed , which means that people usually travel to a more distant location by subway / train .it must be noted that our findings for the car / taxi / bus mode are different from these recent research results , which also investigated the case of a single transportation mode , and found that the scaling of human mobility is exponential by examining taxi gps datasets .the differences are mainly because few people tend to travel a long distance by taxi due to economic considerations .so the displacements in their results decay faster than those measured in our car / taxi / bus mode cases .* mechanisms behind the power - law pattern . *we characterize the mechanism of the power - law pattern with lvy flights by mixing the lognormal distributions of the transportation modes .previous research has shown that a mixture of lognormal distributions based on an exponential distribution is a power - law distribution .based on their findings , we demonstrate that the reason that human movement follows the lvy walk pattern is due to the mixture of the transportation modes they take .we demonstrate that the mixture of the lognormal distributions of different transportation modes ( walk / run , bike , train / subway or car / taxi / bus ) is a power - law distribution given two new findings : first , we define the change rate as the relative change of length between two consecutive flights with the same transport mode .the change rate in the same transportation mode is small over time .second , the elapsed time between different transportation modes is exponentially distributed . *lognormal in the same transportation mode .* let us consider a generic flight .the flight length at next interval of time , given the change rate , is it has been found that the change rate in the same transportation mode is small over time .the change rate reflects the correlation between two consecutive displacements in one trip . to obtain the pattern of correlation between consecutive displacements in each transportation mode, we plot the flight length point ( , ) from the geolife dataset ( fig .[ fig : rate ] ) . here represents the -th flight length and represents the -th flight length in a consecutive trajectory in one transportation mode . figure .[ fig : rate ] shows the density of flight lengths correlation in car / taxi / bus , walk / run , subway / train and bike correspondingly .( , ) are posited near the diagonal line , which identifies a clear positive correlation .similar results are also found in the nokia mdc dataset ( see supplementary fig .we use the pearson correlation coefficient to quantify the strength of the correlation between two consecutive flights in one transportation mode .the value of pearson correlation coefficient is shown in the supplementary table s3 .the value is less than 0.01 in all the cases , identifying very strong statistical significances . 
is positive in each transportation mode and ranges from 0.3640 to 0.6445 , which means that there is a significant positive correlation between consecutive flights in the same transportation mode , and the change rate in the same transportation mode between two time steps is small .the difference in the same transportation mode between two time steps is small due to a small difference in consecutive flights .we sum all the contributions as follows : we plot the change rate samples of the car / taxi / bus mode from the geolife dataset as an example in supplementary fig .we observe that the change rate fluctuates in an uncorrelated fashion from one time interval to the other in one transportation mode due to the unpredictable character of the change rate .the pearson correlation coefficient accepts the findings at the 0.03 - 0.13 level with p - value less than 0.05 ( see supplementary table s4 ) . by the central limit theorem ,the sum of the change rate is normally distributed with the mean and the variance , where and are the mean and variance of the change rate and is the elapsed time .then we can assert that for every time step , the logarithm of is also normally distributed with a mean and variance . note here that is the length of the flight at the time after intervals of elapsed time . in the same transportation mode ,the distribution of the flight length with the same change rate mean is lognormal , its density is given by ,\ ] ] which corresponds to our findings that in each single transportation mode the flight length is lognormal distributed . *transportation mode elapsed time .* we define elapsed time as the time spent in a particular transportation mode ; we found that it is exponentially distributed .for example , the trajectory samples shown in fig .[ fig : trajectory ] contain six trajectories with three different transportation modes , ( taxi , walk , subway , walk , taxi , walk ) .thus the elapsed time also consists of six samples ( , , , , , ) .the elapsed time is weighted exponentially between the different transportation modes ( see supplementary fig .similar results are also reported in .the exponentially weighted time interval is mainly due to a large portion of walk / run flight intervals .walk / run is usually a connecting mode between different transportation modes ( e.g. , the trajectory samples shown in fig .[ fig : trajectory ] ) , and walk / run usually takes much shorter time than any other modes .thus the elapsed time decays exponentially .for example , 87.93 of the walk distance connecting other transportation modes is within 500 meters and the travelling time is within 5 minutes in the geolife dataset . *mixture of the transportation modes .* given these lognormal distributions in each transportation mode and the exponential elapsed time between different modes , we make use of mixtures of distributions . we obtain the overall human mobility probability by considering that the distribution of flight length is determined by the time , the transportation mode change rate mean and variance .we obtain the distribution of single transportation mode distribution with the time , the change rate mean and variance fixed .we then compute the mixture over the distribution of since is exponentially distributed over different transportation modes with an exponential parameter .if the distribution of , , depends on the parameter . is also distributed according to its own distribution .then the distribution of , is given by . herethe in is the same as the in the . 
is the exponential distribution of elapsed time with an exponential parameter .so the mixture ( overall flight length ) of these lognormal distributions in one transportation mode given an exponential elapsed time ( with an exponent ) between each transportation mode is d t , \ ] ] which can be calculated to give where the power law exponent is determined by .the calculation to obtain is given in supplementary note 1 .if we substitute the parameters presented in table [ tab : parameters ] , we will get the in the geolife dataset , which is close to the original parameter , and in the nokia mdc dataset , which is close to the original parameter .the result verifies that the mixture of these correlated lognormal distributed flights in one transportation mode given an exponential elapsed time between different modes is a truncated power - law distribution .previous research suggests that it might be the underlying road network that governs the lvy flight human mobility , by exploring the human mobility and examining taxi traces in one city in sweden . to verify their hypothesis, we use a road network dataset of beijing containing 433,391 roads with 171,504 conjunctions and plot the road length distribution . as shown in supplementary fig .s5 , the road length distribution is very different to our power - law fit in flights distribution regardless of transportation modes .the in road length distribution is 3.4 , much larger than our previous findings in the geolife and in the nokia mdc .thus the underlying street network can not fully explain the lvy flight in human mobility .this is mainly due to the fact that it does not consider many long flights caused by metro or train , and people do not always turn even if they arrive at a conjunction of a road .thus the flight length tails in the human mobility should be much larger than those in the road networks .* data sets . *we use two large real - life gps trajectory datasets in our work , the geolife dataset and the nokia mdc dataset . the key information provided by these two datasets is summarized in table [ tab : mobilitydataset ] .we extract the following information from the dataset : flight lengths and their corresponding transportation modes .geolife is a public dataset with 182 users gps trajectory over five years ( from april 2007 to august 2012 ) gathered mainly in beijing , china .this dataset contains over 24 million gps samples with a total distance of 1,292,951 kilometers and a total of 50,176 hours .it includes not only daily life routines such as going to work and back home in beijing , but also some leisure and sports activities , such as sightseeing , and walking in other cities .the transportation mode information in this dataset is manually logged by the participants .the nokia mdc dataset is a public dataset from nokia research switzerland that aims to study smartphone user behaviour .the dataset contains extensive smartphone data of two hundred volunteers in the lake geneva region over one and a half years ( from september 2009 to april 2011 ) .this dataset contains 11 million data points and the corresponding transportation modes . *obtaining transportation mode and the corresponding flight length .* we categorize human mobility into four different kinds of transportation modality : walk / run , car / bus / taxi , subway / train and bike .the four transportation modes cover the most frequently used human mobility cases . 
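returning to the mixing argument above , it can be checked with a toy simulation : draw an exponentially distributed elapsed time for each trip , generate a lognormal flight length whose log - mean and log - variance grow with that time , and fit the pooled sample . the parameter values below are made up for illustration and are not the fitted geolife or nokia mdc values .

```python
import numpy as np
import powerlaw

rng = np.random.default_rng(1)
lam, mu, sigma = 1.0, 5.5, 0.8        # illustrative parameters only

# elapsed time in one transportation mode is exponentially distributed
t = rng.exponential(scale=1.0 / lam, size=200_000)

# within a mode, log(flight length) is normal with mean mu*t and variance sigma^2*t
lengths = rng.lognormal(mean=mu * t, sigma=sigma * np.sqrt(t))

# the pooled sample is expected to be better described by a (truncated) power law
# than by a single lognormal
fit = powerlaw.Fit(lengths)
R, p = fit.distribution_compare("truncated_power_law", "lognormal")
print("fitted alpha:", fit.truncated_power_law.alpha, " R (tpl vs lognormal):", R)
```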
to the best of our knowledge , this article is the first work that examines the flight distribution with all kinds of transportation modes in both urban and inter - city environments . in the geolife dataset, users have labelled their trajectories with transportation modes , such as driving , taking a bus or a train , riding a bike and walking .there is a label file storing the transportation mode labels in each user s folder , from which we can obtain the ground truth transportation mode each user is taking and the corresponding timestamps .similar to the geolife dataset , there is also a file storing the transportation mode with an activity i d in the nokia mdc dataset .we treat the transportation mode information in these two datasets as the ground truth . in order to obtain the flight distribution in each transportation mode, we need to extract the flights .we define a flight as the longest straight - line trip from one point to another without change of direction .one trail from an original to a destination may include several different flights ( fig .[ fig : trajectory ] ) . in order to mitigate gps errors ,we recompute a position by averaging samples ( latitude , longitude ) every minute .since people do not necessarily move in perfect straight lines , we need to allow some margin of error in defining the ` straight ' line .we use a rectangular model to simplify the trajectory and obtain the flight length : when we draw a straight line between the first point and the last point , the sampled positions between these two endpoints are at a distance less than 10 meters from the line .the same trajectory simplification mechanism has been used in other articles which investigates the lvy walk nature of human mobility .we map the flight length with transportation modes according to timestamp in the geolife dataset and activity i d in the nokia mdc dataset and obtain the final ( transportation mode , flight length ) patterns .we obtain 202,702 and 224,723 flights with transportation mode knowledge in the geolife and nokia mdc dataset , respectively . * identifying the scale range . * to fit a heavy tailed distribution such as a power - law distribution , we need to determine what portion of the data to fit ( ) and the scaling parameter ( ) .we use the methods from to determine and .we create a power - law fit starting from each value in the dataset. then we select the one that results in the minimal kolmogorov - smirnov distance , between the data and the fit , as the optimal value of .after that , the scaling parameter in the power - law distribution is given by where are the observed values of and is number of samples. * akaike weights .* we use akaike weights to choose the best fitted distribution .an akaike weight is a normalized distribution selection criterion .its value is between 0 and 1 .the larger the value is , the better the distribution is fitted .akaike s information criterion ( aic ) is used in combination with maximum likelihood estimation ( mle ) .mle finds an estimator of that maximizes the likelihood function of one distribution .aic is used to describe the best fitting one among all fitted distributions , here k is the number of estimable parameters in the approximating model . after determining the aic value of each fitted distribution , we normalize these values as follows .first of all , we extract the difference between different aic values called , then akaike weights are calculated as follows , 10 url # 1`#1`urlprefix[2]#2 [ 2][]#2 & _ _ * * , ( ) . 
& _ _ * * , ( ) . , & _ _ * * , ( ) . , , , & _ _ * * , ( ) . , , & in _ _ ( ) . , & in _ _ ( ) . , &_ _ * * , ( ) . , , & _ _ * * , ( ) . ,& in _ _ ( ) ., , , & _ _ * * ( ) ., , & in _ _& in _ _ ( ) ., , , & _ _ * * , ( ) . , &_ _ * * , ( ) . , & _ _ * * , ( ) ._ et al . _ _ _ * * , ( ) . , , & _ _ * * , ; _ _ ( ) . , , &_ _ * * , ( ) . , & _ _ * * , ( ) . , & _ _ * * , ( ) . , ,& in _ _ ( ) ., , , & in _ _( ) . & _ _ ( , ) . , , , & _ _ * * , ; _ _ ( ) . , & _ _ * * , ; _ _ ( ) . , , & _ _ * * , ;_ _ ( ) . , , , & _ _ ** , ( ) . &_ _ ( ) . ., , & . in _ __ _ * * , ( ) . & __ _ _ ( ) . & _ _ ( ). _ _ * * , ( ) . , & _ _ * * , ; _ _ ( ) ._ _ ( , ) ._ _ ( , ) ._ et al . _ in _ _ ( ) . , , , & in _ _ ( ) . , &_ _ * * , ( ) .k.z . , m.m . , p.h ., w.r . and s.t .designed the research based on the initial idea by k.z . and s.t .. k.z .executed the experiments guided by m.m ., w.r . and s.t .k.z . and s.t .performed statistical analyses , and prepared the figures .w.r . and s.t .wrote the manuscript .all authors reviewed the manuscript .* competing financial interests : * the authors declare no competing financial interests ..the geolife and the nokia mdc human mobility datasets . [ cols="^,^,^",options="header " , ]given d t .\ ] ] d t \\ & = \frac{\lambda}{\sigma}\frac{1}{\sqrt{2\pi } } x^{-1 } \\ & \int_{t=0}^{\infty } exp(-\lambda t ) exp[-\frac{(\ln(x)-\mu t)^2}{2t\sigma ^2 } ] \frac{1}{\sqrt{t}}]dt \\ & = \frac{\lambda}{\sigma}\frac{1}{\sqrt{2\pi } } x^{-1 } \\ & \int_{t=0}^{\infty } exp[\frac{-(\ln(x)-\mu t)^2 - 2\lambda\sigma^2 t } { 2t\sigma ^2 } ] \frac{1}{\sqrt{t}}]d t \\ & = \frac{\lambda}{\sigma}\frac{1}{\sqrt{2\pi } } x^{-1 } exp(\frac{\ln{x } \mu}{\sigma^2 } ) \\ & \int_{t=0}^{\infty } exp[-(\frac{\mu^2 + 2\lambda\sigma^2}{2\sigma^2})t - \frac{(\ln x)^2}{2\sigma^2}\frac{1}{t } ] \frac{1}{\sqrt{t}}]d t .\end{aligned}\ ] ] using the substitution gives \frac{1}{\sqrt{u^2 } } ] 2 u du .\end{aligned}\ ] ] let and , from the integral table we get which helps us to get the expression for , the expression for is here the and the are the normalized mean and variance of the change rate , while the is the exponential parameter of elapsed time between different transportation modes .we normalize the and of different transportation modes following and .note here , , , and , , , represent the mean and standard deviation of the change rate in each transportation modes in both datasets , as shown in the table .the mean value is 5.54 and 6.05 and the variance is 0.5954 and 1.0165 in the geolife dataset and in the nokia mdc dataset respectively . combining the fitted exponential parameter in the geolife dataset and in the nokia mdc dataset, we obtain the final in the geolife dataset , which is close to the original parameter , and in the nokia mdc dataset , which is close to the original parameter .
human mobility has been empirically observed to exhibit lvy flight characteristics and behaviour with power - law distributed jump size . the fundamental mechanisms behind this behaviour has not yet been fully explained . in this paper , we propose to explain the lvy walk behaviour observed in human mobility patterns by decomposing them into different classes according to the different transportation modes , such as walk / run , bike , train / subway or car / taxi / bus . our analysis is based on two real - life gps datasets containing approximately 10 and 20 million gps samples with transportation mode information . we show that human mobility can be modelled as a mixture of different transportation modes , and that these single movement patterns can be approximated by a lognormal distribution rather than a power - law distribution . then , we demonstrate that the mixture of the decomposed lognormal flight distributions associated with each modality is a power - law distribution , providing an explanation to the emergence of lvy walk patterns that characterize human mobility patterns . understanding human mobility is crucial for epidemic control , urban planning , traffic forecasting systems and , more recently , various mobile and network applications . previous research has shown that trajectories in human mobility have statistically similar features as lvy walks by studying the traces of bank notes , cell phone users locations and gps . according to the this model , human movement contains many short flights and some long flights , and these flights follow a power - law distribution . intuitively , these long flights and short flights reflect different transportation modalities . figure . [ fig : trajectory ] shows a person s one - day trip with three transportation modalities in beijing based on the geolife dataset ( table [ tab : mobilitydataset ] ) . starting from the bottom right corner of the figure , the person takes a taxi and then walks to the destination in the top left part . after two hours , the person takes the subway to another location ( bottom left ) and spends five hours there . then the journey continues and the person takes a taxi back to the original location ( bottom right ) . the short flights are associated with walking and the second short - distance taxi trip , whereas the long flights are associated with the subway and the initial taxi trip . based on this simple example , we observe that the flight distribution of each transportation mode is different . in this paper , we study human mobility with two large gps datasets , the geolife and nokia mdc datasets ( approximately 10 million and 20 million gps samples respectively ) , both containing transportation mode information such as walk / run , bike , train / subway or car / taxi / bus . the four transportation modes ( walk / run , bike , train / subway and car / taxi / bus ) cover the most frequently used human mobility cases . first , we simplify the trajectories obtained from the datasets using a rectangular model , from which we obtain the flight length . here a flight is the longest straight - line trip from one point to another without change of direction . one trail from an origin to a destination may include several different flights ( fig . [ fig : trajectory ] ) . then , we determine the flight length distributions for different transportation modes . we fit the flight distribution of each transportation mode according to the akaike information criteria in order to find the best fit distribution . 
we show that human movement exhibiting different transportation modalities is better fitted by the lognormal distribution than by the power - law distribution . finally , we demonstrate that the mixture of these transportation mode distributions is a power - law distribution based on two new findings : first , there is a significant positive correlation between consecutive flights in the same transportation mode , and second , the elapsed time in each transportation mode is exponentially distributed . the contribution of this paper is twofold . first , we extract the displacement distribution for each transportation mode . this is important for many applications . for example , a population - weighted opportunities ( pwo ) model has been developed to predict human mobility patterns in cities ; its authors find that there is relatively high mobility at the city scale due to highly developed traffic systems inside cities . our results significantly deepen the understanding of urban human mobility with different transportation modes . second , we demonstrate that the mixture of different transportation modes can be approximated as a truncated lvy walk . this result is a step towards explaining the emergence of lvy walk patterns in human mobility .
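for completeness , the flight - extraction step based on the rectangular model described above ( a flight ends once an intermediate sample deviates more than 10 meters from the line joining the segment endpoints ) can be sketched as follows ; positions are assumed to be minute - averaged and already projected to metric coordinates .

```python
import numpy as np

def point_line_distance(p, a, b):
    """Perpendicular distance (meters) from point p to the line through a and b (2-d)."""
    a, b, p = map(np.asarray, (a, b, p))
    if np.allclose(a, b):
        return np.linalg.norm(p - a)
    return abs(np.cross(b - a, p - a)) / np.linalg.norm(b - a)

def extract_flights(points, tol=10.0):
    """Split a trajectory (list of (x, y) positions in meters) into straight-line flights.

    A flight is extended as long as every intermediate sample stays within `tol`
    meters of the line joining the flight's endpoints (rectangular model).
    Returns the list of flight lengths.
    """
    flights, start = [], 0
    for end in range(2, len(points)):
        if any(point_line_distance(points[k], points[start], points[end]) > tol
               for k in range(start + 1, end)):
            flights.append(np.linalg.norm(np.subtract(points[end - 1], points[start])))
            start = end - 1
    if len(points) - start >= 2:
        flights.append(np.linalg.norm(np.subtract(points[-1], points[start])))
    return flights
```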
quantum bits , or qubits , have been realized using , for example , superconducting circuits , quantum dots , trapped ions , single dopants in silicon , and nitrogen vacancy centres .the state of a qubit is affected by various sources of error such as finite qubit lifetime , measurement imperfections , non - ideal initialization , and imprecise external control .provided that these errors are below a certain threshold , they can be corrected with quantum error correction codes which encode the information of a logical qubit into an ensemble of physical qubits .surface codes , error correction codes with the highest known thresholds , may require thousands of physical qubits for each fault - tolerant logical qubit .controlling such a large ensemble of qubits consumes a great amount of power , rendering heat management at the qubit register an important challenge .the power consumption of a quantum processor can be decreased by implementing more accurate physical qubits , thus leading to smaller ensembles forming the logical qubits .however , it is known that gate errors also arise from the quantum - mechanical uncertainties in the control pulse . in the case of a resonant disposable control pulse ,this type of error is inversely proportional to the pulse energy , and hence poses a trade - off in the power management of the quantum computer . even in the absence of all other types of error, this result implies such a high level of dissipated power at the chip temperature that it challenges the commercially available cryogenic equipment , as we estimate in appendix [ appa ] for a typical superconducting quantum computer running a surface code to factorize a 2000-bit integer . in this work ,we derive the greatest lower bound for the gate error within the resonant jaynes cummings model .the inevitable error originates from the quantum nature of the driving mode and becomes dominant in the regime of low driving powers .in contrast to previous work , our constructive derivation does not need to assume any particular state of the system and is applicable to qubit rotations of arbitrary angles .in addition to the lower bound itself , our method naturally finds the bosonic quantum states of the pulse that reach the bound .we explicitly show that single - qubit rotations are optimally realized by applying a certain amount of squeezing to coherent states .the optimal states do not alone solve the above - mentioned heat dissipation problem , but we additionally find that back - action - induced correlations between the control pulse and the controlled qubit can be transferred to auxiliary qubits ( see also refs .thus we propose a control protocol where multiple gates are generated with a single control pulse which is frequently refreshed using auxiliary qubits .whereas previous studies suggest that it is not possible to save energy by reusing control pulses without sacrificing the minimum gate fidelity , our method exhibits orders of magnitude smaller energy consumption with no drop in the average gate fidelity .this paper is organized as follows . in sec .[ sec : semiclassical ] , we briefly summarize the formalism used to describe qubit rotations and discuss gate errors in the semiclassical model . in sec .[ sec : optimization ] , we derive the quantum limit of gate error . the refreshing protocol is constructed and studied in sec .[ sec : protocol ] and the key results are summarized and discussed further in sec . 
[ sec : discussion ] .let us first review the semiclassical formalism of single - qubit control and the resulting gate errors .the state of a qubit can be represented as a bloch vector constrained inside a unit sphere , see fig .single - qubit logic gates , realized using , e.g. , microwave pulses , rotate the bloch vector by about the axis . assuming that the control pulse is a classical waveform in resonance with the qubit transition energy , the system may be described in the rotating frame using a semiclassical interaction hamiltonian of the form where and denote the ground and excited states of the qubit , respectively , represents the classical amplitude and phase of the control field , is the coupling constant including the pulse envelope , and is the reduced planck constant .the gate is implemented by choosing the interaction time and the pulse envelope such that they satisfy . for example , setting and along the -axis , the temporal evolution operator ] , where is the displacement operator .the interaction time for each operation is , which is expected to yield states with .( b ) gate error for an operation as a function of the average photon number of the driving pulse which is initialized either in the coherent state ( red color ) or the squeezed cat state ( blue color ) .the highlighted areas indicate the range of error , depending on the initial state of the qubit , and the solid lines show the error averaged over qubit states distributed uniformly on the bloch sphere , .the inset shows the difference between the numerically calculated errors and their analytical first - order approximations ( table [ tab_1 ] ) , with dashed lines indicating the difference in maximum errors . ][ bt ! ] [ cols="^,^,^,^,^,^,^,^,^ " , ] we solve the drive states that minimize the average or maximum gate error for a given interaction time and a desired rotation . to this end , it is sufficient to consider only pure states , and hence we may employ the forms given by eq . .the error - minimizing states are the eigenstates of operators that correspond to the largest eigenvalue , by definition , the optimal states provide a fundamental lower bound for the error .we solve this eigenvalue equation numerically .examples of fidelity - optimal solutions are shown in fig .[ fig2]a using the wigner pseudo - probability function .the numerically obtained states can be accurately described using the squeezed coherent states , where and are the displacement and squeezing operators , respectively .importantly , the numerical solutions possess the correct amplitude and phase to satisfy the timing condition and to set the desired direction of the rotation axis , without imposing them explicitly . furthermore ,the average errors , as well as the optimal squeezing parameters , are equal to those obtained in the semiclassical approach in sec .[ sec : semiclassical ] . in the specific case of -rotations , a sum of two eigenvectors ,i.e. 
, the squeezed cat state where the positive constant ensures normalization , is a state that minimizes both the average and the maximum error simultaneously ( see appendix [ appb ] ) .comparison of errors produced by such a state and a coherent state is presented in fig .[ fig2]b .the numerical approach for solving the eigenstates of has the disadvantage of truncating the infinite - dimensional state vector to a finite vector of length , which might distort or exclude some of the possible solutions .however , the obtained gaussian - like solutions are not affected by changes in the cut - off for . raisingthe cut - off reveals more energetic solutions , but these correspond to pulses that implement the chosen gate after an integer number of unnecessary rotations .generally for gates , we find solutions with errors that vanish as in the limit , as shown in appendix [ appc ] .the lower bounds together with errors induced by non - squeezed coherent states are shown in table [ tab_1 ] .other gates , such as the pauli - z gate and the hadamard gate , can be constructed as sequences of gates .recently , it was shown that squeezing also improves the fidelity of the phase gate in the dispersive regime .schematic diagram of the drive - refreshing protocol . during one cycle ,the circulating drive pulse ( red ) induces a chosen rotation on one of the qubits in the register and is then refreshed by sequential interactions with each ancillary qubit \{}. in an ideal setting , each ancilla is prepared precisely into the state and reset after each cycle . in practice , the ancilla qubits are initially in their ground states and their preparation and reset is implemented by a circulating corrector pulse ( green ) . ] all of the fundamental lower bounds derived above are inversely proportional to the average photon number .intuitively , a drive with a large photon number should be capable of inducing multiple gates without changing substantially , thus decreasing the required amount of energy per gate for nearly equal error level .we show below that reusing a drive effectively decreases the energy consumption well below the lower bound of average gate error for disposable pulses .furthermore , the drive can be corrected between successive gates such that the consumption drops without essential decrease of the average gate fidelity . in our protocol, an itinerant control drive cyclically interacts with a register of resonant qubits and ancillary qubits , see fig .a cycle begins with the drive , initially in a suitable squeezed coherent state , applying a chosen gate operation with minimal error on a register qubit .consequently , the drive state changes due to the quantum back - action . to undo this ,the drive is set to sequentially interact with corrective ancilla qubits , initialized in a superposition of ground and excited states , for a time corresponding to a -rotation . as a result ,the purity , energy , and phase of the drive are restored in successive interactions ( see appendix [ appd ] ) . at the end of the cycle ,the ancilla qubits are reset and the refreshed drive is usable for another high - fidelity gate . with increasing number of ancilla qubits , the execution time of a full cycle increases andthus one itinerant pulse applies a gate on the register less frequently . 
to compensate for this , one could add another drive pulse for each ancilla in the array , and synchronize their travel times such that each qubit would interact with one of the pulses at a given time .such a system would apply as many gates on the register per cycle as there are itinerant pulses in circulation .however , we restrict our analysis to a single pulse . evolution of an ancilla state during a refreshing cycle : ( i ) preparation from the ground state into the state , ( ii ) drive refresh as a result the primary rotation ( red ) , and ( iii ) ancilla reset .we either assume that the preparative steps ( i ) and ( iii ) are ideal or induced by a corrector pulse as shown in fig .[ fig3 ] . ] the refreshing by the ancillary interactions is understood by considering the path traversed by the bloch vector of the ancillary qubit , as illustrated in fig .a drive lacking energy rotates the vector with smaller angular frequency , leaving the ancilla slightly biased towards the ground state and gaining energy in the process .similarly , excessive energy in the drive is transferred to the ancilla due to rotating it closer to the excited state .the hilbert space of this system is formally a composite space of the fock space of the drive and the two - level spaces of the register and ancilla qubits , the drive only interacts with one qubit at a time and therefore each interaction can be calculated in the subspace of the relevant qubit and the drive , assuming the qubits are not correlated .after the interaction , the drive state is extracted by tracing over the associated qubit space .namely , the iteration of the drive state is given by , \label{eq : nextstate}\ ] ] where acts in the subspace of the drive and the qubit in the protocol sequences described in the following sections .consider first the case where the ancilla qubits are perfectly reset during each cycle , and the gate we wish to apply on each register qubit is .the protocol is executed with the following steps : 1 .the drive state is initialized to the -minimizing state .a new register qubit is initialized in a random pure state , chosen uniformly from the bloch sphere .the drive interacts with the register qubit for interaction time [ eq . .the ancilla qubits are initialized to .the drive interacts with an ancilla qubit for interaction time .repeat for all ancillas .evaluate the average error of a hypothetical gate with eqs . and using the current drive state .continue from step ( ii ) . for gates other than ,the phases of the drive and ancillas , as well as the interaction time in step ( iii ) , but not step ( v ) , would be shifted accordingly .average error of gates generated by an itinerant drive pulse which initially had an average photon number and has reached the steady state due to ancilla refreshing .the drive is set to interact with ideal ancillas ( ) per cycle as indicated , leading to effective refreshing of the drive state .the dashed line indicates the lower bound of error which is achieved either with a disposable optimal pulse or with a pulse refreshed by infinitely many ideal ancillas .the inset shows the average gate error as a function of for . ]we numerically simulate the evolution of the drive and evaluate the average error of the gate for a register qubit after each cycle . 
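a minimal sketch of one such cycle ( one register gate followed by a single ancilla refresh ) can be written with qutip . the fock - space cutoff , the semiclassical timing condition , and the omission of squeezing are simplifying assumptions for illustration ; this is not the exact simulation code behind the reported figures .

```python
import numpy as np
from qutip import basis, coherent, destroy, sigmam, sigmap, tensor

N, g, nbar = 60, 1.0, 20          # Fock cutoff, coupling, mean photon number (assumed values)
a = destroy(N)
H = g * (tensor(a, sigmap()) + tensor(a.dag(), sigmam()))   # resonant JC interaction

def apply_drive(drive, qubit, theta):
    """Let the drive interact with one qubit for the time that would give a theta
    rotation for a classical field of amplitude sqrt(nbar), then trace out the partner."""
    t = theta / (2 * g * np.sqrt(nbar))          # semiclassical timing condition (assumption)
    U = (-1j * H * t).expm()
    joint = U * tensor(drive, qubit) * U.dag()
    return joint.ptrace(0), joint.ptrace(1)      # reduced drive, reduced qubit

alpha = np.sqrt(nbar)
drive = coherent(N, alpha).proj()                # coherent drive; squeezing omitted for brevity

# step (iii): pi rotation on a register qubit prepared in its ground state (basis(2, 1) here)
register = basis(2, 1).proj()
drive, register = apply_drive(drive, register, np.pi)

# steps (iv)-(v): refresh against one ancilla prepared in the equal superposition
ancilla = ((basis(2, 0) + basis(2, 1)).unit()).proj()
drive, _ = apply_drive(drive, ancilla, np.pi)

print("drive purity after one gate and one refresh:", (drive * drive).tr().real)
```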
during the protocol , the average error will increase from its initial lower - bound value at varying rates depending on the randomized states of the register qubits .we find that after many cycles , the drive reaches a steady state that generates the desired gates with a predictable average error .with 13 ancillas per cycle , the average error saturates after a hundred cycles ; with ten or more ancillas , the saturation takes less than ten cycles .if no corrective ancillas are used , the average error eventually reaches .figure [ fig5 ] shows how the eventual error level depends on the number of photons and ancillas .the average gate error approaches its theoretical lower bound , in the limit of many drive - refreshing ancilla qubits . for smaller rotation angles ,qualitatively similar results are obtained with more slowly accumulating error .thus a single itinerant drive pulse supplied with ideal ancilla states can generate an infinite number of high - fidelity gates . in the previous section , the qubits in the register were assumed to be essentially uncorrelated to justify the partial tracing over each qubit after the respective interaction .here we demonstrate the beneficial performance of our method in the case where the register qubits are maximally entangled . we initialize the register of qubits in the greenberger horne zeilinger ( ghz ) state .the control protocol is physically the same as in the previous section : the drive interacts with only one qubit at a time to implement a single - qubit gate and is refreshed by ideally prepared ancillas between each such gate .the target operation on the register is thus . due to the entangled register, the temporal evolution operators must be calculated in the hilbert space or for interactions between the drive and a register qubit , or drive and the ancilla , respectively .no partial trace over any register qubit is taken .after the drive has interacted with every register qubit once , the state of the register has transformed into and the total transformation error is computed as .\ ] ] we divide this error by the number of qubits to obtain the effective error per gate , . state preparation error per qubit for a register of qubits initially in a ghz state .the target gate is an rotation for all qubits individually , implemented by a squeezed state of photons ( ) that is refreshed by ideal ancillas per cycle .the circles represent the data , whereas the coloured lines extend the line segments between the first two data points , to distinguish deviations from linear behaviour .the black dashed line represents the error obtained using disposable pulses of constant photon number .the dotted line is the error due to disposable pulses of constant total energy . 
]results of a simulation for an gate with the initial drive state are shown in fig .a behaviour similar to fig .[ fig5 ] is observed : with enough ancillary corrections between the register gates , the error produced by an itinerant drive can be reduced to the level given by individual pulses .the figure also suggests that even without corrections , reusing a drive of certain energy is more beneficial in practice than dividing the same amount of photons into individual , weaker disposable pulses .thus we conclude that regardless of the state of the register , refreshment of a drive pulse likely serves to improve the trade - off between gate error and required energy .the above case of entangled qubits also provides a way to compare our results to the previous work by gea - banacloche and ozawa , where they studied a register in a ghz state that was operated by a drive of photons on average .they showed that the maximum error of the gate in this system scales as per qubit .this scaling was used to argue that a pulse of average photon number can not outperform individual pulses of average photons , although their performance was not compared explicitly .the key differences here are that ref . does not consider the possibility of using ancillary qubits , and that it employs a definition of error which also accounts for the infidelity of the drive state .our results suggest that even though the errors due to both reused and disposable pulses of equal total energy increase almost linearly with , the prefactor of the former is much smaller and can be greatly improved by the refreshing protocol .average gate error as a function of the total mean number of initial photons , for and for , divided by the number of register gates generated .the ancilla states are non - ideally prepared by a corrector pulse initially in state . during the protocol ,the curve advances from right to left and the results are averaged over multiple simulations .the dashed line indicates the lower bound of error which is achieved either with a disposable optimal pulse or with a pulse refreshed by infinitely many ideal ancillas . ]the total energy consumption of the protocol can be meaningfully estimated only if the method and energy cost of the ancilla preparation is specified . to this end, we propose to prepare the ancillas by a circulating corrector pulse shown in fig .[ fig3 ] . in the full protocol ,the ancilla qubits are first prepared in their ground state and then controlled by the corrector pulse from cycle to cycle . with opposite phase and half the interaction time compared with the drive, the corrector pulse applies an gate on the ancilla before and after a gate introduced by the drive pulse . for simplicity , we assume that the state of the register is separable .the full protocol is given by the following steps : 1 .the drive state is initialized to , the corrector pulse to and all ancillas to the ground state .a new register qubit is initialized in a random pure state .the drive interacts with the register qubit with interaction time [ eq . .4 . an ancilla qubit interacts sequentially with the corrector , the drive , and the corrector again , with interaction times , , and , respectively .repeat for all other ancillas .5 . evaluate the average error of a hypothetical gate with eqs . 
and using the current drive state .continue from step ( ii ) .in addition to computing the drive state after each interaction , the state of the interacting qubit is also extracted for subsequent use by a partial trace over the drive degrees of freedom .this is justified if the ancilla qubits do not become strongly correlated during the evolution .this approximation is more accurate the closer the control pulses are to classical pulses which do not induce entanglement .since all ancilla qubits are prepared to the ground state , the energy consumption fully arises from the drive and corrector pulses , both of which have the initial average energy .thus , the average energy consumption per register gate is , where is the number of elapsed cycles , or equally gates generated . in the case where the drive - refreshing protocol is not used , , we have .results from multiple simulations are averaged and shown in fig .in contrast to the ideal case , the system accumulates error over repeated cycles and the average gate error does not saturate .nevertheless , we find that with a sufficient number of ancillary qubit interactions between the register gates , the average error remains nearly constant for a large number of successive gates. the protocol can be stopped before the error reaches a desired threshold .this shows that the total energy cost per register gate is effectively reduced to orders of magnitude below the lower bound for disposable pulses .in fact , fig .[ fig7 ] suggests that the gate error may be , in theory , reduced indefinitely without increasing the power consumption by using more energetic pulses .in this work , we derived the greatest lower bound for the error of a single - qubit gate implemented with a single resonant control mode of certain mean energy .in contrast to previous work , our method for obtaining the bound is not restricted to any particular gate or state of the qubit drive system .the method can also be used to find the quantum state of the drive mode that minimizes the average gate error , or alternatively the transformation error for a chosen initial qubit state .specifically , we found that the lower bounds for rotations about axes in the -plane are achieved by squeezing the quantum state of a coherent drive pulse by an amount that depends on the target gate .together with the recent result that squeezing also significantly improves the phase gate in the dispersive regime , our results suggest that squeezing may generally yield useful improvements in different control schemes .this calls for experimental studies on outperforming the widely - used coherent state .importantly , our results also impose a lower bound on the energy consumption of individually driven qubits . delivering the required power to the qubit level , possibly through a series of attenuators ,implies heat management challenges that must be addressed in future large - scale quantum computers . 
as a solution, we introduced a concrete protocol where an itinerant control pulse is used to generate multiple gates and is refreshed between them to avoid loss of gate fidelity .the refreshing process may also prove useful in correcting the phase and amplitude errors of a noisy control pulse .our protocol can possibly be realized in some form with future low - loss microwave components such as photon routers , circulators , and nanoelectromechanical systems .technical limitations in the quality of these devices will set in practice the trade - off between the achievable gate fidelity and the dissipated power . in the future, our work can be extended to error bounds for 2-qubit gates , state preservation , pulse amplification , and propagating control pulses composed of a continuum of bosonic modes .we thank paolo solinas and benjamin huard for useful discussions .this work was supported by the european research council under starting independent researcher grant no .278117 ( singleout ) and under consolidator grant no .681311 ( quess ) .we also acknowledge funding from the academy of finland through its centres of excellence program ( grant nos 251748 and 284621 ) and grant ( no .286215 ) and from the finnish cultural foundation .we estimate the power required by a superconductor - based quantum computer solving a 2000-bit factorization problem , stabilized by a surface code . for this particular computation ,the needed number of physical qubits has been estimated by fowler to be .we assume that the physical qubits are controlled with typical coherent microwave pulses and that gates are completed in equal time and with lower power than gates .the average power needed during one surface code cycle is calculated by counting the frequency of measurements , , , and cnot operations , and by taking a duration - weighted average of the corresponding powers .the operation times depend on implementation . using operation times achieved in ref . , ns , ns , and ns , for -rotations , controlled phase gates , and measurements , respectively , and assuming that our code executes as many operations in parallel as possible , the average power per physical qubit is approximately where the s denote the average drive powers for the the above - mentioned operations . for simplicity, we neglect the two - qubit gates and measurements and use .typical powers at the chip are of the order of w , after being generated in the room temperature and attenuated by tens of decibels on their way to roughly 10-mk base temperature . using only db of attenuation at the base temperature , the total power dissipation here becomes mw .such power level is much higher than the typical cooling power of in state - of - the - art dilution refrigerators at 10 mk .note that using an open transmission line is expected to consume more power than required in the single - mode case considered in sec .[ sec : optimization ] . the average energy density in a transmission line is given by , where is the capacitance per unit length and is the root mean square of the voltage . 
in a time interval , a propagating drive pulse advances a distance , effectively transporting a power of , where is the photon wavelength .in comparison , consider a resonator which is used to apply to the qubit for an equal operation time .the resonator requires a power , and with a typical qubit frequency of ghz , the ratio between the powers is .thus qubit control using propagating photons in a transmission line seems to lead to orders of magnitude higher power consumption than our single - mode case .however , a more comprehensive study employing the quantization of the transmission line is required to reach accurate estimates .we leave such study for future research .finally , let us consider the lower bound for the power to drive the qubits using disposable pulses .the minimum amount of photons ( see sec . [ minimizationmethod ] ) to produce the gate error used by fowler in ref . is photons at the qubit level . with ghz ,the corresponding powers are w and .this suggests that the lower bound for our example problem size is at the border where current refrigeration equipment fail to deliver the required cooling power , and hence significant increments in the problem size or non - ideal implementation of the suggested driving techniques call for inventive solutions to the emerging heat management problem .a way to avoid the attenuation at the base temperature would be to generate the control pulses at the chip level . to our knowledge, however , no present chip - level photon source is capable of producing pulses that are accurate and intense enough to induce quantum gates of high fidelity .furthermore , the operation efficiency of such devices needs to be sufficiently high to be a considerable alternative .typically microwave sources internally dissipate much more power than their maximum output .assuming the qubit drive system is initially in a pure state , , eq .reduces to \hat{k}\c_{0}\hat{k}^{\dagger}\right\ } \nonumber \\ \quad\;=1-\sum_{k=0}^{\infty}\left|\b{\chi_{0},k}(\hat{k}^{\dagger}\otimes\hat{\mathbb{i}})\hat{u}(t)\k{\chi_{0},\sigma_{0}}\right|^{2 } , \label{eq : err1}\end{gathered}\ ] ] where is the desired gate , is the interaction time , is the temporal evolution operator , and are the photon number states .we represent the basis of the qubit space using vectors and , and explicitly write in this basis , , and is given by with the shorthand notations , , and . using the expressions above , the matrix element in eq .can be structured as where ,\nonumber \\\gamma_{01}^{k}(\vartheta,\varphi ) & = & -is_{k}\left[k_{12}^{*}\sin^{2}\left(\frac{\vartheta}{2}\right)+k_{11}^{*}\frac{1}{2}\sin\left(\vartheta\right)\e{i\varphi}\right],\nonumber \\\gamma_{10}^{k}(\vartheta,\varphi ) & = & -is_{k+1}\left[k_{21}^{*}\cos^{2}\left(\frac{\vartheta}{2}\right)+k_{22}^{*}\frac{1}{2}\sin\left(\vartheta\right)\e{-i\varphi}\right],\nonumber \\\gamma_{11}^{k}(\vartheta,\varphi ) & = & c_{k+1}\left[k_{22}^{*}\sin^{2}\left(\frac{\vartheta}{2}\right)+k_{21}^{*}\frac{1}{2}\sin\left(\vartheta\right)\e{i\varphi}\right].\nonumber\end{aligned}\ ] ] the error is thus given by we can define the transformation operator through its matrix elements in the photon number basis as , where and is a kronecker delta that is zero for any negative index .equation is thus reduced to the form given in eq . , that is , using eq ., the average error and its corresponding operator can be structured in a similar manner . defining the matrix elements of the operator as average error also assumes the form of eq . 
.as shown in ref . , the average error integrated over the bloch sphere is equal to the arithmetic mean of the error of six so - called axial states .this provides an alternative expression for the operator , namely , .\end{aligned}\ ] ] we can optimize the maximum error if there exists an initial qubit state which produces the highest error regardless of the drive state , i.e. , . specifically for gates , computing the gradients of with respect to and shows that the maximum point is virtually independent of the drive state , and that the maximum error is obtained with and , or equivalently , where is the angle between the horizontal rotation axis and the -axis . due to symmetry , the initial drive state that optimizes is an eigenvector of which corresponds to the mean error of these two states .the elements of the commutator 12 & 12#1212_12%12[1][0] link:\doibase 10.1098/rspa.1998.0167 [ * * , ( ) ] http://dx.doi.org/10.1038/19718 [ * * , ( ) ] http://dx.doi.org/10.1038/nature13171 [ * * , ( ) ] http://dx.doi.org/10.1038/nature14270 [ * * , ( ) ] link:\doibase 10.1126/science.282.5393.1473 [ * * , ( ) ] http://dx.doi.org/10.1038/nature15263 [ * * , ( ) ] http://dx.doi.org/10.1038/35005011 [ * * , ( ) ] link:\doibase 10.1038/nature18648 [ * * , ( ) ] http://dx.doi.org/10.1038/nature11449 [ * * , ( ) ] http://dx.doi.org/10.1038/nature09256 [ * * , ( ) ] link:\doibase 10.1103/revmodphys.87.307 [ * * , ( ) ] link:\doibase 10.1103/physreva.86.032324 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.89.057902 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.89.217901 [ * * , ( ) ] http://stacks.iop.org/1464-4266/7/i=10/a=017 [ * * , ( ) ] link:\doibase 10.1103/physreva.74.060301 [ * * , ( ) ] link:\doibase 10.1103/physreva.78.032331 [ * * , ( ) ] http://stacks.iop.org/1751-8121/42/i=22/a=225303 [ * * , ( ) ] link:\doibase 10.1103/physreva.87.022321 [ * * , ( ) ] link:\doibase 10.1109/proc.1963.1664 [ * * , ( ) ] link:\doibase 10.1080/09500349314551321 [ * * , ( ) ] link:\doibase 10.1103/physreva.93.040301 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.63.934 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.113.150402 [ * * , ( ) ] _ _ ( , ) link:\doibase 10.1103/physreva.39.1665 [ * * , ( ) ] link:\doibase 10.1103/physreve.89.052128 [ * * , ( ) ] _ _( , ) http://stacks.iop.org/1464-4266/4/i=1/a=201 [ * * , ( ) ] http://stacks.iop.org/1367-2630/16/i=4/a=045011 [ * * , ( ) ]link:\doibase 10.1126/science.1243289 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.116.180501 [ * * , ( ) ] link:\doibase 10.1103/physrevapplied.6.024009 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.107.073601 [ * * , ( ) ] http://dx.doi.org/10.1038/nphys2527 [ * * , ( ) ] link:\doibase 10.1016/s0375 - 9601(02)00069 - 5 [ * * , ( ) ]
in the near future , a major challenge in quantum computing is to scale up robust qubit prototypes to practical problem sizes and to implement comprehensive error correction for computational precision . due to inevitable quantum uncertainties in resonant control pulses , increasing the precision of quantum gates comes at the expense of increased energy consumption . consequently , the power dissipated in the vicinity of the processor of a well-working large-scale quantum computer seems unacceptably large for typical systems requiring low operation temperatures . here , we introduce a method for qubit driving and show that it serves to decrease the single-qubit gate error without increasing the average power dissipated per gate . previously , the single-qubit gate error induced by a bosonic drive mode has been considered to be inversely proportional to the energy of the control pulse , but we circumvent this bound by reusing and correcting itinerant control pulses . thus our work suggests that heat dissipation does not pose a fundamental limitation , but rather a practical challenge to be met in future implementations of large-scale quantum computers .
the lisa observatory has incredible science potential , but that potential can only be fully realized by employing advanced data analysis techniques .lisa will explore the low frequency portion of the gravitational wave spectrum , which is thought to be home to a vast number of sources .since gravitational wave sources typically evolve on timescales that are long compared to the gravitational wave period , individual low frequency sources will be `` on '' for large fractions of the nominal three year lisa mission lifetime .moreover , unlike a traditional telescope , lisa can not be pointed at a particular point on the sky .the upshot is that the lisa data stream will contain the signals from tens of thousands of individual sources , and ways must be found to isolate individual voices from the crowd .this `` cocktail party problem '' is the central issue in lisa data analysis .the types of sources lisa is expected to detect include galactic and extra - galactic compact stellar binaries , super massive black hole binaries , and extreme mass ratio inspirals of compact stars into supermassive black holes ( emris ) .other potential sources include intermediate mass black hole binaries , cosmic strings , and a cosmic gravitational wave background produced by processes in the early universe . in the case of compact stellar binaries and emris ,the number of sources is likely to be so large that it will be impossible to resolve all the sources individually , so that there will be a residual signal that is variously referred to as a confusion limited background or confusion noise .it is important that this confusion noise be made as small as possible so as not to hinder the detection of other high value targets .several estimates of the confusion noise level have been made , and they all suggest that unresolved signals will be the dominant source of low frequency noise for lisa .however , these estimates are based on assumptions about the efficacy of the data analysis algorithms that will be used to identify and regress sources from the lisa data stream , and it is unclear at present how reasonable these assumptions might be .indeed , the very notion that one can first clean the data stream of one type of signal before moving on to search for other targets is suspect as the gravitational wave signals from different sources are not orthogonal .for example , when the signal from a supermassive black hole binary sweeps past the signal from a white dwarf binary of period , the two signals will have significant overlap for a time interval equal to the geometric mean of and , where is the time remaining before the black holes merge .thus , by a process dubbed `` the white dwarf transform , '' it is possible to decompose the signal from a supermassive black hole binary into signals from a collection of white dwarf binaries . as described in [ cocktail ] , optimal filtering of the lisa data would require the construction of a filter bank that described the signals from every source that contributes to the data stream . in principle one could construct a vast template bank describing all possible sources and look for the best match with the data . 
in practice the enormous size of the search space andthe presence of unmodeled sources renders this direct approach impractical .possible alternatives to a full template based search include iterative refinement of a source - by - source search , ergodic exploration of the parameter space using markov chain monte carlo ( mcmc ) algorithms , darwinian optimization by genetic algorithms , and global iterative refinement using the maximum entropy method ( mem ) .each approach has its strengths and weakness , and at this stage it is not obvious which approach will prove superior .here we apply the popular markov chain monte carlo method to simulated lisa data .this is not the first time that mcmc methods have been applied to gravitational wave data analysis , but it is first outing with realistic simulated lisa data .our simulated data streams contain the signals from multiple galactic binaries .previously , mcmc methods have been used to study the extraction of coalescing binary and spinning neutron star signals from terrestrial interferometers .more recently , mcmc methods have been applied to a simplified toy problem that shares some of the features of the lisa cocktail party problem .these studies have shown that mcmc methods hold considerable promise for gravitational wave data analysis , and offer many advantages over the standard template grid searches .for example , the emri data analysis problem is often cited as the greatest challenge facing lisa science . neglecting the spin of the smaller body yields a 14 dimensional parameter space , which would require templates to explore in a grid based search .this huge computational cost arises because grid based searches scale geometrically with the parameter space dimension .in contrast , the computational cost of mcmc based searches scale linearly with the . in fields such as finance ,mcmc methods are routinely applied to problems with , making the lisa emri problem seem trivial in comparison .a _ google _search on `` markov chain monte carlo '' returns almost 250,000 results , and a quick scan of these pages demonstrates the wide range of fields where mcmc methods are routinely used .we found it amusing that one of the _ google _ search results is a link to the _ pagerank _ mcmc algorithm that powers the _ google _ search engine .the structure of the paper follows the development sequence we took to arrive at a fast and robust mcmc algorithm . in [ cocktail ]we outline the lisa data analysis problem and the particular challenges posed by the galactic background .a basic mcmc algorithm is introduced in [ mcmc7 ] and applied to a full 7 parameter search for a single galactic binary .a generalized multi - channel , multi - source f - statistic for reducing the search space from to is described in [ fstat ] .the performance of a basic mcmc algorithm that uses the f - statistic is studied in [ mcmc_f ] and a number of problems with this simple approach are identified . 
a more advanced mixed mcmc algorithm that incorporates simulated annealingis introduced in [ mcmc_mix ] and is successfully applied to multi - source searches .the issue of model selection is addressed in [ bayes ] , and approximate bayes factorare calculated by super - cooling the markov chains to extract maximum likelihood estimates .we conclude with a discussion of future refinements and extensions of our approach in [ conclude ] .space based detectors such as lisa are able to return several interferometer outputs .the strains registered in the interferometer in response to a gravitational wave pick up modulations due to the motion of the detector .the orbital motion introduces amplitude , frequency , and phase modulation into the observed gravitational wave signal .the amplitude modulation results from the detector s antenna pattern being swept across the sky , the frequency modulation is due to the doppler shift from the relative motion of the detector and source , and the phase modulation results from the detector s varying response to the two gravitational wave polarizations .these modulations encode information about the location of the source .the modulations spread a monochromatic signal over a bandwidth , where is the co - latitude of the source and is the modulation frequency . in the low frequency limit , where the wavelengths are largecompared to the armlengths of the detector , the interferometer outputs can be combined to simulate the response of two independent 90 degree interferometers , and , rotated by 45 degrees with respect to each other .this allows lisa to measure both polarizations of the gravitational wave simultaneously .a third combination of signals in the low frequency limit yields the symmetric sagnac variable , which is insensitive to gravitational waves and can be used to monitor the instrument noise .when the wavelengths of the gravitational waves become comparable to the size of the detector , which for lisa corresponds to frequencies above 10 mhz , the interferometry signals can be combined to give three independent time series with comparable sensitivities .the output of each lisa data stream can be written as here describes the response registered in detector channel to a source with parameters .the quantity denotes the combined response to a collection of sources with total parameter vector and denotes the instrument noise in channel .extracting the parameters of each individual source from the combined response to all sources defines the lisa cocktail party problem . in practiceit will be impossible to resolve all of the millions of signals that contribute to the lisa data streams .for one , there will not be enough bits of information in the entire lisa data archive to describe all sources in the universe with signals that fall within the lisa band .moreover , most sources will produce signals that are well below the instrument noise level , and even after optimal filtering most of these sources will have signal to noise ratios below one .a more reasonable goal might be to provide estimates for the parameters describing each of the sources that have integrated signal to noise ratios ( snr ) above some threshold ( such as ) , where it is now understood that the noise includes the instrument noise , residuals from the regression of bright sources , and the signals from unresolved sources . 
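to make the effect of the orbital modulations concrete, the following python sketch (with illustrative source parameters, not the ones used later in the text) phase-modulates a monochromatic signal with the doppler shift from a 1 au orbit and counts how many frequency bins the power spreads over; the carson-rule estimate of roughly 2(beta + 1) bins, with beta the modulation index, gives a rough analytic expectation.

```python
# illustrative sketch (parameter values are assumptions): the doppler phase
# modulation from the detector's orbital motion spreads a monochromatic source
# into sidebands spaced by 1/year around the carrier frequency.
import numpy as np

year = 3.15581498e7            # seconds
AU = 1.495978707e11            # metres
c = 2.99792458e8               # m/s

T = year                       # one year of observation
N = 2 ** 22
dt = T / N
t = np.arange(N) * dt

fbin = 94674                   # put the carrier exactly on a frequency bin
f0 = fbin / T                  # roughly 3 mhz (assumption)
colat = np.pi / 3              # source co-latitude (assumption)

beta = 2 * np.pi * f0 * (AU / c) * np.sin(colat)   # doppler modulation index in radians
phase = 2 * np.pi * f0 * t + beta * np.cos(2 * np.pi * t / year)
h = np.sin(phase)              # amplitude and polarization modulation ignored here

hf = np.fft.rfft(h)
power = np.abs(hf) ** 2
band = power > 1e-3 * power.max()

print(f"modulation index beta = {beta:.2f} rad")
print(f"carson-rule spread: about {int(2 * (beta + 1))} bins of width 1/year")
print(f"bins holding more than 0.1% of the peak power: {band.sum()}")
```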
while the noise will be neither stationary nor gaussian , it is not unreasonable to hope that the departures from gaussianity and stationarity will be mild .it is well know that matched filtering is the optimal linear signal processing technique for signals with stationary gaussian noise .matched filtering is used extensively in all fields of science , and is a popular data analysis technique in ground based gravitational wave astronomy .switching to the fourier domain , the signal can be written as , where includes instrument noise and confusion noise , and the signals are described by parameters . using the standard noiseweighted inner product for the independent data channels over a finite observation time , a wiener filter statistic can be defined : the noise spectral density is given in terms of the autocorrelation of the noise here and elsewhere angle brackets denote an expectation value .an estimate for the source parameters can be found by maximizing .if the noise is gaussian and a signal is present , will be gaussian distributed with unit variance and mean equal to the integrated signal to noise ratio the optimal filter for the lisa signal ( [ lisa_sig ] ) is a matched template describing all resolvable sources . the number of parameters required to describe a source ranges from 7 for a slowly evolving circular galactic binary to 17 for a massive black hole binary .a reasonable estimate for is around , so the full parameter space has dimension . since the number of templates required to uniformly cover a parameter space grows exponentially with , a grid based search using the full optimal filter is out of the question .clearly an alternative approach has to be found .moreover , the number of resolvable sources is not known a priori , so some stopping criteria must be found to avoid over - fitting the data .existing approaches to the lisa cocktail party problem employ iterative schemes .the first such approach was dubbed `` gclean '' due to its similarity with the `` clean '' algorithm that is used for astronomical image reconstruction .the `` gclean '' procedure identifies and records the brightest source that remains in the data stream , then subtracts a small amount of this source .the procedure is iterated until a prescribed residual is reached , at which time the individual sources are reconstructed from the subtraction record . a much faster iterative approach dubbed`` slice & dice '' was recently proposed that proceeds by identifying and fully subtracting the brightest source that remains in the data stream . 
a global least squares re-fit to the current list of sources is then performed , and the new parameter record is used to produce a regressed data stream for the next iteration . bayes factors are used to provide a stopping criterion . there is always the danger with iterative approaches that the procedure `` gets off on the wrong foot , '' and is unable to find its way back to the optimal solution . this can happen when two signals have a high degree of overlap . a very different approach to the lisa source confusion problem is to solve for all sources simultaneously using ergodic sampling techniques . markov chain monte carlo ( mcmc ) is a method for estimating the posterior distribution , , that can be used with very large parameter spaces . the method is now in widespread use in many fields , and is starting to be used by astronomers and cosmologists . one of the advantages of mcmc is that it combines detection , parameter estimation , and the calculation of confidence intervals in one procedure , as everything one can ask about a model is contained in . another nice feature of mcmc is that there are implementations that allow the number of parameters in the model to be variable , with built-in penalties for using too many parameters in the fit . in an mcmc approach , parameter estimates from wiener matched filtering are replaced by the bayes estimator , which requires knowledge of the posterior distribution of ( _ i.e. _ the distribution of conditioned on the data ) . by bayes theorem , the posterior distribution is related to the prior distribution and the likelihood by . until recently the bayes estimator was little used in practical applications as the integrals appearing in ( [ be ] ) and ( [ post ] ) are often analytically intractable . the traditional solution has been to use approximations to the bayes estimator , such as the maximum likelihood estimator described below ; however , advances in the markov chain monte carlo technique allow direct numerical estimates to be made . when the noise is a normal process with zero mean , the likelihood is given by , where the normalization constant is independent of . in the large snr limit the bayes estimator can be approximated by finding the dominant mode of the posterior distribution , , which finn and cutler & flanagan refer to as a maximum likelihood estimator . other authors define the maximum likelihood estimator to be the value of that maximizes the likelihood , . the former has the advantage of incorporating prior information , but the disadvantage of not being invariant under parameter space coordinate transformations . the latter definition corresponds to the standard definition used by most statisticians , and while it does not take into account prior information , it is coordinate invariant . the two definitions give the same result for uniform priors , and very similar results in most cases ( the exception being where the priors have a large gradient at maximum likelihood ) . the standard definition of the likelihood yields an estimator that is identical to wiener matched filtering . absorbing normalization factors by adopting the inverted relative likelihood , we have . in the gravitational wave literature the quantity is usually referred to as the log likelihood , despite the inversion and rescaling .
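a minimal numerical illustration of these noise-weighted quantities is sketched below in python; the power spectral density, the toy narrow-band template, and the one-channel treatment are placeholder assumptions rather than the lisa noise model and the two-channel analysis used in the text.

```python
# sketch of the frequency-domain noise-weighted inner product and the
# matched-filter log likelihood; psd model and signal are placeholders.
import numpy as np

rng = np.random.default_rng(0)
T = 3.15581498e7                     # one year of data, in seconds
df = 1.0 / T
freqs = np.arange(1, 200001) * df

def Sn(f):
    # toy one-sided noise psd (stand-in for instrument plus confusion noise)
    return 1e-40 * (1.0 + (1e-3 / f) ** 4)

def inner(a, b, Sn_f, df):
    # (a|b) = 4 Re sum_f a(f) conj(b(f)) / Sn(f) df; sum over channels if present
    return 4.0 * df * np.real(np.sum(a * np.conj(b) / Sn_f))

# toy narrow-band "template" h(f); the data is s = h + n
h = np.zeros_like(freqs, dtype=complex)
band = slice(100000, 100020)
h[band] = 1e-16 * np.exp(2j * np.pi * rng.random(20))   # arbitrary toy amplitude

Sn_f = Sn(freqs)
# stationary gaussian noise, variance Sn/(4 df) per real and imaginary part
sigma = np.sqrt(Sn_f / (4.0 * df))
n = sigma * (rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size))
s = h + n

snr_opt = np.sqrt(inner(h, h, Sn_f, df))                # optimal snr, sqrt((h|h))
log_like = inner(s, h, Sn_f, df) - 0.5 * inner(h, h, Sn_f, df)
print("optimal snr:", round(float(snr_opt), 2), " log likelihood:", round(float(log_like), 2))
```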
note that the maximum likelihood estimator ( mle ) , , is found by solving the coupled set of equations .parameter uncertainties can be estimated from the negative hessian of , which yields the fisher information matrix in the large snr limit the mle can be found by writing and taylor expanding ( [ ml ] ) . setting yields the lowest order solution expectation value of the maximum of the log likelihood is then this value exceeds that found in ( [ comp ] ) by an amount that depends on the total number of parameters used in the fit , , reflecting the fact that models with more parameters generally give better fits to the data .deciding how many parameters to allow in the fit is an important issue in lisa data analysis as the number of resolvable sources is not known a priori .this issue does not usually arise for ground based gravitational wave detectors as most high frequency gravitational wave sources are transient .the relevant question there is whether or not a gravitational wave signal is present in a section of the data stream , and this question can be dealt with by the neyman - pearson test or other similar tests that use thresholds on the likelihood that are related to the false alarm and false dismissal rates .demanding that - so it is more likely that a signal is present than not - and setting a detection threshold of yields a false alarm probability of 0.006 and a detection probability of 0.994 ( if the noise is stationary and gaussian ) .a simple acceptance threshold of for each individual signal used to fit the lisa data would help restrict the total number of parameters in the fit , however there are better criteria that can be employed .the simplest is related to the neyman - pearson test and compares the likelihoods of models with different numbers of parameters .for nested models this ratio has an approximately chi squared distribution which allows the significance of adding extra parameters to be determined from standard statistical tables .a better approach is to compute the bayes factor , which gives the relative weight of evidence for models and in terms of the ratio of marginal likelihoods here is the likelihood distribution for model and is the prior distribution for model .the difficulty with this approach is that the integral in ( [ marginal ] ) is hard to calculate , though estimates can be made using the laplace approximation or the bayesian information criterion ( bic ) .the laplace approximation is based on the method of steepest descents , and for uniform priors yields where is the maximum likelihood for the model , is the volume of the model s parameter space , and is the volume of the uncertainty ellipsoid ( estimated using the fisher matrix ) .models with more parameters generally provide a better fit to the data and a higher maximum likelihood , but they get penalized by the term which acts as a built in occam s razor .we begin by implementing a basic mcmc search for galactic binaries that searches over the full dimensional parameter space using the metropolis - hastings algorithm .the idea is to generate a set of samples , , that correspond to draws from the posterior distribution , . to do thiswe start at a randomly chosen point and generate a markov chain according to the following algorithm : using a proposal distribution , draw a new point .evaluate the hastings ratio accept the candidate point with probability , otherwise remain at the current state ( metropolis rejection ) . 
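the update rule just described can be written in a few lines; the sketch below uses a toy two-parameter gaussian posterior and a symmetric random-walk proposal (so the hastings ratio reduces to a ratio of posterior values), and is only meant to illustrate the mechanics, not the galactic-binary search of the following sections.

```python
# minimal metropolis-hastings sketch (an illustration, not the paper's code);
# the target posterior, proposal width, and chain length are placeholders.
import numpy as np

rng = np.random.default_rng(1)

def log_posterior(x):
    # toy target: a correlated 2-d gaussian standing in for p(lambda | s)
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    return -0.5 * x @ np.linalg.solve(cov, x)

def propose(x):
    # symmetric random-walk proposal, q(x|y) = q(y|x), so the hastings
    # ratio reduces to the ratio of posterior densities
    return x + 0.5 * rng.standard_normal(x.shape)

def metropolis_hastings(n_steps, x0):
    chain = [x0]
    logp = log_posterior(x0)
    accepted = 0
    for _ in range(n_steps):
        y = propose(chain[-1])
        logp_y = log_posterior(y)
        # accept with probability min(1, H); proposal terms cancel here
        if np.log(rng.random()) < logp_y - logp:
            chain.append(y)
            logp = logp_y
            accepted += 1
        else:
            chain.append(chain[-1])
    return np.array(chain), accepted / n_steps

chain, acc = metropolis_hastings(20000, np.zeros(2))
burned = chain[5000:]                 # discard the burn-in portion of the chain
print("acceptance rate:", round(acc, 3))
print("posterior mean:", burned.mean(axis=0))
print("posterior covariance:\n", np.cov(burned.T))
```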
remarkably , this sampling scheme produces a markov chain with a stationary distribution equal to the posterior distribution of interest , , regardless of the choice of proposal distribution .a concise introduction to mcmc methods can be found in the review paper by andrieu _et al _ .on the other hand , a poor choice of the proposal distribution will result in the algorithm taking a very long time to converge to the stationary distribution ( known as the burn - in time ) .elements of the markov chain produced during the burn - in phase have to be discarded as they do not represent the stationary distribution .when dealing with large parameter spaces the burn - in time can be very long if poor techniques are used .for example , the metropolis sampler , which uses symmetric proposal distributions , explores the parameter space with an efficiency of at most , making it a poor choice for high dimension searches .regardless of the sampling scheme , the mixing of the markov chain can be inhibited by the presence of strongly correlated parameters .correlated parameters can be dealt with by making a local coordinate transformation at to a new set of coordinates that diagonalises the fisher matrix , .we tried a number of proposal distributions and update schemes to search for a single galactic binary .the results were very disappointing .bold proposals that attempted large jumps had a very poor acceptance rate , while timid proposals that attempted small jumps had a good acceptance rate , but they explored the parameter space very slowly , and got stuck at local modes of the posterior .lorentzian proposal distributions fared the best as their heavy tails and concentrated peaks lead to a mixture of bold and timid jumps , but the burn in times were still very long and the subsequent mixing of the chain was torpid .the mcmc literature is full of similar examples of slow exploration of large parameter spaces , and a host of schemes have been suggested to speed up the burn - in .many of the accelerated algorithms use adaptation to tune the proposal distribution .this violates the markov nature of the chain as the updates depend on the history of the chain .more complicated adaptive algorithms have been invented that restore the markov property by using additional metropolis rejection steps .the popular delayed rejection method and reversible jump method are examples of adaptive mcmc algorithms .a simpler approach is to use a non - markov scheme during burn - in , such as adaptation or simulated annealing , then transition to a markov scheme after burn - in .since the burn - in portion of the chain is discarded , it does not matter if the mcmc rules are broken ( the burn - in phase is more like las vegas than monte carlo ) . before resorting to complex acceleration schemes we tried a much simpler approach that proved to be very successful .when using the metropolis - hastings algorithm there is no reason to restrict the updates to a single proposal distribution .for example , every update could use a different proposal distribution so long as the choice of distribution is not based on the history of the chain .the proposal distributions to be used at each update can be chosen at random , or they can be applied in a fixed sequence . our experience with single proposal distributions suggested that a scheme that combined a very bold proposal with a very timid proposal would lead to fast burn - in and efficient mixing .for the bold proposal we chose a uniform distribution for each of the source parameters . 
here is the amplitude , is the gravitational wave frequency , and are the ecliptic co-latitude and longitude , is the polarization angle , is the inclination of the orbital plane , and is the orbital phase at some fiducial time . the amplitudes were restricted to the range , and the frequencies to the range mhz ( the data snippet contained 100 frequency bins of width ) . a better choice would have been to use a cosine distribution for the co-latitude and inclination , but the choice is not particularly important . when multiple sources were present each source was updated separately during the bold proposal stage . for the timid proposal we used a normal distribution for each eigendirection of the fisher matrix , . the standard deviation for each eigendirection was set equal to , where is the corresponding eigenvalue of , and is the search dimension . the factor of ensures a healthy acceptance rate as the typical total jump is then . all sources were updated simultaneously during the timid proposal stage . note that the timid proposal distributions are not symmetric since . one set of bold proposals ( one for each source ) was followed by ten timid proposals in a repeating cycle . the ratio of the number of bold to timid proposals impacted the burn-in times and the final mixing rate , but ratios anywhere from 1:1 to 1:100 worked well . we used uniform priors , , for all the parameters , though once again a cosine distribution would have been better for and . two independent lisa data channels were simulated directly in the frequency domain using the method described in ref . , with the sources chosen at random using the same uniform distributions employed by the bold proposal . the data covers 1 year of observations , and the data snippet contains 100 frequency bins ( of width ) . the instrument noise was assumed to be stationary and gaussian , with position noise spectral density and acceleration noise spectral density . table [ tab9 ] : 7 parameter mcmc search for a single galactic binary . it is also interesting to compare the output of the 10 source mcmc search to the maximum likelihood one gets by starting at the true source parameters and then applying the super-cooling procedure ( in other words , cheating by starting in the neighborhood of the true solution ) . we found , and , which tells us that the mcmc solution , while getting two of the source parameters wrong , provides an equally good fit to the data . in other words , there is _ no _ data analysis algorithm that can fully deblend the two highly overlapping sources . our first pass at applying the mcmc method to lisa data analysis has shown the method to have considerable promise . the next step is to push the existing algorithm until it breaks . simulations of the galactic background suggest that bright galactic sources reach a peak density of one source per five frequency bins . we have shown that our current f-mcmc algorithm can handle a source density of one source per ten frequency bins across a one hundred bin snippet . we have yet to try larger numbers of sources as the current version of the algorithm employs the full dimensional fisher matrix in many of the updates , which leads to a large computational overhead . we are in the process of modifying the algorithm so that sources are first grouped into blocks that have strong overlap . each block is effectively independent of the others . this allows each block to be updated separately , while still taking care of any strongly correlated parameters that might impede mixing of the
chain .we have already seen some evidence that high local source densities pose a challenge to the current algorithm .the lesson so far has been that adding new , specially tailored proposal distributions to the mix helps to keep the chain from sticking at secondary modes of the posterior ( it takes a cocktail to solve the cocktail party problem ) . on the other hand ,we have also seen evidence of strong multi - modality whereby the secondary modes have likelihoods within a few percent of the global maximum . in those cases the chain tends to jump back and forth between modes before being forced into a decision by the super - cooling process that follows the main mcmc run .indeed , we may already be pushing the limits of what is possible using any data analysis method .for example , the 10 source search used a model with 70 parameters to fit 400 pieces of data ( 2 channels 2 fourier components 100 bins ). one of our goals is to better understand the theoretical limits of what can be achieved so that we know when to stop trying to improve the algorithm !it would be interesting to compare the performance of the different methods that have been proposed to solve the lisa cocktail party problem .do iterative methods like gclean and slice & dice or global maximization methods like maximum entropy have different strengths and weakness compared to mcmc methods , or do they all fail in the same way as they approach the confusion limit ?it may well be that methods that perform better with idealized , stationary , gaussian instrument noise will not prove to be the best when faced with real instrumental noise .p. bender __ , _ lisa pre - phase a report _ , ( 1998 ) . c. r. evans , i. iben & l. smarr , apj * 323 * , 129 ( 1987 ) . v. m. lipunov , k. a. postnov & m. e. prokhorov , a&a * 176 * , l1 ( 1987 ) .d. hils , p. l. bender & r. f. webbink , apj * 360 * , 75 ( 1990 ) .d. hils & p.l. bender , apj * 537 * , 334 ( 2000 ) .g. nelemans , l. r. yungelson & s. f. portegies zwart , a&a * 375 * , 890 ( 2001 ) .l. barack & c. cutler , phys .rev . d*69 * , 082005 ( 2004 ) .j. r. gair , l. barack , t. creighton , c. cutler , s. l. larson , e. s. phinney & m. vallisneri , class .* 21 * , s1595 ( 2004 ) .a. j. farmer & e. s. phinney , mon . not .. soc . * 346 * , 1197 ( 2003 ) .s. timpano , l. j. rubbo & n. j. cornish , gr - qc/0504071 ( 2005 ) .l. barack & c. cutler , phys .rev . d*70 * , 122002 ( 2004 ) .n. christensen & r. meyer , phys .rev . d*58 * , 082001 ( 1998 ) ; n. christensen & r. meyer , phys .rev . d*64 * , 022001 ( 2001 ) ; n. christensen , r. meyer & a. libson , class .* 21 * , 317 ( 2004 ). n. christensen , r. j. dupuis , g. woan & r. meyer , phys .rev . d*70 * , 022001 ( 2004 ) ; r. umstatter , r. meyer , r. j. dupuis , j. veitch , g. woan & n. christensen , gr - qc/0404025 ( 2004 ) .r. umstatter , n. christensen , m. hendry , r. meyer , v. simha , j. veitch , s. viegland & g. woan , gr - qc/0503121 ( 2005 ) . l. page , s. brin , r. motwani & t. winograd , stanford digital libraries working paper ( 1998 ) .m. tinto , j. w. armstrong & f. b. estabrook , phys .rev . d*63 * , 021101(r ) ( 2001 ) . c. cutler , phys .d * 57 * , 7089 ( 1998 ) . n. j. cornish & l. j. rubbo , phys . rev . d*67 * , 022001 ( 2003 ) .t. a. prince , m. tinto , s. l. larson & j. w. armstrong , phys .d*66 * , 122002 ( 2002 ) .c. w. 
helstrom , _ statistical theory of signal detection _, 2nd edition ( pergamon press , london , 1968 ) .wainstein and v.d .zubakov , _ extraction of signals from noise _( prentice - hall , englewood cliffs , 1962 ) .thorne , in _ 300 years of gravitation _ , edited by s.w . hawking and w. israel ( cambridge university press , cambridge , england , 1987 ) , p. 330 .schutz , in _ the detection of gravitational waves _ , edited by d.g .blair ( cambridge university press , cambridge , england , 1991 ) , p. 406 .sathyaprakash and s.v .dhurandhar , phys .d * 44 * , 3819 ( 1991 ) .s.v . dhurandhar and b.s .sathyaprakash , phys .d * 49 * , 1707 ( 1994 ) . c. cutler and . .flanagan , phys .d * 49 * , 2658 ( 1994 ) .r. balasubramanian and s.v .dhurandhar , phys .d * 50 * , 6080 ( 1994 ) .sathyaprakash , phys . rev .d * 50 * , 7111 ( 1994 ) .apostolatos , phys .d * 52 * , 605 ( 1996 ) .e. poisson and c.m .will , phys .d * 52 * , 848 ( 1995 ) .r. balasubramanian , b.s .sathyaprakash , and s.v .dhurandhar , phys .d * 53 * , 3033 ( 1996 ) .owen , phys .d * 53 * , 6749 ( 1996 ) .b.j . owen and b.s .sathyaprakash , phys .d * 60 * , 022002 ( 1999 ) .l. s. finn , phys .d * 46 * 5236 ( 1992 ) .p. jaranowski & a. krolak , phys .rev . d*49 * , 1723 ( 1994 ) .p. jaranowski , a. krolak & b. f. schutz , phys . rev .d*58 * 063001 ( 1998 ) . m. h. a. davis , in _ gravitational wave data analysis _, edited by b. f. schutz , ( kluwer academic , boston , 1989 ) .f. echeverria , phys .d * 40 * , 3194 ( 1989 ). n.j . cornish & s.l .larson , phys .d*67 * , 103001 ( 2003 ) .j. hgbom , astr .* 15 * , 417 ( 1974 ) .cornish , _ talk given at gr17 , dublin , july ( 2004 ) _ ; n.j .cornish , l.j .rubbo & r. hellings , _ in preparation _ ( 2005 ) .g. schwarz , ann . stats . *5 * , 461 ( 1978 ) . ,w. r. gilks , s. richardson & d. j. spiegelhalter , ( chapman & hall , london , 1996 ) .d. gamerman , _ markov chain monte carlo : stochastic simulation of bayesian inference _ , ( chapman & hall , london , 1997 ) .c. andrieu , n. de freitas , a. doucet & m. jordan , machine learning * 50 * , 5 ( 2003 ) .
the laser interferometer space antenna ( lisa ) is expected to simultaneously detect many thousands of low frequency gravitational wave signals . this presents a data analysis challenge that is very different to the one encountered in ground based gravitational wave astronomy . lisa data analysis requires the identification of individual signals from a data stream containing an unknown number of overlapping signals . because of the signal overlaps , a global fit to all the signals has to be performed in order to avoid biasing the solution . however , performing such a global fit requires the exploration of an enormous parameter space with a dimension upwards of 50,000 . markov chain monte carlo ( mcmc ) methods offer a very promising solution to the lisa data analysis problem . mcmc algorithms are able to efficiently explore large parameter spaces , simultaneously providing parameter estimates , error analysis , and even model selection . here we present the first application of mcmc methods to simulated lisa data and demonstrate the great potential of the mcmc approach . our implementation uses a generalized f - statistic to evaluate the likelihoods , and simulated annealing to speed convergence of the markov chains . as a final step we super - cool the chains to extract maximum likelihood estimates , and estimates of the bayes factors for competing models . we find that the mcmc approach is able to correctly identify the number of signals present , extract the source parameters , and return error estimates consistent with fisher information matrix predictions .
non - gaussian quantum states , endowed with properly enhanced nonclassical properties , may constitute powerful resources for the efficient implementation of quantum information , communication , computation and metrology tasks .indeed , it has been shown that , at fixed first and second moments , gaussian states _minimize _ various nonclassical properties .therefore , many theoretical and experimental efforts have been made towards engineering and controlling highly nonclassical , non - gaussian states of the radiation field ( for a review on quantum state engineering , see e.g. ) . in particular ,several proposals for the generation of non - gaussian states have been presented , and some successful ground - breaking experimental realizations have been already performed . concerning continuous - variable ( cv ) quantum teleportation , to datethe experimental demonstration of the vaidman - braunstein - kimble ( vbk ) teleportation protocol has been reported both for input coherent states , and for squeezed vacuum states .in particular , ref . has reported the teleportation of squeezing , and consequently of entanglement , between upper and lower sidebands of the same spatial mode .it is worth to remark that the efficient teleportation of squeezing , as well as of entanglement , is a necessary requirement for the realization of a quantum information network based on multi - step information processing . in this paper , adopting the vbk protocol , we study in full generality , e.g. including loss mechanisms and non - unity gain regimes , the teleportation of input single - mode coherent squeezed states using as non - gaussian entangled resources a class of non - gaussian entangled quantum states , the class of squeezed bell states .this class includes , for specific choices of the parameters , non - gaussian photon - added and photon - subtracted squeezed states . in tackling our goal, we use the formalism of the characteristic function introduced in ref . for an ideal protocol , and extended to the non - ideal instance in ref . . here , in analogy with the teleportation of coherent states , we first optimize the teleportation fidelity , that is , we look for the maximization of the overlap between the input and the output states . but the presence of squeezing in the unknown input state to be teleported prompts also an alternative procedure , depending on the physical quantities of interest . in fact , if one cares about reproducing in the most faithful way the initial state in phase - space , then the fidelity is the natural quantity that needs to be optimized . on the other hand , one can be interested in preserving as much as possible the squeezing degree at the output of the teleportation process , even at the expense of the condition of maximum similarity between input and output states . 
in this case, one aims at minimizing the difference between the output and input quadrature averages and the quadrature variances .it is important to observe that this distinction makes sense only if one exploits non - gaussian entangled resources endowed with tunable free parameters , so that enough flexibility is allowed to realize different optimization schemes .indeed , it is straightforward to verify that this is impossible using gaussian entangled resources .we will thus show that exploiting non - gaussian resources one can identify the best strategies for the optimization of different tasks in quantum teleportation , such as state teleportation vs teleportation of squeezing .comparison with the same protocols realized using gaussian resources will confirm the greater effectiveness of non - gaussian states vs gaussian ones as entangled resources in the teleportation of quantum states of continuous variable systems .the paper is organized as follows . in section [ secqtelep ], we introduce the single - mode input states and the two - mode entangled resources , and we recall the basics of both the ideal and the imperfect vkb quantum teleportation protocols . with respect to the instance of gaussian resources ( twin beam ) ,the further free parameters of the non - gaussian resource ( squeezed bell state ) allow one to undertake an optimization procedure to improve the efficiency of the protocols . in section [ sectelepfidelity ]we investigate the optimization procedure based on the maximization of the teleportation fidelity .we then analyze an alternative optimization procedure leading to the minimization of the difference between the quadrature variances of the output and input fields .this analysis is carried out in section [ secoptvar ] .we show that , unlike gaussian resources , in the instance of non - gaussian resources the two procedures lead to different results and , moreover , always allow one to improve on the optimization procedures that can be implemented with gaussian resources .finally , in section [ secconcl ] we draw our conclusions and discuss future outlooks .in this section , we briefly recall the basics of the ideal and imperfect vbk cv teleportation protocols ( for details see ref .the scheme of the ( cv ) teleportation protocol is the following .alice wishes to send to bob , who is at a remote location , a quantum state , drawn from a particular set according to a prior probability distribution .the set of input states and the prior distribution are known to alice and bob , however the specific state to be teleported that is prepared by alice remains unknown .alice and bob share a resource , e.g. a two - mode entangled state . the input state andone of the modes of the resource are available for alice , while the other mode of the resource is sent to bob .alice performs a suitable ( homodyne ) bell measurement , and communicates the result to bob exploiting a classical communication channel. 
then bob , depending on the result communicated by alice , performs a local unitary ( displacement ) transformation , and retrieves the output teleported state . the non-ideal ( realistic ) teleportation protocol includes mechanisms of loss and inefficiency : the photon losses occurring in the realistic bell measurements , and the noise arising in the propagation of optical fields in noisy channels ( fibers ) when the second mode of the resource is sent to bob . the photon losses occurring in the realistic bell measurements are modeled by placing in front of an ideal detector a fictitious beam splitter with non-unity transmissivity ( and corresponding non-zero reflectivity ) . the propagation in fiber is modeled by the interaction with a gaussian bath with an effective photon number , yielding a damping process with inverse-time rate . denoting by the input field mode , and by and , respectively , the first and the second mode of the entangled resource , the decoherence due to imperfect photo-detection in the homodyne measurement performed by alice involves the input field mode , and one mode of the resource , e.g. mode . throughout , we assume a pure entangled resource . indeed , it is simple to verify that considering mixed ( impure ) resources is equivalent to considering a suitable nonvanishing detection inefficiency . the degradation due to propagation in fiber affects the other mode of the resource , e.g. mode , which has to reach bob s remote place at the output stage . denoting now by and the projectors corresponding , respectively , to a generic pure input single-mode state and a generic pure two-mode entangled resource , the characteristic function of the single-mode output field can be written as :
\begin{split}
\chi_{out}(\alpha) = e^{-\gamma_{\tau,r}|\alpha|^{2}}\,\chi_{in}\left(g t\,\alpha\right)\,\chi_{res}\left(g t\,\alpha^{*};e^{-\frac{\tau}{2}}\,\alpha\right),
\end{split}
\label{chioutfinale}
where is the glauber displacement operator , is the characteristic function of the resource , is the gain factor of the protocol , is the scaled dimensionless time proportional to the fiber propagation length , and the function is defined as : we assume in principle to have some knowledge about the characteristics of the experimental apparatus : the inefficiency ( or ) of the photo-detectors , and the loss parameters and of the noisy communication channel . we consider as input state a single-mode coherent and squeezed ( cs ) state with unknown squeezing parameter and unknown coherent amplitude . we then consider as non-gaussian entangled resource the two-mode squeezed bell ( sb ) state , defined as : here is , as before , the displacement operator , is the single-mode squeezing operator , is the two-mode squeezing operator , with denoting the annihilation operator for mode , is the two-mode fock state ( of modes 1 and 2 ) with photons in the first mode and photons in the second mode , and and are two intrinsic free parameters of the resource entangled state , in addition to and , which can be exploited for optimization . note that particular choices of the angle in the class of squeezed bell states eq . ( [ squeezbell ] ) allow one to recover different instances of two-mode gaussian and non-gaussian entangled states : for the gaussian twin beam ( twb ) ; for the two-mode photon-added squeezed ( pas ) state and the two-mode photon-subtracted squeezed ( pss ) state . the last two non-gaussian states are defined as : and are already experimentally realizable with current technology .
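the structure of eq. ( [ chioutfinale ] ) is easy to exercise numerically. the sketch below transcribes it as a function of pluggable input and resource characteristic functions and, as a consistency check in the ideal limit (unit gain, no losses), recovers the textbook fidelity 1/(1 + e^{-2r}) for a coherent-state input teleported with a gaussian twin-beam resource; the squeezed-bell resources and the explicit form of the damping function are not reproduced here.

```python
# numerical sketch of eq. (chioutfinale): chi_out(a) = exp(-Gamma |a|^2)
# * chi_in(g T a) * chi_res(g T a*, exp(-tau/2) a). only the ideal limit with
# a coherent input and a gaussian twin-beam resource is exercised here.
import numpy as np

def chi_coherent(lam, beta):
    # symmetric-ordered characteristic function of a coherent state |beta>
    return np.exp(-0.5 * np.abs(lam) ** 2 + lam * np.conj(beta) - np.conj(lam) * beta)

def chi_twin_beam(l1, l2, r):
    # symmetric-ordered characteristic function of the two-mode squeezed vacuum
    return np.exp(-0.5 * (np.abs(l1) ** 2 + np.abs(l2) ** 2) * np.cosh(2 * r)
                  + np.sinh(2 * r) * np.real(l1 * l2))

def chi_out(lam, beta, r, g=1.0, T=1.0, tau=0.0, Gamma=0.0):
    # transcription of eq. (chioutfinale); Gamma is left as a free parameter
    # because its explicit form is given in the text and not reproduced here
    return (np.exp(-Gamma * np.abs(lam) ** 2)
            * chi_coherent(g * T * lam, beta)
            * chi_twin_beam(g * T * np.conj(lam), np.exp(-tau / 2) * lam, r))

def fidelity(beta, r, L=6.0, n=241):
    # F = (1/pi) * integral d^2 lam chi_in(lam) chi_out(-lam), on a square grid
    x = np.linspace(-L, L, n)
    X, Y = np.meshgrid(x, x)
    lam = X + 1j * Y
    dA = (x[1] - x[0]) ** 2
    integrand = chi_coherent(lam, beta) * chi_out(-lam, beta, r)
    return float(np.real(np.sum(integrand)) * dA / np.pi)

for r in (0.0, 0.5, 1.0):
    print(f"r = {r}: numerical F = {fidelity(0.7 + 0.3j, r):.4f}, "
          f"1/(1+exp(-2r)) = {1.0 / (1.0 + np.exp(-2 * r)):.4f}")
```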
in the following section we study , in comparison with the instance of two-mode gaussian entangled resources , the performance of the optimized two-mode squeezed bell states when used as entangled resources for the teleportation of input single-mode coherent squeezed states . for completeness , in the same context we also make a comparison with the performance , as entangled resources , of the more specific realizations ( [ photaddsqueez ] ) , ( [ photsubsqueez ] ) . the characteristic functions of states ( [ cohsqueezst ] ) , ( [ squeezbell ] ) , ( [ photaddsqueez ] ) , and ( [ photsubsqueez ] ) are computed and their explicit expressions are given in appendix [ appendixstates ] . table [ tableparam ] : summary of the notation employed throughout this work to describe the different parameters that characterize the input coherent squeezed ( cs ) states [ eq . ( [ cohsqueezst ] ) ] , the shared entangled two-mode squeezed bell ( sb ) resources [ eq . ( [ squeezbell ] ) ] , and the characteristics of non-ideal teleportation setups ; see text for further details on the role of each parameter . for ease of reference , table [ tableparam ] provides a summary of the parameters associated with the input states , the shared resources , and the sources of noise in the teleportation protocol . the commonly used measure to quantify the performance of a quantum teleportation protocol is the fidelity of teleportation , db . indeed , to date , the experimentally reachable values of squeezing fall roughly in such a range with db . we can then study the behavior of corresponding to the angle as a function of the effective input squeezing parameter , at fixed squeezing parameters of the resource and of the input state , respectively and , and at fixed loss parameters , , and . fig . [ fig1sfidsbar ] shows that is quite insensitive to the value of . assuming the realistic range , with , and the variances ${\rm tr}[z_{j}^{2}\rho_{j}]-{\rm tr}[z_{j}\rho_{j}]^{2}$ and ( the cross-quadrature variance , with denoting the symmetrization ) of the quadrature operators , , associated with the single-mode input state and the output state of the teleportation protocol . the explicit expressions for the quantities , , and are reported in the appendix [ appendixquadratures ] . the quantities measuring the deviation of the output from the input are the differences between the output and input first and second quadrature moments : with given by eq . ( [ eqc ] ) . from the above equations , we see that the assumption ( i.e. ) yields and . therefore , for , the input and output fields possess equal average position and momentum ( equal first moments ) , and equal cross-quadrature variance ; then , the optimization procedure reduces to the minimization of the quantity with respect to the free parameters of the non-gaussian squeezed bell resource , i.e. . moreover , as for the optimization procedure of section [ sectelepfidelity ] , it can be shown that the optimal choice for and is , once again , and . the optimization on the remaining free parameter yields the optimal value , eq . ( [ deltaoptvar ] ) . the optimal angle , corresponding to the minimization of the differences and between the output and input quadrature variances , is independent of , at variance with the optimal value , eq . ( [ deltaoptfid ] ) , corresponding to the maximization of the teleportation fidelity . it is also important to note that in this case there are no questions related to a dependence on the input squeezing .
for eq .( [ deltaoptvar ] ) reduces to .such a value is equal to the asymptotic value given by eq .( [ deltaoptfid2 ] ) for , so that , in this extreme limit the two optimization procedures become equivalent . in the particular cases of photon - added and photon - subtracted resources ,no optimization procedure can be carried out , and the parameter is simply a given specific function of ( see section [ secqtelep ] ) .we remark that , having automatically zero difference in the cross - quadrature variance at , finding the angles that minimize and precisely solves the problem of achieving the optimal teleportation of both the first moments and the full covariance matrix of the input state at once . in order to compare the performances of the gaussian and non - gaussian resources , and to emphasize the improvement of the efficiency of teleportation with squeezed bell - like states , we consider first the instance of ideal protocol ( , , ) , and compute , and explicitly report below ,the output variances of the teleported state associated with non - gaussian resources ( i.e. optimized squeezed bell - like states , photon - added squeezed states , photon - subtracted squeezed states ) , and with gaussian resources , i.e. twin beams . from eqs .( [ varxout])([eqc3 ] ) , we get : eq .( [ dzsb ] ) is derived exploiting the optimal angle ( [ deltaoptvar ] ) , which reduces to eq .( [ deltaoptfid2 ] ) in the ideal case .independently of the resource , the teleportation process will in general result in an amplification of the input variance .however , the use of non - gaussian optimized resources , compared to the gaussian ones , reduces sensibly the amplification of the variances at the output . looking at eq .( [ dztwb ] ) , we see that the teleportation with the twin beam resource produces an excess , quantified by the exponential term , of the output variance with respect to the input one . on the other hand , the use of the non - gaussian squeezed bell resource eq .( [ dzsb ] ) yields a reduction in the excess of the output variance with respect to the input one by a factor .let us now analyze the behaviors of the photon - added squeezed resources and of the photon - subtracted squeezed resources , eqs .( [ dzpas ] ) and ( [ dzpss ] ) , respectively .we observe that , in analogy with the findings of the previous section , the photon - subtracted squeezed resources exhibit an intermediate behavior in the ideal protocol ; indeed for low values of they perform better than the gaussian twin beam , but worse than the optimized squeezed bell states .the photon - added squeezed resources perform worse than both the twin beam and the other non - gaussian resources . these considerationsfollow straightforwardly from a quantitative analysis of the terms associated with the excess of the output variance in eqs .( [ dzpas ] ) and ( [ dzpss ] ) . moreover , again in analogy with the analysis of the optimal fidelity , for low values of , there exists a region in which the performance of photon - subtracted squeezed states and optimized squeezed bell states are comparable .finally , again in analogy with the case of the fidelity optimization , the output variance associated with the gaussian twin beam and with the optimized squeezed bell states coincide at a specific , large value of , at which the two resources become identical . 
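for orientation, the gaussian reference point of these comparisons can be reproduced with elementary arithmetic: in the ideal unit-gain protocol a twin-beam resource of squeezing r adds an excess variance e^{-2r} (in units where the vacuum variance is 1/2) to each quadrature. the sketch below uses this textbook twin-beam result, with an assumed input squeezing of about 3 db, to show how much output squeezing survives; the squeezed-bell expressions such as eq. ( [ dzsb ] ) are not reproduced here.

```python
# back-of-the-envelope sketch of the gaussian twin-beam reference only; the
# paper's optimized non-gaussian resources are not modeled in this sketch.
import numpy as np

def output_variances(s_in, r):
    var_x_in = 0.5 * np.exp(-2 * s_in)     # squeezed input quadrature (vacuum = 1/2)
    var_p_in = 0.5 * np.exp(+2 * s_in)     # anti-squeezed input quadrature
    excess = np.exp(-2 * r)                # excess noise added by ideal unit-gain teleportation
    return (var_x_in, var_p_in), (var_x_in + excess, var_p_in + excess)

def squeezing_db(var):
    # squeezing relative to the vacuum variance 1/2, in db
    return -10 * np.log10(var / 0.5)

s_in = 0.35                                # roughly 3 db of input squeezing (assumption)
for r in (0.5, 1.0, 1.5):
    (vx_i, _), (vx_o, _) = output_variances(s_in, r)
    print(f"r = {r}: input squeezing {squeezing_db(vx_i):.2f} db -> "
          f"output squeezing {squeezing_db(vx_o):.2f} db")
```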
the input variances ( [ varxin ] ) and ( [ varpin ] ) , and the output variances , are plotted in panels i and ii of fig .[ figvar ] for the ideal vkb protocol and in panels iii and iv of fig .[ figvar ] for the non - ideal protocol . in the instance of realistic protocol , for small resource squeezing degree ,similar conclusions can be drawn , leading to the same hierarchy among the entangled resources .however , analogously to the behavior of the teleportation fidelity , for high values of the photon - subtracted squeezed resources are very sensitive to decoherence .in fact , such resources perform worse and worse than the gaussian twin beam for greater than a specific finite threshold value . rather than minimizing the differences between output and input quadrature variances , one might be naively tempted to consider minimizing the difference between the ratio of the output variances and the ratio of the input variances .this quantity might appear to be of some interest because it is a good measure of how well squeezing is teleported in all those cases in which the input and output quadrature variances are very different , that is those situations in which the statistical moments are teleported with very low efficiency .however , it is of little use to preserve formally a scale parameter if the noise on the quadrature averages grows out of control .the procedure of minimizing the difference between output and input quadrature statistical moments is the only one that guarantees the simultaneous preservation of the squeezing degree and the reduction of the excess noise on the output averages and statistical moments of the field observables .we have studied the efficiency of the vbk cv quantum teleportation protocol for the transmission of quantum states and averages of observables using optimized non - gaussian entangled resources .we have considered the problem of teleporting gaussian squeezed and coherent states , i.e. input states with two unknown parameters , the coherent amplitude and the squeezing .the non - gaussian resources ( squeezed bell states ) are endowed with free parameters that can be tuned to maximize the teleportation efficiency either of the state or of physical quantities such as squeezing , quadrature averages , and statistical moments .we have discussed two different optimization procedures : the maximization of the teleportation fidelity of the state , and the optimization of the teleportation of average values and variances of the field quadratures .the first procedure maximizes the similarity in phase space between the teleported and the input state , while the second one maximizes the preservation at the output of the displacement and squeezing contents of the input .we have shown that optimized non - gaussian entangled resources such as the squeezed bell states , as well as other more conventional non - gaussian entangled resources , such as the two - mode squeezed photon - subtracted states , outperform , in the realistic intervals of the squeezing parameter of the entangled resource achievable with the current technology , entangled gaussian resources both for the maximization of the teleportation fidelity and for the maximal preservation of the input squeezing and statistical moments .these findings are consistent and go in line with previous results on the improvement of various quantum information protocols replacing gaussian with suitably identified non - gaussian resources . 
in the process, we have found that the two optimal values of the resource angle associated with the two optimization procedures are different and identified , respectively , by eqs .( [ deltaoptfid ] ) and ( [ deltaoptvar ] ) .this inequivalence is connected to the fact that , when using entangled non - gaussian resources with free parameters that are amenable to optimization , the fidelity is closely related to the form of the different input properties that one wishes to teleport , e.g. quasi - probability distribution in the phase space , squeezing , statistical moments of higher order , and so on .different quantities correspond to different optimal teleportation strategies .finally , regarding the vbk protocol , it is worth remarking that the maximization of the teleportation fidelity corresponds to the maximization of the squared modulus of the overlap between the input and the output ( teleported ) state , without taking into account the characteristics of the output with respect to the input state .therefore , part of the non - gaussian character of the entangled resource is unavoidably transferred to the output state .the latter then acquires unavoidably a certain degree of non - gaussianity , even if the presence of pure gaussian inputs .moreover , as verified in the case of non - ideal protocols , the output state is also strongly affected by decoherence .thus , in order to recover the purity and the gaussianity of the teleported state , purification and gaussification protocols should be implemented serially after transmission through the teleportation channel is completed . if the second ( squeezing preserving ) procedure is instead considered , the possible deformation of the gaussian character is not so relevant , because the shape reproduction is not the main goal , while purification procedures are again needed to correct for the extra noise added during teleportation when finite entanglement and realistic conditions are considered .an important open problem is determining a proper teleportation benchmark for the class of gaussian input states with unknown displacement and squeezing .such a benchmark is expected to be certainly smaller than in terms of teleportation fidelity , the latter being the benchmark for purely coherent input states with completely random displacement in phase space .our results indicate that optimized non - gaussian entangled resources will allow one to beat the classical benchmark , thus achieving unambiguous quantum state transmission via a truly quantum teleportation , with a smaller amount of nonclassical resources , such as squeezing and entanglement , compared to the case of shared gaussian twin beam resources . 
in this context , fig . [ fig1sfid ] provides strong and encouraging evidence that suitable uses of non-gaussianity in tailored resources , feasible with current technology , may lead to a genuine demonstration of cv quantum teleportation of displaced squeezed states in realistic conditions of the experimental apparatus . this would constitute a crucial step forward after the successful recent experimental achievement of the quantum storage of a displaced squeezed thermal state of light into an atomic ensemble memory . we acknowledge financial support from the european union under the fp7 strep project hip ( hybrid information processing ) , grant agreement no . here we report the characteristic functions for the single-mode input states and for the two-mode entangled resources . the characteristic function for the coherent squeezed states ( [ cohsqueezst ] ) , i.e. , reads : the characteristic function for the squeezed bell-like resource ( [ squeezbell ] ) , i.e. , reads eq . ( [ charfuncsb ] ) , where the complex variables are defined as : it is worth noticing that , for , eq . ( [ charfuncsb ] ) reduces to the well-known gaussian characteristic function of the twin beam . given the characteristic functions for the single-mode input state and for the two-mode entangled resource , eqs . ( [ chiinput ] ) and ( [ charfuncsb ] ) , respectively , it is straightforward to obtain the characteristic function for the single-mode output state of the teleportation protocol by using eq . ( [ chioutfinale ] ) and replacing with . in this appendix , we report the analytical expressions for the mean values of the quadrature operators , , associated with the single-mode input state and the output state of the teleportation protocol . we also compute the cross-quadrature variance ${\rm tr}[(x_{j}p_{j}+p_{j}x_{j})\rho_{j}]-2\,{\rm tr}[x_{j}\rho_{j}]\,{\rm tr}[p_{j}\rho_{j}]$ , associated with the non-diagonal term of the covariance matrix of the density operator , where the subscript denotes the symmetrization . the mean values and the variances associated with the input single-mode coherent squeezed state ( [ cohsqueezst ] ) can be easily computed : and the mean values and the variances associated with the output single-mode teleported state , described by the characteristic function ( [ chioutfinale ] ) , read : and with . in the instance of a gaussian resource , such a quantity simplifies to : . for suitable choices of in eq . ( [ eqc2 ] ) , see section [ secqtelep ] , one can easily obtain the output variances associated with photon-added and photon-subtracted squeezed states . s. suzuki , h. yonezawa , f. kannari , m. sasaki , and a. furusawa , appl . phys . lett . * 89 * , 061116 ( 2006 ) ; h. vahlbruch , m. mehmet , n. lastzka , b. hage , s. chelkowski , a. franzen , s. gossler , k. danzmann , and r. schnabel , phys . rev . lett . * 100 * , 033602 ( 2008 ) . o. glöckl , u. l. andersen , r. filip , w. p. bowen , and g. leuchs , phys . rev . lett . * 97 * , 053601 ( 2006 ) ; j. heersink , ch . marquardt , r. dong , r. filip , s. lorenz , g. leuchs , and u. l. andersen , phys . rev . lett . * 96 * , 253601 ( 2006 ) ; a. franzen , b. hage , j. diguglielmo , j. fiurasek , and r. schnabel , phys . rev . lett . * 97 * , 150505 ( 2006 ) ; b. hage , a. samblowski , j. diguglielmo , a. franzen , j. fiurasek , and r. schnabel , nature phys . * 4 * , 915 ( 2008 ) ; r. dong , m. lassen , j. heersink , ch . marquardt , r. filip , g. leuchs , and u. l. andersen , nature phys . * 4 * , 919 ( 2008 ) . k. jensen , w. wasilewski , h. krauter , t. fernholz , b. m.
nielsen, a. serafini, m. owari, m. b. plenio, m. m. wolf, and e. s. polzik, e-print arxiv:1002.1920 (2010); nature phys. (advance online publication, doi:10.1038/nphys1819).
we study the continuous-variable quantum teleportation of states, statistical moments of observables, and scale parameters such as squeezing. we investigate the problem both in ideal and in imperfect vaidman-braunstein-kimble protocol setups. we show how the teleportation fidelity is maximized and the difference between output and input variances is minimized by using suitably optimized entangled resources. specifically, we consider the teleportation of coherent squeezed states, exploiting squeezed bell states as entangled resources. this class of non-gaussian states, introduced in references, includes photon-added and photon-subtracted squeezed states as special cases. at variance with the case of entangled gaussian resources, the use of entangled non-gaussian squeezed bell resources allows one to choose among different optimization procedures that lead to inequivalent results. performing two independent optimization procedures, one can either maximize the state teleportation fidelity or minimize the difference between input and output quadrature variances. the two procedures are compared depending on the degrees of displacement and squeezing of the input states and on the working conditions in ideal and non-ideal setups.
the cognitive radio concept differs from conventional spectrum allocation methods in the role it plays in increasing spectral efficiency and aggregate network throughput. in cognitive networks, unlicensed secondary users opportunistically access radio bandwidth owned by licensed primary users in order to maximize their own performance, while limiting interference to the primary users communications. previous work on cognitive radio mostly focused on a white space approach, in which secondary users are allowed to access only those time/frequency slots left unused by the licensed users. the white space approach is based on a zero-interference rationale. however, due to noise and fading in the channel and to the channel sensing mechanism, measurement errors are inevitable. therefore, in practical scenarios there is some probability of collision between primary and secondary users, which can be measured and used as a constraint in the optimization problem. some works investigate the coexistence of primary/secondary signals in the same time/frequency band by focusing on physical layer methods for static scenarios, e.g., . accounting for the dynamics of superimposing primary and secondary users on the same time/frequency slot, a strategy for the secondary user has been derived for the case where the primary user operates in slotted arq based networks. we consider ieee 802.11 based networks in which primary users follow the dcf protocol in order to access the channel. unlike that work, in our contemporary work we have developed a transmission strategy for the secondary user which either picks a backoff counter intelligently or remains idle after a transmission, in a multiplexed manner. as a user must wait for the difs and backoff period before releasing a packet into the air, the secondary user does not know the exact state of the primary user. therefore, the performance constraint of the primary user plays a central role in the decision making process of the secondary user. our previous work provided a solution by formulating the problem as a linear program, under the assumption that the secondary user knows the traffic arrival distribution of the primary user.
that approach, however, assumes that the secondary transmitter has some knowledge of the current state and of a probabilistic model of the primary transmitter/receiver pair, which limits its applicability. for example, while it is likely that the secondary might read acks for the primary system, it is unlikely that the secondary will know the pending workload of packets at the primary transmitter or the distribution of packet arrivals at the primary transmitter. therefore, we address this limitation by developing an online learning approach that uses a single feedback bit sent by the primary user and that approximately converges to the optimal secondary control policy. we will show that when the secondary user has access to such limited knowledge, an online algorithm can obtain performance similar to that of an offline algorithm with some state information. the rest of the paper is organized as follows. section [ sec:sysmodel ] describes the system model of the network, including the detailed optimization problem and its solution. simulation results are presented in section [ sec:perfeval ] in order to verify the efficacy of the algorithm. finally, section [ sec:concl ] concludes the paper. we consider an interference mitigation scenario in ieee 802.11 based networks. the central assumption of the interference mitigation strategy is that both users can decode their packets with some probability whether they transmit simultaneously or individually. however, the secondary user is constrained to cause no more than a fixed maximum degradation of the primary s performance. this approach sits at the opposite end of the spectrum from the white space approach. if the primary user cannot tolerate any loss, the optimal strategy for the secondary user is not to transmit at all. in contrast, in the work , the secondary user can detect slot occupancy and transmit only in the slots it finds empty, and therefore obtains some throughput even if the primary user cannot tolerate any throughput loss. consider the network in figure [ fig:sysmodel ] with a primary and a secondary source, namely and . the destinations of these source nodes are and , respectively. we assume a quasi-static channel, and time is divided into slots. before initiating a packet transmission, both users first wait for a difs period and then decrement their backoff counters, each decrement lasting one time slot. while decrementing its backoff counter, if a station detects a busy channel, it halts the decrementing process and resumes it only after the channel has been sensed idle for the length of a difs period. when the counter reaches zero, the packet is transmitted. packets have a fixed size of l bits, and the transmission of a packet plus its associated feedback message fits within the duration of a slot. in general the packet transmission time is variable, but in this work, for the sake of simplicity, it is constant, i.e., a multiple of the slot duration. we denote by , , and the random variables corresponding to the channel coefficients between and ; and ; and ; and , with , , and their respective probability distributions. the average decoding failure probability at the primary destination when the secondary source is silent is denoted by , while the same probability when the secondary source transmits is . analogously, the average decoding failure probabilities at the secondary destination when the primary source is silent and transmitting are denoted by and , respectively.
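since the exact state machine and the symbols for the decoding failure probabilities are not reproduced in this excerpt, the following toy monte carlo sketch abstracts the slotted interaction: in each slot the two sources transmit with fixed probabilities, and decoding succeeds with a probability that depends on whether the other source was active. it is only meant to illustrate how the four failure probabilities couple the primary throughput loss to the secondary activity; all names and numerical values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# assumed decoding failure probabilities (the paper's symbols are not reproduced here)
P_FAIL_PRI_SILENT = 0.1   # primary failure prob. when the secondary is silent
P_FAIL_PRI_TX     = 0.4   # primary failure prob. when the secondary transmits
P_FAIL_SEC_SILENT = 0.1   # secondary failure prob. when the primary is silent
P_FAIL_SEC_TX     = 0.4   # secondary failure prob. when the primary transmits

def simulate(n_slots, p_primary_tx, p_secondary_tx):
    """Toy slot-level simulation: in each slot the primary/secondary transmit
    independently with the given probabilities; decoding success depends on
    whether the other user was active in the same slot."""
    pri_tx = rng.random(n_slots) < p_primary_tx
    sec_tx = rng.random(n_slots) < p_secondary_tx
    pri_fail = np.where(sec_tx, P_FAIL_PRI_TX, P_FAIL_PRI_SILENT)
    sec_fail = np.where(pri_tx, P_FAIL_SEC_TX, P_FAIL_SEC_SILENT)
    pri_ok = pri_tx & (rng.random(n_slots) > pri_fail)
    sec_ok = sec_tx & (rng.random(n_slots) > sec_fail)
    return pri_ok.mean(), sec_ok.mean()

base, _ = simulate(200_000, p_primary_tx=0.5, p_secondary_tx=0.0)
for p_sec in (0.1, 0.3, 0.5):
    thr_pri, thr_sec = simulate(200_000, 0.5, p_sec)
    print(f"p_sec={p_sec:.1f}  primary loss={base - thr_pri:.3f}  secondary throughput={thr_sec:.3f}")
```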
control protocols implemented by the primary user is greatly impacted by the secondary user s transmission as discussed in the above paragraph .thus , it degrades the primary user s performance and this manner is true for the secondary user as well .however , the goal of the system design is to optimize secondary user s performance without doing harm to the primary user in some extent .therefore , upon receiving the feedback from the primary user , secondary one adjusts its transmission policy .packet arrival at the primary user is designed as a poisson arrival process with the parameter .the state of the network can be modeled as a homogeneous markov process .two parameters ( backoff stage , counter value ) referred to as ( b , c ) describe the state of a user , where can take any value between 0 and .backoff stage b varies from 1 to maximum backoff stage .here , is the maximum retry limit . having a transmission failure , each packet is attempted by the primary user for retransmission at most times . at each backoff stage , if a station reaches state ( i.e. backoff counter value becomes 0 ) , the station will send out a packet .if the transmission failure occurs at this point with some probability , the primary user moves to higher backoff stage with probability .if successful packet transmission happens , the primary user goes to idle state ( if there is no outstanding packet in queue ) or in the initial backoff stage having picked some backoff counter with the probability of .markov chain model of primary user has been illustrated in figure [ fig : primmarkov ] .secondary user tries each packet only once , after having transmission , it goes to idle state with some probability or picks a backoff counter with probability for the transmission of new packet from the queue .note that , secondary user s packet is assumed as backlogged or there is always one packet in the queue .however , in order to meet the performance loss constraint of primary user , secondary user needs to keep silent and therefore we have introduced a fake variable i.e. secondary user s packet arrival rate .markov chain model for the secondary user has been shown in figure [ fig : secmarkov ] . in both figures , and are function of and .detailed state transitions and steady state distribution of the problem have been skipped in this work due to space constraint .goal of this work to find a optimal strategy for the secondary limiting the performance loss of primary user .let us define the cost functions as the average cost incurred by the markov process in state if action is chosen .note that , represents the secondary source keeps silent and represents the picking of a backoff counter from secondary backoff counter window i.e. ] .length of this vector is ( is backoff window size of secondary user ) .index denotes the proportion of time secondary user keeps itself silent , subsequent indexes denote the portion of time backoff counter is chosen by the secondary user .as discussed previously , outcome of secondary user s action is not obtained instantaneously until the secondary user has its transmission . 
due to the interaction of secondary and primary user , the obtained throughput from each action vary and our cost function is the obtained average throughput ( added to the long term average throughput ) resultant from the taken action .let is the average throughput of secondary user while taking the action and is the average throughput when the secondary user really completes its packet transmission .then , the cost function at time is defined as follows : and our optimization problem thus stands to \ ] ] and the q - learning algorithm for solving equation [ eq : opt ] is illustrated as follows .* * step 1 : * let the time step .initialize each element of reward vector as some small number , such as 0 .* * step 2 : * check if the constraint of the primary user is satisfied . if not , choose the action of . otherwise , choose the action with index $ ] that has the highest value with some probability say , else let be a random exploratory action .in other words , + * * step 3 : * carry out action .wait until secondary user completes its transmission if it picks any backoff counter .or secondary user may choose the option of being silent .in either case , calculate the cost function and update the reward variable for the corresponding action .if the current state is and the resultant state is after taking the action , reward is updated as follows .+ * * step 4 : * set the current state as and repeat step 2 . when convergence is achieved , set .this is the typical q - learning algorithm . in our case, we do nt know the primary user s exact current state and also do nt know what the next state will be . therefore the equation [ eq : q - learning ] reduces to in order to obtain the optimal value of , we have found the following theorem .* theorem 1 : * step size parameter gives the convergence to the algorithm .* proof : * the choice results in the sample - average method , which is guaranteed to converge to the true action values by the law of large numbers . a well - known result in stochastic approximation theory gives us the conditions required to assure convergence with probability 1 : first condition is required to guarantee that the steps are large enough to eventually overcome any initial conditions or random fluctuations .the second condition guarantees that eventually the steps become small enough to assure convergence .note that , both convergence conditions are met for the sample - average case , .in this section we will evaluate the performace of our online algorithm .in addition , we have compared performance of this algorithm with the algorithm which has some information of primary user as presented in our work . throughout the simulation , we assume that the buffer size of the primary source is and the maximum retransmission time is . backoff window size in each stage is 4 , 6 , 8 , and 10 respectively .secondary user s backoff window size is as .we set the failure probabilities for the transmission of the primary source , , depending on the fact that secondary is silent or not , respectively .similarly , the failure probabilities of the secondary source are set to be .note that these failure probabilities are not known at the secondary source and it has to learn the optimal policy without any assumption on these parameters in advance .once again , the goal of the algorithm is to maximize the throughput of the secondary source . 
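a minimal sketch of the online procedure described in steps 1-4 above is given below. the action set contains the idle action and the possible backoff counters, selection is epsilon-greedy unless the one-bit feedback from the primary signals a constraint violation (in which case the secondary stays silent), and the reward estimates are updated with the sample-average step size 1/n whose convergence is stated in theorem 1. the interaction with the 802.11 environment is stubbed out, and the window size, exploration rate, and placeholder rewards are assumptions made for illustration only.

```python
import random

W_SEC = 8          # secondary backoff window size (assumed)
EPSILON = 0.1      # exploration probability (assumed)
ACTIONS = list(range(W_SEC + 1))   # 0 = stay idle, a >= 1 = pick backoff counter a-1

q = [0.0] * len(ACTIONS)           # reward estimates, initialised to 0 (step 1)
n = [0] * len(ACTIONS)             # per-action visit counts

def choose_action(constraint_violated):
    """Step 2: forced idle on constraint violation, otherwise epsilon-greedy."""
    if constraint_violated:
        return 0
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[a])

def update(action, reward):
    """Step 3/4: sample-average update; the 1/n step size satisfies the two
    stochastic-approximation conditions invoked in theorem 1."""
    n[action] += 1
    q[action] += (reward - q[action]) / n[action]

# --- stub environment: replace with the actual 802.11 interaction -----------
def run_episode(action):
    """Returns (throughput observed after this action completes, one-bit
    feedback from the primary indicating a constraint violation). Placeholder."""
    reward = random.random() * (0.5 if action == 0 else 1.0 / action)
    violated = random.random() < 0.05 * (action > 0)
    return reward, violated

violated = False
for t in range(10_000):
    a = choose_action(violated)
    reward, violated = run_episode(a)          # wait for the outcome of the action
    update(a, reward)
print("learned preference over actions:", [round(v, 3) for v in q])
```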
and figure [ fig : thrput - convergence ] depicts the convergence of secondary and primary user s throughput from 0th iteration to some number of iterations .throughput loss is defined as the difference between maximum achievable throughput and instantaneous throughput at a particular slot . from the given parameters ,maximum achievable throughput is calculated considering only a single user ( primary or secondary ) is acting on the channel .we see the convergence of throughput loss happens after a few iterations .and in order to extrapolate the cost functions of our algorithm , we also have shown convergence process of two actions picked up by the secondary user , i.e. probability of picking backoff counter 0 and 1 respectively in the figure [ fig : reward - convergence ] .we have initialzed cost of all actions at time slot zero .as the algorithm moves along with time , it updates its average reward according the formula presented in the algorithm .the algorithm is more prone to pick backoff counter with lower value that will be shown in the subsequent figures .however , in terms of general rule , algorithm does not pick the same action repeatedly .this is because , due to the interaction between primary and secondary users , the repeated action may cause to the degradation of the primary user s performance or it may degrade its own average reward value than the other actions .consequently , the algorithm moves to the other action and the average reward value over the time for different actions look similar . ]figure [ fig : secthrput - comparison ] shows the throughput of primary and secondary source with the increased packet arrival rate for a fixed tolerable primary source s throughput loss . as expected , throughput of the secondary source decreases as is increased gradually .a larger means that the primary source is accessing the channel more often .therefore , the number of slots in which the secondary source can transmit while meeting the constraint on the throughput loss of the primary source decreases .in addition , in this figure , we have projected the result obtained by our optimal algorithm .optimal algorithm though due to the protocol behavior is not fully aware of state of the system , has some better information than our proposed online algorithm .therefore , it incurs better performance in terms of achievable throughput for different value . whereas, our online algorithm though does not look like have similar performance , but gains better one than other blind generic algorithm .generic algorithm means , here secondary user picks its backoff counter uniformly . with this strategy, we see the performance for the secondary user is the worst .even worst news is that , this algorithm is completely blind about the performance constraint of primary user . ]figure [ fig : secstrategy - comparison ] compares the obtained secondary user s strategy for both our optimal and online algorithms .we have presented the proportion of idle slots and probability of picking backoff counter 0 . for the sake of page limit ,we have skipped other results here . 
in this resultapparently , we do nt see any match between two algorithms .however , we can explain the difference .in fact , online algorithm is mostly dependent on the primary user performance loss violation indicator and its own reward value for different actions .it tries to pick the action with maximum value , which is usually the backoff counter with lower value .otherwise , upon the signal of constraint violation , it keeps silent .therefore , we see that online algorithm puts more weights to the backoff of lower value and again backoff counter of lower value breaks the constraint more often and thus it keeps more silent than offline algorithm . whereas , optimal algorithm knows the arrival rate of primary user , it runs a near brute - force algorithm in order to find the optimal strategy of secondary user .we have proposed an on line learning approach in interference mitigation adopted ieee 802.11 based networks for the cognitive user .our approach relies only on the little performance violation feedback of the primary transmitter and uses q - learning to converge to nearly optimal secondary transmitter control policies .numerical simulations suggest that this approach offers performance that is close to the performance of the system when complete system state information is known .although , the strategy of both algorithms does not follow the exactly similar trend .q. zhao , l. tong , a. swami , and y. chen , `` decentralized cognitive mac for opportunistic spectrum access in ad hoc networks : a pomdp framework , '' _ ieee journal on selected areas in communications _ ,vol . 25 , no . 3 , pp . 589 600 , apr .o. simeone , y. bar - ness , and u. spagnolini , `` stable throughput of cognitive radios with and without relaying capability , '' _ communications , ieee transactions on _ , vol .55 , no . 12 ,2351 2360 , dec .h. su and x. zhang , `` cross - layer based opportunistic mac protocols for qos provisionings over cognitive radio wireless networks , '' _ selected areas in communications , ieee journal on _ , vol .26 , no . 1118 129 , jan .2008 .y. chen , q. zhao , and a. swami , `` joint design and separation principle for opportunistic spectrum access in the presence of sensing errors , '' _ information theory , ieee transactions on _ , vol .54 , no . 5 , pp .2053 2071 , may 2008 .w. zhang and u. mitra , `` spectrum shaping : a new perspective on cognitive radio - part i : coexistence with coded legacy transmission , '' _ communications , ieee transactions on _ , vol .58 , no . 6 , pp .1857 1867 , june 2010 .y. xing , c. n. mathur , m. haleem , r. chandramouli , and k. subbalakshmi , `` dynamic spectrum access with qos and interference temperature constraints , '' _ mobile computing , ieee transactions on _ , vol . 6 , no . 4 , pp . 423 433 , april 2007 . , `` cognitive interference management in retransmission - based wireless networks , '' in _ communication , control , and computing , 2009 .allerton 2009 .47th annual allerton conference on _, 30 2009-oct . 2 2009 , pp .94 101 .r. ruby and v. c. leung , `` determining the transmission strategy of congnive user in ieee 802.11 based networks , '' the university of british columbia , tech .rep . , 2011 .[ online ] .available : http://www.ece.ubc.ca/~rukhsana/files/icc2012.pdf s. mahadevan , `` average reward reinforcement learning : foundations , algorithms , and empirical results , '' _ mach ._ , vol . 22 , pp .159195 , january 1996 .[ online ] .available : http://dl.acm.org/citation.cfm?id=225667.225681
the traditional concept of cognitive radio is the coexistence of primary and secondary users in a multiplexed manner. we consider an opportunistic channel access scheme in ieee 802.11 based networks subject to an interference mitigation scenario. according to the protocol rules and due to the constraint on message passing, the secondary user is unaware of the exact state of the primary user. in this paper, we propose an online algorithm for the secondary user which assists in determining a backoff counter, or the decision to stay idle, for utilizing the time/frequency slots left unoccupied by the primary user. the proposed algorithm is based on a conventional reinforcement learning technique, namely q-learning. simulations have been conducted in order to demonstrate the strength of this algorithm, and the results have been compared with our contemporary solution of this problem in which the secondary user is aware of some states of the primary user. cognitive radio, ism band, reinforcement learning, optimization, q-learning
the problem of simultaneous localization and mapping ( slam ) has a rich history over the past two decades , which is too broad to cover here , see e.g. .the extended kalman filter ( ekf ) based slam ( the ekf - slam ) has played an important historical role , and is still used , notably for its ability to close loops thanks to the maintenance of correlations between remote landmarks . the factthat the ekf - slam is inconsistent ( that is , it returns a covariance matrix that is too optimistic , see e.g. , , leading to inaccurate estimates ) was early noticed and has since been explained in various papers . in the present paperwe consider the inconsistency issues that stem from the fact that , as only relative measurements are available , the origin and orientation of the earth - fixed frame can never be correctly estimated , but the ekf - slam tends to think " it can estimate them as its output covariance matrix reflects an information gain in those directions of the state space .this lack of observability , and the poor ability of the ekf to handle it , is notably regarded as the root cause of inconsistency in ( see also references therein ) . in the present paperwe advocate the use of the invariant ( i)-ekf to prevent covariance reduction in directions of the state space where no information is available .the invariant extended kalman filter ( iekf ) is a novel methodology introduced in that consists in slightly modifying the ekf equations to have them respect the geometrical structure of the problem .reserved to systems defined on lie groups , it has been mainly driven by applications to localization and guidance , where it appears as a slight modification of the multiplicative ekf ( mekf ) , widely known and used in the world of aeronautics .it has been proved to possess theoretical local convergence properties the ekf lacks in , to be an improvement over the ekf in practice ( see e.g. , and more recently where the ekf is outperformed ) , and has been successfully implemented in industrial applications to navigation ( see the patent ) . in the present paper ,we slightly generalize the iekf framework , to make it capable to handle very general observations ( such as range and bearing or bearing only observations ) , and we show how the derived iekf - slam , a simple variant of the ekf - slam , allows remedying the inconsistency of ekf - slam stemming from the non - observability of the orientation and origin of the global frame .the issue of ekf - slam inconsistency has been the object of many papers , see to cite a few , where empirical evidence ( through monte - carlo simulations ) and theoretical explanations in various particular situations have been accumulated . in particular , the insights of have been that the orientation uncertainty is a key feature in the inconsistency .the article , in line with , also underlines the importance of the linearization process , as linearizing about the true trajectory solves the inconsistency issues , but is impossible to implement in practice as the true state is unknown .it derives a relationship that should hold between various jacobians appearing in the ekf equations when they are evaluated at the current state estimate to ensure consistency .a little later , the works of g.p .huang , a.i .mourikis , and s. i. 
roumeliotis have provided a sound theoretical analysis of the ekf - slam inconsistency as caused by the ekf inability to correctly reflect the three unobservable degrees of freedom ( as an overall rotation and translation of the global reference frame leave all the measurements unchanged ) .indeed , the filter tends to erroneously acquire information along the directions spanned by those unobservable transformations . to remedy this problem ,the above mentioned authors have proposed various solutions , the most advanced being the observability constrained ( oc)-ekf .the idea is to pick a linearization point that is such that the unobservable subspace seen " by the ekf system model is of appropriate dimension , while minimizing the expected errors of the linearization points . our approach , that relies on the iekf , provides an interesting alternative to the oc - ekf , based on a quite different route .indeed , the rationale is to apply the ekf methodology , but using alternative estimation errors to the standard linear difference between the estimate and the true state .any non - linear error that reflects a discrepancy between the true state and the estimate , necessarily defines a local frame around any point , and the idea underlying the iekf amounts to write the kalman jacobians and covariances in this frame .we notice and prove here that an alternative nonlinear error defines a local frame where the unobservable subspace is _ everywhere _ spanned by the same vectors . using this local frame at the current estimate to express kalman s covariance matrix will be shown to ensure the unobservable subspace seen " by the ekf system model is _ automatically _ of appropriate dimension .we thus obtain an ekf variant which automatically comes with consistency properties .moreover , we relate unobservability to the inverse of the covariance matrix ( called information matrix ) rather than on the covariance matrix itself , and we derive guarantees of information decrease over unobservable directions .contrarily to the oc - ekf , and as in the standard ekf , we use here the latest , and thus best , state estimate as the linearization point to compute the filter jacobians . in a nutshell , whereas the key fact for the analysis of is that the choice of the linearization point affects the observability properties of the linearized state error system of the ekf , the key fact for our analysis is that the choice of the error variable has similar consequences .theoretical results and simulations underline the relevance of the proposed approach .robot - centric formulations such as , and later are promising attempts to tackle unobservability , but they unfortunately lack convenience as the position of all the landmarks must be revised during the propagation step , so that the landmarks estimated position becomes in turn sensitive to the motion sensor s noise .they do not provably solve the observability issues considered in the present paper , and it can be noted the oc - ekf has demonstrated better experimental performance than the robocentric mapping filter , in . in particular , the very recent papers propose to write the equations of the slam in the robot s frame under a constant velocity assumption . 
using an output injection technique ,those equations become linear , allowing to prove global asymptotic convergence of any linear observer for the corresponding deterministic linear model .this is fundamentally a deterministic approach and property , and as the matrices appearing in the obtained linear model are functions of the observations , the behavior of the filter is not easy to anticipate in a noisy context : the observation noise thus corrupts the very propagation step of the filter .some recent papers also propose to improve consistency through local map joining , see and references therein .although appealing , this approach is rather oriented towards large - scale maps , and requires the existence of local submaps .but when using submap joining algorithm , inconsistency in even one of the submaps , leads to an inconsistent global map " .this approach may thus prove complementary , if the iekf slam proposed in the present paper is used to build consistent submaps .note that , the iekf slam can also be readily combined with other measurements such as the gps , whereas the submap approach is tailored for pure slam . from a methodology viewpoint ,it is worth noting our approach does not bring to bear estimation errors written in a robot frame , as .although based on symmetries as well , the estimation errors we use are slightly more complicated . finally , nonlinear optimization techniques have become popular for slam recently , see e.g. , as one of the first papers .links between our approach , and those novel methods are discussed in the paper s conclusion .the paper is organized as follows . in section [ sec:1 ] ,the standard ekf equations and ekf - slam algorithm are reviewed . in section [ sec:2 ]we recall the problem that neither the origin nor the orientation of the global frame are observable , but the ekf - slam systematically tends to think " it observes them , which leads to inconsistency . in section [ sec:22 ] we introduce the iekf - slam algorithm . in section [ sec:3 ]we show how the linearized model of the iekf always correctly captures the considered unobservable directions . in section [ sect::tools ] we derive a property of the covariance matrix output by the filter that can be interpreted in terms of fisher information . in section [ sec:4 ] simulations support the theoretical results and illustrate the benefits of the proposed algorithm .finally , the iekf theory of is briefly recapped in the appendix , and the iekf slam shown to be an application of this theory indeed .the equations of the iekf slam in 3d are then also derived applying the general theory .consider a general dynamical system in discrete time with state associated to a sequence of observations .the equations are as follows : where is the function encoding the evolution of the system , is the process noise , an input , the observation function and the measurement noise .the ekf propagates the estimate obtained after the observation , through the deterministic part of : the update of using the new observation is based on the first - order approximation of the non - linear system , around the estimate , with respect to the estimation errors defined as : using the jacobians , , and , the combination of equations , and yields the following first - order expansion of the error system where the second order terms , that is , terms of order have been removed according to the standard way the ekf handles non - additive noises in the model ( see e.g. , p. 386 ) . 
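for readers less familiar with the recursion just described, here is a generic, self-contained sketch of one ekf cycle: propagation through the deterministic part of the model, linearization via the jacobians, and the linear gain/update step recalled in the next paragraph. it is not the paper's algorithm 1 (whose notation is not reproduced in this excerpt); jacobians are obtained by finite differences purely for compactness, and the toy demo at the end is illustrative only.

```python
import numpy as np

def num_jac(func, x, eps=1e-6):
    """Finite-difference Jacobian of func at x (for illustration only)."""
    fx = func(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (func(x + dx) - fx) / eps
    return J

def ekf_step(x, P, u, y, f, h, Q, R):
    """One EKF cycle: propagate through the noise-free model, then update
    the state with the linearised observation model."""
    # propagation through the deterministic part of the dynamics
    x_pred = f(x, u)
    F = num_jac(lambda s: f(s, u), x)
    P_pred = F @ P @ F.T + Q                  # Q assumed already mapped through the noise Jacobian
    # update
    H = num_jac(h, x_pred)
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (y - h(x_pred))      # linear update of the estimation error
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new

if __name__ == "__main__":
    f = lambda x, u: x + u                    # toy random walk with control input
    h = lambda x: x                           # direct noisy observation
    x, P = np.zeros(1), np.eye(1)
    for y in (0.9, 2.1, 2.9):
        x, P = ekf_step(x, P, np.ones(1), np.array([y]), f, h,
                        0.01 * np.eye(1), 0.1 * np.eye(1))
    print(x, P)
```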
using the linear kalman equations with the gain computed , and letting , an estimate of the error accounting for the observation is computed , along with its covariance matrix .the state is updated accordingly : the detailed equations are recalled in algorithm [ algo::ekf ] .the assumption underlying the ekf is that through first - order approximations _ of the state error _ evolution , the linear kalman equations allow computing a gaussian approximation of the error after each measurement , yielding an approximation of the sought density .however , the linearizations involved induce inevitable approximations that may lead the filter to inconsistencies and sometimes even divergence .define and through and .define as and as . *propagation * * update * , {n|n-1 } ] .the obtained ekf - slam algorithm is recaped in algorithm [ algo::ekf_slam_linear ] .define and as in .define , as in and . *propagation * for all * update * \\ \vdots \\ \tilde h \left [ r(\hat \theta_{n|n-1})^t \left ( \hat p^k_{n|n-1}- \hat x_{n|n-1 } \right ) \right ] \end{pmatrix} ]in this section we come back to the general framework , .the standard issue of observability is fundamentally a deterministic notion so the noise is systematically turned off . [ def::non_obs ]we say a transformation of the system - is unobservable if for any initial conditions and the induced solutions of the dynamics with noise turned off , i.e. , yield the same output at each time step , that is : it concretely means that ( with all noises turned off ) if the transformation is applied to the initial state then none of the observations are going to be affected . as a consequence, there is no way to know this transformation has been applied . in line with will focus here on the observability properties of the linearized system . to that endwe define the notion of non - observable ( or unobservable ) shift which is an infinitesimal counterpart to definition [ def::non_obs ] , and is strongly related to the infinitesimal observability : [ def::non_obs_first_order ] let denote a solution of with noise turned off .a vector is said to be an unobservable shift of - around if : where is the linearization of at and where is the solution at of the linearized system initialized at , with denoting the jacobian matrix of computed at . in other words( see e.g. ) , for all , lies in the kernel of the observability matrix between steps and associated to the linearized error - state system model , i.e. , =0 ] , and from, we see that is defined as in below .thus the linearized ( first - order ) system model with respect to alternative error writes with , and where is the jacobian of computed at . 
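the definition of an unobservable shift above requires the shift to lie in the kernel of the observability matrix of the linearized error-state model, i.e. of the stacked blocks formed by the observation jacobians multiplied by the error-state transition jacobians. a small numerical helper for checking this condition on user-supplied jacobian sequences is sketched below; the slam-specific jacobians are not reproduced in this excerpt, so the toy example at the end is purely illustrative.

```python
import numpy as np

def observability_matrix(F_list, H_list):
    """Stack H_l @ Phi(l, k) for l = k..k+m, where Phi(l, k) is the product of
    the error-state transition Jacobians, as in the unobservable-shift test."""
    blocks, Phi = [], np.eye(F_list[0].shape[0])
    for F, H in zip(F_list, H_list):
        blocks.append(H @ Phi)
        Phi = F @ Phi
    return np.vstack(blocks)

def is_unobservable_shift(delta, F_list, H_list, tol=1e-9):
    """True if delta lies (numerically) in the kernel of the observability matrix."""
    O = observability_matrix(F_list, H_list)
    return np.linalg.norm(O @ delta) < tol

# toy example: a state direction that the observation model never sees
F = np.eye(3)
H = np.array([[0.0, 1.0, 0.0]])
delta = np.array([1.0, 0.0, 0.0])
print(is_unobservable_shift(delta, [F] * 5, [H] * 5))   # True
```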
as in the standard ekf methodology ,the matrices allow to compute the kalman gain and covariance .letting be the standardly defined innovation ( see algorithm [ algo::iekf_slam ] just after update " ) , is an estimate of the linearized error accounting for the observation , and is supposed to encode the dispersion .the final step of the standard ekf methodology is to update the estimated state thanks to the estimated linearized error .there is a small catch , though : being not anymore defined as a mere difference , simply adding to would not be appropriate .the most natural counterpart to in our setting , would be to choose for the values of making the right member of equal to the just computed .however , the iekf theory recalled in appendix [ gen : iekf ] , suggests an update that amounts to the latter to the first order , but whose non - linear structure ensures better properties .thus , the state is updated as follows , with defined by .algorithm [ algo::iekf_slam ] recaps the various steps of the iekf slam .define and as in .define , as in and . *propagation * for all * update * \\ \vdots \\ \tilde h \left [ r(\hat \theta_{n|n-1})^t \left ( \hat p^k_{n|n-1}- \hat x_{n|n-1 } \right ) \right ] \end{pmatrix} ]in this section we show the infinitesimal rotations and translations of the global frame are unobservable shifts in the sense of definition [ def::non_obs_first_order ] regardless of the linearization points used to compute the matrices and of eq . , a feature in sharp contrast with the usual restricting condition on the linearization points . in other wordswe show that infinitesimal rotations and translations of the global frame are always unobservable shifts of the system model_ linearized _ with respect to error regardless of the linearization point , a feature in sharp contrast with previous results ( see section [ sect::eks_slam8inconsistency ] and references therein ) .we can consider only one feature ( ) without loss of generality .the expression of the linearized system model has become much simpler , as the linearized error has the remarkable property to remain constant during the propagation step in the absence of noise , since in - .first , let us derive the impact of first - order variations stemming from rotations and translations of the global frame on the error as defined by , that is , an error of the following form [ prop::first_order_rotations_non_linear ] let be an estimate of the state . the first - order perturbation of the _ linearized _ estimation error defined by around 0, corresponding to an _ infinitesimal _ rotation of angle of the global frame , reads in the same way , an _infinitesimal _ translation of the global frame with vector implies a first - order perturbation of the error system of the form according to proposition [ prop::first_order_rotations_prelim ] , an infinitesimal rotation by an angle of the true state corresponds to the transformation . 
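as a rough illustration of how the exponential-based update differs from the additive one, the sketch below applies a correction vector to the pose and landmarks through an se(2)-type retraction: the heading correction also rotates the position and landmark estimates, and the translational corrections are mapped through the closed-form factor v(theta) appearing in the lie exponential. this is a sketch under an assumed se_{k+1}(2) convention, not a verbatim transcription of algorithm 2 (whose equations are not reproduced in this excerpt); function and variable names are illustrative.

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def V(theta):
    """Closed-form factor of the planar Lie exponential (assumed convention)."""
    if abs(theta) < 1e-9:
        return np.eye(2)
    J = np.array([[0.0, -1.0], [1.0, 0.0]])
    return (np.sin(theta) / theta) * np.eye(2) + ((1.0 - np.cos(theta)) / theta) * J

def iekf_like_update(theta, x, landmarks, delta):
    """Apply delta = (d_theta, d_x (2,), d_p1 (2,), ..., d_pK (2,)) through a
    group retraction instead of plain vector addition."""
    d_theta, d_x = delta[0], delta[1:3]
    R, Vd = rot(d_theta), V(d_theta)
    theta_new = theta + d_theta
    x_new = R @ x + Vd @ d_x
    lm_new = []
    for k, p in enumerate(landmarks):
        d_p = delta[3 + 2 * k: 5 + 2 * k]
        lm_new.append(R @ p + Vd @ d_p)
    return theta_new, x_new, lm_new

theta, x = 0.3, np.array([1.0, 2.0])
landmarks = [np.array([4.0, 0.0]), np.array([0.0, 5.0])]
delta = np.array([0.05, 0.1, -0.1, 0.02, 0.0, 0.0, 0.03])
print(iekf_like_update(theta, x, landmarks, delta))
```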
and .regarding of eq it corresponds to the variation direction of the state space is seen " by the _linearized _ error system as the vector .similarly , a translation of vector of the global frame yields the transformation .the effect on the linearized error of is obviously the perturbation neglecting terms of order .we can now prove the first major result of the present article : the infinitesimal transformations stemming from rotations and translations of the gobal frame are unobservable shifts for the iekf linearized model .[ slam : thm : obs ] consider the slam problem defined by equations and , and the iekf - slam algorithm [ algo::iekf_slam ]. let denote a linear combination of infinitesimal rotations and translations of the whole system defined as follows then is an unobservable shift of the linearized system model - of the iekf slam in the sense of definition [ def::non_obs_first_order ] , and this whatever the sequence of true states and estimates : the very structure of the iekf is consistent with the considered unobservability .note that definition [ def::non_obs_first_order ] involves a propagated perturbation , but as here is : we have .thus , the only point to check is : i.e. , .this is straightforward replacing with alternatively and .we obtained the consistency property we were pursuing : the linearized model correctly captures the unobservability of global rotations and translations . as a byproduct ,the unobservable seen by the filter is automatically of appropriate dimension .the standard ekf is tuned to reduce the state estimation error defined through the original state variables of the problem. albeit perfectly suited to the linear case , the latter state error has in fact absolutely no fundamental reason to rule the linearization process in a non - linear setting . the basic differencewhen analyzing the ekf and the iekf is that * in the standard ekf , there is a trivial correspondence between a small variation of the true state and a small variation of the estimation error .but the global rotations of the frame make the error vary in a non - trivial way as recalled in section [ sec:2 ] . * in the iekf approach , the effect of a small rotation of the state on the variation of the estimation error becomes trivial as ensured by proposition [ prop::first_order_rotations_non_linear ] .but the error is non - trivially related to the state , as its definition explicitly depends on the linearization point .many consistency issues of the ekf stem from the fact that the updated covariance matrix is computed before the update , namely at the predicted state , and thus does not account for the updated state s value , albeit supposed to reflect the covariance of the updated error .this is why the oc - ekf typically seeks to avoid linearizing at the latest , albeit best , state estimate , in order to find a close - by state such that the covariance matrix resulting from linearization preserves the observability subspace dimension .the iekf approach is wholly different : the updated covariance is computed at the latest estimate , which is akin to the standard ekf methodology .but it is then indirectly adapted to the updated state , since it is _ interpreted _ as the covariance of the error . 
and contrarily to the standard case , the definition of this error depends on .more intuitively , we can say the confidence ellipsoids encoded in are attached to a basis that undergoes a transformation when moved over from to , this transformation being tied to the unobservable directions .this prevents spurious reduction of the covariance over unobservable shifts , which are not identical at and .finally , note the alternative error is all but artificial : it naturally stems from the lie group structure of the problem .this is logical as the considered unobservability actually pertains to an _ invariance _ of the model - , that is the slam problem , to global translations and rotations .thus it comes as no surprise the _ invariant _ approach , that brings to bear invariant state errors that encode the very symmetries of the problem , prove fruitful ( see the appendix for more details ) .our approach can be related to the previous work . indeed , according to the latter article , failing to capture the right dimension of the observability subspace in the linearized model leads to `` spurious information gain along directions of the state space where no information is actually available '' and results in `` unjustified reduction of the covariance estimates , a primary cause of filter inconsistency '' .theorem [ slam : thm : obs ] proves that infinitesimal rotations and translations of the global frame , which are unobservable in the slam problem , are always `` seen '' by the iekf linearized model as unobservable directions indeed , so this filter does not suffer from `` false observability '' issues .this is our major theoretical result .that said , the results of the latter section concern the system with noise turned off , and pertain to an automatic control approach to the notion of observability as in .the present section is rather concerned with the estimation theoretic consequences of theorem [ slam : thm : obs ] .we prove indeed , that the iekf s output covariance matrix correctly reflects an absence of `` information gain '' along the unobservable directions , as mentioned above , but where the information is now to be understood in the sense of fisher information . 
as a by - product , this allows relating our results to a slightly different approach to slam consistency , that rather focuses on the fisher information matrix than on the observability matrix , see in particular .the exposure of the present section is based on the seminal article .see also for related ideas applied to slam .consider the system with output .define the collection of state vectors and observations up to time : joint probability distribution of the vector and of the vector is bayesian fisher information matrix ( bifm ) is defined as the following matrix based upon the dyad of the gradient of the log - likelihood : [\nabla_{\tilde x_n } \log p(\tilde y_n,\tilde x_n]^t)\]]and note that , for the slam problem it boils down to the matrix of .this matrix is of interest to us as it yields a lower bound on the accuracy achievable by any estimator used to attack the filtering problem - .indeed let be defined as the _ inverse _ of the right - lower block of ^{-1} ] and .it turns out , by extension of the results , that there exists a closed form for the lie exponential that writes with .the is also easily derived by extension of , but to save space , we only display it once : is defined as the matrix of eq .this section is a summary of the iekf methodology of .let be a matrix lie group .consider a general dynamical system on the group , associated to a sequence of observations , with equations as follows : where is an input matrix which encodes the displacement according to the evolution model , is a vector encoding the model noise , is the observation function and the measurement noise .the iekf propagates an estimate obtained after the previous observation through the deterministic part of : to update using the new observation , one has to consider an estimation error that is _ well - defined _ on the group . in this paperwe will use the following right - invariant errors which are equal to when .the terminology stems from the fact they are invariant to right multiplications , that is , transformations of the form with .note that , one could alternatively consider left - invariant errors but it turns out to be less fruitful for slam . the iekf update is based upon a first - order expansion of the non - linear system associated to the errors around .first , compute the full error s evolution that the term has disappeared !this is a key property for the successes of the invariant filtering approach . to linearize this equationwe define around through in the standard non - additive noise ekf methodology all terms of order , are assumed small and are neglected . using the bch formula , and neglecting the latter terms ,we get using the local invertibility of around , we get the following linearized error evolution in : where and . to linearize the output error , we now slightly adapt the iekf theory to account for the general form of output .note that , . 
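the bifm expressions themselves are not reproduced in this excerpt, but the qualitative claim that no information accumulates along unobservable directions can be illustrated on a toy linear-gaussian system with the standard information-form recursion. the direction v below is never seen by the output (h v = 0) and is preserved by the dynamics (f v = v), and the quadratic form v^t j v indeed decreases over time. the system matrices and values are assumptions chosen only for this illustration.

```python
import numpy as np

# toy linear-Gaussian system: the first coordinate is never observed
F = np.eye(3)
H = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
Q = 0.01 * np.eye(3)        # process noise covariance
R = 0.10 * np.eye(2)        # measurement noise covariance

J = np.eye(3)                        # prior information matrix
v = np.array([1.0, 0.0, 0.0])        # unobservable direction (H v = 0, F v = v)

for n in range(20):
    # information form of prediction followed by update
    J = np.linalg.inv(F @ np.linalg.inv(J) @ F.T + Q) + H.T @ np.linalg.inv(R) @ H
    if n % 5 == 4:
        print(f"step {n + 1:2d}:  v^T J v = {v @ J @ v:.4f}")   # monotonically decreasing
```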
as is assumed small , and as , a first - order taylor expansion in arbitrary , allows definit as follows as in the standard theory , the kalman gain matrix allows computing an estimate of the linearized error after the observation through , where .recall the state estimation errors defined by - are of the form , that is , .thus an estimate of after observation which is consistent with - , is obtained through the following lie group counterpart of the linear update the equations of the filter are detailed in algorithm [ algo::iekf ] .choose initial and [ algo::iekf ] define as in and let and .define as and as .* propagation * * update * , {n|n-1 } ] and .for , by extension of the results , we have the closed form : where .as easily seen by analogy with \vdots & \\[-0.5ex ] ( p^k)_\times r & \end{array } \right)\end{aligned}\ ] ] let the state be , and let be , and let and be their estimated counterparts .it is easily seen that up to terms that will disappear in the linearization process anyway , the model for the state is mapped through defined at , to a model of the form . using the matrix logarithm ,define as the solution of = r\hat r^{t} ] , {n|n-1 } $ ] t. bailey , j. nieto , j. guivant , m. stevens , and e. nebot. consistency of the ekf - slam algorithm . in _ intelligent robots and systems , 2006ieee / rsj international conference on _ , pages 35623568 .ieee , 2006 .s. bonnabel , p. martin , and e. salaun .invariant extended kalman filter : theory and application to a velocity - aided attitude estimation problem . in _ieee conference on decision and control _ , pp .1297 - 1304 , 2009 .g. p. huang , a. mourikis , st .an observability - constrained sliding window filter for slam . in _ intelligent robots and systems ( iros ) , 2011ieee / rsj international conference on _ , pages 6572 .ieee , 2011 .g. p. huang , a. i. mourikis , and s. i roumeliotis .analysis and improvement of the consistency of extended kalman filter based slam . in _ robotics and automation , 2008 .icra 2008 .ieee international conference on _ , pages 473479 .ieee , 2008 .s. j. julier and j. k. uhlmann . a counter example to the theory of simultaneous localization and map building . in _ robotics and automation , 2001 .proceedings 2001 icra .ieee international conference on _ , volume 4 , pages 42384243 .ieee , 2001 .k. w. lee , w. s. wijesoma , and j. i. guzman . on the observability and observability analysis of slam . in _ intelligent robots and systems ( iros ) , 2006ieee / rsj international conference on _ , pages 35693574 .ieee , 2006 .a. martinelli , n. tomatis , and r. siegwart .some results on slam and the closing the loop problem . in _ intelligent robots and systems ( iros ) , 2012ieee / rsj international conference on _ , pages 334339 .ieee , 2005 .s. thrun , y. liu , d. koller , a. y. ng,2 .ghahramani and h. durrant - whyte . simultaneous localization and mapping with sparse extended information filters . in _ the international journal of robotics research _ , 23(7 - 8 ) : 693 - 716 , 2004 .l. zhao , s. huang , and g. dissanayake .linear slam : a linear solution to the feature - based and pose graph slam based on submap joining . in _ intelligent robots and systems ( iros ) , 2013ieee / rsj international conference on ._ ieee , 2013 .
in this paper we address the inconsistency of the ekf-based slam algorithm that stems from the non-observability of the origin and orientation of the global reference frame. we prove, for the non-linear two-dimensional problem with point landmark observations, that this type of inconsistency is remedied using the invariant ekf, a recently introduced variant of the ekf meant to account for the symmetries of the state space. extensive monte-carlo runs illustrate the theoretical results.
* this article reviews the recently developed g - ratio imaging framework .* confounds in the methodology are detailed .* recent progress and applications are reviewed .the g - ratio is an explicit quantitative measure of the relative myelin thickness of a myelinated axon , given by the ratio of the inner to the outer diameter of the myelin sheath .both axon diameter and myelin thickness contribute to neuronal conduction velocity , and given the spatial constraints of the nervous system and cellular energetics , an optimal g - ratio of roughly 0.6 - 0.8 arises .spatial constraints are more stringent in the central nervous system ( cns ) , leading to higher g - ratios than in peripheral nerve .study of the g - ratio _ in vivo _ is interesting in the context of healthy development , aging , and disease progression and treatment . in demyelinating diseases such as multiple sclerosis ( ms ) ,g - ratio changes and axon loss occur , and the g - ratio changes can then partially recover during the remyelination phase .the possibility that the g - ratio is dependent on gender during development , driven by testosterone differences , has recently been proposed and investigated .possible clinical ramifications of a non - optimal g - ratio include `` disconnection '' syndromes such as schizophrenia , in which g - ratio differences have been reported .the g - ratio is expected to vary slightly in healthy neuronal tissue .the relationship between axon size and myelin sheath thickness is close to , but not exactly , linear , with the nonlinearity more pronounced for larger axon size , where the g - ratio is higher . during development ,axon growth outpaces myelination , resulting in a decreasing g - ratio as myelination catches up .there is relatively little literature on the spatial variation of the g - ratio in healthy tissue .values in the range 0.72 - 0.81 have been reported in the cns of small animals ( mouse , rat , guinea pig , rabbit ) .other primary pathology and disorders may lead to an abnormal g - ratio .these include leukodystropies and axonal changes , such as axonal swelling in ischemia .there are many outstanding questions in demyelinating disease that could be best answered by imaging the g - ratio _ in vivo_. for example , in ms , disease progression is still the topic of active research .most histopathological data are from patients at the latest stages of the disease .potential treatment includes agents for both immunosuppression and remyelination .however , if most demyelinated axons die quickly , and the rest remyelinate effectively on their own early in the disease , remyelination agents will be of little clinical value .detailed longitudinal study of the extent of remyelination can therefore aid in choosing avenues for therapy . while techniques exist for measurement of the g - ratio _ ex - vivo _ , measurement of the g - ratio _ in vivo _is an area of active research . currently , there are quantitative mri markers that are sensitive to the myelin volume fraction ( mvf ) and the intra - axonal volume fraction or axon volume fraction ( avf ) . 
in recentwork , it has been shown that measuring these two quantities is sufficient to compute one g - ratio for a voxel , or an _ aggregate _ g - ratio .the g - ratio is a function of the ratio of the mvf to the avf .the challenge then becomes how to estimate the mvf and the avf precisely and accurately with mri .the fiber density or fiber volume fraction ( fvf ) is the sum of the mvf and the avf , and the g - ratio imaging framework aims to decouple the fiber density from the g - ratio , such that a more complete picture of the microstructural detail can be achieved .this , coupled with other microstructural measures such as axon diameter , comprises the field of _ in vivo histology _ of white matter .we wish to describe microstructure in detail on a scale much finer than an imaging voxel , aggregated over the voxel .as previously defined , the g - ratio is the ratio of the inner to the outer diameter of the myelin sheath of a myelinated axon ( see fig . [ gcartoon ] ) .it has been shown in recent work that the g - ratio can be expressed as a function of the myelin volume fraction and the axon volume fraction , and hence can be estimated without explicit measurement of these diameters : this formulation applies to any imaging modality ( e.g. , electron microscopy ( em ) , where the mvf and avf can be measured after segmentation of the image - see fig.[gcartoon ] ) , but it is of particular interest to be able to estimate the g - ratio _ in vivo_. mri provides us with several different contrast mechanisms for estimation of these volume fractions , and given mvf and avf , we can estimate .we will hereafter refer to this mri - based g - ratio metric as for simplicity , but note that it is derived from mri images with certain contrasts sensitive but not equal to the mvf and avf .estimation of these quantities is discussed in the next sections .original ( top left ) and segmented ( top right ) electron micrograph showing axons of white matter , the intra - axonal space ( blue ) , and the myelin ( red ) the myelin appears black because of osmium preparation .the fiber g - ratio is the ratio of the inner to the outer radius of the myelin sheath surrounding an axon .the aggregate g - ratio can be expressed as a function of the myelin volume fraction ( mvf ) and the axon volume fraction ( avf ) .the myelin macromolecules , myelin water , and intra- and extra - axonal water compartments all have distinct properties , which can be exploited to generate mri images from which the respective compartment volume fractions can be estimated.,scaledwidth=50.0% ] diffusion mri is particularly well suited to aid in the estimation of the axon volume fraction .it is sensitive to the displacement distribution of water molecules moving randomly with thermal energy , and this displacement distribution is affected by the cellular structure present in the tissue . 
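stepping back to the aggregate g-ratio expression introduced above (its equation is not reproduced in this excerpt), it follows directly from the definition: per voxel, the ratio avf/(avf+mvf) plays the role of the squared ratio of inner to outer cross-sectional areas, so g = sqrt(avf/(avf+mvf)) = sqrt(1/(1+mvf/avf)). a minimal voxel-wise sketch, with illustrative values only, is given below; the following paragraphs then discuss how the avf and the mvf themselves can be estimated.

```python
import numpy as np

def aggregate_g_ratio(mvf, avf, eps=1e-12):
    """Aggregate g-ratio per voxel from myelin (MVF) and axon (AVF) volume
    fractions: the fibre cross-section ratio AVF/(AVF+MVF) equals g**2."""
    mvf = np.asarray(mvf, dtype=float)
    avf = np.asarray(avf, dtype=float)
    return np.sqrt(avf / np.maximum(avf + mvf, eps))

# example: plausible white-matter values (illustrative only)
print(aggregate_g_ratio(mvf=0.25, avf=0.45))   # approx. 0.80
```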
as the molecules impinge on the cellular membranes , organelles , and cytoskeleton , the displacement distribution takes on a unique shape depending on the environment .intra - axonal diffusion is said to be _ restricted _ , resembling free gaussian diffusion at short diffusion times , but departing markedly from gaussianity at longer times , where the displacement distribution is limited by the pore shape .there will be a sharp drop in the probability of displacement beyond the cell radius .extra - axonal diffusion is said to be _ hindered _ , resembling free gaussian diffusion , but with a smaller variance due to impingement of motion .many diffusion models exist for explicit estimation of the relative cellular compartment sizes .these include neurite orientation density and dispersion imaging ( noddi ) , the composite hindered and restricted model of diffusion ( charmed ) , diffusion basis spectrum imaging ( dbsi ) , restriction spectrum imaging ( rsi ) , white matter tract integrity ( wmti ) from diffusion kurtosis imaging ( dki ) , temporal diffusion spectroscopy , double pulsed field gradient ( dpfg ) mri , the spherical mean technique , the distribution of anisotropic microstructural environments in diffusion - compartment imaging ( diamond ) + , and many others .it is also possible to perform noddi with relaxed constraints ( noddida ) , and to do this calculation analytically ( lemonade ) .another approach , termed the apparent fiber density ( afd ) , uses high diffusion weighting to virtually eliminate the hindered diffusion signal , leaving only intra - axonal water .it has been used to estimate the the relative axon volume fraction of different fiber populations in a voxel .a modification , termed the tensor fiber density ( tfd ) , can be performed with lower diffusion weighting .the simplest diffusion mri models do not differentiate between the tissue compartments .for instance , the diffusion tensor models the entire displacement distribution as an anisotropic gaussian function . the parameters defining this function will change if the intra - axonal volume fraction changes , but to what extent is it practical to extract meaningful quantitative compartment volume fractions from the tensor ?recently , a framework called noddi - dti has been developed , in which the proximity of dti - based parameters to the computed noddi parameters is assessed , given certain assumptions .the fa and the mean diffusivity ( md ) are highly correlated in straight , parallel fiber bundles , and will change with changing avf , leading to estimates of the relative intra - axonal volume . however , this formulation is probably an oversimplification of the microstructural situation , and more detailed modeling is a better choice to ensure specificity to white matter fibers .the original full - brain g - ratio demonstration employed the noddi model of diffusion .it was chosen because of its suitability in the presence of complex subvoxel fiber geometry , including fiber divergence , which may occur to a significant scale in almost all imaging voxels , and its suitability on clinical scanners with relatively low gradient strength . 
having a fast implementation of the model fitting with numerical stability is important for large studies ; hence , the convex - optimized amico implementation is beneficial . while diffusion mri is a modality of choice for imaging microstructure , it can only measure the displacement distribution of water molecules that are visible in a diffusion mri experiment . this limits us to water that is visible at an echo time ( te ) on the order of 50 - 100 ms , and therefore excludes water that is trapped between the myelin bilayers , which has a t on the order of 10 ms . hence , the estimates provided by these models are of the intra - axonal volume fraction _ of the diffusion visible volume_. myelin does not figure in the models . given , e.g. , the noddi model outputs , a complementary myelin imaging technique must be used to estimate the absolute axon volume fraction . the avf is given by \mathrm{avf} = ( 1 - \mathrm{mvf} ) ( 1 - v_{iso} ) \, v_{ic} ( eq . [ avffrnoddi ] ) , with v_{iso} and v_{ic} the isotropic and restricted volume fractions from the noddi model , and the mvf obtained from a myelin mapping technique , examples of which are discussed below . diffusion contrast may not be our only window onto the axon volume fraction . recent work has shown that it is possible to disambiguate the myelin , intra - axonal , and extra - axonal water compartments using complex gradient echo ( gre ) images . the myelin water is separable from the combined intra- and extra - axonal water using multicomponent t reconstruction , providing a myelin marker ( see below ) . however , incorporation of the phase of the gre images potentially allows us to separate all three compartments based on frequency shifts . challenges include the fact that the frequency shift is dependent on the orientation of the axon to the main magnetic field b . when the axon is oriented perpendicular to b , the myelin water will experience a positive frequency shift , the intra - axonal water a negative frequency shift , and the extra - axonal water will not experience a frequency shift . note that the avf as defined by these diffusion mri models is specific to white matter . while it makes sense to define an axon volume fraction in grey matter , the models in general can not distinguish between axons and dendrites . the noddi model 's v_{ic} parameter , for example , is `` neurite density '' , i.e. , all cellular processes that can be assumed to have infinitely restricted diffusion in their transverse plane . hence , the g - ratio from mri data is undefined in grey matter . the fiber volume fraction is the sum of the avf and the mvf . can diffusion mri , or any other mri contrast mechanism , measure the total fiber volume fraction itself ? clearly , gre images have potential , as discussed above . is diffusion imaging sensitive to the fvf , as opposed to the avf ? while myelin water is virtually invisible in diffusion mri , diffusion mri is not insensitive to myelin . first , the ratio of the intra- to extra - axonal diffusion mri visible water in a voxel will change as the myelin volume fraction in that voxel changes . hence , for example , the noddi v_{ic} parameter changes with demyelination , even if all axons remain intact . second , diffusion acquisitions are heavily t weighted , and t is myelin - sensitive . the total diffusion weighted signal thus decreases as myelin content increases . however , to robustly quantify myelin volume fraction , it is necessary to add a second contrast mechanism , even if it is additional t weighted images , to the scanning protocol . this is discussed below .
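as a concrete sketch of how the two maps combine , the following illustrative code assumes the relations above ( eq . [ avffrnoddi ] and eq . [ geq ] ) ; it is not the implementation used in the work described here , and the numerical values are illustrative only .

```python
# illustrative sketch , not the authors' code . assumes avf = (1 - mvf)(1 - v_iso) v_ic
# ( eq . [ avffrnoddi ] ) and g = sqrt( avf / ( avf + mvf ) ) ( eq . [ geq ] ) ; the fvf form
# g = sqrt( 1 - mvf / fvf ) is equivalent , since fvf = avf + mvf .
import numpy as np

def axon_volume_fraction(mvf, v_iso, v_ic):
    # rescale the diffusion - visible fractions by ( 1 - mvf ) to account for
    # myelin water that the diffusion acquisition does not see
    return (1.0 - mvf) * (1.0 - v_iso) * v_ic

def aggregate_g_ratio(mvf, avf):
    mvf, avf = np.asarray(mvf, float), np.asarray(avf, float)
    fvf = mvf + avf
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.sqrt(np.where(fvf > 0, avf / fvf, 0.0))

# plausible healthy white matter values , for illustration only
mvf, v_iso, v_ic = 0.30, 0.05, 0.55
avf = axon_volume_fraction(mvf, v_iso, v_ic)   # ~0.37
print(avf, aggregate_g_ratio(mvf, avf))        # g ~ 0.74
```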
despite the nomenclature , as noted above , even the apparent fiber density and tensor fiber density are in fact relative axon densities . they would provide a relative fvf only if the g - ratio is constant . in a recent study of the g - ratio , the tfd was equated with the fvf , not the avf , for input to the g - ratio formula . the g - ratio is a function of the ratio of the mvf to the avf ( eq . [ geq ] ) , or alternately , the ratio of the mvf to the fvf : g = \sqrt{ 1 - \mathrm{mvf} / \mathrm{fvf} } . this means that conclusions reached about the variation of the g - ratio found by equating the tfd with the fvf will be robust in this case . absolute g - ratios in this case were calibrated to have a mean of 0.7 in healthy white matter . in early work on the fiber g - ratio , it was shown that , assuming a simple white matter model of straight , parallel cylinders , the fractional anisotropy ( fa ) of the diffusion tensor is related to the total fiber volume fraction via a quadratic relationship . the model has been shown to give reasonable values in human corpus callosum ; however , it suffers from several problems . first , it applies only to straight , parallel fiber bundles , such as the corpus callosum and parts of the spinal tracts . however , the regions of the brain where this model can be expected to hold at all are very limited , as there are crossing or splaying fibers in up to 95% of diffusion mri voxels in parenchyma , and curvature is almost ubiquitous at standard imaging resolution . even the axons of callosal fibers are not straight and parallel , with splay up to 18° . second , the model assumes a relatively uniform , if random , packing of axons on the scale of the mri voxel . due to the nonlinear nature of the fa , it will depend strongly on the packing geometry . if two voxels , one with densely packed axons and one with sparsely packed axons , are combined into one , the fa for that voxel will not be the average of the two original voxels , whereas the fiber density will be . third , fa is in practice acquisition and b - value dependent . we note that if diffusion mri were capable of estimating the absolute avf or fvf as well as the ratio of the intra - axonal to extra - fiber water , the g - ratio could immediately be estimated from these two quantities , without further myelin imaging . this has yet to be done robustly , and it is therefore preferable to use an independent myelin marker . there are many different contrasts and computed parameters that are sensitive to myelin . the possible sources of signal from the myelin compartment are the ultra - short t protons in the macromolecules of the myelin sheath itself ( t ) and the short t water protons present between the phospholipid bilayers ( t , see fig . [ gcartoon ] ) . most mri contrast mechanisms are sensitive to myelin content , but few are specific . the myelin phospholipid bilayers create local larmor frequency variations for water protons in their vicinity due to diamagnetic susceptibility effects . this results in myelin content modulated transverse relaxation times t and t , and longitudinal relaxation time t . the local larmor field shift ( fl ) and the susceptibility itself ( chi ) can be computed as well . ultra - short te ( ute ) imaging can be used to image the protons tightly bound to macromolecules .
an alternate approach to isolating the myelin compartment is magnetization transfer ( mt ) imaging , where the ultra - short t macromolecular proton pool size can be estimated by transfer of magnetization to the observable water pool . mt based parameters sensitive to macromolecular protons include the magnetization transfer ratio ( mtr ) , the mt saturation index ( mt ) , the macromolecular pool size ( f ) from quantitative magnetization transfer , single - point two - pool modeling , and inhomogeneous mt . alternately , the myelin water can be imaged with quantitative multicomponent t or t relaxation , which yields the myelin water fraction ( mwf ) surrogate for myelin density . variants include gradient and spin echo ( grase ) mwf imaging , linear combination myelin imaging ( e.g. , ) , t prepared mwf imaging , multi - component driven equilibrium single point estimation of t ( mcdespot ) and direct visualization of the short transverse relaxation time component via an inversion recovery preparation to reduce long t signal ( vista ) . other alternate approaches exploiting myelin - modulated relaxation times include combined contrast imaging ( t/t ) or independent component analysis . proton density is also sensitive to macromolecular content , and the proton - density based macromolecular tissue volume ( mtv ) has been used as a quantitative myelin marker . while these mri measures have been shown to correlate highly with myelin content , they have not been incorporated in a specific tissue model in a manner similar to the diffusion signal , and hence some calibration is needed . this is still a topic of research . caveats of improper calibration of the mvf are discussed in section [ cal ] . in this and the previous sections , we have discussed imaging techniques for both diffusion - visible microstructure and myelin . any multi - modal imaging protocol with contrasts such as these , sensitive to the axon and myelin volume fractions , is sensitive to the g - ratio ( e.g. , ) . the purpose of the explicit g - ratio formulation is to create a measure that is _ specific _ to the g - ratio . it is interesting to ask whether we could use a technique such as deep learning to estimate the g - ratio , skipping explicit modeling completely . in the following sections , we illustrate several important points about g - ratio imaging using experimental data acquired at our site . the following describes the acquisition protocol . we acquired data from healthy volunteers and from multiple sclerosis patients . these data were acquired on a siemens 3 t trio mri scanner with a 32 channel head coil . a t structural mprage volume with 1 mm isotropic voxel size was acquired for all subjects . for diffusion imaging , the voxel size was 2 mm isotropic . for most experiments , the noddi diffusion protocol consisted of 7 b=0 s / mm^2 , 30 b=700 s / mm^2 , and 64 b=2000 s / mm^2 images , 3x slice acceleration , 2x grappa acceleration , all acquired twice with ap - pa phase encode reversal . for the other experiments , as detailed below when they are introduced , the slice acceleration and phase encode reversal were not employed . for diffusion tensor reconstruction , an additional dataset with 99 diffusion encoding directions at b=1000 s / mm^2 and 9 b=0 s / mm^2 images was acquired . for magnetization transfer images , we also used 2 mm isotropic voxels to match the diffusion imaging voxel size .
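the simplest of the mt - based markers compared later , the mtr , is computed directly from the mt - off / mt - on pair of scans described next ; the following is a minimal sketch with placeholder file names , not the site's processing pipeline .

```python
# minimal sketch of the mtr computation from co - registered mt - off / mt - on volumes ;
# file names are placeholders , and this is not the site's processing pipeline .
import nibabel as nib
import numpy as np

ref = nib.load("mt_off.nii.gz")                # pd - weighted spgr , mt pulse off
s0  = ref.get_fdata()
smt = nib.load("mt_on.nii.gz").get_fdata()     # same scan with the mt pulse on

with np.errstate(invalid="ignore", divide="ignore"):
    mtr = np.where(s0 > 0, (s0 - smt) / s0, 0.0)   # magnetization transfer ratio

nib.save(nib.Nifti1Image(mtr.astype(np.float32), ref.affine), "mtr.nii.gz")
```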
for mtr , one 3d non - selective pd - weighted rf - spoiled gradient echo ( spgr ) scan was acquired with tr=30 ms and excitation flip angle , and one mt - weighted scan was acquired with the same parameters and an mt pulse with 2.2 khz frequency offset and 540 mt pulse flip angle . for mt computation , these same mt - on and mt - off scans were used , with one additional t-weighted scan with tr=11 ms and excitation flip angle . for qmt computation , 10-point logarithmic sampling of the z - spectrum from 0.433 - 17.235 khz frequency offset was acquired , with two mt pulse flip angles for each point , 426 and 142 , and excitation flip angle . the qmt acquisition was accelerated with 2x grappa acceleration . additional scans for correction of the maps included b field mapping using the double angle technique , with 60 and 120 flip angles , b field mapping using the two - point phase difference technique , with te/te = 4.0/8.48 ms , and t mapping using the variable flip angle technique , with flip angles 3 and 20 . additional t-flair and pd images were acquired for the ms subjects to aid in lesion segmentation . in this section , we discuss pitfalls and outstanding issues in g - ratio imaging . experimental results are included in these sections to illustrate these problems . the diffusion mri post - processing techniques described in section [ avf ] give a range of outputs . some are physical quantities ( such as the diffusion displacement distribution ; kurtosis ) , while some are parameters of detailed biological models ( such as the intra - axonal volume fraction ) . models are valuable , but the user has to be aware of the assumptions made . the parameter space in existing models ranges from three free parameters in noddi to six , twenty - three , and thirty - one in other models . recent analysis hypothesizes that the lower number of free parameters in , e.g. , noddi and charmed , may be matched to the level of complexity possible on current clinical systems , while high gradient strength , high b - values , and more b - shells may be necessary for more complex models , and would make them more optimal . this is a general problem with multi - exponential models when diffusion weighting is weak .
on standard mr systems , relaxing the constraints on fixed parameters has been shown to lead to degeneracy of solutions . regularization approaches such as the spherical mean technique ( smt ) can make the problem less ill - posed . one of the fixed parameters in the noddi model is the parallel diffusivity in the intra- and extra - axonal space , both set to the same fixed value . other models explicitly model these as unequal ; for instance , wmti assumes that the intra - axonal diffusivity is less than or equal to the extra - axonal diffusivity . the actual values are unknown ; however , simulations have shown that the assumption of equal parallel diffusivities leads to a 34 - 53% overestimation of the intra - axonal compartment size if the diffusivities are in fact unequal , with the intra - axonal diffusivity either greater than or less than the extra - axonal diffusivity . independent of whether the intra- and extra - axonal parallel diffusivities are equal , another source of this bias is the tortuosity model employed by many models , including noddi , diamond , and the smt . this model computes the perpendicular extra - axonal diffusivity as a function of the diffusion - visible intra - axonal volume fraction of the non - csf tissue ( v_{ic} in the noddi model ) . this tortuosity estimate is bound to be inaccurate because the tortuosity is expected to vary as the absolute fiber volume fraction of the non - csf tissue , not the diffusion - visible fiber volume fraction . these two quantities are very different , as the myelin and axon volume fractions are almost equal in healthy tissue . the mvf could be explicitly included in the equation , and would be expected to result in a myelin volume dependent reduction in the perpendicular extra - axonal diffusivity ( an illustrative sketch of such a modification is given below ) . however , in healthy tissue , where the fvf should scale roughly as the v_{ic} parameter , this tortuosity model does not appear to hold when applied to experimental data with independent estimates of the parallel and perpendicular extra - axonal diffusivities . another fixed parameter in the noddi model is the t relaxation time of all tissue , assumed to be the same , even in csf . this leads to an overestimation of v , which can be corrected given t estimates from , e.g. , a t mapping technique such as mcdespot . diffusion mri is exquisitely sensitive to fiber geometry . the fractional anisotropy , as mentioned above , may be more sensitive to geometry than to any microstructural feature .
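the following sketch contrasts the standard tortuosity constraint with a hypothetical myelin - aware variant of the kind suggested above ; the second expression is an illustrative assumption , not a formula from the work discussed here .

```python
# sketch contrasting the standard tortuosity constraint with a hypothetical
# myelin - aware variant ; the second expression is an illustrative assumption .
def d_perp_standard(d_par, v_ic):
    # perpendicular extra - axonal diffusivity tied to the diffusion - visible
    # intra - neurite fraction , as in noddi - like models
    return d_par * (1.0 - v_ic)

def d_perp_myelin_aware(d_par, v_ic, mvf):
    # hypothetical variant : hindrance scales with the absolute fiber volume
    # fraction ( myelin plus rescaled intra - axonal space )
    fvf = mvf + (1.0 - mvf) * v_ic
    return d_par * (1.0 - fvf)

d_par = 1.7e-3  # mm^2 / s , a typical fixed value in such models
print(d_perp_standard(d_par, v_ic=0.55))                 # 7.65e-4
print(d_perp_myelin_aware(d_par, v_ic=0.55, mvf=0.30))   # ~5.4e-4 : myelin - dependent reduction
```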
hence , microstructural models must be careful to take geometry ( crossing , splaying , curving , microscopic packing configuration ) into account . a typical diffusion imaging voxel is roughly 8 mm^3 , while the axons probed by microstructural models are on the order of one micron . the noddi model that has been used in several g - ratio imaging studies to date assumes there is a single fiber population with potential splay or curvature , but does not explicitly model crossings . furthermore , the tortuosity model employed is probably not correct for varying packing density on the sub - voxel scale ; it has been shown to depend on the packing arrangement and break down for tight axon packing . this probably explains the discrepancy between model and experiment mentioned above , because the geometry of axonal packing can vary considerably for a given average volume fraction . to what extent does the fiber dispersion model of noddi handle crossing fiber bundles ? we have employed the diffusion mri simulator dsim to investigate this question . we simulated realistic axonal packing in voxels with straight , parallel fibers and with two equal size bundles of straight fibers crossing at 90° ( see fig . [ sim ] ) . fiber volume fractions were set equal for both configurations and were varied from 0.3 to 0.7 . g - ratios were varied from 0.7 to 0.9 . the diffusion weighted signal was generated , and the noddi model parameters computed using the noddi matlab toolbox . the fvf was computed from the noddi parameters using the known mvf . the computed fvf was lower in the crossing fiber case for the noddi - based fvf . this demonstrates that the noddi model , while not explicitly designed for crossing fibers , gives acceptable results in this case , and can be used for full - brain g - ratio estimation at standard voxel size with significant subvoxel fiber crossing , with only a small decrease in the estimated fvf due to partial volume averaging of fiber orientations .

[ figure [ sim ] : simulated fibers in straight , parallel configuration ( left ) vs. crossing ( right ) , with equal fiber volume fraction and similar distributions of axon diameter and position . the noddi model underestimates the fvf by in the crossing fiber case , whereas the dti model underestimates the fvf by . ]

* _ experiments : comparison of dti and noddi for fvf estimation _ * [ dtivsnoddi ]

noddi works optimally with diffusion mri measurements made on at least two shells in q - space , i.e. , two different nonzero b - values , although recent work has proposed solutions for single shell data , at least where certain assumptions can be made about the tissue , or where high b - values are used . in contrast , the diffusion tensor can be robustly fitted and the fiber volume fraction inferred ( see section [ fvf ] ) using a much more sparsely sampled , single shell dataset . many research programs have large databases of single - shell diffusion data , often with limited angular sampling of q - space as well . it is therefore of interest to explore to what extent such data can be used in investigation of the g - ratio . in the simulations described above , we also computed the diffusion tensor using in - house software . the fvf was computed from the fa using the quadratic relationship determined from previous simulations .
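the two fvf estimates being compared can be written compactly as follows ; in this sketch the fa - based quadratic coefficient is a placeholder , not the value determined in the previous simulations .

```python
# sketch of the two fvf estimates compared here ; the quadratic coefficient k is a
# placeholder , not the value determined in the cited simulations .
import numpy as np

def fvf_from_noddi(mvf, v_iso, v_ic):
    # fvf = mvf + avf , with avf = ( 1 - mvf )( 1 - v_iso ) v_ic as above
    return mvf + (1.0 - mvf) * (1.0 - v_iso) * v_ic

def fvf_from_fa(fa, k=0.88):
    # quadratic fa -> fvf mapping , valid only for straight , parallel fibers
    return k * np.asarray(fa, float) ** 2

print(fvf_from_noddi(mvf=0.30, v_iso=0.0, v_ic=0.55))   # ~0.69
print(fvf_from_fa(0.85))                                # ~0.64 with the placeholder k
```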
as expected , the fa is not a predictor of fvf in the presence of crossing fibers : the computed fvf was lower in the crossing fiber case compared to the parallel fiber case for the dti - based fvf . to compare noddi and dti _ in vivo _ , diffusion and qmt data were acquired as described in section [ acq ] for one healthy volunteer , without slice acceleration or phase encode reversal . the qmt data were processed with in - house software and the noddi parameters as described above . the avf , mvf , and g - ratio were computed voxelwise from the diffusion and qmt data as described in section [ calculation ] . additionally , the diffusion tensor was calculated using the b=1000 s / mm^2 diffusion shell . the fvf was then calculated from the fractional anisotropy of the diffusion tensor using the quadratic relationship . the corpus callosum was skeletonized on the fa image and a voxel - wise correlation between the fvf computed from dti and from noddi and qmt was performed for these voxels . the coefficient of proportionality between f and mvf was determined from previous em histological analysis . fig . [ correlationscatter ] shows the fvf computed using both noddi and dti in the skeleton of the healthy human corpus callosum . the correlation between fvf measured using the two techniques was r=0.79 . this indicates a slight discrepancy between the fvf using noddi compared to dti , and a reasonably high correlation between techniques on the skeleton .

[ figure [ correlationscatter ] : correlation between dti- and noddi - derived fiber volume fraction on the skeleton of the corpus callosum . ]

possible explanations for the higher estimates using noddi appear in section [ params ] , although without ground truth it is difficult to say which approach is more accurate . additionally , because the fa does not explicitly model compartments , it is subject to partial volume effects . while partial volume averaging with csf will decrease the fa , the fa - based quadratic fvf model appears to break down in this case . this effect could possibly be reduced by applying the free water elimination technique to obtain the correct fvf for the non - csf compartment and then scaling to reflect the partial volume averaging with csf afterward . to conclude , while the fa is generally a poor indicator of fvf , it may be a reasonable surrogate in certain special cases when data are limited . it is interesting to consider how useful imaging a cross - section of a white matter fascicle may be , regardless of the model used . if the g - ratio can be assumed to be constant along an axon , measurement of a cross - section is useful . however , in many pathological situations , such as wallerian degeneration , it is of interest to study the entire length of the axon . most existing diffusion models assume that extra - axonal diffusion is gaussian , hindered by the structures present , but not restricted . however , observation of tightly packed axons in microscopy ( e.g. , fig . [ gcartoon ] ) indicates that the intra- and extra - axonal spaces may not be as distinguishable as the models assume . it is unclear to what extent the extra - axonal diffusion is non - gaussian .
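the voxelwise comparison above amounts to a masked correlation of the two fvf maps ; a minimal sketch with placeholder file names , not the study's data , follows .

```python
# sketch of the voxelwise comparison on the callosal skeleton ; file names are
# placeholders , and nibabel / scipy are assumed available .
import nibabel as nib
from scipy.stats import pearsonr

fvf_dti   = nib.load("fvf_from_fa.nii.gz").get_fdata()
fvf_noddi = nib.load("fvf_from_noddi_qmt.nii.gz").get_fdata()
skeleton  = nib.load("cc_skeleton_mask.nii.gz").get_fdata() > 0

r, p = pearsonr(fvf_dti[skeleton], fvf_noddi[skeleton])
print(f"r = {r:.2f}, p = {p:.3g}")   # the comparison above reported r = 0.79
```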
returning to the extra - axonal space : if axons are packed tightly together , is extra - axonal diffusion non - gaussian ? it is not clear whether the water mobility through the tight passageways between fibers is distinguishable from the restricted diffusion within spaces surrounded by contiguous myelin . if signal from the extra - axonal space is erroneously attributed to the intra - axonal space , the model output will be incorrect . some models make no attempt to distinguish between intra- and extra - cellular restricted diffusion , meaning the pore size estimates may reflect a mixture of the two . time - dependent ( i.e. , non - gaussian ) diffusion has recently been observed in the extra - axonal space using long diffusion times . this may be due to axon varicosity , axonal beading , or variation in axonal packing . diffusion modeling is an active field , and advances in the near future will hopefully improve precision and accuracy of avf estimates using diffusion mri . histological validation may aid in understanding the strengths and limitations of these estimates . at present , the limitations of these models propagate to the g - ratio , as do the limitations of mvf estimates , which are discussed below . how do we make a quantitative estimate of the mvf from myelin sensitive mri markers ? linear correlations have been shown between the individual myelin sensitive metrics ( such as f , mtr , r , mwf , and mtv ) and the mvf from histology . given the linear correlations that have been established , a logical first approximation is to assume a linear relationship between the chosen myelin - sensitive metric and the mvf . then , using the macromolecular pool size f as an example , the relationship is \mathrm{mvf} = c \, f + b ( eq . [ mvflin ] ) , with c and b constants . while a non - zero value for b has been indicated by some studies , this may be an artefact due to the inherent bias in linear regression . the assumption of a linear relationship hinges on the assumption that non - myelin macromolecular content scales linearly with myelin content , which can break down in disease , or even in healthy tissue . if the myelin and non - myelin macromolecular content do scale linearly , as is assumed here , a constant non - zero intercept is unlikely , meaning b = 0 is a theoretical prior that is reasonable . there is evidence that even if a simple scaling relationship exists between f and mvf , it is dependent on acquisition and post - processing details . for instance , a recent study calibrated f at two different sites , and found a different scaling factor for each . these scaling factors in turn differ from those obtained from other investigations . hence , careful calibration for each study must be performed . several studies have calibrated scaling factors based on a given expected g - ratio in healthy white matter ; however , the g - ratio in healthy white matter is not precisely known . none of the myelin - sensitive mri markers is 100% specific to myelin , and most are sensitive to myelin in a slightly different way . magnetization transfer contrast is specific to macromolecules , and more specific to lipids than to proteins . macromolecules in the axon membrane itself , in neurofilaments within the axons , and in glial cell bodies , will contribute to the mt signal , with myelin constituting only 50% of the macromolecular content in healthy white matter . additionally , mt - based metrics such as the magnetization transfer ratio will have residual contrast from other mechanisms . we expect the mtr contrast to vary linearly with macromolecular content , but also with t . t has the reverse sensitivity
to myelin than does the mt effect , meaning that these effects work against each other , reducing the dynamic range and power of mtr as a marker of myelin . furthermore , t is sensitive to iron and calcium content , intercompartmental exchange , and diffusion , and hence sensitive to axon size and axon count . this means the relationship between mtr and mvf may not be monotonic , and is certainly nonlinear . this nonlinearity is evident in published plots of mtr vs. f , e.g. , that shown by levesque et al . , and the lack of dynamic range of mtr is also evident . the mt technique aims to remove the t dependence in mtr . both mtr and mt depend on the offset frequency used in the acquisition . ihmt shows promise as a more myelin - specific mt marker due to its sensitivity to specific molecules in myelin that broaden the z - spectrum asymmetrically , although it has recently been shown that asymmetric broadening is not essential to generate a non - zero ihmt signal , and the technique suffers from low signal . qmt is the most comprehensive of the mt - based myelin markers , although its use is impeded by long acquisition times , and its parameters appear to be sensitive to the specific model and fitting algorithm . finally , mt based estimates of myelin volume will be insensitive to variation in the distance between the lipid bilayers . proton - density based techniques will , like mt , be sensitive to all macromolecules , with a different weighting on these macromolecules compared to the lipid - dominated mt signal . relaxation - based myelin markers are also not 100% specific to myelin . the confounds with using t directly were mentioned above , and the dependence on iron and calcium concentration , intercompartmental exchange and diffusion will also affect t . t is also sensitive to iron concentration , as well as fiber orientation . isolating the short t or short t compartment enhances specificity to myelin , but mwf estimates vary nonlinearly with myelin content as the sheath thins and exchange and diffusion properties are modulated . variants may have biases ; for example , the mcdespot technique has been shown to overestimate the mwf . combining t and t in various ways may increase specificity , although they rely on myelin being the dominant source of contrast . in the ute technique , it is as yet unclear how to map the signal directly to myelin content . in addition to these confounds , most of these myelin markers have recently been shown to have orientation dependence . these include t , chi , mtr , and t of the macromolecular pool . while these myelin imaging techniques are certainly powerful tools in the study of healthy and diseased brain , can they be used reliably in the g - ratio imaging framework ? as an illustration of the effects of miscalibration of myelin markers , consider the following scenario . we investigate three mt - based myelin markers : mtr , mt , and the macromolecular pool size f.
we assume a simple linear scaling between our mri marker and the mvf . as described in previous work , we calibrate f using the same acquisition protocol in the macaque , coupled with _ ex - vivo _ electron microscopy of the same tissue , and then calibrate mtr and mt to match the mean f - based mvf in white matter . we then compute the g - ratio , using the noddi model of diffusion and the mvf derived from the myelin markers ( eqs . [ avffrnoddi ] , [ geq ] ) . mtr , mt , qmt , and noddi data were acquired for one healthy volunteer and one ms patient , as described in section [ acq ] . for the ms patient , the mt images were computed from the qmt mt - off and mt - on ( mt pulse offset 2.732 khz , flip angle 142° ) images and one additional t image with te=3.3 ms , tr=15 ms , and excitation flip angle . binary segmentation of white and grey matter was performed using beast , using the mprage image only . the macromolecular pool size f was computed using in - house software . the diffusion images were preprocessed using fsl , and the noddi parameters computed using the noddi matlab toolbox . mt was computed according to helms et al . lesion segmentation for the ms subject was performed with in - house software . a combined mri / histology dataset was used to scale each myelin marker ( mtr , mt , and f ) to give the mvf , with the assumption of a linear relationship ( eq . [ mvflin ] ) with intercept b=0 . correlations between the three myelin markers were computed in brain parenchyma . percent differences were computed between healthy white matter and healthy grey matter for each of the three myelin markers . the avf was computed using eq . [ avffrnoddi ] , and g - ratios were computed in the ms and healthy brains using eq . [ geq ] . average g - ratios were computed in healthy white matter , normal appearing white matter ( nawm ) , and ms lesions . a theoretical computation was also performed , varying the mapping of an arbitrary myelin metric to mvf using eq . [ mvflin ] ; a numerical sketch of this computation is given below . we separately varied the slope ( c ) and the intercept ( b ) for a range of fiber volume fraction values and mapped the computed g - ratio as a function of fvf . when varying the slope , the intercept was fixed at the origin .

[ figure [ mtr_mtsat_corrwf ] : mtr plotted versus f ( top ) and mt plotted versus f ( bottom ) in parenchyma . the mtr vs. f plot shows a marked nonlinearity ( r=0.59 ) , as is expected , while mt increases the linearity of the relationship ( r=0.77 ) and the dynamic range . ]

the correlation of mtr with f was r=0.59 ( p < .001 ) , and of mt with f was r=0.77 ( p < .001 ) . fig . [ mtr_mtsat_corrwf ] shows plots of mtr versus f ( top ) and mt versus f ( bottom ) in parenchyma . of note , the plot of mtr versus f appears to have a nonlinear shape , similar to that seen in the literature . when t effects are reduced using mt , the linearity and dynamic range increase . in healthy brain , the percent difference between white and grey matter was 15.02% for mtr , 40.08% for mt , and 45.86% for f. the narrower dynamic range of the mvf derived from mtr can also be seen in fig . [ mvfs ] , where grey matter has markedly higher values . if this simple scaling to obtain the mvf is used in the g - ratio formula , the g - ratio in healthy white matter is relatively constant . however , when lesions exist , the contrast using the different mvf markers is very different . in the ms patient , the mean g - ratio in normal appearing white matter ( nawm ) was 0.76 for all three mvf markers .
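the theoretical slope / intercept computation described above ( cf. fig . [ mvfsim ] ) can be sketched as follows ; the true g - ratio , the calibration errors , and all other numbers are illustrative assumptions , not the study's values .

```python
# sketch of the slope / intercept sensitivity computation ( cf. eq . [ mvflin ] and
# fig . [ mvfsim ] ) ; all numbers are illustrative assumptions .
import numpy as np

def apparent_g(fvf, g_true=0.70, c_err=1.0, b_err=0.0, v_iso=0.0):
    # ground truth for a voxel in which every fiber has the same g - ratio
    mvf_true = (1.0 - g_true**2) * fvf
    avf_true = g_true**2 * fvf
    # diffusion - visible intra - axonal fraction ( myelin water is invisible )
    v_ic = avf_true / (1.0 - mvf_true)
    # mvf reconstructed with a miscalibrated linear mapping ( cf. eq . [ mvflin ] )
    mvf_est = c_err * mvf_true + b_err
    # avf and g recomputed as in the imaging pipeline ( eq . [ avffrnoddi ] , [ geq ] )
    avf_est = (1.0 - mvf_est) * (1.0 - v_iso) * v_ic
    return np.sqrt(avf_est / (avf_est + mvf_est))

fvf = np.linspace(0.3, 0.7, 5)
print(apparent_g(fvf))                # correct calibration : flat at 0.70
print(apparent_g(fvf, c_err=1.3))     # wrong slope : biased and fvf - dependent
print(apparent_g(fvf, b_err=0.05))    # wrong intercept : biased and fvf - dependent
```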
in ms lesions , the mean g - ratio was 0.65 , 0.80 , and 0.80 , for mtr , mt , and f , respectively . fig . [ gratios ] shows the spatial distribution of g - ratios in the ms patient for the three mvf markers . fig . [ mvfsim ] shows the theoretical effect of having an improper slope ( top ) or intercept ( bottom ) in the relationship between an arbitrary myelin marker and the mvf , in the case where the ( theoretical ) relationship is in fact linear . the plots show that the computed g - ratio becomes fiber density dependent , in addition to being incorrect . the mtr is a commonly used myelin marker ; however , due to t sensitivity , it lacks dynamic range . mt correlates more highly with f , obtained from an explicit qmt model designed to isolate the macromolecular tissue content . it is important to note , however , that this correlation may be driven to some extent by the different b sensitivities of the techniques ( see section [ preproc ] ) , and the mtr was not corrected for b induced variability . independently of this demonstration of the potential of mt for myelin mapping , researchers have found that mt may be more sensitive to tissue damage than mtr in multiple sclerosis , with more correlation with disability metrics . mt has recently been used by other groups in g - ratio imaging of healthy adults .

[ figure [ mvfs ] : plots of the mvf derived from ( from left to right ) mtr , mt , and f , in healthy brain . ]

[ figure [ gratios ] : plots of the g - ratio computed using ( from left to right ) mtr , mt , and f , in the ms patient . the arrow indicates a lesion in which the apparent g - ratio is lower than in nawm for mtr , but higher than in nawm for mt and f . ]

if the mvf is miscalibrated in this g - ratio imaging formulation , there will be a residual dependence on fiber volume fraction . this reduces the power of the g - ratio metric , which ideally is completely decoupled from the fiber density . independent of specificity of the myelin marker , if the myelin calibration is inaccurate , this residual dependence on fiber volume fraction occurs . it is clear that the g - ratio metric we will compute is g - ratio _ weighted _ , and the better the calibration , the more weighted to the g - ratio it will be . until quantitative myelin mapping is _ accurate _ , the g - ratio metric will not be specific to the g - ratio .

[ figure [ mvfsim ] : effect of having an improper slope ( top ) or intercept ( bottom ) in the relationship between an arbitrary myelin marker and the mvf , in the case where the ( theoretical ) relationship is in fact linear . the plots show that the computed g - ratio becomes fiber density dependent , in addition to incorrect . ]

one possible solution for mvf calibration is to calibrate the g - ratio to a known value in certain regions of interest , as mentioned above . however , care must be taken that this step is not adjusting for differences in the diffusion part of the pipeline ( e.g.
, different implementations of the diffusion model ) , and therefore still leaving a fiber density dependence . additionally , the correct value in these regions of interest must be known . calibration based on expected mvf would remove this sensitivity , but is subject to error due to partial volume averaging of white matter with other tissue . if the relationship between the myelin - sensitive metric and the mvf is not a simple scaling , such calibration will fail . particular care needs to be taken when studying disease . if the assumed relationship between the myelin marker and the mvf is incorrect , the computed g - ratio will be incorrect . is it possible to compute a g - ratio that is correct to within a scaling factor , and not sensitive to the fiber density ? this would require that the avf or fvf be estimated independent of the mvf . simple models such as the diffusion tensor , apparent fiber density , and tensor fiber density , are indicators of fiber or axon density , but detailed modeling is most likely superior . consideration of contrasts other than diffusion mri , such as gradient - echo based approaches , might also help with this problem . the g - ratio is a function of the ratio of the mvf to the avf , and a technique that measures this ratio directly would be optimal . however , gre based estimates would be of the myelin and axon _ water _ fraction , and hence would still need to be calibrated for the volumetric occupancy of water in these tissues . in summary , both specificity and accuracy are important for both avf and mvf estimation . this may require more sophisticated models in both contexts . for example , we have thus far ignored cell membranes . the axon membrane should technically be included in the avf , and its volume is up to 4% of the avf , but it would most likely be included in the mvf using mt - based mvf estimation . the methods described above include several pre - processing steps , including distortion and field inhomogeneity correction , that deserve further discussion . the mt - based contrasts are acquired with spin - warp acquisition trains , and the diffusion - based contrasts are acquired with single - shot epi . when any acquisition details are changed , the distortions in the images change , and co - registration of voxels for voxelwise quantitative computations becomes more difficult . the blip - up blip - down phase encode strategy ( section [ acq ] ) allows for precise correction of susceptibility - induced distortion in the diffusion images . lack of correction for this distortion leads to visible bands of artefactually high g - ratio near tissue - csf interfaces ( see , e.g. , ) . this was illustrated by mohammadi et al . ( see fig . [ siawoosh_misreg ] ) . uncorrected diffusion mri data leads to g - ratios in the vicinity of unity at the edge of the genu of the corpus callosum , caused by voxels where the avf is artefactually high ( containing little or no csf ) , and the mvf low ( because the correctly localized voxels actually contain csf ) . the white matter - csf boundary is a region of obvious misregistration , but much of the frontal lobe suffers from susceptibility induced distortion , and would therefore have incorrect g - ratios .

[ figure [ siawoosh_misreg ] : misregistration artefact due to susceptibility - induced distortion in diffusion weighted images . at left is an mt image with white matter outlined in red , for one slice ( a ) and a cropped region at the genu ( b ) . in the center ( c , d ) is an original epi diffusion scan with no diffusion weighting and contrast inverted ( ib0 ) . the misregistration with the mt - defined white matter boundary is marked . at right ( e , f ) is the g - ratio computed with these contrasts . uncorrected diffusion mri data leads to g - ratios in the vicinity of unity at the edge of the genu , caused by voxels where the avf is artefactually high ( containing little or no csf ) , and the mvf low ( because the correctly localized voxels actually contain csf ) . reproduced from . ]
multi - modal imaging protocols are a powerful tool for investigation of microstructure . we have thus far discussed combining multiple images with partially orthogonal contrasts in order to estimate the g - ratio . however , problems such as the above misregistration issue arise . can a single acquisition train provide multiple contrasts ? one such approach was described recently for simultaneous mapping of myelin content and diffusion parameters . it consists of an inversion - recovery preparation before a diffusion weighted sequence , allowing for fitting of a model that incorporates both t ( a myelin marker ) and axonal attributes . this approach is conceptually extensible to other myelin - sensitive preparations or modifications of a diffusion weighted sequence , such as quantitative t estimation . can the g - ratio be estimated using a single contrast mechanism ? this could also offer inherent co - registration , as well as possibly increase the acquisition speed . we have noted that if diffusion mri is used as an fvf marker , then diffusion is essentially a single - contrast g - ratio imaging technique . however , we reiterate that it is preferable to use a more robust myelin marker . as discussed in section [ avf ] , analysis of complex gre images may lead us to a technique for estimating the mvf and avf from one set of images . our image processing pipeline for qmt analysis includes b and b correction . b correction for mtr is the subject of continuing research . as for mt , the acquisition strategy leads to a relatively b insensitive map . however , a semi - empirical b correction is also made to correct for higher order effects . using such correction , spatial uniformity of the mt map is improved . correction for field inhomogeneity should be considered regardless of the myelin mapping technique employed . b correction is a particular concern in multicomponent quantitative t modeling . the g - ratio imaging paradigm extracts a single g - ratio metric per voxel . at the typical imaging resolution possible for the constituent mr images , a voxel contains hundreds of thousands of axons . the g - ratio is not constant in tissue , but takes on a distribution of values ( see fig . [ macaque_g_dist ] , which shows the g - ratio distribution in the macaque corpus callosum , measured using electron microscopy ) . the range of myelination includes some unmyelinated axons within healthy white matter . the g - ratio distribution may broaden and become bi - modal in disease . even within a single axon with intact myelin , the g - ratio may vary due to organelle swelling . fiber bundles that cross within one voxel may have different g - ratio distributions .
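as a toy illustration ( assumed numbers , not data ) of how a single aggregate value arises from such a distribution , per - axon volumes can be summed before forming the ratio , which weights larger axons more heavily .

```python
# toy illustration ( assumed numbers , not data ) of how a single aggregate g - ratio
# arises from a distribution of axons : per - axon volumes are summed , so larger
# axons carry more weight in the aggregate value .
import numpy as np

rng = np.random.default_rng(0)
r_inner = rng.gamma(shape=3.0, scale=0.2, size=10_000)            # inner radii ( a.u. )
g_axon  = np.clip(rng.normal(0.75, 0.05, size=10_000), 0.5, 0.95)
r_outer = r_inner / g_axon

avf = np.pi * r_inner**2                   # per - axon axonal cross - sectional area
mvf = np.pi * (r_outer**2 - r_inner**2)    # per - axon myelin cross - sectional area

g_aggregate = np.sqrt(avf.sum() / (avf.sum() + mvf.sum()))
print(g_aggregate, g_axon.mean())          # aggregate is close to , but not exactly , the mean
```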
in development , some fibers within one fiber bundle will fully develop , while others will be pruned , resulting in an interim bimodal g - ratio distribution within the fascicle . the current mri - based g - ratio framework will not be able to distinguish these cases , as it reports only an intermediate g - ratio value . it is robust to crossing fibers , in that it will report the same intermediate g - ratio value whether the separate bundles cross or lie parallel to each other . the broad g - ratio distribution is in part a resolution problem , but the g - ratio is expected to be heterogeneous on a scale smaller than we can hope to resolve with mri .

[ figure [ macaque_g_dist ] : g - ratio distributions from electron microscopy of the cynomolgus macaque corpus callosum , samples 1 - 8 from genu to splenium . reproduced from . ]

the aggregate g - ratio we compute in the case of a distribution of values is not precisely fiber- or axon - area weighted , but is close to axon area weighted within a reasonable range of values . larger axons will have a greater weight in the aggregate g - ratio metric we measure . simply put , the aggregate g - ratio is the g - ratio one would measure if all axons had the same g - ratio . in the case of an ambiguous g - ratio distribution , what techniques can we use to infer what situation is occurring ? in multiple sclerosis , for example , two possible scenarios probably occur frequently . one is patchy demyelination , on a scale much smaller than a voxel and smaller than the diffusion distance , and the other is more globally distributed thin myelin . these two scenarios could give rise to equal avf , mvf , and aggregate g - ratio measurements . one possible way to differentiate these cases could be to look more closely at parameters available to us from diffusion models . it has been shown that the extra - axonal perpendicular diffusivity is relatively unchanged by patchy demyelination in a demyelinating mouse model , because diffusing molecules encounter normal hindrance to motion on most of their trajectory , whereas the axon water fraction is sensitive to this patchy demyelination . hence , the discrepancy between these two measures can be taken as a measure of patchy demyelination . alternatively , one can scan subjects longitudinally and infer disease progression . from the ambiguous timepoint described above , the axons in the patches that are demyelinated may die , leaving a decreased avf and mvf , and a return to a near - healthy g - ratio . in the case of globally thin myelin , the remyelination may continue , leaving a near - healthy avf , mvf , and g - ratio . note that the g - ratio metric still does not distinguish these pathologically distinct cases . there are two unknowns - the fiber density and the g - ratio ( or , alternately , the mvf and the avf ) - and one must consider both to have a full picture of the tissue . looking at the time courses , one can hypothesize what the g - ratio distribution was at the first timepoint . it would be technically challenging to measure the g - ratio distribution _ in vivo_. even with an estimate of a distribution of diffusion properties , and an estimate of the distribution of myelin - sensitive metric , the g - ratio distribution is ill defined . however , several recent acquisition strategies may help us get closer to this aim . one approach is to take advantage of the distinguishable diffusion signal between different fiber orientations .
in the ir - prepared diffusion acquisition described above , the model specifies multiple fiber populations with distinct orientations , each with its own t value . this means the diffusion properties , including the restricted pool fraction ( a marker of intra - axonal signal from the charmed model ) , are paired with a corresponding t for each fiber orientation . hence , a g - ratio metric could be computed for each fiber orientation . this could be of benefit in , e.g. , microstructure informed white matter fiber tractography ( e.g. , ) of fiber populations with distinct g - ratios . `` jumping '' from one fiber population to another is very common in tractography , and constraining tractography to pathways with consistent microstructural features could help reduce false positives in regions of closely intermingling tract systems . it may be possible , conceptually , to estimate the g - ratio distribution via a 2d spectroscopic approach . while extremely acquisition intensive , 2d spectroscopy of t and the diffusion coefficient has been demonstrated recently . the acquisition involves making all diffusion measurements at different echo times . if a distribution of a myelin volume sensitive metric ( here , t ) can be estimated simultaneously with a distribution of a diffusion - based metric sensitive to the axon volume , it may be possible to infer the distribution of g - ratios . making a multi - modal imaging protocol short enough for the study of patient populations and use in the clinic is a considerable challenge . our investigation of mt as a replacement for qmt was done in the interest of reducing acquisition time . there exist other short mt - based approaches , such as single - point two - pool modeling and inhomogeneous mt . another approach could be to use compressed sensing for mt - based acquisitions . gre - based approaches may also offer a faster way of estimating the myelin water fraction , and possibly , as mentioned above , a way to eliminate the diffusion imaging part of the protocol . diffusion imaging has benefited from many acceleration approaches in recent years , including parallel imaging , which can also be used in the myelin mapping protocols , slice multiplexing , and hardware advances such as the connectom gradient system . this has been an incomplete but useful list of pitfalls . now , we will consider the promise of imaging the aggregate g - ratio weighted metric , despite its pitfalls . g - ratio imaging is being explored in many different contexts , described below . the promise of g - ratio imaging is its potential to provide us with valuable _ in vivo _ estimates of relative myelination . in the last few years , studies showing the potential of this framework have begun to emerge . fig . [ healthyg ] shows an image of the g - ratio in healthy white matter using our qmt and noddi g - ratio protocol ( see section [ acq ] ) . the map is relatively flat , with a mean g - ratio of 0.75 . other groups have explored these and other mvf and avf sensitive contrasts for g - ratio mapping in healthy white matter . these include a study of the effects of age and gender in a population of subjects aged 20 to 76 using qmt and noddi , studies of healthy adults using mt and the tfd , and using mtv and dti , and a study of healthy subjects using the vista myelin water imaging technique and noddi .
[ figure [ healthyg ] : the g - ratio in healthy white matter , imaged using qmt and noddi . ]

a variation of the g - ratio with age appears to be detectable with this methodology . a variation with gender has not been seen , and if it exists in adolescence , a study designed for sufficient statistical power at a precise age will be required to detect it . in addition to exploring the effect of age and gender , spatial variability of the g - ratio has been investigated . an elevated g - ratio at the splenium of the corpus callosum has been seen . the splenium has been reported to contain `` super - axons '' of relatively large diameter , and these would be expected , due to the nonlinearity of the g - ratio , to have relatively thinner myelin sheaths . electron microscopy in the macaque ( see fig . [ splenium ] ) confirms this ; the `` super - axons '' dominate the aggregate g - ratio measure , which was seen to be elevated in the splenium using both em measurements and mri of the same tissue .

[ figure [ splenium ] : existence of `` super - axons '' in the splenium of the corpus callosum . top : drawing based on histology by aboitiz et al . ( reproduced from ) , showing large axon diameter at the splenium . bottom : em of the g - ratio in the cynomolgus macaque showing large diameter axons at the splenium . these will dominate the aggregate g - ratio measure , which was elevated in the splenium using both em measurements and mri of the same tissue shown here . ]

g - ratio imaging has also been performed in the healthy human spinal cord , where there are considerable technical challenges , such as motion , susceptibility , and the need for significantly higher resolution than we have described for cerebral applications . in one study , g - ratio data were acquired at 0.8 mm x 0.8 mm in - plane voxel size . this study used the charmed model of diffusion , more accessible on scanners with high gradient strength , on a connectom skyra scanner . it used the mtv myelin marker . of interest , the g - ratio was not found to vary significantly across white matter tracts in the spinal cord , while the diffusion metrics ( restricted fraction , diffusivity of the hindered compartment , and axon diameter ) and the mtv metric did vary across tracts . this is expected , as heterogeneity in packing and axon diameter is expected to be greater than heterogeneity of the g - ratio , and the g - ratio is also robust to partial voluming effects . multiple groups have studied the g - ratio _ in vivo _ in the developing brain . axon growth outpaces myelination during development , and therefore a decreasing g - ratio is expected as myelination reaches maturity , as was seen in these studies . imaging the g - ratio _ in vivo _ in multiple sclerosis has been explored by several groups and is of interest for several reasons . it can possibly help assess disease evolution , and can help monitor response to treatment . it has the potential to aid in the development of new therapies for remyelination . it can also help us understand which therapies might be more fruitful avenues of research .
therapy for ms includes immunotherapy and remyelination therapy ; however , most of our histological knowledge of ms comes from samples from older subjects at more advanced stages of the disease . if remyelination happens effectively , at least for some axons , at the earlier stages of the disease , and the myelin loss measured using myelin markers is due to fiber density loss only , then immunotherapy would be a more useful therapy , at least once most surviving axons have returned to near - normal g - ratios . despite the promise of imaging the g - ratio _ in vivo _ in ms , it is important to remember the effect of miscalibration of the myelin metric when interpreting g - ratio estimates in ms . there is evidence that fiber density drops precipitously in some lesions . we have seen ( fig . [ gratios ] ) that in this case , mtr for example does not drop enough , making ms lesions appear to have a lowered g - ratio instead of a higher g - ratio as expected . inspecting the bottom ( red ) curve in fig . [ mvfsim ] , we see that even if there is a linear relationship between the myelin marker of choice and the mvf , miscalibration leads to an apparent g - ratio metric that is elevated in regions of lower fiber density , and significantly lower in regions of healthy fiber density . this occurs when in fact all of the fibers have the same g - ratio , and could easily be interpreted as hypomyelination in an ms subject or population . to further complicate the situation , one must consider how the myelin sensitive metrics behave in the presence of astrocyte scarring , glial cell processes , and inflammatory cell swelling . the mapping of many myelin markers to mvf may change as the ratio of myelin to other visible macromolecular structures changes . it is also not clear how well the estimates of avf and mvf behave at very low fiber density . another point of concern in ms imaging is that all myelin will affect the mr signal , even if it is not part of an intact fiber . research indicates that there is acute demyelination followed by a period of clearance of myelin debris , followed by effective remyelination . during clearance , remyelination can occur , but this myelin is of poor quality . on the scale of an mri voxel , there can be myelin debris , poor remyelination , and higher quality remyelination . the extent to which myelin debris affects the myelin volume estimates may depend on the myelin mapping technique chosen . further studies of ms are ongoing , including pediatric populations , optic neuritis , and studies investigating whether gadolinium enhancing lesions have a distinct g - ratio . g - ratio imaging has potential to aid in the understanding and treatment of multiple other diseases . white matter abnormalities may underlie many developmental disorders . these include pelizaeus - merzbacher disease and sturge - weber syndrome . the g - ratio can also change due to axonal changes that occur with intact myelin ; for example , axonal swelling due to infarction could result in an increased apparent g - ratio . g - ratio differences have been seen in schizophrenia using electron microscopy , and researchers hope to be able to study such changes _ in vivo _ in schizophrenia and other psychiatric disorders . another potential application of g - ratio imaging is bridging the gap between microstructure and large - scale functional measures such as conduction delays .
the theoretically optimal g - ratio for signal conduction should predict conduction delays . the true promise of g - ratio imaging will come with validation . _ ex - vivo _ validation has been performed , investigating the g - ratio explicitly , as well as the individual metrics used to compute it , whether both the myelin and axon volume weighted components or just one of the two . these studies compare _ in - vivo _ or _ ex - vivo _ mri metrics to electron microscopy , optical microscopy , myelin staining , immunohistochemistry , and coherent anti - stokes raman spectroscopy ( cars ) . while no microscopy technique is perfect , microscopy provides a reasonable validation for imaging techniques , taking into account the possibility for tissue shrinkage and distortion , limitations in contrast and resolution , and segmentation techniques . interpretation of findings of demyelinating models should take into account the particularities of the demyelinating challenge . it has been shown that the extra - axonal diffusivity perpendicular to axons correlates with the g - ratio in a cuprizone demyelinating model in mice . this is probably driven by a fiber volume fraction decrease , because little axon loss would be expected in this model , meaning the g - ratio and fiber volume fraction correlate highly . similarly , west et al . have shown a correlation between the discrepancy between f and mwf and the g - ratio in a knockout model in mice . this is probably a correlation with absolute myelin thickness , via exchange effects , as opposed to the g - ratio per se . we have discussed the considerable promise of g - ratio imaging with mri . computing the g - ratio metric is a useful way to interpret myelin volume - weighted and axon / fiber volume - weighted data . we have furthermore discussed the pitfalls of g - ratio imaging , including mr artefacts , lack of specificity , lack of spatial resolution , and acquisition time . with the confounds described in the text in mind , such as accuracy of myelin mapping , g - ratio imaging clearly gives us an _ aggregate _ g - ratio _ weighted _ metric . this imaging framework provides information on two quantities : the fiber density and the g - ratio , and attempts to decouple these two quantities to the best of the ability of our current imaging technology . the promise of g - ratio imaging includes the multitude of pathological conditions in which _ in vivo _ g - ratio estimates can aid in understanding disease , developing therapies , and monitoring disease progression . it also includes the study of normal variability , development , aging , and functional dynamics .

the authors would like to thank tomas paus , robert dougherty , mathieu boudreau , eva alonso - ortiz , blanche perraud , j. f. cabana , christine tardif , jessica dubois , ofer pasternak , atef badji , robert brown , masaaki hori , and david rudko for their insights and contributions to this work . this work was supported by grants from campus alberta innovates , the canadian institutes for health research ( gbp , fdn-143290 , and jca , fdn-143263 ) , the natural sciences and engineering research council of canada ( ns , 2016 - 06774 , and jca , 435897 - 2013 ) , the montreal heart institute foundation , the fonds de recherche du québec - santé ( jca , 28826 ) , the quebec bioimaging network ( ns , 8436 - 0501 ) , the canada research chair in quantitative magnetic resonance imaging ( jca ) , and the fonds de recherche du québec - nature et technologies ( jca , 2015-pr-182754 ) .

f. aboitiz and j.
montiel .one hundred million years of interhemispheric communication : the history of the corpus callosum ._ brazilian journal of medical and biological research _ , 360 ( 4 ) , april 2003 .issn 0100 - 879x .doi : 10.1590/s0100 - 879x2003000400002 .url http://dx.doi.org/10.1590/s0100-879x2003000400002 .francisco aboitiz , arnold b. scheibel , robin s. fisher , and eran zaidel .fiber composition of the human corpus callosum ._ brain research _ ,5980 ( 1 - 2):0 143153 , december 1992 .issn 00068993 .doi : 10.1016/0006 - 8993(92)90178-c .url http://dx.doi.org/10.1016/0006-8993(92)90178-c .monika albert , jack antel , wolfgang bruck , and christine stadelmann .extensive cortical remyelination in patients with chronic multiple sclerosis ._ brain pathol _ , 170 ( 2):0 129138 , apr 2007 .issn 1015 - 6305 ( print ) ; 1015 - 6305 ( linking ) .doi : 10.1111/j.1750 - 3639.2006.00043.x .eva alonso - ortiz , ives r. levesque , raphal paquin , and g. bruce pike .field inhomogeneity correction for gradient echo myelin water fraction imaging ._ , page n / a , july 2016 .doi : 10.1002/mrm.26334 .url http://dx.doi.org/10.1002/mrm.26334 .jesper l. andersson , stefan skare , and john ashburner .how to correct susceptibility distortions in spin - echo echo - planar images : application to diffusion tensor imaging ._ neuroimage _ , 200 ( 2):0 870888 , october 2003 .issn 1053 - 8119 .doi : 10.1016/s1053 - 8119(03)00336 - 7. url http://dx.doi.org/10.1016/s1053-8119(03)00336-7 .yaniv assaf , tamar blumenfeld - katzir , yossi yovel , and peter j. basser .: a method for measuring axon diameter distribution from diffusion mri . _ magnetic resonance in medicine _ , 590 ( 6):0 13471354 , june 2008 .issn 0740 - 3194 .doi : 10.1002/mrm.21577 .url http://dx.doi.org/10.1002/mrm.21577 .t. e. behrens , h. j. berg , s. jbabdi , m. f. rushworth , and m. w. woolrich .probabilistic diffusion tractography with multiple fibre orientations : what can we gain ?_ neuroimage ._ , 340 ( 1):0 14455 . , 2007 . yves benninger , holly colognato , tina thurnherr , robin j. m. franklin , dino p. leone , suzana atanasoski , klaus - armin nave , charles ffrench constant , ueli suter , and joo b. relvas . signaling mediates premyelinating oligodendrocyte survival but is not required for cns myelination and remyelination . _journal of neuroscience _ , 260 ( 29):0 76657673 , july 2006 .issn 1529 - 2401 .doi : 10.1523/jneurosci.0444 - 06.2006 .url http://dx.doi.org/10.1523/jneurosci.0444-06.2006 .shai berman , jason d. yeatman , and aviv mezer . in vivo measurement of g - ratio in the corpus callosumusing the macromolecular tissue volume : evaluating changes as a function of callosal subregions , age and sex . in _qbin workshop : toward a super - big brain : promises and pitfalls of microstructural imaging _ , page 1 , 2016 .c h berthold , i nilsson , and m rydmark .axon diameter and myelin sheath thickness in nerve fibres of the ventral spinal root of the seventh lumbar nerve of the adult and developing cat ._ j anat _ ,1360 ( pt 3):0 483508 , may 1983 .t. a. bjarnason , i. m. vavasour , c. l. chia , and a. l. mackay .characterization of the nmr behavior of white matter in bovine brain ._ magnetic resonance in medicine _ , 540 ( 5):0 10721081 , november 2005 .issn 0740 - 3194 .doi : 10.1002/mrm.20680 .url http://dx.doi.org/10.1002/mrm.20680 .jean - franois cabana , ye gu , mathieu boudreau , ives r. levesque , yaaseen atchia , john g. sled , sridar narayanan , douglas l. arnold , g. 
bruce pike , julien cohen - adad , tanguy duval , manh - tung vuong , and nikola stikov .quantitative magnetization transfer imaging made easy with qmtlab : software for data simulation , analysis , and visualization ._ concepts magn ._ , 44a0 ( 5):0 263277 ,september 2015 .doi : 10.1002/cmr.a.21357 .url http://dx.doi.org/10.1002/cmr.a.21357 . jennifer s. w. campbell and g. bruce pike .potential and limitations of diffusion mri tractography for the study of language . _ brain and language _ , july 2013 .issn 0093934x .doi : 10.1016/j.bandl.2013.06.007 .url http://dx.doi.org/10.1016/j.bandl.2013.06.007 .jennifer s.w .campbell , nikola stikov , robert f. dougherty , and g. bruce pike .combined noddi and qmt for full - brain g - ratio mapping with complex subvoxel microstructure . in _ismrm 2014 _ , page 393 , 2014 .jennifer s.w .campbell , ilana r. leppert , mathieu boudreau , sridar narayanan , tanguy duval , julien cohen - adad , g. bruce pike , and nikola stikov .caveats of miscalibration of myelin metrics for g - ratio imaging . in _ohbm 2016 _ , page 1804 , 2016 .mara cercignani , giovanni giulietti , nick dowell , barbara spano , neil harrison , and marco bozzali . a simple method to scale the macromolecular pool size ratio for computing the g - ratio in vivo . in _ismrm 2016 _ , page 3369 , 2016 .mara cercignani , giovanni giulietti , nick g. dowell , matt gabel , rebecca broad , p. nigel leigh , neil a. harrison , and marco bozzali . characterizing axonal myelination within the healthy population : a tract - by - tract mapping of effects of age and gender on the fiber g - ratio ._ neurobiology of aging _ , october 2016 .issn 0197 - 4580 .doi : 10.1016/j.neurobiolaging.2016.09.016 .url http://dx.doi.org/10.1016/j.neurobiolaging.2016.09.016 .taylor chomiak and bin hu .what is the optimal value of the g - ratio for myelinated fibers in the rat cns ?a theoretical approach ._ plos one _ , 40 ( 11):0 e7754 + , november 2009 .doi : 10.1371/journal.pone.0007754 .url http://dx.doi.org/10.1371/journal.pone.0007754 . j. cohen - adad .what can we learn from t2 * maps of the cortex ?_ neuroimage _ , 93:0 189200 , june 2014 .issn 10538119 .doi : 10.1016/j.neuroimage.2013.01.023 .url http://dx.doi.org/10.1016/j.neuroimage.2013.01.023 .alessandro daducci , erick j. canales - rodrguez , hui zhang , tim b. dyrby , daniel c. alexander , and jean - philippe thiran .accelerated microstructure imaging via convex optimization ( amico ) from diffusion mri data ._ neuroimage _ , 105:0 3244 , january 2015 .issn 10538119 .doi : 10.1016/j.neuroimage.2014.10.026 .url http://dx.doi.org/10.1016/j.neuroimage.2014.10.026 .silvia de santis , daniel barazany , derek k. jones , and yaniv assaf . resolving relaxometry and diffusion properties within the same voxel in the presence of crossing fibres by combining inversion recovery and diffusion - weighted acquisitions ._ , 750 ( 1):0 372380 , january 2016 .doi : 10.1002/mrm.25644 .url http://dx.doi.org/10.1002/mrm.25644 .silvia de santis , derek k. jones , and alard roebroeck . including diffusion time dependence in the extra - axonal spaceimproves in vivo estimates of axonal diameter and density in human white matter ._ neuroimage _ , 130:0 91103 , april 2016 .issn 10538119 .doi : 10.1016/j.neuroimage.2016.01.047 .url http://dx.doi.org/10.1016/j.neuroimage.2016.01.047 .douglas c. dean , jonathan omuircheartaigh , holly dirks , brittany g. travers , nagesh adluru , andrew l. alexander , and sean c. l. 
deoni .mapping an index of the myelin g - ratio in infants using magnetic resonance imaging ._ neuroimage _ , 132:0 225237 , may 2016 .issn 10538119 .doi : 10.1016/j.neuroimage.2016.02.040 .url http://dx.doi.org/10.1016/j.neuroimage.2016.02.040 .sean c. l. deoni , brian k. rutt , tarunya arun , carlo pierpaoli , and derek k. jones .gleaning multicomponent t1 and t2 information from steady - state imaging data .med . _ , 600 ( 6):0 13721387 , december 2008 .issn 1522 - 2594 .doi : 10.1002/mrm.21704 .url http://dx.doi.org/10.1002/mrm.21704 .maxime descoteaux , jasmeen sidhu , eleftherios garyfallidis , jean - christophe houde , peter neher , bram stieltjes , and klaus h. maier - hein .false positive bundles in tractography . in _ismrm 2016 _ , page 790 , 2016 .m. d. does and j. c. gore .rapid acquisition transverse relaxometric imaging . _ journal of magnetic resonance _ , 1470 ( 1):0 116120 ,november 2000 .issn 1090 - 7807 .url http://view.ncbi.nlm.nih.gov/pubmed/11042054 .jiang du , guolin ma , shihong li , michael carl , nikolaus m. szeverenyi , scott vandenberg , jody corey - bloom , and graeme m. bydder .ultrashort echo time ( ute ) magnetic resonance imaging of the short t2 components in white matter of the brain using a clinical 3 t scanner ._ neuroimage _ , 87:0 3241 , february 2014 .issn 1095 - 9572 .doi : 10.1016/j.neuroimage.2013.10.053 .url http://dx.doi.org/10.1016/j.neuroimage.2013.10.053 . yiping p. du , renxin chu , dosik hwang , mark s. brown , bette k. kleinschmidt - demasters , debra singel , and jack h. simon .fast multislice mapping of the myelin water fraction using multicompartment analysis of t2 * decay at 3 t : a preliminary postmortem study . _ magnetic resonance in medicine _, 580 ( 5):0 865870 , november 2007 .issn 0740 - 3194 .doi : 10.1002/mrm.21409 .url http://dx.doi.org/10.1002/mrm.21409 .adrienne n dula , daniel f gochberg , holly l valentine , william m valentine , and mark d does .multiexponential t2 , magnetization transfer , and quantitative histology in white matter tracts of rat spinal cord ._ magn reson med _ , 630 ( 4):0 902909 , apr 2010 .issn 1522 - 2594 ( electronic ) ; 0740 - 3194 ( linking ) .doi : 10.1002/mrm.22267 .t. duval , s. lvy , n. stikov , j. campbell , a. mezer , t. witzel , b. keil , v. smith , l. l. wald , e. klawiter , and j. cohen - adad . weighted imaging of the human spinal cord in vivo ._ neuroimage _ , september 2016 .issn 10538119 .doi : 10.1016/j.neuroimage.2016.09.018 .url http://dx.doi.org/10.1016/j.neuroimage.2016.09.018 .tanguy duval , simon levy , nikola stikov , a. mezer , thomas witzel , boris keil , v. smith , lawrence l. wald , eric c. klawiter , and j. cohen - adad . in vivo mapping of myelin g - ratio in the human spinal cord . in _ismrm 2015 _ , page 5 , 2015 .tanguy duval , blanche perraud , manh - tung vuong , nibardo lopez rios , nikola stikov , and julien cohen - adad .validation of quantitative mri metrics using full slice histology with automatic axon segmentation . in _ismrm 2016 _ , page 396 , 2016 .simon f. eskildsen , pierrick coup , vladimir fonov , jos v. manjn , kelvin k. leung , nicolas guizard , shafik n. wassef , lasse r. stergaard , and d. louis collins . : brain extraction based on nonlocal segmentation technique ._ neuroimage _ , 590 ( 3):0 23622373 , february 2012 .issn 10538119 .doi : 10.1016/j.neuroimage.2011.09.012 .url http://dx.doi.org/10.1016/j.neuroimage.2011.09.012 .uran ferizi , torben schneider , thomas witzel , lawrence l. wald , hui zhang , claudia a. m. 
wheeler - kingshott , and daniel c. alexander .white matter compartment models for in vivo diffusion mri at 300mt / m ._ neuroimage _ , 118:0 468483 , september 2015 .issn 10538119 .doi : 10.1016/j.neuroimage.2015.06.027 .url http://dx.doi.org/10.1016/j.neuroimage.2015.06.027 .els fieremans , yves de deene , steven delputte , mahir s. zdemir , yves dasseler , jelle vlassenbroeck , karel deblaere , eric achten , and ignace lemahieu .simulation and experimental verification of the diffusion in an anisotropic fiber phantom . _journal of magnetic resonance _ , 1900 ( 2):0 189199 , february 2008 .issn 10907807 .doi : 10.1016/j.jmr.2007.10.014 .url http://dx.doi.org/10.1016/j.jmr.2007.10.014 .els fieremans , jens h. jensen , and joseph a. helpern .white matter characterization with diffusional kurtosis imaging ._ neuroimage _ , 580 ( 1):0 177188 , september 2011 .issn 10538119 .doi : 10.1016/j.neuroimage.2011.06.006 .url http://dx.doi.org/10.1016/j.neuroimage.2011.06.006 .els fieremans , lauren m. burcaw , hong - hsi lee , gregory lemberskiy , jelle veraart , and dmitry s. novikov . in vivo observation and biophysical interpretation of time - dependent diffusion in human white matter ._ neuroimage _ , 129:0 414427 , april 2016 .issn 10538119 .doi : 10.1016/j.neuroimage.2016.01.018 .url http://dx.doi.org/10.1016/j.neuroimage.2016.01.018 .e. k. fram , r. j. herfkens , g. a. johnson , g. h. glover , j. p. karis , a. shimakawa , t. g. perkins , and n. j. pelc . rapid calculation of t1 using variable flip angle gradient refocused imaging ._ magnetic resonance imaging _ , 50 ( 3):0 201208 , 1987 .issn 0730 - 725x .url http://view.ncbi.nlm.nih.gov/pubmed/3626789 .m. garcia , m. gloor , e - w w. radue , chh. stippich , s. g. wetzel , k. scheffler , and o. bieri .fast high - resolution brain imaging with balanced ssfp : interpretation of quantitative magnetization transfer towards simple mtr ._ neuroimage _ , 590 ( 1):0 202211 , january 2012 .issn 1095 - 9572 .url http://view.ncbi.nlm.nih.gov/pubmed/21820061 .paula j. gareau , brian k. rutt , stephen j. karlik , and j. ross mitchell .magnetization transfer and multicomponent t2 relaxation measurements with histopathologic correlation in an experimental model of ms ._ j. magn .reson . imaging _ , 110 ( 6):0 586595 , june 2000 .doi : 10.1002/1522 - 2586(200006)11:6%3c586::aid - jmri3%3e3.0.co;2-v .url http://dx.doi.org/10.1002/1522 - 2586(200006)11:6%3c586::aid - jmri3%3e3.% 0.co;2-v[http://dx.doi.org/10.1002/1522 - 2586(200006)11:6%3c586::aid- jmri3%3e3.% 0.co;2-v ] .gabriel girard , rutger fick , maxime descoteaux , rachid deriche , and demian wassermann .: microstructure - driven tractography based on the ensemble average propagator ._ ipmi 2015 _ , 24:0 675686 , 2015 .issn 1011 - 2499 .url http://view.ncbi.nlm.nih.gov/pubmed/26221712 . matthew f. glasser and david c. van essen .mapping human cortical areas in vivo based on myelin content as revealed by t1- and t2-weighted mri ._ j. neurosci ._ , 310 ( 32):0 1159711616 , august 2011 .issn 1529 - 2401 .doi : 10.1523/jneurosci.2180 - 11.2011 .url http://dx.doi.org/10.1523/jneurosci.2180-11.2011 .francesco grussu , torben schneider , hui zhang , daniel c. alexander , and claudia a. m. wheeler - kingshott .single shell diffusion mri noddi with in vivo cervical cord data . in _ismrm 2014 _ , page 1716 , 2014 .kevin d. harkins , adrienne n. dula , and mark d. 
does .effect of intercompartmental water exchange on the apparent myelin water fraction in multiexponential t2 measurements of rat spinal cord ._ magnetic resonance in medicine _ , 670 ( 3):0 793800 , march 2012 .issn 1522 - 2594 .doi : 10.1002/mrm.23053 .url http://dx.doi.org/10.1002/mrm.23053 .kevin d. harkins , junzhong xu , adrienne n. dula , ke li , william m. valentine , daniel f. gochberg , john c. gore , and mark d. does .the microstructural correlates of t1 in white matter . _ magn . reson ._ , 750 ( 3):0 13411345 , march 2016 .doi : 10.1002/mrm.25709 .url http://dx.doi.org/10.1002/mrm.25709 .helms , dathe , kallenberg , and dechent .erratum to : helms , dathe , kallenberg and dechent , high - resolution maps of magnetization transfer with inherent correction for rf inhomogeneity and t1 relaxation obtained from 3d flash mri .magn reson med 2008 dec;60(6):1396 - 1407 ._ , 640 ( 6):0 1856 , december 2010 .doi : 10.1002/mrm.22607 .url http://dx.doi.org/10.1002/mrm.22607 .gunther helms , henning dathe , kai kallenberg , and peter dechent .high - resolution maps of magnetization transfer with inherent correction for rf inhomogeneity and t1 relaxation obtained from 3d flash mri ._ , 600 ( 6):0 13961407 , december 2008 .doi : 10.1002/mrm.21732 .url http://dx.doi.org/10.1002/mrm.21732 . c hildebrand and r hahn .relation between myelin sheath thickness and axon size in spinal cord white matter of some vertebrate species ._ j neurol sci _ , 380 ( 3):0 421434 , oct 1978 .issn 0022 - 510x ( print ) ; 0022 - 510x ( linking ) .masaaki hori , nikola stikov , yasuaki nojiri , ryuji ad tsurushima , katsutoshi murata , keiichi ishigame , kouhei kamiya , yuichi suzuki , koji kamagata , and shigeki aoki .magnetic resonance myelin g - ratio mapping for the brain and cervical spinal cord : 10 minutes protocol for clinical application . in _ismrm 2016 _ , page 3377 , 2016 .elizabeth b hutchinson , alexandru avram , michal komlosh , m okan irfanoglu , alan barnett , evren ozarslan , susan schwerin , kryslaine radomski , sharon juliano , and carlo pierpaoli . a systematic comparative study of dti and higher order diffusion models in brain fixed tissue . in _ismrm 2016 _ , page 1048 , 2016 .dosik hwang , dong - hyun kim , and yiping p. du . ._ neuroimage _ , 520 ( 1):0 198204 , august 2010 .issn 10538119 .doi : 10.1016/j.neuroimage.2010.04.023 .url http://dx.doi.org/10.1016/j.neuroimage.2010.04.023 .ileana o. jelescu , jelle veraart , vitria adisetiyo , sarah s. milla , dmitry s. novikov , and els fieremans .one diffusion acquisition and different white matter models : how does microstructure change in human early development based on wmti and noddi ? _ neuroimage _ , 107:0 242256 , february 2015 .issn 10538119 .doi : 10.1016/j.neuroimage.2014.12.009 .url http://dx.doi.org/10.1016/j.neuroimage.2014.12.009 .ileana o. jelescu , jelle veraart , els fieremans , and dmitry s. novikov .degeneracy in model parameter estimation for multi - compartmental diffusion in neuronal tissue ._ nmr biomed ._ , 290 ( 1):0 3347 , january 2016 .doi : 10.1002/nbm.3450 .url http://dx.doi.org/10.1002/nbm.3450 .ileana o. jelescu , magdalena zurek , kerryanne v. winters , jelle veraart , anjali rajaratnam , nathanael s. kim , james s. babb , timothy m. shepherd , dmitry s. novikov , sungheon g. kim , and els fieremans . 
in vivo quantification of demyelination and recovery using compartment - specific diffusion mri metrics validated by electron microscopy ._ neuroimage _ , 132:0 104114 , may 2016 .issn 10538119 .doi : 10.1016/j.neuroimage.2016.02.004 .url http://dx.doi.org/10.1016/j.neuroimage.2016.02.004 .sune n. jespersen , carsten r. bjarkam , jens r. nyengaard , mallar m. chakravarty , brian hansen , thomas vosegaard , leif stergaard , dmitriy yablonskiy , niels chr c. nielsen , and peter vestergaard - poulsen .neurite density from magnetic resonance diffusion measurements at ultrahigh field : comparison with light microscopy and electron microscopy ._ neuroimage _ , 490 ( 1):0 205216 , january 2010 .issn 1095 - 9572 .doi : 10.1016/j.neuroimage.2009.08.053 .url http://dx.doi.org/10.1016/j.neuroimage.2009.08.053 .b. jeurissen , a. leemans , j - d .tournier , d. k. jones , and j. sijbers .estimating the number of fiber orientations in diffusion mri voxels : a constrained spherical deconvolution study . in _ismrm 2010 _ , page 573 , 2010 .enrico kaden , nathaniel d. kelm , robert p. carson , mark d. does , and daniel c. alexander .multi - compartment microscopic diffusion imaging ._ neuroimage _ , 139:0 346359 , june 2016 .issn 1095 - 9572 .url http://view.ncbi.nlm.nih.gov/pubmed/27282476 .valerij g. kiselev and kamil a. ilyasov . is the biexponential diffusion biexponential ?_ , 570 ( 3):0 464469 , march 2007 .doi : 10.1002/mrm.21164 .url http://dx.doi.org/10.1002/mrm.21164 .w. kucharczyk , p. m. macdonald , g. j. stanisz , and r. m. henkelman .relaxivity and magnetization transfer of white matter lipids at mr imaging : importance of cerebrosides and ph ._ radiology _ , 1920 ( 2):0 521529 , august 1994 .issn 0033 - 8419 .url http://view.ncbi.nlm.nih.gov/pubmed/8029426 .antoine lampron , antoine larochelle , nathalie laflamme , paul prfontaine , marie - michle plante , maria g. snchez , v. wee yong , peter k. stys , marie - ve tremblay , and serge rivest .inefficient clearance of myelin debris by microglia impairs remyelinating processes ._ journal of experimental medicine _ , 2120 ( 4):0 481495 , april 2015 .issn 1540 - 9538 .doi : 10.1084/jem.20141656 .url http://dx.doi.org/10.1084/jem.20141656 . c. laule , e. leung , d. k. lis , a. l. traboulsee , d. w. paty , a. l. mackay , and g. r. moore .myelin water imaging in multiple sclerosis : quantitative correlations with histopathology ._ multiple sclerosis _ , 120( 6):0 747753 , december 2006 .issn 1352 - 4585 .url http://view.ncbi.nlm.nih.gov/pubmed/17263002 .cornelia laule , irene m. vavasour , shannon h. kolind , david k. li , tony l. traboulsee , g.r .wayne moore , and alex l. mackay .magnetic resonance imaging of myelin ._ neurotherapeutics : the journal of the american society for experimental neurotherapeutics _ , 40 ( 3):0 460484 , july 2007 .issn 1933 - 7213 .doi : 10.1016/j.nurt.2007.05.004 .url http://dx.doi.org/10.1016/j.nurt.2007.05.004 .alfonso lema , courtney bishop , omar malik , miriam mattoscio , rehiana ali , richard nicholas , paolo a. muraro , paul m. matthews , adam d. waldman , and rexford d. newbould .a compararison of magnetization transfer methods to assess brain and cervical cord microstructure in multiple sclerosis ._ journal of neuroimaging _ ,page n / a , august 2016 .doi : 10.1111/jon.12377 .url http://dx.doi.org/10.1111/jon.12377 .ives levesque , john g. sled , sridar narayanan , a. carlos santos , steven d. brass , simon j. francis , douglas l. arnold , and g. 
bruce pike .the role of edema and demyelination in chronic t1 black holes : a quantitative magnetization transfer study . _ j. magn .reson . imaging _ , 210 ( 2):0 103110 , february 2005 .doi : 10.1002/jmri.20231 .url http://dx.doi.org/10.1002/jmri.20231 .ives r. levesque and g. bruce pike . characterizing healthy and diseased white matter using quantitative magnetization transfer and multicomponent t2 relaxometry : a unified view via a four - pool model ._ , 620 ( 6):0 14871496 , december 2009 .issn 1522 - 2594 .doi : 10.1002/mrm.22131 .url http://dx.doi.org/10.1002/mrm.22131 .chunlei liu , wei li , karen a. tong , kristen w. yeom , and samuel kuzminski .susceptibility - weighted imaging and quantitative susceptibility mapping in the brain ._ journal of magnetic resonance imaging _ , 420 ( 1):0 2341 , july 2015 .issn 1522 - 2586 .url http://view.ncbi.nlm.nih.gov/pubmed/25270052 .michael lustig , david donoho , and john m. pauly .sparse mri : the application of compressed sensing for rapid mr imaging . _ magnetic resonance in medicine _, 580 ( 6):0 11821195 , december 2007 .issn 0740 - 3194 .doi : 10.1002/mrm.21391 .url http://dx.doi.org/10.1002/mrm.21391 .a. mackay , k. whittall , j. adler , d. li , d. paty , and d. graeb .in vivo visualization of myelin water in brain by magnetic resonance . _magn reson med _ , 310 ( 6):0 673677 , june 1994 .issn 0740 - 3194 .url http://view.ncbi.nlm.nih.gov/pubmed/8057820 .l. magnollay , f. grussu , c. wheeler - kingshott , v. sethi , h. zhang , d. chard , d. miller , and o. ciccarelli .an investigation of brain neurite density and dispersion in multiple sclerosis using single shell diffusion imaging . in _ ismrm 2014 _ , page 2048 , 2014 .klaus maier - hein , peter neher , and et al .tractography - based connectomes are dominated by false - positive connections ._ biorxiv _ , pages 084137 + , november 2016 .doi : 10.1101/084137 .url http://dx.doi.org/10.1101/084137 .g. mangeat , s. t. govindarajan , c. mainero , and j. cohen - adad . ._ neuroimage _ , 119:0 89102 , october 2015 .issn 10538119 .doi : 10.1016/j.neuroimage.2015.06.033 .url http://dx.doi.org/10.1016/j.neuroimage.2015.06.033 .alan p. manning , kimberley l. chang , alex l. mackay , and carl a. michal .the physical mechanism of inhomogeneous magnetization transfer mri ._ journal of magnetic resonance _ ,november 2016 .issn 10907807 .doi : 10.1016/j.jmr.2016.11.013 .url http://dx.doi.org/10.1016/j.jmr.2016.11.013 .andrew melbourne , zach eaton - rosen , eliza orasanu , david price , alan bainbridge , m. jorge cardoso , giles s. kendall , nicola j. robertson , neil marlow , and sebastien ourselin .longitudinal development in the preterm thalamus and posterior white matter : mri correlations between diffusion weighted imaging and t2 relaxometry .brain mapp ._ , page n / a , march 2016 .doi : 10.1002/hbm.23188 .url http://dx.doi.org/10.1002/hbm.23188 .aviv mezer , jason d. yeatman , nikola stikov , kendrick n. kay , nam - joon cho , robert f. dougherty , michael l. perry , josef parvizi , le h. hua , kim butts - pauly , and brian a. wandell . quantifying the local tissue volume and composition in individual brains with magnetic resonance imaging ._ nat med _ , 190 ( 12):0 16671672 , december 2013 .issn 1078 - 8956 .doi : 10.1038/nm.3390 .url http://dx.doi.org/10.1038/nm.3390 .siawoosh mohammadi , daniel carey , fred dick , joern diedrichsen , martin i. sereno , marco reisert , martina f. 
callaghan , and nikolaus weiskopf .whole - brain in - vivo measurements of the axonal g - ratio in a group of 37 healthy volunteers ._ frontiers in neuroscience _ , 9 , 2015 .issn 1662 - 4548 .doi : 10.3389/fnins.2015.00441 .url http://dx.doi.org/10.3389/fnins.2015.00441 .jeroen mollink , michiel kleinnijenhuis , stamatios n sotiropoulos , michiel cottaar , anne - marie van cappellen van walsum , menuka pallebage gamarallage , olaf ansorge , saad jbabdi , and karla l miller .exploring fibre orientation dispersion in the corpus callosum : comparison of diffusion mri , polarized light imaging and histology . in _ismrm 2016 _ , page 795 , 2016 .j. p. mottershead , k. schmierer , m. clemence , j. s. thornton , f. scaravilli , g. j. barker , p. s. tofts , j. newcombe , m. l. cuzner , r. j. ordidge , w. i. mcdonald , and d. h. miller .high field mri correlates of myelin content and axonal density in multiple sclerosis a post - mortem study of the spinal cord . _journal of neurology _, 2500 ( 11):0 12931301 , november 2003 .issn 0340 - 5354 .url http://view.ncbi.nlm.nih.gov/pubmed/14648144 .lipeng ning , carl - fredrik westin , and yogesh rathi .estimation of bounded and unbounded trajectories in diffusion mri ._ frontiers in neuroscience _ , 10 , march 2016 .issn 1662 - 453x .doi : 10.3389/fnins.2016.00129 .url http://dx.doi.org/10.3389/fnins.2016.00129 .revital nossin - manor , dallas card , charles raybaud , margot j. taylor , and john g. sled .cerebral maturation in the early preterm period - a magnetization transfer and diffusion tensor imaging study using voxel - based analysis ._ neuroimage _ , 112:0 3042 , may 2015 .issn 1095 - 9572 .url http://view.ncbi.nlm.nih.gov/pubmed/25731990 .dmitry s. novikov , els fieremans , jens h. jensen , and joseph a. helpern .random walks with barriers ._ nature physics _, 70 ( 6):0 508514 , march 2011 .issn 1745 - 2473 .doi : 10.1038/nphys1936 .url http://dx.doi.org/10.1038/nphys1936 .dmitry s. novikov , ileana o. jelescu , and els fieremans . from diffusion signal moments to neurite diffusivities , volume fraction and orientation distribution : an exact solution . in _ismrm 2015 _ , page 469 , 2015 .joonmi oh , eric t. han , daniel pelletier , and sarah j. nelson . _ magnetic resonance imaging _ , 240 ( 1):0 3343 , january 2006 .issn 0730 - 725x .doi : 10.1016/j.mri.2005.10.016 .url http://dx.doi.org/10.1016/j.mri.2005.10.016 .se - hong oh , michel bilello , matthew schindler , clyde e. markowitz , john a. detre , and jongho lee .direct visualization of short transverse relaxation time component ( vista ) ._ neuroimage _ , 83:0 485492 , december 2013 .issn 10538119 .doi : 10.1016/j.neuroimage.2013.06.047 .url http://dx.doi.org/10.1016/j.neuroimage.2013.06.047 .andr pampel , dirk k. mller , alfred anwander , henrik marschner , and harald e. mller .orientation dependence of magnetization transfer parameters in human white matter ._ neuroimage _ , 114:0 136146 , july 2015 .issn 10538119 .doi : 10.1016/j.neuroimage.2015.03.068 .url http://dx.doi.org/10.1016/j.neuroimage.2015.03.068 .ofer pasternak , nir sochen , yaniv gur , nathan intrator , and yaniv assaf .free water elimination and mapping from diffusion mri ._ magnetic resonance in medicine _ , 620 ( 3):0 717730 , september 2009 .issn 1522 - 2594 .doi : 10.1002/mrm.22055 .url http://dx.doi.org/10.1002/mrm.22055 .toms paus and roberto toro .could sex differences in white matter be explained by g ratio ? 
_ frontiers in neuroanatomy _ , 3 , 2009 .issn 1662 - 5129 .doi : 10.3389/neuro.05.014.2009 .url http://dx.doi.org/10.3389/neuro.05.014.2009 .j s perrin , g leonard , m perron , g b pike , a pitiot , l richer , s veillette , z pausova , and t paus .sex differences in the growth of white matter during adolescence ._ neuroimage _ , 450 ( 4):0 10551066 , may 2009 .issn 1095 - 9572 ( electronic ) ; 1053 - 8119 ( linking ) .doi : 10.1016/j.neuroimage.2009.01.023 .m. pesaresi , r. soon - shiong , l. french , d. r. kaplan , f. d. miller , and t. paus .axon diameter and axonal transport : in vivo and in vitro effects of androgens ._ neuroimage _ , 115:0 191201 , july 2015 .issn 10538119 .doi : 10.1016/j.neuroimage.2015.04.048 .url http://dx.doi.org/10.1016/j.neuroimage.2015.04.048 .thomas prasloski , alexander rauscher , alex l. mackay , madeleine hodgson , irene m. vavasour , corree laule , and burkhard mdler .rapid whole cerebrum myelin water imaging using a 3d grase sequence ._ neuroimage _ , 630 ( 1):0 533539 , october 2012 .issn 1095 - 9572 .doi : 10.1016/j.neuroimage.2012.06.064 .url http://dx.doi.org/10.1016/j.neuroimage.2012.06.064 .david raffelt , tournier , stephen rose , gerard r. ridgway , robert henderson , stuart crozier , olivier salvado , and alan connelly .apparent fibre density : a novel measure for the analysis of diffusion - weighted magnetic resonance images ._ neuroimage _ , 590 ( 4):0 39763994 , february 2012 .issn 10538119 .doi : 10.1016/j.neuroimage.2011.10.045 .url http://dx.doi.org/10.1016/j.neuroimage.2011.10.045 .a. ramani , c. dalton , d. h. miller , p. s. tofts , and g. j. barker . precise estimate of fundamental in - vivo mt parameters in human brain in clinically feasible times ._ magnetic resonance imaging _ , 200 ( 10):0 721731 , december 2002 .issn 0730 - 725x .url http://view.ncbi.nlm.nih.gov/pubmed/12591568 .marco reisert , irina mader , roza umarova , simon maier , ludger tebartz van elst , and valerij g. kiselev .fiber density estimation from single q - shell diffusion imaging by tensor divergence. _ neuroimage _ , 77:0 166176 , august 2013 .issn 10538119 .doi : 10.1016/j.neuroimage.2013.03.032 .url http://dx.doi.org/10.1016/j.neuroimage.2013.03.032 .ariel rokem , jason d. yeatman , franco pestilli , kendrick n. kay , aviv mezer , stefan van der walt , and brian a. wandell . evaluating the accuracy of diffusion mri models in white matter ._ plos one _ , 100 ( 4):0 e0123272 + , april 2015 .doi : 10.1371/journal.pone.0123272 .url http://dx.doi.org/10.1371/journal.pone.0123272 .itamar ronen , matthew budde , ece ercan , jacopo annese , aranee techawiboonwong , and andrew webb .microstructural organization of axons in the human corpus callosum quantified by diffusion - weighted magnetic resonance spectroscopy of n - acetylaspartate and post - mortem histology ._ brain structure & function _ , 2190 ( 5):0 17731785 ,september 2014 .issn 1863 - 2661 .url http://view.ncbi.nlm.nih.gov/pubmed/23794120 .david a. rudko , l. martyn klassen , sonali n. de chickera , joseph s. gati , gregory a. dekaban , and ravi s. menon .origins of r2 orientation dependence in gray and white matter ._ pnas _ , 1110 ( 1):0 e159e167 , january 2014 .issn 1091 - 6490 .doi : 10.1073/pnas.1306516111 .url http://dx.doi.org/10.1073/pnas.1306516111 .pascal sati , peter van gelderen , afonso c. silva , daniel s. reich , hellmut merkle , jacco a. de zwart , and jeff h. 
duyn .micro - compartment specific t2 * relaxation in the brain ._ neuroimage _ , 77:0 268278 , august 2013 .issn 1095 - 9572 .doi : 10.1016/j.neuroimage.2013.03.005 .url http://dx.doi.org/10.1016/j.neuroimage.2013.03.005 .benoit scherrer , damien jacobs , maxime taquet , anne des rieux , benoit macq , sanjay p. prabhu , and simon k. warfield .measurement of restricted and hindered anisotropic diffusion tissue compartments in a rat model of wallerian degeneration . in _ismrm 2016 _ , page 1087 , 2016 .klaus schmierer , daniel j. tozer , francesco scaravilli , daniel r. altmann , gareth j. barker , paul s. tofts , and david h. miller .quantitative magnetization transfer imaging in postmortem multiple sclerosis brain ._ journal of magnetic resonance imaging : jmri _ , 260 ( 1):0 4151 , july 2007 .issn 1053 - 1807 .doi : 10.1002/jmri.20984 .url http://dx.doi.org/10.1002/jmri.20984 .klaus schmierer , claudia a. wheeler - kingshott , daniel j. tozer , phil a. boulby , harold g. parkes , tarek a. yousry , francesco scaravilli , gareth j. barker , paul s. tofts , and david h. miller .quantitative magnetic resonance of postmortem multiple sclerosis brain before and after fixation ._ magn reson med _ , 590 ( 2):0 268277 , february 2008 .issn 0740 - 3194 .doi : 10.1002/mrm.21487 .url http://dx.doi.org/10.1002/mrm.21487 .j m schrder , j bohl , and u von bardeleben . changes of the ratio between myelin thickness and axon diameter in human developing sural , femoral , ulnar , facial , and trochlear nerves . _ acta neuropathol _ , 760 ( 5):0 47183 , 1988 .farshid sepehrband , kristi a. clark , jeremy f. p. ullmann , nyoman d. kurniawan , gayeshika leanage , david c. reutens , and zhengyi yang .brain tissue compartment density estimated using diffusion - weighted mri yields tissue parameters consistent with histology . _brain mapp ._ , 360 ( 9):0 36873702 , september 2015 .doi : 10.1002/hbm.22872 .url http://dx.doi.org/10.1002/hbm.22872 .kawin setsompop , borjan a. gagoski , jonathan r. polimeni , thomas witzel , van j. wedeen , and lawrence l. wald .blipped - controlled aliasing in parallel imaging for simultaneous multislice echo planar imaging with reduced g - factor penalty ._ , 670 ( 5):0 12101224 , may 2012 .doi : 10.1002/mrm.23097 .url http://dx.doi.org/10.1002/mrm.23097 .noam shemesh , daniel barazany , ofer sadan , leah bar , yuval zur , yael barhum , nir sochen , daniel offen , yaniv assaf , and yoram cohen .mapping apparent eccentricity and residual ensemble anisotropy in the gray matter using angular double - pulsed - field - gradient mri ._ magn reson med _ , 680 ( 3):0 794806 , september 2012 .doi : 10.1002/mrm.23300 .url http://dx.doi.org/10.1002/mrm.23300 .john g. sled and g. bruce pike .quantitative imaging of magnetization transfer exchange and relaxation properties in vivo using mri ._ , 460 ( 5):0 923931 , november 2001 .doi : 10.1002/mrm.1278 .url http://dx.doi.org/10.1002/mrm.1278 .stephen m. smith , mark jenkinson , mark w. woolrich , christian f. beckmann , timothy e. behrens , heidi johansen - berg , peter r. bannister , marilena de luca , ivana drobnjak , david e. flitney , rami k. niazy , james saunders , john vickers , yongyue zhang , nicola de stefano , j. michael brady , and paul m. matthews .advances in functional and structural mr image analysis and implementation as fsl ._ neuroimage _ , 23 suppl 1:0 s208s219 , 2004 .issn 1053 - 8119 .doi : 10.1016/j.neuroimage.2004.07.051 .url http://dx.doi.org/10.1016/j.neuroimage.2004.07.051 .g. j. stanisz , a. szafer , g. a. 
wright , and r. m. henkelman . an analytical model of restricted diffusion in bovine optic nerve ._ magnetic resonance in medicine _ , 370 ( 1):0 103111 , january 1997 .issn 0740 - 3194 .url http://view.ncbi.nlm.nih.gov/pubmed/8978638 .nikola stikov , lee m. perry , aviv mezer , elena rykhlevskaia , brian a. wandell , john m. pauly , and robert f. dougherty .bound pool fractions complement diffusion measures to describe white matter micro and macrostructure ._ neuroimage _ , 540 ( 2):0 11121121 , january 2011 .issn 10538119 .doi : 10.1016/j.neuroimage.2010.08.068 .url http://dx.doi.org/10.1016/j.neuroimage.2010.08.068 .nikola stikov , jennifer s. campbell , thomas stroh , mariette lavele , stephen frey , jennifer novek , stephen nuara , ming - kai k. ho , barry j. bedell , robert f. dougherty , ilana r. leppert , mathieu boudreau , sridar narayanan , tanguy duval , julien cohen - adad , paul - alexandre a. picard , alicja gasecka , daniel ct , and g. bruce pike . quantitative analysis of the myelin g - ratio from electron microscopy images of the macaque corpus callosum ._ data in brief _ , 4:0 368373 , september 2015 .issn 2352 - 3409 .doi : 10.1016/j.dib.2015.05.019 .url http://dx.doi.org/10.1016/j.dib.2015.05.019 .nikola stikov , jennifer s. w. campbell , thomas stroh , mariette lavele , stephen frey , jennifer novek , stephen nuara , ming - kai ho , barry j. bedell , robert f. dougherty , ilana r. leppert , mathieu boudreau , sridar narayanan , tanguy duval , julien cohen - adad , paul picard , alicja gasecka , daniel ct , and g. bruce pike . in vivohistology of the myelin g - ratio with magnetic resonance imaging ._ neuroimage _ , may 2015 .issn 10538119 .doi : 10.1016/j.neuroimage.2015.05.023 .url http://dx.doi.org/10.1016/j.neuroimage.2015.05.023 .rudolf stollberger and paul wach .imaging of the active b1 field in vivo ._ , 350 ( 2):0 246251 , february 1996 .doi : 10.1002/mrm.1910350217 .url http://dx.doi.org/10.1002/mrm.1910350217 .carsten stber , markus morawski , andreas schfer , christian labadie , miriam whnert , christoph leuze , markus streicher , nirav barapatre , katja reimann , stefan geyer , daniel spemann , and robert turner .myelin and iron concentration in the human brain : a quantitative study of mri contrast ._ neuroimage _ , 93 pt 1:0 95106 , june 2014 .issn 1095 - 9572 .url http://view.ncbi.nlm.nih.gov/pubmed/24607447 .aaron szafer , jianhui zhong , and john c. gore . theoretical model for water diffusion in tissues ._ , 330 ( 5):0 697712 , may 1995 .doi : 10.1002/mrm.1910330516 .url http://dx.doi.org/10.1002/mrm.1910330516 .jonathan d. thiessen , yanbo zhang , handi zhang , lingyan wang , richard buist , marc r. del bigio , jiming kong , xin - min li , and melanie martin . quantitative mri and ultrastructural examination of the cuprizone mouse model of demyelination . _nmr biomed ._ , 260 ( 11):0 15621581 , november 2013 .issn 1099 - 1492 .doi : 10.1002/nbm.2992 .url http://dx.doi.org/10.1002/nbm.2992 .n. uranova , d. orlovskaya , o. vikhreva , i. zimina , n. kolomeets , v. vostrikov , and v. rachmanova .electron microscopy of oligodendroglia in severe mental illness ._ brain research bulletin _ , 550 ( 5):0 597610 , july 2001 .issn 0361 - 9230 .url http://view.ncbi.nlm.nih.gov/pubmed/11576756 .gopal varma , guillaume duhamel , cedric de bazelaire , and david c. 
alsop .magnetization transfer from inhomogeneously broadened lines : a potential marker for myelin ._ magn reson med _ , 730 ( 2):0 614622 , february 2015 .issn 1522 - 2594 .url http://view.ncbi.nlm.nih.gov/pubmed/24604578 .irene m. vavasour , cornelia laule , david k. b. li , anthony l. traboulsee , and alex l. mackay . is the magnetization transfer ratio a marker for myelin in multiple sclerosis ?_ j. magn .reson . imaging _, 330 ( 3):0 710718 , march 2011 .doi : 10.1002/jmri.22441 .url http://dx.doi.org/10.1002/jmri.22441 .logi vidarsson , steven m. conolly , kelvin o. lim , garry e. gold , and john m. pauly .echo time optimization for linear combination myelin imaging ._ magnetic resonance in medicine _ , 530 ( 2):0 398407 , february 2005 .issn 0740 - 3194 .doi : 10.1002/mrm.20360 .url http://dx.doi.org/10.1002/mrm.20360 .yong wang , qing wang , justin p. haldar , fang - cheng yeh , mingqiang xie , peng sun , tsang - wei tu , kathryn trinkaus , robyn s. klein , anne h. cross , and sheng - kwei song . ._ brain _ , 1340 ( 12):0 35903601 , december 2011 .issn 1460 - 2156 .doi : 10.1093/brain / awr307 .url http://dx.doi.org/10.1093/brain/awr307 .nikolaus weiskopf , john suckling , guy williams , marta m. correia , becky inkster , roger tait , cinly ooi , edward t. bullmore , and antoine lutti . _ frontiers in neuroscience _ , 7 , 2013 .issn 1662 - 4548 .url http://view.ncbi.nlm.nih.gov/pubmed/23772204 .kathryn west , nathaniel kelm , daniel gochberg , robert carson , kevin ess , and mark does .quantitative estimates of myelin volume fraction from t2 and magnetization transfer . in _ismrm 2016 _ , page 1277 , 2016 .kathryn l west , nathaniel d kelm , daniel f gochberg , robert p carson , kevin c ess , and mark d does .multiexponential t2 and quantitative magnetization transfer in rodent brain models of hypomyelination . in _ismrm 2014 _ , page 2088 , 2014 .kathryn l. west , nathaniel d. kelm , robert p. carson , and mark d. does . a revised model for estimating g - ratio from mri ._ neuroimage _ , august 2015 .issn 10538119 .doi : 10.1016/j.neuroimage.2015.08.017 .url http://dx.doi.org/10.1016/j.neuroimage.2015.08.017 . kathryn l. west , nathanial d. kelm , robert p. carson , and mark d. does . quantitative assessment of g - ratio from mri . in _qbin workshop : toward a super - big brain : promises and pitfalls of microstructural imaging _, page 28 , 2016 .kathryn l. west , nathaniel d. kelm , robert p. carson , daniel f. gochberg , kevin c. ess , and mark d. does .myelin volume fraction imaging with mri ._ neuroimage _ , december 2016 .issn 10538119 .doi : 10.1016/j.neuroimage.2016.12.067 .url http://dx.doi.org/10.1016/j.neuroimage.2016.12.067 .nathan s. white , trygve b. leergaard , helen darceuil , jan g. bjaalie , and anders m. dale . .brain mapp ._ , 340 ( 2):0 327346 , february 2013 .doi : 10.1002/hbm.21454 .url http://dx.doi.org/10.1002/hbm.21454 .tobias c. wood , camilla simmons , samuel a. hurley , anthony c. vernon , joel torres , flavio dellacqua , steve c. r. williams , and diana cash. whole - brain ex - vivo quantitative mri of the cuprizone mouse model ._ peerj _ , 4:0 e2632 + , november 2016 .issn 2167 - 8359 .doi : 10.7717/peerj.2632 .url http://dx.doi.org/10.7717/peerj.2632 .zhe wu , hongjian he , ying chen , song chen , hui liu , yiping p. du , and jianhui zhong .feasibility study of high resolution mapping for myelin water fraction and frequency shift using tissue susceptibility . in _ismrm 2016 _ , page 31 , 2016 .junzhong xu , hua li , kevin d. 
harkins , xiaoyu jiang , jingping xie , hakmook kang , mark d. does , and john c. gore . mapping mean axon diameter and axonal volume fraction by mri using temporal diffusion spectroscopy ._ neuroimage _ , 103:0 1019 , december 2014 .issn 10538119 .doi : 10.1016/j.neuroimage.2014.09.006 .url http://dx.doi.org/10.1016/j.neuroimage.2014.09.006 .v. l. yarnykh and khodanovich . analytical method of correction of b1 errors in mapping of magnetization transfer ratio in highfield magnetic resonance tomography ._ russian physics journal _ , 570 ( 12):0 17841788 , 2015 .doi : 10.1007/s11182 - 015 - 0451 - 7. url http://dx.doi.org/10.1007/s11182-015-0451-7 .vasily l. yarnykh .pulsed z - spectroscopic imaging of cross - relaxation parameters in tissues for human mri : theory and clinical applications ._ , 470 ( 5):0 929939 , may 2002 .doi : 10.1002/mrm.10120 .url http://dx.doi.org/10.1002/mrm.10120 .vasily l. yarnykh .fast macromolecular proton fraction mapping from a single off - resonance magnetization transfer measurement ._ magn reson med _ , 680 ( 1):0 166178 , july 2012 .issn 1522 - 2594 .url http://view.ncbi.nlm.nih.gov/pubmed/22190042 .aldo zaimi , tanguy duval , alicja gasecka , daniel ct , nikola stikov , and julien cohen - adad . : open source software for axon and myelin segmentation and morphometric analysis . _frontiers in neuroinformatics _ , 10 , august 2016 .issn 1662 - 5196 .doi : 10.3389/fninf.2016.00037 .url http://dx.doi.org/10.3389/fninf.2016.00037 .hui zhang , penny l. hubbard , geoff j. m. parker , and daniel c. alexander .axon diameter mapping in the presence of orientation dispersion with diffusion mri ._ neuroimage _ , 560 ( 3):0 13011315 , june 2011 .issn 10538119 .doi : 10.1016/j.neuroimage.2011.01.084 .url http://dx.doi.org/10.1016/j.neuroimage.2011.01.084 .hui zhang , torben schneider , claudia a. wheeler - kingshott , and daniel c. alexander . :practical in vivo neurite orientation dispersion and density imaging of the human brain ._ neuroimage _ , 610 ( 4):0 10001016 , july 2012 .issn 10538119 .doi : 10.1016/j.neuroimage.2012.03.072 .url http://dx.doi.org/10.1016/j.neuroimage.2012.03.072 .
the fiber g - ratio is the ratio of the inner to the outer diameter of the myelin sheath of a myelinated axon . it has a limited dynamic range in healthy white matter , as it is optimized for speed of signal conduction , cellular energetics , and spatial constraints . _ in vivo _ imaging of the g - ratio in health and disease would greatly increase our knowledge of the nervous system and our ability to diagnose , monitor , and treat disease . mri based g - ratio imaging was first conceived in 2011 , and expanded to be feasible in full brain with preliminary results in 2013 . this manuscript reviews the growing g - ratio imaging literature and speculates on future applications . it details the methodology for imaging the g - ratio with mri , and describes the known pitfalls and challenges in doing so . g - ratio , mri , myelin imaging , diffusion mri , white matter , microstructure
spatial indexing has always been an important issue for multi-dimensional data sets in relational databases (dbs), in particular for those dealing with spherical coordinates, e.g. latitude/longitude for earth locations or ra/dec for celestial objects. some db servers offer built-in capabilities to create indexes on these (coordinate) columns, which consequently speed up the execution of queries involving them. however: 1. the use of these facilities may not be easy; 2. they typically use a syntax quite different from the astronomical one; 3. their performance is inadequate for astronomical use. within the mcs library project (calderone & nicastro 2007; nicastro & calderone 2006, 2007; ) we have implemented the dif package, a tool which performs and manages in a fully automatic way the sky pixelisation with both the htm (kunszt et al. 2001) and healpix (górski et al. 2005) schema. using a simple tool, any db table with sky coordinate columns can be easily indexed. this is achieved by using the facilities offered by the mysql db server (which is the only server mcs supports at the moment), i.e. triggers, views and plugins. having a table with sky coordinates, the user can make it fully indexed in order to perform quick queries on rectangular and circular (cone) regions or to create a healpix map file. an sql query to select objects in a cone will look like this: `select * from mycatalogue where entriesincone(20, 30, 5)`, where (20, 30) are the coordinates of the center in degrees and 5 is the radius in arcmin. the important thing to note is that the db manager needs to supply only a few parameters in the configuration phase, whereas the generic user does not need to know anything about the sky pixelisation, either for `select`, `insert` or `update` queries. it also demonstrates that there is no need to extend standard sql for astronomical queries (see adql), at least if mysql is used as db server. in terms of db table indexing, mapping a sphere with a pixel scheme means transforming a 2d into a 1d space; consequently a standard b-tree index can be created on the column with the pixel ids. on a large astronomical table, depending on the ``depth'' of the pixelisation, this can lead to a gain of 4-5 orders of magnitude in search efficiency. the htm and healpix schema are widely used in astronomy and are now mature enough to be considered as candidates for indexing tables containing astronomical data. they are both open source and distributed as c++ libraries. htm uses triangular pixels which can recursively be subdivided into four pixels. the base pixels are 8, 4 for each hemisphere. these ``trixels'' are not equal-area, but the indexing algorithm is very efficient for selecting point sources in catalogues. healpix uses equal-area pseudo-square pixels, particularly suitable for the analysis of large-scale spatial structures. the base pixels are 12. using a 64-bit long integer to store the index ids leads to a limit for the pixel size of about 7.7 and 0.44 milli-arcsec on a side for htm and healpix, respectively. being able to quickly retrieve the list of objects in a given sky region is crucial in several projects. for example, hunting for transient sources like grbs requires fast catalogue lookup so as to quickly cross-match known sources with the detected objects. the ir/optical robotic telescope rem (nicastro & calderone 2006) uses htm indexed catalogues to get the list of objects in sky regions.
in this case, accessing one-billion-object catalogues like the gsc2.3 takes some 10 msec. having a fully automatic htm and healpix indexing would be crucial for the management of the dbs of future large missions like gaia. also the virtual observatory project would greatly benefit from adopting a common indexing scheme for all the various types of archive it can manage. the relevant parameters for the two pixelisations are the maximum resolution and the depth (htm) or order (healpix) resolution parameter. as mentioned, the maximum resolution is related to the usage of 64-bit integers and it is intrinsic to the htm and healpix c++ libraries. mcs is a set of c++ high-level classes aimed at implementing an application server, that is an application providing a service over the network. mcs provides classes to interact with, manage and extend a mysql db server. the included myro package allows a per-row management of db grants, whereas the dif package allows the automatic management of sky pixelisation with the htm and healpix schema. see the for more information. to enable dif, when installing mcs it is enough to give to the configure script the two options `--enable-dif --with-mysql-source=path`, where `path` is the path to the mysql source directory. the htm and healpix c++ libraries are included in the dif package. a db named `dif` will be created containing an auxiliary table `tbl` and a _virtual_ table `dif` which is dynamically managed by the dif db engine. now let's assume one has a db `mydb` with a table `mycat` containing the two coordinate columns `racs` and `deccs` representing the centi-arcsec converted j2000 equatorial coordinates (this requires 4 bytes instead of the 8 necessary for a double value). to make the table manageable using both the htm and healpix pixelisation schema it is enough to run the `dif` script, which is used to perform administrative tasks related to dif-handled tables, passing 6 as the htm depth, 8 as the healpix order, 0 (1) to select the ring (nested) scheme, and, as the last two parameters, the sql expressions which convert to degrees the coordinate values contained in the table fields `racs` and `deccs`. the mysql root password is needed. if the coordinates were already in degrees, then it would have been enough to give their names, e.g. `dif ... ra dec`. in a future release we'll add the possibility to perform simple cross matching between (dif managed) catalogues. having an htm indexed catalogue, the query string to obtain the list of objects in a circular region centred on (60, 30) degrees with a radius of 40 arcmin will be: `select * from mycat_htm where dif_htmcircle(60,30,40);` note the table name `_htm` suffix, which is needed to actually access the view handled by dif. for a rectangle with the same centre and sides of 50 arcmin along the ra axis and 20 arcmin along the dec axis: `select * from mycat_htm where dif_htmrect(60,30,50,20);` giving only three parameters would imply a square selection. having chosen to use both htm and healpix indexing, one could request all the healpix ids of the objects in a square by using an htm function: `select healpid from mycat_htm where dif_htmrect(60,30,50);` to simply get the ids of the pixels falling into a circular/rectangular region one can simply `select id from dif.dif where ...`, i.e. no particular dif-managed table is required.
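to make the pixel-index idea more concrete, the following is a minimal sketch (not taken from the dif sources) of the computation that dif performs transparently on the server side: each row's coordinates are mapped to a 1d healpix pixel id that can be stored in a b-tree-indexed column, and a cone search reduces to a lookup over the list of pixels overlapping the cone followed by an exact distance cut. it uses the python `healpy` package only for illustration; the function names and the client-side workflow are our own assumptions, not part of the dif api.

```python
# minimal sketch (assumptions: python + healpy available; not part of the dif package):
# map a sky position to a 1d healpix pixel id, and turn a cone search into a pixel-id
# lookup. dif performs the equivalent steps server-side via triggers and the dif.dif
# virtual table, so the user never calls anything like this directly.
import numpy as np
import healpy as hp

ORDER = 8            # healpix order used in the example above
NSIDE = 2 ** ORDER   # healpix resolution parameter

def healpix_id(ra_deg, dec_deg, nested=True):
    # pixel id of one position: what the insert/update triggers would store
    # in an indexed column such as `healpid`
    return hp.ang2pix(NSIDE, ra_deg, dec_deg, nest=nested, lonlat=True)

def cone_pixels(ra_deg, dec_deg, radius_arcmin, nested=True):
    # ids of all pixels overlapping a cone: what dif_healpcircle() resolves internally
    center = hp.ang2vec(ra_deg, dec_deg, lonlat=True)
    radius_rad = np.radians(radius_arcmin / 60.0)
    return hp.query_disc(NSIDE, center, radius_rad, inclusive=True, nest=nested)

# example: pixels covering a 40 arcmin cone centred on ra=60, dec=30 (degrees).
ids = cone_pixels(60.0, 30.0, 40.0)
# a client could then run: select * from mycat where healpid in (<ids>)
# and apply an exact angular-distance cut on the returned rows.
print(len(ids), ids[:5])
```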
to obtain the order 10 ids in the ring scheme one can calculate them on the fly: `select dif_healplookup(0,10,racs/3.6e5,deccs/3.6e5) from mycat_htm where dif_htmcircle(60,30,20);` giving 1 instead of 0 would give nested scheme ids. having `ra` and `dec` in degrees one would simply type `(0,10,ra,dec)`. if one has just the healpix ids, then entries in a circular region can be selected like in: `select * from mycat_healp where dif_healpcircle(60,30,40);` note the table name `_healp` suffix. rectangular selections for only-healpix indexed tables will be available in the future. the current list of functions is: `dif_htmcircle`, `dif_htmrect`, `dif_htmrectv`, `dif_healpcircle`, `dif_htmlookup`, `dif_healplookup`, `dif_sphedist`. `dif_htmrectv` accepts the four corners of a rectangle, which can then have any orientation in the sky. `dif_sphedist` calculates the angular distance of two points on the sphere by using the haversine formula. a first version of an idl user-contributed library and demo programs aimed at producing healpix maps from the output of sql queries is available at the . calderone, g., & nicastro, l. 2006, in neutron stars and pulsars, mpe-report no. 291, astro-ph/0701102. nicastro, l., & calderone, g. 2006, in neutron stars and pulsars, mpe-report no. 291, astro-ph/0701099. nicastro, l., & calderone, g. 2007, . górski, k. m., et al. 2005, , 622, 759. kunszt, p. z., szalay, a. s., & thakar, a. r. 2001, in mining the sky: proc. of the mpa/eso/mpe workshop, ed. a. j. banday, s. zaroubi, m. bartelmann, 631
in various astronomical projects it is crucial to have coordinate-indexed tables. all-sky optical and ir catalogues have up to 1 billion objects, a number that will increase with forthcoming projects. also partial sky surveys at various wavelengths can collect information (not just source lists) which can be saved in coordinate-ordered tables. selecting a sub-set of these entries or cross-matching them could be unfeasible if no indexing is performed. sky tessellations with various mapping functions have been proposed. it is a matter of fact that the astronomical community is accepting the htm and healpix schema as the default for object catalogues and for maps visualization and analysis, respectively. within the mcs library project, we have now made available as mysql-callable functions various htm and healpix facilities. this is made possible thanks to the capability offered by mysql 5.1 to add external plug-ins. the dif (dynamic indexing facilities) package distributed within the mcs library creates and manages a combination of views, triggers, db-engine and plug-ins allowing the user to deal with database tables indexed using one or both of these pixelisation schema in a completely transparent way.
any cooperation among agents (players) being able to make strategic decisions becomes a _coalition formation game_ when the players may, for various personal reasons, wish to belong to a relatively _small coalition_ rather than the ``grand coalition''. partitioning the players represents the crucial question in the game context, and a stable partition of the players is referred to as an equilibrium. in , the authors propose an abstract approach to coalition formation that focuses on simple merge and split stability rules transforming partitions of a group of players. the results are parametrized by a preference relationship between partitions from the point of view of each player. on the other hand, a coalition formation game is said to be _hedonic_ if * _the gain of any player depends solely on the members of the coalition to which the player belongs_, and * _the coalitions form as a result of the preferences of the players over their possible coalitions set_. accordingly, the stability concepts relevant to _hedonic conditions_ can be summarized as follows: a partition could be _individually stable, nash stable, core stable, strict core stable, pareto optimal, strong nash stable, strict strong nash stable_. in the sequel, we concentrate on the nash stability. the definition of the nash stability is quite simple: _a partition of players is nash stable whenever there is no player deviating from his/her coalition to another coalition in the partition_. we refer to for further discussions concerning the stability concepts in the context of hedonic coalition formation games. in , the problem of generating nash stable solutions in coalitional games is considered. in particular, the authors proposed an algorithm for constructing the set of all nash stable coalition structures from players' preferences in a given additively separable hedonic game. in , a bargaining procedure of coalition formation in the class of hedonic games, where players' preferences depend solely on the coalition they belong to, is studied. the authors provide an example of nonexistence of a pure strategy stationary perfect equilibrium, and a necessary and sufficient condition for existence. they show that when the game is totally stable (the game and all its restrictions have a nonempty core), there always exists a no-delay equilibrium generating core outcomes. other equilibria exhibiting delay or resulting in unstable outcomes can also exist. if the core of the hedonic game and its restrictions always consists of a single point, it is shown that the bargaining game admits a unique stationary perfect equilibrium, resulting in the immediate formation of the core coalition structure. in , drèze and greenberg introduced the hedonic aspect in players' preferences in a context concerning local public goods. moreover, purely hedonic games and stability of hedonic coalition partitions were studied by bogomolnaia and jackson in .
in that paper, it is proved that if players' preferences are additively separable and symmetric, then a nash-stable coalition partition exists. for further discussion of additively separable and symmetric preferences, we refer the reader to . our work aims at considering stable strategies for hedonic coalition formation games. we first develop a simple decentralized algorithm that finds a nash-stable partition in a game whenever at least one exists. the algorithm is based on _the best reply strategy_, in which each player decides his/her coalition serially. thus, the problem is considered as a non-cooperative game. we consider a _random round-robin_ fashion where each player determines its strategy in its turn according to a _scheduler_ which is randomly generated for each round. under this condition, we prove that the algorithm converges to an equilibrium if one exists. the fundamental question that comes next is to determine which utility allocation methods ensure a nash-stable partition. we address this issue in the sequel. we first propose the definition of _the nash-stable core_, which is the set of all possible utility allocation methods resulting in nash-stable partitions. we show that efficient utility allocations, where the utility of a group is completely shared among its members, may admit no nash-stable partition, with some exceptions. rather, we prove that relaxing the efficiency condition can ensure the non-emptiness of the core. indeed, we prove that if the sum of the players' gains within a coalition is allowed to be less than the utility of this coalition, then a nash-stable partition always exists. a coalition formation game is given by a pair , where is the set of _players_ and denotes the _preference profile_, specifying for each player his preference relation , i.e. a reflexive, complete and transitive binary relation. a coalition structure or _coalition partition_ is a set which partitions the player set, i.e. , , are disjoint coalitions such that . given and , let denote the set such that . in its partition form, a coalition formation game is defined on the set by associating a utility value to each subset of any partition of . in its characteristic form, the utility value of a set is independent of the other coalitions, and therefore . games of this form are more restricted but present interesting properties for achieving an equilibrium. practically speaking, this assumption means that the gain of a group is independent of the players outside the group. hedonic coalition formation games fall into this category with an additional assumption: [def:hedonic] a coalition formation game is called _hedonic_ if * _the gain of any player depends solely on the members of the coalition to which the player belongs_, and * _the coalitions form as a result of the preferences of the players over their possible coalition sets_. the preference relation of a player can be defined over a _preference function_.
let us denote by the preference function of player . thus, player prefers coalition to iff . we consider the case where the preference relation is chosen to be the utility allocated to the player in a coalition, i.e. , where refers to the utility received by player in coalition . in the case of the transferable utility games (tu games) considered in this paper, the utility of a group can be transferred among its members in any way. thus, a utility allocation is said to be relatively efficient if, for each coalition, the sum of individual utilities is equal to the coalition utility: . now, if the preferences of a player are _additively separable_, the preference can even be stated with a function characterizing how a player values each other player in a coalition. this means that the player's preference for a coalition is based on individual preferences. this can be formalized as follows: the preferences of a player are said to be additively separable if there exists a function s.t. , where, according to , is normalized and set to . a profile of additively separable preferences, represented by , satisfies _symmetry_ if . the question we address in this paper concerns the stability of this kind of game. the stability concept for a coalition formation game may receive different definitions. in the literature, a game is said to be _individually stable, nash stable, core stable, strict core stable, pareto optimal, strong nash stable, or strict strong nash stable_. we refer to for a thorough definition of these different stability concepts. in this paper, we concentrate only on nash stability because we are interested in those games where the players do not cooperate to take their decisions, which means that only individual moves are allowed. the definition of nash stability for a _hedonic coalition formation game_ is simply: [def:nashstability] a partition of players is nash stable whenever no player has an incentive to unilaterally change his or her coalition to another coalition in the partition. this can be mathematically formulated as: a partition is said to be nash-stable if no player can benefit from moving from his coalition to another existing coalition , i.e. : . nash-stable partitions are immune to individual movements even when a player who wants to change does not need permission to join or leave an existing coalition. in the literature ( ), the stability concepts that are immune to individual deviations are _nash stability, individual stability, contractual individual stability_. nash stability is the strongest among these. we concentrate our analysis on partitions that are nash-stable. the notion of _core stability_ has already been used in some models where immunity to coalition deviations is required, but to the best of our knowledge the nash-stable core has not been defined yet. this is what we derive in the next section. under this definition, we propose in this paper to evaluate the existence of nash stability and to propose an approach that ensures convergence to a nash equilibrium of an approximated convex game. let us consider a hedonic tu game denoted (since the utility is transferable to the players, we consider hedonic coalition formation games based on transferable utility). for the sake of simplicity, the preference function of player is assumed to be the gain obtained in the corresponding coalition.
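to make the nash-stability check concrete, the following minimal python sketch (our illustration, not code from the paper) tests whether a given partition is nash-stable under additively separable and symmetric preferences; the function names and the example values of f are hypothetical.

# a minimal sketch: nash-stability check for a partition under additively
# separable, symmetric preferences; names and numbers are hypothetical.
def utility(i, coalition, f):
    """additively separable utility: player i sums f[i][j] over coalition members."""
    return sum(f[i][j] for j in coalition if j != i)

def is_nash_stable(partition, players, f):
    """a partition is nash-stable if no player gains by unilaterally moving
    to another existing coalition (or to being alone)."""
    coalition_of = {i: S for S in partition for i in S}
    for i in players:
        current = utility(i, coalition_of[i], f)
        # candidate moves: any other coalition in the partition, or the empty one
        for S in list(partition) + [frozenset()]:
            if S == coalition_of[i]:
                continue
            if utility(i, S | {i}, f) > current:
                return False
    return True

# usage with a hypothetical symmetric valuation f[i][j] = f[j][i], f[i][i] = 0
players = [0, 1, 2]
f = [[0, 2, -1],
     [2, 0, 3],
     [-1, 3, 0]]
partition = [frozenset({0, 1, 2})]
print(is_nash_stable(partition, players, f))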
let denote the _allocation method_, which directly sets up a corresponding preference profile. the corresponding space has dimension equal to the number of sets, i.e. where . we now define the operator , where is the set of all possible partitions. clearly, for any preference function , the operator finds the set of nash-stable partitions, i.e. . if a nash-stable partition cannot be found, the operator maps to the empty set. moreover, the inverse of the operator is denoted , which finds the set of all possible preference functions that give the nash-stable partition . thus, the nash-stable core includes all those efficient allocation methods that build the following set: . to know whether the core is non-empty, we need to state the set of constraints the partition function has to fulfill. under the assumption of _efficiency_, we have a first set of constraints . then a given partition is nash-stable with respect to a given partition function if the following constraints hold: , where is the unique set in containing . the nash-stable core is then non-empty iff: . the nash-stable core can be further defined as: , which lets us conclude: the nash-stable core can be non-empty. algorithmically, the nash-stable core is non-empty if the following linear program is feasible: . however, it is not possible to make a general statement about the non-emptiness of the nash-stable core. further, searching exhaustively over all partitions is np-complete, as the number of partitions grows according to the bell number. typically, with only players, the number of partitions is as large as . we now analyse some specific cases. in the case where the grand coalition is targeted, the stability conditions are the following. let ; then the following constraints hold: , resulting in the following: . those cooperative tu games that satisfy this condition are said to be _ . we now propose to formulate a special game where the utility is shared among players with an equal relative gain. let us denote the gain of player in coalition as , in which is called _the relative gain_. note that for an isolated player , one has . the preference relation can be determined w.r.t. the relative gain: . the total allocated utility in coalition is . therefore, , where is the _marginal utility_ due to coalition . the symmetric relative gain sharing approach relies on equally dividing the marginal utility in a coalition, i.e. . this choice means that each player in coalition has the same gain; thus the effect of coalition is identical for the players within it. [cor:equivalentevaluation] *equivalent evaluation*: assume that . due to eq. ([eq:equallydivmarutility]), the following must hold: . it means that all players in prefer coalition to whenever the relative gain in is higher than . for this particular case we obtained the following theorems: [lma:nashcoretwoplayersincaseofsymmetricrelativegain] there is always a nash-stable partition when in the case of symmetric relative gain. see appendix. [lma:nashcorethreeplayersincaseofsymmetricrelativegain] there is always a nash-stable partition when in the case of symmetric relative gain. see appendix. thus, we can conclude that symmetric relative gain always results in a nash-stable partition when . however, this is not the case when .
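the existence question for larger player sets can be probed by brute force. the following python sketch (ours, with hypothetical coalition values, not the paper's counter-example) enumerates all partitions of a small player set, allocates the marginal utility of each coalition equally among its members, and reports whether any nash-stable partition exists.

# a minimal sketch: exhaustive search for a nash-stable partition when the
# marginal utility of each coalition is divided equally among its members.
from itertools import combinations

def partitions(items):
    """enumerate all set partitions of a list of players."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for k in range(len(part)):
            yield part[:k] + [part[k] + [first]] + part[k+1:]
        yield part + [[first]]

def gain(i, S, v):
    """u_i(S) = v({i}) + (v(S) - sum_j v({j})) / |S|  (equal relative gain)."""
    key = tuple(sorted(S))
    marginal = v[key] - sum(v[(j,)] for j in key)
    return v[(i,)] + marginal / len(key)

def nash_stable(part, players, v):
    for i in players:
        home = next(S for S in part if i in S)
        for S in part + [[]]:
            if S is home:
                continue
            if gain(i, set(S) | {i}, v) > gain(i, home, v):
                return False
    return True

players = [1, 2, 3, 4]
# hypothetical characteristic function v(S); singletons set to 0
v = {tuple(sorted(S)): 0.0 for n in range(1, 5) for S in combinations(players, n)}
v[(1, 2)] = 4; v[(2, 3)] = 4; v[(3, 4)] = 4; v[(1, 4)] = 4; v[(1, 2, 3, 4)] = 2

stable = [p for p in partitions(players) if nash_stable(p, players, v)]
print(stable if stable else "no nash-stable partition for these values")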
we can find many counter-examples that justify this, such as the following one: [cexample:nashcoremorethanthreeplayersincaseofsymmetricrelativegain] let the marginal utility for all possible coalitions be as follows: . let us now generate the preference profile according to these marginal utility values. notice that we can eliminate those marginal utilities which are negative, since a player prefers to be alone rather than accept a negative relative gain. further, ranking the positive relative gains in descending order results in the following sequence: . according to this ranking sequence, we are able to generate the preference list of each player: . note that this preference profile does not admit any nash-stable partition. thus, we conclude that symmetric relative gain allocation does not always provide a nash-stable partition when . we now turn to the case of separable and symmetric utilities. consider eq. ([eq:additivelyseparable]), meaning that player gains from player in any coalition. in the case of symmetry, such that . further, we denote by the utility that player gains in coalition . then the sum of allocated utilities in coalition is given by . let us point out that (for example , ), while the number of constraints is equal to the number of sets, i.e. . for , we have variables and as many constraints. considering the former result, we propose to transform an initial hedonic game into an additively separable and symmetric utility case by relaxing the efficiency constraint. clearly, we propose to relax the constraint that the sum of allocated utilities in a coalition be strictly equal to the utility of the coalition, i.e. . we assume that the system cannot provide any coalition with additional utility, and therefore the only way is to tax a group to ensure convergence, which leads to: . now the following theorem may be stated: the nash-stable core is always non-empty in the case of relaxed efficiency. a feasible solution of the following linear program guarantees the non-emptiness of the nash-stable core: , which is equivalent to . note that this program is always feasible since * there are no inconsistent constraints, i.e. no two rows in are equivalent, * the polytope is bounded in the direction of the gradient of the objective function. in the case where the system is able to provide some redistribution of utilities to a coalition, we can also allow the sum of individual utilities in a coalition to be higher than the coalition's utility.
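as a concrete (and purely hypothetical) illustration of the relaxed-efficiency linear program above, the following python sketch uses scipy to search for symmetric pairwise values f_ij whose coalition sums never exceed the coalition utilities; all names and numbers are ours, not the paper's.

# a minimal sketch: relaxed efficiency as a linear program over symmetric
# pairwise values f_ij, with 2 * sum_{ {i,j} subset S } f_ij <= v(S) for every S.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

players = [1, 2, 3, 4]
pairs = list(combinations(players, 2))           # variables: f_ij for i < j
v = {S: 1.0 for n in range(2, 5) for S in combinations(players, n)}
v[(1, 2)] = 3.0                                   # hypothetical coalition values

A_ub, b_ub = [], []
for n in range(2, len(players) + 1):
    for S in combinations(players, n):
        A_ub.append([2.0 if set(p) <= set(S) else 0.0 for p in pairs])
        b_ub.append(v[S])

# maximize the total pairwise value (linprog minimizes, hence the minus sign);
# the rows for two-player coalitions upper-bound each f_ij, keeping it bounded.
res = linprog(c=-np.ones(len(pairs)), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * len(pairs), method="highs")
f = dict(zip(pairs, res.x))
print(res.status, f)   # status 0: a relaxed-efficient symmetric profile was found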
in this case (where the sum of individual utilities may exceed the coalition's utility), the system may however be interested in finding an additively separable and symmetric utility while minimizing the total deviation from the utilities. we can then propose to select the symmetric preferable preferences according to the following optimization problem: . this formulation leads to an analytical solution: . however, the system may be interested in adding hard constraints on some specific sets, typically the grand coalition, to avoid the risk of having to pay some additional costs. this can be done by adding constraints to the problem above, but if the constraints are expressed as linear equalities or inequalities the problem remains convex. in this section, we develop a decentralized algorithm to reach a nash-stable partition whenever one exists in a hedonic coalition formation game. we in fact model the problem of finding a nash-stable partition in a hedonic coalition formation by formulating it as a non-cooperative game, and we state the following: a hedonic coalition formation game is equivalent to a non-cooperative game. let us denote by the set of strategies. we assume that the number of strategies is equal to the number of players. this is indeed sufficient to represent all possible choices: the players that select the same strategy are interpreted as a coalition. based on this equivalence, it is possible to reuse some classical results from game theory. we consider a _random round-robin_ algorithm where each player determines its strategy in its turn according to a _scheduler_ which is randomly generated for each round. a scheduler in round is denoted , where is the turn of player . it should be noted that a scheduler is a random permutation of the set of players; therefore, our problem turns out to be a _weakly acyclic game_. a non-cooperative game is classified as weakly acyclic if every strategy-tuple is connected to some pure-strategy nash equilibrium by a best-reply path. weakly acyclic games have the property that if the order of deviators is decided more or less randomly, and if players do not deviate simultaneously, then a best-reply path reaches an equilibrium if at least one exists. the type of scheduler chosen in this work is by default considered _memoryless_, since the identity of the deviator in each round does not depend on the previous rounds. however, it may be more efficient to design a scheduler according to past observations. this leads to a so-called _algorithmic mechanism design_ that could enable convergence to an equilibrium in a smaller number of rounds; this kind of optimization is kept out of the scope of this paper. a _strategy tuple_ in step is denoted , where is the strategy of player in step . the relation between a round and a step can be given by . in each step, only one dimension is changed in . we further denote by the partition in step . define as the set of players that share the same strategy with player . thus, ; note that for each step . the preference function of player is denoted and verifies the following equivalence: , where player is the one that takes its turn in step . any sequence of strategy-tuples in which each strategy-tuple differs from the preceding one in only one coordinate is called a _path_, and a path in which the unique deviator in each step strictly increases the utility he receives is an _improvement path_. obviously, any _maximal improvement path_, i.e. an improvement path that cannot be extended, is terminated by stability.
an algorithm for hedonic coalition formation can be given as follows: set the stability flag to zero; generate a scheduler; according to the scheduler, each player chooses the best-reply strategy; check stability; if stable, set the stability flag to one. the proposed algorithm [alg:nashstabilityestablisher] (nash stability establisher) always converges to a stable partition whenever one exists. the proof exploits the property mentioned above, relative to weakly acyclic games: _every weakly acyclic game always admits a nash equilibrium; since nash stability establisher is exactly a weakly acyclic game, it always converges to a partition which is nash-stable_. let us denote by the initial partition where each player is alone. it corresponds to the case where each player chooses a different strategy; thus each player is alone in its strategy: . the transformation of the strategy tuple and the partition in each step can be denoted as follows: , where represents the step in which the stable partition occurs. in fact, the stable partition in is exactly the nash equilibrium of a weakly acyclic game. we now propose to evaluate the use of the former algorithm on hedonic games transformed to additively separable and symmetric form. such an approach needs two steps. during the first step, the system computes the relative symmetric gains according to one of the suboptimal approximations proposed above. then, during the second step, the players make their moves according to the algorithm, using the modified utilities, until an equilibrium is reached. the social optimum is the maximum total global utility, i.e. the partitioning of the players such that the total global utility is maximized. it can be formulated as a set partitioning optimization problem, given by , by which we find a partition maximizing the global utility. note that the total social utility in the case of a nash-stable partition will always be lower than or equal to the one obtained by the social optimum.
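the following python sketch (our illustration, not the paper's implementation) mimics the nash stability establisher: a strategy is a "room" index, players sharing a room form a coalition, and in each round a randomly generated scheduler lets every player play a strict best reply; the additively separable symmetric utility model is a hypothetical choice.

import random

def utility(i, members, f):
    # additively separable utility of player i in a coalition given as a member list
    return sum(f[i][j] for j in members if j != i)

def nash_stability_establisher(players, f, max_rounds=1000, seed=0):
    rng = random.Random(seed)
    n = len(players)
    strategy = {i: idx for idx, i in enumerate(players)}   # start: everyone alone
    for _ in range(max_rounds):
        stable = True
        scheduler = players[:]
        rng.shuffle(scheduler)                              # random round-robin scheduler
        for i in scheduler:
            def value(room):
                members = [j for j in players if j != i and strategy[j] == room] + [i]
                return utility(i, members, f)
            best_room = max(range(n), key=value)            # n strategies suffice
            if value(best_room) > value(strategy[i]):       # strict best reply only
                strategy[i] = best_room
                stable = False
        if stable:
            break
    coalitions = {}
    for i in players:
        coalitions.setdefault(strategy[i], []).append(i)
    return list(coalitions.values())

players = [0, 1, 2, 3]
f = [[0, 5, -2, 1], [5, 0, 3, -4], [-2, 3, 0, 2], [1, -4, 2, 0]]  # symmetric, hypothetical
print(nash_stability_establisher(players, f))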
we could say that an approach is socially optimal if it reaches the social optimum. the distance between the social optimum and the achieved equilibrium solution is the price of anarchy. we study here the utility allocation based on relaxed efficiency according to the marginal utilities given in counter-example [cexample:nashcoremorethanthreeplayersincaseofsymmetricrelativegain]. we also suppose that , , , . we calculate the social optimum, which is equivalent to the optimization version of the set partitioning problem. a utility allocation method can be found by solving the following linear program: , which produces the following values: . thus, the preference profile is obtained, which admits the nash-stable partition . the total social utility can be calculated as . the social optimum in the considered example is found to be , which results from the partition . we suggested a decentralized algorithm for finding nash stability in a game whenever at least one stable partition exists. the problem of finding nash stability is considered as a non-cooperative game. we consider a _random round-robin_ fashion where each player determines its strategy in its turn according to a _scheduler_ which is randomly generated for each round. under this condition, we proved that the algorithm converges to an equilibrium, which is the indicator of nash stability. moreover, we answer the following question: is there any utility allocation method which could result in a nash-stable partition? we proposed the definition of the nash-stable core. we analyzed the cases in which the nash-stable core is non-empty, and proved that under the relaxed efficiency condition there always exists a nash-stable partition. according to corollary [cor:equivalentevaluation], . thus, combining the constraint sets of all possible partitions, we have the following resulting constraint set: . it means that for any value of , symmetric relative gain always results in a nash-stable partition in the two-player case. note that there are possible partitions in the case of . thus, according to equally divided marginal utility, the following variables occur: , , , . enumerating all possible partitions results in the following conditions: . note that the constraint set covers all values in in the case of ; it also covers all values when . we are able to draw it since there are three dimensions: [figure: nash-stable core constraint set for three players in the case of symmetric relative gain] k. r. apt and a. witzel, "a generic approach to coalition formation," _international game theory review_, vol. 11, no. 3, pp. 347-367, 2009. h. aziz and f. brandl, "existence of stability in hedonic coalition formation games," in _proceedings of the 11th international conference on autonomous agents and multiagent systems (aamas 2012)_, jun. 2012. h. keinänen, "an algorithm for generating nash stable coalition structures in hedonic games," in _proceedings of the 6th international conference on foundations of information and knowledge systems (foiks 2010)_, 2010. j. drèze and j. greenberg, "hedonic coalitions: optimality and stability," _econometrica_, pp. 987-1003, jan. a. bogomolnaia and m. jackson, "the stability of hedonic coalition structures," _games and economic behavior_, vol. 38, pp. 201-230, jan. n. burani and w. s. zwicker, "coalition formation games with separable preferences," _mathematical social sciences_, elsevier, vol. 1, pp. 27-52, feb. 2003.
h. p. young, "the evolution of conventions," _econometrica_, vol. 61, pp. 57-84, 1993. i. milchtaich, "congestion games with player-specific payoff functions," _games and economic behavior_, vol. 13, pp. 111-124, 1996. j. hajduková, "coalition formation games: a survey," _international game theory review_, vol. 4, pp. 613-641, 2006. f. bloch and e. diamantoudi, "noncooperative formation of coalitions in hedonic games," _int. j. game theory_, vol. 40, pp. 263-280, 2011. o. n. bondareva, "some applications of linear programming methods to the theory of cooperative games," _problemy kybernetiki_, vol. 10, pp. 119-139, 1963. l. s. shapley, "on balanced sets and cores," _naval research logistics quarterly_, vol. 14, pp. 453-460, 1967.
this paper studies _nash stability_ in hedonic coalition formation games. we address the following issue: for a general problem formulation, is there any utility allocation method ensuring a nash-stable partition? we propose the definition of _the nash-stable core_ and we analyze the conditions for having a non-empty nash-stable core. more precisely, we prove that using relaxed efficiency in utility sharing ensures a non-empty nash-stable core. then, a decentralized algorithm called _nash stability establisher_ is proposed for finding nash stability in a game whenever at least one stable partition exists. the problem of finding nash stability is formulated as a non-cooperative game. in the proposed approach, during each round, each player determines its strategy in its turn according to a _random round-robin scheduler_. we prove that the algorithm converges to an equilibrium, if one exists, which is the indicator of nash stability.
as is well known , the atom or atoms in the atomic clock are passive they do not `` tick''so the clock needs an active oscillator in addition to the atom(s ) . in designing an atomic clock to realize the second as a measurement unit in the international system of units ( si ) , one encounters two problems : ( a ) the resonance exhibited by the atom or atoms of the clock varies with the details of the clock s construction and the circumstances of its operation ; in particular the resonance shifts depending on the intensity of the radiation of the atoms by the oscillator .( b ) the oscillator , controlled by , in effect , a knob , drifts in relation to the knob setting .problem ( a ) is dealt with by introducing a wave function parametrized by radiation intensity and whatever other factors one deems relevant .the si second is then `` defined '' by the resonance that `` would be found '' at absolute zero temperature ( implying zero radiation ) . for a clock using cesium 133 atoms ,this imagined resonance is declared by the general conference of weights and measures to be 9 192 631 770 hz , so that the si second is that number of cycles of the radiation at that imagined resonance . to express the relation between a measured resonance and the imagined resonance at 0 k, a wave function is chosen .problem ( b ) is dealt with by computer - mediated feedback that turns the knob of the oscillator in response to detections of scattering of the oscillator s radiation by the atom(s ) of the clock , steering the oscillator toward an aiming point .a key point for this paper is that the wave function incorporated into the operation of an atomic clock can never be unconditionally known .the language of quantum theory reflects within itself a distinction between ` explanation ' and ` evidence ' . for explanations it offers the linear algebra of wave functions and operators , while for evidence it offers probabilities on a set of outcomes .outcomes are subject to quantum uncertainty , but uncertainty is only the tip of an iceberg : how can one `` know '' that a wave function describes an experimental situation ?the distinction within quantum theory between linear operators and probabilities implies a gap between any explanation and the evidence explained . : [ prop : one ] to choose a wave function to explain experimental evidence requires reaching beyond logic based on that evidence , and evidence acquired after the choice is made can call for a revision of the chosen wave function . because no wave function can be unconditionally known ,not even probabilities of future evidence can be unconditionally foreseen .here we show implications of the unknowability of wave functions for the second as a unit of measurement in the international system ( si ) , implications that carry over to both digital communications and to the use of a spacetime with a metric tensor in explaining clock readings at the transmission and reception of logical symbols .clocks that generate universal coordinated time ( utc ) are steered toward aiming points that depend not only on a chosen wave function but also on an hypothesized metric tensor field of a curved spacetime .like the chosen wave function , the hypothesis of a metric tensor is constrained , but not determined , by measured data .guesses enter the operations of clocks through the computational machinery that steers them . 
taking incoming data, the machinery updates records that determine an aiming point , and so involves the writing and reading of records .the writing must take place at a phase of a cycle distinct from a phase of reading , with a separation between the writing and the reading needed to avoid a logical short circuit . in sec .[ sec : turing ] we picture an explanation used in the operation of a clock as a string of characters written on a tape divided into squares , one symbol per square .the tape is part of a turing machine modified to be stepped by a clock and to communicate with other such machines and with keyboards and displays .we call this modified turing machine an _ open machine_. the computations performed by an open machine are open to an inflow numbers and formulas incalculable prior to their entry . because a computer cycles through distinct phases of memory use , the most direct propagation of symbols from one computer to another requires a symbol from one computer to arrive during a suitable phase of the receiving computer s cycle . in sec .[ sec : phasing ] we elevate this phase dependence to a principle that defines the _ logical synchronization _ necessary to a _ channel _ that connects clock readings at transmission of symbols to clock readings at their reception recognizing the dependence of logic - bearing channels on an interaction between evidence and hypotheses about signal propagation engenders several types of questions , leading to a _ discipline of logical synchronization _ , outlined in sec .[ sec : patterns ] .the first type of question concerns patterns of channels that are possible aiming points , as determined in a blackboard calculation that assumes a theory of signal propagation .[ sec : typei ] addresses examples of constraints on patterns of channels under various hypotheses of spacetime curvature , leading to putting `` phase stripes '' in spacetime that constrain channels to or from a given open machine .an example of a freedom to guess an explanation within a constraint of evidence is characterized by a subgroup of a group of clock adjustments , and a bound on bit rate is shown to be imposed by variability in spacetime curvature .[ sec : adj ] briefly addresses the two other types of questions , pertaining not to _ hypothesizing _ possible aiming points ` on the blackboard ' , but to _ using _ hypothesized aiming points , copied into feedback - mediating computers , for the steering of drifting clocks . after discussing steering toward aiming points copied from the blackboard ,we note occasions that invite revision of a hypothesized metric tensor and of patterns of channels chosen as aiming points .computer - mediated feedback , especially as used in an atomic clock , requires logic open to an inflow of inputs beyond the reach of calculation . to model the logic of a computer that communicates with the other devices in a feedback loop , we modify a turing machine to communicate with external devices , including other such machines .the turing machine makes a record on a tape marked into squares , each square holding one character of an alphabet . operating in a sequence of ` moments ' interspersed by ` moves ' , at any moment the machine scans one square of the tape , from which it can read , or onto which it can write , a single character . 
a move as defined in the mathematics of turing machines consists ( only ) of the logical relation between the machine at one moment and the machine at the next moment , thus expressing the logic of a computation , detached from its speed ; however , in a feedback loop , computational speed matters .let the moves of the modified turing machine be stepped by ticks of a clock .a step occurs once per a period of revolution of the clock hand .this period is adjustable , on the fly .we require that the cycle of the modified turing machine correspond to a unit interval of the readings of its clock . to express communication between open machines as models of computers , the modified turing machine can receive externally supplied signals and can transmit signals , with both the reception and the transmission geared to the cycle of the machine . in addition, the modified turing machine registers a count of moments at which signals are received and moments at which signals are transmitted . at a finer scale , _ the machine records a phase quantity in the cycle of its clock , relative to the center of the moment at which a signal carrying a character arrives ._ we call such a machine an _ open machine_. an open machine can receive detections and can command action , for instance the action of increasing or decreasing the frequency of the variable oscillator of an atomic clock . calculations performed on an open machine communicating with detectors and actuators proceed by moves made according to a rule that can be modified from outside the machine in the course of operation .these calculations respond to received influences , such as occurrences of outcomes underivable from the contents of the machine memory , when the open machine writes commands on a tape read by an external actuator .the wider physical world shows up in an open machine as both ( 1 ) unforeseeable messages from external devices and ( 2 ) commands to external devices .we picture a real - time computer in a feedback loop as writing records on the tape of an open machine .the segmentation into moments interspersed by moves is found not just in turing machines but in any digital computer , which implies the logical result of any computation is oblivious to variations in speed at which the clock steps the computer . + corollary 2.1 ._ no computer can sense directly any variation in its clock frequency . _although it can not directly sense variation in the tick rate of its clock , the logic of open machine stepped by an atomic clock can still control the adjustment of the clock s oscillator by responding to variations in the detection rate written moment by moment onto its turing tape .a flow of unforeseeable detections feeds successive computations of results , each of which , promptly acted on , impacts probabilities of subsequent occurrences of outcomes , even though those subsequent outcomes remain unforeseeable . the computation that steers the oscillator depends not just on unforeseeable inputs , but also on a steering formula encoded in a program .* remarks * : 1 . to appreciate feedback ,take note that a formula is distinct from what it expresses .for example a formula written along a stretch of a turing tape as a string of characters can contain a name for wave function as a function of time variable and space variables .the formula , containing , once written , just `` sits motionless , '' in contrast to the motion that the formula expresses . 
2 . although unchanged over some cycles of a feedback loop, a feedback loop operates in a larger context in which steering formulas are subject to evolution. sooner or later, the string that defines the action of an algorithm, invoking a formula, is apt to be overwritten by a string of characters expressing a new formula. occasions for rewriting steering formulas are routine in clock networks, including those employed in geodesy and astronomy. logical communication requires clocking. the reading of the clock of an open machine has the form $m.\phi$, where $m$ indicates the count of cycles and $\phi$ is the phase within the cycle, with the convention that . we define a channel from to , denoted , as a set of pairs, each pair of the form . the first member is an -reading at which machine can transmit a signal and the second is a -reading at which the clock of machine can register the reception of the signal. define a _repeating channel_ to be a channel such that $(\forall \ell)(\exists m, n, j, k)\ (m + \ell j.\phi_{a,\ell},\, n + \ell k.\phi_{b,\ell})$ belongs to the channel. for theoretical purposes, it is convenient to define an _endlessly repeating channel_ for which $\ell$ ranges over all integers. again for theoretical purposes, on occasion we consider channels for which the phases are all zero, in which case we may omit writing the phases. because they are defined by local clocks without reference to any metric tensor, channels invoke no assumption about a metric or even a spacetime manifold. for this reason, evidence from the operation of channels is independent of any explanatory assumptions involving a manifold with metric and, in particular, is independent of any global time coordinate or any "reference system". thus clock readings at the transmission and the reception of signals can prompt revisions of hypotheses about a metric tensor field. a record format for such evidence was illustrated in earlier work, along with the picturing of such records as _occurrence graphs_. from the beating of a heart to the bucket brigade, life moves in phased rhythms. for a symbol carried by a signal from an open machine to be written into the memory of an open machine , the signal must be available at within a phase of the cycle of during which writing can take place, and the cycle must offer room for a distinct other phase. we elevate this engineering commonplace to a principle pertaining to open machines as follows. [prop:three] a logical symbol can propagate from one open machine to another only if the symbol arrives within the writing phase of the receiving machine; in particular, respect for phasing requires that for some positive any arrival phase satisfy the inequality . prop. [prop:three] serves as a fixed point to hold onto while hypotheses about signal propagation in relation to channels are subject to revision. we call the phase constraint on a channel asserted by ([eq:main]) _logical synchronization_. for simplicity, and to allow comparing conditions for phasing with conditions for einstein synchronization, we take the engineering liberty of allowing transmission to occur at the same phase as reception, so that both occur during a phase interval satisfying ([eq:main]). the alternative of demanding reception near values of can be carried out with little extra difficulty. + *remarks:* 1 . note that in the proposition is a phase of a cycle of a variable-rate clock that is _not_ assumed to be in any fixed relation to a proper clock as conceived in general relativity.
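a channel can be represented very directly in code. the following python sketch (ours) stores channel entries as pairs of clock readings and checks the logical-synchronization condition; since the exact form of the bound in eq. ([eq:main]) is not reproduced above, the bound |phi| <= 1/2 - eps used here is an assumption, not the paper's formula.

# a minimal sketch: channel entries as (count, phase) reading pairs plus a
# check of an assumed phasing constraint |phi| <= 1/2 - eps.
from typing import List, Tuple

Reading = Tuple[int, float]          # (cycle count m, phase phi)

def phase_ok(phi: float, eps: float = 0.05) -> bool:
    """assumed form of the phasing constraint: the symbol must arrive well
    inside the writing phase of the receiving machine."""
    return abs(phi) <= 0.5 - eps

def channel_logically_synchronized(channel: List[Tuple[Reading, Reading]],
                                   eps: float = 0.05) -> bool:
    """a channel (set of (A-reading, B-reading) pairs) supports logic only if
    every reception phase respects the constraint."""
    return all(phase_ok(phi_b, eps) for (_, (_, phi_b)) in channel)

# hypothetical repeating channel: transmissions at A-counts m + l*j,
# receptions at B-counts n + l*k, with small alternating reception phases
m, n, j, k = 0, 5, 2, 3
channel = [((m + l * j, 0.0), (n + l * k, 0.02 * (-1) ** l)) for l in range(10)]
print(channel_logically_synchronized(channel))   # True for these values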
indeed , satisfying ( [ eq : main ] ) usually requires the operation of clocks at variable rates .the engineering of communications between computers commonly detaches the timing of a computer s receiver from that of the computer by buffering : after a reception , the receiver writes into a buffer that is later read by the computer . in analyzing open machines we do without buffering , confining ourselves to character - by - character phase meshing as asserted in prop .[ prop : three ] , which offers the most direct communication possible .given the definition of a channel and the condition ( [ eq : main ] ) essential to the communication of logical symbols , three types of questions arise : * type i : * what patterns of interrelated channels does one try for as aiming points ?* type ii : * how can the steering of open machines be arranged to approach given aiming points within acceptable phase tolerances ?* type iii : * how to respond to deviations from aiming points beyond tolerances ?such questions point the way to exploring what might be called a _ discipline of logical synchronization_. so far we notice two promising areas of application within this discipline : 1 .provide a theoretical basis for networks of logically synchronized repeating channels , highlighting 1 .possibilities for channels with null receptive phases as a limiting case of desirable behavior , and 2 .circumstances that force non - null phases . 2 .explore constraints on receptive phases imposed by gravitation , as a path to exploring and measuring gravitational curvature , including slower changes in curvature than those searched for by the laser gravitational wave observatory .answers to questions of the above types require hypotheses , if only provisional , about signal propagation . for this sectionwe assume that propagation is described by null geodesics in a lorentzian 4-manifold with one or another metric tensor field , as in general relativity .following perlick we represent an open machine as a timelike worldline , meaning a smooth embedding from a real interval into , such that the tangent vector is everywhere timelike with respect to and future - pointing .we limit our attention to worldlines of open machines that allow for signal propagation between them to be expressed by null geodesics .to say this more carefully , we distinguish the _ image _ of a worldline as a submanifold of from the worldline as a mapping .consider an open region of containing a smaller open region , with containing the images of two open machines and , with the property that every point of the image of restricted to is reached uniquely by one future - pointing null geodesic from the image of in and by one past - pointing null geodesic from the image of in .we then say and are _ radar linkable _ in .we limit our attention to open machines that are radar linkable in some spacetime region .in addition we assume that the channels preserve order ( what is transmitted later arrives later ) .indeed , we mostly deal with open machines in a gently curved spacetime region , adequately described by fermi normal coordinates around a timelike geodesic . for simplicity and to allow comparing conditions for phasing with conditions for einstein synchronization ,we take the liberty of allowing transmission to occur at the same phase as reception , so that both occur during a phase interval satisfying ( [ eq : main ] ) .the perhaps more realistic alternative of demanding reception near values of can be carried out with little difficulty . 
to develop the physics of channels, we need to introduce three concepts: (1) we define a _group of clock adjustments_ as transformations of the readings of the clock of an open machine. as it pertains to endlessly repeating channels, a group of clock adjustments consists of functions on the real numbers having continuous, positive first derivatives. group multiplication is the composition of such functions, which, being invertible, have inverses. to define the action of on clock readings, we speak of 'original clock readings' as distinct from adjusted readings: an adjustment acts by changing every original reading of a clock to an adjusted reading. as we shall see, clock adjustments can affect echo counts. (2) to hypothesize a relation between the -clock and an accompanying proper clock, one has to assume one or another metric tensor field, relative to which to define proper time increments along 's worldline; then one can posit an adjustment such that , where is the reading imagined for the accompanying proper clock when reads . (3) we need to speak of positional relations between open machines. for this section we assume that when an open machine receives a signal from any other machine , it echoes back a signal to right away, so the echo count defined in sec. [sec:phasing] involves no delay at . in this case, evidence in the form of an echo count becomes explained, under the assumption of a metric tensor field, as being just twice the radar distance from to the event of reception by . questions of type i concern constraints on channels imposed by the physics of signal propagation. here we specialize to constraints on channels imposed by spacetime metrics, constraints obtained from mathematical models that, while worked out so to speak on the blackboard, can be copied onto turing tapes as aiming points toward which to steer the behavior of the clocks of open machines. questions of types ii and iii are deferred to sec. [sec:adj]. we begin by considering just two machines. assuming a hypothetical spacetime, suppose that machine is given as a worldline parametrized by its clock readings: what are the possibilities and constraints for an additional machine with two-way repeating channels and with a constant echo count? we assume the idealized case of channels with null phases, which implies integer echo counts. for each -tick there is a future light cone and a past light cone. [figure: (a) worldline with tick events indicated; (b) light cones associated to the ticks; (c) ticks of the two machines at light cone intersections corresponding to the channels.] the future light cone from an -reading has an intersection with the past light cone for the returned echo received at . fig. [fig:3] illustrates the toy case of a single space dimension in a flat spacetime by showing the two possibilities for a machine linked to by two-way channels at a given constant echo count.
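the toy construction can be written out numerically. the following python sketch (ours, under the stated flat-spacetime assumption and with hypothetical tick spacing) places machine A at x = 0 with uniform coordinate-time ticks and computes the tick events of the second machine as intersections of the outgoing and incoming light cones for a constant echo count.

# a minimal sketch of the 1+1 flat-spacetime construction: for echo count k,
# a B-tick lies at the intersection of the future cone of A-tick m and the
# past cone of A-tick m+k. units chosen with c = 1; values are hypothetical.
def b_tick_events(tau=1.0, k=4, ticks=range(6), side=+1):
    events = []
    for m in ticks:
        t_emit, t_receive = m * tau, (m + k) * tau
        t_b = 0.5 * (t_emit + t_receive)            # intersection of the two cones
        x_b = side * 0.5 * (t_receive - t_emit)     # side = +1 or -1: the two solutions
        events.append((t_b, x_b))
    return events

# for uniform A-ticks, the echo count k fixes B's radar distance k*tau/2 from A
print(b_tick_events())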
in each solution, the clocking of is such that a tick of occurs at each of a sequence of intersections of outgoing and incoming light cones from and to ticks of . note that the image of , and not just its clock rate, depends on the clock rate of . determination of the tick events for leaves the trajectory between ticks undetermined, so there is a freedom of choice. one can exercise this freedom by requiring the image of to be consistent with additional channels of larger echo counts. a clock adjustment of of the form , for a positive integer , increases the density of the two-way channel by and inserts events between successive -ticks, thus multiplying the echo count by . as increases without limit, becomes fully specified. turning to two space dimensions, the image of must lie in a tube around the image of , as viewed in a three-dimensional space (vertical is time). so any timelike trajectory within the tube will do for the image of . for a full spacetime of 3+1 dimensions, the solutions for the image of fall in the corresponding "hypertube." the argument does not depend on flatness and so works for a generic, gently curved spacetime in which the channels have the property of order preservation. [figure: (a) images of the two worldlines freely chosen; (b) light signals lacing the worldlines define tick events; (c) interpolated lacings of light signals added to increase the echo count.] a different situation for two machines arises in case only the image of 's worldline is specified while its clocking is left to be determined. in this case the image of can be freely chosen, after which the clocking of both and is constrained, as illustrated in fig. [fig:4] for the toy case of flat spacetime with 1 space dimension. to illustrate the constraint on clocking, we define a "lacing" of light signals to be a pattern of light signals echoing back and forth between two open machines, as illustrated in fig. [fig:4] (b). for any event chosen in the image of , there is a lacing that touches it. in addition to choosing this event, one can choose any positive integer to be , and choose events in the image of located after the chosen event and before the next -event touched by the lacing of light signals. the addition of lacings that touch each of the intermediating events corresponds to a repeating channel with echo count , along with a repeating channel with the same echo count. this construction depends neither on the dimension of the spacetime nor on its flatness, and so works also for a curved spacetime having the property of order preservation. evidence of channels as patterns of clock readings leaves open a choice of worldlines for its explanation. in the preceding example of laced channels between open machines and , part of this openness can be reflected within the analysis by the invariance of the channels under a subgroup of the group of clock adjustments that "slides the lacings," as follows. suppose that transmissions of an open machine occur at given values of -readings. we ask about clock adjustments that can change the events of a worldline that correspond to a given -reading. if a clock adjustment takes original -readings to revised -readings, transmission events triggered by the original clock readings become triggered when the re-adjusted clock exhibits the _same readings_. as registered by original readings, the adjusted transmission occurs at . based on this relation, we inquire into the action of subgroups of on the readings of the clocks of two open machines and .
in particular, there is a subgroup that expresses possible revisions of explanations that leave invariant the repeating channels with constant echo count. an element is a pair of clock adjustments that leaves the channels invariant, and such a pair can be chosen within a certain freedom. for , the adjustment is free to: (a) assign an arbitrary value to ; and (b), if , then for , choose the value of at will, subject to the constraints that and is less than the original clock reading for the re-adjusted first echo from . with these choices, is then constrained so that each lacing maps to another lacing. condition (a) slides a lacing along the pair of machines; condition (b) nudges additional lacings that show up in the interval between a transmission and the receipt of its echo. in this way a freedom to guess within a constraint is expressed by . moving to more than two machines, we invoke the following definition. *definition:* an _arrangement of open machines_ consists of open machines with the specification of some or all of the channels from one to another, augmented by proper periods of the clock of at least one of the machines. (without specifying some proper periods, the scale of separations of one machine from another is open, allowing the arrangement to shrink without limit, thus obscuring the effect of spacetime curvature.) although gentle spacetime curvature has no effect on the possible channels linking two open machines, curvature does affect the possible channels and their echo counts in some arrangements of five or more machines, so that the possible arrangements are a measure of spacetime curvature. the way that spacetime curvature affects the possible arrangements of channels is analogous to the way surface curvature in euclidean geometry affects the ratios of the lengths of the edges of embedded graphs. the effect on ratios shows up in mappings from graphs embedded in a plane to their images on a sphere. for example, a triangle can be mapped from a plane to a generic sphere in such a way that each edge of the triangle is mapped to an arc of the same length along a great circle on the sphere. the same holds for two triangles that share an edge, as illustrated in fig. [fig:sphere], panel (a); however, the gauss curvature of the sphere implies that the complete graph on 4 vertices, generically embedded in the plane as shown in panel (b), cannot be mapped so as to preserve all edge lengths.
the property that blocks the preservation of edge ratios is the presence of an edge in the plane figure that can not be slightly changed without changing the length of at least one other edge; we speak of such an edge as `` frozen . '' in a static spacetime , which is all we have so far investigated , a generic arrangement of 4 open machines , is analogous to the triangle on the plane in that a map to any gently curved spacetime can preserve all the echo counts . [prop : nine]assume four open machines in a static spacetime , with one machine stepped with a proper - time period , and let be any positive integer .then , independent of any gentle riemann curvature of the spacetime , the four open machines can be arranged , like vertices of a regular tetrahedron , to have six two - way channels with null phases , with all echo counts being ._ proof : _ assuming a static spacetime , choose a coordinate system with all the metric tensor components independent of the time coordinate , in such a way that it makes sense to speak of a time coordinate distinct from space coordinates ( for example , in a suitable region of a schwarzschild geometry ) .let denote the machine with specified proper period , and let , , and denote the other three machines . for , , we prove the possibility , independent of curvature , of the channels require that each of four machines be located at some fixed spatial coordinate . because the spacetime is static ,the coordinate time difference between a transmission at and a reception at any other vertex ( a ) is independent of the value of the time coordinate at transmission and ( b ) is the same as the coordinate time difference between a transmission at and a reception at .for this reason any one - way repeating channel of the form ( [ eq : vs ] ) can be turned around to make a channel in the opposite direction , so that establishing a channel in one direction suffices . for transmissions from any vertex to any other vertex , the coordinate - time difference between events of transmission equals the coordinate time difference between receptions . a signal from a transmission event on propagates on an expanding light cone , while an echo propagates on a light cone contracting toward an event of reception on . under the constraint that the echo count is , ( so the proper duration from the transmission event to the reception event for the echo is ), the echo event must be on a 2-dimensional submanifold a sphere , defined by constant radar distance of its points from with transmission at a particular ( but arbitrary ) tick of .in coordinates adapted to a static spacetime , this sphere may appear as a `` potatoid '' in the space coordinates , with different points on the potatoid possibly varying in their time coordinate .the potatoid shape corresponding to an echo count of remains constant under evolution of the time coordinate .channels from to the other three vertices involve putting the three vertices on this potatoid . put anywhere on the `` potatoid '' . put anywhere on the ring that is intersection of potatoid of echo count radiated from and that radiated from . 
put on an intersection of the potatoids radiating from the other three vertices .+ q.e.d .according to prop [ prop : nine ] the channels , and in particular the echo counts possible for a complete graph of four open machines in flat spacetime are also possible for a spacetime of gentle static curvature , provided that three of the machines are allowed to set their periods not to a fixed proper duration but in such a way that all four machines have periods that are identical in coordinate time .the same holds if fewer channels among the four machines are specified .but for five machines , the number of channels connecting them matters .five open machines fixed to space coordinates in a static spacetime are analogous to the 4 vertices of a plane figure , in that an arrangement corresponding to an incomplete graph on five vertices can have echo counts independent of curvature , while a generic arrangement corresponding to a complete graph must have curvature - dependent relations among its echo counts . [ prop:9.5 ] assuming a static spacetime , consider an arrangement of five open machines obtained by starting with a tetrahedral arrangement of four open machines with all echo counts of as in prop .[ prop : nine ] , and then adding a fifth machine : independent of curvature , a fifth open machine can be located with two - way channels having echo counts of linking it to any three of the four machines of tetrahedral arrangement , resulting in nine two - way channels altogether . _proof : _ the fifth machine can be located as was the machine , but on the side opposite to the cluster , , . + q.e.d .in contrast to the arrangement of 9 two - way channels , illustrated in fig .[ fig:5pt ] ( a ) consider an arrangement of 5 open machines corresponding to a complete graph on five vertices , with has ten two - way channels , as illustrated in fig .[ fig:5pt ] ( b ) . for five open machines in a generic spacetime , not all of the ten two - way channels can have the same echo counts .instead , channels in a flat spacetime as specified below can exist with about the simplest possible ratios of echo counts .label five open machines , , , , , and .take to be stepped by a clock ticking at a fixed proper period , letting the other machines tick at variable rates to be determined .let be any machine other than .for a flat spacetime it is consistent for the proper periods of all 5 machines to be , for the echo counts to be and for the echo counts to be , leading to twenty channels , conveniently viewed as in fig .[ fig:5pt ] ( b ) as consisting of ten two - way channels .[ prop : ten ] consider 5 open machines each fixed to space coordinates in a static curved spacetime in which the machines are all pairwise radar linkable , with 10 two - way channels connecting each machine to all the others ; then : 1 . allowing for the periods of the machines other than to vary ,it is consistent with the curvature for all but one of the ten two - way channels to have null phases and echo counts as in a flat spacetime , but at least one two - way channel must have a different echo count that depends on the spacetime curvature .2 . suppose of the 10 two - way links are allowed to have non - zero phases .if the spacetime does not admit all phases to be null , in generic cases the least possible maximum amplitude of a phase decreases as increases from 1 up to 10 .the periods of the clocks of the open machines can be taken to be the coordinate - time interval corresponding to the proper period at . 
_proof : _ reasoning as in the proof of prop . [ prop : nine ] with its reference to a static spacetime shows that the same echo counts are possible as for flat spacetime _ with the exception _ that at least one of the two - way channels must be free to have a different echo count . for the second part , similar reasoning shows that allowing more machines to vary in echo count allows a reduction in the maximum variation from the echo counts of a flat spacetime , compared to the case in which fewer machines are allowed to vary in echo count . + q.e.d .

adding the tenth two - way channel to an arrangement of five open machines effectively `` freezes '' all the echo counts . to define `` freezing '' as applied to echo counts , first take note of an asymmetry in the dependence of echo counts on clock rates . consider any two machines : the echo count registered at the first machine can be changed by changing that machine 's clock rate , but it is insensitive to the clock rate of the machine that returns the echo . an echo count will accordingly be said to be _ to _ the machine that registers it and _ from _ the machine that returns the echo .

* definition : * an arrangement of open machines is _ frozen _ if it has an echo count to a machine that can not be changed slightly without changing the length of another echo count .

the property of being frozen is important because of the following . whether or not a frozen arrangement of open machines is consistent with a hypothesized spacetime depends on the weyl curvature of the spacetime . for example , think of the 5 open machines as carried by 5 space vehicles coasting along a radial geodesic in a schwarzschild geometry . in this example the variation of echo counts with curvature is small enough to be expressed by non - null phases of reception . in fermi normal coordinates centered midway between the radially moving open machines one has the metric with a curvature parameter , where the schwarzschild radial coordinate locates the origin of the fermi normal coordinates , the radial distance coordinate is measured from the center point between the two machines , and the remaining coordinates are transverse to the radial direction along which the cluster coasts . we make the adiabatic approximation which ignores the time dependence of the curvature parameter , so that in calculations to first order in curvature we take advantage of the ( adiabatically ) static spacetime by locating open machines at fixed values of the space coordinates . the metric is symmetric under rotation about the ( radially directed ) axis . let two machines be located symmetrically at positive and negative values , respectively , of that axis , and let the remaining three machines be located on a circle in the transverse plane .
with the five machines so located , the coordinate - time difference between transmissions is then the same as the coordinate - time difference between receptions , and the coordinate - time delay in one direction equals that in the opposite direction ( as stated in the proof of prop [ prop : nine ] ) .we construct seven two - way channels as above with null phases and show that the remaining 3 two - way channels can have the equal phases , but that this phase must be non - null with a curvature dependent amplitude .[ prop : eleven ] under the stated conditions , if is small enough so that + , then for a fixed separation between and , an adiabatic change in curvature imposes a constraint on bit rate possible for the channels , stemming from a lower bound on clock periods .suppose the cluster of 5 open machines is arranged so that the proper radar distance from to is 6,000 km , suppose the cluster descends from a great distance down to a radius of km from an earth - sized mass kg . for simplicity ,assume that the positions and clock rates are continually adjusted to maintain null phases for all but the three channels .because , prop .[ prop : eleven ] implies , which with ( [ eq : main ] ) implies that . substituting the parameter values, one finds that for the phases for the channels to satisfy ( [ eq : main ] ) , it is necessary that s. if an alphabet conveys bits / character , the maximum bit rate for all the channels in the 5-machine cluster is bits / s .turning from type i to questions of type ii , we look at how the preceding `` blackboard modeling '' of clocks , expressed in the mathematical language of general relativity , get put to work when models are encoded into the open machines that manage their own logical synchronization . for questions of type ii( and type iii ) both models that explain or predict evidence and the evidence itself , pertaining to physical clocks , come into play . models encoded into computers contribute to the steering of physical clocks in rate and relative position toward an aiming point , generating echo counts as evidence that , one acquired , can stimulate the guessing of new models that come closer to the aiming point .to express the effect of quantum uncertainty on logical synchronization , specifically on deviations from aiming points , one has to bring quantum uncertainty into cooperation with the representation of clocks by general - relativistic worldlines . this bringing together hinges on distinguishing evidence from its explanations .timelike worldlines and null geodesics in explanations , being mathematical , can have no _ mathematical _ connection to physical atomic clocks and physical signals . to make such a connection them one has to invoke the logical freedom to make a guess . within this freedom, one can resort to quantum theory to explain deviations of an atomic clock from an imagined proper clock , represented as a worldline , without logical conflict .because of quantum uncertainty and for other reasons , if an aiming point in terms of channels and a given frequency scale is to be reached , steering is required , in which evidence of deviations from the aiming point combine with hypotheses concerning how to steer . to keep things simple ,consider a case of an aiming point with null phases , involving two open machines and , as in the example of sec .[ sec : typei ] , modeled by a given worldline with given clock readings , where aims to maintain two - way , null - phase channel of given . 
for this registers arriving phases of reception and adjusts its clock rate and its position more or less continually to keep those phases small .deviations in position that drive position corrections show up not directly at but as phases registered by , so the steering of machine requires information about receptive phases measured by .the knowledge of the deviation in position of at can not arrive at until its effect has shown up at and been echoed back as a report to , entailing a delay of at least , hence requiring that machine predict the error that guides for prior to receiving a report of the error .that is , steering deviations by one open machine are measured in part by their effect on receptive phases of other open machines , so that steering of one machine requires information about receptive phases measured by other machines , and the deviations from an aiming point must increase with increasing propagation delays that demand predicting further ahead . as is clear from the cluster of five machines discussed in sec .[ sec : typei ] , the aiming - point phases can not in general all be taken to be zero . for any particular aiming - point phase will be a deviation of a measured phase quantity given by whatever the value of , adjustments to contain phases within tolerable bounds depends on phase changes happening only gradually , so that trends can be detected and responded to on the basis of adequate prediction ( aka guesswork ) . +* remarks : * 1 . unlike cycle counts of open machines , which we assume are free of uncertainty , measured phases and deviations of phases from aiming points are quantities subject to uncertainty . for logic to work in a network , transmission of logical symbolsmust preserve sharp distinctions among them ; yet the maintenance of sharp distinctions among transmitted symbols requires responses to fuzzy measurements . 2 .the acquisition of logical synchrony in digital communications involves an unforeseeable waiting time , like the time for a coin on edge to fall one way or the other .aiming points are not forever , and here we say a few words about questions of type iii , in which an aiming point based on a hypothesized metric tensor appears unreachable , and perhaps needs to be revised .we have so far looked at one or another manifold with metric as some given hypothesis , whether explored on the blackboard or coded into an open machine to serve in maintaining its logical synchronization . in this contextwe think of as `` given . '' but deviations of phases outside of tolerances present another context , calling for revising a metric tensor field . in this contextone recognizes that a metric tensor field is hypothesized provisionally , to be revised as prompted by deviations outside allowed tolerances in implementing an aiming point .drawing on measured phases as evidence in order to adjust a hypothesis of a metric tensor is one way to view the operation of the laser interferometer gravitational - wave observatory ( ligo ) . while ligo sensitivity drops off severely below 45 hz , the arrangement of five open machines of prop .[ prop : ten ] has no low - frequency cutoff , and so has the potential to detect arbitrarily slow changes in curvature .99 b. n. taylor and a. thompson , eds , _ the international system of units ( si ) _ , nist special publication 330 , 2008 edition , national institutes of science and technology. j. m. myers and f. h. 
madjid , `` a proof that measured data and equations of quantum mechanics can be linked only by guesswork , '' in s. j. lomonaco jr . and h.e .brandt ( eds . ) _ quantum computation and information _ , contemporary mathematics series , vol .305 , american mathematical society , providence , 2002 , pp .. f. h. madjid and j. m. myers , `` matched detectors as definers of force , '' ann .physics * 319 * , 251273 ( 2005 ) .j. m. myers and f. h. madjid , `` ambiguity in quantum - theoretical descriptions of experiments , '' in k. mahdavi and d. koslover , eds ., _ advances in quantum computation _ , contemporary mathematics series , vol . 482 ( american mathematical society , providence , i , 2009 ) , pp . 107123 . j. m. myers and f. h. madjid , `` what probabilities tell about quantum systems , with application to entropy and entanglement , '' in a. bokulich and g. jaeger , eds . , _ quantum information and entanglement _ , cambridge university press , cambridge uk , pp .127150 ( 2010 ) .a. m. turing , `` on computable numbers with an application to the entscheidungsproblem , '' proc .london math .soc . , series 2 , * 42 * , 230265 ( 193637 ) .m. soffel et al . , `` the iau resolutions for astrometry , celestial mechanics , and metrology in the relativistic framework : explanatory supplement , '' the astronomical journal , * 126 * , 26872706 ( 2003 ) . j. m. myers and f. h. madjid , `` rhythms essential to logical communication , '' in quantum information and computation ix , e. donkor , a. r. pirich , and h. e. brandt , eds , proceedings of the spie , * 8057 * , pp .80570n112 ( 2011 ) .j. m. myers and f. h. madjid , `` rhythms of memory and bits on edge : symbol recognition as a physical phenomenon , '' arxiv:1106.1639 , 2011 .h. meyr and g. ascheid , _ synchronization in digital communications _ , wiley , new york , 1990 .the ligo scientific collaboration ( http://www.ligo.org ) `` ligo : the laser interferometer gravitational - wave observatory , '' rep .prog . phys . *72 * , 076901 ( 2009 ) v. perlick , `` on the radar method in general - relativistic spacetimes , '' in h. dittus , c. lmmerzahl , and s. turyshev , eds . , _ lasers , clocks and drag - free control : expolation of relativistic gravity in space _ , ( springer , berlin , 2008 ) ; also arxiv:0708.0170v1 . f. k. manasse and c. w. misner , j. math phys ., `` fermi normal coordinates and some basic concepts in differential geometry , '' * 4 * , 735745 ( 1963 ) . t. e. parker , s. r. jefferts , and t. p. heavner , `` medium - term frequency stability of hydrogen masers as measured by a cesium fountain , '' 2010 ieee international frequency control symposium ( fcs ) , pp . 318323 (( available at http://tf.boulder.nist.gov/general/pdf/2467.pdf ) j. levine and t. parker , `` the algorithm used to realize utc(nist ) , '' 2002 ieee international frequency control symposium and pda exhibition , pp . 537542 ( 2002 )
a clock steps a computer through a cycle of phases . for the propagation of logical symbols from one computer to another , each computer must mesh its phases with arrivals of symbols from other computers . even the best atomic clocks drift unforeseeably in frequency and phase ; feedback steers them toward aiming points that depend on a chosen wave function and on hypotheses about signal propagation . a wave function , always under - determined by evidence , requires a guess . guessed wave functions are coded into computers that steer atomic clocks in frequency and position : clocks that step computers through their phases of computation , as well as clocks , some on space vehicles , that supply evidence of the propagation of signals . recognizing the dependence of the phasing of symbol arrivals on guesses about signal propagation elevates ` logical synchronization ' from its practice in computer engineering to a discipline essential to physics . within this discipline we begin to explore questions invisible under any concept of time that fails to acknowledge the unforeseeable . in particular , variation of spacetime curvature is shown to limit the bit rate of logical communication .
within the framework of continuum mechanics there are surface and bulk material failure models .surface failure models are known by name of cohesive zone models ( czms ) . in the latter case ,continuum is enriched with discontinuities along surfaces - cohesive zones - with additional traction - displacement - separation constitutive laws .these laws are built qualitatively as follows : traction increases up to a maximum and then goes down to zero via increasing separation ( barenblatt , 1959 ; needleman , 1987 ; rice and wang , 1989 , tvergaard and hutchinson , 1992 ; camacho and ortiz , 1996 ; de borst , 2001 ; xu and needleman , 1994 ; roe and siegmund , 2003 ; moes et al , 1999 ; park et al , 2009 ; gong et al , 2012 ) .if the location of the separation surface is known in advance ( e.g. fracture along weak interfaces ) then the use of czm is natural .otherwise , the insertion of cracks in the bulk in the form of the separation surfaces remains an open problem , which includes definition of the criteria for crack nucleation , orientation , branching and arrest .besides , the czm approach presumes the simultaneous use of two different constitutive models , one for the cohesive zone and another for the bulk , for the same real material .certainly , a correspondence between these two constitutive theories is desirable yet not promptly accessible .the issues concerning the czm approach have been discussed by needleman ( 2014 ) , the pioneer of the field .bulk failure models are known by name of continuum damage mechanics ( cdm ) . in the latter case , material failure or damageis described by constitutive laws including softening in the form of the falling stress - strain curves ( kachanov , 1958 ; gurson , 1977 ; simo , 1987 ; voyiadjis and kattan , 1992 ; gao and klein , 1998 ; klein and gao , 1998 ; menzel and steinmann , 2001 ; dorfmann and ogden , 2004 ; lemaitre and desmorat , 2005 ; volokh , 2004 , 2007 ; benzerga et al , 2016 ) .remarkably , damage nucleation , propagation , branching and arrest naturally come out of the constitutive laws .unfortunately , numerical simulations based on the the bulk failure laws show the so - called pathological mesh - sensitivity , which means that the finer meshes lead to the narrower damage localization areas . in the limit case ,the energy dissipation in failure tends to zero with the diminishing size of the computational mesh .this physically unacceptable mesh - sensitivity is caused by the lack of a characteristic length in the traditional formulation of continuum mechanics .to surmount the latter pitfall gradient- or integral- type nonlocal continuum formulations are used where a characteristic length is incorporated to limit the size of the spatial failure localization ( pijaudier - cabot and bazant , 1987 ; lasry and belytschko , 1988 ; peerlings et al , 1996 ; de borst and van der giessen , 1998 ; francfort and marigo , 1998 ; silling , 2000 ; hofacker and miehe , 2012 ; borden et al , 2012 ) .the regularization strategy rooted in the nonlocal continua formulations is attractive because it is lucid mathematically .unluckily , the generalized nonlocal continua theories are based ( often tacitly ) on the physical assumption of long - range particle interactions while the actual particle interactions are short - range - on nanometer or angstrom scale. 
therefore , the physical basis for the nonlocal models appears disputable .a more physically - based treatment of the pathological mesh - sensitivity of the bulk failure simulations should likely include multi - physics coupling .such an attempt to couple mass flow ( sink ) and finite elastic deformation within the framework of brittle fracture is considered in the present work .cracks are often thought of as material discontinuities of zero thickness .such idealized point of view is probably applicable to nano - structures with perfect crystal organization . in the latter case fractureappears as a result of a separation - unzipping - of two adjacent atomic or molecular layers - fig .[ fig : schematic - cracks - of ] ( left ) . ] in the case of the bulk material with a sophisticated heterogeneous organization the crack appears as a result of the development of multiple micro - cracks triggered by the massive breakage of molecular or atomic bonds - fig .[ fig : schematic - cracks - of ] ( right ) .the bond breakage is not confined to two adjacent molecular layers and the process involves thousands layers within an area or volume with the representative characteristic size .it is in interesting that material failure does not require the breakage of all molecular or atomic bonds within a representative volume .only fraction of these bonds should be broken for the material disintegration .for example , in the case of natural rubber , roughly speaking , every third bond should be broken within a representative volume to create crack ( volokh , 2013a ) .the local bond failure leads to the highly localized loss of material .the latter , in our opinion , is the reason why even closed cracks are visible by a naked eye .thus , material flows out of the system during the fracture process .the system becomes open from the thermodynamic standpoint .however , cracks usually have very small thickness and the amount of the lost material is negligible as compared to the whole bulk .the latter observation prompts considering the system as the classical closed one .such approximation allows ignoring the additional supply of momenta and energy in the formulation of the initial boundary value problem described in the next sections .following the approach of continuum mechanics we replace the discrete molecular structure of materials by a continuously distributed set of material points which undergo mappings from the initial ( reference ) , , to current , , configuration : .the deformation in the vicinity of the material points is described by the deformation gradient . in what follows we use the lagrangean description with respect to the initial or reference configuration and define the local mass balance in the form where is the referential ( lagrangean ) mass density; is the referential mass flux ; is the referential mass source ( sink ) ; and in cartesian coordinates ._ we further assume that failure and , consequently , mass flow are highly localized and the momenta and energy balance equations can be written in the standard form without adding momenta and energy due to the mass alterations . 
_ in view of the assumption above , we write momenta and energy balance equations in the following forms accordingly and where is the velocity of a material point ; is the body force per unit mass ; is the first piola - kirchhoff stress and ; is the specific internal energy per unit mass ; is the specific heat source per unit mass ; and is the referential heat flux .entropy inequality reads where is the absolute temperature .substitution of from ( [ eq : energy balance ] ) to ( [ eq : entropy inequality ] ) yields or , written in terms of the internal dissipation , we introduce the specific helmholtz free energy per unit mass and , consequently , we have substituting ( [ eq : specific internal energy ] ) in ( [ eq : dissipation 1 ] ) we get then , we calculate the helmholtz free energy increment and substitute it in ( [ eq : dissipation 2 ] ) as follows the coleman - noll procedure suggests the following choice of the constitutive laws and , consequently , the dissipation inequality reduces to _ we further note that the process of the bond breakage is very fast as compared to the dynamic deformation process and the mass density changes in time as a step function .so , strictly speaking , the density rate should be presented by the dirac delta in time .we will not consider the super fast transition to failure , which is of no interest on its own , and assume that the densities before and after failure are constants and , consequently , _ then , the dissipation inequality reduces to which is obeyed because the heat flows in the direction of the lower temperature .it remains to settle the boundary and initial conditions .natural boundary conditions for zero mass flux represent the mass balance on the boundary or where is the unit outward normal to the boundary in the reference configuration .natural boundary conditions for given traction represent the linear momentum balance on the boundary or , alternatively , the essential boundary conditions for placements can be prescribed on initial conditions in complete the formulation of the coupled mass - flow - elastic initial boundary value problem law for the lagrangean mass flux can be written by analogy with the fourier law for heat conduction where is a mass conductivity constant for the isotropic case .constitutive law for the mass source is the very heart of the successful formulation of the theory and the reader is welcome to make a proposal .we choose , for example , the following constitutive law , whose motivation is clarified below , -\rho),\label{eq : mass source}\ ] ] where is a constant initial density ; is a material constant ; is the specific energy limiter per unit mass , which is calibrated in macroscopic experiments ; is a dimensionless material parameter , which controls the sharpness of the transition to material failure on the stress - strain curve ; and is a unit step function , i.e. if and otherwise .the switch parameter , which is necessary to prevent from material healing , will be explained below .substitution of ( [ eq : mass source ] ) and ( [ eq : mass flux ] ) in ( [ eq : mass flow ] ) yields -\frac{\rho}{j\rho_{0}}=0,\label{eq : mass balance}\ ] ] where is the characteristic length ._ it is remarkable that we , actually , do not need to know and separately and the knowledge of the characteristic length is enough_. for example , the estimate of the characteristic length for rubber is ( volokh , 2011 ) and for concrete it is ( volokh , 2013b ) . 
to justify the choice of the constitutive equation ( [ eq : mass source ] ) for the mass source / sink we note that in the case of the homogeneous deformation and mass flow the first term on the left hand side of ( [ eq : mass balance ] ) vanishes and we obtain .\ ] ] substituting this mass density in the hyperelastic constitutive law we have \frac{\partial w}{\partial\mathbf{f}}=h(\zeta)\exp[-(w/\varphi)^{m}]\frac{\partial w}{\partial\mathbf{f}},\label{eq : stress - strain}\ ] ] where are the helmholtz free energy and energy limiter per unit referential volume accordingly .constitutive law ( [ eq : stress - strain ] ) presents the hyperelasticity with the energy limiters - see volokh ( 2007 , 2013a , 2016 ) for the general background . integrating ( [ eq : stress - strain ] ) with respect to the deformation gradient we introduce the following form of the strain energy function where here and designate the constant bulk failure energy and the elastic energy respectively ; is the upper incomplete gamma function .the switch parameter $ ] is defined by the evolution equation where is a dimensionless precision constant .the physical interpretation of ( [ eq : energy with limiter ] ) is straight : material is hyperelastic for the strain energy below the failure limit - .when the failure limit is reached , then the strain energy becomes constant for the rest of the deformation process precluding the material healing .parameter is _ not an internal variable_. it is a switch : for the reversible process ; and for the irreversibly failed material and dissipated strain energy . for illustration, we present the following specialization of the intact strain energy for a filled natural rubber ( nr ) ( volokh , 2010 ) where , , and the failure parameters are , and . the cauchy stress , defined by , versus stretch curve for the uniaxial tension is shown in fig . [ fig : cauchy - stress ] for both cases with and without the energy limiter .material failure takes place at the critical limit point in correspondence with tests conducted by hamdi et al ( 2006 ) . versus stretch .dashed line specifies the intact model ; solid line specifies the model with energy limiter.[fig : cauchy - stress ] ] for the implications and experimental comparisons of the elasticity with energy limiters the reader is advised to look through volokh ( 2013a ; 2016 ) , for example .we completely skip this part for the sake of brevity .thus , the proposed constitutive law for the mass source is motivated by the limit case of the coupled formulations in which the deformation is homogeneous .crack in a bulk material is not an ideal unzipping of two adjacent atomic layers .it is rather a massive breakage of atomic bonds diffused in a volume of characteristic size .the massive bond breakage is accompanied by the localized loss of material .thus , material sinks in the vicinity of the crack .evidently , the law of mass conservation should be replaced by the law of mass balance , accounting for the mass flow in the vicinity of the crack .the coupled mass - flow - elasticity problem should be set for analysis of crack propagation . 
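to make the limited stress response described above concrete , the following sketch evaluates a uniaxial cauchy stress - stretch curve with and without the limiter factor exp[-(w / phi)^m ] . because the coefficients of the filled natural rubber energy and its calibrated failure parameters are not reproduced above , the sketch substitutes an incompressible neo - hookean energy with an assumed shear modulus and assumed values of phi and m ; these numbers are purely illustrative and are not the calibrated values of volokh ( 2010 ) .

```python
import numpy as np

# assumed (illustrative) material parameters -- not the calibrated rubber values
mu = 1.0     # shear modulus of the intact neo-hookean stand-in
phi = 7.0    # energy limiter (failure energy per unit volume), assumed
m = 10.0     # sharpness parameter of the transition to failure, assumed

def w_intact(lam):
    """intact incompressible neo-hookean energy for uniaxial stretch lam."""
    i1 = lam**2 + 2.0 / lam
    return 0.5 * mu * (i1 - 3.0)

def cauchy_stress(lam, limited=True):
    """uniaxial cauchy stress sigma = lam * dW/dlam, optionally damped by the
    energy-limiter factor exp[-(W/phi)**m] as in the constitutive law above."""
    dlam = 1e-6
    dw = (w_intact(lam + dlam) - w_intact(lam - dlam)) / (2.0 * dlam)
    sigma = lam * dw
    if limited:
        sigma *= np.exp(-(w_intact(lam) / phi) ** m)
    return sigma

stretches = np.linspace(1.0, 4.0, 61)
for lam in stretches[::10]:
    print(f"lam = {lam:4.2f}   intact = {cauchy_stress(lam, False):7.3f}"
          f"   limited = {cauchy_stress(lam, True):7.3f}")
```

the limited curve rises to a limit point and then drops toward zero , which is the qualitative behavior of the solid ( limited ) curve in the stress - stretch figure discussed above , while the intact curve keeps rising .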
in the present work , we formulated the coupled problem based on the thermodynamic reasoning .we assumed that the mass loss related to the crack development was small as compared to the mass of the whole body .in addition , we assumed that the process of the bond breakage was very fast and the mass density jumped from the intact to failed material abruptly allowing to ignore the transient process of the failure development .these physically reasonable assumptions helped us to formulate a simple coupled initial boundary value problem ._ in the absence of failure localization into cracks the theory is essentially the hyperelasticity with the energy limiters .however , when the failure starts localizing into cracks the diffusive material sink activates via the mass balance equation and it provides the regularization of numerical simulations . _ the latter regularization is due to the mass diffusion - first term on the left hand side of ( [ eq : mass balance ] ) .the attractiveness of the proposed framework as compared to the traditional continuum damage theories is that no internal parameters ( like damage variables , phase fields etc . )are used while the regularization of the failure localization is provided by the physically sound law of mass balance .a numerical integration procedure for the formulated coupled initial boundary value problem is required and it will be considered elsewhere .the support from the israel science foundation ( isf-198/15 ) is gratefully acknowledged .barenblatt gi ( 1959 ) the formation of equilibrium cracks during brittle fracture .general ideas and hypotheses .axially - symmetric cracks .j appl math mech 23:622 - 636 + + benzerga aa , leblond jb , needleman a , tvergaard v ( 2016 ) ductile failure modeling .int j fract 201:29 - 80 + + borden mj , verhoosel cv , scott ma , hughes tjr , landis cm ( 2012 ) a phase - field description of dynamic brittle fracture .comp meth appl mech eng 217 - 220:77 - 95 ++ de borst r ( 2001 ) some recent issues in computational failure mechanics .int j numer meth eng 52:63 - 95 ++ de borst r , van der giessen e ( 1998 ) material instabilities in solids .john wiley & sons , chichester + + camacho gt , ortiz m ( 1996 ) computational modeling of impact damage in brittle materials .int j solids struct 33:2899 2938 + + dorfmann a , ogden rw ( 2004 ) a constitutive model for the mullins effect with permanent set in particle - reinforced rubber .int j solids struct 41:1855 - 1878 + + francfort ga , marigo jj ( 1998 ) revisiting brittle fracture as an energy minimization problem .j mech phys solids 46:1319 - 1342 + + gao h , klein p ( 1998 ) numerical simulation of crack growth in an isotropic solid with randomized internal cohesive bonds .j mech phys solids 46:187 - 218 + + gong b , paggi m , carpinteri a ( 2012 ) a cohesive crack model coupled with damage for interface fatigue problems .int j fract 137:91 - 104 + + gurson al ( 1977 ) continuum theory of ductile rupture by void nucleation and growth : part i - yield criteria and flow rules for porous ductile media .j eng mat tech 99:2151 + + hamdi a , nait abdelaziz m , ait hocine n , heuillet p , benseddiq n ( 2006 ) a fracture criterion of rubber - like materials under plane stress conditions .polymer testing 25:994 - 1005 + + hofacker m , miehe c ( 2012 ) continuum phase field modeling of dynamic fracture : variational principles and staggered fe implementation .int j fract 178:113 - 129 + + kachanov lm ( 1958 ) time of the rupture process under creep conditions .izvestiia akademii 
nauk sssr , otdelenie teckhnicheskikh nauk 8:26 - 31 ++ klein p , gao h ( 1998 ) crack nucleation and growth as strain localization in a virtual - bond continuum .eng fract mech 61:21 - 48 + + lasry d , belytschko t ( 1988 ) localization limiters in transient problems .int j solids struct 24:581 - 597 + + lemaitre j , desmorat r ( 2005 ) engineering damage mechanics : ductile , creep , fatigue and brittle failures .springer , berlin + + menzel a , steinmann p ( 2001 ) a theoretical and computational framework for anisotropic continuum damage mechanics at large strains .int j solids struct 38:9505 - 9523 + +moes n , dolbow j , belytschko t ( 1999 ) a finite element method for crack without remeshing .int j num meth eng 46:131 - 150 + + needleman a ( 1987 ) a continuum model for void nucleation by inclusion debonding .j appl mech 54:525 - 531 + + needleman a ( 2014 ) some issues in cohesive surface modeling .procedia iutam 10:221 - 246 + + park k , paulino gh , roesler jr ( 2009 ) a unified potential - based cohesive model of mixed - mode fracture .j mech phys solids 57:891 - 908 + + peerlings rhj , de borst r , brekelmans wam , de vree jhp ( 1996 ) gradient enhanced damage for quasi - brittle materials .int j num meth eng 39:3391 - 3403 + +pijaudier - cabot g , bazant zp ( 1987 ) nonlocal damage theory .j eng mech 113:1512 - 1533 + + rice jr , wang js ( 1989 ) embrittlement of interfaces by solute segregation .mater sci eng a 107:23 - 40 + + roe kl , siegmund t ( 2003 ) an irreversible cohesive zone model for interface fatigue crack growth simulation .eng fract mech 70:209 - 232 + + silling sa ( 2000 ) reformulation of elasticity theory for discontinuities and long - range forces .j mech phys solids48:175 - 209 + + simo jc ( 1987 ) on a fully three - dimensional finite strain viscoelastic damage model : formulation and computational aspects .comp meth appl mech eng 60:153 - 173 + + tvergaard v , hutchinson jw ( 1992 ) the relation between crack growth resistance and fracture process parameters in elastic - plastic solids .j mech phys solids 40:1377 - 1397 + + voyiadjis gz , kattan pi ( 1992 ) a plasticity - damage theory for large deformation of solidsi . theoretical formulation .int j eng sci 30:1089 - 1108 + +volokh ky ( 2004 ) nonlinear elasticity for modeling fracture of isotropic brittle solids .j appl mech 71:141 - 143 + + volokh ky ( 2007 ) hyperelasticity with softening for modeling materials failure .j mech phys solids 55:2237 - 2264 + + volokh ky ( 2010 ) on modeling failure of rubber - like materials .mech res com 37:684 - 689 + + volokh ky ( 2011 ) characteristic length of damage localization in rubber .int j fract 168:113 - 116 + + volokh ky ( 2013a ) review of the energy limiters approach to modeling failure of rubber .rubber chem technol 86:470 - 487 + + volokh ky ( 2013b ) characteristic length of damage localization in concrete .mech res commun 51:29 - 31 + + volokh ky ( 2016 ) mechanics of soft materials .springer + + xu xp , needleman a ( 1994 ) numerical simulations of fast crack growth in brittle solids .j mech phys solids 42:1397 - 1434 +
cracks are created by massive breakage of molecular or atomic bonds . the latter , in its turn , leads to the highly localized loss of material , which is the reason why even closed cracks are visible by a naked eye . thus , fracture can be interpreted as the local material sink . mass conservation is violated locally in the area of material failure . we consider a theoretical formulation of the coupled mass and momenta balance equations for a description of fracture . our focus is on brittle fracture and we propose a finite strain hyperelastic thermodynamic framework for the coupled mass - flow - elastic boundary value problem . the attractiveness of the proposed framework as compared to the traditional continuum damage theories is that no internal parameters ( like damage variables , phase fields etc . ) are used while the regularization of the failure localization is provided by the physically sound law of mass balance .
in this paper we consider the numerical integration of autonomous stochastic differential delay equations ( sddes ) in the it s sense with initial data ] .and the functions and are both locally lipschitz continuous in and , i.e. , there exists a constant such that for all with .moreover , we assume that [ init ] is hlder continuous in mean - square with exponent 1/2 , that is and is a continuous function satisfying in the following convergence analysis , we find it convenient to use continuous - time approximation solution .hence we define continuous version as follows where . for we can write it in integral form as follows where it is not hard to verify that , that is , coincides with the discrete solutions at the grid - points . in additional to the above two assumptions , we will need another one .[ anmb ] the exact solution and its continuous - time approximation solution have p - th moment bounds , that is , there exist constants such that \vee \mathbb{e}\left[\sup\limits_{0 \leq t \leq t}|\bar{y}(t)|^{p}\right ] \leq a \label{mb0}.\ ] ] now we state our convergence theorem here and give a sequence of lemmas that lead to a proof .[ ssbemain]under assumptions [ lcmc],[init],[anmb ] , if the implicit equation ( [ ssbe1 ] ) admits a unique solution , then the continuous - time approximate solution ( [ ce1 ] ) will converge to the true solution of ( [ sddes1 ] ) in the mean - square sense , i.e. , we need several lemmas to complete the proof of theorem [ ssbemain ] .first , we will define three stopping times where as usual is set as ( denotes the empty set ) .[ lem1 ] under assumption [ lcmc ] , [ init ] , there exist constants , such that for and _ proof . _ for , by definition of and , noticing that for with . using linear growth condition of and moment bounds in ( [ mb ] ) , we have appropriate constant so that as for estimate ( [ yd2 ] ) , there are four cases as to the location of and : 1 ) , 2 ) , 3 ) , 4 ) .+ noticing that the delay satisfies lipschitz condition ( [ initial2 ] ) , one sees that in the case 1 ) , combining hlder continuity of initial data ( [ initial1 ] ) and ( [ tau ] ) gives the desired assertion . in the case 2 ) , without loss of generality , we assume , .thus we have from ( [ ssbe1 ] ) and ( [ y*n2 ] ) that \nonumber \\ & & + \mu \sum_{k = j+1}^{i-1 } \left[h f(y^*_{k+1},\tilde{y}^*_{k+1})+ g(y^*_{k } , \tilde{y}^*_{k})\delta w_{k } \right ] , \label{yd5}\end{aligned}\ ] ] where as usual we define the second summation equals zero when . 
noticing from ( [ tau ] ) that , and combining local linear growth bound ( [ lf ] ) for , global linear growth condition for and moment bounds ( [ mb ] ) , we can derive from ( [ yd5 ] ) that in the case 3 ) and 4 ) , using an elementary inequality gives then combining this with results obtained in case 1 ) and 2 ) gives the required result , with a universal constant independent of .[ lem2 ] under assumption [ lcmc ] , [ init ] , for stepsize , there exists a constant such that \leq c_r h,\ ] ] with dependent on , but independent of ._ for simplicity , denote from ( [ sddes1 ] ) and ( [ ce2 ] ) ,we have = \mathbb{e } \left[\sup_{0 \leq s \leq t}\left|\bar{y}(s\wedge \sigma_r)-x(s\wedge \sigma_r)\right|^2 \right ] \nonumber \\ & = & \mathbb{e } \left[\sup_{0 \leq s \leq t}\left|\int_0^{s\wedge \sigma_r}f(y^*(r),\tilde{y}^*(r))-f(x(r),x(r-\tau(r)))\mbox{d}r \right.\right.\nonumber \\ & & \left.\left.+ \int_0^{s\wedge \sigma_r}g(y^*(r),\tilde{y}^*(r))-g(x(r),x(r-\tau(r)))\mbox{d}w(r)\right|^2\right ] \nonumber \\ & \leq & 2 t \mathbb{e } \int_0^{t\wedge \sigma_r } \left|f(y^*(s),\tilde{y}^*(s))-f(x(s),x(s-\tau(s)))\right|^2 \mbox{d}s \allowdisplaybreaks \nonumber \\ & & + 2\mathbb{e}\left[\sup_{0 \leq s \leq t } \left|\int_0^{s\wedge \sigma_r}g(y^*(r),\tilde{y}^*(r))-g(x(r),x(r-\tau(r)))\mbox{d}w(r)\right|^2\right ] \allowdisplaybreaks \nonumber \\ & \leq & 2(t+4)l_r \mathbb{e } \int_0^{t\wedge \sigma_r } \label{3.14}\end{aligned}\ ] ] where hlder s inequality and the burkholder - davis - gundy inequality were used again . using the elementary inequality , one computes from ( [ 3.14 ] ) that \nonumber \\ & \leq & 4(t+4)l_r \mathbb{e } \int_0^{t\wedge \sigma_r } & & + 4(t+4)l_r \mathbb{e } \int_0^{t\wedge \sigma_r } |\tilde{y}^*(s)- \bar{y}(s-\tau(s))|^2 + |\bar{y}(s-\tau(s))-x(s-\tau(s))|^2 \mbox{d}s \allowdisplaybreaks \nonumber \\ & \leq & 8(t+4)l_r \int_0^t \mathbb{e } [ \sup_{0 \leq r \leq s}|\bar{y}(r\wedge \sigma_r)-x(r\wedge \sigma_r)|^2 ] \mbox{d}s \nonumber \\ & & + 4(t+4)l_r \mathbb{e } \int_0^{t\wedge \sigma_r } \bar{y}(s)|^2 \mbox{d}s \nonumber \\ & & + 4(t+4)l_r \mathbb{e } \int_0^{t\wedge \sigma_r } |\tilde{y}^*(s)- \bar{y}(s-\tau(s))|^2 \mbox{d}s , \label{3.15}\end{aligned}\ ] ] where the fact was used that . by taking lemma [ lem1 ] into account ,we derive from ( [ 3.15 ] ) that , with suitable constants & \leq & 8(t+4)l_r \int_0^t \mathbb{e } [ \sup_{0 \leq r \leq s}|\bar{y}(r\wedge \sigma_r)-x(r\wedge \sigma_r)|^2 ] \mbox{d}s \nonumber \\ & & + 4(t+4)tl_rc_1(r)h + 4(t+4)tl_rc_2(r)h \nonumber \\ &= & \tilde{c}_r\int_0^t \mathbb{e } [ \sup_{0 \leq r \leq s}|e(r \wedge \sigma_r)|^2 ] \mbox{d}s + \bar{c}_rh.\end{aligned}\ ] ] hence continuous gronwall inequality gives the assertion ._ proof of theorem [ ssbemain ] ._ armed with lemma [ lem2 ] and assumption [ anmb ] , the result may be proved using a similar approach to that in ( * ? ? ?* theorem 2.2 ) and ( * ? ? ?* theorem 2.1 ) , where under the local lipschitz condition they showed the strong convergence of the em method for the sodes and sddes , respectively . 
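to fix ideas about the scheme analyzed above , the following sketch implements a single ssbe step for a scalar sdde . it assumes , consistently with the recursion displayed in the proof of lemma [ lem1 ] , that the implicit stage reads y*_n = y_n + h f(y*_n , y*_delayed ) and that the stochastic update reads y_{n+1} = y*_n + g(y*_n , y*_delayed ) dW_n , where y*_delayed is the stage value at the delayed time taken by piecewise constant interpolation ; the fixed - point solver , its tolerance and the dictionary - based history are implementation choices for illustration and are not prescribed by the paper .

```python
import numpy as np

def ssbe_step(f, g, h, y_n, y_star_hist, t_n, tau, dW, tol=1e-12, max_iter=100):
    """one split-step backward euler step for a scalar sdde.

    y_star_hist maps a grid index k to the stage value y*_k for k >= 0 and to
    the initial-function value psi(k*h) for k < 0 (the caller pre-fills both).
    the delayed argument is taken by piecewise constant interpolation, i.e.
    the stored value at the grid point just below t_n - tau(t_n).
    """
    k_delay = int(np.floor((t_n - tau(t_n)) / h))
    y_delayed = y_star_hist[k_delay]

    # solve the implicit stage y* = y_n + h * f(y*, y_delayed) by fixed-point
    # iteration; for h small enough relative to the lipschitz constant of f in
    # its first argument this converges, and a newton solve could be substituted.
    y_star = y_n
    for _ in range(max_iter):
        y_iter = y_n + h * f(y_star, y_delayed)
        if abs(y_iter - y_star) < tol:
            y_star = y_iter
            break
        y_star = y_iter

    # explicit stochastic update with the same delayed argument
    y_np1 = y_star + g(y_star, y_delayed) * dW
    return y_star, y_np1
```

for the linear test equation treated later , the implicit stage can be solved in closed form ; the sketch in the numerical section below exploits this .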
under the global lipschitz condition and linear growth condition ( cf ) , we can choose uniform constants , in previous lemma [ lem1],[lem2 ] to be independent of .accordingly we can recover the strong order of 1/2 by deriving \leq c h,\ ] ] where is independent of and .in this section , we will give some sufficient conditions on equations ( [ sddes1 ] ) to promise a unique global solution of sddes and a well - defined solution of the ssbe method .we make the following assumptions on the sddes .[ olc ] the functions are continuously differentiable in both and , and there exist constants , such that the inequalities ( [ olc1]),([olc2 ] ) indicate that the first argument of satisfies one - sided lipschitz condition and the second satisfies global lipschitz condition .it is worth noticing that conditions of the same type as ( [ olc1 ] ) and ( [ olc2 ] ) have been exploited successfully in the analysis of numerical methods for deterministic delay differential equations ( ddes)(see and references therein ) . as for sdes without delay ,the conditions ( [ olc1 ] ) and ( [ olc3 ] ) has been used in .we compute from ( [ olc1])-([olc3 ] ) that on choosing the constant as the following condition holds in what follows we always assume that for the initial data satisfies [ eu ] assume that assumption [ olc ] is fulfilled .then there exists a unique global solution to system ( [ sddes1 ] ) .morever , for any , there exists constant \leq c(1+\mathbb{e}\|\psi\|^p).\ ] ] _ proof ._ see the appendix .[ mblem ] assume that satisfy the condition ( [ mc ] ) and is sufficiently small , then for the following moment bounds hold \vee \mathbb{e}\left[\sup\limits_{0 \leq t \leq t}|\tilde{y}^*(t)|^{2p}\right ] \vee \mathbb{e}\left[\sup\limits_{0 \leq t \leq t}|\bar{y}(t)|^{2p}\right ] \leq a \label{mb}.\ ] ] _ proof ._ inserting ( [ ssbe2 ] ) into ( [ ssbe1 ] ) gives hence expanding it and employing ( [ mc ] ) yields by definition of , one obtains .taking this inequality into consideration and letting , we have from ( [ 3.1 ] ) that denoting , one computes that by recursive calculation , we obtain raising both sides to the power gives ^p + \alpha^p n^{p-1 } \sum_{j=0}^{n-1 } thus ^p + \alpha^p m^{p-1}\mathbb{e}\sum_{j=0}^{m-1}|g(y^*_j,\tilde{y}^*_j)\delta w_j|^{2p}\right \}.\label{y*n3}\end{aligned}\ ] ] here , where is the largest integer number such that .now , using the burkholder - davis - gundy inequality ( theorem 1.7.3 in ) gives ^p \leq c_p\mathbb{e}\left [ \sum_{j=0}^{m-1}|y^*_j|^2|g(y^*_j,\tilde{y}^*_j)|^2h\right]^{p/2 } \allowdisplaybreaks\nonumber\\ & & \leq c_p ( kh)^{p/2}m^{p/2 - 1}\mathbb{e}\left[\sum_{j=0}^{m-1 } & & \leq \frac{1}{2}c_pk^{p/2}t^{p/2 - 1}h \mathbb{e}\left[\sum_{j=0}^{m-1 } \left(|y^*_j|^{2p } + 3^{p-1}(1+|y^*_j|^{2p}+|\tilde{y}^*_j|^{2p})\right)\right ] .\label{estimate1}\end{aligned}\ ] ] noticing that inserting it into ( [ estimate1 ] ) , we can find out appropriate constants such that ^p \nonumber \\ & \leq & \bar{c}h \sum_{j=0}^{m-1}\mathbb{e}\max_{0 \leq i \leq j } |y_i^*|^{2p } + \bar{c}(\mathbb{e}\|\psi\|^{2p}+1 ) .\label{estimate2}\end{aligned}\ ] ] at the same time , noting the fact and is independent of , one can compute that , with a constant that may change line by line \allowdisplaybreaks \nonumber \\ & \leq & \hat{c}h^{p-1}(\mathbb{e}\|\psi\|^{2p}+1 ) + \hat{c}h^p \sum_{j=0}^{m-1}\mathbb{e}\max_{0 \leq i \leq j } |y^*_i|^{2p}. 
\label{estimate3}\end{aligned}\ ] ] by definition ( [ ssbe1 ] ) , one sees that then using a similar approach used before , we can find out a constant to ensure that inserting ( [ estimate2]),([estimate3 ] ) into ( [ y*n3 ] ) and considering ( [ y*0 ] ) and , we have , with suitable constants thus using the discrete - type gronwall inequality , we derive from ( [ mbend ] ) that ] is immediate . to bound ] . from ( [ ssbe2 ] ), we have & \leq & 2^{2p-1}\left\ { \mathbb{e}\left[\sup_{0 \leq nh \leq t}|y^*_n|^{2p}\right ] + \mathbb{e}\left[\sup_{0 \leq nh \leq t}|g(y^*_n,\tilde{y}_n^*)\delta w_n|^{2p}\right]\right\ } \nonumber \\ & \leq & 2^{2p-1}\left\ { \mathbb{e}\left[\sup_{0 \leq nh \leq t}|y^*_n|^{2p}\right ] + \mathbb{e}\sum_{j=0}^{n}|g(y^*_j,\tilde{y}^*_j)\delta w_j|^{2p}\right\}. \nonumber\end{aligned}\ ] ] now ( [ estimate3 ] ) and bound of ] . to bound ] , ] follows immediately .[ lem6 ] under assumption [ olc ] , if , the implicit equation in ( [ ssbe1 ] ) admits a unique solution ._ let , then the implicit equation ( [ ssbe1 ] ) takes the form as where at each step , are known .observing that the assertion follows immediately from theorem 14.2 of . under assumption[ init],[olc ] , if , then the numerical solution produced by ( [ ssbe1])-([ssbe2 ] ) is well - defined and will converge to the true solution in the mean - square sense , i.e. , _ proof ._ noticing that assumption [ olc ] implies assumptions [ lcmc],[anmb ] by theorem [ eu ] and lemma [ mblem ] , and taking lemma [ lem6 ] into consideration , the result follows directly from theorem [ ssbemain ] .we remark that the problem class satisfying condition ( [ initial2 ] ) includes plenty of important models .in particular , stochastic pantograph differential equations ( see , e.g. , ) with and sddes with constant lag fall into this class and therefore corresponding convergence results follow immediately .in this section , we will investigate how ssbe shares exponential mean - square stability of general nonlinear systems . in deterministic case , nonlinear stability analysis of numerical methodsare carried on under a one - sided lipschitz condition .this phenomenon has been well studied in the deterministic case ( and references therein ) and stochastic case without delay . in what follows, we choose the test problem satisfying conditions ( [ olc1])-([olc3 ] ) .moreover , we assume that variable delay is bounded , that is , there exists , for we remark that this assumption does not impose additional restrictions on the stepsize and admits arbitrary large on choosing and close to 1 . to begin with, we shall first give a sufficient condition for exponential mean - square stability of analytical solution to underlying problem .[ ems1 ] under the conditions ( [ olc1]),([olc2]),([olc3 ] ) and ( [ bd ] ) , and with obeying any two solutions and with and satisfy where $ ] is the zero of , with ._ by it formula , we have } \mathbb{e}|x(r)-y(r)|^2 \mbox{d}s.\end{aligned}\ ] ] letting and noticing that exists for and is continuous , we derive from ( [ 5.12 ] ) that }u(s),\ ] ] where the upper dini derivative is defined as using theorem 7 in leads to the desired result .based on this stability result , we are going to investigate stability of the numerical method .[ ems3 ] under the conditions ( [ olc1]),([olc2]),([olc3 ] ) and ( [ bd ] ) , if , then for all , any two solutions produced by ssbe ( [ ssbe1])-([ssbe2 ] ) with and satisfy where is defined as _ proof ._ under , the first part is an immediate result from lemma [ lem6 ] . 
for the second part , in order to state conveniently , we introduce some notations from ( [ y*n2 ] ) , we have thus taking expectation and using ( [ olc3 ] ) yields now using the cauchy - schwarz inequality and conditions ( [ olc1])-([olc2 ] ) , we have inserting it into ( [ 5.3 ] ) gives here we have to consider which approach is chosen to treat memory values on non - grid points , piecewise constant interpolation ( ) or piecewise linear interpolation . in the latter case ,let us consider two possible cases : if , then inserting ( [ 5.10 ] ) , we derive from ( [ 5.4 ] ) that \mathbb{e}|x_n^*-y_n^*|^2 \\& \leq(1 + h\gamma_3+\tilde{\mu } h \gamma_2)\mathbb{e}|x_{n-1}^*-y_{n-1}^*|^2 + h\gamma_4 \mathbb{e}|\tilde{x}_{n-1}^*-\tilde{y}_{n-1}^*|^2 . \end{split}\ ] ] hence using the fact in ( [ beta ] ) gives if , it follows from ( [ 5.4 ] ) and that therefore , it is always true that inequality ( [ 5.6 ] ) holds for piecewise linear interpolation case .obviously ( [ 5.6 ] ) also stands in piecewise constant interpolation case .further , from ( [ ssbe1 ] ) one sees using a similar approach as before , one can derive denote noticing that , one can readily derive , we can deduce from ( [ 5.6 ] ) and ( [ 5.11 ] ) that here denotes the greatest integer less than or equal to .+ finally from ( [ ssbe2 ] ) , we have for large such that where is defined as in ( [ nuh2 ] ) .the stability result indicates that the method ( [ ssbe1])-([ssbe2 ] ) can well reproduce long - time stability of the continuous system satisfying conditions stated in theorem [ ems1 ] .note that the exponential mean - square stability under non - global lipschitz conditions has been studied in in the case of nonlinear sdes without delay .the preceding results can be regarded as an extension of those in to delay case .although the main focus of this work is on nonlinear sddes , in this section we show that the ssbe ( [ ssbe1])-([ssbe2 ] ) has a very desirable linear stability property .hence , we consider the scalar , linear test equation given by note that ( [ lineartest ] ) is a special case of ( [ sddes1 ] ) with , and satisfies conditions ( [ olc1])-([olc3 ] ) with by theorem [ ems1 ] , ( [ lineartest ] ) is mean - square stable if for constraint stepsize , i.e. , in ( [ bd ] ) , the ssbe proposed in our work applied to ( [ lineartest ] ) produces , \\ y_{n+1 } & = y_n^ * + [ cy_n^ * + d y_{n-\kappa}^*]\delta w_n .\end{array } \right.\end{aligned}\ ] ] in , the authors constructed a different ssbe for the linear test equation ( [ lineartest ] ) and their method applied to ( [ lineartest ] ) reads , \\z_{n+1 } & = z_n^ * + [ cz_n^ * +d z_{n-\kappa+1}]\delta w_n .\end{array } \right.\end{aligned}\ ] ] the stability results there ( * ? ? ?* theorem 4.1 ) indicate that under ( [ linearms ] ) the method ( [ ssbez ] ) can only preserve mean - square stability of ( [ lineartest ] ) with stepsize restrictions , but the new scheme ( [ ssbew ] ) exhibits a better stability property . for the linear equation ( [ lineartest ] ), if ( [ linearms ] ) holds , then the ssbe ( [ ssbew ] ) is mean - square stable for any stepsize ._ the assertion readily follows from theorem [ ems3 ] .apparently , the ssbe ( [ ssbew ] ) achieves an advantage over ( [ ssbez ] ) in stability property that the ssbe ( [ ssbew ] ) is able to inherit stability of ( [ lineartest ] ) for any stepsize .if one drops the stepsize restriction and allow for arbitrary stepsize , one can arrive at a sharper stability result from theorem [ ems3 ] . 
for the linear equation ( [ lineartest ] ) , if ( [ linearms ] ) holds , then the ssbe ( [ ssbe1])-([ssbe2 ] ) is mean - square stable for any stepsize .

in this section we give several numerical examples to illustrate the strong convergence and the mean - square stability obtained in previous sections . the first test equation is a linear itô sdde of the form ( [ lssde ] ) . denoting by the numerical approximation to the solution at the end point in each of the simulated paths , we approximate the mean of absolute errors by the sample average over all paths . in our experiments , we use the ssbe ( [ ssbew ] ) to compute an `` exact solution '' with a small stepsize . we choose several sets of parameters , referred to below as example i , example ii and example iii .

figure [ 1 ] : computational errors versus stepsize for example i ( left ) and example ii ( right ) .

table [ table1 ] : mean absolute errors at the end point for example ii and example iii , for five stepsizes ( one per row ) .

              example ii                                            example iii
              em        ssbe ( [ ssbez ] )   ssbe ( [ ssbew ] )     em            ssbe ( [ ssbez ] )   ssbe ( [ ssbew ] )
              0.0008    0.0011               0.0008                 0.0014        0.0020               0.0014
              0.0013    0.0016               0.0013                 0.0025        0.0036               0.0023
              0.0021    0.0029               0.0019                 0.0058        0.0070               0.0035
              0.0034    0.0058               0.0027                 0.2744        0.0157               0.0053
              0.0086    0.0148               0.0038                 6.1598e+010   0.0628               0.0078

in figure [ 1 ] , computational errors versus stepsize are plotted on a log - log scale and dashed lines of slope one half are added . one can clearly see that the ssbe ( [ ssbew ] ) for the linear test equation ( [ lssde ] ) is convergent and has strong order 1/2 . in table [ table1 ] , computational errors are presented for the well - known euler - maruyama method , the ssbe method ( [ ssbez ] ) and the improved ssbe method ( [ ssbew ] ) of this paper . there one can find that the improved ssbe method ( [ ssbew ] ) has the best accuracy among the three methods . in particular , for example iii , which has a stiff drift term , the euler - maruyama method becomes unstable when a moderate stepsize is used , while the two ssbe methods remain stable , with the improved ssbe ( [ ssbew ] ) producing the better result . to compare the stability properties of the improved ssbe and the ssbe of , simulations by ssbe ( [ ssbew ] ) and ( [ ssbez ] ) are both depicted in figures [ 2 ] and [ 3 ] . there the solutions produced by ( [ ssbew ] ) and ( [ ssbez ] ) are plotted as solid and dashed lines , respectively . as shown in the figures , the methods ( [ ssbew ] ) and ( [ ssbez ] ) exhibit different stability behavior . one can observe from figure [ 2 ] that ( [ ssbew ] ) for example ii is mean - square stable for every stepsize tested , whereas ( [ ssbez ] ) is unstable for the larger stepsizes . for example iii , the improved ssbe ( [ ssbew ] ) is always stable , while ( [ ssbez ] ) becomes stable only when the stepsize is decreased sufficiently . the numerical results demonstrate that the scheme ( [ ssbew ] ) has a clear advantage in mean - square stability over ( [ ssbez ] ) .
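as a complement to the tabulated results , the following sketch simulates the proposed scheme ( [ ssbew ] ) for the linear test equation with constant lag and reports the sample mean - square size of the solution at the end point for several stepsizes . the implicit stage is assumed to read y*_n = y_n + h ( a y*_n + b y*_{n-kappa} ) , which is consistent with the explicit update displayed for ( [ ssbew ] ) above and can be solved in closed form ; the coefficients , the lag , the horizon and the number of sample paths are illustrative assumptions and are not the ( unreproduced ) parameter values of examples i - iii .

```python
import numpy as np

def ssbe_linear(a, b, c, d, tau, h, T, x0=1.0, n_paths=2000, seed=0):
    """split-step backward euler for dx = (a x + b x_tau) dt + (c x + d x_tau) dw
    with constant lag tau = kappa * h and constant initial data x0 on [-tau, 0];
    returns the end-point values of all simulated paths."""
    rng = np.random.default_rng(seed)
    kappa = int(round(tau / h))
    n_steps = int(round(T / h))
    # column j of the stage history holds y*_{j - kappa}; the first kappa
    # columns carry the (constant) initial data
    y_star = np.full((n_paths, n_steps + kappa + 1), float(x0))
    y = np.full(n_paths, float(x0))
    for n in range(n_steps):
        y_delayed = y_star[:, n]                            # this is y*_{n - kappa}
        stage = (y + h * b * y_delayed) / (1.0 - h * a)     # implicit stage, closed form
        y_star[:, n + kappa] = stage
        dW = rng.normal(0.0, np.sqrt(h), n_paths)
        y = stage + (c * stage + d * y_delayed) * dW        # update as in ( [ ssbew ] )
    return y

# illustrative (assumed) coefficients with a strongly dissipative drift
a, b, c, d, tau = -4.0, 1.0, 0.5, 0.5, 1.0
for h in (1.0, 0.5, 0.1):
    xT = ssbe_linear(a, b, c, d, tau, h, T=4.0)
    print(f"h = {h:4.2f}   sample E|x(T)|^2 ~ {np.mean(xT**2):.3e}")
```

repeating the run with a much smaller stepsize as a reference solution and plotting the end - point errors against h on a log - log scale is how dashed slope - one - half lines as in figure [ 1 ] would be reproduced .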
figures [ 2 ] and [ 3 ] : simulations of example ii and example iii by ssbe ( [ ssbew ] ) and ssbe ( [ ssbez ] ) with four different stepsizes ( upper left , upper right , lower left , lower right panels ) .

consider a nonlinear sdde with a time - varying delay of the form ( [ nlsdde ] ) , with diffusion term \left[x(t)+x(t-\tau(t))\right]\,dw(t) for t > 0 and initial data x(t ) = 1 for t \in [ -1,0 ] . obviously , equation ( [ nlsdde ] ) satisfies conditions ( [ olc1])-([olc3 ] ) in assumption [ olc ] , and the problem is exponentially mean - square stable . as is shown in figure [ 4 ] , the ssbe ( [ ssbew ] ) can well reproduce this stability even for quite large stepsizes . this is consistent with our result established in theorem [ ems3 ] .

figure [ 4 ] : simulation of ( [ nlsdde ] ) by ssbe ( [ ssbew ] ) using various stepsizes .

_proof of theorem [ eu ] ._ since both and are locally lipschitz continuous , theorem 3.2.2 of shows that there is a unique maximal local solution on , where the stopping time . by itô 's formula we obtain that for , where the condition ( [ mc ] ) was used . thus , raising both sides of ( [ ito2 ] ) to the power and using hölder 's inequality yields an estimate ; by the burkholder - davis - gundy inequality , one then computes that , with ,

& & \leq c_1\left\{1+\mathbb{e}\|\psi\|^p + \int_0^t \mathbb{e}\sup_{0 \leq r \leq s}|x(r\wedge\rho_r)|^p\mbox{d}s\right . \nonumber \\ & & \left .+ \mathbb{e}\left[\int_0^{t\wedge\rho_r } \label{ito5}\end{aligned}\ ] ]

next , by an elementary inequality ,

+ \frac{c_1}{2}t^{p/2 - 1}\mathbb{e}\int_0^{t\wedge\rho_r}|g(x(s),x(s-\tau(s)))|^p \mbox{d}s \nonumber \\ & \leq & \frac{1}{2c_1}\mathbb{e}\left[\sup\limits_{0 \leq s \leq t}|x(s\wedge\rho_r)|^{p}\right]+\frac{c_1}{2}(3t)^{p/2 - 1}k^{p/2}\int_0^t ( 1 + \mathbb{e}\sup_{0 \leq r \leq s}|x(r\wedge\rho_r)|^p+\mathbb{e}\|\psi\|^p ) \mbox{d}s . \nonumber\end{aligned}\ ] ]

inserting it into ( [ ito5 ] ) , for proper constants the gronwall inequality gives the required bound . this implies , on letting the index of the stopping time tend to infinity and using the arbitrariness of the time horizon , that a.s . and hence a.s . ; the existence and uniqueness of the global solution is justified . finally , the desired moment bound follows from ( [ ito6 ] ) by letting and setting .

c. t. h. baker and e. buckwar , exponential stability in pth mean of solutions , and of convergent euler - type solutions , to stochastic delay differential equations , j. comput . appl . math . 184 ( 2 ) ( 2005 ) , pp . 404 - 427 .
k. burrage , p. m. burrage and t. tian , numerical methods for strong solutions of stochastic differential equations : an overview , proc . r. soc . lond . a 460 ( 2004 ) , pp . 373 - 402 .
y. hu , semi - implicit euler - maruyama scheme for stiff stochastic equations , in stochastic analysis and related topics v : the silivri workshop , progr . probab . 38 , h. koerezlioglu , ed . ,
, birkhauser , boston , 1996 , pp. 183 - 202 . a. jentzen , p. e. kloeden , a. neuenkirch , pathwise approximation of stochastic differential equations on domains : higher order convergence rates without global lipschitz coefficients , numer . math . , 112 , 1 ( 2009 ) , pp. 41 - 64 .
a new , improved split - step backward euler ( ssbe ) method is introduced and analyzed for stochastic differential delay equations ( sddes ) with generic variable delay . the method is proved to be convergent in the mean - square sense under the conditions ( assumption [ olc ] ) that the diffusion coefficient is globally lipschitz in both the non - delayed and the delayed arguments , while the drift coefficient satisfies a one - sided lipschitz condition in the non - delayed argument and is globally lipschitz in the delayed one . further , exponential mean - square stability of the proposed method is investigated for sddes that have a negative one - sided lipschitz constant . our results show that the method is unconditionally stable in the sense that it can reproduce the stability of the underlying system without any restriction on the stepsize . numerical experiments and comparisons with existing methods for sddes illustrate the computational efficiency of our method . + * ams subject classification : * 60h35 , 65c20 , 65l20 . + * key words : * split - step backward euler method , strong convergence , one - sided lipschitz condition , exponential mean - square stability , mean - square linear stability
during past few years research in areas of wireless ad - hoc networks and wireless sensor networks ( wsns ) are escalated .ieee 802.15.4 is targeted for wireless body area networks ( wbans ) , which requires low power and low data rate applications .invasive computing is term used to describe future of computing and communications [ 1 - 3 ] . due to these concepts , personal and business domainsare being densely populated with sensors .one area of increasing interest is the adaptation of technology to operate in and around human body .many other potential applications like medical sensing control , wearable computing and location identification are based on wireless body area networks ( wbans ) .main aim of ieee 802.15.4 standard is to provide a low - cost , low power and reliable protocol for wireless monitoring of patient s health .this standard defines physical layer and mac sub layer .three distinct frequencies bands are supported in this standard . however , 2.4 ghz band is more important .this frequency range is same as ieee 802.11b / g and bluetooth .ieee 802.15.4 network supports two types of topologies , star topology and peer to peer topology .standard supports two modes of operation , beacon enabled ( slotted ) and non - beacon enabled ( unslotted ) .medium access control ( mac ) protocols play an important role in overall performance of a network . in broad ,they are defined in two categories contention - based and schedule - based mac protocols . in contention - based protocols like carrier sense multiple access with collision avoidance ( csma / ca ) ,each node content to access the medium .if node finds medium busy , it reschedules transmission until medium is free . in schedule - based protocols like time division multiple access ( tdma ) , each node transmits data in its pre - allocated time slot .this paper focuses on analysis of ieee 802.15.4 standard with non - beacon enabled mode configure in a star topology .we also consider that sensor nodes are using csma / ca protocol . to access channel data .in literature , beacon enabled mode is used with slotted csma / ca for different network settings . in [ 1 ] , performance analysis of ieee 802.15.4 low power and low data rate wireless standard in wbans is done . 
authors consider a star topology at 2.4 ghz with up to 10 body implanted sensors .long - term power consumption of devices is the main aim of their analysis .however , authors do not analyze their study for different data rates .an analytical model for non - beacon enabled mode of ieee 802.15.4 medium access control ( mac ) protocol is provided in [ 2 ] .nodes use un - slotted csma / ca operation for channel access and packet transmission .two main variables that are needed for channel access algorithm are back - off exponent ( be ) and number of back - offs ( nb ) .authors perform mathematical modeling for the evaluation statistical distribution of traffic generated by nodes .this mathematical model allows evaluating an optimum size packet so that success probability of transmission is maximize .however , authors do not analyze different mac parameters with varying data rates .authors carry out an extensive analysis based on simulations and real measurements to investigate the unreliability in ieee 802.15.4 standard in [ 3 ] .authors find out that , with an appropriate parameter setting , it is possible to achieve desired level of reliability .unreliability in mac protocol is the basic aspect for evaluation of reliability for a sensor network .an extensive simulation analysis of csma / ca algorithm is performed by authors to regulate the channel access mechanism .a set of measurements on a real test bed is used to validate simulation results .a traffic - adaptive mac protocol ( tamac ) is introduced by using traffic information of sensor nodes in [ 4 ] .tamac protocol is supported by a wakeup radio , which is used to support emergency and on - demand events in a reliable manner .authors compare tamac with beacon - enabled ieee 802.15.4 mac , wireless sensor mac ( wisemac ) , and sensor mac ( smac ) protocols .important requirements for the design of a low - power mac protocol for wbans are discussed in [ 5 ] .authors present an overview to heartbeat driven mac ( h - mac ) , reservation - based dynamic tdma ( dtdma ) , preamble - based tdma ( pb - tdma ) , and body mac protocols , with focusing on their strengths and weaknesses .authors analyze different power efficient mechanism in context of wbans . atthe end authors propose a novel low - power mac protocol based on tdma to satisfy traffic heterogeneity .authors in [ 6 ] , examine use of ieee 802.15.4 standard in ecg monitoring and study the effects of csma / ca mechanism .they analyze performance of network in terms of transmission delay , end - to - end delay , and packet delivery rate . for time critical applications , a payload size between 40 and 60 bytes is selected due to lower end - to - end delay and acceptable packet delivery rate . in [ 7 ] , authors state that ieee 802.15.4 standard is designed as a low power and low data rate protocol with high reliability .they analyze unslotted version of protocol with maximum throughput and minimum delay .the main purpose of ieee 802.15.4 standard is to provide low power , low cost and highly reliable protocol .physical layer specifies three different frequency ranges , 2.4 ghz band with 16 channels , 915 mhz with 10 channels and 868 mhz with 1 channel .calculations are done by considering only beacon enabled mode and with only one sender and receiver. however , it consumes high power . 
as number of sender increases , efficiency of 802.15.4 decreases .throughput of 802.15.4 declines and delay increases when multiple radios are used because of increase in number of collisions .a lot of work is done to improve the performance of ieee 802.15.4 and many improvements are made in improving this standard , where very little work is done to find out performance of this standard by varying data rates and also considering acknowledgement ( ack ) and no ack condition and how it affects delay , throughput , end - to - end delay and load .we get motivation to find out the performance of this standard with parameters load , throughput , delay and end to end delay at varying data rates .ieee 802.15.4 is proposed as standard for low data rate , low power wireless personal area networks ( wpans ) [ 1],[2 ] . in wpans ,end nodes are connected to a central node called coordinator .management , in - network processing and coordination are some of key operations performed by coordinator .the super - frame structure in beacon enabled mode is divided into active and inactive period .active period is subdivided into three portions ; a beacon , contention access period ( cap ) and contention free period ( cfp ) . in cfp , end nodes communicate with central node ( coordinator ) in dedicated time slots .however , cap uses slotted csma / ca . in non - beacon enabled mode , ieee 802.15.4 uses unslotted csma / ca with clear channel assessment ( cca ) for channel access . in [ 2 ] , ieee 802.15.4 mac protocol non - beacon enabled mode is used .nodes use un - slotted csma / ca operation for channel access and packet transmission .two main variables that are needed for channel access algorithm are back off exponent ( be ) and number of back offs ( nb ) .nb is the number of times csma / ca algorithm was required to back off while attempting channel access and be is related to how many back off periods , node must wait before attempting channel access .operation of csma / ca algorithm is defined in steps below : nb and be initialization : first , nb and be are initialized , nb is initialized to 0 and be to macminbe which is by default equal to 3 .+ random delay for collision avoidance : to avoid collision algorithm waits for a random amount of time randomly generated in range of , one back off unit period is equal to with + clear channel assessment : after this delay channel is sensed for the unit of time also called cca .if the channel is sensed to be busy , algorithm goes to step 4 if channel is idle algorithm goes to step 5 .+ busy channel : if channel is sensed busy then mac sub layer will increment the values of be and nb , by checking that be is not larger than .if value of nb is less than or equal to , then csma / ca algorithm will move to step 2 .if value of nb is greater than , then csma / ca algorithm will move to step 5 `` packet drop '' , that shows the node does not succeed to access the channel .+ idle channel : if channel is sensed to be idle then algorithm will move to step 4 that is `` packet sent '' , and data transmission will immediately start . fig .1 illustrates aforementioned steps of csma / ca algorithm , starting with node has some data to send .csma / ca is a modification of carrier sense multiple access ( csma ) .collision avoidance is used to enhance performance of csma by not allowing node to send data if other nodes are transmitting . 
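the control flow of the unslotted csma/ca procedure just described can be condensed into a short sketch. the constants follow the defaults quoted in the text (macminbe = 3, a maximum backoff exponent of 5, at most 4 backoff attempts); the cca outcome is abstracted into a caller-supplied predicate, so this illustrates the channel-access logic rather than a complete 802.15.4 mac model.

```python
import random

# Illustrative sketch of unslotted CSMA/CA channel access (non-beacon-enabled mode).
MAC_MIN_BE = 3                     # initial backoff exponent
A_MAX_BE = 5                       # cap on the backoff exponent
MAC_MAX_CSMA_BACKOFFS = 4          # maximum number of backoff attempts before dropping
UNIT_BACKOFF_PERIOD = 20 * 16e-6   # assumed: 20 symbols at 16 us/symbol (2.4 GHz PHY)

def unslotted_csma_ca(channel_is_idle):
    """Return ('sent', waited_s) or ('dropped', waited_s); channel_is_idle models the CCA."""
    nb, be = 0, MAC_MIN_BE
    waited = 0.0
    while True:
        # random delay for collision avoidance: 0 .. 2^BE - 1 unit backoff periods
        waited += random.randint(0, 2 ** be - 1) * UNIT_BACKOFF_PERIOD
        # clear channel assessment
        if channel_is_idle():
            return "sent", waited            # idle channel: transmit immediately
        # busy channel: increment counters, retry or drop
        nb += 1
        be = min(be + 1, A_MAX_BE)
        if nb > MAC_MAX_CSMA_BACKOFFS:
            return "dropped", waited

if __name__ == "__main__":
    # toy usage: the medium is found busy with probability 0.6 at each CCA
    outcomes = [unslotted_csma_ca(lambda: random.random() > 0.6)[0] for _ in range(10000)]
    print("drop rate:", outcomes.count("dropped") / len(outcomes))
```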
in normal csma nodessense the medium if they find it free , then they transmits the packet without noticing that another node is already sending packet , this results in collision .csma / ca results in reduction of collision probability .it works with principle of node sensing medium , if it finds medium to be free , then it sends packet to receiver .if medium is busy then node goes to backoff time slot for a random period of time and wait for medium to get free . with improved csma / ca ,request to send ( rts)/clear to send ( cts ) exchange technique , node sends rts to receiver after sensing the medium and finding it free . after sending rts , node waits for cts message from receiver .after message is received , it starts transmission of data , if node does not receive cts message then it goes to backoff time and wait for medium to get free .csma / ca is a layer 2 access method , used in 802.11 wireless local area network ( wlan ) and other wireless communication .one of the problems with wireless data communication is that it is not possible to listen while sending , therefore collision detection is not possible .csma / ca is largely based on the modulation technique of transmitting between nodes .csma / ca is combined with direct sequence spread spectrum ( dsss ) which helps in improvement of throughput .when network load becomes very heavy then frequency hopping spread spectrum ( fhss ) is used in congestion with csma / ca for higher throughput , however , when using fhss and dsss with csma / ca in real time applications then throughput remains considerably same for both .2 shows the timing diagram of csma / ca . data transmission time , backoff slots time , acknowledgement time are given by equation 2 , 3 , and 4 respectively[2 ] .+ [ cols="^,^,^,^,^,^,^,^,^,^,^,^,^ " , ] [ tab : addlabel ] the following notations are used : + + + + + + + + + + + + + in csma / ca mechanism , packet may loss due to collision .collision occurs when two or more nodes transmits the data at the same time .if ack time is not taken in to account then there will be no retransmission of packet and it will be considered that each packet has been delivered successfully .the probability of end device successfully transmitting a packet is modeled as follows[3 ] . where , is the number of end devices that are connected to router or coordinator .be is the backoff exponent in our case it is 3 . is the probability of transmission success at a slot . is the probability of end device successfully allocated a wireless channel .general formula for is given by equation 8 .probability of time delay caused by csma / ca backoff exponent is estimated as in [ 7 ] .maximum number of backoff is 4 .value of be=3 has been used in following estimation and we estimate by applying summation from 3 to 5 . is the probability of time delay event . expectation of the time delay is obtained as from [ 7 ] . and are taken from equations 7 and 8 respectively .=p({e_{a}|e_{b}})\nonumber\\ \nonumber\\ = \frac{\sum_{n=0}^{7 } n\frac{1}{2_{be}}p + \sum _ { n=8}^{15 } n\frac{1}{2_{be}}p + \sum _ { n=16}^{31 } n\frac{1}{2_{be}}p}{\sum_{n=0}^{2^{be-1 } } n\frac{1}{2_{be}}p{1-p}^{be-2}}\end{aligned}\ ] ] statistical data of throughput , load , end - to - end delay and delay of ieee 802.15.4 at varying data rates is shown in table i. 
it shows different values of delay , throughput , end - to - end delay and load recorded at different time .load at all data rates and at all time intervals remains same .start time for simulation is kept at 0 seconds and stop time is kept to infinity .load in all three data rates at different time intervals remains same as shown in table i. there is very small difference between delay and end - to - end delay .at 20 kbps maximum delay of 2145 seconds is recorded with maximum throughput of 4352 bits / sec at 60 min . at 40kbps maximum delay of 380 seconds and minimum delay 2.5 secondsis recorded .throughput of 8805 ( bits / sec ) is the highest throughput recorded on 60 min . in case of 250 kbps delayremains very small , near to negligible where as throughput matches load with 10388 ( bits / sec ) .beacon order & 6 + superframe order & 0 + maximum routers & 5 + maximum depth & 5 + beacon enabled network & disabled + route discovery time & 10(sec ) + network parameter are given in table ii .non beacon mode is selected in our analysis and beacon order is kept at 6 . due to non - beacon enabled mode superframe order is not selected .maximum routers or nodes that can take part in simulation is 5 , each having tree depth of 5 .discovery time that is needed by each router to discover route is 10 sec .minimum backoff exponent & 3 + maximum number of backoff & 5 + channel sensing duration & 0.1 ( sec ) + data rates & 20 , 40 , 250 kbps + packet reception power & -85 ( dbm ) + transmission band & 2.4 ( mhz ) + packet size & 114 bytes + packet interarrival time & 0.045(sec ) + transmit power & 0.05 ( w ) + ack wait duration & 0.05 ( sec ) + number of retransmissions & 5 +simulation parameters of 802.15.4 with its value are shown in table iii .minimum be is kept at 3 with maximum no .of back - off to 5 .default settings of 802.15.4 are used in this simulation .packet reception power is kept at -85 dbm with transmitting power of 0.5 watt(w ) . in ackenabled case , ack wait duration is kept at 0.05 sec with no of retransmissions to 5 . in no ack casethese parameters are disabled .114 bytes is the packet size with interarrival time of 0.045 sec .transmission band used in this simulation is 2.4 ghz .simulations have been performed at varying data rates of 20 , 40 , 250 kbps .simulations for both ack and non ack cases have also been performed .opnet modeler is the simulator used for simulations .simulations are executed for one hour with update interval of 5000 events .graphs are presented in overlaid statistics . overlaid means that , graphs of each scenario has been combined with each other .data of graphs are averaged over time for better results . personal area network identity ( pan i d )is kept at default settings , coordinator automatically assigns pan i d to different personal area networks if they are attached .we consider non beacon mode for our simulations .using non - beacon enabled mode improves the performance and changing different parameters affects performance of 802.15.4 .csma / ca values are kept to default with minimum backoff exponent to 3 and having maximum backoff of 5 .changing these parameters does not affect its performance .we perform simulations with ack and non ack . in non ackthere is only delay due to node waiting while sensing medium , there is no delay due to ack colliding with packets . 
in ack casethere is collision for packets going towards receiver and ack packet coming from receiver at same time .delay in ack is more as compare to non ack case .we use standard structure of ieee 802.15.4 with parameters shown in table ii . in this section ,performance of default mac parameters of ieee 802.15.4 standard non beacon enabled mode .simulations are performed considering 10 sensor nodes environment with coordinator collecting data from all nodes .fig 3 , 4 , 5 and 6 show graphical representation of performance parameters of 802.15.4 .delay represents the end - to - end delay of all the packets received by 802.15.4 macs of all wpans nodes in the network and forwarded to the higher layer .load represents the total load in ( bits / sec ) submitted to 802.15.4 mac by all higher layers in all wpans nodes of the network .load remains same for all the data rates .throughput represents the total number of bits in ( bits / sec ) forwarded from 802.15.4 mac to higher layers in all wpans nodes to the network .end - to - end delay is the total delay between creation and reception of an application packet .delay , load and throughput are plotted as function of time .as load is increasing , there is increase in throughput and delay .when load becomes constant , throughput also becomes constant , however , delay keeps on increasing .delay in 802.15.4 occurs due to collision of data packets or sometimes nodes keeps on sensing channel and does not find it free .when node senses medium and find it free , it sends packet . at same time, some other nodes are also sensing the medium and find it free , they also send data packets and thus results in collision .collision also occurs due to node sending data packet and at same time coordinator sending ack of successfully receiving packet and causing collision . when ack is disabledthis type of collision will not occur .delay , throughput and load is analyzed at 40 kbps in fig 4 .with increase in load , there is increase in throughput and delay , however , it is less as compared to 20kbps , this is due to increase in data rate of 802.15.4 .increase in bit transfer rate from 20 to 40kbps causes decrease in delay and hence increases throughput .fig 5 shows behavior of 802.15.4 load , throughput and delay at 250kbps data rate .delay is negligible at this data rate , with throughput and load showing same behavior .delay approaching zero shows that , at 250 kbps data rate there are less chances of collision or channel access failure .ieee 802.15.4 performs best at this data rate compared to 20 and 40 kbps . at same timeend - to - end delay of ieee 802.15.4 at varying data rates of 20 , 40 and 250 kbps are shown in fig 6 .this figure shows that end to end delay for 20 kbps data rate is higher than 40 kbps and 250 kbps .minimum end - to - end delay is found at 250 kbps data rate . 
at 250 kbps, more data can pass at same time with less collision probability hence having minimum delay and at 20 kbps , less data transfers at same time causing more end to end delay .statistical data of end - to - end delay is shown in table i , which shows end to end variation with change in time .fig 7 shows the delay , throughput , load and end - to - end delay of ieee 802.11.4 at 20 kbps data rate with and without ack .load remains same in both cases .there is no collision because of ack packets due to which packets once send are not sent again .there is decrease in delay and increase in ack due to less collision .end - to - end delay performs same as delay .ieee 802.15.4 performs better with non ack other than ack due to decrease in collision probability in no ack compared to ack case .delay , throughput , load and end - to - end delay with and without ack at 40 kbps are presented in fig 8 .there is considerable difference between the analysis in ack and without ack case .delay is reduced to negligible at low value of in no ack case due to reason that , at this data rate there is no collision therefor , delay is nearly zero . as there is no collision and channel sensing time is also low , this increase throughput and load in non ack case , as compared to ack .9 shows analysis with ack and no ack cases of delay , throughput , load and end to end delay at 250 kbps , at this high data rates load and throughput in both cases becomes equal to each other and data is sent in first instant to coordinator by nodes .delay in both cases nearly equal to zero , which shows that , there is very less collision at this high data rates and channel sensing time is also very low .end to end delay slightly differs from delay in no ack case .in this paper , performance of ieee 802.15.4 standard with non - beacon enabled mode is analyzed at varying data rates .we have evaluated this standard in terms of load , delay , throughput and end - to - end delay with different mac parameters .we have also analyzed performance with ack enabled mode and no ack mode .we considered a full size mac packet with payload size of 114 bytes for data rates 20 kbps , 40 kbps and 250 kbps .it is shown that better performance in terms of throughput , delay , and end - to - end delay is achieved at higher data rate of 250kbps .ieee 802.15.4 performs worse at low data rates of 20kbps .performance of this standard improves with increase in data rate . in future research work, we will investigate the performance of ieee 802.15.4 in wbans by changing frequency bands on different data rates .we also intend to examine the effect of changing inner structure of mac layer in ieee 802.15.4 .1 f. timmons , n . ,`` analysis of the performance of ieee 802.15.4 for medical sensor body area networking '' , sensor and ad hoc communications and networks , 2004 . c. buratti and r. verdone ., `` performance analysis of ieee 802.15.4 non beacon - enabled mode '' , ieee transaction on vehicular technology , vol .58 , no . 7 ,september 2009 .anastasi , g ., `` the mac unreliability problem in ieee 802.15.4 wireless sensor networks '' , mswim09 proceedings of the 12th acm international conference on modeling , analysis and simulation of wireless and mobile systems , october 2009 .s. ullah , k. s. kwak ., `` an ultra - low power and traffic - adaptive medium access control protocol for wireless body area network '' , j med syst , doi 10.1007/s10916 - 010 - 9564 - 2 .s. ullah , b.shen , s.m.r .islam , p. khan , s. 
saleem and k.s .kwak ., `` a study of medium access control protocols for wireless body area networks '' .x. liang and i. balasingham . ,`` performance analysis of the ieee 802.15.4 based ecg monitoring network ''. b. latre , p.d .mil , i. moerman , b. dhoedt and p. demeester ., `` throughput and delay analysis of unslotted ieee 802.15.4 '' , journal of networks , vol . 1 , no. 1 , may 2006 .
the ieee 802.15.4 standard is designed for low power and low data rate applications with high reliability . it operates in beacon - enabled and non - beacon - enabled modes . in this work , we analyze the delay , throughput , load , and end - to - end delay of the non - beacon - enabled mode . the analysis of these parameters is performed at varying data rates . the evaluation of the non - beacon - enabled mode is done in a 10 node network . we limit our analysis to the non - beacon ( unslotted ) version because it performs better than the beacon - enabled one . protocol performance is examined by changing different medium access control ( mac ) parameters . we consider a full size mac packet with a payload size of 114 bytes . in this paper we show that the maximum throughput and the lowest delay are achieved at the highest data rate . ieee 802.15.4 , throughput , delay , end - to - end , load
the hypothesis testing theory is a well developed branch of mathematical statistics .the asymptotic approach allows to find satisfactory solutions in many different situations .the simplest problems , like the testing of two simple hypotheses , have well known solutions . recall that if we fix the first type error and seek the test which maximizes the power , then we obtain immediately ( by neyman - pearson lemma ) the most powerful test based on the likelihood ratio statistic .the case of composite alternative is more difficult to treat and here the asymptotic solution is available in the regular case .it is possible , using , for example , the score function test ( sft ) , to construct the asymptotically ( locally ) most powerful test .moreover , the general likelihood ratio test ( glrt ) and the wald test ( wt ) based on the maximum likelihood estimator are asymptotically most powerful in the same sense . in the non regular cases the situation became much more complex .first of all , there are different non regular ( singular ) situations . moreover , in all these situations, the choice of the asymptotically best test is always an open question .this work is an attempt to study all these situations on the model of inhomogeneous poisson processes .this model is sufficiently simple to allow us to realize the construction of the well known tests ( sft , glrt , wt ) and to verify that these test are asymptotically most powerful also for this model , in the case when it is regular . in the next paperwe study the behavior of these tests in the case when the model is singular .the `` evolution of the singularity '' of the intensity function is the following : regular case ( finite fisher information , this paper ) , continuous but not differentiable ( cusp - type singularity , ) , discontinuous ( jump - type singularity , ) . in all the three caseswe describe the tests analytically .more precisely , we describe the test statistics , the choice of the thresholds and the behavior of the power functions for local alternatives .note that the notion of _ local alternatives _ is different following the type of regularity / singularity .suppose we want to test the simple hypothesis against the one - sided alternative . in the regular case ,the local alternatives are usually given by , .in the case of a cusp - type singularity , the local alternatives are introduced by , . as to the case of a jump - type singularity ,the local alternatives are , . in all these problems ,the most interesting for us question is the comparison of the power functions of different tests . in singular cases ,the comparison is done with the help of numerical simulations .the main results concern the limit likelihood ratios in the non - regular situations .let us note , that in many other models of observations ( i.i.d ., time series , diffusion processes etc . 
)the likelihood ratios have the same limits as here ( see , for example , and ) .therefore , the results presented here are of more universal nature and are valid for any other ( non necessarily poissonian ) model having one of considered here limit likelihood ratios .we recall that is an inhomogeneous poisson process with intensity function , , if and the increments of on disjoint intervals are independent and distributed according to the poisson law in all statistical problems considered in this work , the intensity functions are periodic with some known period and depend on some one - dimensional parameter , that is , .the basic hypothesis and the alternative are always the same : and .the diversity of statements corresponds to different types of regularity / singularity of the function .the case of unknown period needs a special study .the hypothesis testing problems ( or closely related properties of the likelihood ratio ) for inhomogeneous poisson processes were studied by many authors ( see , for example , brown , kutoyants , lger and wolfson , liese and lorz , sung_ et al . _ , fazli and kutoyants , dachian and kutoyants and the references therein ) .note finally , that the results of this study will appear later in the work .for simplicity of exposition we consider the model of independent observations of an inhomogeneous poisson process : , where , , are poisson processes with intensity function , . here , , is a one - dimensional parameter .we have where is the mathematical expectation in the case when the true value is .note that this model is equivalent to the one , where we observe an inhomogeneous poisson process with periodic intensity , , and ( the period is supposed to be known ) .indeed , if we put , ] .the measures corresponding to poisson processes with different values of are equivalent .the likelihood function is defined by the equality ( see liese ) {\rm d}t\right\}\end{aligned}\ ] ] and the likelihood ratio function is we have to test the following two hypotheses a test is defined as the probability to accept the hypothesis .its power function is , .denote the class of tests of asymptotic size ] and we put for .now the random function is defined on and belongs to the space of continuous on functions such that as .introduce the uniform metric in this space and denote the corresponding borel sigma - algebra .the next theorem describes the weak convergence under the alternative ( with fixed ) of the stochastic process to the process in the measurable space .note that in this theorem was proved for a fixed true value . in the hypothesis testing problems considered here , we need this convergence both under hypothesis , that is , for fixed true value ( ) , and under alternative with `` moving '' true value . [ t1 ]let us suppose that the regularity conditions are fulfilled .then , under alternative , we have the weak convergence of the stochastic process to . according to ( * ? ? ?* theorem 1.10.1 ) , to prove this theorem it is sufficient to verify the following three properties of the process . 1 . 
the finite - dimensional distributions of converge , under alternative , to the finite - dimensional distributions of .the inequality holds for every and some constant .there exists , such that for some and all we have the estimate let us rewrite the random function as follows : for the first term we have therefore we only need to check the conditions 23 for the term the finite - dimensional distributions of converge , under alternative , to the finite - dimensional distributions of .the limit process for is hence for the details see , for example , .let the regularity conditions be fulfilled .then there exists a constant , such that for all and sufficiently large values of . according to (* ? ? ? * lemma 1.1.5 ) , we have : where is some intermediate point between and . let the regularity conditions be fulfilled .then there exists a constant , such that for all and sufficiently large value of . using the markov inequality ,we get according to ( * ? ? ?* lemma 1.1.5 ) , we have using the taylor expansion we get where is some intermediate point between and .hence , for sufficiently large providing , we have the inequality , and we obtain by distinguishability condition , we can write and hence and so , putting the estimate follows from and .the weak convergence of now follows from ( * ? ? ?* theorem 1.10.1 ) .in this section , we construct the score function test , the general likelihood ratio test , the wald test and two bayes tests . for all these tests we describe the choice of the thresholds and evaluate the limit power functions for local alternatives .let us introduce the _ score function test _( sft ) where is the ( )-quantile of the standard normal distribution and the statistic is .\ ] ] the sft has the following well - known properties ( one can see , for example , ( * ? ? ?* theorem 13.3.3 ) for the case of i.i.d .observations ) .the test and is laump . 
for its power functionthe following convergence hold : the property follows immediately from the asymptotic normality ( under hypothesis ) further , we have ( under alternative ) the convergence this follows from the le cam s third lemma and can be shown directly as follows .suppose that the intensity of the observed poisson process is , then we can write \\ & \quad + \frac{1}{\sqrt{n{\rm i}\left(\vartheta _ 1\right)}}\sum_{j=1}^{n}\int_{0}^{\tau } \frac{\dot\lambda \left(\vartheta _1,t\right)}{\lambda \left(\vartheta _1,t\right)}\left[\lambda \left(\vartheta _n , t\right)-\lambda \left(\vartheta _1,t\right)\right]{\rm d}t \\ & = \delta _n^*\left(\vartheta_1,x^n\right)+\frac{u_*}{{n{\rm i}\left(\vartheta _ 1\right)}}\sum_{j=1}^{n}\int_{0}^{\tau } \frac{\dot\lambda \left(\vartheta _1,t\right)^2}{\lambda \left(\vartheta _ 1,t\right)}{\rm d}t+o\left(1\right)\\ & = \delta _ n^*\left(\vartheta_1,x^n\right)+u_*+o\left({1}\right)\longrightarrow \delta + u_*.\end{aligned}\ ] ] to show that the sft is laump , it is sufficient to verify that the limit of its power function coincides ( for each fixed value ) with the limit of the power of the corresponding likelihood ratio ( neyman - person ) test ( n - pt ) .remind that the n - pt is the most powerful for each fixed ( simple ) alternative ( see , for example , theorem 13.3 in lehman and romano ) .of course , the n - pt is not a real test ( in our one - sided problem ) , since for its construction one needs to know the value of the parameter under alternative .the n - pt is defined by where the threshold and the probability are chosen from the condition , that is , of course , we can put because the limit random variable has continuous distribution function .the threshold can be found as follows .the lan of the family of measures at the point allows us to write hence , we have therefore the n - pt belongs to . for the power of the n - pt we have ( denoting as usually ) therefore the limits of the powers of the tests and coincide , that is , the score function test is asymptotically as good as the neyman - pearson optimal one . note that the limits are valid for any sequence of .so , for any , we can choose a sequence ] , , of inhomogeneous poisson process of intensity function where .the fisher information at the point is .recall that all our tests ( except bayes tests ) in regular case are laump .therefore they have the same limit power function .our goal is to study the power functions of different tests for finite .the normalized likelihood ratio is given by the expression where .the numerical simulation of the observations allows us to obtain the power functions presented in figures [ pf_regular_2 ] and [ pf_regular_glrt_wald ] .for example , the computation of the numerical values of the power function of the sft was done as follows .we define an increasing sequence of beginning at .then , for every , we simulate i.i.d .observations of n - tuples of inhomogeneous poisson processes , , with the intensity function and calculate the corresponding statistics , . 
the empirical frequency of acceptation of the alternativegives us an estimate of the power function : we repeat this procedure for different values of until the values of become close to .power functions of sft and bt1 ] power functions of glrt and wt ] in the computation of the power function of the bayes test bt1 , we take as _ a priori _ law the uniform distribution , that is , )$ ] .the thresholds of the bt1 are obtained by simulating random variables , , calculating for each of them the quantity and taking the -th greatest between them .some of the thresholds are presented in table [ thr_bt1 ] ..[thr_bt1]thresholds of bt1 [ cols="^,^,^,^,^,^,^,^",options="header " , ] note that for the small values of , under alternative , the power function of sft starts to decrease ( see figure [ pf_regular_glrt_wald ] ) .this interesting fact can be explained by the strongly non linear dependence of the likelihood ratio on the parameter .the test statistic can be rewritten as follows : \\ & \qquad+\sqrt{\frac{n}{{\rm i}\left(\vartheta_1\right)}}\int_{0}^{t}\frac{\dot\lambda \left(\vartheta _1,t\right)}{\lambda \left(\vartheta_1,t\right)}\left[\lambda \left(\vartheta _ 1+u\varphi_n , t\right)-\lambda \left(\vartheta_1,t\right ) \right]{\rm d}t\\ & = -3\varphi_n\sum_{j=1}^n\int_{0}^{3}\frac{t\sin(6t)}{3\cos^2(3\,t)+1}\left[{\rm d}x_j\left(t\right)-\left(3\cos^2\!\left(\left(3+u\varphi_n\right)t\right)\!+1\right){\rm d}t \right]\\ & \qquad+9\sqrt{\frac{n}{{\rm i}\left(\vartheta_1\right)}}\!\int_{0}^{3 } \!\frac{t\sin(6 t)}{3\cos^2(3\,t)+1}\times\left[\cos^2(3\,t)- \cos^2\left(\left(3+u\varphi_n\,\right)t\right)\right]{\rm d}t . \ ] ] the last integral becomes negative for some values of , which explains the loss of power of the sft ( for ) .this study was partially supported by russian science foundation ( research project no .14 - 49 - 00079 ) .the authors thank the referee for helpful comments .
we consider the problem of hypothesis testing in the situation when the first hypothesis is simple and the second one is a local one - sided composite alternative . we describe the choice of the thresholds and the behavior of the power functions of the score function test , the general likelihood ratio test , the wald test and two bayes tests in the situation when the intensity function of the observed inhomogeneous poisson process is smooth with respect to the parameter . it is shown that almost all of these tests are asymptotically uniformly most powerful . the results of numerical simulations are presented . msc 2010 classification : 62m02 , 62f03 , 62f05 . _ key words : _ hypothesis testing , inhomogeneous poisson processes , asymptotic theory , composite alternatives , regular situation .
the solution of the absolute value equation ( ave ) of the following form is considered : here , , and denotes the component - wise absolute value of vector , i.e. , .the ave ( [ eq:1 ] ) is a special case of the generalized absolute value equation ( gave ) of the type where and .the gave ( [ eq:1a ] ) was introduced in and investigated in a more general context in .recently , these problems have been investigated in the literature .the ave ( [ eq:1 ] ) arises in linear programs , quadratic programs , bimatrix games and other problems , which can all be reduced to a linear complementarity problem ( lcp ) , and the lcp is equivalent to the ave ( [ eq:1 ] ) .this implies that ave is np - hard in its general form . beside, if , then the generalized ave ( [ eq:1a ] ) reduces to a system of linear equations , which have many applications in scientific computation .the main research of ave includes two aspects : one is the theoretical analysis , which focuses on the theorem of alternatives , various equivalent reformulations , and the existence and nonexistence of solutions ; see . and the other is how to solve the ave .we mainly pay attention to the letter . in the last decade ,based on the fact that the lcp is equivalent to the ave and the special structure of ave , a large variety of methods for solving ave ( [ eq:1 ] ) can be found in the literature ; see .these also include the following : a finite succession of linear programs ( slp ) is established in , which arise from a reformulation of the ave as the minimization of a piecewise - linear concave function on a polyhedral set and solving the latter by successive linearization ; a semi - smooth newton method is proposed , which largely shortens the computation time than the slp method in ; furthermore , a smoothing newton algorithm is presented in , which is proved to be globally convergent and the convergence rate is quadratic under the condition that the singular values of exceed 1 .this condition is weaker than the one used in .recently , the picard - hss iteration method is proposed to solve ave by salkuyeh in , which is originally designed to solve weakly nonlinear systems and its generalizations are also paid attention . the sufficient conditions to guarantee the convergence of this method and some numerical experimentsare given to show the effectiveness of the method .however , the numbers of the inner hss iteration steps are often problem - dependent and difficult to be determined in actual computations .moreover , the iteration vector can not be updated timely . in this paper , we present the nonlinear hss - like iteration method to overcome the defect mentioned above , which is designed originally for solving weakly nonlinear systems in .the rest of this paper is organized as follows . in section [ sec:2 ]the hss and picard - hss iteration methods are reviewed . 
in section [ sec:3 ] the nonlinear hss - like iteration method for solving ave ( [ eq:1 ] )is described .numerical experiments are presented in section [ sec:4 ] , to shown the feasibility and effectiveness of the nonlinear hss - like method .finally , some conclusions and an open problem are drew in section [ sec:5 ] .in this section , the hss iteration method for solving the non - hermitian linear systems and the picard - hss iteration method for solving the ave ( [ eq:1 ] ) are reviewed .let be a non - hermitian positive definite matrix , be a zero matrix , the gave ( [ eq:1a ] ) reduced to the non - hermitian system of linear equations because any square matrix possesses a hermitian and skew - hermitian splitting ( hss ) the following hss iteration method is first introduced by bai , golub and ng in for the solution of the non - hermitian positive definite system of linear equations ( [ eq:5 ] ) . * the hss iteration method . * + given an initial guess , compute for using the following iteration scheme until converges , where is a positive constant and is the identity matrix .when the matrix is positive definite , i.e. its hermitian part is positive definite , bai et al .proved that the spectral radius of the hss iteration matrix is less than 1 for any positive parameters ,i.e. , the hss iteration method is unconditionally convergent ; see . for the convenience of the subsequent discussion, the ave ( [ eq:1 ] ) can be rewritten as its equivalent form : recalling that the linear term and the nonlinear term are well separated and the picard iteration method is a fixed - point iteration , the picard iteration can be used to solve the ave ( [ eq:1 ] ) .when the matrix is large sparse and positive definite , the next iteration may be inexactly computed by hss iteration .this naturally lead to the following iteration method proposed in for solving the ave ( [ eq:1 ] ) . * the picard - hss iteration method . *+ let be a sparse and positive definite matrix , and be its hermitian and skew - hermitian parts respectively .given an initial guess and a sequence of positive integers , compute for using the following iteration scheme until satisfies the stopping criterion : \(a ) set \(b ) for , solve the following linear systems to obtain : where is a given positive constant and is the identity matrix ; \(c ) set .the advantage of the picard - hss iteration method is obvious .first , the two linear sub - systems in all inner hss iterations have the same shifted hermitian coefficient matrix and shifted skew - hermitian coefficient matrix , which are constant with respect to the iteration index .second , as the coefficient matrix and are hermitian and skew - hermitian respectively , the first sub - system can be solved exactly by making use of the cholesky factorization and the second one by the lu factorization .the last , these two sub - systems can be solve approximately by the conjugate gradient method and a krylov subspace method like gmres , respectively ; see .in the picard - hss iteration , the numbers of the inner hss iteration steps are often problem - dependent and difficult to be determined in actual computations .moreover , the iteration vector can not be updated timely .thus , to avoid these defect and still preserve the advantages of the picard - hss iteration method , based on the hss ( [ eq:6 ] ) and the nonlinear fixed - point equations the following nonlinear hss - like iteration method is proposed to solve the ave ( [ eq:1 ] ) .* the nonlinear hss - like iteration method . 
*+ let be a sparse and positive definite matrix , and be its hermitian and skew - hermitian parts respectively. given an initial guess , compute for using the following iteration scheme until satisfies the stopping criterion : where is a given positive constant and is the identity matrix .it is obvious that both and in the second step are updated in the nonlinear hss - like iteration , but only is updated in the picard - hss iteration .furthermore , the nonlinear hss - like iteration is a monolayer iteration scheme , but the picard - hss is an inner - outer double - layer iteration scheme . to obtain a one - step form of the nonlinear hss - like iteration, we define and then the nonlinear hss - like iteration scheme can be equivalently expressed as the ostrowski theorem , i.e. , theorem 10.1.3 in , gives a local convergence theory about a one - step stationary nonlinear iteration . based on this , bai et al . established the local convergence theory for the nonlinear hss - like iteration method in .however , these convergence theory has a strict requirement that must be -differentiable at a point such that . obviously , the absolute value function is non - differentiable .thus , the convergence analysis of the nonlinear hss - like iteration method for solving weakly nonlinear linear systems is unsuitable for solving ave , and need further discuss . at the end of this section, we remark that the main steps in the nonlinear hss - like iteration method can be alternatively reformulated into residual - updating form as follows .* the hss - like iteration method ( residual - updating variant ) . *+ given an initial guess , compute for using the following iterative procedure until satisfies the stopping criterion : \(1 ) set : , \(2 ) solve : , \(3 ) set : , , \(4 ) solve : , \(5 ) set : , + where is a given positive constant and is the identity matrix .in this section , the numerical properties of the picard , picard - hss and nonlinear hss - like methods are examined and compared experimentally by a suit of test problems .all the tests are performed in matlab r2013a on intel(r ) core(tm ) i5 - 3470 cpu 3.20 ghz and 8.00 gb of ram , with machine precision , and terminated when the current residual satisfies where is the computed solution by each of the methods at iteration , and a maximum number of the iterations 500 is used .in addition , the stopping criterion for the inner iterations of the picard - hss method is set to be where , , is the number of the inner iteration steps and is the prescribed tolerance for controlling the accuracy of the inner iterations at the -th outer iteration .if is fixed for all , then it is simply denoted by . here , we take . 
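before turning to how the two half-step systems are solved in the experiments, the following sketch shows one way to implement the nonlinear hss-like iteration for ax - |x| = b, rewritten as the fixed-point problem ax = |x| + b. the half-step/full-step form below, with the absolute value re-evaluated at the half step, is the standard nonlinear hss-like scheme; since the displayed formulas are not reproduced in this excerpt, the exact form, the test matrix and the choice of alpha should be read as assumptions.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve, lu_factor, lu_solve

def nonlinear_hss_like(A, b, alpha, x0=None, tol=1e-6, max_it=500):
    """Nonlinear HSS-like iteration for Ax - |x| = b with A = H + S (sketch)."""
    n = A.shape[0]
    H = 0.5 * (A + A.T)                    # Hermitian part (real case)
    S = 0.5 * (A - A.T)                    # skew-Hermitian part
    I = np.eye(n)
    chol = cho_factor(alpha * I + H)       # constant SPD coefficient matrix: Cholesky once
    lu = lu_factor(alpha * I + S)          # constant shifted skew part: LU once
    x = np.zeros(n) if x0 is None else x0.copy()
    b_norm = np.linalg.norm(b)
    for k in range(max_it):
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * b_norm:
            return x, k
        # (alpha*I + H) x_half = (alpha*I - S) x + |x| + b
        x_half = cho_solve(chol, (alpha * I - S) @ x + np.abs(x) + b)
        # (alpha*I + S) x_new = (alpha*I - H) x_half + |x_half| + b
        x = lu_solve(lu, (alpha * I - H) @ x_half + np.abs(x_half) + b)
    return x, max_it

if __name__ == "__main__":
    n = 100                                # small, strongly diagonally dominant test matrix
    A = (np.diag(np.full(n, 8.0))
         + np.diag(np.full(n - 1, 1.5), 1)
         - np.diag(np.full(n - 1, 0.5), -1))
    x_star = np.cos(np.arange(1, n + 1))   # prescribe the exact solution
    b = A @ x_star - np.abs(x_star)
    x, its = nonlinear_hss_like(A, b, alpha=4.0)
    print("iterations:", its, " error:", np.linalg.norm(x - x_star))
```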
the first subsystem with the hermitian positive definite coefficient matrix in ( [ eq : hsslike ] )is solved by the cholesky factorization , and the second subsystem with the skew - hermitian coefficient matrix in ( [ eq : hsslike ] ) is solved by the lu factorization .the optimal parameters employed in the picard - hss and nonlinear hss - like iteration methods have been obtained experimentally .in fact , the experimentally found optimal parameters are the ones resulting in the least numbers of iterations and cpu times .as mentioned in the computation of the optimal parameter is often problem - dependent and generally difficult to be determined .we consider the two - dimensional convection - diffusion equation where , is its boundary , is a positive constant used to measure the magnitude of the diffusive term and is a real number .we use the five - point finite difference scheme to the diffusive terms and the central difference scheme to the convective terms .let and denote the equidistant step size and the mesh reynolds number , respectively .then we get a system of linear equations , where is a matrix of order of the form with where , and are the identity matrices of order and respectively , means the kronecker product . in our numerical experiments , the matrix in ave ( [ eq:1 ] )is defined by ( [ eq : ex ] ) with different values of and different values of .it is easy to find that for every nonnegative number the matrix is in general non - symmetric positive definite .we use the zero vector as the initial guess , and the right - hand side vector of ave ( [ eq:1 ] ) is taken in such a way that the vector with is the exact solution , where denotes the imaginary unit ..the optimal parameters values for picard - hss and nonlinear hss - like methods ( p=0 ) . [ cols="<,<,<,>,>,>,>,>,>,>,>,>,>",options="header " , ]in this paper we have studied the nonlinear hss - like iteration method for solving the absolute value equation ( ave ) .this method is based on separable property of the linear term and nonlinear term and the hermitian and skew - hermitian splitting of the involved matrix .compared to that the picard - hss iteration scheme is an inner - outer double - layer iteration scheme , the nonlinear hss - like iteration is a monolayer and the iteration vector could be updated timely .numerical experiments have shown that the nonlinear hss - like method is feasible , robust and efficient nonlinear solver .the most important is it can outperform the picard - hss in actual implementation .
salkuyeh proposed the picard - hss iteration method to solve the absolute value equation ( ave ) , which belongs to a class of non - differentiable np - hard problems . to further improve its performance , a nonlinear hss - like iteration method is proposed here . in contrast to the picard - hss method , which is an inner - outer double - layer iteration scheme , the hss - like iteration is a monolayer scheme in which the iteration vector can be updated in a timely manner . numerical experiments demonstrate that the nonlinear hss - like method is feasible , robust and effective . absolute value equation , nonlinear hss - like iteration , fixed point iteration , positive definite 15a06 , 65f10 , 65h10
humans have evolved large brains , in part to handle the cognitive demands of social relationships .the social structures resulting from these relationships confer numerous fitness advantages .scholars distinguish between two types of social relationships : those representing strong and weak ties .strong ties are characterized by high frequency of interaction and emotional intimacy that can be found in relationships between family members or close friends .people connected by strong ties share mutual friends , forming cohesive social bonds that are essential for providing emotional and material support and creating resilient communities .in contrast , weak ties represent more casual social relationships , characterized by less frequent , less intense interactions , such as those occurring between acquaintances . by bridging otherwise unconnected communities ,weak ties expose individuals to novel and diverse information that leads to new job prospects and career opportunities .online social relationships provide similar benefits to those of the offline relationships , including emotional support and exposure to novel and diverse information .how and why do people form different social ties , whether online or offline ?of the few studies that addressed this question , shea et al . examined the relationship between emotions and cognitive social structures , i.e. , the mental representations individuals form of their social contacts . in a laboratory study, they demonstrated that subjects experiencing positive affect , e.g. , emotions such as happiness , were able to recall a larger number of more diverse and sparsely connected social contacts than those experiencing negative affect , e.g. , sadness .in other words , they found that positive affect was more closely associated with weak ties and negative affect with strong ties in cognitive social structures .this is consistent with findings that negative emotional experiences are shared more frequently through strong ties , not only to seek support but also as a means of strengthening the tie .in addition to psychological factors , social structures also depend on the participants socioeconomic and demographic characteristics . a study , which reconstructed a national - scale social network from the phone records of people living in the united kingdom , found that people living in more prosperous regions formed more diverse social networks , linking them to others living in distinct communities . on the other hand, people living in less prosperous communities formed less diverse , more cohesive social structures .the present paper examines how psychological and demographic factors affect the structure of online social interactions .we restrict our attention to interactions on the twitter microblogging platform . to study these interactions , we collected a large body of geo - referenced text messages , known as tweets , from a large us metropolitan area .further , we linked these tweets to us census tracts through their locations . census _tracts _ are small regions , on a scale of city blocks , that are relatively homogeneous with respect to population characteristics , economic status , and living conditions .some of the tweets also contained explicit references to other users through the ` @ ' mention convention , which has been widely adopted on twitter for conversations .we used mentions to measure the strength of social ties of people tweeting from each tract . 
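as a rough illustration of how mention data can be turned into a tract-level measure of tie strength, the sketch below counts reciprocated mentions between pairs of users and averages them over the users' home tracts. the input format, the reciprocity-based strength measure and the home-tract assignment are assumptions made for the example, not the exact definitions used in this study.

```python
from collections import defaultdict

def tract_tie_strength(tweets):
    """tweets: iterable of dicts like {"user": u, "tract": t, "mentions": [v1, ...]}."""
    mention_counts = defaultdict(int)                   # (u, v) -> times u mentioned v
    tract_freq = defaultdict(lambda: defaultdict(int))  # user -> tract -> tweet count
    for tw in tweets:
        tract_freq[tw["user"]][tw["tract"]] += 1
        for v in tw["mentions"]:
            if v != tw["user"]:
                mention_counts[(tw["user"], v)] += 1
    # home tract = tract the user most often tweets from (assumption)
    home = {u: max(freqs, key=freqs.get) for u, freqs in tract_freq.items()}
    # tie strength of a pair = min of the two directed mention counts (reciprocity, assumption)
    strength_by_tract = defaultdict(list)
    for (u, v), c_uv in mention_counts.items():
        if u < v and (v, u) in mention_counts:
            s = min(c_uv, mention_counts[(v, u)])
            for user in (u, v):
                if user in home:
                    strength_by_tract[home[user]].append(s)
    return {t: sum(vals) / len(vals) for t, vals in strength_by_tract.items()}

if __name__ == "__main__":
    demo = [
        {"user": "a", "tract": "06037201100", "mentions": ["b"]},
        {"user": "b", "tract": "06037201100", "mentions": ["a", "c"]},
        {"user": "c", "tract": "06037201300", "mentions": []},
    ]
    print(tract_tie_strength(demo))        # {'06037201100': 1.0}
```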
using these data we studied ( at tract level ) the relationship between social ties , the socioeoconomic characteristics of the tract , and the emotions expressed by people tweeting from that tract .in addition , people tweeting from one tract often tweeted from other tracts .since geography is a strong organizing principle , for both offline and online social relationships , we measured the spatial diversity of social relationships , and studied its dependence on socioeconomic , demographic , and psychological factors .our work complements previous studies of offline social networks and demonstrates a connection between the structure of online interactions in urban places and their socioeconomic characteristics .more importantly , it links the structure of online interactions to positive affect .people who express happier emotions interact with a more diverse set social contacts , which puts them in a position to access , and potentially take advantage of , novel information . as our social interactions increasingly move online , understanding , and being able to unobtrusively monitor , online social structures at a macroscopic level is important to ensuring equal access to the benefits of social relationships . in the rest of the paper , we first describe data collection and methods used to measure emotion and social structure .then , we present results of a statistical study of social ties and their relationships to emotions and demographic factors .the related works are addressed after this .although many important caveats exist about generalizing results of the study , especially to offline social interactions , our work highlights the value of linking social media data to traditional data sources , such as us census , to drive novel analysis of online behavior and online social structures .eagle et al . explored the link between socioeconomic factors and network structure using anonymized phone call records to reconstruct the national - level network of people living in the uk .measures of socioeconomic development were constructed from the uk government s index of multiple deprivation ( imd ) , a composite measure of prosperity based on income , employment , education , health , crime , housing of different regions within the country .they found that people living in more prosperous regions formed more diverse social networks , linking them to others living in distinct communities . on the other hand, people living in less prosperous communities formed less diverse , more cohesive social structures .quercia et al . found that sentiment expressed in tweets posted around 78 census areas of london correlated highly with community socioeconomic well being , as measured by the index of multiple deprivation ( i.e. , qualitative study of deprived areas in the uk local councils ) . in another study they found that happy places tend to interact with other happy places , although other indicators such as demographic data and human mobility were not used in their research .other researcher used demographic factors and associated them to sentiment analysis to measure happiness in different places . for instance , mitchell et al . 
generated taxonomies of us states and cities based on their similarities in word use and estimates the happiness levels of these states and cities .then , the authors correlated highly - resolved demographic characteristics with happiness levels and connected word choice and message length with urban characteristics such as education levels and obesity rates , showing that social media may potentially be used to estimate real - time levels and changes in population - scale measures , such as obesity rates .psychological and cognitive states affect the types of social connections people form and their ability to recall them .when people experience positive emotions , or affect , they broaden their cognitive scope , widening the array of thoughts and actions that come to mind .in contrast , experiencing negative emotions narrow attention to the basic actions necessary for survival .shea et al . tested these theories in a laboratory , examining the relationship between emotions and the structure of networks people were able to recall .they found that subjects experiencing positive affect were able to recall a larger number of more diverse and sparsely connected social contacts than those experiencing negative emotions .the study did not resolve the question of how many of the contacts people were able to recall that they proceeded to actively engage .a number of innovative research works attempted to better understand human emotion and mobility .some of these works focuses on geo - tagged location data extracted from foursquare and twitter .researchers reported that foursquare users usually check - in at venues they perceived as more interesting and express actions similar to other social media , such as facebook and twitter .foursquare check - ins are , in many cases , biased : while some users provide important feedback by checking - in at venues and share their engagement , others subvert the rules by deliberately creating unofficial duplicate and nonexistent venues .los angeles ( la ) county is the most populous county in the united states , with almost 10 million residents .it is extremely diverse both demographically and economically , making it an attractive subject for research .we collected a large body of tweets from la county over the course of 4 months , starting in july 2014 .our data collection strategy was as follows .first , we used twitter s location search api to collect tweets from an area that included los angeles county .we then used twitter4j api to collect all ( timeline ) tweets from users who tweeted from within this area during this time period .a portion of these tweets were geo - referenced , i.e. they had geographic coordinates attached to them . in all , we collected 6 m geo - tagged tweets made by 340k distinct users .we localized geo - tagged tweets to tracts from the 2012 us census .a tract is a geographic region that is defined for the purpose of taking a census of a population , containing about 4,000 residents on average , and is designed to be relatively homogeneous with respect to demographic characteristics of that population .we included only los angeles county tracts in the analysis .we used data from the us census to obtain demographic and socioeconomic characteristics of a tract , including the mean household income , median age of residents , percentage of residents with a bachelor s degree or above , as well as racial and ethnic composition of the tract . to measure emotions ,we apply sentiment analysis , i.e. 
methods that process text to quantify subjective states of the author of the text .two recent independent benchmark studies evaluate a wide variety of sentiment analysis tools in various social media and twitter datasets . across social media ,one of the best performing tools is sentistrength , which also was shown to be the best unsupervised tool for tweets in various contexts .sentistrength quantifies emotions expressed in short informal text by matching terms from a lexicon and applying intensifiers , negations , misspellings , idioms , and emoticons .we use the standard english version of sentistrength to each tweet in our dataset , quantifying positive sentiment and negative sentiment , consistently with the positive and negative affect schedule ( panas ) .sentistrength has been shown to perform very closely to human raters in validity tests and has been applied to measure emotions in product reviews , online chatrooms , yahoo answers , and youtube comments .in addition , sentistrength allows our approach to be applied in the future to other languages , like spanish , and to include contextual factors , like sarcasm . beyond positivity and negativity, meanings expressed through text can be captured through the application of the semantic differential , a dimensional approach that quantifies emotional meaning in terms of valence , arousal , and dominance .the dimension of _ valence _ quantifies the level of pleasure or evaluation expressed by a word , _ arousal _ measures the level of activity induced by the emotions associated with a word , and _ dominance _ quantifies the level of subjective power or potency experienced in relation to an emotional word .research in psychology suggests that a multidimensional approach is necessary to capture the variance of emotional experience , motivating our three - dimensional measurement beyond simple polarity approximations .the state of the art in the quantification of these three dimensions is the lexicon of warriner , kuperman , and brysbaert ( wkb ) .the wkb lexicon includes scores in the three dimensions for more than 13,000 english lemmas .we quantify these three dimensions in a tweet by first lemmatizing the words in the tweet , to then match the lexicon and compute mean values of the three dimensions as in . the large size of this lexicon allows us to match terms in in 82.39% of the tweets in our dataset , which we aggregate to produce multidimensional measures of emotions .[ cols="^,^ " , ] figure [ fig : mobility - demo ] shows the association between spatial diversity and demographic characteristics .income does not appear to significantly affect spatial diversity : only the top tertile of tracts by incomes has a significantly different spatial diversity ( ) from the other two tertiles .education , however , has a stronger dependence : tracts with better - educated residents also have significantly higher ( ) spatial diversity than tracts with fewer educated residents .in addition , ethnicity appears to be a factor .tracts with larger hispanic population have significantly lower spatial diversity ( ) than other tracts .the availability of large scale , near real - time data from social media sites such as twitter brings novel opportunities for studying online behavior and social interactions at an unprecedented spatial and temporal resolution . 
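as an illustration of the lexicon - based scoring described above , the sketch below computes mean valence , arousal and dominance for a single tweet ; the three - entry lexicon and the whitespace tokenizer ( standing in for proper lemmatization and the full wkb lexicon ) are placeholders , not the resources actually used .

```python
# Minimal sketch of dimensional sentiment scoring with a WKB-style lexicon.
# The three-entry lexicon below is a toy stand-in for the ~13,000-lemma
# Warriner-Kuperman-Brysbaert lexicon, and lowercased whitespace tokens
# stand in for proper lemmatization.
VAD_LEXICON = {
    # word: (valence, arousal, dominance) on the lexicon's 1-9 scales
    'happy':   (8.47, 6.05, 7.21),
    'sad':     (2.10, 3.49, 3.84),
    'traffic': (3.62, 4.28, 4.14),
}

def vad_score(tweet):
    """Mean valence/arousal/dominance over the tweet's matched terms.

    Returns None when no token matches the lexicon (unmatched tweets would
    simply be excluded from tract-level averages).
    """
    matched = [VAD_LEXICON[t] for t in tweet.lower().split() if t in VAD_LEXICON]
    if not matched:
        return None
    n = len(matched)
    return tuple(sum(dim) / n for dim in zip(*matched))

print(vad_score("stuck in traffic but happy it is friday"))
```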
by combining twitter data with us census, we were able to study how the socioeconomic and demographic characteristics of residents of different census tracts are related to the structure of online interactions of users tweeting from these tracts .moreover , sentiment analysis of tweets originating from a tract revealed a link between emotions and sociability of twitter users .our findings are broadly consistent with results of previous studies carried out in an offline setting , and also give new insights into the structure of online social interactions .we find that at an aggregate level , areas with better educated , somewhat younger and higher - earning population are associated with weaker social ties and greater spatial diversity ( or inter - tract mobility ) . in addition , twitter users express happier , more positive emotions from these areas .conversely , areas that have more hispanic residents are associated with stronger social ties and lower spatial diversity. people also express less positive , sadder emotions in these areas .since weak ties are believed to play an important role in delivering strategic , novel information , our work identifies a social inequity , wherein the already privileged ones ( more affluent , better educated , happier ) are in network positions that potentially allow them greater access to novel information .some important considerations limit the interpretation of our findings .first , our methodology for identifying social interactions may not give a complete view of the social network of twitter users .our observations were limited to social interactions initiated by users who geo - reference their tweets .this may not be representative of all twitter users posting messages from a given tract , if systematic biases exist in what type of people elect to geo - reference their tweets . for demographic analysis, we did not resolve the home location of twitter users .instead , we assumed that characteristics of an area , i.e. , of residents of a tract , influence the tweets posted from that tract .other subtle selection biases could have affected our data and the conclusions we drew .it is conceivable that twitter users residing in more affluent areas are less likely to use the geo - referencing feature , making our sample of twitter users different from the population of la county residents . recognizing this limitation, we did not make any claims about the behavior of la residents ; rather , we focused on the associations between emotions and characteristics of a place and the behavior of twitter users , with an important caveat that those who turn on geo - referencing may differ from the general population of twitter users . for the analysis of emotions, we only considered english language tweets , although a significant fraction of tweets were in spanish .this may bias the average affect of tracts , especially for low - valence tracts , which have a larger number of hispanic residents . in the future, we plan to address this question by conducting sentiment analysis of spanish language tweets .ma was supported by the usc viterbi - india internship program .lg acknowledge support by the national counsel of technological and scientific development cnpq , brazil ( 201224/20143 ) , and usc - isi visiting researcher fellowship .this work was also partially supported by darpa , under contract w911nf-12 - 1 - 0034 . this support is gratefully acknowledged .cramer , h. ; rost , m. ; and holmquist , l. e. 
2011 .performing a check - in : emerging practices , norms and conflicts in location - sharing using foursquare . in _ proc .13th international conference on human computer interaction with mobile devices and services_. garcia , d. ; mendez , f. ; serdlt , u. ; and schweitzer , f. 2012 . political polarization and popularity in online participatory media : an integrated approach . in _ proc .first edition workshop on politics , elections and data_. mitchell , l. ; frank , m. r. ; harris , k. d. ; dodds , p. s. ; and danforth , c. m. 2013 .the geography of happiness : connecting twitter sentiment and expression , demographics , and objective characteristics of place . 8(5):e64417 .thelwall , m. ; buckley , k. ; paltoglou , g. ; skowron , m. ; garcia , d. ; gobron , s. ; ahn , j. ; kappas , a. ; kster , d. ; and holyst , j. a. 2013 .damping sentiment analysis in online communication : discussions , monologs and dialogs . in _computational linguistics and intelligent text processing_. springer .
the social connections people form online affect the quality of information they receive and their online experience . although a host of socioeconomic and cognitive factors were implicated in the formation of offline social ties , few of them have been empirically validated , particularly in an online setting . in this study , we analyze a large corpus of geo - referenced messages , or tweets , posted by social media users from a major us metropolitan area . we linked these tweets to us census data through their locations . this allowed us to measure emotions expressed in the tweets posted from an area , the structure of social connections , and also use that area s socioeconomic characteristics in analysis . we find that at an aggregate level , places where social media users engage more deeply with less diverse social contacts are those where they express more negative emotions , like sadness and anger . demographics also has an impact : these places have residents with lower household income and education levels . conversely , places where people engage less frequently but with diverse contacts have happier , more positive messages posted from them and also have better educated , younger , more affluent residents . results suggest that cognitive factors and offline characteristics affect the quality of online interactions . our work highlights the value of linking social media data to traditional data sources , such as us census , to drive novel analysis of online behavior .
recent evolution of mobile devices such as smart - phones and tablets has facilitated access to multi - media contents anytime and anywhere but such devices result in an explosive data traffic increase .the cisco expects by 2019 that these traffic demands will be grown up to 24.3 exabytes per month and the mobile video streaming traffic will occupy almost 72% of the entire data traffic .interestingly , numerous popular contents are asynchronously but repeatedly requested by many users and thus substantial amounts of data traffic have been redundantly generated over networks .motivated by this , caching or pre - fetching some popular video contents at the network edge such as mobile hand - held devices or small cells ( termed as _ local caching _ ) has been considered as a promising technique to alleviate the network traffic load .as the cache - enabled edge node plays a similar role as a local proxy server with a small cache memory size , the local wireless caching has the advantages of i ) reducing the burden of the backhaul by avoiding the repeated transmission of the same contents from the core network to end - users and ii ) reducing latency by shortening the communication distance . in recent years, there have been growing interests in wireless local caching .the related research has focused mainly on i ) femto - caching with cache - enabled small cells or access points ( called as caching helpers ) , ii ) device - to - device ( d2d ) caching with mobile terminals , and iii ) heterogeneous cache - enabled networks . for these local caching networks ,varieties of content placements ( or caching placements ) were developed and for given fixed content placement , the performance of cache - enabled systems with different transmission or cache utilization techniques was investigated . specifically , content placement to minimize average downloading delay or average ber was proposed for fixed network topology . in a stochastic geometric framework ,various content placements were also proposed either to minimize the average delay and average caching failure probability or to maximize total hit probability , offloading probability .however , these caching solutions were developed in limited environments ; they discarded wireless fading channels and interactions among multiple users , such as interference and loads at caching helpers . recently, the content placement on stochastic geometry modeling of caching was studied in . a tradeoff between content diversity and cooperative gain according to content placementwas discovered well in but the caching probabilities were determined with numerical searches only .moreover , in , cache memory size is restricted to a single content size and loads at caching helpers are not addressed . the optimal geographical caching strategy to maximize the total hit probabilitywas studied in cellular networks in . however , only hit probability whether the requested content is available or not among the covering base stations was investigated .none of the previous works successfully addressed the channel selection diversity and interactions among multiple users such as network interference and loads according to content placement .success of content delivery in wireless cache network depends mainly on two factors : i ) _ channel selection diversity gain _ and ii ) _ network interference_. for given realization of nodes in a network , these two factors dynamically vary according to what and how the nodes cache at their limited cache memory . 
specifically , if the more nodes store the same contents , they offer the shorter geometric communication distance as well as the better small - scale fading channel for the specific content request , which can be termed as channel selection diversity gain . on the contrary , if the nodes cache all contents uniformly , they can cope with all content requests but channel selection diversity gain can not help being small .moreover , according to content placement , the serving node for each content request dynamically changes , so the network interference from other nodes also dynamically varies .thus , it might be required to properly control the channel selection diversity gain and network interference for each content .recently , in , a tradeoff between content diversity and channel diversity was addressed in caching helper networks , where each caching helper is capable of storing only _one content_. however , although pathloss and small - scale fading are inseparable in accurately modeling wireless channels , the channel diversity was characterized with only small - scale fading and the effects of pathloss and network interference depending on random network geometry were not well captured . in this context , we address the problem of content placement with a more generalized model considering pathloss , network interference according to random network topology based on stochastic geometry , small - scale channel fading , and arbitrary cache memory size . in this generalized framework , we develop an efficient content placement to desirably control cache - based channel selection diversity and network interference .the main contributions of this paper are summarized as follows .* we model the stochastic wireless caching helper networks , where randomly located caching helpers store contents independently and probabilistically in their finite cache memory and each user receives the content of interest from the caching helper with the largest instantaneous channel power gain .our framework generalizes the previous caching helper network models by simultaneously considering small - scale channel fading , pathloss , network interference , and arbitrary cache memory size . * with stochastic geometry, we characterize the channel selection diversity gain according to content placement of caching helpers by deriving the cumulative distribution function of the smallest reciprocal of the channel power gain in a noise - limited network .we derive the optimal caching probabilities for each file in closed form to maximize the average content delivery success probability for given finite cache memory size , and propose an efficient algorithm to find the optimal solution . * in interference - limited networks ,we derive a lower bound of the average content delivery success probability in closed form .based on this lower bound with rayleigh fading , we derive near - optimal caching probabilities for each content in closed form to appropriately control the channel selection diversity and the network interference depending on content placement . 
*our numerical results demonstrate that the proposed content placement is superior to other content placement strategies because the proposed method efficiently balances channel selection diversity and network interference reduction for given content popularity and cache memory size .we also numerically investigate the effects of the various system parameters , such as the density of caching helpers , nakagami fading parameter , memory size , target bit rate , and user density , on the caching probability .the rest of this paper is organized as follows . in sectionii , we describe the system model and performance metric considered in this paper . we analyze the average content delivery success probability and desirable content placement of caching helpers in a noise- and interference - limited network in sections iii and iv , respectively . numerical examples to validate the analytical results and to investigate the effects of the system parametersare provided in section v. finally , the conclusion of this paper is given in section vi .we consider a downlink wireless video service network , where the caching helpers are capable of caching some contents in their limited caching storage , as depicted in fig .[ fig : system_model ] .we assume that all contents have the same size normalized to one for analytic simplicity .the caching helpers are randomly located and modeled as -d homogeneous poisson point process ( ppp ) with intensity .the caching helpers are equipped with a single antenna and their cache memory size is , so different contents can be cached at each helper since each content has unit size . the total number of contents is and the set ( library ) of content indices is denoted as .the contents have own popularity and their popularity distributions are assumed to follow the zipf distribution as in literature : where the parameter reflects the popularity distribution skewness .for example , if , the popularity of the contents is uniform .the lower indexed content has higher popularity , i.e. , if . note that our content popularity profile is not necessarily confined to the zipf distribution but can accommodate any discrete content popularity distribution .the users are also randomly located and modeled as -d homogeneous poisson point process ( ppp ) with intensity .based on slivnyak s theorem that the statistics observed at a random point of a ppp is the same as those observed at the origin in the process , we can focus on a reference user located at the origin , called a _typical user_. in this paper , we adopt _ random content placement _ where the caching helpers independently cache content with probability for all .according to the caching probabilities ( or policies ) , each caching helper randomly builds a list of up to contents to be cached by the probabilistic content caching method proposed in .2 presents an example of the probabilistic caching method and illustrates how a caching helper randomly chooses contents to be cached among total contents according to the caching probability when the cache memory size is and total number of contents is . 
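for concreteness , the zipf popularity profile introduced above can be generated as follows ; the library size and skewness exponent in the example are arbitrary illustrative values .

```python
def zipf_popularity(num_contents, gamma):
    """Zipf content popularities f_i = i^(-gamma) / sum_j j^(-gamma), i = 1..F.

    gamma = 0 gives a uniform profile; larger gamma concentrates requests
    on the lowest-indexed (most popular) contents.
    """
    weights = [i ** (-gamma) for i in range(1, num_contents + 1)]
    total = sum(weights)
    return [w / total for w in weights]

print(zipf_popularity(num_contents=5, gamma=0.8))   # skewed profile
print(zipf_popularity(num_contents=5, gamma=0.0))   # uniform profile
```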
in this scheme , the cache memory of size equally divided into ( ) blocks of unit size .then , starting from content 1 , each content sequentially fills the discontinuous memory blocks by the amount of from the first block .if a block is filled up in the filling process of content , the remaining portion of content continuously fills the next block .then , we select a random number within ] increases , the cdf of grows faster to 1 because the intensity of ppp is proportional to them .in other words , as the number of caching helpers that are storing the content of interest and accessible by the typical user increases or the small - scale fading channel becomes more deterministic , the intensity of ppp representing the reciprocal channel power gains grows and thus the smallest reciprocal becomes smaller .especially , for given and , the largest channel power gain ( i.e. , ) grows as increases , which implies an increase of the _ channel selection diversity gain _ according to the content placement .[ fig : cdf ] validates the accuracy of lemma 1 for varying and .the cdf of increases faster to 1 as either or increases .however , the cdf of depends more on than on , so optimal caching probabilities are affected more by the density of caching helpers than channel fading . from lemma 1 , the average success probability for content delivery is derived in the following theorem .when the typical user receives content from the caching helper with the largest instantaneous channel power gain , the average success probability for content delivery in a nakagami- fading channel is obtained as where , , , and is the target bit rate of content . & = \mathbb{p}\left[\log_2\big(1 + \frac{\eta}{\xi_{i,1 } } \big)\geq \rho_i\right]\\ & = \mathbb{p}\left[\xi_{i,1 } \leq \frac{\eta}{2^{\rho_i } - 1}\right]\\ & = f_{\xi_{i,1}}\left(\frac{\eta}{2^{\rho_i } - 1}\right)\\ & = 1-e^{-\kappa p_i\left(\frac{\eta}{2^{\rho_i}-1}\right)^{\delta}},\label{eqn : success_prob}\end{aligned}\ ] ] where is obtained from lemma 1 . substituting into, we obtain . from lemma 1 , we know that the channel selection diversity gain for a specific content increases as the number of caching helpers storing the content increases , i.e. , increases .however , due to limited memory space , i.e. , the constraint , storing the same content at more caching helpers ( increases ) loses the chance of storing the other contents and the corresponding channel diversity gains . in this subsection, we derive the optimal solution of problem , the optimal caching probabilities , in closed form . for each ,the function is convex with respect to since .since a weighted sum of convex functions is also convex function , problem is a constrained convex optimization problem and thus a unique optimal solution exists .the lagrangian function of problem is where is a constant , and are the nonnegative lagrangian multipliers for constraints and . after differentiating with respect to , we can obtain the necessary conditions for optimal caching probability , i.e. 
, _karush - kuhn - tucker_(kkt ) condition as follows : from the constraint in , the optimal caching probabilities are given by ^{+}\\ & = \frac{1}{\kappa t_i}\left[\log\left(f_i\kappa t_i\right)-\log\left(\omega \!+\ !\mu_i\right)\right]^{+ } , ~\forall i \!\in\ !\mathcal{f},\label{eqn : opt}\end{aligned}\ ] ] where ^{+}=\max\{z,0\} ] because always holds and for .note that for all because the second derivative of is ^ 3}\leq 0 ] andall simulation results are averaged over 10,000 realizations .[ fig : caching_comparison ] compares the average success probabilities of content delivery in a noise - limited network for three different content placement strategies ; i ) caching the most popular contents ( mpc ) , ii ) caching the contents uniformly ( uc ) , and iii ) proposed content placement found by algorithm 1 ( proposed ) .this figure demonstrates that the proposed content placement in is superior to both uc and mpc in terms of average success probability of content delivery .mpc is closer to the proposed content placement than uc for high , and vice versa for low . for varying and , the optimal caching probability of each content in a noise - limited networkis plotted in fig .[ fig : opt_sol_lambda_m ] , where the lower index indicates the higher popularity , i.e. , if . as or increases , the optimal caching probability becomes more uniform .it implies that it would be beneficial to increase hitting probability for all contents instead of focusing on channel selection diversity for a few specific contents .this is because channel power gains become higher as either the number of caching helpers increases or channels become more deterministic although channel selection diversity can be limited .this figure also exhibits that the optimal caching probability depends more on the geometric path loss than on small - scale fading , which matches the implication of fig .[ fig : cdf ] . fig .[ fig : opt_sol_target_bit_rate ] shows the optimal caching probability of each content in a noise - limited network for varying maximum target bit rate .as grows , the optimal caching probability becomes biased toward caching the most popular contents .if is large , increasing channel selection diversity gains of the most popular contents is more beneficial to improve success probability of content delivery . in fig .[ fig : opt_sol_m ] , the optimal caching probability of each content in a noise - limited network is plotted for varying cache memory size .the optimal caching probabilities scale with the cache memory size , but they become more uniform as increases . this is because less popular contents are accommodated in memory of larger size .[ fig : ps_with_opt_and_subopt ] compares the average success probabilities of content delivery with optimal obtained from by brute - force searches , with the proposed sub - optimal obtained from * p3 * , and the lower bound with the sub - optimal versus , when ( units/ ) , , , and . for each and ,the value of for a tighter lower bound is numerically found .since the content placement obtained from the lower bound is sub - optimal , the average content delivery success probability with the sub - optimal is bounded below that with optimal .although there is a large gap between the lower bound in and , the gap between the average content delivery success probabilities with the optimal and the proposed is small for an arbitrary target bit rate because and have quite similar shapes . 
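the closed - form kkt solution above suggests a water - filling - like search over the multiplier ; the sketch below shows one way such a bisection could be organised , with the per - content constants of the closed - form expression collapsed into generic placeholders , so it illustrates the idea rather than reproducing the paper's algorithm 1 exactly .

```python
import math

def caching_probabilities(f, c, memory_size, tol=1e-9, iters=200):
    """Bisection on the Lagrange multiplier omega for the cache-placement problem.

    f[i] : content popularities (sum to 1); c[i] : positive per-content
    constants absorbing the kappa * t_i factors of the closed-form solution.
    For a given omega each probability takes the clipped water-filling form
        p_i(omega) = min(1, max(0, log(f_i * c_i / omega) / c_i)),
    which is non-increasing in omega, so we bisect until sum_i p_i equals
    the cache memory size M.
    """
    def p(omega):
        return [min(1.0, max(0.0, math.log(fi * ci / omega) / ci))
                for fi, ci in zip(f, c)]

    lo, hi = 1e-12, max(fi * ci for fi, ci in zip(f, c))  # sum(p)=F at lo, 0 at hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if sum(p(mid)) > memory_size:
            lo = mid          # probabilities too large -> increase omega
        else:
            hi = mid
        if hi - lo < tol:
            break
    return p(0.5 * (lo + hi))

# usage sketch: 10 contents, Zipf-like popularity, cache of size M = 3
f = [i ** -0.8 for i in range(1, 11)]
f = [x / sum(f) for x in f]
c = [2.0] * 10                      # placeholder kappa * t_i values
probs = caching_probabilities(f, c, memory_size=3)
print([round(x, 3) for x in probs], round(sum(probs), 3))
```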
consequently , the proposed sub - optimal caching probability is close to optimal caching probability although the sub - optimal caching probability is found from the lower bound in .[ fig : inter_comparision ] compares the average content delivery success probabilities among the proposed content placement schemes with numerically found yielding a tight lower bound and with , uc , and mpc versus the content popularity exponent .although the value of needs to be numerically found , any suboptimal solution even with the value which does not always satisfy the inequality in yields a lower average success probability of content delivery because of its suboptimality . from this fact , a suboptimal solution can be found by setting the value of to be the average load of a typical caching helper as for simplicity .[ fig : inter_comparision ] demonstrates that that both the proposed content placement schemes with numerically found and are superior to both uc and mpc in terms of average content delivery success probability for general .the average content delivery success probability with is quite similar to that with numerically found and outperforms uc and mpc . in an interference - limited network, for varying user density , the proposed caching probability of each content obtained by solving the convex optimization problem in * p3 * is plotted in fig .[ fig : inter_opt_sol_user ] , where the value of yielding a tight lower bound is numerically found .as the user density decreases , the optimal content placement tends to cache all contents with more uniform probabilities .we studied probabilistic content placement to desirably control cache - based channel selection diversity and network interference in a wireless caching helper network , with specific considerations of path loss , small - scale channel fading , network interference according to random network topology based on stochastic geometry , and arbitrary cache memory size . in a noise - limited case, we derived the optimal caching probabilities for each content in closed form in terms of the average success probability of content delivery and proposed a bisection based search algorithm to efficiently reach the optimal solution . in an interference - limited case , we derived a lower bound on the average success probability of content delivery .then , we found the near - optimal caching probabilities in closed form in rayleigh fading channels , which maximize the lower bound .our numerical results verified that the proposed content placement is superior to the conventional caching strategies because the proposed scheme efficiently controls the channel selection diversity gain and the interference reduction .we also numerically analyzed the effects of various system parameters , such as caching helper density , user density , nakagami fading parameter , memory size , target bit rate , and user density , on the content placement .since the pathloss dominates the small - scale fading effects according to lemma 1 , is approximated as the load of the tagged caching helper with the largest channel power gain averaged over fading ( i.e. 
, the load based on the association with long - term channel power gains ) , .moreover , the received sir with the association based on instantaneous channel power gains is larger than that with the association based on long - term channel power gains .accordingly , can be further bounded below as ,\label{rev:5_2}\end{aligned}\ ] ] where which is also validated in fig .[ fig : approx_check ] , where blue circle and green solid line represent and , respectively . in case of ,a closed form expression of is available as , but with multiple contents ( ) analytic evaluation of is hard due to the complicated form of . to circumvent this difficulty, we again take a lower bound of as , \label{rev:5_3}\end{aligned}\ ] ] where is a constant independent of and makes the inequality hold for all ranges of , and .note that since is a decreasing function with respect to and bounded below by zero , there must exist a certain value of which makes the inequality hold .the value of yielding a tight lower bound can be numerically determined ; in general becomes larger as diminishes and grows .[ fig : approx_check ] validates , where green and black dotted lines represent and our lower bound in , respectively .it is verified that there exists a finite value of yielding a lower bound of regardless of . in our setting ,the value of for a tighter lower bound is . although there exists a gap between and its lower bound , the shape of those two functions looks quite similar and thus the caching probabilities obtained from are close to the optimal caching probabilities . & \stackrel{(a)}{=}\sum_{i=1}^f f_i \int_0^{\infty}\!\mathbb{e}_{i_i}\!\left[\frac{\gamma(m_d , m_dp^{-1}\tau_i r^{\alpha}i_i)}{\gamma(m_d)}\right]\ !f_{|x_i|}(r)dr , \label{lower_aftsp}\end{aligned}\ ] ] where , is the gamma function defined as , is the upper incomplete gamma function defined as , is the location of the nearest caching helper storing content , is the pdf of the distance to the nearest caching helper storing content , and the equality ( a ) is obtained from the nakagami- fading channel power gain . 
since }{\gamma(m)}=e^{-my}\sum_{k=0}^{m-1}\frac{m^k}{k!}y^k$ ] , we have \\ & = \sum_{k=0}^{m_d-1}\frac{1}{k!}\left(m_d p^{-1}\tau_i r^{\alpha } \right)^k\mathbb{e}_{i_i}\left[i_i^ke^{-m_dp^{-1}\tau_i r^{\alpha}i_i}\right]\\ & \stackrel{(b)}{=}\!\sum_{k=0}^{m_d-1}\!\frac{1}{k!}\left(-m_d p^{-1}\tau_i r^{\alpha}\right)^k \!\frac{d^k}{ds^k}\mathcal{l}_{i_i}(s)|_{s=\frac{m_d\tau_i r^{\alpha}}{p } } , \label{inner}\end{aligned}\ ] ] where ( b ) is from and is the laplace transform of given by = \mathbb{e}\left[e^{-s\sum_{y\in \phi\setminus x_i}p|h_y|^2|y|^{-\alpha}}\right]\\ & \stackrel{(c)}{=}\mathbb{e}\left[\prod_{y\in \phi\setminus x_i } \mathbb{e}_{|h_y|^2}\left[e^{-sp|h_y|^2|y|^{-\alpha}}\right]\right]\\ & \stackrel{(d)}{= } \exp\left(-2\pi p_i\lambda\int_{r}^{\infty}\left[1-\mathbb{e}_g\left[e^{-spgv^{-\alpha}}\right]\right]vdv\right)\nonumber\\ & ~~~\times\exp\!\left(\!-2\pi(1 \!-\ !p_i ) \lambda\!\int_0^{\infty}\!\left[1\!-\!\mathbb{e}_g\!\left[e^{-spgv^{-\alpha}}\right]\right]vdv\!\right)\\ & \stackrel{(e)}{= } \exp\left(-2\pi p_i\lambda\int_{r}^{\infty}\frac{(spv^{-\alpha}+m_i)^{m_i}-m_i}{(spv^{-\alpha}+m_i)^{m_i}}vdv\right)\nonumber\\ & ~\times\exp\!\left(\!-2\pi(1\!-\!p_i ) \lambda\!\!\int_0^{\infty}\!\!\frac{(spv^{-\alpha}\!+\ !m_i)^{m_i}}vdv\!\right)\\ & = \exp\left(-2\pi\lambda\int_0^{\infty}\frac{(spv^{-\alpha}+m_i)^{m_i}-m_i}{(spv^{-\alpha}+m_i)^{m_i}}vdv\right)\nonumber\\ & ~~~\times\exp\left(2\pi p_i\lambda\int_0^r\frac{(spv^{-\alpha}+m_i)^{m_i}-m_i}{(spv^{-\alpha}+m_i)^{m_i}}vdv\right ) , \label{laplace}\end{aligned}\ ] ] where ( c ) is due to independence of the channel ; ( d ) comes from the probability generating functional ( pgfl ) of ppp ; ( e ) is from the mogment generating function ( mgf ) of the nakagami- distribution . substituting into , we obtain where and are the nakagami fading parameters of the desired and interfering links , respectively , and \right.\nonumber\\ & \left.+2\pi p_i\lambda\!\int_0^r \left [ 1 - \frac{m_i}{(spv^{-\alpha}+m_i)^{m_i}}\right]vdv\right),\\ f_{|x_i|}(r ) & = 2\pi p_i\lambda r\exp\left(-\pi p_i\lambda r^2\right).\end{aligned}\ ] ] 1 cisco , `` cisco visual networking index : global mobile data traffic forecast update , 2014 - 2019 , '' available at http://www.cisco.com .n. golrezaei , a. f. molisch , a. g. dimakis , and g. caire , `` femtocaching and device - to - device collaboration : a new architecture for wireless video distribution , '' _ ieee commun . mag .142 - 149 , apr . 2013 .k. shanmugam , n. golrezaei , a. g. dimakis , and a. f. molisch , and g. caire , `` femtocaching : wireless content delivery through distributed caching helpers , '' _ ieee trans .inform . theory _8402 - 8413 , dec . 2013. j. song , h. song , and w. choi , `` optimal caching placement of caching system with helpers , '' in proc ._ ieee int .( icc ) _ , london , uk , pp .1825 - 1830 , jun .2015 . s. h. chae , j. ryu , w. choi , and t. q. s. quek , `` cooperative transmission via caching helpers , '' in proc ._ ieee global commun .conference ( globecom ) _ , san diego , ca , pp . 1 - 6 , dec .b. baszczyszyn and a. giovanidis , `` optimal geographic caching in cellular networks , '' in proc ._ ieee int .( icc ) _ , london , uk , pp .3358 - 3363 , jun . 2015 .e. batu , m. bennis , and m. debbah , `` cache - enabled small cell networks : modeling and tradeoffs , '' _ eurasip journal on wireless commun . and netw . _ , vol .2015 , no .1 , pp . 1 - 11 , feb .j. li , y. chen , z. lin , w. chen , b. vucetic , and l. 
hanzo , `` distributed caching for data dissemination in the downlink of heterogeneous networks , '' _ ieee trans .3553 - 3668 , oct . 2015 .h. zhou , m. tao , e. chen , and w. yu , `` content - centric multicast beamforming in cache - enabled cloud radio access networks , '' in proc ._ ieee global commun ._ , pp . 1 - 6 , san diego , ca , dec . 2015 .z. chen , j. lee , t. q. s. quek , and m. kountouris , `` cooperative caching and transmission design in cluster - centric small cell networks , '' available at http://arxiv.org/abs/1601.00321 .j. kang and c. g. kang , `` mobile device - to - device ( d2d ) content delivery networking : a design and optimization framework , '' _ ieee / kics journal of commun . and netw .568 - 577 , oct . 2014 .n. golrezaei , a. g. dimakis , and a. f. molisch , `` scaling behavior for device - to - device communications with distributed caching , '' _ ieee trans .inform . theory _60 , no . 7 , pp . 4286 - 4298 , jul .n. golrezaei , a. f. molisch , and a. g. dimakis , `` base - station assisted device - to - device communications for high - throughput wireless video networks , '' _ ieee trans .wireless commun ._ , vol . 13 , no . 7 , pp . 3665 - 3676 , jula. altieri , p. piantanida , l. r. vega , and c. g. galarza , `` on fundamental trade - offs of device - to - device communications in large wireless networks , '' _ ieee trans .wireless commun .9 , pp . 4958 - 4971 , sep .y. guo , l. duan , and r. zhang , `` cooperative local caching and content sharing under heterogeneous file preferences , '' http://arxiv.org/abs/1510.04516v2 .m. afshang , h. s. dhillon , and p. h. j. chong , `` fundamentals of cluster - centric content placement in cache - enabled device - to - device networks , '' _ ieee trans .64 , no . 6 , pp . 2511 - 2526 , jun .m. afshang , h. s. dhillon , and p. h. j. chong , `` modeling and performance analysis of clustered device - to - device networks , '' _ ieee trans .wireless commun ._ , to appear .b. serbetci and j. goseling , `` on optimal geographical caching in heterogeneous cellular networks , '' available at http://arxiv.org/abs/1601.07322 . c. yang , y. yao , z. chen , and b. xia ,`` analysis on cache - enabled wireless heterogeneous networks , '' _ ieee trans .wireless commun .131 - 145 , jan . 2016 .j. rao , h. feng , c. yang , z. chen , and b. xia , `` optimal caching placement for d2d assisted wireless caching networks , '' available at http://arxiv.org/abs/1510.07865 .d. stoyan , w. kendall , and j. mecke , _ stochastic geometry and its applications . _ john wiley & sons , 1996 .m. haenggi , _ stochastic geometry for wireless networks _ , cambridge university press , 2013. s. h. chae , j. p. hong , and w. choi , `` optimal access in ofdma multi - rat cellular networks with stochastic geometry : can a single rat be better ?, '' _ ieee trans .wireless commun ._ , to appear. s. singh , h. s. dhillon and j. g. andrews , `` offloading in heterogeneous networks : modeling , analysis and design insights , '' _ ieee trans .wireless commun .5 , pp . 2484 - 2497 , may 2013 . s. singh , x. zhang , and j. g. andrews , `` joint rate and sinr coverage analysis for decoupled uplink - downlink biased cell associations in hetnets , '' _ ieee trans .wireless commun .5360 - 5373 , oct .
content delivery success in wireless caching helper networks depends mainly on cache - based channel selection diversity and network interference . for given channel fading and network geometry , both channel selection diversity and network interference dynamically vary according to what and how the caching helpers cache at their finite storage space . we study probabilistic content placement ( or caching placement ) to desirably control cache - based channel selection diversity and network interference in a stochastic wireless caching helper network , with sophisticated considerations of wireless fading channels , interactions among multiple users such as interference and loads at caching helpers , and arbitrary memory size . using stochastic geometry , we derive optimal caching probabilities in closed form to maximize the average success probability of content delivery and propose an efficient algorithm to find the solution in a noise - limited network . in an interference - limited network , based on a lower bound of the average success probability of content delivery , we find near - optimal caching probabilities in closed form to control the channel selection diversity and the network interference . we numerically verify that the proposed content placement is superior to other comparable content placement strategies . probabilistic content placement , caching probability , stochastic geometry , channel selection diversity
multiple optimised parameter estimation and data compression ( moped ; ) is a patented algorithm for the compression of data and the speeding up of the evaluation of likelihood functions in astronomical data analysis and beyond .it becomes particularly useful when the noise covariance matrix is dependent upon the parameters of the model and so must be calculated and inverted at each likelihood evaluation .however , such benefits come with limitations . since moped only guarantees maintaining the fisher matrix of the likelihood at a chosen point , multimodal and some degenerate distributions will present a problem . in this paper we report on some of the limitations of the application of the moped algorithm . in the cases where moped does accurately represent the likelihood function, however , its compression of the data and consequent much faster likelihood evaluation does provide orders of magnitude improvement in runtime . in ,the authors demonstrate the method by analysing the spectra of galaxies and in they illustrate the benefits of moped for estimation of the cmb power spectrum .the problem of `` badly '' behaved likelihoods was found by for the problem of light transit analysis ; nonetheless , the authors present a solution that still allows moped to provide a large speed increase .we begin by introducing moped in section 2 and define the original and moped likelihood functions , along with comments on the potential speed benefits of moped . in section 3 we introduce an astrophysical scenario where we found that moped did not accurately portray the true likelihood function . in section 4 we expand upon this scenario to another where moped is found to work and to two other scenarios where it does not .we present a discussion of the criteria under which we believe moped will accurately represent the likelihood in section 5 , as well as a discussion of an implementation of the solution provided by .full details of the moped method are given in , here we will only present a limited introduction .we begin by defining our data as a vector , .our model describes by a signal plus random noise , where the signal is given by a vector that is a function of the set of parameters defining our model , and the true parameters are given by .the noise is assumed to be gaussian with zero mean and noise covariance matrix , where the angle brackets indicate an ensemble average over noise realisations ( in general this matrix may also be a function of the parameters ) .the full likelihood for data points in is given by \mathcal{L}(\btheta) = \frac{1}{(2\pi)^{N/2}|\mathcal{N}(\btheta)|^{1/2}} \exp\left\{ -\frac{1}{2}[{\bf x}-{\bf u}(\btheta)]^{\textrm{T}} \mathcal{N}(\btheta)^{-1} [{\bf x}-{\bf u}(\btheta)] \right\} . at each point , then , this requires the calculation of the determinant and inverse of an matrix .
both scale as ,so even for smaller datasets this can become cumbersome .moped allows one to eliminate the need for this matrix inversion by compressing the data points in into data values , one for each parameters of the model .additionally , moped creates the compressed data values such that they are independent and have unit variance , further simplifying the likelihood calculation on them to an operation .typically , so this gives us a significant increase in speed .a single compression is done on the data , , and then again for each point in parameter space where we wish to compute the likelihood .the compression is done by generating a set of weighting vectors , ( ) , from which we can generate a set of moped components from the theoretical model and data , note that the weighting vectors must be computed at some assumed fiducial set of parameter values , .the only choice that will truly maintain the likelihood peak is when the fiducial parameters are the true parameters , but obviously we will not know these in advance for real analysis situations .thus , we can choose our fiducial model to be anywhere and iterate the procedure , taking our likelihood peak in one iteration as the fiducial model for the next iteration .this process will converge very quickly , and may not even be necessary in some instances . for our later examples , since we do know the true parameters we will use these as the fiducial ( ) in order to remove this as a source of confusion ( all equations , however , are written for the more general case ) .note that the true parameters , , will not necessarily coincide with the peak of the original likelihood or the peak of the moped likelihood ( see below ) .the weighting vectors must be generated in some order so that each subsequent vector ( after the first ) can be made orthogonal to all previous ones .we begin by writing the derivative of the model with respect to the parameter as .this gives us a solution for the first weighting vector , properly normalised , of the first compressed value is and will weight up the data combination most sensitive to the first parameter .the subsequent weighting vectors are made orthogonal by subtracting out parts that are parallel to previous vectors , and are normalized .the resulting formula for the remaining weighting vectors is where .weighting vectors generated with equations and form an orthnomal set with respect to the noise covariance matrix so that this means that the noise covariance matrix of the compressed values is the identity , which significantly simplifies the likelihood calculation .the new likelihood function is given by where represents the compressed data and represents the compressed signal .this is a much easier likelihood to calculate and is time - limited by the generation of a new signal template instead of the inversion of the noise covariance matrix .the peak value of the moped likelihood function is not guaranteed to be unique as there may be other points in the original parameter space that map to the same point in the compressed parameter space ; this is a characteristic that we will investigate .moped implicity assumes that the covariance matrix , , is independent of the parameters . with this assumption ,a full likelihood calculation with data points would require only an operation at each point in parameter space ( or if is diagonal ) . 
in moped, however , the compression of the theoretical data is an linear operation followed by an likelihood calculation .thus , one loses on speed if is diagonal but gains by a factor of otherwise . for the data sets we will analyze , .we begin , though , by assuming a diagonal for simplicity , recognizing that this will cause a speed reduction but that it is a necessary step before addressing a more complex noise model .one can iterate the parameter estimation procedure if necessary , taking the maximum likelihood point found as the new fiducial and re - analyzing ( perhaps with tighter prior constraints ) ; this procedure is recommended for moped in , but is not always found to be necessary .moped has the additional benefit that the weighting vectors , , need only to be computed once provided the fiducial model parameters are kept constant over the analysis of different data sets . computed compressed parameters , ,can also be stored for re - use and require less memory than storing the entire theoretical data set .in order to demonstrate some of the limitations of the applicability of the moped algorithm , we will consider a few test cases .these originate in the context of gravitational wave data analysis for the _ laser interferometer space antenna _( _ lisa _ ) since it is in this scenario that we first discovered such cases of failure .the full problem is seven - dimensional parameter estimation , but we have fixed most of these variables to their known true values in the simulated data set in order to create a lower - dimensional problem that is simpler to analyse .we consider the case of a sine - gaussian burst signal present in the lisa detector .the short duration of the burst with respect to the motion of lisa allows us to use the static approximation to the response . in frequency space ,the waveform is described by ( ) here is the dimensionless amplitude factor ; is the width of the gaussian envelope of the burst measured in cycles ; is the central frequency of the oscillation being modulated by the gaussian envelope ; and is the central time of arrival of the burst .this waveform is further modulated by the sky position of the burst source , and , and the burst polarisation , , as they project onto the detector .the one - sided noise power spectral density of the lisa detector is given by ( ) where is the light travel time along one arm of the lisa constellation , is the proof mass acceleration noise and is the shot noise .this is independent of the signal parameters and all frequencies are independent of each other , so the noise covariance matrix is constant and diagonal .this less computationally expensive example allows us to show some interesting examples .we begin by taking the one - dimensional case where the only unknown parameter of the model is the central frequency of the oscillation , .we set and ; we then analyze a segment of lisa data , beginning at , sampled at a cadence .for this example , the data was generated with random noise ( following the lisa noise power spectrum ) at an snr of with ( thus for moped ) .the prior range on the central frequency goes from to . samples uniformly spaced in were taken and their likelihoods calculated using both the original and moped likelihood functions .the log - likelihoods are shown in figure [ fig : likecomp ] .note that the absolute magnitudes are not important but the relative values within each plot are meaningful .both the original and moped likelihoods have a peak close to the input value . 
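to make the compression used in these examples concrete , the following numpy sketch builds the weighting vectors from numerical derivatives of a model at a fiducial point , orthogonalises them with respect to the noise covariance , and evaluates the compressed likelihood ; the toy damped - sinusoid model , fiducial values and noise level are illustrative assumptions , not the lisa burst model used here .

```python
import numpy as np

def moped_vectors(model, fid, noise_cov, eps=1e-6):
    """Build MOPED weighting vectors b_1..b_M at the fiducial parameters.

    model(theta) -> length-N signal; noise_cov is the N x N noise covariance
    (assumed parameter independent).  Derivatives are taken numerically.
    Each b_m is orthogonalised against the previous vectors with respect to
    noise_cov and normalised so that b_m^T noise_cov b_m = 1.
    """
    ninv = np.linalg.inv(noise_cov)
    bs = []
    for m in range(len(fid)):
        step = np.zeros(len(fid))
        step[m] = eps
        du = (model(fid + step) - model(fid - step)) / (2 * eps)   # du/dtheta_m
        b = ninv @ du
        for prev in bs:                  # subtract components along earlier vectors
            b -= (du @ prev) * prev
        bs.append(b / np.sqrt(b @ noise_cov @ b))
    return np.array(bs)

def moped_loglike(y_data, theta, model, bs):
    """Compressed log-likelihood: the y_m are independent with unit variance."""
    y_model = bs @ model(theta)
    return -0.5 * np.sum((y_data - y_model) ** 2)

# --- toy example: 2-parameter damped sinusoid in white noise ---
t = np.linspace(0.0, 1.0, 200)
model = lambda th: th[0] * np.sin(2 * np.pi * 10 * t) * np.exp(-t / th[1])
true, fid = np.array([1.0, 0.3]), np.array([0.9, 0.25])
noise_cov = 0.05 ** 2 * np.eye(t.size)

rng = np.random.default_rng(0)
data = model(true) + 0.05 * rng.standard_normal(t.size)

bs = moped_vectors(model, fid, noise_cov)
y_data = bs @ data                       # 2 numbers replace 200 data points
print(moped_loglike(y_data, true, model, bs), moped_loglike(y_data, fid, model, bs))
```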
for the chosen template.,width=312 ]we see , however , that in going from the original to moped log - likelihoods , the latter also has a second peak of equal height at an incorrect . to see where this peak comes from, we look at the values of the compressed parameter as it varies with respect to as shown in figure [ fig : yf_vs_f ] .the true compressed value peak occurs at , where .however , we see that there is another frequency that yields this exact same value of ; it is at this frequency that the second , incorrect peak occurs . by creating a mapping from to that is not one - to - one , moped has created the possibility for a second solution that is indistinguishable in likelihood from the correct one .this is a very serious problem for parameter estimation .interestingly , we find that even when moped fails in a one - parameter case , adding a second parameter may actually rectify the problem , although not necessarily . if we now allow the width of the burst , , to be a variable parameter , there are now two orthognal moped weighting vectors that need to be calculated .this gives us two compressed parameters for each pair of and .each of these may have its own unphysical degeneracies , but in order to give an unphysical mode of equal likelihood to the true peak , these degeneracies will need to coincide . in figure [ fig : ytruecontours ] , we plot the contours in space of where as ranges over and values .we can clearly see the degeneracies present in either variable , but since these only overlap at the one location , near to where the true peak is , there is no unphysical second mode in the moped likelihood . and as they vary over and .the one intersection is the true maximum likelihood peak.,width=312 ] hence , when we plot the original and moped log - likelihoods in figure [ fig : fqlikes ] , although the behaviour away from the peak has changed , the peak itself remains in the same location and there is no second mode .adding more parameters , however , does not always improve the situation .we now consider the case where is once again fixed to its true value and we instead make the polarisation of the burst , , a variable parameter .there are degeneracies in both of these parameters and in figure [ fig : ytruecontours3 ] we plot the contours in -space where the compressed values are each equal to the value at the maximum moped likelihood point .these two will necessarily intersect at the maximum likelihood solution , near the true value ( hz and rad ) , but a second intersection is also apparent .this second intersection will have the same likelihood as the maximum and be another mode of the solution .however , as we can see in figure [ fig : fpslikes ] in the left plot , this is not a mode of the original likelihood function .moped has , in this case , created a second mode of equal likelihood to the true peak . and values as they vary as functions of and .,width=312 ] for an even more extreme scenario , we now fix to the true and allow the time of arrival of the burst to vary ( we also define ) . in this scenario , the contours in -space where are much more complicated .thus , we have many more intersections of the two contours than just at the likelihood peak near the true values and moped creates many alternative modes of likelihood equal to that of the original peak .this is very problematic for parameter estimation . 
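the degeneracies described above can be anticipated before any data are analysed by mapping the compressed values over the prior range and recording where they return to their fiducial values ; the one - parameter sketch below does this for a toy signal ( the signal , grid and noise level are assumptions ) , and the two - parameter analogue would look for points where both compressed values match simultaneously , i.e. the contour intersections discussed next .

```python
import numpy as np

# One-parameter toy model: which frequencies map to the same MOPED
# compressed value as the fiducial?  (Toy signal and grid are assumptions.)
t = np.linspace(0.0, 1.0, 400)
model = lambda f: np.sin(2 * np.pi * f * t) * np.exp(-((t - 0.5) / 0.1) ** 2)

sigma, f_fid, eps = 0.1, 10.0, 1e-4
du = (model(f_fid + eps) - model(f_fid - eps)) / (2 * eps)    # du/df at the fiducial
b = du / (sigma * np.sqrt(du @ du))                           # single weighting vector (white noise)

grid = np.linspace(5.0, 15.0, 4001)
y = np.array([b @ model(f) for f in grid])                    # compressed value y(f)
y_fid = b @ model(f_fid)

# report grid points where y(f) crosses y_fid: every crossing away from f_fid
# is a candidate spurious MOPED peak that should be checked against the full likelihood
crossings = grid[1:][np.diff(np.sign(y - y_fid)) != 0]
print("y(f) = y(f_fid) near:", np.round(crossings, 2))
```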
in figure[ fig : ytruecontours2 ] we plot these contours so the multiple intersections are apparent .figure [ fig : ftlikes ] shows the original and moped log - likelihoods , where we can see the single peak becoming a set of peaks . and values as they vary as functions of and .we can see many intersections here other than the true peak.,width=312 ]what we can determine from the previous two sections is a general rule for when moped will generate additional peaks in the likelihood function equal in magnitude to the true one . for an -dimensional model , if we consider the -dimensional hyper - surfaces where , then any point where these hyper - surfaces intersect will yield a set of -parameter values with likelihood equal to that at the peak near the true values .we expect that there will be at least one intersection at the likelihood peak corresponding to approximately the true solution .however , we have shown that other peaks can exist as well .the set of intersections of contour surfaces will determine where these additional peaks are located .this degeneracy will interact with the model s intrinsic degeneracy , as any degenerate parameters will yield the same compressed values for different original parameter values .unfortunately , there is no simple way to find these contours other than by mapping out the values , which is equivalent in procedure to mapping the moped likelihood surface .the benefit comes when this procedure is significantly faster than mapping the original likelihood surface .the mapping of can even be performed before data is obtained or used , if the fiducial model is chosen in advance ; this allows us to analyse properties of the moped compression before applying it to data analysis . if the moped likelihood is mapped and there is only one contour intersection , then we can rely on the moped algorithm and will have saved a considerable amount of time , since moped has been demonstrated to provide speed - ups of a factor of up to in .however , if there are multiple intersections then it is necessary to map the original likelihood to know if they are due to degeneracy in the model or were created erroneously by moped . in this latter case, the time spent finding the moped likelihood surface can be much less than that which will be needed to map the original likelihood , so relatively little time will have been wasted . if any model degeneracies are known in advance , then we can expect to see them in the moped likelihood and will not need to find the original likelihood on their account .one possible way of determining the validity of degenerate peaks in the moped likelihood function is to compare the original likelihoods of the peak parameter values with each other . 
by using the maximum moped likelihood point found in each mode and evaluating the original likelihood at this point, we can determine which one is correct .the correct peak and any degeneracy in the original likelihood function will yield similar values to one another , but a false peak in the moped likelihood will have a much lower value in the original likelihood and can be ruled out .this means that a bayesian evidence calculation can not be obtained from using the moped likelihood ; however , the algorithm was not designed to be able to provide this .the solution for this problem presented in is to use multiple fiducial models to create multiple sets of weighting vectors .the log - likelihood is then averaged across these choices .each different fiducial will create a set of likelihood peaks that include the true peaks and any extraneous ones .however , the only peaks that will be consistent between fiducials are the correct ones .therefore , the averaging maintains the true peaks but decreases the likelihood at incorrect values .this was tested with 20 random fiducials for the two - parameter models presented and was found to leave only the true peak at the maximum likelihood value .other , incorrect , peaks are still present , but at log - likelihood values five or more units below the true peak .when applied to the full seven parameter model , however , the snr threshold for signal recovery is increased significantly , from to .the moped algorithm for reducing the computational expense of likelihood functions can , in some examples , be extremely useful and provide orders of magnitude of improvement .however , as we have shown , this is not always the case and moped can produce erroneous peaks in the likelihood that impede parameter estimation .it is important to identify whether or not moped has accurately portrayed the likelihood function before using the results it provides .some solutions to this problem have been presented and tested ,pg s phd is funded by the gates cambridge trust .feroz f , gair j , graff p , hobson m p , & lasenby a n , cqg , * 27 * 7 pp .075010 ( 2010 ) , arxiv:0911.0288 [ gr - qc ] .gupta s & heavens a f , mnras * 334 * 167 - 172 ( 2002 ) , arxiv : astro - ph/0108315 .heavens a f , jimenez r , & lahav o , mnras * 317 * 965 - 972 ( 2000 ) , arxiv : astro - ph/9911102 .protopapas p , jimenez r , & alcock a , mnras * 362 * 460 - 468 ( 2005 ) , arxiv : astro - ph/0502301 .
we investigate the use of the multiple optimised parameter estimation and data compression algorithm ( moped ) for data compression and faster evaluation of likelihood functions . since moped only guarantees maintaining the fisher matrix of the likelihood at a chosen point , multimodal and some degenerate distributions will present a problem . we present examples of scenarios in which moped does faithfully represent the true likelihood but also cases in which it does not . through these examples , we aim to define a set of criteria for which moped will accurately represent the likelihood and hence may be used to obtain a significant reduction in the time needed to calculate it . these criteria may involve the evaluation of the full likelihood function for comparison . methods : data analysis ; methods : statistical
the mathematical theory of compressive sensing ( cs ) asserts that one can acquire signals from measurements whose rate is much lower than the total bandwidth . whereas the cs theory is now well developed , challenges concerning hardware implementations of cs - based acquisition devices , especially in optics , have only started being addressed .this paper will introduce a color video cs camera capable of capturing low - frame - rate measurements at acquisition , with high - frame - rate video recovered subsequently via computation ( decompression of the measured data ) .the coded aperture compressive temporal imaging ( cacti ) system uses a moving binary mask pattern to modulate a video sequence within the integration time many times prior to integration by the detector .the number of high - speed frames recovered from a coded - exposure measurement depends on the speed of video modulation . within the cacti framework, modulating the video times per second corresponds to moving the mask pixels within the integration time .if frames are to be recovered per compressive measurement by a camera collecting data at frames - per - second ( fps ) , the time variation of the code is required to be fps .the liquid - crystal - on - silicon ( lcos ) modulator used in can modulate as fast as fps by pre - storing the exposure codes , but , because the coding pattern is continuously changed at each pixel throughout the exposure , it requires considerable energy consumption ( ) . the mechanical modulator in , by contrast , modulates the exposure through periodic mechanical translation of a single mask ( coded aperture ) , using a pizeoelectronic translator that consumes minimal energy ( ) .the coded aperture compressive temporal imaging ( cacti ) now has been extended to the color video , which can capture r " , g " and b " channels of the context . by appropriate reconstruction algorithms , we can get frames color video from a single gray - scale measurement .while numerous algorithms have been used for cs inversion , the bayesian cs algorithm has been shown with significant advantages of providing a full posterior distribution .this paper develops a new bayesian inversion algorithm to reconstruct videos based on raw measurements acquired by the color - cacti camera . 
by exploiting the hybrid three dimensional ( 3d ) tree - structure of the wavelet and dct ( discrete cosine transform ) coefficients , we have developed a hidden markov tree ( hmt ) model in the context of a bayesian framework .research in has shown that by employing the hmt structure of an image , the cs measurements can be reduced .this paper extends this hmt to 3d and a sophisticated 3d tree - structure is developed for video cs , with color - cacti shown as an example .experimental results with both simulated and real datasets verify the performance of the proposed algorithm .the basic model and inversion method may be applied to any of the compressive video cameras discussed above .let be the continuous / analog spatiotemporal volume of the video being measured ; represents a moving mask ( code ) with denoting its spatial translation at time ; and denotes the camera spatial sampling function , with spatial resolution .the coded aperture compressive camera system modulates each temporal segment of duration with the moving mask ( the motion is periodic with the period equal to ) , and collapses ( sums ) the coded video into a single photograph ( ) : and , with the detector size pixels .the set of data , which below we represent as , corresponds to the compressive measurement .the code / mask is here binary , corresponding to photon transmission and blocking ( see figure [ fig : dec ] ) . denote , defining the original continuous video sampled in space and in time ( discrete temporal frames , , within the time window of the compressive measurement ) .we also define we can rewrite ( [ eq : cacti - measurement ] ) as where is an added noise term , , and denotes element - wise multiplication ( hadamard product ) . in ( [ eq : cacti - measurement - discrete ] ) , denotes the mask / code at the shift position ( approximately discretized in time ) , and is the underlying video , for video frame within cs measurement . dropping subscript for simplicity , ( [ eq : cacti - measurement - discrete ] ) can be written as \\ \mathbf{x}&=&\mathrm{vec}([\mathbf{z}_{1},\cdots,\mathbf{z}_{n_{t } } ] ) , \vspace{-3mm}\end{aligned}\ ] ] where and is standard vectorization .we record temporally compressed measurements for rgb colors on a bayer - filter mosaic , where the three colors are arranged in the pattern shown in the right bottom of figure [ fig : dec ] .the single coded image is partitioned into four components , one for r and b and two for g ( each is the size of the original spatial image ) .the cs recovery ( video from a single measurement ) is performed separately on these four mosaiced components , prior to demosaicing as shown in figure [ fig : dec](b ) .one may also jointly perform cs inversion on all 4 components , with the hope of sharing information on the importance of ( here wavelet and dct ) components ; this was also done , and the results were very similar to processing r , b , g1 and g2 separately .note that this is the key difference between color - cacti and the previous work of cacti in .an image s zero - tree structure has been investigated thoroughly since the advent of wavelets .the 3d wavelet tree structure of video , an extension of the 2d image , has also attracted extensive attention in the literature .introduced a tree - based representation to characterize the block - dct transform associated with jpeg . 
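returning briefly to the measurement model : it is compact enough to prototype directly . the python sketch below is illustrative only ( the array sizes , the random binary mask , the one - pixel shift per sub - frame and the rggb layout are assumptions , not the hardware settings ) ; it forms a single coded snapshot by summing mask - modulated sub - frames and then partitions the bayer mosaic into the r , g1 , g2 and b components that are inverted separately .

\begin{verbatim}
import numpy as np

def cacti_measurement(frames, mask, shifts, noise_sigma=0.0, rng=None):
    """Coded-exposure forward model: y = sum_t shift_t(mask) * z_t + noise."""
    rng = rng if rng is not None else np.random.default_rng(0)
    ny, nx, nt = frames.shape
    y = np.zeros((ny, nx))
    for k in range(nt):
        dy, dx = shifts[k]
        coded = np.roll(mask, shift=(dy, dx), axis=(0, 1))  # mask position at sub-frame k
        y += coded * frames[:, :, k]
    if noise_sigma > 0.0:
        y += noise_sigma * rng.standard_normal(y.shape)
    return y

def split_bayer_rggb(y):
    """Partition a coded Bayer mosaic into R, G1, G2, B quarter-size images
    (an RGGB layout is assumed here purely for illustration)."""
    return {"R": y[0::2, 0::2], "G1": y[0::2, 1::2],
            "G2": y[1::2, 0::2], "B": y[1::2, 1::2]}

rng = np.random.default_rng(1)
nt = 8                                             # sub-frames per snapshot
frames = rng.random((64, 64, nt))                  # stand-in for the unknown video
mask = (rng.random((64, 64)) < 0.5).astype(float)  # random binary transmission code
shifts = [(k, 0) for k in range(nt)]               # one-pixel vertical shift per sub-frame
y = cacti_measurement(frames, mask, shifts, noise_sigma=0.01)
components = split_bayer_rggb(y)
print({name: img.shape for name, img in components.items()})
\end{verbatim}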
for the video representation ,we here use the wavelet in space and dct in time .considering the video sequence has frames with spatial pixels , and let denote the indices of the dct / wavelet coefficients .assume there are levels ( scales ) of the coefficients ( in figure [ fig:3d_tree ] ) .the parent - children linkage of the coefficients are as follows : a ) a root - node has 7 children , , where denotes the size of scaling ( ll ) coefficients ; b ) an internal node has 8 children ; and c ) a leaf - node has no children .when the tree structure is used in 3d dct , we consider the block size of the 3d dct is , and .the parent - children linkage is the same as with the wavelet coefficients .the properties of wavelet coefficients that lead to the bayesian model derived in the following section are : + 1 ) large / small values of wavelet coefficients generally persist across the scales of the wavelet tree ( the two states of the binary part of the model developed in the following section ) .+ 2 ) persistence becomes stronger at finer scales ( the confidence of the probability of the binary part is proportional to the number of coefficients at that scale ) .+ 3 ) the magnitude of the wavelet coefficients _ decreases exponentially _ as we move to the finer scales . in this paper, we use a multiplicative gamma prior , a typical shrinkage prior , for the non - zero wavelet coefficients at different scale to embed this decay .let , , be orthonormal matrices defining bases such as wavelets or the dct .define where symbolizes the 3d wavelet / dct coefficients corresponding to and and denotes the kronecker product .it is worth noting here the is the 3d transform of the projection matrix . unlike the model used in , where the projection matrix is put directly on the wavelet / dct coefficients , in the coding strategy of color - cacti , we get the projection matrix from the hardware by capturing the response of the mask at different positions . following this ,we transform row - by - row to the wavelet / dct domain , to obtain .the measurement noise is modeled as zero mean gaussian with precision matrix ( inverse of the covariance matrix ) , where is the identity matrix .we have : to model the sparsity of the 3d coefficients of wavelet / dct , the _ spike - and - slab _ prior is imposed on as : where is a vector of non - sparse coefficients and is a binary vector ( zero / one indicators ) denoting the two state of the hmt , with zero " signifying the low - state " in the hmt and one " symbolizing the high - state " .note when the coefficients lie in the low - state " , they are explicitly set to zero , which leads to the sparsity . to model the linkage of the tree structure across the scales of the wavelet / dct , we use the the binary vector , , which is drawn from a bernoulli distribution .the parent - children linkage is manifested by the probability of this vector .we model is drawn from a gaussian distribution with the precision modeled as a multiplicative gamma prior .the full bayesian model is : where denotes the component at level , and denotes the scaling coefficients of wavelet ( or dc level of a dct ) . 
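to make the prior concrete , the sketch below draws coefficients from a simplified version of this model : a binary high / low state is passed from parent to children with a persistence probability , coefficients in the low state are set exactly to zero ( the spike ) , and a multiplicative gamma prior makes the typical magnitude decay towards finer scales . the transition probabilities and the gamma shape used here are illustrative choices , not the values used in the paper .

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def sample_hmt_coefficients(n_scales, children=8, p_root=0.9,
                            p_high_given_high=0.8, p_high_given_low=0.1,
                            gamma_shape=2.0):
    """Toy draw from a spike-and-slab hidden Markov tree prior."""
    precision = 1.0
    state = np.array([rng.random() < p_root])      # root scale: a single node for simplicity
    coeffs = []
    for _ in range(n_scales):
        precision *= rng.gamma(gamma_shape, 1.0)   # multiplicative gamma: precision grows with scale
        slab = rng.normal(0.0, 1.0 / np.sqrt(precision), size=state.shape)
        coeffs.append(np.where(state, slab, 0.0))  # low state => coefficient exactly zero
        p_child = np.where(np.repeat(state, children),
                           p_high_given_high, p_high_given_low)
        state = rng.random(p_child.shape) < p_child
    return coeffs

coeffs = sample_hmt_coefficients(n_scales=4)
print([float(np.abs(c).max()) for c in coeffs])    # magnitudes tend to shrink towards finer scales
\end{verbatim}

in the full model such draws form the prior on the 3d wavelet / dct coefficients , and the posterior over states and slab values is obtained by variational inference rather than by forward sampling .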
in the experiments , we use the following settings : where is the number of coefficients at level , and is the length of .we developed the variational bayesian methods to infer the parameters in the model as in .the posterior inference of , thus is different from the model in , and we show it below : where denotes the expectation in .both simulated and real datasets are adopted to verify the performance of the proposed model for video reconstruction .the hyperparameters are setting as ; the same used in .best results are found when and are wavelets ( here the daubechies-8 ) and corresponds to a dct .the proposed tree - structure bayesian cs inversion algorithm is compared with the following algorithms : ) generalized alternating projection ( gap ) algorithm ; ) two - step iterative shrinkage / thresholding ( twist ) ( with total variation norm ) ; ) k - svd with orthogonal matching pursuit ( omp ) used for inversion ; ) a gaussian mixture model ( gmm ) based inversion algorithm ; and ) the linearized bregman algorithm .the -norm of dct or wavelet coefficients is adopted in linearized bregman and gap with the same transformation as the proposed model .gmm and k - svd are patch - based algorithms and we used a separate dataset for training purpose . a batch of training videos were used to pre - train k - svd and gmm , and we selected the best reconstruction results for presentation here .we consider a scene in which a basketball player performs a dunk ; this video is challenging due to the complicated motion of the basketball players and the varying lighting conditions ; see the example video frames in figure [ fig : dec](a ) .we consider a binary mask , with 1/0 coding drawn at random bernoulli(0.5 ) ; the code is shifted spatially via the coding mechanism in figure [ fig : dec](a ) ) , as in our physical camera .the video frames are spatially , and we choose .it can be seen clearly that the proposed tree - structure bayesian cs algorithm demonstrates improved psnr performance for the inversion .we test our algorithm using real datasets captured by our color - cacti camera , with selected results shown in figures [ fig:3balls]-[fig : hammer ] .figure [ fig:3balls ] shows low - framerate ( captured at 30fps ) compressive measurements of fruit falling / rebounding and corresponding high - framerate reconstructed video sequences . in the left are shown four contiguous measurements , and in the right are shown 22 frames reconstructed per measurement .note the spin of the red apple and the rebound of the orange in the reconstructed frames .figure [ fig : hammer ] shows a process of a purple hammer hitting a red apple with 3 contiguous measurements .we can see the clear hitting process from the reconstructed frames .we have implemented a color video cs camera , color - cacti , capable of compressively capturing and reconstructing videos at low - and high - framerates , respectively . a tree - structure bayesian compressive sensing framework is developed for the video cs inversion by exploiting the 3d tree structure of the wavelet / dct coefficients .both simulated and real datasets demonstrate the efficacy of the proposed model .x. yuan , p. llull , x. liao , j. yang , g. sapiro , d. j. brady , and l. carin , `` low - cost compressive sensing for color video and depth , '' in _ ieee conference on computer vision and pattern recognition ( cvpr ) _ , 2014 .
a bayesian compressive sensing framework is developed for video reconstruction based on the color coded aperture compressive temporal imaging ( cacti ) system . by exploiting the three - dimensional ( 3d ) tree structure of the wavelet and discrete cosine transform ( dct ) coefficients , a bayesian compressive sensing inversion algorithm is derived to reconstruct ( up to 22 ) color video frames from a _ single _ monochromatic compressive measurement . both simulated and real datasets are used to verify the performance of the proposed algorithm . compressive sensing , video , bayesian , tree structure , wavelet
kullback - leibler ( kl ) divergence ( relative entropy ) can be considered as a measure of the difference / dissimilarity between sources .estimating kl divergence from finite realizations of a stochastic process with unknown memory is a long - standing problem , with interesting mathematical aspects and useful applications to automatic categorization of symbolic sequences .namely , an empirical estimation of the divergence can be used to classify sequences ( for approaches to this problem using other methods , in particular true metric distances , see , ; see also ) . in ziv andmerhav showed how to estimate the kl divergence between two sources , using the parsing scheme of lz77 algorithm on two finite length realizations .they proved the consistence of the method by showing that the estimate of the divergence for two markovian sources converges to their relative entropy when the length of the sequences diverges .furthermore they proposed this estimator as a tool for an `` universal classification '' of sequences .a procedure based on the implementations of lz77 algorithm ( gzip , winzip ) is proposed in .the estimate obtained of the relative entropy is then used to construct phylogenetic trees for languages and is proposed as a tool to solve authorship attribution problems . moreover ,the relation between the relative entropy and the estimate given by this procedure is analyzed in .two different algorithms are proposed and analyzed in , see also . the first one is based on the burrows - wheeler block sorting transform , while the other uses the context tree weighting method .the authors proved the consistence of these approximation methods and show that these methods outperform the others in experiments . in is shown how to construct an entropy estimator for stationary ergodic stochastic sources using non - sequential recursive pairs substitutions method , introduced in ( see also and references therein for similar approaches ) . in this paperwe want to discuss the use of similar techniques to construct an estimator of relative ( and cross ) entropy between a pair of stochastic sources .in particular we investigate how the asymptotic properties of concurrent pair substitutions might be used to construct an optimal ( in the sense of convergence ) relative entropy estimator .a second relevant question arises about the computational efficiency of the derived indicator .while here we address the first , mostly mathematical , question , we leave the computational and applicative aspects for forthcoming research .the paper is structured as follows : in section [ sec : notations ] we state the notations , in section [ sec : nsrps ] we describe the details of the non - sequential recursive pair substitutions ( nsrps ) method , in section [ sec : scaling ] we prove that nsrps preserve the cross and the relative entropy , in section [ sec : convergence ] we prove the main result : we can obtain an estimate of the relative entropy by calculating the 1-block relative entropy of the sequences we obtain using the nsrps method .we introduce here the main definitions and notations , often following the formalism used in . 
given a finite alphabet , we denote with the set of finite words .given a word , we denote by its length and if and , we use to indicate the subword .we use similar notations for one - sided infinite ( elements of ) or double infinite words ( elements of ) .often sequences will be seen as finite or infinite realizations of discrete - time stochastic stationary , ergodic processes of a random variable with values in .the -th order joint distributions identify the process and its elements follow the consistency conditions : when no confusion will arise , the subscript will be omitted , and we will just use to denote both the measure of the cylinder and the probability of the finite word .equivalently , a distribution of a process can also be defined by specifying the initial one - character distribution and the successive conditional distributions : given an ergodic , stationary stochastic source we define as usual : where denotes the concatenated word and is just the process average . the following properties and results are very well known , but at the same time quite important for the proofs and the techniques developed here ( and also in ) : * * a process is -markov if and only if . * _ entropy theorem _ : for almost all realizations of the process , we have in this paper we focus on properties involving pairs of stochastic sources on the same alphabet with distributions and , namely _cross entropy _ and the related _ relative entropy _ ( or _ kullback leibler divergence _ ) : _ n - conditional cross entropy _ _cross entropy _ _ relative entropy ( kullback - leibler divergence ) _ note that moreover we stress that , if is k - markov then , for any namely for any : & = - \sum_{\omega \in a^{l - k},\,b\in a^k,\,a\in a } \mu ( \omega ba ) \log \nu(a\vert b ) \\ & = - \sum_{b\in a^k,\,a\in a } \mu(ba ) \log \nu(a\vert b)= h_k(\mu\|\nu ) \end{array}\ ] ] note that depends only on the two - symbol distribution of .entropy and cross entropy can be related to the asymptotic behavior of properly defined _ returning times _ and _ waiting times _ , respectively .more precisely , given an ergodic , stationary process , a sample sequence and , we define the returning time of the first characters as : similarly , given two realizations and of and respectively , we define the obviously .we now have the following two important results : [ returning ] if is a stationary , ergodic process , then [ waiting ] if is stationary and ergodic , is k - markov and the marginals of are dominated by the corresponding marginals of , i.e. , then now introduce a family of transformations on sequences and the corresponding operators on distributions : given ( including ) , and , a _ pair substitution _ is a map which substitutes sequentially , from left to right , the occurrences of with . for example or : is always an injective but not surjective map that can be immediately extended also to infinite sequences .the action of shorten the original sequence : we denote by the inverse of the contraction rate : for -_typical _ sequences we can pass to the limit and define : an important remark is that if we start from a source where admissible words are described by constraints on consecutive symbols , this property will remain true even after an arbitrary pair substitution . in other words ( see theorem 2.1 in ) : a pair substitution maps pair constraints in pair constraints .a pair substitution naturally induces a map on the set of ergodic stationary measures on by mapping typical sequences w.r.t . 
the original measure in typical sequences w.r.t .the transformed measure : given then ( theorem 2.2 in ) exists and is constant almost everywhere in , moreover are the marginals of an ergodic measure on . again in ,the following results are proved showing how entropies transform under the action of , with expanding factor : _ invariance of entropy _ _ decreasing of the 1-conditional entropy _ moreover , maps 1-markov measures in 1-markov measures .in fact : _ decreasing of the k - conditional entropy _ moreover maps -markov measures in -markov measures .while later on we will give another proof of the first fact , we remark that this property , together with the decrease of the 1-conditional entropy , reflect , roughly speaking , the fact that the amount of information of , which is equal to that of , is more concentrated on the pairs of consecutive symbols .as we are interested in sequences of recursive pair substitutions , we assume to start with an initial alphabet and define an increasing alphabet sequence , , , .given and chosen ( not necessarily different ) : * we indicate with a new symbol and define the new alphabet as ; * we denote with the substitution map which substitutes whit the occurrences of the pair in the strings on the alphabet ; * we denote with the corresponding map from the measures on to the measures on ; * we define by the corresponding normalization factor .we use the over - line to denote iterated quantities : and also the asymptotic properties of clearly depend on the pairs chosen in the substitutions .in particular , if at any step the chosen pair is the pair of maximum of frequency of then ( theorem 4.1 in ) : regarding the asymptotic properties of the entropy we have the following theorem that rigorously show that becomes asymptotically 1-markov : if then the main results of this paper is the generalization of this theorem to the cross and relative entropy . before entering in the details of our construction let us sketch here the main steps .in particular let us consider the cross entropy ( the same argument will apply to the relative entropy ) of the measure with respect to the measure : i.e. . as we will show , but for the normalization factor , this is equal to the cross entropy of the measure w.r.t the measure : moreover , as we have seen above , if we choose the substitution in a suitable way ( for instance if at any step we substitute the pair with maximum frequency ) then and the measure becomes asymptotically 1-markov as .interestingly , we do not know if also diverges ( we will discuss this point in the sequel ) .nevertheless , noticing that the cross entropy of a 1-markov source w.r.t a generic ergodic source is equal to the 1-markov cross entropy between the two sources , it is reasonable to expect that the cross entropy can be obtained as the following limit : this is exactly what we will prove in the two next sections .we first show how the relative entropy between two stochastic process and scales after acting with the _ same _ pair substitution on both sources to have and .more precisely we make use of theorem [ waiting ] and have the following : [ main1 ] if is ergodic , is a markov chain and , then if is a pair substitution _ proof . 
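a single pair substitution and its expansion factor are straightforward to implement . the sketch below ( our own illustration on an arbitrary string ) performs the left - to - right replacement of a chosen pair by a fresh symbol and reports the ratio between input and output lengths , i.e. the empirical counterpart of the factor z defined above .

\begin{verbatim}
def pair_substitution(seq, pair, new_symbol):
    """Sequentially (left to right) replace occurrences of `pair` by `new_symbol`."""
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(new_symbol)
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out

s = list('abbababbbab')                  # arbitrary example string
t = pair_substitution(s, ('a', 'b'), 'c')
print(''.join(t))                        # -> 'cbccbbc'
print('empirical Z =', len(s) / len(t))  # |w| / |G(w)|  (= 11/7 here)
\end{verbatim}

note that the replacement is sequential : in a run like ' aaa ' with the pair ( a , a ) only the first two symbols are merged , which is the convention assumed throughout .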
_ to fix the notations , let us denote by and the infinite realizations of the process of measure and respectively , and by and the corresponding finite substrings .let us denote by the characters involved in the pair substitution .moreover let us denote the waiting time with the shorter notation : we now explore how the waiting time rescale with respect to the transformation : we consider the first time we see the sequence inside the sequence . to start with , we assume that as we can always consider th .[ waiting ] for realizations with a fixed prefix of positive probability .moreover we choose a subsequence such that is the smallest such that . of course as . in this case , it is easy to observe that then , using theorem [ waiting ] = \nonumber\\ & = & \lim_{i\to + \infty } \frac{n_i}{|g(w_1^{n_i})|}\frac{\log|g(w_1^{t_{n_i}})|}{n_i}= \nonumber\\ & = & \lim_{i\to + \infty } \frac{n_i}{|g(w_1^{n_i})|}\left[\frac{1}{n_i}\log ( t_{n_i } ) + \frac{1}{n_i}\log\left(\frac{|g(w_1^{t_{n_i}})|}{t_{n_i}}\right)\right]=\nonumber\\ & = & z^{\mu } h(\mu\|\nu ) \label{kl1}\end{aligned}\ ] ] where in the last step we used the fact that as , the definition of and theorem [ waiting ] for and . note that for , equation ( [ kl1 ] ) reproduces the content of theorem 3.1 of : that thus implies note that the limit in th . [ waiting ]is almost surely unique and then the initial restrictive assumption and the use of the subsequence have no consequences on the thesis ; this concludes the proof . before discussing the convergence of relative entropy under successive substitutions we go thorough a simple explicit example of the theorem [ main1 ] , in order to show the difficulties we deal with , when we try to use the explicit expressions of the transformed measures we find in . _ example ._ we treat here the most simple case : and are bernoulli binary processes with parameters and respectively .we consider the substitution given by .it is long but easy to verify that is a stationary , ergodic , 1-markov with equilibrium state where .for example , given a -generic sequence , corresponding to a -generic sequence ( ) : clearly : using the same argument as before , it is now possible to write down the probability distribution of pair of characters for . 
againthe following holds for a generic process : \frac{{\mathcal g}\mu(10)}z= \mu(10)-\mu(010 ) -\mu(101)+\mu(0101 ) & \frac{{\mathcal g}\mu(11)}z=\mu(11)-\mu(011 ) & \frac{{\mathcal g}\mu(12)}z= \mu(101)-\mu(0101)\\[4pt ] \frac{{\mathcal g}\mu(20)}z= \mu(010)-\mu(0101 ) & \frac{{\mathcal g}\mu(21)}z= \mu(011 ) & \frac{{\mathcal g}\mu(22)}z= \mu(0101 ) \end{array}\ ] ] it is easy to see that .now we can write the transition matrix for the process as : for bernoulli processes : we now denote with the transition matrix for .for the two 1-markov processes , we have via straightforward calculations , using the product structure of the measure : \\ + z\mu(11)\left[\mu(00)\log\frac{\mu(00)}{\nu(00)}+\mu(1)\log\frac{\mu(1)}{\nu(1)}+\mu(01)\log\frac{\mu(01)}{\nu(01)}\right]\\ + z\mu(01)\left[\mu(00)\log\frac{\mu(00)}{\nu(00)}+\mu(1)\log\frac{\mu(1)}{\nu(1)}+\mu(01)\log\frac{\mu(01)}{\nu(01)}\right]\\ = z\mu(00 ) d(\mu\vert\vert\nu)+ z\mu(1)\left[\mu(00)\log\frac{\mu(00)}{\nu(00)}+\mu(1)\log\frac{\mu(1)}{\nu(1)}+\mu(01)\log\frac{\mu(01)}{\nu(01)}\right]\\ = z\mu(00 ) d(\mu\vert\vert\nu)+z\mu(1)\left[\mu(0)d(\mu\vert\vert\nu)+ d(\mu\vert\vert\nu)\right]\\ = z d(\mu\vert\vert\nu ) ( \mu(00)+\mu(10)+\mu(1))\\ = z d(\mu\vert\vert\nu)\end{aligned}\ ] ]we now prove that the renormalized 1-markov cross entropy between and converges to the cross - entropy between and as the number of pair substitution goes to . more precisely : [ main2 ] if as , _ proof ._ let us define , as in the following operators on the ergodic measures : is the projection operator that maps a measure to its 1-markov approximation , whereas is the operator such that for any arbitrary we notice ( see for the details ) that the normalization constant for is the same of that for : the measure is not -markov , but we know that it becomes 1-markov after steps of substitutions , in fact it becomes .moreover , as discussed in , it is an approximation of if diverges : for any of length , now it is easy to establish the following chain of equalities : where we have used the conservation of the cross entropy and the fact that if are 1-markov , as shown in eq .[ h - k - markov ] . to conclude the proof we have to show that this is an easy consequence of eq .[ convergenza ] the definition [ hk ] and eq .[ hktoh ] .it is important to remark that we are assuming the divergence of too , as not being necessary for the convergence to the ( rescaled ) two - characters relative entropy .nevertheless , it would be interesting to understand both the topological and statistical constraints that prevent or permit the divergence of the expanding factor .experimentally , it seems that if we start with two measures with finite relative entropy ( i.e. with absolutely continuous marginals ) , then if we choose the standard strategy ( most frequent pair substitution ) for the sequence of pair substitutions that yields the divergence of , we also simultaneously obtain the divergence of ( see for instance fig . [fig : z ] ) . on the other hand, it seems possible to consider particular sources and particular strategies of pairs substitutions withdiverging , that prevent the divergence of . at this momentwe do not have conclusive rigorous mathematical results on this subject . 
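the scaling verified analytically in this example can also be checked numerically , and the same loop is essentially the estimator analysed next : repeatedly substitute the currently most frequent pair of the mu - sequence in both sequences , accumulate the expansion factor , and divide the empirical two - symbol ( 1-markov ) divergence of the transformed sequences by that factor . the python sketch below is a rough illustration ( finite sequences , a crude additive smoothing to avoid log 0 , an arbitrary number of substitution steps ) , not an optimised implementation .

\begin{verbatim}
import numpy as np
from collections import Counter

def pair_substitution(seq, pair, new_symbol):
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(new_symbol); i += 2
        else:
            out.append(seq[i]); i += 1
    return out

def markov1_divergence(x, y, eps=0.5):
    """Empirical 1-conditional relative entropy (nats) between the sequences x and y."""
    px, py = Counter(zip(x, x[1:])), Counter(zip(y, y[1:]))
    sx, sy = Counter(x[:-1]), Counter(y[:-1])
    nx = len(x) - 1
    d = 0.0
    for (b, a), c in px.items():
        mu_cond = c / sx[b]
        nu_cond = (py.get((b, a), 0) + eps) / (sy.get(b, 0) + eps)   # crude smoothing
        d += (c / nx) * np.log(mu_cond / nu_cond)
    return d

rng = np.random.default_rng(4)
p, q, length = 0.3, 0.5, 500_000
x = list((rng.random(length) < p).astype(int))   # realization of mu = Bernoulli(p)
y = list((rng.random(length) < q).astype(int))   # realization of nu = Bernoulli(q)
exact = p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

Z, new_symbol = 1.0, 2
for step in range(1, 16):
    pair = Counter(zip(x, x[1:])).most_common(1)[0][0]   # most frequent pair of the mu-sequence
    x_new = pair_substitution(x, pair, new_symbol)
    y = pair_substitution(y, pair, new_symbol)
    Z *= len(x) / len(x_new)                             # running expansion factor of mu
    x, new_symbol = x_new, new_symbol + 1
    print(f"step {step:2d}: estimate = {markov1_divergence(x, y) / Z:.4f}  (exact {exact:.4f})")
\end{verbatim}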
finally , let us note that th .[ main2 ] do not give directly an algorithm to estimate the relative entropy : in any implementation we would have to specify the `` optimal '' number of pairs substitutions , with respect to the length of initial sequences and also with respect to the dimension of the initial alphabet .namely , in the estimate we have to take into account at least two correction terms , which diverges with : the entropy cost of writing the substitutions and the entropy cost of writing the frequencies of the pairs of characters in the alphabet we obtain after the substitutions ( or equivalent quantities if we use , for instance , arithmetic codings modeling the two character frequencies ) . for what concerns possible implementations of the method it is important to notice that the nsrps procedure can be implemented in linear time .therefore it seems reasonable that reasonably fast algorithms to compute relative entropy via nsrps can be designed .anyway , preliminary numerical experiments show that for sources of finite memory this method seems to have the same limitations of that based on parsing procedures , with respect to the methods based on the analysis of context introduced in . in fig .[ fig : h ] we show the convergence of the estimates of the entropies of the two sources and of the cross entropy , given th .[ main2 ] , for two markov process of memory 5 . in this case , the numbers of substitutions is small with respect to the length of the sequences , then the correction terms are negligible .let us finally note that the cross entropy estimate might show large variations for particular values of .this could be interpreted by the fact that for these values of pairs with particular relevance for one source with respect to the other have been substituted .this example suggest that the nsrps method for the estimation of the cross entropy should be useful in sequences analysis , for example in order to detect strings with a peculiar statistical role .99 d. benedetto , e. caglioti , d. gabrielli : non - sequential recursive pair substitution : some rigorous results ._ issn : 1742 - 5468 ( on line ) * 09 * pp . 121 doi:10.1088/1742.-5468/2006/09/p09011 ( 2006 )
the entropy of an ergodic source is the limit of properly rescaled 1-block entropies of sources obtained by applying successive non - sequential recursive pair substitutions . in this paper we prove that the cross entropy and the kullback - leibler divergence can be obtained in a similar way . _ keywords _ : information theory , source and channel coding , relative entropy .
smoothed particle hydrodynamics ( sph ) is a particle - based numerical method , pioneered by and , for solving the equations of hydrodynamics ( recent reviews include ; ; ; ) . in sph ,the particles trace the flow and serve as interpolation points for their neighbours .this lagrangian nature of sph makes the method particularly useful for astrophysics , where typically open boundaries apply , though it becomes increasingly popular also in engineering ( e.g. * ? ? ?the core of sph is the density estimator : the fluid density is _ estimated _ from the masses and positions of the particles via ( the symbol denotes an sph _ estimate _ ) where is the _ smoothing kernel _ and the _ smoothing scale _ , which is adapted for each particle such that ( with the number of spatial dimensions ) .similar estimates for the value of any field can be obtained , enabling discretisation of the fluid equations . instead , in _ conservative _ sph , the equations of motion for the particles are derived , following , via a variational principle from the discretised lagrangian \ ] ] . here, ) is the internal energy as function of density and entropy ( and possibly other gas properties ) , the precise functional form of which depends on the assumed equation of state .the euler - lagrange equations then yield ,\ ] ] where and , while the factors ( ; ) arise from the adaption of ( ) such that .equation ( [ eq : hydro ] ) is a discretisation of , and , because of its derivation from a variational principle , conserves mass , linear and angular momentum , energy , entropy , and ( approximately ) circularity .however , its derivation from the lagrangian is only valid if all fluid variables are smoothly variable . to ensure this , in particular for velocity and entropy, artificial dissipation terms have to be added to and .recent progress in restricting such dissipation to regions of compressive flow have greatly improved the ability to model contact discontinuities and their instabilities as well as near - inviscid flows .sph is _ not _ a monte - carlo method , since the particles are not randomly distributed , but typically follow a semi - regular glass - like distribution .therefore , the density ( and pressure ) error is much smaller than the expected from poisson noise for neighbours and sph obtains convergence .however , some level of particle disorder can not be prevented , in particular in shearing flows ( as in turbulence ) , where the particles are constantly re - arranged ( even in the absence of any forces ) , but also after a shock , where an initially isotropic particle distribution is squashed along one direction to become anisotropic . in such situations ,the sph force ( [ eq : hydro ] ) in addition to the pressure gradient contains a random ` e error ' error ' term of is only the dominant contribution to the force errors induced by particle discreteness .] , and sph converges more slowly than . since shocks and shear flows are common in star- and galaxy - formation , the ` e errors ' may easily dominate the overall performance of astrophysical simulations .one can dodge the ` e error ' by using other discretisations of .however , such approaches unavoidably abandon momentum conservation and hence fail in practice , in particular , for strong shocks . 
furthermore , with such modifications sph no longer maintains particle order , which it otherwise automatically achieves .thus , the ` e error ' is sph s attempt to resurrect particle order and prevent shot noise from affecting the density and pressure estimates .another possibility to reduce the ` e error ' is to subtract an average pressure from each particle s in equation ( [ eq : hydro ] ) .effectively , this amounts to adding a negative pressure term , which can cause the tensile instability ( see [ sec : stable : cont ] ) .moreover , this trick is only useful in situations with little pressure variations , perhaps in simulations of near - incompressible flows ( e.g. * ? ? ?the only remaining option for reducing the ` e error ' appears an increase of the number of particles contributing to the density and force estimates ( contrary to naive expectation , the computational costs grow sub - linear with ) .the traditional way to try to do this is by switching to a smoother and more extended kernel , enabling larger at the same smoothing scale ( e.g. * ? ? ? * ) . however , the degree to which this approach can reduce the ` e errors ' is limited and often insufficient , even with an infinitely extended kernel , such as the gaussian . therefore , one must also consider ` stretching ' the smoothing kernel by increasing .this inevitably reduces the resolution , but that is still much better than obtaining erroneous results .of course , the best balance between reducing the ` e error ' and resolution should be guided by results for relevant test problems and by convergence studies .unfortunately , at large the standard sph smoothing kernels become unstable to the pairing ( or clumping ) instability ( a cousin of the tensile instability ) , when particles form close pairs reducing the effective neighbour number .the pairing instability ( first mentioned by ) has traditionally been attributed to the diminution of the repulsive force between close neighbours approaching each other ( , , , , , ) .such a diminishing near - neighbour force occurs for all kernels with an inflection point , a necessary property of continuously differentiable kernels .kernels without that property have been proposed and shown to be more stable ( e.g. ) .however , we provide demonstrably stable kernels with inflection point , disproving these ideas . instead , our linear stability analysis in section [ sec : linear ] shows that non - negativity of the kernel fourier transform is a necessary condition for stability against pairing . 
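before turning to the individual kernel functions , it may help to have the basic density estimate and the adaption of the smoothing scale in executable form . the sketch below uses a naive o(n^2) neighbour search , an illustrative random particle set , the standard 3d cubic - spline normalisation 8/pi for support radius 1 , and a commonly used neighbour number n_h = 42 ; the smoothing scale is simply iterated until the neighbour count inside the support matches n_h , a practical stand - in for the adaption rule mentioned in the introduction .

\begin{verbatim}
import numpy as np

def w_cubic(q):
    """3D cubic-spline kernel with support radius 1 (normalisation 8/pi)."""
    w = np.where(q < 0.5, 1.0 - 6.0 * q**2 + 6.0 * q**3,
                 np.where(q < 1.0, 2.0 * (1.0 - q)**3, 0.0))
    return (8.0 / np.pi) * w

def density(pos, mass, h):
    """rho_i = sum_j m_j h_i^-3 w(|x_i - x_j| / h_i)  (naive O(N^2) pairing)."""
    r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return np.sum(mass[None, :] * w_cubic(r / h[:, None]), axis=1) / h**3

def adapt_h(pos, n_h=42, n_iter=20):
    """Crude fixed-point adaption of h_i so that ~n_h particles fall inside the support."""
    r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    h = np.full(len(pos), 2.0 * np.mean(np.sort(r, axis=1)[:, 1]))   # initial guess
    for _ in range(n_iter):
        n_eff = np.sum(r < h[:, None], axis=1)
        h *= (n_h / np.maximum(n_eff, 1)) ** (1.0 / 3.0)             # n ~ h^3 in 3D
    return h

rng = np.random.default_rng(5)
pos = rng.random((400, 3))                # illustrative particle positions in a unit box
mass = np.full(len(pos), 1.0 / len(pos))
h = adapt_h(pos, n_h=42)
rho = density(pos, mass, h)
print(rho.mean(), rho.std())  # mean density is 1 in the unit box (edge effects lower it near the boundary)
\end{verbatim}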
based on this insightwe propose in section [ sec : smooth ] kernel functions , which we demonstrate in section [ sec : test ] to be indeed stable against pairing for all neighbour numbers , and which possess all other desirable properties .we also present some further test simulations in section [ sec : test ] , before we discuss and summarise our findings in sections [ sec : disc ] and [ sec : conc ] , respectively .ll@ & & & & + & & & & & & & & & + cubic spline & & & & & & & & & 1.732051 & 1.778002 & 1.825742 + quartic spline & & & & & & & & & 1.936492 & 1.977173 & 2.018932 + quintic spline & & & & & & & & & 2.121321 & 2.158131 & 2.195775 + + wendland , & & & & & & & & & 1.620185 & & + wendland , & & & & & & & & & 1.936492 & & + wendland , & & & & & & & & & 2.207940 & & + + wendland , & & & & & & & & & & 1.897367 & 1.936492 + wendland , & & & & & & & & & & 2.171239 & 2.207940 + wendland , & & & & & & & & & & 2.415230 & 2.449490sph smoothing kernels are usually isotropic and can be written as with a dimensionless function , which specifies the functional form and satisfies the normalisation . the re - scaling and with leaves the functional form of unchanged but alters the meaning of . in order to avoid this ambiguity , a definition of the smoothing scale in terms of the kernel , i.e. via a functional ] . moreover , the resulting ratios between for the do not match any of the definitions discussed above .however , this is just a coincidence for ( quintic spline ) since for the b - splines in 1d . ] .instead , we use the more appropriate also for the b - spline kernels , giving for the cubic spline in 3d , close to the conventional ( see table [ tab : kernel ] ) . at low order b - splines are only stable against pairing for modest values of ( we will be more precise in section [ sec : linear ] ) , while at higher they are computationally increasingly complex . therefore , alternative kernel functions which are stable for large are desirable . as the pairing instability has traditionally been associated with the presence of an inflection point ( minimum of ) , functions without inflection pointhave been proposed .these have a triangular shape at and necessarily violate point ( ii ) of our list , but avoid the pairing instability ) , but to keep a smooth kernel for the density estimate .however , such an approach can not be derived from a lagrangian and hence necessarily violates energy and/or entropy conservation . ] .for comparison we consider one of them , the ` hoct4 ' kernel of .the linear stability analysis of the sph algorithm , presented in the next section , shows that a necessary condition for stability against pairing is the non - negativity of the multi - dimensional fourier transform of the kernel .the gaussian has non - negative fourier transform for any dimensionality and hence would give an ideal kernel were it not for its infinite support and computational costs .therefore , we look for kernel functions of compact support which have non - negative fourier transform in dimensions and are low - order polynomials would avoid the computation of a square root .however , it appears that such functions can not possibly have non - negative fourier transform ( h. wendland , private communication ) .] in .this is precisely the defining property of the functions , which are given by with and the linear operator (r ) \equiv \int_r^\infty sf(s)\,\mathrm{d}s.\ ] ] in spatial dimensions , the functions with have positive fourier transform and are times continuously differentiable. 
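this construction can be reproduced symbolically in a few lines . the sketch below assumes the standard choice l = floor(d/2) + k + 1 for the exponent of the base function and applies the operator i[f](r) = int_r^1 s f(s) ds k times ( on the compact support the upper limit infinity can be replaced by 1 ) ; for d = 3 it returns , up to normalisation , the polynomials ( 1-r)^4 ( 1+4r ) , ( 1-r)^6 ( 1+6r+35r^2/3 ) and ( 1-r)^8 ( 1+8r+25r^2+32r^3 ) .

\begin{verbatim}
import sympy as sp

r, s = sp.symbols('r s', nonnegative=True)

def I_op(f):
    """The operator I[f](r) = integral_r^1 s f(s) ds  (support [0,1])."""
    return sp.integrate(s * f.subs(r, s), (s, r, 1))

def wendland(d, k):
    """Unnormalised Wendland function psi_{l,k}, with l = floor(d/2) + k + 1 assumed."""
    l = d // 2 + k + 1
    f = (1 - r) ** l
    for _ in range(k):
        f = I_op(f)
    return sp.factor(sp.expand(f))

for k in (1, 2, 3):
    print(f"d=3, C^{2 * k}:", wendland(3, k))
\end{verbatim}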
in fact , they are the unique polynomials in of minimal degree with these properties . for large , they approach the gaussian , which is the only non - trivial eigenfunction of the operator .we list the first few wendland functions for one , two , and three dimensions in table [ tab : kernel ] , and plot them for in fig .[ fig : kernel ] . fig .[ fig : kernel ] plots the kernel functions of table [ tab : kernel ] , the gaussian , and the hoct4 kernel , all scaled to the same for . amongst the various scalings ( ratios for )discussed in [ sec : smooth : scale ] above , this gives by far the best match between the kernels .the b - splines and wendland functions approach the gaussian with increasing order .the most obvious difference between them in this scaling is their central value .the b - splines , in particular of lower order , put less emphasis on small than the wendland functions or the gaussian .obviously , the hoct4 kernel , which has no inflection point , differs significantly from all the others and puts even more emphasis on the centre than the gaussian ( for this kernel ) . for spherical kernels of the form ( [ eq : w ] ) , their fourier transform only depends on the product , i.e. . in 3d( denotes the fourier transform in dimensions ) (\kappa ) = 4\pi \kappa^{-1 } \int_0^\infty \sin(\kappa r)\,w(r)\,r\,\mathrm{d}r\ ] ] which is an even function and ( up to a normalisation constant ) equals /\mathrm{d}\kappa$ ] . for the b - splines , which are defined via their 1d fourier transform in equation ( [ eq : b - spline ] ) , this gives immediately (\kappa ) = 3 \left(\frac{\textstyle n}{\textstyle\kappa}\right)^{n+2 } \sin^n\!\frac{\textstyle\kappa}{\textstyle n } \left(1- \frac{\textstyle\kappa}{\textstyle n}\cot\frac{\textstyle\kappa}{\textstyle n}\right)\ ] ] ( which includes the normalisation constant ) , while for the 3d wendland kernels (\kappa ) = \left(-\frac{1}{\kappa}\frac{\mathrm{d}}{\mathrm{d}\kappa}\right)^{k+1 } \mathcal{f}_1\left[(1-r)^\ell_+\right](\kappa)\ ] ] ( we abstain from giving individual functional forms ) .all these are plotted in fig .[ fig : fk ] after scaling them to a common .notably , all the b - spline kernels obtain and oscillate about zero for large ( which can also be verified directly from equation [ eq : w : kappa : b ] ) , whereas the wendland kernels have at all , as does the hoct4 kernel .as non - negativity of the fourier transform is necessary ( but not sufficient ) for stability against pairing at large ( see [ sec : stable : cont ] ) , in 3d the b - splines ( of any order ) fall prey to this instability for sufficiently large , while , based solely on their fourier transforms , the wendland and hoct4 kernels may well be stable for all neighbour numbers . at large ( small scales ) , the hoct kernel has most power , caused by its central spike , while the other kernels have ever less small - scale power with increasing order , becoming ever smoother and approaching the gaussian , which has least small - scale power . the scaling to a common in fig .[ fig : fk ] has the effect that the all overlap at small wave numbers , since their taylor series the sph force ( [ eq : hydro ] ) is inseparably related , owing to its derivation via a variational principle , to the _ derivative _ of the density estimate .another important role of the sph density estimator is to obtain accurate values for in equation ( [ eq : hydro ] ) , and we will now assess the performance of the various kernels in this latter respect . 
in fig .[ fig : rho ] , we plot the estimated density ( [ eq : rho ] ) vs. neighbour number for the kernels of table [ tab : kernel ] and particles distributed in three - dimensional densest - sphere packing ( solid curves ) or a glass ( squares ) . while the standard cubic spline kernel under - estimates the density ( only values are accessible for this kernel owing to the pairing instability ) , the wendland kernels ( and gaussian , not shown ) tend to over - estimate it .it is worthwhile to ponder about the origin of this density over - estimation .if the particles were randomly rather than semi - regularly distributed , obtained for an unoccupied position would be unbiased ( e.g. * ? ? ?* ) , while at a particle position the self contribution to results in an over - estimate . of course , in sph and in fig .[ fig : rho ] particles are not randomly distributed , but at small the self - contribution still induces some bias , as evident from the over - estimation for _ all _ kernels at very small . the hoct4 kernel of read et al .( 2010 , _ orange _ ) with its central spike ( cf .[ fig : kernel ] ) shows by far the worst performance .however , this is not a peculiarity of the hoct4 kernel , but a generic property of all kernels without inflection point .these considerations suggest the _ corrected _ density estimate which is simply the original estimate ( [ eq : rho ] ) with a fraction of the self - contribution subtracted .the equations of motion obtained by replacing in the lagrangian ( [ eq : l ] ) with are otherwise identical to equations ( [ eq : hydro ] ) and ( [ eq : omega ] ) ( note that , since and differ only by a constant ) , in particular the conservation properties are unaffected . from the data of fig .[ fig : rho ] , we find that good results are obtained by a simple power - law with constants and depending on the kernel .we use = ( 0.0294,0.977 ) , ( 0.01342,1.579 ) , and ( 0.0116,2.236 ) , respectively , for the wendland , , and kernels in dimensions . the dashed curves and triangles in fig .[ fig : rho ] demonstrate that this approach obtains accurate density and hence pressure estimates .the sph linear stability analysis considers a plane - wave perturbation to an equilibrium configuration , i.e. the positions are perturbed according to \big)\ ] ] with displacement amplitude , wave vector , and angular frequency .equating the forces generated by the perturbation to linear order in to the acceleration of the perturbation yields a dispersion relation of the form this is an eigenvalue problem for the matrix with eigenvector and eigenvalue .the exact ( non - sph ) dispersion relation ( with , at constant entropy ) has only one non - zero eigenvalue with eigenvector , corresponding to longitudinal sound waves propagating at speed .the actual matrix in equation ( [ eq : dispersion ] ) depends on the details of the sph algorithm . 
for conservative sph with equation of motion ( [ eq : hydro ] ), gives it for in one spatial dimension .we derive it in appendix [ app : linear ] for a general equation of state and any number of spatial dimensions : where is the outer product of a vector with itself , bars denote sph estimates for the unperturbed equilibrium , , and [ eq : uupi ] here and in the remainder of this section , curly brackets indicate terms not present in the case of a constant , when our results reduce to relations given by and .since is real and symmetric , its eigenvalues are real and its eigenvectors mutually orthogonal ) one omits the factors but still adapts to obtain , as some practitioners do , then the resulting dispersion relation has an asymmetric matrix with potentially complex eigenvalues . ] . the sph dispersion relation ( [ eq : dispersion ] )can deviate from the true relation ( [ eq : p : exact ] ) in mainly two ways .first , the longitudinal eigenvalue ( with eigenvector ) may deviate from ( wrong sound speed ) or even be negative ( pairing instability ; ) .second , the other two eigenvalues may be significantly non - zero ( transverse instability for or transverse sound waves for ) .the matrix in equation ( [ eq : p ] ) is not accessible to simple interpretation .we will compute its eigenvalues for the various sph kernels in [ sec : stable : kern]-3 and figs .[ fig : stable : s3]-[fig : disprel ] , but first consider the limiting cases of the dispersion relation , allowing some analytic insight .+ + there are three spatial scales : the wavelength , the smoothing scale , and the nearest neighbour distance .we will separately consider the limit of well resolved waves , the continuum limit of large neighbour numbers , and finally the combined limit .if , the argument of the trigonometric functions in equations ( [ eq : uupi]a , b ) is always small and we can taylor expand them regardless of . ] .if we also assume a locally isotropic particle distribution , this gives to lowest order in ( is the unit matrix ; see also [ app : limit ] ) \ ] ] with the eigenvalues the error of these relations is mostly dictated by the quality of the density estimate , either directly via , , and , or indirectly via .the density correction method of equation ( [ eq : rho : corr ] ) can only help with the former , but not the latter .the difference between constant and adapted is a factor 4/9 ( for 3d ) in favour of the latter . for large neighbour numbers , , , and the sums in equations ( [ eq : uupi]a , b ) can be approximated by integrals , is to assume some _ radial distribution function _ ( as in statistical mechanics of glasses ) for the probability of any two particles having distance .such a treatment may well be useful in the context of sph , but it is beyond the scope of our study . ] with the fourier transform of . since , we have and thus from equation ( [ eq : p ] ) .\ ] ] , but towards larger the fourier transform decays , , and in the limit or , : short sound waves are not resolved .negative eigenvalues of in equation ( [ eq : p : fourier ] ) , and hence linear instability , occur only if itself or the expression within square brackets are negative . since , the latter can only happen if , which does usually not arise in fluid simulations ( unless , possibly , one subtracts an average pressure ) , but possibly in elasticity simulations of solids , when it causes the _ tensile _ instability ( an equivalent effect is present in smoothed - particle mhd , see ) . 
proposed an artificial repulsive short - range force , effectuating an additional pressure , to suppress the tensile instability . the pairing instability , on the other hand, is caused by for some .this instability can be avoided by choosing the neighbour number small enough for the critical wave number to remain unsampled , i.e. or ( though such small is no longer consistent with the continuum limit ) .however , if the fourier transform of the kernel is non - negative everywhere , the pairing instability can not occur for large . as pairing is typically a problem for large , this suggests that kernels with for every are stable against pairing for _ all _ values of , which is indeed supported by our results in [ sec : test : noise ] .the combined limit of is obtained by inserting the taylor expansion ( [ eq : w : taylor ] ) of into equation ( [ eq : p : fourier ] ) , giving + \mathcal{o}(h^4|{\boldsymbol{k}}|^4 ) \right).\ ] ] gave an equivalent relation for when the expression in square brackets becomes or ( for adapted or constant , respectively ) , which , he argues , bracket all physically reasonable values . however , in 3d the value for adaptive sph becomes , i.e. _ vanishes _ for the most commonly used adiabatic index .in general , however , the relative error in the frequency is .this shows that is indeed directly proportional to the resolution scale , at least concerning sound waves .we have evaluated the eigenvalues and of the matrix in equation ( [ eq : p ] ) for all kernels of table [ tab : kernel ] , as well as the hoct4 and gaussian kernels , for unperturbed positions from densest - sphere packing ( face - centred cubic grid ) simply because the configuration itself was unstable , not the numerical scheme . ] . in figs .[ fig : stable : s3]&[fig : stable : kernel ] , we plot the resulting contours of over wave number and smoothing scale ( both normalised by the nearest - neighbour distance ) or on the right axes ( except for the gaussian kernel when is ill - defined and we give instead ) for two wave directions , one being a nearest - neighbour direction .the top sub - panels of figs .[ fig : stable : s3]&[fig : stable : kernel ] refer to the longitudinal eigenvalue , when green and red contours are for , respectively , and , the latter indicative of the pairing instability . for the gaussian kernel ( truncated at ; fig .[ fig : stable : s3 ] ) everywhere , proving its stability at values for larger than plotted . in agreement with our analysis in [ sec : stable : cont ] , this is caused by truncating the gaussian , which ( like any other modification to avoid infinite neighbour numbers ) invalidates the non - negativity of its fourier transform .these theoretical results are confirmed by numerical findings of d. price ( referee report ) , who reports pairing at large for the truncated gaussian . ] , similar to the hoct4 and , in particular the higher - degree , wendland kernels .in contrast , all the b - spline kernels obtain at sufficiently large .the quintic spline , wendland , and hoct4 kernel each have a region of for close to the nyquist frequency and , , and , respectively . 
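the fourier - transform criterion invoked above is easy to check numerically for any candidate kernel : evaluate f_3[w](kappa) = ( 4 pi / kappa ) int_0^1 sin(kappa r) w(r) r dr on a grid of wave numbers and look for sign changes . the sketch below does this by simple quadrature for the cubic spline and for the wendland c2 shape ( unnormalised forms suffice , since only the sign matters ) ; consistent with the discussion above , the b - spline transform turns negative at sufficiently large kappa while the wendland transform does not .

\begin{verbatim}
import numpy as np

r = np.linspace(0.0, 1.0, 20001)

def w_cubic(q):                               # 3D cubic spline, support radius 1 (unnormalised)
    return np.where(q < 0.5, 1.0 - 6.0 * q**2 + 6.0 * q**3,
                    np.where(q < 1.0, 2.0 * (1.0 - q)**3, 0.0))

def w_wendland_c2(q):                         # Wendland C2 shape in 3D, support radius 1
    return np.where(q < 1.0, (1.0 - q)**4 * (1.0 + 4.0 * q), 0.0)

def fourier3d(w, kappa):
    """F_3[w](kappa) = (4 pi / kappa) * int_0^1 sin(kappa r) w(r) r dr (trapezoid rule)."""
    return np.array([4.0 * np.pi / k * np.trapz(np.sin(k * r) * w(r) * r, r) for k in kappa])

kappa = np.linspace(0.5, 40.0, 400)
for name, w in [("cubic spline", w_cubic), ("wendland C2", w_wendland_c2)]:
    F = fourier3d(w, kappa)
    neg = kappa[F < 0.0]
    print(f"{name:13s}: min F = {F.min():+.2e}",
          f"first negative at kappa ~ {neg[0]:.1f}" if neg.size else "non-negative on this grid")
\end{verbatim}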
in numerical experiments similar to those described in [ sec : test : noise ] , the corresponding instability for the quintic spline and wendland kernels can be triggered by very small random perturbations to the grid equilibrium .however , such modes are absent in glass - like configurations , which naturally emerge by ` cooling ' initially random distributions .this strongly suggests , that these kernel- combinations can be safely used in practice . whether this also applies to the hoct4 kernel at we can not say , as we have not run test simulations for this kernel .note , that these islands of linear instability at small are not in contradiction to the relation between kernel fourier transform and stability and are quite different from the situation for the b - splines , which are only stable for sufficiently small .the bottom sub - panels of figs .[ fig : stable : s3]&[fig : stable : kernel ] show , when both families of kernels have with either sign occurring . implies growing transverse modes with a ` banding instability ' which appeared near a contact discontinuity in some of their simulations .however , they fail to provide convincing arguments for this connection , as their stability analysis is compromised by the use of the unstable cubic lattice . ] , which we indeed found in simulations starting from a slightly perturbed densest - sphere packing. however , such modes are not present in glass - like configurations , which strongly suggests , that transverse modes are not a problem in practice . the dashed lines in figs .[ fig : stable : s3]&[fig : stable : kernel ] indicate sound with wavelength . for ,such sound waves are well resolved in the sense that the sound speed is accurate to .this is similar to grid methods , which typically require about eight cells to resolve a wavelength .the effective sph sound speed can be defined as . in fig .[ fig : disprel ] we plot the ratio between and the correct sound speed as function of wave number for three different wave directions and the ten kernel- combinations of table [ tab : nh : kern ] ( which also gives their formal resolutions ) .the transition from for long waves to for short waves occurs at , but towards longer waves for larger , as expected . for resolved waves ( : left of the thin vertical lines in fig .[ fig : disprel ] ) , obtains a value close to , but with clear differences between the various kernel- combinations .surprisingly , the standard cubic spline kernel , which is used almost exclusively in astrophysics , performs very poorly with errors of few percent , for both and 55 .this is in stark contrast to the quartic spline with similar but accurate to .moreover , the quartic spline with resolves shorter waves better than the cubic spline with a smaller , in agreement with table [ tab : nh : kern ] . we should note that these results for the numerical sound speed assume a perfectly smooth simulated flow . in practice, particle disorder degrades the performance , in particular for smaller , and the resolution of sph is limited by the need to suppress this degradation via increasing ( and ) ..some quantities ( defined in [ sec : smooth : scale ] ) for kernel- combinations used in fig .[ fig : disprel ] and the test simulations of [ sec : test ] . 
is the nearest - neighbour distance for densest - sphere packing , which has number density .the cubic spline with is the most common choice in astrophysical simulations , the other values for the b - spline are near the pairing - stability boundary , hence obtaining close to the greatest possible reduction of the ` e errors ' . for , 200 , 400, we picked the wendland kernel which gave best results for the vortex test of [ sec : test : vortex ] .[ tab : nh : kern ] [ cols= " < , > , > , > , < " , ]in order to assess the wendland kernels and compare them to the standard b - spline kernels in practice , we present some test simulations which emphasise the pairing , strong shear , and shocks .all these simulations are done in 3d using periodic boundary conditions , , conservative sph ( equation [ eq : hydro ] ) , and the artificial viscosity treatment , which invokes dissipation only for compressive flows , and an artificial conductivity similar to that of . for some tests we used various values of per kernel , but mostly those listed in table [ tab : nh : kern ] . in order to test our theoretical predictions regarding the pairing instability ,we evolve noisy initial conditions with 32000 particles until equilibrium is reached .initially , , while the initial are generated from densest - sphere packing by adding normally distributed offsets with ( 1d ) standard deviation of one unperturbed nearest - neighbour distance . to enable a uniform - density equilibrium ( a glass ) , we suppress viscous heating .the typical outcome of these simulations is either a glass - like configuration ( right panel of fig .[ fig : noise : xy ] ) or a distribution with particle pairs ( left panel of fig .[ fig : noise : xy ] ) . in order to quantify these outcomes ,we compute for each particle the ratio between its actual nearest - neighbour distance and kernel - support radius .the maximum possible value for occurs for densest - sphere packing , when with the number density . replacing in equation ( [ eq : nh ] ) with , we obtain thus , the ratio is an indicator for the regularity of the particle distribution around particle .it obtains a value very close to one for perfect densest - sphere packing and near zero for pairing , while a glass typically gives .[ fig : noise : qmin ] plots the final value for the overall minimum of for each of a set of simulations . 
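The diagnostic just introduced, the ratio of each particle's actual nearest-neighbour distance to its kernel-support radius, is straightforward to compute for a snapshot. A minimal sketch assuming the positions and support radii are available as arrays; the neighbour number and box size in the example are arbitrary.

```python
# Sketch of the particle-distribution diagnostic used in the noise test:
# q_i = (nearest-neighbour distance of particle i) / (kernel-support radius H_i).
# q close to 1 indicates near-crystalline order, ~0.7 a glass, ~0 particle pairing.
import numpy as np
from scipy.spatial import cKDTree

def pairing_diagnostic(pos, H, box=None):
    """pos: (N,3) particle positions; H: (N,) kernel-support radii.
    box: edge length for periodic boundaries (optional)."""
    tree = cKDTree(pos, boxsize=box)
    # k=2 because the closest point to each particle is itself (distance 0).
    dist, _ = tree.query(pos, k=2)
    d_nn = dist[:, 1]
    return d_nn / H

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, L = 32000, 1.0
    pos = rng.random((N, 3)) * L            # stand-in for a relaxed configuration
    n = N / L**3                            # number density
    N_H = 100                               # neighbour number (illustrative)
    H = np.full(N, (3 * N_H / (4 * np.pi * n))**(1/3))  # from N_H = (4pi/3) n H^3
    q = pairing_diagnostic(pos, H, box=L)
    print(f"min q = {q.min():.3f}, median q = {np.median(q):.3f}")
```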
for all values tested for ( up to 700 ) ,the wendland kernels show no indication of a single particle pair .this is in stark contrast to the b - spline kernels , all of which suffer from particle pairing .the pairing occurs at and 190 for the quartic , and quintic spline , respectively , whereas for the cubic spline approaches zero more gradually , with at .these thresholds match quite well the suggestions of the linear stability analysis in figs .[ fig : stable : s3]&[fig : stable : kernel ] ( except that the indications of instability of the quintic spline at and the wendland kernel at are not reflected in our tests here ) .the quintic ( and higher - order ) splines are the only option amongst the b - spline kernels for appreciably larger than .we also note that grows substantially faster , in particularly early on , for the wendland kernels than for the b - splines , especially when operating close to the stability boundary .as discussed in the introduction , particle disorder is unavoidably generated in shearing flows , inducing ` e errors ' in the forces and causing modelling errors .a critical test of this situation consists of a differentially rotating fluid of uniform density in centrifugal balance ( , see also , , and ) .the pressure and azimuthal velocity are [ eqs : gresho ] with and the cylindrical radius .we start our simulations from densest - sphere packing with effective one - dimensional particle numbers , 102 , 203 , or 406 .the initial velocities and pressure are set as in equations ( [ eqs : gresho ] ) .there are three different causes for errors in this test .first , an overly viscous method reduces the differential rotation , as shown by ; this effect is absent from our simulations owing to the usage of the dissipation switch .second , the ` e error ' generates noise in the velocities which in turn triggers some viscosity .finally , finite resolution implies that the sharp velocity kinks at and 0.4 can not be fully resolved ( in fact , the initial conditions are not in sph equilibrium because the pressure gradient at these points is smoothed such that the sph acceleration is not exactly balanced with the centrifugal force ) . in fig .[ fig : gresho : dv ] we plot the azimuthal velocity at time for a subset of all particles at our lowest resolution of for four different kernel- combinations .the leftmost is the standard cubic spline with , which considerably suffers from particle disorder and hence e errors ( but also obtains too low at ) .the second is the wendland kernel with , which still suffers from the ` e error ' .the last two are for the wendland kernel with and the wendland kernel with but with in equation ( [ eq : gresho : p ] ) . in both cases ,the ` e error ' is much reduced ( and the accuracy limited by resolution ) either because of large neighbour number or because of a reduced pressure . in fig .[ fig : gresho ] , we plot the convergence of the velocity error with increasing numerical resolution for all the kernels of table [ tab : kernel ] , but with another for each , see also table [ tab : nh : kern ] . for the b - splines ,we pick a large which still gives sufficient stability against pairing , while for , 200 , and 400 we show the wendland kernel that gave best results . 
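For reference, the initial conditions of this test can be generated directly from the analytic profile. The piecewise constants below are the conventional Gresho-Chan values (uniform density, centrifugal balance) as commonly quoted in the literature; they are assumptions standing in for the equations referenced above, not values copied from this text.

```python
# Hedged sketch of Gresho-Chan vortex initial conditions (uniform density,
# centrifugal balance).  The piecewise profile is the conventional published
# one; treat the constants as illustrative.
import numpy as np

def gresho_profile(R):
    """Return azimuthal velocity v_phi(R) and pressure P(R)."""
    v = np.where(R < 0.2, 5.0 * R,
        np.where(R < 0.4, 2.0 - 5.0 * R, 0.0))
    P = np.where(R < 0.2, 5.0 + 12.5 * R**2,
        np.where(R < 0.4,
                 9.0 + 12.5 * R**2 - 20.0 * R
                 + 4.0 * np.log(np.maximum(R, 1e-30) / 0.2),
                 3.0 + 4.0 * np.log(2.0)))
    return v, P

def assign_gresho(pos, centre=(0.5, 0.5)):
    """pos: (N,3) positions; returns (N,3) velocities and (N,) pressures."""
    dx, dy = pos[:, 0] - centre[0], pos[:, 1] - centre[1]
    R = np.hypot(dx, dy)
    v_phi, P = gresho_profile(R)
    vel = np.zeros_like(pos)
    vel[:, 0] = np.where(R > 0, -v_phi * dy / np.maximum(R, 1e-30), 0.0)
    vel[:, 1] = np.where(R > 0,  v_phi * dx / np.maximum(R, 1e-30), 0.0)
    return vel, P
```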
for the cubic spline, the results agree with the no - viscosity case in fig .6 of , demonstrating that our dissipation switch effectively yields inviscid sph .we also see that the rate of convergence ( the slope of the various curves ) is lower for the cubic spline than any other kernel .this is caused by systematically too low in the rigidly rotating part at ( see leftmost panel if fig .[ fig : gresho : dv ] ) at all resolutions .the good performance of the quartic spline is quite surprising , in particular given the rather low .the quintic spline at and the wendland kernel at obtain very similar convergence , but are clearly topped by the wendland kernel at , demonstrating that high neighbour number is really helpful in strong shear flows .our final test is the classical shock tube , a 1d riemann problem , corresponding to an initial discontinuity in density and pressure . unlike most published applications of this test, we perform 3d simulations with glass - like initial conditions .our objective here is ( 1 ) to verify the ` e-error ' reductions at larger and ( 2 ) the resulting trade - off with the resolution across the shock and contact discontinuities . other than for the vortex tests of [ sec : test : vortex ] , we only consider one value for the number of particles but the same six kernel- combinations as in fig .[ fig : gresho ] .the resulting profiles of velocity , density , and thermal energy are plotted in fig .[ fig : sod ] together with the exact solutions .note that the usual over - shooting of the thermal energy near the contact discontinuity ( at ) is prevented by our artificial conductivity treatment .this is not optimised and likely over - smoothes the thermal energy ( and with it the density ) .however , here we concentrate on the velocity . for the cubic spline with ,there is significant velocity noise in the post - shock region .this is caused by the re - ordering of the particle positions after the particle distribution becomes anisotropically compressed in the shock .this type of noise is a well - known issue with multi - dimensional sph simulations of shocks ( e.g. * ? ? ?* ) . with increasing velocity noise is reduced , but because of the smoothing of the velocity jump at the shock ( at ) the velocity error does not approach zero for large . instead , for sufficiently large ( in this test ) , the velocity error saturates : any ` e-error ' reduction for larger is balanced by a loss of resolution .the only disadvantage of larger is an increased computational cost ( by a factor when moving from the quintic spline with to the wendland kernel with , see fig .[ fig : time ] ) .the wendland kernels have an inflection point and yet show no signs of the pairing instability .this clearly demonstrates that the traditional ideas for the origin of this instability ( la , see the introduction ) were incorrect .instead , our linear stability analysis shows that in the limit of large pairing is caused by a negative kernel fourier transform , whereas the related tensile instability with the same symptoms is caused by an ( effective ) negative pressure .while it is intuitively clear that negative pressure causes pairing , the effect of is less obvious. therefore , we now provide another explanation , not restricted to large . by their derivation from the lagrangian ( [ eq : l ] ) , the sph forces tend to reduce the estimated total thermal energy at fixed entropy is constant , but not the entropy , so that . 
] .thus , hydrostatic equilibrium corresponds to an extremum of , and stable equilibrium to a minimum when small positional changes meet opposing forces .minimal is obtained for uniform , since a re - distribution of the particles in the same volume but with a spread of gives larger ( assuming uniform ) .an equilibrium is meta - stable , if is only a local ( but not the global ) minimum .several extrema can occur if different particle distributions , each obtaining ( near)uniform , have different average .consider , for example , particles in densest - sphere packing , replace each by a pair and increase the spacing by , so that the average density ( but not ) remains unchanged .this fully paired distribution is in equilibrium with uniform , but the _ effective _ neighbour number is reduced by a factor 2 ( for the same smoothing scale ) . nowif , the paired distribution has lower than the original and is favoured . in practice ( and in our simulations in [ sec : test : noise ] ) , the pairing instability appears gradually : for just beyond the stability boundary , only few particle pairs form and the effective reduction of is by a factor .we conclude , therefore , that * pairing occurs if for some . * from fig .[ fig : rho ] we see that for the b - spline kernels always has a minimum and hence satisfies our condition , while this never occurs for the wendland or hoct4 kernels does not affect these arguments , because during a simulation in equation ( [ eq : rho : corr ] ) is _ fixed _ and in terms of our considerations here the solid curves in fig .[ fig : rho ] are simply lowered by a constant . ] . the stability boundary ( between squares and crosses in fig .[ fig : rho ] ) is towards slightly larger than the minimum of , indicating ( but also note that the curves are based on a regular grid instead of a glass as the squares ) .a disordered particle distribution is typically not in equilibrium , but has non - uniform and hence non - minimal .the sph forces , in particular their ` e errors ' ( which occur even for constant pressure ) , then drive the evolution towards smaller and hence equilibrium with either a glass - like order or pairing ( see also * ? ? ?thus , the minimisation of is the underlying driver for both the particle re - ordering capability of sph and the pairing instability .this also means that when operating near the stability boundary , for example using for the cubic spline , this re - ordering is much reduced . this is why in fig .[ fig : noise : qmin ] the transition between glass and pairing is not abrupt : for just below the stability boundary the glass - formation , which relies on the re - ordering mechanism , is very slow and not finished by the end of our test simulations. an immediate corollary of these considerations is that any sph - like method without ` e errors ' does not have an automatic re - ordering mechanism .this applies to modifications of the force equation that avoid the ` e error ' , but also to the method of , which employs a voronoi tessellation to obtain the density estimates used in the particle lagrangian ( [ eq : l ] ) .the tessellation constructs a partition of unity , such that different particle distributions with uniform have _ exactly _ the same average , i.e. 
the global minimum of is highly degenerate .this method has neither a pairing instability , nor ` e errors ' , nor the re - ordering capacity of sph , but requires additional terms for that latter purpose .neither the b - splines nor the wendland functions have been designed with sph or the task of density estimation in mind , but derive from interpolation of function values for given points . the b - splines were constructed to exactly interpolate polynomials on a regular 1d grid . however , this for itself is not a desirable property in the context of sph , in particular for 2d and 3d .the wendland functions were designed for interpolation of scattered multi - dimensional data , viz the coefficients are determined by matching the interpolant to the function values , resulting in the linear equations if the matrix is positive definite for _ any _ choice of points , then this equation can always be solved . moreover , if the function has compact support , then is sparse , which greatly reduces the complexity of the problem .the wendland functions were designed to fit this bill . as a side effectthey have non - negative fourier transform ( according to * ? ? ?* ) , which together with their compact support , smoothness , and computational simplicity makes them ideal for sph with large .so far , the wendland functions are the only kernels which are stable against pairing for all and satisfy all other desirable properties from the list on page . in smooth flows , i.e. in the absence of particle disorder , the only error of the sph estimates is the bias induced by the smoothing operation .for example , assuming a smooth density field ( e.g. * ? ? ?* ; * ? ? ?* ) with defined in equation ( [ eq : sigma ] ) .since also sets the resolution of sound waves ( [ sec : stable : long ] ) , our definition ( [ eq : h ] ) , , of the sph resolution scale is appropriate for smooth flows .the result ( [ eq : rho : bias ] ) is the basis for the traditional claim of convergence for smooth flows .true flow discontinuities are smeared out over a length scale comparable to ( though we have not made a detailed investigation of this ) . in practice, however , particle disorder affects the performance and , as our test simulations demonstrated , the actual resolution of sph can be much worse than the smooth - flow limit suggests .there is no consensus about the best neighbour number in sph : traditionally the cubic spline kernel is used with , while favours ( at or even beyond the pairing - instability limit ) and use their hoct4 kernel with even ( corresponding to a times larger ) . from a pragmatic point of view , the number of particles , the neighbour number , and the smoothing kernel ( and between them the numerical resolution ) are _ numerical parameters _ which can be chosen to optimise the efficiency of the simulation .the critical question therefore is : * which combination of and ( and kernel ) most efficiently models a given problem at a desired fidelity ? * clearly , this will depend on the problem at hand as well as the desired fidelity. however , if the problem contains any chaotic or turbulent flows , as is common in star- and galaxy formation , then the situation exemplified in the gresho - chan vortex test of [ sec : test : vortex ] is not atypical and large may be required for sufficient accuracy . but are high neighbour numbers affordable ? in fig .[ fig : time ] , we plot the computational cost versus for different kernels . 
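The scattered-data interpolation problem mentioned above, for which the Wendland functions were designed, can be written down in a few lines: because the kernel matrix is positive definite for any node set, the linear system always has a unique solution. In the sketch below the test function, node count, and support radius are arbitrary illustrative choices.

```python
# Sketch of scattered-data interpolation with a compactly supported Wendland
# function: the kernel matrix Phi_ij = phi(|x_i - x_j|) is positive definite
# for any point set, so the interpolation system is always solvable.
import numpy as np

def wendland_c2(r, H):
    q = np.minimum(r / H, 1.0)
    return (1 - q)**4 * (1 + 4*q)

rng = np.random.default_rng(1)
X = rng.random((200, 3))                                  # scattered nodes in the unit cube
f = np.sin(2*np.pi*X[:, 0]) * np.cos(2*np.pi*X[:, 1])     # sampled function values
H = 0.5                                                   # support radius (illustrative)

D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Phi = wendland_c2(D, H)                                   # positive-definite kernel matrix
coeff = np.linalg.solve(Phi, f)                           # interpolation coefficients

def interpolate(x):
    d = np.linalg.norm(X - x, axis=-1)
    return wendland_c2(d, H) @ coeff

x_test = np.array([0.3, 0.6, 0.2])
exact = np.sin(2*np.pi*0.3) * np.cos(2*np.pi*0.6)
print(f"interpolant = {interpolate(x_test):.4f}, exact = {exact:.4f}")
```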
at costs rise sub - linearly with ( because at low sph is data- rather than computation - dominated ) and high are well affordable . in the case of the vortex test , they are absolutely necessary as fig . [fig : gresho : c ] demonstrates : for a given numerical accuracy , our highest makes optimal use of the computational resources ( in our code memory usage does not significantly depend on , so cpu time is the only relevant resource ) .particle disorder is unavoidable in strong shear ( ubiquitous in astrophysical flows ) and causes random errors of the sph force estimator .the good news is that particle disorder is less severe than poissonian shot noise and the resulting force errors ( which are dominated by the e term of * ? ? ?* ) are not catastrophic .the bad news , however , is that these errors are still significant enough to spoil the convergence of sph . in this study we investigated the option to reduce the ` e errors ' by increasing the neighbour number in conjunction with a change of the smoothing kernel .switching from the cubic to the quintic spline at fixed resolution increases the neighbour number only by a factor ) for the smoothing scale .the conventional factor is 3.375 , almost twice 1.74 , but formally effects to a loss of resolution , since the conventional value for of the b - spline kernels is inappropriate . ]1.74 , hardly enough to combat ` e errors ' . for a significant reduction of the these errors one has to trade resolution andsignificantly increase beyond conventional values .the main obstacle with this approach is the pairing instability , which occurs for large with the traditional sph smoothing kernels . in [ sec : linear ] and appendix [ app : linear ] , we have performed ( it appears for the first time ) a complete linear stability analysis for conservative sph in any number of spatial dimensions .this analysis shows that sph smoothing kernels whose fourier transform is negative for some wave vector will inevitably trigger the sph pairing instability at sufficiently large neighbour number .such kernels therefore require to not exceed a certain threshold in order to avoid the pairing instability ( not to be confused with the tensile instability , which has the same symptoms but is caused by a negative effective pressure independent of the kernel properties ) . 
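The energetic argument behind the pairing criterion can also be probed directly: compare the kernel density estimate at a particle of a regular (densest-sphere-packing) lattice with that of a fully paired lattice of the same mean density at the same smoothing length; a lower paired value indicates that pairing is favoured for that kernel and neighbour number. A minimal sketch with standard 3D kernel normalisations; the chosen neighbour number is illustrative only.

```python
# Sketch: SPH density estimate on a regular fcc lattice versus a "fully paired"
# lattice of the same mean density (site spacing enlarged by 2^(1/3), coincident
# pairs), at fixed kernel-support radius.  A lower paired estimate signals that
# pairing is energetically favoured for that kernel / neighbour number.
import numpy as np

def W_cubic(r, H):
    # M4 cubic spline with support radius H (= 2h); 3D normalisation 1/(pi h^3).
    h = H / 2.0
    q = r / h
    w = np.where(q < 1, 1 - 1.5*q**2 + 0.75*q**3,
        np.where(q < 2, 0.25*(2 - q)**3, 0.0))
    return w / (np.pi * h**3)

def W_wendland_c2(r, H):
    # Wendland C2 kernel with support radius H; 3D normalisation 21/(2 pi H^3).
    q = r / H
    w = np.where(q < 1, (1 - q)**4 * (1 + 4*q), 0.0)
    return 21.0 / (2.0 * np.pi * H**3) * w

def fcc_positions(a, ncell=8):
    base = np.array([[0, 0, 0], [0, .5, .5], [.5, 0, .5], [.5, .5, 0]])
    cells = np.array([(i, j, k) for i in range(-ncell, ncell)
                      for j in range(-ncell, ncell) for k in range(-ncell, ncell)])
    return a * (cells[:, None, :] + base[None, :, :]).reshape(-1, 3)

def density_estimate(kernel, pos, m, H):
    r = np.linalg.norm(pos, axis=1)        # distances to the particle at the origin
    return m * kernel(r, H).sum()

a, m = 1.0, 1.0
n = 4 / a**3                               # fcc number density
N_H = 100                                  # neighbour number (illustrative)
H = (3 * N_H / (4 * np.pi * n))**(1/3)

pos_regular = fcc_positions(a)
pos_paired = np.vstack([fcc_positions(a * 2**(1/3))] * 2)  # coincident pairs, same mean density

for name, kern in [("cubic spline", W_cubic), ("Wendland C2", W_wendland_c2)]:
    r_reg = density_estimate(kern, pos_regular, m, H)
    r_pair = density_estimate(kern, pos_paired, m, H)
    print(f"{name}: rho_hat regular = {r_reg:.4f}, paired = {r_pair:.4f}, true = {m*n:.4f}")
```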
intuitively , the pairing instability can be understood in terms of the sph density estimator : if a paired particle distribution obtains a lower average estimated density , its estimated total thermal energy is smaller and hence favourable .otherwise , the smallest occurs for a regular distribution , driving the automatic maintenance of particle order , a fundamental ingredient of sph .the functions , presented in [ sec : kern : wend ] , have been constructed , albeit for different reasons , to possess a non - negative fourier transform , and be of compact support with simple functional form .the first property and the findings from our tests in [ sec : test : noise ] demonstrate the remarkable fact that these kernels are stable against pairing for _ all _ neighbour numbers ( this disproves the long - cultivated myth that the pairing instability was caused by a maximum in the kernel gradient ) .our 3d test simulations show that the cubic , quartic , and quintic spline kernels become unstable to pairing for , 67 , and 190 , respectively ( see fig .[ fig : noise : qmin ] ) , but operating close to these thresholds can not be recommended .a drawback of the wendland kernels is a comparably large density error at low . as we argue in [ sec : pairing : alt ] , this error is directly related to the stability against pairing .however , in [ sec : kern : dens ] we present a simple method to correct for this error without affecting the stability properties and without any other adverse effects .we conclude , therefore , that the wendland functions are ideal candidates for sph smoothing kernels , in particular when large are desired , since they are computationally superior to the high - order b - splines .all other alternative kernels proposed in the literature are computationally more demanding and are either centrally spiked , like the hoct4 kernel of , or susceptible to pairing like the b - splines ( e.g. * ? ? ?our tests of section [ sec : test ] show that simulations of both strong shear flows and shocks benefit from large .these tests suggest that for and 400 , respectively , the wendland and kernels are most suitable .compared to with the standard cubic spline kernel , these kernel- combinations have lower resolution ( increased by factors 1.27 and 1.44 , respectively ) , but obtain much better convergence in our tests . for small neighbour numbers, however , these tests and our linear stability analysis unexpectedly show that the quartic b - spline kernel with is clearly superior to the traditional cubic spline and can compete with the wendland kernel with .the reason for this astonishing performance of the quartic spline is unclear , perhaps the fact that near this spline is more than three times continuously differentiable plays a role .we note that , while the higher - degree wendland functions are new to sph , the wendland kernel has already been used ( , for example , employs it for 2d simulations ) .however , while its immunity to the pairing instability has been noted ( e.g. * ? ? ?* ) kernels in the context of 2d sph simulations .robinson refutes experimentally the traditional explanation ( la ) for the pairing instability and notices the empirical connection between the pairing instability and the non - negativity of the kernel fourier transform , both in excellent agreement with our results . 
] , we are not aware of any explanation ( previous to ours ) nor of any other systematic investigation of the suitability of the wendland functions for sph .research in theoretical astrophysics at leicester is supported by an stfc rolling grant .we thank chris nixon and justin read for many stimulating discussions and the referee daniel price for useful comments and prompt reviewing .this research used the alice high performance computing facility at the university of leicester .some resources on alice form part of the dirac facility jointly funded by stfc and the large facilities capital fund of bis .we start from an equilibrium with particles of equal mass on a regular grid and impose a plane - wave perturbation to the unperturbed positions ( a bar denotes a quantity obtained for the unperturbed equilibrium ) : as in equation ( [ eq : x : pert : ] ) .we derive the dispersion relation by equating the sph force imposed by the perturbation ( to linear order ) to its acceleration to obtain the perturbed sph forces to linear order , we develop the internal energy of the system , and hence the sph density estimate , to second order in . if with and the first and second - order density corrections , respectively , then let us first consider the simple case of constant which remains unchanged during the perturbation .then , inserting into ( [ eq : varrho ] ) gives where ( assuming a symmetric particle distribution ) we can then derive with inserting these results into ( [ eq : hydro : p ] ) , we get if the are adapted such that remains a global constant , the estimated density is simply .we start by expanding to second order in both and . using a prime to denote differentiation w.r.t . , we have inserting these expressions into equation ( [ eq : hydro : p ] ) we find with relations ( [ eqs : derive:1 ] ) and ( [ eqs : derive:2 ] ) & & \label{eq : ddxi : ad } - \left ( \bar{c}^2-\frac{2\bar{p}}{\bar{\rho } } \right ) \frac{{\boldsymbol{a}}{\cdot}{\boldsymbol{t}}\,{\boldsymbol{t}}}{\bar{\rho}^2\bar{\omega}^2}\,\phi_i .\end{aligned}\ ] ] where
the numerical convergence of smoothed particle hydrodynamics ( sph ) can be severely restricted by random force errors induced by particle disorder , especially in shear flows , which are ubiquitous in astrophysics . the increase in the number of neighbours when switching to more extended smoothing kernels _ at fixed resolution _ ( using an appropriate definition for the sph resolution scale ) is insufficient to combat these errors . consequently , trading resolution for better convergence is necessary , but for traditional smoothing kernels this option is limited by the pairing ( or clumping ) instability . therefore , we investigate the suitability of the wendland functions as smoothing kernels and compare them with the traditional b - splines . linear stability analysis in three dimensions and test simulations demonstrate that the wendland kernels avoid the pairing instability for _ all _ , despite having vanishing derivative at the origin ( disproving traditional ideas about the origin of this instability ; instead , we uncover a relation with the kernel fourier transform and give an explanation in terms of the sph density estimator ) . the wendland kernels are computationally more convenient than the higher - order b - splines , allowing large and hence better numerical convergence ( note that computational costs rise sub - linearly with ) . our analysis also shows that at low the quartic spline kernel with obtains much better convergence than the standard cubic spline . hydrodynamics  methods : numerical  methods : n - body simulations
humankind has accrued _ a priori _ knowledge since the onset of _ homo sapiens_. from ancient cave paintings to modern research papers , the species desire toward sedimentation has been displayed as a documentary .an encyclopedia , a set of documents that contains a vast collection of information from the entire field of human knowledge , has played a pivotal role in disseminating these legacies .conventionally , a group of experts devote their expertise to these encyclopedias . taking advantage of technological developments , media that publish encyclopedias keep abreast of the times : handwriting , letterpress printing , and optical disks .the emergence of information technology has opened a new era of publishing traditional encyclopedias on the world wide web , which offers a variety of references and up - to - date information .although these new media can reduce the publication price , encyclopedia editing is still costly .besides the improvement of traditional encyclopedias , new media enable fresh challengers to participate in the competition .wikipedia , a representative player among the challengers , has proposed an entirely new manner : editing by volunteers with various backgrounds in a collective fashion .this new paradigm of sharing knowledge is one of the most famous examples of `` collective intelligence . ''however , due to the nature of open - edit policy , wikipedia does not guarantee that the contents are valid , thus it is regarded ambiguous and even inaccurate to utilize in scientific context . despite such a long - standing bias against the credibility of wikipedia , many studies suggest that wikipedia is more reliable than our prejudice ; wikipedia itself tends to refer reliable scientific sources .only 13% of wikipedia articles contain perceptible academic errors and the quantity of factual errors , omissions , or ambiguous statements in scientific context of wikipedia is comparable to traditional encyclopedias . gradually , prejudice against the quality of wikipedia s articles has been eroded and the number of citations to wikipedia in peer - reviewed scientific articles has increased over time .a bizarre gap between such prejudice and the actual situation appeals to the scholars , who have analyzed wikipedia s external characters and internal dynamics .for example , researchers have investigated editors of wikipedia and their editing patterns , and the occurrence and resolving of conflicts in wikipedia . despite the significant contributions of such endeavors , the previous studies mainly focus on the raw number of edits , and often neglect real time and the different editing patterns for articles with different sizes and ages . in this paper , we examine an exhaustive set of english wikipedia articles to understand how the article size and age displays external appearance in this open - editing encyclopedia . in particular , a simple time - rescaling method reveals articles belonging to various types , when we take account of the interrelation between observable parameters : the number of edits , the number of editors , and the article size .our analysis consists of both data analysis and modeling based on it .first , we use the entire edit history in wikipedia to inspect wikipedia s growth , mainly focusing on the number of edits , the number of editors , and the article size . 
in this process , we demonstrate that the consideration of real time is essential to understand the underlying dynamics behind the present wikipedia .second , to consider the formation of current wikipedia in more detail , we develop an agent - based model that imitates the interplay between an article and the editors in a society . our model shows inherent differences of articles belonging to different types of growth patterns .the results are consistent with real data , which suggests that a society s attitudes on wikipedia articles determine the growth pattern .we believe that this approach provides valuable insights for the formation of collective knowledge .we focus on the long - term formation of collective knowledge , which has significant effects on the progress of humankind over a variety of temporal scales .we hope that our work provides insights to solve some of the fundamental questions : why people collaborate , how the collective memory is formed , and how knowledge is spread and descended .the rest of the paper is organized as follows . in sec . [sec : data_set ] , we introduce the wikipedia data that we use in our investigation . in sec .[ sec : data_analysis ] , we propose a time - rescaling method and show that the articles in wikipedia can be classified into multiple types based on their growth pattern .we present our model and results in sec .[ sec : model ] , including verification of our model with real - data . finally , we present our conclusions and discussion in sec . [sec : conclusion ] . ) of editorsare logged - in and 83.9% ( ) remain anonymous ., scaledwidth=50.0% ]for the analysis , we use the december 2014 dump of english wikipedia .this dump contains the complete copy of wikipedia articles from the very beginning up to december 8 , 2014 , including the raw text source and metadata source in the extensible markup language ( xml ) format . in this data set , there are a total of articles across all categories with the full history of edits .each article documents either the wikipedia account identification ( i d ) or internet protocol ( ip ) address of the editor for each edit , the article size and timestamp for each edit , etc .a single character in english takes 1 byte , so the article size is the direct measure of article length .there are editing events ( `` edits '' from now on ) for all wikipedia articles in total , where individual articles edit numbers range from to .previous studies tend to sample data sets for various reasons , and thus articles with small numbers of edits are necessarily filtered out . however , fig .[ numeditspdf ] suggests a fat - tailed distribution for the number of edits , so the majority of articles are not edited as many times as the articles in the tail part of the distribution and those articles should not be neglected .therefore , we consider all entries and use the entire set for analysis. additionally , we use the i d and ip address , for logged - in editors and unlogged - in editors , respectively , to identify distinct editors . in total , editors have participated in the establishment of the current wikipedia . 
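The per-edit quantities used throughout (timestamp, editor ID or IP address, and article size) can be extracted by streaming the dump rather than loading it into memory. The sketch below assumes the standard MediaWiki export schema (namespaced page/revision/timestamp/contributor elements); the file name is a placeholder, and the size is computed from the revision text because the availability of a separate byte-count attribute depends on the dump version.

```python
# Hedged sketch: stream (article, timestamp, editor, size) tuples from a
# MediaWiki XML dump with ElementTree.iterparse.
import xml.etree.ElementTree as ET

def strip_ns(tag):
    return tag.rsplit('}', 1)[-1]            # drop the XML namespace prefix

def iter_revisions(path):
    title = None
    for event, elem in ET.iterparse(path, events=("end",)):
        tag = strip_ns(elem.tag)
        if tag == "title":
            title = elem.text
        elif tag == "revision":
            ts = editor = None
            size = 0
            for child in elem:
                ctag = strip_ns(child.tag)
                if ctag == "timestamp":
                    ts = child.text
                elif ctag == "text":
                    size = len(child.text or "")   # article size in bytes (1 char = 1 byte)
                elif ctag == "contributor":
                    # logged-in editors carry <username>, anonymous ones <ip>
                    for c in child:
                        if strip_ns(c.tag) in ("username", "ip"):
                            editor = c.text
            yield title, ts, editor, size
            elem.clear()                      # free memory for the huge dump

if __name__ == "__main__":
    for i, rev in enumerate(iter_revisions("enwiki-history.xml")):   # placeholder file name
        print(rev)
        if i > 5:
            break
```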
among them , 83.9% ( ) of editors are unlogged - in and only 16.1% ( ) of editors are logged - in .interestingly , the absolute share of logged - in editors is rather smaller than that of unlogged - in editors ; most of the heavy editors tend to be logged - in ( fig .[ numeditslogged ] ) .specifically , logged - in editors have modified the articles 455397682 times ( 77.5% ) in total , meanwhile unlogged - in editors have modified the articles only 132475517 times ( 22.5% ) .considering the fact that the number of unlogged - in editors exceeds that of logged - in editors , the average influence of a single unlogged - in editor is much smaller than that of logged - in editors ( on average , 69.8 times per logged - in editor and 3.9 times per unlogged - in editor ) .there are possible biases for ip addresses when an ip address is shared , e.g. , editors who use a library , public wifi , virtual private network ( vpn ) , etc . , or move the locations . in those cases , there will be under- or overestimation of the number of distinct editors .additionally , several home internet connection methods allocate ip addresses dynamically , e.g. , digital subscriber line ( dsl ) and dial - up . for those cases, there might be overestimation and misidentification of distinct editors .however , it is reported that cable and fiber to the home ( ftth ) dominate the u.s .market share , which provide quasistatic ip addresses .considering both the market shares and modest impact of single unlogged - in editors on the current wikipedia , we believe that our analysis is robust .in fact , we actually check that even when we exclude unlogged - in editors , our results reported in sec . [sec : data_analysis ] are not affected at all indeed .in addition , a small number of edits does not specify the editor , yet we use other information even for such cases based on the article size and timestamp .-th year '' corresponds to the edit event occurring between the first and the last day of the -th year since the onset of the article .the time differences follow fat - tailed distribution , which is a sign of the burstiness , with a daily periodic pattern ( day = s).,scaledwidth=50.0% ]previous studies on the wikipedia data set did not use the information about article size changes after the edits or real timestamps of the edits .we combine such information together with conventional measures , such as the number of edits and the number of editors , to display the nature of wikipedia .our first analysis of time and size differences between two consecutive edits reveals regularity , regardless of an article age and size .the time between the consecutive edits follows a fat - tailed distribution with characteristic periodicity from the human circadian rhythm ( fig .[ deltatimeperage ] ) , which suggests that the editing timescale of wikipedia is intermittent or `` bursty , '' meaning that brief but intense activities are followed by much smaller activities for a long time .these intense activities in wikipedia are reported as `` wikipedia edit war , '' which refers to significantly rapid consecutive editing by various editors with conflicting opinions .our observation indicates that the `` edit number '' ( or the number of edits ) , which many studies use as the proxy of the real time , is not an unbiased proxy of the time .counterposed to the assumption that english wikipedia has already become global media , we observe strong periodicity for the time between the consecutive edits in fig .[ deltatimeperage ] .the peaks 
are located at every s or a single day , which implies that native english speakers ( mostly people in the united states because of the relative population , we presume ) still dominate english wikipedia even though there is no barrier to global access .such a circadian pattern in the frequency of editing events is mainly driven by editors with specific cultural backgrounds for the data until the beginning of 2010s , as reported in ref .our observation indeed shows that the circadian rhythm also affects the interediting time in a collective fashion , and this domination still remains in the current wikipedia . besides the time scale, we observe that an article s growth is mainly addition and subtraction with a characteristic size scale , which are rather independent of the current size ( fig .[ deltasizepersize ] ) .this observation is counterposed to the recent report that the growth of collaborated open - source software and mammalian body masses are proportional to their size , and implies that the influence of a single edit becomes smaller as article size is increased .most previous research does not take into account the degree of the influence for a single edit , and thus considers all of the edits as affecting the article of wikipedia equally .however , our observations propose the necessity of combining the time and size difference between the edits with the conventional measures . in sec .[ sec : edit_scale_of_wikipedia ] , we have shown that the time between two consecutive edits is quite heterogeneous ( fig .[ deltatimeperage ] ) .this global effect of various timescales itself makes it unfair to directly compare the characteristic parameters of articles : the number of editors , edits , and the article size for different articles . to compensate for such an effect, we employ rescaled measures for article as , , and , where , the age of article , is measured as the time between the moment of onset and that of the latest edit of article . , , and are the number of edits , the number of editors , and the article size for article , respectively. the rescaled measures are free from the temporal effects , making it possible to recruit myriad articles into the same ground for analysis in the sense of growth per unit time . 
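Both the inter-edit times and the rescaled measures follow directly from a chronological edit log per article. A minimal sketch; the editor names in the example are placeholders and the timestamps are assumed to be in seconds.

```python
# Sketch: per-article inter-edit times (whose distribution is fat-tailed with a
# daily periodicity) and the time-rescaled measures obtained by dividing the
# number of edits, the number of distinct editors, and the latest size by the
# article age T_i (time between the first and the most recent edit).
from collections import namedtuple
import numpy as np

Edit = namedtuple("Edit", "timestamp editor size")      # timestamp in seconds

def interedit_times(edits):
    t = np.array([e.timestamp for e in edits], dtype=float)
    return np.diff(np.sort(t))

def rescaled_measures(edits):
    T = edits[-1].timestamp - edits[0].timestamp        # article age
    if T <= 0:
        return None                                     # single-edit articles are skipped
    n_edits = len(edits)
    n_editors = len({e.editor for e in edits})
    size = edits[-1].size                               # bytes after the latest edit
    return n_edits / T, n_editors / T, size / T

history = [Edit(0.0, "editor_a", 120), Edit(4.0e4, "10.0.0.7", 340),
           Edit(9.0e4, "editor_a", 310)]
print(interedit_times(history))       # -> [40000. 50000.]
print(rescaled_measures(history))     # -> (edits/s, editors/s, bytes/s)
```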
for the number of edits , the number of editors , and the article size from the data , we hereafter use their rescaled values unless stated otherwise .a natural step forward is to search for any possible interplay between , , and in the formation of current wikipedia .one can suppose that the number of edits has varied gradually as a function of the number of editors , because both measures reflect the degree of interest in the article .unexpectedly , we discover that the articles show a peculiar bimodality in their number of editors across the entire value of the number of edits [ see fig .[ bimodality](a ) ] .the bimodality is characterized by the linear relation with two distinct proportionality constants , and , respectively .in other words , there are two groups of articles , determined by the proportion between the number of editors involved in the articles and the editors average activity ; one group is dominated by a relatively small number of enthusiasts who edit articles frequently , and the other is composed of a relatively large number of editors who seldom edit articles .besides the cases of edits and editors , wikipedia shows a similar division of article size for given numbers of editors [ see fig .[ bimodality](b ) ] .there are two types of articles determined by the average article size produced by an editor per unit time .this relation is also described by the linear dependency , where for the upper mode and for the lower mode . in other words , editors for some articleshave generated about bytes on average , meanwhile the editors of the rest of the articles have generated only about bytes on average .our finding of bimodality in the two relations ( versus and versus ) triggers an interesting question : does each of the modes in one relation correspond to each mode in the other relation [ figs .[ bimodality](a ) and [ bimodality](b ) ] ?it seems natural to speculate that such modes have the counterparts in the other relation , or at least one is subordinative of the other .contrary to this speculation , our observation suggests that there is no visible relationship between the two different types of bimodality [ see figs .[ bimodality](c)[bimodality](f ) ] .the points in the figures are colored according to the modes to which the corresponding articles belong in the criteria based on or .we simply tear off the upper and lower modes by drawing a line [ the dashed lines in figs .[ bimodality](a ) and [ bimodality](b ) ] between the two modes and assign purple and blue colors for the points in the upper and the lower modes , respectively .those purple and blue points are totally mixed when the criterion is based on the other parameter relation .taken together , we conclude that there are at least four different groups of articles , which can be categorized by its growth per unit time .the possible mechanism behind the division is suggested based on our modeling study in sec .[ sec : model ] .to understand the underlying dynamics of the observed patterns , we develop a mechanistic model of editing dynamics by identifying two key factors that drive the evolution of wikipedia articles .we assume that there are two fundamental and inherent properties of an article reflecting the society s viewpoint on the article s topic : the preferences for referring wikipedia and the desires to edit ( namely , editability ) . 
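Operationally, the four groups can be read off from two ratios of the rescaled measures: editors per edit and bytes per editor. In the sketch below the cut values merely stand in for the dashed separating lines in the figure; they are placeholders, not values measured from the data.

```python
# Sketch: assign an article to one of the four growth groups from the ratios of
# its rescaled measures.  The thresholds are placeholders for the separating
# lines drawn between the two modes of each relation.
def classify(m, n, l, editors_per_edit_cut=0.3, bytes_per_editor_cut=50.0):
    """m, n, l: rescaled numbers of edits, editors and article size."""
    edit_mode = "many-casual" if (n / m) > editors_per_edit_cut else "few-enthusiast"
    size_mode = "large-output" if (l / n) > bytes_per_editor_cut else "small-output"
    return edit_mode, size_mode   # the four combinations define four article groups

print(classify(m=2.0e-5, n=1.5e-5, l=4.0e-3))
```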
in this section, we show that two such key drivers have elicited the wikipedia into its current state as shown in fig .[ bimodality ] .interestingly , each of those has a decisive effect on the distinct modality structure of and , respectively , and they have almost no impact on each other s modality .the preferences for referring wikipedia stems from its relative credibility compared to other conventional media . in other words , people tend to refer wikipedia more than other conventional media or opinion from others for certain topics .because of the nature of open - edit policy , there are long - lasting arguments of credibility , especially for the scientific contexts . as a result, people avoid referring wikipedia to reinforce their contention for scientific topics when they debate .nevertheless , several topics are almost free from the trust issue and wikipedia can be considered as a trustworthy source of knowledge .the subcultures such as animations , movies , and computer games are good examples , because the editors are not only a fan of the topic but also the creators of such cultures . in those cases , therefore , members of a society do not hesitate to utilize wikipedia as their grounds for the arguments .in addition , there are different levels of psychological barriers and desires in editing , depending on the topic . people tend to edit the article about which they have enough knowledge .thus , the average `` editability '' of articles , for members of a society , is diverse by its nature from the casual ones which are easily editable to the formal ones .this editability also depends on collective motives , which describe the significance of the topic as the common goal of social movements .therefore , the intrinsic rate of edit should be taken into account .besides these two key factors , editors are also engaged in articles when they have already given more effort to the articles by editing them , representing the feeling of attachment . additionally , it is hard to edit an article that already has a massive amount of information , so the motivation to edit will be reduced as the article size is increased .we describe how we implement the sociopsychological effects into our mathematical model in detail . by incorporating the aforementioned factors ,we create a mechanistic model of the article growth .the model comprises agents where the individual agents represent members of a society and all of the agents are connected to a single wikipedia article .note that we take a single wikipedia article in our model , as we assume that different degrees of editability and credibility yield different types of articles in real wikipedia . to account for the modality shown in the interplays between three measures , , , and in fig .[ bimodality ] , we introduce corresponding model parameters . first , the article has its own length corresponding to in our data analysis . at the beginning , the length is assigned as , where is the minimum length to which agent can reduce the article , so always .the number of edits at time , denoted as , is also defined as the total number of article updates until , under the update rules described in sec .[ sec : agent_wiki_dynamics ] .additionally , corresponds to the number of distinct agents who edited the article at least once . 
besides the quantities explicitly measured in data analysis, we also adopt internal parameters for the agents and the article .the agents are connected to each other with the erds - rny random network .such connections between agents stand for various relationships in society : friends , co - workers , even enemies .every agent has its own opinion ] .the wikipedia article also has its own opinion at time , which is the overall stance of wikipedia on the topic .we set , to get the insights of the situation that agents and the wikipedia article adjust their opinions to the most radical one .similar to the fact that it is impossible to gauge the `` stance '' of the article and agents to the topic , we do not explicitly display the values and those are used only for stochastic simulation .for each time step , the agent - agent interaction described in sec . [ sec : agent_agent_dynamics ] and the agent - wikipedia dynamics described in sec .[ sec : agent_wiki_dynamics ] occur in turn .our model colligates resolving of conflicts between agents with the contribution of agents to modify wikipedia . in our model , all members of society are open - minded and they can change their mind . for each timestep ,a pair of agents and , which are neighbors in a preassigned network , is chosen and they try to convince each other for the topic of the wikipedia article .we assume that agents rely on references to reinforce their opinion . for simplicity, we consider only two major types of references : wikipedia and general media .general media , denoting the entire set of references other than wikipedia , represent the ordinary viewpoint of the society toward the topic . as we described above , wikipedia is a more reliable source for certain topics .hence , we set a probability with which agents choose wikipedia as their reference , and this probability corresponds to the reliability of wikipedia [ see fig .[ modeldescription_fig](a ) ] .otherwise , agents decide to follow the standards of society by following general media s opinion , which is defined as the average opinion of entire agents in the society .in other words , the reference opinion once we choose the reference , an agent whose opinion is closer to the reference always succeeds in convincing the other agent . for the convenience ,we call the agent as whoever s opinion is closer to the reference than the other , i.e. , [ see fig . [ modeldescription_fig](a ) ]. agent changes its opinion toward s , while agent keeps its opinion .people tend to minimize the amount of changing ; thus the agent sets his / her target as or , depending on which one is closer . as a result ,the opinions of agents and at the next step are given by and respectively .the parameter ] accounts for the fact that people tend to edit more frequently when they have contributed to establishing the current state of the article more .the term in eq . represents the reduced motivation as the article size is increased , due to the amount of information .a recent report that the growth of wikipedia has slowed down supports this factor . 
if an agent decides to edit an article , wikipedia s opinion changes as / l(t ) \,.\end{gathered}\ ] ] in our model , the amount of change is inversely proportional to .figure [ deltasizepersize ] indicates that the impact of a single edit event should be decreased as the article size is increased , because the absolute amount of change is preserved .additionally , represents the physical and psychological limit for editing .the value of affects also mainly the timescale of simulation similar to .we fix this value as to set a moderate time scale , and this value does not have a large impact on our model conclusions . finally , the length parameter is changed after the update of the article s opinion , as follows : where the random variable is chosen according to the following rule .if the agent has modified the article toward an extreme position ( or ) , we suppose that the agent tend to append new contents to the article .in contrast , agents are likely to replace the contents to neutralize the article s opinion .specifically , we divide the update into the two following cases : ( i ) or ( toward an extreme ) and ( ii ) any other cases ( neutralize ) . for ( i ) , the article size is increased by drawn from the interval ] uniformly at random , which implies replacement of arguments .the fixed parameter is related to the physical limit in fig .[ deltasizepersize ] .the value of affects mainly the length .however , in this study , we use the ratio of the length to other measures rather than the absolute length of the article .we display the result with ( fig .[ modelresults ] ) , yet we verify that our conclusions are robust for other values of because the parameter governs only the overall length scale [ see fig .[ modeldescription_fig](b ) for the illustration on the criterion ] . in sec .[ sec : model_results ] , we discuss how the modes in fig .[ bimodality ] are formulated during the evolution of wikipedia in our model . and using the page view statistics of wikipedia ( 2014 ) and google -gram data set ( 2008 ) , which are the latest data sets for both .( a ) the average value of estimated , which is calculated by dividing the number of edits in 2014 by the page view statistics in 2014 , as a function of .as expected , the estimated decreases for larger values .( b ) the average value of estimated : the ratio of the page view statistics in 2014 to the google 1-gram frequency in 2008 , as a function of .both plots are drawn from the same sampled set of articles , with the conditions described in the text . ] for both versus and versus relations , our mechanistic model captures the essential features of the observed empirical relations reported in sec .[ sec : time_rescaled ] with proper parameter values . as we have shown in sec .[ sec : data_analysis ] , the proportionality coefficients between characteristic parameters are classified into two modes : in particular , both for the agent - agent interaction ( in sec .[ sec : agent_agent_dynamics ] ) and for the agent - wikipedia interaction ( in sec .[ sec : agent_wiki_dynamics ] ) are crucial to generate the splits of modes into different groups : is essential to reproduce a separation of [ see fig . [ modelresults](a ) ] and is indispensable for the division of [ see fig .[ modelresults](b ) ] . 
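A compact numerical sketch of the dynamics described above can help orientation. Where the text does not pin down an explicit functional form (the opinion-convergence step, the attachment weighting, and the exact edit-probability and length-update expressions), the forms below are assumptions chosen only to respect the stated qualitative rules, and all parameter values are illustrative rather than those used for the figures.

```python
# Minimal sketch of the agent-based model: (i) connected agent pairs reconcile
# opinions using Wikipedia as reference with probability r, otherwise the mean
# opinion of society; (ii) an agent edits with a probability that grows with its
# past edits (attachment) and falls with article length; (iii) an edit pulls the
# article opinion toward the editor, with impact inversely proportional to the
# length, and changes the length.  Functional forms marked "assumed" are not
# taken from the text.
import numpy as np
import networkx as nx

def run_model(n_agents=200, steps=20000, r=0.5, p=0.05, mu=0.3,
              L_min=10.0, delta=5.0, seed=0):
    rng = np.random.default_rng(seed)
    G = nx.erdos_renyi_graph(n_agents, 0.05, seed=seed)
    edges = np.array(G.edges())
    x = rng.uniform(-1.0, 1.0, n_agents)          # agent opinions (assumed range)
    Omega, L = 0.0, L_min                         # article opinion and length
    edits_by = np.zeros(n_agents)                 # per-agent edit counts (attachment)
    n_edits = 0

    for _ in range(steps):
        # agent-agent step
        i, j = edges[rng.integers(len(edges))]
        ref = Omega if rng.random() < r else x.mean()     # Wikipedia vs general media
        win, lose = (i, j) if abs(x[i] - ref) <= abs(x[j] - ref) else (j, i)
        x[lose] += mu * (x[win] - x[lose])                # assumed convergence rule

        # agent-Wikipedia step
        k = rng.integers(n_agents)
        attachment = 1.0 + edits_by[k]
        p_edit = p * attachment * L_min / L               # assumed form: falls with L
        if rng.random() < min(p_edit, 1.0):
            old = Omega
            Omega += (x[k] - Omega) / L                   # impact shrinks with length
            if abs(Omega) > abs(old):                     # pushed toward an extreme
                L += rng.uniform(0.0, delta)              # append new content
            else:                                         # neutralised: replace content
                L = max(L_min, L + rng.uniform(-delta, delta))
            edits_by[k] += 1
            n_edits += 1

    n_editors = int(np.count_nonzero(edits_by))
    return n_edits, n_editors, L

print(run_model())
```

Sweeping the assumed r and p over a grid and recording the per-unit-time numbers of edits, editors, and article length is then the natural way to look for the mode separation discussed next.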
in the early stage , is almost unity across the systems with the entire parameter space composed of and , which corresponds to the single ( or unimodal , in contrast to the bimodal pattern shown in real data ) linear relation .while this single linear relation is characterized at the early stage , as time goes by , we observe the decreasing trend of . despite the fact that the decrement over time occurs for the entire parameter space ,the pace of decreasing is determined by , the base rate for editing an article . drops much slower for smaller values , which leads systems to fall into two different regimes : and [ fig .[ modelresults](a ) ] .interestingly , this divarication solely depends on the value of . on the other hand, also shows unimodality in the early stage , but it is suddenly increased with time only for [ fig .[ modelresults](b ) ] .analogous to , is also almost solely driven by , but there also exists a small amount of influence by ; small values of do not guarantee the large article size across all values , but low yields large article size at similar values of . based on our model results , we suggest a possible mechanism that yields the bimodality in fig .[ bimodality ] , which encourages us to verify the model results compared to the real data : either parameter or should be a decreasing function of and , respectively .however , we can not extract simulation parameters and directly from the data .we therefore use a bypass to estimate and . fortunately ,wikipedia offers page view statistics of articles that can be used for estimating such parameters .we assume that this page view in a certain period reflects the degree of interest of wikipedia users in the articles , and the number of edits in the same period naturally displays the editing frequency .thus , the ratio of the number of edits to this page view for a certain period can be related to the base edit rate .analogous to our presumption , this ratio is a decreasing function of ( see fig .[ modelverify ] ) . to treat the other parameter , we should employ the proxy that can reflect the general interest of the entire society in the topic .we suggest that the google books -gram , a vast digitized collection of documents produced in the world is a suitable choice .google books -gram is a database containing about 6% of english books ever published .this data set offers a yearly number of occurrences for any phrase less than six words from 1800 to 2008 , and this number of occurrences can be considered as the proxy of interests in society for a certain phrase . in our model, is the proportion of degree of interest in wikipedia versus that of the entire society . in other words ,wikipedia page view on a certain topic versus its -gram frequency can be the estimator of . for fair comparison, we also only take the wikipedia articles that satisfy the following conditions .first , the title of article should exist in google 1-gram data set in 2008 , the latest year of the data set .second , the article should be visited at least once in 2014 . to avoid the effect of inflectional variation of words, we use the stem of wikipedia articles title and google 1-gram data set , instead of using the word directly . after this filtering process, articles are left among the total set of articles .this estimator of also decays , as is increased as we expected . 
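The two proxies can be computed per article and binned against any article-level quantity of interest. A sketch assuming aligned per-article arrays of yearly edit counts, yearly page views, and 1-gram frequencies; the stemming-based matching of titles to 1-gram entries and the sampling conditions described above are omitted here.

```python
# Sketch of the empirical proxies: p ~ (edits in a year)/(page views in that
# year) and r ~ (page views)/(Google 1-gram frequency), averaged in logarithmic
# bins of a chosen article-level quantity x.
import numpy as np

def binned_mean(x, y, n_bins=20):
    bins = np.logspace(np.log10(x.min()), np.log10(x.max()), n_bins + 1)
    idx = np.digitize(x, bins)
    return [(bins[b - 1], y[idx == b].mean())
            for b in range(1, n_bins + 1) if np.any(idx == b)]

def estimate_proxies(edits_year, views_year, ngram_freq, x):
    """x: article-level quantity to bin against (chosen by the caller)."""
    mask = (views_year > 0) & (ngram_freq > 0) & (x > 0)
    p_hat = edits_year[mask] / views_year[mask]     # proxy for the base edit rate
    r_hat = views_year[mask] / ngram_freq[mask]     # proxy for reliance on Wikipedia
    return binned_mean(x[mask], p_hat), binned_mean(x[mask], r_hat)
```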
both figs .[ modelverify](a ) and [ modelverify](b ) indeed show the behaviors expected from their estimators ( and , respectively ) , which indicates that our model is suitable to describe the real wikipedia .note that the estimators of and should not be taken as the exact face values of model parameter values for a real article in wikipedia , and the results should be understood as a proxy of statistical properties of articles .first , page view statistics might be affected by the number of hyperlinks pointed to the article .such relative importance within the network topology may increase the page view by random visits , yet there is a positive feedback between the page view and the number of hyperlinks .an article also tends to have connections to popular articles , which eventually yields disproportionally many hyperlinks for popular items ; thus there could be overestimation of page views for the popular articles .moreover , there is a recent report that warns of the possible bias of google -gram as the proxy of real popularity in our society .this year - by - year level fluctuation may give unfairness to compare the frequencies many years apart .to avoid such fluctuation , we restrict our results for the year 2008 . additionally , word - by - word fluctuation should be canceled during the averaging process , because each data point corresponds to a massive number of articles . as a result, we believe that our observation is still valid , in spite of such fluctuations that might cause some degree of bias .the heterogeneity for the ratio of the number of editors to that of edits , , leads us to the eventual question : is this heterogeneity from structural inequality ?in other words , does the existence of dictatorship or monopoly of small group editors , or super editors , make it difficult for others to participate in editing processes ? to find the answer , we use the gini coefficient , which is a common measure for inequality in economics ranging from for the minimal inequality ( or the maximal equality ) to as the maximal inequality .we consider the number of edits for individual editors as the wealth variable in the gini coefficient .the trend of the gini coefficient as a decreasing function of shown in fig .[ gini](a ) suggests the modes with slope and in fig .[ bimodality](a ) are in equilibrium and non - equilibrium states , respectively .additional analysis of the gini coefficient in terms of the estimator ( the ratio of the number of edits in 2014 to the page view statistics in 2014 ) also indicates that the larger induces more severe inequality for editing [ see fig .[ gini](b ) ] .this is counterintuitive because it actually means that articles inducing larger motivation to edit eventually set a larger barrier to participate in editing .it is doubtful that the phenomenon is caused by the amount of information , since the gini coefficient does not vary much according to its amount of information [ see fig .[ gini](c ) ] .similar to the real wikipedia , our model also supports the observed inequality .although we use a simplified estimator of in our real data , the ratio of the number of edits in 2014 to the page view statistics , the gini coefficient is an increasing function of in the model as in the real data [ see fig . [ gini](d ) ] . additionally , since has a limited effect on the article size ( see fig . 
[ bimodality ] ) , the model observation of the gini coefficient is compatible to our observation that article size does not have a large effect on the gini coefficients .such logical elimination suggests that a few engaged and dominating editors make it indeed hard for laypeople to participate in editing processes .there are `` democratic '' articles ( with slope of in fig . [ bimodality ] ) and `` dictatorial '' articles ( with slope of in fig .[ bimodality ] ) . in short, inequality exists indeed .traditionally , collaboration used to be mainly regional and face - to - face interactions were demanded , which had prevented the world - wide formation of collective intelligence .nowadays , improvements of modern information technology bring us a whole new stage of online collaboration . in this study, we have examined such a new passion of collective intelligence through long - term data from wikipedia .people believe that such a new paradigm will eventually yield democratization of knowledge . as a representative medium , wikipediais also considered as a spearhead of such pro - democracy movements .however , our observation suggests that the current status of wikipedia is still apart from the perfect world - wide democracy .the observed periodicity for the time between edits alludes that the english wikipedia is still regional for english natives ( see fig .[ deltatimeperage ] ) .bimodality and its inequality index suggest that there are articles dominated by a small number of super editors ( figs .[ bimodality][gini ] ) .notwithstanding the fact that there is no explicit ownership for wikipedia articles , some kind of privatization by dedicated editors for given topics is happening in reality .the value of such dedicated editors should not be depreciated , of course .their dedication has indeed played the main role in keeping the current state - of - the - art accuracy in the current wikipedia . however ,in the long run , knowledge can not survive without collaboration between experts and society . although most advanced knowledge is invented by experts ,such experts occupy a rather small proportion in a society ; thus , knowledge without support from other members of the society will lose its dynamic force to sustain . additionally , despite our findings that the amount of contents created by an editor ( ) mainly depends on the degree of referring wikipedia ( namely ) , an equitable opportunity for participation also increases such individual productivity ( see fig .[ modelresults ] ) .our study not only gives significant insight into the formation and current state of wikipedia , but also offers the future direction of wikipedia .our simulation results suggest that such inequality is increased with time , which may result in less productivity and less accuracy as by - product in the future than now [ see figs . [modelresults](a ) and [ gini](e ) ] .it is indeed already reported that the growth of wikipedia is slowing down and our observation suggests that it will become even slower if we do not take any active action . 
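For reference, the inequality measure used in the analysis above (figs. [gini](a)-(e)) can be computed directly from the per-editor edit counts of an article. The sketch below uses the standard sorted-data formula for the Gini coefficient; it is an illustration of the measure, not the authors' code.

```python
import numpy as np

def gini(edit_counts):
    """Gini coefficient of the per-editor edit counts of a single article:
    0 when every editor contributes equally, close to (n-1)/n when one
    'super editor' dominates.  Standard formula for non-negative data."""
    x = np.sort(np.asarray(edit_counts, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0.0:
        return 0.0
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1.0) / n

# An egalitarian article versus one dominated by a single editor.
print(gini([5, 5, 5, 5]))      # 0.0
print(gini([100, 1, 1, 1]))    # high, near the maximum possible for four editors
```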
to sustain collaborating environments ,it is worth giving more motivation and incentives to the newbies to reduce the monopolized structure in wikipedia .we hope that extending our approach to various collaboration environments such as open - source movement might give us the insight for the future investment that brings us a new level of collaborating environments .finally , we would like to emphasize that the results and implications of our study are not restricted to the wikipedia or online collaboration systems , but have much wider applications in human or nonhuman interactions in the world .we are grateful to beom jun kim ( ) , pan - jun kim ( ) , and hyunggyu park ( ) for insightful comments .this work was supported by the national research foundation of korea through grant no .2011 - 0028908 ( j.y . and h.j . ) .a. kittur , b. suh , and e.h .chi , _ can you ever trust a wiki ? : impacting perceived trustworthiness in wikipedia _ , proceedings of the 2008 acm conference on computer supported cooperative work ( cscw 08 ) , p. 477( 2008 ) .adler , k. chatterjee , l. de alfaro , m. faella , i. pye , and v. raman , _ assigning trust to wikipedia content _, proceedings of the 4th international symposium on wikis ( wikisym 08 ) , article no . 26( 2008 ) .bould , e. s. hladkowicz , a .- a.e .pigford , l .- a .ufholz , t. postonogova , e. shin , and s. boet , _ references that anyone can edit : review of wikipedia citations in peer reviewed health science literature _ ,bmj * 348 * , g1585 ( 2014 ) .a. kittur , b. suh , b. a. pendleton , and e.h .chi , _ he says , she says : conflict and coordination in wikipedia _ , proceedings of the sigchi conference on human factors in computing systems ( chi07 ) ,( 2007 ) .bryant , a. forte , and a. bruckman , _ becoming wikipedian : transformation of participation in a collaborative online encyclopedia _ , proceedings of the 2005 international acm siggroup conference on supporting group work ( group 05 ) ( 2005 ) .lakhani and r. wolf , why hackers do what they do : understanding motivation and effort in free / open source software projects , _ perspectives on free and open source software _( mit press , cambridge , ma , 2005 ) .a. capocci , v.d.p .servedio , f. colaiori , l.s .buriol , d. donato , s. leonardi , and g. caldarelli , _ preferential attachment in the growth of social networks : the internet encyclopedia wikipedia _ , phys .e * 74 * , 036116 ( 2006 ) .h. hasan and c. pfaff , _ emergent conversational technologies that are democratising information systems in organisations : the case of the corporate wiki _ , proceedings of the information systems foundations ( isf ) : theory , representation and reality conference ( 2006 ) .
wikipedia is a free internet encyclopedia with an enormous amount of content . this encyclopedia is written by volunteers with various backgrounds in a collective fashion ; anyone can access and edit most of the articles . this open - editing nature may give us prejudice that wikipedia is an unstable and unreliable source ; yet many studies suggest that wikipedia is even more accurate and self - consistent than traditional encyclopedias . scholars have attempted to understand such extraordinary credibility , but usually used the number of edits as the unit of time , without consideration of real - time . in this work , we probe the formation of such collective intelligence through a systematic analysis using the entire history of english wikipedia articles , between 2001 and 2014 . from this massive data set , we observe the universality of both timewise and lengthwise editing scales , which suggests that it is essential to consider the real - time dynamics . by considering real time , we find the existence of distinct growth patterns that are unobserved by utilizing the number of edits as the unit of time . to account for these results , we present a mechanistic model that adopts the article editing dynamics based on both editor - editor and editor - article interactions . the model successfully generates the key properties of real wikipedia articles such as distinct types of articles for the editing patterns characterized by the interrelationship between the numbers of edits and editors , and the article size . in addition , the model indicates that infrequently referred articles tend to grow faster than frequently referred ones , and articles attracting a high motivation to edit counterintuitively reduce the number of participants . we suggest that this decay of participants eventually brings inequality among the editors , which will become more severe with time .
graphical models are a class of statistical models which combine the rigour of a probabilistic approach with the intuitive representation of relationships given by graphs .they are composed by a set of _ random variables _ describing the data and a _ graph _ in which each _ vertex _ or _ node _ is associated with one of the random variables in .nodes and the corresponding variables are usually referred to interchangeably .the _ edges _ are used to express the dependence relationships among the variables in .different classes of graphs express these relationships with different semantics , having in common the principle that graphical separation of two vertices implies the conditional independence of the corresponding random variables .the two examples most commonly found in literature are _ markov networks _ , which use undirected graphs ( ugs , see * ? ? ?* ) , and _bayesian networks _ , which use directed acyclic graphs ( dags , see * ? ? ?* ) . in the context of bayesian networks ,edges are often called _ arcs _ and denoted with ; we will adopt this notation as well . the structure of ( that is , the pattern of the nodes and the edges ) determines the probabilistic properties of a graphical model .the most important , and the most used , is the factorisation of the _ global distribution _( the joint distribution of ) into a set of lower - dimensional _ local distributions_. in markov networks , local distributions are associated with _ cliques _ ( maximal subsets of nodes in which each element is adjacent to all the others ) ; in bayesian networks , each local distribution is associated with one node conditional on its _ parents _ ( nodes linked by an incoming arc ) . in markov networksthe factorisation is unique ; different graph structures correspond to different probability distributions .this is not so in bayesian networks , where dags can be grouped into _ equivalence classes _ which are statistically indistinguishable .each such class is uniquely identified by the underlying ug ( i.e. in which arc directions are disregarded , also known as _ skeleton _ ) and by the set of _ v - structures _ ( i.e. converging connections of the form , , in which and are not connected by an arc ) common to all elements of the class . as for the global and the local distributions ,there are many possible choices depending on the nature of the data and the aims of the analysis .however , literature have focused mostly on two cases : the _ discrete case _ , in which both the global and the local distributions are multinomial random variables , and the _ continuous case _ , in which the global distribution is multivariate normal and the local distributions are univariate ( in bayesian networks ) or multivariate ( in markov networks ) normal random variables . in the former , the parameters of interest are the _ conditional probabilities _ associated with each variable , usually represented as conditional probability tables . in the latter ,the parameters of interest are the _ partial correlation coefficients _ between each variable and its neighbours in . 
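Returning to the equivalence-class description above (identical skeleton and identical v-structures), that characterisation is easy to operationalise. The sketch below, with made-up node labels and arc sets, checks whether two DAGs belong to the same equivalence class; it illustrates the definition only and is not code from the paper.

```python
from itertools import combinations

def skeleton(arcs):
    """Undirected skeleton of a DAG given as a set of directed arcs (u, v)."""
    return {frozenset(a) for a in arcs}

def v_structures(arcs):
    """Converging connections x -> z <- y in which x and y are not adjacent."""
    arcs = set(arcs)
    skel = skeleton(arcs)
    parents = {}
    for u, v in arcs:
        parents.setdefault(v, set()).add(u)
    vs = set()
    for z, pa in parents.items():
        for x, y in combinations(sorted(pa), 2):
            if frozenset((x, y)) not in skel:
                vs.add((frozenset((x, y)), z))
    return vs

def markov_equivalent(arcs1, arcs2):
    """Same skeleton and same v-structures <=> same equivalence class."""
    return skeleton(arcs1) == skeleton(arcs2) and v_structures(arcs1) == v_structures(arcs2)

# The chains x -> z -> y and x <- z <- y are equivalent; the collider x -> z <- y is not.
print(markov_equivalent({("x", "z"), ("z", "y")}, {("z", "x"), ("y", "z")}))  # True
print(markov_equivalent({("x", "z"), ("y", "z")}, {("z", "x"), ("y", "z")}))  # False
```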
conjugate distributions ( dirichlet and wishart , respectively ) are then used for learning and inference in a bayesian setting .the choice of an appropriate probability distribution for the set of the possible edges is crucial to make the derivation and the interpretation of the properties of and easier .we will first note that a graph is uniquely identified by its edge set ( or by its arc set for a dag ) , and that each edge or arc is uniquely identified by the nodes and , it is incident on . therefore ,if we model with a random variable we have that any edge set ( or arc set ) is just an element of its sample space ; and since there is a one - to - one correspondence between graphs and edge sets , probabilistic properties and inferential results derived for traditional graph - centric approaches can easily be adapted to this new edge - centric approach and vice versa . in addition , if we denote , we can clearly see that . on the other hand , for ugs and even larger for dags and their equivalence classes .we will also note that an edge or an arc has only few possible states : * an edge can be either present ( ) or missing from an ug ( ) ; * in a dag , an arc can be present in one of its two possible directions ( or ) or missing from the graph ( and ) .this leads naturally to the choice of a bernoulli random variable for the former , and to the choice of a trinomial random variable for the latter , where is the arc and is the arc .therefore , a graph structure can be modelled through its edge or arc set as follows : * ugs , such as markov networks or the skeleton and the moral graph of bayesian networks , can be modelled by a _ multivariate bernoulli random variable _ ; * directed graphs , such as the dags used in bayesian networks , can be modelled by a _ multivariate trinomial random variable_. in addition to being the natural choice for the respective classes of graphs , these distributions integrate smoothly with and extend other approaches present in literature .for example , the probabilities associated with each edge or arc correspond to the _ confidence coefficients _ from and the _ arc strengths _ from . in a frequentist setting , they have been estimated using bootstrap resampling ; in a bayesian setting , markov chain monte carlo ( mcmc ) approaches have been used instead .let , be bernoulli random variables with marginal probabilities of success , that is , .then the distribution of the random vector ^t ] has some interesting numerical properties . from basic probability theory ,we know its diagonal elements are bounded in the interval ] . as a result ,we can derive similar bounds for the eigenvalues of , as shown in the following theorem .[ thm : mvebereigen ] let , and let be its covariance matrix .let , be the eigenvalues of .then see appendix [ app : proofs ] .these bounds define a closed convex set in , described by the family \right\}\ ] ] where is the non - standard simplex construction and properties of the multivariate trinomial random variable are similar to the ones illustrated in the previous section for the multivariate bernoulli . 
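As an illustration of the edge-centric view and of the frequentist estimation route mentioned above (bootstrap resampling), the following sketch turns a collection of learned undirected graphs into 0/1 edge-indicator vectors and computes the first two moments of the implied multivariate Bernoulli distribution. The graphs, node names and bootstrap output are hypothetical.

```python
import numpy as np

def edge_indicator_matrix(graphs, nodes):
    """Turn each learned undirected graph (a set of edges) into a 0/1 row,
    one column per unordered pair of nodes."""
    pairs = [(a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:]]
    E = np.zeros((len(graphs), len(pairs)))
    for r, g in enumerate(graphs):
        g = {frozenset(e) for e in g}
        for c, p in enumerate(pairs):
            E[r, c] = 1.0 if frozenset(p) in g else 0.0
    return E, pairs

# Hypothetical bootstrap output: one learned graph per resampled data set.
graphs = [{("a", "b"), ("b", "c")}, {("a", "b")}, {("a", "b"), ("a", "c")}]
E, pairs = edge_indicator_matrix(graphs, ["a", "b", "c"])
p_hat = E.mean(axis=0)                 # estimated marginal edge probabilities
Sigma_hat = np.cov(E, rowvar=False)    # second-order moments of the edge vector
```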
for this reason , and because it is a particular case of the multivariate multinomial distribution , the multivariate trinomial distribution is rarely the focus of research efforts in literature .some of its fundamental properties are covered either in or in monographs on contingency tables analysis such as .let , be trinomial random variables assuming values and denoted as with .then the distribution of the random vector ^t ] of the real axis , the maximum standard deviation of equals .the maximum is reached if takes the values and with probabilities each .see . in both caseswe obtain that the maximum variance is achieved for and is equal to , so ] .furthermore , we can also prove that the eigenvalues of are bounded using the same arguments as in lemma [ thm : mvebereigen ] .[ thm : dirlambda ] let , and let be its covariance matrix .let , be the eigenvalues of .then see the proof of lemma [ thm : mvebereigen ] in appendix [ app : proofs ] .these bounds define again a closed convex set in , described by the family \right\},\ ] ] where is the non - standard simplex from equation [ eq : simplex ] . another useful result , which we will use in section [ sec : dagprop ] to link inference on ugs and dags , is introduced below .[ thm : triber2 ] let ; then and + . see appendix [ app : proofs ] .it follows that the variance of each can be decomposed in two parts : the first is a function of the corresponding component of the transformed random vector , while the second depends only on the probabilities associated with and ( which correspond to and in equation [ eqn : tridef ] ) .the results derived in the previous section provide the foundation for characterising and . to this end, it is useful to distinguish three cases corresponding to different configurations of the probability mass among the graph structures : * _ minimum entropy _ :the probability mass is concentrated on a single graph structure .this is the best possible configuration for , because only one edge set ( or one arc set ) has a non - zero posterior probability . in other words ,the data provide enough information to identify a single graph with posterior probability ; * _ intermediate entropy _ : several graph structures have non - zero probabilities .this is the case for informative priors and for the posteriors resulting from real - world data sets ; * _ maximum entropy _ :all graph structures in have the same probability .this is the worst possible configuration for , because it corresponds to the non - informative prior from equation [ eqn : flatprior ] . in other words ,the data do not provide any information useful in identifying a high - posterior graph . clearly , _minimum _ and _ maximum entropy _ are limiting cases for ; the former is non - informative about , while the latter identifies a single graph in .as we will show in sections [ sec : ugprop ] ( for ugs ) and [ sec : dagprop ] ( for dags ) , they provide useful reference points in determining which edges ( or arcs ) have significant posterior probabilities and in analysing the variability of the graph structure . in the _ minimum entropy _ case , only one configuration of edges has non - zero probability , which means that the uniform distribution over arising from the _ maximum entropy _ case has been studied extensively in random graph theory ; its two most relevant properties are that all edges are independent and have . 
as a result , ; all edges display their maximum possible variability , which along with the fact that they are independent makes this distribution non - informative for as well as .the _ intermediate entropy _case displays a middle - ground behaviour between the _ minimum _ and _ maximum entropy _ cases .the expected value and the covariance matrix of do not have a definite form beyond the bounds derived in section [ sec : mvber ] . when considering posteriors arising from real - world data , we have in practice that most edges in represent conditional dependence relationships that are completely unsupported by the data .this behaviour has been explained by with the tendency of `` good '' graphical models to represent the causal relationships underlying the data , which are typically sparse . as a result , we have that and for many , so is almost surely singular unless such edges are excluded from the analysis .edges that appear with have about the same marginal probability and variance as in the _ maximum entropy _ case , so their marginal behaviour is very close to random noise . on the other hand ,edges with probabilities near or can be considered to have a good support ( against or in favour , respectively ) . as approaches or , approaches its _minimum entropy_. the closeness of a multivariate bernoulli distribution to the _ minimum _ and _ maximum entropy _ cases can be represented in an intuitive way by considering the eigenvalues ^t ] in modulus , while the correlation takes values in ] and ] ( for the multivariate bernoulli ) or ] for the multivariate bernoulli distribution and ] interval and associate high values to graphs whose structures display a high variability .since they vary on a known and bounded scale , they are easy to interpret as absolute quantities ( i.e. goodness - of - fit statistics ) as well as relative ones ( i.e. proportions of total possible variability ) . they also have a clear geometric interpretation as distances in , as they can all be rewritten as function of the eigenvalues .this allows , in turn , to provide an easy interpretation of otherwise complex properties of and and to derive new results .first of all , the measures introduced in equation [ eqn : normalised ] can be used to select the best learning algorithm in terms of structure stability for a given data set .different algorithms make use of the information present in the data in different ways , under different sets of assumptions and with varying degrees of robustness .therefore , in practice different algorithms learn different structures from the same data and , in turn , result in different posterior distributions on .if we rewrite equation [ eqn : structlearn ] to make this dependence explicit , and denote with the covariance matrix of the distribution of the edges ( or the arcs ) induced by , then we can choose the optimal structure learning algorithm as or , equivalently , using or instead of .such an algorithm has the desirable property of maximising the information gain from the data , as measured by the distance from the non - informative prior in .in other words , is the algorithm that uses the data in the most efficient way .furthermore , an optimal can be identified even for data sets without a `` golden standard '' graph structure to use for comparison ; this is not possible with the approaches commonly used in literature , which rely on variations of hamming distance and knowledge of such a `` golden standard '' to evaluate learning algorithms ( see , for example * ? ? ?* ) . 
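The exact normalised measures of equation [eqn:normalised] are defined in the text above and are not reproduced here. As a hedged stand-in for the undirected case, the sketch below uses the sum of the eigenvalues of the edge covariance matrix (its trace) scaled by the maximum value k/4, which is 0 in the minimum-entropy case and 1 in the maximum-entropy case, and shows how such a measure could be used to pick the structure learning algorithm that concentrates the posterior most; it is an illustration, not the paper's definition.

```python
import numpy as np

def total_variability(Sigma):
    """Illustrative normalised variability for an undirected-graph edge vector:
    sum of the eigenvalues of the covariance matrix (its trace), divided by the
    maximum k/4 attainable by k Bernoulli variables."""
    Sigma = np.atleast_2d(np.asarray(Sigma, dtype=float))
    k = Sigma.shape[0]
    eigvals = np.linalg.eigvalsh(Sigma)
    return float(np.sum(np.clip(eigvals, 0.0, None)) / (k / 4.0))

def best_algorithm(edge_samples_by_algorithm):
    """Choose the learning algorithm whose distribution over edge vectors is the
    most concentrated (smallest variability).  Input: {name: 0/1 edge matrix}."""
    scores = {name: total_variability(np.cov(E, rowvar=False))
              for name, E in edge_samples_by_algorithm.items()}
    return min(scores, key=scores.get), scores
```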
similarly , it is possible to study the influence of different values of a tuning parameter for a given structure learning algorithm ( and again a given data set ) .such parameters include , for example , restrictions on the degrees of the nodes and regularisation coefficients . if we denote these tuning parameters with , we can again choose an optimal as another natural application of the variability measures presented in equation [ eqn : normalised ] is the study of the consistency of structure learning algorithms .it has been proved in literature that most of structure learning algorithms are increasingly able to identify a single , minimal graph structure as the sample size diverges ( see , for example * ? ? ?therefore , converges towards the _ minimum entropy _ case and all variability measures converge to zero .however , convergence speed has never been analysed and compared across different learning algorithms ; any one of , or provides a coherent way to perform such an analysis .lastly , we may use the variability measures from equation [ eqn : normalised ] as basis to investigate different prior distributions for real - world data modelling and to define new ones .relatively little attention has been paid in literature to the choice of the prior over , and the uniform _ maximum entropy _distribution is usually chosen for computational reasons .its only parameter is the _ imaginary sample size _ , which expresses the weight assigned to the prior distribution as the size of an imaginary sample size supporting it .however , choosing a uniform prior also has some drawbacks .firstly , and have shown that both large and small values of the imaginary sample size have unintuitive effects on the sparsity of a bayesian network even for large sample sizes .for instance , large values of the imaginary sample size may favour the presence of an arc over its absence even when both and imply the variables the arc is incident on are conditionally independent .secondly , a uniform prior assigns a non - null probability to all possible models .therefore , it often results in a very flat posterior which is not able discriminate between networks that are well supported by the data and networks that are not .following s suggestion that `` good '' graphical models should be sparse , sparsity - inducing priors such as the ones in and should be preferred to the _ maximum entropy _ distribution , as should informative priors .for example , the prior proposed in introduces a prior probability to include ( independently ) each arc in a bayesian network with a given topological ordering , which means and for all in .thus , , and . 
the prior proposed in , on the other hand , controls the number of parents of each node for a given topological ordering .therefore , it favours low values of in and again for all .clearly , the amount of sparsity induced by the hyperparameters of these priors determines the variability of both the prior and the posterior , and can be controlled through the variability measures from equation [ eqn : normalised ] .furthermore , these measures can provide inspiration in devising new priors with the desired form and amount of sparsity .bayesian inference on the structure of graphical models is challenging in most situations due to the difficulties in defining and analysing prior and posterior distributions over the spaces of undirected or directed acyclic graphs .the dimension of these spaces grows super - exponentially in the number of variables considered in the model , making even map analyses problematic . in this paper, we propose an alternative approach to the analysis of graph structures which focuses on the set of possible edges of a graphical model instead of the possible graph structures themselves .the latter are uniquely identified by the respective edge sets ; therefore , the proposed approach integrates smoothly with and extends both frequentist and bayesian results present in literature .furthermore , this change in focus provides additional insights on the behaviour of individual edges ( which are usually the focus of inference ) and reduces the dimension of the sample space from super - exponential to quadratic in the number of variables . for many inference problemsthe parameter space is reduced as well , and makes complex inferential tasks feasible . as an example , we characterise several measures of structural variability for both bayesian and markov networks using the second order moments of and .these measures have several possible applications and are easy to interpret from both an algebraic and a geometric point of view .the author would like to thank to adriana brogini ( university of padova ) and david balding ( university college london ) for proofreading this article and providing many useful comments and suggestions .furthermore , the author would also like to thank giovanni andreatta and luigi salce ( university of padova ) for their assistance in the development of the material .since is a real , symmetric , non - negative definite matrix , its eigenvalues are non - negative real numbers ; this proves the lower bound in both inequalities .the upper bound in the first inequality holds because as the sum of the eigenvalues is equal to the trace of .this in turn implies which completes the proof .it is easy to show that each , with and .it follows that the parameter collection of reduces to after the transformation . therefore , is a uniquely identified multivariate bernoulli random variable according to the definition introduced at the beginning of section [ sec : mvber ] .let s assume by contradiction that is cyclic ; this implies that there are one or more nodes such that for some .however , this would mean that in we would have which is not possible since is assumed to be acyclic .each possible arc can appear in the graph in only one direction at a time , so a directed acyclic graph with nodes can have at most arcs .therefore but in the _ maximum entropy _ case we also have that , so which completes the proof . 
in the maximum entropy case ,all arcs have the same marginal distribution function , \\ & \frac{1}{4 } + \frac{1}{4(n - 1 ) } & & \text{in } ( -1 , 0 ] \\ & \frac{3}{4 } - \frac{1}{4(n - 1 ) } & & \text{in } ( 0 , 1 ] \\ & 1 & & \text{in } ( 1 , + \infty ) \end{aligned } \right.,\ ] ] so the joint distribution of any pair of arcs and can be written as a member of the farlie - morgenstern - gumbel family of distribution as .\end{aligned}\ ] ] then if we apply hoeffding s identity from equation [ eqn : hoeffding ] and replace the joint distribution function with the right hand of equation [ eqn : this ] we have that - f_a(a_{ij})f_a(a_{kl } ) \right| \\ & = \sum_{\{-1 , 0\}}\sum_{\{-1 , 0\ } } ( 1 - f_a(a_{ij}))(1 - f_a(a_{kl } ) ) .\end{aligned}\ ] ] we can now compute the bounds for and using only the marginal distribution function from equation [ eqn : distrfun ] and the variance from equation [ eqn : approxvar ] , thus obtaining the expressions in equation [ eqn : boundcov ] and equation [ eqn : boundcor ] .below are reported the exact values of the parameters of the marginal trinomial distributions and of the first and second order moments of the multivariate trinomial distribution in the maximum entropy case .all these quantities have been computed by a complete enumeration of the directed acyclic graphs of a given size ( , , , and ) .
graphical model learning and inference are often performed using bayesian techniques . in particular , learning is usually performed in two separate steps . first , the graph structure is learned from the data ; then the parameters of the model are estimated conditional on that graph structure . while the probability distributions involved in this second step have been studied in depth , the ones used in the first step have not been explored in as much detail . in this paper , we will study the prior and posterior distributions defined over the space of the graph structures for the purpose of learning the structure of a graphical model . in particular , we will provide a characterisation of the behaviour of those distributions as a function of the possible edges of the graph . we will then use the properties resulting from this characterisation to define measures of structural variability for both bayesian and markov networks , and we will point out some of their possible applications . marco scutari + genetics institute , university college london , united kingdom + m.scutari.ac.uk graphical models stand out among other classes of statistical models because of their use of graph structures in modelling and performing inference on multivariate , high - dimensional data . the close relationship between their probabilistic properties and the topology of the underlying graphs represents one of their key features , as it allows an intuitive understanding of otherwise complex models . in a bayesian setting , this duality leads naturally to split model estimation ( which is usually called _ learning _ ) in two separate steps . in the first step , called _ structure learning _ , the graph structure of the model is estimated from the data . the presence ( absence ) of a particular edge between two nodes in implies the conditional ( in)dependence of the variables corresponding to such nodes . in the second step , called _ parameter learning _ , the parameters of the distribution assumed for the data are estimated conditional to the graph structure obtained in the first step . if we denote a graphical model with , so that , then we can write graphical model estimation from a data set as furthermore , following , we can rewrite structure learning as the prior distribution and the corresponding posterior distribution are defined over the space of the possible graph structures , say . since the dimension of grows super - exponentially with the number of nodes in the graph , it is common practice to choose as a non - informative prior , and then to search for the graph structure that maximises . unlike such a _ maximum a posteriori _ ( map ) approach , a full bayesian analysis is computationally unfeasible in most real - world settings . therefore , inference on most aspects of and is severely limited by the nature of the graph space . in this paper , we approach the analysis of those probability distributions from a different angle . we start from the consideration that , in a graphical model , the presence of particular edges and their layout are the most interesting features of the graph structure . therefore , investigating and through the probability distribution they induce over the set of their possible edges ( identified by the set of unordered pairs of nodes in ) provides a better basis from which to develop bayesian inference on . this can be achieved by modelling as a multivariate discrete distribution encoding the joint state of the edges . 
then , as far as inference on is concerned , we may rewrite equation [ eqn : structlearn ] as as a side effect , this shift in focus reduces the effective dimension of the sample space under consideration from super - exponential ( the dimension of ) to polynomial ( the dimension ) in the number of nodes . the dimension of the parameter space for many inferential tasks , such as the variability measures studied in this paper , is likewise reduced . the content of the paper is organised as follows . basic definitions and notations are introduced in section [ sec : definitions ] . the multivariate distributions used to model are described in section [ sec : distributions ] . some properties of the prior and posterior distributions on the graph space , and , are derived in section [ sec : properties ] . we will focus mainly on those properties related with the first and second order moments of the distribution of , and we will use them to characterise several measures of structural variability in section [ sec : variability ] . these measures may be useful for several inferential tasks for both bayesian and markov networks ; some will be sketched in section [ sec : variability ] . conclusions are summarised in section [ sec : conclusion ] , and proofs for the theorems in sections [ sec : distributions ] to [ sec : variability ] are reported in appendix [ app : proofs ] . appendix [ app : numbers ] lists the exact values for some quantities of interest for , computed for several graph sizes .
the application of cross - correlation techniques to measure velocity shifts has a long history ( simkin 1972 , 1974 ; lacy 1977 ; tonry & davis 1979 ) , and with the advent of massive digital spectroscopic surveys of galaxies and stars , the subject has renewed interest .the recently completed sloan digital sky survey ( sdss ) has collected spectra for more than 600,000 galaxies and 90,000 quasars ( adelman - mccarthy et al .2007 , york et al . 2000 ) .the sdss has also obtained spectra for about 200,000 galactic stars , and it is now being extended at lower galactic latitudes by segue with at least as many spectra ( rockosi 2005 , yanny 2005 ) .another ongoing galactic survey , rave , is expected to collect high - resolution spectra for a million stars by 2011 ( steinmetz et al .2006 ) , and the plans for the gaia satellite include measuring radial velocities for 10 stars by 2020 ( katz et al .2004 ) . extracting the maximum possible information from these spectroscopic surveysrequires carefully designed strategies .cross - correlation has been the target of numerous developments in recent years ( see , e.g. , mazeh & zucker 1994 , statler 1995 , torres , latham & stefanik 2007 , zucker 2003 ) , but several practical aspects of its implementation would benefit from further research .these include the selection of templates ( e.g. , observed vs. synthetic libraries ) , how to combine measurements from multiple templates , the method to determine the maximum of the cross - correlation function , data filtering , and error determination .some of these issues are briefly addressed in this paper , but our focus is on how the requirement of coherence among all entries in a radial velocity data base can be used to improve the original measurements .a different but plausible approach has been recently proposed by zucker & mazeh ( 2006 ) .the doppler shifts of targets in a spectroscopic survey are determined one at a time .each object s projected velocity is measured independently , not counting a possible common set of cross - correlation templates . for a given template , from any pair of ( projected ) velocity measurements , we can derive a relative velocity between the two objects involved .however , that figure will likely be numerically different from the value inferred from the direct cross - correlation between their spectra , even if the two objects are of the same class . in this paper, we argue that it is possible to improve the original determinations by imposing consistency among all available measurements .our discussion is oriented to the case of a homogeneous sample : multiple observations of the same or similar objects . in the following sectioni introduce cross - correlation , with a brief discussion about error evaluation .section [ basic ] presents the notion of _ self - improvement _ and section [ general ] extends the method to the more realistic scenario in which the spectra in a given data set have varying signal - to - noise ratios . 
in [ sdss ]we explore an application of the proposed technique involving low - resolution spectra , concluding the paper with a brief discussion and reflections about future work .the most popular procedure for deriving relative velocities between a stellar spectrum and a template is the cross - correlation method ( tonry & davis 1979 ) .this technique makes use of all the available information in the two spectra , and has proven to be far superior than simply comparing the doppler shifts between the central wavelengths of lines when the signal - to - noise ratio is low .the cross - correlation of two arrays ( or spectra ) * t * and * s * is defined as a new array * c * if the spectrum * t * is identical to * s * , but shifted by an integer number of pixels , the maximum value in the array * c * will correspond to its element . cross - correlation can be similarly used to measure shifts that correspond to non - integer numbers . in this case , finding the location of the maximum value of the cross - correlation function can be performed with a vast choice of algorithms .the most straightforward procedure to estimate realistic uncertainties involves an accurate noise model and monte - carlo simulations , and that is the method we use in section [ sdss ] .we employ gaussians and low - order polynomials to model the peak of the cross - correlation function . for these simple models , implemented in a companion idl code ,it is possible to derive analytical approximations that relate the uncertainty in the location of the maximum of the cross - correlation function to the covariance matrix [ u .digital cross - correlation , introduced in section [ xcorr ] , is commonly employed to derive doppler radial velocities between two spectra .the discussion in this section is , nonetheless , more general , and deals with the statistical improvement of a set of relative velocity measurements .if three spectra of the same object are available and we refer to the relative radial velocity between the first two as , an alternative estimate of can be obtained by combining the other relative velocity measurements , .assuming uniform uncertainties , the error - weighted average of the two values is .for a set of spectra , we can obtain an improved relative radial velocity determination between the pair by generalizing this expression it can be seen from eq .[ ci ] that the correlation of * t * and * s * is equal to the reverse of the correlation between * s * and * t*. thus , when the relative velocities between two spectra is derived from cross - correlation and the spectra have a common sampling , it will be satisfied that , but this will not be true in general .for example , if we are dealing with grating spectroscopy in air , changes in the refraction index with time may alter the wavelength scale and the spectral range covered by any particular pixel , requiring interpolation .if our choice is to interpolate the second spectrum ( * s * ) to the scale of the first ( * t * ) , this may introduce a difference between and due to different interpolation errors .we can accommodate the general case by writing note that this definition ensures that , and .if the quality of the spectra is uniform , and all measured radial velocities have independent uncertainties of the same size , the primed values would have an uncertainty . despite be numerically different from , will be highly correlated with , and thus the uncertainty in the primed velocities will not be reduced that fast . 
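The combination rule sketched above for spectra of equal quality can be written compactly in matrix form. The following is a minimal sketch assuming one straightforward reading of eq. [vprime]: every pair (i, j) receives the plain average of the direct measurement and the indirect estimates through every other spectrum; for three spectra and an antisymmetric V this is consistent with the error-weighted average described in the text.

```python
import numpy as np

def self_improve(V):
    """Equal-quality case: for every pair (i, j), average the direct measurement
    v_ij with the indirect estimates v_ik + v_kj obtained through every other
    spectrum k (with v_ii = 0).  For n = 3 and antisymmetric V this gives
    (2 v_12 + v_13 + v_32) / 3 for the (1, 2) entry."""
    V = np.asarray(V, dtype=float)
    n = V.shape[0]
    return (V.sum(axis=1, keepdims=True) + V.sum(axis=0, keepdims=True)) / n
```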
in addition , all are also correlated with all , driving the improvement farther away from the ideal behavior . we can expect that after a sufficient number of spectra are included , either random errors will shrink below the systematic ones or all the available information will already be extracted , and no further improvement will be achieved .the case addressed in section [ basic ] corresponds to a set of spectra of the same quality .if the uncertainties in the measured relative radial velocities differ significantly among pairs of spectra , eq . [ vprime ] can be generalized by using a weighted average where and the uncertainty is in the common case in which , the counterpart of eq .[ symmetry ] for dealing with spectra of varying signal - to - noise ratios reduces to where in the next section we use simulated spectra for a case study : multiple observations of the same object or massive surveys involving large numbers of very similar objects at intermediate spectral resolution .the sdss spectrographs deliver a resolving power of , over the range 381910 nm .these two fiber - fed instruments are attached to a dedicated 2.5 m telescope at apache point observatory ( gunn et al .each spectrograph can obtain spectra for 640 targets simultaneously . as a result of a fixed exposure time in sdss spectroscopic observations, the flux in a stellar spectrum at a reference wavelength of 500 nm , , correlates well with the magnitude of the star and with the signal - to - noise ratio at 500 nm ( ) . on average ,we find at mag . to build a realistic noise model, we used the fluxes and uncertainties for 10,000 spectra publicly released as part of dr2 ( abazajian et al .2004 ) to derive , by least - squares fitting , a polynomial approximation .when is expressed in erg s , which are the units used in the sdss data base , we can write where . this relationship holds in the range .the uncertainties in the sdss fluxes for stars mostly relatively bright calibration sources are not dominated by photon noise , but by a _ floor _ noise level of 23% associated with a combination of effects , including imperfect flat - fielding and scattered light corrections .errors are highly variable with wavelength , but the noise at any given wavelength depends linearly on the signal . based on the same set of sdss spectra used for eq . [ snr500 ] , we determine the coefficients in the relation which we use here for our numerical experiments . for a given choice of ,we interpolate the table of coefficients and derived from sdss data , and by inverting eq .[ snr500 ] we derive the flux at 500 nm .finally , we scale the spectrum fluxes and calculate the expected errors at all wavelengths using eq .gaussian noise is introduced for each pixel position , according to the appropriate error , simulating multiple observations of the same star to create an entire library of spectra .we employed a spectrum of hd 245 , a nearby g2 star ) , surface gravity ( ) , and kinematics , make this object a prototypical thick - disk turn - off star .] , to produce spectra that resemble sdss observations with various signal - to - noise ratios .radial velocities are also artificially introduced .the spectrum of hd 245 used here has a resolving power of at 660 nm to 7700 at 480 nm .this variation is , however , irrelevant when smoothing the data to as we do in these experiments . ] and is included in the elodie.3 database ( moultaka et al .2004 , prugniel & soubiran 2001 ) . 
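For spectra of unequal quality, a natural inverse-variance version of the same idea weights each indirect estimate v_ik + v_kj by the reciprocal of its variance. The sketch below is one plausible reading of eq. [vprimegen], not necessarily the paper's exact weighting, and its error estimate treats the alternative estimates as independent, which the text warns is optimistic because they are correlated.

```python
import numpy as np

def self_improve_weighted(V, S):
    """V: n x n matrix of measured relative velocities (v_ii = 0).
    S: matrix of their one-sigma uncertainties (s_ii = 0).
    Returns the improved velocities and a naive error estimate."""
    V = np.asarray(V, dtype=float)
    S = np.asarray(S, dtype=float)
    n = V.shape[0]
    Vp = np.zeros_like(V)
    Ep = np.zeros_like(V)
    for i in range(n):
        for j in range(n):
            est = V[i, :] + V[:, j]            # estimate of v_ij through each k
            var = S[i, :] ** 2 + S[:, j] ** 2  # its variance
            var[var == 0.0] = np.inf           # drop degenerate (i == j) terms
            w = 1.0 / var
            Vp[i, j] = np.sum(w * est) / np.sum(w)
            Ep[i, j] = 1.0 / np.sqrt(np.sum(w))
    return Vp, Ep
```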
as the rest of the library, this spectrum was obtained with the 1.9 m telescope and the elodie spectrograph at haute provence .the original fluxes are resampled to , and then smoothed to by gaussian convolution .the output fluxes are sampled with 12 pixels per resolution element . the doppler shift due to the actual radial velocity of hd 245has already been corrected in the elodie library .new values for the radial velocity in the library of simulated sdss observations are drawn from a normal distribution with a km s , as to approximate the typical range found in f- and g - type stellar spectra included in the sdss ( mostly thick - disk and halo stars ) .the wavelength scale is then doppler shifted , changed to vacuum ( ) , and the spectrum resampled with a step of in ( approximately 2.17 pixels per resolution element ) .the elodie spectra only cover the range nm , and therefore a similar range is finally kept for the sdss - style files , which include 2287 pixels .we employed a set of 40 test spectra with , measuring the relative radial velocities for all possible pairs .[ xcf ] illustrates two sample spectra and their cross - correlation function . to avoid very large or small numbers ,the input arrays are simply divided by their maximum values before cross - correlation .we used second and third order polynomials , as well as a gaussian to model the cross - correlation function and estimate the location of its maximum by least - squares fitting .the solid line in the lower panels of the figure are the best - fitting models .we experimented varying the number of pixels involved in the least - squares fittings ( ) .with the sampling used , the measured relative shifts in pixel space ( ) correspond to a velocity , where is the speed of light in vacuum ; one pixel corresponds to 69 km s .we compare the relative velocities between all pairs of spectra derived from the measurement of the location of the cross - correlation peaks with the _ known _ , randomly drawn , relative velocities .the average difference for the 1600 velocities ( 40 spectra ) and the rms scatter ( ) are used to quantify systematic and random errors , respectively .our experiments exhibit no systematic errors in the derived velocity when the number of points entering the fit was an odd number , i.e. , when we use the same number of data points on each side from the pixel closest to the peak of the cross - correlation function .modest offsets ( ) , however , are apparent when fitting polynomials to an even number of data points , despite we enforce the maximum to be bracketed by the two central data points .random errors increase sharply with the number of data points involved in the fittings for the polynomial models , but not for the gaussian model .the best results for the polynomials are obtained when the lowest possible orders are used . using less than 11 points for the gaussian did not produce reliable results , as there was not enough information to constrain all the parameters of the model , which includes a constant base line .the best performance km s was obtained using a second order polynomial and . using a gaussian modelachieved a minimum km s , fairly independent of .the third order polynomial provided the poorest performance , s at best . 
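The lowest-order peak model discussed above, a parabola through the maximum of the cross-correlation function and its two neighbours, has a closed-form vertex. The sketch below implements it together with the pixel-to-velocity conversion; the sampling step of 1e-4 in log10(wavelength) is an assumption, chosen because it reproduces the 69 km/s per pixel quoted in the text.

```python
import numpy as np

C_KMS = 299792.458                            # speed of light in km/s
STEP_LOG10 = 1.0e-4                           # assumed sampling step in log10(wavelength)
PIX_KMS = C_KMS * np.log(10.0) * STEP_LOG10   # ~69 km/s per pixel, as quoted above

def peak_quadratic(lags, c):
    """Sub-pixel location of the cross-correlation maximum from a parabola
    through the highest point and its two neighbours."""
    k = int(np.argmax(c))
    if k == 0 or k == len(c) - 1:
        return float(lags[k])                 # peak at the edge of the lag range
    y1, y2, y3 = c[k - 1], c[k], c[k + 1]
    return float(lags[k]) + 0.5 * (y1 - y3) / (y1 - 2.0 * y2 + y3)

def shift_to_velocity(shift_pixels):
    """Convert a pixel shift into a velocity in km/s for this sampling."""
    return shift_pixels * PIX_KMS
```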
the cross - correlation can be computed in fourier space , taking advantage of the correlation theorem ( brigham 1974 ) .this fact is usually exploited to speed up the calculation dramatically , as fast fourier transforms can be calculated with a number of operations proportional to , compared to required by eq .note , however , that for medium - resolution surveys of galactic stars , the velocity offsets , limited by the galactic escape velocity , usually correspond to a limited number of pixels .therefore , it is only necessary to compute the values of * c * in the vicinity of the center of the array , rendering the timing for a direct calculation similar to one performed in transformed space ) took seconds in fourier space ( arrays padded to or ) , while in pixel space , with a lag range restricted to pixels ( km s ) , it took seconds . ] .spectra and .the solid line represents the original distribution , and the dashed line the result after applying self - improvement .the error distributions are symmetric because the array * v * is antisymmetric ., width=317 ] to test the potential of the proposed self - improvement technique we repeat the same exercise described in [ sdss1 ] , but using increasingly larger datasets including up to 320 spectra , and adopting three different values for the per pixel at 500 nm : 50 , 25 , and 12.5 .we calculated the cross - correlation between all pairs of spectra ( matrix * v * ) , and performed quadratic fittings to the 3 central data points , cubic polynomial fittings to the central 4 points , and gaussian fittings involving the 11 central points .we estimated the uncertainties in our measurements by calculating the rms scatter between the derived and the known relative velocities for all pairs .then we applied eq .[ symmetry ] to produce a second set of _ self - improved _ velocities .( because the array of wavelengths , , is common to all spectra , the matrix * v * is antisymmetric and we can use eq .[ symmetry ] instead of eq .[ vprime ] . ) a first effect of the transformation from * v * to * v * , is that the systematic offsets described in [ sdss1 ] when using polynomial fittings with even values of disappear ( the same systematic error takes place for measuring and , canceling out when computing ) .more interesting are the effects on the width of the error distributions . fig .[ dist ] illustrates the case when a quadratic model is used for and .the solid line represents the original error distribution and the dashed line the resulting distribution after self - improvement .[ sigma ] shows the rms scatter as a function of the number of spectra for our three values of the ratio at 500 nm .the black lines lines show the original results , and the red lines those obtained after self - improvement .each panel shows three sets of lines : solid for the quadratic model , dotted for the cubic , and dashed for the gaussian .extreme outliers at km s , if any , were removed before computing the width of the error distribution ( ) .note the change in the vertical scale for the case with .for the experiments with , several runs were performed in order to improve the statistics , and the uncertainty ( standard error of the mean ) is indicated by the error bars .these results are based on the gaussian random - number generation routine included in idl 6.1 , but all the experiments were repeated with a second random number generator and the results were consistent . 
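Both computational routes discussed above, the direct evaluation restricted to a small lag range and the evaluation in Fourier space via the correlation theorem, are easy to compare. The sketch below implements the two for equal-length arrays and returns the same central lags; it is illustrative and is not the IDL code mentioned in the text.

```python
import numpy as np

def ccf_direct(t, s, max_lag):
    """Cross-correlation of template t with spectrum s, evaluated only for
    |lag| <= max_lag, which suffices when the velocity range is limited."""
    t = np.asarray(t, dtype=float)
    s = np.asarray(s, dtype=float)
    n = min(t.size, s.size)
    lags = np.arange(-max_lag, max_lag + 1)
    c = np.empty(lags.size)
    for k, j in enumerate(lags):
        if j >= 0:
            c[k] = np.dot(t[:n - j], s[j:n])
        else:
            c[k] = np.dot(t[-j:n], s[:n + j])
    return lags, c

def ccf_fft(t, s, max_lag):
    """Same central lags via the correlation theorem on zero-padded arrays
    (equal-length t and s assumed)."""
    t = np.asarray(t, dtype=float)
    s = np.asarray(s, dtype=float)
    nfft = 2 * t.size                         # padding avoids wrap-around
    c_full = np.fft.irfft(np.conj(np.fft.rfft(t, nfft)) * np.fft.rfft(s, nfft), nfft)
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, c_full[lags % nfft]
```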
as described in [ sdss1 ] , the quadratic model performs better on the original velocity measurements for and . at the lowest considered value of 12.5 , however , the gaussian model delivers more accurate measurements .self - improvement reduces the errors in all cases .although a second order polynomial fitting works better than third order for the original measurements , the two models deliver a similar performance after self - improvement .interestingly , the impact of self improvement is smaller on the results from gaussian fittings than on those from polynomial fittings .as expected , the errors in the original measurements are nearly independent of the number of spectra in the test , but there is indication that at low signal - to - noise the errors after self - improvement for the polynomial models decrease as the sample increases in size , until they plateau for . from these experiments, we estimate that the best accuracy attainable with the original cross - correlation measurements are about 3 , 6 , and 15 km s at , 25 , and 12.5 , respectively .our results also indicate that by applying self - improvement to samples of a few hundred spectra , these figures could improve to roughly 2.5 , 4 , and 9 km s at , 25 , and 12.5 , respectively .we obtained an independent estimate of the precision achievable by simply measuring for 320 spectra the wavelength shift of the core of several strong lines ( h , h , h , and h ) relative to those measured in the solar spectrum ( see allende prieto et al .2006 ) , concluding that radial velocities can be determined from line wavelength shifts with a uncertainty of 3.8 km s at , 7.2 km s at , and 15.9 km s at only 1020% worse than straight cross - correlation but these absolute measurements can not take advantage of the self - improvement technique .allende prieto et al . ( 2006 ) compared radial velocities determined from the wavelength shifts of strong lines for sdss dr3 spectra of g and f - type dwarfs with the sdss pipeline measurements based on cross - correlation .the derived scatter between the two methods was 12 km s or , assuming similar performances , a precision of 8.5 km s for a given method .the spectra employed in their analysis have a distribution approximately linear between and 65 , with and with mean and median values of 22 and 18 , respectively .their result is in line with the expectations based on our numerical tests that indicate a potential precision of 67 km s at .independent estimates by the sdss team are also consistent with these values ( rockosi 2006 ; see also www.sdss.org ) . after correcting for effects such as telescope flexure ,the wavelength scale for stellar spectra in dr5 is accurate to better than 5 km s ( adelman - mccarthy et al . 
2007 ) .this value , derived from the analysis of repeated observations for a set of standards and from bright stars in the old open cluster m67 , sets an upper limit to the accuracy of the radial velocities from sdss spectra , but random errors prevail for .provided no other source of systematic errors is present , our tests indicate that self - improvement could reduce substantially the typical error bars of radial velocities from low signal - to - noise sdss observations .this paper deals with the measurement of relative doppler shifts among a set of spectra of the same or similar objects .if random errors limit the accuracy of the measured relative velocity between any two spectra , there is potential for improvement by enforcing self - consistency among all possible pairs .this situation arises , for example , when a set of spectroscopic observations of the same object are available and we wish to co - add them to increase the signal - to - noise ratio .the spectra may be offset due to doppler velocity offsets or instrumental effects , the only difference being that in the former case the spectra should be sampled uniformly in velocity ( or ) space for cross - correlation , while in the latter a different axis may be more appropriate .another application emerges in the context of surveys that involve significant numbers of spectra of similar objects .radial velocities for individual objects can be derived using a small set of templates and later _ self - improved _ by determining the relative velocities among all the survey targets and requiring consistency among all measurements .the potential of this technique is illustrated by simulating spectra for a fictitious survey of g - type turn - off stars with the sdss instrumentation .our simulations show that applying self - improvement has a significant impact on the potential accuracy of the determined radial velocities .the tests performed dealt with relative velocities , but once the measurements are linked to an absolute scale by introducing a set of well - known radial velocity standards in the sample , the relative values directly translate into absolute measurements .the ongoing segue survey includes , in fact , large numbers of g - type stars , and therefore our results have practical implications for this project . 
the proposed scheme handles naturally the case when multiple templates are available .templates and targets are not treated differently .relative velocities are measured for each possible pair to build , and consistency is imposed to derive by using eqs .[ vprime ] or [ vprimegen ] .if , for example , the templates have been corrected for their own velocities and are the first 10 spectra in the sample , the velocity for the star ( ) can be readily obtained as the weighted average of the elements , where runs from 1 to 10 .the final velocities would take advantage of all the available spectra , not just the radial velocity templates , with differences in signal - to - noise among spectra already accounted for automatically .very recently , zucker & mazeh ( 2006 ) have proposed another approach with the same goals as the method discussed here .their procedure determines the relative velocities of a set of spectra by searching for the doppler shifts that maximize the value of the parameter , where is the maximum eigenvalue of the correlation matrix a two - dimensional array whose element is is the cross - correlation function between spectra and .zucker & mazeh s algorithm is quite different from the self - improvement method presented here .it involves finding the set of velocities that optimally aligns the sample spectra , whereas self - improvement consists on performing very simple algebraic operations on a set of radial velocities that have already been measured .self - improvement is obviously more simple to implement , but a detailed comparison between the performance of the two algorithms in practical situations would be very interesting .this paper also touches on the issue of error determination for relative radial velocities derived from cross - correlation , and convenient analytical expressions are implemented in an idl code available online .we have not addressed many other elements that can potentially impact the accuracy of doppler velocities from cross - correlation , such as systematic errors , filtering , sampling , or template selection .the vast number of spectra collected by current and planned spectroscopic surveys should stimulate further thought on these and other issues with the goal of improving radial velocity determinations .there is certainly an abundance of choices that need to be made wisely .
the measurement of doppler velocity shifts in spectra is a ubiquitous theme in astronomy , usually handled by computing the cross - correlation of the signals , and finding the location of its maximum . this paper addresses the problem of the determination of wavelength or velocity shifts among multiple spectra of the same , or very similar , objects . we implement the classical cross - correlation method and experiment with several simple models to determine the location of the maximum of the cross - correlation function . we propose a new technique , _ self - improvement _ , to refine the derived solutions by requiring that the relative velocity for any given pair of spectra is consistent with all others . by exploiting all available information , spectroscopic surveys involving large numbers of similar objects may improve their precision significantly . as an example , we simulate the analysis of a survey of g - type stars with the sdss instrumentation . applying _ self - improvement _ refines relative radial velocities by more than 50% at low signal - to - noise ratio . the concept is equally applicable to the problem of combining a series of spectroscopic observations of the same object , each with a different doppler velocity or instrument - related offset , into a single spectrum with an enhanced signal - to - noise ratio .
studying the dynamics of nearly one - dimensional structures has various scientific and industrial applications , for example in biophysics ( cf . and the references therein ) and visual computing ( cf . ) as well as in civil and mechanical engineering ( cf . ) , microelectronics and robotics ( cf . ) . in this regard ,an appropriate description of the dynamical behavior of flexible one - dimensional structures is provided by the so - called special cosserat theory of elastic rods ( cf . , ch . 8 , and the original work ) .this is a general and geometrically exact dynamical model that takes bending , extension , shear , and torsion into account as well as rod deformations under external forces and torques . in this context , the dynamics of a rod is described by a governing system of twelve first - order nonlinear partial differential equations ( pdes ) with a pair of independent variables where is the arc - length and the time parameter . in this pde system ,the two kinematic vector equations ( ( 9a)(9b ) in , ch .8) are parameter free and represent the compatibility conditions for four vector functions in .whereas the first vector equation only contains two vector functions , the second one contains all four vector functions .the remaining two vector equations in the governing system are dynamical equations of motion and include two more dependent vector variables and . moreover , these dynamical equations contain parameters ( or parametric functions of ) to characterize the rod and to include the external forces and torques . because of its inherent stiffness caused by the different deformation modes of a cosserat rod , a pure numerical treatment of the full cosserat pde system requires the application of specific solvers; see e.g. . in order to reduce the computational overhead caused by the stiffness, we analyzed the lie symmetries of the first kinematic vector equation ( ( 9a ) in , ch .8) and constructed its general and ( locally ) analytical solution in which depends on three arbitrary functions in and three arbitrary functions in . in this contributionwe perform a computer algebra - based lie symmetry analysis to integrate the full kinematic part of the governing cosserat system based on our previous work in .this allows for the construction of a general analytical solution of this part which depends on six arbitrary functions in .we prove its generality and apply the obtained analytical solution in order to solve the dynamical part of the governing system . finally , we prove its practicability by simulating the dynamics of a flagellated microswimmer . to allow for an efficient solution process of the determining equations for the infinitesimal lie symmetry generators , we make use of the maple package sade ( cf . ) in addition to desolv ( cf . ) .this paper is organized as follows .section [ sec:2 ] describes the governing pde system in the special cosserat theory of rods . in section [ sec:3 ] , we show that the functional arbitrariness in the analytical solution to the first kinematic vector equation that we constructed in can be narrowed down to three arbitrary bivariate functions . 
our main theoretical result is presented in section [ sec:4 ] , in which we construct a general analytical solution to the kinematic part of the governing equations by integrating the lie equations for a one - parameter subgroup of the lie symmetry group . section [ sec:5 ] illustrates the practicability of this approach by realizing a semi - analytical simulation of a flagellated microswimmer . this is based on a combination of the analytical solution of the kinematic part of the cosserat pde and a numerical solution of its dynamical part . some concluding remarks are given in section [ sec:6 ] and limitations are discussed in section [ sec:7 ] . in the context of the special cosserat theory of rods ( cf . ) , the motion of a rod is defined by a vector - valued function \times { \mathbb{r}}\ni ( s , t ) \mapsto \left({{\boldsymbol{r}}}(s , t),\,{{\boldsymbol{d}}}_1(s , t),\,{{\boldsymbol{d}}}_2(s , t)\right)\in { \mathbb{e}}^3\ , . \label{rd1d2}\ ] ] here , denotes the time and is the arc - length parameter identifying a _ material cross - section _ of the rod which consists of all material points whose reference positions are on the plane perpendicular to the rod at . moreover , and are orthonormal vectors , and denotes the position of the material point on the centerline with arc - length parameter at time . the euclidean 3-space is denoted with . the vectors , and are called _ directors _ and form a right - handed orthonormal moving frame . the use of the triple is natural for the intrinsic description of the rod deformation whereas describes the motion of the rod relative to the fixed frame . this is illustrated in figure [ fig1 ] . from the orthonormality of the directors follows the existence of so - called _ darboux _ and _ twist _ vector functions and determined by the kinematic relations . the _ linear strain _ of the rod and the _ velocity of the material cross - section _ are given by vector functions and . [ figure 1 : a cosserat rod with its centerline and the director frame , which forms a right - handed orthonormal basis ; the first two directors span the local material cross - section , whereas the third is perpendicular to it and , in the presence of shear deformations , is unequal to the tangent of the centerline of the rod . ]
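the kinematic relations above state that the directors are transported along the rod by the darboux vector and that the centerline follows the strain expressed in the moving frame . the sketch below makes this concrete by integrating the frame and the centerline along arc length with explicit euler steps and re - orthonormalization . the constant - curvature example and the assumption of an unshearable , inextensible rod ( strain equal to the third director ) are ours , chosen only for illustration .

```python
import numpy as np

def reconstruct_rod(kappa, nu, ds, d1, d2, d3, r):
    """integrate d_k' = kappa x d_k and r' = nu (components given in the
    moving frame) along arc length with explicit euler steps."""
    centerline = [r.copy()]
    for i in range(kappa.shape[0]):
        k_lab = kappa[i, 0] * d1 + kappa[i, 1] * d2 + kappa[i, 2] * d3
        d1 = d1 + ds * np.cross(k_lab, d1)
        d2 = d2 + ds * np.cross(k_lab, d2)
        d3 = d3 + ds * np.cross(k_lab, d3)
        # re-orthonormalize to limit the drift of the explicit scheme
        d3 = d3 / np.linalg.norm(d3)
        d1 = d1 - np.dot(d1, d3) * d3
        d1 = d1 / np.linalg.norm(d1)
        d2 = np.cross(d3, d1)
        r = r + ds * (nu[i, 0] * d1 + nu[i, 1] * d2 + nu[i, 2] * d3)
        centerline.append(r.copy())
    return np.array(centerline)

# constant curvature about d1 and unit strain along d3: the rod bends into a
# circle of circumference 1, so the end point returns close to the start
n, length = 400, 1.0
kappa = np.tile([2.0 * np.pi, 0.0, 0.0], (n, 1))
nu = np.tile([0.0, 0.0, 1.0], (n, 1))
c = reconstruct_rod(kappa, nu, length / n,
                    np.array([1.0, 0.0, 0.0]),
                    np.array([0.0, 1.0, 0.0]),
                    np.array([0.0, 0.0, 1.0]),
                    np.zeros(3))
print(c[-1])
```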
the components of the _ strain variables _ and describe the deformation of the rod : the flexure with respect to the two major axes of the cross - section , torsion , shear , and extension . the triples are functions in that satisfy the _ compatibility conditions _ . substitution of into the left equation in leads to ; on the other hand one obtains , and with and , and therefore . similarly , the second compatibility condition in is equivalent to with and . the first - order pde system ( [ ke1])([ke2 ] ) with independent variables and dependent variables ( [ kv ] ) forms the kinematic part of the governing cosserat equations ( ( 9a)(9b ) in , ch . 8) . the construction of its general solution is the main theoretical result of this paper . the remaining part of the governing equations in the special cosserat theory consists of two vector equations resulting from newton's laws of motion . for a rod density and cross - section , these equations are given by \partial_t{{\boldsymbol{h}}}(s , t)=\partial_s{{\boldsymbol{m}}}(s , t)+{{\boldsymbol{\nu}}}(s , t)\times { { \boldsymbol{n}}}(s , t)+{{\boldsymbol{l}}}(s , t)\ , , \end{array } \label{nl}\end{aligned}\ ] ] where are the _ contact torques _ , are the _ contact forces _ , are the _ angular momenta _ , and and are the _ external forces _ and _ torque densities _ . the contact torques and contact forces , corresponding to the _ internal stresses _ , are related to the extension and shear strains as well as to the flexure and torsion strains by the _ constitutive relations _ . under certain reasonable assumptions ( cf . ) on the structure of the right - hand sides of , together with the kinematic relations and , this yields the governing equations ( cf . , ch . 8 , ( 9.5a)(9.5d ) ) { { \boldsymbol{\nu}}}_t={{\boldsymbol{v}}}_s+{{\boldsymbol{\kappa}}}\times{{\boldsymbol{v}}}-{{\boldsymbol{\omega}}}\times { { \boldsymbol{\nu}}}\,,\\[0.15 cm ] \rho{{\boldsymbol{j}}}\cdot { { \boldsymbol{\omega}}}_t=\hat{{{\boldsymbol{m}}}}_s+{{\boldsymbol{\kappa}}}\times\hat{{{\boldsymbol{m}}}}+{{\boldsymbol{\nu}}}\times \hat{{{\boldsymbol{n}}}}-{{\boldsymbol{\omega}}}\times(\rho{{\boldsymbol{j}}}\cdot{{\boldsymbol{\omega}}})+{{\boldsymbol{l}}}\,,\\[0.15 cm ] \rho a{{\boldsymbol{v}}}_t={{\boldsymbol{n}}}_s+{{\boldsymbol{\kappa}}}\times \hat{{{\boldsymbol{n}}}}-{{\boldsymbol{\omega}}}\times ( \rho a{{\boldsymbol{v}}})+{{\boldsymbol{f}}}\ , , \end{array } \label{sct}\end{aligned}\ ] ] in which is the inertia tensor of the cross - section per unit length . the dynamical part of contains parameters characterizing the rod under consideration , as well as the external force and torque densities , whereas the kinematic part is parameter free . in , we constructed a general solution to , that is , the first equation in the pde system .
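the constitutive relations close the system by mapping the strain variables to the contact force and the contact torque . a common choice , though not necessarily the one adopted here , is a diagonal linear law with shear and extension stiffnesses for the force and bending and torsion stiffnesses for the torque ; the sketch below uses made - up material and cross - section data purely for illustration .

```python
import numpy as np

def linear_constitutive_law(nu, kappa, E, G, A, I1, I2, J):
    """diagonal linear stress-strain law for a special cosserat rod, written
    in the director basis: shear/extension -> contact force, flexure/torsion
    -> contact torque. the reference configuration is a straight, untwisted
    rod with nu = (0, 0, 1) and kappa = (0, 0, 0)."""
    stiff_n = np.array([G * A, G * A, E * A])       # shear, shear, extension
    stiff_m = np.array([E * I1, E * I2, G * J])     # bending, bending, torsion
    n_hat = stiff_n * (nu - np.array([0.0, 0.0, 1.0]))
    m_hat = stiff_m * kappa
    return n_hat, m_hat

# made-up material and geometric data for a thin elastic filament
E, G = 1.0e6, 4.0e5              # young's and shear moduli (illustrative units)
radius = 1.0e-2
A = np.pi * radius**2
I1 = I2 = np.pi * radius**4 / 4.0
J = I1 + I2

nu = np.array([0.01, 0.0, 1.02])     # slight shear and 2% extension
kappa = np.array([0.5, 0.0, 0.1])    # bending about d1 plus a little twist
print(linear_constitutive_law(nu, kappa, E, G, A, I1, I2, J))
```

with a constitutive law of this kind the dynamical equations are closed , while the kinematic part , whose general solution was constructed in earlier work , remains parameter free .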
in so doing, we proved that the constructed solution is ( locally ) analytical and provides the structure of the twist vector function and the darboux vector function : where and are arbitrary vector - valued analytical functions , and .it turns out that the functional arbitrariness of and is superfluous , and that with is still a general solution to .this fact is formulated in the following proposition .[ pro:1 ] the vector functions and expressed by with an arbitrary analytical vector function , are a general analytical solution to .let be a fixed point .the right - hand sides of and satisfy for an arbitrary vector function analytical in .it is an obvious consequence of the fact that is a solution to for arbitrary analytical in .also , the equalities and can be transformed into each other with reflecting the invariance of under .the equalities and are linear with respect to the partial derivatives and , and their corresponding jacobians .the determinants of the jacobian matrices and coincide because of the symmetry and read let and be two arbitrary vector functions analytical in .we have to show that there is a vector function analytical in satisfying and .for that , chose real constants such that and set . then and are solvable with respect to the partial derivatives of and in a vicinity of , and we obtain the first - order pde system of the form where the vector function is linear in its first argument and analytical in at .also , the system inherits the symmetry under the swap and is _ passive _ and _ orthonomic _ in the sense of the riquer - janet theory ( cf . and the references therein ) , since its vector - valued _ passivity ( integrability ) condition _ holds due to symmetry .therefore , by riquier s existence theorems that generalize the cauchy - kovalevskaya theorem , there is a _ unique _solution of analytical in and satisfying .in this section , we determine a general analytical form of the vector functions and in describing the linear strain of a cosserat rod and its velocity .these functions satisfy the second kinematic equation of the governing pde system under the condition that the darboux and the twist functions , and , occurring in the last equation , are given by and which contain the arbitrary analytical vector function . similarly , as we carried it out in for the integration of , we analyze lie symmetries ( cf . and the references therein ) and consider the _ infinitesimal generator _ of a lie group of point symmetry transformations for . the coefficients with in are functions of the independent and dependent variables .the _ infinitesimal criterion of invariance _ of reads where in addition to those in , the _ prolonged _ infinitesimal symmetry generator contains extra terms caused by the presence of the first - order partial derivatives in and . the invariance conditions lead to an overdetermined system of linear pdes in the coefficients of the infinitesimal generator .this _ determining _ system can be easily computed by any modern computer algebra software ( cf .we make use of the maple package desolv ( cf . ) which computes the determining system and outputs 138 pdes . since the completion of the determining systems to involution is the most universal algorithmic tool of their analysis ( cf . ) , we apply the maple package janet ( cf . ) first and compute a janet involutive basis ( cf . 
) of 263 elements for the determining system , which took about 80 minutes of computation time on standard hardware .then we detected the functional arbitrariness in the general solution of the determining system by means of the differential hilbert polynomial computable by the corresponding routine of the maple package differentialthomas ( cf .it shows that the general solution depends on eight arbitrary functions of .however , in contrast to the determining system for which is quickly and effectively solvable ( cf . ) by the routine _ pdesolv _ built in the package desolv , the solution found by this routine to the involutive determining system for needs around one hour of computation time and has a form which is unsatisfactory for our purposes , since the solution contains nonlocal ( integral ) dependencies on arbitrary functions . on the other hand , the use of sade ( cf . ) leads to a satisfying result .unlike desolv , sade uses some heuristics to solve simpler equations first in order to simplify the remaining system . in so doing, sade extends the determining systems with certain integrability conditions for a partial completion to involution . in our casethe routine _ liesymmetries _ of sade receives components of the vectors in and outputs the set of nine distinct solutions in just a few seconds .the output solution set includes eight arbitrary functions in which is in agreement with .each solution represents an infinitesimal symmetry generator . among the generators , there are three that include an arbitrary vector function , which we denoted by , with vanishing coefficients , .the sum of these generators is given by it generates a one - parameter lie symmetry group of point transformations ( depending on the arbitrary vector function ) of the vector functions and preserving the equality for fixed and . in accordance to lie s first fundamental theorem ( cf . ) , the symmetry transformations generated by , are solutions to the following differential ( lie ) equations whose vector form reads the equations can easily be integrated , and without a loss of generality the group parameter can be absorbed into the arbitrary function .this gives the following solution if one takes and into account .] to : the vector functions , , , and expressed by and with two arbitrary analytical functions and form a general analytical solution to . the fact that and form a general analytical solution to was verified in proposition [ pro:1 ] .we have to show that , given analytical vector functions and satisfying with analytical and satisfying , there exists an analytical vector function satisfying .consider the last equalities as a system of first - order pdes with independent variables and a dependent vector variable . 
according to the argumentation in the proof of proposition [ pro:1] , this leads to the fact , that the equations in are invariant under the transformations this symmetry implies the satisfiability of the integrability condition without any further constraints .therefore , the system is passive ( involutive ) , and by riquier s existence theorem , there is a solution to analytical in a point of analyticity of .to demonstrate the practical use of the analytical solution to the kinematic cosserat equations , we combine it with the numerical solution of its dynamical part .the resulting analytical solutions and for the kinematic part of ( [ sct ] ) contain two parameterization functions and , which can be determined by the numerical integration of the dynamical part of ( [ sct ] ) .the substitution of the resulting analytical solutions and into the latter two ( dynamical ) equations of ( [ sct ] ) , the replacement of the spatial derivatives with central differences , and the replacement of the temporal derivatives according to the numerical scheme of a forward euler integrator , leads to an explicit expression .iterating over this recurrence equation allows for the simulation of the dynamics of a rod . in order to embed this into a scenario close to reality, we consider a flagellated microswimmer .in particular , we simulate the dynamics of a swimming sperm cell , which is of interest in the context of simulations in biology and biophysics .since such a highly viscous fluid scenario takes place in the low reynolds number domain , the advection and pressure parts of the navier - stokes equations ( cf . ) can be ignored , such that the resulting so - called _ steady stokes equations _become linear and can be solved analytically .therefore , numerical errors do not significantly influence the fluid simulation part for which reason this scenario is appropriate for evaluating the practicability of the analytical solution to the kinematic cosserat equations .the _ steady stokes equations _ are given by in which denotes the fluid viscosity , the pressure , the velocity , and the force . similar to the fundamental work in we use a regularization in order to develop a suitable integration of ( [ eq : stokes1])([eq : stokes2 ] ) . for that, we assume in which is a smooth and radially symmetric function with , is spread over a small ball centered at the point .let be the corresponding green s function , i.e. , the solution of and let be the solution of , both in the infinite space bounded for small .smooth approximations of and are given by for and , the solution of the biharmonic equation .the pressure satisfies , which can be shown by applying the divergence operator on ( [ eq : stokes1])([eq : stokes2 ] ) , and is therefore given by . using this ,we can rewrite ( [ eq : stokes1 ] ) as with its solution the so - called _ regularized stokeslet_. for multiple forces centered at points , the pressure and the velocity can be computed by superposition . because and are radially symmetric , we can additionally use and obtain , , and only depend on the norm of their arguments , we change the notation according to this . ]\,.\nonumber\end{aligned}\ ] ] the flow given by ( [ eq : stokessolution2 ] ) satisfies the incompressibility constraint ( [ eq : stokes2 ] ) . because of the integration of leads to , leads to the expression to determine .we make use of the specific function which is smooth and radially symmetric . 
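once the blob function is fixed , the velocity induced by a collection of regularized point forces has a closed form . the sketch below evaluates the widely used regularized stokeslet of cortez ( 2001 ) ; the specific blob , the viscosity , the regularization parameter , and the toy force are assumptions of the example and may differ from the exact choices made here .

```python
import numpy as np

def regularized_stokeslet_velocity(x_eval, x_force, forces, mu, eps):
    """velocity induced at points x_eval by regularized point forces placed
    at x_force, using the classical blob of cortez (2001):
        u(x) = (1 / 8 pi mu) * sum_k [ f_k (r^2 + 2 eps^2) / (r^2 + eps^2)^1.5
                                       + (f_k . b) b / (r^2 + eps^2)^1.5 ]
    with b = x - x_k and r = |b|."""
    u = np.zeros_like(x_eval)
    for xk, fk in zip(x_force, forces):
        b = x_eval - xk                          # shape (m, 3)
        r2 = np.einsum("ij,ij->i", b, b)
        denom = (r2 + eps**2) ** 1.5
        u += (fk * (r2 + 2.0 * eps**2)[:, None]
              + b * np.einsum("j,ij->i", fk, b)[:, None]) / denom[:, None]
    return u / (8.0 * np.pi * mu)

# toy example: a single downward force at the origin in a viscous fluid
mu, eps = 1.0e-3, 0.05
x_force = np.array([[0.0, 0.0, 0.0]])
forces = np.array([[0.0, 0.0, -1.0e-6]])
grid = np.array([[0.5, 0.0, 0.0], [0.0, 0.5, 0.0], [0.0, 0.0, 0.5]])
print(regularized_stokeslet_velocity(grid, x_force, forces, mu, eps))
```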
up to now , this regularized stokeslet ( [ eq : stokessolution1])([eq : stokessolution2 ] ) allows for the computation of the velocities for given forces .similarly , we can tread the application of a torque by deriving an analogous _ regularized rodlet _ ; see e.g. . in the inverse case, the velocity expressions can be rewritten in the form of the equations for which can be transformed into an equation system with a -matrix . since in general is not regular , an iterative solver have to be applied .a flagellated microswimmer can be set up by a rod representing the centerline of the flagellum ; see .additionally , a constant torque perpendicular to the flagellum s base is applied to emulate the rotation of the motor . from forces and torque the velocity fieldis determined . repeating this procedure to update the system stateiteratively introduces a temporal domain and allows for the dynamical simulation of flagellated microswimmers ; see figures [ fig : bacteria ] and [ fig : spermcell ] .compared to a purely numerical handling of the two - way coupled fluid - rod system , the step size can be increased by four to five orders of magnitude , which leads to an acceleration of four orders of magnitude .this allows for real - time simulations of flagellated microswimmers on a standard desktop computer . and[ fig : spermcell ] can be carried out in real - time on a machine with an intel(r ) xeon e5 with 3.5 ghz and 32 gb ddr - ram . ]+ , the flagellum of a sperm cell does not have its motor at its base as simulated here .instead several motors are distributed along the flagellum ( cf . ) , for which reason this simulation is not fully biologically accurate , but still illustrates the capabilities of the presented approach.,scaledwidth=100.0% ]we constructed a closed form solution to the kinematic equations of the governing cosserat pde system and proved its generality .the kinematic equations are parameter free whereas the dynamical cosserat pdes contain a number of parameters and parametric functions characterizing the rod under consideration of external forces and torques .the solution we found depends on two arbitrary analytical vector functions and is analytical everywhere except at the values of the independent variables for which the right - hand side of vanishes .therefore , the hardness of the numerical integration of the cosserat system , in particular its stiffness , is substantially reduced by using the exact solution to the kinematic equations .the application of the analytical solution prevents from numerical instabilities and allows for highly accurate and efficient simulations .this was demonstrated for the two - way coupled fluid - rod scenario of flagellated microswimmers , which could efficiently be simulated with an acceleration of four orders of magnitude compared to a purely numerical handling .this clearly shows the usefulness of the constructed analytical solution of the kinematic equations .because of the presence of parameters in the dynamical part of the cosserat pdes , the construction of a general closed form solution to this part is hopeless . 
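in the inverse direction , prescribing velocities at the rod points and solving for the forces amounts to assembling the ( 3n x 3n ) matrix from the same kernel and solving a possibly ill - conditioned linear system iteratively . the self - contained sketch below uses scipy's lsqr as the iterative least - squares solver ( our choice ) , and the toy flagellum geometry and prescribed velocities are invented .

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def stokeslet_block(b, mu, eps):
    """3x3 mobility block for the cortez regularized stokeslet (assumed blob)."""
    r2 = b @ b
    denom = 8.0 * np.pi * mu * (r2 + eps**2) ** 1.5
    return ((r2 + 2.0 * eps**2) * np.eye(3) + np.outer(b, b)) / denom

def assemble_mobility(points, mu, eps):
    n = len(points)
    M = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(n):
            M[3*i:3*i+3, 3*j:3*j+3] = stokeslet_block(points[i] - points[j],
                                                      mu, eps)
    return M

# toy flagellum: points along a planar sine wave with prescribed lateral velocities
n, mu, eps = 50, 1.0e-3, 0.03
s = np.linspace(0.0, 1.0, n)
points = np.stack([s, 0.05 * np.sin(4.0 * np.pi * s), np.zeros(n)], axis=1)
u_desired = np.stack([np.zeros(n), 0.01 * np.cos(4.0 * np.pi * s), np.zeros(n)],
                     axis=1).ravel()

M = assemble_mobility(points, mu, eps)
# M need not be well conditioned, hence an iterative least-squares solve
forces = lsqr(M, u_desired, atol=1e-10, btol=1e-10)[0].reshape(n, 3)
print(np.abs(M @ forces.ravel() - u_desired).max(), forces[0])
```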
even if one specifies all parameters and considers the parametric functions as numerical constants , the exact integration of the dynamical equations is hardly possible .we analyzed lie symmetries of the kinematic equations extended with one of the dynamical vector equations including all specifications of all parameters and without parametric functions .while the determining equations can be generated in a reasonable time , their completion to involution seems to be practically impossible .this work has been partially supported by the max planck center for visual computing and communication funded by the federal ministry of education and research of the federal republic of germany ( fkz-01imc01/fkz-01im10001 ) , the russian foundation for basic research ( 16 - 01 - 00080 ) , and a biox stanford interdisciplinary graduate fellowship . the reviewers valuable comments that improved the manuscript are gratefully acknowledged .y. blinkov , c. cid , v. gerdt , w. plesken , d. robertz : the maple package janet : ii .linear partial differential equations .computer algebra in scientific computing , casc 2003 , v. ganzha , e. mayr , e. vorozhtsov ( eds . ) , 4154 , springer , ( 2003 ) .j. butcher , j. carminati , k.t . vu . : a comparative study of some computer algebra packages which determine the lie point symmetries of differential equations .155 , 92114 ( 2003 ) .w. hereman : review of symbolic software for lie symmetry analysis .crs handbook of lie group analysis of differential equations , vol . 3 : new trends in theoretical developments and computational methods , chap. 13 , n. h. ibragimov ( ed . ) , 367413 , boca raton , fl , crs press ( 1996 ) .d. michels , d. lyakhov , v. gerdt , g. sobottka , a. weber : lie symmetry analysis for cosserat rods .computer algebra in scientific computing , casc 2014 , v. p. gerdt , w .koepf , w .m .seiler , e. v. vorozhtsov ( eds . ) , 324334 , springer , ( 2014 ) .d. michels , d. lyakhov , v. gerdt , g. sobottka , a. weber : on the partial analytical solution to the kirchhoff equation .computer algebra in scientific computing , casc 2015 , v. p. gerdt , w. koepf , w. m. seiler , e. v. vorozhtsov ( eds . ) , 320331 , springer , ( 2015 ) .
based on a lie symmetry analysis , we construct a closed form solution to the kinematic part of the ( partial differential ) cosserat equations describing the mechanical behavior of elastic rods . the solution depends on two arbitrary analytical vector functions and is analytical everywhere except a certain domain of the independent variables in which one of the arbitrary vector functions satisfies a simple explicitly given algebraic relation . as our main theoretical result , in addition to the construction of the solution , we proof its generality . based on this observation , a hybrid semi - analytical solver for highly viscous two - way coupled fluid - rod problems is developed which allows for the interactive high - fidelity simulations of flagellated microswimmers as a result of a substantial reduction of the numerical stiffness . rods , differential thomas decomposition , flagellated microswimmers , general analytical solution , kinematic equations , lie symmetry analysis , stokes flow , symbolic computation .
embedded control systems are ubiquitous and can be found in several applications including aircraft , automobiles , process control , and buildings .an embedded control system is one in which the computer system is designed to perform dedicated functions with real - time computational constraints .typical features of such embedded control systems are the control of multiple applications , the use of shared networks used by different components of the systems to communicate with each other for control , a large number of sensors as well as actuators , and their distributed presence in the overall system. the most common feature of such distributed embedded control systems ( des ) is shared resources .constrained by space , speed , and cost , often information has to be transmitted using a shared communication network . in order to manage the flow of information in the network , protocols that are time - triggered and event - triggered been suggested over the years .associated with each of these communication protocols are different set of advantages and disadvantages .the assignment of time - triggered ( tt ) slots to all control - related signals has the advantage of high quality of control ( qoc ) due to the possibility of reduced or zero delays , but leads to poor utilization of the communication bandwidth , high cost , overall inflexibility , and infeasibility as the number of control applications increase .on the other hand , event - triggered ( et ) schedules often result in poor control performance due to the unpredictable temporal behavior of control messages and the related large delays which occurs due to the lack of availability of the bus .these imply that a hybrid protocol that suitably switches between these two schedules offers the possibility of exploiting their combined advantages of high qoc , efficient resource utilization , and low cost .such a hybrid protocol is the focus of this paper . to combine the advantage of tt and et policies ,hybrid protocols are increasingly being studied in recent years .examples of such protocols are flexray and ttcan , used extensively in automotive systems . while several papers have considered control using tt protocols( see for example , ) and et protocols ( see for example , ) , control using hybrid protocols has not been studied in the literature until recently .the co - design problem has begun to be addressed of late as well ( see for example , ) . in ,the design of scheduling policies that ensure a good quality of control ( qoc ) is addressed . in , the schedulability analysis of real - time tasks with respect to the stability of control functionsis discussed . 
in , modeling the real - time scheduling process as a dynamic system , an adaptive self - tuning regulator is proposed to adjust the bandwidth of each single task in order to achieve an efficient cps utilization .the focus of most of the papers above are either on a simple platform or on a single processor .a good survey paper on co - design can be found in .our focus in this paper is on the co - design of adaptive switching controllers and hybrid protocols so as to ensure good tracking in the presence of parametric uncertainties in the plant being controlled while utilizing minimal resources in the des .the hybrid protocol that is addressed in this paper switches between a tt and a et scheme .the tt scheme , which results in a negligible delay in the processing of the control messages , is employed when a control action is imperative and the et scheme , which typically results in a non - zero delay , is employed when the controlled system is well - behaved , with minimal tracking error .the latter is in contrast to papers such as and where the underlying _ event _ is associated with a system error exceeding a certain threshold , while here an _ event _ corresponds to the case when the system error is small .the controller is to be designed for multiple control applications , each of which is subjected to a parametric uncertainty .an adaptive switching methodology is introduced to accommodate these uncertainties and the hybrid nature of the protocol .switched control systems and related areas of hybrid systems and supervisory control have received increased attention in the last decade ( see e.g. , ) and used in several applications ( see e.g. ) .adaptive switched and tuned systems have been studied as well ( see ) .the combined presence of uncertainties and switching delays makes a direct application of these existing results to the current problem inadequate .the solution to the problem of co - design of an adaptive swtiched controller and switches in a hybrid protocol was partially considered in , where the control goal was one of stabilization . in this paper, we consider tracking , which is a non - trivial extension of .the main reason for this lies in the trigger for the switch , which corresponds to a system error becoming small . in order to ensure that this error continues to remain small even in the presence of a non - zero reference signal, we needed to utilize fundamental properties of the adaptive system with persistent excitation , and derive additional properties in the presence of reference signals with an invariant persistent excitation property . these properties in turn are suitably exploited and linked with the switching instants , and constitute the main contribution of this paper . in section [ sec :problem ] the problem is formulated , and preliminaries related to adaptive control and persistent excitation are presented . in section [ sec : switchingadaptivecontroller ] , the switching adaptive controller is described and the main result of global boundedness is proved . 
concluding remarks are presented in section [ sec : conclusion ] .the problem that we address in this paper is the simultaneous control of plants , , , in the presence of impulse disturbances that occur sporadically , using a hybrid communication protocol .we assume that each of these applications have the following problem statement .the plant to be controlled is assumed to have a discrete time model described by + b_0u(k - d)+\sum_{l=1}^{m_2}b_lu(k - l - d)+d(k - d)\end{gathered}\ ] ] where and are the input and output of the -th control application , respectively , at the time - instant and is a time - delay . the disturbance are assumed to be impulses that can occur occasionally with their inter - arrival time lower - bounded by a finite constant .the parameters of the -th plant are given by , , , and are assumed to be unknown .it is further assumed that the sampling time of the controller is a constant , so that .the goal is to choose the control input such that tracks a desired signal , with all signals remaining bounded . the model in ( [ eq : model ] ) can be expressed as where is the backward shift operator and the polynomials and are given by the following assumptions are made regarding the plant poles and zeros : \1 ) an upper bound for the orders of the polynomials in ( [ eq : polyab ] ) is known and 2 ) all zeros of lie strictly inside the closed unit disk . [ass : fixeddelay ] for any delay , eq .( [ eq : model ] ) can be expressed in a _ predictor form _ as follows : with where and are the unique polynomials that satisfy the equation equation ( [ eq : predictorform ] ) can be expressed as where , , , and are defined as with , , , , and and the coefficients of the polynomials in ( [ eq : alphabeta ] ) with respect to the delay and finite initial conditions from eqs .( [ eq : regressor])-([eq : phiandtheta ] ) , we observe that a feedback controller of the form realizes the objective of stability and follows the desired bounded trajectory in the absence of disturbances .designing a stabilizing controller essentially boils down to a problem of implementing ( [ eq : feedbackcontroller ] ) with the controller gain .two things should be noted : ( i ) controller ( [ eq : feedbackcontroller ] ) is not realizable as and are not known , and ( ii ) the dimension of , as well as the entries of depend on the delay .since and are unknown , we replace them with their parameter estimates and derive the following adaptive control input where denotes the -th element of the parameter estimation and is the estimate of . is adjusted according to the adaptive update law : with ^t ] is the estimation of the controller gains ( eq . [ eq : phiandtheta ] ) , and . 
if , the adaptive controller is given by is given in eq .( [ eq : phiandthetaklein ] ) , is given in eq .( [ eq : phiandtheta ] ) , ^t ] and the et protocol is applied for ,p\in{\ensuremath{\mathbb{n}}}_0 ] ( see figure [ fig : error ] ) .[ ass : dimpulse ] the disturbance in ( [ eq : predictorform ] ) is an impulse train , with the distance between any two consecutive impulses greater than a constant .this is the main result of the paper : [ thm : betaknown ] let the plant and disturbance in ( [ eq : predictorform ] ) satisfy assumptions [ ass : fixeddelay ] , [ ass : pe ] , and [ ass : dimpulse ] .consider the switching adaptive controller in ( [ eq : ttcontroller ] ) and ( [ eq : etcontroller ] ) with the hybrid protocol in ( [ eq : protocol ] ) and the following parameter estimate selections at the switching instants then there exists a positive constant such that for all , the closed loop system has globally bounded solutions .a qualitative proof of theorem [ thm : betaknown ] is as follows : + first , theorem [ thm : fixeddelay ] shows that if either of the individual control strategies ( [ eq : ttcontroller ] ) or ( [ eq : etcontroller ] ) is deployed , then boundedness is guaranteed .that is , for a sufficiently large dwell time which the controller stays in the tt protocol , with the controller in ( [ eq : ttcontroller ] ) , boundedness can be shown . after a finite number of switches ,when the system switches to an et protocol , it is shown that the regressor vector remains in the same subspace as in the earlier switch to et and hence , the corresponding tracking error remains small even after the switch to et .hence the stay in et is ensured for a finite time , guaranteeing boundedness with the overall switching controller ._ proof of theorem [ thm : betaknown ] : _ we define an equivalent reference signal combines the effect of both the disturbance as where is given by and is the transfer function of the plant ( [ eq : model ] ) .also , we define a reference model signal given by where the transfer functions is given by and the optimal feedback gain is given by ( [ eq : phiandthetaklein ] ) .the overall ideal closed - loop system is given by the block diagram shown in figure [ fig : blockdiagram01 ] .we note that when there is no disturbance , the output corresponds to the desired regressor vector , and its first element of the vector corresponds to . ]when the algorithm is in mode , the underlying error equation is given by with .when the system is in mode , the error equation is given by with .define as choose lyapunov function where .let .the proof consists of the following four stages : * stage :* let there exist a sequence of finite switching times with the properties described above .then the errors and are bounded for all .+ the proof of stage 1 is established using the following three steps : * step - * there exists a such that ;{\ensuremath{e_{\text{th}}}\xspace}]:\;|e_1(k_1)|<\varepsilon\leqslant{\ensuremath{e_{\text{th}}}\xspace} ] is greater than 2 , i.e. , + stage 2 is established using the following steps : * step - * if then if , then for for is bounded for all .+ the following steps will be used to establish stage 3 : * step - * and , for all . 
during and during the control input is bounded for all and hence all signals are bounded .+ the following two steps will be used to prove stage 4 : * step - * all signals are bounded .we note that the proofs of stages 1 , 3 , and 4 are identical to that in and are therefore omitted here .since stage 2 differs significantly from its counterpart in due to , we provide its proof in detail below .[ ] [ ] [ 0.8]time are assumed to occur at .,title="fig : " ] in this step we show that if the tracking error is small the state signal error is also small .the signal is the output produced by the following transfer function with as the input : where is the inverse of the plant transfer function with the input signal and given in ( [ eq : phistern ] ) . from assumption [ ass : fixeddelay ] , it follows that is a stable transfer function .hence , as tends to zero , also tends to zero . if , then we first show that for and .we note that the reference model given in ( [ eq : phistern ] ) is a linear system and hence there exists a state space representation with being completely reachable .then it follows directly from lemma [ cor : inomega ] that for and .together with step 2 - 1 it follows that if , then . for first , we show that the error of the signal generated by the reference model signal together with the last parameter estimation value at the end of the previous et phase is small and therefore the output error is below the threshold . from step 2 - 2 we know that is in the same subspace as . from step 2 - 1 we know that is close to which in turn generates together with and an error which is according to theorem [ thm : fixeddelay ] . hence , from step 2 - 1 we know that is close to , according to step 2 - 4 we have . and this step shows that the error at the beginning of the et mode is below the threshold for at least steps . from step 2 - 3we know that .according to the parameter choice in ( [ eq : thetachoice2 ] ) , the controller uses a constant initial value for the first steps .thus , the error because steps 2 - 1 to 2 - 5 can be applied .theorem [ thm : betaknown ] implies that the plant in ( [ eq : predictorform ] ) can be guaranteed to have bounded solutions with the proposed adaptive switching controller in ( [ eq : ttcontroller ] ) and ( [ eq : etcontroller ] ) and the hybrid protocol in ( [ eq : protocol ] ) , in the presence of disturbances .the latter is assumed to consist of impulse - trains , with their inter - arrival lower bounded .we note that if no disturbances occur , then the choice of the algorithm in ( [ eq : protocol ] ) implies that these switches cease to exist , and the event - triggered protocol continues to be applied . and switching continues to occur with the onset of disturbances , with theorem [ thm : betaknown ] guaranteeing that all signals remain bounded with the tracking errors converging to before the next disturbance occurs .the nature of the proof is similar to that of all switching systems , in some respects .a common lyapunov function was used to show the boundedness of parameter estimates , which are a part of the state of the overall system ( in stage 3 ) .the additional states were shown to be bounded using the boundedness of the tracking errors and ( in stage 1 ) and the control input using the method of induction ( in stage 4 ) .since the switching instants themselves were functions of the states of the closed - loop system , we needed to show that indeed these switching sequences exist , which was demonstrated in stage 2 . 
to this end , the sufficient richness properties of the reference signal were utilized to show that the signal vectors of a reference model and the system converge to the same subspace .next , it was shown that the error generated by the reference model is small and thus concluded that the tracking error at the switch from tt to et stays below the threshold .it is the latter that distinguishes the adaptive controller proposed in this paper , as well as the methodology used for the proof , from existing adaptive switching controllers and their proofs in the literature .in this work we considered the control of multiple control applications using a hybrid communication protocol for sending control - related messages . these protocols switch between time - triggered and event - triggered methods , with the switches dependent on the closed - loop performance , leading to a co - design of the controller and the communication architecture .in particular , this co - design consisted of switching between a tt and et protocol depending on the amplitude of the tracking error , and correspondingly between two different adaptive controllers that are predicated on the resident delay associated with each of these protocols .these delays were assumed to be fixed and equal to for the tt protocol and greater than for the et protocol .it was shown that for any reference input whose order of sufficient richness stays constant , the signal vector and the parameter error vector converge to subspaces which are orthogonal to each other .the overall adaptive switching system was shown to track such reference signals , with all solutions remaining globally bounded , in the presence of an impulse - train of disturbances with the inter - arrival time between any two impulses greater than a finite constant .
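as a compact illustration of the co - design logic analyzed above , the toy sketch below implements the bus - arbitration rule : stay with the time - triggered schedule after a disturbance until the tracking error has remained below the threshold for a dwell time , then release the bus to the event - triggered schedule , and switch back as soon as the error again exceeds the threshold . the threshold , dwell time , error sequence , and class name are invented , and the sketch deliberately ignores the delay - dependent reconfiguration of the adaptive controller that accompanies each switch .

```python
from dataclasses import dataclass

@dataclass
class HybridScheduler:
    """toy tt/et arbitration for one control application."""
    e_threshold: float          # tracking-error threshold for switching to et
    dwell_steps: int            # minimum time the tt protocol stays active
    mode: str = "TT"
    counter: int = 0

    def update(self, tracking_error: float) -> str:
        if self.mode == "TT":
            if abs(tracking_error) < self.e_threshold:
                self.counter += 1
                if self.counter >= self.dwell_steps:
                    self.mode, self.counter = "ET", 0
            else:
                self.counter = 0
        else:  # ET: a large error (e.g. after an impulse disturbance) forces tt
            if abs(tracking_error) >= self.e_threshold:
                self.mode, self.counter = "TT", 0
        return self.mode

# usage: tracking errors decay after a disturbance at step 0 and another at step 30
sched = HybridScheduler(e_threshold=0.1, dwell_steps=10)
errors = [1.0 * 0.7**k for k in range(30)] + [2.0 * 0.7**k for k in range(30)]
modes = [sched.update(e) for e in errors]
print("".join("T" if m == "TT" else "E" for m in modes))
```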
the focus of this paper is on the co - design of control and communication protocol for the control of multiple applications with unknown parameters using a distributed embedded system . the co - design consists of an adaptive switching controller and a hybrid communication architecture that switches between a time - triggered and event - triggered protocol . it is shown that the overall co - design leads to an overall switching adaptive system that has bounded solutions and ensures tracking in the presence of a class of disturbances .
going hand - in - glove with analytic models of accretion disks , discussed in chapter 2.1 , are direct numerical simulations .although analytic theories have been extremely successful at explaining many general observational properties of black hole accretion disks , numerical simulations have become an indispensable tool in advancing this field .they allow one to explore the full , non - linear evolution of accretion disks from a first - principles perspective .because numerical simulations can be tuned to a variety of parameters , they serve as a sort of `` laboratory '' for astrophysics .the last decade has been an exciting time for black hole accretion disk simulations , as the fidelity has become sufficient to make genuine comparisons between them and real observations .the prospects are also good that within the next decade , we will be able to include the full physics ( gravity + hydrodynamics + magnetic fields + radiation ) within these simulations , which will yield complete and accurate numerical representations of the accretion process .in the rest of this chapter i will review some of the most recent highlights from this field .one of the most exciting recent trends has been a concerted effort by various collaborations to make direct connections between very sophisticated simulations and observations . of course, observers have been clamoring for this sort of comparison for years !perhaps the first serious attempt at this was presented in .schnittman produced a simulation similar to those in and coupled it with a ray - tracing and radiative transfer code to produce `` images '' of what the simulated disk would look like to a distant observer . by creating images from many time dumps in the simulation, schnittman was able to create light curves , which were then analyzed for variability properties much the way real light curves are .following that same prescription , a number of groups have now presented results coupling general relativistic mhd ( grmhd ) simulations with radiative modeling and ray - tracing codes ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?more recent models have even included polarization measurements .this approach is most applicable to very low - luminosity systems , such as sgr a * and m87 .a sample light curve for sgr a * covering a 16-hour window is shown in figure [ fig : lightcurve ] . in the case of m87, modeling has focused on accounting for the prominent jet in that system . along with modeling light curves and variability, this approach can also be used to create synthetic broadband spectra from simulations ( e.g. * ? ? ?* ; * ? ? ?* ) , which can be compared with modern multi - wavelength observing campaigns ( see chapter 3.1 ) .this is very useful for connecting different components of the spectra to different regions of the simulation domain .for example , figure [ fig : spectrum ] shows that the sub - mm bump in sgr a * is well represented by emission from relatively cool , high - density gas orbiting close to the black hole , while the x - ray emission seems to come from comptonization by very hot electrons in the highly magnetized regions of the black hole magnetosphere or base of the jet .as important as the radiative modeling of simulations described in section [ sec : matching ] has been , its application is very limited . 
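the last step of this post - processing chain , turning a time series of ray - traced images into a light curve and variability measures , is simple to sketch . in the fragment below a synthetic red - noise ' movie ' stands in for actual grmhd plus radiative - transfer output , and the cadence , image size , and modulation amplitude are invented ; a real pipeline would integrate specific intensity over the image and convert to physical flux units .

```python
import numpy as np

rng = np.random.default_rng(4)

# stand-in for ray-traced frames: 512 time dumps of 64 x 64 intensity maps
n_frames, dt = 512, 60.0                           # cadence in seconds (assumed)
base = rng.random((64, 64))
mod = np.cumsum(rng.normal(0.0, 0.02, n_frames))   # red-noise modulation
frames = base[None, :, :] * np.exp(mod)[:, None, None]

# light curve: total flux in each frame
lightcurve = frames.sum(axis=(1, 2))

# fractional rms variability and a simple periodogram
frac_rms = lightcurve.std() / lightcurve.mean()
flux = lightcurve - lightcurve.mean()
psd = np.abs(np.fft.rfft(flux)) ** 2
freq = np.fft.rfftfreq(n_frames, d=dt)

print(frac_rms)
print(freq[1:4], psd[1:4])
```

as emphasized above , however , this kind of after - the - fact radiative modeling has an important limitation .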
this isbecause , in most cases , the radiative modeling has been done after the fact ; it was not included in the simulations themselves .therefore , the gas in the accretion disk was not allowed to respond thermodynamically to the cooling .this calls into question how much the structure obtained from the simulation reflects the true structure of the disk .fortunately , various groups are beginning to work on treating the thermodynamics of accretion disks within the numerical simulations with greater fidelity .thus far , two approaches have principally been explored : 1 ) _ ad hoc _ cooling prescriptions used to artificially create _ optically thick , geometrically thin _ disks and 2 ) fully self - consistent treatments of radiative cooling for _ optically thin , geometrically thick _ disks .we review each of these in the next 2 sections . for the _ ad hoc _ cooling prescription , cooling is assumed to equal heating ( approximately ) everywhere locally in the disk . since this is the same assumption as is made in the shakura - sunyaev and novikov - thorne disk models , this approach has proven quite useful in testing the key assumptions inherent in these models ( e.g * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?in particular , these simulations have been useful for testing the assumption that the stress within the disk goes to zero at the innermost stable circular orbit ( isco ) . a corollary to thisis that the specific angular momentum of the gas must remain constant at its isco value inside this radius .both of these effects have been confirmed in simulations of sufficiently thin disks , as shown in figure [ fig : isco ] .another approach to treating the thermodynamics of accretion disks has been to include _ physical _ radiative cooling processes directly within the simulations .so far there has been very limited work done on this for optically thick disks , but an optically - thin treatment was introduced in .similar to the after - the - fact radiative modeling described in section [ sec : matching ] , the optically - thin requirement restricts the applicability of this approach to relatively low luminosity systems , such as the quiescent and low / hard states of black hole x - ray binaries .recently this approach has been applied to sgr a * , which turns out to be right on the boundary between where after - the - fact radiative modeling breaks down and a self - consistent treatment becomes necessary .figure [ fig : sgra ] illustrates that this transition occurs right around an accretion rate of for sgr a*. ( _ black _ ) , ( _ blue _ ) , and ( _ red _ ) . for each accretion rate ,two simulations are shown , one that includes cooling self - consistently ( model names ending in `` c '' ) and one that does not .the spectra begin to diverge noticeably at .figure from .,scaledwidth=70.0% ]another area where a lot of interesting new results have come out is in the study of how magnetic field topology and strength affect black hole accretion .although there is now convincing evidence that the blandford - znajek mechanism works as predicted in powering jets ( e.g. * ? ? ?* ; * ? ? ?* ) , one lingering question is still how the accretion process supplies the required poloidal flux onto the black hole .simulations have demonstrated that such field can , in many cases , be generated self - consistently within mri - unstable disks ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) . 
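the ad hoc cooling prescriptions mentioned above act essentially as a thermostat : heat is removed on roughly an orbital timescale whenever the gas is hotter than the temperature implied by the desired disk thickness , so that cooling balances heating in a time - averaged sense . one simple variant of such a target cooling rate is sketched below ; the published prescriptions differ in detail , and the target aspect ratio , units , and functional form here are illustrative .

```python
import numpy as np

def target_cooling_rate(rho, temp, radius, mass_bh=1.0, h_over_r=0.05):
    """simple 'thermostat' cooling used to keep a simulated disk thin: remove
    heat on an orbital timescale whenever the gas is hotter than the
    temperature corresponding to a chosen target aspect ratio h/r.
    (illustrative variant; published prescriptions differ in detail.)
    units: G = M = c = 1, gas temperature ~ (h/r)^2 * v_k^2."""
    omega_k = np.sqrt(mass_bh / radius**3)          # keplerian frequency
    t_target = (h_over_r * radius * omega_k) ** 2   # ~ sound speed squared
    tau_cool = 2.0 * np.pi / omega_k                # one orbital period
    return np.where(temp > t_target,
                    rho * (temp - t_target) / tau_cool,
                    0.0)

# example: a ring of gas at r = 10 that is twice as hot as the target
rho, radius = 1.0, 10.0
omega_k = np.sqrt(1.0 / radius**3)
temp = 2.0 * (0.05 * radius * omega_k) ** 2
print(target_cooling_rate(rho, temp, radius))
```

as noted above , mri turbulence can often build up the poloidal flux needed to power jets on its own .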
however , this is strongly dependent on the initial magnetic field topology , as shown in .figure [ fig : beckwith ] nicely illustrates that when there is no net poloidal magnetic flux threading the inner disk , the magnetically - driven jet can be 2 orders of magnitude less energetic than when there is . at this time it is unclear what the `` natural '' field topology would be , or even if there is one .( _ top left _ ) , magnetic field strength ( _ top right _ ) , electromagnetic energy flux ( _ bottom left _ ) , and angular momentum flux ( _ bottom right _ ) . dashed lines show standard deviation from the average . ]although strong poloidal magnetic fields are useful for driving powerful jets , they can also create interesting feedback affects on an accretion disk . in the case where a black hole is able to accumulate field with a consistent net flux for an extended period of time , it is possible for the amassed field to eventually `` arrest '' the accretion process .an example of an arrested state is shown in figure [ fig : arrest ] . in a two - dimensional simulation where plasma with a constant net flux is fed in from the outer boundary ,a limit - cycle behavior can set in , where the mass accretion rate varies by many orders of magnitude between the arrested and non - arrested states .figure [ fig : arrested_mdot ] provides an example of the resulting mass accretion history .it is straightforward to show that the interval , , between each non - arrested phase in this scenario grows with time according to where is the radial infall velocity of the gas , is the strength of the magnetic field , and is the density of the gas . .starting from , a pattern of cyclic accretion develops ( seen as a sequence of spikes ) .figure from .,scaledwidth=70.0% ] in three - dimensions , the magnetic fields are no longer able to perfectly arrest the in - falling gas because of a `` magnetic '' rayleigh - taylor effect .basically , as low density , highly magnetized gas tries to support higher density gas in a gravitational potential , it becomes unstable to an interchange of the low- and high - density materials .indeed , such a magnetic rayleigh - taylor effect has been seen in recent simulations by . ) and meridional ( ) snapshots of the gas density of a magnetically arrested flow in 3d .black lines show field lines in the image plane .( panel e ) : time evolution of the mass accretion rate .( panel f ) : time evolution of the large - scale magnetic flux threading the bh horizon . (panel g ) : time evolution of the energy outflow efficiency .figure from . ]the results of are important for another reason .these were the first simulations to demonstrate a jet efficiency greater than unity .since the efficiency measures the amount of energy extracted by the jet , normalized by the amount of energy made available via accretion , a value indicates more energy is being extracted than is being supplied by accretion .this is only possible if some other source of energy is being tapped in this case the rotational energy of the black hole .this was the first demonstration that a blandford - znajek process _ must _ be at work in driving these simulated jets .there is observational evidence that several black - hole x - ray binaries ( bhbs ) , e.g. gro j1655 - 40 , v4641 sgr and gx 339 - 4 , and active galactic nuclei ( agn ) , e.g. 
ngc 3079 , ngc 1068 , and ngc 4258 , may have accretion disks that are tilted with respect to the symmetry plane of their central black hole spacetimes .there are also compelling theoretical arguments that many black hole accretion disks should be tilted .this applies to both stellar mass black holes , which can become tilted through asymmetric supernovae kicks or binary captures and will remain tilted throughout their accretion histories , and to supermassive black holes in galactic centers , which will likely be tilted for some period of time after every major merger event .close to the black hole , tilted disks may align with the symmetry plane of the black hole , either through the bardeen - petterson effect in geometrically thin disks or through the magneto - spin alignment effect in the case of geometrically thick , magnetically - choked accretion .however , for weakly magnetized , moderately thick disks ( ) , no alignment is observed . in such cases ,there are many observational consequences to consider . chapter 4.3 of this book discusses the two primary methods for estimating the spins of black holes : continuum - fitting and reflection - line modeling . both rely on an assumed monotonic relation between the inner edge of the accretion disk ( assumed to coincide with the radius of the isco ) and black hole spin , .this is because what both methods actually measure is the effective inner radius of the accretion disk , .one problem with this is that it has been shown that tilted disks do not follow such a monotonic behavior , at least not for disks that are not exceptionally geometrically thin .figure [ fig : fragile09 ] shows an example of the difference between how depends on for untilted and tilted simulated disks .similar behavior has been confirmed using both dynamical and radiative measures of .the implication is that spin can only be reliably inferred in cases where the inclination of the inner accretion disk can be independently determined , such as by modeling jet kinematics . of simulated untilted ( _ circles _ ) and tilted ( _ diamonds _ ) accretion disks as a function of black - hole spin using a surface density measure .the _ solid _ line is the isco radius .figure from .,scaledwidth=70.0% ] for geometrically thin , shakura - sunyaev type accretion disks , the bardeen - petterson effect may allow the inner region of the accretion disk to align with the symmetry plane of the black hole , perhaps alleviating concerns about measuring , at least for systems in the proper state ( `` soft '' or `` thermally dominant '' ) and luminosity range , where is the eddington luminosity .extremely low luminosity systems , though , such as sgr a * , do not experience bardeen - petterson alignment .further , for a system like sgr a * that is presumed to be fed by winds from massive stars orbiting in the galactic center , there is no reason to expect the accretion flow to be aligned with the black hole spin axis .therefore , a tilted configuration should be expected . in light of this, presented an initial comparison of the effect of tilt on spectral fitting of sgr a*. figure [ fig : dexter12 ] gives one illustration of how important this effect is ; it shows how the probability density distribution of four observables change if one simply accounts for the two extra degrees of freedom introduced by even a modestly tilted disk .the take away point should be clear ignoring tilt artificially constrains these fit parameters ! 
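both spin - measurement techniques rest on the monotonic mapping between the isco radius and the spin . for reference , the standard closed - form expression of bardeen , press & teukolsky ( 1972 ) is easy to evaluate , as in the sketch below ( in units of the gravitational radius , with positive spin meaning a prograde disk ) ; the point of the tilted - disk results discussed above is precisely that the measured effective inner radius need not follow this curve .

```python
import numpy as np

def r_isco(a):
    """isco radius in units of gm/c^2 for dimensionless spin a in [-1, 1]
    (bardeen, press & teukolsky 1972); a > 0 means a prograde disk."""
    z1 = 1.0 + (1.0 - a**2) ** (1.0 / 3.0) * ((1.0 + a) ** (1.0 / 3.0)
                                              + (1.0 - a) ** (1.0 / 3.0))
    z2 = np.sqrt(3.0 * a**2 + z1**2)
    return 3.0 + z2 - np.sign(a) * np.sqrt((3.0 - z1) * (3.0 + z1 + 2.0 * z2))

for a in [0.0, 0.5, 0.9, 0.998]:
    print(a, r_isco(a))
# a = 0 gives 6.0, while a = 0.998 gives about 1.24 gravitational radii
```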
one remarkable outcome of considering tilt in fitting the spectral data for sgr a *is that tilted simulations seem able to naturally resolve a problem that had plagued earlier studies .spectra produced from untilted simulations of sgr a * have always yielded a deficit of flux in the near - infrared compared to what is observed . for untilted simulations ,this can only be rectified by invoking additional spectral components beyond those that naturally arise from the simulations .tilted simulations , though , produce a sufficient population of hot electrons , _ without any additional assumptions _ , to produce the observed near - infrared flux ( see comparison in figure [ fig : tilted_spectrum ] ) .they are able to do this because of another unique feature of tilted disks : the presence of standing shocks near the line - of - nodes at small radii .these shocks are a result of epicyclic driving due to unbalanced pressure gradients in tilted disks leading to a crowding of orbits near their respective apocenters .figure [ fig : henisey12 ] shows the orientation of these shocks in relation to the rest of the inner accretion flow .orders of magnitude in comparable untilted simulations .figure from .,scaledwidth=80.0% ] a worthwhile future direction to pursue in this area would be a robust comparison of tilted disk simulations using both grmhd and smoothed - particle hydrodynamics ( sph ) numerical methods .the grmhd simulations ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) enjoy the advantage of being `` first principles '' calculations , since they include all of the relevant physics , whereas the sph simulations ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) enjoy the advantage of being more computationally efficient , though they make certain assumptions about the form of the `` viscosity '' in the disk .thus far , the grmhd and sph communities have proceeded separately in their studies of tilted accretion disks , and it has yet to be demonstrated that the two methods yield equivalent results .this would seem to be a relatively straightforward and important thing to check .a few years ago , it might have been very ambitious to claim that researchers would soon be able to perform global radiation mhd simulations of black hole accretion disks , yet a lot has happened over that time , so that now it is no longer a prediction but a reality . in the realm of newtonian simulations ,a marvelous study was published by , showing global ( though two - dimensional ) radiation mhd simulations of accretion onto a black hole in three different accretion regimes : , , and .the remarkably different behavior of the disk in each of the simulations ( illustrated in figure [ fig : ohsuga11 ] ) is testament to how rich this field promises to be as more groups join this line of research .the specifics of this work are discussed more in chapter 5.3 . 
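the accretion regimes explored in such radiation mhd studies are typically quoted relative to the eddington limit . the snippet below evaluates the eddington luminosity and the corresponding eddington accretion rate for two representative black - hole masses ; it is a back - of - the - envelope sketch written for this text , and the radiative efficiency of 0.1 used to convert luminosity into an accretion rate is an assumed fiducial value , not a number taken from the works discussed above .

def eddington(m_bh_msun, efficiency=0.1):
    # Spherical, electron-scattering Eddington luminosity, and the accretion rate
    # that would produce it assuming L = efficiency * Mdot * c^2.
    c = 2.998e10                                        # speed of light, cm/s
    l_edd = 1.26e38 * m_bh_msun                         # erg/s
    mdot_edd = l_edd / (efficiency * c * c)             # g/s
    mdot_edd_msun_yr = mdot_edd * 3.156e7 / 1.989e33    # Msun/yr
    return l_edd, mdot_edd_msun_yr

for m in (10.0, 1.0e8):
    l_edd, mdot_edd = eddington(m)
    print(f"M = {m:.0e} Msun: L_Edd = {l_edd:.2e} erg/s, Mdot_Edd ~ {mdot_edd:.2e} Msun/yr")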
the other big thing to happen ( mostly ) within the past year is that a number of groups have now tackled , for the first time , the challenge of developing codes for _ relativistic _ radiation mhd in black hole environments .so far none of these groups have gotten to the point of simulating accretion disks in the way did ( they are still mostly at the stage of code tests and simple one- and two - dimensional problems ) , but with so many groups joining the chase , one can surely expect rapid progress in the near future .one result of some astrophysical interest is the study of bondi - hoyle ( wind ) accretion onto a black hole , including the effects of radiation ( see figure [ fig : zanotti11 ] ) .work presented in this chapter was supported in part by a high - performance computing grant from oak ridge associated universities / oak ridge national laboratory and by the national science foundation under grant no .
as the title suggests , the purpose of this chapter is to review the current status of numerical simulations of black hole accretion disks . this chapter focuses exclusively on _ global _ simulations of the accretion process within a few tens of gravitational radii of the black hole . most of the simulations discussed are performed using general relativistic magnetohydrodynamic ( mhd ) schemes , although some mention is made of newtonian radiation mhd simulations and smoothed particle hydrodynamics . the goal is to convey some of the exciting work that has been going on in the past few years and provide some speculation on future directions .
integration of the form , where is either or , is widely encountered in many engineering and scientific applications , such as those involving fourier or laplace transforms. often such integrals are approximated by numerical integrations over a finite domain , resulting in a truncation error , in addition to the discretization error .one example is a discrete fourier transform ( dft ) , where there is a truncation error due to cut - off in the tail , in addition to the discretization error . in theorythe cut - off error can always be reduced by extending the finite domain at the expense of computing time . however , in many cases a sufficiently long integration domain covering a very long tail can be computationally expensive , such as when the integrand itself is a semi - infinite integration ( e.g. forward fourier or laplace transform ) , or when the integrand decays to zero very slowly ( e.g. a heavy tailed density or its characteristic function ) . much work has been done to directly compute the tail integration in order to reduce the truncation error .examples include nonlinear transformation and extrapolation ( wynn 1956 , alaylioglu et al 1973 , sidi 1980 , 1982 , 1988 , levin and sidi 1981 ) and application of special or generalized quadratures ( longman 1956 , hurwitz and zweifel 1956 , bakhvalov and vasileva 1968 , piessens 1970 , piessens and haegemans 1973 , patterson 1976 , evans and webster 1997 , evans and chung 2007 ) , among many others .this paper describes a very simple , perhaps the simplest , end - point correction to account for the tail integration over the entire range .the treatment of the tail reduces the usual truncation error significantly to a much smaller discrete error , thus increasing overall accuracy of the integration , while requiring virtually no extra computing effort . for the same accuracy ,this simple tail correction allows a much shorter finite integration domain than would be required otherwise , thus saving computer time while avoiding extra programming effort .to our knowledge this result is not known in the literature and we believe it deserves to be published for its elegant simplicity and broad applicability . though it is possible that our formula is a rediscovery of a very old result hidden in the vast literature related to numerical integration .the paper is organized as follows . in section 2, we derive the tail integration approximation and its analytical error .a few examples are shown to demonstrate the effectiveness of the tail integration approximation in section 3 .concluding remarks are given in section 4 .consider integration . without loss of generality , we assume ( a change of variable results in the desired form ) . for derivation procedure and the resulting formula are very similar . in the following ,we assume that * the integral exists ; * all derivatives exist and as .the truncation error of replacing by is simply the tail integration for higher accuracy , instead of increasing truncation length at the cost of computing time , we propose to compute the tail integration explicitly by a very economical but effective simplification .assume approaches zero as and the truncation point can be arbitrarily chosen in a numerical integration .let , where is some large integer .dividing integration from to into cycles with an equal length of yields now assume that is piecewise linear within each -cycle , so that each of the integrals in ( 2 ) can be computed exactly . 
that is , in the range ] , is given by ( 9 ) .* _ example 1 : _ * in this example , the closed form results are figure 1 compares the `` magic '' point value representing simplified tail integration with the exact tail integration as functions of parameter for , i.e. the truncated lengths .the figure shows that a simple formula ( 4 ) matches the exact semi - infinite tail integration surprisingly well for the entire range of parameter .corresponding to figure 1 , the actual errors of using formula ( 4 ) are shown in table 1 , in comparison with the truncation errors without applying the correction term given by ( 4 ) .figure 2 shows the same comparison at an even shorter truncated length of .the error of using ( 4 ) is .if is large , the function is `` short tailed '' and it goes to zero very fast .the absolute error is very small even at .the relative error ( against the already very small tail integration ) , given by , is actually large in this case .but this large relative error in the tail approximation does not affect the high accuracy of the approximation for the whole integration .what is important is the error of the tail integration relative to the whole integration value . indeed ,relative to the exact integration , the error of using ( 4 ) is , which is about at .the condition as is not satisfied in this case if . however , as discussed above , the application of formula ( 4 ) does not cause any problem . for a small value of parameter , the truncation error will be large unless the truncated length is very long .for instance , with the truncation error ( if ignore the tail integration ) is more than 70% at ( , as the case in figure 1 ) , and it is more than 88% at ( , as the case in figure 2 ) . on the other hand , if we add the `` magic '' value from formula ( 4 ) to approximate the tail integration , the absolute error of the complete integration due to this approximation is less than 0.01% , and the relative error is at both and . in other words , by including this one - point value , the accuracy of integration has dramatically improved by several orders of magnitude at virtually no extra cost , compared with the truncated integration . for the truncated integration to have similar accuracy as , we need to extend the truncated length from to for this heavy tailed integrand .* _ example 2 : _ * .this example has a heavier tail than the previous one . here ,we have closed form for , but not for or , or can be accurately computed by adaptive integration functions available in many numerical packages .here we used _imsl _ function based on the modified clenshaw - curtis integration method ( clenshaw and curtis 1960 ; piessens , doncker - kapenga , berhuber and kahaner 1983 ) .figure 3 compares the `` exact '' tail integration with the one - point value .again the one - point approximation does an extremely good job . even at the shortest truncation length of just the one - point approximation is very close to the exact semi - infinite tail integration . applying the analytical error formula ( 9 ) to * * , * * we have taking the first three leading terms we get at and at .the relative error is about 1% at and it is about 0.002% at . 
apparently , if the extra correction term is included as in ( 7 ) , the error reduces further by an order of magnitude at and by several orders of magnitude at .corresponding to figure 3 , the actual errors of using formula ( 4 ) are shown in table 2 , in comparison with the truncation errors without applying the correction term given by ( 4 ) .figure 4 shows the truncated integration and the truncated integration with the tail modification ( 4 ) added , i.e. , along with the correct value of the full integration .the contrast between results with and without the one - point tail approximation is striking . at the shortest truncation length of ( , the relative error due to truncation for the truncated integration is more than 30% , but with the tail approximation added , the relative error reduces to 0.5% . at , the largest truncation length shown in figure 4 ,the relative error due to truncation is still more than 4% , but after the `` magic '' point value is added the relative error reduces to less than . another interesting way to look at these comparisons , which is relevant for integrating heavy tailed functions , is to consider the required truncation length for the truncated integration to achieve the same accuracy as the one with the `` magic '' value added . for the truncated integration to achieve the same accuracy of ( integration truncated at one - cycle plus the magic point value ) , we need to extend the integration length to . for to achieve the same accuracy of , the integration length has to be extended to more than ! on the other hand , if we add the tail approximation to , the relative error reduces from 0.5% to less than !this error reduction requires no extra computing , since is simply a number given by .* _ example 3 : _ * **. * * we have remarked that the piecewise linear assumption does not require monotonicity , i.e. can be oscillating , as long as its frequency is relatively small compared with the principal cycles .for example , when the function is the characteristic function of a compound distribution , it oscillates with its frequency approaching zero in the long tail . in the current example with , there is a closed form for , but not for or , figure 5 compares the `` exact '' tail integration with the one - point approximation for the case .again the one - point approximation performs surprisingly well , despite itself is now an oscillating function , along with the principal cycles in .the piecewise linearity assumption is apparently still valid for relatively mild oscillating .corresponding to figure 5 , the actual errors of using formula ( 4 ) are shown in table 3 , in comparison with the truncation errors without applying the correction term given by ( 4 ) .not surprisingly , the errors are larger in comparison with those in examples 1 and 2 , due to the fact that now is itself an oscillating function .still , table 3 shows the truncation error is reduced by an order of magnitude after applying the simple formula ( 4 ) .figure 6 compares the truncated integration against , along with the correct value of the full integration . at truncation length , the shortest truncation length shown in figures 5 and 6 , the relative error is less than 0.06% and it is less than 0.01% at . in comparison ,the truncated integration without the end point correction has relative error of 2.7% and 0.2% , respectively for those two truncation lengths . 
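the gains quoted in these examples are easy to reproduce . the sketch below applies the one - point correction of formula ( 4 ) , together with the next term of the integration - by - parts series in the sense of formula ( 7 ) , to the sine case with truncation at x = 2 pi m . it is an illustrative snippet written for this text : the test integrand exp(-a x ) is chosen only because its full integral , 1/(1 + a^2 ) , and its tail integral have simple closed forms . for a = 0.1 and m = 1 the bare truncation error is roughly 53% of the exact value , while adding the single end - point value brings it down to about 0.5% , in line with the behaviour described above .

import math

def f(x, a):
    # illustrative integrand with a slowly decaying tail for small a (an assumption of this sketch)
    return math.exp(-a * x)

def truncated_integral(a, m, n=200_000):
    # composite trapezoidal rule for int_0^{2 pi m} f(x) sin(x) dx
    x_m = 2.0 * math.pi * m
    h = x_m / n
    total = 0.5 * (f(0.0, a) * math.sin(0.0) + f(x_m, a) * math.sin(x_m))
    for k in range(1, n):
        x = k * h
        total += f(x, a) * math.sin(x)
    return h * total, x_m

a, m = 0.1, 1
exact = 1.0 / (1.0 + a * a)                    # closed form of int_0^inf exp(-a x) sin(x) dx
i_trunc, x_m = truncated_integral(a, m)
i_one_point = i_trunc + f(x_m, a)              # one-point end-point correction: tail ~ f(2 pi m)
i_two_terms = i_trunc + f(x_m, a) - a * a * f(x_m, a)   # next term, -f''(2 pi m); here f'' = a^2 f

for label, value in (("truncated", i_trunc),
                     ("+ f(2 pi m)", i_one_point),
                     ("+ f - f''", i_two_terms)):
    print(f"{label:12s}: {value:.8f}   relative error {abs(value - exact) / exact:.2e}")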
applying the analytical error formula ( 9 ) to and noting and with and , we obtain where only the first two leading terms corresponding to the 2 and 4 derivatives are included , leading to at that agrees with the actual error .similar to the previous example , if we include the extra correction term , the error reduces further by two orders of magnitude at .the purpose of example 3 is to show that the piecewise linear approximation in the tail could still be valid even if there is a secondary oscillation in , provided its frequency is not as large as the principal oscillator .if the parameter is larger than one , then we can simply perform a change of variable with and integrate in terms of .better still , for any value of , we can make use of the equality to get rid of the secondary oscillation altogether before doing numerical integration . in practice, the secondary oscillation often has a varying frequency with a slowly decaying magnitude , such as in the case of the characteristic function of a compound distribution with a heavy tail . in this caseit might be difficult to effectively apply regular numerical quadratures in the tail integration , but the simple one - point formula ( 4 ) might be very effective .all these examples show dramatic reduction in truncation errors if tail integration approximation ( 4 ) is employed , with virtually no extra cost .if the extra correction term is included , i.e. using ( 7 ) instead of ( 4 ) , the error is reduced much further .we have derived perhaps the simplest but efficient tail integration approximation , first intuitively by piecewise linear approximation , then more generally through integration by parts .analytical higher - order correction terms and thus error estimates are also derived . the usual truncation error associated with a finite length of the truncated integration domaincan be reduced dramatically by employing the one - point tail integration approximation , at virtually no extra computing cost , so a higher accuracy is achieved with a shorter truncation length . under certain conditions outlined in the present study , the method can be used in many practical applications . for example , the authors have successfully applied the present method in computing heavy tailed compound distributions through inverting their characteristic functions , where the function itself is a semi - infinite numerical integration ( luo , shevchenko and donnelly 2007 ) . of course there are more elaborate methods in the literature which are superior to the present simple formula in terms of better accuracy and broader applicability , such as some of the extrapolation methods proposed by wynn 1956 and by sidi 1980 , 1988 .the merit of the present proposal is its simplicity and effectiveness - a single function evaluation for the integrand at the truncation point is all that is needed to reduce the truncation error , often by orders of magnitude .it can not be simpler than that .also , in some applications the function may not even exist in closed form , for instance when is the characteristic function of some compound distributions as mentioned above , then itself is a semi - infinite integration of a highly oscillatory function , which could only be obtained numerically. 
in such cases some of the other more sophisticated methods relying on a closed form of may not be readily applicable . we would like to thank david gates , mark westcott and three anonymous referees for many constructive comments which have led to significant improvements in the manuscript . [ table : 0.2241 , 0.1422 , 0.1006 , 0.0637 , 0.0318 ] [ table : 0.0105 , 0.0053 , 0.0035 , 0.0026 , 0.0021 ] [ figures : the truncated integration and the truncated integration plus the one - point approximation of the tail integration , as functions of the truncated length ; the solid line in each represents the exact value of the full integration without truncation error . ]
integration of the form , where is either or , is widely encountered in many engineering and scientific applications , such as those involving fourier or laplace transforms . often such integrals are approximated by a numerical integration over a finite domain , leaving a truncation error equal to the tail integration in addition to the discretization error . this paper describes a very simple , perhaps the simplest , end - point correction to approximate the tail integration , which significantly reduces the truncation error and thus increases the overall accuracy of the numerical integration , with virtually no extra computational effort . higher order correction terms and error estimates for the end - point correction formula are also derived . the effectiveness of this one - point correction formula is demonstrated through several examples . * keywords : * numerical integration , fourier transform , laplace transform , truncation error .
in constructing a model for the self - assembly of addressable structures , we note that the designed interactions should be much stronger than any attractive interactions between subunits that are not adjacent in a correctly assembled structure .the designed interactions that stabilize the target structure can be described by a connectivity graph , , in which each vertex represents a distinct subunit and each edge indicates a correct bond .this graph allows us to describe the connectivity of the structure independently of the geometry and spatial organization of the building blocks . for structures constructed from dna bricks ,the edges of indicate the hybridization of dna strands with complementary sequences that are adjacent in the target structure . an example three - dimensional dna - brick structure is shown along with its connectivity graph in figures [ fig : ramp_example]a and [ fig : ramp_example]b . in an ideal solution with exclusively designed interactions ,the subunits assemble into clusters in which all allowed bonds are encoded in the connectivity graph of the target structure . in order to compute the free - energy difference between a particular cluster size and the unbonded single - stranded bricks, we must consider all the ways in which a correctly bonded cluster with a given number of monomers can be assembled .these ` fragments ' of the target structure correspond to connected subgraphs of the connectivity graph . in a dilute solution with strong designed interactions ,the numbers of edges and vertices are the primary factors determining the stability of a particular fragment .we therefore identify all of the possible on - pathway assembly intermediates by grouping fragments into sets with the same number of edges and vertices and counting the total number of fragments in each set. this theoretical approach is powerful because it can predict the free - energy landscape as a function of the degree of assembly between the monomers and the target structure .furthermore , the predicted landscape captures the precise topology of the target structure , which is essential for understanding the assembly of addressable , finite - sized structures . in the case of dna - brick structures, we can assign dna hybridization free energies to the edges of the target connectivity graph in order to determine the temperature dependence of the free - energy landscape ; for example , figure [ fig : ramp_example]d shows the free - energy profile of the 86-strand dna - brick structure with random dna sequences at three temperatures .our theoretical approach allows us to calculate the nucleation barrier , , by examining the free energies of clusters corresponding to fragments with exactly vertices .the critical number of strands required for nucleation is : transient clusters with fewer than strands are more likely to dissociate than to continue incorporating additional strands .the presence of a substantial nucleation barrier therefore inhibits the proliferation of large , partially assembled fragments that stick together to form non - target aggregates .over a significant range of temperatures , we find that the free - energy profiles of dna - brick structures exhibit both a nucleation barrier and a thermodynamically stable intermediate structure . the nucleation barrier is associated with the minimum number of subunits that must be assembled in order to complete one or more _ cycles _ , i.e. 
closed loops of stabilizing bonds in a fragment .for example , the critical number of monomers in the example structure at 319 k , , is one fewer than the nine subunits required to form a bicyclic fragment of the target structure . under the conditions where nucleation is rate controlling , the minimum free - energy structure is _ not _ the complete 86-particle target structure , but rather a structure with only particles .this incomplete structure is favored by entropy , since it can be realized in many more ways than the unique target structure .hence , the temperature where nucleation is rate controlling is higher than the temperature where the target structure is the most stable cluster .the existence of thermodynamically stable intermediates is a typical feature of dna - brick structures and of complex addressable , finite - sized structures in general .this behavior is not compatible with classical nucleation theory ( cnt ) , which predicts that , beyond the nucleation barrier , large clusters are always more stable than smaller clusters . as a consequence , in ` classical ' nucleation scenarios such as crystallization, there is a sharp boundary in temperature and concentration at which the largest - possible ordered structure , rather than the monomeric state , becomes thermodynamically stable .typically , a simple fluid must be supersaturated well beyond this boundary in order to reduce the nucleation barrier , which arises due to the competition between the free - energy penalty of forming a solid liquid interface and the increased stability due to the growth of an ordered structure. yet in the case of addressable self - assembly , and dna bricks in particular , a nucleation barrier for the formation of a stable partial structure may exist even when the target structure is unstable relative to the free monomers . an experiment to assemble such a structure requires a _ protocol _ : first nucleation at a relatively high temperature , and then further cooling to complete the formation of the target structure .this behavior can be seen in figure [ fig : ramp_example]e , where we identify a narrow temperature window in which there is a significant yet surmountable nucleation barrier . unlike cnt , the nucleation barrier does not diverge as the temperature is increased . instead , there is a well - defined temperature above which all clusters have a higher free energy than the free monomers .as the temperature is lowered further , the nucleation barrier disappears entirely before the equilibrium yield , defined as the fraction of all clusters that are correctly assembled as the complete target structure , increases measurably above zero .the equilibrium yield tends to 100% at low temperatures , since we have thus far assumed that only designed interactions are possible .therefore , because of the presence of stable intermediate structures , it is typically impossible to assemble the target structure completely at any temperature where nucleation is rate controlling . in order to examine the importance of a nucleation barrier for preventing misassembly , we calculate the free - energy difference between all off - pathway intermediates and all on - pathway intermediates , , by estimating the probability of incidental interactions between partially assembled structures. 
from the connectivity graph of the example dna - brick structure , we can calculate the total free energy of aggregated clusters by considering all the ways that partially assembled structures can interact via the dangling ends of the single - stranded bricks , as shown in figure [ fig : ramp_example]c .we also estimate this free - energy difference in the case of slow nucleation , , by only allowing one of the interacting clusters in a misassembled intermediate to have .the above analysis supports our claim that a substantial nucleation barrier is essential for accurate self - assembly .our calculations show that even with very weak incidental interactions , incorrect bonding between the multiple dangling ends of large partial structures prevents error - free assembly at equilibrium , since .the presence of a nucleation barrier slows the approach to equilibrium , maintaining the viability of the correctly assembled clusters .these theoretical predictions are confirmed by extensive monte carlo simulations of the structure shown in figure [ fig : ramp_example]a . in these simulations ,the dna bricks are modeled as rigid particles that move on a cubic lattice , but otherwise the sequence complementarity and the hybridization free energies of the experimental system are preserved. using realistic dynamics, we simulate the assembly of the target structure using a single copy of each monomer . in figure[ fig : ramp_example]f , we compare a representative trajectory from a simulation using a linear temperature ramp with a trajectory from a constant - temperature simulation starting from free monomers in solution .we also report the largest stable cluster size averaged over many such trajectories in figure [ fig : ramp_example]g .nucleation first occurs within the predicted nucleation window where . at 319 k ,the size of the largest stable cluster coincides precisely with the predicted average cluster size at the free - energy minimum in figure [ fig : ramp_example]d .intermediate structures assembled via a temperature ramp continue to grow at lower temperatures , while clusters formed directly from a solution of free monomers become arrested in conformations that are incompatible with further growth ( figure [ fig : ramp_example]f,_inset _ ) . in agreement with our theoretical predictions ,the simulation results demonstrate that a time - dependent protocol is essential for correctly assembling a complete dna - brick structure .in the modular assemblies reported in ref . , the maximum coordination number of bricks in the interior of the structure is four .however , one can envisage other building blocks , such as functionalized molecular constructs or nano - colloids , that have a different coordination number . to investigate the effect of the coordination number on the nucleation barrier, we compare the free - energy profile of a 48-strand dna - brick structure with those of two higher - coordinated structures ( figure [ fig : coordination_number]a ) : a simple cubic structure with coordination number and a close - packed structure with .( for a discussion of two - dimensional structures , see sec .[ sec:2d ] . ) in figure [ fig : coordination_number]b , we show the free - energy profiles at 50% yield assuming identical bond energies within each structure .one striking difference between the structure and the higher - coordinated examples is the stability of the target at 50% yield . 
in the dna - brick structure ,the target structure coexists in nearly equal populations with a partial structure that is missing a single cycle . in the structures with higher coordination numbers , however , the target has the same free energy as the free monomers at 50% yield .intermediate structures are therefore globally unstable at all temperatures , as predicted by cnt .a second point of distinction among these structures lies in the relative stability of intermediate cluster sizes .whereas the dna - brick structure assembles by completing individual cycles , the cubic structure grows by adding one face at a time to an expanding cuboid . with , the greater diversity of fragments withthe same number of vertices smooths out the free - energy profile near the top of the nucleation barrier .the fitted black line in figure [ fig : coordination_number]b shows that the assembly of this structure does in fact obey cnt ( see sec .[ sec : cnt ] ) .the differences among these free - energy profiles originate from the topologies of the connectivity graphs of the example structures .the most important determinant of the nucleation behavior is simply the number of vertices required to complete each additional cycle in the target connectivity graph , which is controlled by the maximum coordination number of the subunits . ) and thus does not affect the shape of the free - energy profile . ]our findings imply that controlled self - assembly of three - dimensional addressable structures is unlikely to be achieved straightforwardly using subunits with coordination numbers higher than four . in higher - coordinated structures , which are well described by cnt, it would be necessary to go to high supersaturation in order to find a surmountable nucleation barrier ; however , such an approach is likely to fail due to kinetic trapping. yet in dna - brick structures , the nucleation barrier is surmountable at low supersaturation and is relatively insensitive to the size of the target structure ( figure [ fig : coordination_number]c ) .the reliable self - assembly of large dna - brick structures is thus a direct consequence of the small number of bonds made by each brick .recent publications have argued that equal bond energies should enhance the stability of the designed structure and reduce errors during growth. by contrast , we find that the kinetics of dna - brick assembly are actually worse if one selects dna sequences that minimize the variance in the bond energies .our observation is consistent with the successful use of random dna sequences in the original experiments with dna bricks. here again , the nucleation behavior is responsible for this unexpected result . to demonstrate the difference between random dna sequences and sequences chosen to yield monodisperse bond energies , we consider the relatively simple non - convex dna - brick structure shown in figure [ fig : central_hole]a .this 74-brick structure , constructed by removing the interior strands and two faces from a cuboidal structure , assembles roughly face - by - face when using random dna sequences . the relevant nucleation barrier , as predicted theoretically in figure [ fig : central_hole]b and confirmed with monte carlo simulations in figure [ fig : central_hole]c , is the completion of the third face . 
with monodisperse bond energies and an equivalent mean interaction strength ,a much larger nucleation barrier appears before the first face forms .attempts to reduce this nucleation barrier by increasing the mean bond energy result in kinetic trapping and arrested growth . despite promising fluctuations in the largest cluster size in the simulation trajectory with monodisperse energies , multiple competing nuclei appear , and the largest cluster remains poorly configured for further assembly ( figure [ fig : central_hole]c,_inset _ ) .the use of sequences with a broad distribution of hybridization free energies results in a more suitable nucleation barrier because such a distribution selectively stabilizes small and floppy intermediate structures .this is a statistical effect : since there are far fewer ways of constructing a maximally connected fragment with a given number of monomers , the chance that randomly assigned sequences concentrate the strongest bonds in a compact fragment is vanishingly small in a large structure . as a result ,the dominant nucleation pathways no longer need to follow the maximally connected fragments .the use of a broad distribution of bond energies therefore tends to reduce nucleation barriers , since unstable fragments near the top of a barrier contain fewer cycles and are thus affected more significantly by the variance in the bond - energy distribution .the insights provided by our predictive theory allow us to understand the general principles underlying the unexpected success of dna - brick self - assembly .slow , controlled nucleation at low supersaturation is achieved for large structures since each brick can only make a small number of designed connections . because of an appreciable nucleation barrier that appears in a narrow temperature window , monomer depletion does not pose a significant problem for one - pot assembly .surprisingly , complex structures with randomly selected complementary dna sequences experience enhanced nucleation , making larger intermediate structures kinetically accessible at higher temperatures .the use of a temperature ramp plays a more crucial role than previously thought .cooling the dna - brick solution slowly is not just a convenient way of locating good assembly conditions , as in the case of conventional crystals ; rather , it is an essential non - equilibrium protocol for achieving error - free assembly of finite - sized structures .the explanation of slow nucleation and fast growth that was originally proposed in refs . and is therefore incomplete : fast growth allows the dna bricks to assemble into a stable , on - pathway intermediate that must be annealed at lower temperatures to complete the target structure .remaining out of equilibrium throughout the assembly protocol , as is necessary in order to avoid the aggregation of partial structures , relies on the slow diffusion of large intermediates .this is a reasonable assumption , since the rate of diffusion changes approximately inversely with the radius of a fragment in solution. our approach also suggests how to improve the design of dna - brick nanostructures beyond the random selection of uniformly distributed dna sequences . 
for a given target structure , it is easy to tune the nucleation barrier by adjusting the statistical distribution of bond energies . complementary dna sequences can then be assigned to the structure in order to achieve the desired distribution of hybridization free energies . furthermore , with an understanding of the origin of the nucleation barrier in a particular structure , it is possible to optimize the annealing protocol rationally in order to increase the yield of the target assembly . our approach also provides a means of systematically investigating how local modifications to the coordination number through the fusing of adjacent strands affect the nucleation behavior of dna - brick structures . the theoretical method used here greatly simplifies the quantitative prediction of nucleation barriers and intermediate structures with widespread applications for controlling the self - assembly of biomolecular or synthetic building blocks . addressable self - assembly holds great promise for building intricate three - dimensional structures that are likely to require optimization on a case - by - case basis . because our predictive theory is sensitive to the details of a particular target structure , performing these calculations for nanostructures of experimental interest will enable the precise engineering of assembly properties at the design stage . in order for potential users to perform such experimental protocol design , we provide a user - friendly software package online at https://github.com/wmjac/pygtsa . this work was carried out with support from the european research council ( advanced grant 227758 ) and the engineering and physical sciences research council programme grant ep / i001352/1 . w.m.j . acknowledges support from the gates cambridge trust and the national science foundation graduate research fellowship under grant no . we compute the hybridization free energies of complementary 8-nucleotide dna sequences using established empirical formulae assuming salt concentrations of [ na = 1 moldm and [ mg = 0.08 moldm . for the calculations with monodisperse bond energies , we use the sequences provided in ref . . the strengths of incidental interactions are estimated based on the longest attractive overlap for each pair of non - complementary sequences . in calculations of the equilibrium yield and free - energy profiles , we report the average thermodynamic properties using 1000 randomly chosen complete sets of dna sequences . see sec . [ sec : distributions ] for further details . constant - temperature lattice monte carlo simulations are carried out using the virtual move monte carlo algorithm in order to produce physical dynamics . rigid particles , each with four distinct patches fixed in a tetrahedral arrangement , are confined to a cubic lattice . a single copy of each required subunit is present in the simulation box with lattice sites . complete details are given in ref . . for comparison with the results of these simulations , the theoretical calculations reported here assume the same dimensionless monomer concentration , lattice coordination number , and fixed number of dihedral angles ( see sec . [ sec : theory ] ) .
we construct the connectivity graph from the designed bonds between adjacent subunits in the target structure . an example connectivity graph is shown in figure [ fig : ramp_example]b . from this graph we are able to determine all the relevant thermodynamic properties of the intermediate and target structures in a near - equilibrium assembly protocol . a thorough explanation of this theoretical method is presented in ref . ; here we summarize the key equations . the connected subgraphs ( ` fragments ' ) of the target connectivity graph are grouped into sets in which all fragments have precisely the same numbers of edges and vertices . assuming that subunits can only form clusters with designed bonds , the dimensionless grand potential can be written as a sum over all sets of fragments . the average fugacity of the fragments in each set depends on the topologies of the fragment graphs as well as the geometry of the subunits and the solution conditions . ignoring excluded volume interactions , we can approximate it in terms of the rotational entropy of a monomer , the dihedral entropy of an unconstrained dimer and the dimensionless concentration . the mean dimensionless dihedral entropy of a fragment depends on the number of bridges in the fragment . the exponentially weighted mean bond energy within each set is defined in terms of the hybridization free energy of each bond , the inverse temperature and the edge set of each fragment . the inner average runs over all fragments in the set with _ quenched _ random bond energies . in the case of dna - brick structures , the outer average samples dna sequences so that each complete set of bond energies for the target structure is chosen independently from the same distribution of hybridization free energies . the free energy of a correctly bonded cluster of a given number of monomers follows from this sum , since we do not distinguish among equally sized clusters with varying compositions . this definition is appropriate for studying nucleation , as any subset of monomers has the potential to serve as a nucleation site . the equilibrium yield is defined as the fraction of all clusters in solution that are correctly formed , expressed in terms of the grand - canonical average number of copies of each fragment in solution and the fugacity of the target structure . in the case of structures with higher coordination numbers , a few edges may be removed from the connectivity graph without allowing any subunit to dissociate or rotate . for these structures , we replace the target - structure fugacity in eq . ( [ eq : equilibrium_yield ] ) with a sum over the fugacities of all fragments that enforce the correct geometry of the target structure . classical nucleation theory predicts a free - energy barrier for the nucleation of a stable , ordered structure from an unstable , fluid phase . the height of the barrier and the size of the critical nucleus vary with the degree of supersaturation of the fluid phase . assuming spherical nuclei , the classical prediction for the free - energy difference between a nucleus of the ordered phase containing a given number of monomers and the bulk fluid phase involves the bulk free - energy difference per particle between the fluid phase and the ordered phase , the free - energy cost per unit area of forming an interface between the two phases and the number density of the ordered phase .
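written out for spherical nuclei , with \Delta g denoting the bulk free - energy change per particle on transferring a monomer from the fluid to the ordered phase ( negative when the ordered phase is stable ) , \gamma the interfacial free - energy cost per unit area and \rho the number density of the ordered phase , the classical expression takes the standard form below ; this is a reference sketch in notation chosen here , which need not coincide symbol for symbol with eq . ( [ eq : cnt ] ) . maximizing the first expression over n gives the familiar barrier height in the second .

\Delta F(n) \;=\; n\,\Delta g \;+\; (36\pi)^{1/3}\,\gamma\,\rho^{-2/3}\,n^{2/3}\,,
\qquad
\Delta F^{*} \;=\; \frac{16\pi}{3}\,\frac{\gamma^{3}}{\rho^{2}\,(\Delta g)^{2}}\,.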
in figure[ fig : coordination_number]b , the black line shows the fit to eq .( [ eq : cnt ] ) with .in the main text , we report all results of the theoretical calculations assuming a dimensionless concentration of for comparison with the lattice monte carlo simulations . changing thisconcentration shifts both the equilibrium yield and the nucleation barrier linearly with .the concentration dependence of the nucleation barrier of the example 86-strand structure at several temperatures is shown in figure [ fig : concentration_dependence ] .although we have assumed equal concentrations of all monomers , polydispersity in the monomer concentrations can be easily incorporated into the theoretical treatment in much the same way as the distributions of designed interaction energies ., on the dimensionless concentration , , for the example 86-strand structure shown in figure [ fig : ramp_example]a.,width=321 ]we find that the shapes of the free - energy profiles of two - dimensional structures are also strongly affected by their coordination numbers . in figure[ fig : two_dimensional_structures ] , we show the free - energy profiles at 50% yield of two similarly sized two - dimensional structures with coordination numbers and .the behavior of the two - dimensional structure with is similar to that of the four - coordinated three - dimensional structures examined in the main text .the two - dimensional structure with exhibits the same face - by - face assembly as three - dimensional structures with octahedral coordination ; in the two - dimensional case , however , coexistence at 50% yield occurs between the target structure and fragments with the same number of monomers but fewer bonds .two - dimensional dna - tile structures with have been successfully assembled. in general , the nucleation barriers in two - dimensional structures are much lower than in three - dimensional structures with a similar number of monomers . consequently , a lower supersaturation is required in order for the target structure to become kinetically accessible , and kinetic trapping is therefore less likely to interfere with accurate assembly .nevertheless , these calculations suggest that lower coordinated two - dimensional structures , such as the hexagonal lattice pictured in figure [ fig : two_dimensional_structures]a , might assemble more robustly in experiments .dna hybridization free energies are strongly temperature - dependent. in figure [ fig : hybridization_free_energies]a , we compare the two hybridization free - energy distributions used in the main text . the mean and the variance of the hybridization free energies of 8-nucleotide sequences are shown for both the case of randomly chosen sequences and the case of sequences selected to yield monodisperse bond energies. designed interactions occur between complementary sequences , while incidental interactions are calculated based on the most attractive overlapping regions of two non - complementary sequences . in the lattice monte carlo simulations ,all monomers on adjacent lattice sites experience a weak 100 k repulsion at all temperatures . in calculations involving designed interactions ,this repulsion is subtracted from the mean interaction strength .when estimating incidental interactions , we ignore associations between non - complementary sequences that have a maximum attractive interaction of less than 100 k. 
the means and variances reported in figure [ fig : hybridization_free_energies]a are thus calculated based on the fraction of pairs of non - complementary sequences that attract more strongly than 100 k ; this fraction is shown in figure [ fig : hybridization_free_energies]b .these incidental interaction distributions are clearly approximate and are defined in order to match the lattice monte carlo simulations .nevertheless , the choices made in defining these distributions have a negligible effect on the calculated values of and are irrelevant to the prediction of nucleation barriers and equilibrium yields .in order to apply this theoretical method to an experimental system , the designed interaction distributions should be recalculated in accordance with the experimental solution conditions .
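as a concrete illustration of the fragment bookkeeping used throughout ( connected subgraphs of the target connectivity graph grouped by their numbers of vertices and edges ) , the sketch below enumerates the fragments of a small toy graph by brute force and turns the counts into a crude free - energy profile . it is written for this text and is deliberately stripped down : the toy graph , the single uniform bond energy and the single concentration - like penalty per added monomer are all assumptions , and the rotational and dihedral entropy factors of the full treatment are omitted .

import itertools, math, collections

# toy connectivity graph: a 2 x 3 grid of "bricks" (6 vertices, 7 designed bonds); an assumption of this sketch
edges = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]
n_vertices = 6
adj = collections.defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def is_connected(vertices):
    vertices = set(vertices)
    start = next(iter(vertices))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w in vertices and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == vertices

# counts[(v, e)] = number of fragments (connected vertex subsets) with v vertices and e induced bonds
counts = collections.Counter()
for v in range(1, n_vertices + 1):
    for subset in itertools.combinations(range(n_vertices), v):
        if is_connected(subset):
            e = sum(1 for a, b in edges if a in subset and b in subset)
            counts[(v, e)] += 1

beta_eps = -4.0             # dimensionless bond free energy (attractive, hence negative); assumed value
log_conc = math.log(1e-3)   # dimensionless monomer concentration penalty per added monomer; assumed value

for v in range(1, n_vertices + 1):
    z = sum(n * math.exp(-beta_eps * e + (v - 1) * log_conc)
            for (vv, e), n in counts.items() if vv == v)
    print(f"{v} monomers: beta * F = {-math.log(z):6.2f}")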
the field of complex self - assembly is moving toward the design of multi - particle structures consisting of thousands of distinct building blocks . to exploit the potential benefits of structures with such ` addressable complexity , ' we need to understand the factors that optimize the yield and the kinetics of self - assembly . here we use a simple theoretical method to explain the key features responsible for the unexpected success of dna - brick experiments , which are currently the only demonstration of reliable self - assembly with such a large number of components . simulations confirm that our theory accurately predicts the narrow temperature window in which error - free assembly can occur . even more strikingly , our theory predicts that correct assembly of the complete structure may require a time - dependent experimental protocol . furthermore , we predict that low coordination numbers result in non - classical nucleation behavior , which we find to be essential for achieving optimal nucleation kinetics under mild growth conditions . we also show that , rather surprisingly , the use of heterogeneous bond energies improves the nucleation kinetics and in fact appears to be necessary for assembling certain intricate three - dimensional structures . this observation makes it possible to sculpt nucleation pathways by tuning the distribution of interaction strengths . these insights not only suggest how to improve the design of structures based on dna bricks , but also point the way toward the creation of a much wider class of chemical or colloidal structures with addressable complexity . recent experiments with short pieces of single - stranded dna have shown that it is possible to assemble well - defined molecular superstructures from a single solution with more than merely a handful of distinct building blocks . these experiments use complementary dna sequences to encode an addressable structure in which each distinct single - stranded ` brick ' belongs in a specific location within the target assembly . a remarkable feature of these experiments is that even without careful control of the subunit stoichiometry or optimization of the dna sequences , a large number of two- and three - dimensional designed structures with thousands of subunits assemble reliably. the success of this approach is astounding given the many ways in which the assembly of an addressable structure could potentially go wrong. any attempt to optimize the assembly yield or to create even more complex structures should be based on a better understanding of the mechanism by which dna bricks manage to self - assemble robustly . the existence of a sizable nucleation barrier , as originally proposed in refs . and , would remedy two possible sources of error that were previously thought to limit the successful assembly of multicomponent nanostructures : the depletion of free monomers and the uncontrolled aggregation of partially formed structures . slowing the rate of nucleation would suppress competition among multiple nucleation sites for available monomers and give the complete structure a chance to assemble before encountering other partial structures . recent simulations of a simplified model of a three - dimensional addressable structure have provided evidence of a free - energy barrier for nucleation, suggesting that the ability to control this barrier should enable the assembly of a wide range of complex nanostructures . 
we therefore need to be able to predict how such a barrier depends on the design of the target structure and on the choice of dna sequences . until now , however , there have been no reliable techniques to predict the existence , let alone the magnitude , of a nucleation barrier for self - assembly in a mixture of complementary dna bricks . here we show that the assembly of three - dimensional dna - brick nanostructures is indeed a nucleated process , but only in a narrow range of temperatures . the nucleation barrier in these systems is determined entirely by the topology of the designed interactions that stabilize the target structure . controllable nucleation is therefore a general feature of addressable structures that can be tuned through the rational choice of designed interactions . we find that the reliable self - assembly of three - dimensional dna bricks is a direct consequence of their unusual nucleation behavior , which is not accounted for by existing theories that work for classical examples of self - assembly , such as crystal nucleation . we are thus able to provide a rational basis for the rather unconventional protocol used in the recent dna - brick experiments by showing that they exploit a narrow window of opportunity where robust multicomponent self - assembly can take place .
despite a few important successes ( e.g. , bean et al . 2007 , and references therein ) , astrometric measurements with mas precision have so far proved of limited utility when employed as either a follow - up tool or to independently search for planetary mass companions orbiting nearby stars ( see for example sozzetti 2005 , and references therein ) . in several past exploratory works( casertano et al . 1996 ; lattanzi et al .1997 , 2000 ; sozzetti et al 2001 , 2003 ) , we have shown in some detail what space - borne astrometric observatories with - level precision , such as gaia ( perryman et al .2001 ) , can achieve in terms of search , detection and measurement of extrasolar planets of mass ranging from jupiter - like to earth - like . in those studies we adopted a qualitatively correct description of the measurements that each mission will carry out , and we estimated detection probabilities and orbital parameters using realistic , non - linear least squares fits to those measurements .those exploratory studies , however , need updating and improvements . in the specific case of planet detection and measurement with gaia, we have thus far largely neglected the difficult problem of selecting adequate starting values for the non - linear fits , using perturbed starting values instead .the study of multiple - planet systems , and in particular the determination of whether the planets are coplanar within suitable tolerances is incomplete .the characteristics of gaia have changed , in some ways substantially , since our last work on the subject ( sozzetti et al 2003 ) .last but not least , in order to render the analysis truly independent from the simulations , these studies should be carried out in double - blind mode .we present here a substantial program of double - blind tests for planet detection with gaia ( preliminary findings were recently presented by lattanzi et al .( 2005 ) ) , with the three - fold goal of obtaining : a ) an improved , more realistic assessment of the detectability and measurability of single and multiple planets under a variety of conditions , parametrized by the sensitivity of gaia ; b ) an assessment of the impact of gaia in critical areas of planet research , in dependence on its expected capabilities ; and c ) the establishment of several centers with a high level of readiness for the analysis of gaia observations relevant to the study of exoplanets .we carry out detailed simulations of gaia observations of synthetic planetary systems and develop and utilize in double - blind mode independent software codes for the analysis of the data , including statistical tools for planet detection and different algorithms for single and multiple keplerian orbit fitting that use no a priori knowledge of the true orbital parameters of the systems .overall , the results of our earlier works ( e.g. , lattanzi et al . 2000 ; sozzetti et al . 2001 , 2003 ) are essentially confirmed , with the fundamental improvement due to the successful development of independent orbital fitting algorithms applicable to real - life data that do not utilize any a priori knowledge of the orbital parameters of the planets . 
in particular , the results of the t1 test ( planet detection ) indicate that planets down to astrometric signatures , corresponding to times the assumed single - measurement error , can be detected reliably and consistently , with a very small number of false positives ( depending on the specific choice of the threshold for detection ) .the results of the t2 test ( single - planet orbital solutions ) indicate that : 1 ) orbital periods can be retrieved with very good accuracy ( better than 10% ) and small bias in the range yrs , and in this period range the other orbital parameters and the planet mass are similarly well estimated .the quality of the solutions degrades quickly for periods longer than the mission duration , and in particularly the fitted value of is systematically underestimated ; 2 ) uncertainties in orbit parameters are well understood ; 3 ) nominal uncertainties obtained from the fitting procedure are a good estimate of the actual errors in the orbit reconstruction . modest discrepancies between estimated and actual errors arise only for planets with extremely good signal ( errors are overestimated ) and for planets with very long period ( errors are underestimated ) ; such discrepancies are of interest mainly for a detailed numerical analysis , but they do not touch significantly the assessment of gaia s ability to find planets and our preparedness for the analysis of perturbation data .the results of the t3 test ( multiple - planet orbital solutions ) indicate that 1 ) over 70% of the simulated orbits under the conditions of the t3 test ( for every two - planet system , periods shorter than 9 years and differing by at least a factor of two , , ) are correctly identified ; 2 ) favorable orbital configurations ( both planet with periods yr and astrometric signal - to - noise ratio , redundancy of over a factor of 2 in the number of observations ) have periods measured to better than 10% accuracy of the time , and comparable results hold for other orbital elements ; 3 ) for these favorable cases , only a modest degradation of up to in the fraction of well - measured orbits is observed with respect to single - planet solutions with comparable properties ; 4 ) the overall results are mostly insensitive to the relative inclination of pairs of planetary orbits ; 5 ) over 80% of the favorable configurations have measured to better than 10 degrees accuracy , with only mild dependencies on its actual value , or on the inclination angle with respect to the line of sight of the planets ; 6 ) error estimates are generally accurate , particularly for fitted parameters , while modest discrepancies ( errors are systematically underestimated ) arise between formal and actual errors on .g dwarf primary at 200 pc , while the blue curves are for a 0.5- m dwarf at 25 pc .the radial velocity curve ( pink line ) is for detection at the level , assuming m s , , and 10-yr survey duration . for transit photometry ( green curve ) , milli - mag , , , , uniform and dense ( datapoints ) sampling .black dots indicate the inventory of exoplanets as of october 2007 .transiting systems are shown as light - blue filled pentagons .jupiter and saturn are also shown as red pentagons.,scaledwidth=75.0% ][ nplan ] in figure [ detmeas ] we show gaia s discovery space in terms of detectable and measurable planets of given mass and orbital separation around stars of given mass at a given distance from earth ( see caption for details ) . 
from the figure, one would then conclude that gaia could discover and measure massive giant planets ( ) with au orbiting solar - type stars as far as the nearest star - forming regions , as well as explore the domain of saturn - mass planets with similar orbital semi - major axes around late - type stars within 30 - 40 pc .these results can be turned into a number of planets detected and measured by gaia , using galaxy models and the current knowledge of exoplanet frequencies . by inspection of tables [ nplan ] and [ nmult ], we then find that gaia could measure accurately thousands of giant planets , and accurately determine coplanarity ( or not ) for a few hundred multiple systems with favorable configurations .in conclusion , gaia s main strength continues to be the ability to measure actual masses and orbital parameters for possibly thousands of planetary systems .the gaia data have the potential to a ) significantly refine our understanding of the statistical properties of extrasolar planets : the predicted database of several thousand extrasolar planets with well - measured properties will allow for example to test the fine structure of giant planet parameters distributions and frequencies , and to investigate their possible changes as a function of stellar mass with unprecedented resolution ; b ) help crucially test theoretical models of gas giant planet formation and migration : for example , specific predictions on formation time - scales and the role of varying metal content in the protoplanetary disk will be probed with unprecedented statistics thanks to the thousands of metal - poor stars and hundreds of young stars screened for giant planets out to a few aus ; c ) improve our comprehension of the role of dynamical interactions in the early as well as long - term evolution of planetary systems : for example , the measurement of orbital parameters for hundreds of multiple - planet systems , including meaningful coplanarity tests will allow to discriminate between various proposed mechanisms for eccentricity excitation ; d ) aid in the understanding of direct detections of giant extrasolar planets : for example , actual mass estimates and full orbital geometry determination for suitable systems will inform direct imaging surveys about where and when to point , in order to estimate optimal visibility , and will help in the modeling and interpretation of giant planets phase functions and light curves ; e ) provide important supplementary data for the optimization of the target selection for darwin / tpf : for example , all f - g - k - m stars within the useful volume ( pc ) will be screened for jupiter- and saturn - sized planets out to several aus , and these data will help probing the long - term dynamical stability of their habitable zones , where terrestrial planets may have formed , and maybe found .
in this paper , we first summarize the results of a large - scale double - blind test campaign carried out to realistically estimate gaia 's potential for detecting and measuring planetary systems . we then put the identified capabilities in context by highlighting the unique contribution that the gaia exoplanet discoveries will be able to bring to the science of extrasolar planets during the next decade .
models of cyclic dominance are traditionally employed to study biodiversity in biologically inspired settings .the simplest such model is the rock - paper - scissors game , where rock crashes scissors , scissors cut paper , and paper wraps rock to close the loop of dominance .the game has no obvious winner and is very simple , yet still , it is an adequate model that captures the essence of many realistic biological systems .examples include the mating strategy of side - blotched lizards , the overgrowth of marine sessile organisms , genetic regulation in the repressilator , parasitic plant on host plant communities , and competition in microbial populations .cyclical interactions may also emerge spontaneously in the public goods game with correlated reward and punishment , in the ultimatum game , and in evolutionary social dilemmas with jokers or coevolution .an important result of research involving the rock - paper - scissors game is that the introduction of randomness into the interaction network results in global oscillations , which often leads to the extinction of one species , and thus to the destruction of the closed loop of dominance that sustains biodiversity .more precisely , in a structured population where the interactions among players are determined by a translation invariant lattice , the frequency of every species is practically time - independent because oscillations that emerge locally can not synchronize and come together to form global , population - wide oscillations .however , if shortcuts or long - range interactions are introduced to the lattice , or if the original lattice is simply replaced by a small - world network , then initially locally occurring oscillations do synchronize , leading to global oscillations and to the accidental extinction of one species in the loop , and thus to loss of biodiversity .if the degree distribution of interaction graph is seriously heterogeneous , however , then such kind of heterogeneity can facilitate stable coexistence of competing species .interestingly , other type of randomness , namely the introduction of mobility of players , also promotes the emergence of global oscillations that jeopardize biodiversity .interestingly , however , although long - range interactions and small - world networks abound in nature , and although mobility is an inherent part to virtually all animal groups , global oscillations are rarely observed in actual biological systems . 
it is thus warranted to search for universal features in models of cyclic dominance that work in the opposite way of the aforementioned types of randomness .the questions is , what is the missing ingredient that would prevent local oscillations to synchronize across the population to form global oscillations ?preceding research has already provided some possible answers .for instance peltomki and alava observed that global oscillations do not occur if the total number of players is conserved .mobility , for example , then has no particular impact on biodiversity because oscillations are damped by the conservation law .however , the consequence of the conservation law does not work anymore if a tiny fraction of links forming the regular lattice is randomly rewired .zealots , on the other hand , have been identified as a viable means to suppress global oscillations in the rock - paper - scissors game in the presence of both mobility and interaction randomness .in addition to these examples , especially in the realm of statistical physics , there is a wealth of studies on the preservation and destruction of biodiversity in models of cyclic dominance . herewe wish to extend the scope of this research by considering a partly overlooked property , namely the consideration of site - specific heterogeneous invasion rates .importantly , we wish to emphasize an important distinction to species - specific heterogeneous invasion rates , which have been considered intensively before . in the latter case ,different pairs of species are characterized by different invasion rates , but these differences are then applied uniformly across the population . in case of spatially variable invasion rates ,these could be site - specific , and hence particular pairs of species may have different invasion rates even though they are of the same type .such a setup has many analogies in real life , ranging from differing resources , quality or quantity wise , to variations in the environment , all of which can significantly influence the local success rate of the governing microscopic dynamics . notably , this kind of heterogeneity was already studied in a two - species lotka - volterra - like system , and in a three - species cyclic dominant system where a lattice has been used as the interaction network .the latter work concluded that the invasion heterogeneity in spatial rock - paper - scissors models has very little effect on the long - time properties of the coexistence state . in this paper , we go beyond the lattice interaction topology , exploring the consequences of quenched and annealed randomness being present in the interaction network . in the latter case , as we will show , it could be a decisive how heterogeneity is introduced into the invasion rate because annealed randomness does not change the oscillation but quenched heterogeneity can mitigate the global oscillation effectively . inwhat follows , we first present the main results and discuss the implications of our research , while details concerning the model and the methodology are described in the methods section .we first consider results obtained with species - specific invasion rates . indeed ,it is possible to argue that it is too idealistic to assume homogenous invasion rates between different species , and that it would be more realistic to assume that these invasion rates are heterogeneous . but as results presented in fig . 
[suppressed ] show , this kind of generalization does not bring about a mechanism that would suppress global oscillations .these oscillations clearly emerge for homogeneous species - specific invasion rates , as soon as the fraction of rewired links of the square lattice exceeds a threshold .if we then assume that species - specific invasion rates are heterogeneous , say , , and ( here denotes the invasion rates of transition where runs from to in a cyclic manner ) , it can be observed that nothing really changes .in fact , the threshold in remains much the same , and the order parameter ( the area of the limit cycle in the ternary diagram ) reaches the same close to plateau it does when these invasion rates are homogenous .further along this line , we can even adopt invasion rates that are chosen uniformly at random from the unit interval at each particular instance of the games .more precisely , we still keep the original direction of invasion , but the strength of the invasion rate is chosen randomly in each particular case . but no matter the fact that this rather drastically modifies the microscopic dynamics , the presence of shortcuts will still trigger global oscillations ( marked random in fig . [ suppressed ] ) .we thus arrive at the same conclusion that was already pointed out in , which is that heterogeneous invasion reaction rates have very little effect on the dynamics and the long - time properties of the coexistence state . having established the ineffectiveness of heterogeneous species - specific invasion rates to prevent local oscillations to synchronize across the population to form global oscillations , we next consider site - specific heterogeneous interaction rates , denoted as and applied to each site . here determines the probability that a neighbor will be successful when trying to invade player according to the original rule . assuch , different values of influence the success of microscopic dynamics locally .moreover , these invasion rates are determined once at the start of the game and can be drawn from different distributions .the simplest case is thus to consider values drawn uniformly at random from the unit interval . as results in fig . [suppressed ] show ( see quenched random ) , this modification of the rock - paper - scissors game clearly blocks the emergence of global oscillations regardless of the value of .indeed , even if the square lattice is , through rewiring , transformed into a regular random graph , the order parameter still remains zero . even if the uniform distribution is replaced by a simple discrete double - peaked distribution ( practically it means that half of the players has for example ,while the other half retains ) , the global oscillations never emerge ( see quenched double in fig .[ suppressed ] ) .the coordination effect leading up to global oscillations is thus very effectively disrupted by heterogeneous site - specific invasion rates , and this regardless of the distribution from which these rates are drawn . 
to illustrate the dramatically contrasting consequences of different types of randomness, we show in fig .[ ternary ] representative time evolutions for both cases .the comparison reveals that , as deduced from the values of the order parameter displayed in fig .[ suppressed ] , time - varying invasion rates fail to suppress global oscillations , the emergence of which is supported by the small - world properties of the interaction network ( ternary diagram and the time course on the left ) .the limit cycle denoted black in the ternary diagram and the large - amplitude oscillations of the densities of species in the corresponding bottom panel clearly attest to this fact .this stationary state is robust and is reached independently of the initial mixture of competing strategies .conversely , quenched heterogeneous interaction rates drawn from a uniform distribution clearly suppress global oscillations ( ternary diagram and the time course on the right ) . here, the system will always evolve into the state , central point of the diagram , even if we launched the evolution from a biased initial state .thus , if heterogeneities are fixed in space , just like in several realistic biological systems , then this effectively prohibits global oscillation by disrupting the organization of a coordinated state , i.e. , synchronization of locally occurring oscillations across the population . as demonstrated previously , the type of randomness in the interaction network responsible for the emergence of global oscillations plays a negligible role .be it quenched through the one - time rewiring of a fraction of links forming the original translation invariant lattice , or be it annealed through the random selection of far - away players to replace nearest neighbors as targets of invasion with probability , there exist a critical threshold in both where global oscillations emerge if invasion rates are homogeneous .accordingly , it makes sense to test whether heterogeneous site - specific invasions rates are able to suppress such oscillations regardless of the type of randomness that supports them . to that effect , we make use of the discrete double - peaked distribution , where the fraction of sites having a lower invasion rate then the rest of the population at can be a free parameter determining the level of heterogeneity .evidently , at we retain the traditional rock - paper - scissors game with homogeneous invasion rates ( all sites have ) , while for the fraction of sites having , and thus the level of heterogeneity , increases . at the other extreme , for , we of course again obtain a homogeneous population where everybody has , but we do not explore this option since it is practically identical , albeit the evolutionary process is much slower . by introducing heterogeneity into the system gradually , we can monitor how it influences the stationary state . in fig .[ nu ] , we present representative results for both quenched and annealed randomness of interaction graph ( see legend ) .the first observation is that only a minute fraction of suppressed nodes ( less than ) suffices to fully suppress global oscillations , and this regardless of the applied high and values that practically ensure an optimal support for local oscillations to synchronize across the population into global oscillations .moreover , it can be observed that both transitions to the oscillation - free state are continuous . 
in other words , there does not exist a sharp drop in the value of at a particular value of .instead , the suppression of global oscillations is gradual as the level of site - specific invasion heterogeneity in the population increases .similar in spirit , another way to introduce invasion heterogeneity gradually into the population is to use a fixed fraction of nodes with a lower invasion rate , but vary the difference to .accordingly , we have a fraction of nodes , which instead of have the invasion rate . here becomes the free parameter , which for zero returns the traditional rock - paper - scissors game with homogeneous invasion rates , while for the distance in the peaks of the discrete double - peaked distribution , and thus the level of heterogeneity in the population , increases .representative results obtained with this approach are shown in fig .[ dif ] for both quenched and annealed randomness of interaction graph ( see legend ) . in comparison with results presented in fig .[ nu ] , it can be observed that increasing has somewhat different consequences than increasing . in the former case , when is small , the slight heterogeneity has no particular influence on the stationary state and global oscillations persist well beyond for annealed randomness and for quenched randomness .but if the difference reaches a sufficiently large value , global oscillations disappear in much the same gradual way as observed before in fig .[ nu ] , although the transition for annealed randomness is more sudden . to sum up our observations thus far , different versions of the same concept reveal that spatially quenched heterogeneity in site - specific invasion rates is capable to effectively suppress global oscillations that would otherwise be brought about by either annealed or quenched randomness in the interaction network .however , there is yet another possible source or large - amplitude global oscillations in the population , namely mobility .as is well - known , mobility can give rise to global oscillation that jeopardizes biodiversity .although subsequent research revealed that global oscillations due to mobility do not emerge if the total number of competing players is conserved , more recently it was shown that , if in addition to a conservation law also either quenched or annealed randomness is present in the interaction network , then mobility still induces global oscillations . in particular ,if the site exchange is intensive then only a tiny level of randomness in the host lattice suffices to evoke global oscillations .lastly , we thus verify if heterogeneity in site - specific invasion rates is able to suppress global oscillations brought about by mobility . as results in fig .[ mob ] show , the impact of quenched invasion heterogeneity is very similar to the above - discussed cases .it is worth noting that conceptually similar behavior can be observed when biological species are hosted in a turbulent flow of fluid environment .in fact , as a general conclusion , neither randomness in the interaction network nor the mobility of players can compensate for the detrimental impact of spatial invasion heterogeneity on global oscillations , thus establishing the latter as a very potent proponent of biodiversity in models of cyclic dominance .we have studied the impact of site - specific heterogeneous invasion rates on the emergence of global oscillations in the spatial rock - paper - scissors game . 
we have first confirmed that species - specific heterogeneous invasion rates , either fixed or varying over time , fail to disrupt the synchronization of locally emerging oscillations into a global oscillatory state on a regular small - world network . on the contrary, we have then demonstrated that site - specific heterogeneous invasion rates , determined once at the start of the game , successfully hinder the emergence of global oscillations and thus preserves biodiversity .we have shown this conclusion to be valid independently of the properties of the distribution that determines the invasion heterogeneity , specifically demonstrating the failure of coordination for uniformly and double - peak distributed site - specific invasion rates . moreover, our research has revealed that quenched site - specific heterogeneous invasion rates preserve biodiversity regardless of the type of randomness that would be responsible for the emerge of global oscillations . in particular , we have considered quenched and annealed randomness in the interaction network , as well as mobility .regardless of the type of randomness that would promote local oscillations to synchronize across the population to form global oscillations , site - specific heterogeneous invasion rates were always found to be extremely effective in suppressing the emergence of global oscillations .drawing from the colloquial expression used to refer to alcohol that is consumed with the aim of lessening the effects of previous alcohol consumption , the introduction of randomness in the form of site - specific heterogeneous invasion rates lessens , in fact fully suppresses , the effects of other types of randomness hence the `` hair of the dog '' phenomenon .our setup takes into account heterogeneities that are inherently present in virtually all uncontrolled environments , ranging from bacterial films to plant communities .examples include qualitative and quantitative variations in the availability of nutrients , local differences in the habitat , or any other factors that are likely to influence the local success rate of the governing dynamics .the consideration of site - specific heterogeneous invasion rates and their ability to suppress global oscillations joins the line of recent research on the subject , showing for example that the preservation of biodiversity is promoted if a conservation law is in place for the total number of competing players , or if zealots are introduced to the population .notably , previously it was shown that zealotry can have a significant impact on the segregation in a two - state voter model , and research in the realm of the rock - paper - scissors game confirmed such an important role of this rather special uncompromising behavior .in general , since global , population - wide oscillations are rarely observed in nature , it is of significance to determine key mechanisms that may explain this , especially since factors that promote such oscillations , like small - world properties , long - range interactions , or mobility , are very common . 
in this sense ,site - specific heterogeneous invasion rates fill an important gap in our understanding of the missing ingredient that would prevent local oscillations to synchronize across the population to form global oscillations .there is certainly no perfect spatial system where microscopic processes would unfold identically across the whole population .these imperfections are elegantly modeled by the heterogeneous host matrix that stores the individual invasion rates of each player .as we have shown , the coordination of species evolution is highly sensitive on such kind of heterogeneities when they are fixed in space .ultimately , this prevents the synchronization of locally emerging oscillations , and gives rise to a `` hair of the dog''-like phenomenon , where one type of randomness is used to mitigate the adverse effects of other types of randomness .we hope that these theoretical explorations will help us to better understand the rare emergence of global oscillations in nature , as well as inspire further research , both experimental and theoretical , along similar lines .the spatial rock - paper - scissors game evolves on a square lattice with periodic boundary conditions , where each site is initially randomly populated by one of the three competing species . for convenience ,we introduce the notation , where runs from to in a cyclic manner .hence , species ( for example paper ) invades species ( rock ) , while species invades species ( scissors ) , which in turn invades species to close the loop of dominance .the evolution of species proceeds in agreement with a random sequential update , where during a full monte carlo step ( ) we have chosen every site once on average and a neighbor randomly . in case of different playersthe invasion was executed according to the rock - scissors - paper rule with probability . in the simplest , traditional version of the game , all invasion rates between species are equal to .species - specific heterogeneous invasion rates can be introduced through the parameter , which is simply the probability for the invasion to occur when given a chance .the values , and can be determined once at the start of the game , or they can be chosen uniformly at random from the unit interval at each particular instance of the game . on the other hand , site - specific heterogeneous invasion rates , which we denote as ,apply to each site in particular , and determine the probability that a neighbor will be successful when trying to invade player according to the rule .this rule can be considered as `` prey - dependent '' because the value at the prey s position determines the probability of invasion . as an alternative rule, we can consider the value of predator s position that determines the invasion probability .lastly , we can assume that the values of both the predator and prey s positions influence the invasion rate via their product . while the time dependence of the evolution will be different in the mentioned three cases but the qualitative behavior is robust .therefore we restrict ourself to the first mentioned `` prey - dependent '' rule .these invasion rates are determined once at the start of the game and can be drawn uniformly at random from the unit interval , or from any other distribution . here ,in addition to uniformly distributed , we also consider site - specific heterogeneous invasion rates drawn from a discrete double - peaked distribution , where a fraction of sites have , while the remaining have . 
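a minimal python sketch of the update rule just described is given below : one full monte carlo step of random sequential invasions on a periodic square lattice , using the "prey - dependent" rule with quenched site - specific invasion rates drawn from the discrete double - peaked distribution ( a fraction of sites with a reduced rate w , the rest with rate 1 ) . the lattice size , the values of the fraction and of w , and the cyclic direction convention ( species k invades species k+1 mod 3 ) are illustrative choices introduced here , not the parameters used in the reported simulations .

```python
import numpy as np

rng = np.random.default_rng(0)
L = 100                                    # illustrative linear lattice size
species = rng.integers(0, 3, size=(L, L))  # species 0, 1, 2 in cyclic dominance

# quenched site-specific invasion rates, double-peaked example:
# a fraction nu of sites can only be invaded with rate w, the rest with rate 1
nu, w = 0.05, 0.1
h = np.ones((L, L))
h[rng.random((L, L)) < nu] = w
# alternative: h = rng.random((L, L)) for uniformly distributed site-specific rates

def mc_step(species, h):
    """one full monte carlo step: L*L random sequential invasion attempts."""
    n = species.shape[0]
    for _ in range(n * n):
        x, y = rng.integers(0, n, 2)
        dx, dy = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(0, 4)]
        nx, ny = (x + dx) % n, (y + dy) % n            # periodic boundaries
        s, t = species[x, y], species[nx, ny]
        if s == t:
            continue
        # cyclic rule (convention chosen here): s invades t if t == (s + 1) mod 3,
        # with success probability given by the quenched rate at the prey's site
        if t == (s + 1) % 3 and rng.random() < h[nx, ny]:
            species[nx, ny] = s
        elif s == (t + 1) % 3 and rng.random() < h[x, y]:
            species[x, y] = t
    return species

for step in range(50):
    species = mc_step(species, h)
print("species densities after 50 MCS:",
      np.round([np.mean(species == k) for k in range(3)], 3))
```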
to test the impact of site - specific heterogeneous invasion rates under different circumstances , we consider interaction randomness in the form of both quenched and annealed randomness. quenched randomness is introduced by randomly rewiring a fraction of the links that form the square lattice whilst preserving the degree of each site .this is done only once at the start of the game. this procedure returns regular small - world networks for small values of and a regular random network in the limit .annealed randomness , on the other hand , is introduced so that at each instance of the game a potential target for an invasion is selected randomly from the whole population with probability , while with probability the invasion is restricted to a randomly selected nearest neighbor .this procedure returns well - mixed conditions for , while for only short - range invasions as allowed by the original square lattice are possible .we also consider the impact of site - specific heterogeneous invasion rates in the presence of mobility .the latter is implemented so that during each instance of the game we choose a nearest - neighbor pair randomly where players exchange their positions with probability .oppositely , with probability , the dominant species in the pair invades the other in agreement with the rules or the rock - paper - scissors game .the parameter hence determines the intensity of mobility while the number of players is conserved .technically , however , the strategy exchange between neighboring players is determined not only by the level of mobility , but it also depends on the individual and values characterizing the neighboring sites and . in this way , we can consider the fact that different sites may be differently sensitive to the change of strategy , and the success of mutual change is then practically determined by the site that is more reluctant to change its state .accordingly , when the strategy exchange is supposed to be executed , then this happens only with the probability that is equal to the smaller of and values ( all the other details of the model remain the same as above ) .global oscillations are characterized with the order parameter , which is defined as the area of the limit cycle in the ternary diagram .this order parameter is zero when each species occupies one third of the population , and becomes one when the system terminates into an absorbing , single - species state .we have used lattices with up to sites , which was large enough to avoid accidental fixations when the amplitude of oscillations was large , and which allowed an accurate determination of strategy concentrations that are valid in the large population size limit .naturally , the relaxation time depends sensitively on the model parameters and the system size , but mcs was long enough even for the slowest evolution that we have encountered during this study .
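the order parameter introduced above can be evaluated directly from a recorded trajectory of the three species densities . the sketch below projects the composition onto the two - dimensional simplex and applies the shoelace formula to a closed late - time orbit , normalizing by the area of the full triangle so that the value runs from 0 ( symmetric fixed point ) to 1 ( absorbing boundary ) ; the trajectory used here is synthetic , and this normalization convention is one reasonable choice rather than necessarily the exact one used in the figures .

```python
import numpy as np

def ternary_xy(rho):
    """project densities (rho_0, rho_1, rho_2), summing to 1, onto the 2-d simplex."""
    r1, r2 = rho[..., 1], rho[..., 2]
    x = 0.5 * (2.0 * r1 + r2)
    y = (np.sqrt(3.0) / 2.0) * r2
    return np.stack([x, y], axis=-1)

def limit_cycle_area(trajectory):
    """shoelace area of a closed density orbit, normalized by the full triangle area."""
    xy = ternary_xy(np.asarray(trajectory))
    x, y = xy[:, 0], xy[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return area / (np.sqrt(3.0) / 4.0)   # unit-side triangle has area sqrt(3)/4

# synthetic example: a closed oscillation around the symmetric point (1/3, 1/3, 1/3)
phi = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
r = 0.15
rho = np.stack([1/3 + r * np.cos(phi),
                1/3 + r * np.cos(phi - 2 * np.pi / 3),
                1/3 + r * np.cos(phi - 4 * np.pi / 3)], axis=-1)

print("normalized limit-cycle area:", round(limit_cycle_area(rho), 4))
```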
global , population - wide oscillations in models of cyclic dominance may result in the collapse of biodiversity due to the accidental extinction of one species in the loop . previous research has shown that such oscillations can emerge if the interaction network has small - world properties and , more generally , because of long - range interactions among individuals or because of mobility . but although these features are all common in nature , global oscillations are rarely observed in actual biological systems . this begs the question : what is the missing ingredient that prevents local oscillations from synchronizing across the population into global oscillations ? here we show that , although heterogeneous species - specific invasion rates fail to have a noticeable impact on species coexistence , randomness in site - specific invasion rates successfully hinders the emergence of global oscillations and thus preserves biodiversity . our model takes into account that the environment is often not uniform but rather spatially heterogeneous , which may influence the success of the microscopic dynamics locally . this prevents the synchronization of locally emerging oscillations , and ultimately results in a phenomenon where one type of randomness is used to mitigate the adverse effects of other types of randomness in the system .
due to the exploding popularity of all things wireless , the demand for wireless data traffic increases dramatically . according to a cisco report, global mobile data traffic will increase 13-fold between 2012 and 2017 .this dramatic demand puts on pressure on mobile network operators ( mnos ) to purchase more spectrum .however , wireless spectrum is a scarce resource for mobile services . even if the continued innovations in technological progress relax this constraint as it provides more capacity and higher quality of service ( qos ) , the shortage of spectrum is still the bottleneck when the mobile telecommunications industry is moving toward wireless broadband services . to achieve a dominant position for future wireless services , thus, it is significant how new spectrum is allocated to mnos .since the spectrum is statically and infrequently allocated to an mno , there has been an ongoing fight over access to the spectrum . in south korea , for example , the korea communications commission ( kcc ) planed to auction off additional spectrum in both 1.8 ghz and 2.6 ghz bands .the main issue was whether korea telecom ( kt ) acquires the contiguous spectrum block or not .due to the kt s existing holding downlink 10 mhz in the 1.8 ghz band , it could immediately double the existing long term evolution ( lte ) network capacity in the 1.8 ghz band at little or no cost .this is due to the support of the downlink up to 20 mhz contiguous bandwidth by lte release 8/9 . to the user side, there is no need for upgrading their handsets .lte release 10 ( lte - a ) can support up to 100 mhz bandwidth but this requires the carrier aggregation ( ca ) technique , for which both infrastructure and handsets should be upgraded . if kt leases the spectrum block in the 1.8 ghz band , kt might achieve a dominant position in the market . on the other hand , other mnos expect to make heavy investments as well as some deployment time to double their existing lte network capacities compared to kt .thus , the other mnos requested the government to exclude kt from bidding on the contiguous spectrum block to ensure market competitiveness .although we consider the example of south korea , this interesting but challenging issue on spectrum allocation is not limited to south korea but to most countries when asymmetric - valued spectrum blocks are auctioned off to mnos .spectrum auctions are widely used by governments to allocate spectrum for wireless communications .most of the existing auction literatures assume that each bidder ( i.e. , an mno ) only cares about his own profit : what spectrum block he gets and how much he has to pay . given spectrum constraints , however , there is some evidence that a bidder considers not only to maximize his own profit in the event that he wins the auction but to minimize the weighted difference of his competitor s profit and his own profit in the event that he loses the auction .this strategic concern can be interpreted as a _spite motive _ , which is the preference to make competitors worse off .since it might increase the mno s relative position in the market , such concern has been observed in spectrum auctions . in this paper, we study bidding and pricing competition between two competing / spiteful mnos with considering their existing spectrum holdings . 
giventhat asymmetric - valued spectrum blocks are auctioned off to them , we developed an analytical framework to investigate the interactions between two mnos and users as a three - stage dynamic game .in tage i , two spiteful mnos compete in a first - price sealed - bid auction .departing from the standard auction framework , we address the bidding behavior of the spiteful mno . in tage ii , two competing mnos optimally set their service prices to maximize their revenues with the newly allocated spectrum . in tage iii , users decide whether to stay in their current mno or to switch to the other mno for utility maximization .our results are summarized as follows : * _ asymmetric pricing structure _ :we show that two mnos announce different equilibrium prices to the users , even providing the same quality in services to the users . * _ different market share _ : we show that the market share leader , despite charging a higher price , still achieve more market share .* _ impact of competition _ : we show that the competition between two mnos leads to some loss of their revenues . * _ cross - over point between two mno s profits _ : we show that two mnos profits are switched .the rest of the paper is organized as follows : related works are discussed in ection ii . the system model and three - stage dynamic gameare described in ection iii . using backward induction, we analyze user responses and pricing competition in ections vi and v , and bidding competition in ection vi .we conclude in section ii together with some future research directions .in wireless communications , the competition among mnos have been addressed by many researchers .yu and kim studied price dynamics among mnos .they also suggested a simple regulation that guarantees a pareto optimal equilibrium point to avoid instability and inefficiency .niyato and hossain proposed a pricing model among mnos providing different services to users .however , these works did not consider the spectrum allocation issue .more closely related to our paper are some recent works .the paper studied bandwidth and price competition ( i.e. , bertrand competition ) among mnos . by taking into account mnos heterogeneity in leasing costs and users heterogeneity in transmission power and channel conditions , duan _et al_. presented a comprehensive analytical study of mnos spectrum leasing and pricing strategies in . in ,a new allocation scheme is suggested by jointly considering mnos revenues and social welfare .x. feng _ et al ._ suggested a truthful double auction scheme for heterogeneous spectrum allocation .none of the prior results considered mnos existing spectrum holdings even if the value of spectrum could be varied depending on mnos existing spectrum holdings .we consider two mnos ( and ) compete in a first - price sealed - bid auction , where two spectrum blocks and are auctioned off to them as shown in ig .1 . note that and are the same amount of spectrum ( i.e. , 10 mhz spectrum block ) . without loss of generality ,we consider only the downlink throughput the paper .note that both mnos operate frequency division duplex lte ( fdd lte ) in the same area . due to the mnos existing spectrum holdings ( i.e. , each mno secures 10 mhz downlink spectrum in the 1.8 ghz band ) ,the mnos put values on spectrum blocks and asymmetrically . if mno leases , twice ( 2x ) improvements in capacity over his existing lte network capacity are directly supported to users . 
in third generation partnership project ( 3gpp )lte release 8/9 , lte carriers can support a maximum bandwidth of 20 mhz for both in uplink and downlink , thereby allowing for mno to provide double - speed lte service to users without making many changes to the physical layer structure of lte systems . on the other hand , mno who leases should make a huge investment to double the capacity after some deployment time . without loss of generality, we assume that mno leases . to illustrate user responses , we define the following terms as follows . * definition 1 . * ( asymmetric phase ) _ assume that mno launches double - speed lte service at time .when , we call this period asymmetric phase due to the different services provided by mnos and . _ * definition 2 . * ( symmetric phase ) _ assume that denotes the expiration time for the mnos new spectrum rights .when , we call this period symmetric phase because of the same services offered by mnos and ._ we investigate the interactions between two mnos and users as a three - stage dynamic game as shown in ig .2 . in tagei , two spiteful mnos compete in a first - price sealed - bid auction where asymmetric - valued spectrum blocks and are auctioned off to them .the objective of each mno is maximizing his own profit when is assigned to him , as well as minimizing the weighted difference of his competitor s profit and his own profit when is allocated to him . in tage ii, two competing mnos optimally announce their service prices to maximize their revenues given the result of tage i. the analysis is divided into two phases : asymmetric phase and symmetric phase . in tage iii ,users determine whether to stay in their current mno or to switch to the new mno for utility maximization .to predict the effect of spectrum allocation , we solve this three - stage dynamic game by applying the concept of backward induction , from tage iii to tage i.each user subscribes to one of the mnos based on his or her mno preference .let us assume that mnos and provide same quality in services to the users so they have the same reserve utility before spectrum auction .each mno initially has 50% market share and the total user population is normalized to 1 . in asymmetric phase ,the users in mnos and obtain different utilities , i.e. , where is a user sensitivity parameter to the double - speed lte service than existing one .it means that users care more about the data rate as increases .the users in mno have more incentive to switch to mno as increases . when they decide to change mno , however , they face switching costs , the disutility that a user experiences from switching mnos . in the case of higher switching costs ,the users in mno have less incentive to switch .the switching cost varies among users and discounts over time . to model such users time - dependent heterogeneity ,we assume that the switching cost is heterogeneous across users and uniformly distributed in the interval ] is a parameter called the spite ( or competition ) coefficient . _ as noted , mno is self - interested and only tries to maximize his own profit when .when , mno is completely malicious and only attempts to obtain more market share by forcing mno to lease the less - valued spectrum block .for given ] , we can derive the optimal bidding strategies that maximize the objective function in efinition 3 as follows .* proposition 3 . * _ in a first - price sealed - bid auction , the optimal bidding strategy for a spiteful mno and is : = 0.5mu=0.5mu=0.5mu _ * proof . 
* without loss of generality , suppose that mno knows his bid .further , we assume that mno infer that the bidding strategy of mno on is drawn uniformly and independently from $ ] .the mno s optimization problem is to choose to maximize the expectation of = 0.5mu=0.5mu=0.5mu } { \rm { } } f(b_j ) db_j \nonumber \\ & & + \int\limits_{b_i } ^{r^a ( t_1 , t_2 ) } { \left [ { ( 1 - \alpha _ i ) ( \pi ^b ( t_1 , t_2 ) ) - \alpha _ i ( r^a ( t_1 , t_2 ) - b_j ) } \right ] } { \rm { } } f(b_j ) db_j . \nonumber \\\end{aligned}\ ] ] differentiating equation ( 25 ) with respect to , setting the result to zero and multiplying by give = 5mu=5mu=5mu since the same analysis can be applied to the mno , the proof is complete . roposition 3 states that the mnos equilibrium bidding strategies .intuitively , the more spiteful the mno is , the more aggressively the mno tends to bid . for consistency , we assume that . then we can now calculate mno s profit and mno s profit as follows = 2mu=2mu=2mu where is calculated by substraction of the bidding price of ( 24 ) from of ( 19 ) .under two different costs ( , ) .other parameters are , , , , , , and .,width=326 ] under two different spite coefficients ( , ) .other parameters are , , , , , , and .,width=326 ] to get some insight into the properties of the mnos equilibrium profits , let us define is different from of ( 19 ) where is the revenue gain from relative to without considering any cost .] , which can be interpreted as the profit gain from relative .when , the profit of mno is higher than that of mno .it implies that mno could gain a competitive advantage over mno in both market share and profit . when , the situation is reversed .mno could take the lead in the profit despite losing some market share to mno .if the role of the government is to ensure fairness in two mnos profits , the government may devise two different schemes : setting appropriate reserve prices and imposing limits on the timing of the double - speed lte services . according to the ofcom report ,setting the reserve prices closer to market value might be appropriate .it indicates that the government set and by estimating the value asymmetries between spectrum blocks and ( i.e. , ) and the spite coefficient .ig . 6 shows the profit gain as a function of under two different reserve prices for ( i.e. , , ) .for example , if , the government should set the reserve prices , . on the other hand, the government should set the reserve prices , when .besides setting appropriate reserve prices , the government can impose limits on the timing of the double - speed lte service . in south korea , for instance, korea telecom ( kt ) who acquired the continuous spectrum spectrum is allowed to start its double - speed lte service on metropolitan areas immediately in september 2013 , other major cities staring next march , and nation - wide coverage starting next july .this scheme implies to reduces by limiting the timing of the double - speed lte service to the mno who acquires spectrum block .7 shows the profit gain as a function of under two different spite coefficients ( i.e. , , ) .in this paper , we study bidding and pricing competition between two spiteful mnos with considering their existing spectrum holdings .we develop an analytical framework to investigate the interactions between two mnos and users as a three - stage dynamic game . using backward induction, we characterize the dynamic game s equilibria . 
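as a purely numerical illustration of the spiteful bidding analysis above , the sketch below computes a bidder 's best response on a grid of bids against an opponent whose bid is assumed uniform on an interval , under an assumed spiteful payoff ( win : a share 1 - alpha of the own surplus ; lose : minus alpha times the opponent 's surplus ) . this payoff specification and all numbers are placeholders introduced here for illustration and are not the exact objective of definition 3 .

```python
import numpy as np

def expected_spiteful_utility(b, alpha, v_own, v_opp, b_max, n_grid=4001):
    """expected objective of a spiteful bidder in a first-price sealed-bid auction,
    assuming the opponent's bid is uniform on [0, b_max]:
      win  -> (1 - alpha) * (v_own - b)
      lose -> -alpha * (v_opp - b_opp)   (spite: disutility from the opponent's surplus)"""
    b_opp = np.linspace(0.0, b_max, n_grid)
    win = b_opp < b
    payoff = np.where(win, (1.0 - alpha) * (v_own - b), -alpha * (v_opp - b_opp))
    return payoff.mean()

v_own, v_opp, b_max = 1.0, 1.0, 1.0
bids = np.linspace(0.0, 1.0, 2001)
for alpha in (0.0, 0.25, 0.5):
    util = [expected_spiteful_utility(b, alpha, v_own, v_opp, b_max) for b in bids]
    print(f"alpha = {alpha:.2f}: best-response bid on the grid = "
          f"{bids[int(np.argmax(util))]:.3f}")
```

consistent with the intuition behind proposition 3 , the grid search shows the best - response bid increasing with the spite coefficient , i.e. , more spiteful bidders bid more aggressively .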
from this, we show the asymmetric pricing structure and different market share between two mno .perhaps counter - intuitively , our results show that the mno who acquires the less - valued spectrum block always lowers his price despite providing double - speed lte service to users .we also show that the mno who acquires the high - valued spectrum block , despite charging a higher price , still achieves more market share than the other mno .we further show that the competition between two mnos leads to some loss of their revenues . with the example of south korea, we investigate the cross - over point at which two mnos profits are switched , which serves as the benchmark of practical auction designs .results of this paper can be extended in several directions . extending this work, it would be useful to propose some methodologies for setting reserve prices , .second , we could consider an oligopoly market where multiple mnos initially have different market share before spectrum allocation , where our current research is heading .m. shi , j. chiang , and b .- d .price competition with reduced consumer switching costs : the case of `` wirelss number portability '' in the cellular phone industry , " , vol .1 , pp . 2738 , 2006 .dotecon and aetha , spectrum value of 800mhz , 1800mhz , and 2.6ghz , " a dotecon and aetha report , jul . 2012 .available : http://stakeholders.ofcom.org.uk/binaries/consultations/award800mhz/ statement / spectrum value.pdf .
we study bidding and pricing competition between two spiteful mobile network operators ( mnos ) , taking into account their existing spectrum holdings . given that asymmetric - valued spectrum blocks are auctioned off to them via a first - price sealed - bid auction , we investigate the interactions between the two spiteful mnos and users as a three - stage dynamic game and characterize the dynamic game 's equilibria . we show that an asymmetric pricing structure and different market shares emerge between the two spiteful mnos . perhaps counter - intuitively , our results show that the mno who acquires the less - valued spectrum block always lowers his service price despite providing double - speed lte service to users . we also show that the mno who acquires the high - valued spectrum block , despite charging a higher price , still achieves more market share than the other mno . we further show that the competition between the two mnos leads to some loss of their revenues . finally , we investigate the cross - over point at which the two mnos ' profits are switched , which serves as a benchmark for practical auction designs .
unlike water , a layer of sand will not flow unless its surface is inclined beyond a characteristic angle , known as the maximum angle of stability .this simple fact translates into a host of threshold phenomena wherever granular material is found .many such phenomena play a crucial role in the erosion of earth s surface , and very likely manifest themselves in the richness of the patterns exhibited by drainage networks . depending on geological , hydrological , and climatological properties , erosion by wateris mainly driven either by overland flow or subsurface flow .the former case occurs when the shear stress imposed by a sheet flow exceeds a threshold .erosion in the latter case known as seepage erosion , or sapping occurs when a subsurface flow emerges on the surface . herethe eroding stresses derive not only from the resulting sheet flow but also the process of seepage itself .the onset of erosion for both overland flow and seepage is threshold - dependent , but the additional source of stress in the case of seepage has the potential to create significantly different erosive dynamics . herewe study the seepage case . whereas the case of horton overland flow has been extensively studied ,seepage erosion has received less attention . suggests that erosive stresses due to seepage are more widespread in typical environments than commonly assumed .he also provides a detailed description of seepage erosion in the field , together with a discussion of the various factors that influence its occurrence .another focus of attention has been the controversial possibility that many erosive features on mars appear to have resulted from subsurface flows .although the importance of seepage stresses in erosion have been realized by and , comprehensive quantitative understanding is difficult to obtain .the complexity arises from the interdependent motion of the sediment and fluid the `` two - phase phenomenon '' which , of course , is common to _ all _ problems of erosion . 
to further understand seepage erosion ,we proceed from experiments .questions concerning the origin of ancient martian channels have motivated considerable experimental work in the past .the process of seepage erosion has also been studied as an example of drainage network development .our experiments , following those of and others , are designed to enable us to construct a predictive , quantitative theory .consequently , they stress simplicity and completeness of information .although our setup greatly simplifies much of nature s complexity , we expect that at least some of our conclusions will improve general understanding , and therefore be relevant to real , field - scale problems .a previous paper by provided a qualitative overview of the phenomenology in our experiment .it described the main modes of sediment mobilization : channelization , slumping , and fluidization .here we provide quantitative understanding of the onset and transitions between these modes .our emphasis is on the threshold phenomena associated with the onset of erosion , which we will ultimately characterize in the same way that others have characterized the onset of dry granular flow beyond the maximum angle of stability .this involves a construction of a generalized shields criterion valid in the presence of seepage through an inclined surface .a major conclusion is that the onset of erosion driven by seepage is significantly different from the onset of erosion driven by overland flow .we find that there is a critical slope , significantly smaller than the maximum angle of stability , above which the threshold disappears .therefore any slope greater than is unstable to erosion if there is seepage through it .this result is similar to well - known conclusions for the stability to frictional failure of slopes with uniform seepage .an important distinction in our work , however , concerns the mode of sediment mobilization and its local nature . the existence of the critical slope for seepage erosion may provide a useful quantitative complement to the qualitative distinctions between seepage and overland flow that have already been identified . the remaining modes of sediment mobilization , fluidization and slumping , are modeled using well established ideas .the result of applying these ideas together with the generalized shields criterion provides a theoretical prediction of the outcomes of the experiment , i.e. 
, a phase diagram .agreement between theory and experiment is qualitative rather than quantitative .we nevertheless believe that our theoretical approach is fundamentally sound and that better agreement would follow from improved experimental procedures .in our experimental setup , first introduced by , a pile of identical cohesionless glass beads mm in diameter is saturated with water and compacted to create the densest possible packing .it is then shaped into a trapezoidal wedge inclined at an angle with slope as shown in fig.[fig : expt ] .the downslope length of the wedge is cm , its width across the slope is cm , and its height in the middle is approximately cm .water enters the sandpile underneath through a fine metal mesh and exits at the lower end of the pile through the same kind of mesh .a constant head at the inlet is maintained by keeping a constant water level in the reservoir behind the sandbox with the help of an outflow pipe .the slope of the pile and the water level are the control parameters of the experiment .the degree of packing of the granular pile is the variable most difficult to control .our particular method of feeding water into the sandpile , similar to that of , can be motivated in three ways .the most important justification is the fact that the amount of water flowing on the surface can be finely controlled in our geometry .this feature is essential in probing the onset of erosion .second , our setup allows us to access heads larger than the height of the pile , which therefore allows us to explore dynamic regimes unavailable if water enters the pile through a mesh in the back .third , a similar seepage water flow geometry can exist in the field wherever water travels beneath an impermeable layer that terminates .we have performed two types of experiments : steady and non - steady . for a fixed water level and in absence of sediment motion ,water flow reaches steady state . by monitoring the total water flux through the systemwe estimate the time to reach steady state to be approximately ten minutes . to explore the onset of sediment motion , we raised the water level in small increments , waiting each time for steady state to be established . due to the particular shape of the bulk flow in our experiment ,surface flow exists over a finite region of the surface .the width of this seepage face and therefore the depth of the surface flow can be tuned by changing . because of the finite extent of surface flow , its depth and therefore the viscous shear stress reaches a maximum at a certain location .thus , by increasing we can continuously tune the maximum shear stress experienced by the surface grains .the maximum shear stress reaches a critical value for the onset of sediment motion in a certain location on the slope .as we show below , we can compute where the maximum shear stress occurs and thus can reliably detect the onset of sediment motion visually because we focus our attention on this location . once sediment begins to move, channels form almost immediately .these channels grow in length , width , and depth .an example of the evolving channel network is shown in fig.[fig : channels ] .depending on the slope , as the channels deepen , the pile becomes unstable to fluidization or slumping . for slopes lower than approximately 0.05 ,the fluidization threshold is reached before sediment is mobilized on the surface .m. 
the slope of the pile is and the water level cm.,width=316 ] we also explored the non - steady evolution of the bulk and surface water flow and resulting sediment motion by raising the water level to some higher value from zero . in this caseone of three things can happen .the pile can be fluidized within a few seconds or fail by slumping as shown in fig.[fig : slump ] .if this does not occur , the water emerges on the surface just above the inlet .a sheet of water then washes down the slope of the pile . during this initial wash , sediment is mobilized and incipient channels form .these channels grow during subsequent relaxation of the bulk water flow towards steady state . because of the initial wash s erosive power , channels are able to form and grow for lower water pressures than in steady experiments . angle to the slope after the water flow has been stopped .the width of the imaged region is approximately m .slumping happens along a convex upward arc which looks darker because it is deeper and therefore wetter.,width=316 ] outcomes of a large number of non - steady experiments and several steady experiments for varying slope and the water level are summarized in the phase diagram in fig.[fig : phase ] .each symbol in the plot represents one experiment .the sediment is either immobile ( stable seepage ) , or it is mobilized on the surface where channels form ( channelization ) or in the bulk ( slumping or fluidization ) . in several experiments ,slumping or fluidization happened after channels formed and grew . in the following sections we describe the computations that allow us to construct the theoretical boundaries between the three different modes of sediment mobilization in our experiment . ;those that produced channels are indicated by ; and those that produced fluidization and/or slumping within one hour of the beginning of the experiment are represented by .the straight line and gray - shaded curves are theoretical predictions for the boundaries separating the four regions indicated by their labels .the thickness of the lines indicates uncertainty in the theory .the boundary between the uneroded and channelized states is reasonably well approximated by our theory .the theoretical boundaries for fluidization and slumping , however , appear to overestimate the critical water level , possibly as a result of inhomogeneities , dynamic changes in the sandpile s shape , or from the assumption of a steady state.,width=316 ]whereas steady - state flow can be readily characterized quantitatively , non - steady flow characterization requires knowledge of the water - table dynamics .however , the theory of the water - table dynamics is less well established than that of the flow through the bulk of a porous medium . also , our steady - state experiments probe all aspects of sediment dynamics .we can therefore focus on the quantitative characterization of the steady - state flow . to study the onset of erosion quantitatively we need to be able to establish a correspondence between the experimentally measurable quantities such as the slope , the water level , the size of the seepage face , and the water fluxes .the seepage and surface fluxes are the most difficult to measure . in this sectionwe set up their computation .the computation is designed to enable us to infer water fluxes indirectly by measuring the size of the seepage face . 
in the following sections we will use this computation to quantify the onset of erosion and to compute the slumping and liquefaction boundaries of the channelization phase diagram shown in fig.[fig : phase ] .fig.[fig : expt ] specifies the key quantities and coordinate systems we use in computing the fluxes .the flow profile is independent of the -coordinate across the slope of the sandpile except near the side walls of the box .we therefore treat the box as if it were infinitely wide .flow is then two - dimensional and the specific discharge vector is in the - plane .we will use two coordinate systems . as shown in fig.[fig : expt ] , the coordinate is measured vertically from the bottom of the box while the coordinate is the normal distance away from the surface of the pile .the flow is governed by darcy s law , where is the scalar hydraulic conductivity , and is the total hydraulic head of the pore water .both and have units of velocity while the scaled pore pressure has units of length . here is the density of water and is the magnitude of the acceleration of gravity .we have measured via a -tube relaxation experiment .to do so we created a water level difference between the two arms of a transparent -shaped tube of width partially filled with glass beads .the rate of change of is given by . by measuring the rate of change of we deduced the value of the hydraulic conductivity mm / s ( ) .hydraulic conductivity is sensitive to the packing of the grains and is the variable most difficult to control in our experiment .water incompressibility implies , therefore yielding laplace s equation , to compute the pore pressure , boundary conditions must be specified .the walls of the box are impenetrable .therefore the discharge vector is parallel to the walls .in other words , the flux in the direction normal to the walls vanishes. thus .because the glass beads in our experiment are small , capillarity is important . when a tube filled with glass beads is lowered into a reservoir of water , the porous bead - pack fully saturates in a region that extends above the surface of the water by a capillary rise .we measured mm for our material .the capillary rise is a measure of the average radius of the water menisci at the edge of the fully saturated zone .the pore pressure at the edge of the fully saturated zone is ( without loss of generality we set the atmospheric pressure to zero ) .water can rise above the fully saturated zone through the smaller pores and narrower throats .thus a partially saturated capillary fringe exists above the fully saturated zone .however , in this fringe the water is effectively immobile since it is confined to the smaller pores and narrower throats .since water flows only in the fully saturated zone , we define the water table to be at its edge .thus , the pore pressure at the water table is equal to the negative capillary rise . in steady statethe discharge vector is parallel to the water table .this extra condition allows us to determine the location of the water table in steady state .we neglect the pressure drop across the inlet mesh .therefore , the pore pressure at the inlet mesh is .the boundary conditions at the surface of the sandpile and at the outlet mesh are more subtle .when no water seeps out , i.e. , when the discharge vector is parallel to the surface , the curvature of the water menisci between grains can freely adjust so that the pressure can vary between zero and .therefore when , no seepage occurs . 
otherwise , the pore pressure equals the atmospheric pressure ( we neglect the pressure exerted by the thin layer of water on the surface ) , and the discharge vector has a component normal to the surface , i.e. , there is either exfiltration or infiltration . , or the pressure at the water table when it is below the surface of the pile .note that seepage occurs only where the pore pressure reaches atmospheric pressure .slope is , water level cm.,width=316 ] to obtain the steady state location of the water table , we guess its position and solve laplace s equation with the boundary condition on the water table .we then move the water table in the direction of the local discharge vector by an amount proportional to its length .iteration of this procedure converges to the steady - state position of the water table .an example is shown in figure [ fig : table_pressure_overland ] .once the steady flow pattern is known , we can calculate the overland water flux by integrating the one - dimensional continuity condition which states that the downslope derivative of the overland flux is equal to the seepage flux : this section we assume , based on direct observation , that the onset of channel incision coincides with the onset of erosion ( i.e. , we never observed a homogeneously eroding state ) . in other words ,when the overland water flux becomes strong enough to carry grains , the flow of sediment becomes immediately unstable to perturbations transverse to the downslope direction and incipient channels form . using this assumption and the calculation of the overland water flux we can deduce the threshold condition for the onset of erosion .it is universally assumed after that the hydrodynamic stresses exerted on the sandpile by the fluid flowing on its surface determine whether cohesionless granular material is entrained . in the limit of laminar flow ,the dominant hydrodynamic stress is the viscous shear stress .appropriately scaled this shear stress is termed the shields number , defined by where is the density of the granular material , is the grain diameter ( mm in our experiment ) , and the surface is not inclined .the conventional shields number is the ratio between the horizontal force exerted by the flow and vertical force due to grain s weight . to generalize the notion of the shields number to the situation with seepage through an inclined surface , we make two changes in eq . .we first add the tangential component of the seepage force density acting over a length to the numerator of .the numerator thus becomes .note that we did not include the tangential component of the grain s weight to the numerator .defined in this way , the generalized shields number measures the effect of the fluid : both the bulk as well as the surface flows .-axis of the resultant of the grain s weight and the seepage force both scaled by .,width=220 ] second , we replace the denominator of eq . by the resultant ( vectorial sum ) of the seepage force on a grain and its submerged weight , as shown in fig.[fig : mod_shields ] , both scaled by ( to obtain stress as in the numerator and for agreement with the conventional shields number ) , projected onto the -axis . 
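The steady-state bulk-flow calculation described above (guess the water-table position, solve Laplace's equation for the head with the capillary condition on the table, then move the table along the local discharge vector until it stops moving) can be illustrated with a minimal numerical sketch. The rectangular domain, grid, boundary values, and the fixed rather than iteratively relocated upper boundary below are illustrative assumptions, not the geometry of the experiment.

```python
import numpy as np

# Minimal sketch (illustrative geometry and values, not the experiment):
# Jacobi relaxation of Laplace's equation for the hydraulic head h(x, z)
# in a vertical cross-section, followed by Darcy's law for the discharge.
K = 1.0                        # hydraulic conductivity (illustrative units)
h_inlet, h_cap = 1.2, 0.05     # inlet head and capillary rise
nx, nz = 81, 41
dx = dz = 1.0 / (nz - 1)
z_top = 1.0                    # height of the (here fixed) upper boundary

h = np.full((nz, nx), 0.5)     # row 0 is the bottom, row -1 the top
for _ in range(50000):
    h_new = h.copy()
    h_new[1:-1, 1:-1] = 0.25 * (h[1:-1, :-2] + h[1:-1, 2:] +
                                h[:-2, 1:-1] + h[2:, 1:-1])
    h_new[0, :] = h_new[1, :]        # impermeable bottom: no normal flux
    h_new[:, -1] = h_new[:, -2]      # impermeable far wall
    h_new[:, 0] = h_inlet            # constant head at the inlet mesh
    h_new[-1, :] = z_top - h_cap     # water table: pressure = -capillary rise
    if np.max(np.abs(h_new - h)) < 1e-7:
        h = h_new
        break
    h = h_new

# Darcy's law: specific discharge q = -K grad(h); its component normal to
# the upper boundary is the local seepage flux feeding the surface flow.
dh_dz, dh_dx = np.gradient(h, dz, dx)
q_seep = (-K * dh_dz)[-1, :]
print("maximum seepage flux (illustrative units):", q_seep.max())
```

In the full computation the upper boundary would be displaced along the local discharge vector after each relaxation, and the resulting seepage flux integrated downslope to obtain the overland flux.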
according to the grains on the surface of the bed experience a seepage force roughly half as large as the grains several layers deep .consequently , we assume that the seepage force is reduced by a factor of ; therefore where is the inclination angle of the surface .the importance of the seepage stresses for the criterion for the onset of erosion was previously realized by .it can be shown that their equation ( 10 ) expressing marginal stability of a surface grain is equivalent to writing , the tangent of the angle of internal friction .the generalized shields number eq . is a measure of the relative importance of the tangential and normal forces acting on a grain at the surface of the sandpile .therefore , we expect to be a control parameter for erosion . in other words , there exists a critical shields number , such that when , surface grains are immobile , and when , sediment is mobilized .note that ( [ eq : shields ] ) reduces to the classical definition of the shields number for a flat surface without seepage . also notethat since we did not include the tangential component of grain s weight , the critical shields number at the onset of sediment motion vanishes when the inclination angle reaches the maximum angle of stability .although we obtain the seepage force density as a result of computing the pore - water pressure , to calculate the boundary shear stress we must estimate the thickness of the surface water layer . since this thickness changes slowly in the downslope direction , we can approximate the surface flow by the steady flow of a uniform layer of viscous fluid . also , the surface water flux is small enough for turbulence to be of no importance .the thickness of laminar surface flow for a given flux is where is the viscosity , while the viscous shear stress exerted on the sandpile is the particle reynolds number can then be calculated using the bottom shear stress and shear velocity as where is the kinematic viscosity of water .we estimate that in our experiments , this particle reynolds number varies between 5 and 20 depending on the slope of the pile and the water level .we verify this estimate of the reynolds number by a direct measurement of the thickness of the surface flow .we find that this thickness is several grain diameters .this justifies the laminar flow assumption used in obtaining eq . .using , the shields number can now be conveniently rewritten as this expression can be further simplified by noting that along the seepage face .therefore at the surface wherever there is overland flow .we arrive at the final expression for the modified shields number which depends on the surface flow thickness , the normal component of the seepage force density at the surface , and the seepage force reduction factor in our geometry , both the surface and the seepage water fluxes reach a maximum somewhere along the slope .therefore the shields number has a maximum value as well .below we calculate this maximum shields number in steady state for a given slope and water level .for a pile of slope . at waterfirst seeps through the surface and the shields number jumps to a nonzero value .afterwards it increases rapidly as ( solid line ) .inset : corresponding size of the computed seepage face.,width=316 ] we now explore the consequences of seepage for the phenomenology of the onset of erosion . 
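As a worked illustration of the surface-flow quantities just introduced, the sketch below evaluates a laminar film thickness, the bed shear stress, the classical (flat-bed, no-seepage) Shields number, and the particle Reynolds number for an assumed overland flux. The standard Nusselt film relations are used here as a stand-in for the expressions in the text, and all numerical values are illustrative rather than measured; the generalized Shields number additionally requires the seepage force at the surface, which is not computed here.

```python
import numpy as np

# Illustrative numbers only; densities, viscosity and the assumed surface
# flux are stand-ins, not measurements from the experiment.
rho_w, rho_s = 1000.0, 2500.0     # water and grain densities [kg/m^3]
nu = 1.0e-6                       # kinematic viscosity of water [m^2/s]
g = 9.81                          # gravity [m/s^2]
d_g = 0.5e-3                      # grain diameter [m]
alpha = np.arctan(0.1)            # surface slope
q = 1.0e-3                        # overland flux per unit width [m^2/s]

# laminar (Nusselt) film relations for a thin sheet flowing down the slope
d = (3.0 * nu * q / (g * np.sin(alpha))) ** (1.0 / 3.0)   # film thickness
tau = rho_w * g * d * np.sin(alpha)                        # bed shear stress
u_star = np.sqrt(tau / rho_w)                              # shear velocity

# classical (flat-bed, no-seepage) Shields number and particle Reynolds
# number; the generalized Shields number of the text adds the seepage
# force to both numerator and denominator.
theta = tau / ((rho_s - rho_w) * g * d_g)
re_p = u_star * d_g / nu
print(f"film thickness {d*1e3:.2f} mm, shields {theta:.3f}, Re_p {re_p:.1f}")
```

With these illustrative numbers the film is a few grain diameters thick and the particle Reynolds number falls within the range quoted above.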
because of the additional force on the surface grains , seepage flow is more erosive than overland flow .this notion is reflected quantitatively in the generalized shields number .let us examine how the maximum shields number varies with the water level in our experiment .a representative graph of the maximum shields number versus the water level is shown in fig.[fig : shields_h ] .below a water level that is a function of the slope , no water seeps out to the surface of the pile . even though the water table may be at the surface, the pressure at the water table is below atmospheric pressure and capillarity prevents seepage . when , i.e. , exactly at the onset of seepage , the pressure reaches at some point on the surface . since the seepage flux is still zero , along the wet part of the surface .therefore , just above the seepage onset , when the water layer thickness and the seepage flux are both infinitesimally small , the maximum shields number is in contrast to overland flow , the consequence of seepage is that as soon as the water emerges on the surface , the maximum shields number is some non - zero value which depends on the slope .this also implies that there exists a critical slope such that is equal to the critical shields number , i.e. , for slopes greater than seepage is always erosive. note that for low - density particles this critical slope can be arbitrarily small .the expression for the critical slope for seepage erosion in eq . is analogous to well - known formulas for stability of slopes to coulomb failure due to uniform seepage .our result applies locally to the point where non - uniform seepage first emerges on the surface . in this situation ,the pile is generally stable to coulomb failure and the sediment is eroded only locally on the surface . as a function of the downslopecoordinate at the onset of seepage and just above.,width=220 ] we now show that above , the maximum shields number increases rapidly as a power of the water level excess . at the onset of seepage ,i.e. , when , the pressure at the water table reaches atmospheric pressure at some point located at and on the surface .even though the water table is at the surface , there is no seepage anywhere , i.e. , . because the pressure is smooth , it can be approximated by a quadratic function near this point so that , where and are constants with appropriate dimensions .when the water level is raised by a small increment , the lowest order change in the head at the water table is an increase of with the exception of the region where this increase would lead to a positive pressure . as illustrated in fig.[fig : increment ] , in this region the pore pressure is set to , and thus this region becomes the seepage face .the width of the seepage face scales like the square root of , i.e. , as seen in the inset of fig.[fig : shields_h ] .the seepage flux can be estimated by noting that the hydraulic head is modified by an amount over a vertical region of order .therefore we obtain .the total surface flux therefore scales like the product of the seepage flux and the width of the seepage face , i.e. , . the lowest order change in the maximum shields number is due to the change of the surface flow depth .thus as we claimed above , just above the water level for the onset of seepage , where the constant is a function of the slope .variation with water level of the computed maximum shields number shown in fig.[fig : shields_h ] is consistent with eq . 
.in the previous sections we have detailed the way of calculating the bulk and surface water fluxes in our experiment and the resulting maximum generalized shields number . in this sectionwe use this calculation to examine the onset of the sediment flow and channelization .our first goal is to measure the threshold or critical shields number required for the mobilization of sediment .we then use this measured value of the critical shields number to predict the outcome of steady - state experiments for various values of the slope and the water level and thus compute the channelization boundary in the phase diagram in fig.[fig : phase ] .the actual maximum shields number in the experiment differs from the quantity calculated in eq .( [ eq : shields - prefinal1 ] ) .in addition to random errors in the measurements of the pile dimensions and water level , there are several sources of systematic error .for example , the pressure drop across the inlet mesh results in a lower effective hydraulic head .also , our measurement of the capillary rise is dependent on a visual estimate of the fully saturated zone and thus can be a source of systematic error .we indeed find that the size of the seepage face calculated for a particular water level is greater than measured in the experiment .however , the size of the seepage face translates directly into the surface water flux and therefore the maximum shields number .the inset of figure [ fig : shields_h ] shows the typical dependence of the size of the seepage face on the water level .the variation of the maximum shields number with the size of the seepage face is shown in fig.[fig : shields_seep ] for three different slopes .we use this computed correspondence between the size of the seepage face and the maximum shields number to infer the maximum shields number in the experiment by measuring the size of the seepage face . to measure the critical shields number we raise the water level in increments of a few millimeters at a time .each time the water level is increased , the seepage flow is allowed to reach a steady state . in each of these steady stateswe measure the seepage face size and infer the corresponding maximum shields number .eventually , sediment is mobilized and we record the size of the seepage face and compute the corresponding maximum shields number .this number is an upper bound on the critical shields number for our granular material at that particular slope .the lower bound on the critical shields number is obtained from the largest seepage face at which no sediment is moving or sediment motion is only transient . averaging over several experiments with slope we estimate the critical shields number to be it is not obvious that the generalized critical shields number for the onset of seepage driven erosion should coincide with the critical shields number for overland flow . however our measured value of the critical generalized shields number is within the scatter of the existing data for overland flow summarized in .our measurement the critical generalized shields number is equivalent to measuring the angle of internal friction due to the correspondence of our definition of and howard and mclane s equation ( 10 ) .deviations from flatness of the pile s surface result in the fluctuations of the thickness of the surface water film . 
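The inference step described above, reading the maximum Shields number off the computed correspondence with the seepage-face size and bracketing the critical value between the last quiescent and the first mobilized state, reduces to a simple interpolation. The tabulated curve and the two measurements below are placeholders, not data from the experiment.

```python
import numpy as np

# Placeholder for the computed correspondence between seepage-face size a
# and the maximum Shields number (as in fig. [fig:shields_seep]).
a_model = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])        # face size [cm]
theta_model = np.array([0.00, 0.04, 0.07, 0.10, 0.13, 0.16])

def theta_max(a_measured):
    """Infer the maximum Shields number from a measured seepage-face size."""
    return np.interp(a_measured, a_model, theta_model)

# The largest face with no (or only transient) motion gives a lower bound,
# the face at which grains first move gives an upper bound; averaging over
# repeated runs estimates the critical Shields number.
a_no_motion, a_onset = 5.0, 6.0            # illustrative measurements [cm]
lower, upper = theta_max(a_no_motion), theta_max(a_onset)
theta_c = 0.5 * (lower + upper)
print(f"critical Shields number between {lower:.3f} and {upper:.3f}, "
      f"estimate {theta_c:.3f}")
```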
as a result ,the maximum bottom shear stress in the experiment is systematically greater than that calculated at a given size of the seepage face .thus the shields number calculated for a particular size of the seepage face is the lower bound on the actual shields number in the experiment . in principle, the critical shields number should vary with the slope of the pile .evidence for this is the fact that at the maximum angle of stability any additional forcing from the water flowing over the bed mobilizes sediment .since it is reasonable to assume that the critical shields number is continuous and monotonic , we arrive at the notion that it decreases monotonically with slope and vanishes at the maximum angle of stability . for small slopes the critical shields numberis expected to decrease as the cosine of the inclination angle since this is the lowest order change in the stabilizing effect of gravity . for most slopes in our experiments , is within a few percent of unity and thus we can ignore the variation of the critical shields number with slope .this assumption allows us to predict the water level at which erosion and therefore channelization should commence in our experiment .in fig.[fig : phase ] a boundary is drawn between regions where sediment is expected to mobilize and remain immobile . to obtain this line we computed for each slope the water level at which the shields number is equal to the critical shields number .below this water level , i.e. , when , the maximum shields number is below critical and thus sediment is immobile .conversely , for , the maximum shields number is above critical and thus sediment is mobilized and channels form .the channelization boundary is widened because of the uncertainty in the critical shields number .qualitative agreement of the channelization boundary with experiments is perhaps due to the opposite action of two effects .first , channelization occurs for lower water levels in non - steady experiments .this happens because in non - steady experiments the maximum shields number overshoots its steady - state value .the overshoot is greatest for small slopes .second , a pressure drop across the inlet mesh and the compacted region of sand close to it has an opposite effect which increases the water level needed for channelization .these two effects , though small , could together affect the accuracy of our predictions . since these effects act in opposite ways , our the predictions of the calculated channelization water level agree qualitatively with the experiments .having computed the channelization boundary in the phase diagram , we now pursue a quantitative description of the other two modes of sediment mobilization exhibited by our sandpile .higher water pressures can cause the sandpile to fail in one of two ways .first , an upward seepage force can lift sand and result in a fluidization or quicksand instability .second , the pile can become unstable to slipping , slumping , or sliding .both failure mechanisms have been discussed by a number of studies , e.g. , those of or .and height and compare it with its weight.,width=144 ] fluidization occurs when at some point in the sandpile the pore pressure is larger than the total hydrostatic pressure due to the weight of the sand and the water above . 
to see this we compute the total seepage force acting on a slice of sand of width between point and point on the surface of the pile directly above .the vertical component of this force is ( see fig.[fig : fluidize ] ) where is the height of the slice . when this force exceeds the submerged weight of the slice , the slice is lifted and the bed is fluidized . here is the total density of the saturated sand , which for our sand is approximately 2 g/ .thus fluidization occurs when there exists points and on the surface directly above such that for uniform seepage this condition is equivalent to those in and . to construct the fluidization boundary in the phase diagram ( fig.[fig : phase ] ) , we find the water level above which there exists at least one point in the pile for which condition is satisfied .below this fluidization water level this condition is not satisfied for any point in the pile .in addition to fluidization the sandpile can fail by slumping .this can happen in one of two ways .frictional failure can occur in the bulk of the pile due to the seepage stresses .alternatively , surface avalanching can occur . to establish an upper bound on the water level at which the sandpile slumps via either mechanism we use the criterion developed by for determining when a slope is destabilized by uniform groundwater seepage .essentially it requires calculating the vectorial sum of the seepage and gravity forces acting on a small element of soil near the surface . when the angle between this total force and the downward normal to the surface , which we will call the effective inclination angle , exceeds the maximum angle of stability , the surface grains are destabilized .we measured the maximum angle of stability to be for dry glass beads .the slumping boundary in the phase diagram ( fig.[fig : phase ] ) is constructed by computing the effective inclination angle along the surface of the pile and noting the water level , at which the effective inclination angle reaches the maximum angle of stability at some point of the surface .figure [ fig : phase ] shows the critical water level at which fluidization and slumping should occur according to the criteria above .failure occurs at systematically lower water levels in the experiment .there are several effects which can account for this difference .first , any irregularities in the construction of the pile such as voids or surface height fluctuations make the pile more unstable to fluidization and slumping .second , we compute the instability of an uneroded pile , whereas in most experiments , the pile failed after erosion had changed the shape of the pile . the decrease of pile s heightdue to erosion increases the head gradient in the bulk and thus makes the pile more prone to slumping and/or fluidization . and three different water levels .for the highest water level , a region on the surface has an effective angle above the maximum angle of stability and thus the slope is unstable to slumping .the inset shows the plateau value of the effective inclination angle as a function of slope .when the plateau value reaches the maximum angle of stability , even a small amount of seepage destabilizes the pile to slumping.,width=316 ] at , a jump in the slumping water level is observed in both the experiment and the model .this jump is a purely geometric effect .slumping occurs when , somewhere along the slope , the effective inclination angle , which includes the effect of the seepage force , exceeds the maximum angle of stability . 
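The fluidization test described above can be written as a pointwise inequality on the scaled pore pressure: a column of saturated sand is lifted wherever the pressure head at an interior point exceeds the overburden of saturated sand and water between that point and the surface directly above. The sketch below checks this condition on a grid; the surface profile and pressure field are synthetic and only illustrate the test, and the analogous slumping check would compare the effective inclination angle along the surface with the measured maximum angle of stability.

```python
import numpy as np

# Fluidization check (synthetic fields, illustrative only): in head units
# the condition reads  psi(x, z) >= (rho_t / rho_w) * (h_surf(x) - z),
# with rho_t ~ 2 g/cm^3 the total density of the saturated sand.
rho_t_over_rho_w = 2.0

nx, nz = 200, 100
x = np.linspace(0.0, 1.0, nx)
z = np.linspace(0.0, 0.5, nz)[:, None]
h_surf = 0.3 + 0.1 * x                 # illustrative surface height profile
inside = z <= h_surf                   # points lying inside the pile
psi = 0.9 * (h_surf - z)               # illustrative scaled pore pressure

overburden = rho_t_over_rho_w * (h_surf - z)
fluidized = inside & (psi >= overburden)
print("fluidized anywhere:", bool(fluidized.any()))
```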
as shown in fig.[fig : slump_jump ] , for slopes smaller than , the effective inclination angle is flat and develops a peak under the water inlet as the water pressure is increased .when the top of this peak crosses the value of the maximum angle of stability , the pile slumps . when the slope exceeds , however , the value of the plateau in the effective inclination angle is above the maximum angle of stability .therefore , for these slopes , the pile will be unstable to slumping as soon as the water emerges on the surface .this article reports on our progress in understanding seepage erosion of a simple non - cohesive granular material in a laboratory - scale experiment introduced in .our ultimate goal is to construct a quantitative predictive theory of the onset and growth of the channel network observed in this experiment .this goal requires a complete sediment transport model as well as the calculation of the relevant water fluxes .here we obtain the latter and focus on the onset of erosion .prediction of the onset of erosion based on the generalized shields conjecture explains qualitatively the channelization boundary in the experimental phase diagram .by invoking well established simple ideas we also roughly explain the fluidization and slumping boundaries in the phase diagram .greater discrepancy with the experiment for these boundaries indicates that better understanding of the slumping / fluidization mechanisms particular to our experiment is needed .the central result of our exploration is the introduction of the generalized shields criterion for seepage erosion . as a consequence of seepage forces on the surface grains , the threshold for the onset of erosion driven by seepage is slope dependent .the threshold disappears at a critical slope determined by the critical shields number for overland flow and the density contrast between the granular material and water . in most cases this critical slope is significantly smaller than the maximum angle of stability .we find , therefore , that slopes above this critical slope are unstable to any amount of seepage . as a consequence , slopes that sustain seepagemust be inclined at an angle smaller than the critical angle for seepage erosion .this behavior contrasts strongly with the threshold phenomena in erosion by overland flow , and therefore provides a mechanistic foundation for distinguishing the two types of erosion .this work was supported by a doe grants de - fg02 - 99er15004 and de - fg02 - 02er15367 .aharonson , o. , m. t. zuber , d. h. rothman , n. schorghofer , and k. x. whipple , drainage basins and channel incision on mars , _ proceedings of the national academy of sciences usa _ , _ 99 _ , 17801783 , 2002 .baker , v. r. , spring sapping and valley network development , in _ groundwater geomorphology : the role of subsurface water in earth - surface processes and landforms _ , edited by c. higgins and d. coates , chap . 11 , geological society of america , boulder , colorado , 1990 .buffington , j. m. , and d. r. montgomery , a systematic analysis of eight decades of incipient motion studies , with special reference to gravel - bedded rivers , _ water resources research _ , _33_(8 ) , 19932029 , 1997 .dunne , t. , k. x. whipple , and b. f. aubry , microtopography and hillslopes and initiation of channels by horton overland flow , in _ natural and anthropogenic influences in fluvial geomorphology _ , pp . 2744 , american geophysical union , 1995 .howard , a. d. 
, groundwater sapping experiments and modelling , in _ sapping features of the colorado plateau , a comparative planetary geology field guide _ , edited by a. d. howard , r. c. kochel , and h. e. holt , pp . 71 - 83 , nasa scientific and technical information division , washington , d.c . , 1988 . kochel , r. c. , a. d. howard , and c. mclane , channel networks developed by groundwater sapping in fine - grained sediments : analogs to some martian valleys , in _ models in geomorphology _ , edited by m. j. woldenberg , pp . 313 - 341 , allen and unwin , boston , 1985 . kochel , r. c. , d. w. simmons , and j. f. piper , groundwater sapping experiments in weakly consolidated layered sediments : a qualitative summary , in _ sapping features of the colorado plateau , a comparative planetary geology field guide _ , edited by a. d. howard , r. c. kochel , and h. e. holt , pp . 84 - 93 , nasa scientific and technical information division , washington , d.c . , 1988 . shields , a. , _ anwendung der ähnlichkeitsmechanik und der turbulenzforschung auf die geschiebebewegung _ , heft 26 , mitteilung der preussischen versuchsanstalt für wasserbau und schiffbau , berlin , germany , ( in german ) , 1936 .
we study channelization and slope destabilization driven by subsurface ( groundwater ) flow in a laboratory experiment . the pressure of the water entering the sandpile from below as well as the slope of the sandpile are varied . we present a quantitative understanding of the three modes of sediment mobilization in this experiment : surface erosion , fluidization , and slumping . the onset of erosion is controlled not only by shear stresses caused by surficial flows , but also by hydrodynamic stresses deriving from subsurface flows . these additional forces require a modification of the critical shields criterion . whereas surface flows alone can mobilize surface grains only when the water flux exceeds a threshold , subsurface flows cause this threshold to vanish at slopes steeper than a critical angle substantially smaller than the maximum angle of stability . slopes above this critical angle are unstable to channelization by any amount of fluid reaching the surface .
since the pioneering papers by watts and strogatz on small - world networks and barabsi and albert on scale - free networks , complex networks , which describe many systems in nature and society , have become an area of tremendous recent interest . in the last few years , modeling real - life systems has attracted an exceptional amount of attention within the physics community .while a lot of models have been proposed , most of them are stochastic .however , because of their advantages , deterministic networks have also received much attention .first , the method of generating deterministic networks makes it easier to gain a visual understanding of how networks are shaped , and how do different nodes relate to each other ; moreover , deterministic networks allow to compute analytically their properties : degree distribution , clustering coefficient , average path length , diameter , betweenness , modularity and adjacency matrix whose eigenvalue spectrum characterizes the topology .the first model for deterministic scale - free networks was proposed by barabsi __ in ref . and was intensively studied in ref . . another elegant model , called pseudofractal scale - free web ( psw ) ,was introduced by dorogovtsev , goltsev , and mendes , and was extended by comellas _. . based on a similar idea of psw , jung _ et al ._ presented a class of recursive trees .additionally , in order to discuss modularity , ravasz _ et al ._ proposed a hierarchical network model , the exact scaling properties and extensive study of which were reported in refs . and , respectively .recently , in relation to the problem of apollonian space - filing packing , andrade _ et al . _ introduced apollonian networks which were also proposed by doye and massen in ref . and have been intensively investigated .in addition to the above models , deterministic networks can be created by various techniques : modification of some regular graphs , addition and product of graphs , edge iterations and other mathematical methods as in refs . . as mentioned by barabsi __ , it would be of major theoretical interest to construct deterministic models that lead to scale - free networks . herewe do an extensive study on pseudofractal scale - free web .the psw can be considered as a process of edge multiplication .in fact , a clique ( edge is a special case of it ) can also reproduce new cliques and the number of the new reproduction may be different at a time . motivated by this , in a simple recursive way we propose a general model for psw by including two parameters , with psw as a particular case of the present model .the deterministic construction of our model enables one to obtain the analytic solutions for its structure properties . by adjusting the parameters, we can obtain a variety of scale - free networks .before introducing our model we give the following definitions on a graph ( network ) . the term _ size _ refers to the number of edges in a graph .the number of nodes in a graph is called its _order_. 
when two nodes of a graph are connected by an edge , these nodes are said to be _ adjacent _ , and the edge is said to join them .complete graph _ is a graph in which all nodes are adjacent to one another .thus , in a complete graph , every possible edge is present .the complete graph with nodes is denoted as ( also referred in the literature as -_clique _ ) .two graphs are _isomorphic _ when the nodes of one can be relabeled to match the nodes of the other in a way that preserves adjacency .so all -cliques are isomorphic to one another . and .only the first three steps are shown.,width=491 ] the network is constructed in a recursive way .we denote the network after steps by , ( see fig .[ recursive ] ) .then the network at step is constructed as follows : for , is a complete graph ( or -clique ) consist of -cliques ) , and has nodes and edges . for , obtained from by adding new nodes for each of its existing subgraphs isomorphic to a -clique , and each new node is connected to all the nodes of this subgraph . in the special case and ,it is reduced to the pseudofractal scale - free web described in ref . . in the limiting case of , we obtain the same networks as in ref . . however , our family is richer as can take any natural value .there is an interpretation called ` aggregation ' for our model . as an example , here we only explain them for the case of and .figure [ pseudofractal ] illustrates the growing process for this particular case , which may be accounted for as an ` aggregation ' process described in detail as follows .first , three of the initial triangle ( ) are assembled to form a new unit ( ) .then we assemble three of these units at the hubs ( the nodes with highest degree ) in precise analogy with the step leading from to to form a new cell ( ) ( see fig . [ aggregation ] ) .this process can be iterated an arbitrary number of times .moreover , an alternative explanation of our model which is often useful is that of ` miniaturization ' ( see ref . ) . and ) , exhibiting the first three steps.,width=453 ] to , which is obtained by adjoining of three copies of at the hubs.,width=453 ]below we will find that the tunable parameters and control some relevant characteristics of the network . because is a particular case , for conveniences , we treat and separately ._ order and size ._ in the case of , we denote by .let us consider the total number of nodes and total number of edges in .denote as the number of nodes created at step .note that the addition of each new node leads to two new edges . by construction , for , we have and considering the initial condition and , it follows that then the number of nodes increases with time exponentially and the total number of nodes present at step is thus for large , the average degree is approximately ._ degree distribution ._ let be the degree of node at step . then by construction , it is not difficult to find following relation : which expresses a preference attachment .if node is added to the network at step , and hence therefore , the degree spectrum of the network is discrete .it follows that the degree distribution is given by and that the cumulative degree distribution is substituting for in this expression using gives so the degree distribution follows the power law with the exponent . for the particular case of , eq .( [ gamma1 ] ) recovers the result previously obtained in ref . 
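A short construction sketch may help make the recursion concrete. Under our reading of the construction, it implements the case analysed in this subsection, in which the cliques are edges: starting from a triangle, every edge of the previous graph acquires m new nodes at each step, each joined to the edge's two endpoints, so each new node contributes two new edges as noted above. The use of networkx and the printed quantities are illustrative choices.

```python
import networkx as nx

def build_network(m, steps):
    """Edge-attachment case: start from a triangle and, at every step,
    attach m new nodes to the two endpoints of each edge present in the
    previous graph."""
    g = nx.Graph()
    g.add_edges_from([(0, 1), (1, 2), (2, 0)])   # step 0: a triangle
    label = 3
    for _ in range(steps):
        for u, v in list(g.edges()):             # snapshot of existing edges
            for _ in range(m):
                g.add_edges_from([(label, u), (label, v)])
                label += 1
    return g

for t in range(4):
    g = build_network(m=2, steps=t)
    n, e = g.number_of_nodes(), g.number_of_edges()
    print(f"t={t}: nodes={n}, edges={e}, mean degree={2 * e / n:.3f}")
```

Printing the node and edge counts for a few steps reproduces the exponential growth described above.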
._ second moment of degree distribution ._ let us calculate the second moment of degree distribution .it is defined by ^{2},\end{aligned}\ ] ] where is the degree of a node at step , which was generated at step .this quality expresses the average of degree square over all nodes in the network .it has large impact on the dynamics of spreading and the onset of percolation transitions taking place in networks .when is diverging , the networks allow the onset of large epidemics whatever the spreading rate of the infection , at the same time the networks are extremely robust to random damages , in other words , the percolation transition is absent . substituting eqs .( [ nv1 ] ) , ( [ nt1 ] ) and ( [ ki1 ] ) into eq .( [ ki21 ] ) , we derive \nonumber\\ & \approx&\frac{8\,(m+1)^{2t+1}}{m(2m+1)}\rightarrow\infty \qquad\hbox{for large .}\end{aligned}\ ] ] in this way , second moment of degree distribution has been calculated explicitly , and result shows that it diverges as an exponential law .so the networks are resilient to random damage and are simultaneously sensitive to the spread of infections ._ degree correlations ._ as the field has progressed , degree correlation has been the subject of particular interest , because it can give rise to some interesting network structure effects . an interesting quantity related to degree correlationsis the average degree of the nearest neighbors for nodes with degree , denoted as .when increases with , it means that nodes have a tendency to connect to nodes with a similar or larger degree . in this casethe network is defined as assortative .in contrast , if is decreasing with , which implies that nodes of large degree are likely to have near neighbors with small degree , then the network is said to be disassortative . if correlations are absent , .we can exactly calculate for the networks using eq .( [ ki1 ] ) to work out how many links are made at a particular step to nodes with a particular degree . except for three initial nodes generated at step 0, no nodes born in the same step , which have the same degree , will be linked to each other .all links to nodes with larger degree are made at the creation step , and then links to nodes with smaller degree are made at each subsequent steps .this results in the expression for . herethe first sum on the right - hand side accounts for the links made to nodes with larger degree ( i.e. ) when the node was generated at .the second sum describes the links made to the current smallest degree nodes at each step . substituting eqs .( [ nv1 ] ) and ( [ ki1 ] ) into eq .( [ knn1 ] ) , after some algebraic manipulations , eq .( [ knn1 ] ) is simplified to ^{t_i}-\frac{2(m+1)}{m}+\frac{2m}{m+1}\,(t - t_i).\end{aligned}\ ] ] thus after the initial step grows linearly with time . writing eq .( [ knn2 ] ) in terms of , it is straightforward to obtain ^{t}\,\left ( \frac{k}{2}\right)^{-\frac{\ln\left [ \frac{(m+1)^{2}}{2m+1}\right ] } { \ln(m+1)}}\nonumber\\ \qquad\qquad\qquad\qquad-\frac{2(m+1)}{m}+\frac{2m}{m+1}\,\frac{\ln(\frac{k}{2})}{\ln(m+1)}.\end{aligned}\ ] ] therefore , is approximately a power law function of with negative exponent , which shows that the networks are disassortative .note that of the internet exhibit a similar power - law dependence on the degree , with ._ clustering coefficient . _the clustering coefficient defines a measure of the level of cohesiveness around any given node . 
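The disassortative mixing derived above can be checked empirically on a generated instance of the edge-attachment case; the snippet below builds a small network and prints the mean neighbour degree as a function of degree together with the degree assortativity coefficient. The parameter values and number of growth steps are arbitrary, and finite-size values will differ from the asymptotic expressions.

```python
import networkx as nx

# build the edge-attachment case with m = 2 for five growth steps
g = nx.Graph([(0, 1), (1, 2), (2, 0)])
label, m = 3, 2
for _ in range(5):
    for u, v in list(g.edges()):
        for _ in range(m):
            g.add_edges_from([(label, u), (label, v)])
            label += 1

knn = nx.average_degree_connectivity(g)     # degree -> mean neighbour degree
for k in sorted(knn)[:6]:
    print(f"k={k:4d}  k_nn={knn[k]:8.2f}")
print("assortativity coefficient:",
      round(nx.degree_assortativity_coefficient(g), 3))
```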
by definition , the clustering coefficient of node is the ratio between the number of edges that actually exist among the neighbors of node and its maximum possible value , , i.e. , .the clustering coefficient of the whole network is the average of all individual .next we will compute the clustering coefficient of every node and their average value .obviously , when a new node joins the network , its degree and is and , respectively .each subsequent addition of a link to that node increases both and by one .thus , equals to for all nodes at all steps .so one can see that , there is a one - to - one correspondence between the degree of a node and its clustering . for a node with degree ,the exact expression for its clustering coefficient is .therefore , the clustering coefficient spectrum of nodes is discrete . using this discreteness , it is convenient to work with the cumulative distribution of clustering coefficient as it is worth noting that for the special case of , this result has been obtained previously .the clustering coefficient of the whole network at arbitrary step can be easily computed , }.\end{aligned}\ ] ] in the infinite network size limit ( ) , .thus the clustering is high and increases with .moreover , similarly to the degree exponent , is tunable by choosing the right value of parameter : in particular , ranges from ( in the special case of ) to limit of 1 when becomes very large ._ diameter ._ the diameter of a network is defined as the maximum of the shortest distances between all pairs of nodes , which characterizes the longest communication delay in the network .small diameter is consistent with the concept of small - world and it is easy to compute for our networks .below we give the precise analytical computation of diameter of denoted by .it is easy to see that at step ( resp . ) , the diameter is equal to 1 ( resp .2 ) . at each step , one can easily see that the diameter always lies between a pair of nodes that have just been created at this step . in order to simplify the analysis, we first note that it is unnecessary to look at all the nodes in the networks in order to find the diameter . in other words ,some nodes added at a given step can be ignored , because they do not increase the diameter from the previous step .these nodes are those that connect to edges that already existed before step . indeed , for these nodes we know that a similar construction has been done in previous steps , so we can ignore them for the computation of the diameter .let us call `` outer '' nodes the nodes which are connected to a edge that did not exist at previous steps .clearly , at each step , the diameter depends on the distances between outer nodes . at any step , we note that an outer node can not be connected with two or more nodes that were created during the same step .indeed , we know that from step , no outer node is connected to two nodes of the initial triangle .thus , for any step , any outer node is connected with nodes that appeared at pairwise different steps .now consider two outer nodes created at step , say and .then is connected to two nodes , and one of them must have been created before or during step .we repeat this argument , and we end up with two cases : ( 1 ) is even . then, if we make jumps " , from we reach the initial triangle , in which we can reach any by using an edge of and making jumps to in a similar way . thus .( 2 ) is odd . 
in this casewe can stop after jumps at , for which we know that the diameter is 2 , and make jumps in a similar way to reach . thus .it is easily seen that the bound can be reached by pairs of outer nodes created at step .more precisely , those two nodes and share the property that they are connected to two nodes that appeared respectively at steps , . hence , formally , for any . note that , thus the diameter is small and scales logarithmically with the number of network nodes . in these cases , the analysis is a little difficult than those of the last subsection .an alternative approach has to be adopted , although it may also holds true for the first case in some situations .the method of the last subsection is relatively easy to generalize to these cases , and below we will address it , focusing on order , size , degree distribution , clustering coefficient and diameter ._ order and size ._ let , be the number of nodes and edges created at step , respectively .denote as the total number of -cliques in the whole network at step .note that the addition of each new node leads to new -cliques and new edges . by construction , we have , and . thus one can easily obtain ( ) , ( ) and ( ) . from above results, we can easily compute the order and size of the networks .the total number of nodes and edges present at step is }{q}\end{aligned}\ ] ] and respectively . for infinite ,the average degree is approximately ._ degree distribution ._ when a new node is added to the graph at step , it has degree and forms new -cliques .let be the total number of -cliques at step that will created new nodes connected to the node at step .so at step , . by construction , we can see that in the subsequent steps each new neighbor of generates new -cliques with as one node of them .let be the degree of at step .it is not difficult to find following relations for : and from the above two equations , we can derive (i , t-1) ] and ^{t - t_i-1} ] shown by eq .( [ ki ] ) is the degree of the nodes created at step . on and .,width=340 ]it can be easily proved that for arbitrary fixed , increases with , and that for arbitrary fixed , increases with .in the infinite network order limit ( ) , eq .( [ ac ] ) converges to a nonzero value .when , for , 2 , 3 and 4 , equal to 0.8000 , 0.8571 0.8889 and 0.9091 , respectively .when , for , 3 , 4 and 5 , are 0.8571 , 0.9100 , 0.9348 and 0.9490 , respectively. therefore , the clustering coefficient of our networks is very high .moreover , similarly to the degree exponent , clustering coefficient is determined by and .figure [ cc ] shows the dependence of on and ._ diameter ._ in what follows , the notations and express the integers obtained by rounding to the nearest integers towards infinity and minus infinity , respectively .now we compute the diameter of , denoted for ( is a particular case that is treated separately in the last subsection ) : _ step 0_. the diameter is ._ steps 1 to . in this case , the diameter is 2 , since any new node is by construction connected to a -clique forming a -clique , and since any -clique during those steps contains at least ( even ) or + 1 ( odd ) nodes from the initial -clique obtained after step 0 .hence , any two newly added nodes and will be connected respectively to sets and , with and , where is the node set of ; however , since ( even ) and + 1 ( odd ) , where denotes the number of elements in set , we conclude that , and thus the diameter is 2 ._ steps to . 
in any of these steps , some newly added nodes might not share a neighbor in the original -clique ; however , any newly added node is connected to at least one node of the initial -clique .thus , the diameter is equal to 3 . _further steps_. similar to the case of , we call `` outer '' nodes the nodes which are connected to a -clique that did not exist at previous steps .clearly , at each step , the diameter depends on the distances between outer nodes .now , at any step , an outer node can not be connected with two or more nodes that were created during the same step .moreover , by construction no two nodes that were created during a given step are neighbors , thus they can not be part of the same -clique .therefore , for any step , some outer nodes are connected with nodes that appeared at pairwise different steps .thus , if denotes an outer node that was created at step , then is connected to nodes , , where all the are pairwise distinct .we conclude that is necessarily connected to a node that was created at a step .if we repeat this argument , then we obtain an upper bound on the distance from to the initial -clique .let , where .then , we see that is at distance at most from a node in .hence any two nodes and in lie at distance at most ; however , depending on , this distance can be reduced by 1 , since when , we know that two nodes created at step share at least a neighbor in . thus , when , , while when , .one can see that these bounds can be reached by pairs of outer nodes created at step . more precisely , those two nodes and share the property that they are connected to nodes that appeared respectively at steps . based on the above arguments, one can easily see that for , the diameter increases by 2 every steps .more precisely , we have the following result , for any and ( when , the diameter is clearly equal to 1 ) : where if , and 1 otherwise . when gets large , , while , thus the diameter grows logarithmically with the number of nodes .it is easy to see that these cases of have very similar topological properties to the case .additionally , for the cases of , the networks will again be disassortative with respect to degree because of the lack of links between nodes with the same degree ; the second moment of degree distribution will also diverge , which is due to the fat tail of the degree distribution .to sum up , we have proposed and investigated a deterministic network model , which is constructed in a recursive fashion .our model is actually a tunable generalization of the growing deterministic scale - free networks introduced in ref . . aside from their deterministic structures ,the statistical properties of the resulting networks are equivalent with the random models that are commonly used to generate scale - free networks .we have obtained the exact results for degree distribution and clustering coefficient , as well as the diameter , which agree well with large amount of real observations .the degree exponent can be adjusted , the clustering coefficient is very large , and the diameter is small . therefore, out model may perform well in mimicking a variety of scale - free networks in real - life world . moreover , our networks consist of cliques , which has been observed in variety of the real - world networks , such as movie actor collaboration networks , scientific collaboration networks and networks of company directors .this research was supported in part by the national natural science foundation of china ( nnsfc ) under grant nos . 
60373019 , 60573183 , and 90612007 . lili rong gratefully acknowledges partial support from nnsfc under grant nos . 70431001 and 70571011 . the authors thank the anonymous referees for their valuable comments and suggestions .
we propose a general geometric growth model for the pseudofractal scale - free web , which is controlled by two tunable parameters . we derive exactly the main characteristics of the networks : degree distribution , second moment of degree distribution , degree correlations , distribution of clustering coefficient , as well as the diameter , which are partially determined by the parameters . analytical results show that the resulting networks are disassortative and follow power - law degree distributions , with a more general degree exponent tuned from 2 to ; the clustering coefficient of each individual node is inversely proportional to its degree and the average clustering coefficient of all nodes approaches a large nonzero value in the infinite network order ; the diameter grows logarithmically with the number of network nodes . all these results reveal that the networks described by our model have the small - world effect and a scale - free topology . complex networks , scale - free networks , disordered systems , networks
it is well established that the magnetic field that permeates the solar corona has a highly complex structure .although it is very difficult to measure directly the magnetic field vector in the corona , this complexity can be inferred from observations of the line - of - sight magnetic field at the photosphere . with each new satellite mission that is launched ,we observe photospheric magnetic flux concentrations on ever smaller scales ( that seem to exhibit a power - law distribution with size , * ? ? ?magnetic field extrapolations based on these observed photospheric polarity distributions exhibit an often bewildering degree of complexity . understanding the evolution of such a complex magnetic field structure is a major challenge . in recent years, significant progress has been made in developing tools with which to characterise the coronal magnetic field .one approach involves segregating the photospheric magnetic field into discrete flux patches .this then allows the corona to be divided into distinct domains , each defined by the flux connecting pairs of these patches . between these coronal flux domainsare _ separatrix surfaces _ , that emanate from magnetic null points .the intersection of two such separatrix surfaces forms a _ separator _ field line a field line that connects two null points and lies at the intersection of four flux domains .indeed , magnetic field extrapolations reveal the presence of a web of null points , separatrix surfaces , and separators that form a _ skeleton _ based upon which the magnetic connectivity of the coronal field may be understood ( e.g. * ? ? ?* ; * ? ? ?the separatrix surfaces of this skeleton represent locations at which the mapping between boundary points via the magnetic field lines exhibits discontinuities . also of interest are layers in which this field line mapping exhibits strong ( but finite ) gradients .these are known as _ quasi - separatrix layers _ ( qsls ) , being regions at which the _ squashing factor _, , is large .null points , separators , and qsls , at which the field line mapping is either discontinuous or varies rapidly , are of interest not only in analysing the structure of the coronal magnetic field , but for understanding its dynamics .this is because these locations are prime sites for the formation of current layers at which magnetic reconnection may occur , releasing stored magnetic energy ( * ? ? ?* and references therein ) .in particular , they have been implicated in the formation of current sheets associated with solar flares , jets , and coronal mass ejections ( e.g. * ? ? ? * ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?one major piece of supporting evidence is the coincidence of flare ribbons with footpoints of separatrix and qsl field lines in coronal field extrapolations . one particular location at which the magnetic field line mapping is discontinuous is at the interface between closed and open magnetic flux , i.e. the boundary between magnetic field lines that are anchored at both ends at the photosphere , and those that extend out into the heliosphere .magnetic reconnection at this open - closed flux boundary is one of the principal mechanisms proposed to explain the properties of the slow , non - steady , solar wind ( e.g. * ? ? ?. the slow solar wind is characterised by strong fluctuations in both velocity and plasma composition , the latter of which is consistent with the wind being composed of some component of closed - flux coronal plasma ( e.g. * ? ? 
?reconnection specifically at the open - closed flux boundary is also implicated in the generation of impulsive _ solar energetic particle ( sep ) _ events , due to the observed ion abundances of these events .typically , computational models of the sun s global magnetic field exclude the outflowing plasma of the solar wind , but include its effect by imposing a magnetic field that is purely radial at some height above the photosphere ( termed the ` source surface ' ) . excluding all contributions to the solar magnetic field other than the global dipole, the coronal field is characterised by two polar coronal holes of open magnetic field lines and a band of closed flux around the equator , these two being separated by separatrix surfaces that meet the base of the heliospheric current sheet ( hcs ) at the source surface .the question arises : when the full complexity of the coronal field is introduced , what is the nature of the boundary between open and closed flux ? in a series of papers , fisk and co - workers ( e.g. * ? ? ?? * ; * ? ? ?* ) developed a model for the dynamics of the sun s open magnetic flux , that was also used to explain the acceleration of the solar wind mediated by reconnection between open and closed field lines ( termed _ interchange reconnection _ by * ? ? ? * ) . in their model , open field lines can freely mix with and diffuse through the closed field regions , and indeed it is predicted that this open flux component should become uniformly distributed throughout the ( predominantly ) closed field region . while noting that such a scenario requires the presence of current sheets in the corona between open and closed flux , these studies do not address the magnetic field structure in detail .indeed , the topological admissibility of such free mixing of open and closed flux has since been questioned , making it difficult to reconcile the interchange reconnection solar wind acceleration mechanism with the broad observed latitudinal extension of the slow solar wind streams ( up to , especially at solar minimum ) ( e.g. * ? ? ?nonetheless , recent modeling of the global coronal magnetic field has suggested a resolution to the apparent contradiction that plasma that appears to originate in the closed corona is observed far from the hcs at large radii .it has been demonstrated that additional regions of open flux that are disconnected from the polar coronal holes ( at the photosphere ) may indeed exist .the distinct photospheric regions of open magnetic flux are partitioned by multi - separatrix structures associated with multiple nulls points , typically comprising a dome - shaped separatrix enclosing the closed flux between the two open field regions , intersecting with a vertical _ separatrix curtain _ .even when coronal holes are not disconnected , there may exist very narrow channels of open magnetic flux at the photosphere connecting two larger open flux regions . in this casethe narrow channel is associated with a qsl curtain .both the qsl and separatrix curtains extend out into the heliosphere , and have been shown to map out a broad latitudinal band around the hcs , termed the _ s - web _ .the corresponding arc structures at the source surface in global models are associated with pseudo - streamers in the observations , and there is growing evidence that these structures are associated with slow solar wind outflow ( e.g. * ? ? 
?the above studies have revealed that the open - closed flux boundary has a complex topological structure involving null points and their associated separatrices , separators and qsls .moreover , it has been recently demonstrated that when reconnection occurs in astrophysical plasmas , the 3d topological complexity can dramatically increase beyond that of the equilibrium field ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?in this paper we use simple static magnetic field models to investigate the implications of reconnection for the magnetic field connectivity at the open - closed flux boundary when the reconnecting current layer exhibits a fragmented structure , expected to be the typical case in the corona . the paper is structured as follows . in section [ 3dtearsec ]we summarise recent relevant results on current layer instabilities . in sections[ domesec ] and [ curtainsec ] we investigate the topological effect of reconnection in configurations defined by an isolated separatrix dome and a separatrix curtain . in sections[ discusssec ] and [ concsec ] , respectively , we present a discussion of the results and our conclusions .in recent years a major advance in our understanding of magnetic reconnection has been the realisation that the reconnection rate can be substantially enhanced when the current layer breaks up in response to a tearing instability .while the linear phase of the classical tearing mode is slow , non - linear tearing in two dimensions ( 2d ) via the _ plasmoid instability _ can grow explosively , and lead to a reconnection rate that is only weakly dependent on the resistivity . in 2d as the instability proceeds a myriad of magnetic islands are formed as current layers fragment into chains of x - type and o - type nulls .the conditions for onset of the instability are that the ( inflow ) lundquist number be , and the current layer aspect ratio be .it has recently been demonstrated that the plasmoid instability also occurs in three - dimensional ( 3d ) current layers . performed 3d particle - in - cell simulations of an initially planar , infinite current layer .they noted that magnetic ` flux ropes ' formed in place of the magnetic islands from the 2d picture with flux often threading in and out of multiple flux ropes . by contrast studied mhd simulations of a 3d magnetic null point undergoing external shear driving .they observed the initial formation of a laminar current layer centred on the null , which was found to become unstable at a threshold similar to the 2d case ( lundquist number , aspect ratio ) .this threshold onset condition is very likely to be exceeded for typical current sheets formed in the corona . in this study, it was demonstrated that the onset of tearing leads to the creation of new 3d null points in bifurcation processes . in particular ,3d spiral nulls are formed that are the analogue of 2d islands , and the spine lines of each of these nulls forms the axis of a pair of magnetic flux ropes , as shown in figure [ wptop ] .crucially , and in contrast to the 2d case , these flux ropes are open structures they are not surrounded by flux surfaces as 2d islands are ( this is required on topological grounds due to the variation of along the direction of the flux rope axis and the solenoidal condition on ) . 
as a result , no new isolated domains of magnetic connectivity are formed . rather , the flux ropes are composed of a mixture of flux from the two connectivity domains ( flux located above and below the separatrix of the single null prior to the instability ) , wrapped around each other . the result is that , when the field line connectivity is analysed , a layer is found in which magnetic flux from the two connectivity domains is rather efficiently mixed in spiral patterns associated with the multiple flux rope structures . our intention here is to investigate the effect on the global connectivity when the fragmented , reconnecting current layer is embedded in some generic coronal field structures . [ figure [ dometop ] : ( a ) the isolated dome topology , and ( b ) after the addition of one flux ring ( state 1a ) ; sample field lines in the flux ropes are coloured red . green boxes outline the regions in which connectivity and q maps are calculated in figures [ dome_oc_map ] , [ dome_q_map ] . ] we first examine the simplest generic coronal configuration containing a 3d null point , the isolated dome topology ( see figure [ dometop](a ) ) . such a configuration is always present , for example , when a photospheric region of one magnetic polarity is embedded in a region of opposite polarity . such 3d nulls are preferred sites of current sheet formation , and the significance of reconnection in an isolated dome configuration has been considered , for example , by . as demonstrated by , spine - fan reconnection in such a dome topology is characterised by a transfer of flux in one side of the separatrix dome and out the other side , which can be driven dynamically or occur during a relaxation process as the coronal field seeks a minimum energy state . in order to examine the effect of tearing of the null point current sheet on the field structure we consider the following simple model . the fields that we construct are not equilibrium fields ( e.g. they are not force - free ) ; however , this is of no importance since we do not consider here any dynamical processes . rather , our purpose is solely to examine the field topology / geometry that results from breakup of a reconnecting current layer . we consider the ( dimensionless ) field where . this magnetic field contains a null point at above a photospheric plane represented by , see figure [ dometop](a ) . the isolated dome topology appears over a wide range of scales in the corona . hereafter we discuss any length scales in terms of a characteristic ` macroscopic ' length scale of the overall structure , that we denote . in our model field both the separatrix dome diameter at and the null point height are of order 1 , so for the model field . from magnetic field extrapolations it is observed that in the corona this scale of the dome separatrix structure can be as large as a few hundred Mm ( usually in the vicinity of active regions , e.g. * ? ? ? * ; * ? ? ? * ) , and at least as small as a few tens of km ( in quiet - sun regions , where the lowest null point height in extrapolations is likely limited by the magnetogram resolution , e.g. * ? ? ? * ) .
onto the ` background ' field ( [ bdome ] ) we superimpose a magnetic flux ring to simulate the topological effect of magnetic reconnection occurring at some particular location in the volume . this method is motivated and described in detail in . the field of this flux ring is taken to be of the form . we begin by superimposing a single flux ring of this form onto the field of equation ( [ bdome ] ) , centred at the null ( i.e. we set , ) . and are the characteristic size of the flux ring in the -plane ( plane of ) and along , respectively . for small , the effect of adding the flux ring is to collapse the spine and fan of the null point towards one another , as described by . this has the effect of transferring flux in one side of the dome and out of the other , and is consistent with the topology of a single laminar reconnection layer at the null . however , for larger values of the field becomes elliptic at as the flux ring field dominates over the hyperbolic background field . the result is a bifurcation of the original null into three null points as described above in section [ 3dtearsec ] ( see figure [ wptop ] ) , and the generation of a pair of flux ropes ( see figure [ dometop](b ) ) . this models the magnetic topology when the current layer undergoes a spontaneous tearing instability as observed by . parameters for the magnetic field denoted state 1b are presented in table [ tbl ] . in order to visualise the new field structure created after tearing onset we first plot a connectivity map of field lines from the lower boundary . that is , we trace field lines from a grid of footpoints on the lower boundary and distinguish field lines that are closed ( return to the lower boundary ) and open ( exit through the upper boundary ) . the resulting map can be seen in figure [ dome_oc_map ] . while the flux ropes are approximately circular near the apex of the dome , they are compressed towards the separatrix and stretched in the azimuthal direction by the global field geometry , and thus appear as flattened spiral structures in the connectivity map ( figure [ dome_oc_map](a ) ) .
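to make the construction of such a connectivity map concrete , the following sketch traces field lines with a fixed - step fourth - order runge - kutta integrator and labels each photospheric footpoint as open or closed . since the explicit expressions for the background field ( [ bdome ] ) and the flux ring are not reproduced above , the dome field and ring profile below are generic stand - ins chosen only for illustration , and all parameter values are assumptions rather than those of table [ tbl ] .

```python
import numpy as np

def b_dome(p, b0=0.4, q=1.0, zsrc=-0.5):
    """Hypothetical stand-in for the dome field of equation ([bdome]): a uniform
    vertical field plus a buried point source, giving a parasitic polarity at the
    photosphere and a single coronal null above it (near z ~ 1.08 for these values)."""
    r = np.array([p[0], p[1], p[2] - zsrc])
    return np.array([0.0, 0.0, -b0]) + q * r / np.linalg.norm(r) ** 3

def b_ring(p, amp=0.05, a=0.3, b=0.3, zc=1.08):
    """Hypothetical flux ring, azimuthal about a horizontal axis through the null,
    standing in for the (unreproduced) flux-ring field; it is divergence-free since
    its amplitude depends only on the distance from the ring axis and on y."""
    x, y, z = p[0], p[1], p[2] - zc
    rho = np.hypot(x, z)
    if rho < 1e-12:
        return np.zeros(3)
    env = amp * np.exp(-(rho / a) ** 2 - (y / b) ** 2)
    return env * np.array([-z / rho, 0.0, x / rho])

def b_total(p):
    return b_dome(p) + b_ring(p)

def classify(p0, ds=0.02, max_steps=5000, z_top=4.0):
    """Trace a field line from a photospheric footpoint (fixed-step RK4 on the
    unit field direction) and label it open, closed or undecided."""
    p = np.array(p0, dtype=float)
    sign = 1.0 if b_total(p)[2] > 0.0 else -1.0      # always start moving upwards
    unit = lambda v: v / (np.linalg.norm(v) + 1e-30)
    for _ in range(max_steps):
        k1 = unit(sign * b_total(p))
        k2 = unit(sign * b_total(p + 0.5 * ds * k1))
        k3 = unit(sign * b_total(p + 0.5 * ds * k2))
        k4 = unit(sign * b_total(p + ds * k3))
        p = p + ds * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        if p[2] <= 0.0:
            return "closed"
        if p[2] >= z_top:
            return "open"
    return "undecided"

# Open/closed connectivity map on a photospheric grid (cf. figure [dome_oc_map]).
xs = np.linspace(-2.0, 2.0, 41)
ys = np.linspace(-2.0, 2.0, 41)
is_open = np.array([[classify((x, y, 0.01)) == "open" for x in xs] for y in ys])
print("fraction of open footpoints:", is_open.mean())
```

the tracing direction is chosen from the sign of the vertical field at the footpoint , so that every trace starts upwards from the photosphere and both polarities beneath the dome are labelled closed .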
in order to more clearly visualise this structure we reproduce the map in a polar coordinate system in figure [ dome_oc_map](b ) . the observed spiral pattern of mixing of open and closed flux reproduces the behaviour in the dynamic mhd simulations , see figure 8 of . [ figure [ dome_oc_map ] : open / closed connectivity maps ; ( b ) in polar coordinates ; ( c ) state 1c containing three flux rings . ] one can include the effect of a further breakup of the current layer through the inclusion of additional flux rings . adding two such flux rings centred at the non - spiral nulls of state 1a leads each of these nulls to undergo a bifurcation , resulting in a total of seven nulls . this naturally introduces additional spiral structures in the field line mapping , as shown in figure [ dome_oc_map](c ) ( state 1c , for parameters see table [ tbl ] ) , and if one were to iterate this procedure by adding more flux ropes a mapping with complexity of the order of that seen in the mhd simulations of could be obtained . [ figure [ dome_q_map ] : q on the top boundary , for ( a ) state 1a , ( b ) state 1b , and ( c ) state 1c . ] we now turn to consider the characteristics of the open flux that exits the domain through the top boundary at . since all of this flux is open , a connectivity map does not reveal this structure . however , we can use for example the _ squashing factor _ , , to visualise the field line mapping from to . here we plot on the surface . the distribution of is obtained by integrating field lines from a rectangular grid of typically around footpoints and then calculating the required derivatives using finite differences over this grid . is formally infinite on spine and fan field lines since they represent discontinuities in the field line mapping . however , calculating numerically as we do here , they show up only as sharp points and lines , respectively , with very high values of . one should therefore not attach physical meaning to the maximum value of in the plots ( attained at the separatrix / spine footpoints ) as it is determined entirely by the resolution of the field line grid . in the background dome topology of equation ( [ bdome ] ) a single spine line intersects the boundary , and the -map displays a single maximum at the origin . when a null point bifurcation occurs during reconnection , the topological structure changes to that shown in figure [ wptop ] .
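as a concrete sketch of this procedure , the squashing factor can be estimated by finite differences of the field line mapping between the two boundaries . the code below uses the commonly quoted form q = ( a^2 + b^2 + c^2 + d^2 ) / | ad - bc | , where ( a , b ; c , d ) is the jacobian of the mapping ( x , y ) -> ( X , Y ) ; any field - strength weighting sometimes included in the definition is omitted . the mapping function is a toy analytic example with a thin layer of large q , so the sketch runs stand - alone ; in practice it would be replaced by tracing field lines of the model field from the top boundary down to the photosphere , as in the tracer sketched earlier .

```python
import numpy as np

def footpoint(x, y, delta=0.05):
    """Field line mapping from a point (x, y) on the top boundary to its footpoint
    (X, Y) on the photosphere. A toy analytic mapping with strong gradients at
    y = 0 is used here so the sketch runs on its own; in practice (X, Y) would be
    obtained by tracing field lines of the model field downwards."""
    return x + np.tanh(y / delta), y

def squashing_factor(xs, ys, h=1e-4):
    """Q on a grid of launch points, from centred finite differences of the
    mapping (x, y) -> (X, Y): Q = (a^2 + b^2 + c^2 + d^2) / |ad - bc|,
    with (a, b; c, d) the Jacobian of the mapping."""
    q = np.zeros((len(ys), len(xs)))
    for j, y in enumerate(ys):
        for i, x in enumerate(xs):
            Xp, Yp = footpoint(x + h, y)
            Xm, Ym = footpoint(x - h, y)
            Xu, Yu = footpoint(x, y + h)
            Xd, Yd = footpoint(x, y - h)
            a = (Xp - Xm) / (2.0 * h)
            b = (Xu - Xd) / (2.0 * h)
            c = (Yp - Ym) / (2.0 * h)
            d = (Yu - Yd) / (2.0 * h)
            det = a * d - b * c
            q[j, i] = (a * a + b * b + c * c + d * d) / max(abs(det), 1e-12)
    return q

xs = np.linspace(-1.0, 1.0, 101)
ys = np.linspace(-0.5, 0.5, 101)
Q = squashing_factor(xs, ys)
print("maximum Q on the grid:", Q.max())   # large values pick out the layer at y = 0
```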
in this casea vertical separatrix extends up to the top boundary , bounded on either side by a pair of spine lines .examining the distribution on for state 1a ( figure [ dome_q_map]a ) , the separatrix footprint is clearly in evidence ( horizontal line of high ) .increasing the strength of the flux ring in the model ( state 1b ) leads to a lengthening of this separatrix due to the increased separation of the nulls ( figure [ dome_q_map]b ) .a further breakup of the current sheet leads to the appearance of multiple vertical separatrices , as seen in figure [ dome_q_map](c ) .there are two additional noteworthy features of the distributions .first , note the arcs of high and low that run parallel to the separatrix footprint .these become more pronounced and numerous as the flux ring strength is increased ( compare figures [ dome_q_map](a , b ) ) .their origin can be understood as follows .consider field lines traced down from the top boundary that enter one of the flux ropes .some local bundles of field lines will spiral around the rope axis and then ` leave ' the flux rope at its top or bottom ( in ) with a range of values of but roughly constant values ( being the azimuthal angle in the -plane ) .due to their range of values on leaving the rope they diverge in the azimuthal direction as they are traced onwards to the lower boundary and therefore exhibit relatively high values .by contrast , adjacent field lines that leave the flux rope along its sides at approximately equal values but differing values are naturally squeezed in towards the fan as they approach the photosphere they do not diverge with the null point fan geometry owing to their close alignment in the direction .this leads to a lower . for a stronger flux ring ( more substantial flux rope )field lines have the opportunity to spiral multiple times around the rope axis , leading to multiple stripes . the second feature to note are the high- ridges emanating from each spine footpoint in the -direction , that are present for the following reason . adding the flux rings naturally generates a strong field component in the plane. therefore the two non - spiral nulls have quite asymmetric fan eigenvalues ( their ratio is around 2.5 in state 1a ) .the weak field direction corresponds to the -direction , and it is natural that is largest in this weak - field region of diverging fan field lines . to test the robustness of the structures in described above , two -maps were calculated using magnetic fields taken from the dynamic mhd simulation of . to avoid discontinuities in brought on by the corners of the domain we calculated using the foot points of field lines traced from a fixed grid on the top boundary to a cylindrical surface defined by , red and grey surfaces in fig .[ sim_q_map](c ) respectively .[ sim_q_map](a ) shows on the top `` open '' boundary soon after tearing has occurred in the layer see also fig . 5 of .note that is the vertical direction in the simulation domain and that the spine - fan collapse occurs in the plane , fig [ sim_q_map](c ) . at this time there is a single pair of flux ropes within the current layer . 
despite significant fine structure ( likely resulting from turbulent dynamics in the outflow region ) the structures in our simple model described above are clearly evident also in the dynamic mhd simulation . a short high - q line corresponding to a vertical separatrix surface is apparent near , whilst a number of parallel stripes of can be seen extending to either side of it . additionally , two high - q ridges emanate from the ends of this separatrix surface . the observed closed loop of high q results from the flux rope pair being located in the reconnection outflow , having detached from the open - closed boundary , see the discussion of . this splits the field lines that connect from to into two bundles : those that connect directly from the top boundary to the side and those that loop first around the back of the flux rope pair . the foot points of the latter are found within the loop of high q . note that the field line connectivity changes continuously around the boundary of this loop , so the value of is large but finite . at the later time two pairs of flux ropes are present in the outflow region of the current layer , resulting in an additional separatrix footprint being present , fig . [ sim_q_map](b ) . the gap observed between the pair of vertical separatrix footprints ( in contrast to fig . [ dome_q_map](c ) ) is again a result of the detachment of the nulls from the open - closed boundary . [ figure [ curtaintop ] : red and blue spheres correspond to nulls of opposite topological degree , whilst the two separators are shown in green and purple . green boxes outline the regions in which connectivity maps are calculated . ] we conclude that tearing of the reconnecting current layer at an isolated coronal null separatrix dome leads to the formation of an envelope around the initial dome structure in which magnetic flux from inside and outside the dome is efficiently mixed together . additionally , vertical separatrix curtains are formed during each null point bifurcation . the implications of these results will be discussed in section [ discusssec ] . isolated separatrix dome structures associated with a single null as considered in the previous section separate small pockets of closed flux from the open flux in the polar regions ( as well as being prevalent in closed flux regions ) . however , it is also typical to have much more complicated separatrix configurations separating open and closed flux . in particular , in global field extrapolations it is seen that vertical _ separatrix curtains _ lie between coronal holes that are of the same polarity but are disconnected at the photosphere . these curtains , together with qsls associated with narrow corridors of open flux , are associated with arc structures at the source surface in global models that are interpreted as being associated with _ pseudo - streamers _ . we consider here a simple model containing a vertical separatrix surface representing one of these curtains . this intersects a separatrix dome associated with three coronal null points along two separator lines . the separatrix curtain consists of the fan surface of one of these nulls ( see figure [ curtaintop ] ) . the magnetic field expression for our model is as follows . again , a characteristic length scale of the overall structure is of order 1 in the model field ( say the null point height or separation , see figure [ curtaintop ] ) , that we refer to as . on the sun , this scale is observed to be as large as a quarter of the solar radius ( Mm , e.g. * ? ? ? * ) ,
and may be at least as small as tens of kilometers in quiet - sun regions , as discussed before . note that to find the separators in these models the `` progressive interpolation '' method was used , whereby field lines were traced from a ring encircling one of the associated null points on its fan plane to identify the approximate position of each separator , before using an iterative bisection procedure to find each separator to a desired accuracy . connecting the three null points along the top of the separatrix dome are a pair of separator field lines , fig . [ curtaintop ] . like 3d null points , these are known to be preferred sites for current sheet formation and magnetic reconnection ( e.g. * ? ? ? * ) . it has been previously observed that these sheets are prone to fragment , yielding a current layer containing multiple separators . we thus begin by considering the effect of superimposing first one and then more flux rings to simulate the effect of tearing in a reconnecting current layer around the separator . such a model with a single flux ring was presented by using a background field with two nulls , both with initially planar fan surfaces . they noted that as they increased the strength of the flux ring new separators appeared , coinciding with the formation of distinct new domains of magnetic flux connectivity . by magnetic flux domain here and throughout we mean a volume within which there is a continuous change of field line connectivity . distinct flux domains are bounded by separatrix surfaces at which this connectivity change is discontinuous . we observe the same effect when adding a single flux ring on the separator , state 2a ( see table [ tbl ] ) . tracing field lines from the photosphere ( ) and making a connectivity map as before , we observe the presence of a region of open flux nested within the closed field region , figure [ sep_oc_1rope_phot](a , b ) . increasing the strength of the flux ring , we observe progressively more open and closed flux volumes being created , nested within one another , fig . [ sep_oc_1rope_phot](c ) , state 2b . the formation of these new flux domains corresponds to the formation of new pairs of separators joining the two associated nulls . figure [ sep_diagram](a ) demonstrates this for state 2b . whereas originally one separator joined the central and end nulls , the formation of the three nested flux domains ( fig . [ sep_oc_1rope_phot](b ) ) corresponds to the birth of three additional pairs of separators , giving seven in total ( green and purple field lines ) ; see below for a further discussion . [ figure [ sep_oc_1rope_phot ] : ( a ) connectivity map on the surface , and ( b ) close - up connectivity map , both for state 2b . the black region contains footpoints of field lines that connect to the photosphere on one side of the separatrix dome / curtain structure , in the white regions field lines connect to the other side . the letters correspond to the letters marking the flux tubes in the 3d plot of figure 10 . ( c ) connectivity map for state 2c . ] we now turn to examine the connectivity of field lines that extend outwards into the heliosphere ( those that exit through the top boundary ) . throughout we consider the surface as being the ` top ' boundary ; field lines are close to vertical above this plane and so little deformation of the field line mapping occurs .
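stepping back briefly to the separator - finding procedure mentioned above : the details of the `` progressive interpolation '' are not given here , so the following is only a generic sketch of the underlying idea of scanning a ring of fan field lines and bisecting in angle wherever the connectivity changes . the connectivity function below is a toy stand - in for the actual field line tracing .

```python
import numpy as np

def connectivity(angle):
    """Stand-in for: launch a field line from a small ring in the fan plane of a
    null at azimuthal angle `angle`, trace it to the boundary, and return a label
    for the flux domain in which it ends. A toy two-domain function is used here
    so the sketch runs on its own."""
    return "domain_a" if np.cos(angle) > -0.3 else "domain_b"

def refine_separator(theta_lo, theta_hi, tol=1e-8):
    """Bisection in ring angle between two launch angles whose connectivity labels
    differ; converges on the fan field line lying at the intersection of the two
    separatrix surfaces, i.e. on the separator."""
    lab_lo = connectivity(theta_lo)
    while theta_hi - theta_lo > tol:
        mid = 0.5 * (theta_lo + theta_hi)
        if connectivity(mid) == lab_lo:
            theta_lo = mid
        else:
            theta_hi = mid
    return 0.5 * (theta_lo + theta_hi)

# Coarse scan of the ring, followed by refinement of each connectivity change.
angles = np.linspace(0.0, 2.0 * np.pi, 73)
labels = [connectivity(a) for a in angles]
separator_angles = [refine_separator(angles[i], angles[i + 1])
                    for i in range(len(angles) - 1) if labels[i] != labels[i + 1]]
print("approximate separator angles on the ring:", separator_angles)
```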
in figure [ sep_oc_1rope_top](a ) a map of plotted on the top boundary ( as calculated between the two surfaces and ) for state 2b .note that a colour scale is not shown since the maximum value is arbitrary , depending only on the resolution of the field line grid .we observe the imprint of the separatrix curtain , as well as additional nested loop structures that correspond to additional separatrix surfaces separating nested flux domains . in figure [ sep_oc_1rope_top](b )a connectivity map is plotted field lines that intersect the black region connect to the photosphere at on one side of the separatrix dome / curtain structure , while field lines intersecting the white region connect to the photosphere on the other side of the dome , .embedding our structure in a global field these two different regions would correspond to open field regions of the same polarity that are disconnected at the photosphere ( see e.g. figure 5 of * ? ? ?* ) , and thus the figure shows that flux from the two disconnected coronal holes forms a mixed , nested pattern .these nested connectivity regions are entirely equivalent to those described in the photospheric connectivity maps above . it is expected that in a dynamic evolution there is continual reconnection of field lines within the current layer , and thus a mixing of plasma between all of the nested flux domains ( see e.g. * ? ? ?as such , field lines at large height in these nested flux regions will continually be reconnected with those from the closed flux region .the addition of further flux rings representing further plasmoid structures in the reconnecting current layer leads to the formation of additional adjacent sets of nested open / closed flux domains .figure [ sep_oc_1rope_top](c ) shows the connectivity map when three flux rings are present , state 2c .the complexity quickly becomes very high , with extremely thin layers of connectivity with characteristic thickness of order even though the flux rope structures and their collective footprint in the solar wind remain much larger , having diameters of order .the inclusion of further flux ropes would decrease the length scales in the mapping yet further .this complexity of field line mapping was observed in a related context by .the direct association between the newly - created nested flux domains and additional separators that form in the domain is demonstrated in the right - hand frame of figure [ sep_diagram](a ) . herewe note that the seven separators lie at the intersections of the four different connectivity regions .the nested formation of flux regions and the associated pairs of separators may be understood as follows .consider a bundle of field lines passing in along the open spine of the null at . as the flux ring strength is increased some of these field lines are wrapped repeatedly around the axis of the flux rope before they reach the photosphere .field lines reach the photosphere near spine footpoints of the central null , either at or .they may do this directly , or by first winding once , twice , or more times around the flux rope axis .this is demonstrated in figure [ newfig ] , where flux tubes are plotted from each of the nested connectivity domains that intersect the upper boundary of state 2b ( marked ` a ' , ` b ' and ` c ' in figure [ sep_oc_1rope_top](b ) ) . 
each additional winding corresponds to a new flux domain . this is because as the strength of the flux ring is increased the separatrix surfaces of the two nulls ( red and cyan curves in the right panel of figure [ sep_diagram](a ) ) fold over to intersect with one another multiple times . each additional pair of intersections corresponds to a pair of new flux domains ( bounded by portions of the separatrices ) and a pair of new separators ( see the right - hand image of figure [ sep_diagram](a ) and figures 3 and 4 of * ? ? ? * ) . field lines wind more times closer to the flux ring axis , and so new topological domains are formed within the previous ones along with a pair of separators . this explains the nested nature of the connectivity domains observed in fig . [ sep_oc_1rope_phot ] and why each new pair of separators has one half twist more than the preceding two , fig . [ sep_diagram](a ) . so far we have modelled the case in which the current layer forms along the separator , with this current layer breaking up but the number of nulls remaining fixed , that is , no null bifurcations occurred . however , another distinct possibility is that the current layer that forms in response to a dynamic driving of the system contains one or more of the coronal nulls . when such a current layer breaks up we would expect a bifurcation of the corresponding null point(s ) , which naturally should also coincide with the formation of additional separators ; indeed this may well occur even when the current is focussed away from the nulls , as observed by . we consider here two distinct cases , in the first of which the null point initially at is bifurcated into multiple null points , and in the second of which the central null ( initially at ( 0,0,1 ) ) is bifurcated . consider first the situation where the end null is bifurcated . first , adding a single flux ring of sufficient strength at the initial location of the null , we obtain a bifurcation to form three nulls as in figure [ wptop ] ( state 3a ) ; see figure [ sep_diagram](b ) . let us now examine the result for the magnetic flux connectivity , considering first flux intersecting the photosphere . in figure [ sep_oc_1ropeen](a ) we see that the photospheric connectivity map appears as before : new nested open and closed flux domains are created in the vicinity of the spine footpoints of the central null . the connectivity map for open flux traced from the upper boundary is shown in figure [ sep_oc_1ropeen](c ) ( where the colours have the same meaning as before ) . as shown in the q - map ( figure [ sep_oc_1ropeen]b ) , the main separatrix curtain is diverted in the positive -direction for negative . it terminates on the spine of one of the null points located in the vicinity of . there is then an additional separatrix footprint orthogonal to this , bounded by the spines of the newly created nulls as in figures [ wptop ] , [ dome_q_map ] . interestingly though , the arcs of high q emanating from this separatrix ( as in figure [ dome_q_map ] ) now form the boundaries of the flux domains that connect to opposite sides of the dome footprint . this can be understood by considering that each arc of high q in fig . [ dome_q_map ] represents a further half turn of field lines along the axis of the flux rope , i.e.
the outermost two arcs correspond to field lines exiting along one or other spine of the central null having wound up to once around the flux rope axis , the next pair to field lines that first wind between once and twice around the flux rope axis , and so on . when these field lines are mapped on to the photosphere , as in the domed single null case , this leads to the continuous but rapid change in connectivity denoted by the ridges . when such field lines separate along the spine of a distant null the change in connectivity becomes discontinuous , forming the nested flux domains , see also section [ subsubtop ] . as before , the connectivity maps quickly become significantly more complex when additional flux ropes are added . figures [ sep_oc_1ropeen](d , e ) show the photospheric and upper boundary connectivity maps when an additional two flux rings are added to generate a bifurcation to a state with three flux rope pairs and seven nulls , state 3b . again , characteristic length scales of the mapping layers of order or below are observed within a mixed flux region with dimensions of order . [ figure [ sep_oc_1ropeen ] : ( b ) q on the upper boundary ; ( c ) close - up connectivity map on the upper boundary with colours as in figure [ sep_oc_1rope_top ] ; note that the -direction is stretched in ( c ) for clarity . ] finally , suppose that the fragmentation of the coronal current layer leads to a bifurcation of the central null point . adding a single flux ring leads to a bifurcation to a state with three null points ( state 4 ) as before . analysis of the resulting topology reveals a situation that mirrors state 3a . as shown in figure [ sep_oc_1ropecn](a ) , this time the photospheric connectivity maps show adjacent crescent - shaped domains of open and closed flux , symmetric about , since the null bifurcation now leads to a bifurcation of both of the initial separators . correspondingly , nested flux domains of alternating connectivity are now observed in the connectivity map for the upper boundary ( figure [ sep_oc_1ropecn](b ) ) , this time emanating from the footpoints of both of the vertical open spines . as shown in the above models , reconnection at the sun's open - closed flux boundary can result in that boundary taking on a highly non - trivial structure . in the presence of an isolated null point separatrix dome no new flux domains are created , but an envelope forms around the initial dome structure in which magnetic flux from inside and outside the dome is efficiently mixed together in spiral patterns . as shown by , magnetic flux is continually and recursively reconnected from open to closed and back again within this envelope ( i.e. is reconnected back and forth multiple times between open and closed regions * ? ?
?the result for the field at large heights is that a flux tube is present around the original spine line within which field lines are being continually reconnected with those from the closed region beneath the dome .if we consider a more complicated structure in which coronal separators are present , the breakup of the current layer leads to the formation of new flux domains .in particular , new open and closed magnetic flux domains form in nested structures , whose length scales become rapidly shorter ; even for the models considered here containing just three flux rope pairs characteristic length scales of the mapping layers of order or smaller are observed .the expectation is that in a dynamic evolution , continual transfer of flux / plasma between the narrow open and closed layers would occur .the new regions of flux are observed to form in the vicinity of the footpoints of spine field lines in the pre - reconnection field .together they cover a region of comparable scale to the distribution of current and flux rope structures , here of order .our results imply that in the vicinity of open spine structures and open separatrix curtain structures , an efficient mixing of open and closed magnetic flux , and the associated plasma , is likely to take place whenever reconnection occurs at the corresponding nulls or separatrices .this is an attractive ingredient for explaining observed properties of the slow solar wind by the interchange reconnection model . in particular ,the slow solar wind is known to be highly fluctuating in both composition and velocity ( in both space and time ) , with the composition properties varying from close to those of the closed corona to nearly photospheric .contributing factors to this fluctuating , filamentary structure could be the bursty nature of the interchange reconnection , and the complex spatial structuring on large and small scales of the open - closed boundary .combining our results with those from simulations of current layer instabilities , it is clear that the reconnection process should lead to a highly dynamic magnetic topology in which regions of open and closed flux are born and evolve in a complex pattern .interchange reconnection models for solar wind acceleration share the common feature that they require regions of open flux at photospheric heights that are at least predominantly surrounded by closed flux .this is consistent with observations of significant components of solar wind outflow emanating from locations adjacent to active regions ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? 
?the models of fisk and co - workers take a statistical approach to the evolution of the sun s open flux , in which they assume a random orientation for the closed loops and thus an isotropic diffusion of open field lines as they random walk through the closed flux region .one piece of observational evidence that is argued to support their model of open field dynamics is the coincidence of extended open field regions with minima in the local rate of flux emergence .interestingly , the models also predict that the random walk of open field lines will induce a braiding of field lines in the heliosphere , and this is also a prediction of the mhd simulations of ( see figure [ sim_q_map ] ) , though in our model the braiding is induced by turbulent dynamics in the reconnecting current layer .the s - web model , by contrast to the models of fisk and co - workers , seeks to identify explicitly the locations of possible interchange reconnection and thus outflow by analysing the detailed magnetic field topology . in a given magnetic field extrapolationopen field channels and patches can be identified , however there are indications that the number of disconnected open field regions can greatly increase when the resolution of the photospheric magnetogram is increased .such an increase corresponds to an increase in the number of arcs in the s - web . in this paper , we have argued that when interchange reconnection occurs at a structure of the open - closed boundary and the full reconnection dynamics are included it will tend to generate a layer within which open and closed flux are mixed .so in this picture , each separatrix arc of the s - web becomes not a single line but a band within which the open - closed flux boundary is highly structured .this may partially mitigate against the concern that the static s - web is not space - filling and therefore can not provide a continuous slow solar wind outflow .the dimensions of this layer of mixed open - closed flux are of course crucial , but can not be readily estimated from the present approach .they will depend both on the size and geometry of the fragmented reconnecting current layer , and the overall global field geometry that connects this volume out to the heliosphere in any given situation .while the latter can be estimated from static models , elucidating the former will require a full , detailed dynamical understanding of the reconnection process .we note also that our results perhaps provide a ` bridge ' between the fisk et al . 
and s - web models , in the sense that the broad bands of mixed open - closed flux expected to form around the arcs of the s - web can be thought of as regions within which the diffusive random walk envisaged by fisk and co - workers becomes highly efficient . the indication is then that the broadly uniform diffusion coefficient in those models should in fact be highly structured in both space and time . however , determining the spatial and temporal distribution of these regions of efficient open - closed flux mixing , and thus their overall effect , will require a statistical study of the structures of the open - closed flux boundary , together with the results of the dynamical simulations suggested above . we now consider the implications of our results for signatures of particle acceleration in topologies involving coronal null points and separators . of course , explaining sep and flare ribbon observations requires knowledge of particle behaviour , and that remains to be studied . however , the above results allow us to make a number of predictions . consider first impulsive sep events . a series of recent observational studies has shown that impulsive sep sources are located in open field regions magnetically well - connected with the target ( * ? ? ? * and references therein ) . however , since their composition is more indicative of closed field regions , it has been proposed that they are accelerated directly during interchange reconnection in the low corona . the present results demonstrate that particles accelerated ( by some means ) during the reconnection process readily have access to open field lines , since all field lines are recursively reconnected from open to closed within the envelope of mixed flux described above . what's more , when open and closed flux are mixed into sufficiently thin layers , the distinction between the two is lost for the particles ; this is certainly the case if the layers are thinner than the larmor radius . furthermore , the dynamics in the current layer are most likely to be turbulent , and braiding of field lines is expected , both of which are known to lead to enhancement of cross - field particle transport ( e.g. * ? ? ?
* and references therein ) .our results , combined with those of , also provide insights into the expected structure of flare ribbons in coronal null point and pseudo - streamer topologies .in particular , it has been shown that the separatrix and qsl footprints often map onto the locations of the flare ribbons .however , the flare ribbons usually exhibit additional structure , often bright kernel - like structures that move along the ribbons .these features could correspond to the footpoints of the flux rope structures formed during 3d current sheet fragmentation , these being associated with bundles of efficiently mixed open and closed flux .if this were the case , one would expect the motion of the bright features to be linked to the velocity of the outflow from the reconnection region , multiplied by some factor resulting from the geometry of the magnetic connection between the reconnection site and the photosphere .however , as shown by , the flux ropes exhibit a complicated dynamics as they kink and interact with one another in the reconnection region , so the motion of their footprint on the photosphere is expected to deviate significantly from a simple advection .we note that performing the same procedure as herein adding a flux ring to simulate the effect of reconnection for a background field defined by a qsl or hyperbolic flux tube leads not to more structure in the associated -map , but simply to a break of the high- layer ( results not presented here ) .this may correspondingly imply that the signatures of bursty reconnection in such a field topology are different to the case when separatrices are present .one further specific conclusion that can be drawn regards the nature of the flare ribbons associated with the spine footpoints in the isolated dome topology . and reported observations of the flare ribbons in such a topology and noted that the ribbons that were postulated to be connected with the spine footpoints were extended structures with high aspect ratio ( rather than being circular ) .they proposed that this was related to the distribution of the squashing factor for field lines around the spine in the associated extrapolated equilibrium field .the results of section [ domesec ] provide us with an alternative hypothesis : the extended elliptical ribbons may mark out the imprint of the vertical separatrix curtains ( and surrounding arcs of high ) associated with a fragmented current layer .it may also be that both effects are important for the spine footpoint ribbon extension .of course it remains to be seen what accelerated particle distributions are expected during a dynamic reconnection event , and this will be pursued in a future study .magnetic reconnection in the solar corona is likely to occur in highly fragmented current layers , as demonstrated in recent 3d simulations . 
here we have used simple static magnetic field models to investigate the implications of current layer fragmentation for the large - scale topology of representative solar coronal field structures . we have shown that this fragmentation can vastly increase the topological complexity beyond that of the equilibrium magnetic field . in particular , when the fragmenting current layer forms at the open - closed magnetic flux boundary , the structure of that boundary can become highly complex . the results , however , are also relevant for the studied topologies in the case where all flux is globally closed ; in this case some flux closes at some distant point on the photosphere ( but is ` locally open ' ) . we considered here two principal topologies : the isolated null point dome and the separatrix curtain topology . both of these are observed over a broad range of characteristic scales in the corona , from hundreds of Mm down to tens of km . in the presence of an isolated dome , non - linear tearing of the reconnecting current layer leads to the formation of an envelope of magnetic flux around the initial dome structure in which flux from inside and outside the dome is efficiently mixed together . magnetic flux is continually and recursively reconnected from open to closed and back again within this envelope . the result for the field at large heights is that a flux tube is present around the original spine line within which field lines are being continually reconnected with those from the closed region beneath the dome . such isolated dome structures are typically found in abundance in coronal holes in magnetic field extrapolations . in the separatrix curtain ( ` pseudo - streamer ' ) topology of section [ curtainsec ] we saw that the breakup of the current layer leads to the formation of new flux domains . in particular , open and closed flux ( as well as flux from pairs of disconnected open field regions ) form in nested domains with very short length scales .
the thickness of the adjacent open and closed flux domains can be many orders of magnitude smaller than the global length scale of the field structure , or indeed of the flux ropes in the current layer ; in our models with only three flux ropes the mapping layers were two orders of magnitude smaller than the flux ropes . the expectation is that in a dynamic evolution , continual reconnection between the narrow layers of open and closed flux would occur within a flux envelope surrounding the new nested flux domains . our static models predict that in the corona immediately above the pseudo - streamer this envelope will cover a region of comparable scale to the distribution of current and flux rope structures . however , this would be expected to widen with height as the field strength reduces with radial distance from the sun and the field expands laterally . understanding particle acceleration in topologies such as those studied could help us comprehend both the release of impulsive seps to open field lines and the appearance of certain flare ribbons . in particular , the appearance of extended flare ribbons at spine footpoints in the dome topology could be related to the separatrix footprints that appear there when null point bifurcations occur as the current layer fragments . what's more , the efficient mixing of open and closed flux in the reconnection process provides a natural mechanism for accelerated particles to access the open field region . future studies of particle acceleration during the reconnection process will reveal much more . global magnetic field extrapolations are now revealing the huge complexity of the coronal field , and in particular the structure of the boundary between open and closed magnetic flux . regions of open flux that are either disconnected from the polar coronal holes at the photosphere or connected only by narrow open - flux corridors contribute arcs to the s - web . our results show that whenever reconnection occurs at a null point or separator of the open - closed boundary , the associated separatrix arc of the s - web becomes not a single line but a band of finite thickness within which the open - closed flux boundary is highly structured . the dimensions of this band are of course crucial , but cannot be readily estimated from the present approach . the next step then , to determine the importance of this effect , requires dynamical mhd simulations of the process in order to quantify the dimensions of this band and the flux associated with it . dp acknowledges financial support from the uk's stfc ( grant number st / k000993 ) and the leverhulme trust . pw acknowledges support from an appointment to the nasa postdoctoral program at goddard space flight center , administered by oak ridge associated universities through a contract with nasa .
table [ tbl ] : parameters for the model magnetic field states .

state & field & & & & & & & & &
1a & dome & 0.03 & & 0.025 & 0.02 & & & & &
1b & dome & 0.05 & & 0.025 & 0.02 & & & & &
1c & dome & 0.08 & & 0.025 & 0.02 & 0.05 & & & 0.005 & 0.005
2a & curtain & 0.18 & & 0.1 & 0.167 & & & & &
2b & curtain & 0.25 & & 0.1 & 0.167 & & & & &
2c & curtain & 0.25 & & 0.1 & 0.167 & 0.06 & & & 0.005 & 0.167
3a & curtain & 0.22 & & 0.1 & 0.333 & & & & &
3b & curtain & 0.3 & & 0.1 & 0.333 & 0.05 & & & 0.0025 & 0.333
4 & curtain & 0.2 & & 0.1 & 0.333 & & & & &
global magnetic field extrapolations are now revealing the huge complexity of the sun s corona , and in particular the structure of the boundary between open and closed magnetic flux . moreover , recent developments indicate that magnetic reconnection in the corona likely occurs in highly fragmented current layers , and that this typically leads to a dramatic increase in the topological complexity beyond that of the equilibrium field . in this paper we use static models to investigate the consequences of reconnection at the open - closed flux boundary ( interchange reconnection " ) in a fragmented current layer . we demonstrate that it leads to efficient mixing of magnetic flux ( and therefore plasma ) from open and closed field regions . this corresponds to an increase in the length and complexity of the open - closed boundary . thus , whenever reconnection occurs at a null point or separator of this open - closed boundary , the associated separatrix arc of the so - called _ s - web _ in the high corona becomes not a single line but a band of finite thickness within which the open - closed boundary is highly structured . this has significant implications for the acceleration of the slow solar wind , for which the interaction of open and closed field is thought to be important , and may also explain the coronal origins of certain solar energetic particles . the topological structures examined contain magnetic null points , separatrices and separators , and include a model for a pseudo - streamer . the potential for understanding both the large scale morphology and fine structure observed in flare ribbons associated with coronal nulls is also discussed .
we are witnessing a booming expansion of nanoparticle research and technology . synthesis methods in particular are making rapid progress . analysis methods , however , are not up to speed . a task as fundamental and simple as determining and controlling the size distribution of nanoparticles ( nps hereafter ) currently requires complex experimental work , involving electron microscopy and combined techniques . in this work we want to highlight the possibilities offered in this respect by a much less complex technique , powder diffraction . powder diffraction is a widespread technique with a great potential to meet the increasing demands of microstructural material characterization . the methods of powder diffraction data analysis have reached maturity for micrometer - sized polycrystalline materials . however , when the particle size falls much below 100 nm , specifically tuned methods of analysis are needed to extract meaningful information from powder diffraction patterns . in fact , nps present unique analytical challenges . in the most complex cases , non - crystallographic structures may occur . surface - related deformation fields are another challenge . in these extreme cases , the classical crystallographic formalism becomes quite useless . the debye scattering function ( that is , the direct evaluation of the np structure factor from the interatomic distances ) is the only choice in those cases . we are currently developing methods to increase the efficiency of such calculations and make them a practical tool . even for crystalline nps , however , the small size plays a decisive role . bragg peaks may be so much broadened that they cannot simply be separated , and many approximations , commonly accepted for micrometer - size domains , fail . as we will show , even models specifically corrected for nps may fail for ultra - small nps ( say below 5 nm diameter , as will be specified below ) . again , for these ultra - small sizes the debye scattering function is the only choice for obtaining precise results , while the smaller number of atoms makes it extremely practical . the plan of the paper is the following . in sec . [ sec1 ] we discuss the shape - based method for calculating np powder patterns in relation to the surface structure and to its limits of validity at small sizes . application to a full - pattern fit on a test case ( 20-nm ceo ) is shown in sec . summary and conclusions are given in sec . scherrer's formula is the best - known method for extracting size information from powder patterns ( namely , from the bragg peak widths ) . this is a simple method , but accurate only to the order of magnitude . however , since scherrer's work , line profile analysis has made enormous progress . theoretical progress on understanding the physical origin of peak broadening has focused on dislocation analysis , size broadening being considered as a side effect to be corrected for in order to determine the defect structure . nevertheless , today it is possible to determine the parameters of a ( log - normal ) size distribution of crystallites , together with information on type and concentration of dislocations . these methods are , however , complex and sophisticated , requiring a fairly high signal - to - noise ratio , a low and flat background , a precise deconvolution of the instrumental broadening and especially well - isolated bragg peaks . full - pattern fitting methods ( _ cf .
_ sec .[ sec2 ] ) are more direct and robust , especially when the target is the size analysis .firstly , they use all the experimental information , regardless of partial or total peak overlap , increasing redundancy and therefore precision and decreasing experimental requirement .furthermore , they allow the evaluation of a np - characteristic feature , namely the variation with size of the lattice parameter ( an effect that can be important below 20 nm ) .corrections for texture , microabsorption , anisotropic elastic peak shifts and instrumental broadening can also be implemented .an efficient and precise method to evaluate np diffraction patterns is needed to perform full - pattern fits .hereafter we discuss the shape - based method with a thorough analysis of its validity limits .we shortly recall some methods for the calculation of the powder diffraction intensity for a np with known periodic structure and definite size and shape . in the following the length of a vector be denoted by . accordingly, will be the scattering vector of length , where is the scattering half - angle and the incident wavelength ; shall denote the scattering vector associated with a bragg peak , its length being .a np occupies a geometrical region of space .we recall the definition of a shape function , such that if lies inside , otherwise .we shall hereforth suppose that so that its fourier transform is real .however , defining the shape of a crystal means also to describe what happens to the atoms on the surface .these are increasingly important at very small sizes .in fact , there are different ways of interpreting the action of , the most meaningful ones being : * truncating sharply the scattering density ( the electron density for x - rays ) at the surface ; * selecting all whole unit cells whose origins are in and all whole atoms whose centres lie in the selected cells ; * selecting all whole atoms whose centres are in .useful illustrations are found in fig . 1 of ref .( see figs . 1a , 1c and 1d , respectively for a , b , c ) .to evaluate the diffracted intensities , in cases b ) , c ) , one may utilize the debye function . in this way the chosen model is faithfully represented .it is possible , however , to proceed in a different way , that is , by the shape - function method .accordingly , we first evaluate the scattering amplitude .the explicit expressions are , for cases a , b , c : where is the reciprocal lattice ; is the fourier transform from the fourier amplitudes and from the related intensities , where is the unit cell volume . ] of , or and it satisfies because ; is the unit cell structure factor where the sum index runs on the atoms in the unit cell , which have form factors and position vectors ( relative to the cell origin ) ; is the same as the former but evaluated in ; and is the mixed expression it is evident that form a ) is simpler but by construction less reasonable - for electron and x - ray diffraction - than b ) and c ) .in fact , the sharp truncation of the electron density at the surface is unjustified . for neutron nuclear elastic scatteringthe atoms are point scatterers , therefore , construction a ) coincides with c ) .accordingly , in the neutron case , the atomic form factors are constant and . 
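as a minimal sketch of the debye scattering function referred to here , the code below evaluates the double sum over interatomic distances for a small cluster with q - independent scattering lengths , effectively the neutron / point - scatterer case just mentioned , with q = 2 sin ( theta ) / lambda . the cluster is a made - up example , not one of the structures analysed in this paper .

```python
import numpy as np

def debye_intensity(positions, b, qvals):
    """Debye sum I(q) = sum_ij b_i b_j sin(2*pi*q*r_ij)/(2*pi*q*r_ij),
    with q = 2*sin(theta)/lambda and q-independent scattering lengths b
    (point scatterers, i.e. the neutron-like case)."""
    pos = np.asarray(positions, dtype=float)
    b = np.asarray(b, dtype=float)
    diff = pos[:, None, :] - pos[None, :, :]
    r = np.sqrt((diff ** 2).sum(axis=-1))        # matrix of pair distances
    bb = b[:, None] * b[None, :]
    intensities = []
    for q in np.atleast_1d(np.asarray(qvals, dtype=float)):
        x = 2.0 * np.pi * q * r
        kernel = np.ones_like(x)                 # sinc -> 1 on the diagonal (i = j)
        mask = x > 0.0
        kernel[mask] = np.sin(x[mask]) / x[mask]
        intensities.append((bb * kernel).sum())
    return np.array(intensities)

# Made-up test cluster: atoms of a simple cubic lattice (parameter 0.4 nm)
# within a sphere of radius 1.2 nm. Illustrative only.
a0 = 0.4
grid = np.arange(-4, 5)
pts = a0 * np.array([[i, j, k] for i in grid for j in grid for k in grid])
cluster = pts[np.linalg.norm(pts, axis=1) <= 1.2]
q = np.linspace(0.5, 8.0, 300)                   # scattering vector length in 1/nm
pattern = debye_intensity(cluster, np.ones(len(cluster)), q)
print(len(cluster), "atoms; I(q) evaluated at", len(q), "q values")
```

note that the double sum scales with the square of the number of atoms per q value , which is why its direct evaluation is most practical for ultra - small particles and why efficiency improvements matter for larger ones .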
formb ) depends on an appropriate choice of the unit cell .clearly , it preserves the stoichiometric composition and symmetry .form c ) needs a careful implementation ( regarding the definition of ) to preserve stoichiometry , that is important for ionic compounds ; however , it is clearly more flexible .remark also that , in the case of monoatomic lattices , instead - as for simple - cubic , face - centered or body - centered cubic metals - construction b ) and c ) will be coincident and . squaring eqs .( [ eq : ampla],[eq : amplb],[eq : amplc ] ) we obtain the intensities . supposing centrosymmetric and real , we have here , we have neglected cross - summations of the form where overbar stands for complex conjugate and , for x = a , b , c , respectively , it is , or .neglecting is , first of all , a question of convenience , because its evaluation - either analytical or numerical - is a nightmare .there are obvious reasons for neglecting for large particles . consider a spherical particle with cubic structure with lattice parameter and radius . is large only for , and decreases as for .as for any bragg peak it is , can be neglected . for smaller particles the situation is different . in refs .it is proposed that is negligible due to a certain statistical ` smearing ' of the np surface region on a thickness of the order of the lattice parameter .however , this hypothesis can not be accepted by default .firstly , the order at the surface strongly depends on the considered crystal phase and on the actual sample .consider that for a np of diameter , the fraction of atoms included in a layer of thickness is ( about 50% at , still 12% at ) .the structure of this large fraction should be carefully considered on a case - by - case basis .relaxations in the core due to a disordered layer of thickness should also be considered . secondly , supposing a default smearing of the np boundaries flattens the different construction principles of forms a , b , c. in fact , the differences among them regard the finest details of the np surface structure .we shall hereafter assess the effect of neglecting on the calculation of a powder diffraction pattern .[ appa ] we carry out some relevant calculations .evidently this will depend on the choice of form a , b , or c. examples are reported in the following section . for form it turns out that , even when is not negligible , it yields a contribution that is approximately proportional to the retained term of the scattered intensity .this means that the effect of neglecting may be just a small error on the global scale factor for samples composed of particles of equal size .however , as this effect is size - dependent , it may hamper the evaluation of size distribution when this is not very narrow . a size - related correction factor for the scale factor may - and should - be evaluated ( see app .[ appa ] ) in this case .this of course is an undesired complication . in casesa ) and c ) the neglected term depends on the crystal structure ( see app .[ appa ] ) .it is not a constant scale factor for all bragg peaks , and it may have a significant gradient in the bragg peak positions .at very small sizes the latter may induce a systematic error also in the lattice constant determination .however , in the x - ray case , for form a ) is larger - and has a larger gradient in the bragg peak neighbourhood - than the corresponding term for form c ) . 
to obtain a powder diffraction pattern, we must integrate (x = a, b, c, see eqs. ([eq:ampla],[eq:amplb],[eq:amplc])) at constant . we write in polar coordinates as , where is the orientation defined by the pair . we have to integrate over the set of all orientations (with ), as . in detail, considering the expressions for the different cases, we have . the integration in case b) is much more difficult and cannot generally be expressed in closed form even for simple shapes. therefore, as a careful implementation of form c) is at least as good a description as form b), we shall disregard b) in the following. suppose now that is a sphere of radius and volume ; we then have, with $y = 2\pi q R$ (eq. [eq:ip2]), and, as , with $y = 2\pi (q^2 + h^2 - 2qh\cos\psi)^{1/2} R$ (eq. [eq:ip3]). substituting in eqs. ([eq:intap],[eq:intcp]) yields . now we consider the crystal's laue group, so that we can restrict the summation to the asymmetric part of the reciprocal lattice: , where is the multiplicity of subject to . the evaluation of is only slightly more complex than that of , and the gain in accuracy justifies the effort. we have computed test patterns to compare forms a) and c), considering nps of diameter , this being the lower size limit of validity of the shape-based approach. we have considered au spherical nps of diameter 5 nm ( = 0.40786 nm, = 0.154056 nm, , lorentz correction and debye-waller factor , with nm). the powder pattern was calculated exactly by the debye sum and by eqs. ([eq:intap3],[eq:intcp3]). the profiles shown in fig. [fig1]a are calculated on an absolute scale. they match quite well, but a maximum error is present in both cases a, c. the profile agreement index between and is 3.1%, between and it is = 4.4%. the difference profiles (fig. [fig1]b) show that has a similar shape to , while is quite different. accordingly, refining a scale factor between and lowers to 2.0% (with a featureless difference, fig. [fig1]c), while a scale factor between and yields = 3.5%, still with a characteristic difference profile. furthermore, the peak positions are shifted very little ( ) between and , while they are shifted by up to between and (fig. [fig1]d). then, we have considered znse spherical nps of diameter 4.8 nm ( = 0.5633 nm, = 0.154056 nm, , lorentz correction and debye-waller factor with nm). once more, the powder pattern was calculated exactly by the debye sum and by eqs. ([eq:intap3],[eq:intcp3]). the profiles, calculated on an absolute scale (fig. [fig2]a), match quite well, with a maximum error for both cases a, c. the profile agreement index between and is 1.8%, between and it is = 3.1%. the difference profiles (fig. [fig2]b) show again that has a similar shape to , while is quite different. accordingly, we have again refined a scale factor (and this time also a different debye-waller factor) between and ; decreases to 1.6% with a featureless difference (fig. [fig2]c). conversely, when refining a scale factor and debye-waller factor between and , the agreement index does not go below = 3.1%, and the difference profile changes little (fig. [fig2]c). again, the peak positions are shifted very little ( ) between and , while peak shifts up to between and are visible (fig.
[ fig2]d ) .form c ) again turns out to be less affected than a ) by neglecting the cross - term .a small variation of the debye - waller factor ( from 0.005 to 0.0047 nm ) is due to the fact that the -neglection error changes slightly the intensity ratios .this is however less troublesome than the peak shifts observed for form a ) .it results that at np diameters the errors in the shape - based diffraction pattern calculations , whatever form we choose , start to be evident .this approach should not be used below this threshold .also , form a ) - which is the standard choice for large particles - shows a much larger error and should be avoided in favor of c ) .there are several experimental and theoretical reasons to believe that np powders have a log - normal distribution of np size .the log - normal distribution of np radii is usually written in terms of its mode and width , as ^ 2}{2w_{r}^2}\right\}. \label{eq : lon0}\ ] ] the most direct information on a distribution is provided by the distribution - averaged np radius and the relevant standard deviation . for a log - normal , the latter parameters are related to the former by and we shall use a form depending directly on , . setting two adimensional parameters , , we have .\label{eq : lonx}\ ] ] volume- and area - averaged np diameters can be derived by - ray powder diffraction patterns of a nanocrystalline 20-nm ceo sample , available for a round - robin , were downloaded ( ` http://www.boulder.nist.gov/div853/balzar/ ` , ` http://www.du.edu/~balzar/s-s_rr.htm ` ) .the np size is well inside the limits of validity of the shape - based method . among the available datasets ,the selected raw data were collected at the nsls x3b1 beamline of the brookhaven national laboratory in flat - plate geometry , with a double - crystal si(111 ) monochromator on the incident beam ( , = 12(0.01)60 ) and a ge(111 ) analyzer crystal on the diffracted beam .three data preprocessing stages have been accomplished .first , the instrumental function has been deconvoluted by an original advanced technique , including denoising and background subtraction , described in ref . .secondly , the pattern has been fitted by generic asymmetric voigt profiles so as to obtain information about peak positions and intensities . by comparing the intensities as evaluated from the fit with the theoretical onesa small correction for texture and/or microabsorption has been evaluated .the intensity corrections so obtained have then been stored and used in the subsequent stages .finally , the peak positions were found to be slightly anisotropically shifted .this has been attributed to a small residual stress , due _e.g. _ to dislocations . to confirm this point ,we have evaluated the average lattice spacing variations {{{\ensuremath{{\bm{h}}}}}}=-\frac{\pi}{360}\cot(\theta_{{{\ensuremath{{\bm{h}}}}}})\delta(2\theta_{{{\ensuremath{{\bm{h}}}}}}) ] s with eq .( 28 ) of ref . .the magnitudes of the residual stress tensor components , at least for those which can be determined in this way , resulted to be in the range 110 mpa .the values of are below , and {{{\ensuremath{{\bm{h}}}}}}$ ] range in 17 , which are quite small values . as the strain broadeningis of the same order of magnitude of the peak shifts , we can confirm that strain broadening is rather small in the ceo sample and can be neglected , as in ref . 
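as a brief aside on the log-normal size distribution introduced earlier in this section, which is parametrized first by its mode and width and then by the distribution-averaged radius and standard deviation: since the explicit formulas are not legible in this excerpt, the sketch below assumes the standard log-normal form and the usual moment relations, together with the commonly used number-, area- and volume-weighted mean diameters.

```python
import numpy as np

def lognormal_pdf(r, r_mode, w):
    """Log-normal distribution of NP radii written in terms of its mode r_mode
    and width w (an assumed standard parametrization; the exact form in the
    source is not recoverable)."""
    mu = np.log(r_mode) + w**2            # log of the median, since mode = exp(mu - w^2)
    return np.exp(-(np.log(r) - mu)**2 / (2 * w**2)) / (r * w * np.sqrt(2 * np.pi))

def moment(r_mode, w, n):
    """Analytic n-th raw moment <R^n> of the log-normal above."""
    mu = np.log(r_mode) + w**2
    return np.exp(n * mu + 0.5 * n**2 * w**2)

def mean_and_std(r_mode, w):
    mean = moment(r_mode, w, 1)
    std = np.sqrt(moment(r_mode, w, 2) - mean**2)
    return mean, std

def weighted_mean_diameters(r_mode, w):
    """Number-, area- and volume-weighted mean diameters as commonly defined
    in line-profile analysis (assumed definitions)."""
    d_num = 2 * moment(r_mode, w, 1)
    d_area = 2 * moment(r_mode, w, 3) / moment(r_mode, w, 2)
    d_vol = 2 * moment(r_mode, w, 4) / moment(r_mode, w, 3)
    return d_num, d_area, d_vol

if __name__ == "__main__":
    r_mode, w = 9.0, 0.15                 # nm, hypothetical values for a ~20 nm sample
    print(mean_and_std(r_mode, w))
    print(weighted_mean_diameters(r_mode, w))
```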
also, the residual-stress peak shifts so obtained have been saved as fixed corrections for the subsequent stages. the total intensity diffracted by the powder np sample is described by the sum , where , ; is of eq. ([eq:intcp3]) evaluated at ; and is a polynomial modelling the background. the step is chosen so as to have an integer number of atoms in each -th x-ray sphere of radius , while keeping the point density constant and preserving stoichiometry. it is evidently possible to use a size-dependent lattice parameter in the calculation of ; for this sample this has been deemed unnecessary. indeed, for diameters of 20 nm, the lattice parameter of ceo has been found to be already equal to the bulk value. a least-squares full-pattern refinement means minimizing the quantity , where is the -th point of the experimental pattern corresponding to the scattering vector , is the number of experimental points and the weights are the estimated inverse variances of the observations. the refined parameters are: the average np radius (), the radius dispersion , the isotropic debye-waller factors for o and ce atoms, the cubic unit cell parameter and seven background coefficients. for the minimization we have used (for this work) a modified simplex algorithm, which is robust but time-consuming; computing times were nevertheless reasonable. a derivative-based algorithm (newton, in progress) should give a substantial acceleration. the final results are given in tab. [tab1], together with the corresponding values of ref. . the debye-waller factors turn out to be nm and nm. the calculated profile is plotted in fig. [fig4] with the experimental pattern and the profile difference. the excellent fit quality and the final gof value (1.21) indicate that a reliable result has been achieved. indeed, the estimated parameters are in good agreement with ref. . the slight discrepancy ( nm), larger than the standard deviations, might be explained by the improved deconvolution method applied here and by the use of the whole pattern instead of a limited number of peaks as in ref. .

table [tab1]: comparison of size distribution results. standard deviations are in brackets. units are nm.

the method of shape-convolution to calculate the diffraction pattern of np powders has been thoroughly discussed with respect to its limits of validity. concerns in applying this method below its optimal size range have been demonstrated theoretically and by simulated patterns. finally, the effectiveness of full-pattern powder data analysis based on the shape-convolution method was proved by obtaining precise size distribution information on an np powder sample with a log-normal distribution of spherical crystallites.
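a least-squares full-pattern refinement of the kind described above can be driven, for instance, with a derivative-free simplex minimizer. the sketch below is only a hypothetical scaffold: the model function is a dummy stand-in for the size-distribution-averaged pattern of eq. ([eq:intcp3]) plus a polynomial background, and the parameter list, data and weights are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def model_pattern(q, params):
    """Placeholder for the shape-based pattern summed over the size
    distribution plus a polynomial background (a dummy stand-in here)."""
    mean_r, sigma_r, scale, *bkg = params
    background = np.polyval(bkg, q)
    width = 50.0 / mean_r + sigma_r
    return scale * np.exp(-((q - 30.0) / width) ** 2) + background

def chi2(params, q, y_obs, weights):
    """Weighted sum of squared residuals; weights = estimated inverse variances."""
    residual = y_obs - model_pattern(q, params)
    return np.sum(weights * residual**2)

def refine(q, y_obs, weights, x0):
    # Nelder-Mead simplex: robust and derivative-free, but slow, as noted in the text.
    return minimize(chi2, x0, args=(q, y_obs, weights), method="Nelder-Mead",
                    options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q = np.linspace(10.0, 60.0, 500)
    truth = [10.0, 2.0, 100.0, 0.0, 5.0]
    y = model_pattern(q, truth) + rng.normal(0.0, 1.0, q.size)
    w = np.ones_like(q)                          # unit weights for the toy data
    fit = refine(q, y, w, x0=[8.0, 1.0, 80.0, 0.0, 0.0])
    print(fit.x, fit.fun)
```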
fig. [fig1]: an au np of 5.0 nm diameter with fcc structure ( = 0.40786 nm), constructed according to principle c) of sec. [sec2]; as the monoatomic fcc wigner-seitz unit cell contains one atom, principle c) here coincides with b). (a) the powder diffraction pattern: red, exact intensity calculated by the debye function; blue dotted, approach a), eq. ([eq:intap3]); green dashed, approach c), eq. ([eq:intcp3]); all intensities calculated on an absolute scale and then scaled by the same factor. (b) lower line (red), difference ; middle line (green), difference ; upper line (blue), the exact powder pattern (debye method) for comparison. (c) the same differences after refining a scale factor; note that the c)-type difference is flattened while the a)-type retains sharp contributions. (d) detail around the (111) peak (same coding as in part a) after scaling, showing a significant peak shift for the pattern.

fig. [fig2]: a znse np of 4.8 nm diameter with fcc structure ( = 0.5633 nm), constructed according to principle c) of sec. [sec2]; as the fcc wigner-seitz unit cell contains two atoms, construction c) differs from b). (a) the powder diffraction pattern: red, exact intensity by the debye function; blue dotted, approach a), eq. ([eq:intap3]); green dashed, approach c), eq. ([eq:intcp3]); all intensities on an absolute scale. (b) lower line (red), difference ; middle line (green), difference ; upper line (blue), the exact powder pattern for comparison. (c) the same differences after refining a scale factor and an overall isotropic debye-waller factor; again the c)-type difference is flattened while the a)-type retains sharp contributions. (d) detail around the (531) peak (same coding as in part a) after scaling, showing a significant peak shift for the pattern.

fig. [fig3]: lattice spacing variations plotted against the relevant peak diffraction angles. error bars assume a constant error of 0.0006 on the anisotropic angular peak shift; calculated values refer to the model of ref. where residual stress components have been refined.

fig. [fig4]: powder pattern final fit. blue diamonds, the observed deconvoluted intensity; red continuous line, the calculated intensity; black continuous line (below), difference profile (same scale).

assume we deal with particles of centrosymmetric shape and equivalent spherical radius (i.e., the radius of the sphere of equal volume
) .the shape fourier transform is then a real even function : recall also that the gradient of an even function is odd : our aim is to evaluate - for the different forms a ) , b ) , c ) as introduced in sec .[ sec2 ] and carried out in sec .[ sec22 ] , sec .[ sec23 ] - the neglected residual intensity contribution of eq .( [ eq : x ] ) with respect to the respective retained term ( _ cf .( [ eq : inta],[eq : intb],[eq : intc ] ) ) in the immediate vicinity of a bragg peak .let the nearest bragg peak to .first note that , if , is of order , so we neglect it altogether . if is very close to , set ( so ) .we can drop in the sum over all terms with because they are and reorder the second sum , obtaining at the same time , for with , the intensities of eqs .( [ eq : inta],[eq : intb],[eq : intc ] ) can be approximated by the -th term of the rhs sum , neglecting terms of .furthermore , in general , .therefore , the ratios are given by note that , because of eqs .( [ eq : even],[eq : oddg ] ) , we have we can immediately veryfy that in case b ) it results in the sum above the term with index is always accompanied by a term with index .setting also , and using eq .( [ eq : oddg ] ) , we have where denotes an arbitrarily chosen half - space of the reciprocal lattice without the origin . now , expanding in taylor series at , we have note also in eq .( [ eq : aar2 ] ) that does not depend on the considered bragg reflection .therefore , we can write and the proportionality constant can be evaluated by eq .( [ eq : aar2 ] ) with .we can conclude that the effect of neglecting will be just a relative error on the global profile scale factor .this factor is size - dependent , however , therefore for size distribution analysis at small sizes it may be necessary to introduce a correction as from eq .( [ eq : aar2 ] ) .cases a ) , c ) , are more complex .we are interested to powder diffraction , where is to be integrated at constant , therefore we shall consider \label{eq : topo}\ ] ] expanding in taylor series at , we have we shall now develop and in cases a , c. first , recall that the atomic form factors are constants for neutron scatering and monotonically decreasing smooth functions in the x - ray case . in the latter case , furthermore , the form factors of different elements have remarkably similar profiles . for a structure with atoms in the unit cell ,it is then possible to approximate with appropriate constants . therefore the structure factor ratios appearing in eq .( [ eq : aar3 ] ) can be simplified as independent of .note that now we can write explicitly using eqs .( [ eq : aar3],[eq : topo ] ) and .\label{eq : xpl1}\end{aligned}\ ] ] splitting the sum , reordering in one part , using eq .( [ eq : even2 ] ) and recombining , we have .\label{eq : xpl}\ ] ] again as in eq .( [ eq : prb2 ] ) , we can pair terms with and . using eq .( [ eq : taup ] ) , we obtain .\label{eq : xpl2}\end{aligned}\ ] ] define now the arbitrary half - lattice as that defined by a plane passing through the origin and containing . the origin is excluded .we have .\label{eq : xplf}\ ] ] then , evaluating the gradient in , using eq .( [ eq : oddg ] ) , we have finally .\label{eq : xplg}\end{aligned}\ ] ] the gradient is a vector .we have to take its angular average to determine the effect on the powder pattern .this is done by simply taking the scalar product with : .\label{eq : palle}\end{aligned}\ ] ] for spherical shape , it will be ; therefore terms with will be zero and those with will be most important . 
both and are damped oscillatory functions with amplitude .as , the magnitudes of both and are of order .unfortunately , eq . ( [ eq : palle ] ) can not be estimated more in detail , because of the dependence from the ` reduced ' structure factors .however , we can assess that its importance would be smaller than the corresponding term for case a ) for x - ray scattering . in casea ) , we can trace the same steps as in case c ) but instead of the ` reduced ' structure factors we have to consider the ratios and in the analog sums of eq .( [ eq : xplf ] ) and eq .( [ eq : palle ] ) for and there will appear terms as .\label{eq : casea2}\ ] ] the most important terms for the powder pattern are again those with .the structure factors ( see eq .( [ eq : strf ] ) ) depend on form factors , and for these will be strongly different .this in turn will amplify the differences .therefore it is likely that for case a ) the effect of the neglected term will be significantly larger than for case c ) .the examples reported in sec .[ sec23 ] show just that .
the increasing scientific and technological interest in nanoparticles has raised the need for fast , efficient and precise characterization techniques . powder diffraction is a very efficient experimental method , as it is straightforward and non - destructive . however , its use for extracting information regarding very small particles brings some common crystallographic approximations to and beyond their limits of validity . powder pattern diffraction calculation methods are critically discussed , with special focus on spherical particles with log - normal distribution , with the target of determining size distribution parameters . a 20-nm ceo sample is analyzed as example .
since many decades , design codes ensure a very low probability that a building collapses under ordinary loads , like self weight , dead and live service load , or snow .nevertheless buildings still do collapse , from time to time .an extremely small fraction of collapses originates from unlikely combinations of intense ordinary load with very poor strength of the building .the majority of structural collapses are due to accidental events that are not considered in standard design .examples of such events are : gross design or construction errors , irresponsible disregard of rules or design prescriptions , and several rare load scenarios like e.g. earthquakes , fire , floods , settlements , impacts , or explosions .accidental events have low probability of occurrence , but high potential negative consequences . since risk is a combination of probability and consequences , the risk related to accidental eventsis generally significant . in 1968a gas explosion provoked the partial collapse of the ronan point building in london .this event highlighted for the first time the urgency for _ robust _ structures , enduring safety in extraordinary scenarios .since then , interes was driven by striking catastrophic collapses , until in 2001 the tragic collapse of the world trade center renewed the attention to the topic ( see e.g. and ) .the last decades , several design rules aimed at improving structural robustness have been developed ( see e.g. ) .accidental events can be classified into _ identified _ and _ unidentified _ .identified events are statistically characterizable in terms of intensity and frequency of occurrence .examples are earthquakes , fire not fueled by external sources , gas explosions , and unintentional impacts by ordinary vehicles , airplanes , trains , or boats . specific design rules and even entire codesare devoted to specific identified accidental events .unidentified events comprise a wide variety of incidents whose intensity and frequency of occurrence can not be described statistically , e.g. terrorist attacks or gross errors .the risk related to unidentified accidental events can be mitigated both by structural and nonstructural measures .nonstructural measures such as barriers and monitoring can reduce the probability that an accidental event affects the structural integrity , others like a wise distribution of plants and facilities can minimize the negative consequences of eventual collapses .otherwise , structural measures can improve local resistance of structural elements to direct damage , e.g. the design of _ key elements _ for intense local load , or the application of the _ enhanced local resistance _method .structural measures can also provide progressive collapse resistance , i.e. prevent spreading of local direct damage inside the structure to an extent that is disproportioned with respect to the initial event .usual strategies to improve progressive collapse resistance are compartmentalization of structures and delocalization of stress after local damage .stress delocalization can be obtained exploiting redundancy , plastic stress redistributions ( masoero , wittel et al . , 2010 ) , ties , and moment resisting connections . nowadaysseveral design codes employ the conventinal _ alternate load path method ( alpm ) _ to evaluate progressive collapse resistance , e.g. 
and .the method consists in removing one key element , generally a column or a wall , and measuring the extent of subsequent collapse .if the final collapse is unacceptably wide , some of the previously listed measures have to be employed . hence structures are first designed and subsequently tested to be robust - they are not conceived _ a priori_.this course of action excludes optimizations of the basic structural topology and geometry , that actually play a key role in the response to local damage , considering as an example the very different behavior of redundant and statically determined structures .anti - seismic design already contains some prescriptions that should be considered before starting a new design , e.g. geometric regularity on the horizontal and on the vertical planes .furthermore , anti - seismic _ capacity design _ requires a hierarchy of the structural elements ensuring that earthquakes can only provoke ductile collapse of the horizontal beams , while failure of columns and brittle ruptures due to shear are inhibited .differently , for what concerns progressive collapse resistance , optimal overall geometric features are not known , except for the concepts of redundancy and compartmentalization .furthermore , the idea of hierarchically maximizing progressive collapse resistance is completely absent . in this paper, we make a first step to cover this deficiency , showing that progressive collapse resistance can be improved by hierarchy in the overall geometry ( _ topological _ hierarchy ) and in the relative strength and stiffness of horizontal and vertical structural elements ( _ mechanical _ hierarchy ) .our approach incorporates the simulation of progressive collapse of regular 2d frames made of reinforced concrete ( rc ) subjected to the sudden removal of structural elements , following the alpm framework .we first describe the analyzed frame structures and briefly sketch the approach that is based on the _ discrete element method ( dem)_. after the model description , we present the results of the simulations , with focus on the effect of geometry and hierarchy on the activated collapse mechanisms and , consequently , on progressive collapse resistance .we consider two representative sets of regular 2d framed structures in fig .[ fig_struct]-a .each set consists of three frames with identical total width and different topological _ hierarchical level _ , where is the number of structural cells in a frame .the horizontal beams , excluded those of the secondary structure , carry a uniform load per unit length .the frames are made of rc with typical mechanical parameters of concrete and steel , as shown in table [ tabmecpar ] . the total height of the structure is kept constant , and two different height - bay aspect ratios of the structural cells are considered ( and ) . 
table [tabmecpar]. mechanical parameters of concrete and steel.

    concrete:
      specific weight               kg/m^3   2500
      young modulus                 N/m^2    30
      compressive strength (high)   N/m^2    35
      compressive strength (low)    N/m^2    0.35
      ultimate shortening           -        0.0035
    steel:
      young's modulus               N/m^2    200
      yield stress                  N/m^2    440
      ultimate strain               -        0.05

there exist several ways of introducing hierarchy into the topology of framed structures; here we call a structure "hierarchical" if it has a primary structure, made of few massive structural elements, that supports a secondary one. the latter defines the living space and has negligible stiffness and strength compared to the primary structure. the frames with and can be seen as reorganizations of those with . in detail, each column of the frames with corresponds to two columns of the frames with , and the same is valid for the beams, disregarding the first-floor beam of the frames with , which is simply deleted (see fig. [fig_struct]-a). analogously, the geometry of the frames with can be obtained starting from the frames with . the cross sections of the columns are square (see fig. [fig_struct]-b), with edges proportional to with factor . the beams have a rectangular cross section whose height is proportional to with factor , and whose base is proportional to with aspect ratio . the reinforcement is arranged as shown in fig. [fig_struct]-b, with area proportional to the area of the cross section by a factor for the columns (i.e. 8 when n=11), and for the beams (i.e. 4 when n=11). the damage areas (dotted in fig. [fig_struct]-a) contain the structural elements that are suddenly removed to represent an accidental damage event, following the alpm framework. the damage is identical for frames with the same , and is defined by the breakdown of one third of the columns on a horizontal line. the columns and beams removed from frames with correspond to the structural elements removed from frames with and . this kind of damage is employed to represent accidental events with a given amount of destructive energy or spatial extent, like explosions or impacts. in this work we do not explicitly simulate very local damage events like gross errors, which would be better represented by the removal of single elements. nevertheless, we will generalize our results to consider also localized damage events. we employ the discrete element method (dem) to simulate the dynamics after sudden damage. dem is based on a lagrangian framework, where the structure is meshed by massive elements interacting through force potentials. the equations of motion are directly integrated, in our case using a 5th-order gear predictor-corrector scheme, with time increments between 10 and 10 (see masoero, wittel et al., 2010). dem is an equivalent formulation to finite elements, converging to the same numerical solution of the dynamics if identical force-displacement laws are implemented. a detailed description of the algorithm for 3d systems can be found in (masoero, wittel et al., 2010), together with a discussion of its applicability. for this work, the code was restricted to 2d by allowing only two displacements and one rotation in the vertical plane. in (masoero, vallini et al., 2010), the dem model is tested against dynamic energy-based collapse analyses of a continuous horizontal beam suddenly losing a support. in the appendix, we compare our dem results to experimental observations of a 2d frame undergoing quasi-static column removal.
to the best of our knowledge, the literature still lacks experiments on the dynamic collapse of framed structures due to accidental damage. in the following we review only the essentials of our model, focusing on the details that are relevant for the application to 2d frames. we assume simplified force-displacement laws for the beam elements and for the hertzian contacts. predicting the collapse of real structures would require more specialized interactions than those used here, for example using the fiber approach for the cross sections. by contrast, we are interested in fundamental mechanisms of damage propagation within complex structural systems. in this research perspective, and according to a basic principle of statistical mechanics, minimizing the complexity of the local interactions improves the interpretation of the systemic response. despite the strong assumptions, in the appendix we show that our model can match experimental observations reasonably well. in a first step, the structure needs to be assembled from discrete elements and beams. fig. [fig_2d_mesh]-a shows the four types of spherical discrete elements (sde) that we employed. columns and beams are made of 9 sdes, with diameters and respectively, slightly smaller than the distance between them to prevent contact from occurring before local rupture. constrained sdes and connection sdes have the same diameter as column sdes. the constrained sdes are clamped to a plane that represents the ground by means of the hertzian contact model, discussed further in this section. pairs of sdes are connected by euler-bernoulli beam elements (ebe) that, when deformed, transmit forces and moments to their edge nodes, locally labeled 0 and 1 (see fig. [fig_2d_mesh]-b). the mass of an sde is defined on the basis of the ebes connected to it, namely , where labels the generic ebe connected to sde , and is the cross-sectional area of the structural element corresponding to the ebe. the external load is introduced by adding a mass to the beam sdes; is not treated directly as a force, to avoid downward accelerations of the sdes greater than gravity during free fall. for sufficiently small deformations, the ebes are linear elastic and exert a force proportional to the elongation and directed along the segment , a shear force proportional to the sum of the nodal rotations, and a bending moment proportional to the nodal effective rotations, defined as and . furthermore, we introduce damping by forces and moments directed opposite to , , and , and proportional to the time derivatives of (with factor /m) and of , (with factor ). geometric nonlinearity due to large displacements is considered by referring rotations and elongation to the segment . in the small-deformation regime of our simulations, is to a good approximation equal to the axial force inside the ebe, and thus perpendicular to .
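a minimal elastic version of the euler-bernoulli beam element just described (axial force proportional to the elongation, shear proportional to the sum of the nodal rotations, end moments proportional to the effective rotations measured from the chord) could look as follows. the stiffness coefficients are taken from standard beam theory, since the source does not report them explicitly, and damping and plasticity are omitted.

```python
import numpy as np

def ebe_elastic_forces(x0, x1, theta0, theta1, E, A, I, L0):
    """Elastic forces and moments of a 2D Euler-Bernoulli beam element between
    nodes 0 and 1 (no damping, no plasticity). Stiffnesses follow standard
    beam theory; signs and conventions are a sketch, not the source code."""
    d = x1 - x0
    L = np.linalg.norm(d)
    t = d / L                                   # current chord direction
    chord_angle = np.arctan2(t[1], t[0])

    elongation = L - L0
    phi0 = theta0 - chord_angle                 # effective nodal rotations
    phi1 = theta1 - chord_angle

    N = E * A / L0 * elongation                 # axial force along the chord
    V = 6 * E * I / L0**2 * (phi0 + phi1)       # shear ~ sum of nodal rotations
    M0 = E * I / L0 * (4 * phi0 + 2 * phi1)     # end moments ~ effective rotations
    M1 = E * I / L0 * (2 * phi0 + 4 * phi1)

    n_perp = np.array([-t[1], t[0]])
    f1 = N * t + V * n_perp                     # force transmitted to node 1
    f0 = -f1                                    # equal and opposite on node 0
    return f0, f1, M0, M1

if __name__ == "__main__":
    # hypothetical 0.3 m x 0.3 m concrete member, 1 m long, slightly deformed
    out = ebe_elastic_forces(np.array([0.0, 0.0]), np.array([1.0, 0.01]),
                             theta0=0.0, theta1=0.0,
                             E=30e9, A=0.09, I=6.75e-4, L0=1.0)
    print(out)
```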
neglecting the contributions of concrete in tension and of steel in compression, we set the yield thresholds in terms of and to : ideally plastic regime in bending is entered when .we obtain the bending yield threshold and the corresponding yielding effective rotation , neglecting the strength contribution of concrete and assuming a lever arm between upper and lower reinforcement equal to the height of the cross section : is the cross sectional moment of inertia of the ebe , and is the fraction of reinforcement in tension ( for columns and for beams , as in fig .[ fig_struct]-b ) . considers the beneficial compression effect compression in the ebe .we set assuming bending carried by the reinforcement alone , and that the strain in the reinforcement put under tension by equals the compressive strain due to , namely : in this way , eventual tension inside the ebe produces negative , and thus reduces .when yielding in bending occurs , plastic rotations are added at the edge nodes of the ebe .if only , with , is greater than , then only is applied to restore . differently ,if both and are greater than , both and are applied to restore . for the sake of simplicity , we assume yielding in bending uncoupled from yielding in axial direction . furthermore , we neglect yielding due to shear because small plastic deformations are generally associated with shear .we consider an ebe failed when excessive and are cumulated .for this purpose , the coupled breaking criterion : is employed . , , and are the maximum allowed plastic elongation , shortening , and rotation in uncoupled conditions .we consider high plastic capacity of the structural elements setting , , and ( see table [ tabmecpar ] ) . failed ebes are instantly removed from the system .we neglect ruptures due to shear assuming that , in agreement with a basic principle of capacity design , a sufficient amount of bracings ensures the necessary shear strength .the hertzian contact model is employed for the sdes to consider collisions between structural elements .the model consist of repulsive forces between partially overlapping sdes , damped by additional forces proportional and opposite to the overlapping velocity .we also set tangential forces that simulate static and dynamic friction , as well as damping moments opposed to the relative rolling velocity . a similar hertzian contact model is also employed for sdes colliding with the ground plane . in the following simulations we employ contact parameters that can be found in .we do not transcribe them because impacts do not affect significantly the collapse mechanisms sudied here .nevertheless , in general simulation algorithms for progressive collapse should consider impacts , because initial damage located at upper stories generates falling debris , and because impacts can drive the transition from partial to total collapse ( see ( masoero , wittel et al . , 2010 ) and ) .in granular dynamics , the contact parameters are generally set referring to the material of the grains . in our modelthe sdes represent large heterogeneous portions of structural elements , for which there are not conventionally defined contact parameters so far .we emply parameters yielding a qualitatively realistic dynamics ( e.g. 
the elements do not rebound or pass through each other ) , and chosen from sets of possible one that were defined through preliminary studies .such studies also indicated that the collapse loads of a beam due to debris impact varies of less than 15% upon orders of magnitude change in the contact parameters .the simulations are organized into two steps : first the structure is equilibrated under the effect of and gravity , then the ebes inside the damage area are suddenly removed , and the subsequent dynamic response is simulated .our aim is to quantify three _ collapse loads _ :* : maximum static load that the intact structure can carry ; * : minimum _ critical load _ that causes dynamic collapse after damage . applied statically to the intact structure first, it is then kept constant during the post - damage dynamic response . * : minimum load corresponding to total collapse after damage . by definition , . in our demmodel we do not have a straightforward unique measure of load , because the mass of the sdes depends on the external load and on the self weight of the structural elements .the mass of the beam sdes effectively acts as a distributed horizontal load . on the other hand , the columns at each storytransmit vertical concentrated forces either to other columns at a lower story , or to the horizontal transfer beam over the damage area .therefore we introduce a load measure that we call _ equivalent load _ , applied to the massless structure and analytically related to the geometry , the mass , and the activated collapse mechanism of the frames in the simulations .namely , is defined to produce the same static effect as the various masses and concentrated forces of the simulation frames , at the critical points where collapse is triggered .the derivation of the analytical expressions used in this work is shown in .for each analyzed structure , we first apply the entire structural mass . in a subsequent step ,the external load is increased until the intact structure collapses in static conditions .the collapse mechanism indicates what equivalent load expression should be used to compute .then we slightly decrease , equilibrate , introduce the damage , and calculate whether dynamic progressive collapse is triggered and to what an extent . performing several simulations with progressively smaller , the final extent of collapse changes from total to partial , and we employ again an adequate equivalent load to compute .if the structure collapses even when is reduced to zero , we start reducing the specific weight of the structural elements , i.e. the structural mass .when dynamic collapse does not occur anymore , an adequate equivalent load provides .once we obtain the collapse loads , we estimate the progressive collapse resistance referring to the _ residual strength fraction _ . actually , progressive collapse resistance is more directly related to , but the advantage of is that it can not be improved by simply strengthening the structural elements , which would increase both and .robustness - oriented structural optimization is required to increase , which therefore is a good indicator to compare different structural solutions . 
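the damped hertzian contact used earlier in this section for collisions between sdes and with the ground plane can be sketched as below. only the normal component is shown, and the stiffness and damping constants are placeholders, since the text states that the actual parameters were tuned to give a qualitatively realistic dynamics and are not transcribed.

```python
import numpy as np

def hertz_contact_force(x_i, x_j, v_i, v_j, r_i, r_j, k_n=1.0e9, c_n=1.0e4):
    """Damped Hertzian normal force between two overlapping spheres:
    F_n = k_n * overlap^{3/2} + c_n * (overlap rate), acting along the line of
    centres. k_n and c_n are hypothetical placeholder values."""
    d = x_j - x_i
    dist = np.linalg.norm(d)
    overlap = (r_i + r_j) - dist
    if overlap <= 0.0:
        return np.zeros_like(x_i), np.zeros_like(x_j)   # no contact
    n = d / dist                                        # unit vector from i to j
    overlap_rate = -np.dot(v_j - v_i, n)                # > 0 when approaching
    f_mag = max(k_n * overlap**1.5 + c_n * overlap_rate, 0.0)  # contact cannot pull
    return -f_mag * n, +f_mag * n                       # pushes the spheres apart
```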
in our model , the bending yield threshold not depend on the strength of concrete .therefore , setting the high value / mm , the mainly compressed columns get much stronger than the horizontal beams , that fail in bending ( see figs .[ fig_2d_bendcoll_before],[fig_2d_bendcoll ] ) .the resulting collapse mechanisms resemble triple - hinge and four hinges mechanisms , reflecting the large plastic capacity of the structural elements .s corresponds to the first breaking of an ebe . ]s corresponds to the application of the initial damage . ]if the initial damage triggers a bending mechanism , frames with undergo total collapse , while frames with lower hierarchical level initially suffer only a local collapse ( see fig .[ fig_par - tot - bend ] ) .the local collapse can nevertheless evolve to total collapse , if high applied load and plastic capacity cause the falling central part of the structure to dynamically drag down the lateral portions ( masoero , wittel et al . , 2010 ) .= 13kn / m ) and total ( =26kn / m ) bending collapse after damage of a frame with very strong columns , =11 , and .time corresponds to the application of the initial damage . ]the collapse loads , expressed in terms of equivalent loads , are summarized in fig .[ fig_mu - rsr - bend ] as a function of the hierarchical level , for different slenderness of the structural cells . in fig .[ fig_mu - rsr - bend ] , superscript indicates bending collapse mechanism .we employ equivalent loads referring to perfectly brittle or perfectly plastic bending failure ( see the appendix ) .the collapse loads decrease with , i.e. a slender structure seems weaker , and increase with , i.e. hierarchical frames are stronger .the residual strength fraction does not depend on , while hierarchical structures with low are more robust than homogeneous ones ( see fig .[ fig_mu - rsr - bend ] ) .in fact , the concentration of bending moment at the connection between a beam hanging above the damage area and the first intact column depends on the _ number _ of removed columns . in the simulations , we remove a constant fraction of one third of the columns on a horizontal line ( see fig . [ fig_struct ] ) .therefore homogeneous structures lose more columns and are less robust toward the bending collapse mechanisms . on the other hand ,since the number of removed columns is decisive , we expect that the hierarchical level does not influence toward bending collapse in case of single column removal .finally we consider the 2d frame as part of a regular 3d structure and divide the collapse loads in fig .[ fig_mu - rsr - bend ] by , i.e.by the tributary length of the beams in the direction perpendicular to the frame . in this way ,collapse loads per unit area are obtained in fig .[ fig_divl_mu - bend ] , showing that : does not influence and ; structures with slender cells are less likely to collapse entirely ; is independent from the hierarchical level ; is proportional to . for frames that undergo bending progressive collapse .] divided by , considering the 2d frames as part of regular 3d structures . 
]progressive compressive failure of the columns , also called pancake collapse , occurs when we set the compressive strength of concrete to a small value / mm ( see fig .[ fig_2d_pancake ] ) .this choice is unphysical but allows us to separate the effect of strength reduction from that of stiffness reduction in the columns .more realistic scenarios would involve columns with small cross section and highly reinforced , tall beams .the columns immediately next to the damage area are the first to fail under compression , and then progressive collapse spreads horizontally to the outside .we employ equivalent loads referring to the two limit cases of _ local _ and of _ global _ pancake collapse .local pancake collapse occurs when the bending stiffness of the beams is very low and when the compressive failure of the columns is very brittle . in this case, the overload after damage is entirely directed to the intact columns that are closer to the damage area , and collapse propagates by _nearest neighbor _ interactions . on the other hand ,high stiffness of the beams and large plastic capacity of the columns induce _ democratic _ redistribution of overload between the columns .consequently , the columns crush simultaneously triggering global pancake collapse .the collapse dynamics recorded in our simulations resembles global pancake . note that in the studied framed structures , all the columns have identical compressive strength without disorder .therefore , once the first two columns crush , pancake collapse can not be arrested .nevertheless , at some / mm our frames undergo partial collapse because the progressive failure of the columns can be arrested by the initiation of bending collapse .= 5 , , and =11 , . ][ fig_2d_mu - rsr_pank ] , where superscript indicates pancake collapse , shows that the collapse loads increase with the structural slenderness , because the columns have tributary area related to and compressive strength proportional to .furthermore , hierarchical structures with small appear to be stronger than homogeneous ones both in terms of and of . finally , the residual strength fraction toward pancake collapse is remarkably higher than that toward bending collapse ( cf .[ fig_mu - rsr - bend ] ) , and is neither influenced by the hierarchical level , nor by .in fact , toward global pancake mode is related to the _ fraction _ of columns that are initially removed at one story . in our simulations, we always remove one third of the columns at one story , and obtain the constant value , slightly smaller than a theoretical 2/3 because of dynamics .we showed how the dynamic strength after damage of 2d frames depends on the activated collapse mechanism .we can now drive a series of conclusions regarding the effect of damage extent , structural slenderness , and topological and mechanical hierarchy .bending collapse provokes a local intensification of bending moments at the connections between the transfer beams above the damage area and the first intact column .consequently , and the residual strength fraction decrease with the number of removed columns . 
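the statement that the residual strength fraction for global pancake collapse follows the fraction of removed columns can be illustrated with a one-line estimate: if the surviving columns of the damaged storey share the overload equally (democratic redistribution), the post-damage capacity scales with the fraction of columns left, which gives the theoretical 2/3 quoted above when one third of the columns is removed, before dynamic effects reduce it slightly.

```python
def global_pancake_rsr(n_columns, n_removed):
    """Democratic-redistribution estimate of the residual strength fraction
    for global pancake collapse: capacity after damage / capacity before
    damage ~ fraction of columns left at the damaged storey."""
    return 1.0 - n_removed / n_columns

print(global_pancake_rsr(12, 4))   # hypothetical frame losing one third of its columns -> 0.666...
```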
in analogy with fracture mechanics , structures that are prone to bending collapse correspond to notch sensitive materials , andthe number of removed columns corresponds to the crack width .if global pancake collapse is triggered , and decrease with the fraction of removed columns , which is analogous to plastic failure of materials that are not notch sensitive .consistently , corresponding to global pancake collapse is remarkably larger than that corresponding to bending collapse . the structural slenderness affects in general the collapse loads for both bending and pancake collapse modes .the effect of depends on the scaling of cross section and reinforcement of the structural elements , with the beam length and with the column height ( see the analytical results in , regarding the simulations in this paper ) .nevertheless turns out to be independent from , because is the ratio between two collapse loads with same scaling respect to and .considering structural topology , in case of bending collapse hierarchical structures are more robust toward initial damage with fixed spatial extent ( e.g. explosion , impact ) , and as robust as homogeneous structures toward single column removal ( e.g. design error ) .the reason is that toward bending collapse decreases with the number of removed columns at one story .this confirms the analogy with fracture mechanics , where notch sensitive hierarchical materials are tougher than homogeneous ones . on the other hand , considering global pancake collapse , structural hierarchy does not influence toward initial damage with fixed spatial extent , while hierarchical structures are more sensitive than homogeneous ones to single column removals .this is due to toward global pancake collapse decreasing with the fraction of removed columns .[ fig_2d_mu - rsr_pank ] shows that damaged frames undergoing global pancake collapse can carry the of the static ultimate load of the intact structure . since well designed structures can carry a remarkably greater than the environmental load expected when an accidental event occurs , related to global pancake collapsecan ensure structural robustness for most of the practical cases . on the other hand , related to bending collapse is generally much smaller , making structures vulnerable to accidental damage . in this work we considered idealized structures , with simplified geometry and mechanical behavior of the elements .reducing local complexity enables a better interpretation of the coral system response to damage .this study provides a basis of knowledge preceding the incorporation of more details and degrees of freedom , to investigate further aspects of progressive collapse .shear failures can cause brittle ruptures and reduce the collapse resistance of large structural elements .different locations of the initial damage may activate different collapse mechanisms .for example , damaging the upper stories would cause debris impacts , while removing external columns reduces without producing significant lateral toppling .the dem algorithm was already applied to 3d structures in ( masoero , wittel et al . , 2010 ) , showing that the bending and pancake collapse mechanisms persist also in 3d . on the other hand , in 3d structuresthe horizontal floor slabs improve the horizontal redistribution of loads and the catenary action , increasing the strength toward bending collapse and impacting debris ( see the appendix and , e.g. 
, ) .it is worth noting that horizontal ties and diaphragms increase the strength both after and before damage , causing a compensation that limits the effect on .finally , future works can incorporate a detailed description of structural connections , which are crucial for energy dissipation , catenary effect , and compartmentalization .coming back to the central theme of structural hierarchy , our results already suggest that hierarchical structures are more robust toward accidental damage. an optimal solution would be to design : 1 ) a primary frame made of few large elements , with columns weaker than the beams , and 2 ) a secondary structure , made of many smaller elements , which defines the living space and follows traditional design rules .the primary frame would provide topological hierarchy , maximizing toward bending collapse and enabling new possible compartmentalization strategies .the strong beams and weak columns of the primary frame would favor pancake collapse over bending collapse , and improve the vertical compartmentalization of high - rise buildings against falling debris . on the other hand , in real structures , the beams generally fail before the columns , andimposing the opposite is expensive .nevertheless , designing a strong - beam weak - column behavior _ only for the primary frame _ can significantly limit the extra cost .hierarchical structures can be a novel and somehow counterintuitive feature of robustness - oriented capacity design .planning structural hierarchy requires understanding the complex system response to local damage , and should drive the design process since the very beginning .by contrast , traditional design is focused on local resistance against ordinary actions , and considers robustness toward accidents only at the end .this generally leads to non - hierarchical structures with strong columns and poorly understood system behavior .in addition , anti - seismic capacity design requires plastic failure of the beams to precede columns rupture ( see e.g. ) .overcoming these contradictions is a challenge toward optimizing structures against exceptional events .22 [ 1]#1 [ 1]`#1 ` urlstyle [ 1]doi : # 1 s. alexander .new approach to disproportionate collapse ._ struct . eng ._ , 820 ( 23/24):0 1418 , 2004 . z.p .baant and y. zhou .why did the world trade center collapse ? - simple analysis .mech .- asce _ , 1280 ( 1):0 26 , 2002 . .actions on structures - part 1 - 7 : general actions - accidental actions . technical report en 1991 - 1 - 7 , bsi , 2004 . .design of structures for earthquake resistance .technical report , bsi , 2004 .design of steel framed buildings at risk from terrorist attack . __ , 820 ( 22):0 3138 , 2004 . a. calvi .il crollo delle torri gemelle : analisi dellevento e insegnamenti strutturali .master s thesis , politecnico di torino , 2010 .( in italian ) .carmona , f.k .wittel , f. kun , and h.j .fragmentation processes in impact of spheres .e _ , 770 ( 5):0 243253 , 2008 .cherepanov and i.e. esparragoza . progressive collapse of towers : the resistance effect ._ , 143:0 203206 , 2007 .chiaia and e. masoero .analogies between progressive collapse of structures and fracture of materials ._ , 1540 ( 1 - 2):0 177193 , 2008 .technical report , department of defence , 2005 .technical report , gsa , 2003 .h. gulvanessian and t. vrouwenvelder .robustness and the eurocodes .eng . int ._ , 2:0 161171 , 2006 .r. hamburger and a. 
whittaker .design of steel structures for blast - related progressive collapse resistance ._ modern steel constr ._ , march:0 4551 , 2004. r. lakes .materials with structural hierarchy . _, 361:0 511515 , 1993 .e. masoero ._ progressive collapse and robustness of framed structures_. phd thesis , politecnico di torino , italy , 2010 .e. masoero , p. dar , and b.m .chiaia . progressive collapse of 2d framed structures : an analytical model ._ , 54:0 94102 , 2013 . c. pearson and n. delatte .ronan point apartment tower collapse and its effect on building codes ._ j. perf .fac . -asce _ , 190 ( 2):0 172177 , 2005 . t. pschel and t. schwager ._ computational granular dynamics_. springer - verlag gmbh , berlin , 2005 .u. starossek. progressive collapse of structures : nomenclature and procedures ._ , 160 ( 2):0 113117 , 2006 .val and e.g. val .robustness of framed structures ._ , 160 ( 2):0 108112 , 2006 .vlassis , b.a izzuddin , a.y .elghazouli , and d.a nethercot . progressive collapse of multi - storey buildings due to sudden column loss - part ii : application ._ , 300 ( 5):0 14241438 , 2008 . w .- j .yi , q .- f .he , y. xiao , and s.k .experimental study on progressive collapse - resistant behavior of reinforced concrete frame structures ._ aci structural journal _ , 1050 ( 4):0 433439 , 2008 .in this appendix , we compare the numerical predictions of our dem model with the experimental observations in .we also briefly discuss some effects of catenary actions on collapse resistance .the experimental setup in consists of a plane frame made of reinforced concrete ( see fig .[ fig_exper](a ) ) .columns are square in section ( 200x200 mm ) , beams are rectangular ( 200 mm tall , 100 mm wide ) . everywhere ,the longitudinal reinforcement is symmetrically distributed within the cross section ( 4 steel bars ) .the strength and ultimate strain of concrete and steel are specified in , while the elastic moduli are not .the mid column at the first floor is replaced by jacks that provides an upward vertical force . in the middle of the top floor, a servo - hydraulic actuator applies a constant downward vertical force =109kn , to represent the self weight of upper stories .initially , and then it is progressively reduced to reproduce quasi - static column loss , until a bending mechanism triggers collapse ( see fig .[ fig_exper](b ) ) . during the experiments , the increasing values of the midspan inflection plotted against , to get the force - displacement reaction curve .the integral of the curve represents the energy dissipation capacity , which relates to the dynamic strength of the structure with respect to the activated collapse mechanism .our target is to capture the experimental reaction curve through dem simulations .the parametrization of our model , based on the geometry and mechanical data in , is straightforward .therefore , we focus on the discrepancies between model and experimental inputs , and a few necessary additional assumptions . regarding the overall geometry , we consider all the columns to be equally tall ( 1,100 mm ) , while in the experiments the columns at the first floor were taller ( 1,567 mm ) .this discrepancy should not have a significant effect on the collapse mechanism and the strength .the mechanical behavior of the real steel bars was strain hardening , with yielding at 416mpa , and rupture at 526mpa . 
in our model , we consider two limit cases of elastic - perfectly plastic behavior of the steel bars : _ weak steel _`` ws '' with yielding threshold set at 416mpa , and _ strong steel _ `` ss '' yielding at 526mpa . provide two measures of the ultimate tensile strain of the steel bars .we employ , which was measured on a longer bar segment , because in our simulation the strain develops within relatively long euler - bernoulli beam elements , ebes .we assume young moduli for the steel , and for the concrete . in order to better understand the development of catenary actions , we consider two limit cases of cross section behavior under tension : _ fully reacting sections _`` frs '' , where the concrete always contributes to the tensile stiffness , and _ partially reacting sections _`` prs '' , where the concrete cracks and only the steel provides axial stiffness as soon as the cross section goes in tension . furthermore , in order to focus exclusively on ruptures due to tensile strain in the steel , we allow for an infinite rotation capacity of the cross sections .we subject our model frames to gravity , but remain in the quasi - static regime by adding a high viscous damping force proportional to the velocity of each spherical discrete element .we repeat numerous simulations with fixed and , ranging from to values that are small enough to cause the quasi - static rupture of at least one ebe .we track the midspan deflection comparison in fig .[ fig_exper](c ) . in the experimental results ,as decreases , the system crosses several stages : ( i ) linear elastic mm , ( ii ) elasto - plastic mm , ( iii ) plastic hinges mm , ( iv ) catenary action mm , and ( v ) collapse .the transition from elastic to elasto - plastic is not evident from the curve , as well as that from plastic hinge to catenary action .by contrast , plastic hinges formation is clearly marked by a sudden change of slope at mm .our simulations do not capture the initial elasto - plastic stage because we do not model the non - linear elasto - plastic behavior of concrete .this leads to an overestimation of the stiffness before the formation of the plastic hinges . nevertheless , the additional strain energy produced by this approximation is negligible when compared to the energy dissipated in the subsequent stages , ie .the overestimation of the initial stiffness is irrelevant for the actual dynamic collapse . assuming weak steel ws , yielding at 416mpa , provides a good agreement with the experiment in terms of transition point to the plastic hinges stage . considering fractured concrete under tension yields the prs - ws curve , which underestimates the structural strength at large .the reason for this divergence can be that the steel hardens under strain , with reaction stress increasing from from 416mpa ( ws ) to 526mpa ( ss ) . this interpretation is supported by the fact that the prs - ws and prs - ss curves envelop the experimental one . in particular , the prs - ss curve reproduces well the last part of the experimental curve , as well as the collapse point . despite strong simplifying assumptions in the formulation , our dem model provides reasonably good quantitative predictions of the experimental results . for the simulations in the body of this paper , we always considered fully reactive cross sections with concrete that does not crack under tension . 
the frs - ws curve in fig . [ fig_exper](c ) shows that this assumption leads to an overestimation of the static collapse strength against column removal ( + 70% ) . let us conjecture that frs induce the same strength increase of + 70% in the structure without column removal , i.e. in . from a heuristic application of energy conservation , one can estimate the dynamic collapse load after sudden column removal by considering the mean in the catenary stage : from the experiment , and from the simulation with frs - ws . consequently , the increase in post - damage dynamic collapse strength due to frs is approximately % . in conclusion , assuming fully reactive cross sections causes an _ underestimation _ of the residual strength fraction , which our example quantifies as % . however , this assumption does not affect the main statement of this work on hierarchical structures .
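the heuristic energy - conservation estimate used above can be made concrete with a short numerical sketch . the snippet below only illustrates the work - energy argument for sudden column loss ( the pseudo - static capacity is the largest load whose work can be absorbed along the static reaction curve ) ; the force - displacement points are invented placeholders , not the measured curve of the test frame .

    import numpy as np

    def pseudo_static_capacity(u, p_static):
        # work-energy balance for sudden column loss: the capacity is the maximum
        # over u_d of (1/u_d) * integral from 0 to u_d of p_static(u) du.
        energy = np.concatenate(
            ([0.0], np.cumsum(0.5 * (p_static[1:] + p_static[:-1]) * np.diff(u)))
        )
        ratios = energy[1:] / u[1:]   # skip u = 0 to avoid division by zero
        return ratios.max()

    # illustrative static reaction curve only (displacement in mm, force in kn)
    u = np.array([0.0, 20.0, 100.0, 300.0, 450.0])
    p = np.array([0.0, 60.0, 70.0, 90.0, 110.0])
    print(pseudo_static_capacity(u, p))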
in this paper , we study the response of 2d framed structures made of rectangular cells , to the sudden removal of columns . we employ a simulation algorithm based on the discrete element method , where the structural elements are represented by elasto - plastic euler bernoulli beams with elongation - rotation failure threshold . the effect of structural cell slenderness and of topological hierarchy on the dynamic residual strength after damage is investigated . topologically _ hierarchical _ frames have a primary structure made of few massive elements , while _ homogeneous _ frames are made of many thin elements . we also show how depends on the activated collapse mechanisms , which are determined by the mechanical hierarchy between beams and columns , i.e. by their relative strength and stiffness . finally , principles of robustness - oriented capacity design which seem to be in contrast to the conventional anti - seismic capacity design are addressed . * keywords : * frames , progressive collapse , robustness , hierarchy
in healthy cells , a loopback mechanism involving the protein p53 is believed to cause growth arrest and apoptosis as a response to dna damage . mutations in the sequence of p53 that potentially interfere with this mechanism have been observed to lead to the onset of cancer . under normal conditions the amount of p53 protein in the cell is kept low by a genetic network built of the mdm2 gene , the mdm2 protein and the p53 protein itself . p53 is produced at an essentially constant rate and promotes the expression of the mdm2 gene . on the other hand , the mdm2 protein binds to p53 and promotes its degradation , decreasing its concentration . when dna is damaged , a cascade of events causes phosphorylation of several serines in the p53 protein , which modifies its binding properties to mdm2 . as a consequence , the cell experiences a sudden increase in the concentration of p53 , which activates a group of genes ( e.g. , p21 , bax ) responsible for cell cycle arrest and apoptosis . this increase in p53 can reach values of the order of 16 times the basal concentration . a qualitative study of the time dependence of the concentration of p53 and mdm2 has been carried out in ref . approximately one hour after the stress event ( i.e. , the dna damage which causes phosphorylation of p53 serines ) , a peak in the concentration of p53 is observed , lasting for about one hour . this peak partially overlaps with the peak in the concentration of mdm2 , lasting from to hours after the stress event . another small peak in the concentration of p53 is observed after several hours . the purpose of the present work is to provide the simplest mathematical model which describes all the known aspects of the p53 - mdm2 loop , and to investigate how robust the loop is to small variations in the ingredients of the model . the `` weak points '' displayed by the system , namely those variations in some parameters which cause abrupt changes in the overall behaviour of the loop , are worth investigating experimentally because they can contain information about how a cell becomes tumoral . the model we suggest is described in fig . the total number of p53 molecules , produced at constant rate , is indicated with . the amount of the complexes built of p53 bound to mdm2 is called . these complexes cause the degradation of p53 ( through the ubiquitin pathway ) , at a rate , while mdm2 re - enters the loop . furthermore , p53 has a spontaneous decay rate . the total number of mdm2 proteins is indicated as . since p53 activates the expression of the mdm2 gene , the production rate of mdm2 is proportional ( with constant ) to the probability that the complex p53/mdm2 - gene is built . we assume that the complex p53/mdm2 - gene is at equilibrium with its components , where is the dissociation constant and only free p53 molecules ( whose amount is ) can participate in the complex . the protein mdm2 has a decay rate . the constants and describe not only the spontaneous degradation of the proteins , but also their binding to some other part of the cell , not described explicitly by the model . the free proteins p53 and mdm2 are considered to be at equilibrium with their bound complex pm , and the equilibrium constant is called .
the dynamics of the system can be described by the equations in the second equation we allow a delay in the production of mdm2 , due to the fact that the transcription and translation of mdm2 last for some time after p53 has bound to the gene . the choice of the numeric parameters is somewhat difficult , due to the lack of reliable experimental data . the degradation rate through the ubiquitin pathway has been estimated to be , while the spontaneous degradation rate of p53 is . the dissociation constant between p53 and mdm2 is ( expressed as number of molecules , assuming for the nucleus a volume of ) , and the dissociation constant between p53 and the mdm2 gene is . in the absence of detailed values for the protein production rates , we have used typical values , namely and . the degradation rate of the mdm2 protein has been chosen to be of the order of , to keep the stationary amount of mdm2 of the order of . the behaviour of the above model is independent of the volume in which we assume the reaction takes place . that is , multiplying , , and by the same constant gives exactly the same dynamics for the rescaled quantities and . furthermore , due to the fact that the chosen parameters put the system in the saturated regime , an increase in the production rates and with respect to and will not affect the response . on the contrary , a decrease of and with respect to and can drive the system into a non - saturated regime , inhibiting the response mechanism . in the case that the production of mdm2 can be regarded as instantaneous ( no delay , ) , the concentration of p53 is rather insensitive to changes in the dissociation constant . the stationary values of and are found as fixed points of the equations [ eq1 ] ( see appendix ) , and in table i we list the stationary values of the amount of p53 molecules for values of spanning seven orders of magnitude around the basal value . moreover , transient oscillatory behaviour upon change in the dissociation constant is not observed . this is supported by the fact that the eigenvalues of the stationary points ( listed in table i ) have negative real parts , indicating stable fixed points , and rather small imaginary parts , indicating absence of oscillations . more precisely , the variation of the stationary amount of p53 when the dissociation constant undergoes a change can be estimated , under the approximation that ( cf . the appendix ) , to be . the fact that is approximately linear in , with a proportionality constant which is at most of the order of , makes this system rather inefficient as a response mechanism . furthermore , it does not agree with the experimental data , which show a peak of p53 followed , after several minutes , by a peak in mdm2 , and not just a shift of the two concentrations to higher values . to check whether the choice of the system parameters affects the observed behaviour , we have repeated all the calculations varying each parameter over five orders of magnitude around the values used above . the results ( listed in table ii for and , and not shown for the other parameters ) indicate the same behaviour as above ( negative real parts and no or small imaginary parts of the eigenvalues ) . consequently , the above results about the dynamics of p53 seem not to be sensitive to the detailed choice of parameters ( on the contrary , the amount of mdm2 is quite sensitive ) . the dynamics changes qualitatively if we introduce a nonzero delay in eqs . [ eq1 ] . keeping in mind that the half - life of an rna molecule is of the order of 1200 s , we repeat the calculations with .
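before moving to the delayed case , the fixed - point and linear - stability analysis just described can be illustrated with a short sketch . it implements one plausible mass - action reading of the model above ( constant p53 production , degradation through the p53 - mdm2 complex , mdm2 production saturating with free p53 , the complex kept at equilibrium ) ; the equations and every rate constant are placeholders chosen only to show the procedure , not the values used in this work .

    import numpy as np
    from scipy.optimize import fsolve

    # illustrative placeholder rates (not the paper's values)
    s_p, k_ub, k_p = 1.0, 3.0e-4, 1.0e-4    # p53 production, degradation via complex, spontaneous decay
    s_m, k_m, k_gene = 1.0, 1.0e-4, 30.0    # mdm2 production, mdm2 decay, p53 / mdm2-gene dissociation
    k_diss = 200.0                           # p53 - mdm2 dissociation constant

    def complex_pm(p, m, kd):
        # equilibrium amount of the p53 - mdm2 complex (smaller root of the mass-action quadratic)
        s = p + m + kd
        return 0.5 * (s - np.sqrt(s * s - 4.0 * p * m))

    def rhs(x):
        p, m = x
        pm = complex_pm(p, m, k_diss)
        free_p = p - pm
        dp = s_p - k_ub * pm - k_p * p
        dm = s_m * free_p / (k_gene + free_p) - k_m * m
        return np.array([dp, dm])

    x_star = fsolve(rhs, x0=[1000.0, 1000.0])
    eps = 1.0e-3
    jac = np.empty((2, 2))
    for j in range(2):
        step = np.zeros(2)
        step[j] = eps
        jac[:, j] = (rhs(x_star + step) - rhs(x_star - step)) / (2.0 * eps)

    print("stationary state:", x_star)
    print("eigenvalues of the linearised dynamics:", np.linalg.eigvals(jac))
    # negative real parts indicate a stable fixed point; small imaginary parts
    # correspond to the overdamped, non-oscillatory behaviour discussed above.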
the eqs . [ eq1 ] are solved numerically , starting from the conditions and , and making use of a variable - step adams algorithm . after the system has reached its stationary state under basal conditions , a stress is introduced ( at time s ) by instantaneously changing the dissociation constant . in fig . [ fig2 ] we display a case in which the stress multiplies by a factor ( a ) , a case in which it divides it by a factor ( b ) and by a factor ( c ) . when is increased by any factor , the response is very similar to the response of the system without delay ( cf . e.g. fig . [ fig2]a ) . on the contrary , when is decreased the system displays an oscillatory behaviour . the height of the response peak is plotted in fig . [ fig3 ] as a function of the quantity which multiplies . if the multiplier is larger than , the response is weak or absent . at the value the system has a marked response ( cf . also figs . [ fig2]b and c ) . the maximum of the first peak takes place approximately after the stress , which is consistent with the lag time observed in the experiment , and the peaks are separated by . although it has been suggested that the effect of the stress is to increase the dissociation constant between p53 and mdm2 , our results indicate that an efficient response takes place if decreases by a factor ( cf . [ fig2]b ) . one has to notice that the conclusions of ref . have been reached from the analysis _ in vivo _ of the overall change in the concentration of p53 , not from the direct measurement of the binding constant after phosphorylation . our results also agree with the finding that p53asp20 ( a mutated form of p53 which mimics phosphorylated p53 , due to the negative charge carried by aspartic acid ) binds mdm2 _ in vitro _ more tightly than p53ala20 ( which mimics unphosphorylated p53 ) . this hypothesis is supported by molecular energy calculations made with classical force fields . even if this kind of force field is not really reliable for the calculation of binding constants , it gives an estimate of the sign of the change in the interaction between p53 and mdm2 upon phosphorylation . we have performed an energy minimization of the conformation of the system composed of the binding sites of p53 and mdm2 , starting from the crystallographic positions of ref . and using the force fields mm3 and mmff , for both the wild - type system and for the system where serine 20 of p53 is phosphorylated . using mm3 we found that the phosphorylated system has an electrostatic energy kcal / mol lower than the wild - type system , while this difference is kcal / mol using the mmff force field . our calculations suggest that phosphorylated p53 is more strongly attracted to mdm2 due to the enhanced interaction of phosphorylated ser20 with lys60 , lys46 and lys70 of mdm2 , and consequently the dissociation constant is lowered . the robustness of the response mechanism with respect to the parameters of the system , which is typical of many biological systems ( cf . , e.g. , ) , has been checked both to assess the validity of the model and to search for weak points which could be responsible for the onset of the disease . each parameter has been varied over five orders of magnitude around its basal value . the results are listed in table iii . one can notice that the response mechanism is quite robust to changes in the parameters , and .
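the qualitative behaviour just described can be mimicked with a very crude fixed - step integration ( a stand - in for the variable - step adams scheme mentioned above ) , in which the mdm2 production term reads the free p53 level a time tau in the past and the stress is modelled as a sudden drop of the dissociation constant . as before , the model equations and every number below are illustrative assumptions , not the parameter set used in this work .

    import numpy as np

    # illustrative placeholder parameters (not the paper's values)
    s_p, k_ub, k_p = 1.0, 3.0e-4, 1.0e-4
    s_m, k_m, k_gene = 1.0, 1.0e-4, 30.0
    kd_basal, stress_factor = 200.0, 1.0 / 15.0
    tau, dt, t_total, t_stress = 1200.0, 1.0, 40000.0, 20000.0

    def complex_pm(p, m, kd):
        s = p + m + kd
        return 0.5 * (s - np.sqrt(s * s - 4.0 * p * m))

    n = int(t_total / dt)
    lag = int(tau / dt)
    p = np.full(n, 1000.0)
    m = np.full(n, 1000.0)
    free_p = np.zeros(n)                 # history of free p53, read by the delayed term

    for i in range(n - 1):
        kd = kd_basal * stress_factor if i * dt >= t_stress else kd_basal
        pm = complex_pm(p[i], m[i], kd)
        free_p[i] = p[i] - pm
        free_delayed = free_p[max(i - lag, 0)]
        p[i + 1] = p[i] + dt * (s_p - k_ub * pm - k_p * p[i])
        m[i + 1] = m[i] + dt * (s_m * free_delayed / (k_gene + free_delayed) - k_m * m[i])

    # before the stress, p relaxes to its basal level; after kd is suddenly reduced,
    # p develops repeated peaks whose spacing is set by the delay, qualitatively
    # reproducing the oscillatory response described above.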
for low values of or system no longer oscillates , but displays , in any case , a rapid increase in the amount of which can kill the cell .this is true also for large values of .what is dangerous for the cell is a decrease of or of , which would drop the amount of p53 and let the damaged cell survive .this corresponds either to an increase of the affinity between p53 and the mdm2 gene , or to an increase of mdm2 half life . to be noted that , unlike the case , the system with delay never displays damped oscillations as a consequence of the variation of the parameters in the range studied in the present work .this sharp behaviour further testifies to the robustness of the response mechanism .anyway , one has to keep in mind that the oscillating response produces the death of the cell , and consequently the long time behaviour is only of theoretical interest .the minimum value of the delay which gives rise to the oscillatory behaviour is . for values of the delay larger than this threshold , the amplitude of the response is linear with ( cf .4 ) , a fact which is compatible with the explanation of the response mechanism of section iv .the lag time before the p53 response is around ( in accordance with the 1h delay observed experimentally and is independent on all parameters , except and .the dependence of the lag time on is approximately linear up to ( the longest delay analyzed ) .increasing the lag time increases to ( for ) and ( for ) . on the other hand ,the period of oscillation depends only on the delay , being approximately linear with it .we have repeated the calculations squaring the variable in eqs .1 , to keep into account the cooperativity induced by the fact that the active form of p53 is a dimer of dimers .the results display qualitative differences neither for non delayed nor for the delayed system .all these facts can be rationalized by analyzing the mechanism which produces the response .the possibility to trigger a _ rise _ in p53 as a dynamic response to an _ increased _ binding between p53 and mdm2 , relies on the fact that a sudden increase in p m binding diminishes the production of , and therefore ( subsequently ) diminishes the amount of .in other words , while the change in has no direct effect in the first of eqs . [ eq1 ] , it directly reduces mdm2 production by subtracting p53 from the gene which producese mdm2 . mathematically , the oscillations arise because the saturated nature of the binding imply that pm is approximately equal to the minimum between p and m. each time the curves associated with p and m cross each other ( either at a given time or instants before ) , the system has to follow a different set of dynamic equations than before , finding itself in a state far from stationarity .this gives rise to the observed peaks .to be precise , the starting condition ( before the stress ) is .the stress reduces the dissociation constant of , at least , one order of magnitude , causing a drop in , which falls below . for small values of ( to be precise , for ) , one can make the simplification , and consequently rewrite eqs .[ eq1 ] as each period after the stress can be divided in four phases . in the first one and , so that stays constant at its stationary value , while decreases with time constant towards zero ( not exactly zero , since the approximated equations do not hold for ) . 
in the second phase one has to consider the second ( ) and the third ( ) equations ( [ simply2 ] and [ simply3 ] ) . the new stationary value for is , which is much larger than since . this boost takes place in a time of the order of , so if , as in the present case , has no time to reach the stationary state and ends at a lower value . in the meantime , remains at the low value given by eq . [ simply2 ] . at a time after the stress , eq . [ simply2 ] gives way to eq . [ simply4 ] . the latter is composed of a positive term which is if and under the opposite condition . since ( it refers to the boost of ) , the new stationary value of is . the rise of takes place in a time of the order of and causes the decrease of , whose production rate is ruled by . the fourth phase begins when approaches . now one has to keep eqs . [ simply1 ] and [ simply4 ] , so that returns to the basal value , while stays for a period of at the value reached in the third phase . after such a period , eq . [ simply2 ] replaces eq . [ simply4 ] and another peak takes place . the height of the p53 peak is given by if has time to reach its stationary state of phase two ( i.e. , if ) , or by if the passage to the third phase takes place before it can reach the stationary state . the width of the peak is and the spacing between the peaks , so that the oscillation period is . the necessary conditions for the response mechanism to be effective are 1 ) that , that is , that the stationary value of just after the stress is much lower than the stationary value of , 2 ) that , in such a way that the stationary state of in the second phase is much larger than that in the first phase , in order to display the boost , and 3 ) that , otherwise has not enough time to decrease in phase one and to increase in phase three . the failure of the response for low values of ( cf . table 3 ) is due to the violation of condition 2 ) , the failure for small is caused by condition 1 ) , and the failure at small and large values of is associated with conditions 3 ) and 1 ) , respectively . at low values of the response does not take place because the positive term in eq . [ simply4 ] is always , and thus never decreases . in summary , we have shown that the delay is an essential ingredient for the system to have a ready and robust peak in the p53 concentration as a response to a damage stress . in order to have a peak which is comparable with those observed experimentally , the dissociation constant between p53 and mdm2 has to decrease by a factor . although it is widely believed that phosphorylation of p53 increases the dissociation constant , we observe an oscillating behaviour similar to the experimental one only if decreases .
in this casethe response is quite robust with respect to the parameters , except upon increaasing of the half life of mdm2 and upon decreasing of the dissociation constant between p53 and the mdm2 gene , in which cases there is no response to the stress .moreover , an increase in the production rate of mdm2 can delay the response and this can be dangerous to the cell as well .we hope that detailed experimental measurements of the physical parameters of the system will be made soon , in order to improve the model and to be able to make more precise predictions about the weak point of the mechanism , weak points which could be intimately connected with the upraise of cancer .the stationary condition for eqs .[ eq1 ] without delay can be found by the intersection of the curves which have been obtained by the conditions , explicitating from the first of eqs .[ eq1 ] and substituting it in the second and the third , respectively . to be noted that is linear in .the variation of the stationary value of p53 upon change in the dissociation constant can be found keeping that where the approximation has been used .consequently , which assumes the largest value when is smallest . using the parameters listed above , the proportionality constant is , at most , . 99 b. vogelstein , d. lane and a. j. levine , nature * 408 * ( 2000 ) 307310 m. d. shair , chemistry and biology * 4 * ( 1997 ) 791794 m. l agarwal , w. r. taylor , m. v. chernov , o. b. chernova and g. r. stark , j. biol . chem . * 273 * ( 1998 ) 14 r. l. bar or _et al . _ ,usa * 97 * ( 2000 ) 11250 m. s. greenblatt , w. p. bennett , m. hollstein and c. c. harris , cancer res .* 54 * ( 1994 ) 48554878 t. gottlieb and m. oren , biochim .acta * 1287 * ( 1996 ) 77102 y. haupt , r. maya , a. kazaz and m. oren , nature * 387 * ( 1997 ) 296299 m. h. g. kubbutat , s. n. jones and k. h. vousden , nature * 387 * ( 1997 ) 299393 t. unger , t. juven gershon , e. moallem , m. berger , r. vogt sionov , g. lozano , m. oren and y. haupt , embo journal * 18 * ( 1999 ) 18051814 w. el deiry , cancer biology * 8 * ( 1998 ) 345357 j. d. oliner , j. a. pietenpol , s. thiagalingam , j. gyuris , k. w. kinzler and b. vogelstein , nature * 362 * ( 1993 ) 857860 k. d. wilkinson , cell .. biol . * 11 * , 141 ( 2000 ) p. h. kussie _et al . _ ,science * 274 * ( 1996 ) 948 p. balagurumoortitiy _et al . _ ,usa * 92 * , 8591 ( 1995 ) b. alberts _ et al ._ , _ molecular biology of the cell _ , garland ( 1994 ) f. s. holstege __ , cell * 95 * , 717 ( 1995 ) j. h. lii and n. l. allinger , j. comput .* 12 * ( 1991 ) 186 t. a. halgren , j .comput . chem .* 17 * ( 1996 ) 490 m. a. savageau , nature * 229 * , 542 ( 1971 ) u. alon , m. g. surette , n. barkai , s. leibler , nature * 397 * , 168 ( 1999 ) m. g. mateu , m. m. sanchez del pino , a. r. fersht , nature struct . biol .* 6 * , 191 ( 1999 ) .stationary values and for the amount of p53 and mdm2 , respectively , calculated at .in the last column the eigenvalues of the linearized ( around the fixed points , ) dynamical matrix are displayed , by real and imaginary part .the real part of the eigenvalues is always negative and the imaginary part , when different from zero , is lower than the real part , indicating that the stationary values are always stable and the dynamics is overdamped . [cols="^,^,^,^",options="header " , ]
a feedback mechanism that involves the proteins p53 and mdm2 induces cell death as a controlled response to severe dna damage . a minimal model for this mechanism demonstrates that the response may be dynamic and connected with the time needed to translate the mdm2 protein . the response takes place if the dissociation constant between p53 and mdm2 varies from its normal value . although it is widely believed that it is an increase in that triggers the response , we show that the experimental behaviour is better described by a decrease in the dissociation constant . the response is quite robust upon changes in the parameters of the system , as required by any control mechanism , except for a few weak points , which could be connected with the onset of cancer . pacs : 87.16.yc
complex system modeling and simulation often mandate global sensitivity analysis , which constitutes the study of how the global variation of input , due to its uncertainty , influences the overall uncertain behavior of a response of interest .most common approaches to sensitivity analysis are firmly anchored in the second - moment properties the output variance which is divvied up , qualitatively or quantitatively , to distinct sources of input variation .there exist a multitude of methods or techniques for calculating the resultant sensitivity indices of a function of independent variables : the random balance design method , the state - dependent parameter metamodel , sobol s method , and the polynomial dimensional decomposition ( pdd ) method , to name but four .a few methods , such as those presented by kucherenko , tarantola , and annoni and rahman , are also capable of sensitivity analysis entailing correlated or dependent input .implicit in the variance - driven global sensitivity analysis is the assumption that the statistical moments satisfactorily describe the stochastic response . in many applications, however , the variance provides a restricted summary of output uncertainty .therefore , sensitivity indicators stemming solely from the variance should be carefully interpreted .a more rational sensitivity analysis should account for the entire probability distribution of an output variable , meaning that alternative and more appropriate sensitivity indices , based on probabilistic characteristics above and beyond the variance , should be considered .addressing some of these concerns has led to a sensitivity index by exploiting the distance between two output probability density functions .such sensitivity analysis establishes a step in the right direction and is founded on the well - known total variational distance between two probability measures .there remain two outstanding research issues for further improvements of density - based sensitivity analysis .first , there is no universal agreement in selecting the total variational distance as the undisputed measure of dissimilarity or affinity between two output probability density functions .in fact , a cornucopia of divergence or distance measures exist in the literature of information theory .therefore , a more general framework , in the spirit of density - based measures , should provide diverse choices to sensitivity analysis .second , the density - based sensitivity indices in general are more difficult to calculate than the variance - based sensitivity indices .this is primarily because the probability density function is harder to estimate than the variance .moreover , nearly all estimation methods available today are very expensive due to the existence of the inner and outer integration loops . 
therefore , efficient computational methods for computing density - based sensitivity indices are desirable .the purpose of this paper is twofold .first , a brief exposition of the -divergence measure is given in section 2 , setting the stage for a general multivariate sensitivity index , referred to as the -sensitivity index , presented in section 3 .the section includes new theoretical results representing fundamental properties and important inequalities pertaining to the -sensitivity index .second , section 4 introduces three distinct approximate methods for estimating the -sensitivity index .the methods depend on how the probability densities of a stochastic response are estimated , including an efficient surrogate approximation commonly used for high - dimensional uncertainty quantification .numerical results from three mathematical functions , as well as from a computationally intensive stochastic mechanics problem , are reported in section 5 .finally , conclusions are drawn in section 6 .let , , , and represent the sets of positive integer ( natural ) , non - negative integer , real , and non - negative real numbers , respectively . for , denote by the -dimensional euclidean space and by the -dimensional multi - index space .these standard notations will be used throughout the paper .let be a measurable space , where is a sample space and is a -algebra of the subsets of , satisfying and , and be a -finite measure on .let be a set of all probability measures on , which are absolutely continuous with respect to .for two such probability measures , let and denote the radon - nikodym derivatives of and with respect to the dominating measure , that is , and .let ] ; , , are excluded . ]strictly convex at , that is , for any and such that ; ; and evaluated on two sides of the point on the graph of lies above the function value . ] and 4 .equal to _ zero _ at , that is , .the -divergence , describing the difference or discrimination between two probability measures and , is defined by the integral provided that the undefined expressions are interpreted by to define the -divergence for absolutely continuous probability measures in terms of elementary probability theory , take to be the real line and to be the lebesgue measure , that is , , , so that and are simply probability density functions , denoted by and , respectively .then the -divergence can also be defined by the divergence measures in ( [ 1 ] ) and ( [ 2 ] ) were introduced in the 1960s by csiszr , ali and silvey , and morimoto .similar definitions exist for discrete probability measures .vajda , liese and vajda , and sterreicher discussed general properties of the -divergence measure , including a few axiomatic ones .the basic but important properties are as follows : + 1. _ non - negativity and reflexivity : _ with equality if and only if . + 2 ._ duality : _ , where , , is the * -conjugate ( convex ) function of .when , is * -self conjugate ._ invariance : _ , where , .symmetry : _ if and only if , where , , and . when , the symmetry and duality properties coincide .range of values : _ , where and .the left equality holds if and only if . the right equality holds if and only if , that is , for mutually singular ( orthogonal ) measures , and is attained when .+ the normalization condition is commonly adopted to ensure that the smallest possible value of is _ zero_. 
but fulfilling such condition by the class of convex functions is not required .this is because , for the subclass such that satisfies , the shift by the constant sends every to .indeed , some of these properties may still hold if or if is not restricted to the convexity properties . depending on how is defined , the -divergence may or may not be a true metric .for instance , it is not necessarily symmetric in and for an arbitrary convex function ; that is , the -divergence from to is generally not the same as that from to , although it can be easily symmetrized when required .furthermore , the -divergence does not necessarily satisfy the triangle inequality .it is well known that has a versatile functional form , resulting in a number of popular information divergence measures .indeed , many of the well - known divergences or distances commonly used in information theory and statistics are easily reproduced by appropriately selecting the generating function .familiar examples of the -divergence include the forward and reversed kullback - leibler divergences and , kolmogorov total variational distance , hellinger distance , pearson divergence , neyman divergence , divergence , vajda divergence , jeffreys distance , and triangular discrimination , to name a few , and are defined as d\xi , \label{3a}\ ] ] d\xi = : d_{kl}\left ( p_2 \parallel p_1\right ) , \label{3b}\ ] ] ^ 2 d\xi , \label{3d}\ ] ] d\xi , \label{3e}\ ] ] d\xi : = d_{p}\left ( p_2 \parallel p_1\right ) , \label{3f}\ ] ] , ~\alpha \in \mathbb{r } \setminus \{\pm 1\ } , \label{3g}\ ] ] \ln \left [ \dfrac{f_1(\xi)}{f_2(\xi ) } \right ] d\xi , \label{3i}\ ] ] ^ 2}{f_1(\xi)+f_2(\xi ) } d\xi . \label{3j}\ ] ] the definitions of some of these divergences , notably the two kullback - leibler and pearson - neyman divergences , are inverted when the -divergence is defined by swapping and in ( [ 1 ] ) or ( [ 2 ] ) .there are also many other information divergence measures that are not subsumed by the -divergence measure .see the paper by kapur or the book by taneja .nonetheless , any of the divergence measures from the class of -divergences or others can be exploited for sensitivity analysis , as described in the following section .let be a complete probability space , where is a sample space , is a -field on , and ] , , but otherwise follow independent and arbitrary probability distributions .then , from ( [ 5a ] ) through ( [ 5d ] ) , ( 1 ) the anova component functions , , , and for ; ( 2 ) the variances , , , and for ; and ( 3 ) the sobol indices , , and for . as all univariate sobol indices are the same , so are the contributions of input variables to the variance of .hence , according to the sobol index , all input variables are equally important , regardless of their probability distributions .this is unrealistic , but possible because the variance is just a moment and provides only a partial description of the uncertainty of an output variable . 
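as a concrete check of how these divergences behave , the short sketch below evaluates a few members of the family on a grid for two arbitrary ( purely illustrative ) gaussian densities , using the convention in which the generating function acts on the density ratio ; the constant in front of the hellinger distance varies between authors , so the value printed here is the unnormalised one .

    import numpy as np

    x = np.linspace(-10.0, 10.0, 4001)
    dx = x[1] - x[0]

    def gauss(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

    f1, f2 = gauss(x, 0.0, 1.0), gauss(x, 1.0, 1.5)   # illustrative densities only

    def f_divergence(f1, f2, f):
        # d(p1, p2) = integral of f2 * f(f1 / f2), evaluated where f2 > 0
        mask = f2 > 1.0e-300
        t = f1[mask] / f2[mask]
        return np.sum(f2[mask] * f(t)) * dx

    generators = {
        "kullback - leibler": lambda t: t * np.log(t + 1.0e-300),
        "total variation": lambda t: 0.5 * np.abs(t - 1.0),
        "squared hellinger (unnormalised)": lambda t: (np.sqrt(t) - 1.0) ** 2,
        "pearson chi-squared": lambda t: (t - 1.0) ** 2,
        "triangular discrimination": lambda t: (t - 1.0) ** 2 / (t + 1.0),
    }
    for name, f in generators.items():
        print(name, f_divergence(f1, f2, f))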
in contrast, the -sensitivity indices will vary depending on the choice of the input density functions , therefore , providing a more rational measure of the influence of input variables .it is important to derive and emphasize the fundamental properties of the -sensitivity index inherited from the -divergence measure .the properties , including a few important inequalities , are described in conjunction with six propositions as follows .the -sensitivity index of for , , is non - negative and vanishes when and are statistically independent .[ p1 ] since by virtue of the non - negativity property of the -divergence and for any , the first line of ( [ 5 ] ) yields proving the first part of the proposition .if and are statistically independent , then for any , resulting in , owing to the reflexivity property or the range of values ( left equality ) of the -divergence . in that case , proving the second part of the proposition .the range of values of is where and .[ p2 ] see the proof of proposition [ p1 ] for the left inequality .the right inequality is derived from the largest value of , which is , according to the range of values ( right equality ) of the -divergence .therefore , ( [ 4b ] ) yields \le \mathbb{e}_{\mathbf{x}_u } \left [ f(0)+f^*(0 ) \right ] = f(0)+f^*(0),\ ] ] completing the proof . from proposition[ p2 ] , has a sharp lower bound , which is _ zero _ since . in contrast , may or may not have an upper bound , depending on whether is finite or infinite . if there is an upper bound , then the largest value is a sharp upper bound , and hence can be used to scale to vary between 0 and 1 . for instance , when , the result is the well - known variational distance measure and the upper bound of the associated sensitivity index ( say ) is . when or , then , meaning that the sensitivity index ( say ) or ( say ) , derived from the kullback - leibler divergence measure or , has no upper bound . no scaling is possible in such a case .the -sensitivity index of for all input variables is where and .[ p3 ] the probability measure is a dirac measure , representing an almost sure outcome , where and .decompose into two disjoint subsets and and observe that therefore , the probability measures and are mutually singular ( orthogonal ) , that is , .consequently , , according to the range of values ( right equality ) of the -divergence .finally , for , ( [ 4b ] ) yields = \mathbb{e}_{\mathbf{x } } \left [ f(0)+f^*(0 ) \right ] = f(0)+f^*(0).\ ] ] for the special case of , the index derived from the total variational distance .therefore , when normalized , , which is the same value reported by borgonovo .let .if and are statistically independent , then [ p4 ] in addition , if and are disjoint subsets , that is , , then [ p4b ] for any , observe that and . since is independent of , the probability measures and are the same , yielding .applying this condition to the expression of the -sensitivity index of for in the first line of ( [ 5 ] ) and noting results in proving the first part of the proposition . here , the second equality is obtained by recognizing that does not depend on and .the third equality is attained by integrating out with respect to on , resulting in .the second part of the proposition results from the reduction , , when . 
as a special case ,consider and , where , .then , according to proposition [ p4 ] , , meaning that there is no contribution of to the sensitivity of for if does not depend on .the -sensitivity index of for , , is invariant under smooth and uniquely invertible transformations ( diffeomorphisms ) of and .[ p5 ] for , let and be smooth and uniquely invertible , that is , diffeomorphic maps of random variables and . from elementary probability theory, the probability densities of the transformed variables , , and are respectively , where \in \mathbb{r}^{|u|\times |u|} ] is the conditional sensitivity index of for .furthermore , if and are statistically independent , then [ p6 ] applying the expectation operator on both sides of the triangle inequality yields since , for , does not depend on , the first integral on the right side of ( [ sr3 ] ) reduces to therefore , ( [ sr3 ] ) becomes recognizing the sensitivity indices , , and to be respectively the integral on the left side , the first integral on the right side , and the second integral on the right side of ( [ sr4 ] ) produces the upper bound in ( [ sr2 ] ) . in addition , observe that the sensitivity index is non - negative , represents the contribution of the divergence from to , and vanishes if and only if and are statistically independent .therefore , reaches the lower bound , which is , if and only if and are statistically independent . to obtain ( [ sr2b ] ) , use the last line of ( [ 5 ] ) to write where , by invoking the statistical independence between and , the numerator and denominator of the argument of become and respectively . applying ( [ sr4c ] ) and ( [ sr4d ] ) to ( [ sr4b ] ) results in which transforms ( [ sr2 ] ) to ( [ sr2b ] ) andhence completes the proof . as a special case ,consider again and , where , .then , according to proposition [ p6 ] , applicable to sensitivity indices rooted in metric -divergences only , which states the following : if depends on , then the contribution of to the sensitivity of for increases from , but is limited by the residual term . if and are statistically independent , then vanishes , resulting in . this agrees with proposition [ p4 ] , which , however , is valid whether or not the underlying -divergence is a metric .in addition , if and are statistically independent , then , yielding .borgonovo derived the same bounds for a special case when the sensitivity index stems from the total variational distance .proposition [ p6 ] , by contrast , is a general result and applicable to sensitivity indices emanating from all metric -divergences .a plethora of -sensitivity indices are possible by appropriately selecting the convex function in ( [ 4b ] ) or ( [ 5 ] ) . 
listed in table [ table1 ]are ten such sensitivity indices derived from the forward and reversed kullback - leibler divergences , total variational distance , hellinger distance , pearson divergence , neyman divergence , divergence , vajda divergence , jeffreys distance , and triangular discrimination in ( [ 3a ] ) through ( [ 3j ] ) .three prominent sensitivity indices , for example , the mutual information f_{\mathbf{x}_u , y}(\mathbf{x}_u,\xi ) d{\mathbf{x}_u}d\xi = : h_{u , kl'}\ ] ] between and , the squared - loss mutual information ^ 2 f_{y}(\xi ) f_{\mathbf{x}_u}(\mathbf{x}_u ) d{\mathbf{x}_u}d\xi \\ & = & \int_{\mathbb{r}^{|u|}\times\mathbb{r } } \dfrac{f_{\mathbf{x}_u , y}(\mathbf{x}_u,\xi)}{f_{y}(\xi ) f_{\mathbf{x}_u}(\mathbf{x}_u ) } \left[1 - \left\ { \dfrac{f_{y}(\xi ) f_{\mathbf{x}_u}(\mathbf{x}_u)}{f_{\mathbf{x}_u , y}(\mathbf{x}_u,\xi ) } \right\}^2 \right ] f_{\mathbf{x}_u , y}(\mathbf{x}_u,\xi ) d{\mathbf{x}_u}d\xi \\ & = : & h_{u , n } \end{array}\ ] ] between and , and borgonovo s importance measure of on , are rooted in reversed kullback - leibler , neyman , and total variational divergences or distances , respectively .indeed , many previously used sensitivity or importance measures are special cases of the -sensitivity index derived from the -divergence ..ten special cases of the -sensitivity index [ cols="<,<,<",options="header " , ] [ table8 ] table [ table8 ] presents the approximate univariate sensitivity indices ( total variational distance ) and ( reversed kullback - leibler divergence ) of the maximum von mises stress by the pdd - kde - mc method .the pdd expansion coefficients were estimated by -variate dimension - reduction integration , requiring one- ( ) or at most two - dimensional ( ) gauss quadratures .the order of orthogonal polynomials and number of gauss quadrature points in the dimension - reduction numerical integration are and , respectively .the indices are broken down according to the choice of selecting and . in all pdd approximations , the sample size .the sensitivity indices by the pdd - kde - mc methods in table [ table8 ] quickly converge with respect to and/or .since fea is employed for response evaluations , the computational effort of the pdd - kde - mc method comes primarily from numerically determining the pdd expansion coefficients .the expenses involved in estimating the pdd coefficients vary from 25 to 33 fea for the univariate pdd approximation and from 277 to 481 fea for the bivariate pdd approximation , depending on the two values of . 
based on the sensitivity indices in table [ table8 ] , the horizontal boundary conditions ( and ) are highly important ; the vertical load ( ) , elastic modulus ( ) , and vertical boundary conditions ( and ) are slightly important ; and the horizontal load ( ) and poisson s ratio ( ) are unimportant in influencing the maximum von mises stress .it is important to recognize that the respective univariate and bivariate pdd solutions in this particular problem are practically the same .therefore , the univariate pdd solutions are not only accurate , but also highly efficient .this is because of a realistic example chosen , where the individual main effects of input variables on the von mises stress are dominant over their interactive effects .finally , this example also demonstrates the non - intrusive nature of the pdd - kde - mc method , which can be easily integrated with commercial or legacy computer codes for analyzing large - scale complex systems .a general multivariate sensitivity index , referred to as the -sensitivity index , is presented for global sensitivity analysis .the index is founded on the -divergence , a well - known divergence measure from information theory , between the unconditional and conditional probability measures of a stochastic response .the index is applicable to random input following dependent or independent probability distributions .since the class of -divergence subsumes a wide variety of divergence or distance measures , numerous sensitivity indices can be defined , affording diverse choices to sensitivity analysis .several existing sensitivity indices or measures , including mutual information , squared - loss mutual information , and borgonovo s importance measure , are shown to be special cases of the proposed sensitivity index .a detailed theoretical analysis reveals the -sensitivity index to be non - negative and endowed with a range of values , where the smallest value is _ zero _ , but the largest value may be finite or infinite , depending on the generating function chosen .the index vanishes or attains the largest value when the unconditional and conditional probability measures coincide or are mutually singular .unlike the variance - based sobol index , which is invariant only under affine transformations , the -sensitivity index is invariant under nonlinear but smooth and uniquely invertible transformations .if the output variable and a subset of input variables are statistically independent , then there is no contribution from that subset of input variables to the sensitivity of the output variable . for a metric divergence , the resultant -sensitivity index for a group of input variables increases from the unconditional sensitivity index for a subgroup of input variables , but is limited by the residual term emanating from the conditional sensitivity index .three new approximate methods , namely , the mc , kde - mc , and pdd - kde - mc methods , are proposed to estimate the -sensitivity index .the mc and kde - mc methods are both relevant when a stochastic response is inexpensive to evaluate , but the methods depend on how the probability densities of a stochastic response are calculated or estimated .the pdd - kde - mc method , predicated on an efficient surrogate approximation , is relevant when analyzing high - dimensional complex systems , demanding expensive function evaluations .therefore , the computational burden of the mc and kde - mc methods can be significantly alleviated by the pdd - kde - mc method . 
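for problems where each function evaluation is cheap , the plain kde - mc idea referred to above can be written in a few lines . the sketch below estimates the total - variation ( borgonovo ) index for an invented two - input model y = x1 + x2 whose inputs have equal variances , so the two first - order sobol indices coincide while the density - based indices can still differ ; the model , the input distributions , and the sample sizes are all illustrative assumptions .

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    grid = np.linspace(-8.0, 8.0, 801)
    dx = grid[1] - grid[0]
    n_outer, n_inner, n_uncond = 200, 2000, 20000

    def model(x1, x2):
        return x1 + x2

    def sample_inputs(n):
        x1 = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), n)   # variance 1
        x2 = rng.laplace(0.0, 1.0 / np.sqrt(2.0), n)        # variance 1
        return x1, x2

    x1, x2 = sample_inputs(n_uncond)
    f_y = gaussian_kde(model(x1, x2))(grid)                  # unconditional output density

    def tv_index(which):
        vals = []
        for _ in range(n_outer):                             # outer loop: fix one input
            x1o, x2o = sample_inputs(1)
            x1i, x2i = sample_inputs(n_inner)                # inner loop: vary the rest
            y = model(x1o, x2i) if which == 1 else model(x1i, x2o)
            f_cond = gaussian_kde(y)(grid)
            vals.append(0.5 * np.sum(np.abs(f_y - f_cond)) * dx)
        return float(np.mean(vals))

    print("density-based index of x1:", tv_index(1))
    print("density-based index of x2:", tv_index(2))
    # both first-order sobol indices equal 0.5 here, while the two density-based
    # indices generally differ because the input distributions differ.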
in all three methods developed, the only requirement is the availability of input - output samples , which can be drawn either from a given computational model or from actual raw data .numerical examples , including a computationally intensive stochastic boundary - value problem , demonstrate that the proposed methods provide accurate and economical estimates of density - based sensitivity indices ., _ on the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling _ ,phil . mag ., 50 ( 1900 ) , pp .
this article presents a general multivariate -sensitivity index , rooted in the -divergence between the unconditional and conditional probability measures of a stochastic response , for global sensitivity analysis . unlike the variance - based sobol index , the -sensitivity index is applicable to random input following dependent as well as independent probability distributions . since the class of -divergences supports a wide variety of divergence or distance measures , a plethora of -sensitivity indices are possible , affording diverse choices to sensitivity analysis . commonly used sensitivity indices or measures , such as mutual information , squared - loss mutual information , and borgonovo s importance measure , are shown to be special cases of the proposed sensitivity index . new theoretical results , revealing fundamental properties of the -sensitivity index and establishing important inequalities , are presented . three new approximate methods , depending on how the probability densities of a stochastic response are determined , are proposed to estimate the sensitivity index . four numerical examples , including a computationally intensive stochastic boundary - value problem , illustrate these methods and explain when one method is more relevant than the others . borgonovo s importance measure , -sensitivity index , kernel density estimation , mutual information , polynomial dimensional decomposition , squared - loss mutual information .
the introduction of new quantum mechanical technologies promises to fundamentally alter the way we communicate .quantum key distribution ( qkd ) , for instance , will allow us to communicate in an intrinsically secure way .but new quantum communication technologies will require a new telecommunications infrastructure , one which is quantum - enabled .that is , this network must be able to properly accommodate the quantum properties that quantum communications inherently rely on. such a quantum network will contain many novel components , such as quantum memories , quantum repeaters , or , most generally , quantum channels .these components must each operate in a strictly quantum way .of course , no technology is perfect , and quantum technologies offer a new set of practical challenges .however , as we have learned from qkd , perfectly ideal devices are not a necessity . by shifting our efforts into classical post - processing of data ,we can deal with imperfections in quantum technologies .the question then becomes , how much imperfection can be tolerated before a device is no longer operating in a sufficiently quantum way ?we can enforce a minimal quantum requirement on devices by insisting that they do not act as _ measure and prepare _channels ( or , in the parlance of qkd , _ intercept and resend _ channels ) , since communication through such channels is equivalent to classical communication .indeed , this type of channel destroys any quantum correlations in bipartite states when one subsystem is sent through it .of course , this is just the minimum requirement .it is also important to quantify the quantum behaviour , as is done in the field of entanglement measures , or in qkd through the secret key rate . for quantum channels , we can ask , _ how well does the channel preserve quantum correlations in bipartite systems , when only one subsystem passes through it ? _ to study this question , we take a state with well - quantified quantum correlations , send one subsystem through the channel , and examine the output .we then compare the quantum correlations detectable in the output with the input correlations .in fact , as we shall see , we can test for these correlations in a so - called ` prepare and measure ' picture , bypassing the need to use actual bipartite states .a strong quantum channel is one which preserves all or nearly all of the quantum correlations .this idea corresponds to what we shall call the _quantum throughput_. such a measure would allow us to characterize the suitability of devices for quantum communication tasks .the goal of this work is to illustrate that these ideas about device characterization via quantum throughput can be implemented in a meaningful way .although we will make specific choices regarding device types or quantification measures , the basic idea remains quite general , and our scheme can be extended and adapted to other methods as well . finally , if we picture a future quantum communications network consisting of many components , it should be evident that any device - testing procedure should be as experimentally practical as possible . ideally , we seek a testing scenario where a finite number of test states and a limited set of measurements are sufficient to understand the quantum throughput .the latter requirement is especially important for optical systems , which are perhaps the most natural choice of carrier for quantum information . 
in these systems ,full tomography is not really a practical option because of the dimension of the hilbert space .we have previously examined quantum correlations in optical devices in a qualitative way ; in the present contribution , we will extend those results to provide a quantitative picture of optical devices .the rest of this paper is organized as follows . in sec .[ sec : quant ] we outline our quantitative device - testing scheme , focusing mainly on optical systems .we show how to estimate important parameters from homodyne measurements on the output , and how to use these estimates to make quantitative statements about the optical device . in sec .[ sec : results ] , we give the results of this quantification procedure for a wide class of optical channels , and examine the strength of our method . sec .[ sec : conclusion ] summarizes the paper , while appendices [ app : overlapbounds]-[app : offdiagbounds ] provide technical details and derivations .the quantum device testing procedure we employ is the same as the one found in .this protocol is based on the idea that a truly quantum channel should be distinguishable from those channels where the input quantum state is temporarily converted to classical data before a new quantum state is output , a so - called _ measure and prepare _ channel .measure and prepare channels are also called _ entanglement - breaking _ channels , as the two notions are equivalent .this provides a hint on how to quantify a channel s quantum throughput , namely by sending part of an entangled state through the channel and determining the amount of entanglement that still remains afterwards . to this end , imagine we have an entangled state of the form \ ] ] where system is a qubit and system is an optical mode .we can assume , without loss of generality , that , so that and denote coherent states of opposite phase .this is an entangled state for all values , as can be seen by calculating the entropy of entanglement .keeping subsystem a isolated , an optical channel can be probed using subsystem b of this state , followed by local projective measurements by alice and homodyne measurements by bob .these expectation values , along with the knowledge of alice s reduced density matrix , can be used to determine just how much of the initial state s entanglement is remaining .of course , states like eq .( [ eq : initialstate ] ) may be difficult to create and therefore not suited for practical device testing .however , notice that alice s reduced density matrix does not depend on what happens in the optical channel , nor on any of bob s measurement results .her expectation values can be completely determined from the initial state .indeed , alice s measurement results can be thought of as classical registers which merely record which mode state was sent through the device .this observation allows us to move from an entanglement - based ( eb ) picture to an equivalent ` prepare and measure ' ( pm ) scenario , in which alice s measurements are absorbed into the initial state preparation . in a pm scenario ,we retain full knowledge of , in particular the off - diagonal coherence term {01}={\left\langle{\alpha}\right\vert { { \hspace{-0.1 em}}}{{\hspace{-0.1 em}}}\left .{ -\alpha}\right\rangle} ] .then eqs . 
( 66 ) and ( 69 ) from give directly the following bound : = : u_j.\ ] ] this bound comes up several times , so it is denoted ( ) to make later equations more readable .importantly , the bound can be calculated using only the measured variances of the conditional states .estimating the overlap is more involved .we need to derive bounds on its magnitude based on our available information .again , we begin with bounds provided in refs . . with suitable relaxations ,their bounds can be put into a specific form which will be more desirable for us later , as we would ultimately like to do a convex optimization .the specific details of this relaxation are straightforward , and are outlined in appendix [ app : overlapbounds ] .we will need an additional parameter , , which can be calculated directly using the measured first moments .defining two coherent states with the same means as the conditional states , the new parameter is given through the overlap of these coherent states , with this definition in place , we can give the relaxed bounds where and having these bounds , obtained purely through homodyne measurements , we can now move on to estimating the elements of the projected density matrix .we can already estimate matrix elements of the form using eq .( [ eq : noisebound ] ) , but to build we also require bounds on the supplementary elements for . to get these ,we first expand into its eigenbasis , eq .( [ eq : eigendecomp ] ) . then , using the fact that $ ] for any normalized vector , we can easily derive the following bounds on the desired matrix element ( see appendix [ app : suppdiagbounds ] for full details ) : analogous bounds can be given for . finally , we need to estimate some elements of the off - diagonal blocks of , or else there would be no way to differentiate an entangeld state from a classical mixture of the conditional states . to this end , we label the off - diagonal block of the full density matrix by , so that it is naturally split into the form where the diagonal blocks correspond to the two conditional states . in the pm picture ,we hold full knowledge of the alice s reduced density matrix where . each element in eq .( [ eq : rhoa ] ) is the trace of the corresponding element in eq .( [ eq : rhoblocks ] ) , so we can enforce the condition . using this as our starting point , and with an appropriate basis choice for system b , we can determine the following off - diagonal bounds which can be incorporated into : details on how to arrive at these inequalities can be found in appendix [ app : offdiagbounds ] .we now have sufficient information to construct a useful estimate of the projected state . to summarize, we have the quantities and , which can be calculated from measurements of the first moments and second moments , respectively .we want to determine , which is the projection of from eq .( [ eq : rhoblocks ] ) onto the subspace spanned by .we have estimated some of the overlaps of with these basis vectors in eqs .( [ eq : suppdiagbounds1]-[eq : suppdiagbounds2 ] ) and ( [ eq : offdiagbound1]-[eq : offdiagbound2 ] ) .these estimates depend only on the input parameter and on the output state quantities , , and .this last overlap quantity is itself bounded to a region defined by eqs .( [ eq : overlapbounds]-[eq : bupper ] ) , which depends only on , and . 
hence , for a fixed input overlap and a fixed set of homodyne measurement results , we have a parameter region which forms a set of constraints on .this region must be searched to find the minimal entanglement compatible with .we will now move on to address the question of how to find the minimal entanglement compatible with our constraints . as mentioned earlier, we will choose the negativity as the entanglement measure for demonstrating our method .in principle , we would like to find the minimal entanglement using the methods of semidefinite programming .but we must make some simplifications and relaxations which will allow us to do so .first , we exploit the fact that local unitary operations can not change the quantity of entanglement . therefore , without loss of generality , we can assume that the overlap of the maximal eigenstates is real and positive ( since this can be accomplished by a relative change of phase on subsystem b ) . as well , we can perform local phase changes on subsystem a , which allows us to also make the restriction the other off - diagonal element of interest , , is in general still a complex number .the main problem is that eq .( [ eq : offdiagbound2 ] ) is a _ non - convex _ constraint on . to use this constraint in a semidefinite program, we have to replace it with a set of convex constraints .we accomplish this by denoting the right - hand side of eq .( [ eq : offdiagbound2 ] ) as and expanding our constraints to the region this new constraint still non - convex , but we can search for the minimum entanglement independently in each of the four quadrants , where the constraints are convex ( see fig .[ fig : convexregions ] ) , and take the minimum over these four searches .the final result will be a lower bound to the minimum entanglement in the region constrained by eq .( [ eq : offdiagbound2 ] ) .we can extend this idea further , replacing the inscribed square from fig .[ fig : convexregions ] with any other inscribed polygon . with more sides , we can better approximate the non - convex constraint eq .( [ eq : offdiagbound2 ] ) , but this will also increase the number of convex subregions which must be searched to find the overall minimum .numerical evidence indicates that the minimum entanglement is often , though not always , found at a point outside the circle .we tested with an inscribed octagon and it was not found to alter the final results significantly .the final hurdle comes from the overlap .since the maximal eigenstates will in general have a non - zero overlap ( indeed , for zero overlap , we will not find any entanglement in ) , we must construct an orthogonal basis in order to explicitly write down a matrix representing .doing so introduces matrix elements that are both linear and quadratic in the overlap .if the overlap is used as a parameter in the semidefinite programming , this non - linear dependence becomes problematic .fortunately , it turns out that to find the minimal entanglement we only need to consider the case where the overlap takes the largest allowed value , i.e. 
.the reason for this is that , for fixed values of , , and , there always exists a cptp map on the b subsystem which preserves the maximal eigenvalues while making the corresponding overlap larger .such a local map can not increase the entanglement , so indeed the minimal entanglement will be found at .this useful result will be shown in detail elsewhere .in the previous section , we outlined a method for calculating the effective entanglement in optical systems .this began with the observation that we can get bounds just by looking at the most significant two - qubit subsystem .the remainder of sec .[ sec : quant ] provided the necessary tools to allow us to calculate these bounds efficiently as a semidefinite program .now that all the pieces are in place , we can turn to applying our scheme . to illustrate our quantification method, we use data corresponding to the action of the optical channel on the field quadratures , which we assume to be symmetric for both signal states and for both quadratures .these symmetry assumptions are made solely to aid the graphical representation of our results , and our method does not rely on them .it is also important to note that , beyond the symmetry , we do not make any assumptions about how the channel works . in the absence of experimental data , we merely parameterize the channel s effect on the first quadrature moments by a loss parameter and on the second moments by the excess noise . specifically , if the means of the two conditional output states are denoted by from eq .( [ eq : cohmean ] ) , then the loss is parameterized through the transmittivity and the symmetric excess noise ( expressed in shot noise units ) by the input states are characterized entirely by the overlap parameter .the quantification program was carried out using the negativity , this measure has all the properties demanded by our quantification method , but more importantly , the trace norm of a matrix can be computed efficiently as a semidefinite program .we have normalized the negativity so that a maximally entangled two - qubit state has .our calculations were done in matlab using the yalmip interface along with the solver sdpt3 .our main results are shown in fig .[ fig : mainresults ] , where the minimal negativity of compatible with the initial overlap and excess noise is given , for various values of the transmittivity .this quantity gives a lower bound on the negativity of the full state .the entanglement of the initial state , eq .( [ eq : initialstate ] ) , is also shown as a function of the initial overlap in fig .( [ fig : test1 ] ) . for figs .( [ fig : test2]-[fig : test3 ] ) , the modification is made to eq .( [ eq : initialstate ] ) for these comparisons .the initial entanglement can be compared with the calculated bounds to help understand the quantum throughput of a device . in the limit of zero excess noise and zero loss ,our entanglement bound is tight with the initial entanglement .our bounds are quite high for very low noise , but they become lower as the measurement results get more noisy . at some point , a non - trivial entanglement bound can no longer be given , despite the fact that quantum correlations can still be proven for higher noise values ( cf . ) . 
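the sketch below indicates how the minimisation over one convex quadrant could be set up as a semidefinite program. the calculations reported above were done in matlab with yalmip and sdpt3; here python with cvxpy is used purely for illustration. the trace norm of the partial transpose is expressed through the standard positive/negative decomposition, the numerical bounds in the constraint list are hypothetical placeholders standing in for the homodyne-derived bounds of eqs. ([eq:suppdiagbounds1]-[eq:suppdiagbounds2]) and ([eq:offdiagbound1]-[eq:offdiagbound2]), and the factor of two at the end assumes the normalisation in which a maximally entangled two-qubit state has negativity one.

```python
import cvxpy as cp

# projected two-qubit state, ordered |a> (x) |b> with a as the slow (block) index
rho = cp.Variable((4, 4), hermitian=True)

def partial_transpose_B(r):
    # transpose each 2x2 block over subsystem B
    return cp.bmat([[r[0:2, 0:2].T, r[0:2, 2:4].T],
                    [r[2:4, 0:2].T, r[2:4, 2:4].T]])

# SDP form of the trace norm: ||X||_1 = min tr(P) + tr(N), X = P - N, P, N >= 0;
# with tr(rho) = 1 the (unnormalised) negativity equals tr(N) at the optimum.
P = cp.Variable((4, 4), hermitian=True)
N = cp.Variable((4, 4), hermitian=True)

constraints = [
    rho >> 0,
    cp.trace(rho) == 1,
    partial_transpose_B(rho) == P - N,
    P >> 0,
    N >> 0,
    # hypothetical stand-ins for the homodyne-derived bounds; not the real set
    cp.real(rho[0, 0]) >= 0.45,
    cp.real(rho[3, 3]) >= 0.45,
    cp.real(rho[0, 3]) >= 0.30,   # one convex quadrant of the relaxed,
    cp.imag(rho[0, 3]) >= 0.0,    # originally non-convex off-diagonal constraint
]

prob = cp.Problem(cp.Minimize(cp.real(cp.trace(N))), constraints)
prob.solve()
# assuming the normalisation in which a Bell state has negativity 1
print("negativity lower bound (this quadrant):", 2 * prob.value)
```

in a full calculation this problem would be solved once per convex quadrant (or per polygon sector) and once per admissible overlap value, and the smallest result reported.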
as well , for larger loss values , the tolerance for excess noise is lower , and the region where non - trivial bounds can be given becomes smaller .the exact noise value where our bounds become trivial depends on the initial overlap and on the measured loss , but the highest tolerable excess noise is around 5% of the vacuum for .this shrinks to about 3% for a transmittivity of .though the quantification region is small , it is within the limits of current experimental technology .some entanglement degradation should be expected as the noise is increased , but , as mentioned earlier , entanglement can still be verified ( though not previously quantified ) under the same testing scenario up to much higher noise values than seen here .thus , our bounds do not provide the full picture .the weakening of the bounds with higher noise is mainly due to the estimation procedure .certain approximations become cruder ( though still valid ) as the noise increases .first , for higher noise , the conditional states become more mixed , spreading out into more of the infinite - dimensional mode hilbert space .this leads to additional information being lost when we truncate down from to .another problem stems from the bounds we use to estimate .higher noise leads to weaker bounds on the maximal eigenvalues from eq .( [ eq : noisebound ] ) , which weakens all other inequalities . to examine the effects of these two approximations, we briefly consider a simple channel where the test state , eq .( [ eq : initialstate ] ) , is mixed at a beam - splitter with a thermalized vacuum .the first moments reduce by a factor of , and the increased variances of the output optical states can be determined from the mean photon number of the thermal state . for , the conditional output states are displaced thermal states .the reason for studying this channel is that we can _ exactly _ determine the maximal eigenvalues , , and the overlap .this allows us to study our approximations independently , since we decouple the effects of the two - qubit projection from the homodyne parameter estimation ( in practice , of course , our quantification scheme must use both ) . in fig .( [ fig : comparison ] ) we show the result of the quantification scheme , when this extra information is included .we see that the tolerable excess noise is of the vacuum , more than three times what it would be if we had to estimate the eigenvalues and overlap using homodyne results ( cf( [ fig : test3 ] ) ) . also included in fig .( [ fig : comparison ] ) is an entanglement verification curve , obtained using the methods of .any points with lower noise than this verification curve must come from entangled states .the two - qubit projection is tight to the entanglement verification curve for low overlaps . for higher values ,the projection becomes weaker , only working to about half the noise value that the entanglement verification curve reaches .ideally , we want to be able to calculate non - trivial values for the entanglement wherever it be verified .this would give us a true quantitative complement to existing entanglement verification methods .one obvious extension to our method would be to truncate the mode subspace using the two largest eigenstates from each conditional state , or even more . in theory, this would strictly improve the estimates .however , in practice , this will increase the complexity of the quantification calculation , since some simplifying assumptions ( i.e. certain overlaps are real ) may no longer be valid . 
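for the displaced-thermal test channel just described, the exact maximal eigenvalues and the overlap of the maximal eigenstates can be obtained numerically in a truncated fock basis, as in the following sketch. the scaling of the means by the square root of the transmittivity and the particular loss and thermal-photon-number values are assumptions for illustration only.

```python
import numpy as np
from scipy.linalg import expm, eigh

def displaced_thermal(alpha, nbar, dim=40):
    """Displaced thermal state D(alpha) rho_th D(alpha)^dag in a truncated
    Fock basis with `dim` levels (dim must be large enough for alpha, nbar)."""
    n = np.arange(dim)
    a = np.diag(np.sqrt(n[1:]), k=1)                 # annihilation operator
    D = expm(alpha * a.conj().T - np.conj(alpha) * a)  # displacement operator
    rho_th = np.diag((nbar ** n) / (1.0 + nbar) ** (n + 1))
    rho_th /= np.trace(rho_th)                       # renormalise after truncation
    return D @ rho_th @ D.conj().T

# hypothetical channel values: transmittivity T, thermal photon number nbar,
# and input amplitude alpha_in (means assumed to scale by sqrt(T))
T, nbar, alpha_in = 0.8, 0.02, 0.7
rho_p = displaced_thermal(+np.sqrt(T) * alpha_in, nbar)
rho_m = displaced_thermal(-np.sqrt(T) * alpha_in, nbar)

# exact maximal eigenvalues and overlap of the maximal eigenstates
vals_p, vecs_p = eigh(rho_p)
vals_m, vecs_m = eigh(rho_m)
lam_p, e_p = vals_p[-1], vecs_p[:, -1]
lam_m, e_m = vals_m[-1], vecs_m[:, -1]
print(lam_p, lam_m, abs(np.vdot(e_p, e_m)))
```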
as well , the number of additional minimizations we have to do , as in our non - convex relaxation of eq .( [ eq : offdiagbound2 ] ) , increases fourfold with each added dimension .another approach might therefore be necessary to overcome this problem .nevertheless , the quantification scheme outlined here is a useful method for characterizing the degree of quantumness of optical channels , especially when these channels introduce low noise .we have outlined a method for quantifying the effective entanglement in qubit - mode systems using only homodyne measurement results and knowledge of the initial preparation .this quantification method works particularly well if the mode subsystem exhibits low noise . by combining this quantification scheme with a device testing scenario which uses two nonorthogonal test states ,one can examine how strongly an optical device or experiment is operating in the quantum domain .our scheme provides a useful tool for understanding the quantum nature of optical devices , especially the question of how well they preserve quantum correlations .in this appendix , we derive the bounds from eqs .( [ eq : blower]-[eq : bupper ] ) for the absolute value of the overlap of the maximal eigenstates , . from , we have the following : _ overlap bounds ._ let the largest eigenvalue of be parameterized by and let the fidelity between the conditional states and the coherent states from eq .( [ eq : cohmean ] ) be given by and let then the following holds : with and since we can not calculate or in practice , we now modify these bounds from the above form found in to one involving only the parameters ( calculated from first moments ) and the ( calculated from second moments ) . to do this, we make use only of the obvious inequality from this , we can easily derive the following auxiliary inequalities : it is important to note that the second and third inequalities only hold so long as . for symmetric noise, the value corresponds to , almost twice the vacuum variance .this value is far outside the region where our method gives non - trivial bounds , so it is not an issue . substituting the inequalities ( [ eq : aux1]-[eq : aux3 ] ) into eqs .( [ eq : oldoverlaplowerbound ] ) and ( [ eq : oldoverlapupperbound ] ) , we arrive at the bounds given in eqs .( [ eq : blower]-[eq : bupper ] ) .here we aim to bound the quantities for , as found in eqs .( [ eq : suppdiagbounds1]-[eq : suppdiagbounds2 ] ) .an eigenbasis expansion of leads to a lower bound can be derived in a similar way : the bounds for follow by interchanging indices .this appendix outlines the derivation of the off - diagonal bounds from eqs .( [ eq : offdiagbound1]-[eq : offdiagbound2 ] ) .we completely know , which constrains that we must have .first , we consider the full density matrix in the basis defined by for system and the eigenbasis of , , for system b. we can still write this in the block form of eq .( [ eq : rhoblocks ] ) , where we denote the diagonal elements of the block by and the diagonal elements of the block by ( the diagonal elements of are its eigenvalues ) . using the triangle inequality, we have from positivity of , we find and from the cauchy - schwarz inequality , the first sum is just and the second is . now , using the bounds from appendix [ app : suppdiagbounds ] , we get which we can substitute above to obtain replacing with , we are led to the off - diagonal bound by applying the same arguments using the eigenbasis of , we can arrive at an analogous bound for .
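the off-diagonal bounds above rest on positivity and the cauchy-schwarz inequality; a quick numerical check of the elementwise consequence that the magnitude of any off-diagonal element of a positive semidefinite matrix is at most the geometric mean of the corresponding diagonal elements is sketched below.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
rho = A @ A.conj().T
rho /= np.trace(rho).real        # a generic positive semidefinite "density matrix"

i, j = 1, 4
lhs = abs(rho[i, j])
rhs = np.sqrt(rho[i, i].real * rho[j, j].real)
print(lhs, "<=", rhs, lhs <= rhs + 1e-12)   # Cauchy-Schwarz: always True
```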
quantum communication relies on optical implementations of channels , memories and repeaters . in the absence of perfect devices , a minimum requirement on real - world devices is that they preserve quantum correlations , meaning that they have some throughput of a quantum mechanical nature . previous work has verified such throughput in optical devices while using minimal resources . we extend this approach to the quantitative regime . our method is illustrated in a setting where the input consists of two coherent states while the output is measured by two homodyne measurement settings .
in both classical and quantum physics isolated systems can display unpredictable behavior , but the reasons for the unpredictability are quite different . in classical ( hamiltonian )mechanics unpredictability is a consequence of chaotic dynamics , or exponential sensitivity to initial conditions , which makes it impossible to predict the phase - space trajectory of a system to a certain accuracy from initial data given to the same accuracy .this unpredictability , which comes from not knowing the system s initial conditions precisely , is measured by the kolmogorov - sinai ( ks ) entropy , which is the rate at which initial data must be supplied in order to continue predicting the coarse - grained phase - space trajectory . in quantum mechanicsthere is no sensitivity to initial conditions in predicting the evolution of a state vector , because the unitary evolution of quantum mechanics preserves the inner product between state vectors .the absence of sensitivity to initial conditions seems to suggest that there is no quantum chaos . yetquantum mechanics has an even more fundamental kind of unpredictability , which has nothing to do with dynamics : even if a system s state vector is known precisely , the results of measurements are generally unpredictable . to compare the unpredictability of classical and quantum dynamics , we first remove the usual sources of unpredictability from consideration and then introduce a new source of unpredictability that is the same in both classical and quantum dynamics .the first step is to focus in classical physics on the evolution of phase - space distributions , governed by the liouville equation , instead of on phase - space trajectories , and to focus in quantum physics on the evolution of state vectors , governed by the schrdinger equation .the liouville equation preserves the overlap between distributions , so there is no sensitivity to initial conditions in predicting the evolution of a phase - space distribution . by shifting attention from phase - space trajectories to distributions ,we remove lack of knowledge of initial conditions as a source of unpredictability .moreover , by considering only schrdinger evolution of state vectors , i.e. , evolution uninterrupted by measurements , we eliminate the intrinsic randomness of quantum measurements as a source of unpredictability .the conclusion that there is no chaos in quantum evolution is now seen to be too facile . were things so simple , one would have to conclude that there is no chaos in classical liouville evolution either . having taken both classical and quantum unpredictability out of the picture , we introduce a new source of unpredictability to investigate chaos in the dynamics .we do this by adding to the system hamiltonian , either classical or quantum mechanical , a stochastic perturbation .we measure the unpredictability introduced by the perturbation in terms of the increase of system entropy . by gathering information about the history of the perturbation, one can make the increase of system entropy smaller . 
to characterize the resistance of the system to predictability, we compare the information gathered about the perturbation with the entropy reduction that this information purchases .we say that a system is _ hypersensitive to perturbation _ if the perturbation information is much larger than the associated system - entropy reduction , and we regard hypersensitivity to perturbation as the signature of chaos in liouville or schrdinger evolution ( see sec .[ sechyp ] ) .for classical systems we have shown that systems with chaotic dynamics display an _ exponential _ hypersensitivity to perturbation , in which the ratio of perturbation information to entropy reduction grows exponentially in time , with the exponential rate of growth given by the ks entropy .thus , for classical systems , we have established that exponential hypersensitivity to perturbation characterizes chaos in liouville evolution in a way that is exactly equivalent to the standard characterization of chaos in terms of the unpredictability of phase - space trajectories ( see sec .[ secclassical ] ) . for a variety of quantum systemswe have used numerical simulations to investigate hypersensitivity to perturbation .the simulations suggest that hypersensitivity to perturbation provides a characterization of chaos in quantum dynamics : quantum systems whose classical dynamics is chaotic display a quantum hypersensitivity to perturbation , which comes about because the perturbation generates state vectors that are nearly randomly distributed in the system hilbert space , whereas quantum systems whose classical dynamics is not chaotic do not display hypersensitivity to perturbation ( see sec .[ secquantum ] ) .hypersensitivity to perturbation , in either classical or quantum mechanics , is defined in terms of information and entropy .the entropy of an isolated physical system ( gibbs entropy for a classical system , von neumann entropy for a quantum system ) does not change under hamiltonian time evolution .if the time evolution of the system is perturbed through interaction with an incompletely known environment , however , averaging over the perturbation typically leads to an entropy increase . throughout this paper , we make the simplifying assumption that the interaction with the environment is equivalent to a stochastic perturbation of the hamiltonian , a restriction we hope to be able to remove in the future .conditions under which this assumption is valid are discussed in .the increase of the system entropy can be limited to an amount , the _ tolerable entropy increase _ , by obtaining , from the environment , information about the perturbation .we denote by the minimum information about the perturbation needed , on the average , to keep the system entropy below the tolerable level . a formal definition of the quantities , , and can be found in for the classical case and in for the quantum case .entropy and information acquire physical content in the presence of a heat reservoir at temperature .if all energy in the form of heat is ultimately exchanged with the heat reservoir , then each bit of entropy , i.e. 
, each bit of _ missing information _ about the system state , reduces by the amount the energy that can be extracted from the system in the form of useful work .the connection between _ acquired _ information and work is provided by landauer s principle , according to which not only each bit of missing information , but also each bit of acquired information , has a free - energy cost of .this cost , the _landauer erasure cost _, is paid when the acquired information is erased .acquired information can be quantified by algorithmic information .we now define that a system is hypersensitive to perturbation if the information required to reduce the system entropy from to is large compared to the entropy reduction , i.e. , the information purchases a reduction in system entropy , which is equivalent to an increase in the useful work that can be extracted from the system ; hypersensitivity to perturbation means that the landauer erasure cost of the information is much larger than the increase in available work .hypersensitivity to perturbation means that the inequality ( [ eqhyp ] ) holds for almost all values of .the inequality ( [ eqhyp ] ) tends always to hold , however , for sufficiently small values of .the reason is that for these small values of , one is gathering enough information from the perturbing environment to track a particular system state whose entropy is nearly equal to the initial system entropy . in other words ,one is essentially tracking a particular realization of the perturbation among all possible realizations .thus , for small values of , the information becomes a property of the perturbation ; it is the information needed to specify a particular realization of the perturbation .the important regime for assessing hypersensitivity to perturbation is where is fairly close to , and it is in this regime that one can hope that reveals something about the system dynamics , rather than properties of the perturbation .in this section we do not aim for rigor ; many statements in this section are without formal proof .instead , our objective here is to extract the important ideas from the rigorous analysis given in and to use them to develop a heuristic physical picture of why chaotic systems display exponential hypersensitivity to perturbation . for a simple illustration and a system where exact solutions exist ,see .this section is an abbreviated version of the discussion section of .consider a classical hamiltonian system whose dynamics unfolds on a -dimensional phase space , and suppose that the system is perturbed by a stochastic hamiltonian whose effect can be described as diffusion on phase space .suppose that the system is globally chaotic with ks entropy .for such a system a phase - space density is stretched and folded by the chaotic dynamics , developing exponentially fine structure as the dynamics proceeds .a simple picture is that the phase - space density stretches exponentially in half the phase - space dimensions and contracts exponentially in the other half of the dimensions .the perturbation is characterized by a perturbation strength and by correlation cells .we can take the perturbation strength to be the typical distance ( e.g. , euclidean distance with respect to some fixed set of canonical cordinates ) that a phase - space point diffuses under the perturbation during an -folding time , , in a typical contracting dimension .the perturbation becomes effective ( in a sense defined precisely in ref . 
) when the phase - space density has roughly the same size in the contracting dimensions as the perturbation strength .once the perturbation becomes effective , the effects of the diffusive perturbation and of the further exponential contraction roughly balance one another , leaving the _ average _ phase - space density with a constant size in the contracting dimensions .the correlation cells are phase - space cells over which the effects of the perturbation are well correlated and between which the effects of the perturbation are essentially uncorrelated .we assume that all the correlation cells have approximately the same phase - space volume .we can get a rough idea of the effect of the perturbation by regarding the correlation cells as receiving independent perturbations .moreover , the diffusive effects of the perturbation during an -folding time are compressed exponentially during the next such -folding time ; this means that once the perturbation becomes effective , the main effects of the perturbation at a particular time are due to the diffusion during the immediately preceding -folding time .since a chaotic system can not be shielded forever from the effects of the perturbation , we can choose the initial time to be the time at which the perturbation is just becoming effective .we suppose that at the unperturbed density is spread over correlation cells , being the time when the unperturbed density occupies a single correlation cell .the essence of the ks entropy is that for large times the unperturbed density spreads over correlation cells , in each of which it occupies roughly the same phase - space volume .the exponential increase of continues until the unperturbed density is spread over essentially all the correlation cells .we can regard the unperturbed density as being made up of _ subdensities _ , one in each occupied correlation cell and all having roughly the same phase - space volume .after , when the perturbation becomes effective , the _ average _ density continues to spread exponentially in the expanding dimensions .as noted above , this spreading is not balanced by contraction in the other dimensions , so the phase - space volume occupied by the average density grows as , leading to an entropy increase just as the unperturbed density can be broken up into subdensities , so the average density can be broken up into _ average subdensities _, one in each occupied correlation cell .each average subdensity occupies a phase - space volume that is times as big as the volume occupied by an unperturbed subdensity .the unperturbed density is embedded within the phase - space volume occupied by the average density and itself occupies a volume that is smaller by a factor of .we can picture a _ perturbed _density crudely by imagining that in each occupied correlation cell the unperturbed subdensity is moved rigidly to some new position within the volume occupied by the _average _ subdensity ; the result is a _perturbed subdensity_. 
a _ perturbed density _ is made up of perturbed subdensities , one in each occupied correlation cell .all of the possible perturbed densities are produced by the perturbation with roughly the same probability .suppose now that we wish to hold the entropy increase to a tolerable amount .we must first describe what it means to specify the phase - space density at a level of resolution set by a tolerable entropy increase .an approximate description can be obtained in the following way .take an occupied correlation cell , and divide the volume occupied by the average subdensity in that cell into nonoverlapping volumes , all of the same size .aggregate all the perturbed subdensities that lie predominantly within a particular one of these nonoverlapping volumes to produce a _coarse - grained subdensity_. there are coarse - grained subdensities within each occupied correlation cell , each having a phase - space volume that is bigger than the volume occupied by a perturbed subdensity by a factor of a _ coarse - grained density _ is made up by choosing a coarse - grained subdensity in each occupied correlation cell .a coarse - grained density occupies a phase - space volume that is bigger than the volume occupied by the unperturbed density by the factor of eq .( [ cgvolume ] ) and hence represents an entropy increase thus to specify the phase - space density at a level of resolution set by means roughly to specify a coarse - grained density . the further entropy increase on averaging over the perturbationis given by what about the information required to hold the entropy increase to ?since there are coarse - grained subdensities in an occupied correlation cell , each produced with roughly the same probability by the perturbation , it takes approximately bits to specify a particular coarse - grained subdensity . to describe a coarse - grained density, one must specify a coarse - grained subdensity in each of the occupied correlation cells .thus the information required to specify a coarse - grained density and , hence , the information required to hold the entropy increase to given by corresponding to there being a total of coarse - grained densities .the entropy increase ( [ furtherincrease ] ) comes from counting the number of _ nonoverlapping _ coarse - grained densities that are required to fill the volume occupied by the average density , that number being .in contrast , the information comes from counting the exponentially greater number of ways of forming _ overlapping _ coarse - grained densities by choosing one of the nonoverlapping coarse - grained subdensities in each of the correlation cells .the picture developed in this section , summarized neatly in eq .( [ picsum ] ) , requires that be big enough that a coarse - grained subdensity is much larger than a perturbed subdensity , so that we can talk meaningfully about the perturbed subdensities that lie predominantly _ within _ a coarse - grained subdensity . if becomes too small , eq . ( [ picsum ] ) breaks down , and the information , rather than reflecting a property of the chaotic dynamics as in eq .( [ picsum ] ) , becomes essentially a property of the perturbation , reflecting a counting of the number of possible realizations of the perturbation .the boundary between the two kinds of behavior of is set roughly by the number of contracting phase - space dimensions . 
when , the characteristic scale of a coarse - grained subdensity in the contracting dimensions is a factor of larger than the characteristic size of a perturbed subdensity in the contracting dimensions . in this regimethe picture developed in this section is at least approximately valid , because a coarse - grained subdensity can accommodate several perturbed subdensities in each contracting dimension .the information quantifies the effects of the perturbation on scales as big as or bigger than the finest scale set by the system dynamics .these effects , as quantified in , tell us directly about the size of the exponentially fine structure created by the system dynamics . thus becomes a property of the system dynamics , rather than a property of the perturbation .in contrast , when , we are required to keep track of the phase - space density on a very fine scale in the contracting dimensions , a scale smaller than the characteristic size of a perturbed subdensity in the contracting dimensions .subdensities are considered to be distinct , even though they overlap substantially , provided that they differ by more than this very fine scale in the contracting dimensions .the information is the logarithm of the number of realizations of the perturbation which differ by more than this very fine scale in at least one correlation cell .the information becomes a property of the perturbation because it reports on the effects of the perturbation on scales finer than the finest scale set by the system dynamics , scales that are , at the time of interest , irrelevant to the system dynamics . we are now prepared to put in final form the exponential hypersensitivity to perturbation of systems with a positive ks entropy : once the chaotic dynamics renders the perturbation effective , this exponential hypersensitivity to perturbation is essentially independent of the form and strength of the perturbation .its essence is that within each correlation cell there is a roughly even trade - off between entropy reduction and information , but for the entire phase - space density the trade - off is exponentially unfavorable because the density occupies an exponentially increasing number of correlation cells , in each of which it is perturbed independently . what about systems with regular , or integrable dynamics ?though we expect no universal behavior for regular systems , we can get an idea of the possibilities from the heuristic description developed in this section .hypersensitivity to perturbation requires , first , that the phase - space density develop structure on the scale of the strength of the perturbation , so that the perturbation becomes effective , and , second , that after the perturbation becomes effective , the phase - space density spread over many correlation cells . for many regular systems there will be no hypersensitivity simply because the phase - space density does not develop fine enough structure .regular dynamics can give rise to nonlinear shearing , however , in which case the density can develop structure on the scale of the strength of the perturbation and can spread over many correlation cells . 
in this situation , one expects the picture developed in this section to apply at least approximately : to hold the entropy increase to requires giving bits per occupied correlation cell ; is related to by eq .( [ picsum ] ) , with being the number of correlation cells occupied at time .thus regular systems can display hypersensitivity to perturbation if becomes large ( although this behavior could be eliminated by choosing correlation cells that are aligned with the nonlinear shearing produced by the system dynamics ) , but they can not display _ exponential _ hypersensitivity to perturbation because the growth of is slower than exponential . a more direct way of stating this conclusion is to reiterate what we have explained in this section and shown in ref . : exponential hypersensitivity to perturbation is equivalent to the spreading of phase - space densities over an exponentially increasing number of phase - space cells ; such exponential spreading holds for chaotic , but not for regular systems and is quantified by a positive value of the kolmogorov - sinai entropy .the simplifying restriction on the interaction with the environment made in sec .[ sechyp ] means , for the quantum case , that the interaction with the environment is equivalent to a stochastic unitary time evolution .given this assumption , we can proceed as follows . at a given time, we describe the result of the perturbed time evolution by a list of vectors in -dimensional hilbert space , with probabilities , each vector in the list corresponding to a particular realization of the perturbation , which we call a _ perturbation history_. averaging over the perturbation leads to a system density operator with entropy consider the class of measurements on the environment whose outcomes partition the list into groups labeled by .we denote by the number of vectors in the group ( ) .the vectors in the group and their probabilities are denoted by and , respectively .the measurement outcome , occurring with probability indicates that the system state is in the group .the system state conditional on the measurement outcome is described by the density operator we define the conditional system entropy the average conditional entropy and the average information we now describe nearly optimal measurements , i.e. , nearly optimal groupings , for which is a close approximation to , the minimum information about the environment needed , on the average , to keep the system entropy below a given tolerable entropy , as described in sec .[ sechyp ] . given ,we want to partition the list of vectors into groups so as to minimize the information without violating the condition .to minimize , it is clearly favorable to make the groups as large as possible .furthermore , to reduce the contribution to of a group containing a given number of vectors , it is favorable to choose vectors that are as close together as possible in hilbert space . herethe distance between two vectors and can be quantified in terms of the hilbert - space angle consequently , to find a nearly optimal grouping , we choose an arbitrary _ resolution angle _ ( ) and group together vectors that are less than an angle apart .more precisely , groups are formed in the following way . 
starting with the first vector , , in the list , the first group is formed of and all vectors in that are within an angle of .the same procedure is repeated with the remaining vectors to form the second group , then the third group , continuing until no ungrouped vectors are left .this grouping of vectors corresponds to a partial averaging over the perturbations . to describe a vector at resolution level amounts to averaging over those details of the perturbation that do not change the final vector by more than an angle . for each resolution angle , the grouping procedure described above defines an average conditional entropy and an average information .if we choose , for a given , the tolerable entropy , then to a good approximation , the information is given by . by determining the entropy and the information as functions of the resolution angle , there emerges a rather detailed picture of how the vectors are distributed in hilbert space . if is plotted as a function of by eliminating the angle , one obtains a good approximation to the functional relationship between and . as a further characterization of our list of vectors ,we calculate the distribution of hilbert - space angles between all pairs of vectors and . for vectorsdistributed randomly in -dimensional hilbert space , the distribution function is given by the maximum of this is located at ; for large - dimensional hilbert spaces , is very strongly peaked near the maximum , which is located at , very near . to investigateif a quantum map shows hypersensitivity to perturbation , we use the following numerical method .we first compute a list of vectors corresponding to different perturbation histories .then , for about 50 values of the angle ranging from 0 to , we group the vectors in the nearly optimal way described above .finally , for each grouping and thus for each chosen angle , we compute the information and the entropy . in addition, we compute the angles between all pairs of vectors in the list and plot them as a histogram approximating the distribution function . in this section, we present a typical numerical result for the quantum kicked top taken from , where more details can be found .we look at the time evolution of an initial hilbert - space vector at discrete times .after time steps , the unperturbed vector is given by is the unitary floquet operator and where is the angular momentum vector for a spin- particle evolving in -dimensional hilbert space .depending on the initial condition , the classical map corresponding to the floquet operator ( [ eqqtop ] ) displays regular as well as chaotic behavior .following , we choose initial hilbert - space vectors for the quantum evolution that correspond to classical initial conditions located in regular and chaotic regions of the classical dynamics , respectively . for this purpose ,we use _ coherent states _ . in this section , we consider two initial states .the first one is a coherent state centered in a regular region of the classical dynamics ; we refer to it as the _ regular initial state_. the second one , referred to as the _ chaotic initial state _, is a coherent state centered in a chaotic region of the classical dynamics .the perturbation is modeled as an additional rotation by a small random angle about the axis .the system state after perturbed steps is thus given by where , with , is the unperturbed floquet operator ( [ eqqtop ] ) followed by an additional rotation about the axis by an angle , the parameter being the _perturbation strength_. 
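before turning to the kicked-top results, the grouping procedure described above and the associated information and entropy can be sketched in software as follows. the displayed formulas for the average information and the average conditional entropy are not reproduced above, so the sketch assumes the standard choices: the shannon information of the group label and the probability-weighted von neumann entropies of the group density operators.

```python
import numpy as np

def angle(u, v):
    # Hilbert-space angle between normalised vectors (standard definition,
    # arccos of the absolute overlap; the text's displayed equation is elided)
    return np.arccos(np.clip(abs(np.vdot(u, v)), 0.0, 1.0))

def von_neumann(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def group_and_score(vectors, probs, phi):
    """Greedy grouping at resolution angle phi, as described above: take the
    first ungrouped vector, collect everything within phi of it, repeat.
    Returns (average information I, average conditional entropy dH),
    assuming I = -sum_R q_R log2 q_R and dH = sum_R q_R H(rho_R)."""
    left = list(range(len(vectors)))
    info, dH = 0.0, 0.0
    while left:
        seed = left[0]
        members = [r for r in left if angle(vectors[seed], vectors[r]) <= phi]
        left = [r for r in left if r not in members]
        qR = sum(probs[r] for r in members)
        rhoR = sum(probs[r] * np.outer(vectors[r], vectors[r].conj())
                   for r in members) / qR
        info += -qR * np.log2(qR)
        dH += qR * von_neumann(rhoR)
    return info, dH
```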
there are different perturbation histories obtained by applying every possible sequence of perturbed unitary evolution operators and for steps .we have applied the method described in sec .[ secdist ] to find numerically a nearly optimal grouping of the list of vectors generated by all perturbation histories .figure [ figtop ] shows results for spin and a total number of vectors after perturbed steps .we used a _ twist parameter _ and perturbation strength . for fig .[ figtop](a ) , the chaotic initial state was used .the distribution of hilbert - space angles , , is concentrated at large angles ; i.e. , most pairs of vectors are far apart from each other .the information needed to track a perturbed vector at resolution level is 12 bits at small angles , where each group contains only one vector . at information suddenly drops to 11 bits , which is the information needed to specify one pair of vectors out of pairs , the two vectors in each pair being generated by perturbation sequences that differ only at the first step .the sudden drop of the information to 10 bits at similarly indicates the existence of quartets of vectors , generated by perturbation sequences differing only in the first two steps .figure [ figtop](a ) suggests that , apart from the organization into pairs and quartets , there is not much structure in the distribution of vectors for a chaotic initial state .the quartets seem to be rather uniformly distributed in a -dimensional hilbert space ( see for a definition of the number of explored hilbert - space dimensions , ) .the inset in fig .[ figtop](a ) shows the approximate functional dependence of the information needed about the perturbation , , on the tolerable entropy , based on the data points and .there is an initial sharp drop of the information , reflecting the grouping of the vectors into pairs and quartets . then there is a roughly linear decrease of the information over a wide range of values , followed by a final drop with increasing slope down to zero at the maximum value of the tolerable entropy , .the large slope of the curve near can be regarded as a signature of hypersensitivity to perturbation .the linear regime at intermediate values of is due to the finite size of the sample of vectors : in this regime the entropy of the group is limited by , the logarithm of the number of vectors in the group .figure [ figtop](b ) shows data for vectors after 12 perturbed steps in the regular case .the distribution of perturbed vectors starting from the regular initial state is completely different from the chaotic initial condition of fig .[ figtop](a ) .the angle distribution is conspicuously nonrandom : it is concentrated at angles smaller than roughly , and there is a regular structure of peaks and valleys .accordingly , the information drops rapidly with the angle .the number of explored dimensions is , which agrees with results of peres that show that the quantum evolution in a regular region of the kicked top is essentially confined to a 2-dimensional subspace .the vs. curve in the inset bears little resemblance to the chaotic case .summarizing , one can say that , in the regular case , the vectors do not get far apart in hilbert space , explore only few dimensions , and do not explore them randomly . 
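the list of perturbed vectors used in such an experiment can be generated along the following lines. since the floquet operator of eq. ([eqqtop]) is not reproduced above, the sketch assumes the standard kicked-top form (a torsion about the z axis followed by a rotation), takes the perturbation as a small rotation about the z axis, and uses illustrative values for the spin size, twist parameter, number of steps, initial state and perturbation strength; each perturbation history is assumed equally likely. the resulting vectors and probabilities can be fed to group_and_score from the previous sketch.

```python
import numpy as np
from scipy.linalg import expm

def spin_ops(j):
    """Angular-momentum matrices for spin j in the |j, m> basis, m = j .. -j."""
    m = np.arange(j, -j - 1, -1)
    jz = np.diag(m)
    jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)  # raising op
    jy = (jp - jp.conj().T) / (2 * 1j)
    return jy, jz

j, k_twist, steps, eps = 10.0, 3.0, 8, 0.03     # illustrative values only
jy, jz = spin_ops(j)

# assumed standard kicked-top Floquet map: twist about z, then rotation about y
F = expm(-1j * k_twist * (jz @ jz) / (2 * j)) @ expm(-1j * (np.pi / 2) * jy)

# stand-in initial state |j, m = j> (a spin coherent state along +z)
psi0 = np.zeros(int(2 * j + 1), dtype=complex)
psi0[0] = 1.0

# the two perturbed steps: F followed by a rotation about z by +/- eps
Fp = expm(-1j * (+eps) * jz) @ F
Fm = expm(-1j * (-eps) * jz) @ F

# all 2**steps perturbation histories, each taken to be equally likely
vectors = []
for h in range(2 ** steps):
    psi = psi0.copy()
    for s in range(steps):
        psi = (Fp if (h >> s) & 1 else Fm) @ psi
    vectors.append(psi)
probs = np.full(len(vectors), 1.0 / len(vectors))
# e.g. info, dH = group_and_score(vectors, probs, phi) for a range of phi
```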
to obtain better numerical evidence for hypersensitivity in the chaotic case and for the absence of it in the regular case would require much larger samples of vectors , a possibility that is ruled out by restrictions on computer memory and time .the hypothesis most strongly supported by our data is the random character of the distribution of vectors in the chaotic case . in the following sectionwe show that randomness in the distribution of perturbed vectors implies hypersensitivity to perturbation .guided by our numerical results we now present an analysis of hypersensitivity to perturbation for quantum systems based on the conjecture that , for chaotic systems , hilbert space is explored randomly by the perturbed vectors .we consider a hamiltonian quantum system whose classical phase - space dynamics is chaotic and assume the system is perturbed by a stochastic hamiltonian that classically gives rise to diffusion on phase space .we suppose that at time the system s state vector has a wigner distribution that is localized on phase space .we further assume that at the perturbation is just becoming effective in the classical sense described in sec .[ secclassical ] .our numerical analyses suggest the following picture . for times , the entropy of the average density operator ( [ eqrhos ] ) increases linearly with time .this is in accordance with an essentially classical argument given by zurek and paz . denoting the proportionality constant by , we have since the von neumann entropy of a density operatoris bounded by the logarithm of the dimension of hilbert space , it follows that the realizations of the perturbation i.e . , the state vectors that result from the different perturbation histories explore at least a number of hilbert - space dimensions , which increases exponentially .our main conjecture now is that these dimensions are explored quasi - randomly , i.e. , that the realizations of the perturbation at time are distributed essentially like random vectors in a -dimensional hilbert space . starting from this main conjecture, we will now derive an estimate of the information needed to keep the system - entropy increase below the tolerable amount . following the discussion on grouping vectors in sec .[ secdist ] , a tolerable entropy increase corresponds to gathering the realizations of the perturbation into hilbert - space spheres of radius .the state vectors in each such sphere fill it randomly ( since the perturbation is diffusive , there are plenty of vectors ) , so the entropy of their density operator which is the tolerable entropy is ( eq .( b6 ) of ) .the number of spheres of radius in -dimensional hilbert space is ( eq .( 5.1 ) of ) , so the information needed to specify a particular sphere is the information consistently underestimates the actual value of , which comes from an optimal grouping of the random vectors ; the reason is that the perfect grouping into nonoverlapping spheres of uniform size assumed by eq .( [ eqiphi ] ) does not exist .using eq .( [ eqiphi ] ) to eliminate from eq .( [ eqhphi ] ) gives an expression for as a function of , from which could be eliminated in favor of by invoking eq .( [ eqd ] ) . the behavior of as a function of expressed in eq .( [ eqhi ] ) is the universal behavior that we conjecture for chaotic systems , except for when is so close to that , as the spheres approximation used above breaks down for angles for which hilbert space can accommodate only one sphere . 
since increases and decreases with , increases as decreases from its maximum value of . to gain more insight into eq .( [ eqhi ] ) , we calculate the derivative which is the marginal tradeoff between between information and entropy . for near , so that , the information becomes , and the derivative ( [ eqdi ] ) can be written as for , i.e. , when eq .( [ eqhi ] ) is valid , the size of the derivative ( [ eqdiapprox ] ) is determined by , with a slowly varying logarithmic correction .this behavior , characterized by the typical slope , gives an _ exponential _ hypersensitivity to perturbation , with the classical number of correlation cells , , roughly replaced by the number of explored hilbert - space dimensions , .it is a remarkable fact that the concept of perturbation cell or perturbation correlation length ( see sec .[ secclassical ] ) did not enter this quantum - mechanical discussion .indeed , our numerical results suggest that our main conjecture holds for a single correlation cell , i.e. , for a perturbation that is correlated over all of the relevant portion of phase space .that we find this behavior indicates that we are dealing with an intrinsically quantum - mechanical phenomenon .what seems to be happening is the following . for tolerable entropies , where is the dimension of classical phase space as in sec .[ secclassical ] , we can regard a single - cell perturbation as perturbing a classical system into a set of nonoverlapping densities . in a quantum analysisthese nonoverlapping densities can be crudely identified with orthogonal state vectors .the single - cell quantum perturbation , in conjunction with the chaotic quantum dynamics , seems to be able to produce arbitrary linear superpositions of these orthogonal vectors , a freedom not available to the classical system .the result is a much bigger set of possible realizations of the perturbation .this paper compares and contrasts hypersensitivity to perturbation in classical and quantum dynamics .although hypersensitivity provides a characterization of chaos that is common to both classical and quantum dynamics , the mechanisms for hypersensitivity are different classically and quantum mechanically .the classical mechanism has to do with the information needed to specify the phase - space distributions produced by the perturbation this is classical information whereas the quantum mechanism has to do with the information needed to specify the random state vectors produced by the perturbation this is quantum information because it relies on the superposition principle of quantum mechanics .captured in a slogan , the difference is this : _ a stochastic perturbation applied to a classical chaotic system generates classical information , whereas a stochastic perturbation applied to a quantum system generates quantum information_.
hypersensitivity to perturbation is a criterion for chaos based on the question of how much information about a perturbing environment is needed to keep the entropy of a hamiltonian system from increasing . in this paper we give a brief overview of our work on hypersensitivity to perturbation in classical and quantum systems .
video compression is a major requirement in many of the recent applications like medical imaging , studio applications and broadcasting applications .compression ratio of the encoder completely depends on the underlying compression algorithms .the goal of compression techniques is to reduce the immense amount of visual information to a manageable size so that it can be efficiently stored , transmitted , and displayed .3-d dwt based compressing system enables the compression in spatial as well as temporal direction which is more suitable for video compression .moreover , wavelet based compression provide the scalability with the levels of decomposition . due to continuous increase in size of the video frames ( hd to uhd ) , video processing through software coding toolsis more complex .dedicated hardware only can give higher performance for high resolution video processing .in this scenario there is a strong requirement to implement a vlsi architecture for efficient 3-d dwt processor , which consumes less power , area efficient , memory efficient and should operate with a higher frequency to use in real - time applications .+ from the last two decades , several hardware designs have been noted for implementation of 2-d dwt and 3-d dwt for different applications .majority of the designs are developed based on three categories , viz .( i ) convolution based ( ii ) lifting - based and ( iii ) b - spline based .most of the existing architectures are facing the difficulty with larger memory requirement , lower throughput , and complex control circuit .in general the circuit complexity is denoted by two major components viz , arithmetic and memory component .arithmetic component includes adders and multipliers , whereas memory component consists of temporal memory and transpose memory .complexity of the arithmetic components is fully depends on the dwt filter length .in contrast size of the memory component is depends on dimensions of the image . as image resolutions are continuously increasing ( hd to uhd ) ,image dimensions are very high compared to filter length of the dwt , as a result complexity of the memory component occupied major share in the overall complexity of dwt architecture .+ convolution based implementations - provides the outputs within less time but require high amount of arithmetic resources , memory intensive and occupy larger area to implement .lifting based a implementations requires less memory , less arithmetic complex and possibility to implement in parallel. however it require long critical path , recently huge number of contributions are noted to reduce the critical path in lifting based implementations . for a general liftingbased structure provides critical path of , by introducing 4 stage pipeline it cut down to . in huang et al ., introduced a flipping structure it further reduced the critical path to .though , it reduced the critical path delay in lifting based implementation , it requires to improve the memory efficiency .majority of the designs implement the 2-d dwt , first by applying 1-d dwt in row - wise and then apply 1-d dwt in column wise .it require huge amount of memory to store these intermediate coefficients . to reduce this memory requirements , several dwt architecturehave been proposed by using line based scanning methods - .huang et al . 
,- give brief details of b - spline based 2-d idwt implementation and discussed the memory requirements for different scan techniques and also proposed a efficient overlapped strip - based scanning to reduce the internal memory size .several parallel architectures were proposed for lifting - based 2-d dwt - .y. hu et al . , proposed a modified strip based scanning and parallel architecture for 2-d dwt is the best memory - efficient design among the existing 2-d dwt architectures , it requires only 3n + 24p of on chip memory for a n image with parallel processing units ( pu ) .several lifting based 3-d dwt architectures are noted in the literature - to reduce the critical path of the 1-d dwt architecture and to decrease the memory requirement of the 3-d architecture . among the best existing designs of 3-d dwt , darji et al . produced best results by reducing the memory requirements and gives the throughput of 4 results / cycle .still it requires the large on - chip memory ( ) . in this paper, we propose a new parallel and memory efficient lifting based 3-d dwt architecture , requires only words of on - chip memory and produce 8 results / cycle .the proposed 3-d dwt architecture is built with two spatial 2-d dwt ( cdf 9/7 ) processors and four temporal 1-d dwt ( haar ) processors . proposed architecture for 3-d dwt replaced the multiplication operations by shift and add , it reduce the cpd from to .further reduction of cpd to is done by introducing pipeline in the processing elements . to eliminate the temporal memory and to reduce the latency ,haar wavelet is incorporated in temporal processor .the resultant architecture has reduce the latency , on chip memory and to increase the speed of operation compared to existing 3-d dwt designs .the following sections provide the architectural details of proposed 3-d dwt through spatial and temporal processors .organization of the paper as follows .theoretical background for dwt is given in section ii .detailed description of the proposed architecture for 3-d dwt is provided in section iii .implementation results and performance comparison is given in section iv . finally , concluding remarks are given in section v. +lifting based wavelet transform designed by using a series of matrix decomposition specified by the daubechies and sweledens in . by applying the flipping to the lifting scheme ,the multipliers in the longest delay path are eliminated , resulting in a shorter critical path .the original data on which dwt is applied is denoted by ] and approximation coefficients ] ) , along with ] which is required for the pe ) .structure of the pes are given in the fig .[ 3d_2](b ) , it shows that multiplication is replaced with the shift and add technique .the original multiplication factor and the value through the shift and add circuit are noted in table.[tab1 ] , it shows that variation between original and adopted one is extremely small . as shown in fig .[ 3d_2](b ) , time delay of shift is one and remaining all pes are having delay of . to reduce the cpd of pu ,pes from pe to pe are divided in to two pipeline stages , and each pipeline stage has a delay of , as a result cpd of pu is reduced to and pipeline stages are increased to nine and is shown in fig . [ 3d_2](c ) .the outputs ] , and $ ] corresponding to pe and pe of last pu and pe of last pu is saved in the memories memory , memory and memory respectively , shown in fig .[ 3d_1](a ) .those stored outputs are inputted for next subsequent columns of the same row . 
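for reference, a plain software version of one level of the cdf 9/7 lifting transform and of the temporal haar step is sketched below, using the standard daubechies-sweldens lifting coefficients. this is a floating-point reference model and not the flipped, pipelined shift-and-add datapath of the proposed pes; edge handling is simplified, an even number of samples is assumed, and the final scaling constant follows one common normalisation (other references use a different constant).

```python
import numpy as np

ALPHA, BETA, GAMMA, DELTA = -1.586134342, -0.052980119, 0.882911076, 0.443506852
ZETA = 1.149604398   # final scaling in one common normalisation

def dwt97_1d(x):
    """One level of the CDF 9/7 lifting transform (textbook factorisation,
    simplified symmetric edge handling, even-length input assumed)."""
    x = np.asarray(x, dtype=float)
    assert x.size % 2 == 0
    s, d = x[0::2].copy(), x[1::2].copy()          # even / odd samples

    def right(a, i):   # mirror at the right edge
        return a[i + 1] if i + 1 < len(a) else a[i]

    def left(a, i):    # mirror at the left edge
        return a[i - 1] if i >= 1 else a[i]

    for i in range(len(d)):                        # predict 1
        d[i] += ALPHA * (s[i] + right(s, i))
    for i in range(len(s)):                        # update 1
        s[i] += BETA * (left(d, i) + d[i])
    for i in range(len(d)):                        # predict 2
        d[i] += GAMMA * (s[i] + right(s, i))
    for i in range(len(s)):                        # update 2
        s[i] += DELTA * (left(d, i) + d[i])
    return ZETA * s, d / ZETA                      # L (approximation), H (detail)

def haar_temporal(frame_a, frame_b):
    """Temporal Haar step between two consecutive frames; the averaged and
    differenced frames are the temporal L and H subbands (scaling conventions
    for this step vary between implementations)."""
    a = np.asarray(frame_a, dtype=float)
    b = np.asarray(frame_b, dtype=float)
    return (a + b) / 2.0, (a - b) / 2.0
```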
for a image rowsis equivalent to .so the size of the each memory is words and total row memory to store these outputs is equals to .output of each pu are under gone through a process of scaling before it producing the outputs h and l. these outputs are fed to the transposing unit .the transpose unit has number of transpose registers ( one for each pu ) .[ 3d_3](a ) shows the structure of transpose register , and it gives the two h and two l data alternatively to the column processor . the structure of the column processor ( cp ) is shown in fig . [ 3d_1](b ) . to match with the throughput of rp , cpis also designed with two number of pus in our architecture .each transpose register produces a pair of h and l in an alternative order and are fed to the inputs of one pu of the cp .the partial results produced are consumed by the next pe after two clock cycles . assuch , shift registers of length two are needed within the cp between each pipeline stages for caching the partial results ( except between and pipeline stages ) . at the output of the cp ,four sub - bands are generated in an interleaved pattern , and so on .outputs of the cp are fed to the re - arrange unit .[ 3d_3](b ) shows the architecture for re - arrange unit , and it provides the outputs in sub - band order and simultaneously , by using registers and multiplexers . for multilevel decomposition, the same dwt core can be used in a folded architecture with an external frame buffer for the ll sub - band coefficients ..original and adopted values for multiplication [ cols= " < ,< , < " , ]the proposed 3-d dwt architecture has been described in verilog hdl .a uniform word length of 14 bits has been maintained throughout the design .simulation results have been verified by using xilinx ise simulator .we have simulated the matlab model which is similar to the proposed 3-d dwt hardware architecture and verified the 3-d dwt coefficients .rtl simulation results have been found to exactly match the matlab simulation results .the verilog rtl code is synthesized using xilinx ise 14.2 tool and mapped to a xilinx programmable device ( fpga ) 7z020clg484 ( zynq board ) with speed grade of -3 .table [ fpga_results ] shows the device utilization summary of the proposed architecture and it operates with a maximum frequency of 265 mhz .the proposed architecture has also been synthesized using synopsys design compiler with 90-nm technology cmos standard cell library .it consumes 43.42 mw power and occupies an area equivalent to 231.45 k equivalent gate at frequency of 200 mhz .the performance comparison of the proposed 2-d and 3-d dwt architectures with other existing architectures is figure out in tables [ 2dcompare ] and [ 3dcompare ] respectively .the proposed 2-d processor requires zero multipliers , 34p ( pis number of parallel pus ) adders , 60p+3n internal memory .it has a critical path delay of with a throughput of four outputs per cycle with /2p computation cycles to process an image with size . when compared to recent 2-d dwt architecture developed by the y.hu et al . , cpd reduced to from with the cost of small increase in hardware resources .table [ 3dcompare ] shows the comparison of proposed 3-d dwt architecture with existing 3-d dwt architecture .it is found that , the proposed design has less memory requirement , high throughput , less computation time and minimal latency compared to , , , and . 
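the shift-and-add substitution used in the pes, mentioned above in connection with table [tab1], can be illustrated in software by approximating a lifting coefficient with a short sum of signed powers of two, each term corresponding to a hardwired shift feeding an adder or subtractor. the terms produced below are illustrative only and are not the adopted values of table [tab1].

```python
import math

def shift_add_approx(coeff, n_terms=3):
    """Greedy signed power-of-two approximation of a multiplier coefficient.
    Each term is one shift plus one add/subtract in hardware.  Illustrative
    only; the architecture's adopted values are not reproduced here."""
    terms, residual = [], coeff
    for _ in range(n_terms):
        if residual == 0.0:
            break
        k = round(math.log2(abs(residual)))        # nearest power of two
        term = math.copysign(2.0 ** k, residual)
        terms.append(term)
        residual -= term
    approx = sum(terms)
    return terms, approx, abs(coeff - approx)

# e.g. the first 9/7 lifting coefficient alpha ~ -1.586134342
print(shift_add_approx(-1.586134342))
# -> terms [-2.0, 0.5, -0.0625], approx -1.5625, absolute error ~ 0.024
```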
though the proposed 3-d dwt architecture has small disadvantage in area and frequency , when compared to , the proposed one has a great advantage in remaining all aspects .table [ 3d_asic ] gives the comparison of synthesis results between the proposed 3-d dwt architecture and .it seems to be proposed one occupying more cell area , but it included total on chip memory also , where as in on chip memory is not included .power consumption of the proposed 3-d architecture is very less compared to .in this paper , we have proposed memory efficient and high throughput architecture for lifting based 3-d dwt . the proposed architecture is implemented on 7z020clg484 fpga target of zynq family , also synthesized on synopsys design vision for asic implementation .an efficient design of 2-d spatial processor and 1-d temporal processor reduces the internal memory , latency , cpd and complexity of a control unit , and increases the throughput . when compared with the existing architectures the proposed scheme shows higher performance at the cost of slight increase in area .the proposed 3-d dwt architecture is capable of computing 60 uhd ( 3840 ) frames in a second .30 q. dai , x. chen , and c. lin,``a novel vlsi architecture for multidimensional discrete wavelet transform,''__ieee transactions on circuits and systems for video technology _ _ , vol .14 , no . 8 , pp . 1105 - 1110 , aug .2004 . c. cheng and k. k. parhi , `` high - speed vlsi implementation of 2-d discrete wavelet transform , '' _ ieee trans .signal process .393 - 403 , jan . 2008 .b. k. mohanty and p. k. meher , `` memory - efficient high - speed convolution - based generic structure for multilevel 2-d dwt.''__ieee transactions on circuits and systems for video technology , _ _ vol .353 - 363 , feb . 2013 .i. daubechies and w. sweledens , `` factoring wavelet transforms into lifting schemes , '' _ j. fourier anal .247 - 269 , 1998 .huang , p.c .tseng , and l .- g .chen , `` flipping structure : an efficient vlsi architecture for lifting - based discrete wavelet transform , '' _ ieee trans . signal process .1080 - 1089 , apr . 2004 .xiong , j .- w .tian , and j. liu , `` a note on flipping structure : an efficient vlsi architecture for lifting - based discrete wavelet transform , '' _ ieee transactions on signal processing , _ vol .54 , no . 5,pp .1910 - 1916 , may 2006 c .- t .huang , p .- c .tseng , and l .- g .chen , `` analysis and vlsi architecture for 1-d and 2-d discrete wavelet transform , '' _ ieee trans .signal process .1575 - 1586 , apr .cheng , c .- t .huang , c .- y .ching , c .- j .chung , and l .- g .chen , `` on - chip memory optimization scheme for vlsi implementation of line based two - dimentional discrete wavelet transform , '' _ ieee transactions on circuits and systems for video technology , _ vol .17 , no . 7 , pp .814 - 822 , jul .liao , m. k. mandal , and b. f. cockburn , `` efficient architectures for 1-d and 2-d lifting - based wavelet transforms , '' _ ieee transactions on signal processing , _ vol .5 , pp . 1315 - 1326 , may 2004 .b.f . wu and c.f .chung , `` a high - performance and memory - efficient pipeline architecture for the 5/3 and 9/7 discrete wavelet transform of jpeg2000 codec , '' _ ieee trans .circuits syst .video technol ., _ vol . 15 , no . 12 , pp .1615 - 1628 , dec .xiong , j. tian , and j. liu , `` efficient architectures for two - dimensional discrete wavelet transform using lifting scheme , '' _ ieee transactions on image processing , _ vol .607 - 614 , mar .w. zhang , z. jiang , z. 
gao , and y. liu , `` an efficient vlsi architecture for lifting - based discrete wavelet transform , '' _ ieee transactions on circuits and systems - ii : express briefs , _ vol .158 - 162 , mar .b. k. mohanty and p. k. meher , `` memory efficient modular vlsi architecture for high throughput and low - latency implementation of multilevel lifting 2-d dwt , '' _ ieee transactions on signal processing , _ vol .5 , pp . 2072 - 2084 , may 2011 .a.darji , s. agrawal , ankit oza , v. sinha , a.verma , s. n. merchant and a. n. chandorkar , `` dual - scan parallel flipping architecture for a lifting - based 2-d discrete wavelet transform,''__ieee transactions on circuits and systems - ii : express briefs , _ _ vol .61 , no . 6 , pp .433 - 437 , jun . 2014 .b. k. mohanty , a. mahajan , and p. k. meher , `` area and power efficient architecture for high - throughput implementation of lifting 2-d dwt , '' _ ieee trans .circuits syst .briefs , _ vol .59 , no . 7 , pp .434 - 438 , jul . 2012 .y. hu and c. c. jong,``a memory - efficient high - throughput architecture for lifting - based multi - level 2-d dwt,''__ieee transactions on signal processing , _ _ vol .20 , pp.4975 - 4987 , oct .15 , 2013 .y. hu and c. c. jong , `` a memory - efficient scalable architecture for lifting - based discrete wavelet transform,''__ieee transactions on circuits and systems - ii : express briefs , _ _ vol .60 , no . 8 , pp . 502 - 506 , aug .j. xu , z.xiong , s. li , and ya - qin zhang , `` memory - constrained 3-d wavelet transform for video coding without boundary effects , '' _ ieee transactions on circuits and systems for video technology , _ vol .812 - 818 , sep . 2002 .m. weeks and m. a. bayoumi , `` three - dimensional discrete wavelet transform architectures,''__ieee transactions on signal processing , _ _ vol .50 , no . 8 , pp.2050 - 2063 , aug .z. taghavi and s. kasaei , `` a memory efficient algorithm for multidimensional wavelet transform based on lifting , '' _ in proc .acoust speech signal process .( icassp ) _ , vol .401 - 404 , 2003 .q. dai , x. chen , and c. lin , `` novel vlsi architecture for multidimensional discrete wavelet transform , '' _ ieee transactions on circuits and systems for video technology , _ vol .14 , no . 8 , pp . 1105 - 1110 ,a. das , a. hazra , and s. banerjee,``an efficient architecture for 3-d discrete wavelet transform,''__ieee transactions on circuits and systems for video technology , _ _ vol .286 - 296 , feb . 2010 .b. k. mohanty and p. k. meher , `` memory - efficient architecture for 3-d dwt using overlapped grouping of frames,''__ieee transactions on signal processing _ _ , vol .11 , pp.5605 - 5616 , nov . 2011 .a. darji , s. shukla , s. n. merchant and a. n. chandorkar , `` hardware efficient vlsi architecture for 3-d discrete wavelet transform , '' _ proc . of int .conf . on vlsi design and int .conf . on embedded systems _ pp .348 - 352 , 5 - 9 jan . 2014 .w.sweldens , `` the lifting scheme : a construction of second generation of wavelets , '' _ siam journal on mathematical analysis , _vol.29 no.2 , pp .511 - 546 , 1998 .
This paper presents a memory-efficient, high-throughput, parallel lifting-based architecture for the running three-dimensional discrete wavelet transform (3-D DWT). The 3-D DWT is constructed by combining two spatial and four temporal processors. Each spatial processor (SP) applies the 2-D DWT to a frame using the lifting-based 9/7 filter bank, first along the rows through the row processor (RP) and then along the columns through the column processor (CP). To reduce the temporal memory and the latency, the temporal processor (TP) is designed with a lifting-based 1-D Haar wavelet filter. The proposed architecture replaces multiplications with pipelined shift-and-add operations to reduce the critical path delay (CPD). Two spatial processors work simultaneously on two adjacent frames and provide 2-D DWT coefficients as inputs to the temporal processors. The TPs apply the 1-D DWT in the temporal direction and deliver eight 3-D DWT coefficients per clock cycle. This higher throughput reduces the computing cycles per frame and enables lower power consumption. Implementation results show that the proposed architecture offers reduced memory, low power consumption, low latency, and high throughput compared with existing designs. The RTL of the proposed architecture is described in Verilog and synthesized with a 90-nm CMOS standard cell library; it consumes 43.42 mW and occupies an area equivalent to 231.45 K gates at a frequency of 200 MHz. The architecture has also been synthesized for the Xilinx Zynq 7020 series field-programmable gate array (FPGA). Index terms: discrete wavelet transform, 3-D DWT, lifting-based DWT, VLSI architecture, flipping structure, strip-based scanning.
opportunistic beamforming ( obf ) is a well known adaptive signaling scheme that has received a great deal of attention in the literature as it attains the sum - rate capacity with full channel state information ( csi ) to a first order for large numbers of mobile users in the network , while operating on partial csi feedback from the users . in this paper, we consider a cellular network which operates according to the obf framework in a multi - cell environment with variable number of transmit beams at each bs .the number of transmit beams is also referred to as the transmission rank ( tr ) in the paper , and we focus on optimally setting the transmission rank at each bs in the network .the earliest work of obf appeared in the landmark paper , where the authors have introduced a single - beam obf scheme for the single - cell multiple - input single - output ( miso ) broadcast channel .the concept was extended to random orthogonal beams in , where is the number of transmit antennas .the downlink sum - rate of this scheme scales as , where is the number of users in the system .recently , the authors in have considered using variable trs at the bs , and they have showed that the downlink sum - rate scales as in interference - limited networks , where is the tr employed by the bs .the gains of adapting variable tr compared to a fixed one is clearly demonstrated in , however , how to select the tr for obf is still an open question which has only been characterized in the asymptotic sense for the single - cell system in - , and a two - transmit antenna single - cell system in . in all of the above works ,the users are assumed to be homogeneous with the large - scale fading gain ( alternatively referred to as the path loss in this paper ) equal to unity .obf in heterogenous networks has been considered in - . in , the authors focused on the fairness of the network and obtained an expression for the ergodic capacity of this fair network . in , the authors modeled the user locations using a spatial poisson point process , and studied the outage capacity of the system . in , the authors considered an interference - limited network and derived the ergodic downlink sum - rate scaling law of this interference - limited network .the trs in - are considered to be fixed . in this paper , we are interested in the quality of service ( qos ) delivered to the users . more precisely , we focus on a set of qos constraints that will ensure a guaranteed minimum rate per beam with a certain probability at each bs .previous studies have shown that user s satisfaction is a non - decreasing , concave function of the service rate ; this suggests that the user s satisfaction is insignificantly increased by a service rate higher than what the user demands , but drastically decreased if the provided rate is below the requirement .the network operator can promise a certain level of qos to a subscribed user . to this end , the qos is closely related to the tr of the bs . increasingthe tr will increase the number of co - scheduled users .however , increasing the tr will also increase interference levels in the network , which will decrease the rate of communication per beam .a practical question arises ; what is a suitable tr to employ at each bs while achieving a certain level of qos in multi - cell heterogeneous networks ? 
the authors in have performed a preliminary study of this problem for a single - cell system consisting of homogeneous users with identical path loss values of unity .the main contributions of this paper are summarized as follows .we focus on finding the achievable trs without violating the above mentioned set of qos constraints .this can be formulated into a feasibility problem .for some specific cases , we derive analytical expressions of the achievable tr region , and for the more general cases , we derive expressions that can be easily used to find the achievable tr region .the achievable tr region consists of all the achievable tr tuples that satisfy the qos constraints .numerical results are presented for a two - cells scenario to provide further insights on the feasibility problem ; our results show that the achievable tr region expands when the qos constraints are relaxed , the snr and the number of users in a cell are increased , and the size of the cells are decreased .we consider a multi - cell multi - user miso broadcast channel .the system consists of bss ( or cells ) , each equipped with transmit antennas .each cell consists of users , each equipped with a single receive antenna .a bs will only communicate with users in its own cell .let denote the channel gain vector between bs and user in cell .the elements in are independent and identically distributed ( i.i.d . )random variables , each of which is drawn from a zero mean and unit variance _ circularly - symmetric complex gaussian _distribution .the large - scale fading gain ( may alternatively referred to as path loss in this paper ) between bs and user in cell is denoted by . the path loss ( pl ) values of all the usersare governed by the pl model for , where represents the distance between the user and the bs of interest .therefore , the random pl values are also i.i.d . among the users , where the randomness stems from the fact that users locations are random .moreover , we assume a quasi - static block fading model over time .the bss operate according to the obf scheduling and transmission scheme as follows. the bss will first pre - determine the number of beams to be transmitted .bs generates random orthogonal beamforming vectors and transmits different symbols along the direction of these beams ( is the tr employed by bs ) .this process is simultaneously carried out at all bss .for bs , let and denote the beamforming vector and the transmitted symbol on beam , respectively .the received signal at user in cell can be written as where is the additive complex gaussian noise .we assume that = \rho_i$ ] , where is a scaling parameter to satisfy the total power constraint at each bs . for conciseness ,we assume . each user will measure the sinr values on the beams from its associated bs , and feed them back .for the beam generated using , the received sinr at user located in cell is given by \left\{\sigma_n^2l_i + g_{i , i , k } \sum_{\substack{l\neq m \\ l=1}}^{l_i } |\mathbf{h}_{i , i , k}^\top \mathbf{w}_{i , l}|^2 \right . \nonumber \\ & \hspace{1.5 cm } \left . + \sum_{j \neq i}^m g_{j , i , k } \frac{l_i}{l_j }\sum_{\substack{t=1}}^{l_j } |\mathbf{h}_{j , i , k}^\top \mathbf{w}_{j , t}|^2 \right\}^{-1}.\end{aligned}\ ] ] once the bss have received the feedback from the users , each bs will select a set of users for communication by assigning each beam to the in - cell user having the highest sinr on it , _i.e. _ , the user with sinr value . 
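A small Monte Carlo sketch of one scheduling instant under this model is given below. It follows the SINR expression above, with random orthonormal beams drawn by QR decomposition and each beam of BS i assigned to the in-cell user with the largest SINR on it. The total transmit power is normalized to one and the noise variance is passed in explicitly; these normalizations are assumptions of the sketch since the exact scaling of the per-beam power is not restated here.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_beams(n_t, L):
    """L random orthonormal beams for an n_t-antenna BS (QR of a Gaussian matrix)."""
    A = (rng.standard_normal((n_t, L)) + 1j * rng.standard_normal((n_t, L))) / np.sqrt(2)
    Q, _ = np.linalg.qr(A)
    return Q[:, :L]                        # columns are the beamforming vectors

def schedule_cell(i, L, g, h, sigma2=1.0):
    """One OBF scheduling instant for cell i.

    L[j]   : transmission rank of BS j
    g[j,k] : path loss from BS j to user k of cell i
    h[j][k]: channel vector from BS j to user k of cell i
    Returns, per beam of BS i, the index of the scheduled (max-SINR) user
    and the corresponding SINR value.
    """
    M, K = len(L), g.shape[1]
    W = [random_beams(h[j][0].shape[0], L[j]) for j in range(M)]
    sinr = np.zeros((K, L[i]))
    for k in range(K):
        p = [np.abs(h[j][k] @ W[j]) ** 2 for j in range(M)]          # beam powers
        inter = sum(g[j, k] * (L[i] / L[j]) * p[j].sum() for j in range(M) if j != i)
        for m in range(L[i]):
            intra = g[i, k] * (p[i].sum() - p[i][m])
            sinr[k, m] = g[i, k] * p[i][m] / (sigma2 * L[i] + intra + inter)
    return sinr.argmax(axis=0), sinr.max(axis=0)

# Example: M = 2 cells, N_t = 4 antennas, K = 20 users in cell 0, ranks L = (3, 2).
M, Nt, K, L = 2, 4, 20, (3, 2)
g = rng.uniform(0.1, 1.0, size=(M, K))       # illustrative path-loss draws
h = [[(rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)
      for _ in range(K)] for _ in range(M)]
users, values = schedule_cell(0, L, g, h)
print("scheduled users per beam of BS 0:", users)
```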
for cell ,let and denote the distributions of the sinr on a beam at user and the maximum sinr on a beam , respectively . since the maximum number of co - scheduled users in cell is equal to , increasing will have the effect of increasing the number of co - scheduled users .however , increasing will also increase the amount of intra - cell and inter - cell interference , which will decrease the rate of communication per beam .therefore , we focus on finding an achievable tr m - tuple with a set of qos constraints at all the bss that will ensure a guaranteed minimum rate per beam with a certain probability . to this end , we consider that an outage probability of can be tolerated at each bs , where the outage event refers to the received sinr of the scheduled user on a beam being below a target sinr threshold value , _i.e. _ , for all .there is also a natural constraint on due to the orthogonality requirement among the beams , _i.e. _ , for all .we focus on finding the achievable such that these constraints are not violated .this is a non - trivial problem for the system of interest due to the presence of intra - cell and inter - cell interferences , and the sinr values on a beam being not identically distributed among the users due to their different locations .we note that there is an implicit constraint that must be an integer . for the analysis, we will relax the integer constraint and assume that is sufficiently large such that the constraints is always satisfied for all .denote as an achievable tr m - tuple with the relaxed constraints ; the corresponding achievable m - tuple is given by for all , where represents the floor function . since the sinr on a beam is a strictly decreasing function of the tr , we have the following property ; given an achievable m - tuple , another m - tuple is achievable if for all . in the remaining parts of the paper, we will focus on finding the achievable trs and the achievable tr region , where the achievable tr region is defined to consist all the achievable m - tuples .we will call the constraints on the qos constraints .we will start our analysis with a simple single - cell scenario .we drop the cell index for brevity . for a single cell ,the sinr expression in ( [ eq : sinr_expression ] ) reduces to a given pl value , by using techniques similar to those used in , it is not hard to show that is given by therefore , by conditioning on , the cdf of is given by .\end{aligned}\ ] ] first we consider the simplest case where the user s are located equidistant to the bs , _i.e. _ , the user s pl values are identical and deterministic , and given by . for this simplest case ,a closed - form expression for the achievable tr can be obtained , and it is formally presented through the following theorem .[ thm : single_homo_iid ] for the system in consideration with and , the achievable trs are given by is the target sinr threshold value .with equal pl values , the qos constraint is given by ^k \leq p.\end{aligned}\]]solving for completes the proof . setting makes the result in theorem [ thm : single_homo_iid ] consistent with .now , we will consider the users to be heterogenous as in section [ sec : sys_model ] .we model the cell as a disk with radius . 
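As a numerical counterpart to the closed-form result above for identical path losses, and as a quick way to read off achievable ranks, the sketch below evaluates the outage constraint [F(γ)]^K ≤ p directly and sweeps L. The per-user SINR CDF used here, F(x) = 1 − e^{−xL/(g₀·snr)}/(1+x)^{L−1}, is the standard single-cell OBF form with per-beam power 1/L; treating this as the exact normalization of the theorem is an assumption of the sketch.

```python
import numpy as np

def single_cell_outage(L, K, gamma, snr, g0=1.0):
    """Outage probability on a beam for the single-cell, equal-path-loss case.

    F(x) = 1 - exp(-x * L / (g0 * snr)) / (1 + x)**(L - 1) is used for the
    per-user SINR CDF (per-beam power 1/L, noise power 1/snr); the scheduled
    user is the maximum of K i.i.d. draws, so the outage is F(gamma)**K.
    """
    F = 1.0 - np.exp(-gamma * L / (g0 * snr)) / (1.0 + gamma) ** (L - 1)
    return F ** K

def max_achievable_rank(N_t, K, gamma, snr, p, g0=1.0):
    """Largest L in {1, ..., N_t} whose outage stays below p (None if none does)."""
    feasible = [L for L in range(1, N_t + 1)
                if single_cell_outage(L, K, gamma, snr, g0) <= p]
    return max(feasible) if feasible else None

# Example: N_t = 8 antennas, K = 50 users, threshold 3 dB, SNR = 10 dB, p = 0.1.
print(max_achievable_rank(8, 50, 10 ** 0.3, 10.0, 0.1))
```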
given the non - identical pl values , is given by ( [ eq : cdf_sc_noniid ] ) for this setup , and the qos constraint can be written as \leq p.\end{aligned}\ ] ] since the user locations are random in our setup , removing the conditioning of by averaging over the pl values gives us the qos constraint of interest .this idea is formally presented in the following lemma .[ lem : unbounded_path_model ] for the system in consideration with and the random pl values governed by the pl model for , the qos constraint is given by ^k \leq p,\end{aligned}\]]where is the target sinr threshold value and is the lower incomplete gamma function .since the users are located uniformly over the plane , the cdf of the distance from the user to its associated bs is given by .let denote the cdf of the pl value , which is given by . since the pl values are i.i.d .among the users , we have ^k.\end{aligned}\]]substituting for the cdfs and setting gives us ^k.\end{aligned}\]]evaluating the integral completes the proof .the achievable tr region consists of all the achievable that satisfy ( [ eq : qos_constraint_unbounded_pathloss_model ] ) .next , we will focus on the general multi - cell scenario .similar to what we have done in the previous section , we will start the analysis by obtaining an expression for the conditional distribution of the sinr on a beam at a user .this result is formally presented in the following lemma .[ lem : sinr_cdf ] consider user in cell .given the pl values from all the bss to user , _i.e. _ , given , the conditional distribution of the sinr on a beam is given by the conditional distribution of the sinr can be obtained using a result in , which is summarized as follows .suppose , are independent exponentially distributed random variables ( rvs ) with parameters .then where is a constant . given all the pl values , can be re - written as is a constant , and and are independent exponentially distributed rvs with parameters and , respectively .therefore , directly using the result in completes the proof . using the above lemma , given all pl values ,the conditional cdf of the maximum sinr on a beam can be written as , we will use this expression to find the achievable tr region considering different scenarios , similar to what we have done in section [ sec : l ] . forthe clarity of presentation and the ease of explanation , we present the analysis for the two - cells scenario ; the analysis of the -cells scenario can be easily extended using the same techniques .first we consider the classical wyner model for the two - cells scenario .the users pl values are deterministic as follows ; the pl value between all the users to their associated bs is unity , and the pl value between all the users to the interfering bs is . for this setup ,the qos constraint for cell one is given by ^k \leq p,\end{aligned}\]]where is the target sinr threshold .the qos constraint for cell two can be easily obtained by interchanging and in the indices .analytical expressions that characterize the achievable tr region for this setup are formally presented through the following theorem .[ thm : wyner_model ] for the wyner model , given a fixed , the achievable trs for cell one is given by is the lambert - w function given by the defining equation , , , , , and is the target sinr threshold .with some simple manipulations , we can re - write the qos constraint in ( [ eq : qos_mc_iid_unequal ] ) following chain of inequalities holds which completes the proof . 
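The theorem above gives the boundary in closed form through the Lambert-W function; the sketch below instead checks the Wyner-model constraint numerically and sweeps the rank pair, which is often enough to trace the achievable region. The conditional SINR CDF is taken in the form suggested by the multi-cell lemma above with own-cell path loss 1 and cross-cell path loss ε, and the noise/power normalization is folded into an effective SNR; both are assumptions of the sketch.

```python
import numpy as np

def wyner_outage(L1, L2, K, gamma, eps, snr):
    """Outage for cell one under the two-cell Wyner model (own-cell PL 1, cross PL eps).

    The conditional SINR CDF follows the structure of the multi-cell lemma with
    equal, deterministic path losses; the effective per-user SNR 'snr' absorbs
    the noise/power scaling, which is an assumption of this sketch.
    """
    F = 1.0 - np.exp(-gamma * L1 / snr) / (
        (1.0 + gamma) ** (L1 - 1) * (eps * (L1 / L2) * gamma + 1.0) ** L2)
    return F ** K

def achievable_region_cell1(N_t, K, gamma, eps, snr, p):
    """For each L2, the largest feasible L1 for cell one (0 if none is feasible).

    Cell two's constraint is symmetric and is obtained by interchanging indices.
    """
    region = {}
    for L2 in range(1, N_t + 1):
        ok = [L1 for L1 in range(1, N_t + 1)
              if wyner_outage(L1, L2, K, gamma, eps, snr) <= p]
        region[L2] = max(ok) if ok else 0
    return region

print(achievable_region_cell1(N_t=8, K=50, gamma=1.0, eps=0.3, snr=10.0, p=0.1))
```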
given a fixed , the achievable trs for cell two can be easily obtained by interchanging and in the indices .the achievable tr region consists of all the achievable tuples . when , the result in theorem [ thm : wyner_model ] can be further simplified , and the result is presented in the following corollary . [ cor : wyner_model ] for the wyner model ,if , the achievable trs are given by next , we consider the users to be heterogeneous as in section [ sec : sys_model ] . for this scenario ,if all the path loss values are given , the qos constraint for cell one can be written using as \leq p.\end{aligned}\]]the qos constraint for cell two can be easily obtained by interchanging and in the indices . since the user locations are random , we need to remove the conditioning on by averaging over the pl values . with multiple bss , the pl values between the user and each bs are correlated . hence , it is difficult to average over the pl values directly as in the single cell case because it is difficult to obtain the cdf of the path loss value .nonetheless , since the pl values are directly related to the distance between the user and each of the bss , we can perform a change of variables by writing each pl as a function of the user and bs locations , and then average over the location process by making use of the fact that the users are located uniformly over the plane . for the purpose of illustrating the idea , consider a user in cell one and let denote its exact locationcoordinate on the two dimensional plane . for convenience , we assume that a user is always connected to the closest bs geographically , _i.e. _ , the two cells are arranged in a rectangular grid on the two dimensional plane . hence and are independent and uniformly distributed within the cell for all .let denote the location coordinate of bs .figure [ fig : two_cell_model ] illustrates the setup .the distance from the user to bs one and two is therefore and , respectively .thus the pl values are given by and , respectively .the following lemma presents the qos constraints for a two - cells scenario consisting of heterogeneous users with random pl values .[ lem : multi_cell_hetero ] for the system in consideration with bs being located at , given a fixed , the qos constraint of cell one is where is the target sinr , is the area of cell one , and is defined by the following integral ^{\alpha/2}\right)}{\left [ \left(\frac{(x - x_1)^2+y^2}{(x - x_2)^2+y^2}\right)^{\frac{\alpha}{2 } } \frac{l_1}{l_2}\eta+1\right]^{l_2 } } dx dy,\end{aligned}\]]and the integration is over the area of cell one .first we substitute and to ( [ eq : sinr_distribution ] ) to get . given a user s locationcoordinate , is given by ^{-\alpha/2 } } \right ) \\ & \left[(s+1)^{l_1 - 1 } \left ( \left(\frac{(x_{1,k}-x_1)^2+(y_{1,k})^2}{(x_{1,k}-x_2)^2+(y_{1,k})^2}\right)^{\frac{\alpha}{2 } } \frac{l_1}{l_2}s+1\right)^{l_2}\right]^{-1}.\end{aligned}\]]averaging ( [ eq : cdf_mc_noniid ] ) over cell one gives us is the joint pdf of and .since the location coordinates are i.i.d . amongthe users , we have ^k.\end{aligned}\]]substituting for completes the proof .given a fixed , the qos constraint for cell two can be easily obtained by interchanging and in the indices .the achievable tr region is given by all the tuples that satisfy the qos constraints for both cells .in this section , we present our numerical results for the single cell and two - cell scenarios . 
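Before turning to those results, note that the double integral in the heterogeneous-user lemma above can also be estimated by Monte Carlo, which is how one would typically trace the boundary in practice. The sketch below drops users uniformly in a square cell, averages the conditional CDF over the user locations, and raises the average to the power K; the square-cell geometry follows the rectangular-grid assumption above, while the effective SNR normalization is an assumption of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_outage_cell1(L1, L2, K, gamma, alpha, side, bs1, bs2, snr, n_samples=200_000):
    """Monte Carlo estimate of the cell-one QoS constraint with heterogeneous users.

    Users are dropped uniformly in the square cell of side length `side`
    centred on bs1; path losses are d**(-alpha). The noise/power scaling is
    folded into `snr`, which is an assumption of this sketch.
    """
    x = rng.uniform(bs1[0] - side / 2, bs1[0] + side / 2, n_samples)
    y = rng.uniform(bs1[1] - side / 2, bs1[1] + side / 2, n_samples)
    d1sq = (x - bs1[0]) ** 2 + (y - bs1[1]) ** 2
    d2sq = (x - bs2[0]) ** 2 + (y - bs2[1]) ** 2
    g1, g2 = d1sq ** (-alpha / 2), d2sq ** (-alpha / 2)
    F = 1.0 - np.exp(-gamma * L1 / (g1 * snr)) / (
        (1.0 + gamma) ** (L1 - 1) * ((g2 / g1) * (L1 / L2) * gamma + 1.0) ** L2)
    return np.mean(F) ** K          # location-averaged CDF, then maximum of K users

# Example: two unit cells side by side, BSs at (0.5, 0.5) and (1.5, 0.5).
print(mc_outage_cell1(L1=2, L2=2, K=50, gamma=1.0, alpha=3.0,
                      side=1.0, bs1=(0.5, 0.5), bs2=(1.5, 0.5), snr=10.0))
```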
in all the simulations , the cell is modeled as a disk with radius for the single cell scenario , and each cell is modeled as a square with cell area for the two - cell scenario . in figures [ fig : region_homo1]-[fig : region_hetero ] , we show the achievable tr regions for the two - cells scenario . figures [ fig : region_homo1 ] and [ fig : region_homo2 ] show the achievable tr regions for the wyner model with and , respectively .the dotted line connecting the origin and the corner point in each region represents the achievable tr set given in corollary [ cor : wyner_model ] .figure [ fig : region_hetero ] shows the achievable tr regions for heterogeneous users , using the result of lemma [ lem : multi_cell_hetero ] . for a given , any below the boundary can be achieved , whereas any above the boundary will violate the qos constraints . moreover ,if the system wants to maximize the multiplexing gain at each bs , operating at strictly below the boundary is sub - optimal in a sense that we can further increase the trs without violating the qos constraints .therefore , the boundary curve can be considered as the pareto optimal boundary between the achievable and un - achievable tr pairs . as can be observed from the figures ,tr region expands when the qos constraints are relaxed , _ i.e. _ , is increased and/or is decreased . relaxing the qos constraintsallows more interference in the network , thus expanding the achievable tr region .moreover , the achievable tr region also expands when is increased . the achievable rate on a beam increases due to multi - user diversity ,therefore , more beams / interference can be tolerated without violating the constraints .the achievable tr region will also change with and and will be discussed further in figures [ fig : vsk]-[fig : vssnr ] ..,width=259 ] .,width=259 ] . ,width=259 ] let denote the maximum achievable tr with the relaxed constraints on .figure [ fig : vsk ] shows vs. , for the single - cell scenario and two - cells scenario with equal trs . as can be observed from the figure , for a fixed , decreases as the cell size increases .this is because the users are uniformly located in the cell and as the cell size increases , the users locations will be more spread out . as a consequence ,the sinr on each beam will decrease and we must compensate this by decreasing the tr ( to decrease the interferences ) . figure [ fig : vssnr ] shows vs. snr for fixed number of users , where snr is defined as . as can be observed from the figure , increases with snr .intuitively , when decreases , we can increase the tr ( effectively introduces additional interferences ) while still satisfying the qos constraints .therefore , the achievable tr region will expand with decreased cell size or increased snr .finally , decreases as , the number of cells , increases .this is because the sinr on each beam decreases with .for single cell and two - cell scenarios with .,width=288 ]in this paper , we considered a multi - cell multi - user miso broadcast channel .each cell employs the obf scheme with variable trs . 
we focused on finding the achievable trs for the bss to employ with a set of qos constraints that ensures a guaranteed minimum rate per beam with a certain probability at each bs .we formulated this into a feasibility problem for the single - cell and multi - cell scenarios consisting of homogeneous users and heterogeneous users .analytical expressions of the achievable trs were derived for systems consisting of homogeneous users and for systems consisting of heterogeneous users , expressions were derived which can be easily used to find the achievable trs .an achievable tr region was obtained , which consists of all the achievable tr tuples for all the cells to satisfy the qos constraints .numerical results showed that the achievable tr region expands when the qos constraints are relaxed , the snr and the number of users in a cell are increased , and the size of the cells are decreased .99 p. viswanath , d. n. c. tse and r. laroia , `` opportunistic beamforming using dumb antennas , '' _ ieee trans .inf . theory _ ,1277 - 1294 , jun . 2002 .m. sharif and b. hassibi , `` on the capacity of mimo broadcast channels with partial side information , '' _ ieee trans .inf . theory _506 - 522 , feb . 2005 .m. wang , f. li and j. s. evans , `` opportunistic beamforming with precoder diversity in multi - user mimo systems , '' in _ proc .ieee vehicular technol .dresden , germany _ , jun . 2 - 5 , 2013 .j. wagner , y .- c . liang and r. zhang , `` random beamforming with systematic beam selection , '' in _ proc .symp . on personal , indoor and mobile radio commun . , helsinki , finland _ , pp . 1 - 5 , sept .11 - 14 , 2006 .j. l. vicario _et al . _ , `` beam selection strategies for orthogonal random beamforming in sparse networks , '' _ ieee trans .wireless commun ._ , vol . 7 , no .9 , pp . 3385 - 3396 , sept .et al . _ ,`` transmission mode selection in a heterogeneous network using opportunistic beamforming , '' in proc ._ ieee global commun .atlanta , ga _ ,9 - 13 , 2013 .y. huang and b. rao , `` performance analysis of random beamforming with heterogeneous users , '' in _ proc .annual conf . on inf .sciences and systems , princeton , nj _ , pp .1 - 5 , mar .21 - 23 , 2012 .t. samarasinghe , h. inaltekin and j. s. evans , `` outage capacity of opportunistic beamforming with random user locations , '' in _ proc .ieee global commun .atlanta , ga _ ,9 - 13 , 2013 .m. wang , t. samarasinghe and j. s. evans , `` multi - cell opportunistic beamforming in interference - limited networks , '' in _ proc .australian commun .theory workshop , sydney , australia _ , feb .3 - 5 , 2014 .n. enderl and x. lagrange , `` user satisfaction models and scheduling algorithms for packet - switched services in umts , '' in _ proc .ieee vehicular technol .conf . , jeju , korea _ , pp .1704 - 1709 , apr .22 - 25 , 2003 .n. zorba and a. i. prez - niera , `` robust power allocation scheme for multibeam opportunistic transmission strategies under quality of service constraints , '' _ ieee j. on sel .areas commun .26 , no . 8 , pp . 1025 - 1034 , aug .n. zorba and a. i. prez - neira , `` optimum number of beams in multiuser opportunistic scheme under qos constraints , '' in _ proc .ieee workshop on smart antennas , vienna , austria _ , feb .26 - 27 , 2007 .d. n. c. tse and p. viswanath , _ fundamentals of wireless communications_. cambridge , u.k .: cambridge univ .press , 2005 .t. samarasinghe , h. inaltekin and j. s. 
evans , `` the feedback - capacity tradeoff for opportunistic beamforming under optimal user selection , '' _ performance evaluation _ ,70 , issues 7 - 8 , pp .472 - 492 , jul .y. huang and b. rao , `` multicell random beamforming with cdf - based scheduling : exact rate and scaling laws , '' in _ proc .ieee vehicular technol .conf . , las vegas , nv _ , sept . 2 - 5 , 2013 .i. gradshteyn and i. ryzhik , _ table of integrals , series , and products seventh edition ._ academic press , 2007 .s. kandukuri and s. boyd , `` optimal power control in interference - limited fading wireless channels with outage - probability specifications , '' _ ieee trans .wireless commun ._ , vol . 1 ,, jan . 2002 .a. d. wyner , `` shannon - theoretic approach to a gaussian cellular multiple - access channel , '' _ ieee trans .inf . theory _ ,1713 - 1727 , nov .
In this paper, we consider a multi-cell multi-user MISO broadcast channel. The system operates according to the opportunistic beamforming framework in a multi-cell environment with a variable number of transmit beams (alternatively referred to as the transmission rank) at each base station. The maximum number of co-scheduled users in a cell equals its transmission rank, so increasing the rank increases the multiplexing gain. However, it also increases the amount of interference in the network, which decreases the rate of communication per beam. This paper focuses on optimally setting the transmission rank at each base station so that a set of quality-of-service (QoS) constraints, which ensure a guaranteed minimum rate per beam at each base station, is not violated. Expressions representing the achievable region of transmission ranks are obtained for different network settings. The achievable transmission rank region consists of all achievable transmission rank tuples that satisfy the QoS constraints. Numerical results are also presented to provide further insight into the feasibility problem.
registration , the task of establishing correspondences between multiple instances of objects such as images , landmarks , curves , and surfaces , plays a fundamental role in a range of computer vision applications including shape modeling , motion compensation and optical flow , remote sension , and medical imaging . in the subfield of computational anatomy ,establishing inter - subject correspondences between organs allows the statistical study of organ shape and shape variability .examples of the fundamental role of registration include quantifying developing alzheimer s disease by establishing correspondences between brain tissue at different stages of the disease ; measuring the effect of copd on lung tissue after removing the variability caused by the respiratory process ; and correlating the shape of the hippocampus to schizophrenia after inter - subject registration . in this paper , we survey the role of symmetry in diffeomorphic registration and deformation modeling and link symmetry as seen from the field of geometric mechanics with the image registration problem .we focus on large deformations modeled in subgroups of the group of diffeomorphic mappings on the spatial domain , the approach contained in the large deformation diffeomorphic metric mapping ( lddmm , ) framework .connections with geometric mechanics have highlighted the role of symmetry and resulted in previously known properties connected with the registration of specific data types being described in a common theoretical framework .we wish to describe these connections in a form that highlights the role of symmetry and points towards future applications of the ideas .it is the aim that the paper will make the role of symmetry in registration and deformation modeling clear to the reader that has no previous familiarity with symmetry in geometric mechanics and symmetry groups in mathematics .one of the main reasons symmetry is useful in numerics is in it s ability to reduce how much information one must carry . as a toy example , consider the a top spinning in space . upon choosing some reference configuraiton ,the orientation of the top is given by a rotation matrix , i.e. an element .if i ask for you to give me the direction of the pointy tip of the top , ( which is pointing opposite in the reference ) it suffices to give me . however , is contained in space of dimension , while the space of possible directions is the -sphere , , which is only of dimension .therefore , providing the full matrix is excessive in terms of data .it suffices to just provide the vector .note that if , then .therefore , given only the direction , we can only reconstruct up to an element which preserves .the group of element which preserve is identifiable with .this insight allows us to express the space of directions as a homogenous space . in terms of infomationwe can cartoonishly express this by the expression this example is typically of all group quotients .if is some universe of objects and is a group which acts freely upon , then the orbit space hueristically contains the data of minus the data which transforms .thus reduction by symmetry can be implemented when a problem posed on has symmetry , and can be rewritten as a problem posed on .the later space containing less data , and is therefore more efficient in terms of memory .registration of objects contained in a spatial domain , e.g. 
the volume to be imaged by a scanner , can be formulated as the search for a deformation that transforms both domain and objects to establish an inter - object match .the data available when solving a registration problem generally is incomplete for encoding the deformation of every point of the domain .this is for example the case when images to be matched have areas of constant intensity and no derivative information can guide the registration . similarly ,when 3d shapes are matched based on similarity of their surfaces , the deformation of the interior can not be derived from the available information .the deformation model is in these cases over - complete , and a range of deformations can provide equally good matches for the data .here arises _ symmetry _ : the subspaces of deformations for which the registration problem is symmetric with respect to the available information . when quotienting out symmetry subgroups , a vastly more compact representation is obtained . in the image case ,only displacement orthogonal to the level lines of the image is needed ; in the shape case , the information left in the quotient is supported on the surface of the shape only .we start with background on the registration problem and the large deformation approach from a variational viewpoint . following this, we describe how reduction by symmetry leads to an eulerian formulation of the equations of motion when reducing to the lie algebra .symmetry of the dissimilarity measure allows additional reductions , and we use isotropy subgroups to reduce the complexity of the registration problem further .lastly , we survey the effect of symmetry in a range of concrete registration problems and end the paper with concluding remarks .the registration problem consists in finding correspondences between objects that are typically point sets ( landmarks ) , curves , surfaces , images or more complicated spatially dependent data such as diffusion weighted images ( dwi ) .the problem can be approached by letting be a spatial domain containing the objects to be registered . can be a differentiable manifold or , as is often the case in applications , the closure of an open subset of , , e.g. the unit square .a map can deform or warp the domain by mapping each to .acts on an image by composition with the inverse warp , .given two images , image registration involves finding a warp such that is close to as measured by a dissimilarity measure . ]the deformation encoded in the warp will apply to the objects in as well as the domain itself .for example , if the objects to be registered consist of points sets , , the set will be mapped to . for surfaces , similarly results in the warped surface . because those operations are associative ,the mapping acts on or and we write and for the warped objects .an image is a function , and acts on as well , in this case by composition with its inverse , see figure [ fig : registration ] . for this must be is invertible , and commonly we restrict to the set of invertible and differentiable mappings . 
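As a concrete illustration of the action on images, the following toy sketch lets a warp act on a discrete image by composition with its inverse, implemented by interpolation on the pixel grid. The inverse warp is written down in closed form here, which is an assumption of the toy example (in the registration setting it is obtained from the flow); point sets would instead be transported directly, x_i ↦ φ(x_i).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(I, phi_inv):
    """Compute the deformed image I ∘ φ⁻¹ on the pixel grid of I.

    `phi_inv` maps pixel coordinates (rows, cols) of the deformed image to
    coordinates in the original image; intensities are obtained by linear
    interpolation. This is a toy implementation of the group action only.
    """
    rows, cols = np.meshgrid(np.arange(I.shape[0]), np.arange(I.shape[1]),
                             indexing="ij")
    src_r, src_c = phi_inv(rows, cols)
    return map_coordinates(I, [src_r, src_c], order=1, mode="nearest")

# Toy example: a translation-plus-shear warp, with its inverse given directly.
I = np.zeros((64, 64)); I[20:40, 20:40] = 1.0
warped = warp_image(I, lambda r, c: (r - 3.0, c - 3.0 - 0.1 * r))
print(warped.shape)
```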
for various other types of data objects, the action of a warp on the objects can be defined in a way similar to the case for point sets , surfaces and images .this fact relates a range registration problems to the common case of finding appropriate warps that trough the action brings the objects into correspondence .trough the action , different instances of a shape can be realized by letting warps act on a base instance of the shape , and a class of shape models can therefor be obtained by using deformations to represent shapes .the search for appropriate warps can be formulated in a variational formulation with an energy where is a dissimilarity measure of the difference between the deformed objects , and is a regularization term that penalizes unwanted properties of such as irregularity .if two objects and is to be matched , can take the form using the action of on ; for image matching , an often used dissimilarity measure is the -difference or sum of square differences ( ssd ) that has the form .the regularization term can take various forms often modeling physical properties such as elasticity and penalizing derivatives of in order to make it smooth .the free - form - deformation ( ffd , ) and related approaches penalize directly . for some choices of , existence and analytical properties of minimizers ofhave been derived , however it is in general difficult to ensure solutions are diffeomorphic by penalizing in itself .instead , flow based approaches model one - parameter families or paths of mappings , ] . given an intial condition , the point given by solving this initial value problem is uniquely determined ( if it exists ) . under reasonable conditions exists for each , and there is a map which we call the flow of .if is time - dependent we can consider the initial value problem with . under certain conditions ,this will also yield a flow map , which is the flow from time to . if is smooth , the flow map is smooth as well , in particular a diffeophism .we denote the set of diffeomorphisms by .conversely , let be a time - dependent diffeomorphism .thus , for any , we observe that is a curve in .if this curve is differentiable we may consider its time - derivative , , which is a vector above the point . from these observationsit imediately follows that is a vector above .therefore the map , given by is a vector - field which we call the _ eulerian velocity field of . the eulerian velocity field contains less data than , and this reduction in data can be viewed from the perspective of symmetry . 
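The construction of a flow from a time-dependent velocity field can be made concrete in a few lines: the sketch below integrates dx/dt = v(t, x) for a set of sample points with a plain forward-Euler scheme, which is an illustrative choice rather than the integrator one would use in practice.

```python
import numpy as np

def flow(v, points, t0=0.0, t1=1.0, n_steps=100):
    """Approximate the flow map of a time-dependent velocity field at sample points.

    `v(t, x)` returns the velocity at time t for an array of points x of shape
    (n, d); the ODE dx/dt = v(t, x) is integrated with forward Euler, so the
    result is only an approximation of the exact flow.
    """
    x = np.array(points, dtype=float)
    dt = (t1 - t0) / n_steps
    for k in range(n_steps):
        x = x + dt * v(t0 + k * dt, x)
    return x

# A divergence-free toy field: rigid rotation about the origin.
v_rot = lambda t, x: np.stack([-x[:, 1], x[:, 0]], axis=1)
pts = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
print(flow(v_rot, pts, t1=np.pi / 2))    # approximately a 90-degree rotation
```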
given any , the curve can be transformed to the curve .we observe that thus and both have the same eulerian velocity fields .in other words , the eulerian velocity field , , is invariant under particle relablings .schematically , the following holds finally , we will denote some linear operators on the space of vector - fields .let and let .the _ push - forward _ of by , denoted , is the vector - field given by (x ) = \left .d\phi \right|_{\phi^{-1}(x ) } \cdot u ( \phi^{-1}(x)).\end{aligned}\ ] ] by inspection we see that is a linear operator on the vector - space of vector - fields .one can view as `` in a new coordinate system '' because any differential geometric property of is also inherited by .for example , if then ( y ) = 0 ] by the equation , w \rangle + \langle m , \pounds_u[w ] \rangle = 0\end{aligned}\ ] ] for all .this is a satisfying because for a fixed and we observe , w \rangle + \langle m,\pounds_u[w ] \rangle = 0 = \frac{d}{dt } \langle m , w \rangle = \frac{d}{dt}\langle ( \phi_t^u ) _ * m , ( \phi_t^u ) _ * \rangle \label{eq : product_rule}\end{aligned}\ ] ] this is nothing but a coordinate free version of the product rule .the variational formulation of lddmm is equivalent to minimizing the energy where is a distance metric on , is the identity diffeomorphism , and is a function which measures the disparity between the deformed template and the target image. given images , we consider the dissimilarity measure [ ex : imssd ] in this article we will consider the distance metric , \mathfrak{x}(m ) ) \\\phi^v_{0,1 } \circ \varphi_0 = \varphi_1 } } \left ( \int_0 ^ 1 \| v(t ) \| dt \right),\end{aligned}\ ] ] where is some norm on .if is induced by an inner - product , then this distance metric is ( formally ) a riemannian distance metric on .note that the distance metric , , is written in terms of a norm , defined on . in fact , the norm on induces a riemannian metric on given by and is the reimannian distance with respect to this metric .if the norm imposes a hilbert space structure on the vector - fields it can be written in terms of a psuedo - differential operator as , u \rangle ] , where {/g_f} ] .again , this is useful in the sense of data , as is illustrated in the following example .consider the dissimilarity measure of example [ ex : two_particles ] .the function , is finally , note that acts upon by the left action {g_f } \in \operatorname{diff}(m ) / g_f \stackrel{\psi \in \operatorname{diff}(m ) } { \longmapsto } [ \psi \circ \varphi]_{/g_f } \in \operatorname{diff}(m ) / g_f.\end{aligned}\ ] ] usually we will simply write for the action of on a given .this means that acts upon infinitesimally , as it is the lie algebra of .consider the setup of example [ ex : two_particles ] . here and the left action of is given by for and .the infinitesimal action of on is these constructions allow us to rephrase the initial optimization problem using a reduced curve energy .minimization of is equivalent to minimization of where is obtained by integrating the ode , with the intial condition {/g_f} ]we can define the ( time - dependent ) isotropy algebra this is nothing but the lie - algebra associated to the isotropy group .it turns out that the velocity field which minimizes ( or ) is orthogonal to with respect to the chosen inner - product .intuitively this is quite sensible because velocities which do not change do not alter the data , and simply waste control effort .this intuitive statement is roughly the content of the following proof .let satisfy or . 
then ] .let ^*w(1) ] . denoting ] is constant .we ve already verified that at , this inner - product is zero , thus , w(t ) \rangle = 0 ] satisfies for some covectors and for any . in other words where denotes the dirac delta functional cetnered at .this orthogonality constrain allows one to reduce the evolution equation on to an evolution equation on ( which might be finite dimensional if is large enough ) .in particular there is a map uniquely defined by the conditions and with respect to the chosen inner - product on vector - fields .consider the setup of example [ ex : two_particles ] with .then . let be the matrix - valued reproducing kernel of ( see ) .then is given by where are such that and .one can immediately observe that is injective and linear in .in other words is an injective linear map for fixed .because the optimal is orthogonal to we may invert on .in particular , we may often write the equation of motion on rather than on . this is a massive reduction if is finite dimensional . in particular, the inner - product structure on induces a riemannian metric on given by , v(q , v_2 ) \rangle.\end{aligned}\ ] ] the equations of motion in and map to the geodesic equations on .let extremize or .then there exists a unique trajectory such that .moreover , is a geodesic with respect to the metric .let minimize .thus satisfies . by the previous proposition orthogonal to .as is injective on , there exists a unique such that .note that can be written as thus , minimizers of correspond to geodesics in with respect to the metric . if we let be the hamiltonian induced by the metric on we obtain the most data - efficient form or and .minimizers of ( or ) are : {/g_f}. \end{cases } \label{eq : extreme4}\end{aligned}\ ] ] we see that this is a boundary value problem posed entirely on .if is finite dimensional , this is a massive reduction in terms of data requirements .consider the setup of example [ ex : two_particles ] with .the metric on is most easily expressed on the cotangent bundle .if is the matrix valued kernel of , the metric on takes the form a related approach to defining distances on a space of objects to be registered consists of defining an object space upon which acts transitively there exists a such that with distance here the distance on is defined directly from the distance in the group that acts on the objects , see for example . with this approach ,the riemannian metric descends from to a riemannian metric on and geodesics on lift by horizontality to geodesics on .the quotient spaces obtained by reduction by symmetry and their geometric structure corresponds to the object spaces and geometries defined with this approach .intuitively , reduction by symmetry can be considered a removal of redundant information to obtain compact representations while letting the metric descend to the object space constitutes an approach to defining a geometric structure on an already known space of objects . the solutions which result are equivalent to the ones presented in this article because where for some fixed reference object .we here give a number of concrete examples of how symmetry reduce the infinite dimensional registration problem over to lower , in some cases finite , dimensional problems . 
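For landmarks, the reduced boundary-value problem above is a finite-dimensional Hamiltonian system, and the forward map is computed by geodesic shooting. The sketch below uses a scalar Gaussian kernel and a plain Euler integrator; both choices, as well as the kernel width, are assumptions of the sketch (practical implementations use higher-order or symplectic integrators and possibly other kernels).

```python
import numpy as np

def gauss_kernel(qa, qb, sigma):
    """Scalar Gaussian kernel matrix K_ij = exp(-|qa_i - qb_j|^2 / (2 sigma^2))."""
    d2 = ((qa[:, None, :] - qb[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def shoot(q0, p0, sigma=0.5, n_steps=50):
    """Geodesic shooting for N landmarks in R^d on the reduced landmark space.

    Integrates Hamilton's equations for H(q, p) = 1/2 sum_ij K(q_i, q_j) p_i.p_j
    with a simple Euler scheme (a sketch only). Returns the landmark path and
    the final momenta.
    """
    q, p = q0.astype(float).copy(), p0.astype(float).copy()
    dt, path = 1.0 / n_steps, [q0.copy()]
    for _ in range(n_steps):
        K = gauss_kernel(q, q, sigma)                     # (N, N)
        dq = K @ p                                        # qdot_i = sum_j K_ij p_j
        pp = p @ p.T                                      # inner products p_i.p_j
        diff = q[:, None, :] - q[None, :, :]              # (N, N, d)
        dp = (K * pp / sigma ** 2)[:, :, None] * diff     # per-pair contributions
        q, p = q + dt * dq, p + dt * dp.sum(axis=1)
        path.append(q.copy())
    return np.array(path), p

# Two landmarks with parallel momenta: the kernel couples their motion.
q0 = np.array([[-0.5, 0.0], [0.5, 0.0]])
p0 = np.array([[0.0, 1.0], [0.0, 1.0]])
path, p1 = shoot(q0, p0)
print(path[-1])          # final landmark positions q(1)
```

Shooting on the initial momentum p0, as mentioned below for the landmark case, turns the infinite-dimensional matching problem into an optimization over these finitely many momentum vectors.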
in all examples, the symmetry of the dissimilarity measure with respect to a subgroup of gives a reduced space by quotienting out the symmetry subgroup .the space used in the examples in section [ sec : red ] constitutes a special case of the landmark matching problem where sets of landmarks , are placed into spatial correspondence trough the left action of by minimizing the dissimilarity measure .the landmark space arises as a quotient of from the symmetry group as in in example [ ex : two_particles ] .reduction from to in the landmark case has been used in a series of papers starting with .landmark matching is a special case of jet matching as discussed below .hamilton s equations take the form on where denotes the spatial derivative of the reproducing kernel .generalizing the situation in example [ ex : ex5 ] , the momentum field is a finite sum of dirac measures that through the map gives an eulerian velocity field as a finite linear combination of the kernel evaluated at : .registration of landmarks is often in practice done by optimizing over the initial value of the momentum in the ode to minimize , a strategy called shooting . using symmetry , the optimization problem is thus reduced from an infinite dimensional time - dependent problem to an dimensional optimization problem involving integration of a dimensional ode on .the space of smooth non - intersecting closed parametrized curves in is also known as the space of embeddings , denoted .the parametrization can be removed by considering the right action of on given by then the quotient space is the space of _ unparametrized curves_. the space is a special case of a nonlinear grassmannian .it is not immediately clear if this space is a manifold , although it is certainly an orbifold .in fact the same question can be asked of and .a few conditions must be enforced on the space of embeddings and the space of diffeormophisms in order to impose a manifold structure on these spaces , and these conditions along with the metric determine whether or not the quotient can inherit a manifold structure .we will not dwell upon these matters here , but instead we refer the reader to the survey article .when the parametrization is not removed , embedded curves and surfaces can be matched with the current dissimilarity measure .the objects are considered elements of the dual of the space of differential -forms on . in the surface case, the surface can be evaluated on a -form by where is an orthonormal basis for and the surface element .the dual space is linear an can be equipped with a norm thereby enabling surfaces to be compared with the norm .note that the evaluation does not depend on the parametrization of .the isotropy groups for curves and surfaces generalize the isotropy groups of landmarks by consisting of warps that keeps the objects fixed , i.e. the momentum field will be supported on the transported curves / surfaces for optimal paths for in .images can be registered using either the -difference defined in example [ ex : imssd ] or with other dissimilarity measures such as mutual information or correlation ratio . the similarity will be invariant to any infinitesimal deformation orthogonal to the gradient of dissimilarity measure . 
in the case , this is equivalent to any infinitesimal deformation orthogonal to the level lines of the moving image .the momentum field thus has the form for a smooth function on and the registration problem can be reduced to a search over the scalar field instead of vector field .minimizers for follow the pde with representing the deformed image at time .-difference will be orthogonal to level lines of the image and symmetry implies that the momentum field will be orthogonal to the level lines so that for a time - dependent scalar field . ] in an extension of the landmark case has been developed where higher - order information is advected with the landmarks .these higher - order particles or _ jet - particles _ have simultaneously been considered in fluid dynamics , the spaces of jet particles arise as extensions of the reduced landmark space by quotienting out smaller isotropy subgroups .let be the isotropy subgroup for a single landmark let know be a positive integer .for any -differentiable map from a neighborhood of , the -jet of is denoted . in coordinates , consists of the coefficients of the order taylor expansions of about at .the higher - order isotropy subgroups are then given by that is , the elements of fix the taylor expansion of the deformation up to order .the definition naturally extends to finite number of landmarks , and the quotients can be identified as the sets with being the space of rank tensors .intuitively , the space is the regular landmark space with information about the position of the points ; the 1-jet space carry for each jet information about the position and the jacobian matrix of the warp at the jet position ; and the 2-jet space carry in addition the hessian matrix of the warp at the jet position .the momentum for in coordinates consists of vectors representing the local displacement of the points . with the 1-jet space ,the momentum in addition contains matrices that can be interpreted as locally linear deformations at the jet positions . in combination with the displacement, the 1-jet momenta can thus be regarded locally affine transformations .the momentum fields for add symmetric tensors encoding local second order deformation .the local effect effect of the jet particles is sketched in figure [ fig : discimage ] .when the dissimilarity measure is dependent not just on positions but also on higher - order information around the points , reduction by symmetry implies that optimal solutions for will be parametrized by -jets in the same way as parametrize optimal paths for in the landmark case .the higher - order jets can thus be used for landmark matching when the dissimilarity measure is dependent on the local geometry around the landmarks .for example , matching of first order structure such as image gradients lead to 1-order jets , and matching of local curvature leads to -order jets. the image matching problem can be discretized by evaluating the -difference at a finite number of points . in practice, this alway happens when the integral is evaluated at finitely many pixels of the image . in , it is shown how this reduces the image matching pde to a finite dimensional system on when the integral is approximated by pointwise evaluation at a grid where denotes the grid spacing . approximates to order , .the reduced space encodes the position of the points , , and the lifted eulerian momentum field is a finite sum of point measures . for each grid point, the momentum encodes the local displacement of the point , see figure [ fig : discimage ] . 
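Returning to the scalar-momentum form of image matching above: symmetry implies that the momentum is a scalar field times the image gradient, so only the scalar field has to be estimated. The sketch below builds the corresponding velocity field u = K * (ρ∇I), with the kernel K taken to be a componentwise Gaussian smoothing; this kernel choice is an illustrative assumption rather than the operator used in any particular implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def velocity_from_scalar_momentum(I, rho, sigma=5.0):
    """Velocity field u = K * (rho * grad I) for image matching.

    By symmetry the momentum is rho(x) * grad I(x), i.e. normal to the level
    lines of the image, so the search is over the scalar field rho only. The
    kernel K is modelled here by componentwise Gaussian smoothing.
    """
    gy, gx = np.gradient(I)
    m = np.stack([rho * gy, rho * gx])                # momentum, shape (2, H, W)
    u = np.stack([gaussian_filter(m[0], sigma),
                  gaussian_filter(m[1], sigma)])      # u = K * m
    return u

I = np.zeros((64, 64)); I[24:40, 24:40] = 1.0         # toy image with a square
rho = 0.1 * np.ones_like(I)                           # scalar momentum field
u = velocity_from_scalar_momentum(I, rho)
print(u.shape)                                        # (2, 64, 64)
```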
in , a discretization scheme with higher - order accuracyis in addition introduced with an approximation of .the increased accuracy results in the entire energy being approximated to order .the solution space in this cases become the jet - space . for a given order of approximation , a corresponding reduction in the number of required discretization pointsis obtained .the reduction in the number of discretization points is countered by the increased information encoded in each 2-jet .the momentum field thus encodes both local displacement , local linear deformation , and second order deformation , see figure [ fig : discimage ] .the discrete solutions will converge to solutions of the non - discretized problem as ., and the image matching pde is reduced to an ode on a finite dimensional reduced space . with the approximation , the momentum field will encode local displacement as indicated by the horizontal arrows ( top row ) . with a first order expansion ,the solution space will be jet space and locally affine motion is encoded around each grid point ( middle row ) .the approximation includes second order information and the system reduces to the jet space with second order motion encoded at each grid point ( lower row ) . ]image matching is symmetric with respect to variations parallel to the level lines of the images . with diffusion weighted images ( dwi ) and the variety of models for the diffusion information ( e.g. diffusion tensor imaging dti cite , gaussian mixture fields cite ) , first or higher - order information can be reintroduced into the matching problem .in essence , by letting the dissimilarity measure depend on the diffusion information , the full symmetry of the image matching problem is reduced to an isotropy subgroup of .the exact form of the of dwi matching problem depends on the diffusion model and how acts on the diffusion image . in , the diffusionis represented by the principal direction of the diffusion tensor , and the data objects to be match are thus vector fields .the action by elements of is defined by the action rotates the diffusion vector by the jacobian of the warp keeping its length fixed .similar models can be applied to dti with the preservation of principle direction scheme ( ppd , ) and to gmf based models .the dependency on the jacobian matrix implies that a reduced model must carry first order information in a similar fashion to the 1-jet space , however , any irrotational part of the jacobian can be removed by symmetry .the full effect of this has yet to be explored .incidentally , the equation of motion = 0 \\ u = k * m\end{aligned}\ ] ] is an eccentric way of writing euler s equation for an invicid incompressible fluid if we assume is initially in the space of divergence free vector - fields and is a dirac - delta distribution ( which impies . )this fact was exploited in to create a sequece of regularized models to euler s equations by considering a sequence of kernels which converge to a dirac - delta distribution . moreover ,if one replaces by the subgroup of volume preserving diffeomorphisms , then ( formally ) one can produce incompressible particle methods using the same reduction arguments presented here .in fact , jet - particles were independently discoverd in this context as a means of simulating fluids in .it is notable that is a mechanics paper , and the particle methods which were produced were approached from the perspective of reduction by symmetry . 
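The action on a principal diffusion direction described above can be written in a few lines: push the direction through the Jacobian of the warp and renormalize it to its original length. The sketch below does exactly this for a single direction and a given Jacobian; spatial resampling and the treatment of the full diffusion tensor are omitted.

```python
import numpy as np

def reorient_direction(v, J):
    """Action of a warp on a principal diffusion direction.

    The direction is mapped through the Jacobian J of the warp and then
    rescaled so that its length is unchanged, in the spirit of the
    preservation-of-principal-direction scheme.
    """
    w = J @ v
    return w * (np.linalg.norm(v) / np.linalg.norm(w))

# A shear rotates the direction; the renormalization keeps its length fixed.
J = np.array([[1.0, 0.4, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
v = np.array([0.0, 1.0, 0.0])
print(reorient_direction(v, J))    # tilted toward the shear, still unit length
```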
In further work, one of the kernel parameters, which controls the compressibility of the flow, is taken to the incompressible limit. This allowed a realization of the particle methods described above. The construction is the same as presented in this survey article, but with the diffeomorphism group replaced by the group of volume-preserving diffeomorphisms.

The information available for solving the registration problem is in practice not sufficient to uniquely encode the deformation between the objects to be registered. Symmetry thus arises both as the particle relabeling symmetry that gives the Eulerian formulation of the equations of motion and as symmetry groups for specific dissimilarity measures. For landmark matching, reduction by symmetry reduces the infinite-dimensional registration problem to a finite-dimensional problem on the reduced landmark space. For matching curves and surfaces, symmetry implies that the momentum stays concentrated on the curves and surfaces, allowing a reduction by the isotropy groups of warps that leave the objects fixed. In image matching, symmetry allows reduction by the group of warps that do not change the level sets of the image. Jet particles have smaller symmetry groups and hence larger reduced spaces that encode locally affine and second-order information. Reduction by symmetry allows these cases to be handled in one theoretical framework. We have surveyed the mathematical construction behind the reduction approach and its relation to the above-mentioned examples. As data complexity rises, both in terms of resolution and structure, symmetry will continue to be an important tool for removing redundant information and achieving compact data representations.

HOJ would like to thank Darryl Holm for providing a bridge from geometric mechanics into the wonderful world of image registration algorithms. HOJ is supported by the European Research Council Advanced Grant 267382 FCCA. SS is supported by the Danish Council for Independent Research with the project "Image Based Quantification of Anatomical Change".

D. C. Alexander, J. C. Gee, and R. Bajcsy, _Strategies for data reorientation during non-rigid warps of diffusion tensor images_, Medical Image Computing and Computer-Assisted Intervention, MICCAI'99 (Chris Taylor and Alain Colchester, eds.), Lecture Notes in Computer Science, no. 1679, Springer Berlin Heidelberg, January 1999, pp. 463-472.

R. Abraham and J. E. Marsden, _Foundations of Mechanics_, Benjamin/Cummings Publishing Co. Inc. Advanced Book Program, Reading, Mass., 1978, second edition, revised and enlarged, with the assistance of Tudor Ratiu and Richard Cushman. Reprinted by AMS Chelsea, 2008.

D. C. Alexander, C. Pierpaoli, P. J. Basser, and J. C. Gee, _Spatial transformations of diffusion tensor magnetic resonance images_, IEEE Transactions on Medical Imaging *20* (2001), no. 11, 1131-1139.

Thomas Brox, Andrés Bruhn, Nils Papenberg, and Joachim Weickert, _High accuracy optical flow estimation based on a theory for warping_, Computer Vision - ECCV 2004 (Tomáš Pajdla and Jiří Matas, eds.), Lecture Notes in Computer Science, no. 3024, Springer Berlin Heidelberg, January 2004, pp. 25-36.

Richard G. Boyes, Daniel Rueckert, Paul Aljabar, Jennifer Whitwell, Jonathan M. Schott, Derek L. G. Hill, and Nicholas C. Fox, _Cerebral atrophy measurements using Jacobian integration: comparison with the boundary shift integral_, NeuroImage *32* (2006), no. 1, 159-169.
Guang Cheng, Baba C. Vemuri, Paul R. Carney, and Thomas H. Mareci, _Non-rigid registration of high angular resolution diffusion images represented by Gaussian mixture fields_, Proceedings of the 12th International Conference on Medical Image Computing and Computer-Assisted Intervention: Part I (Berlin, Heidelberg), MICCAI '09, Springer-Verlag, 2009, pp. 190-197.

R. Derfoul and C. Le Guyader, _A relaxed problem of registration based on the Saint Venant-Kirchhoff material stored energy for the mapping of mouse brain gene expression data to a neuroanatomical mouse atlas_, SIAM Journal on Imaging Sciences (2014), 2175-2195.

Suma Dawn, Vikas Saxena, and Bhudev Sharma, _Remote sensing image registration techniques: a survey_, Image and Signal Processing (Abderrahim Elmoataz, Olivier Lezoray, Fathallah Nouboud, Driss Mammass, and Jean Meunier, eds.), Lecture Notes in Computer Science, no. 6134, Springer Berlin Heidelberg, January 2010, pp. 103-112.

Vladlena Gorbunova, Sander S. A. M. Jacobs, Pechin Lo, Asger Dirksen, Mads Nielsen, Alireza Bab-Hadiashar, and Marleen de Bruijne, _Early detection of emphysema progression_, Medical Image Computing and Computer-Assisted Intervention: MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention *13* (2010), no. Pt 2, 193-200.

Joan Glaunès, _Transport par difféomorphismes de points, de mesures et de courants pour la comparaison de formes et l'anatomie numérique_, Ph.D. thesis, Université Paris 13, Villetaneuse, France, 2005.

Sarang C. Joshi, Michael I. Miller, and Ulf Grenander, _On the geometry and shape of brain sub-manifolds_, International Journal of Pattern Recognition and Artificial Intelligence *11* (1997), no. 08, 1317-1343.

Alexis Roche, Grégoire Malandain, Xavier Pennec, and Nicholas Ayache, _The correlation ratio as a new similarity measure for multimodal image registration_, Proceedings of the First International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI '98, Springer-Verlag, 1998, ACM ID 709612, pp. 1115-1124.

D. Rueckert, L. I. Sonoda, C. Hayes, D. L. Hill, M. O. Leach, and D. J. Hawkes, _Nonrigid registration using free-form deformations: application to breast MR images_, IEEE Transactions on Medical Imaging *18* (1999), no. 8, 712-721.
We survey the role of symmetry in diffeomorphic registration of landmarks, curves, surfaces, images and higher-order data. The infinite-dimensional problem of finding correspondences between objects can, for a range of concrete data types, be reduced, resulting in compact representations of shape and spatial structure. This reduction is possible because the available data are incomplete in encoding the full deformation model. Using reduction by symmetry, we describe the reduced models in a common theoretical framework that draws on links between the registration problem and geometric mechanics. Symmetry also arises in the reduction to the Lie algebra using particle relabeling symmetry, allowing the equations of motion to be written purely in terms of the Eulerian velocity field. Reduction by symmetry has recently been applied to jet-matching and higher-order discrete approximations of the image matching problem. We outline these constructions and further cases where reduction by symmetry promises new approaches to registration of complex data types.